Publications of Interest
The Publications of Interest section contains bibliographical citations, abstracts (where available), and links on specific topics and research problems of interest to the Science of Security community.
How recent are these publications?
These bibliographies include recent scholarly research, presented or published within the past year, on selected topics. Some entries represent updates to work presented in previous years; others cover new topics.
How are topics selected?
The specific topics are selected from materials that have been peer reviewed and presented at SoS conferences or referenced in current work. The topics are also chosen for their usefulness to current researchers.
How can I submit or suggest a publication?
Researchers willing to share their work are welcome to submit a citation, abstract, and URL for consideration and posting, and to identify additional topics of interest to the community. Researchers are also encouraged to share this request with their colleagues and collaborators.
Submissions and suggestions may be sent to: research (at) securedatabank.net
(ID#:14-2287)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Anonymity
Minimizing privacy risk is one of the major problems accompanying the growth of social media and handheld smartphone technologies. K-anonymity is one of the main methods for anonymizing data, and many of the articles cited here focus on k-anonymity to ensure privacy. Others look at elliptic-curve techniques and privacy-enhancing technologies more generally. These articles were presented between January and September 2014.
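As background for the citations that follow, the core k-anonymity property can be sketched in a few lines: a table is k-anonymous when every combination of quasi-identifier values appears in at least k records. The attribute names and records below are invented for illustration.

```python
# Minimal k-anonymity check. The table, quasi-identifiers, and values
# are hypothetical examples, not drawn from any cited dataset.
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if every quasi-identifier combination occurs >= k times."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

records = [
    {"age": "20-30", "zip": "476**", "condition": "flu"},
    {"age": "20-30", "zip": "476**", "condition": "cold"},
    {"age": "30-40", "zip": "479**", "condition": "flu"},
    {"age": "30-40", "zip": "479**", "condition": "asthma"},
]

print(is_k_anonymous(records, ["age", "zip"], 2))  # True
print(is_k_anonymous(records, ["age", "zip"], 3))  # False
```

Generalization (e.g., replacing an exact age with a range, or masking ZIP digits) is what drives records into such groups; the papers below study how to do that with minimal information loss.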
- Wu, S.; Wang, X.; Wang, S.; Zhang, Z.; Tung, A.K.H., "K-Anonymity for Crowdsourcing Database," Knowledge and Data Engineering, IEEE Transactions on, vol. 26, no. 9, pp. 2207-2221, Sept. 2014. doi: 10.1109/TKDE.2013.93 In crowdsourcing database, human operators are embedded into the database engine and collaborate with other conventional database operators to process the queries. Each human operator publishes small HITs (Human Intelligent Task) to the crowdsourcing platform, which consists of a set of database records and corresponding questions for human workers. The human workers complete the HITs and return the results to the crowdsourcing database for further processing. In practice, published records in HITs may contain sensitive attributes, probably causing privacy leakage so that malicious workers could link them with other public databases to reveal individual private information. Conventional privacy protection techniques, such as K-Anonymity, can be applied to partially solve the problem. However, after generalizing the data, the result of standard K-Anonymity algorithms may render uncontrollable information loss and affects the accuracy of crowdsourcing. In this paper, we first study the tradeoff between the privacy and accuracy for the human operator within data anonymization process. A probability model is proposed to estimate the lower bound and upper bound of the accuracy for general K-Anonymity approaches. We show that searching the optimal anonymity approach is NP-Hard and only heuristic approach is available. The second contribution of the paper is a general feedback-based K-Anonymity scheme. In our scheme, synthetic samples are published to the human workers, the results of which are used to guide the selection on anonymity strategies. We apply the scheme on Mondrian algorithm by adaptively cutting the dimensions based on our feedback results on the synthetic samples. We evaluate the performance of the feedback-based approach on U.S. census dataset, and show that given a predefined (K), our proposal outperforms standard K-Anonymity approaches on retaining the effectiveness of crowdsourcing. Keywords: Crowdsourcing; Database Management; General; Information Technology and Systems; K-Anonymity; Query design and implementation languages; Security and protection; data partition; database privacy; integrity (ID#:14-2289) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6529080&isnumber=6871455
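The Mondrian algorithm mentioned in the abstract above partitions records greedily: it repeatedly picks the quasi-identifier with the widest value range and splits the partition at the median, stopping when a split would leave a group smaller than k. A minimal sketch, with invented data and without the paper's feedback mechanism, might look like this:

```python
# Greedy Mondrian-style median-cut partitioning (illustrative sketch;
# the data and attribute names are hypothetical).
def mondrian(records, qids, k):
    """Recursively split on the widest numeric quasi-identifier at its
    median until no split leaves both halves with at least k records."""
    def span(part, q):
        vals = [r[q] for r in part]
        return max(vals) - min(vals)

    def split(part):
        q = max(qids, key=lambda q: span(part, q))  # widest dimension
        part = sorted(part, key=lambda r: r[q])
        mid = len(part) // 2
        lhs, rhs = part[:mid], part[mid:]
        if len(lhs) >= k and len(rhs) >= k and span(part, q) > 0:
            return split(lhs) + split(rhs)
        return [part]  # cannot split further without violating k

    return split(records)

data = [{"age": a, "zip": z} for a, z in
        [(21, 47601), (23, 47602), (35, 47901), (38, 47905),
         (41, 47910), (44, 47911)]]
for group in mondrian(data, ["age", "zip"], 2):
    print(sorted(r["age"] for r in group))
```

Each resulting group would then be generalized (e.g., each attribute replaced by its min-max range) to satisfy k-anonymity; the paper's contribution is choosing the cut dimension adaptively from worker feedback rather than purely by span.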
- Jianpei Zhang; Ying Zhao; Yue Yang; Jing Yang, "A K-anonymity Clustering Algorithm Based On The Information Entropy," Computer Supported Cooperative Work in Design (CSCWD), Proceedings of the 2014 IEEE 18th International Conference on, pp. 319-324, 21-23 May 2014. doi: 10.1109/CSCWD.2014.6846862 Data anonymization techniques are the main way to achieve privacy protection, and as a classical anonymity model, K-anonymity is the most effective and frequently-used. But the majority of K-anonymity algorithms can hardly balance the data quality and efficiency, and ignore the privacy of the data to improve the data quality. To solve the problems above, by introducing the concept of "diameter" and a new clustering criterion based on the parameter of the maximum threshold of equivalence classes, we proposed a K-anonymity clustering algorithm based on the information entropy. The results of experiments showed that both the algorithm efficiency and data security are improved, and meanwhile the total information loss is acceptable, so the proposed algorithm has some practicability in application. Keywords: data privacy; entropy; pattern clustering; security of data; K-anonymity clustering algorithm; classical anonymity model; data anonymization techniques; data efficiency; data quality improvement; data security; information entropy; maximum equivalence class threshold; privacy protection; Algorithm design and analysis; Classification algorithms; Clustering algorithms; Data security; Entropy; Information entropy; Loss measurement; K-anonymity; clustering; information entropy; privacy preserving (ID#:14-2290) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846862&isnumber=6846800
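The information-entropy criterion in the entry above can be illustrated with Shannon entropy over the sensitive values inside one equivalence class: a class where everyone shares the same sensitive value has entropy 0 and leaks that value, while a class with diverse values has higher entropy. This is a generic illustration of the concept, not the paper's specific criterion.

```python
# Shannon entropy (in bits) of sensitive-attribute values within one
# equivalence class; example values are hypothetical.
import math
from collections import Counter

def sensitive_entropy(values):
    """Higher entropy means an equivalence class reveals less about
    any single record's sensitive value."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(sensitive_entropy(["flu", "cold", "hiv", "asthma"]))  # 2.0
print(sensitive_entropy(["flu", "flu", "flu", "cold"]))     # ~0.811
```

A clustering-based anonymizer can use such a measure to prefer groupings that keep entropy high while keeping generalization (information loss) low.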
- Liu, J.K.; Man Ho Au; Susilo, W.; Jianying Zhou, "Linkable Ring Signature with Unconditional Anonymity," Knowledge and Data Engineering, IEEE Transactions on, vol. 26, no. 1, pp. 157-165, Jan. 2014. doi: 10.1109/TKDE.2013.17 In this paper, we construct a linkable ring signature scheme with unconditional anonymity. It has been regarded as an open problem in [22] since 2004 for the construction of an unconditional anonymous linkable ring signature scheme. We are the first to solve this open problem by giving a concrete instantiation, which is proven secure in the random oracle model. Our construction is even more efficient than other schemes that can only provide computational anonymity. Simultaneously, our scheme can act as a counterexample to show that [19, Theorem 1] is not always true, which stated that linkable ring signature scheme cannot provide strong anonymity. Yet we prove that our scheme can achieve strong anonymity (under one of the interpretations). Keywords: cryptography; digital signatures; computational anonymity; random oracle model; unconditional anonymity; unconditional anonymous linkable ring signature scheme; Adaptive systems; Electronic voting; Games; Indexes; Mathematical model; Public key; Ring signature; anonymity; linkable (ID#:14-2291) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6420832&isnumber=6674933
- Ren-Hung Hwang; Fu-Hui Huang, "SocialCloaking: A Distributed Architecture For K-Anonymity Location Privacy Protection," Computing, Networking and Communications (ICNC), 2014 International Conference on, pp. 247-251, 3-6 Feb. 2014. doi: 10.1109/ICCNC.2014.6785340 As location information becomes commonly available in smart phones, applications of Location Based Service (LBS) has also become very popular and are widely used by smart phone users. Since the query of LBS contains user's location, it raises a privacy concern of exposure of user's location. K-anonymity is a commonly adopted technique for location privacy protection. In the literature, a centralized architecture which consists of a trusted anonymity server is widely adopted. However, this approach exhibits several apparent weaknesses, such as single point of failure, performance bottleneck, serious security threats, and not trustable to users, etc. In this paper, we re-examine the location privacy protection problem in LBS applications. We first provide an overview of the problem itself, to include types of query, privacy protection methods, adversary models, system architectures, and their related works in the literature. We then discuss the challenges of adopting a distributed architecture which does not need to set up a trusted anonymity server and propose a solution by combining unique features of structured peer-to-peer architecture and trust relationships among users of their on-line social networking relations.
Keywords: data privacy; mobile computing; query processing; social networking (online); trusted computing; K-anonymity location privacy protection; LBS query; SocialCloaking; adversary model; centralized architecture; distributed architecture; failure point; location information; location-based service; on-line social networking relation; security threat; smart phones; structured peer-to-peer architecture; system architecture; trust relationship; trusted anonymity server; user location; Computer architecture; Mobile communication; Mobile handsets; Peer-to-peer computing; Privacy; Servers; Trajectory; Distributed Anonymity Server Architecture; Location Based Service; Location Privacy; Peer-to-Peer; Social Networking (ID#:14-2292) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785340&isnumber=6785290
- Shinganjude, R.D.; Theng, D.P., "Inspecting the Ways of Source Anonymity in Wireless Sensor Network," Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on, pp. 705-707, 7-9 April 2014. doi: 10.1109/CSNT.2014.148 Sensor networks are mainly deployed to monitor and report real events, and thus it is very difficult and expensive to achieve event source anonymity for them, as sensor networks are very limited in resources. Data obscurity, i.e. the source anonymity problem, implies that an unauthorized observer must be unable to detect the origin of events by analyzing the network traffic; this problem has emerged as an important topic in the security of wireless sensor networks. This work inspects the different approaches carried out for attaining source anonymity in wireless sensor networks, with a variety of techniques based on different adversarial assumptions. The approach meeting the best result in source anonymity is proposed for further improvement in source location privacy. The paper suggests the implementation of the most prominent and effective LSB Steganography technique for the improvement. Keywords: steganography; telecommunication traffic; wireless sensor networks; LSB steganography technique; adversarial assumptions; event source anonymity; network traffic; source location privacy; wireless sensor networks; Communication systems; Wireless sensor network; anonymity; coding theory; persistent dummy traffic; statistical test; steganography (ID#:14-2293) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821490&isnumber=6821334
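The LSB (least-significant-bit) steganography technique the paper above recommends hides each message bit in the lowest bit of a cover sample, changing each carrier value by at most 1. A minimal, self-contained sketch (the cover values and secret bits are invented):

```python
# LSB steganography over a list of byte values (e.g. pixel intensities).
def embed_lsb(cover, message_bits):
    """Hide message bits in the least significant bit of each cover byte."""
    assert len(message_bits) <= len(cover)
    stego = list(cover)
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & ~1) | bit  # clear LSB, then set it to the bit
    return stego

def extract_lsb(stego, n_bits):
    """Recover the first n_bits hidden message bits."""
    return [b & 1 for b in stego[:n_bits]]

cover = [200, 13, 54, 255, 128, 7, 90, 33]   # hypothetical pixel values
secret = [1, 0, 1, 1]
stego = embed_lsb(cover, secret)
print(extract_lsb(stego, 4))  # [1, 0, 1, 1]
```

Because each carrier value changes by at most 1, the modification is statistically subtle, which is why the paper considers LSB embedding a candidate for improving source location privacy.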
- Sabra, Z.; Artail, H., "Preserving Anonymity And Quality Of Service For VOIP Applications Over Hybrid Networks," Mediterranean Electrotechnical Conference (MELECON), 2014 17th IEEE, pp. 421-425, 13-16 April 2014. doi: 10.1109/MELCON.2014.6820571 In this work we seek to achieve VoIP end users' profile privacy without violating the QoS constraints on the throughput, end to end delay, and jitter, as these parameters are the most sensitive factors in multimedia applications. We propose an end-to-end user anonymity design that takes into consideration these constraints in a hybrid environment that involves ad-hoc and infrastructure networks. Using clusterheads for communication, and encryption of RTP payload, we prove using analysis and OPNET simulations, that our model can be easily integrated to present network infrastructures. Keywords: Internet telephony; cryptography; jitter; quality of service; OPNET simulations; QoS constraints; RTP payload; VoIP applications; anonymity preservation; encryption; end to end delay; hybrid networks; jitter; quality of service; Authentication; Conferences; Cryptography; Delays; Privacy; Protocols; Quality of service; Anonymity; Multimedia; QoS; VoIP; WLAN (ID#:14-2294) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6820571&isnumber=6820492
- Liping Zhang; Shanyu Tang; Zhihua Cai, "Robust and Efficient Password Authenticated Key Agreement With User Anonymity For Session Initiation Protocol-Based Communications," Communications, IET, vol. 8, no. 1, pp. 83-91, Jan. 3, 2014. doi: 10.1049/iet-com.2012.0783 A suitable key agreement protocol plays an essential role in protecting the communications over open channels among users using voice over Internet protocol (VoIP). This study presents a robust and flexible password authenticated key agreement protocol with user anonymity for session initiation protocol (SIP) used by VoIP communications. Security analysis demonstrates that the proposed protocol enjoys many unique properties, such as user anonymity, no password table, session key agreement, mutual authentication, password updating freely, conveniently revoking lost smartcards and so on. Furthermore, the proposed protocol can resist the replay attack, the impersonation attack, the stolen-verifier attack, the man-in-middle attack, the Denning-Sacco attack and the offline dictionary attack with or without smartcards. Finally, the performance analysis shows that the protocol is more suitable for practical application in comparison with other related protocols. Keywords: Internet telephony; computer network security; cryptographic protocols; private key cryptography; public key cryptography; signaling protocols; Denning-Sacco attack; SIP; VoIP communications; flexible password authenticated key agreement protocol; impersonation attack; man-in-middle attack; offline dictionary attack; replay attack; security analysis; session initiation protocol-based communications; smartcards; stolen-verifier attack; user anonymity; voice over Internet protocol (ID#:14-2295) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6711996&isnumber=6711983
- Burke, M.-J.; Kayem, A.V.D.M., "K-Anonymity for Privacy Preserving Crime Data Publishing in Resource Constrained Environments," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, pp. 833-840, 13-16 May 2014. doi: 10.1109/WAINA.2014.131 Mobile crime report services have become a pervasive approach to enabling community-based crime reporting (CBCR) in developing nations. These services hold the advantage of facilitating law enforcement when resource constraints make using standard crime investigation approaches challenging. However, CBCRs have failed to achieve widespread popularity in developing nations because of concerns for privacy. Users are hesitant to make crime reports without strong guarantees of privacy preservation. Furthermore, oftentimes lack of data mining expertise within the law enforcement agencies implies that the reported data needs to be processed manually, which is a time-consuming process. In this paper we make two contributions to facilitate effective and efficient CBCR and crime data mining as well as to address the user privacy concern. The first is a practical framework for mobile CBCR and the second is a hybrid k-anonymity algorithm to guarantee privacy preservation of the reported crime data. We use a hierarchy-based generalization algorithm to classify the data to minimize information loss by optimizing the nodal degree of the classification tree. Results from our proof-of-concept implementation demonstrate that in addition to guaranteeing privacy, our proposed scheme offers a classification accuracy of about 38% and a drop in information loss of nearly 50% over previous schemes when compared on various sizes of datasets. Performance-wise we observe an average improvement of about 50ms proportionate to the size of the dataset.
Keywords: criminal law; data mining; data privacy; generalisation (artificial intelligence); mobile computing; pattern classification; CBCR; classification accuracy; classification tree; community-based crime reporting; crime data mining; crime investigation approach; hierarchy-based generalization algorithm; k-anonymity; law enforcement; mobile crime report services; pervasive approach; privacy preserving crime data publishing; resource constrained environment; user privacy concern; Cloud computing; Data privacy; Encryption; Law enforcement; Mobile communication; Privacy; Anonymity; Developing Countries; Encryption; Information Loss; Public/Private Key Cryptography; Resource Constrained Environments (ID#:14-2296) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844743&isnumber=6844560
- Sharma, V., "Methods For Privacy Protection Using K-Anonymity," Optimization, Reliability, and Information Technology (ICROIT), 2014 International Conference on, pp. 149-152, 6-8 Feb. 2014. doi: 10.1109/ICROIT.2014.6798301 A large amount of data is produced in electronic form by various governmental and nongovernmental organizations. This data also has information related to specific individuals. Information related to a specific individual needs to be protected, so that it may not harm privacy. Moreover, sensitive information related to organizations also needs to be protected. Data is released from various organizations as it is demanded by researchers and data mining companies to develop newer and better methods for finding patterns and trends. Any organization that wishes to release data has two goals: one is to release the data as close as possible to the original form, and the second is to protect the privacy of individuals and sensitive information from being released. K-anonymity has been used as a successful technique in this regard. This method provides a guarantee that released data is at least k-anonymous. Various methods have been suggested to achieve k-anonymity for a given dataset. I categorize these methods into four main domains based on the principles they are based on and the methods they apply to achieve k-anonymous data. These methods have their respective advantages and disadvantages relating to loss of information, feasibility in the real world, and suitability to the number of tuples in the dataset. Keywords: data mining; data protection; data mining; data privacy protection; governmental organizations; information loss; k-anonymous data; nongovernmental organizations; Computers; Data privacy; Diseases; Hypertension; Anonymity; generalization; privacy (ID#:14-2297) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798301&isnumber=6798279
- Ma, R.; Rath, H.K.; Balamuralidhar, P., "Design of a Mix Network Using Connectivity Index -- A Novel Privacy Enhancement Approach," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, pp. 512-517, 13-16 May 2014. doi: 10.1109/WAINA.2014.86 Privacy Enhancing Techniques (PET) are key to the success in building the trust among the users of the digital world. Enhancing the communication privacy is getting attention nowadays. In this direction, anonymity schemes such as mixes, mix networks, onion routing, and crowds have started making inroads into deployment at individual and community network levels. To measure the effectiveness and accuracy of such schemes, degree of anonymity is proposed as a privacy metric in the literature. To measure the degree of anonymity, many empirical techniques are proposed. We observe that these techniques are computationally intensive and are infeasible for real-time requirements, and thus may not be suitable to measure the degree of anonymity under dynamic changes in the configuration of the network in real time. In this direction, we propose a novel lightweight privacy metric to measure the degree of anonymity for mixes, mix networks and their variants using a graph theoretic approach based on Connectivity Index (CI). Further, we also extend this approach with Weighted Connectivity Index (WCI) and demonstrate the usefulness of the metric through analytical analysis.
Keywords: data privacy; graph theory; anonymity schemes; communication privacy; crowds; digital world; graph theoretic approach; lightweight privacy metric; mix network design; mix networks; onion routing; privacy enhancing techniques; real-time requirements; user trust; weighted connectivity index; Algorithm design and analysis; Complexity theory; Indexes; Measurement; Ports (Computers); Privacy; Real-time systems; Anonymity; Connectivity Index; Mix; Mix Network; Privacy (ID#:14-2298) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844688&isnumber=6844560
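In graph theory, the connectivity index is commonly the Randić index: the sum over all edges (u, v) of 1/sqrt(deg(u) * deg(v)). Assuming the paper builds on that standard formulation (the abstract does not spell it out), it can be computed in a few lines over a mix network's topology graph:

```python
# Randić connectivity index of an undirected graph given as an edge list.
# The example topology is invented for illustration.
import math

def connectivity_index(edges):
    """Sum over edges (u, v) of 1 / sqrt(deg(u) * deg(v))."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return sum(1 / math.sqrt(deg[u] * deg[v]) for u, v in edges)

# A 4-node path a-b-c-d: CI = 1/sqrt(1*2) + 1/sqrt(2*2) + 1/sqrt(2*1)
path = [("a", "b"), ("b", "c"), ("c", "d")]
print(round(connectivity_index(path), 4))  # 1.9142
```

Because the index depends only on node degrees, it can be recomputed cheaply as the mix network's configuration changes, which matches the paper's motivation of a lightweight, real-time metric.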
- Pervaiz, Z.; Aref, W.G.; Ghafoor, A.; Prabhu, N., "Accuracy-Constrained Privacy-Preserving Access Control Mechanism for Relational Data," Knowledge and Data Engineering, IEEE Transactions on, vol. 26, no. 4, pp. 795-807, April 2014. doi: 10.1109/TKDE.2013.71 Access control mechanisms protect sensitive information from unauthorized users. However, when sensitive information is shared and a Privacy Protection Mechanism (PPM) is not in place, an authorized user can still compromise the privacy of a person leading to identity disclosure. A PPM can use suppression and generalization of relational data to anonymize and satisfy privacy requirements, e.g., k-anonymity and l-diversity, against identity and attribute disclosure. However, privacy is achieved at the cost of precision of authorized information. In this paper, we propose an accuracy-constrained privacy-preserving access control framework. The access control policies define selection predicates available to roles while the privacy requirement is to satisfy the k-anonymity or l-diversity. An additional constraint that needs to be satisfied by the PPM is the imprecision bound for each selection predicate. The techniques for workload-aware anonymization for selection predicates have been discussed in the literature. However, to the best of our knowledge, the problem of satisfying the accuracy constraints for multiple roles has not been studied before. In our formulation of the aforementioned problem, we propose heuristics for anonymization algorithms and show empirically that the proposed approach satisfies imprecision bounds for more permissions and has lower total imprecision than the current state of the art.
Keywords: authorisation; data protection; query processing; relational databases; PPM; access control policies; accuracy constraints; accuracy-constrained privacy-preserving access control mechanism; anonymization algorithms; attribute disclosure; authorized information precision; authorized user; empirical analysis; identity disclosure; imprecision bound; imprecision bounds; k-anonymity; l-diversity; person privacy; privacy protection mechanism; privacy requirement anonymization; privacy requirement satisfaction; query processing; relational data generalization; relational data suppression; selection predicates; sensitive information protection; sensitive information sharing; unauthorized users; workload-aware anonymization; k-anonymity; access control; privacy; query evaluation (ID#:14-2299) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6512493&isnumber=6777369
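The l-diversity requirement mentioned in the entry above strengthens k-anonymity: in its simplest ("distinct") form, every equivalence class must contain at least l distinct sensitive values, so group membership alone cannot reveal a record's sensitive attribute. A minimal check, with invented data:

```python
# Distinct l-diversity check over pre-generalized records.
# Attribute names and values are hypothetical examples.
from collections import defaultdict

def is_l_diverse(records, quasi_identifiers, sensitive, l):
    """True if every equivalence class has >= l distinct sensitive values."""
    groups = defaultdict(set)
    for r in records:
        key = tuple(r[q] for q in quasi_identifiers)
        groups[key].add(r[sensitive])
    return all(len(vals) >= l for vals in groups.values())

rows = [
    {"age": "20-30", "zip": "476**", "disease": "flu"},
    {"age": "20-30", "zip": "476**", "disease": "cold"},
    {"age": "30-40", "zip": "479**", "disease": "flu"},
    {"age": "30-40", "zip": "479**", "disease": "flu"},
]
print(is_l_diverse(rows, ["age", "zip"], "disease", 2))  # False
```

The second group fails because everyone in it has "flu": a k-anonymous table can still leak the sensitive value, which is exactly the attribute-disclosure problem l-diversity addresses.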
- Zakhary, S.; Radenkovic, M.; Benslimane, A., "Efficient Location Privacy-Aware Forwarding in Opportunistic Mobile Networks," Vehicular Technology, IEEE Transactions on, vol. 63, no. 2, pp. 893-906, Feb. 2014. doi: 10.1109/TVT.2013.2279671 This paper proposes a novel fully distributed and collaborative k-anonymity protocol (LPAF) to protect users' location information and ensure better privacy while forwarding queries/replies to/from untrusted location-based service (LBS) over opportunistic mobile networks (OppMNets). We utilize a lightweight multihop Markov-based stochastic model for location prediction to guide queries toward the LBS's location and to reduce required resources in terms of retransmission overheads. We develop a formal analytical model and present theoretical analysis and simulation of the proposed protocol performance. We further validate our results by performing extensive simulation experiments over a pseudorealistic city map using map-based mobility models and using real-world data trace to compare LPAF to existing location privacy and benchmark protocols. We show that LPAF manages to keep higher privacy levels in terms of k-anonymity and quality of service in terms of success ratio and delay, as compared with other protocols, while maintaining lower overheads. Simulation results show that LPAF achieves up to an 11% improvement in success ratio for pseudorealistic scenarios, whereas real-world data trace experiments show up to a 24% improvement with a slight increase in the average delay.
Keywords: Markov processes; mobile ad hoc networks; mobility management (mobile radio); protocols; quality of service; telecommunication security; LBS; LPAF; OppMNets; benchmark protocols; collaborative k-anonymity protocol; lightweight multihop Markov-based stochastic model; location prediction; location privacy-aware forwarding; location-based service; map-based mobility models; opportunistic mobile networks; pseudorealistic city map; quality of service; retransmission overhead; success ratio; Analytical models; Delays; Equations; Markov processes; Mathematical model; Privacy; Protocols; Anonymity; distributed computing; location privacy; mobile ad hoc network (ID#:14-2300) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6587139&isnumber=6739143
- Banerjee, D.; Bo Dong; Biswas, S.; Taghizadeh, M., "Privacy-Preserving Channel Access Using Blindfolded Packet Transmissions," Communication Systems and Networks (COMSNETS), 2014 Sixth International Conference on, pp. 1-8, 6-10 Jan. 2014. doi: 10.1109/COMSNETS.2014.6734887 This paper proposes a novel wireless MAC-layer approach towards achieving channel access anonymity. Nodes autonomously select periodic TDMA-like time-slots for channel access by employing a novel channel sensing strategy, and they do so without explicitly sharing any identity information with other nodes in the network. An add-on hardware module for the proposed channel sensing has been developed and the proposed protocol has been implemented in Tinyos-2.x. Extensive evaluation has been done on a test-bed consisting of Mica2 hardware, where we have studied the protocol's functionality and convergence characteristics. The functionality results collected at a sniffer node using RSSI traces validate the syntax and semantics of the protocol. Experimentally evaluated convergence characteristics from the Tinyos test-bed were also found to be satisfactory. Keywords: data privacy; time division multiple access; wireless channels; wireless sensor networks; Mica2 hardware; RSSI; Tinyos-2.x test-bed implementation; add-on hardware module; blindfolded packet transmission; channel sensing strategy; periodic TDMA-like time-slot; privacy-preserving channel access anonymity; protocol; wireless MAC-layer approach; Convergence; Cryptography; Equations; Google; Heating; Interference; Noise; Anonymity; MAC protocols; Privacy; TDMA (ID#:14-2301) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6734887&isnumber=6734849
- Ullah, R.; Nizamuddin; Umar, A.I.; ul Amin, N., "Blind Signcryption Scheme Based On Elliptic Curves," Information Assurance and Cyber Security (CIACS), 2014 Conference on, pp. 51-54, 12-13 June 2014. doi: 10.1109/CIACS.2014.6861332 In this paper blind signcryption using elliptic curves cryptosystem is presented. It satisfies the functionalities of Confidentiality, Message Integrity, Unforgeability, Signer Non-repudiation, Message Unlink-ability, Sender anonymity and Forward Secrecy. The proposed scheme has low computation and communication overhead as compared to existing blind Signcryption schemes and best suited for mobile phone voting and m-commerce. Keywords: public key cryptography; blind signcryption scheme; communication overhead; confidentiality; elliptic curves cryptosystem; forward secrecy; m-commerce; message integrity; message unlink-ability; mobile phone voting; sender anonymity; signer nonrepudiation; unforgeability; Digital signatures; Elliptic curve cryptography; Elliptic curves; Equations; Mobile handsets; Anonymity; Blind Signature; Blind Signcryption (ID#:14-2302) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6861332&isnumber=6861314
- Perez-Gonzalez, F.; Troncoso, C.; Oya, S., "A Least Squares Approach to the Static Traffic Analysis of High-Latency Anonymous Communication Systems," Information Forensics and Security, IEEE Transactions on, vol. 9, no. 9, pp. 1341-1355, Sept. 2014. doi: 10.1109/TIFS.2014.2330696 Mixes, relaying routers that hide the relation between incoming and outgoing messages, are the main building block of high-latency anonymous communication networks. A number of so-called disclosure attacks have been proposed to effectively deanonymize traffic sent through these channels. Yet, the dependence of their success on the system parameters is not well-understood. We propose the least squares disclosure attack (LSDA), in which user profiles are estimated by solving a least squares problem. We show that LSDA is not only suitable for the analysis of threshold mixes, but can be easily extended to attack pool mixes. Furthermore, contrary to previous heuristic-based attacks, our approach allows us to analytically derive expressions that characterize the profiling error of LSDA with respect to the system parameters. We empirically demonstrate that LSDA recovers users' profiles with greater accuracy than its statistical predecessors and verify that our analysis closely predicts actual performance. Keywords: cryptography; least squares approximations; LSDA; cryptographic means; disclosure attacks; high-latency anonymous communication systems; least squares disclosure attack; pool mixes; static traffic analysis; statistical predecessors; Accuracy; Bayes methods; Estimation; Least squares approximations; Random variables; Receivers; Vectors; Anonymity; disclosure attacks; mixes (ID#:14-2304) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6832564&isnumber=6867417
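The least-squares idea behind LSDA can be illustrated in one dimension: given per-round counts of messages a target user sent and messages a given receiver got, the fraction of the user's traffic destined for that receiver is the least-squares fit of received ≈ p * sent. This is a deliberately simplified toy; the actual attack solves the joint system over all senders and receivers, and the data below is invented.

```python
# One-dimensional least-squares profile estimate (toy version of the
# LSDA idea; round counts are hypothetical).
def lsq_profile(sent, received):
    """Least-squares estimate of p in received[t] ~= p * sent[t]:
    p = (sent . received) / (sent . sent)."""
    num = sum(x * y for x, y in zip(sent, received))
    den = sum(x * x for x in sent)
    return num / den

# True sending fraction is about 0.3, observed with per-round noise.
sent =     [10, 20, 15, 25, 30]
received = [ 3,  7,  4,  8,  9]
print(round(lsq_profile(sent, received), 3))  # 0.311
```

Averaging over many rounds drives the estimate toward the true profile, which is why the paper can characterize the profiling error analytically as a function of the system parameters.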
- Fouad, M.R.; Elbassioni, K.; Bertino, E., "A Supermodularity-Based Differential Privacy Preserving Algorithm for Data Anonymization," Knowledge and Data Engineering, IEEE Transactions on , vol.26, no.7, pp.1591,1601, July 2014. doi: 10.1109/TKDE.2013.107 Maximizing data usage and minimizing privacy risk are two conflicting goals. Organizations always apply a set of transformations on their data before releasing it. While determining the best set of transformations has been the focus of extensive work in the database community, most of this work suffered from one or both of the following major problems: scalability and privacy guarantee. Differential Privacy provides a theoretical formulation for privacy that ensures that the system essentially behaves the same way regardless of whether any individual is included in the database. In this paper, we address both scalability and privacy risk of data anonymization. We propose a scalable algorithm that meets differential privacy when applying a specific random sampling. The contribution of the paper is two-fold: 1) we propose a personalized anonymization technique based on an aggregate formulation and prove that it can be implemented in polynomial time; and 2) we show that combining the proposed aggregate formulation with specific sampling gives an anonymization algorithm that satisfies differential privacy. Our results rely heavily on exploring the supermodularity properties of the risk function, which allow us to employ techniques from convex optimization. Through experimental studies we compare our proposed algorithm with other anonymization schemes in terms of both time and privacy risk. 
Keywords: data privacy; optimisation; convex optimization; data anonymization; data usage maximization; database community; privacy risk; privacy risk minimization; random sampling; scalability risk; supermodularity-based differential privacy preserving algorithm; Aggregates; Communities; Data privacy; Databases; Privacy; Scalability; Security; Data; Data sharing; Database Management; Database design; Differential privacy; General; Information Storage and Retrieval; Information Technology and Systems; Knowledge and data engineering tools and techniques; Online Information Services; Security and protection; anonymity; data sharing; data utility; integrity; modeling and management; risk management; scalability; security (ID#:14-2305) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6709680&isnumber=6851230
Digital Signatures
Digital signatures are a common method of demonstrating the authenticity of a message. But such signatures can, of course, be forged. Research cited here has looked at digital signatures in the context of the Internet of Things, the elliptic curve digital signature algorithm, a hardware quantum-based algorithm, and the use of DNA cryptography. These papers were presented or published between January and August of 2014.
- Skarmeta, A.F.; Hernandez-Ramos, J.L.; Moreno, M.V., "A Decentralized Approach For Security And Privacy Challenges In The Internet Of Things," Internet of Things (WF-IoT), 2014 IEEE World Forum on , vol., no., pp.67,72, 6-8 March 2014. doi: 10.1109/WF-IoT.2014.6803122 The strong development of the Internet of Things (IoT) is dramatically changing traditional perceptions of the current Internet towards an integrated vision of smart objects interacting with each other. While in recent years many technological challenges have already been solved through the extension and adaptation of wireless technologies, security and privacy still remain the main barriers to IoT deployment on a broad scale. In this emerging paradigm, typical scenarios manage particularly sensitive data, and any leakage of information could severely damage the privacy of users. This paper provides a concise description of some of the major challenges related to these areas that still need to be overcome in the coming years for a full acceptance by all IoT stakeholders involved. In addition, we propose a distributed capability-based access control mechanism which is built on public key cryptography in order to cope with some of these challenges. Specifically, our solution is based on the design of a lightweight token used for access to CoAP resources, and an optimized implementation of the Elliptic Curve Digital Signature Algorithm (ECDSA) inside the smart object. The results obtained from our experiments demonstrate the feasibility of the proposal and show promise for covering more complex scenarios in the future, as well as for its application in specific IoT use cases.
Keywords: Internet of Things; authorisation; computer network security; data privacy; digital signatures; personal area networks; public key cryptography;6LoWPAN;CoAP resources; ECDSA; Internet of Things; IoT deployment; IoT stakeholders; distributed capability-based access control mechanism; elliptic curve digital signature algorithm; information leakage; lightweight token; public key cryptography; security challenges; sensitive data management; user privacy; wireless technologies; Authentication; Authorization; Cryptography; Internet; Privacy; 6LoWPAN; Internet of Things; Privacy; Security; cryptographic primitives; distributed access control (ID#:14-2306) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6803122&isnumber=6803102
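Since several of the entries in this section rely on ECDSA, its sign/verify mechanics can be sketched on a textbook-sized curve. This is purely illustrative: the curve y^2 = x^3 + 2x + 2 over GF(17) with base point (5, 1) of order 19 is a standard teaching example and far too small to be secure, the function names are invented, and the deterministic nonce search is for reproducibility only (real ECDSA requires a fresh random or derived nonce per signature).

```python
import hashlib

# Toy curve y^2 = x^3 + 2x + 2 over GF(17); base point G = (5, 1) has order 19.
p, a, b = 17, 2, 2
G, n = (5, 1), 19

def inv(x, m):
    return pow(x, -1, m)  # modular inverse (Python 3.8+)

def add(P, Q):
    """Elliptic-curve point addition; None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None
    if P == Q:
        lam = (3 * P[0] * P[0] + a) * inv(2 * P[1], p) % p
    else:
        lam = (Q[1] - P[1]) * inv(Q[0] - P[0], p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def mul(k, P):
    """Double-and-add scalar multiplication k*P."""
    R = None
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def h(msg):
    # Reducing the full hash mod the tiny order n is a demo shortcut.
    return int.from_bytes(hashlib.sha256(msg).digest(), 'big') % n

def sign(d, msg):
    z, k = h(msg), 1
    while True:  # deterministic nonce search, for the demo only --
        R = mul(k, G)  # reusing nonces across messages leaks the key
        r = R[0] % n if R else 0
        if r:
            s = inv(k, n) * (z + r * d) % n
            if s: return (r, s)
        k += 1

def verify(Q, msg, sig):
    r, s = sig
    if not (0 < r < n and 0 < s < n): return False
    w = inv(s, n)
    P = add(mul(h(msg) * w % n, G), mul(r * w % n, Q))
    return P is not None and P[0] % n == r
```

A signature is the usual pair (r, s); verification recomputes the point u1*G + u2*Q and checks its x-coordinate against r.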
- Qawaqneh, Z.; Elleithy, K.; Alotaibi, B.; Alotaibi, M., "A New Hardware Quantum-Based Encryption Algorithm," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island , vol., no., pp.1,5, 2-2 May 2014. doi: 10.1109/LISAT.2014.6845201 Cryptography is entering a new age since the first steps have been made towards quantum computing, which also poses a threat to the classical cryptosystem in general. In this paper, we introduce a novel encryption technique and algorithm to improve quantum cryptography. The aim of the suggested scheme is to generate a digital signature in quantum computing. An arbitrated digital signature is introduced instead of a directed digital signature to prevent a sender from denying having sent a message by claiming that the sender's private key was stolen or lost and the signature forged. The one-time pad operation used by most quantum cryptography algorithms proposed in the past is avoided to decrease the possibility of channel eavesdropping. The presented algorithm uses quantum gates to perform the encryption and decryption processes. In addition, new quantum gates are introduced, analyzed, and investigated in the encryption and decryption processes. The authors believe the gates used in the proposed algorithm improve the security of both classical and quantum computing. The proposed gates have plausible properties that position them as suitable candidates for encryption and decryption processes in quantum cryptography. To demonstrate the security features of the algorithm, it was simulated in MATLAB, in particular through the Quack Quantum Library.
Keywords: digital signatures; quantum computing; quantum cryptography; quantum gates; Matlab simulator; Quack Quantum Library; arbitrated digital signature; channel eavesdropping; decryption process; encryption process; hardware quantum-based encryption algorithm; quantum computing; quantum cryptography improvement; quantum gates; sender private key; signature forging; Encryption; Logic gates; Protocols; Quantum computing; Quantum mechanics; algorithms; quantum; quantum cryptography; qubit key; secure communications (ID#:14-2307) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845201&isnumber=6845183
- Chouhan, D.S.; Mahajan, R.P., "An Architectural Framework For Encryption & Generation Of Digital Signature Using DNA Cryptography," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on , vol., no., pp.743,748, 5-7 March 2014. doi: 10.1109/IndiaCom.2014.6828061 As most of the modern encryption algorithms are broken fully or partially, the world of information security looks in new directions to protect the data it transmits. The concept of using DNA computing in the field of cryptography has been identified as a possible technology that may bring forward a new hope for hybrid and unbreakable algorithms. Currently, several DNA computing algorithms are proposed for cryptography, cryptanalysis and steganography problems, and they are proven to be very powerful in these areas. This paper gives an architectural framework for encryption and generation of digital signatures using DNA cryptography. To analyze performance, the original plaintext size and the key size, together with the encryption and decryption times, are examined; experiments on plaintexts with different contents are also performed to test the robustness of the program. Keywords: biocomputing; digital signatures; DNA computing; DNA cryptography; architectural framework; cryptanalysis; decryption time; digital signature encryption; digital signature generation; encryption algorithms; encryption time; information security; key size; plaintext size; steganography; Ciphers; DNA; DNA computing; Digital signatures; Encoding; Encryption; DNA; DNA computing; DNA cryptography; DNA digital coding (ID#:14-2308) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828061&isnumber=6827395
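The DNA digital coding such schemes build on maps each 2-bit pair of a byte to a nucleotide, commonly A=00, C=01, G=10, T=11. A minimal sketch of that encoding layer (function names invented; this is only the coding step, not the paper's full encryption framework):

```python
def to_dna(data):
    """Encode bytes as a DNA string, 2 bits per nucleotide (00->A, 01->C, 10->G, 11->T)."""
    m = 'ACGT'
    return ''.join(m[(byte >> shift) & 3] for byte in data for shift in (6, 4, 2, 0))

def from_dna(seq):
    """Decode a DNA string (length a multiple of 4) back to bytes."""
    m = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for ch in seq[i:i + 4]:
            byte = (byte << 2) | m[ch]
        out.append(byte)
    return bytes(out)
```

A full DNA-cryptography pipeline would apply cipher operations on top of this representation; the coding itself adds no secrecy.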
- Kishore Dutta, M.; Singh, A.; Travieso, C.M.; Burget, R., "Generation Of Digital Signature From Multi-Feature Biometric Traits For Digital Right Management Control," Engineering and Computational Sciences (RAECS), 2014 Recent Advances in , vol., no., pp.1,4, 6-8 March 2014. doi: 10.1109/RAECS.2014.6799558 This paper addresses the issue of ownership of digital images by embedding an imperceptible digital pattern in the image. The digital pattern is generated from multiple biometric features in a strategic manner so that the identification of an individual subject can be done. The features from an iris image and a fingerprint image are strategically combined to generate the pattern. This digital pattern was embedded in and extracted from the host image, and experiments were also carried out when the image was subjected to signal processing attacks. Experimental results indicate that the insertion of this digital pattern does not change the perceptual properties of the image, and the digital pattern survives signal processing attacks and can be extracted for unique identification. Keywords: biometrics (access control); digital rights management; digital signatures; image watermarking; biometric features; digital right management control; digital signature; fingerprint image; host image; imperceptible digital pattern; iris image; multifeature biometric traits; signal processing attacks; Biomedical imaging; Discrete cosine transforms; Fingerprint recognition; Gabor filters; Image recognition; PSNR; Watermarking; Digital Right Management; Fingerprint Recognition; Iris Pattern Recognition; Multimode Biometric Feature; Robustness; Signal Processing Attacks (ID#:14-2309) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6799558&isnumber=6799496
- Oder, Tobias; Pöppelmann, Thomas; Güneysu, Tim, "Beyond ECDSA and RSA: Lattice-based Digital Signatures On Constrained Devices," Design Automation Conference (DAC), 2014 51st ACM/EDAC/IEEE , vol., no., pp.1,6, 1-5 June 2014. doi: 10.1109/DAC.2014.6881437 All currently deployed asymmetric cryptography is broken with the advent of powerful quantum computers. We thus have to consider alternative solutions for systems with long-term security requirements (e.g., for long-lasting vehicular and avionic communication infrastructures). In this work we present an efficient implementation of BLISS, a recently proposed, post-quantum secure, and formally analyzed novel lattice-based signature scheme. We show that we can achieve a significant performance of 35.3 ms and 6 ms for signing and verification, respectively, at a 128-bit security level on an ARM Cortex-M4F microcontroller. This shows that lattice-based cryptography can be efficiently deployed on today's hardware and provides security solutions for many use cases that can even withstand future threats. Keywords: (not provided) (ID#:14-2310) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6881437&isnumber=6881325
- Fisher, P.S.; Min Gyung Kwak; Eunjung Lee; Jinsuk Baek, "A Signature Scheme for Digital Imagery," Information Science and Applications (ICISA), 2014 International Conference on , vol., no., pp.1,4, 6-9 May 2014. doi: 10.1109/ICISA.2014.6847337 We propose a signature scheme for identifying a related class of images based upon the content of the images. With the proposed scheme, we represent an image as a collection of rules based upon a technique using relationships derived from the pixels of images. This collection of relationships or rules is called Finite Inductive sequences. These rules make up a collective storage structure which can be used to process an image. The rules used in processing an unknown image characterize the image. The storage requirement increases with the number of rules for an image, which is on the order of the number of pixels within the image. One way to alleviate the storage requirement associated with large images is to process the image by using a wavelet transform, and then considering only the resulting high-frequency component of the transform as the input to this process. When a new image is submitted, the rules are used to recognize similarities between the stored image and the new image. The process will provide an interlinking mesh to images that are similar or have similar components, as a background process. Retrieval then can be done without additional work at the moment of retrieval. Keywords: content-based retrieval; image retrieval; wavelet transforms; collective storage structure; digital imagery; finite inductive sequences; high-frequency component; interlinking mesh; signature scheme; wavelet transform; Databases; Face; Image recognition; Search problems; Tagging; Wavelet transforms (ID#:14-2311) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847337&isnumber=6847317
- Huang Lu; Jie Li; Guizani, M., "Secure and Efficient Data Transmission for Cluster-Based Wireless Sensor Networks," Parallel and Distributed Systems, IEEE Transactions on , vol.25, no.3, pp.750,761, March 2014. doi: 10.1109/TPDS.2013.43 Secure data transmission is a critical issue for wireless sensor networks (WSNs). Clustering is an effective and practical way to enhance the system performance of WSNs. In this paper, we study a secure data transmission for cluster-based WSNs (CWSNs), where the clusters are formed dynamically and periodically. We propose two secure and efficient data transmission (SET) protocols for CWSNs, called SET-IBS and SET-IBOOS, by using the identity-based digital signature (IBS) scheme and the identity-based online/offline digital signature (IBOOS) scheme, respectively. In SET-IBS, security relies on the hardness of the Diffie-Hellman problem in the pairing domain. SET-IBOOS further reduces the computational overhead for protocol security, which is crucial for WSNs, while its security relies on the hardness of the discrete logarithm problem. We show the feasibility of the SET-IBS and SET-IBOOS protocols with respect to the security requirements and security analysis against various attacks. The calculations and simulations are provided to illustrate the efficiency of the proposed protocols. The results show that the proposed protocols have better performance than the existing secure protocols for CWSNs, in terms of security overhead and energy consumption. 
Keywords: digital signatures; protocols; telecommunication security; wireless sensor networks; Diffie Hellman problem; SET IBOOS;SET IBS; cluster based wireless sensor networks; computational overhead; discrete logarithm problem; efficient data transmission; identity based digital signature scheme; identity based online offline digital signature scheme; protocol security; secure data transmission; security analysis; Cryptography; Data communication; Digital signatures; Protocols; Steady-state; Wireless sensor networks; Cluster-based WSNs; ID-based digital signature; ID-based online/offline digital signature; secure data transmission protocol (ID#:14-2312) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6464257&isnumber=6731354
- Kishore, N.; Kapoor, B., "An Efficient Parallel Algorithm For Hash Computation In Security And Forensics Applications," Advance Computing Conference (IACC), 2014 IEEE International , vol., no., pp.873,877, 21-22 Feb. 2014. doi: 10.1109/IAdCC.2014.6779437 Hashing algorithms are used extensively in information security and digital forensics applications. This paper presents an efficient parallel algorithm for hash computation. It is a modification of the SHA-1 algorithm for faster parallel implementation in applications such as the digital signature and data preservation in digital forensics. The algorithm implements recursive hashing to break the chain dependencies of the standard hash function. We discuss the theoretical foundation for the work, including the collision probability and the performance implications. The algorithm is implemented using the OpenMP API and experiments performed using machines with multicore processors. The results show a performance gain of more than a factor of 3 when running on the 8-core configuration of the machine. Keywords: application program interfaces; cryptography; digital forensics; digital signatures; file organisation; parallel algorithms; probability; OpenMP API; SHA-1 algorithm; collision probability; data preservation; digital forensics; digital signature; hash computation; hashing algorithms; information security; parallel algorithm; standard hash function; Algorithm design and analysis; Conferences; Cryptography; Multicore processing; Program processors; Standards; Cryptographic Hash Function; Digital Forensics; Digital Signature; MD5; Multicore Processors; OpenMP; SHA-1 (ID#:14-2313) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779437&isnumber=6779283
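The chain-breaking idea can be sketched with the standard library: hash fixed-size blocks independently (a step that parallelizes across cores), then hash the concatenated digests. This is a simplified Merkle-style illustration with invented names, not the paper's exact recursive construction, and its output deliberately differs from a plain SHA-1 of the whole input.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def chunk_hash(data, block_size=1 << 16):
    """Two-level hash: SHA-1 each block independently, then SHA-1 the
    concatenated block digests. The per-block step has no chain
    dependency, so the map stage can run in parallel."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    with ThreadPoolExecutor() as pool:
        digests = list(pool.map(lambda blk: hashlib.sha1(blk).digest(), blocks))
    # Reduce stage: one final hash over the concatenated block digests.
    return hashlib.sha1(b''.join(digests)).hexdigest()
```

CPython's hashlib releases the GIL while hashing large buffers, so even a thread pool gains from multiple cores here; an OpenMP/C implementation, as in the paper, follows the same map-then-reduce shape.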
- Dinu, D.D.; Togan, M., "DHCP Server Authentication Using Digital Certificates," Communications (COMM), 2014 10th International Conference on, pp.1,6, 29-31 May 2014. doi: 10.1109/ICComm.2014.6866756 In this paper we give an overview of the DHCP security issues and the related work done to secure the protocol. Then we propose a method based on the use of public key cryptography and digital certificates in order to authenticate the DHCP server and DHCP server responses, and to prevent in this way the rogue DHCP server attacks. We implemented and tested the proposed solution using different key and certificate types in order to find out the packet overhead and time consumed by the new added authentication option. Keywords: certification; cryptographic protocols; digital signatures; public key cryptography; DHCP security; DHCP server attacks; DHCP server authentication; digital certificates; digital signature; public key cryptography; Authentication; Digital signatures; IP networks; Message authentication; Protocols; Servers; DHCP; DHCP authentication; DHCP security; digital certificate; digital signature; replay detection method URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866756&isnumber=6866648
- Benzaid, C.; Saiah, A.; Badache, N., "An Enhanced Secure Pairwise Broadcast Time Synchronization Protocol in Wireless Sensor Networks," Parallel, Distributed and Network-Based Processing (PDP), 2014 22nd Euromicro International Conference on , vol., no., pp.569,573, 12-14 Feb. 2014. doi: 10.1109/PDP.2014.114 This paper proposes an Enhanced Secure Pairwise Broadcast Time Synchronization (E-SPBS) protocol that allows authenticated MAC-layer timestamping on high-data-rate radio interfaces. E-SPBS ensures the security of the Receiver-Only synchronization approach using a public-key-based cryptography authentication scheme. The robustness and accuracy of E-SPBS were evaluated through simulations and experiments on a MICAz platform. Both simulation and experimental results demonstrate that E-SPBS achieves high robustness to external and internal attacks with low energy consumption. However, while the simulation results indicate that E-SPBS can achieve an average accuracy of less than 1 ms, the experimental results show that the synchronization error is higher and not stable. This comparison gives a good indication of how much confidence can be put into simulation results. Keywords: access protocols; cryptographic protocols; public key cryptography; radio receivers; synchronisation; telecommunication security; wireless sensor networks; E-SPBS protocol; MAC-layer timestamping; MICAz platform; energy consumption; enhanced secure pairwise broadcast time synchronization protocol; high-data rate radio interfaces; public-key-based cryptography authentication scheme; receiver-only synchronization approach; wireless sensor networks; Accuracy; Authentication; Delays; Protocols; Synchronization; Wireless sensor networks; Digital Signatures; Receiver-Only Synchronization approach; Secure Time Synchronization; Sensor Networks (ID#:14-2314) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6787330&isnumber=6787236
- Gulhane, G.; Mahajan, N.V., "Securing Multipath Routing Protocol Using Authentication Approach for Wireless Sensor Network," Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on , vol., no., pp.729,733, 7-9 April 2014. doi: 10.1109/CSNT.2014.153 Wireless sensor networks (WSN) suffer from a variety of threats, such as the operational lifetime of sensor nodes and the security of information carried by sensor nodes. There is an increasing threat of malicious node attacks on WSNs. The black hole attack is one such security threat, in which traffic is redirected to a node that does not actually exist in the network. With a multipath routing protocol, the lifespan of the wireless sensor network is increased by distributing traffic among several paths instead of a single optimal path. Secure data communication is also one of the important research challenges in wireless sensor networks. This paper proposes a secure and authenticated multipath routing protocol for wireless sensor networks that overcomes black hole attacks and provides secure data transmission in the network. Performance is measured in terms of different network parameters such as packet delivery fraction, energy consumption, normalized routing load and end-to-end delay. Keywords: delays; multipath channels; routing protocols; telecommunication security; wireless sensor networks; authentication approach; black hole attacks; end-to-end delay; energy consumption; multipath routing protocol; normalized routing load; operational lifetime; packet delivery fraction; secured data communication; wireless sensor network; Ad hoc networks; Energy efficiency; Routing; Routing protocols; Security; Wireless sensor networks; Ad hoc On Demand Multipath Vector Routing Protocol; Black Hole Attack; Digital Signature; Multipath routing protocol; wireless sensor network (ID#:14-2315) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821495&isnumber=6821334
- Soderstrom, H., "Self-Contained Digitally Signed Documents: Approaching "What You See Is What You Sign"," Information Science and Applications (ICISA), 2014 International Conference on , vol., no., pp.1,4, 6-9 May 2014. doi: 10.1109/ICISA.2014.6847461 The "what you see is what you sign" challenge has been part of digital signatures since the very start. Digital signatures apply to the bit level. Users see a higher level, so how can they know what they sign? A sample of real-life applications indicates that the issue is still open. We propose a method for improved assurance based on simple tenets. The document to be signed is a well-defined visual impression. Exactly that visual impression is signed. After signing all parties have a copy of the signed document, including its signatures. PDF makes it possible to store signatures and metadata in the document. The method is being implemented in an e-government web platform for a major Swedish city. Keywords: digital signatures; document handling; meta data; PDF; Swedish city; digital signature; e-government Web platform; metadata; self-contained digitally signed documents; visual impression; Digital signatures; Portable document format; Smart cards; Software; Visualization; XML (ID#:14-2316) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847461&isnumber=6847317
- Benitez, Yesica Imelda Saavedra; Ben-Othman, Jalel; Claude, Jean-Pierre, "Performance Evaluation Of Security Mechanisms In RAOLSR Protocol for Wireless Mesh Networks," Communications (ICC), 2014 IEEE International Conference on , vol., no., pp.1808,1812, 10-14 June 2014. doi: 10.1109/ICC.2014.6883585 In this paper, we propose the IBE-RAOLSR and ECDSA-RAOLSR protocols for WMNs (Wireless Mesh Networks), which contribute to securing routing protocols. We have implemented the IBE (Identity Based Encryption) and ECDSA (Elliptic Curve Digital Signature Algorithm) methods to secure messages in RAOLSR (Radio Aware Optimized Link State Routing), namely TC (Topology Control) and Hello messages. We then compare the ECDSA-based RAOLSR with the IBE-based RAOLSR protocol. This study shows the great benefits of the IBE technique in securing the RAOLSR protocol for WMNs. Through extensive ns-3 (Network Simulator-3) simulations, results have shown that the IBE-RAOLSR outperforms the ECDSA-RAOLSR in terms of overhead and delay. Simulation results show that the use of the IBE-based RAOLSR provides a greater level of security with light overhead. Keywords: Delays; Digital signatures; IEEE 802.11 Standards; Routing; Routing protocols; IBE; Identity Based Encryption; Radio Aware Optimized Link State Routing; Routing Protocol; Security; Wireless Mesh Networks (ID#:14-2317) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883585&isnumber=6883277
- Tsai, J., "An Improved Cross-Layer Privacy-Preserving Authentication in WAVE-enabled VANETs," Communications Letters, IEEE, vol. PP, no.99, pp.1, 1, May 2014. doi: 10.1109/LCOMM.2014.2323291 In 2013, Biswas and Misic proposed a new privacy preserving authentication scheme for WAVE-based vehicular ad hoc networks (VANETs), claiming that they used a variant of the Elliptic Curve Digital Signature Algorithm (ECDSA). However, our study has discovered that the authentication scheme proposed by them is vulnerable to a private key reveal attack. Any malicious receiving vehicle who receives a valid signature from a legal signing vehicle can gain access to the signing vehicle private key from the learned valid signature. Hence, the authentication scheme proposed by Biswas and Misic is insecure. We thus propose an improved version to overcome this weakness. The proposed improved scheme also supports identity revocation and trace. Based on this security property, the CA and a receiving entity (RSU or OBU) can check whether a received signature has been generated by a revoked vehicle. Security analysis is also conducted to evaluate the security strength of the proposed authentication scheme. Keywords: Authentication; Digital signatures; Elliptic curves; Law; Public key; Vehicles (ID#:14-2318) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814798&isnumber=5534602
- Shah, N.; Desai, N.; Vashi, V., "Efficient Cryptography for Data Security," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on , vol., no., pp.908,910, 5-7 March 2014. doi: 10.1109/IndiaCom.2014.6828095 In today's world, sensitive data are increasingly used in communication over the Internet. Thus, security of data is the biggest concern of Internet users. The best solution is the use of a cryptographic algorithm that encrypts data into some cipher, transfers it over the Internet, and decrypts it back to the original data. This paper provides a solution to the data security problem through a cryptography technique based on ASCII values. Keywords: Internet; cryptography; ASCII value; Internet; cipher; cryptography algorithm; cryptography technique; data security; sensitive data; Digital signatures; Encryption; Internet; Public key; Reflective binary codes; Cryptography; Data Security (ID#:14-2319) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828095&isnumber=6827395
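As a toy illustration of the general ASCII-value approach (not the paper's actual scheme, and far too weak for real use), a repeating key can shift each character's ASCII code modulo 128:

```python
def ascii_encrypt(plaintext, key):
    """Shift each character's ASCII code by the matching key character's
    code, modulo 128. Assumes 7-bit ASCII input and a non-empty key.
    A toy cipher only -- trivially breakable by frequency analysis."""
    return ''.join(chr((ord(c) + ord(key[i % len(key)])) % 128)
                   for i, c in enumerate(plaintext))

def ascii_decrypt(ciphertext, key):
    """Invert ascii_encrypt by subtracting the same key shifts."""
    return ''.join(chr((ord(c) - ord(key[i % len(key)])) % 128)
                   for i, c in enumerate(ciphertext))
```

The decrypt function undoes the shifts exactly, so a round trip returns the original plaintext.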
- Premnath, A.P.; Ju-Yeon Jo; Yoohwan Kim, "Application of NTRU Cryptographic Algorithm for SCADA Security," Information Technology: New Generations (ITNG), 2014 11th International Conference on , vol., no., pp.341,346, 7-9 April 2014. doi: 10.1109/ITNG.2014.38 Critical infrastructure represents the basic facilities, services and installations necessary for the functioning of a community, such as water, power lines, transportation, or communication systems. Any act or practice that causes a real-time critical infrastructure system to impair its normal function and performance will have a debilitating impact on security and the economy, with direct implications for society. SCADA (Supervisory Control and Data Acquisition) is a control system which is widely used in critical infrastructure systems to monitor and control industrial processes autonomously. As SCADA architecture relies on computers, networks, applications and programmable controllers, it is more vulnerable to security threats and attacks. Traditional SCADA communication protocols such as IEC 60870, DNP3, IEC 61850, or Modbus did not provide any security services. Newer standards such as IEC 62351 and AGA-12 offer security features to handle attacks on SCADA systems. However, there are performance issues with the cryptographic solutions of these specifications when applied to SCADA systems. This research is aimed at improving the performance of SCADA security standards by employing NTRU, a faster and lightweight public key algorithm, for providing end-to-end security.
Keywords: SCADA systems; critical infrastructures; cryptographic protocols; process control; process monitoring; production engineering computing; programmable controllers; public key cryptography; transport protocols;AGA-12;DNP3;IEC 60870;IEC 61850;IEC 62351;Modbus;NTRU cryptographic algorithm; NTRU public key algorithm; SCADA architecture; SCADA communication protocols; SCADA security standards; TCP/IP; communication systems; end-to-end security; industrial process control; industrial process monitoring; power lines; programmable controllers; real-time critical infrastructure system; security threats-attacks; supervisory control and data acquisition system; transportation; water; Authentication; Digital signatures; Encryption; IEC standards; SCADA systems;AGA-12;Critical Infrastructure System; IEC 62351; NTRU cryptographic algorithm; SCADA communication protocols over TCP/IP (ID#:14-2320) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6822221&isnumber=6822158
- Ullah, R.; Nizamuddin; Umar, A.I.; ul Amin, N., "Blind Signcryption Scheme Based On Elliptic Curves," Information Assurance and Cyber Security (CIACS), 2014 Conference on , vol., no., pp.51,54, 12-13 June 2014. doi: 10.1109/CIACS.2014.6861332 In this paper, blind signcryption using an elliptic curve cryptosystem is presented. It satisfies the functionalities of confidentiality, message integrity, unforgeability, signer non-repudiation, message unlinkability, sender anonymity and forward secrecy. The proposed scheme has low computation and communication overhead compared to existing blind signcryption schemes and is best suited for mobile phone voting and m-commerce. Keywords: public key cryptography; blind signcryption scheme; communication overhead; confidentiality; elliptic curve cryptosystem; forward secrecy; m-commerce; message integrity; message unlinkability; mobile phone voting; sender anonymity; signer non-repudiation; unforgeability; Digital signatures; Elliptic curve cryptography; Elliptic curves; Equations; Mobile handsets; Anonymity; Blind Signature; Blind Signcryption; Elliptic curves; Signcryption (ID#:14-2321) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6861332&isnumber=6861314
- Daehee Kim; Sunshin An, "Efficient And Scalable Public Key Infrastructure For Wireless Sensor Networks," Networks, Computers and Communications, The 2014 International Symposium on , vol., no., pp.1,5, 17-19 June 2014. doi: 10.1109/SNCC.2014.6866514 Ensuring security is essential in wireless sensor networks (WSNs) since a variety of applications of WSNs, including military, medical and industrial sectors, require several kinds of security services such as confidentiality, authentication, and integrity. However, ensuring security is not trivial in WSNs because of the limited resources of the sensor nodes. This has led many researchers to focus on symmetric key cryptography, which is computationally lightweight but requires a shared key between the sensor nodes. Public key cryptography (PKC) not only solves this problem gracefully, but also provides enhanced security services such as non-repudiation and digital signatures. To take advantage of PKC, each node must have the public key of the corresponding node via an authenticated method. The most widely used way is to use digital signatures signed by a certificate authority which is part of a public key infrastructure (PKI). Since traditional PKI requires a huge amount of computation and communication, it can be a heavy burden for WSNs. In this paper, we propose our own energy-efficient and scalable PKI for WSNs. This is accomplished by taking advantage of heterogeneous sensor networks and elliptic curve cryptography. Our proposed PKI is analyzed in terms of security, energy efficiency, and scalability; as the analysis shows, it is secure, energy efficient, and scalable.
Keywords: digital signatures; energy conservation; public key cryptography; telecommunication power management; wireless sensor networks; PKC; PKI; WSN; authenticated method; certificate authority; digital signatures; elliptic curve cryptography; energy efficiency; heterogeneous sensor networks; industrial sectors; medical sectors; military sectors; public key cryptography; public key infrastructure; security services; sensor nodes; symmetric key cryptography; wireless sensor networks; Cryptography; IP networks; Servers; Wireless communication; Wireless sensor networks; (k, n) Threshold Scheme; Certificate Authority; Elliptic Curve Cryptography; Heterogeneous Sensor Networks; Public Key Infrastructure; Wireless Sensor Networks (ID#:14-2322) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866514&isnumber=6866503
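The entry above relies on elliptic curve cryptography for lightweight public key operations on sensor nodes. As a purely editorial illustration (a small textbook curve over GF(17), not the authors' PKI; all names here are our own), the core primitive is double-and-add scalar multiplication:

```python
# Toy elliptic curve y^2 = x^3 + 2x + 2 (mod 17) with textbook
# generator G = (5, 1); None represents the point at infinity.
P_MOD, A = 17, 2
G = (5, 1)

def point_add(p, q):
    """Add two curve points using the chord-and-tangent rules."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None  # p + (-p) = point at infinity
    if p == q:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, p):
    """Compute k*p by double-and-add, the basic ECC public-key operation."""
    result = None
    while k:
        if k & 1:
            result = point_add(result, p)
        p = point_add(p, p)
        k >>= 1
    return result
```

On this curve G has order 19, so `scalar_mult(19, G)` returns the point at infinity; a real deployment would of course use a standardized curve with a ~256-bit prime.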
- Vollala, S.; Varadhan, V.V.; Geetha, K.; Ramasubramanian, N., "Efficient Modular Multiplication Algorithms For Public Key Cryptography," Advance Computing Conference (IACC), 2014 IEEE International, pp.74,78, 21-22 Feb. 2014. doi: 10.1109/IAdCC.2014.6779297 Modular exponentiation is an important operation for cryptographic transformations in public key cryptosystems like the Rivest, Shamir and Adleman, the Diffie and Hellman and the ElGamal schemes. Computing a^x mod n and a^x b^y mod n for very large x, y and n is fundamental to the efficiency of almost all public key cryptosystems and digital signature schemes. To achieve a high level of security, the word length in the modular exponentiations should be significantly large. The performance of public key cryptography is primarily determined by the implementation efficiency of the modular multiplication and exponentiation. As the words are usually large, and in order to optimize the time taken by these operations, it is essential to minimize the number of modular multiplications. In this paper we present efficient algorithms for computing a^x mod n and a^x b^y mod n. In this work we propose four algorithms to evaluate modular exponentiation: Bit Forwarding (BFW) algorithms to compute a^x mod n, and two algorithms, namely Substitute and Reward (SRW) and Store and Forward (SFW), to compute a^x b^y mod n. All the proposed algorithms are efficient in terms of time and at the same time demand only minimal additional space to store the pre-computed values. These algorithms are suitable for devices with low computational power and limited storage. 
Keywords: digital signatures; public key cryptography; BFW algorithms; bit forwarding algorithms; cryptographic transformations; digital signature schemes; modular exponentiation; modular multiplication algorithms; public key cryptography; public key cryptosystems; store and forward algorithms; substitute and reward algorithms; word length; Algorithm design and analysis; Ciphers; Conferences; Encryption; Public key cryptography; Modular Multiplication; Public key cryptography (PKC); RSA; binary exponentiation (ID#:14-2324) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779297&isnumber=6779283
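For context, the baseline that work like the above improves upon is binary (square-and-multiply) exponentiation, and the a^x b^y case is classically handled with Shamir's simultaneous-exponentiation trick. The sketch below shows both textbook methods (not the paper's BFW/SRW/SFW algorithms; function names are ours):

```python
def mod_exp(a, x, n):
    """Compute a^x mod n by square-and-multiply, scanning exponent
    bits from least significant: square every step, multiply on 1-bits."""
    result = 1
    a %= n
    while x:
        if x & 1:
            result = result * a % n
        a = a * a % n
        x >>= 1
    return result

def mod_exp2(a, x, b, y, n):
    """Compute a^x * b^y mod n with Shamir's trick: one shared squaring
    chain plus a single precomputed product a*b."""
    a %= n
    b %= n
    ab = a * b % n
    result = 1
    for i in range(max(x.bit_length(), y.bit_length()) - 1, -1, -1):
        result = result * result % n          # shared squaring
        bits = ((x >> i) & 1, (y >> i) & 1)
        if bits == (1, 0):
            result = result * a % n
        elif bits == (0, 1):
            result = result * b % n
        elif bits == (1, 1):
            result = result * ab % n          # one multiply covers both bases
    return result
```

Python's built-in `pow(a, x, n)` does the single-base case natively, which makes it a convenient cross-check for either function.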
Efficient Encryption
The term "efficient encryption" generally refers to the speed of an algorithm, that is, the time needed to complete the calculations to encrypt or decrypt a coded text. The research cited here takes a broader view, looking at both hardware and software. Several of these works also address power consumption. The works cited here appeared from January to August of 2014.
- Pathak, S.; Kamble, R.; Chaursia, D., "An Efficient Data Encryption Standard Image Encryption Technique With RGB Random Uncertainty," Optimization, Reliability, and Information Technology (ICROIT), 2014 International Conference on , vol., no., pp.413,421, 6-8 Feb. 2014. doi: 10.1109/ICROIT.2014.6798366 Image encryption is an emerging area of focus nowadays. Creating heavy distortion between the original image and the encrypted image is a crucial aspect. In this paper we propose an efficient approach based on the data encryption standard (DES). In our approach we use XOR in combination with DES encryption, which produces greater changes in the RGB combination as well as in the histogram. We also discuss our results, which show the variations. The higher the variation, the better the security. Keywords: cryptography; image processing; DES; RGB random uncertainty; XOR; efficient data encryption standard; heavy distortion; histogram; image encryption technique; variation security; Cryptography; IP networks; Image color analysis; Irrigation; Uncertainty; Chaos; DES; Image Encryption; Security Measures (ID#:14-2325) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798366&isnumber=6798279
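The abstract above combines DES with an XOR masking of RGB values. As a standard-library-only sketch of just the XOR component (the paper's DES pass is omitted, and the keystream derivation here is our own illustrative choice, not the authors'):

```python
import hashlib

def xor_mask(pixels: bytes, key: bytes) -> bytes:
    """XOR raw RGB bytes with a SHA-256-derived keystream (counter-mode
    style). Applying the same function twice restores the original."""
    keystream = bytearray()
    counter = 0
    while len(keystream) < len(pixels):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        keystream += block
        counter += 1
    return bytes(p ^ k for p, k in zip(pixels, keystream))
```

Because XOR is an involution, `xor_mask(xor_mask(img, key), key)` returns the original bytes; in the cited scheme this masking is only one layer, with DES providing the cryptographic strength.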
- Seo, S.; Nabeel, M.; Ding, X.; Bertino, E., "An Efficient Certificateless Encryption for Secure Data Sharing in Public Clouds," Knowledge and Data Engineering, IEEE Transactions on , vol.26, no.9, pp.2107,2119, Sept. 2014. doi: 10.1109/TKDE.2013.138 We propose a mediated certificateless encryption scheme without pairing operations for securely sharing sensitive information in public clouds. Mediated certificateless public key encryption (mCL-PKE) solves the key escrow problem in identity based encryption and certificate revocation problem in public key cryptography. However, existing mCL-PKE schemes are either inefficient because of the use of expensive pairing operations or vulnerable against partial decryption attacks. In order to address the performance and security issues, in this paper, we first propose a mCL-PKE scheme without using pairing operations. We apply our mCL-PKE scheme to construct a practical solution to the problem of sharing sensitive information in public clouds. The cloud is employed as a secure storage as well as a key generation center. In our system, the data owner encrypts the sensitive data using the cloud generated users' public keys based on its access control policies and uploads the encrypted data to the cloud. Upon successful authorization, the cloud partially decrypts the encrypted data for the users. The users subsequently fully decrypt the partially decrypted data using their private keys. The confidentiality of the content and the keys is preserved with respect to the cloud, because the cloud cannot fully decrypt the information. We also propose an extension to the above approach to improve the efficiency of encryption at the data owner. We implement our mCL-PKE scheme and the overall cloud based system, and evaluate its security and performance. Our results show that our schemes are efficient and practical. 
Keywords: Access control; Artificial intelligence; Cloud computing; Encryption; Public key; Cloud computing; Data encryption; Public key cryptosystems; access control; certificateless cryptography; confidentiality (ID#:14-2326) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6574849&isnumber=6871455
- Jiantao Zhou; Xianming Liu; Au, O.C.; Yuan Yan Tang, "Designing an Efficient Image Encryption-Then-Compression System via Prediction Error Clustering and Random Permutation," Information Forensics and Security, IEEE Transactions on , vol.9, no.1, pp.39,50, Jan. 2014. doi: 10.1109/TIFS.2013.2291625 In many practical scenarios, image encryption has to be conducted prior to image compression. This has led to the problem of how to design a pair of image encryption and compression algorithms such that compressing the encrypted images can still be efficiently performed. In this paper, we design a highly efficient image encryption-then-compression (ETC) system, where both lossless and lossy compression are considered. The proposed image encryption scheme operated in the prediction error domain is shown to be able to provide a reasonably high level of security. We also demonstrate that an arithmetic coding-based approach can be exploited to efficiently compress the encrypted images. More notably, the proposed compression approach applied to encrypted images is only slightly worse, in terms of compression efficiency, than the state-of-the-art lossless/lossy image coders, which take original, unencrypted images as inputs. In contrast, most of the existing ETC solutions induce significant penalty on the compression efficiency. Keywords: arithmetic codes; data compression; image coding; pattern clustering; prediction theory; random codes; ETC; arithmetic coding-based approach; image encryption-then-compression system design; lossless compression; lossless image coder; lossy compression; lossy image coder; prediction error clustering; random permutation; security; Bit rate; Decoding; Encryption; Image coding; Image reconstruction; Compression of encrypted image; encrypted domain signal processing (ID#:14-2327) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6670767&isnumber=6684617
- Haojie Shen; Li Zhuo; Yingdi Zhao, "An Efficient Motion Reference Structure Based Selective Encryption Algorithm For H.264 Videos," Information Security, IET , vol.8, no.3, pp.199,206, May 2014. doi: 10.1049/iet-ifs.2012.0349 In this study, based on both the prediction mechanism of H.264 encoder and the syntax of H.264 bitstream, an efficient selective video encryption algorithm is proposed. The contributions of the study include two aspects. First, motion reference ratio (MRR) of macroblock (MB) is proposed to describe the inter-frame dependency among the adjacent frames. At the MB layer, MRRs of MBs are statistically analysed, and MBs to be encrypted are selected based on the statistical results. Second, at the bitstream layer of MBs, bit-sensitivity is proposed to represent the degree of importance of each bit in the compressed bitstream for reconstructed video quality. The most significant bits for reconstructed video quality are selected to be encrypted based on the bit-sensitivity of H.264 bitstream. The intra-prediction mode codewords, the sign bits of the non-zero coefficients and the info_suffix of motion vector difference codewords are extracted to be encrypted. The proposed two-layer selection scheme improves the encryption efficiency significantly. Experimental results demonstrate that both perceptual security and cryptographic security are achieved, and compared with the existing SEH264 algorithm, the proposed selective encryption algorithm can reduce the computational complexity by 50% on average. 
Keywords: computational complexity; cryptography; data compression; image motion analysis; image reconstruction; video codecs; video coding; H.264 bitstream; H.264 encoder; MB layer; MRR; SEH264 algorithm; bit-sensitivity; computational complexity; cryptographic security; interframe dependency; intra-prediction mode codewords; macroblock; motion reference ratio; motion reference structure; motion vector difference codewords; non-zero coefficients; perceptual security; prediction mechanism; selective video encryption algorithm; sign bits; two-layer selection scheme; video quality reconstruction (ID#:14-2328) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6786860&isnumber=6786849
- Yuhao Wang; Hao Yu; Sylvester, D.; Pingfan Kong, "Energy Efficient In-Memory AES Encryption Based On Nonvolatile Domain-Wall Nanowire," Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014 , vol., no., pp.1,4, 24-28 March 2014. doi: 10.7873/DATE.2014.196 The widely applied Advanced Encryption Standard (AES) encryption algorithm is critical in secure big-data storage. Data oriented applications have imposed high throughput and low power, i.e., energy efficiency (J/bit), requirements when applying AES encryption. This paper explores an in-memory AES encryption using the newly introduced domain-wall nanowire. We show that all AES operations can be fully mapped to a logic-in-memory architecture by non-volatile domain-wall nanowire, called DW-AES. The experimental results show that DW-AES can achieve the best energy efficiency of 24 pJ/bit, which is 9X and 6.5X better than CMOS ASIC and memristive CMOL implementations, respectively. Under the same area budget, the proposed DW-AES exhibits 6.4X higher throughput and 29% power saving compared to a CMOS ASIC implementation; 1.7X higher throughput and 74% power reduction compared to a memristive CMOL implementation. Keywords: cryptography; low-power electronics; nanowires; random-access storage; Advanced Encryption Standard; CMOS ASIC implementations; DW-AES; data oriented applications; energy efficient in-memory AES encryption; logic-in-memory architecture; low power; memristive CMOL implementations; nonvolatile domain-wall nanowire; secure big-data storage; Application specific integrated circuits; CMOS integrated circuits; Ciphers; Encryption; Nanoscale devices; Nonvolatile memory; Throughput (ID#:14-2329) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6800397&isnumber=6800201
- Fei Huo; Guang Gong, "A New Efficient Physical Layer OFDM Encryption Scheme," INFOCOM, 2014 Proceedings IEEE , vol., no., pp.1024,1032, April 27 2014-May 2, 2014. doi: 10.1109/INFOCOM.2014.6848032 In this paper, we propose a new encryption scheme for OFDM systems. The reason for a physical layer approach is that it has the least impact on the system and is the fastest among all layers. This scheme is computationally secure against the adversary. It requires fewer key streams compared with other approaches. The idea comes from the importance of orthogonality in OFDM symbols. Destroying the orthogonality creates intercarrier interference, which in turn causes higher bit and symbol decoding error rates. The encryption is performed on the time domain OFDM symbols, which is equivalent to performing nonlinear masking in the frequency domain. Various attacks are explored in this paper. These include known plaintext and ciphertext attack, frequency domain attack, time domain attack, statistical attack and random guessing attack. We show our scheme is resistant against these attacks. Finally, simulations are conducted to compare the new scheme with the conventional cipher encryption. Keywords: OFDM modulation; cryptography; decoding; intercarrier interference; OFDM symbols; OFDM systems; cipher encryption; ciphertext attack; efficient physical layer OFDM encryption scheme; frequency domain; frequency domain attack; intercarrier interferences; nonlinear masking; orthogonality; physical layer approach; plaintext attack; random guessing attack; statistical attack; symbol decoding error rate; time domain OFDM symbols; time domain attack; Ciphers; Encryption; Frequency-domain analysis; OFDM; Receivers; Time-domain analysis (ID#:14-2330) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848032&isnumber=6847911
- Hamdi, M.; Hermassi, H.; Rhouma, R.; Belghith, S., "A New Secure And Efficient Scheme Of ADPCM Encoder Based On Chaotic Encryption," Advanced Technologies for Signal and Image Processing (ATSIP), 2014 1st International Conference on , vol., no., pp.7,11, 17-19 March 2014. doi: 10.1109/ATSIP.2014.6834580 This paper presents a new secure variant of the ADPCM (Adaptive Differential Pulse Code Modulation) encoders adopted by the CCITT. This version provides encryption and decryption of voice simultaneously with ADPCM encoding and decoding operations. The evaluation of the scheme showed better performance in terms of speed and security. Keywords: adaptive modulation; cryptography; differential pulse code modulation; speech coding; CCITT; adaptive differential pulse code modulation; chaotic encryption; efficient ADPCM encoder; secure ADPCM encoder; voice decryption; voice encryption; Chaotic communication; Decoding; Encoding; Encryption; Speech; Encryption-Compression; Speech coding; chaotic encryption (ID#:14-2331) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6834580&isnumber=6834578
- Hongchao Zhou; Wornell, G., "Efficient Homomorphic Encryption On Integer Vectors And Its Applications," Information Theory and Applications Workshop (ITA), 2014 , vol., no., pp.1,9, 9-14 Feb. 2014. doi: 10.1109/ITA.2014.6804228 Homomorphic encryption, aimed at enabling computation in the encrypted domain, is becoming important to a wide and growing range of applications, from cloud computing to distributed sensing. In recent years, a number of approaches to fully (or nearly fully) homomorphic encryption have been proposed, but to date the space and time complexity of the associated schemes has precluded their use in practice. In this work, we demonstrate that more practical homomorphic encryption schemes are possible when we require that not all encrypted computations be supported, but rather only those of interest to the target application. More specifically, we develop a homomorphic encryption scheme operating directly on integer vectors that supports three operations of fundamental interest in signal processing applications: addition, linear transformation, and weighted inner products. Moreover, when used in combination, these primitives allow us to efficiently and securely compute arbitrary polynomials. Some practically relevant examples of the computations supported by this framework are described, including feature extraction, recognition, classification, and data aggregation. Keywords: computational complexity; cryptography; polynomials; arbitrary polynomials; cloud computing; distributed sensing; homomorphic encryption scheme; integer vectors; space and time complexity; Encryption; Noise; Polynomials; Servers; Switches; Vectors (ID#:14-2332) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6804228&isnumber=6804199
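The additive primitive the entry above builds on can be illustrated with a textbook Paillier cryptosystem applied componentwise to integer vectors. This is our own toy example with deliberately tiny, insecure parameters, not the authors' integer-vector scheme:

```python
import math
import random

# Toy Paillier keypair (real keys use ~1024-bit primes).
p, q = 17, 19
n = p * q
n2 = n * n
g = n + 1                                  # standard choice g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def enc(m):
    """Encrypt m: c = g^m * r^n mod n^2 with random r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def dec(c):
    """Decrypt: L(c^lam mod n^2) * mu mod n, where L(u) = (u - 1) / n."""
    return (pow(c, lam, n2) - 1) // n * mu % n

def add_enc(c1, c2):
    """Ciphertext multiplication mod n^2 = plaintext addition mod n."""
    return c1 * c2 % n2

# Componentwise encrypted vector addition.
v1, v2 = [3, 14, 15], [9, 26, 5]
enc_sum = [add_enc(enc(a), enc(b)) for a, b in zip(v1, v2)]
assert [dec(c) for c in enc_sum] == [(a + b) % n for a, b in zip(v1, v2)]
```

The paper goes well beyond this, supporting linear transformations and weighted inner products directly on encrypted vectors; the sketch only shows why multiplying ciphertexts yields the sum of plaintexts.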
- Yongsung Jeon; Youngsae Kim; Jeongnyeo Kim, "Implementation of a Video Streaming Security System For Smart Device," Consumer Electronics (ICCE), 2014 IEEE International Conference on , vol., no., pp.97,100, 10-13 Jan. 2014. doi: 10.1109/ICCE.2014.6775925 This paper proposes an efficient hardware architecture to implement a video surveillance camera for security. The proposed smart camera will combine the Digital Media SoC with the low-cost FPGA. Each can perform video processing and security functions independently and the FPGA has a novel video security module. This security module encrypts video stream raw data by using an efficient encryption method; the high 4 bits from the MSB of video data are encrypted by an AES algorithm. In addition, the proposed security module can encrypt raw video data with a maximum operation frequency of 39 MHz, which is possible on a low-cost FPGA. This paper also asserts that the proposed encryption method can obtain a similar video data security level while using less hardware resources than when all of the video data is encrypted. Keywords: cameras; cryptography; field programmable gate arrays; system-on-chip; telecommunication security; video streaming; video surveillance; AES algorithm; FPGA; MSB; digital media SoC; encryption method; frequency 39 MHz; hardware architecture; most significant bit; smart camera; smart device; system on chip; video data security; video stream raw data; video streaming security system; video surveillance camera; Computer architecture; Encryption; Field programmable gate arrays; Hardware; Streaming media (ID#:14-2333) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6775925&isnumber=6775879
- Milioris, D.; Jacquet, P., "SecLoc: Encryption System Based On Compressive Sensing Measurements For Location Estimation," Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on , vol., no., pp.171,172, April 27 2014-May 2 2014. doi: 10.1109/INFCOMW.2014.6849210 In this paper we present an efficient encryption system based on Compressive Sensing, without the additional computational cost of a separate encryption protocol, when applied to indoor location estimation problems. The breakthrough of the method is the use of the weakly encrypted measurement matrices which are generated when solving the optimization problem to localize the source. It must be noted that in this method an alternative key is required to secure the system. Keywords: compressed sensing; cryptographic protocols; matrix algebra; optimisation; SecLoc system; compressive sensing measurements; encryption protocol; encryption system; location estimation; optimization problem; weakly encrypted measurement matrices; Bayes methods; Compressed sensing; Encryption; Estimation; Runtime; Servers; Vectors (ID#:14-2334) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849210&isnumber=6849127
- Zibideh, W.Y.; Matalgah, M.M., "Energy Consumptions Analysis For A Class Of Symmetric Encryption Algorithm," Radio and Wireless Symposium (RWS), 2014 IEEE , vol., no., pp.268,270, 19-23 Jan. 2014. doi: 10.1109/RWS.2014.6830130 Due to the increased demand on wireless devices and their applications, the necessity for efficient and secure encryption algorithms is critical. A secure encryption algorithm is considered energy efficient if it uses a minimum number of CPU operations. In this paper we use numerical calculations to analyze the energy consumption for a class of encryption algorithms. We compute the number of arithmetic and logical instructions in addition to the number of memory accesses used by each of the algorithms under study. Given some information about the microprocessor used in encryption, we can compute the energy consumed per instruction and hence compute the total energy consumed by the encryption algorithm. In addition, we use computer simulations to compare the energy loss of transmitting encrypted information over the wireless channel. In this paper we therefore combine these two approaches into a comprehensive analysis of the energy consumption of encryption algorithms. Keywords: cryptography; energy conservation; energy consumption; error statistics; microcomputers; telecommunication channels; telecommunication power management; CPU operations; arithmetic instructions; encrypted information; energy consumptions analysis; energy efficiency; energy loss; logical instructions; memory access; microprocessor; secure encryption algorithms; symmetric encryption algorithm; wireless channel; wireless devices; Bit error rate; Clocks; Encryption; Energy consumption; Microprocessors; Wireless communication (ID#:14-2335) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830130&isnumber=6830066
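The instruction-counting approach in the abstract above reduces to simple arithmetic: multiply each instruction count by its per-instruction energy cost and sum. A back-of-the-envelope sketch (all energy figures here are made up for illustration, not taken from the paper):

```python
# Hypothetical per-instruction energy costs in nanojoules.
ENERGY_NJ = {"alu": 0.5, "mem": 2.0}

def cipher_energy_nj(n_alu: int, n_mem: int) -> float:
    """Total energy (nJ) = ALU/logic instructions * E_alu
    + memory accesses * E_mem."""
    return n_alu * ENERGY_NJ["alu"] + n_mem * ENERGY_NJ["mem"]

# e.g. an encryption pass with 10,000 ALU ops and 1,500 memory accesses:
total = cipher_energy_nj(10_000, 1_500)   # 10000*0.5 + 1500*2.0 = 8000.0 nJ
```

The paper's contribution is supplying realistic per-instruction costs from microprocessor data and pairing this static count with simulated retransmission losses over the wireless channel.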
- Bhatnagar, G.; Wu, Q.M.J., "Biometric Inspired Multimedia Encryption Based on Dual Parameter Fractional Fourier Transform," Systems, Man, and Cybernetics: Systems, IEEE Transactions on , vol.44, no.9, pp.1234,1247, Sept. 2014. doi: 10.1109/TSMC.2014.2303789 In this paper, a novel biometric inspired multimedia encryption technique is proposed. For this purpose, a new advent in the definition of fractional Fourier transform, namely, dual parameter fractional Fourier transform (DP-FrFT) is proposed and used in multimedia encryption. The core idea behind the proposed encryption technique is to obtain a biometrically encoded bitstream followed by the generation of the keys used in the encryption process. Since the key generation process of an encryption technique directly determines its security, this paper proposes an efficient method for generating a biometrically encoded bitstream from biometrics and using it to generate the keys. Then, the encryption of multimedia data is done in the DP-FrFT domain with the help of Hessenberg decomposition and a nonlinear chaotic map. Finally, a reliable decryption process is proposed to construct original multimedia data from the encrypted data. Theoretical analyses and computer simulations both confirm high security and efficiency of the proposed encryption technique. Keywords: Eigenvalues and eigenfunctions; Encryption; Fourier transforms; Iris recognition; Multimedia communication; Biometrics; Hessenberg Decomposition; dual parameter fractional Fourier transform (DP-FrFT); encryption techniques; fractional Fourier transform; nonlinear chaotic map (ID#:14-2336) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6748100&isnumber=6878502
- Huang Qinlong; Ma Zhaofeng; Yang Yixian; Niu Xinxin; Fu Jingyi, "Attribute Based DRM Scheme With Dynamic Usage Control In Cloud Computing," Communications, China , vol.11, no.4, pp.50,63, April 2014. doi: 10.1109/CC.2014.6827568 In order to achieve fine-grained access control in cloud computing, existing digital rights management (DRM) schemes adopt attribute-based encryption as the main encryption primitive. However, these schemes suffer from inefficiency and cannot support dynamic updating of usage rights stored in the cloud. In this paper, we propose a novel DRM scheme with secure key management and dynamic usage control in cloud computing. We present a secure key management mechanism based on attribute-based encryption and proxy re-encryption. Only the users whose attributes satisfy the access policy of the encrypted content and who have effective usage rights are able to recover the content encryption key and further decrypt the content. The attribute based mechanism allows the content provider to selectively provide fine-grained access control of contents among a set of users, and also enables the license server to implement immediate attribute and user revocation. Moreover, our scheme supports privacy-preserving dynamic usage control based on additive homomorphic encryption, which allows the license server in the cloud to update the users' usage rights dynamically without disclosing the plaintext. Extensive analytical results indicate that our proposed scheme is secure and efficient. 
Keywords: authorisation; cloud computing; data privacy; digital rights management; private key cryptography; public key cryptography; access policy; additive homomorphic encryption; attribute based DRM scheme; attribute-based encryption; cloud computing; content decryption; content encryption key; digital rights management; encrypted content recovery; fine-grained access control; immediate attribute; license server; privacy-preserving dynamic usage control; proxy re-encryption; secure key management; user revocation; Access control; Cloud computing; Encryption; Licenses; Privacy; attribute-based encryption; cloud computing; digital rights management; homomorphic encryption; usage control (ID#:14-2337) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6827568&isnumber=6827540
- Lembrikov, B.I; Ben-Ezra, Y.; Yurchenko, Yu., "Transmission of Chaotically Encrypted Signals Over An Optical Channel," Transparent Optical Networks (ICTON), 2014 16th International Conference on , vol., no., pp.1,1, 6-10 July 2014. doi: 10.1109/ICTON.2014.6876414 Privacy and security are important problems in contemporary information transmission systems. Traditional cryptosystems are based on software techniques where a short secret parameter defined as the key is used, or the message is encoded directly. A novel approach to encryption is based on a hardware communication system where the encryption is directly applied to the physical layer of the communication system. Chaos communication is a direct encoding and decoding scheme of a message system in a communication system. Optical communication with chaotic laser systems has attracted wide interest. Optical-fiber communication systems using chaotic semiconductor lasers have been investigated both theoretically and experimentally. The advantages of chaotic communications are the following: (i) Efficient use of the bandwidth of the communication channel; (ii) Utilization of the intrinsic nonlinearities in communication devices such as semiconductor diode lasers; (iii) Large-signal modulation for efficient use of carrier-power; (iv) Reduced number of components in a communication system; (v) Security of communication based on chaotic encryption. Typically, generation of chaotic signals can be achieved by introduction of delayed all-optical or electro-optical feedback into diode lasers. We propose a novel system of coupled-laser synchronization based on master and slave lasers both in the transmitter and in the receiver. We carried out numerical simulations of the optical communication channel containing such a transmitter and a receiver. We investigated theoretically the influence of optical fiber dispersion and nonlinearity on the chaotically encoded signal transmission efficiency. 
The numerical simulations show that the efficient transmission of the chaotically modulated waveform over the optical channel of a 100 km distance and the following decoding are possible. Keywords: (not provided) (ID#:14-2338) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6876414&isnumber=6876260
- Hazarika, N.; Saikia, M., "A Novel Partial Image Encryption Using Chaotic Logistic Map," Signal Processing and Integrated Networks (SPIN), 2014 International Conference on , vol., no., pp.231,236, 20-21 Feb. 2014. doi: 10.1109/SPIN.2014.6776953 Transmitted images may have many different applications: commercial, military, medical, etc. To protect the information from unauthorized access, secure image transfer is required, and this can be achieved by image data encryption. But the encryption of a whole image is time consuming. This paper proposes a selective encryption technique using the spatial or DCT domain. The results of several experiments, statistical analyses and sensitivity tests show that the proposed image encryption scheme provides an efficient and secure way for real-time image encryption and transmission. A chaotic logistic map is used to perform the different encryption/decryption operations in this proposed method. Keywords: chaos; cryptography; discrete cosine transforms; image processing; statistical analysis; DCT domain; chaotic logistic map; decryption operation; discrete cosine transform; novel partial image data encryption; real-time image transmission; selective encryption techniques; sensitivity test; spatial domain; statistical analysis; unauthorized secure image transfer access; Chaos; Ciphers; Discrete cosine transforms; Encryption; Histograms; Logistics; Block Cipher; Chaos; DCT; Logistic map; Partial Encryption (ID#:14-2339) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6776953&isnumber=6776904
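The chaotic-logistic-map idea in the entry above is easy to sketch: iterate x' = r*x*(1-x) in the chaotic regime (r near 4) and turn the orbit into a keystream. The parameters and byte-extraction rule below are our own illustrative choices, not the authors' exact construction:

```python
def logistic_keystream(x0: float, r: float, nbytes: int, burn_in: int = 100) -> bytes:
    """Generate a keystream from the logistic map x' = r*x*(1-x)."""
    x = x0
    for _ in range(burn_in):          # discard transient iterations
        x = r * x * (1.0 - x)
    out = bytearray()
    for _ in range(nbytes):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)  # map orbit value in (0,1) to a byte
    return bytes(out)

def crypt(data: bytes, x0: float = 0.3141, r: float = 3.99) -> bytes:
    """XOR data with the chaotic keystream; the (x0, r) pair acts as the
    secret key, and the same call decrypts since XOR is an involution."""
    ks = logistic_keystream(x0, r, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))
```

In a partial-encryption scheme like the one cited, only the selected blocks (spatial regions or significant DCT coefficients) would be passed through `crypt`, which is what keeps the method fast; note also that raw logistic-map keystreams like this toy are known to be cryptographically weak on their own.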
- Wenhai Sun; Shucheng Yu; Wenjing Lou; Hou, Y.T.; Hui Li, "Protecting Your Right: Attribute-Based Keyword Search With Fine-Grained Owner-Enforced Search Authorization In The Cloud," INFOCOM, 2014 Proceedings IEEE , vol., no., pp.226,234, April 27 2014-May 2, 2014. doi: 10.1109/INFOCOM.2014.6847943 Search over encrypted data is a critically important enabling technique in cloud computing, where encryption-before-outsourcing is a fundamental solution to protecting user data privacy in the untrusted cloud server environment. Many secure search schemes have been focusing on the single-contributor scenario, where the outsourced dataset or the secure searchable index of the dataset are encrypted and managed by a single owner, typically based on symmetric cryptography. In this paper, we focus on a different yet more challenging scenario where the outsourced dataset can be contributed from multiple owners and are searchable by multiple users, i.e. multi-user multi-contributor case. Inspired by attribute-based encryption (ABE), we present the first attribute-based keyword search scheme with efficient user revocation (ABKS-UR) that enables scalable fine-grained (i.e. file-level) search authorization. Our scheme allows multiple owners to encrypt and outsource their data to the cloud server independently. Users can generate their own search capabilities without relying on an always online trusted authority. Fine-grained search authorization is also implemented by the owner-enforced access policy on the index of each file. Further, by incorporating proxy re-encryption and lazy re-encryption techniques, we are able to delegate heavy system update workload during user revocation to the resourceful semi-trusted cloud server. We formalize the security definition and prove the proposed ABKS-UR scheme selectively secure against chosen-keyword attack. Finally, performance evaluation shows the efficiency of our scheme. 
Keywords: authorisation; cloud computing; cryptography; data privacy; information retrieval; trusted computing; ABE; ABKS-UR scheme; always online trusted authority; attribute-based encryption; attribute-based keyword search; chosen-keyword attack; cloud computing; cloud server environment; data privacy; encryption; encryption-before-outsourcing; fine-grained owner-enforced search authorization; lazy re-encryption technique; owner-enforced access policy; proxy re-encryption technique; resourceful semi-trusted cloud server; searchable index; security definition; single-contributor search scenario; symmetric cryptography; user revocation; Authorization; Data privacy; Encryption; Indexes; Keyword search; Servers (ID#:14-2340) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847943&isnumber=6847911
- Areed, N.F.F.; Obayya, S.S.A, "Multiple Image Encryption System Based on Nematic Liquid Photonic Crystal Layers," Lightwave Technology, Journal of, vol.32, no.7, pp.1344,1350, April 1, 2014. doi: 10.1109/JLT.2014.2300553 A novel design for a multiple symmetric image encryption system based on phase encoding is presented. The proposed encryptor utilizes a photonic bandgap (PBG) block in order to ensure high reflectivity over a relatively wide frequency range of interest. Also, the proposed encryptor can be utilized to encrypt two images simultaneously through the use of two nematic liquid crystal (NLC) layers across the PBG block. The whole system has been simulated numerically using the rigorous finite difference time domain method. To describe the robustness of the encryption, the root mean square error and the signal-to-noise ratio are calculated. The statistical analysis of the retrieved images shows that the proposed image encryption system provides an efficient and secure way for real-time image encryption and transmission. In addition, as the proposed system offers a number of advantages over existing systems, such as simple design, symmetry allowing an integrated encryptor/decryptor system, ultra-high bandwidth, and encrypting two images at the same time, it can be suitably exploited in optical imaging system applications.
Keywords: cryptography; finite difference time-domain analysis; image processing; nematic liquid crystals; photonic crystals; reflectivity; statistical analysis; finite difference time domain method; multiple image encryption system; nematic liquid photonic crystal layers; photonic bandgap block; reflectivity; root mean square error; signal to noise ratio; statistical analysis; Encryption; Histograms; Laser beams; Optical imaging; Optical reflection; Photonic crystals; Encryption; finite difference time domain (FDTD); liquid crystal (LC); photonic crystal (PhC) (ID#:14-2341) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6712899&isnumber=6740872
Information Assurance
The term "Information Assurance" was adopted in the late 1990s to cover what is now often referred to generically as "cybersecurity." Many still use the phrase, particularly in the U.S. government, for both teaching and research. Since it is a rather generic phrase, it covers a wide range of topics. The articles cited here, from January to September of 2014, cover topics related both to technology and pedagogy.
- Xiaohong Yuan; Williams, K.; Huiming Yu; Bei-Tseng Chu; Rorrer, A; Li Yang; Winters, K.; Kizza, J., "Developing Faculty Expertise in Information Assurance through Case Studies and Hands-On Experiences," System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp.4938,4945, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.606 Though many Information Assurance (IA) educators agree that hands-on exercises and case studies improve student learning, hands-on exercises and case studies are not widely adopted due to the time needed to develop them and integrate them into curriculum. Under the support of National Science Foundation (NSF) Scholarship for Service program, we implemented two faculty development workshops to disseminate effective hands-on exercises and case studies developed through multiple previous and ongoing grants, and to develop faculty expertise in IA. This paper reports our experience of holding the faculty summer workshops on teaching information assurance through case studies and hands-on experiences. The topics presented at the workshops are briefly described and the evaluation results of the workshops are discussed. The workshops provided a valuable opportunity for IA educators to connect with each other and form collaboration in teaching and research in IA. Keywords: computer science education; continuing professional development; teacher training; teaching; IA educators; NSF Scholarship for Service program; National Science Foundation Scholarship for Service program; case studies; curriculum; faculty development workshops; faculty expertise; faculty summer workshops; hands-on exercises; hands-on experiences; information assurance educators; student learning; teaching; Access control; Authentication; Conferences; Cryptography; Educational institutions (ID#:14-2342) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759209&isnumber=6758592
- Romero-Mariona, J., "DITEC (DoD-Centric and Independent Technology Evaluation Capability): A Process for Testing Security," Software Testing, Verification and Validation Workshops (ICSTW), 2014 IEEE Seventh International Conference on, vol., no., pp.24,25, March 31 2014-April 4 2014. doi: 10.1109/ICSTW.2014.52 Information Assurance (IA) is one of the Department of Defense's (DoD) top priorities today. IA technologies are constantly evolving to protect critical information from the growing number of cyber threats. Furthermore, DoD spends millions of dollars each year procuring, maintaining, and discontinuing various IA and Cyber technologies. Today, there is no process or standardized method for making informed decisions about which IA technologies are best. Because of this, efforts to select technologies go through very disparate evaluations that are often non-repeatable and very subjective. DITEC (DoD-centric and Independent Technology Evaluation Capability) is a new capability that streamlines IA technology evaluation. DITEC defines a Process for evaluating whether or not a product meets DoD needs, Security Metrics for measuring how well needs are met, and a Framework for comparing various products that address the same IA technology area. DITEC seeks to reduce the time and cost of creating a test plan and expedite the test and evaluation effort for considering new IA technologies, consequently streamlining the deployment of IA products across DoD and increasing the potential to meet its needs.
Keywords: data protection; decision making; military computing; security of data; DITEC; Department of Defense; DoD-centric and independent technology evaluation capability; IA technologies; critical information protection; cyber technologies; cyber threats; information assurance; informed decision making; security metrics; security testing process; Computer security; Conferences; Measurement; US Department of Defense; Usability; Decision-making Support; Evaluation; Information Assurance; Security; Security Metrics (ID#:14-2343) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825634&isnumber=6825623
- Schumann, M.A; Drusinsky, D.; Michael, J.B.; Wijesekera, D., "Modeling Human-in-the-Loop Security Analysis and Decision-Making Processes," Software Engineering, IEEE Transactions on, vol.40, no.2, pp.154,166, Feb. 2014. doi: 10.1109/TSE.2014.2302433 This paper presents a novel application of computer-assisted formal methods for systematically specifying, documenting, statically and dynamically checking, and maintaining human-centered workflow processes. This approach provides for end-to-end verification and validation of process workflows, which is needed for process workflows that are intended for use in developing and maintaining high-integrity systems. We demonstrate the technical feasibility of our approach by applying it on the development of the US government's process workflow for implementing, certifying, and accrediting cross-domain computer security solutions. Our approach involves identifying human-in-the-loop decision points in the process activities and then modeling these via statechart assertions. We developed techniques to specify and enforce workflow hierarchies, which was a challenge due to the existence of concurrent activities within complex workflow processes. Some of the key advantages of our approach are: it results in development of a model that is executable, supporting both upfront and runtime checking of process-workflow requirements; aids comprehension and communication among stakeholders and process engineers; and provides for incorporating accountability and risk management into the engineering of process workflows. 
Keywords: decision making; formal specification; formal verification; government data processing; security of data; workflow management software; US government process workflow; United States; accountability; computer-assisted formal methods; cross-domain computer security solutions; decision-making process; end-to-end validation; end-to-end verification; high-integrity systems; human-centered workflow process; human-in-the-loop decision points; human-in-the-loop security analysis; process activities; process documentation; process dynamically checking; process maintenance; process specification; process statically checking; process workflows engineering; risk management; statechart assertions; workflow hierarchies; Analytical models; Business; Formal specifications; Object oriented modeling; Runtime; Software; Unified modeling language; Formal methods; information assurance; process modeling; software engineering; statechart assertions; verification and validation (ID#:14-2344) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6727512&isnumber=6755497
- Hershey, P.C.; Rao, S.; Silio, C.B.; Narayan, A, "System of Systems for Quality-of-Service Observation and Response in Cloud Computing Environments," Systems Journal, IEEE, vol. PP, no.99, pp.1, 11, January 2014. doi: 10.1109/JSYST.2013.2295961 As military, academic, and commercial computing systems evolve from autonomous entities that deliver computing products into network centric enterprise systems that deliver computing as a service, opportunities emerge to consolidate computing resources, software, and information through cloud computing. Along with these opportunities come challenges, particularly to service providers and operations centers that struggle to monitor and manage quality of service (QoS) for these services in order to meet customer service commitments. Traditional approaches fall short in addressing these challenges because they examine QoS from a limited perspective rather than from a system-of-systems (SoS) perspective applicable to a net-centric enterprise system in which any user from any location can share computing resources at any time. This paper presents a SoS approach to enable QoS monitoring, management, and response for enterprise systems that deliver computing as a service through a cloud computing environment. A concrete example is provided for application of this new SoS approach to a real-world scenario (viz., distributed denial of service). Simulated results confirm the efficacy of the approach. Keywords: Cloud computing; Delays; Monitoring; Quality of service; Security; Cloud computing; distributed denial of service (DDoS);enterprise systems; information assurance; net centric; quality of service (QoS); security; service-oriented architecture (SOA); systems of systems (SoS) (ID#:14-2345) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6729062&isnumber=4357939
- Kowtko, M.A, "Biometric Authentication For Older Adults," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island, pp.1,6, 2-2 May 2014. doi: 10.1109/LISAT.2014.6845213 In recent times, cyber-attacks and cyber warfare have threatened network infrastructures from across the globe. The world has reacted by increasing security measures through the use of stronger passwords, strict access control lists, and new authentication means; however, while these measures are designed to improve security and Information Assurance (IA), they may create accessibility challenges for older adults and people with disabilities. Studies have shown the memory performance of older adults declines with age. Therefore, it becomes increasingly difficult for older adults to remember random strings of characters or passwords of 12 or more characters. How are older adults challenged by security measures (passwords, CAPTCHA, etc.) and how does this affect their accessibility to engage in online activities or with mobile platforms? While username/password authentication, CAPTCHA, and security questions do provide adequate protection, they are still vulnerable to cyber-attacks. Passwords can be compromised by brute-force, dictionary, and social-engineering style attacks. CAPTCHA, a type of challenge-response test, was developed to ensure that user inputs were not manipulated by machine-based attacks. Unfortunately, CAPTCHAs are now being circumvented through new vulnerabilities and exploits. Insecure implementations through code or server interaction have circumvented CAPTCHA. New viruses and malware now utilize character recognition as a means to circumvent CAPTCHA [1]. Security questions, another challenge-response test that attempts to authenticate users, can also be compromised through social engineering attacks and spyware.
Since these common security measures are increasingly being compromised, many security professionals are turning towards biometric authentication. Biometric authentication is any form of human biological measurement or metric that can be used to identify and authenticate an authorized user of a secure system. Biometric authentication can include fingerprint, voice, iris, facial, keystroke, and hand geometry [2]. Biometric authentication is also less affected by traditional cyber-attacks. However, is biometric authentication completely secure? This research will examine the security challenges and attacks that may risk the security of biometric authentication. Recently, medical professionals in the TeleHealth industry have begun to investigate the effectiveness of biometrics. In the United States alone, the population of older adults has increased significantly, with nearly 10,000 adults per day reaching the age of 65 and older [3]. Although people are living longer, that does not mean that they are living healthier. Studies have shown the U.S. healthcare system is being inundated by older adults. As security within the healthcare industry increases, many believe that biometric authentication is the answer. However, there are potential problems, especially in the older adult population. The largest problem is authentication of older adults with medical complications. Cataracts, stroke, congestive heart failure, hard veins, and other ailments may challenge biometric authentication. Since biometrics often rely on metrics and measurements of biological features, any one of these conditions, and more, could potentially affect the verification of users. This research will analyze older adults and the impact of biometric authentication on the verification process.
Keywords: authorisation; biometrics (access control); invasive software; medical administrative data processing; mobile computing; CAPTCHA; Cataracts; IA; TeleHealth industry; US healthcare system; access control lists; authentication means; biometric authentication; challenge-response test; congestive heart failure; cyber warfare; cyber-attacks; dictionary; hard veins; healthcare industry; information assurance; machine-based attacks; medical professionals; mobile platforms; network infrastructures; older adults; online activities; security measures; security professionals; social engineering style attacks; spyware; stroke; username-password authentication; Authentication; Barium; CAPTCHAs; Computers; Heart; Iris recognition; Biometric Authentication; CAPTCHA; Cyber-attacks; Information Security; Older Adults; Telehealth (ID#:14-2346) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845213&isnumber=6845183
- Yier Jin, "EDA Tools Trust Evaluation Through Security Property Proofs," Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, pp.1,4, 24-28 March 2014. doi: 10.7873/DATE.2014.260 The security concerns of EDA tools have long been ignored because IC designers and integrators only focus on their functionality and performance. This lack of trusted EDA tools hampers hardware security researchers' efforts to design trusted integrated circuits. To address this concern, a novel EDA tools trust evaluation framework has been proposed to ensure the trustworthiness of EDA tools through its functional operation, rather than scrutinizing the software code. As a result, the newly proposed framework lowers the evaluation cost and is a better fit for hardware security researchers. To support the EDA tools evaluation framework, a new gate-level information assurance scheme is developed for security property checking on any gate-level netlist. Helped by the gate-level scheme, we expand the territory of proof-carrying based IP protection from RT-level designs to gate-level netlists, so that most of the commercially trading third-party IP cores are under the protection of proof-carrying based security properties. Using a sample AES encryption core, we successfully prove the trustworthiness of Synopsys Design Compiler in generating a synthesized netlist. Keywords: cryptography; electronic design automation; integrated circuit design; AES encryption core; EDA tools trust evaluation; Synopsys design compiler; functional operation; gate-level information assurance scheme; gate-level netlist; hardware security researchers; proof-carrying based IP protection; security property proofs; software code; third-party IP cores; trusted integrated circuits; Hardware; IP networks; Integrated circuits; Logic gates; Sensitivity; Trojan horses (ID#:14-2347) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6800461&isnumber=6800201
- Whitmore, J.; Turpe, S.; Triller, S.; Poller, A; Carlson, C., "Threat Analysis In The Software Development Lifecycle," IBM Journal of Research and Development, vol.58, no.1, pp.6:1, 6:13, Jan.-Feb. 2014. doi: 10.1147/JRD.2013.2288060 Businesses and governments that deploy and operate IT (information technology) systems continue to seek assurance that software they procure has the security characteristics they expect. The criteria used to evaluate the security of software are expanding from static sets of functional and assurance requirements to complex sets of evidence related to development practices for design, coding, testing, and support, plus consideration of security in the supply chain. To meet these evolving expectations, creators of software are faced with the challenge of consistently and continuously applying the most current knowledge about risks, threats, and weaknesses to their existing and new software assets. Yet the practice of threat analysis remains an art form that is highly subjective and reserved for a small community of security experts. This paper reviews the findings of an IBM-sponsored project with the Fraunhofer Institute for Secure Information Technology (SIT) and the Technische Universitat Darmstadt. This project investigated aspects of security in software development, including practical methods for threat analysis. The project also examined existing methods and tools, assessing their efficacy for software development within an open-source software supply chain. These efforts yielded valuable insights plus an automated tool and knowledge base that has the potential for overcoming some of the current limitations of secure development on a large scale. 
Keywords: Analytical models; Business; Computer security; Encoding; Government; Information technology; Software development; information assurance (ID#:14-2348) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6717070&isnumber=6717043
- Beato, F.; Peeters, R., "Collaborative Joint Content Sharing For Online Social Networks," Pervasive Computing and Communications Workshops (PERCOM Workshops), 2014 IEEE International Conference on, vol., no., pp.616,621, 24-28 March 2014. doi: 10.1109/PerComW.2014.6815277 Online social networks' (OSNs) epic popularity has accustomed users to the ease of sharing information. At the same time, OSNs have been a focus of privacy concerns with respect to the information shared. Therefore, it is important that users have some assurance when sharing on OSNs: popular OSNs provide users with mechanisms to protect shared information access rights. However, these mechanisms do not allow collaboration when defining access rights for joint content related to more than one user (e.g., party pictures in which different users are being tagged). In fact, the access rights list for such content is represented by the union of the access lists defined by each related user, which could result in unwanted leakage. We propose a collaborative access control scheme, based on secret sharing, in which sharing of content on OSNs is decided collaboratively by a number of related users. We demonstrate that such a mechanism is feasible and benefits users' privacy. Keywords: authorisation; data privacy; groupware; social networking (online); OSN; access rights list; collaborative access control scheme; collaborative joint content sharing; information sharing; online social networks; privacy concerns; secret sharing; unwanted leakage; user privacy; Access control; Collaboration; Encryption; Joints; Privacy (ID#:14-2349) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6815277&isnumber=6815123
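The collaborative-consent idea in the Beato and Peeters citation above can be illustrated with a minimal n-of-n secret-sharing sketch: the key protecting a jointly owned item is split so that every tagged user must contribute a share before the item can be decrypted. The XOR-based split below is only the simplest possible variant; the paper's actual construction (e.g., a threshold scheme) may differ.

```python
import secrets

def xor_bytes(a, b):
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key, n):
    """Split `key` into n shares; XORing all n shares restores the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = xor_bytes(last, s)  # final share = key XOR all random shares
    shares.append(last)
    return shares

def combine(shares):
    """Recombine shares; correct only when *every* share is present."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = xor_bytes(out, s)
    return out

# All three tagged users consent -> key recovered; any one missing -> no key.
key = secrets.token_bytes(16)
shares = split_key(key, 3)
assert combine(shares) == key
```

With such a split, publishing joint content effectively requires unanimous consent of the related users, which is the privacy property the cited scheme aims to provide.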
- Adjei, J.K., "Explaining the Role of Trust in Cloud Service Acquisition," Mobile Cloud Computing, Services, and Engineering (MobileCloud), 2014 2nd IEEE International Conference on, pp.283,288, 8-11 April 2014. doi: 10.1109/MobileCloud.2014.48 An effective digital identity management system is a critical enabler of cloud computing, since it supports the provision of the required assurances to the transacting parties. Such assurances sometimes require the disclosure of sensitive personal information. Given the prevalence of various forms of identity abuse on the Internet, a re-examination of the factors underlying cloud service acquisition has become critical and imperative. In order to provide better assurances, parties to cloud transactions must have confidence in service providers' ability and integrity in protecting their interests and personal information. Thus a trusted cloud identity ecosystem could promote such user confidence and assurances. Using a qualitative research approach, this paper explains the role of trust in cloud service acquisition by organizations. The paper focuses on the processes of acquisition of cloud services by financial institutions in Ghana. The study forms part of a comprehensive study on the monetization of personal identity information. Keywords: cloud computing; data protection; trusted computing; Ghana; Internet; cloud computing; cloud services acquisition; cloud transactions; digital identity management system; financial institutions; identity abuses; interest protection; organizations; personal identity information; sensitive personal information; service provider ability; service provider integrity; transacting parties; trusted cloud identity ecosystem; user assurances; user confidence; Banking; Cloud computing; Context; Law; Organizations; Privacy; cloud computing; information privacy; mediating; trust (ID#:14-2350) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6834977&isnumber=6823830
- Kekkonen, T.; Kanstren, T.; Hatonen, K., "Towards Trusted Environment in Cloud Monitoring," Information Technology: New Generations (ITNG), 2014 11th International Conference on, pp.180,185, 7-9 April 2014. doi: 10.1109/ITNG.2014.104 This paper investigates the problem of providing trusted monitoring information on a cloud environment to the cloud customers. The general trust between customer and provider is taken as a starting point. The paper discusses possible methods to strengthen this trust. It focuses on establishing a chain of trust inside the provider infrastructure to supply monitoring data for the customer. The goal is to enable delivery of state and event information to parties outside the cloud infrastructure. The current technologies and research are reviewed for the solution and the usage scenario is presented. Based on such technology, higher assurance of the cloud can be presented to the customer. This allows customers with high security requirements and responsibilities to have more confidence in accepting the cloud as their platform of choice. Keywords: cloud computing; security of data; trusted computing; cloud customers; cloud monitoring; cloud service provider infrastructure; monitoring data; security requirements; trusted environment; trusted monitoring information; Hardware; Monitoring; Operating systems; Probes; Registers; Security; Virtual machining; TPM; cloud; integrity measurement; remote attestation; security concerns; security measurement (ID#:14-2351) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6822195&isnumber=6822158
- Dubrova, E.; Naslund, M.; Selander, G., "Secure and Efficient LBIST For Feedback Shift Register-Based Cryptographic Systems," Test Symposium (ETS), 2014 19th IEEE European, pp.1,6, 26-30 May 2014. doi: 10.1109/ETS.2014.6847821 Cryptographic methods are used to protect confidential information against unauthorised modification or disclosure. Cryptographic algorithms providing high assurance exist, e.g. AES. However, many open problems related to assuring the security of a hardware implementation of a cryptographic algorithm remain. Security of a hardware implementation can be compromised by a random fault or a deliberate attack. The traditional testing methods are good at detecting random faults, but they do not provide adequate protection against malicious alterations of a circuit known as hardware Trojans. For example, a recent attack on Intel's Ivy Bridge processor demonstrated that the traditional Logic Built-In Self-Test (LBIST) may fail even in the simple case of stuck-at-fault-type Trojans. In this paper, we present a novel LBIST method for Feedback Shift Register (FSR)-based cryptographic systems which can detect such Trojans. The specific properties of FSR-based cryptographic systems allow us to reach 100% single stuck-at fault coverage with a small set of deterministic tests. The test execution time of the proposed method is at least two orders of magnitude shorter than that of pseudo-random pattern-based LBIST. Our results enable an efficient protection of FSR-based cryptographic systems from random and malicious stuck-at faults.
Keywords: cryptography; logic testing; shift registers; FSR-based cryptographic systems; Ivy Bridge processor; LBIST method; confidential information protection; cryptographic algorithms; cryptographic methods; deliberate attack; feedback shift register-based cryptographic systems; hardware Trojans; logic built-in self-test; random fault attack; stuck-at fault coverage; Boolean functions; Circuit faults; Clocks; Cryptography; Logic gates; Trojan horses; Vectors (ID#:14-2352) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847821&isnumber=6847779
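As background for the Dubrova et al. entry above, the sketch below models the kind of fault the paper targets: a feedback shift register with one stage stuck at a constant value, detected by comparing its output stream against the fault-free circuit under a short deterministic test run. The register size, tap positions, and seed are illustrative assumptions, not values from the paper.

```python
def lfsr_run(state, taps, steps, stuck_at=None):
    """Run a Fibonacci LFSR for `steps` cycles and return its output bits.

    `stuck_at`, if given, is an (index, value) pair forcing one stage to a
    constant after every shift, modeling a stuck-at hardware fault."""
    state = list(state)
    out = []
    for _ in range(steps):
        out.append(state[-1])          # output bit = last stage
        fb = 0
        for t in taps:
            fb ^= state[t]             # XOR feedback from the tap stages
        state = [fb] + state[:-1]      # shift right, insert feedback bit
        if stuck_at is not None:
            idx, val = stuck_at
            state[idx] = val           # the faulty stage never changes
    return out

# A deterministic test: run the fault-free and faulty registers on the
# same seed and compare output streams; divergence exposes the fault.
seed = [1, 0, 0, 1, 0, 1, 1, 0]
good = lfsr_run(seed, taps=[0, 2, 3, 7], steps=32)
bad = lfsr_run(seed, taps=[0, 2, 3, 7], steps=32, stuck_at=(3, 0))
assert good != bad
```

The cited work's contribution is choosing a small deterministic test set that guarantees this divergence for every possible single stuck-at fault, which pseudo-random LBIST patterns cannot promise.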
- Zlomislic, Vinko; Fertalj, Kresimir; Sruk, Vlado, "Denial of Service Attacks: An Overview," Information Systems and Technologies (CISTI), 2014 9th Iberian Conference on, vol., no., pp.1,6, 18-21 June 2014. doi: 10.1109/CISTI.2014.6876979 Denial of service (DoS) attacks present one of the most significant threats to assurance of dependable and secure information systems. Rapid development of new and increasingly sophisticated attacks requires resourcefulness in designing and implementing reliable defenses. This paper presents an overview of current DoS attack and defense concepts, from a theoretical and practical point of view. Considering the elaborated DoS mechanisms, main directions are proposed for future research required in defending against the evolving threat. Keywords: Computer crime; Filtering; Floods; Protocols; Reliability; Servers; DDoS; Denial of Service; Denial of Sustainability; DoS; Network Security; System Security (ID#:14-2353) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6876979&isnumber=6876860
- Almohri, H.M.J.; Danfeng Yao; Kafura, D., "Process Authentication for High System Assurance," Dependable and Secure Computing, IEEE Transactions on, vol.11, no.2, pp.168,180, March-April 2014. doi: 10.1109/TDSC.2013.29 This paper points out the need in modern operating system kernels for a process authentication mechanism, where a process of a user-level application proves its identity to the kernel. Process authentication is different from process identification. Identification is a way to describe a principal; PIDs or process names are identifiers for processes in an OS environment. However, the information such as process names or executable paths that is conventionally used by OS to identify a process is not reliable. As a result, malware may impersonate other processes, thus violating system assurance. We propose a lightweight secure application authentication framework in which user-level applications are required to present proofs at runtime to be authenticated to the kernel. To demonstrate the application of process authentication, we develop a system call monitoring framework for preventing unauthorized use or access of system resources. It verifies the identity of processes before completing the requested system calls. We implement and evaluate a prototype of our monitoring architecture in Linux. The results from our extensive performance evaluation show that our prototype incurs reasonably low overhead, indicating the feasibility of our approach for cryptographically authenticating applications and their processes in the operating system. 
Keywords: Linux; authorization; cryptography; operating system kernels; software architecture; software performance evaluation; system monitoring; Linux; cryptographic authenticating applications; high system assurance; modern operating system kernels; monitoring architecture; performance evaluation; process authentication mechanism; process identification; requested system calls; secure application authentication framework; system call monitoring framework; unauthorized system resource access prevention; unauthorized system resource use prevention; user-level application; Authentication; Kernel; Malware; Monitoring; Runtime; Operating system security; process authentication; secret application credential; system call monitoring (ID#:14-2354) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6560050&isnumber=6785951
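The process-authentication idea in the Almohri et al. entry above can be sketched as a challenge-response exchange: a verifier (standing in for the kernel) holds a registered secret application credential and demands a fresh proof of identity before honoring a sensitive request, so a process name alone is never trusted. The class names and HMAC-based flow below are illustrative assumptions; the actual framework operates inside the OS kernel.

```python
import hashlib
import hmac
import secrets

class KernelVerifier:
    """Toy stand-in for a kernel-side process authentication check."""

    def __init__(self):
        self._credentials = {}          # application name -> registered secret

    def register(self, app, secret):
        """Record the secret credential for an authorized application."""
        self._credentials[app] = secret

    def challenge(self):
        """Issue a fresh nonce, preventing replay of old proofs."""
        return secrets.token_bytes(16)

    def verify(self, app, nonce, proof):
        """Accept the request only if the proof matches the credential."""
        secret = self._credentials.get(app)
        if secret is None:
            return False
        expected = hmac.new(secret, nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, proof)

def process_proof(secret, nonce):
    """What an authenticated process computes over the kernel's nonce."""
    return hmac.new(secret, nonce, hashlib.sha256).digest()

kernel = KernelVerifier()
cred = secrets.token_bytes(32)
kernel.register("backup-daemon", cred)

nonce = kernel.challenge()
assert kernel.verify("backup-daemon", nonce, process_proof(cred, nonce))
# An impersonating process without the credential is rejected:
assert not kernel.verify("backup-daemon", nonce, process_proof(b"wrong", nonce))
```

This captures the paper's distinction between identification (a PID or name, which malware can fake) and authentication (possession of a secret credential proven at runtime).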
- Xixiang Lv; Yi Mu; Hui Li, "Non-Interactive Key Establishment for Bundle Security Protocol of Space DTNs," Information Forensics and Security, IEEE Transactions on, vol.9, no.1, pp.5,13, Jan. 2014. doi: 10.1109/TIFS.2013.2289993 To ensure the authenticity, integrity, and confidentiality of bundles, the in-transit Protocol Data Units of the bundle protocol (BP) in space delay/disruption tolerant networks (DTNs), the Consultative Committee for Space Data Systems bundle security protocol (BSP) specification suggests four IPsec-style security headers to provide four aspects of security services. However, this specification leaves key management as an open problem. Aiming to address the key establishment issue for BP, in this paper we utilize a time-evolving topology model and two-channel cryptography to design an efficient and noninteractive key exchange protocol. A time-evolving model is used to formally model the periodic and predetermined behavior patterns of space DTNs, and therefore a node can schedule when and to whom it should send its public key. Meanwhile, the application of two-channel cryptography enables DTN nodes to exchange their public keys or revocation status information, with authentication assurance and in a noninteractive manner. The proposed scheme helps to establish a secure context to support BSP, tolerating the high delays and unexpected loss of connectivity of space DTNs.
Keywords: cryptographic protocols; delay tolerant networks; space communication links; telecommunication channels; telecommunication security; BSP specification; DTN nodes; IPsec style security headers; authentication assurance; authenticity; bundle security protocol; connectivity loss; consultative committee; delay-disruption tolerant networks; in-transit protocol data units; noninteractive key establishment; noninteractive key exchange protocol; noninteractive manner; revocation status information; security services; space DTN; space data systems bundle security protocol; time-evolving model; time-evolving topology model; two-channel cryptography; Authentication; Delays; Message authentication; Protocols; Public key; Space-based delay tolerant networks; bundle authentication; key establishment (ID#:14-2355) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6657823&isnumber=6684617
Insider Threat
The insider threat continues to grow, and with it the need to develop technical solutions to the problem. Through August of 2014, however, little original scholarship has been published about research being conducted in this important area. The half dozen articles cited here are all of the works found in the academic literature for the year.
- Szott, S., "Selfish Insider Attacks In IEEE 802.11s Wireless Mesh Networks," Communications Magazine, IEEE, vol.52, no.6, pp.227,233, June 2014. doi: 10.1109/MCOM.2014.6829968 The IEEE 802.11s amendment for wireless mesh networks does not provide incentives for stations to cooperate and is particularly vulnerable to selfish insider attacks in which a legitimate network participant hopes to increase its QoS at the expense of others. In this tutorial we describe various attacks that can be executed against 802.11s networks: we analyze existing attacks and identify new ones. We also discuss possible countermeasures and detection methods and attempt to quantify the threat of the attacks to determine which of the 802.11s vulnerabilities need to be secured with the highest priority. Keywords: telecommunication security; wireless LAN; wireless mesh networks; IEEE 802.11s wireless mesh networks; selfish insider attacks; Ad hoc networks; IEEE 802.11 Standards; Logic gates; Protocols; Quality of service; Routing; Wireless mesh networks (ID#:14-2356) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6829968&isnumber=6829933
- Flores, D.A, "An Authentication And Auditing Architecture For Enhancing Security On Egovernment Services," eDemocracy & eGovernment (ICEDEG), 2014 First International Conference on , vol., no., pp.73,76, 24-25 April 2014. doi: 10.1109/ICEDEG.2014.6819952 eGovernment deploys governmental information and services for citizens and general society. As the Internet is used as the underlying platform for information exchange, these services are exposed to data tampering and unauthorised access as main threats against citizen privacy. These issues have usually been tackled by applying controls at application level, making authentication stronger and protecting credentials in transit using digital certificates. However, these efforts to enhance security on governmental web sites have been focused only on what malicious users can do from the outside, and not on what insiders can do to alter data directly in the databases. In fact, the lack of security controls at back-end level hinders every effort to find evidence and investigate events related to credential misuse and data tampering. Moreover, even though attackers can be found and prosecuted, there are no evidence or audit trails in the databases to link illegal activities with identities. In this article, a Salting-Based Authentication Module and a Database Intrusion Detection Module are proposed as enhancements to eGovernment security to provide better authentication and auditing controls. 
Keywords: Internet; Web sites; access control; digital signatures; government data processing; information systems; public administration; security of data; Internet platform; auditing control; citizen privacy; data tampering; database intrusion detection module; digital certificates; eGovernment security enhancement; eGovernment services; governmental Web sites; governmental information deployment; salting-based authentication module; unauthorised access; Access control; Authentication; Databases; Intrusion detection; Servers; Web sites; architecture; auditing; authentication; database; eGovernment; intrusion detection; log; salting (ID#:14-2357) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6819952&isnumber=6819917
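The abstract does not detail Flores's Salting-Based Authentication Module, but the core idea of salted credential storage can be sketched as follows. This is a minimal illustration with hypothetical function names, not the paper's implementation; a production system would layer a slow key-derivation function such as PBKDF2 on top of the salting idea shown here.

```python
import hashlib
import hmac
import secrets

def hash_password(password: str, salt: bytes) -> bytes:
    """Store hash(salt || password); a fresh per-user salt defeats
    precomputed rainbow tables and hides duplicate passwords."""
    return hashlib.sha256(salt + password.encode("utf-8")).digest()

def new_credential(password: str):
    """Create a (salt, digest) pair to persist instead of the password."""
    salt = secrets.token_bytes(16)
    return salt, hash_password(password, salt)

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(hash_password(password, salt), stored)
```

Because each user gets a random salt, an insider who dumps the credential table cannot reuse one precomputed dictionary across all accounts.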
- Greitzer, F.L.; Strozer, J.; Cohen, S.; Bergey, J.; Cowley, J.; Moore, A; Mundie, D., "Unintentional Insider Threat: Contributing Factors, Observables, and Mitigation Strategies," System Sciences (HICSS), 2014 47th Hawaii International Conference on , vol., no., pp.2025,2034, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.256 Organizations often suffer harm from individuals who bear them no malice but whose actions unintentionally expose the organizations to risk in some way. This paper examines initial findings from research on such cases, referred to as unintentional insider threat (UIT). The goal of this paper is to inform government and industry stakeholders about the problem and its possible causes and mitigation strategies. As an initial approach to addressing the problem, we developed an operational definition for UIT, reviewed research relevant to possible causes and contributing factors, and provided examples of UIT cases and their frequencies across several categories. We conclude the paper by discussing initial recommendations on mitigation strategies and countermeasures. Keywords: organisational aspects; security of data; UIT; contributing factors; government; industry stakeholders; mitigation strategy; organizations; unintentional insider threat; Electronic mail; Human factors; Law; Organizations; Security; Stress; Contributing; Definition; Ethical; Factors; Feature; Human; Insider; Legal; Mitigation; Model; Organizational; Observables; Psychosocial; Strategies; Threat; Unintentional; demographic (ID#:14-2358) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6758854&isnumber=6758592
- Yi-Lu Wang; Sang-Chin Yang, "A Method of Evaluation for Insider Threat," Computer, Consumer and Control (IS3C), 2014 International Symposium on , vol., no., pp.438,441, 10-12 June 2014. doi: 10.1109/IS3C.2014.121 Cyber security is an important issue in cloud computing, and the insider threat is an increasingly important and much more complex part of it. Yet there is still no equivalent to a vulnerability scanner for the insider threat. We survey and discuss the history of research on insider threat analysis, concluding that system dynamics is the best method to mitigate insider threat across people, process, and technology. In this paper, we present a system dynamics method to model insider threat, and we offer conclusions for future researchers interested in the insider threat issue. Keywords: cloud computing; security of data; cloud computing; cyber security; insider threat analysis; insider threat evaluation; insider threat mitigation; vulnerability scanner; Analytical models; Computer crime; Computers; Educational institutions; Organizations; Insider threat; System Dynamic (ID#:14-2359) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845913&isnumber=6845429
- Gritzalis, D.; Stavrou, V.; Kandias, M.; Stergiopoulos, G., "Insider Threat: Enhancing BPM through Social Media," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on , vol., no., pp.1,6, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814027 Modern business environments have a constant need to increase their productivity, reduce costs and offer competitive products and services. This can be achieved via modeling their business processes. Yet, even in light of modelling's widespread success, one can argue that it lacks built-in security mechanisms able to detect and fight threats that may manifest throughout the process. Academic research has proposed a variety of different solutions which focus on different kinds of threat. In this paper we focus on insider threat, i.e. insiders participating in an organization's business process, who, depending on their motives, may cause severe harm to the organization. We examine existing security approaches to tackle the aforementioned threat in enterprise business processes. We discuss their pros and cons and propose a monitoring approach that aims at mitigating the insider threat. This approach enhances business process monitoring tools with information evaluated from Social Media. It examines the online behavior of users and pinpoints potential insiders with critical roles in the organization's processes. We conclude with some observations on the monitoring results (i.e. psychometric evaluations from the social media analysis) concerning privacy violations and argue that deployment of such systems should only be allowed in exceptional cases, such as protecting critical infrastructures. 
Keywords: business data processing; organisational aspects; process monitoring; social networking (online); BPM enhancement; built-in security mechanism; business process monitoring tools; cost reduction; enterprise business processes; insider threat; organization business process management; privacy violations; social media; Media; Monitoring; Organizations; Privacy; Security; Unified modeling language (ID#:14-2360) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814027&isnumber=6813963
- Kajtazi, M.; Bulgurcu, B.; Cavusoglu, H.; Benbasat, I, "Assessing Sunk Cost Effect on Employees' Intentions to Violate Information Security Policies in Organizations," System Sciences (HICSS), 2014 47th Hawaii International Conference on, vol., no., pp.3169,3177, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.393 It has been widely known that employees pose insider threats to the information and technology resources of an organization. In this paper, we develop a model to explain insiders' intentional violation of the requirements of an information security policy. We propose sunk cost as a mediating factor. We test our research model on data collected from three information-intensive organizations in the banking and pharmaceutical industries (n=502). Our results show that sunk cost acts as a mediator between the proposed antecedents of sunk cost (i.e., completion effect and goal incongruency) and intentions to violate the ISP. We discuss the implications of our results for developing theory and for re-designing current security agendas that could help improve compliance behavior in the future. Keywords: organisational aspects; personnel; security of data; ISP; banking; compliance behavior; employees' intentions; information security policy; information-intensive organizations; insider intentional violation; mediating factor; pharmaceutical industries; sunk cost effect assessment; technology resources; Educational institutions; Information security; Mathematical model; Organizations; Pharmaceuticals; Reliability; completion effect; goal incongruency; information security violation; insider threats; sunk cost (ID#:14-2361) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6758995&isnumber=6758592
Lightweight Cryptography
Lightweight cryptography is a major research direction. The release of SIMON in June 2013 has generated significant interest and a number of studies evaluating and comparing it to other cipher algorithms. The articles cited here are the first results of these studies and were presented in the first half of 2014. In addition, articles on other lightweight ciphers are included from the same period.
- Min Chen; Shigang Chen; Qingjun Xiao, "Pandaka: A Lightweight Cipher For RFID Systems," INFOCOM, 2014 Proceedings IEEE , vol., no., pp.172,180, April 27 2014-May 2 2014. doi: 10.1109/INFOCOM.2014.6847937 The ubiquitous use of RFID tags raises concern about potential security risks in RFID systems. Because low-cost tags are extremely resource-constrained devices, common security mechanisms adopted in resource-rich equipment such as computers are no longer applicable to them. Hence, one challenging research topic is to design a lightweight cipher that is suitable for low-cost RFID tags. Traditional cryptography generally assumes that the two communicating parties are equipotent entities. In contrast, there is a large capability gap between readers and tags in RFID systems. We observe that the readers, which are much more powerful, should take more responsibility in RFID cryptographic protocols. In this paper, we make a radical shift from traditional cryptography, and design a novel cipher called Pandaka, in which most workload is pushed to the readers. As a result, Pandaka is particularly hardware-efficient for tags. We perform extensive simulations to evaluate the effectiveness of Pandaka. In addition, we present security analysis of Pandaka facing different attacks. Keywords: cryptographic protocols; radiofrequency identification; telecommunication security; Pandaka security analysis; RFID cryptographic protocols; RFID systems; lightweight cipher; low-cost RFID tags; resource-constrained devices; resource-rich equipment; security mechanisms; security risks; Ciphers; Computers; Indexes; Radiofrequency identification; Servers (ID#:14-2362) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847937&isnumber=6847911
- Lin Ding; Chenhui Jin; Jie Guan; Qiuyan Wang, "Cryptanalysis of Lightweight WG-8 Stream Cipher," Information Forensics and Security, IEEE Transactions on , vol.9, no.4, pp.645,652, April 2014. doi: 10.1109/TIFS.2014.2307202 WG-8 is a new lightweight variant of the well-known Welch-Gong (WG) stream cipher family, and takes an 80-bit secret key and an 80-bit initial vector (IV) as inputs. So far no attack on the WG-8 stream cipher has been published except the attacks by the designers. This paper shows that there exist Key-IV pairs for WG-8 that can generate keystreams, which are exact shifts of each other throughout the keystream generation. By exploiting this slide property, an effective key recovery attack on WG-8 in the related key setting is proposed, which has a time complexity of 2^53.32 and requires 2^52 chosen IVs. The attack is minimal in the sense that it only requires one related key. Furthermore, we present an efficient key recovery attack on WG-8 in the multiple related key setting. As confirmed by the experimental results, our attack recovers all 80 bits of WG-8 on a PC with a 2.5-GHz Intel Pentium 4 processor. This is the first time that a weakness is presented for WG-8, assuming that the attacker can obtain only a few dozen consecutive keystream bits for each IV. Finally, we give a new Key/IV loading proposal for WG-8, which takes an 80-bit secret key and a 64-bit IV as inputs. The new proposal keeps the basic structure of WG-8 and provides enough resistance against our related key attacks. 
Keywords: computational complexity; cryptography; microprocessor chips; 80-bit initial vector; 80-bit secret key; Intel Pentium 4 processor; Welch-Gong stream cipher; frequency 2.5 GHz; key recovery attack; keystream generation; lightweight WG-8 stream cipher cryptanalysis; related key attack; slide property; time complexity; Ciphers; Clocks; Equations; Proposals; Time complexity; Cryptanalysis; WG-8; lightweight stream cipher; related key attack (ID#:14-2363) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6746224&isnumber=6755552
- Xuanxia Yao; Xiaoguang Han; Xiaojiang Du, "A lightweight access control mechanism for mobile cloud computing," Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on , vol., no., pp.380,385, April 27 2014-May 2 2014. doi: 10.1109/INFCOMW.2014.6849262 In order to meet the security requirement, most data are stored in cloud as cipher-texts. Hence, a cipher-text based access control mechanism is needed for data sharing in cloud. A popular solution is to use the attribute-based encryption. However, it is not suitable for mobile cloud due to the heavy computation overhead caused by bilinear pairing, which also makes it difficult to change the access control policy. In addition, attribute-based encryption can't achieve fine-grained access control yet. In this paper, we present a lightweight cipher-text access control mechanism for mobile cloud computing, which is based on authorization certificates and secret sharing. Only the certificate owner can reconstruct decryption keys for his/her files. Our analyses show that the mechanism can achieve efficient and fine-grained access control on cipher-text at a much lower cost than the attribute-based encryption solution. Keywords: authorisation; cloud computing; cryptography; mobile computing; access control policy; attribute-based encryption; authorization certificates; bilinear pairing; certificate owner; cipher-text based access control mechanism; data sharing; decryption key reconstruction; fine-grained access control; lightweight cipher-text access control mechanism; mobile cloud computing; secret sharing; security requirement; Authorization; Cloud computing; Encryption; Mobile communication; Servers; Authorization; access control; certificate; mobile cloud storage (ID#:14-2364) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849262&isnumber=6849127
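The abstract does not specify which secret-sharing construction Yao et al. use to let only the certificate owner reconstruct decryption keys. As a minimal illustration of the reconstruction idea, here is an n-of-n XOR splitting sketch (function names are hypothetical; the paper's scheme, built on authorization certificates, is likely a richer threshold construction):

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(secret: bytes, n: int):
    """n-of-n XOR secret sharing: n-1 shares are uniformly random,
    and the last is chosen so all n XOR back to the secret.
    Any subset of fewer than n shares reveals nothing."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def reconstruct(shares):
    """XOR all shares together to recover the secret."""
    out = shares[0]
    for s in shares[1:]:
        out = xor_bytes(out, s)
    return out
```

In a certificate-based design, the shares could be held by the user's device and the cloud, so neither party alone learns the file key.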
- Fujishiro, M.; Yanagisawa, M.; Togawa, N., "Scan-based attack on the LED block cipher using scan signatures," Circuits and Systems (ISCAS), 2014 IEEE International Symposium on , vol., no., pp.1460,1463, 1-5 June 2014. doi: 10.1109/ISCAS.2014.6865421 LED (Light Encryption Device) block cipher, one of the lightweight block ciphers, is very compact in hardware. Its encryption process is composed of AES-like rounds. Recently, a scan-based side-channel attack was reported which retrieves the secret information inside a cryptosystem utilizing scan chains, one of the design-for-test techniques. In this paper, a scan-based attack method on the LED block cipher using scan signatures is proposed. In our proposed method, we focus on a particular 16-bit position in scanned data obtained from an LED LSI chip and retrieve its secret key using scan signatures. Experimental results show that our proposed method successfully retrieves its 64-bit secret key using 73 plaintexts on average if the scan chain is only connected to the LED block cipher. These experimental results also show that the key is successfully retrieved even if the scan chain includes some 4,000 additional 1-bit registers. Keywords: design for testability; digital signatures; large scale integration; private key cryptography; AES-like rounds; LED LSI chip; LED block cipher; cryptosystem; design-for-test techniques; encryption process; light encryption device; lightweight block ciphers; plaintexts; scan chain; scan signatures; scan-based attack method; scan-based side-channel attack; secret information; secret key; word length 16 bit; word length 64 bit; Ciphers; Encryption; Hardware; Large scale integration; Light emitting diodes; Registers (ID#:14-2365) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6865421&isnumber=6865048
- Bhasin, S.; Graba, T.; Danger, J.-L.; Najm, Z., "A Look Into SIMON From A Side-Channel Perspective," Hardware-Oriented Security and Trust (HOST), 2014 IEEE International Symposium on , vol., no., pp.56,59, 6-7 May 2014. doi: 10.1109/HST.2014.6855568 SIMON is a lightweight block cipher, specially designed for resource constrained devices that was recently presented by the National Security Agency (NSA). This paper deals with a hardware implementation of this algorithm from a side-channel point of view as it is a prime concern for embedded systems. We present the implementation of SIMON on a Xilinx Virtex-5 FPGA and propose a low-overhead countermeasure using first-order Boolean masking exploiting the simplistic construction of SIMON. Finally we evaluate the side-channel resistance of both implementations. Keywords: Boolean algebra; cryptography; field programmable gate arrays; SIMON; Xilinx Virtex-5 FPGA; embedded system; first-order Boolean masking; lightweight block cipher; resource constrained device; side-channel perspective; side-channel resistance; Ciphers; Field programmable gate arrays; Hardware; Magnetohydrodynamics; Registers; Table lookup; Countermeasures; Lightweight Cryptography; SIMON; Side-Channel Analysis (ID#:14-2366) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6855568&isnumber=6855557
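First-order Boolean masking, the countermeasure Bhasin et al. apply to SIMON, splits every sensitive word into two random shares so that no single intermediate value correlates with the secret. The sketch below illustrates the idea in software with hypothetical helper names; XOR (linear) operations work share-wise, while SIMON's AND requires a fresh random mask, shown here as a Trichina-style gadget. Hardware implementations must additionally account for glitches, which this functional sketch does not model.

```python
import secrets

def mask(value):
    """Split a 32-bit value into two shares: value = share0 XOR share1."""
    r = secrets.randbits(32)
    return (value ^ r, r)

def unmask(shares):
    """Recombine shares (done only at the very end of the computation)."""
    return shares[0] ^ shares[1]

def masked_xor(x, y):
    """XOR is linear over GF(2): apply it share-wise, never recombining."""
    return (x[0] ^ y[0], x[1] ^ y[1])

def masked_and(x, y):
    """Nonlinear AND needs fresh randomness r so that neither output
    share depends on the unmasked inputs (Trichina-style gadget)."""
    r = secrets.randbits(32)
    z0 = (x[0] & y[0]) ^ (x[0] & y[1]) ^ r
    z1 = (x[1] & y[0]) ^ (x[1] & y[1]) ^ r
    return (z0, z1)
```

Expanding z0 XOR z1 gives (x0^x1) & (y0^y1), so the gadget computes the correct AND of the unmasked values while each share stays randomized.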
- Cioranesco, J.-M.; Danger, J.-L.; Graba, T.; Guilley, S.; Mathieu, Y.; Naccache, D.; Xuan Thuy Ngo, "Cryptographically Secure Shields," Hardware-Oriented Security and Trust (HOST), 2014 IEEE International Symposium on , vol., no., pp.25,31, 6-7 May 2014. doi: 10.1109/HST.2014.6855563 Probing attacks are serious threats on integrated circuits. Security products often include a protective layer called shield that acts like a digital fence. In this article, we demonstrate a new shield structure that is cryptographically secure. This shield is based on the newly proposed SIMON lightweight block cipher and independent mesh lines to ensure the security against probing attacks of the hardware located behind the shield. Such structure can be proven secure against state-of-the-art invasive attacks. For the first time in the open literature, we describe a chip designed with a digital shield, and give an extensive report of its cost, in terms of power, metal layer(s) to sacrifice and of logic (including the logic to connect it to the CPU). Also, we explain how "Through Silicon Vias" (TSV) technology can be used for the protection against both frontside and backside probing. Keywords: cryptography; integrated circuit design; three-dimensional integrated circuits; SIMON lightweight block cipher; TSV technology; chip design; cryptographically secure shield; digital fence; digital shield; integrated circuit invasive attacks; mesh lines; metal layer; probing attacks; protective layer; security product; shield structure; through silicon vias; Ciphers; Integrated circuits; Metals; Registers; Routing; Cryptographically secure shield; Focused Ion Beam (FIB); SIMON block cipher; Through Silicon Vias (TSV) (ID#:14-2367) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6855563&isnumber=6855557
- Hwajeong Seo; Jongseok Choi; Hyunjin Kim; Taehwan Park; Howon Kim, "Pseudo Random Number Generator And Hash Function For Embedded Microprocessors," Internet of Things (WF-IoT), 2014 IEEE World Forum on , vol., no., pp.37,40, 6-8 March 2014. doi: 10.1109/WF-IoT.2014.6803113 Embedded microprocessors are commonly used for future technologies such as the Internet of Things (IoT), RFID and Wireless Sensor Networks (WSN). However, these microprocessors have limited computing power and storage, so a straightforward implementation of traditional services on resource-constrained devices is not recommended. To overcome this problem, lightweight implementation techniques should be considered for practical implementations. Among various requirements, security applications should be conducted on microprocessors for secure and robust service environments. In this paper, we present lightweight implementation techniques for an efficient Pseudo Random Number Generator (PRNG) and hash function. To reduce memory consumption and accelerate performance, we adopted an AES-accelerator-based implementation. This technique was first introduced at INDOCRYPT'12, and its idea is to exploit peripheral devices for efficient hash computations. With this technique, we present a block cipher based lightweight pseudo random number generator and a simple hash function on embedded microprocessors. Keywords: cryptography; embedded systems; microprocessor chips; random number generation; AES accelerator; INDOCRYPT'12; PRNG; block cipher based lightweight pseudo random number generator; embedded microprocessors; future technologies; hash computations; hash function; lightweight implementation techniques; peripheral devices; resource constrained devices; robust service environments; secure service environments; security applications; straight-forward implementation; Ciphers; Clocks; Encryption; Generators; Microprocessors (ID#:14-2368) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6803113&isnumber=6803102
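The abstract does not give Seo et al.'s PRNG construction, but a common shape for a block-cipher-based generator is counter (CTR) mode: each output block is the encryption of an incrementing counter. The sketch below shows only that structure; `toy_block_encrypt` is a hash-based stand-in for the AES accelerator call, NOT a real block cipher, and the class and function names are hypothetical.

```python
import hashlib

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    """Stand-in for a hardware AES / lightweight cipher invocation.
    For illustration only -- not a secure or real block cipher."""
    return hashlib.sha256(key + block).digest()[:16]

class CtrPrng:
    """Counter-mode PRNG sketch: output block i is E_K(counter_i)."""
    def __init__(self, key: bytes):
        self.key = key
        self.counter = 0

    def next_block(self) -> bytes:
        block = self.counter.to_bytes(16, "big")
        self.counter += 1
        return toy_block_encrypt(self.key, block)
```

The appeal on a microcontroller is that the same cipher core (or accelerator) already present for encryption also drives random number generation, saving code and memory.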
- At, N.; Beuchat, J.-L.; Okamoto, E.; San, I; Yamazaki, T., "Compact Hardware Implementations of ChaCha, BLAKE, Threefish, and Skein on FPGA," Circuits and Systems I: Regular Papers, IEEE Transactions on , vol.61, no.2, pp.485,498, Feb. 2014. doi: 10.1109/TCSI.2013.2278385 The cryptographic hash functions BLAKE and Skein are built from the ChaCha stream cipher and the tweakable Threefish block cipher, respectively. Interestingly enough, they are based on the same arithmetic operations, and the same design philosophy allows one to design lightweight coprocessors for hashing and encryption. The key element of our approach is to take advantage of the parallelism of the algorithms considered in this work to deeply pipeline our Arithmetic and Logic Units, and to avoid data dependencies by interleaving independent tasks. We show for instance that a fully autonomous implementation of BLAKE and ChaCha on a Xilinx Virtex-6 device occupies 144 slices and three memory blocks, and achieves competitive throughputs. In order to offer the same features, a coprocessor implementing Skein and Threefish requires a substantially higher slice count. Keywords: coprocessors; cryptography; field programmable gate arrays; BLAKE function; ChaCha stream cipher; FPGA; Skein function; Threefish block cipher; Xilinx Virtex-6 device; algorithm parallelism; arithmetic operations; arithmetic-and-logic units; competitive throughput; cryptographic hash functions; data dependencies; encryption; field programmable gate array; lightweight coprocessors; memory blocks; slice count; Ciphers; Coprocessors; Encryption; Field programmable gate arrays; Hardware; Pipelines; Ciphers; cryptography; coprocessors; field programmable gate arrays (ID#:14-2369) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6607237&isnumber=6722960
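ChaCha, from which BLAKE is built, is driven by a single add-rotate-xor quarter-round, which is why one pipelined ALU can serve both hashing and encryption as the paper describes. For reference, the quarter-round as standardized in RFC 8439:

```python
def rotl32(x: int, n: int) -> int:
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def quarter_round(a: int, b: int, c: int, d: int):
    """The ChaCha quarter-round: four add-rotate-xor steps on 32-bit
    words, with rotation distances 16, 12, 8, 7 (RFC 8439, Sec. 2.1)."""
    a = (a + b) & 0xFFFFFFFF; d = rotl32(d ^ a, 16)
    c = (c + d) & 0xFFFFFFFF; b = rotl32(b ^ c, 12)
    a = (a + b) & 0xFFFFFFFF; d = rotl32(d ^ a, 8)
    c = (c + d) & 0xFFFFFFFF; b = rotl32(b ^ c, 7)
    return a, b, c, d
```

The full cipher applies this to the columns and then the diagonals of a 4x4 word state; because only additions, XORs, and fixed rotations appear, the datapath maps compactly onto FPGA slices.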
- Verma, S.; Pal, S.K.; Muttoo, S.K., "A new Tool For Lightweight Encryption On Android," Advance Computing Conference (IACC), 2014 IEEE International , vol., no., pp.306,311, 21-22 Feb. 2014. doi: 10.1109/IAdCC.2014.6779339 Theft or loss of a mobile device can be an information security risk, as it can result in loss of confidential personal data. Traditional cryptographic algorithms are not suitable for resource constrained and handheld devices. In this paper, we have developed an efficient and user friendly tool called "NCRYPT" on the Android platform. The "NCRYPT" application is used to secure data at rest on Android, making it inaccessible to unauthorized users. It is based on the lightweight encryption scheme Hummingbird-2. The application provides secure storage by making use of password based authentication, so that an adversary cannot access the confidential data stored on the mobile device. The cryptographic key is derived through the password based key generation method PBKDF2 from the standard SUN JCE cryptographic provider. Various tools for encryption are available in the market which are based on AES or DES encryption schemes. The reported tool is based on Hummingbird-2 and is faster than most of the other existing schemes. It is also resistant to most of the attacks applicable to block and stream ciphers. Hummingbird-2 has been coded in C language and embedded in the Android platform with the help of JNI (Java Native Interface) for faster execution. This application provides the choice of encrypting the entire data on the SD card or selective files on the smart phone, protecting personal or confidential information available on such devices. 
Keywords: C language; cryptography; smart phones; AES encryption scheme; Android platform; C language; DES encryption scheme; Hummingbird-2 scheme; JNI; Java native interface; NCRYPT application; PBKDF2 password based key generation method; SUN JCE cryptographic provider; block ciphers; confidential data; cryptographic algorithms; cryptographic key; information security risk; lightweight encryption scheme; mobile device; password based authentication; stream ciphers; Ciphers; Encryption; Smart phones; Standards; Throughput; Android; HummingBird2; Information Security; Lightweight Encryption; PBKDF2 (ID#:14-2370) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779339&isnumber=6779283
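NCRYPT derives its cipher key from the user's password with PBKDF2 (via the SUN JCE provider). Python's standard library exposes the same primitive, so the derivation step can be sketched as follows; the iteration count, salt length, and 128-bit key length are illustrative assumptions, not NCRYPT's actual parameters.

```python
import hashlib
import secrets

def derive_key(password: str, salt: bytes = None, iterations: int = 100_000):
    """Derive a 128-bit cipher key from a password using PBKDF2-HMAC-SHA256.
    Returns (key, salt); the salt must be stored alongside the ciphertext
    so the same key can be re-derived for decryption."""
    if salt is None:
        salt = secrets.token_bytes(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                              salt, iterations, dklen=16)
    return key, salt
```

The iteration count deliberately slows each guess, so an attacker who steals the encrypted SD card cannot brute-force weak passwords cheaply.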
- Ahmadi, S.; Ahmadian, Z.; Mohajeri, J.; Aref, M.R., "Low Data Complexity Biclique Cryptanalysis of Block Ciphers with Application to Piccolo and HIGHT," Information Forensics and Security, IEEE Transactions on, vol.PP, no.99, pp.1, 1, July 2014. doi: 10.1109/TIFS.2014.2344445 In this paper, we present a framework for biclique cryptanalysis of block ciphers which requires an extremely low amount of data. To that end, we use a new representation of the biclique attack based on a new concept of cutset that describes our attack more clearly. An algorithm for choosing two differential characteristics is then presented to simultaneously minimize the data complexity and control the computational complexity. We then characterize those block ciphers that are vulnerable to this technique and, among them, apply this attack to the lightweight block ciphers Piccolo-80, Piccolo-128 and HIGHT. The data complexity of these attacks is only 16 plaintext-ciphertext pairs, which is considerably less than in existing cryptanalytic results. In all the attacks the computational complexity remains the same as in previous work or is even slightly improved. Keywords: (not provided) (ID#:14-2371) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6868260&isnumber=4358835
- Aysu, A; Gulcan, E.; Schaumont, P., "SIMON Says: Break Area Records of Block Ciphers on FPGAs," Embedded Systems Letters, IEEE , vol.6, no.2, pp.37,40, June 2014. doi: 10.1109/LES.2014.2314961 While the advanced encryption standard (AES) is extensively in use in a number of applications, its area cost limits its deployment in resource constrained platforms. In this letter, we have implemented SIMON, a recent promising low-cost alternative to AES, on reconfigurable platforms. The Feistel network, the construction of the round function and the key generation of SIMON enable bit-serial hardware architectures which can significantly reduce the cost. Moreover, encryption and decryption can be done using the same hardware. The results show that with an equivalent security level, SIMON is 86% smaller than AES, 70% smaller than PRESENT (a standardized low-cost AES alternative), and its smallest hardware architecture only costs 36 slices (72 LUTs, 30 registers). To the best of our knowledge, this work sets the new area records as we propose the hardware architecture of the smallest block cipher ever published on field-programmable gate arrays (FPGAs) at 128-bit level of security. Therefore, SIMON is a strong alternative to AES for low-cost FPGA-based applications. 
Keywords: cryptography; field programmable gate arrays; Feistel network; SIMON; advanced encryption standard; bit-serial hardware architectures; block ciphers; break area records; cost reduction; decryption; equivalent security level; field-programmable gate arrays; hardware architecture; low-cost FPGA-based applications; reconfigurable platforms; resource constrained platforms; round function; standardized low-cost AES alternative; Ciphers; Encryption; Field programmable gate arrays; Hardware; Parallel processing; Table lookup; Block ciphers; SIMON; field-programmable gate arrays (FPGAs) implementation; lightweight cryptography (ID#:14-2372) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6782431&isnumber=6820801
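The bit-serial friendliness the letter describes comes from SIMON's very simple Feistel round, which uses only AND, XOR, and rotations, and from the fact that decryption runs the same rounds in reverse. As an illustrative software sketch of that round structure (SIMON32 rotation amounts; the stand-in round keys below are placeholders, not the real SIMON key schedule):

```python
W = 16                      # word size for SIMON32 (two 16-bit words per block)
MASK = (1 << W) - 1

def rotl(x, r):
    """Rotate a W-bit word left by r bits."""
    return ((x << r) | (x >> (W - r))) & MASK

def f(x):
    """SIMON round function: AND of two rotations, XORed with a third."""
    return (rotl(x, 1) & rotl(x, 8)) ^ rotl(x, 2)

def encrypt(x, y, round_keys):
    """Apply the Feistel rounds with the given round keys."""
    for k in round_keys:
        x, y = y ^ f(x) ^ k, x
    return x, y

def decrypt(x, y, round_keys):
    """Run the same rounds in reverse; no inverse of f is needed."""
    for k in reversed(round_keys):
        x, y = y, x ^ f(y) ^ k
    return x, y

pt = (0x6565, 0x6877)
rks = list(range(32))           # stand-in round keys; a real key schedule derives these
ct = encrypt(*pt, rks)
assert decrypt(*ct, rks) == pt  # same rounds, reversed, recover the plaintext
```

Because encryption and decryption share one datapath, the same serialized hardware can serve both directions, which is the property the FPGA design exploits.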
- Mathew, S.; Satpathy, S.; Suresh, V.; Kaul, H.; Anders, M.; Chen, G.; Agarwal, A.; Hsu, S.; Krishnamurthy, R., "340mV-1.1V, 289Gbps/W, 2090-Gate NanoAES Hardware Accelerator With Area-Optimized Encrypt/Decrypt GF((2^4)^2) Polynomials in 22nm Tri-Gate CMOS," VLSI Circuits Digest of Technical Papers, 2014 Symposium on, vol., no., pp.1,2, 10-13 June 2014. doi: 10.1109/VLSIC.2014.6858420 An on-die, lightweight nanoAES hardware accelerator is fabricated in 22nm tri-gate CMOS, targeted for ultra-low power mobile SOCs. Compared to conventional 128-bit AES implementations, this design uses an 8-bit Sbox datapath along with ShiftRow byte-order processing to compute all AES rounds in the native GF((2^4)^2) composite field. This approach, along with a serial-accumulating MixColumns circuit, area-optimized encrypt and decrypt Galois-field polynomials, and an integrated on-the-fly key generation circuit, results in a compact 2090-gate design, enabling a peak energy efficiency of 289Gbps/W and AES-128 encrypt/decrypt throughput of 432/671Mbps with total energy consumption of 4.7/3nJ measured at 0.9V, 25°C. Keywords: CMOS digital integrated circuits; Galois fields; cryptography; low-power electronics; system-on-chip; AES rounds; Sbox datapath; ShiftRow byte-order processing; area-optimized encrypt polynomials; compact 2090-gate design; decrypt Galois-field polynomials; integrated on-the-fly key generation circuit; lightweight nanoAES hardware accelerator; native composite field; serial-accumulating MixColumns circuit; size 22 nm; temperature 25°C; tri-gate CMOS; ultra-low power mobile SOC; voltage 340 mV to 1.1 V; word length 8 bit; Abstracts; Area measurement; Ciphers; Energy measurement; IP networks; Logic gates (ID#:14-2373) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6858420&isnumber=6858353
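The area savings in composite-field AES designs like this one come from computing the S-box inversion over GF((2^4)^2), where arithmetic decomposes into operations on 4-bit values. As a small illustration of the underlying GF(2^4) layer only (using the common reduction polynomial x^4 + x + 1, which is our assumption; the paper selects its exact polynomials for minimal area):

```python
def gf16_mul(a, b):
    """Multiply two elements of GF(2^4) modulo x^4 + x + 1 (0b10011)."""
    p = 0
    for _ in range(4):
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:          # degree-4 overflow: reduce using x^4 = x + 1
            a ^= 0x13
    return p

def gf16_inv(a):
    """Brute-force multiplicative inverse in GF(2^4); tiny field, so this is cheap."""
    for c in range(1, 16):
        if gf16_mul(a, c) == 1:
            return c
    raise ValueError("0 has no multiplicative inverse")

# Example: x and x^3 + 1 are inverses modulo x^4 + x + 1.
assert gf16_mul(0x2, 0x9) == 1
```

In a composite-field S-box, one GF(2^8) inversion is rewritten as a handful of such 4-bit multiplications and one 4-bit inversion, which maps to far fewer gates than a 256-entry lookup table.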
Locking
In computer science, a lock is a synchronization mechanism designed to enforce a concurrency-control policy, typically mutual exclusion on a shared resource. Locks are conceptually simple but carry well-known costs such as contention and deadlock, and to be efficient they typically require hardware support. The articles cited here span several senses of "locking": cache locking, injection locking, phase locking, and a lock-free approach to multicore computing. These articles appeared in the first half of 2014.
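For the computer-science sense of the term, the canonical use of a lock is to make a read-modify-write on shared state atomic. A minimal Python sketch (our own illustration, not drawn from the cited papers):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    """Increment the shared counter n times, one atomic step at a time."""
    global counter
    for _ in range(n):
        with lock:            # enforce mutual exclusion on the update
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                # 40000: no increments are lost under the lock
```

Without the lock, the load-increment-store sequence can interleave across threads and lose updates; lock-free designs avoid this serialization bottleneck at the cost of more complex algorithms.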
- Huping Ding; Yun Liang; Mitra, T., "WCET-Centric Dynamic Instruction Cache Locking," Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, vol., no., pp.1,6, 24-28 March 2014. doi: 10.7873/DATE.2014.040 Cache locking is an effective technique to improve timing predictability in real-time systems. In static cache locking, the locked memory blocks remain unchanged throughout the program execution. Thus static locking may not be effective for large programs where multiple memory blocks compete for the few cache lines available for locking. In comparison, dynamic cache locking overcomes the cache space limitation through time-multiplexing of locked memory blocks. Prior dynamic locking techniques partition the program into regions and take independent locking decisions for each region. We propose a flexible loop-based dynamic cache locking approach. We not only select the memory blocks to be locked but also the locking points (e.g., loop level). We judiciously allow memory blocks from the same loop to be locked at different program points for WCET improvement. We design a constraint-based approach that incorporates a global view to decide on the number of locking slots at each loop entry point and then select the memory blocks to be locked for each loop. Experimental evaluation shows that our dynamic cache locking approach achieves substantial WCET improvement compared to prior techniques.
Keywords: cache storage; real-time systems; WCET-centric dynamic instruction cache locking; cache lines; constraint-based approach; flexible loop-based dynamic cache locking approach; independent locking decisions; locked memory blocks; locking points; loop entry point; multiple memory blocks; program execution; program points; real-time systems; time-multiplexing; timing predictability; worst-case execution time; Abstracts; Benchmark testing; Educational institutions; Electronic mail; Nickel; Resilience; Timing (ID#:14-2374) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6800241&isnumber=6800201
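The intuition behind cache locking for WCET is easy to demonstrate: pinning a hot block turns its worst-case conflict misses into guaranteed hits. The toy direct-mapped model below is our own illustration of that effect, not the paper's constraint-based selection algorithm:

```python
class DirectMappedCache:
    """Toy direct-mapped cache; a locked set never evicts its resident block."""
    def __init__(self, n_sets):
        self.n_sets = n_sets
        self.lines = {}        # set index -> resident block
        self.locked = set()
        self.misses = 0

    def lock(self, block):
        """Preload a block and pin its set so it can never be evicted."""
        s = block % self.n_sets
        self.lines[s] = block
        self.locked.add(s)

    def access(self, block):
        s = block % self.n_sets
        if self.lines.get(s) == block:
            return                     # hit
        self.misses += 1
        if s not in self.locked:
            self.lines[s] = block      # fill on miss unless the set is locked

# Blocks 0 and 4 conflict in a 4-set cache; a loop alternates between them.
trace = [0, 4] * 10

plain = DirectMappedCache(4)
for b in trace:
    plain.access(b)                    # thrashing: every access misses

locked = DirectMappedCache(4)
locked.lock(0)                         # pin the hot block before the loop
for b in trace:
    locked.access(b)                   # block 0 now always hits

print(plain.misses, locked.misses)     # 20 10
```

In the locked run, the pinned block's access time becomes a constant, which is exactly the predictability property WCET analysis needs; the cited paper's contribution is choosing which blocks to pin, and where, under these constraints.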
- Raj, M.; Emami, A., "A Wideband Injection-Locking Scheme and Quadrature Phase Generation in 65-nm CMOS," Microwave Theory and Techniques, IEEE Transactions on, vol.62, no.4, pp.763,772, April 2014. doi: 10.1109/TMTT.2014.2310172 A novel technique for wideband injection locking in an LC oscillator is proposed. Phase-locked-loop and injection-locking elements are combined symbiotically to achieve a wide locking range while retaining the simplicity of the latter. This method does not require a phase frequency detector or a loop filter to achieve phase lock. A mathematical analysis of the system is presented and the expression for the new locking range is derived. A locking range of 13.4-17.2 GHz and an average jitter tracking bandwidth of up to 400 MHz were measured in a high-Q LC oscillator. This architecture is used to generate quadrature phases from a single clock without any frequency division. It also provides high-frequency jitter filtering while retaining the low-frequency correlated jitter essential for forwarded clock receivers. Keywords: CMOS integrated circuits; LC circuits; MMIC oscillators; injection locked oscillators; jitter; phase locked loops; voltage-controlled oscillators; forwarded clock receivers; frequency 13.4 GHz to 17.2 GHz; high-Q LC oscillator; high-frequency jitter filtering; injection-locking elements; jitter tracking bandwidth; low-frequency correlated jitter; mathematical analysis; phase-locked loop; quadrature phase generation; size 65 nm; wide locking range; wideband injection locking scheme; Clocks; Jitter; Mathematical model; Phase locked loops; Varactors; Voltage-controlled oscillators; Adler's equation; injection-locked (IL) phase-locked loop (PLL); injection-locked oscillator (ILO); jitter transfer function; locking range; quadrature; voltage-controlled oscillator (VCO) (ID#:14-2375) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6766809&isnumber=6782343
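The baseline these hybrid schemes improve on is the classical injection-locking bound from Adler's equation, referenced in the keywords above. As a brief aside (a standard textbook result, not taken from the cited abstract): for a weak current $I_{\mathrm{inj}}$ injected into an LC oscillator of quality factor $Q$ oscillating at $\omega_0$ with tank current $I_{\mathrm{osc}}$, the phase difference $\theta$ between oscillator and injection obeys

```latex
\frac{d\theta}{dt} \;=\; \Delta\omega_0 \;-\; \frac{\omega_0}{2Q}\,\frac{I_{\mathrm{inj}}}{I_{\mathrm{osc}}}\,\sin\theta ,
\qquad\text{so a lock exists only if}\quad
|\Delta\omega_0| \;\le\; \omega_L \;=\; \frac{\omega_0}{2Q}\,\frac{I_{\mathrm{inj}}}{I_{\mathrm{osc}}} .
```

The locking range $\omega_L$ shrinks with $Q$ and with the injection ratio, which is why wideband schemes such as the one above add PLL-like elements rather than relying on injection strength alone.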
- Asaduzzaman, A.; Allen, M.P.; Jareen, T., "An Effective Locking-Free Caching Technique For Power-Aware Multicore Computing Systems," Informatics, Electronics & Vision (ICIEV), 2014 International Conference on, vol., no., pp.1,6, 23-24 May 2014. doi: 10.1109/ICIEV.2014.6850861 In multicore/manycore systems, multiple caches increase the total power consumption and increase latency because it is nearly impossible to hide last-level latency. Studies suggest that there are opportunities to increase the performance-to-power ratio by locking selected memory blocks inside the caches during runtime. However, the cache locking technique reduces the effective cache size and may introduce additional configuration difficulties, especially for multicore architectures. Furthermore, there may be other restrictions (for example, the PowerPC 750GX processor does not allow cache locking at level 1). In this paper, we propose a Smart Victim Cache (SVC) assisted caching technique that eliminates traditional cache locking without compromising the performance-to-power ratio. In addition to functioning as a normal victim cache, the proposed SVC holds memory blocks that may cause higher cache misses and supports stream buffering to increase cache hits. We model a quad-core system that has Private First Level Caches (PFLCs), a Shared Last Level Cache (SLLC), and a shared SVC located between the PFLCs and the SLLC. We run simulation programs using a diverse group of applications including MPEG-4 and H.264/AVC. Experimental results suggest that the proposed SVC-assisted multicore cache memory subsystem helps decrease the total power consumption and average latency by up to 21% and 17%, respectively, when compared with an SLLC cache locking mechanism without SVC.
Keywords: cache storage; multiprocessing systems; power aware computing; PFLCs; PowerPC 750GX processor; SLLC; SVC; cache locking technique; effective locking free caching technique; intensify latency; multicore cache memory subsystem; multicore-manycore systems; power aware multicore computing systems; power consumption; power ratio; private first level caches; quadcore system; selected memory blocks; shared last level cache; smart victim cache; Informatics; Memory management; Multicore processing; Power demand; Static VAr compensators; Transform coding; Video coding; Cache locking; green technology; low-power computing; multicore architecture; victim cache (ID#:14-2376) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6850861&isnumber=6850678
- Dong Hou; Bo Ning; Shuangyou Zhang; Jiutao Wu; Jianye Zhao, "Long-Term Stabilization of Fiber Laser Using Phase-Locking Technique With Ultra-Low Phase Noise and Phase Drift," Selected Topics in Quantum Electronics, IEEE Journal of, vol.20, no.5, pp.1,8, Sept.-Oct. 2014. doi: 10.1109/JSTQE.2014.2316592 We investigated the phase noise performance of a conventional phase-locking technique in the long-term stabilization of a mode-locked fiber laser (MLFL). The investigation revealed that the electronic noise introduced by the electronic phase detector is a key contributor to the phase noise of the stabilization system. To eliminate this electronic noise, we propose an improved phase-locking technique with an optic-microwave phase detector and a pump-tuning-based technique. The mechanism and the theoretical model of the novel phase-locking technique are discussed. Long-term stabilization experiments demonstrated that the improved technique can achieve long-term stabilization of MLFLs with ultra-low phase noise and phase drift. The excellent locking performance of the improved phase-locking technique implies that this technique can be used to stabilize fiber lasers with a highly stable H-maser or an optical clock without stability loss. Keywords: fibre lasers; laser mode locking; laser tuning; optical pumping; phase detectors; phase noise; electronic noise; electronic phase detector; fiber laser; long-term stabilization; mode-locked fiber laser; optic-microwave phase detector; phase drift; phase-locking technique; pump-tuning-based technique; ultra-low phase noise; Adaptive optics; Optical fibers; Optical noise; Optical pulses; Phase locked loops; Phase noise; Modeling; mode-locked fiber laser (MLFL); phase detection; phase-locking loop; stabilization (ID#:14-2377) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6797883&isnumber=6603383
- Jing Jin; Bukun Pan; Xiaoming Liu; Jianjun Zhou, "Injection-Locking Frequency Divider Based Dual-Modulus Prescalers with Extended Locking Range," Circuits and Systems (ISCAS), 2014 IEEE International Symposium on, vol., no., pp.502,505, 1-5 June 2014. doi: 10.1109/ISCAS.2014.6865182 A new Injection-Locking Frequency Divider (ILFD) based dual-modulus prescaler with an extended locking range is presented in this paper. A tuning capacitor inserted into the ring oscillator loop widens the common locking range of the prescaler's two operating modes. A dual-modulus prescaler using the proposed method is designed and simulated in a 65nm CMOS process. Simulation results show that the locking range of the divide-by-4/5 is extended to 11.5-19.1 GHz, more than 40% wider than the 14-19.4 GHz range of the conventional design. Keywords: CMOS integrated circuits; field effect MMIC; frequency dividers; injection locked oscillators; microwave oscillators; CMOS process; dual-modulus prescaler; extended locking range; frequency 11.5 GHz to 19.1 GHz; injection locking frequency divider; ring oscillator loop; size 65 nm; tuning capacitor; Capacitors; Frequency conversion; Phase locked loops; Power demand; Ring oscillators; Tuning (ID#:14-2378) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6865182&isnumber=6865048
- Hwi Don Lee; Zhongping Chen; Myung Yung Jeong; Chang-Seok Kim, "Simultaneous Dual-Band Wavelength-Swept Fiber Laser Based on Active Mode Locking," Photonics Technology Letters, IEEE , vol.26, no.2, pp.190,193, Jan.15, 2014. doi: 10.1109/LPT.2013.2291834 We report a simultaneous dual-band wavelength-swept laser based on the active mode locking method. By applying a single modulation signal, synchronized sweeping of two lasing-wavelengths is demonstrated without the use of a mechanical wavelength-selecting filter. Two free spectral ranges are independently controlled with a dual path-length configuration of a laser cavity. The static and dynamic performances of a dual-band wavelength-swept active mode locking fiber laser are characterized in both the time and wavelength regions. Two lasing wavelengths were swept simultaneously from 1263.0 to 1333.3 nm for the 1310 nm band and from 1493 to 1563.3 nm for the 1550 nm band. The application of a dual-band wavelength-swept fiber laser was also demonstrated with a dual-band optical coherence tomography imaging system. 
Keywords: fibre lasers; laser beam applications; laser cavity resonators; laser mode locking; optical filters; optical modulation; optical tomography; active mode locking method; dual path-length configuration; dual-band optical coherence tomography imaging system; dual-band wavelength-swept active mode locking fiber laser; dynamic performances; laser cavity; lasing-wavelengths; mechanical wavelength-selecting filter; simultaneous dual-band wavelength-swept fiber laser; single modulation signal; static performances; synchronized sweeping; wavelength 1263.0 nm to 1333.3 nm; wavelength 1310 nm; wavelength 1493 nm to 1563.3 nm; wavelength 1550 nm; wavelength regions; Cavity resonators; Dual band; Fiber lasers; Frequency modulation; Laser mode locking; Optical fibers; Fiber lasers; laser mode locking; optical imaging (ID#:14-2379) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6674061&isnumber=6693740
- Simos, H.; Bogris, A; Syvridis, D.; Elsasser, W., "Intensity Noise Properties of Mid-Infrared Injection Locked Quantum Cascade Lasers: I. Modeling," Quantum Electronics, IEEE Journal of, vol.50, no.2, pp.98,105, Feb. 2014. doi: 10.1109/JQE.2013.2295434 In this paper, we numerically investigate the effect of optical injection locking on the noise properties of mid-infrared quantum cascade lasers. The analysis is carried out by means of a rate equation model, which takes into account the various noise contributions and the injection of the master laser. The obtained results indicate that the locked slave laser may operate under reduced intensity noise levels compared with the free running operation. In addition, optimization of the locking process leads to further suppression of the intensity noise when the slave laser is biased close to the free-running threshold current. The main factors that significantly affect the locking process and the achievable noise levels are the injected optical power and the master-slave frequency detuning. Keywords: infrared spectra; laser mode locking; laser tuning; numerical analysis; optical noise; optimisation; quantum cascade lasers; free-running threshold current; intensity noise suppression; master-slave frequency detuning; midinfrared injection locking; midinfrared quantum cascade lasers; numerical investigation; optical injection locking; optical power injection; optimization; rate equation model; Laser noise; Mathematical model; Optical noise; Power lasers; Quantum cascade lasers; Quantum cascade lasers; injection locking; intensity noise; optical injection (ID#:14-2380) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6690160&isnumber=6685877
- Wenrui Wang; Jinlong Yu; Bingchen Han; Ju Wang; Lingyun Ye; Enze Yang, "Tunable Microwave Frequency Multiplication by Injection Locking of DFB Laser With a Weakly Phase Modulated Signal," Photonics Journal, IEEE , vol.6, no.2, pp.1,8, April 2014. doi: 10.1109/JPHOT.2014.2308634 We have demonstrated in this paper a novel tunable microwave frequency multiplication by injecting a weakly phase-modulated optical signal into a DFB laser diode. Signals with multiple weak sidebands are generated by cross-phase modulation of a continuous wave (CW) with short pulses from mode-locked fiber laser. Then, frequency multiplication is achieved by injection and phase locking a commercially available DFB laser to one of the harmonics of the phase modulated signal. The multiplication factor can be tuned by changing the frequency difference between the CW and the free oscillating wavelength of the DFB laser. The experimental results show that, with an original signal at a repetition rate of 1 GHz, a microwave signal with high spectral purity and stability is generated with a multiplication factor up to 60. The side-mode suppression ratio over 40 dB and phase noise lower than -90 dBc/Hz at 10 kHz are demonstrated over a continuous tuning range from 20 to 40. 
Keywords: distributed feedback lasers; laser frequency stability; laser mode locking; laser noise; laser tuning; microwave generation; microwave photonics; optical modulation; phase modulation; phase noise; semiconductor lasers; CW wavelength; DFB laser diode; cross-phase modulation; distributed feedback laser; free oscillating wavelength; frequency 10 kHz; high spectral purity; injection locking; microwave signal generation; mode-locked fiber laser; phase locking; phase noise; side-mode suppression ratio; stability; tunable microwave frequency multiplication; weakly phase modulated signal; Laser mode locking; Masers; Microwave filters; Microwave photonics; Optical filters; Phase modulation; Semiconductor lasers; Microwave photonics; frequency multiplication; injection locking (ID#:14-2381) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6748869&isnumber=6750774
- Arsenijevic, D.; Kleinert, M.; Bimberg, D., "Breakthroughs in Photonics 2013: Passive Mode-Locking of Quantum-Dot Lasers," Photonics Journal, IEEE , vol.6, no.2, pp.1,6, April 2014. doi: 10.1109/JPHOT.2014.2308195 Most recent achievements in passive mode-locking of quantum-dot lasers, with the main focus on jitter reduction and frequency tuning, are described. Different techniques, leading to record values for integrated jitter of 121 fs and a locking range of 342 MHz, are presented for a 40-GHz laser. Optical feedback is observed to be the method of choice in this field. For the first time, five different optical-feedback regimes are discovered, including the resonant one yielding a radio-frequency linewidth reduction by 99%. Keywords: jitter; laser feedback; laser mode locking; laser tuning; quantum dot lasers; frequency 40 GHz; frequency tuning; jitter reduction; optical feedback; passive mode-locking; photonics; quantum-dot lasers; radio-frequency linewidth reduction; Jitter; Laser mode locking; Optical attenuators; Optical feedback; Quantum dot lasers; Tuning; Mode-locked lasers; optical feedback; phase noise; quantum dots (ID#:14-2382) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6747957&isnumber=6750774
- Nagashima, T.; Wei, X.; Tanaka, H.-A.; Sekiya, H., "Locking Range Derivations for Injection-Locked Class-E Oscillator Applying Phase Reduction Theory," Circuits and Systems I: Regular Papers, IEEE Transactions on, vol.PP, no.99, pp.1,8, June 2014. doi: 10.1109/TCSI.2014.2327276 This paper presents a numerical locking-range prediction for the injection-locked class-E oscillator using the phase reduction theory (PRT). By applying this method to injection-locked class-E oscillator designs, the locking ranges of the oscillator for any injection-signal waveform can be obtained efficiently. The locking ranges obtained from the proposed method quantitatively agreed with those obtained from simulations and circuit experiments, showing the validity and effectiveness of the PRT-based locking-range derivation method. Keywords: Capacitance; Equations; Limit-cycles; MOSFET; Oscillators; Switches; Synchronization; Injection-locked class-E oscillator; locking range; phase reduction theory (ID#:14-2383) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842684&isnumber=4358591
- Habruseva, T.; Arsenijevic, D.; Kleinert, M.; Bimberg, D.; Huyet, G.; Hegarty, S.P., "Optimum Phase Noise Reduction And Repetition Rate Tuning In Quantum-Dot Mode-Locked Lasers," Applied Physics Letters, vol.104, no.2, pp.021112,021112-4, Jan 2014. doi: 10.1063/1.4861604 Competing approaches exist for controlling phase noise and frequency tuning in mode-locked lasers, but no comparative analysis of their pros and cons has yet been presented. Here, we compare results of hybrid mode-locking, hybrid mode-locking with optical injection seeding, and sideband optical injection seeding performed on the same quantum dot laser under identical bias conditions. We achieved the lowest integrated jitter of 121 fs and a record-large radio-frequency (RF) tuning range of 342 MHz with sideband injection seeding of the passively mode-locked laser. The combination of hybrid mode-locking with optical injection-locking resulted in 240 fs integrated jitter and an RF tuning range of 167 MHz. Using conventional hybrid mode-locking, the integrated jitter and the RF tuning range were 620 fs and 10 MHz, respectively. Keywords: (not provided) (ID#:14-2384) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6715601&isnumber=6712870
- Jun-Chau Chien; Upadhyaya, P.; Jung, H.; Chen, S.; Fang, W.; Niknejad, A.M.; Savoj, J.; Ken Chang, "2.8 A pulse-position-modulation phase-noise-reduction technique for a 2-to-16GHz injection-locked ring oscillator in 20nm CMOS," Solid-State Circuits Conference Digest of Technical Papers (ISSCC), 2014 IEEE International, vol., no., pp.52,53, 9-13 Feb. 2014. doi: 10.1109/ISSCC.2014.6757334 High-speed transceivers embedded inside FPGAs require software-programmable clocking circuits to cover a wide range of data rates across different channels [1]. These transceivers use high-frequency PLLs with LC oscillators to satisfy stringent jitter requirements at increasing data rates. However, the large area of these oscillators limits the number of independent LC-based clocking sources and reduces the flexibility offered by the FPGA. A ring-based PLL occupies smaller area but produces higher jitter. With injection-locking (IL) techniques [2-3], ring-based oscillators achieve comparable performance with their LC counterparts [4-5] at frequencies below 10GHz. Moreover, addition of a PLL to an injection-locked VCO (IL-PLL) provides injection-timing calibration and frequency tracking against PVT [3,5]. Nevertheless, applying injection-locking techniques to high-speed ring oscillators in deep submicron CMOS processes, with high flicker-noise corner frequencies at tens of MHz, poses a design challenge for low-jitter operation. Shown in Fig. 2.8.1, injection locking can be modeled as a single-pole feedback system that achieves 20dB/dec of in-band noise shaping against intrinsic VCO phase noise over a wide bandwidth [6]. As a consequence, this technique suppresses the 1/f^2 noise of the VCO but not its 1/f^3 noise. Note that the conventional IL-PLL is capable of shaping the VCO in-band noise at 40dB/dec [6]; however, its noise shaping is limited by the narrow PLL bandwidth due to significant attenuation of the loop gain by injection locking.
To achieve wideband 2nd-order noise shaping in 20nm ring oscillators, we present a circuit technique that applies pulse-position-modulated (PPM) injection through feedback control. Keywords: 1/f noise; CMOS integrated circuits; flicker noise; injection locked oscillators; microwave oscillators; phase locked loops; phase noise; pulse position modulation; voltage-controlled oscillators; 1/f^2 noise; FPGA; LC oscillator; VCO phase noise; deep submicron CMOS process; feedback control; frequency 2 GHz to 16 GHz; frequency tracking; high-frequency PLL; high-speed ring oscillator; high-speed transceiver; injection-locked VCO; injection-locked ring oscillator; injection-locking technique; injection-timing calibration; phase-noise-reduction technique; pulse-position-modulation; ring-based PLL; single-pole feedback system; size 20 nm; software-programmable clocking circuit; Bandwidth; Injection-locked oscillators; Jitter; Noise; Phase locked loops; Ring oscillators; Voltage-controlled oscillators (ID#:14-2385) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6757334&isnumber=6757318
- Mangold, M.; Link, S.M.; Klenner, A; Zaugg, C.A; Golling, M.; Tilma, B.W.; Keller, U., "Amplitude Noise and Timing Jitter Characterization of a High-Power Mode-Locked Integrated External-Cavity Surface Emitting Laser," Photonics Journal, IEEE , vol.6, no.1, pp.1,9, Feb. 2014. doi: 10.1109/JPHOT.2013.2295464 We present a timing jitter and amplitude noise characterization of a high-power mode-locked integrated external-cavity surface emitting laser (MIXSEL). In the MIXSEL, the semiconductor saturable absorber of a SESAM is integrated into the structure of a VECSEL to start and stabilize passive mode-locking. In comparison to previous noise characterization of SESAM-mode-locked VECSELs, this first noise characterization of a MIXSEL is performed at a much higher average output power. In a free-running operation, the laser generates 14.3-ps pulses at an average output power of 645 mW at a 2-GHz pulse repetition rate and an RMS amplitude noise of 0.15% [1 Hz, 10 MHz]. We measured an RMS timing jitter of 129 fs [100 Hz, 10 MHz], which represents the lowest value for a free-running passively mode-locked semiconductor disk laser to date. Additionally, we stabilized the pulse repetition rate with a piezo actuator to control the cavity length. With the laser generating 16.7-ps pulses at an average output power of 701 mW, the repetition frequency was phase-locked to a low-noise electronic reference using a feedback loop. In actively stabilized operation, the RMS timing jitter was reduced to less than 70 fs [1 Hz, 100 MHz]. In the 100-Hz to 10-MHz bandwidth, we report the lowest timing jitter measured from a passively mode-locked semiconductor disk laser to date with a value of 31 fs. These results show that the MIXSEL technology provides compact ultrafast laser sources combining high-power and low-noise performance similar to diode-pumped solid-state lasers, which enable world-record optical communication rates and low-noise frequency combs. 
Keywords: integrated optoelectronics; laser beams; laser cavity resonators; laser feedback; laser mode locking; laser noise; laser stability; optical pulse generation; optical saturable absorption; piezoelectric actuators; semiconductor lasers; surface emitting lasers; timing jitter; MIXSEL; RMS amplitude noise; RMS timing jitter; SESAM; VECSEL; actively stabilized operation; average output power; cavity length; compact ultrafast laser sources; feedback loop; free-running passively mode-locked semiconductor disk laser; frequency 1 Hz to 100 MHz; frequency 2 GHz; high-power mode-locked integrated external-cavity surface emitting laser; low-noise electronic reference; low-noise frequency combs; low-noise performance; optical communication rates; phase-locking; piezo actuator; power 645 mW; power 701 mW; pulse generation; pulse repetition rate; repetition frequency; semiconductor saturable absorber; stabilize passive mode-locking; time 129 fs; time 14.3 ps; time 16.7 ps; Cavity resonators; Laser mode locking; Laser noise; Vertical cavity surface emitting lasers; Diode-pumped lasers; infrared lasers; mode-locked lasers; semiconductor lasers; ultrafast lasers (ID#:14-2386) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6690115&isnumber=6689334
- Yu-Sheng Lin; Cheng-Han Wu; Chia-Chen Huang; Chun-Lin Lu; Yeong-Her Wang, "Ultra-Wide Locking Range Regenerative Frequency Dividers With Quadrature-Injection Current-Mode-Logic Loop Divider," Microwave and Wireless Components Letters, IEEE, vol.24, no.3, pp.179,181, March 2014. doi: 10.1109/LMWC.2013.2291864 Divide-by-3 and divide-by-5 regenerative frequency dividers (RFDs) with ultra-wide locking ranges are presented. The proposed dividers were fabricated in a TSMC 90 nm CMOS process, using divide-by-2 and divide-by-4 quadrature-injected current-mode-logic loop dividers to widen the locking ranges. The dividers also achieve quadrature input and quadrature output. Using a 1.2 V supply voltage, the power consumptions of the divide-by-3 and divide-by-5 divider cores were 10.2 and 14.8 mW, respectively. Without using tuning techniques, the measured locking ranges for the divide-by-3 and divide-by-5 dividers were 9 to 14.7 GHz (48.1%) and 7.2 to 19 GHz (90.1%), respectively. The phase deviation of the quadrature outputs for the two dividers was less than 0.8 deg and 1.1 deg, respectively. Compared with previously reported data, the proposed divide-by-3 and divide-by-5 RFDs show outstanding figure-of-merit values.
Keywords: CMOS integrated circuits; circuit tuning; cores; current-mode circuits; frequency dividers; integrated circuit design; integrated circuit measurement; logic circuits; microwave integrated circuits; RFD; TSMC CMOS process; core; frequency 7.2 GHz to 19 GHz; integrated circuit design; phase deviation; power 10.2 mW; power 14.8 mW; power consumption; quadrature-injection current-mode-logic loop divider; size 90 nm; tuning technique; ultra-wide locking range regenerative frequency divider; voltage 1.2 V; CMOS integrated circuits; Frequency measurement; Mixers; Noise measurement; Phase measurement; Phase noise; CMOS; quadrature input and quadrature output (QIQO); quadrature-injected current-mode-logic; regenerative frequency divider (ID#:14-2387) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6710207&isnumber=6759771
Machine Learning
Machine learning offers potential efficiencies and is an important tool in data mining. However, the "learned" or derived data must maintain integrity. Machine learning can also be used to identify threats and attacks. Research in this field is of particular interest in sensitive industries, including healthcare. The works cited here appeared in the first half of 2014.
- Mozaffari Kermani, M.; Sur-Kolay, S.; Raghunathan, A.; Jha, N.K., "Systematic Poisoning Attacks on and Defenses for Machine Learning in Healthcare," Biomedical and Health Informatics, IEEE Journal of, vol.PP, no.99, pp.1,1, July 2014. doi: 10.1109/JBHI.2014.2344095 Machine learning is being used in a wide range of application domains to discover patterns in large datasets. Increasingly, the results of machine learning drive critical decisions in applications related to healthcare and biomedicine. Such health-related applications are often sensitive, and thus any security breach would be catastrophic. Naturally, the integrity of the results computed by machine learning is of great importance. Recent research has shown that some machine learning algorithms can be compromised by augmenting their training datasets with malicious data, leading to a new class of attacks called poisoning attacks. A hindered diagnosis may have life-threatening consequences, while a false diagnosis may prompt users to distrust the machine learning algorithm or even abandon the entire system, and a false-positive classification may cause patient distress. In this paper, we present a systematic, algorithm-independent approach for mounting poisoning attacks across a wide range of machine learning algorithms and healthcare datasets. The proposed attack procedure generates input data which, when added to the training set, can either cause the results of machine learning to have targeted errors (e.g., increase the likelihood of classification into a specific class) or simply introduce arbitrary errors (incorrect classification). These attacks may be applied to both fixed and evolving datasets. They can be applied even when only statistics of the training dataset are available or, in some cases, even without access to the training dataset, although at a lower efficacy.
We establish the effectiveness of the proposed attacks using a suite of six machine learning algorithms and five healthcare datasets. Finally, we present countermeasures against the proposed generic attacks that are based on tracking and detecting deviations in various accuracy metrics, and benchmark their effectiveness. Keywords: (not provided) (ID#:14-2388) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6868201&isnumber=6363502
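The targeted-error variant described in this abstract can be illustrated with a toy model. The sketch below is not the authors' attack procedure: it poisons a simple nearest-centroid classifier by appending mislabeled points that drag one class centroid toward a chosen target point, and all data, classes, and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean training data: class 0 around (0, 0), class 1 around (4, 4).
X0 = rng.normal(0.0, 0.5, size=(50, 2))
X1 = rng.normal(4.0, 0.5, size=(50, 2))

def nearest_centroid_predict(X0, X1, x):
    """Classify x by its distance to each class centroid."""
    c0, c1 = X0.mean(axis=0), X1.mean(axis=0)
    return 0 if np.linalg.norm(x - c0) <= np.linalg.norm(x - c1) else 1

target = np.array([1.5, 1.5])               # point the attacker wants flipped
print(nearest_centroid_predict(X0, X1, target))            # 0 on clean data

# Targeted poisoning: append points labeled class 1 but placed near the
# target, dragging the class-1 centroid toward it.
poison = rng.normal(1.0, 0.2, size=(60, 2))
X1_poisoned = np.vstack([X1, poison])
print(nearest_centroid_predict(X0, X1_poisoned, target))   # 1 after poisoning
```

The same mechanism, shifting a learned decision statistic by augmenting the training set, underlies poisoning of far more capable learners.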
- Baughman, AK.; Chuang, W.; Dixon, K.R.; Benz, Z.; Basilico, J., "DeepQA Jeopardy! Gamification: A Machine-Learning Perspective," Computational Intelligence and AI in Games, IEEE Transactions on , vol.6, no.1, pp.55,66, March 2014. doi: 10.1109/TCIAIG.2013.2285651 DeepQA is a large-scale natural language processing (NLP) question-and-answer system that responds across a breadth of structured and unstructured data, from hundreds of analytics that are combined with over 50 models, trained through machine learning. After the 2011 historic milestone of defeating the two best human players in the Jeopardy! game show, the technology behind IBM Watson, DeepQA, is undergoing gamification into real-world business problems. Gamifying a business domain for Watson is a composite of functional, content, and training adaptation for nongame play. During domain gamification for medical, financial, government, or any other business, each system change affects the machine-learning process. As opposed to the original Watson Jeopardy!, whose class distribution of positive-to-negative labels is 1:100, in adaptation the computed training instances, question-and-answer pairs transformed into true-false labels, result in a very low positive-to-negative ratio of 1:100 000. Such initial extreme class imbalance during domain gamification poses a big challenge for the Watson machine-learning pipelines. The combination of ingested corpus sets, question-and-answer pairs, configuration settings, and NLP algorithms contribute toward the challenging data state. We propose several data engineering techniques, such as answer key vetting and expansion, source ingestion, oversampling classes, and question set modifications to increase the computed true labels. In addition, algorithm engineering, such as an implementation of the Newton-Raphson logistic regression with a regularization term, relaxes the constraints of class imbalance during training adaptation. 
We conclude by empirically demonstrating that data and algorithm engineering are complementary and indispensable to overcome the challenges in this first Watson gamification for real-world business problems. Keywords: business data processing; computer games; learning (artificial intelligence); natural language processing; question answering (information retrieval); text analysis; DeepQA Jeopardy! gamification; NLP algorithms; NLP question-and-answer system; Newton-Raphson logistic regression; Watson gamification; Watson machine-learning pipelines; algorithm engineering; business domain; configuration settings; data engineering techniques; domain gamification; extreme class imbalance; ingested corpus sets; large-scale natural language processing question-and-answer system; machine-learning process; nongame play; positive-to-negative ratio; question-and-answer pairs; real-world business problems; regularization term; structured data; training instances; true-false labels; unstructured data; Accuracy; Games; Logistics; Machine learning algorithms; Pipelines; Training; Gamification; machine learning; natural language processing (NLP); pattern recognition (ID#:14-2389) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6632881&isnumber=6766678
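The abstract mentions Newton-Raphson logistic regression with a regularization term under extreme class imbalance. As a hedged illustration of that idea, a generic textbook formulation rather than IBM's implementation, here is an L2-regularized logistic fit by Newton steps on a deliberately imbalanced toy set:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def newton_logreg(X, y, lam=1.0, iters=20):
    """L2-regularized logistic regression fit by Newton-Raphson.

    Each step solves (X^T W X + lam*I) d = X^T (y - p) - lam*w
    with W = diag(p * (1 - p)), then updates w <- w + d.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = sigmoid(X @ w)
        grad = X.T @ (y - p) - lam * w
        W = p * (1 - p)
        H = (X * W[:, None]).T @ X + lam * np.eye(d)
        w = w + np.linalg.solve(H, grad)
    return w

# Toy imbalanced data, echoing the positive-to-negative label skew above.
rng = np.random.default_rng(1)
Xneg = rng.normal(-2.0, 1.0, size=(500, 2))
Xpos = rng.normal(+2.0, 1.0, size=(10, 2))
X = np.vstack([Xneg, Xpos])
y = np.r_[np.zeros(500), np.ones(10)]

w = newton_logreg(X, y, lam=1.0)
acc = np.mean((sigmoid(X @ w) > 0.5) == y)
print(round(float(acc), 3))
```

The regularization term keeps the Hessian invertible and the weights bounded even when the rare class is nearly separable.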
- Stevanovic, M.; Pedersen, J.M., "An Efficient Flow-Based Botnet Detection Using Supervised Machine Learning," Computing, Networking and Communications (ICNC), 2014 International Conference on, pp.797, 801, 3-6 Feb. 2014. doi: 10.1109/ICCNC.2014.6785439 Botnet detection represents one of the most crucial prerequisites of successful botnet neutralization. This paper explores how accurate and timely detection can be achieved by using supervised machine learning as the tool of inferring about malicious botnet traffic. In order to do so, the paper introduces a novel flow-based detection system that relies on supervised machine learning for identifying botnet network traffic. For use in the system we consider eight highly regarded machine learning algorithms, indicating the best performing one. Furthermore, the paper evaluates how much traffic needs to be observed per flow in order to capture the patterns of malicious traffic. The proposed system has been tested through the series of experiments using traffic traces originating from two well-known P2P botnets and diverse non-malicious applications. The results of experiments indicate that the system is able to accurately and timely detect botnet traffic using purely flow-based traffic analysis and supervised machine learning. Additionally, the results show that in order to achieve accurate detection traffic flows need to be monitored for only a limited time period and number of packets per flow. This indicates a strong potential of using the proposed approach within a future on-line detection framework. 
Keywords: computer network security; invasive software; learning (artificial intelligence); peer-to-peer computing; telecommunication traffic; P2P botnets; botnet neutralization; flow-based botnet detection; flow-based traffic analysis; malicious botnet network traffic identification; nonmalicious applications; packet flow; supervised machine learning; Accuracy; Bayes methods; Feature extraction; Protocols; Support vector machines; Training; Vegetation; Botnet; Botnet detection; Machine learning; Traffic analysis; Traffic classification (ID#:14-2390) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785439&isnumber=6785290
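The general pattern behind flow-based supervised detection can be sketched generically (the feature distributions, flow generator, and k-NN classifier below are invented for illustration and are not the paper's traces or its eight evaluated algorithms): summarize each flow as a small feature vector, then train any off-the-shelf classifier on labeled flows.

```python
import numpy as np

rng = np.random.default_rng(2)

def flow_features(sizes, times):
    """Per-flow features: packet count, mean packet size, mean inter-arrival."""
    return np.array([len(sizes), np.mean(sizes), np.mean(np.diff(times))])

def make_flow(botnet):
    """Synthetic flow: bot-like traffic is small and regular, benign is large and bursty."""
    n = int(rng.integers(5, 20))
    sizes = rng.normal(100 if botnet else 800, 20, n)
    times = np.cumsum(rng.exponential(0.1 if botnet else 1.0, n))
    return flow_features(sizes, times)

train_X = np.array([make_flow(i % 2 == 0) for i in range(200)])
train_y = np.array([i % 2 == 0 for i in range(200)])

def knn_is_bot(x, k=5):
    """Majority vote among the k nearest training flows."""
    d = np.linalg.norm(train_X - x, axis=1)
    return train_y[np.argsort(d)[:k]].mean() > 0.5

test_flows = [make_flow(True) for _ in range(20)] + [make_flow(False) for _ in range(20)]
preds = [knn_is_bot(x) for x in test_flows]
acc = (np.mean(preds[:20]) + (1 - np.mean(preds[20:]))) / 2
print(acc)
```

The paper's observation that only a limited number of packets per flow need be monitored corresponds here to truncating the inputs of `flow_features`.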
- Aroussi, S.; Mellouk, A, "Survey on Machine Learning-Based QoE-QoS Correlation Models," Computing, Management and Telecommunications (ComManTel), 2014 International Conference on, pp.200,204, 27-29 April 2014. doi: 10.1109/ComManTel.2014.6825604 Machine learning provides a theoretical and methodological framework to quantify the relationship between user QoE (Quality of Experience) and network QoS (Quality of Service). This paper presents an overview of QoE-QoS correlation models based on machine learning techniques. According to the learning type, we propose a categorization of correlation models. For each category, we review the main existing works by citing deployed learning methods and model parameters (QoE measurement, QoS parameters and service type). Moreover, the survey will provide researchers with the latest trends and findings in this field. Keywords: learning (artificial intelligence); quality of experience; quality of service; telecommunication computing; QoE measurement; QoE-QoS correlation model; QoS parameter; QoS service type; machine learning; quality of experience; quality of service; Correlation; Data models; Packet loss; Predictive models; Quality of service; Streaming media; Correlation model; Machine Learning; Quality of Experience; Quality of Service (ID#:14-2391) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825604&isnumber=6825559
- Alsheikh, M.A; Lin, S.; Niyato, D.; Tan, Hwee-Pink, "Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications," Communications Surveys & Tutorials, IEEE, vol. PP, no.99, pp.1,1, April 2014. doi: 10.1109/COMST.2014.2320099 Wireless sensor networks monitor dynamic environments that change rapidly over time. This dynamic behavior is either caused by external factors or initiated by the system designers themselves. To adapt to such conditions, sensor networks often adopt machine learning techniques to eliminate the need for unnecessary redesign. Machine learning also inspires many practical solutions that maximize resource utilization and prolong the lifespan of the network. In this paper, we present an extensive literature review over the period 2002-2013 of machine learning methods that were used to address common issues in wireless sensor networks (WSNs). The advantages and disadvantages of each proposed algorithm are evaluated against the corresponding problem. We also provide a comparative guide to aid WSN designers in developing suitable machine learning solutions for their specific application challenges. Keywords: Algorithm design and analysis; Classification algorithms; Clustering algorithms; Machine learning algorithms; Principal component analysis; Routing; Wireless sensor networks (ID#:14-2392) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6805162&isnumber=5451756
- Fangming Ye; Zhaobo Zhang; Chakrabarty, K.; Xinli Gu, "Board-Level Functional Fault Diagnosis Using Multikernel Support Vector Machines and Incremental Learning," Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on , vol.33, no.2, pp.279,290, Feb. 2014. doi: 10.1109/TCAD.2013.2287184 Advanced machine learning techniques offer an unprecedented opportunity to increase the accuracy of board-level functional fault diagnosis and reduce product cost through successful repair. Ambiguous or incorrect diagnosis results lead to long debug times and even wrong repair actions, which significantly increase repair cost. We propose a smart diagnosis method based on multikernel support vector machines (MK-SVMs) and incremental learning. The MK-SVM method leverages a linear combination of single kernels to achieve accurate faulty-component classification based on the errors observed. The MK-SVMs thus generated can also be updated based on incremental learning, which allows the diagnosis system to quickly adapt to new error observations and provide even more accurate fault diagnosis. Two complex boards from industry, currently in volume production, are used to validate the proposed diagnosis approach in terms of diagnosis accuracy (success rate) and quantifiable improvements over previously proposed machine-learning methods based on several single-kernel SVMs and artificial neural networks. 
Keywords: electronic engineering computing; fault diagnosis; learning (artificial intelligence); neural nets; printed circuit testing; support vector machines; MK-SVM method; advanced machine learning technique; artificial neural network; board level functional fault diagnosis; faulty component classification; linear combination; multikernel support vector machine; smart diagnosis method; Accuracy; Circuit faults; Fault diagnosis; Kernel; Maintenance engineering; Support vector machines; Training; Board-level fault diagnosis; functional failures; incremental learning; kernel; machine learning; support-vector machines (ID#:14-2393) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6714627&isnumber=6714471
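The core multikernel idea, a linear combination of base kernels used as a single Gram matrix, can be sketched generically. Two simplifications relative to the paper: the combination weights are fixed by hand rather than learned, and a kernel ridge classifier stands in for the SVM.

```python
import numpy as np

def linear_kernel(A, B):
    return A @ B.T

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def multikernel(A, B, betas=(0.3, 0.7)):
    # Fixed convex combination; an MK-SVM would learn these weights.
    return betas[0] * linear_kernel(A, B) + betas[1] * rbf_kernel(A, B)

# Kernel ridge "classifier": alpha = (K + lam*I)^-1 y, f(x) = k(x)^T alpha.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-1, 0.3, (30, 2)), rng.normal(1, 0.3, (30, 2))])
y = np.r_[-np.ones(30), np.ones(30)]
K = multikernel(X, X)
alpha = np.linalg.solve(K + 0.1 * np.eye(60), y)

Xtest = np.vstack([rng.normal(-1, 0.3, (10, 2)), rng.normal(1, 0.3, (10, 2))])
ytest = np.r_[-np.ones(10), np.ones(10)]
pred = np.sign(multikernel(Xtest, X) @ alpha)
acc = np.mean(pred == ytest)
print(acc)
```

Incremental updates, as in the paper, would then refit `alpha` (or the SVM dual variables) as new labeled error observations arrive.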
- Breuker, D., "Towards Model-Driven Engineering for Big Data Analytics -- An Exploratory Analysis of Domain-Specific Languages for Machine Learning," System Sciences (HICSS), 2014 47th Hawaii International Conference on , vol., no., pp.758,767, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.101 Graphical models and general purpose inference algorithms are powerful tools for moving from imperative towards declarative specification of machine learning problems. Although graphical models define the principle information necessary to adapt inference algorithms to specific probabilistic models, entirely model-driven development is not yet possible. However, generating executable code from graphical models could have several advantages. It could reduce the skills necessary to implement probabilistic models and may speed up development processes. Both advantages address pressing industry needs. They come along with increased supply of data scientist labor, the demand of which cannot be fulfilled at the moment. To explore the opportunities of model-driven big data analytics, I review the main modeling languages used in machine learning as well as inference algorithms and corresponding software implementations. Gaps hampering direct code generation from graphical models are identified and closed by proposing an initial conceptualization of a domain-specific modeling language. 
Keywords: Big Data; computer graphics; data analysis; inference mechanisms; learning (artificial intelligence);program compilers; specification languages; big data analytics; direct code generation; domain-specific languages; domain-specific modeling language; general purpose inference algorithms; graphical models; machine learning problems; model-driven development; model-driven engineering; modeling languages; probabilistic models; Adaptation models; Computational modeling; Data models; Graphical models; Inference algorithms; Random variables; Unified modeling language; Graphical Models; Machine Learning; Model-driven Engineering (ID#:14-2394) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6758697&isnumber=6758592
- Aydogan, E.; Sen, S., "Analysis of Machine Learning Methods On Malware Detection," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, vol., no., pp.2066,2069, 23-25 April 2014. doi: 10.1109/SIU.2014.6830667 Nowadays, one of the most important security threats is new, unseen malicious executables. Current anti-virus systems have been fairly successful against malicious software whose signatures are known. However, they are very ineffective against new, unseen malicious software. In this paper, we aim to detect new, unseen malicious executables using machine learning techniques. We extract distinguishing structural features of software and employ machine learning techniques in order to detect malicious executables. Keywords: invasive software; learning (artificial intelligence); anti-virus systems; machine learning methods; malicious executables detection; malicious softwares; malware detection; security threats; software structural features; Conferences; Internet; Malware; Niobium; Signal processing; Software; machine learning; malware analysis and detection (ID#:14-2395) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830667&isnumber=6830164
- Kandasamy, K.; Koroth, P., "An Integrated Approach To Spam Classification On Twitter Using URL Analysis, Natural Language Processing And Machine Learning Techniques," Electrical, Electronics and Computer Science (SCEECS), 2014 IEEE Students' Conference on , vol., no., pp.1,5, 1-2 March 2014. doi: 10.1109/SCEECS.2014.6804508 In the present day world, people are so much habituated to Social Networks. Because of this, it is very easy to spread spam contents through them. One can access the details of any person very easily through these sites. No one is safe inside the social media. In this paper we are proposing an application which uses an integrated approach to the spam classification in Twitter. The integrated approach comprises the use of URL analysis, natural language processing and supervised machine learning techniques. In short, this is a three step process. Keywords: classification; learning (artificial intelligence) ;natural language processing; social networking (online);unsolicited e-mail; Twitter; URL analysis; natural language processing; social media; social networks ;spam classification; spam contents; supervised machine learning techniques; Accuracy; Machine learning algorithms; Natural language processing; Training; Twitter; Unsolicited electronic mail; URLs; machine learning; natural language processing; tweets (ID#:14-2396) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6804508&isnumber=6804412
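The three-step structure described above (URL analysis, then natural language processing, then a learned classifier) can be sketched as a toy pipeline. Everything below is an invented placeholder: the blocklist, the word weights, and a hand-set word score standing in for the trained model.

```python
import re
from collections import Counter

BLOCKLIST = {"bit.ly/spam123"}            # hypothetical known-bad URLs
SPAMMY = Counter({"free": 3, "win": 3, "click": 2})
HAMMY = Counter({"meeting": 3, "thanks": 2, "project": 2})

def classify_tweet(text):
    # Step 1: URL analysis against a blocklist.
    if any(u in BLOCKLIST for u in re.findall(r"\S+\.\S+/\S+", text)):
        return "spam"
    # Step 2: NLP - lowercase tokenization.
    tokens = re.findall(r"[a-z']+", text.lower())
    # Step 3: ML - word-score comparison (stand-in for a trained classifier).
    spam_score = sum(SPAMMY[t] for t in tokens)
    ham_score = sum(HAMMY[t] for t in tokens)
    return "spam" if spam_score > ham_score else "ham"

print(classify_tweet("Click to win a FREE prize"))               # spam
print(classify_tweet("Thanks, see you at the project meeting"))  # ham
```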
- Singh, N.; Chandra, N., "Integrating Machine Learning Techniques to Constitute a Hybrid Security System," Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on, vol., no., pp.1082,1087, 7-9 April 2014. doi: 10.1109/CSNT.2014.221 Computer Security has been discussed and improvised in many forms and using different techniques as well as technologies. The enhancements keep on adding as the security remains the fastest updating unit in a computer system. In this paper we propose a model for securing the system along with the network and enhance it more by applying machine learning techniques SVM (support vector machine) and ANN (Artificial Neural Network). Both the techniques are used together to generate results which are appropriate for analysis purpose and thus, prove to be the milestone for security. Keywords: learning (artificial intelligence); neural nets; security of data; support vector machines; ANN; SVM; artificial neural network; computer security; hybrid security system; machine learning techniques; support vector machine; Artificial neural networks; Intrusion detection; Neurons; Probabilistic logic; Support vector machines; Training; Artificial neural network; Host logs; Machine Learning; Network logs; Support vector machine (ID#:14-2397) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821566&isnumber=6821334
- Asmitha, K.A; Vinod, P., "A Machine Learning Approach For Linux Malware Detection," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on , vol., no., pp.825,830, 7-8 Feb. 2014. doi: 10.1109/ICICICT.2014.6781387 The increasing number of malware is becoming a serious threat to the private data as well as to the expensive computer resources. Linux is a Unix based machine and gained popularity in recent years. The malware attack targeting Linux has been increased recently and the existing malware detection methods are insufficient to detect malware efficiently. We are introducing a novel approach using machine learning for identifying malicious Executable Linkable Files. The system calls are extracted dynamically using system call tracer Strace. In this approach we identified best feature set of benign and malware specimens to build classification model that can classify malware and benign efficiently. The experimental results are promising which depict a classification accuracy of 97% to identify malicious samples. Keywords: Linux; invasive software; learning (artificial intelligence);pattern classification; Linux malware detection; Unix based machine; benign specimens; classification model; machine learning approach; malicious executable linkable files identification; malware specimens; system call tracer Strace; Accuracy; Malware; Testing; dynamic analysis; feature selection; system call (ID#:14-2398) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781387&isnumber=6781240
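The dynamic-analysis front end described above, system calls captured with strace and turned into classifier features, can be sketched minimally. The trace lines and vocabulary below are hypothetical, not from the paper's dataset:

```python
from collections import Counter

def syscall_histogram(strace_lines, vocab):
    """Bag-of-syscalls feature vector from strace-style output lines."""
    names = [ln.split("(", 1)[0].strip() for ln in strace_lines if "(" in ln]
    counts = Counter(names)
    return [counts.get(s, 0) for s in vocab]

# Hypothetical traces (illustrative only).
benign = ['open("/etc/passwd")', 'read(3)', 'close(3)']
suspicious = ['ptrace(PTRACE_TRACEME)', 'fork()', 'fork()', 'execve("/tmp/x")']

vocab = ["open", "read", "close", "ptrace", "fork", "execve"]
print(syscall_histogram(benign, vocab))      # [1, 1, 1, 0, 0, 0]
print(syscall_histogram(suspicious, vocab))  # [0, 0, 0, 1, 2, 1]
```

Feature selection, as in the paper, then reduces `vocab` to the calls that best separate malicious from benign ELF samples before training a classifier.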
- Esmalifalak, M.; Liu, L.; Nguyen, N.; Zheng, R.; Han, Z., "Detecting Stealthy False Data Injection Using Machine Learning in Smart Grid," Systems Journal, IEEE , vol. PP, no.99, pp.1,9, August 2014. doi: 10.1109/JSYST.2014.2341597 Aging power industries, together with the increase in demand from industrial and residential customers, are the main incentive for policy makers to define a road map to the next-generation power system called the smart grid. In the smart grid, the overall monitoring costs will be decreased, but at the same time, the risk of cyber attacks might be increased. Recently, a new type of attacks (called the stealth attack) has been introduced, which cannot be detected by the traditional bad data detection using state estimation. In this paper, we show how normal operations of power networks can be statistically distinguished from the case under stealthy attacks. We propose two machine-learning-based techniques for stealthy attack detection. The first method utilizes supervised learning over labeled data and trains a distributed support vector machine (SVM). The design of the distributed SVM is based on the alternating direction method of multipliers, which offers provable optimality and convergence rate. The second method requires no training data and detects the deviation in measurements. In both methods, principal component analysis is used to reduce the dimensionality of the data to be processed, which leads to lower computation complexities. The results of the proposed detection methods on IEEE standard test systems demonstrate the effectiveness of both schemes. Keywords: Anomaly detection; bad data detection (BDD); power system state estimation; support vector machines (SVMs) (ID#:14-2399) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6880823&isnumber=4357939
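A generic sketch of the subspace intuition behind such detectors (not the paper's distributed SVM, its ADMM training, or its test systems): normal measurements concentrate near a low-dimensional principal subspace, so a large residual outside that subspace flags injected data. All dimensions, noise levels, and the threshold rule here are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)

# Normal measurements lie near a low-dimensional subspace driven by the
# (hidden) system state; learn that subspace from historical data via PCA.
basis = rng.normal(size=(20, 3))                  # 20 sensors, 3 latent states
normal = (rng.normal(size=(500, 3)) @ basis.T
          + 0.05 * rng.normal(size=(500, 20)))

mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
P = Vt[:3]                                        # top-3 principal directions

def residual(z):
    """Norm of the component of z outside the learned subspace."""
    c = z - mean
    return np.linalg.norm(c - (c @ P.T) @ P)

tau = 1.5 * max(residual(z) for z in normal)      # crude detection threshold

clean = rng.normal(size=3) @ basis.T + 0.05 * rng.normal(size=20)
attacked = clean + 2.0 * rng.normal(size=20)      # injected false data
print(residual(clean) <= tau, residual(attacked) > tau)
```

Carefully constructed stealth attacks stay inside the measurement subspace, which is why the paper pairs dimensionality reduction with trained classifiers rather than residual thresholds alone.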
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Multidimensional Signal Processing
Research in multidimensional signal processing deals with issues such as those arising in automatic target detection and recognition problems, geophysical inverse problems, and medical estimation problems. Its goal is to develop methods to extract information from diverse data sources amid uncertainty. Research cited here was published or presented between January and September, 2014. It covers a range of subtopics including hidden communications channels, wave digital filters, SAR interferometry, and SAR tomography.
- Seleym, A, "High-rate Hidden Communications Channel: A Multi-Dimensional Signaling Approach," Integrated Communications, Navigation and Surveillance Conference (ICNS), 2014 , vol., no., pp.W4-1,W4-8, 8-10 April 2014. doi: 10.1109/ICNSurv.2014.6820026 Hidden communications is one recent method to provide reliable security in transferring information between entities. Data hiding in media carriers is a power limited and band-limited system, as a consequence, there is a tradeoff between the host media perceptual fidelity and the transferred data error rate. In this paper, a developed embedding approach is proposed by considering the altering process as a signaling communications problem. This approach uses a structured scheme of Multiple Trellis-Coded Quantization jointed with Multiple Trellis-Coded Modulation (MTCQ/MTCM) to generate the stego-cover space. The developed scheme allows transferring a high volume of information without causing a severe perceptual or statistical degradation, and also be robust to additive noise attacks. Keywords: quantisation (signal); steganography; trellis coded modulation; additive noise attack; data hiding; high rate hidden communications channel; host media perceptual fidelity; media carrier; multidimensional signaling; multiple trellis coded modulation; multiple trellis coded quantization; reliable security; signaling communications problem; stego cover space; Constellation diagram; Encoding; Noise; Nonlinear distortion; Quantization (signal); Vectors (ID#:14-2400) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6820026&isnumber=6819972
- Balasa, F.; Abuaesh, N.; Gingu, C.V.; Nasui, D.V., "Leakage-aware Scratch-Pad Memory Banking For Embedded Multidimensional Signal Processing," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on , vol., no., pp.5026,5030, 4-9 May 2014. doi: 10.1109/ICASSP.2014.6854559 Partitioning a memory into multiple banks that can be independently accessed is an approach mainly used for the reduction of the dynamic energy consumption. When leakage energy comes into play as well, the idle memory banks must be put in a low-leakage `dormant' state to save static energy when not accessed. The energy savings must be large enough to compensate the energy overhead spent by changing the bank status from active to dormant, then back to active again. This paper addresses the problem of energy-aware on-chip memory banking, taking into account - during the exploration of the search space - the idleness time intervals of the data mapped into the memory banks. As on-chip storage, we target scratch-pad memories (SPMs) since they are commonly used in embedded systems as an alternative to caches. The proposed approach proved to be computationally fast and very efficient when tested for several data-intensive applications, whose behavioral specifications contain multidimensional arrays as main data structures. 
Keywords: embedded systems; power aware computing; signal processing; storage management; SPMs; data structures; dynamic energy consumption reduction; embedded multidimensional signal processing; embedded systems; energy-aware on-chip memory banking; leakage energy ;leakage-aware scratch-pad memory banking; low-leakage dormant state; multidimensional arrays; on-chip storage; Arrays; Banking; Energy consumption; Lattices; Memory management; Signal processing algorithms; System-on-chip memory banking; memory management; multidimensional signal processing; scratch-pad memory (ID#:14-2401) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854559&isnumber=6853544
- Schwerdtfeger, T.; Kummert, A, "A Multidimensional Signal Processing Approach To Wave Digital Filters With Topology-Related Delay-Free Loops," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on , vol., no., pp.389,393, 4-9 May 2014. doi: 10.1109/ICASSP.2014.6853624 To avoid the occurrence of noncomputable, delay-free loops, classic Wave Digital Filters (WDFs) usually exhibit a tree-like topology. For the realization of prototype circuits that contain ring-like subnetworks, prior approaches require the decomposition of the structure and thus neglect the notion of modularity of the original Wave Digital concept. In this paper, a new modular approach based on Multidimensional Wave Digital Filters (MDWDFs) is presented. For this, the contractivity property of WDFs is shown. On that basis, the new approach is studied with respect to possible side-effects and an appropriate modification is proposed that counteracts these effects and significantly improves the convergence behaviour. Keywords: digital filters; network topology; delay-free loops; multidimensional signal processing; multidimensional wave digital filter; ring-like subnetwork; structure decomposition; topology related loops; Convergence; Delays; Digital filters; Mathematical model; Ports (Computers); Prototypes; Topology; Bridged-T Model; Contractivity; Delay-Free Loop; Multidimensional; Wave Digital Filter (ID#:14-2402) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6853624&isnumber=6853544
- Holt, K.M., "Total Nuclear Variation and Jacobian Extensions of Total Variation for Vector Fields," Image Processing, IEEE Transactions on, vol.23, no.9, pp.3975,3989, Sept. 2014. doi: 10.1109/TIP.2014.2332397 We explore a class of vectorial total variation (VTV) measures formed as the spatial sum of a pixel-wise matrix norm of the Jacobian of a vector field. We give a theoretical treatment that indicates that, while color smearing and affine-coupling bias (often reported as gray-scale bias) are typically cited as drawbacks for VTV, these are actually fundamental to smoothing vector direction (i.e., smoothing hue and saturation in color images). In addition, we show that encouraging different vector channels to share a common gradient direction is equivalent to minimizing Jacobian rank. We thus propose total nuclear variation (TNV), and since nuclear norm is the convex envelope of matrix rank, we argue that TNV is the optimal convex regularizer for enforcing shared directions. We also propose extended Jacobians, which use larger neighborhoods than the conventional finite difference operator, and we discuss efficient VTV optimization algorithms. In simple color image denoising experiments, TNV outperformed other common VTV regularizers, and was further improved by using extended Jacobians. TNV was also competitive with the method of nonlocal means, often outperforming it by 0.25-2 dB when using extended Jacobians. Keywords: Color; Image color analysis; Image reconstruction; Jacobian matrices; Materials; TV; Vectors; Color imaging; convex optimization; denoising; image reconstruction; inverse problems; multidimensional signal processing; regularization; total variation; vector-valued images (ID#:14-2403) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841619&isnumber=6862127
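The quantity itself is easy to state: at each pixel, form the 2 x C Jacobian of the vector field and take its nuclear norm (sum of singular values), then sum over pixels. The following is a direct numpy transcription of that definition using forward differences; the paper's extended Jacobians and optimization algorithms are not reproduced.

```python
import numpy as np

def tnv(u):
    """Total nuclear variation of a vector field u (H x W x C):
    sum over pixels of the nuclear norm of the 2 x C Jacobian,
    using forward differences."""
    gx = np.diff(u, axis=1, append=u[:, -1:, :])   # d/dx per channel
    gy = np.diff(u, axis=0, append=u[-1:, :, :])   # d/dy per channel
    J = np.stack([gx, gy], axis=-2)                # H x W x 2 x C Jacobians
    s = np.linalg.svd(J, compute_uv=False)         # singular values per pixel
    return s.sum()

# Channels sharing one gradient direction give rank-1 Jacobians, so TNV
# is smaller than when channels vary in independent directions.
x = np.linspace(0, 1, 16)
shared = np.stack([np.tile(x, (16, 1))] * 3, axis=-1)   # all channels vary in x
mixed = np.stack([np.tile(x, (16, 1)),
                  np.tile(x[:, None], (1, 16)),         # middle channel varies in y
                  np.tile(x, (16, 1))], axis=-1)
print(tnv(shared) < tnv(mixed))  # True
```

This is the sense in which minimizing TNV encourages channels to share a common gradient direction: shared directions mean low Jacobian rank, and the nuclear norm is the convex surrogate for rank.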
- Lombardini, F.; Cai, F., "Temporal Decorrelation-Robust SAR Tomography," Geoscience and Remote Sensing, IEEE Transactions on , vol.52, no.9, pp.5412,5421, Sept. 2014. doi: 10.1109/TGRS.2013.2288689 Much interest is continuing to grow in advanced interferometric synthetic aperture radar (SAR) methods for full 3-D imaging, particularly of volumetric forest scatterers. Multibaseline (MB) SAR tomographic elevation beam forming, i.e., spatial spectral estimation, is a promising technique in this framework. In this paper, the important effect of temporal decorrelation during the repeat-pass MB acquisition is tackled, analyzing the impact on superresolution (MUSIC) tomography with limited sparse data. Moreover, new tomographic methods robust to temporal decorrelation phenomena are proposed, exploiting the advanced differential tomography concept that produces "space-time" signatures of scattering dynamics in the SAR cell. To this aim, a 2-D version of MUSIC and a generalized MUSIC method matched to nonline spectra are applied to decouple the nuisance temporal signal components in the spatial spectral estimation. Simulated analyses are reported for different geometrical and temporal parameters, showing that the new concept of restoring tomographic performance in temporal decorrelating forest scenarios through differential tomography is promising. 
Keywords: array signal processing; decorrelation; forestry; image matching; image resolution; image restoration; optical tomography; radar imaging; synthetic aperture radar; 2D MUSIC version; 3D imaging; MB SAR tomographic elevation beam forming; SAR; interferometric synthetic aperture radar method; multibaseline SAR tomographic elevation beam forming; nuisance temporal signal component; repeat-pass MB acquisition; space-time signature; spatial spectral estimation; superresolution tomography; temporal decorrelation-robust SAR tomography; volumetric forest scattering dynamic; Decorrelation; Estimation; Frequency estimation; Multiple signal classification; Synthetic aperture radar; Tomography; Decorrelation; electromagnetic tomography; multidimensional signal processing; radar interferometry; spectral analysis; synthetic aperture radar (SAR) (ID#:14-2404) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6679227&isnumber=6756973
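The spatial spectral estimation at the heart of multibaseline tomography can be illustrated with a plain 1D MUSIC pseudospectrum; the paper's 2D space-time and generalized-MUSIC variants are not reproduced, and the baselines, frequencies, and noise level below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
M, snapshots = 10, 200                      # baselines and looks (arbitrary)
z_true = np.array([0.10, 0.35])             # normalized elevation frequencies

# Two scatterers observed across M baselines, plus noise.
A = np.exp(2j * np.pi * np.outer(np.arange(M), z_true))
S = rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))
N = 0.1 * (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))
Y = A @ S + N

R = Y @ Y.conj().T / snapshots              # sample covariance
_, V = np.linalg.eigh(R)                    # eigenvalues in ascending order
En = V[:, :-2]                              # noise subspace (two sources assumed)

grid = np.linspace(0, 0.5, 501)
steer = np.exp(2j * np.pi * np.outer(np.arange(M), grid))
p = 1.0 / np.linalg.norm(En.conj().T @ steer, axis=0) ** 2  # MUSIC pseudospectrum

loc = [i for i in range(1, len(grid) - 1) if p[i] > p[i - 1] and p[i] > p[i + 1]]
peaks = sorted(grid[i] for i in sorted(loc, key=lambda i: p[i])[-2:])
print(np.round(peaks, 2))                   # peaks near the true frequencies
```

Temporal decorrelation, the paper's subject, perturbs the sample covariance `R`, which is what degrades these peaks and motivates the differential-tomography remedies.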
- Hao Fang; Vorobyov, S.A; Hai Jiang; Taheri, O., "Permutation Meets Parallel Compressed Sensing: How to Relax Restricted Isometry Property for 2D Sparse Signals," Signal Processing, IEEE Transactions on , vol.62, no.1, pp.196,210, Jan.1, 2014. doi: 10.1109/TSP.2013.2284762 Traditional compressed sensing considers sampling a 1D signal. For a multidimensional signal, if reshaped into a vector, the required size of the sensing matrix becomes dramatically large, which increases the storage and computational complexity significantly. To solve this problem, the multidimensional signal is reshaped into a 2D signal, which is then sampled and reconstructed column by column using the same sensing matrix. This approach is referred to as parallel compressed sensing, and it has much lower storage and computational complexity. For a given reconstruction performance of parallel compressed sensing, if a so-called acceptable permutation is applied to the 2D signal, the corresponding sensing matrix is shown to have a smaller required order of restricted isometry property condition, and thus, lower storage and computation complexity at the decoder are required. A zigzag-scan-based permutation is shown to be particularly useful for signals satisfying the newly introduced layer model. As an application of the parallel compressed sensing with the zigzag-scan-based permutation, a video compression scheme is presented. It is shown that the zigzag-scan-based permutation increases the peak signal-to-noise ratio of reconstructed images and video frames. 
Keywords: compressed sensing; matrix algebra; parallel processing; 2D sparse signals; computational complexity; image reconstruction; isometry property; multidimensional signal; parallel compressed sensing; peak signal-to-noise ratio; sensing matrix; video compression scheme; video frames; zigzag scan based permutation; Compressed sensing; Computational complexity; Educational institutions; Image reconstruction; Sensors; Size measurement; Sparse matrices; Compressed sensing; multidimensional signal processing; parallel processing; permutation (ID#:14-2405) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6619412&isnumber=6678249
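The zigzag-scan-based permutation is easy to sketch. The snippet below (function names and the 4x4 example are ours, not the authors' code) flattens a 2D array along anti-diagonals, JPEG-style, which spreads energy concentrated in one corner more evenly across the columns that parallel compressed sensing samples independently:

```python
import numpy as np

def zigzag_indices(rows, cols):
    """(row, col) pairs in JPEG-style zigzag order over a rows x cols grid."""
    order = []
    for s in range(rows + cols - 1):                  # walk each anti-diagonal
        diag = [(r, s - r) for r in range(rows) if 0 <= s - r < cols]
        if s % 2 == 0:
            diag.reverse()                            # alternate traversal direction
        order.extend(diag)
    return order

def zigzag_permute(x):
    """Flatten a 2D signal in zigzag order before column-wise sampling."""
    return np.array([x[r, c] for r, c in zigzag_indices(*x.shape)])

a = np.arange(16).reshape(4, 4)
print(zigzag_permute(a))   # order: 0 1 4 8 5 2 3 6 9 12 13 10 7 11 14 15
```

Applying the inverse permutation after reconstruction restores the original layout; the paper's "acceptable permutation" criterion formalizes when such a reordering lowers the required restricted isometry order.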
- Lyons, S.M.J.; Sarkka, S.; Storkey, AJ., "Series Expansion Approximations of Brownian Motion for Non-Linear Kalman Filtering of Diffusion Processes," Signal Processing, IEEE Transactions on , vol.62, no.6, pp.1514,1524, March 15, 2014. doi: 10.1109/TSP.2014.2303430 In this paper, we describe a novel application of sigma-point methods to continuous-discrete filtering. The nonlinear continuous-discrete filtering problem is often computationally intractable to solve. Assumed density filtering methods attempt to match statistics of the filtering distribution to some set of more tractable probability distributions. Filters such as these usually decompose the problem into two sub-problems. The first of these is a prediction step, in which one uses the known dynamics of the signal to predict its state at time tk+1 given observations up to time tk. In the second step, one updates the prediction upon arrival of the observation at time tk+1. The aim of this paper is to describe a novel method that improves the prediction step. We decompose the Brownian motion driving the signal in a generalised Fourier series, which is truncated after a number of terms. This approximation to Brownian motion can be described using a relatively small number of Fourier coefficients, and allows us to compute statistics of the filtering distribution with a single application of a sigma-point method. Assumed density filters that exist in the literature usually rely on discretisation of the signal dynamics followed by iterated application of a sigma point transform (or a limiting case thereof). Iterating the transform in this manner can lead to loss of information about the filtering distribution in highly non-linear settings. We demonstrate that our method is better equipped to cope with such problems.
Keywords: Fourier series; Kalman filters; approximation theory; iterative methods; nonlinear filters; statistical distributions; Brownian motion approximation; Fourier coefficients; assumed density filtering methods; assumed density filters; diffusion processes; generalised Fourier series; nonlinear Kalman filtering; nonlinear continuous-discrete filtering problem; series expansion approximations; sigma-point methods; signal dynamic discretisation; tractable probability distributions; Approximation methods; Differential equations; Kalman filters; Mathematical model; Noise; Stochastic processes; Transforms; Kalman filters; Markov processes; multidimensional signal processing; nonlinear filters (ID#:14-2406) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6728679&isnumber=6744712
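The series expansion the abstract describes can be illustrated with the standard Karhunen-Loeve sine basis for Brownian motion on [0, 1]; note this is the textbook expansion chosen for illustration, not necessarily the exact generalised Fourier basis the authors use:

```python
import numpy as np

def brownian_series(t, z):
    """Truncated series approximation of Brownian motion on [0, 1]:
    B(t) ~ sum_k z_k * sqrt(2) * sin((k - 1/2) * pi * t) / ((k - 1/2) * pi),
    with z_k i.i.d. standard normal Fourier coefficients."""
    freq = (np.arange(1, len(z) + 1) - 0.5) * np.pi
    return (np.sqrt(2) * np.sin(np.outer(np.atleast_1d(t), freq)) / freq) @ z

# Sanity check: the variance of the truncated series at time t converges
# to t, the variance of true Brownian motion.
K = 1000
freq = (np.arange(1, K + 1) - 0.5) * np.pi
var_at_1 = np.sum(2.0 * np.sin(freq) ** 2 / freq ** 2)
print(var_at_1)   # approaches 1.0 as K grows
```

Because the path is now parameterized by a finite vector of coefficients z, the prediction step can propagate sigma points through the signal dynamics in a single pass rather than iterating a discretised transform.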
- Xuefeng Liu; Bourennane, S.; Fossati, C., "Reduction of Signal-Dependent Noise From Hyperspectral Images for Target Detection," Geoscience and Remote Sensing, IEEE Transactions on , vol.52, no.9, pp.5396,5411, Sept. 2014. doi: 10.1109/TGRS.2013.2288525 Tensor-decomposition-based methods for reducing random noise components in hyperspectral images (HSIs), both dependent and independent from signal, are proposed. In this paper, noise is described by a parametric model that accounts for the dependence of noise variance on the signal. This model is thus suitable for the cases where photon noise is dominant compared with the electronic noise contribution. To denoise HSIs distorted by both signal-dependent (SD) and signal-independent (SI) noise, some hybrid methods, which reduce noise in two steps according to the different statistical properties of those two types of noise, are proposed in this paper. The first one, named the PARAFACSI-PARAFACSD method, uses a multilinear algebra model, i.e., parallel factor analysis (PARAFAC) decomposition, twice to remove SI and SD noise, respectively. The second one is a combination of the well-known multiple-linear-regression-based approach termed the HYperspectral Noise Estimation (HYNE) method and PARAFAC decomposition, which is named the HYNE-PARAFAC method. The last one combines the multidimensional Wiener filter (MWF) method and PARAFAC decomposition and is named the MWF-PARAFAC method. For HSIs distorted by both SD and SI noise, first, most of the SI noise is removed from the original image by PARAFAC decomposition, the HYNE method, or the MWF method based on the statistical property of SI noise; then, the residual SD components can be further reduced by PARAFAC decomposition due to its own statistical property. The performances of the proposed methods are assessed on simulated HSIs.
The results on the real-world airborne HSI Hyperspectral Digital Imagery Collection Experiment (HYDICE) are also presented and analyzed. These experiments show that it is worth taking the noise signal-dependence hypothesis into account for processing HYDICE data. Keywords: Wiener filters; geophysical image processing; hyperspectral imaging; image denoising; interference suppression; multidimensional signal processing; object detection; random noise; singular value decomposition; statistical analysis; tensors; HSI distortion; HYDICE; HYNE method; MWF method; PARAFAC decomposition; PARAFACSD method; PARAFACSI method; SD noise removal; SI noise removal; airborne HSI; hybrid method; hyperspectral digital imagery collection experiment; hyperspectral image; hyperspectral noise estimation; image denoising; multidimensional Wiener filter; multilinear algebra model; noise variance; parallel factor analysis; parametric model; random noise component reduction; residual SD component reduction; signal dependent noise reduction; signal independent noise; statistical property; target detection; tensor decomposition-based method; Covariance matrices; Hyperspectral sensors; Noise; Noise reduction; Silicon; Tensile stress; Vectors; Denoising; PARAFAC; hyperspectral image (HSI); signal-dependent (SD) noise; target detection (ID#:14-2407) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6675784&isnumber=6756973
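The kind of parametric noise model the abstract refers to can be sketched as follows. The specific form y = x + sqrt(x)*u + w and the parameter values below are a common photon-noise model used for illustration, not necessarily the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_sd_si_noise(x, alpha=0.05, sigma=0.1):
    """Add signal-dependent (photon-like) and signal-independent noise.

    y = x + sqrt(x) * u + w, where u ~ N(0, alpha^2) yields a noise variance
    alpha^2 * x that grows with the signal (SD part), and w ~ N(0, sigma^2)
    is the electronic-noise floor (SI part)."""
    u = rng.normal(0.0, alpha, x.shape)
    w = rng.normal(0.0, sigma, x.shape)
    return x + np.sqrt(x) * u + w

x = np.full((200, 200), 100.0)        # flat bright patch, intensity 100
y = add_sd_si_noise(x)
# Empirical variance should be near alpha^2 * x + sigma^2 = 0.25 + 0.01
print(y.var())
```

On a flat patch, the measured variance splits cleanly into the signal-proportional term and the constant floor, which is the statistical distinction the two-step hybrid methods exploit.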
- Xun Chen; Aiping Liu; McKeown, M.J.; Poizner, H.; Wang, Z.J., "An EEMD-IVA Framework for Concurrent Multidimensional EEG and Unidimensional Kinematic Data Analysis," Biomedical Engineering, IEEE Transactions on , vol.61, no.7, pp.2187,2198, July 2014. doi: 10.1109/TBME.2014.2319294 Joint blind source separation (JBSS) is a means to extract common sources simultaneously found across multiple datasets, e.g., electroencephalogram (EEG) and kinematic data jointly recorded during reaching movements. Existing JBSS approaches are designed to handle multidimensional datasets, yet to our knowledge, there is no existing means to examine common components that may be found across a unidimensional dataset and a multidimensional one. In this paper, we propose a simple, yet effective method to achieve the goal of JBSS when concurrent multidimensional EEG and unidimensional kinematic datasets are available, by combining ensemble empirical mode decomposition (EEMD) with independent vector analysis (IVA). We demonstrate the performance of the proposed method through numerical simulations and application to data collected from reaching movements in Parkinson's disease. The proposed method is a promising JBSS tool for real-world biomedical signal processing applications. 
Keywords: biomechanics; blind source separation; data analysis; diseases; electroencephalography; kinematics; medical signal processing; multidimensional signal processing; numerical analysis; EEMD-IVA framework; Parkinson disease; concurrent multidimensional EEG; electroencephalogram; ensemble empirical mode decomposition; independent vector analysis; joint blind source separation; kinematic data joint recording; multidimensional datasets; multiple datasets; numerical simulations; reaching movements; real-world biomedical signal processing applications; unidimensional kinematic data analysis; unidimensional kinematic datasets; Data analysis; Data mining; Electroencephalography; Joints; Kinematics; Noise; Vectors; Data fusion; EEG; EEMD; IVA; JBSS; unidimensional (ID#:14-2408) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6803885&isnumber=6835114
- Paskaleva, B.S.; Godoy, S.E.; Woo-Yong Jang; Bender, S.C.; Krishna, S.; Hayat, M.M., "Model-Based Edge Detector for Spectral Imagery Using Sparse Spatiospectral Masks," Image Processing, IEEE Transactions on , vol.23, no.5, pp.2315,2327, May 2014. doi: 10.1109/TIP.2014.2315154 Two model-based algorithms for edge detection in spectral imagery are developed that specifically target capturing intrinsic features such as isoluminant edges that are characterized by a jump in color but not in intensity. Given prior knowledge of the classes of reflectance or emittance spectra associated with candidate objects in a scene, a small set of spectral-band ratios, which most profoundly identify the edge between each pair of materials, are selected to define an edge signature. The bands that form the edge signature are fed into a spatial mask, producing a sparse joint spatiospectral nonlinear operator. The first algorithm achieves edge detection for every material pair by matching the response of the operator at every pixel with the edge signature for the pair of materials. The second algorithm is a classifier-enhanced extension of the first algorithm that adaptively accentuates distinctive features before applying the spatiospectral operator. Both algorithms are extensively verified using spectral imagery from the airborne hyperspectral imager and from a dots-in-a-well midinfrared imager. In both cases, the multicolor gradient (MCG) and the hyperspectral/spatial detection of edges (HySPADE) edge detectors are used as a benchmark for comparison. The results demonstrate that the proposed algorithms outperform the MCG and HySPADE edge detectors in accuracy, especially when isoluminant edges are present. By requiring only a few bands as input to the spatiospectral operator, the algorithms enable significant levels of data compression in band selection.
In the presented examples, the required operations per pixel are reduced by a factor of 71 with respect to those required by the MCG edge detector. Keywords: data compression; edge detection; image colour analysis; infrared imaging; multidimensional signal processing; HySPADE edge detectors; MCG edge detector; airborne hyperspectral imager; data compression; dots-in-a-well midinfrared imager; edge signature; hyperspectral-spatial detection of edges; isoluminant edges; model based edge detector; multicolor gradient; sparse joint spatiospectral nonlinear operator; sparse spatiospectral masks; spatial mask; spectral band ratio; spectral imagery; Detectors; Gray-scale; Hyperspectral imaging; Image color analysis; Image edge detection; Materials; Standards; Edge detection; classification; isoluminant edge; multicolor edge detection; spatio-spectral mask; spectral ratios (ID#:14-2409) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781601&isnumber=6779706
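The edge-signature idea, picking the spectral-band ratio that best discriminates a pair of materials, can be sketched with a brute-force search; the three-band spectra and the function below are illustrative only, not the authors' selection procedure:

```python
import numpy as np

def best_band_ratio(spec_a, spec_b):
    """Pick the band pair (i, j) whose ratio best separates two materials,
    i.e. maximizes |spec_a[i]/spec_a[j] - spec_b[i]/spec_b[j]|."""
    n = len(spec_a)
    best, pair = -1.0, (0, 0)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            contrast = abs(spec_a[i] / spec_a[j] - spec_b[i] / spec_b[j])
            if contrast > best:
                best, pair = contrast, (i, j)
    return pair, best

# Two isoluminant materials: same total intensity, different spectral shape,
# so an intensity-based detector would miss the edge but a ratio catches it.
mat_a = np.array([0.2, 0.5, 0.3])
mat_b = np.array([0.5, 0.2, 0.3])
pair, contrast = best_band_ratio(mat_a, mat_b)
print(pair, contrast)
```

Feeding a handful of such ratios into a small spatial mask is what produces the sparse spatiospectral operator the paper matches against the edge signature at each pixel.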
- Kamislioglu, B.; Karaboga, N., "Design of FIR QMF Bank Using Windowing Functions," Signal Processing and Communications Applications Conference (SIU), 2014 22nd , vol., no., pp.95,99, 23-25 April 2014. doi: 10.1109/SIU.2014.6830174 Over the years, filter banks have been used efficiently in applications such as single- and multidimensional signal processing, communication systems, biomedical signal processing, word coding, and sub-band coding, where a single filter bank is designed in place of multiple custom filters. In this study, a special case of two-channel filter banks known as the QMF (Quadrature Mirror Filter) bank is designed using the Kaiser, Chebyshev, and Hanning windowing methods, with the design based on an optimization of the filter's cutoff frequency. The QMF bank design is evaluated by its peak reconstruction error (PRE). Numerical results and comparisons for the designed filter banks are given. Keywords: Chebyshev approximation; channel bank filters; quadrature mirror filters; Chebyshev methods; FIR QMF bank design; Hanning windowing methods; Kaiser design; QMF bank design; biomedical signal processing; communication systems; design optimization; filter banks; filter cutoff frequency; multidimensional signal processing; peak reconstruction error; quadrature mirror filter bank; subband coding; two-channel filter banks; windowing functions; word coding; Chebyshev approximation; Conferences; Encoding; Filter banks; Finite impulse response filters; Mirrors (ID#:14-2410) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830174&isnumber=6830164
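A minimal version of the windowed-design-plus-cutoff-optimization idea can be sketched as follows (Hann window only; the tap count, sweep range, and function names are our choices, not the paper's): the highpass branch is the quadrature mirror H1(z) = H0(-z) of the lowpass prototype, and the cutoff is swept to shrink the peak reconstruction error.

```python
import numpy as np

def lowpass_fir(num_taps, cutoff):
    """Hann-windowed sinc lowpass prototype; cutoff as a fraction of Nyquist."""
    n = np.arange(num_taps) - (num_taps - 1) / 2.0
    return cutoff * np.sinc(cutoff * n) * np.hanning(num_taps)

def pre(h0, n_freq=512):
    """Peak reconstruction error of the two-channel QMF bank built from h0:
    max deviation of |H0(w)|^2 + |H1(w)|^2 from 1, with H1(z) = H0(-z)."""
    h1 = h0 * (-1.0) ** np.arange(len(h0))        # quadrature mirror branch
    w = np.linspace(0.0, np.pi, n_freq)
    E = np.exp(-1j * np.outer(w, np.arange(len(h0))))
    return np.max(np.abs(np.abs(E @ h0) ** 2 + np.abs(E @ h1) ** 2 - 1.0))

# Placing the cutoff exactly at a quarter of the sampling rate leaves a large
# dip at w = pi/2; nudging the cutoff upward (the optimization step) shrinks it.
cutoffs = np.linspace(0.50, 0.65, 31)
errors = [pre(lowpass_fir(32, c)) for c in cutoffs]
best = cutoffs[int(np.argmin(errors))]
print(best, min(errors))
```

The same sweep applies unchanged to Kaiser or Chebyshev windows by swapping the window function in the prototype design.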
- Deyun Wei; Yuanmin Li, "Reconstruction of Multidimensional Bandlimited Signals From Multichannel Samples In Linear Canonical Transform Domain," Signal Processing, IET , vol.8, no.6, pp.647,657, August 2014. doi: 10.1049/iet-spr.2013.0240 The linear canonical transform (LCT) has been shown to be a powerful tool for optics and signal processing. In this study, the authors address the problem of signal reconstruction from the multidimensional multichannel samples in the LCT domain. Firstly, they pose and solve the problem of expressing the kernel of the multidimensional LCT in the elementary functions. Secondly, they propose the multidimensional multichannel sampling (MMS) for the bandlimited signal in the LCT domain based on a basis expansion of an exponential function. The MMS expansion, which is constructed by the ordinary convolution structure, can reduce the effect of the spectral leakage and is easy to implement. Thirdly, based on the MMS expansion, they obtain the reconstruction method for the multidimensional derivative sampling and the periodic non-uniform sampling by designing the system filter transfer functions. Finally, the simulation results and the potential applications of the MMS are presented. In particular, the application of the multidimensional derivative sampling in the context of image scaling for image super-resolution is discussed. Keywords: signal processing; transforms; LCT; MMS; bandlimited signal; image scaling; image super-resolution; linear canonical transform domain; multichannel samples; multidimensional bandlimited signal reconstruction; multidimensional multichannel samples; multidimensional multichannel sampling; optics; signal processing; transfer functions (ID#:14-2411) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6869171&isnumber=6869162
- Wen-Long Chin; Chun-Wei Kao; Hsiao-Hwa Chen; Teh-Lu Liao, "Iterative Synchronization-Assisted Detection of OFDM Signals in Cognitive Radio Systems," Vehicular Technology, IEEE Transactions on , vol.63, no.4, pp.1633,1644, May 2014. doi: 10.1109/TVT.2013.2285389 Despite many attractive features of an orthogonal frequency-division multiplexing (OFDM) system, the signal detection in an OFDM system over multipath fading channels remains a challenging issue, particularly in a relatively low signal-to-noise ratio (SNR) scenario. This paper presents an iterative synchronization-assisted OFDM signal detection scheme for cognitive radio (CR) applications over multipath channels in low-SNR regions. To detect an OFDM signal, a log-likelihood ratio (LLR) test is employed without additional pilot symbols using a cyclic prefix (CP). Analytical results indicate that the LLR of received samples at a low SNR can be approximated by their log-likelihood (LL) functions, thus allowing us to estimate synchronization parameters for signal detection. The LL function is complex and depends on various parameters, including correlation coefficient, carrier frequency offset (CFO), symbol timing offset, and channel length. Decomposing a synchronization problem into several relatively simple parameter estimation subproblems eliminates a multidimensional grid search. An iterative scheme is also devised to implement a synchronization process. Simulation results confirm the effectiveness of the proposed detector. 
Keywords: OFDM modulation; cognitive radio; fading channels; iterative methods; multipath channels; parameter estimation; signal detection; synchronisation; LLR; OFDM signal detection; SNR; carrier frequency offset; cognitive radio systems; correlation coefficient; cyclic prefix; iterative synchronization; log likelihood functions; log-likelihood ratio; multidimensional grid search; multipath channels; multipath fading channels; orthogonal frequency division multiplexing; parameter estimation subproblems; signal-to-noise ratio; synchronization problem; Correlation; Detectors; OFDM; Signal to noise ratio; Synchronization; Cognitive radio; Cognitive radio (CR); cyclic prefix; cyclic prefix (CP); orthogonal frequency-division multiplexing; orthogonal frequency-division multiplexing (OFDM); synchronization (ID#:14-2412) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6627985&isnumber=6812142
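The paper's full LLR detector jointly estimates CFO, timing, and channel length; the sketch below shows only the underlying cyclic-prefix correlation that makes an OFDM signal detectable at low SNR (signal parameters, names, and the 0 dB scenario are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def ofdm_symbol(n_fft=64, n_cp=16):
    """One OFDM symbol: random QPSK subcarriers, IFFT, cyclic prefix."""
    qpsk = (rng.choice([-1, 1], n_fft) + 1j * rng.choice([-1, 1], n_fft)) / np.sqrt(2)
    x = np.fft.ifft(qpsk) * np.sqrt(n_fft)        # unit average power
    return np.concatenate([x[-n_cp:], x])         # CP = copy of the symbol tail

def cp_statistic(r, n_fft=64):
    """Normalized correlation between samples spaced n_fft apart.
    The cyclic prefix makes this large for OFDM, near zero for pure noise."""
    c = np.sum(r[:-n_fft] * np.conj(r[n_fft:]))
    p = np.sum(np.abs(r) ** 2)
    return np.abs(c) / p

sig = np.concatenate([ofdm_symbol() for _ in range(50)])
noise = (rng.normal(size=sig.size) + 1j * rng.normal(size=sig.size)) / np.sqrt(2)
stat_sig = cp_statistic(sig + noise)    # OFDM present at 0 dB SNR
stat_noise = cp_statistic(noise)        # noise only
print(stat_sig, stat_noise)
```

Thresholding this statistic gives a pilot-free presence test; the LLR formulation in the paper additionally uses the phase of the correlation to estimate the carrier frequency offset.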
- Alvarez-Perez, J.L., "A Multidimensional Extension of the Concept of Coherence in Polarimetric SAR Interferometry," Geoscience and Remote Sensing, IEEE Transactions on, vol.PP, no.99, pp.1, 14, July 2014. doi: 10.1109/TGRS.2014.2336805 Interferometric synthetic aperture radar (InSAR) is a phase-based radar signal processing technique that has been addressed from a polarimetric point of view since the late 1990s, starting with Cloude and Papathanassiou's foundational work. Polarimetric InSAR (PolInSAR) has consolidated as an active field of research in parallel to non-PolInSAR. Regarding the latter, there have been a number of issues that were discussed in an earlier paper from which some other questions related to Cloude's PolInSAR come out naturally. In particular, they affect the usual understanding of coherence and statistical independence. Coherence involves the behavior of electromagnetic waves in at least a pair of points, and it is crucially related to the statistical independence of scatterers in a complex scene. Although this would seem to allow PolInSAR to overcome the difficulties involving the controversial confusion between statistical independence and polarization as present in PolSAR, Cloude's PolInSAR originally inherited the idea of separating physical contributors to the scattering phenomenon through the use of singular values and vectors. This was an assumption consistent with Cloude's PolSAR postulates that was later set aside. We propose the introduction of a multidimensional coherence tensor that includes PolInSAR's polarimetric interferometry matrix Ω12 as its 2-D case. We show that some important properties of the polarimetric interferometry matrix are incidental to its bidimensionality. Notably, this exceptional behavior in 2-D seems to suggest that the singular value decomposition (SVD) of Ω12 does not provide a physical insight into the scattering problem in the sense of splitting different scattering contributors.
It might be argued that Cloude's PolInSAR in its current form does not rely on the SVD of Ω12 but on other underlying optimization schemes. The drawbacks of such ulterior developments and the failure of the maximum coherence separation procedure to be a consistent scheme for surface topography estimation in a two-layer model are discussed in depth in this paper. Nevertheless, turning back to the SVD of Ω12, the use of the singular values of a prewhitened version of Ω12 is consistent with a leading method of characterizing coherence in modern Optics. For this reason, the utility of the SVD of Ω12 as a means of characterizing coherence is analyzed here and extended to higher dimensionalities. Finally, these extensions of the concept of coherence to the multidimensional case are tested and compared with the 2-D case by numerically simulating the scattered electromagnetic field from a rough surface. Keywords: Coherence; Interferometry; Matrix decomposition; Tensile stress; Vectors; Coherence; electromagnetic scattering; polarimetric synthetic aperture radar interferometry (PolInSAR) (ID#:14-2413) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6868983&isnumber=4358825
- Di Franco, Carmelo; Franchino, Gianluca; Marinoni, Mauro, "Data Fusion For Relative Localization Of Wireless Mobile Nodes," Industrial Embedded Systems (SIES), 2014 9th IEEE International Symposium on, vol., no., pp.58,65, 18-20 June 2014. doi: 10.1109/SIES.2014.6871187 Monitoring teams of mobile nodes is becoming crucial in a growing number of activities. When it is not possible to use fixed references or external measurements, a practicable solution is to derive relative positions from local communication. In this work, we propose an anchor-free Received Signal Strength Indicator (RSSI) method aimed at small multi-robot teams. Information from an Inertial Measurement Unit (IMU) mounted on the nodes, processed with a Kalman filter, is used to estimate the robot dynamics, thus increasing the quality of RSSI measurements. A Multidimensional Scaling algorithm is then used to compute the network topology from improved RSSI data provided by all nodes. A set of experiments performed on data acquired from a real scenario shows the improvements over RSSI-only localization methods. With respect to previous work, only an extra IMU is required, and no constraints are imposed on its placement, as with camera-based approaches. Moreover, no a-priori knowledge of the environment is required and no fixed anchor nodes are needed. Keywords: Accuracy; Channel models; Covariance matrices; Equations; Estimation; Mobile nodes; Sensors (ID#:14-2414) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6871187&isnumber=6871170
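The Multidimensional Scaling step in this pipeline can be sketched with classical (Torgerson) MDS, which recovers relative coordinates, up to rotation and translation, from pairwise distances such as those inferred from RSSI; the function name and the 4-node example are illustrative, not the authors' implementation:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical multidimensional scaling: embed n points in `dim` dimensions
    from an n x n matrix of pairwise distances D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                   # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]            # keep the largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Recover a 4-node layout from its exact pairwise distances.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
est = classical_mds(D)
D_est = np.linalg.norm(est[:, None, :] - est[None, :, :], axis=-1)
print(np.max(np.abs(D_est - D)))   # distances preserved up to numerical error
```

With noisy RSSI-derived distances the embedding degrades gracefully, which is why the paper first cleans the RSSI measurements with IMU-assisted Kalman filtering before running MDS.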
Network Accountability
The term "accountability" suggests that an entity should be held responsible for its own specific actions. Once an event has transpired, it must be traceable so that its causes can be determined afterwards. The goal of network accountability research is to provide accountability within networks and computers by building trace files of events. The research cited here was presented or published between January and September of 2014. The focus in these articles is on smart grid, wireless, cloud, and telemedicine.
- Tongtong Li; Abdelhakim, M.; Jian Ren, "N-Hop Networks: A General Framework For Wireless Systems," Wireless Communications, IEEE, vol.21, no.2, pp.98, 105, April 2014. doi: 10.1109/MWC.2014.6812297 This article introduces a unified framework for quantitative characterization of various wireless networks. We first revisit the evolution of centralized, ad-hoc and hybrid networks, and discuss the trade-off between structure-ensured reliability and efficiency, and ad-hoc enabled flexibility. Motivated by the observation that the number of hops for a basic node in the network to reach the base station or the sink has a direct impact on the network capacity, delay, efficiency and their evaluation techniques, we introduce the concept of the N-hop networks. It can serve as a general framework that includes most existing network models as special cases, and can also make the analytical characterization of the network performance more tractable. Moreover, for network security, it is observed that hierarchical structure enables easier tracking of user accountability and malicious node detection; on the other hand, the multi-layer diversity increases the network reliability under unexpected network failure or malicious attacks, and at the same time, provides a flexible platform for privacy protection. Keywords: ad hoc networks; diversity reception; telecommunication security; wireless channels; N-hop networks; ad hoc networks; ad-hoc enabled flexibility; hybrid networks; malicious attacks; malicious node detection; multilayer diversity; network capacity; network reliability; network security; unexpected network failure; user accountability; wireless systems; Ad hoc networks; Delays; Mobile communication; Mobile computing; Sensors; Throughput; Wireless sensor networks (ID#:14-2415) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6812297&isnumber=6812279
- Jing Liu; Yang Xiao; Jingcheng Gao, "Achieving Accountability in Smart Grid," Systems Journal, IEEE, vol.8, no.2, pp.493, 508, June 2014. doi: 10.1109/JSYST.2013.2260697 Smart grid is a promising power infrastructure that is integrated with communication and information technologies. Nevertheless, privacy and security concerns arise simultaneously. Failure to address these issues will hinder the modernization of the existing power system. After critically reviewing the current status of smart grid deployment and its key cyber security concerns, the authors argue that accountability mechanisms should be involved in smart grid designs. We design two separate accountable communication protocols using the proposed architecture with certain reasonable assumptions under both home area network and neighborhood area network. Analysis and simulation results indicate that the design works well, and it may cause all power loads to become accountable. Keywords: computer network security; power engineering computing; protocols; smart power grids; accountable communication protocols; cyber security concern; home area network; neighborhood area network; power system modernization; smart grid accountability; smart grid deployment; smart grid design; Accountability; advanced metering infrastructure (AMI); security; smart grid (ID#:14-2416) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6545310&isnumber=6819870
- Jeyanthi, N.; Thandeeswaran, R.; Mcheick, H., "SCT: Secured Cloud based Telemedicine," Networks, Computers and Communications, The 2014 International Symposium on , vol., no., pp.1,4, 17-19 June 2014. doi: 10.1109/SNCC.2014.6866531 Telemedicine has seen several decades of deployment, yet it has made no remarkable contribution in either rural or urban areas; people realize its impact only when it saves a life that would otherwise have been lost. Telemedicine connects patients and specialized doctors remotely and also allows them to share sensitive medical records. Irrespective of the mode of data exchange, all types of media are vulnerable to security and performance issues. Remote data exchange during an emergency should not be delayed, and at the same time should not be altered: while in transit, a single bit change could be interpreted differently at the other end. Hence telemedicine comes with all the challenges of performance and security. Delay, cost, and scalability are the pressing performance factors, whereas integrity, availability, and accountability are the security issues that need to be addressed. This paper focuses on security without compromising quality of service. Telemedicine is on track from standard PSTN, wireless mobile phones, and satellites. Secure Cloud based Telemedicine (SCT) uses the cloud, which can free people from administrative and accounting burdens.
Keywords: biomedical equipment; cloud computing; data integrity; delays; electronic data interchange; emergency services; mobile computing; mobile handsets; security of data; telemedicine; telephone networks; SCT; accounting burdens; administrative burdens; emergency situation; medical record sharing; performance factors; quality of service; remote data exchange alteration; remote data exchange delay; remote data exchange mode; secured cloud based telemedicine; single bit change effect; standard PSTN; telemedicine accountability; telemedicine availability; telemedicine cost; telemedicine delay; telemedicine effect; telemedicine integrity; telemedicine performance issues; telemedicine scalability; telemedicine security issues; wireless mobile phones; Availability; Cloud computing; Educational institutions; Medical services; Read only memory; Security; Telemedicine; Cloud; Security; Telemedicine; availability; confidentiality (ID#:14-2417) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866531&isnumber=6866503
- Gueret, Christophe; de Boer, Victor; Schlobach, Stefan, "Let's "Downscale" Linked Data," Internet Computing, IEEE , vol.18, no.2, pp.70,73, Mar.-Apr. 2014. doi: 10.1109/MIC.2014.29 Open data policies and linked data publication are powerful tools for increasing transparency, participatory governance, and accountability. The linked data community proudly emphasizes the economic and societal impact such technology shows. But a closer look proves that the design and deployment of these technologies leave out most of the world's population. The good news is that it will take small but fundamental changes to bridge this gap. Research agendas should be updated to design systems for small infrastructure, provide multimodal interfaces to data, and account better for locally relevant, contextualized data. Now is the time to act, because most linked data technologies are still in development. Keywords: Data processing; Digital systems; Linked technologies; Open systems; digital divide; linked data technologies; multimodal interfaces; open linked data (ID#:14-2418) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6777473&isnumber=6777469
- Chen, X.; Li, J.; Huang, X.; Li, J.; Xiang, Y.; Wong, D., "Secure Outsourced Attribute-based Signatures," Parallel and Distributed Systems, IEEE Transactions on, vol. PP, no.99, pp.1,1, January 2014. doi: 10.1109/TPDS.2013.2295809 Attribute-based signature (ABS) enables users to sign messages over attributes without revealing any information other than the fact that they have attested to the messages. However, heavy computational cost is required during signing in existing ABS work, which grows linearly with the size of the predicate formula. As a result, this presents a significant challenge for resource-constrained devices (such as mobile devices or RFID tags) to perform such heavy computations independently. Aiming at tackling the challenge above, we first propose and formalize a new paradigm called Outsourced ABS, i.e., OABS, in which the computational overhead at user side is greatly reduced through outsourcing intensive computations to an untrusted signing-cloud service provider (S-CSP). Furthermore, we apply this novel paradigm to existing ABS schemes to reduce the complexity. As a result, we present two concrete OABS schemes: i) in the first OABS scheme, the number of exponentiations involved in signing is reduced from O(d) to O(1) (nearly three), where d is the upper bound of threshold value defined in the predicate; ii) our second scheme is built on Herranz et al.'s construction with constant-size signatures. The number of exponentiations in signing is reduced from O(d²) to O(d) and the communication overhead is O(1). Security analysis demonstrates that both OABS schemes are secure in terms of the unforgeability and attribute-signer privacy definitions specified in the proposed security model. Finally, to allow for high efficiency and flexibility, we discuss extensions of OABS and show how to achieve accountability as well.
Keywords: Educational institutions; Electronic mail; Games; Outsourcing; Polynomials; Privacy; Security (ID#:14-2419) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6714536&isnumber=4359390
Network Coding
Network coding methods are used to improve a network's throughput, efficiency, and scalability. They can also serve as a defense against attacks and eavesdropping. Research into network coding seeks optimal solutions to general network problems that remain open. The articles cited here were presented or published between January and September 2014.
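As background for the papers below, the canonical illustration of network coding's throughput gain is the two-way relay exchange: the relay XORs the two packets and broadcasts once, so three transmissions do the work of four store-and-forward ones. A minimal sketch in Python (packet contents are illustrative):

```python
# Two-way relay: A and B exchange packets through relay R.
# Plain routing: A->R, R->B, B->R, R->A (4 transmissions).
# Network coding: A->R, B->R, then R broadcasts pkt_a XOR pkt_b (3).

def xor_bytes(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

pkt_a = b"hello from A!"
pkt_b = b"hi back from B"
n = max(len(pkt_a), len(pkt_b))           # pad to equal length for the XOR
pkt_a, pkt_b = pkt_a.ljust(n, b"\0"), pkt_b.ljust(n, b"\0")

coded = xor_bytes(pkt_a, pkt_b)           # the single coded broadcast

# Each endpoint removes its own packet to recover the other's.
assert xor_bytes(coded, pkt_a).rstrip(b"\0") == b"hi back from B"
assert xor_bytes(coded, pkt_b).rstrip(b"\0") == b"hello from A!"
```

Because relays now mix packets, a single corrupted or misrouted coded packet can contaminate many flows, which is why pollution and wormhole defenses such as DAWN, cited below, are needed.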
- Shiyu Ji; Tingting Chen; Sheng Zhong; Kak, S., "DAWN: Defending Against Wormhole Attacks In Wireless Network Coding Systems," INFOCOM, 2014 Proceedings IEEE , vol., no., pp.664,672, April 27 2014-May 2 2014. doi: 10.1109/INFOCOM.2014.6847992 Network coding has been shown to be an effective approach to improve the wireless system performance. However, many security issues impede its wide deployment in practice. Besides the well-studied pollution attacks, there is another severe threat, that of wormhole attacks, which undermines the performance gain of network coding. Since the underlying characteristics of network coding systems are distinctly different from traditional wireless networks, the impact of wormhole attacks and countermeasures are generally unknown. In this paper, we quantify wormholes' devastating harmful impact on network coding system performance through experiments. Then we propose DAWN, a Distributed detection Algorithm against Wormhole in wireless Network coding systems, by exploring the change of the flow directions of the innovative packets caused by wormholes. We rigorously prove that DAWN guarantees a good lower bound of successful detection rate. We perform analysis on the resistance of DAWN against collusion attacks. We find that the robustness depends on the node density in the network, and prove a necessary condition to achieve collusion-resistance. DAWN does not rely on any location information, global synchronization assumptions or special hardware/middleware. It is only based on the local information that can be obtained from regular network coding protocols, and thus does not introduce any overhead by extra test messages. Extensive experimental results have verified the effectiveness and the efficiency of DAWN. 
Keywords: network coding; radio networks; synchronisation; telecommunication security; DAWN; collusion attacks; collusion-resistance; detection rate; distributed detection algorithm; flow directions; global synchronization assumptions; location information; node density; pollution attacks; regular network coding protocols; test messages; wireless network coding systems; wireless system performance; wormhole attacks; Encoding; Network coding; Probability; Protocols; Routing; Throughput; Wireless networks (ID#:14-2420) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847992&isnumber=6847911
- Shang Tao; Pei Hengli; Liu Jianwei, "Secure network coding based on lattice signature," Communications, China, vol.11, no.1, pp.138,151, Jan. 2014. doi: 10.1109/CC.2014.6821316 To provide a high-security guarantee to network coding and lower the computing complexity induced by the signature scheme, we take full advantage of the homomorphic property to build lattice signature schemes and secure network coding algorithms. Firstly, by means of the distance between the message and its signature in a lattice, we propose a Distance-based Secure Network Coding (DSNC) algorithm and reduce its security to a new hard problem, the Fixed Length Vector Problem (FLVP), which is harder than the Shortest Vector Problem (SVP) on lattices. Secondly, considering the boundary on the distance between the message and its signature, we further propose an efficient Boundary-based Secure Network Coding (BSNC) algorithm to reduce the computing complexity induced by square calculation in DSNC. Simulation results and security analysis show that the proposed signature schemes have stronger unforgeability than the traditional Rivest-Shamir-Adleman (RSA)-based signature scheme, owing to the natural properties of lattices. The DSNC algorithm is more secure, and the BSNC algorithm greatly reduces the time cost of computation. Keywords: computational complexity; digital signatures; network coding; telecommunication security; BSNC; DSNC; FLVP; boundary-based secure network coding; computing complexity; distance-based secure network coding; fixed length vector problem; hard problem; high-security guarantee; homomorphic property; lattice signature; signature scheme; Algorithm design and analysis; Cryptography; Lattices; Network coding; Network security; pollution attack; secure network coding (ID#:14-2421) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821316&isnumber=6821299
- Keshavarz-Haddad, A.; Riedi, R.H., "Bounds on the Benefit of Network Coding for Wireless Multicast and Unicast," Mobile Computing, IEEE Transactions on, vol.13, no.1, pp.102,115, Jan. 2014. doi: 10.1109/TMC.2012.234 In this paper, we explore fundamental limitations of the benefit of network coding in multihop wireless networks. We study two well-accepted scenarios in the field: single multicast session and multiple unicast sessions. We assume arbitrary but fixed topology and traffic patterns for the wireless network. We prove that the gain of network coding in terms of throughput and energy saving of a single multicast session is at most a constant factor. Also, we present a lower bound on the average number of transmissions of multiple unicast sessions under any arbitrary network coding. We identify scenarios under which network coding provides no gain at all, in the sense that there exists a simple flow scheme that achieves the same performance. Moreover, we prove that the gain of network coding in terms of the maximum transport capacity is bounded by a constant factor of at most $\pi$ in any arbitrary wireless network under all traditional Gaussian channel models. As a corollary, we find that the gain of network coding on the throughput of large homogeneous wireless networks is asymptotically bounded by a constant. Furthermore, we establish theorems which relate a network coding scheme to a simple routing scheme for multiple unicast sessions. The theorems can be used as criteria for evaluating the potential gain of network coding in a given wired or wireless network. Based on these criteria, we find more scenarios where network coding has no gain on throughput or energy saving.
Keywords: Gaussian channels; multicast communication; network coding; Gaussian channel models; arbitrary wireless network; constant factor; large homogeneous wireless networks; maximum transport capacity; multihop wireless networks; multiple unicast sessions; network coding scheme; single multicast session; wireless multicast; wireless unicast; Channel models; Energy consumption; Network coding; Network coding gain; Throughput; Unicast; Wireless networks; energy consumption; multicast throughput; transport capacity (ID#:14-2422) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6357191&isnumber=6674931
- Tae-hwa Kim; Hyungwoo Choi; Hong-Shik Park, "Centrality-based Network Coding Node Selection Mechanism For Improving Network Throughput," Advanced Communication Technology (ICACT), 2014 16th International Conference on, vol., no., pp.864,867, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779083 The problem of minimizing the number of coding nodes arises from network coding overhead and is proven to be NP-hard. To resolve this issue, this paper proposes Centrality-based Network Coding Node Selection (CNCNS), a heuristic, distributed mechanism that minimizes the number of network coding (NC) nodes without compromising the achievable network throughput. CNCNS iteratively analyses node centrality and selects an NC node in a specific area. Since CNCNS operates in a distributed manner, it can dynamically adapt to network status while approximately minimizing the number of network coding nodes. In particular, CNCNS adjusts network throughput and reliability using a control indicator. Simulation results show that well-selected network coding nodes improve network throughput, coming close to the throughput of a system in which all nodes perform network coding. Keywords: network coding; radio networks; NP hard problem; centrality based network coding node selection mechanism; coding nodes; distributed mechanism; heuristic mechanism; network coding overhead; network reliability; network status; network throughput improvement; Decoding; Delays; Encoding; Network coding; Receivers; Reliability; Throughput; Centrality; Degree; Weight (ID#:14-2423) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779083&isnumber=6778899
- Min Yang; Yuanyuan Yang, "Applying Network Coding to Peer-to-Peer File Sharing," Computers, IEEE Transactions on, vol.63, no.8, pp.1938,1950, Aug. 2014. doi: 10.1109/TC.2013.88 Network coding is a promising enhancement of routing to improve network throughput and provide high reliability. It allows a node to generate output messages by encoding its received messages. Peer-to-peer networks are a perfect place to apply network coding for two reasons: the topology of a peer-to-peer network is constructed arbitrarily, so it is easy to tailor the topology to facilitate network coding; and the nodes in a peer-to-peer network are end hosts, which can perform more complex operations, such as decoding and encoding, than simply storing and forwarding messages. In this paper, we propose a scheme to apply network coding to peer-to-peer file sharing, which employs a peer-to-peer network to distribute files residing on a web server or a file server. The scheme exploits a special type of network topology called a combination network. It has been proved that combination networks can achieve unbounded network coding gain, measured by the ratio of network throughput with network coding to that without network coding. Our scheme encodes a file into multiple messages and divides peers into multiple groups, with each group responsible for relaying one of the messages. The encoding scheme is designed to satisfy the property that any subset of the messages can be used to decode the original file as long as the size of the subset is sufficiently large. To meet this requirement, we first define a deterministic linear network coding scheme which satisfies the desired property, then we connect peers in the same group to flood the corresponding message, and connect peers in different groups to distribute messages for decoding.
Moreover, the scheme can be readily extended to support link heterogeneity and topology awareness to further improve system performance in terms of throughput, reliability and link stress. Our simulation results show that the new scheme can achieve 15%-20% higher throughput than another peer-to-peer multicast system, Narada, which does not employ network coding. In addition, it achieves good reliability and robustness to link failure or churn. Keywords: network coding; peer-to-peer computing; telecommunication network reliability; telecommunication network topology; Web server; combination network; decoding; deterministic linear network; encoding; file server; network topology; peer-to-peer file sharing; Network coding; file sharing; multicast; peer-to-peer networks; web-based applications (ID#:14-2424) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6497042&isnumber=6857445
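The scheme above uses a deterministic linear code tailored to combination networks; the generic mechanism it builds on is linear network coding, where any sufficiently large independent set of coded packets recovers the file. A hypothetical illustration over GF(2), using random coefficients and Gaussian-elimination decoding (function names are mine, not the authors'):

```python
import random

def rlnc_encode(blocks, count):
    """Emit `count` coded packets over GF(2): each is the XOR of a random
    nonzero subset of the source blocks, tagged with its coefficients."""
    k = len(blocks)
    out = []
    while len(out) < count:
        coeffs = [random.randint(0, 1) for _ in range(k)]
        if not any(coeffs):
            continue                      # skip the useless all-zero packet
        payload = 0
        for c, b in zip(coeffs, blocks):
            if c:
                payload ^= b
        out.append((coeffs, payload))
    return out

def rlnc_decode(packets, k):
    """Gaussian elimination over GF(2); returns the k source blocks,
    or None while the received packets are not yet full rank."""
    basis = {}                            # pivot column -> (coeffs, payload)
    for coeffs, payload in packets:
        coeffs = coeffs[:]
        for j in range(k):
            if not coeffs[j]:
                continue
            if j in basis:
                bc, bp = basis[j]
                coeffs = [a ^ b for a, b in zip(coeffs, bc)]
                payload ^= bp
            else:
                basis[j] = (coeffs, payload)
                break
    if len(basis) < k:
        return None
    for j in sorted(basis, reverse=True): # back-substitution
        bc, bp = basis[j]
        for i in range(j):
            if basis[i][0][j]:
                ic, ip = basis[i]
                basis[i] = ([a ^ b for a, b in zip(ic, bc)], ip ^ bp)
    return [basis[j][1] for j in range(k)]

# Any full-rank set of coded packets recovers the blocks.
blocks = [0x11, 0x22, 0x33]
received = []
while (decoded := rlnc_decode(received, len(blocks))) is None:
    received += rlnc_encode(blocks, 1)    # keep collecting packets
assert decoded == blocks
```

Over GF(2) a random combination falls inside the already-received span with probability at most 1/2, so on average only a few packets beyond k are needed; practical random-coding systems use larger fields such as GF(2^8) to make that overhead negligible.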
- Bourtsoulatze, E.; Thomos, N.; Frossard, P., "Decoding Delay Minimization in Inter-Session Network Coding," Communications, IEEE Transactions on , vol.62, no.6, pp.1944,1957, June 2014. doi: 10.1109/TCOMM.2014.2318701 Intra-session network coding has been shown to offer significant gains in terms of achievable throughput and delay in settings where one source multicasts data to several clients. In this paper, we consider a more general scenario where multiple sources transmit data to sets of clients over a wireline overlay network. We propose a novel framework for efficient rate allocation in networks where intermediate network nodes have the opportunity to combine packets from different sources using randomized network coding. We formulate the problem as the minimization of the average decoding delay in the client population and solve it with a gradient-based stochastic algorithm. Our optimized inter-session network coding solution is evaluated in different network topologies and is compared with basic intra-session network coding solutions. Our results show the benefits of proper coding decisions and effective rate allocation for lowering the decoding delay when the network is used by concurrent multicast sessions. 
Keywords: computer networks; decoding; delays; gradient methods; minimisation; network coding; overlay networks; stochastic processes; telecommunication network topology; client population; coding decisions; concurrent multicast sessions; decoding delay minimization; gradient-based stochastic algorithm; intermediate network nodes ;intersession network coding solution; intrasession network coding solutions; network topologies; randomized network coding; rate allocation; wireline overlay network; Decoding; Delays; Encoding; Network coding; Resource management; Throughput; Vectors; Network coding; decoding delay; inter-session network coding; overlay networks; rate allocation (ID#:14-2425) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6804664&isnumber=6839072
- Yin, X.; Wang, Y.; Li, Z.; Wang, X.; Xue, X., "A Graph Minor Perspective to Multicast Network Coding," Information Theory, IEEE Transactions on, vol.60, no.9, pp.5375,5386, Sept. 2014. doi: 10.1109/TIT.2014.2336836 Network coding encourages information coding across a communication network. While the necessity, benefit and complexity of network coding are sensitive to the underlying graph structure of a network, existing theory on network coding often treats the network topology as a black box, focusing on algebraic or information theoretic aspects of the problem. This paper aims at an in-depth examination of the relation between algebraic coding and network topologies. We mathematically establish a series of results along the direction of: if network coding is necessary/beneficial, or if a particular finite field is required for coding, then the network must have a corresponding hidden structure embedded in its underlying topology, and such embedding is computationally efficient to verify. Specifically, we first formulate a meta-conjecture, the NC-minor conjecture, that articulates such a connection between graph theory and network coding, in the language of graph minors. We next prove that the NC-minor conjecture for multicasting two information flows is almost equivalent to the Hadwiger conjecture, which connects graph minors with graph coloring. Such equivalence implies the existence of $K_4$, $K_5$, $K_6$, and $K_{O(q/\log q)}$ minors for networks that require $\mathbb{F}_3$, $\mathbb{F}_4$, $\mathbb{F}_5$, and $\mathbb{F}_q$ to multicast two flows, respectively. We finally prove that, for the general case of multicasting an arbitrary number of flows, network coding can make a difference from routing only if the network contains a $K_4$ minor, and this minor containment result is tight. Practical implications of the above results are discussed.
Keywords: Color; Encoding; Network coding; Network topology; Receivers; Routing; Vectors; graph minor; multicast; treewidth (ID#:14-2426) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6850047&isnumber=6878505
- Coondu, S.; Mitra, A.; Chattopadhyay, S.; Chattopadhyay, M.; Bhattacharya, M., "Network-coded Broadcast Incremental Power Algorithm For Energy-Efficient Broadcasting In Wireless Ad-Hoc Network," Applications and Innovations in Mobile Computing (AIMoC), 2014, pp.42, 47, Feb. 27, 2014-March 1, 2014. doi: 10.1109/AIMOC.2014.6785517 An important operation in multi-hop wireless ad-hoc networks is broadcasting, which propagates information throughout the network. We explore the issue of broadcasting in an ad-hoc wireless network where all nodes are sources that want to transmit information to all other nodes. Our performance metric is energy efficiency, a vital defining factor for wireless networks as it directly concerns battery life and thus network longevity. We show the benefits network coding has to offer in a wireless ad-hoc network, as far as energy savings are concerned, compared to the store-and-forward strategy. Network coded broadcasting concentrates on reducing the number of transmissions performed by each forwarding node in the all-to-all broadcast application, where each forwarding node combines the incoming messages for transmission. The total number of transmissions can be reduced using network coding, compared to broadcasting using the same forwarding nodes without coding. In this paper, we present the performance of a network coding-based Broadcast Incremental Power (BIP) algorithm for all-to-all broadcast. Simulation results show that optimisation using the network coding method leads to a substantial improvement in the cost associated with BIP.
Keywords: ad hoc networks; network coding; telecommunication network reliability; all-to-all broadcast application; battery life; energy-efficient broadcasting; energy-savings; forwarding node; multihop wireless ad hoc networks; network coding-based BIP algorithm; network longevity; network nodes; network-coded broadcast incremental power algorithm; store-and-forward strategy; vital defining factor; Ad hoc networks; Broadcasting; Encoding; Energy consumption; Network coding; Space vehicles; Wireless communication; Broadcast Incremental Power; Energy-Efficiency; Minimum Power Broadcast Problem; Network Coding; Wireless Ad-Hoc Network; Wireless Multicast Advantage (ID#:14-2427) URL:http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785517&isnumber=6785503
- Deze Zeng; Song Guo; Yong Xiang; Hai Jin, "On the Throughput of Two-Way Relay Networks Using Network Coding," Parallel and Distributed Systems, IEEE Transactions on, vol.25, no.1, pp.191,199, Jan. 2014. doi: 10.1109/TPDS.2013.187 Network coding has shown the promise of significant throughput improvement. In this paper, we study the network throughput using network coding and explore how the maximum throughput can be achieved in a two-way relay wireless network. Unlike previous studies, we consider a more general network with arbitrary structure of overhearing status between receivers and transmitters. To efficiently utilize the coding opportunities, we invent the concept of network coding cliques (NCCs), upon which a formal analysis on the network throughput using network coding is elaborated. In particular, we derive the closed-form expression of the network throughput under certain traffic load in a slotted ALOHA network with basic medium access control. Furthermore, the maximum throughput as well as optimal medium access probability at each node is studied under various network settings. Our theoretical findings have been validated by simulation as well. Keywords: access protocols; network coding; radio receivers; radio transmitters; relay networks (telecommunication); telecommunication traffic; NCCs; closed-form expression; medium access control; network coding clique; network traffic load; optimal medium access probability; receiver; slotted ALOHA network; transmitter; two-way relay wireless network; Encoding; Network coding; Receivers; Relays; Throughput; Transmitters; Unicast; Performance analysis; network coding; slotted ALOHA (ID#:14-2428) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6573287&isnumber=6674937
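The closed-form expression derived in this paper covers the coded relay setting; for orientation, the classical uncoded slotted-ALOHA success probability that such analyses start from is a one-liner (the function name is mine):

```python
def slotted_aloha_throughput(n: int, p: float) -> float:
    """Per-slot success probability with n nodes, each transmitting
    independently with probability p: exactly one transmitter wins.
    (Classical baseline only; the cited paper extends this style of
    analysis to network-coded two-way relaying.)"""
    return n * p * (1 - p) ** (n - 1)

# The optimum is p = 1/n; as n grows the peak tends to 1/e ~ 0.368.
print(round(slotted_aloha_throughput(10, 1 / 10), 4))  # → 0.3874
```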
- Lili Wei; Wen Chen; Hu, R.Q.; Geng Wu, "Network Coding In Multiple Access Relay Channel With Multiple Antenna Relay," Computing, Networking and Communications (ICNC), 2014 International Conference on , vol., no., pp.656,661, 3-6 Feb. 2014. doi: 10.1109/ICCNC.2014.6785414 Network coding is a paradigm for modern communication networks by allowing intermediate nodes to mix messages received from multiple sources. In this paper, we carry out a study on network coding in multiple access relay channel (MARC) with multiple antenna relay. Under the same transmission time slots constraint, we investigate several different transmission strategies applicable to the system model, including direct transmission, decode-and-forward, digital network coding, digital network coding with Alamouti space time coding, analog network coding, and compare the error rate performance. Interestingly, simulation studies show that in the system model under investigation, the schemes with network coding do not show any performance gain compared with the traditional schemes with same time slots consumption. Keywords: antenna arrays; decode and forward communication; network coding; radio access networks; relay networks (telecommunication);simulation; space-time codes; Alamouti space time coding; MARC; analog network coding; decode-and-forward transmission; digital network coding; direct transmission; multiple access relay channel; multiple antenna relay; transmission time slots constraint; Encoding; Erbium; Network coding; Relays; Slot antennas; Vectors; Wireless communication; cooperative; multiple access relay channel; network coding; space-time coding (ID#:14-2429) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785414&isnumber=6785290
- Ye Liu; Chi Wan Sung, "Quality-Aware Instantly Decodable Network Coding," Wireless Communications, IEEE Transactions on , vol.13, no.3, pp.1604,1615, March 2014. doi: 10.1109/TWC.2014.012314.131046 In erasure broadcast channels, network coding has been demonstrated to be an efficient way to satisfy each user's demand. However, the erasure broadcast channel model does not fully characterize the information available in a "lost" packet, and therefore any retransmission schemes designed based on the erasure broadcast channel model cannot make use of that information. In this paper, we characterize the quality of erroneous packets by Signal-to-Noise Ratio (SNR) and then design a network coding retransmission scheme with the knowledge of the SNRs of the erroneous packets, so that a user can immediately decode two source packets upon reception of a useful retransmission packet. We demonstrate that our proposed scheme, namely Quality-Aware Instantly Decodable Network Coding (QAIDNC), can increase the transmission efficiency significantly compared to the existing Instantly Decodable Network Coding (IDNC) and Random Linear Network Coding (RLNC). Keywords: broadcast channels; decoding; linear codes; network coding; QAIDNC; RLNC; SNR; erasure broadcast channel model; lost packet; quality of erroneous packets; quality-aware instantly decodable network coding; random linear network coding; retransmission schemes; signal-to-noise ratio; source packets; transmission efficiency; user demand; Decoding; Encoding; Network coding; Phase shift keying; Signal to noise ratio; Vectors; Broadcast channel; Rayleigh fading; instantly decodable network coding; maximal-ratio combining; network coding (ID#:14-2430) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6725590&isnumber=6776574
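The defining constraint of IDNC, which QAIDNC refines with SNR information, is that every receiver served by a coded retransmission must already hold all but one of the XORed packets, so decoding is instant. A brute-force sketch of that selection rule (illustrative only; it ignores the packet-quality weighting that is this paper's contribution):

```python
from itertools import combinations

def pick_idnc_combo(num_packets, has):
    """Greedy IDNC selection: among all XOR sets of the source packets,
    pick the one that serves the most receivers, where a receiver is
    served iff it misses exactly one packet of the set (so the XOR is
    instantly decodable for it).  Brute force, for illustration only."""
    best, best_served = None, -1
    ids = range(num_packets)
    for r in range(1, num_packets + 1):
        for combo in combinations(ids, r):
            served = sum(
                1 for holds in has
                if sum(1 for p in combo if p not in holds) == 1
            )
            if served > best_served:
                best, best_served = combo, served
    return best, best_served

# Three receivers, each missing a different one of packets 0, 1, 2:
has = [{0, 1}, {0, 2}, {1, 2}]
print(pick_idnc_combo(3, has))    # → ((0, 1, 2), 3): one XOR serves all
```

Practical IDNC schedulers replace this exponential search with heuristics, but the per-receiver feasibility test is the one coded here.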
- Amerimehr, M.H.; Ashtiani, F.; Valaee, S., "Maximum Stable Throughput of Network-Coded Multiple Broadcast Sessions for Wireless Tandem Random Access Networks," Mobile Computing, IEEE Transactions on, vol.13, no.6, pp.1256,1267, June 2014. doi: 10.1109/TMC.2013.2296502 This paper presents an analytical study of the stable throughput for multiple broadcast sessions in a multi-hop wireless tandem network with random access. Intermediate nodes leverage the broadcast nature of wireless medium access to perform inter-session network coding among different flows. This problem is challenging due to the interaction among nodes, and has been addressed so far only in the saturated mode, where all nodes always have a packet to send, which results in infinite packet delay. In this paper, we provide a novel model based on multi-class queueing networks to investigate the problem in unsaturated mode. We devise a theoretical framework for computing the maximum stable throughput of network coding for a slotted ALOHA-based random access system. Using our formulation, we compare the performance of network coding and traditional routing. Our results show that network coding leads to a high throughput gain over traditional routing. We also define a new metric, the network unbalance ratio (NUR), which indicates the unbalance status of the utilization factors at different nodes. We show that although the throughput gain of network coding compared to traditional routing decreases when the number of nodes tends to infinity, the NUR of the former outperforms that of the latter. We carry out simulations to confirm our theoretical analysis.
Keywords: access protocols; broadcast communication; network coding; queueing theory; radio access networks; infinite packet delay; inter-session network coding; maximum stable throughput; multiclass queueing networks; multihop wireless tandem network; multiple broadcast sessions; network coding; network routing; network unbalance ratio; network-coded multiple broadcast sessions; slotted ALOHA-based random access system; theoretical analysis; wireless medium access; wireless tandem random access networks; Analytical models; Multicast communication; Network coding; Routing; Spread spectrum communication; Throughput; Wireless communication; Network coding; queueing networks; random access; routing; stable throughput; vehicular networks (ID#:14-2431) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6697896&isnumber=6824285
- Gang Wang; Xia Dai; Yonghui Li, "On the Network Sharing of Mixed Network Coding and Routing Data Flows in Congestion Networks," Vehicular Technology, IEEE Transactions on, vol.63, no.5, pp.2420,2428, June 2014. doi: 10.1109/TVT.2013.2291859 In this paper, we study the congestion game for a network where multiple network coding (NC) and routing users share a single common congestion link to transmit their information. The data flows using NC and routing will compete for network resources, and we need to determine the optimal allocation of network resources between NC and routing data flows to maximize the network payoff. To facilitate the design, we formulate this process using a cost-sharing game model. A novel average-cost-sharing (ACS) pricing mechanism is developed to maximize the overall network payoff. We analyze the performance of ACS in terms of price of anarchy (PoA). We formulate an analytical expression to compute PoA under the ACS mechanism. In contrast to the previous affine marginal cost (AMC) mechanism, where the overall network payoff decreases when NC is applied, the proposed ACS mechanism can considerably improve the overall network payoff by optimizing the number and the spectral resource allocation of NC and routing data flows sharing the network link.
Keywords: game theory; network coding; radio networks; telecommunication congestion control; telecommunication network routing; anarchy price; congestion game; congestion networks; cost sharing game model; data flow routing; mixed network coding; network sharing; optimal network resource allocation; pricing mechanism; single common congestion link; Aggregates; Games; Nash equilibrium; Network coding; Pricing; Resource management; Routing; Affine marginal cost (AMC); Average cost sharing (ACS); Network coding (NC); Price of anarchy (PoA) (ID#:14-2432) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6671460&isnumber=6832681
- Kramarev, D.; Yi Hong; Viterbo, E., "Software Defined Radio Implementation Of A Two-Way Relay Network With Digital Network Coding," Communications Theory Workshop (AusCTW), 2014 Australian, pp.120, 125, 3-5 Feb. 2014. doi: 10.1109/AusCTW.2014.6766439 Network coding is a technology which has the potential to increase network throughput beyond existing routing-based standards. Despite the fact that the theoretical understanding is mature, there have been only a few papers on implementations of network coding and demonstrations of working testbeds. This paper presents the implementation of a two-way relay network with digital network coding. Unlike previous work, where testbeds were implemented on custom hardware, we implement ours on GNU Radio, an open-source software defined radio platform. In this paper we discuss the implementation issues and the ways to overcome the hardware imperfections and software inadequacies of the GNU Radio platform. Using our testbed, we measure the throughput of the system in an indoor environment. The experimental results show that network coding outperforms traditional routing, as predicted by the theoretical analysis. Keywords: network coding; public domain software; relay networks (telecommunication); software radio; GNU Radio platform; digital network coding; hardware imperfections; open-source software; radio implementation; software inadequacies; testbed; two-way relay network; Hardware; Network coding; Packet switching; Relays; Software; Synchronization; Throughput; GNU radio; Software-defined radio; network coding implementation (ID#:14-2433) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6766439&isnumber=6766413
- Carrillo, E.; Ramos, V., "On the Impact of Network Coding Delay for IEEE 802.11s Infrastructure Wireless Mesh Networks," Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on , vol., no., pp.305,312, 13-16 May 2014. doi: 10.1109/AINA.2014.39 The distributed coordination function (DCF) may reduce the potential of network coding in 802.11 wireless networks. Due to the randomness of DCF, the coding delay, defined as the time that a packet must wait for a coding opportunity, may increase and degrade the network performance. In this paper, we study the potential impact of the coding delay in the performance of TCP over IEEE 802.11s infrastructure wireless mesh networks. By means of simulation, we evaluate the formation of coding opportunities at the mesh access points. We find that as TCP traffic increases, the coding opportunities rise up to 70% and the coding delay increases considerably. We propose to adjust dynamically the maximum time that a packet can wait in the coding queues to reduce the coding delay. We evaluate different moving-average estimation methods for this aim. Our results show that the coding delay may be reduced with these methods using at the same time an estimation threshold. This threshold increases the estimation's mean in order to exploit a high percentage of the coding opportunities. Keywords: moving average processes; network coding; transport protocols; wireless LAN; wireless mesh networks; DCF; IEEE 802.11s infrastructure wireless mesh networks; TCP traffic increases; coding delay; coding opportunity; coding queues; distributed coordination function; mesh access points; moving-average estimation methods; network coding; Delays; Encoding; IEEE 802.11 Standards; Markov processes; Network coding; Wireless networks; IEEE 802.11s;mesh networks; network coding (ID#:14-2434) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838680&isnumber=6838626
- Nabaee, M.; Labeau, F., "Bayesian Quantized Network Coding Via Generalized Approximate Message Passing," Wireless Telecommunications Symposium (WTS), 2014, pp. 1-7, 9-11 April 2014. doi: 10.1109/WTS.2014.6834995 In this paper, we study message passing-based decoding of real network coded packets. We explain our developments on the idea of using real field network codes for distributed compression of inter-node correlated messages. Then, we discuss the use of iterative message passing-based decoding for the described network coding scenario, as the main contribution of this paper. Motivated by Bayesian compressed sensing, we discuss the possibility of approximate decoding, even with fewer received measurements (packets) than the number of messages. As a result, our real field network coding scenario, called quantized network coding, is capable of inter-node compression without the need to know the inter-node redundancy of messages. We also present our numerical and analytic arguments on the robustness and computational simplicity (relative to the previously proposed linear programming and standard belief propagation) of our proposed decoding algorithm for the quantized network coding. Keywords: Bayes methods; compressed sensing; iterative decoding; linear programming; message passing; network coding; Bayesian compressed sensing; Bayesian quantized network coding; distributed compression; internode compression; internode correlated messages; iterative message passing-based decoding; linear programming; network coded packets; Bayes methods; Decoding; Message passing; Network coding; Noise; Noise measurement; Quantization (signal); Bayesian compressed sensing; Network coding; approximate message passing (ID#:14-2435) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6834995&isnumber=6834983
- Shalaby, A; Ragab, M.E.-S.; Goulart, V.; Fujiwara, I; Koibuchi, M., "Hierarchical Network Coding for Collective Communication on HPC Interconnects," Parallel, Distributed and Network-Based Processing (PDP), 2014 22nd Euromicro International Conference on, pp. 98-102, 12-14 Feb. 2014. doi: 10.1109/PDP.2014.58 Network bandwidth is a performance concern especially for collective communication because the bisection bandwidth of recent supercomputers is far less than their full bisection bandwidth. In this context we propose to exploit a network coding technique to reduce the number of unicasts and the size of transferred data generated by latency-sensitive collective communication in supercomputers. Our proposed network coding scheme has a hierarchical multicasting structure with intra-group and inter-group unicasts. Quantitative analysis shows that the aggregate path hop counts with our hierarchical network coding decrease by as much as 94% when compared to conventional unicast-based multicasts. We validate these results by cycle-accurate network simulations. In 1,024-switch networks, our scheme reduces the execution time of collective communication by as much as 64%. We also show that our hierarchical network coding is beneficial for any packet size. Keywords: network coding; parallel machines; parallel processing; HPC interconnects; hierarchical multicasting structure; hierarchical network coding technique; inter-group unicasts; intra-group unicasts; latency-sensitive collective communication; network bandwidth; supercomputers; Aggregates; Bandwidth; Network coding; Network topology; Routing; Supercomputers; Topology; Interconnection networks; collective communication; high-performance computing; multicast algorithm; network coding (ID#:14-2436) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6787258&isnumber=6787236
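The two-way relay entries above build on the classic XOR form of digital network coding, in which a relay broadcasts the XOR of two packets so that each endpoint can recover the other's message using its own. The sketch below is purely illustrative of that general idea, not any cited paper's implementation:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length packets, the core operation of digital network coding."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two-way relay: nodes A and B each send a packet to relay R;
# R broadcasts a single coded packet instead of forwarding two,
# halving the number of downlink transmissions.
pkt_a = b"hello"
pkt_b = b"world"
coded = xor_bytes(pkt_a, pkt_b)          # the relay's broadcast
assert xor_bytes(coded, pkt_a) == pkt_b  # A recovers B's packet
assert xor_bytes(coded, pkt_b) == pkt_a  # B recovers A's packet
```

In the routing baseline the relay would transmit both packets separately; the coded broadcast is the source of the throughput gain the testbed papers measure.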
Network Intrusion Detection
Network intrusion detection is one of the chronic problems in cybersecurity. The growth of cellular and ad hoc networks has increased the threats and risks. Research into this area of concern reflects its importance. The articles cited here were presented or published between January and August of 2014.
- Weiming Hu; Jun Gao; Yanguo Wang; Ou Wu; Maybank, S., "Online Adaboost-Based Parameterized Methods for Dynamic Distributed Network Intrusion Detection," Cybernetics, IEEE Transactions on, vol. 44, no. 1, pp. 66-82, Jan. 2014. doi: 10.1109/TCYB.2013.2247592 Current network intrusion detection systems lack adaptability to frequently changing network environments. Furthermore, intrusion detection in the new distributed architectures is now a major requirement. In this paper, we propose two online Adaboost-based intrusion detection algorithms. In the first algorithm, a traditional online Adaboost process is used where decision stumps are used as weak classifiers. In the second algorithm, an improved online Adaboost process is proposed, and online Gaussian mixture models (GMMs) are used as weak classifiers. We further propose a distributed intrusion detection framework, in which a local parameterized detection model is constructed in each node using the online Adaboost algorithm. A global detection model is constructed in each node by combining the local parametric models using a small number of samples in the node. This combination is achieved using an algorithm based on particle swarm optimization (PSO) and support vector machines. The global model in each node is used to detect intrusions. Experimental results show that the improved online Adaboost process with GMMs obtains a higher detection rate and a lower false alarm rate than the traditional online Adaboost process that uses decision stumps. Both algorithms outperform existing intrusion detection algorithms. It is also shown that our PSO- and SVM-based algorithm effectively combines the local detection models into the global model in each node; the global model in a node can handle the intrusion types that are found in other nodes, without sharing the samples of these intrusion types.
Keywords: Gaussian processes; computer architecture; computer network security; distributed processing; learning (artificial intelligence); particle swarm optimisation; support vector machines; GMM; PSO; SVM-based algorithm; distributed architectures; dynamic distributed network intrusion detection; local parameterized detection model; network attack detection; network information security; online Adaboost process; online Adaboost-based intrusion detection algorithms; online Adaboost-based parameterized methods; online Gaussian mixture models; particle swarm optimization; support vector machines; weak classifiers; Dynamic distributed detection; network intrusions; online Adaboost learning; parameterized model (ID#:14-2437) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6488798&isnumber=6683070
- Al-Jarrah, O.; Arafat, A, "Network Intrusion Detection System Using Attack Behavior Classification," Information and Communication Systems (ICICS), 2014 5th International Conference on, pp. 1-6, 1-3 April 2014. doi: 10.1109/IACS.2014.6841978 Intrusion Detection Systems (IDS) have become a necessity in computer security systems because of the increase in unauthorized accesses and attacks. Intrusion detection is a major component in computer security systems that can be classified as Host-based Intrusion Detection System (HIDS), which protects a certain host or system, and Network-based Intrusion Detection System (NIDS), which protects a network of hosts and systems. This paper addresses probe (reconnaissance) attacks, which try to collect any possible relevant information about the network. Network probe attacks have two types: Host Sweep and Port Scan attacks. Host Sweep attacks determine the hosts that exist in the network, while Port Scan attacks determine the available services that exist in the network. This paper uses an intelligent system to maximize the recognition rate of network attacks by embedding the temporal behavior of the attacks into a TDNN neural network structure. The proposed system consists of five modules: packet capture engine, preprocessor, pattern recognition, classification, and monitoring and alert module. We have tested the system in a real environment where it has shown good capability in detecting attacks. In addition, the system has been tested using the DARPA 1998 dataset with a 100% recognition rate. In fact, our system can recognize attacks in constant time.
Keywords: computer network security; neural nets; pattern classification; HIDS; NIDS; TDNN neural network structure; alert module; attack behavior classification; computer security systems; host sweep attacks; host-based intrusion detection system; network intrusion detection system; network probe attacks; packet capture engine; pattern classification; pattern recognition; port scan attacks; preprocessor; reconnaissance attacks; unauthorized accesses; IP networks; Intrusion detection; Neural networks; Pattern recognition; Ports (Computers); Probes; Protocols; Host sweep; Intrusion Detection Systems; Network probe attack; Port scan; TDNN neural network (ID#:14-2438) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841978&isnumber=6841931
- Jaic, K.; Smith, M.C.; Sarma, N., "A Practical Network Intrusion Detection System For Inline FPGAs On 10GbE Network Adapters," Application-specific Systems, Architectures and Processors (ASAP), 2014 IEEE 25th International Conference on, pp. 180-181, 18-20 June 2014. doi: 10.1109/ASAP.2014.6868655 A network intrusion detection system (NIDS), such as SNORT, analyzes incoming packets to identify potential security threats. Pattern matching is arguably the most important and most computationally intensive component of a NIDS. Software-based NIDS implementations drop up to 90% of packets during increased network load, even at lower network bandwidth. We propose an alternative hybrid NIDS that couples an FPGA with a network adapter to provide hardware support for pattern matching and software support for post processing. The proposed system, SFAOENIDS, offers an extensible open-source NIDS for Solarflare AOE devices. The pattern matching engine, the primary component of the hardware architecture, was designed based on the requirements of typical NIDS implementations. In testing on a real network environment, the SFAOENIDS hardware implementation, operating at 200 MHz, handles a 10Gbps data rate without dropping packets while simultaneously minimizing the server CPU load. Keywords: field programmable gate arrays; security of data; SFAOENIDS; SNORT; Solarflare AOE devices; inline FPGA; lower network bandwidth; network adapters; network load; open-source NIDS; pattern matching; pattern matching engine; practical network intrusion detection system; real network environment; security threats; software based NIDS implementations; Engines; Field programmable gate arrays; Hardware; Intrusion detection; Memory management; Pattern matching; Software (ID#:14-2439) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6868655&isnumber=6868606
- Valgenti, V.C.; Hai Sun; Min Sik Kim, "Protecting Run-Time Filters for Network Intrusion Detection Systems," Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on, pp. 116-122, 13-16 May 2014. doi: 10.1109/AINA.2014.19 Network Intrusion Detection Systems (NIDS) examine millions of network packets searching for malicious traffic. Multi-gigabit line-speeds combined with growing databases of rules lead to dropped packets as the load exceeds the capacity of the device. Several areas of research have attempted to mitigate this problem through improving packet inspection efficiency, increasing resources, or reducing the examined population. A popular method for reducing the population examined is to employ run-time filters that can provide a quick check to determine that a given network packet cannot match a particular rule set. While this technique is an excellent method for reducing the population under examination, rogue elements can trivially bypass such filters with specially crafted packets and render the run-time filters effectively useless. Since the filtering comes at the cost of extra processing, a filtering solution could actually perform worse than a non-filtered solution under such pandemic circumstances. To defend against such attacks, it is necessary to consider run-time filters as an independent anomaly detector capable of detecting attacks against itself. Such anomaly detection, together with judicious rate-limiting of traffic forwarded to full packet inspection, allows the detection, logging, and mitigation of attacks targeted at the filters while maintaining the overall improvements in NIDS performance garnered from using run-time filters.
Keywords: filters; security of data; telecommunication traffic; NIDS performance; anomaly detector; crafted packets; filtering solution; malicious traffic; multigigabit line-speeds; network intrusion detection systems; network packets; packet inspection; run-time filters; run-time filters protection; Automata; Detectors; Inspection; Intrusion detection; Limiting; Matched filters; Sociology; Deep Packet Inspection; Filters; IDS; Intrusion Detection; Network Security; Run-time Filters; Security (ID#:14-2440) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838655&isnumber=6838626
- Chakchai So-In; Mongkonchai, N.; Aimtongkham, P.; Wijitsopon, K.; Rujirakul, K., "An Evaluation Of Data Mining Classification Models For Network Intrusion Detection," Digital Information and Communication Technology and its Applications (DICTAP), 2014 Fourth International Conference on, pp. 90-94, 6-8 May 2014. doi: 10.1109/DICTAP.2014.6821663 Due to the rapid growth of the Internet, the number of network attacks has risen, making network intrusion detection systems (IDS) essential to secure the network. With heterogeneous accesses and huge traffic volumes, several pattern identification techniques have been brought into the research community. Data mining is one of the analysis techniques that many IDSs have adopted as an attack recognition scheme. Thus, in this paper, the classification methodology, including attribute and data selection, was built on the well-known classification schemes, i.e., Decision Tree, Ripper Rule, Neural Networks, Naive Bayes, k-Nearest-Neighbour, and Support Vector Machine, for intrusion detection analysis using both the KDD CUP dataset and recent HTTP BOTNET attacks. Performance was evaluated using recent Weka tools with standard cross-validation and a confusion matrix. Keywords: Internet; computer network security; data mining; invasive software; pattern classification; telecommunication traffic; HTTP BOTNET attacks; IDS; Internet; KDD CUP dataset; Weka tools; attack recognition scheme; attribute selection; confusion matrix; data mining classification models; data selection; network attack; network intrusion detection system; pattern identification techniques; traffic volumes; Accuracy; Computational modeling; Data mining; Internet; Intrusion detection; Neural networks; Probes; BOTNET; Classification; Data Mining; Intrusion Detection; KDD CUP dataset; Network Security (ID#:14-2441) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821663&isnumber=6821645
- do Carmo, R.; Hollick, M., "Analyzing Active Probing For Practical Intrusion Detection in Wireless Multihop Networks," Wireless On-demand Network Systems and Services (WONS), 2014 11th Annual Conference on, pp. 77-80, 2-4 April 2014. doi: 10.1109/WONS.2014.6814725 Practical intrusion detection in Wireless Multihop Networks (WMNs) is a hard challenge. It has been shown that an active-probing-based network intrusion detection system (AP-NIDS) is practical for WMNs. However, understanding its interworking with real networks is still an unexplored challenge. In this paper, we investigate this in practice. We identify the general functional parameters that can be controlled, and by means of extensive experimentation, we tune these parameters and analyze the trade-offs between them, aiming at reducing false positives, overhead, and detection time. The traces we collected help us to understand when and why the active probing fails, and let us present countermeasures to prevent it. Keywords: frequency hop communication; security of data; wireless mesh networks; active-probing-based network intrusion detection system; wireless mesh network; wireless multihop networks; Ad hoc networks; Communication system security; Intrusion detection; Routing protocols; Testing; Wireless communication; Wireless sensor networks (ID#:14-2442) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814725&isnumber=6814711
- Al-Obeidat, F.N.; El-Alfy, E.-S.M., "Network Intrusion Detection Using Multi-Criteria PROAFTN Classification," Information Science and Applications (ICISA), 2014 International Conference on, pp. 1-5, 6-9 May 2014. doi: 10.1109/ICISA.2014.6847436 Network intrusion is recognized as a chronic and recurring problem. Hacking techniques continually change and several countermeasure methods have been suggested in the literature, including statistical and machine learning approaches. However, no single solution can be claimed as a rule of thumb for the wide spectrum of attacks. In this paper, a novel methodology is proposed for network intrusion detection based on the multicriteria PROAFTN classification. The algorithm is evaluated and compared on a publicly available and widely used dataset. The results in this paper show that the proposed algorithm is promising in detecting various types of intrusions with high classification accuracy. Keywords: computer crime; learning (artificial intelligence); statistical analysis; hacking techniques; machine learning approach; multicriteria PROAFTN classification; network intrusion detection; statistical approach; Accuracy; Computers; Decision making; Educational institutions; Intrusion detection; Prototypes; Support vector machines (ID#:14-2443) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847436&isnumber=6847317
- Weller-Fahy, D.; Borghetti, B.J.; Sodemann, AA, "A Survey of Distance and Similarity Measures used within Network Intrusion Anomaly Detection," Communications Surveys & Tutorials, IEEE, vol. PP, no. 99, pp. 1-1, July 2014. doi: 10.1109/COMST.2014.2336610 Anomaly Detection (AD) use within the Network Intrusion Detection (NID) field of research, or Network Intrusion Anomaly Detection (NIAD), is dependent on the proper use of similarity and distance measures, but the measures used are often not documented in published research. As a result, while the body of NIAD research has grown extensively, knowledge of the utility of similarity and distance measures within the field has not grown correspondingly. NIAD research covers a myriad of domains and employs a diverse array of techniques from simple k-means clustering through advanced multi-agent distributed anomaly detection systems. This review presents an overview of the use of similarity and distance measures within NIAD research. The analysis provides a theoretical background in distance measures, and a discussion of various types of distance measures and their uses. Exemplary uses of distance measures in published research are presented, as is the overall state of the distance measure rigor in the field. Finally, areas which require further focus on improving distance measure rigor in the NIAD field are presented. Key words: (not provided) (ID#:14-2444) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6853338&isnumber=5451756
- Kumar, G.V.P.; Reddy, D.K., "An Agent Based Intrusion Detection System for Wireless Network with Artificial Immune System (AIS) and Negative Clone Selection," Electronic Systems, Signal Processing and Computing Technologies (ICESC), 2014 International Conference on, pp. 429-433, 9-11 Jan. 2014. doi: 10.1109/ICESC.2014.73 Intrusion in a wireless network differs from that in an IP network in the sense that wireless intrusion occurs at both the packet level and the signal level. Hence a wireless intrusion signature may be as simple as, say, a changed MAC address or a jamming signal, or as complicated as session hijacking. Therefore merely managing and cross-verifying the patterns from an intrusion source is difficult in such a network. Besides the difficulty of detecting the intrusion at different layers, the network credentials vary from node to node due to factors like mobility, congestion, node failure and so on. Hence conventional techniques for intrusion detection fail to prevail in wireless networks. Therefore in this work we devise a unique agent-based technique to gather information from various nodes and use this information with an evolutionary artificial immune system to detect the intrusion and prevent it by bypassing or delaying transmission over the intrusive paths. Simulation results show that the overhead of running the AIS system does not vary and is consistent across topological changes. The results also show that the proposed system is well suited for intrusion detection and prevention in wireless networks.
Keywords: access protocols; artificial immune systems; jamming; packet radio networks; radio networks; security of data; AIS system; IP network; MAC address; agent based intrusion detection system; artificial immune system; jamming signal; negative clone selection; network topology; session hijacking; wireless intrusion signature; wireless network; Bandwidth; Delays; Immune system; Intrusion detection; Mobile agents; Wireless networks; Wireless sensor networks; AIS; congestion; intrusion detection; mobility (ID#:14-2445) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6745417&isnumber=6745317
- Junho Hong; Chen-Ching Liu; Govindarasu, M., "Detection Of Cyber Intrusions Using Network-Based Multicast Messages For Substation Automation," Innovative Smart Grid Technologies Conference (ISGT), 2014 IEEE PES, pp. 1-5, 19-22 Feb. 2014. doi: 10.1109/ISGT.2014.6816375 This paper proposes a new network-based cyber intrusion detection system (NIDS) using multicast messages in substation automation systems (SASs). The proposed network-based intrusion detection system monitors anomalies and malicious activities of multicast messages based on IEC 61850, e.g., Generic Object Oriented Substation Event (GOOSE) and Sampled Value (SV). NIDS detects anomalies and intrusions that violate predefined security rules using a specification-based algorithm. The performance test has been conducted for different cyber intrusion scenarios (e.g., packet modification, replay and denial-of-service attacks) using a cyber security testbed. The IEEE 39-bus system model has been used for testing of the proposed intrusion detection method for simultaneous cyber attacks. The false negative ratio (FNR) is the number of misclassified abnormal packets divided by the total number of abnormal packets. The results demonstrate that the proposed NIDS achieves a low false negative ratio.
Keywords: power engineering computing; security of data; substation automation; FNR; GOOSE; IEC 61850; IEEE 39-bus system model; NIDS; SAS; SV; anomaly detection; cyber security testbed; denial-of-service attacks; false negative ratio; generic object-oriented substation event; low false negative rate; misclassified abnormal packets; network-based cyber intrusion detection system; network-based multicast messages; packet modification; predefined security rules; replay; sampled value; simultaneous cyber attacks; specification-based algorithm; substation automation systems; Computer security; Educational institutions; IEC standards; Intrusion detection; Substation automation; Cyber Security of Substations; GOOSE and SV; Intrusion Detection System; Network Security (ID#:14-2446) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816375&isnumber=6816367
- Arya, A; Kumar, S., "Information theoretic feature extraction to reduce dimensionality of Genetic Network Programming based intrusion detection model," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, pp. 34-37, 7-8 Feb. 2014. doi: 10.1109/ICICICT.2014.6781248 Intrusion detection techniques require examining a high volume of audit records, so it is always challenging to extract a minimal set of features that reduces the dimensionality of the problem while maintaining efficient performance. Previous researchers analyzed the Genetic Network Programming framework using all 41 features of the KDD Cup 99 dataset and found an efficiency of more than 90% at the cost of high dimensionality. We propose a new technique for the same framework with low dimensionality, using an information theoretic approach to select a minimal set of features, resulting in six attributes and giving accuracy very close to their result. Feature selection is based on the hypothesis that not all features are at the same relevance level with respect to a specific class. Simulation results with the KDD Cup 99 dataset indicate that our solution gives accurate results while minimizing additional overheads. Keywords: feature extraction; feature selection; genetic algorithms; information theory; security of data; KDD Cup 99 dataset; audit records; dimensionality reduction; feature selection; genetic network programming based intrusion detection model; information theoretic feature extraction; Artificial intelligence; Correlation; Association rule; Discretization; Feature Selection; GNP (ID#:14-2447) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781248&isnumber=6781240
- Nafir, Abdenacer; Mazouzi, Smaine; Chikhi, Salim, "Collective intrusion detection in wide area networks," Innovations in Intelligent Systems and Applications (INISTA) Proceedings, 2014 IEEE International Symposium on, pp. 46-51, 23-25 June 2014. doi: 10.1109/INISTA.2014.6873596 We present in this paper a collective approach for intrusion detection in wide area networks. We use the multi-agent paradigm to model the proposed distributed system. In this system, an agent, which plays several roles, is situated on each node of the network. The first role of an agent is to perform the work of a local intrusion detection system (IDS). Periodically, it exchanges security data within its local neighbourhood, which consists of the IDS agents of neighbouring nodes. The goal of such an approach is to consolidate the decision regarding every suspected security event. Unlike previous works that proposed distributed systems for intrusion detection, our system is not restricted to data sharing: in the case of a conflict it proceeds to a negotiation between neighbouring agents in order to produce a consensual decision. So, the proposed system is fully distributed. It does not require any central or hierarchical control, which would compromise its scalability, especially in wide area networks such as the Internet. Indeed, in this kind of network, some attacks like distributed denial of service (DDoS) require a fully distributed defence. Experiments on our system show its potential for satisfactory DDoS attack detection. Keywords: Computer crime; Computer hacking; Internet; Intrusion detection; Multi-agent systems; Wide area networks; DDoS; IDS; Intrusion detection; Multi-agent systems; Network security (ID#:14-2448) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6873596&isnumber=6873584
- Soo Young Moon; Ji Won Kim; Tae Ho Cho, "An Energy-Efficient Routing Method With Intrusion Detection And Prevention For Wireless Sensor Networks," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 467-470, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779004 Because of features such as limited resources, wireless communication, and harsh environments, wireless sensor networks (WSNs) are prone to various security attacks. Therefore, we need intrusion detection and prevention methods in WSNs. When both types of schemes are applied, heavy communication overhead and the resulting excessive energy consumption of nodes occur. For this reason, we propose an energy-efficient routing method for environments where both intrusion detection and prevention schemes are used in WSNs. We confirmed through experiments that the proposed scheme reduces the communication overhead and energy consumption compared to existing schemes. Keywords: security of data; telecommunication network routing; wireless sensor networks; energy-efficient routing method; excessive energy consumption; heavy communication overhead; intrusion detection scheme; intrusion prevention scheme; security attacks; wireless communication; wireless sensor networks; Energy consumption; Intrusion detection; Network topology; Routing; Sensors; Topology; Wireless sensor networks; intrusion detection; intrusion prevention; network layer attacks; wireless sensor network (ID#:14-2449) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779004&isnumber=6778899
- Chaudhary, A; Tiwari, V.N.; Kumar, A, "Design an Anomaly Based Fuzzy Intrusion Detection System For Packet Dropping Attack In Mobile Ad Hoc Networks," Advance Computing Conference (IACC), 2014 IEEE International, pp. 256-261, 21-22 Feb. 2014. doi: 10.1109/IAdCC.2014.6779330 Due to the advancement in communication technologies, mobile ad hoc networks increase the ability for ad hoc communication between mobile nodes. Mobile ad hoc networks do not use any predefined infrastructure during communication, so all present mobile nodes that want to communicate with each other immediately form the topology and initiate requests for data packets to send or receive. From a security perspective, communication via wireless links makes mobile ad hoc networks more vulnerable to attacks because anyone can join or leave the network at any time. In particular, one very common attack in mobile ad hoc networks is the packet dropping attack through malicious node(s). This paper develops an anomaly based fuzzy intrusion detection system to detect the packet dropping attack in mobile ad hoc networks; the proposed solution also saves the resources of mobile nodes by removing the malicious nodes. For implementation, the QualNet simulator 6.1 and a Sugeno-type fuzzy inference system are used to build the fuzzy rule base for analyzing the results. The simulation results show that the proposed system is capable of detecting the packet dropping attack with a high detection rate and a low false positive rate under each level (low, medium and high) of mobile node speed.
Keywords: fuzzy logic; fuzzy reasoning; fuzzy set theory; mobile ad hoc networks; telecommunication network topology; telecommunication security; anomaly based fuzzy intrusion detection system; data packets; fuzzy rule base; malicious nodes; mobile ad hoc networks; mobile nodes; network topology; packet dropping attack; qualnet simulator 6.1; sugeno-type fuzzy inference system; wireless communication; Fuzzy logic; Intrusion detection; Mobile ad hoc networks; Mobile computing; Mobile nodes; MANETs security issues; detection methods; fuzzy logic; intrusion detection system (IDS); mobile ad hoc networks (MANETs); packet dropping attack (ID#:14-2450) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779330&isnumber=6779283
- Holm, H., "Signature Based Intrusion Detection for Zero-Day Attacks: (Not) A Closed Chapter?," System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp. 4895-4904, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.600 A frequent claim that has not been validated is that signature based network intrusion detection systems (SNIDS) cannot detect zero-day attacks. This paper studies this property by testing 356 severe attacks on the SNIDS Snort, configured with an old official rule set. Of these attacks, 183 are zero-days to the rule set and 173 are theoretically known to it. The results from the study show that Snort clearly is able to detect zero-days (a mean of 17% detection). The detection rate is, however, overall greater for theoretically known attacks (a mean of 54% detection). The paper then investigates how the zero-days are detected, how prone the corresponding signatures are to false alarms, and how easily they can be evaded. Analyses of these aspects suggest that a conservative estimate of zero-day detection by Snort is 8.2%. Keywords: computer network security; digital signatures; SNIDS; false alarm; signature based network intrusion detection; zero day attacks; zero day detection; Computer architecture; Payloads; Ports (Computers); Reliability; Servers; Software; Testing; Computer security; NIDS; code injection; exploits (ID#:14-2451) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759203&isnumber=6758592
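Several of the entries above report a detection rate and a false alarm rate, typically derived from a confusion matrix (e.g., the Weka-based evaluation study and the online Adaboost work). As a generic illustration of those metrics only, independent of any cited paper and using made-up labels:

```python
from collections import Counter

def confusion_counts(y_true, y_pred, positive="attack"):
    """Tally TP/FP/FN/TN for a binary intrusion-detection labelling."""
    c = Counter()
    for t, p in zip(y_true, y_pred):
        if t == positive and p == positive:
            c["TP"] += 1
        elif t != positive and p == positive:
            c["FP"] += 1
        elif t == positive:
            c["FN"] += 1
        else:
            c["TN"] += 1
    return c

def rates(c):
    """Detection rate (recall on attacks) and false alarm rate."""
    detection = c["TP"] / (c["TP"] + c["FN"]) if c["TP"] + c["FN"] else 0.0
    false_alarm = c["FP"] / (c["FP"] + c["TN"]) if c["FP"] + c["TN"] else 0.0
    return detection, false_alarm

if __name__ == "__main__":
    truth = ["attack", "attack", "normal", "normal", "attack", "normal"]
    preds = ["attack", "normal", "normal", "attack", "attack", "normal"]
    c = confusion_counts(truth, preds)
    print(c["TP"], c["FP"], c["FN"], c["TN"])  # 2 1 1 2
    print(rates(c))
```

The trade-off the cited papers optimize is exactly this pair: raising the detection rate without letting the false alarm rate climb.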
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Pervasive Computing
Also called ubiquitous computing, pervasive computing is the concept that all man-made and some natural products will have embedded hardware and software technology and connectivity. This evolution has been proceeding exponentially as computing devices become progressively smaller and more powerful. The goal of pervasive computing, which combines current network technologies with wireless computing, voice recognition, Internet capability and artificial intelligence, is to create an environment where the connectivity of devices is embedded in such a way that the connectivity is unobtrusive and always available. Such an approach offers security challenges. The articles cited here were published in the first half of 2014.
- Chopra, A; Tokas, S.; Sinha, S.; Panchal, V.K., "Integration of Semantic Search Technique And Pervasive Computing," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, pp.283,285, 5-7 March 2014. doi: 10.1109/IndiaCom.2014.6828144 The main goal of pervasive computing is to provide services that can be used by the user in the given context with minimal user intervention. To support such an environment, services or applications should be able to interact seamlessly with the other devices or applications present in the environment to gather relevant information in the current context. The main challenge is that devices are resource-constrained. To support such systems, so that they can utilize resources of other sensor nodes/mobile devices, I propose a system that integrates semantic search in pervasive computing. Information associated with mobile devices and sensor nodes is used in a way that results in minimal inexact matching and efficient, improved service discovery. Keywords: information retrieval; ubiquitous computing; information gathering; mobile devices; pervasive computing; resource utilization; semantic search technique; sensor nodes; service discovery; user intervention; Context; Decision support systems; Mobile handsets; Pervasive computing; Resource description framework; Semantics; Wireless sensor networks; RDF; pervasive computing; semantic search; service discovery (ID#:14-2452) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828144&isnumber=6827395
- Kiljander, J.; D'Elia, A; Morandi, F.; Hyttinen, P.; Takalo-Mattila, J.; Ylisaukko-oja, A; Soininen, J.; Salmon Cinotti, T., "Semantic interoperability architecture for pervasive computing and Internet of Things," Access, IEEE, vol. PP, no.99, pp.1, 1, August 2014. doi: 10.1109/ACCESS.2014.2347992 Pervasive computing and Internet of Things (IoT) paradigms have created a huge potential for new business. To fully realize this potential, there is a need for a common way to abstract the heterogeneity of devices so that their functionality can be represented as a virtual computing platform. To this end, we present a novel semantic-level interoperability architecture for pervasive computing and the Internet of Things. There are two main principles in the proposed architecture. First, information and capabilities of devices are represented with Semantic Web knowledge representation technologies, and interaction with devices and the physical world is achieved by accessing and modifying their virtual representations. Second, the global IoT is divided into numerous local smart spaces managed by a Semantic Information Broker (SIB) that provides a means to monitor and update the virtual representation of the physical world. An integral part of the architecture is a Resolution Infrastructure that provides a means to resolve the network address of a SIB either by using a physical object identifier as a pointer to information or by searching for SIBs matching a specification represented with SPARQL. We present several reference implementations and applications that we have developed to evaluate the architecture in practice. The evaluation also includes performance studies that, together with the applications, demonstrate the suitability of the architecture to real-life IoT scenarios. Additionally, to validate that the proposed architecture conforms to the common IoT-A Architecture Reference Model (ARM), we map the central components of the architecture to the IoT-ARM.
Keywords: Computer architecture; Context awareness; Interoperability; Pervasive computing; Resource description framework; Semantics; Sensors (ID#:14-2453) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6879461&isnumber=6514899
- Strobel, D.; Oswald, D.; Richter, B.; Schellenberg, F.; Paar, C., "Microcontrollers as (In)Security Devices for Pervasive Computing Applications," Proceedings of the IEEE, vol.102, no.8, pp.1157,1173, Aug. 2014. doi: 10.1109/JPROC.2014.2325397 Often overlooked, microcontrollers are the central component in embedded systems which drive the evolution toward the Internet of Things (IoT). They are small, easy to handle, low cost, and with myriads of pervasive applications. An increasing number of microcontroller-equipped systems are security and safety critical. In this tutorial, we take a critical look at the security aspects of today's microcontrollers. We demonstrate why the implementation of sensitive applications on a standard microcontroller can lead to severe security problems. To this end, we summarize various threats to microcontroller-based systems, including side-channel analysis and different methods for extracting embedded code. In two case studies, we demonstrate the relevance of these techniques in real-world applications: Both analyzed systems, a widely used digital locking system and the YubiKey 2 one-time password generator, turned out to be susceptible to attacks against the actual implementations, allowing an adversary to extract the cryptographic keys which, in turn, leads to a total collapse of the system security.
Keywords: Internet of Things; cryptography; embedded systems; microcontrollers; ubiquitous computing; Internet of Things; IoT; YubiKey 2 one-time password generator; cryptographic key extraction; digital locking system; embedded code extraction; embedded systems; microcontroller-equipped systems; pervasive computing applications; security devices; side-channel analysis; Algorithm design and analysis; Cryptography; Embedded systems; Field programmable gate arrays; Integrated circuit modeling; Microcontrollers; Pervasive computing; Security; Code extraction; microcontroller; real-world attacks; reverse engineering; side-channel analysis (ID#:14-2455) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6826474&isnumber=6860340
- Alomair, B.; Poovendran, R., "Efficient Authentication for Mobile and Pervasive Computing," Mobile Computing, IEEE Transactions on, vol.13, no.3, pp. 469,481, March 2014. doi: 10.1109/TMC.2012.252 With today's technology, many applications rely on the existence of small devices that can exchange information and form communication networks. In a significant portion of such applications, the confidentiality and integrity of the communicated messages are of particular interest. In this work, we propose two novel techniques for authenticating short encrypted messages that are directed to meet the requirements of mobile and pervasive applications. By taking advantage of the fact that the message to be authenticated must also be encrypted, we propose provably secure authentication codes that are more efficient than any message authentication code in the literature. The key idea behind the proposed techniques is to utilize the security that the encryption algorithm can provide to design more efficient authentication mechanisms, as opposed to using standalone authentication primitives. Keywords: cryptography; message authentication; mobile computing; communicated message confidentiality; communicated message integrity; communication networks; encryption algorithm; information exchange; mobile applications; mobile computing; pervasive applications; pervasive computing; provably secure authentication codes; short encrypted message authentication mechanism; Algorithm design and analysis; Authentication; Encryption; Message authentication; Authentication; computational security; pervasive computing; unconditional security; universal hash-function families (ID#:14-2456) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6380496&isnumber=6731368
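Alomair and Poovendran's key idea, leveraging the randomness already spent on encryption to make a cheap universal-hash MAC secure, can be illustrated generically. The sketch below is a Carter-Wegman-style one-time universal hash, not the authors' exact construction; the prime modulus and the way the key is drawn are illustrative assumptions:

```python
import secrets

P = (1 << 127) - 1  # a Mersenne prime modulus for the universal hash

def uh_mac(key: tuple[int, int], msg: int) -> int:
    """One-time universal-hash MAC: tag = (a*m + b) mod P."""
    a, b = key
    return (a * msg + b) % P

# Per-message key material; in the paper's setting this kind of fresh
# randomness can come from the same cipher state that encrypts the message,
# which is what makes the MAC cheaper than a standalone primitive.
key = (secrets.randbelow(P - 1) + 1, secrets.randbelow(P))
msg = int.from_bytes(b"short encrypted msg", "big")
tag = uh_mac(key, msg)
assert uh_mac(key, msg) == tag      # legitimate verification succeeds
assert uh_mac(key, msg ^ 1) != tag  # flipping one bit always changes the tag
```

Computing the tag costs one multiply and one add modulo a prime, versus a full hash or block-cipher pass for a conventional MAC, which is the efficiency argument the abstract makes.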
- Vihavainen, S.; Lampinen, A; Oulasvirta, A; Silfverberg, S.; Lehmuskallio, A, "The Clash between Privacy and Automation in Social Media," Pervasive Computing, IEEE, vol.13, no.1, pp.56, 63, Jan.-Mar. 2014. doi: 10.1109/MPRV.2013.25 Classic research on human factors has found that automation never fully eliminates the human operator from the loop. Instead, it shifts the operator's responsibilities to the machine and changes the operator's control demands, sometimes with adverse consequences, called the "ironies of automation." In this article, the authors revisit the problem of automation in the era of social media, focusing on privacy concerns. Present-day social media automatically discloses information, such as users' whereabouts, likings, and undertakings. This review of empirical studies exposes three recurring privacy-related issues in automated disclosure: insensitivity to situational demands, inadequate control of nuance and veracity, and inability to control disclosure with service providers and third parties. The authors claim that "all-or-nothing" automation has proven problematic and that social network services should design their user controls with all stages of the disclosure process in mind. Keywords: data privacy; human factors; social networking (online); automated disclosure; human factors; privacy-related issues; social media; social network services; Automation; Context awareness; Human factors; Media; Pervasive computing; Privacy; Social implications of technology; Social network services; automation; pervasive computing; privacy; social media (ID#:14-2457) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6419690&isnumber=6750476
- Arbit, A; Oren, Y.; Wool, A, "A Secure Supply-Chain RFID System that Respects Your Privacy," Pervasive Computing, IEEE, vol.13, no.2, pp.52,60, Apr.-June 2014. doi: 10.1109/MPRV.2014.22 Supply-chain RFID systems introduce significant privacy issues to consumers, making it necessary to encrypt communications. Because the resources available on tags are very small, it is generally assumed that only symmetric-key cryptography can be used in such systems. Unfortunately, symmetric-key cryptography imposes negative trust issues between the various stakeholders, and risks compromising the security of the whole system if even a single tag is reverse engineered. This work presents a working prototype implementation of a secure RFID system which uses public-key cryptography to simplify deployment, reduce trust issues between the supply-chain owner and tag manufacturer, and protect user privacy. The authors' prototype system consists of a UHF tag running custom firmware, a standard off-the-shelf reader, and custom point-of-sale terminal software. No modifications were made to the reader or the air interface, proving that high-security EPC tags and standard EPC tags can coexist and share the same infrastructure. Keywords: data privacy; manufacturing data processing; public key cryptography; radiofrequency identification; supply chain management; UHF tag; custom point-of-sale terminal software; data privacy; high-security EPC tags; off-the-shelf reader; privacy issues; public key cryptography; radiofrequency identification; reverse engineering; secure supply-chain RFID system; supply-chain owner; symmetric-key cryptography; system security; tag manufacturer; trust issues; user privacy; Encryption; Payloads; Protocols; Public key; Radiofrequency identification; Supply chain management; RFID; pervasive computing; security; supply chain (ID#:14-2458) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6818503&isnumber=6818495
- Abas, K.; Porto, C.; Obraczka, K., "Wireless Smart Camera Networks for the Surveillance of Public Spaces," Computer, vol.47, no.5, pp.37,44, May 2014. doi: 10.1109/MC.2014.140 A taxonomy of wireless visual sensor networks for surveillance offers design goals that try to balance energy efficiency and application performance requirements. SWEETcam, a wireless smart camera network platform, tries to address the challenges raised by achieving adequate energy-performance tradeoffs. Keywords: cameras; video surveillance; wireless sensor networks; SWEETcam; energy-performance tradeoffs; public space surveillance; wireless smart camera networks; Bandwidth; Cameras; Data visualization; Energy efficiency; Smart cameras; Surveillance; Wireless communication; Wireless sensor networks; computer vision; distributed systems; embedded systems; hardware; image processing; pervasive computing; surveillance systems; visualization; wireless sensor networks (ID#:14-2459) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6818944&isnumber=6818895
- Avoine, G.; Coisel, I; Martin, T., "Untraceability Model for RFID," Mobile Computing, IEEE Transactions on, vol. PP, no.99, pp.1, 1, December 2013. doi: 10.1109/TMC.2013.161 After several years of research on cryptographic models for privacy in RFID systems, it appears that no universally accepted model exists yet. Experience shows that security experts usually prefer using their own ad-hoc model rather than the existing ones. In particular, the inability of the models to refine the privacy assessment of different protocols has been highlighted in several studies. The paper emphasizes the necessity of defining a new model capable of comparing protocols meaningfully. It introduces an untraceability model that is operational where the previous models are not. The model aims to be easily usable to design proofs or describe attacks. This spirit led to a modular model where adversary actions (oracles), capabilities (selectors and restrictions), and goals (experiment) follow an intuitive and practical approach. This design enhances the ability to formalize new adversarial assumptions and future evolutions of the technology, and provides a finer privacy evaluation of protocols. Keywords: Pervasive computing; Security; Systems and information theory; protection; integrity (ID#:14-2460) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6692838&isnumber=4358975
- Chia-Mei Chen; Peng-Yu Yang; Ya-Hui Ou; Han-Wei Hsiao, "Targeted Attack Prevention at Early Stage," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, vol., no., pp.866,870, 13-16 May 2014. doi: 10.1109/WAINA.2014.134 Targeted cyber attacks play a critical role in disrupting network infrastructure and information privacy. Based on incident investigations, intelligence gathering is the first phase of such attacks. To evade detection, a hacker may make use of a botnet, a set of zombie machines, to gain access to a target, with the zombies sending the collected results back to the hacker. Even if the zombies are blocked by a detection system, the hacker, using the access information obtained from the botnet, can log in to the target from another machine without being noticed by the detection system. Such an information gathering tactic evades detection and gains the hacker initial access to the target. The proposed defense system analyzes multiple logs from the network and extracts the reconnaissance attack sequences related to targeted attacks. A state-based model is adopted to model the steps of this early-phase attack performed by multiple scouts and an intruder, and such attack events over a long time frame become significant in the state-aware model. The results show that the proposed system can identify the attacks at the early stage efficiently to prevent further damage in the networks. Keywords: authorisation; data privacy; invasive software; ubiquitous computing; botnet; cyber attack; information privacy; intelligence gathering; network infrastructure; state-based model; targeted attack prevention; Computer hacking; Hidden Markov models; IP networks; Joints; Reconnaissance; Servers; intrusion detection; pervasive computing; targeted attacks (ID#:14-2461) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844748&isnumber=6844560
- Mirzadeh, S.; Cruickshank, H.; Tafazolli, R., "Secure Device Pairing: A Survey," Communications Surveys & Tutorials, IEEE, vol.16, no.1, pp.17,40, First Quarter 2014. doi: 10.1109/SURV.2013.111413.00196 In this paper, we discuss secure device pairing mechanisms in detail. We explain the man-in-the-middle attack problem in unauthenticated Diffie-Hellman key agreement protocols and show how it can be solved by using out-of-band channels in the authentication procedure. We categorize out-of-band channels into three categories of weak, public, and private channels and demonstrate their properties through some familiar scenarios. A wide range of current device pairing mechanisms are studied and their design circumstances, problems, and security issues are explained. We also study group device pairing mechanisms and discuss their application in constructing authenticated group key agreement protocols. We divide the mechanisms into two categories of protocols with and without a trusted leader and show that protocols with a trusted leader are more communication- and computation-efficient. In our study, we consider both insider and outsider adversaries and present protocols that provide secure group device pairing for uncompromised nodes even in the presence of corrupted group members. Keywords: cryptographic protocols; authenticated group key agreement protocol; authentication procedure; device pairing mechanism; man-in-the-middle attack problem; out-of-band channel; private channel; public channel; unauthenticated Diffie-Hellman key agreement protocols; Authentication; DH-HEMTs; Protocols; Public key; Wireless communication; key management; machine-to-machine communication; pervasive computing; security (ID#:14-2462) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6687314&isnumber=6734839
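The out-of-band defense the survey describes can be illustrated with unauthenticated Diffie-Hellman plus a short authentication string (SAS) that both devices derive from the exchanged public values and display for the user to compare; a matching code on both screens rules out a man-in-the-middle who substituted his own public values. The tiny prime and six-digit code below are illustrative assumptions, not a secure parameter choice:

```python
import hashlib
import secrets

# Toy finite-field Diffie-Hellman. A real pairing protocol would use a
# standardized large group; this small prime is for illustration only.
p = 0xFFFFFFFB  # prime 2**32 - 5, NOT secure
g = 5

def dh_keypair():
    """Generate a private exponent and the matching public value."""
    x = secrets.randbelow(p - 2) + 1
    return x, pow(g, x, p)

ax, aY = dh_keypair()          # device A
bx, bY = dh_keypair()          # device B
shared_a = pow(bY, ax, p)      # A combines B's public value
shared_b = pow(aY, bx, p)      # B combines A's public value
assert shared_a == shared_b    # both sides reach the same secret

def sas(pub_a: int, pub_b: int) -> str:
    """Short authentication string shown on both devices for the user to compare."""
    digest = hashlib.sha256(f"{pub_a}|{pub_b}".encode()).hexdigest()
    return digest[:6]  # e.g. a 6-hex-digit code

print("compare on both screens:", sas(aY, bY))
```

An attacker relaying modified public values would cause the two devices to display different codes, which is the "weak" out-of-band channel (the human) catching the attack.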
- Thuong Nguyen, "Bayesian Nonparametric Extraction Of Hidden Contexts From Pervasive Honest Signals," Pervasive Computing and Communications Workshops (PERCOM Workshops), 2014 IEEE International Conference on, vol., no., pp.168,170, 24-28 March 2014. doi: 10.1109/PerComW.2014.6815190 Hidden patterns and contexts play an important part in intelligent pervasive systems. Most existing works have focused on simple forms of contexts derived directly from raw signals. High-level constructs and patterns have been largely neglected or remain under-explored in pervasive computing, mainly due to the growing complexity over time and the lack of efficient principled methods to extract them. Traditional parametric modeling approaches from machine learning find it difficult to discover new, unseen patterns and contexts arising from the continuous growth of data streams, due to their training-then-prediction paradigm. In this work, we propose to apply Bayesian nonparametric models as a systematic and rigorous paradigm to continuously learn hidden patterns and contexts from raw social signals, providing basic building blocks for context-aware applications. Bayesian nonparametric models allow the model complexity to grow with the data, fitting naturally to several problems encountered in pervasive computing. Under this framework, we use nonparametric prior distributions to model the data generative process, which helps toward learning the number of latent patterns automatically, adapting to changes in data and discovering never-seen-before patterns, contexts and activities. The proposed methods are agnostic to data types; however, we demonstrate them on two types of signals: accelerometer activity data and Bluetooth proximal data.
Keywords: data mining; learning (artificial intelligence); ubiquitous computing; Bayesian nonparametric extraction; Bayesian nonparametric models; Bluetooth proximal data; accelerometer activity data; context-aware applications; data streams; hidden contexts extraction; high-level constructs; high-level patterns; intelligent pervasive systems; machine learning; parametric modeling approach; pervasive computing; pervasive honest signals; social signals; training-then-prediction paradigm; Adaptation models; Context; Context modeling; Data mining; Data models; Hidden Markov models; Pervasive computing (ID#:14-2463) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6815190&isnumber=6815123
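The Bayesian nonparametric property the abstract relies on, model complexity growing with the data, is captured by priors such as the Chinese restaurant process: each new observation either joins an existing latent context or opens a new one with probability controlled by a concentration parameter, so never-seen-before contexts can appear at any time. A self-contained sketch of that prior (the parameter values are illustrative, and this is a generic CRP, not the paper's specific model):

```python
import random

def crp_assignments(n: int, alpha: float, seed: int = 0) -> list[int]:
    """Chinese restaurant process: assign n observations to a growing
    set of latent 'contexts' (tables); the number of contexts is not
    fixed in advance but grows with the data."""
    rng = random.Random(seed)
    counts: list[int] = []   # observations per existing context
    labels: list[int] = []
    for i in range(n):
        # open a new context with probability alpha / (i + alpha)
        if rng.random() < alpha / (i + alpha):
            counts.append(1)
            labels.append(len(counts) - 1)
        else:
            # otherwise join an existing context, proportionally to its size
            r = rng.random() * i
            acc = 0
            for k, c in enumerate(counts):
                acc += c
                if r < acc:
                    counts[k] += 1
                    labels.append(k)
                    break
    return labels

labels = crp_assignments(200, alpha=2.0)
print("contexts discovered:", len(set(labels)))
```

The "rich get richer" join rule keeps popular contexts stable while the alpha term leaves room for new activities to be discovered, which is the adaptation-to-streams behavior the abstract contrasts with train-then-predict models.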
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Router Systems Security
Routers are among the most ubiquitous electronic devices in use. Basic security from protocols and encryption can be readily achieved, but routing has many leaks. The articles cited here look at route leaks, stack protection, and mobile platforms using Tor, iOS and Android OS. They were published in the first half of 2014.
- Siddiqui, M.S.; Montero, D.; Yannuzzi, M.; Serral-Gracia, R.; Masip-Bruin, X., "Diagnosis of Route Leaks Among Autonomous Systems In The Internet," Smart Communications in Network Technologies (SaCoNeT), 2014 International Conference on, vol., no., pp.1,6, 18-20 June 2014. doi: 10.1109/SaCoNeT.2014.6867765 Border Gateway Protocol (BGP) is the de facto inter-domain routing protocol in the Internet. It was designed without an inherent security mechanism and hence is prone to a number of vulnerabilities which can cause large scale disruption in the Internet. Route leak is one such inter-domain routing security problem which has the potential to cause wide-scale Internet service failure. Route leaks occur when autonomous systems violate export policies while exporting routes. As BGP security has been an active research area for over a decade now, several security strategies have been proposed, some advocating complete replacement of BGP and others the addition of new features to it, but they have failed to achieve global acceptance. Even the most recent effort in this regard, led by the Secure Inter-Domain Routing (SIDR) working group (WG) of the IETF, fails to counter all BGP anomalies, especially route leaks. In this paper we look at the efforts to counter policy-related BGP problems and provide analytical insights into why they are ineffective. We contend that a new direction is needed for future research in managing the broader security issues in inter-domain routing. In that light, we propose a naive approach for countering the route leak problem by analyzing the information available at hand, such as the RIB of the router. The main purpose of this paper is to position and highlight an autonomous smart analytical approach for tackling policy-related BGP security issues.
Keywords: Internet; computer network security; routing protocols; BGP security issue; IETF; Internet autonomous systems; Secure Inter-Domain Routing working group; border gateway protocol; interdomain routing protocol; interdomain routing security problem; route leak diagnosis; security issues; IP networks; Internet; Radiation detectors; Routing; Routing protocols; Security (ID#:14-2464) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6867765&isnumber=6867755
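The export-policy violation Siddiqui et al. study is conventionally stated via the Gao-Rexford rules: a route learned from a customer may be exported to anyone, but a route learned from a peer or provider must only be exported to customers; anything else is a route leak. That check can be sketched in a few lines (the relationship labels and function name are hypothetical, not the paper's implementation):

```python
# Hypothetical relationship labels for a router's RIB entries.
CUSTOMER, PEER, PROVIDER = "customer", "peer", "provider"

def is_route_leak(learned_from: str, exported_to: str) -> bool:
    """Gao-Rexford export rule: routes learned from a peer or provider
    must only be propagated to customers; any other export is a leak."""
    if learned_from == CUSTOMER:
        return False               # customer routes may go to anyone
    return exported_to != CUSTOMER # peer/provider routes: customers only

print(is_route_leak(CUSTOMER, PROVIDER))  # -> False (legitimate export)
print(is_route_leak(PROVIDER, PEER))      # -> True  (classic route leak)
```

The difficulty the paper highlights is that the `learned_from` relationships are business-confidential, so an outside observer must infer them from RIB data rather than read them off a table like this.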
- Peng Wu; Wolf, T., "Stack Protection In Packet Processing Systems," Computing, Networking and Communications (ICNC), 2014 International Conference on, pp.53,57, 3-6 Feb. 2014. doi: 10.1109/ICCNC.2014.6785304 Network security is a critical aspect of Internet operations. Most network security research has focused on protecting end-systems from hacking and denial-of-service attacks. In our work, we address hacking attacks on the network infrastructure itself. In particular, we explore data plane stack smashing attacks that have been demonstrated successfully on network processor systems. We explore their use in the context of software routers that are implemented on top of general-purpose processors and operating systems. We discuss how such attacks can be adapted to these router systems and how stack protection mechanisms can be used as a defense. We show experimental results that demonstrate the effectiveness of these stack protection mechanisms. Keywords: Internet; computer crime; computer network security; general purpose computers; operating systems (computers); packet switching; telecommunication network routing; Internet; computer network security; denial of service attacks; end systems protection; general purpose processor; hacking attacks; network infrastructure; network processor systems; operating systems; packet processing system; router systems; smashing attacks; software routers; stack protection mechanism; Computer architecture; Information security; Linux; Operating systems; Protocols; attack; defense; network security; stack smashing (ID#:14-2465) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785304&isnumber=6785290
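The kind of stack protection Wu and Wolf evaluate places a random canary word between a stack buffer and the saved control data, and checks it before the function returns: an overflowing copy must clobber the canary on its way to the return address, so the corruption is detected before control transfers. A toy simulation of that mechanism (the frame layout here is a model, not real memory, and the function names are hypothetical):

```python
import secrets

CANARY = secrets.token_bytes(4)  # random per-process canary word

def write_packet(buf_size: int, payload: bytes) -> bytes:
    """Simulate an unchecked copy into a fixed-size stack buffer that is
    guarded by a canary word placed just past the buffer."""
    frame = bytearray(buf_size) + bytearray(CANARY)
    frame[: len(payload)] = payload  # vulnerable memcpy-style copy
    if bytes(frame[buf_size : buf_size + 4]) != CANARY:
        raise RuntimeError("stack smashing detected")
    return bytes(frame[:buf_size])

write_packet(8, b"ok")  # in-bounds write passes the canary check

# Craft an overflow guaranteed to change the canary bytes:
overflow = b"A" * 8 + bytes(b ^ 0xFF for b in CANARY)
try:
    write_packet(8, overflow)
except RuntimeError as err:
    print(err)  # -> stack smashing detected
```

Because the canary is random per process, an attacker writing past the buffer cannot simply replay the expected value, which is why the data-plane attacks in the paper are stopped by this defense.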
- Frantti, Tapio; Roning, Juha, "A Risk-Driven Security Analysis For A Bluetooth Low Energy Based Microdata Ecosystem," Ubiquitous and Future Networks (ICUFN), 2014 Sixth International Conf on, vol., no., pp.69,74, 8-11 July 2014. doi: 10.1109/ICUFN.2014.6876753 This paper presents security requirements, risk survey, security objectives, and security controls of the Bluetooth Low Energy (BLE) based Catcher devices and the related Microdata Ecosystem of Ceruus company for a secure, energy efficient and scalable wireless content distribution. The system architecture was composed of the Mobile Cellular Network (MCN) based gateway/edge router device, such as Smart Phone, Catchers, and web based application servers. It was assumed that MCN based gateways communicate with application servers and surrounding Catcher devices. The analysis of the scenarios developed highlighted common aspects and led to security requirements, objectives, and controls that were used to define and develop the Catcher and MCN based router devices and guide the system architecture design of the Microdata Ecosystem. Keywords: Authentication; Ecosystems; Encryption; Logic gates; Protocols; Servers; Internet of Things; authentication; authorization; confidentiality; integrity; security; timeliness (ID#:14-2466) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6876753&isnumber=6876727
- Wassel, H.M.G.; Ying Gao; Oberg, J.K.; Huffmire, T.; Kastner, R.; Chong, F.T.; Sherwood, T., "Networks on Chip with Provable Security Properties," Micro, IEEE , vol.34, no.3, pp.57,68, May-June 2014. doi: 10.1109/MM.2014.46 In systems where a lack of safety or security guarantees can be catastrophic or even fatal, noninterference is used to separate domains handling critical (or confidential) information from those processing normal (or unclassified) data for purposes of fault containment and ease of verification. This article introduces SurfNoC, an on-chip network that significantly reduces the latency incurred by strict temporal partitioning. By carefully scheduling the network into waves that flow across the interconnect, data from different domains carried by these waves are strictly noninterfering while avoiding the significant overheads associated with cycle-by-cycle time multiplexing. The authors describe the scheduling policy and router microarchitecture changes required, and evaluate the information-flow security of a synthesizable implementation through gate-level information flow analysis. When comparing their approach for varying numbers of domains and network sizes, they find that in many cases SurfNoC can reduce the latency overhead of implementing cycle-level noninterference by up to 85 percent. 
Keywords: network-on-chip; processor scheduling; security of data; SurfNoC; cycle-by-cycle time multiplexing; cycle-level noninterference; gate-level information flow analysis; information-flow security; network scheduling; networks on chip; provable security properties; Computer architecture; Computer security; Microarchitecture; Network-on-chip; Ports (Computers); Quality of service; Schedules; high performance computing; high-assurance systems; networks on chip; noninterference; security; virtualization (ID#:14-2467) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828567&isnumber=6828565
- Sivaraman, V.; Matthews, J.; Russell, C.; Ali, S.T.; Vishwanath, A, "Greening Residential WiFi Networks under Centralized Control," Mobile Computing, IEEE Transactions on, vol. PP, no.99, pp.1, 1, May 2014. doi: 10.1109/TMC.2014.2324582 Residential broadband gateways (comprising modem, router, and WiFi access point), though individually consuming only 5-10 Watts of power, are significant contributors to overall network energy consumption due to large deployment numbers. Moreover, home gateways are typically always on, so as to provide continuous online presence to household devices for VoIP, smart metering, security surveillance, medical monitoring, etc. A natural solution for reducing the energy consumption of home gateways is to leverage the overlap of WiFi networks common in urban environments and aggregate user traffic on to fewer gateways, thus putting the remaining to sleep. In this paper we propose, evaluate, and prototype an architecture that overcomes significant challenges in making this solution feasible at large-scale. We advocate a centralized approach, whereby a single authority coordinates the home gateways to maximize energy savings in a fair manner. Our solution can be implemented across heterogeneous ISPs, avoids client-side modifications (thus encompassing arbitrary user devices and operating systems), and permits explicit control of session migrations. We apply our solution to WiFi traces collected in a building with 30 access points and 25,000 client connections, and evaluate via simulation the trade-offs between energy savings, session disruptions, and fairness. We then prototype our system on commodity WiFi access points, test it in a two-storey building emulating 6 residences, and demonstrate radio energy reduction of over 60% with little impact on user experience.
Keywords: Bandwidth; Buildings; Energy consumption; Green products; IEEE 802.11 Standards; Logic gates; Security (ID#:14-2468) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816063&isnumber=4358975
- Tennekoon, R.; Wijekoon, J.; Harahap, E.; Nishi, H.; Saito, E.; Katsura, S., "Per-Hop Data Encryption Protocol for Transmission of Motion Control Data over Public Networks," Advanced Motion Control (AMC), 2014 IEEE 13th International Workshop on, vol., no., pp.128,133, 14-16 March 2014. doi: 10.1109/AMC.2014.6823269 Bilateral controllers are a widely used, vital technology for performing remote operations and telesurgeries. The nature of the bilateral controller enables control of objects that are geographically far from the operation location, so the control data has to travel through public networks. As a result, to maintain the effectiveness and consistency of applications such as teleoperations and telesurgeries, faster data delivery and data integrity are essential. The Service-oriented Router (SoR) was introduced to maintain the rich information on the Internet and to achieve maximum benefit from networks. In particular, the security, privacy and integrity of bilateral communication have not been discussed in spite of their significance for the underlying skill information or personal vital information. An SoR can analyze all packet or network stream transactions on its interfaces and store them in high throughput databases. In this paper, we introduce a hop-by-hop routing protocol which provides hop-by-hop data encryption using functions of the SoR. This infrastructure can provide security, privacy and integrity by using these functions. Furthermore, we present an implementation of the proposed system in the ns-3 simulator; the test results show that in a given scenario, the protocol incurs a processing delay of only 46.32 ms for the encryption and decryption processes per packet.
Keywords: Internet; computer network security; control engineering computing; cryptographic protocols; data communication; data integrity; data privacy; force control; medical robotics; motion control; position control; routing protocols; surgery; telecontrol; telemedicine; telerobotics; Internet; SoR; bilateral communication; bilateral controller; control objects; data delivery; data integrity; decryption process; hop-by-hop data encryption; hop-by-hop routing protocol; motion control data transmission; network stream transaction analysis; ns-3 simulator; operation location; packet analysis; per hop data encryption protocol; personal vital information; privacy; processing delay; public network; remote operation; security; service-oriented router; skill information; teleoperation; telesurgery; throughput database; Delays; Encryption; Haptic interfaces; Routing protocols; Surgery; Bilateral Controllers; Service-oriented Router; hop-by-hop routing; motion control over networks; ns-3 (ID#:14-2469) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6823269&isnumber=6823244
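The hop-by-hop re-encryption idea in the abstract above can be sketched in a few lines. This is a toy illustration, not the authors' protocol: a XOR keystream stands in for the real per-hop cipher, and the link keys are invented.

```python
import itertools

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Symmetric stand-in for the real per-hop cipher (XOR keystream)."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

HOP_KEYS = [b"k-src-r1", b"k-r1-r2", b"k-r2-dst"]   # one shared key per link

def send_over_path(payload: bytes) -> bytes:
    """Sender encrypts for the first hop; each router decrypts with the
    inbound link key and re-encrypts with the outbound one; the receiver
    decrypts the last hop. The payload is never in cleartext on any link."""
    ct = xor_crypt(payload, HOP_KEYS[0])
    for i in range(1, len(HOP_KEYS)):
        ct = xor_crypt(xor_crypt(ct, HOP_KEYS[i - 1]), HOP_KEYS[i])
    return xor_crypt(ct, HOP_KEYS[-1])

assert send_over_path(b"motion control data") == b"motion control data"
```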
- Bingyang Liu; Jun Bi; Vasilakos, AV., "Toward Incentivizing Anti-Spoofing Deployment," Information Forensics and Security, IEEE Transactions on, vol.9, no.3, pp.436,450, March 2014. doi: 10.1109/TIFS.2013.2296437 IP spoofing-based flooding attacks are a serious and open security problem on the current Internet. The best current antispoofing practices have long been implemented in modern routers. However, they are not sufficiently applied due to the lack of deployment incentives, i.e., an autonomous system (AS) can hardly gain additional protection by deploying them. In this paper, we propose mutual egress filtering (MEF), a novel antispoofing method, which provides continuous deployment incentives. The MEF is implemented on the AS border routers using access control lists (ACLs). It drops an outbound packet whose source address does not belong to the local AS if the packet is related to a spoofing attack against other MEF-enabled ASes. By this means, only the deployers of the MEF can gain protection, whereas nondeployers cannot free ride. As more ASes deploy MEF, deployment incentives become higher. We present the system design of MEF, and propose an optimal prefix compression algorithm to compact the ACL into the routers' limited hardware resource. With theoretical analysis and simulations with real Internet data, our evaluation results show that MEF is the only method that achieves monotonically increasing deployment incentives for all types of spoofing attacks, and the system design is lightweight and practical. The prefix compression algorithm advances the state-of-the-art by generalizing the functionalities and reducing the overhead in both time and space. 
Keywords: IP networks; Internet; authorisation; computer network security; telecommunication network routing; ACL; AS border routers; IP spoofing-based flooding attacks; Internet; MEF; access control lists; antispoofing deployment incentivization; deployment incentives; functionality generalization; mutual egress filtering; open security problem; optimal prefix compression resource; overhead reduction; Compression algorithms; Filtering; Hardware; IP networks; Internet; Routing protocols; System analysis and design; DoS defense; IP spoofing; deployment incentive; spoofing prevention (ID#:14-2470) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6697842&isnumber=6727454
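The MEF drop rule described above (an outbound packet whose source address is not local is dropped if it targets another MEF-enabled AS, so only deployers gain protection) can be illustrated with a toy filter. The prefixes and the deployer list below are hypothetical, not from the paper.

```python
import ipaddress

LOCAL_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]          # our AS
MEF_DEPLOYER_PREFIXES = [ipaddress.ip_network("198.51.100.0/24")]  # other deployers

def egress_allows(src, dst):
    """Egress ACL decision: drop an outbound packet whose source address is
    not local (i.e. spoofed) when it is headed for another MEF-enabled AS.
    Traffic to non-deployers is untouched, so non-deployers cannot free ride."""
    src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    src_is_local = any(src in net for net in LOCAL_PREFIXES)
    dst_is_deployer = any(dst in net for net in MEF_DEPLOYER_PREFIXES)
    return src_is_local or not dst_is_deployer

assert egress_allows("203.0.113.7", "198.51.100.1")    # legitimate, allowed
assert not egress_allows("192.0.2.9", "198.51.100.1")  # spoofed, deployer protected
assert egress_allows("192.0.2.9", "192.0.2.200")       # non-deployer: no protection
```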
- Naito, K.; Mori, K.; Kobayashi, H.; Kamienoo, K.; Suzuki, H.; Watanabe, A., "End-to-end IP Mobility Platform in Application Layer for iOS and Android OS," Consumer Communications and Networking Conference (CCNC), 2014 IEEE 11th, vol., no., pp.92,97, 10-13 Jan. 2014. doi: 10.1109/CCNC.2014.6866554 Smartphones are a new type of mobile device on which users can easily install additional software. In almost all smartphone applications, the client-server model is used because end-to-end communication is prevented by NAT routers. Recently, some smartphone applications provide real-time services such as voice and video communication, online games, etc. In these applications, end-to-end communication is suitable to reduce transmission delay and achieve efficient network usage. IP mobility and security are also important matters. However, conventional IP mobility mechanisms are not suitable for these applications because most mechanisms are assumed to be installed in the OS kernel. We have developed a novel IP mobility mechanism called NTMobile (Network Traversal with Mobility). NTMobile supports end-to-end IP mobility in IPv4 and IPv6 networks; however, it is assumed to be installed in the Linux kernel, as with other technologies. In this paper, we propose a new type of end-to-end mobility platform that provides end-to-end communication, mobility, and secure data exchange functions in the application layer for smartphone applications. In the platform, we use NTMobile, ported as an application program. We then extend NTMobile to be suitable for smartphone devices and to provide secure data exchange. Client applications can achieve secure end-to-end communication and secure data exchange by sharing an encryption key between clients. Users also enjoy IP mobility, which is the main function of NTMobile, in each application. Finally, we confirmed that the developed module can work on the Android system and the iOS system.
Keywords: Android (operating system); IP networks; client-server systems; cryptography; electronic data interchange; iOS (operating system); real-time systems; smart phones; Android OS; IPv4 networks; IPv6 networks; Linux kernel; NAT routers; NTMobile; OS kernel; application layer; client-server model; encryption key; end-to-end IP mobility platform; end-to-end communication; iOS system; network traversal with mobility; network usage; real time services; secure data exchange; smartphones; transmission delay; Authentication; Encryption; IP networks; Manganese; Relays; Servers (ID#:14-2471) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866554&isnumber=6866537
- Zhen Ling; Junzhou Luo; Kui Wu; Wei Yu; Xinwen Fu, "TorWard: Discovery of Malicious Traffic Over Tor," INFOCOM, 2014 Proceedings IEEE, vol., no., pp.1402,1410, April 27 2014-May 2 2014. doi: 10.1109/INFOCOM.2014.6848074 Tor is a popular low-latency anonymous communication system. However, it is currently abused in various ways. Tor exit routers are frequently troubled by administrative and legal complaints. To gain an insight into such abuse, we design and implement a novel system, TorWard, for the discovery and systematic study of malicious traffic over Tor. The system can avoid legal and administrative complaints and allows the investigation to be performed in a sensitive environment such as a university campus. An IDS (Intrusion Detection System) is used to discover and classify malicious traffic. We performed comprehensive analysis and extensive real-world experiments to validate the feasibility and effectiveness of TorWard. Our data shows that around 10% of Tor traffic can trigger IDS alerts. Malicious traffic includes P2P traffic, malware traffic (e.g., botnet traffic), DoS (Denial-of-Service) attack traffic, spam, and others. Around 200 known malware samples have been identified. To the best of our knowledge, we are the first to perform malicious traffic categorization over Tor. Keywords: computer network security; peer-to-peer computing; telecommunication network routing; telecommunication traffic; DoS; IDS; IDS alerts; P2P traffic; Tor exit routers; denial-of-service attack traffic; intrusion detection system; low-latency anonymous communication system; malicious traffic categorization; malicious traffic discovery; spam; Bandwidth; Computers; Logic gates; Malware; Mobile handsets; Ports (Computers); Servers; Intrusion Detection System; Malicious Traffic; Tor (ID#:14-2472) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848074&isnumber=6847911
- Ganegedara, T.; Weirong Jiang; Prasanna, V.K., "A Scalable and Modular Architecture for High-Performance Packet Classification," Parallel and Distributed Systems, IEEE Transactions on, vol.25, no.5, pp.1135,1144, May 2014. doi: 10.1109/TPDS.2013.261 Packet classification is widely used as a core function for various applications in network infrastructure. With increasing demands in throughput, performing wire-speed packet classification has become challenging. Also, the performance of today's packet classification solutions depends on the characteristics of rulesets. In this work, we propose a novel modular Bit-Vector (BV) based architecture to perform high-speed packet classification on Field Programmable Gate Array (FPGA). We introduce an algorithm named StrideBV and modularize the BV architecture to achieve better scalability than traditional BV methods. Further, we incorporate range search in our architecture to eliminate ruleset expansion caused by range-to-prefix conversion. The post place-and-route results of our implementation on a state-of-the-art FPGA show that the proposed architecture is able to operate at 100+ Gbps for minimum size packets while supporting large rulesets of up to 28 K rules using only the on-chip memory resources. Our solution is ruleset-feature independent, i.e., the above performance can be guaranteed for any ruleset regardless of its composition.
Keywords: field programmable gate arrays; packet switching; FPGA; core function; field programmable gate array; high-performance packet classification solutions; high-speed packet classification; modular architecture; modular bit vector; network infrastructure; on-chip memory resources; range-to-prefix conversion; ruleset expansion; ruleset-feature independent; scalable architecture; wire-speed packet classification; Arrays; Field programmable gate arrays; Hardware; Memory management; Pipelines; Throughput; Vectors; ASIC; FPGA; Packet classification; firewall; hardware architectures; network security; networking; router (ID#:14-2473) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6627892&isnumber=6786006
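The bit-vector idea underlying architectures like the one above can be sketched in software: each field lookup yields a bit vector of rules matching that field, the vectors are ANDed, and the lowest set bit identifies the highest-priority match. The two-field ruleset below is invented for illustration and is not the paper's StrideBV pipeline.

```python
RULES = [  # invented two-field rules; None means wildcard
    {"src": "10.0.0", "port": 80},    # rule 0 (highest priority)
    {"src": "10.0.0", "port": None},  # rule 1 (any port)
    {"src": None,     "port": 22},    # rule 2 (any source)
]

def field_vector(field, value):
    """Return a bit vector: bit i is set if rule i matches this one field."""
    vec = 0
    for i, rule in enumerate(RULES):
        if rule[field] is None or rule[field] == value:
            vec |= 1 << i
    return vec

def classify(src, port):
    """AND the per-field vectors; the lowest set bit is the best-priority match."""
    match = field_vector("src", src) & field_vector("port", port)
    if match == 0:
        return None
    return (match & -match).bit_length() - 1

assert classify("10.0.0", 80) == 0     # both fields match rule 0
assert classify("10.0.0", 443) == 1    # falls through to the any-port rule
assert classify("172.16.1", 22) == 2   # matched only by the any-source rule
assert classify("172.16.1", 443) is None
```

In hardware the per-field vectors come from parallel memory lookups per stride, which is what makes the AND-and-priority-encode step wire speed.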
- Sgouras, K.I.; Birda, A.D.; Labridis, D.P., "Cyber Attack Impact On Critical Smart Grid Infrastructures," Innovative Smart Grid Technologies Conference (ISGT), 2014 IEEE PES, vol., no., pp.1,5, 19-22 Feb. 2014. doi: 10.1109/ISGT.2014.6816504 Electrical distribution networks face new challenges from Smart Grid deployment. The required metering infrastructures add new vulnerabilities that need to be taken into account in order to achieve Smart Grid functionality without a considerable reliability trade-off. In this paper, a qualitative assessment of the cyber attack impact on the Advanced Metering Infrastructure (AMI) is initially attempted. Attack simulations have been conducted on a realistic grid topology. The simulated network consisted of smart meters, routers and utility servers. Finally, the impact of Denial-of-Service and Distributed Denial-of-Service (DoS/DDoS) attacks on distribution system reliability is discussed through a qualitative analysis of reliability indices. Keywords: computer network security; power distribution reliability; power engineering computing; power system security; smart meters; smart power grids; AMI; DoS-DDoS attacks; advanced metering infrastructure; critical smart grid infrastructures; cyber attack impact; distributed denial-of-service attacks; distribution system reliability; electrical distribution networks; grid topology; qualitative assessment; routers; smart grid deployment; smart meters; utility servers; Computer crime; Reliability; Servers; Smart grids; Topology; AMI; Cyber Attack; DDoS; DoS; Reliability; Simulation; Smart Grid (ID#:14-2474) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816504&isnumber=6816367
- Sarma, K.J.; Sharma, R.; Das, R., "A Survey Of Black Hole Attack Detection In Manet," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, vol., no., pp.202,205, 7-8 Feb. 2014. doi: 10.1109/ICICICT.2014.6781279 A MANET is an infrastructure-less, dynamic, decentralised network. Any node can join or leave the network at any point of time. Due to its simplicity and flexibility, it is widely used in military communication, emergency communication, academic settings and mobile conferencing. In a MANET there is no infrastructure; hence each node acts as both a host and a router, and nodes are connected to each other in a peer-to-peer fashion. Decentralised means there is no distinction between client and server: each node acts as both a client and a server. Due to the dynamic nature of a mobile ad hoc network, it is more vulnerable to attack. Since any node can join or leave the network without permission, the security issues are more challenging than in other types of network. One of the major security problems in ad hoc networks is the black hole problem. It occurs when a malicious node, referred to as a black hole, joins the network. The black hole conducts its malicious behavior during the process of route discovery. For any received RREQ, the black hole claims to have a route and propagates a faked RREP. The source node responds to these faked RREPs and sends its data through the received routes; once the data is received by the black hole, it is dropped instead of being forwarded to the desired destination. This paper discusses some of the techniques put forward by researchers to detect and prevent black hole attacks in MANETs using the AODV protocol, and, based on their flaws, a new methodology is also proposed.
Keywords: client-server systems; mobile ad hoc networks; network servers; peer-to-peer computing; radio wave propagation; routing protocols; telecommunication security; AODV protocol; MANET; academic purpose; black hole attack detection; client; decentralised network; emergency communication; military communication; mobile ad-hoc network; mobile conferencing; peer-to-peer network; received RREQ; route discovery; security; server; Europe; Mobile communication; Routing protocols; Ad-HOC; Black hole attack; MANET; RREP; RREQ (ID#:14-2475) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781279&isnumber=6781240
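Many AODV black-hole detectors build on the observation that a black hole advertises an implausibly fresh route to win route discovery. A minimal sketch of that common heuristic, with an invented threshold (it is not a scheme taken from the surveyed paper):

```python
SEQ_JUMP_THRESHOLD = 50   # invented tuning parameter, not from the survey

def is_suspicious_rrep(last_known_seq, rrep_seq, threshold=SEQ_JUMP_THRESHOLD):
    """Flag an RREP whose destination sequence number jumps implausibly far
    ahead of anything the source has seen: the signature of a black hole
    advertising a fake fresh route to win route discovery."""
    return rrep_seq - last_known_seq > threshold

assert not is_suspicious_rrep(100, 104)   # slightly fresher route: plausible
assert is_suspicious_rrep(100, 5000)      # implausible jump: likely black hole
```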
Steganography
Digital steganography is one of the primary areas of science of security research. Detection and countermeasures are the topics pursued. The articles cited here were presented between January and August of 2014. They cover a range of topics, including Least Significant Bit (LSB), LDPC codes, combinations with DES encryption, and Hamming code.
- Akhtar, N.; Khan, S.; Johri, P., "An Improved Inverted LSB Image Steganography," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, vol., no., pp.749,755, 7-8 Feb. 2014. doi: 10.1109/ICICICT.2014.6781374 In this paper, an improvement to plain LSB-based image steganography is proposed and implemented. The paper proposes the use of a bit inversion technique to improve stego-image quality. Two schemes of the bit inversion technique are proposed and implemented. In these techniques, the LSBs of some pixels of the cover image are inverted if they occur with a particular pattern of some bits of those pixels. In this way, fewer pixels are modified in comparison to the plain LSB method, so the PSNR of the stego-image is improved. For correct de-steganography, the bit patterns for which LSBs have been inverted need to be stored somewhere within the stego-image. The proposed bit inversion technique provides a good improvement to LSB steganography, and could be combined with other methods to improve steganography further. Keywords: image processing; steganography; PSNR; bit inversion technique; bit patterns; cover image pixels; de-steganography; inverted LSB image steganography; least significant bit; plain LSB-based image steganography; steganography quality improvement; stego-image; Clocks; Cryptography; Laser transitions; PSNR; bit inversion; least significant bit; quality; steganography (ID#:14-2476) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781374&isnumber=6781240
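The plain-LSB baseline and the gist of the bit-inversion idea can be sketched as follows; the pixel values and the choice of the second LSB as the grouping pattern are illustrative assumptions, not the paper's exact schemes.

```python
def lsb_embed(pixels, bits):
    """Embed one message bit into the least significant bit of each pixel."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def lsb_extract(pixels, n):
    """Read the message back out of the first n pixel LSBs."""
    return [p & 1 for p in pixels[:n]]

cover = [104, 255, 37, 18, 90, 201, 63, 44]   # invented 8-bit pixel values
message = [1, 0, 1, 1, 0, 0, 1, 0]

stego = lsb_embed(cover, message)
assert lsb_extract(stego, len(message)) == message

# Bit-inversion idea: group pixels by a pattern (here: second LSB == 1) and,
# per group, embed the message bits either as-is or inverted, whichever
# modifies fewer pixels. The chosen variant must be recorded for extraction.
group = [i for i in range(len(cover)) if (cover[i] >> 1) & 1]
plain_changes = sum((cover[i] & 1) != message[i] for i in group)
inverted_changes = len(group) - plain_changes
best = min(plain_changes, inverted_changes)    # never worse than plain LSB
```

Because each pattern group can only keep or improve its change count, the stego-image PSNR can only match or exceed plain LSB, at the cost of storing one inversion flag per pattern.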
- Islam, M.R.; Siddiqa, A.; Uddin, M.P.; Mandal, A.K.; Hossain, M.D., "An Efficient Filtering Based Approach Improving LSB Image Steganography Using Status Bit Along With AES Cryptography," Informatics, Electronics & Vision (ICIEV), 2014 International Conference on, vol., no., pp.1,6, 23-24 May 2014. doi: 10.1109/ICIEV.2014.6850714 In steganography, the entire message is made invisible in a cover medium such as text, audio, video, or image, so that attackers have no idea of the original message the medium contains or of the algorithm used to embed or extract it. In this paper, the proposed technique focuses on the Bitmap image format, as it is uncompressed and more convenient than other image formats for implementing the LSB steganography method. For better security, the AES cryptography technique is also used in the proposed method. Before applying the steganography technique, AES cryptography transforms the secret message into cipher text to ensure two-layer security of the message. In the proposed technique, a new steganography method is developed to hide large data in a Bitmap image using a filtering-based algorithm, which uses MSB bits for filtering purposes. This method uses the concept of status checking for insertion and retrieval of the message, and is an improvement of the Least Significant Bit (LSB) method for hiding information in images. It is predicted that the proposed method will be able to hide large data in a single image, retaining the advantages and discarding the disadvantages of the traditional LSB method. Various sizes of data are stored inside the images and the PSNR is calculated for each of the images tested. The stego-image has a higher PSNR value as compared to the other method. Hence the proposed steganography technique is very efficient for hiding secret information inside an image.
Keywords: cryptography; filtering theory; image processing; image retrieval; steganography; AES cryptography technique; Bitmap image; LSB image steganography; PSNR value; bitmap image; cover media; efficient filtering; image format; least significant bit method; message retrieval; secret message; steganography technique; Ciphers; Encryption; Histograms; Image color analysis; PSNR; AES Cryptography; Conceal of Message; Filtering Algorithm; Image Steganography; LSB Image Steganography (ID#:14-2477) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6850714&isnumber=6850678
- Yang Ren-er; Zheng Zhiwei; Tao Shun; Ding Shilei, "Image Steganography Combined with DES Encryption Pre-processing," Measuring Technology and Mechatronics Automation (ICMTMA), 2014 Sixth International Conference on, vol., no., pp.323,326, 10-11 Jan. 2014. doi: 10.1109/ICMTMA.2014.80 In order to improve the security of steganography, this paper studies image steganography combined with DES encryption pre-processing. When transmitting the secret information, the information to be hidden is first encrypted with DES and then written into the image through LSB steganography. The encryption changes the statistical characteristics of the secret information, improving how well the embedded data blends with the image and thereby enhancing the anti-detection properties of the image steganography. The experimental results showed that the anti-detection robustness of image steganography combined with DES encryption pre-processing is much better than that of LSB steganography algorithms used directly. Keywords: cryptography; image matching; steganography; DES encryption preprocessing; LSB steganography; image matching performance; image steganography; least significant bit; secret information; statistical characteristics; Algorithm design and analysis; Encryption; Histograms; Media; Probability distribution; DES encryption; High security; Information hiding; Steganography (ID#:14-2478) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6802697&isnumber=6802614
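The encrypt-then-embed pipeline can be sketched as below. Since DES is not in the Python standard library, a toy XOR keystream stands in for it, purely to show the ordering (cipher first, LSB embedding second); the key, plaintext and pixel values are all invented.

```python
import itertools

def toy_cipher(data: bytes, key: bytes) -> bytes:
    """Stand-in for DES: XOR with a repeating key (symmetric, NOT secure)."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

def to_bits(data):
    return [(byte >> i) & 1 for byte in data for i in range(8)]

def from_bits(bits):
    return bytes(sum(b << i for i, b in enumerate(bits[j:j + 8]))
                 for j in range(0, len(bits), 8))

def lsb_embed(pixels, bits):
    """Write each message bit into a pixel's least significant bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

key = b"secret!"
plaintext = b"hi"
cover = list(range(50, 66))        # 16 invented pixel values for 16 bits

# Encrypt first, then hide: the ciphertext's near-random statistics are what
# the paper credits for the improved resistance to steganalysis.
stego = lsb_embed(cover, to_bits(toy_cipher(plaintext, key)))
recovered = toy_cipher(from_bits([p & 1 for p in stego[:16]]), key)
assert recovered == plaintext
```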
- Singla, D.; Juneja, M., "An Analysis Of Edge Based Image Steganography Techniques In Spatial Domain," Engineering and Computational Sciences (RAECS), 2014 Recent Advances in, vol., no., pp.1,5, 6-8 March 2014. doi: 10.1109/RAECS.2014.6799604 Steganography is a branch of information security. Steganography aims at hiding the existence of the actual communication. This aim is achieved by hiding the actual information within other information in such a way that an intruder cannot detect it. A variety of carrier file formats can be used to carry out steganography, e.g. images, text, videos, audio, radio waves, etc., but mainly images are used for this purpose because of their high frequency on the Internet. A number of image steganography techniques have been introduced, each with its own drawbacks and advantages. These techniques are evaluated on the basis of three parameters: imperceptibility, robustness and capacity. In this paper we review various edge-based image steganography techniques. The main idea behind these techniques is that edges can bear more variation than smooth areas without being detected. Keywords: image coding; steganography; Internet; carrier file formats; edge based image steganography techniques; information security; smooth areas; spatial domain; Algorithm design and analysis; Cryptography; Detectors; Image edge detection; PSNR; Robustness; LSB substitution; Pixel Value Differencing steganography; edge based image steganography; peak signal to noise ratio; random edge pixel embedding (ID#:14-2478) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6799604&isnumber=6799496
- Kaur, S.; Bansal, S.; Bansal, R.K., "Steganography and Classification Of Image Steganography Techniques," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, vol., no., pp.870,875, 5-7 March 2014. doi: 10.1109/IndiaCom.2014.6828087 Information is the wealth of any organization, and in the present era, in which information is transferred through digital media and the Internet, protecting this wealth has become a top priority for any organization. Whatever technique we adopt for security purposes, the degree and level of security always remain the top concern. Steganography is one such technique, in which the presence of a secret message cannot be detected, and we can use it as a tool to transmit confidential information in a secure way. It is an ongoing research area with a vast number of applications in distinct fields such as defense and intelligence, medicine, online banking, online transactions, prevention of music piracy, and other financial and commercial purposes. Various steganography approaches exist, and they differ depending upon the message to be embedded, the use of file type as carrier, the compression method used, etc. The focus of this paper is to classify distinct image steganography techniques, besides giving an overview of the importance and challenges of steganography techniques. Other related security techniques are also discussed briefly in this paper. The classification of steganography techniques may provide not only understanding and guidelines to researchers in this field but also directions for future work.
Keywords: Internet; image classification; image coding; steganography; Internet; confidential information transmission; digital media; image steganography technique classification; music piracy; online banking; online transaction; secret message; security purpose; Algorithm design and analysis; Discrete cosine transforms; Frequency-domain analysis; Image coding; Robustness; Security; Confidential; Cover Object; Data Security; Steganalysis; Steganography; Stego Object; information (ID#:14-2479) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828087&isnumber=6827395
- Bugar, G.; Banoci, V.; Broda, M.; Levicky, D.; Dupak, D., "Data Hiding In Still Images Based On Blind Algorithm Of Steganography," Radioelektronika (RADIOELEKTRONIKA), 2014 24th International Conference, vol., no., pp.1,4, 15-16 April 2014. doi: 10.1109/Radioelek.2014.6828423 Steganography is the science of hiding secret information within other, unsuspicious data. Generally, a steganographic secret message could be any widely used multimedia object: a picture, an audio file, a video file, or a message in clear text (the covertext). The most recent steganography techniques tend to hide the secret message in digital images. We propose and analyze experimentally a blind steganography method based on specific attributes of the two-dimensional discrete wavelet transform with the Haar mother wavelet. Blind steganography methods do not require the original image in the process of extraction, which helps to keep secret communication undetected by third-party users or steganalysis tools. The secret message is encoded by Huffman code in order to achieve a better imperceptibility result. Moreover, this modification also increases the security of the hidden communication. Keywords: Huffman codes; discrete wavelet transforms; image coding; steganography; Haar mother wavelet; Huffman code; blind algorithm; blind steganography method; covertext; data hiding; digital images; hidden communication security; multimedia; secret communication; secret information hiding; steganalysis tool; steganographic secret message; steganography techniques; still images; third party user; two dimensional discrete wavelet transform; unsuspicious data; Decoding; Discrete wavelet transforms; Huffman coding; Image coding; Pixel; DWT; message hiding; steganography (ID#:14-2480) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828423&isnumber=6828396
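The Huffman-coding step used above to shorten the secret message before embedding can be sketched with a standard heap-based construction; the message text is made up, and this is not the authors' implementation.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix-free code; frequent symbols get shorter codewords."""
    heap = [(freq, i, {sym: ""})
            for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tie = len(heap)                       # tiebreaker so dicts never compare
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

message = "attack at dawn"               # invented secret message
codes = huffman_codes(message)
bits = "".join(codes[ch] for ch in message)
assert len(bits) < 8 * len(message)      # fewer bits to embed than raw ASCII
```

Fewer embedded bits means fewer modified wavelet coefficients, which is the imperceptibility benefit the abstract attributes to the Huffman step.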
- Devi, M.; Sharma, N., "Improved Detection Of Least Significant Bit Steganography Algorithms In Color And Gray Scale Images," Engineering and Computational Sciences (RAECS), 2014 Recent Advances in, vol., no., pp.1,5, 6-8 March 2014. doi: 10.1109/RAECS.2014.6799507 This paper proposes an improved LSB (least significant bit) based steganography technique for images, imparting better information security for hiding secret information in images. There is a large variety of steganography techniques; some are more complex than others, and all of them have respective strong and weak points. The proposed technique ensures that eavesdroppers will not have any suspicion that message bits are hidden in the image, and that standard steganography detection methods cannot estimate the length of the secret message correctly. In this paper we present improved steganalysis methods, based on the most reliable detectors of thinly-spread LSB steganography presently known, focusing on the case when grayscale bitmaps are used as cover images. Keywords: image coding; image colour analysis; security of data; steganography; color scale images; gray scale images; grayscale bitmaps; information security; least significant bit steganography algorithm detection; secret information hiding; steganalysis methods; steganography detection methods; Conferences; Gray-scale; Image coding; Image color analysis; Image edge detection; PSNR; Security; Gray Images; LSB; RGB; Steganalysis; Steganography (ID#:14-2481) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6799507&isnumber=6799496
- Mstafa, R.J.; Elleithy, K.M., "A Highly Secure Video Steganography Using Hamming Code (7, 4)," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island, pp.1,6, 2-2 May 2014. doi: 10.1109/LISAT.2014.6845191 Due to the high speed of the Internet and advances in technology, people are becoming more worried about information being hacked by attackers. Recently, many algorithms for steganography and data hiding have been proposed. Steganography is the process of embedding secret information inside a host medium (text, audio, image or video). Concurrently, many powerful steganographic analysis software programs have become available to unauthorized users, allowing them to retrieve the valuable secret information that was embedded in carrier files. Some steganography algorithms can be easily detected by steganalytic detectors because of their lack of security and embedding efficiency. In this paper, we propose a secure video steganography algorithm based on the principle of linear block codes. Nine uncompressed video sequences are used as cover data and a binary image logo as the secret message. The pixel positions of both the cover videos and the secret message are randomly reordered using a private key to improve the system's security. The secret message is then encoded with the Hamming code (7, 4) before the embedding process to make it even more secure. The encoded message is combined with randomly generated values using the XOR function. After these steps, which make the message secure enough, it is ready to be embedded into the cover video frames. In addition, the embedding area in each frame is randomly selected and differs from the other frames, to improve the steganography scheme's robustness. Furthermore, the algorithm has high embedding efficiency, as demonstrated by the experimental results we have obtained.
Regarding the system's quality, the Peak Signal-to-Noise Ratio (PSNR) of the stego videos is above 51 dB, which is close to the original video quality. The embedding payload is also acceptable: in each video frame we can embed 16 Kbits, and it can go up to 90 Kbits without noticeable degradation of the stego video's quality. Keywords: block codes; image sequences; private key cryptography; steganography; video coding; Hamming code (7, 4); XOR function; binary image logo; cover data; data hiding; highly secure video steganography algorithm; linear block code; private key; secret information; steganalytical detectors; steganographic analysis software programs; uncompressed video sequences; Algorithm design and analysis; Block codes; Image color analysis; PSNR; Security; Streaming media; Vectors; Embedding Efficiency; Embedding Payload; Hamming Code; Linear Block Code; Security; Video Steganography (ID#:14-2482) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845191&isnumber=6845183
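A minimal Hamming (7, 4) encoder/decoder of the kind applied to the message bits before embedding might look like the following; the bit ordering is one common convention and not necessarily the paper's.

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] as (p1, p2, d1, p3, d2, d3, d4)."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(c):
    """Recover the 4 data bits, correcting any single flipped bit."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based error position, 0 if clean
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
codeword = hamming74_encode(data)
corrupted = codeword[:]
corrupted[4] ^= 1                     # simulate one bit flipped in transit
assert hamming74_decode(corrupted) == data
```

The single-error correction is what lets the scheme tolerate an occasional flipped LSB while still recovering the logo exactly.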
- Diop, I.; Farss, S.M.; Tall, K.; Fall, P.A.; Diouf, M.L.; Diop, A.K., "Adaptive Steganography Scheme Based On LDPC Codes," Advanced Communication Technology (ICACT), 2014 16th International Conference on, vol., no., pp.162,166, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778941 Steganography is the art of secret communication. Since the advent of modern steganography in the 2000s, many approaches based on error correcting codes (Hamming, BCH, RS, STC, ...) have been proposed to reduce the number of changes to the cover medium while inserting the maximum number of bits. The work of L. Diop et al. [1], inspired by that of T. Filler [2], has shown that LDPC codes are good candidates for minimizing the impact of insertion. This work continues the use of LDPC codes in steganography. We propose in this paper a steganography scheme based on these codes, inspired by the adaptive approach to the calculation of the detectability map. We evaluated the performance of our method by applying a steganalysis algorithm. Keywords: parity check codes; steganography; LDPC codes; adaptive steganography scheme; error correcting codes; map detectability; secret communication; steganalysis; Complexity theory; Distortion measurement; Educational institutions; Histograms; PSNR; Parity check codes; Vectors; Adaptive steganography; complexity; detectability; steganalysis (ID#:14-2483) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778941&isnumber=6778899
- Bansal, D.; Chhikara, R., "Performance Evaluation of Steganography Tools Using SVM and NPR Tool," Advanced Computing & Communication Technologies (ACCT), 2014 Fourth International Conference on, vol., no., pp.483,487, 8-9 Feb. 2014. doi: 10.1109/ACCT.2014.17 Steganography is the art of hiding secret messages in an innocent medium such as images, audio, video, or text, so that the existence of any secret message is not revealed. Various steganography tools are available; in this paper, we consider three algorithms: nsF5, PQ, and Outguess. To compare the robustness of these three algorithms against steganalytic attack, an algorithm based on sensitive features is presented. An SVM and the Neural Network Pattern Recognition (NPR) tool are applied to sensitive features extracted from the DCT domain. A comparison between the accuracy obtained from SVM and NPR is also shown. Experimental results show that the Outguess method withstands steganalytic attack by a margin of 35% accuracy compared to nsF5 and PQ; hence Outguess is more reliable for steganography. Keywords: data compression; discrete cosine transforms; feature extraction; image coding; neural nets; performance evaluation; steganography; support vector machines; DCT domain; JPEG feature set; NPR tool; Outguess algorithm; PQ algorithm; SVM tool; discrete cosine transform; neural network pattern recognition tool; nsF5 algorithm performance evaluation; secret message hiding; sensitive feature extraction; steganalytic attack; steganography tools; support vector machine; Accuracy; Discrete cosine transforms; Feature extraction; Histograms; Pattern recognition; Support vector machines; Training; Discrete Cosine Transform; Neural Network Pattern Recognition; Outguess; PQ; SVM; Steganography; nsF5 (ID#:14-2484) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6783501&isnumber=6783406
- Bin Li; Shunquan Tan; Ming Wang; Jiwu Huang, "Investigation on Cost Assignment in Spatial Image Steganography," Information Forensics and Security, IEEE Transactions on, vol.9, no.8, pp.1264,1277, Aug. 2014. doi: 10.1109/TIFS.2014.2326954 Relating the embedding cost in a distortion function to statistical detectability is a vital open problem in modern steganography. In this paper, we take one step forward by formulating the process of cost assignment into two phases: 1) determining a priority profile and 2) specifying a cost-value distribution. We analytically show that the cost-value distribution determines the change rate of cover elements. Furthermore, when the cost-values are specified to follow a uniform distribution, the change rate has a linear relation with the payload, which is a rare property for content-adaptive steganography. In addition, we propose some rules for ranking the priority profile for spatial images. Following such rules, we propose a five-step cost assignment scheme. Previous steganographic schemes, such as HUGO, WOW, S-UNIWARD, and MG, can be integrated into our scheme. Experimental results demonstrate that the proposed scheme is capable of better resisting steganalysis equipped with high-dimensional rich model features. Keywords: image coding; steganography; content-adaptive steganography; cost assignment investigation; cost-value distribution; distortion function; five-step cost assignment scheme; high-dimensional rich model features; spatial image steganography; spatial images; statistical detectability; Additives; Educational institutions; Encoding; Feature extraction; Payloads; Security; Vectors; Cost-value distribution; distortion function; priority profile; steganalysis; steganography (ID#:14-2485) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6822611&isnumber=6846399
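The link between a cost-value distribution and the change rate can be made concrete with the standard payload-limited-sender model underlying this line of work: each cover element's flip probability follows from its cost via a Gibbs distribution, with a scale parameter tuned so the total entropy matches the payload. The sketch below is a generic illustration of that framework with hypothetical costs, not the paper's five-step scheme.

```python
# Payload-limited sender: element i with cost c_i is changed with
# probability p_i = 1/(1 + e^{lam*c_i}); lam is found by bisection so that
# sum(H(p_i)) equals the target payload in bits (payload must be smaller
# than the number of cover elements).  Generic illustration only.
import math

def H(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def change_probs(costs, payload_bits, iters=60):
    def total_entropy(lam):
        return sum(H(1.0 / (1.0 + math.exp(lam * c))) for c in costs)
    lo, hi = 0.0, 1.0
    while total_entropy(hi) > payload_bits:   # entropy decreases in lam
        hi *= 2.0
    for _ in range(iters):                    # bisection for lam
        mid = (lo + hi) / 2.0
        if total_entropy(mid) > payload_bits:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    return [1.0 / (1.0 + math.exp(lam * c)) for c in costs]

costs = [1.0, 2.0, 5.0, 10.0]        # hypothetical embedding costs
probs = change_probs(costs, payload_bits=1.5)
assert abs(sum(H(p) for p in probs) - 1.5) < 1e-6
assert probs[0] > probs[-1]          # cheaper elements change more often
```

Under a uniform cost-value distribution, repeating this computation for different payloads exhibits the near-linear payload/change-rate relation the abstract highlights.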
- Banerjee, I; Bhattacharyya, S.; Sanyal, G., "Robust Image Steganography With Pixel Factor Mapping (PFM) Technique," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, vol., no., pp.692,698, 5-7 March 2014. doi: 10.1109/IndiaCom.2014.6828050 Daily life now depends heavily on Internet technologies across a wide range of activities, which brings both advantages and disadvantages. Maintaining the secrecy of information requires information-hiding techniques, and steganography is one popular such technique; extensive effort has been devoted to this area by different researchers. In this contribution, a frequency-domain image steganography method using DCT coefficients is proposed, designed around a pixel factor mapping technique. Keywords: Internet; discrete cosine transforms; image processing; steganography; DCT coefficient; Internet technology; frequency domain image steganography method; information hiding technique; pixel factor mapping technique; robust image steganography; Discrete cosine transforms; Entropy; Frequency-domain analysis; PSNR; Security; Cover Image; DCT; Pixel Factor Mapping (PFM) method; Steganography; Stego Image (ID#:14-2486) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828050&isnumber=6827395
- Linjie Guo; Jiangqun Ni; Yun Qing Shi, "Uniform Embedding for Efficient JPEG Steganography," Information Forensics and Security, IEEE Transactions on, vol.9, no.5, pp.814,825, May 2014. doi: 10.1109/TIFS.2014.2312817 Steganography is the science and art of covert communication, which aims to hide secret messages in a cover medium while achieving the least possible statistical detectability. To this end, the framework of minimal-distortion embedding is widely adopted in the development of steganographic systems, in which a well-designed distortion function is of vital importance. In this paper, a class of new distortion functions known as uniform embedding distortion functions (UED) is presented for both side-informed and non-side-informed secure JPEG steganography. By incorporating syndrome trellis coding, the best codeword with minimal distortion for a given message is determined with UED, which, instead of random modification, tries to spread the embedding modification uniformly across quantized discrete cosine transform (DCT) coefficients of all possible magnitudes. In this way, less statistical detectability is achieved, owing to the reduction of the average changes of the first- and second-order statistics for DCT coefficients as a whole. The effectiveness of the proposed scheme is verified with evidence obtained from exhaustive experiments using popular steganalyzers with various feature sets on the BOSSbase database. Compared with prior art, the proposed scheme gains favorable performance in terms of secure embedding capacity against steganalysis.
Keywords: discrete cosine transforms; distortion; higher order statistics; image coding; steganography; trellis codes; BOSSbase database; DCT; UED; distortion functions; first-order statistics; minimal distortion embedding framework; nonside-informed secure JPEG steganography; quantized discrete cosine transform coefficients; second-order statistics; secure embedding capacity; side-informed secure JPEG steganography; statistical detectability; syndrome trellis coding; uniform embedding distortion function; Additives; Discrete cosine transforms; Encoding; Histograms; Payloads; Security; Transform coding; JPEG steganography; distortion function design; minimal-distortion embedding; uniform embedding (ID#:14-2487) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6776485&isnumber=6776454
- Gupta, N.; Sharma, N., "DWT and LSB Based Audio Steganography," Optimization, Reliability, and Information Technology (ICROIT), 2014 International Conference on, pp.428,431, 6-8 Feb. 2014. doi: 10.1109/ICROIT.2014.6798368 Steganography is a fascinating and effective method of hiding data that has been used throughout history. Methods can be employed to uncover such devious tactics, but the first step is awareness that such methods even exist. There are many good reasons to use this type of data hiding, including watermarking and more secure central storage of items such as passwords or key processes. Regardless, the technology is easy to use and difficult to detect. Researchers have carried out considerable work to solve this problem and to find effective methods for image hiding. The proposed system aims to provide improved robustness and security through a new method of audio steganography based on DWT (Discrete Wavelet Transform) and LSB (Least Significant Bit) techniques. The emphasis is on the proposed scheme of hiding an image in audio and its comparison with the simple least-significant-bit insertion method for data hiding in audio. Keywords: audio watermarking; data encapsulation; discrete wavelet transforms; steganography; DWT based audio steganography; LSB based audio steganography; audio watermarking; data hiding; discrete wavelet transform; image hiding; least significant bit insertion method; secure central storage method; Cryptography; Discrete wavelet transforms; Generators; Audio steganography; DWT; LSB; PSNR (ID#:14-2488) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798368&isnumber=6798279
- Balakrishna, C.; Naveen Chandra, V.; Pal, R., "Image Steganography Using Single Digit Sum With Varying Base," Electronics, Computing and Communication Technologies (IEEE CONECCT), 2014 IEEE International Conference on, vol., no., pp.1,5, 6-7 Jan. 2014. doi: 10.1109/CONECCT.2014.6740336 Hiding an important message within an image is known as image steganography. Imperceptibility of the message is a major concern of an image steganography scheme. A novel single digit sum (SDS) based image steganography scheme is proposed in this paper. First, the computation of the SDS is generalized to support a number system with any given base. Then, an image steganography scheme is developed in which the base for computing the SDS varies from one pixel to another; therefore, the number of embedded bits varies across pixels. The purpose of this technique is to control the amount of change in a pixel. A lossy compressed version of the cover image is used to determine the upper limit of change in each pixel value, and the base for computing the SDS is determined from this upper limit. Thus, it is ensured that the stego image does not degrade beyond the degradation in the lossily compressed image. Keywords: data compression; image coding; steganography; SDS; lossy compressed version; message hiding; novel single digit sum based image steganography scheme; pixel value; Payloads (ID#:14-2489) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6740336&isnumber=6740167
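The single digit sum underlying the scheme above is the classical digital root generalized to an arbitrary base, and embedding amounts to nudging a pixel to the nearest value whose SDS equals the message digit. A minimal sketch, assuming a fixed base per pixel rather than the paper's compression-driven base selection:

```python
# Single digit sum (SDS) in an arbitrary base: repeatedly sum the base-b
# digits of n until one digit remains.  For n > 0 this equals the
# digital-root formula 1 + (n - 1) % (b - 1).  Sketch of the embedding
# idea only; the cited scheme derives the base per pixel from a lossy
# compressed cover, which is omitted here.

def sds(n, base):
    while n >= base:
        total = 0
        while n:
            total += n % base
            n //= base
        n = total
    return n

def embed_pixel(value, digit, base):
    """Smallest change to `value` so that sds(value, base) == digit."""
    for delta in range(base):                 # SDS cycles with period base-1
        for cand in (value + delta, value - delta):
            if 0 <= cand <= 255 and sds(cand, base) == digit:
                return cand
    raise ValueError("digit unreachable for this base")

assert sds(38, 10) == 2                       # 3+8=11, 1+1=2
assert sds(38, 10) == 1 + (38 - 1) % 9        # digital-root formula
p = embed_pixel(200, 3, base=8)
assert sds(p, 8) == 3 and abs(p - 200) <= 7
```

A larger base allows more message values per pixel at the cost of a larger worst-case pixel change, which is exactly the trade-off the varying-base scheme controls.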
- Odeh, A; Elleithy, K.; Faezipour, M., "Fast Real-Time Hardware Engine for Multipoint Text Steganography," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island, vol., no., pp.1,5, 2 May 2014. doi: 10.1109/LISAT.2014.6845184 Different strategies have been introduced in the literature to protect data. Some techniques change the data's form, while others hide the data inside another file. Steganography techniques conceal information inside different digital media such as image, audio, and text files. Most of the introduced techniques use a software implementation to embed secret data inside the carrier file, and most software implementations are not sufficiently fast for real-time applications. In this paper, we present a new real-time steganography technique that hides data inside a text file using a hardware engine with an 11.27 Gbps hidden-data rate. Keywords: data protection; steganography; text analysis; carrier file; data hiding; data protection; digital media; multipoint text steganography; real-time hardware engine; real-time steganography technique; secret data; text file; Algorithm design and analysis; Engines; Field programmable gate arrays; Hardware; Real-time systems; Signal processing algorithms; Streaming media (ID#:14-2490) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845184&isnumber=6845183
- Karakis, R.; Guler, I, "An Application Of Fuzzy Logic-Based Image Steganography," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, vol., no., pp.156,159, 23-25 April 2014. doi: 10.1109/SIU.2014.6830189 Today, with advancing technology, the security of data in digital environments (such as text, image, and video files) has become a pressing concern. Steganography and cryptology are both important for protecting data: cryptology protects the message contents, while steganography hides the message's presence. In this study, an application of fuzzy logic (FL)-based image steganography was performed. First, the hidden messages were encrypted with an XOR (eXclusive OR) algorithm. Second, an FL algorithm was used to select the least significant bits (LSBs) of the image pixels. Then, the LSBs of the selected image pixels were replaced with the bits of the hidden messages. The FL-based LSB algorithm makes the LSB method more robust and secure against steganalysis. Keywords: cryptography; fuzzy logic; image coding; steganography; FL-based LSB algorithm; XOR algorithm; cryptology; data security; eXclusive OR algorithm; fuzzy logic; image steganography; least significant bits; Conferences; Cryptography; Fuzzy logic; Internet; PSNR; Signal processing algorithms (ID#:14-2491) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830189&isnumber=6830164
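The two-stage pipeline described above (XOR encryption followed by LSB replacement) can be sketched as follows. The fuzzy-logic pixel selection is replaced here with simple sequential selection, so this illustrates the pipeline rather than the authors' algorithm.

```python
# Stage 1: XOR-encrypt the message with a shared key.
# Stage 2: replace pixel LSBs with the cipher bits.
# Illustrative sketch only -- the paper's FL-based pixel selection is
# simplified to sequential selection.

def xor_bytes(data, key):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def to_bits(data):
    return [(byte >> (7 - k)) & 1 for byte in data for k in range(8)]

def embed(pixels, message, key):
    bits = to_bits(xor_bytes(message, key))
    assert len(bits) <= len(pixels), "cover too small"
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit      # replace the LSB
    return stego

def extract(pixels, n_bytes, key):
    bits = [p & 1 for p in pixels[:8 * n_bytes]]
    data = bytes(
        sum(bit << (7 - k) for k, bit in enumerate(bits[i:i + 8]))
        for i in range(0, len(bits), 8)
    )
    return xor_bytes(data, key)               # XOR is its own inverse

cover = list(range(200))                      # stand-in for image pixels
secret, key = b"hidden", b"\x5a\x3c"
stego = embed(cover, secret, key)
assert extract(stego, len(secret), key) == b"hidden"
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
```

Each pixel changes by at most 1, which is why LSB replacement is imperceptible yet detectable by dedicated steganalysis; the fuzzy selection step in the paper aims to make that detection harder.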
- Sarreshtedari, S.; Akhaee, M.A, "One-Third Probability Embedding: A New ±1 Histogram Compensating Image Least Significant Bit Steganography Scheme," Image Processing, IET, vol.8, no.2, pp.78,89, February 2014. doi: 10.1049/iet-ipr.2013.0109 A new method is introduced for the least significant bit (LSB) image steganography in spatial domain providing the capacity of one bit per pixel. Compared to the recently proposed image steganography techniques, the new method called one-third LSB embedding reduces the probability of change per pixel to one-third without sacrificing the embedding capacity. This improvement results in a better imperceptibility and also higher robustness against well-known LSB detectors. Bits of the message are carried using a function of three adjacent cover pixels. It is shown that no significant improvement is achieved by increasing the length of the pixel sequence employed. A closed-form expression for the probability of change per pixel in terms of the number of pixels used in the pixel groups has been derived. Another advantage of the proposed algorithm is to compensate, as much as possible, for any changes in the image histogram. It has been demonstrated that one-third probability embedding outperforms the histogram compensating version of LSB matching in terms of keeping the image histogram unchanged. Keywords: image coding; image enhancement; probability; steganography; LSB image steganography; closed-form expression; histogram compensating image least significant bit steganography scheme; image histogram; one-third LSB embedding; one-third probability embedding; pixel sequence; spatial domain (ID#:14-2492) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6733839&isnumber=6733837
- Pathak, P.; Chattopadhyay, AK.; Nag, A, "A New Audio Steganography Scheme Based On Location Selection With Enhanced Security," Automation, Control, Energy and Systems (ACES), 2014 First International Conference on, vol., no., pp.1,4, 1-2 Feb. 2014. doi: 10.1109/ACES.2014.6807979 Steganography is the art and science of secret communication. In this paper a new scheme for digital audio steganography is presented, in which the bits of a secret message are embedded into the coefficients of a cover audio signal. Each secret bit is embedded at a selected position within a cover coefficient. The position for inserting a secret bit is selected from the 0th (least significant) bit to the 7th bit, based on the upper three MSBs (most significant bits). This scheme provides high audio quality, robustness, and lossless recovery of the cover audio. Keywords: security of data; steganography; telecommunication security; LSB; MSB; communication security; cover audio coefficient; digital audio quality; digital audio steganography scheme; information security; least significant bit; location selection; message security; most significant bit; Decoding; Encoding; Encryption; Information security; Signal processing algorithms; LSB; Steganography; digital audio; secret communication (ID#:14-2493) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6807979&isnumber=6807973
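The position-selection idea above can be sketched for 16-bit samples: the upper three MSBs, which the embedding never touches, pick which of the low eight bits carries the secret bit, so the decoder can recompute the same positions. A simplified sketch of the idea, not the authors' exact coefficient-domain scheme:

```python
# MSB-driven position selection: the upper three bits of each 16-bit
# cover sample (a value 0..7) choose which of the low bits 0..7 carries
# the secret bit.  Because only low bits are modified, the MSBs -- and
# hence the positions -- are identical at the decoder.

def bit_position(sample):
    return (sample >> 13) & 0b111            # upper three bits of a 16-bit sample

def embed(samples, bits):
    stego = list(samples)
    for i, bit in enumerate(bits):
        pos = bit_position(stego[i])
        stego[i] = (stego[i] & ~(1 << pos)) | (bit << pos)
    return stego

def extract(samples, n_bits):
    return [(s >> bit_position(s)) & 1 for s in samples[:n_bits]]

cover = [0x1234, 0xA0F0, 0x7FFF, 0x0001]     # stand-in for audio samples
secret = [1, 0, 1, 1]
stego = embed(cover, secret)
assert extract(stego, 4) == secret
# Only low-byte bits may change, so the high byte is preserved:
assert all((a ^ b) & 0xFF00 == 0 for a, b in zip(cover, stego))
```

Varying the embedding position per sample is what distinguishes this from plain LSB insertion: an attacker who extracts only bit 0 of every sample recovers noise rather than the message.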