Biblio
Wireless Peer-to-Peer (P2P) routing protocols rely on cooperative interaction among peers, yet most surveyed protocols provide little detail on how peers can take their neighbours' reliability into account to improve routing efficiency in collaborative networks. Previous research has shown that in most trust and reputation evaluation schemes, peers' rating behaviour can be improved by including peer attributes to better capture reliability. This paper proposes a reliability-based trust model for dynamic trust evaluation between peers in P2P networks for collaborative routing. Since peers' routing attributes vary dynamically, the proposed model also accommodates dynamic changes in peer attributes and behaviour. We introduce peers' buffers as a scaling factor for trust evaluation in trust and reputation routing protocols. A simulation-based comparison between reliability-based and non-reliability-based trust models shows the improved performance of the proposed model in terms of delivery ratio and average message latency.
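As a purely illustrative sketch (the weighting and update rule below are assumptions, not the model's actual equations), the buffer-scaled trust idea can be outlined as follows: behavioural trust is accumulated from observed forwarding outcomes and then scaled by the peer's free buffer ratio before the next hop is chosen.

```python
# Illustrative sketch only (the alpha weight and scaling rule are assumptions, not
# the paper's equations): a peer's routing trust is built from observed forwarding
# behaviour and then scaled by its free buffer ratio, so congested or unreliable
# peers are avoided when choosing the next hop.
def update_trust(old_trust: float, delivered: bool, alpha: float = 0.2) -> float:
    # Exponential moving average over delivery outcomes (1 = forwarded, 0 = dropped).
    return (1 - alpha) * old_trust + alpha * (1.0 if delivered else 0.0)

def reliability(trust: float, free_buffer: int, buffer_capacity: int) -> float:
    # Buffer occupancy acts as the scaling factor on the behavioural trust value.
    return trust * (free_buffer / buffer_capacity)

# Choosing the relay with the highest buffer-scaled trust:
peers = {"A": reliability(0.9, 20, 100), "B": reliability(0.7, 90, 100)}
print(max(peers, key=peers.get))   # "B": moderate trust but plenty of free buffer
```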
Screen locks are vulnerable to shoulder surfing: passwords, personal identification numbers (PINs), and unlock patterns can be observed when smartphones are used in public spaces, even though the devices store important information and are frequently used in such settings. In this paper, we propose a new method that combines passwords with biometric authentication, which cannot be observed by shoulder surfing and is difficult to guess by brute-force attack. In our method, sensors measure the motion of a finger while the user operates the mobile terminal, and this motion, which carries user-specific characteristics, is registered. The registered characteristics are classified by learning with self-organizing maps, and users are identified by consulting these maps when they enter their passwords on the terminal.
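The following is a minimal self-organizing-map sketch of the enrolment step; the grid size, learning rate, and three-axis motion features are assumptions made for illustration, not the paper's configuration.

```python
# Minimal self-organizing-map sketch (illustrative; grid size, learning rate and the
# 3-axis motion features are assumptions, not the paper's configuration).
import numpy as np

class TinySOM:
    def __init__(self, rows=5, cols=5, dim=3, seed=0):
        self.w = np.random.default_rng(seed).normal(size=(rows, cols, dim))

    def winner(self, x):
        d = np.linalg.norm(self.w - x, axis=2)           # distance to every node
        return np.unravel_index(np.argmin(d), d.shape)

    def train(self, data, epochs=20, lr=0.3, radius=1.5):
        rows, cols = self.w.shape[:2]
        yy, xx = np.mgrid[0:rows, 0:cols]
        for _ in range(epochs):
            for x in data:
                wi, wj = self.winner(x)
                dist2 = (yy - wi) ** 2 + (xx - wj) ** 2
                h = np.exp(-dist2 / (2 * radius ** 2))[..., None]   # neighbourhood
                self.w += lr * h * (x - self.w)                     # pull nodes toward sample

# Enrolment: train on the registered user's motion samples; at login, a password
# attempt would be accepted only if its winning nodes fall in the user's map regions.
```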
One of the most challenging issues facing academic conferences and educational institutions today is plagiarism detection. Typically, these entities wish to ensure that the work products submitted to them have not been plagiarized from another source (e.g., authors submitting identical papers to multiple journals). Assembling large centralized databases of documents dramatically improves the effectiveness of plagiarism detection techniques, but introduces a number of privacy and legal issues: all document contents must be completely revealed to the database operator, making it an attractive target for abuse or attack. Moreover, this content aggregation involves the disclosure of potentially sensitive private content, and in some cases this disclosure may be prohibited by law. In this work, we introduce Elxa, the first scalable centralized plagiarism detection system that protects the privacy of the submissions. Elxa incorporates techniques from the current state of the art in plagiarism detection, as evaluated by the information retrieval community. Our system is designed to be operated on existing cloud computing infrastructure, and to provide incentives for the untrusted database operator to maintain the availability of the network. Elxa can be used to detect plagiarism in student work, duplicate paper submissions (and their associated peer reviews), similarities between confidential reports (e.g., malware summaries), or any approximate text reuse within a network of private documents. We implement a prototype using the Hadoop MapReduce framework, and demonstrate that it is feasible to achieve competitive detection effectiveness in the private setting.
We are confronted with the threat of user-ID/password theft caused by phishing attacks, and authentication using only a user ID and password is no longer safe. PKI-based authentication offers a safer alternative. At our university, the Japan Advanced Institute of Science and Technology (JAIST), we deployed an On Demand Digital Certificate Issuing System for our users and employ PKI-based client certificates for logging on to web applications, connecting to the wireless network (including eduroam), using the VPN service, and signing email. In addition, the National Institute of Informatics (NII), which provides common ICT infrastructure services for Japanese universities and institutes, started a client certificate issuing service this year, so the use of electronic certificates is expected to become more widespread in Japan within a few years. However, there are still few cases of electronic-certificate-based authentication deployed in university infrastructure, and operating it raises many practical issues. In this paper, we introduce the use of electronic certificates at JAIST, discuss the challenges and issues, and consider future prospects.
While attacks on information systems have, for most practical purposes, binary outcomes (information was manipulated or eavesdropped, or it was not), attacks that manipulate the sensor or control signals of Industrial Control Systems (ICS) can be tuned by the attacker to cause a continuous spectrum of damage. Attackers who want to remain undetected can attempt to hide their manipulation of the system by closely following its expected behavior, injecting just enough false information at each time step to achieve their goals. In this work, we study whether attack detection can limit the impact of such stealthy attacks. We start with a comprehensive review of related work on attack-detection schemes in the security and control-systems communities. We then show that many of those works use detection schemes that do not limit the impact of stealthy attacks. We propose a new metric to measure the impact of stealthy attacks and relate it to the selection of an upper bound on false alarms. We finally show that the impact of such attacks can be mitigated in several cases by the proper combination and configuration of detection schemes. We demonstrate the effectiveness of our algorithms through simulations and experiments using real ICS testbeds and real ICS systems.
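As an illustration of how a detection threshold bounds a stealthy attacker's per-step impact (this is a generic CUSUM-style detector, not the paper's specific algorithms or metric), consider the sketch below: the threshold tau fixes the false-alarm behaviour, and an attacker who wants to stay undetected can add only a bounded deviation per time step.

```python
# Illustrative sketch (not the paper's algorithms): a CUSUM-style stateful detector
# over sensor residuals. The threshold tau determines the false-alarm budget; a
# stealthy attacker can inject only a bounded deviation per step without alarming,
# which is the kind of bounded impact the paper's metric is meant to capture.
import numpy as np

def cusum_alarm(residuals, b: float = 0.5, tau: float = 5.0):
    s = 0.0
    for k, r in enumerate(residuals):
        s = max(0.0, s + abs(r) - b)      # accumulate deviation above drift b
        if s > tau:
            return k                      # first alarm time
    return None

rng = np.random.default_rng(1)
normal = rng.normal(0, 0.3, 200)                     # residuals under no attack
attack = normal + np.linspace(0, 2.0, 200)           # slowly injected sensor bias
print(cusum_alarm(normal), cusum_alarm(attack))      # None vs. an alarm index
```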
In organizations that use DevOps practices, software changes can be deployed as often as 500 times per day or more. Without adequate involvement of the security team, rapidly deployed software changes are more likely to contain vulnerabilities due to a lack of adequate review. The goal of this paper is to aid software practitioners in integrating security and DevOps by summarizing experiences in utilizing security practices in a DevOps environment. We analyzed a selected set of Internet artifacts and surveyed representatives of nine organizations that use DevOps to systematically explore experiences in utilizing security practices. We observe that the majority of the software practitioners see potential in common DevOps activities, such as automated monitoring, to improve the security of a system. Furthermore, organizations that integrate DevOps and security utilize additional security activities, such as security requirements analysis and performing security configurations. Additionally, these teams have also established collaboration between the security team and the development and operations teams.
Keystroke dynamics is a form of behavioral biometrics that can be used for continuous authentication of computer users. Many classifiers have been proposed for the analysis of acquired user patterns and verification of users at computer terminals. The underlying machine learning methods that use a Gaussian density estimator for outlier detection typically assume that the digraph patterns in keystroke data are generated from a single Gaussian distribution. In this paper, we relax this assumption by allowing digraphs to fit more than one distribution via the Gaussian Mixture Model (GMM). We conducted an experiment with a public data set collected in a controlled environment. For 30 users typing dynamic text, we obtain a 0.08% Equal Error Rate (EER) with a two-component GMM, while a pure Gaussian yields a 1.3% EER on the same data set (an improvement in EER of 93.8%). Our results show that GMM can recognize keystroke dynamics more precisely and authenticate users with a higher confidence level.
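A minimal sketch of the modelling idea, assuming scikit-learn's GaussianMixture and synthetic digraph timings (the paper's data set and features are not reproduced here): the genuine user's timings are fit with one and with two Gaussian components, and a threshold on the resulting log-likelihood score accepts or rejects a session.

```python
# Minimal sketch (not the authors' code): scoring keystroke digraph timings with a
# Gaussian Mixture Model versus a single Gaussian, using scikit-learn and NumPy.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical training data: hold/flight times (ms) for one user's digraphs.
genuine = np.concatenate([rng.normal(110, 10, (200, 2)),
                          rng.normal(160, 15, (100, 2))])   # two typing "modes"
impostor = rng.normal(140, 30, (100, 2))

single = GaussianMixture(n_components=1).fit(genuine)   # pure Gaussian baseline
gmm = GaussianMixture(n_components=2).fit(genuine)      # two components, as in the paper

# Average log-likelihood of a test sample under each model; a threshold on this
# score accepts (genuine) or rejects (impostor) the session.
for name, model in [("1-component", single), ("2-component GMM", gmm)]:
    print(name,
          "genuine:", round(float(model.score(genuine[:50])), 2),
          "impostor:", round(float(model.score(impostor)), 2))
```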
Attacks on airport information network services in the form of Denial of Service (DoS), Distributed DoS (DDoS), and hijacking are among the most effective schemes explored by cyber terrorists targeting the aviation industry's Mission Critical Services (MCSs). This work presents a case for an Airport Information Resource Management System (AIRMS), a cloud-based platform proposed for the Nigerian aviation industry. Given that AIRMS is susceptible to DoS attacks, there is a need to develop a robust counter-security network model aimed at pre-empting such attacks and mitigating the vulnerability of such networks. Existing works in the literature on cyber-security DoS and other schemes have not explored embedded Stateful Packet Inspection (SPI) based on OpenFlow Application Centric Infrastructure (OACI) for securing critical network assets. As such, SPI-OACI is proposed to address the challenge of Vulnerability Bandwidth Depletion DDoS Attacks (VBDDA). A characterization of the Cisco 9000 router firewall as an embedded network device with support for virtual DDoS protection was carried out in the AIRMS threat mitigation design. Afterwards, the mitigation procedure and the initial phase of the design were realized with the Riverbed Modeler software. For security Quality of Service (QoS) profiling, the system response metrics (i.e., SPI-OACI delay, throughput, and utilization) in the cloud-based network were analyzed for normal traffic flows only. The work concludes by offering practical suggestions for securing similar enterprise management systems running on cloud infrastructure against cyber terrorists.
In this paper, we propose a novel Social Networking Service (SNS) for a regional community. The purpose of the SNS is to support and encourage people by making them aware of beneficial social relations in the real world. Conventional SNSs can hardly deal with such beneficial social relations, because they are implicit and dynamic. The proposed SNS is designed to provide positive information for two types of people: those who perform community volunteer work, such as cleaning, as contributors, and those who receive benefit from that work as beneficiaries. This paper introduces the basic scheme of the SNS for beneficial social relations and evaluates its effectiveness based on the results of experimental studies. The results show that users of our SNS tend to consider information about volunteer work valuable when it has been performed in their own living area, which suggests that the proposed SNS would work well in a regional community.
Learning classifier systems (LCSs) are rule-based evolutionary algorithms uniquely suited to classification and data mining in complex, multi-factorial, and heterogeneous problems. LCS rule fitness is commonly based on accuracy, but this metric alone is not ideal for assessing global rule `value' in noisy problem domains, and thus impedes effective knowledge extraction. Multi-objective fitness functions are promising but rely on knowledge of how to weigh objective importance, and such prior knowledge would be unavailable in most real-world problems. The Pareto-front concept offers a strategy for multi-objective machine learning that is agnostic to objective importance. We propose a Pareto-inspired multi-objective rule fitness (PIMORF) for LCS and combine it with a complementary rule-compaction approach (SRC). We implemented these strategies in ExSTraCS, a successful supervised LCS, and evaluated performance over an array of complex simulated noisy and clean problems (i.e., genetic and multiplexer problems) that each concurrently model pure interaction effects and heterogeneity. While evaluation over multiple performance metrics yielded mixed results, this work represents an important first step towards efficiently learning complex problem spaces without the advantage of prior problem knowledge. Overall, the results suggest that PIMORF paired with SRC improved rule set interpretability, particularly with regard to heterogeneous patterns.
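The Pareto-dominance idea that PIMORF builds on can be illustrated with a small sketch; the two objectives used here (accuracy and correct cover) are illustrative placeholders, and the actual PIMORF fitness calculation is more involved.

```python
# Small sketch of the Pareto idea only (the objectives here, accuracy and correct
# cover, are illustrative; PIMORF's actual fitness calculation is more involved):
# a rule stays on the front if no other rule is at least as good on both objectives
# and strictly better on one.
def dominates(a, b):
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(rules):
    # rules: {name: (accuracy, correct_cover)}
    return {n: o for n, o in rules.items()
            if not any(dominates(other, o) for m, other in rules.items() if m != n)}

rules = {"r1": (0.95, 10), "r2": (0.80, 120), "r3": (0.78, 40)}
print(pareto_front(rules))   # r3 is dominated by r2; r1 and r2 remain on the front
```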
Multichannel sensor systems are widely used in condition monitoring for effective failure prevention of critical equipment or processes. However, loss of sensor readings due to malfunctions of sensors and/or communication has long been a hurdle to reliable operations of such integrated systems. Moreover, asynchronous data sampling and/or limited data transmission are usually seen in multiple sensor channels. To reliably perform fault diagnosis and prognosis in such operating environments, a data recovery method based on functional principal component analysis (FPCA) can be utilized. However, traditional FPCA methods are not robust to outliers and their capabilities are limited in recovering signals with strongly skewed distributions (i.e., lack of symmetry). This paper provides a robust data-recovery method based on functional data analysis to enhance the reliability of multichannel sensor systems. The method not only considers the possibly skewed distribution of each channel of signal trajectories, but is also capable of recovering missing data for both individual and correlated sensor channels with asynchronous data that may be sparse as well. In particular, grand median functions, rather than classical grand mean functions, are utilized for robust smoothing of sensor signals. Furthermore, the relationship between the functional scores of two correlated signals is modeled using multivariate functional regression to enhance the overall data-recovery capability. An experimental flow-control loop that mimics the operation of coolant-flow loop in a multimodular integral pressurized water reactor is used to demonstrate the effectiveness and adaptability of the proposed data-recovery method. The computational results illustrate that the proposed method is robust to outliers and more capable than the existing FPCA-based method in terms of the accuracy in recovering strongly skewed signals. In addition, turbofan engine data are also analyzed to verify the capability of the proposed method in recovering non-skewed signals.
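A tiny illustration of the robustness argument only (grand median versus grand mean curves on synthetic trajectories); the paper's full FPCA-based recovery and multivariate functional regression are not reproduced here.

```python
# Tiny illustration of the robustness idea only (grand median vs. grand mean curve
# under outliers); the FPCA-based recovery and functional regression are not shown.
import numpy as np

t = np.linspace(0, 1, 50)
trajectories = np.sin(2 * np.pi * t) + np.random.default_rng(0).normal(0, 0.1, (30, 50))
trajectories[3] += 5.0                           # one sensor run corrupted by outliers

grand_mean = trajectories.mean(axis=0)           # pulled toward the corrupted run
grand_median = np.median(trajectories, axis=0)   # barely affected by it

print(np.abs(grand_mean - np.sin(2 * np.pi * t)).max(),
      np.abs(grand_median - np.sin(2 * np.pi * t)).max())
```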
Wide-area monitoring and control (WAMC) systems are the next-generation operational-management systems for electric power systems. The main purpose of such systems is to provide high-resolution, real-time situational awareness in order to improve the operation of the power system by detecting and responding to fast-evolving phenomena. From an information and communication technology (ICT) perspective, the nonfunctional qualities of these systems are becoming increasingly important, and there is a need to evaluate and analyze the factors that impact these nonfunctional qualities. Enterprise architecture methods, which capture properties of ICT systems in architecture models and use these models as a basis for analysis and decision making, are a promising approach to meet these challenges. This paper presents a quantitative architecture analysis method for the study of WAMC ICT architectures, focusing primarily on the interoperability and cybersecurity aspects.
Efficient encryption and decryption of data is one of the challenging aspects of modern computer science. This paper introduces a new cryptographic algorithm intended to achieve a higher level of security by hiding the meaning of a message in unprintable characters. The main idea is to make the encrypted message unprintable through several rounds of ASCII conversion and a cyclic mathematical function: the original message is divided into packets, binary matrices are formed for each packet, and the ASCII value of each output character is kept below 32, producing an unprintable encrypted message. Similarly, several ASCII conversions and the inverse cyclic mathematical function are used to decrypt the unprintable encrypted message. The final encrypted message, produced by three rounds of encryption, is unprintable text, and the algorithm provides a higher level of security without increasing the size of the data or losing any data.
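A toy sketch of the general idea only, restricted to a 32-character alphabet so that a keyed cyclic shift maps every ciphertext byte below ASCII 32; this is not the paper's packet/binary-matrix algorithm, just an illustration of reversible, size-preserving, unprintable output.

```python
# Toy sketch only (assumes a restricted 32-character alphabet; this is NOT the
# paper's packet/binary-matrix scheme): a keyed cyclic shift pushes every
# ciphertext byte below ASCII 32, so the encrypted text is unprintable yet
# reversible and the same length as the plaintext.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ ,.!?-"   # exactly 32 symbols

def encrypt(message: str, key: int) -> bytes:
    # Cyclic shift within the 32-symbol alphabet; outputs are control codes 0..31.
    return bytes((ALPHABET.index(c) + key) % 32 for c in message)

def decrypt(cipher: bytes, key: int) -> str:
    return "".join(ALPHABET[(b - key) % 32] for b in cipher)

c = encrypt("HELLO WORLD", key=7)
print(all(b < 32 for b in c))   # True: every byte is unprintable
print(decrypt(c, key=7))        # HELLO WORLD
```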
The main objective of this research is to build upon existing cryptographic standards and web protocols to design an alternative multi-factor authentication cryptosystem for the web. It involves seed exchange to a software-based token through a login-protected Transport Layer Security (TLS/SSL) tunnel, encrypted local storage through a password-protected keystore (BC UBER) with a strong key derivation function (PBEWithSHAANDTwofish-CBC), and offline generation of one-time passwords through the TOTP algorithm (IETF RFC 6238). Authentication occurs through the use of a shared secret (the seed) to verify the correctness of the one-time password used to authenticate. With the traditional use of username and password no longer wholly adequate for protecting online accounts, and with regulators worldwide toughening security requirements (e.g., BSP 808, FFIEC), this research hopes to increase research effort on the further development of cryptosystems involving multi-factor authentication.
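A minimal TOTP generation sketch in the style of RFC 6238 follows; this is not the thesis' token implementation, and the seed below is a placeholder for one exchanged over the protected TLS tunnel and held in the encrypted keystore.

```python
# Minimal TOTP sketch (RFC 6238-style), not the thesis' software-token code.
# The shared seed below is a placeholder; real seeds are exchanged over the
# login-protected TLS tunnel described above and stored in the encrypted keystore.
import hmac, hashlib, struct, time

def totp(seed: bytes, time_step: int = 30, digits: int = 6) -> str:
    counter = int(time.time()) // time_step                  # moving factor
    msg = struct.pack(">Q", counter)                         # 8-byte big-endian counter
    digest = hmac.new(seed, msg, hashlib.sha1).digest()      # HOTP inner HMAC
    offset = digest[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp(b"placeholder-shared-seed"))   # the server recomputes the same value to verify
```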
In certain applications, the locations of events reported by a sensor network need to remain anonymous; that is, unauthorized observers must be unable to detect the origin of such events by analyzing the network traffic. The authors analyze two kinds of problems: communication overhead and computational load. In this paper, the authors provide a new framework for modeling, analyzing, and evaluating anonymity in sensor networks. The novelty of the proposed framework is twofold: first, it introduces the notion of "interval indistinguishability" and provides a quantitative measure to model anonymity in wireless sensor networks; second, it maps source anonymity to the statistical problem of hypothesis testing with nuisance parameters. The authors show that existing approaches for designing statistically anonymous systems introduce correlation in real intervals while fake intervals remain uncorrelated. They then show how mapping source anonymity to hypothesis testing with nuisance parameters leads to converting the problem of exposing private source information into finding an appropriate data transformation that removes or minimizes the effect of the nuisance information, using a strong cryptographic algorithm. By doing so, the authors transform the problem of analyzing real-valued sample points into one over binary codes, which opens the door for coding theory to be incorporated into the study of anonymous networks. Whereas existing work cannot withstand an unauthorized observer analyzing the network traffic, this work mainly enhances source anonymity against correlation tests, since the main goal of source location privacy is to hide the existence of real events.
In the field of scene understanding, researchers have mainly focused on using video/images to extract different elements in a scene. The computational as well as monetary cost associated with such implementations is high. This paper proposes a low-cost system which uses sound-based techniques in order to jointly perform localization as well as fingerprinting of the sound sources. A network of embedded nodes is used to sense the sound inputs. Phase-based sound localization and Support-Vector Machine classification are used to locate and classify elements of the scene, respectively. The fusion of all this data presents a complete “picture” of the scene. The proposed concepts are applied to a vehicular-traffic case study. Experiments show that the system has a fingerprinting accuracy of up to 97.5%, localization error less than 4 degrees and scene prediction accuracy of 100%.
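A rough sketch of the two ingredients, assuming a two-microphone array with spacing d and generic spectral features (the parameter values and features are illustrative, not the paper's): the bearing is estimated from the cross-correlation lag between channels, and a support-vector classifier labels the source.

```python
# Rough sketch (assumed two-microphone array with spacing D and speed of sound C_SOUND):
# the arrival angle is estimated from the cross-correlation lag between the channels,
# and an SVM would label the source from simple spectral features. Parameter values
# and feature choices are illustrative, not the paper's.
import numpy as np

C_SOUND, FS, D = 343.0, 16000, 0.2            # m/s, sample rate in Hz, mic spacing in m

def doa_degrees(mic_a: np.ndarray, mic_b: np.ndarray) -> float:
    corr = np.correlate(mic_a, mic_b, mode="full")
    lag = np.argmax(corr) - (len(mic_b) - 1)           # sample delay between the mics
    tdoa = lag / FS
    s = np.clip(C_SOUND * tdoa / D, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))             # bearing relative to broadside

def spectral_features(signal: np.ndarray) -> np.ndarray:
    spectrum = np.abs(np.fft.rfft(signal))
    bands = np.array_split(spectrum, 8)                # coarse 8-band energy profile
    return np.array([b.mean() for b in bands])

# Classification step (hypothetical labels such as "car", "horn", "voice"):
# from sklearn.svm import SVC
# clf = SVC(kernel="rbf").fit(train_features, train_labels)
# print(clf.predict([spectral_features(mic_a)]), doa_degrees(mic_a, mic_b))
```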
Cloud computing technologies are receiving a great deal of attention. Furthermore, most hardware devices such as PCs and mobile phones increasingly have a trusted component, the Trusted Platform Module (TPM), embedded in them, which helps to measure the state of the platform and hence to reason about its trust. Recently, attestation techniques such as binary attestation and property-based attestation have been proposed based on the TPM. In this paper, we propose a novel trust-enhanced security model for cloud services that helps to detect and prevent security attacks in cloud infrastructures using trusted attestation techniques. We consider a cloud architecture where different services are hosted on virtualized systems on the cloud by multiple cloud customers (multi-tenants). We consider an attacker model and various attack scenarios for such hosted services in the cloud. Our trust-enhanced security model enables the cloud service provider to certify certain security properties of the tenant virtual machines and the services running on them. These properties are then used to detect and minimise attacks between the cloud tenants running virtual machines on the infrastructure and their customers, as well as to increase the assurance of tenant virtual machine transactions. If the behaviour of a tenant virtual machine deviates from the certified properties, the model allows us to dynamically isolate the tenant virtual machine or even terminate the malicious services on a fine-granular basis. The paper describes the design and implementation of the proposed model and discusses how it deals with the different attack scenarios. We also show that our model is beneficial for cloud service providers, cloud customers running tenant virtual machines, and the customers using the services provided by these tenant virtual machines.
A web service is a web-based application accessed over the Internet. Web-based applications are commonly deployed using web browsers and web servers. However, Web Service security is a major concern, since it was not widely studied and integrated at the design stage of the Web Service standards; security features are add-on modules rather than well-defined solutions in the standards. Consequently, various web services security solutions have been defined in order to protect interactions over a network. Remote attestation is an authentication technique proposed by the Trusted Computing Group (TCG) that enables verification of the trusted environment of platforms and assures that the reported information is accurate. To incorporate this method into the web services framework and guarantee the trustworthiness and security of web-based applications, a new framework called TrustWeb is proposed. The TrustWeb framework integrates remote attestation into the SSL/TLS protocol to provide integrity information about the involved endpoint platforms. The framework enhances the TLS protocol with a mutual attestation mechanism, which helps to address the weaknesses of transferring sensitive computations and offers a practical way to solve the remote trust issue in the client-server environment. In this paper, we describe the work of designing and building a framework prototype in which the attestation mechanism is integrated into the Mozilla Firefox browser and the Apache web server. We also present the framework solution and show improvements in its efficiency.
Large numbers of digital images and videos are acquired, stored, processed, and shared nowadays. High-quality imaging hardware and low-cost, user-friendly image-editing software make digital media vulnerable to modification. One of the most popular image modification techniques is copy-move forgery, a tampering technique that copies part of an image and pastes it into another part of the same image to conceal or replicate some portion of the image. Researchers have recently proposed many techniques to detect copy-move forged regions of images. These methods divide the image into overlapping blocks and extract features to determine similarity among groups of blocks, and the choice of feature extraction algorithm plays an important role in the accuracy of the detection methods. In this work, column averages of the 1D Fourier transform (1D-FT) of the block rows are used to extract features from the overlapping blocks: blocks are transformed into the frequency domain using the 1D-FT of their rows, and the average values of the transformed columns form the feature vectors. Similarity of feature vectors indicates possible forged regions. Results show that the proposed method can detect copy-pasted regions with higher accuracy compared to similar works reported in the literature. As the results also show, the method is more resistant to Gaussian blurring and JPEG compression attacks.
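A sketch of the described feature extraction, assuming 8x8 blocks and magnitude spectra (the paper's exact block size and normalisation may differ): each overlapping block is summarized by the column averages of the 1D-FT of its rows, and lexicographically sorted feature vectors whose neighbours match point to candidate copy-moved regions.

```python
# Sketch of the block-feature idea (assumed 8x8 blocks and magnitude spectra; the
# paper's exact block size and normalisation may differ): each overlapping block is
# described by the column averages of the 1D Fourier transform of its rows, and
# matching feature vectors point to candidate copy-moved regions.
import numpy as np

def block_features(gray: np.ndarray, b: int = 8) -> list:
    h, w = gray.shape
    feats = []
    for y in range(h - b + 1):
        for x in range(w - b + 1):
            block = gray[y:y + b, x:x + b]
            rows_ft = np.abs(np.fft.fft(block, axis=1))      # 1D-FT of each row
            feats.append((tuple(rows_ft.mean(axis=0).round(1)), (y, x)))
    return feats

def candidate_pairs(feats, min_dist: int = 16):
    feats.sort(key=lambda f: f[0])                           # lexicographic sort
    pairs = []
    for (f1, p1), (f2, p2) in zip(feats, feats[1:]):
        if f1 == f2 and abs(p1[0] - p2[0]) + abs(p1[1] - p2[1]) >= min_dist:
            pairs.append((p1, p2))                           # possible copy-move pair
    return pairs
```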
The Cryptographically-Curated File System (CCFS) proposed in this work supports the adoption of Information-Centric Networking. CCFS utilizes content names that span trust boundaries, verify integrity, tolerate disruption, authenticate content, and provide non-repudiation. Irrespective of the ability to reach an authoritative host, CCFS provides secure access by binding a chain of trust into the content name itself. Curators cryptographically bind content to a name, which is a path through a series of objects that map human-meaningful names to cryptographically strong content identifiers. CCFS serves as a network layer for storage systems, unifying currently disparate storage technologies. The power of CCFS derives from file hashes and public keys used as a name with which to retrieve content and as a method of verifying that content. We present results from our prototype implementation. Our results show that the overhead associated with CCFS is not negligible, but neither is it prohibitive.
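A minimal sketch of the self-certifying naming idea (CCFS's actual names, curation, and signature chains are richer than this): content is retrieved by a name that embeds its hash, so integrity can be verified without trusting the serving host.

```python
# Minimal sketch of self-certifying names (illustrative; CCFS's actual name format,
# curation path, and signature chain are richer than this): the name embeds the
# content hash, so any retrieved bytes can be checked against the name itself.
import hashlib

def name_for(content: bytes) -> str:
    return "sha256:" + hashlib.sha256(content).hexdigest()

def verify(name: str, content: bytes) -> bool:
    return name == name_for(content)

blob = b"report body"
n = name_for(blob)
print(verify(n, blob), verify(n, b"tampered body"))   # True False
```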
We consider the block Rayleigh fading multiple-input multiple-output (MIMO) wiretap channel with no prior channel state information (CSI) available at any of the terminals. The channel gains remain constant in a coherence time of T symbols, and then change to another independent realization. The transmitter, the legitimate receiver and the eavesdropper have nt, nr and ne antennas, respectively. We determine the exact secure degrees of freedom (s.d.o.f.) of this system when T ≥ 2 min(nt, nr). We show that, in this case, the s.d.o.f. is exactly (min(nt, nr) - ne)^+ · (T - min(nt, nr))/T. The first term can be interpreted as the eavesdropper with ne antennas taking away ne antennas from both the transmitter and the legitimate receiver. The second term can be interpreted as a fraction of s.d.o.f. being lost due to the lack of CSI at the legitimate receiver. In particular, the fraction loss, min(nt, nr)/T, can be interpreted as the fraction of channel uses dedicated to training the legitimate receiver for it to learn its own CSI. We prove that this s.d.o.f. can be achieved by employing a constant norm channel input, which can be viewed as a generalization of discrete signalling to multiple dimensions.
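Restating the closed-form expression from the abstract in standard notation, with (x)^+ = max(x, 0):

```latex
% Secure degrees of freedom stated in the abstract, with (x)^+ = \max(x, 0):
\[
  d_s \;=\; \bigl(\min(n_t, n_r) - n_e\bigr)^{+}\,
            \frac{T - \min(n_t, n_r)}{T},
  \qquad T \ge 2\min(n_t, n_r),
\]
% so the fractional loss relative to the full-CSI case is \min(n_t, n_r)/T,
% the share of the coherence block spent training the legitimate receiver.
```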
A model of protection mechanisms in computing systems is presented and its appropriateness is argued. The “safety” problem for protection systems under this model is to determine in a given situation whether a subject can acquire a particular right to an object. In restricted cases, it can be shown that this problem is decidable, i.e. there is an algorithm to determine whether a system in a particular configuration is safe. In general, and under surprisingly weak assumptions, it cannot be decided if a situation is safe. Various implications of this fact are discussed.
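A tiny access-matrix illustration (the model's conditional commands are more general than this single example): subjects hold rights over objects, commands modify the matrix, and the safety question asks whether a given right can ever leak to a subject.

```python
# Tiny access-matrix illustration (the model's commands are more general; this just
# shows the kind of protection-system configuration the safety question is asked about).
rights = {("alice", "file1"): {"own", "read"}, ("bob", "file1"): set()}

def grant_read(granter, grantee, obj):
    # Conditional command: only an owner may confer the "read" right.
    if "own" in rights.get((granter, obj), set()):
        rights.setdefault((grantee, obj), set()).add("read")

grant_read("alice", "bob", "file1")
print(rights[("bob", "file1")])   # {'read'}: safety asks whether such rights can leak
```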
This article was identified by the SoS Best Scientific Cybersecurity Paper Competition Distinguished Experts as a Science of Security Significant Paper.
The Science of Security Paper Competition was developed to recognize and honor recently published papers that advance the science of cybersecurity. During the development of the competition, members of the Distinguished Experts group suggested that listing papers that made outstanding contributions, empirical or theoretical, to the science of cybersecurity in earlier years would also benefit the research community.
We are currently moving from the Internet society to a mobile society where more and more access to information is done through previously dumb phones. For example, the number of mobile phones using a full-blown OS rose by nearly 200% from Q3/2009 to Q3/2010. As a result, mobile security is no longer immanent, but imperative. This survey paper provides a concise overview of mobile network security, covering attack vectors that use the back-end system and the web browser, but also the hardware layer and the user as attack enablers. We show differences and similarities between "normal" security and mobile security, and draw conclusions for further research opportunities in this area.