Biblio
Underwater acoustic networks are an enabling technology for a range of applications such as mine countermeasures, intelligence, and reconnaissance. Common to these applications is the need for robust information distribution with minimal energy consumption. In terrestrial wireless networks, topology information is often used to make routing more efficient, yielding higher capacity and less overhead. In this paper we assess the effects of topology information on routing in underwater acoustic networks. More specifically, we investigate the interplay between long propagation delays, contention-based channel access, and the dissemination of varying degrees of topology information. The study is based on network simulations of a number of network protocols that make use of varying amounts of topology information. The results indicate that, in the considered scenario, relying on local topology information to reduce retransmissions may have adverse effects on reliability. The difficult channel conditions and the contention-based channel access methods create a need for more diversity, i.e., more retransmissions. In the scenario considered, an opportunistic flooding approach is the better option, in terms of both robustness and energy consumption.
Attack graphs provide compact representations of the attack paths an attacker can follow to compromise network resources from the analysis of network vulnerabilities and topology. These representations are a powerful tool for security risk assessment. Bayesian inference on attack graphs enables the estimation of the risk of compromise to the system's components given their vulnerabilities and interconnections and accounts for multi-step attacks spreading through the system. While static analysis considers the risk posture at rest, dynamic analysis also accounts for evidence of compromise, for example, from Security Information and Event Management software or forensic investigation. However, in this context, exact Bayesian inference techniques do not scale well. In this article, we show how Loopy Belief Propagation—an approximate inference technique—can be applied to attack graphs and that it scales linearly in the number of nodes for both static and dynamic analysis, making such analyses viable for larger networks. We experiment with different topologies and network clustering on synthetic Bayesian attack graphs with thousands of nodes to show that the algorithm's accuracy is acceptable and that it converges to a stable solution. We compare sequential and parallel versions of Loopy Belief Propagation with exact inference techniques for both static and dynamic analysis, showing the advantages and gains of approximate inference techniques when scaling to larger attack graphs.
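For orientation, the sketch below is a minimal Loopy Belief Propagation pass in Python/NumPy on a toy three-node loopy model. It is not the authors' implementation: real Bayesian attack graphs use directed conditional probability tables derived from vulnerability data, whereas the pairwise model, potentials, and priors here are invented purely to show the message-passing mechanics, whose cost per sweep is linear in the number of edges.

    import numpy as np

    # Toy attack graph with a loop (0 -> 1 -> 2 plus 0 -> 2), modeled as a
    # pairwise graphical model; real BAGs use directed conditional tables.
    edges = [(0, 1), (1, 2), (0, 2)]
    n = 3
    # Unary potentials: prior [P(safe), P(compromised)] per host (invented).
    unary = np.array([[0.7, 0.3], [0.9, 0.1], [0.95, 0.05]])
    # Symmetric coupling: compromise states of neighbors tend to agree.
    pair = np.array([[0.8, 0.2], [0.2, 0.8]])

    # One message per directed edge, initialized uniform.
    msgs = {(i, j): np.ones(2) for (a, b) in edges for (i, j) in [(a, b), (b, a)]}

    for _ in range(50):  # fixed number of sweeps; real code would test convergence
        new = {}
        for (i, j) in msgs:
            # Product of node i's unary term and all messages into i except j's.
            incoming = unary[i].copy()
            for (k, l) in msgs:
                if l == i and k != j:
                    incoming *= msgs[(k, l)]
            m = pair.T @ incoming          # marginalize over node i's state
            new[(i, j)] = m / m.sum()      # normalize to avoid underflow
        msgs = new

    # Approximate marginals: unary term times all incoming messages.
    for i in range(n):
        b = unary[i].copy()
        for (k, l) in msgs:
            if l == i:
                b *= msgs[(k, l)]
        b /= b.sum()
        print(f"node {i}: P(compromised) ~ {b[1]:.3f}")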
Industrial control systems are cyber-physical systems used to operate critical infrastructures such as smart grids, traffic systems, industrial facilities, and water distribution networks. The digitalization of these systems increases their efficiency and decreases their cost of operation, but also makes them more vulnerable to cyber-attacks. To protect industrial control systems from cyber-attacks, multiple layers of security measures must be installed. In this paper, we study how to allocate a large number of security measures under a limited budget so as to minimize the total risk of cyber-attacks. The security measure allocation problem formulated in this way is a combinatorial optimization problem subject to a knapsack (budget) constraint. The problem is NP-hard, so we propose a method that exploits the submodularity of the objective function, allowing polynomial-time algorithms to obtain solutions with guaranteed approximation bounds. The formulation requires a preprocessing step in which attack scenarios are selected, and the impacts and likelihoods of these scenarios are estimated. We discuss how the proposed method can be applied in practice.
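A minimal Python sketch of the kind of cost-benefit greedy algorithm that submodularity enables; the measures, coverage sets, risks, and costs below are invented toy data, and the paper's exact objective and algorithm may differ. Taking the better of the greedy solution and the best single affordable measure is the standard trick that yields a constant-factor guarantee for knapsack-constrained submodular maximization.

    # Toy instance: measures cover attack scenarios; removing a scenario's
    # risk is a coverage (hence submodular) objective. All numbers invented.
    risk = {"s1": 8.0, "s2": 5.0, "s3": 4.0, "s4": 3.0}
    covers = {"firewall": {"s1", "s2"}, "ids": {"s2", "s3"},
              "patching": {"s1", "s3", "s4"}, "training": {"s4"}}
    cost = {"firewall": 3.0, "ids": 2.0, "patching": 4.0, "training": 1.0}
    budget = 5.0

    def risk_reduced(selection):
        """Total risk of the scenarios covered by the selected measures."""
        covered = set().union(*(covers[m] for m in selection)) if selection else set()
        return sum(risk[s] for s in covered)

    # Cost-benefit greedy: add the affordable measure with the best marginal
    # gain per unit cost until the budget blocks every remaining candidate.
    chosen, spent = set(), 0.0
    while True:
        best, best_ratio = None, 0.0
        for m in set(cost) - chosen:
            if spent + cost[m] > budget:
                continue
            gain = risk_reduced(chosen | {m}) - risk_reduced(chosen)
            if gain / cost[m] > best_ratio:
                best, best_ratio = m, gain / cost[m]
        if best is None:
            break
        chosen.add(best)
        spent += cost[best]

    # Comparing against the best single affordable measure restores a
    # constant-factor approximation bound under the knapsack constraint.
    single = max((m for m in cost if cost[m] <= budget),
                 key=lambda m: risk_reduced({m}))
    if risk_reduced({single}) > risk_reduced(chosen):
        chosen = {single}
    print(chosen, risk_reduced(chosen))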
The main issue with big data in the cloud is that it is always processed or used by a third party. It is very important for data owners and clients to trust the cloud and to have guarantees of privacy for information stored there or analyzed as big data. Privacy models studied in previous research show that privacy infringement for big data occurs because of model limitations, low privacy-guarantee rates, or dissemination of the accurate data contained in the data set. Moreover, many privacy models exist, and determining the best and most appropriate model for future use, one that also guarantees big data privacy, requires further research and study. In what follows, we survey several privacy models to identify the advantages and disadvantages of each with respect to privacy assurance for big data in the cloud. The present study also proposes a combined Diff-Anonym algorithm (k-anonymity plus a differential model) that provides data anonymity while keeping a balance between the ambiguity of private data and the clarity of general data.
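The paper does not spell out Diff-Anonym here, but its two ingredients are standard; the Python sketch below shows a k-anonymity check over quasi-identifiers and a Laplace-noised count in the style of differential privacy. The record fields, the value of k, and epsilon are invented for illustration.

    import math
    import random
    from collections import Counter

    def is_k_anonymous(records, quasi_ids, k):
        """Every combination of quasi-identifier values occurs at least k times."""
        groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
        return all(count >= k for count in groups.values())

    def laplace_noise(scale):
        # Inverse-CDF sampling of Laplace(0, scale).
        u = random.random() - 0.5
        return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

    def dp_count(records, predicate, epsilon):
        """Differentially private count; a counting query has sensitivity 1."""
        true_count = sum(1 for r in records if predicate(r))
        return true_count + laplace_noise(1.0 / epsilon)

    # Invented generalized records: zip code and age are the quasi-identifiers.
    records = [{"zip": "981**", "age": "30-39", "disease": "flu"} for _ in range(6)]
    print(is_k_anonymous(records, ["zip", "age"], k=5))                    # True
    print(dp_count(records, lambda r: r["disease"] == "flu", epsilon=0.5))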
Knowledge work, such as summarizing related research in preparation for writing, typically requires extracting useful information from the scientific literature. Nowadays, researchers' primary source of information is electronic documents available on the Web, accessible through general and academic search engines such as Google Scholar or IEEE Xplore. Yet the vast number of resources makes retrieving only the most relevant results a difficult task, and researchers are often confronted with loads of low-quality or irrelevant content. To address this issue we introduce a novel system that combines a rich, interactive Web-based user interface with different visualization approaches. The system enables researchers to identify key phrases matching current information needs and to spot potentially relevant literature within hierarchical document collections. The chosen context is the collection and summarization of related work in preparation for scientific writing, so the system supports features such as bibliography and citation management, document metadata extraction, and a text editor. This paper introduces the design rationale and components of PaperViz. Moreover, we report the insights gathered in a formative design study addressing usability.
Information technology graduates reach industry and innovate for the future after completing demanding degrees. Upper division college courses require long hours of work on class projects and exams. Some students have hopes of completing their degrees, but are deterred due to many different issues. Instructors can monitor students' progress based on their assignments, projects, and exams. Judging students' understanding and potential for success becomes more difficult when handling large classes. In this paper we utilize IBM Text Analytics Web Tooling on large amounts of unstructured text data collected from past assignments, exams, and discussions to help professors make assessments faster for large classes. In particular, we focus on an Information Security course offered at San Jose State University and use its classroom-generated data to determine if the extracted information provides strong insights for professors to help struggling students. We examine these issues through exploratory analysis.
Critical information systems rely heavily on event logging techniques to collect data, such as housekeeping/error events, execution traces, and dumps of variables, into unstructured text logs. Event logs are the primary source for gaining actionable intelligence from production systems. Despite their recognized importance, system and application logs remain underutilized in security analytics compared with conventional, structured data sources such as audit traces, network flows, and intrusion detection logs. This paper proposes a method to measure the occurrence of interesting activity (i.e., entries that should be followed up by analysts) within textual and heterogeneous runtime log streams. We use an entropy-based approach that makes no assumptions about the structure of the underlying log entries. Measurements have been made on a real-world Air Traffic Control information system through a data analytics framework. Experiments suggest that our entropy-based method is a valuable complement to security analytics solutions.
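The paper's exact measure is not reproduced here, but the flavor of an entropy-based, structure-agnostic detector can be conveyed in a few lines of Python (the window size and toy log lines are assumptions): windows whose token distribution departs from the baseline entropy are candidates for analyst follow-up.

    import math
    from collections import Counter

    def shannon_entropy(tokens):
        """Shannon entropy (bits) of the empirical token distribution."""
        counts = Counter(tokens)
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def window_scores(log_lines, window=20):
        """Entropy per fixed-size window of the raw log stream; no parsing of
        entry structure is attempted. Window size is an assumption."""
        scores = []
        for i in range(0, len(log_lines), window):
            tokens = [t for line in log_lines[i:i + window] for t in line.split()]
            if tokens:
                scores.append(shannon_entropy(tokens))
        return scores

    # 95 routine lines, then 5 unusual ones: the last window's entropy jumps.
    logs = ["heartbeat ok"] * 95 + ["ERROR auth failure from 10.0.0.%d" % i
                                    for i in range(5)]
    print(window_scores(logs))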
The IoT (Internet of Things) still lacks global policies and standards to govern interaction and application development, and a host of urgent security issues affect the application layer of the IoT. It is therefore important to develop security algorithms that protect IoT systems from malicious attack. Service requesters must pay attention to how their data will be used, by whom, and when, and they must have tools to control what data they are willing to disclose. In this article, a fusion diversity scheme that adopts MRC (maximum ratio combining) together with a TM (trust management) security algorithm is proposed. In the MRC stage, the specified parameters are first extracted and weighted by an estimated value before being combined with the control information. Once the combination is complete, the fused information is forwarded in turn to the upper layers of the IoT stack. Simulation results from experiments deployed with physical assessment show that security becomes more reliable once the MRC scheme is fused into the TM procedure.
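For reference, classical maximum ratio combining weights each diversity branch in proportion to its channel gain over the noise variance before summing, as in the short Python sketch below (real-valued signals, invented gains); how the article couples these weights to trust-management parameters is specific to the article and not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)
    symbol = 1.0                          # transmitted (real-valued) symbol
    gains = np.array([0.9, 0.5, 0.2])     # invented per-branch channel gains
    noise_var = 0.1
    received = gains * symbol + rng.normal(0.0, noise_var ** 0.5, size=3)

    # MRC: weight each branch by gain / noise variance so that stronger,
    # cleaner branches dominate the combined estimate.
    weights = gains / noise_var
    estimate = (weights @ received) / (weights @ gains)
    print(estimate)   # close to the transmitted symbol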
A mobile ad hoc network (MANET) is one of the most important and distinctive kinds of wireless network, offering maximum mobility and scalability, and it is suitable for environments that need on-the-fly setup. Implementing these networks brings many challenges; the most delicate one a MANET faces is becoming energy efficient while at the same time handling security issues. In this paper we discuss Load Balanced Energy Enhanced Clustered Bee Ad Hoc Routing (LBEE), together with a secured PKI scheme, as a route to maximum energy saving. LBEE, which is inspired by swarm intelligence and follows the bee-colony paradigm, has been found to be one of the best energy-efficient methods for MANETs. Alongside energy efficiency, care has been taken to secure all nodes of the network; a four-key security scheme was chosen as the security best suited to the protocol.
Identity masking methods have been developed in recent years for use in multiple applications aimed at protecting privacy. There is only limited work, however, targeted at evaluating the effectiveness of these methods, with only a handful of studies testing identity masking effectiveness for human perceivers. Here, we employed human participants to evaluate identity masking algorithms on video data of drivers, which contains subtle movements of the face and head. We evaluated the effectiveness of the "personalized supervised bilinear regression method for Facial Action Transfer (FAT)" de-identification algorithm. We also evaluated an edge-detection filter as an alternate "fill-in" method for cases where face tracking failed due to abrupt or fast head motions. Our primary goal was to develop methods for human-based evaluation of the effectiveness of identity masking. To this end, we designed and conducted two experiments addressing the effectiveness of masking in preventing recognition and in preserving action perception. 1) How effective is an identity masking algorithm? We conducted a face recognition experiment and employed Signal Detection Theory (SDT) to measure human accuracy and decision bias. The accuracy results show that both masks (FAT and edge-detection) are effective, but that neither completely eliminated recognition. However, the decision bias data suggest that both masks altered the participants' response strategy and made them less likely to affirm identity. 2) How effectively does the algorithm preserve actions? We conducted two experiments on facial behavior annotation. Results showed that masking had a negative effect on annotation accuracy for the majority of actions, with differences across action types. Notably, the FAT mask preserved actions better than the edge-detection mask. To our knowledge, this is the first study to evaluate a de-identification method aimed at preserving facial actions with human evaluators in a laboratory setting.
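The accuracy and bias measures come from standard signal detection theory; the small Python helper below (with invented counts) computes sensitivity d' and criterion c from hit and false-alarm counts. A positive c means observers became more reluctant to affirm identity, which matches the shift in response strategy the masks produced.

    from statistics import NormalDist

    def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
        """Sensitivity d' and criterion c from recognition counts; the
        log-linear correction keeps rates away from 0 and 1."""
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        z = NormalDist().inv_cdf
        d_prime = z(hit_rate) - z(fa_rate)
        c = -0.5 * (z(hit_rate) + z(fa_rate))  # c > 0: reluctant to affirm identity
        return d_prime, c

    # Invented counts for one masking condition.
    print(dprime_and_criterion(hits=30, misses=20,
                               false_alarms=10, correct_rejections=40))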
Performing large-scale malware classification is increasingly becoming a critical step in malware analytics as the number and variety of malware samples grows rapidly. Statistical machine learning constitutes an appealing way to cope with this increase, as it can use mathematical tools to extract information from large-scale datasets and produce interpretable models. This has motivated a surge of scientific work on machine learning methods for the detection and classification of malicious executables. However, an optimal method for extracting the most informative features for different malware families, with the final goal of malware classification, is yet to be found. Fortunately, neural networks have evolved to the point where they can surpass the limitations of other methods in terms of hierarchical feature extraction, and they now offer superior classification accuracy in many domains such as computer vision and natural language processing. In this paper, we transfer the performance improvements achieved in the area of neural networks to the modeling of execution sequences of disassembled malicious binaries. We implement a neural network that consists of convolutional and feedforward constructs. This architecture embodies a hierarchical feature extraction approach that combines convolution over n-grams of instructions with plain vectorization of features derived from the headers of Portable Executable (PE) files. Our evaluation results demonstrate that our approach outperforms baseline methods, such as simple feedforward neural networks and Support Vector Machines, achieving 93% precision and recall even in the presence of obfuscation in the data.
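As a structural sketch only, the Python/NumPy forward pass below mirrors the described architecture: convolutional filters over n-grams of instruction embeddings, global max pooling, concatenation with plain PE-header features, and a final classifier. All weights are random and untrained, and the vocabulary, dimensions, and header features are invented; training and the paper's actual hyperparameters are out of scope.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy vocabulary of disassembled instructions and one sample's sequence.
    vocab = {"mov": 0, "push": 1, "call": 2, "xor": 3, "ret": 4}
    seq = [vocab[t] for t in "push mov xor call mov ret".split()]
    header_feats = np.array([0.3, 1.0, 0.0])   # e.g. scaled PE header fields

    emb_dim, n_filters, width = 8, 4, 3        # width-3 filters ~ instruction 3-grams
    E = rng.normal(size=(len(vocab), emb_dim))            # instruction embeddings
    F = rng.normal(size=(n_filters, width * emb_dim))     # convolutional filters
    W = rng.normal(size=(1, n_filters + len(header_feats)))  # classifier weights

    x = E[seq]                                  # (seq_len, emb_dim)
    # Convolve each filter over sliding windows of 3 instructions, then pool.
    windows = np.stack([x[i:i + width].ravel()
                        for i in range(len(seq) - width + 1)])
    conv = np.maximum(windows @ F.T, 0)         # ReLU, shape (n_windows, n_filters)
    pooled = conv.max(axis=0)                   # global max pooling per filter

    # Hierarchical fusion: pooled n-gram features + plain header features.
    features = np.concatenate([pooled, header_feats])
    logit = float(W @ features)
    p_malware = 1 / (1 + np.exp(-logit))        # sigmoid output
    print(round(p_malware, 3))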
In this position paper we describe how mutation testing can be used to evaluate the quality of test suites from a security viewpoint. Our focus is on measuring the quality of the test suite associated with the Java Development Kit (JDK) because it provides the core security properties for all applications. We describe the challenges associated with identifying security-specific mutation operators that are specific to the Java model and ensuring that our solution can be automated for large code-bases like the JDK.
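The paper targets Java and the JDK; purely as an illustration of a security-specific mutation operator (written in Python for brevity), the sketch below rewrites an authorization predicate so that it always grants access. A test suite "kills" this mutant only if it contains a negative test, which is exactly the kind of security-relevant gap such operators expose.

    import ast
    import textwrap

    # Example subject under test: an authorization predicate (invented).
    src = textwrap.dedent('''
        def can_read(user, resource):
            return user.role == "admin" or resource.owner == user.name
    ''')

    class AuthBypass(ast.NodeTransformer):
        """Security-specific mutation operator: make the check always pass."""
        def visit_Return(self, node):
            node.value = ast.Constant(value=True)
            return node

    mutant = ast.unparse(AuthBypass().visit(ast.parse(src)))
    print(mutant)
    # The mutant survives any suite that only tests authorized callers; a
    # negative test (non-admin, non-owner caller) is needed to kill it.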
Today's mobile applications increasingly rely on communication with a remote backend service to perform many critical functions, including handling user-specific information. This implies that some form of authentication should be used to associate a user with their actions and data. Since schemes involving tedious account creation procedures can represent "friction" for users, many applications are moving toward alternative solutions, some of which, while increasing usability, sacrifice security. This paper focuses on a new trend of authentication schemes based on what we call "device-public" information, which consists of properties and data that any application running on a device can obtain. While these schemes are convenient to users, since they require little to no interaction, they are vulnerable by design, since all the needed information to authenticate a user is available to any app installed on the device. An attacker with a malicious app on a user's device could easily hijack the user's account, steal private information, send (and receive) messages on behalf of the user, or steal valuable virtual goods. To demonstrate how easily these vulnerabilities can be weaponized, we developed a generic exploitation technique that first mines all relevant data from a victim's phone, and then transfers and injects them into an attacker's phone to fool apps into granting access to the victim's account. Moreover, we developed a dynamic analysis detection system to automatically highlight problematic apps. Using our tool, we analyzed 1,000 popular applications and found that 41 of them, including the popular messaging apps WhatsApp and Viber, were vulnerable. Finally, our work proposes solutions to this issue, based on modifications to the Android API.
As QR codes become ubiquitous, there is a prominent security threat of phishing and malware attacks that can be carried out by sharing rogue URLs through such codes. Several QR code scanner apps have become available in the past few years to combat such threats. Nevertheless, limited work exists in the literature evaluating such apps in the context of security. In this paper, we have investigated the status of existing secure QR code scanner apps for Android from a security point of view. We found that several of the so-called secure QR code scanner apps merely present the URL encoded in a QR code to the user rather than validating it against suitable threat databases. Further, many apps do not support basic security features such as displaying the URL to the user and asking for user confirmation before proceeding to open the URL in a browser. The most alarming issue that emerged during this study is that only two of the studied apps perform validation of the redirected URL associated with a QR code. We also tested the relevant apps with a set of benign, phishing and malware URLs collected from multiple sources. Overall, the results of our experiments imply that the protection offered by the examined secure QR code scanner apps against rogue URLs (especially malware URLs) is limited. Based on the findings of our investigation, we have distilled a set of key lessons and proposed design recommendations to enhance the security aspects of such apps.
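The redirect issue the study highlights can be made concrete with a short Python sketch: validating only the URL encoded in the QR code misses the final landing page, so both should be checked. The blocklist hosts below are placeholders, and a real app would query maintained threat databases rather than a hard-coded set.

    from urllib.parse import urlparse
    import urllib.request

    BLOCKLIST = {"malware.example.net", "phish.example.org"}  # stand-in threat feed

    def is_safe(decoded_url: str) -> bool:
        # urlopen follows HTTP redirects; geturl() yields the final landing URL.
        final_url = urllib.request.urlopen(decoded_url, timeout=5).geturl()
        return all(urlparse(u).hostname not in BLOCKLIST
                   for u in (decoded_url, final_url))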
DNA cryptography is one of the promising fields in cryptographic research that emerged with the evolution of DNA computing. In this era, end-to-end transmission of secure data that ensures confidentiality and authenticity over networks is a real challenge. Although various DNA-based cryptographic algorithms exist, they are not secure enough to meet today's security requirements. Hence we propose a cryptographic model that enhances message security. A new method of round-key selection is used, which provides better, enhanced security against intruder attacks. The crucial attraction of the proposed model is that it provides three levels of security: round-key selection and message encryption at level 1, 16×16 matrix manipulation using asymmetric-key encryption at level 2, and shift operations at level 3. We thus design a system with multi-level encryption without compromising on the complexity or the size of the ciphertext.
Data security is a challenging issue nowadays, given the growth in information volume and transmission rates. The most common and widely used techniques in the data security field are cryptography and steganography, and combining the two provides stronger protection. DNA (deoxyribonucleic acid) is now being explored as a new carrier for data security, since it achieves strong protection with high capacity and a low modification rate. A new data security method can be developed by taking advantage of DNA-based AES (Advanced Encryption Standard) cryptography and DNA steganography. This technique provides multilayer security for the secret message: the message is first encoded as DNA bases, a DNA-based AES algorithm is then applied to it, and finally the encrypted DNA is concealed in another DNA sequence. This hybrid technique provides triple-layer security for the secret message.
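A dependency-free Python sketch of the three-layer pipeline's shape: encode bytes as DNA bases (two bits per nucleotide), encrypt, and conceal the result at key-derived positions inside a cover sequence. Note the encryption stage here is a simple SHA-256 keystream XOR standing in for the paper's DNA-based AES, and the cover sequence and slot selection are simplified for illustration.

    import hashlib
    import random

    BASES = "ACGT"   # two bits per nucleotide: A=00, C=01, G=10, T=11

    def to_dna(data: bytes) -> str:
        bits = "".join(f"{byte:08b}" for byte in data)
        return "".join(BASES[int(bits[i:i + 2], 2)] for i in range(0, len(bits), 2))

    def from_dna(dna: str) -> bytes:
        bits = "".join(f"{BASES.index(ch):02b}" for ch in dna)
        return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

    def stream_xor(data: bytes, key: bytes) -> bytes:
        # Stand-in for the paper's DNA-based AES stage: a SHA-256 keystream
        # XOR keeps this sketch dependency-free; it is NOT AES.
        out, counter = bytearray(), 0
        while len(out) < len(data):
            out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
            counter += 1
        return bytes(a ^ b for a, b in zip(data, out))

    secret, key = b"attack at dawn", b"shared-key"
    payload = to_dna(stream_xor(secret, key))

    # Conceal payload bases at key-derived positions inside a cover sequence
    # (a random cover here; a real scheme would use a genuine DNA sequence).
    cover = [random.choice(BASES) for _ in range(4 * len(payload))]
    slot_rng = random.Random(hashlib.sha256(key).digest())
    slots = sorted(slot_rng.sample(range(len(cover)), len(payload)))
    for pos, base in zip(slots, payload):
        cover[pos] = base

    # The receiver re-derives the slots from the key and inverts each layer.
    extracted = "".join(cover[p] for p in slots)
    print(stream_xor(from_dna(extracted), key))   # b'attack at dawn'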
ERP helps enterprises to integrate internal information and to improve operating performance and reaction capability. However, relying on ERP alone is not enough if an enterprise wants to grow quickly; the enterprise also needs several external supporting sub-systems such as a personnel management system, an equipment management system, and so on. These sub-systems may be customized through outsourcing or developed by internal IT staff, and they may be distributed across branches or headquarters to collect front-line data and then deliver it to the ERP for data integration. Most enterprises deliver data to the ERP manually or by timed batch processes over the Internet, but neither method is ideal from the viewpoint of efficiency and security. This paper proposes a fast and safe way, using both trigger and data replication techniques, to deliver distributed data to the ERP in time for data integration.
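Not the paper's implementation, but the trigger-plus-replication idea can be sketched with Python's built-in sqlite3: a database trigger captures each change into an outbox table, and a forwarder ships pending rows to the ERP staging area as they appear rather than in a nightly batch. Table names and schema are invented.

    import sqlite3

    # Branch-side database: an AFTER INSERT trigger records each new row in
    # an outbox table so changes can be forwarded promptly.
    branch = sqlite3.connect(":memory:")
    branch.executescript("""
        CREATE TABLE equipment(id INTEGER PRIMARY KEY, name TEXT, status TEXT);
        CREATE TABLE outbox(seq INTEGER PRIMARY KEY AUTOINCREMENT,
                            table_name TEXT, row_id INTEGER, payload TEXT);
        CREATE TRIGGER equipment_cdc AFTER INSERT ON equipment
        BEGIN
            INSERT INTO outbox(table_name, row_id, payload)
            VALUES ('equipment', NEW.id, NEW.name || '|' || NEW.status);
        END;
    """)
    branch.execute("INSERT INTO equipment(name, status) VALUES ('pump-7', 'online')")

    erp = sqlite3.connect(":memory:")    # stands in for the central ERP store
    erp.execute("CREATE TABLE staging(table_name TEXT, row_id INTEGER, payload TEXT)")

    def replicate(src, dst):
        """Forward pending outbox rows to the ERP, then mark them delivered."""
        rows = src.execute("SELECT seq, table_name, row_id, payload FROM outbox")
        for seq, tbl, rid, payload in rows.fetchall():
            dst.execute("INSERT INTO staging VALUES (?, ?, ?)", (tbl, rid, payload))
            src.execute("DELETE FROM outbox WHERE seq = ?", (seq,))
        src.commit(); dst.commit()

    replicate(branch, erp)
    print(erp.execute("SELECT * FROM staging").fetchall())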
The emergence and wide availability of remote storage service providers prompted work in the security community that allows clients to verify the integrity and availability of data they have outsourced to a not fully trusted remote storage server, at relatively low cost. Most recent solutions to this problem allow clients to read and update (i.e., insert, modify, or delete) stored data blocks while trying to lower the overhead of verifying the integrity of the stored data. In this work, we develop a novel scheme whose performance compares favorably with existing solutions. Our solution additionally enjoys a number of new features, such as natural support for operations on ranges of blocks, revision control, and multi-user access to shared content. The performance guarantees we achieve stem from a novel data structure, the balanced update tree, and from removing the need for any interaction during update operations beyond communicating the updates themselves.
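The balanced update tree itself is the paper's contribution and is not reproduced here; for context, the Python sketch below shows the standard Merkle-hash baseline that such outsourced-integrity schemes build on and improve, where the client keeps only the root digest and can detect any tampering with the outsourced blocks.

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(blocks):
        """Merkle root over file blocks; the client stores only this digest."""
        level = [h(b) for b in blocks]
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])      # duplicate last node on odd levels
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    blocks = [b"block-%d" % i for i in range(8)]
    root = merkle_root(blocks)               # kept client-side
    # Later, the server returns blocks (plus authentication paths in a real
    # scheme); here we simply recompute the root to detect tampering.
    tampered = blocks[:]
    tampered[3] = b"evil"
    print(merkle_root(blocks) == root, merkle_root(tampered) == root)  # True False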
Over the past few years we have articulated theory that describes ‘encrypted computing’, in which data remains in encrypted form while being worked on inside a processor, by virtue of a modified arithmetic. The last two years have seen research and development on a standards-compliant processor that shows that near-conventional speeds are attainable via this approach. Benchmark performance with the US AES-128 flagship encryption and a 1GHz clock is now equivalent to a 433MHz classic Pentium, and most block encryptions fit in AES's place. This summary article details how user data is protected by a system based on the processor from being read or interfered with by the computer operator, for those computing paradigms that entail trust in data-oriented computation in remote locations where it may be accessible to powerful and dishonest insiders. We combine: (i) the processor that runs encrypted; (ii) a slightly modified conventional machine code instruction set architecture with which security is achievable; (iii) an ‘obfuscating’ compiler that takes advantage of its possibilities, forming a three-point system that provably provides cryptographic "semantic security" for user data against the operator and system insiders.
In this paper we present the results of research on automatic extremist text detection. For this purpose an experimental dataset in the Russian language was created; under Russian legislation we cannot make it publicly available. We compared various classification methods (multinomial naive Bayes, logistic regression, linear SVM, random forest, and gradient boosting) and evaluated the contribution of different feature groups (lexical, semantic, and psycholinguistic) to classification quality. The experimental results show that psycholinguistic and semantic features are promising for extremist text detection.
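A minimal scikit-learn pipeline conveys the shape of such a comparison; the four-line English corpus below is a stand-in (the Russian dataset cannot be redistributed), and only lexical TF-IDF features are used here, whereas the paper additionally evaluates semantic and psycholinguistic features.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Stand-in corpus; label 1 marks texts that would be flagged for follow-up.
    texts = ["call for violence against the group",
             "peaceful protest planned downtown",
             "join our cooking class tonight",
             "destroy them all, no mercy"] * 10
    labels = [1, 0, 0, 1] * 10

    for clf in (MultinomialNB(), LogisticRegression(max_iter=1000)):
        pipe = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
        scores = cross_val_score(pipe, texts, labels, cv=5, scoring="f1")
        print(type(clf).__name__, round(scores.mean(), 3))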
Word embedding is one of the basic word representation methods in natural language processing; it maps a word into a dense real-valued vector space based on the hypothesis that words with similar contexts have similar meanings. Models such as NNLM, C&W, CBOW, and Skip-gram have been designed for learning word embeddings and are widely used in many NLP tasks. However, these models assume that each word has only one meaning, contrary to how language actually works. In this paper we propose a new word unit with multiple meanings and an algorithm to distinguish them by context. This new unit can be embedded in most language models to obtain a series of efficient representations by learning variable embeddings. We evaluate a new model, MCBOW, which integrates CBOW with our word unit, on a word-similarity task and several downstream experiments; the results indicate that the new model can learn distinct meanings of a word and achieves better results on some other tasks.
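MCBOW itself is the paper's contribution; as background, the sketch below trains plain CBOW with a full softmax in NumPy on a toy corpus. The single row that "bank" gets in the embedding matrix, regardless of its financial or river contexts, is precisely the one-sense limitation the proposed multi-sense word unit relaxes.

    import numpy as np

    rng = np.random.default_rng(0)
    corpus = "the bank raised the interest rate the river bank flooded".split()
    vocab = sorted(set(corpus))
    idx = {w: i for i, w in enumerate(vocab)}
    V, D = len(vocab), 10
    W_in = rng.normal(0, 0.1, (V, D))    # input (context) embeddings
    W_out = rng.normal(0, 0.1, (V, D))   # output (target) embeddings

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    lr, window = 0.1, 2
    for _ in range(100):                   # CBOW: predict the centre word from
        for t, word in enumerate(corpus):  # the average of its context vectors
            ctx = [idx[corpus[j]]
                   for j in range(max(0, t - window),
                                  min(len(corpus), t + window + 1)) if j != t]
            h = W_in[ctx].mean(axis=0)
            p = softmax(W_out @ h)
            p[idx[word]] -= 1.0            # dL/dlogits = softmax - one_hot
            grad_h = W_out.T @ p
            W_out -= lr * np.outer(p, h)
            np.add.at(W_in, ctx, -lr * grad_h / len(ctx))  # handles repeats

    # One vector per surface form, whatever the sense of "bank" in context.
    print(W_in[idx["bank"]][:4])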
In previous work, we proposed a solution to facilitate access to computer science courses and learning materials using cloud computing and mobile technologies. The solution was positively evaluated by the participants, but most of them indicated that it lacked support for laboratory activities. It is well known that many computer science subjects (e.g., Computer Networks, Information Security, Systems Administration) require a suitable and flexible environment where students can access a set of computers and network devices to complete their hands-on activities. To meet this requirement, we created a cloud-based virtual laboratory on the OpenStack cloud platform to facilitate access to virtual machines both locally and remotely. Cloud-based virtual labs bring many advantages, such as increased manageability, scalability, high availability, and flexibility, to name a few. The arrangement was tested in a case-study exercise with a group of students as part of the Computer Networks and System Administration courses at Kabul Polytechnic University in Afghanistan. To measure success, we introduced a level test to be completed by participants before and after the experiment; the learners achieved on average 17.1% higher scores in the post-test after completing the practical exercises. Lastly, we distributed a questionnaire after the experiment, and students provided positive feedback on the effectiveness and usefulness of the proposed solution.