Biblio
Information and communication technologies have increased interoperability and rapidly advanced a variety of industries, with vast, complex, interconnected networks being formed in areas such as safety-critical systems, which can be further categorised as critical infrastructures. Also to be considered is the Internet of Things paradigm, which is rapidly gaining prevalence within the field of wireless communications and is being incorporated into areas such as e-health and automation for industrial manufacturing. As critical infrastructures and the Internet of Things integrate into much wider networks, their reliance upon third-party communication assets to ensure collaboration and control of their systems will significantly increase, along with system complexity and the requirement for improved security metrics. We present a critical analysis of the risk assessment methods developed for generating attack graphs. The failings of these existing schemas include an inability to accurately identify the relationships and interdependencies between risks, and a failure to reduce attack graph size and generation complexity. Many existing methods also suffer from heavy reliance on human intervention for input, identification of vulnerabilities, and analysis of results. We then outline our approach to modelling interdependencies within large heterogeneous collaborative infrastructures, proposing a distributed schema which utilises network modelling and attack graph generation methods to provide a means for vulnerabilities, exploits and conditions to be represented within a unified model.
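A unified model of this kind lends itself to a directed-graph encoding in which conditions (attacker capabilities or system states) and exploits are nodes. The following Python fragment is a minimal sketch of that idea only; the node names, hypothetical CVE labels, and the use of networkx are our assumptions, not the authors' implementation:

```python
# Minimal attack-graph sketch using networkx (illustrative only).
import networkx as nx

g = nx.DiGraph()
# Condition nodes (capabilities / system states) and exploit nodes.
g.add_node("access(web_srv)", kind="condition")
g.add_node("CVE-XXXX-web", kind="exploit")   # hypothetical vulnerability
g.add_node("root(web_srv)", kind="condition")
g.add_node("CVE-XXXX-db", kind="exploit")    # hypothetical vulnerability
g.add_node("root(db_srv)", kind="condition")

# Edges: preconditions enable exploits; exploits yield postconditions.
g.add_edges_from([
    ("access(web_srv)", "CVE-XXXX-web"),
    ("CVE-XXXX-web", "root(web_srv)"),
    ("root(web_srv)", "CVE-XXXX-db"),
    ("CVE-XXXX-db", "root(db_srv)"),
])

# Interdependencies between risks fall out as paths through the graph.
for path in nx.all_simple_paths(g, "access(web_srv)", "root(db_srv)"):
    print(" -> ".join(path))
```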
This paper reports research on transmitting secured image data using the Discrete Wavelet Transform (DWT) in combination with the Advanced Encryption Standard (AES) at low power and high speed. The approach yields better-quality secured images with reduced latency and improved throughput. The combined DWT and AES model achieves a higher compression ratio while simultaneously providing strong security when transmitting an image over a channel. The lifting-scheme algorithm is realized using a single, serialized DWT processor that computes up to three levels of decomposition, improving speed and security. An ASIC circuit was designed using an RTL-to-GDSII flow to simulate the proposed technique in 65 nm CMOS technology. The ASIC circuit occupies an average area of about 0.76 mm² and its power consumption is estimated in the range of 10.7-19.7 mW at a frequency of 333 MHz, which is faster than other similar research work reported.
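As a software-level illustration of the DWT-then-AES data flow described above (the paper's design is an ASIC; this Python sketch only mirrors the pipeline, with the wavelet, quantization step, and key chosen arbitrarily by us):

```python
# Sketch of the DWT-then-AES pipeline (software analogue only).
import numpy as np
import pywt
from Crypto.Cipher import AES            # pycryptodome
from Crypto.Random import get_random_bytes

img = np.random.rand(256, 256)           # stand-in for an input image

# 3-level 2-D wavelet decomposition (compression stage).
coeffs = pywt.wavedec2(img, "haar", level=3)
flat, slices = pywt.coeffs_to_array(coeffs)
quantized = np.round(flat / 0.05).astype(np.int16)   # coarse quantization

# AES encryption of the compressed coefficients (security stage).
key = get_random_bytes(16)
cipher = AES.new(key, AES.MODE_CTR)
ciphertext = cipher.encrypt(quantized.tobytes())

# Receiver side: decrypt, dequantize, inverse transform.
plain = AES.new(key, AES.MODE_CTR, nonce=cipher.nonce).decrypt(ciphertext)
restored = np.frombuffer(plain, dtype=np.int16).astype(float).reshape(flat.shape) * 0.05
recon = pywt.waverec2(pywt.array_to_coeffs(restored, slices, output_format="wavedec2"), "haar")
print("reconstruction error:", np.abs(recon - img).max())
```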
Joint transmission coordinated multi-point (JT CoMP) is a combination of constructive and destructive superposition of several, potentially many, signal components, with the goal of maximizing the desired receive signal while minimizing mutual interference. The destructive superposition in particular requires accurate alignment of phases and amplitudes. Therefore, a 5G clean-slate approach needs to incorporate the following enablers to overcome the challenging limitations of JT CoMP: accurate channel estimation of all relevant channel components; channel prediction for time-aligned precoder design; proper setup of cooperation areas, corresponding to user grouping, that limits feedback overhead, especially in FDD; and treatment of out-of-cluster interference (interference floor shaping).
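To make the constructive/destructive-superposition idea concrete, here is a minimal numpy sketch of joint zero-forcing precoding across a small cooperation area; the channel values are random placeholders and perfect channel knowledge is assumed, which is exactly what the enablers above are meant to approximate:

```python
# Joint zero-forcing precoding sketch for a small JT CoMP cooperation area.
import numpy as np

rng = np.random.default_rng(0)
K, M = 3, 3                       # 3 users, 3 cooperating transmit antennas
H = rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))  # placeholder channels

W = np.linalg.pinv(H)             # zero-forcing precoder: H @ W = I
s = np.array([1 + 0j, -1 + 0j, 1j])   # user symbols
x = W @ s                         # transmitted superposition

y = H @ x                         # received signals: interference nulled
print(np.round(y, 6))             # ≈ s, mutual interference ≈ 0
```

The destructive part of the superposition is what drives the off-diagonal terms of H @ W to zero; phase and amplitude errors from imperfect estimation or prediction directly raise the interference floor.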
Food safety policies aim to promote and develop food provision and nutrition in society. This paper presents a system dynamics model that studies the dynamic behavior between transport infrastructure and the food supply chain in the city of Bogotá. The results show that adequate transport infrastructure is more effective at improving customer service in the food supply chain. The system dynamics model allows analysis of the behavior of transport infrastructure and the supply chains for fruits and vegetables, groceries, meat, and dairy. The study goes some way towards enhancing our understanding of the interplay between food security, the food supply chain, and transport infrastructure.
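A system dynamics model of this kind is, at its core, a set of stocks and flows integrated over time. The fragment below is a deliberately toy sketch; all rates and the capacity-service relationship are invented for illustration and are not taken from the Bogotá model:

```python
# Toy stock-and-flow sketch: transport capacity vs. food-inventory service level.
dt, horizon = 0.25, 40.0          # time step and horizon, in weeks
inventory, capacity = 100.0, 50.0 # stocks: food stock (t), road capacity (t/wk)
demand = 60.0                     # constant customer demand (t/wk)

t = 0.0
while t < horizon:
    deliveries = min(capacity, demand * 1.2)  # inflow limited by infrastructure
    sales = min(inventory / dt, demand)       # outflow limited by stock on hand
    inventory += (deliveries - sales) * dt
    capacity += 0.5 * dt                      # slow infrastructure investment
    service = sales / demand                  # fraction of demand served
    t += dt
print(f"final service level: {service:.2f}")
```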
A technical method for improving the transmission capacity of an optical wireless orthogonal frequency division multiplexing (OFDM) link based on a visible light-emitting diode (LED) is proposed in this paper. An original OFDM signal, encoded with multilevel digital modulations such as quadrature phase shift keying (QPSK) and quadrature amplitude modulation (QAM), is converted into a sparse one and then compressed using adaptive sampling with an inverse discrete cosine transform, while its error-free reconstruction is achieved by L1-minimization based on Bayesian compressive sensing (CS). For QPSK symbols, the transmission capacity of the optical wireless OFDM link increased from 31.12 Mb/s to 51.87 Mb/s at a compression ratio of 40%, while for 16-QAM symbols it improved from 62.5 Mb/s to 78.13 Mb/s at a compression ratio of 20%, in both cases with error-free wireless transmission (forward error correction limit: bit error rate of 10⁻³).
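The reported capacity gains follow directly from the compression ratio: if a fraction CR of samples is removed before transmission, the effective information rate scales by 1/(1 − CR). A quick check of the numbers above (our own arithmetic, reproducing the paper's figures):

```python
# Effective rate after compressive sampling: R_eff = R_link / (1 - CR).
for r_link, cr in [(31.12, 0.40), (62.5, 0.20)]:
    print(f"{r_link} Mb/s at CR {cr:.0%} -> {r_link / (1 - cr):.2f} Mb/s")
# reproduces the paper's 51.87 Mb/s and 78.13 Mb/s figures (up to rounding)
```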
Intrusion Detection Systems (IDSs) are an important defense tool against sophisticated and ever-growing network attacks. These systems need to be evaluated against high-quality datasets to correctly assess their usefulness and compare their performance. We present the Intrusion Detection Dataset Toolkit (ID2T) for the creation of labeled datasets containing user-defined synthetic attacks. The architecture of the toolkit is provided for examination, and an example of an attack injected into real network traffic is visualized and analyzed. We further discuss the toolkit's ability to create realistic synthetic attacks of high quality and low bias.
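The core operation of such a toolkit, injecting a labeled synthetic attack into otherwise benign traffic, can be sketched with off-the-shelf tools. The snippet below is not ID2T's actual API (which we do not reproduce here) but a minimal scapy illustration of the concept, with file names and addresses invented:

```python
# Conceptual sketch: inject a synthetic SYN flood into a benign capture.
# Not ID2T's API; file names and addresses are placeholders.
from scapy.all import Ether, IP, TCP, rdpcap, wrpcap

base = rdpcap("benign.pcap")               # real background traffic
t0 = float(base[0].time)

attack = []
for i in range(200):                       # 200 spoofed SYNs
    pkt = Ether() / IP(src=f"10.0.0.{i % 250 + 1}", dst="192.168.1.10") / \
          TCP(sport=1024 + i, dport=80, flags="S")
    pkt.time = t0 + 5.0 + i * 0.01         # place the burst 5 s into the trace
    attack.append(pkt)

merged = sorted(list(base) + attack, key=lambda p: float(p.time))
wrpcap("labeled_dataset.pcap", merged)     # injected packets are known -> labels
```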
Establishing security and trust is the first step toward development in both real and virtual societies. Internet-based development is inevitable. The increasing penetration of technology in internet banking, and its contribution to banking profitability and prosperity, requires that satisfied customers be turned into loyal ones. Currently, a large number of cyber attacks are focused on online banking systems, and these attacks constitute a significant security threat. Banks or customers may become victims of the most sophisticated financial crime, namely internet fraud. This study develops an intelligent system that detects a user's abnormal behavior in online banking. Since user behavior is associated with uncertainty, the system is built on fuzzy theory, which enables it to identify user behaviors and categorize suspicious ones at various levels of intensity. The performance of the fuzzy expert system was evaluated using a receiver operating characteristic (ROC) curve, yielding an accuracy of 94%. This expert system is a promising means of improving the security and quality of e-banking services.
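To illustrate how fuzzy logic accommodates the uncertainty in user behavior, the sketch below scores a transaction with triangular membership functions and a single rule; the variables, thresholds, and the rule itself are invented for illustration and are not the paper's actual rule base:

```python
# Minimal fuzzy scoring sketch: suspicion of an online-banking transaction.
def tri(x, a, b, c):
    """Triangular membership: rises a->b, falls b->c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def suspicion(amount_ratio, hour_shift):
    # amount_ratio: transaction amount / user's historical mean
    # hour_shift: hours away from the user's usual activity window
    high_amount = tri(amount_ratio, 1.5, 4.0, 10.0)
    odd_hour = tri(hour_shift, 2.0, 6.0, 12.0)
    # Rule: IF amount is high AND hour is odd THEN behavior is suspicious.
    return min(high_amount, odd_hour)      # Mamdani-style AND (min)

print(suspicion(amount_ratio=3.0, hour_shift=5.0))   # graded score, not binary
```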
Self-adaptive systems have the ability to adapt their behavior to dynamic operating conditions. In reaction to changes in the environment, these systems determine the appropriate corrective actions based in part on information about which action will have the best impact on the system. Existing models used to describe the impact of adaptations are either unable to capture the underlying uncertainty and variability of such dynamic environments, or are not compositional and are described at a level of abstraction too low to scale in terms of the specification effort required for non-trivial systems. In this paper, we address these shortcomings by describing an approach to the specification of impact models based on architectural system descriptions, which at the same time allows us to represent both variability and uncertainty in the outcome of adaptations, hence improving the selection of the best corrective action. The core of our approach is an impact model language equipped with a formal semantics defined in terms of Discrete Time Markov Chains. To validate our approach, we show how employing our language can improve the accuracy of predictions used for decision-making in the Rainbow framework for architecture-based self-adaptation.
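Since the impact model semantics is given in terms of Discrete Time Markov Chains, the expected outcome of an adaptation can be computed directly from a transition matrix. A minimal numpy sketch follows; the states, probabilities, and utilities are our own toy values, not Rainbow's:

```python
# Toy DTMC impact model: expected utility of an adaptation under uncertainty.
import numpy as np

# States: 0 = degraded, 1 = nominal, 2 = overloaded.
# Row i gives the distribution over next states after applying the adaptation.
P = np.array([[0.1, 0.8, 0.1],
              [0.0, 0.9, 0.1],
              [0.2, 0.6, 0.2]])
utility = np.array([0.2, 1.0, 0.1])   # how desirable each state is

state = np.array([0.0, 0.0, 1.0])     # currently overloaded, with certainty
for _ in range(3):                    # look three steps ahead
    state = state @ P
print("expected utility:", float(state @ utility))
```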
In previous work, the viability of split-cycle constant-period frequency modulation for controlling two degrees of freedom of a flapping-wing micro air vehicle was demonstrated. Though the proposed wing control system was compact and self-sufficient enough to be deployed on the vehicle, it was not built for on-the-fly configurability of all the split-cycle control parameters. Further, the system had limited external communication capabilities, which rendered it inappropriate for integration into a higher-level research framework for analyzing and validating motion controllers in flapping vehicles. In this paper, an improved control system is proposed that addresses the on-the-fly configurability issue and provides improved external communication capabilities, so that the wing control system can be seamlessly integrated into a research framework for analyzing and validating motion controllers for flapping-wing vehicles.
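For readers unfamiliar with split-cycle constant-period frequency modulation: the upstroke and downstroke of a wingbeat are driven at different frequencies chosen so the overall period is unchanged, which shifts cycle-averaged forces without altering the flapping rate. The sketch below generates such a waveform; the frequency-shift value is arbitrary, and this reflects our reading of the technique, not the paper's hardware implementation:

```python
# Split-cycle constant-period frequency modulation of a wingbeat (sketch).
import numpy as np

def wing_angle(t, omega=2 * np.pi * 25.0, delta=0.2 * 2 * np.pi * 25.0, A=1.0):
    """Upstroke at (omega - delta); downstroke sped up so the total
    period stays 2*pi/omega. delta shifts cycle-averaged forces."""
    T = 2 * np.pi / omega
    t = t % T
    t1 = np.pi / (omega - delta)                            # upstroke duration
    omega2 = omega * (omega - delta) / (omega - 2 * delta)  # downstroke frequency
    if t < t1:
        return A * np.cos((omega - delta) * t)
    return A * np.cos(omega2 * (t - t1) + np.pi)

ts = np.linspace(0.0, 0.04, 9)        # one 25 Hz wingbeat period
print([round(wing_angle(t), 3) for t in ts])
```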
To keep malware out of mobile application markets, existing techniques analyze the security aspects of application behaviors and summarize patterns of these security aspects to determine what applications do. However, user expectations (reflected via user perception in combination with user judgment) are often not incorporated into such analysis to determine whether application behaviors are within user expectations. This poster presents our recent work on bridging the semantic gap between user perceptions of the application behaviors and the actual application behaviors.
The Support Vector Machine (SVM), an innovative machine learning tool based on statistical learning theory, has recently been used in process fault diagnosis tasks. In applying SVM to a fault diagnosis problem, typically a discrete decision function with discrete output values is utilized, solely to define the label of the fault. However, for incipient faults, in which the fault progresses steadily over time and there is a changeover from normal to faulty operation, a discrete decision function reveals no evidence about the progress and depth of the fault. Numerous process faults, such as reactor fouling and catalyst degradation, progress slowly and can be categorized as incipient faults. In this work a continuous decision function is proposed. The decision function values not only define the fault label, but also give qualitative evidence about the depth of the fault. The suggested method is applied to incipient fault diagnosis of a continuous binary-mixture distillation column, and the results demonstrate the practicability of the proposed approach. In incipient fault diagnosis tasks, the proposed approach outperformed several conventional techniques; moreover, it performs better than typical discrete classification techniques on monitoring indexes such as false alarm rate, detection time, and diagnosis time.
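In scikit-learn terms, the distinction above is that between predict (the discrete label) and decision_function (the continuous signed distance to the separating boundary), and it is the latter that can track fault depth. A small sketch, with synthetic data standing in for the distillation-column measurements:

```python
# Discrete label vs. continuous decision value for tracking fault depth.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_normal = rng.normal(0.0, 0.5, size=(200, 2))   # normal operation
X_faulty = rng.normal(2.0, 0.5, size=(200, 2))   # fully developed fault
X = np.vstack([X_normal, X_faulty])
y = np.array([0] * 200 + [1] * 200)

clf = SVC(kernel="rbf").fit(X, y)

# An incipient fault drifts from normal toward faulty; the continuous
# decision value grows with fault depth while the label flips only once.
drift = np.linspace([0.0, 0.0], [2.0, 2.0], 6)
print(clf.predict(drift))             # e.g. [0 0 0 1 1 1]
print(clf.decision_function(drift))   # values increasing with fault depth
```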
The growing popularity and development of data mining technologies bring serious threats to the security of individuals' sensitive information. An emerging research topic in data mining, known as privacy-preserving data mining (PPDM), has been extensively studied in recent years. The basic idea of PPDM is to modify the data in such a way that data mining algorithms can be performed effectively without compromising the security of the sensitive information contained in the data. Current studies of PPDM mainly focus on how to reduce the privacy risk brought by data mining operations, while in fact unwanted disclosure of sensitive information may also happen in the process of data collection, data publishing, and information (i.e., data mining results) delivery. In this paper, we view the privacy issues related to data mining from a wider perspective and investigate various approaches that can help to protect sensitive information. In particular, we identify four different types of users involved in data mining applications, namely, data provider, data collector, data miner, and decision maker. For each type of user, we discuss the privacy concerns and the methods that can be adopted to protect sensitive information. We briefly introduce the basics of related research topics, review state-of-the-art approaches, and present some preliminary thoughts on future research directions. Besides exploring the privacy-preserving approaches for each type of user, we also review game-theoretical approaches, which are proposed for analyzing the interactions among different users in a data mining scenario, each of whom has his own valuation of the sensitive information. By differentiating the responsibilities of different users with respect to the security of sensitive information, we aim to provide some useful insights into the study of PPDM.
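One classic technique from the data-provider/data-collector stage discussed above is randomized response, in which each provider perturbs a sensitive binary answer locally, so the collector never sees the true value yet can still estimate the population statistic. A minimal sketch (the truth-telling probability is a parameter we picked for illustration):

```python
# Randomized response: local perturbation with a recoverable aggregate.
import random

P_TRUTH = 0.75          # probability of answering truthfully (our choice)

def respond(truth: bool) -> bool:
    """Each data provider flips a biased coin before answering."""
    return truth if random.random() < P_TRUTH else random.random() < 0.5

# Data collector: observes perturbed answers only.
true_rate = 0.30
answers = [respond(random.random() < true_rate) for _ in range(100_000)]
observed = sum(answers) / len(answers)

# Unbias: observed = P_TRUTH * true_rate + (1 - P_TRUTH) * 0.5.
estimate = (observed - (1 - P_TRUTH) * 0.5) / P_TRUTH
print(f"estimated sensitive rate: {estimate:.3f}")   # ≈ 0.30
```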
Sandboxes impose a security policy, isolating applications and their components from the rest of a system. While many sandboxing techniques exist, state of the art sandboxes generally perform their functions within the system that is being defended. As a result, when the sandbox fails or is bypassed, the security of the surrounding system can no longer be assured. We experiment with the idea of in-nimbo sandboxing, encapsulating untrusted computations away from the system we are trying to protect. The idea is to delegate computations that may be vulnerable or malicious to virtual machine instances in a cloud computing environment. This may not reduce the possibility of an in-situ sandbox compromise, but it could significantly reduce the consequences should that possibility be realized. To achieve this advantage, there are additional requirements, including: (1) A regulated channel between the local and cloud environments that supports interaction with the encapsulated application, (2) Performance design that acceptably minimizes latencies in excess of the in-situ baseline. To test the feasibility of the idea, we built an in-nimbo sandbox for Adobe Reader, an application that historically has been subject to significant attacks. We undertook a prototype deployment with PDF users in a large aerospace firm. In addition to thwarting several examples of existing PDF-based malware, we found that the added increment of latency, perhaps surprisingly, does not overly impair the user experience with respect to performance or usability.
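The regulated channel in requirement (1) is, at its core, a narrow request/response interface between the local host and the cloud instance. The sketch below shows one conceivable shape for such a channel, rendering an untrusted PDF remotely and returning only a raster image; the endpoint, transport, and API are entirely our invention, not the authors' system:

```python
# Hypothetical regulated channel: send the untrusted PDF out, get pixels back.
# Endpoint and message shape are invented; only the pattern matters.
import requests

SANDBOX_URL = "https://sandbox.example.com/render"   # cloud VM instance

def render_remotely(pdf_path: str, page: int) -> bytes:
    """Delegate parsing/rendering to the in-nimbo sandbox; accept only
    a fixed-format raster image back, never executable content."""
    with open(pdf_path, "rb") as f:
        resp = requests.post(SANDBOX_URL,
                             files={"doc": f},
                             data={"page": page},
                             timeout=10)
    resp.raise_for_status()
    if resp.headers.get("Content-Type") != "image/png":
        raise ValueError("channel policy violation: unexpected content type")
    return resp.content    # safe-to-display pixels, not the hostile PDF
```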
One of the biggest challenges in mobile security is human behavior. The most secure password may be useless if it is sent as a text or in an email. The most secure network is only as secure as its most careless user. Thus, in the current project we sought to discover the conditions under which users of mobile devices are most likely to make security errors. This scaffolds a larger project in which we will develop automatic ways of detecting such environments and eventually support users during these times to encourage safe mobile behaviors.
Cloud computing is a distributed architecture with shared resources, software, and information. A great number of implementations of and research efforts on Intrusion Detection Systems (IDS) in grid and cloud environments exist; however, they are limited in addressing the requirements of an ideal intrusion detection system. Security issues in Cloud Computing (CC) have become a major concern to its users, availability being one of the key security issues. Distributed Denial of Service (DDoS) is one such security issue and poses a great threat to the availability of cloud services. The aim of this research is to evaluate the performance of IDS in CC when a DDoS attack is detected in a private cloud, named SaaSCloud. A model has been implemented on three virtual machines: a SaaSCloud Model, a DDoS Attack Model, and an IDS Server Model. Through this implementation, a Service Intrusion Detection System in Cloud Computing (SIDSCC) is proposed, investigated, and evaluated.
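A minimal flavor of the detection logic such an IDS might apply is threshold-based rate monitoring per source; the window length and threshold below are arbitrary illustrative values, not SIDSCC's actual configuration:

```python
# Toy rate-based DDoS detector: flag sources exceeding a per-window threshold.
from collections import defaultdict

WINDOW = 1.0        # seconds
THRESHOLD = 100     # requests per window per source (illustrative)

def detect(events):
    """events: iterable of (timestamp, src_ip) request records."""
    counts, window_start = defaultdict(int), None
    for ts, src in sorted(events):
        if window_start is None or ts - window_start >= WINDOW:
            counts, window_start = defaultdict(int), ts   # start a new window
        counts[src] += 1
        if counts[src] == THRESHOLD:
            yield (window_start, src)                     # alert once per window

events = [(i * 0.001, "10.0.0.9") for i in range(500)]    # synthetic flood
print(list(detect(events)))
```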