Biblio
Food safety policies aim to promote and develop feeding and nutrition in society. This paper presents a system dynamics model that studies the dynamic interaction between transport infrastructure and the food supply chain in the city of Bogotá. The results show that an adequate transport infrastructure is more effective at improving customer service in the food supply chain. The system dynamics model allows analysis of the behavior of transport infrastructure and of the supply chains for fruits and vegetables, groceries, meat, and dairy. The study goes some way towards enhancing our understanding of the interplay between food security, the food supply chain, and transport infrastructure.
Segmentation of land and water regions is necessary in many applications involving analysis of remote sensing imagery. Not only is manual segmentation of these regions prone to considerable subjective variability, but the large volume of imagery collected by modern platforms makes manual segmentation extremely tedious to perform, particularly in applications that require frequent re-measurement. This paper examines a robust, semi-automated approach that utilizes simple and efficient machine learning algorithms to perform supervised classification of multi-spectral image data into land and water regions. By combining the four wavelength bands widely available in imaging platforms such as IKONOS, QuickBird, and GeoEye-1 with basic texture metrics, high quality segmentation can be achieved. An efficient workflow was created by constructing a Graphical User Interface (GUI) to these machine learning algorithms.
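A minimal sketch of the kind of supervised pipeline described above, assuming four-band imagery held as a list of 2-D NumPy arrays, a local standard deviation filter as the texture metric, and a random forest classifier; the specific classifier, texture metric, and function names are illustrative assumptions rather than the paper's implementation.

    import numpy as np
    from scipy.ndimage import generic_filter
    from sklearn.ensemble import RandomForestClassifier

    def add_texture(bands):
        """Append a simple texture metric (local standard deviation) to each band."""
        texture = [generic_filter(b, np.std, size=5) for b in bands]
        return np.stack(list(bands) + texture, axis=-1)  # H x W x (2 * n_bands)

    def train_land_water(bands, labels):
        """Train on labeled pixels (labels: 0 = land, 1 = water, -1 = unlabeled)."""
        features = add_texture(bands)
        mask = labels >= 0
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(features[mask], labels[mask])
        return clf

    def segment(clf, bands):
        """Classify every pixel into land or water."""
        features = add_texture(bands)
        h, w, d = features.shape
        return clf.predict(features.reshape(-1, d)).reshape(h, w)

A GUI front end such as the one described would simply collect the labeled training pixels, call the training step, and display the resulting segmentation map.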
Most existing approaches focus on examining which values are dangerous for information flow within inter-suspicious modules of cloud applications (apps) on a host by using malware threat analysis, rather than on the risk posed by suspicious apps connected to the cloud computing server. Accordingly, this paper proposes a taint propagation analysis model incorporating a weighted spanning tree analysis scheme to track data with taint marking using several taint checking tools. In the proposed model, Android programs perform dynamic taint propagation to analyse the spread of, and risks posed by, suspicious apps connected to the cloud computing server. In determining the risk of taint propagation, risk and defence capability are computed for each taint path, assisting a defender in recognising the attack results of network threats caused by malware infection and in estimating the losses of the associated taint sources. Finally, a threat analysis of a typical cyber security attack is presented to demonstrate the proposed approach. Our approach verified the details of an attack sequence for malware infection by incorporating a finite state machine (FSM) to appropriately reflect real situations under various configuration settings and safeguard deployments. The experimental results showed that the threat analysis model allows a defender to convert the spread of taint propagation into loss and to practically estimate the risk of a specific threat by using behavioural analysis with real malware infections.
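As a rough illustration of weighted taint propagation in general (not the paper's weighted-spanning-tree scheme), the sketch below marks data flowing from a tainted source through a weighted module graph and accumulates a simple path risk score; the graph, edge weights, and scoring rule are assumptions made purely for exposition.

    from collections import deque

    def propagate_taint(edges, source):
        """Breadth-first taint propagation over a weighted module graph.

        edges: dict mapping module -> list of (successor, edge_weight in (0, 1]);
        returns a dict of tainted module -> accumulated path risk.
        """
        risk = {source: 1.0}
        queue = deque([source])
        while queue:
            node = queue.popleft()
            for succ, weight in edges.get(node, []):
                new_risk = risk[node] * weight
                if new_risk > risk.get(succ, 0.0):  # keep the riskiest path seen so far
                    risk[succ] = new_risk
                    queue.append(succ)
        return risk

    # Example: taint entering at a suspicious app and spreading towards a cloud server.
    modules = {
        "suspicious_app": [("local_storage", 0.8), ("network_api", 0.9)],
        "network_api": [("cloud_server", 0.7)],
    }
    print(propagate_taint(modules, "suspicious_app"))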
Intrusion Detection Systems (IDSs) are an important defense tool against the sophisticated and ever-growing network attacks. These systems need to be evaluated against high quality datasets for correctly assessing their usefulness and comparing their performance. We present an Intrusion Detection Dataset Toolkit (ID2T) for the creation of labeled datasets containing user defined synthetic attacks. The architecture of the toolkit is provided for examination, and an example of an injected attack in real network traffic is visualized and analyzed. We further discuss the toolkit's ability to create realistic synthetic attacks of high quality and low bias.
The perception of lack of control over resources deployed in the cloud may represent one of the critical factors in an organization's decision whether to cloudify its own services. Furthermore, in spite of the idea of offering security-as-a-service, the development of secure cloud applications requires security skills that can slow down cloud adoption for non-expert users. In recent years, the concept of Security Service Level Agreements (Security SLAs) has assumed a key role in the provisioning of cloud resources. This paper presents the SPECS framework, which enables the development of secure cloud applications covered by a Security SLA. The SPECS framework offers APIs to manage the whole Security SLA life cycle and provides all the functionality needed to automate the enforcement of proper security mechanisms and to monitor user-defined security features. The development process of SPECS applications offering security-enhanced services is illustrated, presenting as a real-world case study the provisioning of a secure web server.
That what you see is not necessarily believable is no longer a rare case in cyber security monitoring. Due to various camouflage tricks, such as packing or virtual private networks (VPNs), detecting advanced persistent threats (APTs) with only signature-based malware detection systems has become more and more intractable. On the other hand, by carefully modeling users' daily-routine behaviors, the probability that a given account generates certain operations can be estimated and used in anomaly detection. To the best of our knowledge, this project is the first to propose a behavioral analytic framework dedicated to analyzing Active Directory domain service logs and monitoring potential insider threats. Experiments on a real dataset not only show that the proposed idea explores a new feasible direction for cyber security monitoring, but also give a guideline on how to deploy this framework in various environments.
Selective encryption designates a technique that aims at scrambling a message's content while preserving its syntax. Such an approach allows encryption to be transparent to middle-boxes and/or end-user devices, and to fit easily within existing pipelines. In this paper, we propose to apply this property to a real-time diffusion, or broadcast, scenario over an RTP session. The main challenge of this problem is preserving the synchronization between encryption and decryption. Our solution is based on the Advanced Encryption Standard in counter mode, modified to fit our auto-synchronization requirement. Setting up the proposed synchronization scheme induces no latency and requires no additional bandwidth in the RTP session (no additional information is sent). Moreover, its parallel structure allows decryption to start on any given frame of the video while leaving a lot of room for further optimization.
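A minimal sketch of the underlying counter-mode idea, assuming the counter block is derived from the RTP frame index so that a receiver can resynchronize on any frame; the key and nonce handling and the frame-to-counter mapping shown here are illustrative assumptions, not the modified scheme from the paper.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    KEY = os.urandom(32)            # shared AES-256 key (assumed pre-distributed)
    SESSION_NONCE = os.urandom(4)   # shared per-session prefix of the counter block

    def frame_counter_block(frame_index: int) -> bytes:
        """4-byte session prefix || 8-byte frame index || 4-byte intra-frame block counter.
        The low 32 bits leave room for the per-block increments inside one frame
        without colliding with the next frame's counter space."""
        return SESSION_NONCE + frame_index.to_bytes(8, "big") + b"\x00" * 4

    def encrypt_frame(frame_index: int, payload: bytes) -> bytes:
        ctr = Cipher(algorithms.AES(KEY), modes.CTR(frame_counter_block(frame_index)))
        return ctr.encryptor().update(payload)

    def decrypt_frame(frame_index: int, ciphertext: bytes) -> bytes:
        ctr = Cipher(algorithms.AES(KEY), modes.CTR(frame_counter_block(frame_index)))
        return ctr.decryptor().update(ciphertext)

    # A receiver joining mid-stream can decrypt frame 1000 directly:
    ct = encrypt_frame(1000, b"frame payload bytes")
    assert decrypt_frame(1000, ct) == b"frame payload bytes"

Because each frame's keystream depends only on the shared key and the frame index, decryption of any frame is self-contained, which is what removes the need for extra synchronization data in the RTP session.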
During an advanced persistent threat (APT), an attacker group usually establishes more than one C&C server, and these C&C servers change their domain names and corresponding IP addresses over time to remain unseen by anti-virus software or intrusion prevention systems. For this reason, discovering and catching C&C sites has become a big challenge in information security. Based on our observations and deductions, a piece of malware tends to contain a fixed user agent string, and the connection behaviors generated by malware differ from those generated by a benign service or a normal user. This paper proposes a new method comprising filtering and clustering to detect C&C servers with a relatively high coverage rate. The experiments revealed that the proposed method can successfully detect C&C servers and can provide an important clue for detecting APTs.
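One plausible filtering heuristic in this spirit, shown only as a sketch: group outbound HTTP connections by user agent string and flag agents whose request intervals are unusually regular (beacon-like). The thresholds and the regularity test are our own assumptions; the paper's actual filtering and clustering steps are not reproduced here.

    from collections import defaultdict
    from statistics import pstdev

    def suspicious_user_agents(connections, min_hits=10, max_jitter=5.0):
        """connections: iterable of (timestamp_seconds, user_agent, destination).
        Returns {user_agent: set of contacted destinations} for flagged agents."""
        by_agent = defaultdict(list)
        for ts, agent, dest in connections:
            by_agent[agent].append((ts, dest))

        flagged = {}
        for agent, events in by_agent.items():
            events.sort()
            if len(events) < min_hits:
                continue
            gaps = [b[0] - a[0] for a, b in zip(events, events[1:])]
            if pstdev(gaps) <= max_jitter:  # near-constant interval => candidate C&C traffic
                flagged[agent] = {dest for _, dest in events}
        return flagged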
Salt-and-pepper noise is very common during transmission of images through a noisy channel or due to impairment in a camera's sensor module. For noise removal, methods with various two-stage cascade configurations have been proposed in the literature. These methods, which can remove low-density impulse noise, are not suited for high-density noise in terms of visible performance. We propose an efficient method for removal of high- as well as low-density impulse noise. Our approach is based on a novel extension of iterated conditional modes (ICM). It is a cascade configuration of two stages: noise detection and noise removal. The noise detection process is an iterative decision-based approach, while the noise removal process is based on iterative noisy-pixel estimation. Using the improved approach, images with up to 95% corruption have been recovered with good results, and images with 98% corruption have been recovered with quite satisfactory results. To benchmark image quality, we have considered various metrics, such as PSNR (Peak Signal to Noise Ratio), MSE (Mean Square Error), and SSIM (Structural Similarity Index Measure).
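To make the two-stage detect-then-estimate structure concrete, here is a heavily simplified toy stand-in (not the ICM-based extension described above): extreme-valued pixels are treated as noise candidates and iteratively replaced by the median of their non-noisy neighbors, and PSNR is used to benchmark the result.

    import numpy as np

    def remove_impulse_noise(img, iterations=3):
        """Toy two-stage impulse-noise filter: detection, then iterative estimation."""
        out = img.astype(np.float64)
        noisy = (img == 0) | (img == 255)          # stage 1: detection
        for _ in range(iterations):                # stage 2: iterative estimation
            padded = np.pad(out, 1, mode="edge")
            noisy_pad = np.pad(noisy, 1, mode="constant", constant_values=True)
            for y, x in zip(*np.nonzero(noisy)):
                window = padded[y:y + 3, x:x + 3]
                good = window[~noisy_pad[y:y + 3, x:x + 3]]
                if good.size:
                    out[y, x] = np.median(good)
                    noisy[y, x] = False
        return out.astype(img.dtype)

    def psnr(reference, restored):
        """Peak Signal to Noise Ratio for 8-bit images."""
        mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
        return np.inf if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)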
When making decisions under uncertainty, the optimal choices are often difficult to discern, especially if not enough information has been gathered. Two key questions in this regard relate to whether one should stop the information gathering process and commit to a decision (stopping criterion), and if not, what information to gather next (selection criterion). In this paper, we show that the recently introduced notion, Same-Decision Probability (SDP), can be useful as both a stopping and a selection criterion, as it can provide additional insight and allow for robust decision making in a variety of scenarios. This query has been shown to be highly intractable, being PP^PP-complete, and is exemplary of a class of queries which correspond to the computation of certain expectations. We propose the first exact algorithm for computing the SDP, and demonstrate its effectiveness on several real and synthetic networks. Finally, we present new complexity results, such as the complexity of computing the SDP on models with a Naive Bayes structure. Additionally, we prove that computing the non-myopic value of information is complete for the same complexity class as computing the SDP.
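For reference, the Same-Decision Probability of a threshold-based decision can be written as follows, in the standard formulation from the SDP literature (the notation here is ours): given evidence e, a decision d confirmed when its probability meets a threshold T, and unobserved variables H,

    \mathrm{SDP}(d, \mathbf{H} \mid \mathbf{e})
      = \sum_{\mathbf{h}} \big[\, \Pr(d \mid \mathbf{h}, \mathbf{e}) \ge T \,\big] \, \Pr(\mathbf{h} \mid \mathbf{e})

that is, the probability that observing H would leave the threshold-based decision unchanged, which is why it serves as both a stopping and a selection criterion.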
This paper is a proposal for a poster. In it we describe a medical device security approach that researchers at Fraunhofer used to analyze different kinds of medical devices for security vulnerabilities. These medical devices were provided to Fraunhofer by a medical device manufacturer whose name we cannot disclose due to non-disclosure agreements.
Low-latency anonymity systems such as Tor rely on intermediate relays to forward user traffic; these relays, however, are often unreliable, resulting in a degraded user experience. Worse yet, malicious relays may introduce deliberate failures in a strategic manner in order to increase their chance of compromising anonymity. In this paper we propose using a reputation metric that can profile the reliability of relays in an anonymity system based on users' past experience. The two main challenges in building a reputation-based system for an anonymity system are: first, malicious participants can strategically oscillate between good and malicious nature to evade detection, and second, an observed failure in an anonymous communication cannot be uniquely attributed to a single relay. Our proposed framework addresses the former challenge by using a proportional-integral-derivative (PID) controller-based reputation metric that ensures malicious relays adopting time-varying strategic behavior obtain low reputation scores over time, and the latter by introducing a filtering scheme based on the evaluated reputation score to effectively discard relays mounting attacks. We collect data from the live Tor network and perform simulations to validate the proposed reputation-based filtering scheme. We show that an attacker does not gain any significant benefit by performing deliberate failures in the presence of the proposed reputation framework.
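The following is a minimal sketch of what a PID-controller-based reputation update could look like, under our own assumptions about the error signal (the observed failure indicator minus a target failure rate) and the gains; it is not the paper's exact formulation.

    class PIDReputation:
        """Toy PID-style reputation tracker for a single relay."""

        def __init__(self, kp=1.0, ki=0.5, kd=0.25, target_failure_rate=0.05):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.target = target_failure_rate
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, failed: bool) -> float:
            """Return the current reputation score in [0, 1] (1 = fully trusted)."""
            error = (1.0 if failed else 0.0) - self.target
            self.integral += error                  # punishes sustained misbehavior
            derivative = error - self.prev_error    # reacts to sudden strategy changes
            self.prev_error = error
            penalty = self.kp * error + self.ki * self.integral + self.kd * derivative
            return max(0.0, min(1.0, 1.0 - penalty))

    # Relays whose score falls below a threshold would be filtered out of circuit selection.

The integral term is what makes oscillation between good and malicious behavior unprofitable: past failures keep depressing the score even during "good" phases.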
Applied Cyber-Physical Systems presents the latest methods and technologies in the area of cyber-physical systems, including medical and biological applications. Cyber-physical systems (CPS) integrate computing and communication capabilities by monitoring and controlling physical systems via embedded hardware and computers.
This book brings together unique contributions from renowned experts on cyber-physical systems research and education with applications. It also addresses the major challenges in CPS, and then provides a resolution with various diverse applications as examples.
Advanced-level students and researchers focused on computer science, engineering and biomedicine will find this to be a useful secondary text book or reference, as will professionals working in this field.
Security features are often hardwired into software applications, making it difficult to adapt security responses to reflect changes in runtime context and new attacks. In prior work, we proposed the idea of architecture-based self-protection as a way of separating adaptation logic from application logic and providing a global perspective for reasoning about security adaptations in the context of other business goals. In this paper, we present an approach, based on this idea, for combating denial-of-service (DoS) attacks. Our approach allows DoS-related tactics to be composed into more sophisticated mitigation strategies that encapsulate possible responses to a security problem. Then, utility-based reasoning can be used to consider different business contexts and qualities. We describe how this approach forms the underpinnings of a scientific approach to self-protection, allowing us to reason about how to make the best choice of mitigation at runtime. Moreover, we also show how formal analysis can be used to determine whether the mitigations cover the range of conditions the system is likely to encounter, and the effect of mitigations on other quality attributes of the system. We evaluate the approach using the Rainbow self-adaptive framework and show how Rainbow chooses DoS mitigation tactics that are sensitive to different business contexts.
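As a rough illustration of utility-based selection among mitigation tactics (our own simplification, not the Rainbow implementation), a strategy can be chosen by scoring each candidate tactic against weighted business-context qualities. The tactics, quality dimensions, scores, and weights below are illustrative assumptions.

    TACTICS = {
        "add_capacity":    {"availability": 0.9, "cost": 0.2, "user_annoyance": 0.9},
        "throttle_source": {"availability": 0.7, "cost": 0.8, "user_annoyance": 0.6},
        "add_captcha":     {"availability": 0.6, "cost": 0.9, "user_annoyance": 0.3},
        "blackhole_ip":    {"availability": 0.8, "cost": 0.9, "user_annoyance": 0.5},
    }

    def choose_tactic(context_weights):
        """Pick the tactic with the highest weighted utility for a business context."""
        def utility(scores):
            return sum(context_weights[q] * scores[q] for q in context_weights)
        return max(TACTICS, key=lambda t: utility(TACTICS[t]))

    # A cost-sensitive context vs. a customer-experience-focused one:
    print(choose_tactic({"availability": 0.5, "cost": 0.4, "user_annoyance": 0.1}))
    print(choose_tactic({"availability": 0.4, "cost": 0.1, "user_annoyance": 0.5}))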
Reliability and security tend to be treated separately because they appear orthogonal: reliability focuses on accidental failures, security on intentional attacks. Because of the apparent dissimilarity between the two, tools to detect and recover from different classes of failures and attacks are usually designed and implemented differently. So, integrating support for reliability and security in a single framework is a significant challenge.
Here, we discuss how to address this challenge in the context of cloud computing, for which reliability and security are growing concerns. Because cloud deployments usually consist of commodity hardware and software, efficient monitoring is key to achieving resiliency. Although reliability and security monitoring might use different types of analytics, the same sensing infrastructure can provide inputs to monitoring modules.
We split monitoring into two phases: logging and auditing. Logging captures data or events; it constitutes the framework’s core and is common to all monitors. Auditing analyzes data or events; it’s implemented and operated independently by each monitor. To support a range of auditing policies, logging must capture a complete view, including both actions and states of target systems. It must also provide useful, trustworthy information regarding the captured view.
We applied these principles when designing HyperTap, a hypervisor-level monitoring framework for virtual machines (VMs). Unlike most VM-monitoring techniques, HyperTap employs hardware architectural invariants (hardware invariants, for short) to establish the root of trust for logging. Hardware invariants are properties defined and enforced by a hardware platform (for example, the x86 instruction set architecture). Additionally, HyperTap supports continuous, event-driven VM monitoring, which enables both capturing the system state and responding rapidly to actions of interest.
We present an architecture for the Security Behavior Observatory (SBO), a client-server infrastructure designed to collect a wide array of data on user and computer behavior from hundreds of participants over several years. The SBO infrastructure had to be carefully designed to fulfill several requirements. First, the SBO must scale with the desired length, breadth, and depth of data collection. Second, we must take extraordinary care to ensure the security of the collected data, which will inevitably include intimate participant behavioral data. Third, the SBO must serve our research interests, which will inevitably change as collected data is analyzed and interpreted. This short paper summarizes some of our design and implementation benefits and discusses a few hurdles and trade-offs to consider when designing such a data collection system.
This work reports an efficient and compact FPGA processor for the SHA-256 algorithm. The novel processor architecture is based on a custom datapath that exploits the reuse of modules, having as its main component a 4-input Arithmetic Logic Unit not previously reported. This ALU is designed as a result of studying the types of operations in the SHA algorithm, their execution sequence, and the associated dataflow. The processor hardware architecture was modeled in VHDL and implemented in FPGAs. The results obtained from the implementation in a Virtex-5 device demonstrate that the proposed design uses fewer resources while achieving higher performance and efficiency, outperforming previous approaches in the literature focused on compact designs, saving around 60% of FPGA slices with increased throughput (Mbps) and efficiency (Mbps/Slice). The proposed SHA processor is well suited for applications like Wi-Fi, TMP (Trusted Mobile Platform), and MTM (Mobile Trusted Module), where the data transfer speed is around 50 Mbps.
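To give a sense of the operation mix that such a datapath must support, the following shows the core SHA-256 round functions in software form (standard FIPS 180-4 definitions; this is unrelated to the paper's VHDL design and is included only for illustration).

    # Core SHA-256 round operations (per FIPS 180-4): rotations, bitwise logic,
    # and multi-operand modular additions, which motivate a multi-input ALU.
    MASK = 0xFFFFFFFF

    def rotr(x, n):                      # 32-bit rotate right
        return ((x >> n) | (x << (32 - n))) & MASK

    def ch(e, f, g):                     # "choose": f where e is 1, g where e is 0
        return ((e & f) ^ (~e & g)) & MASK

    def maj(a, b, c):                    # bitwise majority of a, b, c
        return (a & b) ^ (a & c) ^ (b & c)

    def big_sigma0(a):
        return rotr(a, 2) ^ rotr(a, 13) ^ rotr(a, 22)

    def big_sigma1(e):
        return rotr(e, 6) ^ rotr(e, 11) ^ rotr(e, 25)

    def round_temps(a, b, c, e, f, g, h, k_t, w_t):
        """The two temporaries computed each round; each is a multi-operand
        modular addition."""
        t1 = (h + big_sigma1(e) + ch(e, f, g) + k_t + w_t) & MASK
        t2 = (big_sigma0(a) + maj(a, b, c)) & MASK
        return t1, t2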
40th-year commemorative issue
Static types may be used both by the language implementation and directly by the user as documentation. Though much existing work focuses primarily on the implications of static types for the semantics of programs, relatively little work considers the impact static types have on usability. Though the omission of static type information may decrease program length and thereby improve readability, it may also decrease readability because users must then frequently derive type information manually while reading programs. As type inference becomes more popular in languages in widespread use, it is important to consider whether the adoption of type inference may impact developer productivity.
Session management in distributed Internet services is traditionally based on username and password, explicit logouts, and user-session expiration mechanisms using classic timeouts. Emerging biometric solutions allow substituting username and password with biometric data during session establishment, but in such an approach a single verification is still deemed sufficient, and the identity of a user is considered immutable during the entire session. Additionally, the length of the session timeout may impact the usability of the service and consequent client satisfaction. This paper explores promising alternatives offered by applying biometrics to the management of sessions. A secure protocol is defined for perpetual authentication through continuous user verification. The protocol determines adaptive timeouts based on the quality, frequency, and type of biometric data transparently acquired from the user. The functional behavior of the protocol is illustrated through Matlab simulations, while model-based quantitative analysis is carried out to assess the protocol's ability to counter security attacks exercised by different kinds of attackers. Finally, the current prototype for PCs and Android smartphones is discussed.
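A minimal sketch of how an adaptive, continuously verified session could behave: a trust level decays over time and is refreshed by transparently acquired biometric samples, and the session expires when trust drops below a threshold. The decay rate, per-modality quality weights, and threshold below are illustrative assumptions, not the protocol's actual parameters.

    import time

    class ContinuousSession:
        """Toy continuous-authentication session with an adaptive timeout."""

        QUALITY_WEIGHT = {"face": 0.6, "keystroke": 0.3, "voice": 0.5}  # assumed

        def __init__(self, decay_per_second=0.01, threshold=0.3):
            self.trust = 1.0            # established at login with a strong verification
            self.decay = decay_per_second
            self.threshold = threshold
            self.last_update = time.monotonic()

        def _age(self):
            now = time.monotonic()
            self.trust = max(0.0, self.trust - self.decay * (now - self.last_update))
            self.last_update = now

        def biometric_sample(self, modality: str, match_score: float):
            """Fold a new transparent verification (match_score in [0, 1]) into trust."""
            self._age()
            gain = self.QUALITY_WEIGHT.get(modality, 0.2) * match_score
            self.trust = min(1.0, self.trust + gain)

        def is_alive(self) -> bool:
            """The session expires sooner when no fresh, high-quality samples arrive."""
            self._age()
            return self.trust >= self.threshold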
Current Trusted Platform Modules (TPMs) are ill-suited for cross-device scenarios in trusted mobile applications because they hinder the seamless sharing of data across multiple devices. This paper presents cTPM, an extension of the TPM's design that adds an additional root key to the TPM and shares that root key with the cloud. As a result, the cloud can create and share TPM-protected keys and data across multiple devices owned by one user. Further, the additional key lets the cTPM allocate cloud-backed remote storage so that each TPM can benefit from a trusted real-time clock and high-performance, non-volatile storage.
This paper shows that cTPM is practical, versatile, and easily applicable to trusted mobile applications. Our simple change to the TPM specification is viable because its fundamental concepts - a primary root key and off-chip, NV storage - are already found in the current specification, TPM 2.0. By avoiding a clean-slate redesign, we sidestep the difficult challenge of re-verifying the security properties of a new TPM design. We demonstrate cTPM's versatility with two case studies: extending Pasture with additional functionality, and reimplementing TrInc without the need for extra hardware.