Biblio
In the current era of fast, internet-based applications, large volumes of relational data are being outsourced for business purposes. Ownership and digital rights protection has therefore become one of the greatest challenges and most critical issues. This paper presents a novel fingerprinting technique to protect the ownership rights of non-numeric digital data on the basis of pattern generation and row association schemes. First, a fingerprint sequence is formulated using a secret key and the buyer's unique ID. Rows are then selected by applying the Fibonacci series to chunks of this sequence; the selected rows are the candidates for fingerprinting. The primary key of each selected row is protected using RSA encryption, after which a pattern is designed by randomly choosing the values of different attributes of the dataset. Encrypting the primary key establishes an association between the original and fake patterns, making fingerprint detection easier. The fingerprint detection algorithm first finds the fake rows and then extracts the fingerprint sequence from the fake attributes, thereby identifying the traitor. A key feature of the proposed approach is that it overcomes major weaknesses of previously proposed fingerprinting techniques, such as limited error tolerance, integrity, and accuracy. The results show that the technique is efficient and robust against several malicious attacks.
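To make the described pipeline more concrete, a minimal sketch of the sequence-formulation and row-selection steps might look as follows; the keyed-hash construction, chunk size, and Fibonacci-to-row mapping are all assumptions for illustration, not the paper's exact design:

```python
import hashlib

def fingerprint_sequence(secret_key: str, buyer_id: str) -> str:
    """Derive a bit sequence from the secret key and buyer ID (assumed keyed hash)."""
    digest = hashlib.sha256((secret_key + buyer_id).encode()).hexdigest()
    return bin(int(digest, 16))[2:]

def fibonacci(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def select_candidate_rows(bits: str, num_rows: int, chunk_size: int = 8) -> list[int]:
    """Map each chunk of the fingerprint sequence to a row index via the Fibonacci series."""
    rows = []
    for i in range(0, len(bits) - chunk_size + 1, chunk_size):
        chunk_value = int(bits[i:i + chunk_size], 2)
        rows.append(fibonacci(chunk_value % 20) % num_rows)  # assumed mapping, capped for small numbers
    return rows
```

Because the sequence is derived deterministically from the secret key and buyer ID, the detector can regenerate the same candidate rows when tracing a leaked copy.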
A Mobile Ad hoc Network (MANET) is one of the most popular dynamic-topology, reconfigurable local wireless network standards. Distributed Denial of Service (DDoS) is one of the most challenging threats in such a network. A flooding attack is one form of DDoS attack, in which certain nodes misuse the allocated channel by flooding their neighbors with packets at a very high rate, quickly draining the neighbors' energy and denying legitimate nodes the routing and transmission services those neighbors provide. In this work we propose a novel link-layer-assessment-based flooding attack detection and prevention method. The MAC layer of each node analyzes signal properties, which are incorporated into the routing table through a cross-layer MAC/network interface. Once a node is marked as a flooding node, it is blacklisted in the routing table, and the decision is communicated to the MAC layer through the network/MAC cross-layer interface. Results show that the proposed technique detects flooding attacks more accurately than current state-of-the-art statistical-analysis-based detection at the network layer.
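As a rough illustration of the blacklisting workflow only (a rate-based stand-in, not the paper's actual MAC-layer signal analysis, which the abstract does not specify), a node could monitor per-neighbor packet rates and blacklist offenders:

```python
import time
from collections import defaultdict

RATE_THRESHOLD = 100.0  # packets/sec; assumed value for illustration

class FloodingDetector:
    """Toy per-neighbor rate monitor standing in for the link-layer assessment."""
    def __init__(self):
        self.counts = defaultdict(int)
        self.window_start = time.monotonic()
        self.blacklist = set()

    def on_packet(self, neighbor_id: str):
        self.counts[neighbor_id] += 1
        elapsed = time.monotonic() - self.window_start
        if elapsed >= 1.0:  # evaluate once per observation window
            for node, count in self.counts.items():
                if count / elapsed > RATE_THRESHOLD:
                    # In the paper's design this decision would be pushed into the
                    # routing table via the cross-layer MAC/network interface.
                    self.blacklist.add(node)
            self.counts.clear()
            self.window_start = time.monotonic()
```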
In recommender systems based on low-rank factorization of a partially observed user-item matrix, a common phenomenon that plagues many otherwise effective models is the interleaving of good and spurious recommendations in the top-K results. A single spurious recommendation can dramatically impact the perceived quality of a recommender system. Spurious recommendations do not result in serendipitous discoveries but rather cognitive dissonance. In this work, we investigate folding, a major contributing factor to spurious recommendations. Folding refers to the unintentional overlap of disparate groups of users and items in the low-rank embedding vector space, induced by improper handling of missing data. We formally define a metric that quantifies the severity of folding in a trained system, to assist in diagnosing its potential to make inappropriate recommendations. The folding metric complements existing information retrieval metrics that focus on the number of good recommendations and their ranks but ignore the impact of undesired recommendations. We motivate the folding metric definition on synthetic data and evaluate its effectiveness on both synthetic and real-world datasets. In studying the relationship between the folding metric and other characteristics of recommender systems, we observe that optimizing for goodness metrics can lead to high folding and thus more spurious recommendations.
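A toy proxy for such a metric, on synthetic data where the disjoint user and item groups are known by construction, could measure how often a user's top-K list is invaded by items from the other group; this is an illustrative stand-in, not the paper's formal definition:

```python
import numpy as np

def folding_proxy(user_emb, item_emb, user_group, item_group, k=10):
    """Toy folding score: fraction of each user's top-k recommendations drawn
    from an item group the user's group never interacts with (arrays assumed:
    embeddings of shape (n, d) and integer group labels per user/item)."""
    scores = user_emb @ item_emb.T                 # (num_users, num_items)
    topk = np.argsort(-scores, axis=1)[:, :k]      # top-k item indices per user
    mismatched = user_group[:, None] != item_group[topk]
    return float(mismatched.mean())                # 0 = no folding observed
```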
Two known shortcomings of standard techniques in formal verification are the limited capability to provide system-level assertions, and the scalability to large, complex models, such as those needed in Cyber-Physical Systems (CPS) applications. Leveraging data, which nowadays is becoming ever more accessible, has the potential to mitigate such limitations. However, purely data-driven approaches lack the formal proofs that are needed for modern safety-critical systems. This contribution presents a research initiative that addresses these shortcomings by bringing model-based techniques and data-driven methods together, which can help push the envelope of existing algorithms and tools in formal verification and thus expand their applicability to complex engineering systems, such as CPS. In the first part of the contribution, we discuss a new formal, measurement-driven, and model-based automated technique for the quantitative verification of physical systems with partly unknown dynamics. We formulate this setup as a data-driven Bayesian inference problem, formally embedded within a quantitative, model-based verification procedure. We argue that the approach can be applied to complex physical systems that are key for CPS applications, dealing with spatially continuous variables, evolving under complex dynamics, driven by external inputs, and accessed under noisy measurements. In the second part of the contribution, we concentrate on systems represented by models that evolve under probabilistic and heterogeneous (continuous/discrete, that is, "hybrid", as well as nonlinear) dynamics. Such stochastic hybrid models (also known as SHS) are a natural mathematical framework for CPS. With focus on model-based verification procedures, we provide algorithms for quantitative model checking of temporal specifications on SHS with formal guarantees. This is attained via the development of formal abstraction techniques that are based on quantitative approximations. Theory is complemented by algorithms, all packaged in software tools that are available to users, and which are applied here in the domain of Smart Energy.
We present a Network Address Translator (NAT) written in C and proven to be semantically correct according to RFC 3022, as well as crash-free and memory-safe. There is a great deal of recent work on network verification, but it mostly assumes models of network functions and proves properties specific to network configuration, such as reachability and absence of loops. Our proof applies directly to the C code of a network function, and it demonstrates the absence of implementation bugs. Prior work argued that this is not feasible (i.e., that verifying a real, stateful network function written in C does not scale), but we demonstrate otherwise: NAT is one of the most popular network functions and maintains per-flow state that needs to be properly updated and expired, which is a typical source of verification challenges. We tackle the scalability challenge with a new combination of symbolic execution and proof checking using separation logic; this combination matches well the typical structure of a network function. We then demonstrate that formally proven correctness in this case does not come at the cost of performance. The NAT code, proof toolchain, and proofs are available at [58].
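The per-flow state logic at the heart of such a NAT follows an allocate/refresh/expire pattern. The following Python toy mirrors only that pattern (the verified artifact itself is C; the timeout and port range here are assumed values) to show the kind of state manipulation the proof must cover:

```python
import time

EXPIRY_SECONDS = 60.0  # assumed flow timeout

class NatTable:
    """Toy per-flow NAT state with expiry; an illustration of the stateful
    logic a verified NAT must get right, not the verified code itself."""
    def __init__(self, external_ip: str, port_range=range(1024, 65536)):
        self.external_ip = external_ip
        self.free_ports = list(port_range)
        self.flows = {}  # (src_ip, src_port, dst_ip, dst_port) -> [ext_port, last_seen]

    def translate(self, flow):
        self._expire()
        if flow not in self.flows:
            if not self.free_ports:
                raise RuntimeError("port pool exhausted")
            self.flows[flow] = [self.free_ports.pop(), time.monotonic()]
        entry = self.flows[flow]
        entry[1] = time.monotonic()  # refresh the flow on every packet
        return self.external_ip, entry[0]

    def _expire(self):
        now = time.monotonic()
        for flow, (port, last_seen) in list(self.flows.items()):
            if now - last_seen > EXPIRY_SECONDS:
                self.free_ports.append(port)  # return the port to the pool
                del self.flows[flow]
```

Subtle bugs here (forgetting to return an expired port, refreshing before expiring) are exactly the kind of per-flow state errors the paper's symbolic-execution-plus-separation-logic approach is designed to rule out.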
With the exponential growth of Ubiquitous Computing, multiple technologies have gained prominence. One of them is the technology of Wireless Sensor Networks (WSNs). Increasingly used in fields such as smart homes and e-health, sensors can be said to have a well-established role in the current scenario. These sensors, however, have some shortcomings: limited resources, energy, and computing power are points of concern. Beyond these, there is also concern about the vulnerability of these devices, both physical and logical. To eliminate, or at least mitigate, these threats, it is necessary to create layers of protection. One of these layers is formed by Intrusion Detection Systems (IDSs). However, sensors have limited computational capacity, and the development of IDSs for these devices must take this constraint into account. Other important requirements for an Intrusion Detection System are flexibility, efficiency, and the ability to adapt to new situations; intelligent agents are a tool that enables such capabilities. With this in mind, this work describes the proposal of a framework for intrusion detection in WSNs based on intelligent agents.
It is increasingly common to outsource data storage to untrusted, third party (e.g. cloud) servers. However, in such settings, low-level online reference monitors may not be appropriate for enforcing read access, and thus cryptographic enforcement schemes (CESs) may be required. Much of the research on cryptographic access control has focused on the use of specific primitives and, primarily, on how to generate appropriate keys and fails to model the access control system as a whole. Recent work in the context of role-based access control has shown a gap between theoretical policy specification and computationally secure implementations of access control policies, potentially leading to insecure implementations. Without a formal model, it is hard to (i) reason about the correctness and security of a CES, and (ii) show that the security properties of a particular cryptographic primitive are sufficient to guarantee security of the CES as a whole. In this paper, we provide a rigorous definitional framework for a CES that enforces read-only information flow policies (which encompass many practical forms of access control, including role-based policies). This framework (i) provides a tool by which instantiations of CESs can be proven correct and secure, (ii) is independent of any particular cryptographic primitives used to instantiate a CES, and (iii) helps to identify the limitations of current primitives (e.g. key assignment schemes) as components of a CES.
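For intuition, one classic primitive that such a framework abstracts over is an iterated-hash key assignment scheme for a totally ordered information flow policy; the minimal sketch below is an assumed, textbook-style construction, not the paper's own:

```python
import hashlib

def derive_chain_keys(top_key: bytes, levels: list[str]) -> dict[str, bytes]:
    """Iterated-hash key assignment for a totally ordered policy, with levels
    listed from highest to lowest (e.g. top-secret > secret > public).
    A subject holding the key for one level can locally derive the keys for
    all lower levels, enforcing read-down access without an online monitor."""
    keys, k = {}, top_key
    for level in levels:
        keys[level] = k
        k = hashlib.sha256(k).digest()  # one-way step: lower keys reveal nothing above
    return keys
```

Proving that such a construction is not merely correct (right keys derivable) but also secure as part of the whole enforcement scheme is precisely the gap the paper's definitional framework targets.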
Collecting and analyzing personal data is important in modern information applications. Though the privacy of data providers should be protected, some adversarial users may behave badly under circumstances where they are not identified. However, the privacy of honest users should not be infringed. Thus, detecting anomalies without revealing normal users' identities is quite important for operating information systems using personal data. Though various methods of statistics and machine learning have been developed for detecting anomalies, it is difficult to know in advance what anomalies will arise. Thus, it would be useful to provide a "general" framework that can employ any anomaly detection method regardless of the type of data and the nature of the abnormality. In this paper, we propose a privacy preserving anomaly detection framework that allows an authority to detect adversarial users while other honest users are kept anonymous. By using cryptographic techniques, group signatures with message-dependent opening (GS-MDO) and public key encryption with non-interactive opening (PKENO), we provide a correspondence table that links a user and data in a secure way, and we can employ any anonymization technique and any anomaly detection method. It is particularly worth noting that no big brother exists, meaning that no single entity can identify users, while bad behaviors are always traceable. We also show the result of implementing our framework. Briefly, the overhead of our framework is on the order of dozens of milliseconds.
Information security policies are not easy to create unless organizations explicitly recognize the various steps required in the development process of an information security policy, especially in institutions of higher education that use enormous amounts of IT. An improper development process, or security policy content copied from another organization, may also fail to do an effective job. Even if the replicated policy is properly developed, referenced, cited in laws or regulations, and interpreted correctly, it may still fail to address issues such as non-compliance with applicable rules and regulations. A generic framework is proposed to improve and establish the development process of security policies in institutions of higher education. The content analysis and cross-case analysis methods were used in this study in order to gain a thorough understanding of the information security policy development process in institutions of higher education.
Much research on the security of cyber-physical systems focuses on threat models in which an attacker can spoof sensor readings by compromising the communication channel; little attention is given to attacks on physical components. In this paper, a method to detect potential attacks on physical components in a Cyber-Physical System (CPS) is proposed. Physical attacks are detected through a comparison of the noise pattern in sensor measurements against a reference noise pattern. If an adversary has physically modified or replaced a sensor, the proposed method issues an alert indicating that the sensor is probably compromised or defective. The reference noise pattern is established from sensor data using a deterministic model and is referred to as the fingerprint of the corresponding sensor. The fingerprint so derived is used as a reference to identify measured data during the operation of a CPS. Extensive experimentation with ultrasonic level sensors in a realistic water treatment testbed points to the effectiveness of the proposed fingerprinting method in detecting physical attacks.
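A simplified sketch of the fingerprint-and-compare idea follows; the choice of noise statistics, the detrending step, and the threshold are assumptions for illustration, whereas the paper derives its reference pattern from a deterministic model:

```python
import numpy as np

def noise_fingerprint(readings: np.ndarray) -> np.ndarray:
    """Assumed fingerprint: simple statistics of the noise, taken as the
    residual after smoothing out the underlying physical signal."""
    trend = np.convolve(readings, np.ones(5) / 5, mode="same")
    noise = readings - trend
    return np.array([noise.mean(), noise.std(), np.abs(noise).max()])

def is_compromised(reference: np.ndarray, observed: np.ndarray, tol: float = 3.0) -> bool:
    """Alert when the observed noise fingerprint deviates from the reference
    beyond an assumed tolerance (a replaced sensor exhibits different noise)."""
    return bool(np.linalg.norm(noise_fingerprint(observed) - reference) > tol)
```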
Reliable operation of electrical power systems in the presence of multiple critical N − k contingencies is an important challenge for the system operators. Identifying all the possible N − k critical contingencies to design effective mitigation strategies is computationally infeasible due to the combinatorial explosion of the search space. This paper describes two heuristic algorithms based on the iterative pruning of the candidate contingency set to effectively and efficiently identify all the critical N − k contingencies resulting in system failure. These algorithms are applied to the standard IEEE-14 bus system, IEEE-39 bus system, and IEEE-57 bus system to identify multiple critical N − k contingencies. The algorithms are able to capture all the possible critical N − k contingencies (where 1 ≤ k ≤ 9) without missing any dangerous contingency.
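In pseudocode terms, the iterative pruning over the contingency search space might be organized as below; the failure check and the pruning heuristic are left as assumed callbacks, since the abstract does not specify them:

```python
def critical_contingencies(components, k_max, causes_failure, is_promising):
    """Enumerate critical N-k contingencies with iterative pruning (assumed
    interfaces): causes_failure(subset) runs a simulated system check with the
    subset of components removed; is_promising(subset) is the heuristic that
    decides whether any superset is still worth exploring."""
    critical, frontier = [], [()]
    for _ in range(k_max):
        next_frontier = []
        for base in frontier:
            start = components.index(base[-1]) + 1 if base else 0
            for comp in components[start:]:        # extend in fixed order: no duplicates
                subset = base + (comp,)
                if causes_failure(subset):
                    critical.append(subset)        # record a critical contingency
                elif is_promising(subset):
                    next_frontier.append(subset)   # keep this branch; prune the rest
        frontier = next_frontier
    return critical
```

Pruning non-promising branches is what keeps the search tractable despite the combinatorial explosion, at the risk (which the paper's evaluation addresses) of missing a dangerous contingency if the heuristic prunes too aggressively.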
Our operational context is a task-oriented dialog system where no single module satisfactorily addresses the range of conversational queries from humans. Such systems must be equipped with a range of technologies to address semantic, factual, task-oriented, and open-domain conversations using rule-based, semantic-web, traditional machine learning, and deep learning techniques. This raises two key challenges. First, the modules need to be managed and selected appropriately. Second, the complexity of troubleshooting such systems is high. We address these challenges with a mixed-initiative model that controls conversational logic through hierarchical classification. We also developed an interface to increase interpretability for operators and to aggregate module performance.
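The module-selection logic implied by hierarchical classification can be sketched as a two-level dispatch; the classifier and module interfaces here are assumed for illustration, not the authors' implementation:

```python
def route_query(query: str, top_classifier, sub_classifiers, modules):
    """Two-level routing: a top-level classifier picks a conversation family
    (e.g. task-oriented vs. factual vs. open-domain), then a family-specific
    classifier picks the concrete module that answers the query."""
    family = top_classifier.predict(query)            # coarse decision
    module_name = sub_classifiers[family].predict(query)  # fine decision
    return modules[module_name].respond(query)
```

Keeping the decision hierarchical also aids troubleshooting: an operator can see at which level a misrouted query went wrong.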
User attribution based on inherent human dynamics and preference is one area of research capable of elucidating and capturing human dynamics on the Internet. Prior work on user attribution concentrated on behavioral biometrics and 1-to-1 user identification, without considering individual preference and inherent human temporal tendencies, which can provide a discriminatory baseline for online users as well as a higher-level classification framework for novel user attribution. To address these limitations, this study developed a temporal model comprising the human Polyphasia tendency, based on the Polychronic-Monochronic tendency scale measurement instrument, and the extraction of unique human-centric features from the server-side network traffic of 48 active users. Several machine-learning algorithms were applied to observe distinct patterns among the classes of Polyphasia tendency, and a logistic model tree was observed to provide the highest classification accuracy for the 1-to-N user attribution process. The study further developed a high-level attribution model for the higher-level user attribution process. The results of this study are relevant to online profiling, forensic identification and profiling, e-learning profiling, and social network profiling.
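A minimal sketch of the classification experiment might look as follows, using a plain decision tree as a stand-in since common Python libraries do not ship a logistic model tree; feature extraction and labels are assumed inputs:

```python
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def evaluate_attribution(X, y):
    """X: human-centric features extracted from server-side traffic;
    y: Polyphasia tendency class per sample. Returns mean cross-validated
    accuracy for the 1-to-N attribution task (tree depth is an assumed choice)."""
    clf = DecisionTreeClassifier(max_depth=5, random_state=0)
    return cross_val_score(clf, X, y, cv=5).mean()
```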
As the most successful cryptocurrency to date, Bitcoin constitutes a target of choice for attackers. While many attack vectors have already been uncovered, one important vector has been left out: attacking the currency via the Internet routing infrastructure itself. Indeed, by manipulating routing advertisements (BGP hijacks) or by naturally intercepting traffic, Autonomous Systems (ASes) can intercept and manipulate a large fraction of Bitcoin traffic. This paper presents the first taxonomy of routing attacks and their impact on Bitcoin, considering both small-scale attacks, targeting individual nodes, and large-scale attacks, targeting the network as a whole. While challenging, we show that two key properties make routing attacks practical: (i) the efficiency of routing manipulation; and (ii) the significant centralization of Bitcoin in terms of mining and routing. Specifically, we find that any network attacker can hijack a few (<100) BGP prefixes to isolate 50% of the mining power, even when considering that mining pools are heavily multi-homed. We also show that on-path network attackers can considerably slow down block propagation by interfering with a few key Bitcoin messages. We demonstrate the feasibility of each attack against the deployed Bitcoin software. We also quantify their effectiveness on the current Bitcoin topology using data collected from a Bitcoin supernode combined with BGP routing data. The potential damage to Bitcoin is worrying. By isolating parts of the network or delaying block propagation, attackers can cause a significant amount of mining power to be wasted, leading to revenue losses and enabling a wide range of exploits such as double spending. To prevent such effects in practice, we provide both short and long-term countermeasures, some of which can be deployed immediately.
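The centralization finding can be illustrated with a greedy selection over an assumed prefix-to-mining-power map; real coverage estimation must handle multi-homing and overlapping prefixes, which this toy deliberately ignores:

```python
def prefixes_to_isolate(prefix_power: dict[str, float], target: float = 0.5) -> list[str]:
    """Greedy illustration: pick the highest-power BGP prefixes first until the
    target fraction of mining power is covered. prefix_power maps a prefix to
    the fraction of mining power reachable only through it (assumed, disjoint)."""
    chosen, covered = [], 0.0
    for prefix, power in sorted(prefix_power.items(), key=lambda kv: -kv[1]):
        if covered >= target:
            break
        chosen.append(prefix)
        covered += power
    return chosen
```

When mining power is concentrated behind few prefixes, as the paper measures, the returned list stays short, which is what makes the hijack practical.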
Improving e-government services by using data more effectively is a major focus globally. It requires Public Administrations to be transparent, accountable and provide trustworthy services that improve citizen confidence. However, despite all the technological advances in developing such services and analysing security and privacy concerns, the literature does not provide evidence of frameworks and platforms that enable privacy analysis from multiple perspectives and take into account citizens' needs with regard to transparency and the usage of citizens' information. This paper presents the VisiOn (Visual Privacy Management in User Centric Open Requirements) platform, an outcome of a H2020 European Project. Our objective is to enable Public Administrations to analyse privacy and security from different perspectives, including requirements, threats, trust and law compliance. Finally, our platform-supported approach introduces the concept of Privacy Level Agreement (PLA), which allows Public Administrations to customise their privacy policies based on the privacy preferences of each citizen.
Distributed Denial of Service (DDoS) is a sophisticated cyber-attack due to its variety of types and techniques. The traditional mitigation method for this attack is to deploy dedicated security appliances such as firewalls, load balancers, etc. However, due to the limited capacity of the hardware and the potentially high volume of DDoS traffic, such appliances may not be able to defend against all attacks. Therefore, cloud-based DDoS protection services were introduced to allow organizations to redirect their traffic to scrubbing centers in the cloud for filtering. This solution has some drawbacks, such as privacy violation and latency. More recently, Network Functions Virtualization (NFV) and edge computing have been proposed as new networking service models. In this paper, we design a framework that leverages NFV and edge computing for DDoS mitigation through a two-stage process.
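The two-stage process could be organized roughly as follows; the edge rules and the scrubbing function are assumed interfaces, since the abstract does not fix them:

```python
def two_stage_filter(packets, edge_rules, scrubbing_vnf):
    """Sketch of an assumed two-stage pipeline: stage one applies cheap
    signature/rate rules at the edge to drop obvious attack traffic close to
    its source; stage two forwards the remainder to a virtualized scrubbing
    function (an NFV instance) for deeper, stateful analysis."""
    remaining = []
    for pkt in packets:
        if any(rule(pkt) for rule in edge_rules):  # stage 1: drop at the edge
            continue
        remaining.append(pkt)
    return scrubbing_vnf(remaining)                # stage 2: NFV-based scrubbing
```

Filtering at the edge keeps most attack volume off the core network and out of third-party scrubbing centers, which is how the design addresses both the latency and the privacy drawbacks noted above.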
Homomorphic signatures can provide a credential that a result was indeed computed with a given function on a data set by an untrusted third party, such as a cloud server, when the input data are stored with their signatures beforehand. Boneh and Freeman (EUROCRYPT 2011) proposed a homomorphic signature scheme for polynomial functions of any degree; however, the scheme is not based on the standard short integer solution (SIS) problem as its security assumption. In this paper, we show a homomorphic signature scheme for quadratic polynomial functions whose security assumption is based on standard SIS problems. Our scheme constructs the signatures of multiplications as tensor products of the original signature vectors of the input data, so that homomorphism holds. Moreover, the security of our scheme reduces to the hardness of SIS problems with respect to two moduli such that one modulus is a power of the other. We show the reduction by constructing, from any forger of our scheme, solvers of the SIS problem with respect to either of the moduli.
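Schematically, the multiplicative homomorphism rests on the mixed-product property of the Kronecker product; the following simplified relations (omitting the shortness conditions on signatures and the paper's two-moduli structure, and assuming a generic linear verification relation with matrix $A$ and hash $H$) convey the idea:

```latex
% Schematic view of the multiplicative step; A and H are assumed generic
% forms of the verification key and message hash, not the paper's exact
% construction, and modulus details are suppressed.
\[
  A\,\sigma_i = H(m_i) \pmod{q}, \qquad i = 1, 2,
\]
\[
  (A \otimes A)\,(\sigma_1 \otimes \sigma_2)
    = (A\,\sigma_1) \otimes (A\,\sigma_2)
    = H(m_1) \otimes H(m_2) \pmod{q},
\]
% so the tensor product of the two signature vectors verifies, under the
% lifted matrix A (x) A, as a signature for the product of the messages,
% by the mixed-product property of the Kronecker product.
```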
The conducted emission of motors is a domain of interest for EMC, as it may introduce disturbances into the systems in which they are integrated. Nevertheless, few publications deal with the susceptibility of motors, and especially servomotors, even though these devices are increasingly used in automated production lines as well as in robotics. Recent papers have addressed the possibility of compromising such systems through cyber-attacks. One could imagine the use of smart intentional electromagnetic interference to modify their behavior or damage them, leading to the modification of the industrial process. This paper aims to identify the disturbances that may affect the behavior of a Commercial Off-The-Shelf servomotor when exposed to an electromagnetic field, and the criticality of the effects with regard to its application. Experiments have shown that a train of radio-frequency pulses may induce an erroneous reading of the servomotor's position value and modify the movement of the motor's axis in an unpredictable way.



