Biblio
Transmission-line monitoring systems produce a large amount of data, which hinders fault diagnosis. For this reason, approaches that can acquire and automatically interpret the information coming from line monitoring are needed. Furthermore, human errors stemming from operator-dependent real-time decisions need to be reduced. In this paper a multiple-fault diagnosis method for determining transmission lines' operating conditions is proposed. Different scenarios, including contamination of insulator chains with different types and concentrations of pollutants, were modeled by equivalent circuits. Their performance was characterized by leakage current (LC) measurements and related to specific fault modes. A feature-extraction algorithm relying on the difference between normal and faulty conditions was used to define qualitative trends for the diagnosis of the various fault modes.
Modern detection systems use sensor outputs available in the deployment environment to probabilistically identify attacks. These systems are trained on past or synthetic feature vectors to create a model of anomalous or normal behavior. Thereafter, run-time collected sensor outputs are compared to the model to identify attacks (or the lack of attack). While this approach to detection has proven effective in many environments, it is limited to training on only those features that can be reliably collected at detection time. Hence, such systems fail to leverage the often vast amount of ancillary information available from past forensic analysis and post-mortem data. In short, detection systems do not train on (and thus do not learn from) features that are unavailable or too costly to collect at run-time. Recent work proposed an alternate model construction approach that integrates forensic "privileged" information (features reliably available at training time, but not at run-time) to improve the accuracy and resilience of detection systems. In this paper, we further evaluate two of the proposed techniques for model training with privileged information: knowledge transfer and model influence. We explore the cultivation of privileged features, the efficiency of those processes, and their influence on detection accuracy. We observe that the improved integration of privileged features makes the resulting detection models more accurate. Our evaluation shows that use of privileged information leads to up to an 8.2% relative decrease in detection error for fast-flux bot detection over a system with no privileged information, and 5.5% for malware classification.
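A minimal sketch of the knowledge-transfer idea, assuming a feature-mapping variant: at training time each privileged (forensic-only) feature is regressed from the standard run-time features, and at detection time the detector consumes the estimates in place of the missing features. All names and the synthetic data below are illustrative, not the paper's pipeline.

    import numpy as np
    from sklearn.linear_model import LogisticRegression, Ridge

    rng = np.random.default_rng(0)
    n = 2000
    X_std = rng.normal(size=(n, 5))                       # features observable at run-time
    X_priv = X_std @ rng.normal(size=(5, 2)) + 0.1 * rng.normal(size=(n, 2))  # forensic-only (synthetic)
    y = (X_priv[:, 0] + X_std[:, 1] > 0).astype(int)      # synthetic attack labels

    # Training time: learn to estimate each privileged feature from the standard ones.
    mappers = [Ridge().fit(X_std, X_priv[:, j]) for j in range(X_priv.shape[1])]
    X_priv_hat = np.column_stack([m.predict(X_std) for m in mappers])
    clf = LogisticRegression().fit(np.hstack([X_std, X_priv_hat]), y)

    # Detection time: only standard features are observed; privileged ones are estimated.
    x_new = rng.normal(size=(1, 5))
    x_priv_hat = np.column_stack([m.predict(x_new) for m in mappers])
    print(clf.predict(np.hstack([x_new, x_priv_hat])))    # 1 = attack, 0 = benign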
NoSQL databases have gained a lot of popularity over the last few years. They are now used in many new system implementations that work with vast amounts of data. This data will typically also include sensitive information that needs to be secured. NoSQL databases also underlie a number of cloud implementations which are increasingly being used to store sensitive information by various organisations. This has made NoSQL databases a new target for hackers and other state-sponsored actors. Forensic examinations of compromised systems will need to be conducted to determine what exactly transpired and who was responsible. This paper examines specifically whether NoSQL databases have security features that leave relevant traces, so that accurate forensic attribution can be conducted. The seeming lack of default security measures such as access control and logging has prompted this examination. A survey of the top-ranked NoSQL databases was conducted to establish what authentication and authorisation features are available. Additionally, the provided logging mechanisms were examined, since access control without any auditing would do little to aid forensic attribution. Some of the surveyed NoSQL databases do not provide adequate access control mechanisms or logging features that leave relevant traces, so forensic attribution cannot be conducted using those databases alone. The other surveyed NoSQL databases do provide adequate mechanisms and logging traces for forensic attribution, but they are not enabled or configured by default. This means that in many cases they might not be available, leading to insufficient information to perform accurate forensic attribution even on those databases.
Investigations on the charge of possessing child pornography usually require manual forensic image inspection in order to collect evidence. When storage devices are confiscated, law enforcement authorities are hence often faced with massive image datasets which have to be screened within a limited time frame. As a human investigator's time and ability to concentrate are highly limited, we believe that intelligent algorithms can effectively assist the inspection process by rearranging images based on their content. Thus, more relevant images can be discovered within a shorter time frame, which is of special importance in time-critical investigations of triage character. While currently employed techniques are based on black- and whitelisting of known images, we propose to use deep learning algorithms trained for the detection of pornographic imagery, as they are able to identify new content. In our approach, we evaluated three state-of-the-art neural networks for the detection of pornographic images and employed them to rearrange simulated datasets of 1 million images containing a small fraction of pornographic content. The rearrangement of images according to their content allows a much earlier detection of relevant images during the actual manual inspection of the dataset, especially when the percentage of relevant images is low. With our approach, the first relevant image could be discovered between positions 8 and 9 in the rearranged list on average. Without image rearrangement, the first relevant image was discovered at position 1,463 on average.
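The rearrangement itself reduces to sorting by the network's per-image score; a schematic sketch in which random numbers stand in for the CNN's pornography probabilities (everything here is a placeholder, not the paper's models):

    import numpy as np

    rng = np.random.default_rng(0)
    scores = rng.uniform(size=1_000_000)   # stand-in for per-image CNN scores in [0, 1]
    order = np.argsort(-scores)            # descending: most suspicious images first
    print(order[:10])                      # positions an investigator would inspect first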
Modern security protocols may involve humans in order to compare or copy short strings between different devices. Multi-factor authentication protocols, such as Google 2-factor or 3D-secure are typical examples of such protocols. However, such short strings may be subject to brute force attacks. In this paper we propose a symbolic model which includes attacker capabilities for both guessing short strings, and producing collisions when short strings result from an application of weak hash functions. We propose a new decision procedure for analysing (a bounded number of sessions of) protocols that rely on short strings. The procedure has been integrated in the AKISS tool and tested on protocols from the ISO/IEC 9798-6:2010 standard.
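For context, the brute-force risk the symbolic attacker model must capture is easy to quantify: an adversary who makes N guesses at a short string drawn uniformly from a space of size |S| succeeds with probability (a standard calculation, not specific to this paper)

    \[ \Pr[\text{guess}] = \frac{N}{|S|}, \qquad \text{e.g. } |S| = 10^6 \text{ for a 6-digit code.} \]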
Delay-Tolerant Networks exhibit highly asynchronous connections, often routed over many mobile hops before reaching their intended destinations. The Bundle Security Protocol has been standardized, providing properties such as authenticity, integrity, and confidentiality of bundles using traditional Public-Key Cryptography. Other protocols based on Identity-Based Cryptography have been proposed to reduce the key distribution overhead. However, in both schemes, secret keys are usually valid for several months. Thus, a secret key extracted from a compromised node allows decryption of all past communications since the key's creation. We solve this problem and propose the first forward-secure protocol for Delay-Tolerant Networking. For this, we apply the Puncturable Encryption construction designed by Green and Miers, integrate it into the Bundle Security Protocol, and adapt its parameters for different highly asynchronous scenarios. Finally, we provide performance measurements and discuss their impact.
FPGAs have been used as accelerators in a wide variety of domains such as learning, search, genomics, signal processing, compression, analytics and so on. In recent years, the availability of tools and flows such as high-level synthesis has made it even easier to accelerate a variety of high-performance computing applications onto FPGAs. In this paper we propose a systematic methodology for optimizing the performance of an accelerated block using the notion of compute intensity to guide optimizations in high-level synthesis. We demonstrate the effectiveness of our methodology on an FPGA implementation of a non-uniform discrete Fourier transform (NUDFT), used to convert a wireless channel model from the time domain to the frequency domain. The acceleration of this particular computation can be used to improve the performance and capacity of wireless channel simulation, which has wide applications in the system-level design and performance evaluation of wireless networks. Our results show that our FPGA implementation outperforms the same code offloaded onto GPUs and CPUs by 1.6x and 10x respectively, in performance as measured by the throughput of the accelerated block. The gains in performance per watt versus GPUs and CPUs are 15.6x and 41.5x respectively.
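The paper's "compute intensity" presumably matches the standard arithmetic-intensity ratio that drives roofline-style reasoning; as a hedged restatement (our formulation, not quoted from the paper):

    \[ I = \frac{\text{operations performed}}{\text{bytes moved}}, \qquad P_{\text{attainable}} = \min\!\left(P_{\text{peak}},\; I \times BW_{\text{mem}}\right). \]

Raising I, for instance by reusing on-chip data, moves a kernel from the bandwidth-bound to the compute-bound region.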
The Internet of Things (IoT) will emerge over many devices that are dynamically networked. Because of the distributed and dynamic nature of IoT, designing a recommender system for it is a challenging problem. Recently, cognitive systems have been used to design modern frameworks in different types of computer applications, such as cognitive radio networks and cognitive peer-to-peer networks. A cognitive system can learn to improve its performance while operating in an unknown environment. In this paper, we propose a framework for cognitive recommender systems in IoT. To the best of our knowledge, there is no existing recommender system for IoT based on cognitive systems. The proposed algorithm is compared with existing recommender systems.
Summary form only given. Strong light-matter coupling has recently been successfully explored in the GHz and THz range [1] with on-chip platforms. New and intriguing quantum optical phenomena have been predicted in the ultrastrong coupling regime [2], when the coupling strength Ω becomes comparable to the unperturbed frequency of the system ω. We recently proposed a new experimental platform where we couple the inter-Landau-level transition of a high-mobility 2DEG to the highly subwavelength photonic mode of an LC meta-atom [3], showing a very large Ω/ω_c = 0.87. Our system benefits from the collective enhancement of the light-matter coupling, which comes from the scaling of the coupling Ω ∝ √n, where n is the number of optically active electrons. In our previous experiments [3] and in the literature [4] this number varies from 10³ to 10⁴ electrons per meta-atom. We now engineer a new cavity, resonant at 290 GHz, with an extremely reduced effective mode surface S_eff = 4 × 10⁻¹⁴ m² (FE simulations, CST), yielding field enhancements above 1500 and allowing us to enter the few-electron (<100) regime. It consists of a complementary metasurface with two very sharp metallic tips separated by a 60 nm gap (Fig. 1(a, b)) on top of a single triangular quantum well. THz-TDS transmission experiments as a function of the applied magnetic field reveal a strong anticrossing of the cavity mode with the linear cyclotron dispersion. Measurements for arrays of only 12 cavities are reported in Fig. 1(c). On the top horizontal axis we report the number of electrons occupying the topmost Landau level as a function of the magnetic field. At the anticrossing field of B = 0.73 T we measure approximately 60 electrons ultrastrongly coupled to the cavity mode.
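The reported anticrossing is the standard signature of two coupled modes; for reference (the textbook coupled-oscillator form, not a formula from the abstract), the polariton branches follow

    \[ \omega_\pm = \frac{\omega_c + \omega_{\mathrm{cyc}}}{2} \pm \sqrt{\Omega^2 + \left(\frac{\omega_c - \omega_{\mathrm{cyc}}}{2}\right)^2}, \]

with minimum splitting 2Ω where the cyclotron frequency crosses the cavity resonance.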
With the accelerated iteration of technological innovation, blockchain has rapidly become one of the hottest Internet technologies in recent years. As a decentralized and distributed data management solution, blockchain has redefined trust through its embedded cryptography and consensus mechanism, thus providing security, anonymity and data integrity without the need for any third party. But there still exist some technical challenges and limitations in blockchain. This paper conducts a systematic study of current blockchain applications in cybersecurity. To address the security issues, the paper analyzes the advantages that blockchain has brought to cybersecurity and summarizes current research and applications of blockchain in cybersecurity-related areas. Through in-depth analysis and summary of the existing work, the paper identifies four major security issues of blockchain and performs a more granular analysis of each problem. Adopting an attribute-based encryption method, the paper also puts forward an enhanced access control strategy.
The paper presents a novel model of hybrid biometric-based authentication. Currently, the recognition accuracy of a single biometric verification system is often much reduced by factors such as the environment, user mode and physiological defects of an individual. Moreover, the enrolment of a static biometric is highly vulnerable to impersonation attack. Because single-biometric authentication offers only one factor of verification, we propose to hybridise two biometric attributes consisting of a physiological and a behavioural trait. In this study, we utilise the static and dynamic features of a human face. To extract the important features from a face, the primary steps taken are image pre-processing and face detection. To distinguish between a genuine user and an impostor, the first authentication step verifies the user's identity through face recognition. Relying solely on a single biometric modality can lead to false acceptance when two or more similar face features result in a relatively high match score. In our experiments, the False Acceptance Rate is 0.55% and the False Rejection Rate is 7%. Because of these security discrepancies, we propose a fusion method whereby a genuine user selects a facial expression from the seven universal expressions (i.e. happy, sad, anger, disgust, surprise, fear and neutral) as enrolled earlier in the database. As a proof of concept, our results show that even if two or more users coincidentally have the same face features, the selected facial expression acts as a password that prominently distinguishes a genuine user from an impostor.
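A minimal sketch of the described two-step fusion, with the threshold and labels hypothetical: the face match is checked first, then the enrolled expression acts as a password.

    def authenticate(face_score: float, observed_expr: str, enrolled_expr: str,
                     threshold: float = 0.8) -> bool:
        """Accept only if the face matches AND the expression 'password' matches."""
        return face_score >= threshold and observed_expr == enrolled_expr

    # Two users with near-identical face scores are still separated by the expression.
    print(authenticate(0.93, "surprise", "surprise"))  # genuine user -> True
    print(authenticate(0.91, "neutral", "surprise"))   # look-alike impostor -> False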
Passthoughts, in which a user thinks a secret thought to log in to services or devices, provide two factors of authentication (knowledge and inherence) in a single step. Since their proposal in 2005, passthoughts have enjoyed a number of successful empirical studies. In this paper, we renew the promise of passthought authentication, outlining the main challenges that passthoughts must overcome in order to move from the lab to the real world. We propose two studies, which approach the fundamental questions we pose from different angles. Further, we propose passthoughts as a fruitful case study for thinking about what authentication can, and should, be expected to do, as it pushes up against questions of what sorts of "selves" authentication systems must be tasked with recognizing. Through this discussion, we raise novel possibilities for authentication broadly, such as "organic passwords" that change naturally over time, or systems that reject users who are not acting quite "like themselves."
In an open network environment, unfamiliar entities can establish mutual trust through Automated Trust Negotiation (ATN), which is based on exchanging digital credentials. In traditional ATN, a required attribute certificate is either satisfied or not, and every certificate carries the same importance in the strategy, which may cause unnecessary negotiation failures. In practice, attribute satisfaction is not simply 0 or 1 but often lies between 0 and 1, so satisfaction degrees differ and the negotiation strategy needs to be quantified. This paper analyzes the fuzzy negotiation process in order to further improve the efficiency and accuracy of trust establishment.
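One illustrative way to quantify such graded satisfaction (our notation, not the paper's): assign each required attribute i a weight w_i and a satisfaction degree s_i, and let negotiation succeed when the aggregate clears a threshold τ:

    \[ S = \sum_i w_i\, s_i \;\geq\; \tau, \qquad s_i \in [0,1],\quad \sum_i w_i = 1. \]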
Privacy-preserving data publication has been an important research field over the past few decades. One of the fundamental challenges in privacy-preserving data publication is the trade-off between privacy and utility of a single, independent data set. However, recent research has shown that even an advanced privacy mechanism, i.e., differential privacy, is vulnerable when multiple data sets are correlated. In this case, the trade-off problem between privacy and utility evolves into a game problem, in which the payoff of each player depends not only on his privacy parameter, but also on his neighbors' privacy parameters. In this paper, we first present the definition of correlated differential privacy to evaluate the real privacy level of a single data set as influenced by the other data sets. Then, we construct a game model of multiple players, each of whom publishes a data set sanitized by differential privacy. Next, we analyze the existence and uniqueness of the pure Nash Equilibrium and derive sufficient conditions for them in the game. Finally, we use the notion of the price of anarchy to evaluate the efficiency of the pure Nash Equilibrium.
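An illustrative payoff form for such a game (our notation, not necessarily the paper's): player i's effective privacy degrades with his neighbors' parameters, so

    \[ u_i(\varepsilon_i, \varepsilon_{-i}) = U(\varepsilon_i) - c \cdot L\!\left(\varepsilon_i + \sum_{j \neq i} w_{ij}\, \varepsilon_j\right), \]

where U is the data-publication utility, L the privacy loss, and the weights w_{ij} encode the correlation between data sets i and j.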
To meet the growing railway-transportation demand, a new train control system, the communication-based train control (CBTC) system, aims to maximize the capacity of train lines by reducing the headway of each train. However, wireless communications expose the CBTC system to new security threats. Due to the cyber-physical nature of the CBTC system, a jamming attack can damage the physical part of the train system by disrupting the communications. To address this issue, we develop a secure framework to mitigate the impact of the jamming attack based on a security criterion. At the cyber layer, we apply a multi-channel model to enhance the reliability of the communications and develop a zero-sum stochastic game to capture the interactions between the transmitter and jammer. We present analytical results and apply dynamic programming to find the equilibrium of the stochastic game. Finally, experimental results are provided to evaluate the performance of the proposed secure mechanism.
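A sketch of how such an equilibrium can be computed: Shapley-style value iteration, in which each state's stage game is a zero-sum matrix game solved by linear programming. All numbers below are toy placeholders, not the paper's model.

    import numpy as np
    from scipy.optimize import linprog

    def matrix_game_value(R):
        """Value of the zero-sum matrix game R for the row (maximizing) player."""
        m, n = R.shape
        # Variables: mixed strategy x_1..x_m and the game value v; minimize -v.
        c = np.zeros(m + 1)
        c[-1] = -1.0
        A_ub = np.hstack([-R.T, np.ones((n, 1))])  # v <= x^T R[:, d] for every jammer action d
        b_ub = np.zeros(n)
        A_eq = np.zeros((1, m + 1))
        A_eq[0, :m] = 1.0                          # strategy sums to 1
        b_eq = np.array([1.0])
        bounds = [(0, None)] * m + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        return res.x[-1]

    # Shapley value iteration over toy channel states.
    rng = np.random.default_rng(1)
    n_states, n_tx, n_jam, gamma = 3, 2, 2, 0.9
    r = rng.uniform(0.0, 1.0, (n_states, n_tx, n_jam))             # transmitter's stage reward
    P = rng.dirichlet(np.ones(n_states), (n_states, n_tx, n_jam))  # state transition kernel
    V = np.zeros(n_states)
    for _ in range(200):
        V = np.array([matrix_game_value(r[s] + gamma * P[s] @ V) for s in range(n_states)])
    print(V)  # equilibrium value of each channel state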
A CAPTCHA is a type of challenge-response test used to ensure that a response is generated by a human and not by a computerized robot. CAPTCHAs are getting harder because the latest advanced pattern recognition and machine learning algorithms are capable of solving simpler CAPTCHAs. However, some enhancement procedures make CAPTCHAs too difficult for humans to recognize. This paper addresses the problem with a next-generation, human-friendly mini-game CAPTCHA and quantifies the usability of CAPTCHAs.
In an Internet of Things (IoT) network, each node (device) provides and requires services. With the growth of IoT, the number of nodes providing the same service has also increased, creating the problem of selecting one reliable service from among many providers. In this paper, we propose a scalable graph-based collaborative filtering recommendation algorithm, improved using trust, to solve the service selection problem; unlike a central recommender, which fails to scale, it can match the growth of IoT. Using this recommender, a node can predict its ratings for the nodes that provide the required service and then select the best-rated service provider.
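A minimal sketch of trust-weighted rating prediction of the kind described; the blending of similarity and trust, and all numbers, are illustrative rather than the paper's exact formula.

    def predict_rating(ratings: dict, sim: dict, trust: dict) -> float:
        """Predict a node's rating for a provider from neighbours' ratings,
        weighting each neighbour by similarity blended with trust."""
        num = den = 0.0
        for v, r in ratings.items():   # neighbour v rated the provider r
            w = sim[v] * trust[v]      # one simple way to blend the two signals
            num += w * r
            den += abs(w)
        return num / den if den else 0.0

    # Neighbours' ratings of one provider, plus similarity and trust toward them.
    print(predict_rating({"a": 4.0, "b": 2.0}, {"a": 0.9, "b": 0.4}, {"a": 0.8, "b": 0.3}))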
When Bitcoin was first introduced to the world in 2008 by an enigmatic programmer going by the pseudonym Satoshi Nakamoto, it was billed as the world's first decentralized virtual currency. Offering the first credible incarnation of a digital currency, Bitcoin was based on the principle of peer-to-peer transactions involving a complex public address and a private key that only the owner of the coin would know. This paper investigates how the usage and value of Bitcoin are affected by current events in the cyber environment. Is an advancement in the digital security of Bitcoin reflected in the value of the currency, and conversely, does a major security breach have a negative effect? By analyzing statistical data on the market value of Bitcoin at specific points where the currency fluctuated dramatically, we believe that trends can be found. This paper proposes that, based on the data analyzed, the current integrity of Bitcoin security is trusted by general users and the value and usage of the currency are growing. All the major fluctuations of the currency can be linked to significant events within the digital security environment; however, these fluctuations are decreasing in frequency and severity. Bitcoin is still a volatile currency, but this paper concludes that this is a result of security flaws in Bitcoin services rather than in the Bitcoin protocol itself.
The main goal of introducing identity-based and certificateless cryptosystems was to avoid certificate management costs. In turn, the goal of introducing certificate-based cryptosystems was to solve the certificate revocation problem. In this paper, we propose a new digital Implicit and Explicit Certificates-Based Hess's Signature (IE-CBHS) scheme that combines the features of a standard public key infrastructure (PKI) and a certificate-based cryptosystem. Our IE-CBHS scheme is an efficient certificate-based signature. The security analysis proves that the scheme is secure against two game attacks in the random oracle model. The security is closely related to the difficulty of solving the computational Diffie–Hellman and discrete logarithm problems. The IE-CBHS scheme, when compared with other signature schemes, has similar efficiency and is both more flexible and more useful in practice. It is possible to revoke the explicit certificate and use that fact during digital signature verification. Thus, our scheme is useful in applications where typical mechanisms of standard PKI are used. One of many important security features is resistance to denial-of-signature-verification attacks. Also, it is impossible for the trusted authority to recreate a partial private key, even in cooperation with the signer.
Byte-addressable non-volatile memory technology is emerging as an alternative to DRAM for main memory. This new Non-Volatile Main Memory (NVMM) allows programmers to store important data in data structures in memory instead of serializing it to the file system, thereby providing a substantial performance boost. However, modern systems reorder memory operations and utilize volatile caches for better performance, making it difficult to ensure a consistent state in NVMM. Intel recently announced a new set of persistence instructions: clflushopt, clwb, and pcommit. These new instructions make it possible to implement fail-safe code on NVMM, but few workloads have been written or characterized using them. In this work, we describe how these instructions work and how they can be used to implement write-ahead-logging-based transactions. We implement several common data structures and kernels and evaluate the performance overhead incurred over traditional non-persistent implementations. In particular, we find that persistence instructions occur in clusters along with expensive fence operations, have long latency, and add a significant execution time overhead: on average 20.3% over code with logging but without fence instructions to order persists. To deal with this overhead and alleviate the performance bottleneck, we propose to speculate past long-latency persistency operations using checkpoint-based processing. Our speculative persistence architecture reduces the execution time overhead to only 3.6%.
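The ordering discipline behind such write-ahead logging can be sketched as follows. Since the paper's primitives are x86 instructions (clwb, sfence, pcommit), this Python stand-in uses file fsync purely to mark where the flush-and-fence steps would sit on NVMM; file names and record layout are illustrative.

    import os, struct

    LOG, DATA = "wal.log", "data.bin"

    def persistent_update(offset: int, new: bytes) -> None:
        # 1. Persist the log record BEFORE touching the data in place.
        with open(LOG, "ab") as log:
            log.write(struct.pack("<QI", offset, len(new)) + new)
            log.flush()
            os.fsync(log.fileno())    # on NVMM: clwb each log cache line, then sfence
        # 2. Apply and persist the in-place update.
        with open(DATA, "r+b" if os.path.exists(DATA) else "w+b") as data:
            data.seek(offset)
            data.write(new)
            data.flush()
            os.fsync(data.fileno())   # on NVMM: clwb the data lines, then sfence
        # 3. A commit mark / log truncation would follow the same flush-fence pattern.

    persistent_update(0, b"hello NVMM")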
Wireless Sensor Networks (WSNs) have been widely adopted to monitor various ambient conditions, including those of critical infrastructures. Since the power grid is considered a critical infrastructure, and the smart grid has appeared as a viable technology to introduce more reliability, efficiency, controllability, and safety to the traditional power grid, WSNs have been envisioned as potential tools to monitor the smart grid. The motivation behind smart grid monitoring is to improve its emergency preparedness and resilience. Despite their effectiveness in monitoring critical infrastructures, WSNs also introduce various security vulnerabilities due to their open nature and unreliable wireless links. In this paper, we focus on the Black-Hole (B-H) attack. To cope with it, we propose a hierarchical trust-based WSN monitoring model for smart grid equipment in order to detect B-H attacks. Malicious nodes are detected by testing the trade-off between trust and dropped-packet ratios for each Cluster Head (CH). We test network behaviour with four different thresholds for the Packet Dropped Ratio (PDR): 20%, 30%, 40%, and 50%. The 50% threshold has been shown to reach system stability earliest, with the fewest re-clustering operations.
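The per-CH test reduces to comparing each node's dropped-packet ratio and trust against thresholds; a sketch with illustrative numbers and a hypothetical trust floor (the paper specifies only the PDR thresholds):

    def is_black_hole(received: int, forwarded: int, trust: float,
                      pdr_threshold: float = 0.5, trust_floor: float = 0.5) -> bool:
        """Flag a node whose Packet Dropped Ratio exceeds the threshold
        while its accumulated trust has fallen below the floor."""
        pdr = 1.0 - forwarded / received if received else 0.0
        return pdr > pdr_threshold and trust < trust_floor

    print(is_black_hole(received=100, forwarded=30, trust=0.2))  # 70% dropped -> True
    print(is_black_hole(received=100, forwarded=95, trust=0.9))  # healthy node -> False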
User attribution based on inherent human dynamics and preferences is one area of research capable of elucidating and capturing human dynamics on the Internet. Prior work on user attribution concentrated on behavioral biometrics and 1-to-1 user identification, without consideration for individual preference and inherent human temporal tendencies, which are capable of providing a discriminatory baseline for online users as well as a higher-level classification framework for novel user attribution. To address these limitations, the study developed a temporal model which comprises the human Polyphasia tendency, based on the Polychronic-Monochronic tendency scale measurement instrument, and the extraction of unique human-centric features from the server-side network traffic of 48 active users. Several machine-learning algorithms were applied to observe distinct patterns among the classes of Polyphasia tendency, and a logistic model tree was observed to provide the highest classification accuracy for a 1-to-N user attribution process. The study further developed a high-level attribution model for the higher-level user attribution process. The results of this study are relevant to online profiling, forensic identification and profiling, e-learning profiling, and social network profiling.
As the most successful cryptocurrency to date, Bitcoin constitutes a target of choice for attackers. While many attack vectors have already been uncovered, one important vector has been left out: attacking the currency via the Internet routing infrastructure itself. Indeed, by manipulating routing advertisements (BGP hijacks) or by naturally intercepting traffic, Autonomous Systems (ASes) can intercept and manipulate a large fraction of Bitcoin traffic. This paper presents the first taxonomy of routing attacks and their impact on Bitcoin, considering both small-scale attacks, targeting individual nodes, and large-scale attacks, targeting the network as a whole. While challenging, we show that two key properties make routing attacks practical: (i) the efficiency of routing manipulation; and (ii) the significant centralization of Bitcoin in terms of mining and routing. Specifically, we find that any network attacker can hijack only a few (<100) BGP prefixes to isolate 50% of the mining power, even when considering that mining pools are heavily multi-homed. We also show that on-path network attackers can considerably slow down block propagation by interfering with a few key Bitcoin messages. We demonstrate the feasibility of each attack against the deployed Bitcoin software. We also quantify their effectiveness on the current Bitcoin topology using data collected from a Bitcoin supernode combined with BGP routing data. The potential damage to Bitcoin is worrying. By isolating parts of the network or delaying block propagation, attackers can cause a significant amount of mining power to be wasted, leading to revenue losses and enabling a wide range of exploits such as double spending. To prevent such effects in practice, we provide both short- and long-term countermeasures, some of which can be deployed immediately.
We continue the study of Homomorphic Secret Sharing (HSS), recently introduced by Boyle et al. (Crypto 2016, Eurocrypt 2017). A (2-party) HSS scheme splits an input x into shares (x0, x1) such that (1) each share computationally hides x, and (2) there exists an efficient homomorphic evaluation algorithm Eval such that for any function (or "program") P from a given class it holds that Eval(x0, P) + Eval(x1, P) = P(x). Boyle et al. show how to construct an HSS scheme for branching programs, with an inverse polynomial error, using discrete-log-type assumptions such as DDH. We make two types of contributions. Optimizations: we introduce new optimizations that speed up the previous optimized implementation of Boyle et al. by more than a factor of 30, significantly reduce the share size, and reduce the rate of leakage induced by selective failure. Applications: our optimizations are motivated by the observation that there are natural application scenarios in which HSS is useful even when applied to simple computations on short inputs. We demonstrate the practical feasibility of our HSS implementation in the context of such applications.
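The defining equation is easy to see in the degenerate linear case; here is a toy additive sketch over Z_q (the actual Boyle et al. construction handles branching programs under DDH, which this does not attempt; the modulus is an arbitrary choice):

    import secrets

    q = 2**61 - 1  # toy public modulus

    def share(x):
        """Split x into two additive shares, each individually hiding x."""
        x0 = [secrets.randbelow(q) for _ in x]
        x1 = [(xi - s0) % q for xi, s0 in zip(x, x0)]
        return x0, x1

    def Eval(xs, P):
        """Homomorphic evaluation of a linear 'program' P on one share."""
        return sum(p * s for p, s in zip(P, xs)) % q

    x, P = [3, 1, 4], [2, 0, 5]
    x0, x1 = share(x)
    # Eval(x0, P) + Eval(x1, P) = P(x), exactly as in the HSS definition.
    assert (Eval(x0, P) + Eval(x1, P)) % q == sum(p * xi for p, xi in zip(P, x)) % q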
Homomorphic signatures can provide a credential that a result was indeed computed with a given function on a data set by an untrusted third party, such as a cloud server, when the input data are stored with their signatures beforehand. Boneh and Freeman (EUROCRYPT 2011) proposed a homomorphic signature scheme for polynomial functions of any degree; however, the scheme is not based on the standard short integer solution (SIS) problem as its security assumption. In this paper, we show a homomorphic signature scheme for quadratic polynomial functions whose security assumption is based on the standard SIS problem. Our scheme constructs the signatures of multiplications as tensor products of the original signature vectors of the input data, so that homomorphism holds. Moreover, the security of our scheme is reduced to the hardness of the SIS problems with respect to moduli such that one modulus is a power of the other. We show the reduction by constructing, from any forger of our scheme, solvers of the SIS problem with respect to either of the moduli.
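Schematically (our notation), if σ_{m1} and σ_{m2} are the signature vectors of messages m_1 and m_2, the multiplication rule described in the abstract is

    \[ \sigma_{m_1 m_2} = \sigma_{m_1} \otimes \sigma_{m_2}, \]

so the signature of a quadratic monomial lives in the tensor-product space and the additive homomorphism of the underlying lattice signatures is preserved.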