Biblio
Due to improvements in defensive systems, network threats are becoming increasingly sophisticated and complex, as cybercriminals use various methods to cloak their actions. Among other techniques, this includes the application of network steganography, e.g., to hide the communication between an infected host and a malicious control server by embedding commands into innocent-looking traffic. Recently, a new subtype of such methods, called inter-protocol steganography, has emerged. It utilizes relationships between two or more overt protocols to hide data. In this paper, we present new inter-protocol hiding techniques which are suitable for real-time services. Afterwards, we introduce and present preliminary results of a novel steganography detection approach which relies on network traffic coloring.
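To make the general idea concrete, the following toy sketch (an illustration of the concept, not the authors' scheme) hides a bit stream in the choice of which of two overt protocols carries each consecutive message; the protocol names are placeholders:

```python
# Toy illustration of inter-protocol steganography: hide bits in the
# *choice* of overt protocol used for each consecutive message.
# Hypothetical example, not the technique proposed in the paper.

PROTOCOLS = ["RTP", "RTCP"]  # two innocent-looking carrier protocols

def embed(bits):
    """Map each hidden bit to the protocol carrying the next message."""
    return [PROTOCOLS[b] for b in bits]

def extract(observed):
    """Recover the hidden bits from the observed protocol sequence."""
    return [PROTOCOLS.index(p) for p in observed]

hidden = [1, 0, 0, 1, 1]
carrier_sequence = embed(hidden)          # ['RTCP', 'RTP', 'RTP', ...]
assert extract(carrier_sequence) == hidden
```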
This paper combines FMEA and N² (N-squared) approaches to create a methodology for determining the risks associated with the components of an underwater system. The methodology is based on defining the risk level related to each of the components and interfaces that belong to a complex underwater system. As far as the authors know, this approach has not been reported before. The information resulting from the two procedures is combined to find the system's critical elements and the interfaces that are most affected by each failure mode. Finally, a calculation is performed to determine the severity level of each failure mode based on the system's critical elements.
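As a hedged illustration of how such inputs can combine (the abstract does not give the paper's exact scoring), the classical FMEA risk priority number and an N² interface matrix might be composed as follows; the component names and ratings are hypothetical:

```python
import numpy as np

# Classical FMEA risk priority number; the paper's exact scoring may differ.
def rpn(severity, occurrence, detection):
    return severity * occurrence * detection  # each rated 1..10

# N^2 (N-squared) interface matrix for a 3-component underwater system:
# entry [i][j] = 1 if component i feeds an interface into component j.
components = ["thruster", "controller", "pressure hull"]
n2 = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]])

# A crude criticality measure: how many interfaces a component touches,
# so a failure mode hitting it propagates further through the system.
criticality = n2.sum(axis=0) + n2.sum(axis=1)
ratings = [rpn(8, 3, 4), rpn(9, 2, 6), rpn(10, 1, 3)]
for name, c, r in zip(components, criticality, ratings):
    print(f"{name}: interfaces={c}, RPN={r}")
```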
Trust Management (TM) systems for authentication are vital to the security of online interactions, which are ubiquitous in our everyday lives. Various systems, like the Web PKI (X.509) and PGP's Web of Trust, are used to manage trust in this setting. In recent years, blockchain technology has been introduced as a panacea to our security problems, including that of authentication, without sufficient reasoning as to its merits. In this work, we investigate the merits of using open distributed ledgers (ODLs), such as the one implemented by blockchain technology, for securing TM systems for authentication. We formally model such systems, and explore how blockchain can help mitigate attacks against them. After formal argumentation, we conclude that in the context of Trust Management for authentication, blockchain technology, and ODLs in general, can offer considerable advantages compared to previous approaches. Our analysis is, to the best of our knowledge, the first to formally model and argue about the security of TM systems for authentication based on blockchain technology. To achieve this result, we first provide an abstract model for TM systems for authentication. Then, we show how this model can be conceptually encoded in a blockchain, by expressing it as a series of state transitions. As a next step, we examine five prevalent attacks on TM systems, and provide evidence that blockchain-based solutions can be beneficial to the security of such systems, by mitigating, or completely negating, such attacks.
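A minimal sketch of what "expressing trust management as a series of state transitions" on a ledger could look like; the encoding below is a hypothetical stand-in, not the paper's formal model:

```python
import hashlib, json, time

# Minimal sketch: trust-management events (bind/revoke a key for an
# identity) recorded as state transitions in an append-only, hash-chained
# log, standing in for an open distributed ledger. Hypothetical encoding.

class Ledger:
    def __init__(self):
        self.blocks = []

    def append(self, transition):
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = json.dumps({"prev": prev, "tx": transition}, sort_keys=True)
        self.blocks.append({"tx": transition, "prev": prev,
                            "hash": hashlib.sha256(body.encode()).hexdigest()})

    def current_binding(self, identity):
        """Replay all transitions to derive the current trust state."""
        key = None
        for b in self.blocks:
            tx = b["tx"]
            if tx["id"] == identity:
                key = tx["key"] if tx["op"] == "bind" else None
        return key

ledger = Ledger()
ledger.append({"op": "bind", "id": "alice", "key": "pk_A1", "t": time.time()})
ledger.append({"op": "revoke", "id": "alice", "key": "pk_A1", "t": time.time()})
assert ledger.current_binding("alice") is None  # revocation is final state
```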
Different data mining techniques are employed in the stylometry domain for performing authorship attribution tasks. Sometimes, to improve the decision system, discretization of the input data can be applied. In many cases such an approach yields better classification results. On the other hand, there were situations in which discretization decreased the overall performance of the system. Therefore, the question arose as to what the result would be if only some selected attributes were discretized. The paper presents the results of research on forward sequential selection of the attributes to be discretized. The influence of this approach on the performance of a decision system based on a Naive Bayes classifier in the authorship attribution domain is presented. Some basic discretization methods and different approaches to discretization of the test datasets are taken into consideration.
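A minimal sketch of the selection loop described above, using scikit-learn with synthetic data in place of the paper's stylometric datasets (the bin count and scoring protocol are assumptions):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Forward sequential selection of attributes to discretize, scored by a
# Naive Bayes classifier. Synthetic data stands in for stylometric features.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

def score(discretize_idx):
    Xd = X.copy()
    if discretize_idx:
        kb = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="uniform")
        Xd[:, discretize_idx] = kb.fit_transform(X[:, discretize_idx])
    return cross_val_score(GaussianNB(), Xd, y, cv=5).mean()

selected, best = [], score([])
while True:
    candidates = [(score(selected + [f]), f)
                  for f in range(X.shape[1]) if f not in selected]
    if not candidates:
        break
    s, f = max(candidates)          # best attribute to discretize next
    if s <= best:
        break                       # no further improvement: stop
    best, selected = s, selected + [f]
print("attributes chosen for discretization:", selected,
      "CV accuracy:", round(best, 3))
```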
The notion of style is pivotal to literature. The choice of a certain writing style moulds and enhances the overall character of a book. Stylometry uses statistical methods to analyze literary style. This work aims to build a recommendation system based on the similarity in stylometric cues of various authors. The problem at hand is in close proximity to the author attribution problem. The work follows a supervised approach, with an initial corpus of books labelled with their respective authors as the training set, and generates recommendations based on the misclassified books. Results on book similarity are substantiated by domain experts.
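One plausible reading of "recommendations based on the misclassified books" (an interpretation; the abstract does not spell out the mechanism) is sketched below: when a book is attributed to a different author, that author's books are treated as stylistically similar reads. All titles and feature values are toy data:

```python
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical sketch: train an author-attribution model on stylometric
# feature vectors; when a book is attributed to a *different* author,
# recommend that author's books as stylistically similar.
features = [[0.10, 0.8], [0.12, 0.9], [0.50, 0.2], [0.52, 0.1]]  # toy cues
authors  = ["Austen", "Austen", "Hemingway", "Hemingway"]
titles   = ["Emma", "Persuasion", "The Old Man", "A Farewell"]

model = KNeighborsClassifier(n_neighbors=1).fit(features, authors)

query_title, query_features, query_author = "Unknown novel", [0.49, 0.15], "Austen"
predicted = model.predict([query_features])[0]
if predicted != query_author:  # misclassification signals stylistic proximity
    recs = [t for t, a in zip(titles, authors) if a == predicted]
    print(f"Readers of {query_title!r} may enjoy: {recs}")
```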
Node compromise is still the hardest attack to defend against in Wireless Sensor Networks (WSNs). It affects key distribution, which is a building block for securing communications in any network. The weak point of several proposed key distribution schemes in WSNs is their lack of resilience to node compromise attacks. When a node is compromised, all its key material is revealed, leading to insecure communication links throughout the network. This drawback is more harmful for long-lived WSNs that are deployed in multiple phases, i.e., multi-phase WSNs (MPWSNs). In the last few years, many key management schemes were proposed to ensure security in WSNs. However, these schemes are conceived for single-phase WSNs and their security degrades over time as an attacker captures nodes. To deal with this drawback and enhance resilience to node compromise over the whole lifetime of the network, we propose in this paper a new key pre-distribution scheme adapted to MPWSNs. Our scheme takes advantage of the resilience improvement of the Q-composite key scheme and adds self-healing, which is the ability of the scheme to decrease the effect of node compromise over time. Self-healing is achieved by pre-distributing fresh keys to each generation. The evaluation of our scheme proves that it has good key connectivity and high resilience to node compromise attacks compared to existing key management schemes.
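A toy sketch of the two ingredients named above, q-composite link establishment plus fresh per-generation key pools (pool size, ring size, and q are illustrative, not the paper's parameters):

```python
import hashlib, random

# Sketch of q-composite key pre-distribution with per-generation pools.
POOL, RING, Q = 1000, 80, 2

def generation_pool(gen, seed=0):
    """Fresh key pool per deployment generation => self-healing: keys
    captured in generation g reveal nothing about generation g+1."""
    rng = random.Random(1000003 * seed + gen)
    return [rng.getrandbits(128) for _ in range(POOL)]

def key_ring(rng):
    return set(rng.sample(range(POOL), RING))  # indices into the pool

def link_key(pool, ring_a, ring_b):
    shared = sorted(ring_a & ring_b)
    if len(shared) < Q:
        return None  # fewer than q shared keys: no secure link
    material = b"".join(pool[i].to_bytes(16, "big") for i in shared)
    return hashlib.sha256(material).hexdigest()  # composite link key

pool = generation_pool(gen=3)
rng = random.Random(42)
a, b = key_ring(rng), key_ring(rng)
print("link key:", link_key(pool, a, b))
```

Because a captured ring only exposes keys of its own generation's pool, the fraction of compromised links decays as new generations are deployed.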
Crypto-ransomware is a challenging threat that ciphers a user's files while hiding the decryption key until a ransom is paid by the victim. This type of malware is a lucrative business for cybercriminals, generating millions of dollars annually. The spread of ransomware is increasing as traditional detection-based protection, such as antivirus and anti-malware, has proven ineffective at preventing attacks. Additionally, this form of malware is incorporating advanced encryption algorithms and expanding the number of file types it targets. Cybercriminals have found a lucrative market and no one is safe from being the next victim: encrypting ransomware targets businesses small and large as well as the regular home user. This paper discusses ransomware's methods of infection, the technology behind it, and what can be done to help prevent becoming the next victim. The paper investigates the most common types of crypto-ransomware, the various payload methods of infection, the typical behavior and tactics of crypto-ransomware, how an attack is ordinarily carried out, and which files are most commonly targeted on a victim's computer; recommendations for prevention and safeguards are listed as well.
Named-Data Networking (NDN) has emerged as a clean-slate Internet proposal on the wave of Information-Centric Networking. Although NDN's data plane seems to offer many advantages, e.g., native support for multicast communications and flow balance, it also makes the network infrastructure vulnerable to a specific DDoS attack, the Interest Flooding Attack (IFA). In IFAs, a botnet issuing unsatisfiable content requests can be set up effortlessly to exhaust routers' resources and cause a severe performance drop for legitimate users. So far several countermeasures have addressed this security threat; however, their efficacy was proven under simplistic assumptions about the attack model. Therefore, we propose a more complete attack model and design an advanced IFA. We show the efficiency of our novel attack scheme by extensively assessing some of the state-of-the-art countermeasures. Further, we release the software to perform this attack as an open-source tool to help design future, more robust defense mechanisms.
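The core of a basic IFA can be sketched in a few lines (names are illustrative only): Interests for a routable prefix with nonexistent suffixes occupy routers' Pending Interest Table entries until they time out.

```python
import random, string

# Sketch of a basic Interest Flooding Attack: request content names that
# no producer can satisfy, so each Interest holds a router's Pending
# Interest Table (PIT) entry until timeout. Prefix is a placeholder.
VALID_PREFIX = "/video/provider"

def unsatisfiable_interest():
    junk = "".join(random.choices(string.ascii_lowercase + string.digits, k=12))
    return f"{VALID_PREFIX}/{junk}"  # routable prefix, nonexistent suffix

botnet_interests = [unsatisfiable_interest() for _ in range(5)]
print("\n".join(botnet_interests))
```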
Social media plays an integral part in individuals' everyday lives as well as for companies. Social media brings numerous benefits, such as keeping in touch with close ones, especially relatives who are overseas, making new friends, buying products, sharing information, and much more. Unfortunately, several threats accompany the countless advantages of social media. The rapid growth of online social networking sites provides more scope for criminals and cyber-criminals to carry out their illegal activities, and hackers have found different ways of exploiting these platforms for their malicious gains. This research covers some of the common threats on social media, such as spam, malware, Trojan horses, cross-site scripting, industrial espionage, cyber-bullying, cyber-stalking, and social engineering attacks, and elaborates in particular on phishing, malware, and click-jacking attacks. The research is motivated by the fact that there is no dedicated forensic investigation methodology for Facebook, and no forensic tools have been properly tested against it: several tools are available to extract digital data, but they have not been validated for Facebook. A forensic investigation tool is used to extract evidence to determine what happened, when, where, and who is responsible; this information is required to ensure that sufficient evidence exists to take legal action against criminals.
The increasing complexity and ubiquity of user connectivity, computing environments, information content, and software, mobile, and web applications transfers the responsibility of privacy management to individuals, making it extremely difficult for users to maintain the intelligent and targeted level of privacy protection that they need and desire while simultaneously maintaining their ability to function optimally. Thus, there is a critical need to develop intelligent, automated, and adaptable privacy management systems that can assist users in managing and protecting their sensitive data in the increasingly complex situations and environments in which they find themselves. This work is a first step in exploring the development of such a system, specifically how user personality traits and other characteristics can be used to help automate the determination of user sharing preferences for a variety of user data and situations. The Big-Five personality traits of openness, conscientiousness, extroversion, agreeableness, and neuroticism are examined and used as inputs into several popular machine learning algorithms in order to assess their ability to elicit and predict user privacy preferences. Our results show that the Big-Five personality traits can be used to significantly improve the prediction of user privacy preferences in a number of contexts and situations, and so using machine learning approaches to automate the setting of user privacy preferences has the potential to greatly reduce the burden on users while simultaneously improving the accuracy of their privacy preferences and security.
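The modelling setup reduces to a small supervised-learning problem; the sketch below uses synthetic trait scores and one of many possible classifiers (the label-generating rule is purely an illustrative assumption):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Sketch of the setup: Big-Five trait scores as features, a binary
# share/withhold preference as the label. Data is synthetic; the paper
# uses real questionnaire responses and compares several algorithms.
rng = np.random.default_rng(0)
n = 500
traits = rng.uniform(1, 5, size=(n, 5))  # O, C, E, A, N on a 1..5 scale
# Toy generative assumption: high openness and extroversion => more sharing.
share = ((0.6 * traits[:, 0] + 0.4 * traits[:, 2]
          + rng.normal(0, 0.5, n)) > 3).astype(int)

acc = cross_val_score(RandomForestClassifier(random_state=0),
                      traits, share, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```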
In our era, most communication between people takes the form of electronic messages, especially through smart mobile devices. As such, the written text exchanged suffers from poor punctuation, misspelled words, continuous chunks of several words without spaces, tables, internet addresses, etc., which make traditional text analytics methods difficult or impossible to apply without serious effort to clean the dataset. The method proposed in this paper can work on massive noisy and scrambled texts with minimal preprocessing: special characters and spaces are removed to create one continuous string, and all repeated patterns are then detected very efficiently using the Longest Expected Repeated Pattern Reduced Suffix Array (LERP-RSA) data structure and a variant of the All Repeated Patterns Detection (ARPaD) algorithm. Meta-analyses of the results can further assist a digital forensics investigator in detecting important information in the chunk of text analyzed.
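A minimal stand-in for the pipeline, using a plain suffix array in place of LERP-RSA/ARPaD (which are engineered to scale far beyond this sketch):

```python
import re

# Normalise noisy text into one continuous string, then find repeated
# patterns via a plain suffix array. Toy-scale stand-in only.
def clean(text):
    return re.sub(r"[^a-z0-9]", "", text.lower())

def repeated_patterns(s, min_len=3):
    suffixes = sorted(range(len(s)), key=lambda i: s[i:])
    found = set()
    for a, b in zip(suffixes, suffixes[1:]):
        # longest common prefix of adjacent suffixes = a repeated substring
        lcp = 0
        while a + lcp < len(s) and b + lcp < len(s) and s[a + lcp] == s[b + lcp]:
            lcp += 1
        if lcp >= min_len:
            found.add(s[a:a + lcp])
    return found

noisy = "Meet @ 5pm!! meet@5PM... the usual place, THE usual place"
print(repeated_patterns(clean(noisy)))  # finds 'meet5pm', 'theusualplace', ...
```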
The deceleration of transistor feature-size scaling has motivated the growing adoption of specialized accelerators implemented as GPUs, FPGAs, ASICs, and, more recently, new types of computing such as neuromorphic, bio-inspired, ultra-low-energy, reversible, stochastic, optical, quantum, combinations of these, and others unforeseen. There is a tension between specialization and generalization, with the current state trending toward master-slave models where accelerators (slaves) are instructed by a general-purpose system (master) running an Operating System (OS). Traditionally, an OS is a layer between hardware and applications, and its primary function is to manage hardware resources and provide a common abstraction to applications. Does this function, however, apply to new types of computing paradigms? This paper revisits OS functionality for memristor-based accelerators. We explore one accelerator implementation, the Dot Product Engine (DPE), for a select pattern of applications in machine learning, imaging, and scientific computing, and a small set of use cases. We explore typical OS functionality, such as reconfiguration, partitioning, security, virtualization, and programming. We also explore new types of functionality, such as precision and trustworthiness of reconfiguration. We claim that making an accelerator, such as the DPE, more general will result in broader adoption and better utilization.
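To see why precision becomes an OS-level concern, consider what a DPE computes: an analog matrix-vector product whose matrix is stored as finite-precision conductances. The sketch below is a simplification with an assumed 32-level quantisation, purely to illustrate the induced error that a reconfiguration layer would have to account for:

```python
import numpy as np

# Sketch of a memristor Dot Product Engine: y = G @ v, where the matrix
# lives in the crossbar as conductances with *finite precision*.
def program_crossbar(W, levels=32):
    lo, hi = W.min(), W.max()
    step = (hi - lo) / (levels - 1)
    return np.round((W - lo) / step) * step + lo  # quantised conductances

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 8))
v = rng.standard_normal(8)

exact = W @ v
analog = program_crossbar(W) @ v
print("max error from 32-level conductances:", np.abs(exact - analog).max())
```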
Building secure systems used to mean ensuring a secure perimeter, but that is no longer the case: today's systems are ill-equipped to deal with attackers who are able to pierce perimeter defenses. Data provenance is a critical technology for building resilient systems that can recover from attackers who manage to overcome the "hard-shell" defenses. In this paper, we provide background information on data provenance and detail provenance collection, analysis, and storage techniques and their challenges. Data provenance is well situated to address the challenging problem of allowing a system to "fight through" an attack, and we help identify the work necessary to ensure that future systems are resilient.
In a continually evolving cyber-threat landscape, the detection and prevention of cyber attacks has become a complex task. Technological developments have led organisations to digitise the majority of their operations. This practice, however, has its perils, since cyberspace offers a new attack surface. Institutions tasked with protecting organisations from these threats mainly utilise network data, and their incident response strategy remains oblivious to the organisation's needs when it comes to protecting operational aspects. This paper presents a system able to combine threat intelligence data, attack-trend data, and organisational data (along with other available data sources) in order to achieve automated network-defence actions. Our approach combines machine learning, visual analytics, and information from business processes to guide a decision-making process for a Security Operation Centre environment. We test our system on two synthetic scenarios and show that correlating network data with non-network data for automated network defences is possible and worth investigating further.
A growing need for scalable solutions for both machine learning and interactive analytics exists in the area of cyber-security. Machine learning aims at segmentation and classification of log events, which leads towards optimization of the threat monitoring processes. Tools for interactive analytics are required to resolve the uncertain cases where machine learning algorithms are not able to provide a convincing outcome and human expertise is necessary. In this paper we focus on a case study of a security operations platform in which the typical layers of information processing are integrated with a new database engine dedicated to approximate analytics. The engine makes it possible for security experts to query massive log event data sets in a standard relational style. The query outputs are received orders of magnitude faster than with any of the existing database solutions running with comparable resources and, in addition, they are sufficiently accurate to make the right decisions about suspicious corner cases. The engine internals are driven by the principles of information granulation and summary-based processing. They also refer to the ideas of data quantization, approximate computing, rough sets, and probability propagation. In the paper we study how the engine's parameters can influence its performance within the considered environment. In addition to the results of experiments conducted on large data sets, we also discuss some of our high-level design decisions, including the choice of an approximate query result accuracy measure that should reflect the specifics of the considered threat monitoring operations.
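The flavour of summary-based approximate querying can be conveyed in a few lines; this is a toy stand-in (real engines keep much richer per-granule summaries than min/max/count):

```python
import numpy as np

# Log events are grouped into granules; each granule stores only the
# min/max of a column and a row count. A range COUNT is then answered
# from summaries alone, assuming roughly uniform values per granule.
def summarise(values, granule_size=1000):
    g = values.reshape(-1, granule_size)
    return [(row.min(), row.max(), granule_size) for row in g]

def approx_count(summaries, lo, hi):
    total = 0.0
    for gmin, gmax, n in summaries:
        overlap = max(0.0, min(hi, gmax) - max(lo, gmin))
        span = gmax - gmin or 1.0
        total += n * overlap / span  # fraction of granule inside [lo, hi)
    return total

events = np.sort(np.random.default_rng(0).uniform(0, 86400, 100_000))
s = summarise(events)
print("approx:", round(approx_count(s, 3600, 7200)),
      "exact:", ((events >= 3600) & (events < 7200)).sum())
```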
With the increasing use of the internet, protecting intellectual property has become equally important. For the protection of copyright, a blind digital watermarking algorithm with SVD and OSELM in the IWT domain is proposed. During the embedding process, SVD is applied to the coefficient blocks to obtain the singular values in the IWT domain, and the singular values are modulated to embed the watermark in the host image. An online sequential extreme learning machine (OSELM) is trained to learn the relationship between the original coefficients and the corresponding watermarked versions. During the extraction process, this trained OSELM is used to extract the embedded watermark logo blindly, as no original host image is required. The watermarked image is subjected to various attacks such as blurring, noise, sharpening, rotation, and cropping. The experimental results show that the proposed watermarking scheme is robust against these attacks: the extracted watermark closely resembles the original watermark and serves well to prove ownership.
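The embedding core, modulating a block's singular values, can be sketched as quantisation index modulation on the largest singular value. This is a simplified stand-in: the paper works on IWT coefficients and uses OSELM for blind extraction, both omitted here.

```python
import numpy as np

# One watermark bit per 8x8 block, hidden by quantising the block's
# largest singular value (QIM). DELTA trades robustness vs. distortion.
DELTA = 12.0

def embed_bit(block, bit):
    U, S, Vt = np.linalg.svd(block, full_matrices=False)
    q = np.floor(S[0] / DELTA)
    S[0] = (q + (0.25 if bit == 0 else 0.75)) * DELTA  # dither by bit value
    return U @ np.diag(S) @ Vt

def extract_bit(block):
    S = np.linalg.svd(block, compute_uv=False)
    return int((S[0] % DELTA) > DELTA / 2)

rng = np.random.default_rng(0)
host = rng.uniform(0, 255, (8, 8))
marked = embed_bit(host, 1)
assert extract_bit(marked) == 1
```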
We prove polarization theorems for arbitrary classical-quantum (cq) channels. The input alphabet is endowed with an arbitrary Abelian group operation, and an Arikan-style transformation is applied using this operation. It is shown that as the number of polarization steps becomes large, the synthetic cq-channels polarize to deterministic homomorphism channels that project their input onto a quotient group of the input alphabet. This result is used to construct polar codes for arbitrary cq-channels and arbitrary classical-quantum multiple access channels (cq-MAC). The encoder can be implemented in O(N log N) operations, where N is the blocklength of the code. A quantum successive cancellation decoder for the constructed codes is proposed. It is shown that the probability of error of this decoder decays faster than 2^(−N^β) for any β < ½.
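For intuition, here is one common recursive formulation of the Arikan-style transform for the simplest Abelian group, (Z_2, XOR), exhibiting the O(N log N) butterfly structure; the paper generalises the group operation and the channels, and this sketch covers only the classical encoder:

```python
# Recursive polar (Arikan) transform over (Z_2, XOR).
# Depth is log2(N), each level does O(N) work => O(N log N) total.
def polar_transform(u):
    n = len(u)                      # blocklength, assumed a power of two
    if n == 1:
        return u[:]
    top = polar_transform([a ^ b for a, b in zip(u[:n // 2], u[n // 2:])])
    bot = polar_transform(u[n // 2:])
    return top + bot

print(polar_transform([1, 0, 1, 1, 0, 0, 1, 0]))
```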
Distributed denial-of-service attacks represent a continuous threat to the availability of information and communication resources. This research analysed the relevant scientific literature and synthesized the packet-level and traffic-flow-level parameters applicable to the detection of infrastructure-layer DDoS attacks. It is concluded that packet-level detection uses two or more parameters, while traffic-flow-level detection often uses only one parameter, which makes it the more convenient and resource-efficient approach to DDoS detection.
Computer systems face the threat of deliberate security intrusions due to malicious attacks that exploit security holes or vulnerabilities. In practice, these security holes or vulnerabilities remain in the system and its applications even if developers carefully execute system testing. It is therefore necessary and important to develop mechanisms to prevent and/or tolerate security intrusions. As a result, computer systems are often evaluated with confidentiality, integrity, and availability (CIA) criteria from the viewpoint of security, and security is treated as a QoS (Quality of Service) attribute on par with other QoS attributes such as capacity and performance. In this paper, we present a method for quantifying a security attribute called mean time to security failure (MTTSF) of a VM-based intrusion-tolerant system based on queueing theory.
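The standard recipe behind such a computation (shown here for an illustrative three-state model, not the paper's VM-based model) treats MTTSF as the mean time to absorption of a Markov chain whose absorbing state is security failure:

```python
import numpy as np

# Toy 3-state CTMC: Good --lam--> Vulnerable --gamma--> Failed (absorbing),
# with repair Vulnerable --mu--> Good. Rates are illustrative. The recipe:
# MTTSF = expected time to absorption = first entry of -T^{-1} @ 1,
# where T is the generator restricted to the transient states.
lam, mu, gamma = 0.5, 2.0, 0.8   # attacks, repairs, escalations per hour

T = np.array([[-lam,  lam          ],
              [  mu, -(mu + gamma) ]])   # transient states: Good, Vulnerable

mttsf = -np.linalg.inv(T) @ np.ones(2)
print(f"MTTSF starting from Good: {mttsf[0]:.2f} hours")  # 8.25 here
```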
In a number of information security scenarios, human beings can be better than technical security measures at detecting threats. This is particularly the case when a threat is based on deception of the user rather than exploitation of a specific technical flaw, as with spear-phishing, application spoofing, multimedia masquerading, and other semantic social engineering attacks. Here, we put the concept of the human-as-a-security-sensor to the test with a first case study on a small number of participants subjected to different attacks in a controlled laboratory environment and provided with a mechanism to report these attacks if they spot them. A key challenge is to estimate the reliability of each report, which we address with a machine learning approach. For comparison, we evaluate the ability of known technical security countermeasures to detect the same threats. This initial proof-of-concept study shows that the concept is viable.
Advanced Persistent Threat (APT) attacks have become a major network threat in recent years. Among APT attack techniques, sending a phishing email with malicious documents attached is considered one of the most effective. Although many users have the impression that documents are harmless, a malicious document may in fact contain shellcode to attack victims. To cope with this problem, we design and implement a malicious document detector called Forensor to differentiate malicious documents. Forensor integrates several open-source tools and methods. It first introspects the file format to retrieve the objects inside a document, and then automatically decrypts simple encryption methods, e.g., XOR, rot, and shift, commonly used by malware, to discover potential shellcode. An emulator is used to verify the presence of shellcode; if shellcode is discovered, the file is considered malicious. The experiment used 9,000 benign files and more than 10,000 malware samples from a well-known sample-sharing website. The results show no false negatives and only 2 false positives.
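The deobfuscation step can be sketched as a brute force over the simple encodings named above, flagging shellcode-like byte patterns; the signatures and sample are illustrative, and the real tool confirms candidates in an emulator rather than by signatures alone:

```python
# Brute-force simple encodings (single-byte XOR, ROT-n) over bytes pulled
# from a document object and flag buffers with shellcode-like patterns.
MARKERS = [b"\x90\x90\x90\x90",      # NOP sled
           b"\xe8\x00\x00\x00\x00"]  # call $+5, a common get-EIP stub

def candidates(blob):
    for key in range(256):
        yield f"xor {key:#04x}", bytes(b ^ key for b in blob)
        yield f"rot {key}", bytes((b + key) % 256 for b in blob)

def scan(blob):
    return [name for name, decoded in candidates(blob)
            if any(m in decoded for m in MARKERS)]

payload = bytes(b ^ 0x41 for b in b"\x90\x90\x90\x90\xcc\xcc")  # toy sample
print(scan(payload))  # expect the list to contain 'xor 0x41'
```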
Phishing emails have seriously affected users due to their enormous increase in number and their exquisite camouflage. Users spend ever more effort on distinguishing email properties, so current phishing email detection systems demand more creativity and consideration in filtering on the users' behalf. The proposed research tries to adopt creative computing to detect phishing emails for users through a combination of computing techniques and social engineering concepts. In order to achieve this target, the fraud types are summarised into social engineering criteria through a literature review; a semantic web database is established to extract and store information; and a fuzzy logic control algorithm is constructed to allocate email categories. The proposed approach will help users distinguish the categories of emails and, furthermore, give advice based on the allocated category. For the purpose of illustrating the approach, a case study is presented that simulates a phishing email receiving scenario.
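A toy version of the fuzzy categorisation step; the criteria, rules, and thresholds below are illustrative assumptions, not the paper's:

```python
# Social-engineering cues as fuzzy membership degrees in [0, 1], combined
# by simple fuzzy rules (min = AND, max = OR) into an email category.
def categorise(urgency, link_mismatch, credential_request):
    # Rule 1: urgent AND asking for credentials -> phishing
    phishing = min(urgency, credential_request)
    # Rule 2: a mismatched link on its own -> suspicious
    suspicious = link_mismatch
    # Everything else defaults toward legitimate
    legitimate = 1 - max(phishing, suspicious)
    scores = {"phishing": phishing, "suspicious": suspicious,
              "legitimate": legitimate}
    return max(scores, key=scores.get), scores

label, scores = categorise(urgency=0.9, link_mismatch=0.4, credential_request=0.8)
print(label, scores)   # 'phishing' wins with degree 0.8
```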
Summary form only given. Strong light-matter coupling has recently been successfully explored in the GHz and THz [1] range with on-chip platforms. New and intriguing quantum optical phenomena have been predicted in the ultrastrong coupling regime [2], when the coupling strength Ω becomes comparable to the unperturbed frequency of the system ω. We recently proposed a new experimental platform where we couple the inter-Landau-level transition of a high-mobility 2DEG to the highly subwavelength photonic mode of an LC meta-atom [3], showing a very large Ω/ω_c = 0.87. Our system benefits from the collective enhancement of the light-matter coupling, which comes from the scaling of the coupling Ω ∝ √n, where n is the number of optically active electrons. In our previous experiments [3] and in the literature [4] this number varies from 10⁴ to 10³ electrons per meta-atom. We now engineer a new cavity, resonant at 290 GHz, with an extremely reduced effective mode surface S_eff = 4 × 10⁻¹⁴ m² (FE simulations, CST), yielding large field enhancements above 1500 and allowing us to enter the few-electron (<100) regime. It consists of a complementary metasurface with two very sharp metallic tips separated by a 60 nm gap (Fig. 1(a, b)) on top of a single triangular quantum well. THz-TDS transmission experiments as a function of the applied magnetic field reveal strong anticrossing of the cavity mode with the linear cyclotron dispersion. Measurements for arrays of only 12 cavities are reported in Fig. 1(c). On the top horizontal axis we report the number of electrons occupying the topmost Landau level as a function of the magnetic field. At the anticrossing field of B = 0.73 T we measure approximately 60 electrons ultrastrongly coupled.