Biblio
All over the world, objects are increasingly connected in networks such as the Industrial Internet of Things. Their interconnections, intercommunications and interactions are driving the development of an entirely new whole. Communication and interaction are the norm both for separate components, such as cyber-physical systems, and for the functioning of the system as a whole. This new whole can be likened to a natural ecosystem, where the process of homeostasis ensures the stability and security of the whole. Components of such an industrial ecosystem, or even the industrial ecosystem as a whole, are increasingly targeted by cyber attacks. Such attacks not only threaten the functioning of one or multiple components; they also constitute a threat to the functioning of the new whole. General systems theory can offer a scientific framework for the development of measures to improve the security and stability of both the separate components and the new whole.
GENI (Global Environment for Network Innovations) is a National Science Foundation (NSF) funded program which provides a virtual laboratory for networking and distributed systems research and education. It is well suited for exploring networks at scale, thereby promoting innovations in network science, security, services and applications. GENI allows researchers to obtain compute resources from locations around the United States, connect them using the 100G Internet2 L2 service, install custom software or even custom operating systems on these compute resources, control how network switches in their experiment handle traffic flows, and run their own L3 and above protocols. The GENI architecture incorporates cloud federation: cloud resources can be federated and/or a community of clouds can be formed. At the heart of federation are user identity and the ability to "advertise" cloud resources into the community, including compute, storage, and networking. GENI administrators can carve out which resources are available to the community, so a portion of GENI resources can be reserved for internal consumption. The GENI architecture also provides "stitching" of the compute and storage resources researchers request, yielding an L2 network domain over Internet2's 100G network. Researchers can then run their own Software Defined Networking (SDN) controllers on the provisioned L2 network domain for complete control of network traffic. This capability is useful for large science data transfers (bypassing security devices for high throughput). The Renaissance Computing Institute (RENCI), a research institute in the state of North Carolina, has developed ORCA (Open Resource Control Architecture), a GENI control framework. ORCA is a distributed resource orchestration system serving science experiments; it provides compute resources as virtual machines as well as bare metal.
The ORCA-based GENI rack was designed to serve both High Throughput Computing (HTC) and High Performance Computing (HPC) types of computation. Although GENI is primarily used in universities and research entities today, the GENI architecture can be leveraged in commercial, aerospace and government settings. This paper will go over the architecture of GENI and discuss its use for scientific computing experiments.
Energy efficient High-Performance Computing (HPC) is becoming increasingly important. Recent ventures into this space have introduced an unlikely candidate to achieve exascale scientific computing hardware with a small energy footprint. ARM processors and embedded GPU accelerators originally developed for energy efficiency in mobile devices, where battery life is critical, are being repurposed and deployed in the next generation of supercomputers. Unfortunately, the performance of executing scientific workloads on many of these devices is largely unknown, yet the bulk of computation required in high-performance supercomputers is scientific. We present an analysis of one such scientific code, in the form of Gaussian Elimination, and evaluate both execution time and energy used on a range of embedded accelerator SoCs. These include three ARM CPUs and two mobile GPUs. Understanding how these low power devices perform on scientific workloads will be critical in the selection of appropriate hardware for these supercomputers, for how can we estimate the performance of tens of thousands of these chips if the performance of one is largely unknown?
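The workload class evaluated above can be summarized in a few lines. The sketch below is a generic Gaussian elimination solver with partial pivoting plus a simple wall-clock timing harness; it only illustrates the kind of kernel being benchmarked, not the instrumented code or the energy-measurement setup used in the paper.

```python
import random
import time

def gaussian_eliminate(a, b):
    """Solve A x = b in place by Gaussian elimination with partial pivoting."""
    n = len(a)
    # Forward elimination; pivot on the largest column entry for stability.
    for k in range(n):
        pivot = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[pivot] = a[pivot], a[k]
        b[k], b[pivot] = b[pivot], b[k]
        for i in range(k + 1, n):
            factor = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= factor * a[k][j]
            b[i] -= factor * b[k]
    # Back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x

if __name__ == "__main__":
    n = 100
    a = [[random.random() for _ in range(n)] for _ in range(n)]
    b = [random.random() for _ in range(n)]
    start = time.perf_counter()
    gaussian_eliminate([row[:] for row in a], b[:])
    print(f"n={n}: {time.perf_counter() - start:.4f} s")
```

Scaling `n` and repeating the timing on each target SoC gives the execution-time half of the comparison; energy would additionally require an external power monitor.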
Remote Access Trojans (RATs) give remote attackers interactive control over a compromised machine. Unlike large-scale malware such as botnets, a RAT is controlled individually by a human operator interacting with the compromised machine remotely. The versatility of RATs makes them attractive to actors of all levels of sophistication: they have been used for espionage, information theft, voyeurism and extortion. Despite their increasing use, there are still major gaps in our understanding of RATs and their operators, including motives, intentions, procedures, and weak points where defenses might be most effective. In this work we study the use of DarkComet, a popular commercial RAT. We collected 19,109 samples of DarkComet malware found in the wild and, in the course of two several-week-long experiments, ran as many samples as possible in our honeypot environment. By monitoring a sample's behavior in our system, we are able to reconstruct the sequence of operator actions, giving us a unique view into operator behavior. We report on the results of 2,747 interactive sessions captured in the course of the experiment. During these sessions operators frequently attempted to interact with victims via remote desktop, to capture video, audio, and keystrokes, and to exfiltrate files and credentials. To our knowledge, ours is the first large-scale systematic study of RAT use.
The Internet of Things (IoT) revolution promises to make our lives easier by providing cheap and always connected smart embedded devices, which can interact on the Internet and create added value for human needs. But all that glitters is not gold. Indeed, the other side of the coin is that, from a security perspective, this IoT revolution represents a potential disaster. The plethora of IoT devices that has flooded the market is very badly protected, and thus easy prey for several families of malware that can enslave these devices and incorporate them into very large botnets. This eventually brought Distributed Denial of Service (DDoS) attacks back to the top, making them more powerful and easier to achieve than ever. This paper aims to provide an up-to-date picture of DDoS attacks in the specific context of the IoT, studying how these attacks work and considering the most common families, in terms of their nature and evolution through the years. It also explores the additional offensive capabilities that this arsenal of IoT malware has available to undermine the security of Internet users and systems. We think that this up-to-date picture will be a valuable reference for the scientific community in order to take a first crucial step toward tackling this urgent security issue.
Recently, the increase of interconnectivity has led to a rising number of IoT-enabled devices in botnets. Such botnets are currently used for large-scale DDoS attacks. To keep track of these malicious activities, honeypots have proven to be a vital tool. We developed and set up a distributed and highly scalable WAN honeypot with an attached backend infrastructure for sophisticated processing of the gathered data. To make the processed data understandable, we designed a graphical frontend that displays all relevant information obtained from the data. We group attacks originating from one source within a short period of time into sessions. This enriches the data and enables a more in-depth analysis. We produced common statistics such as usernames, passwords, username/password combinations, password lengths, originating country and more. From the information gathered, we were able to identify common dictionaries used for brute-force login attacks, as well as more sophisticated statistics such as login attempts per session and attack efficiency.
Building the Internet of Things requires deploying a huge number of objects with full or limited connectivity to the Internet. Given that these objects are exposed to attackers and generally not secured-by-design, it is essential to be able to update them, to patch their vulnerabilities and to prevent hackers from enrolling them into botnets. Ideally, the update infrastructure should implement the CIA triad properties, i.e., confidentiality, integrity and availability. In this work, we investigate how the use of a blockchain infrastructure can meet these requirements, with a focus on availability. In addition, we propose a peer-to-peer mechanism, to spread updates between objects that have limited access to the Internet. Finally, we give an overview of our ongoing prototype implementation.
Botnets have long been used for malicious purposes, with huge economic costs to society. With the proliferation of cheap but non-secure Internet-of-Things (IoT) devices generating large amounts of data, the potential for damage from botnets has increased manifold. There are several approaches to detect bots or botnets, though many traditional techniques are becoming less effective as botnets with a centralized command & control structure are being replaced by peer-to-peer (P2P) botnets, which are harder to detect. Several algorithms have been proposed in the literature that use graph analysis or machine learning techniques to detect the overlay structure of P2P networks in communication graphs. Many of these algorithms, however, depend on the availability of a universal communication graph or a communication graph aggregated from several ISPs, which is not likely to be available in reality. In real-world deployments, significant gaps in communication graphs are expected, and any proposed solution should be able to work with partial information. In this paper, we analyze the effectiveness of some community detection algorithms in detecting P2P botnets, especially with partial information. We show that the approach can work with only about half of the nodes reporting their communication graphs, with only a small increase in detection errors.
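The idea of running community detection on an incomplete communication graph can be illustrated with a minimal sketch. The label-propagation routine and the `observed_edges` filter below are generic illustrations, not the specific algorithms evaluated in the paper: the filter keeps only edges reported by at least one monitored node, mimicking the situation where only some vantage points share their communication graphs.

```python
import random
from collections import Counter, defaultdict

def label_propagation(edges, nodes, rounds=20, seed=0):
    """Detect communities: each node repeatedly adopts the most
    common label among its neighbours."""
    rng = random.Random(seed)
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    labels = {n: n for n in nodes}   # every node starts in its own community
    order = list(nodes)
    for _ in range(rounds):
        rng.shuffle(order)
        for n in order:
            if adj[n]:
                counts = Counter(labels[m] for m in adj[n])
                labels[n] = counts.most_common(1)[0][0]
    return labels

def observed_edges(edges, reporting):
    """Keep only edges reported by at least one monitored endpoint,
    mimicking partial visibility of the communication graph."""
    return [(u, v) for u, v in edges if u in reporting or v in reporting]
```

Since labels can only spread along edges, hosts in disconnected parts of the observed graph can never end up sharing a community label, which makes the effect of dropped reports directly measurable.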
In this paper, a novel feature selection method for detecting botnets during their Command and Control (C&C) phase is presented. A major problem is that researchers have proposed features based on their expertise, but there is no method to evaluate these features, even though some of them could yield a lower detection rate than others. To this aim, we find the feature set, based on connections of botnets in their C&C phase, that maximizes the detection rate of these botnets. A Genetic Algorithm (GA) was used to select the set of features that gives the highest detection rate. We used the machine learning algorithm C4.5 to classify connections as belonging or not belonging to a botnet. The datasets used in this paper were extracted from the ISOT and ISCX repositories. Tests were done to obtain the best parameters for the GA and the C4.5 algorithm. We also performed experiments to obtain the best set of features for each botnet analyzed (specific) and for each type of botnet (general). The results, shown at the end of the paper, include a considerable reduction in the number of features and a higher detection rate than the related work presented.
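A genetic search over feature subsets of the kind described can be sketched as follows. This is a minimal generic GA, not the authors' implementation: the fitness function standing in for the C4.5 detection rate is a hypothetical toy (it rewards a known set of "informative" features and penalizes subset size), since training a real classifier is out of scope here.

```python
import random

def genetic_feature_selection(n_features, fitness, pop_size=20,
                              generations=40, mutation_rate=0.05, seed=0):
    """Evolve a 0/1 mask over features that maximizes fitness(mask)."""
    rng = random.Random(seed)
    # Seed the population with the full and empty masks plus random ones.
    pop = [[1] * n_features, [0] * n_features]
    pop += [[rng.randint(0, 1) for _ in range(n_features)]
            for _ in range(pop_size - 2)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def pick():  # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_features)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if rng.random() < mutation_rate else g
                     for g in child]                  # bit-flip mutation
            children.append(child)
        pop = children
        best = max(pop + [best], key=fitness)         # elitism
    return best
```

In the paper's setting, `fitness` would train C4.5 on the connection records restricted to the selected feature columns and return the resulting detection rate.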
The spread of Internet of Things (IoT) botnets like those utilizing the Mirai malware was successful enough to fuel some of the most powerful DDoS attacks that have been seen thus far on the Internet. Two such attacks occurred on October 21, 2016 and September 20, 2016. Since there are an estimated three billion IoT devices currently connected to the Internet, these attacks highlight the need to understand the spread of IoT worms like Mirai and the vulnerability that they create for the Internet. In this work, we describe the spread of IoT worms using a proposed model known as the IoT Botnet with Attack Information (IoT-BAI), which utilizes a variation of the Susceptible-Exposed-Infected-Recovered-Susceptible (SEIRS) epidemic model [14]. The IoT-BAI model has shown that it may be possible to mitigate the frequency of IoT botnet attacks with improved user information, which may positively affect user behavior. Additionally, the IoT-BAI model has shown that increased vulnerability to attack can be caused by new hosts entering the IoT population on a daily basis. Models like IoT-BAI could be used to predict user behavior after significant events in the network, such as a major botnet attack.
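The SEIRS dynamics underlying such a model can be illustrated with a plain Euler integration over population fractions. The rates below (β infection, σ incubation, γ recovery, ξ immunity loss) are illustrative placeholders, not IoT-BAI parameters, and the attack-information and host-arrival extensions of IoT-BAI are not reproduced.

```python
def seirs_step(s, e, i, r, beta, sigma, gamma, xi, dt=0.1):
    """One Euler step of the SEIRS compartment model (fractions sum to 1):
    S -> E at rate beta*S*I, E -> I at rate sigma*E,
    I -> R at rate gamma*I, R -> S at rate xi*R (loss of immunity)."""
    ds = -beta * s * i + xi * r
    de = beta * s * i - sigma * e
    di = sigma * e - gamma * i
    dr = gamma * i - xi * r
    return s + ds * dt, e + de * dt, i + di * dt, r + dr * dt

def simulate(days, beta=0.6, sigma=0.2, gamma=0.1, xi=0.05, dt=0.1):
    """Integrate from an initial state of 1% infected hosts."""
    s, e, i, r = 0.99, 0.0, 0.01, 0.0
    for _ in range(int(days / dt)):
        s, e, i, r = seirs_step(s, e, i, r, beta, sigma, gamma, xi, dt)
    return s, e, i, r
```

Because every outflow term appears as an inflow elsewhere, the four fractions always sum to one, which is a useful sanity check on any SEIRS-style implementation.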
Machine learning has been widely used and has achieved considerable results in various research areas. On the other hand, machine learning becomes a big threat when malicious attackers make use of it for the wrong purposes. One such threat, the self-evolving botnet, has been considered in the past. Self-evolving botnets autonomously predict vulnerabilities by implementing machine learning with the computing resources of zombie computers. Furthermore, they evolve based on the predicted vulnerabilities, and thus have high infectivity. In this paper, we consider several Markov chain models to counter the spread of self-evolving botnets. Through simulation experiments, this paper shows the behavior of these models.
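A minimal Markov chain of the kind used to study botnet spread tracks the probability that a host is in each state and advances it with a transition matrix. The states and transition probabilities below are illustrative assumptions, not values taken from the paper.

```python
def step(dist, P):
    """Advance a probability distribution one step: dist' = dist @ P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def evolve(dist, P, steps):
    """Apply the transition matrix repeatedly."""
    for _ in range(steps):
        dist = step(dist, P)
    return dist

# States: 0 = susceptible, 1 = infected (bot), 2 = patched/immune.
# Illustrative per-step probabilities, not fitted to any real botnet.
P = [
    [0.90, 0.10, 0.00],   # susceptible: 10% chance of infection
    [0.00, 0.80, 0.20],   # infected: 20% chance of being cleaned and patched
    [0.05, 0.00, 0.95],   # patched: 5% chance of reverting (e.g. device reset)
]
```

Countermeasures such as faster patching translate into edits of individual transition probabilities, whose long-run effect on the infected fraction can then be read off the evolved distribution.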
Botnets have been a serious threat to Internet security. With their constantly increasing sophistication and resilience, a new trend has emerged, shifting botnets from the traditional desktop to the mobile environment. As in the desktop domain, detecting mobile botnets is essential to minimize the threat that they pose. Among the diverse set of strategies applied to detect these botnets, the ones that show the best and most generalizable results involve discovering patterns in their anomalous behavior. In the mobile botnet field, one way to detect these patterns is by analyzing the operational parameters of this kind of application. In this paper, we present an anomaly-based and host-based approach to detect mobile botnets. The proposed approach uses machine learning algorithms to identify anomalous behaviors in statistical features extracted from system calls. Using a self-generated dataset containing 13 families of mobile botnets and legitimate applications, we were able to test the performance of our approach in a close-to-reality scenario. The proposed approach achieved strong results, including low false positive rates and high detection rates.
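The anomaly detection step can be sketched as a simple per-feature z-score test over statistical features (for example, system-call counts per time window). This is a generic stand-in, not the machine learning models the paper evaluates; the feature layout and the 3-sigma threshold are assumptions.

```python
import math

def fit_baseline(samples):
    """Per-feature mean and standard deviation of normal behaviour."""
    n, k = len(samples), len(samples[0])
    means = [sum(s[i] for s in samples) / n for i in range(k)]
    stds = [math.sqrt(sum((s[i] - means[i]) ** 2 for s in samples) / n)
            for i in range(k)]
    return means, stds

def is_anomalous(sample, means, stds, z_max=3.0):
    """Flag a sample if any feature deviates more than z_max sigmas."""
    return any(stds[i] > 0 and abs(sample[i] - means[i]) / stds[i] > z_max
               for i in range(len(sample)))
```

A host-based detector would fit the baseline on traces of legitimate applications and flag windows whose system-call statistics fall outside it.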
Peer-to-peer (P2P) botnets have become one of the major threats in network security, serving as the infrastructure responsible for various cyber-crimes. Though a few existing works claim to detect traditional botnets effectively, detecting P2P botnets involves more challenges. In this paper, we present PeerHunter, a community behavior analysis based method capable of detecting botnets that communicate via a P2P structure. PeerHunter starts from a P2P host detection component. Then, it uses mutual contacts as the main feature to cluster bots into communities. Finally, it uses community behavior analysis to detect potential botnet communities and further identify bot candidates. Through extensive experiments with real and simulated network traces, PeerHunter achieves a very high detection rate and low false positives.
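The mutual-contacts feature at the core of the clustering step can be sketched as follows: two hosts are linked when they have contacted enough of the same peers, since bots of the same P2P botnet tend to share many contacts. This is a simplified illustration of the feature only, not PeerHunter's actual pipeline; the flow format and threshold are assumptions.

```python
from collections import defaultdict
from itertools import combinations

def mutual_contact_graph(flows, threshold=2):
    """Link two source hosts if they share at least `threshold`
    contacted peers; returns {(host_a, host_b): shared_count}."""
    contacts = defaultdict(set)
    for src, dst in flows:
        contacts[src].add(dst)
    links = {}
    for a, b in combinations(sorted(contacts), 2):
        shared = len(contacts[a] & contacts[b])
        if shared >= threshold:
            links[(a, b)] = shared
    return links
```

Communities are then the connected clusters of this mutual-contact graph, on which behavioral statistics can be computed to separate botnets from benign P2P applications.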
Several applications adopt electromagnetic sensors whose principle relies on the presence of magnets realized with specific magnetic materials that show rather high remanence but low coercivity. This work concerns the production, analysis and characterization of hybrid composite materials, made with metal powders, which aim to reach those specific properties. In order to obtain the best coercivity and remanence characteristics, various "recipes" have been used, with different percentages of soft and hard magnetic materials bonded together by a plastic binder. The goal was to find the interdependence between the magnetic powder composition and the characteristics of the final material. A soft magnetic material (special Fe powder) has been used to obtain a low coercivity value, while hard materials were primarily used to maintain a good remanent induction; by increasing the soft proportion, a higher magnetic permeability has also been obtained. All the selected materials have been characterized and then tested; in order to verify the validity of the proposed materials, two practical tests have been performed. Special magnets have been realized for comparison with original ones (AlNiCo and ferrite) in two experimental cases: the first consists of an encoder realized with a toothed wheel, and the second regards the special system used for electric guitars.
This paper designs three distribution devices for the strong and smart grid: a novel transformer with dc bias restraining capability, an energy-saving contactor and a controllable reactor with adjustable intrinsic magnetic state, all based on a nanocomposite magnetic material core. The magnetic performance of this material was analyzed and the relationship between remanence and coercivity was determined. The magnetization and demagnetization circuit for the nanocomposite core has been designed based on a three-phase rectification circuit combined with a capacitor charging circuit. The remanence of the nanocomposite core can neutralize the dc bias flux occurring in the transformer main core, can pull in the movable core of the contactor instead of the traditional fixed core, and can adjust the saturation degree of the reactor core. The electromagnetic design of the three distribution devices was conducted, and the simulation and experimental results verify the correctness of the design, which provides intelligent and energy-saving power equipment for the safe operation of smart power grids.
Arrays of nanosized hollow spheres of Ni were studied using micromagnetic simulation with the Object Oriented Micromagnetic Framework. Before presenting the array results, we analyze the properties of an individual hollow sphere in order to separate out the effects due to the array itself. The results in this paper are divided into three parts in order to analyze the magnetic behavior in the static and dynamic regimes. The first part presents calculations for the magnetic field applied parallel to the plane of the array; specifically, we present the magnetization for equilibrium configurations. The obtained magnetization curves show that decreasing the thickness of the shell decreases the coercive field and makes it difficult to reach magnetic saturation. The values of the coercive field obtained in our work are of the same order as those reported in experimental studies in the literature. The magnetic response in our study is dominated by shape effects, and we obtained high values for the reduced remanence, Mr/MS = 0.8. In the second part of this paper, we changed the orientation of the magnetic field and calculated hysteresis curves to study the angular dependence of the coercive field and remanence. In thin shells, we observed how the moments orient tangentially to the spherical surface. For the inversion of the magnetic moments we observed the formation of vortex and onion modes. In the third part of this paper, we present an analysis of the magnetization reversal process in the dynamic regime. The analysis showed that inversion occurs in a nonhomogeneous configuration. We could see that self-demagnetizing effects are predominant in the magnetic properties of the array, and that there are two contributions: one due to the shell as an independent object and the other due to the effects of the array.
This article deals with the estimation of magnet losses in a permanent-magnet motor inserted in a nut-runner. This type of machine has interesting features such as being two-pole, slot-less and running at a high speed (30000 rpm). Two analytical models were chosen from the literature. A numerical estimation of the losses with the 2D Finite Element Method (FEM) was carried out. A detailed investigation of the effect of simulation settings (e.g., mesh size, time-step, remanent flux density in the magnet, superposition of the losses, etc.) was performed. Finally, calculations of the losses with 3D FEM were also run in order to compare the calculated losses with both the analytical and 2D FEM results. The estimation of the losses focuses on a range of frequencies between 10 and 100 kHz.
We present an optimization approach that can be employed to calculate the globally optimal segmentation of a 2-D magnetic system into uniformly magnetized pieces. For each segment, the algorithm calculates the optimal shape and the optimal direction of the remanent flux density vector, with respect to a linear objective functional. We illustrate the approach with results for magnet design problems from different areas, such as a permanent magnet electric motor, a beam-focusing quadrupole magnet for particle accelerators, and a rotary device for magnetic refrigeration.
This paper describes the application of permanent magnets in a permanent magnet generator (PMG) for renewable energy power plants. The permanent magnets used are bonded hybrid magnets made from a mixture of 50 wt% barium ferrite magnetic powder and 50 wt% NdFeB magnetic powder, with 15 wt% adhesive polymer as a binder. The bonded hybrid magnets were prepared by the hot press method at a pressure of 2 tons and a temperature of 200°C for 15 minutes. The magnetic properties obtained were a remanent induction (Br) of 1.54 kG, a coercivity (Hc) of 1.290 kOe, a maximum energy product (BHmax) of 0.28 MGOe and a surface remanent induction of 1200 gauss.
The manufacturing process of electrical machines influences the geometric dimensions and material properties, e.g. the yoke thickness. These influences occur as statistical variations in the form of manufacturing tolerances. The effect of these tolerances and their potential impact on the mechanical torque output has not been fully studied up to now. This paper conducts a sensitivity analysis for geometric and material parameters. For the general approach, these parameters are varied uniformly in a range of 10%. Two-dimensional finite element analysis is used to simulate the influences at three characteristic operating points. The studied object is an internal permanent magnet machine in the 100 kW range used for hybrid drive applications. The results show a significant dependency on the rotational speed. The general validity is studied by varying the boundary conditions and by considering two further machine designs; this procedure yields matching qualitative results with only small quantitative deviations. To detect the impact of the manufacturing process, realistic tolerance ranges are used. This investigation identifies the airgap and magnet remanent induction as the main parameters for potential torque fluctuation.
The inevitable temperature rise leads to demagnetization of the permanent magnet synchronous motor (PMSM), which is undesirable in electric vehicle applications. This paper presents a nonlinear demagnetization model that takes temperature into account, with a Wiener structure and neural network characteristics. The remanence and intrinsic coercivity are chosen as intermediate variables; thus the relationship between motor temperature and maximal permanent magnet flux is described by the proposed neural Wiener model. Simulation and experimental results demonstrate the precision of the temperature-dependent demagnetization model. This work forms the basis of temperature compensation for the output torque of the PMSM.
Substituting neodymium with ferrite based magnets comes with the penalty of significantly reduced magnetic field energy. Several possibilities to compensate for the negative effects of the lower remanence and coercivity of ferrite magnets are presented and finally combined into the development of a new kind of BLDC machine design. The new design is compared to a conventional machine for the application example of an electric 800 W/48 V automotive coolant pump.
This paper presents the analysis and design of a ferrite permanent magnet synchronous generator (FePMSG) with flux concentration. Despite the well-known advantages of rare earth permanent magnet synchronous generators (REPMSG), the high cost of rare earth permanent magnets represents an important drawback, particularly in competitive markets like wind power. To reduce the cost of permanent magnet machines it is possible to replace the expensive rare earth materials with ferrite. Since ferrite has low remanent magnetization, flux concentration techniques are used to design a cheaper generator. The designed FePMSG is compared with a reference rare earth (NdFeB) permanent magnet synchronous generator, both rated 3 kW, 220 V and 350 rpm. The results, validated with finite element analysis, show that the FePMSG can replace the REPMSG while significantly reducing the active material cost.
Many aspects of our daily lives now rely on computers, including communications, transportation, government, finance, medicine, and education. However, with increased dependence comes increased vulnerability. Therefore, recognizing attacks quickly is critical. In this paper, we introduce a new anomaly detection algorithm based on persistent homology, a tool which computes summary statistics of a manifold. The idea is to represent a cyber network with a dynamic point cloud and compare the statistics over time. The robustness of persistent homology makes for a very strong comparison invariant.
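The flavor of the approach can be conveyed with the 0-dimensional case, where persistent homology tracks the connected components of a point cloud across scales. The union-find sketch below computes only the H0 death times of a Vietoris-Rips filtration; it is a simplification of the paper's method, which works on dynamic point clouds and compares summaries over time.

```python
import math
from itertools import combinations

def h0_persistence(points):
    """Death times of 0-dimensional homology classes (connected
    components) in the Vietoris-Rips filtration of a point cloud.
    All components are born at scale 0; each merge kills one class."""
    n = len(points)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    # Process all pairwise distances in increasing order.
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)    # one component dies at scale d
    return deaths               # n-1 deaths; one class persists forever
```

Comparing such death-time summaries for point clouds captured at successive times gives a simple instance of the "compare the statistics over time" idea; anomalies appear as abrupt changes in the summary.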
Protecting Critical Infrastructures (CIs) against contemporary cyber attacks has become a crucial as well as complex task. Modern attack campaigns, such as Advanced Persistent Threats (APTs), leverage weaknesses in an organization's business processes and exploit vulnerabilities of several systems to hit their target. Although their life-cycle can last for months, these campaigns typically go undetected until they achieve their goal. They usually aim at performing data exfiltration or causing service disruptions, and can also undermine the safety of humans. Novel detection techniques and incident handling approaches are therefore required to effectively protect CI networks and react in a timely manner to this type of threat. Correlating large amounts of data collected from a multitude of relevant sources is necessary, and sometimes required by national authorities, to establish cyber situational awareness and allow suitable countermeasures to be promptly adopted in case of an attack. In this paper we propose three novel methods for security information correlation, designed to discover relevant insights and support the establishment of cyber situational awareness.