Bibliography
This paper designs three distribution devices for the strong, smart grid: a novel transformer with a DC-bias-restraining function, an energy-saving contactor, and a controllable reactor with an adjustable intrinsic magnetic state, all based on a nanocomposite magnetic material core. The magnetic performance of this material was analyzed and the relationship between remanence and coercivity was determined. A magnetization and demagnetization circuit for the nanocomposite core was designed, based on a three-phase rectification circuit combined with a capacitor charging circuit. The remanence of the nanocomposite core can neutralize the DC bias flux that occurs in the transformer main core, can pull in the movable core of the contactor in place of the traditional fixed core, and can adjust the saturation degree of the reactor core. The electromagnetic design of the three distribution devices was carried out, and simulation and experimental results verify the correctness of the design, which provides intelligent and energy-saving power equipment for the safe operation of smart power grids.
Arrays of nanosized hollow spheres of Ni were studied using micromagnetic simulation with the Object Oriented Micromagnetic Framework (OOMMF). Before presenting results for the array, we analyze the properties of an individual hollow sphere in order to separate the effects genuinely due to the array. The results in this paper are divided into three parts, analyzing the magnetic behavior in the static and dynamic regimes. The first part presents calculations for a magnetic field applied parallel to the plane of the array; specifically, we present the magnetization of equilibrium configurations. The obtained magnetization curves show that decreasing the thickness of the shell decreases the coercive field and makes magnetic saturation difficult to reach. The coercive field values obtained in our work are of the same order as those reported in experimental studies in the literature. The magnetic response in our study is dominated by shape effects, and we obtained high values for the reduced remanence, Mr/MS = 0.8. In the second part of this paper, we changed the orientation of the magnetic field and calculated hysteresis curves to study the angular dependence of the coercive field and remanence. In thin shells, we observed how the moments orient tangentially to the spherical surface, and for the inversion of the magnetic moments we observed the formation of vortex and onion modes. In the third part of this paper, we present an analysis of the magnetization reversal process in the dynamic regime. The analysis showed that inversion occurs in a nonhomogeneous configuration. Self-demagnetizing effects are predominant in the magnetic properties of the array, and two contributions can be distinguished: one due to the shell as an independent object and the other due to the effects of the array.
This article deals with the estimation of magnet losses in a permanent-magnet motor mounted in a nut-runner. This type of machine has interesting features such as being two-pole, slot-less, and running at a high speed (30000 rpm). Two analytical models were chosen from the literature, and a numerical estimation of the losses with the 2D Finite Element Method (FEM) was carried out. A detailed investigation of the effect of simulation settings (e.g., mesh size, time step, remanent flux density in the magnet, superposition of the losses) was performed. Finally, 3D-FEM loss calculations were also run in order to compare the results with both the analytical and 2D-FEM results. The loss estimation focuses on the frequency range between 10 and 100 kHz.
We present an optimization approach that can be employed to calculate the globally optimal segmentation of a 2-D magnetic system into uniformly magnetized pieces. For each segment, the algorithm calculates the optimal shape and the optimal direction of the remanent flux density vector, with respect to a linear objective functional. We illustrate the approach with results for magnet design problems from different areas, such as a permanent magnet electric motor, a beam-focusing quadrupole magnet for particle accelerators, and a rotary device for magnetic refrigeration.
We present algorithmic techniques for parallel PDE solvers that leverage numerical smoothness properties of physics simulation to detect and correct silent data corruption within local computations. We initially model such silent hardware errors (which are of concern for extreme scale) via injected DRAM bit flips. Our mitigation approach generalizes previously developed "robust stencils" and uses modified linear algebra operations that spatially interpolate to replace large outlier values. Prototype implementations for 1D hyperbolic and 3D elliptic solvers, tested on up to 2048 cores, show that this error mitigation enables tolerating orders of magnitude higher bit-flip rates. The runtime overhead of the approach generally decreases with greater solver scale and complexity, becoming no more than a few percent in some cases. A key advantage is that silent data corruption can be handled transparently with data in cache, reducing the cost of false-positive detections compared to rollback approaches.
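The core idea of replacing large outlier values by spatial interpolation can be sketched in a few lines. This is a minimal illustration of the concept, not the authors' implementation; the 1D array, the deviation threshold, and the function name are all assumptions for illustration:

```python
def repair_outliers(values, threshold=100.0):
    """Detect and repair isolated outliers in a 1D solution array.

    A silent DRAM bit flip in a floating-point value typically produces
    a huge spike, while a numerically smooth field never jumps away from
    BOTH neighbors at once. A point that does is replaced by the linear
    interpolation of its neighbors (threshold is illustrative)."""
    repaired = list(values)
    for i in range(1, len(values) - 1):
        left, right = values[i - 1], values[i + 1]
        if abs(values[i] - left) > threshold and abs(values[i] - right) > threshold:
            repaired[i] = 0.5 * (left + right)
    return repaired
```

Requiring deviation from both neighbors keeps legitimate steep gradients intact while still catching the isolated spikes that bit flips produce.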
In this paper we propose Mastino, a novel defense system to detect malware download events. A download event is a 3-tuple that identifies the action of downloading a file from a URL, triggered by a client (machine). Mastino exploits global situational awareness, continuously monitoring various network- and system-level events of client machines across the Internet, and provides real-time classification of both files and URLs upon submission of a new, unknown file or URL to the system. To detect download events, Mastino builds a large download graph that captures the subtle relationships among the entities of download events, i.e., files, URLs, and machines. We implemented a prototype of Mastino and evaluated it in a large-scale real-world deployment. Our experimental evaluation shows that Mastino can accurately classify malware download events with an average 95.5% true positive rate (TP) while incurring less than 0.5% false positives (FP). In addition, we show that Mastino can classify a new download event as either benign or malware in just a fraction of a second, and is therefore suitable as a real-time defense system.
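The download-graph idea can be illustrated with a toy structure over the three entity types. The class, the guilt-by-association score, and all identifiers below are hypothetical; the paper's actual classifier over the graph is far more sophisticated:

```python
from collections import defaultdict

class DownloadGraph:
    """Toy tripartite graph over machines, URLs, and file hashes, with one
    node set per entity type and one (machine, url, file) download event
    per edge triple."""

    def __init__(self):
        self.events = []
        self.by_file = defaultdict(set)  # file hash -> URLs serving it
        self.by_url = defaultdict(set)   # URL -> machines downloading from it

    def add_event(self, machine, url, file_hash):
        self.events.append((machine, url, file_hash))
        self.by_file[file_hash].add(url)
        self.by_url[url].add(machine)

    def score_file(self, file_hash, known_bad_urls):
        """Fraction of URLs serving this file that are already known bad:
        a naive guilt-by-association score, standing in for the real
        graph-based classification."""
        urls = self.by_file[file_hash]
        if not urls:
            return 0.0
        return len(urls & known_bad_urls) / len(urls)
```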
Traditional sensitive data disclosure analysis faces two challenges: identifying sensitive data that is not generated by specific API calls, and reporting potential disclosures when the disclosed data is recognized as sensitive only after the sink operations. We address these issues with BidText, a novel static technique to detect sensitive data disclosures. BidText formulates the problem as a type system, in which variables are typed with the text labels they encounter (e.g., during key-value pair operations). The type system features a novel bi-directional propagation technique that propagates variable label sets through forward and backward data flow. A data disclosure is reported if a parameter at a sink point is typed with a sensitive text label. BidText was evaluated on 10,000 Android apps. It reports 4,406 apps with sensitive data disclosures: 4,263 apps with log-based disclosures and 1,688 with disclosures through other sinks such as HTTP requests. Existing techniques can report only 64.0% of what BidText reports, and manual inspection shows that BidText's false positive rate is 10%.
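Bi-directional label propagation can be sketched as a fixpoint computation over assignments. This is a conceptual sketch assuming assignments are reduced to variable pairs, not BidText's actual analysis over Dalvik bytecode:

```python
def propagate_labels(assignments, seed_labels):
    """Bi-directional label-set propagation to a fixpoint.

    assignments: list of (lhs, rhs) pairs meaning `lhs = rhs`.
    seed_labels: dict var -> set of text labels observed on that var.
    Labels flow forward (rhs -> lhs, along data flow) and backward
    (lhs -> rhs, against it) until no set grows."""
    labels = {v: set(s) for v, s in seed_labels.items()}
    changed = True
    while changed:
        changed = False
        for lhs, rhs in assignments:
            for src, dst in ((rhs, lhs), (lhs, rhs)):
                new = labels.get(src, set()) - labels.get(dst, set())
                if new:
                    labels.setdefault(dst, set()).update(new)
                    changed = True
    return labels
```

A sink parameter whose final label set contains a sensitive label (e.g., "IMEI") would then be reported as a disclosure.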
We propose a web intrusion detection and recognition system built on Storm, a distributed, reliable, fault-tolerant real-time data stream processing system. The system is based on machine learning, with feature selection via TF-IDF (Term Frequency–Inverse Document Frequency) and an optimized cosine similarity algorithm, and detects attacks and malicious behavior in real time at a low false positive rate and a high detection rate, protecting the security of user data. Comparative experiments show that the system improves the intrusion recognition rate and the false positive rate to some extent, and can therefore carry out intrusion detection more effectively.
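The TF-IDF weighting and cosine similarity the system builds on can be written out directly. This sketch uses the plain textbook formulas, not the optimized variant the paper describes, and the request tokens are invented:

```python
import math
from collections import Counter

def tf_idf(docs):
    """docs: list of token lists -> list of {term: tf-idf weight} dicts.
    tf is the in-document frequency; idf = log(N / document frequency)."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

An incoming request vector close (high cosine) to known attack vectors would then be flagged.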
With its growth, the Internet has transformed into a global market, with monetary and business activities carried out online. Being the most important resource of the developing world, it is a vulnerable target and needs to be secured against users with malicious intent. Since the Internet has no central surveillance mechanism, attackers, using varied and evolving hacking techniques, occasionally find a way to bypass a system's security; one such class of attacks is intrusion. An intrusion is the act of breaking into a system by compromising its security policies. The technique of examining system data for possible intrusions is known as intrusion detection. For the last two decades, automatic intrusion detection has been an important research topic. Researchers have developed Intrusion Detection Systems (IDS) capable of detecting attacks in several environments; the latest arrivals are machine learning approaches. Machine learning techniques are evolving algorithms that learn from experience, improve their performance in situations they have already encountered, and enjoy a broad range of applications in speech recognition, pattern detection, outlier analysis, etc. Many machine learning techniques have been developed for different applications, and no universal technique works equally well on all datasets. In this work, we evaluate the machine learning algorithms provided by Weka against the standard intrusion detection dataset, KDDCup99. The metrics considered are false positive rate, precision, ROC, and true positive rate.
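The evaluation metrics named above follow directly from a confusion matrix. A small sketch (the function name and label values are illustrative):

```python
def detection_metrics(y_true, y_pred, positive="attack"):
    """Compute TPR (recall), FPR, and precision from paired
    ground-truth and predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    tpr = tp / (tp + fn) if tp + fn else 0.0        # true positive rate
    fpr = fp / (fp + tn) if fp + tn else 0.0        # false positive rate
    precision = tp / (tp + fp) if tp + fp else 0.0
    return {"TPR": tpr, "FPR": fpr, "precision": precision}
```

Sweeping a classifier's decision threshold and plotting TPR against FPR yields the ROC curve also used in the study.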
Data mining is the process of extracting knowledge and interesting patterns from large amounts of data. With the rapid growth of data storage, cloud, and service-based computing, the risk of data misuse has become a major concern. Protecting the sensitive information present in the data is crucial. Data perturbation plays an important role in privacy-preserving data mining, where the major challenge is balancing the privacy guarantee against data utility. We propose a data perturbation method that perturbs the data using fuzzy logic and random rotation, and we describe how the perturbed data retains a level of quality comparable to the original data. The comparisons are illustrated on different multivariate datasets. An experimental study shows that the model is better at achieving a privacy guarantee for the data as well as data utility.
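The random rotation component can be illustrated on 2-D records (the fuzzy logic component is omitted here; the function and seed are assumptions for illustration). The point of rotation perturbation is that a common rotation masks individual values while preserving pairwise distances, and hence much of the utility for distance-based mining:

```python
import math
import random

def random_rotation_perturb(rows, seed=7):
    """Perturb 2-D records by one common random planar rotation.
    Individual attribute values change, but all pairwise Euclidean
    distances are preserved exactly (up to rounding)."""
    theta = random.Random(seed).uniform(0, 2 * math.pi)
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in rows]

def dist(p, q):
    """Euclidean distance between two 2-D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])
```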
Within a few years, cloud computing has emerged as the most promising IT business model. Thanks to its various technical and financial advantages, cloud computing continues to win over new users from scientific and industrial sectors every day. To satisfy users' varied requirements, cloud providers must maximize the performance of their IT resources to ensure the best service at the lowest cost. Performance optimization in the cloud can be pursued at different levels and aspects. In the present paper, we propose to introduce a fuzzy logic process into the scheduling strategy for a public cloud in order to improve response time, processing time, and total cost. Fuzzy logic has proven its ability to solve optimization problems in several fields, such as data mining, image processing, and networking.
Over the last few decades, accessibility scenarios have undergone a drastic change. Today the way people access information and resources is quite different from the age before the Internet. The evolution of the Internet has brought remarkable, epoch-making changes and has become the backbone of the smart city. The vision of the smart city revolves around seamless connectivity: constant connectivity can provide uninterrupted services to users such as e-governance, e-banking, e-marketing, e-shopping, e-payment, and communication through social media, and providing such uninterrupted services to citizens is our prime concern. This paper therefore focuses on a smart handoff framework for next-generation heterogeneous networks in smart cities, providing all-time connectivity to anyone, anyhow, and anywhere. To achieve this, three strategies are proposed for the handoff initialization phase: mobile-controlled, user-controlled, and network-controlled handoff initialization. Each strategy considers a different set of parameters. Results show that combining additional parameters with RSSI, an adaptive threshold, and hysteresis solves the ping-pong and corner effect problems in the smart city.
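The threshold-plus-hysteresis idea behind suppressing ping-pong handoffs can be shown in a toy decision rule. The parameter values and function name are illustrative, not taken from the paper's framework:

```python
def handoff_decision(current_rssi, candidate_rssi,
                     threshold=-80.0, hysteresis=5.0):
    """Initiate a handoff only if the serving signal has dropped below
    the threshold AND the candidate exceeds it by a hysteresis margin
    (all values in dBm). The margin suppresses ping-pong switching when
    two signals hover near equal strength at a cell edge."""
    return (current_rssi < threshold
            and candidate_rssi > current_rssi + hysteresis)
```

Without the hysteresis term, a mobile at the midpoint between two base stations would oscillate between them as the two RSSI values cross repeatedly.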
The performance of clustering is a crucial challenge, especially for pattern recognition. Model aggregation has a positive impact on the efficiency of data clustering: aggregating the resulting clustering models yields less cluttered decision boundaries. In this paper, we study an aggregation scheme that improves the stability and accuracy of clustering and finds a reliable and robust clustering model. We demonstrate the advantages of our aggregation method by running Fuzzy C-Means (FCM) clustering on the Reuters-21578 corpus. Experimental studies show that our scheme optimizes the bias-variance trade-off of the selected model and achieves enhanced clustering of unstructured textual resources.
In order to assure the safety of the Chinese Train Control System (CTCS), it is necessary to ensure that the operational risk is acceptable throughout its life cycle, which requires a pragmatic risk assessment for effective risk control. Many risk assessment techniques currently used in the railway domain are qualitative and rely on the experience of experts, which unavoidably introduces subjective judgement. This paper presents a method that combines fuzzy reasoning with the analytic hierarchy process (AHP) to quantify expert experience into scores for the risk parameters. Fuzzy reasoning is used to obtain the risk of a system hazard, while AHP is used to determine the risk level (RL) of the system and its membership. The method helps the safety analyst calculate the overall collective risk level of the system. A case study of the risk assessment of the CTCS demonstrates that the method gives a quantitative result for collective risk without requiring much information from experts, and supports the risk assessment with a risk level and its membership, which are more valuable for guiding further risk management.
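The AHP side of such a method derives priority weights from a pairwise comparison matrix. A sketch of the standard normalized-column (row-average) approximation to the principal eigenvector, not the paper's exact procedure:

```python
def ahp_weights(pairwise):
    """Approximate the AHP priority vector for an n x n pairwise
    comparison matrix: normalize each column to sum to 1, then average
    across each row. For a perfectly consistent matrix this equals the
    principal eigenvector."""
    n = len(pairwise)
    col_sums = [sum(pairwise[i][j] for i in range(n)) for j in range(n)]
    normalized = [[pairwise[i][j] / col_sums[j] for j in range(n)]
                  for i in range(n)]
    return [sum(row) / n for row in normalized]
```

The resulting weights would then scale the fuzzy risk scores of the individual parameters into a collective risk level.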
Cloud transactions have emerged as a major challenge. This paper aims to find an efficient, optimal way to transfer data in a cloud computing environment. This goal is achieved with the help of soft computing techniques: of the various techniques available, such as fuzzy logic, genetic algorithms, and neural networks, the paper proposes an effective method of intrusion detection using a genetic algorithm. Selecting an optimized path for data transmission proved to be an effective method in the cloud computing environment. Network path optimization increases data transmission speed, making intrusion into the network nearly impossible: intruders are forced to act quickly, which is a difficult task in such a high-speed data transmission network.
In this paper, we extend the Maximum Satisfiability (MaxSAT) problem to Łukasiewicz logic. The MaxSAT problem for a set of formulae Φ is the problem of finding an assignment to the variables in Φ that satisfies the maximum number of formulae. Three possible solutions (encodings) are proposed for the new problem: (1) Disjunctive Linear Relations (DLRs), (2) Mixed Integer Linear Programming (MILP), and (3) the Weighted Constraint Satisfaction Problem (WCSP). Like its Boolean counterpart, the extended fuzzy MaxSAT problem has numerous applications in optimization problems that involve vagueness.
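Łukasiewicz semantics and the fuzzy MaxSAT objective can be illustrated by brute force over a discretized grid of truth values. This toy stand-in is not one of the paper's three encodings; formulas are modeled as functions over assignments:

```python
import itertools

# Łukasiewicz connectives over truth values in [0, 1]
def l_neg(x):
    return 1.0 - x

def l_or(x, y):                 # strong disjunction
    return min(1.0, x + y)

def l_imp(x, y):                # Łukasiewicz implication
    return min(1.0, 1.0 - x + y)

def maxsat_grid(formulas, variables, steps=10):
    """Brute-force fuzzy MaxSAT: search a discretized [0, 1] grid for
    the assignment that fully satisfies (truth value 1.0) the largest
    number of formulas."""
    grid = [i / steps for i in range(steps + 1)]
    best_count, best_assign = -1, None
    for point in itertools.product(grid, repeat=len(variables)):
        assign = dict(zip(variables, point))
        count = sum(1 for f in formulas if f(assign) >= 1.0)
        if count > best_count:
            best_count, best_assign = count, assign
    return best_count, best_assign
```

The DLR, MILP, and WCSP encodings in the paper replace this exponential grid search with exact solver-based optimization.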
Availability is one of the most important requirements in production systems. Maintaining a high level of availability in Infrastructure-as-a-Service (IaaS) cloud computing is a challenging task because of the complexity of service provision. By definition, availability can be maintained by using fault tolerance approaches. Recently, many fault tolerance methods have been developed, but few of them focus on fault detection. In this paper, after a rigorous analysis of the nature of failures, we introduce a technique to identify the failures occurring in an IaaS system. Using a fuzzy logic algorithm, the proposed technique provides better performance in terms of accuracy and detection speed, which is critical for cloud systems.
Wireless sensor networks (WSN) are useful in many practical applications, including agriculture, military, and health care systems. However, the nodes in a sensor network are energy-constrained, and hence their lifespan is limited. Temporal logics provide a facility to predict the lifetime of sensor nodes in a WSN using past and present traffic and environmental conditions. Moreover, fuzzy logic helps to perform inference under uncertainty; combined with temporal constraints, it increases the accuracy of decision making with qualitative information. Hence, a new data collection and cluster-based energy-efficient routing algorithm is proposed in this paper by extending the existing LEACH protocol. The extensions consist of fuzzy temporal rules for making data collection and routing decisions, and fuzzy temporal logic is also used to form clusters and to perform cluster-based routing. The main difference from other cluster-based routing protocols is that two types of cluster heads are used here, one for data collection and the other for routing. Our experiments show that the proposed fuzzy cluster-based routing algorithm with temporal constraints enhances the network lifetime, reduces energy consumption, and improves quality of service by increasing the packet delivery ratio and reducing delay.
The cloud has gained wide acceptance across the globe. Despite this wide acceptance and adoption of cloud computing, certain apprehensions related to the safety and security of data still exist. The service provider needs to convince the client of, and demonstrate, the confidentiality of data on the cloud. This broadly translates to issues related to the process of identifying, developing, maintaining, and optimizing trust with clients regarding the services provided. Continuous demonstration, maintenance, and optimization of trust in the agreed-upon services affects the relationship with a client. The paper proposes a framework for integrating trust at the IaaS level in the cloud. It proposes a novel method of generating a trust index factor that considers both performance and the agility of the feedback received, using fuzzy logic.
Cyber-physical systems (CPS) are often network-integrated to enable remote management, monitoring, and reporting. Such integration has made them vulnerable to cyber attacks originating from an untrusted network (e.g., the Internet). Once an attacker breaches the network security, he could corrupt operations of the system in question, which may in turn lead to catastrophes. Hence there is a critical need to detect intrusions into mission-critical CPS. Signature-based detection may not work well for CPS, whose complexity may preclude the succinct signatures we would need. Specification-based detection requires accurate definitions of system behaviour, which can similarly be hard to obtain due to the CPS's complexity and dynamics, as well as inaccuracies and incompleteness in design documents or operation manuals. Formal models, to be tractable, are often oversimplified, in which case they do not support effective detection. In this paper, we study a behaviour-based machine learning (ML) approach to intrusion detection. Whereas prior unsupervised ML methods have suffered from high missed-detection or false-positive rates, we use a high-fidelity CPS testbed, which replicates all the main physical and control components of a modern water treatment facility, to generate systematic training data for a supervised method. The method not only detects the occurrence of a cyber attack at the physical process layer but also identifies the specific type of attack. Its detection is fast and robust to noise. Furthermore, its adaptive system model learns quickly to match the dynamics of the CPS and its operating environment. It exhibits a low false positive (FP) rate, yet high precision and recall.
A distributed detection method is proposed to detect single stage multi-point (SSMP) attacks on a Cyber Physical System (CPS). Such attacks aim at compromising two or more sensors or actuators at any one stage of a CPS and could totally compromise a controller and prevent it from detecting the attack. However, as demonstrated in this work, using the flow properties of water from one stage to the other, a neighboring controller was found effective in detecting such attacks. The method is based on physical invariants derived for each stage of the CPS from its design. The attack detection effectiveness of the method was evaluated experimentally against an operational water treatment testbed containing 42 sensors and actuators. Results from the experiments point to high effectiveness of the method in detecting a variety of SSMP attacks but also point to its limitations. Distributing the attack detection code among various controllers adds to the scalability of the proposed method.
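The physical-invariant idea can be illustrated with a toy flow-conservation check between neighboring stages. The function, tolerance, and signal names are illustrative assumptions, not the invariants actually derived from the testbed's design:

```python
def invariant_violations(stage1_outflow, stage2_inflow, tolerance=0.05):
    """Flag time steps at which water leaving stage 1 does not match
    (within tolerance) water entering stage 2. Because the check runs
    on a NEIGHBORING controller using its own sensor readings, a fully
    compromised single-stage controller cannot hide the discrepancy."""
    return [t for t, (out_f, in_f)
            in enumerate(zip(stage1_outflow, stage2_inflow))
            if abs(out_f - in_f) > tolerance]
```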
The implementation of automated regulatory control has been around since the middle of the last century, through analog means. It has allowed engineers to operate the plant more consistently by focusing on overall operations and settings instead of individually monitoring local instruments (inside and outside of a control room). A similar approach is proposed for cyber security, where current border-protection designs have been inherited from information technology developments that lack consideration of the high-reliability, high-consequence nature of industrial control systems. Instead of an independent development, however, an integrated approach is taken to develop a holistic understanding of performance. This performance takes shape inside a multiagent design, which provides a notional context to model highly decentralized and complex industrial process control systems, the nervous system of critical infrastructure. The resulting strategy will provide a framework for researching solutions to security and unrecognized-interdependency concerns in industrial control systems.
The University of Illinois at Urbana Champaign (Illinois), Pacific Northwest National Labs (PNNL), and the University of Southern California Information Sciences Institute (USC-ISI) consortium is working toward providing tools and expertise to enable collaborative research to improve security and resiliency of cyber physical systems. In this extended abstract we discuss the challenges and the solution space. We demonstrate the feasibility of some of the proposed components through a wide-area situational awareness experiment for the power grid across the three sites.
Recent attention to aviation cyber-physical systems (ACPS) is driven by the need for seamless integration of the design disciplines that govern physical-world and cyber-world convergence. System convergence is a major obstacle to good ACPS design, owing to the lack of an adequate scientific theoretical foundation for the subject. The absence of a good understanding of the science of aviation system convergence is not due to neglect but to its difficulty: most builders of complex aviation systems have abandoned any science or engineering discipline for system convergence and simply treat it as a management problem. System convergence is almost totally absent from software engineering and engineering curricula. Hence, it is particularly challenging in ACPS, where fundamentally different physical and computational design concerns intersect. In this paper, we propose an integrated approach to handle the system convergence of ACPS based on multiple dimensions, views, paradigms, and tools. This model-integrated development approach addresses the development needs of cyber-physical systems through the pervasive use of models: the physical world and cyber world can be specified and modeled together, converged entirely, and their models integrated seamlessly. The effectiveness of the approach is illustrated by means of a practical case study: specifying and modeling aircraft systems. We specify and model the ACPS by integrating Modelica, ModelicaML, and the Architecture Analysis & Design Language (AADL): the physical world is modeled with Modelica and ModelicaML, and the cyber part with AADL and ModelicaML.