Biblio
In this paper we address the problem of developing a neural network technology for e-mail message classification. We analyze basic spam-filtering methods such as sender IP-address analysis, detection of repeated spam messages, and word-based Bayesian filtering. We propose a neural network technology for this problem because neural networks are universal approximators and are effective at classification tasks, and we present a scheme of this technology for classifying e-mail messages as "spam" or "not spam". An effective neural network spam-filtering model is created within the knowledge discovery in databases framework: a training set is formed, the neural network model is trained, and its quality and classification ability are estimated. Experimental studies show that the developed artificial neural network model is adequate and can be used effectively for e-mail message classification. Thus, this paper demonstrates that an effective neural network model can be used for e-mail filtering and presents a scheme for employing an artificial neural network model as part of an intelligent e-mail spam-filtering system.
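As a rough illustration of the word-based neural classification this abstract describes, the sketch below trains a small feedforward network on bag-of-words features with scikit-learn. The tiny in-line corpus, the network size, and all hyperparameters are illustrative assumptions, not the authors' actual data or model.

```python
# Minimal sketch of a word-based neural e-mail classifier (assumed setup,
# not the paper's model): TF-IDF features feeding a small MLP.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now, click here",    # spam
    "Cheap loans approved instantly",      # spam
    "Meeting moved to 3 pm, see agenda",   # not spam
    "Please review the attached report",   # not spam
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

model = make_pipeline(
    TfidfVectorizer(),                                            # bag-of-words features
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
)
model.fit(messages, labels)
print(model.predict(["Free prize, click now"]))  # likely [1] (spam)
```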
This paper describes a neural network technology for forecasting information security incidents. We describe the general problem of analyzing and predicting time series in graphical and mathematical terms, and we propose a neural network model to solve it. To forecast a time series of information security incidents, we generate and describe the data on which the neural network is trained. We propose a neural network structure, train the network, and estimate its adequacy and forecasting ability. We show that a neural network model can be used effectively as part of an intelligent forecasting system.
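A minimal sketch of the general idea (neural forecasting of a time series via a sliding window) is shown below. The synthetic incident counts, the window size, and the regressor configuration are assumptions made for illustration; they do not reproduce the authors' data or network structure.

```python
# Sketch: sliding-window time-series forecasting with a small neural network.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic weekly counts of security incidents (illustrative only).
series = np.array([3, 5, 4, 6, 8, 7, 9, 11, 10, 12, 14, 13, 15, 17, 16], float)

window = 3  # predict the next value from the last 3 observations
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)

next_value = model.predict(series[-window:].reshape(1, -1))
print(f"Forecast for the next period: {next_value[0]:.1f}")
```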
This study aims to enhance the security of the Moodle system environment during the execution of online exams, taking into consideration the most common problems facing online exams and working to solve them. This is handled by improving the security of the Moodle Quiz tool, one of the most important tools in learning management systems in general and in Moodle in particular. The paper includes two enhancements: the first solves the problem of losing answers during a sudden short network disconnection caused by a server crash or other reasons; the second increases the confidentiality of the e-quiz by preventing access to the quiz from more than one computer or browser at the same time. To verify the efficiency of the new quiz tool features, the upgraded tool has been tested on an experimental Moodle test site.
The growth of the internet has brought positive gains such as the emergence of a highly interconnected world. On the flip side, however, there has been growing concern about how secure distributed systems can be built effectively and tested for security vulnerabilities prior to deployment. Developing a secure software product calls for a deep technical understanding of some complex issues regarding the software and its operating environment, as well as a systematic approach to analyzing the software. This paper proposes a method for identifying software security vulnerabilities from software requirement specifications written in the Structured Object-oriented Formal Language (SOFL). Our methodology leverages the concept of providing an early focus on security by identifying potential security vulnerabilities at the requirement analysis and verification phase of the software development life cycle.
The Mutriku wave farm is the first commercial wave power plant in the world; since July 2011 it has been continuously selling electricity to the grid. It operates with OWC technology and has 14 operating Wells-type turbines. The plant's SCADA recording system collects the most important parameters of the turbines, among them the pressure in the inlet chamber, the position of the security valve (from fully open to fully closed), and the power generated in the last 5 minutes. There is also an electricity meter that reports the amount of electric energy sold to the grid. The winter of 2014 (January, February and March), and especially the first fortnight of February, was stormy, with rough sea-state conditions. This was reflected both in the performance of the turbines (high pressure values, up to 9234.2 Pa; low opening degrees of the security valve, down to 49.4°; and high power generation of about 7681.6 W, all as average values) and in the calculated capacity factor (CF = 0.265 in winter and CF = 0.294 in February 2014). The capacity factor is a good tool for comparing different WEC technologies or different locations and shows a pronounced seasonal behavior.
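For readers unfamiliar with the metric, the capacity factor is the ratio of the energy actually delivered to the energy the plant would produce running at rated power over the same period. The sketch below shows that calculation; the rated power and energy figures are hypothetical placeholders, not Mutriku measurements.

```python
# Sketch of the capacity factor (CF) computation; numbers are placeholders.
def capacity_factor(energy_kwh: float, rated_power_kw: float, hours: float) -> float:
    """CF = delivered energy / (rated power * elapsed hours)."""
    return energy_kwh / (rated_power_kw * hours)

# Example: a plant rated at 296 kW delivering 170,000 kWh over a 90-day winter.
hours_in_winter = 90 * 24
print(round(capacity_factor(170_000, 296, hours_in_winter), 3))  # ~0.266
```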
The increasing publication of large amounts of theoretically anonymous data can lead to a number of attacks on people's privacy, and publishing sensitive data without exposing the data owners is generally not part of software developers' concerns. Data privacy regulations create an appropriate scenario for focusing on privacy from the perspective of the data use and exploration that takes place in an organization. The growing number of sanctions for privacy violations motivates a systematic comparison of three well-known machine learning algorithms in order to measure the usefulness of the data after privacy preservation; the scope of the evaluation is extended by comparing them against a known privacy-preservation metric under different parameter scenarios and privacy levels. The use of publicly available implementations, together with the presented methodology, experiments and analysis, provides a framework for working on the problem of privacy preservation. Problems are revealed in measuring the usefulness of the data and in its relationship with privacy preservation. The findings motivate the need for metrics optimized for the privacy preferences of the data owners, since the risk of predicting sensitive attributes by means of machine learning techniques is not usually eliminated. In addition, it is shown that one hundred percent preservation may exist but cannot be measured, while adequate performance of the machine learning models of interest to the data-publishing organization must still be ensured.
The need for data exchange and storage is currently increasing, and with it the need for data exchange and storage devices and media. One of the most commonly used media for exchanging and storing data is the USB flash drive. USB flash drives are widely used because they are easy to carry and have fairly large storage capacity. Unfortunately, this increased need is not matched by an increase in awareness of device security, both for the USB flash drives themselves and for the computers that use them as primary storage devices. This research shows the threats that can arise from the use of USB flash drive devices. The threat used in this research is a fork bomb implemented on an Arduino Pro Micro device that is converted into a USB flash drive. The purpose of the fork bomb is to degrade the memory performance of the affected devices; as a result, memory performance when executing processes slows down. Using a USB flash drive as an attack vector with the fork bomb method prevents users from accessing the attacked operating system. The results obtained indicate that a USB flash drive can be used as a medium for a fork bomb attack on the Windows operating system.
Accountability is a recent paradigm in security protocol design that aims to eliminate traditional trust assumptions on parties and hold them accountable for their misbehavior. It is meant to establish trust in the first place and to recognize and react if this trust is violated. In this work, we discuss a protocol-agnostic definition of accountability: a protocol provides accountability (w.r.t. some security property) if it can identify all misbehaving parties, where misbehavior is defined as a deviation from the protocol that causes a security violation. We provide a mechanized method for the verification of accountability and demonstrate its use for verification and attack finding on various examples from the accountability and causality literature, including Certificate Transparency and Kroll's Accountable Algorithms protocol. We reach a high degree of automation by expressing accountability in terms of a set of trace properties and showing their soundness and completeness.
Cloud computing is the most suitable environment for collaboration among multiple organizations via its multi-tenancy architecture. However, due to the distributed management of policies within these collaborations, the policies may contain several anomalies, such as conflicts and redundancies, which may lead to both safety and availability problems. On the other hand, current cloud computing solutions do not offer verification tools for managing access control policies. In this paper, we propose a cloud policy verification service (CPVS) that facilitates users' management of their own security policies within the OpenStack cloud environment. Specifically, the proposed cloud service offers a policy verification approach that dynamically chooses the adequate policy using Aspect-Oriented Finite State Machines (AO-FSM), where pointcuts and advices are used to adopt Domain-Specific Language (DSL) state machine artifacts. The pointcuts define state patterns representing anomalies (e.g., conflicts) that may occur in a security policy, while the advices define the actions applied at the selected pointcuts to remove the anomalies. To demonstrate the efficiency of our approach, we provide time and space complexities. The approach was implemented as a middleware service within the OpenStack cloud environment, and the implementation results show that the middleware can detect and resolve different policy anomalies in an efficient manner.
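To make the targeted anomalies concrete, the sketch below flags the two kinds mentioned in the abstract: rules matching the same (subject, action, resource) with opposite effects are conflicts, and duplicates are redundancies. The flat rule format is a simplification assumed for illustration; it is not the AO-FSM or OpenStack policy representation used by the authors.

```python
# Sketch: naive detection of conflicts and redundancies in access-control rules.
from itertools import combinations

rules = [
    {"subject": "tenant_a", "action": "read",  "resource": "vm42", "effect": "allow"},
    {"subject": "tenant_a", "action": "read",  "resource": "vm42", "effect": "deny"},
    {"subject": "tenant_b", "action": "write", "resource": "vm42", "effect": "allow"},
]

def key(rule):
    return (rule["subject"], rule["action"], rule["resource"])

for r1, r2 in combinations(rules, 2):
    if key(r1) == key(r2):
        kind = "conflict" if r1["effect"] != r2["effect"] else "redundancy"
        print(f"{kind}: {r1} vs {r2}")
```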
We present a scalable dynamic analysis framework that allows for the automatic evaluation of the privacy behaviors of Android apps. We use our system to analyze mobile apps’ compliance with the Children’s Online Privacy Protection Act (COPPA), one of the few stringent privacy laws in the U.S. Based on our automated analysis of 5,855 of the most popular free children’s apps, we found that a majority are potentially in violation of COPPA, mainly due to their use of third-party SDKs. While many of these SDKs offer configuration options to respect COPPA by disabling tracking and behavioral advertising, our data suggest that a majority of apps either do not make use of these options or incorrectly propagate them across mediation SDKs. Worse, we observed that 19% of children’s apps collect identifiers or other personally identifiable information (PII) via SDKs whose terms of service outright prohibit their use in child-directed apps. Finally, we show that efforts by Google to limit tracking through the use of a resettable advertising ID have had little success: of the 3,454 apps that share the resettable ID with advertisers, 66% transmit other, non-resettable, persistent identifiers as well, negating any intended privacy-preserving properties of the advertising ID.
Information systems are becoming more and more complex and closely linked, and they encounter an enormous amount of nefarious traffic while having to ensure real-time connectivity. Therefore, a defense method needs to be in place. One of the commonly used tools for network security is the intrusion detection system (IDS). An IDS tries to identify fraudulent activity using predetermined signatures or pre-established models of user misbehavior while monitoring incoming traffic. Signature- and behavior-based intrusion detection systems cannot detect new attacks and fail when small behavioral deviations occur. Many researchers have proposed various approaches to intrusion detection using machine learning techniques as a new and promising tool to remedy this problem. In this paper, the authors present a combination of two machine learning methods, unsupervised clustering followed by supervised classification, as a fast, highly scalable and precise packet classification system. The model's performance is assessed on the dataset recently proposed by the Canadian Institute for Cybersecurity and the University of New Brunswick (CICIDS2017). The overall process is fast and yields highly accurate classification results.
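A minimal sketch of the described two-stage pipeline (unsupervised clustering whose output feeds a supervised classifier) is shown below. Random features stand in for CICIDS2017 flow records, and the choice of KMeans plus a random forest is an assumption for illustration, not necessarily the authors' exact combination.

```python
# Sketch: unsupervised clustering followed by supervised classification.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))            # placeholder flow features
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # placeholder benign/attack labels

# Stage 1: cluster the traffic and append the cluster id as an extra feature.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
X_aug = np.column_stack([X, clusters])

# Stage 2: supervised classification on the augmented features.
X_tr, X_te, y_tr, y_te = train_test_split(X_aug, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```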
Privacy has become a critical topic in the engineering of electric systems. This work proposes an approach to smart-grid-specific privacy requirements engineering by extending previous general privacy requirements engineering frameworks; the proposed extension goes one step further by focusing on privacy in the smart grid. An alignment of smart grid privacy requirements, dependability issues, and privacy requirements engineering methods is presented. Starting from this alignment, a Threat Tree Analysis is performed to obtain a first set of generic, high-level privacy requirements. This set is formulated mostly on the data rather than the information level and provides the basis for further project-specific refinement.
The smart grid changes the way energy is produced and distributed. In addition, both energy and information are exchanged bidirectionally among participating parties. Therefore, heterogeneous systems have to cooperate effectively to achieve a common high-level use case, such as smart metering for billing or demand response for load curtailment, and a substantial amount of personal data is often needed for achieving that goal. Capturing and processing personal data in the smart grid increases customer concerns about privacy, and certain statutory and operational requirements regarding privacy-aware data processing and storage have to be met. An increase in privacy constraints, however, often limits the operational capabilities of the system. In this paper, we present an approach that automates the process of finding an optimal balance between privacy requirements and operational requirements in a smart grid use case and application scenario. This is achieved by formally describing use cases in an abstract model and by finding an algorithm that determines the optimum balance by forward mapping privacy and operational impacts. For this optimal balancing algorithm, both a numeric approximation and, if feasible, an analytic assessment are presented and investigated. The system is evaluated by applying the tool to a real-world use case from the University of Southern California (USC) microgrid.
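The sketch below illustrates the numeric side of such a balancing step: privacy impact and operational impact are modeled as opposing functions of a single design parameter (here a hypothetical metering interval) and the combined impact is minimized by grid search. The impact functions are invented for illustration and do not reproduce the authors' abstract use-case model.

```python
# Sketch: numeric approximation of a privacy/operational trade-off optimum.
import numpy as np

intervals = np.linspace(1, 60, 600)       # candidate metering intervals (minutes)

privacy_impact = 1.0 / intervals          # finer reporting -> worse privacy
operational_impact = intervals / 60.0     # coarser reporting -> worse operation
total_impact = privacy_impact + operational_impact

best = intervals[np.argmin(total_impact)]
print(f"Numerically optimal interval: {best:.1f} minutes")
```

For this toy objective the analytic optimum is also available in closed form (the minimum of 1/x + x/60 lies at x = sqrt(60) ≈ 7.7 minutes), mirroring the paper's distinction between a numeric approximation and an analytic assessment.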