Bibliography
Air pollution remains a problem that demands attention, particularly in large cities, where it originates from motor vehicle exhaust, factory smoke, and other particulate sources. To address this problem, we built a system that monitors environmental conditions in order to assess air quality and that is expected to help reduce the air pollution that occurs. The system combines a Wireless Sensor Network (WSN) with the Waspmote Smart Environment PRO board to collect temperature, humidity, CO, and CO2 measurements. The sensor data processed on the Waspmote are then used as input to a fuzzy algorithm, which classifies environmental conditions into five categories: Very Good, Good, Average, Bad, and Dangerous. The collected data are forwarded to a Meshlium gateway and stored in a database; because information is exchanged between parties, data confidentiality must also be protected. The implemented system classifies sensor values with the fuzzy algorithm, secures the text data sent to the database via Meshlium, and displays the transmitted data on a website in real time.
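As a rough illustration of the classification step, the sketch below applies triangular membership functions to a single pollutant reading. All breakpoints, and the use of CO alone instead of the full temperature/humidity/CO/CO2 input set, are hypothetical; the paper's actual fuzzy sets and rule base are not reproduced here.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function: rises over [a, b], falls over [b, c]."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical CO fuzzy sets (ppm) for the five output classes.
LABELS = ["Very Good", "Good", "Average", "Bad", "Dangerous"]
SETS = [(-1, 0, 4), (2, 6, 10), (8, 12, 16), (14, 20, 26), (24, 32, 40)]

def classify(co_ppm):
    degrees = [tri(co_ppm, *s) for s in SETS]
    return LABELS[int(np.argmax(degrees))], degrees

print(classify(5.0))  # -> ('Good', [...])
```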
With the advent of massive machine-type communications, security protection is more important than ever. Efforts have been made to build security protection into physical-layer signal design, so-called physical-layer security (PLS). The purpose of this paper is to evaluate the performance of PLS schemes for multiple-input multiple-output (MIMO) systems with space-time block coding (STBC) under imperfect channel estimation. Three PLS schemes for STBC are modeled, their bit error rate (BER) performance is evaluated under various channel estimation error conditions, and their performance characteristics are analyzed.
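For context, a common way to model imperfect channel estimation in such BER studies (an assumption here; the paper's exact error model is not given) is to perturb the true MIMO channel matrix:

```latex
\hat{\mathbf{H}} = \sqrt{1-\sigma_e^{2}}\,\mathbf{H} + \sigma_e \mathbf{E},
\qquad [\mathbf{E}]_{ij} \sim \mathcal{CN}(0,1)
```

where \sigma_e^2 quantifies the estimation error (\sigma_e^2 = 0 recovers perfect CSI); the receiver decodes the STBC using \hat{\mathbf{H}} while the signal actually propagates through \mathbf{H}, and BER curves are obtained by sweeping \sigma_e^2.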
Recent approaches have proven the effectiveness of local outlier factor-based outlier detection applied over traffic flow probability distributions. However, these approaches used distance metrics based on the Bhattacharyya coefficient when calculating probability distribution similarity, and the limited expressiveness of the Bhattacharyya coefficient restricted their accuracy. The crucial deficiency of the Bhattacharyya distance metric is its inability to compare distributions with non-overlapping sample spaces over the domain of natural numbers. Traffic flow intensity varies greatly, which results in numerous non-overlapping sample spaces, rendering metrics based on the Bhattacharyya coefficient inappropriate. In this work, we address this issue by exploring alternative distance metrics and showing their applicability on a massive real-life traffic flow data set from 26 vital intersections in The Hague. The results on these data, collected from 272 sensors over more than two years, show various advantages of the Earth Mover's distance in both effectiveness and efficiency.
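The deficiency and its remedy can be reproduced in a few lines. In the sketch below (with made-up traffic-intensity histograms), the Bhattacharyya distance is infinite for any pair of disjoint distributions and thus cannot rank them, whereas the Earth Mover's distance still reflects how far apart their probability mass lies.

```python
import numpy as np
from scipy.stats import wasserstein_distance  # the Earth Mover's distance in 1-D

def bhattacharyya_distance(p, q):
    """-ln of the Bhattacharyya coefficient between two discrete distributions."""
    bc = np.sum(np.sqrt(p * q))
    return -np.log(bc) if bc > 0 else np.inf

# Illustrative histograms of vehicles/interval with disjoint supports.
support = np.arange(10)
p  = np.array([0.5, 0.5, 0, 0, 0, 0, 0, 0, 0, 0])   # low intensity
q1 = np.array([0, 0, 0, 0.5, 0.5, 0, 0, 0, 0, 0])   # medium intensity
q2 = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0.5, 0.5])   # high intensity

print(bhattacharyya_distance(p, q1), bhattacharyya_distance(p, q2))  # inf, inf
print(wasserstein_distance(support, support, p, q1))                 # 3.0
print(wasserstein_distance(support, support, p, q2))                 # 8.0
```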
Currently, the Dark Web is a key platform for the online trading of illegal products and services, and analysing the .onion sites hosting marketplaces is of interest to law enforcement and security researchers. This paper presents a study of 123k listings obtained from 6 different Dark Web markets. While most current works leverage existing datasets, these are outdated and might not contain new products, e.g., those related to the 2020 COVID pandemic; we therefore build a custom focused crawler to collect the data. Being able to conduct analyses on current data is of considerable importance, as these marketplaces continue to change and grow, both in terms of products offered and users. Moreover, several anti-crawling mechanisms are continually being improved, making this task more difficult and, consequently, reducing the amount of data obtained from these marketplaces in recent years. We conduct a data analysis evaluating multiple characteristics of the products, sellers, and markets, including, among others, the number of sales, the categories existing in the markets, and the origin of the products and the sellers. Our study sheds light on the products and services offered in these markets nowadays. Moreover, we conduct a case study on one particularly productive and dynamic drug market, Cannazon. Our initial goal was to understand its evolution over time by analyzing the variation of products in stock and their prices longitudinally. We realized, though, that during the period of study the market suffered a DDoS attack that damaged its reputation and eroded users' trust, a likely contributor to the subsequent closure of the market by its operators. Consequently, our study provides insights into the last days of operation of such a productive market and showcases the effectiveness of a potential intervention approach based on disrupting the service and fostering mistrust.
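A focused crawler for .onion markets typically routes its HTTP traffic through a local Tor client. The minimal sketch below assumes Tor listening on its default SOCKS port (9050) and uses a placeholder onion address; session handling, login, CAPTCHA solving, and the anti-crawling countermeasures mentioned above are out of scope.

```python
import requests  # install with the SOCKS extra: pip install "requests[socks]"

# Route all traffic through the local Tor client; 'socks5h' resolves
# hostnames through the proxy, which is required for .onion addresses.
TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

def fetch(session, url):
    resp = session.get(url, proxies=TOR_PROXIES, timeout=60)
    resp.raise_for_status()
    return resp.text

session = requests.Session()
# Placeholder address, not a real market.
html = fetch(session, "http://exampleonionaddress.onion/listings?page=1")
# Listing fields (title, price, vendor, category) would be parsed from `html`.
```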
Self-adaptive systems commonly operate in heterogeneous contexts and need to consider multiple quality attributes. Human stakeholders often express their quality preferences by defining utility functions, which self-adaptive systems use to automatically generate adaptation plans. However, the adaptation space of realistic systems is large, and it is unclear how utility functions affect the generated adaptation behavior, as well as structural, behavioral, and quality constraints. Moreover, human stakeholders are often not aware of the underlying tradeoffs between quality attributes. To address this issue, we present an approach that uses machine learning techniques (dimensionality reduction, clustering, and decision tree learning) to explain the reasoning behind automated planning. Our approach focuses on the tradeoffs between quality attributes and on how the choice of weights in utility functions results in different plans being generated. We help humans understand quality attribute tradeoffs, identify key decisions in adaptation behavior, and explore how differences in utility functions result in different adaptation alternatives. We present two systems to demonstrate the approach's applicability and consider its potential application to 24 exemplar self-adaptive systems. Moreover, we describe our assessment of the tradeoff between the information reduction and the amount of explained variance retained by the results obtained with our approach.
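The three techniques named above line up naturally in a short scikit-learn pipeline. The sketch below uses synthetic stand-ins (random quality-attribute vectors and Dirichlet-sampled utility weights), since the paper's systems and attributes are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
qualities = rng.random((200, 6))               # per-plan quality-attribute values
weights = rng.dirichlet(np.ones(3), size=200)  # utility weights behind each plan

# 1) Dimensionality reduction: project plans onto the dominant tradeoff directions.
reduced = PCA(n_components=2).fit_transform(qualities)
# 2) Clustering: group plans with similar adaptation behavior.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(reduced)
# 3) Decision tree learning: an interpretable map from utility weights to clusters.
tree = DecisionTreeClassifier(max_depth=3).fit(weights, clusters)
print(export_text(tree, feature_names=["w_cost", "w_latency", "w_reliability"]))
```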
In software design, guaranteeing the correctness of run-time system behavior while achieving an acceptable balance among multiple quality attributes remains a challenging problem. Providing guarantees about the satisfaction of those requirements when systems are subject to uncertain environments is even more challenging. While recent developments in architectural analysis techniques can assist architects in exploring the satisfaction of quantitative guarantees across the design space, existing approaches are still limited because they do not explicitly link design decisions to the satisfaction of quality requirements. Furthermore, the amount of information they yield can be overwhelming to a human designer, making it difficult to see the forest for the trees. In this paper we present ExTrA (Explaining Tradeoffs of software Architecture design spaces), an approach to analyzing architectural design spaces that addresses these limitations and provides a basis for explaining design tradeoffs. Our approach employs machine learning techniques, namely Principal Component Analysis (PCA) and Decision Tree Learning (DTL), to enable architects to understand how design decisions contribute to the satisfaction of extra-functional properties across the design space. Our results show the feasibility of the approach in two case studies and provide evidence that combining complementary techniques like PCA and DTL is a viable way to facilitate comprehension of tradeoffs in poorly understood design spaces.
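On the design-space side, the DTL step can be illustrated as a tree linking binary design decisions to the satisfaction of a quantitative guarantee. The design space, decision names, and the rule generating the labels below are all invented for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
# Each row is one architecture configuration; each column a binary design decision.
decisions = rng.integers(0, 2, size=(500, 4))
# Illustrative ground truth: the guarantee holds when caching and replication co-occur.
satisfied = (decisions[:, 0] & decisions[:, 1]).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(decisions, satisfied)
print(export_text(tree, feature_names=["cache", "replicate", "encrypt", "batch"]))
```

The printed tree makes each decision's contribution explicit, which is the kind of explanation ExTrA aims to surface from real architectural analyses.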
The vehicle-to-grid (V2G) network has a clear advantage in terms of economic benefits and has attracted the interest of power grid operators and electric vehicle (EV) users. However, many current V2G techniques use bilinear pairing to perform authentication, which incurs significant computational cost. Furthermore, in existing V2G techniques the system master key is issued independently by a third party, so it is vulnerable to leakage if that third party is compromised by an attacker. To overcome these issues, this paper presents an efficient and secure anonymous authentication scheme for V2G networks based on lightweight authentication between electric vehicles and smart grids. In the proposed technique, the keys are generated by the trusted authority after successful registration of the EVs with the trusted authority and the dispatching center. The suggested scheme not only improves the verification performance of V2G networks but also protects against insider attackers.
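The abstract does not detail the protocol, but the general shape of a pairing-free, TA-keyed lightweight authentication can be sketched with symmetric primitives. Everything below, including the key derivation, is a generic illustration and not the paper's scheme (which additionally provides anonymity).

```python
import hmac, hashlib, secrets

def ta_register(ev_id: bytes) -> bytes:
    """TA derives a per-EV key from its master secret at registration
    (hypothetical derivation; the master secret never leaves the TA)."""
    master = b"TA-master-secret-demo"
    return hmac.new(master, ev_id, hashlib.sha256).digest()

def ev_respond(key: bytes, challenge: bytes) -> bytes:
    """EV proves possession of the TA-issued key without sending it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

ev_key = ta_register(b"EV-42")                 # issued after registration
challenge = secrets.token_bytes(16)            # sent by the dispatching center
tag = ev_respond(ev_key, challenge)
expected = hmac.new(ev_key, challenge, hashlib.sha256).digest()
print(hmac.compare_digest(tag, expected))      # constant-time verification -> True
```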
Intrusion detection systems (IDSs) are widely deployed in industrial control systems to protect network security. IDSs typically generate a huge number of alerts, which are time-consuming for system operators to process, and most of the alerts are individually insignificant false alarms. However, discarding these alerts is not the best solution, as they can still provide useful information about the network situation. Based on a study of the characteristics of alerts in industrial control systems, we adopt an enhanced method of exponentially weighted moving average (EWMA) control charts to help operators process alerts. We classify all detection signatures as regular or irregular according to their frequencies, set multiple control limits to detect anomalies, and monitor regular signatures for network security situational awareness. Extensive experiments have been performed using real-world alert data. Simulation results demonstrate that the proposed enhanced EWMA method can greatly reduce the volume of alerts to be processed while preserving significant anomaly information.
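A classic EWMA chart over per-signature alert counts captures the mechanics; the enhanced multi-limit variant and the regular/irregular split are not reproduced here, and the counts below are illustrative.

```python
import numpy as np

def ewma_chart(x, mu, sigma, lam=0.2, L=3.0):
    """Classic EWMA control chart: z_t = lam*x_t + (1-lam)*z_{t-1}, with
    time-varying limits mu +/- L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)^(2t)))."""
    z = np.empty(len(x))
    flags = np.zeros(len(x), dtype=bool)
    z_prev = mu
    for t, xt in enumerate(x):
        z_prev = lam * xt + (1 - lam) * z_prev
        z[t] = z_prev
        width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (t + 1))))
        flags[t] = abs(z_prev - mu) > width
    return z, flags

# Hourly counts of one "regular" detection signature (illustrative numbers).
counts = np.array([5, 6, 5, 7, 6, 5, 6, 30, 6, 5], dtype=float)
baseline = counts[:7]                       # in-control training window
z, flags = ewma_chart(counts, baseline.mean(), baseline.std(ddof=1))
print(np.flatnonzero(flags))                # -> burst at index 7 (and its decay)
```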
Edge detection-based embedding techniques are well known for data security and image quality preservation. These techniques use various edge detectors to classify edge and non-edge pixels in an image and then embed secrets in one or both of these classes. An image carrying concealed data is called a stego image. Notably, none of these studies attempts to restore the original image from the stego image; rather, they concentrate only on extracting the hidden message. This research presents a solution to this reversibility problem. Like the others, our scheme first applies an edge detector, e.g., Canny, to a cover image. It next collects the n LSBs of each edge pixel and concatenates them with the encrypted message stream, then applies a lossless compression algorithm to the combined stream. The compression factor is chosen such that the length of the compressed stream does not exceed the length of the collected LSBs. The compressed stream is then embedded only in the edge pixels by n-LSB substitution. As the scheme does not destroy the originality of the non-edge pixels, it yields better stego quality, and by incorporating encryption, concatenation, compression, and n-LSB substitution, it strengthens the security of the embedded data. The research shows the scheme's effectiveness when embedding small messages.
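The payload-preparation step can be sketched as follows, assuming OpenCV's Canny as the edge detector and zlib as the lossless compressor, and taking the message as already encrypted. Only the stream construction is shown; the actual n-LSB substitution into edge pixels and the extraction/restoration path are omitted. As noted above, embedding succeeds only when the combined stream compresses below the edge-pixel LSB capacity, i.e., for small messages.

```python
import zlib
import numpy as np
import cv2  # OpenCV; Canny is the example edge detector named in the scheme

def prepare_payload(cover_gray: np.ndarray, encrypted_msg: bytes, n: int = 2):
    """Collect the n LSBs of all edge pixels (so they can be restored later),
    concatenate them with the encrypted message, and compress losslessly."""
    edges = cv2.Canny(cover_gray, 100, 200) > 0
    edge_pixels = cover_gray[edges]
    mask = np.uint8((1 << n) - 1)

    # n LSBs of every edge pixel, packed into a byte stream.
    lsb_bits = np.unpackbits(edge_pixels & mask).reshape(-1, 8)[:, -n:]
    stream = np.packbits(lsb_bits.ravel()).tobytes() + encrypted_msg
    payload = zlib.compress(stream, level=9)

    capacity = (edge_pixels.size * n) // 8  # bytes that fit in the edge LSBs
    assert len(payload) <= capacity, "compressed stream exceeds edge capacity"
    # The payload bits would now replace the n LSBs of the edge pixels;
    # non-edge pixels are left untouched, preserving stego quality.
    return edges, payload
```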
In this paper, we present the architecture of a Smart Industry-inspired platform designed for Agriculture 4.0 applications and, specifically, optimized as an ecosystem of SW and HW components for animal repelling. The platform implementation aims at reliability and energy efficiency in a system designed to detect, recognize, identify, and repel wildlife by generating specific ultrasound signals. The wireless sensor network is composed of OpenMote hardware devices coordinated in a mesh network based on the 6LoWPAN protocol and connected to an FPGA-based board. When an animal is detected, the system processes the data received from a video camera connected to the FPGA-based hardware and then activates different ultrasonic jammers belonging to the OpenMote network. In this way, wildlife is progressively driven away in real time from the field to be protected by the activation of specific ultrasonic generators. To monitor the daily behavior of the wildlife, the ecosystem is extended with a time series database running on a Cloud platform.
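On the monitoring side, every detection/repelling event ends up in the Cloud time-series database. The paper does not name the store, so the sketch below assumes InfluxDB and its official Python client; all identifiers (URL, token, org, bucket, tags) are made up.

```python
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Hypothetical Cloud endpoint and credentials.
client = InfluxDBClient(url="https://cloud.example.com", token="...", org="farm")
write_api = client.write_api(write_options=SYNCHRONOUS)

point = (
    Point("wildlife_event")
    .tag("node", "openmote-07")            # OpenMote node that detected the animal
    .tag("species", "wild_boar")           # output of the recognition stage
    .field("ultrasound_activated", True)
    .field("confidence", 0.91)
)
write_api.write(bucket="agriculture", record=point)
```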