Biblio
In the last decade, numerous Industrial IoT systems have been deployed. Attack vectors and security solutions for these are an active area of research. However, to the best of our knowledge, only very limited insight into the applicability and real-world comparability of attacks exists. To overcome this widespread problem, we have developed and realized an approach for collecting attack traces at a larger scale. Our easily deployable system integrates well into existing networks and enables the investigation of attacks on unmodified commercial devices.
Thanks to the availability and ease of access of the internet and electronic communication devices, people engage with a plethora of information from many online portals. However, news portals sometimes abuse press freedom by manipulating facts. Most of the time, people are unable to discriminate between true and false news. It is difficult to prevent the detrimental impact of Bangla fake news, which spreads quickly through online channels and influences people's judgment. In this work, we investigated many real and fake news pieces in Bangla to discover a common pattern for determining whether an article is disseminating incorrect information. We developed a deep learning model that was trained and validated on our selected dataset, which contains 48,678 legitimate news articles and 1,299 fraudulent ones. To deal with this imbalance, we used random undersampling and then ensembled the resulting models to obtain a combined output. In terms of Bangla text processing, our proposed model achieved an accuracy of 98.29% and a recall of 99%.
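As a rough sketch of the undersample-then-ensemble idea described above (our own minimal illustration, with a simple bag-of-words classifier standing in for the paper's deep learning model): each base learner sees all minority-class samples plus a random majority-class subset of equal size, and the ensemble majority-votes.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def undersample_ensemble(texts, labels, n_models=5, seed=0):
    rng = np.random.default_rng(seed)
    vec = TfidfVectorizer(max_features=20000)
    X = vec.fit_transform(texts)
    y = np.asarray(labels)                     # 1 = fake, 0 = real
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    models = []
    for _ in range(n_models):
        # balanced subsample: all fake articles + an equal-size real sample
        sample = rng.choice(majority, size=len(minority), replace=False)
        idx = np.concatenate([minority, sample])
        models.append(LogisticRegression(max_iter=1000).fit(X[idx], y[idx]))
    def predict(new_texts):
        Xn = vec.transform(new_texts)
        votes = np.mean([m.predict(Xn) for m in models], axis=0)
        return (votes >= 0.5).astype(int)      # majority vote
    return predict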
Recent approaches have proven the effectiveness of local outlier factor-based outlier detection when applied over traffic flow probability distributions. However, these approaches used distance metrics based on the Bhattacharyya coefficient when calculating probability distribution similarity, and the limited expressiveness of the Bhattacharyya coefficient restricted their accuracy. The crucial deficiency of the Bhattacharyya distance metric is its inability to compare distributions with non-overlapping sample spaces over the domain of natural numbers. Traffic flow intensity varies greatly, which results in numerous non-overlapping sample spaces, rendering metrics based on the Bhattacharyya coefficient inappropriate. In this work, we address this issue by exploring alternative distance metrics and showing their applicability on a massive real-life traffic flow data set from 26 vital intersections in The Hague. The results on these data, collected from 272 sensors for more than two years, show various advantages of the Earth Mover's distance in both effectiveness and efficiency.
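The deficiency is easy to demonstrate. In the toy comparison below (ours, not the paper's experiment), two intensity histograms with disjoint supports always have a Bhattacharyya coefficient of 0, so the induced distance saturates, whereas the 1-D Earth Mover's distance, computable as the sum of absolute CDF differences over an integer domain, still reflects how far apart the distributions are.

import numpy as np

def bhattacharyya_coefficient(p, q):
    return float(np.sum(np.sqrt(p * q)))

def emd_1d(p, q):
    # EMD between 1-D histograms on the same integer grid:
    # the L1 distance between their cumulative distributions.
    return float(np.sum(np.abs(np.cumsum(p) - np.cumsum(q))))

near = np.array([0, 1, 0, 0, 0, 0, 0, 0, 0, 0], float)  # all mass at count 1
mid  = np.array([0, 0, 0, 1, 0, 0, 0, 0, 0, 0], float)  # all mass at count 3
far  = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1], float)  # all mass at count 9

for other in (mid, far):
    print(bhattacharyya_coefficient(near, other),  # 0.0 in both cases
          emd_1d(near, other))                     # 2.0 vs 8.0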
Many organizations process and store classified data within their computer networks. Owing to the value of the data they hold, such organizations are more likely to be targeted by adversaries, and the sensitive ones accordingly resort to an 'air-gap' approach to ensure better protection of their networks. However, despite this physical and logical isolation, attackers have successfully compromised such networks, Stuxnet and Agent.btz being cases in point. Such attacks were possible through the successful manipulation of human beings. Persistent reconnaissance of the employees and collection of their data often form the first step in building up such attacks. With the rapid integration of social media into our daily lives, the prospects for data-seekers on that platform are greater than ever. The inherent risks and vulnerabilities of social networking sites and apps have cultivated a rich environment for foreign adversaries to cherry-pick personal information and carry out successful profiling of employees assigned to sensitive appointments. With further targeted social engineering techniques against the identified employees and their families, attackers extract more and more relevant data to assemble an intelligence picture. Finally, all the information is fused to design further sophisticated attacks against the air-gapped facility for data pilferage. The success of adversaries in harvesting the personal information of victims largely depends on the common errors committed by legitimate users while on duty, in transit, and after retirement. Such errors will keep recurring unless they are traced to the underlying human behaviors and weaknesses and the requisite mitigation framework is worked out.
With billions of devices already connected to the network's edge, the Internet of Things (IoT) is shaping the future of pervasive computing. Nonetheless, IoT applications still cannot escape the need for the computing resources available at the fog layer. This becomes challenging since fog nodes are not necessarily secure or reliable, which widens the IoT threat surface even further. Moreover, the security risk appetites of heterogeneous IoT applications in different domains or deployment contexts should not be assessed identically. To respond to this challenge, this paper proposes a new approach to optimize the allocation of secure and reliable fog computing resources among IoT applications with varying security risk levels. First, the security and reliability levels of fog nodes are quantitatively evaluated, and a security risk assessment methodology is defined for IoT services. Then, an online, incentive-compatible mechanism is designed to allocate secure fog resources to high-risk IoT offloading requests. Compared to the offline Vickrey auction, the proposed mechanism is computationally efficient and yields an acceptable approximation of the social welfare of IoT devices, making it possible to attenuate security risk within the edge network.
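As a much-simplified sketch of what the allocation side of such an online mechanism can look like (our illustration under invented assumptions, not the paper's mechanism; incentive compatibility usually hinges on the admission price being independent of the request's own bid):

def allocate_online(requests, fog_nodes, base_price=1.0):
    """requests arrive one at a time as (request_id, risk_level, declared_value);
    fog_nodes is a list of (node_id, security_score); one request per node."""
    free = sorted(fog_nodes, key=lambda n: n[1])        # ascending security
    assignment, price = {}, base_price
    for req_id, risk, value in requests:
        if not free or value < price:
            continue                                     # request rejected
        # match high-risk requests to the most secure node still free
        node = free.pop() if risk == "high" else free.pop(0)
        assignment[req_id] = node[0]
        price *= 2            # crude bid-independent price escalation
    return assignment

print(allocate_online(
    [("r1", "high", 3.0), ("r2", "low", 1.5), ("r3", "high", 2.0)],
    [("f1", 0.4), ("f2", 0.9)]))
# -> {'r1': 'f2', 'r3': 'f1'}; r2 is rejected by the risen price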
Considered sensitive information by ISO/IEC 24745, biometric data should be stored and used in a protected way; otherwise, the privacy and security of end users can be compromised. In addition, the advent of quantum computers demands quantum-resistant solutions. This work proposes the use of the Kyber and Saber public key encryption (PKE) algorithms together with homomorphic encryption (HE) in a face recognition system. Kyber and Saber, both based on lattice cryptography, were two finalists of the third round of the NIST post-quantum cryptography standardization process; after the third round was completed, Kyber was selected as the PKE algorithm to be standardized. Experimental results show that the recognition performance of the unprotected face recognition system is preserved under the protection, while achieving smaller protected templates and keys and shorter execution times than other lattice-based HE schemes reported in the literature. The parameter sets considered achieve security levels of 128, 192, and 256 bits.
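To illustrate why homomorphic encryption fits this setting, the toy below (ours; textbook Paillier with insecure tiny primes, not the lattice-based Kyber/Saber construction the paper uses) lets a server compute the squared distance between an encrypted probe template and a plaintext reference template without ever decrypting the probe.

from math import gcd
import random

P, Q = 1009, 1013                      # toy primes; real keys are ~2048-bit
N, N2 = P * Q, (P * Q) ** 2
LAM = (P - 1) * (Q - 1) // gcd(P - 1, Q - 1)
MU = pow((pow(1 + N, LAM, N2) - 1) // N, -1, N)

def enc(m):
    r = random.randrange(1, N)
    return (pow(1 + N, m, N2) * pow(r, N, N2)) % N2

def dec(c):
    return ((pow(c, LAM, N2) - 1) // N) * MU % N

def add(c1, c2): return (c1 * c2) % N2          # Enc(a) + Enc(b)
def smul(c, k):  return pow(c, k, N2)           # k * Enc(a), k plaintext

# Client sends Enc(x_i) and Enc(x_i^2); server holds plaintext template y.
x, y = [3, 7, 2], [4, 5, 2]
cx, cx2 = [enc(v) for v in x], [enc(v * v) for v in x]
acc = enc(sum(v * v for v in y))                 # sum y_i^2
for ci, ci2, yi in zip(cx, cx2, y):
    acc = add(acc, ci2)                          # + x_i^2
    acc = add(acc, smul(ci, (N - 2 * yi) % N))   # - 2*y_i*x_i  (mod N)
print(dec(acc))   # 5 == (3-4)^2 + (7-5)^2 + (2-2)^2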
Secret message protection has become a focal point of the network security domain due to violations of network use policies and unauthorized access to public networks. These problems have led to data protection techniques such as cryptography and steganography. Cryptography encrypts a secret message into ciphertext, whereas steganography conceals the secret message within the codes that make up a digital file, such as an image, audio, or video, thereby hiding its very existence during transmission over the public network. This paper presents a steganographic approach that uses digital images for data hiding and aims to provide higher performance by combining type-I fuzzy logic for pre-processing the cover image with difference expansion techniques. Previous methods embedded the secret message directly into the original cover image; the method proposed here first identifies the edges of the cover image and then applies difference expansion to embed the secret message. The experimental results show a 10% improvement over the existing method in payload capacity and the visual quality of the stego image.
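For concreteness, the following is a minimal sketch (ours) of the classic difference expansion step that such schemes build on, embedding one secret bit into a pair of pixel values; the fuzzy-logic edge detection stage that selects suitable pairs is omitted.

def de_embed(x, y, bit):
    l = (x + y) // 2                  # pair average (preserved by embedding)
    h = x - y                         # pair difference
    h2 = 2 * h + bit                  # expand the difference, append the bit
    return l + (h2 + 1) // 2, l - h2 // 2   # real use needs over/underflow checks

def de_extract(x2, y2):
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit = h2 % 2
    h = (h2 - bit) // 2               # undo the expansion
    return l + (h + 1) // 2, l - h // 2, bit

stego = de_embed(100, 98, 1)          # -> (102, 97)
print(de_extract(*stego))             # -> (100, 98, 1): pixels and bit recovered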
Static information flow control (IFC) systems provide the ability to restrict data flows within a program, enabling vulnerable functionality or confidential data to be statically isolated from unsecured data or program logic. Despite the wide applicability of IFC as a mechanism for guaranteeing confidentiality and integrity, the fundamental properties on which computer security relies, existing IFC systems have seen little use: they require users to reason about complicated mechanisms such as lattices of security labels and dual notions of confidentiality and integrity within these lattices. We propose a system that diverges significantly from previous work on information flow control, opting to reason directly about the data that programmers already work with. In doing so, we naturally and seamlessly combine the classically separate notions of confidentiality and integrity into one unified framework, further simplifying reasoning. We motivate and showcase our work through two case studies on TLS private key management: one for Rocket, a popular Rust web framework, and another for Conduit, a server implementation for the Matrix messaging service written in Rust.
Human safety has always been the main priority when working near an industrial robot. With the rise of human-robot collaborative environments, physical barriers that prevent collisions have been disappearing, increasing the risk of accidents and the need for solutions that ensure safe Human-Robot Collaboration. This paper proposes a safety system that implements the Speed and Separation Monitoring (SSM) mode of operation, in which safety zones are defined in the robot's workspace following current standards for industrial collaborative robots. A deep learning-based computer vision system detects, tracks, and estimates the 3D position of operators close to the robot. The robot control system receives each operator's 3D position and generates 3D representations of the operators in a simulation environment. Depending on the zone in which the closest operator is detected, the robot stops or changes its operating speed. Three different operation modes in which the human and robot interact are presented. Results show that the vision-based system can correctly detect and classify in which safety zone an operator is located and that the different proposed operation modes ensure that the robot's reaction and stop times remain within the limits required to guarantee safety.
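The zone logic itself reduces to a distance check against calibrated thresholds, as in this sketch (our illustration; the thresholds below are invented, whereas in practice they derive from standard-mandated stop-time calculations):

ZONES = [          # (minimum distance in metres, speed as % of nominal)
    (2.0, 100),    # green: beyond 2.0 m, full speed
    (1.0, 30),     # yellow: 1.0-2.0 m, reduced speed
    (0.0, 0),      # red: inside 1.0 m, protective stop
]

def speed_override(operator_positions_3d, robot_base=(0.0, 0.0, 0.0)):
    """Return the speed override (%) from the closest operator's 3D position."""
    if not operator_positions_3d:
        return 100
    dist = min(sum((p - b) ** 2 for p, b in zip(pos, robot_base)) ** 0.5
               for pos in operator_positions_3d)
    for threshold, speed in ZONES:
        if dist >= threshold:
            return speed

print(speed_override([(0.5, 2.0, 0.0)]))   # ~2.06 m away -> 100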
Payment Service Providers (PSPs) provide software development toolkits (SDKs) for integrating complex payment processing code into applications. Security weaknesses in payment SDKs can impact thousands of applications. In this work, we propose AARDroid for statically assessing payment SDKs against OWASP’s MASVS industry standard for mobile application security. In creating AARDroid, we adapted application-level requirements and program analysis tools for SDK-specific analysis, tailoring dataflow analysis for SDKs using domain-specific ontologies to infer the security semantics of application programming interfaces (APIs). We apply AARDroid to 50 payment SDKs and discover security weaknesses including saving unencrypted credit card information to files, use of insecure cryptographic primitives, insecure input methods for credit card information, and insecure use of WebViews. These results demonstrate the value of applying security analysis at the SDK granularity to prevent the widespread deployment of insecure code.
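A toy rendition (ours, far simpler than AARDroid) of the ontology-driven idea: API calls are mapped to security semantics, and a card-data source flowing into a file sink without an intervening encryption step is flagged, mirroring the "unencrypted credit card information saved to files" finding. The API names below are hypothetical.

ONTOLOGY = {  # hypothetical API names -> inferred security semantics
    "CardInputField.getText": "CARD_SOURCE",
    "Cipher.doFinal": "ENCRYPT",
    "FileOutputStream.write": "FILE_SINK",
}

def check_trace(api_calls):
    """api_calls: one dataflow path as an ordered list of API names."""
    tainted, findings = False, []
    for call in api_calls:
        tag = ONTOLOGY.get(call)
        if tag == "CARD_SOURCE":
            tainted = True
        elif tag == "ENCRYPT":
            tainted = False                      # data sanitized
        elif tag == "FILE_SINK" and tainted:
            findings.append(f"unencrypted card data reaches {call}")
    return findings

print(check_trace(["CardInputField.getText", "FileOutputStream.write"]))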
Many self-adaptive systems benefit from human involvement and oversight, where a human operator can provide expertise not available to the system and detect problems that the system is unaware of. One way of achieving this synergy is by placing the human operator on the loop—i.e., providing supervisory oversight and intervening in the case of questionable adaptation decisions. To make such interaction effective, an explanation can play an important role in allowing the human operator to understand why the system is making certain decisions and improve the level of knowledge that the operator has about the system. This, in turn, may improve the operator’s capability to intervene and, if necessary, override the decisions being made by the system. However, explanations may incur costs, in terms of delay in actions and the possibility that a human may make a bad judgment. Hence, it is not always obvious whether an explanation will improve overall utility and, if so, then what kind of explanation should be provided to the operator. In this work, we define a formal framework for reasoning about explanations of adaptive system behaviors and the conditions under which they are warranted. Specifically, we characterize explanations in terms of explanation content, effect, and cost. We then present a dynamic system adaptation approach that leverages a probabilistic reasoning technique to determine when an explanation should be used to improve overall system utility. We evaluate our explanation framework in the context of a realistic industrial control system with adaptive behaviors.
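At its core, the decision reduces to an expected-utility comparison, as in this bare-bones sketch (ours; all probabilities, utilities, and costs are invented for illustration): explain only when the expected gain from a better-informed operator outweighs the explanation's cost.

def should_explain(p_correct_with, p_correct_without,
                   utility_correct, utility_wrong, explanation_cost):
    def expected(p):
        return p * utility_correct + (1 - p) * utility_wrong
    gain = expected(p_correct_with) - expected(p_correct_without)
    return gain > explanation_cost

# Operator intervenes correctly 90% of the time with an explanation,
# 60% without; a wrong intervention is costly; explaining costs 5.
print(should_explain(0.9, 0.6, 100, -50, 5))   # True: gain of 45 > cost of 5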
The Advanced Encryption Standard (AES) algorithm plays an important role in data security applications. In general, the S-box module provides most of the confusion and diffusion during AES encryption and causes significant path delay overhead. In most cases, either LUTs or embedded memories are used for S-box computation, and these are vulnerable to attacks that pose a serious risk to real-world applications. In this paper, the SubBytes and inverse SubBytes operations of AES are implemented using composite field arithmetic. The proposed work comprises an efficient multi-round AES cryptosystem with higher-order transformation and a composite field S-box formulation, including possible inner-stage pipelining schemes that can be used for throughput enhancement alongside path delay optimization. Finally, biometric-driven key generation schemes are used to formulate the cipher key dynamically, providing a higher degree of security for the computing devices.
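For reference, the arithmetic that such LUT-free formulations compute is the GF(2^8) multiplicative inverse followed by an affine transform; the paper maps this into composite fields GF((2^4)^2) for hardware efficiency, but the plain GF(2^8) sketch below (ours) produces the same S-box values.

def gmul(a, b):
    """Multiply in GF(2^8) modulo the AES polynomial x^8 + x^4 + x^3 + x + 1."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        a = (a << 1) ^ (0x11B if a & 0x80 else 0)
        b >>= 1
    return r

def ginv(a):
    """Multiplicative inverse in GF(2^8); by convention inv(0) = 0."""
    return next((x for x in range(1, 256) if gmul(a, x) == 1), 0)

def sbox(a):
    x = ginv(a)
    out = 0
    for i in range(8):                       # affine transform over GF(2)
        bit = ((x >> i) ^ (x >> ((i + 4) % 8)) ^ (x >> ((i + 5) % 8)) ^
               (x >> ((i + 6) % 8)) ^ (x >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
        out |= bit << i
    return out

print([hex(sbox(v)) for v in (0x00, 0x01, 0x53)])  # ['0x63', '0x7c', '0xed']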
This article discusses a threat and vulnerability analysis model that enables a full analysis of an organization's information security requirements and documents the results. Using this method helps avoid unnecessary costs for security measures arising from subjective risk assessment, supports planning and implementing protection at all stages of the information system lifecycle, and, by automating the process, minimizes the time an information security specialist spends on risk assessment procedures while reducing both the error rate and the level of professional expertise required. In the initial sections, common risk analysis methods and risk assessment software are analyzed and conclusions are drawn from a comparative analysis; calculations are then carried out in accordance with the proposed model.
Software Defined Networking (SDN) is an emerging technology that provides flexibility in network communication. SDN separates the data forwarding plane from the control plane, which contains the controller, resulting in a centralized network. Due to this centralized control, the network becomes more dynamic, and resources are managed efficiently and cost-effectively. Network virtualization is the transformation of networks from hardware-based to software-based, and Network Function Virtualization (NFV) permits the virtual implementation, adaptable provisioning, and management of network functions. Virtualizing SDN networks strengthens the features of both SDN and NFV and has therefore attracted notable research attention over the last few years. However, the SDN platform introduces network security challenges. When a packet is not recognized by existing flow entry rules, the switch encapsulates it in a packet_in message and passes it to the controller for instructions; a large number of such requests exhausts resources and makes the controller a bottleneck for the entire network, leading to DDoS attacks. Quick protective methods are therefore necessary to prevent the switches from breaking down, and researchers have developed mechanisms that detect and mitigate flooding attacks. This paper provides a comprehensive survey of research on frameworks for detecting and subsequently mitigating DDoS flooding attacks in SDN with the help of NFV.
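One widely used detection idea in this literature is entropy monitoring over packet_in traffic: table-miss floods with randomly spoofed headers make the entropy of, e.g., destination addresses in a time window jump. A minimal sketch of that idea (ours, not any specific surveyed framework):

import math
from collections import Counter

def entropy(addresses):
    counts = Counter(addresses)
    total = len(addresses)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def is_flood(window_dst_addresses, threshold=3.0):
    # Flag windows whose address entropy crosses the threshold;
    # mitigation (e.g., rate-limiting the switch port) would follow.
    return entropy(window_dst_addresses) > threshold

normal = ["10.0.0.2"] * 40 + ["10.0.0.3"] * 20          # few busy hosts
attack = [f"10.0.{i}.{i}" for i in range(60)]           # spoofed spread
print(is_flood(normal), is_flood(attack))               # False True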
The vehicle-to-grid (V2G) network has a clear advantage in terms of economic benefits and has grabbed the interest of power grid operators and electric vehicle (EV) consumers. However, many present V2G techniques use bilinear pairing to execute their authentication schemes, which results in significant computational costs. Furthermore, in existing V2G techniques the system master key is issued independently by a third party, making it vulnerable to leakage if that third party is compromised by an attacker. To overcome these issues, this paper presents an efficient and secure anonymous authentication scheme for V2G networks based on a lightweight authentication system for electric vehicles and smart grids. In the proposed technique, the keys are generated by the trusted authority after successful registration of the EVs with the trusted authority and the dispatching center. The suggested scheme not only enhances the verification performance of V2G networks but also protects against insider attackers.
Code-graph-based software defect prediction methods have become a research focus in the software defect prediction (SDP) field. Among them, the Code Property Graph (CPG) is used as a data representation for code defects because it can characterize the structural features and dependencies of defective code. However, due to the coarse granularity of the CPG, redundant information unrelated to defects is often attached to the characterization of software defects. How to locate software defects at a finer granularity within the CPG therefore remains an open problem. Static analysis is a technique for identifying software defects using predefined defect rules, and many proven static analysis tools exist in industry. In this paper, we propose a method for locating specific types of defects in the Code Property Graph based on the results of static analysis tools. Experiments show that this location method can effectively predict the location of specific defect types in real software programs.
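A simplified take (ours) on such fusion: each analyzer warning carries a file and line location, and the smallest CPG node whose source span covers it is selected, yielding a finer-grained defect location than the coarse graph node as a whole.

def locate(cpg_nodes, warnings):
    """cpg_nodes: list of dicts with 'id', 'file', 'start', 'end' (line span);
    warnings: list of dicts with 'file', 'line', 'defect_type'."""
    findings = []
    for w in warnings:
        covering = [n for n in cpg_nodes
                    if n["file"] == w["file"] and n["start"] <= w["line"] <= n["end"]]
        if covering:
            # smallest covering span = most specific CPG node
            node = min(covering, key=lambda n: n["end"] - n["start"])
            findings.append((w["defect_type"], node["id"]))
    return findings

nodes = [{"id": "func_main", "file": "a.c", "start": 1, "end": 40},
         {"id": "call_strcpy", "file": "a.c", "start": 12, "end": 12}]
print(locate(nodes, [{"file": "a.c", "line": 12, "defect_type": "CWE-120"}]))
# -> [('CWE-120', 'call_strcpy')]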