Bibliography
The article explores the efficient implementation of arithmetic operations on points of an elliptic curve defined over a prime field. Since the basic arithmetic operations on elliptic curve points are point addition and point doubling, we study the implementation of these operations in various coordinate systems, using both a weighted number system and the Residue Number System (RNS). We show that using a four-module RNS yields an average gain of 8.67% for elliptic curve point addition and 8.32% for point doubling, compared to an implementation based on modular multiplication with the special moduli from NIST FIPS 186.
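As background for the point operations the abstract refers to, the following minimal Python sketch implements affine point addition and doubling over a prime field. The toy curve y^2 = x^3 + 2x + 3 over GF(97) is chosen purely for illustration, and this is a plain weighted-representation version, not the paper's RNS implementation.

    # Minimal sketch (not the paper's RNS method): affine point addition
    # and doubling on an elliptic curve y^2 = x^3 + a*x + b over GF(p).
    def ec_add(P, Q, a, p):
        """Add affine points P and Q (None represents the point at infinity)."""
        if P is None: return Q
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                                        # P + (-P) = infinity
        if P == Q:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope (doubling)
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope (addition)
        x3 = (lam * lam - x1 - x2) % p
        y3 = (lam * (x1 - x3) - y1) % p
        return (x3, y3)

    # Toy curve y^2 = x^3 + 2x + 3 over GF(97); (3, 6) lies on it.
    p, a = 97, 2
    P = (3, 6)
    print(ec_add(P, P, a, p))   # doubling: (80, 10)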
In this paper, we propose an efficient and secure physically-unclonable-function-based multi-factor authenticated key exchange (PUF-MAKE). In a PUF-MAKE setting, we assume two participants: a user and a server. The user keeps multi-factor authenticators and securely holds a PUF-embedded device, while the server maintains PUF outputs for authentication. We first study how to construct a PUF-MAKE protocol efficiently. The main difficulty is that the protocol must establish a common key from both the multi-factor authenticators and the PUF-embedded device. Our construction is the first secure PUF-MAKE protocol that needs only three communication flows.
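To make the challenge-response idea behind PUF-based key establishment concrete, here is a heavily simplified Python sketch. The PUF is simulated with an HMAC over a device secret (a real PUF derives responses from physical variation), and the three-flow protocol and multi-factor binding of the paper are abstracted away.

    # Hedged sketch: a *simulated* PUF plus a key-derivation step.
    import hmac, hashlib, os

    DEVICE_SECRET = os.urandom(32)      # software stand-in for physical variation

    def puf_response(challenge: bytes) -> bytes:
        # A real PUF is a physical function; HMAC only models its behavior.
        return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

    # Enrollment: the server stores (challenge, response) pairs.
    challenge = os.urandom(16)
    stored_response = puf_response(challenge)

    # Authentication: both sides derive a session key from the PUF
    # response plus a fresh nonce; only the genuine device can match it.
    nonce = os.urandom(16)
    server_key = hashlib.sha256(stored_response + nonce).digest()
    device_key = hashlib.sha256(puf_response(challenge) + nonce).digest()
    assert server_key == device_key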
The paper introduces a method of efficient partial firmware update with several advantages over common methods. The amount of data transferred for an update is reduced, energy efficiency is increased, and, as the method is designed for over-the-air updates, radio spectrum occupancy is decreased. The approach described here uses a Lua scripting interface to introduce updatable fragments of invokable native code. This requires a dedicated memory layout, which is also introduced. The method allows distributing not only patches for deployed systems but also on-demand add-ons. Finally, the security aspects of the proposed firmware update system are discussed and its limitations are presented.
This paper presents, for the first time, a study on the security of information processed by video projectors. Examples of video recovery from the electromagnetic radiation of this equipment are illustrated in both laboratory and real-field environments. The paper presents the results of evaluating the timing parameters of the analyzed video signal, which confirm the video standards' specifications. It also illustrates the results of a vulnerability analysis based on the colors used to display the images, as well as the remote video recovery capabilities.
With recent developments in hardware and software technology, email has become a preferred medium of communication, but unsolicited (spam) emails degrade its usefulness, so detection and classification of spam email are needed. In the present research, email spam detection and classification models are built. We use different machine learning classifiers, namely Naive Bayes, SVM, KNN, Bagging and Boosting (AdaBoost), and ensemble classifiers with a voting mechanism. Evaluation and testing of the classifiers are performed on email spam datasets from the UCI Machine Learning Repository and the Kaggle website. Different accuracy measures, including accuracy score, F-measure, recall, precision, support, and ROC, are used. Preliminary results show that the ensemble classifier with a voting mechanism performs best, giving the minimum false positive rate and high accuracy.
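The voting-ensemble setup the abstract describes maps directly onto scikit-learn's VotingClassifier; the sketch below shows one plausible configuration. Dataset loading and feature extraction (e.g., TF-IDF of the email text into X_train/y_train) are assumed and not part of the paper's specification.

    # Sketch of a soft-voting ensemble over the classifiers named above.
    from sklearn.ensemble import VotingClassifier, BaggingClassifier, AdaBoostClassifier
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier

    ensemble = VotingClassifier(
        estimators=[
            ("nb",  MultinomialNB()),
            ("svm", SVC(probability=True)),   # probability=True enables soft voting
            ("knn", KNeighborsClassifier()),
            ("bag", BaggingClassifier()),
            ("ada", AdaBoostClassifier()),
        ],
        voting="soft",   # average predicted class probabilities across members
    )
    # Usage (X_train, y_train assumed): ensemble.fit(X_train, y_train)
    #                                   y_pred = ensemble.predict(X_test)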
Addressing the composite uncertainty (both ambiguity and randomness) and the high-dimensional data stream characteristics of the evaluation indexes, this paper proposes an emergency severity assessment method for cluster supply chains based on a cloud fuzzy clustering algorithm. A summary cloud model generation algorithm is created, and a multi-data fusion method is applied to the cloud model processing of the evaluation indexes for high-dimensional data streams with ambiguity and randomness. The synopsis data of the emergency severity assessment indexes are extracted. Based on a time attenuation model and a sliding window model, a data stream fuzzy clustering algorithm for emergency severity assessment is established. The evaluation results are rationally optimized according to the generalized Euclidean distances of the cluster centers and the micro-cluster weights, and the severity grade of cluster supply chain emergencies is dynamically evaluated. Experimental results show that the proposed algorithm improves clustering accuracy, reduces running time, and can provide more accurate theoretical support for early-warning decisions in cluster supply chain emergencies.
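The time attenuation model mentioned above typically means that older points in the sliding window contribute exponentially less to the cluster statistics. The Python sketch below illustrates that general idea; the decay rate and the toy data are assumptions, not values from the paper.

    # Hedged sketch of exponentially time-attenuated weights in a window.
    def decay_weight(t_now: float, t_arrival: float, lam: float = 0.1) -> float:
        return 2 ** (-lam * (t_now - t_arrival))   # weight halves every 1/lam ticks

    window = [(0.0, [1.2, 0.4]), (5.0, [1.1, 0.5]), (9.0, [0.9, 0.6])]  # (time, point)
    t_now = 10.0
    weights = [decay_weight(t_now, t) for t, _ in window]
    # Decay-weighted centroid: recent points dominate the cluster center.
    centroid = [sum(w * x[i] for w, (_, x) in zip(weights, window)) / sum(weights)
                for i in range(2)]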
Energy distribution grids are considered critical infrastructure, hence the Distribution System Operators (DSOs) have developed sophisticated engineering practices to improve their resilience. Over the last years, due to the "Smart Grid" evolution, this infrastructure has become a distributed system where prosumers (consumers who produce and share surplus energy through the grid) can plug in distributed energy resources (DERs) and manage a bi-directional flow of data and power enabled by an advanced IT and control infrastructure. This introduces new challenges, as prosumers possess neither the skills nor the knowledge to assess the risk or secure the environment from cyber-threats. We propose a simple and usable approach based on the Reference Model of Information Assurance & Security (RMIAS) to support prosumers in the selection of cybersecurity measures. The purpose is to reduce the risk of being directly targeted and to establish collective responsibility among prosumers as grid gatekeepers. The framework moves from a simple risk analysis based on security goals to guidelines for users on adopting adequate security countermeasures. One of the greatest advantages of the approach is that it does not constrain the user to a specific threat model.
Despite the wide range of research and technologies that deal with the problem of routing in computer networks, there remains a gap between the level of network hardware administration and the level of business requirements and constraints. Little has been accomplished in the literature toward direct enforcement of such requirements on the network. This paper presents a new solution for specifying and directly enforcing security policies to control the routing configuration in a software-defined network, using Row-Level Security checks that enable fine-grained security policies on individual rows of database tables. As a first step, we show how a specific class of such policies, namely multilevel security policies, can be enforced on a database-defined network, which presents an abstraction of a network's configuration as a set of database tables. We show that such policies can be used to control the flow of data in the network in either an upward or downward manner.
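To illustrate the multilevel security idea independently of any database engine, the Python sketch below applies a Bell-LaPadula-style "no read up" check per row, mimicking what a row-level security predicate would enforce inside the database. The table contents and label names are illustrative, not taken from the paper.

    # Hedged sketch: per-row multilevel security filtering of a routing table.
    LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

    routes = [
        {"dst": "10.0.1.0/24", "next_hop": "r1", "label": "unclassified"},
        {"dst": "10.0.2.0/24", "next_hop": "r2", "label": "secret"},
    ]

    def visible_rows(rows, subject_clearance):
        """Return only rows whose label is dominated by the subject's clearance."""
        return [r for r in rows if LEVELS[r["label"]] <= LEVELS[subject_clearance]]

    print(visible_rows(routes, "confidential"))   # the 'secret' route is filtered out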
The data collected by IoT devices is of great value, which makes a secure device key management strategy urgently needed to protect it. Existing works introduce blockchain technology to transfer the responsibility for key management from the trusted center of traditional key management strategies to the devices themselves, thus eliminating the trust crisis caused by excessive dependence on third parties. However, the lightweight implementation of IoT devices limits their ability to resist side-channel attacks, allowing the private key to be exposed and the device to be subjected to masquerading attacks. Accordingly, we strengthen the original blockchain-based key management scheme to defend against key exposure attacks. On the one hand, we introduce two hash functions to bind transactions in the blockchain to legitimate users. On the other hand, we design a secure key exchange protocol for identifying legitimate users and exchanging access keys between them. Security analysis and performance evaluation show that the proposed scheme improves the robustness of the network with small increments in storage and communication overhead.
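The following Python sketch illustrates one plausible way two hash functions can bind a transaction to a legitimate user: the first commits to the user's key, the second binds that commitment to the transaction payload. The exact construction in the paper may differ; all names here are illustrative.

    # Hedged sketch of a two-hash user/transaction binding.
    import hashlib, os

    def H1(data: bytes) -> bytes: return hashlib.sha256(data).digest()
    def H2(data: bytes) -> bytes: return hashlib.sha3_256(data).digest()

    user_pubkey = os.urandom(33)                 # placeholder for a real public key
    tx_payload = b"register-device-42"
    binding = H2(H1(user_pubkey) + tx_payload)   # stored alongside the transaction

    # Verification: recomputing with the claimed public key must reproduce
    # the stored binding; a masquerading key yields a different digest.
    assert binding == H2(H1(user_pubkey) + tx_payload)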
As Web traffic on the Internet increases, caching solutions for Web systems are becoming more important, since they can greatly improve system scalability. An important part of a caching solution is the cache replacement policy, which is responsible for selecting victim items to be removed in order to make space for new objects. Typical replacement policies used in practice only exploit temporal reference locality by removing the least recently or least frequently requested items from the cache. Although such policies work well in memory or filesystem caches, they are inefficient for Web systems, since they do not exploit the semantic relationships between Web items. This paper presents a semantic-aware caching policy that can be used in Web systems to enhance scalability. The proposed caching mechanism defines the semantic distance from a web page to a set of pivot pages and uses these distances as the metric for choosing victims. It also uses a function-based metric that combines access frequency and cache item size for tie-breaking. Our simulations show that our enhancements outperform traditional methods in terms of hit rate, which can be useful for websites with many small and similar-in-size web objects.
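The victim-selection rule described above can be sketched in a few lines of Python: semantic distance to the pivot pages is the primary key, and a frequency/size score breaks ties. The distances, the score function, and the entries below are illustrative assumptions, not the paper's exact definitions.

    # Hedged sketch of semantic-aware victim selection with score tie-breaking.
    def score(freq: int, size: int) -> float:
        return freq / size      # prefer keeping small, frequently requested items

    cache = {
        "/news/a": {"sem_dist": 1, "freq": 30, "size": 4096},
        "/blog/x": {"sem_dist": 3, "freq": 30, "size": 4096},
        "/blog/y": {"sem_dist": 3, "freq": 5,  "size": 8192},
    }

    # Victim = semantically farthest from the pivots; lowest score on ties.
    victim = max(cache, key=lambda k: (cache[k]["sem_dist"],
                                       -score(cache[k]["freq"], cache[k]["size"])))
    print(victim)   # "/blog/y"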
Network steganography is a branch of steganography that hides information through packet header manipulation, using protocols as carriers of secret information. Many techniques have already been developed using Transmission Control Protocol (TCP) headers. Among the schemes for hiding information in the TCP header, the Initial Sequence Number (ISN) field is the most difficult to detect, since this field can take arbitrary values within the requirements of the standard. In this paper, a less detectable scheme is proposed by increasing the complexity of hiding data in the TCP ISN using dynamic identifiers. Experimental results show that, using Bayes Net, the proposed scheme outperforms the existing scheme with a detection accuracy as low as 0.52%.
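As a toy illustration of ISN-based hiding with a per-connection dynamic identifier, the Python sketch below XORs a 32-bit secret chunk with a keyed, connection-dependent value so the resulting ISN looks random. This is a simplified stand-in for the paper's scheme, whose actual construction may differ.

    # Hedged sketch: embed/extract a 32-bit secret in a TCP-ISN-like value.
    import hashlib

    def dynamic_id(key: bytes, conn_no: int) -> int:
        digest = hashlib.sha256(key + conn_no.to_bytes(4, "big")).digest()
        return int.from_bytes(digest[:4], "big")

    def embed(secret32: int, key: bytes, conn_no: int) -> int:
        return secret32 ^ dynamic_id(key, conn_no)   # appears as a random ISN

    def extract(isn: int, key: bytes, conn_no: int) -> int:
        return isn ^ dynamic_id(key, conn_no)

    isn = embed(0xDEADBEEF, b"shared-key", conn_no=7)
    assert extract(isn, b"shared-key", conn_no=7) == 0xDEADBEEF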
This study aims to enhance the security of the Moodle system environment during the execution of online exams, taking into consideration the most common problems facing online exams and working to solve them. This is handled by improving the security performance of the Moodle Quiz tool, one of the most important tools in learning management systems in general and in Moodle in particular. The paper covers two enhancements: the first solves the problem of losing answers during a sudden short network disconnection caused by a server crash or other reasons; the second increases the confidentiality of the e-quiz by preventing access to the quiz from more than one computer or browser at the same time. To verify the efficiency of the new quiz tool features, the upgraded tool has been tested on an experimental Moodle test site.
Opportunities arising from IoT-enabled applications are significant, but market growth is inhibited by concerns over security and complexity. To address these issues, we propose the ERAMIS methodology, which is based on the instantiation of a reference architecture that captures common design features, embodies best practice, incorporates good security properties by design, and makes explicit provision for operational security services and processes.
Malware authors are known to reuse existing code; this development process results in software evolution and a sequence of versions of a malware family containing functions that diverge from the initial version. This paper proposes the term evolved similarity to account for this gradual divergence of similarity across the version history of a malware family. While existing techniques are able to match functions in different versions of malware, they work best when the version changes are relatively small. This paper introduces the concept of evolved similarity and presents automated Evolved Similarity Techniques (EST). EST differs from existing malware function similarity techniques by focusing on identifying significantly modified functions in adjacent malware versions, and it can also identify function similarity in malware samples that differ by several versions. The challenge in identifying evolved malware function pairs lies in identifying features that are relatively invariant across evolved code. The research in this paper uses the function call graph to establish these features and then demonstrates the techniques on Zeus malware.
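To make the call-graph feature idea concrete, the Python sketch below derives simple degree-based signatures per function from a toy call graph; such structural features tend to stay relatively stable across versions. The graph, function names, and feature choice are illustrative assumptions, not the paper's exact feature set.

    # Hedged sketch: degree-based function signatures from a call graph.
    import networkx as nx

    g_v1 = nx.DiGraph([("main", "decrypt"), ("main", "c2_connect"),
                       ("c2_connect", "send"), ("c2_connect", "recv")])

    def func_signature(g, f):
        """(in-degree, out-degree) as a crude version-invariant feature."""
        return (g.in_degree(f), g.out_degree(f))

    print({f: func_signature(g_v1, f) for f in g_v1.nodes})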
This paper presents an experimental analysis of current Distributed Denial of Service (DDoS) attacks. Our analysis is based on real data collected by a honeynet system installed on an ISP edge router over a four-month period. In the examined scenario, we identify and analyze malicious activities based on packets captured and analyzed by a network protocol sniffer and signature-based attack analysis tools. Our analysis shows that IoT-based DDoS attacks are one of the latest and most proliferating attack trends in network security. Based on the analysis of the attacks, we describe mitigation techniques that can be applied in providers' networks to counter the trending attack vectors.
Named Data Networking (NDN) intrinsically supports in-network caching and multipath forwarding. These two salient features offer the potential to simultaneously transmit the content segments that comprise the requested content from both original content publishers and in-network caches. However, due to the complexity of maintaining reachability information for off-path cached content at the fine-grained packet level of granularity, multipath forwarding and off-path cached copies have so far been significantly underutilized in NDN. Network-coding-enabled NDN, referred to as NC-NDN, was proposed to effectively utilize multiple on-path routes to transmit content, but off-path cached copies remain unexploited. This work enhances NC-NDN with an On-demand Off-path Cache Exploration based Multipath Forwarding strategy, dubbed O2CEMF, to take full advantage of multipath forwarding and efficiently utilize off-path cached content. In O2CEMF, each network node reactively explores the reachability information of nearby off-path cached content when consumers begin to request a generation of content, and maintains this reachability at the coarser granularity of generations instead. Consumers then simultaneously retrieve content from the original content publisher(s) and the capable off-path caches that were explored. Our experimental studies validate that this strategy improves content delivery performance compared to the present NC-NDN.
Software-Defined Networking (SDN) is a revolutionary networking paradigm that provides the flexibility of programming the network interface according to the needs and demands of the user. SDN is independent of vendor-specific hardware or protocols and allows easy extension of the network. A network customized on user demand is controlled via a single entity, the SDN controller. Because of this, the SDN controller has become more vulnerable to attacks and, more specifically, a single point of failure. It is worth noting that vulnerabilities have been identified in customized applications that are semi-independent of the underlying network infrastructure. SDN undoubtedly provides numerous benefits, such as breaking vendor lock-in, reducing overhead costs, easing innovation, increasing programmability among devices, and introducing new features; but the security of SDN cannot be neglected and has become a major topic of debate. OpenFlow, the communication protocol used in SDN, makes TLS implementation optional; TLS adoption is important, yet the channel remains vulnerable. This paper focuses on making SDN OpenFlow communication more secure through extended TLS support and a defensive algorithm.
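As a baseline for hardening the controller-switch channel, the Python sketch below configures mutually authenticated TLS with the standard ssl module; the certificate file names are placeholders, and this shows generic TLS hardening rather than the paper's specific extended-TLS scheme.

    # Hedged sketch: controller-side mutually authenticated TLS context.
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2     # reject legacy protocol versions
    ctx.verify_mode = ssl.CERT_REQUIRED              # switches must present certificates
    ctx.load_cert_chain("controller.crt", "controller.key")   # placeholder paths
    ctx.load_verify_locations("switch-ca.pem")                # CA for switch certs
    # Usage: wrapped = ctx.wrap_socket(raw_socket, server_side=True)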
Polar codes transform the quality of channels into either completely noisy or completely noiseless channels. This paper presents an extrinsic information transfer (EXIT) analysis for iterative decoding of Polar codes to reveal the mechanism of this channel transformation. The purpose of understanding the transformation process is to comprehend the placement of information bits and frozen bits and the security properties of Polar codes. Mutual information is derived based on the EXIT-chart concept for the check nodes and variable nodes of low-density parity-check (LDPC) codes and is applied to Polar codes. The paper explores the quality of the polarized channels at finite block length; the finite block length is of interest since in the fifth generation of telecommunications (5G) the block length is limited. The paper reveals how the EXIT curves of Polar codes change and explores the polarization characteristics: high mutual-information values are needed for the frozen bits to be detectable; otherwise, the error-correction capability of Polar codes decreases drastically. These results are expected to serve as a reference for the development of Polar codes for 5G technologies and beyond.
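For background, the variable-node and check-node EXIT functions commonly used in Gaussian-approximation analysis of LDPC codes, which the abstract says are carried over to Polar codes, can be written as follows. This is the standard textbook form, not necessarily the exact expressions derived in the paper:

    I_{E,V} = J\left(\sqrt{(d_v - 1)\,\left[J^{-1}(I_{A,V})\right]^2 + \sigma_{ch}^2}\right),
    \qquad
    I_{E,C} \approx 1 - J\left(\sqrt{d_c - 1}\; J^{-1}\left(1 - I_{A,C}\right)\right)

where d_v and d_c are the variable- and check-node degrees, \sigma_{ch}^2 is the variance of the channel LLRs, and J(\cdot) maps the LLR standard deviation to mutual information.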
In the past few years, there has been increasing interest in the machine perception of human expressions and mental states, and Facial Expression Recognition (FER) has attracted growing attention. The Facial Action Unit (AU) is an early method proposed to describe facial muscle movements, which can effectively reflect changes in people's facial expressions. In this paper, we propose a high-performance facial expression recognition method based on facial action units that can run on a low-configuration computer and perform FER on video and real-time camera input. Our method is divided into two parts. In the first part, 68 facial landmarks and image Histograms of Oriented Gradients (HOG) are obtained, and the feature values of the action units are calculated from them. The second part uses three classification methods to realize the mapping from AUs to expressions. We have conducted extensive experiments on popular FER benchmark datasets (CK+ and Oulu-CASIA) to demonstrate the effectiveness of our method.
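The HOG part of the first stage can be reproduced with scikit-image, as in the sketch below. The 68-point landmark detection is assumed to come from a separate detector (e.g., dlib) and is omitted; the image, cell sizes, and array shapes are illustrative.

    # Hedged sketch: HOG descriptor extraction from an aligned face crop.
    import numpy as np
    from skimage.feature import hog

    face = np.random.rand(128, 128)    # placeholder for a grayscale face crop
    features = hog(face, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))
    print(features.shape)              # 1D HOG descriptor fed to the AU classifier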
Deep-learning-based facial expression recognition (FER) has received a lot of attention in the past few years. Most existing deep-learning-based FER methods do not consider domain knowledge well and thereby fail to extract representative features. In this work, we propose a novel FER framework, named Facial Motion Prior Networks (FMPN). In particular, we introduce an additional branch to generate a facial mask so as to focus on facial muscle moving regions. To guide the facial mask learning, we incorporate prior domain knowledge by using the average differences between neutral faces and the corresponding expressive faces as the training guidance. Extensive experiments on three facial expression benchmark datasets demonstrate the effectiveness of the proposed method compared with state-of-the-art approaches.
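The prior guiding the mask branch can be pictured with a few lines of NumPy: averaging the absolute differences between neutral and expressive faces highlights the moving facial regions. The arrays below are toy stand-ins, and normalization details are assumptions rather than the paper's exact recipe.

    # Hedged sketch: a motion prior from neutral/expressive face differences.
    import numpy as np

    neutral = np.random.rand(10, 64, 64)       # 10 aligned neutral face images
    expressive = np.random.rand(10, 64, 64)    # the matching expressive faces

    prior_mask = np.mean(np.abs(expressive - neutral), axis=0)
    prior_mask /= prior_mask.max()             # soft mask in [0, 1] guiding the branch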
Keystroke dynamics studies the way users input text via their keyboards, which is unique to each individual and can form a component of a behavioral biometric system to improve existing account security. Keystroke dynamics systems on free-text data use n-graphs that measure the timing between consecutive keystrokes to distinguish between users. Many algorithms require 500, 1,000, or more keystrokes to achieve equal error rates (EERs) below 10%. In this paper, we propose an instance-based graph comparison algorithm to reduce the number of keystrokes required to authenticate users. Commonly used features such as monographs and digraphs are investigated; feature importance is determined and used to construct a fused classifier, and detection error tradeoff (DET) curves are produced for different numbers of keystrokes. The fused classifier outperforms the state of the art with EERs of 7.9%, 5.7%, 3.4%, and 2.7% for test samples of 50, 100, 200, and 500 keystrokes.
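Digraph features of the kind described above are simply the latencies between consecutive key presses, keyed by the ordered letter pair; the Python sketch below extracts them from a toy keystroke stream. The timestamps are illustrative, and real systems would also use hold times and monograph features.

    # Hedged sketch: digraph (2-graph) latency extraction from keystrokes.
    from collections import defaultdict

    keystrokes = [("t", 0.00), ("h", 0.11), ("e", 0.19), ("t", 0.52), ("h", 0.60)]

    digraphs = defaultdict(list)
    for (k1, t1), (k2, t2) in zip(keystrokes, keystrokes[1:]):
        digraphs[(k1, k2)].append(t2 - t1)    # latency for this ordered pair

    print(dict(digraphs))   # e.g. {('t','h'): [0.11, 0.08], ('h','e'): [0.08], ...}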
Caching methods have been developed over the past 50 years for paging in CPU and database systems, and over the past 25 years for web caching as a main application area among others. Pages of uniform size are usual in CPU caches, whereas web caches store data chunks of widely varying sizes. We study the impact of different object sizes on the performance and overhead of web caching. This entails different caching goals, from the byte and object hit ratio to a generalized value hit ratio for optimizing the costs and benefits of caching with regard to traffic engineering (TE), reduced delays, and other QoS measures. The selection of the cache contents turns out to be crucial for web cache efficiency, with awareness of the size and other properties captured in a score for each object. We introduce a new class of rank exchange caching methods and show how their performance compares to other strategies, with the extensions needed to include sizes and scores for QoS and TE caching goals. Finally, we derive bounds on the object, byte, and value hit ratio for the independent request model (IRM), based on optimal knapsack solutions for the cache content.
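The knapsack view of the hit-ratio bound can be sketched with the classic greedy relaxation: pack objects into the cache capacity in descending order of score per byte. The scores, sizes, and capacity below are illustrative, and the true 0/1-knapsack optimum can exceed this greedy packing slightly.

    # Hedged sketch: greedy knapsack packing as a value-hit-ratio estimate.
    objects = [  # (object id, score = expected value of a hit, size in bytes)
        ("a", 10.0, 1000), ("b", 6.0, 200), ("c", 4.0, 800), ("d", 1.0, 100),
    ]
    capacity = 1200

    chosen, used, value = [], 0, 0.0
    for oid, score, size in sorted(objects, key=lambda o: o[1] / o[2], reverse=True):
        if used + size <= capacity:          # take the densest objects that still fit
            chosen.append(oid); used += size; value += score
    total = sum(score for _, score, _ in objects)
    print(chosen, value / total)             # cached set and its value hit ratio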
We have been investigating methods for establishing an effective, immediate defense mechanism against DDoS attacks on Web applications by hacker botnets, one that can become active immediately, without the preparation time (e.g., for training data) usually required by existing proposals. In this study, we propose a new mechanism, including new data structures and algorithms, that allows the detection and filtering of large numbers of attack packets (Web requests) by monitoring and capturing suspect groups of source IPs that send packets in similar patterns, i.e., with very high and similar frequencies. The proposed algorithm places great emphasis on reducing storage space and processing time, so it promises to be effective for real-time attack response.
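A minimal version of the frequency-monitoring idea is a per-source-IP sliding-window counter, sketched in Python below. The window length and threshold are assumed parameters, and the paper's actual data structures for grouping IPs with similar patterns are more elaborate.

    # Hedged sketch: sliding-window request counting per source IP.
    import time
    from collections import defaultdict, deque

    WINDOW, THRESHOLD = 10.0, 100          # seconds, requests allowed per window
    hits = defaultdict(deque)              # src_ip -> timestamps of recent requests

    def record(src_ip, now=None):
        if now is None:
            now = time.time()
        q = hits[src_ip]
        q.append(now)
        while q and q[0] < now - WINDOW:   # evict timestamps outside the window
            q.popleft()
        return len(q) > THRESHOLD          # True => candidate attack source

    for i in range(150):                   # simulate a burst from one source
        flagged = record("203.0.113.9", now=1000.0 + i * 0.01)
    print(flagged)                         # True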
Adversarial attacks on image classification systems present challenges to convolutional networks and opportunities for understanding them. This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising. Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. When combined with adversarial training, our feature denoising networks substantially improve the state of the art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art has 27.9% accuracy, our method achieves 55.7%; even under extreme 2000-iteration PGD white-box attacks, our method secures 42.6% accuracy. Our method was ranked first in the Competition on Adversarial Attacks and Defenses (CAAD) 2018, achieving 50.6% classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers and surpassing the runner-up approach by 10%. Code is available at https://github.com/facebookresearch/ImageNet-Adversarial-Training.
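The feature-denoising blocks can be pictured with a simplified non-local block in PyTorch: each spatial position is replaced by a softmax-weighted average over all positions, followed by a 1x1 convolution and a residual connection. This is a hedged sketch in the spirit of the paper, not its exact block (the paper also studies other affinity and filter variants).

    # Hedged sketch: a simplified non-local feature-denoising block.
    import torch
    import torch.nn as nn

    class DenoiseNonLocal(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.out_conv = nn.Conv2d(channels, channels, kernel_size=1)

        def forward(self, x):
            n, c, h, w = x.shape
            flat = x.view(n, c, h * w)                            # n x c x hw
            affinity = torch.einsum("ncp,ncq->npq", flat, flat)   # dot-product affinity
            weights = torch.softmax(affinity, dim=-1)             # normalize per position
            denoised = torch.einsum("npq,ncq->ncp", weights, flat).view(n, c, h, w)
            return x + self.out_conv(denoised)                    # residual connection

    y = DenoiseNonLocal(8)(torch.randn(2, 8, 16, 16))             # sanity check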
Generally, the methods of authentication and identification used to assert users' credentials directly affect the security of the offered services. In a federated environment, service owners must trust external credentials and make access control decisions based on assurance information received from remote Identity Providers (IdPs). Communities (e.g., NIST and the IETF) have tried to provide a coherent and justifiable architecture for evaluating assurance information and defining Assurance Levels (ALs). Expensive deployment, service owners' limited authority to define their own requirements, and the lack of compatibility between heterogeneous existing standards are some of the unsolved concerns that hinder developers from openly accepting published works. By assessing the advantages and disadvantages of well-known models, a comprehensive, flexible, and compatible solution is proposed for evaluating and deploying assurance levels through a central entity called a Proxy.