Bibliography
Vehicular Ad hoc Networks (VANETs) have attracted wide attention because they can reduce traffic congestion and road accidents. These networks must satisfy several security requirements, such as anonymity, data authentication, confidentiality, traceability and revocation of offending users, unlinkability, integrity, non-repudiation, and access control. Authentication of both the data and the sender is among the most important of these requirements, and many authentication schemes have been proposed to date. One well-known technique for user authentication in these networks is smartcard-based authentication (ASC). In this paper, we propose an ASC scheme that not only provides the necessary security requirements, such as anonymity, traceability, and unlinkability, in VANETs but is also more efficient than other schemes in the literature.
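For illustration, the following minimal sketch shows the kind of pseudonym-based challenge-response exchange a smartcard authentication scheme can use; the message format, key handling, and function names are assumptions for exposition, not the scheme proposed in the paper.

```python
import hmac
import hashlib
import os

# Hypothetical long-term secret shared between the smartcard and the
# trusted authority; in a real scheme this would live in tamper-resistant
# smartcard storage.
CARD_SECRET = os.urandom(32)

def card_response(pseudonym: bytes, challenge: bytes, secret: bytes) -> bytes:
    """Smartcard side: bind a fresh pseudonym to the verifier's challenge.

    Using a pseudonym instead of the real identity gives anonymity;
    changing it per session gives unlinkability."""
    return hmac.new(secret, pseudonym + challenge, hashlib.sha256).digest()

def server_verify(pseudonym: bytes, challenge: bytes,
                  response: bytes, secret: bytes) -> bool:
    """Verifier side: recompute the MAC and compare in constant time."""
    expected = hmac.new(secret, pseudonym + challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# One authentication round.
pseudonym = os.urandom(16)   # fresh per session -> unlinkable
challenge = os.urandom(16)   # verifier nonce -> prevents replay
resp = card_response(pseudonym, challenge, CARD_SECRET)
assert server_verify(pseudonym, challenge, resp, CARD_SECRET)
```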
The market landscape has undergone dramatic change because of globalization, shifting market conditions, cost pressure, increased competition, and volatility. The astonishing pace of technological change has made it possible to transform the way businesses operate. The automotive industry is on the edge of a revolution: increased customer expectations, changing ownership models, self-driving vehicles, and much more have driven the transformation of automobiles, applications, and services, drawing on artificial intelligence, sensors, RFID, and big data analysis. Large automobile companies have been emphasizing the collection of data to gain insight into customers' expectations, preferences, and budgets, alongside competitors' policies. Statistical methods can be applied to historical data gathered from authentic sources to identify the impact of fixed and variable marketing investments, supporting automakers in devising a more effective, precise, and efficient approach to targeting customers. Proper analysis of supply chain data can disclose the weak links in the chain, enabling firms to adopt timely countermeasures that minimize adverse effects. Fully benefiting from analytics requires a detailed set of capabilities that intersect and integrate with multiple functions and teams across the business. The paper expands on the effective role big data analysis plays in the automobile industry, discusses the scope and challenges of big data, elaborates on the technology behind the concept, and illustrates the working of MapReduce, which executes in the back end and performs the data mining.
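Since the paper illustrates how MapReduce performs the back-end data mining, a minimal word-count sketch of the map, shuffle, and reduce phases may help; the in-memory dictionary here stands in for the distributed shuffle of a real cluster.

```python
from collections import defaultdict

def map_phase(document: str):
    """Map: emit (key, 1) for every word, as a MapReduce mapper would."""
    for word in document.lower().split():
        yield word, 1

def reduce_phase(key, values):
    """Reduce: aggregate all counts emitted for one key."""
    return key, sum(values)

documents = [
    "sensor data from connected vehicles",
    "customer data and sensor data analysis",
]

# Shuffle: group intermediate pairs by key (done by the framework in a
# real cluster; here an in-memory dict stands in for it).
groups = defaultdict(list)
for doc in documents:
    for key, value in map_phase(doc):
        groups[key].append(value)

results = dict(reduce_phase(k, v) for k, v in groups.items())
print(results)   # e.g. {'sensor': 2, 'data': 3, ...}
```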
With its huge real-world demand, large-scale confidential computing still cannot be supported by today's Trusted Execution Environments (TEEs), owing to the lack of scalable and effective protection for high-throughput accelerators such as GPUs, FPGAs, and TPUs. Although attempts have been made recently to extend CPU-style enclaves to GPUs, these solutions require changes to the CPU or GPU chips, may introduce new security risks through side-channel leaks in CPU-GPU communication, and remain bound by the resource constraints of today's CPU TEEs. To address these problems, we present the first heterogeneous TEE design that can truly support large-scale compute- or data-intensive (CDI) computing without any chip-level change. Our approach, called HETEE, is a device for the centralized management of all computing units (e.g., GPUs and other accelerators) in a server rack. It is uniquely designed to work with today's data centres and clouds, leveraging modern resource pooling technologies to dynamically compartmentalize computing tasks, enforce strong isolation, and reduce the TCB through hardware support. More specifically, HETEE utilizes the PCIe ExpressFabric to allocate its accelerators to a server node on the same rack for non-sensitive CDI tasks, and moves them back into a secure enclave in response to demand for confidential computing. Our design runs a thin TCB stack for security management on a security controller (SC), while leaving a large body of software (e.g., the AI runtime, GPU driver, etc.) to the integrated microservers that operate the enclaves. An enclave is physically isolated from the others through hardware and is verified by the SC at its inception; its microserver and computing units are restored to a secure state upon termination. We implemented HETEE on a real hardware system and evaluated it with popular neural network inference and training tasks. Our evaluations show that HETEE can easily support CDI tasks at real-world scale, incurring a maximum throughput overhead of 2.17% for inference and 0.95% for training on ResNet152.
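The allocation flow described above can be sketched in miniature; the class and method names below are hypothetical stand-ins for the security controller's pooling logic, not HETEE's actual interface.

```python
from dataclasses import dataclass, field
from enum import Enum

class Mode(Enum):
    SHARED = "shared"      # fabric-attached to a host for non-sensitive work
    ENCLAVE = "enclave"    # isolated behind the security controller

@dataclass
class Accelerator:
    name: str
    mode: Mode = Mode.SHARED

@dataclass
class SecurityController:
    pool: list = field(default_factory=list)

    def allocate(self, task_is_confidential: bool, count: int):
        """Move accelerators between the shared fabric and the enclave.

        Mirrors the idea that the PCIe fabric re-homes devices on
        demand; the 'scrub' step stands in for restoring a secure
        state verified at enclave inception."""
        chosen = [a for a in self.pool if a.mode == Mode.SHARED][:count]
        if task_is_confidential:
            for acc in chosen:
                self._scrub(acc)
                acc.mode = Mode.ENCLAVE
        return chosen

    def release(self, accs):
        for acc in accs:
            self._scrub(acc)           # restore secure state on termination
            acc.mode = Mode.SHARED

    def _scrub(self, acc):
        pass  # placeholder for reset + attestation of the unit

sc = SecurityController(pool=[Accelerator(f"gpu{i}") for i in range(4)])
gpus = sc.allocate(task_is_confidential=True, count=2)
print([(g.name, g.mode.value) for g in gpus])
sc.release(gpus)
```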
Over the years, a number of vulnerability scoring frameworks have been proposed to characterize the severity of known vulnerabilities in software-dependent systems. These frameworks provide security metrics to support decision-making in system development and security evaluation and assurance activities. When used in this context, it is imperative that these security metrics be sound, meaning that they can be consistently measured in a reproducible, objective, and unbiased fashion while providing contextually relevant, actionable information for decision makers. In this paper, we evaluate the soundness of the security metrics obtained via several vulnerability scoring frameworks. The evaluation is based on the Method for Designing Sound Security Metrics (MDSSM). We also present several recommendations to improve vulnerability scoring frameworks to yield more sound security metrics to support the development of secure software-dependent systems.
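For concreteness, the best-known such framework is CVSS; the sketch below reproduces the v3.1 base-score arithmetic for a scope-unchanged vulnerability (metric weights from the FIRST specification; the roundup helper is simplified for illustration).

```python
import math

# CVSS v3.1 metric weights (scope unchanged), per the FIRST specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # C/I/A impact

def roundup(x: float) -> float:
    """CVSS 'Roundup': smallest one-decimal value >= x (simplified)."""
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, c, i, a) -> float:
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> 9.8 (Critical)
print(base_score("N", "L", "N", "N", "H", "H", "H"))
```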
Code randomization is considered a cornerstone of mitigations against code reuse attacks, and it underpins recent proposals such as execute-only memory (XOM) that target dynamic return-oriented programming (ROP) attacks. However, existing code randomization methods struggle to achieve a good balance between high randomization entropy and semantic consistency. In particular, they tend to ignore code semantic consistency, incurring performance loss and incompatibility with current security schemes, e.g., control flow integrity (CFI). In this paper, we present an enhanced code randomization method, termed HCRESC, which improves the randomization entropy significantly while ensuring semantic consistency between the variants and the original code. HCRESC reschedules instructions within the scope of functions rather than basic blocks, thus producing more variants of the original code while preserving the code's semantics. We implement HCRESC on the Linux platform for the x86-64 architecture and demonstrate that it can increase the randomization entropy of x86-64 code by more than 120% compared with existing methods, while leaving the code's control flow and size unaltered.
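The core idea, rescheduling instructions while preserving data dependencies, can be sketched on a toy instruction set; the three-address representation below is an illustrative assumption (each value defined once, no memory or flag dependencies), not HCRESC's actual x86-64 machinery.

```python
import random

# Toy three-address "instructions": (dest, srcs). Each value is defined
# exactly once, so only read-after-write dependencies matter here.
instrs = [
    ("a", ()),            # a = input
    ("b", ()),            # b = input
    ("c", ("a", "b")),    # c = a + b
    ("d", ("a",)),        # d = a * 2
    ("e", ("c", "d")),    # e = c - d
]

def dependency_edges(instrs):
    """Edge i -> j if instruction j reads a value instruction i defines."""
    edges = set()
    for j, (_, srcs) in enumerate(instrs):
        for i in range(j):
            if instrs[i][0] in srcs:
                edges.add((i, j))
    return edges

def randomized_schedule(instrs, rng):
    """Random topological order: any order respecting the data
    dependencies computes the same result, so semantics are preserved
    while the instruction layout (and thus gadget addresses) changes."""
    edges = dependency_edges(instrs)
    indeg = {i: 0 for i in range(len(instrs))}
    for i, j in edges:
        indeg[j] += 1
    ready = [i for i, d in indeg.items() if d == 0]
    order = []
    while ready:
        pick = rng.choice(ready)   # randomness = entropy of the variant
        ready.remove(pick)
        order.append(pick)
        for i, j in edges:
            if i == pick:
                indeg[j] -= 1
                if indeg[j] == 0:
                    ready.append(j)
    return order

rng = random.Random(1)
print(randomized_schedule(instrs, rng))  # one semantics-preserving variant
```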
Analysis of the state of research on the protection of valuable scientific and educational databases, library resources, information centers, and publishers shows the importance of information security, especially in corporate information networks and data-exchange systems. Corporate library networks bring together dozens or even hundreds of libraries for active information exchange, and these libraries are equipped with information security tools to varying degrees. The purpose of this research is to create effective methods and tools, based on fuzzy logic, for protecting databases of scientific and educational resources from unauthorized access in libraries and library networks.
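As a minimal sketch of how fuzzy logic can score access risk, the snippet below fuzzifies two illustrative indicators with triangular membership functions and applies two toy rules; the breakpoints and rule base are assumptions, not the methods developed in this research.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def access_risk(failed_logins: float, requests_per_min: float) -> float:
    """Tiny Mamdani-style inference: fuzzify two indicators, apply two
    rules, and return a risk score in [0, 1]."""
    # Fuzzification (breakpoints are illustrative assumptions).
    logins_high = tri(failed_logins, 2, 6, 10)
    rate_high = tri(requests_per_min, 50, 150, 250)

    # Rules: IF failed logins high THEN risk high;
    #        IF request rate high THEN risk high.
    # Aggregate with max (fuzzy OR) and use the firing strength as the
    # defuzzified risk.
    return max(logins_high, rate_high)

print(access_risk(5, 40))    # elevated risk from failed logins
print(access_risk(0, 200))   # elevated risk from a request flood
```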
The safety and security of complex critical infrastructures are very important for economic, environmental, and social reasons. The interdisciplinary and inter-system dependencies within these infrastructures introduce difficulties into safety and security design. Late discovery of safety and security design weaknesses can lead to increased costs, additional system complexity, ineffective mitigation measures, and delays to the deployment of the systems. Traditionally, safety and security assessments are handled with different methods and tools, although some concepts are very similar, by specialized experts in different disciplines, and are performed at different phases of the system design life cycle. The methodology proposed in this paper supports a concurrent safety and security Defense in Depth (DiD) assessment at an early design phase; it is designed to handle safety and security at a high level rather than focusing on specific practical technologies. It is assumed that, regardless of the perceived level of security defenses in place, a determined (motivated, capable, and/or well-funded) attacker can find a way to penetrate a layer of defense. While traditional security research focuses on removing vulnerabilities and increasing the difficulty of exploiting weaknesses, our higher-level approach focuses on limiting the attacker's reach and increasing the system's capability for detection, identification, mitigation, and tracking. The proposed method can concurrently assess basic safety and security DiD design principles such as Redundancy, Physical separation, Functional isolation, Facility functions, Diversity, Defense lines/Facility and Computer Security zones, Safety classes/Security Levels, and Safety divisions and physical gates/conduits (as defined by the International Atomic Energy Agency (IAEA) and international standards), and provide early feedback to the system engineer. A prototype tool was developed that can parse the exported project file of the interdisciplinary model. Based on a set of safety and security attributes, the tool is able to assess aspects of the safety and security DiD capabilities of the design. Its results can be used to identify errors, improve the design, and cut costs before a formal inspection by human experts. The tool is demonstrated on a case study of an early conceptual design of a complex system: a nuclear power plant.
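A toy version of such a checker might look as follows; the model format, attribute names, and rules are hypothetical, meant only to show how an exported design file can be scanned for DiD violations such as missing redundancy or unguarded zone crossings.

```python
import json

# Hypothetical exported-model snippet; the real tool parses the project
# file of the interdisciplinary model (format assumed here for brevity).
MODEL = json.loads("""
{
  "components": [
    {"name": "reactor_trip", "safety_class": 1, "redundant": true,
     "security_zone": "Z1"},
    {"name": "cooling_ctrl", "safety_class": 1, "redundant": false,
     "security_zone": "Z1"},
    {"name": "hmi_station",  "safety_class": 3, "redundant": false,
     "security_zone": "Z3"}
  ],
  "connections": [["hmi_station", "reactor_trip"]]
}
""")

def check_did(model):
    """Flag simple Defense-in-Depth violations: missing redundancy for
    the highest safety class, and connections that cross security zones
    (which would need a declared gate/conduit)."""
    findings = []
    zones = {c["name"]: c["security_zone"] for c in model["components"]}
    for comp in model["components"]:
        if comp["safety_class"] == 1 and not comp["redundant"]:
            findings.append(f"{comp['name']}: class-1 but not redundant")
    for a, b in model["connections"]:
        if zones[a] != zones[b]:
            findings.append(f"{a} -> {b}: crosses zones {zones[a]}/{zones[b]}")
    return findings

for finding in check_did(MODEL):
    print(finding)
```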
This paper considers an expert system that assesses the state of information security in government authorities and in organizations of various forms of ownership. The proposed expert system makes it possible to evaluate both compliance with the requirements for organizational and technical information protection measures and the level of compliance of the information protection system as a whole. The expert assessment method is used as the basic method for assessing the state of information protection. The developed expert system significantly reduces routine operations during an information security audit. The results of the assessment are presented clearly and enable the leadership of the authorities and organizations to make informed decisions on further improving the information protection system.
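A minimal sketch of the weighted expert-assessment arithmetic such a system can rest on; the checklist items, weights, and threshold are illustrative assumptions, not the paper's actual requirement catalogue.

```python
# Hypothetical checklist: (requirement, expert weight, compliance 0..1).
# Weights and scores would come from the expert assessment method the
# paper describes; the values here are illustrative.
ASSESSMENT = [
    ("access control policy",      0.30, 1.0),
    ("technical protection tools", 0.25, 0.5),
    ("staff training",             0.20, 0.0),
    ("incident response plan",     0.25, 0.5),
]

def compliance_level(items):
    """Weighted average of per-requirement compliance scores."""
    total_w = sum(w for _, w, _ in items)
    return sum(w * s for _, w, s in items) / total_w

level = compliance_level(ASSESSMENT)
verdict = "compliant" if level >= 0.8 else "needs improvement"
print(f"overall compliance: {level:.2f} -> {verdict}")   # 0.55 -> needs improvement
```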
Experts often design security and privacy technology with specific use cases and threat models in mind. In practice, however, end users are not aware of these threats and potential countermeasures. Furthermore, misconceptions about the benefits and limitations of security and privacy technology inhibit large-scale adoption by end users. In this paper, we address this challenge and contribute a qualitative study of end users' and security experts' perceptions of threat models and potential countermeasures. We follow an inductive research approach to explore the perceptions and mental models of both security experts and end users, conducting semi-structured interviews with 8 security experts and 13 end users. Our results suggest that, in contrast to security experts, end users neglect acquaintances and friends as potential attackers in their threat models. Our findings also highlight that experts value technical countermeasures, whereas end users try to implement trust-based defensive methods.
Over the last few years, there has been an increasing number of studies on facial emotion recognition because of its importance and impact on human-computer interaction. With the growing number of challenging datasets, the application of deep learning techniques has become necessary. In this paper, we study the challenges of emotion recognition datasets and experiment with different parameters and architectures of Convolutional Neural Networks (CNNs) in order to detect the seven emotions in human faces: anger, fear, disgust, contempt, happiness, sadness, and surprise. We have chosen iCV MEFED (Multi-Emotion Facial Expression Dataset), a relatively new, interesting, and very challenging dataset, as the main dataset for our study.
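As a sketch of the kind of CNN experiments described, a small PyTorch model for 7-class classification of 48x48 grayscale face crops follows; the depth, channel counts, and input size are assumptions for illustration, not the architectures evaluated on iCV MEFED.

```python
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    """A small convolutional network for 7-class facial emotion
    recognition on 48x48 grayscale crops."""
    def __init__(self, n_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 48 -> 24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 24 -> 12
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 12 -> 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 6 * 6, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, n_classes),            # one logit per emotion
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = EmotionCNN()
batch = torch.randn(8, 1, 48, 48)   # 8 fake grayscale face crops
logits = model(batch)
print(logits.shape)                 # torch.Size([8, 7])
```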
New research fields and applications in human-computer interaction will emerge based on the recognition of emotions from faces. With this aim, our study evaluates features extracted from faces to recognize emotions. To increase the success rate of these features, we ran several tests demonstrating how age and gender affect the results. Artificial neural networks were trained on the salient regions of the face, such as the eyes, eyebrows, nose, mouth, and jawline, and were then tested on different age and gender groups. According to the results, the faces of older people yield a lower emotion recognition rate. We then manually created age- and gender-based groups and show that facial emotion recognition rates increase for networks trained on these particular groups.
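A minimal sketch of the group-wise training idea, assuming hypothetical feature vectors and metadata: the data are synthetic and the model is a generic scikit-learn MLP, not the networks used in this study.

```python
from collections import defaultdict
from sklearn.neural_network import MLPClassifier
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical samples: region-based feature vector, emotion label,
# and (age_group, gender) metadata standing in for the real dataset.
def fake_sample():
    return (rng.normal(size=20),
            rng.integers(0, 7),
            (rng.choice(["young", "old"]), rng.choice(["f", "m"])))

samples = [fake_sample() for _ in range(400)]

# Partition by (age_group, gender) and train one network per group,
# mirroring the finding that group-specific models score higher than a
# single model trained on everyone.
groups = defaultdict(list)
for x, y, meta in samples:
    groups[meta].append((x, y))

models = {}
for meta, data in groups.items():
    X = np.stack([x for x, _ in data])
    y = np.array([label for _, label in data])
    models[meta] = MLPClassifier(hidden_layer_sizes=(32,),
                                 max_iter=500).fit(X, y)

print(sorted(models))   # one classifier per age/gender group
```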
The emergence of Industrial Cyber-Physical Systems (ICPS) in today's business world continues to progress steadily into new dimensions. Although they bring many new advantages to business processes and enable automation and a wider range of service capabilities, they also pose a variety of new challenges. One major challenge introduced by such Systems-of-Systems (SoS) lies in the security aspect. Since security has not played as significant a role in traditional embedded systems engineering, a generic way to measure the level of security within an ICPS would provide a significant benefit for system engineers and the stakeholders involved. Even though many security metrics and frameworks exist, most of them insufficiently consider the SoS context and the challenges of such environments. Therefore, we aim to define a security metric for ICPS that measures the level of security during system design, testing, and integration as well as at runtime. For this, we focus on a semantic point of view, which, on the one hand, has not yet been considered in security metric definitions and, on the other hand, allows us to handle the complexity of SoS architectures. Furthermore, our approach makes it possible to combine the critical characteristics of an ICPS, such as uncertainty, required reliability, multi-criticality, and safety aspects.