Biblio
To address the composite uncertainty (both fuzziness and randomness) and high-dimensional data-stream characteristics of emergency evaluation indexes, this paper proposes an emergency severity assessment method for cluster supply chains based on a cloud fuzzy clustering algorithm. A summary cloud model generation algorithm is created, and a multi-data fusion method is applied to the cloud-model processing of evaluation indexes for high-dimensional data streams with both fuzziness and randomness, extracting synopsis data for the emergency severity assessment indexes. Based on a time attenuation model and a sliding window model, a data-stream fuzzy clustering algorithm for emergency severity assessment is established. The evaluation results are optimized according to the generalized Euclidean distances between cluster centers and the micro-cluster weights, and the severity grade of a cluster supply chain emergency is evaluated dynamically. Experimental results show that the proposed algorithm improves clustering accuracy, reduces running time, and can provide more accurate theoretical support for early-warning decisions in cluster supply chain emergencies.
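The distance and grading steps this abstract describes can be sketched as follows; the exponential decay form, cluster centers, and grade labels are illustrative assumptions, not the paper's actual algorithm:

```python
import math

def decay_weight(age, lam=0.01):
    """Exponential time attenuation: older stream records count less."""
    return 2.0 ** (-lam * age)

def euclidean(a, b):
    """Generalized Euclidean distance between two index vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def assign_severity(point, centers, grades):
    """Assign the severity grade of the nearest cluster center."""
    dists = [euclidean(point, c) for c in centers]
    return grades[dists.index(min(dists))]

# Illustrative centers for three severity grades of a normalized index.
centers = [(0.1, 0.1), (0.5, 0.5), (0.9, 0.9)]
grades = ["low", "medium", "high"]
print(assign_severity((0.85, 0.8), centers, grades))  # high
```

In a streaming setting the micro-cluster weights would additionally be multiplied by `decay_weight` so that stale observations stop influencing the grade.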
Blockchain may prove valuable for the new US FDA regulatory requirements defined in the Drug Supply Chain Security Act (DSCSA), as innovative solutions are needed to support the highly complex pharmaceutical supply chain as it seeks to comply. In this paper, we examine how blockchain can be applied to meet the security compliance requirements of the pharmaceutical supply chain. We explore the online playground of Hyperledger Composer, a set of tools for building blockchain business networks, to model the data and access control rules for the drug supply chain. Our experiment shows that this solution can provide a prototyping opportunity for compliance checking, with certain limitations.
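Hyperledger Composer expresses assets, participants, and transactions in its own .cto modeling language; the Python sketch below only mirrors that idea for a DSCSA-style ownership transfer, with all class and field names invented for illustration:

```python
class Drug:
    """Toy asset: a serialized drug package with a traceable history."""
    def __init__(self, gtin, serial, owner):
        self.gtin = gtin        # product identifier
        self.serial = serial    # package-level serial number (DSCSA)
        self.owner = owner
        self.history = [owner]  # ownership trail for traceability

def transfer(drug, seller, buyer, authorized):
    """Record a change of ownership; reject unauthorized trading partners."""
    if drug.owner != seller or buyer not in authorized:
        raise PermissionError("transfer violates access control rules")
    drug.owner = buyer
    drug.history.append(buyer)

d = Drug("00312345678906", "SN-001", "Manufacturer")
transfer(d, "Manufacturer", "Wholesaler", authorized={"Wholesaler", "Pharmacy"})
print(d.history)  # ['Manufacturer', 'Wholesaler']
```

On a blockchain the `history` list would instead be the immutable transaction log, and the `authorized` check would be enforced by the network's access control rules.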
The globalization of the semiconductor supply chain introduces ever-increasing security and privacy risks. Two major concerns are IP theft through reverse engineering and malicious modification of the design; the latter in part relies on successful reverse engineering of the design as well. IC camouflaging and logic locking are two techniques under research that can thwart reverse engineering by end-users or foundries. However, developing low-overhead locking/camouflaging schemes that can resist the ever-evolving state-of-the-art attacks has been a challenge for several years. This article provides a comprehensive review of the state of the art in locking/camouflaging techniques. We start by defining a systematic threat model for these techniques and discuss how various real-world scenarios relate to each threat model. We then discuss the evolution of generic algorithmic attacks under each threat model, leading up to the strongest existing attacks. The article then systematizes defences and, along the way, discusses attacks that are specific to certain kinds of locking/camouflaging, before concluding with open problems and future directions.
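As a toy illustration of the locking idea (not a scheme from the article), an XOR key gate inserted on an internal wire makes the netlist compute correctly only under the right key bit:

```python
# Original combinational function and its locked counterpart.
# The circuit and key value are illustrative assumptions.

def original(a, b, c):
    return (a and b) or c

def locked(a, b, c, k):
    # XOR key gate on the AND output; k = 0 restores original behavior.
    return ((a and b) ^ k) or c

CORRECT_KEY = 0
inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
# With the correct key, the locked netlist matches the original everywhere.
assert all(locked(a, b, c, CORRECT_KEY) == original(a, b, c)
           for a, b, c in inputs)
# With the wrong key, it misbehaves on some input patterns.
assert any(locked(a, b, c, 1) != original(a, b, c) for a, b, c in inputs)
```

The algorithmic attacks the article surveys essentially try to recover `CORRECT_KEY` from input/output observations of a working chip.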
Motivated by the September 11 attacks, we address the problem of policy analysis for supply-chain security. The potential economic and operational impacts of inspection, together with the inherent difficulty of assigning a reasonable cost to an inspection failure, call for a policy analysis methodology in which stakeholders can understand the trade-offs between diverse and potentially conflicting objectives. To obtain this information, we used a simulation-based methodology to characterize the set of Pareto optimal solutions with respect to the multiple objectives represented in the decision problem. Our methodology relies on simulation and the response surface method (RSM) to model the relationships between inspection policies and relevant stakeholder objectives in order to construct a set of Pareto optimal solutions. The approach is illustrated with an application to a real-world supply chain.
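The Pareto-optimality step can be sketched as a simple dominance filter over simulated policy outcomes; the two objectives (inspection cost and expected missed detections, both minimized) and the sample values are illustrative assumptions:

```python
def pareto_front(points):
    """Keep points not dominated by any other (lower is better on both axes)."""
    front = []
    for p in points:
        dominated = any(q != p and q[0] <= p[0] and q[1] <= p[1]
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# (cost, expected missed detections) for four candidate inspection policies.
outcomes = [(10, 0.30), (12, 0.20), (15, 0.20), (20, 0.05)]
print(pareto_front(outcomes))  # [(10, 0.3), (12, 0.2), (20, 0.05)]
```

Policy (15, 0.20) is dropped because (12, 0.20) achieves the same detection rate at lower cost; the remaining points are the trade-offs stakeholders would weigh.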
Organisations currently find it difficult to design a Decision Support System (DSS) that can predict various operational risks, such as financial and quality issues, even though operational risks are responsible for significant economic losses and damage to an organisation's reputation in the market. This paper proposes a new DSS for risk assessment, called the Fuzzy Inference DSS (FIDSS) mechanism, which uses fuzzy inference methods based on an organisation's big data collection. It includes the Emerging Association Patterns (EAP) technique, which identifies the important features of each risk event. Then, the Mamdani fuzzy inference technique and several membership functions are evaluated using the firm's data sources. The FIDSS mechanism can enhance an organisation's decision-making processes by quantifying the severity of a risk as low, medium or high. When it automatically predicts a medium or high level, it assists organisations in taking further actions to reduce that severity level.
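A minimal sketch of the Mamdani-style grading step: a normalized risk feature is mapped to a low/medium/high severity via triangular membership functions, and the grade with the strongest membership wins. The breakpoints are illustrative, not the FIDSS parameters:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def severity(x):
    """Grade a normalized risk feature in [0, 1] as low/medium/high."""
    grades = {
        "low": tri(x, -0.01, 0.0, 0.5),
        "medium": tri(x, 0.2, 0.5, 0.8),
        "high": tri(x, 0.5, 1.0, 1.01),
    }
    return max(grades, key=grades.get)

print(severity(0.9))  # high
```

A full Mamdani system would aggregate several such fuzzified inputs through rules and defuzzify the result; this shows only the membership-and-grade idea.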
This paper presents an access control model that integrates risk assessment elements into the attribute-based model to organize identification, authentication and authorization rules. Access control is complex in integrated systems, which have different actors accessing different information at multiple levels. In addition, systems are composed of different components, many of them from different developers. This requires trust across the complete supply chain to protect the many actors involved, their privacy and the entire ecosystem. Incorporating the risk assessment element introduces additional variables, such as the current environment of the subjects and objects and the time of day, to help produce more efficient and effective decisions about granting access to specific objects. The risk-based attribute access control model was applied in a health platform, Project CityZen.
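The risk-augmented decision can be sketched as an attribute check combined with a contextual risk score; the attributes, weights, and threshold below are illustrative assumptions, not Project CityZen's rules:

```python
def risk_score(context):
    """Accumulate risk from contextual attributes (illustrative weights)."""
    score = 0.0
    if context.get("hour", 12) < 6 or context.get("hour", 12) > 22:
        score += 0.4           # off-hours access is riskier
    if context.get("network") == "public":
        score += 0.5           # untrusted environment
    return score

def decide(subject_role, required_role, context, threshold=0.6):
    """Plain attribute match, vetoed when contextual risk is too high."""
    if subject_role != required_role:
        return "deny"
    return "deny" if risk_score(context) >= threshold else "permit"

print(decide("clinician", "clinician", {"hour": 10, "network": "hospital"}))  # permit
print(decide("clinician", "clinician", {"hour": 2, "network": "public"}))     # deny
```

The second call is denied despite the matching role because the off-hours, public-network context pushes the risk score past the threshold.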
Cloud computing is widely believed to be the future of computing. It has grown from being a promising idea to one of the fastest research and development paradigms of the computing industry. However, security and privacy concerns represent a significant hindrance to the widespread adoption of cloud computing services. Likewise, the attributes of the cloud such as multi-tenancy, dynamic supply chain, limited visibility of security controls and system complexity, have exacerbated the challenge of assessing cloud risks. In this paper, we conduct a real-world case study to validate the use of a supply chain-inclusive risk assessment model in assessing the risks of a multi-cloud SaaS application. Using the components of the Cloud Supply Chain Cyber Risk Assessment (CSCCRA) model, we show how the model enables cloud service providers (CSPs) to identify critical suppliers, map their supply chain, identify weak security spots within the chain, and analyse the risk of the SaaS application, while also presenting the value of the risk in monetary terms. A key novelty of the CSCCRA model is that it caters for the complexities involved in the delivery of SaaS applications and adapts to the dynamic nature of the cloud, enabling CSPs to conduct risk assessments at a higher frequency, in response to a change in the supply chain.
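One common way to express risk in monetary terms is an annualized loss expectancy; this sketch uses that textbook formula with invented figures and is not the CSCCRA calculation itself:

```python
def annualized_loss(asset_value, exposure_factor, annual_rate):
    """ALE = SLE * ARO, with SLE = asset value * exposure factor."""
    single_loss = asset_value * exposure_factor   # single loss expectancy
    return single_loss * annual_rate              # annualized loss expectancy

# Illustrative: a $200k asset losing 25% of its value per incident,
# with two incidents expected per year.
print(annualized_loss(200_000, 0.25, 2))  # 100000.0
```

Putting supplier risks on a common monetary scale like this is what lets a CSP compare weak spots across an otherwise heterogeneous supply chain.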
The increasing diffusion of malware endowed with steganographic techniques requires carefully identifying and evaluating a new set of threats. The creation of a covert channel to hide a communication within network traffic is one of the most relevant, as it can be used to exfiltrate information or orchestrate attacks. Even though network steganography is becoming a well-studied topic, only a few works focus on IPv6 and consider real network scenarios. Therefore, this paper investigates IPv6 covert channels deployed in the wild. It also presents a performance evaluation of six different data hiding techniques for IPv6, including their ability to bypass some intrusion detection systems. Lastly, ideas for detecting IPv6 covert channels are presented.
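One of the simplest hiding ideas in this space is storing covert bits in the 20-bit IPv6 Flow Label. The sketch below packs and recovers such bits following the RFC 8200 layout of the first header word; the covert value is arbitrary:

```python
import struct

def embed_flow_label(covert, version=6, traffic_class=0):
    """Pack the first 4 bytes of an IPv6 header with covert Flow Label bits."""
    word = (version << 28) | (traffic_class << 20) | (covert & 0xFFFFF)
    return struct.pack("!I", word)

def extract_flow_label(header4):
    """Recover the 20 Flow Label bits from the first 4 header bytes."""
    (word,) = struct.unpack("!I", header4)
    return word & 0xFFFFF

hdr = embed_flow_label(0x12345)
print(hex(extract_flow_label(hdr)))  # 0x12345
```

Because RFC-compliant stacks may treat the Flow Label as pseudo-random, such a channel is hard to spot without statistical inspection, which is why detection ideas matter.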
Network attacks have become a growing threat to the current Internet. To enhance network security and accountability, it is urgent to find the origin and identity of an adversary who misbehaves in the network. Some studies focus on embedding users' identities into IPv6 addresses, but such designs cannot support the Stateless Address Autoconfiguration (SLAAC) protocol, which is widely deployed today. In this paper, we propose SDN-Ti, a general solution for traceback and identification of attackers in IPv6 networks based on Software Defined Networking (SDN). In our proposal, the SDN switch performs a translation between the source IPv6 address of a packet and its trusted ID-encoded address generated by the SDN controller. The network administrator can effectively identify the attacker by parsing the malicious packets when an attack incident happens. Our solution not only avoids heavy storage overhead and time synchronization problems, but also supports multiple IPv6 address assignment scenarios. Moreover, SDN-Ti does not require any modification of end devices and hence can be easily deployed. We implement an SDN-Ti prototype and evaluate it in a real IPv6 testbed. Experimental results show that our solution introduces very little extra performance cost and performs well in terms of latency, CPU consumption and packet loss compared to the normal forwarding method. The results indicate that SDN-Ti is feasible to deploy in practice with a large number of users.
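The translation idea can be sketched as encoding a user ID into the interface-identifier half of the source address; the prefix and encoding below are illustrative, not the actual SDN-Ti format:

```python
import ipaddress

# Illustrative /64 prefix; the low 64 bits carry the trusted user ID.
PREFIX = ipaddress.ip_network("2001:db8::/64")

def encode(user_id):
    """Embed a user ID into the low 64 bits of an address in PREFIX."""
    return ipaddress.ip_address(int(PREFIX.network_address) + user_id)

def identify(addr):
    """Recover the user ID from an ID-encoded source address."""
    return int(ipaddress.ip_address(addr)) & ((1 << 64) - 1)

a = encode(0xBEEF)
print(a, identify(a))  # 2001:db8::beef 48879
```

In SDN-Ti the switch would rewrite between the host's SLAAC address and such an ID-encoded address in both directions, so end devices never see the encoding.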
Accountability and privacy are considered valuable but conflicting properties in the Internet, which at present provides native support for neither. Past efforts to balance accountability and privacy in the Internet suffer from poor deployability because they introduce new communication identifiers and require large-scale modifications to fully deployed infrastructures and protocols. IPv6 is being deployed around the world, and this trend will accelerate. In this paper, we propose PAVI, a private and accountable scheme based on IPv6 that seeks to bootstrap accountability and privacy into the IPv6 Internet without introducing new communication identifiers or large-scale modifications to the deployed base. A dedicated quantitative analysis shows that PAVI achieves satisfactory levels of accountability and privacy. Evaluation of a PAVI prototype shows that it incurs little performance overhead and is widely deployable.
To address the incomplete security mechanisms of wireless access systems in emergency communication networks, this paper proposes a method for constructing security mechanism requirements for wireless access systems based on security evaluation standards. It discusses the construction of security mechanism requirements from three aspects: the definition of security problems, the construction of security functional components, and the construction of security assurance components. This method can comprehensively analyze the security threats and security requirements of wireless access systems in emergency communication networks, and can provide correct and reasonable guidance and reference for establishing security mechanisms.
IoT has the potential to enhance productivity in healthcare applications.
In AI Matters Volume 4, Issue 2, and Issue 4, we raised the possibility of an AI Cosmology, in part in response to the "AI Hype Cycle" that we are currently experiencing. We posited that our current machine learning and big data era represents but one peak among several previous peaks in AI research, each with its accompanying hype cycle. We associated each peak with an epoch in a possible AI Cosmology and briefly explored the logic machines, cybernetics, and expert system epochs. One objective of identifying these epochs was to help establish that we have been here before. In particular, we have been in the territory where some application of AI research finds substantial commercial success, which is then closely followed by AI fever and hype. The public's expectations are heightened, only to end in disillusionment when the applications fall short. Whereas it is sometimes a challenge even for AI researchers, educators, and practitioners to know where the reality ends and the hype begins, the layperson is often in an impossible position, at the mercy of pop culture and marketing and advertising campaigns. We suggested that an AI Cosmology might help us identify a single standard model for AI that could be the foundation for a common shared understanding of what AI is and what it is not: a tool to help the layperson understand where AI has been, where it is going, and where it cannot go, and a basic road map to help the general public navigate the pitfalls of AI hype.
For modern Automatic Test Equipment (ATE), one of the most daunting tasks is now Information Assurance (IA). What was once at most a secondary item, consisting mainly of installing an anti-virus suite, is now becoming one of the most important aspects of ATE. Given the current IA climate, it has become important to ensure ATE is kept safe from any breach of security or loss of information. Even though most ATE are not on the Internet (or, for many, even on a local network), they are still vulnerable to some of the same attack vectors plaguing common computers and other electronic devices. This paper will discuss one method which can be used to ensure that modern ATE can continue to test and detect faults in the systems they are designed to test. Most modern ATE include one or more Ethernet switches to allow communication with the many instruments or devices contained within them. If the switches purchased are managed and support layer 2 or layer 3 of the Open Systems Interconnection (OSI) model, they can also contribute to the IA footprint of the station. Simple configurations, such as limiting broadcast or multicast packets to the appropriate devices, are the first step in restricting access to devices to what is needed. If the switch also includes some layer-3-like capabilities, Virtual Local Area Networks (VLANs) can be created to further limit the communication pathways to only what is required to perform the required tasks. These and other simple switch configurations, while not required, can help limit the spread of a virus or worm. This paper will discuss these and other configuration tools which can help prevent an ATE system from being compromised.
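The VLAN isolation described above can be illustrated with a toy forwarding check; the device names and VLAN numbers are invented:

```python
# A layer-2 switch forwards frames only within the same VLAN, so a
# compromised device in one VLAN cannot reach instruments in another.
vlan_of = {"tester": 10, "instrument_a": 10, "office_pc": 20}

def can_communicate(dev1, dev2):
    """True if the switch would forward frames between the two devices."""
    return vlan_of[dev1] == vlan_of[dev2]

print(can_communicate("tester", "instrument_a"))  # True
print(can_communicate("tester", "office_pc"))     # False
```

On a real managed switch this partitioning is expressed as per-port VLAN assignments rather than code, but the containment effect is the same.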
Insider threats refer to threats posed by individuals who intentionally or unintentionally destroy, exfiltrate, or leak sensitive information, or expose their organization to outside attacks. Surveys of organizations in government and industry consistently show that threats posed by insiders rival those posed by hackers, and that insider attacks are even more costly. Emerging U.S. government guidelines and policies for establishing insider threat programs tend to specify only minimum standards for insider threat monitoring, analysis, and mitigation programs. Arguably, one of the most serious challenges is to identify and integrate behavioral (sociotechnical) indicators of insider threat risk in addition to cyber/technical indicators. That is, in focusing on data that are most readily obtained, insider threat programs most often miss the human side of the problem. This talk briefly describes research aiming to catalog human as well as technical factors associated with insider threat risk and summarizes several recent studies that seek to inform the development of more comprehensive, proactive approaches to insider threat assessment.
Information, not just data, is key to today's global challenges. Solving these challenges requires not only advancing geospatial and big data analytics but also new analysis and decision-making environments that enable reliable decisions from trustable, understandable information, going beyond current approaches to machine learning and artificial intelligence. These environments are successful when they effectively couple human decision making with advanced, guided spatial analytics in human-computer collaborative discourse and decision making (HCCD). Our HCCD approach builds upon visual analytics, natural scale templates, traceable information, human-guided analytics, and explainable and interactive machine learning, focusing on empowering the decision-maker through interactive visual spatial analytic environments where non-digital human expertise and experience can be combined with state-of-the-art and transparent analytical techniques. When we combine this approach with real-world application-driven research, not only does the pace of scientific innovation accelerate, but impactful change occurs. I'll describe how we have applied these techniques to challenges in sustainability, security, resiliency, public safety, and disaster management.
Generally, the methods of authentication and identification used to assert users' credentials directly affect the security of offered services. In a federated environment, service owners must trust external credentials and make access control decisions based on Assurance Information received from remote Identity Providers (IdPs). Communities (e.g., NIST and the IETF) have tried to provide a coherent and justifiable architecture for evaluating Assurance Information and defining Assurance Levels (ALs). Expensive deployment, service owners' limited authority to define their own requirements, and the lack of compatibility between heterogeneous existing standards are some of the unsolved concerns that hinder developers from openly accepting published works. By assessing the advantages and disadvantages of well-known models, a comprehensive, flexible and compatible solution is proposed to evaluate and deploy assurance levels through a central entity called a Proxy.