Biblio

Filters: Keyword is Buildings
2022-06-09
Dizaji, Lida Ghaemi, Hu, Yaoping.  2021.  Building And Measuring Trust In Human-Machine Systems. 2021 IEEE International Conference on Autonomous Systems (ICAS). :1–5.
In human-machine systems (HMS), the trust that humans place in machines is a complex concept and attracts increasing research effort. Herein, we reviewed recent studies on building and measuring trust in HMS. The review was based on one comprehensive model of trust, IMPACTS, which has 7 features: intention, measurability, performance, adaptivity, communication, transparency, and security. The review found that, in the past 5 years, HMS have fulfilled the features of intention, measurability, communication, and transparency. Most HMS consider the feature of performance. However, the reviewed HMS rarely address the feature of adaptivity and neglect the feature of security because they use stand-alone simulations. These findings indicate that future work considering the features of adaptivity and/or security is imperative to foster human trust in HMS.
Alsyaibani, Omar Muhammad Altoumi, Utami, Ema, Hartanto, Anggit Dwi.  2021.  An Intrusion Detection System Model Based on Bidirectional LSTM. 2021 3rd International Conference on Cybernetics and Intelligent System (ICORIS). :1–6.
An Intrusion Detection System (IDS) is used to identify malicious traffic on a network. Apart from rule-based IDS, machine learning- and deep learning-based IDS are also being developed to improve detection accuracy. In this study, the public CIC IDS 2017 dataset was used to develop a deep learning-based IDS because this dataset contains new types of attacks and meets the criteria for an intrusion detection dataset. The dataset was split into training, validation, and test data. We propose a Bidirectional Long Short-Term Memory (LSTM) architecture for building the neural network. We created 24 scenarios with various training parameters, each trained for 100 epochs. The training parameters used as research variables are the optimizer, the activation function, and the learning rate. In addition, a Dropout layer and an L2 regularizer were applied in every scenario. The results show that the model using the Adam optimizer, the Tanh activation function, and a learning rate of 0.0001 produced the highest accuracy among the scenarios, reaching 97.7264% accuracy and a 97.7516% F1 score. The best model was then trained for 1000 iterations, and its performance increased to 98.3448% accuracy and a 98.3793% F1 score. These results exceed several previous works on the same dataset.
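The abstract describes splitting CIC IDS 2017 into training, validation, and test data before training. A minimal sketch of such a split in plain Python follows; the 70/15/15 ratios and the seed are illustrative assumptions, not values from the paper:

```python
import random

def split_dataset(records, train=0.7, val=0.15, seed=42):
    """Shuffle and split records into train/validation/test partitions."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train)
    n_val = int(len(shuffled) * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

flows = list(range(1000))  # stand-in for CIC IDS 2017 flow records
train_set, val_set, test_set = split_dataset(flows)
print(len(train_set), len(val_set), len(test_set))  # 700 150 150
```

The Bidirectional LSTM itself would then be fit on `train_set`, with hyperparameters (optimizer, activation, learning rate) tuned against `val_set` as the scenarios in the paper describe.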
2022-04-26
Makarov, Artyom, Varfolomeev, Alexander A..  2021.  Extended Classification of Signature-only Signature Models. 2021 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (ElConRus). :2385–2389.

In this paper, we extend the existing classification of signature models by Cao. To do so, we present a new signature classification framework and migrate the original classification to build an easily extendable, faceted signature classification. We propose 20 new properties, 7 property families, and 1 signature classification type. With our classification, up to 11,541,420 signature classes can theoretically be built, which should cover almost all existing signature schemes.

Valeriano, Brandon, Jensen, Benjamin.  2021.  Building a National Cyber Strategy: The Process and Implications of the Cyberspace Solarium Commission Report. 2021 13th International Conference on Cyber Conflict (CyCon). :189–214.
Crafting a national cyber strategy is an enormous undertaking. In this article, we review the process by which the Cyberspace Solarium Commission generated the Solarium Commission Report, developed the strategy of layered cyber deterrence, and strategized for legislative success in implementing its recommendations. This is an article about the development of a whole-of-nation strategy. Once the production of the strategy of layered cyber deterrence is explained, the article elaborates on implementation strategies, the challenge of escalation management, and future efforts to ensure that the work of the Solarium Commission becomes entrenched in U.S. national cyber strategy and behavior. We review the work left undone by the Solarium Commission, highlighting the enormous effort that went into the process of building out a strategy to defend a nation.1

1. It takes a village; we thank the entire Solarium Commission team, as their efforts generated the final Commission Report and the legislative successes that followed. In some ways, this article seeks to chronicle the process of building a strategy that was developed through the efforts of hundreds of people. This work reflects the process that we went through to construct the Solarium Commission report, which is particular to our experience; others may have different recollections of the events under consideration. Brandon Valeriano is also a Senior Fellow at the Cato Institute and a Senior Advisor to the Cyberspace Solarium Commission. Benjamin Jensen is also a Scholar in Residence at American University and the Research Director for the Cyberspace Solarium Commission.
2022-04-19
Chen, Quan, Snyder, Peter, Livshits, Ben, Kapravelos, Alexandros.  2021.  Detecting Filter List Evasion with Event-Loop-Turn Granularity JavaScript Signatures. 2021 IEEE Symposium on Security and Privacy (SP). :1715–1729.

Content blocking is an important part of a performant, user-serving, privacy-respecting web. Current content blockers work by building trust labels over URLs. While useful, this approach has many well-understood shortcomings: attackers may avoid detection by changing URLs or domains, bundling unwanted code with benign code, or inlining code in pages. The common flaw in existing approaches is that they evaluate code based on its delivery mechanism, not its behavior. In this work, we address this problem by building a system for generating signatures of the privacy- and security-relevant behavior of executed JavaScript. Our system uses each script's behavior during each turn of the JavaScript event loop as the unit of analysis. Focusing on event-loop turns allows us to build highly identifying signatures for JavaScript code that are robust against code obfuscation, code bundling, URL modification, and other common evasions, and that handle unique aspects of web applications. This work makes the following contributions to the problem of measuring and improving content blocking on the web. First, we design and implement a novel system to build per-event-loop-turn signatures of JavaScript behavior through deep instrumentation of the Blink and V8 runtimes. Second, we apply these signatures to measure how much privacy- and security-harming code is missed by current content blockers, using EasyList and EasyPrivacy as ground truth and finding scripts that have the same privacy- and security-harming patterns. We build 1,995,444 signatures of privacy- and security-relevant behaviors from 11,212 unique scripts blocked by filter lists, and find 3,589 unique scripts hosting known harmful code but missed by filter lists, affecting 12.48% of websites measured. Third, we provide a taxonomy of ways scripts avoid detection and quantify the occurrence of each. Finally, we present defenses against these evasions, in the form of filter-list additions where possible and through a proposed signature-based system in other cases. As part of this work, we share the implementation of our signature-generation system, the data gathered by applying that system to the Alexa 100K, and 586 AdBlock Plus-compatible filter-list rules to block instances of currently blocked code being moved to new URLs.
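The core idea, identifying code by behavior rather than by delivery mechanism, can be illustrated with a toy sketch that hashes the ordered privacy-relevant API calls observed during one event-loop turn into a signature. The API-call representation below is a simplification of ours; the paper instruments Blink and V8 directly:

```python
import hashlib

def turn_signature(api_calls):
    """Hash the ordered privacy-relevant API calls made during one
    event-loop turn into a short, stable signature."""
    canonical = "|".join(f"{name}:{detail}" for name, detail in api_calls)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# The same behaviour yields the same signature even if the script is
# renamed, bundled with benign code, or served from a different URL.
turn_a = [("document.cookie", "read"),
          ("XMLHttpRequest.send", "https://tracker.example/c")]
turn_b = list(turn_a)
print(turn_signature(turn_a) == turn_signature(turn_b))  # True
```

Because the signature depends only on observed behavior, URL modification and code bundling, two evasions the paper measures, leave it unchanged.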

Evstafyev, G. A., Selyanskaya, E. A..  2021.  Method of Ensuring Structural Secrecy of the Signal. 2021 Systems of Signal Synchronization, Generating and Processing in Telecommunications (SYNCHROINFO). :1–4.
A method for providing energy and structural secrecy of a signal is presented, based on pseudo-random restructuring of the spreading sequence. Through the use of nested pseudo-random sequences (PRS) and their restructuring, this method complicates the implementation of the accumulation mode, and therefore the detection of the signal-code structure, in a third-party receiver. Since the receiver-detector is similar to the receiver of the communication system, optimal signal processing must be ensured to achieve an acceptable level of structural secrecy.
2022-04-12
Li, Junyan.  2021.  Threats and data trading detection methods in the dark web. 2021 6th International Conference on Innovative Technology in Intelligent System and Industrial Applications (CITISIA). :1—9.
The dark web has become a major trading platform for cybercriminals: its anonymity and encrypted content make it possible to exchange hacked information and sell illegal goods without being traced. The types of items traded on the dark web have increased with the number of users and demands. In recent years, in addition to the main items sold in the past, including drugs, firearms, and child pornography, a growing number of cybercriminals are targeting various types of private information, including different types of account data, identity information, and visual data. This paper further discusses the issue of threat detection in the dark web by reviewing the past literature on the subject. An approach is also proposed to identify criminals who commit crimes offline or on the surface network using private information purchased from the dark web, as well as the original sources of information on the dark web, by building a database of historical victim records for keyword matching and traffic analysis.
2022-04-01
Marru, Suresh, Kuruvilla, Tanya, Abeysinghe, Eroma, McMullen, Donald, Pierce, Marlon, Morgan, David Gene, Tait, Steven L., Innes, Roger W..  2021.  User-Centric Design and Evolvable Architecture for Science Gateways: A Case Study. 2021 IEEE/ACM 21st International Symposium on Cluster, Cloud and Internet Computing (CCGrid). :267–276.
Scientific applications built on wide-area distributed systems, such as emerging cloud-based architectures and legacy grid computing infrastructure, often struggle with user adoption even when they succeed from a systems research perspective. This paper examines the coupling of user-centered design processes with modern distributed systems. We describe approaches for conceptualizing a product that solves a recognized need: a data gateway serving the data management and research needs of experimentalists using electron microscopes and similar shared scientific instruments in the context of a research service laboratory. The purpose of the data gateway is to provide secure, controlled access to data generated by a wide range of scientific instruments. From the functional perspective, we focus on the basic processing of raw data that underlies the lab's "business" processes, the movement of data from the laboratory to central access and archival storage points, and the distribution of data to authorized users. Through the gateway interface, users will be able to share instrument data with collaborators or copy it to remote storage servers. Basic pipelines for extracting additional metadata (through a pluggable parser framework) will be enabled. The core contribution described in this paper, building on the aforementioned distributed data management capabilities, is the adoption of user-centered design processes for developing the scientific user interface. We describe the user-centered design methodology for exploring user needs, iteratively testing the design, learning from user experiences, and adapting what we learn to improve design and capabilities. We further conclude that user-centered design is, in turn, best enabled by an adaptable distributed systems framework. A key challenge to implementing user-centered design is having design tools closely linked with a software system architecture that can evolve over time while providing a highly available data gateway. A key contribution of this paper is to share the insights from crafting such an evolvable design-build-evaluate-deploy architecture, along with plans for iterative development and deployment.
2022-03-15
Kadlubowski, Lukasz A., Kmon, Piotr.  2021.  Test and Verification Environment and Methodology for Vernier Time-to-Digital Converter Pixel Array. 2021 24th International Symposium on Design and Diagnostics of Electronic Circuits Systems (DDECS). :137—140.
The goal of building a system for precise time measurement in pixel radiation detectors motivates the development of a flexible design and verification environment. It should be suitable for quick simulations while individual elements of the system are being developed, and scalable enough that system-level verification is possible as well. The approach presented in this paper is to utilize the power of the SystemVerilog language and apply basic object-oriented programming concepts to the test program. Since the design under test is a full-custom mixed-signal design, it must be simulated with an AMS simulator, and various features of the analog design environment are used as well (Monte Carlo analysis, corner analysis, and schematic-capture GUI-related functions). The presented approach combines these two worlds and should be suitable for small academic projects, where design and verification are seldom done by separate teams.
2022-03-14
Zharikov, Alexander, Konstantinova, Olga, Ternovoy, Oleg.  2021.  Building a Mesh Network Model with the Traffic Caching Based on the P2P Mechanism. 2021 Dynamics of Systems, Mechanisms and Machines (Dynamics). :1–5.
Currently, the technology of wireless mesh networks is actively developing. In 2021, Gartner included mesh network technologies, and the task of ensuring their security, among the top global trends. A large number of scientific works focus on researching and modeling traffic transmission in such networks, and they often bring up the "bottleneck" problem characteristic of individual mesh network nodes. To address the issue, the authors of the article propose using a data caching mechanism and placing the cached data directly on the routers. The mathematical model presented in the article builds the route with the highest access speed to the requested content using a modified Dijkstra algorithm. If the mesh network cache lacks the required content, routers with Internet access are used instead. In practice, the considered method of creating routes to content already requested by users of the mesh network allows efficient use of router bandwidth and reduces latency.
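As a rough illustration of the route-building step, here is a standard Dijkstra search over link latencies that stops at the nearest node caching the requested content. The article's modified algorithm and its cost metric may differ; the graph below is invented:

```python
import heapq

def best_content_route(graph, source, cache_nodes):
    """Dijkstra over link latencies: return (cost, path) from `source`
    to the nearest node that holds the requested content in its cache."""
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node in cache_nodes:          # nearest cached copy reached
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return d, path[::-1]
        for nbr, latency in graph.get(node, []):
            nd = d + latency
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return None  # content not cached anywhere reachable; fall back to a gateway

graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 1), ("D", 5)],
    "C": [("D", 1)],
}
cost, path = best_content_route(graph, "A", cache_nodes={"D"})
print(cost, path)  # 3 ['A', 'B', 'C', 'D']
```

Returning `None` models the fallback the article describes: when no mesh cache holds the content, the request is routed to a router with Internet access instead.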
McQuistin, Stephen, Band, Vivian, Jacob, Dejice, Perkins, Colin.  2021.  Investigating Automatic Code Generation for Network Packet Parsing. 2021 IFIP Networking Conference (IFIP Networking). :1—9.
Use of formal protocol description techniques and code generation can reduce bugs in network packet parsing code. However, such techniques are themselves complex, and don't see wide adoption in the protocol standards development community, where the focus is on consensus building and human-readable specifications. We explore the utility and effectiveness of new techniques for describing protocol data, specifically designed to integrate with the standards development process, and discuss how they can be used to generate code that is safer and more trustworthy, while maintaining correctness and performance.
2022-03-08
Navrotsky, Yaroslav, Patsei, Natallia.  2021.  Zipf's Distribution Caching Application in Named Data Networks. 2021 IEEE Open Conference of Electrical, Electronic and Information Sciences (eStream). :1–4.
One of the most innovative directions on the Internet is Information-Centric Networking, in particular the Named Data Network. This approach should make it easier to find and retrieve the desired information on the network through name-based addressing, in-network caching, and other schemes. This article presents Named Data Network modeling, results, and a performance evaluation of the proposed caching policies for Named Data Network research, taking into account the influence of external factors on the basis of Zipf's law and the uniform distribution.
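For reference, the content-popularity distribution that Zipf's law implies can be sketched as follows; the exponent and catalogue size are illustrative, not values from the article:

```python
def zipf_popularity(n_items, alpha=1.0):
    """Request probability of each content item under Zipf's law:
    p(k) is proportional to 1 / k**alpha for rank k = 1..n."""
    weights = [1.0 / (k ** alpha) for k in range(1, n_items + 1)]
    total = sum(weights)
    return [w / total for w in weights]

probs = zipf_popularity(100)
# Under Zipf, caching only the top-ranked items captures most requests:
top10_hit_rate = sum(probs[:10])
print(f"{top10_hit_rate:.2f}")  # roughly 0.56 for alpha = 1, n = 100
```

This skew is exactly why caching policies evaluated under Zipf traffic behave very differently from those evaluated under the uniform distribution, the two cases the article compares.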
2022-03-01
Leevy, Joffrey L., Hancock, John, Khoshgoftaar, Taghi M., Seliya, Naeem.  2021.  IoT Reconnaissance Attack Classification with Random Undersampling and Ensemble Feature Selection. 2021 IEEE 7th International Conference on Collaboration and Internet Computing (CIC). :41–49.
The exponential increase in the use of Internet of Things (IoT) devices has been accompanied by a spike in cyberattacks on IoT networks. In this research, we investigate the Bot-IoT dataset with a focus on classifying IoT reconnaissance attacks. Reconnaissance attacks are a foundational step in the cyberattack lifecycle. Our contribution is centered on building predictive models with the aid of Random Undersampling (RUS) and ensemble Feature Selection Techniques (FSTs). As far as we are aware, this type of experimentation has never been performed for the Reconnaissance attack category of Bot-IoT. Our work uses the Area Under the Receiver Operating Characteristic Curve (AUC) metric to quantify the performance of a diverse range of classifiers: LightGBM, CatBoost, XGBoost, Random Forest (RF), Logistic Regression (LR), Naive Bayes (NB), Decision Tree (DT), and a Multilayer Perceptron (MLP). For this study, we determined that the best learners are DT and DT-based ensemble classifiers, the best RUS ratio is 1:1 or 1:3, and the best ensemble FST is our "6 Agree" technique.
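A minimal sketch of Random Undersampling at the 1:1 ratio the authors found best; the class counts below are invented, whereas the paper applies RUS to the Bot-IoT Reconnaissance category:

```python
import random
from collections import Counter

def random_undersample(X, y, ratio=1.0, seed=0):
    """Randomly drop majority-class samples until the class sizes are
    minority : minority*ratio (ratio=1.0 gives the 1:1 case)."""
    rng = random.Random(seed)
    counts = Counter(y)
    minority, majority = sorted(counts, key=counts.get)
    keep_major = int(counts[minority] * ratio)
    major_idx = [i for i, label in enumerate(y) if label == majority]
    kept = set(rng.sample(major_idx, keep_major))
    idx = [i for i, label in enumerate(y) if label == minority or i in kept]
    return [X[i] for i in idx], [y[i] for i in idx]

# 5 reconnaissance flows vs. 95 normal flows -> balanced 5 vs. 5
X = list(range(100))
y = [1] * 5 + [0] * 95
Xb, yb = random_undersample(X, y, ratio=1.0)
print(Counter(yb))  # five samples of each class remain
```

For the paper's 1:3 setting one would pass `ratio=3.0`, keeping three majority samples per minority sample.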
2022-02-04
Iqbal, Siddiq, Sujatha, B R.  2021.  Secure Key Management Scheme With Good Resiliency For Hierarchical Network Using Combinatorial Theory. 2021 2nd International Conference for Emerging Technology (INCET). :1–7.
Combinatorial designs are powerful structures for key management in wireless sensor networks (WSNs), providing good connectivity as well as security against external attacks in large-scale networks. Among the candidate models, a symmetric key foundation is the most appropriate for secure exchanges in WSNs. The core objective is to enhance and evaluate issues such as attacks on nodes, and to provide better key strength, better connectivity, and security in interactions among the nodes. The keys distributed by the base station to cluster heads are generated using a Symmetric Balanced Incomplete Block Design (SBIBD), as are the keys distributed by a cluster head to its member nodes, and keys are refreshed periodically to avoid stale entries. Compromised sensor nodes can be used to insert false (spurious) reports into wireless sensor networks. Interaction between sensor nodes using these keys to build a protected association helps ensure that the network is secure. Compared with similar existing schemes, our approach provides better security.
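The combinatorial idea can be illustrated with the smallest symmetric BIBD, the (7, 3, 1) design (the Fano plane), in which any two key rings share exactly one key. This toy instance is ours; the paper's SBIBD parameters and its base station/cluster head hierarchy are more elaborate:

```python
from itertools import combinations

# Key rings from the symmetric (7, 3, 1)-design (the Fano plane); each
# block is one node's key ring, each integer names a key in the pool.
FANO_BLOCKS = [
    {1, 2, 3}, {1, 4, 5}, {1, 6, 7},
    {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6},
]

def shared_key(ring_a, ring_b):
    """In a symmetric BIBD any two distinct blocks meet in exactly
    lambda = 1 point, so every node pair shares exactly one key."""
    return (ring_a & ring_b).pop()

# Full connectivity: every pair of the 7 nodes shares exactly one key.
assert all(len(a & b) == 1 for a, b in combinations(FANO_BLOCKS, 2))
print(shared_key(FANO_BLOCKS[0], FANO_BLOCKS[3]))  # 2
```

The lambda = 1 property is what gives the "good connectivity" the abstract mentions: no pair of nodes needs a path-key establishment phase, since a common key always exists.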
Al-Turkistani, Hilalah F., AlFaadhel, Alaa.  2021.  Cyber Resiliency in the Context of Cloud Computing Through Cyber Risk Assessment. 2021 1st International Conference on Artificial Intelligence and Data Analytics (CAIDA). :73–78.
Cyber resiliency in cloud computing is one of the most important capabilities of an enterprise network: the continuous ability to withstand adverse conditions and recover from them quickly. This capability can be measured through cybersecurity risk assessment techniques. However, cybersecurity risk management studies of cloud computing resiliency approaches are deficient. This paper proposes a resilient cloud cybersecurity risk assessment tailored specifically to Dropbox, with two methods: a technical solution motivated by a cybersecurity risk assessment of cloud services, and a personnel-targeted solution guided by a cybersecurity survey among employees to identify the knowledge that qualifies them to withstand a cyberattack. The proposed work attempts to identify cloud vulnerabilities, assess threats, and detect high-risk components, and finally proposes appropriate safeguards, such as failure prediction and removal, redundancy, or load balancing techniques, for quick recovery and a return to the pre-attack state if failure happens.
Al-Turkistani, Hilalah F., Aldobaian, Samar, Latif, Rabia.  2021.  Enterprise Architecture Frameworks Assessment: Capabilities, Cyber Security and Resiliency Review. 2021 1st International Conference on Artificial Intelligence and Data Analytics (CAIDA). :79–84.

Recent technological advancement demands that organizations have measures in place to manage their Information Technology (IT) systems. Enterprise Architecture Frameworks (EAFs) offer companies an efficient technique to manage their IT systems, aligning business requirements with effective solutions. As a result, experts have developed multiple EAFs, such as TOGAF, Zachman, MoDAF, DoDAF, and SABSA, to help organizations achieve their objectives by reducing costs and complexity. These frameworks, however, concentrate mostly on business needs and lack holistic, enterprise-wide security practices, which may expose enterprises to significant security risks resulting in financial loss. This study focuses on evaluating business capabilities in TOGAF, NIST, COBIT, MoDAF, DoDAF, SABSA, and Zachman, and identifies essential security requirements in the TOGAF, SABSA, and COBIT19 frameworks by comparing their resiliency processes, which helps organizations easily select an applicable framework. The study shows that, besides business requirements, EAFs need to include precise cybersecurity guidelines aligned with EA business strategies. Enterprises now need to focus more on building a resilient approach that goes beyond protection, detection, and prevention: enterprises should be ready to withstand cyberattacks by applying a relevant cyber resiliency approach, improving how they deal with the impacts of cybersecurity risks.

2022-01-31
Velez, Miguel, Jamshidi, Pooyan, Siegmund, Norbert, Apel, Sven, Kästner, Christian.  2021.  White-Box Analysis over Machine Learning: Modeling Performance of Configurable Systems. 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). :1072–1084.

Performance-influence models can help stakeholders understand how and where configuration options and their interactions influence the performance of a system. With this understanding, stakeholders can debug performance behavior and make deliberate configuration decisions. Current black-box techniques to build such models combine various sampling and learning strategies, resulting in tradeoffs between measurement effort, accuracy, and interpretability. We present Comprex, a white-box approach to build performance-influence models for configurable systems, combining insights of local measurements, dynamic taint analysis to track options in the implementation, compositionality, and compression of the configuration space, without relying on machine learning to extrapolate incomplete samples. Our evaluation on 4 widely-used, open-source projects demonstrates that Comprex builds similarly accurate performance-influence models to the most accurate and expensive black-box approach, but at a reduced cost and with additional benefits from interpretable and local models.
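A performance-influence model of the kind described is, in essence, an additive model over configuration options and their interactions. A hand-written sketch follows; the options and coefficients are invented for illustration, not Comprex output:

```python
def perf_model(config, base=10.0, terms=None):
    """Evaluate an additive performance-influence model. `terms` maps a
    tuple of option names (a single option or an interaction) to its
    influence on execution time, in seconds."""
    terms = terms or {
        ("compression",): 5.0,
        ("encryption",): 12.0,
        ("compression", "encryption"): 4.0,  # interaction term
    }
    total = base
    for options, influence in terms.items():
        # A term contributes only when all of its options are enabled.
        if all(config.get(opt, False) for opt in options):
            total += influence
    return total

print(perf_model({"compression": True, "encryption": False}))  # 15.0
print(perf_model({"compression": True, "encryption": True}))   # 31.0
```

Because each term is attached to named options, a stakeholder can read off where time goes, which is the interpretability benefit the abstract contrasts with black-box learned models.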

2022-01-25
Wynn, Nathan, Johnsen, Kyle, Gonzalez, Nick.  2021.  Deepfake Portraits in Augmented Reality for Museum Exhibits. 2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). :513—514.
In a collaboration with the Georgia Peanut Commission’s Education Center and museum in Georgia, USA, we developed an augmented reality app to guide visitors through the museum and offer immersive educational information about the artifacts, exhibits, and artwork displayed therein. Notably, our augmented reality system applies the First Order Motion Model for Image Animation [4] to several portraits of individuals influential to the Georgia peanut industry to provide immersive animated narration and monologue regarding their contributions to the peanut industry.
2021-11-29
Baker, Oras, Thien, Chuong Nguyen.  2020.  A New Approach to Use Big Data Tools to Substitute Unstructured Data Warehouse. 2020 IEEE Conference on Big Data and Analytics (ICBDA). :26–31.
Data warehouses and big data have become the trend for organizing data effectively. Business data originate from various kinds of sources in different forms, from conventional structured data to unstructured data, and they are the input for producing useful information essential to business sustainability. This research navigates the complicated designs of common big data and data warehousing technologies to propose an effective approach to using these technologies for designing and building an unstructured textual data warehouse, a crucial and essential tool for most enterprises nowadays for decision making and gaining competitive advantage. In this research, we utilized the IBM BigInsights Text Analytics, PostgreSQL, and Pentaho tools; the resulting unstructured data warehouse worked excellently with unstructured text from Amazon review datasets, and the newly proposed approach creates a practical solution for building an unstructured data warehouse.
2021-11-08
Marino, Daniel L., Grandio, Javier, Wickramasinghe, Chathurika S., Schroeder, Kyle, Bourne, Keith, Filippas, Afroditi V., Manic, Milos.  2020.  AI Augmentation for Trustworthy AI: Augmented Robot Teleoperation. 2020 13th International Conference on Human System Interaction (HSI). :155–161.
Despite the performance of state-of-the-art Artificial Intelligence (AI) systems, some sectors hesitate to adopt AI because of a lack of trust in these systems. This attitude is prevalent among high-risk areas, where there is a reluctance to remove humans entirely from the loop. In these scenarios, Augmentation provides a preferred alternative over complete Automation. Instead of replacing humans, AI Augmentation uses AI to improve and support human operations, creating an environment where humans work side by side with AI systems. In this paper, we discuss how AI Augmentation can provide a path for building Trustworthy AI. We exemplify this approach using Robot Teleoperation. We lay out design guidelines and motivations for the development of AI Augmentation for Robot Teleoperation. Finally, we discuss the design of a Robot Teleoperation testbed for the development of AI Augmentation systems.
2021-10-12
Ivaki, Naghmeh, Antunes, Nuno.  2020.  SIDE: Security-Aware Integrated Development Environment. 2020 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW). :149–150.
An effective way for building secure software is to embed security into software in the early stages of software development. Thus, we aim to study several evidences of code anomalies introduced during the software development phase, that may be indicators of security issues in software, such as code smells, structural complexity represented by diverse software metrics, the issues detected by static code analysers, and finally missing security best practices. To use such evidences for vulnerability prediction and removal, we first need to understand how they are correlated with security issues. Then, we need to discover how these imperfect raw data can be integrated to achieve a reliable, accurate and valuable decision about a portion of code. Finally, we need to construct a security actuator providing suggestions to the developers to remove or fix the detected issues from the code. All of these will lead to the construction of a framework, including security monitoring, security analyzer, and security actuator platforms, that are necessary for a security-aware integrated development environment (SIDE).
2021-08-31
Sundar, Agnideven Palanisamy, Li, Feng, Zou, Xukai, Hu, Qin, Gao, Tianchong.  2020.  Multi-Armed-Bandit-based Shilling Attack on Collaborative Filtering Recommender Systems. 2020 IEEE 17th International Conference on Mobile Ad Hoc and Sensor Systems (MASS). :347–355.
Collaborative Filtering (CF) is a popular recommendation system that makes recommendations based on similar users' preferences. Though it is widely used, CF is prone to Shilling/Profile Injection attacks, where fake profiles are injected into the CF system to alter its outcome. Most of the existing shilling attacks do not work on online systems and cannot be efficiently implemented in real-world applications. In this paper, we introduce an efficient Multi-Armed-Bandit-based reinforcement learning method to practically execute online shilling attacks. Our method works by reducing the uncertainty associated with the item selection process and finds the most optimal items to enhance attack reach. Such practical online attacks open new avenues for research in building more robust recommender systems. We treat the recommender system as a black box, making our method effective irrespective of the type of CF used. Finally, we also experimentally test our approach against popular state-of-the-art shilling attacks.
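The item-selection step can be illustrated with the classic UCB1 bandit policy, which balances exploiting items with high observed reward against exploring rarely tried ones. This is a generic sketch; the paper's reward formulation for attack reach against a black-box CF system is more involved:

```python
import math
import random

def ucb1_select(counts, rewards):
    """Pick the arm maximizing the UCB1 score: mean reward plus an
    exploration bonus that shrinks as an arm is tried more often."""
    if 0 in counts:
        return counts.index(0)            # try every item at least once
    total = sum(counts)
    scores = [r / n + math.sqrt(2 * math.log(total) / n)
              for n, r in zip(counts, rewards)]
    return scores.index(max(scores))

# Simulated item selection: each candidate item has a hidden "reach"
# probability unknown to the selection policy.
rng = random.Random(1)
true_reach = [0.2, 0.8, 0.5]
counts, rewards = [0, 0, 0], [0.0, 0.0, 0.0]
for _ in range(2000):
    arm = ucb1_select(counts, rewards)
    counts[arm] += 1
    rewards[arm] += 1.0 if rng.random() < true_reach[arm] else 0.0
print(counts)  # the middle item (highest reach) dominates the pulls
```

Reducing the uncertainty in this way is what lets an online attack converge on the most effective filler items without knowing the recommender's internals, as the abstract describes.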
2021-08-17
Song, Guanglei, He, Lin, Wang, Zhiliang, Yang, Jiahai, Jin, Tao, Liu, Jieling, Li, Guo.  2020.  Towards the Construction of Global IPv6 Hitlist and Efficient Probing of IPv6 Address Space. 2020 IEEE/ACM 28th International Symposium on Quality of Service (IWQoS). :1–10.
Fast IPv4 scanning has made sufficient progress in network measurement and security research. However, it is infeasible to perform brute-force scanning of the IPv6 address space. We can find active IPv6 addresses through scanning candidate addresses generated by the state-of-the-art algorithms, whose probing efficiency of active IPv6 addresses, however, is still very low. In this paper, we aim to improve the probing efficiency of IPv6 addresses in two ways. Firstly, we perform a longitudinal active measurement study over four months, building a high-quality dataset called hitlist with more than 1.3 billion IPv6 addresses distributed in 45.2k BGP prefixes. Different from previous work, we probe the announced BGP prefixes using a pattern-based algorithm, which makes our dataset overcome the problems of uneven address distribution and low active rate. Secondly, we propose an efficient address generation algorithm DET, which builds a density space tree to learn high-density address regions of the seed addresses in linear time and improves the probing efficiency of active addresses. On the public hitlist and our hitlist, we compare our algorithm DET against state-of-the-art algorithms and find that DET increases the de-aliased active address ratio by 10%, and active address (including aliased addresses) ratio by 14%, by scanning 50 million addresses.
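The density idea behind DET can be approximated in flat form by bucketing seed addresses per prefix and keeping only dense buckets as probe regions. DET proper builds a density space tree over the address space; the addresses below use the IPv6 documentation range 2001:db8::/32:

```python
import ipaddress
from collections import Counter

def dense_regions(seed_addrs, prefix_len=48, min_density=2):
    """Bucket seed IPv6 addresses by their /prefix_len prefix and keep
    the high-density regions; probe candidates are drawn from these."""
    buckets = Counter(
        ipaddress.ip_network(f"{addr}/{prefix_len}", strict=False)
        for addr in seed_addrs
    )
    return [net for net, hits in buckets.items() if hits >= min_density]

seeds = [
    "2001:db8:1::1", "2001:db8:1::2", "2001:db8:1::99",  # dense /48
    "2001:db8:ffff::1",                                   # isolated seed
]
for net in dense_regions(seeds):
    print(net)  # 2001:db8:1::/48
```

Concentrating probes on such high-density regions, rather than spreading them uniformly, is what raises the active-address hit ratio that the paper reports improving.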
2021-07-28
Wang, Wenhui, Chen, Liandong, Han, Longxi, Zhou, Zhihong, Xia, Zhengmin, Chen, Xiuzhen.  2020.  Vulnerability Assessment for ICS system Based on Zero-day Attack Graph. 2020 International Conference on Intelligent Computing, Automation and Systems (ICICAS). :1—5.
The numerous attacks on ICS systems pose severe threats to critical infrastructure. Extensive studies have focused on risk assessment for discovered vulnerabilities; however, identifying zero-day vulnerabilities is challenging because they are unknown to defenders. Here we measure ICS system zero-day risk by building an enhanced attack graph of expected attack paths that exploit zero-day vulnerabilities. In this study, we define security metrics for zero-day vulnerabilities in an ICS. We then create a zero-day attack graph to guide hardening of the system by measuring the attack paths that exploit zero-day vulnerabilities. Our study presents a vulnerability assessment method for ICS systems that considers zero-day vulnerabilities via the zero-day attack graph, assessing unknown vulnerability risk to close the imbalance between attackers and defenders; together, this work is essential to ICS system security.
2021-06-30
Zhang, Wenrui.  2020.  Application of Attention Model Hybrid Guiding based on Artificial Intelligence in the Course of Intelligent Architecture History. 2020 3rd International Conference on Intelligent Sustainable Systems (ICISS). :59—62.
The application of a hybrid attention model based on artificial intelligence to a course on intelligent architecture history is studied in this article. A Hadoop distributed architecture using big data processing technology combines basic building information with building energy consumption data for data mining, and we conduct a preliminary design of a Hadoop-based public building energy consumption data mining system. The principles of the proposed model are summarized. First, the intelligent firewall processes decision data faster when harmful information intrudes, so it can monitor and intercept the harmful information in a more timely manner. Second, we develop a problem data processing plan, delete and identify different types of problem data, and supplement the deleted problem data according to the rules obtained by data mining. The experimental results reflect the efficiency of the proposed model.