Biblio

Filters: Keyword is Manuals
2022-08-26
Wadekar, Isha.  2021.  Artificial Conversational Agent using Robust Adversarial Reinforcement Learning. 2021 International Conference on Computer Communication and Informatics (ICCCI). :1–7.
Reinforcement learning (R.L.) is an effective and practical means for resolving problems where the agent possesses no prior information or knowledge about the environment. The agent acquires knowledge through two components: trial-and-error and rewards. An R.L. agent determines an effective approach by interacting directly with the environment and acquiring information about its circumstances. However, many modern R.L.-based strategies fail to account for the enormous rift between simulation and the physical world, because of which policies learned in simulation fail to transfer to the physical world. Even if policy learning is performed in the physical world, the inadequacy of training data leads to learned policies that fail to generalize to test circumstances. The intention of robust adversarial reinforcement learning (RARL) is to train an agent in the presence of a destabilizing opponent (adversary agent) that applies disturbance forces to the system. The jointly trained adversary is reinforced so that the actual agent, i.e. the protagonist, is trained rigorously.
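The alternating min-max training that RARL performs can be illustrated with a toy zero-sum disturbance game; the scalar dynamics, linear policies, and grid search below are invented for illustration and are not the paper's actual setup:

```python
# Toy sketch of Robust Adversarial RL (RARL): protagonist and adversary
# are trained in alternation on a zero-sum disturbance game.

def rollout_cost(k_p, k_a, x0=1.0, steps=10):
    """Quadratic cost of a rollout where the protagonist applies -k_p*x
    and the adversary injects +k_a*x as a destabilizing disturbance."""
    x, cost = x0, 0.0
    for _ in range(steps):
        x = x + (-k_p * x) + (k_a * x)   # protagonist action + adversary force
        cost += x * x
    return cost

def train_rarl(grid=(0.0, 0.25, 0.5, 0.75, 1.0), rounds=5):
    k_p, k_a = 0.0, 0.0
    for _ in range(rounds):
        # Protagonist update: minimize cost against the current adversary.
        k_p = min(grid, key=lambda k: rollout_cost(k, k_a))
        # Adversary update: maximize cost against the current protagonist.
        k_a = max(grid, key=lambda k: rollout_cost(k_p, k))
    return k_p, k_a

k_p, k_a = train_rarl()
```

Each round the protagonist re-optimizes against the strongest disturbance found so far, which is the sense in which the adversary equips the protagonist rigorously.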
Frumin, Dan, Krebbers, Robbert, Birkedal, Lars.  2021.  Compositional Non-Interference for Fine-Grained Concurrent Programs. 2021 IEEE Symposium on Security and Privacy (SP). :1416—1433.
Non-interference is a program property that ensures the absence of information leaks. In the context of programming languages, there exist two common approaches for establishing non-interference: type systems and program logics. Type systems provide strong automation (by means of type checking), but they are inherently restrictive in the kind of programs they support. Program logics support challenging programs, but they typically require significant human assistance, and cannot handle modules or higher-order programs. To connect these two approaches, we present SeLoC—a separation logic for non-interference, on top of which we build a type system using the technique of logical relations. By building a type system on top of separation logic, we can compositionally verify programs that consist of typed and untyped parts. The former parts are verified through type checking, while the latter parts are verified through manual proof. The core technical contribution of SeLoC is a relational form of weakest preconditions that can track information flow using separation logic resources. SeLoC is fully machine-checked, and built on top of the Iris framework for concurrent separation logic in Coq. The integration with Iris provides seamless support for fine-grained concurrency, which was beyond the reach of prior type systems and program logics for non-interference.
2022-07-28
Obert, James, Loffredo, Tim.  2021.  Efficient Binary Static Code Data Flow Analysis Using Unsupervised Learning. 2021 4th International Conference on Artificial Intelligence for Industries (AI4I). :89—90.
The ever increasing need to ensure that code is reliably, efficiently and safely constructed has fueled the evolution of popular static binary code analysis tools. In identifying potential coding flaws in binaries, tools such as IDA Pro are used to disassemble the binaries into an opcode/assembly language format in support of manual static code analysis. Because of the highly manual and resource intensive nature involved with analyzing large binaries, the probability of overlooking potential coding irregularities and inefficiencies is quite high. In this paper, a light-weight, unsupervised data flow methodology is described which uses highly-correlated data flow graphs (CDFGs) to identify coding irregularities such that analysis time and required computing resources are minimized. Such analysis accuracy and efficiency gains are achieved by using a combination of graph analysis and unsupervised machine learning techniques which allows an analyst to focus on the most statistically significant flow patterns while performing binary static code analysis.
2022-07-01
Mani, Santosh, Nene, Manisha J.  2021.  Self-organizing Software Defined Mesh Networks to Counter Failures and Attacks. 2021 International Conference on Intelligent Technologies (CONIT). :1–7.
With current traditional/legacy networks, the reliance on manual intervention to solve a variety of issues is substantial, whether for primary operational functionality such as addressing link failure or for the complexities arising from challenges like link-flapping and attacks such as DDoS. This physical, manual approach to network configuration makes significant changes slow to apply and error-prone, and it cannot support the rapidly shifting workloads of modern networks, because networking decisions are left in the hands of the physical networking devices. Software Defined Networking (SDN), which abstracts the network functionality planes, separating them from the physical hardware and decoupling the data plane from the control plane, can provide a degree of automation for network resources and for the management of the services the network provides. This paper explores some of the aspects of automation provided by SDN capabilities in a mesh network (which provides network security through redundant communication links) that make the network inherently intelligent and able to take decisions without manual intervention, a step towards intelligent automated networks.
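The link-failure handling that SDN automates here amounts to the controller recomputing a path over its global topology view as soon as a link goes down. A minimal sketch, assuming an invented four-switch mesh and plain BFS routing:

```python
# Self-healing sketch: an SDN controller holding a global view of a
# mesh topology recomputes the path when a link fails. The topology
# below is fabricated for illustration.
from collections import deque

def shortest_path(links, src, dst):
    """BFS over an undirected link set; returns a node list or None."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in adj.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

mesh = {("s1", "s2"), ("s2", "s4"), ("s1", "s3"), ("s3", "s4")}
primary = shortest_path(mesh, "s1", "s4")
# Link-failure event: the controller drops the link and reroutes without
# manual intervention -- the redundancy of the mesh pays off.
backup = shortest_path(mesh - {("s1", "s2")}, "s1", "s4")
```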
2022-06-10
Ramachandran, Gowri Sankar, Deane, Felicity, Malik, Sidra, Dorri, Ali, Jurdak, Raja.  2021.  Towards Assisted Autonomy for Supply Chain Compliance Management. 2021 Third IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :321–330.
In an agricultural supply chain, farmers, food processors, transportation agencies, importers, and exporters must comply with different regulations imposed by one or more jurisdictions depending on the nature of their business operations. Supply chain stakeholders conventionally transport their goods, along with the corresponding documentation, via regulators for compliance checks. This is generally followed by a tedious and manual process to ensure the goods meet regulatory requirements. However, supply chain systems are changing through digitization. In digitized supply chains, data is shared with the relevant stakeholders through digital supply chain platforms, including blockchain technology. In such data-driven digital supply chains, the regulators may be able to leverage digital technologies, such as artificial intelligence and machine learning, to automate the compliance verification process. However, a barrier to progress is the risk that information will not be credible, thus reversing the gains that automation could achieve. Automating compliance based on inaccurate data may compromise the safety and credibility of the agricultural supply chain, which discourages regulators and other stakeholders from adopting and relying on automation. Within this article we consider the challenges of digital supply chains as we describe parts of the compliance management process and how it can be automated to improve the operational efficiency of agricultural supply chains. We introduce assisted autonomy as a means to pragmatically automate the compliance verification process by combining the power of digital systems while keeping the human in the loop. We argue that autonomous compliance is possible, but that the need for human-led inspection processes will never be fully replaced by machines; it can, however, be minimised through “assisted autonomy”.
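The assisted-autonomy idea can be sketched as a triage step: the platform auto-clears consignments whose data passes the rules with high confidence and escalates the rest to a human inspector. The field names, rules, and confidence threshold below are illustrative assumptions, not the article's actual compliance model:

```python
# Assisted-autonomy sketch: automate the routine compliance decisions,
# keep the human in the loop for the doubtful ones.

def check_consignment(record, rules, confidence_floor=0.9):
    violations = [name for name, rule in rules.items() if not rule(record)]
    if violations:
        return ("rejected", violations)
    if record.get("data_confidence", 0.0) < confidence_floor:
        return ("human_review", [])      # data-credibility concern: escalate
    return ("auto_cleared", [])

rules = {
    "cold_chain": lambda r: r["max_temp_c"] <= 4.0,
    "declared_origin": lambda r: r["origin"] in {"AU", "NZ"},
}
good = {"max_temp_c": 3.2, "origin": "AU", "data_confidence": 0.97}
dubious = {"max_temp_c": 3.5, "origin": "AU", "data_confidence": 0.4}
```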
Yang, Jing, Vega-Oliveros, Didier, Seibt, Tais, Rocha, Anderson.  2021.  Scalable Fact-checking with Human-in-the-Loop. 2021 IEEE International Workshop on Information Forensics and Security (WIFS). :1–6.
Researchers have been investigating automated solutions for fact-checking on various fronts. However, current approaches often overlook the fact that the information released every day is escalating, and a large amount of it overlaps. Intending to accelerate fact-checking, we bridge this gap by proposing a new pipeline – grouping similar messages and summarizing them into aggregated claims. Specifically, we first clean a set of social media posts (e.g., tweets) and build a graph of all posts based on their semantics; then, we perform two clustering methods to group the messages for further claim summarization. We evaluate the summaries both quantitatively with ROUGE scores and qualitatively with human evaluation. We also generate a graph of summaries to verify that there is no significant overlap among them. The pipeline reduced 28,818 original messages to 700 summary claims, showing the potential to speed up the fact-checking process by organizing and selecting representative claims from massive disorganized and redundant messages.
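A stripped-down version of the grouping step might look as follows; token-set Jaccard similarity and a longest-message "summary" stand in for the paper's semantic graph, dual clustering methods, and claim summarization:

```python
# Sketch: build a similarity graph over cleaned posts, take its
# connected components as clusters, keep one representative per cluster.
# Threshold and toy posts are assumptions for illustration.

def jaccard(a, b):
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b)

def cluster_posts(posts, threshold=0.5):
    """Group posts whose pairwise token Jaccard exceeds the threshold."""
    parent = list(range(len(posts)))
    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i
    for i in range(len(posts)):
        for j in range(i + 1, len(posts)):
            if jaccard(posts[i], posts[j]) >= threshold:
                parent[find(j)] = find(i)
    groups = {}
    for i in range(len(posts)):
        groups.setdefault(find(i), []).append(posts[i])
    # "Summarize" each cluster by its longest member as a stand-in claim.
    return [max(g, key=len) for g in groups.values()]

posts = ["vaccine causes illness claim", "claim vaccine causes illness",
         "stock market hits record high"]
claims = cluster_posts(posts)
```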
2022-06-09
Yan, Longchuan, Zhang, Zhaoxia, Huang, Huige, Yuan, Xiaoyu, Peng, Yuanlong, Zhang, Qingyun.  2021.  An Improved Deep Pairwise Supervised Hashing Algorithm for Fast Image Retrieval. 2021 IEEE 2nd International Conference on Information Technology, Big Data and Artificial Intelligence (ICIBA). 2:1152–1156.
In recent years, hashing algorithms have been widely researched and have made considerable progress in large-scale image retrieval tasks due to their advantages of convenient storage and fast calculation. Nowadays, most researchers use deep convolutional neural networks (CNNs) to perform feature learning and hash coding learning at the same time for image retrieval, and deep hashing methods based on deep CNNs perform much better than traditional manual-feature hashing methods. But most methods are designed to handle simple binary similarity and decrease quantization error, ignoring that the features of similar images and the generated hashing codes are not compact enough. In order to enhance the performance of CNN-based hashing algorithms for large-scale image retrieval, this paper proposes a new deep supervised hashing algorithm in which a novel channel attention mechanism is added and the loss function is elaborately redesigned to generate compact binary codes. Experiments show that, compared with existing hashing methods, this method has better performance on two large-scale image datasets, CIFAR-10 and NUS-WIDE.
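The pairwise supervised objective that such methods optimize can be sketched in a few lines; the margin, the quantization weight, and the toy code vectors are assumptions, and the paper's redesigned loss and channel-attention network are not reproduced here:

```python
# Pairwise hashing-loss sketch: similar image pairs should get nearby
# codes, dissimilar pairs distant ones, and real-valued network outputs
# should be pushed toward +/-1 (the quantization term).

def pairwise_hash_loss(u, v, similar, margin=4.0, quant_weight=0.1):
    """u, v: real-valued code vectors from the network for two images."""
    dist = sum((a - b) ** 2 for a, b in zip(u, v))
    if similar:
        sim_term = dist                      # pull similar codes together
    else:
        sim_term = max(0.0, margin - dist)   # push dissimilar codes apart
    quant = sum((abs(a) - 1.0) ** 2 for a in u + v)  # drive outputs to +/-1
    return sim_term + quant_weight * quant

tight = pairwise_hash_loss([1.0, -1.0], [1.0, -1.0], similar=True)
loose = pairwise_hash_loss([1.0, -1.0], [0.2, 0.3], similar=True)
```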
2022-06-06
Madono, Koki, Nakano, Teppei, Kobayashi, Tetsunori, Ogawa, Tetsuji.  2020.  Efficient Human-In-The-Loop Object Detection using Bi-Directional Deep SORT and Annotation-Free Segment Identification. 2020 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). :1226–1233.
The present study proposes a method for detecting objects with a high recall rate for human-supported video annotation. In recent years, automatic annotation techniques such as object detection and tracking have become more powerful; however, detection and tracking of occluded objects, small objects, and blurred objects are still difficult. In order to annotate such objects, manual annotation is inevitably required. For this reason, we envision a human-supported video annotation framework in which over-detected objects (i.e., false positives) are allowed to minimize oversight (i.e., false negatives) in automatic annotation and then the over-detected objects are removed manually. This study attempts to achieve human-in-the-loop object detection with an emphasis on suppressing oversight in the former stage of processing in the aforementioned annotation framework: bi-directional deep SORT is proposed to reliably capture missed objects and annotation-free segment identification (AFSID) is proposed to identify video frames in which manual annotation is not required. These methods reinforce each other, yielding an increase in the detection rate while reducing the burden of human intervention. Experimental comparisons using a pedestrian video dataset demonstrated that bi-directional deep SORT with AFSID was successful in capturing object candidates with a higher recall rate than the existing deep SORT while reducing the cost of manpower compared to manual annotation at regular intervals.
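The two ideas can be caricatured in a few lines: merge per-frame detections from a forward and a backward tracking pass, then flag only the frames whose merged detections still look incomplete for manual work. The detection id-sets and the completeness rule below are fabricated, not the paper's actual tracker output:

```python
# Bi-directional merge + AFSID-like frame selection, in miniature.

def bidirectional_merge(forward, backward):
    """Union per-frame detection id-sets from both tracking passes."""
    return [f | b for f, b in zip(forward, backward)]

def frames_needing_annotation(detections, expected_counts):
    """Frames whose merged detections still fall short get manual work."""
    return [i for i, (dets, n) in enumerate(zip(detections, expected_counts))
            if len(dets) < n]

forward  = [{1, 2}, {1},    {1, 2, 3}]
backward = [{2},    {1, 2}, {1, 3}]
merged = bidirectional_merge(forward, backward)
todo = frames_needing_annotation(merged, expected_counts=[2, 2, 4])
```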
Lau, Tuong Phi.  2021.  Software Reuse Exploits in Node.js Web Apps. 2021 5th International Conference on System Reliability and Safety (ICSRS). :190–197.
The npm ecosystem has the largest number of third-party packages for making node.js-based web apps. Due to its free and open nature, it can raise diverse security concerns. Adversaries can take advantage of existing software APIs included in node.js web apps for achieving their own malicious targets. More specifically, attackers may inject malicious data into client requests and then submit them to a victim node.js server. They may then manipulate program states to reuse sensitive APIs as gadgets required in the node.js web app executed on the victim server. Once such sensitive APIs are successfully accessed, they may indirectly raise security threats such as code injection attacks, software-layer DoS attacks, private data leaks, etc. For example, when the sensitive APIs are implemented as pattern matching operations and are called with hard-to-match input strings submitted by clients, they may launch application-level DoS attacks. In this paper, we introduce software reuse exploits through reusing packages available in node.js web apps for posing security threats to servers. In addition, we propose an approach based on data flow analysis to detect vulnerable npm packages that can be exposed to such exploits. To evaluate its effectiveness, we collected a dataset of 15,000 modules from the ecosystem to conduct the experiments. As a result, it discovered 192 vulnerable packages. By manual analysis, we identified 156 of the 192 as true positives that can be exposed to code reuse exploits: 128 of the 156 modules enable remotely triggered software-layer DoS attacks, 18 enable code injection, and 10 leak private data.
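The application-level DoS gadget mentioned above is easy to reproduce in miniature: a pattern-matching operation with nested quantifiers backtracks exponentially on a hard-to-match input. The pattern and payload are classic textbook examples (shown here in Python rather than JavaScript), not code from a real npm package:

```python
# ReDoS sketch: catastrophic backtracking in a regex, the kind of
# sensitive pattern-matching API an attacker could feed via a request.
import re
import time

VULNERABLE = re.compile(r"^(a+)+$")   # nested quantifier

def match_time(payload):
    start = time.perf_counter()
    result = VULNERABLE.match(payload)
    return result, time.perf_counter() - start

# Hard-to-match input: many 'a's plus a char that forces failure.
# Each extra 'a' roughly doubles the matching time.
result, elapsed = match_time("a" * 18 + "b")
```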
2022-05-10
Wang, Ben, Chu, Hanting, Zhang, Pengcheng, Dong, Hai.  2021.  Smart Contract Vulnerability Detection Using Code Representation Fusion. 2021 28th Asia-Pacific Software Engineering Conference (APSEC). :564–565.
At present, most smart contract vulnerability detection methods use manually-defined patterns, which is time-consuming and far from satisfactory. To address this issue, researchers have attempted to deploy deep learning techniques for automatic vulnerability detection in smart contracts. Nevertheless, current work mostly relies on a single code representation such as the AST (Abstract Syntax Tree) or code tokens to learn vulnerability characteristics, which might lead to incompleteness of the learned semantic information. In addition, the number of available vulnerability datasets is also insufficient. To address these limitations, we first construct a dataset covering the most typical types of smart contract vulnerabilities, which can accurately indicate the specific line number where a vulnerability may exist. Second, for each single code representation, we propose a novel method called AFS (AST Fuse program Slicing) to fuse code characteristic information. AFS can fuse the structured information of the AST with program slicing information and detect vulnerabilities by learning the new vulnerability characteristic information.
Lin, Wei, Cai, Saihua.  2021.  An Empirical Study on Vulnerability Detection for Source Code Software based on Deep Learning. 2021 IEEE 21st International Conference on Software Quality, Reliability and Security Companion (QRS-C). :1159–1160.
In recent years, the complexity of software vulnerabilities has continued to increase, and manual vulnerability detection methods alone no longer meet the demand. With the rapid development of deep learning, many neural network models have been widely applied to source code vulnerability detection. A variant of the recurrent neural network (RNN), bidirectional Long Short-Term Memory (BiLSTM), has been a popular choice in vulnerability detection. However, is BiLSTM the most suitable choice? To answer this question, we conducted a series of experiments to investigate the effectiveness of different neural network models for source code vulnerability detection. The results show that the RNN variants gated recurrent unit (GRU) and bidirectional GRU are more capable of detecting source code fragments with mixed vulnerability types, while the concatenated convolutional neural network is more capable of detecting source code fragments of single vulnerability types.
Li, Hongrui, Zhou, Lili, Xing, Mingming, Taha, Hafsah binti.  2021.  Vulnerability Detection Algorithm of Lightweight Linux Internet of Things Application with Symbolic Execution Method. 2021 International Symposium on Computer Technology and Information Science (ISCTIS). :24–27.
The security of Internet of Things (IoT) devices has become a matter of great concern in recent years. The existence of security holes in the executable programs in IoT devices has resulted in security risks that are difficult to estimate. For a long time, vulnerability detection has mainly been performed through manual debugging and analysis, where detection efficiency is low and accuracy is difficult to guarantee. In this paper, the mainstream automated vulnerability analysis methods of recent years are studied, and a vulnerability detection algorithm based on symbolic execution is presented. The detection algorithm is suitable for lightweight applications in small and medium-sized IoT devices. It realizes three functions: buffer overflow vulnerability detection, encryption reliability detection, and protection state detection. In experiments, the algorithm detected an overflow vulnerability program within 2.75 seconds and completed encryption reliability detection within 1.79 seconds; repeating the tests with multiple sets of data showed a small variation of less than 6.4 milliseconds. The results show that the symbolic execution detection algorithm presented in this paper has high detection efficiency together with good accuracy and robustness.
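The buffer-overflow check can be sketched symbolically: rather than running the binary on concrete bytes, track an interval for the attacker-controlled input length and flag any copy whose bound the interval can exceed. Real engines work on binaries and full path constraints; everything below is a toy stand-in:

```python
# Toy symbolic check for buffer overflows: a symbolic length interval
# stands in for path constraints over the input.

class SymLen:
    """Symbolic input length constrained to [lo, hi] bytes."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

def check_copy(sym_len, buf_size):
    """Report whether a memcpy-like copy of sym_len bytes into a
    buf_size buffer can overflow on some feasible input."""
    if sym_len.hi > buf_size:
        # Also report a witness length that triggers the overflow path.
        return {"overflow": True, "witness": max(buf_size + 1, sym_len.lo)}
    return {"overflow": False}

net_input = SymLen(0, 1024)          # e.g. the length field of a packet
report = check_copy(net_input, 64)   # copy into char buf[64]
```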
2022-04-25
Jiang, Xiaoyu, Qiu, Tie, Zhou, Xiaobo, Zhang, Bin, Sun, Ximin, Chi, Jiancheng.  2021.  A Text Similarity-based Protocol Parsing Scheme for Industrial Internet of Things. 2021 IEEE 24th International Conference on Computer Supported Cooperative Work in Design (CSCWD). :781–787.
Protocol parsing is to discern and analyze packets' transmission fields, which plays an essential role in industrial security monitoring. Existing schemes for parsing industrial protocols universally have problems, such as a limited range of supported protocols, poor scalability, and high prior-information requirements. This paper proposes a text similarity-based protocol parsing scheme (TPP) to identify and parse protocols for the Industrial Internet of Things. TPP works in two stages, template generation and protocol parsing. In the template generation stage, TPP extracts protocol templates from protocol data packets with a cluster center extraction algorithm. The protocol templates update continuously as the protocol types and quantities of the parsed packets increase. In the protocol parsing phase, a protocol data packet is matched against the templates according to similarity measurement rules to identify and parse the fields of its protocol. The similarity measurement method comprehensively measures the similarity between messages in terms of character position, sequence, and continuity to improve protocol parsing accuracy. We have implemented TPP in a smart industrial gateway and parsed more than 30 industrial protocols, including POWERLINK, DNP3, S7comm, Modbus-TCP, etc. We evaluate the performance of TPP by comparing it with the popular protocol analysis tool Netzob. The experimental results show that the accuracy of TPP is more than 20% higher than Netzob on average in industrial protocol identification and parsing.
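The parsing phase can be approximated in a few lines: score an incoming packet against each stored template and let the best match label it. difflib's SequenceMatcher stands in for TPP's combined position/sequence/continuity measure, and the template strings are fabricated, not real Modbus/DNP3 traffic:

```python
# Template matching sketch for protocol identification.
from difflib import SequenceMatcher

TEMPLATES = {
    "modbus-tcp": "0001 0000 0006 01 03 0000 000a",
    "dnp3":       "0564 0a c4 0100 0a00 e921",
}

def identify(packet_hex):
    """Return (protocol, similarity) for the closest template."""
    name, template = max(
        TEMPLATES.items(),
        key=lambda kv: SequenceMatcher(None, packet_hex, kv[1]).ratio())
    return name, SequenceMatcher(None, packet_hex, template).ratio()

proto, score = identify("0001 0000 0006 01 06 0000 0001")
```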
2022-04-18
Aivatoglou, Georgios, Anastasiadis, Mike, Spanos, Georgios, Voulgaridis, Antonis, Votis, Konstantinos, Tzovaras, Dimitrios.  2021.  A Tree-Based Machine Learning Methodology to Automatically Classify Software Vulnerabilities. 2021 IEEE International Conference on Cyber Security and Resilience (CSR). :312–317.
Software vulnerabilities have become a major problem for security analysts, since the number of new vulnerabilities is constantly growing. Thus, there was a need for a categorization system in order to group and handle these vulnerabilities in a more efficient way. Hence, the MITRE Corporation introduced the Common Weakness Enumeration, a list of the most common software and hardware vulnerabilities. However, the manual task of understanding and analyzing new vulnerabilities by security experts is a very slow and exhausting process. For this reason, a new automated classification methodology is introduced in this paper, based on the vulnerability textual descriptions from the National Vulnerability Database. The proposed methodology combines textual analysis and tree-based machine learning techniques in order to classify vulnerabilities automatically. The results of the experiments showed that the proposed methodology performed well, achieving an overall accuracy close to 80%.
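A minimal stand-in for this kind of pipeline is a bag-of-words vectorization plus a depth-1 decision tree (a stump) over word presence. The two CWE classes, the training descriptions, and the learned split are all fabricated; the paper works on NVD text with full tree-based models:

```python
# Tiny text-classification sketch: pick the single word whose presence
# best separates the (made-up) CWE labels, then classify by that word.

def tokenize(text):
    return set(text.lower().replace(".", " ").split())

def best_stump(samples):
    """Depth-1 decision tree: the most label-discriminating word."""
    vocab = set().union(*(tokenize(t) for t, _ in samples))
    def accuracy(word):
        return sum((word in tokenize(t)) == (label == "CWE-89")
                   for t, label in samples) / len(samples)
    return max(sorted(vocab), key=accuracy)   # sorted for determinism

train = [
    ("SQL injection via unsanitized query parameter", "CWE-89"),
    ("crafted query string allows SQL injection", "CWE-89"),
    ("stack buffer overflow in packet parser", "CWE-121"),
    ("long filename triggers buffer overflow", "CWE-121"),
]
split_word = best_stump(train)

def classify(description):
    return "CWE-89" if split_word in tokenize(description) else "CWE-121"
```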
2022-04-13
Arthi, R, Krishnaveni, S.  2021.  Design and Development of IOT Testbed with DDoS Attack for Cyber Security Research. 2021 3rd International Conference on Signal Processing and Communication (ICPSC). :586—590.
The Internet of Things (IoT) is formed by networking sensors and other embedded electronics. As more devices get connected, the vulnerability to various IoT threats also increases. Among these threats, DDoS attacks have been causing serious issues in recent years. In IoT, these attacks are challenging to detect and isolate. Thus, an effective Intrusion Detection System (IDS) is essential to defend against them. Traditional IDS are based on manual blacklisting; such methods are time-consuming and not effective at detecting novel intrusions. At present, IDS are automated and programmed to be dynamic, aided by machine learning and deep learning models. The performance of these models mainly depends on the data used to train them. The majority of IDS studies are performed with non-compatible and outdated datasets like KDD 99 and NSL-KDD, and research on specific DDoS attack datasets is scarce. Therefore, in this paper, we first examine the effect of existing datasets in the IoT environment. Then, we propose a real-time data collection framework for DNS amplification attacks in IoT. The generated network packets containing the DDoS attack are captured through port mirroring.
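The traffic signature being collected is easy to characterize: a small spoofed query elicits a much larger response aimed at the victim. A toy flagging rule over mirrored (query size, response size) pairs, with made-up packet sizes and threshold:

```python
# DNS amplification sketch: flag flows whose response is
# disproportionately larger than the query that triggered it.

def amplification_factor(query_bytes, response_bytes):
    return response_bytes / query_bytes

def flag_amplification(flows, threshold=10.0):
    """Mark mirrored (query_size, response_size) flows whose response
    is disproportionately large -- a telltale of amplification abuse."""
    return [f for f in flows if amplification_factor(*f) >= threshold]

flows = [(60, 3000),   # ANY query -> large answer: amplification
         (60, 120),    # normal lookup
         (70, 3500)]   # amplification
suspect = flag_amplification(flows)
```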
2022-04-01
Bichhawat, Abhishek, Fredrikson, Matt, Yang, Jean.  2021.  Automating Audit with Policy Inference. 2021 IEEE 34th Computer Security Foundations Symposium (CSF). :1—16.
The risk posed by high-profile data breaches has raised the stakes for adhering to data access policies for many organizations, but the complexity of both the policies themselves and the applications that must obey them raises significant challenges. To mitigate this risk, fine-grained audit of access to private data has become common practice, but this is a costly, time-consuming, and error-prone process. We propose an approach for automating much of the work required for fine-grained audit of private data access. Starting from the assumption that the auditor does not have an explicit, formal description of the correct policy, but is able to decide whether a given policy fragment is partially correct, our approach gradually infers a policy from audit log entries. When the auditor determines that a proposed policy fragment is appropriate, it is added to the system's mechanized policy, and future log entries to which the fragment applies can be dealt with automatically. We prove that for a general class of attribute-based data policies, this inference process satisfies a monotonicity property which implies that eventually, the mechanized policy will comprise the full set of access rules, and no further manual audit is necessary. Finally, we evaluate this approach using a case study involving synthetic electronic medical records and the HIPAA rule, and show that the inferred mechanized policy quickly converges to the full, stable rule, significantly reducing the amount of effort needed to ensure compliance in a practical setting.
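The inference loop can be sketched as follows; the log schema, the (role, record type) rule shape, and the approval oracle are invented for illustration, but the monotone growth of the mechanized policy mirrors the property the paper proves:

```python
# Policy-inference sketch: each audit-log entry yields a candidate
# attribute-based rule; approved fragments grow the mechanized policy,
# and future entries they cover skip manual review.

def rule_from_entry(entry):
    """Generalize a log entry into a candidate (role, record_type) rule."""
    return (entry["role"], entry["record_type"])

def audit(log, auditor_approves):
    policy, escalated = set(), []
    for entry in log:
        rule = rule_from_entry(entry)
        if rule in policy:
            continue                     # covered: no manual work needed
        if auditor_approves(rule):
            policy.add(rule)             # monotone growth of the policy
        else:
            escalated.append(entry)      # needs human follow-up
    return policy, escalated

log = [
    {"role": "doctor", "record_type": "diagnosis"},
    {"role": "doctor", "record_type": "diagnosis"},   # auto-cleared
    {"role": "intern", "record_type": "billing"},
]
approve = lambda rule: rule[0] == "doctor"
policy, escalated = audit(log, approve)
```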
Sedano, Wadlkur Kurniawan, Salman, Muhammad.  2021.  Auditing Linux Operating System with Center for Internet Security (CIS) Standard. 2021 International Conference on Information Technology (ICIT). :466—471.
Linux is one of the operating systems supporting the increasingly rapid development of internet technology. Apart from the speed of the process, security also needs to be considered. The Center for Internet Security (CIS) Benchmark is an example of a security standard. This study implements the CIS Benchmark using the Chef Inspec application, focusing on building a tool to perform security audits on the Ubuntu 20.04 operating system. 232 controls from the CIS Benchmark were implemented using the Chef Inspec application; of these, 87 controls succeeded, 118 failed, and 27 were skipped. This research is expected to be a reference for information system managers in managing system security.
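Each benchmark control ultimately compares an observed system value with the CIS-recommended one and records pass, fail, or skip. A schematic version with invented controls and observed values (Chef Inspec expresses the same idea as Ruby describe blocks):

```python
# CIS-style audit sketch: evaluate each control against observed state
# and aggregate pass/fail/skip counts, like the 87/118/27 summary above.

def run_controls(controls, observed):
    results = {"passed": 0, "failed": 0, "skipped": 0}
    for name, expected in controls.items():
        if name not in observed:
            results["skipped"] += 1       # not applicable / not collectible
        elif observed[name] == expected:
            results["passed"] += 1
        else:
            results["failed"] += 1
    return results

controls = {"passwd_mode": 0o644, "ssh_root_login": "no", "aide_installed": True}
observed = {"passwd_mode": 0o644, "ssh_root_login": "yes"}
summary = run_controls(controls, observed)
```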
2022-03-25
Li, Xin, Yi, Peng, Jiang, Yiming, Lu, Xiangyu.  2021.  Traffic Anomaly Detection Algorithm Based on Improved Salp Swarm Optimal Density Peak Clustering. 2021 4th International Conference on Artificial Intelligence and Big Data (ICAIBD). :187—191.
Aiming at the problems of low accuracy and poor effectiveness caused by the lack of data labels in most real network traffic, an optimized density peak clustering based on an improved salp swarm algorithm is proposed for traffic anomaly detection. Through cosine-decline and chaos-strategy optimizations, the salp swarm algorithm not only accelerates convergence but also enhances search ability. Moreover, we use the improved salp swarm algorithm to adaptively search for the best truncation distance of density peak clustering, which avoids the subjectivity and uncertainty of manually selecting the parameter. Experimental results on the NSL-KDD dataset show that the improved salp swarm algorithm achieves faster convergence and higher precision, increases the average anomaly detection accuracy by 4.74% and the detection rate by 6.14%, and reduces the average false positive rate by 7.38%.
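The density-peak step whose truncation distance d_c the improved salp swarm algorithm searches for can be sketched directly: each point gets a local density rho (neighbors within d_c) and a delta (distance to the nearest denser point), and center candidates score high on both while isolated anomalies show near-zero density. The 1-D toy data and fixed d_c below are assumptions; the paper tunes d_c automatically:

```python
# Density peak clustering sketch: rho = local density within d_c,
# delta = distance to the nearest point of higher density (or the
# maximum distance, for the densest point).

def density_peaks(points, d_c):
    n = len(points)
    dist = lambda i, j: abs(points[i] - points[j])
    rho = [sum(1 for j in range(n) if j != i and dist(i, j) < d_c)
           for i in range(n)]
    delta = []
    for i in range(n):
        higher = [dist(i, j) for j in range(n) if rho[j] > rho[i]]
        delta.append(min(higher) if higher else max(dist(i, j) for j in range(n)))
    return rho, delta

data = [0.0, 0.1, 0.2, 0.3, 5.0, 5.1, 5.2, 20.0]   # two clusters + an outlier
rho, delta = density_peaks(data, d_c=0.5)
scores = [r * d for r, d in zip(rho, delta)]   # center candidates rank highest
```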
2022-02-25
Aichernig, Bernhard K., Muškardin, Edi, Pferscher, Andrea.  2021.  Learning-Based Fuzzing of IoT Message Brokers. 2021 14th IEEE Conference on Software Testing, Verification and Validation (ICST). :47—58.
The number of devices in the Internet of Things (IoT) has grown immensely in recent years. A frequent challenge in assuring the dependability of IoT systems is that components of the system appear as a black box. This paper presents a semi-automatic testing methodology for black-box systems that combines automata learning and fuzz testing. Our testing technique uses stateful fuzzing based on a model that is automatically inferred by automata learning. Applying this technique, we can simultaneously test multiple implementations for unexpected behavior and possible security vulnerabilities. We show the effectiveness of our learning-based fuzzing technique in a case study on the MQTT protocol. MQTT is a widely used publish/subscribe protocol in the IoT. Our case study reveals several inconsistencies between five different MQTT brokers. The found inconsistencies expose possible security vulnerabilities and violations of the MQTT specification.
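The mechanism can be miniaturized: a state machine inferred by automata learning (hard-coded here) drives the fuzzer into each protocol state, where every input symbol is tried and the replies of several implementations are compared. The MQTT-like model and the two "broker" stubs, one deliberately buggy, are fabricated to show the idea, not real brokers:

```python
# Learning-based stateful fuzzing sketch: the inferred model supplies
# the states; differential testing exposes implementation divergence.

MODEL = {                      # inferred MQTT-like state machine
    "init":       {"CONNECT": "connected"},
    "connected":  {"SUBSCRIBE": "subscribed", "DISCONNECT": "init"},
    "subscribed": {"PUBLISH": "subscribed", "DISCONNECT": "init"},
}

def broker_a(state, msg):
    return "ACK" if msg in MODEL[state] else "ERROR"

def broker_b(state, msg):                 # buggy: accepts PUBLISH pre-connect
    if state == "init" and msg == "PUBLISH":
        return "ACK"
    return "ACK" if msg in MODEL[state] else "ERROR"

def fuzz(brokers, alphabet=("CONNECT", "SUBSCRIBE", "PUBLISH", "DISCONNECT")):
    """Probe every model state with every symbol; report disagreements."""
    inconsistencies = []
    for state in MODEL:
        for msg in alphabet:
            replies = {name: fn(state, msg) for name, fn in brokers.items()}
            if len(set(replies.values())) > 1:
                inconsistencies.append((state, msg, replies))
    return inconsistencies

found = fuzz({"A": broker_a, "B": broker_b})
```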
2022-02-24
Duan, Xuanyu, Ge, Mengmeng, Minh Le, Triet Huynh, Ullah, Faheem, Gao, Shang, Lu, Xuequan, Babar, M. Ali.  2021.  Automated Security Assessment for the Internet of Things. 2021 IEEE 26th Pacific Rim International Symposium on Dependable Computing (PRDC). :47–56.
Internet of Things (IoT) based applications face an increasing number of potential security risks, which need to be systematically assessed and addressed. Expert-based manual assessment of IoT security is a predominant approach, which is usually inefficient. To address this problem, we propose an automated security assessment framework for IoT networks. Our framework first leverages machine learning and natural language processing to analyze vulnerability descriptions for predicting vulnerability metrics. The predicted metrics are then input into a two-layered graphical security model, which consists of an attack graph at the upper layer to present the network connectivity and an attack tree for each node in the network at the bottom layer to depict the vulnerability information. This security model automatically assesses the security of the IoT network by capturing potential attack paths. We evaluate the viability of our approach using a proof-of-concept smart building system model which contains a variety of real-world IoT devices and potential vulnerabilities. Our evaluation of the proposed framework demonstrates its effectiveness in terms of automatically predicting the vulnerability metrics of new vulnerabilities with more than 90% accuracy, on average, and identifying the most vulnerable attack paths within an IoT network. The produced assessment results can serve as a guideline for cybersecurity professionals to take further actions and mitigate risks in a timely manner.
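The upper layer of the two-layered model can be sketched as plain attack-path enumeration over the network graph, with each node carrying a score distilled from its attack tree (here just a fixed CVSS-like number). The smart-building topology and the scores are made up for illustration:

```python
# Attack-graph sketch: enumerate simple paths from the attacker's entry
# point to an asset and rank them by cumulative node scores.

def attack_paths(graph, src, dst, path=None):
    """Depth-first enumeration of simple paths src -> dst."""
    path = (path or []) + [src]
    if src == dst:
        return [path]
    found = []
    for nxt in graph.get(src, ()):
        if nxt not in path:
            found += attack_paths(graph, nxt, dst, path)
    return found

network = {"internet": ["camera", "router"],
           "camera": ["hub"], "router": ["hub"], "hub": ["thermostat"]}
scores = {"internet": 0.0, "camera": 9.8, "router": 5.3,
          "hub": 7.5, "thermostat": 6.1}

paths = attack_paths(network, "internet", "thermostat")
# Rank by cumulative exploitability; the top path guides mitigation.
worst = max(paths, key=lambda p: sum(scores[n] for n in p))
```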
Malladi, Sreekanth.  2021.  Towards Formal Modeling and Analysis of UPI Protocols. 2021 Third International Conference on Intelligent Communication Technologies and Virtual Mobile Networks (ICICV). :239–243.
UPI (Unified Payments Interface) is a framework in India wherein customers can send payments to merchants from their smartphones. The framework consists of UPI servers that are connected to the banks at the sender and receiver ends. To send and receive payments, customers and merchants would have to first register themselves with UPI servers by executing a registration protocol using payment apps such as BHIM, PayTm, Google Pay, and PhonePe. Weaknesses were recently reported on these protocols that allow attackers to make money transfers on behalf of innocent customers and even empty their bank accounts. But the reported weaknesses were found after informal and manual analysis. However, as history has shown, formal analysis of cryptographic protocols often reveals flaws that could not be discovered with manual inspection. In this paper, we model UPI protocols in the pattern of traditional cryptographic protocols such that they can be rigorously studied and analyzed using formal methods. The modeling simplifies many of the complexities in the protocols, making it suitable to analyze and verify UPI protocols with popular analysis and verification tools such as the Constraint Solver, ProVerif and Tamarin. Our modeling could also be used as a general framework to analyze and verify many other financial payment protocols than just UPI protocols, giving it a broader applicability.
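A thumbnail of what such modeling buys: once messages are terms, attacker capabilities become a closure computation (Dolev-Yao style un-pairing and decryption with known keys), which tools like the Constraint Solver or ProVerif then search exhaustively. The terms below are invented stand-ins, not the real UPI message formats:

```python
# Dolev-Yao closure sketch: close an intercepted-term set under
# un-pairing and decryption with known keys.

def attacker_knowledge(intercepted, known_keys):
    knowledge = set(intercepted) | set(known_keys)
    changed = True
    while changed:
        changed = False
        for term in list(knowledge):
            if isinstance(term, tuple) and term[0] == "pair":
                new = {term[1], term[2]}
            elif (isinstance(term, tuple) and term[0] == "enc"
                  and term[1] in knowledge):      # decryption key is known
                new = {term[2]}
            else:
                continue
            if not new <= knowledge:
                knowledge |= new
                changed = True
    return knowledge

# enc(k, m) encrypts m under k; pair(a, b) concatenates two terms.
msg = ("pair", ("enc", "k_device", "otp"), "phone_number")
derivable = attacker_knowledge({msg}, known_keys={"k_device"})
```

An attacker who has compromised the device key learns the OTP; one who has not cannot derive it, which is exactly the kind of query a protocol verifier answers automatically.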
2022-02-04
Kewale, Prasad, Gardalwar, Ashwin, Vegad, Prachit, Agrawal, Rahul, Jaju, Santosh, Dabhekar, Kuldeep.  2021.  Design and Implementation of RFID Based E-Document Verification System. 2021 Third International Conference on Inventive Research in Computing Applications (ICIRCA). :165—170.
This work presents RFID cards as e-documents, analogous to replacing a paper passport with a chip-embedded e-passport. This technological advancement allows information to be stored electronically. The aim is to reduce or stop the use of illegal documents: it assures security, prevents illegal entry into a country on fake documents, and maintains the privacy of the owner. This research work proposes an e-document verification system based on RFID, attempting a new generation of document verification that decreases human effort. The central idea is to make the details of a document's owner accessible using RFID technology. For this, the person is issued an RFID card containing a circuit that stores and processes information by modulating and demodulating the transmitted radio-frequency signal; the data stored on the card refers to the person's document details. In the proposed hardware, the RFID-based e-document verification system provides the holder with a tag that produces electromagnetic waves through which the data is accessed. The purpose is to make document verification easy, secure, and less dependent on human intervention. In a comparative analysis, 100 documents were verified in 500 seconds using RFID, compared to 3,000 seconds of manual work, showing the system to be 6 times more efficient than the conventional method.
Almadi, Dana S., Albahsain, Basim M., Al-Essa, Hadeel A..  2021.  Towards Business Sustainability via an Automated Gaps Closure Approach. 2021 Fifth World Conference on Smart Trends in Systems Security and Sustainability (WorldS4). :182–185.
To ensure the sustainability of an organization's business and resources, a Business Continuity Management System (BCMS) must be established. A key component of a BCMS is conducting drills, which enable the organization to assess its readiness, sustainability, and resiliency, with adequate planning for continuing business through unforeseen circumstances. Testing business services and processes is crucial; failing to conduct drills leads to improper response and recovery strategies, which in turn result in major financial losses. The drills aim to evaluate the IT organization's response and IT service recovery, and to identify observations, lessons learned, and areas of improvement. The identified observations are shared with service owners and tracked by the BCMS to ensure that all observations are closed. However, tracking observations in a traditional manual approach comes with several challenges. This paper presents our experience in planning, executing, and validating drills, illustrating how an organization can overcome the challenges of the manual approach with an automated observation tracking system. Additionally, we present the results of our solution in terms of time management and cost savings.
2022-01-25
Marksteiner, Stefan, Marko, Nadja, Smulders, Andre, Karagiannis, Stelios, Stahl, Florian, Hamazaryan, Hayk, Schlick, Rupert, Kraxberger, Stefan, Vasenev, Alexandr.  2021.  A Process to Facilitate Automated Automotive Cybersecurity Testing. 2021 IEEE 93rd Vehicular Technology Conference (VTC2021-Spring). :1—7.
Modern vehicles are becoming increasingly digitalized, with advanced information-technology-based solutions such as advanced driving assistance systems and vehicle-to-x communications. These systems are complex and interconnected. Rising complexity and increasing outside exposure have created a steadily rising demand for more cyber-secure systems. Standardization bodies and regulators have therefore issued standards and regulations prescribing more secure development processes. This security, however, also has to be validated and verified. To keep pace with the need for more thorough, quicker, and comparable testing, today's generally manual testing processes have to be structured and optimized. Based on existing and emerging standards for cybersecurity engineering, this paper therefore outlines a structured testing process for verifying and validating automotive cybersecurity, for which no standardized method exists so far. Despite presenting a commonly structured framework, the process remains flexible, allowing implementers to utilize their own, accustomed toolsets.
2022-01-11
Everson, Douglas, Cheng, Long.  2021.  Compressing Network Attack Surfaces for Practical Security Analysis. 2021 IEEE Secure Development Conference (SecDev). :23–29.
Testing or defending the security of a large network can be challenging because of the sheer number of potential ingress points that need to be investigated and evaluated for vulnerabilities. In short, manual security testing and analysis do not easily scale to large networks. While it has been shown that clustering can simplify the problem somewhat, the data structures and formats returned by the latest network mapping tools are not conducive to clustering algorithms. In this paper we introduce a hybrid similarity algorithm to compute the distance between two network services and then use those calculations to support a clustering algorithm designed to compress a large network attack surface by orders of magnitude. Doing so allows for new testing strategies that incorporate outlier detection and smart consolidation of test cases to improve accuracy and timeliness of testing. We conclude by presenting two case studies using an organization's network attack surface data to demonstrate the effectiveness of this approach.
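The general idea of the approach — a pairwise distance over network services feeding a clustering step that compresses the attack surface — can be sketched as below. This is an illustration only: the weights, fields, and greedy threshold clustering are assumptions, not the paper's hybrid similarity algorithm.

```python
# Sketch: a toy distance between two network services (port, protocol,
# banner) and a greedy threshold clustering that groups similar services,
# so each cluster needs only one representative security test.

def distance(a, b):
    """Weighted distance over service attributes (weights are illustrative)."""
    d = 0.0
    d += 0.0 if a["port"] == b["port"] else 0.4
    d += 0.0 if a["proto"] == b["proto"] else 0.3
    # Crude banner dissimilarity: 1 minus the shared-token fraction.
    ta, tb = set(a["banner"].split()), set(b["banner"].split())
    d += 0.3 * (1 - len(ta & tb) / max(len(ta | tb), 1))
    return d

def cluster(services, threshold=0.35):
    """Assign each service to the first cluster whose representative
    (first member) is within the threshold; otherwise start a new cluster."""
    clusters = []
    for svc in services:
        for c in clusters:
            if distance(svc, c[0]) <= threshold:
                c.append(svc)
                break
        else:
            clusters.append([svc])
    return clusters

services = [
    {"port": 443, "proto": "tcp", "banner": "nginx 1.18"},
    {"port": 443, "proto": "tcp", "banner": "nginx 1.20"},
    {"port": 22,  "proto": "tcp", "banner": "OpenSSH 8.2"},
]
print(len(cluster(services)))  # 2: the two nginx services merge into one cluster
```

A service far from every cluster representative ends up alone, which is how outlier detection falls out of the same computation: singleton clusters are the unusual ingress points worth prioritizing for manual testing.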