Biblio

Found 1261 results

Filters: First Letter Of Title is I
2019-09-23
Arora, M., Kumar, C., Verma, A. K..  2018.  Increase Capacity of QR Code Using Compression Technique. 2018 3rd International Conference and Workshops on Recent Advances and Innovations in Engineering (ICRAIE). :1–5.

The main objective of this research work is to enhance the data storage capacity of QR codes. By achieving this aim, we can expect a rapid increase in the application domains of QR codes, particularly for smart cities, where one needs to store bulk amounts of data. India is currently experiencing the demonetization step taken by the Prime Minister of the country, and QR codes can play a major role in this step. They are also helpful for a cashless society, as many vendors have registered themselves with e-wallet companies like Paytm, Freecharge, etc. These e-wallet companies have installed QR codes at the cash counters of such vendors. Any time a customer wants to pay a bill, he only needs to scan that particular QR code. Afterwards, the QR code decoder application starts working, taking the necessary actions such as opening a payment gateway. The objective of this research study is therefore to address the capacity issue by applying the proposed compression-based methodology.
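
As a rough illustration of the compression-before-encoding idea (not the paper's specific technique), one can shrink a payload with a general-purpose compressor before generating the code; the sketch below assumes the third-party qrcode package and an invented sample payload:

```python
# Sketch: raise the effective capacity of a QR code by compressing the
# payload before encoding. Illustrative only; the paper's exact
# compression pipeline is not reproduced here.
import base64
import zlib

import qrcode  # third-party: pip install qrcode[pil]

payload = ("Sample bulk text that a smart-city application might need "
           "to store in a single QR code. " * 20).encode("utf-8")

compressed = zlib.compress(payload, level=9)
# Base64 keeps the compressed bytes representable in QR byte mode.
encoded = base64.b64encode(compressed).decode("ascii")
print(f"raw: {len(payload)} bytes, stored: {len(encoded)} bytes")

img = qrcode.make(encoded)          # encode the smaller payload
img.save("compressed_payload.png")

# A reader reverses the steps: zlib.decompress(base64.b64decode(data))
```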

2019-08-05
Černý, Jakub, Bošanský, Branislav, Kiekintveld, Christopher.  2018.  Incremental Strategy Generation for Stackelberg Equilibria in Extensive-Form Games. Proceedings of the 2018 ACM Conference on Economics and Computation. :151–168.

Dynamic interaction appears in many real-world scenarios where players are able to observe (perhaps imperfectly) the actions of another player and react accordingly. We consider the baseline representation of dynamic games - the extensive form - and focus on computing a Stackelberg equilibrium (SE), where the leader commits to a strategy to which the follower plays a best response. For one-shot games (e.g., security games), strategy-generation (SG) algorithms offer dramatic speed-ups by incrementally expanding the strategy spaces. However, a direct application of SG to extensive-form games (EFGs) does not bring a similar speed-up since it typically results in a nearly-complete strategy space. Our contributions are twofold: (1) for the first time we introduce an algorithm that allows us to incrementally expand the strategy space to find an SE in EFGs; (2) we introduce a heuristic variant of the algorithm that is theoretically incomplete, but in practice allows us to find an exact (or close-to-optimal) Stackelberg equilibrium by constructing a significantly smaller strategy space. Our experimental evaluation confirms that we are able to compute an SE by considering only a fraction of the strategy space, which often leads to a significant speed-up in computation times.
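
The incremental expansion loop can be sketched in miniature on a normal-form game with pure leader commitments; the paper's algorithm handles mixed commitments in extensive-form games, so this shows only the control-flow skeleton, with made-up payoff matrices:

```python
# Miniature of incremental strategy generation: keep restricted
# strategy sets, solve the restricted game, add best responses
# computed on the FULL game, repeat until nothing new appears.
# Pure leader commitments only; the real algorithm is far richer.
import numpy as np

rng = np.random.default_rng(0)
L = rng.integers(0, 10, size=(6, 6))   # leader payoffs (made up)
F = rng.integers(0, 10, size=(6, 6))   # follower payoffs (made up)

def follower_br(a, actions):
    """Follower's best response to leader action a within `actions`."""
    return max(actions, key=lambda b: F[a, b])

leader_set, follower_set = {0}, {0}    # tiny restricted strategy sets
while True:
    # Solve the restricted game by enumeration: the leader commits to
    # the action whose induced follower best response pays it most.
    a_star = max(leader_set,
                 key=lambda a: L[a, follower_br(a, follower_set)])
    b_star = follower_br(a_star, follower_set)
    # Oracle step: best responses taken over the full strategy spaces.
    b_new = follower_br(a_star, range(F.shape[1]))
    a_new = max(range(L.shape[0]),
                key=lambda a: L[a, follower_br(a, follower_set)])
    if a_new in leader_set and b_new in follower_set:
        break                          # restricted game is closed
    leader_set.add(a_new)
    follower_set.add(b_new)

print("leader commits to", a_star, "follower plays", b_star)
print("explored:", sorted(leader_set), sorted(follower_set))
```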

2019-06-28
Gillani, Fida, Al-Shaer, Ehab, Duan, Qi.  2018.  In-Design Resilient SDN Control Plane and Elastic Forwarding Against Aggressive DDoS Attacks. Proceedings of the 5th ACM Workshop on Moving Target Defense. :80-89.

The use of software-defined networking in wide-area networks (SDN-WAN) has been emerging strongly in recent years. For scalability and economic reasons, SDN-WAN mostly uses an in-band control mechanism, which implies that control and data traffic share the same critical physical links. However, the in-band control and centralized control architecture can be exploited by attackers to launch distributed denial of service (DDoS) attacks on the SDN control plane by flooding the shared links and/or the OpenFlow agents. Therefore, constructing a resilient software-defined network requires dynamic isolation and distribution of the control flow to minimize damage and significantly increase attack cost. Existing solutions fall short of addressing this challenge because they require expensive extra dedicated resources or changes in the OpenFlow protocol. In this paper, we propose a moving target technique called REsilient COntrol Network architecture (ReCON) that uses the same SDN network resources to defend the SDN control plane dynamically against DDoS attacks. ReCON essentially (1) minimizes the sharing of critical resources among data and control traffic, and (2) elastically increases the limited capacity of the software control agents on demand by dynamically using under-utilized resources from within the same SDN network. To implement a practical solution, we formalize ReCON as a constraint satisfaction problem using Satisfiability Modulo Theories (SMT) to guarantee a correct-by-construction control plane placement that can handle dynamic network conditions.
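
As a toy illustration of casting placement as a constraint problem for an SMT solver (here with the Z3 Python bindings), the sketch below assigns control agents to switches under invented capacities and link-sharing costs; ReCON's actual formulation is far richer:

```python
# Toy SMT formulation in the spirit of ReCON: place control agents on
# switches so every switch is served within agent capacity, while
# minimizing use of links shared with data traffic. All numbers are
# hypothetical.
from z3 import Int, Optimize, Sum, sat

n_switches, n_agents, capacity = 5, 2, 3
# shared_cost[s][a]: control/data link sharing incurred if switch s is
# served by agent a (hypothetical values).
shared_cost = [[1, 3], [2, 1], [3, 1], [1, 2], [2, 2]]

assign = [[Int(f"x_{s}_{a}") for a in range(n_agents)]
          for s in range(n_switches)]
opt = Optimize()
for s in range(n_switches):
    for a in range(n_agents):
        opt.add(assign[s][a] >= 0, assign[s][a] <= 1)   # 0/1 variables
    opt.add(Sum(assign[s]) == 1)          # every switch gets one agent
for a in range(n_agents):                 # agent capacity limit
    opt.add(Sum([assign[s][a] for s in range(n_switches)]) <= capacity)
opt.minimize(Sum([assign[s][a] * shared_cost[s][a]
                  for s in range(n_switches) for a in range(n_agents)]))

if opt.check() == sat:
    m = opt.model()
    for s in range(n_switches):
        for a in range(n_agents):
            if m.eval(assign[s][a]).as_long() == 1:
                print(f"switch {s} -> agent {a}")
```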

2020-05-15
Madhukar, Anant, Misra, Dinesh Kumar, Zaheer, M. M..  2018.  Indigenous Network Monitoring System. 2018 International Conference on Computational and Characterization Techniques in Engineering Sciences (CCTES). :262–266.

Military reconnaissance in 1999 paved the way for India to establish its own self-reliant, indigenous navigation system. This strategic necessity was accomplished in 2013 by launching seven satellites into geo-orbit, with the underlying network control center in Bangalore and a new NavIC control center at Lucknow added later, in 2016. ISTRAC is one of the premier centers tracking Indian as well as external network satellite launch vehicles, providing housekeeping and inertial navigation (INC) data to the launch control center in real time and to the project team offline. Over the ISTRAC launch network, the Simple Network Management Protocol (SNMP) was disabled for security and bandwidth reasons. The drawbacks of SNMP include security risks that are a normal trait whenever it is applied as an open standard, and there is "security through obscurity" linked with any little-used communications standard in SNMP. Detailed messages are sent between devices, not just small pre-set codes. These drawbacks are found in the majority of applications, and higher bandwidth consumption is another concern. Because of these issues with SNMP as an open standard, available network monitoring systems (NMS) could not be employed for link monitoring and immediate decision making in the ISTRAC network. This situation motivated the development of an in-house network monitoring system (NMS) for real-time network monitoring as well as communication link performance analysis. The system features Internet Control Message Protocol (ICMP) based link monitoring, 24/7 monitoring of all nodes, GUI-based real-time link status, and summary and individual link statistics on the GUI. It also identifies total downtime and generates summary reports. It identifies out-of-order or looped packets, sends Email and SMS alerts indicating whether the prime or redundant system is down, and repeats the alert if the link remains failed for more than 30 minutes. It uses simple file-based configuration and requires no application restart. Generation of daily and monthly link status reports, offline link analysis plots for any day, and low consumption of system resources are add-on features. It is a fully secure in-house development that calculates total data flow over the network and correlates data volume with link utilization percentage.
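
A minimal sketch of the ICMP-based link monitoring and 30-minute repeat-alert behavior described above, using the system ping utility (Linux flags); the hosts and the alert transport are placeholders:

```python
# Minimal sketch of ICMP-based link monitoring with a 30-minute repeat
# alert, in the spirit of the system described above. Hosts and the
# alert transport (here, just print) are placeholders.
import subprocess
import time

HOSTS = ["192.0.2.10", "192.0.2.11"]   # placeholder addresses
ALERT_REPEAT = 30 * 60                 # re-alert after 30 minutes down
down_since, last_alert = {}, {}

def link_up(host):
    """Send one ICMP echo request via the system ping (Linux flags)."""
    return subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ).returncode == 0

def alert(msg):                        # stand-in for Email/SMS alerting
    print(time.ctime(), msg)

while True:
    now = time.time()
    for host in HOSTS:
        if link_up(host):
            if host in down_since:
                alert(f"{host} restored after "
                      f"{now - down_since.pop(host):.0f}s downtime")
                last_alert.pop(host, None)
        elif now - last_alert.get(host, 0) >= ALERT_REPEAT:
            down_since.setdefault(host, now)
            alert(f"{host} down since {time.ctime(down_since[host])}")
            last_alert[host] = now
    time.sleep(10)
```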

2019-08-05
Headrick, W. J., Dlugosz, A., Rajcok, P..  2018.  Information Assurance in modern ATE. 2018 IEEE AUTOTESTCON. :1–4.

For modern Automatic Test Equipment (ATE), one of the most daunting tasks is now Information Assurance (IA). What was once at most a secondary item, consisting mainly of installing an anti-virus suite, is now becoming one of the most important aspects of ATE. Given the current climate of IA, it has become important to ensure ATE is kept safe from any breaches of security or loss of information. Even though most ATE are not on the Internet (or, for many, even on a network), they are still vulnerable to some of the same attack vectors plaguing common computers and other electronic devices. This paper will discuss some of the processes and procedures which must be used to ensure that modern ATE can continue to be used to test and detect faults in the systems they are designed to test. The common items that must be considered for ATE are as follows: The ATE system must have some form of anti-virus (as should all computers). The ATE system should have a minimum software footprint, providing only the software needed to perform the task. The ATE system should be verified to have all Operating System (OS) settings configured pursuant to the task it is intended to perform. The ATE OS settings should include password and password-expiration settings to prevent access by anyone not expected to be on the system. The ATE system software should be written and constructed such that it is not itself readily open to attack. The ATE system should be designed in a manner such that none of the instruments in the system can easily be attacked. The ATE system should ensure any paths to the outside world (such as Ethernet or USB devices) are limited to only those required to perform the task it was designed for. These and many other common configuration concerns will be discussed in the paper.

2019-04-05
Wen, Senhao, Rao, Yu, Yan, Hanbing.  2018.  Information Protecting Against APT Based on the Study of Cyber Kill Chain with Weighted Bayesian Classification with Correction Factor. Proceedings of the 7th International Conference on Informatics, Environment, Energy and Applications. :231-235.

To avoid being discovered by the defenders of a target, APT attackers use encrypted communication to hide communication features, use code obfuscation and file-less technology to keep malicious code from being easily reversed and leaking its internal working mechanism, and use misleading content to conceal their identities. It is clearly ineffective to detect APT attacks by relying on any single technology. All of these tough situations make information security and privacy protection face increasingly serious threats. In this paper, through a deep study of Cyber Kill Chain behaviors, combined with intelligence analysis technology, we transform the APT detection problem into a measurable mathematical problem through weighted Bayesian classification with a correction factor, so as to detect APTs and perceive threats. In the solution, we adopt intelligence acquisition technology over massive data and the TF-IDF algorithm to calculate each attack behavior's weight. We also design a correction factor to improve the Markov weighted Bayesian model when multiple behaviors are detected, by modifying the value of the probability of an APT attack.
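
The general pipeline, TF-IDF weights over observed kill-chain behaviors feeding a naive Bayes classifier with a multiplicative correction applied to the APT posterior, can be sketched as follows; the training strings and the factor's value are invented, and the paper's exact model is not reproduced:

```python
# Sketch: TF-IDF weights over observed kill-chain behaviors feed a
# naive Bayes classifier; an illustrative correction factor adjusts
# the APT posterior. Training data and the factor value are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Each "document" is the behavior sequence observed for one host.
train_logs = [
    "recon portscan phishing c2_beacon lateral_move exfiltration",
    "recon phishing c2_beacon exfiltration",
    "web_browsing software_update dns_lookup file_share",
    "web_browsing dns_lookup email file_share backup",
]
labels = [1, 1, 0, 0]                      # 1 = APT, 0 = benign

vec = TfidfVectorizer()
X = vec.fit_transform(train_logs)
clf = MultinomialNB().fit(X, labels)

CORRECTION = 1.2                           # illustrative factor only

def apt_score(log: str) -> float:
    """Corrected posterior probability that a behavior log is an APT."""
    p_apt = clf.predict_proba(vec.transform([log]))[0][1]
    return min(1.0, p_apt * CORRECTION)

print(apt_score("recon c2_beacon lateral_move exfiltration"))
```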

2019-03-28
Varga, S., Brynielsson, J., Franke, U..  2018.  Information Requirements for National Level Cyber Situational Awareness. 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). :774-781.

As modern societies become more dependent on IT services, the potential impact of both adversarial cyberattacks and non-adversarial service management mistakes grows. This calls for better cyber situational awareness: decision-makers need to know what is going on. The main focus of this paper is to examine the information elements that need to be collected and included in a common operational picture in order for stakeholders to acquire cyber situational awareness. This problem is addressed through a survey conducted among the participants of a national information assurance exercise conducted in Sweden. Most participants were government officials and employees of commercial companies that operate critical infrastructure. The results give insight into which information elements are perceived as useful, which can be contributed to and required from other organizations, which roles and stakeholders would benefit from certain information, and how the organizations work with creating cyber common operational pictures today. Among the findings, it is noteworthy that adversarial behavior is not perceived as interesting, and that the respondents in general focus solely on their own organization.

2019-05-08
Mylrea, M., Gourisetti, S. N. G., Larimer, C., Noonan, C..  2018.  Insider Threat Cybersecurity Framework Webtool Methodology: Defending Against Complex Cyber-Physical Threats. 2018 IEEE Security and Privacy Workshops (SPW). :207–216.

This paper demonstrates how the Insider Threat Cybersecurity Framework (ITCF) web tool and methodology help provide a more dynamic, defense-in-depth security posture against insider cyber and cyber-physical threats. ITCF includes over 30 cybersecurity best practices to help organizations identify, protect against, detect, respond to, and recover from sophisticated insider threats and vulnerabilities. The paper tests the efficacy of this approach and helps validate and verify ITCF's capabilities and features through various insider attack use cases. Two case studies were explored to determine how organizations can leverage ITCF to increase their overall security posture against insider attacks. The paper also highlights how ITCF facilitates implementation of the goals outlined in two Presidential Executive Orders to improve the security of classified information and help owners and operators secure critical infrastructure. In realization of these goals, ITCF: provides an easy-to-use rapid assessment tool to perform an insider threat self-assessment; determines the current insider threat cybersecurity posture; defines investment-based goals to achieve a target state; connects the cybersecurity posture with business processes, functions, and continuity; and finally, helps develop plans to answer critical organizational cybersecurity questions. In this paper, the webtool and its core capabilities are tested by performing an extensive comparative assessment over two different high-profile insider threat incidents.

2018-09-30
Himanshu Neema, Bradley Potteiger, Xenofon Koutsoukos, Gabor Karsai, Peter Volgyesi, Janos Sztipanovits.  2018.  Integrated Simulation Testbed for Security and Resilience of CPS. Proceedings of the 33rd Annual ACM Symposium on Applied Computing. :368–374.

Owing to the immense growth of internet-connected and learning-enabled cyber-physical systems (CPSs) [1], several new types of attack vectors have emerged. Analyzing the security and resilience of these complex CPSs is difficult as it requires evaluating many subsystems and factors in an integrated manner. Integrated simulation of physical systems and communication networks can provide an underlying framework for creating a reusable and configurable testbed for such analyses. Using a model-based integration approach and distributed simulation software based on the IEEE High-Level Architecture (HLA) [2], we have created a testbed for integrated evaluation of large-scale CPS systems. Our testbed supports web-based collaborative metamodeling and modeling of CPS systems and experiments, and a cloud computing environment for executing integrated networked co-simulations. A modular and extensible cyber-attack library enables validating the CPS under a variety of configurable cyber-attacks, such as DDoS and integrity attacks. Hardware-in-the-loop simulation is also supported, along with several hardware attacks. Further, a scenario modeling language allows modeling of alternative paths (Courses of Actions), which enables validating CPS under different what-if scenarios as well as conducting cyber-gaming experiments. These capabilities make our testbed well suited for analyzing the security and resilience of CPS. In addition, the web-based modeling and cloud-hosted execution infrastructure enable one to exercise the entire testbed using simply a web browser, with integrated live display of experimental results.

2019-05-20
Gschwandtner, Mathias, Demetz, Lukas, Gander, Matthias, Maier, Ronald.  2018.  Integrating Threat Intelligence to Enhance an Organization's Information Security Management. Proceedings of the 13th International Conference on Availability, Reliability and Security. :37:1-37:8.

As security incidents might have disastrous consequences for an enterprise's information technology (IT), organizations need to secure their IT against threats. Threat intelligence (TI) promises to provide actionable information about current threats for information security management systems (ISMS). Common information ranges from malware characteristics to observed perpetrator origins that allow customizing security controls. The aim of this article is to assess the impact of utilizing publicly available threat feeds within the corporate process on an organization's security information level. We developed a framework to integrate TI for large corporations and evaluated said framework in cooperation with a globally acting manufacturer and retailer. During the development of the TI framework, a specific provider of TI was analyzed and chosen for integration within the process of vulnerability management. The evaluation of this exemplary integration was assessed by members of the information security department at the cooperating enterprise. During our evaluation, practitioners emphasized prioritizing management activities based on whether threats observed in the wild are targeting them or similar companies. Furthermore, indicators of compromise (IoC) provided by the chosen TI source can be automatically integrated utilizing a provided software development kit. Theoretical relevance is based on the contribution towards the verification of proposed benefits of TI integration, such as increasing the resilience of an enterprise network, within a real-world environment. Overall, practitioners suggest that TI integration should result in enhanced management of security budgets and more resilient enterprise networks.
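
A minimal sketch of the automated IoC integration the practitioners describe, ingesting indicators from a feed and matching them against local logs; the feed format, file names, and values are hypothetical, and the vendor SDK mentioned above is not modeled:

```python
# Minimal sketch of automated IoC integration: load indicators from a
# threat-intelligence feed and match them against local logs. Feed
# format, file names, and values are hypothetical.
import json

# e.g. feed.json: {"ips": ["203.0.113.7"], "sha256": ["ab12..."]}
with open("feed.json") as fh:
    feed = json.load(fh)
bad_ips = set(feed.get("ips", []))
bad_hashes = set(feed.get("sha256", []))

hits = []
with open("proxy.log") as fh:          # "timestamp ip sha256" lines
    for line in fh:
        parts = line.split()
        if len(parts) < 3:
            continue                   # skip malformed lines
        ts, ip, digest = parts[:3]
        if ip in bad_ips or digest in bad_hashes:
            hits.append((ts, ip, digest))

for ts, ip, digest in hits:            # feed matches into the ISMS
    print(f"IoC match at {ts}: {ip} {digest}")
```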

2019-12-30
Razaque, Abdul, Jinrui, Wang, Zancheng, Wang, Hani, Qassim Bani, Khaskheli, Murad Ali, Bhutto, Waseem Ahmed.  2018.  Integration of CPU and GPU to Accelerate RSA Modular Exponentiation Operation. 2018 IEEE Long Island Systems, Applications and Technology Conference (LISAT). :1-6.

Nowadays, the security of data becomes more and more important; people store much personal information on their phones. However, stored information requires security and the maintenance of privacy. Encryption algorithms have become the main force for maintaining the security of data. Thus, algorithm complexity and encryption efficiency have become the main measurements of whether an encryption algorithm is safe or not. With the development of hardware, we now have many tools to improve such algorithms. Modular exponentiation in the RSA algorithm can be divided into several parts mathematically. In this paper, we introduce an approach that divides the encryption process and moves part of the computation onto the graphics processing unit (GPU). By using the GPU's capacity for parallel computing, the core of RSA can be accelerated using the central processing unit (CPU) and GPU together. Compute Unified Device Architecture (CUDA) is a platform that combines the CPU and GPU to enable GPU parallel programming, and this is the tool we use to perform experiments on accelerating the RSA algorithm. This paper also builds a mathematical model to help understand the mechanism of the RSA encryption algorithm.
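
The mathematical split the abstract alludes to can be shown concretely: with the Chinese Remainder Theorem, one RSA private-key operation decomposes into two independent half-size modular exponentiations, which is what makes CPU/GPU parallelism attractive. A sketch with toy parameters (far too small for real use):

```python
# Via the Chinese Remainder Theorem, c^d mod n decomposes into two
# independent half-size exponentiations mod p and mod q, which could
# run on separate compute units. Toy primes only; never use key sizes
# like this in practice.
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                 # private exponent

m = 42
c = pow(m, e, n)                    # encrypt

# --- the two independent sub-problems (parallelizable) -------------
dp, dq = d % (p - 1), d % (q - 1)
m1 = pow(c, dp, p)                  # could run on the CPU
m2 = pow(c, dq, q)                  # could run on the GPU

# --- CRT recombination (Garner's formula) --------------------------
q_inv = pow(q, -1, p)
h = (q_inv * (m1 - m2)) % p
recovered = m2 + h * q

assert recovered == m == pow(c, d, n)
print("recovered:", recovered)
```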

2019-01-31
Wang, Ningfei, Ji, Shouling, Wang, Ting.  2018.  Integration of Static and Dynamic Code Stylometry Analysis for Programmer De-Anonymization. Proceedings of the 11th ACM Workshop on Artificial Intelligence and Security. :74–84.

De-anonymizing the authors of anonymous code (i.e., code stylometry) entails significant privacy and security implications. Most existing code stylometry methods rely solely on static (e.g., lexical, layout, and syntactic) features extracted from source code, while neglecting its key difference from regular text – it is executable! In this paper, we present Sundae, a novel code de-anonymization framework that integrates both static and dynamic stylometry analysis. Compared with existing solutions, Sundae departs in significant ways: (i) it requires a much smaller number of static, hand-crafted features; (ii) it requires much less labeled data for training; and (iii) it can be readily extended to new programmers once their stylometry information becomes available. Through extensive evaluation on benchmark datasets, we demonstrate that Sundae delivers strong empirical performance. For example, under the setting of 229 programmers and 9 problems, it outperforms the state-of-the-art method by a margin of 45.65% on Python code de-anonymization. The empirical results highlight the integration of static and dynamic analysis as a promising direction for code stylometry research.
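
As a toy illustration of mixing static and dynamic stylometry signals (not Sundae's actual feature sets or models), one can combine AST node-type counts with a crude runtime feature such as traced line events:

```python
# Toy sketch: combine static stylometry features (AST node-type
# counts) with a crude dynamic feature (line events observed while
# the code runs). Sundae's real features and models are far richer.
import ast
import sys
from collections import Counter

source = """
def solve(xs):
    total = 0
    for x in xs:
        total += x * x
    return total
result = solve(range(10))
"""

# Static features: histogram of AST node types.
static = Counter(type(n).__name__ for n in ast.walk(ast.parse(source)))

# Dynamic feature: count trace events during execution.
events = Counter()
def tracer(frame, event, arg):
    events[event] += 1
    return tracer

sys.settrace(tracer)
try:
    exec(compile(source, "<sample>", "exec"), {})
finally:
    sys.settrace(None)

features = dict(static) | {"dyn_line_events": events["line"]}
print(features)   # would feed a classifier over many labeled samples
```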

2019-08-26
Oleksenko, Oleksii, Kuvaiskii, Dmitrii, Bhatotia, Pramod, Felber, Pascal, Fetzer, Christof.  2018.  Intel MPX Explained: A Cross-Layer Analysis of the Intel MPX System Stack. Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems. :111-112.

Memory-safety violations are the primary cause of security and reliability issues in software systems written in unsafe languages. Given the limited adoption of decades-long research in software-based memory safety approaches, as an alternative, Intel released Memory Protection Extensions (MPX), a hardware-assisted technique to achieve memory safety. In this work, we perform an exhaustive study of the Intel MPX architecture along three dimensions: (a) performance overheads, (b) security guarantees, and (c) usability issues. We present the first detailed root cause analysis of problems in the Intel MPX architecture through a cross-layer dissection of the entire system stack, involving the hardware, operating system, compilers, and applications. To put our findings into perspective, we also present an in-depth comparison of Intel MPX with three prominent types of software-based memory safety approaches. Lastly, based on our investigation, we propose directions for potential changes to the Intel MPX architecture to aid the design space exploration of future hardware extensions for memory safety. A complete version of this work appears in the 2018 proceedings of the ACM on Measurement and Analysis of Computing Systems.

2019-01-31
Simmons, Andrew J., Curumsing, Maheswaree Kissoon, Vasa, Rajesh.  2018.  An Interaction Model for De-Identification of Human Data Held by External Custodians. Proceedings of the 30th Australian Conference on Computer-Human Interaction. :23–26.

Reuse of pre-existing industry datasets for research purposes requires a multi-stakeholder solution that balances the researcher's analysis objectives with the need to engage the industry data custodian, whilst respecting the privacy rights of human data subjects. Current methods place the burden on the data custodian, who may not be sufficiently trained to fully appreciate the nuances of data de-identification. Through modelling of functional, quality, and emotional goals, we propose a de-identification-in-the-cloud approach whereby the researcher proposes analyses along with the extraction and de-identification operations, while engaging the industry data custodian with secure control over authorising the proposed analyses. We demonstrate our approach through implementation of a de-identification portal for sports club data.
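
A small sketch of the kind of de-identification operation a researcher might propose and a custodian approve, here keyed pseudonymization of identifiers plus suppression of direct names; the field names and the key are placeholders, not the portal's actual design:

```python
# Sketch of a proposed de-identification operation: keyed
# pseudonymization of member IDs and suppression of names. Field
# names and the key are placeholders.
import hashlib
import hmac

SECRET_KEY = b"held-by-the-data-custodian"   # never leaves the custodian

def pseudonymize(value: str) -> str:
    """Stable, non-reversible pseudonym via keyed hashing."""
    return hmac.new(SECRET_KEY, value.encode(),
                    hashlib.sha256).hexdigest()[:12]

def deidentify(record: dict) -> dict:
    out = dict(record)
    out["member_id"] = pseudonymize(out["member_id"])
    out.pop("name", None)                    # suppress direct identifier
    return out

row = {"member_id": "M-1042", "name": "A. Player", "sessions": 37}
print(deidentify(row))   # released to the researcher for analysis
```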

2019-06-28
Gulzar, Muhammad Ali.  2018.  Interactive and Automated Debugging for Big Data Analytics. Proceedings of the 40th International Conference on Software Engineering: Companion Proceedings. :509-511.

An abundance of data in many disciplines of science, engineering, national security, health care, and business has led to the emerging field of Big Data Analytics, which runs in a cloud computing environment. To process massive quantities of data in the cloud, developers leverage Data-Intensive Scalable Computing (DISC) systems such as Google's MapReduce, Hadoop, and Spark. Currently, developers do not have easy means to debug DISC applications. The use of cloud computing makes application development feel more like batch jobs, and the nature of debugging is therefore post-mortem. Developers of big data applications write code that implements a data processing pipeline and test it on their local workstation with a small data sample downloaded from a TB-scale data warehouse. They cross their fingers and hope that the program works in the expensive production cloud. When a job fails or they get a suspicious result, data scientists spend hours guessing at the source of the error, digging through post-mortem logs. In such cases, the data scientists may want to pinpoint the root cause of errors by investigating a subset of corresponding input records. The vision of my work is to provide interactive, real-time and automated debugging services for big data processing programs in modern DISC systems with minimum performance impact. My work investigates the following research questions in the context of big data analytics: (1) What are the necessary debugging primitives for interactive big data processing? (2) What scalable fault localization algorithms are needed to help the user localize and characterize the root causes of errors? (3) How can we improve testing efficiency during iterative development of DISC applications by reasoning about the semantics of dataflow operators and the user-defined functions used inside them in tandem? To answer these questions, we synthesize and innovate ideas from software engineering, big data systems, and program analysis, and coordinate innovations across the software stack from the user-facing API all the way down to the systems infrastructure.

2019-12-10
Braverman, Mark, Kol, Gillat.  2018.  Interactive Compression to External Information. Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. :964-977.

We describe a new way of compressing two-party communication protocols to get protocols with potentially smaller communication. We show that every communication protocol that communicates C bits and reveals I bits of information about the participants' private inputs to an observer that watches the communication can be simulated by a new protocol that communicates at most poly(I) · loglog(C) bits. Our result is tight up to polynomial factors, as it matches the recent work separating communication complexity from external information cost.
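
Stated compactly, the simulation bound from the abstract reads:

```latex
% A protocol \pi with communication C and external information I can
% be simulated by a protocol \pi' with
\[
  \mathrm{CC}(\pi') \;\le\; \mathrm{poly}(I) \cdot \log\log C,
\]
% and this bound is tight up to polynomial factors.
```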

2019-10-02
Santo, Walter E., de B. Salgueiro, Ricardo J. P., Santos, Reneilson, Souza, Danilo, Ribeiro, Admilson, Moreno, Edward.  2018.  Internet of Things: A Survey on Communication Protocol Security. Proceedings of the Euro American Conference on Telematics and Information Systems. :17:1–17:5.

This paper presents a survey of the main security problems that affect communication protocols in the context of the Internet of Things, in order to identify possible threats and vulnerabilities. The protocols RFID, NFC, 6LoWPAN, 6TiSCH, DTLS, CoAP and MQTT were explored and, for better organization, categorized into layers according to the TCP/IP reference model. At the end, a summary is presented in tabular form of the security modes used for each protocol.

2019-05-20
Terkawi, A., Innab, N., al-Amri, S., Al-Amri, A..  2018.  Internet of Things (IoT) Increasing the Necessity to Adopt Specific Type of Access Control Technique. 2018 21st Saudi Computer Society National Computer Conference (NCC). :1–5.

The Internet of Things (IoT) is one of the emerging technologies that has seized the attention of researchers, the reason being that IoT is expected to be applied in our daily life in the near future, with humans becoming wholly dependent on this technology for comfort and an easy lifestyle. The Internet of Things is the interconnection of internet-enabled things or devices that connect with each other and with humans in order to achieve some goal; in other words, the ability of everyday objects to connect to the Internet and to send and receive data. However, the IoT raises significant challenges that could stand in the way of realizing its potential benefits. This paper discusses access control as one of the most crucial aspects of security and privacy in IoT and proposes a new way of access control that decides who is, and who is not, allowed to access the IoT subjects and sensors.

2019-11-19
Wimmer, Maria A., Boneva, Rositsa, di Giacomo, Debora.  2018.  Interoperability Governance: A Definition and Insights from Case Studies in Europe. Proceedings of the 19th Annual International Conference on Digital Government Research: Governance in the Data Age. :14:1-14:11.

Interoperability has become a crucial value in European e-government developments, as promoted by the Digital Single Market strategy and the Tallinn Declaration. The European Union and its Member States have made considerable investments in improving the understanding of interoperability and in developing interoperable building blocks to support cross-border data exchange and public service provisioning. This includes recent updates of the European Interoperability Framework (EIF) and the European Interoperability Reference Architecture (EIRA), as well as the publication of a number of generic and domain-specific architecture and solution building blocks such as digital identification or electronic delivery services. While interoperability governance was not clearly developed in the previous version of the EIF, the new version of 2017 positions interoperability governance as a concept that spans the different interoperability layers (legal, organizational, semantic and technical) and that builds the frame for interoperability overall. In this paper, we develop a definition of interoperability governance from a literature review and put forward a model to investigate interoperability governance at European and Member State levels. Based on several case studies of EU institutions and Member States, we draw recommendations on the key aspects of interoperability governance for successfully diffusing interoperability into public service provisioning.

2019-03-28
Subasi, A., Al-Marwani, K., Alghamdi, R., Kwairanga, A., Qaisar, S. M., Al-Nory, M., Rambo, K. A..  2018.  Intrusion Detection in Smart Grid Using Data Mining Techniques. 2018 21st Saudi Computer Society National Computer Conference (NCC). :1-6.

The rapid growth of population and industrialization has paved the way for the use of technologies like the Internet of Things (IoT). Innovations in Information and Communication Technologies (ICT) carry with them many challenges to our expectations of privacy and security. Smart environments make use of security devices, smart appliances, sensors and energy meters. New requirements in security and privacy are driven by the massive growth in the number of devices connected to the IoT, which increases concerns about security and privacy. The most ubiquitous threats to the security of smart grids (SG) arise from infrastructural physical damage, data destruction, malware, DoS, and intrusions. Intrusions comprise illegitimate access to information and attacks which create physical disruption in the availability of servers. This work proposes an intrusion detection system using data mining techniques for intrusion detection in a smart grid environment. The results showed that the proposed random forest method, with a total classification accuracy of 98.94%, an F-measure of 0.989, an area under the ROC curve (AUC) of 0.999, and a kappa value of 0.9865, outperforms other classification methods. In addition, the feasibility of our method has been successfully demonstrated by comparing it with other classification techniques such as ANN, k-NN, SVM and Rotation Forest.
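
A minimal sketch of the winning pipeline, a random forest over labeled traffic features, might look as follows with scikit-learn; the CSV file name and column schema are hypothetical, and the paper's dataset and preprocessing are not reproduced:

```python
# Minimal sketch of the best-performing pipeline: a random forest over
# labeled smart-grid traffic features. File name and schema below are
# hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("smartgrid_flows.csv")      # placeholder file
X = df.drop(columns=["label"])               # numeric flow features
y = df["label"]                              # 1 = intrusion, 0 = normal

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)[:, 1]
print(classification_report(y_te, clf.predict(X_te)))
print("AUC:", roc_auc_score(y_te, proba))
```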

2019-01-21
Warzyński, A., Kołaczek, G..  2018.  Intrusion detection systems vulnerability on adversarial examples. 2018 Innovations in Intelligent Systems and Applications (INISTA). :1–4.

Intrusion detection systems define an important and dynamic research area for cybersecurity. The role of an intrusion detection system within a security architecture is to improve the security level by identifying all malicious and suspicious events that can be observed in a computer or network system. One of the more specific research areas related to intrusion detection is anomaly detection. Anomaly-based intrusion detection in networks refers to the problem of finding untypical events in the observed network traffic that do not conform to the expected normal patterns. It is assumed that everything untypical/anomalous could be dangerous and related to some security event. To detect anomalies, many security systems implement classification or clustering algorithms. However, recent research proved that machine learning models might misclassify adversarial events, e.g. observations which were created by applying intentionally non-random perturbations to the dataset. Such weakness could increase the false negative rate, which implies undetected attacks. This fact can lead to one of the most dangerous vulnerabilities of intrusion detection systems. The goal of the research performed was verification of anomaly detection systems' ability to resist this type of attack. This paper presents the preliminary results of tests taken to investigate the existence of an attack vector which can use adversarial examples to conceal a real attack from being detected by intrusion detection systems.
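
A small numeric sketch of the attack class being probed: a fast-gradient-sign-style perturbation that flips a linear classifier's verdict. The features and model are toy stand-ins for a real anomaly-based IDS, and the paper itself does not prescribe this exact construction:

```python
# Toy sketch: an FGSM-style perturbation flips a linear classifier's
# verdict on a "network flow". Features and model are stand-ins for a
# real anomaly-based IDS.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(200, 4))
attack = rng.normal(2.0, 1.0, size=(200, 4))
X = np.vstack([normal, attack])
y = np.array([0] * 200 + [1] * 200)          # 1 = malicious

ids = LogisticRegression().fit(X, y)

x = attack[0].copy()                          # a flow flagged malicious
print("before:", ids.predict([x])[0])

# For logistic regression, the input gradient of the malicious score
# is proportional to the weight vector, so the FGSM step reduces to a
# signed step against the weights.
eps = 1.5                                     # sized for the toy data
x_adv = x - eps * np.sign(ids.coef_[0])
print("after: ", ids.predict([x_adv])[0])     # typically flips to benign
```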

2019-09-05
Deshotels, Luke, Deaconescu, Razvan, Carabas, Costin, Manda, Iulia, Enck, William, Chiroiu, Mihai, Li, Ninghui, Sadeghi, Ahmad-Reza.  2018.  iOracle: Automated Evaluation of Access Control Policies in iOS. Proceedings of the 2018 on Asia Conference on Computer and Communications Security. :117-131.

Modern operating systems, such as iOS, use multiple access control policies to define an overall protection system. However, the complexity of these policies and their interactions can hide policy flaws that compromise the security of the protection system. We propose iOracle, a framework that logically models the iOS protection system such that queries can be made to automatically detect policy flaws. iOracle models policies and runtime context extracted from iOS firmware images, developer resources, and jailbroken devices, and iOracle significantly reduces the complexity of queries by modeling policy semantics. We evaluate iOracle by using it to successfully triage executables likely to have policy flaws and comparing our results to the executables exploited in four recent jailbreaks. When applied to iOS 10, iOracle identifies previously unknown policy flaws that allow attackers to modify or bypass access control policies. For compromised system processes, consequences of these policy flaws include sandbox escapes (with respect to read/write file access) and changing the ownership of arbitrary files. By automating the evaluation of iOS access control policies, iOracle provides a practical approach to hardening iOS security by identifying policy flaws before they are exploited.
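
The flavor of query such policy modeling enables can be illustrated with a toy relational sketch; all facts below are fabricated, whereas iOracle models real policies and runtime context extracted from iOS firmware, developer resources, and jailbroken devices, and runs logical queries over that model:

```python
# Toy illustration of querying a modeled protection system for policy
# flaws: a flaw candidate is a process that every layer permits to
# write a sensitive path. All facts here are fabricated.
sandbox_allows_write = {          # process -> paths its profile allows
    "mediaserver": {"/var/mobile/Media", "/var/db/launchd.plist"},
    "webcontent": {"/var/mobile/Containers"},
}
unix_write_ok = {                 # paths whose UNIX perms permit write
    "/var/mobile/Media",
    "/var/db/launchd.plist",
}
sensitive = {"/var/db/launchd.plist"}   # paths that should be protected

def policy_flaws():
    """Yield (process, path) pairs permitted by all modeled layers."""
    for proc, allowed in sandbox_allows_write.items():
        for path in allowed & unix_write_ok & sensitive:
            yield proc, path

for proc, path in policy_flaws():
    print(f"potential flaw: {proc} can write {path}")
```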

2019-09-23
Babu, S., Markose, S..  2018.  IoT Enabled Robots with QR Code Based Localization. 2018 International Conference on Emerging Trends and Innovations In Engineering And Technological Research (ICETIETR). :1–5.

Robots are a sophisticated form of IoT device: smart devices that scrutinize sensor data from multiple sources and observe events to decide the best procedural actions to supervise and manoeuvre objects in the physical world. In this paper, localization of the robot is addressed by QR code detection, and path optimization is accomplished by Dijkstra's algorithm. The robot can navigate automatically in its environment with sensors, and the shortest path is computed whenever heading measurements are updated with QR code landmark recognition. The proposed approach greatly reduces computational burden and deployment complexity as it reflects the use of artificial intelligence to self-correct its course when required. An encrypted communication channel is established over a wireless local area network using the SSHv2 protocol to transfer or receive sensor data (or commands), making it an IoT-enabled robot.
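
For reference, the path-optimization component named above is standard Dijkstra shortest path; a compact sketch over an invented waypoint graph, where the QR-code landmarks would be the nodes:

```python
# Standard Dijkstra shortest path over a small waypoint graph; in the
# robot's setting, QR-code landmarks would be the nodes. The graph
# weights here are invented.
import heapq

graph = {                         # node -> [(neighbor, distance), ...]
    "A": [("B", 2), ("C", 5)],
    "B": [("A", 2), ("C", 1), ("D", 4)],
    "C": [("A", 5), ("B", 1), ("D", 1)],
    "D": [("B", 4), ("C", 1)],
}

def dijkstra(start, goal):
    dist, prev = {start: 0}, {}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue              # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [goal], goal
    while node != start:          # walk predecessors back to start
        node = prev[node]
        path.append(node)
    return dist[goal], path[::-1]

print(dijkstra("A", "D"))         # (4, ['A', 'B', 'C', 'D'])
```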

2019-07-01
Saleem, Jibran, Hammoudeh, Mohammad, Raza, Umar, Adebisi, Bamidele, Ande, Ruth.  2018.  IoT Standardisation: Challenges, Perspectives and Solution. Proceedings of the 2nd International Conference on Future Networks and Distributed Systems. :1:1-1:9.

The success and widespread adoption of the Internet of Things (IoT) has increased manyfold over the last few years. Industries, technologists and home users recognise the importance of IoT in their lives. Essentially, IoT has brought a vast industrial revolution and has helped automate many processes within organisations and homes. However, the rapid growth of IoT is also a cause for significant concern. IoT is not only plagued with security, authentication and access control issues; it also doesn't work as well as it should with the fourth industrial revolution, commonly known as Industry 4.0. The absence of effective regulation, standards and weak governance has led to a continual downward trend in the security of IoT networks and devices, as well as given rise to a broad range of privacy issues. This paper examines the IoT industry and discusses the urgent need for standardisation, the benefits of governance, as well as the issues affecting the IoT sector due to the absence of regulation. Additionally, through this paper, we introduce an IoT security framework (IoTSFW) for organisations to bridge the current lack of guidelines in the IoT industry. Implementation of the guidelines defined in the proposed framework will assist organisations in achieving security, privacy, sustainability and scalability within their IoT networks.

2018-12-03
Shearon, C. E..  2018.  IPC-1782 standard for traceability of critical items based on risk. 2018 Pan Pacific Microelectronics Symposium (Pan Pacific). :1–3.

Traceability has grown from being a specialized need for certain safety-critical segments of the industry to a recognized value-add tool for the industry as a whole, one that can be utilized for manual to automated processes end to end throughout the supply chain. The perception persists of traceability data collection as a burden that provides value only when the rarest and most disastrous of events take place. Disparate standards have evolved in the industry, mainly dictated by large OEM companies in the market, creating confusion as a multitude of requirements and definitions proliferate. The intent of the IPC-1782 project is to bring the whole principle of traceability up to date and enable businesses to move faster, increase revenue, increase productivity, and decrease costs as a result of increased trust. Traceability, as defined in this standard, will represent the most effective quality tool available, becoming an intrinsic part of best-practice operations, with the encouragement of automated data collection from existing manufacturing systems, which works well with Industry 4.0; integrating quality, reliability, product safety, predictive (routine, preventative, and corrective) maintenance, throughput, manufacturing, engineering and supply-chain data; reducing cost of ownership; and ensuring timeliness and accuracy all the way from a finished product back through to the initial materials and granular attributes about the processes along the way. The goal of this standard is to create a single expandable and extendable data structure that can be adopted for all levels of traceability and enable easily exchanged information, as appropriate, across many industries. The scope includes support for the most demanding instances of detail and integrity, such as those required by critical safety systems, all the way through to situations where only basic traceability, such as for simple consumer products, is required. A key driver for the adoption of the standard is the ability to find a relevant and achievable level of traceability that exactly meets the requirement following a risk assessment of the business. The wealth of data accessible from traceability for analysis (e.g., Big Data) can easily and quickly yield information that can raise expectations of very significant quality and performance improvements, as well as providing the necessary protection against the costs of issues in the market and providing very timely information to regulatory bodies and to consumers/customers as appropriate. This information can also be used to quickly raise yields, drive product innovation that resonates with consumers, and help drive development tests and design requirements that are meaningful to the marketplace. The paper concludes by discussing how to leverage IPC-1782 to create the best value of component traceability for your business.