Biblio
Filters: First Letter Of Last Name is A
2018. Aggregated-Query-as-a-Secure-Service for RF Spectrum Database-Driven Opportunistic Wireless Communications. 2018 IEEE Conference on Communications and Network Security (CNS). :1–2.
The US Federal Communications Commission (FCC) has recently mandated database-driven dynamic spectrum access, in which unlicensed secondary users search for idle bands and use them opportunistically. The database-driven approach is regarded as minimizing harmful interference to licensed primary users caused by RF channel sensing uncertainties. However, when several secondary users (or several malicious users) query the RF spectrum database at the same time, the spectrum server could experience a denial of service (DoS) attack. In this paper, we investigate Aggregated-Query-as-a-Secure-Service (AQaaSS) for querying the RF spectrum database by secondary users for opportunistic wireless communications, where a selected number of secondary users, aka grid leaders, query the database on behalf of all other secondary users, aka grid followers, and relay the idle-channel information to the grid followers. Furthermore, the grid leaders are selected based on both their reputation (or trust) level and their location in the network, to preserve the integrity of the information that grid followers receive. Grid followers also use weighted majority voting to filter out compromised information about the idle channels. The performance of the proposed approach is evaluated using numerical results. The proposed approach gives the secondary users lower (or the same) latency and the RF spectrum database server lower (or the same) load when more (or fewer) secondary users query than the server capacity allows.
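The weighted majority voting step described above lends itself to a short illustration. The sketch below (Python; the trust weights, report format, and function name are hypothetical, not taken from the paper) shows how a grid follower could combine possibly conflicting idle-channel reports from several grid leaders.

```python
from collections import defaultdict

def fuse_channel_reports(reports):
    """Weighted majority vote over per-channel 'idle'/'busy' reports.

    reports: list of (trust_weight, {channel_id: 'idle' or 'busy'}) tuples,
             one per grid leader. Returns the set of channels judged idle.
    """
    votes = defaultdict(float)          # channel -> signed, trust-weighted vote
    for trust, channel_states in reports:
        for channel, state in channel_states.items():
            votes[channel] += trust if state == "idle" else -trust
    return {ch for ch, score in votes.items() if score > 0}

# Hypothetical example: the third leader (low trust) reports compromised data.
reports = [
    (0.9, {1: "idle", 2: "busy", 3: "idle"}),
    (0.8, {1: "idle", 2: "busy", 3: "idle"}),
    (0.2, {1: "busy", 2: "idle", 3: "busy"}),
]
print(fuse_channel_reports(reports))    # -> {1, 3}
```

A leader with a low trust weight is simply outvoted, which is how compromised reports get filtered out in this scheme.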
2018. Aggregation of Security Metrics for Decision Making: A Reference Architecture. Proceedings of the 12th European Conference on Software Architecture: Companion Proceedings. :53:1–53:7.
Existing security technologies play a significant role in protecting enterprise systems, but they are no longer enough on their own given the number of successful cyberattacks against businesses and the sophistication of the tactics attackers use to bypass security defences. Security measurement differs from security monitoring in that it provides a means to quantify the security of a system, whereas security monitoring helps identify abnormal events and does not measure the actual state of an infrastructure's security. The goal of enterprise security metrics is to enable understanding of the overall security using measurements to guide decision making. In this paper we present a reference architecture for aggregating the measurement values from the different components of the system in order to enable stakeholders to see the overall security state of their enterprise systems and to assist with decision making. This adds a new dimension to security management by shifting from security monitoring to security measurement.
2018. Analysis of the effect of malicious packet drop attack on packet transmission in wireless mesh networks. 2018 Conference on Information Communications Technology and Society (ICTAS). :1–6.
Wireless mesh networks (WMNs) are known for good attributes such as low up-front cost, easy network maintenance, and reliable service coverage. This has led to their adoption in various environments such as school campus networks, community networking, pervasive healthcare, office and home automation, emergency rescue operations, and ubiquitous wireless networks. The routing nodes are equipped with self-organizing and self-configuring capabilities. However, the routing mechanisms of WMNs depend on the collaboration of all participating nodes for reliable network performance. The authors of this paper have noted that most routing algorithms proposed for WMNs in the last few years are designed with the assumption that all participating nodes will collaboratively relay the data packets originating from a source to a multi-hop destination. Such a design approach, however, exposes WMNs to vulnerabilities such as the malicious packet drop attack. This paper presents an evaluation of the effect of the black hole attack, together with other influential factors, in WMNs. In this study, the NS-3 simulator was used with AODV as the routing protocol. The results show that the packet delivery ratio and throughput of a WMN under attack decrease sharply compared to a WMN free from attack. On average, 47.41% of the transmitted data packets were dropped in the presence of the black hole attack.
2018. Analytics as a service architecture for cloud-based CDN: Case of video popularity prediction. 2018 15th IEEE Annual Consumer Communications Networking Conference (CCNC). :1–4.
User Generated Videos (UGV) are the dominant content stored in scattered caches to meet end-user Content Delivery Network (CDN) requests with quality of service. End-user behaviour leads to highly variable UGV popularity. This aspect can be exploited to efficiently utilize the limited storage of the caches and improve the hit ratio of UGVs. In this paper, we propose a new architecture for data analytics in a cloud-based CDN to derive UGV popularity online. This architecture uses RESTful web services to gather CDN logs, store them through generic collections in a NoSQL database, and calculate the related popular UGVs in real time. It uses dynamic model training and prediction services to provide each CDN with the related popular videos to be cached, based on the latest trained model. The proposed architecture is implemented with a k-means clustering prediction model, and the obtained results are 99.8% accurate.
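The abstract names a k-means clustering prediction model; a minimal sketch of that idea with scikit-learn is shown below. The feature layout, example values, and the "popular cluster" heuristic are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-video features derived from CDN logs:
# [requests_last_hour, requests_last_day, unique_viewers_last_day]
log_features = np.array([
    [500, 4000, 1200],
    [480, 3900, 1100],
    [  3,   40,   25],
    [  5,   60,   30],
    [ 90,  700,  300],
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(log_features)

# Treat the cluster with the highest mean hourly request rate as "popular";
# videos assigned to it would be pushed to the edge caches.
popular_cluster = model.cluster_centers_[:, 0].argmax()
new_video = np.array([[450, 3700, 1000]])
is_popular = model.predict(new_video)[0] == popular_cluster
print(is_popular)
```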
2018. Assessing Level of Resilience Using Attack Graphs. 2018 10th International Conference on Electronics, Computers and Artificial Intelligence (ECAI). :1–6.
Cyber-physical systems are subject to cyber-attacks due to existing vulnerabilities in the various components constituting them. System resiliency is concerned with the extent to which the system is able to bounce back to a normal state under attack. In this paper, two communication networks are analyzed, formally described, and modeled using the Architecture Analysis & Design Language (AADL), identifying their architecture, connections, vulnerabilities, resources, possible attack instances, and their pre- and post-conditions. The generated network models are then verified against a security property using the JKind model checker integrated tool. The union of the generated attack sequences/scenarios resulting in overall network compromise (given by its loss of stability) is the attack graph. The generated attack graph is visualized graphically using the Unity software and then used to assess the worst-case level of resilience for both networks.
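One way to read a worst-case level of resilience on an attack graph is as the cheapest attack sequence that reaches the compromise state. The sketch below (Python with networkx; the node names, edge costs, and this cost-based reading are assumptions, not the AADL/JKind/Unity tooling used in the paper) illustrates that computation.

```python
import networkx as nx

# Hypothetical attack graph: nodes are attacker states, edges are attack steps
# (pre-condition -> post-condition), each labeled with an effort/cost estimate.
g = nx.DiGraph()
g.add_weighted_edges_from([
    ("external", "web_srv",   1.0),   # exploit an exposed service
    ("external", "vpn_gw",    3.0),
    ("web_srv",  "db_srv",    2.0),   # lateral movement
    ("vpn_gw",   "db_srv",    1.0),
    ("db_srv",   "ctrl_node", 2.0),   # final step: the network loses stability
])

# Worst-case resilience here = minimum total attacker effort over all
# attack sequences that reach the compromise state.
worst_case = nx.shortest_path_length(g, "external", "ctrl_node", weight="weight")
worst_path = nx.shortest_path(g, "external", "ctrl_node", weight="weight")
print(worst_case, worst_path)   # 5.0 ['external', 'web_srv', 'db_srv', 'ctrl_node']
```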
2018. An Attack Graph-Based On-Line Multi-Step Attack Detector. Proceedings of the 19th International Conference on Distributed Computing and Networking. :40:1–40:10.
Modern distributed systems are characterized by complex deployments designed to ensure high availability through replication and diversity, to tolerate the presence of failures, and to limit the possibility of successful compromise. However, software is not free from bugs that generate vulnerabilities that could be exploited by an attacker through multiple steps. This paper presents an attack-graph-based multi-step attack detector aimed at detecting a possible ongoing attack early enough to take proper countermeasures. A visualization interfaced with the described attack detector presents the security operator with the relevant pieces of information, allowing a better comprehension of the network status and providing assistance in managing attack situations (i.e., reactive analysis mode). We first propose an architecture and then present the implementation of each building block. Finally, we provide an evaluation of the proposed approach aimed at highlighting the existing trade-off between detection accuracy and detection time.
2018. Automated Security Configuration Checklist for Apple iOS Devices Using SCAP v1.2. 2018 International Conference on Platform Technology and Service (PlatCon). :1–6.
Security content automation includes the configuration of large numbers of systems, secure installation of patches, verification of security-related configuration settings, compliance with security policies and regulatory requirements, and the ability to respond quickly when new threats are discovered [1]. Although humans are important in information security management, they sometimes introduce errors and inconsistencies in an organization due to the manual nature of their tasks [2]. The Security Content Automation Protocol (SCAP) was developed by the U.S. NIST to automate information security management tasks such as vulnerability and patch management, and to achieve continuous monitoring of security configurations in an organization. In this paper, SCAP is employed to develop an automated security configuration checklist for use in verifying Apple iOS device configurations against a defined security baseline to enforce policy compliance in an enterprise.
2018. Automated Security Investment Analysis of Dynamic Networks. Proceedings of the Australasian Computer Science Week Multiconference. :6:1–6:10.
It is important to assess the cost benefits of IT security investments. Typically, this is done through a manual risk assessment process. In this paper, we propose an approach to automate this using graphical security models (GSMs). GSMs have been used to assess the security of networked systems using various security metrics. Most existing GSMs assume that networks are static; however, modern networks (e.g., cloud and software-defined networking) are dynamic and subject to change. Thus, it is important to develop an approach that takes the dynamic aspects of networks into account. To this end, we automate the security investment analysis of dynamic networks using a GSM named the Temporal-Hierarchical Attack Representation Model (T-HARM) in order to automatically evaluate security investments and their effectiveness over a given period of time. We demonstrate our approach via simulations.
2018. Automatic Detection of Cyber Security Related Accounts on Online Social Networks: Twitter As an Example. Proceedings of the 9th International Conference on Social Media and Society. :236–240.
Recent studies have revealed that cyber criminals tend to exchange knowledge about cyber attacks in online social networks (OSNs). Cyber security experts are another set of information providers on OSNs who frequently share information about cyber security incidents along with their personal opinions and analyses. Therefore, in order to improve our knowledge about evolving cyber attacks and the underlying human behavior for different purposes (e.g., crime investigation, understanding the career development of cyber criminals and cyber security professionals, detection of impending cyber attacks), it would be very useful to detect cyber security related accounts on OSNs automatically and monitor their activities. This paper reports our preliminary work on the automatic detection of cyber security related accounts on OSNs, using Twitter as an example. Three machine learning based classification algorithms were applied and compared: decision trees, random forests, and SVM (support vector machines). Experimental results showed that both decision trees and random forests performed well with an overall accuracy over 95%, and when random forests were used with behavioral features the accuracy reached as high as 97.877%.
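A minimal sketch of the random-forest classification step is given below (scikit-learn; the behavioural features and toy labels are hypothetical, and the paper's reported accuracies come from its own dataset, not from this example).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical behavioural features per account:
# [tweets_per_day, fraction_with_links, fraction_security_keywords, follower_ratio]
X = np.array([
    [12.0, 0.80, 0.65, 3.1],   # cyber security related account
    [ 9.5, 0.75, 0.70, 2.4],
    [ 3.0, 0.10, 0.02, 0.8],   # unrelated account
    [ 5.5, 0.20, 0.01, 1.1],
    [11.0, 0.60, 0.55, 4.0],
    [ 2.0, 0.05, 0.00, 0.9],
])
y = np.array([1, 1, 0, 0, 1, 0])   # 1 = cyber security related

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=3).mean())   # toy accuracy estimate
```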
2018. Block-Supply Chain: A New Anti-Counterfeiting Supply Chain Using NFC and Blockchain. Proceedings of the 1st Workshop on Cryptocurrencies and Blockchains for Distributed Systems. :30–35.
Current anti-counterfeiting supply chains rely on a centralized authority to combat counterfeit products. This architecture results in issues such as a single point of processing, storage, and failure. Blockchain technology has emerged as a promising solution to such issues. In this paper, we propose the block-supply chain, a new decentralized supply chain that detects counterfeiting attacks using blockchain and Near Field Communication (NFC) technologies. The block-supply chain replaces the centralized supply chain design and utilizes a newly proposed consensus protocol that is, unlike existing protocols, fully decentralized and balances efficiency and security. Our simulations show that the proposed protocol offers remarkable performance with a satisfactory level of security compared to the state-of-the-art consensus protocol Tendermint.
2018. Breaking Down Violence: A Deep-learning Strategy to Model and Classify Violence in Videos. Proceedings of the 13th International Conference on Availability, Reliability and Security. :50:1–50:7.
Detecting violence in videos through automatic means is significant for law enforcement and for the analysis of surveillance cameras with the intent of maintaining public safety. Moreover, it may be a great tool for protecting children from accessing inappropriate content and for helping parents make better informed decisions about what their kids should watch. However, this is a challenging problem, since the very definition of violence is broad and highly subjective. Hence, detecting such nuances in videos with no human supervision is not only a technical but also a conceptual problem. With this in mind, we explore how to better describe the idea of violence for a convolutional neural network by breaking it into more objective and concrete parts. Initially, our method uses independent networks to learn features for more specific concepts related to violence, such as fights, explosions, blood, etc. Then we use these features to classify each concept and later fuse them in a meta-classification to describe violence. We also explore how to represent time-based events in still images as network inputs, since many violent acts are described in terms of movement. We show that using more specific concepts is an intuitive and effective solution, besides being complementary in forming a more robust definition of violence. When compared to other methods for violence detection, this approach achieves better classification quality while using only automatic features.
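The concept-then-meta-classification idea can be illustrated compactly: each concept network emits a score, and a meta-classifier fuses the scores into a violence decision. The sketch below uses a logistic-regression fuser over made-up concept scores; the paper's actual networks and fusion method are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-clip scores from independent concept networks
# (columns: fight, explosion, blood), plus a binary "violent" label.
concept_scores = np.array([
    [0.90, 0.10, 0.70],
    [0.05, 0.02, 0.01],
    [0.20, 0.85, 0.40],
    [0.10, 0.05, 0.03],
])
labels = np.array([1, 0, 1, 0])

# Meta-classification: fuse the concept scores into a single violence decision.
meta = LogisticRegression().fit(concept_scores, labels)
print(meta.predict_proba([[0.75, 0.60, 0.50]])[0, 1])   # P(violent) for a new clip
```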
2018. A Conceptual Model for Promoting Positive Security Behavior in Internet of Things Era. 2018 Global Wireless Summit (GWS). :358–363.
As the Internet of Things (IoT) era rises, billions of additional connected devices in new locations and applications will create new challenges. Security and privacy are among the major challenges in IoT, as any breaches and misuse in those areas will have an adverse impact on users. Among the many factors that determine the security of any system, the human factor is the most important aspect to consider, as it is well known that humans are the weakest link in the information security cycle. Experts express the need to increase the cyber resilience culture and to focus on the human factors involved in cybersecurity to counter cyber risks. The aim of this study is to propose a conceptual model, adapted from a model in the public health sector, to improve the cyber resilience of IoT users. Cyber resilience is improved by promoting security behavior, gathering the existing knowledge, and gaining an understanding of every contributing aspect. The proposed approach is expected to be used as a foundation for governments, especially in Indonesia, to derive strategies for improving the cyber resilience of IoT users.
2018. Constructing Control System Abstractions from Modular Components. Proceedings of the 21st International Conference on Hybrid Systems: Computation and Control (Part of CPS Week). :137–146.
This paper tackles the problem of constructing finite abstractions for formal controller synthesis with high dimensional systems. We develop a theory of abstraction for discrete time nonlinear systems that are equipped with variables acting as interfaces for other systems. Systems interact via an interconnection map which constrains the value of system interface variables. An abstraction of a high dimensional interconnected system is obtained by composing subsystem abstractions with an abstraction of the interconnection. System abstractions are modular in the sense that they can be rearranged, substituted, or reused in configurations that were unknown during the time of abstraction. Constructing the abstraction of the interconnection map can become computationally infeasible when there are many systems. We introduce intermediate variables which break the interconnection and the abstraction procedure apart into smaller problems. Examples showcase the abstraction of a 24-dimensional system through the composition of 24 individual systems, and the synthesis of a controller for a 6-dimensional system with a consensus objective.
2018. Container Cluster Model Development for Legacy Applications Integration in Scientific Software System. 2018 IEEE International Conference "Quality Management, Transport and Information Security, Information Technologies" (IT QM IS). :815–819.
A feature of modern scientific information systems is their integration with computing applications that provide distributed computer simulation and intelligent processing of Big Data using high-efficiency computing. These software systems often include legacy applications written in different programming languages, with non-standardized interfaces. To solve the application integration problem, containerization systems are used, which allow the environment to be configured in the shortest time to deploy the software system. However, no such systems exist for computer simulation systems with a large number of nodes. The article considers the task of combining containers into a cluster and integrating legacy applications in order to manage the distributed software system MD-SLAG-MELT v.14, which supports high-performance computing and visualization of computer experiment results. Testing results of the container cluster, including an automatic load-sharing module for the MD-SLAG-MELT v.14 system, are given.
2018. Content Based Algorithm Aiming to Improve the WEB_QoE Over SDN Networks. 2018 32nd International Conference on Advanced Information Networking and Applications Workshops (WAINA). :153–158.
Since the 1990s, the concept of QoE has been increasingly present, and many scientists take it into account in different fields of application. Taking video streaming as an example, QoE has been well studied in that case, whereas for the web the study of QoE has been relatively neglected. Quality of Experience (QoE) is the set of objective and subjective characteristics that satisfy, retain, or give confidence to a user throughout the life cycle of a service. Some studies take the different measurement metrics of QoE as their subject; others tackle new ways to improve QoE in order to satisfy customers and gain their loyalty. In this paper, we focus on web QoE, which has been neglected by research despite its great importance given the complexity of new web pages and their increasingly critical utility. The richness of new web pages in images, videos, audio, etc. and their growing significance prompted us to write this paper, in which we discuss a new method that aims to improve web QoE in a software-defined network (SDN). Our proposed method consists in automating, and making more flexible, the management of QoE improvement for web pages, by writing an algorithm that, depending on the case, chooses the necessary treatment to improve the web QoE of the page concerned, and by using both web prefetching and caching to accelerate data transfer when the user asks for it. The first part of the paper discusses the advantages and disadvantages of existing works. In the second part we propose an automatic algorithm that treats each case with the appropriate solution guaranteeing its best performance. The last part is devoted to the performance evaluation.
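A toy sketch of the case-based dispatch the abstract describes is shown below: the page's content profile decides whether edge caching or prefetching is the treatment applied. The fields, thresholds, and action names are hypothetical illustrations, not the paper's algorithm.

```python
def choose_treatment(page):
    """Pick a QoE-improvement action from a page's content profile.

    page: dict with hypothetical fields, e.g.
          {"size_kb": 2400, "video": True, "images": 35, "links": 120}
    """
    if page.get("video") or page["size_kb"] > 2000:
        return "cache_at_edge"            # heavy media: keep a copy close to users
    if page["links"] > 50:
        return "prefetch_linked_pages"    # navigation hub: warm likely next pages
    return "no_action"

print(choose_treatment({"size_kb": 2400, "video": True, "images": 35, "links": 120}))
```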
2018. Critical Aspects Pertaining Security of IoT Application Level Software Systems. 2018 IEEE 9th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON). :960–964.
With the prevalence of Internet of Things (IoT) devices and systems touching almost every single aspect of our modern life, one core factor that will determine whether this technology succeeds and gains people's trust, or fails, is security. This technology is aimed at facilitating and improving the quality of our lives; however, its hectic and fast growth makes it an attractive and prime target for a whole variety of hackers, posing a significant risk to our technology and IT infrastructures at both the enterprise and individual levels. This paper discusses and identifies some critical aspects, from a software security perspective, that need to be addressed and considered when designing IoT applications. It is mainly concerned with potential security issues of the applications running on IoT devices, including insecure interfaces, insecure software, the Constrained Application Protocol, and middleware security. This effort is part of a funded research project that investigates Internet of Things (IoT) security and privacy issues related to architecture, connectivity and data collection.
2018. Crypto Primitives IPCore Implementation Susceptibility in Cyber Physical System. 2018 IEEE International Symposium on Smart Electronic Systems (iSES) (Formerly iNiS). :255–260.
Security evaluation of third-party cryptographic soft/hard IP (Intellectual Property) cores is often ignored for several reasons, including lack of awareness of the threat, lack of knowledge about validation methodology, or treating security as a byproduct. In particular, the security validation of a bought-out hardware IP core is important before it is deployed in practice. In this paper, we present a Look-Up-Table (LUT) based unrolled implementation of the low-latency cipher PRINCE as a hard IP core and show how a susceptible implementation (nested and flexible placement of IP cores) can be experimentally exploited to reveal the secret key on an FPGA using a power analysis attack. Such a vulnerability in constrained Internet-of-Things (IoT) devices poses serious threats to cyber-physical systems.
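The power analysis attack mentioned above is typically carried out as a correlation power analysis (CPA); a generic sketch of that technique is given below. The S-box is a placeholder permutation rather than the real PRINCE table, and the traces would come from FPGA measurements; nothing here reproduces the paper's specific setup.

```python
import numpy as np

SBOX = list(range(16))                       # placeholder 4-bit S-box, not PRINCE's
HW = [bin(x).count("1") for x in range(16)]  # Hamming-weight leakage model

def cpa_recover_nibble(traces, nibbles):
    """Return the 4-bit key guess whose predicted leakage best matches the traces.

    traces:  (N, samples) array of measured power traces
    nibbles: (N,) array of known input nibbles fed to the S-box
    """
    best_guess, best_corr = None, -1.0
    for guess in range(16):
        # Hypothetical leakage: Hamming weight of the S-box output under this guess.
        hyp = np.array([HW[SBOX[n ^ guess]] for n in nibbles], dtype=float)
        hyp_c = hyp - hyp.mean()
        tr_c = traces - traces.mean(axis=0)
        # Pearson correlation of the hypothesis with every sample point.
        corr = np.abs(hyp_c @ tr_c) / (
            np.linalg.norm(hyp_c) * np.linalg.norm(tr_c, axis=0) + 1e-12
        )
        if corr.max() > best_corr:
            best_guess, best_corr = guess, corr.max()
    return best_guess, best_corr
```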
2018. Cryptographic and Non-Cryptographic Network Applications and Their Optical Implementations. 2018 IEEE Photonics Society Summer Topical Meeting Series (SUM). :9–10.
The use of quantum mechanical signals in communication opens up the opportunity to build new communication systems that accomplish tasks which communication with classical signal structures cannot achieve. Prominent examples are quantum key distribution protocols, which allow the generation of secret keys without computational assumptions about adversaries. Over the past decade, protocols have been developed that achieve tasks which can also be accomplished with classical signals, but the quantum version of the protocol either uses fewer resources or leaks less information between the involved parties. The gap between quantum and classical can be exponential in the input size of the problem. Examples are the comparison of data, the scheduling of appointments, and others. Until recently, it was thought that these protocols are of mere conceptual value and that the quantum advantage could not be realized. We changed that by developing quantum optical versions of these abstract protocols that can run with simple laser pulses, beam splitters, and detectors [1-3]. By now the first protocols have been successfully implemented [4], showing that a quantum advantage can be realized. The next step is to find and realize protocols that have high practical value.
2018. Culture, Errors, and Rapport-Building Dialogue in Social Agents. Proceedings of the 18th International Conference on Intelligent Virtual Agents. :51–58.
This work explores whether culture impacts the extent to which social dialogue can mitigate (or exacerbate) the loss of trust caused when agents make conversational errors. Our study uses an agent designed to persuade users to agree with its rankings on two tasks. Participants from the U.S. and Japan completed our study. We performed two manipulations: (1) the presence of conversational errors – the agent exhibited errors in the second task or not; (2) the presence of social dialogue – between the two tasks, users either engaged in a social dialogue with the agent or completed a control task. Replicating previous research, conversational errors reduced the agent's influence. However, we found that culture matters: there was a marginally significant three-way interaction between culture, presence of social dialogue, and presence of errors. The pattern of results suggests that, for American participants, social dialogue backfires if it is followed by errors, presumably because it extends the period of good performance, creating a stronger contrast effect with the subsequent errors. However, for Japanese participants, social dialogue, if anything, mitigates the detrimental effect of errors; the negative effect of errors is only seen in the absence of social dialogue. Agent design should therefore take the culture of the intended users into consideration when using social dialogue to bolster agents against conversational errors.
2018. A Data Provenance Visualization Approach. 2018 14th International Conference on Semantics, Knowledge and Grids (SKG). :84–91.
In recent years, data provenance has created an emerging requirement for technologies that enable end users to access, evaluate, and act on the provenance of data. In the era of Big Data, the amount of data created by corporations around the world has grown each year. For example, in both the social media and e-Science domains, data is growing at an unprecedented rate. As the data has grown rapidly, information on its origin and lifecycle has also grown. In turn, this requires technologies that enable the clarification and interpretation of data through the use of data provenance. This study proposes methodologies for the visualization of provenance data compatible with the W3C PROV-O specification. The visualizations are done by summarization and comparison of the data provenance. We facilitated the testing of these methodologies by providing a prototype that extends an existing open-source visualization tool. We discuss the usability of the proposed methodologies with an experimental study; our initial results show that the proposed approach is usable, and its processing overhead is negligible.
2018. Datafusion: Taking Source Confidences into Account. Proceedings of the 8th International Conference on Information Systems and Technologies. :9:1–9:6.
Data fusion is a form of information integration in which large amounts of data mined from sources such as web sites, Twitter feeds, Facebook postings, blogs, email messages, news streams, and the like are integrated. Such data is inherently uncertain and unreliable. The sources have different degrees of accuracy, and the data mining process itself incurs additional uncertainty. The main goal of data fusion is to discover the correct data among the uncertain and possibly conflicting mined data. We investigate a data fusion approach that, in addition to the accuracy of sources, incorporates the correctness (confidence) measures that most data mining approaches associate with mined data. There are a number of advantages to incorporating these confidences. First, we do not require a training set; the initial training set is obtained using the confidence measures. More importantly, a more accurate fusion can result from taking the confidences into account. We present an approach to determine the correctness threshold using users' feedback and show that it can significantly improve the accuracy of data fusion. We evaluate the performance and accuracy of our data fusion approach in two groups of experiments. In the first group, data sources contain random (unintentional) errors; in the second group, data sources contain intentional falsifications.
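The core idea of weighting each claim by both source accuracy and extraction confidence can be shown in a few lines. The sketch below is a simplified illustration (plain Python; the voting scheme, names, and example data are assumptions, not the paper's fusion model, which also learns a correctness threshold from user feedback).

```python
from collections import defaultdict

def fuse(claims, source_accuracy):
    """Pick the most likely value for each data item.

    claims: list of (source, item, value, confidence) where confidence is the
            extractor's own score in [0, 1]; source_accuracy maps source -> [0, 1].
    """
    scores = defaultdict(float)                    # (item, value) -> accumulated evidence
    for source, item, value, conf in claims:
        scores[(item, value)] += source_accuracy[source] * conf
    best = {}
    for (item, value), score in scores.items():
        if score > best.get(item, (None, -1.0))[1]:
            best[item] = (value, score)
    return {item: value for item, (value, _) in best.items()}

claims = [
    ("web_site_a", "ceo_of_x", "Alice", 0.9),
    ("blog_b",     "ceo_of_x", "Bob",   0.4),
    ("news_c",     "ceo_of_x", "Alice", 0.7),
]
accuracy = {"web_site_a": 0.8, "blog_b": 0.5, "news_c": 0.9}
print(fuse(claims, accuracy))   # {'ceo_of_x': 'Alice'}
```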
2018. Debugging Distributed Systems with Why-Across-Time Provenance. Proceedings of the ACM Symposium on Cloud Computing. :333–346.
Systematically reasoning about the fine-grained causes of events in a real-world distributed system is challenging. Causality, from the distributed systems literature, can be used to compute the causal history of an arbitrary event in a distributed system, but the event's causal history is an over-approximation of the true causes. Data provenance, from the database literature, precisely describes why a particular tuple appears in the output of a relational query, but data provenance is limited to the domain of static relational databases. In this paper, we present wat-provenance: a novel form of provenance that provides the benefits of causality and data provenance. Given an arbitrary state machine, wat-provenance describes why the state machine produces a particular output when given a particular input. This enables system developers to reason about the causes of events in real-world distributed systems. We observe that automatically extracting the wat-provenance of a state machine is often infeasible. Fortunately, many distributed systems components have simple interfaces from which a developer can directly specify wat-provenance using a technique we call wat-provenance specifications. Leveraging the theoretical foundations of wat-provenance, we implement a prototype distributed debugging framework called Watermelon.
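To make the idea of a wat-provenance specification concrete, the toy sketch below hand-writes one for a key-value store: the witness for a get(k) is the most recent set(k, value) in the input trace. The class and method names are hypothetical; this is not the Watermelon framework's API.

```python
class KeyValueStore:
    """Toy state machine with a hand-written provenance specification."""

    def __init__(self):
        self.trace = []                      # ordered input requests

    def set(self, key, value):
        self.trace.append(("set", key, value))

    def get(self, key):
        self.trace.append(("get", key))
        for op in reversed(self.trace[:-1]):
            if op[0] == "set" and op[1] == key:
                return op[2]
        return None

    def provenance_of_get(self, key):
        """Witness: the single set() request responsible for get(key)'s output."""
        for i in range(len(self.trace) - 1, -1, -1):
            if self.trace[i][0] == "set" and self.trace[i][1] == key:
                return [self.trace[i]]
        return []

kv = KeyValueStore()
kv.set("x", 1); kv.set("x", 2); kv.set("y", 9)
print(kv.get("x"), kv.provenance_of_get("x"))   # 2 [('set', 'x', 2)]
```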
2018. A Demonstration of Privacy-Preserving Aggregate Queries for Optimal Location Selection. 2018 IEEE 19th International Symposium on "A World of Wireless, Mobile and Multimedia Networks" (WoWMoM). :1–3.
In recent years, service providers, such as mobile operators providing wireless services, have collected location data to an enormous extent with the increased usage of mobile phones. Vertical businesses, such as banks, may want to use this location information for their own scenarios. However, service providers cannot directly provide these private data to vertical businesses because of privacy and legal issues. In this demo, we show how privacy-preserving solutions can be utilized for such location-based queries without revealing each organization's sensitive data. In our demonstration, we used a partially homomorphic cryptosystem in our protocols and showed the practicality and feasibility of our proposed solution.
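An additively homomorphic cryptosystem such as Paillier is the typical building block for this kind of private aggregation. The sketch below uses the python-paillier (phe) package to add per-region counts under encryption so that only the aggregate is ever decrypted; the demo's actual protocol, parties, and query shape are not specified here.

```python
from phe import paillier

# One party holds the key pair; the other adds its sensitive per-region
# counts under encryption, so only the aggregate is revealed on decryption.
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

per_region_counts = [132, 87, 254]                   # operator-side sensitive data
encrypted = [public_key.encrypt(c) for c in per_region_counts]

encrypted_total = sum(encrypted[1:], encrypted[0])   # homomorphic addition
print(private_key.decrypt(encrypted_total))          # 473
```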
2018. Denial of Engineering Operations Attacks in Industrial Control Systems. Proceedings of the Eighth ACM Conference on Data and Application Security and Privacy. :319–329.
We present a new type of attack, termed denial of engineering operations, in which an attacker can interfere with the normal cycle of an engineering operation, leading to a loss of situational awareness. Specifically, the attacker can deceive the engineering software during attempts to retrieve the ladder logic program from a programmable logic controller (PLC) by manipulating the ladder logic on the PLC such that the software is unable to process it while the PLC continues to execute it successfully. This attack vector can provide sufficient cover for the attacker's actual scenario to play out while the owner tries to understand the problem and reestablish positive operational control. To enable forensic analysis and, eventually, eliminate the threat, we have developed the first decompiler for ladder logic programs. Ladder logic is a graphical programming language for PLCs that control physical processes such as power grids, pipelines, and chemical plants; PLCs are a common target of malicious modifications leading to the compromise of the control behavior (with potentially serious consequences). Our decompiler, Laddis, transforms a low-level representation to its corresponding high-level original representation comprising graphical symbols and connections. The evaluation of the decompiler's accuracy on programs of varying complexity demonstrates perfect reconstruction of the original programs. We present three new attack scenarios on PLC-deployed ladder logic and demonstrate the effectiveness of the decompiler on these scenarios.
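To give a flavour of lifting a low-level PLC program back to ladder symbols, the toy sketch below maps a simplified IEC 61131-3 instruction-list rung to ladder-style text. It is purely illustrative: Laddis operates on the vendor's low-level representation, not on this toy instruction list.

```python
def decompile_rung(instructions):
    """Toy lift of a simplified instruction-list rung to ladder-style text."""
    contacts, coil = [], None
    for op, operand in instructions:
        if op in ("LD", "AND"):
            contacts.append(f"[ {operand} ]")       # normally-open contact
        elif op == "ANDN":
            contacts.append(f"[/ {operand} ]")      # normally-closed contact
        elif op == "ST":
            coil = f"( {operand} )"                 # output coil
    return "──".join(contacts) + "──" + coil

print(decompile_rung([("LD", "I0.0"), ("ANDN", "I0.1"), ("ST", "Q0.0")]))
# [ I0.0 ]──[/ I0.1 ]──( Q0.0 )
```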
2018. Deployment of IoT-based Honeynet Model. Proceedings of the 6th International Conference on Information Technology: IoT and Smart City. :134–139.
This paper deals with developing a honeynet model based on the Internet of Things (IoT). Given the significance of industrial services, such a model helps enhance information security detection in the industrial domain; the model is designed to detect adversaries who attempt to attack industrial control systems (ICS) and supervisory control and data acquisition (SCADA) systems. The model consists of hardware and software aspects and is designed to focus on ICS services that are managed remotely via SCADA systems. In order to prove that the model works, a few security tools are used, such as Shodan, Nmap, and others. These tools have been applied locally inside the LAN and globally via the Internet to obtain supporting results. Ultimately, the results contain a list of protocols and ports that represent industrial control services. In particular, they include TCP/UDP ports 623, 102, 1025, and 161, which represent the IPMI, S7comm, KAMSTRAP, and SNMP services, respectively.
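A minimal sketch of the kind of local check the authors perform with Nmap is shown below: TCP connect probes against the ICS-related ports the honeynet exposes. The host address is a placeholder, and IPMI (623) and SNMP (161) are usually reached over UDP, so in practice they would need protocol-aware probes rather than a TCP connect.

```python
import socket

ICS_TCP_PORTS = {102: "S7comm", 623: "IPMI", 1025: "KAMSTRAP"}

def probe(host, timeout=1.0):
    """Return the ICS-related TCP ports on `host` that accept a connection."""
    open_services = {}
    for port, service in ICS_TCP_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the connect succeeded
                open_services[port] = service
    return open_services

print(probe("192.0.2.10"))   # documentation-range placeholder address
```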



