Bibliography
The strong development of the Internet of Things (IoT) is dramatically changing traditional perceptions of the current Internet towards an integrated vision of smart objects interacting with each other. While in recent years many technological challenges have already been solved through the extension and adaptation of wireless technologies, security and privacy remain the main barriers to broad-scale IoT deployment. In this emerging paradigm, typical scenarios manage particularly sensitive data, and any leakage of information could severely damage the privacy of users. This paper provides a concise description of some of the major challenges in these areas that still need to be overcome in the coming years to gain full acceptance from all IoT stakeholders involved. In addition, we propose a distributed capability-based access control mechanism built on public key cryptography to cope with some of these challenges. Specifically, our solution is based on the design of a lightweight token used to access CoAP resources, and an optimized implementation of the Elliptic Curve Digital Signature Algorithm (ECDSA) inside the smart object. The results obtained from our experiments demonstrate the feasibility of the proposal and show promise for covering more complex scenarios in the future, as well as for application in specific IoT use cases.
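As an illustration of the kind of token-based access control this abstract describes, the following minimal sketch signs a capability token with ECDSA over P-256 and verifies it before serving a CoAP request. The token fields, the issuer role, and the use of the Python cryptography package are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): an access token for a CoAP
# resource is signed with ECDSA (P-256) by an issuer and verified by the smart
# object before granting access. Token fields below are illustrative.
import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

issuer_key = ec.generate_private_key(ec.SECP256R1())   # authorization issuer
issuer_pub = issuer_key.public_key()                   # known to the smart object

token = {"subject": "client-42", "resource": "coap://node/temperature",
         "actions": ["GET"], "expires": 1735689600}
payload = json.dumps(token, sort_keys=True).encode()
signature = issuer_key.sign(payload, ec.ECDSA(hashes.SHA256()))

def authorize(payload: bytes, signature: bytes) -> bool:
    """Verify the capability token before serving the CoAP request."""
    try:
        issuer_pub.verify(signature, payload, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

print(authorize(payload, signature))   # True for an untampered token
```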
Cyber crime generates huge volumes of log and transactional data, which must be stored and analyzed. It is difficult and time-consuming for forensic investigators to find clues in and analyze such data. Network forensic analysis involves network traces and the detection of attacks. The traces include Intrusion Detection System and firewall logs, logs generated by network services and applications, and packet captures from sniffers. Because data is generated for every event in the network, it is difficult for forensic investigators to find clues and analyze the data. Network forensics deals with the monitoring, capturing, recording, and analysis of network traffic for detecting intrusions and investigating them. This paper focuses on data collection from the cyber system and the web browser. FTK 4.0 is discussed for memory forensic analysis and remote system forensics, the results of which can be used as evidence to aid investigation.
Botnet is one of the most widespread and serious malware threats in today's cyber attacks. A botnet is a group of Internet-connected computer programs communicating with other similar programs in order to perform various attacks. The HTTP-based botnet is the most dangerous among all the botnets in existence today. In botnet detection, behaviour-based approaches in particular suffer from the unavailability of benchmark datasets, which leads to a lack of precise evaluation, comparison, and deployment of botnet detection systems. Most of the datasets in the botnet field come from local environments, cannot be used on a large scale due to privacy problems, do not reflect common trends, and also lack some statistical features. To the best of our knowledge, there is no benchmark dataset available that is infected by an HTTP-based botnet (HBB) performing Distributed Denial of Service (DDoS) attacks against Web servers using the HTTP-GET flooding method. In addition, no Web access log infected by a botnet is available to researchers. Therefore, in this paper, a complete test-bed is presented that implements a real-time HTTP-based botnet performing a variety of DDoS attacks against Web servers using the HTTP-GET flooding method. In addition, Web access logs with HTTP bot traces are generated. These real-time datasets and Web access logs can be used to study the behaviour of HTTP-based botnets as well as to evaluate the different solutions proposed by various researchers to detect them.
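A simple way to make the HTTP-GET flooding behaviour concrete is a rate check over access-log records. The sketch below flags source IPs whose GET-request rate in a sliding window exceeds a threshold; the window length, threshold, and record format are illustrative assumptions rather than the detection criteria used in the paper.

```python
# Minimal sketch, not the authors' detector: flag source IPs whose GET-request
# rate within a sliding time window exceeds a threshold. Records are assumed to
# be already parsed from a Web access log as (src_ip, unix_timestamp) pairs.
from collections import defaultdict, deque

WINDOW = 10.0       # seconds (illustrative)
THRESHOLD = 100     # GET requests per window considered suspicious (illustrative)

def flood_suspects(records):
    recent = defaultdict(deque)
    suspects = set()
    for ip, ts in sorted(records, key=lambda r: r[1]):
        q = recent[ip]
        q.append(ts)
        while q and ts - q[0] > WINDOW:   # keep only requests inside the window
            q.popleft()
        if len(q) > THRESHOLD:
            suspects.add(ip)
    return suspects

# A bot sending 200 GETs in one second is flagged; a slow client is not.
bot = [("10.0.0.5", 0.005 * i) for i in range(200)]
human = [("192.168.1.2", 5.0 * i) for i in range(10)]
print(flood_suspects(bot + human))   # {'10.0.0.5'}
```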
Rapid advances in wireless ad hoc networks have led to an increase in their real-life applications. Since wireless ad hoc networks have no centralized infrastructure or management, they are vulnerable to several security threats. Malicious packet dropping is a serious attack against these networks, in which an adversary node tries to drop all or some of the received packets instead of forwarding them to the next hop along the path. A dangerous variant of this attack is the black hole: after absorbing network traffic, the malicious node drops all received packets, creating a denial-of-service (DoS) attack. In this paper, a dynamic trust model to defend the network against this attack is proposed. In this approach, a node initially trusts all immediate neighbors. Using feedback on its neighbors' behavior, a node updates the corresponding trust values. Simulation results in NS-2 show that the attack is detected successfully with a low false positive probability.
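The following is a minimal sketch of a feedback-driven trust table of the kind described above; the initial value, reward/penalty constants, and threshold are illustrative assumptions, not the paper's exact update rule.

```python
# Illustrative neighbour trust model: trust starts at a neutral value and is
# adjusted from forwarding feedback; a node whose trust drops below a threshold
# is treated as a black hole and excluded from routing.
INITIAL_TRUST = 0.5
REWARD, PENALTY = 0.05, 0.20     # asymmetric: dropping packets is punished harder
THRESHOLD = 0.2

class TrustTable:
    def __init__(self):
        self.trust = {}

    def report(self, neighbor: str, forwarded: bool) -> None:
        t = self.trust.get(neighbor, INITIAL_TRUST)
        t = min(1.0, t + REWARD) if forwarded else max(0.0, t - PENALTY)
        self.trust[neighbor] = t

    def is_malicious(self, neighbor: str) -> bool:
        return self.trust.get(neighbor, INITIAL_TRUST) < THRESHOLD

table = TrustTable()
for _ in range(5):               # neighbour B silently drops every packet
    table.report("B", forwarded=False)
print(table.is_malicious("B"))   # True -> exclude B from route discovery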
DeepQA is a large-scale natural language processing (NLP) question-and-answer system that responds across a breadth of structured and unstructured data, from hundreds of analytics that are combined with over 50 models, trained through machine learning. After the 2011 historic milestone of defeating the two best human players in the Jeopardy! game show, the technology behind IBM Watson, DeepQA, is undergoing gamification into real-world business problems. Gamifying a business domain for Watson is a composite of functional, content, and training adaptation for nongame play. During domain gamification for medical, financial, government, or any other business, each system change affects the machine-learning process. As opposed to the original Watson Jeopardy!, whose class distribution of positive-to-negative labels is 1:100, in adaptation the computed training instances, question-and-answer pairs transformed into true-false labels, result in a very low positive-to-negative ratio of 1:100 000. Such initial extreme class imbalance during domain gamification poses a big challenge for the Watson machine-learning pipelines. The combination of ingested corpus sets, question-and-answer pairs, configuration settings, and NLP algorithms contribute toward the challenging data state. We propose several data engineering techniques, such as answer key vetting and expansion, source ingestion, oversampling classes, and question set modifications to increase the computed true labels. In addition, algorithm engineering, such as an implementation of the Newton-Raphson logistic regression with a regularization term, relaxes the constraints of class imbalance during training adaptation. We conclude by empirically demonstrating that data and algorithm engineering are complementary and indispensable to overcome the challenges in this first Watson gamification for real-world business problems.
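To make the algorithm-engineering step concrete, the sketch below fits an L2-regularised logistic regression by Newton-Raphson on a highly imbalanced toy dataset; the data, penalty, and iteration count are illustrative assumptions and not Watson's actual configuration.

```python
# Minimal sketch of L2-regularised logistic regression fitted by Newton-Raphson.
import numpy as np

def fit_logreg_newton(X, y, lam=1.0, iters=20):
    """X: (n, d) features, y: (n,) labels in {0, 1}, lam: L2 penalty."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
        grad = X.T @ (y - p) - lam * w            # gradient of penalised log-likelihood
        R = p * (1.0 - p)                         # Bernoulli variances
        H = -(X.T * R) @ X - lam * np.eye(d)      # Hessian with ridge term
        w -= np.linalg.solve(H, grad)             # Newton step
    return w

# Highly imbalanced toy data: roughly one positive per hundred negatives.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
y = (X[:, 0] + rng.normal(scale=0.5, size=2000) > 2.5).astype(float)
print(fit_logreg_newton(X, y, lam=1.0))
```

The ridge term keeps the Hessian well conditioned even when one class is very rare, which is the role the regularisation plays in the adaptation scenario described above.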
Power grids are monitored by gathering data through remote sensors and estimating the state of the grid. Bad data detection schemes detect and remove poor data. False data is a special type of data injection designed to evade typical bad data detection schemes and compromise state estimates, possibly leading to improper control of the grid. Topology perturbation is a situational awareness method that implements the use of distributed flexible AC transmission system devices to alter impedance on optimally chosen lines, updating the grid topology and exposing the presence of false data. The success of the topology perturbation for improving grid control and exposing false data in AC state estimation is demonstrated. A technique is developed for identifying the false data injection attack vector and quantifying the compromised measurements. The proposed method provides successful false data detection and identification in IEEE 14, 24, and 39-bus test systems using AC state estimation.
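For context, the sketch below shows the standard residual-based bad data test for a linearised state estimator and why a false data vector of the form z + Hc evades it; this is the generic weakness the paper addresses, not the paper's topology-perturbation technique, and the model and threshold are illustrative assumptions.

```python
# Generic residual test for state estimation: random bad data is caught, but a
# coordinated injection z + H*c stays in the column space of H and passes.
import numpy as np

rng = np.random.default_rng(1)
n_meas, n_state = 12, 4
H = rng.normal(size=(n_meas, n_state))            # linearised measurement model
x_true = rng.normal(size=n_state)
z = H @ x_true + rng.normal(scale=0.01, size=n_meas)

def residual_norm(z, H):
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)  # least-squares state estimate
    return np.linalg.norm(z - H @ x_hat)

tau = 0.1   # detection threshold; in practice taken from a chi-square distribution
print("clean measurements:", residual_norm(z, H) < tau)                              # passes
print("random bad data:   ", residual_norm(z + 0.5 * rng.normal(size=n_meas), H) < tau)  # caught
c = rng.normal(size=n_state)
print("false data z + Hc: ", residual_norm(z + H @ c, H) < tau)                      # evades the test
```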
The need for increased surveillance due to the increase in flight volume in remote or oceanic regions outside the range of traditional radar coverage has been fulfilled by the advent of space-based Automatic Dependent Surveillance-Broadcast (ADS-B) surveillance systems. ADS-B systems can provide air traffic controllers with highly accurate real-time flight data. ADS-B depends on digital communications between aircraft and ground stations of the air route traffic control center (ARTCC); however, these communications are not secured. Anyone with the appropriate capabilities and equipment can interrogate the signal and transmit their own false data; this is known as spoofing. The possibility of this type of attack decreases the situational awareness of United States airspace. The purpose of this project is to design a secure transmission framework that prevents ADS-B signals from being spoofed. Three alternative methods of securing ADS-B signals are evaluated: hashing, symmetric encryption, and asymmetric encryption. The security strength of the design alternatives is determined from research. Feasibility criteria are determined by comparative analysis of the alternatives. Economic implications and possible collision risk are determined from simulations that model the United States airspace over the Gulf of Mexico and part of that airspace under attack, respectively. The ultimate goal of the project is to show that if ADS-B signals can be secured, situational awareness can improve and the ARTCC can use information from this surveillance system to decrease the separation between aircraft and ultimately maximize the use of United States airspace.
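As an illustration of the symmetric alternative evaluated above, the sketch below authenticates an ADS-B position report with an HMAC tag so a spoofed report produced without the shared key is rejected; the message fields and the key-provisioning assumption are illustrative, not part of the ADS-B standard.

```python
# Minimal sketch of symmetric message authentication for an ADS-B report.
import hmac, hashlib, os

shared_key = os.urandom(32)   # assumed provisioned to aircraft and ARTCC ground stations

def tag(message: bytes, key: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, mac: bytes, key: bytes) -> bool:
    return hmac.compare_digest(mac, tag(message, key))

report = b"ICAO=A1B2C3;lat=28.10;lon=-89.40;alt=35000;vel=450"
mac = tag(report, shared_key)

print(verify(report, mac, shared_key))   # True: genuine report
spoofed = b"ICAO=A1B2C3;lat=28.90;lon=-88.00;alt=35000;vel=450"
print(verify(spoofed, tag(spoofed, os.urandom(32)), shared_key))  # False: attacker lacks the key
```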
Encrypting and decrypting data efficiently is one of the challenging aspects of modern computer science. This paper introduces a new cryptographic algorithm to achieve a higher level of security. The algorithm makes it possible to hide the meaning of a message in unprintable characters. The central idea is to make the encrypted message unmistakably unprintable using several rounds of ASCII conversion and a cyclic mathematical function. The original message is divided into packets, and binary matrices are formed for each packet to produce the unprintable encrypted message by driving the ASCII value of each character below 32. Similarly, several ASCII conversions and the inverse cyclic mathematical function are used to decrypt the unprintable encrypted message. The final encrypted message, obtained after three rounds of encryption, is unprintable text, through which the algorithm attains a higher level of security without increasing the size of the data or losing any data.
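The toy sketch below is not the authors' algorithm and offers no real security; it only illustrates the general idea of a reversible, position-dependent cyclic transformation over character codes that can push output bytes into the unprintable range.

```python
# Toy, insecure illustration of a reversible cyclic transformation over ASCII codes.
def cyclic_encrypt(plaintext: str, key: int) -> bytes:
    return bytes((ord(c) + key * (i + 1)) % 256 for i, c in enumerate(plaintext))

def cyclic_decrypt(ciphertext: bytes, key: int) -> str:
    return "".join(chr((b - key * (i + 1)) % 256) for i, b in enumerate(ciphertext))

msg = "attack at dawn"
ct = cyclic_encrypt(msg, key=7)
print(ct)                          # mostly unprintable bytes
print(cyclic_decrypt(ct, key=7))   # 'attack at dawn'
```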
Distributed and parallel applications are critical information technology systems in multiple industries, including academia, military, government, financial, medical, and transportation. These applications present target rich environments for malicious attackers seeking to disrupt the confidentiality, integrity and availability of these systems. Applying the military concept of defense cyber maneuver to these systems can provide protection and defense mechanisms that allow survivability and operational continuity. Understanding the tradeoffs between information systems security and operational performance when applying maneuver principles is of interest to administrators, users, and researchers. To this end, we present a model of a defensive maneuver cyber platform using Stochastic Petri Nets. This model enables the understanding and evaluation of the costs and benefits of maneuverability in a distributed application environment, specifically focusing on moving target defense and deceptive defense strategies.
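The tradeoff between maneuverability and operational performance can be illustrated with a much simpler model than the paper's Stochastic Petri Nets: the Monte Carlo sketch below estimates the availability cost of periodic moving-target maneuvers when each maneuver takes the service offline for a short reconfiguration time. Rates and reconfiguration times are illustrative assumptions.

```python
# Monte Carlo estimate of availability under Poisson-arriving maneuvers, each of
# which imposes a fixed reconfiguration downtime (a deliberate simplification).
import random

def simulate_availability(maneuver_rate, reconfig_time, horizon=100_000.0, seed=0):
    """Fraction of time the service is up over the simulated horizon."""
    random.seed(seed)
    t, downtime = 0.0, 0.0
    while t < horizon:
        t += random.expovariate(maneuver_rate)   # time until the next maneuver
        downtime += reconfig_time                # service unavailable while reconfiguring
    return 1.0 - downtime / t

for rate in (0.01, 0.1, 1.0):                    # maneuvers per time unit
    print(rate, round(simulate_availability(rate, reconfig_time=0.5), 4))
```

More frequent maneuvers shrink the attacker's reconnaissance window but push availability down, which is exactly the cost/benefit question the SPN model is built to answer.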
Many common cyberdefenses (like firewalls and intrusion-detection systems) are static, giving attackers the freedom to probe them at will. Moving-target defense (MTD) adds dynamism, putting the systems to be defended in motion, potentially at great cost to the defender. An alternative approach is a mobile resilient defense that removes attackers' ability to rely on prior experience without requiring motion in the protected infrastructure. The defensive technology absorbs most of the cost of motion, is resilient to attack, and is unpredictable to attackers. The authors' mobile resilient defense, Ant-Based Cyber Defense (ABCD), is a set of roaming, bio-inspired, digital-ant agents working with stationary agents in a hierarchy headed by a human supervisor. ABCD provides a resilient, extensible, and flexible defense that can scale to large, multi-enterprise infrastructures such as the smart electric grid.
Over the past 20 years, the use of the web in daily life has grown steadily, and with it the use of web applications. Most web applications in existence today have vulnerabilities that could be exploited by unauthorized persons. Some well-known web application vulnerabilities are Structured Query Language (SQL) Injection, Cross-Site Scripting (XSS), and Cross-Site Request Forgery (CSRF). By exploiting these vulnerabilities, an attacker can gain information about users and damage the reputation of the respective organization. Developers often do not realize that their web applications are vulnerable; they only find out when their code is attacked or manipulated. This is understandable, since a web application may contain thousands of lines of code, making loopholes hard to detect. Nowadays, as hacking tools and tutorials are easier to obtain, many new hackers emerge. Even though SQL injection is easy to protect against, a large number of systems on the Internet remain vulnerable to this type of attack because a few subtle conditions can go undetected. Therefore, in this paper we propose a detection model for detecting and recognizing one such web vulnerability, SQL Injection, based on defined and identified criteria. In addition, the proposed detection model is able to generate a report on the vulnerability level of the web application. As a consequence, the proposed detection model should decrease the possibility of SQL Injection attacks being launched against the web application.
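A signature-based check is the simplest building block of such a detection model. The sketch below scores user input against a handful of common SQL injection patterns; the patterns and scoring are illustrative assumptions, not the criteria defined in the paper.

```python
# Minimal signature-based SQL injection scoring of user input.
import re

SQLI_PATTERNS = [
    r"(?i)\bunion\b.+\bselect\b",         # UNION-based injection
    r"(?i)\bor\b\s+'?\d+'?\s*=\s*'?\d+",  # tautologies such as OR 1=1
    r"--|#|/\*",                          # comment sequences that cut queries short
    r"(?i);\s*(drop|delete|update)\b",    # stacked destructive statements
]

def sqli_score(user_input: str) -> int:
    """Number of suspicious patterns matched; higher means more likely injection."""
    return sum(bool(re.search(p, user_input)) for p in SQLI_PATTERNS)

print(sqli_score("alice"))                    # 0 -> benign
print(sqli_score("' OR 1=1 --"))              # 2 -> flag and report
print(sqli_score("1; DROP TABLE users --"))   # 2 -> flag and report
```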
In recent years, HTML5 has been widely adopted in popular browsers. Unfortunately, as a new Web standard, HTML5 may expand the Cross-Site Scripting (XSS) attack surface even as it improves the interactivity of the page. In this paper, we identified 14 XSS attack vectors related to HTML5 through a systematic analysis of its new tags and attributes. Based on these vectors, an XSS test vector repository was constructed and a dynamic XSS vulnerability detection tool focusing on Webmail systems was implemented. By applying the tool to some popular Webmail systems, seven exploitable XSS vulnerabilities were found. The evaluation results show that our tool can efficiently detect XSS vulnerabilities introduced by HTML5.
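The sketch below scans HTML for a small illustrative subset of HTML5-related XSS vectors (event handlers on new tags and attributes); it is not the paper's repository of 14 vectors, and the patterns are assumptions chosen for illustration.

```python
# Static scan for a few HTML5-related XSS vectors in a message body.
import re

HTML5_XSS_VECTORS = [
    r"(?i)<video[^>]+onerror\s*=",                # media error handlers
    r"(?i)<input[^>]+autofocus[^>]+onfocus\s*=",  # autofocus-triggered handlers
    r"(?i)<svg[^>]+onload\s*=",                   # SVG load handlers
    r"(?i)formaction\s*=\s*['\"]?javascript:",    # form action hijacking
]

def find_vectors(html: str):
    return [p for p in HTML5_XSS_VECTORS if re.search(p, html)]

sample = '<video src=x onerror="alert(1)"></video>'
print(find_vectors(sample))   # a non-empty list marks the content as suspicious
```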
Datacenter-based Cloud computing has induced new disruptive trends in networking, key among which is network virtualization. Software-Defined Networking overlays aim to improve the efficiency of the next generation multitenant datacenters. While early overlay prototypes are already available, they focus mainly on core functionality, with little being known yet about their impact on the system level performance. Using query completion time as our primary performance metric, we evaluate the overlay network impact on two representative datacenter workloads, Partition/Aggregate and 3-Tier. We measure how much performance is traded for overlay's benefits in manageability, security and policing. Finally, we aim to assist the datacenter architects by providing a detailed evaluation of the key overlay choices, all made possible by our accurate cross-layer hybrid/mesoscale simulation platform.
Communicating vehicles will change road traffic as we know it. With current versions of European and US standards in mind, the authors discuss privacy and traffic surveillance issues in vehicular network technology and outline research directions that could address these issues.
Smart Grid is the trend in next-generation power distribution and network management, enabling two-way interactive communication and operation between consumers and suppliers so as to achieve intelligent resource management and optimization. Wireless mesh network technology is a promising infrastructure solution to support these smart functionalities, although it has some inherent vulnerabilities and cyber-attack risks to be addressed. Because the Smart Grid relies heavily on the underlying communication networks, their security and dependability are critical to the entire smart grid technology. Several studies have been conducted in the field of Smart Grid security, but few have focused on the dependability of the control center networks and the associated resource analysis. In this paper, we investigate dependability modeling and resource allocation in redundant communication networks by adopting two mathematical approaches, Reliability Block Diagrams (RBD) and Stochastic Petri Nets (SPNs), to analyze the dependability of control center networks in a Smart Grid environment. We apply the proposed modeling approach in an extensive case study to evaluate the availability of smart grid networks with different redundancy mechanisms. A combination of dependability models and reliability importance is used to analyze network availability with respect to the most important components. We also show how network availability varies with Mean Time to Failure (MTTF) in different network architectures.
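The RBD part of such an analysis reduces to simple availability arithmetic. The sketch below computes steady-state availability from MTTF and MTTR and compares a single communication path against a redundant one; the MTTF/MTTR values are illustrative assumptions, not the case-study parameters.

```python
# RBD-style availability: series blocks multiply availabilities, parallel
# (redundant) blocks multiply unavailabilities.
def availability(mttf: float, mttr: float) -> float:
    return mttf / (mttf + mttr)

def series(*avail):                 # all components must be up
    a = 1.0
    for x in avail:
        a *= x
    return a

def parallel(*avail):               # at least one redundant component must be up
    u = 1.0
    for x in avail:
        u *= (1.0 - x)
    return 1.0 - u

router = availability(mttf=8760.0, mttr=4.0)   # hours (illustrative)
link = availability(mttf=4380.0, mttr=8.0)

single_path = series(router, link)
redundant_path = series(router, parallel(link, link))
print(round(single_path, 6), round(redundant_path, 6))
```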
With the arrival of the big data era, information privacy and security issues become even more crucial. The Mining Associations with Secrecy Konstraints (MASK) algorithm and its improved versions were proposed as data mining approaches for privacy-preserving association rules. The MASK algorithm adopts only a data perturbation strategy, which leads to a low degree of privacy preservation. Moreover, it is difficult to apply the MASK algorithm in practice because of its long execution time. This paper proposes a new algorithm based on data perturbation and query restriction (DPQR) that improves the degree of privacy preservation through multi-parameter perturbation. To improve time efficiency, the calculation of the inverse matrix is simplified by dividing the matrix into blocks; meanwhile, a further optimization based on set theory reduces the number of database scans. Both theoretical analyses and experimental results prove that the proposed DPQR algorithm has better performance.
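For background, the sketch below shows the single-parameter bit-flipping perturbation that underlies MASK and the corresponding support reconstruction for one item; the DPQR multi-parameter scheme and its block-matrix optimisation are not reproduced here, and the retention probability is an illustrative choice.

```python
# Bit-flipping perturbation and single-item support reconstruction (MASK-style).
import random

def perturb(column, p):
    """Keep each boolean entry with probability p, flip it with probability 1 - p."""
    return [bit if random.random() < p else 1 - bit for bit in column]

def estimate_support(perturbed_column, p):
    """Invert E[s'] = s*p + (1 - s)*(1 - p) to recover the true support s."""
    s_obs = sum(perturbed_column) / len(perturbed_column)
    return (s_obs - (1.0 - p)) / (2.0 * p - 1.0)

random.seed(0)
true_column = [1] * 3000 + [0] * 7000        # true support 0.30
noisy = perturb(true_column, p=0.9)
print(round(estimate_support(noisy, p=0.9), 3))   # close to 0.30
```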
This paper designs a secure transmission and authorization management system based on the principles of Public Key Infrastructure (PKI) and Role-Based Access Control (RBAC). It can solve the problems of identity authentication, secure transmission, and access control on the Internet. First, a certificate authority system is implemented according to PKI principles; it can issue and revoke server-side and client-side digital certificates. Secure data transmission is achieved through the combination of digital certificates and the SSL protocol. In addition, this paper analyses the access control mechanism and the RBAC model, whose structure has been improved: the principle of group authority is added to the model, and a combination of centralized and distributed authority management is adopted, making the model more flexible.
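The sketch below illustrates the role-plus-group authorisation check described above: users inherit roles through group membership, and roles grant permissions. Role, group, and permission names are illustrative assumptions.

```python
# Minimal RBAC check with group-based role assignment.
ROLE_PERMISSIONS = {
    "auditor":  {"report:read"},
    "operator": {"report:read", "device:control"},
    "admin":    {"report:read", "device:control", "user:manage"},
}
GROUP_ROLES = {
    "ops-team":   {"operator"},
    "audit-team": {"auditor"},
}
USER_GROUPS = {"alice": {"ops-team"}, "bob": {"audit-team"}}

def permitted(user: str, permission: str) -> bool:
    """A user inherits roles through group membership; roles grant permissions."""
    roles = set()
    for group in USER_GROUPS.get(user, set()):
        roles |= GROUP_ROLES.get(group, set())
    return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in roles)

print(permitted("alice", "device:control"))   # True  (via ops-team -> operator)
print(permitted("bob", "device:control"))     # False (auditor may only read reports)
```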
Smart objects are small devices with limited system resources, typically made to fulfill a single simple task. By connecting smart objects and thus forming an Internet of Things, the devices can interact with each other and their users and support a new range of applications. Due to the limitations of smart objects, common security mechanisms are not easily applicable. Small message sizes and the lack of processing power severely limit the devices' ability to perform cryptographic operations. This paper introduces a protocol for delegating client authentication and authorization in a constrained environment. The protocol describes how to establish a secure channel based on symmetric cryptography between resource-constrained nodes in a cross-domain setting. A resource-constrained node can use this protocol to delegate authentication of communication peers and management of authorization information to a trusted host with less severe limitations regarding processing power and memory.
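In the spirit of the delegation protocol summarised above, the sketch below shows how a trusted host that shares a long-term symmetric key with a constrained node can hand a client an authorisation ticket and a derived session key, while the node re-derives the same key from the ticket without any public-key operation. The ticket format and derivation are illustrative assumptions, not the protocol's actual specification.

```python
# Symmetric-key delegation sketch: the trusted host issues a ticket plus a
# session key; the constrained node re-derives the key from the ticket.
import hmac, hashlib, os, json

LONG_TERM_KEY = os.urandom(32)   # shared by trusted host and constrained node

def issue_ticket(client_id: str, scope: str):
    """Run on the trusted host: returns (ticket, session_key) for the client."""
    ticket = json.dumps({"client": client_id, "scope": scope,
                         "nonce": os.urandom(8).hex()}, sort_keys=True).encode()
    session_key = hmac.new(LONG_TERM_KEY, ticket, hashlib.sha256).digest()
    return ticket, session_key

def accept_ticket(ticket: bytes) -> bytes:
    """Run on the constrained node: re-derive the session key from the ticket."""
    return hmac.new(LONG_TERM_KEY, ticket, hashlib.sha256).digest()

ticket, k_client = issue_ticket("sensor-client-7", "read:temperature")
k_node = accept_ticket(ticket)
print(hmac.compare_digest(k_client, k_node))   # True: both ends now share a session key
```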
IP technology for resource-constrained devices enables transparent end-to-end connections between a vast variety of devices and services in the Internet of Things (IoT). To protect these connections, several variants of traditional IP security protocols have recently been proposed for standardization, most notably the DTLS protocol. In this paper, we identify significant resource requirements for the DTLS handshake when employing public-key cryptography for peer authentication and key agreement purposes. These overheads particularly hamper secure communication for memory-constrained devices. To alleviate these limitations, we propose a delegation architecture that offloads the expensive DTLS connection establishment to a delegation server. By handing over the established security context to the constrained device, our delegation architecture significantly reduces the resource requirements of DTLS-protected communication for constrained devices. Additionally, our delegation architecture naturally provides authorization functionality when leveraging the central role of the delegation server in the initial connection establishment. Hence, in this paper, we present a comprehensive, yet compact solution for authentication, authorization, and secure data transmission in the IP-based IoT. The evaluation results show that, compared to a public-key-based DTLS handshake, our delegation architecture reduces the memory overhead by 64%, computations by 97%, and network transmissions by 68%.
Optimizing memory access is critical for performance and power efficiency. CPU manufacturers have developed sampling-based performance measurement units (PMUs) that report precise costs of memory accesses at specific addresses. However, this data is too low-level to be meaningfully interpreted and contains an excessive amount of irrelevant or uninteresting information. We have developed a method to gather fine-grained memory access performance data for specific data objects and regions of code with low overhead and attribute semantic information to the sampled memory accesses. This information provides the context necessary to more effectively interpret the data. We have developed a tool that performs this sampling and attribution and used the tool to discover and diagnose performance problems in real-world applications. Our techniques provide useful insight into the memory behaviour of applications and allow programmers to understand the performance ramifications of key design decisions: domain decomposition, multi-threading, and data motion within distributed memory systems.
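The attribution step described above can be pictured as a lookup from sampled addresses into a table of known allocation ranges. The sketch below maps PMU-style address samples to named data objects with a sorted interval table; the addresses, object names, and sample source are illustrative assumptions.

```python
# Attribute sampled memory-access addresses to named data objects.
import bisect

class ObjectMap:
    def __init__(self):
        self.starts, self.ends, self.names = [], [], []

    def register(self, name: str, start: int, size: int) -> None:
        i = bisect.bisect_left(self.starts, start)
        self.starts.insert(i, start)
        self.ends.insert(i, start + size)
        self.names.insert(i, name)

    def attribute(self, address: int) -> str:
        i = bisect.bisect_right(self.starts, address) - 1
        if i >= 0 and address < self.ends[i]:
            return self.names[i]
        return "<unknown>"

objs = ObjectMap()
objs.register("particle_array", 0x7f0000000000, 4 << 20)
objs.register("halo_buffer",    0x7f0000a00000, 1 << 20)

samples = [0x7f0000000040, 0x7f0000a00100, 0x7f0100000000]   # e.g. from a PMU sample buffer
print([objs.attribute(a) for a in samples])
# ['particle_array', 'halo_buffer', '<unknown>']
```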
Manual monitoring can no longer keep up with the rapid growth of cybercrime; there is a great need for a new spatial analysis technology that allows emergency events to be rapidly and accurately located in the real environment and, furthermore, for correlative analysis models supporting cybercrime prevention strategies. On the other hand, geographic information systems (GIS) have changed substantially in data structure, coordinate system, and analysis model due to the “uncertainty and hyper-dimension” characteristics of network objects and behavior. In this paper, the spatial rules of typical cybercrime are explored on the basis of GIS with Internet searching and IP tracking technology: (1) set up a spatial database through IP searching based on criminal evidence; (2) extend GIS data structures and spatial models, adding a network dimension and virtual attribution to realize a dynamic connection between cyberspace and real space; (3) design a cybercrime monitoring and prevention system to discover cyberspace logics based on spatial analysis.
In interconnected power systems, dynamic model reduction can be applied to generators outside the area of interest (i.e., the study area) to reduce the computational cost associated with transient stability studies. This paper presents a method of deriving the reduced dynamic model of the external area based on dynamic response measurements. The method consists of three steps, namely dynamic-feature extraction, attribution, and reconstruction (DEAR). In this method, a feature extraction technique, such as singular value decomposition (SVD), is applied to the measured generator dynamics after a disturbance. Characteristic generators are then identified in the feature attribution step as those matching the extracted dynamic features with the highest similarity, forming a suboptimal “basis” of system dynamics. In the reconstruction step, generator state variables such as rotor angles and voltage magnitudes are approximated with a linear combination of the characteristic generators, resulting in a quasi-nonlinear reduced model of the original system. The network model is unchanged in the DEAR method. Tests on several IEEE standard systems show that the proposed method yields a better reduction ratio and smaller response errors than traditional coherency-based reduction methods.
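The sketch below illustrates only the SVD step of a DEAR-like workflow: dominant dynamic features are extracted from measured post-disturbance responses and each signal is reconstructed from a few components. The synthetic swing signals and the choice of two features are illustrative assumptions, not the paper's test systems.

```python
# SVD-based feature extraction and low-rank reconstruction of generator responses.
import numpy as np

t = np.linspace(0.0, 10.0, 500)
# Rows = generators, columns = time samples of post-disturbance rotor-angle swings.
X = np.vstack([np.sin(2 * np.pi * 0.6 * t) * np.exp(-0.1 * t) * a +
               np.cos(2 * np.pi * 1.1 * t) * np.exp(-0.2 * t) * b
               for a, b in [(1.0, 0.1), (0.8, 0.3), (0.2, 1.0), (0.1, 0.9)]])

U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                          # number of dominant dynamic features
X_reduced = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

error = np.linalg.norm(X - X_reduced) / np.linalg.norm(X)
print("energy captured:", round(1.0 - error**2, 4))   # close to 1 with two features
```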
Conducting active cyberdefense requires the acceptance of a proactive framework that acknowledges the lack of predictable symmetries between malicious actors and their capabilities and intent. Unlike physical weapons such as firearms, naval vessels, and piloted aircraft-all of which risk physical exposure when engaged in direct combat-cyberweapons can be deployed (often without their victims' awareness) under the protection of the anonymity inherent in cyberspace. Furthermore, it is difficult in the cyber domain to determine with accuracy what a malicious actor may target and what type of cyberweapon the actor may wield. These aspects imply an advantage for malicious actors in cyberspace that is greater than for those in any other domain, as the malicious cyberactor, under current international constructs and norms, has the ability to choose the time, place, and weapon of engagement. This being said, if defenders are to successfully repel attempted intrusions, then they must conduct an active cyberdefense within a framework that proactively engages threatening actions independent of a requirement to achieve attribution. This paper proposes that private business, government personnel, and cyberdefenders must develop a threat identification framework that does not depend upon attribution of the malicious actor, i.e., an attribution agnostic cyberdefense construct. Furthermore, upon developing this framework, network defenders must deploy internally based cyberthreat countermeasures that take advantage of defensive network environmental variables and alter the calculus of nefarious individuals in cyberspace. Only by accomplishing these two objectives can the defenders of cyberspace actively combat malicious agents within the virtual realm.
Nowadays, the design of a secure access authentication protocol in heterogeneous networks that achieves seamless roaming across radio access technologies for mobile users (MUs) is a major technical challenge. This paper proposes a Distributed Anonymous Authentication (DAA) protocol to resolve the problems of heavy signaling overhead and long signaling delay when authentication is executed in a centralized manner. By treating MUs and points of attachment (PoAs) as group members, the adopted group signature algorithms provide identity verification directly, without sharing secrets in advance, which significantly reduces signaling overhead. Moreover, MUs sign messages on behalf of the group, so anonymity and unlinkability against PoAs are provided and privacy is thus preserved. Performance analysis confirms the advantages of DAA over existing solutions.