In this special session, members of the ACM Joint Task Force on Cyber Education to Develop Undergraduate Curricular Guidance will provide an overview of the task force mission, objectives, and work plan. After the overview, task force members will engage session participants in the curricular development process.
Language-integrated query is an embedding of database queries into a host language to code queries at a higher level than the all-too-common concatenation of strings of SQL fragments. The eventually produced SQL is ensured to be well-formed and well-typed, and hence free from the embarrassing security problems (such as SQL injection) that plague string concatenation. Language-integrated query takes advantage of the host language's functional and modular abstractions to compose and reuse queries and build query libraries. Furthermore, language-integrated query systems like T-LINQ generate efficient SQL by applying a number of program transformations to the embedded query. Alas, the set of transformation rules is not designed to be extensible. We demonstrate a new technique of integrating database queries into a typed functional programming language, so as to write well-typed, composable queries and execute them efficiently on any SQL back-end as well as on an in-memory noSQL store. A distinct feature of our framework is that both the query language and the transformation rules needed to generate efficient SQL are safely user-extensible, to account for many variations in the SQL back-ends, as well as for domain-specific knowledge. The transformation rules are guaranteed to be type-preserving and hygienic by their very construction. They can be built from separately developed and reusable parts and arbitrarily composed into optimization pipelines. With this technique we have embedded into OCaml a relational query language that supports a very large subset of SQL, including grouping and aggregation. Its types cover the complete set of intricate SQL behaviors.
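To make the composition idea concrete, here is a minimal sketch in Python (our illustration only; the paper's framework is a typed OCaml embedding, and the class and method names below are invented): query fragments are first-class values that compose freely and are rendered to a single well-formed SQL string only at the end.

```python
# A minimal sketch of the language-integrated-query idea. Queries are
# composed as values and rendered once, never built by splicing user
# input into SQL strings.
from dataclasses import dataclass, field

@dataclass
class Query:
    table: str
    predicates: list = field(default_factory=list)
    columns: tuple = ("*",)

    def where(self, pred):
        # Composition: returns a new query value, enabling reuse.
        return Query(self.table, self.predicates + [pred], self.columns)

    def select(self, *cols):
        return Query(self.table, self.predicates, cols)

    def to_sql(self):
        # "Optimization": fuse all accumulated filters into one WHERE clause.
        where = f" WHERE {' AND '.join(self.predicates)}" if self.predicates else ""
        return f"SELECT {', '.join(self.columns)} FROM {self.table}{where}"

adults = Query("users").where("age >= 18")          # reusable fragment
print(adults.where("country = 'NL'").select("name").to_sql())
# SELECT name FROM users WHERE age >= 18 AND country = 'NL'
```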
We present the first formal verification of state machine safety for the Raft consensus protocol, a critical component of many distributed systems. We connected our proof to previous work to establish an end-to-end guarantee that our implementation provides linearizable state machine replication. This proof required iteratively discovering and proving 90 system invariants. Our verified implementation is extracted to OCaml and runs on real networks. The primary challenge we faced during the verification process was proof maintenance, since proving one invariant often required strengthening and updating other parts of our proof. To address this challenge, we propose a methodology of planning for change during verification. Our methodology adapts classical information hiding techniques to the context of proof assistants, factors out common invariant-strengthening patterns into custom induction principles, proves higher-order lemmas that show any property proved about a particular component implies analogous properties about related components, and makes proofs robust to change using structural tactics. We also discuss how our methodology may be applied to systems verification more broadly.
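The key safety property has a simple statement. Below is an executable restatement in Python (our illustration; the paper's guarantee is established in a proof assistant over all executions, not by runtime checking): no two servers may disagree on the command applied at any log index.

```python
# State machine safety, restated as a runtime check over example logs.
def state_machine_safe(applied_logs):
    """applied_logs maps server id -> list of commands applied, in order."""
    logs = list(applied_logs.values())
    for i, log_a in enumerate(logs):
        for log_b in logs[i + 1:]:
            # Servers may lag behind, but their common prefix must agree.
            if any(x != y for x, y in zip(log_a, log_b)):
                return False
    return True

assert state_machine_safe({1: ["set x 1", "set y 2"], 2: ["set x 1"]})
assert not state_machine_safe({1: ["set x 1"], 2: ["set x 9"]})
```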
In infrastructure wireless network technology, communication between users within a certain area is supported by access points (APs) or base-station networks, whereas in ad-hoc networks, communication between users is provided only through direct connections between nodes. Ad-hoc network technology supports mobility directly through routing algorithms. When a connected node is lost owing to the node's movement, the routing protocol transfers its traffic to another node; the routing table in the node receiving the traffic detects any changes that occur and manages them. This paper proposes a routing protocol method that sets up multi-hops in the ad-hoc network and verifies its performance, showing that it provides more effective connection persistence than existing methods.
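A toy sketch of the rerouting behavior described above (our illustration, not the paper's protocol): when the preferred next hop is lost due to movement, traffic is transferred to an alternative hop from the routing table.

```python
# destination -> candidate next hops, in order of preference
routes = {"D": ["B", "C"]}
alive = {"B": False, "C": True}        # node B has moved out of range

def next_hop(dest):
    for hop in routes.get(dest, []):
        if alive.get(hop):
            return hop                 # first live candidate carries traffic
    return None                        # no route left: trigger route discovery

print(next_hop("D"))                   # -> 'C': traffic transferred to node C
```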
Evolutionary Computation (EC) has been used with great success on various real-world problems. One domain abundant with difficult problems is cryptology. Cryptology can be divided into cryptography, which, informally speaking, studies methods for ensuring secrecy (but also authenticity, privacy, etc.), and cryptanalysis, which deals with methods for breaking cryptographic systems. Although not always in an obvious way, EC can be applied to problems from both domains. This tutorial will first give a brief introduction to cryptology intended for a general audience (therefore omitting the proofs and mathematics behind many concepts). Afterwards, we concentrate on several topics from cryptography that have so far been successfully tackled with EC and discuss why those topics are suitable for EC. However, care must be taken, since there are a number of problems that seem impossible to solve with EC, and one needs to recognize the limitations of these heuristics. We will discuss the choice of appropriate EC techniques (GA, GP, CGP, ES, multi-objective optimization, etc.) for various problems and evaluate the importance of that choice. Furthermore, we will discuss the gap between the cryptographic community and the EC community and what it means for the results. In doing so, we will place special emphasis on the perspective that cryptography presents a source of benchmark problems for the EC community. To conclude, we will present a number of topics we consider to be strong research choices that can have real-world impact. In that part, we give special attention to cryptographic problems where the cryptographic community has successfully applied EC, but which have remained outside the focus of the EC community. This tutorial will also present some live demos of EC in action on cryptographic problems. We will present several problems, ways of encoding solutions, and the impact of algorithm choice, and finally we will run some experiments to show the results and discuss how to assess them from a cryptographic perspective.
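As a flavor of such demos, the following sketch (ours, not the tutorial's material) uses a simple (1+1)-style evolutionary search to recover a substitution-cipher key by maximizing how English-like the decryption's letter distribution looks; real attacks would score candidate keys with n-gram statistics instead.

```python
import random, string

# Relative frequencies of common English letters (rounded; letters not
# listed contribute zero to the fitness).
ENG = {'e': .127, 't': .091, 'a': .082, 'o': .075, 'i': .070, 'n': .067,
       's': .063, 'h': .061, 'r': .060, 'd': .043, 'l': .040, 'u': .028}

def fitness(key, ciphertext):
    # Decrypt by mapping key[i] -> i-th alphabet letter, then score the
    # weighted frequency match against English.
    plain = ciphertext.translate(str.maketrans(key, string.ascii_lowercase))
    n = max(len(plain), 1)
    return sum(ENG.get(c, 0.0) * plain.count(c) / n for c in set(plain))

def evolve(ciphertext, generations=5000):
    key = list(string.ascii_lowercase)
    random.shuffle(key)
    best = fitness("".join(key), ciphertext)
    for _ in range(generations):
        i, j = random.sample(range(26), 2)
        key[i], key[j] = key[j], key[i]        # mutation: swap two letters
        f = fitness("".join(key), ciphertext)
        if f >= best:
            best = f                           # keep improvements
        else:
            key[i], key[j] = key[j], key[i]    # otherwise revert
    return "".join(key)
```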
Parallel and distributed systems rely on intricate protocols to manage shared resources and synchronize, i.e., to manage how many processes are in a particular state. Effective verification of such systems requires universal quantification to reason about parameterized state, together with cardinalities tracking sets of processes, messages, and failures, to adequately capture protocol logic. In this paper we present Tool, an automatic invariant synthesis method that integrates cardinality-based reasoning and universal quantification. The resulting increase in expressiveness allows Tool to verify, for the first time, a representative collection of intricate parameterized protocols.
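To illustrate the kind of property involved (our sketch, unrelated to the tool itself), the following brute-force exploration of a toy mutual-exclusion protocol checks a cardinality invariant, namely that the number of processes in the critical section never exceeds one, over all reachable states for a fixed parameter value:

```python
N = 3                          # number of processes (fixed parameter here)

def invariant(config):
    # Cardinality constraint: at most one process in the critical section.
    return sum(s == "critical" for s in config) <= 1

def successors(config):
    # Toy protocol: a trying process enters the critical section only
    # when nobody is inside; a critical process eventually leaves.
    for i, s in enumerate(config):
        if s == "idle":
            yield config[:i] + ("trying",) + config[i + 1:]
        elif s == "trying" and "critical" not in config:
            yield config[:i] + ("critical",) + config[i + 1:]
        elif s == "critical":
            yield config[:i] + ("idle",) + config[i + 1:]

seen, frontier = set(), [("idle",) * N]
while frontier:
    c = frontier.pop()
    if c in seen:
        continue
    seen.add(c)
    assert invariant(c), f"invariant violated in {c}"
    frontier.extend(successors(c))
print(f"explored {len(seen)} states; cardinality invariant holds")
```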
Media streaming has come to dominate Internet traffic, and the trend will continue in the coming years. Information-Centric Networking (ICN) has attracted many researchers as a way to distribute media content efficiently. Since end users usually obtain content from indeterminate caches in ICN, the publisher cannot enforce data security and access control by relying on those caches. Hence, the ability of self-contained protection is important for cached contents. Attribute-based encryption (ABE) is considered the preferred solution to achieve this goal. However, existing ABE schemes usually have efficiency problems: the number of exponentiations in key generation and of pairing operations in decryption each grows linearly with the number of attributes involved, which makes them costly. In this paper, we propose an efficient key-policy ABE with fast key generation and decryption (FKP-ABE). In key generation, we eliminate exponentiation and require only multiplications/divisions for each attribute in the access policy. In decryption, we reduce the pairing operations to a constant number, no matter how many attributes are used. The efficiency analysis indicates that our scheme performs better than existing KP-ABE schemes. Finally, we present an implementation framework that incorporates the proposed FKP-ABE into the ICN architecture.
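The efficiency claim can be illustrated with a back-of-the-envelope cost model (the unit costs below are hypothetical placeholders, not the paper's measurements): classical KP-ABE decryption pays one pairing per attribute, whereas a constant-pairing scheme pays a fixed pairing cost plus cheap per-attribute multiplications.

```python
PAIRING, MUL = 1.0, 0.01   # hypothetical relative costs of operations

def classical_decrypt_cost(num_attrs):
    return num_attrs * PAIRING                 # one pairing per attribute

def constant_pairing_decrypt_cost(num_attrs, const_pairings=2):
    return const_pairings * PAIRING + num_attrs * MUL

for n in (5, 20, 80):                          # gap widens with attributes
    print(n, classical_decrypt_cost(n), constant_pairing_decrypt_cost(n))
```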
Information Systems curricula require ongoing and frequent review [2] [11]. Furthermore, such curricula must be flexible because of the fast-paced, dynamic nature of the workplace. Such flexibility can be maintained by modernizing course content or even exchanging hardware or software for newer versions. Alternatively, flexibility can arise from incorporating new information into curricula from other disciplines. One field where the pace of change is extremely high is cybersecurity [3]. Students are left with outdated skills when curricula lag behind the pace of change in industry. For example, cryptography is a required learning objective in the DHS/NSA Center of Academic Excellence (CAE) knowledge criteria [1]. However, the overarching curriculum associated with basic ciphers has gone unchanged for decades. Indeed, a general problem in cybersecurity education is that students lack fundamental knowledge in areas such as ciphers [5]. In response, researchers have developed a variety of interactive classroom visualization tools [5] [8] [9]. Such tools visualize the standard approach to frequency analysis of simple substitution ciphers, which includes review of the most common single letters in the ciphertext. While fundamental ciphers such as the monoalphabetic substitution cipher have not been updated (these are historical ciphers), collective understanding of how humans interact with language has changed. Updated understanding in both English language pedagogy [10] [12] and automated cryptanalysis of substitution ciphers [4] potentially renders the existing interactive classroom visualization tools incomplete or outdated. Classroom visualization tools are powerful teaching aids, particularly for abstract concepts. Existing research has established that such tools promote an active learning environment that translates to not only effective learning conditions but also higher student retention rates [7]. However, visualization tools require extensive planning and design when used to actively engage students with detailed, specific knowledge units such as ciphers [7] [8]. Accordingly, we propose a heatmap-based frequency analysis visualization solution that (a) incorporates digraph and trigraph language-processing norms and (b) enhances the active learning pedagogy inherent in visualization tools. Preliminary results indicate that study participants take approximately 15% longer to learn the heatmap-based frequency analysis technique compared to traditional frequency analysis but demonstrate a 50% increase in efficacy when tasked with solving simple substitution ciphers. Further, a heatmap-based solution contributes positively to the field insofar as educators have an additional tool to use in the classroom. As well, the heatmap visualization tool may allow researchers to comparatively examine the efficacy of visualization tools in the cryptanalysis of monoalphabetic substitution ciphers.
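To make the digraph extension concrete, here is a minimal sketch of the underlying computation (our illustration; the study's tool and corpus statistics are not reproduced, and the input file name is hypothetical): count adjacent letter pairs in a ciphertext and display the resulting 26x26 matrix as a heatmap.

```python
import string
import numpy as np
import matplotlib.pyplot as plt

def digraph_matrix(text):
    # Keep only letters, then count adjacent pairs into a 26x26 grid.
    letters = [c for c in text.lower() if c in string.ascii_lowercase]
    m = np.zeros((26, 26))
    for a, b in zip(letters, letters[1:]):
        m[ord(a) - 97, ord(b) - 97] += 1
    return m / max(m.sum(), 1)            # normalize to frequencies

m = digraph_matrix(open("ciphertext.txt").read())  # hypothetical input file
plt.imshow(m, cmap="hot")
plt.xticks(range(26), string.ascii_lowercase)
plt.yticks(range(26), string.ascii_lowercase)
plt.title("Digraph frequencies")
plt.show()
```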
Yamata-no-Orochi is an authentication and authorization infrastructure spanning multiple service domains that provides Internet services with unified authentication and authorization mechanisms. In this paper, Yamata-no-Orochi is incorporated into a video distribution system to verify its general versatility as a multi-domain authentication and authorization infrastructure for Internet services. We also reduce the authorization time of Yamata-no-Orochi to fulfill the processing-time constraints of the video distribution system. The evaluation results show that all the authentication and authorization processes work correctly and that the performance of Yamata-no-Orochi is practical for the video distribution system.
The growth of the Internet era, with corporate-sector dealings increasingly conducted online, has introduced crucial security challenges in cyberspace. Statistics from recent large-scale attacks define a new class of threat to the online world: the advanced persistent threat (APT), capable of impacting the national security and economic stability of any country. Among APTs, the botnet is one of the most well-articulated and stealthy vehicles for cybercrime. Botnet owners and their criminal organizations continuously develop innovative ways to infect new targets and exploit them. A botnet is a collection of compromised computers (bots) infected by automated software robots that interact to accomplish some distributed task and run without human intervention for illegal purposes. Botnets are mostly malicious in nature and allow cyber criminals to control the infected machines remotely without the victim's knowledge. They use various techniques, communication protocols, and topologies in different stages of their lifecycle, and they can upgrade their methods at any time. Botnets are global in nature, and their aim is to steal or destroy valuable information from organizations as well as individuals. In this paper we present a survey of real-world botnets (APTs).
Cryptographic protocols and algorithms are the backbone of the digital era in which we live. Unfortunately, much confidential information and many credentials have been compromised through neglect of required security services. As a result, talented attackers have mounted a variety of attacks, causing security issues such as financial loss, violations of personal privacy, and threats to democracy. This paper provides a secure design and architecture for cryptographic protocols and expedites the authentication of cryptographic systems. Designing and developing a secure cryptographic system is like a game in which the designer or developer tries to maintain security while the attacker tries to penetrate the security features to mount a successful attack.
The KP-ABE mechanism has emerged as one of the most suitable security schemes for asymmetric encryption and has been widely used to implement access control solutions. However, due to its expensive overhead, it is difficult to deploy this cryptographic scheme in resource-limited networks such as the IoT. As the cloud has become a key infrastructural support for IoT applications, it is attractive to exploit cloud resources to perform heavy operations. In this paper, a collaborative variant of KP-ABE named C-KP-ABE for cloud-based IoT applications is proposed. Our proposal is based on the use of the computing power and storage capacities of cloud servers and trusted assistant nodes to run heavy operations. A performance analysis is conducted to show the effectiveness of the proposed solution.
Mobile apps often collect and share personal data with untrustworthy third-party apps, which may lead to data misuse and privacy violations. Most of the collected data originates from sensors built into the mobile device, where some of the sensors are treated as sensitive by the mobile platform while others permit unconditional access. Examples of privacy-prone sensors are the microphone, camera, and GPS; access to these sensors is always mediated by protected function calls. On the other hand, the light sensor, accelerometer, and gyroscope are considered innocuous, and all apps have unrestricted access to their data. Unfortunately, this gap is not always justified. State-of-the-art privacy mechanisms on Android provide inadequate access control and do not address the vulnerabilities that arise from unmediated access to these so-called innocuous sensors. We have developed techniques to demonstrate these threats, illustrating possible attacks that use the innocuous sensors on the phone. As a solution, we present ipShield, a framework that gives users greater control over their resources at runtime so as to protect against such attacks. We have implemented ipShield by modifying the AOSP.
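The runtime control can be pictured as a per-app, per-sensor rule table (a hypothetical sketch in Python; ipShield itself is implemented inside AOSP, and the app and rule names below are our assumptions):

```python
import random

RULES = {
    # (app, sensor) -> action; unlisted pairs default to "allow"
    ("keyboard_app", "accelerometer"): "noise",   # perturb readings
    ("game_app", "gyroscope"): "allow",
    ("banking_app", "accelerometer"): "deny",
}

def read_sensor(app, sensor, raw):
    action = RULES.get((app, sensor), "allow")
    if action == "deny":
        return None                        # block the reading entirely
    if action == "noise":
        return raw + random.gauss(0, 1.0)  # degrade to thwart inference
    return raw

print(read_sensor("keyboard_app", "accelerometer", 0.42))
```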
The concept of digital rights management (DRM) has become extremely important in current mobile environments. This paper shows how partial bitstream encryption can allow the secure distribution of hardware applications, resembling the mechanisms of traditional software DRM. Building on recent developments towards the secure distribution of hardware cores, the paper demonstrates a prototypical implementation of a user mobile device supporting such distribution mechanisms. The prototype extends the Android operating system with support for hardware reconfigurability and showcases the interplay of novel security concepts enabled by hardware DRM, the advantages of a design flow based on high-level synthesis, and the opportunities provided by current software-rich reconfigurable Systems-on-Chip. Relying on this prototype, we also collected extensive quantitative results demonstrating the limited overhead incurred by the secure distribution architecture.
Cloud storage services such as Dropbox [1] and Google Drive [2] are becoming more and more popular. On the one hand, they provide users with mobility, scalability, and convenience. On the other hand, privacy issues arise when the storage is no longer fully controlled by users. Although modern encryption schemes are effective at protecting the content of data, the encryption-before-outsourcing approach has two drawbacks: first, one kind of sensitive information, the access pattern of the data, is left unprotected; second, encryption usually makes the data difficult to use. In this paper, we propose AIS (Access Indistinguishable Storage), the first client-side system that can partially conceal the access pattern of cloud storage in constant time. Besides data content, AIS can conceal the number of initial files and the length of each initial file. In the access phase after initiation, AIS can effectively conceal the behavior (read or write) and the target file of the current access. Moreover, the existence and length of each file remain confidential as long as there is no access after initiation. One application of AIS is SSE (Searchable Symmetric Encryption), which makes encrypted data searchable. Based on AIS, we propose SBA (SSE Built on AIS). To the best of our knowledge, SBA is safer than any other SSE system of the same complexity; it is the first to conceal whether the current keyword was queried before, the first to conceal whether the current operation is an addition or a deletion, and the first to support direct modification of files.
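The read/write concealment can be illustrated with a toy client (our simplification, far weaker than AIS itself; the hash-based stream cipher below stands in for proper encryption): every logical access performs one fixed-size read and one fixed-size re-encrypted write, so an observer cannot distinguish reads from writes, and block contents are always refreshed.

```python
import os, hashlib

BLOCK, KEY = 64, b"client-secret-key"          # fixed block size, client key

def keystream(nonce):
    # Hash-based keystream as a stand-in for real encryption.
    out, ctr = b"", 0
    while len(out) < BLOCK:
        out += hashlib.sha256(KEY + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:BLOCK]

def seal(plain):
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plain.ljust(BLOCK, b"\0"), keystream(nonce)))
    return nonce, ct

def unseal(nonce, ct):
    return bytes(a ^ b for a, b in zip(ct, keystream(nonce)))

store = {i: seal(b"") for i in range(4)}       # the "cloud" blocks

def access(block_id, op, data=b""):
    plain = unseal(*store[block_id])           # every access reads...
    store[block_id] = seal(data if op == "write" else plain)
    return plain                               # ...and re-writes the block
```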
Content Security Policy is a mechanism designed to prevent the exploitation of XSS – the most common high-risk web application flaw. CSP restricts which scripts can be executed by allowing developers to define valid script sources; an attacker with a content-injection flaw should not be able to force the browser to execute arbitrary malicious scripts. Currently, CSP is commonly used in conjunction with domain-based script whitelists, where the existence of a single unsafe endpoint in the whitelist effectively removes the value of the policy as a protection against XSS.
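For concreteness, the two policy styles at issue look roughly like this (illustrative header values, not from any particular deployment):

```http
# Whitelist-based policy: a single unsafe endpoint on an allowed
# domain (e.g. a JSONP endpoint) can void the XSS protection.
Content-Security-Policy: script-src 'self' https://cdn.example.com

# Nonce-based policy: only scripts carrying the fresh per-response
# nonce execute, regardless of their origin.
Content-Security-Policy: script-src 'nonce-R4nd0mV4lu3' 'strict-dynamic'
```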
As demand for wireless mobile connectivity continues to explode, cellular network infrastructure capacity requirements continue to grow. While 5G tries to address capacity requirements at the radio layer, the load on the cellular core network, the Evolved Packet Core (EPC), continues to stress the network infrastructure. Our work examines the architecture and protocols of current cellular infrastructures and the workload on the EPC. We study the challenges in dimensioning capacity and review the design alternatives to support the significant scale-up desired, even for the near future. We break down the workload on the network infrastructure into its components: signaling event transactions; database or lookup transactions; and packet processing. We quantitatively show the control-plane and data-plane load on the various components of the EPC and estimate how future 5G cellular network workloads will scale. This analysis helps us understand the scalability challenges for future 5G EPC network components. Other efforts to scale the 5G cellular network take a system view in which the control plane is separated from the data path and terminated on a centralized SDN controller, which configures the data path on a widely distributed switching infrastructure. Our analysis of the workload informs us on the feasibility of the various design alternatives and motivates our efforts to develop a clean-slate approach, called CleanG.
The success or failure of a mobile application (`app') is largely determined by user ratings. Users frequently make their app choices based on the ratings of apps in comparison with similar, often competing apps. Users also expect apps to continually provide new features while maintaining quality, or the ratings drop. At the same time, apps must also be secure, but is there a historical trade-off between security and ratings? Or are app store ratings a more all-encompassing measure of product maturity? We used static analysis tools to collect security-related metrics in 38,466 Android apps from the Google Play store. We compared the rate of an app's permission misuse, number of requested permissions, and Androrisk score against its user rating. We found that high-rated apps have statistically significantly higher security risk metrics than low-rated apps. However, the correlations are weak. This result supports the conventional wisdom that users are not factoring security risks into their ratings in a meaningful way. This could be due to several reasons, including users not placing much emphasis on security, or the typical user being unable to gauge the security risk level of the apps they use every day.
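The core comparison is a rank correlation between ratings and risk metrics; a sketch of that analysis in Python (the file and column names are our assumptions, not the paper's dataset schema):

```python
import pandas as pd
from scipy.stats import spearmanr

apps = pd.read_csv("apps.csv")     # hypothetical columns: rating, androrisk,
                                   # num_permissions, permission_misuse
for metric in ("androrisk", "num_permissions", "permission_misuse"):
    rho, p = spearmanr(apps["rating"], apps[metric])
    # Expect significant but weak correlations, as the paper reports.
    print(f"{metric}: rho={rho:.3f}, p={p:.2g}")
```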
A major component of modern vehicles is the infotainment system, which interfaces with the vehicle's drivers and passengers. Other mobile devices, such as handheld phones and laptops, can relay information to the embedded infotainment system through Bluetooth and vehicle WiFi. The ability to extract information from these systems would help forensic analysts determine what content is stored in an infotainment system. Based on the extracted data, analysts can determine which stored information is relevant to law enforcement agencies and which is non-essential when solving criminal activities relating to the vehicle itself. Overall, this would solidify the Intelligent Transport System and Vehicular Ad Hoc Network infrastructure in combating crime through the use of vehicle forensics. Additionally, determining the content of these systems will let forensic analysts know whether they can learn anything about the end user, directly or indirectly.
Named Data Networking (NDN), a clean-slate data-oriented Internet architecture aimed at replacing IP, brings many potential benefits for content distribution. Real deployment of NDN is crucial to verify this new architecture and promote academic research, but work in this field is at an early stage. Due to the fundamental difference in design paradigm between NDN and IP, deploying NDN as an IP overlay causes high overhead and inefficient transmission, typically in streaming applications. Aiming at efficient NDN streaming distribution, this paper proposes a transitional NDN/IP hybrid network architecture dubbed Centaur, which combines NDN's smartness and scalability with IP's transmission efficiency and deployment feasibility. In Centaur, the upper NDN module acts as the smart head while the lower IP module functions as the powerful feet: the head is intelligent in content retrieval and self-control, while the IP feet can transport large amounts of media data faster than if NDN were overlaid directly on IP. To evaluate the performance of our proposal, we implement a real streaming prototype in ndnSIM and compare it with both NDN-Hippo and P2P under various experimental scenarios. The results show that Centaur achieves better load balance with lower overhead, close to the performance that ideal NDN can achieve. All of this validates that our proposal is a promising choice for the incremental and compatible deployment of NDN.
Federated cloud networks are formed by federating virtual network segments from different clouds, e.g. in a hybrid cloud, into a single federated network. Such networks should be protected with a global federated cloud network security policy. The availability of network function virtualisation and service function chaining in cloud platforms offers an opportunity for implementing and enforcing such policies. In this paper we describe an approach for enforcing global security policies in federated cloud networks. The approach relies on a service manifest that specifies the global network security policy; from this manifest, configurations of the security functions for the different clouds of the federation are generated. This enables automated deployment and configuration of network security functions across the different clouds. The approach is illustrated with a case study in which communications between trusted and untrusted clouds, e.g. public clouds, are encrypted. The paper discusses future work on implementing this architecture for the OpenStack cloud platform with the service function chaining API.
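A minimal sketch of the manifest-driven generation step (the manifest schema below is hypothetical; the paper targets OpenStack and the SFC API rather than this toy format): a single global rule is expanded into per-cloud security function configurations.

```python
manifest = {
    "clouds": {"private-a": "trusted", "public-b": "untrusted"},
    "rules": [{"between": ["private-a", "public-b"], "action": "encrypt"}],
}

def generate_configs(manifest):
    configs = {cloud: [] for cloud in manifest["clouds"]}
    for rule in manifest["rules"]:
        src, dst = rule["between"]
        if rule["action"] == "encrypt":
            # Each endpoint cloud gets an IPsec-like function in its chain.
            configs[src].append({"vnf": "ipsec-gw", "peer": dst})
            configs[dst].append({"vnf": "ipsec-gw", "peer": src})
    return configs

print(generate_configs(manifest))
```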
With the popularization and development of network knowledge, network intruders are increasing and attack modes keep evolving. Intrusion detection is an active defense technology that extracts key information from the network system and quickly judges and defends against internal or external intrusions, providing real-time protection against internal attacks, external attacks, and misuse; it plays an important role in ensuring network security. However, with the diversification of intrusion techniques, traditional intrusion detection systems cannot meet the requirements of current network security, so intrusion detection implementations need to diversify. In this context, we apply neural network technology to network intrusion detection. In this paper, building on existing intrusion detection methods, we analyze the development history and present situation of intrusion detection technology and summarize the overview and architecture of intrusion detection systems. Neural network intrusion detection is divided into data acquisition, data analysis, preprocessing, intrusion behavior detection, and testing.
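The pipeline stages named above map naturally onto a small script; here is an illustrative sketch (the dataset file and feature names are our assumptions, not the paper's setup):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

data = pd.read_csv("network_flows.csv")        # data acquisition
X = data.drop(columns=["label"])               # e.g. duration, bytes, flags
y = data["label"]                              # "normal" vs "intrusion"

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
scaler = StandardScaler().fit(X_train)         # preprocessing/normalization

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300)
clf.fit(scaler.transform(X_train), y_train)    # intrusion behavior detection
print(classification_report(y_test, clf.predict(scaler.transform(X_test))))
```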
The coming years pose a challenging task for power system researchers owing to the anomalous increase in load demand on the existing system. As a result, there is a mismatch between the transmission and generation frameworks that severely pressures power utilities. In this paper a quick and efficient methodology is proposed to identify the most sensitive or susceptible regions in any power system network. The technique comprises reducing a multi-bus power system network to an equivalent two-bus network, together with an artificial neural network (ANN) architecture and training algorithm for online monitoring of the system's voltage security under multiple contingencies, which makes it more flexible. A fast voltage stability indicator, the Unified Voltage Stability Indicator (UVSI), is proposed and used as the underlying apparatus for assessing the voltage collapse point in an IEEE 30-bus power system, in combination with a feed-forward neural network (FFNN), to establish the accuracy of the system status under different contingency configurations.
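The FFNN component can be sketched as a regression from operating conditions to the stability index (illustrative only: the data below are random stand-ins, and the actual UVSI computation is not reproduced):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Stand-in training data: each row is a vector of bus loadings, and each
# target is the stability index computed offline for that operating point.
# (Random placeholders; real inputs would come from the IEEE 30-bus
# system and the UVSI computation.)
rng = np.random.default_rng(0)
X = rng.random((500, 30))
y = X.mean(axis=1) ** 2

net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000).fit(X, y)
print(net.predict(X[:3]))      # fast online estimate of the index
```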
Legacy work on correcting firewall anomalies operates on the premise of creating totally disjunctive rules. Unfortunately, such solutions are impractical from an implementation point of view, as they lead to an explosion in the number of firewall rules. In related previous work, we proposed a new approach for performing assisted corrective actions which, in contrast to the state-of-the-art family of radically disjunctive approaches, does not lead to a prohibitive increase in configuration size. In this sense, we allow relaxation in the correction process by clearly distinguishing between constructive anomalies that can be tolerated and destructive anomalies that should be systematically fixed. However, a main disadvantage of that approach was its dependency on guided input from the administrator, which conversely introduces a new risk of human error. To circumvent this disadvantage, we present in this paper a Firewall Policy Query Engine (FPQE) that renders the whole anomaly resolution process fully automated, requiring no human intervention. Instead of prompting the administrator to insert corrective actions in the proper order, FPQE executes queries against a high-level firewall policy. We have implemented FPQE, and the first results of integrating it with our legacy anomaly resolver are promising.
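The kind of question such an engine answers can be sketched as follows (our illustration, not FPQE's query language; matching here is exact-field only): find rules that can never fire because an earlier rule with a different action covers them, i.e. destructive anomalies.

```python
def covers(a, b):
    """Does rule a match every packet that rule b matches?"""
    return all(a[f] in ("*", b[f]) for f in ("src", "dst", "port"))

def shadowed(rules):
    return [b["id"] for i, b in enumerate(rules)
            for a in rules[:i]
            if covers(a, b) and a["action"] != b["action"]]

rules = [
    {"id": 1, "src": "10.0.0.0/8", "dst": "*", "port": "80", "action": "deny"},
    {"id": 2, "src": "10.0.0.0/8", "dst": "*", "port": "80", "action": "allow"},
]
print(shadowed(rules))   # -> [2]: rule 2 can never fire (destructive anomaly)
```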
The separation of network control from devices in Software Defined Networking (SDN) allows for centralized implementation and management of security policies in a cloud computing environment. The ease of programmability also makes SDN a great platform for implementing various initiatives that involve application deployment, dynamic topology changes, and decentralized network management in a multi-tenant data center environment. Dynamic changes of network topology, or host reconfiguration, in such networks may require corresponding changes to the flow rules in the SDN-based cloud environment. Verifying that these new flow policies adhere to the organizational security policies, and ensuring a conflict-free environment, is especially challenging. In this paper, we extend work on rule conflicts from the traditional environment to an SDN environment, introducing a new classification to describe cross-layer conflicts. Our framework ensures that in any SDN-based cloud, flow rules do not conflict at any layer, thereby ensuring that changes to the environment do not lead to unintended consequences. We demonstrate the correctness, feasibility, and scalability of our framework through a proof-of-concept prototype.
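A cross-layer conflict can be illustrated in a few lines (our simplification, not the paper's classification scheme): an SDN flow rule that forwards traffic which the organizational security policy denies is flagged.

```python
security_policy = [   # organizational layer
    {"src": "tenant-a", "dst": "db-net", "action": "deny"},
]
flow_rules = [        # SDN layer
    {"match": {"src": "tenant-a", "dst": "db-net"}, "action": "forward"},
    {"match": {"src": "tenant-b", "dst": "web-net"}, "action": "forward"},
]

def cross_layer_conflicts(flows, policy):
    # A flow rule conflicts if a higher-layer policy denies what it forwards.
    return [f for f in flows if f["action"] == "forward" and any(
        p["action"] == "deny"
        and p["src"] == f["match"]["src"]
        and p["dst"] == f["match"]["dst"]
        for p in policy)]

print(cross_layer_conflicts(flow_rules, security_policy))  # flags rule 1
```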