Biblio
Customer Edge Switching (CES) is an experimental Internet architecture that provides reliable and resilient multi-domain communications. It provides resilience against security threats because domains negotiate inbound and outbound policies before admitting new traffic. As CES and its signalling protocols are being prototyped, there is a need for independent testing of the CES architecture. Hence, our research goal is to develop an automated test framework that CES protocol designers and early adopters can use to improve the architecture. The test framework includes security, functional, and performance tests. In this paper, we present the security portion of this automated test framework, built with the Robot Framework and guided by STRIDE analysis. By evaluating sample test scenarios, we show that the Robot Framework and our CES test suite have prompted productive discussions about this new architecture, in addition to serving as clear, easy-to-read documentation. Our research also confirms that test automation can be useful for improving new protocol architectures and validating their implementation.
The volume of image and multimedia data produced by individuals and enterprises is increasing every day. Motivated by advances in cloud computing, there is a growing need to outsource such computationally intensive image feature detection tasks to the cloud for its economical computing resources and on-demand ubiquitous access. However, concerns over the effective protection of private image and multimedia data when outsourcing it to cloud platforms have become the major barrier impeding the further application of cloud computing techniques to massive amounts of image and multimedia data. To address this fundamental challenge, we study the state-of-the-art image feature detection algorithms and focus on the Scale-Invariant Feature Transform (SIFT), one of the most important local feature detection algorithms, which has been broadly employed in areas including object recognition, image matching, and robotic mapping. We analyze and model the privacy requirements of outsourcing SIFT computation and propose the Secure Scale-Invariant Feature Transform (SecSIFT), a high-performance privacy-preserving SIFT feature detection system. In contrast to previous works, the proposed design is not restricted by the efficiency limitations of current homomorphic encryption schemes. In our design, we decompose and distribute the computation procedures of the original SIFT algorithm to a set of independent, cooperating cloud servers and keep the outsourced computation procedures as simple as possible to avoid utilizing computationally expensive homomorphic encryption. The proposed SecSIFT enables implementation with practical computation and communication complexity. Extensive experimental results demonstrate that SecSIFT performs comparably to the original SIFT on image benchmarks while preserving privacy in an efficient way.
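The abstract does not spell out the decomposition, but the key enabler is that SIFT's heavy front end (Gaussian scale-space filtering) is linear, so it commutes with additive secret sharing across non-colluding servers. The sketch below is a minimal illustration of that idea under our own assumptions (an integer binomial kernel standing in for the Gaussian, arithmetic modulo 2^16, three servers); it is not the paper's actual protocol.

```python
import numpy as np

MOD = 2 ** 16                       # share arithmetic is done modulo 2^16
KERNEL = np.array([1, 4, 6, 4, 1])  # integer binomial filter, a stand-in for the Gaussian

def make_shares(image, n_servers, rng):
    """Split an image into additive shares: each individually random, summing to the image."""
    shares = [rng.integers(0, MOD, image.shape, dtype=np.int64) for _ in range(n_servers - 1)]
    shares.append((image - sum(shares)) % MOD)
    return shares

def server_blur(share):
    """Each server convolves its own share; linearity keeps the shares consistent."""
    out = np.apply_along_axis(lambda r: np.convolve(r, KERNEL, mode="same"), 1, share)
    out = np.apply_along_axis(lambda c: np.convolve(c, KERNEL, mode="same"), 0, out)
    return out % MOD

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (32, 32), dtype=np.int64)
blurred_shares = [server_blur(s) for s in make_shares(img, n_servers=3, rng=rng)]
recovered = sum(blurred_shares) % MOD                # the client just adds the results
assert np.array_equal(recovered, server_blur(img))   # identical to filtering in the clear
```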
With the advent of social networks and cloud computing, the amount of multimedia data produced and communicated within social networks is rapidly increasing. In the meantime, social networking platforms based on cloud computing have made multimedia big data sharing in social networks easier and more efficient. The growth of social multimedia, as demonstrated by social networking sites such as Facebook and YouTube, combined with advances in multimedia content analysis, underscores potential risks for malicious use, such as illegal copying, piracy, plagiarism, and misappropriation. Therefore, secure multimedia sharing and traitor tracing have become critical and urgent issues in social networks. In this article, a joint fingerprinting and encryption (JFE) scheme based on the tree-structured Haar wavelet transform (TSHWT) is proposed with the purpose of protecting media distribution in social network environments. The motivation is to map the hierarchical community structure of social networks onto the tree structure of the Haar wavelet transform for fingerprinting and encryption. First, a fingerprint code is produced using social network analysis (SNA). Second, the content is decomposed according to the structure of the fingerprint code by the TSHWT. Then, the content is fingerprinted and encrypted in the TSHWT domain. Finally, the encrypted contents are delivered to users via hybrid multicast-unicast. The proposed method is, to the best of our knowledge, the first scalable JFE method for fingerprinting and encryption in the TSHWT domain using SNA. Combining fingerprinting with encryption using SNA not only provides a double layer of protection for social multimedia sharing in social network environments but also avoids the big-data superposition effect. Theoretical analysis and experimental results show the effectiveness of the proposed JFE scheme.
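For readers unfamiliar with the transform underlying the scheme, here is a minimal sketch of a tree-structured Haar decomposition: an ordinary Haar step splits a signal into average and detail bands, and a tree (in the paper, derived from the community structure and fingerprint code) dictates which bands are split further. The tree used below is an arbitrary example, not one produced by SNA.

```python
import numpy as np

def haar_step(x):
    """One orthonormal Haar step: pairwise averages and differences."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def tshwt(x, tree):
    """Tree-structured Haar transform: `tree` is None for a leaf band, or a
    pair (avg_subtree, det_subtree) telling which sub-bands to split further."""
    if tree is None or len(x) < 2:
        return [np.asarray(x, dtype=float)]
    avg, det = haar_step(x)
    return tshwt(avg, tree[0]) + tshwt(det, tree[1])

signal = np.arange(8.0)
tree = ((None, None), None)   # example: split the average band one level deeper
bands = tshwt(signal, tree)
print([b.tolist() for b in bands])
# Each step is orthonormal, so energy is preserved and the transform is invertible:
assert np.isclose(sum((b ** 2).sum() for b in bands), (signal ** 2).sum())
```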
Utility networks are part of every nation's critical infrastructure, and their protection is now seen as a high-priority objective. In this paper, we propose a threat awareness architecture for critical infrastructures, which we believe will raise security awareness and increase resilience in utility networks. We first describe an investigation of trends and threats that may pose security risks in utility networks. This was performed on the basis of a viewpoint approach that is capable of identifying both technical and non-technical issues (e.g., the behaviour of humans). The result of our analysis indicated that utility networks are affected strongly by technological trends, but that humans constitute an important threat to them. This provided evidence and confirmed that the protection of utility networks is a multi-variable problem and thus requires the examination of information stemming from various viewpoints of a network. In order to accomplish our objective, we propose a systematic threat awareness architecture in the context of a resilience strategy, which ultimately aims at providing and maintaining an acceptable level of security and safety in critical infrastructures. As a proof of concept, we partially demonstrate, via a case study, the application of the proposed threat awareness architecture, in which we examine the potential impact of social engineering attacks on a European utility company.
Multimedia authentication is an integral part of multimedia signal processing in many real-time and security-sensitive applications, such as video surveillance. In such applications, a full-fledged video digital rights management (DRM) mechanism is not applicable due to the real-time requirement and the difficulty of incorporating complicated license/key management strategies. This paper investigates the potential of multimedia authentication from a new angle, by employing hardware-based security primitives such as physical unclonable functions (PUFs). We show that the hardware security approach is not only capable of accomplishing authentication for both the hardware device and the multimedia stream but, more importantly, introduces minimal performance, resource, and power overhead. We justify our approach using a prototype PUF implementation on Xilinx FPGA boards. Our experimental results on real hardware demonstrate the high security and low overhead that hardware security approaches bring to multimedia authentication.
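As background on the mechanism, a PUF is typically authenticated through challenge-response pairs (CRPs) enrolled while the device is trusted. The snippet below is a software stand-in, with a keyed hash playing the role of the chip's unique physical behaviour; a real PUF is noisy and physically unclonable, and the paper's FPGA design and stream binding are not modelled here.

```python
import hashlib, secrets

class ToyPUF:
    """Software stand-in for a physical unclonable function: the random secret
    below plays the role of uncopyable manufacturing variation (noise ignored)."""
    def __init__(self):
        self._physics = secrets.token_bytes(32)

    def respond(self, challenge: bytes) -> bytes:
        return hashlib.sha256(self._physics + challenge).digest()

# Enrollment: while the device is trusted, the verifier records some CRPs.
device = ToyPUF()
crp_db = {}
for _ in range(10):
    c = secrets.token_bytes(16)
    crp_db[c] = device.respond(c)

# Field authentication: spend one unused challenge; only the genuine
# hardware can reproduce the expected response.
challenge, expected = crp_db.popitem()
print("device authentic:", device.respond(challenge) == expected)
```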
Cyber-attacks are cheap, easy to conduct, and often pose little risk in terms of attribution, but their impact can be lasting. Attribution is low because tracing cyber-attacks is primitive in the current network architecture. Moreover, even when attribution is known, the absence of enforcement provisions in international law makes cyber-attacks tough to litigate, and hence attribution is hardly a deterrent. Rather than attributing attacks, we can instead view cyber-attacks as societal events associated with social, political, economic, and cultural (SPEC) motivations. Because it is possible to observe SPEC motives on the internet, social media data could be valuable for understanding cyber-attacks. In this research, we use sentiment in Twitter posts to observe country-to-country perceptions, and Arbor Networks data to build a ground truth of country-to-country DDoS cyber-attacks. Using this dataset, this research makes three important contributions: a) We evaluate the impact of heightened sentiment towards a country on the trend of cyber-attacks received by that country. We find that, for some countries, the probability of attacks increases by up to 27% while they experience negative sentiment from other nations. b) Using the cyber-attack trend and the sentiment trend, we build a decision tree model to find attacks that could be related to extreme sentiments. c) To verify our model, we describe three examples in which cyber-attacks follow increased tension between nations, as perceived in social media.
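As a hedged illustration of contribution (b), the sketch below fits a decision tree to synthetic data shaped like the claim in (a) (attack probability rising by up to 27% under high, rising negative sentiment). The feature names and data are ours; the paper's features come from Twitter and Arbor Networks.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 2000
neg_sent = rng.uniform(0, 1, n)    # weekly negative-sentiment intensity toward a country
sent_trend = rng.normal(0, 1, n)   # week-over-week change in that sentiment
# Synthetic ground truth shaped like the paper's finding: attacks become up to
# 27% more likely when negative sentiment toward the country is high and rising.
p_attack = 0.10 + 0.27 * ((neg_sent > 0.7) & (sent_trend > 0))
attacked = rng.uniform(0, 1, n) < p_attack

X = np.column_stack([neg_sent, sent_trend])
X_tr, X_te, y_tr, y_te = train_test_split(X, attacked, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(export_text(tree, feature_names=["neg_sent", "sent_trend"]))
print("held-out accuracy:", round(tree.score(X_te, y_te), 3))
```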
Cloud computing is revolutionizing many IT ecosystems by offering scalable computing resources that are easy to configure, use, and inter-connect. However, this model has always been viewed with some suspicion, as it raises a wide range of security and privacy issues that need to be negotiated. This research focuses on the construction of a trust layer in cloud computing to build a trust relationship between cloud service providers and cloud users. In particular, we address the fact that container-based virtualisation provides weaker isolation than traditional VMs because of the shared use of the OS kernel and system components. We therefore build a trust layer to address this weaker isolation whilst maintaining the performance and scalability of the approach. This paper has two objectives. Firstly, we propose a security system to protect containers from other guests through the addition of a Role-based Access Control (RBAC) model and the provision of strict data protection and security. Secondly, we provide a stress test using isolation benchmarking tools to evaluate the isolation of containers in terms of performance.
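As a minimal sketch of the first objective, the snippet below shows an RBAC-style check of the kind a container trust layer might perform before allowing an operation. The roles, permissions, and ownership rule are hypothetical examples, not the paper's actual policy or enforcement point.

```python
from dataclasses import dataclass

# Hypothetical role-to-permission map for container operations.
ROLE_PERMS = {
    "tenant":  {"container:start", "container:stop"},
    "auditor": {"container:inspect"},
    "admin":   {"container:start", "container:stop", "container:inspect", "volume:mount"},
}

@dataclass
class Subject:
    name: str
    roles: frozenset

def authorize(subject: Subject, action: str, owner: str) -> bool:
    """Grant the action only if some role permits it, and confine non-admin
    subjects to containers they own (the isolation-strengthening rule)."""
    permitted = any(action in ROLE_PERMS.get(role, set()) for role in subject.roles)
    confined = owner == subject.name or "admin" in subject.roles
    return permitted and confined

alice = Subject("alice", frozenset({"tenant"}))
print(authorize(alice, "container:start", owner="alice"))  # True
print(authorize(alice, "container:start", owner="bob"))    # False: another tenant's container
print(authorize(alice, "volume:mount", owner="alice"))     # False: role lacks the permission
```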
The extremely rapid development of the Internet of Things is bringing growing attention to information security. The realization of cryptographically strong pseudo-random number generators (PRNGs) is crucial for securing sensitive data, and they play an important role in cryptography and network security applications. In this paper, we carry out a comparative study of two pseudo-chaotic number generators (PCNGs). The first pseudo-chaotic number generator (PCNG1) is based on two nonlinear recursive filters of order one, using a skew tent map (STmap) and a piecewise linear chaotic map (PWLCmap) as nonlinear functions. The second pseudo-chaotic number generator (PCNG2) consists of four chaotic maps (PWLC maps, an ST map, and a logistic map) coupled by means of a binary diffusion matrix [D]. A comparative analysis of the two PCNGs is carried out in terms of computation time (generation time, bit rate, and number of cycles needed to generate one byte) and security.
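For concreteness, the two maps have simple closed forms, shown below with a deliberately simplified byte extractor. The real PCNGs use recursive filters and the coupling matrix [D]; the parameters here are arbitrary, so this toy generator is illustrative only and not cryptographically vetted.

```python
def skew_tent(x, p):
    """Skew tent map on [0, 1] with control parameter p in (0, 1)."""
    return x / p if x < p else (1 - x) / (1 - p)

def pwlc(x, p):
    """Piecewise linear chaotic map on [0, 1] with p in (0, 0.5)."""
    if x < p:
        return x / p
    if x < 0.5:
        return (x - p) / (0.5 - p)
    return pwlc(1 - x, p)  # the map is symmetric about 0.5

def toy_pcng(n_bytes, x=0.437, y=0.291, p1=0.3, p2=0.2):
    """Iterate both maps and XOR low-order bits of the two orbits into bytes."""
    out = bytearray()
    for _ in range(n_bytes):
        x, y = skew_tent(x, p1), pwlc(y, p2)
        out.append((int(x * 2**24) ^ int(y * 2**24)) & 0xFF)
    return bytes(out)

print(toy_pcng(16).hex())
```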
In this paper, we introduce an optical network with cross-layer security, which can enhance security performance. In the transmitter, the user's data is first encrypted; physical-layer encryption is then implemented through optical encoding. In the receiver, after the corresponding optical decoding process, a decryption algorithm is used to restore the user's data. We also evaluate the security performance quantitatively.
The serializability of transactions is the most important property for ensuring that transactions are processed correctly. Several issues must be considered when dealing with the processing of a set of interleaved transactions in a transaction-based environment; in most cases, these issues are due to concurrent access to the same data by several transactions or to dependency relationships between running (sub)transactions. Serializability can be compromised when some running transactions are marked as malicious. For that purpose, we propose an intrusion-tolerant scheme to ensure the continuity of the running transactions. A transaction dependency graph is used by the CDC to make decisions concerning the set of data items and transactions that are threatened by a malicious activity. We explain how to use the proposed scheme and illustrate its behavior and efficiency against compromised transactions in a cloud-of-databases environment.
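The role of the dependency graph can be illustrated in a few lines: once a transaction is flagged as malicious, everything transitively dependent on it is threatened and must be re-examined or rolled back. The graph below is a made-up example; the abstract does not define the CDC's exact decision rule.

```python
from collections import deque

# deps[T] = transactions that read data written by T (hypothetical example).
deps = {"T1": ["T2", "T3"], "T2": ["T4"], "T3": [], "T4": ["T5"], "T5": []}

def threatened_by(malicious, deps):
    """Breadth-first search over the dependency graph: every transaction that
    directly or transitively depends on the malicious one is threatened."""
    seen, queue = set(), deque([malicious])
    while queue:
        t = queue.popleft()
        for successor in deps.get(t, []):
            if successor not in seen:
                seen.add(successor)
                queue.append(successor)
    return seen

print(threatened_by("T1", deps))  # {'T2', 'T3', 'T4', 'T5'}
```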
This article deals with the estimation of magnet losses in a permanent-magnet motor inserted in a nut-runner. This type of machine has interesting features such as being two-pole, slotless, and running at high speed (30,000 rpm). Two analytical models were chosen from the literature. A numerical estimation of the losses with the 2D finite element method (FEM) was carried out. A detailed investigation of the effect of simulation settings (e.g., mesh size, time step, remanent flux density in the magnet, superposition of the losses) was performed. Finally, 3D FEM calculations of the losses were also run in order to compare the computed losses with both the analytical and the 2D FEM results. The estimation of the losses focuses on a frequency range between 10 and 100 kHz.
The purpose of this research is to propose architecture-driven penetration testing equipped with a software reverse and forward engineering process. Although the importance of architectural risk analysis has been emphasized in software security, no methodology shows how to discover the architecture and abuse cases of a given insecure legacy system and how to modernize it into a secure target system. For this purpose, we propose an architecture-driven penetration testing methodology: the 4+1 architectural views of the given insecure legacy system are documented through a reverse engineering process to discover program paths for vulnerabilities. Vulnerabilities are then identified using the discovered architecture abuse cases, and countermeasures are proposed for the identified vulnerabilities. As a case study, a telecommunication company's Identity Access Management (IAM) system is used for discovering its software architecture, identifying the vulnerabilities of its architecture, and providing possible countermeasures. Our empirical results show that functional suggestions are relatively easy to follow up on and less time-consuming to fix, whereas architectural suggestions are more complicated to follow up on, even though they guarantee better security and take full advantage of the OAuth 2.0 supporting communities.
This paper presents a multi-year undergraduate computing capstone project that holistically contributes to the development of cybersecurity knowledge and skills in non-computing high school and college students. We describe the student-built Vulnerable Web Server application, which is a system that packages instructional materials and pre-built virtual machines to provide lessons on cybersecurity to non-technical students. The Vulnerable Web Server learning materials have been piloted at several high schools and are now integrated into multiple security lessons in an intermediate, general education information technology course at the United States Military Academy. Our paper interweaves a description of the Vulnerable Web Server materials with the senior capstone design process that allowed it to be built by undergraduate information technology and computer science students, resulting in a valuable capstone learning experience. Throughout the paper, a call is made for greater emphasis on educating the non-technical user.
Information-Centric Networking (ICN) paradigms nicely fit the world of wireless sensors, whose devices have tight constraints. In this poster, we compare two alternative designs for the secure association of new IoT devices in existing ICN deployments, based on asymmetric and symmetric cryptography respectively. While the security properties of both approaches are equivalent, an interesting trade-off arises between properties of the protocol and properties of its implementation on current IoT boards. Indeed, while the asymmetric-key approach incurs a lower traffic overhead (of about 30%), we find that its implementation is significantly more energy- and time-consuming due to the cost of cryptographic operations (it requires up to 41x more energy and 8x more time).
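The time gap behind this trade-off is easy to reproduce in miniature on a workstation by comparing one symmetric authentication primitive against one asymmetric signature. The sketch below (which needs the third-party `cryptography` package) uses HMAC-SHA256 vs. ECDSA-P256 as stand-ins; the poster's measurements were taken on constrained IoT boards, so absolute ratios will differ.

```python
import hashlib, hmac, secrets, time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

msg = secrets.token_bytes(64)
sym_key = secrets.token_bytes(32)
priv = ec.generate_private_key(ec.SECP256R1())

def per_op_us(fn, n=300):
    """Average wall-clock microseconds per call."""
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - start) / n * 1e6

t_mac = per_op_us(lambda: hmac.new(sym_key, msg, hashlib.sha256).digest())
t_sig = per_op_us(lambda: priv.sign(msg, ec.ECDSA(hashes.SHA256())))
print(f"HMAC-SHA256: {t_mac:.1f} us/op, ECDSA-P256 sign: {t_sig:.1f} us/op "
      f"(~{t_sig / t_mac:.0f}x slower)")
```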
Insider threats remain a significant problem within organizations, especially as industries that rely on technology continue to grow. Traditionally, research has focused on the malicious insider: someone who intentionally seeks to perform a malicious act against the organization that trusts him or her. While this research is important, organizations are more commonly the victims of non-malicious insiders. These are trusted employees who are not seeking to cause harm to their employer; rather, they misuse systems, either intentionally or unintentionally, in ways that result in some harm to the organization. In this paper, we look at both by developing and validating instruments to measure the behavior and circumstances of a malicious insider versus a non-malicious insider. We found that in many respects their psychological profiles are very similar. The results are also consistent with other research on the malicious insider from a personality standpoint. We expand on this and also find that trait negative affect, in both its higher-order dimension and its lower-order dimensions, is highly correlated with insider threat behavior and circumstances. This paper makes five significant contributions: 1) development and validation of survey instruments designed to measure the insider threat; 2) comparison of the malicious insider with the non-malicious insider; 3) inclusion of trait affect as part of the psychological profile of an insider; 4) inclusion of a measure for financial well-being; and 5) the successful use of survey research to examine the insider threat problem.
The demand for trained cybersecurity operators is growing more quickly than traditional higher-education programs can fill it. At the same time, unemployment among returning military veterans has become a nationally discussed problem. We describe the design and launch of New Skills for a New Fight (NSNF), an intensive, one-year program to train military veterans for the cybersecurity field. This non-traditional program, which leverages the experience that veterans gained in military service, includes recruitment and selection, a base of knowledge in the form of four university courses taken in a simultaneous cohort mode, a period of hands-on cybersecurity training, industry certifications, and a practical internship in a Security Operations Center (SOC). Twenty veterans entered this pilot program in January 2016 and will complete it in less than a year's time. Initially funded by a global financial services company, the program provides veterans with an expense-free preparation for an entry-level cybersecurity job.
Software is shifting from large single-server systems built for powerful computers to multi-server, distributed systems. This change introduces a new challenge for software quality measurement, since current analysis methods for single-server software cannot observe and assess the correlation among components on different nodes. In this paper, a new dynamic cohesion approach is proposed for distributed systems. We extend the Calling Network model to distributed systems by differentiating methods of components deployed on different nodes. Two new cohesion metrics are proposed to describe the correlation at the component level, extending the cohesion metric of single-server software systems. Experiments conducted on a distributed system, the Netflix RSS Reader, show how to trace the various system functions accomplished across three nodes, how to abstract dynamic behaviors among nodes using our model, and how to evaluate software cohesion on a distributed system.
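The abstract does not give the metric formulas, but one plausible reading, sketched below under our own assumptions, is the fraction of a component's observed calls that stay inside the component, with the distributed extension additionally counting internal calls that cross node boundaries.

```python
# Run-time calling-network edges: (caller, callee), each tagged (component, node).
# The trace below is a made-up example in the spirit of the Netflix RSS Reader study.
calls = [
    (("Feed", "node1"), ("Feed", "node1")),
    (("Feed", "node1"), ("Store", "node2")),
    (("UI",   "node3"), ("Feed", "node1")),
    (("Feed", "node1"), ("Feed", "node2")),  # same component, different node
]

def cohesion(calls, component):
    """Return (cohesion, cross_node): the share of the component's calls that
    are internal, and how many of those internal calls span two nodes."""
    internal = cross_node = total = 0
    for (c1, n1), (c2, n2) in calls:
        if component in (c1, c2):
            total += 1
            if c1 == c2 == component:
                internal += 1
                cross_node += n1 != n2
    return internal / total, cross_node

print(cohesion(calls, "Feed"))  # (0.5, 1)
```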
Pagination problems deal with questions around transforming a source text stream into a formatted document by dividing it up into individual columns and pages, including adding auxiliary elements that have some relationship to the source stream data but may allow a certain amount of variation in placement (such as figures or footnotes). Traditionally the pagination problem has been approached by separating it into one of micro-typography (e.g., breaking text into paragraphs, also known as hyphenation and justification, h&j) and one of macro-typography (e.g., taking a galley of already formatted paragraphs and breaking them into columns and pages) without much interaction between the two. While early solutions for both problem spaces used simple greedy algorithms, Knuth and Plass introduced in the '80s a global-fit algorithm for line breaking that optimizes the breaks across the whole paragraph [1]. This algorithm was implemented in TeX'82 [2] and has since kept its crown as the best available solution for this space. However, for macro-typography there has been no (successful) attempt to provide globally optimized page layout: all systems to date (including TeX) use greedy algorithms for pagination. Various problems in this area have been researched (e.g., [3,4,5,6]) and the literature documents some prototype development, but none of these prototypes have been made widely available to the research community or ever made it into a generally usable and publicly available system. This paper presents a framework for a global-fit algorithm for page breaking based on the ideas of Knuth/Plass. It is implemented in such a way that it is directly usable, without additional executables, with any modern TeX installation. It can therefore serve as a test bed for future experiments and extensions in this space. At the same time, a cleaned-up version of the current prototype has the potential to become a production tool for the huge number of TeX users worldwide. The paper also discusses two already implemented extensions that increase the flexibility of the pagination process: the ability to automatically consider existing flexibility in paragraph length (by considering paragraph variations with different numbers of lines [7]) and the concept of running the columns on a double spread a line long or short. It concludes with a discussion of the overall approach, its inherent limitations, and directions for future research. [1] D. E. Knuth and M. F. Plass. Breaking paragraphs into lines. Software: Practice and Experience, 11(11):1119-1184, Nov. 1981. [2] D. E. Knuth. TeX: The Program, volume B of Computers and Typesetting. Addison-Wesley, Reading, MA, USA, 1986. [3] A. Brüggemann-Klein, R. Klein, and S. Wohlfeil. On the pagination of complex documents. In Computer Science in Perspective, pages 49-68. Springer-Verlag New York, Inc., New York, NY, USA, 2003. [4] C. Jacobs, W. Li, and D. H. Salesin. Adaptive document layout via manifold content. In Second International Workshop on Web Document Analysis (WDA2003), Liverpool, UK, 2003. [5] A. Holkner. Global multiple objective line breaking. Master's thesis, School of Computer Science and Information Technology, RMIT University, Melbourne, Victoria, Australia, 2006. [6] P. Ciancarini, A. Di Iorio, L. Furini, and F. Vitali. High-quality pagination for publishing. Software: Practice and Experience, 42(6):733-751, June 2012. [7] T. Hassan and A. Hunter. Knuth-Plass revisited: Flexible line-breaking for automatic document layout. In Proceedings of the 2015 ACM Symposium on Document Engineering, DocEng '15, pages 17-20, New York, NY, USA, 2015.
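To make the contrast with greedy pagination concrete, here is a stripped-down global-fit page breaker in the Knuth/Plass spirit: dynamic programming over all feasible break points, minimising total squared slack. Real pagination adds stretchable glue, penalties, floats, and spread balancing, none of which is modelled in this sketch.

```python
def optimal_breaks(heights, page_height):
    """Global-fit pagination: choose page breaks minimising the total badness
    (sum over pages of squared unused space), instead of greedily filling
    each page. `heights` are block heights (lines, figures, ...)."""
    n = len(heights)
    INF = float("inf")
    best = [0.0] + [INF] * n      # best[i] = minimal cost to paginate items [0, i)
    prev = [0] * (n + 1)
    for i in range(n):
        if best[i] == INF:
            continue
        used = 0
        for j in range(i + 1, n + 1):
            used += heights[j - 1]
            if used > page_height:
                break
            slack = 0 if j == n else (page_height - used) ** 2  # last page is free
            if best[i] + slack < best[j]:
                best[j], prev[j] = best[i] + slack, i
    # Recover the chosen break positions.
    breaks, j = [], n
    while j > 0:
        breaks.append(j)
        j = prev[j]
    return breaks[::-1]

print(optimal_breaks([3, 2, 4, 1, 5, 2, 2], page_height=8))  # [2, 4, 6, 7]
```

Here the greedy breaker would overfill early pages and pay for it later, while the dynamic program trades a little slack on page one for a better total, which is exactly the behaviour Knuth/Plass exploit for line breaking.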
MPI includes all processes in MPI_COMM_WORLD; this is untenable for reasons of scale, resiliency, and overhead. This paper offers a new approach that extends MPI with a new concept called Sessions, making two key contributions: a tighter integration with the underlying runtime system, and a scalable route to communication groups. This is a fundamental change in how we organise and address MPI processes: it removes well-known scalability barriers by no longer requiring the global communicator MPI_COMM_WORLD.
Information-Centric Networking (ICN) is an emerging networking paradigm that focuses on content distribution and aims at replacing the current IP stack. Implementations of ICN have demonstrated its advantages over IP in terms of network performance and resource requirements. Because of these advantages, ICN is also considered a good network paradigm candidate for the Internet-of-Things (IoT), especially in scenarios involving resource-constrained devices. In this paper, we propose OnboardICNg, the first secure protocol for onboarding (authenticating and authorizing) IoT devices in ICN mesh networks. OnboardICNg can securely onboard resource-constrained devices into an existing IoT network, outperforming the authentication protocol selected for the ZigBee-IP specification: EAP-PANA, i.e., the Protocol for Carrying Authentication for Network Access (PANA) combined with the Extensible Authentication Protocol (EAP). In particular, we show that, compared with EAP-PANA, OnboardICNg reduces communication and energy consumption by 87% and 66%, respectively.
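To give a flavour of symmetric-key onboarding, the exchange below sketches a two-nonce HMAC challenge-response with session-key derivation from a pre-provisioned key. The message layout and labels are our own simplification; OnboardICNg's actual protocol additionally involves an authentication/authorization service and ICN naming.

```python
import hashlib, hmac, secrets

PSK = secrets.token_bytes(32)  # pre-provisioned on the device, known to the network

def tag(key, *parts):
    """HMAC-SHA256 over label-separated fields (toy framing)."""
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

# 1. Joining device announces itself with a fresh nonce.
dev_id, n_dev = b"sensor-42", secrets.token_bytes(16)
# 2. Gateway replies with its own nonce and a proof of PSK knowledge.
n_gw = secrets.token_bytes(16)
gw_proof = tag(PSK, b"gw", dev_id, n_dev, n_gw)
# 3. Device verifies the gateway, proves itself, and derives the session key.
assert hmac.compare_digest(gw_proof, tag(PSK, b"gw", dev_id, n_dev, n_gw))
dev_proof = tag(PSK, b"dev", dev_id, n_dev, n_gw)
session_key = tag(PSK, b"sess", n_dev, n_gw)
# 4. Gateway verifies the device and derives the same key: device onboarded.
assert hmac.compare_digest(dev_proof, tag(PSK, b"dev", dev_id, n_dev, n_gw))
print("session key:", session_key.hex()[:16], "...")
```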
Recent years have witnessed a flourishing of hands-on cybersecurity labs and competitions. The information technology (IT) education community has recognized their significant role in boosting students' interest in security and enhancing their security knowledge and skills. Compared to the focus on individual-based education materials, much less attention has been paid to the development of tools and materials suitable for team-based security practices, which, however, prevail in real-world environments. One major bottleneck is the lack of suitable platforms for this type of practice in the IT education community. In this paper, we propose a low-cost, team-oriented cybersecurity practice platform called Platoon. The Platoon platform allows for quickly and automatically creating one or more virtual networks that mimic real-world corporate networks using a regular computer. The virtual environment created by Platoon is suitable for cybersecurity labs, competitions, and projects. The performance data and user feedback collected from our cyber-defense exercises indicate that Platoon is practical and useful for enhancing students' security learning outcomes.
Friends, family and colleagues at work may repeatedly observe how their peers unlock their smartphones. These "insiders" may combine multiple partial observations to form a hypothesis of a target's secret. This changing landscape requires that we update the methods used to assess the security of unlocking mechanisms against human shoulder surfing attacks. In our paper, we introduce a methodology to study shoulder surfing risks in the insider threat model. Our methodology dissects the authentication process into minimal observations by humans. Further processing is based on simulations. The outcome is an estimate of the number of observations needed to break a mechanism. The flexibility of this approach benefits the design of new mechanisms. We demonstrate the application of our methodology by performing an analysis of the SwiPIN scheme published at CHI 2015. Our results indicate that SwiPIN can be defeated reliably by a majority of the population with as few as 6 to 11 observations.
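The simulation step can be illustrated with a toy observation model: if each partial observation independently reveals each position of a PIN with some probability, Monte Carlo sampling estimates how many observations an insider needs before the whole secret is known. The model and parameters below are ours and are much simpler than the paper's per-mechanism analysis of SwiPIN.

```python
import random

def observations_needed(pin_len=4, p_reveal=0.6, trials=20_000, seed=1):
    """Monte Carlo estimate of the mean number of observations until every
    digit position has been seen at least once (toy insider model: each
    observation reveals each position independently with p_reveal)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        known, obs = set(), 0
        while len(known) < pin_len:
            obs += 1
            known.update(i for i in range(pin_len) if rng.random() < p_reveal)
        total += obs
    return total / trials

print(f"mean observations to full recovery: {observations_needed():.2f}")
```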