Publications of Interest
The Publications of Interest section contains bibliographical citations, abstracts if available, links on specific topics, and research problems of interest to the Science of Security community.
How recent are these publications?
These bibliographies include recent scholarly research on topics that have been presented or published within the past year. Some represent updates from work presented in previous years; others are new topics.
How are topics selected?
The specific topics are selected from materials that have been peer reviewed and presented at SoS conferences or referenced in current work. The topics are also chosen for their usefulness for current researchers.
How can I submit or suggest a publication?
Researchers willing to share their work are welcome to submit a citation, abstract, and URL for consideration and posting, and to identify additional topics of interest to the community. Researchers are also encouraged to share this request with their colleagues and collaborators.
Submissions and suggestions may be sent to: news@scienceofsecurity.net
(ID#: 16-11191)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests for removal of the links or modifications to specific citations via email to news@scienceofsecurity.net. Please include the ID# of the specific citation in your correspondence.
APIs 2015 (Part 1)
Application Programming Interfaces (APIs) define the interfaces to systems or modules. As code is reused, more and more APIs are modified from earlier code. For the Science of Security community, the problems of compositionality and resilience are directly relevant. The research work cited here was presented in 2015.
A. Masood and J. Java, “Static Analysis for Web Service Security — Tools & Techniques for a Secure Development Life Cycle,” Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, Waltham, MA, 2015, pp. 1-6. doi: 10.1109/THS.2015.7225337
Abstract: In this ubiquitous IoT (Internet of Things) era, web services have become a vital part of today's critical national and public sector infrastructure. With the industry-wide adoption of service-oriented architecture (SOA), web services have become an integral component of the enterprise software ecosystem, resulting in new security challenges. Web services are strategic components used by a wide variety of organizations for information exchange at internet scale. Public deployment of mission-critical APIs opens up the possibility of software bugs being maliciously exploited. Therefore, vulnerability identification in web services through static as well as dynamic analysis is a thriving and interesting area of research in academia, national security and industry. Using OWASP (Open Web Application Security Project) web services guidelines, this paper discusses the challenges of existing standards, and reviews new techniques and tools to improve services security by detecting vulnerabilities. Recent vulnerabilities like Shellshock and Heartbleed have shifted the focus of risk assessment to the application layer, which for the majority of organizations means public-facing web services and web/mobile applications. RESTful services have now become the new normal for service development; therefore, SOAP-centric standards such as XML Encryption, XML Signature, WS-Security, and WS-SecureConversation are not nearly as relevant. In this paper we provide an overview of the OWASP top 10 vulnerabilities for web services, and discuss potential static code analysis techniques to discover these vulnerabilities. The paper reviews the security issues targeting web services, software/program verification and the security development lifecycle.
Keywords: Web services; program diagnostics; program verification; security of data; Heartbleed; Internet of Things; Internet scale; OWASP; Open Web Application Security Project; RESTFul services; SOAP centric standards; Shellshock; WS-SecureConversation; WS-security; Web applications; Web service security; Web services guidelines; XML encryption; XML signature; critical national infrastructure; dynamic analysis; enterprise software ecosystem; information exchange; mission critical API; mobile applications; national security and industry; program verification; public deployments; public sector infrastructure; risk assessment; secure development life cycle; security challenges; service development paradigm; service-oriented architecture; services security; software bugs; software verification; static code analysis; strategic components; ubiquitous IoT; vulnerabilities detection; vulnerability identification; Computer crime; Cryptography; Simple object access protocol; Testing; XML; Cyber Security; Penetration Testing; RESTFul API; SOA; SOAP; Secure Design; Secure Software Development; Security Code Review; Service Oriented Architecture; Source Code Analysis; Static Analysis Tool; Static Code Analysis; Web Application security; Web Services; Web Services Security (ID#: 16-10020)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225337&isnumber=7190491
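To illustrate the kind of static code analysis the paper surveys, here is a minimal rule-based scanner sketch. The rules below are OWASP-inspired examples of our own, not the paper's tool or rule set:

```python
import re

# Illustrative (not from the paper): a few toy rules a static analyzer
# might apply to web-service source code during a secure development
# life cycle review.
RULES = [
    ("hardcoded-secret", re.compile(r"(password|api_key)\s*=\s*['\"][^'\"]+['\"]", re.I)),
    ("sql-concat", re.compile(r"execute\(\s*['\"].*['\"]\s*\+")),
    ("weak-hash", re.compile(r"\bmd5\b|\bsha1\b", re.I)),
]

def scan(source: str):
    """Return (rule_name, line_number) for every rule matched on every line."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RULES:
            if pattern.search(line):
                findings.append((name, lineno))
    return findings

sample = 'api_key = "s3cret"\ncur.execute("SELECT * FROM t WHERE id=" + uid)\n'
print(scan(sample))  # flags the hardcoded key and the SQL concatenation
```

Real tools work on parsed ASTs and data-flow graphs rather than regular expressions, but the report-rule-and-location shape of the output is the same.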
L. Tang, L. Ouyang and W. T. Tsai, “Multi-factor Web API Security for Securing Mobile Cloud,” Fuzzy Systems and Knowledge Discovery (FSKD), 2015 12th International Conference on, Zhangjiajie, 2015, pp. 2163-2168. doi: 10.1109/FSKD.2015.7382287
Abstract: Mobile cloud computing is gaining popularity among both mobile users and enterprises. With mobile-first becoming the enterprise IT strategy and more enterprises exposing their business services to the mobile cloud through Web APIs, the security of mobile cloud computing has become a main concern and a key success factor as well. This paper presents the security challenges of mobile cloud computing and defines an end-to-end secure mobile cloud computing reference architecture. It then shows that Web API security is key to the end-to-end security stack, and specifies the traditional API security mechanism along with two multi-factor Web API security strategies and mechanisms. Finally, it compares the security features provided by ten API gateway providers.
Keywords: application program interfaces; cloud computing; mobile computing; security of data; API gateway providers; API security mechanism; business services; end-to-end secure mobile cloud computing; enterprise IT strategy; mobile cloud computing; mobile users; multifactor Web API security; securing mobile cloud; Authentication; Authorization; Business; Cloud computing; Mobile communication; end-to-end; mobile cloud; security mechanism; web API (ID#: 16-10021)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7382287&isnumber=7381900
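One traditional Web API security mechanism of the kind the paper discusses is HMAC request signing, with a timestamp acting as a second factor against replay. The header layout and parameter names below are illustrative, not taken from the paper:

```python
import hashlib
import hmac

# Sketch (our construction, not the paper's): sign method, path, body and
# timestamp with a shared secret; the server recomputes and compares.

def sign_request(secret: bytes, method: str, path: str, body: str, ts: int) -> str:
    msg = f"{method}\n{path}\n{body}\n{ts}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify_request(secret, method, path, body, ts, signature, now, max_skew=300):
    if abs(now - ts) > max_skew:      # stale timestamp: possible replay
        return False
    expected = sign_request(secret, method, path, body, ts)
    return hmac.compare_digest(expected, signature)  # constant-time compare

key = b"shared-secret"
t = 1_700_000_000
sig = sign_request(key, "POST", "/orders", '{"id": 1}', t)
print(verify_request(key, "POST", "/orders", '{"id": 1}', t, sig, now=t + 10))
```

A tampered body or an expired timestamp makes verification fail, which is why API gateways commonly combine a signature check with a freshness window.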
M. F. F. Khan and K. Sakamura, “Tamper-Resistant Security for Cyber-Physical Systems with eTRON Architecture,” 2015 IEEE International Conference on Data Science and Data Intensive Systems, Sydney, NSW, 2015, pp. 196-203. doi: 10.1109/DSDIS.2015.98
Abstract: This article posits tamper-resistance as a necessary security measure for cyber-physical systems (CPS). With omnipresent connectivity and pervasive use of mobile devices, software security alone is arguably not sufficient to safeguard the sensitive digital information we use every day. As a result, utilization of a variety of tamper-resistant devices - including smartcards, secure digital cards with integrated circuits, and mobile phones with subscriber identity modules - has become standard industry practice. Recognizing the need for effective hardware security alongside software security, in this paper, we present the eTRON architecture - at the core of which lies the tamper-resistant eTRON chip, equipped with functions for mutual authentication, encrypted communication and access control. Besides the security features, the eTRON architecture also offers a wide range of functionalities through a coherent set of application programming interfaces (API) leveraging tamper-resistance. In this paper, we discuss various features of the eTRON architecture, and present two representative eTRON-based applications with a view to evaluating its effectiveness by comparison with other existing applications.
Keywords: authorisation; cyber-physical systems; electronic commerce; smart cards; ubiquitous computing; API; CPS; application programming interfaces; cyber-physical systems; eTRON architecture; hardware security; integrated circuits; mobile phones; secure digital cards; security features; smartcards; software security; subscriber identity module; tamper-resistant devices; tamper-resistant security; Access control; Authentication; Computer architecture; Cryptography; Hardware; Libraries; CPS; Tamper-resistance; access control; authentication; e-commerce; secure filesystem; smartcards (ID#: 16-10022)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7396503&isnumber=7396460
Y. Sun, S. Nanda and T. Jaeger, “Security-as-a-Service for Microservices-Based Cloud Applications,” 2015 IEEE 7th International Conference on Cloud Computing Technology and Science (CloudCom), Vancouver, BC, 2015, pp. 50-57. doi: 10.1109/CloudCom.2015.93
Abstract: Microservice architecture allows different parts of an application to be developed, deployed and scaled independently, and has therefore become a trend for developing cloud applications. However, it comes with challenging security issues. First, the network complexity introduced by the large number of microservices greatly increases the difficulty of monitoring the security of the entire application. Second, microservices are often designed to completely trust each other; therefore, compromise of a single microservice may bring down the entire application. The problems are only exacerbated by the cloud, since applications no longer have complete control over their networks. In this paper, we propose a design for security-as-a-service for microservices-based cloud applications. By adding a new API primitive FlowTap for the network hypervisor, we build a flexible monitoring and policy enforcement infrastructure for network traffic to secure cloud applications. We demonstrate the effectiveness of our solution by deploying the Bro network monitor using FlowTap. Results show that our solution is flexible enough to support various kinds of monitoring scenarios and policies and it incurs minimal overhead (~6%) for real world usage. As a result, cloud applications can leverage our solution to deploy network security monitors to flexibly detect and block threats both external and internal to their network.
Keywords: application program interfaces; cloud computing; security of data; trusted computing; API primitive FlowTap; Bro network monitor; microservice-based cloud applications; network hypervisor; policy enforcement infrastructure; security-as-a-service; Cloud computing; Complexity theory; Computer architecture; DVD; Electronic mail; Monitoring; Security; microservices; network monitoring; security (ID#: 16-10023)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7396137&isnumber=7396111
W. You et al., “Promoting Mobile Computing and Security Learning Using Mobile Devices,” Integrated STEM Education Conference (ISEC), 2015 IEEE, Princeton, NJ, 2015, pp. 205-209. doi: 10.1109/ISECon.2015.7119924
Abstract: It is of vital importance to provide mobile computing and security education to students in the computing fields. As mobile applications become increasingly popular and inexpensive ways for people to communicate, share information and take advantage of convenient functionality in their daily lives, they also regularly attract the interest of malicious attackers. Malware and spyware that may damage smart phones or steal sensitive information are also growing in every aspect of people's lives. Another concern lies in insecure mobile application development, which makes mobile devices more vulnerable. For example, insecure exposure of APIs or the abuse of some components during app development can leave applications open to potential threats. Although many academic institutions have started to or plan to offer mobile computing courses, there is a shortage of hands-on lab modules and resources that can be integrated into multiple existing computing courses. In this paper, we present our development of mobile computing and security hands-on labs, and share our experiences in teaching courses on mobile computing and security, together with students' learning feedback, using Android mobile devices.
Keywords: Android (operating system); computer science education; invasive software; mobile computing; smart phones; teaching; API; Android mobile devices; hands-on lab modules; malicious attackers; malware; mobile application development; mobile computing and security education; mobile computing courses; mobile computing hands-on labs; mobile security hands-on labs; sensitive information; spyware; student learning feedback; teaching; Mobile communication; Mobile computing; Programming; Security; Smart phones; Android development; Mobile security education; Secure programming (ID#: 16-10024)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7119924&isnumber=7119894
S. Hosseinzadeh, S. Rauti, S. Hyrynsalmi and V. Leppanen, “Security in the Internet of Things Through Obfuscation and Diversification,” Computing, Communication and Security (ICCCS), 2015 International Conference on, Pamplemousses, 2015, pp. 1-5. doi: 10.1109/CCCS.2015.7374189
Abstract: The Internet of Things (IoT) is composed of heterogeneous embedded and wearable sensors and devices that collect and share information over the Internet. This information may contain private data about the users. Thus, securing the information and preserving the privacy of the users are of paramount importance. In this paper we look into the possibility of applying two techniques, obfuscation and diversification, in the IoT. Diversification and obfuscation are two prominent security techniques used for proactively protecting software and code. We propose obfuscating and diversifying the operating systems and APIs on IoT devices, as well as some communication protocols enabling the external use of IoT devices. We believe that the proposed ideas mitigate the risk of unknown zero-day attacks, large-scale attacks, and also targeted attacks.
Keywords: Internet of Things; application program interfaces; operating systems (computers); security of data; API; IoT; diversification techniques; obfuscation techniques; operating systems; security techniques; Apertures; Feeds; Impedance; Radar antennas; Substrates; Wireless LAN; diversification; obfuscation; privacy; security (ID#: 16-10025)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7374189&isnumber=7374113
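The diversification idea can be sketched in a few lines: derive a per-device alias for each API symbol from a device-specific seed, so that malware written against the standard names fails on a diversified device. The paper proposes diversifying IoT operating systems and APIs; the naming scheme below is our own illustration:

```python
import hashlib

# Illustrative sketch (not the paper's mechanism): interface
# diversification by per-device symbol renaming.

def diversify(symbol: str, device_seed: bytes) -> str:
    """Derive a deterministic, device-specific alias for an API symbol."""
    digest = hashlib.sha256(device_seed + symbol.encode()).hexdigest()[:8]
    return f"{symbol}_{digest}"

def build_symbol_table(symbols, device_seed):
    return {s: diversify(s, device_seed) for s in symbols}

table_a = build_symbol_table(["open", "read", "send"], b"device-A")
table_b = build_symbol_table(["open", "read", "send"], b"device-B")
print(table_a["open"] != table_b["open"])  # same API, different names per device
```

Legitimate software shipped with the device is rewritten against the same table, while code injected later has no valid entry points to call.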
M. A. Saied, O. Benomar, H. Abdeen and H. Sahraoui, “Mining Multi-level API Usage Patterns,” 2015 IEEE 22nd International Conference on Software Analysis, Evolution, and Reengineering (SANER), Montreal, QC, 2015, pp. 23-32. doi: 10.1109/SANER.2015.7081812
Abstract: Software developers need to cope with the complexity of the Application Programming Interfaces (APIs) of external libraries or frameworks. However, typical APIs provide several thousands of methods to their client programs, and such large APIs are difficult to learn and use. An API method is generally used within client programs along with other methods of the API of interest. Despite this, co-usage relationships between API methods are often not documented. We propose a technique for mining Multi-Level API Usage Patterns (MLUP) to exhibit the co-usage relationships between methods of the API of interest across interfering usage scenarios. We detect multi-level usage patterns as distinct groups of API methods, where each group is uniformly used across various client programs, independently of usage contexts. We evaluated our technique on four APIs with up to 22 client programs per API. For all the studied APIs, our technique was able to detect usage patterns that are almost all highly consistent and highly cohesive across considerably varied client programs.
Keywords: application program interfaces; data mining; software libraries; MLUP; application programming interface; multilevel API usage pattern mining; Clustering algorithms; Context; Documentation; Graphical user interfaces; Java; Layout; Security; API Documentation; API Usage; Software Clustering; Usage Pattern (ID#: 16-10026)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7081812&isnumber=7081802
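The co-usage intuition behind this line of work can be shown with a toy example. The paper's MLUP technique uses clustering; the simplification below (our own, not the paper's algorithm) just groups API methods that are used by exactly the same set of client programs:

```python
from collections import defaultdict

# Toy co-usage grouping (illustrative stand-in for the paper's clustering):
# methods sharing an identical "client footprint" form one usage group.

clients = {
    "clientA": {"open", "read", "close", "connect"},
    "clientB": {"open", "read", "close"},
    "clientC": {"connect", "send"},
}

def usage_groups(clients):
    by_footprint = defaultdict(list)
    methods = set().union(*clients.values())
    for m in sorted(methods):
        # footprint = the set of clients that call this method
        footprint = frozenset(c for c, used in clients.items() if m in used)
        by_footprint[footprint].append(m)
    return sorted(by_footprint.values())

print(usage_groups(clients))
```

Here `open`, `read` and `close` end up in one group because they always appear together in clients, which is exactly the kind of undocumented co-usage relationship pattern mining aims to surface.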
M. N. Aneci, L. Gheorghe, M. Carabas, S. Soriga and R. A. Somesan, “SDN-based Security Mechanism,” 2015 14th RoEduNet International Conference - Networking in Education and Research (RoEduNet NER), Craiova, 2015, pp. 12-17. doi: 10.1109/RoEduNet.2015.7311820
Abstract: Nowadays most hardware configurations are being replaced with software configurations in order to virtualize everything possible. Along these lines, the concept of the Software Defined Network (SDN) is evolving in the networking research domain. This paper proposes an SDN-based security mechanism that provides confidentiality and integrity by using custom cryptographic algorithms between two routers. The mechanism is able to secure both TCP and UDP traffic. It offers the choice of which information to secure: the Layer 4 header and Layer 7 payload, or just the Layer 7 payload. The implementation of the proposed security mechanism relies on the Cisco model for SDN and uses the OnePK API.
Keywords: computer network security; cryptography; software defined networking; transport protocols; OnePK API; SDN-based security mechanism; TCP traffic; UDP traffic; application program interface; cryptographic algorithms; software configuration; software defined network; transmission control protocol; user datagram protocol; Decision support systems; Cisco; Software Defined Networking; confidentiality; custom encryption; integrity; onePK (ID#: 16-10027)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7311820&isnumber=7311815
R. Beniwal, P. Zavarsky and D. Lindskog, “Study of Compliance of Apple's Location based APIs with Recommendations of the IETF Geopriv,” 2015 10th International Conference for Internet Technology and Secured Transactions (ICITST), London, 2015, pp. 214-219. doi: 10.1109/ICITST.2015.7412092
Abstract: Location Based Services (LBS) are offered by smart phone applications which use device location data to provide location-related services. Privacy of location information is a major concern in LBS applications. This paper compares the location APIs of iOS with the IETF Geopriv architecture to determine what mechanisms are in place to protect the location privacy of an iOS user. The focus of the study is on the distribution phase of the Geopriv architecture and its applicability in enhancing location privacy on iOS mobile platforms. The presented review shows that two iOS API features, Geocoder and the ability to turn off location services, provide location privacy for iOS users to some extent. However, only a limited number of functionalities can be considered compliant with Geopriv's recommendations. The paper also presents possible ways to address the limited location privacy offered by iOS mobile devices, based on Geopriv's recommendations.
Keywords: application program interfaces; data privacy; iOS (operating system); recommender systems; smart phones; Apple location based API; Geocoder; Geopriv recommendation; IETF Geopriv architecture; LBS; device location data; distribution phase; iOS mobile device; iOS mobile platform; iOS user; location based service; location information privacy; location privacy; location-related service; off location service; smart phone application; Global Positioning System; Internet; Mobile communication; Operating systems; Privacy; Servers; Smart phones; APIs; Geopriv; iOS; location information (ID#: 16-10028)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7412092&isnumber=7412034
A. Alotaibi and A. Mahmmod, “Enhancing OAuth Services Security by an Authentication Service with Face Recognition,” Systems, Applications and Technology Conference (LISAT), 2015 IEEE Long Island, Farmingdale, NY, 2015, pp. 1-6. doi: 10.1109/LISAT.2015.7160208
Abstract: Controlling secure access to web Application Programming Interfaces (APIs) and web services has become more vital with the advancement and use of web technologies. The security of web service APIs faces critical issues in managing the authenticated and authorized identities of users. Open Authorization (OAuth) is a secure protocol that allows the resource owner to grant a third-party application permission to access the resource owner's protected resources on their behalf, without releasing their credentials. Most web APIs still use traditional authentication, which is vulnerable to many attacks such as man-in-the-middle attacks. To reduce such vulnerability, we enhance the security of OAuth through the implementation of a biometric service. We introduce a face verification system based on Local Binary Patterns as an authentication service handled by the authorization server. The entire authentication process consists of three services: an image registration service, a verification service, and an access token service. The developed system is most useful in securing services where human identification is required.
Keywords: Web services; application program interfaces; authorisation; biometrics (access control); face recognition; image registration; OAuth service security; Web application programming interfaces; Web services API; Web technologies; access token service; authentication service; authorization server; biometric service; face verification system; human identification; image registration service; local binary patterns; open authorization; resource owner protected resource; third-party application; verification service; Authentication; Authorization; Databases; Protocols; Servers; Access Token; Face Recognition; OAuth; Open Authorization; Web API; Web Services (ID#: 16-10029)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160208&isnumber=7160171
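The Local Binary Patterns descriptor the paper uses for face verification has a simple core: each pixel is encoded by thresholding its 8 neighbours against its own value. A minimal sketch, where the image layout and bit ordering are illustrative choices of ours:

```python
# Minimal LBP sketch: compute the 8-bit code for one interior pixel of a
# grayscale image stored as nested lists. Bit order (clockwise from the
# top-left neighbour) is an assumption; real LBP variants differ here.

OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(img, y, x):
    """Threshold the 8 neighbours of (y, x) against the centre pixel."""
    center = img[y][x]
    code = 0
    for bit, (dy, dx) in enumerate(OFFSETS):
        if img[y + dy][x + dx] >= center:
            code |= 1 << bit
    return code

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
print(lbp_code(img, 1, 1))  # → 120
```

A face verifier then histograms these codes over image regions and compares histograms between the enrolled and the probe image.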
G. Suddul, K. Nundran, J. L. K. Cheung and M. Richomme, “Rapid Prototyping with a Local Geolocation API,” Computing, Communication and Security (ICCCS), 2015 International Conference on, Pamplemousses, 2015, pp. 1-4. doi: 10.1109/CCCS.2015.7374192
Abstract: Geolocation technology provides the ability to target content and services to users visiting specific locations. There is expanding growth in device features and Web Application Programming Interfaces (APIs) supporting the development of applications with geolocation services on mobile platforms. However, to be effective, these applications rely on the availability of broadband networks, which are not readily available in various developing countries, especially in Africa. We propose a geolocation API for the Orange Emerginov Platform which keeps geolocation data in an offline environment and periodically synchronises it with its online database. The API also has a set of new features such as categorisation and shortest path. It has been successfully implemented and tested with geolocation data of Mauritius, consumed by mobile applications. Our results demonstrate a response-time reduction of around 80% for some features when compared with other online Web APIs.
Keywords: application program interfaces; mobile computing; software prototyping; Mauritius; Orange Emerginov Platform; application programming interfaces; local geolocation API; mobile applications; rapid prototyping; Artificial neural networks; Computational modeling; Industries; Liquids; Mathematical model; Process control; Training; OSM geolocation API; Web API; geolocation; micro-services (ID#: 16-10030)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7374192&isnumber=7374113
P. Sun, S. Chandrasekaran, S. Zhu and B. Chapman, “Deploying OpenMP Task Parallelism on Multicore Embedded Systems with MCA Task APIs,” High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, New York, NY, 2015, pp. 843-847. doi: 10.1109/HPCC-CSS-ICESS.2015.88
Abstract: Heterogeneous multicore embedded systems are rapidly growing, with cores of varying types and capacity. Programming these devices and exploiting the hardware has been a real challenge. Existing programming models and their execution are typically meant for general-purpose computation, and they are mostly too heavy to be adopted for resource-constrained embedded systems. Embedded programmers are still expected to use low-level and proprietary APIs, making the resulting software less and less portable. These challenges motivated us to explore how OpenMP, a high-level directive-based model, could be used for embedded platforms. In this paper, we translate OpenMP to the Multicore Association Task Management API (MTAPI), which is a standard API for leveraging task parallelism on embedded platforms. Our results demonstrate that the performance of our OpenMP runtime library is comparable to state-of-the-art task-parallel solutions. We believe this approach will provide a portable solution, since it abstracts the low-level details of the hardware and no longer depends on vendor-specific APIs.
Keywords: application program interfaces; embedded systems; multiprocessing systems; parallel processing; MCA; MTAPI; OpenMP runtime library; OpenMP task parallelism; heterogeneous multicore embedded system; high-level directive-based model; multicore association task management API; multicore embedded system; resource-constrained embedded system; vendor-specific API; Computational modeling; Embedded systems; Hardware; Multicore processing; Parallel processing; Programming; Heterogeneous Multicore Embedded Systems; MTAPI; OpenMP; Parallel Computing (ID#: 16-10031)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336267&isnumber=7336120
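The paper's OpenMP-to-MTAPI mapping targets C on embedded platforms; as a language-neutral sketch of the underlying task model (create tasks, then wait for them), here is the same spawn-and-join pattern expressed with a Python thread pool. This is an analogy of the task-parallel pattern only, not the paper's runtime:

```python
from concurrent.futures import ThreadPoolExecutor

# Task-parallel sketch: submit() plays the role of "task create" and
# result() the role of "task wait" in task APIs such as MTAPI.

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def parallel_map(func, items, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(func, it) for it in items]   # spawn tasks
        return [f.result() for f in futures]                # join tasks

print(parallel_map(fib, [10, 12, 14]))
```

An OpenMP `task` construct compiled to MTAPI follows the same shape: each task body becomes an MTAPI action, and the implicit barrier at `taskwait` becomes a wait on the spawned task handles.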
N. Kawaguchi and K. Omote, “Malware Function Classification Using APIs in Initial Behavior,” Information Security (AsiaJCIS), 2015 10th Asia Joint Conference on, Kaohsiung, 2015, pp. 138-144. doi: 10.1109/AsiaJCIS.2015.15
Abstract: Malware proliferation has become a serious threat to the Internet in recent years. Most current malware are subspecies of existing malware that have been automatically generated by illegal tools. To conduct an efficient analysis of malware, estimating their functions in advance is effective when deciding which samples to prioritize for analysis. However, estimating malware functions has been difficult due to the increasing sophistication of malware. Although various approaches for malware detection and classification have been considered, the classification accuracy is still low. In this paper, we propose a new classification method which estimates a malware's functions from APIs observed by dynamic analysis on a host. We examine whether the proposed method can correctly classify unknown malware by function using machine learning. The results show that our new method can classify each malware's function with an average accuracy of 83.4%.
Keywords: Internet; invasive software; learning (artificial intelligence); pattern classification; API; dynamic analysis; efficient malware analysis; illegal tools; initial behavior; machine learning; malware detection; malware function classification; malware proliferation; Accuracy; Data mining; Feature extraction; Machine learning algorithms; Malware; Software; Support vector machines; machine learning; malware classification (ID#: 16-10032)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153948&isnumber=7153836
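Classifying behaviour from observed API call sequences, as this paper does with machine learning, can be illustrated with a toy model. The 2-gram features and nearest-centroid rule below are a simplified stand-in of our own, and the API names and training profiles are invented for the example:

```python
from collections import Counter

# Toy behaviour classifier over API call sequences (illustrative, not the
# paper's model): represent a trace by its API-call bigrams and pick the
# training profile sharing the most bigrams.

def bigrams(calls):
    return Counter(zip(calls, calls[1:]))

def similarity(a: Counter, b: Counter) -> int:
    return sum((a & b).values())   # count of shared bigrams

TRAINING = {
    "downloader": bigrams(["InternetOpen", "InternetOpenUrl", "CreateFile", "WriteFile"]),
    "keylogger":  bigrams(["SetWindowsHookEx", "GetAsyncKeyState", "WriteFile"]),
}

def classify(calls):
    feats = bigrams(calls)
    return max(TRAINING, key=lambda label: similarity(feats, TRAINING[label]))

sample = ["InternetOpen", "InternetOpenUrl", "CreateFile", "WriteFile", "CloseHandle"]
print(classify(sample))  # → downloader
```

Real systems extract such features from sandboxed execution traces and feed them to trained classifiers, but the pipeline (trace, featurize, score against learned profiles) is the same.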
M. A. Saied, H. Abdeen, O. Benomar and H. Sahraoui, “Could We Infer Unordered API Usage Patterns Only Using the Library Source Code?,” 2015 IEEE 23rd International Conference on Program Comprehension, Florence, 2015, pp. 71-81. doi: 10.1109/ICPC.2015.16
Abstract: Learning to use existing or new software libraries is a difficult task for software developers, which can impede their productivity. Much existing work has provided different techniques to mine API usage patterns from client programs in order to help developers understand and use existing libraries. However, considering only client programs to identify API usage patterns is a strong constraint, as the client programs' source code is not always available, or the clients themselves do not yet exist for newly released APIs. In this paper, we propose a technique for mining Non Client-based Usage Patterns (NCBUP miner). We detect unordered API usage patterns as distinct groups of API methods that are structurally and semantically related and thus may contribute together to the implementation of a particular functionality for potential client programs. We evaluated our technique on four APIs. The obtained results are comparable to those of client-based approaches in terms of usage-pattern cohesion.
Keywords: application program interfaces; data mining; software libraries; source code (software); NCBUP miner; client programs; library source code; non client-based usage patterns; software libraries; unordered API usage patterns; Clustering algorithms; Context; Java; Matrix decomposition; Measurement; Security; Semantics; API Documentation; API Usage; Software Clustering; Usage Pattern (ID#: 16-10033)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7181434&isnumber=7181418
Y. E. Oktian, SangGon Lee, HoonJae Lee and JunHuy Lam, “Secure Your Northbound SDN API,” 2015 Seventh International Conference on Ubiquitous and Future Networks, Sapporo, 2015, pp. 919-920. doi: 10.1109/ICUFN.2015.7182679
Abstract: Many new features and capabilities emerge in the network because of the separation of the data plane and control plane in Software Defined Network (SDN) terminology. One of them is the possibility of implementing a Northbound API to allow third-party applications to access the resources of the network. However, most current implementations of the Northbound API do not consider the security aspect. Therefore, we design a more secure scheme for it. The design consists of token authentication, using the OAuth 2.0 protocol, for the application and for the user who is responsible for controlling/using the application/network.
Keywords: application program interfaces; computer network security; cryptographic protocols; software defined networking; Northbound SDN API; OAuth 2.0 protocol; SDN terminology; authentication; software defined network terminology; Authentication; Authorization; Proposals; Protocols; Servers; Software defined networking; Northbound API; SDN; authentication; token (ID#: 16-10034)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7182679&isnumber=7182475
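The token-based access control proposed for the Northbound API can be sketched as issuing a short-lived opaque token to an application and validating it on each call. The paper's design uses OAuth 2.0; the in-memory token store with expiry below is a simplified stand-in of our own:

```python
import secrets
import time

# Illustrative token store (not the paper's implementation): a controller
# issues opaque bearer tokens with a TTL and checks them on each
# Northbound API request.

TOKENS = {}   # token -> (app_id, expires_at)

def issue_token(app_id: str, ttl: int, now: float) -> str:
    token = secrets.token_urlsafe(16)
    TOKENS[token] = (app_id, now + ttl)
    return token

def check_token(token: str, now: float):
    """Return the app_id if the token is valid and unexpired, else None."""
    entry = TOKENS.get(token)
    if entry is None:
        return None
    app_id, expires_at = entry
    return app_id if now < expires_at else None

t0 = time.time()
tok = issue_token("topology-app", ttl=60, now=t0)
print(check_token(tok, now=t0 + 1))      # valid: returns the app id
print(check_token(tok, now=t0 + 120))    # expired: returns None
```

In a full OAuth 2.0 flow the authorization server issues the token after authenticating both the user and the application, and the controller only validates it, which keeps credentials out of third-party application code.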
C. I. Fan, H. W. Hsiao, C. H. Chou and Y. F. Tseng, “Malware Detection Systems Based on API Log Data Mining,” Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, Taichung, 2015, pp. 255-260. doi: 10.1109/COMPSAC.2015.241
Abstract: As information technology improves, the Internet is involved in every area of our daily lives. As mobile devices and cloud computing technology have come to play important parts in our lives, they have become more susceptible to attacks. In recent years, phishing and malicious websites have increasingly become serious problems in the field of network security. Attackers use many approaches to implant malware into target hosts in order to steal significant data and cause substantial damage. The growth of malware has been very rapid, and its purpose has changed from destruction to penetration. The signatures of malware have become more difficult to detect. In addition to static signatures, malware also tries to conceal dynamic signatures from anti-virus inspection. In this research, we use hooking techniques to trace the dynamic signatures that malware tries to hide. We then compare the behavioural differences between malware and benign programs by using data mining techniques in order to identify the malware. The experimental results show that our detection rate reaches 95% with only 80 attributes. This means that our method can achieve a high detection rate with low complexity.
Keywords: Web sites; application program interfaces; cloud computing; computer viruses; data mining; API log data mining; Internet; antivirus inspection; cloud computing technology; dynamic signature tracing; dynamic signatures; hooking techniques; information technology; malicious Web sites; malware detection systems; mobile devices; network security; phishing; static signatures; Accuracy; Bayes methods; Data mining; Feature extraction; Malware; Monitoring; Training; API; Classification; Data Mining; Malware; System Call (ID#: 16-10035)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273364&isnumber=7273299
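As a rough illustration of the approach in the abstract above, a hooked API-call log can be reduced to a frequency vector over a fixed attribute set and then classified. The API names, centroids and nearest-centroid rule below are hypothetical stand-ins for the paper's mined attributes and trained data-mining classifiers:

```python
from collections import Counter

def api_features(log, vocab):
    """Frequency vector of API calls over a fixed attribute vocabulary."""
    c = Counter(log)
    return [c[a] for a in vocab]

def nearest_centroid(vec, centroids):
    """Toy stand-in for a trained classifier: label of the closest
    centroid by squared Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist(vec, centroids[lbl]))

vocab = ["CreateFile", "WriteFile", "RegSetValue", "Connect"]
centroids = {                       # illustrative class centroids
    "benign":  [5, 3, 0, 1],
    "malware": [1, 1, 6, 8],
}
log = ["RegSetValue"] * 5 + ["Connect"] * 7 + ["CreateFile"]
assert nearest_centroid(api_features(log, vocab), centroids) == "malware"
```

The paper's 95%-with-80-attributes result suggests that a relatively small, well-chosen attribute vocabulary is what keeps the classifier cheap.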
G. G. Sundarkumar, V. Ravi, I. Nwogu and V. Govindaraju, “Malware Detection via API Calls, Topic Models and Machine Learning,” 2015 IEEE International Conference on Automation Science and Engineering (CASE), Gothenburg, 2015, pp. 1212-1217. doi: 10.1109/CoASE.2015.7294263
Abstract: Dissemination of malicious code, also known as malware, poses severe challenges to cyber security. Malware authors embed software in seemingly innocuous executables, unknown to a user. The malware subsequently interacts with security-critical OS resources on the host system or network, in order to destroy their information or to gather sensitive information such as passwords and credit card numbers. Malware authors typically use Application Programming Interface (API) calls to perpetrate these crimes. We present a model that uses text mining and topic modeling to detect malware, based on the types of API call sequences. We evaluated our technique on two publicly available datasets. We observed that Decision Tree and Support Vector Machine yielded significant results. We performed t-test with respect to sensitivity for the two models and found that statistically there is no significant difference between these models. We recommend Decision Tree as it yields 'if-then' rules, which could be used as an early warning expert system.
Keywords: application program interfaces; data mining; decision trees; expert systems; invasive software; learning (artificial intelligence); support vector machines; API calls; application programming interface calls; cyber security; decision tree; early warning expert system; if-then rules; machine learning; malicious code dissemination; malware detection; security-critical OS resources; support vector machine text mining; topic modeling; topic models; Feature extraction; Grippers; Sensitivity; Support vector machines; Text mining; Trojan horses (ID#: 16-10036)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7294263&isnumber=7294025
M. Sneps-Sneppe and D. Namiot, “Metadata in SDN API for WSN,” 2015 7th International Conference on New Technologies, Mobility and Security (NTMS), Paris, 2015, pp. 1-5. doi: 10.1109/NTMS.2015.7266504
Abstract: This paper discusses the system aspects of the development of application programming interfaces in Software-Defined Networking (SDN). SDN is a prospective software enabler for Wireless Sensor Networks (WSN), so the application-layer SDN API will be the main application API for WSN. Almost all existing SDN interfaces use so-called Representational State Transfer (REST) services as a basic model. This model is simple and straightforward for developers, but often does not carry the information (metadata) necessary for programming automation. In this article, we cover the issues of representing metadata in the SDN API.
Keywords: meta data; software defined networking; wireless sensor networks; REST services; SDN interfaces; WSN; metadata; programming automation; programming interfaces; representational state transfer; software-defined networking; Computer architecture; Metadata; Programming; service-oriented architecture; Wireless sensor networks; Parlay; REST; SDN; WSDL; northbound API (ID#: 16-10037)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266504&isnumber=7266450
M. Panda and A. Nag, “Plain Text Encryption Using AES, DES and SALSA20 by Java Based Bouncy Castle API on Windows and Linux,” Advances in Computing and Communication Engineering (ICACCE), 2015 Second International Conference on, Dehradun, 2015, pp. 541-548. doi: 10.1109/ICACCE.2015.130
Abstract: Information Security has become an important element of data communication. Various encryption algorithms have been proposed and implemented as a solution and play an important role in information security system. But on the other hand these algorithms consume a significant amount of computing resources such as CPU time, memory and battery power. However, for all practical applications, performance and the cost of implementation are also important concerns. Therefore it is essential to assess the performance of encryption algorithms. In this paper, the performance of three Symmetric Key based algorithms-AES, Blowfish and Salsa20 has been evaluated based on execution time, memory required for implementation and throughput across two different operating systems. Based on the simulation results, it can be concluded that AES and Salsa20 are preferred over Blowfish for plain text data encryption.
Keywords: Java; Linux; application program interfaces; cryptography; AES; Blowfish; Bouncy Castle API; DES; SALSA20; Salsa20; Windows; data communication; information security system; operating systems; performance assessment; plain text data encryption; plain text encryption; symmetric key based algorithms; Algorithm design and analysis; Ciphers; Classification algorithms; Encryption; Memory management; Performance Analysis (ID#: 16-10038)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7306744&isnumber=7306547
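The measurement methodology in the abstract above (execution time and throughput per algorithm) can be sketched independently of the ciphers themselves. This is Python rather than the paper's Java/Bouncy Castle setup, and the XOR "cipher" is only a placeholder where real AES, Blowfish and Salsa20 implementations would be plugged in:

```python
import time

def benchmark(cipher, data, runs=5):
    """Best-of-N wall-clock time and derived throughput for one cipher."""
    best = float("inf")
    for _ in range(runs):
        t0 = time.perf_counter()
        cipher(data)
        best = min(best, time.perf_counter() - t0)
    return best, (len(data) / best) / 1e6  # (seconds, MB/s)

def xor_cipher(data, key=0x5A):
    """Placeholder transform; stands in for a real encryption call."""
    return bytes(b ^ key for b in data)

elapsed, mbps = benchmark(xor_cipher, b"\x00" * 100_000)
assert elapsed > 0 and mbps > 0
```

Taking the best of several runs, as here, reduces noise from the OS scheduler; the paper's comparison across Windows and Linux would repeat the same harness per platform.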
V. Casola, A. D. Benedictis, M. Rak and U. Villano, “SLA-Based Secure Cloud Application Development: The SPECS Framework,” 2015 17th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), Timisoara, 2015, pp. 337-344. doi: 10.1109/SYNASC.2015.59
Abstract: The perception of lack of control over resources deployed in the cloud may represent one of the critical factors for an organization to decide to cloudify or not their own services. Furthermore, in spite of the idea of offering security-as-a-service, the development of secure cloud applications requires security skills that can slow down the adoption of the cloud for nonexpert users. In recent years, the concept of Security Service Level Agreements (Security SLA) is assuming a key role in the provisioning of cloud resources. This paper presents the SPECS framework, which enables the development of secure cloud applications covered by a Security SLA. The SPECS framework offers APIs to manage the whole Security SLA life cycle and provides all the functionalities needed to automatize the enforcement of proper security mechanisms and to monitor user-defined security features. The development process of SPECS applications offering security-enhanced services is illustrated, presenting as a real-world case study the provisioning of a secure web server.
Keywords: application program interfaces; cloud computing; contracts; security of data; API; SLA-based secure cloud application development; SPECS framework; secure Web server; security SLA; security service level agreement; security-as-a-service; security-enhanced service; user-defined security feature; Cloud computing; Context; Monitoring; Security; Supply chains; Unified modeling language; SPECS; Secure Cloud Application Development; Secure Web Server; Security Service Level Agreement (ID#: 16-10039)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7426103&isnumber=7425822
N. Paladi and C. Gehrmann, “Towards Secure Multi-Tenant Virtualized Networks,” Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 1180-1185. doi: 10.1109/Trustcom.2015.502
Abstract: Network virtualization enables multi-tenancy over physical network infrastructure, with a side-effect of increased network complexity. Software-defined networking (SDN) is a novel network architectural model – one where the control plane is separated from the data plane by a standardized API – which aims to reduce the network management overhead. However, as the SDN model itself is evolving, its application to multi-tenant virtualized networks raises multiple security challenges. In this paper, we present a security analysis of SDN-based multi-tenant virtualized networks: we outline the security assumptions applicable to such networks, define the relevant adversarial model, identify the main attack vectors for such network infrastructure deployments and finally synthesize a set of high-level security requirements for SDN-based multi-tenant virtualized networks. This paper sets the foundation for future design of secure SDN-based multi-tenant virtualized networks.
Keywords: application program interfaces; computer network management; computer network security; software defined networking; virtualisation; SDN; main attack vectors; multitenant virtualized network security; multitenant virtualized networks; network architectural model; network complexity; network infrastructure deployments; network management overhead reduction; network virtualization; physical network infrastructure; software-defined networking; standardized API; Cloud computing; Computer architecture; Hardware; Network operating systems; Routing; Security; Virtualization; Multi-tenant Virtualized Networks; Network Virtualization; Security; Software Defined Networks (ID#: 16-10040)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345410&isnumber=7345233
A. Bianchi, J. Corbetta, L. Invernizzi, Y. Fratantonio, C. Kruegel and G. Vigna, “What the App is That? Deception and Countermeasures in the Android User Interface,” 2015 IEEE Symposium on Security and Privacy, San Jose, CA, 2015, pp. 931-948. doi: 10.1109/SP.2015.62
Abstract: Mobile applications are part of the everyday lives of billions of people, who often trust them with sensitive information. These users identify the currently focused app solely by its visual appearance, since the GUIs of the most popular mobile OSes do not show any trusted indication of the app origin. In this paper, we analyze in detail the many ways in which Android users can be confused into misidentifying an app, thus, for instance, being deceived into giving sensitive information to a malicious app. Our analysis of the Android platform APIs, assisted by an automated state-exploration tool, led us to identify and categorize a variety of attack vectors (some previously known, others novel, such as a non-escapable full screen overlay) that allow a malicious app to surreptitiously replace or mimic the GUI of other apps and mount phishing and click-jacking attacks. Limitations in the system GUI make these attacks significantly harder to notice than on a desktop machine, leaving users completely defenseless against them. To mitigate GUI attacks, we have developed a two-layer defense. To detect malicious apps at the market level, we developed a tool that uses static analysis to identify code that could launch GUI confusion attacks. We show how this tool detects apps that might launch GUI attacks, such as ransomware programs. Since these attacks are meant to confuse humans, we have also designed and implemented an on-device defense that addresses the underlying issue of the lack of a security indicator in the Android GUI. We add such an indicator to the system navigation bar; this indicator securely informs users about the origin of the app with which they are interacting (e.g., the PayPal app is backed by “PayPal, Inc.”). We demonstrate the effectiveness of our attacks and the proposed on-device defense with a user study involving 308 human subjects, whose ability to detect the attacks increased significantly when using a system equipped with our defense.
Keywords: Android (operating system); graphical user interfaces; invasive software; program diagnostics; smart phones; Android platform API; Android user interface; GUI confusion attacks; app origin; attack vectors; automated state-exploration tool; click-jacking attacks; desktop machine; malicious app; mobile OS; mobile applications; on-device defense; phishing attacks; ransomware programs; security indicator; sensitive information; static analysis; system navigation bar; trusted indication; two-layer defense; visual appearance; Androids; Graphical user interfaces; Humanoid robots; Navigation; Security; Smart phones; mobile-security; static-analysis; usable-security (ID#: 16-10041)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163069&isnumber=7163005
E. Markoska, N. Ackovska, S. Ristov, M. Gusev and M. Kostoska, “Software Design Patterns to Develop an Interoperable Cloud Environment,” Telecommunications Forum Telfor (TELFOR), 2015 23rd, Belgrade, 2015, pp. 986-989. doi: 10.1109/TELFOR.2015.7377630
Abstract: Software development has provided methods and tools to facilitate the development process, resulting in scalable, efficient, testable, readable and bug-free code. This endeavor has resulted in a multitude of products, many of them nowadays known as good practices, specialized environments, improved compilers, as well as software design patterns. Software design patterns are a tested methodology, and are most often language neutral. In this paper, we identify the problem of the heterogeneous cloud market, as well as the various APIs offered by each individual cloud. By using a set of software design patterns, we developed a pilot software component that unifies the APIs of heterogeneous clouds. It offers an interface that greatly simplifies the development process of cloud-based applications. The pilot adapter is developed for two open-source clouds, Eucalyptus and OpenStack, but the use of software design patterns allows easy extension to all other clouds that have APIs for cloud management, either open source or commercial.
Keywords: application program interfaces; cloud computing; object-oriented methods; object-oriented programming; public domain software; software engineering; API; Eucalyptus; OpenStack; application program interface; cloud environment interoperability; open source cloud; software design pattern; software development; Cloud computing; Interoperability; Java; Production facilities; Security; Software design; cloud; design patterns; interoperability (ID#: 16-10042)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7377630&isnumber=7377376
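The "pilot adapter" idea in the abstract above is the classic adapter pattern: one unified interface with a concrete adapter per provider. A minimal sketch in Python (the method names and the strings standing in for real Eucalyptus/OpenStack API calls are hypothetical):

```python
from abc import ABC, abstractmethod

class CloudAdapter(ABC):
    """Unified interface; each concrete adapter wraps one provider's API."""
    @abstractmethod
    def start_instance(self, name):
        ...

class EucalyptusAdapter(CloudAdapter):
    def start_instance(self, name):
        # A real adapter would call the Eucalyptus (EC2-compatible) API here.
        return f"eucalyptus:run-instances:{name}"

class OpenStackAdapter(CloudAdapter):
    def start_instance(self, name):
        # A real adapter would call the OpenStack compute API here.
        return f"openstack:servers-create:{name}"

def deploy(cloud: CloudAdapter, name):
    """Application code depends only on the unified interface."""
    return cloud.start_instance(name)

assert deploy(EucalyptusAdapter(), "vm1").startswith("eucalyptus")
assert deploy(OpenStackAdapter(), "vm1").startswith("openstack")
```

Adding support for a further cloud then means writing one new adapter class, with no change to application code.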
S. Betgé-Brezetz, G. B. Kamga and M. Tazi, “Trust Support for SDN Controllers and Virtualized Network Applications,” Network Softwarization (NetSoft), 2015 1st IEEE Conference on, London, 2015, pp. 1-5. doi: 10.1109/NETSOFT.2015.7116153
Abstract: The SDN paradigm allows networks to be dynamically reconfigured by network applications. SDN is also of particular interest for NFV, which deals with the virtualization of network functions. The network programmability offered by SDN thus presents various advantages, but it also induces various threats regarding potential attacks on the network. For instance, there is a critical risk that a hacker takes over network control by exploiting this SDN network programmability (e.g., using the SDN API or tampering with a network application running on the SDN controller). This paper therefore proposes an approach to deal with this possible lack of trust in the SDN controller or in its applications. The approach consists in relying not on a single controller but on several `redundant' controllers that may also run in different execution environments. The network configuration requests coming from these controllers are then compared and, if deemed sufficiently consistent and therefore trustable, they are actually sent to the network. This approach has been implemented in an intermediary layer (based on a network hypervisor) inserted between the network equipment and the controllers. Experiments have been performed showing the feasibility of the approach and providing some first evaluations of its impact on the network and the services.
Keywords: application program interfaces; computer network security; software defined networking; trusted computing; virtualisation; NFV; SDN API; SDN controllers; SDN network programmability; SDN paradigm; network configuration requests; network control; network equipments; network function virtualization; network hypervisor; network programmability; redundant controllers; trust support; virtualized network applications; Computer architecture; Network topology; Prototypes; Routing; Security; Virtual machine monitors; Virtualization; SDN; network applications; network virtualization; security; trust (ID#: 16-10043)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7116153&isnumber=7116113
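The comparison step in the abstract above can be caricatured as a majority vote over the requests emitted by the redundant controllers — a simplified version of the paper's consistency check, with illustrative request strings:

```python
from collections import Counter

def trusted_config(requests, quorum):
    """Forward a configuration request only if at least `quorum`
    redundant controllers issued an identical one; otherwise reject."""
    value, count = Counter(requests).most_common(1)[0]
    return value if count >= quorum else None

# Three redundant controllers; one has been tampered with.
reqs = ["flow:allow 10.0.0.1->10.0.0.2",
        "flow:allow 10.0.0.1->10.0.0.2",
        "flow:allow *->attacker.example"]
assert trusted_config(reqs, quorum=2) == "flow:allow 10.0.0.1->10.0.0.2"
assert trusted_config(["a", "b", "c"], quorum=2) is None  # no consensus
```

In the paper this comparison lives in the intermediary hypervisor layer, so a single compromised controller cannot reconfigure the network on its own.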
H. L. Choo, S. Oh, J. Jung and H. Kim, “The Behavior-Based Analysis Techniques for HTML5 Malicious Features,” Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2015 9th International Conference on, Blumenau, 2015, pp. 436-440. doi: 10.1109/IMIS.2015.67
Abstract: HTML5, announced in October 2014, contains many more functions than previous HTML versions. It includes media controls for audio, video, canvas, etc., and it is designed to access the browser file system through JavaScript APIs such as the web storage and file reader APIs. In addition, it provides powerful functions to replace existing ActiveX. As the HTML5 standard is adopted, the conversion of web services to HTML5 is being carried out all over the world. Browser developers in particular have high expectations for HTML5 as it provides many mobile functions. However, alongside these expectations, the damage from malicious attacks using HTML5 is also expected to be large. Script, which is the key to HTML5 functions, enables a different type of attack from existing malware, as a malicious attack can be triggered merely by a user accessing a browser. Existing known attacks can also be reused by bypassing detection systems through the new HTML5 elements. This paper defines unique HTML5 behavior data through browser execution data and proposes the detection of malware by categorizing malicious HTML5 features.
Keywords: Internet; Java; hypermedia markup languages; invasive software; mobile computing; multimedia computing; online front-ends; telecommunication control; HTML versions; HTML5 behavior data; HTML5 elements; HTML5 functions; HTML5 malicious features; HTML5 standard; Java Script API; Web services; Web storage; behavior-based analysis techniques; browser developers; browser execution data; browser file system; detection systems; file reader API; malicious attacks; malware attacks; media controls; mobile functions; Browsers; Engines; Feature extraction; HTML; Malware; Standards; Behavior-Based Analysis; HTML5 Malicious Features; Script-based CyberAttack; Web Contents Security (ID#: 16-10044)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7284990&isnumber=7284886
H. Graupner, K. Torkura, P. Berger, C. Meinel and M. Schnjakin, “Secure Access Control for Multi-Cloud Resources,” Local Computer Networks Conference Workshops (LCN Workshops), 2015 IEEE 40th, Clearwater Beach, FL, 2015, pp. 722-729. doi: 10.1109/LCNW.2015.7365920
Abstract: Privacy, security, and trust concerns are continuously hindering the growth of cloud computing despite its attractive features. To mitigate these concerns, an emerging approach targets the use of multi-cloud architectures to achieve portability and reduce cost. Multi-cloud architectures, however, suffer several challenges, including inadequate cross-provider APIs, insufficient support from cloud service providers, and especially non-unified access control mechanisms. Consequently, the available multi-cloud proposals are unwieldy or insecure. This paper makes two contributions. First, we survey existing cloud storage provider interfaces. Then, we propose a novel technique that deals with the challenges of connecting modern authentication standards and multiple cloud authorization methods.
Keywords: authorisation; cloud computing; data privacy; storage management; trusted computing; cloud storage provider interfaces; inadequate cross-provider APIs; modern authentication standards; multicloud resources; multiple cloud authorization methods; nonunified access control mechanisms; privacy; secure access control; security; trust concerns; Access control; Authentication; Cloud computing; Containers; Google; Standards; Cloud storage; access control management; data security; multi-cloud systems
(ID#: 16-10045)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7365920&isnumber=7365758
J. Xu and X. Yuan, “Developing a Course Module for Teaching Cryptography Programming on Android,” Frontiers in Education Conference (FIE), 2015 IEEE, El Paso, TX, 2015, pp. 1-4. doi: 10.1109/FIE.2015.7344086
Abstract: Mobile platforms have become extremely popular among users and hence an important platform for developers. Mobile devices often store tremendous amounts of personal, financial and commercial data. Several studies have shown that a large number of the mobile applications that use cryptography APIs have made mistakes. This could potentially attract both targeted and mass-scale attacks, causing great loss to mobile users. Therefore, it is vitally important to provide education in secure mobile programming to students in computer science and other related disciplines. Pedagogical resources on this topic, which many educators urgently need, are very hard to find. This paper introduces a course module that teaches students how to develop secure Android applications by correctly using Android's cryptography APIs. The course module targets two areas where programmers commonly make many mistakes: password-based encryption and SSL certificate validation. The core of the module is a real-world sample Android program that students secure by implementing cryptographic components correctly. The course module uses open-ended problem solving to let students freely explore the multiple options for securing the application. It includes lecture slides on Android's Crypto library, its common misuses, and suggested good practices, as well as assessment materials. This course module could be used in a mobile programming class or a network security class. It could also be taught as a module in an advanced programming class or used as a self-teaching tool for the general public.
Keywords: application program interfaces; computer aided instruction; computer science education; cryptography; educational courses; mobile computing; smart phones; teaching; Android crypto library; Android program; SSL certificate validation; assessment materials; computer science; course module development; cryptographic components; cryptography API; cryptography programming; education; lecture slide; mass-scale attacks; mobile applications; mobile devices; mobile platforms; network security class; open-ended problem solving; password based encryption; pedagogical resources; secure Android applications; secure mobile programming class; targeted attacks; teaching; Androids; Encryption; Humanoid robots; Mobile communication; Programming; Android programming; SSL; course module; cryptography; programming; security (ID#: 16-10046)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7344086&isnumber=7344011
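One of the two misuse areas the course module above targets, password-based encryption, typically comes down to practices like the following — random salt, high iteration count, a standard key-derivation function, constant-time comparison. This sketch uses Python's standard library rather than the Android Crypto APIs the module actually teaches:

```python
import hashlib
import hmac
import os

def derive_key(password: str, salt: bytes = None, iterations: int = 200_000):
    """Password-based key derivation done correctly: fresh random salt,
    high iteration count, standard PBKDF2 (never a bare hash of the password)."""
    salt = salt or os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, key

def verify(password: str, salt: bytes, expected: bytes,
           iterations: int = 200_000):
    """Constant-time check of a candidate password against a stored key."""
    _, key = derive_key(password, salt, iterations)
    return hmac.compare_digest(key, expected)

salt, key = derive_key("correct horse battery staple")
assert verify("correct horse battery staple", salt, key)
assert not verify("wrong password", salt, key)
```

The common student mistakes the module describes (fixed salts, single-round hashes, ECB-mode defaults) are exactly what this pattern avoids.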
J. Li, D. Tian and C. Hu, “Dynamic Tracking Reinforcement Based on Simplified Control Flow,” 2015 11th International Conference on Computational Intelligence and Security (CIS), Shenzhen, 2015, pp. 358-362. doi: 10.1109/CIS.2015.93
Abstract: With the rapid development of computer science and Internet technology, software security issues have become one of the main threats to information system. The technique of execution path tracking based on control flow integrity is an effective method to improve software security. However, the dynamic tracking method may incur considerable performance overhead. To address this problem, this paper proposes a method of dynamic control flow enforcement based on API invocations. Our method is based on a key observation: most control flow attackers will invoke the sensitive APIs to achieve their malicious purpose. To defeat these attacks, we first extract the normal execution path of API calls by offline analysis. Then, we utilize the offline information for run-time enforcement. The results of the experiment showed that our method is able to detect and prevent the control flow attacks with malicious API invocations. Compared with existing methods, the system performance is improved.
Keywords: Internet; application program interfaces; information systems; security of data; API calls; API invocations; Internet technology; computer science; control flow attacks; control flow integrity; dynamic control flow enforcement; dynamic tracking reinforcement; information system; offline analysis; offline information; run-time enforcement; simplified control flow; software security; software security issues; Algorithm design and analysis; Heuristic algorithms; Instruments; Registers; Security; Software; Yttrium; API calls; inserted reinforcement; path tracking; simplified control flow (ID#: 16-10047)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7397107&isnumber=7396229
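The enforcement idea in the abstract above — allowing a sensitive API only when it is reached along an expected call path — can be caricatured in a few lines. Python's inspect module stands in for the paper's binary-level instrumentation, and the whitelist of legitimate callers (which offline analysis would produce) is hypothetical:

```python
import inspect

# Hypothetical result of offline analysis: functions that legitimately
# appear on the call path to the sensitive API.
ALLOWED_CALLERS = {"download_update"}

def sensitive_api(payload):
    """Run-time enforcement: reject invocations whose call path does not
    include a whitelisted caller."""
    callers = {frame.function for frame in inspect.stack()[1:]}
    if not callers & ALLOWED_CALLERS:
        raise PermissionError("unexpected control flow to sensitive API")
    return f"executed:{payload}"

def download_update():
    # Normal execution path recorded during offline analysis.
    return sensitive_api("update.bin")

assert download_update() == "executed:update.bin"
try:
    sensitive_api("shellcode")  # direct call, not on any recorded path
    raise AssertionError("should have been blocked")
except PermissionError:
    pass
```

The paper's version checks far more than the immediate callers, but the principle is the same: control-flow attacks tend to reach sensitive APIs by paths the program never takes legitimately.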
V. S. Sinha, D. Saha, P. Dhoolia, R. Padhye and S. Mani, “Detecting and Mitigating Secret-Key Leaks in Source Code Repositories,” 2015 IEEE/ACM 12th Working Conference on Mining Software Repositories, Florence, 2015, pp. 396-400. doi: 10.1109/MSR.2015.48
Abstract: Several news articles in the past year highlighted incidents in which malicious users stole API keys embedded in files hosted on public source code repositories such as GitHub and Bitbucket in order to drive their own workloads for free. While some service providers such as Amazon have started taking steps to actively discover such developer carelessness by scouting public repositories and suspending leaked API keys, there is little support for tackling the problem from the code sharing platforms themselves. In this paper, we discuss practical solutions to detecting, preventing and fixing API key leaks. We first outline a handful of methods for detecting API keys embedded within source code, and evaluate their effectiveness using a sample set of projects from GitHub. Second, we enumerate the mechanisms which could be used by developers to prevent or fix key leaks in code repositories manually. Finally, we outline a possible solution that combines these techniques to provide tool support for protecting against key leaks in version control systems.
Keywords: application program interfaces; public key cryptography; source code (software); code repositories; fix key leaks; key leaks protection; secret-key leaks detection; secret-key leaks mitigation; source code repositories; version control systems; Control systems; Facebook; History; Java; Leak detection; Pattern matching; Software; api keys; git; mining software repositories; security
(ID#: 16-10048)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7180102&isnumber=7180053
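A common detection method of the kind the paper above evaluates combines pattern matching with an entropy check on string constants, since real keys look like long high-entropy tokens while ordinary identifiers do not. The regex and threshold below are illustrative, not the authors':

```python
import math
import re

# Illustrative pattern: long runs of key-like characters. Real detectors
# combine provider-specific regexes with entropy checks like this one.
KEY_PATTERN = re.compile(r"[A-Za-z0-9/+=_\-]{20,}")

def shannon_entropy(s):
    """Bits of entropy per character of the string."""
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def find_candidate_keys(source, min_entropy=4.0):
    """Flag high-entropy string constants that look like embedded secrets."""
    return [m.group() for m in KEY_PATTERN.finditer(source)
            if shannon_entropy(m.group()) >= min_entropy]

code = 'SECRET = "kZ8q3vXr1LmP9sT2wYb4NcE6"\nname = "aaaaaaaaaaaaaaaaaaaaaa"'
assert find_candidate_keys(code) == ["kZ8q3vXr1LmP9sT2wYb4NcE6"]
```

The entropy filter is what keeps repeated-character strings and ordinary words, which also match a length-based regex, out of the results.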
J. Spring, S. Kern and A. Summers, “Global Adversarial Capability Modeling,” 2015 APWG Symposium on Electronic Crime Research (eCrime), Barcelona, 2015, pp. 1-21. doi: 10.1109/ECRIME.2015.7120797
Abstract: Intro: Computer network defense has models for attacks and incidents comprised of multiple attacks after the fact. However, we lack an evidence-based model of the likelihood and intensity of attacks and incidents. Purpose: We propose a model of global capability advancement, the adversarial capability chain (ACC), to fit this need. The model enables cyber risk analysis to better understand the costs for an adversary to attack a system, which directly influences the cost to defend it. Method: The model is based on four historical studies of adversarial capabilities: capability to exploit Windows XP, to exploit the Android API, to exploit Apache, and to administer compromised industrial control systems. Result: We propose the ACC with five phases: Discovery, Validation, Escalation, Democratization, and Ubiquity. We use the four case studies as examples as to how the ACC can be applied and used to predict attack likelihood and intensity.
Keywords: Analytical models; Androids; Biological system modeling; Computational modeling; Humanoid robots; Integrated circuit modeling; Software systems; CND; computer network defense; cybersecurity; Incident response; intelligence; intrusion detection; modeling; security (ID#: 16-10049)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120797&isnumber=7120794
K. Shah and D. K. Singh, “A Survey on Data Mining Approaches for Dynamic Analysis of Malwares,” Green Computing and Internet of Things (ICGCIoT), 2015 International Conference on, Noida, 2015, pp. 495-499. doi: 10.1109/ICGCIoT.2015.7380515
Abstract: The number of samples being analyzed by security vendors is continuously increasing on a daily basis. Therefore, generic automated malware detection tools are needed to detect zero-day threats. Using machine learning techniques, the behavioral patterns obtained can be exploited to classify malwares (unknown samples) into their families. The variable-length instructions of Intel x86, which can be placed at arbitrary addresses, make binaries susceptible to obfuscation techniques. Inserting padding bytes at locations that are unreachable at runtime tends to confuse static analyzers into misinterpreting program binaries. Often the code that is actually running may not be the code which the static analyzer analyzed. Such programs use polymorphism and metamorphism techniques and are self-modifying. In this paper, dynamic analysis of executables is combined with mining techniques: Application Programming Interface (API) calls invoked by samples during execution are used as the parameters of experimentation.
Keywords: application program interfaces; data mining; invasive software; learning (artificial intelligence); pattern classification; system monitoring; application programming interface; behavioral pattern exploitation; data mining approach; dynamic malware analysis; generic automated malware detection tools; machine learning techniques; malware classification; metamorphism techniques; obfuscation techniques; padding byte insertion; polymorphism; security vendors; variable length instructions; Classification algorithms; API Calls; AdaBoost; Classifiers; Dynamic Analysis (ID#: 16-10050)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7380515&isnumber=7380415
N. S. Gawale and N. N. Patil, “Implementation of a System to Detect Malicious URLs for Twitter Users,” Pervasive Computing (ICPC), 2015 International Conference on, Pune, 2015, pp. 1-5. doi: 10.1109/PERVASIVE.2015.7087078
Abstract: Over the last few years, there has been tremendous use of online social networking sites, which also provides opportunities for hackers to enter networks easily and carry out unauthorized activities. There are many notable social networking websites, such as Twitter, Facebook and Google+. These are popularly used by numerous people to stay connected with each other and share their daily happenings. Here we focus on Twitter for an experiment; it is popular for micro-blogging, and its community interacts by publishing text-based posts of 140 characters known as tweets. Exploiting this popularity, hackers use shortened Uniform Resource Locators (URLs) on Twitter and thereby disseminate viruses to user accounts. Our study is based on examining malicious content behind such short URLs and protecting users from unauthorized activities. We introduce a system which provides security to multiple users of Twitter; in addition, they receive alert mails. Our goal is to download URLs in real time from multiple accounts. We then get entry points of correlated URLs, and a crawler browser marks the suspicious URLs. The system finds malicious URLs by using five features, such as the initial URL, similar text, the friend-follower ratio and relative URLs. Then alert mail is sent to users, which is added to the host.
Keywords: authorisation; computer crime; computer viruses; social networking (online); Facebook; Google+; Twitter; Uniform Resource Locator; alert mail; crawler browser; friend follower ratio; hackers; malicious URL detection; malicious content; microblogging; online social networking Web sites; suspicious URL; text-based post publishing; unauthorized activities; user accounts; virus dissemination; Crawlers; Databases; Real-time systems; Servers; Uniform resource locators; API keys; Conditional redirect; Suspicious URL; classifier; crawler (ID#: 16-10051)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7087078&isnumber=7086957
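The feature-based detection idea in the abstract above can be sketched as a simple scorer over per-tweet signals. The feature names, weights and threshold below are illustrative assumptions for exposition, not the authors' actual model:

```python
# Minimal sketch of feature-based malicious-URL flagging in the spirit of the
# paper's five-feature approach. All feature names and weights are hypothetical.

def suspicion_score(tweet):
    """Score a tweet dict on simple heuristics; higher means more suspicious."""
    score = 0.0
    # Shortened URLs hide the final landing page (the paper's "initial URL" cue).
    if tweet.get("uses_url_shortener"):
        score += 0.3
    # Accounts following far more users than follow them back are a spam signal.
    if tweet.get("friend_follower_ratio", 1.0) > 10:
        score += 0.3
    # Many correlated URLs sharing one entry point suggest a campaign.
    if tweet.get("correlated_url_count", 0) > 3:
        score += 0.2
    # Near-duplicate text across accounts is another campaign indicator.
    if tweet.get("similar_text"):
        score += 0.2
    return score

def is_malicious(tweet, threshold=0.5):
    """Flag the tweet (and, per the paper, trigger an alert mail) above a threshold."""
    return suspicion_score(tweet) >= threshold
```

In the paper the equivalent decision feeds an alert-mail step; here the caller would handle notification.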
Y. Li, J. Fang, C. Liu, M. Liu and S. Wu, “Study on the Application of Dalvik Injection Technique for the Detection of Malicious Programs in Android,” Electronics Information and Emergency Communication (ICEIEC), 2015 5th International Conference on, Beijing, 2015, pp. 309-312. doi: 10.1109/ICEIEC.2015.7284546
Abstract: With the increasing popularity of smartphones, malicious software targeting them is emerging in an endless stream. As the phone operating system with the highest current market share, Android faces a full-scale security challenge. This article analyzes the application of the Dalvik injection technique to the detection of Android malware. Modifying the system API (Application Program Interface) through Dalvik injection makes it possible to examine programs on an Android phone directly: from the list of sensitive APIs called by a program, the system judges whether the target program is malicious.
Keywords: Android (operating system); application program interfaces; invasive software; API; Android malware; Dalvik injection technique; application program interface; malicious program detection; Google; Java; Libraries; Security; Smart phones; Software; Dalvik injection; detection of malicious programs; sensitive API (ID#: 16-10052)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7284546&isnumber=7284473
A. d. Benedictis, M. Rak, M. Turtur and U. Villano, “REST-Based SLA Management for Cloud Applications,” 2015 IEEE 24th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises, Larnaca, 2015, pp. 93-98. doi: 10.1109/WETICE.2015.36
Abstract: In cloud computing, possible risks linked to availability, performance and security can be mitigated by the adoption of Service Level Agreements (SLAs) formally agreed upon by cloud service providers and their users. This paper presents the design of services for the management of cloud-oriented SLAs that hinge on the use of a REST-based API. Such services can be easily integrated into existing cloud applications, platforms and infrastructures, in order to support SLA-based cloud services delivery. After a discussion on the SLA life-cycle, an agreement protocol state diagram is introduced. It takes explicitly into account negotiation, remediation and renegotiation issues, is compliant with all the active standards, and is compatible with the WS-Agreement standard. The requirement analysis and the design of a solution able to support the proposed SLA protocol is presented, introducing the REST API used. This API aims at being the basis for a framework to build SLA-based applications.
Keywords: application program interfaces; cloud computing; contracts; diagrams; formal specification; formal verification; protocols; systems analysis; CSP; REST-based API; REST-based SLA management; SLA-based cloud services delivery; agreement protocol state diagram; cloud service provider; requirement analysis; service level agreement; Cloud computing; Monitoring; Protocols; Security; Standards; Uniform resource locators; XML; API; Cloud; REST; SLA; WS-Agreement (ID#: 16-10053)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7194337&isnumber=7194298
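The agreement protocol state diagram described above, covering negotiation, remediation and renegotiation, can be sketched as a small state machine. The state and event names below are assumptions for illustration, not the paper's exact protocol vocabulary:

```python
# Hedged sketch of an SLA life-cycle state machine in the spirit of the paper's
# agreement protocol diagram. States, events and transitions are illustrative.

SLA_TRANSITIONS = {
    ("proposed", "negotiate"): "negotiating",
    ("negotiating", "agree"): "active",
    ("negotiating", "reject"): "terminated",
    ("active", "violate"): "remediating",      # SLA violation triggers remediation
    ("remediating", "remediate"): "active",
    ("active", "renegotiate"): "negotiating",  # renegotiation re-enters negotiation
    ("active", "expire"): "terminated",
}

class SLA:
    """One agreement instance; a REST API would map events onto resource verbs."""

    def __init__(self):
        self.state = "proposed"

    def apply(self, event):
        key = (self.state, event)
        if key not in SLA_TRANSITIONS:
            raise ValueError(f"illegal event {event!r} in state {self.state!r}")
        self.state = SLA_TRANSITIONS[key]
        return self.state
```

In a REST-based design such as the paper's, each transition would typically be exposed as a POST against the agreement resource, with illegal transitions rejected as client errors.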
K. Shekanayaki, A. Chakure and A. Jain, “A Survey of Journey of Cloud and Its Future,” Computing Communication Control and Automation (ICCUBEA), 2015 International Conference on, Pune, 2015, pp. 60-64. doi: 10.1109/ICCUBEA.2015.20
Abstract: Over the past few years, cloud computing has grown from a promising business idea into one of the fastest growing fields of the IT industry. Still, IT organizations remain concerned about critical issues (such as security and data loss) that accompany the implementation of cloud computing, security in particular, and these concerns deter clients from switching to the cloud. This paper briefs the role of the cloud in the IT business enterprise, its woes and its projected solutions. It proposes the use of a “Honey-Comb Infrastructure” for flexible, secure and reliable storage supported by parallel computing.
Keywords: cloud computing; electronic commerce; parallel processing; security of data; IT business enterprise; IT industry; IT organization; honey-comb infrastructure; information technology; parallel computing; security issue; Business; Cloud computing; Computational modeling; Computer architecture; Security; Servers; Software as a service; API (Application Programming Interface); CSP (Cloud Service Provider); Cloud Computing; DC (Data-centers); PAAS (Platform as a Service); SAAS (Software as a Service); SOA (Service-Oriented Architecture); TC (Telecommunications Closet); Virtualization (ID#: 16-10054)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155808&isnumber=7155781
M. Coblenz, R. Seacord, B. Myers, J. Sunshine and J. Aldrich, “A Course-Based Usability Analysis of Cilk Plus and OpenMP,” Visual Languages and Human-Centric Computing (VL/HCC), 2015 IEEE Symposium on, Atlanta, GA, 2015, pp. 245-249. doi: 10.1109/VLHCC.2015.7357223
Abstract: Cilk Plus and OpenMP are parallel language extensions for the C and C++ programming languages. The CPLEX Study Group of the ISO/IEC C Standards Committee is developing a proposal for a parallel programming extension to C that combines ideas from Cilk Plus and OpenMP. We conducted a preliminary comparison of Cilk Plus and OpenMP in a master's level course on security to evaluate the design tradeoffs in the usability and security of these two approaches. The eventual goal is to inform decision-making within the committee. We found several usability problems worthy of further investigation based on student performance, including declaring and using reductions, multi-line compiler directives, and the understandability of task assignment to threads.
Keywords: C++ language; application program interfaces; computer aided instruction; computer science education; human factors; multi-threading; program compilers; C programming language; C++ programming language; CPLEX Study Group; Cilk Plus; ISO/IEC C Standards Committee; OpenMP; course-based usability analysis; decision-making; master level course; multiline compiler directives; parallel language extensions; student performance analysis; task assignment understandability; Programming; API usability; Cilk Plus; OpenMP; empirical studies of programmers; parallel programming (ID#: 16-10055)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357223&isnumber=7356963
P. Gohar and L. Purohit, “Discovery and Prioritization of Web Services Based on Fuzzy User Preferences for QoS,” Computer, Communication and Control (IC4), 2015 International Conference on, Indore, 2015, pp. 1-6. doi: 10.1109/IC4.2015.7375702
Abstract: Web services are the key technologies for web applications developed using Service Oriented Architecture (SOA). Implementing web services involves many challenges, among them web service selection and discovery, which require matchmaking and finding the most suitable web service from a large collection of functionally-equivalent web services. In this paper a fuzzy-based approach for web service discovery is developed that models the ranking of QoS-aware web services as a fuzzy multi-criteria decision-making problem. To describe the web services available in the registry, an ontology is created for each web service; and to represent the functional and imprecise Quality of Service (QoS) preferences of both the web service consumer and provider in linguistic terms, a fuzzy rule base is created with the help of the Java Expert System Shell (JESS) API. To make decisions on multiple and conflicting QoS requirements, the enhanced Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE) model is adopted for QoS-based web service ranking. To demonstrate the abilities of the proposed framework, a web-based “E-Recruitment System” is implemented.
Keywords: Java; Web services; decision making; fuzzy set theory; ontologies (artificial intelligence); operations research; quality of service; service-oriented architecture; E-Recruitment System; JESS API; Java Expert System Shell API; PROMETHEE model; Preference Ranking Organization METHod for Enrichment Evaluation; QoS requirements; QoS-aware Web service ranking; SOA; Web applications; Web based system; Web service consumer; Web service discovery; Web service prioritization; Web service selection; fuzzy multicriteria decision-making problem; fuzzy rule base; fuzzy user preference; fuzzy-based approach; linguistics term; ontology; quality of service preference; service oriented architecture; Computer architecture; Computers; Conferences; Quality of service; Security; Service-oriented architecture; Fuzzy Discovery; JESS API; PROMETHEE; QoS Parameters; Web Service (ID#: 16-10056)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7375702&isnumber=7374772
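The PROMETHEE-style ranking referenced above can be sketched with net outranking flows. The preference function below is the simple "usual" (step) criterion, and the service names, QoS values and weights are illustrative assumptions, not the paper's data:

```python
# Hedged sketch of PROMETHEE net-flow ranking for QoS-aware service selection.
# Uses the "usual" preference function; all inputs below are hypothetical.

def promethee_net_flows(services, weights):
    """services: {name: [criterion values]} with higher = better on each
    criterion; weights: per-criterion importance. Returns the net outranking
    flow per service (higher flow = preferred)."""
    names = list(services)
    n = len(names)
    flows = {a: 0.0 for a in names}
    for a in names:
        for b in names:
            if a == b:
                continue
            # Weighted preference of a over b: sum weights of criteria a wins.
            pref = sum(w for w, va, vb in
                       zip(weights, services[a], services[b]) if va > vb)
            flows[a] += pref / (n - 1)  # contributes to a's positive flow
            flows[b] -= pref / (n - 1)  # contributes to b's negative flow
    return flows
```

Ranking the services by descending net flow then yields the prioritized discovery result the paper describes.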
F. Yamaguchi, A. Maier, H. Gascon and K. Rieck, “Automatic Inference of Search Patterns for Taint-Style Vulnerabilities,” 2015 IEEE Symposium on Security and Privacy, San Jose, CA, 2015, pp. 797-812. doi: 10.1109/SP.2015.54
Abstract: Taint-style vulnerabilities are a persistent problem in software development, as the recently discovered “Heart bleed” vulnerability strikingly illustrates. In this class of vulnerabilities, attacker-controlled data is passed unsanitized from an input source to a sensitive sink. While simple instances of this vulnerability class can be detected automatically, more subtle defects involving data flow across several functions or project-specific APIs are mainly discovered by manual auditing. Different techniques have been proposed to accelerate this process by searching for typical patterns of vulnerable code. However, all of these approaches require a security expert to manually model and specify appropriate patterns in practice. In this paper, we propose a method for automatically inferring search patterns for taint-style vulnerabilities in C code. Given a security-sensitive sink, such as a memory function, our method automatically identifies corresponding source-sink systems and constructs patterns that model the data flow and sanitization in these systems. The inferred patterns are expressed as traversals in a code property graph and enable efficiently searching for unsanitized data flows — across several functions as well as with project-specific APIs. We demonstrate the efficacy of this approach in different experiments with 5 open-source projects. The inferred search patterns reduce the amount of code to inspect for finding known vulnerabilities by 94.9% and also enable us to uncover 8 previously unknown vulnerabilities.
Keywords: application program interfaces; data flow analysis; public domain software; security of data; software engineering; C code; attacker-controlled data; automatic inference; code property graph; data flow; data security; inferred search pattern; memory function; open-source project; project-specific API; search pattern; security-sensitive sink; sensitive sink; software development; source-sink system; taint-style vulnerability; Databases; Libraries; Payloads; Programming; Security; Software; Syntactics; Clustering; Graph Databases; Vulnerabilities (ID#: 16-10057)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163061&isnumber=7163005
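The source-to-sink idea behind the inferred patterns above can be sketched in miniature. The paper's code-property-graph traversals are replaced here by a plain call-sequence check, and the source, sanitizer and sink names are illustrative assumptions:

```python
# Hedged sketch of taint-style checking: flag sink calls reached by
# attacker-controlled data with no sanitizer in between. Function names
# are examples, not the paper's inferred patterns.

SOURCES = {"read_input", "recv"}            # introduce attacker-controlled data
SANITIZERS = {"validate_length", "escape"}  # clear the taint
SINKS = {"memcpy", "system"}                # security-sensitive sinks

def find_taint_violations(call_trace):
    """Return the sinks reached while data is still tainted, scanning the
    trace in order."""
    tainted = False
    violations = []
    for call in call_trace:
        if call in SOURCES:
            tainted = True
        elif call in SANITIZERS:
            tainted = False
        elif call in SINKS and tainted:
            violations.append(call)
    return violations
```

The paper's contribution is inferring the SOURCES/SANITIZERS/SINKS sets and the flow patterns automatically, across functions and project-specific APIs, rather than hand-writing them as done here.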
P. Jia, X. He, L. Liu, B. Gu and Y. Fang, “A Framework for Privacy Information Protection on Android,” Computing, Networking and Communications (ICNC), 2015 International Conference on, Garden Grove, CA, 2015, pp. 1127-1131. doi: 10.1109/ICCNC.2015.7069508
Abstract: The permissions-based security model of Android increasingly shows its weakness in protecting users' privacy information. Under this model, an application must hold the appropriate permissions before gaining access to various resources (including data and hardware) in the phone. The model can only restrict an application from accessing system resources without appropriate permissions; it cannot prevent malicious accesses to privacy data after the application has obtained permissions. During the installation of an application, the system prompts what permissions the application is requesting. Users have no choice but to allow all the requested permissions if they want to use the application. Once an application is successfully installed, the system is unable to control its behavior dynamically, and at this point the application can obtain privacy information and send it out without the user's knowledge. There is therefore a great security risk in the permissions-based security model. This paper researches different ways of accessing users' privacy information and proposes a framework named PriGuard for dynamically protecting it, based on Binder communication interception technology and a feature selection algorithm. Applications customarily call system services remotely using the Binder mechanism, then access the equipment and obtain information through system services. By redirecting the Binder interface function of the Native layer, PriGuard intercepts Binder messages and, as a result, intercepts the application's Remote Procedure Calls (RPC) to system services; it can thus dynamically monitor the application's behaviors that access privacy information. In this paper, we collect many different types of benign Application Package File (APK) samples and record the Application Programming Interface (API) calls of each sample while it runs. Afterwards we transform these API calls into feature vectors. A feature selection algorithm is used to generate the optimal feature subset. PriGuard automatically completes the privacy policy configuration for newly installed software according to the optimal feature subset, and then controls the software's calls to system services using Binder message interception technology, thereby protecting users' privacy information.
Keywords: Android (operating system); application program interfaces; authorisation; data protection; remote procedure calls; API; APK; Android; Binder communication interception technology; Binder interface function; Binder message interception technology; PriGuard framework; RPC; application installation; application package file; application programming interface; application remote procedure call; dynamic application behavior monitoring; dynamic user privacy information protection; feature selection algorithm; native layer; optimal feature subset generation; permission-based security model; privacy policy configuration; security risk; system resource access; system services; user privacy information access; user privacy information protection; Conferences; Monitoring; Privacy; Security; Smart phones; Software; Vectors; RPC intercept; android; binder; feature selection algorithm; privacy protection (ID#: 16-10058)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7069508&isnumber=7069279
B. He et al., “Vetting SSL Usage in Applications with SSLINT,” 2015 IEEE Symposium on Security and Privacy, San Jose, CA, 2015, pp. 519-534. doi: 10.1109/SP.2015.38
Abstract: Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols have become the security backbone of the Web and Internet today. Many systems including mobile and desktop applications are protected by SSL/TLS protocols against network attacks. However, many vulnerabilities caused by incorrect use of SSL/TLS APIs have been uncovered in recent years. Such vulnerabilities, many of which are caused due to poor API design and inexperience of application developers, often lead to confidential data leakage or man-in-the-middle attacks. In this paper, to guarantee code quality and logic correctness of SSL/TLS applications, we design and implement SSLINT, a scalable, automated, static analysis system for detecting incorrect use of SSL/TLS APIs. SSLINT is capable of performing automatic logic verification with high efficiency and good accuracy. To demonstrate it, we apply SSLINT to one of the most popular Linux distributions -- Ubuntu. We find 27 previously unknown SSL/TLS vulnerabilities in Ubuntu applications, most of which are also distributed with other Linux distributions.
Keywords: Linux; application program interfaces; formal verification; program diagnostics; protocols; security of data; API design; Linux distributions; SSL usage vetting; SSL-TLS protocols; SSLINT; Ubuntu; automatic logic verification; code quality; logic correctness; network attacks; secure sockets layer; static analysis system; transport layer security; Accuracy; Libraries; Protocols; Security; Servers; Software; Testing (ID#: 16-10059)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163045&isnumber=7163005
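The class of SSL/TLS API misuse that SSLINT hunts for can be illustrated with Python's standard ssl module (SSLINT itself targets C/C++ applications; this is an analogy, not the tool's subject matter). Disabling hostname checking and certificate verification is exactly the kind of misuse that enables man-in-the-middle attacks:

```python
# Illustration of SSL API misuse vs. correct use with Python's ssl module.
# This mirrors the property a static checker like SSLINT asserts; it is not
# part of the paper's toolchain.
import ssl

def insecure_context():
    """Misuse: the resulting context accepts any certificate from anyone."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # hostname no longer verified
    ctx.verify_mode = ssl.CERT_NONE     # any certificate accepted
    return ctx

def secure_context():
    """Correct use: the default context already enables CERT_REQUIRED plus
    hostname verification against the peer certificate."""
    return ssl.create_default_context()

def is_verifying(ctx):
    """The invariant a checker would assert on every TLS client context."""
    return ctx.check_hostname and ctx.verify_mode == ssl.CERT_REQUIRED
```

A static analysis like SSLINT effectively evaluates the `is_verifying` predicate over program paths at compile time rather than at runtime.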
N. Pazos, M. Müller, M. Aeberli and N. Ouerhani, “ConnectOpen – Automatic Integration of IoT Devices,” Internet of Things (WF-IoT), 2015 IEEE 2nd World Forum on, Milan, 2015, pp. 640-644. doi: 10.1109/WF-IoT.2015.7389129
Abstract: There exists today a wide consensus that the Internet of Things (IoT) is creating a wide range of business opportunities for various industries and sectors like Manufacturing, Healthcare, Public infrastructure management, Telecommunications and many others. On the other hand, the technological evolution of IoT is facing serious challenges. The fragmentation in terms of communication protocols and data formats at the device level is one of these challenges. Vendor-specific application architectures, proprietary communication protocols and a lack of IoT standards are some reasons behind this fragmentation. In this paper we propose a software-enabled framework to address the fragmentation challenge. The framework is based on flexible communication agents that are deployed on a gateway and can be adapted to various devices communicating in different data formats over different communication protocols. A communication agent is automatically generated from specifications and automatically deployed on the gateway in order to connect the devices to a central platform, where data are consolidated and exposed via REST APIs to third-party services. Security and scalability aspects are also addressed in this work.
Keywords: Internet of Things; application program interfaces; cloud computing; computer network security; internetworking; transport protocols; ConnectOpen; IoT fragmentation; REST API; automatic IoT device integration; central platform; communication agents; communication protocol; communication protocols; data formats; device level; scalability aspect; security aspect; software enabled framework; third party services; Business; Embedded systems; Logic gates; Protocols; Scalability; Security; Sensors; Communication Agent; End Device; Gateway; IoT; Kura; MQTT; OSGi (ID#: 16-10060)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7389129&isnumber=7389012
L. Qiu, Z. Zhang, Z. Shen and G. Sun, “AppTrace: Dynamic Trace on Android Devices,” 2015 IEEE International Conference on Communications (ICC), London, 2015, pp. 7145-7150. doi: 10.1109/ICC.2015.7249466
Abstract: The mass of vulnerabilities in Android alternative-market applications can threaten the security of a device or its user's data. To analyze such applications, researchers generally first observe an application's runtime features, then decompile the target application and read the complicated code to figure out what it really does. Traditional dynamic analysis methodology, for instance TaintDroid, uses dynamic taint tracking to mark information at source APIs. However, TaintDroid is constrained by requiring the target application to run in a custom sandbox that may not be compatible with all Android versions. To solve this problem and help analysts gain insight into runtime behavior, this paper presents AppTrace, a novel dynamic analysis system that uses dynamic instrumentation to trace member methods of a target application and can be deployed on any version above Android 4.0. The paper presents an evaluation of AppTrace with 8 apps from Google Play as well as 50 open source apps from F-Droid. The results show that AppTrace can successfully trace methods of target applications and effectively notify users when sensitive APIs are invoked.
Keywords: application program interfaces; smart phones; system monitoring; API; Android devices; AppTrace; Google Play; TaintDroid; dynamic instrumentation technique; dynamic taint tracking technique; dynamic trace; novel dynamic analysis system; open source apps; Androids; Humanoid robots; Instruments; Java; Runtime; Security; Smart phones (ID#: 16-10061)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249466&isnumber=7248285
S. S. Shinde and S. S. Sambare, “Enhancement on Privacy Permission Management for Android Apps,” Communication Technologies (GCCT), 2015 Global Conference on, Thuckalay, 2015, pp. 838-842. doi: 10.1109/GCCT.2015.7342779
Abstract: Nowadays everyone uses smartphone devices for personal and official data storage. Smartphone apps are often not secure and need user permission to access protected system resources. Specifically, the existing Android permission system checks whether a calling app holds the right permission to invoke sensitive system APIs. Android allows third-party applications, but at installation time the user has only limited options: agree to all terms and conditions and install the application, or reject the installation entirely. Existing approaches fail to protect users' sensitive data from being violated. To protect user privacy, secure permission management for Android applications is needed. In this paper, to make permission management fine-grained, we propose a system that lets smartphone users grant or revoke access to their private, sensitive data as they choose. The paper details, in terms of results and features, how the proposed system improves on the limitations of the existing Android permission system.
Keywords: Android (operating system); authorisation; data privacy; smart phones; APIs; Android OS; Android apps; Android permission system; official data storage; personal data storage; privacy permission management; smartphone apps; smartphone devices; third-party applications; Androids; Databases; Humanoid robots; Internet; Privacy; Security; Smart phones; Android OS; Fine-grained; Permission system; Smartphone user; privacy (ID#: 16-10062)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7342779&isnumber=7342608
A. Slominski, V. Muthusamy and R. Khalaf, “Building a Multi-Tenant Cloud Service from Legacy Code with Docker Containers,” Cloud Engineering (IC2E), 2015 IEEE International Conference on, Tempe, AZ, 2015, pp. 394-396. doi: 10.1109/IC2E.2015.66
Abstract: In this paper we address the problem of migrating a legacy Web application to a cloud service. We develop a reusable architectural pattern to do so and validate it with a case study of the Beta release of the IBM Bluemix Workflow Service [1] (herein referred to as the Beta Workflow service). It uses Docker [2] containers and a Cloudant [3] persistence layer to deliver a multi-tenant cloud service by re-using a legacy codebase. We are not aware of any literature that addresses this problem by using containers. The Beta Workflow service provides a scalable, stateful, highly available engine to compose services with REST APIs. The composition is modeled as a graph but authored in a Javascript-based domain-specific language that specifies a set of activities and the control-flow links among them. The primitive activities in the language can respond to HTTP REST requests, invoke services with REST APIs, and execute Javascript code to, among other uses, extract and construct the data inputs and outputs of external services and make calls to these services. Examples of workflows built using the service include distributing surveys and coupons to customers of a retail store [1], managing sales requests between salespeople and their regional managers, managing the staged deployment of different versions of an application, and coordinating the transfer of jobs among case workers.
Keywords: Java; application program interfaces; cloud computing; specification languages; Beta Workflow service; Cloudant persistence layer; HTTP REST requests; IBM Bluemix Workflow Service; Javascript code; Javascript-based domain specific language; REST API; docker containers; legacy Web application; legacy codebase; multitenant cloud service; reusable architectural pattern; Browsers; Cloud computing; Containers; Engines; Memory management; Organizations; Security (ID#: 16-10063)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092950&isnumber=7092808
H. Hamadeh, S. Chaudhuri and A. Tyagi, “Area, Energy, and Time Assessment for a Distributed TPM for Distributed Trust in IoT Clusters,” 2015 IEEE International Symposium on Nanoelectronic and Information Systems, Indore, 2015, pp. 225-230. doi: 10.1109/iNIS.2015.17
Abstract: IoT clusters arise from natural human societal clusters such as a house, an airport, and a highway. IoT clusters are heterogeneous, with a need for device-to-device as well as device-to-user trust. IoT devices are likely to be thin computing clients. Because of their low cost, individual IoT devices are not built to be fault tolerant through redundancy, so trust protocols cannot take the liveness of a device for granted. In fact, distinguishing a failing device from a malicious one is difficult from the trust protocol's perspective. We present a minimal distributed trust layer based on distributed-consensus-like operations. These distributed primitives are cast in the context of the APIs supported by a trusted platform module (TPM). A TPM, with its 1024-bit RSA, is a significant burden on a thin IoT design. We use RNS-based slicing of a TPM, wherein each slice resides within a single IoT device; the overall TPM functionality is distributed among several IoT devices within a cluster. The VLSI area, energy, and time savings of such a distributed TPM implementation are assessed. A sliced/distributed TPM is better suited to an IoT environment based on its resource needs. We demonstrate over 90% time reduction, over 3% area reduction, and over 90% energy reduction per IoT node to support TPM protocols.
Keywords: Internet of Things; VLSI; application program interfaces; cryptographic protocols; residue number systems; trusted computing; 1024 bit RSA; APIs; IoT clusters; RNS based slicing; TPM protocols; VLSI area savings; VLSI energy savings; VLSI time savings; distributed TPM; distributed consensus like operations; minimal distributed trust layer; sliced TPM; trust protocol; trusted platform module; Airports; Approximation algorithms; Computers; Delays; Electronic mail; Protocols; Security; Area; IoT; Residue Number System; Time and Energy; Trusted Platform Module (ID#: 16-10064)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7434429&isnumber=7434375
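The residue-number-system (RNS) slicing idea above can be sketched numerically: a large value is split into small residues, one per device, and modular arithmetic proceeds independently per slice with no carries between devices. The moduli below are small illustrative primes, not the paper's parameters:

```python
# Hedged sketch of RNS slicing: each IoT device holds one residue, arithmetic
# is carry-free per slice, and reconstruction (CRT) happens only when needed.
from math import prod

MODULI = (251, 241, 239)  # pairwise coprime; their product bounds the range

def to_rns(x):
    """Slice an integer into per-device residues."""
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    """Each device multiplies only its own residue, independently."""
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

def from_rns(residues):
    """Chinese Remainder Theorem reconstruction of the full value."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(..., -1, m): modular inverse (3.8+)
    return x % M
```

A real sliced TPM would use far larger moduli so that 1024-bit RSA operands fit within the product of the moduli; the per-slice independence shown here is what yields the paper's time and energy savings.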
A. Aflatoonian, A. Bouabdallah, K. Guillouard, V. Catros and J. M. Bonnin, “BYOC: Bring Your Own Control—A New Concept to Monetize SDN’s Openness,” Network Softwarization (NetSoft), 2015 1st IEEE Conference on, London, 2015, pp. 1-5. doi: 10.1109/NETSOFT.2015.7116147
Abstract: Software Defined Networking (SDN) is supposed to bring flexibility, dynamicity and automation to today's networks through a logically centralized network controller. We argue, however, that reaching SDN's full capacities requires the development of standardized programming capabilities on top of it. In this paper we introduce “Bring Your Own Control” (BYOC) as a new concept providing a convenient framework structuring the openness of the SDN on its northbound side. From the lifecycle characterizing the services deployed in an SDN, we derive the parts of services whose control may be delegated by the operator to external customers through dedicated application programming interfaces (APIs) located in the northbound interface (NBI). We argue that the exploitation of such services may be noticeably refined by the operator through various business models monetizing the openness of the SDN, following the new paradigm of “Earn as You Bring” (EaYB). We propose an analysis of BYOC and illustrate our approach with several use cases.
Keywords: application program interfaces; open systems; software defined networking; API; BYOC; EaYB; NBI; SDN openness; bring your own control; dedicated application programming interfaces; earn as your bring; framework structuring; logically centralized network controller; northbound interface; software defined networking; standardized programming capabilities; Business; Computer architecture; Monitoring; Multiprotocol label switching; Real-time systems; Security; Virtual private networks; Bring Your Own Control (BYOC); Business model; Earn as You Bring (EaYB); Northbound Interface (NBI); Software Defined Networking (SDN) (ID#: 16-10065)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7116147&isnumber=7116113
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
APIs 2015 (Part 2) |
Applications Programming Interfaces, APIs, are definitions of interfaces to systems or modules. As code is reused, more and more are modified from earlier code. For the Science of Security community, the problems of compositionality and resilience are direct. The research work cited here was presented in 2015.
E. Kowalczyk, A. M. Memon and M. B. Cohen, “Piecing Together App Behavior from Multiple Artifacts: A Case Study,” Software Reliability Engineering (ISSRE), 2015 IEEE 26th International Symposium on, Gaithersburg, MD, 2015, pp. 438-449. doi: 10.1109/ISSRE.2015.7381837
Abstract: Recent research in mobile software analysis has begun to combine information extracted from an app's source code and marketplace webpage to identify correlated variables and validate an app's quality properties such as its intended behavior, trust or suspiciousness. Such work typically involves analysis of one or two artifacts such as the GUI text, user ratings, app description keywords, permission requests, and sensitive API calls. However, these studies make assumptions about how the various artifacts are populated and used by developers, which may lead to a gap in the resulting analysis. In this paper, we take a step back and perform an in-depth study of 14 popular apps from the Google Play Store. We have studied a set of 16 different artifacts for each app, and conclude that the output of these must be pieced together to form a complete understanding of the app's true behavior. We show that (1) developers are inconsistent in where and how they provide descriptions; (2) each artifact alone has incomplete information; (3) different artifacts may contain contradictory pieces of information; (4) there is a need for new analyses, such as those that use image processing; and (5) without including analyses of advertisement libraries, the complete behavior of an app is not defined. In addition, we show that the number of downloads and ratings of an app does not appear to be a strong predictor of overall app quality, as these are propagated through versions and are not necessarily indicative of the current app version's behavior.
Keywords: application program interfaces; graphical user interfaces; mobile computing; source code (software); GUI text; Google Play Store; app description keywords; apps source code; marketplace webpage; mobile software analysis; permission requests; sensitive API calls; user ratings; Androids; Cameras; Data mining; Google; Humanoid robots; Security; Videos (ID#: 16-10066)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7381837&isnumber=7381793
R. Cziva, S. Jouet, K. J. S. White and D. P. Pezaros, “Container-Based Network Function Virtualization for Software-Defined Networks,” 2015 IEEE Symposium on Computers and Communication (ISCC), Larnaca, 2015, pp. 415-420. doi: 10.1109/ISCC.2015.7405550
Abstract: Today's enterprise networks almost ubiquitously deploy middlebox services to improve in-network security and performance. Although virtualization of middleboxes attracts significant attention, studies show that such implementations are still proprietary and deployed in a static manner at the boundaries of organisations, hindering open innovation. In this paper, we present an open framework to create, deploy and manage virtual network functions (NFs) in OpenFlow-enabled networks. We exploit container-based NFs to achieve low performance overhead, fast deployment and high reusability missing from today's NFV deployments. Through an SDN northbound API, NFs can be instantiated, traffic can be steered through the desired policy chain and applications can raise notifications. We demonstrate the system's operation through the development of exemplar NFs from common Operating System utility binaries, and we show that container-based NFV improves function instantiation time by up to 68% over existing hypervisor-based alternatives, and scales to one hundred co-located NFs while incurring sub-millisecond latency.
Keywords: computer network performance evaluation; computer network security; software defined networking; virtualisation; NFV deployments; OpenFlow-enabled networks; SDN northbound API; container-based NF; container-based network function virtualization; enterprise networks; function instantiation time; hypervisor-based alternatives; in-network security; middlebox services; network performance; operating system utility binaries; performance overhead; policy chain; software-defined networks; systems operation; virtual network functions; Containers; Middleboxes; Noise measurement; Ports (Computers); Routing; Servers; Virtualization (ID#: 16-10067)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7405550&isnumber=7405441
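The policy-chain steering the abstract describes, where traffic is steered through an ordered chain of NFs via the SDN northbound API, can be sketched as follows. This is an illustrative simplification, not the paper's implementation; the port numbers and rule shape are assumptions:

```python
def chain_rules(chain, nf_ports, ingress_port, egress_port):
    """Derive per-hop forwarding rules that steer a flow through an
    ordered NF policy chain on one OpenFlow switch.  Each rule forwards
    packets arriving from one stage to the next stage's container port."""
    hops = [ingress_port] + [nf_ports[nf] for nf in chain]
    rules = []
    for here, nxt in zip(hops, hops[1:] + [egress_port]):
        rules.append({"match": {"in_port": here},
                      "actions": [("output", nxt)]})
    return rules
```

A chain `["firewall", "ids"]` with the firewall container on port 2 and the IDS on port 3 yields three rules: ingress to firewall, firewall to IDS, IDS to egress.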
F. Shahzad, “Safe Haven in the Cloud: Secure Access Controlled File Encryption (SAFE) System,” Science and Information Conference (SAI), 2015, London, 2015, pp. 1329-1334. doi: 10.1109/SAI.2015.7237315
Abstract: The evolution of cloud computing has revolutionized how computing is abstracted and utilized on remote third-party infrastructure. It is now feasible to try out novel ideas over the cloud with no or very low initial cost. There are challenges in adopting cloud computing; but with these obstacles come opportunities for research in several aspects of cloud computing. One of the main issues is the data security and privacy of information stored and processed at the cloud provider's systems. In this work, a practical system (called SAFE) is designed and implemented to securely store and retrieve users' files on third-party cloud storage systems using well-established cryptographic techniques. It utilizes client-side, multilevel, symmetric/asymmetric encryption and decryption operations to provide policy-based access control and assured deletion of remotely hosted client files. SAFE is a generic application that can be extended to support any cloud storage provider as long as there is an API that supports basic file upload and download operations.
Keywords: application program interfaces; authorisation; client-server systems; cloud computing; computer network security; cryptography; data privacy; outsourcing; API; SAFE system; client-side-multilevel asymmetric encryption operation; client-side-multilevel symmetric encryption operation; client-side-multilevel-asymmetric decryption operation; client-side-multilevel-symmetric decryption operation; cloud provider systems; cloud storage provider; cryptographic techniques; data security; file download operation; file upload operation; information privacy; policy-based access control; remote third-party infrastructure; remotely hosted client file deletion; secure access controlled file encryption system; third-party cloud storage systems; user file retrieval; user file storage; Access control; Cloud computing; Encryption; Java; Servers; Assured deletion; Cryptography; Data privacy; Secure storage (ID#: 16-10068)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237315&isnumber=7237120
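The client-side envelope encryption and assured deletion that SAFE describes can be sketched as follows. This is a conceptual illustration only: the HMAC-derived keystream is a stand-in for a real authenticated cipher (e.g. AES-GCM) and must not be used as actual cryptography, and the class layout is an assumption, not the paper's design:

```python
import hashlib
import hmac
import os

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from a key (illustration only;
    a real system would use an authenticated symmetric cipher)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class SafeStore:
    """Client-side envelope encryption: each file gets a fresh data key,
    wrapped under a per-policy key.  Destroying the policy key renders
    every file stored under that policy unrecoverable (assured deletion),
    even though ciphertexts remain at the cloud provider."""
    def __init__(self):
        self.policy_keys = {}   # policy name -> key, kept client-side
        self.cloud = {}         # filename -> (wrapped data key, ciphertext)

    def put(self, name, data, policy):
        pk = self.policy_keys.setdefault(policy, os.urandom(32))
        dk = os.urandom(32)                       # per-file data key
        wrapped = xor(dk, keystream(pk, 32))      # wrap data key under policy key
        self.cloud[name] = (wrapped, xor(data, keystream(dk, len(data))))

    def get(self, name, policy):
        pk = self.policy_keys[policy]             # raises KeyError once revoked
        wrapped, ct = self.cloud[name]
        dk = xor(wrapped, keystream(pk, 32))
        return xor(ct, keystream(dk, len(ct)))

    def revoke(self, policy):
        del self.policy_keys[policy]              # assured deletion
```

After `revoke("projectA")`, every file stored under that policy is undecryptable without re-touching the remotely hosted ciphertexts.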
S. Chunwijitra et al., “The Strategy to Sustainable Sharing Resources Repository for Massive Open Online Courses in Thailand,” Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), 2015 12th International Conference on, Hua Hin, 2015, pp. 1-5. doi: 10.1109/ECTICon.2015.7206980
Abstract: This paper investigates the educational knowledge and resources needed to support lifelong Massive Open Online Courses (MOOCs), especially in Thailand. We propose a strategy to provide a resource center repository for sharing among e-Learning systems based on the Creative Commons license. The aim of the strategy is to develop a sustainable educational resource repository for massive e-Learning systems. We integrate the Open Educational Resources (OER) system and the MOOC system by using a newly implemented FedX API for exchanging resources between them. The FedX API applies the REST API and the XBlock SDK to establish resource access. Fifteen elements of Dublin Core metadata are agreed upon for interchanging resources among OER systems that support the Open Archives Initiative (OAI) standard. The proposed system is designed to run on a cloud computing system, taking advantage of its data storage, processing, bandwidth, and security.
Keywords: application program interfaces; cloud computing; computer aided instruction; open systems; security of data; storage management; FedX API; MOOC; OER system; REST API; Thailand; XBlock SDK; cloud computing system; data bandwidth; data processing; data security; data storage; e-learning system; massive open online course; open educational resources system; sustainable sharing resource repository; Electronic learning; Licenses; Standards; Massive Open Online Courses; Open Archives Initiative; Open Educational Resources; e-Learning (ID#: 16-10069)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7206980&isnumber=7206924
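The fifteen-element Dublin Core exchange the abstract mentions can be sketched as a record builder; the JSON payload shape and field validation here are assumptions for illustration, not the FedX API's actual schema:

```python
import json

# The fifteen Dublin Core elements agreed upon for inter-repository exchange.
DC_ELEMENTS = [
    "title", "creator", "subject", "description", "publisher",
    "contributor", "date", "type", "format", "identifier",
    "source", "language", "relation", "coverage", "rights",
]

def make_record(**fields):
    """Build a Dublin Core record, rejecting unknown elements and filling
    absent ones with None so every record carries all 15 keys."""
    unknown = set(fields) - set(DC_ELEMENTS)
    if unknown:
        raise ValueError(f"not Dublin Core elements: {sorted(unknown)}")
    return {el: fields.get(el) for el in DC_ELEMENTS}

record = make_record(
    title="Intro to e-Learning",
    creator="NECTEC",
    language="th",
    rights="CC BY-SA 4.0",    # Creative Commons licensing, as in the paper
)
payload = json.dumps(record)  # body of a hypothetical FedX REST exchange
```

Fixing the key set up front keeps records interchangeable between OER repositories regardless of which elements each one populates.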
P. Dewan and P. Kumaraguru, “Towards Automatic Real Time Identification of Malicious Posts on Facebook,” Privacy, Security and Trust (PST), 2015 13th Annual Conference on, Izmir, 2015, pp. 85-92. doi: 10.1109/PST.2015.7232958
Abstract: Online Social Networks (OSNs) witness a rise in user activity whenever a news-making event takes place. Cyber criminals exploit this spur in user-engagement levels to spread malicious content that compromises system reputation, causes financial losses and degrades user experience. In this paper, we characterized a dataset of 4.4 million public posts generated on Facebook during 17 news-making events (natural calamities, terror attacks, etc.) and identified 11,217 malicious posts containing URLs. We found that most of the malicious content which is currently evading Facebook's detection techniques originated from third party and web applications, while more than half of all legitimate content originated from mobile applications. We also observed greater participation of Facebook pages in generating malicious content as compared to legitimate content. We proposed an extensive feature set based on entity profile, textual content, metadata, and URL features to automatically identify malicious content on Facebook in real time. This feature set was used to train multiple machine learning models and achieved an accuracy of 86.9%. We performed experiments to show that past techniques for spam campaign detection identified less than half the number of malicious posts as compared to our model. This model was used to create a REST API and a browser plug-in to identify malicious Facebook posts in real time.
Keywords: learning (artificial intelligence); meta data; security of data; social networking (online); Facebook detection technique; Facebook page; OSN; REST API; URL feature; automatic real time identification; browser plug-in; cyber criminal; financial loss; malicious content; malicious post; metadata; multiple machine learning model; online social network; spam campaign detection; system reputation; user activity; user-engagement level; Facebook; Malware; Real-time systems; Twitter; Uniform resource locators (ID#: 16-10070)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7232958&isnumber=7232940
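The kind of textual/URL/entity features the abstract describes can be sketched as follows; the specific features and field names are illustrative assumptions, not the paper's exact feature set:

```python
import re
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://\S+")

def extract_features(post: dict) -> dict:
    """Toy feature vector in the spirit of the paper's entity-profile,
    textual, metadata, and URL features (field names are assumptions).
    A real pipeline would feed such vectors to trained classifiers."""
    text = post.get("message", "")
    urls = URL_RE.findall(text)
    return {
        "num_urls": len(urls),
        "num_unique_domains": len({urlparse(u).netloc for u in urls}),
        "text_length": len(text),
        "num_hashtags": text.count("#"),
        "all_caps_words": sum(w.isupper() and len(w) > 2 for w in text.split()),
        "from_page": int(post.get("from_type") == "page"),  # pages posted more malicious content
    }
```

Keeping extraction stateless like this is what makes real-time scoring behind a REST API or browser plug-in feasible.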
H. Kondylakis et al., “Digital Patient: Personalized and Translational Data Management through the Myhealthavatar EU Project,” 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, 2015, pp. 1397-1400. doi: 10.1109/EMBC.2015.7318630
Abstract: The advancements in healthcare practice have brought to the fore the need for flexible access to health-related information and created an ever-growing demand for the design and the development of data management infrastructures for translational and personalized medicine. In this paper, we present the data management solution implemented for the MyHealthAvatar EU research project, a project that attempts to create a digital representation of a patient's health status. The platform is capable of aggregating several knowledge sources relevant for the provision of individualized personal services. To this end, state of the art technologies are exploited, such as ontologies to model all available information, semantic integration to enable data and query translation and a variety of linking services to allow connecting to external sources. All original information is stored in a NoSQL database for reasons of efficiency and fault tolerance. Then it is semantically uplifted through a semantic warehouse which enables efficient access to it. All different technologies are combined to create a novel web-based platform allowing seamless user interaction through APIs that support personalized, granular and secure access to the relevant information.
Keywords: SQL; health care; medical information systems; ontologies (artificial intelligence); query processing; security of data; semantic Web; MyHealthAvatar EU research project; NoSQL database; Web-based platform; health-related information; healthcare practice; ontologies; personalized data management; personalized medicine; query translation; semantic warehouse; translational data management; translational medicine; Data models; Data warehouses; Europe; Joining processes; Medical services; Ontologies; Semantics (ID#: 16-10071)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7318630&isnumber=7318236
A. Arins, “Firewall as a Service in SDN OpenFlow Network,” Information, Electronic and Electrical Engineering (AIEEE), 2015 IEEE 3rd Workshop on Advances in, Riga, 2015, pp. 1-5. doi: 10.1109/AIEEE.2015.7367309
Abstract: Protecting publicly available servers on the internet today is a serious challenge, especially when encountering Distributed Denial-of-Service (DDoS) attacks. In the traditional internet, there is a narrow scope of choices one can take when ingress traffic overloads physical connection limits. This paper proposes firewall as a service in internet service provider (ISP) networks, allowing end users to request and install match-action rules in ISP edge routers. In the proposed scenario, the ISP runs a Software Defined Networking environment where the control plane is separated from the data plane, utilizing the OpenFlow protocol and the ONOS controller. For interaction between end users and the SDN controller, the author defines an Application Programming Interface (API) over a secure SSL/TLS connection. The controller is responsible for translating high-level logic into low-level rules in OpenFlow switches. This study runs experiments in an OpenFlow test-bed, researching a mechanism for end users to discard packets on ISP edge routers, thus minimizing uplink saturation and staying online.
Keywords: Internet; application program interfaces; computer network security; firewalls; routing protocols; software defined networking; Firewall; ISP edge routers; ISP networks; Internet service provider networks; ONOS controller; OpenFlow protocol; OpenFlow switches; OpenFlow test-bed; SDN Controller; SDN OpenFlow network; SSL/TLS connection; application programming interface; data plane; distributed denial-of-service attacks; high-level logics; low-level rules; match-action rules; publicly available server protection; software defined networking environment; uplink saturation; Computer crime; Control systems; Firewalls (computing); IP networks; Servers; BGP; BGP experimentation; latency (ID#: 16-10072)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7367309&isnumber=7367271
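The controller's translation of a high-level discard request into a low-level OpenFlow rule can be sketched as follows; the request schema and rule shape are assumptions for illustration (in OpenFlow 1.3, a flow entry with no instructions drops matching packets):

```python
import ipaddress

def to_flow_rule(request: dict, customer_prefix: str) -> dict:
    """Translate a high-level 'discard' request received over the API
    into an OpenFlow-style match/action entry for the ISP edge router.
    Match field names loosely follow OpenFlow 1.3 conventions."""
    dst = ipaddress.ip_network(request["dst"])
    # The controller must ensure a user only filters their own prefix.
    if not dst.subnet_of(ipaddress.ip_network(customer_prefix)):
        raise PermissionError("destination outside customer's prefix")
    match = {"eth_type": 0x0800, "ipv4_dst": str(dst)}
    if "src" in request:
        match["ipv4_src"] = str(ipaddress.ip_network(request["src"]))
    if "udp_dst_port" in request:
        match["ip_proto"] = 17
        match["udp_dst"] = request["udp_dst_port"]
    return {"priority": 40000, "match": match, "instructions": []}  # empty = drop
```

The prefix check is the crucial piece: it is what lets an ISP safely expose rule installation to end users without letting one customer black-hole another's traffic.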
A. J. Poulter, S. J. Johnston and S. J. Cox, “Using the MEAN Stack to Implement a RESTful Service for an Internet of Things Application,” Internet of Things (WF-IoT), 2015 IEEE 2nd World Forum on, Milan, 2015, pp. 280-285. doi: 10.1109/WF-IoT.2015.7389066
Abstract: This paper examines the components of the MEAN development stack (MongoDb, Express.js, Angular.js, & Node.js) and demonstrates their benefits and appropriateness for implementing RESTful web-service APIs for Internet of Things (IoT) appliances. In particular, we show an end-to-end example of this stack and discuss in detail the various components required. The paper also describes an approach to establishing a secure mechanism for communicating with IoT devices, using pull-communications.
Keywords: Internet of Things; Web services; application program interfaces; security of data; software tools; Angular.js; Express.js; Internet of Things application; IoT devices; MEAN development stack; MongoDb; Node.js; RESTful Web-service API; pull-communications; secure mechanism; Databases; Hardware; Internet of things; Libraries; Logic gates; Servers; Software; Angular.js; Express.js; IoT; MEAN; REST; web programming (ID#: 16-10073)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7389066&isnumber=7389012
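The pull-communications pattern the paper describes (the IoT device polls the service for queued commands instead of accepting inbound connections) can be sketched as follows. The paper's implementation is an Express.js/Node.js service; this Python sketch only mirrors the pattern, and the route path in the comment is an assumption:

```python
import queue

class CommandServer:
    """Server side of pull-communications: commands destined for a device
    are queued until the device itself polls for them over an outbound
    HTTPS request, so the device never needs an open inbound port."""
    def __init__(self):
        self.queues = {}

    def enqueue(self, device_id, command):
        self.queues.setdefault(device_id, queue.Queue()).put(command)

    def poll(self, device_id):
        """What a hypothetical GET /api/devices/<id>/commands route would
        return: all pending commands, or an empty list."""
        q = self.queues.get(device_id)
        out = []
        while q and not q.empty():
            out.append(q.get())
        return out
```

Because the device initiates every connection, it can sit behind NAT or a firewall and still be controlled, which is the security benefit the paper attributes to the approach.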
S. Colbran and M. Schulz, “An Update to the Software Architecture of the iLab Service Broker,” Remote Engineering and Virtual Instrumentation (REV), 2015 12th International Conference on, Bangkok, 2015, pp. 90-93. doi: 10.1109/REV.2015.7087269
Abstract: The MIT iLab architecture (consisting of Lab Servers and Service Brokers) was designed in the 1990s, and while the Lab Server was designed as a software service, the same architectural approach was not adopted for the Service Broker. This paper reports on a redesign of the Service Broker as a software service, which is itself a collection of software services. In the process of this redesign it was decided to examine the API on the Lab Server and to support not only the existing Lab Server API (to maintain support for all existing iLab Lab Servers) but also an alternative lightweight API based upon a RESTful architecture, using JSON to encode the data. As these changes required a complete rewrite of the Service Broker code base, it was decided to implement the services using Node.js, a popular approach to the implementation of servers in JavaScript. The intention was to open up the code base to developers normally associated with web development rather than with remote laboratories. A new software service named an “agent” was developed that wraps around the Service Broker to allow programmable modification of requests. The agent can also serve up an interface to user clients. The use of agents has advantages over existing implementations because it allows customised authentication schemes (such as OAuth) as well as providing different user groups with unique Lab Clients to the same Lab Servers. Lab Clients are no longer served up through the Service Broker, but can reside anywhere on the Internet and access the Service Broker via a suitable agent. One outcome of these architectural changes has been a simple integration of a remote laboratory into the Blackboard Learning Management System (LMS) using a Learning Tool Interoperability (LTI) module for user authentication.
Keywords: Internet; Java; application program interfaces; learning management systems; open systems; security of data; software architecture; user interfaces; API; Blackboard learning management system; JSON; Javascript; LMS; LTI module; Lab Server; MIT iLab architecture; Node.js; RESTful architecture; customised authentication schemes; iLab service broker; learning tool interoperability; remote laboratory; software service; user authentication; Authentication; Computer architecture; Protocols; Remote laboratories; Servers; Software; ISA; ISABM; MIT iLab; Web Services; iLab Service Broker (ID#: 16-10074)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7087269&isnumber=7087248
B. Caillat, B. Gilbert, R. Kemmerer, C. Kruegel and G. Vigna, “Prison: Tracking Process Interactions to Contain Malware,” High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, New York, NY, 2015, pp. 1282-1291. doi: 10.1109/HPCC-CSS-ICESS.2015.297
Abstract: Modern operating systems provide a number of different mechanisms that allow processes to interact. These interactions can generally be divided into two classes: inter-process communication techniques, which a process supports to provide services to its clients, and injection methods, which allow a process to inject code or data directly into another process' address space. Operating systems support these mechanisms to enable better performance and to provide simple and elegant software development APIs that promote cooperation between processes. Unfortunately, process interaction channels introduce problems at the end-host that are related to malware containment and the attribution of malicious actions. In particular, host-based security systems rely on process isolation to detect and contain malware. However, interaction mechanisms allow malware to manipulate a trusted process to carry out malicious actions on its behalf. In this case, existing security products will typically either ignore the actions or mistakenly attribute them to the trusted process. For example, a host-based security tool might be configured to deny untrusted processes from accessing the network, but malware could circumvent this policy by abusing a (trusted) web browser to get access to the Internet. In short, an effective host-based security solution must monitor and take into account interactions between processes. In this paper, we present Prison, a system that tracks process interactions and prevents malware from leveraging benign programs to fulfill its malicious intent. To this end, an operating system kernel extension monitors the various system services that enable processes to interact, and the system analyzes the calls to determine whether or not the interaction should be allowed. Prison can be deployed as an online system for tracking and containing malicious process interactions to effectively mitigate the threat of malware. 
The system can also be used as a dynamic analysis tool to aid an analyst in understanding a malware sample's effect on its environment.
Keywords: Internet; application program interfaces; invasive software; online front-ends; operating system kernels; software engineering; system monitoring; Prison; Web browser; code injection; dynamic analysis tool; host-based security solution; host-based security systems; injection method; interprocess communication technique; malicious action attribution; malware containment; operating system kernel extension; process address space; process interaction tracking; process isolation; software development API; trusted process; Browsers; Kernel; Malware; Monitoring; inter-process communication; prison; windows (ID#: 16-10075)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336344&isnumber=7336120
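The attribution idea behind Prison, tracing an action by a trusted process back through interaction edges to an untrusted origin, can be sketched as follows; the data model is a minimal assumption, not the kernel extension's actual bookkeeping:

```python
class InteractionTracker:
    """Minimal model of Prison's core idea: record injection/IPC edges
    between processes, then attribute an action performed by a trusted
    process back to any untrusted process that manipulated it."""
    def __init__(self, untrusted):
        self.untrusted = set(untrusted)
        self.influenced_by = {}        # pid -> set of pids that injected into it

    def record_injection(self, src, dst):
        self.influenced_by.setdefault(dst, set()).add(src)

    def attribute(self, pid):
        """Walk interaction edges back to an untrusted origin, if any."""
        seen, stack = set(), [pid]
        while stack:
            p = stack.pop()
            if p in self.untrusted:
                return p
            if p in seen:
                continue
            seen.add(p)
            stack.extend(self.influenced_by.get(p, ()))
        return None

    def allow_network(self, pid):
        """Policy check: deny network access when the action traces back
        to an untrusted process, even if `pid` itself is trusted."""
        return self.attribute(pid) is None
```

This captures the paper's web-browser example: once malware injects into the trusted browser, the browser's network activity is attributed to the malware and blocked.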
W. He and D. Jap, “Dual-Rail Active Protection System Against Side-Channel Analysis in FPGAs,” 2015 IEEE 26th International Conference on Application-specific Systems, Architectures and Processors (ASAP), Toronto, ON, 2015, pp. 64-65. doi: 10.1109/ASAP.2015.7245707
Abstract: The security of the implemented cryptographic module in hardware has seen severe vulnerabilities against Side-Channel Attack (SCA), which is capable of retrieving hidden things by observing the pattern or quantity of unintentional information leakage. Dual-rail Precharge Logic (DPL) theoretically thwarts side-channel analyses by its low-level compensation manner, while the security reliability of DPLs can only be achieved at high resource expenses and degraded performance. In this paper, we present a dynamic protection system for selectively configuring the security-sensitive crypto modules to SCA-resistant dual-rail style in the scenario that the real-time threat is detected. The threat-response mechanism helps to dynamically balance the security and cost. The system is driven by a set of automated dual-rail conversion APIs for partially transforming the cryptographic module into its dual-rail format, particularly to a highly secure symmetric and interleaved placement. The elevated security grade from the safe to threat mode is validated by EM based mutual information analysis using fine-grained surface scan to a decapsulated Virtex-5 FPGA on SASEBO GII board.
Keywords: cryptography; field programmable gate arrays; reliability; DPL; EM based mutual information analysis; SASEBO GII board; SCA-resistant dual-rail style; Virtex-5 FPGA; automated dual-rail conversion API; cryptographic module; dual-rail active protection system; dual-rail format; dual-rail precharge logic; dynamic protection system; fine-grained surface scan; information leakage; security reliability; security-sensitive cryptomodules; side-channel analysis; side-channel attack; threat-response mechanism; Ciphers; Field programmable gate arrays; Hardware; Mutual information; Rails (ID#: 16-10076)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7245707&isnumber=7245687
M. K. Debnath, S. Samet and K. Vidyasankar, “A Secure Revocable Personal Health Record System with Policy-Based Fine-Grained Access Control,” Privacy, Security and Trust (PST), 2015 13th Annual Conference on, Izmir, 2015, pp. 109-116. doi: 10.1109/PST.2015.7232961
Abstract: Collaborative sharing of information is an increasingly needed technique for achieving complex goals in today's fast-paced, tech-dominant world. In this context, the Personal Health Record (PHR) system has become a popular research area for sharing patients' information quickly among health professionals. PHR systems store and process sensitive information and should have proper security mechanisms to protect data. Thus, the access control mechanisms of a PHR should be well defined. Secondly, PHRs should be stored in encrypted form. Therefore, cryptographic schemes offering a suitable solution for enforcing access policies based on user attributes are needed. Attribute-based encryption can resolve these problems. We have proposed a framework with a fine-grained access control mechanism that protects PHRs against service providers and malicious users. We have used the Ciphertext Policy Attribute Based Encryption system as an efficient cryptographic technique, enhancing the security and privacy of the system, as well as enabling access revocation in a hierarchical scheme. The Web Services and APIs for the proposed framework have been developed and implemented, along with an Android mobile application for the system.
Keywords: authorisation; cryptography; data protection; electronic health records; API; Android mobile application; PHR system; Web services; access policies; access revocation; ciphertext policy attribute based encryption system; collaborative information sharing; cryptographic schemes; cryptographic technique; health professionals; malicious users; patients information sharing; policy-based fine-grained access control; secure revocable personal health record system; security mechanisms; service providers; system privacy; system security; tech-dominant world; user attributes; Access control; Data privacy; Encryption; Medical services; Servers; Attribute Revocation; Attribute-Based Encryption; Fine-Grained Access Control; Patient-centric Data Privacy; Personal Health Records (ID#: 16-10077)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7232961&isnumber=7232940
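The access decision that Ciphertext Policy Attribute Based Encryption enforces cryptographically (decryption succeeds only if the user's attributes satisfy the policy attached to the ciphertext) can be modeled as a policy-tree evaluation; the tuple-based policy encoding here is an assumption for illustration:

```python
def satisfies(policy, attributes: set) -> bool:
    """Evaluate a CP-ABE-style access policy tree against a user's
    attribute set.  Real CP-ABE enforces this inside the cryptography;
    this sketch only models the resulting access decision."""
    if isinstance(policy, str):            # leaf: a single required attribute
        return policy in attributes
    op, *children = policy                 # ("and" | "or", subtree, ...)
    results = (satisfies(c, attributes) for c in children)
    return all(results) if op == "and" else any(results)

# Hypothetical policy for a cardiology record in a PHR.
policy = ("or",
          ("and", "doctor", "cardiology"),  # treating specialists
          "patient")                        # the record's owner
```

Revoking an attribute (the paper's hierarchical revocation) corresponds to removing it from a user's set, after which `satisfies` fails for policies that require it.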
S. Hou, L. Chen, E. Tas, I. Demihovskiy and Y. Ye, “Cluster-Oriented Ensemble Classifiers for Intelligent Malware Detection,” Semantic Computing (ICSC), 2015 IEEE International Conference on, Anaheim, CA, 2015, pp. 189-196. doi: 10.1109/ICOSC.2015.7050805
Abstract: With the explosive growth of malware and its damage to computer security, malware detection is a cyber security topic of great interest. Many research efforts have been conducted on developing intelligent malware detection systems applying data mining techniques. Such techniques have had success in clustering or classifying particular sets of malware samples, but they have limitations that leave considerable room for improvement. Specifically, based on the analysis of the file contents extracted from the file samples, existing research applies only specific clustering or classification methods, but does not integrate them. In fact, learning class boundaries for malware detection between overlapping class patterns is a difficult problem. In this paper, resting on the analysis of Windows Application Programming Interface (API) calls extracted from the file samples, we develop an intelligent malware detection system using cluster-oriented ensemble classifiers. To the best of our knowledge, this is the first work applying such a method to malware detection. A comprehensive experimental study on a real and large data collection from the Comodo Cloud Security Center is performed to compare various malware detection approaches. Promising experimental results demonstrate that the accuracy and efficiency of our proposed method outperform other data mining based detection techniques.
Keywords: application program interfaces; data mining; invasive software; pattern classification; pattern clustering; Comodo Cloud Security Center; Windows API; Windows application programming interface; cluster-oriented ensemble classifiers; computer security; cybersecurity; data mining techniques; intelligent malware detection; Training (ID#: 16-10078)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7050805&isnumber=7050753
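The cluster-then-classify shape of a cluster-oriented ensemble can be sketched as follows; the naive seeding and majority-label base learners are simplifying assumptions, as the paper's actual clustering and base classifiers are richer:

```python
import math

class ClusterEnsemble:
    """Simplified cluster-oriented ensemble: partition training vectors
    (e.g. API-call frequency features) around seed centroids, fit one
    trivial majority-label 'classifier' per cluster, and let the nearest
    cluster's classifier label each new sample.  Local classifiers can
    cope with overlapping class boundaries better than one global one."""
    def fit(self, X, y, k=2):
        self.centroids = X[:k]                  # naive seeding: first k points
        buckets = [[] for _ in range(k)]
        for xi, yi in zip(X, y):
            j = min(range(k), key=lambda j: math.dist(xi, self.centroids[j]))
            buckets[j].append(yi)
        # Per-cluster majority label stands in for a trained base learner.
        self.local = [max(set(b), key=b.count) if b else None for b in buckets]
        return self

    def predict(self, x):
        j = min(range(len(self.centroids)),
                key=lambda j: math.dist(x, self.centroids[j]))
        return self.local[j]
```

In the paper's setting, the feature vectors would be derived from Windows API calls and each cluster's base learner would be a full classifier rather than a majority vote.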
L. Chen, T. Li, M. Abdulhayoglu and Y. Ye, “Intelligent Malware Detection Based on File Relation Graphs,” Semantic Computing (ICSC), 2015 IEEE International Conference on, Anaheim, CA, 2015, pp. 85-92. doi: 10.1109/ICOSC.2015.7050784
Abstract: Due to its damage to Internet security, malware and its detection have caught the attention of both the anti-malware industry and researchers for decades. Many research efforts have been conducted on developing intelligent malware detection systems. In these systems, resting on the analysis of file contents extracted from the file samples, such as Application Programming Interface (API) calls, instruction sequences, and binary strings, data mining methods such as Naive Bayes and Support Vector Machines have been used for malware detection. However, driven by economic benefits, both the diversity and sophistication of malware have significantly increased in recent years. The anti-malware industry therefore calls for novel methods that are capable of protecting users against new threats and are more difficult to evade. In this paper, rather than relying on file contents extracted from the file samples, we study how file relation graphs can be used for malware detection and propose a novel Belief Propagation algorithm based on the constructed graphs to detect newly unknown malware. A comprehensive experimental study on a real and large data collection from the Comodo Cloud Security Center is performed to compare various malware detection approaches. Promising experimental results demonstrate that the accuracy and efficiency of our proposed method outperform other data mining based detection techniques.
Keywords: belief maintenance; cloud computing; data mining; invasive software; support vector machines; API call; Comodo cloud security center; Internet security; anti-malware industry; application programming interface; belief propagation algorithm; binary strings; data mining method; file relation graph; instruction sequences; intelligent malware detection system; malware diversity; malware sophistication; naive Bayes method; Facebook; Welding (ID#: 16-10079)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7050784&isnumber=7050753
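The graph inference the abstract describes can be sketched with a simplified label-propagation stand-in for belief propagation: known-benign and known-malicious files anchor the scores, and unknown files are pulled toward their neighbours in the relation graph. The damping constant and score encoding are assumptions:

```python
def propagate(edges, priors, rounds=10, damping=0.85):
    """Simplified label propagation over a file-relation graph (a stand-in
    for the paper's Belief Propagation): each file's maliciousness score
    moves toward the mean score of its neighbours, while labelled files
    keep their prior.  `edges` is a list of undirected (file, file) pairs;
    scores run from 0.0 (benign) to 1.0 (malicious), 0.5 = unknown."""
    neigh = {}
    for a, b in edges:
        neigh.setdefault(a, []).append(b)
        neigh.setdefault(b, []).append(a)
    scores = {n: priors.get(n, 0.5) for n in neigh}
    for _ in range(rounds):
        new = {}
        for n in scores:
            if n in priors:                       # labelled files are anchored
                new[n] = priors[n]
            else:
                ns = neigh[n]
                new[n] = (damping * sum(scores[m] for m in ns) / len(ns)
                          + (1 - damping) * 0.5)
        scores = new
    return scores
```

An unknown file related only to known malware thus inherits a high score without any of its own contents being inspected, which is the evasion-resistance the paper argues for.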
A. Javed and M. Akhlaq, “Patterns in Malware Designed for Data Espionage and Backdoor Creation,” 2015 12th International Bhurban Conference on Applied Sciences and Technology (IBCAST), Islamabad, 2015, pp. 338-342. doi: 10.1109/IBCAST.2015.7058526
Abstract: In the recent past, malware has become a serious cyber security threat that has targeted not only individuals and organizations but also the cyber space of countries around the world. Amongst malware variants, trojans designed for data espionage and backdoor creation dominate the threat landscape. This necessitates an in-depth study of these malware with the aim of extracting static features such as APIs, strings, IP addresses, URLs, and email addresses commonly found in such malicious code. Hence, in this research paper, an endeavor has been made to establish a set of patterns, tagged as APIs and malicious strings, persistently present in these malware, by articulating an analysis framework.
Keywords: application program interfaces; feature extraction; invasive software; APIs; backdoor creation; cyber security threat; data espionage; malicious codes; malicious strings; malware; static feature extraction; trojans; Accuracy; Feature extraction; Lead; Malware; Sensitivity (ID#: 16-10080)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058526&isnumber=7058466
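The static-feature extraction the paper describes (APIs, strings, IP addresses, URLs, and email addresses pulled from malicious code) can be sketched with regexes over a sample's printable strings; the API list and patterns here are small illustrative assumptions, and real triage tools use much stricter ones:

```python
import re

PATTERNS = {
    # A few Windows APIs commonly abused for injection/persistence/download.
    "api":   re.compile(r"\b(?:CreateRemoteThread|WriteProcessMemory|"
                        r"VirtualAllocEx|RegSetValueEx|URLDownloadToFile)\w*\b"),
    "url":   re.compile(r"https?://[^\s'\"]+"),
    "ip":    re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def extract_iocs(strings):
    """Pull API names, URLs, IPs and email addresses out of the printable
    strings of a sample, in the spirit of the paper's static framework."""
    found = {kind: set() for kind in PATTERNS}
    for s in strings:
        for kind, rx in PATTERNS.items():
            found[kind].update(rx.findall(s))
    return found
```

Recurring combinations of such indicators across samples are exactly the persistent patterns the paper sets out to catalogue.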
W. Li, J. Ge and G. Dai, “Detecting Malware for Android Platform: An SVM-Based Approach,” Cyber Security and Cloud Computing (CSCloud), 2015 IEEE 2nd International Conference on, New York, NY, 2015, pp. 464-469. doi: 10.1109/CSCloud.2015.50
Abstract: In recent years, Android has become one of the most popular mobile operating systems because of numerous mobile applications (apps) it provides. However, the malicious Android applications (malware) downloaded from third-party markets have significantly threatened users' security and privacy, and most of them remain undetected due to the lack of efficient and accurate malware detection techniques. In this paper, we study a malware detection scheme for Android platform using an SVM-based approach, which integrates both risky permission combinations and vulnerable API calls and use them as features in the SVM algorithm. To validate the performance of the proposed approach, extensive experiments have been conducted, which show that the proposed malware detection scheme is able to identify malicious Android applications effectively and efficiently.
Keywords: invasive software; mobile computing; support vector machines; API calls; Android platform; SVM-based approach; application program interface; malware detection; mobile applications; mobile operating systems; user privacy; user security; Androids; Feature extraction; Humanoid robots; Malware; Mobile communication; Smart phones; Android; Support Vector Machine (SVM); TF-IDF; malware (ID#: 16-10081)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371523&isnumber=7371418
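The permission-plus-API-call feature idea above can be sketched as follows. A simple perceptron stands in for the paper's SVM so the sketch needs no third-party libraries, and the feature names and toy apps are invented for illustration, not taken from the paper's feature set.

```python
# Linear classification over risky-permission / API-call indicator features.
# A perceptron stands in for the paper's SVM; all names are illustrative.

FEATURES = ["SEND_SMS", "READ_CONTACTS", "INTERNET",       # permissions
            "getDeviceId", "sendTextMessage", "exec"]      # API calls

def vectorize(app):
    """Map the set of permissions/API calls observed in an app to a 0/1 vector."""
    return [1 if f in app else 0 for f in FEATURES]

def train(samples, epochs=20, lr=1.0):
    """Perceptron training; labels are +1 (malware) / -1 (benign)."""
    w = [0.0] * len(FEATURES)
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:                 # misclassified: update weights
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Toy training data: malware tends to combine SMS and device-ID APIs.
samples = [
    (vectorize({"SEND_SMS", "getDeviceId", "sendTextMessage"}), +1),
    (vectorize({"SEND_SMS", "exec", "READ_CONTACTS"}), +1),
    (vectorize({"INTERNET"}), -1),
    (vectorize({"READ_CONTACTS", "INTERNET"}), -1),
]
w, b = train(samples)
```

On this separable toy set the perceptron converges in a couple of epochs; an SVM would additionally maximize the margin between the two classes.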
H. Chen, L. J. Zhang, B. Hu, S. Z. Long and L. H. Luo, “On Developing and Deploying Large-File Upload Services of Personal Cloud Storage,” Services Computing (SCC), 2015 IEEE International Conference on, New York, NY, 2015, pp. 371-378. doi: 10.1109/SCC.2015.58
Abstract: Personal cloud storage is rapidly gaining popularity. A number of Internet service providers, such as Google and Baidu, have entered this emerging market and developed a variety of cloud storage services. These ubiquitous services allow people to access personal files anywhere in the world at any time. With the prevalence of the mobile Internet and rich media on the web, more and more people use cloud storage for working documents, music, private photos and movies. Nevertheless, the size of media files is often beyond the upper limit that a normal form-based file upload allows, so dedicated large-file upload services must be developed. Although various cloud vendors offer versatile cloud storage services, very little is known about the detailed development and deployment of large-file upload services. This paper proposes a complete large-file upload solution, with manifold contributions: Firstly, we do not limit the maximum size of an uploaded file, which is extremely practical for storing huge database files from ERP tools. Secondly, we developed large-file upload service APIs with very strict verification of correctness, to reduce the risk of data inconsistency. Thirdly, we extend a recently developed team-collaboration service with the capability of handling large files. Fourthly, this paper is arguably the first to formalize the testing and deployment procedures of large-file upload services with the help of Docker. In general, most large-file upload services are exposed to the public, facing security and performance issues of much concern. With the proposed Docker-based deployment strategy, we can replicate the large-file upload service agilely and locally, to satisfy massive private or local deployments of KDrive. Finally, we evaluate and analyze the proposed strategies and technologies in accordance with the experimental results.
Keywords: Internet; application program interfaces; cloud computing; mobile computing; storage management; Docker-based deployment strategy; ERP tools; Internet service providers; cloud storage services; database files; large-file upload service APIs; local KDrive deployment; media files; mobile Internet; normal form-based file upload; personal cloud storage; risk reduction; ubiquitous services; Cloud computing; Context; Databases; Google; Media; Servers; Testing; Docker; Large-file Upload; Personal Cloud Storage; Team Collaboration (ID#: 16-10082)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7207376&isnumber=7207317
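The "strict verification of correctness" idea in chunked uploads can be illustrated as below. The chunk size, manifest shape and reassembly step are assumptions made for this sketch, not KDrive's actual protocol.

```python
# Client-side chunking with per-chunk SHA-256 checksums, and a server-side
# reassembly step that refuses corrupted chunks. Sizes are tiny for demo.
import hashlib

CHUNK_SIZE = 4  # bytes; real services use megabyte-scale chunks

def split_with_checksums(data, chunk_size=CHUNK_SIZE):
    """Yield (index, chunk, sha256 hex digest) triples for upload."""
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        yield i // chunk_size, chunk, hashlib.sha256(chunk).hexdigest()

def reassemble(parts):
    """Server side: verify every chunk's digest, then join in index order."""
    out = []
    for idx, chunk, digest in sorted(parts):
        if hashlib.sha256(chunk).hexdigest() != digest:
            raise ValueError(f"chunk {idx} failed verification")
        out.append(chunk)
    return b"".join(out)

payload = b"a large media file, far beyond the form-upload limit"
received = reassemble(split_with_checksums(payload))
```

Because each chunk carries its own digest, a transfer error is caught at the offending chunk, and only that chunk needs retransmission.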
N. Thamsirarak, T. Seethongchuen and P. Ratanaworabhan, “A Case for Malware that Make Antivirus Irrelevant,” Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), 2015 12th International Conference on, Hua Hin, 2015, pp. 1-6. doi: 10.1109/ECTICon.2015.7206972
Abstract: Most security researchers realize that the effectiveness of antivirus software (AV) is questionable at best. However, people in the general public still use it daily, perhaps for lack of better alternatives. It is well known that the signature-based detection technique used in almost all commercial and non-commercial AV cannot be completely effective against zero-day malware. Many evaluations conducted by renowned security firms confirm this. These evaluations often employ sophisticated malware, involve elaborate schemes, and require more resources than an average person has available to replicate them. This paper investigates the creation of simple zero-day malware that can comprehensively exploit hosts and protractedly evade the installed AV products. What we discovered is alarming, but illuminating. Our malware, written in a high-level language using well-documented APIs, are able to bypass AV detection and launch full-fledged exploits similar to sophisticated malware. In addition, they are able to stay undetected for much longer than other previously reported zero-day malware. We attribute such success to the unreadiness of AV products against malware in intermediate language form. On a positive note, a firewall-like AV product that, to a certain extent, incorporates behavioral-based detection is able to warn against our malware.
Keywords: application program interfaces; computer viruses; digital signatures; firewalls; APIs; AV detection; antivirus software; firewall-like AV product; signature-based detection technique; zero-day malware; Floods; Malware; Software; Testing; Uniform resource locators; Viruses (medical); Antivirus software evaluation; signature-based detection; zero-day exploits (ID#: 16-10083)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7206972&isnumber=7206924
J. Xue et al., “Task-D: A Task Based Programming Framework for Distributed System,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1663-1668. doi: 10.1109/HPCC-CSS-ICESS.2015.299
Abstract: We present Task-D, a task-based distributed programming framework. Traditionally, writing distributed programs requires either low-level MPI or high-level pattern-based models such as Hadoop/Spark. Task-based models are widely used for multicore and heterogeneous environments rather than distributed ones. Task-D tries to bridge this gap by creating a higher-level abstraction than MPI, while providing more flexibility than Hadoop/Spark for task-based distributed programming. The Task-D framework relieves programmers of the complexities involved in distributed programming. We provide a set of APIs that can be directly embedded into user code to enable the program to run in a distributed fashion across heterogeneous computing nodes. We also explore the design space and the features the runtime should support, including data communication among tasks, data sharing among programs, resource management, memory transfers, job scheduling, automatic workload balancing, fault tolerance, etc. A prototype system is realized as one implementation of Task-D. A distributed ALS algorithm implemented with the Task-D APIs achieved significant performance gains compared to a Spark-based implementation. We conclude that task-based models are well suited to distributed programming. Task-D not only improves programmability for distributed environments, but also delivers performance through effective runtime support.
Keywords: application program interfaces; message passing; parallel programming; automatic workload balancing; data communication; distributed ALS algorithm; distributed programming; distributed system; heterogeneous computing node; high-level pattern based; job scheduling; low-level MPI; resource management; task-D API; task-based programming framework; Algorithm design and analysis; Data communication; Fault tolerance; Fault tolerant systems; Programming; Resource management; Synchronization (ID#: 16-10084)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336408&isnumber=7336120
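The task-with-dependencies model the abstract describes can be sketched in miniature. A local thread pool stands in for Task-D's distributed runtime, and the API names here are invented for illustration, not Task-D's actual interface.

```python
# Toy task-based API: tasks are submitted with explicit data dependencies,
# and each task waits for its inputs (prior tasks) before running.
from concurrent.futures import ThreadPoolExecutor

class TaskRuntime:
    def __init__(self, workers=4):
        self.pool = ThreadPoolExecutor(max_workers=workers)

    def submit(self, fn, *deps):
        """Schedule fn; each dep may be a plain value or a previously submitted task."""
        def run():
            # Resolve dependencies: futures are awaited, literals passed through.
            args = [d.result() if hasattr(d, "result") else d for d in deps]
            return fn(*args)
        return self.pool.submit(run)

rt = TaskRuntime()
a = rt.submit(lambda: list(range(10)))      # produce data
b = rt.submit(lambda xs: sum(xs), a)        # consumes a's output
c = rt.submit(lambda s, k: s * k, b, 2)     # mixes a task result and a literal
result = c.result()                          # sum(range(10)) * 2
```

In a real distributed runtime the dependency resolution step would also cover data movement between nodes, scheduling, and fault tolerance, which this local sketch omits.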
P. J. Chen and Y. W. Chen, “Implementation of SDN Based Network Intrusion Detection and Prevention System,” Security Technology (ICCST), 2015 International Carnahan Conference on, Taipei, 2015, pp. 141-146. doi: 10.1109/CCST.2015.7389672
Abstract: In recent years, the rise of software-defined networks (SDN) has made network control more flexible and easier to set up and manage, and has provided a stronger ability to adapt to the changing demands of application development and network conditions. The network becomes easier to maintain and also achieves improved security as a result of SDN. The SDN architecture separates the Control Plane from the Forwarding Plane and uses open APIs to realize programmable control. SDN allows third-party applications to be imported to improve an existing network service or even provide a new one. In this paper, we present a defense mechanism that finds attack packets through a Sniffer function; once an abnormal flow is found, the protection mechanism of the Firewall function is activated. For packet capture, available libraries are used to determine the properties and contents of a malicious packet and to anticipate possible attacks. Through the prediction of latent malicious behaviors, our defense algorithm can prevent potential losses such as system failures or crashes and reduce the risk of attack.
Keywords: application program interfaces; firewalls; software defined networking; SDN based network intrusion detection and prevention system; control plane separation; defense mechanism; firewall; forwarding plane separation; malicious packet; open APIs; packet sniffer function; software-defined networks; third-party applications; Control systems; Firewalls (computing); Operating systems; Ports (Computers); Routing; Controller; Defense Mechanism; Firewall; OpenFlow; Packet Sniffer; SDN; Software Defined Networks (ID#: 16-10085)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7389672&isnumber=7389647
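The sniffer-then-firewall pipeline described above reduces, at its core, to flagging abnormal flows and pushing drop rules for them. The threshold, packet representation and rule format below are assumptions for this sketch, not the paper's implementation.

```python
# Flow-rate anomaly detection feeding firewall-style drop rules.
from collections import Counter

FLOOD_THRESHOLD = 100  # packets per observation window; illustrative value

def detect_abnormal(packets, threshold=FLOOD_THRESHOLD):
    """Return the set of source addresses exceeding the per-window packet rate."""
    counts = Counter(src for src, _dst in packets)
    return {src for src, n in counts.items() if n > threshold}

def build_drop_rules(abnormal):
    """Emulate pushing firewall (drop) rules for each flagged source."""
    return [{"match": {"ipv4_src": src}, "action": "DROP"}
            for src in sorted(abnormal)]

# One window of sniffed (src, dst) pairs: one host floods, one behaves.
window = [("10.0.0.9", "10.0.0.1")] * 150 + [("10.0.0.2", "10.0.0.1")] * 5
rules = build_drop_rules(detect_abnormal(window))
```

In an SDN controller the generated rules would be installed on switches via OpenFlow flow-mod messages rather than returned as dictionaries.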
C. H. Lin, P. Y. Sun and F. Yu, “Space Connection: A New 3D Tele-immersion Platform for Web-Based Gesture-Collaborative Games and Services,” Games and Software Engineering (GAS), 2015 IEEE/ACM 4th International Workshop on, Florence, 2015, pp. 22-28. doi: 10.1109/GAS.2015.12
Abstract: The 3D tele-immersion technique has brought a revolutionary change to human interaction: physically separated users can interact naturally with each other through body gestures in a shared 3D virtual environment. The scheme of cloud- or Web-based applications, on the other hand, facilitates global connections among players without the need for additional equipment. To realize Web-based 3D immersion techniques, we propose Space Connection, which integrates techniques for virtual collaboration and motion sensing with the aim of pushing motion sensing a step forward to seamless collaboration among multiple users. Space Connection provides not only human-computer interaction but also instant human-to-human collaboration with body gestures beyond physical space boundaries. Technically, developing gesture-interactive applications requires parsing signals from motion sensing devices, handling network data transformation, and synchronizing states among multiple users. The challenge for web-based applications comes from the fact that there is no native library for browser applications to access the interfaces of motion sensing devices, due to the security sandbox policy. We further develop a new socket transmission protocol that provides transparent APIs for browsers and external devices. We develop an interactive ping pong game and a rehabilitation system as two example applications of the presented technique.
Keywords: Web services; application program interfaces; computer games; gesture recognition; groupware; human computer interaction; virtual reality; 3D teleimmersion technique; Web-based gesture collaborative game; Web-based gesture collaborative services; gesture-interactive applications; human interaction; human-computer interaction; instant human-to-human collaboration; motion sensing device; motion sensing technique; network data transformation; physical space boundary; security sandbox policy; shared 3D virtual environment; socket transmission protocol; space connection; state synchronisation; transparent API; virtual collaboration; Browsers; Collaboration; Games; Sensors; Servers; Sockets; Three-dimensional displays; Kinect applications; Motion sensing; Space Connection (ID#: 16-10086)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7169465&isnumber=7169453
J. Horalek, R. Cimler and V. Sobeslav, “Virtualization Solutions for Higher Education Purposes,” Radioelektronika (RADIOELEKTRONIKA), 2015 25th International Conference, Pardubice, 2015, pp. 383-388. doi: 10.1109/RADIOELEK.2015.7128970
Abstract: Utilization of virtualization and cloud computing technologies is very topical, and a large number of different technologies, tools and software solutions exist. Successful adoption of this kind of solution may effectively support the teaching of specialized topics such as programming, operating systems, computer networks, security and many others, offering remote access, automated deployment of infrastructure, APIs, virtualized study environments, etc. The goal of this paper is to explore the broad range of virtualization technologies and propose a solution covering the analysis, design and practical implementation of a virtual laboratory that can serve as an educational tool.
Keywords: application program interfaces; cloud computing; computer aided instruction; further education; virtualisation; API; automated infrastructure deployment; cloud computing technologies; computer networks; educational tool; operating system; remote access; security; software solutions; virtual laboratory; virtualization technologies; virtualized study environment; Computers; Education; Hardware; Protocols; Servers; Software; Virtualization (ID#: 16-10087)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7128970&isnumber=7128969
V. Mehra, V. Jain and D. Uppal, “DaCoMM: Detection and Classification of Metamorphic Malware,” Communication Systems and Network Technologies (CSNT), 2015 Fifth International Conference on, Gwalior, 2015, pp. 668-673. doi: 10.1109/CSNT.2015.62
Abstract: With the fast and vast growth of the IT sector in the 21st century, the question of system security also arises. While the IT field grows positively on one side, malware attacks rise on the other, so zero-day malware attacks pose a great challenge. Authors of metamorphic and polymorphic malware gain an extra advantage through mutation engines and virus generation toolkits, as they can produce as many malware variants as they want. Our approach focuses on the detection and classification of metamorphic malware, which are the hardest for antivirus scanners to detect because each variant differs structurally. We gathered a total of 600 malware samples, including some that bypass AV scanners, and 150 benign files. These files are disassembled and preprocessed, and control flow graphs and API call graphs are generated. We propose the Gourmand Feature Selection algorithm for selecting desired features from the call graphs. Classification is done with the WEKA tool, in which J-48 gave the best accuracy, 99.10%. Once metamorphic malware are detected, they are classified into their families using histograms and the Chi-square distance formula.
Keywords: application program interfaces; computer viruses; feature selection; pattern classification; API call graphs; DaCoMM; IT sector; WEKA tool; antivirus scanners; control flow graphs; gourmand feature selection algorithm; metamorphic malware classification; metamorphic malware detection; mutation engine; polymorphic malware; system security; virus generation toolkits; zero day malware attack; Classification algorithms; Engines; Flow graphs; Generators; Histograms; Malware; Software; code obfuscation; histograms; metamorphic malware (ID#: 16-10088)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7280002&isnumber=7279856
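The Chi-square distance step named in the abstract can be shown concretely: family assignment by nearest histogram. The family names and histogram values below are invented toy data, and the smoothing constant is an assumption to avoid division by zero.

```python
# Chi-square distance between call-frequency histograms, used to assign a
# detected metamorphic sample to the family with the nearest mean histogram.
def chi_square_distance(h1, h2, eps=1e-10):
    """0.5 * sum((a - b)^2 / (a + b)) over histogram bins, smoothed by eps."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

def classify(sample, families):
    """Return the family name whose histogram is nearest to the sample."""
    return min(families, key=lambda name: chi_square_distance(sample, families[name]))

families = {
    "NGVCK": [0.4, 0.3, 0.2, 0.1],   # toy per-family API-call histograms
    "G2":    [0.1, 0.1, 0.4, 0.4],
}
label = classify([0.35, 0.3, 0.25, 0.1], families)
```

A sample whose call-frequency profile closely tracks one family's histogram receives that family's label even when its byte-level structure differs, which is exactly why histogram distances suit metamorphic code.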
Y. Liu, “Teaching Programming on Cloud: A Perspective Beyond Programming,” 2015 IEEE 7th International Conference on Cloud Computing Technology and Science (CloudCom), Vancouver, BC, 2015, pp. 594-599. doi: 10.1109/CloudCom.2015.101
Abstract: This paper presents the design and implementation of a programming-on-cloud course. Teaching programming on cloud embraces the topics of the cloud service model, architectural patterns, REST APIs, data models, schema-free databases, the MapReduce paradigm, and qualities of service such as scalability, availability and security. The design of this course focuses on the breadth of the essential topics and their intrinsic connections as a roadmap. This enables students with programming skills but no cloud computing background to gain an overview of the structure of a cloud-based service, and further guides them to make design decisions on what technologies can be adopted, and how, by means of a practical project developing a service application on cloud.
Keywords: cloud computing; computer aided instruction; computer science education; educational courses; programming; MapReduce; REST API; architectural pattern; cloud course; cloud service model; data model; programming course; quality of service; schema free database; teaching; Cloud computing; Computer architecture; Data models; Databases; Programming; Servers; big data; course design (ID#: 16-10089)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7396219&isnumber=7396111
A. Moawad, T. Hartmann, F. Fouquet, G. Nain, J. Klein and Y. Le Traon, “Beyond Discrete Modeling: A Continuous and Efficient Model for IoT,” Model Driven Engineering Languages and Systems (MODELS), 2015 ACM/IEEE 18th International Conference on, Ottawa, ON, 2015, pp. 90-99. doi: 10.1109/MODELS.2015.7338239
Abstract: Internet of Things applications analyze our past habits through sensor measures to anticipate future trends. To yield accurate predictions, intelligent systems not only rely on single numerical values, but also on structured models aggregated from different sensors. Computation theory, based on the discretization of observable data into timed events, can easily lead to millions of values. Time series and similar database structures can efficiently index the mere data, but quickly reach computation and storage limits when it comes to structuring and processing IoT data. We propose a concept of continuous models that can handle high-volatile IoT data by defining a new type of meta attribute, which represents the continuous nature of IoT data. On top of traditional discrete object-oriented modeling APIs, we enable models to represent very large sequences of sensor values by using mathematical polynomials. We show on various IoT datasets that this significantly improves storage and reasoning efficiency.
Keywords: Big Data; Internet of Things; application program interfaces; computation theory; data structures; object-oriented methods; API; Big data; Internet-of-Things; IoT data processing; computation theory; database structure; discrete object-oriented modeling; high-volatile IoT data structuring; mathematical polynomial; time series; Computational modeling; Context; Data models; Mathematical model; Object oriented modeling; Polynomials; Time series analysis; Continuous modeling; Discrete modeling; Extrapolation; IoT; Polynomial (ID#: 16-10090)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7338239&isnumber=7338220
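The continuous-attribute idea the abstract describes, storing a window of sensor readings as polynomial coefficients and answering reads by evaluation, can be sketched with a degree-1 fit. The class name and least-squares line are assumptions for this sketch; the paper's models use higher-degree polynomials and segmentation.

```python
# A window of sensor samples is collapsed to line coefficients (a, b);
# reads become evaluations of v(t) = a*t + b instead of point lookups.
def fit_line(times, values):
    """Ordinary least squares for v ≈ a*t + b."""
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    a = sum((t - mt) * (v - mv) for t, v in zip(times, values)) / \
        sum((t - mt) ** 2 for t in times)
    return a, mv - a * mt

class ContinuousAttribute:
    """Stores two coefficients per window rather than every raw sample."""
    def __init__(self, times, values):
        self.a, self.b = fit_line(times, values)

    def read(self, t):
        return self.a * t + self.b

# A slowly rising temperature: 1000 samples collapse to two numbers.
ts = list(range(1000))
vs = [20.0 + 0.01 * t for t in ts]
attr = ContinuousAttribute(ts, vs)
```

The storage saving is the point: a thousand raw readings become two floats per window, and intermediate timestamps can still be queried by interpolation.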
R. Ko, H. M. Lee, A. B. Jeng and T. E. Wei, “Vulnerability Detection of Multiple Layer Colluding Application Through Intent Privilege Checking,” IT Convergence and Security (ICITCS), 2015 5th International Conference on, Kuala Lumpur, 2015, pp. 1-7. doi: 10.1109/ICITCS.2015.7293036
Abstract: In recent years, privilege escalation attacks have been performed based on collusion attacks. A novel privilege escalation attack, the Multiple Layer Collusion Attack, divides colluding applications into three parts: Spyware, Deputy and Delivery. The Spyware steals private data and transmits it to the Deputy; the Deputy does not need to declare any permissions and simply passes the data on to the Delivery. The colluding attack thereby escapes malware detection through the Deputy. In this paper, we propose a mechanism capable of detecting both capability and deputy leaks. First, we decode the APK file into resources and disassembled code. To extract function calls, our system constructs a correlation map from source data to Intents through API calls, in which URIs indicate potential permissions and whether an Intent is vulnerable. We then trace the potential function calls across inter-component communication. The experimental results show that deputy applications exist in the official Android market, Google Play.
Keywords: Android (operating system); application program interfaces; data privacy; API calls; APK file; Android; Google Play; intent privilege checking; multiple layer colluding application; multiple layer collusion attack; private data; privilege escalation attacks; spyware; vulnerability detection; Androids; Computer science; Correlation; Decision trees; Humanoid robots; Spyware (ID#: 16-10091)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7293036&isnumber=7292885
W. B. Gardner, A. Gumtie and J. D. Carter, “Supporting Selective Formalism in CSP++ with Process-Specific Storage,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1057-1065. doi: 10.1109/HPCC-CSS-ICESS.2015.265
Abstract: Communicating Sequential Processes (CSP) is a formal language whose primary purpose is to model and verify concurrent systems. The CSP++ toolset was created to realize the concept of selective formalism by making machine-readable CSPm specifications both executable (through automatic C++ code generation) and extensible (by allowing integration of C++ user-coded functions, UCFs). However, UCFs were limited by their inability to share data with each other, thus their application was constrained to solving simple problems in isolation. We extend CSP++ by providing UCFs in the same CSP process with safe access to a shared storage area, similar in concept and API to Pthreads' thread-local storage, enabling cooperation between them and granting them the ability to undertake more complex tasks without breaking the formalism of the underlying specification. Process-specific storage is demonstrated with a line-following robot case study, applying CSP++ in a soft real-time system. Also described is the Eclipse plug-in that supports the CSPm design flow.
Keywords: C++ language; application program interfaces; communicating sequential processes; concurrency (computers); control engineering computing; formal languages; formal specification; formal verification; program compilers; real-time systems; robots; storage management; API; C++ user-coded function; CSP++; CSPm design flow; Eclipse plug-in; Pthread thread-local storage; UCF; automatic C++ code generation; concurrent system modelling; concurrent system verification; formal language; line-following robot case study; machine-readable CSPm specification; process-specific storage; selective formalism; soft real-time system; Libraries; Real-time systems; Robot sensing systems; Switches; System recovery; Writing; C++; CSPm; Eclipse; Timed CSP; code generation; embedded systems; formal methods; model-based design; selective formalism; soft real-time; software synthesis (ID#: 16-10092)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336309&isnumber=7336120
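The abstract likens process-specific storage to Pthreads' thread-local storage: user-coded functions in one CSP process share state with each other but not with other processes. Python's `threading.local` illustrates the same isolation property; this is an analogy, not CSP++ code.

```python
# Each thread (standing in for a CSP process) gets its own copy of `store`,
# so cooperating functions within a process share state without leaking it
# to other processes.
import threading

store = threading.local()
results = {}

def process(name):
    store.count = 0            # private to this "process" (thread)
    for _ in range(3):
        store.count += 1       # user-coded functions cooperating via shared storage
    results[name] = store.count

threads = [threading.Thread(target=process, args=(n,)) for n in ("P", "Q")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Both threads count to 3 independently: neither sees the other's `store.count`, which is the safety property the CSP++ extension provides to user-coded functions.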
X. Chen, G. Sime, C. Lutteroth and G. Weber, “OAuthHub — A Service for Consolidating Authentication Services,” Enterprise Distributed Object Computing Conference (EDOC), 2015 IEEE 19th International, Adelaide, SA, 2015, pp. 201-210. doi: 10.1109/EDOC.2015.36
Abstract: OAuth has become a widespread authorization protocol to allow inter-enterprise sharing of user preferences and data: a Consumer that wants access to a user's protected resources held by a Service Provider can use OAuth to ask for the user's authorization for access to these resources. However, it can be tedious for a Consumer to use OAuth as a way to organize user identities, since doing so requires supporting all Service Providers that the Consumer would recognize as users' “identity providers”. Each Service Provider added requires extra work, at the very least, registration at that Service Provider. Different Service Providers may differ slightly in the API they offer, their authentication/authorization process or even their supported version of OAuth. The use of different OAuth Service Providers also creates privacy, security and integration problems. Therefore OAuth is an ideal candidate for Software as a Service, while posing interesting challenges at the same time. We use conceptual modelling to derive new high-level models and provide an analysis of the solution space. We address the aforementioned problems by introducing a trusted intermediary — OAuth Hub — into this relationship and contrast it with a variant, OAuth Proxy. Instead of having to support and control different OAuth providers, Consumers can use OAuth Hub as a single trusted intermediary to take care of managing and controlling how authentication is done and what data is shared. OAuth Hub eases development and integration issues by providing a consolidated API for a range of services. We describe how a trusted intermediary such as OAuth Hub can fit into the overall OAuth architecture and discuss how it can satisfy demands on security, reliability and usability.
Keywords: cloud computing; cryptographic protocols; API; OAuth service providers; OAuthHub; authentication services; authorization protocol; software as a service; Analytical models; Authentication; Authorization; Privacy; Protocols; Servers (ID#: 16-10093)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7321173&isnumber=7321136
A. Lashgar and A. Baniasadi, “Rethinking Prefetching in GPGPUs: Exploiting Unique Opportunities,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 72-77. doi: 10.1109/HPCC-CSS-ICESS.2015.145
Abstract: In this paper we investigate static memory access predictability in GPGPU workloads at the thread block granularity. We first show that a significant share of accessed memory addresses can be predicted using thread block identifiers. We build on this observation and introduce a hardware-software prefetching scheme to reduce average memory access time. Our proposed scheme issues the memory requests of a thread block before it starts execution. The scheme relies on a static analyzer to parse the kernel and find predictable memory accesses; runtime API calls pass this information to the hardware, which dynamically prefetches the data of each thread block accordingly. In our scheme, prefetch accuracy is controlled by software (the static analyzer and API calls) while hardware controls prefetch timeliness. We introduce a few machine models to explore the design space and the performance potential behind the scheme. Our evaluation shows that the scheme can achieve a performance improvement of 59% over a baseline without prefetching.
Keywords: application program interfaces; graphics processing units; multi-threading; program diagnostics; storage management; API calls; GPGPU workloads; accessed memory address; average memory access time reduction; design space; dynamic data prefetching; hardware control; hardware-software prefetching scheme; kernel parsing; memory requests; performance improvement; predictable memory accesses; prefetching timeliness; runtime API calls; software control; static analyzer; static memory access predictability; thread block; thread block granularity; thread block identifiers; Arrays; Graphics processing units; Hardware; Indexes; Kernel; Prefetching; CUDA; GPGPU; prefetch cache (ID#: 16-10094)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336146&isnumber=7336120
K. Patel, I. Dube, L. Tao and N. Jiang, “Extending OWL to Support Custom Relations,” Cyber Security and Cloud Computing (CSCloud), 2015 IEEE 2nd International Conference on, New York, NY, 2015, pp. 494-499. doi: 10.1109/CSCloud.2015.74
Abstract: Web Ontology Language (OWL) is used by domain experts to encode knowledge. OWL primarily only supports the subClassOf (is-a or inheritance) relation. Various other relations, such as partOf, are essential for representing information in various fields including all engineering disciplines. The current syntax of OWL does not support the declaration and usage of new custom relations. Workarounds to emulate custom relations do exist, but they add syntax burden to knowledge modelers and don't support accurate semantics for inference engines. This paper proposes minimal syntax extension to OWL for declaring custom relations with special attributes, and applying them in knowledge representation. Domain experts can apply custom relations intuitively and concisely as they do with the familiar built-in subClassOf relation. We present our additions to the OWL API for the declaration, application, and visualization of custom relations. We outline our revision and additions to the ontology editor Protégé so its users could visually declare, apply and remove custom relations according to our enriched OWL syntax. Work relating to our modification of the OWLViz plugin for custom relations visualization is also discussed.
Keywords: application program interfaces; data visualisation; inference mechanisms; knowledge representation languages; ontologies (artificial intelligence); programming language semantics; OWL API; OWL syntax; OWLViz plugin modification; Web ontology language; custom relation visualization; custom relations; engineering disciplines; inference engines; information representation; knowledge encoding; knowledge modelers; knowledge representation; ontology editor Protégé; subClassOf relation; syntax extension; Engines; OWL; Ontologies; Syntactics; Visualization; Custom relation; Knowledge Representation; OWLAPI; Protégé (ID#: 16-10095)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371528&isnumber=7371418
L. Herscheid, D. Richter and A. Polze, “Hovac: A Configurable Fault Injection Framework for Benchmarking the Dependability of C/C++ Applications,” Software Quality, Reliability and Security (QRS), 2015 IEEE International Conference on, Vancouver, BC, 2015, pp. 1-10. doi: 10.1109/QRS.2015.12
Abstract: The increasing usage of third-party software and complexity of modern software systems makes dependability, in particular robustness against faulty code, an ever more important concern. To compare and quantitatively assess the dependability of different software systems, dependability benchmarks are needed. We present a configurable tool for dependability benchmarking, Hovac, which uses DLL API hooking to inject faults into third party library calls. Our fault classes are implemented based on the Common Weakness Enumeration (CWE) database, a community maintained source of real life software faults and errors. Using two example applications, we discuss a detailed and systematic approach to benchmarking the dependability of C/C++ applications using our tool.
Keywords: C++ language; software fault tolerance; software libraries; software tools; C/C++ applications; CWE database; DLL API hooking; Hovac; common weakness enumeration database; configurable fault injection framework; configurable tool; dependability benchmarking; fault classes; faulty code; software errors; software faults; software systems complexity; software systems dependability; third party library calls; third-party software usage; Benchmark testing; Databases; Libraries; Operating systems; Robustness; benchmarking; dependability; fault injection; open source; software reliability; third-party library (ID#: 16-10096)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7272908&isnumber=7272893
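The DLL-hooking fault injection that the Hovac abstract describes can be illustrated in miniature by interposing on a library call in software. The sketch below is ours, not the paper's C/C++ implementation: it wraps a function so that calls sometimes raise an injected fault, and counts how the calling code copes, which is the essence of a dependability benchmark run.

```python
import random

def make_faulty(func, fault_rate=0.3):
    """Wrap a library call so it sometimes raises an injected fault,
    mimicking hook-based interception of third-party calls."""
    def hooked(*args, **kwargs):
        if random.random() < fault_rate:
            raise OSError("injected fault")   # stand-in for a CWE-style error
        return func(*args, **kwargs)
    return hooked

# The "application under benchmark" calls the hooked function many times;
# the harness records whether it survives each injected fault.
random.seed(1)
faulty_len = make_faulty(len, fault_rate=0.5)

survived, crashed = 0, 0
for _ in range(100):
    try:
        faulty_len("payload")
        survived += 1
    except OSError:
        crashed += 1
print(survived, crashed)
```

A real hooking framework intercepts at the DLL import table rather than by wrapping, but the measurement loop has the same shape.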
H. S. Sheshadri, S. R. B. Shree and M. Krishna, “Diagnosis of Alzheimer's Disease Employing Neuropsychological and Classification Techniques,” IT Convergence and Security (ICITCS), 2015 5th International Conference on, Kuala Lumpur, 2015, pp. 1-6. doi: 10.1109/ICITCS.2015.7292973
Abstract: All over the world, a large number of people suffer from brain-related diseases, and diagnosing them is a pressing need. Dementia is one such disease: it causes loss of cognitive functions such as reasoning, memory, and other mental abilities, whether through trauma or normal ageing. Alzheimer's disease is a type of dementia that accounts for 60-80% of such disorders [1]. Many tests are conducted for the diagnosis of these diseases. In this paper, the authors collected data from 466 subjects by conducting neuropsychological tests, and the subjects are classified as demented or not using machine learning techniques. After preprocessing, the data set is classified using Naive Bayes, JRip, and Random Forest, and evaluated through the Explorer, KnowledgeFlow, and API interfaces of the WEKA tool, which is used for the analysis. Results show that JRip and Random Forest perform better than Naive Bayes.
Keywords: Bayes methods; brain; cognition; data analysis; data mining; diseases; learning (artificial intelligence); medical computing; neurophysiology; patient diagnosis; pattern classification; trees (mathematics); API; Alzheimer's disease diagnosis; Jrip; WEKA tool; brain related disease; classification technique; cognitive function; data set classification; dementia; knowledge flow; machine learning; memory; mental ability; mental disorder; naive Bayes; neuropsychological technique; neuropsychological test; normal ageing; random forest; reasoning; trauma; Cancer; Classification algorithms; Data mining; Data visualization; Dementia (ID#: 16-10097)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292973&isnumber=7292885
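For readers unfamiliar with the classifiers named above, the sketch below implements a tiny Gaussian Naive Bayes from scratch on hypothetical neuropsychological scores. It is illustrative only; the data, features, and thresholds are invented, and the paper's actual analysis uses WEKA.

```python
import math
from collections import defaultdict

# Hypothetical (memory, reasoning) scores per subject, labeled
# demented (1) or not (0). These numbers are NOT from the paper.
data = [((28, 30), 0), ((27, 29), 0), ((29, 28), 0),
        ((12, 15), 1), ((10, 14), 1), ((13, 12), 1)]

def fit(data):
    """Estimate per-class feature means, variances, and priors."""
    by_label = defaultdict(list)
    for x, y in data:
        by_label[y].append(x)
    stats = {}
    for y, xs in by_label.items():
        n = len(xs)
        means = [sum(col) / n for col in zip(*xs)]
        vars_ = [max(sum((v - m) ** 2 for v in col) / n, 1e-9)
                 for col, m in zip(zip(*xs), means)]
        stats[y] = (means, vars_, n / len(data))
    return stats

def predict(stats, x):
    """Pick the class with the highest Gaussian log-likelihood + log-prior."""
    best, best_lp = None, -math.inf
    for y, (means, vars_, prior) in stats.items():
        lp = math.log(prior)
        for v, m, s2 in zip(x, means, vars_):
            lp += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        if lp > best_lp:
            best, best_lp = y, lp
    return best

model = fit(data)
print(predict(model, (26, 27)))  # near the healthy cluster
print(predict(model, (11, 13)))  # near the demented cluster
```

JRip (rule induction) and Random Forest, which the paper found stronger, trade this closed-form model for learned decision boundaries.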
S. Das, A. Singh, S. P. Singh and A. Kumar, “A Low Overhead Dynamic Memory Management System for Constrained Memory Embedded Systems,” Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, New Delhi, 2015, pp. 809-815. doi: (not provided)
Abstract: Embedded systems programming often involves choosing worst-case static memory allocation for most applications over a dynamic allocation approach. Such a design decision is rightly justified in terms of the reliability, security, and real-time performance requirements of such low-end systems. However, with the introduction of public key cryptography and dynamic reconfiguration in IP-enabled sensing devices for several “Internet of Things” applications, dynamic memory allocation in embedded devices is becoming more important than ever before. While several embedded operating systems like MantisOS, SOS, and Contiki provide dynamic memory allocation support, they usually lack flexibility or have a relatively large memory overhead. In this paper we introduce two novel dynamic memory allocation schemes, ST_MEMMGR (without memory compaction) and ST_COMPACT_MEMMGR (with memory compaction), in close compliance with the libc memory allocation API. Both designs take into account the very limited RAM (1 KB - 64 KB) of most microcontrollers. Experimental results show that ST_MEMMGR has 256 - 5376 bytes less memory overhead than similar non-compaction-based open source allocators like heapLib and memmgr. Similarly, ST_COMPACT_MEMMGR is observed to have a 33% smaller memory descriptor than Contiki's managed memory allocator, with similar performance in terms of execution speed.
Keywords: application program interfaces; embedded systems; operating systems (computers); storage management; Internet of Things; ST_COMPACT_MEMMGR; ST_MEMMGR; constrained memory embedded systems; dynamic memory allocation support; dynamic reconfiguration; embedded operating systems; libc memory allocation API; low overhead dynamic memory management system; public key cryptography; worst case static memory allocation; Compaction; Dynamic scheduling; Embedded systems; Memory management; Protocols; Random access memory; Resource management; Dynamic Memory Management; Embedded Systems; Memory Compaction; Memory Fragmentation; Microcontrollers; WSN (ID#: 16-10098)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7100361&isnumber=7100186
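The compaction idea behind an allocator like ST_COMPACT_MEMMGR can be illustrated with a toy model. This sketch is ours, not the paper's C implementation: it hands out opaque handles through an indirection table so that live blocks can be slid together when a bump allocation would otherwise fail, which is why compacting allocators (like Contiki's managed memory) can survive fragmentation in tiny RAM.

```python
class CompactingHeap:
    """Toy compacting bump allocator over a fixed-size byte buffer."""
    def __init__(self, size):
        self.heap = bytearray(size)
        self.blocks = {}          # handle -> (offset, length)
        self.next_handle = 0
        self.top = 0              # bump pointer

    def alloc(self, n):
        if self.top + n > len(self.heap):
            self.compact()        # try to reclaim freed holes first
        if self.top + n > len(self.heap):
            raise MemoryError
        h = self.next_handle
        self.next_handle += 1
        self.blocks[h] = (self.top, n)
        self.top += n
        return h

    def free(self, h):
        del self.blocks[h]        # hole is reclaimed at next compaction

    def compact(self):
        """Slide all live blocks down to the start of the heap."""
        new_top = 0
        for h, (off, n) in sorted(self.blocks.items(), key=lambda kv: kv[1][0]):
            self.heap[new_top:new_top + n] = self.heap[off:off + n]
            self.blocks[h] = (new_top, n)
            new_top += n
        self.top = new_top

heap = CompactingHeap(64)
a = heap.alloc(32)
b = heap.alloc(24)
heap.free(a)                      # leaves a 32-byte hole at the front
c = heap.alloc(30)                # triggers compaction, then succeeds
print(heap.top)                   # 24 + 30 = 54 bytes live after compaction
```

The price of compaction is the indirection (applications hold handles, not raw pointers), which is exactly the kind of descriptor overhead the paper measures.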
A. Banchs et al., “A Novel Radio Multiservice Adaptive Network Architecture for 5G Networks,” 2015 IEEE 81st Vehicular Technology Conference (VTC Spring), Glasgow, 2015, pp. 1-5. doi: 10.1109/VTCSpring.2015.7145636
Abstract: This paper proposes a conceptually novel, adaptive, and future-proof 5G mobile network architecture. The proposed architecture enables unprecedented levels of network customisability, ensuring that stringent performance, security, cost, and energy requirements are met, as well as providing an API-driven architectural openness that fuels economic growth through over-the-top innovation. Departing from the 'one system fits all services' paradigm of current architectures, the architecture adapts the mechanisms executed for a given service to that service's specific requirements, resulting in a novel service- and context-dependent adaptation of the network functions paradigm. The technical approach is based on the innovative concept of adaptive (de)composition and allocation of mobile network functions, which flexibly decomposes the mobile network functions and places the resulting functions in the most appropriate location. By doing so, access and core functions no longer (necessarily) reside in different locations, which is exploited to jointly optimize their operation when possible. The adaptability of the architecture is further strengthened by innovative software-defined mobile network control and mobile multi-tenancy concepts.
Keywords: 5G mobile communication; application program interfaces; 5G mobile network architecture; API-driven architectural openness; context-dependent adaptation; economic growth; innovative software-defined mobile network control; mobile multitenancy concepts; mobile network functions; network customisability; novel radio multiservice adaptive network architecture; stringent performance; Adaptive systems; Computer architecture; Mobile communication; Mobile computing; Quality of service; Radio access networks; Resource management (ID#: 16-10099)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7145636&isnumber=7145573
Ó. M. Pereira and R. L. Aguiar, “Multi-Purpose Adaptable Business Tier Components Based on Call Level Interfaces,” Computer and Information Science (ICIS), 2015 IEEE/ACIS 14th International Conference on, Las Vegas, NV, 2015, pp. 215-221. doi: 10.1109/ICIS.2015.7166596
Abstract: Call Level Interfaces (CLI) play a key role in the business tiers of relational and some NoSQL database applications whenever fine-tuned control between application tiers and the host databases is a key requirement. Unfortunately, despite this significant advantage, CLI are low-level APIs and therefore do not address high-level architectural requirements. Among the examples we emphasize two situations: a) the need to decouple (or not) the development process of business tiers from the development process of application tiers, and b) the need to automatically adapt business tiers to new business and/or security needs at runtime. To tackle these CLI drawbacks while keeping their advantages, this paper proposes an architecture relying on CLI from which multi-purpose business tier components are built, herein referred to as Adaptable Business Tier Components (ABTC). Beyond the reference architecture, the paper presents a proof of concept based on Java and Java Database Connectivity (an example of a CLI).
Keywords: Java; SQL; application program interfaces; business data processing; database management systems; ABTC; CLI drawbacks; Java database connectivity; NoSQL database applications; adaptable business tier components; application tiers; call level interfaces; high level architectural requirements; low level API; multipurpose adaptable business tier components; multipurpose business tier components; Access control; Buildings; Business; Databases; Java; Runtime; component; middleware; reuse; software architecture (ID#: 16-10100)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166596&isnumber=7166553
T. Nguyen, “Using Unrestricted Mobile Sensors to Infer Tapped and Traced User Inputs,” Information Technology - New Generations (ITNG), 2015 12th International Conference on, Las Vegas, NV, 2015, pp. 151-156. doi: 10.1109/ITNG.2015.29
Abstract: As of January 2014, 58 percent of Americans over the age of 18 own a smartphone. Among smartphones, Android devices provide some security by requiring that third-party application developers declare to users which components and features their applications will access. However, the real-time environmental sensors on devices that are supported by the Android API are exempt from this requirement. We evaluate the possibility of exploiting the freedom to use these sensors discreetly, and expand on previous work by developing an application that can use the gyroscope and accelerometer to interpret what the user has written, even if trace input is used. Trace input is a feature available on Samsung's default keyboard as well as in many popular third-party keyboard applications. Including trace input in a key-logger application increases the amount of personal information that can be captured, since users may choose the time-saving trace-based input over traditional tap-based input. In this work, we demonstrate that it is indeed possible to recover both tap- and trace-inputted text using only motion sensor data.
Keywords: accelerometers; application program interfaces; gyroscopes; invasive software; smart phones; Android API; Android device; accelerometer; key logger application; keyboard application; mobile security; motion sensor data; personal information; real-time environmental sensor; smart phone; tapped user input; traced user input; unrestricted mobile sensor; Accelerometers; Accuracy; Feature extraction; Gyroscopes; Keyboards; Sensors; Support vector machines; key logger; mobile malware; motion sensors; spyware (ID#: 16-10101)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113464&isnumber=7113432
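To make the attack surface concrete, the sketch below shows the flavor of such an inference: a nearest-centroid guess mapping motion-sensor feature vectors to screen regions. The centroids and features here are entirely hypothetical; the paper's actual pipeline extracts features from gyroscope and accelerometer traces and trains a machine-learning classifier.

```python
import math

# Hypothetical per-region training averages of a 2-D feature
# (e.g., mean tilt along two axes during a tap). Invented values.
centroids = {
    "left":  (0.12, -0.30),
    "right": (-0.15, 0.28),
    "top":   (0.02, 0.45),
}

def infer_region(feature):
    """Guess which screen region a tap hit: nearest centroid wins."""
    return min(centroids, key=lambda k: math.dist(centroids[k], feature))

print(infer_region((0.10, -0.25)))   # closest to the "left" centroid
```

The point of the paper is that Android grants this sensor stream without any permission prompt, so even this crude classifier runs unnoticed.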
C. Banse and S. Rangarajan, “A Secure Northbound Interface for SDN Applications,” Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 834-839. doi: 10.1109/Trustcom.2015.454
Abstract: Software-Defined Networking (SDN) promises to introduce flexibility and programmability into networks by offering a northbound interface (NBI) for developers to create SDN applications. However, current designs and implementations have several drawbacks, including the lack of extended security features. In this paper, we present a secure northbound interface, through which an SDN controller can offer network resources, such as statistics, flow information or topology data, via a REST-like API to registered SDN applications. A trust manager ensures that only authenticated and trusted applications can utilize the interface. Furthermore, a permission system allows for fine-grained authorization and access control to the aforementioned resources. We present a prototypical implementation of our interface and developed example applications using our interface, including an SDN management dashboard.
Keywords: application program interfaces; computer network security; network interfaces; software defined networking; API; NBI; SDN controller; SDN management dashboard; access control; fine-grained authorization; secure northbound interface; software-defined networking; trusted application; Access control; Network topology; Protocols; Switches; Topology; SDN; Software-Defined Networking; network security; northbound interface; trust (ID#: 16-10102)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345362&isnumber=7345233
L. Wu, X. Du and H. Zhang, “An Effective Access Control Scheme for Preventing Permission Leak in Android,” Computing, Networking and Communications (ICNC), 2015 International Conference on, Garden Grove, CA, 2015, pp. 57-61. doi: 10.1109/ICCNC.2015.7069315
Abstract: In the Android system, each application runs in its own sandbox, and the permission mechanism is used to enforce access control to the system APIs and applications. However, a permission leak can occur when an application without a certain permission illegally gains access to protected resources through other privileged applications. We propose SPAC, a component-level system-permission-based access control scheme that helps developers better secure the public components of their applications. In the SPAC scheme, obscure custom permissions are replaced by explicit system permissions. We extend the current permission checking mechanism so that multiple permissions are supported at the component level. SPAC has been implemented on a Nexus 4 smartphone, and our evaluation demonstrates its effectiveness in mitigating permission leak vulnerabilities.
Keywords: Android (operating system); application program interfaces; authorisation; Android system; Nexus 4 smartphone; SPAC scheme; component-level system permission based access control scheme; permission checking mechanism; permission leak prevention; permission leak vulnerabilities; permission mechanism; public components; system API; Access control; Androids; Google; Humanoid robots; Information security; Receivers; Permission leak; access control; smartphone security (ID#: 16-10103)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7069315&isnumber=7069279
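The component-level, multiple-permission check that SPAC adds can be sketched abstractly as follows. The component names, permission sets, and helper API are hypothetical illustrations of the idea, not the paper's implementation: a public component lists the system permissions a caller must already hold, so a caller cannot launder access through a privileged app.

```python
# Hypothetical mapping: each exported component declares the explicit
# system permissions its callers must hold (multiple per component).
COMPONENT_PERMS = {
    "SmsSenderService": {"android.permission.SEND_SMS"},
    "LocationProxy":    {"android.permission.ACCESS_FINE_LOCATION",
                         "android.permission.INTERNET"},
}

def check_call(component, caller_perms):
    """Allow the call only if the caller holds every required permission."""
    required = COMPONENT_PERMS.get(component, set())
    missing = required - set(caller_perms)
    return (not missing, missing)

# A caller holding only INTERNET cannot reach LocationProxy.
ok, missing = check_call("LocationProxy", {"android.permission.INTERNET"})
print(ok, sorted(missing))
```

The key design point is that the check is against the caller's own permissions, not the privileged component's, which is what closes the leak.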
M. Jemel and A. Serhrouchni, “Toward User's Devices Collaboration to Distribute Securely the Client Side Storage,” 2015 International Conference on Protocol Engineering (ICPE) and International Conference on New Technologies of Distributed Systems (NTDS), Paris, 2015, pp. 1-6. doi: 10.1109/NOTERE.2015.7293479
Abstract: Web applications and browsers are adopting client-side storage intensively. This strategy ensures a high quality of experience for the user, offline application usage, and server load reduction. In this paper, we target all devices equipped with a browser in order to distribute the data stored locally through HTML5 APIs. A decentralized browser-to-browser data distribution is thereby ensured across a user's different devices within the same Personal Area.
Keywords: Internet; application program interfaces; hypermedia markup languages; quality of experience; security of data; storage allocation; HTML5 API; WebRTC; chromium code; client side storage; decentralized browser-to-browser data distribution; device collaboration; local storage API; quality of experience; secure remote data; server load reduction; Browsers; Databases; Encryption; Protocols; HTML5; Local Storage API; Secure remote data management (ID#: 16-10104)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7293479&isnumber=7293442
Cryptography with Photons 2015 |
Quantum cryptography uses the transfer of photons using a filter that indicates the orientation of the photon sent. Eavesdropping on the communication affects it. This property is of interest to the Science of Security community in building secure cyber-physical systems, and for resiliency and compositionality. The work cited here was presented in 2015.
B. Archana and S. Krithika, “Implementation of BB84 Quantum Key Distribution Using OptSim,” Electronics and Communication Systems (ICECS), 2015 2nd International Conference on, Coimbatore, 2015, pp. 457-460. doi: 10.1109/ECS.2015.7124946
Abstract: This paper proposes a cryptographic method known as quantum cryptography, which uses a quantum channel to exchange a key securely and keeps unwanted parties or eavesdroppers from learning sensitive information. A technique called Quantum Key Distribution (QKD) is used to share a random secret key by encoding the information in quantum states, with photons as the quantum material used for encoding. QKD provides a unique way of sharing a random sequence of bits between users with a level of security not attainable with any classical cryptographic method. In this paper, the BB84 protocol is used to implement QKD; it employs photon polarization states to transmit telecommunication information with a high level of security over optical fiber. We have implemented the BB84 protocol using the photonic simulator OptSim 5.2.
Keywords: cryptographic protocols; quantum cryptography; BB84 protocol; BB84 quantum key distribution; QKD; cryptographic method; eavesdroppers; learning sensitive information; optical fiber; photon polarization states; photonic simulator OptSim 5.2; quantum channel; quantum material; quantum states; random secret key; telecommunication information; Cryptography; Photonics; Polarization; Protocols; Quantum entanglement; OptSim 5.2; Quantum Mechanism (QM); Quantum Key Distribution (QKD); Quantum cryptography (QC); photon polarization (ID#: 16-11339)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7124946&isnumber=7124722
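For readers new to BB84, the basis-sifting step at the heart of the protocol can be simulated classically in a few lines. This sketch is illustrative only (no physics, no eavesdropper) and is unrelated to the paper's OptSim implementation: Alice encodes random bits in random bases, Bob measures in random bases, and both keep only the positions where the bases happened to match.

```python
import random

random.seed(7)
n = 32
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]   # + rectilinear, x diagonal
bob_bases   = [random.choice("+x") for _ in range(n)]

# With matching bases Bob recovers Alice's bit; with mismatched bases
# his measurement outcome is random.
bob_bits = [b if ab == bb else random.randint(0, 1)
            for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: publicly compare bases, keep only matching positions.
sifted     = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
sifted_bob = [b for b, ab, bb in zip(bob_bits,  alice_bases, bob_bases) if ab == bb]
print(len(sifted), sifted == sifted_bob)   # roughly n/2 bits, and they agree
```

Eavesdropping security comes from the physics this simulation omits: an interceptor measuring in the wrong basis disturbs the photon and shows up as errors in the sifted key.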
B. G. Norton, M. Ghadimi, V. Blums and D. Kielpinski, “Monolithic Optical Integration for Scalable Trapped-Ion Quantum Information Processing,” Lasers and Electro-Optics Pacific Rim (CLEO-PR), 2015 11th Conference on, Busan, 2015, pp. 1-2. doi: 10.1109/CLEOPR.2015.7376434
Abstract: Quantum information processing (QIP) promises to radically change the outlook for secure communications, both by breaking existing cryptographic protocols and offering new quantum protocols in their place. A promising technology for QIP uses arrays of atomic ions that are trapped in ultrahigh vacuum and manipulated by lasers. Over the last several years, work in my research group has led to the demonstration of a monolithically integrated, scalable optical interconnect for trapped-ion QIP. Our interconnect collects single photons from trapped ions using a diffractive mirror array, which is fabricated directly on a chip-type ion trap using a CMOS-compatible process. Based on this interconnect, we have proposed an architecture that couples trapped ion arrays with photonic integrated circuits to achieve compatibility with current telecom networks. Such tightly integrated, highly parallel systems open the prospect of long-distance quantum cryptography.
Keywords: CMOS integrated circuits; cryptographic protocols; integrated optics; mirrors; optical arrays; optical communication; optical fabrication; optical interconnections; quantum cryptography; quantum optics; security of data; CMOS-compatible process; QIP; chip-type ion trap; cryptographic protocols; diffractive mirror array; long-distance quantum cryptography; monolithic optical integration; photonic integrated circuits; quantum protocols; scalable optical interconnect; scalable trapped-ion quantum information processing; secure communications; ultrahigh vacuum; Charge carrier processes; Computer architecture; Information processing; Ions; Mirrors; Optical diffraction; Optical waveguides (ID#: 16-11340)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7376434&isnumber=7376373
T. Graham, C. Zeitler, J. Chapman, P. Kwiat, H. Javadi and H. Bernstein, “Superdense Teleportation and Quantum Key Distribution for Space Applications,” 2015 IEEE International Conference on Space Optical Systems and Applications (ICSOS), New Orleans, LA, 2015, pp. 1-7. doi: 10.1109/ICSOS.2015.7425090
Abstract: The transfer of quantum information over long distances has long been a goal of quantum information science and is required for many important quantum communication and computing protocols. When these channels are lossy and noisy, it is often impossible to directly transmit quantum states between two distant parties. We use a new technique called superdense teleportation to communicate quantum information deterministically with greatly reduced resources, simplified measurements, and decreased classical communication cost. These advantages make this technique ideal for communicating quantum information in space applications. We are currently implementing a superdense teleportation lab demonstration, using photons hyperentangled in polarization and temporal mode to communicate a special set of two-qubit, single-photon states between two remote parties. A slight modification of the system readily allows it to implement quantum cryptography as well. We investigate the possibility of implementation from Earth orbit to the ground, and we will discuss our current experimental progress and the design challenges facing a practical demonstration of satellite-to-Earth SDT.
Keywords: optical communication; quantum computing; quantum cryptography; quantum entanglement; satellite communication; teleportation; hyperentangled photons; lossy channels; noisy channels; quantum communication; quantum information; quantum key distribution; quantum states; satellite-to-Earth SDT; space applications; superdense teleportation; two-qubit single-photon states; Extraterrestrial measurements; Photonics; Protocols; Quantum entanglement; Satellites; Teleportation; Superdense teleportation; (ID#: 16-11341)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7425090&isnumber=7425053
K. W. C. Chan, M. E. Rifai, P. Verma, S. Kak and Y. Chen, “Multi-Photon Quantum Key Distribution Based on Double-Lock Encryption,” 2015 Conference on Lasers and Electro-Optics (CLEO), San Jose, CA, 2015, pp. 1-2. doi: 10.1364/CLEO_QELS.2015.FF1A.3
Abstract: We present a quantum key distribution protocol based on double-lock cryptography. It exploits the asymmetry in detection strategies between the legitimate users and the eavesdropper. With coherent states, the mean photon number can be as large as 10.
Keywords: light coherence; multiphoton processes; photodetectors; quantum cryptography; quantum optics; coherent states; double-lock cryptography; double-lock encryption; mean photon number; multiphoton quantum key distribution; photodetection strategies; Authentication; Computers; Error probability; Photonics; Protocols; Quantum cryptography (ID#: 16-11342)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7182995&isnumber=7182853
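The "double-lock" idea echoes the classical three-pass protocol, in which each party applies and later removes its own commuting lock, so the message is never sent unprotected. The sketch below illustrates that classical analogy with toy-sized modular exponentiation (Shamir's three-pass); it is an analogy only, not the paper's multi-photon quantum scheme, and the parameters are deliberately tiny.

```python
# Commuting "locks": exponentiation mod a prime p. Because exponents
# multiply, locks can be applied and removed in any order.
p = 2**13 - 1                    # 8191, a small Mersenne prime (toy-sized)

def inv_exp(e):
    """Unlocking exponent: e * inv_exp(e) = 1 (mod p - 1)."""
    return pow(e, -1, p - 1)

m = 4242                         # the secret, encoded as a number < p
a, b = 11, 17                    # Alice's and Bob's private exponents,
                                 # both coprime to p - 1

x1 = pow(m, a, p)                # pass 1, Alice -> Bob: m under Alice's lock
x2 = pow(x1, b, p)               # pass 2, Bob -> Alice: both locks applied
x3 = pow(x2, inv_exp(a), p)      # pass 3, Alice -> Bob: Alice's lock removed
out = pow(x3, inv_exp(b), p)     # Bob removes his lock and reads the secret
print(out)                       # 4242
```

In the quantum version the "locks" are operations on photon states, and the security argument rests on the detection asymmetry the abstract mentions rather than on a hardness assumption.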
M. Koashi, “Quantum Key Distribution with Coherent Laser Pulse Train: Security Without Monitoring Disturbance,” Photonics North, 2015, Ottawa, ON, 2015, pp. 1-1. doi: 10.1109/PN.2015.7292456
Abstract: Conventional quantum key distribution (QKD) schemes determine the amount of leaked information through estimation of signal disturbance. Here we present a QKD protocol based on an entirely different principle, which works without monitoring the disturbance. The protocol is implemented with a laser, an interferometer with a variable delay, and photon detectors. It is capable of producing a secret key when the bit error rate is high and the communication time is short.
Keywords: high-speed optical techniques; light coherence; quantum cryptography; quantum optics; QKD; bit error rate; coherent laser pulse train; photon detectors; quantum key distribution; secret key; variable delay; Delays; Estimation; Monitoring; Photonics; Privacy; Protocols; Security; differential phase shift keying; information-disturbance trade off; variable delay (ID#: 16-11343)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292456&isnumber=7292453
C. J. Chunnilall, “Metrology for Quantum Communications,” 2015 Conference on Lasers and Electro-Optics (CLEO), San Jose, CA, 2015, pp. 1-2. doi: 10.1364/CLEO_AT.2015.AF1J.6
Abstract: Industrial technologies based on the production, manipulation, and detection of single and entangled photons are emerging, and quantum key distribution via optical fibre is one of the most commercially-advanced. The National Physical Laboratory is developing traceable performance metrology for the quantum devices used in these technologies. This is part of a broader effort to develop metrological techniques and standards to accelerate the development and commercial uptake of new industrial quantum communication technologies based on single photons. This presentation will give an overview of the work carried out at NPL and within the wider European community, and highlight plans for the future.
Keywords: fibre optic sensors; photon counting; quantum cryptography; quantum entanglement; National Physical Laboratory; entangled photons; metrology; optical fibre; quantum communications; quantum devices; quantum key distribution; single photons; Communication systems; Detectors; Metrology; Optical fibers; Optical transmitters; Photonics; Security (ID#: 16-11344)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7182864&isnumber=7182853
D. Bunandar, Z. Zhang, J. H. Shapiro and D. R. Englund, “Practical High-Dimensional Quantum Key Distribution with Decoy States,” 2015 Conference on Lasers and Electro-Optics (CLEO), San Jose, CA, 2015, pp. 1-2. doi: 10.1103/PhysRevA.91.022336
Abstract: We propose a high-dimensional quantum key distribution protocol secure against photon-number splitting attack by employing only one or two decoy states. Decoy states dramatically increase the protocol's secure distance.
Keywords: cryptographic protocols; quantum cryptography; quantum optics; security of data; decoy states; high-dimensional quantum key distribution protocol; photon-number splitting attack; protocol secure distance; Correlation; Dispersion; Photonics; Protocols; Security; System-on-chip (ID#: 16-11345)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7182994&isnumber=7182853
N. Li, S. M. Cui, Y. M. Ji, K. S. Feng and L. Shi, “Analysis for Device Independent Quantum Key Distribution Based on the Law of Large Number,” 2015 IEEE Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chongqing, 2015, pp. 1073-1076. doi: 10.1109/IAEAC.2015.7428723
Abstract: The measurement-device-independent quantum key distribution (MDI-QKD) scheme removes all detector side-channel flaws and, combined with the decoy-state method, achieves unconditional security of quantum key distribution. In this paper, the statistical fluctuations that the law of large numbers imposes on finite key lengths are analyzed for the single-photon counting rate and the BER (bit error rate) of an MDI-QKD scheme, and the single-photon counting rate and key generation rate are simulated for key lengths N = 10^6 to 10^12. Simulation results show that, in optical fiber transmission, the secure transmission distance decreases with the key length: from 300 km it drops to 260 km (N = 10^10) and 75 km (N = 10^6), while for N = 10^12 the secure transmission distance reaches 295 km, close to the theoretical limit.
Keywords: error statistics; quantum cryptography; BER; bit error rate; decoy state program; key generation rate simulation; key length measuring device-independent quantum key distribution scheme; law of large number; optical fiber transmission; secure transmission distance; security; side-channel flaw; single-photon counting rate; statistical law; Decision support systems; Force measurement; Frequency modulation; Navigation; Q measurement; Measuring device-independent; QKD; law of large numbers; three-intensity decoy-state program (ID#: 16-11346)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7428723&isnumber=7428505
F. Piacentini et al., “Metrology for Quantum Communication,” 2015 IEEE Globecom Workshops (GC Wkshps), San Diego, CA, 2015, pp. 1-5. doi: 10.1109/GLOCOMW.2015.7413960
Abstract: INRIM is working to establish a metrology for quantum communication, ranging from measurement procedures for specific quantities related to QKD components, namely pseudo-single-photon sources and detectors, to the implementation of a novel QKD protocol based on a paradigm other than non-commuting observables, to the development of quantum tomographic techniques, to the realization and characterization of a quasi-noiseless single-photon source. In this paper we summarize this last activity, together with preliminary results on a four-wave-mixing source that our group realized in order to obtain narrow-band, low-noise single-photon emission, a demanding feature for applications to quantum repeaters and memories.
Keywords: multiwave mixing; quantum cryptography; INRIM; QKD components; QKD protocol; four-wave mixing source; measurement procedures; narrow band low noise single photon emission; pseudo single-photon sources; quantum communication metrology; quantum tomographic techniques; quasi-noiseless single-photon source; Cesium; Communication systems; Four-wave mixing; Laser beams; Laser excitation; Metrology; Photonics (ID#: 16-11347)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7413960&isnumber=7413928
U. S. Chahar and K. Chatterjee, “A Novel Differential Phase Shift Quantum Key Distribution Scheme for Secure Communication,” Computing and Communications Technologies (ICCCT), 2015 International Conference on, Chennai, 2015, pp. 156-159. doi: 10.1109/ICCCT2.2015.7292737
Abstract: Quantum key distribution is used for secure communication between two parties through the generation of a secret key. Differential Phase Shift Quantum Key Distribution (DPS-QKD) is a QKD protocol that differs from traditional ones in its simplicity and practicality. This paper presents a Delay Selected DPS-QKD scheme, which uses a weak coherent pulse train and features a simple configuration and efficient use of the time domain. All detected photons contribute to the secure key bits, resulting in higher key-creation efficiency.
Keywords: cryptographic protocols; differential phase shift keying; quantum cryptography; telecommunication security; time-domain analysis; QKD protocol; coherent pulse train; delay selected DPS-QKD scheme; differential phase shift quantum key distribution scheme; secret key generation; secure communication; secure key bits; time domain analysis; Delays; Detectors; Differential phase shift keying; Photonics; Protocols; Security; Differential Phase Shift; Differential phase shift keying protocol; Quantum Key Distribution (ID#: 16-11348)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292737&isnumber=7292708
G. S. Kanter, “Fortifying Single Photon Detectors to Quantum Hacking Attacks by Using Wavelength Upconversion,” 2015 Conference on Lasers and Electro-Optics (CLEO), San Jose, CA, 2015, pp. 1-2. doi: 10.1364/CLEO_AT.2015.JW2A.7
Abstract: Upconversion detection can isolate the temporal and wavelength window over which light can be efficiency received. Using appropriate designs the ability of an eavesdropper to damage, measure, or control QKD receiver components is significantly constricted.
Keywords: optical control; optical design techniques; optical receivers; optical testing; optical wavelength conversion; optical windows; photodetectors; photon counting; quantum cryptography; QKD receiver component control; QKD receiver component measurement; optical designs; quantum hacking attacks; single-photon detectors; temporal window; wavelength upconversion detection; wavelength window; Band-pass filters; Computer crime; Detectors; Insertion loss; Monitoring; Photonics; Receivers (ID#: 16-11349)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7183658&isnumber=7182853
T. Horikiri, “Quantum Key Distribution with Mode-Locked Two-Photon States,” Lasers and Electro-Optics Pacific Rim (CLEO-PR), 2015 11th Conference on, Busan, 2015, pp. 1-2. doi: 10.1109/CLEOPR.2015.7376514
Abstract: Quantum key distribution (QKD) with mode-locked two-photon states is discussed. The photon source with a comb-like second-order correlation function is shown to be useful for implementing long distance time-energy entanglement QKD.
Keywords: laser mode locking; optical correlation; quantum cryptography; quantum entanglement; quantum optics; two-photon processes; comblike second-order correlation function; long distance time-energy entanglement QKD; mode-locked two-photon states; quantum key distribution; Cavity resonators; Correlation; Detectors; Photonics; Signal resolution; Timing; Yttrium (ID#: 16-11350)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7376514&isnumber=7376373
X. Tan, S. Cheng, J. Li and Z. Feng, “Quantum Key Distribution Protocol Using Quantum Fourier Transform,” Advanced Information Networking and Applications Workshops (WAINA), 2015 IEEE 29th International Conference on, Gwangju, 2015,
pp. 96-101. doi: 10.1109/WAINA.2015.8
Abstract: A quantum key distribution protocol is proposed based on the discrete quantum Fourier transform. In our protocol, we perform the Fourier transform on each particle of the sequence to encode the qubits and insert sufficient decoy photons into the sequence to prevent eavesdropping. Furthermore, we prove the security of this protocol through its immunity to the intercept-measurement attack, the intercept-resend attack, and the entanglement-measurement attack. We then analyse the efficiency of the protocol, which is about 25%, higher than that of many other protocols. The proposed protocol has the further advantage that it is completely compatible with quantum computation and easier to realize in distributed quantum secure computation.
Keywords: cryptographic protocols; discrete Fourier transforms; quantum cryptography; discrete quantum Fourier transform; distributed quantum secure computation; eavesdropping; immunization; intercept-measurement attack; intercept-resend attack; quantum key distribution protocol; Atmospheric measurements; Fourier transforms; Particle measurements; Photonics; Protocols; Quantum computing; Security; Intercept-resend attack; Quantum Fourier transform; Quantum key distribution; Unitary operation (ID#: 16-11351)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7096154&isnumber=7096097
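The unitary encoding step the abstract relies on can be checked numerically. This is a hedged sketch of the discrete quantum Fourier transform as a matrix, not the protocol itself (decoy photons and measurement are omitted); for a single qubit the transform reduces to the Hadamard gate.

```python
import numpy as np

def qft_matrix(n):
    """Discrete quantum Fourier transform on an n-dimensional state space:
    F[j, k] = exp(2*pi*i*j*k / n) / sqrt(n)."""
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(2j * np.pi * j * k / n) / np.sqrt(n)

n = 2                      # single qubit: the QFT is the Hadamard gate
F = qft_matrix(n)

# Encoding applies the transform to a basis state; decoding applies the
# inverse (conjugate transpose), recovering the original state exactly.
bit = np.array([0.0, 1.0])            # basis state |1>
encoded = F @ bit
decoded = F.conj().T @ encoded

unitary_error = np.abs(F.conj().T @ F - np.eye(n)).max()
roundtrip_error = np.abs(decoded - bit).max()
```

Unitarity is what guarantees the legitimate receiver can decode while the encoding still scrambles the computational basis.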
D. Aktas, B. Fedrici, F. Kaiser, L. Labonté and S. Tanzilli, “Distributing Energy-Time Entangled Photon Pairs in Demultiplexed Channels Over 110 Km,” 2015 Conference on Lasers and Electro-Optics (CLEO), San Jose, CA, 2015, pp. 1-2. doi: 10.1364/CLEO_QELS.2015.FTu2A.6
Abstract: We propose a novel approach to quantum cryptography using the latest demultiplexing technology to distribute photonic entanglement over a fully fibred network. We achieve unprecedented bit-rates, beyond the state of the art for similar approaches.
Keywords: demultiplexing; optical fibre networks; quantum cryptography; quantum entanglement; quantum optics; demultiplexed channels; demultiplexing technology; distance 110 km; energy-time entangled photon pairs; fully fibred network; photonic entanglement; quantum cryptography; Bit rate; Optical filters; Optimized production technology; Photonics; Quantum cryptography; Quantum entanglement; Standards (ID#: 16-11352)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7183249&isnumber=7182853
M. Koashi, “Round-Robin Differential-Phase-Shift QKD Protocol,” Lasers and Electro-Optics Pacific Rim (CLEO-PR), 2015 11th Conference on, Busan, 2015, pp. 1-2. doi: 10.1109/CLEOPR.2015.7376020
Abstract: Conventional quantum key distribution (QKD) schemes determine the amount of leaked information through estimation of signal disturbance. Here we present a QKD protocol based on an entirely different principle, which works without monitoring the disturbance.
Keywords: cryptographic protocols; differential phase shift keying; optical communication; quantum cryptography; quantum optics; leaked information; quantum key distribution schemes; round-robin differential-phase-shift QKD protocol; signal disturbance; Delays; Detectors; Optical interferometry; Photonics; Privacy; Protocols; Receivers (ID#: 16-11353)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7376020&isnumber=7375930
J. M. Vilardy O., M. S. Millán and E. Pérez-Cabré, “Secure Image Encryption and Authentication Using the Photon Counting Technique in the Gyrator Domain,” Signal Processing, Images and Computer Vision (STSIVA), 2015 20th Symposium on, Bogota, 2015, pp. 1-6. doi: 10.1109/STSIVA.2015.7330460
Abstract: In this work, we present the integration of the photon counting technique (PhCT) with an encryption system in the Gyrator domain (GD) for secure image authentication. The encryption system uses two random phase masks (RPMs), one defined in the spatial domain and the other in the GD, in order to encode the image to encrypt (the original image) into random noise. The rotation angle of the Gyrator transform adds a new key that increases the security of the encryption system. The decryption system is the inverse of the encryption system. The PhCT limits the information content of an image in a nonlinear, random and controlled way; the photon-limited image has only a few pixels of information and is usually known as a sparse image. We apply the PhCT to the encrypted image. The resulting image in the decryption system is not a copy of the original image; it is a random code that should contain sufficient information for the authentication of the original image using a nonlinear correlation technique. Finally, we evaluate the peak-to-correlation energy metric for different values of the parameters involved in the encryption and authentication systems, in order to test the verification capability of the authentication system.
Keywords: cryptography; image processing; photon counting; random noise; gyrator domain; inverse system; nonlinear correlation technique; peak-to-correlation energy metric; photon counting technique; random noise; random phase masks; secure image authentication; secure image encryption; sparse image; Authentication; Correlation; Encryption; Gyrators; Photonics; Transforms (ID#: 16-11354)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7330460&isnumber=7330388
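The photon-limiting step can be simulated in a few lines. This is a generic sketch of photon counting as Poisson sampling over normalized intensity, assuming a random stand-in for the encrypted image; the Gyrator-domain encryption and the correlation-based authentication are not reproduced here.

```python
import numpy as np

def photon_limit(image, n_photons, rng):
    """Simulate the photon counting technique (PhCT): each pixel receives a
    Poisson-distributed photon count whose mean is proportional to that
    pixel's share of the total intensity, yielding a sparse image."""
    p = image / image.sum()             # normalized intensity distribution
    return rng.poisson(n_photons * p)   # photon-limited (sparse) image

rng = np.random.default_rng(1)
img = rng.random((64, 64))              # stand-in for an encrypted image
sparse = photon_limit(img, n_photons=200, rng=rng)
nonzero = int((sparse > 0).sum())       # only a few pixels retain information
```

With 200 expected photons spread over 4096 pixels, most pixels count zero photons, which is exactly the controlled information reduction the abstract describes.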
E. Y. Zhu, C. Corbari, A. V. Gladyshev, P. G. Kazansky, H. K. Lo and L. Qian, “Multi-Party Agile QKD Network with a Fiber-Based Entangled Source,” 2015 Conference on Lasers and Electro-Optics (CLEO), San Jose, CA, 2015, pp. 1-2. doi: 10.1364/CLEO_AT.2015.JW2A.10
Abstract: A multi-party quantum key distribution scheme is experimentally demonstrated by utilizing a poled fiber-based broadband polarization-entangled source and dense wavelength-division multiplexing. Entangled photon pairs are delivered over 40-km of fiber, with secure key rates of more than 20 bits/s observed.
Keywords: optical fibre networks; optical fibre polarisation; quantum cryptography; quantum entanglement; quantum optics; wavelength division multiplexing; bit rate 20 bit/s; dense wavelength-division multiplexing; entangled photon pairs; fiber-based entangled source; multiparty Agile QKD network; multiparty quantum key distribution scheme; poled fiber-based broadband polarization-entangled source; secure key rates; size 40 km; Adaptive optics; Broadband communication; Optical polarization; Optical pumping; Photonics; Wavelength division multiplexing (ID#: 16-11355)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7183602&isnumber=7182853
S. Kleis, R. Herschel and C. G. Schaeffer, “Simple and Efficient Detection Scheme for Continuous Variable Quantum Key Distribution with M-ary Phase-Shift-Keying,” 2015 Conference on Lasers and Electro-Optics (CLEO), San Jose, CA, 2015, pp. 1-2. doi: 10.1364/CLEO_SI.2015.SW3M.7
Abstract: A detection scheme for discriminating coherent states in quantum key distribution systems employing PSK is proposed. It is simple and uses only standard components. Its applicability at power levels as low as 0.045 photons per symbol is experimentally verified.
Keywords: light coherence; optical modulation; phase shift keying; photodetectors; quantum cryptography; quantum optics; PSK; coherent states discrimination; continuous variable quantum key distribution; detection scheme; m-ary phase-shift-keying; Modulation; Optical mixing; Optical receivers; Optical transmitters; Photonics; Signal to noise ratio (ID#: 16-11356)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7184376&isnumber=7182853
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Data Race Vulnerabilities 2015 |
A race condition is a flaw that occurs when the timing or ordering of events affects a program’s correctness. A data race happens when there are two memory accesses in a program where both target the same location and are performed concurrently by two threads. For the Science of Security, data races may impact compositionality. The research work cited here was presented in 2015.
D. Last, “Using Historical Software Vulnerability Data to Forecast Future Vulnerabilities,” Resilience Week (RWS), 2015, Philadelphia, PA, 2015, pp. 1-7. doi: 10.1109/RWEEK.2015.7287429
Abstract: The field of network and computer security is a never-ending race with attackers, trying to identify and patch software vulnerabilities before they can be exploited. In this ongoing conflict, it would be quite useful to be able to predict when and where the next software vulnerability would appear. The research presented in this paper is the first step towards a capability for forecasting vulnerability discovery rates for individual software packages. This first step involves creating forecast models for vulnerability rates at the global level, as well as the category (web browser, operating system, and video player) level. These models will later be used as a factor in the predictive models for individual software packages. A number of regression models are fit to historical vulnerability data from the National Vulnerability Database (NVD) to identify historical trends in vulnerability discovery. Then, k-NN classification is used in conjunction with several time series distance measurements to select the appropriate regression models for a forecast. 68% and 95% confidence bounds are generated around the actual forecast to provide a margin of error. Experimentation using this method on the NVD data demonstrates the accuracy of these forecasts, as well as the accuracy of the confidence bounds forecasts. Analysis of these results indicates which time series distance measures produce the best vulnerability discovery forecasts.
Keywords: pattern classification; regression analysis; security of data; software packages; time series; computer security; k-NN classification; regression model; software package; software vulnerability data; time series distance measure; vulnerability forecasting; Accuracy; Market research; Predictive models; Software packages; Time series analysis; Training; cybersecurity; vulnerability discovery model; vulnerability prediction (ID#: 16-11192)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7287429&isnumber=7287407
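The model-selection step the abstract describes (fitting regression models to historical vulnerability counts, then using k-NN over a time-series distance to pick a model for forecasting) can be sketched as below. The distance measure, the polynomial degree, and all counts are illustrative assumptions, not the paper's NVD data.

```python
import numpy as np

def euclidean(a, b):
    """One of several time-series distance measures the paper compares."""
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

def fit_trend(series, degree):
    """Fit a polynomial regression model to a vulnerability-count series."""
    t = np.arange(len(series))
    return np.poly1d(np.polyfit(t, series, degree))

def knn_select_model(recent, references, distance=euclidean):
    """1-NN selection: choose the regression model whose reference series
    is nearest to the recently observed counts."""
    name, series = min(references.items(),
                       key=lambda kv: distance(recent, kv[1][:len(recent)]))
    return name, fit_trend(series, degree=2)

# Hypothetical monthly vulnerability counts per category (not NVD data).
references = {
    "browsers": [30, 34, 39, 45, 52, 60],
    "os":       [50, 49, 51, 50, 52, 51],
}
recent = [29, 35, 40]                       # newly observed counts
name, model = knn_select_model(recent, references)
forecast = float(model(len(references[name])))  # one step past the fit range
```

Here the recent counts track the accelerating "browsers" series, so its quadratic trend model is selected and extrapolated one month ahead.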
F. Schuster, T. Tendyck, C. Liebchen, L. Davi, A.-R. Sadeghi and T. Holz, “Counterfeit Object-Oriented Programming: On the Difficulty of Preventing Code Reuse Attacks in C++ Applications,” 2015 IEEE Symposium on Security and Privacy, San Jose, CA, 2015, pp. 745-762. doi: 10.1109/SP.2015.51
Abstract: Code reuse attacks such as return-oriented programming (ROP) have become prevalent techniques to exploit memory corruption vulnerabilities in software programs. A variety of corresponding defenses has been proposed, of which some have already been successfully bypassed -- and the arms race continues. In this paper, we perform a systematic assessment of recently proposed CFI solutions and other defenses against code reuse attacks in the context of C++. We demonstrate that many of these defenses that do not consider object-oriented C++ semantics precisely can be generically bypassed in practice. Our novel attack technique, denoted as counterfeit object-oriented programming (COOP), induces malicious program behavior by only invoking chains of existing C++ virtual functions in a program through corresponding existing call sites. COOP is Turing complete in realistic attack scenarios and we show its viability by developing sophisticated, real-world exploits for Internet Explorer 10 on Windows and Firefox 36 on Linux. Moreover, we show that even recently proposed defenses (CPS, T-VIP, vfGuard, and VTint) that specifically target C++ are vulnerable to COOP. We observe that constructing defenses resilient to COOP that do not require access to source code seems to be challenging. We believe that our investigation and results are helpful contributions to the design and implementation of future defenses against control flow hijacking attacks.
Keywords: C++ language; Turing machines; object-oriented programming; security of data; C++ applications; C++ virtual functions; CFI solutions; COOP; CPS; Firefox 36; Internet Explorer 10; Linux; ROP; T-VIP; Turing complete; VTint; Windows; code reuse attack prevention; code reuse attacks; control flow hijacking attacks; counterfeit object-oriented programming; malicious program behavior; memory corruption vulnerabilities; return-oriented programming; software programs; source code; vfGuard; Aerospace electronics; Arrays; Layout; Object oriented programming; Runtime; Semantics; C++; CFI; ROP; code reuse attacks (ID#: 16-11193)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163058&isnumber=7163005
Z. Wu, K. Lu and X. Wang, “Efficiently Trigger Data Races Through Speculative Execution,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 90-95. doi: 10.1109/HPCC-CSS-ICESS.2015.57
Abstract: Harmful data races hidden in concurrent programs are hard to detect due to non-determinism. Many race detectors report a large number of benign data races. To detect harmful data races automatically, previous tools dynamically execute the program and actively insert delays to create a real race condition, checking whether a failure occurs due to the race. If so, a harmful race is detected. However, performance may suffer because of the inserted delay. We use speculative execution to alleviate this problem. Unlike previous tools that suspend one thread's memory access to wait for another thread's, we continue to execute the thread's memory accesses and do not suspend the thread until it is about to execute a memory access that may change the effect of the race. A real race condition is therefore created with less delay, or even none. To our knowledge, this is the first technique that can trigger data races through speculative execution. The speculative execution does not affect the detection of harmful races. We have implemented a prototype tool and experimented on some real-world programs. Results show that our tool detects harmful races effectively and that speculative execution improves performance significantly.
Keywords: concurrency control; parallel programming; program compilers; security of data; concurrent programs; data race detection; dynamic program execution; nondeterminism; race detectors; speculative execution; thread memory access; Concurrent computing; Delays; Instruction sets; Instruments; Message systems; Programming; Relays; concurrent program; dynamic analysis; harmful data race (ID#: 16-11194)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336149&isnumber=7336120
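The delay-injection baseline that this paper improves on is easy to illustrate. The sketch below is a generic lost-update example, not the authors' tool (their contribution is replacing thread suspension with speculative execution): inserting a delay between an unsynchronized read and write widens the race window so the harmful interleaving fires reliably.

```python
import threading
import time

counter = 0

def racy_increment(delay):
    """Unsynchronized read-modify-write. Injecting a delay between the read
    and the write (as active race detectors do) widens the race window so
    the harmful interleaving is triggered reliably."""
    global counter
    tmp = counter        # read the shared value
    time.sleep(delay)    # injected delay creates the real race condition
    counter = tmp + 1    # write based on the now-stale read

threads = [threading.Thread(target=racy_increment, args=(0.2,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Both threads read 0 before either wrote, so one increment is lost.
lost_update = (counter == 1)
```

Without the injected delay the two increments usually interleave harmlessly, which is why naive dynamic detection misses the bug.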
J. Adebayo and L. Kagal, “A Privacy Protection Procedure for Large Scale Individual Level Data,” Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, Baltimore, MD, 2015, pp. 120-125. doi: 10.1109/ISI.2015.7165950
Abstract: We present a transformation procedure for large scale individual level data that produces output data in which no linear combinations of the resulting attributes can yield the original sensitive attributes from the transformed data. In doing this, our procedure eliminates all linear information regarding a sensitive attribute from the input data. The algorithm combines principal components analysis of the data set with orthogonal projection onto the subspace containing the sensitive attribute(s). The algorithm presented is motivated by applications where there is a need to drastically 'sanitize' a data set of all information relating to sensitive attribute(s) before analysis of the data using a data mining algorithm. Sensitive attribute removal (sanitization) is often needed to prevent disparate impact and discrimination on the basis of race, gender, and sexual orientation in high stakes contexts such as determination of access to loans, credit, employment, and insurance. We show through experiments that our proposed algorithm outperforms other privacy preserving techniques by more than 20 percent in lowering the ability to reconstruct sensitive attributes from large scale data.
Keywords: data analysis; data mining; data privacy; principal component analysis; data mining algorithm; large scale individual level data; orthogonal projection; principal component analysis; privacy protection procedure; sanitization; sensitive attribute removal; Data privacy; Loans and mortgages; Noise; Prediction algorithms; Principal component analysis; Privacy; PCA; privacy preserving (ID#: 16-11195)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165950&isnumber=7165923
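The orthogonal-projection step the abstract describes can be sketched with numpy. This is only the projection component (the paper combines it with principal components analysis), and the data here are synthetic; after projecting each attribute onto the orthogonal complement of the sensitive attribute, no linear combination of the output columns can recover it.

```python
import numpy as np

def remove_linear_info(X, s):
    """Project every column of X onto the orthogonal complement of the
    centered sensitive attribute s, eliminating all linear information
    about s from the transformed data."""
    Xc = X - X.mean(axis=0)
    sc = (s - s.mean()).reshape(-1, 1)
    # Orthogonal projector onto the complement of span(sc)
    P = np.eye(len(s)) - (sc @ sc.T) / (sc.T @ sc)
    return P @ Xc

rng = np.random.default_rng(0)
s = rng.normal(size=200)                             # sensitive attribute
X = np.column_stack([2 * s + rng.normal(size=200),   # column correlated with s
                     rng.normal(size=200)])          # unrelated column
Xs = remove_linear_info(X, s)

# Every transformed column is exactly uncorrelated with s (up to float error).
residual = np.abs((s - s.mean()) @ Xs) / len(s)
```

The first column starts strongly correlated with the sensitive attribute, yet its projection is orthogonal to it, which is the sanitization guarantee the paper targets.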
J. García, “Broadband Connected Aircraft Security,” 2015 Integrated Communication, Navigation and Surveillance Conference (ICNS), Herndon, VA, USA, 2015, pp. 1-23. doi: 10.1109/ICNSURV.2015.7121291
Abstract: There is an inter-company race among service providers to offer the highest speed connections and services to the passenger. With some providers offering up to 50Mbps per aircraft and global coverage, traditional data links between aircraft and ground are becoming obsolete.
Keywords: (not provided) (ID#: 16-11197)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7121291&isnumber=7121207
D. H. Summerville, K. M. Zach and Y. Chen, “Ultra-Lightweight Deep Packet Anomaly Detection for Internet of Things Devices,” 2015 IEEE 34th International Performance Computing and Communications Conference (IPCCC), Nanjing, 2015, pp. 1-8. doi: 10.1109/PCCC.2015.7410342
Abstract: As we race toward the Internet of Things (IoT), small embedded devices are increasingly becoming network-enabled. Often, these devices cannot meet the computational requirements of current intrusion prevention mechanisms, or designers prioritize additional features and services over security; as a result, many IoT devices are vulnerable to attack. We have developed an ultra-lightweight deep packet anomaly detection approach that is feasible to run on resource-constrained IoT devices yet provides good discrimination between normal and abnormal payloads. Feature selection uses efficient bit-pattern matching, requiring only a bitwise AND operation followed by a conditional counter increment. The discrimination function is implemented as a lookup table, allowing both fast evaluation and flexible feature space representation. Due to its simplicity, the approach can be efficiently implemented in either hardware or software and can be deployed in network appliances, interfaces, or in the protocol stack of a device. We demonstrate near-perfect payload discrimination for data captured from off-the-shelf IoT devices.
Keywords: Internet of Things; feature selection; security of data; table lookup; Internet of Things devices; IoT devices; bit-pattern matching; bitwise AND operation; conditional counter increment; lookup-table; ultra-lightweight deep packet anomaly detection approach; Computational complexity; Detectors; Feature extraction; Hardware; Hidden Markov models; Payloads; Performance evaluation; network anomaly detection (ID#: 16-11198)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7410342&isnumber=7410258
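The two mechanisms the abstract names, bit-pattern features built from a bitwise AND plus a conditional counter increment, and a lookup-table discrimination function, can be sketched in miniature. The patterns, payloads, and "normal" profile below are invented for illustration; the paper's actual feature construction is more elaborate.

```python
def extract_features(payload, patterns, mask=0xFF):
    """Count occurrences of each bit pattern in the payload using only a
    bitwise AND followed by a conditional counter increment per byte."""
    counts = [0] * len(patterns)
    for byte in payload:
        for i, pat in enumerate(patterns):
            if byte & mask == pat:      # bitwise AND, then compare
                counts[i] += 1          # conditional counter increment
    return counts

def is_anomalous(counts, lookup):
    """Discrimination function as a lookup table keyed on the feature
    vector; unseen feature vectors are flagged as anomalous."""
    return lookup.get(tuple(counts), True)

# Hypothetical 'normal' profile learned from one benign payload.
patterns = [0x00, 0x7E]
normal_payload = b"\x7e\x01\x02\x7e"
lookup = {tuple(extract_features(normal_payload, patterns)): False}

ok = is_anomalous(extract_features(b"\x7e\x01\x02\x7e", patterns), lookup)
bad = is_anomalous(extract_features(b"\xff\xff\xff\xff", patterns), lookup)
```

Both steps use only integer comparisons and table lookups, which is what makes the approach cheap enough for constrained devices.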
B. M. Bhatti and N. Sami, “Building Adaptive Defense Against Cybercrimes Using Real-Time Data Mining,” Anti-Cybercrime (ICACC), 2015 First International Conference on, Riyadh, 2015, pp. 1-5. doi: 10.1109/Anti-Cybercrime.2015.7351949
Abstract: In today's fast-changing world, cybercrimes are growing at a perturbing pace. By its very definition, cybercrime arises from the exploitation of threats and vulnerabilities. However, recent history reveals that such crimes often come with surprises and seldom follow trends. This puts defense systems behind in the race because of their inability to identify new patterns of cybercrime and to adapt to the required levels of security. This paper envisions the empowerment of security systems through real-time data mining, by virtue of which these systems will be able to dynamically identify patterns of cybercrime. This will help security systems step up their defense capabilities while adapting to the levels required by newly emerging patterns. To remain within the scope of this paper, the application of this approach is discussed in the context of selected cybercrime scenarios.
Keywords: computer crime; data mining; perturbation techniques; adaptive cybercrime defense system; real-time data mining; security systems; vulnerability exploitation; Computer crime; Data mining; Engines; Internet; Intrusion detection; Real-time systems; Cybercrime; Cybercrime Pattern Recognition (CPR); Information Security; Real-time Data Mining Engine (RTDME); Real-time Security Protocol (RTSP); Realtime Data Mining; TPAC (Threat Prevention & Adaptation Code); Threat Prevention and Response Algorithm Generator (TPRAG) (ID#: 16-11199)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7351949&isnumber=7351910
K. V. Muhongya and M. S. Maharaj, “Visualising and Analysing Online Social Networks,” Computing, Communication and Security (ICCCS), 2015 International Conference on, Pamplemousses, 2015, pp. 1-6. doi: 10.1109/CCCS.2015.7374121
Abstract: The immense popularity of online social networks generates abundant data that, when carefully analysed, can reveal unexpected realities. People use these networks to establish relationships in the form of friendships. Based on the data collected, students' networks were extracted, visualized, and analysed with Gephi to reflect the connections among South African communities. The analysis revealed slow progress in connections among communities from different ethnic groups in South Africa. Data were collected through Netvizz, and Gephi was used to visualize the social media network structures.
Keywords: data visualisation; social networking (online); Gephi; South African communities; analysing online social networks; student network; visualising online social networks; visualize social media network structures; Business; Data visualization; Facebook; Image color analysis; Joining processes; Media; Gephi; Online social network; betweeness centrality; closeness centrality; graph; race; visualization (ID#: 16-11200)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7374121&isnumber=7374113
W. A. R. d. Souza and A. Tomlinson, “SMM Revolutions,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1466-1472.
doi: 10.1109/HPCC-CSS-ICESS.2015.278
Abstract: The System Management Mode (SMM) is a highly privileged processor operating mode in x86 platforms. The goal of the SMM is to perform system management functions, such as hardware control and power management. Because of this, SMM has powerful resources. Moreover, its executive software executes unnoticed by any other component in the system, including operating systems and hypervisors. For that reason, SMM has been exploited in the past to facilitate attacks and misuse, or alternatively to build security tools capitalising on its resources. In this paper, we discuss how the use of the SMM has been contributing to the arms race between a system's attackers and defenders. We analyse the main work published on attacks, misuse, and the implementation of security tools in the SMM, and how the SMM has been modified to respond to those issues. Finally, we discuss how Intel Software Guard Extensions (SGX) technology, a sort of “hypervisor in processor”, presents a possible answer to the issue of using SMM for security purposes.
Keywords: operating systems (computers); security of data; virtualisation; Intel Software Guard Extensions technology; SGX technology; SMM; hardware control; hypervisor; hypervisors; operating systems; power management; processor operating mode; system attackers; system defenders; system management mode; Hardware; Operating systems; Process control; Registers; Security; Virtual machine monitors; PC architecture; SGX; SMM; security; virtualization (ID#: 16-11201)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336375&isnumber=7336120
S. Pietrowicz, B. Falchuk, A. Kolarov and A. Naidu, “Web-Based Smart Grid Network Analytics Framework,” Information Reuse and Integration (IRI), 2015 IEEE International Conference on, San Francisco, CA, 2015, pp. 496-501. doi: 10.1109/IRI.2015.82
Abstract: As utilities across the globe continue to deploy Smart Grid technology, there is an immediate and growing need for analytics, diagnostics and forensics tools akin to those commonly employed in enterprise IP networks to provide visibility and situational awareness into the operation, security and performance of Smart Energy Networks. Large-scale Smart Grid deployments have raced ahead of mature management tools, leaving gaps and challenges for operators and asset owners. Proprietary Smart Grid solutions have added to the challenge. This paper reports on the research and development of a new vendor-neutral, packet-based, network analytics tool called MeshView that abstracts information about system operation from low-level packet detail and visualizes endpoint and network behavior of wireless Advanced Metering Infrastructure, Distribution Automation, and SCADA field networks. Using real utility use cases, we report on the challenges and resulting solutions in the software design, development and Web usability of the framework, which is currently in use by several utilities.
Keywords: Internet; power engineering computing; smart power grids; software engineering; Internet protocols; MeshView tool; SCADA field network; Web usability; Web-based smart grid network analytics framework; distribution automation; enterprise IP networks; smart energy networks; smart grid technology; software design; software development; wireless advanced metering infrastructure; Conferences; Advanced Meter Infrastructure; Big data visualization; Cybersecurity; Field Area Networks; Network Analytics; Smart Energy; Smart Grid; System scalability; Web management (ID#: 16-11202)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7301018&isnumber=7300933
M. Phillips, B. M. Knoppers and Y. Joly, “Seeking a ‘Race to the Top’ in Genomic Cloud Privacy?,” Security and Privacy Workshops (SPW), 2015 IEEE, San Jose, CA, 2015, pp. 65-69. doi: 10.1109/SPW.2015.26
Abstract: The relationship between data-privacy lawmakers and genomics researchers may have gotten off on the wrong foot. Critics of protectionism in the current laws advocate that we abandon the existing paradigm, which was formulated in an entirely different medical research context. Genomic research no longer requires physically risky interventions that directly affect participants' integrity. But to simply strip away these protections for the benefit of research projects neglects not only new concerns about data privacy, but also broader interests that research participants have in the research process. Protectionism and privacy should not be treated as unwelcome anachronisms. We should instead seek to develop an updated, positive framework for data privacy and participant participation and collective autonomy. It is beginning to become possible to imagine this new framework, by reflecting on new developments in genomics and bioinformatics, such as secure remote processing, data commons, and health data co-operatives.
Keywords: bioinformatics; cloud computing; data privacy; genomics; security of data; collective autonomy; data commons; genomic cloud privacy; genomics research; health data cooperatives; medical research; participant participation; protectionism; secure remote processing; Bioinformatics; Cloud computing; Context; Data privacy; Genomics; Law; Privacy; data protection; health data co-operatives; privacy (ID#: 16-11203)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163210&isnumber=7163193
N. Koutsopoulos, M. Northover, T. Felden and M. Wittiger, “Advancing Data Race Investigation and Classification Through Visualization,” Software Visualization (VISSOFT), 2015 IEEE 3rd Working Conference on, Bremen, 2015, pp. 200-204. doi: 10.1109/VISSOFT.2015.7332437
Abstract: Data races in multi-threaded programs are a common source of serious software failures. Their undefined behavior may lead to intermittent failures with unforeseeable, and in embedded systems, even life-threatening consequences. To mitigate these risks, various detection tools have been created to help identify potential data races. However, these tools produce thousands of data race warnings, often in text-based format, which makes the manual assessment process slow and error-prone. Through visualization, we aim to speed up the data race assessment process by reducing the amount of information to be investigated, and to provide a versatile interface that quality assurance engineers can use to investigate data race warnings. The ultimate goal of our integrated software suite, called RaceView, is to improve the usability of the data race information to such an extent that the elimination of data races can be incorporated into the regular software development process.
Keywords: data visualisation; multi-threading; pattern classification; program diagnostics; software quality; RaceView; data race assessment process; data race classification; data race elimination; data race information usability; data race warnings; integrated software suite; interface; multithreaded programs; quality assurance engineers; software development process; visualization; Data visualization; Instruction sets; Manuals; Merging; Navigation; Radiation detectors; data race detection; graph navigation; graph visualization; static analysis; user interface (ID#: 16-11204)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7332437&isnumber=7332403
J. Schimmel, K. Molitorisz, A. Jannesari and W. F. Tichy, “Combining Unit Tests for Data Race Detection,” Automation of Software Test (AST), 2015 IEEE/ACM 10th International Workshop on, Florence, 2015, pp. 43-47. doi: 10.1109/AST.2015.16
Abstract: Multithreaded programs are subject to data races. Data race detectors find such defects by static or dynamic inspection of the program. Current race detectors suffer from high numbers of false positives, slowdown, and false negatives. Because of these disadvantages, recent approaches reduce the false positive rate and the runtime overhead by applying race detection only to a subset of the whole program. To achieve this, they make use of parallel test cases, but this has other disadvantages: parallel test cases have to be engineered manually, must cover code regions that are affected by data races, and must execute with input data that provokes the data races. This paper introduces an approach that does not need additional parallel test cases to be engineered. Instead, we take conventional unit tests as input and automatically generate parallel test cases, execution contexts, and input data. Most real-world software projects nowadays have high test coverage, so a large information base is already available as input for our approach. We analyze and reuse input data, initialization code, and mock objects that conventional unit tests already contain. With this information, no further oracles are necessary for generating parallel test cases; instead, we reuse the knowledge that is already implicitly available in conventional unit tests. We implemented our parallel test case generation strategy in a tool called TestMerge. To evaluate these test cases, we used them as input for the dynamic race detector CHESS, which explores all possible thread interleavings for a given program. We evaluated TestMerge using six sample programs and one industrial application with a high test case coverage of over 94%. For this benchmark, TestMerge identified all previously known data races and even revealed previously unknown ones.
Keywords: multi-threading; program testing; CHESS; TestMerge; data race detectors; dynamic race detector; multithreaded programs; parallel test case generation; thread interleavings; unit tests; Computer bugs; Context; Customer relationship management; Detectors; Schedules; Software; Testing; Data Races; Multicore Software Engineering; Unit Testing (ID#: 16-11205)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166265&isnumber=7166248
S. W. Park, O. K. Ha and Y. K. Jun, “A Loop Filtering Technique for Reducing Time Overhead of Dynamic Data Race Detection,” 2015 8th International Conference on Database Theory and Application (DTA), Jeju, 2015, pp. 29-32. doi: 10.1109/DTA.2015.18
Abstract: Data races are the hardest defects to handle in multithreaded programs due to the nondeterministic interleaving of concurrent threads. The main drawback of data race detection using dynamic techniques is the additional overhead of monitoring program execution and analyzing every conflicting memory operation; it is therefore important to reduce these overheads when debugging data races. This paper presents a loop filtering technique that rules out repeatedly executed regions of loops from the monitoring targets in multithreaded programs. Empirical results using multithreaded programs show that the filtering technique reduces the average runtime overhead to 60% of that of dynamic data race detection.
Keywords: concurrency (computers); monitoring; multi-threading; program debugging; concurrent threads; data races debugging; dynamic data race detection; dynamic techniques; loop filtering technique; monitoring program execution; multithread programs; nondeterministic interleaving; Databases; Filtering; Monitoring; Performance analysis; Runtime; Multithread programs; data race detection; dynamic analysis; filtering; runtime overheads (ID#: 16-11207)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7433734&isnumber=7433698
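The filtering idea in the abstract above can be sketched as follows. All names are illustrative, and this is a simplification of whatever the authors actually implement: once an access site inside a loop has been monitored in one iteration, accesses from the same static site in later iterations are filtered out before reaching the race detector.

```python
# Minimal sketch of loop filtering for a dynamic race detector (illustrative).

class FilteringMonitor:
    def __init__(self):
        self.seen_sites = set()   # static access sites already monitored once
        self.monitored = []       # accesses actually forwarded to the detector

    def access(self, site, thread, addr, is_write):
        # Filter: skip any site already observed in an earlier loop iteration.
        if site in self.seen_sites:
            return
        self.seen_sites.add(site)
        self.monitored.append((site, thread, addr, is_write))

mon = FilteringMonitor()
for i in range(1000):                       # 1000 iterations of a monitored loop
    mon.access("loop.c:12", 0, 0x100 + i, True)
print(len(mon.monitored))                   # 1: only the first iteration is analyzed
```

The trade-off, of course, is that races manifesting only in later iterations (e.g. through different addresses) can be missed, which is why the paper reports a reduction of overhead rather than claiming identical precision.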
C. Jia, C. Yang and W. K. Chan, “Architecturing Dynamic Data Race Detection as a Cloud-Based Service,” Web Services (ICWS), 2015 IEEE International Conference on, New York, NY, 2015, pp. 345-352. doi: 10.1109/ICWS.2015.54
Abstract: A web-based service consists of layers of programs (components) in the technology stack. Analyzing program executions of these components separately allows service vendors to acquire insights into specific program behaviors or problems in these components, thereby pinpointing areas of improvement in the services they offer. Many existing approaches to testing as a service take an orchestration approach that splits components under test and the analysis services into a set of distributed modules communicating through message-based approaches. In this paper, we present the first work in providing dynamic analysis as a service using a virtual machine (VM)-based approach to dynamic data race detection. Such detection needs to track a huge number of events performed by each thread of a program execution of a service component, making it impractical to transmit such huge numbers of events individually via message passing. In our model, we instruct VMs to perform holistic dynamic race detection on service components and only transfer the detection results to our service selection component. With the result data as guidance, the service selection component selects VM instances to fulfill subsequent analysis requests. The experimental results show that our model is feasible.
Keywords: Web services; cloud computing; program diagnostics; virtual machines; VM-based approach; Web-based service; cloud-based service; dynamic analysis-as-a-service; dynamic data race detection; message-based approach; orchestration approach; program behavior; program execution analysis; program execution thread; virtual machine; Analytical models; Clocks; Detectors; Instruction sets; Optimization; Performance analysis; cloud-based usage model; data race detection; dynamic analysis; service engineering; service selection strategy (ID#: 16-11208)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7195588&isnumber=7195533
K. Shankari and N. G. B. Amma, “Clasp: Detecting Potential Deadlocks and Its Removal by Iterative Method,” 2015 Online International Conference on Green Engineering and Technologies (IC-GET), Coimbatore, 2015, pp. 1-5. doi: 10.1109/GET.2015.7453824
Abstract: A multithreaded program may reach a deadlock at runtime, preventing it from producing the required output, so deadlocks must be eliminated for the process to complete successfully. The proposed system, Clasp, actively eliminates removable lock dependencies and localizes potential deadlocks iteratively: it finds the lock dependencies, divides them into partitions, validates the thread-specific partitions, and then searches the remaining dependencies again to eliminate them. In this way, bugs in a multithreaded program can be traced; when a data race is identified, it is isolated and then removed using a scheduler, although this can increase the execution time of the code. By iterating the process, the code can be freed of bugs and deadlocks. The approach can be applied to real-world problems to detect the conditions that cause a deadlock.
Keywords: concurrency control; iterative methods; multi-threading; program debugging; system recovery; Clasp; bugs; code execution time; data race; deadlock removal; iterative method; multithreaded program; potential deadlock detection; thread specific partitions; Algorithm design and analysis; Clocks; Computer bugs; Heuristic algorithms; Instruction sets; Synchronization; System recovery; data races; deadlock; lock dependencies; multi threaded code (ID#: 16-11209)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7453824&isnumber=7453764
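The lock-dependency analysis that Clasp builds on can be illustrated with the classic textbook check (this sketch is not the paper's algorithm): build a lock-dependency graph, where an edge L1 -> L2 means some thread acquired L2 while holding L1, and flag a potential deadlock when the graph contains a cycle.

```python
# Illustrative lock-order cycle detection (standard technique, not Clasp itself).
from collections import defaultdict

def lock_dependencies(acquisitions):
    """acquisitions: list of (thread, held_locks, acquired_lock) events."""
    graph = defaultdict(set)
    for _thread, held, acquired in acquisitions:
        for h in held:
            graph[h].add(acquired)    # edge: held lock -> newly acquired lock
    return graph

def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)
    def dfs(node):
        color[node] = GRAY
        for nxt in graph[node]:
            if color[nxt] == GRAY:            # back edge: cycle found
                return True
            if color[nxt] == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False
    return any(color[n] == WHITE and dfs(n) for n in list(graph))

# Thread 1 takes A then B; thread 2 takes B then A: classic potential deadlock.
events = [(1, {"A"}, "B"), (2, {"B"}, "A")]
print(has_cycle(lock_dependencies(events)))   # True
```

Clasp's contribution, per the abstract, is doing this kind of detection iteratively over partitions of the dependencies and validating thread-specific partitions, rather than a single global cycle search.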
J. R. Wilcox, P. Finch, C. Flanagan and S. N. Freund, “Array Shadow State Compression for Precise Dynamic Race Detection (T),” Automated Software Engineering (ASE), 2015 30th IEEE/ACM International Conference on, Lincoln, NE, 2015, pp. 155-165. doi: 10.1109/ASE.2015.19
Abstract: Precise dynamic race detectors incur significant time and space overheads, particularly for array-intensive programs, due to the need to store and manipulate analysis (or shadow) state for every element of every array. This paper presents SlimState, a precise dynamic race detector that uses an adaptive, online algorithm to optimize array shadow state representations. SlimState is based on the insight that common array access patterns lead to analogous patterns in array shadow state, enabling optimized, space efficient representations of array shadow state with no loss in precision. We have implemented SlimState for Java. Experiments on a variety of benchmarks show that array shadow compression reduces the space and time overhead of race detection by 27% and 9%, respectively. It is particularly effective for array-intensive programs, reducing space and time overheads by 35% and 17%, respectively, on these programs.
Keywords: Java; program testing; system monitoring; Java; SLIMSTATE; adaptive online algorithm; analogous patterns; array access patterns; array shadow state compression; array shadow state representations; array-intensive programs; precise dynamic race detection; space efficient representations; space overhead; time overhead; Arrays; Clocks; Detectors; Heuristic algorithms; Instruction sets; Java; Synchronization; concurrency; data race detection; dynamic analysis (ID#: 16-11210)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7372005&isnumber=7371976
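The insight behind SlimState's compression can be sketched as follows (illustrative names and a simplified two-state representation, not the tool's actual data structures): when every element of an array carries identical shadow state, a common pattern for arrays touched uniformly, one shared cell can stand in for the whole array, and the representation inflates to per-element cells only when some element's state diverges.

```python
# Sketch of adaptive array shadow-state compression (illustrative).

class ArrayShadow:
    def __init__(self, length, initial_state):
        self.length = length
        self.shared = initial_state   # compressed form: one state for all elements
        self.per_element = None       # expanded form, allocated lazily

    def update(self, index, state):
        if self.per_element is None:
            if state == self.shared:
                return                # still uniform: stay compressed
            # Divergence: inflate to one shadow cell per element.
            self.per_element = [self.shared] * self.length
        self.per_element[index] = state

    def cells_allocated(self):
        return 1 if self.per_element is None else self.length

shadow = ArrayShadow(1_000_000, ("thread-0", "epoch-1"))
shadow.update(42, ("thread-0", "epoch-1"))   # uniform update: still compressed
print(shadow.cells_allocated())              # 1
shadow.update(7, ("thread-3", "epoch-9"))    # divergent update: inflates
print(shadow.cells_allocated())              # 1000000
```

A production detector would also support partially uniform patterns (the paper's "adaptive, online" algorithm), but the compressed/inflated distinction above is the essence of the space savings.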
A. S. Rajam, L. E. Campostrini, J. M. M. Caamaño and P. Clauss, “Speculative Runtime Parallelization of Loop Nests: Towards Greater Scope and Efficiency,” Parallel and Distributed Processing Symposium Workshop (IPDPSW), 2015 IEEE International, Hyderabad, 2015, pp. 245-254. doi: 10.1109/IPDPSW.2015.10
Abstract: Runtime loop optimization and speculative execution are becoming more and more prominent to leverage performance in the current multi-core and many-core era. However, a wider and more efficient use of such techniques is mainly hampered by the prohibitive time overhead induced by centralized data race detection, dynamic code behavior modeling, and code generation. Most existing Thread Level Speculation (TLS) systems rely on slicing the target loops into chunks and trying to execute the chunks in parallel with the help of a centralized, performance-penalizing verification module that takes care of data races. Due to the lack of a data dependence model, these speculative systems are not capable of doing advanced transformations and, more importantly, the chances of rollback are high. The polytope model is a well-known mathematical model to analyze and optimize loop nests. Current state-of-the-art tools limit the application of the polytope model to static control codes; thus, none of these tools can handle codes with while loops, indirect memory accesses, or pointers. Apollo (Automatic Polyhedral Loop Optimizer) is a framework that goes one step beyond and applies the polytope model dynamically by using TLS. Apollo can predict, at runtime, whether the codes are behaving linearly or not, and applies polyhedral transformations on-the-fly. This paper presents a novel system which extends the capability of Apollo to handle codes whose memory accesses are not necessarily linear. More generally, this approach expands the applicability of the polytope model at runtime to a wider class of codes.
Keywords: multiprocessing systems; optimisation; parallel programming; program compilers; program verification; Apollo; TLS; automatic polyhedral loop optimizer; centralized data race detection; centralized performance-penalizing verification module; code generation; data dependence model; dynamic code behavior modeling; loop nests; many-core era; memory accesses; multicore era; polyhedral transformations; polytope model; prohibitive time overhead; runtime loop optimization; speculative execution; speculative runtime parallelization; static control codes; thread level speculation systems; Adaptation models; Analytical models; Mathematical model; Optimization; Predictive models; Runtime; Skeleton; Automatic parallelization; Polyhedral model; Thread level speculation; loop optimization; non affine accesses (ID#: 16-11211)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7284316&isnumber=7284273
C. Segulja and T. S. Abdelrahman, “Clean: A Race Detector with Cleaner Semantics,” 2015 ACM/IEEE 42nd Annual International Symposium on Computer Architecture (ISCA), Portland, OR, 2015, pp. 401-413. doi: 10.1145/2749469.2750395
Abstract: Data races make parallel programs hard to understand. Precise race detection that stops an execution on first occurrence of a race addresses this problem, but it comes with significant overhead. In this work, we exploit the insight that precisely detecting only write-after-write (WAW) and read-after-write (RAW) races suffices to provide cleaner semantics for racy programs. We demonstrate that stopping an execution only when these races occur ensures that synchronization-free-regions appear to be executed in isolation and that their writes appear atomic. Additionally, the undetected racy executions can be given certain deterministic guarantees with efficient mechanisms. We present CLEAN, a system that precisely detects WAW and RAW races and deterministically orders synchronization. We demonstrate that the combination of these two relatively inexpensive mechanisms provides cleaner semantics for racy programs. We evaluate both software-only and hardware-supported CLEAN. The software-only CLEAN runs all Pthread benchmarks from the SPLASH-2 and PARSEC suites with an average 7.8x slowdown. The overhead of precise WAW and RAW detection (5.8x) constitutes the majority of this slowdown. Simple hardware extensions reduce the slowdown of CLEAN's race detection to on average 10.4% and never more than 46.7%.
Keywords: parallel programming; programming language semantics; synchronisation; CLEAN system; PARSEC; Pthread benchmarks; RAW races; SPLASH-2; WAW races; cleaner semantics; data races; deterministic guarantees; hardware-supported CLEAN; parallel programs; race detection; race detector; racy executions; racy programs; read-after-write races; software-only CLEAN; synchronization-free-regions; write-after-write races; Instruction sets; Switches; Synchronization (ID#: 16-11212)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7284082&isnumber=7284049
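The distinction CLEAN draws among race kinds can be made concrete with a small classifier sketch (illustrative code, not the system): among unordered conflicting accesses from different threads, only write-after-write (WAW) and read-after-write (RAW) pairs are detected precisely, while write-after-read (WAR) pairs are left to the cheaper deterministic-ordering mechanism.

```python
# Sketch of CLEAN's race classification (illustrative, not the real detector).

def classify(first, second):
    """Each access is (thread, op) with op in {'R', 'W'}; the pair is assumed
    unordered, i.e. no synchronization orders the two accesses."""
    if first[0] == second[0]:
        return None                  # same thread: ordered by program order
    kinds = (first[1], second[1])
    if kinds == ("W", "W"):
        return "WAW"                 # precisely detected: stops the execution
    if kinds == ("W", "R"):
        return "RAW"                 # precisely detected: stops the execution
    if kinds == ("R", "W"):
        return "WAR"                 # tolerated: handled by deterministic ordering
    return None                      # read/read pairs never race

print(classify((0, "W"), (1, "R")))  # RAW
print(classify((0, "R"), (1, "W")))  # WAR
```

The paper's measurements reflect this split: the precise WAW/RAW detection accounts for most of the 7.8x software-only slowdown, which the hardware extensions then reduce to about 10%.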
P. Wang, D. J. Dean and X. Gu, “Understanding Real World Data Corruptions in Cloud Systems,” Cloud Engineering (IC2E), 2015 IEEE International Conference on, Tempe, AZ, 2015, pp. 116-125. doi: 10.1109/IC2E.2015.41
Abstract: Big data processing is one of the killer applications for cloud systems. MapReduce systems such as Hadoop are the most popular big data processing platforms used in the cloud system. Data corruption is one of the most critical problems in cloud data processing, which not only has serious impact on the integrity of individual application results but also affects the performance and availability of the whole data processing system. In this paper, we present a comprehensive study on 138 real world data corruption incidents reported in Hadoop bug repositories. We characterize those data corruption problems in four aspects: (1) what impact can data corruption have on the application and system? (2) how is data corruption detected? (3) what are the causes of the data corruption? and (4) what problems can occur while attempting to handle data corruption? Our study has made the following findings: (1) the impact of data corruption is not limited to data integrity, (2) existing data corruption detection schemes are quite insufficient: only 25% of data corruption problems are correctly reported, 42% are silent data corruption without any error message, and 21% receive imprecise error report. We also found the detection system raised 12% false alarms, (3) there are various causes of data corruption such as improper runtime checking, race conditions, inconsistent block states, improper network failure handling, and improper node crash handling, and (4) existing data corruption handling mechanisms (i.e., data replication, replica deletion, simple re-execution) make frequent mistakes including replicating corrupted data blocks, deleting uncorrupted data blocks, or causing undesirable resource hogging.
Keywords: cloud computing; data handling; Hadoop; MapReduce systems; big data processing; cloud data processing; cloud systems; data corruption; data corruption problems; data integrity; improper network failure handling; improper node crash handling; inconsistent block states; race conditions; real world data corruptions; runtime checking; Availability; Computer bugs; Data processing; Radiation detectors; Software; Yarn (ID#: 16-11213)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092909&isnumber=7092808
J. Zarei, M. M. Arefi and H. Hassani, “Bearing Fault Detection Based on Interval Type-2 Fuzzy Logic Systems for Support Vector Machines,” Modeling, Simulation, and Applied Optimization (ICMSAO), 2015 6th International Conference on, Istanbul, 2015, pp. 1-6. doi: 10.1109/ICMSAO.2015.7152214
Abstract: This paper presents a method based on Interval Type-2 Fuzzy Logic Systems (IT2FLSs) for combining different Support Vector Machines (SVMs) for bearing fault detection. For this purpose, an experimental setup was built to collect data samples of stator current phase a of an induction motor using healthy and defective bearings. The defective bearing has a 1-mm-diameter hole in its inner race, created by a spark. An Interval Type-2 Fuzzy Fusion Model (IT2FFM) consisting of two phases is presented and used to classify the testing data samples. A comparison among T1FFM, IT2FFM, SVMs, and Adaptive Neuro-Fuzzy Inference Systems (ANFIS) in classifying the testing data samples shows the effectiveness of the proposed IT2FFM.
Keywords: electrical engineering computing; fault diagnosis; fuzzy logic; fuzzy neural nets; fuzzy reasoning; fuzzy set theory; induction motors; machine bearings; mechanical engineering computing; pattern classification; stators; support vector machines; ANFIS; IT2FFM; Interval Type-2 Fuzzy Fusion Model; SVM; T1FFM; adaptive neuro fuzzy inference systems; bearing fault detection; defective bearing; healthy bearing; induction motor; inner race hole; interval type-2 fuzzy logic systems; size 1 mm; stator current phase; support vector machines; testing data sample classification; Accuracy; Fault detection; Fuzzy logic; Fuzzy sets; Kernel; Support vector machines; Testing; Bearing; Fault Detection; Support Vector Machines; Type-2 fuzzy logic system (ID#: 16-11214)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7152214&isnumber=7152193
R. Z. Haddad, C. A. Lopez, J. Pons-Llinares, J. Antonino-Daviu and E. G. Strangas, “Outer Race Bearing Fault Detection in Induction Machines Using Stator Current Signals,” 2015 IEEE 13th International Conference on Industrial Informatics (INDIN), Cambridge, 2015, pp. 801-808. doi: 10.1109/INDIN.2015.7281839
Abstract: This paper discusses the effect of the operating load as well as the suitability of combined startup and steady-state analysis for the detection of bearing faults in induction machines. Motor Current Signature Analysis and Linear Discriminant Analysis are used to detect and estimate the severity of an outer race bearing fault. The algorithm is based on using the machine stator current signals instead of the conventional vibration signals, which has the advantages of simplicity and low cost of the necessary equipment. The machine stator current signals are analyzed during steady state and startup using the Fast Fourier Transform and the Short Time Fourier Transform. For steady-state operation, two main changes appear in the spectrum compared to the healthy case: first, new harmonics related to bearing faults are generated, and second, the amplitude of the grid harmonics changes with the degree of the fault. For startup signals, the energy of the current signal frequency within a specific frequency band related to the bearing fault increases with the fault severity. Linear Discriminant Analysis classification is used to detect a bearing fault and estimate its severity for different loads, using the amplitude of the grid harmonics as features for the classifier. Experimental data were collected from a 1.1 kW, 400 V, 50 Hz induction machine in healthy condition and with two severities of outer race bearing fault, at three different load levels: no load, 50% load, and 100% load.
Keywords: asynchronous machines; fast Fourier transforms; fault diagnosis; machine bearings; stators; bearing faults detection; fast Fourier transform; fault severity; grid harmonics amplitude; induction machines; linear discriminant analysis; machine stator current signals; motor current signature analysis; outer race bearing fault; short time Fourier transform; steady-state analysis; Fault detection; Harmonic analysis; Induction machines; Stators; Steady-state; Torque; Vibrations; Ball bearing; Fast Fourier Transform; Induction machine; Linear Discriminant Analysis; Outer race bearing fault; Short Time Fourier Transform (ID#: 16-11215)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7281839&isnumber=7281697
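The harmonics the abstract refers to follow from standard bearing-fault formulas, which can be computed directly. This sketch uses the textbook ball-pass frequency of the outer race (BPFO) and the usual rule that current-spectrum fault components appear at |f_supply ± k·f_vibration|; the bearing dimensions below are invented for illustration, not taken from the paper's test rig.

```python
# Standard outer-race bearing fault frequencies for MCSA (illustrative values).
import math

def bpfo(n_balls, rotor_hz, ball_d, pitch_d, contact_angle_rad=0.0):
    """Ball-pass frequency of the outer race, in Hz."""
    return (n_balls / 2.0) * rotor_hz * (
        1 - (ball_d / pitch_d) * math.cos(contact_angle_rad))

def current_fault_components(supply_hz, vib_hz, k_max=3):
    """Current-spectrum frequencies |f_supply +/- k * f_vibration|, in Hz."""
    return sorted({abs(supply_hz + s * k * vib_hz)
                   for k in range(1, k_max + 1) for s in (+1, -1)})

# Hypothetical bearing: 9 balls, 7.9 mm ball diameter, 34.5 mm pitch diameter,
# rotor at 24 rev/s on a 50 Hz supply.
f_v = bpfo(n_balls=9, rotor_hz=24.0, ball_d=7.9, pitch_d=34.5)
print(round(f_v, 1))                          # 83.3 Hz for these dimensions
print(current_fault_components(50.0, f_v)[0]) # lowest sideband to inspect
```

These are the frequency bands in which the paper looks for new harmonics at steady state and for energy growth during startup.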
S. Saidi and Y. Falcone, “Dynamic Detection and Mitigation of DMA Races in MPSoCs,” Digital System Design (DSD), 2015 Euromicro Conference on, Funchal, 2015, pp. 267-270. doi: 10.1109/DSD.2015.77
Abstract: Explicitly managed memories have emerged as a good alternative in multicore processor design to reduce energy and performance costs. Memory transfers then rely on Direct Memory Access (DMA) engines, which provide hardware support for accelerating data transfers. However, programming explicit data transfers is very challenging for developers, who must manually orchestrate data movements through the memory hierarchy. This is in practice very error-prone and can easily lead to memory inconsistency. In this paper, we propose a runtime approach for monitoring DMA races. The monitor acts as a safeguard for programmers and is able to enforce at runtime a correct behavior w.r.t. the semantics of the program execution. We validate the approach using traces extracted from industrial benchmarks and executed on the multiprocessor system-on-chip platform STHORM. Our experiments demonstrate that the monitoring algorithm has a low on-chip memory overhead (less than 1.5 KB) and adds less than 2% of execution time.
Keywords: multiprocessing systems; storage management; system-on-chip; DMA engines; DMA races monitoring; MPSoC; STHORM; accelerating data; data movements; data transfers; direct memory access engines; dynamic detection and mitigation; energy reduction; hardware support; memories management; memory hierarchy; memory inconsistency; memory transfers; monitoring algorithm; multicore processors design; multiprocessor system-on-chip platform; on-chip memory consumption; performance costs; program execution semantics; runtime approach; Benchmark testing; Memory management; Monitoring; Program processors; Runtime; System-on-chip (ID#: 16-11216)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7302281&isnumber=7302233
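The core check such a DMA-race monitor performs can be sketched simply (illustrative code, not the paper's monitor): two in-flight transfers race when their target address ranges overlap and at least one of them writes.

```python
# Sketch of a DMA race check between two in-flight transfers (illustrative).

def ranges_overlap(a_start, a_len, b_start, b_len):
    return a_start < b_start + b_len and b_start < a_start + a_len

def dma_race(t1, t2):
    """Each transfer is (dst_addr, length, is_write); both assumed in flight,
    i.e. neither transfer's completion has been waited on."""
    (s1, l1, w1), (s2, l2, w2) = t1, t2
    return (w1 or w2) and ranges_overlap(s1, l1, s2, l2)

print(dma_race((0x1000, 256, True), (0x1080, 64, False)))   # True: write overlaps read
print(dma_race((0x1000, 256, False), (0x1100, 64, False)))  # False: both are reads
```

The hard part in practice, and the subject of the paper, is doing this online against the program's synchronization semantics within a tiny on-chip memory budget.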
P. Chatarasi and V. Sarkar, “Extending Polyhedral Model for Analysis and Transformation of OpenMP Programs,” 2015 International Conference on Parallel Architecture and Compilation (PACT), San Francisco, CA, 2015, pp. 490-491. doi: 10.1109/PACT.2015.57
Abstract: The polyhedral model is a powerful algebraic framework that has enabled significant advances in analysis and transformation of sequential affine (sub)programs, relative to traditional AST-based approaches. However, given the rapid growth of parallel software, there is a need for increased attention to using polyhedral compilation techniques to analyze and transform explicitly parallel programs. In our PACT'15 paper titled “Polyhedral Optimizations of Explicitly Parallel Programs” [1, 2], we addressed the problem of analyzing and transforming programs with explicit parallelism that satisfy the serial-elision property, i.e., the property that removal of all parallel constructs results in a sequential program that is a valid (albeit inefficient) implementation of the parallel program semantics. In this poster, we address the problem of analyzing and transforming more general OpenMP programs that do not satisfy the serial-elision property. Our contributions include the following: (1) An extension of the polyhedral model to represent input OpenMP programs, (2) Formalization of May Happen in Parallel (MHP) and Happens before (HB) relations in the extended model, (3) An approach for static detection of data races in OpenMP programs by generating race constraints that can be solved by an SMT solver such as Z3, and (4) An approach for transforming OpenMP programs.
Keywords: algebra; parallel programming; program compilers; AST-based approach; OpenMP programs; SMT solver; algebraic framework; parallel programs; parallel software; polyhedral compilation techniques; polyhedral model; sequential affine (sub)programs; serial-elision property; Analytical models; Instruction sets; Parallel architectures; Parallel processing; Schedules; Semantics (ID#: 16-11217)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7429335&isnumber=7429279
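The race constraints the poster hands to an SMT solver such as Z3 have a simple shape: do two distinct parallel loop iterations touch the same array element, with at least one write? As a toy stand-in for the solver, this sketch enumerates that constraint directly; the access pattern (a write to A[2i] racing a read of A[j+3]) is invented for illustration.

```python
# Toy stand-in for SMT-based race-constraint solving (illustrative; the real
# approach generates symbolic constraints and asks a solver like Z3).

def has_race(n):
    """Search for iterations i != j of a parallel loop of n iterations where
    the write A[2*i] conflicts with the read A[j+3]."""
    for i in range(n):
        for j in range(n):
            if i != j and 2 * i == j + 3:
                return (i, j)        # witness pair of racing iterations
    return None

print(has_race(8))  # (2, 1): iterations 2 and 1 both touch A[4]
```

An SMT encoding scales to symbolic loop bounds and affine subscripts where enumeration cannot, which is exactly why the authors discharge the constraints to Z3.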
J. Huang, Q. Luo and G. Rosu, “GPredict: Generic Predictive Concurrency Analysis,” 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering, Florence, 2015, pp. 847-857. doi: 10.1109/ICSE.2015.96
Abstract: Predictive trace analysis (PTA) is an effective approach for detecting subtle bugs in concurrent programs. Existing PTA techniques, however, are typically based on ad hoc algorithms tailored to low-level errors such as data races or atomicity violations, and are not applicable to high-level properties such as “a resource must be authenticated before use” and “a collection cannot be modified when being iterated over”. In addition, most techniques assume as input a globally ordered trace of events, which is expensive to collect in practice as it requires synchronizing all threads. In this paper, we present GPredict: a new technique that realizes PTA for generic concurrency properties. Moreover, GPredict does not require a global trace but only the local traces of each thread, which incurs much less runtime overhead than existing techniques. Our key idea is to uniformly model violations of concurrency properties and the thread causality as constraints over events. With an existing SMT solver, GPredict is able to precisely predict property violations allowed by the causal model. Through our evaluation using both benchmarks and real world applications, we show that GPredict is effective in expressing and predicting generic property violations. Moreover, it reduces the runtime overhead of existing techniques by 54% on DaCapo benchmarks on average.
Keywords: concurrency control; program debugging; program diagnostics; DaCapo benchmarks; GPredict; PTA; SMT solver; concurrent programs; generic predictive concurrency analysis; local traces; predictive trace analysis; subtle bug detection; Concurrent computing; Java; Prediction algorithms; Predictive models; Runtime; Schedules; Syntactics (ID#: 16-11219)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7194631&isnumber=7194545
R. Pang, A. Baretto, H. Kautz and J. Luo, “Monitoring Adolescent Alcohol Use via Multimodal Analysis in Social Multimedia,” Big Data (Big Data), 2015 IEEE International Conference on, Santa Clara, CA, 2015, pp. 1509-1518. doi: 10.1109/BigData.2015.7363914
Abstract: Underage drinking or adolescent alcohol use is a major public health problem that causes more than 4,300 annual deaths. Traditional methods for monitoring adolescent alcohol consumption are based on surveys, which have many limitations and are difficult to scale. The main limitations include 1) respondents may not provide accurate, honest answers, 2) surveys with closed-ended questions may have a lower validity rate than other question types, 3) respondents who choose to respond may be different from those who chose not to respond, thus creating bias, 4) cost, 5) small sample size, and 6) lack of temporal sensitivity. We propose a novel approach to monitoring underage alcohol use by analyzing Instagram users' contents in order to overcome many of the limitations of surveys. First, Instagram users' demographics (such as age, gender and race) are determined by analyzing their selfie photos with automatic face detection and face analysis techniques supplied by a state-of-the-art face processing toolkit called Face++. Next, the tags associated with the pictures uploaded by users are used to identify the posts related to alcohol consumption and discover the existence of drinking patterns in terms of time, frequency and location. To that end, we have built an extensive dictionary of drinking activities based on internet slang and major alcohol brands. Finally, we measure the penetration of alcohol brands among underage users within Instagram by analyzing the followers of such brands in order to evaluate to what extent they might influence their followers' drinking behaviors. Experimental results using a large number of Instagram users have revealed several findings that are consistent with those of the conventional surveys, thus partially validating the proposed approach. Moreover, new insights are obtained that may help develop effective intervention. We believe that this approach can be effectively applied to other domains of public health.
Keywords: face recognition; medical computing; multimedia computing; social networking (online); Face++; Instagram user content analysis; Instagram user demographics; Internet slang; adolescent alcohol consumption monitoring; adolescent alcohol monitoring; automatic face detection; drinking behaviors; face analysis techniques; face processing toolkit; major alcohol brands; multimodal analysis; public health problem; selfie photo analysis; social multimedia; temporal sensitivity; underage alcohol usage monitoring; underage drinking; Big data; Conferences; Decision support systems; Dictionaries; Handheld computers; Media; Multimedia communication; data mining; social media; social multimedia; underage drinking public health (ID#: 16-11220)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363914&isnumber=7363706
G. A. Skrimpas et al., “Detection of Generator Bearing Inner Race Creep by Means of Vibration and Temperature Analysis,” Diagnostics for Electrical Machines, Power Electronics and Drives (SDEMPED), 2015 IEEE 10th International Symposium on, Guarda, 2015, pp. 303-309. doi: 10.1109/DEMPED.2015.7303706
Abstract: Vibration and temperature analysis are the two dominant condition monitoring techniques applied to fault detection of bearing failures in wind turbine generators. Relative movement between the bearing inner ring and the generator axle is one of the most severe failure modes in terms of secondary damage and fault development. Bearing creep can be detected reliably based on continuous trending of the amplitude of the vibration running-speed harmonics and of absolute temperature values. In order to decrease the number of condition indicators that need to be assessed, it is proposed to exploit a weighted-average descriptor calculated from the 3rd through 6th harmonic orders. Two cases of different bearing creep severity are presented, showing the consistency of the combined vibration and temperature data utilization. In general, vibration monitoring reveals early signs of abnormality several months prior to any permanent temperature increase, depending on the fault development.
Keywords: condition monitoring; creep; electric generators; failure analysis; fault diagnosis; harmonic analysis; machine bearings; thermal analysis; vibrations; bearing failures; bearing inner ring; condition monitoring techniques; fault detection; generator axle; generator bearing inner race creep; temperature absolute values; temperature analysis; vibration analysis; vibration running speed harmonic; weighted average descriptor; wind turbine generators; Creep; Generators; Harmonic analysis; Market research; Shafts; Vibrations; Wind turbines; Condition monitoring; angular resampling; bearing creep; rotational looseness; vibration analysis
(ID#: 16-11221)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7303706&isnumber=7303654
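The weighted average descriptor over the 3rd to 6th running-speed harmonic orders described in the abstract above can be sketched as follows. This is an illustrative reconstruction with a synthetic signal and equal weights, not the authors' implementation; the harmonic orders come from the abstract, everything else is assumed.

```python
import numpy as np

def harmonic_descriptor(signal, fs, shaft_hz, orders=(3, 4, 5, 6), weights=None):
    """Weighted average of the running-speed harmonic amplitudes (orders 3-6)."""
    n = len(signal)
    amplitude = np.abs(np.fft.rfft(signal)) * 2.0 / n   # single-sided amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amps = [amplitude[np.argmin(np.abs(freqs - k * shaft_hz))] for k in orders]
    return float(np.average(amps, weights=weights))

# synthetic generator vibration: 25 Hz running speed with a strong 4th harmonic,
# as might accompany inner-race creep (illustrative signal, not turbine data)
fs, f0 = 5000.0, 25.0
t = np.arange(0, 2.0, 1.0 / fs)
sig = 0.1 * np.sin(2 * np.pi * f0 * t) + 1.0 * np.sin(2 * np.pi * 4 * f0 * t)
descriptor = harmonic_descriptor(sig, fs, f0)
```

Trending this single scalar over time replaces monitoring four harmonic amplitudes individually, which is the abstract's stated motivation.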
M. Charles, T. N. Miano, X. Zhang, L. E. Barnes and J. M. Lobo, “Monitoring Quality Indicators for Screening Colonoscopies,” Systems and Information Engineering Design Symposium (SIEDS), 2015, Charlottesville, VA, 2015, pp. 171-175. doi: 10.1109/SIEDS.2015.7116968
Abstract: The detection rate of adenomas in screening colonoscopies is an important quality indicator for endoscopists. Successful detection of adenomas is linked to reduced cancer incidence and mortality. This study focuses on evaluating the performance of endoscopists on adenoma detection rate (ADR), polyp detection rate (PDR), and scope withdrawal time. The substitution of PDR for ADR has been suggested due to the reliance of ADR calculation on pathology reports. We compare these metrics to established clinical guidelines and to the performance of other individual endoscopists. Our analysis (n = 2730 screening colonoscopies) found variation in ADR for 14 endoscopists, ranging from 0.20 to 0.41. PDR ranged from 0.38 to 0.62. Controlling for age, sex, race, withdrawal time, and the presence of a trainee endoscopist accounted for 34% of variation in PDR but failed to account for any variation in ADR. The Pearson correlation between PDR and ADR is 0.82. These results suggest that PDR has significant value as a quality indicator. The reported variation in detection rates after controlling for case mix signals the need for greater scrutiny of individual endoscopist skill. Understanding the root cause of this variation could potentially lead to better patient outcomes.
Keywords: cancer; endoscopes; medical image processing; object detection; ADR; PDR; Pearson correlation; adenomas detection rate; cancer incidence; cancer mortality; clinical guidelines; endoscopists; pathology reports; polyp detection rate; quality indicator monitoring; screening colonoscopies; Cancer; Colonoscopy; Endoscopes; Guidelines; Logistics; Measurement; Predictive models; Electronic Medical Records; Health Data; Machine Learning; Physician Performance (ID#: 16-11222)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7116968&isnumber=7116953
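The study's headline comparison, the Pearson correlation between per-endoscopist PDR and ADR, can be reproduced in miniature. The rates below are illustrative placeholders, not the study's data; only the two metrics and the correlation computation come from the abstract.

```python
import numpy as np

# hypothetical per-endoscopist detection rates (illustrative, not the study's data)
adr = np.array([0.20, 0.25, 0.31, 0.41, 0.33])   # adenoma detection rate
pdr = np.array([0.38, 0.45, 0.52, 0.62, 0.55])   # polyp detection rate

r = float(np.corrcoef(adr, pdr)[0, 1])           # Pearson correlation
```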
B. Bayart, A. Vartanian, P. Haefner and J. Ovtcharova, “TechViz XL Helps KITs Formula Student Car ‘Become Alive,’” 2015 IEEE Virtual Reality (VR), Arles, 2015, pp. 395-396. doi: 10.1109/VR.2015.7223462
Abstract: TechViz has been a supporter of Formula Student at KIT for several years, reflecting the company's long-term commitment to enhancing engineering and education by providing students with powerful VR system software to connect curriculum to real-world applications. Incorporating an immersive visualisation and interaction environment into Formula Student vehicle design is proven to deliver race-day success, by helping to detect faults and optimise the product life cycle. The TechViz LESC system helps to improve the car design despite the short time available, thanks to direct visualisation of the CAD mockup in the VR system and its ease of use for non-VR experts.
Keywords: automobiles; computer aided instruction; data visualisation; graphical user interfaces; human computer interaction; virtual reality; CAD mockup; KITs formula student car; TechViz LESC system; TechViz XL; VR system software; fault detection; formula student vehicle design; immersive visualisation; interaction environment; product life cycle optimization; virtual reality system; Companies; Hardware; Solid modeling; Three-dimensional displays; Vehicles; Virtual reality; Visualization (ID#: 16-11223)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7223462&isnumber=7223305
T. Sim and L. Zhang, “Controllable Face Privacy,” Automatic Face and Gesture Recognition (FG), 2015 11th IEEE International Conference and Workshops on, Ljubljana, 2015, pp. 1-8. doi: 10.1109/FG.2015.7285018
Abstract: We present the novel concept of Controllable Face Privacy. Existing methods that alter face images to conceal identity inadvertently also destroy other facial attributes such as gender, race or age. This all-or-nothing approach is too harsh. Instead, we propose a flexible method that can independently control the amount of identity alteration while keeping other facial attributes unchanged. To achieve this flexibility, we apply a subspace decomposition to our face encoding scheme, effectively decoupling facial attributes such as gender, race, age, and identity into mutually orthogonal subspaces, which in turn enables independent control of these attributes. Our method is thus useful for nuanced face de-identification, in which only facial identity is altered, while others, such as gender, race and age, are retained. These altered face images protect identity privacy, and yet allow other computer vision analyses, such as gender detection, to proceed unimpeded. Controllable Face Privacy is therefore useful for reaping the benefits of surveillance cameras while preventing privacy abuse. Our proposal also permits privacy to be applied not just to identity, but to other facial attributes as well. Furthermore, privacy-protection mechanisms, such as k-anonymity, L-diversity, and t-closeness, may be readily incorporated into our method. Extensive experiments with commercial facial analysis software show that our alteration method is indeed effective.
Keywords: computer vision; data privacy; face recognition; image coding; L-diversity mechanism; computer vision analysis; controllable face privacy concept; face de-identification; face encoding scheme; face images; facial attributes; identity alteration control; k-anonymity mechanism; mutually orthogonal subspaces; privacy-protection mechanisms; subspace decomposition; t-closeness mechanism; Cameras; Detectors; Face; Privacy; Shape; Training; Visual analytics (ID#: 16-11224)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7285018&isnumber=7285013
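The core idea of the abstract above, decoupling attributes into mutually orthogonal subspaces so that identity can be altered while other attributes are preserved, can be illustrated with a toy linear model. The bases below are random placeholders (the paper learns attribute subspaces from face data), and negating the identity component stands in for the paper's controlled alteration.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy 8-D face encoding split into two mutually orthogonal subspaces
# (hypothetical bases; the actual subspaces are learned from face data)
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
B_id, B_attr = Q[:, :4], Q[:, 4:]       # identity vs. other-attribute subspaces

x = rng.standard_normal(8)              # a face encoding

# alter only the identity component (here: negate it); attributes are untouched
x_private = B_attr @ (B_attr.T @ x) - B_id @ (B_id.T @ x)
```

Because the subspaces are orthogonal, the attribute projection of the altered encoding is identical to that of the original, which is exactly the property that lets gender or age analysis proceed on de-identified faces.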
Dynamical Systems 2015 |
Research into dynamical systems cited here focuses on non-linear and chaotic dynamical systems and in proving abstractions of dynamical systems through numerical simulations. Many of the applications studied are cyber-physical systems and are relevant to the Science of Security hard problems of resiliency, predictive metrics, and composability. These works were presented in 2015.
R. K. Yedavalli, “Security and Vulnerability in the Stabilization of Networks of Controlled Dynamical Systems via Robustness Framework,” 2015 American Control Conference (ACC), Chicago, IL, 2015, pp. 5396-5401. doi: 10.1109/ACC.2015.7172183
Abstract: This paper addresses the issues of security and vulnerability of links in the stabilization of networked control systems from a robustness viewpoint. As is done in recent research, we view network security as a robustness issue. However, in this paper we shed considerable new insight into this topic and offer a new and differing perspective. We argue that ‘robustness’ is a common theme related to both vulnerability and security. This paper puts forth the viewpoint that vulnerability of a networked system is a manifestation of the combination of two types of robustness, namely ‘qualitative robustness’ and ‘quantitative robustness’. In other words, the entire robustness concept is treated as a combination of qualitative robustness and quantitative robustness, wherein qualitative robustness is linked to the system’s nature of interactions and interconnections, i.e., the system’s structure, while quantitative robustness is linked to the system dynamics. Put another way, qualitative robustness is independent of magnitudes and depends only on the signs of the system dynamics matrix, whereas quantitative robustness is purely a function of the quantitative information (both magnitudes and signs) of the entries of the system dynamics matrix. In that sense, these two concepts are inter-related, each influencing and complementing the other. Applying these notions to networked control systems represented by ‘dynamical structure functions’, it is shown that any specific dynamical structure function originating from a state space representation is a function of both qualitative and quantitative robustness. In other words, vulnerability of links in that network is determined by both the signs and magnitudes of the state space matrix of that dynamical structure function. Thus the notion in the recent literature that ‘vulnerability depends on the system structure, not the dynamics, and robustness depends on the dynamics, and not on the system structure’ is disputed, and clear justification for our viewpoint is provided by the newly introduced notions of ‘qualitative robustness’ and ‘quantitative robustness’. This paper then presents a few specific dynamical structure functions that possess a large number of non-vulnerable links, which is desirable for a secure network. The proposed concepts are illustrated with many useful examples.
Keywords: matrix algebra; networked control systems; robust control; security; controlled dynamical systems; dynamical structure functions; link security; link vulnerability; network security; networked control system stabilization; qualitative robustness; quantitative robustness; robustness framework; state space matrix; state space representation; system dynamics; Indexes; Jacobian matrices; Robustness; Security; Stability criteria; Transfer functions (ID#: 16-10196)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7172183&isnumber=7170700
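The paper's distinction between qualitative robustness (sign pattern only) and quantitative robustness (magnitudes as well) can be made concrete with a small system matrix. The matrix values are illustrative, and Hurwitz stability stands in for a quantitative property; this does not reproduce the paper's dynamical structure function machinery.

```python
import numpy as np

# system dynamics matrix of a small networked system (illustrative values)
A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])

# qualitative information: only the sign pattern of the entries
sign_pattern = np.sign(A)

# quantitative information: magnitudes matter, e.g. for Hurwitz stability
hurwitz = bool(np.all(np.linalg.eigvals(A).real < 0))
```

Two matrices with the same sign pattern share all qualitative properties but may differ in quantitative ones such as stability margins, which is the interplay the abstract argues determines link vulnerability.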
Yingjun Yuan, Zhitao Huang, Fenghua Wang and Xiang Wang, “Radio Specific Emitter Identification Based on Nonlinear Characteristics of Signal,” Communications and Networking (BlackSeaCom), 2015 IEEE International Black Sea Conference on, Constanta, 2015, pp. 77-81. doi: 10.1109/BlackSeaCom.2015.7185090
Abstract: Radio Specific Emitter Identification (SEI) is the technique which identifies an individual radio emitter based on the received signals' device-specific properties, called the signals' Radio Frequency Fingerprint (RFF). SEI is very significant for improving the security of wireless networks. A novel SEI method which treats the emitter as a nonlinear dynamical system is proposed in this paper. The method works on the signal's nonlinear characteristics, which result from unintentional and unavoidable physical-layer imperfections. The reconstructed phase space (RPS) is used as the tool for analyzing the nonlinearity. The overall characteristics of the RPS and the state-change characteristics of points in the RPS are extracted to form the RFF. To evaluate the effectiveness of the RFF, signals from four wireless network cards were collected by a signal acquisition system. The proposed RFF's discrimination capabilities are visually analyzed using boxplots. The results of visual analysis and classification demonstrate that the method is effective.
Keywords: nonlinear dynamical systems; radio networks; signal reconstruction; telecommunication security; nonlinear dynamical system; nonlinear signal characteristics; phase space reconstruction; radio specific emitter identification; signal nonlinear characteristics; wireless network security; Conferences; Feature extraction; Fingerprint recognition; Nonlinear dynamical systems; Transient analysis; Visualization; Wireless networks; Specific emitter identification; nonlinearity; phase space; radio frequency fingerprint (ID#: 16-10197)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7185090&isnumber=7185069
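The reconstructed phase space used in the abstract above is the standard time-delay embedding; a minimal sketch follows. The embedding dimension, delay, and test signal are assumed for illustration, and a pure tone stands in for a captured radio signal.

```python
import numpy as np

def reconstruct_phase_space(x, dim=3, tau=5):
    """Time-delay embedding: each row is one point of the reconstructed phase space."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# a received-signal stand-in (a pure tone; real RFFs arise from hardware imperfections)
x = np.sin(np.linspace(0.0, 20.0 * np.pi, 500))
rps = reconstruct_phase_space(x, dim=3, tau=5)
```

Features summarizing the geometry of `rps` (and how its points move) would then form the fingerprint used for classification.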
J. Wang, Y. Wang, X. Wen, T. Yang and Q. Ding, “The Simulation Research and NIST Test of Binary Quantification Methods for the Typical Chaos,” 2015 Third International Conference on Robot, Vision and Signal Processing (RVSP), Kaohsiung, 2015, pp. 180-184. doi: 10.1109/RVSP.2015.50
Abstract: In this paper, we study the binary quantification methods for three typical chaotic systems, Logistic, Tent, and Lorenz, and apply direction quantization, threshold quantization, and interval quantization to quantify each typical chaotic signal. On the basis of the NIST test standards and the STS test suite, we perform extensive NIST tests and analyses on the three typical chaotic sequences to find the best quantification method for each, study the impact of the system parameters, the initial value, and the sequence length on the digital chaotic sequence, and achieve chaotic sequences with better randomness, which provides some theoretical guidance for digital secure communication.
Keywords: chaotic communication; quantisation (signal); NIST test; binary quantification methods; chaos; chaotic sequence; digital chaotic sequence; direction quantization; quantification method; threshold quantization; Chaotic communication; Logistics; NIST; Nonlinear dynamical systems; Quantization (signal); Security; binary quantification; digital secure communication (ID#: 16-10198)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7399174&isnumber=7398743
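Threshold quantization of a chaotic map, one of the three methods the abstract compares, can be sketched in a few lines. The parameter choices (fully chaotic logistic map, threshold 0.5) are common defaults assumed here, not necessarily the paper's.

```python
def logistic_binary_sequence(x0, r=4.0, length=64, threshold=0.5):
    """Threshold quantization of the logistic map x_{n+1} = r*x_n*(1 - x_n)."""
    bits, x = [], x0
    for _ in range(length):
        x = r * x * (1.0 - x)
        bits.append(1 if x >= threshold else 0)
    return bits

bits = logistic_binary_sequence(0.123456)
```

Sensitivity to the initial value, one of the factors the paper studies, shows up immediately: a tiny perturbation of `x0` yields a different bit sequence. Randomness of such sequences would then be judged with the NIST STS battery.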
C. Luo and D. Zeng, “Multivariate Embedding Based Causality Detection with Short Time Series,” Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, Baltimore, MD, 2015, pp. 138-140. doi: 10.1109/ISI.2015.7165954
Abstract: Existing causal inference methods for social media usually rely on limited explicit causal context, preassume certain user interaction model, or neglect the nonlinear nature of social interaction, which could lead to bias estimations of causality. Besides, they often require sufficiently long time series to achieve reasonable results. Here we propose to take advantage of multivariate embedding to perform causality detection in social media. Experimental results show the efficacy of the proposed approach in causality detection and user behavior prediction in social media.
Keywords: causality; inference mechanisms; social networking (online); time series; bias estimations; causal inference methods; multivariate embedding based causality detection; social interaction; social media; user behavior prediction; user interaction model; Manifolds; Media; Neural networks; Nonlinear dynamical systems; Social network services; Time series analysis; Training; causality detection; multivariate embedding; nonlinear dynamic system; user influence (ID#: 16-10199)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165954&isnumber=7165923
C. Lee, H. Shim and Y. Eun, “Secure and Robust State Estimation Under Sensor Attacks, Measurement Noises, and Process Disturbances: Observer-Based Combinatorial Approach,” Control Conference (ECC), 2015 European, Linz, 2015, pp. 1872-1877. doi: 10.1109/ECC.2015.7330811
Abstract: This paper presents a secure and robust state estimation scheme for continuous-time linear dynamical systems. The method is secure in that it correctly estimates the states under sensor attacks by exploiting sensing redundancy, and it is robust in that it guarantees a bounded estimation error despite measurement noises and process disturbances. In this method, an individual Luenberger observer (of possibly smaller size) is designed from each sensor. Then, the state estimates from each of the observers are combined through a scheme motivated by error correction techniques, which results in estimation resiliency against sensor attacks under a mild condition on the system observability. Moreover, in the state estimates combining stage, our method reduces the search space of a minimization problem to a finite set, which substantially reduces the required computational effort.
Keywords: continuous time systems; error correction; linear systems; observers; redundancy; robust control; security; Luenberger observer; bounded estimation error; continuous-time linear dynamical system; error correction technique; observer-based combinatorial approach; robust state estimation; search space; secure state estimation; sensor attack; Indexes; Minimization; Noise measurement; Observers; Redundancy; Robustness (ID#: 16-10200)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7330811&isnumber=7330515
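The paper combines per-sensor observer estimates through an error-correction-motivated scheme; as a simple stand-in for that combining stage, a coordinate-wise median illustrates how sensing redundancy yields resilience to a minority of attacked sensors. The numbers below are illustrative, and the median is an assumption of this sketch, not the paper's exact combinator.

```python
import numpy as np

# state estimates from five per-sensor observers (one row each); sensor 3 is attacked
estimates = np.array([
    [1.02, -0.49],
    [0.98, -0.51],
    [9.00,  5.00],   # compromised observer output
    [1.01, -0.50],
    [0.99, -0.52],
])

# coordinate-wise median: unaffected by a minority of arbitrarily corrupted rows
x_hat = np.median(estimates, axis=0)
```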
D. Palma, P. L. Montessoro, G. Giordano and F. Blanchini, “A Dynamic Algorithm for Palmprint Recognition,” Communications and Network Security (CNS), 2015 IEEE Conference on, Florence, 2015, pp. 659-662. doi: 10.1109/CNS.2015.7346883
Abstract: Most of the existing techniques for palmprint recognition are based on metrics that evaluate the distance between a pair of features. These metrics are typically based on static functions. In this paper we propose a new technique for palmprint recognition based on a dynamical system approach, focusing on preliminary experimental results. The essential idea is that the procedure iteratively eliminates, in both images being compared, points that do not have enough close neighboring points in the image itself and in the comparison image. As a result of the iteration, the surviving points in each image are those having enough neighboring points in the comparison image. Our preliminary experimental results show that the proposed dynamic algorithm is competitive and slightly outperforms some state-of-the-art methods by achieving a higher genuine acceptance rate.
Keywords: palmprint recognition; biometric systems; dynamic algorithm; dynamical system approach; iteration; Biometrics (access control); Databases; Feature extraction; Heuristic algorithms; Security; Yttrium (ID#: 16-10201)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346883&isnumber=7346791
S. W. Neville, M. Elgamal and Z. Nikdel, “Robust Adversarial Learning and Invariant Measures,” Communications, Computers and Signal Processing (PACRIM), 2015 IEEE Pacific Rim Conference on, Victoria, BC, 2015, pp. 529-535. doi: 10.1109/PACRIM.2015.7334893
Abstract: A number of open cyber-security challenges are arising due to the rapidly evolving scale, complexity, and heterogeneity of modern IT systems and networks. The ease with which copious volumes of operational data can be collected from such systems has produced a strong interest in the use of machine learning (ML) for cyber-security, provided that ML can itself be made sufficiently immune to attack. Adversarial learning (AL) is the domain focusing on such issues, and an arising AL theme is the need to ensure that ML solutions make use of robust input measurement features (i.e., the data sets used for ML training must themselves be robust against adversarial influences). This observation leads to further open questions, including: “What formally denotes sufficient robustness?”, “Must robust features necessarily exist for all IT systems?”, “Do robust features necessarily provide complete coverage of the attack space?”, etc. This work shows that these (and other) open AL questions can be usefully re-cast in terms of the classical dynamical-systems problem of needing to focus analyses on a system’s invariant measures. This re-casting is useful because a large body of mature dynamical systems theory exists concerning invariant measures, which can then be applied to cyber-security. To our knowledge this is the first work to identify and highlight this potentially useful cross-domain linkage.
Keywords: learning (artificial intelligence); security of data; ML training; adversarial learning; cross-domain linkage; cyber-security; machine learning; Complexity theory; Computer security; Extraterrestrial measurements; Focusing; Robustness; Sensors
(ID#: 16-10202)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7334893&isnumber=7334793
H. A. Kingravi, H. Maske and G. Chowdhary, “Kernel Controllers: A Systems-Theoretic Approach for Data-Driven Modeling and Control of Spatiotemporally Evolving Processes,” 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, 2015, pp. 7365-7370. doi: 10.1109/CDC.2015.7403382
Abstract: We consider the problem of modeling, estimating, and controlling the latent state of a spatiotemporally evolving continuous function using very few sensor measurements and actuator locations. Our solution to the problem consists of two parts: a predictive model of functional evolution, and feedback based estimator and controllers that can robustly recover the state of the model and drive it to a desired function. We show that layering a dynamical systems prior over temporal evolution of weights of a kernel model is a valid approach to spatiotemporal modeling that leads to systems theoretic, control-usable, predictive models. We provide sufficient conditions on the number of sensors and actuators required to guarantee observability and controllability. The approach is validated on a large real dataset, and in simulation for the control of spatiotemporally evolving function.
Keywords: estimation theory; feedback; predictive control; simulation; system theory; actuator locations; data-driven modeling; feedback based estimator; kernel controllers; predictive model; spatiotemporally evolving continuous function; systems-theoretic approach; Dictionaries; High definition video; Hilbert space; Kernel; Mathematical model; Predictive models; Spatiotemporal phenomena (ID#: 16-10203)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7403382&isnumber=7402066
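The abstract's central construction, a dynamical systems prior layered over the temporal evolution of kernel model weights, can be sketched with a toy linear prior. The RBF centers, grid, and decay matrix below are all assumed for illustration; the paper's estimators and controllers are not reproduced.

```python
import numpy as np

def rbf_features(x, centers, gamma=40.0):
    """RBF kernel features evaluated at fixed spatial centers."""
    return np.exp(-gamma * (x[:, None] - centers[None, :]) ** 2)

centers = np.linspace(0.0, 1.0, 5)      # hypothetical kernel centers
A = 0.9 * np.eye(5)                     # linear dynamical prior on the weights
w = np.ones(5)

X = np.linspace(0.0, 1.0, 50)           # spatial evaluation grid
Phi = rbf_features(X, centers)

f0 = Phi @ w                            # function snapshot at time t
w_next = A @ w                          # weights evolve under the prior
f1 = Phi @ w_next                       # snapshot at time t+1
```

Because the spatial basis is fixed, the evolving function lives in a finite-dimensional weight space on which standard observability and controllability conditions can be checked, which is the systems-theoretic payoff the abstract describes.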
J. Zhang, “An Image Encryption Scheme Based on Cat Map and Hyperchaotic Lorenz System,” Computational Intelligence & Communication Technology (CICT), 2015 IEEE International Conference on, Ghaziabad, 2015, pp. 78-82. doi: 10.1109/CICT.2015.134
Abstract: In recent years, chaos-based image ciphers have been widely studied and a growing number of schemes based on permutation-diffusion architecture have been proposed. However, recent studies have indicated that those approaches based on low-dimensional chaotic maps/systems have the drawbacks of small key space and weak security. In this paper, a security-improved image cipher which utilizes the cat map and the hyperchaotic Lorenz system is reported. Compared with ordinary chaotic systems, hyperchaotic systems have more complex dynamical behaviors and more system variables, which demonstrates a greater potential for constructing a secure cryptosystem. In the diffusion stage, a plaintext-related key stream generation strategy is introduced, which further improves the security against known/chosen-plaintext attacks. Extensive security analysis has been performed on the proposed scheme, including the most important ones like key space analysis, key sensitivity analysis and various statistical analyses, which has demonstrated the satisfactory security of the proposed scheme.
Keywords: cryptography; image processing; statistical analysis; cat map; chaos-based image cipher; complex dynamical behaviors; cryptosystem; hyperchaotic Lorenz system; image encryption scheme; key sensitivity analysis; key space analysis; key stream generation strategy; low-dimensional chaotic maps; permutation-diffusion architecture; security analysis; Chaotic communication; Ciphers; Correlation; Encryption; image cipher; permutation-diffusion (ID#: 16-10204)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7078671&isnumber=7078645
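The permutation stage of the cipher above uses the cat map; the classic Arnold form can be sketched directly. The tiny 4x4 "image" is a stand-in, and the paper's keyed variant and hyperchaotic diffusion stage are not reproduced here.

```python
import numpy as np

def cat_map(img, iterations=1):
    """Arnold cat map permutation of a square image: (x, y) -> (x+y, x+2y) mod n."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

img = np.arange(16).reshape(4, 4)       # a tiny stand-in "image"
scrambled = cat_map(img, iterations=2)
```

The map only permutes pixel positions (the pixel values are preserved), and it is periodic: for a 4x4 image the matrix [[1, 1], [1, 2]] satisfies M^3 = I mod 4, so three iterations restore the original. This periodicity is one reason permutation alone is weak and a diffusion stage is added.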
W. Qi et al., “A Dynamic Reactive Power Reserve Optimization Method to Enhance Transient Voltage Stability,” Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), 2015 IEEE International Conference on, Shenyang, 2015, pp. 1523-1528. doi: 10.1109/CYBER.2015.7288171
Abstract: Dynamic reactive power reserve of a power system is vital to improving transient voltage security. A novel definition of reactive power reserve considering transient voltage security during the transient process is proposed. A participation factor to evaluate a reactive power reserve's contribution to transient voltage stability is computed through the trajectory sensitivity method. Then an optimization model to enhance transient voltage stability is built and a solving algorithm is proposed. Based on an analysis of the transient voltage stability characteristics of the East China Power Grid, the effectiveness of the proposed dynamic reactive power reserve optimization approach for improving transient voltage stability of large-scale AC-DC hybrid power systems is verified.
Keywords: AC-DC power convertors; optimisation; power grids; power system control; power system transient stability; reactive power control; voltage regulators; East China power grid; dynamic reactive power reserve optimization method; large-scale AC-DC hybrid power systems; participation factor; trajectory sensitivity method; transient process; transient voltage security improvement; transient voltage stability enhancement; Optimization; Power system dynamics; Power system stability; Reactive power; Stability analysis; Transient analysis; AC-DC hybrid power system; Dynamic reactive power reserve; optimization method; transient voltage stability (ID#: 16-10205)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7288171&isnumber=7287893
N. Ye, R. Geng, X. Song, Q. Wang and Z. Ning, “Hierarchic Topology Management by Decision Model and Smart Agents in Space Information Networks,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1817-1825. doi: 10.1109/HPCC-CSS-ICESS.2015.122
Abstract: Space information network, which is envisioned as a new type of self-organizing network constituted by information systems of land, sea, air, and space, has attracted tremendous interest recently. In this paper, to improve the data delivery performance and the network scalability of space information networks, a new hierarchic topology management scheme based on a decision model and smart agents is proposed. Different from the schemes studied in mobile ad hoc networks and wireless sensor networks, the proposed algorithm for space information networks introduces a decision model based on the analytic hierarchy process (AHP) to first select cluster heads, and then forms non-overlapping k-hop clusters. The proposed dynamical self-maintenance mechanisms take not only node mobility but also cluster equalization into consideration. Smart mobile agents are used to migrate and duplicate the functions of cluster heads in a recruiting way, in addition to cluster merger/partition disposal, reaffiliation management, and adaptive adjustment of the information update period. Simulation experiments are performed to evaluate the performance of the proposed algorithm in terms of network scalability, overhead of clustering, and reaffiliation frequency. It is shown from the analytical and simulation results that the proposed hierarchic topology management algorithm significantly improves the performance and the scalability of space information networks.
Keywords: analytic hierarchy process; computer network performance evaluation; information networks; merging; pattern clustering; telecommunication network topology; AHP; adaptive adjustment; cluster equalization; cluster head function duplication; cluster head function migration; cluster head selection; cluster merger disposal; cluster partition disposal; clustering overhead; data delivery performance; decision model; dynamical self-maintenance mechanisms; hierarchic topology management; information systems; information update period; network scalability improvement; node mobility; nonoverlapping k-hop clusters; performance evaluation; reaffiliation frequency; reaffiliation management; self-organizing networks; smart mobile agents; space information networks; Clustering algorithms; Network topology; Satellites; Scalability; Space vehicles; Topology; Wireless sensor networks; smart agent; topology management (ID#: 16-10206)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336436&isnumber=7336120
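The AHP decision model used above for cluster-head selection rests on extracting criteria weights from a pairwise comparison matrix via its principal eigenvector; a minimal sketch follows. The judgment values and the example criteria are assumed for illustration, not taken from the paper.

```python
import numpy as np

# pairwise comparison matrix over three clustering criteria (illustrative judgments,
# e.g. residual energy vs. mobility vs. node degree)
P = np.array([
    [1.0,     3.0,     5.0],
    [1 / 3.0, 1.0,     3.0],
    [1 / 5.0, 1 / 3.0, 1.0],
])

# AHP criteria weights: normalized principal eigenvector of P
eigvals, eigvecs = np.linalg.eig(P)
k = int(np.argmax(eigvals.real))
weights = np.abs(eigvecs[:, k].real)
weights = weights / weights.sum()
```

Each candidate node's weighted score over the criteria would then rank it for cluster-head selection.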
S. Gulhane and S. Bodkhe, “DDAS Using Kerberos with Adaptive Huffman Coding to Enhance Data Retrieval Speed and Security,” Pervasive Computing (ICPC), 2015 International Conference on, Pune, 2015, pp. 1-6. doi: 10.1109/PERVASIVE.2015.7086987
Abstract: The increasing trend of deploying applications over the web requires databases to be stored on, and retrieved from, particular servers. As data are stored in a distributed manner, scalability, flexibility, reliability, and security are important aspects that need to be considered when establishing a data management system. There are several systems for database management. A review of the Distributed Data Aggregation Service (DDAS) system, which relies on BlobSeer, found that it provides high performance in aspects such as data storage as a Blob (Binary Large Object) and data aggregation. For complicated analysis and instinctive mining of scientific data, BlobSeer serves as a repository backend. WS-Aggregation is another framework, which is viewed as a web service but actually carries out aggregation of data. In this framework, a single-site interface is provided to clients for executing multi-site queries. Simple Storage Service (S3) is another type of storage utility; it provides an always-available, low-cost service. Kerberos is a method which provides secure authentication, as only authorized clients are able to access the distributed database. Kerberos consists of four steps: Authentication key exchange, Ticket-Granting Service key exchange, Client/Server service exchange, and building secure communication. Adaptive Huffman coding (also referred to as Dynamic Huffman coding) is an adaptive coding technique based on Huffman coding. It permits compression as well as decompression of data, and permits building the code as the symbols are being transmitted, with no initial knowledge of the source distribution, which enables one-pass coding and adaptation to changing conditions in the data.
Keywords: Huffman codes; Web services; cryptography; data mining; distributed databases; query processing; Blob; Blobseer; DDAS; Kerberos; WS-Aggregation; adaptive Huffman coding; authentication key exchange; binary large objects; client-server service exchange; data aggregation; data management system; data retrieval security; data retrieval speed; data storage; distributed data aggregation service system; distributed database; dynamic Huffman method; instinctive scientific data mining; multisite queries; one-pass cryptography; secure communication; Authentication; Catalogs; Distributed databases; Memory; Servers; XML; adaptive huffman method; blobseer; kerberos; simple storage service; ws aggregation (ID#: 16-10207)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7086987&isnumber=7086957
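Adaptive (FGK/Vitter) Huffman coding incrementally maintains the code tree as symbols arrive; the static construction it maintains can be sketched compactly. This is a sketch of plain Huffman coding as background for the abstract's adaptive variant, not the paper's implementation, and the example string is arbitrary.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Static Huffman code table; adaptive Huffman maintains this tree on the fly."""
    counts = Counter(data)
    if len(counts) == 1:
        return {next(iter(counts)): "0"}
    # heap entries: (subtree weight, tiebreak, {symbol: code-within-subtree})
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(counts.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
```

The adaptive variant removes the need for this up-front frequency pass: encoder and decoder update identical trees symbol by symbol, enabling the one-pass coding the abstract describes.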
M. De Paula and G. G. Acosta, “Trajectory Tracking Algorithm for Autonomous Vehicles Using Adaptive Reinforcement Learning,” OCEANS 2015 - MTS/IEEE Washington, Washington, DC, 2015, pp. 1-8. doi: (not provided)
Abstract: The off-shore industry requires periodic monitoring of underwater infrastructure for preventive maintenance and security reasons. These tasks in hostile environments can be carried out automatically by autonomous robots such as UUVs, AUVs, and ASVs. When robot controllers are based on prespecified conditions, they may not function properly in these hostile, changing environments. It is beneficial to have adaptive control strategies that are capable of readapting the control policies when deviations from the desired behavior are detected. In this paper, we present an online selective reinforcement learning approach for learning reference tracking control policies given no prior knowledge of the dynamical system. The proposed approach enables real-time adaptation of the control policies, executed by the adaptive controller, based on ongoing interactions with the non-stationary environment. Applications on surface vehicles under nonstationary and perturbed environments are simulated. The simulation results demonstrate the performance of this novel algorithm in finding optimal control policies that solve the trajectory tracking control task in unmanned vehicles.
Keywords: adaptive control; intelligent robots; learning (artificial intelligence); mobile robots; optimal control; preventive maintenance; remotely operated vehicles; trajectory control; adaptive reinforcement learning; autonomous robots; autonomous vehicles; hostile environments; learning reference tracking control policies; nonstationary environments; off-shore industry; online selective reinforcement learning approach; optimal control policies; periodic monitoring; perturbed environments; robot controllers; security reasons; surface vehicles; trajectory tracking control task; underwater infrastructure; unmanned vehicles; Gaussian distribution; Monitoring; Robots; Security; Vehicles; Autonomous vehicles; cognitive control; reinforcement learning; trajectory tracking
(ID#: 16-10208)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7401861&isnumber=7401802
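The abstract above concerns continuous vehicle dynamics; as a toy illustration of learning a tracking policy from rewards alone, with no model of the system, here is a tabular Q-learning sketch on a one-dimensional tracking-error chain. All quantities are hypothetical and this is not the paper's algorithm:

```python
# Tracking error e in {-3..3}; action a in {-1, 0, 1} nudges the vehicle
# relative to the reference; the reward penalizes the resulting error.
STATES = range(-3, 4)
ACTIONS = (-1, 0, 1)

def step(e, a):
    e2 = max(-3, min(3, e + a))
    return e2, -abs(e2)          # next error, reward

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma = 0.5, 0.9
for _ in range(200):             # deterministic sweeps over all (state, action) pairs
    for s in STATES:
        for a in ACTIONS:
            s2, r = step(s, a)
            target = r + gamma * max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] += alpha * (target - Q[(s, a)])

def policy(e):
    """Greedy action learned from rewards: drives the tracking error toward zero."""
    return max(ACTIONS, key=lambda a: Q[(e, a)])
```

After the sweeps, the greedy policy pushes any error toward zero, a discrete analogue of converging to a reference-tracking controller without prior knowledge of the dynamics.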
Bo Yang, Bo Li, Mao Yang, Zhongjiang Yan and Xiaoya Zuo, “Mi-MMAC: MIMO-Based Multi-Channel MAC Protocol for WLAN,” Heterogeneous Networking for Quality, Reliability, Security and Robustness (QSHINE), 2015 11th International Conference on, Taipei, 2015, pp. 223-226. doi: (not provided)
Abstract: In order to meet the proliferating demands in wireless local area networks (WLANs), multi-channel media access control (MMAC) technology has attracted considerable attention as a way to exploit the increasingly scarce spectrum resources more efficiently. This paper proposes a novel multi-channel MAC, named Mi-MMAC, to resolve congestion on the control channel by multiplexing the control radio and the data radio as a multiple-input multiple-output (MIMO) array working on the control channel and the data channels alternately. Furthermore, we model Mi-MMAC as an M/M/k queueing system and obtain a closed-form approximate formula for the saturation throughput. Simulation results validate our model and analysis, and we demonstrate that the saturation throughput gain of the proposed protocol is close to 3.3 times that of the dynamical channel assignment (DCA) protocol [1] under the condition of few collisions.
Keywords: MIMO communication; access protocols; approximation theory; queueing theory; telecommunication congestion control; wireless LAN; wireless channels; DCA protocol; M/M/k queueing system; MIMO; Mi-MMAC; WLAN; closed form approximate formula; control channel; control radio; data channels; data radio; dynamical channel assignment; media access control; multichannel MAC protocol; multiple-input multiple-output array; scarce spectrum resources; DH-HEMTs; Mobile communication; Multiplexing; Protocols; Queueing analysis; Switches; Transceivers; Media access control; Multi-channel; Multiple-input multiple-output; Wireless LAN
(ID#: 16-10209)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7332572&isnumber=7332527
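The paper's closed-form saturation-throughput formula is not reproduced here, but the standard M/M/k quantities such a queueing model builds on can be sketched as follows (an illustrative aid, not the paper's derivation):

```python
from math import factorial

def erlang_c(k, a):
    """Probability an arriving customer must wait in an M/M/k queue.

    k: number of servers; a = lambda/mu: offered load (stability requires a < k).
    """
    assert a < k, "queue is unstable unless offered load < number of servers"
    top = (a ** k / factorial(k)) * (k / (k - a))
    bottom = sum(a ** n / factorial(n) for n in range(k)) + top
    return top / bottom

def mmk_mean_wait(k, lam, mu):
    """Mean time spent waiting in queue (excluding service) for M/M/k."""
    a = lam / mu
    return erlang_c(k, a) / (k * mu - lam)
```

For k = 1 these reduce to the familiar M/M/1 results: the waiting probability equals the utilization, and the mean queue wait is rho/(mu - lambda).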
Y. Nakahira and Y. Mo, “Dynamic State Estimation in the Presence of Compromised Sensory Data,” 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, 2015, pp. 5808-5813. doi: 10.1109/CDC.2015.7403132
Abstract: In this article, we consider the state estimation problem of a linear time invariant system in adversarial environment. We assume that the process noise and measurement noise of the system are l∞ functions. The adversary compromises at most γ sensors, the set of which is unknown to the estimation algorithm, and can change their measurements arbitrarily. We first prove that if after removing a set of 2γ sensors, the system is undetectable, then there exists a destabilizing noise process and attacker’s input to render the estimation error unbounded. For the case that the system remains detectable after removing an arbitrary set of 2γ sensors, we construct a resilient estimator and provide an upper bound on the l∞ norm of the estimation error. Finally, a numerical example is provided to illustrate the effectiveness of the proposed estimator design.
Keywords: invariance; linear systems; measurement errors; measurement uncertainty; state estimation; compromised sensory data; dynamic state estimation; estimation error; estimator design; l∞ functions; linear time invariant system; measurement noise; measurements arbitrarily; process noise; Estimation error; Robustness; Security; Sensor systems; State estimation (ID#: 16-10210)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7403132&isnumber=7402066
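The paper constructs a resilient estimator with l-infinity error bounds under detectability conditions after sensor removal; the following is only a minimal illustration of the sensor-redundancy principle behind such designs (more than 2*gamma sensors so the median stays within the honest readings; all numbers are hypothetical):

```python
import statistics

def resilient_estimate(sensor_readings, gamma):
    """Median fusion: with m sensors and at most gamma compromised arbitrarily,
    the median lies within the range of honest readings whenever m > 2*gamma."""
    assert len(sensor_readings) > 2 * gamma, "need m > 2*gamma sensors"
    return statistics.median(sensor_readings)

# Scalar system x_{k+1} = a * x_k with m = 5 sensors, gamma = 2 compromised.
a, x = 0.9, 10.0
estimates = []
for k in range(20):
    honest = [x] * 3                 # 3 honest sensors measure x (noise-free here)
    attacked = [1e6, -1e6]           # 2 compromised sensors report arbitrary values
    estimates.append(resilient_estimate(honest + attacked, gamma=2))
    x = a * x
```

Despite the two sensors reporting values off by six orders of magnitude, the fused estimate tracks the true state exactly in this noise-free sketch; with noise, the estimation error stays bounded rather than zero.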
K. G. Vamvoudakis et al., “Autonomy and Machine Intelligence in Complex Systems: A Tutorial,” 2015 American Control Conference (ACC), Chicago, IL, 2015, pp. 5062-5079. doi: 10.1109/ACC.2015.7172127
Abstract: This tutorial paper will discuss the development of novel state-of-the-art control approaches and theory for complex systems based on machine intelligence in order to enable full autonomy. Given the presence of modeling uncertainties, the unavailability of the model, the possibility of cooperative/non-cooperative goals and malicious attacks compromising the security of teams of complex systems, there is a need for approaches that respond to situations not programmed or anticipated in design. Unfortunately, existing schemes for complex systems do not take into account recent advances of machine intelligence. We shall discuss on how to be inspired by the human brain and combine interdisciplinary ideas from different fields, i.e. computational intelligence, game theory, control theory, and information theory to develop new self-configuring algorithms for decision and control given the unavailability of model, the presence of enemy components and the possibility of network attacks. Due to the adaptive nature of the algorithms, the complex systems will be capable of breaking or splitting into parts that are themselves autonomous and resilient. The algorithms discussed will be characterized by strong abilities of learning and adaptivity. As a result, the complex systems will be fully autonomous, and tolerant to communication failures.
Keywords: artificial intelligence; game theory; information theory; large-scale systems; learning systems; adaptive systems; complex systems; computational intelligence; control theory; learning; machine intelligence; network attacks; self-configuring algorithms; Complex systems; Computational modeling; Control systems; Machine intelligence; Mathematical model; Uncertainty; Vehicles; Autonomy; cyber-physical systems; networks (ID#: 16-10211)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7172127&isnumber=7170700
K. G. Vamvoudakis and J. P. Hespanha, “Model-Free Plug-n-Play Optimization Techniques to Design Autonomous and Resilient Complex Systems,” 2015 American Control Conference (ACC), Chicago, IL, 2015, pp. 5081-5081. doi: 10.1109/ACC.2015.7172129
Abstract: Summary form only given: This talk will focus on model-free distributed optimization based algorithms for complex systems with formal optimality and robustness guarantees. Given the presence of modeling uncertainties, the unavailability of the model, the possibility of cooperative/non-cooperative goals and malicious attacks compromising the security of networked teams, there is a need for completely model-free plug-n-play approaches that respond to situations not programmed or anticipated in design, in order to guarantee mission completion. Unfortunately, existing schemes for complex systems do not take into account recent advances of computational intelligence. This talk will combine interdisciplinary ideas from different fields, i.e. computational intelligence, game theory, control theory, and information theory to develop new self-configuring algorithms for decision and control given the unavailability of model, the presence of enemy components and the possibility of measurement and jamming network attacks. Due to the adaptive nature of the algorithms, the complex systems will be capable of breaking or splitting into parts that are themselves autonomous and resilient. The proposed algorithms will be provided with guaranteed optimality and robustness and will be able to enable complete on-board autonomy, to multiply engagement capability, and enable coordination of distributed, heterogeneous teams of manned/unmanned vehicles and humans.
Keywords: large-scale systems; mobile robots; optimisation; vehicles; complex systems; computational intelligence; control theory; cooperative goals; enemy components; engagement capability; formal optimality; game theory; heterogeneous teams; information theory; interdisciplinary ideas; jamming network attacks; manned vehicles; measurement attacks; model-free plug-n-play optimization techniques; noncooperative goals; on-board autonomy; robustness guarantees; self-configuring algorithms; unmanned vehicles; Algorithm design and analysis; Complex systems; Computational intelligence; Computational modeling; Control systems; Optimization; Robustness (ID#: 16-10212)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7172129&isnumber=7170700
L. Pan, H. Voos, Y. Li, M. Darouach and S. Hu, “Uncertainty Quantification of Exponential Synchronization for a Novel Class of Complex Dynamical Networks with Hybrid TVD Using PIPC,” The 27th Chinese Control and Decision Conference (2015 CCDC), Qingdao, 2015, pp. 125-130. doi: 10.1109/CCDC.2015.7161678
Abstract: This paper investigates the Uncertainty Quantification (UQ) of Exponential Synchronization (ES) problems for a new class of Complex Dynamical Networks (CDNs) with hybrid Time-Varying Delay (TVD) and Non-Time-Varying Delay (NTVD) nodes by using coupling Periodically Intermittent Pinning Control (PIPC), which has three switched intervals in every period. Based on Kronecker product rules, Lyapunov Stability Theory (LST), the Cumulative Distribution Function (CDF), and the PIPC method, the robustness of the control algorithm with respect to the value of the final time is studied. Moreover, we assume a normal distribution for the time and use the Stochastic Collocation (SC) method [1] with different numbers of nodes and collocation points to quantify the sensitivity. For different numbers of nodes, the results show that the ES errors converge to zero with high probability. Finally, to verify the effectiveness of our theoretical results, a Nearest-Neighbor Network (NNN) and a Barabási-Albert Network (BAN) consisting of coupled non-delayed and delayed Chen oscillators are studied to demonstrate that the accuracy of the ES and PIPC is robust to variations of time.
Keywords: Lyapunov methods; complex networks; convergence; delays; large-scale systems; normal distribution; periodic control; robust control; stochastic processes; switching systems (control); synchronisation; BAN; Barabási-Albert Network; CDF; CDN; Kronecker product rule; LST; Lyapunov stability theory; NNN; NTVD node; PIPC method; collocation points; complex dynamical network; control algorithm robustness; cumulative distribution function; delay Chen oscillator; error convergence; exponential synchronization problem; hybrid TVD; hybrid time-varying delay; nearest-neighbor network; nondelayed Chen oscillator; nontime-varying delay; normal distribution; periodically intermittent pinning control; probability; sensitivity quantification; stochastic collocation method; switched interval; time variation; uncertainty quantification; Artificial neural networks; Chaos; Couplings; Delays; Switches; Synchronization; Complex Dynamical Networks (CDNs); Exponential Synchronization (ES); Periodically Intermittent Pinning Control (PIPC);Time-varying Delay (TVD) (ID#: 16-10213)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7161678&isnumber=7161655
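Periodically intermittent control applies feedback only during part of each period. A minimal single-node sketch (an unstable scalar state rather than a network of Chen oscillators, with hypothetical gains) shows why the time-averaged contraction can still drive the error to zero:

```python
# dx/dt = a*x is unstable; feedback u = -k*x is active only during the first
# theta fraction of each period T. Convergence requires the averaged exponent
# theta*(a - k) + (1 - theta)*a to be negative: here 0.5*(-4) + 0.5*(1) = -1.5.
a, k = 1.0, 5.0          # open-loop growth rate, feedback gain (hypothetical)
T, theta = 1.0, 0.5      # control period and on-fraction
dt, x = 1e-3, 1.0

traj = [x]
t = 0.0
while t < 10.0:
    on = (t % T) < theta * T          # control active in the first half of each period
    x += dt * (a - (k if on else 0.0)) * x   # forward-Euler step
    t += dt
    traj.append(x)
```

The state grows during every off interval, yet shrinks by more during every on interval, so the error decays geometrically period over period; the same averaging argument underlies Lyapunov-based proofs for intermittent pinning control.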
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Effectiveness and Work Factor Metrics 2015 – 2016 (Part 1) |
Measurement to determine the effectiveness of security systems is an essential element of the Science of Security. The work cited here was presented in 2015 and 2016.
I. Kotenko and E. Doynikova, “Countermeasure Selection in SIEM Systems Based on the Integrated Complex of Security Metrics,” 2015 23rd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing, Turku, 2015, pp. 567-574. doi: 10.1109/PDP.2015.34
Abstract: The paper considers a technique for countermeasure selection in security information and event management (SIEM) systems. The developed technique is based on the suggested complex of security metrics. For countermeasure selection, the set of security metrics is extended with an additional level needed for security decision support. This level is based on countermeasure effectiveness metrics. Key features of the suggested technique are the application of attack and service dependency graphs, the introduced countermeasure model, and the suggested metrics of countermeasure effectiveness, cost, and collateral damage. Another important feature of the technique is that it provides a countermeasure implementation decision at any time on the basis of the current security state and security events.
Keywords: decision support systems; graph theory; security of data; software metrics; SIEM systems; attack dependencies graphs; countermeasure selection; integrated complex; security decision support; security events; security information and event management; security metrics; security state; service dependencies graphs; Authentication; Measurement; Risk management; Taxonomy; attack graphs; countermeasures; cyber security; risk assessment (ID#: 16-10214)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092776&isnumber=7092002
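As a toy illustration of trading countermeasure effectiveness against cost and collateral damage, consider the scoring rule below. The numbers and the rule itself are hypothetical; the paper's technique additionally reasons over attack and service dependency graphs:

```python
def select_countermeasure(candidates, risk):
    """Pick the countermeasure with the best net benefit: risk removed
    (risk * effectiveness) minus implementation cost and collateral damage."""
    def net_benefit(cm):
        return risk * cm["effectiveness"] - cm["cost"] - cm["collateral"]
    best = max(candidates, key=net_benefit)
    return best if net_benefit(best) > 0 else None   # doing nothing can be optimal

candidates = [
    {"name": "patch service", "effectiveness": 0.9, "cost": 20.0, "collateral": 5.0},
    {"name": "block port",    "effectiveness": 0.6, "cost": 1.0,  "collateral": 30.0},
    {"name": "isolate host",  "effectiveness": 1.0, "cost": 10.0, "collateral": 60.0},
]
chosen = select_countermeasure(candidates, risk=100.0)
```

With a high current risk the costly but effective patch wins; when the risk is low, every candidate's net benefit is negative and no countermeasure is applied, which mirrors the technique's dependence on the current security state.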
M. Ge and D. S. Kim, “A Framework for Modeling and Assessing Security of the Internet of Things,” Parallel and Distributed Systems (ICPADS), 2015 IEEE 21st International Conference on, Melbourne, VIC, 2015, pp. 776-781. doi: 10.1109/ICPADS.2015.102
Abstract: Internet of Things (IoT) is enabling innovative applications in various domains. Due to its heterogeneous and wide scale structure, it introduces many new security issues. To address the security problem, we propose a framework for security modeling and assessment of the IoT. The framework helps to construct graphical security models for the IoT. Generally, the framework involves five steps to find attack scenarios, analyze the security of the IoT through well-defined security metrics, and assess the effectiveness of defense strategies. The benefits of the framework are presented via a study of two example IoT networks. Through the analysis results, we show the capabilities of the proposed framework on mitigating impacts of potential attacks and evaluating the security of large-scale networks.
Keywords: Internet of Things; security of data; IoT networks; defense strategies effectiveness; graphical security models; large-scale networks; security assessment; security metrics; security modeling; Analytical models; Body area networks; Computational modeling; Measurement; Network topology; Security; Wireless communication; Attack Graphs; Hierarchical Attack Representation Model; Security Analysis (ID#: 16-10215)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7384365&isnumber=7384203
P. Pandey and E. A. Snekkenes, “A Performance Assessment Metric for Information Security Financial Instruments,” Information Society (i-Society), 2015 International Conference on, London, 2015, pp. 138-145. doi: 10.1109/i-Society.2015.7366876
Abstract: Business interruptions caused by cyber-attacks pose a serious threat to revenue and share price of the organisation. Furthermore, recent cyber-attacks on various organisations prove that the technical controls, security policies, and regulatory compliance are not sufficient to mitigate the cyber risks. In such a scenario, the residual cyber risk can be mitigated with cyber-insurance policies and with information security derivatives (financial instruments). Information security derivatives are a new class of financial instruments designed to provide an alternate risk mitigation mechanism to reduce the potential adverse impact of an information security event. However, there is a lack of research on the metrics to measure the performance of information security derivatives in mitigating the underlying risk. This article examines the basic requirements to assess the performance of information security derivatives. Furthermore, the article presents three metrics, namely hedge ratio, hedge effectiveness, and hedge efficiency to formulate and evaluate a cyber risk mitigation strategy devised with information security derivatives. Also, the application of these metrics is demonstrated in an imaginary scenario. The accurate measure of performance of information security derivatives is of practical importance for effective risk management strategy.
Keywords: business data processing; risk management; security of data; business interruptions; cyber risk mitigation strategy; cyber-attacks; cyber-insurance policies; hedge effectiveness; hedge efficiency; hedge ratio; information security derivatives; information security financial instruments; performance assessment metric; regulatory compliance; residual cyber risk; risk management strategy; risk mitigation mechanism; Correlation; Information security; Instruments; Measurement; Portfolios; Risk management; Financial Instrument; Hedge Effectiveness; Hedge Efficiency; Information Security; Risk Management (ID#: 16-10216)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7366876&isnumber=7366836
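The hedge-ratio and hedge-effectiveness metrics named in the abstract have standard minimum-variance forms: the hedge ratio is the regression slope of spot changes on instrument changes, and effectiveness is the fraction of variance removed by the hedge. A small sketch (illustrative, not the paper's exact formulation):

```python
def mean(xs):
    return sum(xs) / len(xs)

def hedge_metrics(spot_changes, instr_changes):
    """Minimum-variance hedge ratio and hedge effectiveness (variance reduction)."""
    ms, mi = mean(spot_changes), mean(instr_changes)
    n = len(spot_changes)
    cov = sum((s - ms) * (f - mi) for s, f in zip(spot_changes, instr_changes)) / (n - 1)
    var_i = sum((f - mi) ** 2 for f in instr_changes) / (n - 1)
    h = cov / var_i                                    # hedge ratio
    hedged = [s - h * f for s, f in zip(spot_changes, instr_changes)]
    mh = mean(hedged)
    var_h = sum((x - mh) ** 2 for x in hedged) / (n - 1)
    var_s = sum((s - ms) ** 2 for s in spot_changes) / (n - 1)
    return h, 1.0 - var_h / var_s                      # effectiveness in [0, 1]
```

A perfectly correlated instrument yields effectiveness 1 (all exposure removed), while an uncorrelated one yields 0, which is the sense in which these metrics grade a cyber risk mitigation strategy built from information security derivatives.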
F. Dai, K. Zheng, S. Luo and B. Wu, “Towards a Multiobjective Framework for Evaluating Network Security Under Exploit Attacks,” 2015 IEEE International Conference on Communications (ICC), London, 2015, pp. 7186-7191. doi: 10.1109/ICC.2015.7249473
Abstract: Exploit attacks have been one of the major threats to computer network systems; their damage has been extensively studied and numerous countermeasures have been proposed to defend against them. In this work, we propose a multiobjective optimization framework to facilitate evaluation of network security under exploit attacks. Our approach explores a promising avenue of integrating attack graph methodology to evaluate network security. In particular, we utilize attack-graph-based security metrics to model exploit attacks and dynamically measure security risk under these attacks. A multiobjective problem is then formulated to maximize network exploitability and security impact under feasible exploit compositions. Furthermore, an artificial immune algorithm is employed to solve the formulated problem. We conduct a series of simulation experiments on hypothetical network models to test the performance of the proposed mechanism. Simulation results show that our approach can feasibly and effectively solve the security evaluation problem under multiple decision variables.
Keywords: artificial immune systems; computer network security; graph theory; artificial immune algorithm; attack graph based security metrics; attack graph methodology; computer network security evaluation; exploit attacks; multiobjective optimization framework; security risk; Analytical models; Communication networks; Measurement; Optimization; Security; Sociology; Statistics; attack graph; exploit attack; network security evaluation (ID#: 16-10217)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249473&isnumber=7248285
B. Duncan and M. Whittington, “The Importance of Proper Measurement for a Cloud Security Assurance Model,” 2015 IEEE 7th International Conference on Cloud Computing Technology and Science (CloudCom), Vancouver, BC, 2015, pp. 517-522. doi: 10.1109/CloudCom.2015.91
Abstract: Defining proper measures for evaluating the effectiveness of an assurance model, which we have developed to ensure cloud security, is vital to ensure the successful implementation and continued running of the model. We need to understand that with security being such an essential component of business processes, responsibility must lie with the board. The board must be responsible for defining their security posture on all aspects of the model, and therefore must also be responsible for defining what the necessary measures should be. Without measurement, there can be no control. However, it will also be necessary to properly engage with cloud service providers to achieve a more meaningful degree of security for the cloud user.
Keywords: business data processing; cloud computing; security of data; business process; cloud security assurance model; cloud service provider; security posture; Cloud computing; Companies; Complexity theory; Privacy; Security; Standards; assurance; audit; cloud service providers; compliance; measurement; privacy; security; service level agreements; standards (ID#: 16-10218)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7396206&isnumber=7396111
N. Aminudin, T. K. A. Rahman, N. M. M. Razali, M. Marsadek, N. M. Ramli and M. I. Yassin, “Voltage Collapse Risk Index Prediction for Real Time System’s Security Monitoring,” Environment and Electrical Engineering (EEEIC), 2015 IEEE 15th International Conference on, Rome, 2015, pp. 415-420. doi: 10.1109/EEEIC.2015.7165198
Abstract: Risk-based security assessment (RBSA) for power system security deals with the impact and probability of uncertainty occurring in the power system. In this study, the risk of voltage collapse is measured by considering the L-index as the impact of voltage collapse, while the Poisson probability density function is used to model the probability of a transmission line outage. Predicting the voltage collapse risk index in real time requires precision, reliability, and short processing time. To facilitate this analysis, an artificial-intelligence technique using a Generalized Regression Neural Network (GRNN) is proposed, where the spread value is determined using the Cuckoo Search (CS) optimization method. To validate the effectiveness of the proposed method, the performance of the GRNN with the optimized spread value obtained using CS is compared with a heuristic approach.
Keywords: Poisson distribution; neural nets; optimisation; power system dynamic stability; power system measurement; power system security; power transmission reliability; probability; real-time systems; risk management; GRNN; L-index; Poisson probability density function; artificial intelligent; cuckoo search; generalize regression neural network; optimization method; real time system; risk based security assessment; security monitoring; transmission line outage; voltage collapse risk index prediction; Indexes; Optimization; Power system stability; Power transmission lines; Security; Transmission line measurements; Voltage measurement; Risk based security assessment; cuckoo search optimization; voltage collapse (ID#: 16-10219)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165198&isnumber=7165173
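The risk construction described above, impact (L-index) weighted by a Poisson outage probability, can be sketched in a few lines. The rate and L-index values below are hypothetical, and the paper's GRNN prediction layer is not reproduced:

```python
from math import exp

def outage_probability(rate):
    """P(at least one line outage in the period) when outages ~ Poisson(rate):
    1 - P(N = 0) = 1 - e^(-rate)."""
    return 1.0 - exp(-rate)

def voltage_collapse_risk(rate, l_index):
    """Risk = outage probability x severity, with the L-index in [0, 1]
    (values near 1 indicating proximity to voltage collapse)."""
    return outage_probability(rate) * l_index
```

The risk index is zero when outages are impossible and increases monotonically with both the outage rate and the L-index, which is the quantity the GRNN is trained to predict in real time.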
H. Jiang, Y. Zhang, J. J. Zhang and E. Muljadi, “PMU-Aided Voltage Security Assessment for a Wind Power Plant,” 2015 IEEE Power & Energy Society General Meeting, Denver, CO, 2015, pp. 1-5. doi: 10.1109/PESGM.2015.7286274
Abstract: Because wind power penetration levels in electric power systems are continuously increasing, voltage stability is a critical issue for maintaining power system security and operation. The traditional methods to analyze voltage stability can be classified into two categories: dynamic and steady-state. Dynamic analysis relies on time-domain simulations of faults at different locations; however, this method needs to exhaust faults at all locations to find the security region for voltage at a single bus. With the widely located phasor measurement units (PMUs), the Thevenin equivalent matrix can be calculated by the voltage and current information collected by the PMUs. This paper proposes a method based on a Thevenin equivalent matrix to identify system locations that will have the greatest impact on the voltage at the wind power plant's point of interconnection. The number of dynamic voltage stability analysis runs is greatly reduced by using the proposed method. The numerical results demonstrate the feasibility, effectiveness, and robustness of the proposed approach for voltage security assessment for a wind power plant.
Keywords: phasor measurement; power system security; power system stability; wind power plants; PMU-aided voltage security assessment; Thevenin equivalent matrix; dynamic voltage stability analysis; electric power systems; phasor measurement units; Phasor measurement units; Power system; fault disturbance recorder; phasor measurement unit; voltage security; wind power plant
(ID#: 16-10220)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7286274&isnumber=7285590
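The core identity behind PMU-based Thevenin estimation is that each snapshot satisfies V = E - Z*I, so two snapshots with different load currents determine the equivalent source E and impedance Z. A scalar (single-bus) sketch of this idea follows; the paper works with the full Thevenin equivalent matrix, and the numbers below are hypothetical:

```python
# Two PMU snapshots (V, I) of complex phasors at a bus determine the Thevenin
# equivalent seen from that bus: V = E - Z*I gives two equations in E and Z.
def thevenin_from_pmu(v1, i1, v2, i2):
    z = (v1 - v2) / (i2 - i1)
    e = v1 + z * i1
    return e, z

# Synthetic check: generate snapshots from a known equivalent and recover it.
E, Z = 1.02 + 0.0j, 0.01 + 0.05j          # hypothetical source voltage, impedance (p.u.)
loads = [0.5 - 0.2j, 0.8 - 0.3j]          # two different load currents
snaps = [(E - Z * i, i) for i in loads]
e_hat, z_hat = thevenin_from_pmu(snaps[0][0], snaps[0][1], snaps[1][0], snaps[1][1])
```

In practice the snapshots are noisy and the system changes between measurements, so real implementations fit the equivalent over a sliding window rather than solving from exactly two points.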
D. Adrianto and F. J. Lin, “Analysis of Security Protocols and Corresponding Cipher Suites in ETSI M2M Standards,” Internet of Things (WF-IoT), 2015 IEEE 2nd World Forum on, Milan, 2015, pp. 777-782. doi: 10.1109/WF-IoT.2015.7389152
Abstract: ETSI, as a standard body in telecommunication industry, has defined a comprehensive set of common security mechanisms to protect the IoT/M2M system. They are Service Bootstrapping, Service Connection, and mId Security. For each mechanism, there are several protocols that we can choose. However, the standards do not describe in what condition a particular protocol will be the best among the others. In this paper we analyze which protocol is the most suitable for the use case where an IoT/M2M application generates a large amount of data in a short period of time. The criteria used include efficiency, cost, and effectiveness of the protocol. Our analysis is done based on the actual measurement of an ETSI standard-compliant prototype.
Keywords: Internet of Things; cryptographic protocols; telecommunication industry; telecommunication security; ETSI M2M standards; ETSI standard-compliant prototype; Internet-of-things; IoT system; cipher suites; common security mechanisms; machine-to-machine communication; security protocol analysis; service bootstrapping; service connection; Authentication; Cryptography; Logic gates; Probes; Protocols; Servers; Machine-to-Machine Communication; The Internet of Things; security protocols (ID#: 16-10221)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7389152&isnumber=7389012
Y. Jitsui and A. Kajiwara, “Home Security Monitoring Based Stepped-FM UWB,” 2016 International Workshop on Antenna Technology (iWAT), Cocoa Beach, FL, 2016, pp. 189-191. doi: 10.1109/IWAT.2016.7434839
Abstract: This paper presents the effectiveness of a stepped-FM UWB home security sensor. UWB sensors have attracted considerable attention because they can be expected to detect a human body anywhere in a home, not just in a single room. A few schemes have been suggested to detect an intruder in a home or room. However, it is important to detect an intruder before they break into the house. This paper suggests a UWB sensor that can detect not only an intruder but also a stranger attempting to intrude into the house, and can also estimate the intrusion port (window). The measurements were conducted under five scenarios using our fabricated sensor system installed inside a typical four-room apartment house.
Keywords: security; sensors; ultra wideband technology; UWB sensor; fabricated sensor system; four-room apartment house; home security monitoring; intrusion port; stepped-FM UWB; stepped FM UWB home security sensor; Antenna measurements; Monitoring; Routing; Security; Sensor systems; Trajectory; Transmitting antennas; Ultra-wideband; monitoring; sensor; stepped-fm (ID#: 16-10222)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7434839&isnumber=7434773
R. Kastner, W. Hu and A. Althoff, “Quantifying Hardware Security Using Joint Information Flow Analysis,” 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE), Dresden, Germany, 2016, pp. 1523-1528. doi: (not provided)
Abstract: Existing hardware design methodologies provide limited methods to detect security flaws or to derive a measure of how well a mitigation technique protects the system. Information flow analysis provides a powerful method to test and verify a design against security properties that are typically expressed using the notion of noninterference. While this is useful in many scenarios, it has drawbacks primarily related to its strict enforcement of limiting all information flows, even those that could only occur in rare circumstances. Quantitative metrics based upon information-theoretic measures provide an approach to loosen such restrictions. Furthermore, they are useful in understanding the effectiveness of security mitigation techniques. In this work, we discuss information flow analysis using noninterference and quantitative metrics. We describe how to use them in a synergistic manner to perform joint information flow analysis, and we use this novel technique to analyze security properties across several different hardware cryptographic cores.
Keywords: Control systems; Design methodology; Hardware; Logic gates; Measurement; Mutual information; Security (ID#: 16-10223)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7459555&isnumber=7459269
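The quantitative side of the analysis above typically measures leakage as mutual information between a secret and an attacker-observable output: zero bits means noninterference holds, H(X) bits means the secret is fully revealed. A minimal sketch with a hypothetical two-state "leakage channel":

```python
from math import log2

def mutual_information(joint):
    """I(X;Y) in bits from a joint pmf given as {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Toy leakage channels: secret bit X, observable Y.
no_leak   = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}  # Y independent of X
full_leak = {(0, 0): 0.5, (1, 1): 0.5}                                # Y reveals X
```

A timing side channel in a cryptographic core, for instance, sits between these extremes: noninterference-based analysis would flag any nonzero flow, while the quantitative metric grades how many bits the flow actually carries.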
M. S. Salah, A. Maizate and M. Ouzzif, “Security Approaches Based on Elliptic Curve Cryptography in Wireless Sensor Networks,” 2015 27th International Conference on Microelectronics (ICM), Casablanca, 2015, pp. 35-38. doi: 10.1109/ICM.2015.7437981
Abstract: Wireless sensor networks are ubiquitous in monitoring applications, medical control, environmental control, and military activities. A wireless sensor network consists of a set of communicating nodes distributed over an area in order to measure a given magnitude, or to receive and transmit data independently to a base station that is connected to the user via the Internet or a satellite, for example. Each node in a sensor network is an electronic device with calculation, storage, communication, and power capacities. However, attacks on wireless sensor networks can have negative impacts on critical network applications, undermining security within these networks. It is therefore important to secure these networks in order to maintain their effectiveness. In this paper, we first study cryptographic approaches based on elliptic curves, and then compare the performance of each method relative to the others.
Keywords: public key cryptography; telecommunication security; wireless sensor networks; Internet; base station; communicating nodes; critical network applications; electronic device calculation capacity; electronic device communication; electronic device power; electronic device storage; elliptic curve cryptography; environmental control; magnitude measurement; medical control; military activities; monitoring applications; security approaches; security minimization; ubiquitous network; wireless sensor network; Elliptic curve cryptography; Energy consumption; Irrigation; Jamming; Monitoring; Terrorism; AVL???; CECKM; ECC; RECC-C; RECC-D; Security; WSN (ID#: 16-10224)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7437981&isnumber=7437967
A. Kargarian, Yong Fu and Zuyi Li, “Distributed Security-Constrained Unit Commitment for Large-Scale Power Systems,” 2015 IEEE Power & Energy Society General Meeting, Denver, CO, 2015, pp. 1-1. doi: 10.1109/PESGM.2015.7286540
Abstract: Summary form only given. Independent system operators (ISOs) of electricity markets solve the security-constrained unit commitment (SCUC) problem to plan a secure and economic generation schedule. However, as the size of power systems increases, the current centralized SCUC algorithm could face critical challenges ranging from modeling accuracy to calculation complexity. This paper presents a distributed SCUC (D-SCUC) algorithm to accelerate the generation scheduling of large-scale power systems. In this algorithm, a power system is decomposed into several scalable zones which are interconnected through tie lines. Each zone solves its own SCUC problem, and a parallel calculation method is proposed to coordinate the individual D-SCUC problems. Several power systems are studied to show the effectiveness of the proposed algorithm.
Keywords: distributed algorithms; power generation economics; power markets; power system security; scheduling; D-SCUC problems; distributed SCUC algorithm; distributed security-constrained unit commitment; economic generation schedule; independent system operators; large-scale power systems; parallel calculation method; security-constrained unit commitment problem; Computers; Distance measurement; Economics; Electricity supply industry; Face; Power systems; Schedules (ID#: 16-10225)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7286540&isnumber=7285590
M. Cayford, “Measures of Success: Developing a Method for Evaluating the Effectiveness of Surveillance Technology,” Intelligence and Security Informatics Conference (EISIC), 2015 European, Manchester, 2015, pp. 187-187. doi: 10.1109/EISIC.2015.33
Abstract: This paper presents a method for evaluating the effectiveness of surveillance technology in intelligence work. The method contains measures against which surveillance technology would be assessed to determine its effectiveness. Further research, based on interviews of experts, will inform the final version of this method, including a weighting system.
Keywords: surveillance; terrorism; Sproles method; counterterrorism; surveillance technology; Current measurement; Interviews; Privacy; Security; Standards; Surveillance; Weight measurement; effectiveness; intelligence; measures; method; technology (ID#: 16-10226)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7379756&isnumber=7379706
S. Schinagl, K. Schoon and R. Paans, “A Framework for Designing a Security Operations Centre (SOC),” System Sciences (HICSS), 2015 48th Hawaii International Conference on, Kauai, HI, 2015, pp. 2253-2262. doi: 10.1109/HICSS.2015.270
Abstract: Owning a SOC is an important status symbol for many organizations. Although the concept of a 'SOC' can be considered hype, only a few of them are actually effective in counteracting cybercrime and IT abuse. A literature review reveals that there is no standard framework available and no clear scope or vision for SOCs. Most papers describe specific implementations, often with a commercial purpose. Our research focused on identifying and defining the generic building blocks of a SOC, in order to draft a design framework. In addition, a measurement method has been developed to assess the effectiveness of the protection provided by a SOC.
Keywords: computer crime; IT abuse; SOC; Security Operations Centre design; cybercrime; measurement method; Conferences; Monitoring; Organizations; Security; Standards organizations; System-on-chip; IT Abuse; Intelligence; Value; baseline security; continuous monitoring; damage control; forensic; framework; model; monitoring; pentest; secure service development; sharing knowledge (ID#: 16-10227)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7070084&isnumber=7069647
Y. Wu, T. Wang and J. Li, “Effectiveness Analysis of Encrypted and Unencrypted Bit Sequence Identification Based on Randomness Test,” 2015 Fifth International Conference on Instrumentation and Measurement, Computer, Communication and Control (IMCCC), Qinhuangdao, 2015, pp. 1588-1591. doi: 10.1109/IMCCC.2015.337
Abstract: Identifying encrypted and unencrypted bit sequences is of great significance for network management. Compared with unencrypted bit sequences, encrypted bit sequences are more random. Randomness tests are used to evaluate the security of cipher algorithms; whether they can also be used to distinguish encrypted from unencrypted bit sequences still requires further research. We first introduce the principle of randomness tests. According to the input size limit of each test in the SP800-22 rev1a standard, we selected the frequency test, frequency test within a block, runs test, longest run of ones in a block test, and cumulative sums test to identify encrypted and unencrypted bit sequences. We also analyzed the preconditions under which the selected tests can successfully identify encrypted and unencrypted bit sequences, and presented the relevant conclusions. Finally, the effectiveness of the selected tests is verified by experiments.
Keywords: cryptography; SP800-22 rev1a standard; block test; cipher algorithms; cumulative sums test; effectiveness analysis; frequency test; network management; randomness test; security evaluation; unencrypted bit sequence identification; Ciphers; Encryption; Probability; Protocols; Standards; bit sequences; cipher algorithm; cumulative sums; encryption (ID#: 16-10228)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7406118&isnumber=7405778
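The frequency (monobit) test selected above is the simplest test in SP800-22. The following generic sketch (not taken from the paper) shows how its p-value is computed from a bit sequence and how a strongly non-random sequence, like a run of identical bits, fails it:

```python
import math

def monobit_p_value(bits):
    """NIST SP800-22 frequency (monobit) test: map bits to +/-1, sum them,
    and derive a p-value via the complementary error function. A p-value
    below 0.01 is conventionally taken as evidence of non-randomness."""
    n = len(bits)
    s = sum(2 * b - 1 for b in bits)        # 0 -> -1, 1 -> +1
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

# A constant sequence is flagged as non-random; a balanced one is not.
assert monobit_p_value([1] * 100) < 0.01
assert monobit_p_value([0, 1] * 50) > 0.5
```

Under the paper's premise, low p-values across such tests would point to unencrypted (less random) traffic, while ciphertext should pass them.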
X. Yuan, P. Tu and Y. Qi, “Sensor Bias Jump Detection and Estimation with Range Only Measurements,” Information and Automation, 2015 IEEE International Conference on, Lijiang, 2015, pp. 1658-1663. doi: 10.1109/ICInfA.2015.7279552
Abstract: A target can be positioned by wireless communication sensors. In practical systems, range-based sensors may produce biased measurements. The biases are mostly constant values, but they may jump abruptly in some special scenarios. An on-line bias change detection and estimation algorithm is presented in this paper. The algorithm detects the bias jump using a chi-square test, then estimates it with a Modified Augmented Extended Kalman filter. The feasibility and effectiveness of the proposed algorithms are illustrated by simulations, in comparison with the Augmented Extended Kalman filter.
Keywords: Kalman filters; estimation theory; nonlinear filters; sensors; Chi-Square test; modified augmented extended Kalman filter; online bias change detection algorithm; online bias change estimation algorithm; range only measurement; sensor bias jump detection; sensor bias jump estimation; wireless communication sensors; Change detection algorithms; Estimation; Noise; Position measurement; Wireless communication; Bias estimation; Jump of bias; Wireless positioning systems; range-only measurements (ID#: 16-10229)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7279552&isnumber=7279248
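The chi-square detection step described in the abstract can be illustrated, for a scalar measurement, by a generic normalized-innovation test (a sketch under common Kalman-filter conventions, not the authors' code); 6.63 is the 99% quantile of the chi-square distribution with one degree of freedom:

```python
def bias_jump_detected(innovation, innovation_var, threshold=6.63):
    """Chi-square test on a scalar filter innovation: under the no-jump
    hypothesis, innovation**2 / innovation_var follows a chi-square
    distribution with 1 degree of freedom. Exceeding the 99% quantile
    flags a likely bias jump, after which the bias is re-estimated."""
    return innovation ** 2 / innovation_var > threshold

assert not bias_jump_detected(0.5, 1.0)   # 0.25 is consistent with noise
assert bias_jump_detected(5.0, 1.0)       # 25.0 exceeds the threshold
```

In the paper's setting, a detected jump would trigger the Modified Augmented Extended Kalman filter to re-estimate the bias state.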
M. M. Hasan and H. T. Mouftah, “Encryption as a Service for Smart Grid Advanced Metering Infrastructure,” 2015 IEEE Symposium on Computers and Communication (ISCC), Larnaca, 2015, pp. 216-221. doi: 10.1109/ISCC.2015.7405519
Abstract: Smart grid advanced metering infrastructure (AMI) bridges consumers, utilities, and the market. Its operation relies on large-scale communication networks. At the lowest level, information is acquired by smart meters and sensors. At the highest level, information is stored and processed by smart grid control centers for various purposes. The AMI conveys a large amount of sensitive information, and preventing unauthorized access to this information is a major concern for smart grid operators. Encryption is the primary security measure for preventing unauthorized access, but it incurs various overheads and deployment costs. In recent times, the security as a service (SECaaS) model has introduced a number of cloud-based security solutions such as encryption as a service (EaaS), which promises the speed and cost-effectiveness of cloud computing. In this paper, we propose a framework named encryption service for smart grid AMI (ES4AM). The ES4AM framework focuses on lightweight encryption of in-flight AMI data. We also study the feasibility of the framework using relevant simulation results.
Keywords: cloud computing; cryptography; power engineering computing; power markets; power system control; power system measurement; sensors; smart power grids; telecommunication security; ES4AM; EaaS; SECaaS model; communication networks; encryption as a service; number cloud-based security solutions; primary security measure; security as a service; smart grid AMI; smart grid advanced metering infrastructure; smart grid control centers; smart grid operators; unauthorized access; Cloud computing; Encryption; Public key; Servers; Smart grids; encryption; managed security; smart grid (ID#: 16-10230)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7405519&isnumber=7405441
Z. Hu, Y. Wang, X. Tian, X. Yang, D. Meng and R. Fan, “False Data Injection Attacks Identification for Smart Grids,” Technological Advances in Electrical, Electronics and Computer Engineering (TAEECE), 2015 Third International Conference on, Beirut, 2015, pp. 139-143. doi: 10.1109/TAEECE.2015.7113615
Abstract: False Data Injection Attacks (FDIA) in the smart grid are considered the most threatening cyber-physical attacks. Based on the variety of measurement categories in power systems, a new method for false data detection and identification is presented. The main emphasis of our research is the use of equivalent measurement transformation, instead of traditional weighted least squares state estimation, in the state estimation (SE) process, with false data identified by a residual-search method. In this paper, one FDIA attack case on the IEEE 14-bus system is designed in MATLAB to test the effectiveness of the algorithm. Using this method the false data can be effectively dealt with.
Keywords: IEEE standards; power system security; security of data; smart power grids; FDIA; IEEE 14 bus system; SE; cyberphysical attack threatening; equivalent measurement transformation; false data injection attack identification; power system; residual researching method; smart grid; Current measurement; Pollution measurement; Power measurement; Power systems; State estimation; Transmission line measurements; Weight measurement; false data detection and identification; false data injection attacks; smart grid (ID#: 16-10231)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113615&isnumber=7113589
A. K. Al-Khamis and A. A. Khalafallah, “Secure Internet on Google Chrome: Client Side Anti-Tabnabbing Extension,” Anti-Cybercrime (ICACC), 2015 First International Conference on, Riyadh, 2015, pp. 1-4. doi: 10.1109/Anti-Cybercrime.2015.7351942
Abstract: Electronic transactions rank at the top of our daily transactions. The Internet has become invaluable for government, business, and personal use. This has coincided with a great increase in online attacks, particularly new forms of known attacks such as tabnabbing. Thus, users' confidentiality and personal information must be protected using information security. Tabnabbing is a new form of phishing. To steal credentials, the attacker needs nothing except users' preoccupation with other work and the exploitation of human memory weakness. The impact of this malicious attempt begins with identity theft and ends with financial loss. This has encouraged some security specialists and researchers to tackle the tabnabbing attack, but their studies are still in their infancy and not sufficient. The work done here focuses on developing an effective anti-tabnabbing extension for the Google Chrome browser to protect Internet users from becoming victims, as well as to raise their awareness. The system developed is novel in its effectiveness in detecting a tabnabbing attack and in its combination of two well-known approaches used to combat online attacks. The success of the system was examined by performance measurements such as the confusion matrix and ROC. The system produces promising results.
Keywords: Internet; security of data; Google Chrome browser; Internet users; ROC; client side anti-tabnabbing extension; confusion matrix; electronic transactions; financial loss; human memory weakness; information security; online attacks; personal information; phishing; secure Internet; security specialists; synchronization; tabnabbing attack; Browsers; Business; HTML; Matrix converters; Security; Uniform resource locators; Browser security; Detection; Google Extension; Phishing; Social engineering; Tabnabbing attack; Usable security (ID#: 16-10232)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7351942&isnumber=7351910
N. Soule et al., “Quantifying & Minimizing Attack Surfaces Containing Moving Target Defenses,” Resilience Week (RWS), 2015, Philadelphia, PA, 2015, pp. 1-6. doi: 10.1109/RWEEK.2015.7287449
Abstract: The cyber security exposure of resilient systems is frequently described as an attack surface. A larger surface area indicates increased exposure to threats and a higher risk of compromise. Ad-hoc addition of dynamic proactive defenses to distributed systems may inadvertently increase the attack surface. This can lead to cyber friendly fire, a condition in which adding superfluous or incorrectly configured cyber defenses unintentionally reduces security and harms mission effectiveness. Examples of cyber friendly fire include defenses which themselves expose vulnerabilities (e.g., through an unsecured admin tool), unknown interaction effects between existing and new defenses causing brittleness or unavailability, and new defenses which may provide security benefits, but cause a significant performance impact leading to mission failure through timeliness violations. This paper describes a prototype service capability for creating semantic models of attack surfaces and using those models to (1) automatically quantify and compare cost and security metrics across multiple surfaces, covering both system and defense aspects, and (2) automatically identify opportunities for minimizing attack surfaces, e.g., by removing interactions that are not required for successful mission execution.
Keywords: security of data; attack surface minimization; cyber friendly fire; cyber security exposure; dynamic proactive defenses; moving target defenses; resilient systems; timeliness violations; Analytical models; Computational modeling; IP networks; Measurement; Minimization; Security; Surface treatment; cyber security analysis; modeling; threat assessment (ID#: 16-10233)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7287449&isnumber=7287407
K. A. Torkura, F. Cheng and C. Meinel, “A Proposed Framework for Proactive Vulnerability Assessments in Cloud Deployments,” 2015 10th International Conference for Internet Technology and Secured Transactions (ICITST), London, 2015, pp. 51-57. doi: 10.1109/ICITST.2015.7412055
Abstract: Vulnerability scanners are deployed in computer networks and software to identify security flaws and misconfigurations in a timely manner. However, cloud computing has introduced new attack vectors that require a commensurate change of vulnerability assessment strategies. To investigate the effectiveness of these scanners in cloud environments, we first conduct a quantitative security assessment of OpenStack's vulnerability lifecycle and discover severe risk levels resulting from prolonged patch release duration. More specifically, there are long time lags between OpenStack patch releases and patch inclusion in vulnerability scanning engines. This scenario leaves sufficient time for malicious actions and the creation of exploits such as zero-days. Mitigating these concerns requires systems with current knowledge of events within the vulnerability lifecycle. However, current vulnerability scanners are designed to depend on information about publicly announced vulnerabilities, which mostly includes only vulnerability disclosure dates. Accordingly, we propose a framework that would mitigate these risks by gathering and correlating information from several security information sources, including exploit databases, malware signature repositories and Bug Tracking Systems. The information is thereafter used to automatically generate plugins armed with current information about zero-day exploits and unknown vulnerabilities. We have characterized two new security metrics to describe the discovered risks.
Keywords: cloud computing; invasive software; OpenStack vulnerability lifecycle; attack vector; bug tracking system; cloud deployment; exploit databases; malware signature repositories; proactive vulnerability assessment; security flaws; vulnerability scanner; Cloud computing; Databases; Engines; Measurement; Security; Cloud security; cloud vulnerabilities; security metrics; vulnerability lifecycle; vulnerability signature; zero-days (ID#: 16-10234)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7412055&isnumber=7412034
S. K. Rao, D. Krishnankutty, R. Robucci, N. Banerjee and C. Patel, “Post-Layout Estimation of Side-Channel Power Supply Signatures,” Hardware Oriented Security and Trust (HOST), 2015 IEEE International Symposium on, Washington, DC, 2015, pp. 92-95. doi: 10.1109/HST.2015.7140244
Abstract: Two major security challenges for integrated circuits (IC) that involve encryption cores are side-channel based attacks and malicious hardware insertions (trojans). Side-channel attacks predominantly use power supply measurements to exploit the correlation of power consumption with the underlying logic operations on an IC. Practical attacks have been demonstrated using power supply traces and either plaintext or cipher-text collected during encryption operations. Also, several techniques that detect trojans rely on detecting anomalies in the power supply in combination with other circuit parameters. Counter-measures against these side-channel attacks as well as detection schemes for hardware trojans are required and rely on accurate pre-fabrication power consumption predictions. However, available state-of-the-art techniques would require prohibitive full-chip SPICE simulations. In this work, we present an optimized technique to accurately estimate the power supply signatures that require significantly less computational resources, thus enabling integration of Design-for-Security (DfS) based paradigms. To demonstrate the effectiveness of our technique, we present data for a DES crypto-system that proves that our framework can identify vulnerabilities to Differential Power Analysis (DPA) attacks. Our framework can be generically applied to other crypto-systems and can handle larger IC designs without loss of accuracy.
Keywords: cryptography; estimation theory; integrated circuit layout; logic testing; power consumption; power supply circuits; security; DES cryptosystem; DPA; DfS; IC; SPICE simulation; anomaly detection; cipher-text; design-for-security; differential power analysis; encryption core; hardware trojan; integrated circuit; logic operation; malicious hardware insertion; plaintext; post-layout estimation; power consumption correlation; power supply measurement; power supply tracing; practical attack; prefabrication power consumption prediction; side-channel based attack; side-channel power supply signature estimation; Correlation; Hardware; Integrated circuits; Power supplies; SPICE; Security; Transient analysis; Hardware Security; Power Supply analysis; Side-channel attacks; Trojan Detection (ID#: 16-10235)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140244&isnumber=7140225
S. Abraham and S. Nair, “Exploitability Analysis Using Predictive Cybersecurity Framework,” Cybernetics (CYBCONF), 2015 IEEE 2nd International Conference on, Gdynia, 2015, pp. 317-323. doi: 10.1109/CYBConf.2015.7175953
Abstract: Managing security is a complex process, and existing research in the field of cybersecurity metrics provides limited insight into understanding the impact attacks have on the overall security goals of an enterprise. We need a new generation of metrics that enables enterprises to react even faster in order to properly protect mission-critical systems in the midst of both undiscovered and disclosed vulnerabilities. In this paper, we propose a practical and predictive security model for exploitability analysis in a networking environment using stochastic modeling. Our model is built upon the trusted CVSS Exploitability framework, and we analyze how the atomic attributes that make up the exploitability score, namely Access Complexity, Access Vector and Authentication, evolve over a specific time period. We formally define a nonhomogeneous Markov model which incorporates time-dependent covariates, namely the vulnerability age and the vulnerability discovery rate. The daily transition-probability matrices in our study are estimated using a combination of Frei's model and the Alhazmi-Malaiya logistic model. An exploitability analysis is conducted to show the feasibility and effectiveness of our proposed approach. Our approach enables enterprises to apply analytics using a predictive cybersecurity model to improve decision making and reduce risk.
Keywords: Markov processes; authorisation; decision making; risk management; access complexity; access vector; authentication; daily transition-probability matrices; exploitability analysis; nonhomogeneous Markov model; predictive cybersecurity framework; risk reduction; trusted CVSS exploitability framework; vulnerability age; vulnerability discovery rate; Analytical models; Computer security; Measurement; Predictive models; Attack Graph; CVSS; Markov Model; Security Metrics; Vulnerability Discovery Model; Vulnerability Lifecycle Model (ID#: 16-10236)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7175953&isnumber=7175890
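The nonhomogeneous Markov model in the abstract can be sketched generically: the state distribution is propagated through a different transition matrix each day. The matrices and probabilities below are illustrative placeholders, not values estimated in the paper:

```python
def propagate(p0, transition_matrices):
    """Propagate an initial state distribution through a nonhomogeneous
    Markov chain: p_{t+1} = p_t * P_t, with a different P_t each step."""
    p = list(p0)
    for P in transition_matrices:
        p = [sum(p[i] * P[i][j] for i in range(len(p)))
             for j in range(len(P[0]))]
    return p

# Two states (not exploited, exploited); in the paper's setting the daily
# matrices would come from age-dependent discovery/exploit rates.
P_day1 = [[0.9, 0.1], [0.0, 1.0]]   # 10% exploit probability on day 1
P_day2 = [[0.8, 0.2], [0.0, 1.0]]   # 20% exploit probability on day 2
p = propagate([1.0, 0.0], [P_day1, P_day2])
assert abs(p[1] - 0.28) < 1e-9      # P(exploited after 2 days) = 0.28
```

The exploitability score at time t is then read off the propagated distribution, which is what makes the metric predictive rather than static.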
C. Callegari, S. Giordano and M. Pagano, “Histogram Cloning and CuSum: An Experimental Comparison Between Different Approaches to Anomaly Detection,” Performance Evaluation of Computer and Telecommunication Systems (SPECTS), 2015 International Symposium on, Chicago, IL, 2015, pp. 1-7. doi: 10.1109/SPECTS.2015.7285294
Abstract: Due to the proliferation of new threats from spammers, attackers, and criminal enterprises, anomaly-based Intrusion Detection Systems have emerged as a key element in network security, and different statistical approaches have been considered in the literature. To cope with scalability issues, random aggregation through the use of sketches seems to be a powerful prefiltering stage that can be applied to backbone data traffic. In this paper we compare two different statistical methods for detecting the presence of anomalies in such aggregated data. In more detail, histogram cloning (with different distance measurements) and the CuSum algorithm (at the bucket level) are tested over a well-known publicly available data set. The performance analysis presented in this paper demonstrates the effectiveness of the CuSum when a proper definition of the algorithm, which takes into account the standard deviation of the underlying variables, is chosen.
Keywords: computer network security; data analysis; statistical analysis; CuSum algorithm; aggregated data anomalies; anomaly based intrusion detection systems; backbone data traffic; bucket level; cumulative sum control chart statistics; histogram cloning; network security; scalability issues; statistical methods; Aggregates; Algorithm design and analysis; Cloning; Histograms; Mathematical model; Monitoring; Standards; Anomaly Detection; CUSUM; Histogram Cloning; Network Security; Statistical Traffic Analysis (ID#: 16-10237)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7285294&isnumber=7285273
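A minimal one-sided CuSum detector, normalized by the standard deviation as the abstract recommends, can be sketched as follows (the reference value k and alarm threshold h are conventional placeholders, not the paper's settings):

```python
def cusum(samples, mean, std, k=0.5, h=5.0):
    """One-sided CuSum: accumulate standardized deviations above the
    reference value k; raise an alarm when the statistic exceeds h.
    Returns the index of the first alarm, or None if none is raised."""
    g = 0.0
    for i, x in enumerate(samples):
        g = max(0.0, g + (x - mean) / std - k)
        if g > h:
            return i
    return None

# Traffic jumps from its baseline (mean 10, std 2) to 30 at index 50:
data = [10.0] * 50 + [30.0] * 50
assert cusum(data, mean=10.0, std=2.0) == 50   # alarm at the change point
assert cusum([10.0] * 100, mean=10.0, std=2.0) is None
```

Dividing by the standard deviation is the detail the paper highlights: without it, buckets with naturally high variance dominate the statistic.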
K. Xiong and X. Chen, “Ensuring Cloud Service Guarantees via Service Level Agreement (SLA)-Based Resource Allocation,” 2015 IEEE 35th International Conference on Distributed Computing Systems Workshops, Columbus, OH, 2015, pp. 35-41. doi: 10.1109/ICDCSW.2015.18
Abstract: This paper studies the problem of resource management and placement for high-performance clouds. It is concerned with the three most important performance metrics: response time, throughput, and utilization, as Quality of Service (QoS) metrics defined in a Service Level Agreement (SLA). We propose SLA-based approaches for resource management in clouds. Specifically, we first quantify the metrics of trustworthiness, a percentile of response time, and availability. Then, we formulate cloud resource management as a nonlinear optimization problem subject to SLA requirements. Finally, we give a solution to this nonlinear optimization problem and demonstrate the effectiveness of the proposed solutions through illustrative examples.
Keywords: cloud computing; contracts; nonlinear programming; resource allocation; SLA-based approaches; SLA-based resource allocation; cloud service guarantees; nonlinear optimization problem; quality of service metrics; resource management; service level agreement; Cloud computing; Measurement; Quality of service; Resource management; Security; Servers; Time factors; Performance; Resource Allocation; Service Level Agreement (ID#: 16-10238)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165081&isnumber=7165001
G. Sabaliauskaite, G. S. Ng, J. Ruths and A. P. Mathur, “Empirical Assessment of Methods to Detect Cyber Attacks on a Robot,” 2016 IEEE 17th International Symposium on High Assurance Systems Engineering (HASE), Orlando, FL, 2016, pp. 248-251. doi: 10.1109/HASE.2016.19
Abstract: An experiment was conducted using a robot to investigate the effectiveness of four methods for detecting cyber attacks and analyzing robot failures. Cyber attacks were implemented on three robots of the same make and model through their wireless control mechanisms. Analysis of experimental data indicates differences in attack detection effectiveness across the detection methods. A method that compares sensor values at each time step to average historical values was the most effective. Further, attack detection effectiveness was the same or lower in actual robots as compared to simulation. Factors such as attack size and timing influenced attack detection effectiveness.
Keywords: security of data; telerobotics; cyber attack detection; robot failure analysis; wireless control mechanisms; Computer crashes; Data models; Robot sensing systems; Time measurement; cyber-attacks; cyber-physical systems; robots; safety; security (ID#: 16-10239)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7423162&isnumber=7423114
Y. Mo and R. M. Murray, “Multi-Dimensional State Estimation in Adversarial Environment,” Control Conference (CCC), 2015 34th Chinese, Hangzhou, 2015, pp. 4761-4766. doi: 10.1109/ChiCC.2015.7260376
Abstract: We consider the estimation of a vector state based on m measurements that can be potentially manipulated by an adversary. The attacker is assumed to have limited resources and can only manipulate up to l of the m measurements; however, it can compromise those measurements arbitrarily. The problem is formulated as a minimax optimization, where one seeks to construct an optimal estimator that minimizes the “worst-case” error against all possible manipulations by the attacker and all possible sensor noises. We show that if the system is not observable after removing 2l sensors, then the worst-case error is infinite, regardless of the estimation strategy. If the system remains observable after removing an arbitrary set of 2l sensors, we prove that the optimal state estimation can be computed by solving a semidefinite programming problem. A numerical example is provided to illustrate the effectiveness of the proposed state estimator.
Keywords: mathematical programming; minimax techniques; state estimation; adversarial environment; minimax optimization; multidimensional state estimation; semidefinite programming problem; vector state estimation; worst-case error; Indexes; Noise; Optimization; Robustness; Security; State estimation; Estimation; Security (ID#: 16-10240)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7260376&isnumber=7259602
N. Antunes and M. Vieira, “On the Metrics for Benchmarking Vulnerability Detection Tools,” 2015 45th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, Rio de Janeiro, 2015, pp. 505-516. doi: 10.1109/DSN.2015.30
Abstract: Research and practice show that the effectiveness of vulnerability detection tools depends on the concrete use scenario. Benchmarking can be used for selecting the most appropriate tool, helping to assess and compare alternative solutions, but its effectiveness largely depends on the adequacy of the metrics. This paper studies the problem of selecting the metrics to be used in a benchmark for software vulnerability detection tools. First, a large set of metrics is gathered and analyzed according to the characteristics of a good metric for the vulnerability detection domain. Afterwards, the metrics are analyzed in the context of specific vulnerability detection scenarios to understand their effectiveness and to select the most adequate one for each scenario. Finally, an MCDA algorithm together with experts' judgment is applied to validate the conclusions. Results show that although some of the traditionally used metrics, like precision and recall, are adequate in some scenarios, others require alternative metrics that are seldom used in the benchmarking area.
Keywords: invasive software; software metrics; MCDA algorithm; alternative metrics; benchmarking vulnerability detection tool; software vulnerability detection tool; Benchmark testing; Concrete; Context; Measurement; Security; Standards; Automated Tools; Benchmarking; Security Metrics; Software Vulnerabilities; Vulnerability Detection (ID#: 16-10241)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266877&isnumber=7266818
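The precision and recall metrics the paper evaluates are computed from a tool's true positives, false positives, and false negatives; a quick illustrative sketch (the counts below are made up, not the paper's data):

```python
def precision_recall_f1(tp, fp, fn):
    """Classic detection metrics, which the paper finds adequate only in
    some benchmarking scenarios: precision, recall, and their harmonic
    mean (F1)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# A tool reporting 30 real vulnerabilities, 10 false alarms, 20 misses:
p, r, f1 = precision_recall_f1(tp=30, fp=10, fn=20)
assert abs(p - 0.75) < 1e-9    # 30 / 40
assert abs(r - 0.60) < 1e-9    # 30 / 50
```

The paper's point is that these numbers alone can mislead: two tools with similar F1 can behave very differently in a given use scenario.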
D. Evangelista, F. Mezghani, M. Nogueira and A. Santos, “Evaluation of Sybil Attack Detection Approaches in the Internet of Things Content Dissemination,” 2016 Wireless Days (WD), Toulouse, France, 2016, pp. 1-6. doi: 10.1109/WD.2016.7461513
Abstract: The Internet of Things (IoT) comprises a diversity of heterogeneous objects that collect data in order to disseminate information to applications. The IoT data dissemination service can be tampered with by several types of attackers. Among these, the Sybil attack has emerged as the most critical, since it compromises data confidentiality. Although there are approaches against the Sybil attack in several services, they disregard the presence of heterogeneous devices and rely on complex solutions. This paper presents a study highlighting the strengths and weaknesses of Sybil attack detection approaches when applied to IoT content dissemination. An evaluation of the LSD solution was made to assess its effectiveness and efficiency in an IoT network.
Keywords: Authentication; Cryptography; Feature extraction; Internet of things; Measurement; Security and privacy in the Internet of Things; Security in networks; Sybil Detection Techniques (ID#: 16-10242)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7461513&isnumber=7461453
X. Yang, D. Lo, X. Xia, Y. Zhang and J. Sun, “Deep Learning for Just-in-Time Defect Prediction,” Software Quality, Reliability and Security (QRS), 2015 IEEE International Conference on, Vancouver, BC, 2015, pp. 17-26. doi: 10.1109/QRS.2015.14
Abstract: Defect prediction is a very meaningful topic, particularly at the change level. Change-level defect prediction, also referred to as just-in-time defect prediction, can not only ensure software quality in the development process but also help developers check and fix defects in time. Nowadays, deep learning is a hot topic in the machine learning literature. Whether deep learning can be used to improve the performance of just-in-time defect prediction remains uninvestigated. In this paper, to bridge this research gap, we propose an approach, Deeper, which leverages deep learning techniques to predict defect-prone changes. We first build a set of expressive features from a set of initial change features by leveraging a deep belief network algorithm. Next, a machine learning classifier is built on the selected features. To evaluate the performance of our approach, we use datasets from six large open source projects, i.e., Bugzilla, Columba, JDT, Platform, Mozilla, and PostgreSQL, containing a total of 137,417 changes. We compare our approach with the approach proposed by Kamei et al. The experimental results show that on average across the 6 projects, Deeper could discover 32.22% more bugs than Kamei et al.'s approach (51.04% versus 18.82% on average). In addition, Deeper can achieve F1-scores of 0.22-0.63, which are statistically significantly higher than those of Kamei et al.'s approach on 4 out of the 6 projects.
Keywords: just-in-time; learning (artificial intelligence); pattern classification; software quality; change-level defect prediction; deep learning; just-in-time defect prediction; machine learning classifier; machine learning literature; Computer bugs; Feature extraction; Logistics; Machine learning; Measurement; Software quality; Training; Cost Effectiveness; Deep Belief Network; Deep Learning; Just-In-Time Defect Prediction (ID#: 16-10243)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7272910&isnumber=7272893
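The pipeline this abstract describes (features from change metrics, then a classifier) can be illustrated with a toy stand-in. The paper learns expressive features with a deep belief network before classification; the sketch below skips that stage and fits a plain logistic regression directly on two invented change metrics, so it only illustrates the classification step, not Deeper itself:

```python
import math

def train_logreg(X, y, lr=0.1, epochs=500):
    """Fit logistic regression by stochastic gradient descent on log-loss."""
    w = [0.0] * (len(X[0]) + 1)               # bias followed by weights
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi                        # gradient of the log-loss
            w[0] -= lr * g
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * g * xj
    return w

def predict(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))

# Invented features per change: [normalized lines added, files touched / 10];
# label 1 marks a defect-inducing change.
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, 0, 0]
w = train_logreg(X, y)
print(predict(w, [0.85, 0.85]) > 0.5)         # large, wide change -> risky
```

In the paper, the deep belief network's role is precisely to replace such hand-picked raw features with learned, more expressive ones before this classification step.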
H. Hemmati, “How Effective Are Code Coverage Criteria?,” Software Quality, Reliability and Security (QRS), 2015 IEEE International Conference on, Vancouver, BC, 2015, pp. 151-156. doi: 10.1109/QRS.2015.30
Abstract: Code coverage is one of the main metrics used to measure the adequacy of a test case/suite. It has been studied extensively in academia and used even more in industry. However, a test case may cover a piece of code (no matter which coverage metric is used) but miss its faults. In this paper, we studied several existing and standard control- and data-flow coverage criteria on a set of developer-written fault-revealing test cases from several releases of five open source projects. We found that a) basic criteria such as statement coverage are very weak (detecting only 10% of the faults), b) combining several control-flow coverage criteria is better than the strongest criterion alone (28% vs. 19%), c) a basic data-flow coverage can detect many undetected faults (79% of the faults undetected by control-flow coverage can be detected by a basic def/use pair coverage), and d) on average 15% of the faults may not be detected by any of the standard control- and data-flow coverage criteria. Classification of the undetected faults showed that they mostly relate to specification (missing logic).
Keywords: data flow analysis; program testing; public domain software; software quality; code coverage criteria; control-flow coverage; data flow coverage criteria; developer-written fault-revealing test cases; missing logic; open source projects; statement coverage; Arrays; Data mining; Fault diagnosis; Instruments; Java; Measurement; Testing; Code Coverage; Control Flow; Data Flow; Effectiveness; Experiment; Fault Categorization; Software Testing (ID#: 16-10244)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7272926&isnumber=7272893
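The statement-coverage criterion the abstract calls "very weak" is simple to measure. The sketch below uses CPython's tracing hook to record which lines of a function execute; the buggy sign() function is an invented example of the paper's point that a suite can reach every statement yet still miss a fault:

```python
import sys

def measure_statement_coverage(func, args_list):
    """Return the set of line numbers of `func` executed across the inputs."""
    executed = set()
    code = func.__code__
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            executed.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        for args in args_list:
            func(*args)
    finally:
        sys.settrace(None)
    return executed

def sign(x):              # function under test
    if x > 0:
        return 1
    return -1             # BUG: sign(0) should be 0

covered = measure_statement_coverage(sign, [(5,), (-3,)])
print(len(covered))       # all 3 statements executed, yet the bug survives
```

The two test inputs achieve 100% statement coverage, but neither exposes the faulty behavior at x = 0, which is exactly the "missing logic" category of undetected faults the paper identifies.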
S. Sanders and J. Kaur, “Can Web Pages Be Classified Using Anonymized TCP/IP Headers?,” 2015 IEEE Conference on Computer Communications (INFOCOM), Kowloon, 2015, pp. 2272-2280. doi: 10.1109/INFOCOM.2015.7218614
Abstract: Web page classification is useful in many domains, including ad targeting, traffic modeling, and intrusion detection. In this paper, we investigate whether learning-based techniques can be used to classify web pages based only on anonymized TCP/IP headers of traffic generated when a web page is visited. We do this in three steps. First, we select informative TCP/IP features for a given downloaded web page, and study which of these remain stable over time and are also consistent across client browser platforms. Second, we use the selected features to evaluate four different labeling schemes and learning-based classification methods for web page classification. Lastly, we empirically study the effectiveness of the classification methods for real-world applications.
Keywords: Web sites; online front-ends; security of data; telecommunication traffic; transport protocols; TCP/IP header; Web page classification; ad targeting; client browser platforms; intrusion detection; labeling schemes; learning-based classification methods; learning-based techniques; traffic modeling; Browsers; Feature extraction; IP networks; Labeling; Navigation; Streaming media; Web pages; Traffic Classification; Web Page Measurement (ID#: 16-10245)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218614&isnumber=7218353
X. Luo, J. Li, Z. Jiang and X. Guan, “Complete Observation Against Attack Vulnerability for Cyber-Physical Systems with Application to Power Grids,” 2015 5th International Conference on Electric Utility Deregulation and Restructuring and Power Technologies (DRPT), Changsha, 2015, pp. 962-967. doi: 10.1109/DRPT.2015.7432368
Abstract: This paper presents a novel framework based on system observability to address the structural vulnerability of cyber-physical systems (CPSs) under attack, with application to power grids. A method of adding power measurement points is applied to detect the angle and voltage of a bus by adding detection points between two bus lines in the power grid. Generator dynamic equations are then built to analyze the rotor angles and angular velocities of generators, and the system is simplified using Phasor Measurement Units (PMUs) to observe the status of the generators. According to the impact of a series of attacks on the grid, grid measurement detection and state estimation are used to achieve observation of the grid's status, enabling monitoring of the entire grid. It is then shown that the structural vulnerability against attacks can be addressed by combining the above-mentioned observations. Finally, simulations are used to demonstrate the effectiveness of the proposed method. It is shown that some attacks can be effectively monitored to improve CPS security.
Keywords: electric generators; phasor measurement; power grids; power system protection; rotors; CPS security; PMU; attack vulnerability; cyber-physical systems; generator dynamic equations; grid measurements detection; power management units; power measurement point; state estimation; structural vulnerability; system observability; Angular velocity; Generators; Monitoring; Phasor measurement units; Power grids; Power measurement; Rotors; CPS; observation; structural vulnerability; undetectable attack (ID#: 16-10246)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7432368&isnumber=7432193
L. M. Putranto, R. Hara, H. Kita and E. Tanaka, “Risk-Based Voltage Stability Monitoring and Preventive Control Using Wide Area Monitoring System,” PowerTech, 2015 IEEE Eindhoven, Eindhoven, 2015, pp. 1-6. doi: 10.1109/PTC.2015.7232547
Abstract: Nowadays, power systems tend to be operated under heavily stressed load conditions, which can cause voltage stability problems. Moreover, the occurrence probability of contingencies is increasing due to the growth in power system size and complexity. This paper proposes a new preventive control scheme based on voltage stability and security monitoring by means of wide area monitoring systems (WAMS). The proposed control scheme ensures voltage stability under major N-1 line contingencies, which are selected from all possible N-1 contingencies considering their occurrence probability and/or resulting load curtailment. Cases based on the IEEE 57-bus test system are used to demonstrate the effectiveness of the proposed method. The demonstration results show that the proposed method can make an important contribution to improving voltage stability and security performance.
Keywords: IEEE standards; load regulation; power system control; power system economics; power system measurement; power system security; power system stability; probability; IEEE 57-bus test system; N-1 line contingency probability; WAMS; load curtailment; power system complexity; power system operation; preventive control scheme; risk-based voltage stability monitoring; security monitoring; wide area monitoring system; Fuels; Generators; Indexes; Power system stability; Security; Stability criteria; Voltage control; Economics Load Dispatch; Load Shedding; Multistage Preventive Control; Optimum Power Flow; Voltage Stability Improvement (ID#: 16-10247)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7232547&isnumber=7232233
P. Zhonghua, H. Fangyuan, Z. Yuguo and S. Dehui, “False Data Injection Attacks for Output Tracking Control Systems,” Control Conference (CCC), 2015 34th Chinese, Hangzhou, 2015, pp. 6747-6752. doi: 10.1109/ChiCC.2015.7260704
Abstract: Cyber-physical systems (CPSs) have been gaining popularity with their high potential in widespread applications, and the security of CPSs has become a pressing problem. In this paper, an output tracking control (OTC) method is designed for discrete-time linear time-invariant Gaussian systems. The output tracking error is regarded as an additional state; a Kalman filter-based incremental state observer and an LQG-based augmented state feedback control strategy are designed, and a Euclidean-distance-based detector is used for detecting false data injection attacks. Stealthy false data attacks, which can completely disrupt the normal operation of OTC systems without being detected, are injected into the sensor measurements and control commands, respectively. Three kinds of numerical examples are employed to illustrate the effectiveness of the designed false data injection attacks.
Keywords: Gaussian processes; Kalman filters; discrete time systems; linear systems; observers; security of data; sensors; state feedback; CPS security; Euclidean-based detector; Kalman filter-based incremental state observer; LQG-based augmented state feedback control strategy; OTC method; OTC systems; cyber-physical systems; discrete-time linear time-invariant Gaussian systems; false data injection attacks; output track control method; output tracking control systems; output tracking error; sensor measurements; Detectors; Robot sensing systems; Security; State estimation; State feedback; Cyber-physical systems; Kalman filter; false data injection attacks; output tracking control (ID#: 16-10248)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7260704&isnumber=7259602
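The Euclidean-distance detector the abstract mentions can be sketched in a few lines: raise an alarm when the residual between the received measurement and the observer's prediction exceeds a threshold. The vectors and threshold below are invented, and the sketch also illustrates why a stealthy injection that keeps its bias under the threshold evades this test:

```python
import math

def euclidean_detector(y_measured, y_predicted, threshold):
    """Return True (alarm) if the residual norm exceeds the threshold."""
    residual = math.sqrt(sum((m - p) ** 2
                             for m, p in zip(y_measured, y_predicted)))
    return residual > threshold

y_pred = [1.00, 2.00]                     # observer's predicted outputs
crude_attack = [4.00, 2.00]               # large injected bias
stealthy_attack = [1.05, 2.05]            # bias kept under the threshold

print(euclidean_detector(crude_attack, y_pred, threshold=0.5))     # True
print(euclidean_detector(stealthy_attack, y_pred, threshold=0.5))  # False
```

This is the weakness the paper exploits: false data shaped to stay within the detector's tolerance can steer the OTC system while every residual check passes.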
A. Basak, F. Zhang and S. Bhunia, “PiRA: IC Authentication Utilizing Intrinsic Variations in Pin Resistance,” Test Conference (ITC), 2015 IEEE International, Anaheim, CA, 2015, pp. 1-8. doi: 10.1109/TEST.2015.7342388
Abstract: The rapidly rising incidence of counterfeit Integrated Circuits (ICs), including cloning attacks, poses a significant threat to the semiconductor industry. Conventional functional/structural testing is mostly ineffective at identifying different forms of cloned ICs. On the other hand, existing design for security (DfS) measures are often not attractive due to additional design effort, hardware overhead and test cost. In this paper, we propose a novel robust IC authentication approach, referred to as PiRA, to validate the integrity of ICs in the presence of cloning attacks. It exploits intrinsic random variations in pin resistances across ICs to create unique chip-specific signatures for authentication. Pin resistance is defined as the resistance looking into or out of the pin according to set parameters and biasing conditions, measured by standard tests for IC defect/performance analysis such as input leakage, protection diode and output load current tests. A major advantage of PiRA over existing methodologies is that it incurs virtually zero design effort and overhead. Furthermore, unlike most authentication approaches, it works for all chip types, including analog/mixed-signal ICs, and can be applied to legacy designs. Theoretical analysis as well as experimental measurements with common digital and analog ICs verify the effectiveness of PiRA.
Keywords: authorisation; integrated circuit testing; mixed analogue-digital integrated circuits; semiconductor diodes; IC defect-performance analysis; PiRA; analog-mixed-signal IC; biasing conditions; chip-specific signatures; cloning attacks; counterfeit integrated circuits; design for security; functional-structural testing; input leakage intrinsic random variations; intrinsic variations; output load current tests; pin resistance; protection diode; robust IC authentication approach; semiconductor industry; set parameters; Authentication; Cloning; Current measurement; Electrical resistance measurement; Integrated circuits; Resistance; Semiconductor device measurement (ID#: 16-10249)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7342388&isnumber=7342364
M. Shiozaki, T. Kubota, T. Nakai, A. Takeuchi, T. Nishimura and T. Fujino, “Tamper-Resistant Authentication System with Side-Channel Attack Resistant AES and PUF Using MDR-ROM,” 2015 IEEE International Symposium on Circuits and Systems (ISCAS), Lisbon, 2015, pp. 1462-1465. doi: 10.1109/ISCAS.2015.7168920
Abstract: As threats to security devices, side-channel attacks (SCAs) and invasive attacks have been identified in the last decade. An SCA reveals the secret key of a cryptographic circuit by measuring power consumption or electromagnetic radiation during cryptographic operations. We have proposed the MDR-ROM scheme as a low-power, small-area countermeasure against SCAs. Meanwhile, secret data in nonvolatile memory can be analyzed by invasive attacks, and the cryptographic device counterfeited and cloned by an adversary. We proposed combining the MDR-ROM scheme with the Physical Unclonable Function (PUF) technique, which is expected to serve as a countermeasure against counterfeiting, and a prototype chip was fabricated with 180nm CMOS technology. In addition, a keyless entry demonstration system was produced in order to present the effectiveness of the SCA resistance and the PUF technique. Our experiments confirmed that this demonstration system achieved sufficient tamper resistance.
Keywords: CMOS integrated circuits; cryptography; random-access storage; read-only storage; 180nm CMOS technology; AES; MDR-ROM scheme; PUF; SCA; cryptographic circuit; cryptographic operations; electromagnetic radiation measurement; invasive attacks; low-power counter-measure; nonvolatile memory; physical unclonable function technique; power consumption measurement; secret key; security devices; side-channel attack resistant; small-area counter-measure; tamper-resistant authentication system; Authentication; Correlation; Cryptography; Large scale integration; Power measurement; Read only memory; Resistance; IO-masked dual-rail ROM (MDR-ROM); Side channel attacks (SCA); physical unclonable function (PUF); tamper-resistant authentication system (ID#: 16-10250)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7168920&isnumber=7168553
S. Alsemairi and M. Younis, “Clustering-Based Mitigation of Anonymity Attacks in Wireless Sensor Networks,” 2015 IEEE Global Communications Conference (GLOBECOM), San Diego, CA, 2015, pp. 1-7. doi: 10.1109/GLOCOM.2015.7417501
Abstract: The use of wireless sensor networks (WSNs) can be advantageous in applications that serve in hostile environments, such as security surveillance and the military battlefield. The operation of a WSN typically involves collection of sensor measurements at an in-situ Base-Station (BS) that further processes the data and either takes action or reports findings to a remote command center. The BS thus plays a vital role and is usually guarded by concealing its identity and location. However, the BS can be susceptible to traffic analysis attacks. Given the limited communication range of the individual sensors and the objective of conserving their energy supply, the sensor readings are forwarded to the BS over multi-hop paths. Such a routing topology allows an adversary to correlate intercepted transmissions, even without being able to decode them, and apply attack models such as Evidence Theory (ET) in order to determine the position of the BS. This paper proposes a technique to counter such an attack by reshaping the routing topology. Basically, the nodes in a WSN are grouped into unevenly sized clusters, and each cluster has a designated aggregation node (cluster head). Inter-cluster-head routes are then formed so that the BS experiences low traffic volume and does not become distinguishable among the WSN nodes. The simulation results confirm the effectiveness of the proposed technique in boosting the anonymity of the BS.
Keywords: military communication; telecommunication network routing; telecommunication traffic; wireless sensor networks; WSN nodes; anonymity attacks; clustering-based mitigation; evidence theory; in-situ base-station; military battlefield; security surveillance; Measurement; Optimized production technology; Receivers; Routing; Security; Topology; Wireless sensor networks (ID#: 16-10251)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7417501&isnumber=7416057
Wei Li, Weiyi Qian and Mingqiang Yin, “Portfolio Selection Models in Uncertain Environment,” Fuzzy Systems and Knowledge Discovery (FSKD), 2015 12th International Conference on, Zhangjiajie, 2015, pp. 471-475. doi: 10.1109/FSKD.2015.7381988
Abstract: It is difficult for previous data to reflect security returns in portfolio selection (PS) problems. To overcome this, we treat security returns as uncertain variables. In this paper, two portfolio selection models are presented for an uncertain environment. To express divergence, the cross-entropy of uncertain variables is introduced into these mathematical models. In both models, expected value expresses the investment return, while variance or semivariance, respectively, expresses the risk. The mathematical models are solved by the gravitational search algorithm proposed by E. Rashedi. We apply the proposed models to two examples to exhibit their effectiveness and correctness.
Keywords: entropy; investment; search problems; gravitation search algorithm; investment return; mathematical models; portfolio selection models; uncertain environment; uncertain variables cross-entropy; Force; Investment; Mathematical model; Measurement uncertainty; Portfolios; Security; Uncertainty; cross-entropy; gravitation search algorithm; portfolio selection problem; uncertain measure (ID#: 16-10252)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7381988&isnumber=7381900
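The expected-value/variance trade-off underlying these models can be illustrated with ordinary sample statistics. The paper itself works with uncertain variables and cross-entropy; the sketch below is a simplified stand-in with invented return scenarios for two securities:

```python
# Per-scenario return vectors for two securities, treated as equally likely;
# all numbers are invented for illustration.
scenarios = [[0.10, 0.02], [-0.05, 0.03], [0.20, 0.01]]

def portfolio_stats(weights, scenarios):
    """Expected value (return) and variance (risk) of the portfolio."""
    rets = [sum(w * r for w, r in zip(weights, s)) for s in scenarios]
    mean = sum(rets) / len(rets)
    var = sum((r - mean) ** 2 for r in rets) / len(rets)
    return mean, var

mean, var = portfolio_stats([0.5, 0.5], scenarios)
print(round(mean, 4), var > 0)  # 0.0517 True
```

A portfolio selection model then maximizes the expected value while penalizing the variance (or, in the paper's second model, the semivariance, which counts only downside deviations).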
J. R. Ward and M. Younis, “A Cross-Layer Defense Scheme for Countering Traffic Analysis Attacks in Wireless Sensor Networks,” Military Communications Conference, MILCOM 2015 - 2015 IEEE, Tampa, FL, 2015, pp. 972-977. doi: 10.1109/MILCOM.2015.7357571
Abstract: In most Wireless Sensor Network (WSN) applications the sensors forward their readings to a central sink or base station (BS). The unique role of the BS makes it a natural target for an adversary's attack. Even if a WSN employs conventional security mechanisms such as encryption and authentication, an adversary may apply traffic analysis techniques to locate the BS. This motivates a significant need for improved BS anonymity to protect the identity, role, and location of the BS. Published anonymity-boosting techniques mainly focus on a single layer of the communication protocol stack and assume that changes in the protocol operation will not be detectable. In fact, existing single-layer techniques may not be able to protect the network if the adversary could guess what anonymity measure is being applied by identifying which layer is being exploited. In this paper we propose combining physical-layer and network-layer techniques to boost the network resilience to anonymity attacks. Our cross-layer approach avoids the shortcomings of the individual single-layer schemes and allows a WSN to effectively mask its behavior and simultaneously misdirect the adversary's attention away from the BS's location. We confirm the effectiveness of our cross-layer anti-traffic analysis measure using simulation.
Keywords: cryptographic protocols; telecommunication security; telecommunication traffic; wireless sensor networks; WSN; anonymity-boosting techniques; authentication; base station; central sink; communication protocol; cross-layer defense scheme; encryption; network-layer techniques; physical-layer techniques; single-layer techniques; traffic analysis attacks; traffic analysis techniques; Array signal processing; Computer security; Measurement; Protocols; Sensors; Wireless sensor networks; anonymity; location privacy (ID#: 16-10253)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357571&isnumber=7357245
C. Moreno, S. Kauffman and S. Fischmeister, “Efficient Program Tracing and Monitoring Through Power Consumption — with a Little Help from the Compiler,” 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE), Dresden, Germany, 2016, pp. 1556-1561. doi: (not included).
Abstract: Ensuring correctness and enforcing security are growing concerns given the complexity of modern connected devices and safety-critical systems. A promising approach is non-intrusive runtime monitoring through reconstruction of program execution traces from power consumption measurements. This can be used for verification, validation, debugging, and security purposes. In this paper, we propose a framework for increasing the effectiveness of power-based program tracing techniques. These systems determine the most likely block of source code that produced an observed power trace (CPU power consumption as a function of time). Our framework maximizes distinguishability between power traces for different code blocks. To this end, we provide a special compiler optimization stage that reorders intermediate representation (IR) and determines the reorderings that lead to power traces with highest distances between each other, thus reducing the probability of misclassification. Our work includes an experimental evaluation, using LLVM for an ARM architecture. Experimental results confirm the effectiveness of our technique.
Keywords: optimisation; power consumption; probability; program compilers; program diagnostics; safety-critical software; IR; compiler optimization stage; distinguishability maximization; intermediate representation; misclassification probability; power consumption measurement; program compiler; program execution trace reconstruction; program monitoring; program tracing; safety-critical system; Electronic mail; Monitoring; Optimization; Power demand; Power measurement; Security; Training (ID#: 16-10254)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7459561&isnumber=7459269
L. Zhang, D. Chen, Y. Cao and X. Zhao, “A Practical Method to Determine Achievable Rates for Secure Steganography,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1274-1281. doi: 10.1109/HPCC-CSS-ICESS.2015.62
Abstract: With a chosen steganographic method and a cover image, the steganographer always hesitates about how many bits should be embedded. Though there have been works on theoretical capacity analysis, it is still difficult to apply them in practice. In this paper, we propose a practical method to determine the appropriate hiding rate for a cover image with the purpose of evading possible statistical detection. The core of this method is a non-linear regression, which is used to learn the mapping between the detection rate and the estimated rate with respect to a specific steganographic method. In order to deal with images with different visual contents, multiple regression functions are trained based on image groups with different texture complexity levels. To demonstrate the effectiveness of the proposed method, estimators are constructed for selected steganographic algorithms in both the spatial and JPEG transform domains.
Keywords: image watermarking; regression analysis; steganography; transforms; JPEG transform domain; multiple regression function; nonlinear regression method; secure steganography; specific steganographic method; statistical detection; texture complexity level; theoretical capacity analysis; Complexity theory; Entropy; Measurement; Payloads; Security; Transform coding; Yttrium; capacity analysis; estimated rate; non-linear regression (ID#: 16-10255)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336343&isnumber=7336120
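The core idea of the abstract (learn a regression from embedding rate to detection rate, then invert it to pick a safe hiding rate) can be sketched without any steganographic machinery. Everything below is an invented stand-in for the paper's trained regressors: a logistic curve fitted by a crude, dependency-free grid search over four made-up (rate, detection) points:

```python
import math

def logistic(rate, a, b):
    """Monotone model of detection rate as a function of embedding rate."""
    return 1.0 / (1.0 + math.exp(-(a * rate + b)))

def fit(points):
    """Least-squares grid search over (a, b) -- crude but dependency-free."""
    best, best_err = None, float("inf")
    for a10 in range(1, 200):
        for b10 in range(-100, 1):
            a, b = a10 / 10.0, b10 / 10.0
            err = sum((logistic(r, a, b) - d) ** 2 for r, d in points)
            if err < best_err:
                best, best_err = (a, b), err
    return best

# Invented (embedding rate in bpp, observed steganalyzer detection rate).
points = [(0.05, 0.08), (0.1, 0.15), (0.2, 0.45), (0.4, 0.9)]
a, b = fit(points)

# Invert the model: largest rate whose predicted detection stays under 20%.
safe = max(r / 100.0 for r in range(1, 101)
           if logistic(r / 100.0, a, b) < 0.2)
print(0.0 < safe < 0.4)   # recommended hiding rate is well below 0.4 bpp
```

The paper's refinement is to train one such regressor per texture-complexity group, since busy textures tolerate higher rates before detection becomes likely.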
N. J. Ahuja and I. Singh, “Innovative Road Map for Leveraging ICT Enabled Tools for Energy Efficiency — from Awareness to Adoption,” Advances in Computing and Communication Engineering (ICACCE), 2015 Second International Conference on, Dehradun, 2015, pp. 702-707. doi: 10.1109/ICACCE.2015.45
Abstract: Educating users about energy-efficiency measures at the grassroots level, from awareness to adoption, is the need of the hour and a significant step towards energy security. The present work proposes a project-oriented roadmap for this purpose. The approach begins with a pre-survey of energy users to understand their awareness level and current energy consumption patterns, and to ascertain their proposed level of adoption of innovative energy-efficiency measures. It also assesses their interest in different IT tools and mechanisms, including their interface design preferences. Material custom-tailored to the needs of the users is proposed to be delivered through the identified IT methods. A post-survey done after an active IT intervention period is intended to bring out the variation from the pre-survey. Finally, the use of analytical tools in the concluding phase judges the interventions' effectiveness in terms of awareness generation, technology adoption level, change in energy consumption patterns, and energy savings.
Keywords: energy conservation; energy consumption; power aware computing; power engineering computing; user interfaces; ICT enabled tool; energy consumption pattern; energy efficiency; energy security; innovative road map; interface design preference; project-oriented approach; Current measurement; Energy consumption; Energy efficiency; Energy measurement; Mobile applications; Portals; Training; Computer Based Training; Energy Efficiency; ICT adoption; Mobile applications; Web-based Applications (ID#: 16-10256)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7306773&isnumber=7306547
M. J. F. Alenazi and J. P. G. Sterbenz, “Comprehensive Comparison and Accuracy of Graph Metrics in Predicting Network Resilience,” Design of Reliable Communication Networks (DRCN), 2015 11th International Conference on the, Kansas City, MO, 2015, pp. 157-164. doi: 10.1109/DRCN.2015.7149007
Abstract: Graph robustness metrics have been used widely to study the behavior of communication networks in the presence of targeted attacks and random failures. Several researchers have proposed new graph metrics to better predict network resilience and survivability against such attacks. Most of these metrics have been compared to a few established graph metrics for evaluating their effectiveness in measuring network resilience. In this paper, we perform a comprehensive comparison of the most commonly used graph robustness metrics. First, we show how each metric is determined and calculate its values for baseline graphs. Using several types of random graphs, we study the accuracy of each robustness metric in predicting network resilience against centrality-based attacks. The results support three conclusions. First, our path diversity metric has the highest accuracy in predicting network resilience for structured baseline graphs. Second, the variance of node-betweenness centrality mostly has the best accuracy in predicting network resilience for Waxman random graphs. Third, path diversity, network criticality, and effective graph resistance have high accuracy in measuring network resilience for Gabriel graphs.
Keywords: graph theory; telecommunication network reliability; telecommunication security; Gabriel graphs; Waxman random graphs; baseline graphs; centrality-based attacks; communication network behavior; comprehensive comparison; effective graph resistance; graph robustness metrics accuracy; network criticality; network resilience measurement; network resilience prediction; node-betweenness centrality variance; path diversity metric; random failures; survivability prediction; targeted attacks; Accuracy; Communication networks; Joining processes; Measurement; Resilience; Robustness; Connectivity evaluation; Fault tolerance; Graph robustness; Graph spectra; Network design; Network resilience; Network science; Reliability; Survivability (ID#: 16-10257)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7149007&isnumber=7148972
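One of the metrics compared above, effective graph resistance, has a compact spectral form: R = n · Σ 1/μ_i over the non-zero eigenvalues of the graph Laplacian, with lower R indicating a more robust topology. A minimal sketch, assuming NumPy is available and using two tiny invented graphs (a 3-node path versus a triangle, which adds one redundant link):

```python
import numpy as np

def effective_graph_resistance(adj):
    """R = n * sum(1/mu_i) over non-zero Laplacian eigenvalues
    (graph assumed connected, so exactly one eigenvalue is zero)."""
    A = np.array(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    mu = np.sort(np.linalg.eigvalsh(L))     # eigenvalues, ascending
    return len(A) * np.sum(1.0 / mu[1:])    # drop the single zero eigenvalue

path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]    # path on 3 nodes
tri = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]     # triangle
print(effective_graph_resistance(tri) < effective_graph_resistance(path))  # True
```

The extra edge in the triangle roughly halves R here (2 versus 4), matching the intuition that redundant links improve resilience to single-link failures.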
J. Wang, M. Zhao, Q. Zeng, D. Wu and P. Liu, “Risk Assessment of Buffer ‘Heartbleed’ Over-Read Vulnerabilities,” 2015 45th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, Rio de Janeiro, 2015, pp. 555-562. doi: 10.1109/DSN.2015.59
Abstract: Buffer over-read vulnerabilities (e.g., Heartbleed) can lead to serious information leakage and monetary loss. Most previous approaches focus on buffer overflow (i.e., over-write) and are either infeasible (e.g., canary) or impractical (e.g., bounds checking) in dealing with over-read vulnerabilities. As an emerging type of vulnerability, buffer over-read requires in-depth understanding: the vulnerability itself, the security risk, and the defense methods. This paper presents a systematic methodology to evaluate the potential risks of unknown buffer over-read vulnerabilities. Specifically, we model buffer over-read vulnerabilities and focus on quantifying how much information can potentially be leaked. We perform risk assessment using the RUBiS benchmark, an auction site prototype modeled after eBay.com. We evaluate the effectiveness and performance of a few mitigation techniques and conduct a quantitative risk measurement study. We find that even simple techniques can achieve significant reduction in information leakage against over-reads with a reasonable performance penalty. We summarize the experience learned from the study, hoping to facilitate further studies on the over-read vulnerability.
Keywords: Internet; risk management; security of data; Heartbleed; buffer over-read vulnerabilities; defense method; information leakage; monetary lost; risk assessment; security risk; vulnerability method; Benchmark testing; Entropy; Heart rate variability; Measurement; Memory management; Payloads; Risk management (ID#: 16-10258)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266882&isnumber=7266818
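The quantification idea in this abstract (measure how much information an over-read leaks) can be illustrated with Shannon entropy over the bytes returned beyond the intended bound. The memory layout and lengths below are invented, loosely mimicking a Heartbleed-style response that copies past the end of a request buffer:

```python
import math

def shannon_entropy(data):
    """Bits of entropy per byte in `data` (empirical byte distribution)."""
    counts = {}
    for b in data:
        counts[b] = counts.get(b, 0) + 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Invented process memory: a request followed by an adjacent secret.
memory = b"GET /img HTTP/1.1" + b"secret-session-token-0042" + b"\x00" * 8
request_len = 17               # bytes the peer should receive
overread_len = 40              # bytes actually copied back (over-read)

leaked = memory[request_len:overread_len]
print(len(leaked))             # 23 bytes of adjacent secret are exposed
```

A risk assessment in this spirit would repeat such measurements over many over-read lengths and memory states to estimate the expected leakage, which is what makes the comparison of mitigation techniques quantitative.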
K. Z. Ye, E. M. Portnov, L. G. Gagarina and K. Z. Lin, “Method for Increasing Reliability for Transmission State of Power Equipment Energy,” 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Orlando, FL, 2015, pp. 433-437. doi: 10.1109/GlobalSIP.2015.7418232
Abstract: In this paper, the problems of transmitting trustworthy monitoring and control signals through the communication channels of sophisticated telemechanics systems using IEC 60870-5-101 (104) are discussed. A mathematically justified discrepancy between the concepts of “information veracity” and “information protection from noise in the communication channel” is shown. Principles of combined encoding that ensure a high level of veracity for energy supply systems are proposed. The paper also presents a methodology for estimating the veracity of information signals in telemechanics systems and the results of experimental studies of the proposed encoding principles' effectiveness.
Keywords: IEC standards; encoding; power apparatus; power system measurement; power transmission control; power transmission reliability; protocols; security of data; IEC 60870-5-101 (104); combined encoding; communication channels; control signal transmission; energy supply; information protection; information signal veracity; monitoring signal transmission; power equipment energy transmission state reliability; telemechanics systems; Communication channels; Distortion; Distortion measurement; Encoding; IEC Standards; Information processing; Probability; biimpulse conditionally correlational code; communication channel; information veracity; protocol IEC 608705-101 (104); reliability; telemechanics system (ID#: 16-10259)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7418232&isnumber=7416920
I. Kiss, B. Genge, P. Haller and G. Sebestyén, “A Framework for Testing Stealthy Attacks in Energy Grids,” Intelligent Computer Communication and Processing (ICCP), 2015 IEEE International Conference on, Cluj-Napoca, 2015, pp. 553-560. doi: 10.1109/ICCP.2015.7312718
Abstract: The progressive integration of traditional Information and Communication Technologies (ICT) hardware and software into the supervisory control of modern Power Grids (PG) has given birth to a unique technological ecosystem. Modern ICT handles a wide variety of advantageous services in PG, but in turn exposes PG to significant cyber threats. To ensure security, PG use various anomaly detection modules to detect the malicious effects of cyber attacks. In many reported cases, newly emerged targeted cyber-physical attacks remain stealthy even in the presence of anomaly detection systems. In this paper we present a framework for elaborating stealthy attacks against the critical infrastructure of power grids. Using the proposed framework, experts can verify the effectiveness of the applied anomaly detection systems (ADS) in either real or simulated environments. The novelty of the technique lies in the fact that the developed “smart” power grid cyber attack (SPGCA) first reveals the devices which can be compromised while causing only a limited effect observed by ADS and PG operators. Compromising low-impact devices first drives the PG into a more sensitive and near-unstable state, which leads to high damage when the attacker finally compromises high-impact devices, e.g., breaking high-demand power lines to cause a blackout. The presented technique should be used to strengthen the deployment of ADS and to define various security zones to defend PG against such intelligent cyber attacks. Experimental results based on the IEEE 14-bus electricity grid model demonstrate the effectiveness of the framework.
Keywords: computer network security; power engineering computing; power system control; power system reliability; power system simulation; smart power grids; ADS; ICT hardware; IEEE 14-bus electricity grid model; PG operators; SPGCA; anomaly detection modules; anomaly detection systems; cyber threats; cyber-physical attacks; energy grids; information and communication technologies; intelligent cyber attacks; power grids; power lines; smart power grid cyber attack; stealthy attacks; supervisory control; Actuators; Phasor measurement units; Power grids; Process control; Sensors; Voltage measurement; Yttrium; Anomaly Detection; Control Variable; Cyber Attack; Impact Assessment; Observed Variable; Power Grid (ID#: 16-10260)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7312718&isnumber=7312586
X. Lu, S. Wang, W. Li, P. Jiang and C. Zhang, “Development of a WSN Based Real Time Energy Monitoring Platform for Industrial Applications,” Computer Supported Cooperative Work in Design (CSCWD), 2015 IEEE 19th International Conference on, Calabria, 2015, pp. 337-342. doi: 10.1109/CSCWD.2015.7230982
Abstract: In recent years, significantly increasing pressure from both energy prices and the scarcity of energy resources has dramatically raised sustainability awareness in the industrial sector, where effective energy-efficient process planning and scheduling are urgently demanded. In response to this trend, the development of a low-cost, high-accuracy, flexible, and distributed real-time energy monitoring platform is imperative. This paper presents the design, implementation, and testing of a remote energy monitoring system to support energy-efficient sustainable manufacturing in an industrial workshop, based on a hierarchical network architecture that integrates WSNs and Internet communication into a knowledge and information services platform. To verify the feasibility and effectiveness of the proposed system, it was implemented on a real shop floor and evaluated with various production processes. The results showed that the proposed system is of practical significance in discovering energy relationships between manufacturing processes, which can be used to support machining scheme selection, energy-saving discovery, and energy quota allocation on a shop floor.
Keywords: Internet; energy conservation; information services; machining; manufacturing processes; power engineering computing; power system measurement; pricing; sustainable development; wireless sensor networks; Internet communication; WSN based real time energy monitoring platform; energy efficient process planning; energy efficient process scheduling; energy price; energy quota allocation; energy resource scarcity; energy saving discovery; industrial applications; information services platform; machining scheme selection; manufacturing process; sustainability awareness; wireless sensor network; Communication system security; Electric variables measurement; Manufacturing; Monitoring; Planning; Wireless communication; Wireless sensor networks; Cloud service; Wireless sensor network; energy monitoring; sustainable manufacturing (ID#: 16-10261)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7230982&isnumber=7230917
P. H. Yang and S. M. Yen, “Memory Attestation of Wireless Sensor Nodes by Trusted Local Agents,” Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 82-89. doi: 10.1109/Trustcom.2015.360
Abstract: Wireless Sensor Networks (WSNs) have been deployed for a wide variety of commercial, scientific, and military applications for the purposes of surveillance and critical data collection. Malicious code injection is a serious threat to sensor nodes, enabling fake data delivery or private data disclosure. Memory attestation, used to verify the integrity of a device's firmware, is a potential solution to this threat; among such techniques, low-cost software-based schemes are particularly suitable for protecting resource-constrained sensor nodes. Unfortunately, software-based attestation usually requires additional mechanisms to provide reliable protection when the sensor nodes communicate with the verifier via multiple hops. Alternative hardware-based attestation (e.g., TPM) guarantees a reliable integrity measurement but is impractical for WSN applications, primarily due to the high computational overhead and hardware cost. This paper proposes a lightweight hardware-based memory attestation scheme employing a simple tamper-resistant trusted local agent which is free from any cryptographic computation and is particularly suitable for sensor nodes. The experimental results show the effectiveness of the proposed scheme.
Keywords: cryptography; firmware; telecommunication network reliability; telecommunication security; wireless sensor networks; WSN; computational overhead; cryptographic computation; device firmware; fake data delivery; hardware cost; hardware-based attestation; lightweight hardware-based memory attestation; low cost software-based schemes; malicious code injection; private data disclosure; reliable integrity measurement; reliable protection; resource-constraint sensor nodes; simple tamper-resistant trusted local agent; software-based attestation; trusted local agents; wireless sensor nodes; Base stations; Clocks; Hardware; Protocols; Security; Wireless sensor networks; Attestation; malicious code; trusted platform (ID#: 16-10262)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345268&isnumber=7345233
Effectiveness and Work Factor Metrics 2015 – 2016 (Part 2) |
Measurement to determine the effectiveness of security systems is an essential element of the Science of Security. The work cited here was presented in 2015 and 2016.
J. R. Ward and M. Younis, “A Cross-Layer Distributed Beamforming Approach to Increase Base Station Anonymity in Wireless Sensor Networks,” 2015 IEEE Global Communications Conference (GLOBECOM), San Diego, CA, 2015, pp. 1-7. doi: 10.1109/GLOCOM.2015.7417430
Abstract: In most applications of wireless sensor networks (WSNs), nodes act as data sources and forward measurements to a central base station (BS) that may also perform network management tasks. The critical role of the BS makes it a target for an adversary's attack. Even if a WSN employs conventional security primitives such as encryption and authentication, an adversary can apply traffic analysis techniques to find the BS. Therefore, the BS should be kept anonymous to protect its identity, role, and location. Previous work has demonstrated distributed beamforming to be an effective technique for boosting BS anonymity in WSNs; however, implementing distributed beamforming requires significant coordination messaging that increases transmission activity and alerts the adversary to the possibility of deceptive activities. In this paper we present a novel cross-layer design that integrates the control traffic of distributed beamforming with the MAC protocol in order to boost BS anonymity while keeping node transmissions at a normal rate. The advantages of our approach include minimizing the overhead of anonymity measures and lowering transmission power throughout the network, which leads to increased spectrum efficiency and reduced energy consumption. The simulation results confirm the effectiveness of our cross-layer design.
Keywords: access protocols; array signal processing; wireless sensor networks; MAC protocol; WSN; base station anonymity; central base station; cross-layer distributed beamforming approach; Array signal processing; Media Access Protocol; Schedules; Security; Synchronization; Wireless sensor networks (ID#: 16-10263)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7417430&isnumber=7416057
A. Chahar, S. Yadav, I. Nigam, R. Singh and M. Vatsa, “A Leap Password Based Verification System,” Biometrics Theory, Applications and Systems (BTAS), 2015 IEEE 7th International Conference on, Arlington, VA, 2015, pp. 1-6. doi: 10.1109/BTAS.2015.7358745
Abstract: Recent developments in three-dimensional sensing devices have led to the proposal of a number of biometric modalities for non-critical scenarios. The Leap Motion device has received attention from the vision and biometrics communities due to its high-precision tracking. In this research, we propose Leap Password, a novel approach for biometric authentication. A Leap Password consists of a string of successive gestures performed by the user, during which physiological as well as behavioral information is captured. The Conditional Mutual Information Maximization algorithm selects the optimal feature set from the extracted information. Match-score fusion is performed to reconcile information from multiple classifiers. Experiments are performed on the Leap Password Dataset, which consists of over 1700 samples obtained from 150 subjects. An accuracy of over 81% is achieved, which shows the effectiveness of the proposed approach.
Keywords: biometrics (access control); feature selection; gesture recognition; image fusion; optimisation; security of data; 3D sensing devices; Leap Motion device; Leap Password based verification system; biometric authentication; conditional mutual information maximization algorithm; gestures; high precision tracking; match-score fusion; optimal feature set selection; Feature extraction; Performance evaluation; Physiology; Sensors; Spatial resolution; Three-dimensional displays; Time measurement (ID#: 16-10264)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7358745&isnumber=7358743
J. Pang and Y. Zhang, “Event Prediction with Community Leaders,” Availability, Reliability and Security (ARES), 2015 10th International Conference on, Toulouse, 2015, pp. 238-243. doi: 10.1109/ARES.2015.24
Abstract: With the emergence of online social network services, quantitative studies of social influence have become achievable. Leadership is one of the most intuitive and common forms of social influence, and understanding it could enable appealing applications such as targeted advertising and viral marketing. In this work, we focus on investigating leaders' influence for event prediction in social networks. We propose an algorithm, based on the events that users conduct, to discover leaders in social communities. Analysis of the leaders found in a real-life social network dataset leads to several interesting observations; for instance, leaders do not have a significantly higher number of friends but are more active than other community members. We demonstrate the effectiveness of leaders' influence on users' behaviors through learning tasks: given that a leader has conducted an event, predicting whether and when a user will perform the same event. Experimental results show that with only a few leaders in a community, event predictions are always very effective.
Keywords: social networking (online); community leaders; event prediction; leadership; online social network services; real-life social network dataset; social influence; Entropy; Measurement; Prediction algorithms; Reliability; Social network services; Testing; Training (ID#: 16-10265)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299921&isnumber=7299862
H. Pazhoumand-Dar, M. Masek and C. P. Lam, “Unsupervised Monitoring of Electrical Devices for Detecting Deviations in Daily Routines,” 2015 10th International Conference on Information, Communications and Signal Processing (ICICS), Singapore, 2015, pp. 1-6. doi: 10.1109/ICICS.2015.7459849
Abstract: This paper presents a novel approach for automatic detection of abnormal behaviours in the daily routine of people living alone in their homes, without any manual labelling of the training dataset. The regularity and frequency of activities are monitored by estimating the status of specific electrical appliances via their power signatures, identified from the composite power signal of the house. A novel unsupervised clustering technique is presented to automatically profile the power signatures of electrical devices. Then, the use of a test statistic is proposed to distinguish power signatures resulting from occupant interactions from those of self-regulated appliances such as refrigerators. Experiments on real-world data showed the effectiveness of the proposed approach in detecting the occupant's interactions with appliances as well as identifying days on which the occupant's behaviour fell outside the normal pattern.
Keywords: Monitoring; Power demand; Power measurement; Reactive power; Refrigerators; Training; abnormality detection; behaviour monitoring; power sensor; statistical measures (ID#: 16-10266)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7459849&isnumber=7459813
N. Sae-Bae and N. Memon, “Quality of Online Signature Templates,” Identity, Security and Behavior Analysis (ISBA), 2015 IEEE International Conference on, Hong Kong, 2015, pp. 1-8. doi: 10.1109/ISBA.2015.7126354
Abstract: This paper proposes a metric to measure the quality of an online signature template derived from a set of enrolled signature samples in terms of its distinctiveness against random signatures. Particularly, the proposed quality score is computed based on statistical analysis of histogram features that are used as part of an online signature representation. Experiments performed on three datasets consistently confirm the effectiveness of the proposed metric as an indication of false acceptance rate against random forgeries when the system is operated at a particular decision threshold. Finally, the use of the proposed quality metric to enforce a minimum signature strength policy in order to enhance security and reliability of the system against random forgeries is demonstrated.
Keywords: counterfeit goods; digital signatures; feature extraction; random processes; statistical analysis; decision threshold; false acceptance rate; histogram features; online signature representation; online signature template quality; quality metric; quality score; random forgeries; random signatures; signature strength policy; Biometrics (access control); Forgery; Histograms; Measurement; Sociology; Standards (ID#: 16-10267)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7126354&isnumber=7126341
M. Rezvani, A. Ignjatovic, E. Bertino and S. Jha, “A Collaborative Reputation System Based on Credibility Propagation in WSNs,” Parallel and Distributed Systems (ICPADS), 2015 IEEE 21st International Conference on, Melbourne, VIC, 2015, pp. 1-8. doi: 10.1109/ICPADS.2015.9
Abstract: Trust and reputation systems are widely employed in WSNs to support decision-making processes by assessing the trustworthiness of sensor nodes in a data aggregation process. However, in unattended and hostile environments, sophisticated malicious attacks, such as collusion attacks, can distort the computed trust scores, lead to low-quality or deceptive service, and undermine the aggregation results. In this paper we propose a novel, local, collaboration-based trust framework for WSNs built on the concept of credibility propagation, which we introduce. In our approach, the trustworthiness of a sensor node depends on the amount of credibility that the node receives from other nodes. In the process we also obtain an estimate of each sensor's variance, which allows us to estimate the true value of the signal using Maximum Likelihood Estimation. Extensive experiments using both real-world and synthetic datasets demonstrate the efficiency and effectiveness of our approach.
Keywords: decision making; maximum likelihood estimation; telecommunication security; wireless sensor networks; WSN; collaborative reputation system; collaborative-based trust framework; credibility propagation; data aggregation process; reputation systems; sensor nodes; trust systems; Aggregates; Collaboration; Computer science; Maximum likelihood estimation; Robustness; Temperature measurement; Wireless sensor networks; collusion attacks; data aggregation; iterative filtering; reputation system (ID#: 16-10268)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7384212&isnumber=7384203
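The maximum-likelihood step the abstract alludes to can be sketched as an iterative variance-weighted average; this is a minimal sketch under Gaussian-noise assumptions, and the credibility-propagation machinery itself is omitted (all names and data are illustrative):

```python
# Minimal sketch: iteratively estimate the true signal as a variance-weighted
# mean of sensor readings, re-estimating each sensor's error variance.
def aggregate(readings, iters=5):
    # readings: list of per-sensor lists of measurements (same length each)
    n = len(readings[0])
    estimate = [sum(r[t] for r in readings) / len(readings) for t in range(n)]
    for _ in range(iters):
        # Re-estimate each sensor's error variance against the current estimate.
        variances = [
            max(sum((r[t] - estimate[t]) ** 2 for t in range(n)) / n, 1e-9)
            for r in readings
        ]
        weights = [1.0 / v for v in variances]
        wsum = sum(weights)
        # MLE of the true signal under Gaussian noise: variance-weighted mean.
        estimate = [
            sum(w * r[t] for w, r in zip(weights, readings)) / wsum
            for t in range(n)
        ]
    return estimate

good = [[10.1, 12.0], [9.9, 11.9]]   # low-noise sensors
bad = [[15.0, 7.0]]                  # a colluding or faulty sensor
est = aggregate(good + bad)
assert all(abs(e - t) < 0.5 for e, t in zip(est, [10.0, 12.0]))
```

The noisy sensor's large residuals inflate its estimated variance, so its weight shrinks across iterations and the estimate converges toward the honest readings.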
X. Qu, S. Kim, D. Atnafu and H. J. Kim, “Weighted Sparse Representation Using a Learned Distance Metric for Face Recognition,” Image Processing (ICIP), 2015 IEEE International Conference on, Quebec City, QC, 2015, pp. 4594-4598. doi: 10.1109/ICIP.2015.7351677
Abstract: This paper presents a novel weighted sparse representation classification for face recognition with a learned distance metric (WSRC-LDM), which learns a Mahalanobis distance to calculate the weights and code the testing face. The Mahalanobis distance is learned using information-theoretic metric learning (ITML), which helps define a better weight for use in WSRC. Meanwhile, the learned distance metric takes advantage of the classification rule of SRC, which helps the proposed method classify more accurately. Extensive experiments verify the effectiveness of the proposed method.
Keywords: face recognition; image representation; information theory; learning (artificial intelligence); ITML; Mahalanobis distance; WSRC-LDM; information-theoretic metric learning; learned distance metric; weighted sparse representation; Encoding; Face; Face recognition; Image reconstruction; Measurement; Testing; Training; Face Recognition; Metric Learning; Weighted Sparse Representation Classification (ID#: 16-10269)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7351677&isnumber=7350743
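The weighting idea can be sketched as follows, with a hand-fixed diagonal metric standing in for the ITML-learned Mahalanobis matrix (all names and values are illustrative; the sparse-coding step itself is omitted):

```python
import math

# Assumed "learned" metric: diagonal for simplicity; ITML would learn a full
# positive-definite matrix from labeled pairs.
M_diag = [2.0, 0.5]

def mahalanobis(x, y):
    # Distance under the diagonal metric: sqrt(sum_i m_i * (x_i - y_i)^2).
    return math.sqrt(sum(m * (a - b) ** 2 for m, a, b in zip(M_diag, x, y)))

def weights(test, train):
    # Closer training samples (under the learned metric) get larger weights.
    d = [mahalanobis(test, t) for t in train]
    w = [math.exp(-di) for di in d]
    s = sum(w)
    return [wi / s for wi in w]

train = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]]
w = weights([0.1, 0.1], train)
assert w[0] > w[1] > w[2]               # nearer faces weigh more
assert abs(sum(w) - 1.0) < 1e-9         # weights normalized
```

In the full method these weights would scale the sparse-coding penalty so that visually similar training faces dominate the reconstruction of the test face.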
B. Niu, S. Gao, F. Li, H. Li and Z. Lu, “Protection of Location Privacy in Continuous LBSs Against Adversaries with Background Information,” 2016 International Conference on Computing, Networking and Communications (ICNC), Kauai, HI, 2016, pp. 1-6. doi: 10.1109/ICCNC.2016.7440649
Abstract: Privacy issues in continuous Location-Based Services (LBSs) have gained considerable attention in the literature over recent years. In this paper, we illustrate the limitations of existing work and define an entropy-based privacy metric to quantify the privacy degree based on a set of vital observations. To tackle the privacy issues, we propose an efficient privacy-preserving scheme, DUMMY-T, which aims to protect an LBS user's privacy against adversaries with background information. With our Dummy Locations Generating (DLG) algorithm, we first generate a set of realistic dummy locations for each snapshot, considering the minimum cloaking region and background information. Further, our Dummy Paths Constructing (DPC) algorithm guarantees location reachability by taking the maximum moving distance of mobile users into consideration. Security analysis and empirical evaluation results further verify the effectiveness and efficiency of DUMMY-T.
Keywords: data protection; entropy; mobile computing; mobility management (mobile radio); telecommunication security; DLG algorithm; DPC algorithm; DUMMY-T scheme; LBS user privacy protection; adversaries; background information; continuous LBS; continuous location-based services; dummy path-constructing algorithm; dummy-location generating algorithm; empirical evaluation; entropy-based privacy metric; location privacy protection; location reachability; maximum moving-mobile user distance; minimum cloaking region; privacy degree quantification; privacy-preserving scheme; security analysis; snapshots; Entropy; Measurement; Mobile communication; Privacy; Servers; System performance; Uncertainty (ID#: 16-10270)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7440649&isnumber=7440540
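An entropy-based privacy metric of the kind the abstract describes can be sketched as Shannon entropy over the adversary's belief about which candidate location is real (a hedged sketch; the paper's exact metric and probabilities may differ):

```python
import math

# Privacy as Shannon entropy: probs[i] is the adversary's probability that
# candidate location i is the user's real one. H = -sum p_i * log2(p_i),
# maximised (log2 k bits) when all k dummies are equally plausible.
def privacy_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [0.25] * 4                 # four indistinguishable locations
skewed = [0.85, 0.05, 0.05, 0.05]    # background knowledge exposes the user
assert abs(privacy_entropy(uniform) - 2.0) < 1e-9   # log2(4) = 2 bits
assert privacy_entropy(skewed) < privacy_entropy(uniform)
```

This is why dummies must be realistic: if background information lets the adversary discount them, the distribution skews and the entropy (privacy) drops even though the dummy count is unchanged.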
F. Qin, Z. Zheng, C. Bai, Y. Qiao, Z. Zhang and C. Chen, “Cross-Project Aging Related Bug Prediction,” Software Quality, Reliability and Security (QRS), 2015 IEEE International Conference on, Vancouver, BC, 2015, pp. 43-48. doi: 10.1109/QRS.2015.17
Abstract: In a long-running system, software tends to encounter performance degradation and an increasing failure rate during execution, a phenomenon called software aging. The bugs contributing to software aging are defined as Aging Related Bugs (ARBs). Considerable manpower and economic cost can be saved if ARBs are found in the testing phase. However, due to ARBs' low occurrence probability and the difficulty of reproducing them, it is usually hard to predict ARBs within a project. In this paper, we study whether and how ARBs can be located through cross-project prediction. We propose a transfer learning based aging related bug prediction approach (TLAP), which uses transfer learning to reduce the distribution difference between training and testing sets while preserving their data variance. Furthermore, to mitigate the severe class imbalance, class imbalance learning is conducted on the transferred latent space. Finally, we employ machine learning methods to handle the bug prediction tasks. The effectiveness of our approach is validated and evaluated by experiments on two real software systems, which indicate that with TLAP, ARB prediction performance can be dramatically improved.
Keywords: learning (artificial intelligence); program debugging; program testing; software maintenance; ARB bug prediction; TLAP; aging related bugs; class imbalance learning; cross-project aging; data variance; low presence probability; machine learning method; software aging; software execution; software failure rate; software performance degradation; software system; software testing; transfer learning based aging related bug prediction approach; Aging; Computer bugs; Learning systems; Measurement; Software; Testing; Training; aging related bug; bug prediction; cross-project; transfer learning (ID#: 16-10271)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7272913&isnumber=7272893
X. Gong, X. Zhang and N. Wang, “Random-Attack Median Fortification Problems with Different Attack Probabilities,” Control Conference (CCC), 2015 34th Chinese, Hangzhou, 2015, pp. 9127-9133. doi: 10.1109/ChiCC.2015.7261083
Abstract: Critical infrastructure can be lost to random and intentional attacks. The random-attack median fortification problem has been presented to minimize the expected operation cost after purely random attacks with the same attack probability for every facility. This paper discusses the protection problem for supply systems considering random attacks with tendentiousness, that is, where some facilities are more attractive to attackers. The random-attack median fortification problem with different attack probabilities (RAMF-DP) is proposed and solved by calculating the service probabilities for all demand-node and facility pairs after attacking. The effectiveness of solving RAMF-DP is verified through experiments with various attack probabilities.
Keywords: cost reduction; critical infrastructures; disasters; dynamic programming; national security; probability; random processes; terrorism; RAMF-DP; critical infrastructure; demand nodes; expected operation cost minimization; facility attack probability; intentional attack; protection problem; pure random attack; random-attack median fortification problem; service probability; supply system; tendentiousness; Computational modeling; Games; Indexes; Linear programming; Mathematical model; Q measurement; Terrorism; Different attack probabilities; Median problems; Random attacks; Tendentiousness (ID#: 16-10272)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7261083&isnumber=7259602
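The expected-cost objective behind such models can be illustrated for a single demand node served by its nearest surviving facility (a simplified sketch under independent-attack assumptions; the paper's full RAMF-DP model also chooses which facilities to fortify):

```python
# Expected service cost for one demand node: facility j is lost with
# probability attack_probs[j]; the demand falls back to the nearest
# surviving facility. (The all-facilities-fail case contributes no
# service term in this sketch.)
def expected_cost(dists, attack_probs):
    order = sorted(range(len(dists)), key=lambda j: dists[j])
    cost, p_all_closer_down = 0.0, 1.0
    for j in order:
        # dist_j weighted by P(every closer facility failed, j survived)
        cost += dists[j] * p_all_closer_down * (1 - attack_probs[j])
        p_all_closer_down *= attack_probs[j]
    return cost

# Two facilities: the near one is attacked often, the far one rarely.
c = expected_cost([1.0, 10.0], [0.5, 0.1])
assert abs(c - (1.0 * 0.5 + 10.0 * 0.5 * 0.9)) < 1e-9
```

Raising a facility's attack probability raises the expected cost for the demand nodes it serves, which is exactly what makes tendentious (non-uniform) attack probabilities change the optimal fortification choices.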
R. F. Lima and A. C. M. Pereira, “A Fraud Detection Model Based on Feature Selection and Undersampling Applied to Web Payment Systems,” 2015 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT), Singapore, 2015, pp. 219-222. doi: 10.1109/WI-IAT.2015.13
Abstract: The volume of electronic transactions has risen sharply in recent years, mainly due to the popularization of e-commerce. With this popularization, we have observed a significant increase in the number of fraud cases, resulting in billions of dollars in losses each year worldwide. It is therefore important to develop and apply techniques that can assist fraud detection in Web transactions. Given the large amount of data generated in electronic transactions, finding the best set of features is an essential task for identifying frauds. Fraud detection is a specific application of anomaly detection, characterized by a large imbalance between the classes (e.g., fraud or non-fraud), which can be a detrimental factor for feature selection techniques. In this work we evaluate the behavior and impact of feature selection techniques for detecting fraud in a Web transaction scenario, performing undersampling in this step. To measure the effectiveness of the feature selection approach, we use state-of-the-art classification techniques to identify frauds, using real data from one of the largest electronic payment systems in Latin America. The fraud detection model thus comprises feature selection and classification techniques. To evaluate our results we use F-Measure and Economic Efficiency metrics. Our results show that the imbalance between the classes reduces the effectiveness of feature selection and that the undersampling strategy applied in this step improves the final results. We achieve very good performance in fraud detection with the proposed methodology, reducing the number of features and presenting financial gains of up to 61% compared to the company's actual scenario.
Keywords: 1/f noise; Internet; electronic commerce; security of data; F-measure; Latin America; Web payment system; Web transactions; e-commerce; economic efficiency; electronic payment system; electronic transactions; feature selection; fraud detection model; undersampling; Computational modeling; Economics; Feature extraction; Frequency modulation; Logistics; Measurement; Yttrium; Anomaly Detection; Electronic Transactions; Feature Selection; Fraud Detection (ID#: 16-10273)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7397461&isnumber=7397392
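The undersampling step described above can be sketched as random majority-class undersampling before feature selection (a minimal sketch; the paper's exact sampling ratio and pipeline are assumptions):

```python
import random

# Shrink the majority (non-fraud, label 0) class down to the size of the
# minority (fraud, label 1) class before running feature selection.
def undersample(X, y, seed=0):
    rng = random.Random(seed)
    fraud = [i for i, label in enumerate(y) if label == 1]
    legit = [i for i, label in enumerate(y) if label == 0]
    keep = fraud + rng.sample(legit, len(fraud))
    return [X[i] for i in keep], [y[i] for i in keep]

X = [[i] for i in range(1000)]
y = [1] * 20 + [0] * 980            # 2% fraud: heavily imbalanced
Xb, yb = undersample(X, y)
assert sum(yb) == 20 and len(yb) == 40   # balanced 1:1 after undersampling
```

With a balanced sample, feature-ranking statistics are no longer dominated by the majority class, which is the effect the paper measures.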
I. Kiss, B. Genge and P. Haller, “Behavior-Based Critical Cyber Asset Identification in Process Control Systems Under Cyber Attacks,” Carpathian Control Conference (ICCC), 2015 16th International, Szilvasvarad, 2015, pp. 196-201. doi: 10.1109/CarpathianCC.2015.7145073
Abstract: The accelerated advancement of Process Control Systems (PCS) has transformed the traditional, completely isolated systems view into a networked, inter-connected “system of systems” perspective, where off-the-shelf Information and Communication Technologies (ICT) are deeply embedded into the heart of PCS. This has brought significant economic and operational benefits, but it has also provided new opportunities for malicious actors targeting critical PCS. To address these challenges, in this work we employ our previously developed Cyber Attack Impact Assessment (CAIA) technique to provide a systematic mechanism to help PCS designers and industry operators assess the impact severity of various cyber threats. Moreover, the question of why one device is more critical than others, which also motivates this work, is answered through extensive numerical results showing the significance of system dynamics in the context of closed-loop PCS. The CAIA approach is validated against the simulated Tennessee Eastman chemical process, with 41 observed variables and 12 control variables involved in cascade controller structures. The results show the applicability and effectiveness of CAIA for various attack scenarios.
Keywords: closed loop systems; control engineering computing; interconnected systems; process control; production engineering computing; security of data; CAIA technique; Tennessee Eastman chemical process; behavior-based critical cyber asset identification; cascade controller structures; closed-loop PCS; critical PCS; cyber attack impact assessment; cyber threats impact severity; economical benefits; malicious actors; networked interconnected system of systems; operational benefits; process control systems; systematic mechanism; systems dynamics; Chemical processes; Feeds; Hardware; Inductors; Mathematical model; Process control; Time measurement; Control Variable; Cyber Attack; Impact Assessment; Observed Variable; Process Control Systems; System Dynamics (ID#: 16-10274)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7145073&isnumber=7145033
L. Behe, Z. Wheeler, C. Nelson, B. Knopp and W. M. Pottenger, “To Be or Not to Be IID: Can Zipf's Law Help?,” Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, Waltham, MA, 2015, pp. 1-6. doi: 10.1109/THS.2015.7225274
Abstract: Classification is a popular problem within machine learning, and increasing the effectiveness of classification algorithms has many significant applications within industry and academia. In particular, focus is given to Higher-Order Naive Bayes (HONB), a relational variant of the famed Naive Bayes (NB) statistical classification algorithm that has been shown to outperform Naive Bayes in many cases [1,10]. Specifically, HONB has outperformed NB on character n-gram based feature spaces when the available training data is small [2]. In this paper, a correlation is hypothesized between the performance of HONB on character n-gram feature spaces and how closely the feature space distribution follows Zipf's Law. This hypothesis stems from the overarching goal of ultimately understanding HONB and knowing when it will outperform NB. Textual datasets ranging from several thousand instances to nearly 20,000 instances, some containing microtext, were used to generate character n-gram feature spaces. HONB and NB were both used to model these datasets, using varying character n-gram sizes (2-7) and dictionary sizes of up to 5000 features. The performances of HONB and NB were then compared, and the results show potential support for our hypothesis: namely, the results support the hypothesized correlation for the Accuracy and Precision metrics. Additionally, a solution is provided for an open problem presented in [1]: an explicit formula for the number of SDRs from k given sets, which has connections to counting higher-order paths of arbitrary length, important in the learning stage of HONB.
Keywords: Bayes methods; learning (artificial intelligence); natural language processing; pattern classification; text analysis; HONB; IID; Zipf's law; accuracy metrics; character n-gram based feature spaces; character n-gram feature spaces; classification algorithms; feature space distribution; higher-order naive Bayes; independent and identically distributed; machine learning; naive Bayes statistical classification algorithm; precision metrics; textual datasets; Accuracy; Classification algorithms; Correlation; Earthquakes; Measurement; Niobium; Prediction algorithms (ID#: 16-10275)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225274&isnumber=7190491
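The hypothesis above turns on measuring how closely a character n-gram feature space follows Zipf's Law. As a hedged illustration (not the authors' code; the sample text, n-gram size, and regression approach are assumptions), the sketch below estimates the Zipf exponent of a character bigram distribution by least-squares regression in log-log rank-frequency space; an exponent near 1 indicates a closely Zipfian distribution.

```python
import math
from collections import Counter

def zipf_exponent(tokens):
    """Estimate the Zipf exponent s of a token stream by least-squares
    regression of log(frequency) on log(rank).  Under Zipf's Law the
    frequency of the r-th most common item is proportional to 1/r^s."""
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope  # close to 1.0 for strongly Zipfian distributions

# character 2-grams of a small illustrative sample
text = "the quick brown fox jumps over the lazy dog " * 3
bigrams = [text[i:i + 2] for i in range(len(text) - 1)]
s = zipf_exponent(bigrams)
```

In the paper's setting, `tokens` would be the character n-grams of each dataset, and the fitted exponent could be correlated against the observed HONB-vs-NB performance gap.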
Q. Yang, R. Min, D. An, W. Yu and X. Yang, “Towards Optimal PMU Placement Against Data Integrity Attacks in Smart Grid,” 2016 Annual Conference on Information Science and Systems (CISS), Princeton, NJ, USA, 2016, pp. 54-58. doi: 10.1109/CISS.2016.7460476
Abstract: State estimation plays a critical role in self-detection and control of the smart grid. Data integrity attacks (also known as false data injection attacks) have shown significant potential in undermining the state estimation of power systems, and corresponding countermeasures have drawn increased scholarly interest. In this paper, we consider the existing least-effort attack model, which computes the minimum number of sensors that must be compromised in order to manipulate a given number of states, and develop an effective greedy-based algorithm for optimal PMU placement that can not only combat data integrity attacks, but also ensure system observability with low overhead. The experimental data obtained on IEEE standard systems demonstrates the effectiveness of the proposed defense scheme against data integrity attacks.
Keywords: Observability; Phasor measurement units; Power grids; Security; Sensors; State estimation; Data integrity attacks; defense strategy; optimal PMU placement; state estimation; system observability (ID#: 16-10276)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7460476&isnumber=7460463
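The greedy placement idea can be pictured as a set-cover loop. The sketch below is a simplified stand-in, not the paper's algorithm: it assumes a PMU at a bus observes that bus and its immediate neighbors, and the five-bus topology is invented for illustration rather than taken from an IEEE standard system.

```python
def greedy_pmu_placement(adjacency):
    """Greedy cover: repeatedly place a PMU at the bus that observes the
    most still-unobserved buses, where a PMU observes its own bus and
    all neighboring buses."""
    unobserved = set(adjacency)
    placement = []
    while unobserved:
        # pick the bus whose coverage removes the most unobserved buses
        best = max(adjacency,
                   key=lambda b: len(({b} | adjacency[b]) & unobserved))
        placement.append(best)
        unobserved -= {best} | adjacency[best]
    return placement

# toy topology: bus -> set of neighboring buses
grid = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 5}, 4: {2}, 5: {3}}
pmus = greedy_pmu_placement(grid)
```

The paper's version additionally weighs placements against the least-effort attack model, so that the chosen buses raise the attacker's minimum cost as well as guaranteeing observability.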
Y. Zhauniarovich, A. Philippov, O. Gadyatskaya, B. Crispo and F. Massacci, “Towards Black Box Testing of Android Apps,” Availability, Reliability and Security (ARES), 2015 10th International Conference on, Toulouse, 2015, pp. 501-510. doi: 10.1109/ARES.2015.70
Abstract: Many state-of-the-art mobile application testing frameworks (e.g., Dynodroid [1], EvoDroid [2]) use Emma [3] or other code coverage libraries to measure the coverage achieved. The underlying assumption for these frameworks is the availability of the app source code. Yet, application markets and security researchers face the need to test third-party mobile applications in the absence of the source code. A number of frameworks for both manual and automated test generation address this challenge. However, these frameworks often do not provide any statistics on the code coverage achieved, or provide only coarse-grained metrics such as the number of activities or methods covered. At the same time, given two test reports generated by different frameworks, there is no way to tell which one achieved better coverage if the reported metrics differ (or no coverage results were provided). To address these issues we designed a framework called BBOXTESTER that is able to generate code coverage reports and produce uniform coverage metrics when testing without the source code. Security researchers can automatically execute applications using current state-of-the-art tools, and use the results of our framework to assess whether the security-critical code was covered by the tests. In this paper we report on the design and implementation of BBOXTESTER and assess its efficiency and effectiveness.
Keywords: Android (operating system); mobile computing; program testing; security of data; Android apps; BBOXTESTER; automated test generation; black box testing; code coverage report generation; coverage metrics; manual test generation; security-critical code; third-party mobile application testing; Androids; Humanoid robots; Instruments; Java; Measurement; Runtime; Testing (ID#: 16-10277)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299958&isnumber=7299862
J. Armin, B. Thompson, D. Ariu, G. Giacinto, F. Roli and P. Kijewski, “2020 Cybercrime Economic Costs: No Measure No Solution,” Availability, Reliability and Security (ARES), 2015 10th International Conference on, Toulouse, 2015, pp. 701-710. doi: 10.1109/ARES.2015.56
Abstract: Governments need reliable data on crime in order both to devise adequate policies and to allocate the correct revenues so that the measures are cost-effective, i.e., the money spent in prevention, detection, and handling of security incidents is balanced by a decrease in losses from offences. The analysis of the actual scenario of government actions in cyber security shows that the availability of multiple contrasting figures on the impact of cyber-attacks is holding back the adoption of policies for cyber space, as their cost-effectiveness cannot be clearly assessed. The most relevant literature on the topic is reviewed to highlight the research gaps and to determine the related future research issues that need addressing to provide a solid ground for future legislative and regulatory actions at national and international levels.
Keywords: government data processing; security of data; cyber security; cyber space; cyber-attacks; cybercrime economic cost; economic costs; Computer crime; Economics; Measurement; Organizations; Reliability; Stakeholders (ID#: 16-10278)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299982&isnumber=7299862
P. Pantazopoulos and I. Stavrakakis, “Low-Cost Enhancement of the Intra-Domain Internet Robustness Against Intelligent Node Attacks,” Design of Reliable Communication Networks (DRCN), 2015 11th International Conference on the, Kansas City, MO, 2015, pp. 219-226. doi: 10.1109/DRCN.2015.7149016
Abstract: Internet vulnerability studies typically consider highly central nodes as favorable targets of intelligent (malicious) attacks. Heuristics that add k redundant links to the topology are a common class of countermeasures seeking to enhance Internet robustness. To identify the nodes to be linked, most previous works propose very simple centrality criteria that lack a clear rationale and only occasionally address intra-domain topologies. More importantly, the implementation cost induced by adding lengthy links between nodes at remote network locations is rarely taken into account. In this paper, we explore cost-effective link additions in the locality of the targets, adding the k extra links only between their first neighbors. We introduce an innovative link utility metric that identifies which pair of a target's neighbors aggregates the most shortest paths coming from the rest of the nodes and could therefore enhance network connectivity if linked. This metric drives the proposed heuristic, which solves the problem of assigning the link budget k to the neighbors of the targets. Employing a rich intra-domain networks dataset, we first conduct a proof-of-concept study to validate the effectiveness of the metric. We then compare our approach with the most effective heuristic to date, which does not bound the length of the added links. Our results suggest that the proposed enhancement can closely approximate the connectivity levels that heuristic yields, yet with up to eight times lower implementation cost.
Keywords: Internet; computer network security; telecommunication links; telecommunication network topology; innovative link utility metric; intelligent node attack; intradomain internet robustness low-cost enhancement; intradomain topology; network connectivity enhancement; proof-of-concept study; Communication networks; Measurement; Network topology; Reliability engineering; Robustness; Topology (ID#: 16-10279)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7149016&isnumber=7148972
M. Bradbury, M. Leeke and A. Jhumka, “A Dynamic Fake Source Algorithm for Source Location Privacy in Wireless Sensor Networks,” Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 531-538. doi: 10.1109/Trustcom.2015.416
Abstract: Wireless sensor networks (WSNs) are commonly used in asset monitoring applications, where it is often desirable for the location of the asset being monitored to be kept private. The source location privacy (SLP) problem involves protecting the location of a WSN source node from an attacker who is attempting to locate it. Among the most promising approaches to the SLP problem is the use of fake sources, with much existing research demonstrating their efficacy. Despite the effectiveness of the approach, the most effective algorithms providing SLP require network and situational knowledge that makes their deployment impractical in many contexts. In this paper, we develop a novel dynamic fake sources-based algorithm for SLP. We show that the algorithm provides state-of-the-art levels of location privacy under practical operational assumptions.
Keywords: data privacy; telecommunication security; wireless sensor networks; SLP problem; WSN source node; asset monitoring applications; dynamic fake source algorithm; location protection; source location privacy problem; wireless sensor networks; Context; Heuristic algorithms; Monitoring; Position measurement; Privacy; Temperature sensors; Wireless sensor networks; Dynamic; Sensor Networks; Source Location Privacy (ID#: 16-10280)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345324&isnumber=7345233
J. R. Ward and M. Younis, “Base Station Anonymity Distributed Self-Assessment in Wireless Sensor Networks,” Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, Baltimore, MD, 2015, pp. 103-108. doi: 10.1109/ISI.2015.7165947
Abstract: In recent years, Wireless Sensor Networks (WSNs) have become valuable assets to both the commercial and military communities with applications ranging from industrial control on a factory floor to reconnaissance of a hostile border. In most applications, the sensors act as data sources and forward information generated by event triggers to a central sink or base station (BS). The unique role of the BS makes it a natural target for an adversary that desires to achieve the most impactful attack possible against a WSN with the least amount of effort. Even if a WSN employs conventional security mechanisms such as encryption and authentication, an adversary may apply traffic analysis techniques to identify the BS. This motivates a significant need for improved BS anonymity to protect the identity, role, and location of the BS. Previous work has proposed anonymity-boosting techniques to improve the BS's anonymity posture, but all require some amount of overhead such as increased energy consumption, increased latency, or decreased throughput. If the BS understood its own anonymity posture, then it could evaluate whether the benefits of employing an anti-traffic analysis technique are worth the associated overhead. In this paper we propose two distributed approaches to allow a BS to assess its own anonymity and correspondingly employ anonymity-boosting techniques only when needed. Our approaches allow a WSN to increase its anonymity on demand, based on real-time measurements, and therefore conserve resources. The simulation results confirm the effectiveness of our approaches.
Keywords: security of data; wireless sensor networks; WSN; anonymity-boosting techniques; anti-traffic analysis technique; base station; base station anonymity distributed self-assessment; conventional security mechanisms; improved BS anonymity; Current measurement; Energy consumption; Entropy; Protocols; Sensors; Wireless sensor networks; anonymity; location privacy (ID#: 16-10281)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165947&isnumber=7165923
L. Ren, C. Gong, Q. Shen and H. Wang, “A Method for Health Monitoring of Power MOSFETs Based on Threshold Voltage,” Industrial Electronics and Applications (ICIEA), 2015 IEEE 10th Conference on, Auckland, 2015, pp. 1729-1734. doi: 10.1109/ICIEA.2015.7334390
Abstract: The prognostics and health management (PHM) of airborne equipment plays an important role in ensuring the safety of flight and improving the combat readiness ratio. The widespread use of electronic equipment in aircraft is making PHM technology for power electronics devices increasingly important, and main circuit devices have proven to have the highest failure rates in power equipment. This paper investigates fault feature extraction for the power metal oxide semiconductor field effect transistor (MOSFET). First, the failure mechanisms and failure features of active power switches are analyzed, and the junction temperature is identified as an overall parameter for the health monitoring of MOSFETs. Then, a health monitoring method based on the threshold voltage is proposed. For the buck converter, a measuring method for the threshold voltage is proposed that is simple to realize and of high precision. Finally, simulation and experimental results verify the effectiveness of the proposed measuring method.
Keywords: monitoring; power MOSFET; power electronics; active power switches; airborne equipment; buck converter; electronics equipment; failure mechanism; fault feature extraction; health monitoring; junction temperature; power electronics devices; prognostics and health management; threshold voltage; Aging; Degradation; Junctions; MOSFET; Temperature; Temperature measurement; Threshold voltage; Buck converter; The prognostics and health management (PHM); the failure mechanism (ID#: 16-10282)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7334390&isnumber=7334072
T. Saito, H. Miyazaki, T. Baba, Y. Sumida and Y. Hori, “Study on Diffusion of Protection/Mitigation Against Memory Corruption Attack in Linux Distributions,” Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2015 9th International Conference on, Blumenau, 2015, pp. 525-530. doi: 10.1109/IMIS.2015.73
Abstract: Memory corruption attacks that exploit software vulnerabilities have become a serious problem on the Internet. Effective protection and/or mitigation technologies aimed at countering these attacks are currently provided with operating systems, compilers, and libraries. Unfortunately, the attacks continue. One reason for this state of affairs is the uneven diffusion of the latest (and thus most potent) protection and/or mitigation technologies: attackers are likely to have found ways of circumventing the most well-known older versions, causing them to lose effectiveness. Therefore, in this paper, we explore the diffusion of relatively new technologies and analyze the results of a survey of Linux distributions.
Keywords: Linux; security of data; Internet; Linux distributions; memory corruption attack mitigation; memory corruption attack protection; software vulnerabilities; Buffer overflows; Geophysical measurement techniques; Ground penetrating radar; Kernel; Libraries; Anti-thread; Buffer Overflow; Diffusion of countermeasure techniques; Memory corruption attacks (ID#: 16-10283)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7285008&isnumber=7284886
X. Zhou, H. Wang and J. Zhao, “A Fault-Localization Approach Based on the Coincidental Correctness Probability,” Software Quality, Reliability and Security (QRS), 2015 IEEE International Conference on, Vancouver, BC, 2015, pp. 292-297. doi: 10.1109/QRS.2015.48
Abstract: Coverage-based fault localization is a spectrum-based technique that identifies the executing program elements that correlate with failure. However, its effectiveness suffers from the effect of coincidental correctness, which occurs when a fault is executed but no failure is detected. Coincidental correctness is prevalent and has been proven to be a safety-reducing factor for coverage-based fault localization techniques. In this paper, we propose a new fault-localization approach based on the coincidental correctness probability. We estimate the probability that coincidental correctness happens for each program execution using dynamic data-flow analysis and control-flow analysis. To evaluate our approach, we use safety and precision as evaluation metrics. Our experiment involved 62 seeded versions of C programs from SIR. We discuss the comparison results with Tarantula and with two improved CBFL techniques that cleanse test suites of coincidental correctness. The results show that our approach can improve the safety and precision of the fault-localization technique to a certain degree.
Keywords: data flow analysis; probability; program testing; software fault tolerance; C programs; CBFL techniques; Tarantula; coincidental correctness probability; control-flow analysis; coverage-based fault localization; coverage-based fault location techniques; dynamic data-flow analysis; evaluation metrics; failure; fault-localization approach; precision; probability estimation; program elements; program execution; safety reducing factor; software testing; spectrum-based technique; test suites; Algorithm design and analysis; Circuit faults; Estimation; Heuristic algorithms; Lead; Measurement; Safety; coincidental correctness; fault localization (ID#: 16-10284)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7272944&isnumber=7272893
P. Xu, Q. Miao, T. Liu and X. Chen, “Multi-Direction Edge Detection Operator,” 2015 11th International Conference on Computational Intelligence and Security (CIS), Shenzhen, 2015, pp. 187-190. doi: 10.1109/CIS.2015.53
Abstract: Due to noise in images, the edges extracted from noisy images by traditional operators are often discontinuous and inaccurate. To solve these problems, this paper proposes a multi-direction edge detection operator for noisy images. The new operator is designed by introducing the shear transformation into the traditional operator. On the one hand, the shear transformation provides a more favorable treatment of directions, allowing the new operator to detect edges in different directions and overcome the directional limitation of the traditional operator. On the other hand, all the single-pixel edge images in different directions can be fused, so that the edge information complements each other. The experimental results indicate that the new operator is superior to traditional ones in terms of the effectiveness of edge detection and the ability to reject noise.
Keywords: edge detection; image denoising; mathematical operators; transforms; edge extraction; multidirection edge detection operator; noise rejection ability; noisy images; shear transformation; single-pixel edge images; Computed tomography; Convolution; Image edge detection; Noise measurement; Sensitivity; Standards; Wavelet transforms; false edges; matched edges; the shear transformation (ID#: 16-10285)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7396283&isnumber=7396229
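The fusion of single-direction edge images described above can be sketched in miniature. The example below is an illustrative stand-in, not the authors' operator: it uses only the two standard Sobel directions (the paper's shear transformation would generate additional directions) and fuses the per-direction responses by taking the per-pixel maximum magnitude.

```python
def convolve(img, kernel):
    """Same-size 2D convolution with zero padding (pure Python)."""
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    oy, ox = kh // 2, kw // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for i in range(kh):
                for j in range(kw):
                    yy, xx = y + i - oy, x + j - ox
                    if 0 <= yy < h and 0 <= xx < w:
                        s += img[yy][xx] * kernel[i][j]
            out[y][x] = s
    return out

# Sobel kernels for two directions; sheared variants would add more
KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def multi_direction_edges(img):
    """Fuse per-direction responses via the per-pixel maximum magnitude,
    mimicking the fusion of single-direction edge images."""
    gx, gy = convolve(img, KX), convolve(img, KY)
    return [[max(abs(a), abs(b)) for a, b in zip(rx, ry)]
            for rx, ry in zip(gx, gy)]

# vertical step edge: left half dark, right half bright
img = [[0] * 4 + [10] * 4 for _ in range(8)]
edges = multi_direction_edges(img)
```

With more directions, the per-pixel maximum lets responses from different orientations complement one another, which is the complementarity the abstract refers to.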
W. Li, B. Niu, H. Li and F. Li, “Privacy-Preserving Strategies in Service Quality Aware Location-Based Services,” 2015 IEEE International Conference on Communications (ICC), London, 2015, pp. 7328-7334. doi: 10.1109/ICC.2015.7249497
Abstract: The popularity of Location-Based Services (LBSs) has resulted in serious privacy concerns recently. Mobile users may lose their privacy while enjoying various social activities due to untrusted LBS servers. Many Privacy Protection Mechanisms (PPMs) employing different strategies have been proposed in the literature, at the cost of system overhead, service quality, or both. In this paper, we design privacy-preserving strategies for both users and adversaries in service quality aware LBSs. Different from existing approaches, we first define and point out the importance of Fine-Grained Side Information (FGSI) over the existing concept of side information, and propose a Dual-Privacy Metric (DPM) and a Service Quality Metric (SQM). Then, we build analytical frameworks that provide privacy-preserving strategies for mobile users and adversaries to achieve their respective goals. Finally, the evaluation results show the effectiveness of the proposed frameworks and strategies.
Keywords: data protection; mobility management (mobile radio); quality of service; DPM; FGSI; LBS; PPM; SQM; dual-privacy metric; fine-grained side information; mobile user; privacy protection mechanism; privacy-preserving strategy; service quality aware location-based service; service quality metric; Information systems; Measurement; Mobile radio mobility management; Privacy; Security; Servers (ID#: 16-10286)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249497&isnumber=7248285
S. Debroy, P. Calyam, M. Nguyen, A. Stage and V. Georgiev, “Frequency-Minimal Moving Target Defense Using Software-Defined Networking,” 2016 International Conference on Computing, Networking and Communications (ICNC), Kauai, HI, 2016, pp. 1-6. doi: 10.1109/ICCNC.2016.7440635
Abstract: With the increase of cyber attacks such as DoS, there is a need for intelligent counter-strategies to protect critical cloud-hosted applications. The challenge for the defense is to minimize the waste of cloud resources and limit loss of availability, yet have effective proactive and reactive measures that can thwart attackers. In this paper we address the defense needs by leveraging moving target defense protection within Software-Defined Networking-enabled cloud infrastructure. Our novelty is in the frequency minimization and consequent location selection of target movement across heterogeneous virtual machines based on attack probability, which in turn minimizes cloud management overheads. We evaluate effectiveness of our scheme using a large-scale GENI testbed for a just-in-time news feed application setup. Our results show low attack success rate and higher performance of target application in comparison to the existing static moving target defense schemes that assume homogeneous virtual machines.
Keywords: cloud computing; computer network security; software defined networking; DoS; critical cloud-hosted applications; cyber attacks; frequency-minimal moving target defense; heterogeneous virtual machines; intelligent counter-strategies; software-defined networking-enabled cloud infrastructure; Bandwidth; Cloud computing; Computer crime; Feeds; History; Loss measurement; Time-frequency analysis (ID#: 16-10287)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7440635&isnumber=7440540
Y. Nakahira and Y. Mo, “Dynamic State Estimation in the Presence of Compromised Sensory Data,” 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, 2015, pp. 5808-5813. doi: 10.1109/CDC.2015.7403132
Abstract: In this article, we consider the state estimation problem of a linear time invariant system in an adversarial environment. We assume that the process noise and measurement noise of the system are l∞ functions. The adversary compromises at most γ sensors, the set of which is unknown to the estimation algorithm, and can change their measurements arbitrarily. We first prove that if the system is undetectable after removing a set of 2γ sensors, then there exists a destabilizing noise process and attacker's input that render the estimation error unbounded. For the case where the system remains detectable after removing an arbitrary set of 2γ sensors, we construct a resilient estimator and provide an upper bound on the l∞ norm of the estimation error. Finally, a numerical example illustrates the effectiveness of the proposed estimator design.
Keywords: invariance; linear systems; measurement errors; measurement uncertainty; state estimation; compromised sensory data; dynamic state estimation; estimation error; estimator design; l∞ functions; linear time invariant system; measurement noise; measurements arbitrarily; process noise; Estimation error; Robustness; Security; Sensor systems; State estimation (ID#: 16-10288)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7403132&isnumber=7402066
M. Kargar, A. An, N. Cercone, P. Godfrey, J. Szlichta and X. Yu, “Meaningful Keyword Search in Relational Databases with Large and Complex Schema,” 2015 IEEE 31st International Conference on Data Engineering, Seoul, 2015, pp. 411-422. doi: 10.1109/ICDE.2015.7113302
Abstract: Keyword search over relational databases offers an alternative to SQL for querying and exploring databases that is effective for lay users who may not be well versed in SQL or the database schema. This becomes more pertinent for databases with large and complex schemas. An answer in this context is a join tree spanning the tuples that contain the query's keywords. As there are potentially many answers to the query, and the user is often only interested in seeing the top-k answers, ranking the answers by relevance is of paramount importance. We focus on the relevance of joins as the fundamental means to rank answers. We devise means to measure the relevance of relations and foreign keys in the schema over the information content of the database. This can be done offline with no need for external models. We compare the proposed measures against a gold standard derived from a real workload over TPC-E and evaluate the effectiveness of our methods. Finally, we test the performance of our measures against existing techniques to demonstrate a marked improvement, and perform a user study to establish the naturalness of the ranking of the answers.
Keywords: SQL; query processing; relational databases; trees (mathematics); SQL; TPC-E; answer ranking; complex schema; database querying; foreign keys; join tree spanning tuples; keyword search; large schema; query answering; relation relevance measurement; relational databases; Companies; Gold; Indexes; Keyword search; Relational databases; Security (ID#: 16-10289)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113302&isnumber=7113253
H. B. M. Shashikala, R. George and K. A. Shujaee, “Outlier Detection in Network Data Using the Betweenness Centrality,” SoutheastCon 2015, Fort Lauderdale, FL, 2015, pp. 1-5. doi: 10.1109/SECON.2015.7133008
Abstract: Outlier detection has been used to detect and, where appropriate, remove anomalous observations from data. It has important applications in the fields of fraud detection, network robustness analysis, and intrusion detection. In this paper, we propose Betweenness Centrality (BEC) as a novel measure for determining outliers in network analyses. The betweenness centrality of a vertex in a graph measures the participation of the vertex in the shortest paths of the graph and is widely used in network analyses. In social networks especially, the betweenness centralities of vertices are computed recursively for community detection and for finding influential users in the network. In this paper, we propose that this method is efficient in finding outliers in social network analyses. Furthermore, we show the effectiveness of the new method using experimental data.
Keywords: fraud; graph theory; recursive estimation; security of data; social networking (online); BEC; betweenness centrality; community detection; fraud detection; graph analysis; intrusion detection; network data; network robustness analysis; outlier detection; recursive computation; social network analyses; vertices; Atmospheric measurements; Chaos; Particle measurements; Presses; adjacency matrix (ID#: 16-10290)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7133008&isnumber=7132866
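As a concrete illustration of the measure (not the paper's implementation), Brandes' algorithm computes betweenness centrality for an unweighted, undirected graph; a vertex whose score dwarfs the rest, such as a bridge joining two communities, stands out as the outlier. The tiny two-clique graph below is an invented example.

```python
from collections import deque, defaultdict

def betweenness_centrality(graph):
    """Brandes' algorithm for unweighted, undirected graphs.
    graph: dict mapping node -> iterable of neighbor nodes."""
    bc = dict.fromkeys(graph, 0.0)
    for s in graph:
        # single-source shortest-path counts via BFS
        stack, preds = [], defaultdict(list)
        sigma = dict.fromkeys(graph, 0.0); sigma[s] = 1.0
        dist = dict.fromkeys(graph, -1); dist[s] = 0
        queue = deque([s])
        while queue:
            v = queue.popleft(); stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        # back-propagate dependencies in reverse BFS order
        delta = dict.fromkeys(graph, 0.0)
        while stack:
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # undirected graph: each pair was counted from both endpoints
    return {v: c / 2 for v, c in bc.items()}

# a bridge node "b" joining two two-node cliques is the centrality outlier
g = {"a1": {"a2", "b"}, "a2": {"a1", "b"},
     "b": {"a1", "a2", "c1", "c2"},
     "c1": {"c2", "b"}, "c2": {"c1", "b"}}
bc = betweenness_centrality(g)
```

An outlier rule in the spirit of the paper would then flag vertices whose centrality lies several deviations above the mean of the distribution.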
E. Lagunas, M. G. Amin and F. Ahmad, “Through-the-Wall Radar Imaging for Heterogeneous Walls Using Compressive Sensing,” Compressed Sensing Theory and its Applications to Radar, Sonar and Remote Sensing (CoSeRa), 2015 3rd International Workshop on, Pisa, 2015, pp. 94-98. doi: 10.1109/CoSeRa.2015.7330271
Abstract: Front wall reflections are considered one of the main challenges in sensing through walls using radar. This is especially true under sparse time-space or frequency-space sampling of radar returns which may be required for fast and efficient data acquisition. Unlike homogeneous walls, heterogeneous walls have frequency and space varying characteristics which violate the smooth surface assumption and cause significant residuals under commonly used wall clutter mitigation techniques. In the proposed approach, the phase shift and the amplitude of the wall reflections are estimated from the compressive measurements using a Maximum Likelihood Estimation (MLE) procedure. The estimated parameters are used to model electromagnetic (EM) wall returns, which are subsequently subtracted from the total radar returns, rendering wall-reduced and wall-free signals. Simulation results are provided, demonstrating the effectiveness of the proposed technique and showing its superiority over existing methods.
Keywords: compressed sensing; data acquisition; image sampling; maximum likelihood estimation; radar clutter; radar imaging; EM wall return; MLE procedure; compressive sensing; electromagnetic wall return; frequency-space sampling; front wall reflection; heterogeneous wall; maximum likelihood estimation procedure; sparse time-space sampling; through-the-wall radar imaging; wall clutter mitigation technique; Antenna measurements; Arrays; Maximum likelihood estimation; Radar antennas; Radar imaging (ID#: 16-10291)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7330271&isnumber=7330246
R. Jin and K. Zeng, “Physical Layer Key Agreement Under Signal Injection Attacks,” Communications and Network Security (CNS), 2015 IEEE Conference on, Florence, 2015, pp. 254-262. doi: 10.1109/CNS.2015.7346835
Abstract: Physical layer key agreement techniques derive a symmetric cryptographic key from the wireless fading channel between two wireless devices by exploiting channel randomness and reciprocity. Existing works mainly focus on the security analysis and protocol design of the techniques under passive attacks. The study of physical layer key agreement techniques under active attacks is largely open. In this paper, we present a new form of highly threatening active attack, named the signal injection attack. By injecting similar signals to both keying devices, the attacker aims at manipulating the channel measurements and compromising a portion of the key. We further propose a countermeasure to the signal injection attack, PHY-UIR (PHYsical layer key agreement with User Introduced Randomness). In PHY-UIR, both keying devices independently introduce randomness into the channel probing frames, and compose common random series by combining the randomness in the fading channel with that introduced by the users. With this solution, the composed series and injected signals become uncorrelated. Thus, the final key will automatically exclude the contaminated portion related to the injected signal while preserving the portion related to the random fading channel. Moreover, the contaminated composed series at the two keying devices become decorrelated, which helps detect the attack. We analyze the security strength of PHY-UIR and conduct extensive simulations to evaluate it. Both theoretical analysis and simulations demonstrate the effectiveness of PHY-UIR. We also perform proof-of-concept experiments using software defined radios in a real-world environment. We show that the signal injection attack is feasible in practice and leads to a strong correlation (0.75) between the injected signal and channel measurements at legitimate users for existing key generation methods.
PHY-UIR is immune to the signal injection attack and results in low correlation (0.15) between the injected signal and the composed random signals at legitimate users.
Keywords: cryptography; fading channels; telecommunication security; PHY-UIR; channel measurements; channel probing frames; channel randomness; physical layer key agreement techniques; physical layer key agreement with user introduced randomness; protocol design; random fading channel; reciprocity; security analysis; security strength; signal injection attack; signal injection attacks; symmetric cryptographic key; theoretical analysis; wireless fading channel; Clocks; Cryptography; DH-HEMTs; Niobium; Protocols; Yttrium (ID#: 16-10292)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346835&isnumber=7346791
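The attack impact in the entry above is quantified by the correlation between the injected signal and the measured/composed series (0.75 for existing methods versus 0.15 under PHY-UIR). As a minimal sketch, the standard sample Pearson correlation such an evaluation would compute can be written as follows (function name is illustrative, not from the paper):

```python
import math

def pearson(x, y):
    """Sample Pearson correlation coefficient between two equal-length
    sequences, e.g. an injected signal and the composed random series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A value near 1 indicates the attacker's injected signal dominates the measurements; a value near 0 indicates the user-introduced randomness has decorrelated the series.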
X. Zhang, X. Yang, J. Lin and W. Yu, “On False Data Injection Attacks Against the Dynamic Microgrid Partition in the Smart Grid,” 2015 IEEE International Conference on Communications (ICC), London, 2015, pp. 7222-7227. doi: 10.1109/ICC.2015.7249479
Abstract: To enhance the reliability and efficiency of energy service in the smart grid, the concept of the microgrid has been proposed. Nonetheless, securing the dynamic microgrid partition process is essential in the smart grid. In this paper, we address the security of the dynamic microgrid partition process and systematically investigate three false data injection attacks against it. Particularly, we first discuss the dynamic microgrid partition problem based on a Connected Graph Constrained Knapsack Problem (CGKP) algorithm. We then develop a theoretical model and carry out simulations to investigate the impacts of these false data injection attacks on the effectiveness of the dynamic microgrid partition process. Our theoretical and simulation results show that the investigated false data injection attacks can disrupt the dynamic microgrid partition process and negatively impact the balance of energy demand and supply within microgrids, such as an increased number of lack-nodes and increased energy loss in microgrids.
Keywords: computer network security; distributed power generation; graph theory; knapsack problems; power engineering computing; power system management; power system measurement; power system reliability; smart power grids; algorithm; connected graph constrained knapsack problem; dynamic microgrid partition process security; energy service efficiency; false data injection attacks; smart power grid reliability; Energy loss; Heuristic algorithms; Microgrids; Partitioning algorithms; Smart grids; Smart meters
(ID#: 16-10293)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249479&isnumber=7248285
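The partition algorithm named in the entry above builds on a Connected Graph Constrained Knapsack Problem. The graph-connectivity constraint is beyond this sketch, but the underlying 0/1 knapsack core can be illustrated with the classic dynamic program (the function name and example data are illustrative, not from the paper):

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack by dynamic programming over remaining capacity.
    This is only the knapsack core of CGKP; the paper's additional
    constraint that selected nodes form a connected subgraph is omitted."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacity downwards so each item is used at most once
        for cap in range(capacity, w - 1, -1):
            best[cap] = max(best[cap], best[cap - w] + v)
    return best[capacity]
```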
X. Zhao, F. Deng, H. Liang and L. Zhou, “Monitoring the Deformation of the Facade of a Building Based on Terrestrial Laser Point-Cloud,” 2015 11th International Conference on Computational Intelligence and Security (CIS), Shenzhen, 2015, pp. 183-186. doi: 10.1109/CIS.2015.52
Abstract: When terrestrial laser point-cloud data are employed for monitoring the façade of a building, point-cloud data collected in different phases cannot be used directly to calculate the deformation displacement, because inhomogeneous sampling means the data points in a homologous region do not coincide. To address this problem, a triangular patch is built for the earlier point-cloud data, the distance between the later point-cloud data and the earlier patch is measured in the homologous region, and thus the deformation displacement is determined. On this basis, laser point-cloud monitoring analysis software is developed and three series of experiments are designed to verify the effectiveness of the method.
Keywords: buildings (structures); condition monitoring; deformation; distance measurement; structural engineering; building façade deformation monitoring; data points; deforming displacement; distance measurement; homogeneous region; inhomogeneous sampling; laser point-cloud monitoring analysis; point-cloud data; terrestrial laser point-cloud; triangular patch; Buildings; Data models; Deformable models; Mathematical model; Monitoring; Reliability; Three-dimensional displays; building façade; deformation monitoring; point-cloud (ID#: 16-10294)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7396282&isnumber=7396229
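The displacement measurement described above compares later-epoch points against a triangular patch built from the earlier epoch. As a simplified sketch, assuming the point projects onto the triangle's interior so the point-to-patch distance reduces to the distance to the triangle's supporting plane (the full point-to-patch case also handles edges and vertices):

```python
def point_to_plane(p, a, b, c):
    """Unsigned distance from point p to the plane of triangle (a, b, c),
    a simplified stand-in for the point-to-triangular-patch distance used
    to measure facade displacement between scanning epochs."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    # normal vector of the triangle via the cross product u x v
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(x * x for x in n) ** 0.5
    w = [p[i] - a[i] for i in range(3)]
    return abs(sum(ni * wi for ni, wi in zip(n, w))) / norm
```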
H. Alizadeh, A. Khoshrou and A. Zúquete, “Traffic Classification and Verification Using Unsupervised Learning of Gaussian Mixture Models,” Measurements & Networking (M&N), 2015 IEEE International Workshop on, Coimbra, 2015, pp. 1-6. doi: 10.1109/IWMN.2015.7322980
Abstract: This paper presents the use of unsupervised Gaussian Mixture Models (GMMs) for the production of per-application models from flow statistics, to be exploited in two different scenarios: (i) traffic classification, where the goal is to classify traffic flows by application; (ii) traffic verification or traffic anomaly detection, where the aim is to confirm whether or not the traffic flow generated by the claimed application conforms to its expected model. Unlike the first scenario, the second is a new research path that has received less attention in the scope of Intrusion Detection System (IDS) research. The term “unsupervised” refers to the method's ability to select the optimal number of components automatically, without the need for careful initialization. Experiments are carried out using a public dataset collected from a real network. Favorable results indicate the effectiveness of unsupervised GMMs.
Keywords: Gaussian processes; computer network security; mixture models; pattern classification; security of data; telecommunication traffic; unsupervised learning; Gaussian mixture model; IDS; intrusion detection system; traffic anomaly detection; traffic classification; traffic flow; traffic verification; unsupervised GMM; unsupervised learning; Accuracy; Feature extraction; Mixture models; Payloads; Ports (Computers); Protocols; Training (ID#: 16-10295)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7322980&isnumber=7322959
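For the verification scenario above, a flow is accepted as the claimed application only if its likelihood under that application's fitted GMM is high enough. A minimal sketch, assuming a diagonal-covariance mixture has already been fitted (the paper fits the models in an unsupervised fashion; the function names and threshold rule here are illustrative assumptions):

```python
import math

def gmm_loglik(x, components):
    """Log-likelihood of feature vector x under a diagonal-covariance GMM.
    components: list of (weight, means, variances) per mixture component."""
    total = 0.0
    for w, mu, var in components:
        lik = w
        for xi, mi, vi in zip(x, mu, var):
            lik *= math.exp(-(xi - mi) ** 2 / (2 * vi)) / math.sqrt(2 * math.pi * vi)
        total += lik
    return math.log(total)

def verify_flow(x, model, threshold):
    """Traffic verification: accept the flow as the claimed application
    only if its log-likelihood under that application's model is high enough."""
    return gmm_loglik(x, model) >= threshold
```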
N. Matyunin, J. Szefer, S. Biedermann and S. Katzenbeisser, “Covert Channels Using Mobile Device's Magnetic Field Sensors,” 2016 21st Asia and South Pacific Design Automation Conference (ASP-DAC), Macau, 2016, pp. 525-532. doi: 10.1109/ASPDAC.2016.7428065
Abstract: This paper presents a new covert channel using smartphone magnetic sensors. We show that modern smartphones are capable of detecting the magnetic field changes induced by different computer components during I/O operations. In particular, we are able to create a covert channel between a laptop and a mobile device without any additional equipment, firmware modifications, or privileged access on either of the devices. We present two encoding schemes for the covert channel communication and evaluate their effectiveness.
Keywords: encoding; magnetic field measurement; magnetic sensors; smart phones; I/O operations; computer components; covert channels; encoding schemes; laptop; magnetic field changes; magnetic field sensors; mobile device; smartphone magnetic sensors; Encoding; Hardware; Magnetic heads; Magnetic sensors; Magnetometers; Portable computers (ID#: 16-10296)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7428065&isnumber=7427971
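The abstract does not detail the paper's two encoding schemes, so the following is purely an illustrative assumption: a receiver for simple on-off keying, where the sender modulates disk I/O activity and the phone thresholds the averaged magnetometer field magnitude per bit slot.

```python
def decode_ook(samples, samples_per_bit, threshold):
    """Receiver side of an assumed on-off-keyed covert channel: average the
    magnetometer magnitude over each bit slot and threshold it — a burst of
    I/O-induced field changes encodes 1, idle encodes 0."""
    bits = []
    for i in range(0, len(samples) - samples_per_bit + 1, samples_per_bit):
        slot = samples[i:i + samples_per_bit]
        bits.append(1 if sum(slot) / len(slot) > threshold else 0)
    return bits
```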
H. Wei, Y. Zhang, D. Guo and X. Wei, “CARISON: A Community and Reputation Based Incentive Scheme for Opportunistic Networks,” 2015 Fifth International Conference on Instrumentation and Measurement, Computer, Communication and Control (IMCCC), Qinhuangdao, 2015, pp. 1398-1403. doi: 10.1109/IMCCC.2015.299
Abstract: Forwarding messages in opportunistic networks incurs costs for nodes in terms of storage and energy. Some nodes become selfish or even malicious, and these selfish and malicious behaviors degrade the connectivity of opportunistic networks. To tackle this issue, in this paper we propose CARISON: a community and reputation based incentive scheme for opportunistic networks. CARISON allows every node to belong to a community, manage its reputation evidence, and demonstrate its reputation whenever necessary. In order to kick out malicious nodes, we propose an altruism function, which is a critical factor. Besides, considering the social attributes of nodes, we propose two ways to calculate reputation: intra-community reputation calculation and inter-community reputation calculation. Meanwhile, this paper proposes a binary exponent punishment strategy to punish nodes with low reputation. Extensive performance analysis and simulations are given to demonstrate the effectiveness and efficiency of the proposed scheme.
Keywords: cooperative communication; incentive schemes; telecommunication security; CARISON; altruism function; binary exponent punishment strategy; community and reputation based incentive scheme; inter-community reputation calculating; intra-community reputation calculating; malicious behaviors; malicious nodes; opportunistic networks; reputation evidence; selfish behaviors; social attributes; Analytical models; Computational modeling; Computers; History; Incentive schemes; Monitoring; Performance analysis; Altruism function; Binary exponent punishment strategy; Community; Opportunistic networks; Reputation based incentive; Selfish (ID#: 16-10297)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7406078&isnumber=7405778
Z. Pang, F. Hou, Y. Zhou and D. Sun, “Design of False Data Injection Attacks for Output Tracking Control of CARMA Systems,” Information and Automation, 2015 IEEE International Conference on, Lijiang, 2015, pp. 1273-1277. doi: 10.1109/ICInfA.2015.7279482
Abstract: Considerable attention has focused on the problem of cyber-attacks on cyber-physical systems in recent years. In this paper, we consider a class of single-input single-output systems which are described by a controlled auto-regressive moving average (CARMA) model. A PID controller is designed to make the system output track the reference signal. Then the state-space model of the controlled plant and the corresponding Kalman filter are employed to generate stealthy false data injection attacks for the sensor measurements, which can destroy the control system performance without being detected by an online parameter identification algorithm. Finally, two numerical simulation results are given to demonstrate the effectiveness of the proposed false data injection attacks.
Keywords: Kalman filters; autoregressive moving average processes; control system synthesis; security of data; state-space methods; three-term control; CARMA systems; Kalman filter; PID controller design; controlled auto-regressive moving average; false data injection attacks; online parameter identification algorithm; output tracking control; single-input single-output systems; state-space model; Conferences; Control systems; Detectors; Mathematical model; Parameter estimation; Smart grids; CARMA model; Cyber-physical systems (CPSs); output feedback control (ID#: 16-10298)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7279482&isnumber=7279248
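The output-tracking setup in the entry above pairs a PID controller with a CARMA plant. As a sketch under a simplifying assumption — a pure integrator plant y[k+1] = y[k] + 0.1·u[k] stands in for the paper's CARMA model, and the gains are illustrative — a discrete PID tracking loop looks like:

```python
def simulate_pid(kp, ki, kd, dt, steps, ref=1.0):
    """Drive an assumed integrator plant toward the reference with a
    discrete PID controller; returns the final plant output."""
    y, integ, prev_e = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = ref - y                      # tracking error
        integ += e * dt                  # integral term accumulator
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        prev_e = e
        y += 0.1 * u                     # illustrative integrator plant
    return y
```

A false data injection attack in the paper's sense would corrupt the sensor reading used to form `e`, degrading tracking while remaining consistent with the identified model.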
Y. Hu and M. Sun, “Synchronization of a Class of Hyperchaotic Systems via Backstepping and Its Application to Secure Communication,” 2015 Fifth International Conference on Instrumentation and Measurement, Computer, Communication and Control (IMCCC), Qinhuangdao, 2015, pp. 1055-1060. doi: 10.1109/IMCCC.2015.228
Abstract: Research on multi-scroll hyperchaotic systems, which perform well in secure communication, is comparatively scarce. There are no systematic design methods, and current methods have difficulty dealing with uncertainties. In this paper, an adaptive backstepping control is proposed. Adaptive updating laws are presented to approximate the uncertainties. The proposed method improves the robust performance of the controller with only two control inputs. The asymptotic convergence of the synchronization errors to zero is proved by Lyapunov functions. Finally, simulation examples are presented to demonstrate the effectiveness of the proposed synchronization control scheme and its validity in secure communication.
Keywords: Lyapunov methods; chaotic communication; control nonlinearities; synchronisation; telecommunication security; Lyapunov functions; adaptive back stepping control; multiscroll hyperchaotic systems; secure communication; synchronization control scheme; systematic design methods; Adaptive control; Backstepping; Chaotic communication; Synchronization; adaptive control; backstepping; hyperchaos; multi-scroll (ID#: 16-10299)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7406007&isnumber=7405778
I. Kiss, B. Genge and P. Haller, “A Clustering-Based Approach to Detect Cyber Attacks in Process Control Systems,” 2015 IEEE 13th International Conference on Industrial Informatics (INDIN), Cambridge, 2015, pp. 142-148. doi: 10.1109/INDIN.2015.7281725
Abstract: Modern Process Control Systems (PCS) exhibit an increasing trend towards the pervasive adoption of commodity, off-the-shelf Information and Communication Technologies (ICT). This has brought significant economic and operational benefits, but it also shifted the architecture of PCS from a completely isolated environment to an open, “system of systems” integration with traditional ICT systems, susceptible to traditional computer attacks. In this paper we present a novel approach to detect cyber attacks targeting measurements sent to control hardware, i.e., typically to Programmable Logic Controllers (PLC). The approach builds on the Gaussian mixture model to cluster sensor measurement values and a cluster assessment technique known as silhouette. We experimentally demonstrate that in this particular problem the Gaussian mixture clustering outperforms the k-means clustering algorithm. The effectiveness of the proposed technique is tested in a scenario involving the simulated Tennessee-Eastman chemical process and three different cyber attacks.
Keywords: Gaussian processes; control engineering computing; mixture models; pattern clustering; process control; production engineering computing; programmable controllers security of data; Gaussian mixture model; ICT systems; Information and Communication Technologies; PCS; PLC; cluster assessment technique; cluster sensor measurement values; computer attacks; cyber attack detection; process control systems; programmable logical controllers; silhouette; simulated Tennessee-Eastman chemical process; system of systems integration; Clustering algorithms; Computer crime; Engines; Mathematical model; Process control (ID#: 16-10300)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7281725&isnumber=7281697
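The cluster assessment technique named above, the silhouette, scores each point as s = (b − a) / max(a, b), where a is the mean distance to the point's own cluster and b the smallest mean distance to any other cluster. A self-contained sketch of the mean silhouette coefficient (illustrative, not the authors' code):

```python
def _dist(p, q):
    """Euclidean distance between two points."""
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5

def silhouette(points, labels):
    """Mean silhouette coefficient over all points; values near 1 indicate
    well-separated clusters (e.g. normal vs. attacked measurements)."""
    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)
    scores = []
    for p, l in zip(points, labels):
        own = [q for q in clusters[l] if q is not p]
        if not own:                       # singleton cluster: score 0 by convention
            scores.append(0.0)
            continue
        a = sum(_dist(p, q) for q in own) / len(own)
        b = min(sum(_dist(p, q) for q in c) / len(c)
                for k, c in clusters.items() if k != l)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)
```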
H. Wu, X. Dang, L. Zhang and L. Wang, “Kalman Filter Based DNS Cache Poisoning Attack Detection,” 2015 IEEE International Conference on Automation Science and Engineering (CASE), Gothenburg, 2015, pp. 1594-1600. doi: 10.1109/CoASE.2015.7294328
Abstract: Detection of Domain Name System cache poisoning attacks is investigated. We exploit the fact that when an attack is happening, the entropy of the query packet IP addresses of the cache server decreases, to detect the cache poisoning attack. We pay particular attention to the detection method for the case in which the entropy sequence exhibits nonstationary dynamics under normal conditions. In order to handle the nonstationarity, we first model the entropy sequence by a state space equation, and then we utilize a Kalman filter to implement the attack detection. The problem is discussed for single and distributed cache poisoning attacks, respectively. For the single attack, we use the measurement errors to detect the anomaly. Under distributed attack, we utilize the correlation variation of the prediction errors to detect the attack event and identify the attacked cache servers. An experiment is presented to verify the effectiveness of our method.
Keywords: IP networks; Kalman filters; cache storage; computer network security; entropy; file servers; query processing; Kalman filter based DNS cache poisoning attack detection; attack event; attacked cache servers; correlation variation; domain name systems; entropy sequence; measurement errors; query packet IP addresses; state space equation; Correlation; Entropy; Mathematical model; Servers (ID#: 16-10301)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7294328&isnumber=7294025
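The detection pipeline above can be sketched in two steps: compute the Shannon entropy of query source IPs per window, then track the entropy sequence with a scalar Kalman filter and raise an alarm on a sharp negative innovation (an entropy drop the filter did not predict). The filter parameters and alarm rule below are illustrative assumptions, not the paper's values:

```python
import math
from collections import Counter

def window_entropy(ips):
    """Shannon entropy (bits) of the source-IP distribution in one window."""
    counts = Counter(ips)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def kalman_detect(entropies, q=1e-3, r=0.05, threshold=1.0):
    """Scalar Kalman filter over the entropy sequence; flag windows whose
    normalized negative innovation exceeds the threshold, i.e. entropy
    fell much faster than the nonstationary baseline predicts."""
    x, p = entropies[0], 1.0        # state estimate and its variance
    alarms = []
    for k, z in enumerate(entropies[1:], start=1):
        p += q                      # predict step (random-walk state model)
        innovation = z - x
        s = p + r                   # innovation variance
        if -innovation / math.sqrt(s) > threshold:
            alarms.append(k)        # sharp entropy drop -> possible poisoning
        kgain = p / s               # update step
        x += kgain * innovation
        p *= (1 - kgain)
    return alarms
```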
R. Cao, J. Wu, C. Long and S. Li, “Stability Analysis for Networked Control Systems Under Denial-of-Service Attacks,” 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, 2015, pp. 7476-7481. doi: 10.1109/CDC.2015.7403400
Abstract: With the large-scale application of modern information technology in networked control systems (NCSs), the security of networked control systems has drawn more and more attention in recent years. However, how severely NCSs can be affected by adversaries has seldom been considered. In this paper, we consider a stability problem for NCSs under denial-of-service (DoS) attacks, in which control and measurement packets are transmitted over communication networks. We model the NCSs under DoS attacks as a singular system, where the effect of the DoS attack is described as a time-varying delay. By a Wirtinger-based integral inequality, a less conservative attack-based delay-dependent criterion for NCSs' stability is obtained in terms of linear matrix inequalities (LMIs). Finally, examples are given to illustrate the effectiveness of our methods.
Keywords: delays; linear matrix inequalities; networked control systems; stability; time-varying systems; DoS attacks; LMI; NCS stability; Wirtinger-based integral inequality; attack-based delay-dependent criterion; communication networks; control packets; denial-of-service attacks; large-scale application; measurement packets; stability analysis; stability problem; time-varying delay; Computer crime; Delays; Networked control systems; Power system stability; Stability criteria; Symmetric matrices (ID#: 16-10302)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7403400&isnumber=7402066
M. Wang, X. Wu, D. Liu, C. Wang, T. Zhang and P. Wang, “A Human Motion Prediction Algorithm for Non-Binding Lower Extremity Exoskeleton,” Information and Automation, 2015 IEEE International Conference on, Lijiang, 2015, pp. 369-374. doi: 10.1109/ICInfA.2015.7279315
Abstract: This paper introduces a novel approach to predict human motion for the Non-binding Lower Extremity Exoskeleton (NBLEX). Most exoskeletons must be attached to the pilot, which poses potential security problems. In order to solve these problems, the NBLEX is studied and designed to free pilots from the exoskeleton. Rather than applying Electromyography (EMG) and Ground Reaction Force (GRF) signals to predict human motion as in binding exoskeletons, the non-binding exoskeleton robot collects Inertial Measurement Unit (IMU) signals from the pilot. Seven basic motions are studied; each motion is divided into four phases, except the standing-still motion, which has only one motion phase. The human motion prediction algorithm adopts a Support Vector Machine (SVM) to classify human motion phases and a Hidden Markov Model (HMM) to predict human motion. The experimental data demonstrate the effectiveness of the proposed algorithm.
Keywords: control engineering computing; hidden Markov models; mobile robots; motion control; support vector machines; EMG signal; GRF signal; HMM; IMU signal; NBLEX; SVM; electromyography; ground reaction force signal; hidden Markov model; human motion phase; human motion prediction algorithm; inertial measurement unit signal; nonbinding exoskeleton robot; nonbinding lower extremity exoskeleton; standing-still motion; support vector machine; Accuracy; Classification algorithms; Exoskeletons; Hidden Markov models; Prediction algorithms; Support vector machines; Training; Exoskeleton; Hidden Markov Model; Human Motion Prediction; Non-binding Lower Extremity Exoskeleton; Support Vector Machine (ID#: 16-10303)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7279315&isnumber=7279248
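In the prediction pipeline above, per-frame SVM phase decisions can be smoothed and predicted with an HMM. A standard Viterbi decoder over motion phases illustrates the idea (the states, transition and emission probabilities in the test are made-up illustrations, not the paper's trained values):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state (motion-phase) sequence for an observation
    sequence, e.g. per-frame SVM outputs treated as HMM observations."""
    # V[t][s]: probability of the best path ending in state s at frame t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max((V[-2][s0] * trans_p[s0][s] * emit_p[s][o], s0)
                             for s0 in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]
```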
M. Ingels, A. Valjarevic and H. S. Venter, “Evaluation and Analysis of a Software Prototype for Guidance and Implementation of a Standardized Digital Forensic Investigation Process,” Information Security for South Africa (ISSA), 2015, Johannesburg, 2015, pp. 1-8. doi: 10.1109/ISSA.2015.7335052
Abstract: Performing a digital forensic investigation requires a standardized and formalized process to be followed. The authors have contributed to the creation of an international standard on the digital forensic investigation process, namely ISO/IEC 27043:2015, which was published in 2015. However, there currently exists no application that would guide a digital forensic investigator in implementing such a standardized process. The prototype of such an application has been developed by the authors and presented in their previous work. The prototype is a software application with two main functionalities. The first is to act as an expert system that can be used for guidance and training of novice investigators. The second is to enable reliable logging of all actions taken within the investigation processes, enabling validation that a correct process was used. The benefits of such a prototype include possible improvement in the efficiency and effectiveness of an investigation and easier training of novice investigators. The last, and possibly most important, benefit is higher admissibility of digital evidence, since it will be easier to show that the standardized process was followed. This paper presents an evaluation of the prototype, performed in order to measure the usability and quality of the prototype software as well as its effectiveness. The evaluation consisted of two main parts. The first part was a software usability evaluation, performed using the Software Usability Measurement Inventory (SUMI), a reliable method of measuring software usability and quality. The second part was a questionnaire set up by the authors, with the aim of evaluating whether the prototype meets its goals.
The results indicated that the prototype reaches most of its goals, that it has the intended functionalities, and that it is relatively easy to learn and use. Areas of improvement and future work were also identified.
Keywords: digital forensics; software performance evaluation; software prototyping; software quality; ISO/IEC 27043:2015; SUMI; digital forensic investigation process; software prototype analysis; software prototype evaluation; software quality; software usability evaluation; software usability measurement inventory; Cryptography; Libraries; Organizations; Software; Standards organizations; Yttrium; digital forensic investigation process model; implementation prototype; software evaluation; standardization (ID#: 16-10304)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7335052&isnumber=7335039
J. G. Cui, P. J. Zhou, M. Y. Yu, C. Liu and X. Y. Xu, “Research on Time Optimization Algorithm of Aircraft Support Activity with Limited Resources,” 2015 Fifth International Conference on Instrumentation and Measurement, Computer, Communication and Control (IMCCC), Qinhuangdao, 2015, pp. 1298-1303. doi: 10.1109/IMCCC.2015.279
Abstract: The required time of aircraft turnaround support activity directly affects aircraft combat effectiveness. To address the problem that the shortest support-activity time is difficult to achieve under limited aircraft support resources, a time optimization algorithm for aircraft turnaround support activity based on the Branch and Cut Method (BCM) is given in this paper. The purpose is to achieve the required shortest time of the aircraft turnaround support activity. The constraints are the logical relationships between the limited support personnel and the support jobs. The shortest-time process is calculated and compiled into a computer program, and a time-optimal simulation system for aircraft turnaround support activity is designed and developed. Finally, a real support job for a certain aircraft type is analyzed. The results show that the calculated result is accurate and reliable, is in line with actual support practice, and can provide guidance for aircraft turnaround support and decision-making. The reliability and automation level of support activities are enhanced, and the approach has good application value in engineering.
Keywords: aircraft; decision making; optimisation; reliability theory; resource allocation; tree searching; BCM; aircraft turnaround support activity; automation level; branch and cut method; reliability level; resource limitation; time optimal simulation system; time optimization algorithm; Aerospace electronics; Aircraft; Aircraft manufacture; Atmospheric modeling; Mathematical model; Optimization; Personnel; Branch and Cut Method; Limited resources; Simulation; Support activity time (ID#: 16-10305)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7406058&isnumber=7405778
J. M. G. Duarte, E. Cerqueira and L. A. Villas, “Indoor Patient Monitoring Through Wi-Fi and Mobile Computing,” 2015 7th International Conference on New Technologies, Mobility and Security (NTMS), Paris, 2015, pp. 1-5. doi: 10.1109/NTMS.2015.7266497
Abstract: The developments in wireless sensor networks, mobile technology and cloud computing have been pushing forward the concept of intelligent or smart cities, and each day smarter infrastructures are being developed with the aim of enhancing the well-being of citizens. These advances in technology can provide considerable benefits for the diverse components of smart cities, including smart health, which can be seen as the facet of smart cities dedicated to healthcare. A considerable challenge that still requires appropriate responses is the development of mechanisms to detect health issues in patients from the very beginning. In this work, we propose a novel solution for indoor patient monitoring for medical purposes. The output of our solution is a report containing the patterns of room occupation by the patient inside her/his home during a certain period of time. This report allows health care professionals to detect changes in the behavior of the patient that can be interpreted as early signs of a health related issue. The proposed solution was implemented on an Android smartphone and tested in a real scenario. To assess our solution, 400 measurements divided into 10 experiments were performed, reaching a total of 391 correct detections, which corresponds to an average effectiveness of 97.75%.
Keywords: cloud computing; indoor radio; mobile computing; patient monitoring; smart cities; smart phones; wireless LAN; wireless sensor networks; Android smartphone; Wi-Fi; indoor patient monitoring; intelligent cities; smart health; wireless sensor networks; IEEE 802.11 Standard; Medical services; Mobile communication; Mobile computing; Monitoring; Sensors; Wireless sensor networks; Behavior; Indoor monitoring; Patient; Smart health; Smartphone; Wi-Fi (ID#: 16-10306)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266497&isnumber=7266450
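The abstract above does not specify how room occupation is inferred from Wi-Fi, so the following is an assumed mechanism shown for illustration only: nearest-neighbor RSSI fingerprinting, where the observed signal-strength vector is matched to the closest stored per-room fingerprint.

```python
def classify_room(rssi, fingerprints):
    """Assumed indoor-localization step: return the room whose stored RSSI
    fingerprint (per access point, in dBm) is closest in squared distance
    to the observed RSSI vector."""
    return min(fingerprints,
               key=lambda room: sum((rssi[ap] - fingerprints[room][ap]) ** 2
                                    for ap in rssi))
```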
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
End to End Security and the Internet of Things 2015 |
End to end security focuses on the concept of uninterrupted protection of data traveling between two communicating partners. Generally, encryption is the method of choice. For the Internet of Things (IoT), “baked in” security is a major challenge. The research cited here was presented during 2015.
S. R. Moosavi et al., “Session Resumption-Based End-to-End Security for Healthcare Internet-of-Things,” Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on, Liverpool, 2015, pp. 581-588. doi: 10.1109/CIT/IUCC/DASC/PICOM.2015.83
Abstract: In this paper, a session resumption-based end-to-end security scheme for the healthcare Internet of Things (IoT) is proposed. The proposed scheme is realized by employing a certificate-based DTLS handshake between end-users and smart gateways as well as utilizing the DTLS session resumption technique. By handing over the necessary security context, smart gateways free the sensors from having to authenticate and authorize remote end-users. Session resumption enables end-users and medical sensors to communicate directly without establishing the communication from the initial handshake. Session resumption uses an abbreviated form of the DTLS handshake and requires neither certificate-related nor public-key functionalities. This alleviates the burden on medical sensors, which no longer need to perform those expensive operations. The energy-performance of the proposed scheme is evaluated by developing a remote patient monitoring prototype based on healthcare IoT. The evaluation results show that our scheme is about 97% and 10% faster than certificate-based and symmetric key-based DTLS, respectively. Also, certificate-based DTLS consumes about 2.2X more RAM and 2.9X more ROM than required by our scheme, while our scheme and symmetric key-based DTLS have almost identical RAM and ROM requirements. The security analysis reveals that the proposed scheme fulfills the requirements of end-to-end security and provides a higher security level than related approaches found in the literature. Thus, the presented scheme is a well-suited solution to provide end-to-end security for healthcare IoT.
Keywords: Internet of Things; health care; public key cryptography; DTLS session resumption technique; IoT; end-to-end security; energy performance evaluations; healthcare Internet-of-Things; medical sensors; public key functionalities; remote end-users; remote patient monitoring prototype; security context; session resumption technique; smart gateways; Computers; Conferences; Information technology; Ubiquitous computing (ID#: 16-11225)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363124&isnumber=7362962
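The session-resumption idea the abstract describes (skip certificates and public-key work after one full handshake) can be illustrated with a toy session cache; the class and variable names below are illustrative assumptions, not the authors' implementation, and real DTLS resumption negotiates keys inside the protocol rather than in a dictionary.

```python
import secrets

class SessionCache:
    """Toy model of DTLS session resumption: after one full
    certificate-based handshake, later connections reuse the cached
    master secret and skip the expensive public-key operations."""

    def __init__(self):
        self._sessions = {}  # session_id -> master_secret

    def full_handshake(self):
        # Expensive path: certificate verification + key exchange
        # (modeled here by simply generating fresh secrets).
        session_id = secrets.token_hex(8)
        master_secret = secrets.token_bytes(48)
        self._sessions[session_id] = master_secret
        return session_id, master_secret

    def resume(self, session_id):
        # Abbreviated path: no certificates, no public-key crypto,
        # just a lookup of the previously negotiated secret.
        return self._sessions.get(session_id)

cache = SessionCache()
sid, secret = cache.full_handshake()
assert cache.resume(sid) == secret        # abbreviated handshake succeeds
assert cache.resume("unknown") is None    # unknown session forces a full handshake
```

A real gateway would also expire cached sessions; the sketch omits lifetimes entirely.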
S. S. Basu, S. Tripathy and A. R. Chowdhury, “Design Challenges and Security Issues in the Internet of Things,” Region 10 Symposium (TENSYMP), 2015 IEEE, Ahmedabad, 2015, pp. 90-93. doi: 10.1109/TENSYMP.2015.25
Abstract: The world is rapidly getting connected. Commonplace everyday things are providing and consuming software services exposed by other things and service providers. A mash-up of such services extends the reach of the current Internet to potentially resource-constrained “Things”, constituting what is being referred to as the Internet of Things (IoT). IoT is finding applications in various fields like Smart Cities, Smart Grids, Smart Transportation, e-health and e-governance. The complexity of developing IoT solutions arises from diversity at every level, from device capability all the way to business requirements. In this paper we focus primarily on the security issues related to design challenges in IoT applications and present an end-to-end security framework.
Keywords: Internet; Internet of Things; security of data; Internet of Things; IoT; e-governance; e-health; end-to-end security framework; service providers; smart cities; smart grids; smart transportation; software services; Computer crime; Encryption; Internet of things; Peer-to-peer computing; Protocols; End-to-end (E2E) security; Internet of Things (IoT); Resource constrained devices; Security (ID#: 16-11226)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166245&isnumber=7166213
D. Bonino et al., “ALMANAC: Internet of Things for Smart Cities,” Future Internet of Things and Cloud (FiCloud), 2015 3rd International Conference on, Rome, 2015, pp. 309-316. doi: 10.1109/FiCloud.2015.32
Abstract: Smart cities advocate future environments where sensor pervasiveness, data delivery and exchange, and information mash-up enable better support of every aspect of (social) life in human settlements. As this vision matures, evolves and is shaped against several application scenarios and adoption perspectives, a common need for scalable, pervasive, flexible and replicable infrastructures emerges. Such a need is currently fostering new design efforts to grant performance, reuse and interoperability while avoiding the knowledge silos typical of early efforts on similar topics, e.g. automation in buildings and homes. This paper introduces a federated smart city platform (SCP) developed in the context of the ALMANAC FP7 EU project and discusses lessons learned during the first experimental application of the platform to a smart waste management scenario in a medium-sized European city. The ALMANAC SCP aims to integrate Internet of Things (IoT), capillary networks and metro access networks to offer smart services to the citizens, and thus enable Smart City processes. The key element of the SCP is a middleware supporting semantic interoperability of heterogeneous resources, devices, services and data management. The platform is built upon a dynamic federation of private and public networks, while supporting end-to-end security and privacy. Furthermore, it also enables the integration of services that, although natively external to the platform itself, enrich the set of data and information used by the Smart City applications it supports.
Keywords: Internet of Things; data privacy; middleware; open systems; smart cities; waste management; ALMANAC FP7 EU project; European city; capillary networks; data management; end-to-end privacy; end-to-end security; heterogeneous devices; heterogeneous resources; heterogeneous services; metro access networks; middleware; private networks; public networks; semantic interoperability; sensor pervasiveness; smart city platform; smart waste management scenario; Cities and towns; Context; Data integration; Metadata; Semantics; Smart cities; federation; internet of things; platform; smart city (ID#: 16-11227)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7300833&isnumber=7300539
J. M. Bohli, A. Skarmeta, M. Victoria Moreno, D. García and P. Langendörfer, “SMARTIE Project: Secure IoT Data Management for Smart Cities,” Recent Advances in Internet of Things (RIoT), 2015 International Conference on, Singapore, 2015, pp. 1-6. doi: 10.1109/RIOT.2015.7104906
Abstract: The vision of SMARTIE (Secure and sMARter ciTIEs data management) is to create a distributed framework for IoT-based applications storing, sharing and processing large volumes of heterogeneous information. This framework is envisioned to enable end-to-end security and trust in information delivery for decision-making purposes following the data owner's privacy requirements. SMARTIE follows a data-centric paradigm, which will offer highly scalable and secure information for smart city applications. The heart of this paradigm will be the “information management and services” plane as a unifying umbrella, which will operate above heterogeneous network devices and data sources, and will provide advanced secure information services enabling powerful higher-layer applications.
Keywords: Internet of Things; data privacy; database management systems; decision making; distributed processing; information services; smart cities; town and country planning; trusted computing; IoT-based applications; SMARTIE project; data owner privacy requirements; data sources; data-centric paradigm; decision-making purposes; distributed framework; end-to-end security; heterogeneous information processing; heterogeneous information sharing; heterogeneous information storing; heterogeneous network devices; information delivery; information management; secure IoT data management; secure and smarter cities data management; secure information services; smart city applications; trust; Authorization; Cities and towns; Cryptography; Heating; Monitoring; IoT; Security; Smart Cities (ID#: 16-11228)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7104906&isnumber=7104893
F. Van den Abeele, T. Vandewinckele, J. Hoebeke, I. Moerman and P. Demeester, “Secure Communication in IP-Based Wireless Sensor Networks via a Trusted Gateway,” Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, Singapore, 2015, pp. 1-6. doi: 10.1109/ISSNIP.2015.7106963
Abstract: As the IP-integration of wireless sensor networks enables end-to-end interactions, solutions to appropriately secure these interactions with hosts on the Internet are necessary. At the same time, burdening wireless sensors with heavy security protocols should be avoided. While Datagram TLS (DTLS) strikes a good balance between these requirements, it entails a high cost for setting up communication sessions. Furthermore, not all types of communication have the same security requirements: e.g. some interactions might only require authorization and do not need confidentiality. In this paper we propose and evaluate an approach that relies on a trusted gateway to mitigate the high cost of the DTLS handshake in the WSN and to provide the flexibility necessary to support a variety of security requirements. The evaluation shows that our approach leads to considerable energy savings and latency reduction when compared to a standard DTLS use case, while requiring no changes to the end hosts themselves.
Keywords: IP networks; Internet; authorisation; computer network security; energy conservation; internetworking; protocols; telecommunication power management; trusted computing; wireless sensor networks; DTLS handshake; WSN authorization; communication security; datagram TLS; end-to-end interactions; energy savings; heavy security protocol; latency reduction; trusted gateway; wireless sensor network IP integration; Bismuth; Cryptography; Logic gates; Random access memory; Read only memory; Servers; Wireless sensor networks; 6LoWPAN; CoAP; DTLS; Gateway; IP; IoT (ID#: 16-11229)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106963&isnumber=7106892
V. L. Shivraj, M. A. Rajan, M. Singh and P. Balamuralidhar, “One Time Password Authentication Scheme Based on Elliptic Curves for Internet of Things (IoT),” Information Technology: Towards New Smart World (NSITNSW), 2015 5th National Symposium on, Riyadh, 2015, pp. 1-6. doi: 10.1109/NSITNSW.2015.7176384
Abstract: Establishing end-to-end authentication between devices and applications in the Internet of Things (IoT) is a challenging task. Due to heterogeneity in terms of devices, topology, communication and the different security protocols used in IoT, existing authentication mechanisms are vulnerable to security threats and can disrupt the progress of IoT in realizing Smart City, Smart Home and Smart Infrastructure, etc. To achieve end-to-end authentication between IoT devices/applications, the existing authentication schemes and security protocols require a two-factor authentication mechanism. Therefore, in this paper we review the suitability of an authentication scheme based on One Time Password (OTP) for IoT and propose a scalable, efficient and robust OTP scheme. Our proposed scheme uses the principles of a lightweight Identity Based Elliptic Curve Cryptography scheme and Lamport's OTP algorithm. We evaluate the performance of our scheme analytically and experimentally, and observe that our scheme, with a smaller key size and less infrastructure, performs on par with the existing OTP schemes without compromising the security level. Our proposed scheme can be implemented in real-time IoT networks and is the right candidate for two-factor authentication among devices, applications and their communications in IoT.
Keywords: Internet of Things; message authentication; public key cryptography; IoT; OTP; end-to-end authentication; identity based elliptic curve cryptography; one time password; password authentication; Algorithm design and analysis; Authentication; Elliptic curves; Logic gates; Protocols; Servers (ID#: 16-11230)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7176384&isnumber=7176382
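Lamport's OTP algorithm, which the abstract names as one building block, is a hash chain consumed in reverse; this minimal sketch (SHA-256 standing in for the paper's elliptic-curve-based construction, which is omitted) shows why each password can be checked with a single hash.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int):
    """Lamport's scheme: the server stores h^n(seed); the client reveals
    h^(n-1)(seed), then h^(n-2)(seed), ... one value per login."""
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain  # chain[i] == h^i(seed)

def verify(server_state: bytes, otp: bytes) -> bool:
    # A password is valid iff hashing it once yields the stored value.
    return h(otp) == server_state

chain = make_chain(b"secret-seed", 100)
server_state = chain[100]          # server initialized with h^100(seed)
for i in range(99, 89, -1):        # ten successive logins
    assert verify(server_state, chain[i])
    server_state = chain[i]        # server rolls its state forward
```

Because the hash is one-way, an eavesdropper who captures one password cannot compute the next one in the sequence.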
N. Zhang, K. Yuan, M. Naveed, X. Zhou and X. Wang, “Leave Me Alone: App-Level Protection Against Runtime Information Gathering on Android,” 2015 IEEE Symposium on Security and Privacy, San Jose, CA, 2015, pp. 915-930. doi: 10.1109/SP.2015.61
Abstract: Stealing of sensitive information from apps is always considered to be one of the most critical threats to Android security. Recent studies show that this can happen even to apps without explicit implementation flaws, through exploiting design weaknesses of the operating system, e.g., shared communication channels such as Bluetooth, and side channels such as memory and network-data usages. In all these attacks, a malicious app needs to run side-by-side with the target app (the victim) to collect its runtime information. Examples include recording phone conversations from the phone app, gathering WebMD's data usages to infer the disease condition the user looks at, etc. This runtime-information-gathering (RIG) threat is realistic and serious, as demonstrated by prior research and our new findings, which reveal that malware monitoring popular Android-based home security systems can figure out when the house is empty and the user is not looking at surveillance cameras, and can even turn off the alarm delivered to her phone. To defend against this new category of attacks, we propose a novel technique that changes neither the operating system nor the target apps, and provides immediate protection as soon as an ordinary app (with only normal and dangerous permissions) is installed. This new approach, called App Guardian, thwarts a malicious app's runtime monitoring attempt by pausing all suspicious background processes when the target app (called the principal) is running in the foreground, and resuming them after the app stops and its runtime environment is cleaned up. Our technique leverages a unique feature of Android, on which third-party apps running in the background are often considered to be disposable and can be stopped anytime with only a minor performance and utility implication. We further limit such an impact by only focusing on a small set of suspicious background apps, which are identified by their behaviors inferred from their side channels (e.g., thread names, CPU scheduling and kernel time). App Guardian is also carefully designed to choose the right moments to start and end the protection procedure, and to effectively protect itself against malicious apps. Our experimental studies show that this new technique defeated all known RIG attacks, with small impacts on the utility of legitimate apps and the performance of the OS. Most importantly, the ideas underlying our approach, including app-level protection, side-channel based defense and lightweight response, not only significantly raise the bar for RIG attacks and the research on this subject but can also inspire follow-up efforts on new detection systems practically deployable in the fragmented Android ecosystem.
Keywords: Internet of Things; cryptography; invasive software; mobile computing; smart phones; Android security; App Guardian; IoT; RIG threat; app-level protection; malware monitoring; runtime information gathering; side-channel based defense; Androids; Bluetooth; Humanoid robots; Monitoring; Runtime; Security; Smart phones (ID#: 16-11231)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163068&isnumber=7163005
M. Rao, T. Newe, I. Grout, E. Lewis and A. Mathur, “FPGA Based Reconfigurable IPSec AH Core Suitable for IoT Applications,” Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on, Liverpool, 2015, pp. 2212-2216. doi: 10.1109/CIT/IUCC/DASC/PICOM.2015.327
Abstract: Real-world deployments of Internet of Things (IoT) applications require secure communication. IPSec (Internet Protocol Security) is an important and widely used security protocol (in the IP layer) that provides end-to-end secure communication. Implementation of IPSec is computationally intensive, which significantly limits the performance of high-speed networks. To overcome this issue, hardware implementation of IPSec is the best solution. IPSec includes two main protocols, namely the Authentication Header (AH) and the Encapsulating Security Payload (ESP), with two modes of operation, transport mode and tunnel mode. In this work we present an FPGA implementation of the IPSec AH protocol. This implementation supports both tunnel and transport modes of operation. The cryptographic hash function Secure Hash Algorithm-3 (SHA-3) is used to calculate the hash value for the AH protocol. The proposed IPSec AH core can be used to provide a data authentication security service to IoT applications.
Keywords: IP networks; Internet of Things; cryptographic protocols; field programmable gate arrays; AH; ESP; FPGA based reconfigurable IPSec AH core; IP layer; Internet protocol security; IoT applications; SHA; authentication header; cryptographic hash function; data authentication security service; encapsulating security payload; end to end secure communication; secure hash algorithm; transport mode; tunnel mode; Authentication; Cryptography; Field programmable gate arrays; Internet; Protocols; FPGA; IPSec; SHA-3 (ID#: 16-11232)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363373&isnumber=7362962
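The AH core's job, computing an Integrity Check Value (ICV) with SHA-3 over packet data, can be sketched in a few lines; the simple key-prefix construction here is an illustrative assumption (RFC-conformant AH uses a negotiated authenticated transform and zeroes mutable header fields before hashing).

```python
import hashlib

def ah_icv(packet: bytes, key: bytes) -> bytes:
    """Toy Integrity Check Value for an AH-style header: a keyed
    SHA-3 digest over the (immutable parts of the) packet. Real AH
    processing zeroes mutable fields such as TTL before hashing;
    that step is omitted here."""
    return hashlib.sha3_256(key + packet).digest()

key = b"shared-sa-key"
pkt = b"ip-header|payload"
icv = ah_icv(pkt, key)
assert ah_icv(pkt, key) == icv                    # receiver recomputes and matches
assert ah_icv(b"ip-header|tampered", key) != icv  # tampering is detected
```

Because AH authenticates but does not encrypt, a mismatched ICV signals modification in transit while the payload itself stays readable.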
A. Ahrary and D. Ludena, “Research Studies on the Agricultural and Commercial Field,” Advanced Applied Informatics (IIAI-AAI), 2015 IIAI 4th International Congress on, Okayama, 2015, pp. 669-673. doi: 10.1109/IIAI-AAI.2015.291
Abstract: The new Internet of Things (IoT) paradigm is giving the scientific community the possibility to create integrated environments where information can be exchanged among heterogeneous networks in an automated way, in order to provide a richer experience to the user and to give specific relevant information regarding the particular environment with which the user is interacting. Those characteristics are highly valuable for the novel nutrition-based vegetable production and distribution system, in which the multiple benefits of Big Data were used to generate healthy food recommendations for the end user and to feed different analytics into the system to improve its efficiency. Moreover, the different IoT capabilities, specifically automation and heterogeneous network communication, are valuable for improving the information matrix of our project. This paper discusses the different available IoT technologies, their security capabilities and assessment, and how they could be useful for our project.
Keywords: Big Data; Internet of Things; agriculture; IoT capabilities; agricultural field; commercial field; distribution system; healthy food recommendation; integrated environments; network communication; research studies; scientific community; vegetable production; Agriculture; Big data; Business; Internet of things; Production; Security; Big Data infrastructure; Data Analysis; IoT; IoT Security (ID#: 16-11233)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7373989&isnumber=7373854
W. K. Bodin, D. Jaramillo, S. K. Marimekala and M. Ganis, “Security Challenges and Data Implications by Using Smartwatch Devices in the Enterprise,” Emerging Technologies for a Smarter World (CEWIT), 2015 12th International Conference & Expo on, Melville, NY, 2015, pp. 1-5. doi: 10.1109/CEWIT.2015.7338164
Abstract: In the age of the Internet of Things, use of Smartwatch devices in the enterprise is evolving rapidly and many companies are exploring, adopting and researching the use of these devices in Enterprise IT (Information Technology). The biggest challenge presented to an organization is understanding how to integrate these devices with the back-end systems, building the data correlation and analytics while ensuring the security of the overall system. The core objective of this paper is to provide a brief overview of such security challenges and data exposures to be considered. The research effort focuses on three key questions: 1. Data: how will we integrate these data streams of physical-world instrumentation with all of our existing data? 2. Security: how can pervasive sensing and analytics systems preserve and protect user security? 3. Usability: what hardware and software systems will make developing new intelligent and secure Smartwatch applications as easy as a modern web application? This area of research is in its early stages, and through this paper we attempt to bring different views on why data, security and usability are important for Enterprise IT to adopt this type of Internet of Things (IoT) device in the enterprise.
Keywords: Internet of Things; electronic commerce; mobile computing; security of data; watches; IoT device; analytics systems; data implications; enterprise IT; information technology; pervasive sensing system; security challenges; smartwatch devices; Biomedical monitoring; Internet; Media; Mobile communication; Monitoring; Security; Smart phones; Enterprise IT; Security; Smartwatch; analytics; data correlation (ID#: 16-11234)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7338164&isnumber=7338153
K. Lee, D. Kim, D. Ha, U. Rajput and H. Oh, “On Security and Privacy Issues of Fog Computing Supported Internet of Things Environment,” Network of the Future (NOF), 2015 6th International Conference on the, Montreal, QC, 2015, pp. 1-3. doi: 10.1109/NOF.2015.7333287
Abstract: Recently, the concept of the Internet of Things (IoT) is attracting much attention due to its huge potential. IoT uses the Internet as a key infrastructure to interconnect numerous geographically diversified IoT nodes which usually have scarce resources, and therefore the cloud is used as a key back-end supporting infrastructure. In the literature, the collection of the IoT nodes and the cloud is collectively called an IoT cloud. Unfortunately, the IoT cloud suffers from various drawbacks such as huge network latency as the volume of data being processed within the system increases. To alleviate this issue, the concept of fog computing is introduced, in which fog-like intermediate computing buffers are located between the IoT nodes and the cloud infrastructure to locally process a significant amount of regional data. Compared to the original IoT cloud, the communication latency as well as the overhead at the back-end cloud infrastructure can be significantly reduced in the fog computing supported IoT cloud, which we will refer to as IoT fog. Consequently, several valuable services, which were difficult to deliver with the traditional IoT cloud, can be effectively offered by the IoT fog. In this paper, however, we argue that the adoption of IoT fog introduces several unique security threats. We first discuss the concept of the IoT fog as well as the existing security measures, which might be useful to secure IoT fog. Then, we explore potential threats to IoT fog.
Keywords: Internet of Things; cloud computing; data privacy; security of data; Internet of Things environment; IoT cloud; IoT fog; IoT nodes; back-end cloud infrastructure; back-end supporting infrastructure; cloud infrastructure; communication latency; fog computing; network latency; privacy issues; security issues; security threats; Cloud computing; Distributed databases; Internet of things; Privacy; Real-time systems; Security; Sensors (ID#: 16-11235)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7333287&isnumber=7333276
R. M. Savola, P. Savolainen, A. Evesti, H. Abie and M. Sihvonen, “Risk-Driven Security Metrics Development for an E-Health IoT Application,” Information Security for South Africa (ISSA), 2015, Johannesburg, 2015, pp. 1-6. doi: 10.1109/ISSA.2015.7335061
Abstract: Security and privacy for e-health Internet-of-Things applications is a challenge arising due to the novelty and openness of the solutions. We analyze the security risks of an envisioned e-health application for elderly persons' day-to-day support and chronic disease self-care, from the perspectives of the service provider and end-user. In addition, we propose initial heuristics for security objective decomposition aimed at security metrics definition. Systematically defined and managed security metrics enable higher effectiveness of security controls, enabling informed risk-driven security decision-making.
Keywords: Internet of Things; data privacy; decision making; diseases; geriatrics; health care; risk management; security of data; chronic disease self-care; e-health Internet-of-Things applications; e-health IoT application; elderly person day-to-day support; privacy; risk-driven security decision-making; risk-driven security metrics development; security controls; security objective decomposition; Artificial intelligence; Android; risk analysis; security effectiveness; security metrics (ID#: 16-11236)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7335061&isnumber=7335039
E. Vasilomanolakis, J. Daubert, M. Luthra, V. Gazis, A. Wiesmaier and P. Kikiras, “On the Security and Privacy of Internet of Things Architectures and Systems,” 2015 International Workshop on Secure Internet of Things (SIoT), Vienna, 2015, pp. 49-57. doi: 10.1109/SIOT.2015.9
Abstract: The Internet of Things (IoT) brings together a multitude of technologies, with a vision of creating an interconnected world. This will benefit both corporations and end-users. However, a plethora of security and privacy challenges need to be addressed for the IoT to be fully realized. In this paper, we identify and discuss the properties that constitute the uniqueness of the IoT in terms of the upcoming security and privacy challenges. Furthermore, we construct requirements induced by the aforementioned properties. We survey the four most dominant IoT architectures and analyze their security and privacy components with respect to the requirements. Our analysis shows a mediocre coverage of security and privacy requirements. Finally, through our survey we identify a number of research gaps that constitute the steps ahead for future research.
Keywords: Internet of Things; data privacy; IoT architecture; privacy; security; Communication networks; Computer architecture; Internet of things; Privacy; Resilience; Security; Sensors (ID#: 16-11237)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7411837&isnumber=7411823
G. Kim, J. Kim and S. Lee, “An SDN Based Fully Distributed NAT Traversal Scheme for IoT Global Connectivity,” Information and Communication Technology Convergence (ICTC), 2015 International Conference on, Jeju, 2015, pp. 807-809. doi: 10.1109/ICTC.2015.7354671
Abstract: Existing NAT solves the IP address exhaustion problem by binding private IP addresses to public IP addresses, and NAT traversal techniques such as hole punching enable end-to-end communication between devices located in different private networks. However, such technologies centralize the workload at the NAT gateway and increase transmission delay caused by per-packet modification. In this paper, we propose an SDN based fully distributed NAT traversal scheme, which can distribute the workload of NAT processing to devices and reduce transmission delay by packet switching instead of packet modification. Furthermore, we describe an SDN based IoT connectivity management architecture for supporting IoT global connectivity with enhanced real-time performance and security.
Keywords: IP networks; Internet of Things; computer network management; packet switching; software defined networking; telecommunication security; IP address; IoT connectivity management architecture; IoT global connectivity; NAT traversal scheme; SDN; end-to-end devices; hole punching scheme; packet modification; packet switching; transmission delay; Computer architecture; Delays; Internet; Performance evaluation; Ports (Computers); Punching; Connectivity; Network Address Translation; Software Defined Networking (ID#: 16-11238)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7354671&isnumber=7354472
P. Porambage, A. Braeken, P. Kumar, A. Gurtov and M. Ylianttila, “Efficient Key Establishment for Constrained IoT Devices with Collaborative HIP-Based Approach,” 2015 IEEE Global Communications Conference (GLOBECOM), San Diego, CA, 2015, pp. 1-6. doi: 10.1109/GLOCOM.2015.7417094
Abstract: The Internet of Things (IoT) technologies interconnect wide ranges of network devices irrespective of their resource capabilities and local networks. The device constraints and the dynamic link creations make it challenging to use pre-shared keys for every secure end-to-end (E2E) communication scenario in IoT. Variants of the Host Identity Protocol (HIP) are adopted for constructing dynamic and secure E2E connections among heterogeneous network devices with imbalanced resource profiles and little or no previous knowledge about each other. We propose a collaborative HIP solution with an efficient key establishment component for highly constrained devices in IoT, which delegates the expensive cryptographic operations to the resource-rich devices in the local networks. Finally, we demonstrate the applicability of key establishment in the collaborative HIP solution for constrained IoT devices, rather than the existing HIP variants, by providing performance and security analysis.
Keywords: Internet of Things; computer network security; protocols; E2E; HIP; Internet of Things technologies; collaborative HIP based approach; constrained IoT devices; device constraints; dynamic link creations; efficient key establishment; host identity protocol; local networks; network devices; preshared keys; resource capabilities; secure end-to-end communication; security analysis; Collaboration; Cryptography; DH-HEMTs; Protocols; Visualization (ID#: 16-11239)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7417094&isnumber=7416057
H. Derhamy, J. Eliasson, J. Delsing, P. P. Pereira and P. Varga, “Translation Error Handling for Multi-Protocol SOA Systems,” 2015 IEEE 20th Conference on Emerging Technologies & Factory Automation (ETFA), Luxembourg, 2015, pp. 1-8. doi: 10.1109/ETFA.2015.7301473
Abstract: The IoT research area has evolved to incorporate a plethora of messaging protocol standards, both existing and new, emerging as preferred means of communication. The variety of protocols and technologies enables IoT to be used in many application scenarios. However, the use of incompatible communication protocols also creates vertical silos and reduces interoperability between vendors and technology platform providers. In many applications, it is important that maximum interoperability is enabled. This can be for reasons such as efficiency, security, end-to-end communication requirements, etc. In terms of error handling, each protocol has its own methods, but there is a gap in bridging errors across protocols. Centralized software buses and integrated protocol agents are used for integrating different communication protocols. However, the aforementioned approaches do not fit well in all Industrial IoT application scenarios. This paper therefore investigates error handling challenges for a multi-protocol SOA-based translator. A proof of concept implementation is presented based on MQTT and CoAP. Experimental results show that multi-protocol error handling is possible, and furthermore a number of areas that need more investigation have been identified.
Keywords: open systems; protocols; service-oriented architecture; CoAP; MQTT; centralized software bus; communication protocols; industrial IoT; integrated protocol agents; maximum interoperability; messaging protocol standards; multiprotocol SOA systems; multiprotocol SOA-based translator; translation error handling; Computer architecture; Delays; Monitoring; Protocols; Quality of service; Servers; Service-oriented architecture; Arrowhead; Cyber-physical systems; Error handling; Internet of Things; Protocol translation; SOA; Translation (ID#: 16-11240)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7301473&isnumber=7301399
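One way to picture the error-bridging gap the abstract identifies is a translation table from MQTT CONNACK return codes to CoAP response codes; the specific pairings below are illustrative assumptions, not the mapping used in the paper's proof of concept.

```python
# Hypothetical mapping from MQTT 3.1.1 CONNACK return codes to CoAP
# response codes, in the spirit of a multi-protocol translator.
MQTT_CONNACK_TO_COAP = {
    0x01: "5.01",  # unacceptable protocol version -> Not Implemented
    0x02: "4.00",  # identifier rejected           -> Bad Request
    0x03: "5.03",  # server unavailable            -> Service Unavailable
    0x04: "4.01",  # bad user name or password     -> Unauthorized
    0x05: "4.03",  # not authorized                -> Forbidden
}

def translate_connack(code: int) -> str:
    # Unknown or unmapped errors fall back to a generic gateway error.
    return MQTT_CONNACK_TO_COAP.get(code, "5.02")  # 5.02 Bad Gateway

assert translate_connack(0x03) == "5.03"
assert translate_connack(0x7F) == "5.02"
```

A static table like this only covers connection errors; bridging runtime failures (timeouts, QoS violations) across the two protocols is exactly the open problem the paper examines.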
P. Porambage, A. Braeken, P. Kumar, A. Gurtov and M. Ylianttila, “Proxy-Based End-to-End Key Establishment Protocol for the Internet of Things,” 2015 IEEE International Conference on Communication Workshop (ICCW), London, 2015, pp. 2677-2682. doi: 10.1109/ICCW.2015.7247583
Abstract: The Internet of Things (IoT) drives the world towards an always connected paradigm by interconnecting wide ranges of network devices irrespective of their resource capabilities and local networks. This inevitably heightens the requirements for constructing dynamic and secure end-to-end (E2E) connections among heterogeneous network devices with imbalanced resource profiles and little or no previous knowledge about each other. The device constraints and the dynamic link creations make it challenging to use pre-shared keys for every secure E2E communication scenario in IoT. We propose a proxy-based key establishment protocol for the IoT, which enables any two unknown highly resource-constrained devices to initiate secure E2E communication. The highly constrained devices must be legitimate and maintain secured connections with the neighbouring less constrained devices in the local networks in which they are deployed. The less constrained devices act as proxies and collaboratively carry out the expensive cryptographic operations during the session key computation. Finally, we demonstrate the applicability of our solution to constrained IoT devices by providing performance and security analysis.
Keywords: Internet of Things; cryptographic protocols; next generation networks; E2E connections; IoT drives; cryptographic operations; end-to-end connections; heterogeneous network devices; preshared keys; proxy-based end-to-end key establishment protocol; secure E2E communication; Conferences; Cryptography; DH-HEMTs; Internet of things; Polynomials; Protocols
(ID#: 16-11241)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7247583&isnumber=7247062
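The delegation idea in the abstract above can be sketched in toy form (an illustration, not the authors' actual protocol): the constrained device splits its secret Diffie-Hellman exponent into additive shares, each less constrained proxy performs one expensive modular exponentiation, and the device recombines the partial results with cheap multiplications. The Mersenne-prime modulus and generator are placeholders.

```python
import secrets

# Toy group: Mersenne prime modulus and small generator (placeholders only;
# a real deployment would use a standardized DH group or an elliptic curve).
P = 2**127 - 1
G = 3
ORDER = P - 1

def split_exponent(x, n):
    """Split secret exponent x into n additive shares modulo the group order."""
    shares = [secrets.randbelow(ORDER) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % ORDER)
    return shares

def proxy_compute(share):
    """Each less constrained proxy performs one expensive exponentiation."""
    return pow(G, share, P)

def combine(partials):
    """The constrained device only multiplies the partial results mod P."""
    result = 1
    for part in partials:
        result = (result * part) % P
    return result

x = secrets.randbelow(ORDER)              # device's ephemeral DH exponent
partials = [proxy_compute(s) for s in split_exponent(x, 3)]
assert combine(partials) == pow(G, x, P)  # g^x recovered; no proxy ever saw x
```

No single proxy learns x, since each share is uniformly random on its own; colluding proxies could reconstruct it, however, which is why the paper's security analysis matters.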
L. Kypus, L. Vojtech and L. Kvarda, “Qualitative and Security Parameters Inside Middleware Centric Heterogeneous RFID/IoT Networks, On-Tag Approach,” Telecommunications and Signal Processing (TSP), 2015 38th International Conference on, Prague, 2015, pp. 21-25. doi: 10.1109/TSP.2015.7296217
Abstract: The work presented in this paper began as preliminary research and analysis and concluded with the testing of radio frequency identification (RFID) middleware. The intention was to gain better insight into middleware architectures and functionalities with respect to their impact on overall quality of service (QoS). The main part of the paper focuses on the lack of QoS awareness caused by missing classification of data originating from tags at the very beginning of the delivery process. Our evaluation method follows up on existing research on QoS for RFID, combining it with a new proposal derived from the ISO 25010 standard on quality requirements and evaluation (system and software quality models). The idea is to enhance the application identification area in the tag's user memory bank with encoded QoS flags and security attributes. A proof of concept with on-tag specified classes and attributes is able to manage and intentionally influence application and data processing behavior.
Keywords: middleware; quality of service; radiofrequency identification; software quality; telecommunication computing; IoT networks; QoS awareness; middleware centric heterogeneous RFID network; on-tag approach; quality requirements; radio frequency identification middlewares; software quality models; standard ISO 25010; Ecosystems; Middleware; Protocols; Quality of service; Radiofrequency identification; Security; Standards; Application identification; IoT; QoS flags; RFID; Security attributes (ID#: 16-11242)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7296217&isnumber=7296206
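As a rough illustration of the on-tag encoding idea (the bit layout below is hypothetical, not the one proposed in the paper), a single byte of the tag's user memory bank could carry a QoS class, a security flag, and an application identifier:

```python
# Hypothetical one-byte layout: 3-bit QoS class | 1-bit integrity flag | 4-bit app id.
def encode_flags(qos_class, integrity, app_id):
    assert 0 <= qos_class < 8 and 0 <= app_id < 16
    return (qos_class << 5) | (int(integrity) << 4) | app_id

def decode_flags(byte):
    return {
        "qos_class": byte >> 5,
        "integrity": bool((byte >> 4) & 1),
        "app_id": byte & 0x0F,
    }

flags = encode_flags(qos_class=5, integrity=True, app_id=9)
assert flags == 0b101_1_1001
assert decode_flags(flags) == {"qos_class": 5, "integrity": True, "app_id": 9}
```

Because the flags travel with the tag data itself, middleware can classify a reading the moment it enters the delivery process, which is the gap the paper identifies.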
S. C. Arseni, S. Halunga, O. Fratu, A. Vulpe and G. Suciu, “Analysis of the Security Solutions Implemented in Current Internet of Things Platforms,” Grid, Cloud & High Performance Computing in Science (ROLCG), 2015 Conference, Cluj-Napoca, 2015, pp. 1-4. doi: 10.1109/ROLCG.2015.7367416
Abstract: Our society finds itself at a point where it is increasingly bound by the use of technology in every activity, no matter how simple. Following this social trend, the IT paradigm called the Internet of Things (IoT) aims to group every technological end-point that has the ability to communicate under the same “umbrella”. In recent years many private and public organizations have discussed this topic and tried to provide IoT platforms that allow the grouping of devices scattered worldwide. Yet, while information flows and a certain level of scalability and connectivity have been assured, one key component, security, remains a vulnerable point of IoT platforms. In this paper we describe the main features of some of these “umbrellas”, whether open source or commercial, while analyzing and comparing the security solutions integrated in each of these IoT platforms. Moreover, through this paper we try to raise users' and organizations' awareness of the vulnerabilities that could appear at any moment when using one of the presented IoT platforms.
Keywords: Internet of Things; data analysis; security of data; IoT platform; security solution analysis; Authentication; Internet of things; Organizations; Protocols; Sensors; Internet of Things architectures; Internet of Things platforms; platforms security (ID#: 16-11243)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7367416&isnumber=7367220
U. Celentano and J. Röning, “Framework for Dependable and Pervasive eHealth Services,” Internet of Things (WF-IoT), 2015 IEEE 2nd World Forum on, Milan, 2015, pp. 634-639. doi: 10.1109/WF-IoT.2015.7389128
Abstract: Provision of health care and well-being services at the end-user's residence, together with its benefits, brings important concerns to be dealt with. This article discusses selected issues in supporting dependable pervasive eHealth services. Dependable services need to be implemented in a resource-efficient and safe way due to constrained and concurrent pre-existing conditions and the radio environment. Security is a must when dealing with personal information, and even more critical when that information regards health. Once these fundamental requirements are satisfied, and services are designed in an effective manner, social significance can be achieved in various scenarios. After discussing the above viewpoints, the article concludes with future directions in eHealth IoT, including scaling the system down to the nanoscale to interact more intimately with biological organisms.
Keywords: Internet of Things; health care; software reliability; IoT; dependable service; eHealth service; pervasive service; Data analysis; Data privacy; Distributed databases; Medical services; Privacy; Safety; Security; Dependability; diagnostics; inclusive health care; nanoscale; preventative health care; privacy; remote patient monitoring; resource use efficiency; robustness; safety; security; treatment (ID#: 16-11244)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7389128&isnumber=7389012
S. Rao, D. Chendanda, C. Deshpande and V. Lakkundi, “Implementing LWM2M in Constrained IoT Devices,” Wireless Sensors (ICWiSe), 2015 IEEE Conference on, Melaka, 2015, pp. 52-57. doi: 10.1109/ICWISE.2015.7380353
Abstract: LWM2M is an emerging Open Mobile Alliance standard that defines a fast deployable client-server specification to provide various machine to machine services. It provides both efficient device management as well as security workflow for Internet of Things applications, making it especially suitable for use in constrained networks. However, most of the ongoing research activities on this topic focus on the server domain of LWM2M. Enabling relevant LWM2M functionalities on the client side is not only critical and important but challenging as well since these end-nodes are invariably resource constrained. In this paper, we address those issues by proposing the client-side architecture for LWM2M and its complete implementation framework carried out over Contiki-based IoT nodes. We also present a lightweight IoT protocol stack that incorporates the proposed LWM2M client engine architecture and its interfaces. Our implementation is based on the recently released OMA LWM2M v1.0 specification, and supports OMA, IPSO as well as third party objects. We employ a real world application scenario to validate its usability and effectiveness. The results obtained indicate that the memory footprint overheads incurred due to the introduction of LWM2M into the client side IoT protocol stack are around 6-9%, thus making this implementation framework very appealing to even Class 1 constrained device types.
Keywords: Internet of Things; client-server systems; computer network security; mobile computing; Constrained Contiki-based IoT node; IPSO; Internet of Things application; LWM2M client engine architecture; OMA; client-server specification; device management; lightweight IoT protocol stack; machine to machine service; open mobile alliance standard; security workflow; Computer architecture; Engines; Logic gates; Microprogramming; Protocols; Servers; Standards; Constrained Nodes; Device Management; IPSO Objects; IoT Gateway; LWM2M; OMA Objects (ID#: 16-11245)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7380353&isnumber=7380339
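For readers unfamiliar with LWM2M's data model, client resources are addressed by /objectID/instanceID/resourceID paths; Object 3303 is the IPSO Temperature object and 5700 its Sensor Value resource. A minimal sketch of such a client-side resource tree (not taken from the paper's Contiki implementation):

```python
# Hypothetical, minimal client-side resource tree using LWM2M's
# /object/instance/resource addressing (3303 = IPSO Temperature object;
# 5700 = Sensor Value, 5701 = Sensor Units).
resources = {
    (3303, 0, 5700): 21.5,
    (3303, 0, 5701): "Cel",
}

def read(path: str):
    """Resolve a CoAP-style LWM2M path such as /3303/0/5700."""
    obj, inst, res = (int(part) for part in path.strip("/").split("/"))
    return resources[(obj, inst, res)]

assert read("/3303/0/5700") == 21.5
```

On a real class 1 device this table would be a static C structure rather than a dictionary, which is how the paper keeps the client engine's footprint overhead to the reported 6-9%.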
C. Doukas and F. Antonelli, “Developing and Deploying End-To-End Interoperable & Discoverable IoT Applications,” 2015 IEEE International Conference on Communications (ICC), London, 2015, pp. 673-678. doi: 10.1109/ICC.2015.7248399
Abstract: This paper presents COMPOSE: a collection of open source tools that enable the development and deployment of end-to-end Internet of Things applications and services. COMPOSE targets developers and entrepreneurs, providing a full PaaS and the essential IoT tools for applications and services. Device interoperability, service discovery and composition, security and scalability are integrated and demonstrated in use cases around smart cities and smart retail.
Keywords: Internet of Things; cloud computing; open systems; public domain software; smart cities; COMPOSE; IoT tool; PaaS; device interoperability; end-to-end Internet of Things application; open source tool; platform as a service; service discovery; smart city; Intelligent sensors; Internet of things; Mobile communication; Protocols; Internet of Things; IoT development; Smart City; Smart Retail (ID#: 16-11246)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7248399&isnumber=7248285
A. Saxena, V. Kaulgud and V. Sharma, “Application Layer Encryption for Cloud,” 2015 Asia-Pacific Software Engineering Conference (APSEC), New Delhi, India, 2015, pp. 377-384. doi: 10.1109/APSEC.2015.52
Abstract: As we move to the next generation of networks such as the Internet of Things (IoT), the amount of data generated and stored on the cloud is going to increase by several orders of magnitude. Traditionally, storage- or middleware-layer encryption has been used for protecting data at rest. However, such mechanisms are not suitable for cloud databases. More sophisticated methods include user-layer encryption (ULE), where encryption is performed in the end-user's browser, and application-layer encryption (ALE), where encryption is done within the web app. In this paper, we study security and functionality aspects of cloud encryption and present an ALE framework for Java called JADE that is designed to protect data in the event of a server compromise.
Keywords: Cloud computing; Databases; Encryption; Java; PaaS security; application layer encryption; cloud encryption; cloud security; database security (ID#: 16-11247)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7467324&isnumber=7467261
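The ALE pattern, encrypting sensitive fields inside the application before they ever reach the cloud database, can be sketched as follows. JADE itself is a Java framework using real ciphers; this Python toy substitutes a SHA-256 counter-mode keystream purely for illustration, and a vetted AEAD such as AES-GCM should be used in practice.

```python
import hashlib
import secrets

def keystream(key, nonce, length):
    """Toy SHA-256 counter-mode keystream (illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_field(key, plaintext):
    """Encrypt one sensitive field in the application layer."""
    data = plaintext.encode()
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))
    return {"nonce": nonce.hex(), "ct": ct.hex()}

def decrypt_field(key, blob):
    nonce, ct = bytes.fromhex(blob["nonce"]), bytes.fromhex(blob["ct"])
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct)))).decode()

key = secrets.token_bytes(32)
# Only the sensitive field is encrypted; the record stored in the cloud
# database never contains 'Alice' in the clear.
record = {"name": encrypt_field(key, "Alice"), "city": "London"}
assert decrypt_field(key, record["name"]) == "Alice"
```

The point of the architecture is that a compromise of the database server yields only ciphertext, since the key lives in the application tier.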
P. Srivastava and N. Garg, “Secure and Optimized Data Storage for IoT through Cloud Framework,” Computing, Communication & Automation (ICCCA), 2015 International Conference on, Noida, 2015, pp. 720-723. doi: 10.1109/CCAA.2015.7148470
Abstract: The Internet of Things (IoT) is the future. With the increasing popularity of the internet, connectivity in routine devices will soon be common practice. We therefore write this paper to encourage IoT adoption through the use of cloud computing features alongside it. A basic setback of the IoT is the management of the huge quantity of data it produces. In this paper, we suggest a framework with several data compression techniques to store this large amount of data on the cloud in less space, and using AES encryption we also improve the security of this data. The framework also shows the interaction of the data with reporting and analytic tools through the cloud. We conclude the paper with some future scope and possible enhancements of our ideas.
Keywords: Internet of Things; cloud computing; cryptography; data compression; optimisation; storage management; AES encryption technique; Internet of Things; IoT; cloud computing feature; data compression technique; data storage optimization; data storage security; Cloud computing; Encryption; Image coding; Internet of things; Sensors; AES; IoT; actuators; compression; encryption; sensors; trigger (ID#: 16-11248)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7148470&isnumber=7148334
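The framework's storage-saving claim rests on the observation that repetitive sensor logs compress well, and that compression must happen before encryption, since well-encrypted output is pseudorandom and incompressible. A quick stdlib sketch (the sample data is invented):

```python
import zlib

# 1000 fake "index,temperature" readings; IoT telemetry is highly repetitive.
readings = ",".join(f"{i},21.{i % 10}" for i in range(1000)).encode()

compressed = zlib.compress(readings, level=9)
assert len(compressed) < len(readings) // 2  # typically far smaller
# Order matters: encrypt *after* compressing, because ciphertext from a
# sound cipher such as AES is pseudorandom and will not compress.
```

Reversing the order (AES first, zlib second) would make the compression step useless and negate the framework's space savings.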
K. Yasaki, H. Ito and K. Nimura, “Dynamic Reconfigurable Wireless Connection between Smartphone and Gateway,” Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, Taichung, 2015, pp. 228-233. doi: 10.1109/COMPSAC.2015.234
Abstract: In a broad sense, the Internet of Things (IoT) includes devices that do not have Internet access capability but are aided by a gateway (such as a smartphone) that does. The combination of a gateway and wirelessly connected devices provides flexibility, but each gateway's network capability limits how many connections can be accommodated. This constraint could be removed, and further flexibility and stability provided, if we could deal with multiple gateways and balance connections among them. We therefore propose a dynamic reconfigurable wireless connection system that can hand over a device connection between gateways by introducing a driver management framework that migrates the driver module handling the network connection. We have implemented a prototype using smartphones as gateways, Bluetooth low energy (BLE) sensors as devices, and a Web application that works on an extended Web runtime and can directly control a device from the application. The combination of these, composed by the user, can be migrated from the smartphone to other gateways (including the network connection) by dragging and dropping icons, after which the gateway and devices take over the combined task. We confirmed that the proposed architecture enables end users to utilize devices flexibly and easily migrate the network connections of a particular service to another gateway.
Keywords: Internet; Internet of Things; internetworking; network servers; smart phones; Bluetooth low energy sensors; Internet access; Internet of Things; IoT; Web application; driver management framework; driver module handling; dynamic reconfigurable wireless connection system; extended Web runtime; multiple gateways; network connections; smartphone; Communication system security; IEEE 802.11 Standard; Logic gates; Protocols; Sensors; Wireless communication; Wireless sensor networks; Internet of Things; Javascript; dynamic reconfiguration; gateway; heterogeneity; mash-up; smartphone (ID#: 16-11249)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273359&isnumber=7273299
T. F. J. M. Pasquier, J. Singh, J. Bacon and O. Hermant, “Managing Big Data with Information Flow Control,” 2015 IEEE 8th International Conference on Cloud Computing, New York City, NY, 2015, pp. 524-531. doi: 10.1109/CLOUD.2015.76
Abstract: Concern about data leakage is holding back more widespread adoption of cloud computing by companies and public institutions alike. To address this, cloud tenants/applications are traditionally isolated in virtual machines or containers. But an emerging requirement is for cross-application sharing of data, for example, when cloud services form part of an IoT architecture. Information Flow Control (IFC) is ideally suited to achieving both isolation and data sharing as required. IFC enhances traditional Access Control by providing continuous, data-centric, cross-application, end-to-end control of data flows. However, large-scale data processing is a major requirement of cloud computing and is infeasible under standard IFC. We present a novel, enhanced IFC model that subsumes standard models. Our IFC model supports 'Big Data' processing, while retaining the simplicity of standard IFC and enabling more concise, accurate and maintainable expression of policy.
Keywords: Big Data; Internet of Things; authorisation; cloud computing; Big Data management; IFC; IoT architecture; access control; cloud services; cloud tenants; containers; cross-application data sharing; data flows; data leakage; information flow control; large-scale data processing; virtual machines; Access control; Companies; Context; Data models; Hospitals; Standards; Data Management; Information Flow Control; Security (ID#: 16-11250)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7214086&isnumber=7212169
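The core IFC check the paper builds on can be sketched with two tag sets per entity (a simplified secrecy/integrity model in the classic Denning tradition, not the authors' enhanced Big Data model):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Label:
    """Simplified IFC label: a secrecy tag set and an integrity tag set."""
    secrecy: frozenset = frozenset()
    integrity: frozenset = frozenset()

def can_flow(src: Label, dst: Label) -> bool:
    # Data may flow only if the destination is at least as secret and
    # at most as trusted (no laundering of integrity guarantees).
    return src.secrecy <= dst.secrecy and dst.integrity <= src.integrity

patient_data = Label(secrecy=frozenset({"medical"}), integrity=frozenset({"hospital"}))
analytics    = Label(secrecy=frozenset({"medical", "research"}))
public_feed  = Label()

assert can_flow(patient_data, analytics)        # allowed: secrecy preserved
assert not can_flow(patient_data, public_feed)  # blocked: would leak 'medical'
```

Because the check is evaluated continuously on every flow, it provides the data-centric, cross-application, end-to-end control that the abstract contrasts with traditional access control.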
A. J. Poulter, S. J. Johnston and S. J. Cox, “Using the MEAN Stack to Implement a RESTful Service for an Internet of Things Application,” Internet of Things (WF-IoT), 2015 IEEE 2nd World Forum on, Milan, 2015, pp. 280-285.
doi: 10.1109/WF-IoT.2015.7389066
Abstract: This paper examines the components of the MEAN development stack (MongoDB, Express.js, Angular.js and Node.js) and demonstrates their benefits and appropriateness for implementing RESTful web-service APIs for Internet of Things (IoT) appliances. In particular, we show an end-to-end example of this stack and discuss the various components required in detail. The paper also describes an approach to establishing a secure mechanism for communicating with IoT devices using pull-communications.
Keywords: Internet of Things; Web services; application program interfaces; security of data; software tools; Angular.js; Express.js; Internet of Things application; IoT devices; MEAN development stack; MongoDb; Node.js; RESTful Web-service API; pull-communications; secure mechanism; Databases; Hardware; Internet of things; Libraries; Logic gates; Servers; Software; IoT; MEAN; REST; web programming (ID#: 16-11251)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7389066&isnumber=7389012
Z. Liu, Mianxiong Dong, Bo Gu, Cheng Zhang, Y. Ji and Y. Tanaka, “Inter-Domain Popularity-Aware Video Caching in Future Internet Architectures,” Heterogeneous Networking for Quality, Reliability, Security and Robustness (QSHINE), 2015 11th International Conference on, Taipei, 2015, pp. 404-409. doi: (not provided)
Abstract: The current TCP/IP-based network suffers from the limitations of IP, especially in the era of the Internet of Things (IoT). Content Centric Networking (CCN) has recently been proposed as an alternative future network architecture. In CCN, data itself, which is authenticated and secured, is named and can be requested directly at the network level instead of through IP and the Domain Name System (DNS). Another difference from traditional networks is that CCN routers have caching abilities: end users can obtain data from a router instead of from the remote server if the content is already stored there. Overall network performance thereby improves, since fewer transmission hops are required, and the advantage of CCN caching has been shown in the literature. In this paper, we design a new policy for popularity-aware video caching in CCN to handle the 'redundancy' problem in existing schemes, where the same content may be stored multiple times along the path from server to users, leading to significant performance degradation. Simulations show that the proposed scheme performs better than the existing caching policies.
Keywords: Internet; Internet of Things; CCN; DNS; Internet of things; TCP-IP based network; content centric network; domain name system; future Internet architecture; interdomain popularity-aware video caching; IoT; redundancy problem; remote server; router; Artificial neural networks; Degradation; IP networks; Indexes; Redundancy; Servers; Topology (ID#: 16-11252)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7332603&isnumber=7332527
P. Porambage, A. Braeken, A. Gurtov, M. Ylianttila and S. Spinsante, “Secure End-to-End Communication for Constrained Devices in IoT-Enabled Ambient Assisted Living Systems,” Internet of Things (WF-IoT), 2015 IEEE 2nd World Forum on, Milan, 2015, pp. 711-714. doi: 10.1109/WF-IoT.2015.7389141
Abstract: Internet of Things (IoT) technologies interconnect broad ranges of network devices irrespective of their resource capabilities and local networks. To improve the quality of life of elderly people, Ambient Assisted Living (AAL) systems are also widely deployed in the context of IoT applications. To preserve user security and privacy in AAL systems, it is essential to ensure secure communication link establishment among the medical devices and the remote hosts or servers interested in accessing critical health data. However, due to the limited resources available in such constrained devices, it is challenging to employ the expensive cryptographic operations of conventional security protocols. Therefore, in this paper we propose a novel proxy-based authentication and key establishment protocol, which is lightweight and suitable for safeguarding sensitive data generated by resource-constrained devices in IoT-enabled AAL systems.
Keywords: Internet of Things; assisted living; cryptographic protocols; data privacy; geriatrics; health care; medical computing; Internet of Things technology; IoT-enabled ambient assisted living system; constrained device; critical health data assessment; cryptographic operation; elderly people; key establishment protocol; medical device; proxy-based authentication protocol; remote host; remote server; secure end-to-end communication link; security protocol; user privacy; user security; Authentication; Cryptography; DH-HEMTs; Protocols; Senior citizens; Sensors; authentication; key establishment; proxy; resource-constrained device (ID#: 16-11253)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7389141&isnumber=7389012
H. C. Pöhls, “JSON Sensor Signatures (JSS): End-to-End Integrity Protection from Constrained Device to IoT Application,” Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2015 9th International Conference on, Blumenau, 2015, pp. 306-312. doi: 10.1109/IMIS.2015.48
Abstract: Integrity of sensor readings or actuator commands is of paramount importance for secure operation in the Internet-of-Things (IoT). Data from sensors might be stored, forwarded and processed by many different intermediate systems. In this paper we apply digital signatures to achieve end-to-end message-level integrity for data in JSON. JSON has become very popular for representing data in the upper layers of the IoT domain. By signing JSON on the constrained device we extend end-to-end integrity protection from the constrained device to any entity in the IoT data-processing chain. Only the JSON message's contents, including the enveloped signature and the data, must be preserved. We achieved our design goal of keeping the original data accessible to legacy parsers; hence, signing does not break parsing. We implemented an elliptic curve based signature algorithm on a class 1 (following RFC 7228) constrained device (Zolertia Z1: 16-bit, MSP 430). Furthermore, we describe the challenges of end-to-end integrity when crossing from the IoT to the Web and applications.
Keywords: Internet of Things; Java; data integrity; digital signatures; public key cryptography; Internet-of-Things; IoT data-processing chain; JSON sensor signatures; actuator commands; digital signatures; elliptic curve based signature algorithm; end-to-end integrity protection; end-to-end message level integrity; enveloped signature; legacy parsers; sensor readings integrity; Data structures; Digital signatures; Elliptic curve cryptography; NIST; Payloads; XML; ECDSA; IoT; JSON; integrity (ID#: 16-11254)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7284966&isnumber=7284886
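The enveloping idea, placing the signature inside the JSON object itself so legacy parsers still read the payload, can be sketched as below. HMAC-SHA256 stands in for the paper's ECDSA to keep the sketch stdlib-only; note that a deterministic serialization (sorted keys, fixed separators) is needed so the verifier recomputes exactly the same bytes.

```python
import hashlib
import hmac
import json

def sign_reading(reading: dict, key: bytes) -> dict:
    """Envelope the signature inside the JSON object (HMAC stands in for ECDSA)."""
    payload = json.dumps(reading, sort_keys=True, separators=(",", ":"))
    sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return {**reading, "signature": sig}

def verify_reading(signed: dict, key: bytes) -> bool:
    body = {k: v for k, v in signed.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True, separators=(",", ":"))
    expect = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expect, signed["signature"])

key = b"device-secret"
msg = sign_reading({"sensor": "t1", "temp": 21.5}, key)
assert verify_reading(msg, key)
assert msg["temp"] == 21.5  # legacy parsers still see the original fields
```

Any intermediate system that re-serializes the object with different key order or whitespace would break verification, which is exactly the cross-layer challenge the paper discusses.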
E. Z. Tragos et al., “An IoT Based Intelligent Building Management System for Ambient Assisted Living,” 2015 IEEE International Conference on Communication Workshop (ICCW), London, 2015, pp. 246-252. doi: 10.1109/ICCW.2015.7247186
Abstract: Ambient Assisted Living (AAL) describes an ICT-based environment that exposes personalized and context-aware intelligent services, creating an experience that supports independent living and improves the everyday quality of life of both healthy elderly and disabled people. The social and economic impact of AAL systems has boosted research activities that, combined with the advantages of enabling technologies such as Wireless Sensor Networks (WSNs) and the Internet of Things (IoT), can greatly improve the performance and efficiency of such systems. Sensors and actuators inside buildings can create intelligent sensing environments that help gather real-time data on patients, monitor their vital signs and identify abnormal situations that need medical attention. AAL applications may be life-critical and therefore have very strict performance requirements with respect to the reliability of the devices, the ability of the system to gather data from heterogeneous devices, the timeliness of the data transfer and its trustworthiness. This work presents the functional architecture of SOrBet (a Marie Curie IAPP project), which provides a framework for efficiently interconnecting smart devices, equipping them with intelligence that helps automate many of the inhabitants' everyday activities. SOrBet is a paradigm shift from traditional AAL systems: it is based on a hybrid architecture, including both distributed and centralized functionalities, and is extensible, self-organising, robust and secure, built on the concept of “reliability by design”, thus being capable of meeting the strict Quality of Service (QoS) requirements of demanding applications such as AAL.
Keywords: Internet of Things; assisted living; building management systems; patient monitoring; quality of service; wireless sensor networks; IoT based intelligent building management system; SOrBet; ambient assisted living; hybrid architecture; Artificial intelligence; Automation; Buildings; Quality of service; Reliability; Security; Sensors (ID#: 16-11255)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7247186&isnumber=7247062
N. Pazos, M. Müller, M. Aeberli and N. Ouerhani, “ConnectOpen — Automatic Integration of IoT Devices,” Internet of Things (WF-IoT), 2015 IEEE 2nd World Forum on, Milan, 2015, pp. 640-644. doi: 10.1109/WF-IoT.2015.7389129
Abstract: There exists, today, a wide consensus that the Internet of Things (IoT) is creating a wide range of business opportunities for industries and sectors such as manufacturing, healthcare, public infrastructure management and telecommunications. On the other hand, the technological evolution of the IoT is facing serious challenges. The fragmentation in terms of communication protocols and data formats at the device level is one of these challenges. Vendor-specific application architectures, proprietary communication protocols and a lack of IoT standards are some of the reasons behind this fragmentation. In this paper we propose a software-enabled framework to address the fragmentation challenge. The framework is based on flexible communication agents that are deployed on a gateway and can be adapted to various devices communicating in different data formats over different communication protocols. The communication agent is automatically generated from specifications and automatically deployed on the gateway in order to connect the devices to a central platform, where data are consolidated and exposed via REST APIs to third-party services. Security and scalability aspects are also addressed in this work.
Keywords: Internet of Things; application program interfaces; cloud computing; computer network security; internetworking; transport protocols; ConnectOpen; IoT fragmentation; REST API; automatic IoT device integration; central platform; communication agents; communication protocol; communication protocols; data formats; device level; scalability aspect; security aspect; software enabled framework; third party services; Business; Embedded systems; Logic gates; Protocols; Scalability; Security; Sensors; Communication Agent; End Device; Gateway; IoT; Kura; MQTT; OSGi (ID#: 16-11256)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7389129&isnumber=7389012
Expandability 2015 |
The expansion of a network to more nodes creates security problems. For the Science of Security community, expandability relates to resilience and compositionality. The research work cited here was presented in 2015.
Z. Li and Y. Yang, “ABCCC: An Advanced Cube Based Network for Data Centers,” Distributed Computing Systems (ICDCS), 2015 IEEE 35th International Conference on, Columbus, OH, 2015, pp. 547-556. doi: 10.1109/ICDCS.2015.62
Abstract: A new network structure called BCube Connected Crossbars (BCCC) was recently proposed. Its short diameter, good expandability and low cost make it a very promising topology for data center networks. However, it can utilize only two NIC ports of each server, which suits today's technology, even when more ports are available. Due to technology advances, servers with more NIC ports are emerging and will become low-cost commodities before long. In this paper, we propose a more general server-centric data center network structure, called Advanced BCube Connected Crossbars (ABCCC), which can utilize inexpensive commodity off-the-shelf switches and servers with any fixed number of NIC ports while providing good network properties. Like BCCC, ABCCC has good expandability: expansion requires no alteration of the existing system, only the addition of new components, so the expansion cost that BCube suffers from can be significantly reduced in ABCCC. We also introduce an addressing scheme and an efficient routing algorithm for one-to-one communication in ABCCC. We make comprehensive comparisons between ABCCC and some popular existing structures in terms of several critical metrics, such as diameter, network size, bisection bandwidth and capital expenditure. We also conduct extensive simulations to evaluate ABCCC, which show that ABCCC achieves the best trade-off among all these critical metrics and suits many different applications through fine-tuning of its parameters.
Keywords: computer centres; computer networks; topology; ABCCC; NIC port; advanced BCube connected crossbar; off-the-shelf switch; one-to-one communication; routing algorithm; server-centric data center network structure; Hardware; Hypercubes; Network topology; Ports (Computers); Routing; Servers; Topology; Data center networks; expandability; network diameter; server-centric (ID#: 16-9991)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164940&isnumber=7164877
Z. Li and Y. Yang, “GBC3: A Versatile Cube-Based Server-Centric Network for Data Centers,” in IEEE Transactions on Parallel and Distributed Systems, vol. 27, no. 10, pp. 2895-2910, 2016. doi: 10.1109/TPDS.2015.2511725
Abstract: A new network structure called BCube Connected Crossbars (BCCC) was recently proposed. Its short diameter, good expandability and low cost make it a very promising topology for data center networks. However, it can utilize only two NIC ports of each server, which suits today's technology, even though more NIC ports are available. Due to technology advances, servers with more NIC ports are emerging and will become low-cost commodities before long. In this paper, we propose a more general server-centric data center network structure, called GBC3, which can utilize inexpensive commodity off-the-shelf switches and servers with any fixed number of NIC ports while providing good network properties. Like BCCC, GBC3 has good expandability: expansion requires no alteration of the existing system, only the addition of new components, so the expansion cost that BCube suffers from can be significantly reduced in GBC3. We also introduce an addressing scheme and several efficient routing algorithms for one-to-one, one-to-all and one-to-many communications in GBC3. We make comprehensive comparisons between GBC3 and some popular existing structures in terms of several critical metrics, such as diameter, network size, bisection bandwidth and capital expenditure. We also conduct extensive experiments to evaluate GBC3, which show that GBC3 achieves the best flexibility for trading off among all these critical metrics and can suit many different applications through fine-tuning of its parameters.
Keywords: Hardware; Hypercubes; Network topology; Ports (Computers); Routing; Servers; Topology; Data center networks; expandability; network diameter; server-centric; topology (ID#: 16-9992)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7364277&isnumber=4359390
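The cube-style routing idea shared by BCCC and GBC3 — correct one address coordinate per hop, visiting dimensions in some chosen order — can be sketched generically. This is an illustrative sketch under assumed coordinate addressing, not the paper's actual GBC3 routing algorithm:

```python
def route(src, dst, order=None):
    """Generic dimension-order routing sketch for a cube-style topology.

    Servers are addressed by coordinate tuples; each hop corrects one
    coordinate of the current address toward the destination. `order` is
    the permutation deciding which dimension is corrected first.
    """
    if order is None:
        order = range(len(src))
    path = [tuple(src)]
    cur = list(src)
    for d in order:
        if cur[d] != dst[d]:
            cur[d] = dst[d]          # one hop fixes dimension d
            path.append(tuple(cur))
    return path
```

The path length is simply the number of coordinates in which the two addresses differ, which is why the diameter of such structures grows linearly with the number of dimensions.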
Y. Cheng, D. Zhao, F. Tao, L. Zhang and Y. Liu, “Complex Networks Based Manufacturing Service and Task Management in Cloud Environment,” Industrial Electronics and Applications (ICIEA), 2015 IEEE 10th Conference on, Auckland, 2015, pp. 242-247. doi: 10.1109/ICIEA.2015.7334119
Abstract: In the development and application of service-oriented manufacturing (SOM) systems, e.g., cloud manufacturing (CMfg), manufacturing resource allocation is always one of the most important issues that need to be addressed. With the permeation of the Internet of Things (IoT), big data, and cloud technologies into manufacturing, manufacturing service and task management in SOM faces new challenges in the cloud environment. In consideration of the characteristics of the cloud environment (i.e., complexity, sociality, dynamics, uncertainty, distribution, expandability, etc.), a manufacturing service and task management method based on complex networks is proposed in this paper. Models of the manufacturing service network (S_Net) and manufacturing task network (T_Net) are built according to the digital description of manufacturing services and tasks. Then manufacturing service management upon S_Net and manufacturing task management upon T_Net are discussed respectively. Finally, conclusions and future work are presented.
Keywords: cloud computing; manufacturing data processing; service-oriented architecture; Big Data; CMfg; Internet of things; SOM system; S_Net; T_Net; cloud environment characteristics; cloud manufacturing; cloud technologies; complex network-based manufacturing service; complexity characteristic; distribution characteristic; dynamics characteristic; expandability characteristic; manufacturing resource allocation; manufacturing service network; manufacturing task network; service-oriented manufacturing system; sociality characteristic; task management; uncertainty characteristic; Cloud computing; Collaborative work; Complex networks; Computational modeling; Correlation; Manufacturing; Resource management; cloud environment; manufacturing service network (S_Net); manufacturing task network (T_Net); service management; service-oriented manufacturing (SOM) (ID#: 16-9993)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7334119&isnumber=7334072
R. Zhao and J. Zhang, “High Efficiency Hybrid Current Balancing Method for Multi-Channel LED Drive,” 2015 IEEE Applied Power Electronics Conference and Exposition (APEC), Charlotte, NC, 2015, pp. 854-860. doi: 10.1109/APEC.2015.7104449
Abstract: In this paper, a novel hybrid current balancing method for multi-channel LED drive, based on a quasi-two-stage converter, is proposed. In the proposed structure, each output module has two outputs whose currents are balanced by a capacitor according to the charge-balancing principle. A switching-mode current regulator is adopted for each output module to balance the currents across output modules. Since the current regulator processes only part of the total output power, the cost is low and the efficiency is high. The proposed method combines the advantages of passive and active current balancing, making it simple and flexible with respect to load expandability. Performance of the proposed method is validated by simulation and experimental results from a 120 W prototype with four LED strings.
Keywords: capacitors; driver circuits; electric current control; light emitting diodes; switching convertors; active current balancing method; capacitor; hybrid current balancing method; load expandability; multichannel LED drive; passive current balancing method; power 120 W; quasitwo-stage converter; switching mode current regulator; Adaptive control; Capacitors; DC-DC power converters; Light emitting diodes; Regulators; Switches; Voltage control; Current balancing method; Hybrid; LLC; Multi-output LED driver; high efficiency (ID#: 16-9994)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7104449&isnumber=7104313
S. C. Lin, C. Wang, C. Y. Lo, Y. W. Chang, H. Y. Lai and P. L. Hsu, “Using Constructivism as a Basic Idea to Design Multi-Situated Game-Based Learning Platform and ITS Application,” Advanced Applied Informatics (IIAI-AAI), 2015 IIAI 4th International Congress on, Okayama, 2015, pp. 711-712. doi: 10.1109/IIAI-AAI.2015.264
Abstract: E-learning has become a popular learning strategy because of advances in technology and the development of learning platforms. At present, most platforms are designed for a single topic rather than multiple topics and are difficult to extend to other topics, owing to the learning mode, design, and application limitations of game-based learning. Therefore, in this study we developed a tower-defense game-based platform grounded in situated learning theory and the constructivist view of knowledge; this platform can be applied to diverse learning programs. On this platform, users learn in a simulated scenario. Additionally, the platform's flexible design provides usability and expandability.
Keywords: computer aided instruction; computer games; diverse learning programs; e-learning; knowledge constructivism; learning mode; learning strategies; multisituated game-based learning platform design; situated learning theory; system expandability; system usability; tower defense game-based learning platform; Electronic learning; Games; Information management; Multimedia communication; Poles and towers; Usability; Constructivist Learning; Game-based learning; Situated learning (ID#: 16-9995)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7374001&isnumber=7373854
L. Mossucca et al., “Polar Data Management Based on Cloud Technology,” Complex, Intelligent, and Software Intensive Systems (CISIS), 2015 Ninth International Conference on, Blumenau, 2015, pp. 459-463. doi: 10.1109/CISIS.2015.67
Abstract: IDIPOS, which stands for Italian Database Infrastructure for Polar Observation Sciences, was conceived as a feasibility study of an infrastructure devoted to the management of data coming from polar areas. The framework adopts a modular approach with two main parts: the first defines the main components of the infrastructure, and the second selects possible cloud solutions to manage and organize these components. The main purpose is the creation of a scalable and flexible infrastructure for the exchange of scientific data from various application fields. The envisaged infrastructure is based on the cutting-edge technology of the Community Cloud Infrastructure for aggregating and federating resources, to optimize the use of hardware. The infrastructure is composed of a central node, several nodes distributed across Italy, and interconnections with other systems deployed in polar areas. This paper investigates cloud solutions and explores the key factors that may influence cloud adoption in the project, such as scalability, flexibility and expandability. In particular, the main cloud aspects addressed relate to data storage, data management, data analysis, and infrastructure federation, following recommendations from the Cloud Expert Group to allow information sharing in scientific communities.
Keywords: cloud computing; data analysis; database management systems; open systems; scientific information systems; Cloud Expert Group; IDIPOS; Italian Database Infrastructure for Polar Observation Sciences; central node; cloud technology; community cloud infrastructure; cutting-edge technology; data management; data storage; expandability; flexibility; information sharing; infrastructure federation; modular approach; polar data management; resource aggregation; resource federation; scalability; scientific communities; scientific data exchange; Cloud computing; Clouds; Communities; Computer architecture; Interoperability; Organizations; Servers; Polar Observation Sciences; e-science; interoperability (ID#: 16-9996)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7185231&isnumber=7185122
A. Musa, T. Minotani, K. Matsunaga, T. Kondo and H. Morimura, “An 8-Mode Reconfigurable Sensor-Independent Readout Circuit for Trillion Sensors Era,” Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, Singapore, 2015, pp. 1-6. doi: 10.1109/ISSNIP.2015.7106913
Abstract: The Internet of Things (IoT) is opening the doors to many new devices and applications. Such an increase in the variety of applications requires reconfigurable, flexible and expandable hardware to reduce fabrication and development cost. This has been achieved for the digital part with devices like Arduino. However, sensor readout analog front-end (AFE) circuits are mainly designed for a specific sensor type or application. Such an approach is feasible for the current small number of applications and sensors in use, but cost will increase drastically as their variety and number grow, and the flexibility and expandability of the system will be limited. Therefore, a universal sensor platform that can be reconfigured to adapt to various sensors and applications is needed. An array of such circuits can be built with the same sensor to increase measurement accuracy and reliability, or used to integrate heterogeneous sensors, making the system adaptable to many applications by activating only the desired sensors. In this paper, an 8-mode reconfigurable sensor readout AFE with an offset-cancellation resolution-enhancing scheme is proposed as a step towards a universal sensor interface. The proposed AFE can be reconfigured to interface resistive, capacitive, current-producing, and voltage-producing sensors through direct or capacitive connection to its terminals. The proposed system is fabricated in a 180 nm CMOS process and has successfully measured all four types of sensor outputs. It has also been interfaced to an Arduino board to allow easy interfacing of various sensors. The proposed work can therefore be used as a general-purpose AFE, reducing manufacturing and development cost and increasing flexibility and expandability.
Keywords: CMOS digital integrated circuits; Internet of Things; capacitive sensors; cloud computing; digital-analogue conversion; microprocessor chips; readout electronics; 8-mode reconfigurable sensor readout AFE; 8-mode reconfigurable sensor-independent readout circuit; Arduino board; CMOS process; IoT; current producing sensors; development cost reduction; digital part; expandable hardware; flexible hardware; heterogeneous sensors; interface resistive sensors; measurement accuracy; offset-cancellation-resolution enhancing scheme; reconfigurable hardware; reliability; sensor outputs; sensor readout AFE circuit; sensor readout analog-front-end circuits; system flexibility; trillion sensor era; universal sensor interface; universal sensor platform; voltage producing sensors; Arrays; Current measurement; Electrical resistance measurement; Integrated circuits; Signal resolution; Transducers; Voltage measurement (ID#: 16-9997)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106913&isnumber=7106892
Z. Xu and C. Zhang, “Optimal Direct Voltage Control of MTDC Grids for Integration of Offshore Wind Power,” Smart Grid Technologies - Asia (ISGT ASIA), 2015 IEEE Innovative, Bangkok, 2015, pp. 1-6. doi: 10.1109/ISGT-Asia.2015.7387179
Abstract: This paper presents an optimal control scheme for multi-terminal high-voltage DC (MTDC) networks. Conventional methods of controlling the direct voltages of MTDC networks suffer from a series of issues, such as inability to steer the power flow, limited expandability for scaling up, and poor dynamic response. In this paper, an innovative strategy for regulating DC voltages is derived through three main steps: calculation of the DC load flow, optimization of the power flow, and N-1 security analysis for MTDC networks. The strategy is then numerically tested by incorporating loss minimization in an MTDC network. The advantages of the control strategy are verified by simulations using the MATLAB/Simulink package.
Keywords: dynamic response; load flow; offshore installations; optimal control; optimisation; power system security; voltage control; wind power plants; DC load flow; DC voltages; MATLAB-Simulink package; MTDC grids; MTDC networks; N-1 security; dynamic responses; loss minimization; multiterminal high voltage DC networks; offshore wind power; optimal direct voltage control; power flow; HVDC transmission; Load flow; Reactive power; Security; Voltage control; Wind power generation; Control; MTDC; Power flow (ID#: 16-9998)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7387179&isnumber=7386954
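The first step of the strategy above, a DC load-flow calculation, reduces in its simplest linearized form to solving the nodal equation G·V = I once one slack node's voltage is fixed. A minimal sketch, assuming a per-unit conductance matrix and nodal current injections (an illustrative simplification, not the paper's actual formulation):

```python
def dc_grid_voltages(G, I, slack=0, v_slack=1.0):
    """Linearized DC load-flow sketch: solve G.V = I for node voltages,
    holding one slack node at a fixed per-unit voltage."""
    n = len(G)
    idx = [i for i in range(n) if i != slack]       # non-slack nodes
    # Reduced system: move the known slack-voltage term to the right side.
    A = [[G[i][j] for j in idx] for i in idx]
    b = [I[i] - G[i][slack] * v_slack for i in idx]
    # Gaussian elimination with partial pivoting (fine for small grids).
    m = len(A)
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, m):
            f = A[r][c] / A[c][c]
            for k in range(c, m):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][k] * x[k] for k in range(r + 1, m))) / A[r][r]
    V = [v_slack] * n
    for i, v in zip(idx, x):
        V[i] = v
    return V
```

For a two-node link with 1 pu conductance and a 0.1 pu injection at the non-slack node, the solver raises that node to 1.1 pu, as expected from Ohm's law.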
A. Agarwal, V. Mukati and P. Kumar, “Performance Analysis of Variable Rate Multicarrier Transmission Schemes over LMS Channel,” Electronics, Computing and Communication Technologies (CONECCT), 2015 IEEE International Conference on, Bangalore, 2015, pp. 1-6. doi: 10.1109/CONECCT.2015.7383866
Abstract: With the increasing demand for wider coverage, higher QoS, ubiquitous availability, flexibility and expandability, Land Mobile Satellite (LMS) multimedia communication is gaining popularity over existing Land Mobile Terrestrial (LMT) communication. This paper presents a comparative study of the GO-OFDMA and VSL MC-CDMA variable-rate transmission schemes over L- and Ka-band LMS channels. For both schemes, four variable-rate classes employing 15 users are considered. It is shown that, for both frequency bands, the BER performance of the GO-OFDMA scheme is better than that of VSL MC-CDMA for all data-rate classes of users, although for the Ka-band both schemes perform worse than for the L-band. The performance of both schemes at different elevation angles is also illustrated and analyzed. Finally, the composite-signal PAPR performance of the two transmission schemes is compared; the PAPR performance of GO-OFDMA is again better than that of VSL MC-CDMA. Hence the GO-OFDMA scheme is a suitable candidate for variable-rate communication over the LMS channel.
Keywords: OFDM modulation; code division multiple access; error statistics; frequency division multiple access; land mobile radio; mobile satellite communication; multimedia communication; quality of service; GO-OFDMA scheme BER performance; Ka-band LMS channel; L-band LMS channel; LMS multimedia communication channel; QoS; VSL MC-CDMA scheme; composite signal PAPR performance; land mobile satellite multimedia communication; multicarrier code division multiple access; orthogonal frequency division multiple access; variable rate multicarrier transmission scheme performance analysis; variable spreading length; Channel models; Mobile communication; Multicarrier code division multiple access; OFDM; Satellite broadcasting; Satellites; Shadow mapping; GO-OFDMA; L and Ka-Band; LMS channel; PAPR; VSL MC-CDMA (ID#: 16-9999)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7383866&isnumber=7383851
Q. Qiu, Xiao Yao, Cuiting Chen, Yu Liu and Jinyun Fang, “A Spatial Data Partitioning and Merging Method for Parallel Vector Spatial Analysis,” Geoinformatics, 2015 23rd International Conference on, Wuhan, 2015, pp. 1-5. doi: 10.1109/GEOINFORMATICS.2015.7378651
Abstract: Based on the principles of spatial-element proximity and balanced spatial data size, this paper presents a data partitioning and merging method based on a space-filling curve and collections of spatial features. In the data-reduction phase, the method applies dynamic tree merging to reduce the number of data serialization and deserialization operations. Experiments show that the method reduces each process's computation and merging time, improves the degree of load balancing, and greatly improves the efficiency and expandability of the parallel algorithm.
Keywords: data reduction; geographic information systems; merging; parallel algorithms; vectors; data deserialization; data reducing section; data serialization; dynamic tree merging; load balancing degree; parallel algorithm; parallel vector spatial analysis; spatial data merging method; spatial data partitioning method; spatial data size equilibrium; spatial element proximity; spatial feature collection; spatial filling curve; Algorithm design and analysis; Hardware; Linux cluster; MPI; SLFB; serialize; spatial filling curve (ID#: 16-10000)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7378651&isnumber=7378547
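The space-filling-curve idea in the abstract — order spatial features along a curve, then cut the ordered sequence into near-equal chunks so each partition is both balanced in size and spatially compact — can be sketched with a Z-order (Morton) curve. The specific curve and feature handling here are illustrative assumptions; the paper's implementation may differ:

```python
def morton(x, y, bits=16):
    """Interleave the bits of (x, y) into a Z-order (Morton) code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return code

def partition(points, n_parts):
    """Sort points along the Z-order curve, then split into near-equal
    chunks so each worker gets spatially clustered, balanced data."""
    ranked = sorted(points, key=lambda p: morton(*p))
    size, rem = divmod(len(ranked), n_parts)
    parts, start = [], 0
    for i in range(n_parts):
        end = start + size + (1 if i < rem else 0)
        parts.append(ranked[start:end])
        start = end
    return parts
```

Because nearby points tend to have nearby Morton codes, each chunk stays spatially local, which is what keeps the per-process merge work small.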
K. Liu, R. Fu, Y. Gao, Y. Sun and P. Yan, “High Voltage Regulating Frequency AC Power Supply Based on CAN Bus Communication Control,” 2015 IEEE Pulsed Power Conference (PPC), Austin, TX, 2015, pp. 1-4. doi: 10.1109/PPC.2015.7297000
Abstract: High-voltage high-frequency AC power supplies (HVHFACPS) are widely used in military, industrial, and scientific-research applications, among others. Different applications call for different functions, such as selectable control modes, selectable work modes, adjustable output voltage, adjustable frequency, and even integrability and expandability. In this paper, an HVHFACPS is introduced whose output voltage can be regulated from 0 to 30 kV and whose output frequency can be regulated from 1 kHz to 50 kHz. Continuous and discontinuous work modes can be chosen for a continuous or discontinuous AC voltage output, and the work time and frequency can be regulated in the discontinuous mode. Remote and local control modes allow the supply to be controlled either remotely by computer or locally from the keyboard on the cabinet panel. The control system of this power supply has a CAN bus communication function, so it can be connected to a CAN bus network and cooperate with other equipment. Experiments such as dielectric barrier discharge (DBD) and plasma generation were carried out using the power supply, and the results prove that the functions are realized and the performance is good.
Keywords: controller area networks; field buses; frequency control; power supply circuits; telecontrol; voltage control; CAN bus communication control; HVHFACPS; cabinet panel; high voltage high frequency AC power supply; remote control; Control systems; Digital signal processing; Frequency control; Inductance; Power supplies; Resonant frequency; Voltage control (ID#: 16-10001)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7297000&isnumber=7296781
Z. Li and Y. Yang, “Permutation Generation for Routing in BCube Connected Crossbars,” 2015 IEEE International Conference on Communications (ICC), London, 2015, pp. 5460-5465. doi: 10.1109/ICC.2015.7249192
Abstract: BCube Connected Crossbars (BCCC) is a recently proposed network structure with short diameter and good expandability for cloud-based networks. Its diameter increases linearly with its order (dimension), and it has multiple near-equal parallel paths between any pair of servers. These advantages make BCCC a very promising network structure for next-generation cloud-based networks. An efficient routing algorithm for BCCC has also been proposed, in which a permutation determines which order (or dimension) is routed first. However, there has been no discussion of how to choose the permutation. In this paper, we focus on permutation generation for routing in BCCC. We analyze the impact of choosing different permutations in both theory and simulation, and propose two efficient permutation generation algorithms that take advantage of the BCCC structure and give good performance.
Keywords: cloud computing; multicast communication; telecommunication network routing; BCube connected crossbars; multiple near-equal parallel paths; next generation cloud-based networks; permutation generation; Aggregates; Arrays; Cloud computing; Next generation networking; Routing; Servers; Throughput; BCube Connected Crossbars (BCCC); Cloud-based networks; dual-port server; load balance (ID#: 16-10002)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249192&isnumber=7248285
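One simple way to spread flows across different dimension orders — purely illustrative, not one of the paper's two proposed generation algorithms — is to hash each (source, destination) pair to one of the k! permutations, so that different flows deterministically take different orders and the load is balanced across dimensions:

```python
import hashlib
import itertools

def flow_permutation(src, dst, k):
    """Illustrative permutation chooser: hash the (src, dst) pair to
    pick one of the k! dimension orders. Deterministic per flow, so a
    flow always routes the same way, while distinct flows spread out."""
    perms = list(itertools.permutations(range(k)))
    h = int(hashlib.sha256(repr((src, dst)).encode()).hexdigest(), 16)
    return perms[h % len(perms)]
```

Enumerating all k! permutations is only practical for the small dimension counts typical of these topologies; a structure-aware generator, as the paper proposes, avoids that cost.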
A. S. Bouhouras, K. I. Sgouras and D. P. Labridis, “Multi-Objective Planning Tool for the Installation of Renewable Energy Resources,” in IET Generation, Transmission & Distribution, vol. 9, no. 13, pp. 1782-1789, Oct. 01 2015. doi: 10.1049/iet-gtd.2014.1054
Abstract: This study examines how environmental and socioeconomic criteria affect renewable energy resource (RES) distribution strategic plans with regard to national energy policies. Four criteria are introduced, with respective coefficients properly formulated to quantify their capacity. These coefficients are then normalised so that the effects of the criteria can be combined under a uniform formulation. The base-case scenario in this work considers an initially available RES capacity distributed equally among the candidate regions. Six scenarios with different prioritisations are examined. The results show that different prioritisation criteria yield significant variations in the assigned regional RES capacity. The proposed algorithm defines optimisation only in terms of predefined prioritisation criteria; each solution can be considered optimal given that the respective installation strategic plan is subject to specific weighted criteria. The advantages of the proposed algorithm lie in its simplicity and expandability: both the coefficient formulation and the resizing procedure are easily performed, and additional criteria can easily be incorporated into the resizing procedure. The algorithm can thus be considered a multi-objective planning tool for long-term, nationwide RES distribution strategic plans.
Keywords: environmental economics; optimisation; power distribution economics; power distribution planning; renewable energy sources; coefficients formulation; environmental criteria; multiobjective planning tool; national energy policies; optimisation; predefined prioritisation criteria; regional RES capacity; renewable energy resource distribution strategic plans; renewable energy resource installation; resizing procedure; socioeconomic criteria (ID#: 16-10003)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7274082&isnumber=7274068
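The normalise-then-combine idea behind such weighted-criteria planning can be sketched as follows. The per-criterion normalisation, the weights, and the proportional allocation rule here are illustrative assumptions, not the paper's exact coefficient formulation:

```python
def allocate_capacity(total_mw, scores, weights):
    """Sketch of weighted multi-criteria allocation: normalize each
    criterion across regions so criteria are comparable, combine them
    with the given weights, then distribute the available capacity in
    proportion to each region's combined score."""
    regions = list(scores)
    n_crit = len(weights)
    col_sum = [sum(scores[r][c] for r in regions) for c in range(n_crit)]
    combined = {
        r: sum(weights[c] * scores[r][c] / col_sum[c] for c in range(n_crit))
        for r in regions
    }
    total_score = sum(combined.values())
    return {r: total_mw * combined[r] / total_score for r in regions}
```

Changing the weight vector re-prioritises the criteria without touching the rest of the procedure, which mirrors the expandability argument in the abstract: new criteria only add columns to the score table.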
G. Parise, L. Parise, L. Martirano and A. Germolé, “The Relevance of the Architecture of Electrical Power Systems in Hospitals: The Service Continuity Safety by Design,” 2015 IEEE/IAS 51st Industrial & Commercial Power Systems Technical Conference (I&CPS), Calgary, AB, 2015, pp. 1-6. doi: 10.1109/ICPS.2015.7266433
Abstract: The power system architecture of hospitals supports, by design, enhanced electrical behavior that is also adequate to better withstand external forces, when they occur, such as earthquake, fire, and flood, applying a “Darwinian” approach. The architecture of the power system, supported by supervisory control systems and business continuity management (BCM), must guarantee operational performance that preserves global service continuity, including: selectivity of faults and immunity to interference among system areas; easy maintainability of the system and its parts; and flexibility and expandability. The paper deals with sample cases of systems in building complexes, applying the micro approach to satisfy hospital requirements and medical quality performance.
Keywords: SCADA systems; business continuity; hospitals; power system security; BCM; Darwinian approach; business continuity management; external forces; global service continuity; hospital requirements; medical quality performances; operational performances; power system architecture; service continuity safety; supervision control systems; Artificial neural networks; Heating; Load modeling; Reliability engineering; Substations; Switches; Critical loads; architecture efficiency; business and service continuity; complex systems; operation efficiency (ID#: 16-10004)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266433&isnumber=7266399
W. Wang, Q. Cao, X. Zhu and S. Liang, “A Framework for Intelligent Service Environments Based on Middleware and General Purpose Task Planner,” Intelligent Environments (IE), 2015 International Conference on, Prague, 2015, pp. 184-187. doi: 10.1109/IE.2015.40
Abstract: Aiming to provide various services for daily living, a framework called Intelligent Service Environment of Ubiquitous Robotics (ISEUR) is presented. This framework addresses two important issues. First, it builds standardized component models for heterogeneous sensing and acting devices based on middleware technology. Second, it implements a general-purpose task planner, which coordinates the associated components to achieve various tasks. The video demonstrates how these two functionalities are combined to provide services in intelligent environments. Two different tasks, a localization task and a robopub task, are implemented to show the feasibility, efficiency and expandability of the system.
Keywords: intelligent robots; middleware; mobile robots; robot programming; ISEUR; daily living; general-purpose task planner; heterogeneous acting devices; heterogeneous sensing devices; intelligent environments; intelligent service environment-of-ubiquitous robotics; localization task; middleware technology; robopub task; standardized component models; Cameras; Middleware; Planning; Ports (Computers); Robot kinematics; Robot vision systems; intelligent service environment; middleware; task planning (ID#: 16-10005)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7194295&isnumber=7194254
G. Papadopoulos, “Challenges in the Design and Implementation of Wireless Sensor Networks: A Holistic Approach-Development and Planning Tools, Middleware, Power Efficiency, Interoperability,” 2015 4th Mediterranean Conference on Embedded Computing (MECO), Budva, 2015, pp. 1-3. doi: 10.1109/MECO.2015.7181857
Abstract: Wireless Sensor Networks (WSNs) constitute a networking area with promising impact on the environment, health, security, industrial applications and more. Each of these presents different requirements regarding system performance and QoS, and involves a variety of mechanisms such as routing and MAC protocols, algorithms, scheduling policies, security, and the OS, all residing over the HW, the sensors, actuators and the radio Tx/Rx. Furthermore, they encompass special characteristics, such as constrained energy, CPU and memory resources and multi-hop communication, which considerably raise the specialized knowledge required. Although WSNs are nearing maturity and widespread use, their sustainability hinges upon the implementation of some features of paramount importance: low power consumption, to achieve long operational lifetime for battery-powered unattended WSN nodes; joint optimization of connectivity and energy efficiency, leading to best-effort utilization of constrained radios and minimum energy cost; self-calibration and self-healing, to recover from the failures and errors to which WSNs are prone; efficient data aggregation, lessening the traffic load in constrained WSNs; programmable and reconfigurable stations, allowing long life-cycle development; system security, enabling protection of data and system operation; short development time, making the time-to-market process more efficient; and simple installation and maintenance procedures, for wider acceptance. Despite the considerable research and important advances in WSNs, large-scale application of the technology is still hindered by technical, complexity and cost impediments. Ongoing R&D is addressing these shortcomings by focusing on energy harvesting, middleware, network intelligence, standardization, network reliability, adaptability and scalability.
However, for efficient WSN development, deployment, testing, and maintenance, a holistic unified approach is necessary which will address the above WSN challenges by developing an integrated platform for smart environments with built-in user friendliness, practicality and efficiency. This platform will enable the user to evaluate his design by identifying critical features and application requirements, to verify by adopting design indicators and to ensure ease of development and long life cycle by incorporating flexibility, expandability and reusability. These design requirements can be accomplished to a significant extent via an integration tool that provides a multiple level framework of functionality composition and adaptation for a complex WSN environment consisting of heterogeneous platform technologies, establishing a software infrastructure which couples the different views and engineering disciplines involved in the development of such a complex system, by means of the accurate definition of all necessary rules and the design of the 'glue-logic' which will guarantee the correctness of composition of the various building blocks. Furthermore, to attain an enhanced efficiency, the design/development tool must facilitate consistency control as well as evaluate the selections made by the user and, based on specific criteria, provide feedback on errors concerning consistency and compatibility as well as warnings on potentially less optimal user selections. Finally, the WSN planning tool will provide answers to fundamental issues such as the number of nodes needed to meet overall system objectives, the deployment of these nodes to optimize network performance and the adjustment of network topology and sensor node placement in case of changes in data sources and network malfunctioning.
Keywords: computer network reliability; computer network security; data protection; energy conservation; energy harvesting; middleware; open systems; optimisation; quality of service; sensor placement; telecommunication network planning; telecommunication network topology; telecommunication power management; telecommunication traffic; time to market; wireless sensor networks; QoS; WSN reliability; constrained radio best-effort utilization; data aggregation; data security enabling protection; design-development tool; energy efficiency; failure recovery; heterogeneous platform technology; holistic unified approach; interoperability; network intelligence; network topology adjustment; power consumption; power efficiency; sensor node placement; time-to-market process; traffic load; wireless sensor network planning tools; Electrical engineering; Embedded computing; Europe; Security; Wireless sensor networks (ID#: 16-10006)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7181857&isnumber=7181853
M. Jaekel, P. Schaefer, D. Schacht, S. Patzack and A. Moser, “Modular Probabilistic Approach for Modelling Distribution Grids and Its Application,” International ETG Congress 2015; Die Energiewende — Blueprints for the new energy age; Proceedings of, Bonn, Germany, 2015, pp. 1-7. doi: (not provided)
Abstract: Due to the large increase in installed distributed renewable energy sources (DRES), new challenges exist in the planning and operation of distribution grids (DG). This paper proposes an approach to generate models of present and future synthetic DGs based on statistical data from existing networks and operational planning. Compared to the utilization of grid samples, a probabilistic network generator offers significant advantages, which are demonstrated in this paper. Modular design and simple expandability are among the most important requirements for its application to different issues. In this context, four exemplary use cases are described: reactive power analysis, identification of planning principles, analysis of the benefits of innovative network equipment, and short-circuit protection analysis.
Keywords: (not provided) (ID#: 16-10007)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7388477&isnumber=7388454
S. R. Bandela, S. K and R. K. P, “Implementation of NTCIP in Road Traffic Controllers for Traffic Signal Coordination,” 2015 Fifth International Conference on Advances in Computing and Communications (ICACC), Kochi, 2015, pp. 20-23. doi: 10.1109/ICACC.2015.58
Abstract: The National Transportation Communications for Intelligent Transportation System Protocol (NTCIP) is a family of open standards defining common communications protocols and data definitions for transmitting data and messages between computer systems used in Intelligent Transportation Systems (ITS). ITS applies Information Technology, Computers, Telecommunication and Electronics (ICTE) to improve the safety and mobility of automobiles and road users, which requires that the various devices used in ITS communicate with each other. At present, many ITS solutions use proprietary protocols that restrict interoperability and interchangeability on a common platform; NTCIP bridges this gap by providing device interoperability and interchangeability. In ITS, the Adaptive Traffic Control System (ATCS) is widely accepted today for road traffic control and real-time signal coordination. The ATCS receives traffic information from all traffic junctions in a road traffic network in a timely manner; this information is processed centrally, and signal timings at the junctions are updated in real time to minimize stops and delays and improve travel time. Many vendors manufacture ATCS and traffic controllers with their own proprietary protocols, which leads to a lack of interoperability between the ATCS and the traffic controllers and restricts expandability and customer choice. This problem can be overcome by adopting NTCIP in the communication process. This paper discusses how a traffic controller is made NTCIP compliant by adding SNMP Agent functionality to it, and how communication is carried out in NTCIP standard form despite the controller's proprietary internals.
Keywords: automobiles; intelligent transportation systems; protocols; road safety; road traffic control; ATCS; ICTE; ITS; NTCIP; SNMP agent functionality; adaptive traffic control system; automobile mobility; automobile safety; communication process; communications protocols; computer systems; computers; data definitions; device interchangeability; device interoperability; electronics; information technology; national transportation communication for intelligent transportation system protocol; open standards; proprietary protocol; realtime signal coordination; road traffic controllers; road traffic network; telecommunication; traffic signal coordination; Interoperability; Junctions; Protocols; Servers; Standards; Traffic control; Vehicles; ATCS; NTCIP; SNMP TRAP; Traffic Controller (ID#: 16-10008)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7433767&isnumber=7433753
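The paper's approach of wrapping a proprietary controller behind an SNMP agent can be sketched, at a very high level, as a dispatch table mapping standard object identifiers (OIDs) to the controller's internal state. This is an illustrative sketch only: the OID suffixes, register names, and values below are invented placeholders, not real NTCIP 1202 objects (only the `1.3.6.1.4.1.1206` NEMA enterprise prefix is standard).

```python
# Illustrative SNMP-agent-style dispatch for an NTCIP-compliant controller.
# The controller's proprietary state is hidden behind standard OIDs.
controller_state = {"phase_green_time": 30, "cycle_length": 90}

# Map OIDs to getter functions. 1.3.6.1.4.1.1206 is the NEMA enterprise
# prefix used by NTCIP; the suffixes below are invented placeholders.
NTCIP_OBJECTS = {
    "1.3.6.1.4.1.1206.4.2.1.1": lambda: controller_state["phase_green_time"],
    "1.3.6.1.4.1.1206.4.2.1.2": lambda: controller_state["cycle_length"],
}

def snmp_get(oid):
    """Handle an SNMP GET request by looking the OID up in the table."""
    getter = NTCIP_OBJECTS.get(oid)
    return getter() if getter is not None else None

print(snmp_get("1.3.6.1.4.1.1206.4.2.1.2"))  # 90
print(snmp_get("1.2.3.4"))                   # None (unknown object)
```

Because the table translates standard OIDs to proprietary internals, interoperability is gained without exposing the vendor's native protocol.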
Ritu, N. Verma, S. Mishra and S. Shukla, “Implementation of Solar Based PWM Fed Two Phase Interleaved Boost Converter,” 2015 Communication, Control and Intelligent Systems (CCIS), Mathura, 2015, pp. 470-476. doi: 10.1109/CCIntelS.2015.7437962
Abstract: Renewable energy plays a dominant role in electricity production as global warming increases. Advantages such as environmental friendliness, expandability, and flexibility have led to its wider application. Step-up power conversion is now widely used in many applications with demanding power capability, such as electric vehicles, photovoltaic (PV) systems, uninterruptible power supplies (UPS), and fuel cell power systems. The boost converter is one type of DC-DC step-up power converter; it is popular because it can produce a higher DC output voltage from a low-voltage input. In this paper, an interleaved boost converter (IBC) is analyzed and controlled with interleaved switching signals that have the same switching frequency but are shifted in phase. By operating the converters in parallel, the input current is shared among the inductors, so that high reliability and efficiency can be obtained in power electronic systems. A simulation study of a PWM-fed two-phase IBC for a solar cell has been implemented in MATLAB/Simulink. The simulation results show a reduction of the ripple to nearly zero, which makes the operation of the IBC more reliable and stable when used with a solar cell.
Keywords: DC-DC power convertors; PWM power convertors; power electronics; renewable energy sources; solar cells; solar power stations; DC-DC step up power converter; electricity production; global warning; interleaved switching signals; power electronic systems; renewable energy; solar based PWM fed two phase interleaved boost converter; solar cell; Capacitors; Inductors; Insulated gate bipolar transistors; MATLAB; Mathematical model; Pulse width modulation; Switches; IBC; MATLAB; PWM; Ripple; Solar PV Cell (ID#: 16-10009)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7437962&isnumber=7437856
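The ripple cancellation the abstract reports can be reproduced with a toy numeric model: two identical triangular inductor-current ripples, phase-shifted by 180°, sum to a constant at 50% duty cycle. The waveform model below is a simplification for illustration, not the paper's MATLAB/Simulink setup.

```python
# Toy model of two interleaved boost phases: each inductor carries a
# triangular ripple current (AC component only), period normalised to 1.
def phase_ripple(t, duty=0.5, amplitude=1.0):
    t = t % 1.0
    if t < duty:                                   # switch on: current ramps up
        return -amplitude / 2 + amplitude * t / duty
    return amplitude / 2 - amplitude * (t - duty) / (1 - duty)  # ramps down

N = 1000
ts = [i / N for i in range(N)]
single = [phase_ripple(t) for t in ts]
summed = [phase_ripple(t) + phase_ripple(t + 0.5) for t in ts]  # 180° shift

def peak_to_peak(xs):
    return max(xs) - min(xs)

print(peak_to_peak(single))  # 1.0: full ripple in one phase
print(peak_to_peak(summed))  # ~0: at 50% duty the two ripples cancel
```

At duty cycles other than 50% the cancellation is partial rather than total, which matches the general interleaving behaviour the abstract describes.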
L. Kohútka, M. Vojtko and T. Krajcovic, “Hardware Accelerated Scheduling in Real-Time Systems,” Engineering of Computer Based Systems (ECBS-EERC), 2015 4th Eastern European Regional Conference on the, Brno, 2015, pp. 142-143. doi: 10.1109/ECBS-EERC.2015.32
Abstract: There are two groups of task scheduling algorithms in real-time systems. The first group contains algorithms with constant asymptotic time complexity, which lead to deterministic task-switch duration but lower theoretical CPU utilisation. The second group contains complex algorithms that plan more efficient task sequences and thus achieve better CPU utilisation. The problem is that each task scheduling algorithm belongs to only one of these two groups. This motivates the design of a real-time task scheduler that has all the benefits mentioned above. To reach this goal, we reduce the time complexity of an algorithm from the second group by using hardware acceleration. We propose a scalable hardware representation of a task scheduler in the form of a coprocessor based on the EDF (Earliest Deadline First) algorithm. Thanks to the achieved constant time complexity, the hardware scheduler can help real-time systems run more tasks that meet their deadlines while keeping high CPU utilisation and system determinism. Another advantage of our task scheduler is that any task can be removed from the scheduler by its task ID, which increases the scheduler's expandability.
Keywords: computational complexity; coprocessors; real-time systems; scheduling; CPU utilisation; EDF algorithm; asymptotic time complexity; coprocessor; hardware accelerated scheduling; task scheduling algorithms; Computer architecture; Coprocessors; Hardware; Real-time systems; Scheduling algorithms; Software; FPGA; hardware acceleration; performance; task queue; task scheduling (ID#: 16-10010)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275241&isnumber=7275108
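The scheduler behaviour described above, EDF selection plus removal of any task by its ID, can be modelled in software with a priority queue and lazy deletion. This is a hypothetical software analogue for illustration; the paper's contribution is a hardware coprocessor with constant-time operations, which a heap-based model (O(log n)) cannot match.

```python
import heapq

class EDFScheduler:
    """Earliest-Deadline-First task queue with removal by task ID.

    A software sketch of the behaviour the hardware coprocessor provides;
    the hardware version achieves constant-time operations, whereas this
    heap-based model is O(log n) per insert/extract.
    """
    def __init__(self):
        self._heap = []          # (deadline, task_id) pairs
        self._removed = set()    # IDs removed but still sitting in the heap

    def add_task(self, task_id, deadline):
        heapq.heappush(self._heap, (deadline, task_id))

    def remove_task(self, task_id):
        self._removed.add(task_id)   # lazily dropped when it reaches the top

    def next_task(self):
        """Return the runnable task with the earliest deadline, or None."""
        while self._heap:
            deadline, task_id = heapq.heappop(self._heap)
            if task_id not in self._removed:
                return task_id
            self._removed.discard(task_id)
        return None

s = EDFScheduler()
s.add_task("A", deadline=30)
s.add_task("B", deadline=10)
s.add_task("C", deadline=20)
s.remove_task("B")           # removal by task ID, as in the paper
print(s.next_task())         # "C": earliest remaining deadline
```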
J. J. Lin, “Integration of Multiple Automotive Radar Modules Based on Fiber-Wireless Network,” Wireless and Optical Communication Conference (WOCC), 2015 24th, Taipei, 2015, pp. 36-39. doi: 10.1109/WOCC.2015.7346112
Abstract: An integrated millimeter-wave (77/79 GHz) automotive radar system based on a fiber-wireless network is proposed. The purpose of this integrated system is to realize 360° radar protection and to make the overall automotive radar system more affordable. The central module (CM) generates the desired radar signals and processes the received data. The individual radar modules (RMs) only amplify and mix down the signals; no PLLs or ADCs are needed in the RMs. A fiber network is used to distribute the millimeter-wave radar signals from the CM to the RMs. An example of the integration of four automotive radar modules based on a fiber-wireless (Fi-Wi) network is also discussed. A frequency quadrupler is utilized in each RM, so the CM needs to generate only 19~20.25-GHz signals. This lowers the operating frequencies as well as the cost of the optical-to-electrical (O/E) and electrical-to-optical (E/O) converters. The smaller individual RMs provide more installation flexibility. Furthermore, the fiber network can also serve as the backbone of Advanced Driver Assistance Systems (ADAS), connecting more sensors and accommodating future big data flows. The Fi-Wi network gives the overall integrated automotive radar system more expandability, making the proposed system a strong candidate for the sensing functions of future fully autonomous cars.
Keywords: free-space optical communication; millimetre wave radar; radar signal processing; road vehicle radar; Big Data flow; Fi-Wi network; advanced driver assistance system; central module; electrical-to-optical converter; fiber-wireless network; frequency 77 GHz; frequency 79 GHz; frequency quadrupler; fully autonomous cars; millimeter-wave automotive radar system integration; millimeter-wave radar signal; multiple automotive radar module integration; optical-to-electrical converter; radar module; radar protection; Advanced driver assistance systems; Automotive engineering; Optical fiber amplifiers; Optical fiber networks; Optical fiber sensors; Radar; advanced driver assistance systems; automotive radar; fiber-wireless; optical-wireless; radio-over-fiber; sensor (ID#: 16-10011)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346112&isnumber=7346101
B. Wu, S. Li, K. Ma Smedley and S. Singer, “A Family of Two-Switch Boosting Switched-Capacitor Converters,” in IEEE Transactions on Power Electronics, vol. 30, no. 10, pp. 5413-5424, Oct. 2015. doi: 10.1109/TPEL.2014.2375311
Abstract: A family of “Two-Switch Boosting Switched-Capacitor Converters” (TBSC) is introduced, which distinguishes itself from prior art by symmetrically interleaved operation, reduced output ripple, low and evenly distributed voltage stress on components, and systematic expandability. Along with the topologies, a modeling method is formulated, which leads to a converter regulation method based on duty-cycle and frequency adjustment. The paper also provides guidance on circuit component and parameter selection. A 1-kW 3X TBSC was built to demonstrate the converter's feasibility and its regulation capability via duty cycle and frequency; it achieved a peak efficiency of 97.5% at rated power.
Keywords: power convertors; converter regulation method; duty cycle; efficiency 97.5 percent; frequency adjustment; power 1 kW; two-switch boosting switched-capacitor converters; Capacitors; Integrated circuit modeling; Pulse width modulation; Stress; Switches; Topology; Voltage control; Frequency modulation; TBSC; frequency modulation; interleaved; modeling; switched-capacitor; two-switch boosting switched-capacitor converters (TBSC) (ID#: 16-10012)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6971218&isnumber=7112252
S. Khan, W. Dang, L. Lorenzelli and R. Dahiya, “Flexible Pressure Sensors Based on Screen-Printed P(VDF-TrFE) and P(VDF-TrFE)/MWCNTs,” in IEEE Transactions on Semiconductor Manufacturing, vol. 28, no. 4, pp. 486-493, Nov. 2015. doi: 10.1109/TSM.2015.2468053
Abstract: This paper presents large-area-printed flexible pressure sensors developed with an all-screen-printing technique. The 4 × 4 sensing arrays are obtained by printing polyvinylidene fluoride-trifluoroethylene P(VDF-TrFE) and its nanocomposite with multi-walled carbon nanotubes (MWCNTs), sandwiched between printed metal electrodes in a parallel-plate structure. The bottom electrodes and sensing materials are printed sequentially on polyimide and polyethylene terephthalate (PET) substrates. The top electrodes, with force-concentrator posts on the backside, are printed on a separate PET substrate and adhered with good alignment to the bottom electrodes. The interconnects, linking the sensors in series, are printed together with the metal electrodes and provide the expandability of the cells. Different weight ratios of MWCNTs are mixed into P(VDF-TrFE) to optimize the percolation threshold for better sensitivity. The nanocomposite of MWCNTs in piezoelectric P(VDF-TrFE) is also explored for application in stretchable interconnects, where the higher conductivity at lower percolation ratios is of significant importance compared to a nanocomposite of MWCNTs in an insulating material. To examine the functionality and sensitivity of the sensor module, capacitance-voltage analysis at different frequencies and the piezoelectric and piezoresistive responses of the sensor are presented. The whole foldable pressure sensor package is developed entirely by screen printing and is targeted toward the realization of low-cost electronic skin.
Keywords: electrodes; insulating materials; multi-wall carbon nanotubes; nanocomposites; polymers; pressure sensors; P(VDF-TrFE)-MWCNT; bottom electrodes; capacitance-voltage analysis; force concentrator; insulator material; large-area-printed flexible pressure sensors; low-cost electronic skin; multiwalled carbon nanotubes; nanocomposite; parallel plate structure; percolation threshold; piezoelectric response; piezoresistive response; polyethylene terephthalate substrates; polyimide substrates; polyvinylidene fluoride-trifluoroethylene; printed metal electrodes; screen-printed P(VDF-TrFE); sensing arrays; sensing materials; stretchable interconnects; top electrodes; Flexible electronics; Nanocomposites; Piezoelectric devices; Pressure sensors; Printing; Flexible Sensors; P(VDF-TrFE); Piezoelectric; Screen Printing; Screen printing; Spin Coating; flexible sensors; piezoelectric; spin coating (ID#: 16-10013)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7194825&isnumber=7310646
Yang Liu and J. Ai, “A Software Evolution Complex Network for Object Oriented Software,” Prognostics and System Health Management Conference (PHM), 2015, Beijing, 2015, pp. 1-6. doi: 10.1109/PHM.2015.7380050
Abstract: With the rapid growth of software complexity, software reliability has become an important issue in recent years, and software complex networks have been proposed to express that complexity. Existing software complex networks, however, are insufficient for relating software features to software reliability. In this paper, a software evolution complex network for object-oriented software (OOSEN) is built from object-oriented code. With detailed structural features and software version update information, OOSEN improves the expression of software features. By analyzing software version update dates, OOSEN builds a more effective relationship with software reliability, and its expandability makes it more suitable for expressing software systems.
Keywords: object-oriented methods; software metrics; software reliability; OOSEN; object oriented software; software complexity; software evolution complex network; software version updating information; Software reliability; Software systems; software code; software complex network; software evolution complex network; software version (ID#: 16-10014)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7380050&isnumber=7380000
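A software complex network of the kind OOSEN extends can be sketched as a dependency graph whose nodes also carry version-history metadata. The class names and the fan-out metric below are illustrative placeholders, not structures from the paper.

```python
# Hypothetical sketch of a software complex network: classes are nodes,
# dependencies are directed edges, and each node records the versions in
# which it changed (the kind of evolution data OOSEN adds to the network).
class SoftwareNetwork:
    def __init__(self):
        self.edges = {}      # class name -> set of classes it depends on
        self.versions = {}   # class name -> list of versions it changed in

    def add_dependency(self, src, dst):
        self.edges.setdefault(src, set()).add(dst)
        self.edges.setdefault(dst, set())   # ensure the target node exists

    def record_change(self, cls, version):
        self.versions.setdefault(cls, []).append(version)

    def fan_out(self, cls):
        # out-degree: one simple structural metric over the network
        return len(self.edges.get(cls, set()))

net = SoftwareNetwork()
net.add_dependency("OrderService", "PaymentGateway")
net.add_dependency("OrderService", "Inventory")
net.record_change("OrderService", "v1.2")
print(net.fan_out("OrderService"))  # 2
```

Combining structural metrics such as fan-out with per-class change history is what lets an evolution network be correlated with reliability data over successive versions.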
Z. Xiao-yan and K. Dan, “Research of Coal Quality Detection Management Information System in Coal Enterprise,” Signal Processing, Communications and Computing (ICSPCC), 2015 IEEE International Conference on, Ningbo, 2015, pp. 1-4. doi: 10.1109/ICSPCC.2015.7338855
Abstract: On the basis of a deep study of the coal quality inspection management business process, combined with open-source framework technology, we have designed a coal quality detection management information system based on J2EE. The system uses the jxl report-processing technology and the vector graphics library Raphael, making it easy for users to analyze coal seams and coal quality visually. Trial results show that the system has excellent stability and expandability and wide application prospects in the information management of coal enterprises.
Keywords: coal; computer graphic equipment; inspection; public domain software; quality management; J2EE; coal enterprise; coal quality detection management information system; coal quality inspection management business process; coal seam; information management; jxl report processing technology; open source framework technology; vector graphics library Raphael; Coal; Face; Inspection; Management information systems; Personnel; Tunneling; Open source framework; Raphael; coal quality detection (ID#: 16-10015)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7338855&isnumber=7338753
B. H. Song, J. Shin, S. Kim and J. Jeong, “On PMIPv6-Based Mobility Support for Hierarchical P2P-SIP Architecture in Intelligent Transportation System,” System Sciences (HICSS), 2015 48th Hawaii International Conference on, Kauai, HI, 2015, pp. 5446-5452. doi: 10.1109/HICSS.2015.637
Abstract: Network service providers face the challenge of providing network services with an expandable, reliable, flexible, and low-cost structure in an expanding market environment. The current client-server approach has various problems, such as complexity and high cost, in providing network services. In contrast, these problems can be solved simply if a Peer-to-Peer (P2P) communication terminal supporting access to distributed resources provides the functions that current Session Initiation Protocol (SIP)-based network devices have. Because diverse terminals access through different networks, partitioning network domains with gateways and applying Proxy Mobile IPv6 (PMIPv6) technology to handle terminal mobility yields a more efficient network structure. In particular, the proposed P2P-SIP structure proves to be very efficient, offering outstanding expandability among different networks in a region and reduced maintenance costs.
Keywords: IP networks; client-server systems; cost reduction; intelligent transportation systems; internetworking; mobile computing; network servers; peer-to-peer computing; signalling protocols; P2P communication terminal; P2P-SIP structure; PMIPv6 technology; PMIPv6-based mobility support; SIP-based network devices; client-server system; distributed resources; gateways; hierarchical P2P-SIP architecture; intelligent transportation system; maintenance cost reduction; network domain partitioning; peer-to-peer communication terminal; proxy mobile IPv6 technology; session initiation protocol-based network devices; Logic gates; Maintenance engineering; Manganese; Mobile radio mobility management; Overlay networks; Registers; Servers; Intelligent Transportation System; P2P-SIP Architecture; PMIPv6-Based Mobility Management; Proxy Mobile IPv6 (ID#: 16-10016)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7070470&isnumber=7069647
A. V. Ho, T. W. Chun and H. G. Kim, “Extended Boost Active-Switched-Capacitor/Switched-Inductor Quasi-Z-Source Inverters,” in IEEE Transactions on Power Electronics, vol. 30, no. 10, pp. 5681-5690, Oct. 2015. doi: 10.1109/TPEL.2014.2379651
Abstract: This paper proposes a new topology named the active-switched-capacitor/switched-inductor quasi-Z -source inverter (ASC/SL-qZSI), which is based on a traditional qZSI topology. Compared to other qZSI-based topologies under the same operating conditions, the proposed ASC/SL-qZSI provides higher boost ability, requires fewer passive components such as inductors and capacitors, and achieves lower voltage stress across the switching devices of the main inverter. Another advantage of the topology is its expandability. If a higher boosting rate is required, additional cells can easily be cascaded at the impedance network by adding one inductor and three diodes. Both the simulation studies and the experimental results obtained from a prototype built in the laboratory validate proper operation and performance of the proposed ASC/SL-qZSI.
Keywords: invertors; power capacitors; power inductors; extended boost active-switched-capacitor quasi-Z-source inverters; extended boost active-switched-inductor quasi-Z-source inverters; impedance network; Capacitors; Inductors; Inverters; Modulation; Network topology; Switches; Topology; Active switched capacitor; Active switched capacitor (ASC); boost ability; quasi-Z-source inverter (qZSI); switched inductor (ID#: 16-10017)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6981968&isnumber=7112252
V. S. Latha and D. S. B. Rao, “The Evolution of the Ethernet: Various Fields of Applications,” 2015 Online International Conference on Green Engineering and Technologies (IC-GET), Coimbatore, India, 2015, pp. 1-7. doi: 10.1109/GET.2015.7453807
Abstract: Ethernet technology became predominant thanks to its proven simplicity, cost, reliability, ease of installation, and expandability. These attractive qualities have brought Ethernet into fields of application ranging from industry to avionics, as well as video and voice applications that demand higher network speeds; Ethernet has been adopted as the alternative technology to handle such faster data rates. The main objective of this paper is to describe the evolution of Ethernet toward 400 Gbps technology and its various fields of application.
Keywords: Bandwidth; EPON; IEEE 802.3 Standard; Local area networks; Physical layer; Wavelength division multiplexing; CSMA/CD Standard; IEEE 802.3; Media Independent Interface (MII); Physical Coding Sublayer (PCS); Physical Layer (PHY); Physical Medium Attachment (PMA); Physical Medium Dependent (PMD) (ID#: 16-10018)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7453807&isnumber=7453764
A. M. Lalge, A. Shrivastav and S. U. Bhandari, “Implementing PSK MODEMs on FPGA Using Partial Reconfiguration,” Computing Communication Control and Automation (ICCUBEA), 2015 International Conference on, Pune, 2015, pp. 917-921. doi: 10.1109/ICCUBEA.2015.182
Abstract: The radio in which as many components as possible are realized with programmable devices was envisioned as the future of the telecommunication industry by Joseph Mitola in 1991. Traditional, bulky, and costly radios are expected to be replaced by a radio in which carrier frequency, signal bandwidth, modulation, and network access are defined in software. The key requirements for software-defined radio (SDR) platforms are flexibility, expandability, scalability, re-configurability, and re-programmability; power consumption, configuration time, and hardware usage also play significant roles. Because an FPGA offers both high-speed processing capability and good reconfigurable performance, FPGA architectures are a viable solution for SDR technology. The objective of this paper is to demonstrate the simulation and implementation of PSK modems on an FPGA using Partial Reconfiguration (PR), a technique that reduces hardware usage, configuration time, and power consumption. The PSK modulator and demodulator algorithms are simulated using MATLAB R2013a and implemented on an FPGA using the Xilinx ISE 14.2 System Generator, PlanAhead, and the Partial Reconfiguration Tool. The results indicate that the PR design leads to negligible reconfiguration time, a 55% saving in resource utilization, and a 75% saving in power consumption. The output waveforms are displayed and analyzed using Xilinx ChipScope Pro.
Keywords: demodulators; field programmable gate arrays; modems; phase shift keying; reconfigurable architectures; software radio; FPGA architecture; MATLAB R2013a; PSK demodulator algorithm; PSK modem; PSK modulator; PlanAhead; SDR; Xilinx ChipScope Pro; Xilinx ISE 14.2 system generator; configuration time; hardware usage; partial reconfiguration design; partial reconfiguration tool; power consumption; reconfigurable performance; Binary phase shift keying; Field programmable gate arrays; Generators; Hardware; Modems; BPSK; LFSR; PR; QPSK (ID#: 16-10019)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155980&isnumber=7155781
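The BPSK modulator/demodulator pair that the paper implements on an FPGA can be prototyped in a few lines of software before committing to hardware. The sketch below uses coherent correlation detection over an ideal noiseless channel; parameters such as the samples-per-symbol count are arbitrary illustrative choices, not the paper's.

```python
import math

def bpsk_modulate(bits, sps=8):
    """Map each bit to one carrier cycle: bit 1 -> phase 0, bit 0 -> phase pi."""
    wave = []
    for b in bits:
        phase = 0.0 if b else math.pi
        wave += [math.cos(2 * math.pi * n / sps + phase) for n in range(sps)]
    return wave

def bpsk_demodulate(wave, sps=8):
    """Coherent detection: correlate each symbol against the reference carrier."""
    ref = [math.cos(2 * math.pi * n / sps) for n in range(sps)]
    bits = []
    for i in range(0, len(wave), sps):
        corr = sum(w * r for w, r in zip(wave[i:i + sps], ref))
        bits.append(1 if corr > 0 else 0)   # positive correlation -> phase 0
    return bits

tx = [1, 0, 1, 1, 0]
print(bpsk_demodulate(bpsk_modulate(tx)) == tx)  # True
```

QPSK extends the same idea with two orthogonal correlators (I and Q), which is why a PR design can swap BPSK and QPSK partitions on the same fabric.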
Fog Computing Security 2015 |
Fog computing is a concept that extends the Cloud concept to the end user. As with most new technologies, a survey of the scope and types of security problems is necessary. Much of the research presented relates to the Internet of Things. The articles cited here were presented in 2015.
Y. Wang, T. Uehara and R. Sasaki, “Fog Computing: Issues and Challenges in Security and Forensics,” Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, Taichung, 2015, pp. 53-59. doi: 10.1109/COMPSAC.2015.173
Abstract: Although Fog Computing is defined as the extension of the Cloud Computing paradigm, its distinctive characteristics in location sensitivity, wireless connectivity, and geographical accessibility create new security and forensics issues and challenges which have not been well studied in Cloud security and Cloud forensics. In this paper, through an extensive review of the motivation and advantages of Fog Computing and its unique features, as well as a comparison of various scenarios between Fog Computing and Cloud Computing, the new issues and challenges in Fog security and Fog forensics are presented and discussed. The result of this study will encourage and promote more extensive research in this fascinating field.
Keywords: cloud computing; digital forensics; cloud computing paradigm; cloud forensics; cloud security; fog computing; fog forensics; fog security; geographical accessibility; location sensitivity; wireless connectivity; Cloud computing; Digital forensics; Mobile communication; Security; Wireless communication; Wireless sensor networks; Cloud Computing; Cloud Forensics; Cloud Security; Fog Computing; Fog Forensics; Fog Security (ID#: 16-10307)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273323&isnumber=7273299
K. Lee, D. Kim, D. Ha, U. Rajput and H. Oh, “On Security and Privacy Issues of Fog Computing Supported Internet of Things Environment,” Network of the Future (NOF), 2015 6th International Conference on the, Montreal, QC, 2015, pp. 1-3. doi: 10.1109/NOF.2015.7333287
Abstract: Recently, the concept of the Internet of Things (IoT) has attracted much attention due to its huge potential. IoT uses the Internet as a key infrastructure to interconnect numerous geographically diversified IoT nodes, which usually have scarce resources, and therefore the cloud is used as a key back-end supporting infrastructure. In the literature, the collection of the IoT nodes and the cloud is collectively called an IoT cloud. Unfortunately, the IoT cloud suffers from various drawbacks, such as huge network latency, as the volume of data processed within the system increases. To alleviate this issue, the concept of fog computing has been introduced, in which fog-like intermediate computing buffers are located between the IoT nodes and the cloud infrastructure to locally process a significant amount of regional data. Compared to the original IoT cloud, the communication latency as well as the overhead at the back-end cloud infrastructure can be significantly reduced in the fog-computing-supported IoT cloud, which we will refer to as the IoT fog. Consequently, several valuable services that were difficult to deliver via the traditional IoT cloud can be effectively offered by the IoT fog. In this paper, however, we argue that the adoption of the IoT fog introduces several unique security threats. We first discuss the concept of the IoT fog as well as the existing security measures which might be useful to secure it. Then, we explore potential threats to the IoT fog.
Keywords: Internet of Things; cloud computing; data privacy; security of data; Internet of Things environment; IoT cloud; IoT fog; IoT nodes; back-end cloud infrastructure; back-end supporting infrastructure; cloud infrastructure; communication latency; fog computing; network latency; privacy issues; security issues; security threats; Cloud computing; Distributed databases; Internet of things; Privacy; Real-time systems; Security; Sensors (ID#: 16-10308)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7333287&isnumber=7333276
M. Aazam and E. N. Huh, “Fog Computing Micro Datacenter Based Dynamic Resource Estimation and Pricing Model for IoT,” 2015 IEEE 29th International Conference on Advanced Information Networking and Applications, Gwangju, 2015, pp. 687-694. doi: 10.1109/AINA.2015.254
Abstract: Pervasive and ubiquitous computing services have recently been under the focus of not only the research community but developers as well. Prevailing wireless sensor networks (WSNs), the Internet of Things (IoT), and healthcare-related services have made it difficult to handle all the data in an efficient and effective way and to create more useful services. Different devices generate different types of data at different frequencies. Therefore, the amalgamation of cloud computing with IoT, termed the Cloud of Things (CoT), has recently been under discussion in the research arena. CoT provides ease of management for growing media content and other data. Besides this, features like ubiquitous access, service creation, service discovery, and resource provisioning, which come with CoT, play a significant role. Emergency, healthcare, and latency-sensitive services require real-time response, and it is necessary to decide what type of data is to be uploaded to the cloud without burdening the core network and the cloud. For this purpose, Fog computing plays an important role. Fog resides between the underlying IoT and the cloud; its purpose is to manage resources and perform data filtration, preprocessing, and security measures. To do so, Fog requires an effective and efficient resource management framework for IoT, which we provide in this paper. Our model covers the issues of resource prediction, customer-type-based resource estimation and reservation, advance reservation, and pricing for new and existing IoT customers on the basis of their characteristics. The implementation was done in Java, while the model was evaluated using the CloudSim toolkit. The results and discussion show the validity and performance of our system.
Keywords: Internet of Things; Java; cloud computing; computer centres; pricing; resource allocation; wireless sensor networks; CloudSim toolkit; CoT; IoT; WSN; cloud of things; customer type based resource estimation; customer type based resource reservation; data filtration; fog computing microdata center based dynamic resource estimation; healthcare related services; latency sensitive services; media content; pervasive computing services; pricing model; real-time response; resource prediction issues; resource provisioning; service creation; service discovery; ubiquitous access; ubiquitous computing services; wireless sensor networks; Cloud computing; Logic gates; Mobile handsets; Performance evaluation; Pricing; Resource management; Wireless sensor networks; Cloud of Things; Edge computing; Fog computing; Micro Data Center; resource management (ID#: 16-10309)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7098039&isnumber=7097928
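The customer-type-based estimation idea can be illustrated with a toy estimator that scales a customer's requested resources by their historical tendency to abandon (relinquish) sessions. The weighting rule, function names, and default below are invented for illustration and are not the paper's actual model.

```python
# Invented toy estimator: scale requested resources by the customer's
# historical relinquish probability (chance of abandoning a session),
# so unreliable customers get a smaller firm reservation.
def estimate_allocation(requested, relinquish_history):
    """Return the amount of resources to actually reserve."""
    if not relinquish_history:
        return requested * 0.5      # unknown customer: reserve conservatively
    mean_p = sum(relinquish_history) / len(relinquish_history)
    return round(requested * (1.0 - mean_p), 2)

print(estimate_allocation(100, [0.1, 0.2, 0.3]))  # 80.0 (mean p = 0.2)
print(estimate_allocation(100, []))               # 50.0
```

A fuller model along the paper's lines would also weight by service type, price, and the variance of the relinquish probability, not just its mean.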
M. A. Hassan, M. Xiao, Q. Wei and S. Chen, “Help Your Mobile Applications with Fog Computing,” Sensing, Communication, and Networking - Workshops (SECON Workshops), 2015 12th Annual IEEE International Conference on, Seattle, WA, 2015, pp. 1-6. doi: 10.1109/SECONW.2015.7328146
Abstract: Cloud computing has paved a way for resource-constrained mobile devices to speed up their computing tasks and to expand their storage capacity. However, cloud computing is not necessarily a panacea for all mobile applications. The high network latency to cloud data centers may not be ideal for delay-sensitive applications, while storing everything on public clouds risks users' security and privacy. In this paper, we discuss two preliminary ideas, one for mobile application offloading and the other for mobile storage expansion, that leverage the edge intelligence offered by fog computing to help mobile applications. Preliminary experiments conducted on implemented prototypes show that fog computing can provide an effective, and sometimes better, alternative to help mobile applications.
Keywords: cloud computing; mobile computing; cloud data centers; edge intelligence; fog computing; mobile applications; network latency; Androids; Bandwidth; Cloud computing; Mobile applications; Mobile handsets; Servers; Time factors (ID#: 16-10310)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7328146&isnumber=7328133
M. Aazam and E. N. Huh, “Dynamic Resource Provisioning Through Fog Micro Datacenter,” Pervasive Computing and Communication Workshops (PerCom Workshops), 2015 IEEE International Conference on, St. Louis, MO, 2015, pp. 105-110. doi: 10.1109/PERCOMW.2015.7134002
Abstract: Lately, pervasive and ubiquitous computing services have been a focus not only of the research community but of developers as well. Different devices generate different types of data at different frequencies. Emergency, healthcare, and latency-sensitive services require real-time response. It is also necessary to decide what type of data is to be uploaded to the cloud without burdening the core network and the cloud. For this purpose, Fog computing plays an important role. Fog resides between the underlying IoTs and the cloud. Its purpose is to manage resources, perform data filtration and preprocessing, and apply security measures. To this end, Fog requires an effective and efficient resource management framework, which we provide in this paper. Moreover, Fog has to deal with mobile nodes and IoTs, which involve objects and devices of different types with fluctuating connectivity behavior. All such service customers have an unpredictable relinquish probability, since any object or device can quit resource utilization at any moment. In our proposed methodology for resource estimation and management, we take these factors into account and formulate resource management on the basis of the customer's fluctuating relinquish probability, service type, service price, and the variance of the relinquish probability. Our system was implemented in Java and evaluated using the CloudSim toolkit. The discussion and results show that these factors can help a service provider estimate the right amount of resources for each type of service customer.
Keywords: Internet of Things; cloud computing; computer centres; mobile computing; probability; resource allocation; CloudSim toolkit; Fog computing; Fog microdatacenter; IoT; Java; data filtration; data preprocessing; dynamic resource provisioning; mobile nodes; pervasive computing services; real-time response; research community; resource management framework; resource utilization; security measures; service price; service provider; service type; ubiquitous computing services; Cloud computing; Conferences; Estimation; Logic gates; Resource management; Sensors; Wireless sensor networks; Cloud of Things; Edge Computing; Fog-Smart Gateway (FSG); IoT; Micro Data Center (MDC); resource management (ID#: 16-10311)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7134002&isnumber=7133953
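To make the relinquish-probability idea concrete, here is one way such an estimator could weight a reservation; the weighting formula and the variance cap are assumptions for illustration only, not the paper's actual model:

```python
def estimated_allocation(requested: float,
                         relinquish_p: float,
                         relinquish_var: float) -> float:
    """Scale down the reservation for customers likely to quit early;
    higher variance in relinquish probability makes the provider
    more conservative. All weights here are illustrative."""
    confidence = 1.0 - relinquish_p            # chance the customer stays
    hedge = 1.0 - min(relinquish_var, 0.5)     # cap the variance penalty
    return requested * confidence * hedge
```

A customer with a 20% relinquish probability and modest variance would thus be allocated noticeably less than requested, freeing fog resources for steadier customers.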
C. Vallati, A. Virdis, E. Mingozzi and G. Stea, “Exploiting LTE D2D Communications in M2M Fog Platforms: Deployment and Practical Issues,” Internet of Things (WF-IoT), 2015 IEEE 2nd World Forum on, Milan, 2015, pp. 585-590. doi: 10.1109/WF-IoT.2015.7389119
Abstract: Fog computing is envisaged as the evolution of the current centralized cloud to support the forthcoming Internet of Things revolution. Its distributed architecture aims at providing location awareness and low-latency interactions to Machine-to-Machine (M2M) applications. In this context, the LTE-Advanced technology and its evolutions are expected to play a major role as a communication infrastructure that guarantees low deployment costs, plug-and-play seamless configuration and embedded security. In this paper, we show how the LTE network can be configured to support future M2M Fog computing platforms. In particular it is shown how a network deployment that exploits Device-to-Device (D2D) communications, currently under definition within 3GPP, can be employed to support efficient communication between Fog nodes and smart objects, enabling low-latency interactions and locality-preserving multicast transmissions. The proposed deployment is presented highlighting the issues that its practical implementation raises. The advantages of the proposed approach against other alternatives are shown by means of simulation.
Keywords: Internet of Things; Long Term Evolution; cloud computing; mobile computing; D2D communication; LTE-Advanced technology; M2M fog platform; device-to-device communication; fog computing; machine-to-machine application; Actuators; Cloud computing; Computer architecture; Intelligent sensors; Long Term Evolution; D2D; Fog Computing; LTE; LTE-Advanced; M2M (ID#: 16-10312)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7389119&isnumber=7389012
Hongyu Xiang, Mugen Peng, Yuanyuan Cheng and H. H. Chen, “Joint Mode Selection and Resource Allocation for Downlink Fog Radio Access Networks Supported D2D,” Heterogeneous Networking for Quality, Reliability, Security and Robustness (QSHINE), 2015 11th International Conference on, Taipei, 2015, pp. 177-182. doi: (not provided)
Abstract: Presented as an innovative paradigm incorporating cloud computing into the radio access network, cloud radio access networks (C-RANs) have been shown to be advantageous in curtailing capital and operating expenditures as well as providing better services to customers. However, the heavy burden on the non-ideal fronthaul limits the performance of C-RANs. Here we focus on alleviating the fronthaul burden via edge devices' caches and propose a fog computing based RAN (F-RAN) architecture with three candidate transmission modes: device-to-device, local distributed coordination, and global C-RAN. Following the proposed simple mode selection scheme, an optimization problem for the average energy efficiency (EE) of the system, considering congestion control, is presented. Under the Lyapunov framework, the problem is reformulated as a joint mode selection and resource allocation problem, which can be solved by the block coordinate descent method. The mathematical analysis and simulation results validate the benefits of F-RAN, and an EE-delay tradeoff can be achieved by the proposed algorithm.
Keywords: mathematical analysis; optimisation; radio equipment; radio links; radio networks; C-RANs; F-RAN architecture; Lyapunov framework; capital expenditures; cloud computing; cloud radio access networks; congestion control; device to device; downlink fog radio access networks supported D2D; edge devices; joint mode selection; local distributed coordination; operating expenditures; optimization problem; resource allocation problem; Chlorine; Performance evaluation; Resource management (ID#: 16-10313)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7332564&isnumber=7332527
M. Koschuch, M. Hombauer, S. Schefer-Wenzl, U. Haböck and S. Hrdlicka, “Fogging the Cloud — Implementing and Evaluating Searchable Encryption Schemes in Practice,” 2015 IFIP/IEEE International Symposium on Integrated Network Management (IM), Ottawa, ON, 2015, pp. 1365-1368. doi: 10.1109/INM.2015.7140497
Abstract: With the rise of cloud computing new ways to secure outsourced data have to be devised. Traditional approaches like simply encrypting all data before it is transferred only partially alleviate this problem. Searchable Encryption (SE) schemes enable the cloud provider to search for user supplied strings in the encrypted documents, while neither learning anything about the content of the documents nor about the search terms. Currently there are many different SE schemes defined in the literature, with their number steadily growing. But experimental results of real world performance, or direct comparisons between different schemes, are severely lacking. In this work we propose a simple Java client-server framework to efficiently implement different SE algorithms and compare their efficiency in practice. In addition, we demonstrate the possibilities of such a framework by implementing two different existing SE schemes from slightly different domains and compare their behavior in a real-world setting.
Keywords: Java; cloud computing; cryptography; document handling; Java client-server framework; SE schemes; encrypted documents; outsourced data security; searchable encryption schemes; user supplied strings; Arrays; Conferences; Encryption; Indexes; Servers (ID#: 16-10314)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140497&isnumber=7140257
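The searchable-encryption idea the paper benchmarks can be illustrated with a deliberately simple deterministic-token scheme: the client derives an HMAC token per keyword, so the server can match encrypted queries against an encrypted index without seeing plaintext. This is a toy illustration of the concept, not one of the schemes the authors implement, and deterministic tokens leak search patterns, exactly the sort of trade-off a comparison framework like theirs would measure.

```python
import hmac
import hashlib

def keyword_token(key: bytes, word: str) -> bytes:
    """Deterministic per-keyword search token (client-side secret key)."""
    return hmac.new(key, word.lower().encode(), hashlib.sha256).digest()

def build_index(key: bytes, doc_id: str, words) -> dict:
    """Server-side index mapping opaque tokens to document ids."""
    index = {}
    for w in words:
        index.setdefault(keyword_token(key, w), set()).add(doc_id)
    return index

def search(index: dict, key: bytes, word: str):
    """Server matches the query token without learning the word itself."""
    return index.get(keyword_token(key, word), set())
```

A client-server framework such as the paper's would wrap functions like these behind a network API and time index construction and query latency for each competing scheme.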
R. Gupta and R. Garg, “Mobile Applications Modelling and Security Handling in Cloud-Centric Internet of Things,” Advances in Computing and Communication Engineering (ICACCE), 2015 Second International Conference on, Dehradun, 2015, pp. 285-290. doi: 10.1109/ICACCE.2015.119
Abstract: Mobile Internet of Things (IoT) applications are already a part of the technical world. Integrating these applications with the Cloud can increase storage capacity and help users collect and process their personal data in an organized manner. A number of techniques are adopted for sensing, communicating, and intelligently transmitting data from mobile devices onto the Cloud in IoT applications; thus, security must be maintained during transmission. The paper outlines the need for Cloud-centric IoT applications using mobile phones as the medium for communication. An overview of different techniques for using mobile IoT applications with the Cloud is presented. Four main techniques, namely the Mobile Sensor Data Processing Engine (MOSDEN), Mobile Fog, Embedded Integrated Systems (EIS), and Dynamic Configuration using Mobile Sensor Hub (MosHub), are discussed, and some of the similarities and comparisons between them are mentioned. There is a need to maintain confidentiality and security of the data being transmitted by these methodologies. Therefore, cryptographic mechanisms like Public Key Encryption (PKI) and digital certificates are used for data management (TSCM), allowing trustworthy sensing of data for the public in IoT applications. We use these technologies to implement an application called Smart Helmet, to bring better understanding of the Cloud IoT concept and to support Assisted Living for the betterment of society. The application makes use of Nordic BLE board transmission and stores data in the Cloud to be used by a large number of people.
Keywords: Internet of Things; cloud computing; data acquisition; embedded systems; mobile computing; public key cryptography; trusted computing; EIS; MOSDEN; MosHub; Nordic BLE board transmission; PKI; Smart Helmet; TSCM; assisted living; cloud-centric Internet of Things; cloud-centric IoT applications; communication; cryptographic mechanisms; data confidentiality; data management; data mechanisms; data security; data transmission; digital certificates; dynamic configuration; embedded integrated systems; mobile Internet of Things; mobile IoT applications; mobile applications modelling; mobile devices; mobile fog; mobile phones; mobile sensor data processing engine; mobile sensor hub; personal data collection; personal data processing; public key encryption; security handling; sensing; storage capacity; trustworthy data; Bluetooth; Cloud computing; Mobile applications; Mobile communication; Mobile handsets; Security; Cloud IoT; Embedded Integrated Systems; Mobile Applications; Mobile Sensor Data Processing Engine; Mobile Sensor Hub; Nordic BLE board; Public Key Encryption; Smart Helmet (ID#: 16-10315)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7306695&isnumber=7306547
M. Dong, K. Ota and A. Liu, “Preserving Source-Location Privacy Through Redundant Fog Loop for Wireless Sensor Networks,” Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on, Liverpool, 2015, pp. 1835-1842. doi: 10.1109/CIT/IUCC/DASC/PICOM.2015.274
Abstract: A redundant fog loop-based scheme is proposed to preserve source-node location privacy and achieve energy efficiency through two important mechanisms in wireless sensor networks (WSNs). The first mechanism creates fogs with loop paths. The second creates fogs in the real source-node region as well as many interference fogs in other regions of the network. In addition, the fogs change dynamically, and the communication among fogs also forms a loop path. The simulation results show that for medium-scale networks, our scheme can improve privacy security eightfold compared with the phantom routing scheme, while improving energy efficiency fourfold.
Keywords: data privacy; energy conservation; telecommunication power management; telecommunication security; wireless sensor networks; energy efficiency; medium-scale network; privacy security improvement; redundant fog loop-based scheme; source-location privacy preservation; wireless sensor network; Energy consumption; Phantoms; Position measurement; Privacy; Protocols; Routing; Wireless sensor networks; performance optimization; redundant fog loop; source-location privacy; wireless sensor networks (ID#: 16-10316)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363320&isnumber=7362962
M. Zhanikeev, “A Cloud Visitation Platform to Facilitate Cloud Federation and Fog Computing,” in Computer, vol. 48, no. 5, pp. 80-83, May 2015. doi: 10.1109/MC.2015.122
Abstract: Evolving from hybrid clouds to true cloud federations and, ultimately, fog computing will require that cloud platforms allow for—and embrace—local hardware awareness.
Keywords: cloud computing; cloud federations; cloud visitation platform; fog computing; hybrid clouds; local hardware awareness; Cloud computing; Computer security; Software architecture; Streaming media; Cloud; cloud federations; hardware awareness; hardware virtualization (ID#: 16-10317)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7111861&isnumber=7111853
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Integrated Security Technologies in Cyber-Physical Systems, 2015 |
Cybersecurity has spent the past two decades largely as a “bolt-on” product added as an afterthought. To get to composability, built-in, integrated security will be a key factor. The research cited here was presented in 2015.
H. Hidaka, “How Future Mobility Meets IT: Cyber-Physical System Designs Revisit Semiconductor Technology,” Solid-State Circuits Conference (A-SSCC), 2015 IEEE Asian, Xiamen, 2015, pp. 1-4. doi: 10.1109/ASSCC.2015.7387514
Abstract: Cyber-Physical Systems (CPS), exemplified by future mobility application systems, necessitate unconventional design considerations in embedded systems: multiple latency-aware computing and communication constructions, the rising importance of once non-functional requirements like security and safety that span physical and cyber systems, and VLSI lifetime design by ecology. All in all, we have to reexamine and reorganize current semiconductor technology to produce platform bases for connected open collaborations that tackle global human challenges.
Keywords: VLSI; circuit analysis computing; cyber-physical systems; integrated circuit design; semiconductor technology; CPS; IT; VLSI life-time design; communication construction; cyber-physical system designs; embedded design; embedded systems; mobility application systems; multiple latency-aware computing; semiconductor technology; Automotive engineering; Cyber-physical systems; Safety; Security; Sensors; System analysis and design; Very large scale integration (ID#: 16-11257)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7387514&isnumber=7387429
Y. Peng et al., “Cyber-Physical Attack-Oriented Industrial Control Systems (ICS) Modeling, Analysis and Experiment Environment,” 2015 International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP), Adelaide, SA, 2015, pp. 322-326. doi: 10.1109/IIH-MSP.2015.110
Abstract: The most essential difference between information technology (IT) and industrial control systems (ICS) is that ICSs are Cyber-Physical Systems (CPS) with direct effects on the physical world. In the context of this paper, attacks that can cause physical damage via cyber means are named Cyber-Physical Attacks. In the real world, malware-associated events such as Stuxnet have proven that this kind of attack is both feasible and destructive. We propose an ICS-CPS operation dual-loop analysis model (ICONDAM) for analyzing ICS human-cyber-physical interdependences, and we present the architecture and features of our CPS-based Critical Infrastructure Integrated Experiment Platform (C2I2EP) ICS experiment environment. Through both theoretical analysis and experiments with Cyber-Physical Attacks performed in our ICS experiment environment, we show that the ICONDAM model and the C2I2EP experiment environment have a promising prospect in the field of ICS cyber-security research.
Keywords: industrial control; invasive software; production engineering computing; C2I2EP; CPS-based critical infrastructure integrated experiment platform; ICONDAM model; ICS cyber-security research; ICS experiment environment; ICS human-cyber-physical interdependences; ICS modeling; ICS-CPS operation dual-loop analysis model; IT; Stuxnet; cyber-physical attack-oriented industrial control systems; information technology; malware associated events; Analytical models; Biological system modeling; Integrated circuit modeling; Malware; Process control; Sensors; Cyber-Physical Attacks; Cyber-Physical Systems (CPS); Industrial Control Systems (ICS); cyber security; experiment environment (ID#: 16-11258)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7415822&isnumber=7415733
L. Vegh and L. Miclea, “A Simple Scheme for Security and Access Control in Cyber-Physical Systems,” 2015 20th International Conference on Control Systems and Computer Science, Bucharest, 2015, pp. 294-299. doi: 10.1109/CSCS.2015.13
Abstract: In a time when technology changes continuously, where things you need today to run a certain system might not be needed tomorrow, security is a constant requirement. No matter what systems we have or how we structure them, no matter what means of digital communication we use, we are always interested in aspects like security, safety, and privacy. Cyber-physical systems are an example of this ever-advancing technology. We propose a complex security architecture that integrates several established methods such as cryptography, steganography, and digital signatures. This architecture is designed not only to ensure security of communication by transforming data into secret code; it is also designed to control access to the system and to detect and prevent cyber attacks.
Keywords: authorisation; cryptography; digital signatures; steganography; access control; cyber attacks; cyber-physical system; security architecture; security requirement; system security; Computer architecture; Digital signatures; Encryption; Public key; multi-agent systems; (ID#: 16-11259)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7168445&isnumber=7168393
M. Heiss, A. Oertl, M. Sturm, P. Palensky, S. Vielguth and F. Nadler, “Platforms for Industrial Cyber-Physical Systems Integration: Contradicting Requirements as Drivers for Innovation,” Modeling and Simulation of Cyber-Physical Energy Systems (MSCPES), 2015 Workshop on, Seattle, WA, 2015, pp. 1-8. doi: 10.1109/MSCPES.2015.7115405
Abstract: The full potential of distributed cyber-physical systems (CPS) can only be leveraged if their functions and services can be flexibly integrated. Challenges like communication quality, interoperability, and amounts of data are massive. The design of such integration platforms therefore requires radically new concepts. This paper shows the industrial view, the business perspective on such envisioned platforms. It turns out that there are not only huge technical challenges to overcome but also fundamental dilemmas. Contradicting requirements and conflicting trends force us to re-think the task of interconnecting services of distributed CPS.
Keywords: embedded systems; manufacturing data processing; business perspective; distributed CPS; distributed cyber-physical system; industrial cyber-physical system integration; Business; Complexity theory; Computer architecture; Optimization; Reliability; Security; Software; IT platforms; complexity management; cyber-physical systems; distributed systems; software integration (ID#: 16-11260)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7115405&isnumber=7115373
D. Chen, K. Meinke, F. Asplund and C. Baumann, “A Knowledge-in-the-Loop Approach to Integrated Safety & Security for Cooperative System-of-Systems,” 2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS), Cairo, 2015, pp. 13-20. doi: 10.1109/IntelCIS.2015.7397237
Abstract: A system-of-systems (SoS) is inherently open in configuration and evolutionary in lifecycle. For the next generation of cooperative cyber-physical system-of-systems, safety and security constitute two key issues of public concern that affect the deployment and acceptance. In engineering, the openness and evolutionary nature also entail radical paradigm shifts. This paper presents one novel approach to the development of qualified cyber-physical system-of-systems, with Cooperative Intelligent Transport Systems (C-ITS) as one target. The approach, referred to as knowledge-in-the-loop, aims to allow a synergy of well-managed lifecycles, formal quality assurance, and smart system features. One research goal is to enable an evolutionary development with continuous and traceable flows of system rationale from design-time to post-deployment time and back, supporting automated knowledge inference and enrichment. Another research goal is to develop a formal approach to risk-aware dynamic treatment of safety and security as a whole in the context of system-of-systems. Key base technologies include: (1) EAST-ADL for the consolidation of system-wide concerns and for the creation of an ontology for advanced run-time decisions, (2) Learning Based-Testing for run-time and post-deployment model inference, safety monitoring and testing, (3) Provable Isolation for run-time attack detection and enforcement of security in real-time operating systems.
Keywords: cyber-physical systems; evolutionary computation; formal verification; intelligent transportation systems; learning (artificial intelligence); ontologies (artificial intelligence); security of data; C-ITS; EAST-ADL; cooperative intelligent transport systems; cooperative system-of-systems; cyber-physical system-of-systems; evolutionary development; formal quality assurance; integrated safety and security; knowledge-in-the-loop approach; learning based-testing; ontology; risk-aware dynamic treatment; run-time attack detection; safety monitoring; smart system feature; Analytical models; Ontologies; Organizations; Risk management; Roads; Security; System analysis and design; cyber-physical system; knowledge modeling; machine learning; model-based development; ontology; quality-of-service; safety; security; systems-of-systems; verification and validation (ID#: 16-11261)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7397237&isnumber=7397173
H. Derhamy, J. Eliasson, J. Delsing, P. P. Pereira and P. Varga, “Translation Error Handling for Multi-Protocol SOA Systems,” 2015 IEEE 20th Conference on Emerging Technologies & Factory Automation (ETFA), Luxembourg, 2015, pp. 1-8. doi: 10.1109/ETFA.2015.7301473
Abstract: The IoT research area has evolved to incorporate a plethora of messaging protocol standards, both existing and new, emerging as preferred communications means. The variety of protocols and technologies enable IoT to be used in many application scenarios. However, the use of incompatible communication protocols also creates vertical silos and reduces interoperability between vendors and technology platform providers. In many applications, it is important that maximum interoperability is enabled. This can be for reasons such as efficiency, security, end-to-end communication requirements etc. In terms of error handling each protocol has its own methods, but there is a gap for bridging the errors across protocols. Centralized software bus and integrated protocol agents are used for integrating different communications protocols. However, the aforementioned approaches do not fit well in all Industrial IoT application scenarios. This paper therefore investigates error handling challenges for a multi-protocol SOA-based translator. A proof of concept implementation is presented based on MQTT and CoAP. Experimental results show that multi-protocol error handling is possible and furthermore a number of areas that need more investigation have been identified.
Keywords: open systems; protocols; service-oriented architecture; CoAP; MQTT; centralized software bus; communication protocols; industrial IoT; integrated protocol agents; maximum interoperability; messaging protocol standards; multiprotocol SOA systems; multiprotocol SOA-based translator; translation error handling; Computer architecture; Delays; Monitoring; Protocols; Quality of service; Servers; Service-oriented architecture; Arrowhead; Cyber-physical systems; Error handling; Internet of Things; Protocol translation; SOA; Translation (ID#: 16-11262)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7301473&isnumber=7301399
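The error-handling gap the paper investigates arises because MQTT 3.1.1 PUBLISH messages carry no status code, while CoAP responses do (RFC 7252 defines codes such as 2.05 Content and 4.04 Not Found). One hypothetical way a translator could bridge this, sketched here as an assumption rather than the authors' design, is to pass successes through and report CoAP errors on a dedicated error topic:

```python
# A few real CoAP response codes (RFC 7252); the mapping strategy
# and the "/error" topic convention below are illustrative assumptions.
COAP_TO_TEXT = {
    "2.05": "Content",
    "4.00": "Bad Request",
    "4.04": "Not Found",
    "5.00": "Internal Server Error",
}

def translate_coap_response(topic: str, code: str, payload: bytes):
    """Return (mqtt_topic, mqtt_payload) for a translated CoAP response."""
    if code.startswith("2."):
        return topic, payload                      # success: pass through
    reason = COAP_TO_TEXT.get(code, "Unknown Error")
    return f"{topic}/error", f"coap {code} {reason}".encode()
```

Real bridges must also handle timeouts, retransmission mismatches, and QoS differences, which is exactly the larger design space the paper's proof of concept explores.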
V. Meza, X. Gomez and E. Perez, “Quantifying Observability in State Estimation Considering Network Infrastructure Failures,” Innovative Smart Grid Technologies Latin America (ISGT LATAM), 2015 IEEE PES, Montevideo, 2015, pp. 171-176. doi: 10.1109/ISGT-LA.2015.7381148
Abstract: Smart grid integrates electrical network, communication systems and information technologies, where increasing architecture interdependency is introducing new challenges in the evaluation of how possible threats could affect security and reliability of power system. While cyber-attacks have been widely studied, consequences of physical failures on real-time applications are starting to receive attention due to implications for power system security. This paper presents a methodology to quantify the impact on observability in state estimation of possible disruptive failures of a common transmission infrastructure. Numerical results are obtained by calculating observability indicators on an IEEE 14-bus test case, considering the simultaneous disconnection of power transmission lines and communication links installed on the same infrastructure.
Keywords: computer network reliability; computer network security; power engineering computing; power system measurement; power system reliability; power system security; smart power grids; state estimation; common transmission infrastructure; communication link disconnection; disruptive failures; network infrastructure failures; observability quantification; physical failure; power transmission lines; smart power grid; Jacobian matrices; Mathematical model; Observability; Power measurement; Power systems; Security; State estimation; Observability; cyber-physical security; power systems (ID#: 16-11263)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7381148&isnumber=7381114
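A toy version of the observability criterion the paper quantifies: in DC state estimation, the system is observable when the measurement Jacobian H has full column rank over the non-slack bus angles, so losing a shared infrastructure corridor (and the measurement rows it carries) can drop the rank. The rank routine and the tiny example matrices below are illustrative, not the paper's IEEE 14-bus case.

```python
def matrix_rank(rows, tol=1e-9):
    """Numerical rank via Gauss-Jordan elimination with a pivot tolerance."""
    m = [list(r) for r in rows]
    rank = 0
    cols = len(m[0]) if m else 0
    for col in range(cols):
        pivot = next((r for r in range(rank, len(m))
                      if abs(m[r][col]) > tol), None)
        if pivot is None:
            continue                      # no pivot in this column
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(len(m)):
            if r != rank and abs(m[r][col]) > tol:
                f = m[r][col] / m[rank][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

def observable(H):
    """Observable iff H has full column rank (one column per state angle)."""
    return matrix_rank(H) == len(H[0])
```

For a 3-bus DC example with the slack angle removed, two independent flow measurements give H = [[1, 0], [0, 1]] (observable); a failure that removes one measurement row leaves the remaining states unobservable.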
C. C. Sun, J. Hong and C. C. Liu, “A Co-Simulation Environment for Integrated Cyber and Power Systems,” 2015 IEEE International Conference on Smart Grid Communications (SmartGridComm), Miami, FL, 2015, pp. 133-138. doi: 10.1109/SmartGridComm.2015.7436289
Abstract: Due to the development of new power technologies, cyber infrastructures have been widely deployed for monitoring, control, and operation of a power grid. Information and Communications Technology (ICT) provides connectivity of the cyber and power systems. As a result, cyber intrusions become a threat that may cause damages to the physical infrastructures. Research on cyber security for the power grid is a high priority subject for the emerging smart grid environment. A cyber-physical testbed is critical for the study of cyber-physical security of power systems. For confidentiality, measurements (e.g., voltages, currents and binary status) and ICT data (e.g., communication protocols, system logs, and security logs) from the power grids are not publicly accessible. Therefore, a realistic testbed is a good alternative for study of the interactions between physical and cyber systems of a power grid.
Keywords: power engineering computing; power system security; security of data; smart power grids; ICT; co-simulation environment; cyber infrastructures; cyber intrusions; cyber systems; cyber-physical security; cyber-physical testbed; information and communications technology; physical infrastructures; power grid; power systems; smart grid environment; Computer security; Protocols; Real-time systems; Smart grids; Substations; Co-simulations; Cyber Security; Cyber-Physical Security; Intrusion Detection System for Substations; Smart Grid Testbed (ID#: 16-11264)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7436289&isnumber=7436263
R. Liu and A. Srivastava, “Integrated Simulation to Analyze the Impact of Cyber-Attacks on the Power Grid,” Modeling and Simulation of Cyber-Physical Energy Systems (MSCPES), 2015 Workshop on, Seattle, WA, 2015, pp. 1-6. doi: 10.1109/MSCPES.2015.7115395
Abstract: With the development of smart grid technology, Information and Communication Technology (ICT) plays a significant role in the smart grid. ICT enables the smart grid but also introduces cyber vulnerabilities, so it is important to analyze the impact of possible cyber-attacks on the power grid. In this paper, a real-time, cyber-physical co-simulation testbed with hardware-in-the-loop capability is discussed. A Real-Time Digital Simulator (RTDS), synchrophasor devices, DeterLab, and a wide-area monitoring application with closed-loop control are utilized in the developed testbed. Two different real-life cyber-attacks, a TCP SYN flood attack and a man-in-the-middle attack, are simulated on an IEEE standard power system test case to analyze their impact on the power grid.
Keywords: closed loop systems; digital simulation; phasor measurement; power system simulation; smart power grids; DeterLab; ICT; IEEE standard power system test case; RTDS; TCP SYN flood attack; closed loop control; cyber vulnerability; cyber-attack impact analysis; hardware-in-the-loop capability; information and communication technology; integrated simulation; man-in-the-middle attack; real-time cyber-physical cosimulation testbed; real-time digital simulator; smart power grid technology; synchrophasor devices; wide-area monitoring application; Capacitors; Loading; Phasor measurement units; Power grids; Power system stability; Reactive power; Real-time systems; Cyber Security; Cyber-Physical; DeterLab; Real-Time Co-Simulation; Synchrophasor Devices (ID#: 16-11265)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7115395&isnumber=7115373
Bowen Zheng, W. Li, P. Deng, L. Gérardy, Q. Zhu and N. Shankar, “Design and Verification for Transportation System Security,” 2015 52nd ACM/EDAC/IEEE Design Automation Conference (DAC), San Francisco, CA, 2015, pp. 1-6. doi: 10.1145/2744769.2747920
Abstract: Cyber-security has emerged as a pressing issue for transportation systems. Studies have shown that attackers can attack modern vehicles from a variety of interfaces and gain access to the most safety-critical components. Such threats become even broader and more challenging with the emergence of vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication technologies. Addressing the security issues in transportation systems requires comprehensive approaches that encompass considerations of security mechanisms, safety properties, resource constraints, and other related system metrics. In this work, we propose an integrated framework that combines hybrid modeling, formal verification, and automated synthesis techniques for analyzing the security and safety of transportation systems and carrying out design space exploration of both in-vehicle electronic control systems and vehicle-to-vehicle communications. We demonstrate the ideas of our framework through a case study of cooperative adaptive cruise control.
Keywords: formal verification; on-board communications; road safety; security of data; traffic engineering computing; automated synthesis techniques; cooperative adaptive cruise control; design space exploration; formal verification; hybrid modeling; in-vehicle electronic control systems; transportation system safety; transportation system security; vehicle-to-vehicle communications; Delays; Safety; Security; Sensors; Vehicles (ID#: 16-11266)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7167280&isnumber=7167177
M. S. Mispan, B. Halak, Z. Chen and M. Zwolinski, “TCO-PUF: A Subthreshold Physical Unclonable Function,” Ph.D. Research in Microelectronics and Electronics (PRIME), 2015 11th Conference on, Glasgow, 2015, pp. 105-108. doi: 10.1109/PRIME.2015.7251345
Abstract: A Physical Unclonable Function (PUF) is a promising technology towards comprehensive security protection for integrated circuit applications. It provides a secure method of hardware identification and authentication by exploiting inherent manufacturing process variations to generate a unique response for each device. Subthreshold Current Array PUFs, which are based on the non-linearity of currents and voltages in MOSFETs in the subthreshold region, provide higher security against machine learning-based attacks compared with delay-based PUFs. However, their implementation is not practical due to the low output voltages generated from transistor arrays. In this paper, a novel architecture for a PUF, called the “Two Chooses One” PUF or TCO-PUF, is proposed to improve the output voltage ranges. The proposed PUF shows excellent quality metrics. The average inter-chip Hamming distance is 50.23%. The reliability over temperature and ±10% supply voltage fluctuations is 91.58%. In terms of security, the TCO-PUF on average shows higher resistance to machine learning attacks than delay-based PUFs and existing designs of Subthreshold Current Array PUFs.
Keywords: MOSFET; cryptographic protocols; integrated circuit design; integrated circuit reliability; learning (artificial intelligence); security of data; TCO-PUF; current nonlinearity; hardware authentication; hardware identification; integrated circuit applications; interchip Hamming distance; machine learning-based attacks; security protection; subthreshold current array PUF; two chooses one physical unclonable function; Arrays; Measurement; Reliability; Security; Subthreshold current; Transistors; Modelling attacks; Physical Unclonable Function; Subthreshold (ID#: 16-11267)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7251345&isnumber=7251078
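The uniqueness figure quoted in this abstract (an average inter-chip Hamming distance of 50.23%) is a standard PUF quality metric. A minimal sketch of how such a metric can be computed over a set of device responses; the bit strings below are invented for illustration and are not data from the paper:

```python
from itertools import combinations

def avg_inter_chip_hd(responses):
    """Average pairwise Hamming distance between equal-length PUF
    response bit strings, as a percentage of response length."""
    n = len(responses[0])
    pairs = list(combinations(responses, 2))
    total = sum(sum(a != b for a, b in zip(r1, r2)) for r1, r2 in pairs)
    return 100.0 * total / (len(pairs) * n)

# Toy responses from three hypothetical chips to the same challenge
chips = ["10110010", "01100110", "11010001"]
print(avg_inter_chip_hd(chips))  # toy value; ideal PUFs approach 50%
```

For an ideal PUF, responses from different chips to the same challenge disagree in half their bits on average, so the metric approaches 50%.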
V. Casola, A. D. Benedictis and M. Rak, “Security Monitoring in the Cloud: An SLA-Based Approach,” Availability, Reliability and Security (ARES), 2015 10th International Conference on, Toulouse, 2015, pp. 749-755. doi: 10.1109/ARES.2015.74
Abstract: In this paper we present a monitoring architecture that is automatically configured and activated based on a signed Security SLA. This monitoring architecture integrates different security-related monitoring tools (either developed ad hoc or already available as open-source or commercial products) to collect measurements related to specific metrics associated with the set of security Service Level Objectives (SLOs) specified in the Security SLA. To demonstrate our approach, we discuss a case study related to detection and management of vulnerabilities and illustrate the integration of the popular open-source monitoring system OpenVAS into our monitoring architecture. We show how the system is configured and activated by means of available Cloud automation technologies and provide a concrete example of related SLOs and metrics.
Keywords: cloud computing; contracts; public domain software; security of data; system monitoring; OpenVAS; SLA-based approach; SLO; cloud automation technologies; monitoring architecture; open source monitoring system; open-source products; security monitoring; security service level objectives; security-related monitoring tools; signed security SLA; vulnerability management; Automation; Computer architecture; Measurement; Monitoring; Protocols; Security; Servers; Cloud security monitoring; Open VAS; Security Service Level Agreements; vulnerability monitoring (ID#: 16-11268)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299988&isnumber=7299862
M. Ennahbaoui, H. Idrissi and S. E. Hajji, “Secure and Flexible Grid Computing Based Intrusion Detection System Using Mobile Agents and Cryptographic Traces,” Innovations in Information Technology (IIT), 2015 11th International Conference on, Dubai, 2015, pp. 314-319. doi: 10.1109/INNOVATIONS.2015.7381560
Abstract: Grid computing is one of the new and innovative information technologies that attempt to make resource sharing global and easier. Integrated into networked areas, the resources and services in a grid are dynamic and heterogeneous, and they belong to multiple spaced domains, which effectively enables large-scale collection, sharing, and diffusion of data. However, grid computing is still a new paradigm that raises many security issues and conflicts in the computing infrastructures into which it is integrated. In this paper, we propose an intrusion detection system (IDS) based on the autonomy, intelligence, and independence of mobile agents to record the behaviors and actions on the grid resource nodes to detect malicious intruders. This is achieved through the use of cryptographic traces associated with a chaining mechanism to elaborate hashed black statements of the executed agent code, which are then compared to detect intrusions. We have conducted experiments based on three metrics: network load, response time, and detection ability, to evaluate the effectiveness of our proposed IDS.
Keywords: cryptography; grid computing; mobile agents; IDS; chaining mechanism; cryptographic traces; data collection; data diffusion; data sharing; detection ability metric; intrusion detection system; network load metric; resources sharing; response time metric; security issues; Computer architecture; Cryptography; Grid computing; Intrusion detection; Mobile agents; Monitoring
(ID#: 16-11269)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7381560&isnumber=7381480
Y. Bi, K. Shamsi, J. S. Yuan, F. X. Standaert and Y. Jin, “Leverage Emerging Technologies for DPA-Resilient Block Cipher Design,” 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE), Dresden, Germany, 2016, pp. 1538-1543.
doi: (not provided)
Abstract: Emerging devices have been designed and fabricated to extend Moore's Law. While the benefits in traditional metrics such as power, energy, delay, and area certainly apply to emerging device technologies, new devices may offer benefits beyond improvements in those metrics. In this sense, we consider how new transistor technologies could also have a positive impact on hardware security. More specifically, we consider how tunneling FETs (TFETs) and silicon nanowire FETs (SiNW FETs) could offer superior protection to integrated circuits and embedded systems that are subject to hardware-level attacks, e.g., differential power analysis (DPA). Experimental results on SiNW FET and TFET CML gates are presented. In addition, simulation results of utilizing TFET CML on a lightweight cryptographic circuit, KATAN32, show that TFET-based current mode logic (CML) can both improve DPA resilience and preserve low power consumption in the target design. Compared to CMOS-based CML designs, the TFET CML circuit consumes 15 times less power while achieving a similar level of DPA resistance.
Keywords: cryptography; current-mode logic; field effect transistors; nanowires; security; silicon; tunnel transistors; CMOS-based CML design; DPA resilience; DPA-resilient block cipher design; KATAN32; Moore law; Si; SiNW FET; TFET CML gate; complementary metal oxide semiconductor; current mode logic; differential power analysis; hardware security; hardware-level attack; integrated circuit; leverage emerging technology; light-weight cryptographic circuit; low power consumption; silicon nanowire FET; transistor technologies; tunneling field effect transistor; CMOS integrated circuits; Cryptography; Logic gates; Power demand; TFETs; Current Mode Logic (CML); Differential Power Analysis (DPA); Emerging Technologies (ID#: 16-11270)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7459558&isnumber=7459269
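Differential power analysis, which the TFET CML designs above aim to resist, classically partitions recorded power traces by a hypothesized intermediate bit and compares the class means; a pronounced peak in the difference indicates leakage. A toy difference-of-means sketch; the traces and selection bits are synthetic assumptions, not the paper's measurements:

```python
def dpa_difference_of_means(traces, sel_bits):
    """traces: power traces (lists of floats, equal length);
    sel_bits: hypothesized intermediate bit (0/1) per trace.
    Returns the per-sample difference of class means."""
    n = len(traces[0])
    ones = [t for t, b in zip(traces, sel_bits) if b == 1]
    zeros = [t for t, b in zip(traces, sel_bits) if b == 0]
    mean = lambda group, i: sum(t[i] for t in group) / len(group)
    return [mean(ones, i) - mean(zeros, i) for i in range(n)]

# Synthetic traces in which sample 2 leaks the selection bit
traces = [[0.1, 0.2, 0.9], [0.1, 0.2, 0.1], [0.2, 0.1, 0.8], [0.2, 0.1, 0.2]]
bits = [1, 0, 1, 0]
diff = dpa_difference_of_means(traces, bits)
```

Countermeasures such as CML aim to flatten the data-dependent component of the traces so that no selection bit produces such a peak.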
S. R. Sahoo, S. Kumar and K. Mahapatra, “A Modified Configurable RO PUF with Improved Security Metrics,” 2015 IEEE International Symposium on Nanoelectronic and Information Systems, Indore, 2015, pp. 320-324. doi: 10.1109/iNIS.2015.37
Abstract: Physical Unclonable Functions (PUFs) are promising security primitives used to produce a unique signature for an integrated circuit (IC), which is useful in hardware security and cryptographic applications. Among the several PUFs proposed by researchers, such as the Ring Oscillator (RO) PUF, the Arbiter PUF, and the configurable RO (CRO) PUF, the RO PUF is widely used because of its higher uniqueness. As the frequency of an RO is highly susceptible to temperature and voltage fluctuation, it affects the reliability of the IC signature, so configurable ROs (CROs) are used to improve reliability. In this paper we present a modified CRO PUF in which the inverters used to build the RO use different logic styles: static CMOS and feedthrough logic (FTL). The FTL-based CRO PUF improves the uniqueness as well as the reliability of the signature against environmental fluctuation (temperature and voltage) because of its higher leakage current and low switching threshold. The security metrics of uniqueness and reliability are calculated for the proposed modified CRO PUF and compared with the earlier proposed CRO PUF by carrying out simulation in 90 nm technology.
Keywords: CMOS logic circuits; copy protection; cryptography; integrated circuit design; integrated circuit reliability; leakage currents; logic design; logic gates; oscillators; CRO PUF; FTL; IC signature; arbiter PUF; configurable RO PUF; cryptographic applications; feedthrough logic; hardware security; integrated circuit; inverters; leakage current; logic styles; physical unclonable functions; ring oscillator; security metrics; size 90 nm; static CMOS; switching threshold; voltage fluctuation; Information systems; Challenge-Response pair (CRP); Configurable Ring Oscillator (CRO); Feedthrough logic (FTL); Physical Unclonable Function (PUF); process variation (PV)
(ID#: 16-11271)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7434447&isnumber=7434375
K. E. Lever, K. Kifayat and M. Merabti, “Identifying Interdependencies Using Attack Graph Generation Methods,” Innovations in Information Technology (IIT), 2015 11th International Conference on, Dubai, 2015, pp. 80-85. doi: 10.1109/INNOVATIONS.2015.7381519
Abstract: Information and communication technologies have augmented interoperability and rapidly advanced varying industries, with vast complex interconnected networks being formed in areas such as safety-critical systems, which can be further categorised as critical infrastructures. Also to be considered is the paradigm of the Internet of Things, which is rapidly gaining prevalence within the field of wireless communications and is being incorporated into areas such as e-health and automation for industrial manufacturing. As critical infrastructures and the Internet of Things integrate into much wider networks, their reliance upon communication assets owned by third parties to ensure collaboration and control of their systems will significantly increase, along with system complexity and the requirement for improved security metrics. We present a critical analysis of the risk assessment methods developed for generating attack graphs. The failings of these existing schemas include the inability to accurately identify the relationships and interdependencies between risks, and difficulty in reducing attack graph size and generation complexity. Many existing methods also fail because of a heavy reliance upon human intervention for input, identification of vulnerabilities, and analysis of results. We outline our approach to modelling interdependencies within large heterogeneous collaborative infrastructures, proposing a distributed schema which utilises network modelling and attack graph generation methods to provide a means for vulnerabilities, exploits, and conditions to be represented within a unified model.
Keywords: graph theory; risk management; security of data; Internet of Things; attack graph generation methods; communication assets; complex interconnected networks; critical infrastructures; distributed schema; e-health; generation complexity; heterogeneous collaborative infrastructures; industrial manufacturing automation; information and communication technologies; interdependencies identification; interdependencies modelling; interoperability; risk assessment methods; safety-critical systems; security metrics; system complexity; vulnerabilities identification; wireless communications; Collaboration; Complexity theory; Internet of things; Power system faults; Power system protection; Risk management; Security; Attack Graphs; Cascading Failures; Collaborative Infrastructures; Interdependency
(ID#: 16-11272)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7381519&isnumber=7381480
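At its core, attack graph generation enumerates the exploit sequences by which an attacker can move from a foothold to a goal asset. A stdlib-only sketch of simple-path enumeration over a hypothetical host graph; the node names and edges are invented for illustration, and real generators must also manage the state-space growth this abstract criticizes:

```python
def attack_paths(graph, start, goal, path=None):
    """Depth-first enumeration of simple attack paths from a
    foothold to a goal node. graph maps each node to the nodes
    an attacker can reach from it via some exploit."""
    path = (path or []) + [start]
    if start == goal:
        yield path
        return
    for nxt in graph.get(start, []):
        if nxt not in path:  # avoid cycles
            yield from attack_paths(graph, nxt, goal, path)

# Hypothetical network: web server -> app server -> database
graph = {
    "internet": ["webserver"],
    "webserver": ["appserver", "database"],
    "appserver": ["database"],
}
paths = list(attack_paths(graph, "internet", "database"))
```

Each edge here would in practice be annotated with the vulnerability or condition that enables it, which is what lets interdependencies between risks be read off the graph.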
C. Herber, A. Saeed and A. Herkersdorf, “Design and Evaluation of a Low-Latency AVB Ethernet Endpoint Based on ARM SoC,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1128-1134. doi: 10.1109/HPCC-CSS-ICESS.2015.52
Abstract: Communication requirements in automotive electronics are steadily increasing. To satisfy this demand and enable future automotive embedded architectures, new interconnect technologies are needed. Audio Video Bridging (AVB) Ethernet is a promising candidate to accomplish this as it features time sensitive and synchronous communication in combination with high bit rates. However, there is a lack of commercial products as well as research regarding AVB-capable system-on-chips (SoCs). In this paper, we investigate how and at what cost a legacy Ethernet MAC can be enhanced into an AVB Ethernet controller. Using FPGA prototyping and a real system based on an ARM Cortex-A9 SoC running Linux, we conducted a series of experiments to evaluate important performance metrics and to validate our design decisions. We achieved frame release latencies of less than 6 μs and time-synchronization with an endpoint-induced inaccuracy of up to 8 μs.
Keywords: Linux; automotive electronics; field programmable gate arrays; local area networks; system-on-chip; ARM Cortex-A9; ARM SoC; Ethernet MAC; FPGA; audio video bridging; automotive electronic; bit rate; field programmable gate array; low-latency AVB Ethernet endpoint; synchronous communication; system-on-chip; Automotive engineering; Field programmable gate arrays; Hardware; Random access memory; Software; Synchronization; Audio Video Bridging; Automotive Electronics; Ethernet
(ID#: 16-11273)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336320&isnumber=7336120
H. Manem, K. Beckmann, M. Xu, R. Carroll, R. Geer and N. C. Cady, “An Extendable Multi-Purpose 3D Neuromorphic Fabric Using Nanoscale Memristors,” 2015 IEEE Symposium on Computational Intelligence for Security and Defense Applications (CISDA), Verona, NY, 2015, pp. 1-8. doi: 10.1109/CISDA.2015.7208625
Abstract: Neuromorphic computing offers an attractive means for processing and learning complex real-world data. With the emergence of the memristor, the physical realization of cost-effective artificial neural networks is becoming viable, due to reduced area and improved performance metrics compared with strictly CMOS implementations. In the work presented here, memristors are utilized as synapses in the realization of a multi-purpose heterogeneous 3D neuromorphic fabric. This paper details our in-house memristor and 3D technologies in the design of a fabric that can perform real-world signal processing (i.e., image/video etc.) as well as everyday Boolean logic applications. The applicability of this fabric is therefore diverse, with applications ranging from general-purpose and high-performance logic computing to power-conservative image detection for mobile and defense applications. The proposed system is an area-effective heterogeneous 3D integration of memristive neural networks that consumes significantly less power and allows for high speeds (3D ultra-high-bandwidth connectivity) in comparison to a purely CMOS 2D implementation. Images and results provided illustrate our state-of-the-art 3D and memristor technology capabilities for the realization of the proposed 3D memristive neural fabric. Simulation results also show the results of mapping Boolean logic functions and images onto perceptron-based neural networks. The results demonstrate the proof of concept of this system, which is the first step in the physical realization of the multi-purpose heterogeneous 3D memristive neuromorphic fabric.
Keywords: Boolean functions; CMOS integrated circuits; fabrics; memristors; neural chips; perceptrons; signal processing; three-dimensional integrated circuits; 3D memristive neural fabric; 3D technology; Boolean logic function application; CMOS implementation; area effective heterogeneous 3D integration; artificial neural network; complementary metal oxide semiconductor; defense application; extendable multipurpose 3D neuromorphic fabric; logic computing; memristive neural network; mobile application; nanoscale memristor; neuromorphic computing; perceptron; power conservative image detection; Decision support systems; Fabrics; Memristors; Metals; Neuromorphics; Neurons; Three-dimensional displays; 3D integrated circuits; image processing; memristor; nanoelectronics; neural networks (ID#: 16-11274)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7208625&isnumber=7208613
P. R. da Paz Ferraz Santos, R. P. Esteves and L. Z. Granville, “Evaluating SNMP, NETCONF, and RESTful Web Services for Router Virtualization Management,” 2015 IFIP/IEEE International Symposium on Integrated Network Management (IM), Ottawa, ON, 2015, pp. 122-130. doi: 10.1109/INM.2015.7140284
Abstract: In network virtualization environments (NVEs), the physical infrastructure is shared among different users (or service providers) who create multiple virtual networks (VNs). As part of VN provisioning, virtual routers (VRs) are created inside physical routers supporting virtualization. Currently, the management of NVEs is mostly realized by proprietary solutions. Heterogeneous NVEs (i.e., with different equipment and technologies) are difficult to manage due to the lack of standardized management solutions. As a first step to achieve management interoperability, good performance, and high scalability, we implemented, evaluated, and compared four management interfaces for physical routers that host virtual ones. The interfaces are based on SNMP (v2c and v3), NETCONF, and RESTful Web Services, and are designed to perform three basic VR management operations: VR creation, VR retrieval, and VR removal. We evaluate these interfaces with regard to the following metrics: response time, CPU time, memory consumption, and network usage. Results show that the SNMPv2c interface is the most suitable one for small NVEs without strict security requirements and NETCONF is the best choice to compose a management interface to be deployed in more realistic scenarios, where security and scalability are major concerns.
Keywords: Web services; open systems; security of data; virtualisation; NETCONF; NVEs; RESTful Web services; SNMPv2c interface; VN provisioning; VR creation; VR management operations; VR removal; VR retrieval; management interoperability; network virtualization environments; router virtualization management; security; virtual networks; virtual routers; Data models; Memory management; Protocols; Servers; Virtual machine monitors; Virtualization; XML (ID#: 16-11275)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140284&isnumber=7140257
E. Takamura, K. Mangum, F. Wasiak and C. Gomez-Rosa, “Information Security Considerations for Protecting NASA Mission Operations Centers (MOCs),” 2015 IEEE Aerospace Conference, Big Sky, MT, 2015, pp. 1-14. doi: 10.1109/AERO.2015.7119207
Abstract: In NASA space flight missions, the Mission Operations Center (MOC) is often considered “the center of the (ground segment) universe,” at least by those involved with ground system operations. It is at and through the MOC that the spacecraft is commanded and controlled, and science data acquired. This critical element of the ground system must be protected to ensure the confidentiality, integrity, and availability of the information and information systems supporting mission operations. This paper identifies and highlights key information security aspects affecting MOCs that should be taken into consideration when reviewing and/or implementing protective measures in and around MOCs. It stresses the need for compliance with information security regulations and mandates, and the need for the reduction of IT security risks that can potentially have a negative impact on the mission if not addressed. This compilation of key security aspects was derived from numerous observations, findings, and issues discovered by IT security audits the authors have conducted on NASA mission operations centers in the past few years. It is not a recipe for securing MOCs, but rather an insight into key areas that must be secured to strengthen the MOC and enable mission assurance. Most concepts and recommendations in the paper can be applied to non-NASA organizations as well. Finally, the paper emphasizes the importance of integrating information security into the MOC development life cycle as configuration, risk, and other management processes are tailored to support the delicate environment in which mission operations take place.
Keywords: aerospace computing; command and control systems; data integrity; information systems; risk management; security of data; space vehicles; IT security audits; IT security risk reduction; MOC development life cycle; NASA MOC protection; NASA mission operation center protection; NASA space flight missions; ground system operations; information availability; information confidentiality; information integrity; information security considerations; information security regulation; information systems; non-NASA organizations; spacecraft command and control; Access control; Information security; Monitoring; NASA; Software; IT security metrics; access control; asset protection; automation; change control; connection protection; continuous diagnostics and mitigation; continuous monitoring; ground segment; ground system; incident handling; information assurance; information security; information security leadership; information technology leadership; infrastructure protection; least privilege; logical security; mission assurance; mission operations; mission operations center; network security; personnel screening; physical security; policies and procedures; risk management; scheduling restrictions; security controls; security hardening; software updates; system cloning and software licenses; system security; system security life cycle; unauthorized change detection; unauthorized change deterrence; unauthorized change prevention
(ID#: 16-11276)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7119207&isnumber=7118873
S. R. Sahoo, S. Kumar and K. Mahapatra, “A Novel ROPUF for Hardware Security,” VLSI Design and Test (VDAT), 2015 19th International Symposium on, Ahmedabad, 2015, pp. 1-2. doi: 10.1109/ISVDAT.2015.7208093
Abstract: Physical Unclonable Functions (PUFs) are promising security primitives in recent times. A PUF is a die-specific random function, or silicon biometric, that is unique for every instance of the die. PUFs derive their randomness from uncontrolled random variations in the IC manufacturing process, which is used to generate cryptographic keys. Researchers have proposed different kinds of PUFs in the last decade, with varying properties. The quality of a PUF is decided by properties such as uniqueness, reliability, and uniformity. In this paper we have designed a novel CMOS-based RO PUF with improved quality metrics at the cost of additional hardware. The novel PUF is a modified Ring Oscillator PUF (RO-PUF) in which the CMOS inverters of the RO-PUF are replaced with feedthrough logic (FTL) inverters. The FTL inverters in the RO-PUF improve the security metrics because of their high leakage current. A pulse injection circuit (PIC) is used to increase the number of challenge-response pairs (CRPs). A comparative analysis has been carried out by simulating both PUFs in 90 nm technology. The simulation results show that the proposed modified FTL PUF provides a uniqueness of 45.24% with a reliability of 91.14%.
Keywords: CMOS analogue integrated circuits; copy protection; cryptography; elemental semiconductors; integrated circuit modelling; leakage currents; logic circuits; logic design; logic gates; oscillators; random functions; silicon; CMOS based RO PUF; CMOS inverters; CRP; FTL PUF; FTL inverters; IC manufacturing process; PIC; Si; challenge-response pairs; cryptographic keys; die-specific random function; feedthrough logic inverters; hardware security; leakage current; physical unclonable functions; pulse injection circuit; ring oscillator PUF; security metrics; silicon biometric; size 90 nm; CMOS integrated circuits; Inverters; Leakage currents; Measurement; Reliability; Security; Silicon; Challenge-Response pair (CRP); Feedthrough logic (FTL); Physical Unclonable Function (PUF); Ring Oscillator (RO); process variation (PV) (ID#: 16-11277)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7208093&isnumber=7208044
Kerberos 2015 |
Kerberos supports authentication in distributed systems. Used in intelligent systems, it is an encrypted data structure naming a user and a service the user may access. For the Science of Security community, it is relevant to the broad issues of cryptography and to resilience, human behavior, and metrics. The work cited here was presented in 2015.
Hoa Quoc Le, Hung Phuoc Truong, Hoang Thien Van and Thai Hoang Le, “A New Pre-Authentication Protocol in Kerberos 5: Biometric Authentication,” Computing & Communication Technologies - Research, Innovation, and Vision for the Future (RIVF), 2015 IEEE RIVF International Conference on, Can Tho, 2015, pp. 157-162. doi: 10.1109/RIVF.2015.7049892
Abstract: Kerberos is a well-known network authentication protocol that allows nodes to communicate over a non-secure network connection. After Kerberos is used to prove the identity of objects in a client-server model, it encrypts all of their subsequent communications to assure privacy and data integrity. In this paper, we modify the initial authentication exchange in Kerberos 5 by using biometric data and asymmetric cryptography. This proposed method creates a new pre-authentication protocol in order to make Kerberos 5 more secure. The proposed method overcomes the limitation of password-based authentication in Kerberos 5 and makes it difficult for a user to repudiate having accessed the application. Moreover, the mechanism of user authentication is more convenient. This method is a strong authentication scheme that is resistant to several attacks.
Keywords: cryptographic protocols; data integrity; data privacy; message authentication; Kerberos 5; asymmetric cryptography; attacks; authentication exchange; biometric authentication; biometric data; client-server model; data integrity; encryption; network authentication protocol; nonsecure network connection; objects identity; password-based authentication; preauthentication protocol; privacy; user authentication; Authentication; Bioinformatics; Cryptography; Fingerprint recognition; Protocols; Servers; Authentication; Kerberos; biometric; cryptography; fingerprint (ID#: 16-9978)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7049892&isnumber=7049862
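For context, the password-based pre-authentication this paper replaces is, in standard Kerberos 5, an encrypted-timestamp exchange (PA-ENC-TIMESTAMP): the client proves knowledge of a key derived from its password by encrypting the current time for the KDC. A deliberately simplified sketch of that idea; the hash-derived keystream below is a toy stand-in for Kerberos's real string-to-key and encryption functions and is not secure:

```python
import hashlib
import time

def toy_string_to_key(password, salt):
    # Stand-in for the Kerberos string-to-key function
    return hashlib.sha256((salt + password).encode()).digest()

def toy_xor(key, data):
    # XOR with a hash-derived keystream; illustrative only, not secure
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

def client_preauth(password, realm_salt):
    """Client side: encrypt the current timestamp under the password key."""
    key = toy_string_to_key(password, realm_salt)
    return toy_xor(key, str(int(time.time())).encode())

def kdc_verify(blob, password, realm_salt, skew=300):
    """KDC side: decrypt and accept if the timestamp is within the skew."""
    key = toy_string_to_key(password, realm_salt)
    try:
        ts = int(toy_xor(key, blob))  # XOR is its own inverse
    except ValueError:  # garbled decryption -> wrong key
        return False
    return abs(time.time() - ts) <= skew

blob = client_preauth("hunter2", "EXAMPLE.ORGuser")
ok = kdc_verify(blob, "hunter2", "EXAMPLE.ORGuser")
```

The weakness motivating the paper is visible here: anyone who can guess the password offline can forge the blob, which is why the authors bind a biometric factor and asymmetric cryptography into the exchange instead.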
N. S. Khandelwal and P. Kamboj, “Two Factor Authentication Using Visual Cryptography and Digital Envelope in Kerberos,” Electrical, Electronics, Signals, Communication and Optimization (EESCO), 2015 International Conference on, Visakhapatnam, 2015, pp. 1-6. doi: 10.1109/EESCO.2015.7253638
Abstract: Impersonation is an obvious security risk in an undefended distributed network: an adversary pretends to be a client and can gain illicit access to the server. To counter this threat, user authentication is used, which is treated as the first line of defense in a networked environment. The most popular and widely used authentication protocol is Kerberos. Kerberos is the de facto standard, used to authenticate users mutually through a trusted third party. But this strong protocol is vulnerable to various security attacks. This paper gives an overview of the Kerberos protocol and its existing security problems. To enhance security and combat these attacks, it also describes a novel approach of incorporating the features of visual cryptography and the digital envelope into Kerberos. Using visual cryptography, we add one more layer of security by using a secret share as one factor of mutual authentication, while the session key is securely distributed using the concept of the digital envelope, in which the user's private key serves as another factor of authentication. Thus, our proposed scheme makes the Kerberos protocol highly robust, secure, and efficient.
Keywords: computer network security cryptographic protocols; image coding; private key cryptography; Kerberos protocol; authentication protocol; digital envelope; distributed network; factor authentication; security attacks; security risk; session key; user authentication; user private key; visual cryptography; Authentication; Encryption; Protocols; Servers; Visualization; Digital Envelope; Kerberos; Visual cryptography (ID#: 16-9979)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7253638&isnumber=7253613
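The secret-share factor in such a scheme can be illustrated in miniature. The sketch below is not from the paper; it shows XOR-based (2,2) secret sharing, the algebraic analogue of visual cryptography's share superimposition: each share alone is uniform noise, and only combining both recovers the secret.

```python
import secrets

def make_shares(secret):
    """Split a secret into two XOR shares; either share alone is uniform noise."""
    share1 = secrets.token_bytes(len(secret))
    share2 = bytes(a ^ b for a, b in zip(secret, share1))
    return share1, share2

def combine(share1, share2):
    """Superimposing (XOR-ing) the shares reconstructs the secret exactly."""
    return bytes(a ^ b for a, b in zip(share1, share2))

s1, s2 = make_shares(b"session-key")
print(combine(s1, s2))  # b'session-key'
```

In true visual cryptography each bit becomes a pattern of sub-pixels so that stacking printed transparencies performs this XOR-like reconstruction optically.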
B. Bakhache and R. Rostom, “Kerberos Secured Address Resolution Protocol (KARP),” Digital Information and Communication Technology and its Applications (DICTAP), 2015 Fifth International Conference on, Beirut, 2015, pp. 210-215. doi: 10.1109/DICTAP.2015.7113201
Abstract: Network security has become more significant to users, organizations, and even military applications. With the spread of the internet, security has become a considerable issue. The Address Resolution Protocol (ARP) is used by computers on a Local Area Network (LAN) to map each network address (IP) to its physical address (MAC). This protocol has been verified to function well under regular conditions. However, it is a stateless and all-trusting protocol, which makes it vulnerable to numerous ARP cache poisoning attacks such as Man-in-the-Middle (MITM) and Denial of Service (DoS). ARP spoofing is a simple attack that can be carried out on the data link layer by exploiting the weak points of the ARP protocol. In this paper, we propose a new method called KARP (Kerberos ARP) to secure ARP by integrating the Kerberos protocol. KARP is designed to add authentication to ARP, inspired by the procedures used in the well-known Kerberos protocol. Simulation results show the advantage of KARP in strongly securing ARP against spoofing attacks at the lowest possible computational cost.
Keywords: access protocols; security of data; ARP cache poisoning attacks; KARP; LAN; MAC; all trusting protocol; denial of service attacks; kerberos secured address resolution protocol; local area network; man-in-the-middle attacks; network security; physical address; spoofing attacks; Authentication; IP networks; Protocols; Public key; Servers; ARP; ARP Spoofing; K-ARP; Kerberos Protocol; authentication (ID#: 16-9980)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113201&isnumber=7113160
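ARP's all-trusting nature is easy to see: a host believes any reply that maps an IP to a MAC. A common stateless heuristic — illustrative only, not the paper's KARP design, which instead adds Kerberos-style authentication — is to flag a MAC address claimed by multiple IPs:

```python
from collections import defaultdict

def find_suspect_macs(arp_table):
    """Flag MAC addresses claimed by more than one IP -- a classic symptom of
    ARP cache poisoning, where the attacker's MAC answers for victim IPs."""
    by_mac = defaultdict(list)
    for ip, mac in arp_table.items():
        by_mac[mac].append(ip)
    return {mac: ips for mac, ips in by_mac.items() if len(ips) > 1}

cache = {"10.0.0.1": "aa:aa:aa:aa:aa:01",   # gateway
         "10.0.0.7": "aa:aa:aa:aa:aa:07",
         "10.0.0.9": "aa:aa:aa:aa:aa:07"}   # duplicates .7's MAC -- suspicious
print(find_suspect_macs(cache))  # {'aa:aa:aa:aa:aa:07': ['10.0.0.7', '10.0.0.9']}
```

Such heuristics miss one-to-one spoofs entirely, which is exactly why authenticated resolution along KARP's lines is attractive.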
T. A. T. Nguyen and T. K. Dang, “Combining Fuzzy Extractor in Biometric-Kerberos Based Authentication Protocol,” 2015 International Conference on Advanced Computing and Applications (ACOMP), Ho Chi Minh City, 2015, pp. 1-6. doi: 10.1109/ACOMP.2015.23
Abstract: Kerberos is a distributed authentication protocol which guarantees mutual authentication between client and server over an insecure network. After the identification, all subsequent communications are encrypted by session keys to ensure privacy and data integrity. In this paper, we propose a biometric authentication protocol based on the Kerberos scheme. This protocol is not only resistant against attacks on the insecure network, such as the man-in-the-middle attack and replay attack, but is also able to protect the biometric by using a fuzzy extractor. This technique conceals the user's biometric inside a cryptographic key called the biometric key. This key is used to verify a user in the authentication phase. Therefore, there is no need to store users' biometrics in the database. Even if a biometric key is revealed, it is impossible for an attacker to infer the user's biometric, owing to the high security of the fuzzy extractor scheme. The protocol also supports multi-factor authentication to enhance the security of the entire system.
Keywords: client-server systems; cryptographic protocols; data integrity; data privacy; fuzzy set theory; private key cryptography; public key cryptography; Kerberos scheme; biometric key; biometric-Kerberos based authentication protocol; client-server mutual authentication; cryptographic key; distributed authentication protocol; fuzzy extractor scheme; insecure network; man-in-the-middle attack; replay attack; session keys; user biometric; Authentication; Cryptography; Databases; Mobile communication; Protocols; Servers; Kerberos; biometric; fuzzy extractor; mutual authentication; remote authentication (ID#: 16-9981)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7422367&isnumber=7422358
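A fuzzy extractor's key idea — derive a stable key from a noisy biometric reading plus public helper data — can be sketched with a code-offset construction over a repetition code. This is a deliberate simplification of the constructions such protocols build on; the function names and the repetition factor are illustrative.

```python
import secrets

R = 5  # repetition factor: tolerates up to 2 flipped bits per 5-bit block

def gen(w):
    """Enrollment: derive (key, helper) from biometric bits w (code-offset sketch)."""
    assert len(w) % R == 0
    key = [secrets.randbelow(2) for _ in range(len(w) // R)]
    codeword = [bit for bit in key for _ in range(R)]    # repetition-encode the key
    helper = [c ^ wi for c, wi in zip(codeword, w)]      # public helper data
    return key, helper

def rep(w2, helper):
    """Authentication: recover the key from a noisy re-reading w2 plus the helper."""
    noisy_codeword = [h ^ wi for h, wi in zip(helper, w2)]
    # Majority-decode each R-bit block back to one key bit
    return [int(sum(noisy_codeword[i:i + R]) > R // 2)
            for i in range(0, len(noisy_codeword), R)]
```

A re-reading that differs from enrollment in fewer than R/2 positions per block reproduces the same key, and the helper alone reveals nothing useful about the key as long as the biometric has enough entropy.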
R. Maheshwari, A. Gupta and N. Chandra, “Secure Authentication Using Biometric Templates in Kerberos,” Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, New Delhi, 2015, pp. 1247-1250. doi: (not provided)
Abstract: The paper suggests the use of biometric templates for achieving authentication in distributed systems and networks using Kerberos. The most important advantage of biometric templates is that they employ biologically inspired credentials such as the pupil, fingerprints, face, iris, hand geometry, voice, palm print, handwritten signatures and gait. Using biometric templates in Kerberos gives more reliability to client-server architectures for analysis on distributed platforms when dealing with sensitive and confidential information. Even today, companies face the challenge of securing confidential data, although the development of technologies such as Hadoop and CDBMS was primarily oriented towards big data analysis, data management, and the conversion of huge chunks of raw data into useful information. Hence, implementing biometric templates in Kerberos makes the various frameworks built on master-slave architecture more reliable, providing an added security advantage.
Keywords: biometrics (access control); client-server systems; cryptographic protocols; message authentication; parallel processing; software architecture; CDBMS; Hadoop; Kerberos; biologically inspired passwords; biometric templates; client server architectures; confidential data security; confidential information; distributed networks; distributed platform; distributed systems; face; fingerprints; gait; hand geometry; handwritten signatures; Iris; master slave architecture; palm print; pupil; secure authentication; sensitive information; voice; Authentication; Authorization; Computer architecture; Cryptography; Databases; Servers; Biometric templates; Data Security; Hadoop; Kerberos; distributed system; master slave architecture (ID#: 16-9982)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7100449&isnumber=7100186
M. Colombo, S. N. Valeije and L. Segura, “Issues and Disadvantages that Prevent the Native Implementation of Single Sign On Using Kerberos on Linux Based Systems,” 2015 CHILEAN Conference on Electrical, Electronics Engineering, Information and Communication Technologies (CHILECON), Santiago, 2015, pp. 885-889. doi: 10.1109/Chilecon.2015.7404677
Abstract: This paper discusses the problems and disadvantages users have to deal with when they attempt to use the Single Sign On mechanism, in conjunction with the Kerberos V5 protocol, as a means of authenticating users in Linux based environments. Some known incompatibilities and security problems are exposed which explain why, today, native Single Sign On with Kerberos is not a standard in Linux. Finally, the future prospects for accomplishing this goal are discussed.
Keywords: Linux; authorisation; user interfaces; Kerberos V5 protocol; Linux based systems; single sign; user authentication; Java; Protocols; Security; Servers; Silicon compounds; Standards; Authenticaton; Kerberos; Single Sign On (ID#: 16-9983)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7404677&isnumber=7400334
S. Gulhane and S. Bodkhe, “DDAS Using Kerberos with Adaptive Huffman Coding to Enhance Data Retrieval Speed and Security,” Pervasive Computing (ICPC), 2015 International Conference on, Pune, 2015, pp. 1-6. doi: 10.1109/PERVASIVE.2015.7086987
Abstract: There is an increasing trend of deploying applications over the web that store and retrieve their databases to/from a particular server. As data is stored in a distributed manner, scalability, flexibility, reliability and security are important aspects that need to be considered when establishing a data management system. Several systems for database management exist. A review of the Distributed Data Aggregation Service (DDAS) system, which relies on BlobSeer, found that it provides high performance in aspects such as data storage as a Blob (Binary Large Object) and data aggregation. For complicated analysis and on-the-fly mining of scientific data, BlobSeer serves as a repository backend. WS-Aggregation is another framework, presented as a web service, that carries out aggregation of data; in this framework a single-site interface is provided to clients for executing multi-site queries. Simple Storage Service (S3) is another type of storage utility, providing an always-available, low-cost service. Kerberos is a method that provides secure authentication, so that only authorized clients are able to access the distributed database. Kerberos consists of four steps: Authentication Service exchange, Ticket-Granting Service exchange, Client/Server service exchange, and building secure communication. The Adaptive Huffman method (also referred to as the Dynamic Huffman method) is an adaptive coding technique based on Huffman coding. It permits compression as well as decompression of data and allows building the code as the symbols are being transmitted, with no initial knowledge of the source distribution, which enables one-pass encoding and adaptation to changing conditions in the data.
Keywords: Huffman codes; Web services; cryptography; data mining; distributed databases; query processing; Blob; Blobseer; DDAS; Kerberos; WS-Aggregation; Web services; adaptive Huffman coding; authentication key exchange; binary large objects; client-server service exchange; data aggregation; data management system; data retrieval security; data retrieval speed; data storage; distributed data aggregation service system; distributed database; dynamic Huffman method; instinctive scientific data mining; multisite queries; one-pass cryptography; secure communication; Authentication; Catalogs; Distributed databases; Memory; Servers; XML; adaptive huffman method; blobseer; distributed database; kerberos; simple storage service; ws aggregation (ID#: 16-9984)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7086987&isnumber=7086957
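The adaptive (dynamic) Huffman idea described above — both sides build the code on the fly from symbols seen so far, with no pre-shared frequency table — can be sketched naively by rebuilding the code table from running counts before each symbol. Production schemes such as FGK/Vitter update the tree incrementally instead; this sketch is illustrative, not the paper's implementation.

```python
import heapq

def huffman_codes(counts):
    """Build a deterministic Huffman code table from symbol counts."""
    heap = [(c, i, sym) for i, (sym, c) in enumerate(sorted(counts.items()))]
    heapq.heapify(heap)
    if len(heap) == 1:                         # degenerate one-symbol alphabet
        return {heap[0][2]: "0"}
    nxt = len(heap)
    while len(heap) > 1:
        c1, _, left = heapq.heappop(heap)
        c2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (c1 + c2, nxt, (left, right)))
        nxt += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):            # internal node
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                                  # leaf symbol
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

def encode(text, alphabet):
    counts = dict.fromkeys(alphabet, 1)        # both sides start from identical counts
    out = []
    for ch in text:
        out.append(huffman_codes(counts)[ch])  # code reflects symbols seen so far
        counts[ch] += 1                        # adapt after each symbol
    return "".join(out)

def decode(bits, alphabet, n_symbols):
    counts = dict.fromkeys(alphabet, 1)
    out, pos = [], 0
    for _ in range(n_symbols):
        inv = {code: sym for sym, code in huffman_codes(counts).items()}
        end = pos + 1
        while bits[pos:end] not in inv:        # prefix-free: exactly one code matches
            end += 1
        sym = inv[bits[pos:end]]
        out.append(sym)
        counts[sym] += 1
        pos = end
    return "".join(out)
```

Encoder and decoder stay in lockstep because they update identical counts after every symbol, which is what makes the scheme one-pass.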
H. Zhang, Q. You and J. Zhang, “A Lightweight Electronic Voting Scheme Based on Blind Signature and Kerberos Mechanism,” Electronics Information and Emergency Communication (ICEIEC), 2015 5th International Conference on, Beijing, 2015, pp. 210-214. doi: 10.1109/ICEIEC.2015.7284523
Abstract: Blind signature has been widely used in electronic voting because of its anonymity. However, all existing electronic voting schemes based on it require maintaining a Certificate Authority to distribute key pairs to voters, which is a huge burden on the electronic voting system. In this paper, we present a lightweight electronic voting system based on blind signature that removes the Certificate Authority by integrating the Kerberos authentication mechanism into the blind-signature electronic voting scheme. It uses symmetric keys instead of asymmetric keys to encrypt the exchanged information, avoiding the requirement for a Certificate Authority and thus greatly reducing the cost of the electronic voting system. We have implemented the proposed system and demonstrated that it not only satisfies all the criteria for a practical and secure electronic voting system but also resists the most likely attacks described by three threat models.
Keywords: cryptography; digital signatures; government data processing; Kerberos authentication mechanism; anonymity; blind signature; certificate authority; encryption; exchanged information; lightweight electronic voting scheme; lightweight electronic voting system; secure electronic voting system; symmetric keys; threat models; Authentication; Cryptography; Electronic voting; Nominations and elections; Radiation detectors; Servers; Kerberos; electronic voting; security (ID#: 16-9985)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7284523&isnumber=7284473
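The blind-signature step at the heart of such voting schemes is compact enough to show directly. The following toy RSA blind signature (the textbook Chaum construction with deliberately tiny parameters; a real deployment needs full-size keys and padding) lets an authority sign a ballot without ever seeing it:

```python
from math import gcd

# Toy RSA key with tiny primes -- illustration only; real systems need >=2048-bit keys
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))           # private exponent

def blind(m, r):
    return (m * pow(r, e, n)) % n           # voter blinds ballot m with random r

def sign_blinded(mb):
    return pow(mb, d, n)                    # authority signs without learning m

def unblind(sb, r):
    return (sb * pow(r, -1, n)) % n         # voter strips the blinding factor

m, r = 42, 7                                # ballot and blinding factor, gcd(r, n) == 1
assert gcd(r, n) == 1
sig = unblind(sign_blinded(blind(m, r)), r)
assert pow(sig, e, n) == m                  # signature verifies on the unblinded ballot
```

The algebra is (m·r^e)^d = m^d·r (mod n), so multiplying by r^-1 leaves a valid signature m^d on the ballot the authority never saw.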
P. P. Gaikwad, J. P. Gabhane and S. S. Golait, “3-level Secure Kerberos Authentication for Smart Home Systems Using IoT,” Next Generation Computing Technologies (NGCT), 2015 1st International Conference on, Dehradun, 2015, pp. 262-268. doi: 10.1109/NGCT.2015.7375123
Abstract: Use of the Internet of Things has increased in almost all domains, and a smart home system can be built using it. This paper presents the design and an effective implementation of a smart home system using the Internet of Things. The designed system is very effective and eco-friendly, with the advantage of low cost. The system eases the home automation task, and users can easily monitor and control home appliances from anywhere, at any time, using the internet. An embedded system, a GPRS module and RF modules are used to build the system. Security has been increased on the server side by using 3-level Kerberos authentication; hence, the system is more secure to use than current smart home systems. The design of the hardware and software is also presented in the paper.
Keywords: Internet of Things; authorisation; cellular radio; domestic appliances; embedded systems; home automation; packet radio networks; 3-level secure Kerberos authentication; GPRS module; Internet-of-Things; IoT; RF modules; embedded system; hardware design; home appliance monitoring; home automation task; server side; smart home systems; software design; Microcontrollers; Modems; Radio frequency; Relays; Servers; Smart homes; Switches; Kerberos; RF Identification; Smart home (ID#: 16-9986)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7375123&isnumber=7375067
A. Desai, Nagegowda K S and Ninikrishna T, “Secure and QoS Aware Architecture for Cloud Using Software Defined Networks and Hadoop,” 2015 International Conference on Computing and Network Communications (CoCoNet), Trivandrum, 2015, pp. 369-373. doi: 10.1109/CoCoNet.2015.7411212
Abstract: Cloud services have become a daily norm in today's world, and many services are being migrated to the cloud. Although the cloud has its benefits, it is difficult to manage due to the sheer volume of data and the variety of services provided, and adhering to the Service Level Agreement (SLA) becomes a challenging task. The security of the cloud is also very important, since if it is broken, all the services provided by the cloud distributor are at risk. Thus there is a need for an architecture that is better equipped for security while adhering to the quality of service (QoS) written in the SLA given to the tenants of the cloud. In this paper we propose an architecture that uses software defined networking (SDN) and Hadoop to provide QoS awareness and security, with Kerberos for authentication and single sign on (SSO). We show the sequence of data flows in a cloud center, how the proposed architecture handles them, and how it is equipped to manage the cloud compared to the existing system.
Keywords: cloud computing; contracts; cryptographic protocols; data handling; quality of service; software defined networking; Hadoop; Kerberos; QoS aware architecture; SDN; SLA; SSO; cloud center; cloud distributor; cloud services; secure architecture; service level agreement; single sign on; software defined network; Authentication; Cloud computing; Computer architecture; Control systems; Quality of service; Servers; Big data; Quality of service (QoS); Software defined networks (SDN) (ID#: 16-9987)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7411212&isnumber=7411155
S. C. Patel, R. S. Singh and S. Jaiswal, “Secure and Privacy Enhanced Authentication Framework for Cloud Computing,” Electronics and Communication Systems (ICECS), 2015 2nd International Conference on, Coimbatore, 2015, pp. 1631-1634. doi: 10.1109/ECS.2015.7124863
Abstract: Cloud computing is a revolution in information technology. Cloud consumers outsource their sensitive data and personal information to cloud providers' servers, which are not within the same trusted domain as the data owner, so the most challenging issues arising in the cloud are data security, user privacy and access control. In this paper we propose a method to achieve fine-grained security with a combined approach of PGP and Kerberos in cloud computing. The proposed method provides authentication, confidentiality, integrity, and privacy features to Cloud Service Providers and Cloud Users.
Keywords: authorisation; cloud computing; data integrity; data privacy; outsourcing; personal information systems; sensitivity; trusted computing; Kerberos approach; PGP approach; access control; authentication features cloud computing; cloud consumer; cloud provider servers; cloud service providers; cloud users; confidentiality features; data security user privacy; data-owner; information technology; integrity features; personal information outsourcing; privacy enhanced authentication framework; privacy features; secure authentication framework; sensitive data outsourcing; Access control; Authentication; Cloud computing; Cryptography; Privacy; Servers; Kerberos; Pretty Good Privacy; access control; authentication; privacy; security (ID#: 16-9988)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7124863&isnumber=7124722
S. V. Baghel and D. P. Theng, “A Survey for Secure Communication of Cloud Third Party Authenticator,” Electronics and Communication Systems (ICECS), 2015 2nd International Conference on, Coimbatore, 2015, pp. 51-54. doi: 10.1109/ECS.2015.7124959
Abstract: Cloud computing is an information technology in which users can remotely store their outsourced data so as to enjoy on-demand, high-quality applications and services from configurable resources. Through this data exchange, users can be relieved of the burden of local data storage and maintenance. Thus, enabling publicly available auditability for cloud data storage is important, so that users can rely on an external audit party to check the integrity of their data. To securely introduce an effective third party auditor (TPA), two primary requirements have to be met: 1) the TPA should be able to audit the outsourced data without demanding a local copy of the user's outsourced data; 2) the TPA process should not introduce new threats to user data privacy. To achieve these goals, this system provides a solution that uses Kerberos as a third party auditor/authenticator, the RSA algorithm for secure communication, and the MD5 algorithm to verify data integrity; data centers are used for storing data on the cloud in an effective manner within a secured environment, and Multilevel Security is provided to the database.
Keywords: authorisation; cloud computing; computer centres; data integrity; data protection; outsourcing; public key cryptography; MD5 algorithm; RSA algorithm; TPA; cloud third party authenticator; data centers; data outsourcing; external audit party; information data exchange; information technology; local data protection; local data storage; multilevel security; on demand high quality application; on demand services; secure communication; third party auditor; user data privacy; user outsourced data; Algorithm design and analysis; Authentication; Cloud computing; Heuristic algorithms; Memory; Servers; Cloud Computing; Data center; Multilevel database; Public Auditing; Third Party Auditor (ID#: 16-9989)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7124959&isnumber=7124722
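Requirement (1) — auditing without a local copy — reduces, in its simplest digest-based form, to the auditor keeping only a short fingerprint. A minimal sketch of that idea follows; the surveyed system's actual protocol wraps Kerberos and RSA around it, and MD5 appears here only because the abstract names it — it is collision-broken, so new designs should prefer SHA-256.

```python
import hashlib

def digest(data):
    """Fingerprint stored by the auditor at upload time (MD5 per the surveyed
    scheme; collision-broken -- use hashlib.sha256 in new designs)."""
    return hashlib.md5(data).hexdigest()

def audit(stored_digest, data_from_cloud):
    """The TPA re-hashes what the cloud returns and compares fingerprints,
    needing no local copy of the data beyond the small stored digest."""
    return digest(data_from_cloud) == stored_digest

d0 = digest(b"outsourced block")
print(audit(d0, b"outsourced block"))  # True
print(audit(d0, b"tampered block"))    # False
```

Real public-auditing schemes replace this with homomorphic authenticators so the TPA can verify random blocks without downloading them at all.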
J. Song, H. Kim and S. Park, “Enhancing Conformance Testing Using Symbolic Execution for Network Protocols,” in IEEE Transactions on Reliability, vol. 64, no. 3, pp. 1024-1037, Sept. 2015. doi: 10.1109/TR.2015.2443392
Abstract: Security protocols are notoriously difficult to get right, and most go through several iterations before their hidden security vulnerabilities, which are hard to detect, are triggered. To help protocol designers and developers efficiently find non-trivial bugs, we introduce SYMCONF, a practical conformance testing tool that generates high-coverage test input packets using a conformance test suite and symbolic execution. Our approach can be viewed as the combination of conformance testing and symbolic execution: (1) it first selects symbolic inputs from an existing conformance test suite; (2) it then symbolically executes a network protocol implementation with the symbolic inputs; and (3) it finally generates high-coverage test input packets using a conformance test suite. We demonstrate the feasibility of this methodology by applying SYMCONF to the generation of a stream of high quality test input packets for multiple implementations of two network protocols, the Kerberos Telnet protocol and Dynamic Host Configuration Protocol (DHCP), and discovering non-trivial security bugs in the protocols.
Keywords: conformance testing; cryptographic protocols; DHCP; Kerberos Telnet protocol; SYMCONF; conformance testing enhancement; dynamic host configuration protocol; hidden security vulnerability; high-coverage test input packets; network protocols; nontrivial security bugs; security protocols; symbolic execution; symbolic inputs; Computer bugs; IP networks; Interoperability; Protocols; Security; Software; Testing; Conformance testing; Kerberos; Telnet; protocol verification; test packet generation (ID#: 16-9990)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7128419&isnumber=7229405
Location-Based Services 2015 |
Location is an important element of many wireless telephone applications. Location tracking can offer potential for privacy invasions or as an attack vector. For Science of Security, location-based services relate to cyber-physical systems, resilience, and metrics. The work cited here was presented in 2015.
M. Yassin and E. Rachid, “A Survey of Positioning Techniques and Location Based Services in Wireless Networks,” Signal Processing, Informatics, Communication and Energy Systems (SPICES), 2015 IEEE International Conference on, Kozhikode, 2015, pp. 1-5. doi: 10.1109/SPICES.2015.7091420
Abstract: Positioning techniques are known in a wide variety of wireless radio access technologies. Traditionally, the Global Positioning System (GPS) is the most popular outdoor positioning system. Localization also exists in mobile networks such as the Global System for Mobile communications (GSM). Recently, Wireless Local Area Networks (WLAN) have become widely deployed, and they are also used for localizing wireless-enabled clients. Many techniques are used to estimate client position in a wireless network. They are based on the characteristics of the received wireless signals: power, time or angle of arrival. In addition, hybrid positioning techniques make use of the collaboration between different wireless radio access technologies existing in the same geographical area. Client positioning allows the introduction of numerous services like real-time tracking, security alerts, informational services and entertainment applications. Such services are known as Location Based Services (LBS), and they are useful in both the commerce and security sectors. In this paper, we explain the principles behind positioning techniques used in satellite networks, mobile networks and Wireless Local Area Networks. We also describe hybrid localization methods that exploit the coexistence of several radio access technologies in the same region, and we classify location based services into several categories. When localization accuracy is improved, position-dependent services become more robust and efficient, and user satisfaction increases.
Keywords: Global Positioning System; direction-of-arrival estimation; mobile radio; radio access networks; wireless LAN; GPS; GSM; Global Positioning System; LBS; WLAN; angle of arrival; client position estimation; entertainment applications; geographical area; global system for mobile communication network; hybrid positioning techniques; informational services; location based services; outdoor positioning system; real-time tracking; received wireless signals; satellite networks; security alerts; security sectors; time-of-arrival; wireless local area networks; wireless radio access technology; wireless-enabled client localization; Accuracy; IEEE 802.11 Standards; Mobile communication; Mobile computing; Position measurement; Satellites; Location Based Services; Positioning techniques; Wi-Fi; hybrid positioning systems (ID#: 16-10146)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7091420&isnumber=7091354
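The time-of-arrival principle the survey covers reduces to geometry: ranges from three known anchors intersect at the client. A minimal 2-D trilateration sketch (illustrative; real systems solve an over-determined least-squares version to absorb noisy ranges):

```python
def trilaterate(anchors, dists):
    """Solve a 2-D position from three anchor distances (e.g. time-of-arrival
    ranges converted to metres). Linearizes by subtracting the first circle
    equation from the other two, leaving a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21          # zero iff the anchors are collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Anchors at (0,0), (10,0), (0,10); ranges measured from the point (3,4)
print(trilaterate([(0, 0), (10, 0), (0, 10)], [5.0, 65**0.5, 45**0.5]))
```

Power-based (RSSI) positioning uses the same geometry after converting received power to an estimated distance via a path-loss model.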
U. P. Rao and H. Girme, “A Novel Framework for Privacy Preserving in Location Based Services,” 2015 Fifth International Conference on Advanced Computing & Communication Technologies, Haryana, 2015, pp. 272-277. doi: 10.1109/ACCT.2015.30
Abstract: As the availability of mobiles has increased, many providers have started offering Location Based Services (LBS). The potential of location-aware computing is undoubted, but location awareness also comes with inherent threats, perhaps the most important of which is location privacy. Tracking of location information can result in unauthorized access to a user's location data and cause serious consequences. It is a challenge to develop effective security schemes that allow users to freely navigate through different applications and services while ensuring that their private information cannot be revealed elsewhere. This paper presents a detailed overview of existing schemes applied to Location Based Services (LBS). It also proposes a novel privacy preserving method (based on PIR) to provide location privacy to the user.
Keywords: mobile computing; security of data; telecommunication security; trusted computing; location based services; location data; location information; location-aware computing; privacy preserving; Accuracy; Collaboration; Computer architecture; Databases; Mobile communication; Privacy; Security; Location based service; Location privacy; Private Information Retrieval (PIR); Trusted third party (TTP) (ID#: 16-10147)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7079092&isnumber=7079031
G. Zhuo, Q. Jia, L. Guo, M. Li and Y. Fang, “Privacy-Preserving Verifiable Proximity Test for Location-Based Services,” 2015 IEEE Global Communications Conference (GLOBECOM), San Diego, CA, 2015, pp. 1-6. doi: 10.1109/GLOCOM.2015.7417154
Abstract: The prevalence of smartphones with geo-positioning functionality gives rise to a variety of location-based services (LBSs). Proximity test, an important branch of location-based services, enables LBS users to determine whether they are in close proximity to their friends, which extends to numerous applications in location-based mobile social networks. Unfortunately, serious security and privacy issues occur in current solutions to proximity test. On the one hand, users' private location information is usually revealed to the LBS server and other users, which may lead to physical attacks on users. On the other hand, the correctness of proximity test results computed by the LBS server cannot be verified in the existing schemes, and thus the credibility of the LBS is greatly reduced. Besides, privacy should be defined by the user, not the LBS server. In this paper, we propose a privacy-preserving verifiable proximity test for location-based services. Our scheme enables LBS users to verify the correctness of proximity test results from the LBS server without revealing their location information. We show the security, efficiency, and feasibility of our proposed scheme through detailed performance evaluation.
Keywords: data privacy; mobile computing; smart phones; social networking (online); geo-positioning; location-based mobile social networks; location-based services; privacy-preserving verifiable proximity test; private location information; smartphones; Cryptography; Mobile radio mobility management; Privacy; Protocols; Servers (ID#: 16-10148)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7417154&isnumber=7416057
P. P. Lindenberg, Bo-Chao Cheng and Yu-Ling Hsueh, “Novel Location Privacy Protection Strategies for Location-Based Services,” 2015 Seventh International Conference on Ubiquitous and Future Networks, Sapporo, 2015, pp. 866-870. doi: 10.1109/ICUFN.2015.7182667
Abstract: The usage of Location-Based Services (LBS) carries a potential privacy issue when people exchange their locations for information relative to those locations. While most people perceive these information exchange services as useful, others do not, because an adversary might take advantage of the users' sensitive data. In this paper, we propose k-path, an algorithm for privacy protection in continuous location-tracking LBS. We take inspiration from k-anonymity to hide the user's location or trajectory among k locations or trajectories. We introduce our simulator as a tool to test several strategies for hiding users' locations. The paper then evaluates the effectiveness of several approaches using the simulator and data provided by the GeoLife data set.
Keywords: mobile communication; telecommunication security; GeoLife data set; LBS; continuous location tracking; information exchange services; location based services; mobile devices; novel location privacy protection strategies; privacy protection; user sensitive data; Data privacy; History; Mobile radio mobility management; Privacy; Sensitivity; Trajectory; Uncertainty; Location-Based Service; Privacy; k-anonymity (ID#: 16-10149)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7182667&isnumber=7182475
B. Niu, Q. Li, X. Zhu, G. Cao and H. Li, “Enhancing Privacy Through Caching in Location-Based Services,” 2015 IEEE Conference on Computer Communications (INFOCOM), Kowloon, 2015, pp. 1017-1025. doi: 10.1109/INFOCOM.2015.7218474
Abstract: Privacy protection is critical for Location-Based Services (LBSs). In most previous solutions, users query service data from the untrusted LBS server when needed, and discard the data immediately after use. However, the data can be cached and reused to answer future queries. This prevents some queries from being sent to the LBS server and thus improves privacy. Although a few previous works recognize the usefulness of caching for better privacy, they use caching in a pretty straightforward way, and do not show the quantitative relation between caching and privacy. In this paper, we propose a caching-based solution to protect location privacy in LBSs, and rigorously explore how much caching can be used to improve privacy. Specifically, we propose an entropy-based privacy metric which for the first time incorporates the effect of caching on privacy. Then we design two novel caching-aware dummy selection algorithms which enhance location privacy through maximizing both the privacy of the current query and the dummies' contribution to cache. Evaluations show that our algorithms provide much better privacy than previous caching-oblivious and caching-aware solutions.
Keywords: data privacy; entropy; query processing; caching-aware dummy selection; caching-based solution; entropy-based privacy metric; location-based services; privacy enhancement; privacy protection; untrusted LBS server; users query service data; Algorithm design and analysis; Computers; Entropy; Measurement; Mobile communication; Privacy; Servers (ID#: 16-10150)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218474&isnumber=7218353
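As a rough illustration of the entropy-based privacy metric and caching-aware dummy selection described in the abstract above, the following Python sketch computes the attacker's uncertainty over a set of candidate locations and scores dummy cells by both their query probability and their contribution to the cache. The function names and the weighting parameter `alpha` are hypothetical; the paper's actual algorithms differ in detail.

```python
import math

def entropy_privacy(probabilities):
    """Privacy of a query measured as the entropy of the attacker's
    belief over candidate locations (higher = more private)."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

def caching_aware_dummies(candidates, cache, k, alpha=0.5):
    """Toy caching-aware dummy selection: prefer cells that both
    (a) have high background query probability (to confuse the
    attacker) and (b) are not yet cached (to improve future cache
    coverage). `candidates` maps cell id -> query probability."""
    def score(cell):
        novelty = 0.0 if cell in cache else 1.0  # contribution to cache
        return alpha * candidates[cell] + (1 - alpha) * novelty
    # choose the k highest-scoring cells as the dummy set
    return sorted(candidates, key=score, reverse=True)[:k]
```

With four equally likely candidate locations the entropy is exactly 2 bits; already-cached cells score lower and tend not to be chosen as dummies.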
B. Niu, X. Zhu, W. Li, H. Li, Y. Wang and Z. Lu, “A Personalized Two-Tier Cloaking Scheme for Privacy-Aware Location-Based Services,” Computing, Networking and Communications (ICNC), 2015 International Conference on, Garden Grove, CA, 2015, pp. 94-98. doi: 10.1109/ICCNC.2015.7069322
Abstract: The ubiquity of modern mobile devices with GPS modules and Internet connectivity such as 3G/4G techniques has resulted in rapid development of Location-Based Services (LBSs). However, users enjoy the convenience provided by the untrusted LBS server at the cost of their privacy. To protect users' sensitive information against adversaries with side information, we design a personalized spatial cloaking scheme, termed TTcloak, which simultaneously provides k-anonymity for the user's location privacy, 1-diversity for query privacy, and a desired size of cloaking region for mobile users in LBSs. TTcloak uses a Dummy Query Determining (DQD) algorithm and a Dummy Location Determining (DLD) algorithm to find a set of realistic cells as candidates, and employs a CR-refinement Module (CRM) to guarantee that dummy users are assigned to a cloaking region of the desired size. Finally, thorough security analysis and empirical evaluation results validate our proposed TTcloak.
Keywords: 3G mobile communication; 4G mobile communication; Global Positioning System; Internet; data privacy; mobile computing; mobility management (mobile radio); telecommunication security; telecommunication services; 3G techniques; 4G techniques; CR-refinement module; CRM; DLD algorithm; DQD algorithm; GPS modules; Internet connectivity; LBS server; TTcloak; cloaking region; dummy location determining algorithm; dummy query determining algorithm; dummy users; mobile users; modern mobile devices; personalized spatial cloaking scheme; personalized two-tier cloaking scheme; privacy-aware location-based services; query privacy; security analysis; user location privacy; Algorithm design and analysis; Complexity theory; Entropy; Mobile radio mobility management; Privacy; Servers (ID#: 16-10151)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7069322&isnumber=7069279
A. K. Tyagi and N. Sreenath, “Location Privacy Preserving Techniques for Location Based Services over Road Networks,” Communications and Signal Processing (ICCSP), 2015 International Conference on, Melmaruvathur, 2015, pp. 1319-1326. doi: 10.1109/ICCSP.2015.7322723
Abstract: With the rapid development of wireless and mobile technologies, the privacy of personal location information in location-based services (LBSs) is becoming an increasingly important issue for vehicular ad-hoc network (VANET) users. While LBSs provide enhanced functionalities, they open up new vulnerabilities that can be exploited to cause security and privacy breaches. During communication in LBSs, individuals (vehicle users) face privacy risks (for example location privacy, identity privacy, data privacy etc.) when providing personal location data to potentially untrusted LBSs. However, as vehicle users with mobile (or wireless) devices are highly autonomous and heterogeneous, it is challenging to design generic location privacy protection techniques with the desired level of protection. Location privacy is an important issue in vehicular networks since knowledge of a vehicle's location can result in leakage of sensitive information. This paper focuses on and discusses both potential location privacy threats and preserving mechanisms in LBSs over road networks. The research proposed in this paper carries significant intellectual merit and potential broader impacts: a) it investigates the impact of inferential attacks (for example inference attack, position correlation attack, transition attack and timing attack etc.) in LBSs for VANET users, and demonstrates the vulnerability of using long-term pseudonyms (or other approaches like silent period, random encryption period etc.) for camouflaging users' real identities; b) an effective and extensible location privacy architecture that combines the mix zone model with other approaches to protect location privacy is discussed; c) the paper addresses the location privacy preservation problem in detail from a novel angle and provides a solid foundation for future research on protecting users' location information.
Keywords: data privacy; mobile computing; risk management; road traffic; security of data; telecommunication security; vehicular ad hoc networks; VANET; extensible location privacy architecture; identity privacy; inference attack; intellectual merits; location privacy preserving techniques; location privacy threats; location-based services; long-term pseudonyms; mix zone model; mobile technologies; personal location information; position correlation attack; privacy breach; privacy risks; road networks; security breach; timing attack; transition attack; vehicle ad-hoc network; wireless technologies; Communication system security; Mobile communication; Mobile computing; Navigation; Privacy; Vehicles; Wireless communication; Location privacy; Location-Based Service; Mix zones; Mobile networks; Path confusion; Pseudonyms; k-anonymity (ID#: 16-10152)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7322723&isnumber=7322423
D. Liao, H. Li, G. Sun and V. Anand, “Protecting User Trajectory in Location-Based Services,” 2015 IEEE Global Communications Conference (GLOBECOM), San Diego, CA, 2015, pp. 1-6. doi: 10.1109/GLOCOM.2015.7417512
Abstract: Preserving user location and trajectory privacy while using location-based service (LBS) is an important issue. To address this problem, we first construct three kinds of attack models that can expose a user's trajectory or path while the user is sending continuous queries to a LBS server. Then we propose the k-anonymity trajectory (KAT) algorithm, which is suitable for both single query and continuous queries. Different from existing works, the KAT algorithm selects k-1 dummy locations using the sliding window based k-anonymity mechanism when the user is making single queries and selects k-1 dummy trajectories using the trajectory selection mechanism for continuous queries. We evaluate and validate the effectiveness of our proposed algorithm by conducting simulations for the single and continuous query scenarios.
Keywords: data privacy; mobility management (mobile radio); telecommunication security; LBS server; attack models; continuous queries; k-1 dummy locations; k-1 dummy trajectories; k-anonymity trajectory algorithm; location-based services; query scenarios; sliding window based k-anonymity mechanism; trajectory privacy; trajectory selection mechanism; user location; user trajectory; Algorithm design and analysis; Entropy; Handheld computers; Mobile radio mobility management; Privacy; Probability; Trajectory (ID#: 16-10153)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7417512&isnumber=7416057
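The sliding-window dummy selection used by the KAT algorithm for single queries can be illustrated, in a much simplified form, by the toy class below: dummies are drawn from cells observed in a recent window, so the anonymity set stays spatially plausible over time. The class name and window policy are our own illustration, not the authors' implementation.

```python
from collections import deque
import random

class SlidingWindowAnonymizer:
    """Toy k-anonymity dummy selector: for each single query, k-1
    dummy locations are sampled from a sliding window of recently
    seen cells, and the real cell is hidden among them."""
    def __init__(self, k, window_size):
        self.k = k
        self.window = deque(maxlen=window_size)

    def query(self, real_cell, rng=random):
        pool = [c for c in self.window if c != real_cell]
        dummies = rng.sample(pool, min(self.k - 1, len(pool)))
        self.window.append(real_cell)
        # the anonymized query contains the real cell among the dummies
        query_set = dummies + [real_cell]
        rng.shuffle(query_set)
        return query_set
```

Once the window holds enough distinct cells, every query set reaches size k and the LBS server cannot tell which member is the real location.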
W. Li, B. Niu, H. Li and F. Li, “Privacy-Preserving Strategies in Service Quality Aware Location-Based Services,” 2015 IEEE International Conference on Communications (ICC), London, 2015, pp. 7328-7334. doi: 10.1109/ICC.2015.7249497
Abstract: The popularity of Location-Based Services (LBSs) has resulted in serious privacy concerns recently. Mobile users may lose their privacy while enjoying various kinds of social activities due to the untrusted LBS servers. Many Privacy Protection Mechanisms (PPMs) have been proposed in the literature employing different strategies, which come at the cost of system overhead, service quality, or both. In this paper, we design privacy-preserving strategies for both users and adversaries in service quality aware LBSs. Different from existing approaches, we first define and point out the importance of Fine-Grained Side Information (FGSI) over the existing concept of side information, and propose a Dual-Privacy Metric (DPM) and Service Quality Metric (SQM). Then, we build analytical frameworks that provide privacy-preserving strategies for mobile users and adversaries to achieve their respective goals. Finally, the evaluation results show the effectiveness of our proposed frameworks and strategies.
Keywords: data protection; mobility management (mobile radio); quality of service; DPM; FGSI; LBS; PPM; SQM; dual-privacy metric; fine-grained side information; mobile user; privacy protection mechanism; privacy-preserving strategy; service quality aware location-based service; service quality metric; Information systems; Measurement; Mobile radio mobility management; Privacy; Security; Servers (ID#: 16-10154)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249497&isnumber=7248285
S. Ishida, S. Tagashira, Y. Arakawa and A. Fukuda, “On-demand Indoor Location-Based Service Using Ad-hoc Wireless Positioning Network,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1005-1013. doi: 10.1109/HPCC-CSS-ICESS.2015.111
Abstract: WiFi-based localization is a promising candidate for indoor localization because the localization systems can be implemented on WiFi devices widely used today. In this paper, we present a distributed localization system to realize on-demand location-based services. We define characteristics of on-demand from both the service providers' and users' perspectives. From the service providers' perspective, we utilize our previous work, a WiFi ad-hoc wireless positioning network (AWPN). From the users' perspective, we address two challenges: the elimination of a user-application installation process and a reduction in network traffic. We design a localization system using the AWPN and provide a location-based service as a Web service, which allows the use of Web browsers. The proposed localization system is built on WiFi access points and distributes network traffic over the network. We describe the design and implementation and include a design analysis of the proposed localization system. Experimental evaluations confirm that the proposed localization system can localize a user device within 220 milliseconds. We also perform simulations and demonstrate that the proposed localization system reduces network traffic by approximately 24% compared to a centralized localization system.
Keywords: Web services; ad hoc networks; wireless LAN; AWPN; Web browsers; Web service; WiFi ad-hoc wireless positioning network; WiFi-based localization; ad-hoc wireless positioning network; distributed localization system; location-based service; on-demand indoor location-based service; Accuracy; Ad hoc networks; IEEE 802.11 Standard; Mobile radio mobility management; Web servers; Wireless communication; WiFi mesh network; indoor localization; location-based Web service; on-demand (ID#: 16-10155)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336301&isnumber=7336120
D. Goyal and M. B. Krishna, “Secure Framework for Data Access Using Location Based Service in Mobile Cloud Computing,” 2015 Annual IEEE India Conference (INDICON), New Delhi, 2015, pp. 1-6. doi: 10.1109/INDICON.2015.7443761
Abstract: Mobile Cloud Computing (MCC) extends the services of cloud computing with respect to mobility in the cloud and the user device. MCC offloads computation and storage to the cloud since mobile devices are resource constrained with respect to computation, storage and bandwidth. A task can be partitioned to offload different sub-tasks to the cloud and achieve better performance. Security and privacy are the primary factors that enhance the performance of MCC applications. In this paper we present a security framework for data access using Location-based Service (LBS) that acts as an additional layer in the authentication process. Users having valid credentials at a location within the organization are enabled as authenticated users.
Keywords: authorisation; cloud computing; data privacy; message authentication; mobile computing; resource allocation; LBS; MCC; data access; location based service; mobile cloud computing; security framework; task partitioning; user authentication process; Cloud computing; Mobile communication; Mobile computing; Organizations; Public key; Cloud Computing; Encryption; Geo-encryption; Location-based Service; Mobile Cloud Computing; Security in MCC (ID#: 16-10156)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7443761&isnumber=7443105
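The "geo-encryption" idea listed in the keywords above — making decryption depend on the device's reported position — can be sketched as follows. This is a hypothetical construction for illustration only: the key derivation, grid cell size, and XOR cipher are our own, and a real deployment would use an authenticated cipher.

```python
import hashlib

def location_key(secret: str, lat: float, lon: float, cell: float = 0.01) -> bytes:
    """Derive a key from a shared secret plus a quantized location,
    so the same key is only reproduced inside the expected grid cell."""
    qlat, qlon = round(lat / cell), round(lon / cell)
    return hashlib.sha256(f"{secret}:{qlat}:{qlon}".encode()).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # toy XOR stream cipher, for illustration only
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```

A device reporting a position outside the expected cell derives a different key and cannot recover the plaintext.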
Anju S and J. Joseph, “Location Based Service Applications to Secure Locations with Dual Encryption,” Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015 International Conference on, Coimbatore, 2015, pp. 1-4. doi: 10.1109/ICIIECS.2015.7193061
Abstract: Location Based Service Applications (LBSAs) are becoming a part of our lives. Through these applications, users can interact with the physical world and get the data they want, e.g., Foursquare. However, such applications can be misused in many ways, for example by extracting the personal information of users, which leads to many threats. To improve location privacy we use the LocX technique. Here, the location and the data related to it are encrypted before being stored on different servers. Thus a third party cannot track the location from the server, and the server itself cannot see the location. In addition, to improve the security of location points and data points we introduce a dual encryption method in LocX. Asymmetric keys are used to encrypt the data with two keys, a public key and the user's private key, whereas the original LocX uses random, inexpensive symmetric keys.
Keywords: data privacy; mobile computing; mobility management (mobile radio); private key cryptography; public key cryptography; Foursquare; LBSA; LocX random inexpensive symmetric keys; LocX technique; dual encryption method; location based service applications; location privacy; personal information; public key; user private key; Encryption; Indexes; Privacy; Public key; Servers; Asymmetric; Encrypt; Location Privacy (ID#: 16-10157)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7193061&isnumber=7192777
V. A. Kachore, J. Lakshmi and S. K. Nandy, “Location Obfuscation for Location Data Privacy,” 2015 IEEE World Congress on Services, New York City, NY, 2015, pp. 213-220. doi: 10.1109/SERVICES.2015.39
Abstract: Advances in wireless internet, sensor technologies, mobile technologies, and global positioning technologies have renewed interest in location based services (LBSs) among mobile users. LBSs on smartphones allow consumers to locate nearby products and services in exchange for their location information. Precision of location data helps accurate query processing of LBSs, but it may lead to severe security violations and several privacy threats, as intruders can easily determine a user's common paths or actual locations. Encryption is the most explored approach for ensuring security. It can give protection against third party attacks but it cannot provide protection against privacy threats on the server, which can still obtain the user location and use it for malicious purposes. Location obfuscation is a technique to protect user privacy by altering the location of the users while preserving the capability of the server to compute, over the obfuscated location information, a few mathematical functions that are useful for the user. This work mainly concentrates on LBSs that want to know the distance travelled by the user for providing their services, and compares encryption and obfuscation techniques. This study proposes various methods of location obfuscation for GPS location data, which are used to obfuscate the user's path and location from the service provider. Our work shows that user privacy can be maintained without affecting LBS results, and without incurring significant overheads.
Keywords: Global Positioning System; cryptography; data protection; mobile computing; query processing; smart phones; GPS location data privacy; LBS; encryption; location based service; location obfuscation; mobile user; query processing; smart phone; user privacy protection; Data privacy; Encryption; Privacy; Servers; Location Based Services; Location data protection; Path Obfuscation Techniques; User Privacy (ID#: 16-10158)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7196527&isnumber=7196486
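The abstract above notes that obfuscation can hide locations while still letting the server compute the distance travelled. One generic way to achieve this — a sketch of the general idea, not necessarily one of the paper's methods — is a rigid transform, since rotation plus translation preserves all pairwise distances:

```python
import math

def obfuscate_path(points, angle, dx, dy):
    """Toy path obfuscation: a rigid transform (rotation by `angle`
    plus translation by (dx, dy)) hides the true coordinates while
    preserving every pairwise distance along the path."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y + dx, s * x + c * y + dy) for x, y in points]

def path_length(points):
    """Total distance travelled along a polyline of (x, y) points."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))
```

The obfuscated path has exactly the same length as the original, so a distance-based LBS keeps working without ever seeing the real coordinates.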
K. Kasori and F. Sato, “Location Privacy Protection Considering the Location Safety,” Network-Based Information Systems (NBiS), 2015 18th International Conference on, Taipei, 2015, pp. 140-145. doi: 10.1109/NBiS.2015.24
Abstract: With rapid advances in mobile communication technologies and continued price reduction of location tracking devices, location-based services (LBSs) are widely recognized as an important feature of the future computing environment. Though LBSs provide many new opportunities, the ability to locate mobile users also presents new threats - the intrusion of location privacy. Many different techniques for securing location privacy have been proposed, for instance the concepts of the silent period, the dummy node, and the cloaking region. However, many of these approaches share a problem: the quality of the LBS (QoS) decreases when anonymity is improved, and anonymity declines when QoS is improved. In this paper, we propose a location privacy scheme utilizing the cloaking region and the regional safety degree. The regional safety degree is a measure of the need for anonymity of the location information. If the node is in a place of high regional safety, the node does not need any anonymization. The proposed method is evaluated by the quality of the location information and the location safety. The location safety is calculated by multiplying the regional safety degree and the identification level. Our simulation results show that the proposed method improves the quality of the location information without degrading the location safety.
Keywords: data privacy; mobile computing; security of data; tracking; LBSs; cloaking region; cloaking-region; dummy node; identification level; location information anonymity; location privacy intrusion; location privacy protection; location safety; location tracking devices; location-based services; mobile communication technologies; price reduction; regional safety degree; silent period; Measurement; Mobile radio mobility management; Privacy; Quality of service; Safety; Servers; k-anonymity; location anonymization; location based services (ID#: 16-10159)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7350610&isnumber=7350553
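The abstract defines location safety as the product of the regional safety degree and the identification level, with anonymization skipped in highly safe regions. A minimal sketch of that metric and decision rule (the threshold value is our assumption):

```python
def location_safety(regional_safety: float, identification_level: float) -> float:
    """Location safety as defined in the abstract: the product of the
    regional safety degree and the identification level."""
    return regional_safety * identification_level

def needs_anonymization(regional_safety: float, threshold: float = 0.8) -> bool:
    """Nodes in places of high regional safety skip anonymization,
    which preserves location-information quality where it is safe."""
    return regional_safety < threshold
```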
B. G. Patel, V. K. Dabhi, U. Tyagi and P. B. Shah, “A Survey on Location Based Application Development for Android Platform,” Computer Engineering and Applications (ICACEA), 2015 International Conference on Advances in, Ghaziabad, 2015, pp. 731-739. doi: 10.1109/ICACEA.2015.7164786
Abstract: Android is currently the fastest growing mobile platform, and one of the fastest growing areas in Android applications is Location Based Service (LBS). LBS provides information services based on the current or a known location and is supported by the mobile positioning system. Presently, MOSDAC (Meteorological and Oceanographic Satellite Data Archival Centre) disseminates weather forecast information through the web. Android is one of the most widely used mobile operating systems today, which is why it is good practice to develop applications on the Android platform. The application for disseminating location based weather forecasts is a client-server application on the Android platform. It provides weather forecast information as per the user's location or location of interest. While developing a client-server application, the communication between client and database server becomes imperative. This paper presents a detailed analysis for choosing the appropriate type of web service, data exchange protocols, data exchange format, and mobile positioning technologies for a client-server application. It also highlights issues like memory capacity, security, poor response time, and battery consumption in mobile devices. The paper explores effective options to establish the dissemination service over smartphones with Android OS.
Keywords: Global Positioning System; Web services; client-server systems; electronic data interchange; information dissemination; protocols; smart phones; LBS; MOSDAC; Meteorological and Oceanographic Satellite Data Archival Centre; Web service; android applications; battery consumption; client-server application; data exchange format; data exchange protocols; database server; dissemination service; information services; location based application development; location based service; memory capacity; mobile OS; mobile devices; mobile positioning system; response time; smart phones; weather forecast information dissemination; Batteries; Mobile communication; Simple object access protocol; Smart phones; XML; Android; Battery Consumption; Location Based Services; Response time; Security (ID#: 16-10160)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164786&isnumber=7164643
Z. Riaz, F. Dürr and K. Rothermel, “Optimized Location Update Protocols for Secure and Efficient Position Sharing,” Networked Systems (NetSys), 2015 International Conference and Workshops on, Cottbus, 2015, pp. 1-8. doi: 10.1109/NetSys.2015.7089083
Abstract: Although location-based applications have seen fast growth in the last decade due to pervasive adoption of GPS enabled mobile devices, their use raises privacy concerns. To mitigate these concerns, a number of approaches have been proposed in literature, many of which rely on a trusted party to regulate user privacy. However, trusted parties are known to be prone to data breaches [1]. Consequently, a novel solution, called Position Sharing, was proposed in [2] to secure location privacy in fully non-trusted systems. In Position Sharing, obfuscated position shares of the actual user location are distributed among several location servers, each from a different provider, such that there is no single point of failure if the servers get breached. While Position Sharing can exhibit useful properties such as graceful degradation of privacy, it incurs significant communication overhead as position shares are sent to several location servers instead of one. To this end, we propose a set of location update protocols to minimize the communication overhead of Position Sharing while maintaining the privacy guarantees that it originally provided. As we consider the scenario of frequent location updates, i.e., movement trajectories, our protocols additionally add protection against an attack based on spatio-temporal correlation in published locations. By evaluating on a set of real-world GPS traces, we show that our protocols can reduce the communication overhead by 75% while significantly improving the security guarantees of the original Position Sharing algorithm.
Keywords: Global Positioning System; correlation theory; mobility management (mobile radio); protocols; security of data; GPS; Position Sharing algorithm; communication overhead minimization; data breach; location privacy security; location server; location update protocol optimization; mobile device; movement trajectory; spatio-temporal correlation; trusted party; user privacy; Correlation; Dead reckoning; Mobile handsets; Privacy; Protocols; Servers; dead reckoning; efficient communication; location-based services; privacy; selective update (ID#: 16-10161)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7089083&isnumber=7089054
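The keywords above mention dead reckoning and selective updates as the mechanism for cutting Position Sharing's communication overhead. A generic dead-reckoning filter — our simplification, not the paper's actual protocol — sends a new report only when the true fix drifts from the constant-velocity prediction:

```python
import math

def selective_updates(trace, threshold):
    """Report a position only when it deviates from the position
    dead-reckoned from the last report (constant-velocity model)
    by more than `threshold`. Returns (time index, position) pairs."""
    reports = [(0, trace[0])]        # the first fix is always reported
    vx, vy = 0.0, 0.0                # velocity estimated at last report
    for t in range(1, len(trace)):
        t0, (x0, y0) = reports[-1]
        predicted = (x0 + vx * (t - t0), y0 + vy * (t - t0))
        if math.dist(predicted, trace[t]) > threshold:
            vx = (trace[t][0] - x0) / (t - t0)
            vy = (trace[t][1] - y0) / (t - t0)
            reports.append((t, trace[t]))
    return reports
```

On a straight constant-speed trajectory only the first move triggers a report (to establish the velocity); a turn triggers another, so smooth movement generates very little traffic to the location servers.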
L. Haukipuro, I. M. Shabalina and M. Ylianttila, “Preventing Social Exclusion for Persons with Disabilities Through ICT Based Services,” Information, Intelligence, Systems and Applications (IISA), 2015 6th International Conference on, Corfu, 2015, pp. 1-7. doi: 10.1109/IISA.2015.7388102
Abstract: The paper addresses opportunities that the fast diffusion of Information and Communication Technology (ICT) is opening for people with different levels of physical restrictions, or disabilities. For these people, mobile technology not only allows ubiquitous communications but also anytime access to services that are vital for their security and autonomy, thus preventing social exclusion. More specifically, the paper describes an evaluation study and four ICT based services developed to prevent social exclusion and ease the everyday life of persons with disabilities. The findings of the study show that there is an enormous need for services aimed at disabled persons to promote their equal status in society.
Keywords: handicapped aids; mobile computing; ICT based services; information and communication technology; mobile technology; persons with disabilities; social exclusion; Cities and towns; Cultural differences; Government; Information and communication technology; Interviews; Mobile communication; Navigation; Location Based Services; Preventing Social Exclusion; Services for Disabled; Social Services (ID#: 16-10162)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7388102&isnumber=7387951
M. Maier, L. Schauer and F. Dorfmeister, “ProbeTags: Privacy-Preserving Proximity Detection Using Wi-Fi Management Frames,” Wireless and Mobile Computing, Networking and Communications (WiMob), 2015 IEEE 11th International Conference on, Abu Dhabi, 2015, pp. 756-763. doi: 10.1109/WiMOB.2015.7348038
Abstract: Since the beginning of the ubiquitous computing era, context-aware applications have been envisioned and pursued, with location and especially proximity information being one of the primary building blocks. To date, there is still a lack of feasible solutions to perform proximity tests between mobile entities in a privacy-preserving manner, i.e., one that does not disclose one's location in case the other party is not in proximity. In this paper, we present our novel approach based on location tags built from surrounding Wi-Fi signals originating only from mobile devices. Since the set of mobile devices at a given location changes over time, this approach ensures the user's privacy when performing proximity tests. To improve the robustness of similarity calculations, we introduce a novel extension of the commonly used cosine similarity measure to allow for weighing its components while preserving the signal strength semantics. Our system is evaluated extensively in various settings, ranging from office scenarios to crowded mass events. The results show that our system allows for robust short-range proximity detection while preserving the participants' privacy.
Keywords: computer network management; computer network security; data privacy; mobile computing; wireless LAN; ProbeTags; Wi-Fi management frames; Wi-Fi signals; context-aware applications; cosine similarity measure; location tags; mobile devices; mobile entities; privacy-preserving proximity detection; proximity tests; signal strength semantics; similarity calculation robustness improvement; ubiquitous computing era; Euclidean distance; IEEE 802.11 Standard; Mobile communication; Mobile computing; Mobile handsets; Privacy; Wireless communication; 802.11; location-based services; proximity detection (ID#: 16-10163)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7348038&isnumber=7347915
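The weighted extension of cosine similarity described in the abstract above can be illustrated in simplified form. Here each component — e.g. the signal strength of one observed device in a location tag — is simply scaled by a weight before the standard cosine computation; the paper's exact weighting scheme differs.

```python
import math

def weighted_cosine(a, b, weights):
    """Cosine similarity with per-component weights: components are
    scaled before the usual dot-product/norm computation, so relative
    signal-strength semantics are preserved."""
    wa = [w * x for w, x in zip(weights, a)]
    wb = [w * x for w, x in zip(weights, b)]
    dot = sum(x * y for x, y in zip(wa, wb))
    na = math.sqrt(sum(x * x for x in wa))
    nb = math.sqrt(sum(x * x for x in wb))
    return dot / (na * nb) if na and nb else 0.0
```

Two identical location tags score 1.0; tags with no devices in common score 0.0, which is the basis for a privacy-preserving proximity test.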
S. M. H. Sharhan and S. Zickau, “Indoor Mapping for Location-Based Policy Tooling Using Bluetooth Low Energy Beacons,” Wireless and Mobile Computing, Networking and Communications (WiMob), 2015 IEEE 11th International Conference on, Abu Dhabi, 2015, pp. 28-36. doi: 10.1109/WiMOB.2015.7347937
Abstract: Most service providers and data owners desire to control the access to sensitive resources. The user may express restrictions, such as who can access the resources, at which point in time and from which location. However, the location requirement is difficult to achieve in an indoor environment. Determining user locations inside of buildings is based on a variety of solutions. Moreover, current access control solutions do not consider restricting access to sensitive data in indoor environments. This article presents a graphical web interface based on OpenStreetMap (OSM), called Indoor Mapping Web Interface (IMWI), which is designed to use indoor maps and floor plans of several real-world objects, such as hospitals, universities and other premises. By placing Bluetooth Low Energy (BLE) beacons inside buildings and by labeling them on digital indoor maps, the web interface back-end will provide the stored location data within an access control environment. Using the stored information will enable users to express indoor access control restrictions. Moreover, the IMWI enables and ensures the accurate determination of a user device location in indoor scenarios. By defining several scenarios the usability of the IMWI and the validity of the policies have been evaluated.
Keywords: Bluetooth; indoor radio; Indoor Mapping Web Interface; OpenStreetMap; access control environment; bluetooth low energy beacons; device location; indoor access control; indoor mapping; indoor scenarios; location-based policy tooling; service providers; Access control; Communication system security; Medical services; Wireless LAN; Wireless communication; Wireless sensor networks; Access Control; Bluetooth Low Energy Beacons; Indoor Mapping; Location-based Services; XACML Policies (ID#: 16-10164)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7347937&isnumber=7347915
C. Piao and X. Li, “Privacy Preserving-Based Recommendation Service Model of Mobile Commerce and Anonimity Algorithm,” e-Business Engineering (ICEBE), 2015 IEEE 12th International Conference on, Beijing, 2015, pp. 420-427. doi: 10.1109/ICEBE.2015.77
Abstract: The wide use of location based services in mobile commerce applications has brought great convenience to people's work and lives, while the risk of privacy disclosure has been receiving more and more attention from academia and industry. After analyzing the privacy issues in mobile commerce, a privacy preserving recommendation service framework based on the cloud is established. According to the defined personalized privacy requirements of mobile users, the (K, L, P)-anonymity model is formally described. Based on the anonymity model, a dynamically structured minimum anonymous sets algorithm, DSMAS, is proposed, which can be used to protect the location, identifier and other sensitive information of mobile users on the road network. Finally, based on a real road network and generated privacy profiles of the mobile users, the feasibility of the algorithm is validated by experimental analysis using metrics including information entropy, query cost, anonymization time and dummy ratio.
Keywords: cloud computing; data privacy; entropy; mobile commerce; recommender systems; security of data; anonymization time; dummy ratio; information entropy; privacy preserving-based recommendation service model; query cost; Business; Cloud computing; Mobile communication; Mobile computing; Privacy; Roads; Sensitivity; Anonymity model; Cloud platform; Location-based service; Privacy preserving algorithm (ID#: 16-10165)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7350003&isnumber=7349845
R. Beniwal, P. Zavarsky and D. Lindskog, “Study of Compliance of Apple's Location Based APIs with Recommendations of the IETF Geopriv,” 2015 10th International Conference for Internet Technology and Secured Transactions (ICITST), London, 2015, pp. 214-219. doi: 10.1109/ICITST.2015.7412092
Abstract: Location Based Services (LBS) are services offered by smart phone applications which use device location data to offer location-related services. Privacy of location information is a major concern in LBS applications. This paper compares the location APIs of iOS with the IETF Geopriv architecture to determine what mechanisms are in place to protect the location privacy of an iOS user. The focus of the study is on the distribution phase of the Geopriv architecture and its applicability in enhancing location privacy on iOS mobile platforms. The presented review shows that two iOS API features, the Geocoder and the ability to turn off location services, provide some degree of location privacy for iOS users. However, only a limited number of functionalities can be considered compliant with Geopriv's recommendations. The paper also presents possible ways to address the limited location privacy offered by iOS mobile devices based on the Geopriv recommendations.
Keywords: application program interfaces; data privacy; iOS (operating system); recommender systems; smart phones; Apple location based API; Geocoder; Geopriv recommendation; IETF Geopriv architecture; LBS; device location data; distribution phase; iOS mobile device; iOS mobile platform; iOS user; location based service; location information privacy; location privacy; location-related service; off location service; smart phone application; Global Positioning System; Internet; Mobile communication; Operating systems; Privacy; Servers; Smart phones; APIs; Geopriv; iOS; location information (ID#: 16-10166)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7412092&isnumber=7412034
C. Lyu, A. Pande, X. Wang, J. Zhu, D. Gu and P. Mohapatra, “CLIP: Continuous Location Integrity and Provenance for Mobile Phones,” Mobile Ad Hoc and Sensor Systems (MASS), 2015 IEEE 12th International Conference on, Dallas, TX, 2015, pp. 172-180. doi: 10.1109/MASS.2015.33
Abstract: Many location-based services require a mobile user to continuously prove his location. In the absence of a secure mechanism, malicious users may lie about their locations to obtain these services. A mobility trace, a sequence of past mobility points, provides evidence for the user's locations. In this paper, we propose a Continuous Location Integrity and Provenance (CLIP) scheme to provide authentication for mobility traces and protect users' privacy. CLIP uses a low-power inertial accelerometer sensor with a light-weight entropy-based commitment mechanism and is able to authenticate the user's mobility trace without any trusted hardware. CLIP maintains the user's privacy, allowing the user to submit only a portion of his mobility trace, with which the commitment can still be verified. Wireless Access Points (APs) or co-located mobile devices are used to generate the location proofs. We also propose a light-weight spatial-temporal trust model to detect fake location proofs from collusion attacks. The prototype implementation on Android demonstrates that CLIP requires low computational and storage resources. Our extensive simulations show that the spatial-temporal trust model can achieve high (> 0.9) detection accuracy against collusion attacks.
Keywords: data privacy; mobile computing; mobile handsets; radio access networks; AP; CLIP; computational resources; continuous location integrity and provenance; light-weight entropy-based commitment mechanism; location-based services; low-power inertial accelerometer sensor; mobile phones; mobility trace; storage resources; user privacy; wireless access points; Communication system security; Mobile communication; Mobile handsets; Privacy; Security; Wireless communication; Wireless sensor networks (ID#: 16-10167)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7366930&isnumber=7366897
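The commit-then-selectively-open idea behind CLIP's mobility-trace authentication can be sketched with a plain hash commitment: the user commits to every point up front and later opens only the points he chooses to reveal. CLIP's actual entropy-based commitment and AP-signed location proofs differ; the trace format and names below are illustrative:

```python
import hashlib, os

# Commit to each mobility point as H(point || nonce); revealing (point,
# nonce) for a subset of points lets a verifier check them against the
# commitments without learning the rest of the trace.

def commit(point, nonce):
    return hashlib.sha256(point.encode() + nonce).hexdigest()

def commit_trace(trace):
    """Return (commitments, openings) for a list of 'lat,lon,t' strings."""
    openings = [(p, os.urandom(16)) for p in trace]
    return [commit(p, n) for p, n in openings], openings

def verify(commitment, point, nonce):
    return commit(point, nonce) == commitment

trace = ["31.1,121.4,t1", "31.2,121.5,t2", "31.3,121.6,t3"]
coms, opens_ = commit_trace(trace)

# The user later reveals only the second point; the verifier checks it.
p, n = opens_[1]
print(verify(coms[1], p, n))  # -> True
```

The random nonce is what keeps unopened commitments from leaking low-entropy locations by brute force.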
P. Hallgren, M. Ochoa and A. Sabelfeld, “InnerCircle: A Parallelizable Decentralized Privacy-Preserving Location Proximity Protocol,” Privacy, Security and Trust (PST), 2015 13th Annual Conference on, Izmir, 2015, pp. 1-6. doi: 10.1109/PST.2015.7232947
Abstract: Location Based Services (LBS) are becoming increasingly popular. Users enjoy a wide range of services from tracking a lost phone to querying for nearby restaurants or nearby tweets. However, many users are concerned about sharing their location. A major challenge is achieving the privacy of LBS without hampering the utility. This paper focuses on the problem of location proximity, where principals are willing to reveal whether they are within a certain distance from each other. Yet the principals are privacy-sensitive, not willing to reveal any further information about their locations, nor the distance. We propose InnerCircle, a novel secure multi-party computation protocol for location privacy, based on partially homomorphic encryption. The protocol achieves precise fully privacy-preserving location proximity without a trusted third party in a single round trip. We prove that the protocol is secure in the semi-honest adversary model of Secure Multi-party Computation, and thus guarantees the desired privacy properties. We present the results of practical experiments of three instances of the protocol using different encryption schemes. We show that, thanks to its parallelizability, the protocol scales well to practical applications.
Keywords: cryptographic protocols; data privacy; mobile computing; InnerCircle; LBS privacy; location based services; parallelizability; parallelizable decentralized privacy-preserving location proximity protocol; partially homomorphic encryption; privacy properties; round trip; secure multiparty computation protocol; secure protocol; semihonest adversary model; Approximation methods; Encryption; Privacy; Protocols; Public key (ID#: 16-10168)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7232947&isnumber=7232940
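The additively homomorphic distance computation at the core of protocols like InnerCircle can be sketched with a textbook Paillier cryptosystem. This is a toy with insecure key sizes, and it decrypts the squared distance directly; the actual protocol additionally masks the result so that only a proximity bit, never the distance, is revealed:

```python
import math, random

# Minimal textbook Paillier (toy primes, NOT secure) with g = n + 1.

def keygen(p=1009, q=1013):
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # valid because we use g = n + 1
    return (n,), (n, lam, mu)

def enc(pub, m):
    (n,) = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:    # r must be a unit mod n
        r = random.randrange(1, n)
    return pow(n + 1, m % n, n * n) * pow(r, n, n * n) % (n * n)

def dec(priv, c):
    n, lam, mu = priv
    return (pow(c, lam, n * n) - 1) // n * mu % n

pub, priv = keygen()
n = pub[0]
nsq = n * n
xa, ya = 12, 7          # Alice encrypts her coordinates
xb, yb = 15, 11         # Bob keeps his in plaintext

cx, cy = enc(pub, xa), enc(pub, ya)
csq = enc(pub, xa * xa + ya * ya)

# Bob homomorphically assembles Enc((xa - xb)^2 + (ya - yb)^2):
# multiplying ciphertexts adds plaintexts; raising to k multiplies by k.
cd = (csq
      * pow(cx, (-2 * xb) % n, nsq)
      * pow(cy, (-2 * yb) % n, nsq)
      * enc(pub, xb * xb + yb * yb)) % nsq

d2 = dec(priv, cd)
print(d2)  # -> 25, i.e. (12 - 15)**2 + (7 - 11)**2
```

Because only ciphertext products and exponentiations are needed on Bob's side, the whole exchange fits in the single round trip the abstract claims.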
P. G. Kolapwar and H. P. Ambulgekar, “Location Based Data Encryption Methods and Applications,” Communication Technologies (GCCT), 2015 Global Conference on, Thuckalay, 2015, pp. 104-108. doi: 10.1109/GCCT.2015.7342632
Abstract: In today's world, mobile communication is in tremendous demand in our daily lives. The use of the mobile user's location in encryption, called Geo-encryption, produces more secure systems that can be used in different mobile applications. Location Based Data Encryption Methods (LBDEM) are used to enhance the security of such applications, known as Location Based Services (LBS). They collect the position, time, latitude and longitude coordinates of mobile nodes and use them in the encryption and decryption process. Geo-encryption plays an important role in raising the security of LBS. Different Geo-protocols have been developed in the same area to add security with better throughput. AES-GEDTD is one such approach, which gives higher security with great throughput. In this paper, we discuss AES-GEDTD as an LBDEM approach and its role in applications such as Digital Cinema Distribution, Patient Telemonitoring Systems (PTS) and military applications.
Keywords: cryptographic protocols; mobile communication; decryption process; digital cinema distribution; encryption process; geo-encryption; location based data encryption methods; location based services; military application; mobile nodes; mobile user location; patient telemonitoring system; Encryption; Mobile nodes; Protocols; Receivers; AES-GEDTD; DES-GEDTD; Geo-encryption; Geo-protocol; LBDEM; LBS (ID#: 16-10169)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7342632&isnumber=7342608
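The general geo-lock idea behind geo-encryption schemes can be sketched as follows: a session key is XOR-ed with a digest of the expected decryption position and time window, so the key is recoverable only at that place and time. This is a hypothetical illustration using SHA-256 rather than the paper's AES-GEDTD construction; the grid size, time window and function names are assumptions:

```python
import hashlib

def geo_lock(lat, lon, t, grid=0.01, window=60):
    """Digest of the quantized position/time the receiver must be at."""
    cell = (round(lat / grid), round(lon / grid), t // window)
    return hashlib.sha256(repr(cell).encode()).digest()

def lock_key(session_key, lat, lon, t):
    # XOR is its own inverse: applying the same geo-lock twice restores
    # the key, and a wrong cell or window yields garbage.
    lock = geo_lock(lat, lon, t)
    return bytes(a ^ b for a, b in zip(session_key, lock))

key = hashlib.sha256(b"session").digest()
locked = lock_key(key, 19.074, 72.884, 1700000042)

# A receiver inside the same grid cell and time window recovers the key:
recovered = lock_key(locked, 19.072, 72.881, 1700000050)
print(recovered == key)  # -> True
```

Real geo-protocols pair this with anti-spoofing measures, since the scheme is only as trustworthy as the position fix.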
S. S. Kumar and A. Pandharipande, “Secure Indoor Positioning: Relay Attacks and Mitigation Using Presence Sensing Systems,” 2015 IEEE 13th International Conference on Industrial Informatics (INDIN), Cambridge, 2015, pp. 82-87. doi: 10.1109/INDIN.2015.7281714
Abstract: Secure indoor positioning is critical to successful adoption of location-based services in buildings. However, positioning based on off-the-air signal measurements is prone to various security threats. In this paper, we provide an overview of security threats encountered in such indoor positioning systems, and particularly focus on the relay threat. In a relay attack, a malicious entity may gain unauthorized access by introducing a rogue relay device in a zone of interest, which is then validly positioned by the location network, and then transfer the control rights to a malicious device outside the zone. This enables the malicious entity to gain access to the application network using the rogue device. We present multiple solutions relying on a presence sensing system to deal with this attack scenario. In one solution, a localized presence sensing system is used to validate the user presence in the vicinity of the position before location-based control is allowed. In another solution, the user device is required to respond to a challenge by a physical action that may be observed and validated by the presence sensing system.
Keywords: indoor navigation; relay networks (telecommunication); telecommunication security; location-based services; off-the-air signal measurement; presence sensing system; relay attack mitigation; rogue relay device; secure indoor positioning; security threat; Lighting; Lighting control; Mobile handsets; Mobile radio mobility management; Relays; Sensors; Servers; Secure indoor positioning; presence sensing systems; relay attacks (ID#: 16-10170)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7281714&isnumber=7281697
C. Yara, Y. Noriduki, S. Ioroi and H. Tanaka, “Design and Implementation of Map System for Indoor Navigation — An Example of an Application of a Platform Which Collects and Provides Indoor Positions,” Inertial Sensors and Systems (ISISS), 2015 IEEE International Symposium on, Hapuna Beach, HI, 2015, pp. 1-4. doi: 10.1109/ISISS.2015.7102376
Abstract: Many kinds of indoor positioning systems have been investigated, and location-based services have been developed and introduced. They are individually designed and developed based on the requirements for each service. This paper presents a map platform that accommodates any positioning system, so that the platform can be utilized by various application systems. The requirement conditions are summarized, and the platform has been implemented using open source software. The software allows the required functions to be assigned to two servers, realizes the independence of each function and allows for future function expansion. The study has verified the basic functions required for a mapping system that can incorporate several indoor positioning systems, including dead reckoning calculated by inertial sensors installed in a smartphone and an odometry system driven by rotary encoders installed in an electric wheelchair.
Keywords: Global Positioning System; cartography; distance measurement; geophysics computing; indoor navigation; mobile computing; public domain software; wheelchairs; basic function; dead reckoning; electric wheel chair; future function expansion; indoor positioning system; inertia sensor; location-based services; mapping system; odometry system; open source software; rotary encoder; smartphone; Browsers; Dead reckoning; History; Information management; Security; Servers; Indoor Positioning Data; Map; Navigation (ID#: 16-10171)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7102376&isnumber=7102353
L. Xiao, J. Liu, Q. Li and H. V. Poor, “Secure Mobile Crowdsensing Game,” 2015 IEEE International Conference on Communications (ICC), London, 2015, pp. 7157-7162. doi: 10.1109/ICC.2015.7249468
Abstract: By recruiting sensor-equipped smartphone users to report sensing data, mobile crowdsensing (MCS) provides location-based services such as environmental monitoring. However, due to the distributed and potentially selfish nature of smartphone users, mobile crowdsensing applications are vulnerable to faked sensing attacks by users who bid a low price in an MCS auction and provide faked sensing reports to save sensing costs and avoid privacy leakage. In this paper, the interactions among an MCS server and smartphone users are formulated as a mobile crowdsensing game, in which each smartphone user chooses its sensing strategy such as its sensing time and energy to maximize its expected utility while the MCS server classifies the received sensing reports and determines the payment strategy accordingly to stimulate users to provide accurate sensing reports. Nash equilibrium (NE) of a static MCS game is evaluated and a closed-form expression for the NE in a special case is presented. Moreover, a dynamic mobile crowdsensing game is investigated, in which the sensing parameters of a smartphone are unknown by the server and the other users. A Q-learning discriminated pricing strategy is developed for the server to determine the payment to each user. Simulation results show that the proposed pricing mechanism stimulates users to provide high-quality sensing services and suppress faked sensing attacks.
Keywords: mobility management (mobile radio); pricing; smart phones; telecommunication security; MCS auction; MCS server; NE; Nash equilibrium; Q-learning discriminated pricing strategy; closed-form expression; dynamic mobile crowdsensing game security; faked sensing attack suppression; high-quality sensing service; location-based service; sensor-equipped smartphone; Games; Information systems; Mobile communication; Pricing; Security; Sensors; Servers (ID#: 16-10172)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249468&isnumber=7248285
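The server-side pricing loop can be sketched as a stateless Q-learning (bandit-style) update: pick a payment level, observe whether the resulting report is accurate, and update the action value. The user-response and reward model below is a toy assumption, not the paper's utility functions or game formulation:

```python
import random

random.seed(1)
payments = [1, 2, 3]                 # candidate payment levels
Q = {p: 0.0 for p in payments}
alpha, eps = 0.1, 0.2                # learning rate, exploration rate

def reward(pay):
    # Toy model: higher payment makes an accurate report more likely.
    accurate = random.random() < 0.3 * pay
    value = 4 if accurate else 0     # value of an accurate sensing report
    return value - pay

for _ in range(2000):
    # epsilon-greedy action selection over payment levels
    p = random.choice(payments) if random.random() < eps else max(Q, key=Q.get)
    Q[p] += alpha * (reward(p) - Q[p])   # stateless Q-update (bandit form)

best = max(Q, key=Q.get)
print(best, {p: round(q, 2) for p, q in Q.items()})
```

Under this toy model the expected rewards are 0.2, 0.4 and 0.6 for the three levels, so the learner should settle on the highest payment; the paper instead discriminates payments per classified report quality.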
G. Sarath and Megha Lal S. H, “Privacy Preservation and Content Protection in Location Based Queries,” Contemporary Computing (IC3), 2015 Eighth International Conference on, Noida, 2015, pp. 325-330. doi: 10.1109/IC3.2015.7346701
Abstract: Location based services are widely used to access location information such as nearest ATMs and hospitals. These services are accessed by sending location queries containing the user's current location to the Location Based Service (LBS) server. The LBS server can retrieve the user's current location from this query and misuse it, threatening his privacy. In security-critical applications like defense, protecting the location privacy of authorized users is a critical issue. This paper describes the design and implementation of a solution to this privacy problem, which provides location privacy to authorized users and preserves the confidentiality of data on the LBS server. Our solution is a two-stage approach, where the first stage is based on oblivious transfer and the second stage is based on private information retrieval. Here the whole service area is divided into cells, and the location information of each cell is stored on the server in encrypted form. A user who wants to retrieve location information creates a cloaking region (a subset of the service area) containing his current location and generates a query embedding it. The server can only identify that the user is somewhere in this cloaking region, so the user's security can be improved by increasing the size of the cloaking region. Even if the server sends the location information of all the cells in the cloaking region, the user can decrypt service information only for his exact location, so the confidentiality of server data is preserved.
Keywords: authorisation; data privacy; mobile computing; query processing; ATM; LBS server; authorized user; content protection; data confidentiality; hospital; location based query; location based service server; location information retrieval; location privacy; oblivious transfer; privacy preservation; private information retrieval; security critical application; Cryptography; Information retrieval; Privacy; Protocols; Receivers; Servers; Location based query (ID#: 16-10173)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346701&isnumber=7346637
X. Gong, X. Chen, K. Xing, D. H. Shin, M. Zhang and J. Zhang, “Personalized Location Privacy in Mobile Networks: A Social Group Utility Approach,” 2015 IEEE Conference on Computer Communications (INFOCOM), Kowloon, 2015, pp. 1008-1016. doi: 10.1109/INFOCOM.2015.7218473
Abstract: With increasing popularity of location-based services (LBSs), there have been growing concerns for location privacy. To protect location privacy in a LBS, mobile users in physical proximity can work in concert to collectively change their pseudonyms, in order to hide spatial-temporal correlation in their location traces. In this study, we leverage the social tie structure among mobile users to motivate them to participate in pseudonym change. Drawing on a social group utility maximization (SGUM) framework, we cast users' decision making of whether to change pseudonyms as a socially-aware pseudonym change game (PCG). The PCG further assumes a general anonymity model that allows a user to have its specific anonymity set for personalized location privacy. For the SGUM-based PCG, we show that there exists a socially-aware Nash equilibrium (SNE), and quantify the system efficiency of the SNE with respect to the optimal social welfare. Then we develop a greedy algorithm that myopically determines users' strategies, based on the social group utility derived from only the users whose strategies have already been determined. It turns out that this algorithm can efficiently find a Pareto-optimal SNE with social welfare higher than that for the socially-oblivious PCG, pointing out the impact of exploiting social tie structure. We further show that the Pareto-optimal SNE can be achieved in a distributed manner.
Keywords: data privacy; game theory; mobile computing; optimisation; telecommunication security; LBS; Pareto-optimal SNE; SGUM-based PCG; location traces; location-based services; mobile networks; optimal social welfare; personalized location privacy; physical proximity; social group utility maximization framework; social tie structure; socially-aware Nash equilibrium; socially-aware pseudonym change game; spatial-temporal correlation; system efficiency quantification; Computers; Games; Mobile communication; Mobile handsets; Nash equilibrium; Privacy; Tin (ID#: 16-10174)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218473&isnumber=7218353
M. Movahedi, J. Saia and M. Zamani, “Shuffle to Baffle: Towards Scalable Protocols for Secure Multi-Party Shuffling,” Distributed Computing Systems (ICDCS), 2015 IEEE 35th International Conference on, Columbus, OH, 2015, pp. 800-801. doi: 10.1109/ICDCS.2015.116
Abstract: In secure multi-party shuffling, multiple parties, each holding an input, want to agree on a random permutation of their inputs while keeping the permutation secret. This problem is important as a primitive in many privacy-preserving applications such as anonymous communication, location-based services, and electronic voting. Known techniques for solving this problem suffer from poor scalability, load-balancing issues, trusted party assumptions, and/or weak security guarantees. In this paper, we propose an unconditionally-secure protocol for multi-party shuffling that scales well with the number of parties and is load-balanced. In particular, we require each party to send only a polylogarithmic number of bits and perform a polylogarithmic number of operations while incurring only a logarithmic round complexity. We show security under universal composability against up to about n/3 fully-malicious parties. We also provide simulation results in the full version of this paper showing that our protocol improves significantly over previous work. For example, for one million parties, when compared to the state of the art, our protocol reduces the communication and computation costs by at least three orders of magnitude and slightly decreases the number of communication rounds.
Keywords: computational complexity; cryptographic protocols; data privacy; resource allocation; anonymous communication; electronic voting; load-balancing; location-based services; logarithmic round complexity; permutation secret; polylogarithmic number; privacy-preserving; random permutation; scalable protocols; secure multiparty shuffling; trusted party assumptions; unconditionally-secure protocol; Electronic voting; Logic gates; Mobile radio mobility management; Privacy; Protocols; Security; Sorting; Multi-Party Computation; Privacy-Preserving Applications; Secure Shuffling (ID#: 16-10175)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164994&isnumber=7164877
H. Ngo and J. Kim, “Location Privacy via Differential Private Perturbation of Cloaking Area,” 2015 IEEE 28th Computer Security Foundations Symposium, Verona, 2015, pp. 63-74. doi: 10.1109/CSF.2015.12
Abstract: The increasing use of mobile devices has triggered the development of location based services (LBS). By providing location information to an LBS, mobile users can enjoy a variety of useful applications, but may suffer from private information leakage. Location information of mobile users needs to be kept secret while maintaining utility to achieve desirable service quality. Existing location privacy enhancing techniques based on K-anonymity and Hilbert-curve cloaking area generation showed advantages in privacy protection and service quality, but disadvantages due to the generation of large cloaking areas that make query processing and communication less effective. In this paper we propose a novel location privacy preserving scheme that leverages differential privacy based notions and mechanisms to publish optimally sized cloaking areas from multiple rotated and shifted versions of the Hilbert curve. With experimental results, we show that our scheme significantly reduces the average size of cloaking areas compared to the previous Hilbert curve method. We also show how to quantify an adversary's ability to perform an inference attack on user location data and how to limit the adversary's success rate under a designed threshold.
Keywords: curve fitting; data privacy; mobile computing; mobile handsets; perturbation techniques; Hilbert curve method; Hilbert-curve cloaking area generation; LBS; differential privacy based notions; differential private perturbation; inference attack; k-anonymity; location based services; location information; location privacy enhancing techniques; location privacy preserving scheme; mobile devices; mobile users; optimal size cloaking areas; private information leakage; service quality; Cryptography; Data privacy; Databases; Mobile communication; Privacy; Protocols; Servers; Hilbert curve; differential identifiability; geo-indistinguishability; location privacy (ID#: 16-10176)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7243725&isnumber=7243713
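Hilbert-curve cloaking, the baseline this paper improves on, maps 2-D locations to 1-D curve indices and buckets K consecutive users so each bucket forms a cloaking area. A minimal sketch using the standard xy-to-index conversion; the paper's differentially private selection among rotated and shifted curves is not shown, and the grid size is illustrative:

```python
def xy2d(n, x, y):
    """Hilbert index of (x, y) on an n x n grid, n a power of two."""
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

def cloak(users, k, n=8):
    """Group users into K-anonymous buckets along the Hilbert curve."""
    ranked = sorted(users, key=lambda u: xy2d(n, *u))
    return [ranked[i:i + k] for i in range(0, len(ranked), k)]

users = [(0, 0), (1, 2), (7, 7), (6, 5), (3, 3), (2, 6)]
for bucket in cloak(users, k=3):
    print(bucket)
```

Because the Hilbert curve preserves locality, consecutive indices tend to be spatially close, which keeps the bounding box of each bucket (the published cloaking area) small.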
Y. Lin, W. Huang and Y. Tang, “Map-Based Multi-Path Routing Protocol in VANETs,” 2015 IEEE 9th International Conference on Anti-counterfeiting, Security, and Identification (ASID), Xiamen, 2015, pp. 145-149. doi: 10.1109/ICASID.2015.7405680
Abstract: Due to vehicle movement and the propagation loss of the radio channel, providing a routing protocol for reliable multihop communication in VANETs is particularly challenging. In this paper, we present a map-based multi-path routing protocol for VANETs, MBMPR, which utilizes GPS, a digital map and in-vehicle sensors. With global road information, MBMPR finds an optimal forward path and an alternate path using the Dijkstra algorithm, which improves the reliability of data transmission. Considering the load balance problem at junctions, a congestion detection mechanism is proposed. Aiming at the packet loss problem caused by the target vehicle's mobility, MBMPR adopts a recovery strategy using location-based services and target vehicle mobility prediction. The simulations demonstrate that MBMPR performs significantly better than classical VANET routing protocols.
Keywords: multipath channels; resource allocation; routing protocols; telecommunication network reliability; vehicular ad hoc networks; Dijkstra algorithm; GPS; MBMPR; VANET; congestion detection mechanism; data transmission reliability; digital map; global road information; load balance problem; location-based services; map-based multipath routing protocol; optimal forward path; packet loss problem; propagation loss; radio channel; recovery strategy; reliable multihop communication; target vehicle mobility prediction; vehicle movement; Load balance; Multi-path routing; VANETs (ID#: 16-10177)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7405680&isnumber=7405648
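The forward/alternate path selection described in the abstract can be sketched as a standard Dijkstra search followed by edge pruning and a second search; the road graph and weights below are illustrative stand-ins for the protocol's map and traffic metrics, not from the paper:

```python
import heapq

def dijkstra(graph, src, dst):
    """Shortest path by cost; returns (cost, path) or (inf, None)."""
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), None

roads = {"A": {"B": 1, "C": 4}, "B": {"D": 2}, "C": {"D": 1}, "D": {}}
cost, primary = dijkstra(roads, "A", "D")

# Remove the primary path's edges, then compute a disjoint alternate path.
pruned = {u: dict(vs) for u, vs in roads.items()}
for u, v in zip(primary, primary[1:]):
    pruned[u].pop(v, None)
alt_cost, alternate = dijkstra(pruned, "A", "D")
print(primary, alternate)  # -> ['A', 'B', 'D'] ['A', 'C', 'D']
```

Keeping a precomputed alternate path is what lets the protocol fail over without a fresh route discovery when the forward path breaks.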
X. Chen, A. Mizera and J. Pang, “Activity Tracking: A New Attack on Location Privacy,” Communications and Network Security (CNS), 2015 IEEE Conference on, Florence, 2015, pp. 22-30. doi: 10.1109/CNS.2015.7346806
Abstract: The exposure of location information in location-based services (LBS) raises users' privacy concerns. Recent research reveals that in LBSs users are more concerned about the activities that they have performed than the places that they have visited. In this paper, we propose a new attack with which the adversary can accurately infer users' activities. Compared to existing attacks, our attack provides the adversary not only with the places where users perform activities but also with the times at which they stay at each of these places. To achieve this objective, we propose a new model to capture users' mobility and their LBS requests in continuous time, which naturally expresses users' behaviour in LBSs. We then formally implement our attack by extending an existing framework for quantifying location privacy. Through experiments on a real-life dataset, we show the effectiveness of our new tracking attack.
Keywords: data privacy; mobility management (mobile radio); telecommunication security; telecommunication services; activity tracking; attack implementation; location information; location privacy; location-based services; real-life dataset; tracking attack; user privacy concerns; users activity; users mobility; Communication networks; Conferences; Privacy; Real-time systems; Security; Semantics; Trajectory (ID#: 16-10178)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346806&isnumber=7346791
J. R. Shieh, “An End-to-End Encrypted Domain Proximity Recommendation System Using Secret Sharing Homomorphic Cryptography,” Security Technology (ICCST), 2015 International Carnahan Conference on, Taipei, 2015, pp. 1-6. doi: 10.1109/CCST.2015.7389682
Abstract: Location privacy preservation means that a person's location is revealed to other entities, such as a service provider or the person's friends, only if this release is strictly necessary and authorized by the person. This is especially important for location-based services. Other current systems use only a 2D geometric model. We develop 3D geometric location privacy for a service that alerts people of nearby friends. Using a robust encryption algorithm, our location privacy scheme guarantees that users can protect their exact location but still be alerted if and only if the service or friend is nearby, and then determine whether they are getting closer. This is in contrast to other non-secure systems, systems that lack secret sharing, and systems that use location cloaking. In our system, such proximity information can be reconstructed only when a sufficient number of shared keys are combined; individual shared keys are of no use on their own. The proposed ring homomorphism cryptography combines secret keys from each user to compute relative distances from the encrypted user's location. Our secret sharing scheme doesn't allow anyone to deceive, mislead, or defraud others of their rights, or to gain an unfair advantage. This relative distance is computed entirely in the encrypted domain and is based on the philosophy that everyone has the same right to privacy. We also propose a novel protocol to provide personal anonymity for users of the system. Experiments show that the proposed scheme offers secure, accurate, fast, and anonymous privacy-preserving proximity information. This new approach can potentially be applied to various location-based computing environments.
Keywords: data privacy; mobile computing; private key cryptography; recommender systems; 2D geometric model; 3D geometric location privacy; end-to-end encrypted domain proximity recommendation system; location privacy preservation; location privacy scheme; location-based computing environments; location-based services; personal anonymity; privacy-preserving proximity information; relative distance; ring homomorphism cryptography; robust encryption algorithm; secret keys; secret sharing homomorphic cryptography; user location end encryption; Encryption; Measurement; Mobile radio mobility management; Multimedia communication; Privacy; Three-dimensional displays; Personalization; Recommender Systems (ID#: 16-10179)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7389682&isnumber=7389647
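The threshold property the scheme relies on, that proximity information is reconstructible only when enough shares are combined, is the defining property of Shamir secret sharing. A toy sketch over a small prime field; the paper combines shares inside a homomorphic encryption domain, and the parameters here are illustrative:

```python
import random

P = 2087                      # small field prime (toy size)

def make_shares(secret, k, n):
    """Split secret into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the sharing polynomial at x = 0."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = make_shares(secret=1234, k=3, n=5)
print(reconstruct(shares[:3]))   # -> 1234
print(reconstruct(shares[1:4]))  # -> 1234
```

Any two shares alone reveal nothing about the secret, which is what prevents a single party from reconstructing proximity information unilaterally.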
N. W. Lo, M. C. Chiang and C. Y. Hsu, “Hash-Based Anonymous Secure Routing Protocol in Mobile Ad Hoc Networks,” Information Security (AsiaJCIS), 2015 10th Asia Joint Conference on, Kaohsiung, 2015, pp. 55-62. doi: 10.1109/AsiaJCIS.2015.27
Abstract: A mobile ad hoc network (MANET) is composed of multiple wireless mobile devices, in which an infrastructure-less network with dynamic topology is built on wireless communication technologies. Novel applications such as location-based services and personal communication apps used by mobile users with handheld wireless devices utilize MANET environments. Consequently, communication anonymity and message security have become critical issues for MANET environments. In this study, a novel secure routing protocol with communication anonymity, named the Hash-based Anonymous Secure Routing (HASR) protocol, is proposed to support identity anonymity, location anonymity and route anonymity, and to defend against major security threats such as replay attacks, spoofing, route maintenance attacks, and denial of service (DoS) attacks. Security analyses show that HASR can achieve both communication anonymity and message security with efficient performance in MANET environments.
Keywords: cryptography; mobile ad hoc networks; mobile computing; mobility management (mobile radio); routing protocols; telecommunication network topology; telecommunication security; DoS attack; HASR protocol; Hash-based anonymous secure routing protocol; MANET; denial of service attack; dynamic network topology; handheld wireless devices; location-based services; message security; mobile ad hoc networks; mobile users; personal communication Apps; route maintenance attack; wireless communication technologies; wireless mobile devices; Cryptography; Mobile ad hoc networks; Nickel; Routing; Routing protocols; communication anonymity; message security; mobile ad hoc network; routing protocol (ID#: 16-10180)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153936&isnumber=7153836
B. Wang, M. Li, H. Wang and H. Li, “Circular Range Search on Encrypted Spatial Data,” Distributed Computing Systems (ICDCS), 2015 IEEE 35th International Conference on, Columbus, OH, 2015, pp. 794-795. doi: 10.1109/ICDCS.2015.113
Abstract: Searchable encryption is a promising technique enabling meaningful search operations to be performed on encrypted databases while protecting user privacy from untrusted third-party service providers. However, while most of the existing works focus on common SQL queries, geometric queries on encrypted spatial data have not been well studied. Especially, circular range search is an important type of geometric query on spatial data which has wide applications, such as proximity testing in Location-Based Services and Delaunay triangulation in computational geometry. In this poster, we propose two novel symmetric-key searchable encryption schemes supporting circular range search. Informally, both of our schemes can correctly verify whether a point is inside a circle on encrypted spatial data without revealing data privacy or query privacy to a semi-honest cloud server. We formally define the security of our proposed schemes, prove that they are secure under Selective Chosen-Plaintext Attacks, and evaluate their performance through experiments in a real-world cloud platform (Amazon EC2). To the best of our knowledge, this work represents the first study in secure circular range search on encrypted spatial data.
Keywords: SQL; computational geometry; data privacy; mesh generation; private key cryptography; query processing; Amazon EC2; Delaunay triangulation; SQL query; circular range search; computational geometry; encrypted database; encrypted spatial data; geometric query; location-based service; proximity testing; query privacy; selective chosen-plaintext attack; semi-honest cloud server; symmetric-key searchable encryption scheme; user privacy protection; Companies; Data privacy; Encryption; Servers; Spatial databases (ID#: 16-10181)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164991&isnumber=7164877
Z. Zhou, Z. Yang, C. Wu, Y. Liu and L. M. Ni, “On Multipath Link Characterization and Adaptation for Device-Free Human Detection,” Distributed Computing Systems (ICDCS), 2015 IEEE 35th International Conference on, Columbus, OH, 2015, pp. 389-398. doi: 10.1109/ICDCS.2015.47
Abstract: Wireless-based device-free human sensing has raised increasing research interest and stimulated a range of novel location-based services and human-computer interaction applications for recreation, asset security and elderly care. A primary functionality of these applications is to first detect the presence of humans before extracting higher-level contexts such as physical coordinates, body gestures, or even daily activities. In the presence of dense multipath propagation, however, it is non-trivial to even reliably identify the presence of humans. The multipath effect can invalidate simplified propagation models and distort received signal signatures, thus deteriorating detection rates and shrinking detection range. In this paper, we characterize the impact of human presence on wireless signals via ray-bouncing models, and propose a measurable metric on commodity WiFi infrastructure as a proxy for detection sensitivity. To achieve a higher detection rate and wider sensing coverage in multipath-dense indoor scenarios, we design a lightweight subcarrier and path configuration scheme harnessing frequency diversity and spatial diversity. We prototype our scheme with standard WiFi devices. Evaluations conducted in two typical office environments demonstrate a detection rate of 92.0% with a false positive rate of 4.5%, and almost 1x gain in detection range given a minimal detection rate of 90%.
Keywords: diversity reception; human computer interaction; indoor radio; multipath channels; radio links; radiowave propagation; wireless LAN; Wi-Fi infrastructure; dense multipath propagation; device-free human detection; frequency diversity; higher-level context extraction; human-computer interaction application; lightweight subcarrier; location-based service; multipath dense indoor scenario; multipath link adaptation; multipath link characterization; path configuration scheme; ray bouncing model; received signal signature; shrinking detection range; spatial diversity; wireless based device-free human sensing; IEEE 802.11 Standard; OFDM; Sensitivity; Sensors; Shadow mapping; Wireless communication; Wireless sensor networks (ID#: 16-10182)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164925&isnumber=7164877
Y. Utsunomiya, K. Toyoda and I. Sasase, “LPCQP: Lightweight Private Circular Query Protocol for Privacy-Preserving k-NN Search,” 2015 12th Annual IEEE Consumer Communications and Networking Conference (CCNC), Las Vegas, NV, 2015, pp. 59-64. doi: 10.1109/CCNC.2015.7157947
Abstract: With the recent growth of mobile communication, location-based services (LBSs) are getting much attention. While LBSs provide beneficial information about points of interest (POIs) such as restaurants or cafes near users, the user's current location could be revealed to the server. Lien et al. have recently proposed a privacy-preserving k-nearest neighbor (k-NN) search with additive homomorphic encryption. However, it requires heavy computation due to unnecessary multiplication in the encryption domain, and this places an intolerable burden on the server. In this paper, we propose a lightweight private circular query protocol (LPCQP) for privacy-preserving k-NN search with additive and multiplicative homomorphism. Our proposed scheme divides a POI-table into sub-tables and aggregates them with homomorphic cryptography in order to remove POI information that is unnecessary for the requesting user, and thus the computational cost required on the server is reduced. We evaluate the performance of our proposed scheme and show that it reduces the computational cost on the LBS server while keeping high security and high accuracy.
Keywords: cryptographic protocols; data privacy; mobility management (mobile radio); telecommunication security; LBS server; LPCQP; additive homomorphic encryption; computational cost reduction; homomorphic cryptography; lightweight private circular query protocol; location-based service; mobile communication; points of interest; privacy-preserving k-NN search; privacy-preserving k-nearest neighbor search; Accuracy; Additives; Computational efficiency; Encryption; Servers (ID#: 16-10183)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7157947&isnumber=7157933
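The additive homomorphism that LPCQP builds on can be illustrated with a textbook Paillier cryptosystem: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, which is how a server can aggregate POI sub-tables without decrypting them. The sketch below is only the underlying primitive, not the paper's protocol, and the tiny fixed primes make it emphatically insecure; it is a minimal illustration.

```python
import math
import random

# Textbook Paillier with toy parameters (NOT secure; illustration only).
def paillier_keygen(p=293, q=433):
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                      # standard simplification for g
    mu = pow(lam, -1, n)           # valid because g = n + 1
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    x = pow(c, lam, n * n)
    return (((x - 1) // n) * mu) % n

pk, sk = paillier_keygen()
# Additive homomorphism: E(a) * E(b) mod n^2 decrypts to a + b.
a, b = 17, 25
c = (encrypt(pk, a) * encrypt(pk, b)) % (pk[0] ** 2)
assert decrypt(pk, sk, c) == a + b
```

The three-argument `pow` with a negative exponent (modular inverse) requires Python 3.8+, and `math.lcm` requires 3.9+.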
S. K. Mazumder, C. Chowdhury and S. Neogy, “Tracking Context of Smart Handheld Devices,” Applications and Innovations in Mobile Computing (AIMoC), 2015, Kolkata, 2015, pp. 176-181. doi: 10.1109/AIMOC.2015.7083849
Abstract: The ability to locate wireless devices has many benefits, as researchers have already noted. Applications include sports tracking, friend finders, security-related jobs, surveillance, etc. The performance of such security-related services and surveillance would improve significantly if, in addition to location, the context of the user were also known. Thus, in this paper a location-based service is designed and implemented that uses context sensing along with location to track the context (location and state of the device user) of a smart handheld in an energy-efficient manner. The service can be used for surveillance and to act proactively. It can also be used to track the location of individuals (relatives, children) as well as that of a lost or stolen device (say, a phone) from any other type of handheld device. The service can be initiated securely and remotely (not necessarily from a smartphone); thus it does not always run in the background and hence saves significant battery power. Once initiated, it does not stop even when the SIM card is changed or the phone is restarted. The performance analysis shows the efficiency of the application.
Keywords: smart phones; telecommunication security; SIM card; battery power; locate wireless devices; location based service; security related services; smart handheld devices; stolen device; track context; track location; tracking context; Context; Global Positioning System; Google; Mobile communication; Sensors; Servers; Smart phones; Android; Context; Location tracking; SmartPhone; tracking (ID#: 16-10184)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7083849&isnumber=7083813
M. Ahmadian, J. Khodabandehloo and D. C. Marinescu, “A Security Scheme for Geographic Information Databases in Location Based Systems,” SoutheastCon 2015, Fort Lauderdale, FL, 2015, pp. 1-7. doi: 10.1109/SECON.2015.7132941
Abstract: LBSs (Location-Based Services) are ubiquitous nowadays; they are used by a wide variety of applications ranging from social networks to military applications. Moreover, smartphones and handheld devices are increasingly being used for mobile transactions. These devices are mostly GPS-enabled and can provide location information. In some cases, the geographical location of clients is integrated with applications as an authentication factor to enhance security. However, it is easy for attackers to forge location information, so the security of geographical information is a critical issue. In this paper we discuss geographical database features and propose an effective security scheme for mobile devices with limited resources.
Keywords: Global Positioning System; cryptography; geographic information systems; mobile computing; GPS-enabled devices; authentication factor; geographic information databases; geographical database features; geographical location information; hand held devices; location based systems; military applications; mobile devices; mobile transactions; security scheme; smart phones; social networks; Data structures; Encryption; Hardware; Spatial databases; Cryptography; Digital map; Location based system; Security; databases (ID#: 16-10185)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7132941&isnumber=7132866
B. Wang, M. Li, H. Wang and H. Li, “Circular Range Search on Encrypted Spatial Data,” Communications and Network Security (CNS), 2015 IEEE Conference on, Florence, Italy, 2015, pp. 182-190. doi: 10.1109/CNS.2015.7346827
Abstract: Searchable encryption is a promising technique enabling meaningful search operations to be performed on encrypted databases while protecting user privacy from untrusted third-party service providers. However, while most of the existing works focus on common SQL queries, geometric queries on encrypted spatial data have not been well studied. In particular, circular range search is an important type of geometric query on spatial data with wide applications, such as proximity testing in Location-Based Services and Delaunay triangulation in computational geometry. In this paper, we propose two novel symmetric-key searchable encryption schemes supporting circular range search. Informally, both of our schemes can correctly verify whether a point is inside a circle on encrypted spatial data without revealing data privacy or query privacy to a semi-honest cloud server. We formally define the security of our proposed schemes, prove that they are secure under Selective Chosen-Plaintext Attacks, and evaluate their performance through experiments in a real-world cloud platform (Amazon EC2).
Keywords: Cloud computing; Data privacy; Encryption; Nearest neighbor searches; Servers; Spatial databases (ID#: 16-10186)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346827&isnumber=7346791
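Both circular-range-search entries above hinge on one geometric predicate: deciding whether a point lies inside a circle. The papers evaluate this predicate over encrypted data; a plaintext sketch (the sample dataset below is invented for illustration) shows the query being protected. Integer squared distance avoids floating point, which matches how such schemes typically encode geometry.

```python
def inside_circle(point, center, radius):
    """Plaintext form of the predicate the searchable-encryption
    schemes verify: is `point` within `radius` of `center`?"""
    dx = point[0] - center[0]
    dy = point[1] - center[1]
    return dx * dx + dy * dy <= radius * radius

# A circular range query over a small (invented) spatial dataset.
data = [(1, 1), (3, 4), (10, 10), (-2, 0)]
hits = [p for p in data if inside_circle(p, (0, 0), 5)]
# hits -> [(1, 1), (3, 4), (-2, 0)]
```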
L. Chen and K. D. Kang, “A Framework for Real-Time Information Derivation from Big Sensor Data,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1020-1026. doi: 10.1109/HPCC-CSS-ICESS.2015.46
Abstract: In data-intensive real-time applications, e.g., transportation management and location-based services, the amount of sensor data is exploding. In these applications, it is desirable to extract value-added information, e.g., fast driving routes, from sensor data streams in real-time rather than overloading users with massive raw data. However, achieving the objective is challenging due to the data volume and complex data analysis tasks with stringent timing constraints. Most existing big data management systems, e.g., Hadoop, are not directly applicable to real-time sensor data analytics, since they are timing agnostic and focus on batch processing of previously stored data that are potentially outdated and subject to I/O overheads. To address the problem, we design a new real-time big data management framework, which supports a non-preemptive periodic task model for continuous in-memory sensor data analysis and a schedulability test based on the EDF (Earliest Deadline First) algorithm to derive information from current sensor data in real-time by extending the map-reduce model originated in functional programming. As a proof-of-concept case study, a prototype system is implemented. In the performance evaluation, it is empirically shown that all deadlines can be met for the tested sensor data analysis benchmarks.
Keywords: Big Data; batch processing (computers); data analysis; functional programming; parallel processing; performance evaluation; EDF algorithm; I/O overheads; batch processing; complex data analysis tasks; continuous in-memory sensor data analysis; data volume; data-intensive real-time applications; earliest deadline first algorithm; functional programming; nonpreemptive periodic task model; performance evaluation; real-time big data management framework; real-time information derivation; real-time sensor big data analytics; schedulability test; stringent timing constraints; timing agnostic; value-added information extraction; Analytical models; Big data; Data analysis; Data models; Mobile radio mobility management; Real-time systems; Timing; Big Sensor Data; Real-Time Information (ID#: 16-10187)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336303&isnumber=7336120
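The schedulability test mentioned in the abstract can be illustrated with the classical EDF utilization bound for periodic tasks with implicit deadlines. This is a simplification: the paper's tasks are non-preemptive, which additionally requires a blocking-term analysis not modeled here, and the task sets below are invented.

```python
def edf_schedulable(tasks):
    """Utilization-based EDF test: a periodic task set with implicit
    deadlines is preemptively EDF-schedulable iff sum(C_i / T_i) <= 1.
    `tasks` is a list of (wcet, period) pairs."""
    utilization = sum(c / t for c, t in tasks)
    return utilization <= 1.0

# Example task sets (invented): (WCET, period) per task.
assert edf_schedulable([(1, 4), (2, 8), (1, 2)])      # U = 1.0, schedulable
assert not edf_schedulable([(3, 4), (2, 8), (1, 2)])  # U = 1.5, overloaded
```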
J. Peng, Y. Meng, M. Xue, X. Hei and K. W. Ross, “Attacks and Defenses in Location-Based Social Networks: A Heuristic Number Theory Approach,” Security and Privacy in Social Networks and Big Data (SocialSec), 2015 International Symposium on, Hangzhou, 2015, pp. 64-71. doi: 10.1109/SocialSec2015.19
Abstract: The rapid growth of location-based social network (LBSN) applications — such as WeChat, Momo, and Yik Yak — has in essence facilitated the promotion of anonymously sharing instant messages and open discussions. These services breed a unique anonymous atmosphere for users to discover their geographic neighborhoods and then initiate private communications. In this paper, we demonstrate how such location-based features of WeChat can be exploited to determine the user's location with sufficient accuracy in any city from any location in the world. Guided by the number theory, we design and implement two generic localization attack algorithms to track anonymous users' locations that can be potentially adapted to any other LBSN services. We evaluated the performance of the proposed algorithms using Matlab simulation experiments and also deployed real-world experiments for validating our methodology. Our results show that WeChat, and other LBSN services as such, have a potential location privacy leakage problem. Finally, k-anonymity based countermeasures are proposed to mitigate the localization attacks without significantly compromising the quality-of-service of LBSN applications. We expect our research to bring this serious privacy pertinent issue into the spotlight and hopefully motivate better privacy-preserving LBSN designs.
Keywords: data privacy; social networking (online); LBSN applications; Matlab simulation; Momo; WeChat; Yik Yak; heuristic number theory approach; k-anonymity based countermeasures; localization attack algorithms; location privacy leakage problem; location-based social networks; privacy-preserving LBSN design; private communications; quality-of-service; user location; Algorithm design and analysis; Global Positioning System; Prediction algorithms; Privacy; Probes; Smart phones; Social network services; Wechat; localization attack; location-based social network; number theory; privacy (ID#: 16-10188)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371902&isnumber=7371823
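The paper's number-theoretic attack algorithms are not reproduced in the abstract; the core geometric step such probing attacks rely on, however, is standard trilateration: three reported anchor-to-target distances determine a 2-D position. A minimal sketch (anchor positions and the target are invented) solves the linear system obtained by subtracting the circle equations pairwise.

```python
import math

def trilaterate(p1, r1, p2, r2, p3, r3):
    """2-D trilateration: subtracting circle equations pairwise gives
    two linear equations A*x + B*y = C and D*x + E*y = F, solved here
    by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    A = 2 * (x2 - x1); B = 2 * (y2 - y1)
    C = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    D = 2 * (x3 - x2); E = 2 * (y3 - y2)
    F = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = A * E - B * D
    return (C * E - B * F) / det, (A * F - C * D) / det

# A target at (3, 4) observed from three invented anchors:
anchors = [(0, 0), (10, 0), (0, 10)]
dists = [math.dist(a, (3, 4)) for a in anchors]
x, y = trilaterate(anchors[0], dists[0], anchors[1], dists[1],
                   anchors[2], dists[2])
```

In the LBSN setting the "anchors" are an attacker's fake accounts and the "distances" are the coarse distance bands the service reports, which is why the paper's countermeasures coarsen locations to k-anonymous regions.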
S. Kim, S. Ha, A. Saad and J. Kim, “Indoor Positioning System Techniques and Security,” e-Technologies and Networks for Development (ICeND), 2015 Fourth International Conference on, Lodz, 2015, pp. 1-4. doi: 10.1109/ICeND.2015.7328540
Abstract: Nowadays, location-based techniques are used in various fields such as traffic navigation, map services, etc. Because people spend much of their time indoors, it is important for users and service providers to obtain exact indoor positioning information. There are many technologies for obtaining indoor location information, such as Wi-Fi, Bluetooth, and Radio Frequency Identification (RFID). In spite of the importance of indoor positioning systems (IPS), there is no standard for IPS techniques. In addition, because of the characteristics of the data, security and privacy problems become an issue. This paper introduces IPS techniques and analyzes each of them in terms of cost, accuracy, etc., and then introduces related security threats.
Keywords: indoor communication; indoor navigation; radionavigation; security of data; telecommunication traffic; Bluetooth; IPS techniques; RFID; Radio Frequency Identification; Wi-Fi; Indoor location information; indoor positioning system techniques; location based techniques; map services; traffic navigation; Accuracy; Base stations; Fingerprint recognition; Global Positioning System; IEEE 802.11 Standard; Security; Bluetooth4.0; Indoor Positioning System (IPS); Location; Privacy (ID#: 16-10189)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7328540&isnumber=7328528
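One of the Wi-Fi IPS techniques the survey's keywords point to is fingerprint recognition: match an observed RSSI vector against a database of reference points with known positions. A minimal k-nearest-neighbor sketch (the fingerprint database below is invented) looks like:

```python
def locate(fingerprint, database, k=3):
    """Wi-Fi fingerprint localization: average the positions of the k
    reference points whose RSSI vectors are closest (Euclidean) to the
    observed fingerprint."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(database, key=lambda e: dist(e[0], fingerprint))[:k]
    xs = [pos[0] for _, pos in nearest]
    ys = [pos[1] for _, pos in nearest]
    return sum(xs) / k, sum(ys) / k

# Each entry: (RSSI vector from 3 access points, known (x, y) position).
db = [((-40, -70, -80), (0, 0)), ((-45, -65, -75), (1, 0)),
      ((-70, -45, -60), (5, 5)), ((-80, -50, -42), (9, 9))]
x, y = locate((-42, -68, -78), db, k=2)
```

The security angle the paper raises follows directly: the fingerprint database itself reveals the site layout, and a forged fingerprint spoofs the position.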
J. Wang, M. Qiu, B. Guo, Y. Shen and Q. Li, “Low-Power Sensor Polling for Context-Aware Services on Smartphones,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 617-622. doi: 10.1109/HPCC-CSS-ICESS.2015.255
Abstract: The growing availability of sensors integrated in smartphones provides many more opportunities for context-aware services, such as location-based and profile-based applications. Power consumption of the sensors contributes a significant part of the overall power consumption on current smartphones. Furthermore, smartphone sensors have to be activated at a stable period to match the request frequency of context-aware applications, known as full polling-based detection, which wastes a large amount of energy in unnecessary detection. In this paper, we propose low-power sensor polling for context-aware applications, which can dynamically shrink extra sensor activity so that unrelated sensors can remain in sleeping status for a longer time. We also provide an algorithm to find the relationship between application invocations and sensor activities, which is always hidden in the context middleware. With this method, the polling scheduler is able to calculate and match the detection frequency of various application combinations invoked by the user. We evaluate this framework with different kinds of context-aware applications. The results show that our new low-power polling incurs a tiny response delay (97 ms) in the middleware while saving 70% of sensor energy consumption, compared with the traditional exhaustive polling operation.
Keywords: mobile computing; power aware computing; scheduling; sensors; smart phones; application invoking relationship; context middleware; context-aware services; frequency detection; full-polling-based detection; location-based application; low-power sensor polling; polling scheduler; profile-based application; sensor activities; sensor energy consumption; sensor power consumption; sleeping status; smart phones; stable period; Accelerometers; Context; Electronic mail; Feature extraction; IEEE 802.11 Standard; Matrix converters; Smart phones; Contextaware; attribute; detecting; energy consumption; polling; smartphone sensor (ID#: 16-10190)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336226&isnumber=7336120
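One simple building block behind a polling scheduler of the kind described above is merging the polling periods several applications request into a single sensor wakeup timer. The paper's scheduler is more elaborate (it also learns application-to-sensor relationships); the GCD-based sketch below, with invented request periods, only shows the timer-merging idea.

```python
import math
from functools import reduce

def merged_polling_period(request_periods_ms):
    """Collapse per-application polling requests into one wakeup
    period: the GCD of the requested periods serves every request on
    a single timer, letting the sensor sleep between wakeups."""
    return reduce(math.gcd, request_periods_ms)

# Three context-aware apps request 200 ms, 300 ms, and 500 ms polling:
period = merged_polling_period([200, 300, 500])  # -> 100
```

A real scheduler would also round requested periods to a coarse grid first, so the GCD does not degenerate into very frequent wakeups.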
S. Imran, R. V. Karthick and P. Visu, “DD-SARP: Dynamic Data Secure Anonymous Routing Protocol for MANETs in Attacking Environments,” Smart Technologies and Management for Computing, Communication, Controls, Energy and Materials (ICSTM), 2015 International Conference on, Chennai, 2015, pp. 39-46. doi: 10.1109/ICSTM.2015.7225388
Abstract: An important application of MANETs is to maintain anonymous communication in attacking environments. Though many anonymous protocols for secure routing have been proposed, the proposed solutions are vulnerable at some point: denial-of-service (DoS) and timing attacks make both system and protocol vulnerable. This paper studies and discusses various existing protocols and how efficient they are in attacking environments, including ALARM: Anonymous Location-Aided Routing in Suspicious MANETs; ARM: Anonymous Routing Protocol for Mobile Ad Hoc Networks; Privacy-Preserving Location-Based On-Demand Routing in MANETs; AO2P: Ad Hoc On-Demand Position-Based Private Routing Protocol; and Anonymous Connections. In this paper we propose a new concept combining two geographic-location-based protocols: ALERT, which is based mainly on node-to-node hop encryption and bursty traffic, and Greedy Perimeter Stateless Routing (GPSR), a geographic-location-based protocol for wireless networks that uses the router's position and a packet's destination to forward packets, following a greedy method that uses information about the immediate neighboring routers in the network. Simulation results demonstrate the efficiency of the proposed DD-SARP protocol, with improved performance compared to the existing protocols.
Keywords: mobile ad hoc networks; routing protocols; telecommunication security; ALARM; ALERT; AO2P; Ad Hoc on-Demand Position-Based Private Routing Protocol, Anonymous Connections; Anonymous Location-Aided Routing in Suspicious MANET; Anonymous Routing Protocol for Mobile Ad Hoc Networks; DD-SARP; DoS; GPSR; Greedy Perimeter Stateless Routing; anonymous communication; attacking environments; bursty traffic; dynamic data secure anonymous routing protocol; geographical location; neighboring router; node-to-node hop encryption; packet destination; packet forwarding; privacy-preserving location-based on-demand routing; router position; secure routing; service rejection attacks; timing attacks; Ad hoc networks; Encryption; Mobile computing; Public key; Routing; Routing protocols; Mobile adhoc network; adversarial; anonymous; privacy (ID#: 16-10191)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225388&isnumber=7225373
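GPSR's greedy mode, which DD-SARP borrows, is easy to sketch: each node forwards to the neighbor geographically closest to the destination, and only falls back to perimeter routing when no neighbor improves on the current node. The coordinates below are invented, and the perimeter fallback itself is not shown.

```python
import math

def greedy_next_hop(current, neighbors, dest):
    """GPSR-style greedy forwarding: pick the neighbor closest to the
    destination, but only if it is strictly closer than the current
    node; otherwise return None (a local maximum, where real GPSR
    switches to perimeter mode)."""
    best = min(neighbors, key=lambda n: math.dist(n, dest), default=None)
    if best is None or math.dist(best, dest) >= math.dist(current, dest):
        return None
    return best

hop = greedy_next_hop((0, 0), [(1, 1), (2, 0), (0, 2)], (5, 0))
# (2, 0) is the neighbor closest to (5, 0), so it is chosen
```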
A. C. M. Fong, “Conceptual Analysis for Timely Social Media-Informed Personalized Recommendations,” 2015 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, 2015, pp. 150-151. doi: 10.1109/ICCE.2015.7066358
Abstract: Integrating sensor networks and human social networks can provide rich data for many consumer applications. Conceptual analysis offers a way to reason about real-world concepts, which can assist in discovering hidden knowledge from the fused data. Knowledge discovered from such data can be used to provide mobile users with location-based, personalized and timely recommendations. Taking a multi-tier approach that separates concerns of data gathering, representation, aggregation and analysis, this paper presents a conceptual analysis framework that takes unified aggregated data as an input and generates semantically meaningful knowledge as an output. Preliminary experiments suggest that a fusion of sensor network and social media data improves the overall results compared to using either source of data alone.
Keywords: data analysis; data mining; mobile computing; recommender systems; sensor fusion; social networking (online); conceptual analysis; data aggregation; data analysis; data fusion; data gathering; data representation; hidden knowledge discovery; location-based recommendation; multitier approach; sensor network; timely social media-informed personalized recommendations; Conferences; Consumer electronics; Formal concept analysis; Media; Ontologies; Security; Social network services (ID#: 16-10192)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7066358&isnumber=7066289
C. Su, Y. Yu, M. Sui and H. Zhang, “Friend Recommendation Algorithm Based on User Activity and Social Trust in LBSNs,” 2015 12th Web Information System and Application Conference (WISA), Jinan, 2015, pp. 15-20. doi: 10.1109/WISA.2015.11
Abstract: In LBSNs (Location-Based Social Networks), friend recommendation results are mainly decided by the number of common friends or by similar user preferences. However, the lack of semantic information about user activity preferences, insufficient modeling of social trust among user relationships, and individual score rankings supplied by a crowd or by third parties make recommendation quality undesirable. Aiming at this issue, the FRBTA algorithm is proposed in this paper to recommend best friends by considering multiple factors such as users' semantic activity preferences and social trust. Experimental results show that the proposed algorithm is feasible and effective.
Keywords: recommender systems; security of data; social networking (online); FRBTA algorithm; LBSN; friend recommendation algorithm; location-based social networks; similar user preferences; social trust; user activity; user semantic activity preferences; Buildings; Multimedia communication; Semantics; Social network services; Streaming media; User-generated content; Activity Similarity; Friend Recommendation; LBSNs; Social Trust (ID#: 16-10193)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7396600&isnumber=7396587
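A recommender that combines the two factors the abstract names, activity similarity and social trust, can be sketched as a weighted blend. The blending formula, the weight `alpha`, and the candidate data below are all illustrative assumptions; FRBTA's actual scoring is not reproduced here.

```python
def recommend(candidates, alpha=0.6, top_n=2):
    """Rank friend candidates by score = alpha * activity_similarity
    + (1 - alpha) * social_trust, and return the top_n names.
    `candidates` is a list of (name, similarity, trust) triples with
    similarity and trust in [0, 1]."""
    scored = [(alpha * sim + (1 - alpha) * trust, name)
              for name, sim, trust in candidates]
    return [name for _, name in sorted(scored, reverse=True)[:top_n]]

# Invented candidates: bob has moderate similarity but high trust.
candidates = [("alice", 0.9, 0.2), ("bob", 0.5, 0.9), ("carol", 0.4, 0.4)]
best = recommend(candidates)  # -> ['bob', 'alice']
```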
F. A. Mansoori and C. Y. Yeun, “Emerging New Trends of Location Based Systems Security,” 2015 10th International Conference for Internet Technology and Secured Transactions (ICITST), London, 2015, pp. 158-163. doi: 10.1109/ICITST.2015.7412078
Abstract: Location-Based Systems (LBS) are considered among the most beneficial technologies in modern life, commonly embedded in various devices. They help people find required services in the least amount of time based on their positions. Users submit a query with their locations and their required services to an untrusted LBS server. This raises the flag of user privacy: users should have the right to use services while keeping their location or identity concealed. This research covers an introduction to LBS services and architecture components; security threats to LBS; related work on providing security while conducting LBS services, including checking the integrity of provided location information (LI) and the privacy of the end user versus identifying the end user for security purposes; end-user privacy based on key anonymity; and the four LBS security approaches based on key anonymity: Encryption-based K-anonymity, MobiCache, FGcloak, and the Pseudo-Location Updating System. It concludes with a comparison and analysis of the four stated LBS security approaches and, finally, enhancements and recommendations.
Keywords: cryptography; data privacy; mobile computing; FGcloak; LBS; LI; MobiCache; architecture components; encryption-based k-anonymity; end user privacy; key anonymity; location based systems security; location information; pseudo-location updating system; Computer architecture; Internet; Privacy; Public key; Servers; Location Based Systems; Privacy; Security (ID#: 16-10194)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7412078&isnumber=7412034
A. Vasilateanu and A. Buga, “AsthMate — Supporting Patient Empowerment Through Location-Based Smartphone Applications,” 2015 20th International Conference on Control Systems and Computer Science, Bucharest, 2015, pp. 411-417. doi: 10.1109/CSCS.2015.61
Abstract: The ever-changing challenges and pressures on the healthcare domain have made it urgent to find a replacement for traditional systems. Breakthroughs in information systems and advances in data storage and processing solutions, sustained by the ubiquity of gadgets and an efficient infrastructure for networks and services, have driven a shift of medical systems towards digital healthcare. The AsthMate application is an e-health tool for asthma patients, acting as an enabler for patient empowerment. The application contributes both to the individual and to the community, exposing a web application that allows citizens to check the state of the air in the area they live in. The ongoing implementation can benefit from the advantages of cloud computing solutions in order to ensure better deployment and data accessibility. However, data privacy is a key aspect of such systems. For this reason, a proper trade-off between functionality, data openness, and security should be reached.
Keywords: cloud computing; health care; smart phones; Asth Mate application; Web application; asthma patients; cloud computing solutions; data privacy; digital healthcare; e-health tool; information systems; location-based smartphone applications; patient empowerment; Biomedical monitoring; Cloud computing; Collaboration; Diseases; Monitoring; Prototypes; e-health; mobile computing (ID#: 16-10195)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7168462&isnumber=7168393
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Magnetic Remanence 2015 |
Magnetic remanence is the property that allows an attacker to recreate files that have been overwritten. For the Science of Security community, it is a topic relevant to the hard problem of resilience. The work cited here was presented in 2015.
J. Song, H. Kim and S. Park, “Enhancing Conformance Testing Using Symbolic Execution for Network Protocols,” in IEEE Transactions on Reliability, vol. 64, no. 3, pp. 1024-1037, Sept. 2015. doi: 10.1109/TR.2015.2443392
Abstract: Security protocols are notoriously difficult to get right, and most go through several iterations before their hidden security vulnerabilities, which are hard to detect, are triggered. To help protocol designers and developers efficiently find non-trivial bugs, we introduce SYMCONF, a practical conformance testing tool that generates high-coverage test input packets using a conformance test suite and symbolic execution. Our approach can be viewed as the combination of conformance testing and symbolic execution: (1) it first selects symbolic inputs from an existing conformance test suite; (2) it then symbolically executes a network protocol implementation with the symbolic inputs; and (3) it finally generates high-coverage test input packets using a conformance test suite. We demonstrate the feasibility of this methodology by applying SYMCONF to the generation of a stream of high quality test input packets for multiple implementations of two network protocols, the Kerberos Telnet protocol and Dynamic Host Configuration Protocol (DHCP), and discovering non-trivial security bugs in the protocols.
Keywords: conformance testing; cryptographic protocols; DHCP; Kerberos Telnet protocol; SYMCONF; conformance testing enhancement; dynamic host configuration protocol; hidden security vulnerability; high-coverage test input packets; network protocols; nontrivial security bugs; security protocols; symbolic execution; symbolic inputs; Computer bugs; IP networks; Interoperability; Protocols; Security; Software; Testing; Conformance testing; Kerberos; Telnet; protocol verification; test packet generation (ID#: 16-9969)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7128419&isnumber=7229405
L. Chen, D. Hopkinson, J. Wang, A. Cockburn, M. Sparkes and W. O’Neill, “Reduced Dysprosium Permanent Magnets and Their Applications in Electric Vehicle Traction Motors,” in IEEE Transactions on Magnetics, vol. 51, no. 11, pp. 1-4, Nov. 2015. doi: 10.1109/TMAG.2015.2437373
Abstract: Permanent magnet (PM) machines employing rare-earth magnets are receiving increasing interest in electric vehicle (EV) traction applications. However, a significant drawback of PM machine-based EV traction is the extremely high cost and volatile supply of rare-earth materials, especially dysprosium (Dy), whose price is almost 6 times higher than that of neodymium. This paper describes a new Dy grain boundary-diffusion process for sintered Nd2Fe14B magnets to maximize its effect on coercivity enhancement. The new process gains an 81% reduction in the Dy consumption normally required by conventional Nd2Fe14B magnets for equivalent performance, and 17% higher remanence. The investigation into the application in an interior PM (IPM) machine for small-sized EV traction shows that, compared with the conventional Nd2Fe14B magnets, despite the relatively low coercivity, the low-Dy-content magnets still withstand the thermal and demagnetization challenges over various driving operations. In addition, with the magnets' high remanence and energy product, the machine gains significant torque and energy efficiency improvements. The analysis results are validated by a series of tests carried out on a prototype IPM machine with the new magnets.
Keywords: boron alloys; coercive force; demagnetisation; dysprosium; electric vehicles; iron alloys; neodymium alloys; permanent magnet machines; permanent magnets; remanence; traction motors; Dy; Dy consumption reduction; Dy grain boundary-diffusion process; Nd2Fe14B; coercivity enhancement; demagnetization; energy efficiency; interior PM machine; low-Dy-content magnets; magnet high remanence; reduced dysprosium permanent magnets; sintered magnets; small-sized electric vehicle traction motors; torque efficiency; Magnetic domains; Magnetic flux; Magnetic noise; Magnetic shielding; Perpendicular magnetic anisotropy; Torque; rare earth material (ID#: 16-9970)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7112486&isnumber=7305868
J. Li, N. Chen, D. Wei and M. Futamoto, “Micromagnetic Studies of Ultrahigh Resolution Magnetic Force Microscope Tip Coated by Soft Magnetic Materials,” in IEEE Transactions on Magnetics, vol. 51, no. 1, pp. 1-5, Jan. 2015. doi: 10.1109/TMAG.2014.2337835
Abstract: Magnetic force microscope (MFM) tips coated with soft magnetic materials can achieve a spatial resolution above 10 nm. It is interesting to analyze why tips coated with soft magnetic materials can achieve such high resolution. In experiments, an MFM tip coated with amorphous FeB achieved a resolution of 8 nm; we therefore chose an FeB tip as an example and established a micromagnetic model to understand the measurement mechanism of the soft magnetic MFM tip. In the FeB film simulation, the random crystalline anisotropy results in a soft magnetic loop with an in-plane coercivity of 0.2 Oe, and the film surface roughness raises the coercivity to the order of 1 Oe. In the tip simulation, it is found that the FeB-coated tip can be switched in a uniform field of the order of 100 Oe, but remains near a remanent state in a stray field resulting from media. A simple model is set up to analyze the MFM images of bits in hard disk drives using the simulated magnetic properties of the tip, and a resolution of ~10 nm is confirmed.
Keywords: amorphous magnetic materials; boron alloys; coercive force; disc drives; hard discs; iron alloys; magnetic anisotropy; magnetic force microscopy; magnetic thin film devices; magnetic thin films; metallic thin films; micromagnetics; remanence; soft magnetic materials; surface roughness; FeB; amorphous film simulation; film surface roughness; hard disk driver bits; in-plane coercivity; magnetic properties; micromagnetic model; random crystalline anisotropy; remanent state; soft magnetic MFM tip; soft magnetic loop; soft magnetic materials; spatial resolution; stray field; tip simulation; ultrahigh resolution magnetic force microscopy tip; Amorphous magnetic materials; Educational institutions; Films; Magnetic force microscopy; Micromagnetics; Microscopy; Soft magnetic materials; Amorphous FeB; magnetic force microscope (MFM) tip; micromagnetic simulation; soft magnetic coating (ID#: 16-9971)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6851911&isnumber=7029158
Q. Y. Zhou, Z. Liu, S. Guo, A. R. Yan and D. Lee, “Magnetic Properties and Microstructure of Melt-Spun Ce–Fe–B Magnets,” in IEEE Transactions on Magnetics, vol. 51, no. 11, pp. 1-4, Nov. 2015. doi: 10.1109/TMAG.2015.2447553
Abstract: A series of CexFebalB6 (x = 12, 14, 17, 19, and 23 wt%) ternary ribbons was prepared by melt-spinning. The magnetic properties and microstructure of the Ce-Fe-B ribbons with different Ce contents were investigated. The X-ray diffraction results indicated that multiple phases coexisted in the as-spun Ce-Fe-B ribbons, including Ce2Fe17, CeFe2, a Ce-rich phase, an Fe-rich phase, cerium oxide, and iron oxide. The magnetic properties, microstructure, and phase composition of the ribbons were directly affected by the cerium content. The magnetic properties could be attributed to exchange coupling between the hard and soft magnetic phases in the pure ternary Ce-Fe-B ribbons. Furthermore, the magnetic properties of the as-spun Ce-Fe-B ribbons could be optimized by heat treatment. The highest magnetic properties of Hcj = 6.2 kOe, Br = 6.9 kGs, and (BH)m = 8.6 MGOe were obtained in Ce17FebalB6 magnets.
Keywords: X-ray diffraction; boron alloys; cerium alloys; coercive force; crystal microstructure; exchange interactions (electron); heat treatment; iron alloys; melt spinning; permanent magnets; remanence; soft magnetic materials; CexFeB6; X-ray diffraction; coercivity; exchange coupling; hard magnetic phase; heat treatment; magnetic energy product; magnetic properties; melt-spun magnets; microstructure; phase composition; soft magnetic phase; ternary ribbons; Heating; Magnetic properties; Microstructure; Perpendicular magnetic anisotropy; Saturation magnetization; Temperature; Ce-Fe-B magnets; Ce2Fe17B magnets; heat treatment; heat-treatment; magnetic properties; melt-spinning (ID#: 16-9972)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7128707&isnumber=7305868
C. Wu et al., “Low-Temperature Sintering of Barium Hexaferrites with Bi2O3/CuO Additives,” in IEEE Transactions on Magnetics, vol. 51, no. 11, pp. 1-4, Nov. 2015. doi: 10.1109/TMAG.2015.2434834
Abstract: Barium hexaferrites BaFe12O19 with Bi2O3/CuO additives were synthesized by the conventional oxide ceramic process. The results show that the Bi2O3 additives mainly concentrate in the grain boundary regions, while the CuO additives can both enter the grains and segregate at the grain boundary regions. The appropriate sintering temperature of barium hexaferrites with Bi2O3 and Bi2O3 + CuO additives reaches 1020 °C and 920 °C, respectively. When the amount of Bi2O3 is 3 wt%, the BaFe12O19 specimens show a saturation magnetization (Ms) of 346 kA/m, remanence (Mr) of 227 kA/m, and density (d) of 5.06 g/cm3. Meanwhile, the combination of 3 wt% Bi2O3 and 3 wt% CuO additives significantly promotes grain growth and sintering densification, with a high saturation magnetization of 371 kA/m and density of 5.23 g/cm3 for BaFe12O19, close to the theoretical values (380 kA/m, 5.28 g/cm3). Moreover, the corresponding remanence also rises by 10% compared with that of the sintered samples with 3 wt% Bi2O3 alone.
Keywords: barium compounds; bismuth compounds; copper compounds; densification; ferrites; grain boundaries; grain growth; remanence; sintering; BaFe12O19-Bi2O3-CuO; barium hexaferrites; conventional oxide ceramic process; density; grain boundary regions; low-temperature sintering; saturation magnetization; sintering densification; temperature 1020 degC to 920 degC; Additives; Barium; Copper; Ferrites; Grain boundaries; Magnetic properties; Saturation magnetization; Barium hexaferrites; Bi2O3/CuO additives; Density; Low-temperature sintering; magnetic properties (ID#: 16-9973)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7109915&isnumber=7305868
C. Yu, S. Niu, S. L. Ho, W. Fu and L. Li, “Hysteresis Modeling in Transient Analysis of Electric Motors with AlNiCo Magnets,” in IEEE Transactions on Magnetics, vol. 51, no. 3, pp. 1-4, March 2015. doi: 10.1109/TMAG.2014.2362615
Abstract: Electric motors fabricated with aluminum-nickel-cobalt (AlNiCo) permanent magnets (PMs) can be operated over a wide speed range, as the strong nonlinear demagnetization characteristics of AlNiCo allow effective and efficient airgap flux control. This paper presents a linear hysteresis model, derived from the Preisach model for AlNiCo PMs, that is incorporated into the time-stepping finite element method to study motor performance, especially transient performance, accurately and effectively. The proposed method can significantly reduce the computing time without sacrificing the accuracy of the Preisach model. The validity and accuracy of the proposed method are verified by both simulation and experimentation.
Keywords: air gaps; aluminium alloys; cobalt alloys; demagnetisation; finite element analysis; magnetic flux; magnetic hysteresis; nickel alloys; permanent magnet motors; transient analysis; AlNiCo; Preisach model; airgap flux control; electric motors; linear hysteresis model; nonlinear demagnetization characteristics; permanent magnets; time stepping finite element method; transient performance analysis; Demagnetization; Hysteresis motors; Magnetic flux; Magnetic hysteresis; Magnetization; Permanent magnet motors; Remanence; Aluminum-nickel-cobalt (AlNiCo); finite element method; hysteresis model; memory motor; permanent magnet; permanent magnet (PM) (ID#: 16-9974)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6919313&isnumber=7092981
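For readers unfamiliar with the Preisach model mentioned in the abstract, here is a minimal scalar sketch: the model output is a weighted sum of relay hysterons, each switching up at a threshold alpha and down at a threshold beta. The hysteron thresholds and weights below are arbitrary illustrative values, unrelated to the paper's AlNiCo data or its linearized FEM variant.

```python
# Minimal scalar Preisach-type hysteresis sketch: the output is a
# weighted sum of relay hysterons, each switching up at `alpha` and
# down at `beta` (alpha >= beta). Illustrative values only.

def make_hysterons():
    # (alpha, beta, weight) triples; arbitrary values for the sketch
    return [(-0.5, -1.5, 1.0), (0.5, -0.5, 1.0), (1.5, 0.5, 1.0)]

def preisach(inputs, hysterons):
    """Return the model output for a sequence of field values."""
    states = [-1.0] * len(hysterons)      # all relays start 'down'
    outputs = []
    for h in inputs:
        for i, (alpha, beta, _w) in enumerate(hysterons):
            if h >= alpha:
                states[i] = +1.0          # relay switches up
            elif h <= beta:
                states[i] = -1.0          # relay switches down
            # otherwise the relay holds its state (memory effect)
        outputs.append(sum(w * s for (_, _, w), s in zip(hysterons, states)))
    return outputs

# Sweep the field up, then up and back down: the output at h = 0
# differs between the rising and falling branches, i.e. hysteresis.
up = preisach([-2.0, 0.0, 2.0], make_hysterons())
down = preisach([-2.0, 0.0, 2.0, 0.0], make_hysterons())
```

The memory held by the relays is what makes the loop open: at the same input value the output depends on the input history, which is the behavior the paper's linearized model reproduces at lower cost inside the time-stepping FEM.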
C. Yang et al., “Giant Converse Magnetoelectric Effect in PZT/FeCuNbSiB/FeGa/FeCuNbSiB/PZT Laminates Without Magnetic Bias Field,” in IEEE Transactions on Magnetics, vol. 51, no. 11, pp. 1-4, Nov. 2015. doi: 10.1109/TMAG.2015.2435010
Abstract: We report a giant self-biased converse magnetoelectric (CME) effect in laminated composites consisting of graded magnetostrictive FeCuNbSiB/FeGa/FeCuNbSiB layers sandwiched between two electrically parallel-connected PZT piezoelectric plates. The greatly different magnetic characteristics (such as magnetic permeability and coercivity) of FeGa and nanocrystalline FeCuNbSiB foil result in a large internal magnetic field and remanent piezomagnetic coefficient in FeCuNbSiB/FeGa/FeCuNbSiB, which account for the giant self-biased CME effect. The experimental results show that: (1) a large remanent CME coefficient of 2.228 × 10-3 mGs · cm/V is achieved, which can be used for realizing miniature electrically controlled magnetic flux devices; (2) dynamic switching of the magnetic flux between bistable states in PZT/FeCuNbSiB/FeGa/FeCuNbSiB/PZT is realized by controlling a small ac voltage (1 Vrms); and (3) the induced magnetic induction B has an excellent linear relationship with the applied ac voltage Vin.
Keywords: boron alloys; coercive force; copper alloys; electromagnetic induction; gallium alloys; iron alloys; laminates; lead compounds; magnetic flux; magnetic multilayers; magnetic permeability; magnetic switching; magnetoelectric effects; magnetostriction; nanostructured materials; niobium alloys; remanence; silicon alloys; PZT-FeCuNbSiB-FeGa-FeCuNbSiB-PZT; PZT-FeCuNbSiB-FeGa-FeCuNbSiB-PZT laminates; applied ac voltage; bistable states; coercivity; dynamic switching; electroparallel-connected PZT piezoelectric plates; giant self-biased converse magnetoelectric effect; graded-magnetostrictive FeCuNbSiB-FeGa-FeCuNbSiB layers; induced magnetic induction; internal magnetic field; laminated composites; magnetic permeability; miniature electrically controlled magnetic flux devices; nanocrystalline foil FeCuNbSiB; remanent converse magnetoelectric coefficient; remanent piezomagnetic coefficient; Magnetic flux leakage; Magnetic hysteresis; Magnetic resonance; Magnetic switching; Magnetoelectric effects; Magnetostriction; Converse magnetoelectric (CME) effects; Converse magnetoelectric effects; magnetostrictive materials; nanocrystalline foil; switching of magnetic flux. (ID#: 16-9975)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7110327&isnumber=7305868
X. Cui, S. Yakata and T. Kimura, “Detection of Vortex Core Oscillation Using Second-Harmonic Voltage Detection Technique,” in IEEE Transactions on Magnetics, vol. 51, no. 11, pp. 1-3, Nov. 2015. doi: 10.1109/TMAG.2015.2435024
Abstract: A sensitive and reliable detection method for magnetic vortex core dynamics at the remanent state has been demonstrated. The perfectly symmetric potential created in a circular-shaped disk produces twofold symmetry in the position dependence of the resistance of the ferromagnetic disk. We find that a detectable second-harmonic voltage can be induced by flowing a dc current when the circular-shaped core oscillation is excited by a microwave magnetic field. Consistent features have been observed in the current dependence and field dependence of the signals, indicating that the present method is a powerful technique for characterizing the core dynamics, even at the remanent state.
Keywords: magnetic cores; remanence; circular-shaped core oscillation; circular-shaped disk; core dynamics; current dependence; dc current; ferromagnetic disk; field dependence; magnetic vortex core dynamics; microwave magnetic field; remanent state; second-harmonic voltage; second-harmonic voltage detection technique; twofold symmetry; vortex core oscillation detection; Magnetic cores; Magnetic resonance; Magnetic separation; Magnetostatics; Perpendicular magnetic anisotropy; 2nd harmonic; Anisotropic magnetoresistance (AMR) effect; RF magnetic field; anisotropic magnetoresistance effect; second harmonic; vortex dynamics
(ID#: 16-9976)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7112132&isnumber=7305868
M. Nakano et al., “Nd–Fe–B Film Magnets with Thickness Above 100 μm Deposited on Si Substrates,” in IEEE Transactions on Magnetics, vol. 51, no. 11, pp. 1-4, Nov. 2015. doi: 10.1109/TMAG.2015.2438099
Abstract: Although an increase in the thickness of a Nd-Fe-B film magnet is indispensable for providing a sufficient magnetic field, it was difficult to suppress the peeling phenomenon caused by the different linear expansion coefficients of a Si substrate and a Nd-Fe-B film, even when a buffer layer such as a Ta film was used. In this report, it was confirmed that controlling the microstructure of pulsed laser deposition-fabricated Nd-Fe-B films enabled us to increase the thickness up to approximately 160 μm without a buffer layer on a Si substrate. Namely, we found that the precipitation of the Nd element at the boundaries of Nd-Fe-B grains, together with the triple junctions due to the composition adjustment, is effective in suppressing the destruction of the samples during the annealing process. The magnetic properties of the prepared films were comparable with those of previously reported films deposited on metal substrates. Although the mechanism is under investigation, the above-mentioned film had a stronger adhesive force than a sputtering-made film. As a result, no deterioration of the mechanical or magnetic properties was observed after a dicing process.
Keywords: annealing; boron alloys; crystal microstructure; iron alloys; magnetic thin films; metallic thin films; neodymium alloys; precipitation; pulsed laser deposition; remanence; Nd-Fe-B film magnets; NdFeB; Si substrates; adhesive force; annealing; linear expansion coefficient; magnetic field; magnetic properties; microstructure; precipitation; pulsed laser deposition; triple junctions; Magnetic films; Magnetic properties; Magnetic tunneling; Magnetomechanical effects; Silicon; Substrates; Film magnet; MEMS; Nd-Fe-B; PLD (Pulsed Laser Deposition); Si substrate; film magnet; pulsed laser deposition (PLD) (ID#: 16-9977)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7115146&isnumber=7305868
Multicore Computing Security 2015 |
As Big Data and Cloud applications have grown, the size and capacity of hardware support has grown as well. Multicore and many core systems have security problems related to the Science of Security issues of resiliency, composability, and measurement. The research work cited here was presented in 2015.
F. Dupros, F. Boulahya, H. Aochi and P. Thierry, “Communication-Avoiding Seismic Numerical Kernels on Multicore Processors,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 330-335. doi: 10.1109/HPCC-CSS-ICESS.2015.230
Abstract: The finite-difference method is routinely used to simulate seismic wave propagation, both in the oil and gas industry and in strong-motion analysis in seismology. This numerical method also lies at the heart of a significant fraction of numerical solvers in other fields. In terms of computational efficiency, one of the main difficulties is the disadvantageous ratio between the limited pointwise computation and the intensive memory access required, leading to a memory-bound situation. Naive sequential implementations offer poor cache reuse and in general achieve a low fraction of the peak performance of the processors. The situation is worse on multicore computing nodes with several levels of memory hierarchy, where each cache miss corresponds to a costly memory access. Additionally, the memory bandwidth available on multicore chips improves slowly relative to the number of computing cores, which induces a dramatic reduction of the expected parallel performance. In this article, we introduce a cache-efficient algorithm for stencil-based computations using a decomposition along both the space and time directions. We report a maximum speedup of 3.59× over the standard implementation.
Keywords: cache storage; finite difference methods; gas industry; geophysics computing; multiprocessing systems; petroleum industry; seismic waves; seismology; wave propagation; Naive sequential implementations; cache-efficient algorithm; cache-reuse; communication-avoiding seismic numerical kernel; computational efficiency; finite-difference method; gas industry; memory bandwidth; memory hierarchy; multicore chips; multicore computing nodes; multicore processors; numerical method; numerical solvers; oil industry; peak performance; pointwise computation; seismic wave propagation simulation; seismology; stencil-based computations; strong motion analysis; Memory management; Multicore processing; Optimization; Program processors; Seismic waves; Standards; communication-avoiding; multicore; seismic (ID#: 16-9931)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336184&isnumber=7336120
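To make the space-time decomposition concrete, here is a toy 1-D three-point stencil in Python. It is an illustrative sketch under assumed fixed (Dirichlet) boundaries; the tile/halo scheme is the generic temporal-blocking idea, not the authors' seismic kernel. Each tile is widened by a halo of T ghost cells per side, so T time steps can be applied locally before the valid interior is written back.

```python
# Toy 1-D "communication-avoiding" stencil: widen each tile by a halo
# of T ghost cells per side, apply T time steps locally, then write
# back only the tile interior. Illustrative sketch, not the paper's kernel.

def step(u):
    # One 3-point averaging sweep; the two end cells are held fixed.
    return [u[0]] + [(u[i - 1] + u[i] + u[i + 1]) / 3.0
                     for i in range(1, len(u) - 1)] + [u[-1]]

def naive(u, T):
    for _ in range(T):                  # T full passes over memory
        u = step(u)
    return u

def blocked(u, T, B):
    n, out = len(u), [0.0] * len(u)
    for start in range(0, n, B):
        lo, hi = max(0, start - T), min(n, start + B + T)
        tile = u[lo:hi]                 # tile plus halo (read-only copy)
        for _ in range(T):              # T local time steps, no sync
            tile = step(tile)
        width = min(B, n - start)       # write back the valid interior:
        out[start:start + width] = tile[start - lo:start - lo + width]
    return out

u0 = [float(i % 7) for i in range(40)]
assert naive(u0, 4) == blocked(u0, 4, 8)   # same answer, fewer passes
```

Because stale values at a tile's halo edge propagate inward only one cell per step, a halo of width T keeps the interior exact for T steps; every interior cell then sees the same arithmetic as in the naive sweep, so the two variants agree exactly while the blocked one reuses each cache-sized tile across T time steps.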
X. Lin et al., “Realistic Task Parallelization of the H.264 Decoding Algorithm for Multiprocessors,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 871-874. doi: 10.1109/HPCC-CSS-ICESS.2015.33
Abstract: In recent years, hardware technology has developed ahead of software technology. Companies lack software techniques that can fully utilize modern multi-core computing resources, mainly due to the difficulty of investigating the inherent parallelism inside a piece of software. This problem exists in products ranging from energy-sensitive smartphones to performance-eager data centers. In this paper, we present a case study on the parallelization of the complex industry-standard H.264 HDTV decoder application on multi-core systems. An optimal schedule of the tasks is obtained and implemented by a carefully defined software parallelization framework (SPF). The parallel software framework is proposed together with a set of rules to direct parallel software programming (PSPR). A pre-processing phase based on the rules is applied to the source code to make the SPF applicable. The task-level parallel version of the H.264 decoder is implemented and tested extensively on a workstation running Linux. Significant performance improvement is observed for a set of benchmarks composed of 720p videos. The SPF and the PSPR will together serve as a reference for future parallel software implementations and direct the development of automated tools.
Keywords: Linux; high definition television; multiprocessing systems; parallel programming; source code (software); video coding; H.264 HDTV decoder application; H.264 decoding algorithm; PSPR; SPF; data centers; energy-sensitive smart phones; multicore computing resources; multiprocessors; optimal task schedule; parallel software implementations; parallel software programming; performance improvement; preprocessing phase; realistic task parallelization; software parallelization framework; source code; task-level parallel; workstation; Decoding; Industries; Parallel processing; Parallel programming; Software; Software algorithms; Videos (ID#: 16-9932)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336273&isnumber=7336120
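The dependency-driven dispatch that such a parallelization framework performs can be sketched generically. The graph, task names, and the use of Python's ThreadPoolExecutor below are illustrative assumptions on my part; the paper's SPF is not described at this level of detail.

```python
from concurrent.futures import ThreadPoolExecutor, wait

# Tiny dependency-driven task scheduler in the spirit of a software
# parallelization framework: a task runs as soon as all of its
# prerequisites have retired. Assumes the task graph is acyclic.

def run_graph(deps, work, max_workers=4):
    """deps: {task: set of prerequisite tasks}; work: {task: callable}.
    Returns the tasks in retirement order."""
    done, order, pending = set(), [], dict(deps)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        while pending:
            # a wave of ready tasks: every prerequisite already done
            ready = [t for t, d in pending.items() if d <= done]
            futures = {pool.submit(work[t]): t for t in ready}
            for t in ready:
                del pending[t]
            wait(futures)                 # barrier between waves
            for f, t in futures.items():
                f.result()                # re-raise any task exception
                done.add(t)
                order.append(t)
    return order

# Hypothetical decode graph: two slice-decode tasks fan out of the
# parse step and join at the deblocking filter.
deps = {"parse": set(), "dec0": {"parse"}, "dec1": {"parse"},
        "filter": {"dec0", "dec1"}}
order = run_graph(deps, {t: (lambda: None) for t in deps})
```

The two decode tasks in this toy graph run concurrently, while the parse/filter ordering is preserved, which is the essence of extracting task-level parallelism from an otherwise sequential decoder.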
M. Paulitsch, “Keynote,” 2015 27th Euromicro Conference on Real-Time Systems, Lund, 2015, pp. xiii-xiii. doi: 10.1109/ECRTS.2015.7
Abstract: Summary form only given. Mixed-criticality embedded systems are getting more attention due to savings in cost, weight, and power, fueled by the ever-increasing performance of processors. Introduced into practice more than two decades ago — e.g., in aerospace with the concept of time and space partitioning — optimization and different underlying hardware architectures like multicore processors continue to challenge system designers. This talk presents a mix of different aspects of mixed-criticality system architectures and designs and the underlying approaches of the past, with excursions into real space, aerospace, and railway systems. With the advent of multicore systems-on-chip and multicore processors, many of the original assumptions and solutions are challenged and sometimes invalidated, and new problems emerge that require special attention. We will walk through current and future challenges, look at point solutions, and discuss possible research needs. The interplay of safety, security, system design, performance optimization, scheduling aspects, and application needs and constraints, combined with modern computing architectures like multicore processors, provides a fertile ground for research and discussion in this field.
Keywords: computer architecture; embedded systems; scheduling; security of data; system-on-chip; computing architectures; hardware architectures; mixed-criticality embedded systems; mixed-criticality system architecture; multicore processors; multicore system-on-chip; performance optimization; safety; scheduling aspects; security; system design (ID#: 16-9933)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7176020&isnumber=7175989
A. Cilardo, J. Flich, M. Gagliardi and R. T. Gavila, “Customizable Heterogeneous Acceleration for Tomorrow's High-Performance Computing,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1181-1185. doi: 10.1109/HPCC-CSS-ICESS.2015.303
Abstract: High-performance computing as we know it today is experiencing unprecedented changes, encompassing all levels from technology to use cases. This paper explores the adoption of customizable, deeply heterogeneous manycore systems for future QoS-sensitive and power-efficient high-performance computing. At the heart of the proposed architecture is a NoC-based manycore system embracing medium-end CPUs, GPU-like processors, and reconfigurable hardware regions. The paper discusses the high-level design principles inspiring this innovative architecture as well as the key role that heterogeneous acceleration, ranging from multicore processors and GPUs down to FPGAs, might play for tomorrow's high-performance computing.
Keywords: field programmable gate arrays; graphics processing units; multiprocessing systems; network-on-chip; parallel processing; power aware computing; quality of service; FPGA; GPU-like processors; NoC-based many-core system; QoS-sensitive computing; customizable heterogeneous acceleration; heterogeneous acceleration; heterogeneous manycore systems; high-level design principles; high-performance computing; innovative architecture; medium-end CPU; multicore processors; power-efficient high-performance computing; reconfigurable hardware regions; Acceleration; Computer architecture; Field programmable gate arrays; Hardware; Program processors; Quality of service; Registers (ID#: 16-9934)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336329&isnumber=7336120
Q. Luo, F. Xiao, Y. Zhou and Z. Ming, “Performance Profiling of VMs on NUMA Multicore Platform by Inspecting the Uncore Data Flow,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 914-917. doi: 10.1109/HPCC-CSS-ICESS.2015.47
Abstract: Recently, the NUMA (Non-Uniform Memory Access) multicore platform has become increasingly popular, providing hardware-level support for many hot fields such as cloud computing and big data; deploying virtual machines on NUMA is a key technology. However, performance degradation in virtual machines is not negligible, because the guest OS has little or inaccurate knowledge of the underlying hardware. Our research focuses on performance profiling of VMs on multicore platforms by inspecting the uncore data flow, and we design a performance profiling tool called VMMprof based on the PMU (Performance Monitoring Units). It supports the uncore part of the processor, a new capability beyond those of existing tools. Experiments show that VMMprof can identify typical factors which affect the performance of individual processes and the whole system.
Keywords: data flow computing; memory architecture; multiprocessing systems; performance evaluation; virtual machines; NUMA multicore platform; PMU; VM performance profiling; VMMprof; hardware level support; nonuniform memory access; performance degradation; performance monitoring units; performance profiling tool; uncore data flow; uncore data flow inspection; virtual machines; Bandwidth; Hardware; Monitoring; Multicore processing; Phasor measurement units; Sockets; Virtual machining; NUMA; VMs; uncore (ID#: 16-9935)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336283&isnumber=7336120
S. H. VanderLeest and D. White, “MPSoC Hypervisor: The Safe & Secure Future of Avionics,” 2015 IEEE/AIAA 34th Digital Avionics Systems Conference (DASC), Prague, 2015, pp. 6B5-1–6B5-14. doi: 10.1109/DASC.2015.7311448
Abstract: Future avionics must provide increased performance and security while maintaining safety. The additional security capabilities now being required in commercial avionics equipment arise from integration and centralization of processing capabilities combined with passenger expectations for enhanced communications connectivity. Certification of airborne electronic hardware has long provided rigorous assurance of the safety of flight, but security of information is a more recent requirement for avionics processors and communications systems. In this paper, we explore promising options for future avionics equipment leveraging the latest embedded processing hardware and software technologies and techniques. The Xilinx Zynq® UltraScale+TM MultiProcessor System on Chip (MPSoC) provides one promising avionics solution from a hardware standpoint. The MPSoC provides a high performance heterogeneous multicore processing system and programmable logic in a single device with enhanced safety and security features. Combining this processor solution with a safe and secure software hypervisor solution unlocks many opportunities to address the next generation of airborne computing requirements while satisfying embedded multicore hardware and software certification objectives. In this paper we review the Zynq MPSoC and use of a software hypervisor to provide robust partitioning via virtualization. Partitioning is well established to support safety of flight in Integrated Modular Avionics (IMA) while maintaining reasonable performance. Security is a more recent concern, gaining attention as a vulnerability that can also affect safety in unanticipated ways. Hypervisor-based partitioning provides strong isolation that can reduce covert side channels of information exchange and support Multiple Independent Levels of Security (MILS).
Keywords: aerospace computing; air safety; avionics; certification; multiprocessing systems; security of data; software engineering; system-on-chip; virtualisation; IMA; MILS; MPSoC hypervisor; Zynq UltraScale+TM multiprocessor system on chip; airborne computing; airborne electronic hardware certification; avionics processors; commercial avionics equipment; communication systems; embedded multicore hardware certification; enhanced safety features; flight safety; high performance heterogeneous multicore processing system; hypervisor-based partitioning; integrated modular avionics; multiple independent levels of security; programmable logic; secure software hypervisor solution; security features; software certification; virtualization; Aerospace electronics; Hardware; Multicore processing; Program processors; Safety; Security; Virtual machine monitors (ID#: 16-9936)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7311448&isnumber=7311321
A. Haidar, A. YarKhan, C. Cao, P. Luszczek, S. Tomov and J. Dongarra, “Flexible Linear Algebra Development and Scheduling with Cholesky Factorization,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 861-864. doi: 10.1109/HPCC-CSS-ICESS.2015.285
Abstract: Modern high performance computing environments are composed of networks of compute nodes that often contain a variety of heterogeneous compute resources, such as multicore CPUs and GPUs. One challenge faced by domain scientists is how to efficiently use all these distributed, heterogeneous resources. In order to use the GPUs effectively, the workload parallelism needs to be much greater than the parallelism for a multicore-CPU. Additionally, effectively using distributed memory nodes brings out another level of complexity where the workload must be carefully partitioned over the nodes. In this work we are using a lightweight runtime environment to handle many of the complexities in such distributed, heterogeneous systems. The runtime environment uses task-superscalar concepts to enable the developer to write serial code while providing parallel execution. The task-programming model allows the developer to write resource-specialization code, so that each resource gets the appropriate sized workload-grain. Our task-programming abstraction enables the developer to write a single algorithm that will execute efficiently across the distributed heterogeneous machine. We demonstrate the effectiveness of our approach with performance results for dense linear algebra applications, specifically the Cholesky factorization.
Keywords: distributed memory systems; graphics processing units; mathematics computing; matrix decomposition; parallel processing; resource allocation; scheduling; Cholesky factorization; GPU; compute nodes; distributed heterogeneous machine; distributed memory nodes; distributed resources; flexible linear algebra development; flexible linear algebra scheduling; heterogeneous compute resources; high performance computing environments; multicore-CPU; parallel execution; resource-specialization code; serial code; task-programming abstraction; task-programming model; task-superscalar concept; workload parallelism; Graphics processing units; Hardware; Linear algebra; Multicore processing; Parallel processing; Runtime; Scalability; accelerator-based distributed memory computers; heterogeneous HPC computing; superscalar dataflow scheduling (ID#: 16-9937)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336271&isnumber=7336120
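The task-superscalar idea described above (write serial code, let the runtime derive parallelism from data dependencies) can be sketched in miniature: tasks are declared in serial order with the data they read and write, and a toy scheduler derives RAW/WAW/WAR dependencies and groups independent tasks into parallel waves. The tile names and API below are hypothetical illustrations, not the authors' runtime.

```python
# Toy task-superscalar scheduler: declare tasks serially with the data
# they read and write; derive dependencies; group tasks into parallel waves.
def schedule_waves(tasks):
    """tasks: list of (name, reads, writes). Returns list of parallel waves."""
    deps = {name: set() for name, _, _ in tasks}
    last_writer, readers = {}, {}
    for name, reads, writes in tasks:
        for d in reads:
            if d in last_writer:               # RAW: read after last write
                deps[name].add(last_writer[d])
            readers.setdefault(d, []).append(name)
        for d in writes:
            if d in last_writer:               # WAW: write after write
                deps[name].add(last_writer[d])
            for r in readers.get(d, []):       # WAR: write after read
                if r != name:
                    deps[name].add(r)
            last_writer[d] = name
            readers[d] = []
    waves, done = [], set()
    while len(done) < len(tasks):              # Kahn-style wavefronts
        wave = [n for n, _, _ in tasks
                if n not in done and deps[n] <= done]
        waves.append(wave)
        done.update(wave)
    return waves

# Serial declaration of a tiny Cholesky-like tile sequence.
tile_tasks = [
    ("potrf(A00)", [],      ["A00"]),
    ("trsm(A10)",  ["A00"], ["A10"]),
    ("trsm(A20)",  ["A00"], ["A20"]),
    ("syrk(A11)",  ["A10"], ["A11"]),
]
print(schedule_waves(tile_tasks))
# potrf runs alone; the two trsm tasks then run in parallel; syrk runs last.
```

In a real runtime each wave would be dispatched to the appropriate resource (CPU cores or a GPU) with a resource-specific workload grain, as the abstract describes.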
J. Xue et al., “Task-D: A Task Based Programming Framework for Distributed System,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1663-1668. doi: 10.1109/HPCC-CSS-ICESS.2015.299
Abstract: We present Task-D, a task-based distributed programming framework. Traditionally, programming for distributed systems requires either low-level MPI or high-level pattern-based models such as Hadoop/Spark. Task-based models are widely and successfully used in multicore and heterogeneous environments, but rarely in distributed ones. Task-D bridges this gap by providing a higher-level abstraction than MPI while offering more flexibility than Hadoop/Spark for task-based distributed programming. The Task-D framework relieves programmers of the complexities involved in distributed programming. We provide a set of APIs that can be directly embedded into user code to enable the program to run in a distributed fashion across heterogeneous computing nodes. We also explore the design space and the features the runtime should support, including data communication among tasks, data sharing among programs, resource management, memory transfers, job scheduling, automatic workload balancing, and fault tolerance. A prototype system is realized as one implementation of Task-D. A distributed ALS algorithm implemented using the Task-D APIs achieves significant performance gains over a Spark-based implementation. We conclude that task-based models are well suited to distributed programming: Task-D not only improves programmability for distributed environments but also leverages performance through effective runtime support.
Keywords: application program interfaces; message passing; parallel programming; automatic workload balancing; data communication; distributed ALS algorithm; distributed programming; distributed system; heterogeneous computing node; high-level pattern based; job scheduling; low-level MPI; resource management; task-D API; task-based programming framework; Algorithm design and analysis; Data communication; Fault tolerance; Fault tolerant systems; Programming; Resource management; Synchronization (ID#: 16-9938)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336408&isnumber=7336120
A. Martínez, C. Domínguez, H. Hassan, J. M. Martínez and P. López, “Using GPU and SIMD Implementations to Improve Performance of Robotic Emotional Processes,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1876-1881. doi: 10.1109/HPCC-CSS-ICESS.2015.288
Abstract: Future robotic systems are being implemented using control architectures based on emotions. In these architectures, emotional processes decide which behaviors the robot must activate to fulfill its objectives. The number of emotional processes increases with the complexity level of the application, limiting the processing capacity of the control processor to solve complex problems. Fortunately, the potential parallelism of emotional processes permits their execution in parallel. In this paper, different alternatives are used to exploit the parallelism of the emotional processes. On the one hand, we take advantage of the multiple cores and single instruction multiple data (SIMD) instruction sets already available on modern microprocessors. On the other hand, we also consider using a GPU. Different numbers of cores, with and without SIMD instructions enabled, and a GPU-based implementation are compared to analyze their suitability for robotic applications. The applications are set up taking into account different conditions and states of the robot. Experimental results show that a single processor can undertake most of the simple problems at a speed of 1 m/s. At a speed of 2 m/s, an 8-core processor permits solving most of the problems. For the most constrained problem, the solution is to combine SIMD instructions with multiple cores or to use a GPU co-processor to provide the needed computing power.
Keywords: graphics processing units; intelligent robots; mobile robots; multiprocessing systems; parallel processing; GPU co-processor; GPU-based implementation; SIMD instructions; application complexity level; complex problems; control architectures; control processor processing capacity; emotional process parallelism; microprocessors; multicore processer; multiple cores; robot behaviors; robotic emotional process performance improvement; single-instruction multiple data instructions sets; Appraisal; Complexity theory; Computer architecture; Graphics processing units; Instruction sets; Parallel processing; Robots; GPU; OpenMP; robotic systems (ID#: 16-9939)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336446&isnumber=7336120
C. Yount, “Vector Folding: Improving Stencil Performance via Multi-Dimensional SIMD-Vector Representation,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 865-870. doi: 10.1109/HPCC-CSS-ICESS.2015.27
Abstract: Stencil computation is an important class of algorithms used in a large variety of scientific-simulation applications. Modern CPUs are employing increasingly longer SIMD vector registers and operations to improve computational throughput. However, the traditional use of vectors to contain sequential data elements along one dimension is not always the most efficient representation, especially in the multicore and hyper-threaded context where caches are shared among many simultaneous compute streams. This paper presents a general technique for representing data in vectors for 2D and 3D stencils. This method reduces the number of memory accesses required by storing a small multi-dimensional block of data in each vector compared to the single dimension in the traditional approach. Experiments on an Intel Xeon Phi Coprocessor show performance speedups over traditional vectors ranging from 1.2x to 2.7x, depending on the problem size and stencil type. This technique is independent of and complementary to a variety of existing stencil-computation tuning algorithms such as cache blocking, loop tiling, and wavefront parallelization.
Keywords: data structures; multiprocessing systems; parallel processing; CPU; Intel Xeon Phi Coprocessor; hyper-threaded context; memory access; multidimensional SIMD-vector representation; multidimensional block; scientific-simulation application; sequential data element; stencil computation; stencil performance; vector folding; Jacobian matrices; Layout; Memory management; Registers; Shape; Three-dimensional displays; Intel; SIMD; Xeon Phi; high-performance computing; stencil; vectorization (ID#: 16-9940)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336272&isnumber=7336120
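The memory-access saving from folding can be illustrated by counting the distinct aligned vector blocks a stencil must load to update one output vector. The sketch below uses a radius-2 star (9-point) 2D stencil and compares a traditional 1x4 in-row vector against a folded 2x2 block; the stencil shape and counts are a simplified model of the idea, not the paper's measurements.

```python
# Count how many distinct aligned vector blocks a radius-2 star (9-point)
# 2D stencil touches when updating one interior output vector, for a given
# vector shape (block_h x block_w). Fewer blocks = fewer memory accesses.
STAR_R2 = [(0, 0), (-1, 0), (-2, 0), (1, 0), (2, 0),
           (0, -1), (0, -2), (0, 1), (0, 2)]

def blocks_touched(block_h, block_w, origin=(8, 8)):
    r0, c0 = origin                      # top-left corner of an interior block
    touched = set()
    for r in range(r0, r0 + block_h):
        for c in range(c0, c0 + block_w):
            for dr, dc in STAR_R2:
                nr, nc = r + dr, c + dc
                touched.add((nr // block_h, nc // block_w))
    return len(touched)

print(blocks_touched(1, 4))  # traditional 1x4 row vector: 7 block loads
print(blocks_touched(2, 2))  # folded 2x2 vector: 5 block loads
```

The folded layout wins because vertical neighbors of a 2x2 block fall into just one block above and one below, whereas a 1-row vector needs a separate row block for every vertical offset.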
M. Raab, “Global and Thread-Local Activation of Contextual Program Execution Environments,” Object/Component/Service-Oriented Real-Time Distributed Computing Workshops (ISORCW), 2015 IEEE International Symposium on, Auckland, 2015, pp. 34-41. doi: 10.1109/ISORCW.2015.52
Abstract: Ubiquitous computing often demands applications to be both customizable and context-aware: Users expect smart devices to adapt to the context and respect their preferences. Currently, these features are not well-supported in a multi-core embedded setup. The aim of this paper is to describe a tool that supports both thread-local and global context-awareness. The tool is based on code generation using a simple specification language and a library that persists the customizations. In a case study and benchmark we evaluate a web server application on embedded hardware. Our web server application uses contexts to represent user sessions, language settings, and sensor states. The results show that the tool has minimal overhead, is well-suited for ubiquitous computing, and takes full advantage of multi-core processors.
Keywords: Internet; multiprocessing systems; program compilers; programming environments; software libraries; specification languages; ubiquitous computing; Web server application; code generation; contextual program execution environments; global context-awareness; language settings; multicore processors; sensor states; smart devices; software library; specification language; thread-local context-awareness; user sessions; Accuracy; Context; Hardware; Instruction sets; Security; Synchronization; Web servers; context oriented programming; customization; multi-core; persistency (ID#: 16-9941)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160121&isnumber=7160107
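Thread-local activation of a contextual execution environment, as described above, can be sketched with Python's contextvars: each worker thread activates its own session context while the main thread keeps the global default. The variable and handler names are illustrative, not the paper's tool.

```python
import contextvars
import threading

# One context variable per contextual concern (e.g. a user session).
session = contextvars.ContextVar("session", default="anonymous")

def handle_request():
    # Context-aware code reads the active context rather than a global.
    return f"page rendered for {session.get()}"

results = {}

def worker(user):
    session.set(user)              # thread-local activation
    results[user] = handle_request()

threads = [threading.Thread(target=worker, args=(u,))
           for u in ("alice", "bob")]
for t in threads: t.start()
for t in threads: t.join()

print(results["alice"])   # page rendered for alice
print(results["bob"])     # page rendered for bob
print(session.get())      # main thread still sees the global default
```

Global (application-wide) activation would correspond to setting the variable's default, while thread-local activation is a per-thread set(), mirroring the paper's two activation scopes.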
C. E. Tuncali, G. Fainekos and Y. H. Lee, “Automatic Parallelization of Simulink Models for Multi-Core Architectures,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 964-971. doi: 10.1109/HPCC-CSS-ICESS.2015.232
Abstract: This paper addresses the problem of parallelizing existing single-rate Simulink models for embedded control applications on multi-core architectures, considering the communication cost between blocks on different CPU cores. Utilizing the block diagram of the Simulink model, we derive the dependency graph between the blocks. To solve the scheduling problem, we describe a Mixed Integer Linear Programming (MILP) formulation for optimally mapping the Simulink blocks to different CPU cores. Since the number of variables and constraints for the MILP solver grows exponentially with model size, solving this problem in a reasonable time becomes harder. To address this issue, we introduce a set of techniques for reducing the number of constraints in the MILP formulation. Using the proposed techniques, the MILP solver finds solutions that are closer to the optimal solution within a given time bound. We study the scalability and efficiency of our approach with synthetic benchmarks of randomly generated directed acyclic graphs. We also use the “Fault-Tolerant Fuel Control System” demo from Simulink and a Diesel engine controller from Toyota as case studies to demonstrate the applicability of our approach to real-world problems.
Keywords: control engineering computing; diesel engines; directed graphs; embedded systems; fault tolerant control; fuel systems; integer programming; linear programming; parallel architectures; processor scheduling; CPU cores; MILP formulation; MILP solver constraints; MILP solver variables; Simulink model parallelization problem; Toyota; block diagram; communication cost; dependency graph; diesel engine controller; embedded control applications; fault-tolerant fuel control system; mixed integer linear programming formulation; multicore architecture; randomly generated directed acyclic graphs; scheduling problem; synthetic benchmarks; Bismuth; Computational modeling; Job shop scheduling; Multicore processing; Optimization; Software packages; Multiprocessing; Simulink; embedded systems; model based development; optimization; task allocation (ID#: 16-9942)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336295&isnumber=7336120
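The mapping objective can be mimicked at toy scale with exhaustive search standing in for the MILP: assign each block to a core so as to minimize the maximum per-core load plus the cost of edges crossing cores. The four-block diagram, unit communication cost, and cost model below are invented for illustration and are much simpler than the paper's formulation.

```python
import itertools

# Hypothetical block diagram: execution times and data-flow edges.
exec_time = {"A": 2, "B": 2, "C": 2, "D": 2}
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]
COMM = 1          # cost charged per edge whose endpoints sit on different cores
CORES = 2

def cost(assign):  # assign: dict block -> core id
    load = [0] * CORES
    for block, core in assign.items():
        load[core] += exec_time[block]
    crossing = sum(COMM for u, v in edges if assign[u] != assign[v])
    return max(load) + crossing

blocks = sorted(exec_time)
best = min(
    (dict(zip(blocks, placement))
     for placement in itertools.product(range(CORES), repeat=len(blocks))),
    key=cost,
)
print(cost(best))   # 4 (balanced load) + 2 crossing edges = 6
```

With higher communication costs the optimum collapses onto one core, which is exactly the load-versus-communication trade-off the MILP formulation captures at scale.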
S. Li, J. Meng, L. Yu, J. Ma, T. Chen and M. Wu, “Buffer Filter: A Last-Level Cache Management Policy for CPU-GPGPU Heterogeneous System,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 266-271. doi: 10.1109/HPCC-CSS-ICESS.2015.290
Abstract: There is a growing trend towards heterogeneous systems that contain CPUs and GPGPUs on a single chip. Managing the various on-chip resources shared between CPUs and GPGPUs, however, is a big issue, and the last-level cache (LLC) is one of the most critical resources due to its impact on system performance. Well-known cache replacement policies such as LRU and DRRIP, designed for a CPU, are not well suited to heterogeneous systems, because the LLC is dominated by memory accesses from the thousands of threads of GPGPU applications, which can lead to significant performance degradation for the CPU. Another reason is that a GPGPU can tolerate memory latency when its number of active threads is sufficient, but those policies do not exploit this feature. In this paper we propose a novel shared LLC management policy for CPU-GPGPU heterogeneous systems called Buffer Filter, which takes advantage of the memory latency tolerance of GPGPUs. The policy restricts streaming requests from the GPGPU by adding a buffer to the memory system, vacating LLC space for cache-sensitive CPU applications. Although the GPGPU suffers some IPC loss, its memory latency tolerance preserves the basic performance of GPGPU applications. The experiments show that the Buffer Filter filters out 50% to 75% of all GPGPU streaming requests at the cost of a small GPGPU IPC decrease, and improves the hit rate of CPU applications by 2x to 7x.
Keywords: cache storage; graphics processing units; CPU-GPGPU heterogeneous system; buffer filter; cache replacement policies; cache-sensitive CPU applications; general-purpose graphics processing unit; last-level cache management policy; memory access; memory latency tolerance; on-chip resources; shared LLC management policy; Benchmark testing; Central Processing Unit; Instruction sets; Memory management; Multicore processing; Parallel processing; System performance; heterogeneous system; multicore; shared last-level cache (ID#: 16-9943)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336174&isnumber=7336120
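The effect the Buffer Filter exploits can be reproduced with a toy shared LRU cache: diverting single-use GPGPU streaming lines into a small side buffer keeps them from evicting the CPU's reusable working set. Cache size, buffer size, and the access trace below are invented for illustration, not the paper's configuration.

```python
from collections import OrderedDict, deque

class LRUCache:
    def __init__(self, capacity):
        self.capacity, self.lines = capacity, OrderedDict()
    def access(self, addr):
        if addr in self.lines:               # hit: refresh recency
            self.lines.move_to_end(addr)
            return True
        self.lines[addr] = None              # miss: insert, maybe evict LRU
        if len(self.lines) > self.capacity:
            self.lines.popitem(last=False)
        return False

def cpu_hit_rate(filter_gpu):
    llc = LRUCache(4)                        # tiny shared last-level cache
    side_buffer = deque(maxlen=2)            # buffer for streaming requests
    hits = refs = 0
    for i in range(40):
        hits += llc.access(i % 4)            # CPU loops over 4 hot lines
        refs += 1
        gpu_addr = 1000 + i                  # GPGPU streams unique lines
        if filter_gpu:
            side_buffer.append(gpu_addr)     # bypass the LLC entirely
        else:
            llc.access(gpu_addr)             # pollute the LLC
    return hits / refs

print(cpu_hit_rate(filter_gpu=False))  # 0.0 -- working set always evicted
print(cpu_hit_rate(filter_gpu=True))   # 0.9 -- only 4 cold misses remain
```

Without the filter the CPU's reuse distance (4 lines plus 4 interleaved GPU lines) exceeds the cache capacity, so every CPU reference misses; with the filter only the cold misses remain.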
N. S. V. Rao, D. Towsley, G. Vardoyan, B. W. Settlemyer, I. T. Foster and R. Kettimuthu, “Sustained Wide-Area TCP Memory Transfers over Dedicated Connections,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1603-1606. doi: 10.1109/HPCC-CSS-ICESS.2015.86
Abstract: Wide-area memory transfers between on-going computations and remote steering, analysis, and visualization sites can be utilized in several High-Performance Computing (HPC) scenarios. Dedicated network connections with high capacity, low loss rates, and little competing traffic are typically provisioned over current HPC infrastructures to support such transfers. To gain insight into these transfers, we collected throughput measurements for different versions of TCP between dedicated multi-core servers over emulated 10 Gbps connections with round-trip times (rtt) in the range 0-366 ms. Existing TCP models and measurements over shared links are well known to exhibit monotonically decreasing, convex throughput profiles as rtt increases. In sharp contrast, our measurements show two distinct regimes: a concave profile at lower rtts and a convex profile at higher rtts. We present analytical results that explain these regimes: (a) at lower rtt, the rapid throughput increase due to slow start leads to the concave profile, and (b) at higher rtt, the TCP congestion avoidance phase, with its slower dynamics, dominates. In both cases, however, we analytically show that throughput decreases with rtt, albeit at different rates, as confirmed by the measurements. These results provide practical TCP solutions for such transfers without the additional hardware required by InfiniBand or the additional software required by UDP-based solutions.
Keywords: network servers; parallel processing; sustainable development; telecommunication congestion control; telecommunication links; telecommunication traffic; transport protocols; wide area networks; HPC; concave profile; congestion avoidance; convex profile; dedicated connection; high-performance computing; multicore server; remote steering; shared link; sustained wide area TCP memory transfer; visualization site; Current measurement; Data transfer; Hardware; Linux; Software; Supercomputers; Throughput; TCP; dedicated connections; memory transfers; throughput measurements (ID#: 16-9944)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336397&isnumber=7336120
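The two regimes can be reproduced with a minimal slow-start model: the window doubles each rtt up to the path's bandwidth-delay product, so short-rtt transfers run near link capacity (the flat, concave region) while long-rtt transfers spend most rounds ramping up (the convex, roughly 1/rtt region). Capacity and transfer size are arbitrary illustration parameters, not the paper's testbed.

```python
# Model a fixed-size memory transfer over a dedicated link with slow start.
def throughput(rtt, capacity=1000.0, size=20000):
    """capacity in segments/sec; size in segments; returns segments/sec."""
    cap = max(1, int(capacity * rtt))    # window cap = bandwidth-delay product
    cwnd, sent, elapsed = 1, 0, 0.0
    while sent < size:
        sent += min(cwnd, size - sent)   # one round at the current window
        elapsed += rtt
        cwnd = min(cwnd * 2, cap)        # slow-start doubling, capped at BDP
    return size / elapsed

for rtt in (0.01, 0.1, 1.0, 10.0):
    print(f"rtt={rtt:5}s  throughput={throughput(rtt):7.1f} seg/s")
# Near 1000 seg/s at low rtt; falls roughly as 1/rtt at high rtt.
```

Checking second differences on this model shows the profile is nearly flat (concave side) at small rtt and convex at large rtt, qualitatively matching the two measured regimes.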
Y. Li, Y. Zhao and H. Gao, “Using Artificial Neural Network for Predicting Thread Partitioning in Speculative Multithreading,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 823-826. doi: 10.1109/HPCC-CSS-ICESS.2015.28
Abstract: Speculative multithreading (SpMT) is a thread-level automatic parallelization technique that accelerates sequential programs on multi-core processors. It partitions programs into multiple threads to be speculatively executed in the presence of ambiguous data and control dependences, while the correctness of the programs is guaranteed by hardware support. Thread granularity, the number of parallel threads, and partition positions are crucial to the performance improvement in SpMT, for they determine the amount of resources (CPU, memory, cache, waiting cycles, etc.) and affect the efficiency of every PE (processing element). Conventionally, these three parameters are determined by heuristic rules. Although partitioning threads with such rules is simple, they are a one-size-fits-all strategy and cannot guarantee an optimal thread partitioning. This paper proposes an Artificial Neural Network (ANN) based approach to learn and determine the thread partition strategy. Using the ANN-based thread partition approach, an unseen irregular program can obtain a stable, much higher speedup than with the heuristic rules (HR) based approach. On Prophet, a generic SpMT processor for evaluating the performance of multithreaded programs, the novel thread partitioning policy reaches an average speedup of 1.80 on a 4-core processor. Experiments show that the proposed approach obtains a significant increase in speedup, with the Olden benchmarks delivering a 2.36% better performance improvement than the traditional heuristic-rule-based approach. The results indicate that our approach finds the best partitioning scheme for each program and is more stable across programs.
Keywords: multi-threading; multiprocessing systems; neural nets; ANN-based thread partition approach; HR based approach; Olden benchmark; PE; Prophet; SpMT processor; artificial neural network; heuristic rules; multicore; multithreaded programs; one-size-fits-all strategy; parallel threads; partition position; processing element; sequential programs; speculative multithreading; thread granularity; thread partitioning policy; thread partitioning prediction; thread-level automatic parallelization technique; Cascading style sheets; Conferences; Cyberspace; Embedded software; High performance computing; Safety; Security; Machine learning; Prophet; thread partitioning (ID#: 16-9945)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336262&isnumber=7336120
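As a minimal stand-in for the paper's ANN, a single perceptron can learn a toy partition rule from two features (loop granularity and inter-thread dependence count). The training set and features are fabricated for illustration; the paper's actual model and feature set are richer.

```python
# Features: (granularity, dependence_count); label 1 = worth partitioning.
samples = [((5, 1), 1), ((6, 0), 1), ((1, 4), 0), ((2, 5), 0)]

w, b = [0.0, 0.0], 0.0
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                      # perceptron learning rule
    for x, label in samples:
        err = label - predict(x)         # 0 when correct, +/-1 when wrong
        w[0] += err * x[0]
        w[1] += err * x[1]
        b += err

print([predict(x) for x, _ in samples])  # [1, 1, 0, 0]
```

Because the toy data is linearly separable, the perceptron rule is guaranteed to converge; the learned boundary then predicts a partition decision for unseen programs, which is the role the paper's ANN plays.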
T. C. Xu, V. Leppänen, P. Liljeberg, J. Plosila and H. Tenhunen, “Trio: A Triple Class On-Chip Network Design for Efficient Multicore Processors,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 951-956. doi: 10.1109/HPCC-CSS-ICESS.2015.44
Abstract: We propose and analyse an on-chip interconnect design for improving the efficiency of multicore processors. Conventional interconnection networks are usually based on a single homogeneous network with uniform processing of all traffic. While this simplifies the design, it can create performance bottlenecks and limit system efficiency. We investigate the traffic patterns of several real-world applications. Based on a directory cache coherence protocol, we characterise and categorize the traffic in various respects. We discover that control and unicast packets dominate the network, while the percentages of data and multicast messages are relatively low. Furthermore, most of the invalidation messages are multicast messages, and most of the multicast messages are invalidation messages. The multicast invalidation messages usually have a higher number of destination nodes than other multicast messages. These observations lead to the proposed triple-class interconnect, in which a dedicated multicast-capable network is responsible for the control messages and the data messages are handled by another network. Using a detailed full-system simulation environment, the proposed design is compared with the homogeneous baseline network as well as two other network designs. Experimental results show that the average network latency and energy-delay product of the proposed design improve by 24.4% and 10.2%, respectively, compared with the baseline network.
Keywords: cache storage; multiprocessing systems; multiprocessor interconnection networks; network synthesis; network-on-chip; Trio; average network latency; dedicated multicast-capable network; destination nodes; directory cache coherence protocol; energy delay product; full system simulation environment; homogeneous baseline network; multicast invalidation messages; multicore processors; on-chip interconnect design; traffic pattern; triple class on-chip network design; unicast packets; Coherence; Multicore processing; Ports (Computers); Program processors; Protocols; System-on-chip; Unicast; cache; design; efficient; multicore; network-on-chip (ID#: 16-9946)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336293&isnumber=7336120
M. Shekhar, H. Ramaprasad and F. Mueller, “Evaluation of Memory Access Arbitration Algorithm on Tilera's TILEPro64 Platform,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1154-1159. doi: 10.1109/HPCC-CSS-ICESS.2015.245
Abstract: As real-time embedded systems demand more and more computing power under reasonable energy budgets, multi-core platforms are a viable option. However, deploying real-time applications on multi-core platforms introduces several predictability challenges. One of these challenges is bounding the latency of memory accesses issued by real-time tasks. This challenge is exacerbated as the number of cores, and hence the degree of resource sharing, increases. Over the last several years, researchers have proposed techniques to overcome it. In prior work, we proposed an arbitration policy for memory access requests over a network-on-chip. In this paper, we implement and evaluate variants of our arbitration policy on a real hardware platform, namely Tilera's TILEPro64.
Keywords: embedded systems; multiprocessing systems; network-on-chip; storage management; TILEPro64 platform; memory access arbitration algorithm; multicore platforms; network-on-chip; real-time embedded systems; Dynamic scheduling; Engines; Hardware; Instruction sets; Memory management; Real-time systems; System-on-chip (ID#: 16-9947)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336325&isnumber=7336120
R. Reuillon et al., “Tutorials,” High Performance Computing & Simulation (HPCS), 2015 International Conference on, Amsterdam, 2015, pp. 1-16. doi: 10.1109/HPCSim.2015.7237009
Abstract: These tutorials discuss the following: Model Exploration using OpenMOLE: A Workflow Engine for Large Scale Distributed Design of Experiments and Parameter Tuning; Getting Up To Speed On OpenMP 4.0; Science Gateways - Leveraging Modeling and Simulations in HPC Infrastructures via Increased Usability; Cloud Security, Access Control and Compliance; The EGI Federated Cloud; and Getting Started with the AVX-512 on the Multicore and Manycore Platforms.
Keywords: authorisation; cloud computing; multiprocessing systems; parallel processing; EGI federated cloud; HPC Infrastructure; OpenMOLE; access control; cloud security; high performance computing-and-simulation; manycore platform; multicore platform; Biological system modeling; Computational modeling; Distributed computing; High performance computing; Logic gates; Tuning; Tutorials (ID#: 16-9948)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237009&isnumber=7237005
M. D. Grammatikakis, P. Petrakis, A. Papagrigoriou, G. Kornaros and M. Coppola, “High-Level Security Services Based on a Hardware NoC Firewall Module,” Intelligent Solutions in Embedded Systems (WISES), 2015 12th International Workshop on, Ancona, 2015, pp. 73-78. doi: (not provided)
Abstract: Security services are typically based on deploying different types of modules, e.g. firewalls, intrusion detection or prevention systems, or cryptographic function accelerators. In this study, we focus on extending the functionality of a hardware Network-on-Chip (NoC) Firewall on the Zynq 7020 FPGA of a Zedboard. The NoC Firewall checks the physical address and rejects untrusted CPU requests to on-chip memory, thus protecting legitimate processes running on a multicore SoC from the injection of malicious instructions or data into shared memory. Based on a validated kernel-space Linux driver for the NoC Firewall, which is seen as a reconfigurable, memory-mapped device on top of an AMBA AXI4 interconnect fabric, we develop higher-layer security services that focus on physical address protection based on a set of rules. While our primary scenario concentrates on monitors and actors that protect against malicious (or corrupt) drivers, other interesting use cases, related to healthcare ethics, are also put into context.
Keywords: field programmable gate arrays; firewalls; multiprocessing systems; network-on-chip; AMBA AXI4 interconnect fabric; Zedboard; Zynq 7020 FPGA; corrupt drivers; hardware NoC firewall module; healthcare ethics; high-level security services; malicious drivers; malicious instructions; multicore SoC; network-on-chip; on-chip memory; physical address protection; reconfigurable memory-mapped device; shared memory; untrusted CPU requests; validated kernel-space Linux system driver; Field programmable gate arrays; Firewalls (computing); Hardware; Linux; Network interfaces; Registers; Linux driver; NoC; firewall; multicore SoC (ID#: 16-9949)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7356985&isnumber=7356973
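The rule-based physical-address check at the heart of such a firewall can be sketched in software, keeping in mind that the actual module evaluates its rules in FPGA hardware on every NoC request. The address ranges, initiator names, and deny-by-default policy below are invented for illustration.

```python
# Deny-by-default rule table: (initiator, low, high, allowed operations).
RULES = [
    ("cpu0", 0x1000, 0x1FFF, {"read", "write"}),   # private scratchpad
    ("cpu1", 0x1000, 0x1FFF, {"read"}),            # read-only view of it
    ("dma",  0x8000, 0x8FFF, {"write"}),           # DMA landing zone
]

def allowed(initiator, addr, op):
    """Reject any request not explicitly permitted by a rule."""
    return any(ini == initiator and low <= addr <= high and op in ops
               for ini, low, high, ops in RULES)

print(allowed("cpu0", 0x1800, "write"))   # True
print(allowed("cpu1", 0x1800, "write"))   # False: untrusted write rejected
print(allowed("dma",  0x1800, "write"))   # False: outside its allowed range
```

A kernel driver like the one described would expose such a table through memory-mapped registers, so higher-layer services can update rules at run time.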
F. Liu, Y. Yarom, Q. Ge, G. Heiser and R. B. Lee, “Last-Level Cache Side-Channel Attacks are Practical,” 2015 IEEE Symposium on Security and Privacy, San Jose, CA, 2015, pp. 605-622. doi: 10.1109/SP.2015.43
Abstract: We present an effective implementation of the Prime+Probe side-channel attack against the last-level cache. We measure the capacity of the covert channel the attack creates and demonstrate a cross-core, cross-VM attack on multiple versions of GnuPG. Our technique achieves a high attack resolution without relying on weaknesses in the OS or virtual machine monitor or on sharing memory between attacker and victim.
Keywords: cache storage; cloud computing; security of data; virtual machines; GnuPG; IaaS cloud computing; Prime+Probe side-channel attack; covert channel; cross-VM attack; cross-core attack; last-level cache side-channel attacks; virtual machine monitor; Cryptography; Indexes; Memory management; Monitoring; Multicore processing; Probes; Virtual machine monitors; ElGamal; covert channel; cross-VM side channel; last-level cache; side-channel attack (ID#: 16-9950)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163050&isnumber=7163005
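The eviction signal behind Prime+Probe can be modeled on a single 4-way LRU cache set: the attacker primes the set with its own lines, the victim may touch a line mapping to the same set, and the attacker's probe misses reveal that access. This toy models only the eviction logic; a real attack infers misses from access timing, which is not simulated here.

```python
class CacheSet:
    """One 4-way set-associative cache set with LRU replacement."""
    def __init__(self, ways=4):
        self.ways, self.lines = ways, []
    def access(self, line):
        if line in self.lines:            # hit: move to MRU position
            self.lines.remove(line)
            self.lines.append(line)
            return True
        if len(self.lines) == self.ways:  # miss: evict the LRU line
            self.lines.pop(0)
        self.lines.append(line)
        return False

def probe_misses(victim_runs):
    s = CacheSet()
    attacker = ["a0", "a1", "a2", "a3"]
    for line in attacker:                 # prime: fill the whole set
        s.access(line)
    if victim_runs:                       # victim touches a congruent line
        s.access("victim")
    return sum(not s.access(line) for line in attacker)  # probe

print(probe_misses(victim_runs=False))  # 0 -- set undisturbed, no signal
print(probe_misses(victim_runs=True))   # misses observed: victim detected
```

The paper's contribution is making this work against the large, physically indexed last-level cache across cores and VMs, which requires first discovering which addresses are congruent, a step omitted in this sketch.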
C. Li, W. Hu, P. Wang, M. Song and X. Cao, “A Novel Critical Path Based Routing Method for NOC,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1546-1551. doi: 10.1109/HPCC-CSS-ICESS.2015.159
Abstract: As more and more cores are integrated onto a single chip and connected by on-chip links, the network on chip (NOC) provides a new on-chip structure. Tasks are mapped to the cores on the chip and have communication requirements according to their relationships. When communication data are transmitted over the network, they must be given a suitable low-latency path to the target cores. In this paper, we propose a new routing method for NOC based on the static critical path. Multi-threaded tasks are analyzed first and their running paths are marked; the static critical path is then found from the lengths of these running paths. Messages on the critical path are marked as critical messages, and when messages arrive at the on-chip routers, critical messages are forwarded first according to their importance. The new routing method has been tested in a simulation environment. The experimental results show that it accelerates the transmission of critical messages and improves task performance.
Keywords: network routing; network-on-chip; NOC; chip structure; communication data transmission; communication requirements; critical messages; critical path; critical path based routing method latency; multithreads; network on chip; running path length; simulation environment; static critical path; target cores; task mapping; task performance improvement; Algorithm design and analysis; Message systems; Multicore processing; Program processors; Quality of service; Routing; System-on-chip; Critical Path; Network on Chip; Routing Method (ID#: 16-9951)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336388&isnumber=7336120
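Finding the static critical path of a task graph, so that its messages can be prioritized at the routers, reduces to a longest-path computation over a DAG weighted by execution time. The tiny graph below is illustrative, not from the paper.

```python
# Longest (critical) path through a task DAG weighted by execution time.
def critical_path(times, edges):
    preds = {t: [] for t in times}
    for u, v in edges:
        preds[v].append(u)
    dist, back = {}, {}
    def longest(t):                      # memoized longest path ending at t
        if t not in dist:
            best, best_pred = 0, None
            for p in preds[t]:
                if longest(p) > best:
                    best, best_pred = longest(p), p
            dist[t], back[t] = best + times[t], best_pred
        return dist[t]
    end = max(times, key=longest)        # task where the critical path ends
    path, node = [], end
    while node is not None:              # walk back-pointers to the start
        path.append(node)
        node = back[node]
    return path[::-1], dist[end]

times = {"A": 1, "B": 2, "C": 3, "D": 1}
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]
path, length = critical_path(times, edges)
print(path, length)                       # ['A', 'C', 'D'] 5
print(list(zip(path, path[1:])))          # critical messages to forward first
```

The edges along the returned path are the messages a router would mark critical and forward ahead of others.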
S. Zhang and S. Su, “Design of Parallel Algorithms for Super Long Integer Operation Based on Multi-Core CPUs,” 2015 11th International Conference on Computational Intelligence and Security (CIS), Shenzhen, 2015, pp. 335-339. doi: 10.1109/CIS.2015.88
Abstract: Super long integer operations are often used in cryptographic applications. However, cryptographic algorithms generally run on a computer with a single-core CPU, and the related computation is executed serially. In this paper, we investigate how to parallelize super long integer operations in a multi-core computer environment. The significance of this study lies in the fact that, with the spread of multi-core computing devices and the growth of multi-core computing power, the basic arithmetic of super long integers should run in parallel: blocking super long integers, running the data blocks on multiple cores' threads, converting the original serial execution into multi-core parallel computation, and formatting and storing the multi-thread results. Our experiments show that when the thread-scheduling time is shorter than the computation time, parallel algorithms execute faster; otherwise, serial algorithms are better. On the whole, parallel algorithms can utilize the computing ability of multi-core hardware more efficiently.
Keywords: cryptography; digital arithmetic; microprocessor chips; multiprocessing systems; parallel algorithms; cryptographic applications; data blocks; multicore CPU; multicore computer environment; multicore hardware; multicore parallel computation; parallel algorithm design; serial execution; single-core CPU; super long integer operation; super long integers; Algorithm design and analysis; Bismuth; Computers; Cryptography; Instruction sets; Operating systems; Parallel algorithms; algorithms; multi-core; multi-thread; parallel computation; super long integers (ID#: 16-9952)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7397102&isnumber=7396229
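The block-and-recombine scheme the abstract describes can be sketched as follows. The block width, the choice of scalar multiplication as the workload, and the thread pool are illustrative assumptions, not the authors' implementation:

```python
from concurrent.futures import ThreadPoolExecutor

BLOCK_BITS = 64  # assumed width of each data block

def split_blocks(x, bits=BLOCK_BITS):
    """Block a super long integer into fixed-width chunks (little-endian)."""
    mask = (1 << bits) - 1
    blocks = []
    while x:
        blocks.append(x & mask)
        x >>= bits
    return blocks or [0]

def parallel_scalar_mul(x, m, workers=4):
    """Multiply a super long integer by a scalar: each block is processed
    on its own thread, then the partial products are recombined with shifts."""
    blocks = split_blocks(x)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(lambda b: b * m, blocks))
    result = 0
    for i, p in enumerate(partials):
        result += p << (i * BLOCK_BITS)
    return result

# Sanity check against Python's built-in arbitrary-precision arithmetic.
x = 123456789 ** 50
assert parallel_scalar_mul(x, 97) == x * 97
```

As the abstract notes, the payoff depends on the balance between per-block computation and thread-scheduling overhead; for small operands a serial loop wins.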
N. Khalilzad, H. R. Faragardi and T. Nolte, “Towards Energy-Aware Placement of Real-Time Virtual Machines in a Cloud Data Center,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1657-1662. doi: 10.1109/HPCC-CSS-ICESS.2015.22
Abstract: Cloud computing is an evolving paradigm that is being adopted for a wide variety of applications. However, cloud infrastructures must be able to fulfill application requirements before cloud solutions can be adopted. Cloud infrastructure providers communicate the characteristics of their services to their customers through Service Level Agreements (SLAs). In order for a real-time application to use cloud technology, cloud infrastructure providers have to be able to provide timing guarantees in the SLAs. In this paper, we present our ongoing work on a cloud solution in which periodic tasks are provided as a service in the Software as a Service (SaaS) model. Tasks belonging to a given application are mapped to a Virtual Machine (VM). We also study the problem of VM placement on a cloud infrastructure and propose a placement mechanism that minimizes the energy consumption of the data center by consolidating VMs onto a minimum number of servers while respecting the timing requirements of the virtual machines.
Keywords: cloud computing; computer centres; contracts; power aware computing; timing; virtual machines; virtual storage; SLA; SaaS; VM placement; application requirements; cloud data center; cloud infrastructure; energy aware placement; energy consumption minimisation; real-time virtual machine; service level agreement; software as a service; timing guarantee; Cloud computing; Energy consumption; Multicore processing; Power demand; Real-time systems; Servers; Timing; Real-time cloud; VM placement; energy aware allocation (ID#: 16-9953)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336407&isnumber=7336120
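The consolidation idea can be illustrated with a minimal first-fit-decreasing sketch, using a per-server utilization bound as a stand-in for the paper's real-time schedulability check; the function name and capacity model are assumptions:

```python
def place_vms(vm_utils, capacity=1.0):
    """First-fit decreasing: consolidate VMs onto as few servers as possible
    while keeping each server's total utilization (a crude proxy for a
    real-time schedulability bound) within capacity."""
    servers = []  # each entry: list of VM utilizations hosted on that server
    for u in sorted(vm_utils, reverse=True):  # largest VMs first
        for s in servers:
            if sum(s) + u <= capacity:  # fits on an already-open server
                s.append(u)
                break
        else:
            servers.append([u])  # open a new server
    return servers

# Five VMs consolidated onto two fully packed servers.
assert len(place_vms([0.5, 0.5, 0.4, 0.3, 0.3])) == 2
```

Fewer open servers means more machines can be powered down, which is the energy-minimization lever the abstract refers to.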
J. Ye, S. Li and T. Chen, “Shared Write Buffer to Support Data Sharing Among Speculative Multi-Threading Cores,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 835-838. doi: 10.1109/HPCC-CSS-ICESS.2015.287
Abstract: Speculative Multi-threading (SpMT), also known as Thread Level Speculation (TLS), is a prominent research direction in the automatic extraction of thread level parallelism (TLP), and it is increasingly appealing in the multi-core and many-core era. SpMT threads are extracted from a single thread and are tightly coupled by data dependences. Traditional private L1 caches with a coherence mechanism are ill suited to such intense data sharing among SpMT threads. We propose a Shared Write Buffer (SWB) that resides in parallel with the private L1 caches but has much smaller capacity and a short access delay. When a core writes a datum, it writes to the SWB first; when it reads a datum, it reads from the SWB as well as from the L1. Because the SWB is shared among the cores, it can often return a datum more quickly than the L1, which may need to go through a coherence process to load it. In this way the SWB improves the performance of SpMT inter-core data sharing and mitigates the coherence overhead.
Keywords: cache storage; multi-threading; multiprocessing systems; SWB; SpMT intercore data sharing; SpMT thread extraction; TLS; access delay; automatic TLP extraction; coherence overhead mitigation; data dependences; data sharing; datum; performance improvement; private L1 caches; shared write buffer; speculative multithreading cores; thread level parallelism; thread level speculation; Coherence; Delays; Instruction sets; Message systems; Multicore processing; Protocols; Cache; Multi-Core; Shared Write Buffer; SpMT; Speculative Multi-Threading (ID#: 16-9954)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336265&isnumber=7336120
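The read/write path the abstract describes can be shown with a toy model: writes land in the shared buffer first, and reads probe it before falling back to the slower coherence-bound path. The FIFO eviction policy and the capacity are illustrative assumptions:

```python
class SharedWriteBuffer:
    """Toy model of a small buffer shared by all cores, sitting alongside
    the private L1 caches."""
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.entries = {}  # address -> value; insertion order gives FIFO eviction

    def write(self, addr, value):
        """Writes go to the SWB first (the core also writes its L1, not modeled)."""
        if addr not in self.entries and len(self.entries) >= self.capacity:
            self.entries.pop(next(iter(self.entries)))  # evict oldest entry
        self.entries[addr] = value

    def read(self, addr, memory):
        """Probe the SWB; on a miss, fall back to the L1/memory path,
        modeled here as a plain dict lookup."""
        return self.entries.get(addr, memory.get(addr))
```

A hit in the shared buffer avoids the coherence round trip that a private-cache miss would incur, which is where the speedup for tightly coupled SpMT threads comes from.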
D. Münch, M. Paulitsch, O. Hanka and A. Herkersdorf, “MPIOV: Scaling Hardware-Based I/O Virtualization for Mixed-Criticality Embedded Real-Time Systems Using Non Transparent Bridges to (Multi-Core) Multi-Processor Systems,” 2015 Design, Automation & Test in Europe Conference & Exhibition (DATE), Grenoble, 2015, pp. 579-584. doi: (not provided)
Abstract: Safety-critical systems consolidating multiple functionalities of different criticality (so-called mixed-criticality systems) require separation between these functionalities to assure safety and security properties. Performance-hungry, safety-critical applications (such as a radar processing system steering an autonomous aircraft) may demand an embedded high-performance computing cluster of more than one (multi-core) processor. This paper presents the Multi-Processor I/O Virtualization (MPIOV) concept to enable hardware-based Input/Output (I/O) virtualization, or sharing with separation, among multiple (multi-core) processors in (mixed-criticality) embedded real-time systems, which usually lack separation mechanisms such as an Input/Output Memory Management Unit (IOMMU). The concept uses a Non-Transparent Bridge (NTB) to connect each processing host to the management host, checking the target address and source/origin ID to decide whether or not to block a transaction. It is a standardized, portable, non-proprietary, platform-independent spatial separation solution that does not require an IOMMU in the processor. Furthermore, the concept sketches an approach for PCI Express (PCIe)-based systems to enable sharing of up to 2048 (virtual) functions per endpoint while remaining compatible with the plain PCIe standard. A practical evaluation demonstrates that the performance impact (transfer time, transfer rate) is negligible (about 0.01%) compared to a system without separation.
Keywords: multiprocessing systems; parallel processing; safety-critical software; virtualisation; IOMMU; MPIOV; NTB; PCI express; embedded high-performance computing cluster; hardware-based I-O virtualization; input-output memory management unit; mixed-criticality embedded real-time system; multiprocessor I/O virtualization; multiprocessor system; nontransparent bridge; safety-critical systems; Aerospace electronics; Bridges; Memory management; Multicore processing; Real-time systems; Standards; Virtualization; IOMPU; hardware-based I/O virtualization; mixed-criticality systems; multi-core; multiprocessor; non-transparent bridge (NTB); real-time embedded systems; spatial separation (ID#: 16-9955)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092453&isnumber=7092347
P. Sun, S. Chandrasekaran, S. Zhu and B. Chapman, “Deploying OpenMP Task Parallelism on Multicore Embedded Systems with MCA Task APIs,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 843-847. doi: 10.1109/HPCC-CSS-ICESS.2015.88
Abstract: Heterogeneous multicore embedded systems, with cores of varying types and capacities, are growing rapidly. Programming these devices and exploiting the hardware has been a real challenge: the prevailing programming models and their execution environments are typically designed for general-purpose computation and are mostly too heavyweight for resource-constrained embedded systems. Embedded programmers are still expected to use low-level, proprietary APIs, making the resulting software less and less portable. These challenges motivated us to explore how OpenMP, a high-level directive-based model, could be used for embedded platforms. In this paper, we translate OpenMP to the Multicore Association Task Management API (MTAPI), a standard API for leveraging task parallelism on embedded platforms. Our results demonstrate that the performance of our OpenMP runtime library is comparable to state-of-the-art task-parallel solutions. We believe this approach provides a portable solution, since it abstracts the low-level details of the hardware and no longer depends on vendor-specific APIs.
Keywords: application program interfaces; embedded systems; multiprocessing systems; parallel processing; MCA; MTAPI; OpenMP runtime library; OpenMP task parallelism; heterogeneous multicore embedded system; high-level directive-based model; multicore association task management API; multicore embedded system; resource-constrained embedded system; vendor-specific API; Computational modeling; Embedded systems; Hardware; Multicore processing; Parallel processing; Programming; Heterogeneous Multicore Embedded Systems; OpenMP; Parallel Computing (ID#: 16-9956)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336267&isnumber=7336120
P. Bogdan and Y. Xue, “Mathematical Models and Control Algorithms for Dynamic Optimization of Multicore Platforms: A Complex Dynamics Approach,” Computer-Aided Design (ICCAD), 2015 IEEE/ACM International Conference on, Austin, TX, 2015, pp. 170-175. doi: 10.1109/ICCAD.2015.7372566
Abstract: The continuous increase in integration densities has contributed to a shift from Dennard scaling to a parallelization era of multi-/many-core chips. However, for multicores to rapidly percolate application domains from consumer multimedia to high-end functionality (e.g., security, healthcare, big data), power/energy and thermal efficiency challenges must be addressed. Increased power densities can raise on-chip temperatures, which in turn decrease chip reliability and performance and increase cooling costs. For a dependable multicore system, dynamic optimization (power/thermal management) has to rely on accurate yet low-complexity workload models. Towards this end, we present a class of mathematical models that generalize prior approaches and capture their time dependence and long-range memory with minimum complexity. This modeling framework serves as the basis for defining new, efficient control and prediction algorithms for hierarchical dynamic power management of future data-centers-on-a-chip.
Keywords: multiprocessing systems; power aware computing; temperature; Dennard scaling; chip performance; chip reliability; complex dynamics approach; control algorithm; data-centers-on-a-chip; dynamic optimization; hierarchical dynamic power management; many-core chips; multicore chips; multicore platform; on-chip temperature; power density; power management; prediction algorithm; thermal management; Autoregressive processes; Heuristic algorithms; Mathematical model; Measurement; Multicore processing; Optimization; Stochastic processes (ID#: 16-9957)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7372566&isnumber=7372533
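As one concrete instance of a low-complexity workload model, an autoregressive predictor can be fit by ordinary least squares each management epoch and used to forecast the next workload sample. This sketch is illustrative only; the authors' models additionally capture long-range memory, which a plain AR(p) does not:

```python
def ar_predict(history, order=2):
    """Fit an AR(p) model x_t ~ sum_k a_k * x_{t-k} by least squares
    (normal equations solved with Gaussian elimination) and predict the
    next value. Pure-Python, O(p^2) per epoch -- cheap enough for a
    dynamic power manager to run online."""
    n = len(history)
    rows = [history[t - order:t][::-1] for t in range(order, n)]  # lagged regressors
    y = history[order:]
    # Normal equations: (R^T R) a = R^T y
    A = [[sum(r[i] * r[j] for r in rows) for j in range(order)] for i in range(order)]
    b = [sum(r[i] * yt for r, yt in zip(rows, y)) for i in range(order)]
    # Forward elimination.
    for i in range(order):
        piv = A[i][i]
        for j in range(i + 1, order):
            f = A[j][i] / piv
            A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
            b[j] -= f * b[i]
    # Back substitution.
    coef = [0.0] * order
    for i in reversed(range(order)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, order))) / A[i][i]
    recent = history[-order:][::-1]
    return sum(c * x for c, x in zip(coef, recent))

# A geometric decay x_t = 0.5 * x_{t-1} is recovered exactly by AR(1).
assert abs(ar_predict([16.0, 8.0, 4.0, 2.0, 1.0], order=1) - 0.5) < 1e-9
```

A power manager would compare the predicted workload against thermal/power budgets before picking the next operating point.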
J. Tian, W. Hu, C. Li, T. Li and W. Luo, “Multi-Thread Connection Based Scheduling Algorithm for Network on Chip,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1473-1478. doi: 10.1109/HPCC-CSS-ICESS.2015.160
Abstract: More and more cores are integrated onto a single chip to improve performance and reduce CPU power consumption without increasing the clock frequency. The cores are connected by links and organized as a network, called a network on chip (NOC), a promising paradigm that improves CPU performance without increased power consumption. However, an open problem remains: how to schedule threads onto the different cores to take full advantage of the NOC. In this paper, we propose a new multi-thread scheduling algorithm based on thread connectivity for NOC. The connection relationships among threads are analyzed and the threads are divided into thread sets; at the same time, the network topology of the NOC is analyzed, its connection relationships are captured in the NOC model, and the cores are divided into regions. A correspondence between thread sets and core regions is established according to their features, and the scheduling algorithm maps each thread set to its corresponding core region. Within a region, the threads of a set are scheduled via appropriate approaches. Experiments show that the proposed algorithm improves program performance and enhances the utilization of NOC cores.
Keywords: multi-threading; network theory (graphs); network-on-chip; performance evaluation; power aware computing; processor scheduling; CPU; NOC core; multithread connection based scheduling; multithread connection-based scheduling algorithm; network topology; power consumption; Algorithm design and analysis; Heuristic algorithms; Instruction sets; Multicore processing; Network topology; Scheduling algorithms; System-on-chip; Algorithm; Network on Chip; Scheduling; Thread Connection (ID#: 16-9958)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336376&isnumber=7336120
Y. Li, J. Niu, M. Qiu and X. Long, “Optimizing Tasks Assignment on Heterogeneous Multi-Core Real-Time Systems with Minimum Energy,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 577-582. doi: 10.1109/HPCC-CSS-ICESS.2015.126
Abstract: The main challenge for embedded real-time systems, especially for mobile devices, is the trade-off between system performance and energy efficiency. By studying the relationship between energy consumption, execution time, and completion probability of tasks on heterogeneous multi-core architectures, we propose an Accelerated Search algorithm based on dynamic programming that finds a combination of task schemes which can be completed in a given time, with a given confidence probability, while consuming the minimum possible energy. We adopt a DAG (Directed Acyclic Graph) to represent the precedence relations between tasks and develop a Minimum-Energy Model to find the optimal task assignment. Heterogeneous multi-core architectures can execute tasks at different voltage levels using DVFS, which leads to different execution times and energy consumption. The experimental results demonstrate that our approach outperforms state-of-the-art algorithms in this field (maximum improvement of 24.6%).
Keywords: directed graphs; dynamic programming; embedded systems; energy conservation; energy consumption; mobile computing; multiprocessing systems; power aware computing; probability; search problems; DAG; DVFS; accelerated search algorithm; confidence probability; directed acyclic graph; embedded real-time systems; energy efficiency; execution time; heterogeneous multicore real-time systems; minimum energy model; mobile devices; precedent relation; system performance; task assignment optimization; task completion probability; voltage level; Algorithm design and analysis; Dynamic programming; Energy consumption; Heuristic algorithms; Multicore processing; Real-time systems; Time factors; heterogeneous multi-core real-time system; minimum energy; probability statistics; tasks assignment (ID#: 16-9959)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336220&isnumber=7336120
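The knapsack-style dynamic program behind this kind of assignment problem can be sketched as follows. This simplified version assumes a serial chain of tasks with integer execution times, whereas the paper handles a full DAG with completion probabilities; each task offers several (time, energy) options, one per core/voltage level:

```python
def min_energy_assignment(tasks, deadline):
    """DP over cumulative finish time: tasks is a list where each element is
    a list of (time, energy) options; pick one option per task so the total
    time stays within deadline at minimum total energy.
    Returns the minimum energy, or None if the deadline is infeasible."""
    INF = float('inf')
    dp = [0.0] + [INF] * deadline  # dp[t] = min energy to finish by time t
    for options in tasks:
        nxt = [INF] * (deadline + 1)
        for t in range(deadline + 1):
            if dp[t] == INF:
                continue
            for time, energy in options:
                if t + time <= deadline:
                    nxt[t + time] = min(nxt[t + time], dp[t] + energy)
        dp = nxt
    best = min(dp)
    return best if best != INF else None

# Two tasks, each with a fast/costly and a slow/frugal option; with a
# deadline of 4 the cheapest feasible mix costs 6.0 units of energy.
assert min_energy_assignment([[(1, 5.0), (2, 3.0)], [(1, 4.0), (3, 1.0)]], 4) == 6.0
```

The DVFS trade-off shows up directly in the options list: lower voltage means a longer time but a smaller energy term.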
A. S. S. Mohamed, A. A. El-Moursy and H. A. H. Fahmy, “Real-Time Memory Controller for Embedded Multi-Core System,” 2015 IEEE 17th International Conference on High Performance Computing and Communications, 2015 IEEE 7th International Symposium on Cyberspace Safety and Security, and 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 839-842. doi: 10.1109/HPCC-CSS-ICESS.2015.133
Abstract: Modern chip multiprocessors (CMPs) are in increasing demand because of their high performance, especially in real-time embedded systems. At the same time, bounded latencies have become vital to guarantee high performance and fairness for applications running on CMP cores. We propose a new memory controller (MCES) that prioritizes cores and assigns them defined quotas within a unified epoch. Our approach works across several generations of double data rate DRAM (DDR DRAM). MCES achieves an overall performance improvement of up to 35% for a 4-core system.
Keywords: DRAM chips; embedded systems; multiprocessing systems; CMP cores; DDR-DRAM; MCES; bounded latencies; chip multicores; double-data-rate DRAM generation; embedded multicore system; real-time embedded systems; real-time memory controller; unified epoch; Arrays; Interference; Multicore processing; Random access memory; Real-time systems; Scheduling; Time factors; CMPs; Memory Controller; Real-Time (ID#: 16-9960)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336266&isnumber=7336120
F. M. M. u. Islam and M. Lin, “A Framework for Learning Based DVFS Technique Selection and Frequency Scaling for Multi-Core Real-Time Systems,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 721-726. doi: 10.1109/HPCC-CSS-ICESS.2015.313
Abstract: Multi-core processors have become very popular in recent years due to their higher throughput and lower energy consumption compared with unicore processors, and they are widely used in portable devices and real-time systems. Despite this enormous potential, limited battery capacity restricts what these systems can do, so improving system-level energy management remains a major research area. To reduce energy consumption, dynamic voltage and frequency scaling (DVFS) is commonly used in modern processors. Previously, we used reinforcement learning to scale voltage and frequency based on task execution characteristics, and we designed a learning-based method to choose a suitable DVFS technique to apply in different states. In this paper, we propose a generalized framework that integrates these two approaches for real-time systems on multi-core processors. The framework is generalized in the sense that it can work with different scheduling policies and existing DVFS techniques.
Keywords: learning (artificial intelligence); multiprocessing systems; power aware computing; real-time systems; dynamic voltage and frequency scaling; learning-based DVFS technique selection; multicore processor; multicore real-time system; reinforcement learning-based method; system level energy management; unicore processor; Energy consumption; Heuristic algorithms; Multicore processing; Power demand; Program processors; Real-time systems; Vehicle dynamics; Dynamic voltage and frequency scaling; Energy efficiency; Machine learning; Multi-core processors; time systems (ID#: 16-9961)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336243&isnumber=7336120
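A tabular reinforcement-learning loop for frequency selection, in the spirit of the learning-based approach the abstract describes, might look like the following. The state space, reward shape, and workload model here are entirely synthetic, illustrative assumptions, not the authors' formulation:

```python
import random

def dvfs_q_learning(freqs, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning sketch: states are coarse utilization levels,
    actions are frequency settings; the reward penalizes energy (~f^2)
    and, heavily, deadline misses."""
    states = range(3)  # low / medium / high utilization
    Q = {(s, f): 0.0 for s in states for f in freqs}
    s = 0
    for _ in range(episodes):
        if random.random() < eps:                      # explore
            f = random.choice(freqs)
        else:                                          # exploit current policy
            f = max(freqs, key=lambda a: Q[(s, a)])
        work = s + 1                                   # synthetic: more utilization, more work
        missed = work / f > 1.0                        # crude deadline check
        reward = -(f ** 2) - (100 if missed else 0)
        s2 = random.choice(list(states))               # synthetic workload transition
        Q[(s, f)] += alpha * (reward + gamma * max(Q[(s2, a)] for a in freqs) - Q[(s, f)])
        s = s2
    return Q
```

Over time, the learned policy should favor low frequencies when utilization is low (saving energy) and high frequencies when a miss would otherwise occur, which is the trade-off DVFS management navigates.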
M. A. Aguilar, J. F. Eusse, R. Leupers, G. Ascheid and M. Odendahl, “Extraction of Kahn Process Networks from While Loops in Embedded Software,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1078-1085. doi: 10.1109/HPCC-CSS-ICESS.2015.158
Abstract: Many embedded applications, such as multimedia, signal processing, and wireless communications, exhibit a streaming processing behavior. In order to take full advantage of modern multi- and many-core embedded platforms, these applications have to be parallelized by describing them in a parallel Model of Computation (MoC). One of the most prominent MoCs is the Kahn Process Network (KPN), as it can express multiple forms of parallelism and is suitable for efficient mapping and scheduling onto parallel embedded platforms. However, describing streaming applications manually as a KPN is a challenging task, especially since they spend most of their execution time in loops with an unbounded number of iterations. These loops are often implemented as while loops, which are difficult to analyze. In this paper, we present an approach to guide the derivation of KPNs from embedded streaming applications dominated by multiple types of while loops. We evaluate the applicability of our approach on a commercial embedded platform with eight DSP cores using realistic benchmarks. Results measured on the platform show that we are able to speed up sequential benchmarks by a factor of 4.3x on average and up to 7.7x in the best case. Additionally, to evaluate the effectiveness of our approach, we compared it against a state-of-the-art parallelization framework.
Keywords: digital signal processing chips; embedded systems; parallel processing; program control structures; DSP core embedded platform; KPN; Kahn process network extraction; MoC; embedded software; embedded streaming applications; execution time; many-core embedded platforms; multicore embedded platforms; parallel embedded platforms; parallel model-of-computation; parallelized applications; sequential benchmarks; while loops; Computational modeling; Data mining; Long Term Evolution; Parallel processing; Runtime; Switches; Uplink; DSP; Kahn Process Networks; MPSoCs; Parallelization; While Loops (ID#: 16-9962)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336312&isnumber=7336120
V. Gunes and T. Givargis, “XGRID: A Scalable Many-Core Embedded Processor,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1143-1146. doi: 10.1109/HPCC-CSS-ICESS.2015.99
Abstract: The demand for compute cycles in embedded systems is rapidly increasing. In this paper, we introduce the XGRID embedded many-core system-on-chip architecture. XGRID makes use of a novel, FPGA-like, programmable interconnect infrastructure, offering scalability and deterministic communication using hardware-supported message passing among cores. Our experiments with XGRID are very encouraging. A number of parallel benchmarks are evaluated on the XGRID processor using the application mapping technique described in this work. We validate our scalability claim by running our benchmarks on XGRID configurations with varying core counts, and we validate our architectural assertions by comparing XGRID against the Graphite many-core architecture, showing that XGRID outperforms Graphite.
Keywords: embedded systems; field programmable gate arrays; multiprocessing systems; parallel architectures; system-on-chip; FPGA-like, programmable interconnect infrastructure; XGRID embedded many-core system-on-chip architecture; application mapping technique; compute cycles; core count; deterministic communication; hardware supported message passing; parallel benchmarks; scalable many-core embedded processor; Benchmark testing; Communication channels; Discrete cosine transforms; Field programmable gate arrays; Multicore processing; Switches; Embedded Processors; Many-core; Multi-core; System-on-Chip Architectures (ID#: 16-9963)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336323&isnumber=7336120
K. Rushaidat, L. Schwiebert, B. Jackman, J. Mick and J. Potoff, “Evaluation of Hybrid Parallel Cell List Algorithms for Monte Carlo Simulation,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1859-1864. doi: 10.1109/HPCC-CSS-ICESS.2015.260
Abstract: This paper describes efficient, scalable parallel implementations of the conventional cell list method and a modified cell list method to calculate the total system intermolecular Lennard-Jones force interactions in the Monte Carlo Gibbs ensemble. We targeted this part of the Gibbs ensemble for optimization because it is the most computationally demanding part of the force interactions in the simulation, as it involves all the molecules in the system. The modified cell list implementation reduces the number of particles that are outside the interaction range by making the cells smaller, thus reducing the number of unnecessary distance evaluations. Evaluation of the two cell list methods is done using a hybrid MPI+OpenMP approach and a hybrid MPI+CUDA approach. The cell list methods are evaluated on a small cluster of multicore CPUs, Intel Phi coprocessors, and GPUs. The performance results are evaluated using different combinations of MPI processes, threads, and problem sizes.
Keywords: Monte Carlo methods; application program interfaces; cellular biophysics; graphics processing units; intermolecular forces; materials science computing; message passing; multi-threading; parallel architectures; GPU; Intel Phi coprocessors; Monte Carlo Gibbs ensemble; Monte Carlo simulation; conventional-cell list method; distance evaluations; force interactions; hybrid MPI-plus-CUDA approach; hybrid MPI-plus-OpenMP approach; hybrid parallel cell list algorithm evaluation; modified cell list implementation; multicore CPU; performance evaluation; scalable-parallel implementations; total system intermolecular Lennard-Jones force interactions; Computational modeling; Force; Graphics processing units; Microcell networks; Monte Carlo methods; Solid modeling; Cell List; Gibbs Ensemble; Hybrid Parallel Architectures; Monte Carlo Simulations (ID#: 16-9964)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336443&isnumber=7336120
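The conventional cell list idea in the abstract can be sketched in serial form (the hybrid MPI/OpenMP/CUDA parallelization is out of scope here): particles are binned into cells of side at least the cutoff radius, so a pair search only visits the 27 surrounding cells instead of evaluating every distance. The cell geometry and the minimum-image convention below are standard, but the function names are ours:

```python
def build_cell_list(positions, box, rcut):
    """Bin particles (coordinates in [0, box)) into cells of side >= rcut."""
    ncell = max(1, int(box // rcut))
    side = box / ncell
    cells = {}
    for idx, (x, y, z) in enumerate(positions):
        key = (int(x / side) % ncell, int(y / side) % ncell, int(z / side) % ncell)
        cells.setdefault(key, []).append(idx)
    return cells, ncell

def lj_energy(positions, box, rcut, eps=1.0, sigma=1.0):
    """Total Lennard-Jones energy using the cell list; pairs beyond rcut are
    skipped without evaluating distances against every particle."""
    cells, ncell = build_cell_list(positions, box, rcut)
    energy, seen = 0.0, set()
    for (cx, cy, cz), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    nb = ((cx + dx) % ncell, (cy + dy) % ncell, (cz + dz) % ncell)
                    for i in members:
                        for j in cells.get(nb, []):
                            if i >= j or (i, j) in seen:
                                continue  # count each pair once
                            seen.add((i, j))
                            # Minimum-image squared distance under periodic boundaries.
                            r2 = sum(min(abs(a - b), box - abs(a - b)) ** 2
                                     for a, b in zip(positions[i], positions[j]))
                            if r2 < rcut * rcut:
                                s6 = (sigma * sigma / r2) ** 3
                                energy += 4 * eps * (s6 * s6 - s6)
    return energy
```

The modified cell list in the paper shrinks the cells below rcut so fewer out-of-range particles are visited; the skeleton above is the conventional variant it is compared against.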
J. C. Beard and R. D. Chamberlain, “Run Time Approximation of Non-Blocking Service Rates for Streaming Systems,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 792-797. doi: 10.1109/HPCC-CSS-ICESS.2015.64
Abstract: Stream processing is a compute paradigm that promises safe and efficient parallelism. Its realization requires optimization of multiple parameters such as kernel placement and communications. Most techniques to optimize streaming systems use queueing network models or network flow models, which often require estimates of the execution rate of each compute kernel. This is known as the non-blocking “service rate” of the kernel within the queueing literature. Current approaches to divining service rates are static. To maintain a tuned application during execution (while online) with non-static workloads, dynamic instrumentation of service rate is highly desirable. Our approach enables online service rate monitoring for streaming applications under most conditions, obviating the need to rely on steady state predictions for what are likely non-steady state phenomena. This work describes an algorithm to approximate non-blocking service rate, its implementation in the open source RaftLib framework, and validates the methodology using streaming applications on multi-core hardware.
Keywords: data flow computing; multiprocessing systems; public domain software; compute kernel execution rate; dynamic instrumentation; kernel communications; kernel placement; multicore hardware; multiple parameter optimization; nonblocking service rate approximation; nonstatic workloads; nonsteady state phenomena; online service rate monitoring; open source RaftLib framework; parallelism; run-time approximation; service rate; steady state predictions; stream processing; streaming system optimization; streaming systems; Approximation methods; Computational modeling; Instruments; Kernel; Monitoring; Servers; Timing; instrumentation; parallel processing; raftlib (ID#: 16-9965)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336255&isnumber=7336120
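The notion of a non-blocking service rate can be illustrated with a small online monitor that averages only over windows in which the kernel did not block on its queues. This is an illustrative sketch, not the RaftLib instrumentation itself:

```python
from collections import deque

class RateMonitor:
    """Online approximation of a compute kernel's non-blocking service rate:
    discard samples taken while the kernel was blocked on an empty input or
    full output queue, and average items/second over a sliding window."""
    def __init__(self, window=32):
        self.samples = deque(maxlen=window)

    def record(self, items, busy_seconds, blocked):
        if not blocked and busy_seconds > 0:
            self.samples.append(items / busy_seconds)

    def service_rate(self):
        """Current estimate, or None before any non-blocked sample arrives."""
        return sum(self.samples) / len(self.samples) if self.samples else None
```

A streaming runtime can feed these per-window estimates into its queueing-network model instead of relying on static, steady-state service-rate predictions.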
M. Kiperberg, A. Resh and N. J. Zaidenberg, “Remote Attestation of Software and Execution-Environment in Modern Machines,” Cyber Security and Cloud Computing (CSCloud), 2015 IEEE 2nd International Conference on, New York, NY, 2015, pp. 335-341. doi: 10.1109/CSCloud.2015.52
Abstract: Research on network security concentrates mainly on securing the communication channels between two endpoints, which is insufficient if the authenticity of one of the endpoints cannot be determined with certainty. Previously presented methods allow one endpoint, the authentication authority, to authenticate another remote machine. These methods are inadequate for modern machines, which have multiple processors, introduce virtualization extensions, exhibit a greater variety of side effects, and suffer from nondeterminism. This paper addresses these advances of modern machines with respect to the method presented by Kennell. The authors describe how a remote attestation procedure, involving a challenge, needs to be structured in order to provide correct attestation of a remote modern target system.
Keywords: security of data; virtual machines; virtualisation; authentication authority; communication channel security; execution-environment; network security; nondeterminism; remote machine authentication; remote software attestation; remote target system; virtualization extensions; Authentication; Computer architecture; Hardware; Program processors; Protocols; Virtualization; Dynamic Root of Trust; Multicore; Rootkit Detection; Self-checksumming Code; Software-based Root-of-trust; Trusted Computing; Virtualization (ID#: 16-9966)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371504&isnumber=7371418
Y. Shen and K. Elphinstone, “Microkernel Mechanisms for Improving the Trustworthiness of Commodity Hardware,” Dependable Computing Conference (EDCC), 2015 Eleventh European, Paris, 2015, pp. 155-166. doi: 10.1109/EDCC.2015.16
Abstract: Trustworthy isolation is required to consolidate safety and security critical software systems on a single hardware platform. Recent advances in formally verifying correctness and isolation properties of a microkernel should enable mutually distrusting software to co-exist on the same platform with a high level of assurance of correct operation. However, commodity hardware is susceptible to transient faults triggered by cosmic rays, and alpha particle strikes, and thus may invalidate the isolation guarantees, or trigger failure in isolated applications. To increase trustworthiness of commodity hardware, we apply redundant execution techniques from the dependability community to a modern microkernel. We leverage the hardware redundancy provided by multicore processors to perform transient fault detection for applications and for the microkernel itself. This paper presents the mechanisms and framework for microkernel based systems to implement redundant execution for improved trustworthiness. It evaluates the performance of the resulting system on x86-64 and ARM platforms.
Keywords: multiprocessing systems; operating system kernels; redundancy; safety-critical software; security of data; 64 platforms; ARM platforms; alpha particle strikes; commodity hardware trustworthiness; correctness formal verification; cosmic rays; dependability community; hardware redundancy; isolation properties; microkernel mechanisms; modern microkernel; multicore processors; redundant execution techniques; safety critical software systems; security critical software systems; transient fault detection; trustworthy isolation; x86 platforms; Hardware; Kernel; Multicore processing; Program processors; Security; Transient analysis; Microkernel; Reliability; SEUs; Security; Trustworthy Systems (ID#: 16-9967)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371963&isnumber=7371940
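The redundant-execution idea can be illustrated in miniature: run the same computation in multiple replicas and flag any divergence as a transient fault. This toy sketch (not the paper's microkernel mechanism) runs replicas sequentially rather than on separate cores.

```python
def redundant_execute(fn, arg, replicas=2):
    """Run fn(arg) in several replicas and compare outputs; any
    disagreement is treated as a transient fault. A real system would
    schedule the replicas on distinct cores and also compare kernel state."""
    results = [fn(arg) for _ in range(replicas)]
    if any(r != results[0] for r in results[1:]):
        raise RuntimeError("transient fault detected: replica outputs differ")
    return results[0]
```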
N. Druml et al., “Time-of-Flight 3D Imaging for Mixed-Critical Systems,” 2015 IEEE 13th International Conference on Industrial Informatics (INDIN), Cambridge, 2015, pp. 1432-1437. doi: 10.1109/INDIN.2015.7281943
Abstract: Computer vision is becoming more and more important in the fields of consumer electronics, cyber-physical systems, and automotive technology. Recognizing and classifying one's environment reliably is imperative for safety-critical applications, which are omnipresent, e.g., in the automotive and aviation domains. For this purpose, Time-of-Flight imaging technology is suitable, as it enables robust and cost-efficient three-dimensional sensing of the environment. However, the resource limitations of safety- and security-certified processor systems, as well as compliance with safety standards, pose a challenge for the development and integration of complex Time-of-Flight-based applications. Here we present a Time-of-Flight system approach that focuses in particular on the automotive domain. This Time-of-Flight imaging approach is based on an automotive processing platform that complies with safety and security standards. By employing state-of-the-art hardware/software and multi-core concepts, a robust Time-of-Flight system solution is introduced that can be used in a mixed-critical application context. In this work we demonstrate a feasible implementation of the proposed hardware/software architecture by means of a prototype for the automotive domain. Raw Time-of-Flight sensor data is acquired and 3D data is calculated at up to 80 FPS without the use of dedicated hardware accelerators. In a next step, safety-critical automotive applications (e.g., parking assistance) can exploit this 3D data in a mixed-critical environment while respecting the needs of ISO 26262.
Keywords: computer vision; image sensors; road safety; safety systems; software architecture; traffic engineering computing; 3D data; ISO 26262; automotive processing platform; automotive technology; consumer electronics; cyber-physical systems; hardware accelerators; hardware-software architecture; mixed-critical systems; multicore concepts; raw time-of-flight sensor data; safety-critical automotive applications; security-certified processor systems; time-of-flight 3D imaging; Automotive engineering; Cameras; Hardware; Safety; Sensors; Three-dimensional displays; 3D sensing; Time-of-Flight; automotive applications; functional safety; mixed-critical; multi-core (ID#: 16-9968)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7281943&isnumber=7281697
Network Layer Security for the Smart Grid 2015 |
The primary value of published research in smart grid technologies (the use of cyber-physical systems to coordinate the generation, transmission, and use of electrical power and its sources) lies in the grid's strategic importance and the consequences of intrusion. The smart grid is of particular importance to the Science of Security. Its problems touch several of the hard problems, notably resiliency and metrics. The work cited here was published in 2015.
V. Delgado-Gomes, J. F. Martins, C. Lima and P. N. Borza, “Smart Grid Security Issues,” 2015 9th International Conference on Compatibility and Power Electronics (CPE), Costa da Caparica, 2015, pp. 534-538. doi: 10.1109/CPE.2015.7231132
Abstract: The smart grid concept is being fostered due to the required evolution of the power network to incorporate distributed energy sources (DES), renewable energy sources (RES), and electric vehicles (EVs). The inclusion of these components in the smart grid requires an information and communication technology (ICT) layer in order to exchange information and to control and monitor the electrical components of the smart grid. The two-way communication flows bring cyber security issues to the smart grid. Different cyber security countermeasures need to be applied to the heterogeneous smart grid according to computational resource availability, time communication constraints, and the sensitivity of the information. This paper presents the main security issues and challenges of a cyber secure smart grid, whose main objectives are confidentiality, integrity, authorization, and authentication of the exchanged data.
Keywords: authorisation; data integrity; distributed power generation; power engineering computing; power system security; renewable energy sources; smart power grids; DES; ICT; RES; computational resources availability; cyber secure smart grid; cyber security; data authentication; data authorization; data confidentiality; distributed energy sources; electric vehicles; information and communication technology; power network evolution; smart grid security; time communication constraints; two-way communication flow; Computer security; Monitoring; NIST; Privacy; Smart grids; Smart grid; challenges; cyber security; information and communication technology (ICT) (ID#: 16-9897)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7231132&isnumber=7231036
N. Saputro, K. Akkaya and I. Guvenc, “Privacy-Aware Communication Protocol for Hybrid IEEE 802.11s/LTE Smart Grid Architectures,” Local Computer Networks Conference Workshops (LCN Workshops), 2015 IEEE 40th, Clearwater Beach, FL, 2015, pp. 905-911. doi: 10.1109/LCNW.2015.7365945
Abstract: The Smart Grid (SG) is expected to use a variety of communications technologies as its underlying communications infrastructure. Interworking between heterogeneous communications networks is crucial to support reliability and end-to-end features. In this paper, we consider a hybrid SG communications architecture that consists of an IEEE 802.11s mesh-based smart meter network and an LTE-based wide area network for collecting smart meter data. While a gateway can be used to bridge these networks, it is still not possible to address a smart meter directly from the utility control center, nor to run a TCP-based application that requires a connection establishment phase. We propose a gateway-address-translation-based approach to enable execution of end-to-end protocols without making any changes to the LTE and 802.11s mesh networks. Specifically, we introduce a new layer at the gateway which performs network address translation by using unique pseudonyms for the smart meters. In this way, we also ensure the privacy of the consumer, since the IP addresses of the smart meters are not exposed to the utility company. We implemented the proposed mechanism in the ns-3 network simulator, which has libraries to support both IEEE 802.11s and LTE communications. The results indicate that we can achieve our goals without introducing any additional overhead.
Keywords: Long Term Evolution; power engineering computing; power system reliability; power system security; smart meters; smart power grids; transport protocols; wireless LAN; wireless mesh networks; IEEE 802.11s mesh-based smart meter network; LTE-based wide area network; TCP-based application; communication infrastructure; connection establishment phase; end-to-end features; end-to-end protocols; gateway address translation-based approach; heterogeneous communication network; hybrid IEEE 802.11s-LTE smart grid architectures; hybrid SG communication architecture; network address translation; ns-3 network simulator; privacy-aware communication protocol; reliability; smart meter data; utility control center; Communication networks; Companies; IEEE 802.11 Standard; IP networks; Logic gates; Protocols; Smart meters (ID#: 16-9898)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7365945&isnumber=7365758
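The gateway-layer translation described in the abstract can be sketched as a bidirectional pseudonym table; the class and method names are illustrative assumptions, and a real implementation would rewrite packet headers rather than return strings.

```python
import uuid

class PseudonymGateway:
    """Sketch of gateway address translation with pseudonyms: smart-meter
    IPs are never exposed to the utility; the gateway maps each meter to a
    stable pseudonym in both directions."""

    def __init__(self):
        self._to_pseudonym = {}   # meter IP -> pseudonym
        self._to_meter = {}       # pseudonym -> meter IP

    def outbound(self, meter_ip):
        """Rewrite a meter-originated packet's source address."""
        if meter_ip not in self._to_pseudonym:
            pseudo = uuid.uuid4().hex[:12]
            self._to_pseudonym[meter_ip] = pseudo
            self._to_meter[pseudo] = meter_ip
        return self._to_pseudonym[meter_ip]

    def inbound(self, pseudonym):
        """Resolve a utility-addressed pseudonym back to the meter IP."""
        return self._to_meter.get(pseudonym)
```

Because the mapping lives only at the gateway, end-to-end protocols such as TCP can be bridged without modifying either the LTE or the 802.11s side.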
F. A. A. Alseiari and Z. Aung, “Real-Time Anomaly-Based Distributed Intrusion Detection Systems for Advanced Metering Infrastructure Utilizing Stream Data Mining,” 2015 International Conference on Smart Grid and Clean Energy Technologies (ICSGCE), Offenburg, Germany, 2015, pp. 148-153. doi: 10.1109/ICSGCE.2015.7454287
Abstract: The Advanced Metering Infrastructure (AMI) is one of the core components of the smart grid architecture. As AMI components are connected through mesh networks in a distributed mechanism, new vulnerabilities can be exploited by attackers who intentionally interfere with the network's communication system and steal customer data. As a result, identifying distributed security solutions to maintain the confidentiality, integrity, and availability of AMI devices' traffic is an essential requirement. This paper proposes a real-time distributed intrusion detection system (DIDS) for the AMI infrastructure that utilizes stream data mining techniques and a multi-layer implementation approach. Using unsupervised online clustering techniques, the anomaly-based DIDS monitors the data flow in the AMI and distinguishes anomalous traffic. A comparison between online and offline clustering techniques showed experimentally that online Mini-Batch K-means clustering suits the architecture requirements, giving a high detection rate and low false-positive rates.
Keywords: Monitoring; Object recognition; Reliability; TCPIP; Testing; Training; advanced metering infrastructure; distributed intrusion detection system; mini-batch k-means; online clustering; smart grids; stream mining (ID#: 16-9899)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7454287&isnumber=7454254
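The online-clustering detector might be sketched with a tiny pure-Python Mini-Batch K-means: learn centroids from normal AMI traffic features, then score new observations by distance to the nearest centroid. The feature choice and thresholding are assumptions, not the paper's configuration.

```python
import math
import random

def _dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mini_batch_kmeans(points, k, batch_size=32, iters=200, seed=0):
    """Tiny sketch of Mini-Batch K-means: each iteration updates centroids
    from a random mini-batch using per-centroid learning rates.
    Illustrative only, not the paper's implementation."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    counts = [0] * k
    for _ in range(iters):
        batch = rng.sample(points, min(batch_size, len(points)))
        for p in batch:
            j = min(range(k), key=lambda c: _dist2(p, centroids[c]))
            counts[j] += 1
            eta = 1.0 / counts[j]  # per-centroid learning rate
            centroids[j] = [(1 - eta) * c + eta * x
                            for c, x in zip(centroids[j], p)]
    return centroids

def anomaly_score(point, centroids):
    """Distance to the nearest learned centroid; large = anomalous."""
    return math.sqrt(min(_dist2(point, c) for c in centroids))
```

In use, centroids would be trained on features of normal AMI traffic and the score thresholded to trade detection rate against false positives.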
M. Popovic, M. Mohiuddin, D. C. Tomozei and J. Y. Le Boudec, “iPRP: Parallel Redundancy Protocol for IP Networks,” Factory Communication Systems (WFCS), 2015 IEEE World Conference on, Palma de Mallorca, 2015, pp. 1-4. doi: 10.1109/WFCS.2015.7160549
Abstract: Reliable packet delivery within stringent delay constraints is of paramount importance to industrial processes with hard real-time constraints, such as electrical grid monitoring. Because retransmission and coding techniques counteract the delay requirements, reliability is achieved through replication over multiple fail-independent paths. Existing solutions such as the parallel redundancy protocol (PRP) replicate all packets at the MAC layer over parallel paths. PRP works best in local area networks, e.g., sub-station networks. However, it is not viable for IP-layer wide area networks, which are a part of emerging smart grids. Such a limitation on scalability, coupled with a lack of security and of diagnostic capability, renders it unsuitable for reliable data delivery in smart grids. To address this issue, we present a transport-layer design: the IP parallel redundancy protocol (iPRP). Designing iPRP poses non-trivial challenges in the form of selective packet replication, soft-state and multicast support. Besides unicast, iPRP supports multicast, which is widely used in smart grid networks. It duplicates only time-critical UDP traffic. iPRP requires only a simple software installation on the end-devices. No other modification to the existing monitoring application, end-device operating system or intermediate network devices is needed. iPRP has a set of diagnostic tools for network debugging. With our implementation of iPRP in Linux, we show that iPRP supports multiple flows with minimal processing and delay overhead. It is being installed in our campus smart grid network and is publicly available.
Keywords: IP networks; Linux; access protocols; computer network performance evaluation; local area networks; smart power grids; substations; telecommunication network reliability; transport protocols; IP parallel redundancy protocol; MAC layer; campus smart grid network; coding technique; delay overhead; delay requirements; device operating system; diagnostic inability; electrical grid monitoring; hard real-time constraints; iPRP; industrial processes; intermediate network devices; minimal processing; multicast support; multiple fail-independent paths; network debugging; packet delivery reliability; retransmission technique; security laxness; selective packet replication; soft-state; software installation; stringent delay constraints; substation networks; time-critical UDP traffic; transport-layer design; Delays; Monitoring; Ports (Computers); Receivers; Redundancy; Smart grids (ID#: 16-9900)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160549&isnumber=7160536
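The receiver side of a PRP-style scheme can be sketched as a duplicate-discard filter: the sender replicates each time-critical UDP payload over several fail-independent paths, and the receiver delivers only the first copy of each sequence number. The class, field names, and window policy below are illustrative, not iPRP's actual design.

```python
class DuplicateDiscardReceiver:
    """Deliver the first copy of each sequence number and drop later
    duplicates arriving over other paths, with bounded soft state."""

    def __init__(self, window=1024):
        self.seen = set()
        self.window = window

    def deliver(self, seq, payload):
        if seq in self.seen:
            return None  # late duplicate from another path
        self.seen.add(seq)
        if len(self.seen) > self.window:  # bound the soft state
            self.seen.discard(min(self.seen))
        return payload
```

This illustrates why the approach adds reliability without retransmission delay: as long as any one path delivers a copy in time, the flow proceeds.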
H. Senyondo, P. Sun, R. Berthier and S. Zonouz, “PLCloud: Comprehensive Power Grid PLC Security Monitoring with Zero Safety Disruption,” 2015 IEEE International Conference on Smart Grid Communications (SmartGridComm), Miami, FL, 2015, pp. 809-816. doi: 10.1109/SmartGridComm.2015.7436401
Abstract: Recent security threats against cyber-physical critical power grid infrastructures have further distinguished the differences and complex interdependencies between optimal plant control and infrastructural safety. In this paper, we reflect upon a few real-world scenarios and threats to understand how those two topics meet. We then propose a practical architectural solution to address the corresponding concerns. As a first concrete step, we focus on networked industrial control systems in the smart grid, where several sensing-processing-actuation embedded nodes receive information, make control decisions, and carry out optimal actions. Traditionally, global safety maintenance, e.g., transient stability, is embedded into control and taken into account by the decision-making modules. With recent cyber security-induced safety incidents, we believe that the safety-handling modules should also be considered part of the global trusted computing base (attack surface) for security purposes. Generally, maximizing a system's overall security requires the designers to minimize its trusted computing base. Consequently, we argue that the traditional combined safety-control system architecture is no longer the optimal design paradigm given existing threats. Instead, we propose PLCLOUD, a new cloud-based safety-preserving architecture that places a minimal trusted safety verifier layer between the physical world and the cyber-based supervisory control and data acquisition (SCADA) infrastructure, specifically the programmable logic controllers (PLCs). PLCLOUD's main objective is to take care of infrastructural safety and separate it from the optimal plant control that SCADA is responsible for.
Keywords: SCADA systems; industrial control; monitoring; optimal control; programmable controllers; trusted computing; PLCLOUD; PLCloud; SCADA infrastructure; architectural solutions; attack surface; cloud-based safety-preserving architecture; combined safety-control system architecture; complex interdependencies; comprehensive power grid PLC security monitoring; control decisions; cyber security-induced safety incidents; cyber-based supervisory control; cyber-physical critical power grid infrastructures; data acquisition; decision making modules; global safety maintenance; global trusted computing base; infrastructural safety topics; minimal trusted safety verifier layer; networked industrial control systems; optimal actions; optimal design paradigm; optimal plant control; programmable logic controllers; safety-handling modules; security threats; sensing-processing-actuation embedded nodes; smart grid; transient stability; zero safety disruption; Computer architecture; Malware; Monitoring; Real-time systems; Safety; Smart grids (ID#: 16-9901)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7436401&isnumber=7436263
G. Xiong, T. R. Nyberg, P. Hämäläinen, X. Dong, Y. Liu and J. Hou, “To Enhance Power Distribution Network Management of Local Power Service Enterprise by Using Cloud Platform,” 2015 5th International Conference on Information Science and Technology (ICIST), Changsha, 2015, pp. 487-491. doi: 10.1109/ICIST.2015.7289021
Abstract: The availability of new technologies in the areas of digital electronics, communications, the internet, and computing opens the door to building a smart grid, which can significantly increase the capacity, services and intelligence of power systems. This paper proposes a power distribution network management cloud platform based on Tekla Xpower. The proposed system can support the implementation of the current power grid and/or the coming smart grid. Utilizing existing computing and storage installations, the cloud platform can integrate existing resources to improve the computation and storage capacities and the data security of the entire system, and can also increase application intelligence, service quality and decision-making capability. The four-layer architecture of the cloud platform is designed, and a pilot case study for a DMS is provided.
Keywords: cloud computing; distribution networks; power engineering computing; security of data; smart power grids; Tekla Xpower; cloud platform; data security; decision-making capability; local power service enterprise; power distribution network management; service quality; smart grid; storage installations; Business; Computer network reliability; Databases; Electronic mail; Handheld computers; Reliability (ID#: 16-9902)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7289021&isnumber=7288906
X. Bao, G. Wang, Z. Hou, M. Xu, L. Peng and H. Han, “WDM Switch Technology Application in Smart Substation Communication Network,” 2015 5th International Conference on Electric Utility Deregulation and Restructuring and Power Technologies (DRPT), Changsha, 2015, pp. 2373-2376. doi: 10.1109/DRPT.2015.7432643
Abstract: By analyzing the typical communication networking method of the Process Layer in the Smart Substation, this paper identifies the problems of the currently widespread Process Layer communication isolation networks based on VLAN technology. The paper expounds the basic principle and advantages of using WDM technology to realize the switch, and proposes a security-isolated communication networking method using a WDM switch in the Smart Substation Process Layer. In theory this method has obvious advantages over the current VLAN logical isolation method, and it truly meets the “Three Networks In One” needs of the Smart Substation Process Layer communication network.
Keywords: smart power grids; substations; wavelength division multiplexing; VLAN technology; WDM switch technology application; process layer communication isolation network; smart substation communication network; Communication networks; Decision support systems; Power industry; Security; Substations; Switches; Wavelength division multiplexing; Isolation; Smart Substation; Switch; VLAN; WDM (ID#: 16-9903)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7432643&isnumber=7432193
G. Peretti, V. Lakkundi and M. Zorzi, “BlinkToSCoAP: An End-to-End Security Framework for the Internet of Things,” 2015 7th International Conference on Communication Systems and Networks (COMSNETS), Bangalore, 2015, pp. 1-6. doi: 10.1109/COMSNETS.2015.7098708
Abstract: The emergence of Internet of Things and the availability of inexpensive sensor devices and platforms capable of wireless communications enable a wide range of applications such as intelligent home and building automation, mobile healthcare, smart logistics, distributed monitoring, smart grids, energy management, asset tracking to name a few. These devices are expected to employ Constrained Application Protocol for the integration of such applications with the Internet, which includes User Datagram Protocol binding with Datagram Transport Layer Security protocol to provide end-to-end security. This paper presents a framework called BlinkToSCoAP, obtained through the integration of three software libraries implementing lightweight versions of DTLS, CoAP and 6LoWPAN protocols over TinyOS. Furthermore, a detailed experimental campaign is presented that evaluates the performance of DTLS security blocks. The experiments analyze BlinkToSCoAP messages exchanged between two Zolertia Z1 devices, allowing evaluations in terms of memory footprint, energy consumption, latency and packet overhead. The results obtained indicate that securing CoAP with DTLS in Internet of Things is certainly feasible without incurring much overhead.
Keywords: Internet; Internet of Things; computer network reliability; computer network security; protocols; 6LoWPAN protocol; BlinkToSCoAP; CoAP protocol; DTLS protocol; TinyOS; Zolertia Z1 device; asset tracking; availability; building automation; constrained application protocol; datagram transport layer security protocol; distributed monitoring; end-to-end security framework; energy consumption; energy management; intelligent home; latency overhead; memory footprint; message exchange; mobile healthcare; packet overhead; sensor device; smart grid; smart logistics; user datagram protocol; wireless communication; Computer languages; Logic gates; Payloads; Performance evaluation; Random access memory; Security; Servers; 6LoWPAN; CoAP; DTLS; M2M; end-to-end security (ID#: 16-9904)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7098708&isnumber=7098633
A. Mihaita, C. Dobre, B. Mocanu, F. Pop and V. Cristea, “Analysis of Security Approaches for Vehicular Ad-Hoc Networks,” 2015 10th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC), Krakow, 2015, pp. 304-309. doi: 10.1109/3PGCIC.2015.184
Abstract: In recent years, the number of vehicles has increased to a point where the road infrastructure can no longer easily cope, and congestion in cities has become the norm rather than the exception. Smart technologies are widely employed to cope, with advanced scheduling mechanisms -- from intelligent traffic lights designed to control traffic, to applications running inside the car to provide updated information to the driver, or simply to keep the driver socially connected. The time for such smart technologies is right: the computational power and memory size of microprocessors have increased, while the price of computation, storage and networking has decreased. What a few years ago might have sounded rather futuristic, like technologies designed to facilitate communication between cars and automate the exchange of data about traffic, accidents or congestion, is now becoming reality. But the implications of these ideas have only recently become relevant; in particular, security and trust-related implications are just now arising as critical topics for such new applications. The reason is that drivers face new challenges, from their personal data being stolen or applications being fed false information about traffic conditions, to the technology being exposed to all kinds of hijacking attacks. A practitioner developing a smart traffic application faces an important problem: which security technology or algorithm to use to best cope with these challenges. In this paper, we first present an analysis of various cryptographic algorithms in the context of vehicular scenarios. Our scope is to analyze the designs and approaches for securing networks formed between vehicles. In particular, we are interested in security layers able to provide strong cryptographic algorithm implementations that can guarantee high levels of trust and security for vehicular applications. The analysis exploits the realistic simulator being developed at the University Politehnica of Bucharest.
Keywords: cryptography; electronic data interchange; telecommunication security; vehicular ad hoc networks; advanced scheduling mechanisms; cryptographic algorithms; data exchange; security layers; smart technologies; smart traffic application; vehicular ad-hoc networks; Algorithm design and analysis; Authentication; Cryptography; Roads; Routing; Vehicles (ID#: 16-9905)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7424580&isnumber=7424499
Routing Anomalies 2015 |
The capacity to deal with routing anomalies is a factor in developing resilient systems. The research cited here was presented in 2015.
R. Hiran, N. Carlsson and N. Shahmehri, “Crowd-Based Detection of Routing Anomalies on the Internet,” Communications and Network Security (CNS), 2015 IEEE Conference on, Florence, 2015, pp. 388-396. doi: 10.1109/CNS.2015.7346850
Abstract: The Internet is highly susceptible to routing attacks and there is no universally deployed solution that ensures that traffic is not hijacked by third parties. Individuals or organizations wanting to protect themselves from sustained attacks must therefore typically rely on measurements and traffic monitoring to detect attacks. Motivated by the high overhead costs of continuous active measurements, we argue that passive monitoring combined with collaborative information sharing and statistics can be used to provide alerts about traffic anomalies that may require further investigation. In this paper we present and evaluate a user-centric crowd-based approach in which users passively monitor their network traffic, share information about potential anomalies, and apply combined collaborative statistics to identify potential routing anomalies. The approach uses only passively collected round-trip time (RTT) measurements, is shown to have low overhead regardless of whether a central or distributed architecture is used, and provides an attractive tradeoff between attack detection rates (when there is an attack) and false alert rates (needing further investigation) under normal conditions. Our data-driven analysis using longitudinal and distributed RTT measurements also provides insights into detector selection and the relative weight that should be given to candidate detectors at different distances from the potential victim node.
Keywords: Internet; computer network security; telecommunication network routing; telecommunication traffic; attack detection; collaborative information sharing; combined collaborative statistics; data-driven analysis; distributed RTT measurement; longitudinal RTT measurement; round-trip time; routing anomaly crowd-based detection; traffic monitoring; user-centric crowd-based approach; Collaboration; Detectors; Monitoring; Organizations; Routing; Security; Crowd-based detection; Imposture attacks; Interception attacks; Routing anomalies (ID#: 16-9906)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346850&isnumber=7346791
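The collaborative statistic can be as simple as a robust deviation test over the RTTs shared by the crowd; the median/MAD rule and the factor k below are illustrative choices, not the paper's detector.

```python
import statistics

def crowd_alert(shared_rtts, observed_rtt, k=3.0):
    """Sketch of crowd-based RTT anomaly detection: peers passively share
    round-trip times to a destination, and a local observation is flagged
    when it deviates strongly from the crowd baseline. A hijacked path
    typically lengthens the route and inflates the RTT."""
    baseline = statistics.median(shared_rtts)
    # median absolute deviation as a robust spread estimate
    mad = statistics.median(abs(r - baseline) for r in shared_rtts) or 1e-9
    return abs(observed_rtt - baseline) > k * mad
```

Tuning k trades attack detection rate against the false-alert rate the abstract discusses.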
L. Trajkovic, “Communication Networks: Traffic Data, Network Topologies, and Routing Anomalies,” 2015 IEEE 13th International Symposium on Intelligent Systems and Informatics (SISY), Subotica, 2015, pp. 15-15. doi: 10.1109/SISY.2015.7325382
Abstract: Understanding modern data communication networks such as the Internet involves collection and analysis of data collected from deployed networks. It also calls for development of various tools for analysis of such datasets. Collected traffic data are used for characterization and modeling of network traffic, analysis of Internet topologies, and prediction of network anomalies. In this talk, I will describe collection and analysis of realtime traffic data using special purpose hardware and software tools. Analysis of such collected datasets indicates a complex underlying network infrastructure that carries traffic generated by a variety of the Internet applications. Data collected from the Internet routing tables are used to analyze Internet topologies and to illustrate the existence of historical trends in the development of the Internet. The Internet traffic data are also used to classify and detect network anomalies such as Internet worms, which affect performance of routing protocols and may greatly degrade network performance. Various statistical and machine learning techniques are used to classify test datasets, identify the correct traffic anomaly types, and design anomaly detection mechanisms.
Keywords: Internet; computer network security; data analysis; data communication; learning (artificial intelligence); routing protocols; statistical analysis; telecommunication network topology; telecommunication traffic; Internet routing tables; Internet topologies; Internet worms; data communication networks; machine learning techniques; network anomaly prediction; network topologies; network traffic data analysis; routing anomalies; statistical techniques; Communication networks; Informatics; Intelligent systems; Internet topology; Network topology; Routing (ID#: 16-9907)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7325382&isnumber=7325349
M. Ćosović, S. Obradović and L. Trajković, “Performance Evaluation of BGP Anomaly Classifiers,” Digital Information, Networking, and Wireless Communications (DINWC), 2015 Third International Conference on, Moscow, 2015, pp. 115-120. doi: 10.1109/DINWC.2015.7054228
Abstract: Changes in the network topology, such as large-scale power outages or Internet worm attacks, are events that may induce routing information updates. The Border Gateway Protocol (BGP) is used by Autonomous Systems (ASes) to address these changes. Network reachability information, contained in BGP update messages, is stored in the Routing Information Base (RIB). Recent BGP anomaly detection systems employ machine learning techniques to mine network data. In this paper, we evaluated the performance of several machine learning algorithms for detecting Internet anomalies using the RIB. Naive Bayes (NB), Support Vector Machine (SVM), and Decision Tree (J48) classifiers are employed to detect network traffic anomalies. We evaluated feature discretization and feature selection using three data sets of known Internet anomalies.
Keywords: Bayes methods; Internet; computer network performance evaluation; computer network security; data mining; decision trees; invasive software; learning (artificial intelligence); routing protocols; support vector machines; telecommunication network topology; telecommunication traffic; AS; BGP anomaly classifiers; Internet anomalies; Internet worm attacks; J48; NB; Naive Bayes; RIB; SVM; autonomous systems; border gateway protocol; decision tree classifiers; feature discretization; feature selection; large-scale power outages; machine learning techniques; network data mining; network traffic anomalies; performance evaluation; routing information base; routing information updates; support vector machine; Accuracy; Classification algorithms; Data models; Machine learning algorithms; Niobium; Support vector machines; BGP; decision tree; machine learning; naive Bayes (ID#: 16-9908)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7054228&isnumber=7054206
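One of the three classifiers the paper evaluates can be sketched compactly. This minimal Gaussian Naive Bayes assumes per-interval counts of BGP announcements and withdrawals as features, an illustrative choice rather than the paper's exact feature set.

```python
import math
from collections import defaultdict

class GaussianNB:
    """Minimal Gaussian Naive Bayes for labeling traffic intervals as
    normal or anomalous from numeric features."""

    def fit(self, X, y):
        groups = defaultdict(list)
        for xi, yi in zip(X, y):
            groups[yi].append(xi)
        self.stats = {}
        n = len(y)
        for label, rows in groups.items():
            cols = list(zip(*rows))
            means = [sum(c) / len(c) for c in cols]
            vars_ = [max(sum((v - m) ** 2 for v in c) / len(c), 1e-9)
                     for c, m in zip(cols, means)]
            self.stats[label] = (math.log(len(rows) / n), means, vars_)
        return self

    def predict(self, x):
        def log_post(label):
            prior, means, vars_ = self.stats[label]
            return prior + sum(
                -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
                for xi, m, v in zip(x, means, vars_))
        return max(self.stats, key=log_post)
```

Worm events such as those in the paper's datasets produce bursts of update messages, which is why simple count features separate the classes at all.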
G. Pallotta and A. L. Jousselme, “Data-Driven Detection and Context-Based Classification of Maritime Anomalies,” Information Fusion (Fusion), 2015 18th International Conference on, Washington, DC, 2015, pp. 1152-1159. doi: (not provided)
Abstract: Discovering anomalies at sea is one of the critical tasks of Maritime Situational Awareness (MSA) activities and an important enabler for maritime security operations. This paper proposes a data-driven approach to anomaly detection, highlighting challenges specific to the maritime domain. This work builds on unsupervised learning techniques which provide models for normal traffic behaviour. A methodology to associate tracks with the derived traffic model is then presented. This is done by the pre-extraction of contextual information as the baseline patterns of life (i.e., routes) in the area under investigation. In addition to a brief description of the approach used to derive the routes, their characterization and representation is presented in support of exploitable knowledge to classify anomalies. A hierarchical reasoning is proposed where new tracks are first associated with existing routes based on their positional information only and “off-route” vessels are detected. Then, for on-route vessels, further anomalies are detected, such as “speed anomaly” or “heading anomaly”. The algorithm is illustrated and assessed on a real-world dataset supplemented with synthetic abnormal tracks.
Keywords: information retrieval; marine engineering; pattern classification; security of data; traffic engineering computing; unsupervised learning; anomaly detection; context-based classification; contextual information pre-extraction; data driven detection approach; hierarchical reasoning; maritime security operation; maritime situational awareness; normal traffic behaviour; off route vessel; on route vessel; synthetic abnormal track; traffic model; unsupervised learning technique; Data mining; Detectors; Feature extraction; Radar tracking; Sea measurements; Tracking; Trajectory (ID#: 16-9909)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266688&isnumber=7266535
Y. Q. Zhang, B. Yang, Y. L. Lu and G. Z. Yang, “Anomaly Detection of AS-Level Internet Evolution Based on Temporal Distance,” 2015 Fifth International Conference on Instrumentation and Measurement, Computer, Communication and Control (IMCCC), Qinhuangdao, 2015, pp. 631-634. doi: 10.1109/IMCCC.2015.138
Abstract: As the inter-domain routing security problem becomes increasingly prominent, the detection of AS (Autonomous System)-level Internet evolution has become a research hotspot. This paper introduces AS Reachability Distance (ASRD) and AS Connectivity Distance (ASCD), both based on temporal distance and used to characterize the differences in AS reachability and connectivity over time; based on ASRD and ASCD, an algorithm for continuously detecting AS-level Internet anomalies is proposed. Experiments show that the proposed method can not only detect AS-level Internet anomalous events accurately, but also reveal the long-term evolution laws of the AS-level Internet.
Keywords: Internet; computer network security; telecommunication network routing; AS connectivity distance; AS reach ability distance; AS-level Internet evolution; ASCD; ASRD; anomaly detection; autonomous system; interdomain routing security problem; temporal distance; Feature extraction; IP networks; Mathematical model; Routing; Time measurement; Time series analysis; AS Connectivity Distance; AS Reachability Distance; AS evolution (ID#: 16-9910)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7405917&isnumber=7405778
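One natural way to realize a temporal distance over AS reachability, as the abstract describes, is to compare an AS's reachable-prefix sets at two snapshots. The sketch below uses a Jaccard-style distance; the paper's exact ASRD/ASCD definitions may differ, and the prefixes are hypothetical:

```python
def temporal_distance(set_t1, set_t2):
    """Distance between an AS's reachable-prefix sets at two times:
    size of the symmetric difference normalized by the union
    (a Jaccard-style distance; illustrative, not the paper's exact formula)."""
    union = set_t1 | set_t2
    if not union:
        return 0.0
    return len(set_t1 ^ set_t2) / len(union)

# Reachability snapshots of one AS at two times (prefixes hypothetical)
t1 = {"10.0.0.0/8", "192.0.2.0/24", "198.51.100.0/24"}
t2 = {"10.0.0.0/8", "192.0.2.0/24"}  # one prefix withdrawn
d = temporal_distance(t1, t2)
print(round(d, 3))  # 0.333: a large jump in this distance would flag an anomalous event
```

Tracking such a distance continuously over successive snapshots gives both short-term anomaly spikes and a long-term view of evolution.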
A. S. Bhandare and S. B. Patil, “Securing MANET Against Co-operative Black Hole Attack and Its Performance Analysis — A Case Study,” Computing Communication Control and Automation (ICCUBEA), 2015 International Conference on, Pune, 2015, pp. 301-305. doi: 10.1109/ICCUBEA.2015.63
Abstract: A Mobile Ad-hoc Network (MANET) is an autonomous system of randomly moving nodes in which every node acts as both host and router to maintain network functionality. Ad-hoc On-demand Distance Vector (AODV) is one of the principal routing protocols, and it may become insecure due to malicious nodes present inside the network. Such malicious nodes affect network performance severely by dropping all data packets instead of forwarding them to the intended receiver; this is called a “Co-operative Black Hole Attack”. In this paper, a detection and defense mechanism is proposed to eliminate intruders that carry out black hole attacks by deciding on a safe route on the basis of normal versus abnormal (anomalous) activity. This prevention system checks route replies against fake replies and is named the “Malicious Node Detection System for AODV (MDSAODV)”. We analyze network performance with no malicious nodes, and with one and two malicious nodes at varying locations. The network performance of MDSAODV is then analyzed under the same scenarios through NS-2 simulation.
Keywords: mobile ad hoc networks; routing protocols; telecommunication security; MANET security; MDSAODV; NS-2 simulation; ad-hoc-on-demand distance vector; anti prevention system; autonomous system; cooperative black hole attack; data packets; defense mechanism; detection mechanism; intruder elimination; malicious node detection system-for-AODV; mobile ad-hoc network; network functionality; performance analysis; principal routing protocols; Mobile ad hoc networks; Mobile computing; Routing; Routing protocols; Wireless communication; AODV; Co-Operative Black Hole Attack; MANET; MDSAOD (ID#: 16-9911)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155855&isnumber=7155781
Y. Haga, A. Saso, T. Mori and S. Goto, “Increasing the Darkness of Darknet Traffic,” 2015 IEEE Global Communications Conference (GLOBECOM), San Diego, CA, 2015, pp. 1-7. doi: 10.1109/GLOCOM.2015.7416973
Abstract: A Darknet is a passive sensor system that monitors traffic routed to unused IP address space. Darknets have been widely used as tools to detect malicious activities such as propagating worms, thanks to the useful feature that most packets observed by a darknet can be assumed to have originated from non-legitimate hosts. Recent commoditization of Internet-scale survey traffic originating from legitimate hosts could overwhelm the traffic that was originally supposed to be monitored with a darknet. Based on this observation, we posed the following research question: “Can the Internet-scale survey traffic become noise when we analyze darknet traffic?” To answer this question, we propose a novel framework, ID2, to increase the darkness of darknet traffic, i.e., ID2 discriminates between Internet-scale survey traffic originating from legitimate hosts and other traffic potentially associated with malicious activities. It leverages two intrinsic characteristics of Internet-scale survey traffic: a network-level property and some form of footprint explicitly indicated by surveyors. When we analyzed darknet traffic using ID2, we saw that Internet-scale survey traffic can indeed become noise. We also demonstrated that the discrimination of survey traffic exposes hidden traffic anomalies, which are invisible without using our technique.
Keywords: IP networks; Internet; computer network security; telecommunication traffic; ID2 framework; Internet-scale survey traffic; Internet-scale survey traffic commoditization; darknet traffic darkness; malicious activity detection; passive sensor system; traffic route monitoring; unused IP address space; Monitoring; Organizations; Payloads; Protocols; Sensor systems; Standards organizations (ID#: 16-9912)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7416973&isnumber=7416057
S. J. Chang, K. H. Yeh, G. D. Peng, S. M. Chang and C. H. Huang, “From Safety to Security: Pattern and Anomaly Detections in Maritime Trajectories,” Security Technology (ICCST), 2015 International Carnahan Conference on, Taipei, 2015, pp. 415-419. doi: 10.1109/CCST.2015.7389720
Abstract: This paper presents recent findings in maritime trajectory pattern and anomaly detection using long-term data collected via a shore-based network of Automatic Identification System (AIS) receivers. Since the establishment of the AIS network around Taiwan in 2009, the accumulated massive vessel trajectories have been extensively explored under a series of government research projects. The project themes include safety and efficiency in marine transportation, as well as environmental issues. Algorithms and software tools are developed to discover patterns of vessel traffic, routes, and delays, detect various near-miss events, investigate marine casualties, and assist in ship emission inventories. When the massive AIS data are investigated in different aspects and ways, anomalies with security implications emerge.
Keywords: data mining; feature extraction; identification; marine engineering; security of data; software tools; AIS network; anomaly detection; automatic identification system; maritime trajectory mining; pattern detection; security implication; shore-based network; software tool; Grounding; Marine vehicles; Navigation; Safety; Security; Trajectory; Transportation; Automatic Identification System; near miss; trajectory mining (ID#: 16-9913)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7389720&isnumber=7389647
D. Tong and V. Prasanna, “High Throughput Sketch Based Online Heavy Change Detection on FPGA,” 2015 International Conference on ReConFigurable Computing and FPGAs (ReConFig), Mexico City, 2015, pp. 1-8. doi: 10.1109/ReConFig.2015.7393320
Abstract: Significant changes in traffic patterns often indicate network anomalies. Detecting these changes rapidly and accurately is a critical task for network security. Due to the large number of network users and the high throughput requirement of today's networks, traditional per-item-state techniques are either too expensive when implemented using fast storage devices (such as SRAM) or too slow when implemented using storage devices with massive capacity (such as DRAM). Sketch, as a highly accurate data stream summarization technique, significantly reduces the memory requirements while supporting a large number of items. Sketch based techniques are attractive for exploiting the fast on-chip storage of state-of-the-art computing platforms to achieve high throughput. In this work, we propose a fully pipelined Sketch based architecture on FPGA for online heavy change detection. Our architecture forecasts the activity of the network entities based on their history, then reports the entities whose difference between their observed activities and the forecast activities exceeds a given threshold. The post place-and-route results on a state-of-the-art FPGA show that our architecture sustains high throughput of 96–103 Gbps using various configurations of online heavy change detection.
Keywords: computer network security; field programmable gate arrays; pipeline processing; storage management; system-on-chip; telecommunication traffic; DRAM; FPGA; SRAM; Sketch based techniques; data stream summarization technique; fast on-chip storage; field programmable gate array; fully pipelined Sketch based architecture; high throughput Sketch; memory requirements; network anomalies; network entities; network security; online heavy change detection; per-item-state techniques; place-and-route results; storage devices; traffic patterns; Change detection algorithms; Field programmable gate arrays; History; Mathematical model; Memory management; Throughput (ID#: 16-9914)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7393320&isnumber=7393279
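The forecast-and-threshold scheme the abstract describes can be sketched in software with a count-min sketch per epoch. This is a simplified illustration, not the paper's FPGA pipeline: the "forecast" here is just the previous epoch's estimate, and the hashing, widths, and IP data are invented for the example:

```python
import zlib

class CountMinSketch:
    """Minimal count-min sketch (simplified hashing; illustrative only)."""
    def __init__(self, width=64, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, row, key):
        # One deterministic hash per row, salted by the row number
        return zlib.crc32(f"{row}:{key}".encode()) % self.width

    def add(self, key, count=1):
        for row in range(self.depth):
            self.table[row][self._index(row, key)] += count

    def estimate(self, key):
        # Minimum across rows bounds the overcount from hash collisions
        return min(self.table[row][self._index(row, key)]
                   for row in range(self.depth))

def heavy_changes(keys, prev, curr, threshold):
    """Report keys whose current volume deviates from the forecast
    (here the forecast is simply the previous epoch's estimate)."""
    return [k for k in keys
            if abs(curr.estimate(k) - prev.estimate(k)) > threshold]

# Two epochs of (source-IP, packet-count) observations
prev, curr = CountMinSketch(), CountMinSketch()
for ip, n in [("10.0.0.1", 100), ("10.0.0.2", 80)]:
    prev.add(ip, n)
for ip, n in [("10.0.0.1", 105), ("10.0.0.2", 900)]:  # 10.0.0.2 spikes
    curr.add(ip, n)
print(heavy_changes(["10.0.0.1", "10.0.0.2"], prev, curr, threshold=200))
```

The sketch keeps memory fixed regardless of the number of flows, which is what makes the approach suited to the fast on-chip storage the paper targets.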
N. K. Thanigaivelan, E. Nigussie, R. K. Kanth, S. Virtanen and J. Isoaho, “Distributed Internal Anomaly Detection System for Internet-of-Things,” 2016 13th IEEE Annual Consumer Communications & Networking Conference (CCNC), Las Vegas, NV, 2016, pp. 319-320. doi: 10.1109/CCNC.2016.7444797
Abstract: We present an overview of a distributed internal anomaly detection system for the Internet-of-Things. In the detection system, each node monitors its neighbors; if abnormal behavior is detected, the monitoring node blocks the packets from the abnormally behaving node at the data link layer and reports to its parent node. The reporting propagates from child to parent nodes until it reaches the root. A novel control message, the distress propagation object (DPO), is devised to report the anomaly to the subsequent parents and ultimately the edge-router. The DPO message is integrated into the routing protocol for low-power and lossy networks (RPL). The system has configurable profile settings and is able to learn and differentiate nodes' normal and suspicious activities without a need for prior knowledge. It has different subsystems and operation phases at the data link and network layers, which share a common repository in a node. The system uses network fingerprinting to be aware of changes in network topology and nodes' positions without any assistance from a positioning system.
Keywords: Internet of Things; computer network security; routing protocols; DPO message; Internet-of-things; RPL; abnormally behaving node; data link layer; distress propagation object; distributed internal anomaly detection system; edge-router; network fingerprinting; network layers; network topology; parent node; routing protocol; Conferences; Image edge detection; Intrusion detection; Monitoring; Routing protocols; Wireless sensor networks (ID#: 16-9915)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7444797&isnumber=7440162
S. V. Shirbhate, S. S. Sherekar and V. M. Thakare, “A Novel Framework of Dynamic Learning Based Intrusion Detection Approach in MANET,” Computing Communication Control and Automation (ICCUBEA), 2015 International Conference on, Pune, 2015, pp. 209-213. doi: 10.1109/ICCUBEA.2015.46
Abstract: With the growth of security and surveillance systems, a huge amount of audit and network data is being generated. It is an immense challenge for researchers to protect a mobile ad hoc network from malicious nodes, as the topology of the network changes dynamically. A malicious node can easily inject false routes into the network. A traditional method to detect such malicious nodes is to establish a baseline profile of normal network behavior and then flag a node's behavior as anomalous if it deviates from the established profile. As the topology of a MANET constantly changes over time, the simple use of a static baseline profile is not efficient. In this paper, a novel framework is proposed to detect malicious nodes in a MANET. The proposed method uses a k-means clustering-based anomaly detection approach in which the profile is dynamically updated. The approach consists of three main phases: training, testing, and updating. In the training phase, the k-means clustering algorithm is used to establish a normal profile. In the testing phase, the current traffic of a node is checked to determine whether it is normal or anomalous; if normal, the normal profile is updated, otherwise the malicious node is isolated and ignored by the network. To update the normal profile periodically, weighted coefficients and a forgetting equation are used.
Keywords: mobile ad hoc networks; telecommunication security; MANET; anomaly detection approach; dynamic learning; intrusion detection approach; k-means clustering; malicious nodes; mobile ad hoc network; network data; novel framework; security system; static base profile; surveillance system; topology node; Heuristic algorithms; Intrusion detection; Mobile ad hoc networks; Network topology; Routing; Testing; Training; Dynamic Intrusion Detection System; K-means clustering (ID#: 16-9916)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155836&isnumber=7155781
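The dynamic-profile idea in the abstract (learn centroids, flag distant traffic, fold normal traffic back into the profile with a forgetting weight) can be sketched as follows. The feature choice, threshold, and the exponential update standing in for the paper's forgetting equation are all assumptions for illustration:

```python
import math

def nearest(centroids, x):
    """Index of the centroid closest to point x (squared Euclidean distance)."""
    return min(range(len(centroids)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(centroids[i], x)))

def update_profile(centroids, x, threshold, alpha=0.1):
    """If x is close to an existing centroid, fold it into the profile with
    forgetting factor alpha (old profile weighted by 1 - alpha); otherwise
    flag it as anomalous and leave the profile unchanged."""
    i = nearest(centroids, x)
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(centroids[i], x)))
    if dist > threshold:
        return centroids, True  # anomalous: isolate the node, do not learn from it
    updated = list(centroids)
    updated[i] = tuple((1 - alpha) * c + alpha * v
                       for c, v in zip(centroids[i], x))
    return updated, False

# Profile of (packets/s, route-changes/s) centroids built in a training phase
profile = [(10.0, 1.0), (50.0, 2.0)]
profile, is_anom = update_profile(profile, (12.0, 1.2), threshold=5.0)
print(is_anom)  # False: normal traffic, and the profile drifts toward it
_, is_anom = update_profile(profile, (200.0, 30.0), threshold=5.0)
print(is_anom)  # True: far from all centroids -> treated as malicious
```

Because the update only applies to traffic already judged normal, the profile tracks legitimate topology drift without being poisoned by the anomalies it rejects.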
A. Al-Mahrouqi, S. Abdalla and T. Kechadi, “Efficiency of Network Event Logs as Admissible Digital Evidence,” 2015 Science and Information Conference (SAI), London, 2015, pp. 1257-1265. doi: 10.1109/SAI.2015.7237305
Abstract: The large number of event logs generated in a typical network is increasingly becoming an obstacle for forensic investigators to analyze and use to detect and verify malicious activities. Research in the area of network forensics is trying to address the challenge of using network logs to reconstruct attack scenarios by proposing event correlation models. In this paper we introduce a new network forensics model that makes network event logs admissible in a court of law. Our model collects available logs from connected network devices, applies a decision tree algorithm to filter anomalous intrusions, then re-routes the logs to a central repository where event-log management functions are applied.
Keywords: computer network security; decision trees; digital forensics; admissible digital evidence; anomaly intrusion; decision tree algorithm; event correlation models; event-logs management functions; malicious activity detection; network event logs; network forensics model; Computer crime; Computer science; Computers; Data mining; Forensics; Reliability; Authentication of Evidence; Best Evidence; Evidence Reliability; Network Evidence Admissibility; SVMs (ID#: 16-9917)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237305&isnumber=7237120
V. F. Arguedas, F. Mazzarella and M. Vespe, “Spatio-Temporal Data Mining for Maritime Situational Awareness,” OCEANS 2015 - Genova, Genoa, 2015, pp. 1-8. doi: 10.1109/OCEANS-Genova.2015.7271544
Abstract: Maritime Situational Awareness (MSA) is the capability of understanding events, circumstances and activities within and impacting the maritime environment. Nowadays, vessel positioning sensors provide a vast amount of data that could enhance maritime knowledge if analysed and modelled. Vessel positioning data is dynamic and continuous in time and space, requiring spatio-temporal data mining techniques to derive knowledge. In this paper, several spatio-temporal data mining techniques are proposed to enhance MSA, tackling existing challenges such as automatic maritime route extraction and synthetic representation, mapping vessel activities, anomaly detection, and position and track prediction. The aim is to provide a more complete and interactive Maritime Situational Picture (MSP) and, hence, to give operational authorities and policy-makers more capabilities to support the decision-making process. The proposed approaches are evaluated on diverse areas of interest from the Dover Strait to the Icelandic coast.
Keywords: data mining; oceanographic techniques; automatic maritime route extraction; mapping vessels activities; maritime situational awareness; maritime situational picture; spatio-temporal data mining; synthetic representation; Data mining; Knowledge discovery; Ports (Computers); Safety; Security; Synthetic aperture radar; Trajectory (ID#: 16-9918)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7271544&isnumber=7271237
A. Amouri, L. G. Jaimes, R. Manthena, S. D. Morgera and I. J. Vergara-Laurens, “A Simple Scheme for Pseudo Clustering Algorithm for Cross Layer Intrusion Detection in MANET,” 2015 7th IEEE Latin-American Conference on Communications (LATINCOM), Arequipa, 2015, pp. 1-6. doi: 10.1109/LATINCOM.2015.7430139
Abstract: The Mobile Ad Hoc Network (MANET) is a type of wireless network that does not require infrastructure for its operation; therefore, MANETs lack a centralized architecture, which affects the level of security inside the network and increases vulnerability. Although encryption helps to increase the network security level, it is not sufficient to protect against malicious intruders. An intrusion detection scheme is proposed in this paper based on cross layer feature collection from the medium access control (MAC) and network layers. The proposed method employs a hierarchical configuration that avoids using a clustering algorithm and, instead, sequentially activates the promiscuity (ability to sniff all packets transmitted by nodes within radio range) of a node based on its location in the network. The node in this case acts as a pseudo cluster head (PCH) that collects data from its neighboring nodes in each quadrant of the field and then uses this information to calculate an anomaly index (AI) in each quadrant. The mechanism uses a C4.5 decision tree to learn the network behavior under blackhole attack and is able to recognize blackhole attacks with up to 97% accuracy. The presented approach is twofold: it is energy efficient and achieves a high degree of intrusion detection with low overhead.
Keywords: access protocols; cryptography; mobile ad hoc networks; pattern clustering; telecommunication security; C4.5 decision tree; MANET; anomaly index; blackhole attack; clustering algorithm; cross layer intrusion detection; hierarchical configuration; intrusion detection scheme; medium access control; mobile ad hoc network; network security level; pseudocluster head; pseudoclustering scheme; wireless network; Artificial intelligence; Indexes; Intrusion detection; Mobile ad hoc networks; Routing; Routing protocols (ID#: 16-9919)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7430139&isnumber=7430110
B. H. Dang and W. Li, “Impact of Baseline Profile on Intrusion Detection in Mobile Ad Hoc Networks,” SoutheastCon 2015, Fort Lauderdale, FL, 2015, pp. 1-7. doi: 10.1109/SECON.2015.7133013
Abstract: Dynamic topology and limited resources are major limitations that make intrusion detection in a mobile ad hoc network (MANET) a difficult task. In recent years, several anomaly detection techniques were proposed to detect malicious nodes using static and dynamic baseline profiles, which depict normal MANET behaviors. In this research, we investigated different baseline profile methods and conducted a set of experiments to evaluate their effectiveness and efficiency for anomaly detection in MANETs using the C-means clustering technique. The results indicated that a static baseline profile delivers results similar to the other baseline profile methods while requiring the least resource usage, whereas a dynamic baseline profile method requires the most resource usage of all the baseline models.
Keywords: mobile ad hoc networks; mobile computing; pattern clustering; security of data; MANET behaviors; c-means clustering technique; dynamic baseline profiles; intrusion detection; malicious nodes; mobile ad hoc networks; resource usage; static baseline profiles; Ad hoc networks; Adaptation models; Computational modeling; Mobile computing; Routing protocols; Mobile ad hoc networks; anomaly detection; baseline profile; clustering technique; unsupervised learning techniques (ID#: 16-9920)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7133013&isnumber=7132866
E. Basan and O. Makarevich, “An Energy-Efficient System of Counteraction Against Attacks in the Mobile Wireless Sensor Networks,” Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC), 2015 International Conference on, Xi'an, 2015, pp. 403-410. doi: 10.1109/CyberC.2015.72
Abstract: This paper aims to create a model of a secure wireless sensor network (WSN) that is able to defend against most known network attacks without significantly reducing the energy resources of its sensor nodes (SNs). We propose clustering as a method of network organization that reduces energy consumption. Network protection is based on the calculation of trust levels and the establishment of trusted relationships between trusted nodes. The operation of the trust management system is based on a centralized method.
Keywords: mobile communication; trusted computing; wireless sensor networks; SN; WSN; centralized method; energy consumption; energy-efficient system; mobile wireless sensor networks; network protection; sensor nodes; trust management system; Base stations; Clustering algorithms; Nickel; Partitioning algorithms; Routing protocols; Wireless sensor networks; algorithms; anomaly detection; attacks; clustering; protocol; security; trust; trust evaluation (ID#: 16-9921)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7307850&isnumber=7307766
P. Sarigiannidis, E. Karapistoli and A. A. Economides, “VisIoT: A Threat Visualisation Tool for IoT Systems Security,” 2015 IEEE International Conference on Communication Workshop (ICCW), London, 2015, pp. 2633-2638. doi: 10.1109/ICCW.2015.7247576
Abstract: Without doubt, the Internet of Things (IoT) is changing the way people and technology interact. Fuelled by recent advances in networking, communications, computation, software, and hardware technologies, IoT has stepped out of its infancy and is considered the next breakthrough technology in transforming the Internet into a fully integrated Future Internet. However, realising a network of physical objects accessed through the Internet brings a potential threat in the shadow of the numerous benefits. The threat is “security”. Given that Wireless Sensor Networks (WSNs) leverage the potential of IoT quite efficiently, this paper addresses security in the particular, yet broad, context of IP-enabled WSNs. In particular, it proposes a novel threat visualisation tool for such networks, called VisIoT. VisIoT is a human-interactive visual-based anomaly detection system that is capable of monitoring and promptly detecting several devastating forms of security attacks, including wormhole attacks and Sybil attacks. Based on a rigorous, radial visualisation design, VisIoT may expose adversaries conducting one or multiple concurrent attacks against IP-enabled WSNs. The system's visual and anomaly detection efficacy in exposing complex security threats is demonstrated through a number of simulated attack scenarios.
Keywords: Internet of Things; data visualisation; security of data; wireless sensor networks; IP-enabled WSN; IoT systems security; Sybil attacks; VisIoT; complex security threats; concurrent attacks; hardware technologies; human-interactive visual-based anomaly detection system; physical objects; radial visualisation design; security attacks; simulated attack scenarios; software technologies; threat visualisation tool; visual detection efficacy; wormhole attacks; Data visualization; Engines; Monitoring; Routing; Security; Visualization; Wireless sensor networks (ID#: 16-9922)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7247576&isnumber=7247062
K. M. A. Alheeti, A. Gruebler and K. D. McDonald-Maier, “On the Detection of Grey Hole and Rushing Attacks in Self-Driving Vehicular Networks,” Computer Science and Electronic Engineering Conference (CEEC), 2015 7th, Colchester, 2015, pp. 231-236. doi: 10.1109/CEEC.2015.7332730
Abstract: Vehicular ad hoc networks play an important role in the success of a new class of vehicles, i.e. self-driving and semi self-driving vehicles. These networks provide safety and comfort to passengers, drivers and the vehicles themselves. These vehicles depend heavily on external communication to perceive the surrounding environment through the exchange of cooperative awareness messages (CAMs) and control data. VANETs are exposed to many types of attacks, such as black hole, grey hole and rushing attacks. In this paper, we present an intelligent Intrusion Detection System (IDS) which relies on anomaly detection to protect external communications from grey hole and rushing attacks. Many researchers agree that grey hole attacks in VANETs are a substantial challenge because they exhibit two distinct types of behaviour: normal and abnormal. These attacks try to prevent transmission between vehicles and roadside units and have a direct and negative impact on the wide acceptance of this new class of vehicles. The proposed IDS is based on features that have been extracted from a trace file generated in a network simulator. In our paper, we used a feed-forward neural network and a support vector machine for the design of the intelligent IDS. The proposed system uses only significant features extracted from the trace file. Our research concludes that a reduction in the number of features leads to a higher detection rate and a decrease in false alarms.
Keywords: cooperative systems; feedforward neural nets; security of data; support vector machines; traffic engineering computing; vehicular ad hoc networks; CAM; IDS; VANET; cooperative awareness messages; feedforward neural network; grey hole; intelligent intrusion detection system; rushing attacks; self-driving vehicular networks; semi self-driving vehicles; support vector machine; Ad hoc networks; Feature extraction; Intrusion detection; Roads; Routing protocols; Vehicles; intrusion detection system; security; self-driving car; semi self-driving car (ID#: 16-9923)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7332730&isnumber=7332684
T. Gonnot, W. J. Yi, E. Monsef and J. Saniie, “Robust Framework for 6LoWPAN-Based Body Sensor Network Interfacing with Smartphone,” 2015 IEEE International Conference on Electro/Information Technology (EIT), DeKalb, IL, 2015, pp. 320-323. doi: 10.1109/EIT.2015.7293361
Abstract: This paper presents the design of a robust framework for a body sensor network. In this framework, sensor nodes communicate using 6LoWPAN, running on the Contiki operating system, which is designed for energy efficiency and configuration flexibility. Furthermore, an embedded router is implemented using a Raspberry Pi to bridge the information to a Bluetooth-capable smartphone. Consequently, the smartphone can process, analyze, compress and send the data to the cloud using its data connection. One of the major applications of this framework is home patient monitoring, with 24/7 data collection capability. The collected data can be sent to a doctor at any time, or only when an anomaly is detected.
Keywords: Bluetooth; body sensor networks; computer network security; data analysis; data compression; home networks; operating systems (computers); patient monitoring; smart phones; telecommunication network routing; 6LoWPAN-based body sensor network; Bluetooth capable smartphone; Contiki operating system; Raspberry Pi; anomaly detection; configuration flexibility; data collection capability; data connection; data process; data sending; embedded router; energy efficiency; home patient monitoring; robust framework; sensor nodes; IEEE 802.15 Standard; Reliability; Routing protocols; Servers; Wireless communication (ID#: 16-9924)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7293361&isnumber=7293314
K. M. A. Alheeti, A. Gruebler, K. D. McDonald-Maier and A. Fernando, “Prediction of DoS Attacks in External Communication for Self-Driving Vehicles Using a Fuzzy Petri Net Model,” 2016 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, 2016, pp. 502-503. doi: 10.1109/ICCE.2016.7430705
Abstract: In this paper we propose a security system to protect external communications for self-driving and semi self-driving cars. The proposed system can detect malicious vehicles in an urban mobility scenario. The anomaly detection system is based on fuzzy petri nets (FPN) to detect packet dropping attacks in vehicular ad hoc networks. The experimental results show the proposed FPN-IDS can successfully detect DoS attacks in external communication of self-driving vehicles.
Keywords: Petri nets; automobiles; computer network security; fuzzy systems; vehicular ad hoc networks; DoS attack prediction; anomaly detection system; external communications; fuzzy Petri net model; malicious vehicle detection; packet dropping attack detection; security system; self-driving vehicles; semiself-driving cars; urban mobility scenario; Ad hoc networks; Intrusion detection; Measurement; Petri nets; Routing protocols; Vehicles; FPN; IDS; Security; platoon; self-driving cars (ID#: 16-9925)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7430705&isnumber=7430494
K. Limthong, K. Fukuda, Y. Ji and S. Yamada, “Weighting Technique on Multi-Timeline for Machine Learning-Based Anomaly Detection System,” Computing, Communication and Security (ICCCS), 2015 International Conference on, Pamplemousses, 2015, pp. 1-6. doi: 10.1109/CCCS.2015.7374168
Abstract: Anomaly detection is one of the crucial issues of network security. Many techniques have been developed for certain application domains, and recent studies show that machine learning techniques offer several advantages for detecting anomalies in network traffic. One issue in applying such techniques to real networks is understanding how strongly the learning algorithm should be biased toward new traffic over old traffic. In this paper, we investigate how the time period used for learning affects the performance of anomaly detection in Internet traffic. To this end, we introduce a weighting technique that controls the influence of recent and past traffic data in an anomaly detection system. Experimental results show that the weighting technique improves detection performance by 2.7–112% for several learning algorithms, such as the multivariate normal distribution, k-nearest neighbor, and one-class support vector machine.
Keywords: learning (artificial intelligence); security of data; support vector machines; Internet traffic; k-nearest neighbor; machine learning-based anomaly detection system; multivariate normal distribution; network security; network traffic; support vector machine; weighting technique; Delays; Routing; Routing protocols; Throughput; Vehicular ad hoc networks; anomaly detection; machine learning; multiple timeline; weighting technique (ID#: 16-9926)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7374168&isnumber=7374113
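The time-weighting idea in the abstract above can be sketched as follows. The exponential decay form, the half-life, and the multivariate-normal scorer are assumptions for illustration, not the authors' exact formulation: recent traffic samples get larger weight when fitting a Gaussian baseline, and new samples are scored by Mahalanobis distance.

```python
import numpy as np

def time_decay_weights(n, half_life=50.0):
    """Exponential decay: the most recent sample has weight 1, older ones less."""
    age = np.arange(n)[::-1]            # sample 0 is the oldest
    return 0.5 ** (age / half_life)

def fit_weighted_gaussian(X, w):
    """Weighted mean and covariance of traffic features (rows = time steps)."""
    w = w / w.sum()
    mu = w @ X
    diff = X - mu
    cov = diff.T @ (diff * w[:, None])
    return mu, cov

def anomaly_score(x, mu, cov):
    """Squared Mahalanobis distance: large values indicate anomalies."""
    inv = np.linalg.inv(cov + 1e-6 * np.eye(len(mu)))
    d = x - mu
    return float(d @ inv @ d)

rng = np.random.default_rng(0)
X = rng.normal([1.0, 1.0], 0.1, size=(200, 2))   # "normal" traffic features
mu, cov = fit_weighted_gaussian(X, time_decay_weights(len(X)))
print(anomaly_score(np.array([1.0, 1.0]), mu, cov) <
      anomaly_score(np.array([5.0, 5.0]), mu, cov))  # True
```

The same weights could be applied to the other learners the paper evaluates, e.g. by weighting neighbor votes in k-nearest neighbor or sample weights in a one-class SVM.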
S. Banerjee, R. Nandi, R. Dey and H. N. Saha, “A Review on Different Intrusion Detection Systems for MANET and Its Vulnerabilities,” Computing and Communication (IEMCON), 2015 International Conference and Workshop on, Vancouver, BC, 2015, pp. 1-7. doi: 10.1109/IEMCON.2015.7344466
Abstract: In recent years, mobile ad hoc networks (MANETs) have become a very popular research topic. By providing communication in the absence of fixed infrastructure, MANETs are an attractive technology for many applications, such as rescue and military operations, environmental monitoring, and conferences. However, this flexibility introduces new security threats: because a MANET is an infrastructure-less network of vulnerable nodes, data and information must be protected from attackers. Securing such a demanding network is a big challenge, and intrusion detection systems (IDSs) exist to secure MANETs by detecting where they are weak. In this review paper, we discuss MANETs and their vulnerabilities, and how they can be addressed using different IDS techniques.
Keywords: data protection; mobile ad hoc networks; security of data; IDS; fixed infrastructure MANET vulnerability; information protection; infrastructure-less network; intrusion detection system; mobile ad hoc network security; security threat; Intrusion detection; Mobile ad hoc networks; Monitoring; Protocols; Routing; Anomaly Detection; EAACK; MANET (ID#: 16-9927)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7344466&isnumber=7344420
M. Azghani and S. Sun, “Low-Rank Block Sparse Decomposition Algorithm for Anomaly Detection in Networks,” 2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), Hong Kong, 2015, pp. 807-810. doi: 10.1109/APSIPA.2015.7415384
Abstract: In this paper, a method is proposed for anomaly detection in wireless networks. The main problem addressed is detecting malfunctioning sub-graphs of the network that give rise to anomalies with a block-sparse structure. The proposed algorithm detects anomalies by exploiting the low-rank property of the data matrix and the block sparsity of the outliers. The problem thus reduces to a compressed block-sparse plus low-rank decomposition, which is solved with the aid of the ADMM technique. Simulation results indicate that the suggested method surpasses the competing technique, especially at higher block-sparsity rates.
Keywords: graph theory; matrix decomposition; matrix multiplication; network theory (graphs); radio networks; signal processing; telecommunication security; ADMM technique; alternating direction method of multipliers; block sparsity; compressed block sparse plus low rank decomposition; data matrix; low rank block sparse decomposition algorithm; low-rank property; malfunctioning subgraphs; network anomaly detection; wireless network; Cost function; Matrix decomposition; Routing; Simulation; Sparse matrices; Wireless networks; Anomaly detection; Compressed Sensing; Low rank minimization (ID#: 16-9928)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7415384&isnumber=7415286
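A low-rank plus block-sparse decomposition like the one above is typically built from two proximal operators. The sketch below shows those two building blocks only, not the paper's full ADMM iteration; block size and thresholds are illustrative.

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: proximal operator of the nuclear norm,
    used to promote a low-rank component."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def block_soft_threshold(M, tau, block_cols):
    """Group soft-thresholding over column blocks: proximal operator of an
    l2,1-type norm, used to promote block-sparse anomalies. Blocks whose
    Frobenius norm falls below tau are zeroed; larger blocks are shrunk."""
    S = np.zeros_like(M)
    for j in range(0, M.shape[1], block_cols):
        B = M[:, j:j + block_cols]
        n = np.linalg.norm(B)
        if n > tau:
            S[:, j:j + block_cols] = (1.0 - tau / n) * B
    return S

M = np.zeros((4, 4))
M[:, :2] = 5.0          # strong anomaly on the first column block
print(np.count_nonzero(block_soft_threshold(M, 1.0, 2)[:, :2]))  # 8
```

In an ADMM solver these two operators would be applied alternately to the low-rank and block-sparse splitting variables, together with a dual update.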
M. Abdelhaq, R. Alsaqour, M. Ismail and S. Abdelhaq, “Dendritic Cell Fuzzy Logic Algorithm over Mobile Ad Hoc Networks,” 2015 6th International Conference on Intelligent Systems, Modelling and Simulation, Kuala Lumpur, 2015, pp. 64-69. doi: 10.1109/ISMS.2015.36
Abstract: A mobile ad hoc network (MANET) is an open wireless network of mobile, decentralized, and self-organized nodes with limited energy and bandwidth resources. The MANET environment is vulnerable to dangerous attacks, such as flooding-based attacks, which paralyze the functionality of the whole network. This paper introduces a hybrid intelligent algorithm that meets the challenge of protecting a MANET with effective security and network performance. This objective is fulfilled by drawing on the anomaly detection performed by dendritic cells (DCs) in the human immune system and the accurate decision-making capability of fuzzy logic theory, yielding the Dendritic Cell Fuzzy Algorithm (DCFA). DCFA combines the relevant features of danger-theory-based AISs and fuzzy-logic-based systems. DCFA is verified using QualNet v5.0.2 to detect the resource consumption attack. The results show that DCFA performs the detection operation efficiently, with high network and security performance.
Keywords: decision making; fuzzy logic; mobile ad hoc networks; DCFA; MANET; QualNet v5.0.2; danger theory-based AIS; decision-making functionality; dendritic cell fuzzy logic algorithm; open wireless network; Context; Fuzzy logic; Immune system; Mobile ad hoc networks; Radiation detectors; Routing protocols; Security; artificial immune system; danger theory; fuzzy logic theory; mobile ad hoc network; resource consumption attack (ID#: 16-9929)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7311211&isnumber=7311187
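The dendritic-cell inspiration above can be sketched in miniature. This is a heavily simplified signal-fusion step, not DCFA: the signal names and weights are hypothetical, and the real dendritic cell algorithm accumulates signals over time across a population of cells rather than classifying from a single reading.

```python
# Simplified dendritic-cell signal fusion (hypothetical weights, not DCFA's):
# each observation contributes to a "semi-mature" (safe) and a "mature"
# (danger) score; the larger score decides the context.

WEIGHTS = {               # signal -> (semi-mature weight, mature weight)
    "pamp":   (0.0, 2.0),   # strong evidence of attack
    "danger": (0.0, 1.0),   # weak evidence of attack
    "safe":   (1.0, -1.0),  # evidence of normal behaviour
}

def classify(signals):
    """signals: dict signal-name -> intensity in [0, 1]."""
    semi = sum(v * WEIGHTS[k][0] for k, v in signals.items())
    mature = sum(v * WEIGHTS[k][1] for k, v in signals.items())
    return "anomalous" if mature > semi else "normal"

print(classify({"pamp": 0.8, "danger": 0.5, "safe": 0.1}))  # anomalous
print(classify({"pamp": 0.0, "danger": 0.1, "safe": 0.9}))  # normal
```

In DCFA, fuzzy logic would replace the crisp comparison at the end with membership functions and fuzzy rules over the two scores.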
S. Merat and W. Almuhtadi, “Artificial Intelligence Application for Improving Cyber-Security Acquirement,” Electrical and Computer Engineering (CCECE), 2015 IEEE 28th Canadian Conference on, Halifax, NS, 2015, pp. 1445-1450. doi: 10.1109/CCECE.2015.7129493
Abstract: The main focus of this paper is the improvement of machine learning where a number of different types of computer processes can be mapped in a multitasking environment. A software mapping and modelling paradigm named SHOWAN is developed to learn and characterize the cyber-awareness behaviour of a computer process against multiple concurrent threads. The examined process initially tended to manage numerous tasks poorly, but it gradually learned to acquire and control tasks in the context of anomaly detection. Finally, SHOWAN plots the abnormal activities of a manually projected task and compares them with the loading trends of other tasks within the group.
Keywords: learning (artificial intelligence); security of data; SHOWAN; anomaly detection; artificial intelligence application; computer process; concurrent threads; cyber awareness behaviour; cyber-security acquirement; machine learning; modelling paradigm; multitasking environment; software mapping; Artificial intelligence; Indexes; Instruction sets; Message systems; Routing; Security; Cyber Multitasking Performance; Cyber-Attack; Cyber-Security; Intrinsically locked; Non-maskable task; Normative Model; Queuing Management; Task Prioritization; synchronized thread (ID#: 16-9930)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7129493&isnumber=7129089
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Sybil Attacks 2015 |
A Sybil attack occurs when a node in a network claims multiple identities. The attacker may subvert the entire reputation system of the network by creating a large number of false identities and using them to gain influence. For the Science of Security community, these attacks are relevant to resilience, metrics, and composability. The research cited here was presented in 2015.
K. Rabieh, M. M. E. A. Mahmoud, T. N. Guo and M. Younis, “Cross-Layer Scheme for Detecting Large-Scale Colluding Sybil Attack in VANETs,” 2015 IEEE International Conference on Communications (ICC), London, 2015, pp. 7298-7303. doi: 10.1109/ICC.2015.7249492
Abstract: In Vehicular Ad Hoc Networks (VANETs), roadside units (RSUs) need to know the number of vehicles in their vicinity for use in traffic management. However, an attacker may launch a Sybil attack by pretending to be multiple simultaneous vehicles. The attack is severe when a vehicle colludes with others to use valid credentials to authenticate the Sybil vehicles. If RSUs cannot identify such an attack, they will report the wrong number of vehicles to the traffic management center, which may result in disseminating wrong traffic instructions to vehicles. In this paper, we propose a cross-layer scheme that enables RSUs to identify such Sybil vehicles. Since Sybil vehicles do not exist at their claimed locations, our scheme is based on verifying the vehicles' locations. A challenge packet is sent to the vehicle's claimed location using a directional antenna to detect the presence of a vehicle. If the vehicle is at the expected location, it should be able to receive the challenge and send back a valid response packet. To reduce overhead, rather than sending challenge packets to all vehicles all the time, packets are sent only when there is suspicion of a Sybil attack. We also discuss several Sybil attack alarming techniques. The evaluation results demonstrate that our scheme achieves a high detection rate with a low probability of false alarm. Additionally, the scheme requires acceptable communication and computation overhead.
Keywords: antenna radiation patterns; directive antennas; probability; road safety; road traffic; vehicular ad hoc networks; Sybil attack alarming technique; VANET; cross-layer scheme; directional antenna; false alarm probability; large-scale colluding Sybil attack detection; road side unit; traffic management; vehicular ad hoc network; Accidents; Directional antennas; Information systems; Public key; Roads; Vehicles; Location verification; Sybil attack; cross layer scheme; false location reporting attack (ID#: 16-10105)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249492&isnumber=7248285
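The challenge-response step above can be sketched abstractly. This toy omits the directional-antenna physics and the paper's packet format entirely; the keyed-HMAC construction is an assumption standing in for "only the vehicle actually at the claimed location can answer."

```python
import os, hmac, hashlib

# Toy challenge-response location check: the RSU beams a random nonce toward
# the claimed location; only a vehicle physically present there receives the
# nonce and can return the keyed response.

def rsu_challenge():
    return os.urandom(16)

def vehicle_response(shared_key, nonce):
    return hmac.new(shared_key, nonce, hashlib.sha256).digest()

def rsu_verify(shared_key, nonce, response):
    expected = hmac.new(shared_key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = b"vehicle-42-session-key"
nonce = rsu_challenge()

# Real vehicle at the claimed location: receives the nonce, answers correctly.
assert rsu_verify(key, nonce, vehicle_response(key, nonce))

# Sybil identity not at the location: never sees the nonce, cannot answer.
assert not rsu_verify(key, nonce, vehicle_response(key, b"guessed nonce"))
```

The security of the real scheme rests on the channel (a narrow directional beam), not the cryptography; the HMAC here simply binds the response to the unguessable nonce.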
M. Mulla and S. Sambare, “Efficient Analysis of Lightweight Sybil Attack Detection Scheme in Mobile Ad Hoc Networks,” Pervasive Computing (ICPC), 2015 International Conference on, Pune, 2015, pp. 1-6. doi: 10.1109/PERVASIVE.2015.7086988
Abstract: Mobile ad hoc networks (MANETs) are vulnerable to different kinds of attacks, such as the Sybil attack. In this paper we present a practical evaluation of an efficient method for detecting lightweight Sybil attacks. In a Sybil attack, a network attacker skews trust counts by increasing its own trust and decreasing that of others, or takes over the identities of other mobile nodes in the MANET. This kind of attack results in major information loss, and hence misinterpretation in the network; it also undermines trustworthiness among mobile nodes and disturbs data routing with the aim of dropping packets. Many methods have previously been presented by different researchers to mitigate such attacks in MANETs, each with its own advantages and disadvantages. In this paper, we study an efficient method of detecting the lightweight Sybil attack that identifies the new identities of Sybil attackers without using any additional resources such as a trusted third party or extra hardware. The method investigated is based on Received Signal Strength (RSS), which it uses to differentiate between legitimate and Sybil identities. The practical analysis of this work is done using Network Simulator (NS2), measuring throughput, end-to-end delay, and packet delivery ratio under different network conditions.
Keywords: mobile ad hoc networks; MANET; RSS; lightweight Sybil attack detection scheme; major information loss; network simulator; received signal strength; trustworthiness; Delays; Hardware; Mobile ad hoc networks; Mobile computing; Security; Throughput; DCA: Distributed Certificate authority; Mobile Ad hoc Network; Packet Delivery Ratio; Received Signal Strength; Sybil Attack; Threshold; UB: Upper bound (ID#: 16-10106)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7086988&isnumber=7086957
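The RSS intuition behind schemes like the one above is that identities transmitted from the same physical radio show nearly identical signal-strength readings at nearby observers, while distinct devices rarely do. A toy version of that check (the distance metric and threshold are assumptions, not the paper's):

```python
# Compare RSSI vectors of pairs of identities, each vector holding the
# readings (dBm) seen for that identity at the same set of observers.

def rssi_distance(a, b):
    """Worst-case per-observer RSSI gap between two identities."""
    return max(abs(x - y) for x, y in zip(a, b))

def suspected_sybil_pairs(observations, threshold=2.0):
    """Flag identity pairs whose RSSI profiles are suspiciously close."""
    ids = sorted(observations)
    return [(i, j) for k, i in enumerate(ids) for j in ids[k + 1:]
            if rssi_distance(observations[i], observations[j]) < threshold]

obs = {
    "A":  [-40.0, -62.0, -71.0],   # legitimate node
    "B":  [-55.0, -48.0, -80.0],   # legitimate node
    "S1": [-50.1, -66.0, -75.2],   # two identities of one attacker device
    "S2": [-50.4, -65.7, -75.0],
}
print(suspected_sybil_pairs(obs))  # [('S1', 'S2')]
```

A deployed detector would smooth RSSI over time and tolerate fading before comparing profiles, since raw readings fluctuate.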
M. AlRubaian, M. Al-Qurishi, S. M. M. Rahman and A. Alamri, “A Novel Prevention Mechanism for Sybil Attack in Online Social Network,” Web Applications and Networking (WSWAN), 2015 2nd World Symposium on, Sousse, 2015, pp. 1-6. doi: 10.1109/WSWAN.2015.7210347
Abstract: In an Online Social Network (OSN), one of the major attacks is the Sybil attack, in which an attacker subverts the system by creating a large number of pseudonymous identities (i.e., fake user accounts) and using them to establish as many friendships as possible with honest users, gaining disproportionately large influence in the network. Ultimately, Sybil accounts lead to many malicious activities in the online social network. Detecting these kinds of fake accounts is a big challenge. In this paper, we propose a prevention mechanism for the Sybil attack in OSNs based on pairing-based and identity-based cryptography. When any user wants to join a group during its formation, the user must pass a trapdoor, which is built on pairing-based cryptography and consists of a challenge-and-response process. Only authenticated users can pass the trapdoor; fake users cannot, so exclusively genuine users can join a group. Thus, Sybil nodes are unable to join the group and the Sybil attack is prevented in the OSN.
Keywords: cryptography; data analysis; social networking (online); OSN; Sybil attack; identity-based cryptography; online social network; pairing cryptography; prevention mechanism; Authentication; Computers; Cryptography; Peer-to-peer computing; Protocols; Social network services; Online Social Network (OSN); Pairing-based cryptography; Sybil Attack (ID#: 16-10107)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7210347&isnumber=7209078
M. K. Saggi and R. Kaur, “Isolation of Sybil Attack in VANET Using Neighboring Information,” Advance Computing Conference (IACC), 2015 IEEE International, Banglore, 2015, pp. 46-51. doi: 10.1109/IADCC.2015.7154666
Abstract: The advancement of wireless communication has led researchers to conceive and develop the idea of vehicular networks, also known as vehicular ad hoc networks (VANETs). In a Sybil attack, the network is destabilized by a malicious node that creates innumerable fraudulent identities in order to disrupt network protocols. In this paper, a novel technique is proposed to detect and isolate Sybil attacks on vehicles, improving the proficiency of the network. It works in two phases. In the first phase, the RSU registers nodes by checking the credentials they offer. If these are successfully verified, the second phase starts and allots identification to vehicles; the RSU then gathers information from neighboring nodes, defines a threshold speed limit for them, and verifies whether that defined speed limit is exceeded. The multiple identities generated by a Sybil attack are very harmful to the network and can be misused to flood wrong information over the network. Simulation results show that the proposed technique increases the probability of detection and reduces the incidence of Sybil attacks.
Keywords: computer network security; vehicular ad hoc networks; RSU; Sybil attack; VANET; credentials; fraudulent identities; malicious node; neighboring nodes; networks protocols disruption; threshold speed limit; threshold value; Mobile nodes; Monitoring; Protocols; Roads; Routing; Vehicles; Vehicular ad hoc networks; Collision; MANET; Malicious node; Sybil Attack; V2V communication (ID#: 16-10108)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7154666&isnumber=7154658
S. J. Samuel and B. Dhivya, “An Efficient Technique to Detect and Prevent Sybil Attacks in Social Network Applications,” Electrical, Computer and Communication Technologies (ICECCT), 2015 IEEE International Conference on, Coimbatore, 2015, pp. 1-3. doi: 10.1109/ICECCT.2015.7226059
Abstract: A Sybil attack is an attack where malicious users obtain multiple fake identities and access the system from multiple different nodes. It is an attack wherein a reputation system is subverted by falsifying identities in peer-to-peer networks. Communication between users only requires that the users be part of the same network, and all kinds of distributed systems are vulnerable to Sybil attacks. An attacker can easily create a number of duplicate identities (called Sybils) to pollute the system with fake information and degrade its performance. In this paper, we propose an algorithm to improve the efficiency of blocking a Sybil attack by combining a neighborhood similarity method with an improved knowledge-discovery tree-based algorithm. This algorithm is proposed to block Sybil attacks on social websites such as Facebook and Twitter.
Keywords: data mining; peer-to-peer computing; security of data; social networking (online); trusted computing; Facebook; P2P network; Sybil attack prevention; Sybil trust detection; Twitter; distributed system; knowledge discovery tree based algorithm; neighborhood similarity method; peer to peer network; social network application; Peer-to-peer computing; P2P Security; Security with Trusted Relationship; Social Network Security; Sybil attack (ID#: 16-10109)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7226059&isnumber=7225915
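The neighborhood-similarity idea above rests on the observation that Sybil accounts created by one attacker often share most of their friends. A toy version using Jaccard similarity (the metric and threshold are illustrative, not the paper's exact method):

```python
# Flag pairs of accounts whose friend sets overlap suspiciously much.

def jaccard(a, b):
    """Jaccard similarity of two friend sets: |a ∩ b| / |a ∪ b|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def suspicious_pairs(friends, threshold=0.8):
    users = sorted(friends)
    return [(u, v) for i, u in enumerate(users) for v in users[i + 1:]
            if jaccard(friends[u], friends[v]) >= threshold]

friends = {
    "alice": {"bob", "carol", "dave"},
    "bob":   {"alice", "erin"},
    "syb1":  {"alice", "bob", "carol", "dave"},     # attacker's accounts
    "syb2":  {"alice", "bob", "carol", "dave", "erin"},
}
print(suspicious_pairs(friends))  # [('syb1', 'syb2')]
```

A tree-based refinement, as in the paper, would then prune flagged pairs using further evidence rather than blocking on similarity alone.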
P. Li and R. Lu, “A Sybil Attack Detection Scheme for Privacy-Preserving Mobile Social Networks,” 2015 10th International Conference on Information, Communications and Signal Processing (ICICS), Singapore, 2015, pp. 1-5. doi: 10.1109/ICICS.2015.7459922
Abstract: With the pervasiveness of smart phones, mobile social networking (MSN) has received extensive attention in recent years. However, while providing many opportunities to mobile users, MSN also poses new security challenges. The Sybil attack is one such challenge, in which a malicious user interacts with other mobile users multiple times by creating fake identities, misleading them into making decisions that benefit the malicious user. In this paper, we consider a privacy-preserving MSN and propose a Sybil attack detection scheme, called SADS, to effectively prevent Sybil attacks and allow all mobile users to detect malicious users while users' privacy is still preserved. Specifically, based on the Paillier homomorphic encryption technique, the proposed SADS scheme can efficiently detect Sybil attacks in MSN. Detailed security analysis shows that users' privacy is well protected under the proposed scheme. In addition, an optimized system design is proposed to further improve performance.
Keywords: cryptography; data privacy; mobile computing; optimisation; smart phones; social networking (online); telecommunication security; Paillier homomorphic encryption technique; SADS; malicious user; mobile social networks; mobile users; privacy-preserving MSN; sybil attack detection scheme; user privacy; Encryption; Mobile communication; Mobile computing; Privacy; Social network services; Mobile social network; Paillier cryptosystem; Privacy-preserving; Sybil attack (ID#: 16-10110)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7459922&isnumber=7459813
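The Paillier property the scheme above builds on is additive homomorphism: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a server can aggregate encrypted reports without reading them. A textbook sketch with toy primes (far too small for real use, and not the paper's protocol):

```python
from math import gcd

# Textbook Paillier keygen with tiny primes (illustration only).
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                                   # standard choice of generator
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)         # modular inverse (Python 3.8+)

def encrypt(m, r):
    """m: plaintext < n; r: random < n, coprime to n."""
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1 = encrypt(17, 99)
c2 = encrypt(25, 101)
# Additive homomorphism: E(m1) * E(m2) mod n^2 decrypts to m1 + m2.
assert decrypt((c1 * c2) % n2) == 42
```

In practice the randomness r is drawn fresh per encryption and the modulus is 2048+ bits; the toy values here only demonstrate the algebra.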
M. Alimohammadi and A. A. Pouyan, “Sybil Attack Detection Using a Low Cost Short Group Signature In VANET,” Information Security and Cryptology (ISCISC), 2015 12th International Iranian Society of Cryptology Conference on, Rasht, 2015, pp. 23-28. doi: 10.1109/ISCISC.2015.7387893
Abstract: The vehicular ad hoc network (VANET) has attracted the attention of many researchers in recent years. It enables value-added services such as road safety and traffic management. Security issues are among the challenging problems in this network. The Sybil attack, in which an attacker tries to forge multiple identities, is one of the serious security threats. One of the main purposes of creating invalid identities is disruption of voting-based systems. In this paper we propose a secure protocol that addresses the two conflicting goals of privacy and Sybil attack detection in vehicle-to-vehicle (V2V) communications in VANET. The proposed protocol is based on the Boneh-Shacham (BS) short group signature scheme and batch verification. Experimental results demonstrate the efficiency and applicability of the proposed protocol for meeting the requirements of privacy and Sybil attack detection in V2V communications in VANET.
Keywords: protocols; road safety; road traffic; vehicular ad hoc networks; Boneh-Shacham short group signature scheme; Sybil attack detection; V2V communications; VANET; batch verification; protocol; traffic management; value-added services; vehicle to vehicle communications; vehicular ad hoc network; voting based systems; Authentication; Privacy; Protocols; Vehicles; Vehicular ad hoc networks; Yttrium; Sybil attack; Vehicular ad-hoc network; privacy; vehicle to vehicle communication (ID#: 16-10111)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7387893&isnumber=7387888
S. Thawani and H. Upadhyay, “Securing TORA Against Sybil Attack in MANETs,” Futuristic Trends on Computational Analysis and Knowledge Management (ABLAZE), 2015 International Conference on, Noida, 2015, pp. 475-478. doi: 10.1109/ABLAZE.2015.7155042
Abstract: Ensuring security in a Mobile Ad-hoc Network (MANET) is quite challenging because of its open nature, lack of infrastructure, and high node mobility. A MANET is a fast-changing network in the form of a decentralized wireless system. It requires a unique, distinct, and persistent identity per node in order to provide security, and it has become an indivisible part of communication for mobile devices. This work focuses on securing the Temporally Ordered Routing Algorithm (TORA) against the Sybil attack. TORA is based on a family of link-reversal algorithms. It is a highly adaptive distributed routing algorithm used in MANETs that can provide multiple loop-free routes to any destination using route creation, route maintenance, and route erasure functions. The Sybil attack is a serious threat to wireless networks: the attacker enters the network and creates multiple identities, then disrupts the network by participating in communication through these fabricated nodes, causing a huge loss of network resources. Such networks can be protected using network-failure and firewall detection schemes to detect the attack and minimize its effects. The proposed approach is expected to secure TORA through its implementation, and network performance factors are taken into consideration to verify the efficiency of the modified TORA in a MANET environment.
Keywords: mobile ad hoc networks; routing protocols; telecommunication security; MANETs; Sybil attack; TORA; adaptive distributing routing algorithm; firewall detection schemes; link reversal algorithm; mobile ad-hoc network; network failure schemes; route creation functions; route erasure functions; route maintenance functions; temporally ordered routing protocol algorithm; Ad hoc networks; Mobile communication; Mobile computing; Peer-to-peer computing; Routing; Routing protocols; Security; Mobile Ad-hoc Networks; Security; Sybil Attack (ID#: 16-10112)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155042&isnumber=7154914
R. John, J. P. Cherian and J. J. Kizhakkethottam, “A Survey of Techniques to Prevent Sybil Attacks,” Soft-Computing and Networks Security (ICSNS), 2015 International Conference on, Coimbatore, 2015, pp. 1-6. doi: 10.1109/ICSNS.2015.7292385
Abstract: Any decentralized, distributed network is vulnerable to the Sybil attack, wherein a malicious node masquerades as several different nodes, called Sybil nodes, disrupting the proper functioning of the network. A Sybil attacker can create more than one identity on a single physical device in order to launch a coordinated attack on the network, or can switch identities in order to weaken the detection process, thus promoting lack of accountability in the network. In this paper, different types of Sybil attacks are discussed, including those occurring in peer-to-peer reputation systems, self-organizing networks, and social network systems. Various methods that have been suggested over time to decrease or eliminate their risk completely are also analysed.
Keywords: computer network security; Sybil attack prevention; Sybil nodes; coordinated attack; decentralized-distributed network; malicious node; peer-to-peer reputation systems; physical device; self-organizing networks; social network systems; Access control; Ad hoc networks; Computers; Peer-to-peer computing; Social network services; Wireless sensor networks; Identity-based attacks; MANET; Sybil attack (ID#: 16-10113)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292385&isnumber=7292366
A. S. Lal and R. Nair, “Region Authority Based Collaborative Scheme to Detect Sybil Attacks in VANET,” 2015 International Conference on Control Communication & Computing India (ICCC), Trivandrum, 2015, pp. 664-668. doi: 10.1109/ICCC.2015.7432979
Abstract: Vehicular ad hoc networks (VANETs) are increasingly used for traffic control, accident avoidance, and the management of toll stations and public areas. Security and privacy are two major concerns in VANETs. Most privacy-preserving schemes are susceptible to the Sybil attack, in which a malicious user generates multiple identities to simulate multiple vehicles. This paper proposes an improvement to the scheme CP2DAP [1], which detects Sybil attacks through the cooperation of a central authority and a set of fixed nodes called roadside units (RSUs). The proposed modification is a region-authority-based collaborative scheme for detecting Sybil attacks and a revocation method using a Bloom filter to prevent further attacks by malicious vehicles. Detecting a Sybil attack in this manner does not require any vehicle to disclose its identity; hence privacy is preserved at all times.
Keywords: security of data; vehicular ad hoc networks; RSU; Sybil attacks; VANET; bloom filter; privacy preserving schemes; region authority based collaborative scheme; road-side units; Privacy; Radiofrequency identification; Roads; Security; Trajectory; Vehicles; Vehicular ad hoc networks; Bloom filter; Region Authority; Sybil Attack (ID#: 16-10114)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7432979&isnumber=7432856
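A Bloom filter suits revocation lists like the one above because membership queries never miss a revoked identity (no false negatives), at the cost of a small false-positive rate. A minimal sketch (sizes and hash count are illustrative, not the paper's parameters):

```python
import hashlib

class BloomFilter:
    """m bits, k hash positions per item derived from salted SHA-256."""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

revoked = BloomFilter()
revoked.add("vehicle-666")
assert "vehicle-666" in revoked           # a revoked identity is always caught
print("vehicle-007" in revoked)            # almost certainly False here
```

Vehicles or RSUs can check credentials against the compact filter locally; a (rare) false positive can be resolved by querying the region authority.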
G. Noh and H. Oh, “AuRo-Rec: An Unsupervised and Robust Sybil Attack Defense in Online Recommender Systems,” SAI Intelligent Systems Conference (IntelliSys), 2015, London, 2015, pp. 1017-1024. doi: 10.1109/IntelliSys.2015.7361268
Abstract: With the explosive growth of online social networks (OSNs), social commerce and online stores that employ recommender systems (RSs) are a popular way of providing users with customized information about friends, books, goods, and so on. The major function of RSs is recommending items to system users (i.e., potential consumers); however, malicious users continuously attack RSs with fake identities (i.e., Sybils) by injecting false information. In this paper, we propose an Unsupervised and Robust Sybil attack defense in online Recommender systems (AuRo-Rec), which exploits dynamic auto-configuration of system parameters on top of the admission-control concept. AuRo-Rec provides highly trusted recommendations regardless of whether ratings are given by Sybils. To build the automatic parameter configuration required by AuRo-Rec, we propose an unsupervised approach: Dynamic Threshold Auto-configuration (DTA). To evaluate our approaches, we conducted experiments against four possible Sybil attacks. The experimental results confirm that AuRo-Rec works robustly in terms of prediction shift (PS).
Keywords: recommender systems; security of data; social networking (online); AuRo-Rec; DTA; OSN; admission control concept; dynamic threshold autoconfiguration; online recommender systems; online social networks; prediction shift; robust Sybil attack defense; Admission control; Electronic mail; Intelligent systems; Manuals; Recommender systems; Robustness; Social network services; Auto updating; Fuzzy rules; Sybil attack (ID#: 16-10115)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7361268&isnumber=7361074
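Prediction shift, the robustness metric named above, is commonly defined as the average change in the system's predicted ratings for a target item before versus after an attack; a robust recommender keeps it near zero. A minimal sketch (the signed-mean form is the usual definition, assumed here rather than taken from the paper):

```python
def prediction_shift(before, after):
    """before/after: dict user -> predicted rating for the target item.
    Positive values mean the attack pushed predictions up on average."""
    common = before.keys() & after.keys()
    return sum(after[u] - before[u] for u in common) / len(common)

before = {"u1": 3.0, "u2": 2.5, "u3": 4.0}
after  = {"u1": 4.0, "u2": 3.5, "u3": 4.5}   # ratings pushed up by Sybils
print(prediction_shift(before, after))  # 0.8333...
```

Evaluations like the paper's compare this value across attack types and defense configurations; smaller magnitudes indicate a more robust system.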
R. Bhumkar and D. J. Pete, “Reduction of Error Rate in Sybil Attack Detection for MANET,” Intelligent Systems and Control (ISCO), 2015 IEEE 9th International Conference on, Coimbatore, 2015, pp. 1-6. doi: 10.1109/ISCO.2015.7282328
Abstract: Because mobile ad hoc networks (MANETs) require a unique, distinct, and persistent identity per node for their security protocols to be viable, Sybil attacks pose a serious threat to such networks. Fully self-organized MANETs represent complex distributed systems that may also be part of a larger complex system, such as a system-of-systems used for crisis management operations. Due to the complex nature of MANETs and their resource-constrained nodes, there has always been a need to develop dedicated security solutions. A Sybil attacker can either create more than one identity on a single physical device in order to launch a coordinated attack on the network, or switch identities in order to weaken the detection process, thereby promoting lack of accountability in the network. In this research, we propose a scheme to detect the new identities of Sybil attackers without using a centralized trusted third party or any extra hardware, such as a directional antenna or a geographical positioning system. Through extensive simulations, we demonstrate that our proposed scheme detects Sybil identities with 95% accuracy (true positives) and about a 5% error rate (false positives), even in the presence of mobility.
Keywords: emergency management; mobile ad hoc networks; protocols; telecommunication security; MANET; Sybil attack detection; complex distributed system; crisis management operation; error rate reduction; identity-based attack; mobile ad hoc network; resource constraint node; security protocol; Handheld computers; IEEE 802.11 Standard; Mobile ad hoc networks; Mobile computing; Identity-based attacks; Sybil attacks; intrusion detection (ID#: 16-10116)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282328&isnumber=7282219
S. Marian and P. Mircea, “Sybil Attack Type Detection in Wireless Sensor Networks Based on Received Signal Strength Indicator Detection Scheme,” Applied Computational Intelligence and Informatics (SACI), 2015 IEEE 10th Jubilee International Symposium on, Timisoara, 2015, pp. 121-124. doi: 10.1109/SACI.2015.7208183
Abstract: A wireless sensor network is exposed to many types of attacks, and the most common one is the Sybil attack. A Sybil node tries to assume the false identities of other nodes in a network by broadcasting packets with multiple node IDs in order to gain access to that network; once it gains access, it can launch other types of attacks. As an alternative to other solutions based on random key distribution, trusted certification, and other classic security schemes, we present a solution that is robust and lightweight enough for Sybil attack detection, based on RSSI (received signal strength indicator). In today's wireless sensor networks, there are two known indicators for link quality estimation: the Received Signal Strength Indicator and the Link Quality Indicator (LQI). We show through experiments that RSSI is stable enough when used in static environments and with good transceivers. According to wireless channel models, received power should be a function of distance, and we exploit this property to localize Sybil nodes.
Keywords: telecommunication security; wireless channels; wireless sensor networks; LQI; RSSI; Sybil attack type detection; Sybil node; Sybil nodes; broadcasting packets; link quality estimation; link quality indicator; random key distribution; received signal strength indicator; received signal strength indicator detection scheme; trusted certification; wireless channel models; wireless sensor networks; Hardware; Receivers; Standards; Wireless communication; Wireless sensor networks; Zigbee (ID#: 16-10117)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7208183&isnumber=7208165
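The RSSI-based detection idea summarized in this abstract can be sketched in a few lines: identities whose received signal strength at the same observer is nearly identical are likely emitted by one physical radio. The function names, the averaging step, and the 1 dBm threshold below are illustrative assumptions, not values from the paper.

```python
from collections import defaultdict

def detect_sybil_ids(observations, rssi_threshold=1.0):
    """Flag node IDs whose mean RSSI at one observer is nearly identical:
    distinct identities transmitted from a single physical radio tend to
    share an RSSI signature.
    observations: iterable of (node_id, rssi_dbm) pairs."""
    readings = defaultdict(list)
    for node_id, rssi in observations:
        readings[node_id].append(rssi)
    means = {nid: sum(v) / len(v) for nid, v in readings.items()}
    suspects = set()
    ids = sorted(means)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if abs(means[a] - means[b]) <= rssi_threshold:
                suspects.update({a, b})  # likely the same transmitter
    return suspects
```

For example, observations `[("A", -60.0), ("A", -60.2), ("B", -60.3), ("C", -75.0)]` would flag A and B as a suspected Sybil pair while leaving C untouched.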
K. M. Ponsurya, R. P. Priyanka and S. Vairachilai, “Transparent User Identity and Overcoming Sybil Attack for Secure Social Networks,” Soft-Computing and Networks Security (ICSNS), 2015 International Conference on, Coimbatore, 2015, pp. 1-4. doi: 10.1109/ICSNS.2015.7292379
Abstract: Secure social networks are online services that permit users to build a public profile, choose the set of users with whom they want to communicate, and see the connections within the system. Many social networks are online and let users communicate via the Internet through electronic mail, instant messaging, photo and video sharing, posting their thoughts, etc. Such social networks are easily affected by attackers (e.g., via Sybil attacks). The Sybil attack is a security threat that occurs when an insecure system is hijacked to claim multiple identities. Large-scale peer-to-peer systems likewise face security threats from compromised computing elements. The Robust Recommendation algorithm is used to overcome Sybil attacks affecting the application, but it fails when the attacker knows the profiles of authenticated users. The proposed methodology eliminates these constraints by combining session management with face detection and recognition techniques. Using these procedures, the application is secured effectively.
Keywords: Internet; face recognition; peer-to-peer computing; recommender systems; security of data; social networking (online); Internet; Sybil attack; electronic mailing; face detection; face recognition technique; insecure system; online based service; peer to peer system; photo sharing; public profile; quick messaging; robust recommendation algorithm; secure social networks; security thread; security threat; session management; transparent user identity; video sharing; Authentication; Face detection; Face recognition; Protocols; Social network services; Webcams; Webcam; face detection and recognition; profile matching (ID#: 16-10118)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292379&isnumber=7292366
F. Medjek, D. Tandjaoui, M. R. Abdmeziem and N. Djedjig, “Analytical Evaluation of the Impacts of Sybil Attacks Against RPL Under Mobility,” Programming and Systems (ISPS), 2015 12th International Symposium on, Algiers, 2015, pp. 1-9. doi: 10.1109/ISPS.2015.7244960
Abstract: The Routing Protocol for Low-Power and Lossy Networks (RPL) is the standardized routing protocol for constrained environments such as 6LoWPAN networks and is considered the routing protocol of the Internet of Things (IoT). However, this protocol is subject to several attacks that have so far been analyzed only in the static case, whereas the IoT will likely feature dynamic and mobile applications. In this paper, we introduce potential security threats on RPL, in particular the Sybil attack when the Sybil nodes are mobile. In addition, we present an analytical study and a discussion of how network performance can be affected. Our analysis shows that, under a Sybil attack with mobile nodes, the performance of RPL is highly affected compared to the static case. In fact, we observe a decrease in the packet delivery rate and an increase in control message overhead; as a result, energy consumption at constrained nodes increases. Our proposed attack demonstrates that a mobile Sybil node can easily disrupt RPL and overload the network with fake messages, making it unavailable.
Keywords: computer network performance evaluation; computer network security; mobile computing; routing protocols; 6LoWPAN networks; Internet of Things; IoT; RPL; Sybil attacks; constrained environments; dynamic application; energy consumption; lossy network; low-power network; mobile application; network performance; routing protocol; security threats; Maintenance engineering; Mobile nodes; Routing; Routing protocols; Security; Topology (ID#: 16-10119)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7244960&isnumber=7244951
M. A. Jan, P. Nanda, X. He and R. P. Liu, “A Sybil Attack Detection Scheme for a Centralized Clustering-Based Hierarchical Network,” Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 318-325. doi: 10.1109/Trustcom.2015.390
Abstract: Wireless Sensor Networks (WSNs) have experienced phenomenal growth over the past decade. They are typically deployed in remote and hostile environments for monitoring applications and data collection. Miniature sensor nodes collaborate with each other to provide information on an unprecedented temporal and spatial scale. The resource-constrained nature of sensor nodes along with human-inaccessible terrains poses various security challenges to these networks at different layers. In this paper, we propose a novel detection scheme for Sybil attack in a centralized clustering-based hierarchical network. Sybil nodes are detected prior to cluster formation to prevent their forged identities from participating in cluster head selection. Only legitimate nodes are elected as cluster heads to enhance utilization of the resources. The proposed scheme requires collaboration of any two high energy nodes to analyze received signal strengths of neighboring nodes. The simulation results show that our proposed scheme significantly improves network lifetime in comparison with existing clustering-based hierarchical routing protocols.
Keywords: RSSI; telecommunication security; wireless sensor networks; Sybil attack detection scheme; centralized clustering-based hierarchical network; clustering-based hierarchical routing protocols; neighboring nodes; received signal strengths; Base stations; Energy consumption; Routing protocols; Security; Sensors; Wireless sensor networks; Base Station; Cluster; Cluster Head; Sybil Attack; Wireless Sensor Network (ID#: 16-10120)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345298&isnumber=7345233
S. Goyal, T. Bhatia and A. K. Verma, “Wormhole and Sybil Attack in WSN: A Review,” Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, New Delhi, 2015, pp. 1463-1468. doi: (not provided)
Abstract: The increasing popularity of mobile devices, recent developments in wireless communication, and the deployment of wireless sensor networks in hostile environments make them a popular field of research. Sensor networks consist of wirelessly communicating “smart nodes” that are resource-constrained in terms of memory, energy, and computation power. The design of these networks must account for all relevant factors, including fault tolerance, scalability, production cost, operating environment, hardware constraints, etc. However, due to the wireless nature of these networks and the absence of tamper-resistant hardware, they are vulnerable to various types of attacks. In this paper, various types of attacks are studied, and defensive techniques against two of the most severe, the wormhole and Sybil attacks, are surveyed in detail, with a comparison of the merits and demerits of several techniques.
Keywords: telecommunication security; wireless sensor networks; Sybil attack; WSN; fault tolerance; smart nodes; wormhole attack; Communication system security; Economics; Jamming; Routing protocols; Wireless communication; Wireless sensor networks; Attacks; Defensive Mechanisms; Wormhole attack (ID#: 16-10121)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7100491&isnumber=7100186
F. d. A. López-Fuentes and S. Balleza-Gallegos, “Evaluating Sybil Attacks in P2P Infrastructures for Online Social Networks,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1262-1267. doi: 10.1109/HPCC-CSS-ICESS.2015.252
Abstract: In recent years, online social networks (OSN) have become very popular. These networks have been useful for finding former classmates and improving our interaction with friends. Currently, a huge amount of information is generated and consumed by millions of people on such networks. Most popular online social networks are based on centralized servers, which are responsible for the management and storage of all information. Although online social networks bring several benefits, they still face many challenges, such as central control, privacy, and security. P2P infrastructures have emerged as an alternative platform for deploying decentralized online social networks. However, decentralized distributed systems are vulnerable to malicious peers. In this work, we evaluate P2P infrastructures against Sybil attacks. In particular, we simulate and evaluate hybrid and distributed P2P architectures.
Keywords: computer network security; file servers; peer-to-peer computing; social networking (online); OSN; P2P infrastructure; Sybil attack evaluation; centralized server; decentralized distributed systems; decentralized online social network; distributed P2P architecture; hybrid P2P architecture; malicious peers; Bandwidth; Computational modeling; Flowcharts; Peer-to-peer computing; Protocols; Servers; Social network services; Sybil attack; online-social networks; peer-to-peer networks (ID#: 16-10122)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336341&isnumber=7336120
Z. Saoud, N. Faci, Z. Maamar and D. Benslimane, “Impact of Sybil Attacks on Web Services Trust Assessment,” 2015 International Conference on Protocol Engineering (ICPE) and International Conference on New Technologies of Distributed Systems (NTDS), Paris, 2015, pp. 1-6. doi: 10.1109/NOTERE.2015.7293495
Abstract: This paper discusses how Sybil attacks can undermine trust management systems and how to respond to these attacks using advanced techniques such as credibility and probabilistic databases. In such attacks, end-users purposely adopt different identities and hence can provide inconsistent ratings of the same Web services. Many existing approaches rely on arbitrary choices to filter out Sybil users and reduce their attack capabilities; however, this turns out to be inefficient. Our approach relies on non-Sybil credible users who provide consistent ratings of Web services and hence can be trusted. To establish these ratings and debunk Sybil users, techniques such as fuzzy clustering, graph search, and probabilistic databases are adopted. A series of experiments demonstrates the robustness of our trust approach in the presence of Sybil attacks.
Keywords: Web services; graph theory; pattern clustering; probability; trusted computing; Sybil attacks; Sybil user techniques; Web service trust assessment; attack capabilities; credibility; fuzzy-clustering; graph search; nonSybil credible users; probabilistic databases; trust management systems; Cost accounting; Gold; Nickel; Protocols; Radio frequency; Robustness; Web services (ID#: 16-10123)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7293495&isnumber=7293442
P. Thakur, R. Patel and N. Patel, “A Proposed Framework for Protection of Identity Based Attack in Zigbee,” Communication Systems and Network Technologies (CSNT), 2015 Fifth International Conference on, Gwalior, 2015, pp. 628-632. doi: 10.1109/CSNT.2015.243
Abstract: ZigBee is an emerging standard for low-power, low-rate wireless communication that aims at interoperability and covers a full range of devices, including low-end battery-powered nodes. ZigBee is a specification for a suite of high-level communication protocols used to create personal area networks built from small, low-power radios. ZigBee networks are vulnerable to the Sybil attack, in which a Sybil node forges multiple identities to trick the system and conduct harmful attacks. We propose a Sybil attack detection and prevention method using the distance and address of nodes in ZigBee. In this technique, a trusted node verifies other nodes and identifies malicious ones. We implement this technique in NS2 using the AODV protocol over a mesh topology.
Keywords: Zigbee; protocols; radiocommunication; telecommunication network topology; telecommunication security; AODV protocol; NS2; Sybil attack detection; Sybil node; high-level communication protocols; identity protection; low-end battery-powered nodes; mesh topology; personal area network; wireless communication; Ad hoc networks; IP networks; Protocols; Security; Wireless communication; Wireless sensor networks; Zigbee network; Trust center; Sybil attack (ID#: 16-10124)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7279994&isnumber=7279856
K. B. Kansara and N. M. Shekokar, “At a Glance of Sybil Detection in OSN,” 2015 IEEE International Symposium on Nanoelectronic and Information Systems, Indore, 2015, pp. 47-52. doi: 10.1109/iNIS.2015.46
Abstract: With the increasing popularity of online social networks (OSN), major threats are emerging that challenge OSN security. One of the major ones is the Sybil attack, in which a malicious user unfairly creates multiple fake identities to penetrate the security and integrity of the OSN. Over the last decade, a number of schemes have been developed for detecting and defending against Sybil attacks. In this survey article, we give an overview of the research on Sybil detection and the methodologies that have been implemented so far. Our survey aims to provide a foundation for future researchers to develop significant Sybil defenses by overcoming the existing challenges.
Keywords: security of data; social networking (online); user interfaces; OSN; Sybil attack; Sybil detection; malicious user; online social network; security; threats; Information systems; Social Network; Survey (ID#: 16-10125)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7434396&isnumber=7434375
R. Pecori, “Trust-Based Storage in a Kademlia Network Infected by Sybils,” 2015 7th International Conference on New Technologies, Mobility and Security (NTMS), Paris, 2015, pp. 1-5. doi: 10.1109/NTMS.2015.7266529
Abstract: Coping with multiple false identities, also known as a Sybil attack, is one of the main challenges in securing structured peer-to-peer networks. Poisoning routing tables through these identities may make the process of storing and retrieving resources within a DHT (Distributed Hash Table) extremely difficult and time consuming. We investigate current possible countermeasures and propose a novel adaptive method for making the storage and retrieval process in a Kademlia-based network more secure, through the use of a trust-based storage algorithm exploiting reputation techniques. Our solution shows promising results in thwarting a Sybil attack in a Kademlia network, including in comparison with similar methods.
Keywords: computer network security; information retrieval; information storage; peer-to-peer computing; telecommunication network routing; trusted computing; DHT; Kademlia network; Sybil attack; distributed hash table; peer-to-peer networks; reputation techniques; retrieval process; routing tables; storage process; trust-based storage algorithm; Computational modeling; Conferences; Measurement; Peer-to-peer computing; Positron emission tomography; Routing; Standards; Incorrect storage; Kademlia; Structured peer-to-peer networks; Sybil attack; Trust and reputation (ID#: 16-10126)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266529&isnumber=7266450
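A reputation-weighted variant of Kademlia peer selection, in the spirit of the trust-based storage this abstract describes, might look roughly as follows. The EWMA update rule, the trust floor, and all names are illustrative assumptions rather than the paper's exact algorithm.

```python
def select_storage_peers(candidates, reputation, k=3, min_trust=0.5):
    """From the closest candidates returned by a Kademlia lookup, keep
    only peers whose reputation exceeds a trust floor, then prefer the
    most reputable ones. Unknown peers get a neutral 0.5 score."""
    trusted = [p for p in candidates if reputation.get(p, 0.5) >= min_trust]
    trusted.sort(key=lambda p: reputation.get(p, 0.5), reverse=True)
    return trusted[:k]

def update_reputation(reputation, peer, success, alpha=0.2):
    """Exponentially weighted update after a storage/retrieval attempt:
    successes pull the score toward 1.0, failures toward 0.0."""
    prev = reputation.get(peer, 0.5)
    reputation[peer] = (1 - alpha) * prev + alpha * (1.0 if success else 0.0)
```

A Sybil identity that repeatedly serves bad lookups sinks below the trust floor and is simply never chosen for storage again.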
Roopashree H. R. and A. Kanavalli, “STREE: A Secured Tree Based Routing with Energy Efficiency in Wireless Sensor Network,” Computing and Communications Technologies (ICCCT), 2015 International Conference on, Chennai, 2015, pp. 25-30. doi: 10.1109/ICCCT2.2015.7292714
Abstract: Wireless sensor network (WSN) applications are today no longer limited to the research stage; they have been adopted in many defense and civilian applications. Extensive research has been conducted on energy-efficient routing and communication protocols, and it has reached an acceptable stage, but without secure communication, wide acceptance of these applications is unlikely. Due to the unique characteristics of WSNs, the security schemes suggested for other wireless networks are not applicable. This paper introduces a novel tree-based technique called Secure Tree-based Routing with Energy Efficiency (STREE), which uses clustering approximation along with a lightweight key broadcasting mechanism in a hierarchical routing protocol. The outcome of the study is compared with the standard SecLEACH scheme, showing that the proposed system ensures better energy efficiency and security.
Keywords: cryptography; routing protocols; trees (mathematics); wireless sensor networks; STREE; WSN; clustering approximation; energy efficiency; energy efficient routing protocols; hierarchical routing protocol; lightweight key broadcasting mechanism; secured tree based routing; wireless sensor network; Algorithm design and analysis; Approximation methods; Batteries; Energy efficiency; Reactive power; Security; Wireless sensor networks; Clustering Approximation; SecLEACH; Sybil Attack; Tree Based approach; Wireless Sensor Network (ID#: 16-10127)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292714&isnumber=7292708
W. Luis da Costa Cordeiro and L. P. Gaspary, “Limiting Fake Accounts in Large-Scale Distributed Systems Through Adaptive Identity Management,” 2015 IFIP/IEEE International Symposium on Integrated Network Management (IM), Ottawa, ON, 2015, pp. 1092-1098. doi: 10.1109/INM.2015.7140438
Abstract: Various online, networked systems offer a lightweight process for obtaining identities (e.g., confirming a valid e-mail address), so that users can easily join them. Such convenience comes with a price, however: with minimum effort, an attacker can subvert the identity management scheme in place, obtain a multitude of fake accounts, and use them for malicious purposes. In this work, we approach the issue of fake accounts in large-scale, distributed systems by proposing a framework for adaptive identity management. Instead of relying on users' personal information as a requirement for granting identities (unlike existing proposals), our key idea is to estimate a trust score for identity requests and price them accordingly using a proof-of-work strategy. The research agenda that guided the development of this framework comprised three main items: (i) investigation of a candidate trust score function, based on an analysis of users' identity request patterns; (ii) combination of trust scores and proof-of-work strategies (e.g., cryptographic puzzles) for adaptively pricing identity requests; and (iii) reshaping of traditional proof-of-work strategies to make them more resource-efficient without compromising their effectiveness in stopping attackers.
Keywords: Internet; security of data; trusted computing; adaptive identity management; candidate trust score function; cryptographic puzzles; fake accounts; identity request patterns; large-scale distributed systems; online networked systems; proof of work strategy; Adaptation models; Complexity theory; Computational modeling; Cryptography; Green products; Mathematical model; Proposals; Identity management; collusion attacks; peer-to-peer; proof of work; sybil attack (ID#: 16-10128)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140438&isnumber=7140257
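The trust-priced proof-of-work idea can be sketched with a standard hash-preimage puzzle whose difficulty grows as the requester's trust score falls. The linear score-to-difficulty mapping and all parameter values are assumptions for illustration, not the framework's actual pricing function.

```python
import hashlib

def puzzle_difficulty(trust_score, base=8, max_extra=12):
    """Map a trust score in [0, 1] to the number of leading zero bits the
    solution hash must have; low-trust identity requests pay more work."""
    return base + round((1.0 - trust_score) * max_extra)

def solve_puzzle(request_id, bits):
    """Brute-force a nonce so that sha256(request_id : nonce), read as an
    integer, starts with `bits` zero bits."""
    target = 1 << (256 - bits)
    nonce = 0
    while True:
        h = hashlib.sha256(f"{request_id}:{nonce}".encode()).digest()
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

def verify_puzzle(request_id, nonce, bits):
    """Verification costs one hash, regardless of how hard solving was."""
    h = hashlib.sha256(f"{request_id}:{nonce}".encode()).digest()
    return int.from_bytes(h, "big") < (1 << (256 - bits))
```

Each extra bit doubles the attacker's expected solving cost, so an identity-request flood from a low-trust source becomes exponentially more expensive while a trusted user's single request stays cheap.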
R. K. Kapur and S. K. Khatri, “Analysis of Attacks on Routing Protocols in MANETs,” Computer Engineering and Applications (ICACEA), 2015 International Conference on Advances in, Ghaziabad, 2015, pp. 791-798. doi: 10.1109/ICACEA.2015.7164811
Abstract: Mobile Ad hoc Networks (MANETs) are networks of mobile nodes with limited resources in terms of processing power, memory, and battery life. Traffic to destination nodes beyond the range of a source node is routed by intermediate nodes. Routing in MANETs differs from conventional infrastructure networks, since nodes act not only as end devices but also as routers. Owing to the resource constraints of the nodes, routing protocols for MANETs have to be lightweight and assume a trusted environment. The absence of any security infrastructure and the ever-changing topology of the network make the routing protocols vulnerable to a variety of attacks, which may lead to either the misdirection of data traffic or denial of service. The mitigation techniques to combat these attacks have to work under severe constraints, so it is imperative to study the vulnerabilities of the routing protocols and the methods of launching the attacks in detail. This paper attempts to do so and reviews some current literature on the mitigation of routing attacks.
Keywords: mobile ad hoc networks; routing protocols; telecommunication security; MANET; data traffic misdirection; denial of service attack; routing protocol attacks; trusted environment; Ad hoc networks; Computer crime; Mobile computing; Routing; Routing protocols; Attacks on routing protocols; Blackhole attack; Flooding attak; Greyhole attack; MANETs; Routing Protocols; Rushing attack; Sybil Attack; Wormhole attack (ID#: 16-10129)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164811&isnumber=7164643
P. Banerjee, T. Chatterjee and S. DasBit, “LoENA: Low-Overhead Encryption Based Node Authentication in WSN,” Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, Kochi, 2015, pp. 2126-2132. doi: 10.1109/ICACCI.2015.7275931
Abstract: Nodes in a wireless sensor network (WSN) are susceptible to various attacks, primarily due to the nature of their deployment and unguarded communication. Providing security in such networks is therefore of utmost importance, and the main challenge is to make the security solution lightweight enough to be feasible in resource-constrained WSN nodes. So far, data authentication has drawn more attention than node authentication in WSNs, yet a robust security solution must also facilitate node authentication. In this paper, a low-overhead encryption-based security solution is proposed for node authentication. The proposed scheme at the sender side consists of three modules: dynamic key generation, encryption, and embedding of a key hint. The performance of the scheme is analyzed using two suitably chosen parameters, cracking probability and cracking time. This evaluation guides us in fixing the size of a node's unique ID so that the scheme incurs low overhead while achieving acceptable robustness. The performance is also compared with a couple of recent works in terms of computation and communication overheads, confirming our scheme's advantage on both metrics.
Keywords: cryptography; probability; wireless sensor networks; LoENA; WSN; cracking probability; cracking time; data authentication; low-overhead encryption based node authentication; wireless sensor network; Authentication; Encryption; Heuristic algorithms; Receivers; Wireless sensor networks; Wireless sensor network; authentication; encryption; sybil attack; tampering (ID#: 16-10130)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275931&isnumber=7275573
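The cracking-probability and cracking-time parameters used above to size the node ID can be computed directly from the key length under a generic brute-force model. These are textbook formulas, not the paper's exact analysis.

```python
def cracking_probability(key_bits, attempts):
    """Probability that an attacker guesses a uniformly random key of
    key_bits bits within `attempts` distinct tries."""
    space = 2 ** key_bits
    return min(1.0, attempts / space)

def expected_cracking_time(key_bits, guesses_per_second):
    """Expected seconds to brute-force the key: on average half the
    key space must be searched."""
    return (2 ** key_bits) / (2 * guesses_per_second)
```

Running the numbers for candidate ID sizes makes the trade-off concrete: each added bit halves the cracking probability and doubles the expected cracking time, at the cost of a longer field in every packet.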
A. Quyoom, R. Ali, D. N. Gouttam and H. Sharma, “A Novel Mechanism of Detection of Denial of Service Attack (DoS) in VANET Using Malicious and Irrelevant Packet Detection Algorithm (MIPDA),” Computing, Communication & Automation (ICCCA), 2015 International Conference on, Noida, 2015, pp. 414-419. doi: 10.1109/CCAA.2015.7148411
Abstract: The security of Vehicular Ad Hoc Networks (VANETs), a subtype of MANETs, plays a very important role in protecting safety-critical information. For the secure communication of such information, the network must be available at all times, yet its availability is exposed to several types of attacks and threats, including Sybil attacks, misbehaving nodes that generate false information, jamming attacks, selfish-driver attacks, and false vehicle position information. These attacks make other vehicles insecure. Among them, denial-of-service (DoS) attacks are a major threat to the information economy. In this paper, we propose a Malicious and Irrelevant Packet Detection Algorithm (MIPDA) to analyze and detect denial-of-service (DoS) attacks. As a result, an attack is eventually confined within its source domain, preventing wasteful attack traffic from overloading the network infrastructure. The algorithm also reduces the overhead delay in information processing, which increases communication speed and enhances security in VANETs.
Keywords: computer network security; signal detection; vehicular ad hoc networks; Denial of Service attack; DoS attack detection; MANET; MIPDA; Sybil attack; VANET; information economy; information processing overhead delay reduction; jamming attack; malicious and irrelevant packet detection algorithm; secure communication; selfish driver attack; vehicular ad hoc network; Computer crime; Jamming; Roads; Safety; Vehicles; Vehicular ad hoc networks (ID#: 16-10131)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7148411&isnumber=7148334
S. Bittl, A. A. Gonzalez, M. Myrtus, H. Beckmann, S. Sailer and B. Eissfeller, “Emerging Attacks on VANET Security Based on GPS Time Spoofing,” Communications and Network Security (CNS), 2015 IEEE Conference on, Florence, 2015, pp. 344-352. doi: 10.1109/CNS.2015.7346845
Abstract: Car2X communication is about to enter the mass market in upcoming years. So far, all realization proposals depend heavily on the Global Positioning System for location information and time synchronization. However, studies on the security impact of this kind of data input have focused on the possibility of spoofing location information; attacks on time synchronization have not received much attention so far. This work therefore analyzes the attack potential on vehicular ad-hoc network (VANET) realizations with regard to spoofed time information. We show that this kind of attack allows for severe denial-of-service attacks. Moreover, such attacks can violate the non-repudiation feature of the security system by opening the possibility of misusing authentication features. Additionally, a Sybil attack can be performed, and the reliability of the basic time and position data inside VANET messages becomes highly questionable given the outlined attacks. Mechanisms to avoid or limit the impact of the outlined security flaws are discussed, along with an evaluation of the possibility of carrying out the described attacks in practice using a current Car2X hardware solution.
Keywords: Global Positioning System; synchronisation; telecommunication network reliability; vehicular ad hoc networks; Car2X communication; Car2X hardware solution; GPS time spoofing; VANET messages; VANET security; denial of service attacks; mass market; security system; spoof location information; spoofed time information; sybil attack; time synchronization; vehicular ad-hoc network; Receivers; Security; Standards; Synchronization; Vehicles; Vehicular ad hoc networks (ID#: 16-10132)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346845&isnumber=7346791
O. Tibermacine, C. Tibermacine and F. Cherif, “Regression-Based Bootstrapping of Web Service Reputation Measurement,” Web Services (ICWS), 2015 IEEE International Conference on, New York, NY, 2015, pp. 377-384. doi: 10.1109/ICWS.2015.57
Abstract: In the literature, many solutions for measuring the reputation of web services have been proposed. These solutions help in building service recommendation systems. Nonetheless, many challenges still need to be addressed in this context, such as the “cold start” problem and the lack of estimates of the initial reputation values of newcomer web services. As reputation measurement depends on previous reputation values, the lack of initial values can subvert the performance of the whole service recommendation system, making it vulnerable to different threats, like the Sybil attack. In this paper, we propose a new bootstrapping mechanism for evaluating the reputation of newcomer web services based on their initial Quality of Service (QoS) attributes and their similarity with “long-standing” web services. Basically, the technique uses regression models to estimate the unknown reputation values of newcomer services from their known QoS attribute values. The technique has been tested on a large set of services, and its performance has been measured using statistical metrics such as the coefficient of determination (R2), Mean Absolute Error (MAE), and Percentage Error (PE).
Keywords: Web services; computer bootstrapping; computer crime; quality of service; recommender systems; regression analysis; MSE; QoS attributes; Sybil attack; Web service reputation measurement; bootstrapping mechanism; coefficient of determination; mean absolute error; percentage error; quality of service attributes; regression models; regression-based bootstrapping; reputation evaluation; reputation values; service recommendation systems; statistical metrics; threats; Estimation; Mathematical model; Measurement; Quality of service; Silicon; Time factors; Quality of Service; Regression Model; Reputation Bootstrapping; Reputation Measurement; Web Services (ID#: 16-10133)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7195592&isnumber=7195533
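The regression-based bootstrapping step, fitting reputation against QoS on long-standing services and extrapolating to newcomers, reduces to ordinary least squares. The single-feature model below is a deliberately minimal sketch; the paper works with multiple QoS attributes and richer regression models.

```python
def fit_linear(qos_values, reputations):
    """Ordinary least squares for reputation = a * qos + b, fitted on
    long-standing services whose reputation is already known."""
    n = len(qos_values)
    mx = sum(qos_values) / n
    my = sum(reputations) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(qos_values, reputations)) \
        / sum((x - mx) ** 2 for x in qos_values)
    return slope, my - slope * mx  # (a, b)

def bootstrap_reputation(qos, model):
    """Estimate a newcomer service's initial reputation from its QoS score,
    so the recommender never starts it at an arbitrary (Sybil-exploitable)
    default."""
    a, b = model
    return a * qos + b
```

Once a newcomer accumulates real ratings, the bootstrapped value would be phased out in favor of the measured reputation.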
Ashritha M and Sridhar C S, “RSU Based Efficient Vehicle Authentication Mechanism for VANETs,” Intelligent Systems and Control (ISCO), 2015 IEEE 9th International Conference on, Coimbatore, 2015, pp. 1-5. doi: 10.1109/ISCO.2015.7282299
Abstract: Security and privacy are the two major concerns in VANETs. Due to the highly dynamic environment in VANETs, the computation time for authentication is high. At the same time, most privacy-preserving schemes are prone to Sybil attacks. In this paper we propose a lightweight authentication scheme for vehicle-to-RSU and vehicle-to-vehicle communication to build a secure communication system. In this method we make use of a timestamp approach and also reduce the computation cost for authentication in highly dense traffic zones. The privacy of the vehicle is preserved by not disclosing its real identity.
Keywords: cost reduction; data privacy; telecommunication security; telecommunication traffic; vehicular ad hoc networks; RSU; RSU based efficient vehicle authentication mechanism; Sybil attack; VANET; computation cost reduction; highly dense traffic zone; lightweight authentication scheme; privacy preserving scheme; secure communication system; timestamp approach; vehicle privacy; Authentication; Computers; Libraries; Privacy; Vehicular ad hoc networks; OBU; TMA; pseudo-id (ID#: 16-10134)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282299&isnumber=7282219
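The paper's exact protocol is not reproduced here, but the general pattern of timestamp-based lightweight authentication can be sketched with a keyed MAC and a freshness window; the key distribution, pseudonym format, and window size below are all assumptions.

```python
# Generic sketch (not the paper's protocol): timestamp-plus-HMAC message
# authentication that rejects stale messages to limit replay, in the style
# of lightweight vehicle-to-RSU schemes. Keys are assumed shared out of band.
import hashlib
import hmac
import time

FRESHNESS_WINDOW = 5.0  # seconds a message stays acceptable (assumed value)

def sign(key: bytes, pseudo_id: str, payload: bytes, ts: float) -> bytes:
    msg = pseudo_id.encode() + b"|" + payload + b"|" + repr(ts).encode()
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify(key, pseudo_id, payload, ts, tag, now=None) -> bool:
    now = time.time() if now is None else now
    if abs(now - ts) > FRESHNESS_WINDOW:  # stale timestamp: possible replay
        return False
    expected = sign(key, pseudo_id, payload, ts)
    return hmac.compare_digest(expected, tag)  # constant-time comparison

key = b"shared-vehicle-rsu-key"
ts = time.time()
tag = sign(key, "pseudo-42", b"position-update", ts)
assert verify(key, "pseudo-42", b"position-update", ts, tag)
assert not verify(key, "pseudo-42", b"position-update", ts, tag, now=ts + 60)
```

Note that the vehicle authenticates under a pseudonym ("pseudo-42"), matching the abstract's point that the real identity is never disclosed.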
P. Sarigiannidis, E. Karapistoli and A. A. Economides, “VisIoT: A Threat Visualisation Tool for IoT Systems Security,” 2015 IEEE International Conference on Communication Workshop (ICCW), London, 2015, pp. 2633-2638. doi: 10.1109/ICCW.2015.7247576
Abstract: Without doubt, the Internet of Things (IoT) is changing the way people and technology interact. Fuelled by recent advances in networking, communications, computation, software, and hardware technologies, IoT has stepped out of its infancy and is considered the next breakthrough technology in transforming the Internet into a fully integrated Future Internet. However, realising a network of physical objects accessed through the Internet brings a potential threat in the shadow of the numerous benefits. The threat is “security”. Given that Wireless Sensor Networks (WSNs) leverage the potential of IoT quite efficiently, this paper focuses its security attention on a particular, yet broad, context: IP-enabled WSNs. In particular, it proposes a novel threat visualisation tool for such networks, called VisIoT. VisIoT is a human-interactive visual-based anomaly detection system that is capable of monitoring and promptly detecting several devastating forms of security attacks, including wormhole attacks and Sybil attacks. Based on a rigorous, radial visualisation design, VisIoT may expose adversaries conducting one or multiple concurrent attacks against IP-enabled WSNs. The system's visual and anomaly detection efficacy in exposing complex security threats is demonstrated through a number of simulated attack scenarios.
Keywords: Internet of Things; data visualisation; security of data; wireless sensor networks; IP-enabled WSN; IoT systems security; Sybil attacks; VisIoT; complex security threats; concurrent attacks; hardware technologies; human-interactive visual-based anomaly detection system; physical objects; radial visualisation design; security attacks; simulated attack scenarios; software technologies; threat visualisation tool; visual detection efficacy; wormhole attacks; Data visualization; Engines; Monitoring; Routing; Security; Visualization; Wireless sensor networks (ID#: 16-10135)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7247576&isnumber=7247062
K. Chen, G. Liu, H. Shen and F. Qi, “Sociallink: Utilizing Social Network and Transaction Links for Effective Trust Management in P2P File Sharing Systems,” Peer-to-Peer Computing (P2P), 2015 IEEE International Conference on, Boston, MA, 2015, pp. 1-10. doi: 10.1109/P2P.2015.7328527
Abstract: Current reputation systems for peer-to-peer (P2P) file sharing systems either fail to utilize existing trust within social networks or suffer from certain attacks (e.g., free-riding and collusion). To handle these problems, we introduce a trust management system, called SocialLink, that utilizes social network and historical transaction links. SocialLink manages file transactions through both the social network and a novel weighted transaction network, which is built based on previous file transaction history. First, SocialLink exploits the trust among friends in social networks by enabling two friends to share files directly. Second, the weighted transaction network is utilized to (1) deduce the trust of the client on a server in reliably providing the requested file and (2) check the fairness of the transaction. In this way, SocialLink prevents potential misbehaving transactions (i.e., providing faulty files), encourages nodes to contribute file resources to non-friends, and avoids free-riding. Furthermore, the weighted transaction network helps SocialLink resist whitewashing, collusion and Sybil attacks. Extensive simulation demonstrates that SocialLink can efficiently ensure trustable and fair P2P file sharing and resist the aforementioned attacks.
Keywords: client-server systems; peer-to-peer computing; social networking (online); transaction processing; trusted computing; SocialLink; Sybil attacks; client-server system; collusion attack; faulty files; file resources; file transaction management; free-riding attack; peer-to-peer file sharing systems; social network; transaction links; trust management system; trustable-fair P2P file sharing; weighted transaction network; whitewashing attack; Nickel; Peer-to-peer computing; Quality of service; Reliability; Resists; Servers; Social network services (ID#: 16-10136)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7328527&isnumber=7328510
J. Chen, H. Ma, D. S. L. Wei and D. Zhao, “Participant-Density-Aware Privacy-Preserving Aggregate Statistics for Mobile Crowd-Sensing,” Parallel and Distributed Systems (ICPADS), 2015 IEEE 21st International Conference on, Melbourne, VIC, 2015, pp. 140-147. doi: 10.1109/ICPADS.2015.26
Abstract: Mobile crowd-sensing applications produce useful knowledge of the surrounding environment, which makes our life more predictable. However, these applications often require people to contribute, consciously or unconsciously, location-related data for analysis, and this gravely encroaches on users' location privacy. Aggregate processing is a feasible way of preserving user privacy to some extent, and based on this mode, some privacy-preserving schemes have been proposed. However, existing schemes still cannot guarantee users' location privacy in scenarios with low participant density. Meanwhile, user accountability also needs to be considered comprehensively to protect the system from malicious users. In this paper, we propose a participant-density-aware privacy-preserving aggregate statistics scheme for mobile crowd-sensing applications. In our scheme, we make use of a multi-pseudonym mechanism to overcome the vulnerability due to low participant density. To further handle Sybil attacks, we advance and improve our solution framework based on the Paillier cryptosystem and non-interactive zero-knowledge verification, which also covers the problem of user accountability. Finally, theoretical analysis indicates that our scheme achieves the desired properties, and performance experiments demonstrate that it can achieve a balance among accuracy, privacy protection, and computational overhead.
Keywords: cryptography; data privacy; mobile computing; statistics; Paillier cryptosystem; Sybil attacks; mobile crowd-sensing applications; multipseudonym mechanism; noninteractive zero-knowledge verification; participant-density-aware privacy-preserving aggregate statistics scheme; user accountability; Aggregates; Cryptography; Mobile handsets; Principal component analysis; Privacy; Sensors; Servers; aggregate statistics; mobile crowd-sensing; participant-density; privacy-preservation; user accountability (ID#: 16-10137)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7384289&isnumber=7384203
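The Paillier-based aggregation idea can be illustrated with a toy sketch: each participant encrypts a private reading, the aggregator multiplies the ciphertexts, and decryption yields only the sum. The tiny primes below are for illustration only (real deployments use moduli of at least 2048 bits), and the paper's full scheme adds multi-pseudonyms and zero-knowledge verification, which this sketch omits.

```python
# Toy Paillier sketch (insecure key size, illustration only): the aggregator
# multiplies ciphertexts and decrypts only the sum of the participants'
# private readings. Requires Python 3.8+ for pow(x, -1, n).
import random
from math import gcd

p, q = 293, 433              # toy primes; real keys use >= 2048-bit moduli
n, n2 = p * q, (p * q) ** 2
g = n + 1                    # standard generator choice
lam = (p - 1) * (q - 1)
mu = pow(lam, -1, n)         # since L(g^lam mod n^2) = lam mod n for g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:    # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    x = pow(c, lam, n2)
    return (((x - 1) // n) * mu) % n   # L(x) = (x - 1) / n

readings = [17, 4, 25]                      # participants' private values
aggregate = 1
for c in (encrypt(m) for m in readings):    # homomorphic addition
    aggregate = (aggregate * c) % n2
assert decrypt(aggregate) == sum(readings)  # aggregator learns only the sum
```

Multiplying ciphertexts adds plaintexts, which is exactly the property that lets a server compute aggregate statistics without seeing any individual contribution.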
A. Singh and P. Sharma, “A Novel Mechanism for Detecting DOS Attack in VANET Using Enhanced Attacked Packet Detection Algorithm (EAPDA),” 2015 2nd International Conference on Recent Advances in Engineering & Computational Sciences (RAECS), Chandigarh, India, 2015, pp. 1-5. doi: 10.1109/RAECS.2015.7453358
Abstract: Security is the major concern with respect to the critical information shared between vehicles. A vehicular ad hoc network is a subclass of mobile ad hoc network in which the vehicles move freely and communicate with each other and with the roadside unit (RSU) as well. Since the nodes are self-organized, highly mobile, and free to move, any node can interact with any other node, which may or may not be trustworthy. This is the area of concern in the security horizon of VANETs. It is the responsibility of the RSU to make the network available all the time to every node for secure communication of critical information. Network availability thus emerges as the major security requirement, and it may be exposed to several threats or attacks. The vehicles and the RSU are prone to several security attacks such as masquerading, Sybil attacks, alteration attacks, selfish driver attacks, etc. Among these, the Denial of Service (DoS) attack is the major threat to the availability of the network. In order to shelter the VANET from DoS attacks, we have proposed the Enhanced Attacked Packet Detection Algorithm (EAPDA), which prevents the deterioration of network performance even under this attack. EAPDA not only verifies nodes and detects malicious nodes but also improves throughput with minimized delay, thus enhancing security. The simulation is done using NS2 and the results are compared with earlier work.
Keywords: telecommunication security; vehicular ad hoc networks; DOS attack detection; NS2; Sybil; VANET; delay; denial of service attack; enhanced attacked packet detection algorithm; malicious nodes; mobile ad hoc network; network availability; roadside unit; secure communication; security; security horizon; selfish driver attack; vehicular ad hoc network; Computer crime; Delays; Detection algorithms; Roads; Vehicles; Vehicular ad hoc networks; Availability; DoS Attack; EAPDA; Security (ID#: 16-10138)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7453358&isnumber=7453273
J. Jose and Rigi C. R, “A Comparative Study of Topology Enabled and Topology Hiding Multipath Routing Protocols in MANETs,” Electrical, Electronics, Signals, Communication and Optimization (EESCO), 2015 International Conference on, Visakhapatnam, 2015, pp. 1-4. doi: 10.1109/EESCO.2015.7254001
Abstract: In the past few years, we have seen a rapid expansion in the area of mobile ad hoc networks due to the rapid increase in the number of inexpensive and widely available wireless devices. This type of network, operating as a stand-alone network or with one or multiple points of attachment to cellular networks, paves the way for numerous new and exciting applications. MANETs are characterized by a multi-hop network topology that can change frequently due to mobility, so efficient routing protocols are needed to establish communication paths between nodes. It is very important that the routing protocol used provides a well-secured routing architecture and leaves no loopholes. This points to the topology exposure problem of existing routing protocols and the need for topology hiding. Routing security is currently one of the hottest research areas in MANETs. This paper provides a comparative study of the well-known AOMDV routing protocol and a topology-hiding multipath protocol, and discusses the need to hide topology information within the protocol to resist various kinds of attacks such as the blackhole attack, Sybil attack, and wormhole attack. This paper also discusses the technological challenges that protocol designers and network developers are faced with.
Keywords: cellular radio; mobile ad hoc networks; radio equipment; routing protocols; telecommunication network topology; telecommunication security; AOMDV routing protocol; MANET routing security; cellular network; mobile ad hoc network; multihop network topology exposure problem; topology enabled multipath routing protocol; topology hiding multipath routing protocol; wireless device; Ad hoc networks; Mobile computing; Network topology; Routing; Routing protocols; Topology; AODV; Routing Protocols; THMR; Topology hiding; formatting (ID#: 16-10139)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7254001&isnumber=7253613
K. Zhang, X. Liang, R. Lu, K. Yang and X. S. Shen, “Exploiting Mobile Social Behaviors for Sybil Detection,” 2015 IEEE Conference on Computer Communications (INFOCOM), Kowloon, 2015, pp. 271-279. doi: 10.1109/INFOCOM.2015.7218391
Abstract: In this paper, we propose a Social-based Mobile Sybil Detection (SMSD) scheme to detect Sybil attackers from their abnormal contacts and pseudonym changing behaviors. Specifically, we first define four levels of Sybil attackers in mobile environments according to their attacking capabilities. We then exploit mobile users' contacts and their pseudonym changing behaviors to distinguish Sybil attackers from normal users. To alleviate the storage and computation burden of mobile users, the cloud server is introduced to store mobile user's contact information and to perform the Sybil detection. Furthermore, we utilize a ring structure associated with mobile user's contact signatures to resist the contact forgery by mobile users and cloud servers. In addition, investigating mobile user's contact distribution and social proximity, we propose a semi-supervised learning with Hidden Markov Model to detect the colluded mobile users. Security analysis demonstrates that the SMSD can resist the Sybil attackers from the defined four levels, and the extensive trace-driven simulation shows that the SMSD can detect these Sybil attackers with high accuracy.
Keywords: cloud computing; hidden Markov models; learning (artificial intelligence); network servers; security of data; Sybil attackers; abnormal contacts; cloud server; hidden Markov model; mobile environments; mobile social behaviors; mobile user contact distribution; mobile user contact signatures; pseudonym changing behaviors; security analysis; semisupervised learning; social proximity; social-based mobile Sybil detection; trace-driven simulation; Aggregates; Computers; Hidden Markov models; Mobile communication; Mobile computing; Resists; Servers (ID#: 16-10140)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218391&isnumber=7218353
D. Gantsou, “On the Use of Security Analytics for Attack Detection in Vehicular Ad Hoc Networks,” Cyber Security of Smart Cities, Industrial Control System and Communications (SSIC), 2015 International Conference on, Shanghai, 2015, pp. 1-6. doi: 10.1109/SSIC.2015.7245674
Abstract: A vehicular ad hoc network (VANET) is a special kind of mobile ad hoc network built on top of the IEEE 802.11p standard for better adaptability to the wireless mobile environment. As it is used for supporting both vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications, and for connecting vehicles to external resources including cloud services, the Internet, and user devices while improving road traffic conditions, VANET is a key component of intelligent transportation systems (ITS). As such, VANET can be exposed to cyber attacks related to the wireless environment, and to those of the traditional information technology systems it is connected to. However, when looking at solutions that have been proposed to address VANET security issues, it emerges that guaranteeing security in VANET essentially amounts to resorting to cryptography-centric mechanisms. Although the use of Public Key Infrastructure (PKI) fulfills most VANET security requirements related to physical properties of the wireless transmissions, simply relying on cryptography does not secure a network. This is the case for vulnerabilities at layers above the MAC layer: because of their capability to bypass security policy control, they can still expose the VANET, and thus the ITS, to cyber attacks. Thereby, one needs security solutions that go beyond cryptographic mechanisms in order to cover the multiple threat vectors faced by VANET. In this paper, focusing on attack detection, we show how Sybil nodes can be detected, regardless of the VANET architecture, using an implementation that combines observation of events and incidents from multiple sources at different layers.
Keywords: intelligent transportation systems; telecommunication security; vehicular ad hoc networks; IEEE802.11p standard; VANET; attack detection; cryptographic-centric mechanisms; cyber attacks; mobile ad hoc network; security analytics; wireless mobile environment; Communication system security; Cryptography; IP networks; Vehicles; Vehicular ad hoc networks; Intelligent Transportation Systems (ITS); Vehicular ad hoc network (VANET) security (ID#: 16-10141)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7245674&isnumber=7245317
Y. Wang, Z. Ju, A. V. Vasilakos and J. Ma, “An Integrated Incentive Mechanism for Device to Device (D2D)-Enabled Cellular Traffic Offloading,” 2015 IEEE International Conference on Smart City/SocialCom/SustainCom (SmartCity), Chengdu, China, 2015, pp. 384-390. doi: 10.1109/SmartCity.2015.102
Abstract: Cooperative content offloading is a promising technology to relieve the heavy burden on wireless cellular networks and, meanwhile, improve the quality of downloading services. While various optimization frameworks have been intensively studied (e.g., maximizing the amount of cellular traffic that can be offloaded), little attention has been given to how to systematically accommodate the rational behaviors of various stakeholders and incentivize their cooperation. In this paper, we propose an integrated incentive mechanism which incorporates the utilities of three rational stakeholders in traffic offloading: the cellular provider and end users, including waiting users and downloading users. This incentive mechanism comprises two components. In the first component, a reverse-auction-based incentive mechanism, the cellular provider can classify general users into downloading users and waiting users, and the waiting users can get rewards for waiting some time (i.e., delaying their downloading through the cellular provider). Besides being involved in the reverse auction phase, in the second component, a charge-policy-based incentive mechanism, the waiting users can obtain data from downloading users in a D2D way, and pay both the downloading user and each intermediate node on the delivery path with part of the rewards earned from the cellular provider. Preliminary theoretical analysis illustrates that this integrated incentive mechanism has the following features: the cellular provider can offload traffic with minimum cost, users in the reverse auction will truthfully report their valuations of traffic offloading, and downloading users can obtain extra rewards from waiting users in a Sybil-proof way (i.e., thwarting edge insertion attacks).
Keywords: cellular radio; commerce; cooperative communication; telecommunication network management; telecommunication traffic; D2D-enabled cellular traffic offloading; cellular provider; charge policy; cooperative content offloading; device to device-enabled cellular traffic offloading; downloading services; integrated incentive mechanism; rational stakeholders; reverse auction phase; stakeholders rational behaviors; sybil-proof way; wireless cellular networks; Cost accounting; Delays; Mobile communication; Mobile computing; Resource management; Stakeholders; Wireless communication; Cellular provider; Device to Device (D2D); Incentive mechanism; Reverse auction; Traffic offloading (ID#: 16-10142)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7463756&isnumber=7463653
Y. Qiu and M. Ma, “An Authentication and Key Establishment Scheme to Enhance Security for M2M in 6LoWPANs,” 2015 IEEE International Conference on Communication Workshop (ICCW), London, 2015, pp. 2671-2676. doi: 10.1109/ICCW.2015.7247582
Abstract: With the rapid development of wireless communication technologies, machine-to-machine (M2M) communication, an essential part of the Internet of Things (IoT), allows wireless and wired systems to monitor environments without human intervention. To extend the use of M2M applications, the standard for Internet Protocol version 6 (IPv6) over Low power Wireless Personal Area Networks (6LoWPAN), developed by the Internet Engineering Task Force (IETF), can be applied to M2M communication to enable IP-based M2M sensing devices to connect to the open Internet. Although the 6LoWPAN standard has specified important issues in the communication, security functionalities at different protocol layers have not been detailed. In this paper, we propose an enhanced authentication and key establishment scheme for 6LoWPAN networks in M2M communications. The security proof by Protocol Composition Logic (PCL) and the formal verification by the Simple Promela Interpreter (SPIN) show that the proposed scheme could enhance the security functionality of 6LoWPAN with the ability to prevent malicious attacks such as replay attacks, man-in-the-middle attacks, impersonation attacks, and Sybil attacks.
Keywords: Internet; Internet of Things; cryptographic protocols; personal area networks; transport protocols; 6LoWPAN; IETF; IPv6; Internet engineering task force; Internet protocol version 6; IoT; M2M communication; PCL; SPIN; authentication scheme; key establishment scheme; low power wireless personal area network; machine-to-machine communication; protocol composition logic; protocol layer; security enhancement; simple Promela interpreter; wireless communication technology; Authentication; Cryptography; Internet of things; Protocols; Servers; Authentication; M2M (ID#: 16-10143)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7247582&isnumber=7247062
A. Xu, X. Feng and Y. Tian, “Revealing, Characterizing, and Detecting Crowdsourcing Spammers: A Case Study in Community Q&A,” 2015 IEEE Conference on Computer Communications (INFOCOM), Kowloon, 2015, pp. 2533-2541. doi: 10.1109/INFOCOM.2015.7218643
Abstract: Crowdsourcing services have emerged and become popular on the Internet in recent years. However, evidence shows that crowdsourcing can be maliciously manipulated. In this paper, we focus on the “dark side” of the crowdsourcing services. More specifically, we investigate the spam campaigns that are originated and orchestrated on a large Chinese-based crowdsourcing website, namely ZhuBaJie.com, and track the crowd workers to their spamming behaviors on Baidu Zhidao, the largest community-based question answering (QA) site in China. By linking the spam campaigns, workers, spammer accounts, and spamming behaviors together, we are able to reveal the entire ecosystem that underlies the crowdsourcing spam attacks. We present a comprehensive and insightful analysis of the ecosystem from multiple perspectives, including the scale and scope of the spam attacks, Sybil accounts and colluding strategy employed by the spammers, workers' efforts and monetary rewards, and quality control performed by the spam campaigners, etc. We also analyze the behavioral discrepancies between the spammer accounts and the legitimate users in community QA, and present methodologies for detecting the spammers based on our understandings on the crowdsourcing spam ecosystem.
Keywords: Internet; Web sites; outsourcing; security of data; unsolicited e-mail; Baidu Zhidao; China; Chinese-based crowdsourcing Website; Sybil accounts; ZhuBaJie.com; community Q&A; community-based question answering site; crowd workers; crowdsourcing services; crowdsourcing spam attacks; crowdsourcing spammer characterization; crowdsourcing spammer detection; quality control; spam campaigns; spammer accounts; spamming behaviors; Computers; Conferences; Crowdsourcing; Ecosystems; Knowledge discovery; Unsolicited electronic mail (ID#: 16-10144)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218643&isnumber=7218353
D. U. S. Rajkumar and R. Vayanaperumal, “A Leader Based Intrusion Detection System for Preventing Intruder in Heterogeneous Wireless Sensor Network,” 2015 IEEE Bombay Section Symposium (IBSS), Mumbai, India, 2015, pp. 1-6. doi: 10.1109/IBSS.2015.7456671
Abstract: Nowadays, communication and data transmission among various heterogeneous networks is growing rapidly and drastically. More and more heterogeneous networks are created and deployed by government as well as private firms. Due to node distance, mobility, and behavior, and the dynamic nature of these networks, it is essential to provide security for all the networks, either separately or in a distributed manner. Various existing approaches discuss the security issues and challenges of heterogeneous networks. In this paper, a Leader Based Intrusion Detection System (LBIDS) is proposed to detect and prevent DoS as well as other attacks, such as Sybil and sinkhole attacks, by deploying the LBIDS onto access points in the networks. The proposed approach addresses three core security challenges: authentication, positive incentive provision, and preventing DoS. In addition, it performs packet verification and IP verification to improve the efficiency of attack detection and prevention in heterogeneous networks. The simulation of our proposed approach is carried out in NS2 and the results are presented.
Keywords: computer network security; message authentication; wireless sensor networks; DOS prevention; IP verification; LBIDS; NS2 software; Sinkhole; Sybil; access points; authentication; data transmission; heterogeneous wireless sensor network; intruder prevention; leader based intrusion detection system; packet verification; positive incentive provision; private firms; Authentication; Heterogeneous networks; Intrusion detection; Routing; Sensors; Wireless sensor networks; Heterogeneous Networks; Intrusion Detection System; Leader Based Intrusion Detection System; Wireless Sensor Network (ID#: 16-10145)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7456671&isnumber=7456621
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Trust and Trustworthiness 2015 (Part 1) |
Trust is created in information security through cryptography to assure the identity of external parties. Trust is essential to cybersecurity and to the Science of Security hard problem of composability. The research work cited here regarding trust and trustworthiness was presented in 2015.
J. M. Seigneur, “Wi-Trust: Improving Wi-Fi Hotspots Trustworthiness with Computational Trust Management,” ITU Kaleidoscope: Trust in the Information Society (K-2015), 2015, Barcelona, 2015, pp. 1-6. doi: 10.1109/Kaleidoscope.2015.7383629
Abstract: In its list of top ten smartphone risks, the European Union Agency for Network and Information Security ranks Network Spoofing Attacks as number 6. In this paper, we present how we have validated different computational trust management techniques, by means of prototypes implemented on real devices, to mitigate malicious legacy Wi-Fi hotspots, including spoofing attacks. We then explain how some of these techniques could be more easily deployed on a large scale by simply using the available extensions of Hotspot 2.0, which could potentially lead to a new standard that improves the trustworthiness of Wi-Fi networks.
Keywords: smart phones; trusted computing; wireless LAN; European Union Agency for Network and Information Security; Hotspot 2.0; Wi-Fi hotspots trustworthiness; Wi-trust; computational trust management; malicious legacy Wi-Fi hotspots; network spoofing attacks; smartphone risks; Authentication; Computational modeling; Engines; IEEE 802.11 Standard; Measurement; Quality of service; Wi-Fi; computational trust; public hotspot (ID#: 16-11278)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7383629&isnumber=7383613
P. Mishra, S. Bhunia and S. Ravi, “Validation and Debug of Security and Trust Issues in Embedded Systems,” 2015 28th International Conference on VLSI Design, Bangalore, 2015, pp. 3-5. doi: 10.1109/VLSID.2015.110
Abstract: Summary form only given. Reusable hardware intellectual property (IP) based System-on-Chip (SoC) design has emerged as a pervasive design practice in the industry to dramatically reduce design/verification cost while meeting aggressive time-to-market constraints. However, growing reliance on reusable pre-verified hardware IPs and a wide array of CAD tools during SoC design — often gathered from untrusted third-party vendors — severely affects the security and trustworthiness of SoC computing platforms. Major security issues in the hardware IPs at different stages of the SoC life cycle include piracy during IP evaluation, reverse engineering, cloning, counterfeiting, as well as malicious hardware modifications. The global electronic piracy market is growing rapidly and is now estimated at $1B/day, of which a significant part is related to hardware IPs. Furthermore, use of untrusted foundries in a fabless business model greatly aggravates SoC security threats by introducing the vulnerability of malicious modifications or piracy during SoC fabrication. Due to ever-growing computing demands, modern SoCs tend to include many heterogeneous processing cores and a scalable communication network, together with reconfigurable cores, e.g., embedded FPGAs, in order to incorporate logic that is likely to change as standards and requirements evolve. Such design practices greatly increase the number of untrusted components in the SoC design flow and make overall system security a pressing concern. There is a critical need to analyze SoC security issues and attack models due to the involvement of multiple untrusted entities in the SoC design cycle — IP vendors, CAD tool developers, and foundries — and to develop low-cost, effective countermeasures. These countermeasures would encompass encryption, obfuscation, watermarking and fingerprinting, and certain analytic methods derived from the behavioral aspects of the SoC to enable trusted operation with untrusted components.
In this tutorial, we plan to provide a comprehensive coverage of both fundamental concepts and recent advances in validation of security and trust of hardware IPs. The tutorial also covers the security and debug trade-offs in modern SoCs, e.g., more observability is beneficial for debug whereas limited observability is better for security. It examines the state of the art in research in this challenging area as well as industrial practice, and points to important gaps that need to be filled in order to develop a validation and debug flow for secure SoC systems. The tutorial presenters (one industry expert and two faculty members) will be able to provide unique perspectives on both academic research and industrial practices. The selection of topics covers a broad spectrum and will be of interest to a wide audience including design, validation, security, and debug engineers. The proposed tutorial consists of six parts. The first part introduces security vulnerabilities and various challenges associated with trust validation for hardware IPs. Part II covers various security attacks and countermeasures. Part III covers both formal methods and simulation-based approaches for security and trust validation. Part IV presents the conflicting requirements between security and debug during SoC development and ways to address them. Part V covers real-life examples of security failures and successful countermeasures in industry. Finally, Part VI concludes this tutorial with a discussion of emerging issues and future directions.
Keywords: computer debugging; embedded systems; industrial property; security of data; system-on-chip; SoC computing platforms; debug flow; formal methods; hardware IP; reusable hardware intellectual property; security attacks; security failures; security validation; security vulnerabilities; trust validation; Awards activities; Design automation; Hardware; Security; System-on-chip; Tutorials; Very large scale integration (ID#: 16-11279)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7031691&isnumber=7031671
R. Weiss et al., “Trust Evaluation in Mobile Devices: An Empirical Study,” Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 25-32. doi: 10.1109/Trustcom.2015.353
Abstract: Mobile devices today, such as smartphones and tablets, have become both more complex and diverse. This paper presents a framework to evaluate the trustworthiness of the individual components in a mobile system, as well as the entire system. The major components are applications, devices and networks of devices. Given this diversity and multiple levels of a mobile system, we develop a hierarchical trust evaluation methodology, which enables the combination of trust metrics and allows us to verify the trust metric for each component based on the trust metrics for others. The paper first demonstrates this idea for individual applications and Android-based smartphones. The methodology involves two stages: initial trust evaluation and trust verification. In the first stage, an expert rule system is used to produce trust metrics at the lowest level of the hierarchy. In the second stage, the trust metrics are verified by comparing data from components and a trust evaluation is produced for the combined system. This paper presents the results of two empirical studies, in which this methodology is applied and tested. The first study involves monitoring resource utilization and evaluating trust based on resource consumption patterns. We measured battery voltage, CPU utilization and network communication for individual apps and detected anomalous behavior that could be indicative of malicious code. The second study involves verification of the trust evaluation by comparing the data from two different devices: the GPS location from an Android smartphone in an automobile and the data from an on-board diagnostics (OBD) sensor of the same vehicle.
Keywords: Android (operating system); expert systems; mobile computing; power aware computing; program verification; resource allocation; smart phones; system monitoring; trusted computing; voltage measurement; Android smartphone; Android-based smartphones; CPU utilization; GPS location; OBD sensor; anomalous behavior detection; battery voltage measurement; expert rule system; hierarchical trust evaluation methodology; mobile devices; network communication; onboard diagnostics sensor; resource consumption patterns; resource utilization monitoring; tablets; trust metrics; trust verification; trustworthiness evaluation; Computer science; Electronic mail; Measurement; Privacy; Security; Smart phones; security (ID#: 16-11280)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345261&isnumber=7345233
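The first empirical study in the abstract above derives trust from resource-consumption patterns (battery voltage, CPU utilization, network traffic). A minimal Python sketch of that idea follows; the scoring rule, field names, and tolerance factor are illustrative assumptions, not the authors' implementation:

```python
def trust_from_resources(samples, baseline, tolerance=2.0):
    """Score an app's trustworthiness in [0, 1] from observed resource
    usage. A sample counts as anomalous (possible malicious behavior)
    when any monitored quantity exceeds its baseline by the tolerance
    factor; trust is the fraction of non-anomalous samples."""
    anomalies = sum(
        1 for s in samples
        if any(s[key] > tolerance * baseline[key] for key in baseline)
    )
    return 1.0 - anomalies / len(samples)

baseline = {"cpu_pct": 5.0, "net_bytes": 1024, "battery_mv_drop": 2.0}
samples = [
    {"cpu_pct": 4.0, "net_bytes": 900, "battery_mv_drop": 1.5},
    {"cpu_pct": 30.0, "net_bytes": 50000, "battery_mv_drop": 9.0},  # anomalous burst
]
print(trust_from_resources(samples, baseline))  # 0.5: one of two samples anomalous
```

The paper's second stage (cross-checking smartphone GPS against the vehicle's OBD sensor) would then verify such low-level scores against an independent device.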
T. Fadai, S. Schrittwieser, P. Kieseberg and M. Mulazzani, “Trust me, I'm a Root CA! Analyzing SSL Root CAs in Modern Browsers and Operating Systems,” Availability, Reliability and Security (ARES), 2015 10th International Conference on, Toulouse, 2015, pp. 174-179. doi: 10.1109/ARES.2015.93
Abstract: The security and privacy of our online communications heavily rely on the entity authentication mechanisms provided by SSL. Those mechanisms in turn heavily depend on the trustworthiness of a large number of companies and governmental institutions for attestation of the identity of SSL service providers. In order to offer a wide and unobstructed availability of SSL-enabled services and to remove the need to make a large number of trust decisions from their users, operating systems and browser manufacturers include lists of certification authorities which are trusted for SSL entity authentication by their products. This has the problematic effect that users of such browsers and operating systems implicitly trust those certification authorities with the privacy of their communications while they might not even realize it. The problem is further complicated by the fact that different software vendors trust different companies and governmental institutions, from a variety of countries, which leads to an obscure distribution of trust. To give insight into the trust model used by SSL, this thesis explains the various entities and technical processes involved in establishing trust when using SSL communications. It furthermore analyzes the number and origin of companies and governmental institutions trusted by various operating systems and browser vendors and correlates the gathered information to a variety of indexes to illustrate that some of these trusted entities are far from trustworthy. Furthermore, it points out the fact that the number of entities we trust with the security of our SSL communications keeps growing over time and displays the negative effects this might have, as well as shows that the trust model of SSL is fundamentally broken.
Keywords: certification; cryptographic protocols; data privacy; message authentication; online front-ends; operating systems (computers); trusted computing; CAs; SSL communications; SSL entity authentication; SSL root; SSL-enabled services; browsers; certification authorities; entity authentication mechanisms; online communications; operating systems; privacy; root certificate programs; security; trust model; Browsers; Companies; Government; Indexes; Internet; Operating systems; Security; CA; PKI; trust (ID#: 16-11281)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299911&isnumber=7299862
S. Singh and J. Sidhu, “A Collaborative Trust Calculation Scheme for Cloud Computing Systems,” 2015 2nd International Conference on Recent Advances in Engineering & Computational Sciences (RAECS), Chandigarh, 2015, pp. 1-5. doi: 10.1109/RAECS.2015.7453380
Abstract: One of the major hurdles in the widespread use of cloud computing systems is the lack of trust between consumer and service provider. Lack of trust can put a consumer's sensitive data and applications at risk. Consumers need assurance that service providers will provide services as per agreement and will not deviate from agreed terms and conditions. Though trust is a subjective term, it can also be measured objectively. In this paper we present the design and simulation of a collaborative trust calculation scheme in which trust in a service provider is built by participants in a collaborative way. Each collaborator shares its experience of the service provider with the coordinator, and the shared experiences are then aggregated by the coordinator to compute a final trust value which represents the trustworthiness of the service provider. The scheme makes use of fuzzy logic to aggregate responses and to handle uncertain and imprecise information. The collaborative trust calculation scheme makes it difficult for an untrustworthy service provider to build its reputation in the system by providing quality services only to a selected set of participants. A service provider has to provide agreed services to all participants uniformly in order to build a reputation in the environment. Simulation has been done using the MATLAB toolkit. Simulation results show that the scheme is workable and can be adopted for use in collaborative cloud computing systems to determine the trustworthiness of service providers.
Keywords: cloud computing; fuzzy logic; trusted computing; Matlab toolkit; cloud computing systems; collaborative trust calculation scheme; consumer sensitive data; final trust value; untrustworthy service provider; Aggregates; Cloud computing; Collaboration; Computational modeling; Fuzzy logic; Quality of service; Security; trustworthiness (ID#: 16-11282)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7453380&isnumber=7453273
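The coordinator in the scheme above aggregates collaborators' shared experiences with fuzzy logic so that uncertain or outlying reports carry little weight. A minimal sketch of that style of aggregation; the triangular memberships and floor weight are illustrative assumptions (the paper's actual fuzzy system is built with the MATLAB toolkit):

```python
def membership(x, low, mid, high):
    """Triangular fuzzy membership of x, peaking at mid, zero outside (low, high)."""
    if x <= low or x >= high:
        return 0.0
    if x <= mid:
        return (x - low) / (mid - low)
    return (high - x) / (high - mid)

def aggregate_trust(reports):
    """Coordinator-side aggregation of collaborators' experiences (each in
    [0, 1]): weight each report by its membership in a 'consistent' fuzzy
    set centred on the median, so outlying reports count for little. This
    makes it hard for a provider to build reputation by serving only a
    favoured subset of participants."""
    med = sorted(reports)[len(reports) // 2]
    weights = [max(membership(r, med - 0.5, med, med + 0.5), 0.05) for r in reports]
    return sum(w * r for w, r in zip(weights, reports)) / sum(weights)

print(aggregate_trust([0.8, 0.8, 0.1]))  # close to 0.8: the outlier is damped
```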
G. Ducatel, “Identity as a Service: A Cloud Based Common Capability,” Communications and Network Security (CNS), 2015 IEEE Conference on, Florence, 2015, pp. 675-679. doi: 10.1109/CNS.2015.7346886
Abstract: Driven by benefits in cost efficiency, scale, ease of access and of resource, service and information sharing, the cloud is becoming the power engine of pervasive ICT (Information and Communication Technology). Identity and Access Management has become a prime target to enable trust establishment for cloud services and IoT (Internet of Things). Turning IAM (Identity and Access Management) solutions into IDaaS (Identity as a Service) helps provide ubiquitous identity solutions. In this paper we present a framework for IDaaS emphasizing the aspects relating to identity federation and lifecycle management. Our design approach allows re-sellers and users to view and validate compliance requirements. We present identity as a holistic and centralised function and articulate the benefit of such an approach, emphasizing improvements in assurance and trustworthiness. We investigate specific trust issues and suggest identity assurance checks that give organisations the required insight to understand risks, and techniques to mitigate these risks.
Keywords: Internet of Things; cloud computing; security of data; ubiquitous computing; IDaaS; Internet-of-things; IoT; assurance improvement; cloud based common capability; cloud services; cost efficiency; ease-of-access; identity federation; identity-and-access management; identity-as-a-service; information sharing; information-and-communication technology; lifecycle management; pervasive ICT; power engine; resource sharing; service sharing; trustworthiness improvement; Cloud computing; Conferences; Cryptography; Privacy; Standards; IAM; identity (ID#: 16-11283)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346886&isnumber=7346791
X. Shen, H. Long and C. Ma, “Incorporating Trust Relationships in Collaborative Filtering Recommender System,” Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), 2015 16th IEEE/ACIS International Conference on, Takamatsu, 2015, pp. 1-8. doi: 10.1109/SNPD.2015.7176248
Abstract: Nowadays, with the ready accessibility of online social networks (OSNs), people can easily share interesting information with friends through OSNs. Undoubtedly these sharing activities enrich our lives. One challenge we face, however, is information overload: we do not have enough time to review all of the content broadcast through OSNs. We therefore need a mechanism to help users recognize interesting items in a large pool of content. In this project, we aim to filter unwanted content based on the strength of trust relationships between users. We propose two kinds of trust models: a basic trust model and a source-level trust model. The trust values are estimated based on historical user interactions and profile similarity. We estimate dynamic trusts and analyze the evolution of trust relationships over time. We also incorporate the auxiliary causes of interactions to moderate the noisy effect of a user's intrinsic tendency to perform a certain type of interaction. In addition, since the trustworthiness of diverse information sources differs considerably, we further estimate trust values at the source level. Our recommender systems utilize several types of Collaborative Filtering (CF) approaches, including conventional CF (namely user-based, item-based, and singular value decomposition (SVD)-based), and also trust-combined user-based CF. We evaluate our trust models and recommender systems on Friendfeed datasets. By comparing the evaluation results, we found that the recommendations based on estimated trust relationships were better than conventional CF recommendations.
Keywords: collaborative filtering; recommender systems; security of data; singular value decomposition; social networking (online); user interfaces; Friendfeed datasets; OSN; basic trust model; collaborative filtering recommender system; historical user interactions; interesting item recognition; item-based type; online social networks; profile similarity; sharing activities; singular value decomposition-based type; source-level trust model; trust relationship evolution; trust value estimation; trust-combined user-based CF; user-based type; Analytical models; Collaboration; Computational modeling; Facebook; Recommender systems; Collaborative Filtering; Online Social Network; Recommender System; Trust Relationship; User Interaction (ID#: 16-11284)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7176248&isnumber=7176160
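The trust-combined user-based CF evaluated above can be illustrated with the standard trust-weighted prediction formula. This is a generic CF sketch under assumed data shapes, not the authors' code:

```python
def predict_rating(user, item, ratings, trust):
    """Trust-combined user-based CF: predict a user's interest in an item
    as the trust-weighted average of ratings from users the target trusts.
    ratings: {user: {item: rating}}; trust: {(truster, trustee): weight}."""
    num = den = 0.0
    for other, their_ratings in ratings.items():
        weight = trust.get((user, other), 0.0)
        if other != user and item in their_ratings and weight > 0:
            num += weight * their_ratings[item]
            den += weight
    return num / den if den else None  # None: no trusted rater saw this item

ratings = {"bob": {"x": 1.0}, "carol": {"x": 0.0}}
trust = {("alice", "bob"): 0.9, ("alice", "carol"): 0.1}
print(predict_rating("alice", "x", ratings, trust))  # 0.9
```

Replacing the static `trust` table with values learned from interaction history and profile similarity is the step the paper's trust models address.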
K. Gomathi and B. Parvathavarthini, “A Secure Clustering in MANET Through Direct Trust Evaluation Technique,” Cloud Computing (ICCC), 2015 International Conference on, Riyadh, 2015, pp. 1-6. doi: 10.1109/CLOUDCOMP.2015.7149624
Abstract: An ad hoc network is a self-organizing wireless network made up of mobile nodes that act as both node and router. A wired network with fixed infrastructure defends against attacks using firewalls and gateways; for a wireless network with dynamic structure, however, attacks can come from anywhere and at any time, because mobile nodes are unguarded against security attacks. To ensure secure data transmission and proper functioning of network operations, the trustworthiness of a node has to be proved before initiating any group activity. When MANET nodes are used for large-scale operations, the dynamic nature of the MANET induces many problems in terms of routing delay, bandwidth and resource consumption. Consequently, researchers have invented many clustering algorithms for better use of MANET resources. With this objective, trust-based clustering is used to divide the whole network into subgroups based on trust value. The trustworthiness of a node is evaluated by a direct trust evaluation technique, and the trust value at each node is calculated as a fuzzy value lying between zero and one. Subgroup (cluster) security is ensured by electing a trustworthy node as Cluster Head (CH). Finally, the proposed Trust-based Clustering Algorithm (TBCA) is shown to be superior to the existing Enhanced Distributed Weighted Clustering Algorithm (EDWCA) on metrics such as delay, PDR, packet drop and overhead.
Keywords: fuzzy set theory; mobile ad hoc networks; pattern clustering; telecommunication security; trusted computing; EDWCA; MANET nodes; TBCA; ad hoc network; cluster head; clustering algorithms; direct trust evaluation technique; enhanced distributed weighted clustering algorithm; fuzzy value; mobile nodes; node trustworthiness; objective trust based clustering; secure data transmission; security attacks; self organizing wireless network; trust based clustering algorithm; trust value; Clustering algorithms; Delays; Mobile ad hoc networks; Nominations and elections; Routing; Thigh (ID#: 16-11285)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7149624&isnumber=7149613
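The core mechanics described above (a fuzzy trust value between zero and one computed by direct observation, then election of the most trustworthy node as Cluster Head) can be sketched as follows. The smoothing rule and observation format are assumptions for illustration, not TBCA itself:

```python
def direct_trust(forwarded, dropped):
    """Fuzzy-style trust value strictly between 0 and 1, from directly
    observed behaviour: how often a neighbour actually relayed the packets
    it was asked to forward (Laplace smoothing avoids the extremes)."""
    return (forwarded + 1) / (forwarded + dropped + 2)

def elect_cluster_head(observations):
    """Elect the node with the highest direct trust as Cluster Head (CH).
    observations: {node_id: (packets_forwarded, packets_dropped)}."""
    return max(observations, key=lambda n: direct_trust(*observations[n]))

print(elect_cluster_head({"A": (95, 5), "B": (60, 40), "C": (99, 1)}))  # C
```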
M. S. Khan, D. Midi, M. I. Khan and E. Bertino, “Adaptive Trust Update Frequency in MANETs,” Parallel and Distributed Systems (ICPADS), 2015 IEEE 21st International Conference on, Melbourne, VIC, 2015, pp. 132-139. doi: 10.1109/ICPADS.2015.25
Abstract: Most of the existing trust-based security schemes for MANETs compute and update the trustworthiness of the other nodes with a fixed frequency. Although this approach works well in some scenarios, some nodes may not be able to afford the periodical trust update due to the limited resources in energy and computation power. To avoid energy depletion and extend the network lifetime, trust-based security schemes need approaches to update the trust taking into account the network conditions at each node. At the same time, a trade-off in terms of packet loss rate, false positives, detection rate, and energy of nodes is needed for network performance. In this paper, we first investigate the impact of trust update frequency on energy consumption and packet loss rate. We then identify network parameters, such as packet transmission rate, packet loss rate, remaining node energy, and rate of link changes, and leverage these parameters to design an Adaptive Trust Update Frequency scheme that takes into account runtime network conditions. The evaluation of our prototype shows significant improvements in the tradeoff between energy saving and packet loss rate over traditional fixed-frequency approaches.
Keywords: energy consumption; mobile ad hoc networks; telecommunication security; MANET; adaptive trust update frequency; energy depletion; energy saving; packet loss rate; packet transmission rate; remaining node energy; runtime network conditions; trust-based security schemes; trustworthiness; Ad hoc networks; Energy consumption; Mobile computing; Monitoring; Packet loss; Security (ID#: 16-11286)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7384288&isnumber=7384203
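The network parameters the abstract identifies (remaining energy, packet loss rate, rate of link changes) can drive a per-node update interval. A sketch of the idea; all coefficients and the clamping range are illustrative assumptions, not the paper's scheme:

```python
def update_interval(base_s, energy_frac, loss_rate, link_change_rate):
    """Adapt the trust-update period to local conditions: stretch it when
    the node is low on energy (to avoid depletion), shrink it when packet
    loss or link churn suggests trust values are going stale, and clamp
    the result to a sane range. energy_frac, loss_rate and
    link_change_rate are all normalised to [0, 1]."""
    interval = base_s * (2.0 - energy_frac)           # low energy -> slower updates
    interval /= 1.0 + loss_rate + link_change_rate    # instability -> faster updates
    return max(1.0, min(interval, 10.0 * base_s))
```

With a 30 s base period, a full-energy node on stable links keeps the base rate, a depleted node updates less often, and a node seeing heavy loss or churn updates more often.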
J. R. Gandhi and R. H. Jhaveri, “Addressing Packet Forwarding Misbehaviour Using Trust-Based Approach in Ad-Hoc Networks: A Survey,” Signal Processing and Communication Engineering Systems (SPACES), 2015 International Conference on, Guntur, 2015, pp. 391-396. doi: 10.1109/SPACES.2015.7058292
Abstract: Mobile ad hoc networks (MANETs) are spontaneously deployed over a geographically limited area without well-established infrastructure. In a distributed Mobile Ad Hoc Network (MANET), collaboration and cooperation are critical concerns in managing trust. The networks work well only if the mobile nodes are trustworthy and behave cooperatively. Due to the openness of the network topology and the absence of a centralized administration, MANETs are very vulnerable to various attacks from malicious nodes. In order to reduce the hazards from such nodes and enhance the security of the network, a trust-based model is used to evaluate the trustworthiness of nodes. A trust-based approach provides a flexible and feasible way to choose the shortest route that meets the security requirements of data packet transmission. This paper focuses on trust management and its properties, provides a survey of various trust-based approaches, and proposes some novel conceptions of trust management in MANETs.
Keywords: mobile ad hoc networks; telecommunication network topology; telecommunication security; MANET; centralized administration; data packets transmission; geographically limited area; malicious nodes; network security; network topology; packet forwarding misbehaviour; trust management; trust-based approach; trust-based model; Ad hoc networks; Delays; Mobile computing; Quality of service; Routing; Routing protocols; Security; Properties of Trust; Trust; Trust Management (ID#: 16-11287)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058292&isnumber=7058196
N. Djedjig, D. Tandjaoui and F. Medjek, “Trust-based RPL for the Internet of Things,” 2015 IEEE Symposium on Computers and Communication (ISCC), Larnaca, 2015, pp. 962-967. doi: 10.1109/ISCC.2015.7405638
Abstract: The Routing Protocol for Low-Power and Lossy Networks (RPL) is the standardized routing protocol for constrained environments such as 6LoWPAN networks, and is considered the routing protocol of the Internet of Things (IoT). However, this protocol is subject to several internal and external attacks. In fact, RPL is facing many issues. Among these issues, trust management is a real challenge when deploying RPL. In this paper, we highlight and discuss the different issues of trust management in RPL. We consider that using only a TPM (Trusted Platform Module) to ensure trustworthiness between nodes is not sufficient. Indeed, an internally infected or selfish node could participate in constructing the RPL topology. To overcome this issue, we propose to strengthen RPL by adding a new trustworthiness metric during RPL construction and maintenance. This metric represents the level of trust for each node in the network, and is calculated using selfishness, energy, and honesty components. It allows a node to decide whether or not to trust the other nodes during the construction of the topology.
Keywords: Internet of Things; routing protocols; telecommunication network topology; TPM; energy component; honesty component; routing protocol for low-power and lossy network; selfishness component; standardized routing protocol; trust platform module; trust-based RPL topology; Measurement; Routing; Routing protocols; Security; Topology; Wireless sensor networks (ID#: 16-11288)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7405638&isnumber=7405441
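The metric described above combines selfishness, energy, and honesty into a per-node trust level that gates topology construction. A sketch of one plausible combination; the weights, threshold, and linear form are assumptions, not the paper's formula:

```python
def node_trust(selfishness, energy, honesty, weights=(0.3, 0.3, 0.4)):
    """Combine the three components named in the abstract into a single
    trust level in [0, 1]: selfishness is a penalty (0 = fully
    cooperative), while energy and honesty contribute positively.
    All inputs are assumed normalised to [0, 1]."""
    w_self, w_energy, w_honesty = weights
    return w_self * (1.0 - selfishness) + w_energy * energy + w_honesty * honesty

def accept_parent(trust_level, threshold=0.6):
    """During topology construction, accept a candidate parent only if
    its trust level clears the threshold."""
    return trust_level >= threshold

print(accept_parent(node_trust(0.1, 0.8, 0.9)))  # True: cooperative, honest node
```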
A. M. Shabut, K. Dahal, I. Awan and Z. Pervez, “Route Optimisation Based on Multidimensional Trust Evaluation Model in Mobile Ad Hoc Networks,” 2015 Second International Conference on Information Security and Cyber Forensics (InfoSec), Cape Town, 2015, pp. 28-34. doi: 10.1109/InfoSec.2015.7435502
Abstract: With the increased numbers of mobile devices working in an ad hoc manner, there are many problems in secure routing protocols. Finding a path between source and destination faces more challenges in a mobile ad hoc network (MANET) environment because of node movement and frequent topology changes, besides the dependence on intermediate nodes to relay packets. Therefore, trust techniques are utilised in such environments to secure routing and stimulate nodes to cooperate in the packet forwarding process. In this paper, an investigation of the use of trust to choose the optimised path between two nodes is provided. It proposes selecting the most reliable path based on a multidimensional trust evaluation technique that includes the number of hops, trust opinion, confidence in the provided trust, and the energy level of nodes on the path. The model overcomes the limitation of considering only the trustworthiness of the nodes on the path and uses a route optimisation approach to select the path between source and destination. The empirical analysis shows the robustness and accuracy of the trust model in a dynamic MANET environment.
Keywords: mobile ad hoc networks; relay networks (telecommunication); routing protocols; telecommunication network topology; telecommunication security; dynamic MANET environment; empirical analysis; frequent topology changes; intermediate nodes; mobile ad hoc network; mobile device; multidimensional trust evaluation model; node movement; packet forwarding process; relay packet; route optimisation approach; routing protocol security; Algorithm design and analysis; Heuristic algorithms; Mobile ad hoc networks; Optimization; Routing; Routing protocols; Security; routing optimisation; routing protocol; selection algorithm; trust; trustworthiness (ID#: 16-11289)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7435502&isnumber=7435496
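The multidimensional route selection described above scores each candidate path on several axes rather than trust alone. A sketch of one possible scoring; the weights, normalisation, and linear form are assumptions, not the authors' optimisation:

```python
def path_score(hops, trust_opinion, confidence, min_energy,
               weights=(0.25, 0.35, 0.2, 0.2), max_hops=10):
    """Score a candidate route on four dimensions: hop count (fewer is
    better), aggregate trust opinion of the nodes on the path, confidence
    in that opinion, and the lowest remaining node energy on the path.
    All non-hop inputs are assumed normalised to [0, 1]."""
    w_hop, w_trust, w_conf, w_energy = weights
    return (w_hop * (1.0 - min(hops, max_hops) / max_hops)
            + w_trust * trust_opinion
            + w_conf * confidence
            + w_energy * min_energy)

def best_path(candidates):
    """Pick the highest-scoring (hops, trust, confidence, energy) route."""
    return max(candidates, key=lambda c: path_score(*c))

# A slightly longer but well-trusted route beats a short, untrusted one:
print(best_path([(3, 0.9, 0.8, 0.7), (2, 0.3, 0.4, 0.9)]))
```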
M. S. Khan, D. Midi, M. I. Khan and E. Bertino, “Adaptive Trust Threshold Strategy for Misbehaving Node Detection and Isolation,” Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 718-725. doi: 10.1109/Trustcom.2015.439
Abstract: Due to dynamic network topology, distributed architecture and absence of a centralized authority, mobile ad hoc networks (MANETs) are vulnerable to various attacks from misbehaving nodes. To enhance security, various trust-based schemes have been proposed that augment traditional cryptography-based security schemes. However, most of them use static and predefined trust thresholds for node misbehavior detection, without taking into consideration the network conditions locally at each node. Using static thresholds for misbehavior detection may result in high false positives, low malicious node detection rate, and network partitioning. In this paper, we propose a novel Adaptive Trust Threshold (ATT) computation strategy, that adapts the trust threshold in the routing protocol according to network conditions such as rate of link changes, node degree and connectivity, and average neighborhood trustworthiness. We identify the topology factors that affect the trust threshold at each node, and leverage them to build a mathematical model for ATT computation. Our simulation results indicate that the ATT strategy achieves significant improvements in packet delivery ratio, reduction in false positives, and increase in detection rate as compared to traditional static threshold strategies.
Keywords: cryptography; mobile ad hoc networks; routing protocols; telecommunication network topology; telecommunication security; ATT computation strategy; MANETs; adaptive trust threshold strategy; cryptography-based security schemes; distributed architecture; dynamic network topology; high false positive reduction; low malicious node detection rate; mathematical model; misbehaving node detection; misbehaving node isolation; network partitioning; packet delivery ratio; predefined trust thresholds; routing protocol; static threshold strategy; trust-based schemes; Ad hoc networks; Adaptation models; Measurement; Mobile computing; Network topology; Routing; Security; Trust-based security; adaptive threshold; static threshold; threshold computation (ID#: 16-11290)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345347&isnumber=7345233
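The ATT idea above replaces a static detection threshold with one computed from local conditions (link-change rate, node degree, neighbourhood trust). A sketch under assumed coefficients; the paper builds an actual mathematical model, which this does not reproduce:

```python
def adaptive_threshold(base, link_change_rate, degree, avg_neighbor_trust,
                       max_degree=20):
    """Adapt the misbehaviour-detection trust threshold to local network
    conditions: relax it under heavy link churn (mobility causes innocent
    packet drops, so a static threshold yields false positives), and
    tighten it in dense, trustworthy neighbourhoods where more
    corroborating evidence is available. Result clamped to [0.1, 0.9]."""
    t = base
    t -= 0.2 * min(link_change_rate, 1.0)           # churn -> fewer false positives
    t += 0.1 * min(degree, max_degree) / max_degree  # density -> stricter
    t += 0.1 * (avg_neighbor_trust - 0.5)            # trusted area -> stricter
    return max(0.1, min(t, 0.9))
```

A node would recompute this each monitoring round and isolate a neighbour only when its observed trust falls below the current threshold.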
M. Xiang, W. Liu, Q. Bai and A. Al-Anbuky, “Simmelian Ties and Structural Holes: Exploring Their Topological Roles in Forming Trust for Securing Wireless Sensor Networks,” Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 96-103. doi: 10.1109/Trustcom.2015.362
Abstract: Due to the nature of wireless sensor networks (WSNs) in open-access and error-prone wireless environments, the security issues are always crucial. Traditional security mechanisms such as Public Key Infrastructure (PKI) are no longer as feasible for protecting WSNs as in wired networks. The new concept of trust has emerged in recent studies as an alternative mechanism to address the security concerns in WSNs. Most recent studies on trust are mainly focused on how to model and evaluate trust so as to effectively detect, isolate, and avoid any malicious activity in the network. In this paper, we have introduced the new angle of an adaptive network approach to study 'dynamics on networks' (i.e., trust state transition on a network with a fixed topology) and 'dynamics of networks' (i.e., topological transformation of a network with no dynamic trust state changes) separately, so as to discover the interplay between network overlay entities' trust evaluation and its underlying topological connectivity. Inspired by trust studies in sociology, we propose that Simmelian-tie-structured networks have a more positive impact on fostering trustworthiness among sensor nodes, while structural-hole-characterized networks provide more opportunity for misbehavior and have a negative impact on securing WSNs. These hypotheses have been confirmed by extensive simulation studies.
Keywords: public key cryptography; telecommunication network topology; telecommunication security; wireless sensor networks; PKI; Simmelian tie structured networks; WSN; adaptive network approach; error-prone wireless environments; fixed topology; network dynamics; open-access; public key infrastructure; sociology; structural holes; topological roles; topological transformation; trust formation; trust state transition; underlie topological connectivity; wired networks; wireless sensor network security; Adaptive systems; Measurement; Network topology; Security; Sociology; Topology; Wireless sensor networks; Adaptive networks; Security; Simmelian tie and structural hole; Topological metrics; Trust and reputation management; (ID#: 16-11291)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345270&isnumber=7345233
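The Simmelian ties studied above are, in the usual network-analysis formulation, reciprocated ties embedded in reciprocated triads. Detecting them in a directed graph is straightforward; this sketch is a generic illustration, not the paper's simulation code:

```python
def simmelian_ties(edges, nodes):
    """Return the directed ties that are Simmelian: reciprocated ties
    whose two endpoints are also both reciprocally tied to at least one
    common third node (i.e., the tie is embedded in a reciprocated
    triad). edges is a list of (source, target) pairs."""
    edge_set = set(edges)
    reciprocated = {(i, j) for (i, j) in edge_set if (j, i) in edge_set}
    result = set()
    for (i, j) in reciprocated:
        for k in nodes:
            if k not in (i, j) and (i, k) in reciprocated and (j, k) in reciprocated:
                result.add((i, j))
                break
    return result
```

A reciprocated pair with no shared triad partner (a candidate structural-hole bridge) is exactly what this filter excludes, which is the structural contrast the paper's hypotheses rest on.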
M. Mayhew, M. Atighetchi, A. Adler and R. Greenstadt, “Use of Machine Learning in Big Data Analytics for Insider Threat Detection,” Military Communications Conference, MILCOM 2015 - 2015 IEEE, Tampa, FL, 2015, pp. 915-922. doi: 10.1109/MILCOM.2015.7357562
Abstract: In current enterprise environments, information is becoming more readily accessible across a wide range of interconnected systems. However, the trustworthiness of documents and actors is not explicitly measured, leaving actors unaware of how the latest security events may have impacted the trustworthiness of the information being used and the actors involved. This leads to situations where information producers give documents to consumers they should not trust and consumers use information from non-reputable documents or producers. The concepts and technologies developed as part of the Behavior-Based Access Control (BBAC) effort strive to overcome these limitations by means of performing accurate calculations of trustworthiness of actors, e.g., behavior and usage patterns, as well as documents, e.g., provenance and workflow data dependencies. BBAC analyses a wide range of observables for mal-behavior, including network connections, HTTP requests, English text exchanges through emails or chat messages, and edit sequences to documents. The current prototype service strategically combines big data batch processing to train classifiers and real-time stream processing to classify observed behaviors at multiple layers. To scale up to enterprise regimes, BBAC combines clustering analysis with statistical classification in a way that maintains an adjustable number of classifiers.
Keywords: Big Data; authorisation; data analysis; document handling; learning (artificial intelligence); pattern classification; pattern clustering; trusted computing; BBAC; English text exchanges; HTTP requests; actor trustworthiness; behavior-based access control; big data analytics; big data batch processing; chat messages; classifier training; clustering analysis; document trustworthiness; emails; enterprise environments; information trustworthiness; insider threat detection; interconnected systems; machine learning; mal-behavior; network connections; real-time stream processing; security events; statistical classification; Access control; Big data; Computer security; Electronic mail; Feature extraction; Monitoring; HTTP; TCP; big data; chat; documents; email; insider threat; machine learning; support vector machine; trust; usage patterns (ID#: 16-11292)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357562&isnumber=7357245
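The "clustering plus per-cluster classification" pattern described above can be illustrated in miniature: cluster a scalar behaviour feature, then label new observations by their nearest cluster. The feature, labels, and algorithm choice here are illustrative assumptions, not BBAC's pipeline:

```python
def one_d_kmeans(values, k, iters=20):
    """Tiny 1-D k-means: partition scalar behaviour features into k
    clusters, so each cluster can get its own classifier (k is the
    'adjustable number of classifiers')."""
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[nearest].append(v)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers

def classify(value, centers, labels):
    """Assign a new observation to the label of its nearest cluster;
    a stand-in for the per-cluster statistical classifiers."""
    nearest = min(range(len(centers)), key=lambda i: abs(value - centers[i]))
    return labels[nearest]

# Hypothetical bytes-per-request feature with two behaviour regimes:
centers = one_d_kmeans([1.0, 1.2, 0.9, 10.0, 10.5, 11.0], k=2)
print(classify(0.95, centers, ["benign", "suspect"]))  # benign
```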
A. M. Ahmed, Q. H. Mehdi, R. Moreton and A. Elmaghraby, “Serious Games Providing Opportunities to Empower Citizen Engagement and Participation in E-Government Services,” 2015 Computer Games: AI, Animation, Mobile, Multimedia, Educational and Serious Games (CGAMES), Louisville, KY, 2015, pp. 138-142. doi: 10.1109/CGames.2015.7272971
Abstract: Serious games are electronic games designed not primarily for entertainment but for purposes such as education, training, health, military, politics, advertising and business. Communication between governments and citizens via electronic channels (i.e., e-government) to deliver services is difficult in developing countries due to limited IT knowledge, user experience and trust issues. Serious games can potentially improve citizen engagement in e-services by helping users expand their personal knowledge regarding service benefits, privacy and security. The main purpose of this paper is to investigate the extent to which an extended Technology Acceptance Model (TAM) and Trustworthiness Model (TM) facilitate the use of serious games in e-government services and empower citizen engagement and participation. In this research, the benefits of serious games are assayed in terms of perceived usefulness and perceived ease of use in TAM, as well as increased Internet and government trust in TM, to form a conceptual model of factors that influence citizen adoption of e-government initiatives. The model provides a new way to assist governments in increasing citizens' engagement with their online services.
Keywords: Internet; government data processing; serious games (computing); TAM; TM; citizen engagement; citizen participation; e-government initiatives; e-government services; electronic government; serious games; technology acceptance model; trustworthiness model; Computational modeling; Computers; Electronic government; Games; Privacy; Training; Citizen engagement; Serious Games; Trustworthiness; e-Government (ID#: 16-11293)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7272971&isnumber=7272892
C. A. Kamhoua, A. Ruan, A. Martin and K. A. Kwiat, “On the Feasibility of an Open-Implementation Cloud Infrastructure: A Game Theoretic Analysis,” 2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC), Limassol, 2015,
pp. 217-226. doi: 10.1109/UCC.2015.38
Abstract: Trusting a cloud infrastructure is a hard problem which urgently needs effective solutions. There are increasing demands for switching to the cloud in sectors such as finance, healthcare, and government, where data security protections are among the highest priorities. But most of these demands remain unsatisfied due to current cloud infrastructures' lack of provable trustworthiness. Trusted Computing (TC) technologies implement effective mechanisms for attesting to the genuine behaviors of a software platform. Integrating TC with cloud infrastructure is a promising method for verifying the cloud's behaviors, which may in turn facilitate provable trustworthiness. However, a side effect of TC also raises concerns: exhibiting genuine behaviors might attract targeted attacks. Consequently, current Trusted Cloud proposals integrate only limited TC capabilities, which hampers effective and practical trust establishment. In this paper, we aim to justify the benefits of a fully Open-Implementation cloud infrastructure, meaning that the cloud's implementation and configuration details can be inspected by both legitimate and malicious cloud users. We applied game theoretic analysis to discover the new dynamics formed between the Cloud Service Provider (CSP) and cloud users when the Open-Implementation strategy is introduced. We conclude that even though an Open-Implementation cloud may facilitate attacks, vulnerabilities and misconfigurations are easier to discover, which in turn reduces the total security threat. Also, cyber threat monitoring and sharing are made easier in an Open-Implementation cloud. More importantly, the cloud's provable trustworthiness will attract more legitimate users, which increases the CSP's revenue and helps lower prices. This eventually creates a virtuous cycle that benefits both the CSP and legitimate users.
Keywords: cloud computing; game theory; open systems; security of data; trusted computing; CSP revenue; TC technologies; cloud details; cloud service provider; cloud trustworthiness; cyber threat monitoring; data security protections; fully open-implementation cloud infrastructure; game theoretic analysis; legitimate cloud users; malicious cloud users; open-implementation cloud; open-implementation cloud strategy; software platform; total security threats; trusted computing technologies; Cloud computing; Computational modeling; Games; Hardware; Security; Virtual machine monitors; Cloud Computing; Game Analysis; Trusted Computing (ID#: 16-11294)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7431413&isnumber=7431374
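The game-theoretic reasoning in this abstract can be illustrated with a toy strategic-form game between the CSP (open vs. closed implementation) and an attacker (attack vs. not). The payoff numbers below are invented for illustration only and do not reproduce the paper's model.

```python
# Toy sketch of the open- vs. closed-implementation trade-off as a game.
# All payoff values are hypothetical, chosen only to illustrate the analysis.

# (CSP payoff, attacker payoff) for each strategy pair.
payoffs = {
    ("open",   "attack"):    (4, 1),  # flaws found and patched quickly
    ("open",   "no_attack"): (8, 0),  # provable trust attracts legitimate users
    ("closed", "attack"):    (2, 3),  # hidden misconfigurations reward attackers
    ("closed", "no_attack"): (5, 0),
}

def best_response(csp_strategy):
    """Attacker's best response given the CSP's disclosure strategy."""
    return max(("attack", "no_attack"),
               key=lambda a: payoffs[(csp_strategy, a)][1])

resp_open = best_response("open")        # attacker still attacks, but weakly
resp_closed = best_response("closed")
csp_open = payoffs[("open", resp_open)][0]
csp_closed = payoffs[("closed", resp_closed)][0]
```

In this toy instance the attacker attacks either way, yet the CSP's payoff against the attacker's best response is higher under openness, mirroring the paper's qualitative conclusion.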
A. Gutmann et al., “ZeTA-Zero-Trust Authentication: Relying on Innate Human Ability, Not Technology,” 2016 IEEE European Symposium on Security and Privacy (EuroS&P), Saarbrucken, Germany, 2016, pp. 357-371. doi: 10.1109/EuroSP.2016.35
Abstract: Reliable authentication requires the devices and channels involved in the process to be trustworthy, otherwise authentication secrets can easily be compromised. Given the unceasing efforts of attackers worldwide, such trustworthiness is increasingly not a given. A variety of technical solutions, such as utilising multiple devices/channels and verification protocols, has the potential to mitigate the threat of untrusted communications to a certain extent. Yet such technical solutions make two assumptions: (1) users have access to multiple devices and (2) attackers will not resort to hacking the human using social engineering techniques. In this paper, we propose and explore the potential of using human-based computation instead of solely technical solutions to mitigate the threat of untrusted devices and channels. ZeTA (Zero Trust Authentication on untrusted channels) has the potential to allow people to authenticate despite compromised channels or communications and easily observed usage. Our contributions are threefold: (1) We propose the ZeTA protocol with a formal definition and security analysis that utilises semantics and human-based computation to ameliorate the problem of untrusted devices and channels. (2) We outline a security analysis to assess the envisaged performance of the proposed authentication protocol. (3) We report on a usability study that explores the viability of relying on human computation in this context.
Keywords: security of data; ZeTA protocol; ZeTA-Zero-Trust Authentication; authentication secrets; formal definition; human computation; innate human ability; multiple devices-channels; reliable authentication; security analysis; social engineering techniques; trustworthy; untrusted communications; untrusted devices; verification protocols; Authentication; Proposals; Protocols; Semantics; Servers; Usability (ID#: 16-11295)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7467365&isnumber=7467331
J. Ma and Y. Zhang, “Research on Trusted Evaluation Method of User Behavior Based on AHP Algorithm,” 2015 7th International Conference on Information Technology in Medicine and Education (ITME), Huangshan, 2015, pp. 588-592. doi: 10.1109/ITME.2015.39
Abstract: The trustworthiness measurement of user behavior is a hot spot in network security research. To address the existing problems of user behavior trust evaluation methods regarding subjective weighting and dynamic adaptability, this paper improves the calculation method for indirect credibility and combines it with a previously proposed user behavior evaluation method based on the Analytic Hierarchy Process (AHP), making user behavior evaluation more effective and accurate. In this method, user behavior activity, a reward and punishment factor, and the improved calculation of indirect credibility are combined to evaluate the user's behavior, and the feasibility of the method is demonstrated by an example. The results show that the proposed method can adapt to dynamic changes in user behavior trust and can accurately evaluate the credibility of user behaviors.
Keywords: analytic hierarchy process; trusted computing; AHP algorithm; dynamic adaptability; indirect credibility calculation method; network security; reward-punishment factor; subjective weight; trusted evaluation method; trustworthiness measurement; user behavior; user behavior activity; Adaptation models; Analytic hierarchy process; Analytical models; Computational modeling; Reliability; Security; Time factors; Analytic Hierarchy Process; Indirect Credibility; User Behavior Trust (ID#: 16-11296)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7429218&isnumber=7429072
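The AHP weighting step underlying this kind of trust evaluation can be sketched as follows. The comparison matrix, behavior-indicator names and evidence values are hypothetical, not taken from the paper; the column-averaging method is a standard approximation of AHP priority weights.

```python
# Sketch of AHP-style weight derivation for behavior-based trust scoring.
# Indicator names and all numeric values below are illustrative only.

def ahp_weights(matrix):
    """Approximate AHP priority weights by averaging the normalized columns."""
    n = len(matrix)
    col_sums = [sum(matrix[r][c] for r in range(n)) for c in range(n)]
    norm = [[matrix[r][c] / col_sums[c] for c in range(n)] for r in range(n)]
    return [sum(norm[r][c] for c in range(n)) / n for r in range(n)]

# Pairwise comparisons of three behavior indicators (Saaty 1-9 scale).
comparison = [
    [1.0, 3.0, 5.0],    # login regularity vs. the other two
    [1/3, 1.0, 3.0],    # resource-usage pattern
    [1/5, 1/3, 1.0],    # failed-authentication rate
]
weights = ahp_weights(comparison)

# Weighted trust score for one user's normalized evidence values.
evidence = [0.9, 0.7, 0.4]
trust = sum(w * e for w, e in zip(weights, evidence))
```

The weights sum to one, so the trust score stays on the same [0, 1] scale as the evidence values.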
Y. Yu, C. Xia and Z. Li, “A Trust Bootstrapping Model for Defense Agents,” Communication Software and Networks (ICCSN), 2015 IEEE International Conference on, Chengdu, 2015, pp. 77-84. doi: 10.1109/ICCSN.2015.7296132
Abstract: In computer network collaborative defense (CNCD) systems, defense agents newly added to the defense network lack historical interactions, which prevents trust establishment. To solve the problem of trust bootstrapping in CNCD, a trust bootstrapping model based on trust type is introduced. Trust type, trust utility and defense cost are first discussed, and the constraints of defense tasks are derived from this analysis. According to the constraints obtained, we identify the trust type and assign initial trustworthiness to defense agents (DAs). Simulation experiments show that the proposed methods achieve a lower task failure rate and better adaptability.
Keywords: computer bootstrapping; computer network security; trusted computing; computer network collaborative defense; defense agents; defense cost; defense network; failure rate; historical interaction; trust bootstrapping model; trust establishment; trust type; trust utility; Collaboration; Computational modeling; Computer science; Game theory; Games; Security; Waste materials; Trust bootstrapping; collaborative defense (ID#: 16-11297)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7296132&isnumber=7296115
J. Hiltunen and J. Kuusijärvi, “Trust Metrics Based on a Trusted Network Element,” Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 660-667. doi: 10.1109/Trustcom.2015.432
Abstract: In this paper we study and propose a trust model and trust metric composition models based on a trusted network element. Our proposed model, executed in a trusted network element, helps the user make subjective decisions based on secure and trusted metric information presented in a user-friendly form. The composition models present two possible solutions for integrating trust constructs into quantitative measurements in order to provide readily available evidence to the trustor about the trustee's trustworthiness. The results show how to achieve a 5% measurement error probability when detecting malicious actions, and what 95% confidence intervals that error probability enables in different trust metric composition models. The presented trust metric is specifically designed for client-server and peer-to-peer communication scenarios over the Internet, such as Web browsing and content streaming.
Keywords: probability; trusted computing; Web browsing; client-server scenarios; composition models; confidence intervals; content streaming; malicious action detection; measurement error probability; peer-to-peer communication scenario; quantitative measurements; secure-trusted metric information; subjective decision making; trust metric composition models; trusted network element; trustee trustworthiness; Analytical models; Internet; Measurement errors; Measurement uncertainty; Security; Uncertainty; Trust model; security measurement; security metric; trust metric (ID#: 16-11298)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345340&isnumber=7345233
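The relation between a per-measurement error probability and the resulting 95% confidence interval can be sketched with a standard Wilson score interval. The paper's exact interval construction may differ; the probe counts below are invented for illustration.

```python
import math

# Sketch: turning repeated trust measurements (each with ~5% error
# probability) into a 95% confidence interval for the malicious-action rate.
# The Wilson score interval is one standard choice; observation counts are
# hypothetical.

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for an observed proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# 200 probes of a network element, 10 flagged as malicious (5% rate).
lo, hi = wilson_interval(10, 200)
```

More probes tighten the interval around the 5% point, which is the kind of trade-off the composition models quantify.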
B. Soeder and K. S. Barber, “A Model for Calculating User-Identity Trustworthiness in Online Transactions,” Privacy, Security and Trust (PST), 2015 13th Annual Conference on, Izmir, 2015, pp. 177-185. doi: 10.1109/PST.2015.7232971
Abstract: Online transactions require a fundamental relationship between users and resource providers (e.g., retailers, banks, social media networks) built on trust; both users and providers must believe the person or organization they are interacting with is who they say they are. Yet with each passing year, major data breaches and other identity-related cybercrimes become a daily way of life, and existing methods of user identity authentication are lacking. Furthermore, much research on identity trustworthiness focuses on the user's perspective, whereas resource providers receive less attention. Therefore, the current research investigated how providers can increase the likelihood their users' identities are trustworthy. Leveraging concepts from existing research, the user-provider trust relationship is modeled with different transaction contexts and attributes of identity. The model was analyzed for two aspects of user-identity trustworthiness - reliability and authenticity - with a significant set of actual user identities obtained from the U.S. Department of Homeland Security. Overall, this research finds that resource providers can significantly increase confidence in user-identity trustworthiness by simply collecting a limited amount of user-identity attributes.
Keywords: computer crime; trusted computing; user interfaces; data breaches; identity-related cybercrimes; online transactions; resource providers; user identity authentication; user identity trustworthiness; Authentication; Computational modeling; Context; Industries; Mathematical model; Protocols; Reliability; authenticity; identity; reliability; trust (ID#: 16-11299)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7232971&isnumber=7232940
S. Benabied, A. Zitouni and M. Djoudi, “A Cloud Security Framework Based on Trust Model and Mobile Agent,” Cloud Technologies and Applications (CloudTech), 2015 International Conference on, Marrakech, 2015, pp. 1-8. doi: 10.1109/CloudTech.2015.7336962
Abstract: Cloud computing as a potential paradigm offers tremendous advantages to enterprises. With cloud computing, time to market is reduced, computing capabilities are augmented, and computing power is virtually limitless. Usually, to use the full power of cloud computing, cloud users have to rely on an external cloud service provider to manage their data. Nevertheless, the management of this data and these services may not be fully trustworthy. Hence, data owners are uncomfortable placing their sensitive data outside their own systems, i.e., in the cloud. Bringing transparency, trustworthiness and security to the cloud model in order to fulfill clients' requirements remains an ongoing effort. To achieve this goal, our paper introduces a two-level security framework: the Cloud Service Provider (CSP) level and the Cloud Service User (CSU) level. Each level is responsible for a particular security task. The CSU level includes a proxy agent and a trust agent that perform the first verification; a second verification is then performed at the CSP level. The framework incorporates a trust model to monitor users' behaviors. The use of mobile agents exploits their intrinsic features such as mobility, deliberate localization and secure communication channel provision. This model aims to protect users' sensitive information from other internal or external users and hackers. Moreover, it can detect policy breaches and notify users so that necessary actions can be taken when malicious access or malicious activity occurs.
Keywords: cloud computing; mobile agents; security of data; trusted computing; CSP; CSU; cloud security framework; cloud service provider; cloud service user; data management; mobile agent; policy breach detection; proxy agent; trust agent; trust model; two levels security framework; Cloud computing; Companies; Computational modeling; Mobile agents; Monitoring; Security; Servers; Cloud Computing Security; Mobile Agent; Security and Privacy; Trust; Trust Model (ID#: 16-11300)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336962&isnumber=7336956
S. Mishra, “Network Security Protocol for Constrained Resource Devices in Internet of Things,” 2015 Annual IEEE India Conference (INDICON), New Delhi, 2015, pp. 1-6. doi: 10.1109/INDICON.2015.7443737
Abstract: Security protocols built on strong cryptographic algorithms to defeat pattern analysis are popular nowadays, but these algorithms consume much of a processor's capacity, so devices with limited processing capabilities need modified protocols. Billions of such devices, known as 'smart objects', are used in the Internet of Things (IoT), an interconnection of a large number of smart objects with low resources. Wireless sensor networks (WSNs), which comprise large networks of sensors and actuators with constrained capabilities, likewise need resource-efficient protocols. Security, trustworthiness and privacy are major challenges in turning the IoT into a reality. Without strong security protocols, malicious attacks and malfunctions will outweigh the benefits of IoT components. Data integrity, identity management, trust management and privacy are four crucial obstacles in designing a secure IoT. To alleviate these challenges and obstacles, a security protocol is proposed that uses minimal processor capacity while providing the targeted security benefits for the IoT. This protocol counters most security issues of existing IoT protocols and is robust against severe attacks. It is unique in that it produces different bit-streams for the same data within a given authenticated session; these cannot be predicted even by the transmitter itself and change within nanoseconds. A perfectly random signal, in place of a pseudo-random code algorithm, is used to choose the bit-stream. For vehicle-to-vehicle (V2V) IoT, error correction in the key is also illustrated, instead of lengthening the transmitted bit stream.
Keywords: Internet of Things; computer network security; cryptographic protocols; data privacy; trusted computing; IOT; Internet of things; V2V; WSN; constrained resource device; cryptographic algorithm; data integrity; identity management; network security protocol; privacy; pseudorandom code algorithm; trust management; trustworthiness; vehicle to vehicle; wireless sensor network; Encryption; Protocols; Servers; Vehicles; White noise; Internet of things; lightweight cryptography; randomness; secure authentication; security (ID#: 16-11301)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7443737&isnumber=7443105
E. Brumancia and A. Sylvia, “A Profile Based Scheme for Security in Clustered Wireless Sensor Networks,” Communications and Signal Processing (ICCSP), 2015 International Conference on, Melmaruvathur, 2015, pp. 0823-0827. doi: 10.1109/ICCSP.2015.7322608
Abstract: Data aggregation in WSNs is usually done by simple methods such as averaging, but these methods are vulnerable to certain attacks. Accounting for the trustworthiness of data and the reputation of sensor nodes enables more sophisticated, and thus less vulnerable, data aggregation algorithms. Iterative filtering algorithms hold great promise for this purpose. To protect WSNs from such security issues, we introduce an improved iterative filtering technique that makes these algorithms not only collusion-robust, but also more accurate and faster converging. Trust and reputation systems have a significant role in supporting the operation of a wide range of distributed systems, from wireless sensor networks and e-commerce infrastructure to social networks, by providing an assessment of the trustworthiness of participants in a distributed system. We assume that the stochastic components of sensor errors are independent random variables with a Gaussian distribution; however, our experiments show that our method works quite well for other types of errors without any modification. Moreover, if the error distribution of the sensors is either known or estimated, our algorithms can be adapted to other distributions to achieve optimal performance. In the first stage, we provide initial estimates of two noise parameters for each sensor node, bias and variance. In the second stage, we provide an initial estimate of the reputation vector calculated using maximum likelihood estimation (MLE). In the third stage of the proposed framework, the initial reputation vector from the second stage is used to estimate the trustworthiness of each sensor based on the distance of its readings from that initial reputation vector.
Keywords: Gaussian distribution; filtering theory; iterative methods; maximum likelihood estimation; telecommunication security; wireless sensor networks; Iterative filtering algorithm; MLE; WSN protection; clustered wireless sensor network security; data aggregation algorithm; distributed system; e-commerce infrastructure; profile based scheme; reputation vector estimation; social network; trust and reputation system; Atmospheric measurements; Detectors; Indexes; Monitoring; Particle measurements; Wireless networks; Wireless sensor networks; Cluster Head (CH); Cluster Member (CM); Data Aggregation; Wireless Sensor Network (WSN) (ID#: 16-11302)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7322608&isnumber=7322423
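The core idea of iterative filtering, down-weighting sensors whose readings sit far from the current weighted estimate, can be sketched as follows. The inverse-squared-distance weighting and the sample data are illustrative, not the paper's exact formulation.

```python
# Sketch of iterative filtering for sensor trustworthiness: readings far
# from the current weighted estimate earn lower weight on the next round.
# Weighting function and readings are illustrative only.

def iterative_filter(readings, rounds=10, eps=1e-6):
    """Return (estimate, weights) after iteratively down-weighting outliers."""
    n = len(readings)
    weights = [1.0 / n] * n
    estimate = sum(w * r for w, r in zip(weights, readings))
    for _ in range(rounds):
        # Weight each sensor by inverse squared distance to the estimate.
        raw = [1.0 / ((r - estimate) ** 2 + eps) for r in readings]
        total = sum(raw)
        weights = [w / total for w in raw]
        estimate = sum(w * r for w, r in zip(weights, readings))
    return estimate, weights

# Nine honest sensors reporting around 25.0 and one outlier at 40.0.
readings = [24.8, 25.1, 25.0, 24.9, 25.2, 25.0, 24.7, 25.1, 25.0, 40.0]
estimate, weights = iterative_filter(readings)
```

After a few rounds the outlier's weight collapses and the estimate settles near the honest consensus, which is what makes the scheme harder to sway than plain averaging.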
M. Rezvani, A. Ignjatovic, E. Bertino and S. Jha, “A Collaborative Reputation System Based on Credibility Propagation in WSNs,” Parallel and Distributed Systems (ICPADS), 2015 IEEE 21st International Conference on, Melbourne, VIC, 2015, pp. 1-8. doi: 10.1109/ICPADS.2015.9
Abstract: Trust and reputation systems are widely employed in WSNs to help decision making processes by assessing trustworthiness of sensor nodes in a data aggregation process. However, in unattended and hostile environments, more sophisticated malicious attacks, such as collusion attacks, can distort the computed trust scores and lead to low quality or deceptive service as well as undermine the aggregation results. In this paper we propose a novel, local, collaborative-based trust framework for WSNs that is based on the concept of credibility propagation which we introduce. In our approach, trustworthiness of a sensor node depends on the amount of credibility that such a node receives from other nodes. In the process we also obtain an estimate of sensors' variances which allows us to estimate the true value of the signal using the Maximum Likelihood Estimation. Extensive experiments using both real-world and synthetic datasets demonstrate the efficiency and effectiveness of our approach.
Keywords: decision making; maximum likelihood estimation; telecommunication security; wireless sensor networks; WSN; collaborative reputation system; collaborative-based trust framework; credibility propagation; data aggregation process; decision making; maximum likelihood estimation; reputation systems; sensor nodes; trust systems; Aggregates; Collaboration; Computer science; Maximum likelihood estimation; Robustness; Temperature measurement; Wireless sensor networks; collusion attacks; data aggregation; iterative filtering; reputation system (ID#: 16-11303)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7384212&isnumber=7384203
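Under the independent-Gaussian-error assumption, the MLE step mentioned in this abstract reduces to an inverse-variance weighted mean of the readings. The readings and variance values below are invented for illustration.

```python
# Sketch of the variance-weighted (maximum-likelihood) aggregation step:
# with independent Gaussian errors, the MLE of the true signal weights each
# sensor by its inverse error variance. All values are illustrative.

def mle_estimate(readings, variances):
    """Inverse-variance weighted mean: the Gaussian MLE of the true value."""
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    return sum(r * w for r, w in zip(readings, inv)) / total

readings  = [20.1, 19.8, 23.0]
variances = [0.1, 0.1, 4.0]    # the third sensor is much noisier
value = mle_estimate(readings, variances)
```

The noisy third sensor barely moves the estimate, so a node's estimated variance doubles as a credibility score.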
J. Y. Yap and A. Tomlinson, “Provenance-Based Attestation for Trustworthy Computing,” Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 630-637. doi: 10.1109/Trustcom.2015.428
Abstract: We present a new approach to the attestation of a computer's trustworthiness that is founded on provenance data for its key components. The prevailing method of attestation relies on comparing integrity measurements of the key components of a computer against a reference database of trustworthy integrity measurements. An integrity measurement is obtained by passing a software binary or any other component through a hash function, but this value carries little information unless there is a reference database. The semantics of provenance, on the other hand, contain more detail: expressive information such as a component's history and its causal dependencies on other elements of a computer. Hence, we argue that provenance data can be used as evidence of trustworthiness during attestation. In this paper, we describe a complete design for provenance-based attestation. The design development is goal-driven and covers all phases of this approach. We discuss collecting provenance data and using the PROV data model to represent it. To determine whether the provenance data of a component can provide evidence of its trustworthiness, we have developed a rule specification grammar and provide a discourse on using the rules. We then build the key mechanisms of this form of attestation by exploring approaches to capturing provenance data, and look at transforming the trust evaluation rules into the XQuery language before running them against an XML-based record of provenance data. Finally, the design is analyzed using threat modelling.
Keywords: XML; data models; trusted computing; PROV data model; XML based provenance data record; XQuery language; attestation prevailing method; computer trustworthiness attestation; hash function; key components; provenance data representation; provenance semantics; provenance-based attestation; rule specification grammar; software binary; threat modelling; trust evaluation rules; trustworthiness; trustworthy computing; trustworthy integrity measurements; Computational modeling; Computers; Data models; Databases; Semantics; Software; Software measurement; attestation; provenance (ID#: 16-11304)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345336&isnumber=7345233
S. Yao et al., “FASTrust: Feature Analysis for Third-Party IP Trust Verification,” Test Conference (ITC), 2015 IEEE International, Anaheim, CA, 2015, pp. 1-10. doi: 10.1109/TEST.2015.7342417
Abstract: Third-party intellectual property (3PIP) cores are widely used in integrated circuit designs, so it is essential to ensure their trustworthiness. Existing hardware trust verification techniques suffer from high computational complexity, low extensibility, and an inability to detect implicitly-triggered hardware trojans (HTs). To tackle the above problems, in this paper we present a novel 3PIP trust verification framework, named FASTrust, which conducts HT feature analysis on the flip-flop-level control-data flow graph (CDFG) of the circuit. FASTrust is not only able to identify explicitly-triggered and implicitly-triggered HTs that have appeared in the literature in an efficient and effective manner, but, more importantly, it also has the unique advantage of being scalable to defend against future, stealthier HTs by adding new features to the system.
Keywords: computational complexity; data flow graphs; flip-flops; integrated circuit design; integrated logic circuits; invasive software; trusted computing; 3PIP cores; 3PIP trust verification framework; FASTrust; HT feature analysis; explicitly-triggered HT; flip-flop level control-data flow graph; hardware trust verification techniques; implicitly-triggered HT; implicitly-triggered hardware trojans; integrated circuit designs; third-party IP trust verification; third-party intellectual property core; trustworthiness; Combinational circuits; Feature extraction; Hardware; Integrated circuit modeling; Trojan horses; Wires; Hardware Trojan; feature analysis; hardware security; third-party intellectual property (ID#: 16-11305)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7342417&isnumber=7342364
Rani, JayaKumar and Divya, “Trust Aware Systems in Wireless Sensor Networks,” Computing and Communications Technologies (ICCCT), 2015 International Conference on, Chennai, 2015, pp. 174-179. doi: 10.1109/ICCCT2.2015.7292741
Abstract: Sensor networks are an adaptable technology for sensing environmental parameters and hence play a pivotal role in a wide range of applications, from mission-critical systems such as military or patient monitoring to home surveillance, where the network may be prone to security attacks. The network is vulnerable to attack as it may be deployed in hostile environments; in addition, it may be exposed to attacks because security mechanisms are not inherently incorporated into the nodes. Hence additional security programs may be added to the network. One such scheme is making the network a trust aware system. Trust computation serves as a powerful tool in the detection of unexpected node behaviour. In this paper we propose a trust mechanism to determine the trustworthiness of a sensor node. Most existing trust aware systems are centralised and suffer from single-head failure; in this paper we propose a dynamic and decentralized system.
Keywords: telecommunication security; trusted computing; wireless sensor networks; decentralized system; dynamic system; environmental criterion; hostile environment; network security; network vulnerability; sensor node trustworthiness determination; trust aware system; unexpected node behaviour detection; wireless sensor network; Base stations; Energy efficiency; Monitoring; Reliability; Routing; Security; Wireless sensor networks; security; trust evaluation; (ID#: 16-11306)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292741&isnumber=7292708
Trust and Trustworthiness (Part 2) |
Trust is created in information security through cryptography to assure the identity of external parties. It is essential to cybersecurity and to the Science of Security hard problem of composability. The research work cited here regarding trust and trustworthiness was presented in 2015.
D. Shehada, M. J. Zemerly, C. Y. Yeun, M. A. Qutayri and Y. A. Hammadi, “A Framework for Comparison of Trust Models for Multi Agent Systems,” Information and Communication Technology Research (ICTRC), 2015 International Conference on, Abu Dhabi, 2015, pp. 318-321. doi: 10.1109/ICTRC.2015.7156486
Abstract: Agent technology plays an important role in the development of many major service applications. However, balancing the flexible features agents provide against their vulnerability to many security-oriented attacks is a great challenge. In this paper we review trust models proposed in the literature to provide trustworthiness and security to Multi Agent Systems (MAS). We subsequently develop a framework for comparing the various trust models. Trust models are first compared and classified according to the types of evaluations used, weight assignment, consideration of inaccurate evaluations, and architecture. They are also compared according to their suitability for MAS.
Keywords: multi-agent systems; trusted computing; MAS; agents technology; flexible features agents; multi agent systems; security oriented attacks; trustworthiness; Adaptation models; Customer relationship management; Fires; Multi-agent systems; Reliability; Security (ID#: 16-11307)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7156486&isnumber=7156393
M. Mulla and S. Sambare, “Efficient Analysis of Lightweight Sybil Attack Detection Scheme in Mobile Ad Hoc Networks,” Pervasive Computing (ICPC), 2015 International Conference on, Pune, 2015, pp. 1-6. doi: 10.1109/PERVASIVE.2015.7086988
Abstract: Mobile Ad hoc Networks (MANETs) are vulnerable to various attacks, including the Sybil attack. In this paper we present a practical evaluation of an efficient method for detecting the lightweight Sybil attack. In a Sybil attack, a network attacker distorts trust accounting by inflating its own trust values and lowering those of others, or assumes the identities of several mobile nodes in the MANET. This kind of attack results in major information loss and hence misinterpretation in the network; it also reduces trustworthiness among mobile nodes and disrupts data routing with the aim of dropping packets. Many methods have previously been presented by different researchers to mitigate such attacks in MANETs, each with its own advantages and disadvantages. In this research paper, we study an efficient method for detecting the lightweight Sybil attack that identifies the new identities of Sybil attackers without using additional resources such as a trusted third party or other hardware. The method investigated here is based on Received Signal Strength (RSS), which is used to differentiate between legitimate and Sybil identities. The practical analysis of this work is done using Network Simulator 2 (NS2) by measuring throughput, end-to-end delay, and packet delivery ratio under different network conditions.
Keywords: mobile ad hoc networks; MANET; RSS; lightweight Sybil attack detection scheme; major information loss; network simulator; received signal strength; trustworthiness; Delays; Hardware; Mobile ad hoc networks; Mobile computing; Security; Throughput; DCA: Distributed Certificate authority; Mobile Ad hoc Network; Packet Delivery Ratio; Received Signal Strength; Sybil Attack; Threshold; UB: Upper bound (ID#: 16-11308)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7086988&isnumber=7086957
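The RSS-based differentiation described in the abstract above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the mean-absolute-difference measure and the 2 dBm threshold are assumptions chosen for the example.

```python
# Illustrative sketch of RSS-based Sybil detection (not the paper's code).
# Idea: identities whose received-signal-strength traces are nearly identical
# are likely Sybil identities of the same physical node.

def rss_distance(series_a, series_b):
    """Mean absolute difference between two equal-length RSS traces (dBm)."""
    return sum(abs(a - b) for a, b in zip(series_a, series_b)) / len(series_a)

def detect_sybil_pairs(rss_traces, threshold=2.0):
    """Flag identity pairs whose RSS traces differ by less than `threshold` dBm.

    `rss_traces` maps identity -> list of RSS samples; the threshold is a
    hypothetical tuning parameter, not a value from the paper.
    """
    ids = sorted(rss_traces)
    suspects = []
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if rss_distance(rss_traces[a], rss_traces[b]) < threshold:
                suspects.append((a, b))
    return suspects

traces = {
    "node1": [-60, -62, -61, -63],
    "node2": [-60, -61, -61, -62],   # almost identical to node1 -> suspect
    "node3": [-80, -75, -78, -79],
}
print(detect_sybil_pairs(traces))    # [('node1', 'node2')]
```

A real scheme would compare traces observed at multiple receivers over time; the pairwise threshold test here only conveys the core intuition.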
G. D'Angelo, S. Rampone and F. Palmieri, “An Artificial Intelligence-Based Trust Model for Pervasive Computing,” 2015 10th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC), Krakow, 2015, pp. 701-706. doi: 10.1109/3PGCIC.2015.94
Abstract: Pervasive Computing is one of the latest and most advanced paradigms currently available in the computing arena. Its ability to distribute computational services within the environments where people live, work or socialize makes issues such as privacy, trust and identity more challenging than in traditional computing environments. In this work we review these general issues and propose a Pervasive Computing architecture based on a simple but effective trust model that is better able to cope with them. The proposed architecture combines Artificial Intelligence techniques to achieve a close resemblance to human-like decision making. Accordingly, the Apriori algorithm is first used to extract the behavioral patterns adopted by users during their network interactions. A Naïve Bayes classifier is then used for the final decision, expressed in terms of the probability of user trustworthiness. To validate our approach we applied it to some typical ubiquitous computing scenarios. The obtained results demonstrate the usefulness of the approach and its competitiveness against existing ones.
Keywords: Bayes methods; artificial intelligence; pattern classification; trusted computing; ubiquitous computing; artificial intelligence-based trust model; behavioral patterns; computational services distribution; computers arena; effective trust model; human-like decision making; naïve Bayes classifier; network interactions; pervasive computing; ubiquitous computing scenarios; user trustworthiness; Classification algorithms; Computational modeling; Data mining; Decision making; Itemsets; Pervasive computing; Security; Apriori algorithm; Artificial Intelligence; Naive Bayes Classifier; Pervasive Computing; Trust Model (ID#: 16-11309)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7424653&isnumber=7424499
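The two-stage pipeline the abstract describes — behavioural patterns as features, then a Naïve Bayes estimate of user trustworthiness — can be sketched as a toy. The features, counts and Laplace smoothing below are invented for illustration and are not the paper's model.

```python
# Toy Naive Bayes trust estimate: P(trusted | observed behavioural features).
# The feature names and training samples are hypothetical.
from collections import Counter

def train_naive_bayes(samples):
    """samples: list of (feature_set, label), label in {'trusted','untrusted'}."""
    labels = Counter(label for _, label in samples)
    feature_counts = {label: Counter() for label in labels}
    for features, label in samples:
        feature_counts[label].update(features)
    return labels, feature_counts

def p_trusted(features, labels, feature_counts):
    """Posterior probability of the 'trusted' class via Bayes' rule."""
    total = sum(labels.values())
    scores = {}
    for label, n in labels.items():
        score = n / total                     # class prior
        for f in features:
            # Laplace smoothing so unseen features do not zero the score
            score *= (feature_counts[label][f] + 1) / (n + 2)
        scores[label] = score
    return scores["trusted"] / sum(scores.values())

samples = [
    ({"regular_hours", "known_peer"}, "trusted"),
    ({"regular_hours"}, "trusted"),
    ({"odd_hours", "port_scan"}, "untrusted"),
]
print(round(p_trusted({"regular_hours"}, *train_naive_bayes(samples)), 2))  # 0.82
```

In the paper's setting the feature sets would come from Apriori-mined frequent patterns of network interactions rather than hand-written labels.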
S. Hazra and S. K. Setua, “Privacy Preservation in Ubiquitous Network,” Future Internet of Things and Cloud (FiCloud), 2015 3rd International Conference on, Rome, 2015, pp. 811-816. doi: 10.1109/FiCloud.2015.18
Abstract: Ubiquitous network deals with wireless communication between service stations and users along with their mobility, invisibility and evolved smart space. In ubiquitous network, users get their services from available service stations invisibly. In such an open environment, an outsider malicious entity can disrupt the service communication by compromising the privacy of communicating users and (or) service stations. On the other hand, a malevolent service station can compromise the privacy of a user, as well as a malevolent user can compromise the privacy of a legitimate service station. To maintain the privacy of communicating users and service stations, we have introduced a trust based security approach. We have proposed “Privacy Preservation with Trust level” (PPT) in ubiquitous network to secure the privacy of entity's identity. With our PPT mechanism, a malevolent service station or user or an external malicious entity can be isolated from service communication process depending on trustworthiness level. The efficiency of our proposed PPT protocol is shown with simulation results.
Keywords: data privacy; radiocommunication; transport protocols; trusted computing; ubiquitous computing; PPT protocol; entity identity privacy; external malicious entity; legitimate service station privacy; malevolent service station; open environment; outsider malicious entity; privacy preservation; privacy preservation with trust level; service communication; service communication process; smart space; trust based security approach; trustworthiness level; ubiquitous network; user privacy; wireless communication; Communication system security; Computer science; Context; Generators; Jamming; Privacy; Servers; direct trust; indirect trust; privacy; trust (ID#: 16-11310)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7300910&isnumber=7300539
Z. Duan, Y. Hui, C. Tian, N. Zhang and B. Huang, “A Self-ORganizing Trust Model Based on HP2P,” 2015 11th International Conference on Mobile Ad-hoc and Sensor Networks (MSN), Shenzhen, 2015, pp. 96-101. doi: 10.1109/MSN.2015.34
Abstract: Peer-to-Peer (P2P) reputation systems are essential for evaluating the trustworthiness of the nodes in a P2P system. This paper presents HP2PSORT, a distributed algorithm based on SORT that enables a node to estimate the trustworthiness of other nodes from past interactions and recommendations. In an HP2P network, using a filtering mechanism, a service-trust calculation method, and dynamic calculation of the threshold value, we show that HP2PSORT outperforms SORT.
Keywords: computer network security; distributed algorithms; peer-to-peer computing; trusted computing; P2P system; Peer-to-Peer reputation systems; calculation method; distributed algorithm HP2PSORT; dynamic calculation; filtering mechanism; self-organizing trust model; service trust; trustworthiness; Context; Cost accounting; Estimation; Mathematical model; Measurement; Peer-to-peer computing; Servers; Chord; File System; P2P; Reputation System; Security (ID#: 16-11311)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7420930&isnumber=7420907
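The SORT-style combination of direct experience with filtered recommendations can be illustrated with a toy function. The blending weight and the mean-based filtering threshold are invented here; the paper's actual service-trust and threshold calculations differ.

```python
# Toy sketch of trust estimation from interactions plus recommendations,
# with low-trust recommenders filtered out (hypothetical rules, not SORT's).

def service_trust(own_history):
    """Fraction of satisfactory past interactions with the peer."""
    return sum(own_history) / len(own_history) if own_history else 0.0

def estimate_trust(own_history, recommendations, self_weight=0.7):
    """Blend own experience with filtered recommendations.

    `recommendations` maps recommender -> (recommender_trust, reported_score);
    recommenders below the mean recommender trust are discarded, a stand-in
    for the paper's dynamically calculated threshold."""
    direct = service_trust(own_history)
    if not recommendations:
        return direct
    threshold = sum(t for t, _ in recommendations.values()) / len(recommendations)
    kept = [score for t, score in recommendations.values() if t >= threshold]
    indirect = sum(kept) / len(kept) if kept else direct
    return self_weight * direct + (1 - self_weight) * indirect

history = [1, 1, 0, 1]                       # 3 of 4 interactions satisfactory
recs = {"n1": (0.9, 0.8), "n2": (0.2, 0.1)}  # n2 falls below the threshold
print(round(estimate_trust(history, recs), 3))  # 0.765
```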
Z. Ning, Z. Chen and X. Kong, “A Trust-Based User Assignment Scheme in Ad Hoc Social Networks,” High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, New York, NY, 2015, pp. 1774-1778. doi: 10.1109/HPCC-CSS-ICESS.2015.106
Abstract: Although cooperation among individuals plays a key role in the commercial development of wireless networks, trust is an important factor due to the uncertainty and uncontrollability caused by the self-organizing character of different entities. In this paper, we present a trust-based user assignment scheme that considers node sociality; the rationale is that effective user assignment should not only build a reliable system based on the behavior of network individuals, but also encourage selfish nodes to forward packets for one another. First, a model for trustworthiness management is built by considering social relationships. Then, user assignment for each transmission is decided by a double-auction-based mechanism. Simulation results demonstrate that our scheme obtains better network performance than the existing method in link connectivity and social welfare.
Keywords: ad hoc networks; social networking (online); trusted computing; ad hoc social network; double auction-based mechanism; link connectivity; node sociality; self-organizing character; selfish nodes; social relationship; trust-based user assignment scheme; trustworthiness management; wireless network; Ad hoc networks; Bandwidth; Interference; Measurement; Relays; Signal to noise ratio; Social network services; Social relationship; double auction; node trust (ID#: 16-11312)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336428&isnumber=7336120
K. Kalaivani and C. Suguna, “Efficient Botnet Detection Based on Reputation Model and Content Auditing in P2P Networks,” Intelligent Systems and Control (ISCO), 2015 IEEE 9th International Conference on, Coimbatore, 2015, pp. 1-4. doi: 10.1109/ISCO.2015.7282358
Abstract: A botnet is a set of Internet-connected computers that can send malicious content such as spam and viruses to other computers without the knowledge of their owners. In a peer-to-peer (P2P) architecture, botnets are very difficult to identify because there is no centralized control. In this paper, we use a security principle called data provenance integrity, which can verify the origin of data; for this, the peers' certificates are exchanged. A reputation-based trust model is used to identify the authenticated peer during file transmission. The reputation value of each peer is calculated, and a hash table is used for efficient file searching. The proposed system can also verify the trustworthiness of transmitted data by content auditing, in which the data are checked against a trained data set to identify malicious content.
Keywords: authorisation; computer network security; data integrity; information retrieval; invasive software; peer-to-peer computing; trusted computing; P2P networks; authenticated peer; botnet detection; content auditing; data provenance integrity; file searching; file transmission; hash table; malicious content; peer-to-peer architecture; reputation based trust model; reputation model; reputation value; security principle; spam; transmitted data trustworthiness; virus; Computational modeling; Cryptography; Measurement; Peer-to-peer computing; Privacy; Superluminescent diodes; Data provenance integrity; trained data set (ID#: 16-11313)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282358&isnumber=7282219
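The interplay of the hash table for file lookup and the per-peer reputation value can be sketched as follows. The scores, update deltas, and auditing rule are invented for the example and are not the paper's scheme.

```python
# Toy reputation-weighted peer selection with a hash table for file lookup
# (a plain dict stands in for the paper's hash table).

file_index = {}            # filename -> set of peer ids
reputation = {}            # peer id -> reputation score in [0, 1]

def register(peer, filename, rep=0.5):
    file_index.setdefault(filename, set()).add(peer)
    reputation.setdefault(peer, rep)

def best_source(filename):
    """Pick the most reputable peer holding the file, or None."""
    peers = file_index.get(filename, set())
    return max(peers, key=lambda p: reputation[p]) if peers else None

def audit(peer, content_ok):
    """Raise or lower reputation after content auditing of a transfer.

    The asymmetric penalty (small reward, large punishment) is an assumption."""
    delta = 0.1 if content_ok else -0.4
    reputation[peer] = min(1.0, max(0.0, reputation[peer] + delta))

register("peerA", "doc.pdf", rep=0.6)
register("peerB", "doc.pdf", rep=0.9)
print(best_source("doc.pdf"))        # peerB
audit("peerB", content_ok=False)     # audit flags malicious content
print(best_source("doc.pdf"))        # peerA, after peerB's penalty
```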
M. G. Pérez, F. G. Mármol and G. M. Pérez, “Improving Attack Detection in Self-Organizing Networks: A Trust-Based Approach Toward Alert Satisfaction,” Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, Kochi, 2015, pp. 1945-1951. doi: 10.1109/ICACCI.2015.7275903
Abstract: Cyber security has become a major challenge when detecting and preventing attacks on any self-organizing network. Defining a trust and reputation mechanism is a required feature in these networks to assess whether the alerts shared by their Intrusion Detection Systems (IDS) actually report a true incident. This paper presents a way of measuring the trustworthiness of the alerts issued by the IDSs of a collaborative intrusion detection network, considering the detection skills configured in each IDS to calculate the satisfaction on each interaction (alert sharing) and, consequently, to update the reputation of the alert issuer. Without alert satisfaction, collaborative attack detection cannot be a reality in the presence of ill-intended IDSs. The conducted experiments demonstrate better accuracy when detecting attacks.
Keywords: security of data; self-organising feature maps; trusted computing; IDS; alert satisfaction; collaborative attack detection; collaborative intrusion detection network; cybersecurity; intrusion detection systems; reputation mechanism; self-organizing networks; trust-based approach; Collaboration; Intrusion detection; Optical wavelength conversion; Resource management; Self-organizing networks; Support vector machines; Attack detection; cyber security; trust assessment (ID#: 16-11314)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275903&isnumber=7275573
Xiaolong Guo, R. G. Dutta, Yier Jin, F. Farahmandi and P. Mishra, “Pre-Silicon Security Verification and Validation: A Formal Perspective,” 2015 52nd ACM/EDAC/IEEE Design Automation Conference (DAC), San Francisco, CA, 2015, pp. 1-6. doi: 10.1145/2744769.2747939
Abstract: Reusable hardware Intellectual Property (IP) based System-on-Chip (SoC) design has emerged as a pervasive design practice in the industry today. The possibility of hardware Trojans and/or design backdoors hiding in the IP cores has raised security concerns. As existing functional testing methods fall short in detecting unspecified (often malicious) logic, formal methods have emerged as an alternative for validation of trustworthiness of IP cores. Toward this direction, we discuss two main categories of formal methods used in hardware trust evaluation: theorem proving and equivalence checking. Specifically, proof-carrying hardware (PCH) and its applications are introduced in detail, in which we demonstrate the use of theorem proving methods for providing high-level protection of IP cores. We also outline the use of symbolic algebra in equivalence checking, to ensure that the hardware implementation is equivalent to its design specification, thus leaving little space for malicious logic insertion.
Keywords: electronic engineering computing; industrial property; integrated circuit design; integrated circuit testing; security of data; system-on-chip; theorem proving; IP cores protection; PCH; SoC design; equivalence checking; formal methods; functional testing methods; hardware Trojans; hardware trust evaluation; logic insertion; pervasive design; presilicon security validation; presilicon security verification; proof-carrying hardware; reusable hardware intellectual property; system-on-chip design; theorem proving methods; Hardware; IP networks; Logic gates; Polynomials; Sensitivity; Trojan horses (ID#: 16-11315)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7167331&isnumber=7167177
M. Asplund, “Model-Based Membership Verification in Vehicular Platoons,” 2015 IEEE International Conference on Dependable Systems and Networks Workshops, Rio de Janeiro, 2015, pp. 125-132. doi: 10.1109/DSN-W.2015.21
Abstract: Cooperative vehicular systems have the potential to significantly increase traffic efficiency and safety. However, they also raise the question of to what extent information that is received from other vehicles can be trusted. In this paper we present a novel approach for increasing the trustworthiness of cooperative driving through a model-based approach for verifying membership views in vehicular platoons. We define a formal model for platoon membership, cooperative awareness claims, and membership verification mechanisms. With the help of a satisfiability solver, we are able to quantitatively analyse the impact of different system parameters on the verifiability of received information. Our results demonstrate the importance of cross validating received messages, as well as the surprising difficulty in establishing correct membership views despite powerful verification mechanisms.
Keywords: computability; formal verification; road safety; road traffic; road vehicles; cooperative awareness claim; cooperative driving; cooperative vehicular system; cross validating received message; formal model; membership verification mechanism; model-based approach; model-based membership verification; platoon membership; received information; satisfiability solver; system parameter; traffic efficiency; traffic safety; vehicular platoon; Conferences; Knowledge based systems; Measurement; Security; Sensors; Software; Vehicles (ID#: 16-11316)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7272565&isnumber=7272533
S. Burleigh, “Critical Multicast,” Wireless Communications & Signal Processing (WCSP), 2015 International Conference on, Nanjing, 2015, pp. 1-5. doi: 10.1109/WCSP.2015.7341151
Abstract: While the importance of protecting the confidentiality of sensitive cybernetic communications is widely recognized, the importance of ensuring the trustworthiness of information that is public yet critical is perhaps less obvious. Critical non-confidential messages must be issued by a trusted authoritative source in order to serve as the basis for operational decisions, but while existing authentication mechanisms can guard against tampering with the messages from such a source they cannot defend against compromise of the source itself. A public key infrastructure developed for Delay-Tolerant Networking addresses this problem. Its design might serve as the basis for a general “Critical Multicast” technology, ensuring that vital yet non-confidential information received via the network is genuine.
Keywords: cryptographic protocols; delay tolerant networks; message authentication; multicast protocols; public key cryptography; telecommunication security; authentication mechanisms; bundle security protocol; confidentiality protection; critical multicast technology; critical nonconfidential messages; delay-tolerant networking; operational decisions; sensitive cybernetic communications; Internet; Protocols; Public key; Receivers; Reliability; bundle protocol; delay-tolerant networking; multicast; public key infrastructure (ID#: 16-11317)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7341151&isnumber=7340966
E. Bertino, “Big Data - Security and Privacy,” 2015 IEEE International Congress on Big Data (BigData Congress), New York, NY, 2015, pp. 757-761. doi: 10.1109/BigDataCongress.2015.126
Abstract: The paper introduces a research agenda for security and privacy in big data. The paper discusses research challenges and directions concerning data confidentiality, privacy, and trustworthiness in the context of big data. Key research issues discussed in the paper include how to reconcile security with privacy, the notion of data ownership, and how to enforce access control in big data stores.
Keywords: Big Data; data privacy; security of data; trusted computing; data confidentiality; data ownership; data security; trustworthiness; Access control; Big data; Cryptography; Data privacy; Privacy; data trustworthiness; privacy (ID#: 16-11318)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7207310&isnumber=7207183
S. Ristov and M. Gusev, “A Methodology to Evaluate the Trustworthiness of Cloud Service Providers’ Availability,” EUROCON 2015 - International Conference on Computer as a Tool (EUROCON), IEEE, Salamanca, 2015, pp. 1-6. doi: 10.1109/EUROCON.2015.7313734
Abstract: Cloud service providers (CSPs) compete with each other to guarantee very high availability of their services. The most common CSPs guarantee an availability of at least 99.9% (some even 100%) in their service level agreements (SLAs), i.e., they guarantee at most 8.77 hours of downtime per year for their services. However, this high guarantee does not imply that they comply with their SLAs. Many reports have noted that CSPs' actual downtime is much greater and that the cloud consumers' costs usually cannot be covered by the CSP's indemnification. On the other hand, service availability is not a decisive factor for many cloud consumers; many are interested in lower cost for an acceptable level of availability. In this paper, we define a new methodology to evaluate CSPs according to cloud consumers' needs. We introduce a very important factor besides availability: trustworthiness. With our methodology, cloud consumers can quantify the trustworthiness and security of their potential CSPs, in order to migrate their services to the most appropriate CSP. Our evaluation shows that Google is the best choice among the evaluated CSPs in trustworthiness, although it offers the worst availability in its SLA compared to the other most common CSPs.
Keywords: cloud computing; contracts; customer satisfaction; trusted computing; CSP indemnification; Google; SLA; cloud consumer costs; cloud service provider availability; service level agreements; trustworthiness evaluation; Cloud computing; Computational modeling; Google; ISO Standards; Reliability; Security; Virtual machining; Availability; evaluation; reliability; trustworthiness (ID#: 16-11319)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7313734&isnumber=7313653
H. Boyes, “Best Practices in an ICS Environment,” Cyber Security for Industrial Control Systems, London, 2015, pp. 1-36. doi: 10.1049/ic.2015.0006
Abstract: Presents a collection of slides covering the following topics: software trustworthiness; insecure building control system; prison system glitch; cyber security; ICS; vulnerability assessment; dynamic risks handling; situational awareness; human factor; industrial control systems and system connectivity.
Keywords: control engineering computing; human factors; industrial control; security of data; trusted computing; ICS; cybersecurity; dynamic risks handling; human factor; industrial control systems; insecure building control system; prison system glitch; situational awareness; software trustworthiness; system connectivity; vulnerability assessment (ID#: 16-11320)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7332808&isnumber=7137498
J. Miguel, S. Caballé and F. Xhafa, “A MapReduce Approach for Processing Student Data Activity in a Peer-to-Peer Networked Setting,” 2015 10th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC), Krakow, 2015, pp. 9-16. doi: 10.1109/3PGCIC.2015.27
Abstract: Collaborative and peer-to-peer networked models generate a large amount of data from students' learning tasks. We have proposed analyzing these data to tackle information security breaches in e-Learning, with trustworthiness models as a functional requirement. In this context, extracting and structuring students' activity data is a computationally costly process, as the amount of data tends to be very large and requires computational power beyond that of a single processor. For this reason, in this paper, we propose a complete MapReduce and Hadoop application for processing learning management system log file data.
Keywords: data handling; learning management systems; parallel programming; trusted computing; Hadoop application; MapReduce approach; computational complexity; e-learning breaches; information security; learning management systems log file data; peer-to-peer networked based models; student data activity; trustworthiness models; Computational modeling; Computer architecture; Data models; Parallel processing; Peer-to-peer computing; Programming; Software; Hadoop; MapReduce; log files; parallel processing; student activity data (ID#: 16-11321)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7424535&isnumber=7424499
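The kind of log processing the paper scales out with Hadoop can be shown with a minimal map/reduce-style pass in plain Python. The log format and the per-student activity count are invented for the example; the paper's actual job and record structure may differ.

```python
# Minimal map/reduce-style pass over learning-management-system log lines:
# the mapper emits (student, 1) per record, the reducer sums per student.
from collections import defaultdict
from itertools import chain

logs = [
    "2015-03-01 alice login",
    "2015-03-01 bob post_message",
    "2015-03-02 alice post_message",
    "2015-03-02 alice logout",
]

def mapper(line):
    """Emit (student, 1) for each activity record (date student action)."""
    _, student, _ = line.split()
    yield student, 1

def reducer(pairs):
    """Sum the emitted counts per student."""
    totals = defaultdict(int)
    for student, n in pairs:
        totals[student] += n
    return dict(totals)

activity = reducer(chain.from_iterable(mapper(line) for line in logs))
print(activity)   # {'alice': 3, 'bob': 1}
```

In Hadoop the same mapper/reducer pair would run distributed over file splits; the local chaining here only demonstrates the programming model.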
C. Pasquini, C. Brunetta, A. F. Vinci, V. Conotter and G. Boato, “Towards the Verification of Image Integrity in Online News,” Multimedia & Expo Workshops (ICMEW), 2015 IEEE International Conference on, Turin, 2015, pp. 1-6. doi: 10.1109/ICMEW.2015.7169801
Abstract: The widespread adoption of social networking services allows users to share and quickly spread an enormous amount of digital content. Currently, a low level of security and trustworthiness is applied to such information, whose reliability cannot be taken for granted due to the wide availability of image editing software that allows any user to easily manipulate digital content. This has a huge potential to deceive users, whose opinions can be seriously influenced by altered media. In this work, we face the challenge of verifying online news by analyzing the images related to a particular news article. Our goal is to create an empirical system that helps verify the consistency of visually and semantically similar images used within different news articles on the same topic. Given a certain news item online, our system identifies a set of images connected to the same topic and presenting common visual elements, which can then be compared with the original ones and analyzed to discover possible inconsistencies, also by means of multimedia forensics tools.
Keywords: digital forensics; image processing; multimedia computing; social networking (online); trusted computing; image editing software; image integrity verification; multimedia forensics tools; online news verification; security; social networking services; trustworthiness; visual elements; Correlation; Face; Manganese; Media; Metadata; Tin; Visualization; Media Verification; news (ID#: 16-11322)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7169801&isnumber=7169738
Y. Ma, Y. Chen and B. Gu, “An Attributes-Based Allocation Approach of Software Trustworthy Degrees,” Software Quality, Reliability and Security - Companion (QRS-C), 2015 IEEE International Conference on, Vancouver, BC, 2015, pp. 89-94. doi: 10.1109/QRS-C.2015.24
Abstract: Trustworthiness measurement and evaluation of software is an important research topic in the field of trustworthy software. Existing metric models for software trustworthiness can determine the trustworthy degree of a software system given the trustworthy degrees of its attributes. In this paper, we focus on the reverse of this measurement approach: determining the trustworthy degrees of attributes given the trustworthy degree of the software. We introduce an approach to describe the allocation of trustworthy degrees of software, and present an allocation model and an attributes-based allocation algorithm. The allocation approach is applied to high-speed reentry aircraft software. The allocation results show that our approach is effective and practical in guiding and controlling software trustworthiness.
Keywords: aerospace computing; resource allocation; software quality; trusted computing; attributes-based allocation algorithm; high-speed reentry aircraft software; software trustworthy degree; Resource management; Software; Software algorithms; Software measurement; Space vehicles; Standards; Allocation for Software Trustworthiness; Trustworthy Attributes; Trustworthy Software (ID#: 16-11323)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7322129&isnumber=7322103
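The forward/reverse relationship between a software trustworthy degree and its attribute degrees can be illustrated with a weighted-product metric. The metric form, the weights, and the naive uniform allocation below are assumptions standing in for the class of metric models the paper builds on, not the authors' exact algorithm.

```python
# Sketch: weighted-product trustworthiness metric and a naive reverse
# allocation step (hypothetical model form and weights).

def trustworthy_degree(attr_degrees, weights):
    """Forward metric: T = prod(y_i ** w_i), with the weights summing to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    t = 1.0
    for y, w in zip(attr_degrees, weights):
        t *= y ** w
    return t

def uniform_allocation(target, n):
    """Naive reverse step: give every attribute the target degree.

    Because the weights sum to 1, prod(target ** w_i) == target, so this
    allocation always reproduces the required overall degree; a real
    allocation algorithm would instead trade attributes off by importance."""
    return [target] * n

weights = [0.5, 0.3, 0.2]     # e.g. reliability, safety, maintainability
alloc = uniform_allocation(0.9, len(weights))
print(round(trustworthy_degree(alloc, weights), 6))   # 0.9
```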
A. Ray, J. Åkerberg, M. Björkman and M. Gidlund, “Towards Trustworthiness Assessment of Industrial Heterogeneous Networks,” 2015 IEEE 20th Conference on Emerging Technologies & Factory Automation (ETFA), Luxembourg, 2015, pp. 1-6. doi: 10.1109/ETFA.2015.7301548
Abstract: In industrial plants, there is a mix of devices with different security features and capabilities. A mix of devices with various security levels creates independent islands in the network, each with similar levels of security features. However, the industrial plant is interconnected to reduce the cost of monitoring through a centralized control center. Therefore, the different islands also need to communicate with each other to improve the asset management efficiency of the plant. In this work we focus on the trustworthiness assessment of devices in industrial plant networks in terms of node value. We study the behavior of industrial plant networks when devices with various degrees of security features communicate, and aim to identify network properties that influence the overall network behavior. From the study, we have found that the communication path, the order of different communication paths, and the number of specific types of nodes affect the final trustworthiness of devices in the network.
Keywords: industrial plants; security of data; trusted computing; asset management efficiency; centralized control center; communication path; industrial heterogeneous networks; industrial plant networks; monitoring cost reduction; security features; trustworthiness assessment; Analytical models; Centralized control; Industrial plants; Monitoring; Receivers; Security; Yttrium; Device Trust; Industrial Communication Security; Network Analysis; Security Modeling (ID#: 16-11324)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7301548&isnumber=7301399
J. Miguel, S. Caballé and F. Xhafa, “A Knowledge Management Process to Enhance Trustworthiness-based Security in On-line Learning Teams,” Intelligent Networking and Collaborative Systems (INCOS), 2015 International Conference on, Taipei, 2015, pp. 272-279. doi: 10.1109/INCoS.2015.70
Abstract: Both information and communication technologies and computer-supported collaborative learning have been widely adopted in many educational institutions. Likewise, general e-assessment processes offer enormous opportunities to enhance students' learning experience. In this context, e-Learning stakeholders are increasingly demanding new requirements and, among them, information security in e-Learning stands out as a key factor. A key tenet of information security is that security drawbacks cannot be solved with technology solutions alone. Thus we have proposed a functional approach based on trustworthiness, namely, a trustworthiness security methodology. Since this methodology proposes processes and methods that are closely related to knowledge management, in this paper we endow our methodology with current knowledge management processes. To this end, we analyse current knowledge management models and techniques for application to trustworthy data from e-Learning systems. Moreover, we discuss several issues that arise when managing large data sets that span a rather long period of time. Hence, the main goal of this paper is to analyse existing knowledge management processes to endow our trustworthiness security methodology with a suitable set of knowledge management techniques and models. Finally, we exemplify the approach with trustworthy data from the on-line activity of virtual classrooms in our Virtual Campus of the Open University of Catalonia.
Keywords: computer aided instruction; groupware; knowledge management; trusted computing; computer-supported collaborative learning; e-assessment process; e-learning; educational institutions; electronic learning; information and communication technology; information security; knowledge management; knowledge management process; online learning teams; student learning experience; trustworthiness security methodology; trustworthiness-based security; Collaboration; Data collection; Data mining; Data visualization; Electronic learning; Knowledge management; Security; Information security; trustworthiness (ID#: 16-11325)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7312084&isnumber=7312007
J. Miguel, S. Caballé, F. Xhafa and V. Snasel, “A Data Visualization Approach for Trustworthiness in Social Networks for On-line Learning,” 2015 IEEE 29th International Conference on Advanced Information Networking and Applications, Gwangiu, 2015, pp. 490-497. doi: 10.1109/AINA.2015.226
Abstract: Up to now, the problem of protecting collaborative activities in e-Learning against dishonest student behaviour has been tackled mainly with technological security solutions. Over the last years, technological security solutions have evolved from isolated security approaches based on specific properties, such as privacy, to holistic models based on comprehensive technological security solutions, such as public key infrastructures, biometric models and multidisciplinary approaches from different research areas. Current technological security solutions are feasible in many e-Learning scenarios, but on-line assessment involves certain requirements that usually bear specific security challenges related to e-Learning design. In this context, even the most advanced and comprehensive technological security solutions cannot cope with the whole scope of e-Learning vulnerabilities. To overcome these deficiencies, our previous research aimed at incorporating information security properties and services into on-line collaborative e-Learning through a functional approach based on trustworthiness assessment and prediction. In this paper, we present a peer-to-peer on-line assessment approach carried out in an on-line course developed in the real e-Learning context of the Open University of Catalonia. The design presented in this paper is guided by our trustworthiness security methodology with the aim of building peer-to-peer collaborative activities that enhance e-Learning security requirements. Finally, peer-to-peer visualization methods are proposed to manage e-Learning security events, as well as on-line visualization through peer-to-peer tools, intended to analyse collaborative relationships.
Keywords: computer aided instruction; data visualisation; social networking (online); trusted computing; Open University of Catalonia; biometric models; data visualization approach; e-learning; holistic models; information security properties; information security services; multidisciplinary approaches; online learning; peer-to-peer collaborative activities; peer-to-peer on-line assessment; public key infrastructures; social networks; student behaviour; technological security; technological security comprehensive solutions; trustworthiness assessment; trustworthiness security methodology; Collaboration; Context; Electronic learning; Peer-to-peer computing; Security; Social network services; Visualization; Information security; computer-supported collaborative learning; on-line assessment; peer-to-peer analysis; trustworthiness (ID#: 16-11326)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7098011&isnumber=7097928
K. Xiao, D. Forte and M. M. Tehranipoor, “Efficient and Secure Split Manufacturing via Obfuscated Built-in Self-Authentication,” Hardware Oriented Security and Trust (HOST), 2015 IEEE International Symposium on, Washington, DC, 2015, pp. 14-19. doi: 10.1109/HST.2015.7140229
Abstract: The threats of reverse-engineering, IP piracy, and hardware Trojan insertion in the semiconductor supply chain are greater today than ever before. Split manufacturing has emerged as a viable approach to protect integrated circuits (ICs) fabricated in untrusted foundries, but it has high cost and/or high performance overhead. Furthermore, split manufacturing cannot fully prevent untargeted hardware Trojan insertion. In this paper, we propose to insert additional functional circuitry, called obfuscated built-in self-authentication (OBISA), into the chip layout during the split manufacturing process, in order to prevent reverse-engineering and further prevent hardware Trojan insertion. Self-tests are performed to authenticate the trustworthiness of the OBISA circuitry. The OBISA circuit is connected to the original design in order to increase the strength of the obfuscation, thereby allowing a higher-layer split and lower overall cost. Additional fan-outs are created in the OBISA circuitry to improve obfuscation without losing testability. Our proposed gating mechanism and net selection method ensure negligible overhead in terms of area, timing, and dynamic power. Experimental results demonstrate the effectiveness of the proposed technique on several benchmark circuits.
Keywords: foundries; integrated circuit manufacture; integrated circuit reliability; invasive software; reverse engineering; supply chains; IP piracy; OBISA circuit; chip layout; hardware Trojan insertion; integrated circuits; obfuscated built-in self-authentication; semiconductor supply chain; split manufacturing; trustworthiness; untrusted foundries; Delays; Fabrication; Foundries; Layout; Logic gates (ID#: 16-11327)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140229&isnumber=7140225
M. H. Jalalzai, W. B. Shahid and M. M. W. Iqbal, “DNS Security Challenges and Best Practices to Deploy Secure DNS with Digital Signatures,” 2015 12th International Bhurban Conference on Applied Sciences and Technology (IBCAST), Islamabad, 2015, pp. 280-285. doi: 10.1109/IBCAST.2015.7058517
Abstract: This paper discusses DNS security vulnerabilities and best practices for addressing DNS security challenges. The Domain Name System (DNS) is the foundation of the internet, translating user-friendly domain names, via name-based Resource Records (RR), into corresponding IP addresses and vice versa. Nowadays DNS services are used not merely for translating domain names: DNS is also used to block spam and to support email authentication mechanisms such as DKIM and, more recently, DMARC, and the TXT records found in DNS are mainly about improving the security of services. Virtually every internet application therefore uses DNS; if it does not work properly, internet communication as a whole will collapse. The security of DNS infrastructure is thus one of the core requirements for any organization in the current cyber security arena. DNS is a favorite target for attackers because of the large impact of a successful attack: a breach in DNS security undermines the trustworthiness of the whole internet. If DNS infrastructure is vulnerable and compromised, organizations lose revenue and face downtime, customer dissatisfaction, privacy loss, legal challenges, and more. DNS has become the largest distributed database, but at the time DNS was designed the only goal was to provide a scalable and available name resolution service; its security was not a focus and was overlooked. A number of security flaws therefore exist, and there is an urgent need for additional mechanisms to address known vulnerabilities. Among these security challenges, the most important are DNS data integrity and availability. For this purpose, we introduce a cryptographic framework, configured on an open source platform, that incorporates DNSSEC with the BIND DNS software and addresses the integrity and availability issues of DNS by establishing a DNS chain of trust using digitally signed DNS data.
Keywords: Internet; computer network security; cryptography; data integrity; data privacy; digital signatures; distributed databases; public domain software; Bind DNS software; DKIM; DMARC; DNS availability issues; DNS chain; DNS data integrity; DNS design; DNS infrastructures; DNS security; DNS security vulnerabilities; DNS services; DNSSEC; IP addresses; Internet application; Internet communication; Internet trustworthiness; cryptographic framework; customer dissatisfaction; cyber security arena; digital signatures; digitally signed DNS data; distributed database; domain name system; email authentication; TXT services; named based resource records; open source platform; privacy loss; secure DNS; security flaws; user friendly domains; Best practices; Computer crime; Cryptography; Servers; Software; DNS Security; DNS Vulnerabilities; Digital Signatures; Network and Computer Security; PKI (ID#: 16-11328)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058517&isnumber=7058466
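The DNSSEC chain of trust described in the abstract above can be illustrated with a toy model: each parent zone publishes a DS record committing to a digest of its child's key, down from a trust anchor at the root. The sketch below is a deliberately simplified Python illustration (real DNSSEC validates RRSIG signatures over full RRsets; all zone names and key bytes here are invented):

```python
import hashlib

def make_ds_digest(dnskey_bytes):
    """A parent-zone DS record stores a digest of the child's DNSKEY;
    simplified here to SHA-256 over the raw key bytes."""
    return hashlib.sha256(dnskey_bytes).hexdigest()

def chain_is_trusted(zones, trust_anchor_digest):
    """Walk root -> ... -> leaf, checking each zone's DNSKEY against
    the digest published one level up."""
    expected = trust_anchor_digest
    for zone in zones:
        if make_ds_digest(zone["dnskey"]) != expected:
            return False
        expected = zone.get("child_ds")  # digest published for the child
    return True

# Invented two-level delegation: "." -> "example."
root_key = b"root-dnskey-material"
child_key = b"example-dnskey-material"
zones = [
    {"dnskey": root_key, "child_ds": make_ds_digest(child_key)},
    {"dnskey": child_key, "child_ds": None},
]
anchor = make_ds_digest(root_key)
```

Swapping in a tampered child key breaks the chain, which is the integrity property the paper's framework relies on.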
T. O. Mayayise and I. O. Osunmakinde, “Robustness of Computational Intelligent Assurance Models when Assessing E-Commerce Sites,” 2015 Information Security for South Africa (ISSA), Johannesburg, 2015, pp. 1-8. doi: 10.1109/ISSA.2015.7335067
Abstract: E-commerce assurance platforms continue to emerge in order to facilitate trustworthy transactional relationships between buyers and sellers. However, as e-commerce environments grow more sophisticated, the risks associated with transacting online also increase, which challenges consumers' willingness to transact freely online. Although traditional assurance models are still used by various e-commerce sites, some of these models are not robust enough to provide adequate assurance on key areas of customer concern in cyberspace. This research proposes a robust intelligent PRAHP framework built on the Analytical Hierarchy Process, complemented with evidential reasoning from page ranking. PRAHP algorithms are modularised to run concurrently, with consensus decisions recorded in a decision table. PRAHP objectively extracts real-life data directly from each of 10 e-commerce websites, compared using the assurance attributes Advanced Security, Policy, Advanced ISO, Advanced Legislation, and Availability. The assurance of e-commerce sites using PRAHP was evaluated on small and large e-commerce enterprises and validated by determining the effects of a varied damping factor d on PRAHP and by comparison with customers' perceptions of the sites. The experimental results demonstrate that the proposed framework is sufficiently robust for current site assurance applications and show the trustworthiness of the framework in instances of uncertainty.
Keywords: ISO standards; Web sites; analytic hierarchy process; electronic commerce; legislation; transaction processing; trusted computing; PRAHP algorithms; advanced ISO; advanced legislation; advanced security; analytical hierarchy process; computational intelligent assurance model; customer concerns; customer site perceptions; cyber space; e-commerce assurance platform; e-commerce environments; e-commerce websites; online transaction; page ranking; robust intelligent PRAHP framework; trustworthy transactional relationships; Legislation; Mathematical model; Robustness; Seals; Security; Standards; AHP; Assessment; Assurance; DT; E-commerce; Legislation; PR; Policy (ID#: 16-11329)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7335067&isnumber=7335039
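The Analytical Hierarchy Process at the core of the PRAHP framework above derives attribute weights from a pairwise comparison matrix. The sketch below shows the common row geometric-mean approximation of that step; the matrix values are hypothetical and the code is an illustration, not the paper's implementation:

```python
import math

def ahp_priorities(pairwise):
    """Derive AHP priority weights from a pairwise comparison matrix
    using the row geometric-mean approximation."""
    n = len(pairwise)
    geo_means = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Hypothetical pairwise judgements over three assurance attributes,
# e.g. Advanced Security vs. Policy vs. Availability.
matrix = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
weights = ahp_priorities(matrix)
```

The resulting weights sum to 1 and rank the attributes by the strength of the pairwise preferences.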
Y. Liu, S. Sakamoto, L. Barolli, M. Ikeda and F. Xhafa, “Evaluation of Peers Trustworthiness for JXTA-overlay Considering Data Download Speed, Local Score and Security Parameters,” Network-Based Information Systems (NBiS), 2015 18th International Conference on, Taipei, 2015, pp. 658-664. doi: 10.1109/NBiS.2015.98
Abstract: In P2P systems, each peer has to obtain information about other peers and propagate it through neighbouring peers. Thus, it is important for each peer to have some number of neighbour peers. Moreover, it matters whether those neighbour peers are trustworthy: in reality, a peer might be faulty or might send obsolete or even incorrect information to other peers. We have implemented a P2P platform called JXTA-Overlay, which defines a set of protocols that standardize how different devices may communicate and collaborate. JXTA-Overlay provides a set of basic functionalities (primitives) intended to be as complete as possible to satisfy the needs of most JXTA-based applications. In this paper, we consider three input parameters, Data Download Speed (DDS), Local Score (LS) and Security (S), to decide Peer Trustworthiness (PT). We evaluate the proposed system by computer simulations. The simulation results show that the proposed system performs well and can choose trustworthy peers to connect to in the JXTA-Overlay platform.
Keywords: peer-to-peer computing; security of data; trusted computing; DDS; JXTA-overlay; LS; P2P systems; computer simulations; data download speed; local score; peers trustworthiness; security parameters; Fuzzy logic; Fuzzy sets; Peer-to-peer computing; Pragmatics; Process control; Protocols; Security; Fuzzy Logic; Intelligent Algorithm; JXTA-Overlay; P2P Systems; Trust-worthiness (ID#: 16-11330)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7350697&isnumber=7350553
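The paper above combines DDS, LS, and S via fuzzy logic. As a simplified stand-in, the sketch below uses a plain weighted average over the same three inputs; the weights and peer data are invented for illustration:

```python
def peer_trustworthiness(dds, local_score, security,
                         weights=(0.4, 0.3, 0.3)):
    """Combine the three inputs (each normalized to [0, 1]) into one
    trustworthiness score via a weighted average. The paper uses a
    fuzzy-logic controller; this linear rule is only a stand-in."""
    for v in (dds, local_score, security):
        if not 0.0 <= v <= 1.0:
            raise ValueError("inputs must be normalized to [0, 1]")
    w_dds, w_ls, w_s = weights
    return w_dds * dds + w_ls * local_score + w_s * security

# Rank hypothetical neighbour peers and connect to the best one.
peers = {"peer-a": (0.9, 0.7, 0.8), "peer-b": (0.4, 0.9, 0.5)}
best = max(peers, key=lambda p: peer_trustworthiness(*peers[p]))
```

A peer would then prefer `best` when selecting which neighbour to connect to.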
N. G. Mohammadi, T. Bandyszak, C. Kalogiros, M. Kanakakis and T. Weyer, “A Framework for Evaluating the End-to-End Trustworthiness,” Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 638-645. doi: 10.1109/Trustcom.2015.429
Abstract: The trustworthiness of software and services is a key concern for their use and adoption by organizations and end users. Trustworthiness evaluation is an important task for supporting both providers and consumers in making informed decisions, e.g., when selecting components from a software marketplace. Most of the literature evaluates trustworthiness along a single dimension (e.g., from the security perspective), while there are limited contributions towards multifaceted, end-to-end trustworthiness evaluation. Our analysis reveals the lack of a comprehensive framework for comparative, multifaceted, end-to-end trustworthiness evaluation that takes into account different layers of abstraction of both the system topology and its trustworthiness. In this paper, we provide a framework for end-to-end trustworthiness evaluation using computational approaches, based on aggregating certified trustworthiness values of individual components. The resulting output supports defining trustworthiness requirements for a software component to be developed and eventually integrated within a system, as well as obtaining trustworthiness evidence for a composite system before actual deployment; it thereby supports the designer in analyzing end-to-end trustworthiness. An example illustrates the application of the framework.
Keywords: trusted computing; multifaceted end-to-end trustworthiness evaluation; services trustworthiness; software trustworthiness; system topology abstraction; trustworthiness abstraction; trustworthiness evidence; Business; Measurement; Quality of service; Reliability; Security; Web services; Computational Evaluation; End-to-End Evaluation; Metrics; Socio-Technical-System; Trustworthiness (ID#: 16-11331)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345337&isnumber=7345233
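As a hedged illustration of the aggregation idea in the abstract above (combining certified per-component trustworthiness values into an end-to-end figure), the sketch below applies two simple, assumed composition rules; the paper's actual computational approach may differ:

```python
def end_to_end_trustworthiness(component_values, mode="series"):
    """Aggregate certified per-component trustworthiness values
    (each in [0, 1]) into an end-to-end estimate.

    'series'  - every component must behave: product of the values.
    'weakest' - conservative bound: the minimum value.
    """
    if not component_values:
        raise ValueError("no components given")
    if mode == "series":
        result = 1.0
        for v in component_values:
            result *= v
        return result
    if mode == "weakest":
        return min(component_values)
    raise ValueError(f"unknown mode: {mode}")

# Certified values for three hypothetical components of a composite system.
chain = [0.99, 0.95, 0.90]
```

Under the series rule the end-to-end value is lower than any single component's, which matches the intuition that composition can only reduce trustworthiness.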
S. Lins, S. Thiebes, S. Schneider and A. Sunyaev, “What is Really Going On at Your Cloud Service Provider? Creating Trustworthy Certifications by Continuous Auditing,” System Sciences (HICSS), 2015 48th Hawaii International Conference on, Kauai, HI, 2015, pp. 5352-5361. doi: 10.1109/HICSS.2015.629
Abstract: Cloud service certifications attempt to assure a high level of security and compliance. However, considering that cloud services are part of an ever-changing environment, multi-year validity periods may cast doubt on the reliability of such certifications. We argue that continuous auditing of selected certification criteria is required to assure continuously reliable and secure cloud services and thereby increase the trustworthiness of certifications. Continuous auditing of cloud services is still in its infancy; we therefore performed a systematic literature review to identify automated auditing methods that are applicable in the context of cloud computing. Our study yields a set of automated methods for continuous auditing in six clusters. We discuss the identified methods in terms of their applicability to major concerns about cloud computing and how they can aid in continuously auditing cloud environments. We thereby provide paths for future research to implement continuous auditing in cloud service contexts.
Keywords: auditing; certification; cloud computing; security of data; trusted computing; certification criteria; cloud service certifications; cloud service provider; compliance; continuous auditing; multiyear validity periods; reliable cloud services; secure cloud services; security; trustworthy certifications; Certification; Computer architecture; Context; Inspection; Monitoring; Reliability; Security; Cloud Computing; Cloud Service Certification; Continuous Auditing; Dynamic Certification; Monitoring (ID#: 16-11332)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7070458&isnumber=7069647
P. Stephanow, C. Banse and J. Schütte, “Generating Threat Profiles for Cloud Service Certification Systems,” 2016 IEEE 17th International Symposium on High Assurance Systems Engineering (HASE), Orlando, FL, 2016, pp. 260-267. doi: 10.1109/HASE.2016.43
Abstract: Cloud service certification aims at automatically validating whether a cloud service satisfies a predefined set of requirements. To that end, certification systems collect and evaluate sensitive data from various sources of a cloud service. At the same time, the certification system itself has to be resilient to attacks to generate trustworthy statements about the cloud service. Thus system architects are faced with the task of assessing the trustworthiness of different certification system designs. To cope with that challenge, we propose a method to model different architecture variants of cloud service certification systems and analyze threats these systems face. By applying our method to a specific cloud service certification system, we show how threats to such systems can be derived in a standardized way that allows us to evaluate different architecture configurations.
Keywords: certification; cloud computing; security of data; trusted computing; architecture configurations; automatic cloud service validation; cloud service certification system design; cloud service sources; sensitive data collection; sensitive data evaluation; threat analysis; threat profile generation; trustworthiness assessment; trustworthy statement generation; Cloud computing; Engines; Monitoring; Security; Time measurement; Virtual machining; cloud services (ID#: 16-11333)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7423164&isnumber=7423114
M. Goldenbaum, R. F. Schaefer and H. V. Poor, “The Multiple-Access Channel with an External Eavesdropper: Trusted vs. Untrusted Users,” 2015 49th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, 2015, pp. 564-568. doi: 10.1109/ACSSC.2015.7421192
Abstract: In this paper, the multiple-access channel with an external eavesdropper is considered. From the perspective of the trustworthiness of users, an overview of existing secrecy criteria is given and their impact on the achievable secrecy rate region is discussed. For instance, under the assumption that the eavesdropper has full a priori knowledge of the other users' transmit signals, the mixed secrecy criterion requires the information leakage from all transmitted messages, individually as well as jointly, to be small. This is a conservative criterion useful for scenarios in which users might be compromised. If some of the users are trustworthy, however, the secrecy criterion can be relaxed to joint secrecy, resulting in a significantly larger rate region. As this indicates a trade-off between the choice of secrecy criterion and the achievable rates, the question is posed as to whether the criterion can be weakened further to individual secrecy, which would be desirable for scenarios where users are guaranteed to be trustworthy.
Keywords: multi-access systems; multiuser channels; radiocommunication; telecommunication security; external eavesdropper; information leakage; mixed secrecy criterion; multiple-access channel; user transmit signals; Probability distribution; Production facilities; Receivers; Reliability; Security; Smart grids; Zinc (ID#: 16-11334)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7421192&isnumber=7421038
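The secrecy criteria surveyed in the abstract above admit standard information-theoretic formulations. The following is one common formalization for a two-user MAC with messages $M_1, M_2$, blocklength $n$, and eavesdropper observation $Z^n$; it is a generic statement of these criteria, not copied from the paper:

```latex
% Conservative ("mixed") secrecy: leakage stays small even if the
% eavesdropper knows the other user's message, and jointly:
\frac{1}{n} I\!\left(M_i; Z^n \mid M_{3-i}\right) \le \epsilon,
\quad i = 1, 2,
\qquad
\frac{1}{n} I\!\left(M_1, M_2; Z^n\right) \le \epsilon .

% Joint secrecy (trustworthy users): only the joint leakage is bounded:
\frac{1}{n} I\!\left(M_1, M_2; Z^n\right) \le \epsilon .

% Individual secrecy (weakest): each message is protected on its own:
\frac{1}{n} I\!\left(M_i; Z^n\right) \le \epsilon, \quad i = 1, 2 .
```

Each relaxation bounds fewer leakage terms, which is why the achievable rate region grows as the criterion weakens.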
M. Sugino, S. Nakamura, T. Enokido and M. Takizawa, “Trustworthiness-Based Broadcast Protocols in Wireless Networks,” Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2015 9th International Conference on, Blumenau, 2015, pp. 125-132. doi: 10.1109/IMIS.2015.85
Abstract: It is important to deliver messages to every node in a group to realize the cooperation of nodes in wireless networks. In flooding protocols, a node sends a message to its neighbour nodes, each of which forwards the message to its own neighbours; as a result, a huge number of messages are transmitted in the network. In multi-point relay (MPR) protocols, only relay nodes forward messages, and the other leaf nodes do not, in order to reduce the number of messages transmitted. In this paper, we discuss new trustworthiness concepts for neighbour nodes in broadcasting messages: the more trustworthy a node is, the more reliably and efficiently it can forward messages. Trustworthy neighbour nodes are selected as relay nodes. We propose novel trustworthiness-based broadcast (TBR) protocols, TBR1 and TBR2. In the TBR1 protocol, trustworthy first-neighbour nodes are selected as relay nodes. In the TBR2 protocol, each second-neighbour node is connected to a trustworthy first-neighbour node. In the evaluation, the electric energy consumed by a node to send messages to its neighbour nodes is taken as the trustworthiness of that node. We evaluate the TBR1 and TBR2 protocols and show that the total electric energy consumed by nodes can be reduced further than with the MPR protocol.
Keywords: protocols; radio networks; telecommunication security; trusted computing; MPR protocols; TBR protocols; broadcast messages; broadcast protocols; first neighbour nodes; flooding protocols; forward messages; leaf nodes; multipoint relay; relay node forwards messages; trustworthiness based broadcast; trustworthiness concepts; wireless networks; Energy consumption; Protocols; Relays; Reliability; Time factors; Wireless networks; Broadcast protocols; Energy-efficient broadcast protocol; Trustworthiness; Wireless network (ID#: 16-11335)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7284937&isnumber=7284886
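The TBR1 relay-selection idea above (prefer trustworthy, i.e. low-energy, first-neighbours while still covering all second-neighbours) can be sketched as a greedy set cover. This is an illustrative reconstruction under that reading of the abstract, not the paper's exact algorithm; the topology and energy values are invented:

```python
def select_relays(first_neighbours, coverage, energy):
    """Greedy TBR1-style relay selection: repeatedly pick the
    first-neighbour with the lowest per-message energy (treated as
    the most trustworthy) among those still covering uncovered
    second-neighbours, until every second-neighbour is reachable."""
    uncovered = set().union(*coverage.values()) if coverage else set()
    relays = []
    while uncovered:
        candidates = [n for n in first_neighbours
                      if coverage.get(n, set()) & uncovered]
        if not candidates:
            break  # some second-neighbours are unreachable
        best = min(candidates, key=lambda n: energy[n])
        relays.append(best)
        uncovered -= coverage[best]
    return relays

# Hypothetical topology: first-neighbours a, b, c and the
# second-neighbours each of them can reach.
coverage = {"a": {"x", "y"}, "b": {"y", "z"}, "c": {"z"}}
energy = {"a": 1.0, "b": 1.5, "c": 0.6}
relays = select_relays(["a", "b", "c"], coverage, energy)
```

Only the selected relays forward the broadcast, which is what reduces the total energy relative to flooding.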
M. Schölzel, E. Eren and K. O. Detken, “A Viable SIEM Approach for Android,” Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), 2015 IEEE 8th International Conference on, Warsaw, 2015, pp. 803-807. doi: 10.1109/IDAACS.2015.7341414
Abstract: Mobile devices such as smartphones and tablet PCs are increasingly used for business purposes. However, the trustworthiness of operating systems and apps is controversial. They can constitute a threat to corporate networks and infrastructures, if they are not audited or monitored. The concept of port-based authentication using IEEE 802.1x restricts access and may provide statistical data about users entering or leaving a network, but it does not consider the threat that devices can pose when already authenticated and used. Mobile devices gather and publish information. This information is incorporated into Security Information and Event Management (SIEM) software so that a threat is recognized while the device is being used.
Keywords: message authentication; mobile computing; smart phones; telecommunication security; trusted computing; Android; IEEE 802.1x; SIEM approach; SIEM software; apps; business purposes; corporate networks threat; infrastructures threat; mobile devices; operating systems; port-based authentication; security information and event management; smartphones; statistical data; tablet PC; trustworthiness; Androids; Humanoid robots; Metadata; Mobile handsets; Monitoring; Security; Servers; IEEE 802.1X; IF-MAP; SIEM; TNC; event detection; information security; network monitoring; trusted network connect (ID#: 16-11336)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7341414&isnumber=7341341
M. M. Bidmeshki and Y. Makris, “Toward Automatic Proof Generation for Information Flow Policies in Third-Party Hardware IP,” Hardware Oriented Security and Trust (HOST), 2015 IEEE International Symposium on, Washington, DC, 2015, pp. 163-168. doi: 10.1109/HST.2015.7140256
Abstract: The proof carrying hardware intellectual property (PCHIP) framework ensures trustworthiness by developing proofs for security properties designed to prevent introduction of malicious behaviors via third-party hardware IP. However, converting a design to a formal representation and developing proofs for the desired security properties is a cumbersome task for IP developers and requires extra knowledge of formal reasoning methods, proof development and proof checking. While security properties are generally specific to each design, information flow policies are a set of policies which ensure that no secret information is leaked through untrusted channels, and are mainly applicable to the designs which manipulate secret and sensitive data. In this work, we introduce the VeriCoq-IFT framework which aims to (i) automate the process of converting designs from HDL to the Coq formal language, (ii) generate security property theorems ensuring information flow policies, (iii) construct proofs for such theorems, and (iv) check their validity for the design, with minimal user intervention. We take advantage of Coq proof automation facilities in proving the generated theorems for enforcing these policies and we demonstrate the applicability of our automated framework on two DES encryption circuits. By providing essential information, the trustworthiness of these circuits in terms of information flow policies is verified automatically. Any alteration of the circuit description against information flow policies causes proofs to fail. Our methodology is the first but essential step in the adoption of PCHIP as a valuable method to authenticate the trustworthiness of third party hardware IP with minimal extra effort.
Keywords: formal languages; industrial property; theorem proving; trusted computing; Coq formal language; DES encryption circuits; HDL; PCHIP framework; VeriCoq-IFT framework; automatic proof generation; formal reasoning methods; information flow policies; malicious behaviors; proof carrying hardware intellectual property framework; proof checking; proof development; third-party hardware; Hardware; Hardware design languages; IP networks; Sensitivity; Trojan horses; Wires (ID#: 16-11337)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140256&isnumber=7140225
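The information-flow policies that VeriCoq-IFT proves about HDL designs have a simple dynamic analogue: taint tracking, where a "secret" label propagates through operations and untrusted outputs reject tainted values. The sketch below is an invented Python illustration of that policy idea, not the paper's Coq-based proof method:

```python
class Tainted:
    """Minimal value-level taint tracking: any operation involving a
    secret value yields a secret result, mimicking an information-flow
    policy that forbids secrets reaching untrusted channels."""
    def __init__(self, value, secret=False):
        self.value, self.secret = value, secret

    def __xor__(self, other):
        # Taint propagates: the result is secret if either input is.
        return Tainted(self.value ^ other.value,
                       self.secret or other.secret)

def send_untrusted(v):
    """An untrusted output channel that enforces the policy."""
    if v.secret:
        raise PermissionError("information-flow policy violated")
    return v.value

key = Tainted(0b1010, secret=True)
plaintext = Tainted(0b0110)
ciphertext = plaintext ^ key  # the secret taint propagates
```

Sending `plaintext` on the untrusted channel succeeds, while sending `ciphertext` is rejected, mirroring how a leak of secret data makes the proofs fail in the paper's framework.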
M. Chibba and A. Cavoukian, “Privacy, Consumer Trust and Big Data: Privacy by Design and the 3 C’s,” 2015 ITU Kaleidoscope: Trust in the Information Society (K-2015), Barcelona, 2015, pp. 1-5. doi: 10.1109/Kaleidoscope.2015.7383624
Abstract: The growth of ICTs and the resulting data explosion could pave the way for the surveillance of our lives and diminish our democratic freedoms at an unimaginable scale. Consumer mistrust of organizations' ability to safeguard their data is at an all-time high, and this has negative implications for Big Data. The timing is right to be proactive about designing privacy into technologies, business processes, and networked infrastructures. Inclusiveness of all objectives can be achieved through consultation, co-operation, and collaboration (the 3 C's). If privacy is the default, without diminishing functionality or other legitimate interests, then trust will be preserved and innovation will flourish.
Keywords: Big Data; consumer protection; data privacy; trusted computing; ICT; big data; consumer trust; data explosion; privacy; Big data; Business; Collaboration; Data protection; Privacy; Security; Information and communication technologies (ICTs); Privacy by Design; information society; internet of things; privacy; security; technological innovation; trustworthiness (ID#: 16-11338)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7383624&isnumber=7383613
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.