Control Theory and Privacy, 2014, Part 1
SoS Newsletter - Advanced Book Block
In the Science of Security, control theory offers methods and approaches that can potentially solve hard problems. The research work presented here specifically addresses issues in privacy; all of it was presented or published in 2014.
Cox, A.; Roy, S.; Warnick, S., “A Science of System Security,” Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, pp. 487-492, 15-17 Dec. 2014. doi:10.1109/CDC.2014.7039428
Abstract: As the internet becomes the information-technology backbone for more and more operations, including critical infrastructures such as water and power systems, the security problems introduced by linking such operations to the internet become more of a concern. Various communities have considered these problems and approached solutions from a variety of perspectives. In this paper, we consider the contributions we believe control theory can make towards developing tools for analyzing whole system security, that is, security of a system that may include its physical and human elements as well as its cyber components. In particular, we contrast notions of security focused on protecting information, and thus concerned primarily with delivering the right information to the right people (and no one else), with a different perspective on system security focused on protecting system functionality, which is concerned primarily with system robustness to particular attacks (and may not be concerned with privacy of communications).
Keywords: security of data; Internet; control theory; information protection; information technology backbone; security notion; system functionality protection; system security; Communities; Computational modeling; Computer security; Computers; Robustness; US Government (ID#: 15-5739)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7039428&isnumber=7039338
Srivastava, M., “In Sensors We Trust — A Realistic Possibility?” Distributed Computing in Sensor Systems (DCOSS), 2014 IEEE International Conference on, p. 1, 26-28 May 2014. doi:10.1109/DCOSS.2014.65
Abstract: Sensors of diverse capabilities and modalities, carried by us or deeply embedded in the physical world, have invaded our personal, social, work, and urban spaces. Our relationship with these sensors is a complicated one. On the one hand, these sensors collect rich data that are shared and disseminated, often initiated by us, with a broad array of service providers, interest groups, friends, and family. Embedded in this data is information that can be used to algorithmically construct a virtual biography of our activities, revealing intimate behaviors and lifestyle patterns. On the other hand, we and the services we use increasingly depend directly and indirectly on information originating from these sensors for making a variety of decisions, both routine and critical, in our lives. The quality of these decisions and our confidence in them depend directly on the quality of the sensory information and our trust in the sources. Sophisticated adversaries, benefiting from the same technology advances as the sensing systems, can manipulate sensory sources and analyze data in subtle ways to extract sensitive knowledge, cause erroneous inferences, and subvert decisions. The consequences of these compromises will only amplify as our society increasingly adopts complex human-cyber-physical systems with increased reliance on sensory information and real-time decision cycles. Drawing upon examples of this two-faceted relationship with sensors in applications such as mobile health and sustainable buildings, this talk will discuss the challenges inherent in designing a sensor information flow and processing architecture that is sensitive to the concerns of both producers and consumers. For the pervasive sensing infrastructure to be trusted by both, it must be robust to active adversaries who are deceptively extracting private information, manipulating beliefs, and subverting decisions. While completely solving these challenges would require a new science of resilient, secure and trustworthy networked sensing and decision systems that would combine hitherto separate disciplines of distributed embedded systems, network science, control theory, security, behavioral science, and game theory, this talk will provide some initial ideas. These include an approach to enabling privacy-utility trade-offs that balance the tension between the risk of information sharing to the producer and the value of information sharing to the consumer, and methods to secure systems against physical manipulation of sensed information.
Keywords: information dissemination; sensors; information sharing; processing architecture; secure systems; sensing infrastructure; sensor information flow; Architecture; Buildings; Computer architecture; Data mining; Information management; Security; Sensors (ID#: 15-5740)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846138&isnumber=6846129
Nai-Wei Lo; Yohan, A., “Danger Theory-Based Privacy Protection Model for Social Networks,” Computer Science and Information Systems (FedCSIS), 2014 Federated Conference on, pp. 1397-1406, 7-10 Sept. 2014. doi:10.15439/2014F129
Abstract: Privacy protection issues in Social Networking Sites (SNS) usually arise from insufficient user privacy control mechanisms offered by service providers, unauthorized usage of users' data by the SNS, and a lack of appropriate privacy protection schemes for users' data at the SNS servers. In this paper, we propose a privacy protection model based on the danger theory concept to provide automatic detection and blocking of sensitive user information revealed in social communications. By utilizing the dynamic adaptability feature of danger theory, we show how a privacy protection model for SNS users can be built with system effectiveness and reasonable computing cost. A prototype based on the proposed model is constructed and evaluated. Our experiment results show that the proposed model achieves an 88.9% detection and blocking rate on average for user-sensitive data revealed by the services of SNS.
Keywords: data privacy; social networking (online); SNS; danger theory; dynamic adaptability feature; privacy protection; social communication; social networking sites; user privacy control mechanism; Adaptation models; Cryptography; Data privacy; Databases; Immune system; Privacy; Social network services (ID#: 15-5741)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933181&isnumber=6932982
Ward, J.R.; Younis, M., “Examining the Effect of Wireless Sensor Network Synchronization on Base Station Anonymity,” Military Communications Conference (MILCOM), 2014 IEEE, pp. 204-209, 6-8 Oct. 2014. doi:10.1109/MILCOM.2014.39
Abstract: In recent years, Wireless Sensor Networks (WSNs) have become valuable assets to both the commercial and military communities, with applications ranging from industrial control on a factory floor to reconnaissance of a hostile border. A typical WSN topology that applies to most applications allows sensors to act as data sources that forward their measurements to a central sink or base station (BS). The unique role of the BS makes it a natural target for an adversary that desires to achieve the most impactful attack possible against a WSN. An adversary may employ traffic analysis techniques such as evidence theory to identify the BS based on network traffic flow even when the WSN implements conventional security mechanisms. This motivates a need for WSN operators to achieve improved BS anonymity to protect the identity, role, and location of the BS. Many traffic analysis countermeasures have been proposed in the literature, but they are typically evaluated based on data traffic only, without considering the effects of network synchronization on anonymity performance. In this paper we use evidence theory analysis to examine the effects of WSN synchronization on BS anonymity by studying two commonly used protocols, Reference Broadcast Synchronization (RBS) and Timing-synch Protocol for Sensor Networks (TPSN).
Keywords: protocols; synchronisation; telecommunication network topology; telecommunication security; telecommunication traffic; wireless sensor networks; BS anonymity improvement; RBS; TPSN; WSN topology; base station anonymity; data sources; evidence theory analysis; network traffic flow; reference broadcast synchronization; security mechanisms; timing-synch protocol for sensor networks; traffic analysis techniques; wireless sensor network synchronization; Protocols; Receivers; Sensors; Synchronization; Wireless communication; Wireless sensor networks; RBS; TPSN; anonymity; location privacy; synchronization; wireless sensor network (ID#: 15-5742)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956760&isnumber=6956719
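The evidence-theory traffic analysis referenced in the abstract above builds on Dempster-Shafer belief combination. The minimal Python sketch below is an illustration rather than the authors' method: it fuses two independent traffic observations over a hypothetical frame of candidate base-station nodes, with the node names and mass assignments invented for the example.

    from itertools import product

    def dempster_combine(m1, m2):
        """Combine two mass functions (dict: frozenset -> mass) with Dempster's rule."""
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb                       # mass assigned to contradictory evidence
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    # Hypothetical frame of discernment: candidate base-station nodes.
    theta = frozenset({"n1", "n2", "n3"})
    obs1 = {frozenset({"n1"}): 0.6, theta: 0.4}           # e.g. high inbound traffic near n1
    obs2 = {frozenset({"n1", "n2"}): 0.5, theta: 0.5}     # e.g. routing paths converge on n1/n2
    print(dempster_combine(obs1, obs2))                   # belief concentrates on {'n1'}

Combining more observations in the same way is how an adversary's belief about the BS location sharpens; anonymity countermeasures aim to keep that belief spread across many nodes.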
Tsegaye, T.; Flowerday, S., “Controls for Protecting Critical Information Infrastructure from Cyberattacks,” Internet Security (WorldCIS), 2014 World Congress on, pp. 24-29, 8-10 Dec. 2014. doi:10.1109/WorldCIS.2014.7028160
Abstract: Critical information infrastructure has enabled organisations to store large amounts of information on their systems and deliver it via networks such as the internet. Users who are connected to the internet are able to access various internet services provided by critical information infrastructure. However, some organisations have not effectively secured their critical information infrastructure, and hackers, disgruntled employees and other entities have taken advantage of this by launching cyberattacks on their critical information infrastructure. They do this by using cyberthreats to exploit vulnerabilities in critical information infrastructure which organisations fail to secure. As a result, cyberthreats are able to steal or damage confidential information stored on systems or take down websites, preventing access to information. Despite this, risk strategies can be used to implement a number of security controls: preventive, detective and corrective controls, which together form a system of controls. This will ensure that the confidentiality, integrity and availability of information are preserved, thus reducing risks to information. This system of controls is based on General Systems Theory, which states that the elements of a system are interdependent and contribute to the operation of the whole system. Finally, a model is proposed to address insecure critical information infrastructure.
Keywords: Internet; business data processing; computer crime; data integrity; data privacy; risk management; Internet service access; confidential information stealing; corrective control; critical information infrastructure protection; cyberattacks; cyberthreats; detective control; disgruntled employees; general systems theory; hackers; information access; information availability; information confidentiality; information integrity; organisational information; preventive control; risk reduction; security controls; vulnerability exploitation; Availability; Computer crime; Malware; Personnel; Planning; Critical Information Infrastructure; Cyberattacks; Cyberthreats; Security Controls; Vulnerabilities (ID#: 15-5743)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7028160&isnumber=7027983
Hsu, J.; Gaboardi, M.; Haeberlen, A.; Khanna, S.; Narayan, A.; Pierce, B.C.; Roth, A., “Differential Privacy: An Economic Method for Choosing Epsilon,” Computer Security Foundations Symposium (CSF), 2014 IEEE 27th, pp. 398-410, 19-22 July 2014. doi:10.1109/CSF.2014.35
Abstract: Differential privacy is becoming a gold standard notion of privacy; it offers a guaranteed bound on loss of privacy due to release of query results, even under worst-case assumptions. The theory of differential privacy is an active research area, and there are now differentially private algorithms for a wide range of problems. However, the question of when differential privacy works in practice has received relatively little attention. In particular, there is still no rigorous method for choosing the key parameter ε, which controls the crucial tradeoff between the strength of the privacy guarantee and the accuracy of the published results. In this paper, we examine the role of these parameters in concrete applications, identifying the key considerations that must be addressed when choosing specific values. This choice requires balancing the interests of two parties with conflicting objectives: the data analyst, who wishes to learn something about the data, and the prospective participant, who must decide whether to allow their data to be included in the analysis. We propose a simple model that expresses this balance as formulas over a handful of parameters, and we use our model to choose ε on a series of simple statistical studies. We also explore a surprising insight: in some circumstances, a differentially private study can be more accurate than a non-private study for the same cost, under our model. Finally, we discuss the simplifying assumptions in our model and outline a research agenda for possible refinements.
Keywords: data analysis; data privacy; Epsilon; data analyst; differential privacy; differentially private algorithms; economic method; privacy guarantee; Accuracy; Analytical models; Cost function; Data models; Data privacy; Databases; Privacy; Differential Privacy (ID#: 15-5744)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957125&isnumber=6957090
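For readers unfamiliar with how ε enters a differentially private release, the following minimal Python sketch shows the standard Laplace mechanism, in which the noise scale is sensitivity/ε, so a smaller ε gives stronger privacy but noisier answers. The query, count, and ε values are illustrative; the paper's economic model for actually choosing ε is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)

    def laplace_mechanism(true_value, sensitivity, epsilon):
        """Release a numeric query answer with epsilon-differential privacy."""
        scale = sensitivity / epsilon      # smaller epsilon -> larger noise -> stronger privacy
        return true_value + rng.laplace(0.0, scale)

    true_count = 1234                      # hypothetical counting query, sensitivity 1
    for eps in (0.1, 1.0, 10.0):
        print(f"epsilon={eps:4}: noisy count = {laplace_mechanism(true_count, 1.0, eps):.1f}")

Running the loop makes the tradeoff concrete: at ε = 0.1 the released count can be off by tens, while at ε = 10 it is nearly exact, which is exactly the accuracy-versus-privacy balance the paper's model prices out.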
Tan, A.Z.Y.; Wen Yong Chua; Chang, K.T.T., “Location Based Services and Information Privacy Concerns among Literate and Semi-literate Users,” System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp. 3198-3206, 6-9 Jan. 2014. doi:10.1109/HICSS.2014.394
Abstract: Location-based service mobile applications are becoming increasingly prevalent among the large population of semi-literate users living in emerging economies due to their low cost and ubiquity. However, usage of location-based services is still threatened by information privacy concerns. Studies have typically addressed only how to mitigate information privacy concerns for literate users, not for semi-literate users. To fill that gap and better understand information privacy concerns among different communities, this study draws upon theories of perceptual control and familiarity to identify the antecedents of information privacy concerns related to location-based services and user literacy. The proposed research model is empirically tested in a laboratory experiment. The findings show that the two location-based service channels (push and pull) affect the degree of information privacy concerns among literate and semi-literate users. Implications for enhancing usage intentions and mitigating information privacy concerns for different types of mobile applications are discussed.
Keywords: data privacy; mobile computing; social aspects of automation; emerging economies; information privacy concerns; laboratory experiment; location-based service channels; mobile applications; pull channel; push channel; semiliterate users; usage intentions; user literacy; Analysis of variance; Educational institutions; Mobile communication; Mobile handsets; Privacy; Standards (ID#: 15-5745)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6758998&isnumber=6758592
Zheng Yan; Xueyun Li; Kantola, R., “Personal Data Access Based on Trust Assessment in Mobile Social Networking,” Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, pp. 989-994, 24-26 Sept. 2014. doi:10.1109/TrustCom.2014.131
Abstract: Trustworthy personal data access control at a semi-trusted or distrusted Cloud Service Provider (CSP) remains a practical issue even though cloud computing has developed widely. Many existing solutions suffer from high computation and communication costs and are impractical to deploy in reality due to usability issues. With the rapid growth and popularity of mobile social networking, trust relationships in different contexts can be assessed based on mobile social networking activities, behaviors, and experiences. Such trust cues extracted from social networking are helpful in automatically managing personal data access at the cloud with sound usability. In this paper, we propose a scheme to secure personal data access at the CSP according to trust assessed in mobile social networking. Security and performance evaluations show the efficiency and effectiveness of our scheme for practical adoption.
Keywords: authorisation; cloud computing; mobile computing; social networking (online); trusted computing; CSP; cloud computing; cloud service provider; mobile social networking; trust assessment; trustworthy personal data access control; Access control; Complexity theory; Context; Cryptography; Mobile communication; Mobile computing; Social network services; Trust; access control; cloud computing; reputation; social networking; trust assessment (ID#: 15-5746)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011357&isnumber=7011202
Ta-Chih Yang; Ming-Huang Guo, “An A-RBAC Mechanism for a Multi-Tenancy Cloud Environment,” Wireless Communications, Vehicular Technology, Information Theory and Aerospace & Electronic Systems (VITAE), 2014 4th International Conference on, pp. 1-5, 11-14 May 2014. doi:10.1109/VITAE.2014.6934436
Abstract: With the evolution of software technology, companies require more high-performance hardware to enhance their competitiveness. Cloud computing is the result of distributed computing and grid computing processes and is gradually being seen as the solution for companies. Cloud computing can virtualize existing software and hardware to reduce costs. Thus, companies only require high Internet bandwidth and devices to access cloud services on the Internet. This decreases many overhead costs and the number of IT staff required. When many companies rent a cloud service simultaneously, this is called a multi-tenancy cloud service. However, safe access to resources becomes important when multi-tenancy cloud computing technology is adopted. The cloud computing environment is vulnerable to network-related attacks. This research improves the role-based access control authorization mechanism and combines it with an attribute check mechanism to determine which tenant a user can access. The enhanced authorization can improve the safety of cloud computing services and protect data privacy.
Keywords: authorisation; cloud computing; data privacy; grid computing; A-RBAC mechanism; IT staff; attribute check mechanism; cloud computing; cloud service; data privacy; distributed computing; grid computing processes; high Internet bandwidth; high-performance hardware; multitenancy cloud computing technology; multitenancy cloud environment; network-related attacks; role-based access control authorization mechanism; software technology; Authentication; Authorization; Cloud computing; Companies; Cryptography; Servers; Attribute; Authorization; Multi-tenancy; Role-based access control (ID#: 15-5747)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6934436&isnumber=6934393
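The combination of role-based authorization with an attribute (tenant) check described in the abstract above can be pictured with the following minimal Python sketch. The roles, permissions, and tenant attribute are hypothetical and are not taken from the paper; the point is only that access requires both the role check and the tenant-attribute check to pass.

    # Illustrative RBAC extended with a tenant-attribute check (hypothetical policy).
    ROLE_PERMISSIONS = {
        "admin":  {"read", "write", "delete"},
        "viewer": {"read"},
    }

    def authorize(user, action, resource):
        """Grant access only if the role allows the action AND the user's tenant
        attribute matches the tenant that owns the resource."""
        role_ok = action in ROLE_PERMISSIONS.get(user["role"], set())
        tenant_ok = user["tenant"] == resource["tenant"]
        return role_ok and tenant_ok

    user = {"name": "alice", "role": "viewer", "tenant": "tenant-A"}
    print(authorize(user, "read",  {"id": "doc1", "tenant": "tenant-A"}))  # True
    print(authorize(user, "read",  {"id": "doc2", "tenant": "tenant-B"}))  # False: wrong tenant
    print(authorize(user, "write", {"id": "doc1", "tenant": "tenant-A"}))  # False: role lacks write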
Boyang Zhou; Wen Gao; Shanshan Zhao; Xinjia Lu; Zhong Du; Chunming Wu; Qiang Yang, “Virtual Network Mapping for Multi-Domain Data Plane in Software-Defined Networks,” Wireless Communications, Vehicular Technology, Information Theory and Aerospace & Electronic Systems (VITAE), 2014 4th International Conference on, pp. 1-5, 11-14 May 2014. doi:10.1109/VITAE.2014.6934439
Abstract: Software-Defined Networking (SDN) separates the control plane from the data plane to improve control flexibility, supporting multiple services with their isolated physical resources. In SDN, virtual network (VN) mapping is required by network services for allocating these resources in the multi-domain SDN. This mapping problem is challenged by the NP-completeness of the mapping and by the business-privacy need to protect the domain topology. We propose a novel multi-domain mapping algorithm for SDN using a distributed architecture to achieve better efficiency and flexibility than the traditional PolyViNE approach while protecting privacy. In simulations on a large synthesized topology with 10 to 40 domains, our approach is 25% and 15% faster than PolyViNE in time and 30% better at balancing load on multiple controllers.
Keywords: computational complexity; computer network security; data protection; resource allocation; telecommunication network topology; virtual private networks; NP-complete; PolyViNE approach; SDN; VN mapping; business privacy; control plane; data plane; distributed architecture; domain topology protection; load balancing; multidomain data plane; multidomain mapping algorithm; resource allocation; software-defined network; virtual network mapping; Bandwidth; Computer architecture; Control systems; Heuristic algorithms; Network topology; Partitioning algorithms; Topology; Network Management; Software-Defined Networking; Virtual Network Mapping (ID#: 15-5748)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6934439&isnumber=6934393
Kia, S.S.; Cortes, J.; Martinez, S., “Periodic and Event-Triggered Communication for Distributed Continuous-Time Convex Optimization,” American Control Conference (ACC), 2014, pp. 5010-5015, 4-6 June 2014. doi:10.1109/ACC.2014.6859122
Abstract: We propose a distributed continuous-time algorithm to solve a network optimization problem where the global cost function is a strictly convex function composed of the sum of the local cost functions of the agents. We establish that our algorithm, when implemented over strongly connected and weight-balanced directed graph topologies, converges exponentially fast when the local cost functions are strongly convex and their gradients are globally Lipschitz. We also characterize the privacy preservation properties of our algorithm and extend the convergence guarantees to the case of time-varying, strongly connected, weight-balanced digraphs. When the network topology is a connected undirected graph, we show that exponential convergence is still preserved if the gradients of the strongly convex local cost functions are locally Lipschitz, while it is asymptotic if the local cost functions are convex. We also study discrete-time communication implementations. Specifically, we provide an upper bound on the stepsize of a synchronous periodic communication scheme that guarantees convergence over connected undirected graph topologies and, building on this result, design a centralized event-triggered implementation that is free of Zeno behavior. Simulations illustrate our results.
Keywords: convex programming; directed graphs; network theory (graphs); Zeno behavior; connected undirected graph; convex function; cost functions; distributed continuous-time algorithm; distributed continuous-time convex optimization; event-triggered communication; global cost function; network optimization problem; periodic communication; privacy preservation properties; strongly connected weight-balanced directed graph; synchronous periodic communication scheme; Algorithm design and analysis; Convergence; Convex functions; Cost function; Privacy; Topology; Control of networks; Optimization algorithms (ID#: 15-5749)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6859122&isnumber=6858556
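To make the setting concrete, the following Python sketch simulates, via Euler discretization, a continuous-time consensus-plus-gradient dynamics of the general family studied in this line of work: each agent descends its local gradient while an auxiliary integral state drives the network to agreement on the minimizer of the sum of local costs. The three-agent graph, quadratic local costs, and step size are illustrative; the paper's specific algorithm, digraph conditions, and event-triggered communication scheme are not reproduced.

    import numpy as np

    # Euler simulation of a consensus + local-gradient + integral-correction dynamics
    # (illustrative sketch; not the paper's specific algorithm or communication scheme).
    A = np.array([[0., 1., 1.],
                  [1., 0., 1.],
                  [1., 1., 0.]])              # undirected, connected 3-agent graph
    L = np.diag(A.sum(axis=1)) - A            # graph Laplacian

    c = np.array([1.0, 4.0, 7.0])             # local costs f_i(x) = 0.5 * (x - c_i)^2
    grad = lambda x: x - c                    # stacked local gradients

    x = np.zeros(3)                           # each agent's estimate of the minimizer
    z = np.zeros(3)                           # auxiliary states (their sum starts at zero)
    dt = 0.01
    for _ in range(5000):
        dx = -grad(x) - L @ x - z             # gradient descent + disagreement feedback
        dz = L @ x                            # integral action enforcing sum-of-gradients = 0
        x, z = x + dt * dx, z + dt * dz

    print(np.round(x, 3), "-> global minimizer", c.mean())   # all estimates approach 4.0

In the quadratic example the global minimizer is the mean of the c_i, and every agent's estimate converges to it using only neighbor-to-neighbor information, which is the behavior the paper establishes under much more general convexity and graph assumptions.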
Tams, B.; Rathgeb, C., “Towards Efficient Privacy-Preserving Two-Stage Identification for Fingerprint-Based Biometric Cryptosystems,” Biometrics (IJCB), 2014 IEEE International Joint Conference on, pp. 1-8, Sept. 29-Oct. 2, 2014. doi:10.1109/BTAS.2014.6996241
Abstract: Biometric template protection schemes, in particular biometric cryptosystems, bind secret keys to biometric data, i.e., complex key retrieval processes are performed at each authentication attempt. Focusing on biometric identification, exhaustive 1:N comparisons are required for identifying a biometric probe. As a consequence, comparison time frequently dominates the overall computational workload, preventing biometric cryptosystems from being operated in identification mode. In this paper we propose a computationally efficient two-stage identification system for fingerprint biometric cryptosystems. Employing the concept of adaptive Bloom filter-based cancelable biometrics, pseudonymous binary prescreeners are extracted, based on which top candidates are returned from a database. Thereby the number of required key-retrieval processes is reduced to a fraction of the total. Experimental evaluations confirm that, by employing the proposed technique, biometric cryptosystems, e.g. the fuzzy vault scheme, can be enhanced to enable real-time privacy-preserving identification, while at the same time biometric performance is maintained.
Keywords: biometrics (access control); data privacy; data structures; fingerprint identification; fuzzy set theory; image retrieval; private key cryptography; adaptive Bloom filter-based cancelable biometrics; biometric performance analysis; biometric probe identification; biometric template protection schemes; comparison time; complex key retrieval processes; computational efficient two-stage identification system; computational workload; data authentication; fingerprint-based biometric cryptosystems; fuzzy vault scheme; privacy-preserving two-stage identification; pseudonymous binary prescreener extraction; real-time privacy preserving identification; secret keys; Authentication; Cryptography; Databases; Fingerprint recognition; Measurement; Privacy (ID#: 15-5750)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6996241&isnumber=6996217
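The two-stage idea, a cheap binary prescreening followed by expensive key retrieval on only the top candidates, can be sketched as follows. This Python fragment uses a generic Bloom-filter encoding and Hamming-distance ranking purely for illustration; the feature strings, filter parameters, and database are invented and do not reflect the paper's adaptive Bloom-filter construction or fingerprint alignment steps.

    import hashlib

    def bloom_filter(features, size=64, k=3):
        """Map a set of (cancelably transformed) feature strings to a fixed-size bit array."""
        bits = [0] * size
        for f in features:
            for i in range(k):
                h = int(hashlib.sha256(f"{i}:{f}".encode()).hexdigest(), 16)
                bits[h % size] = 1
        return bits

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    # Hypothetical enrolment database: subject id -> pseudonymous prescreener bits.
    db = {
        "s1": bloom_filter({"ridge:12", "minutia:a", "minutia:b"}),
        "s2": bloom_filter({"ridge:40", "minutia:c", "minutia:d"}),
    }
    probe = bloom_filter({"ridge:12", "minutia:a", "minutia:e"})

    # Stage 1: cheap prescreening returns the top candidates by Hamming distance;
    # Stage 2 (not shown) runs the expensive fuzzy-vault key retrieval only on them.
    candidates = sorted(db, key=lambda s: hamming(probe, db[s]))[:1]
    print(candidates)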
Krombi, W.; Erradi, M.; Khoumsi, A., “Automata-Based Approach to Design and Analyze Security Policies,” Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, pp. 306-313, 23-24 July 2014. doi:10.1109/PST.2014.6890953
Abstract: Information systems must be controlled by security policies to protect them from undue accesses. Security policies are often designed by rules expressed using informal text, which implies ambiguities and inconsistencies in security rules. Our objective in this paper is to develop a formal approach to design and analyze security policies. We propose a procedure that synthesizes an automaton which implements a given security policy. Our automata-based approach can be a common basis to analyze several aspects of security policies. We use our automata-based approach to develop three analysis procedures to: verify completeness of a security policy, detect anomalies in a security policy, and detect functional discrepancies between several implementations of a security policy. We illustrate our approach using examples of security policies for a firewall.
Keywords: automata theory; data protection; firewalls; information systems; anomaly detection; automata synthesis; automata-based approach; firewall security policies; formal approach; functional discrepancy detection; information system protection; security policy analysis; security policy completeness verification; security policy design; Automata; Boolean functions; Data structures; Educational institutions; Firewalls (computing); Protocols (ID#: 15-5751)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890953&isnumber=6890911
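As a rough illustration of what a completeness check over a security policy involves, the sketch below encodes a tiny first-match firewall policy over two abstract packet fields and enumerates the field space to find uncovered combinations. The field domains and rules are hypothetical, and the paper's automaton-synthesis procedure over packet-field intervals is not reproduced here.

    from itertools import product

    # Illustrative first-match firewall policy over abstracted packet fields.
    PROTOCOLS  = {"tcp", "udp"}
    PORT_CLASS = {"well-known", "registered", "dynamic"}

    RULES = [  # (protocol or None = any, port class or None = any, action)
        ("tcp", "well-known", "accept"),
        ("udp", None,         "deny"),
    ]

    def decide(protocol, port_class):
        """First-match semantics, as in common firewall rule languages."""
        for p, pc, action in RULES:
            if (p is None or p == protocol) and (pc is None or pc == port_class):
                return action
        return None  # no rule matches

    # Completeness check: every point of the (finite, abstracted) field space must
    # reach an accepting or denying decision; unmatched points reveal incompleteness.
    uncovered = [(p, pc) for p, pc in product(PROTOCOLS, PORT_CLASS)
                 if decide(p, pc) is None]
    print("complete" if not uncovered else f"incomplete, e.g. {uncovered[0]}")

Here the check reports the policy incomplete (for example, TCP traffic to registered ports matches no rule), which is the kind of anomaly the paper's automata-based analysis is designed to surface systematically.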
Anggorojati, B.; Prasad, N.R.; Prasad, R., “Secure Capability-Based Access Control in the M2M Local Cloud Platform,” Wireless Communications, Vehicular Technology, Information Theory and Aerospace & Electronic Systems (VITAE), 2014 4th International Conference on, pp. 1-5, 11-14 May 2014. doi:10.1109/VITAE.2014.6934469
Abstract: Protection of and access control to resources play a critical role in distributed computing systems such as Machine-to-Machine (M2M) and cloud platforms. The M2M local cloud platform considered in this paper consists of multiple distributed M2M gateways that form a local cloud, presenting a unique challenge to existing access control systems. The most prominent access control systems, such as ACL and RBAC, lack the scalability and flexibility to manage access from users or entities that belong to different authorization domains, and are thus unsuitable for the presented platform. The access control approach based on API keys and OAuth used by the existing M2M cloud platform fails to provide fine-grained and flexible access right delegation when both methods are used together. The proposed approach is built upon capability-based access control that has been specifically designed to provide flexible, yet restricted, delegation of access rights. A number of use cases are provided to show the usage of capability creation, delegation, and access provision, particularly in the way applications access services provided by the platform.
Keywords: application program interfaces; authorisation; cloud computing; computer network security; internetworking; network servers; private key cryptography; API key; M2M local cloud platform; OAuth; application programming interface; authorization domain; distributed computing system; machine-to-machine computing system; multiple distributed M2M gateway; secure capability based access control system; Access control; Buildings; Context; Permission; Privacy; Public key; M2M; access control; capability; cloud; delegation; security (ID#: 15-5752)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6934469&isnumber=6934393
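The core idea the abstract describes, capability-based access control with restricted delegation, can be sketched in a few lines: a delegated capability may never carry more rights than its parent. The token fields, gateway, and application names below are hypothetical and the sketch omits the cryptographic binding a real deployment would need.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Capability:
        subject: str          # who may use the capability
        resource: str         # what it grants access to
        rights: frozenset     # which operations are allowed
        issuer: str           # who granted it

    def delegate(cap, new_subject, rights):
        """A delegated capability carries at most the rights of its parent."""
        granted = frozenset(rights) & cap.rights
        return Capability(new_subject, cap.resource, granted, issuer=cap.subject)

    def check(cap, subject, resource, right):
        return cap.subject == subject and cap.resource == resource and right in cap.rights

    root = Capability("gateway-1", "sensor-feed", frozenset({"read", "write"}), "platform")
    app  = delegate(root, "app-42", {"read", "write", "admin"})   # 'admin' is silently dropped
    print(check(app, "app-42", "sensor-feed", "read"))    # True
    print(check(app, "app-42", "sensor-feed", "admin"))   # False: cannot exceed parent rights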
Lugini, L.; Marasco, E.; Cukic, B.; Dawson, J., “Removing Gender Signature from Fingerprints,” Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2014 37th International Convention on, pp. 1283-1287, 26-30 May 2014. doi:10.1109/MIPRO.2014.6859765
Abstract: The need of sharing fingerprint image data in many emerging applications raises concerns about the protection of privacy. It has become possible to use automated algorithms for inferring soft biometrics from fingerprint images. Even if we cannot uniquely match the person to an existing fingerprint, revealing their age or gender may lead to undesirable consequences. Our research is focused on de-identifying fingerprint images in order to obfuscate soft biometrics. In this paper, we first discuss a general framework for soft biometrics fingerprint de-identification. We implemented the framework to reduce the risk of successful estimation of gender from fingerprint images using ad-hoc image filtering. We evaluate the proposed approach through experiments using a data set of rolled fingerprints collected at West Virginia University. Results show the proposed method is effective in preventing gender estimation from fingerprint images.
Keywords: data privacy; filtering theory; fingerprint identification; ad-hoc image filtering; gender estimation prevention; gender signature removal; privacy protection; rolled fingerprints; soft biometrics fingerprint deidentification; Biometrics (access control); Estimation; Feature extraction; Fingerprint recognition; Frequency-domain analysis; Privacy; Probes; Fingerprint Recognition; Gender Estimation; Image De-Identification (ID#: 15-5753)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6859765&isnumber=6859515
Premarathne, U.S.; Khalil, I., “Multiplicative Attributes Graph Approach for Persistent Authentication in Single-Sign-On Mobile Systems,” Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, pp. 221-228, 24-26 Sept. 2014. doi:10.1109/TrustCom.2014.33
Abstract: Single-sign-on (SSO) has been proposed as a more efficient and convenient authentication method. Classic SSO systems re-authenticate a user to different applications based on a fixed set of attributes (e.g., username-password combinations). However, the use of a fixed set of attributes fails to account for mobility and contextual variations in user activities. Thus, in an SSO-based system, robust persistent authentication and secure session termination management are vital for ensuring secure operations. In this paper we propose a novel persistent authentication technique using a multiplicative attribute graph model. We use a multiple-attribute-based persistent authentication model using facial biometrics, location, and activity-specific information. We propose a novel membership (or group affiliation) based session management technique for user-initiated SSO global logout management. The significance and viability of these methods are demonstrated by security, complexity, and numerical analyses. In conclusion, our model provides meaningful insights and more pragmatic approaches for persistent authentication and session termination management in implementing SSO-based mobile collaborative applications.
Keywords: authorisation; biometrics (access control); graph theory; mobile computing; SSO based mobile collaborative applications; SSO global logout management; activity specific information; contextual variations; facial biometrics; location information; membership based session management technique; mobility variations; multiple attribute based persistent authentication model; multiplicative attribute graph approach; robust persistent authentications; secure session termination management; single-sign-on mobile systems; Authentication; Biological system modeling; Biometrics (access control); Collaboration; Face; Mobile communication; mobile systems; multiplicative attribute graph; persistent authentication; single sign on (ID#: 15-5754)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011254&isnumber=7011202
Jianming Fu; Yan Lin; Xu Zhang; Pengwei Li, “Computation Integrity Measurement Based on Branch Transfer,” Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, pp. 590-597, 24-26 Sept. 2014. doi:10.1109/TrustCom.2014.75
Abstract: With the widespread adoption of cloud computing platforms, tasks are selectively migrated to the cloud, but the user cannot know whether those tasks have been tampered with in the cloud, so there is an urgent demand for cloud users to verify the execution integrity of programs running in the cloud. Computation integrity measurement based on behavior has difficulty detecting carefully crafted shellcode. Based on the properties of shellcode, this paper proposes a computation integrity measurement based on branch transfer, called CIMB, which is a fine-grained instruction-level integrity measurement. In this approach, all branches at the user level are recorded, which effectively covers all execution control flow of a program, and CIMB can detect control-flow hijacking attacks, such as Return-Oriented Programming (ROP) and Jump-Oriented Programming (JOP), without the support of source code. Meanwhile, the distance between two instruction addresses and the machine code of the instruction can mask the measurement inconsistency that derives from address space layout randomization of the program and shared libraries. Finally, we have implemented CIMB with the dynamic binary instrumentation tool Pin on the x86 32-bit version of Ubuntu 12.04. The experimental results show that CIMB is feasible and has a relatively stable measurement result, and the advantages of CIMB and the factors affecting the measurement results are analyzed and discussed.
Keywords: cloud computing; data integrity; trusted computing; CIMB; Pin dynamic binary instrumentation tool; address space layout randomization; branch transfer; cloud computing platform; cloud users; computation integrity measurement; control-flow hijacking attack detection; fine-grained instruction-level integrity measurement; instruction addresses; instruction machine code; measurement inconsistency; program execution control flow; program execution integrity verification; shellcode detection; tampered tasks; ubuntu12.04; user-level; Complexity theory; Current measurement; Fluid flow measurement; Instruments; Libraries; Linux; Software measurement; computation integrity; control flow; dynamic binary instrumentation; integrity measurement; trusted computing (ID#: 15-5755)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011299&isnumber=7011202
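A simplified way to picture a branch-transfer measurement that is insensitive to address space layout randomization is to fold the distance between branch source and target, together with the instruction bytes, into a running hash, as sketched below. The trace format and opcode bytes are invented for illustration; the actual CIMB records user-level branches with Pin and is not reproduced here.

    import hashlib

    def measure(branch_trace):
        """Fold a recorded branch trace into a single integrity measurement.
        Each record is (source_addr, target_addr, opcode_bytes); hashing the address
        *distance* rather than absolute addresses keeps the measurement stable
        under address space layout randomization, as the abstract describes."""
        h = hashlib.sha256()
        for src, dst, opcode in branch_trace:
            h.update((dst - src).to_bytes(8, "little", signed=True))
            h.update(opcode)
        return h.hexdigest()

    # Hypothetical traces: the same control flow loaded at two different base addresses.
    base1, base2 = 0x400000, 0x7f0000
    trace = [(0x10, 0x40, b"\xe8"), (0x48, 0x90, b"\xff\xd0")]          # call; call *eax
    print(measure([(base1 + s, base1 + d, op) for s, d, op in trace]) ==
          measure([(base2 + s, base2 + d, op) for s, d, op in trace]))  # True: ASLR-independent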
Sefer, E.; Kingsford, C., “Diffusion Archaeology for Diffusion Progression History Reconstruction,” Data Mining (ICDM), 2014 IEEE International Conference on, pp. 530-539, 14-17 Dec. 2014. doi:10.1109/ICDM.2014.135
Abstract: Diffusion through graphs can be used to model many real-world processes, such as the spread of diseases, social network memes, computer viruses, or water contaminants. Often, a real-world diffusion cannot be directly observed while it is occurring; perhaps it is not noticed until some time has passed, continuous monitoring is too costly, or privacy concerns limit data access. This leads to the need to reconstruct how the present state of the diffusion came to be from partial diffusion data. Here, we tackle the problem of reconstructing a diffusion history from one or more snapshots of the diffusion state. This ability can be invaluable for learning when certain computer nodes were infected or which people were the initial disease spreaders, in order to control future diffusions. We formulate this problem over discrete-time SEIRS-type diffusion models in terms of maximum likelihood. We design methods based on submodularity and a novel prize-collecting dominating-set vertex cover (PCDSVC) relaxation that can identify likely diffusion steps with some provable performance guarantees. Our methods are the first to be able to reconstruct complete diffusion histories accurately in real and simulated situations. As a special case, they can also identify the initial spreaders better than existing methods for that problem. Our results for both meme and contaminant diffusion show that the partial diffusion data problem can be overcome with proper modeling and methods, and that hidden temporal characteristics of diffusion can be predicted from limited data.
Keywords: data handling; diffusion; discrete time systems; graph theory; maximum likelihood estimation; PCDSVC relaxation; contaminant diffusion; continuous monitoring; data access; diffusion archaeology; diffusion history reconstruction; diffusion progression history reconstruction; diffusion state; discrete-time SEIRS-type diffusion model; disease spreader; graph; maximum likelihood; partial diffusion data problem; performance guarantee; prize-collecting dominating-set vertex cover relaxation; real-world diffusion; real-world process; temporal characteristics; Approximation methods; Computational modeling; Computers; History; Integrated circuit modeling; Mathematical model; Silicon; diffusion; epidemics; history (ID#: 15-5756)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7023370&isnumber=7023305
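The forward model that such history-reconstruction methods invert is a discrete-time compartmental diffusion on a graph. The Python sketch below simulates a small SEIR-style process (the SEIRS variant additionally lets recovered nodes become susceptible again) and produces the kind of snapshot from which the paper's maximum-likelihood machinery would reconstruct earlier steps; the graph, transition probabilities, and seed node are illustrative, and the PCDSVC reconstruction itself is not shown.

    import random

    # Forward discrete-time SEIR-style diffusion on a small graph (illustrative only).
    random.seed(1)
    neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
    beta, sigma, gamma = 0.4, 0.5, 0.3      # infection, incubation, recovery probabilities

    state = {n: "S" for n in neighbors}
    state[0] = "I"                          # hypothetical initial spreader
    history = [dict(state)]
    for _ in range(10):
        nxt = dict(state)
        for n, s in state.items():
            if s == "S" and any(state[m] == "I" for m in neighbors[n]) and random.random() < beta:
                nxt[n] = "E"                # exposed after contact with an infected neighbor
            elif s == "E" and random.random() < sigma:
                nxt[n] = "I"                # incubation ends, node becomes infectious
            elif s == "I" and random.random() < gamma:
                nxt[n] = "R"                # node recovers
        state = nxt
        history.append(dict(state))

    print(history[-1])   # a 'snapshot'; the reconstruction task is to infer the earlier steps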
Wen Zeng; Koutny, M.; Van Moorsel, A., “Performance Modelling and Evaluation of Enterprise Information Security Technologies,” Computer and Information Technology (CIT), 2014 IEEE International Conference on, pp. 504-511, 11-13 Sept. 2014. doi:10.1109/CIT.2014.18
Abstract: By providing effective access control mechanisms, enterprise information security technologies have been proven successful in protecting the confidentiality of sensitive information in business organizations. However, such security mechanisms typically reduce the work productivity of the staff by making them spend time working on non-project-related tasks. Therefore, organizations have to invest a significant amount of capital in information security technologies, and then continue incurring additional costs. In this study, we investigate the performance of administrators at an information help desk, and the non-productive time (NPT) in an organization, resulting from the implementation of information security technologies. An approximate analytical solution is discussed first, and the loss of staff member productivity is quantified using non-productive time. Stochastic Petri nets are then used to provide simulation results. The presented study can help information security managers to make investment decisions and to take actions toward reducing the cost of information security technologies, so that a balance is kept between information security expense, resource drain, and effectiveness of security technologies.
Keywords: Petri nets; authorisation; business data processing; cost reduction; data privacy; decision making; investment; productivity; stochastic processes; NPT; access control mechanisms; business organizations; cost reduction enterprise information security technologies; information help desk; investment decision making; nonproductive time; performance evaluation; performance modelling; sensitive information confidentiality; staff member productivity; stochastic Petri nets; work productivity; Information security; Mathematical model; Organizations; Servers; Stochastic processes; Non-productive Time; Queuing Theory; Security Investment Decision; Stochastic Petri Nets (ID#: 15-5757)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6984703&isnumber=6984594
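The non-productive time (NPT) idea can be made concrete with a back-of-the-envelope M/M/1 queueing estimate of how long staff wait on a security help desk. The arrival rate, service rate, and staffing figures below are illustrative, and the paper's own analysis uses an approximate analytical model together with stochastic Petri net simulation rather than this simple formula.

    # Rough M/M/1 estimate of non-productive time caused by an information security
    # help desk; all numbers are hypothetical.
    lam = 8.0          # access-request arrivals per hour
    mu  = 10.0         # help-desk service rate per hour
    rho = lam / mu
    assert rho < 1, "queue must be stable"

    W = 1.0 / (mu - lam)                  # mean time in system per request (hours)
    staff, requests_per_day = 50, 0.5     # staff headcount and requests per person per day
    npt_hours_per_day = staff * requests_per_day * W
    print(f"utilization={rho:.0%}, time per request={W*60:.0f} min, "
          f"NPT ~ {npt_hours_per_day:.1f} staff-hours/day")

With these illustrative numbers each request costs about 30 minutes of waiting plus service, or roughly 12.5 lost staff-hours per day, which is the kind of quantity the paper weighs against the security benefit when advising on investment decisions.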
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.