Bibliography
Smart energy meters record electricity consumption and generation at fine-grained intervals, and are among the most widely deployed sensors in the world. Energy data embeds detailed information about a building's energy efficiency, as well as the behavior of its occupants, which academia and industry are actively working to extract. In many cases, either inadvertently or by design, these third parties only have access to anonymous energy data without an associated location. The location of energy data is highly useful and highly sensitive information: it can provide important contextual information to improve big-data analytics or interpret their results, but it can also enable third parties to link private behavior derived from energy data with a particular location. In this paper, we present Weatherman, which leverages a suite of analytics techniques to localize the source of anonymous energy data. Our key insight is that energy consumption data, as well as wind and solar generation data, largely correlates with weather, e.g., temperature, wind speed, and cloud cover, and that every location on Earth has a distinct weather signature that uniquely identifies it. Weatherman represents a serious privacy threat, but also a potentially useful tool for researchers working with anonymous smart meter data. We evaluate Weatherman's potential in both areas by localizing data from over one hundred smart meters using a weather database that includes data from over 35,000 locations. Our results show that Weatherman localizes coarse (one-hour resolution) energy consumption, wind, and solar data to within 16.68km, 9.84km, and 5.12km, respectively, on average, which is more accurate than prior work on localizing anonymous solar data from solar signatures, despite using much coarser-resolution data.
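To make the weather-signature idea concrete, here is a minimal, hypothetical sketch (not the paper's pipeline): an anonymous consumption trace is scored against the temperature series of every candidate location by correlation magnitude, and the best-matching location wins. All names and data below are invented.

```python
# Toy weather-signature localization. Assumes we hold hourly temperature
# series per candidate location plus one anonymous hourly consumption trace.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weather database: location -> hourly temperature series.
weather_db = {
    "loc_a": rng.normal(10, 5, 24 * 30),
    "loc_b": rng.normal(25, 3, 24 * 30),
    "loc_c": rng.normal(0, 8, 24 * 30),
}

# Anonymous trace: heating load rises as temperature drops, so it
# anti-correlates with the true location's temperature ("loc_c").
trace = 50 - 2 * weather_db["loc_c"] + rng.normal(0, 3, 24 * 30)

def localize(trace, weather_db):
    """Rank candidate locations by |Pearson correlation| with the trace."""
    scores = {
        loc: abs(np.corrcoef(trace, temps)[0, 1])
        for loc, temps in weather_db.items()
    }
    return max(scores, key=scores.get), scores

best, scores = localize(trace, weather_db)
print(best, scores)  # expected: "loc_c" scores highest
```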
The main issue with big data in the cloud is that it must almost always be processed or used by a third party. It is therefore very important for data owners and clients to be able to trust, and to have guarantees of, the privacy of information stored in the cloud or analyzed as big data. The privacy models studied in previous research show that privacy infringement in big data arises from limitations of the models, low privacy-guarantee rates, or the dissemination of accurate data that remains obtainable in the data set. Moreover, many privacy models exist; determining the best and most appropriate model to apply in the future, one that also guarantees big data privacy, requires further research and study. In what follows, we survey several privacy models to determine the advantages and disadvantages of each in assuring the privacy of big data in the cloud. The present study also proposes a combined Diff-Anonym algorithm (k-anonymity plus differential privacy) that provides data anonymity while guaranteeing a balance between the ambiguity of private data and the clarity of general data.
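The following toy sketch is our own illustration of how the two models can be combined, not the paper's Diff-Anonym algorithm: quasi-identifiers are generalized until every group holds at least k records, and aggregates over the surviving groups are released with Laplace noise.

```python
# Hypothetical combination of k-anonymity and differential privacy.
import numpy as np

# Invented records: (age, zip, salary); age and zip are quasi-identifiers.
records = [(34, "02139", 70), (35, "02139", 65), (34, "02141", 80),
           (52, "90210", 90), (53, "90210", 85), (51, "90212", 60)]

def generalize(rec):
    """Coarsen quasi-identifiers: 10-year age bands, 3-digit zip prefixes."""
    age, zip_code, salary = rec
    return (age // 10 * 10, zip_code[:3]), salary

def k_anonymize(records, k=2):
    """Keep only generalized groups containing at least k records."""
    groups = {}
    for rec in records:
        key, salary = generalize(rec)
        groups.setdefault(key, []).append(salary)
    return {key: vals for key, vals in groups.items() if len(vals) >= k}

def dp_mean(values, epsilon=1.0, bound=100.0):
    """Laplace-noised mean; sensitivity of the mean is bound / n for
    values known to lie in [0, bound]."""
    n = len(values)
    return float(np.mean(values) + np.random.laplace(0, bound / (epsilon * n)))

for key, salaries in k_anonymize(records, k=2).items():
    print(key, dp_mean(salaries, epsilon=0.5))
```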
Security challenges are the most important obstacles to the advancement of IT-based on-demand services and of cloud computing as an emerging technology. The inconsistency of identity management models based on defined policies, together with varying security levels across cloud servers, is one of the most challenging issues in clouds. In this paper, a policy-based user authentication model is presented that provides reliable and scalable identity management and maps cloud users' access requests to the defined policies of cloud servers. The proposed scheme provides several components to define access policies on cloud servers, to apply those policies via a structured and reliable ontology, to manage user identities, and to semantically map access requests from cloud users to the defined policies. Finally, the reliability and efficiency of this policy-based authentication scheme are evaluated through performance, security, and comparative analyses. Overall, the results show that the model meets the research goals of enhancing the reliability and efficiency of identity management in cloud computing environments.
Cloud computing is significantly reshaping the computing industry, built around core concepts such as virtualization, processing power, connectivity, and elasticity to store and share IT resources via a broad network. It has emerged as the key technology that unleashes the potency of Big Data, the Internet of Things, mobile and web applications, and other related technologies; but it also comes with challenges, such as governance, security, and privacy. This paper focuses on the security and privacy challenges of cloud computing, with specific reference to user authentication and access management for cloud SaaS applications. The suggested model uses a framework that harnesses the stateless and secure nature of JWT for client authentication and session management. Furthermore, authorized access to protected cloud SaaS resources is efficiently managed. Accordingly, a Policy Match Gate (PMG) component and a Policy Activity Monitor (PAM) component have been introduced, and other subcomponents, such as a Policy Validation Unit (PVU) and a Policy Proxy DB (PPDB), have also been established for optimized service delivery. A theoretical analysis of the proposed model portrays a system that is secure, lightweight, and highly scalable for improved cloud resource security and management.
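A minimal sketch of the stateless JWT flow underlying such a model, assuming the PyJWT library; the PMG/PAM/PVU/PPDB components themselves are not modeled here.

```python
# Hypothetical issuance and verification of a short-lived JWT.
import datetime
import jwt  # pip install PyJWT

SECRET = "server-side-secret"  # placeholder; use real key management in practice

def issue_token(user_id: str) -> str:
    """Sign a short-lived token carrying the user's identity claims."""
    now = datetime.datetime.now(datetime.timezone.utc)
    payload = {"sub": user_id, "iat": now,
               "exp": now + datetime.timedelta(minutes=15)}
    return jwt.encode(payload, SECRET, algorithm="HS256")

def verify_token(token: str) -> dict:
    """Verify signature and expiry; raises jwt.InvalidTokenError on failure."""
    return jwt.decode(token, SECRET, algorithms=["HS256"])

token = issue_token("alice")
print(verify_token(token)["sub"])  # "alice"
```

Because the token is self-contained and signed, the server needs no session store, which is what makes this approach lightweight and scalable.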
In multi-cloud tenancy environments, Web Services offer a standard approach for discovering and using capabilities in an environment that transcends ownership domains. This raises concerns about ownership and security in Web Service governance. Our approach to this issue is an ESB-integrated middleware for regulating security criteria on clouds. It uses an attribute-based security policy model to express asset consumers' security profiles and to derive service-access decisions, where assets represent the computing power/functionality and information/data provided by entities. Experiments show that the middleware imposes only a minor governance burden on the hardware side and performs well with a strong scaling property, dealing well with cumbersome policy files, which are likely in complex composite-service scenarios.
Learning analytics opens up a complex landscape of privacy and policy issues, which, in turn, influence how learning analytics systems and practices are designed. Research and development is governed by regulations for data storage and management, and by research ethics. Consequently, when moving solutions out of the research labs, implementers meet constraints defined in national laws and justified in privacy frameworks. This paper explores how the OECD, APEC, and EU privacy frameworks seek to regulate data privacy, with significant implications for the discourse of learning and, ultimately, an impact on the design of the tools, architectures, and practices now on the drawing board. A detailed list of requirements for learning analytics systems is developed, based on the new legal requirements defined in the European General Data Protection Regulation, which is enforced as European law from 2018. The paper also gives an initial account of how the privacy discourse in Europe, Japan, South Korea, and China is developing, and reflects upon the possible impact of the different privacy frameworks on the design of LA privacy solutions in these countries. This research contributes to knowledge of how concerns about privacy and data protection related to educational data can drive a discourse on new approaches to privacy engineering based on the principles of Privacy by Design. For the LAK community, this study represents the first attempt to conceptualise the issues of privacy and learning analytics in a cross-cultural context. The paper concludes with a plan to follow up this research on privacy policies and learning analytics systems development with a new international study.
Fast Healthcare Interoperability Resources (FHIR) is the most recent in the line of standards for healthcare resources. FHIR represents different types of medical artifacts as resources, provides recommendations for their authorized disclosure using web-based protocols including OAuth and OpenID Connect, and defines security labels. In most cases, Role-Based Access Control (RBAC) is used to secure access to FHIR resources. We provide an alternative approach based on Attribute-Based Access Control (ABAC) that allows attributes of subjects and objects to take part in authorization decisions. Our system allows various stakeholders to define policies governing the release of healthcare data, and it authenticates the end user requesting access. It acts as a middle layer between the end user and the FHIR server, and provides efficient release of individual and batch resources both during normal operations and during emergencies. We also provide an implementation that demonstrates the feasibility of our approach.
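A toy ABAC decision in the spirit of the described middle layer (hypothetical; the actual policy language, OAuth/OpenID Connect handling, and FHIR integration are far richer): the decision combines subject, resource, and context attributes, including a break-glass rule for emergencies.

```python
# Hypothetical ABAC check for release of a FHIR-style resource.
from dataclasses import dataclass

@dataclass
class Request:
    subject: dict   # e.g. attributes taken from an OpenID Connect token
    resource: dict  # e.g. attributes of the requested FHIR resource
    context: dict   # e.g. an emergency flag

def permit(req: Request) -> bool:
    """Permit if the requester treats this patient, or, during an emergency,
    if the requester is any licensed physician (break-glass rule)."""
    treating = (req.subject.get("role") == "physician" and
                req.subject.get("patient_id") == req.resource.get("patient_id"))
    break_glass = (req.context.get("emergency") and
                   req.subject.get("role") == "physician")
    return bool(treating or break_glass)

req = Request(subject={"role": "physician", "patient_id": "p1"},
              resource={"type": "Observation", "patient_id": "p1"},
              context={"emergency": False})
print(permit(req))  # True
```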
As the use of cloud computing and autonomous computing increases, integrity verification of the software stack used in a system becomes a critical issue. In this paper, we analyze the internal behavior of IMA (Integrity Measurement Architecture), one of the most well-known integrity verification frameworks employed in the Linux kernel. For integrity verification, IMA measures all executables and their configuration files in a trustworthy manner using a TPM (Trusted Platform Module). Our analysis reveals two obstacles in IMA: measurement overhead and nondeterminism. To address these problems, we propose two novel techniques, called batch extend and core measurement. The former accumulates the measured values of executables/files and extends them into the TPM in a batch fashion. The latter measures only specified executables/files, so that it verifies the core integrity of the system in which a user or a remote party is interested. An evaluation based on a real implementation shows that our proposal can reduce the boot time from 122 to 23 seconds while supporting the same integrity verification capability as the default IMA policy.
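A simplified sketch of the batch-extend idea, using plain SHA-256 in place of real TPM PCR operations (names and data are hypothetical): instead of one slow TPM extend per measured file, measurements are first accumulated in software and folded into the PCR with a single extend.

```python
# Toy model: software accumulation vs. per-file TPM extends.
import hashlib

def measure(contents: bytes) -> bytes:
    return hashlib.sha256(contents).digest()

def pcr_extend(pcr: bytes, value: bytes) -> bytes:
    """TPM-style extend: PCR_new = H(PCR_old || value)."""
    return hashlib.sha256(pcr + value).digest()

files = [b"/bin/sh contents", b"/etc/passwd contents", b"/usr/bin/env contents"]

# Default IMA style: one (slow) TPM extend per measurement.
pcr = b"\x00" * 32
for f in files:
    pcr = pcr_extend(pcr, measure(f))

# Batch extend: cheap software accumulation, then a single TPM extend.
acc = b"\x00" * 32
for f in files:
    acc = pcr_extend(acc, measure(f))
pcr_batched = pcr_extend(b"\x00" * 32, acc)

# The two PCR values differ, so the attestation verifier must know which
# scheme was used when replaying the measurement log.
print(pcr.hex(), pcr_batched.hex())
```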
Mobile application offloading, with the purpose of extending battery lifetime and increasing performance, has been intensively discussed recently, resulting in a variety of solutions: mobile device clones operated as virtual machines in the cloud, applications running simultaneously on the mobile device and on a distant server, and flexible solutions that dynamically acquire the resources of other mobile devices in the user's surroundings. Existing solutions have gaps in the areas of data security and application security. These gaps can be closed by integrating data-usage policies as well as application-flow policies. In this paper, we propose and evaluate a novel approach that integrates XACML into existing mobile application offloading frameworks. Data owners remain in full control of their data, while technologies like device-to-device offloading can still be used.
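As a hypothetical illustration of such a data-usage gate (real XACML policies are XML documents evaluated by a PDP; here the same request/decision shape is modeled directly in Python): before a task is offloaded, the owner's policy is evaluated against the target device's attributes.

```python
# Toy first-applicable policy evaluation, in the spirit of an XACML PDP.
def evaluate(policy: list, request: dict) -> str:
    """Return the effect of the first rule whose match-attributes all hold;
    default-deny if nothing matches."""
    for rule in policy:
        if all(request.get(k) == v for k, v in rule["match"].items()):
            return rule["effect"]
    return "Deny"

policy = [
    {"match": {"data_class": "private", "target": "own_clone_vm"}, "effect": "Permit"},
    {"match": {"data_class": "private", "target": "peer_device"}, "effect": "Deny"},
    {"match": {"data_class": "public"}, "effect": "Permit"},
]

print(evaluate(policy, {"data_class": "private", "target": "peer_device"}))  # Deny
print(evaluate(policy, {"data_class": "public", "target": "peer_device"}))   # Permit
```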
Logic locking has been conceived as a promising proactive defense strategy against intellectual property (IP) piracy, counterfeiting, hardware Trojans, reverse engineering, and overbuilding attacks. Yet, various attacks that use a working chip as an oracle have been launched on logic locking to successfully retrieve its secret key, undermining the defense of all existing locking techniques. In this paper, we propose stripped-functionality logic locking (SFLL), which strips some of the functionality of the design and hides it in the form of a secret key(s), thereby rendering the on-chip implementation functionally different from the original one. When loaded onto an on-chip memory, the secret keys restore the original functionality of the design. Through security-aware synthesis that creates a controllable mismatch between the reverse-engineered netlist and original design, SFLL provides a quantifiable and provable resilience trade-off between all known and anticipated attacks. We demonstrate the application of SFLL to large designs (>100K gates) using a computer-aided design (CAD) framework that ensures attaining the desired security level at minimal implementation cost: 8%, 5%, and 0.5% for area, power, and delay, respectively. In addition to theoretical proofs and simulation confirmation of SFLL's security, we also report results from the silicon implementation of SFLL on an ARM Cortex-M0 microprocessor in 65nm technology.
Logic locking is an intellectual property (IP) protection technique that prevents IP piracy, reverse engineering, and overbuilding attacks by the untrusted foundry or end users. Existing logic locking techniques are all based on locking the functionality; the design/chip is nonfunctional unless the secret key has been loaded. Existing techniques are vulnerable to various attacks, such as sensitization, key-pruning, and removal attacks enabled by signal-skew analysis. In this paper, we propose a tenacious and traceless logic locking technique, TTLock, that locks functionality and provably withstands all known attacks, such as SAT-based, sensitization, and removal attacks. TTLock protects a secret input pattern; the output of a logic cone is flipped for that pattern, and this flip is restored only when the correct key is applied. Experimental results confirm our theoretical expectation that the computational complexity of attacks launched on TTLock grows exponentially with increasing key size, while the area, power, and delay overhead increases only linearly. In this paper, we also coin the term "parametric locking," where the design/chip behaves as per its specifications (performance, power, reliability, etc.) only with the secret key in place, and an incorrect key downgrades its parametric characteristics. We discuss objectives and challenges in parametric locking.
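The flip-and-restore mechanism shared by TTLock and SFLL above can be illustrated with a toy single-output "circuit" (hypothetical; real designs operate on gate-level netlists):

```python
# Toy flip-and-restore locking on a 4-input Boolean function.
PROTECTED = 0b1011  # secret input pattern
KEY = 0b1011        # the correct key equals the protected pattern here

def original(x: int) -> int:
    return bin(x).count("1") & 1          # example function: parity

def stripped(x: int) -> int:
    """Shipped logic: output flipped for exactly the protected pattern."""
    return original(x) ^ (x == PROTECTED)

def locked(x: int, key: int) -> int:
    """Restore unit: flips the output back only where the key matches."""
    return stripped(x) ^ (x == key)

# Correct key restores the function on every input.
assert all(locked(x, KEY) == original(x) for x in range(16))
# A wrong key leaves the output wrong on the protected pattern.
assert locked(PROTECTED, 0b0000) != original(PROTECTED)
print("flip-and-restore verified")
```

Because only one input pattern misbehaves per wrong key, oracle-guided (SAT) attacks can eliminate at most one key per query, which is the intuition behind the exponential attack complexity claimed above.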
Reversible circuits are vulnerable to intellectual property and integrated circuit piracy. To expose these vulnerabilities, a detailed understanding of how to identify the function embedded in a reversible circuit is crucial. To recover the embedded function, one needs to know the synthesis approach used to generate the reversible circuit in the first place. We present a machine learning based scheme that identifies the synthesis approach from telltale signs in the design.
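A hypothetical sketch of the idea (features, labels, and data are invented for illustration): each circuit is summarized by simple structural features, and a classifier learns which synthesis approach leaves which signature.

```python
# Toy synthesis-approach classifier over invented circuit features.
from sklearn.tree import DecisionTreeClassifier

# Features per circuit: (num_lines, num_gates, avg_controls_per_gate).
X = [(4, 12, 1.2), (4, 90, 3.0), (6, 20, 1.1), (6, 300, 3.4)]
y = ["transformation-based", "ESOP-based",
     "transformation-based", "ESOP-based"]

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(clf.predict([(5, 250, 3.1)]))  # expected: ['ESOP-based']
```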
Scan-based test is commonly used to increase testability and fault coverage; however, it is also known to be a liability for chip security. Research has shown that intellectual property (IP) or secret keys can be leaked through scan-based attacks. In this paper, we propose a dynamically obfuscated scan design for protecting IPs against scan-based attacks. By perturbing all test patterns/responses and protecting the obfuscation key, the proposed architecture is proven to be robust against existing non-invasive scan attacks, and it can protect all scan data from attackers in the foundry, assembly, and system development (i.e., OEM) stages without compromising testability. Furthermore, the proposed architecture can be easily plugged into EDA-generated scan chains without a noticeable impact on the conventional integrated circuit (IC) design, manufacturing, and test flow. Finally, detailed security and experimental analyses have been performed on several benchmarks. The results demonstrate that the proposed method can protect chips from existing brute-force, differential, and other scan-based attacks that target the obfuscation key. The proposed design incurs low overhead in area, power consumption, and pattern generation time, and has no impact on test time.
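A toy sketch of the perturbation idea (hypothetical; the actual design works in hardware on scan chains, not in software): scan data is XOR-masked with a keystream derived from the secret obfuscation key, so clear test patterns and responses never appear at the chip boundary.

```python
# Toy keyed perturbation of scan data; XOR-masking is self-inverse.
import hashlib

def keystream(key: bytes, round_no: int, n: int) -> bytes:
    """Derive n pseudo-random mask bytes per scan round from the key."""
    return hashlib.sha256(key + round_no.to_bytes(4, "big")).digest()[:n]

def perturb(data: bytes, key: bytes, round_no: int) -> bytes:
    ks = keystream(key, round_no, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

key = b"obfuscation-key"
pattern = b"\xde\xad\xbe\xef"
obfuscated = perturb(pattern, key, round_no=7)   # what leaves the scan port
recovered = perturb(obfuscated, key, round_no=7) # de-obfuscation with the key
assert recovered == pattern
print(obfuscated.hex())
```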
Intellectual property is inextricably linked to the innovative development of mass innovation spaces. The combined development of intellectual property and mass innovation spaces will fundamentally support the new economic model of “mass entrepreneurship and innovation”. As such, it is critical to explore intellectual property service standards for mass innovation spaces and to steer them toward the creation of an intellectual property service system catering to “makers”. It is also crucial to explore intellectual property cluster management innovations for mass innovation spaces.
The scale of counterfeiting activity is increasing day by day, and such activity is encountered especially in the electronics market. In this paper, a countermeasure against the counterfeiting of intellectual properties (IPs) on Field-Programmable Gate Arrays (FPGAs) is proposed. FPGA vendors provide bitstream ciphering as an IP security solution, for example in battery-backed or non-volatile FPGAs. However, these solutions are secure only as long as the decryption key can be kept away from third parties; key storage and key transfer over insecure channels pose risks. In this work, physical unclonable functions (PUFs) are used for key generation: generating the key from a circuit inside the device solves the key-transfer problem. The proposed system goes through different phases as it operates, so the partial reconfiguration feature of FPGAs is essential to its feasibility.
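A toy software model of PUF-based key generation (hypothetical; real designs use fuzzy extractors and helper data rather than plain majority voting): a noisy device-unique response is stabilized across repeated readouts and hashed into a key, so no key is ever stored or transferred.

```python
# Toy noisy-PUF key derivation via per-bit majority voting.
import hashlib
import random

random.seed(1)
TRUE_RESPONSE = [random.randint(0, 1) for _ in range(128)]  # device-unique bits

def read_puf(noise=0.05):
    """One noisy readout: each bit flips with probability `noise`."""
    return [b ^ (random.random() < noise) for b in TRUE_RESPONSE]

def stable_response(readouts=15):
    """Majority vote per bit position across repeated readouts."""
    reads = [read_puf() for _ in range(readouts)]
    return [int(sum(col) * 2 > len(reads)) for col in zip(*reads)]

bits = stable_response()
key = hashlib.sha256(bytes(bits)).digest()
# With 15 reads at 5% noise, majority voting almost surely recovers every bit.
print(bits == TRUE_RESPONSE, key.hex()[:16])
```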
The trend in computing is towards the use of FPGAs to improve performance at reduced cost. An indication of this is the adoption of FPGAs for data centre and server application acceleration by notable technology giants like Microsoft, Amazon, and Baidu. The continued protection of Intellectual Properties (IPs) on the FPGA has thus become both more important and more challenging. To facilitate IP security, FPGA vendors provide bitstream authentication and encryption. However, advancements in FPGA programming technology have engendered bitstream manipulation techniques such as partial bitstream relocation (PBR), which is promising in terms of reducing bitstream storage cost and facilitating adaptability; encrypted bitstreams, however, are not amenable to PBR. In this paper, we present three methods for performing encrypted PBR with varying overheads in resources and time. These methods ensure that PBR can be applied to bitstreams without losing the protection of the IPs.
With the increasing use of the Internet, it has become equally important to protect intellectual property. For copyright protection, a blind digital watermarking algorithm using SVD and OSELM in the IWT domain is proposed. During the embedding process, SVD is applied to the coefficient blocks in the IWT domain to obtain their singular values, which are modulated to embed the watermark in the host image. An online sequential extreme learning machine (OSELM) is trained to learn the relationship between the original coefficients and their watermarked versions. During the extraction process, this trained OSELM extracts the embedded watermark logo blindly, as no original host image is required. The watermarked image is altered using various attacks such as blurring, noise, sharpening, rotation, and cropping. The experimental results show that the proposed watermarking scheme is robust against these attacks; the extracted watermark closely resembles the original watermark and serves well to prove ownership.
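A simplified sketch of the singular-value embedding step (hypothetical: a one-level Haar DWT via PyWavelets stands in for the integer wavelet transform, and the OSELM stage is omitted): one watermark bit is embedded per block by quantizing the block's largest singular value.

```python
# Toy quantization-based watermark bit in a transform-domain block.
import numpy as np
import pywt  # pip install PyWavelets

def embed_bit(block: np.ndarray, bit: int, q: float = 16.0) -> np.ndarray:
    """Quantization-index modulation of the block's largest singular value."""
    u, s, vt = np.linalg.svd(block, full_matrices=False)
    s[0] = np.floor(s[0] / q) * q + (0.75 * q if bit else 0.25 * q)
    return u @ np.diag(s) @ vt

def extract_bit(block: np.ndarray, q: float = 16.0) -> int:
    """Blind extraction: only the quantization step q is needed."""
    s = np.linalg.svd(block, compute_uv=False)
    return int((s[0] % q) > q / 2)

rng = np.random.default_rng(0)
host = rng.uniform(0, 255, (8, 8))
ll, (lh, hl, hh) = pywt.dwt2(host, "haar")  # transform-domain coefficients
watermarked_ll = embed_bit(ll, bit=1)
print(extract_bit(watermarked_ll))          # 1
```

Extraction here needs no host image, which mirrors the blind property claimed by the scheme; the paper's OSELM additionally learns to undo attack distortions before this step.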