Biblio

Filters: Keyword is transparency
2023-03-31
Kahla, Mostafa, Chen, Si, Just, Hoang Anh, Jia, Ruoxi.  2022.  Label-Only Model Inversion Attacks via Boundary Repulsion. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :15025–15033.
Recent studies show that state-of-the-art deep neural networks are vulnerable to model inversion attacks, in which access to a model is abused to reconstruct private training data of any given target class. Existing attacks rely on having access to either the complete target model (white-box) or the model's soft labels (black-box). However, no prior work addresses the harder but more practical scenario in which the attacker has access only to the model's predicted label, without a confidence measure. In this paper, we introduce an algorithm, Boundary-Repelling Model Inversion (BREP-MI), to invert private training data using only the target model's predicted labels. The key idea of our algorithm is to evaluate the model's predicted labels over a sphere and then estimate the direction to reach the target class's centroid. Using the example of face recognition, we show that the images reconstructed by BREP-MI successfully reproduce the semantics of the private training data for various datasets and target model architectures. We compare BREP-MI with the state-of-the-art white-box and black-box model inversion attacks, and the results show that, despite assuming less knowledge about the target model, BREP-MI outperforms the black-box attack and achieves comparable results to the white-box attack. Our code is available online: https://github.com/m-kahla/Label-Only-Model-Inversion-Attacks-via-Boundary-Repulsion
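As an illustration of the boundary-repulsion idea described in the abstract, the following is a minimal sketch (not the authors' implementation): sample hard-label queries on a sphere around the current latent point and step away from directions that leave the target class. All names, radii, and step sizes are our own placeholders.

```python
import numpy as np

def brep_mi_step(z, query_label, target, radius=1.0, n_points=32, step=0.5):
    """One boundary-repulsion update in latent space (illustrative only).

    z           : current latent point (1-D numpy array)
    query_label : function mapping a latent point to the model's predicted label
    target      : the class whose training data we try to reconstruct
    """
    dim = z.shape[0]
    # Sample points uniformly on a sphere of the given radius around z.
    dirs = np.random.randn(n_points, dim)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    # Query only hard labels at each sphere point.
    labels = np.array([query_label(z + radius * d) for d in dirs])
    misses = dirs[labels != target]
    if len(misses) == 0:
        return z, True  # whole sphere classified as target: caller can grow the radius
    # Step away from the average direction of non-target points,
    # i.e. toward the interior of the target class region.
    repulsion = -misses.mean(axis=0)
    repulsion /= np.linalg.norm(repulsion) + 1e-12
    return z + step * repulsion, False
```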
2023-01-06
Golatkar, Aditya, Achille, Alessandro, Wang, Yu-Xiang, Roth, Aaron, Kearns, Michael, Soatto, Stefano.  2022.  Mixed Differential Privacy in Computer Vision. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :8366–8376.
We introduce AdaMix, an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data. While pre-training language models on large public datasets has enabled strong differential privacy (DP) guarantees with minor loss of accuracy, a similar practice yields punishing trade-offs in vision tasks: a few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset. AdaMix incorporates few-shot training, or cross-modal zero-shot learning, on public data prior to private fine-tuning to improve the trade-off. Relative to the non-private upper bound, AdaMix reduces the error increase from the baseline's 167–311%, on average across six datasets, to 68–92%, depending on the privacy level selected by the user. AdaMix tackles the trade-off arising in visual classification, whereby the most privacy-sensitive data, corresponding to isolated points in representation space, are also critical for high classification accuracy. In addition, AdaMix comes with strong theoretical privacy guarantees and convergence analysis.
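AdaMix's private fine-tuning phase builds on differentially private training. The sketch below shows the generic DP-SGD building block such fine-tuning relies on (per-example gradient clipping plus calibrated Gaussian noise), not AdaMix's actual algorithm; all parameter names are ours.

```python
import numpy as np

def dp_sgd_update(params, per_example_grads, lr=0.1, clip_norm=1.0, sigma=1.0,
                  rng=np.random.default_rng(0)):
    """One differentially private SGD step (generic DP-SGD, not AdaMix itself).

    per_example_grads: array of shape (batch, n_params), one gradient per example.
    """
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    # Clip each example's gradient to norm at most clip_norm.
    clipped = per_example_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))
    batch = per_example_grads.shape[0]
    # Average, then add Gaussian noise scaled to the clipping bound; this
    # (clip_norm * sigma) calibration is what yields (epsilon, delta)-DP
    # after privacy accounting.
    noisy = clipped.sum(axis=0) / batch
    noisy += rng.normal(0.0, sigma * clip_norm / batch, size=noisy.shape)
    return params - lr * noisy
```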
2021-10-12
Al Omar, Abdullah, Jamil, Abu Kaisar, Nur, Md. Shakhawath Hossain, Hasan, Md Mahamudul, Bosri, Rabeya, Bhuiyan, Md Zakirul Alam, Rahman, Mohammad Shahriar.  2020.  Towards A Transparent and Privacy-Preserving Healthcare Platform with Blockchain for Smart Cities. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :1291–1296.
In smart cities, data privacy and security issues of Electronic Health Records (EHRs) are growing in importance day by day, as cyber attackers have identified the weaknesses of EHR platforms. Besides, health insurance companies interacting with the EHRs play a vital role in covering all or part of a patient's financial risks. Insurance companies have specific policies for which patients have to pay; sometimes these policies can be altered by fraudulent entities. Another problem patients face in smart cities is that when they interact with a health organization, an insurance company, or another party, they have to prove their identity to each organization or company separately, while health organizations and insurance companies have to ensure they know with whom they are interacting. To build a platform where a patient's personal information and insurance policy are handled securely, we introduce an application of blockchain to solve the above-mentioned issues. In this paper, we present a solution for the healthcare system that will provide patient privacy and transparency towards insurance policies by incorporating blockchain. Privacy of the patient information will be provided using cryptographic tools.
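One common pattern consistent with this abstract, sketched below under our own assumptions, is to encrypt the EHR off-chain and anchor only its digest on-chain for tamper-evidence; the field names and the use of Fernet are illustrative, not the paper's design.

```python
import hashlib
import json
from cryptography.fernet import Fernet  # assumption: any authenticated cipher would do

def store_record(ehr: dict, chain: list):
    """Encrypt the record off-chain; publish only its hash on the chain."""
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(json.dumps(ehr).encode())
    digest = hashlib.sha256(ciphertext).hexdigest()
    chain.append({"record_hash": digest})   # public, tamper-evident anchor
    return ciphertext, key                  # ciphertext stored off-chain, key held by patient

def verify_record(ciphertext: bytes, chain: list) -> bool:
    """Anyone can check integrity against the chain without seeing plaintext."""
    digest = hashlib.sha256(ciphertext).hexdigest()
    return any(entry["record_hash"] == digest for entry in chain)
```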
2021-08-17
Noor, Abdul, Wu, Youxi, Khan, Salabat.  2020.  Secure and Transparent Public-key Management System for Vehicular Social Networks. 2020 IEEE 6th International Conference on Computer and Communications (ICCC). :309–316.
Vehicular Social Networks (VSNs) are expected to become a reality soon, in which commuters with common interests in a virtual community of vehicles, drivers, and passengers can share information about both road conditions and their surroundings. This will improve transportation efficiency and public safety. However, social networking exposes vehicles to different kinds of cyber-attacks. This concern can be addressed through an efficient and secure key management framework. This study presents a Secure and Transparent Public-key Management (ST-PKMS) scheme based on blockchain and a notary system, addressing security and privacy challenges specific to VSNs. ST-PKMS significantly enhances the efficiency and trustworthiness of mutual authentication. In ST-PKMS, each vehicle has multiple short-lived anonymous public keys, which are recorded on the blockchain platform. However, a public key is activated only once the notary system notarizes it, and clients accept only notarized public keys during mutual authentication. Compromised vehicles can be effectively removed from VSNs by blocking notarization of their public keys; thus, the need to distribute a Certificate Revocation List (CRL) is eliminated in the proposed scheme. ST-PKMS ensures transparency, security, privacy, and availability, even in the face of an active adversary. The simulation and evaluation results show that ST-PKMS meets real-time performance requirements and is cost-effective in terms of scalability, delay, and communication overhead.
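A minimal sketch of the client-side acceptance rule implied by the abstract: a key is accepted only if it is both recorded on-chain and currently notarized, so revocation is simply "stop notarizing" and no CRL distribution is needed. The data structures and field names here are our own assumptions.

```python
import time

def accept_public_key(key_id, blockchain, notary_log, now=None):
    """Client-side check during mutual authentication (illustrative only).

    blockchain : dict mapping key_id -> on-chain registration entry
    notary_log : dict mapping key_id -> notarization with an expiry time
    """
    now = now or time.time()
    if blockchain.get(key_id) is None:
        return False                 # never registered on the chain
    notarization = notary_log.get(key_id)
    if notarization is None:
        return False                 # registered but not activated (or revoked)
    return notarization["expires_at"] > now  # short-lived keys expire on their own
```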
2021-03-04
Algehed, M., Flanagan, C..  2020.  Transparent IFC Enforcement: Possibility and (In)Efficiency Results. 2020 IEEE 33rd Computer Security Foundations Symposium (CSF). :65–78.

Information Flow Control (IFC) is a collection of techniques for ensuring a no-write-down, no-read-up style security policy known as noninterference. Traditional methods for both static (e.g. type systems) and dynamic (e.g. runtime monitors) IFC suffer from untenable numbers of false alarms on real-world programs. Secure Multi-Execution (SME) promises to provide secure information flow control without modifying the behaviour of already secure programs, a property commonly referred to as transparency. Implementations of SME exist for the web in the form of the FlowFox browser and as plug-ins to several programming languages. Furthermore, SME can in theory work in a black-box manner, meaning that it can be programming-language agnostic, making it ideal for securing legacy or third-party systems. As such, SME and its variants, like Multiple Facets (MF) and Faceted Secure Multi-Execution (FSME), appear to be a family of panaceas for the security engineer. Given all these advantages, why are these techniques not ubiquitous in practice? The answer lies, partially, in the issue of runtime and memory overhead: SME and its variants are prohibitively expensive to deploy in many non-trivial situations. Why is this the case? On the surface, the reason is simple: the techniques in the SME family all rely on the idea of multi-execution, running all or parts of a program multiple times to achieve noninterference, which naturally causes some overhead. However, the predominant thinking in the IFC community has been that these overheads can be overcome. In this paper we argue that there are fundamental reasons to expect this not to be the case and prove two key theorems: (1) all transparent enforcement is polynomial-time equivalent to multi-execution, and (2) all black-box enforcement takes time exponential in the number of principals in the security lattice. Our methods also allow us to answer, in the affirmative, an open question about the possibility of secure and transparent enforcement of a security condition known as Termination-Insensitive Noninterference.
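To make the multi-execution overhead concrete, here is a toy two-level SME sketch (ours, not the paper's): the program runs once per security level, and the low run is fed a default in place of the secret, so low-observable output cannot depend on it. With n principals and a powerset lattice this generalizes to up to 2^n runs, which is exactly the cost the exponential lower bound concerns.

```python
def secure_multi_execution(program, high_input, low_input, default=""):
    """Two-level SME (illustrative): run the program once per security level.

    The LOW run never sees the high input (it gets a default), so LOW output
    cannot depend on secrets; the HIGH run sees everything.
    """
    low_run = program(high=default, low=low_input)      # feeds LOW observers
    high_run = program(high=high_input, low=low_input)  # feeds HIGH observers
    return {"LOW": low_run["LOW"], "HIGH": high_run["HIGH"]}

# Usage: a program that (insecurely) copies the secret into its LOW output.
leaky = lambda high, low: {"LOW": f"{low}:{high}", "HIGH": f"{low}:{high}"}
print(secure_multi_execution(leaky, high_input="secret", low_input="pub"))
# The LOW output is computed without the secret, so the leak is neutralized,
# at the cost of executing the program twice.
```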

2021-02-01
Wickramasinghe, C. S., Marino, D. L., Grandio, J., Manic, M..  2020.  Trustworthy AI Development Guidelines for Human System Interaction. 2020 13th International Conference on Human System Interaction (HSI). :130–136.
Artificial Intelligence (AI) is influencing almost all areas of human life. Even though these AI-based systems frequently provide state-of-the-art performance, humans still hesitate to develop, deploy, and use AI systems. The main reason for this is the lack of trust in AI systems, caused by the deficient transparency of existing AI systems. As a solution, the “Trustworthy AI” research area emerged with the goal of defining guidelines and frameworks for improving user trust in AI systems, allowing humans to use them without fear. While trust in AI is an active area of research, very little work exists whose focus is building human trust to improve the interactions between humans and AI systems. In this paper, we provide a concise survey of concepts of trustworthy AI. Further, we present trustworthy AI development guidelines for improving user trust and enhancing the interactions between AI systems and humans that happen during the AI system life cycle.
2021-01-20
Mavroudis, V., Svenda, P..  2020.  JCMathLib: Wrapper Cryptographic Library for Transparent and Certifiable JavaCard Applets. 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). :89–96.

The JavaCard multi-application platform is now deployed to over twenty billion smartcards, used in various applications ranging from banking payments and authentication tokens to SIM cards and electronic documents. In most of those use cases, access to various cryptographic primitives is required. The standard JavaCard API provides a basic level of access to such functionality (e.g., RSA encryption) but does not expose low-level cryptographic primitives (e.g., elliptic curve operations) and essential data types (e.g., Integers). Developers can access such features only through proprietary, manufacturer-specific APIs. Unfortunately, such APIs significantly reduce the interoperability and certification transparency of the software produced as they require non-disclosure agreements (NDA) that prohibit public sharing of the applet's source code. We introduce JCMathLib, an open library that provides an intermediate layer realizing essential data types and low-level cryptographic primitives from high-level operations. To achieve this, we introduce a series of optimization techniques for resource-constrained platforms that make optimal use of the underlying hardware, while having a small memory footprint. To the best of our knowledge, it is the first generic library for low-level cryptographic operations in JavaCards that does not rely on a proprietary API. Without any disclosure limitations, JCMathLib has the potential to increase transparency by enabling open code sharing, release of research prototypes, and public code audits. Moreover, JCMathLib can help resolve the conflict between strict open-source licenses such as GPL and proprietary APIs available only under an NDA. This is of particular importance due to the introduction of JavaCard API v3.1, which targets specifically IoT devices, where open-source development might be more common than in the relatively closed world of government-issued electronic documents.

2020-07-03
Giles, Keir, Hartmann, Kim.  2019.  “Silent Battle” Goes Loud: Entering a New Era of State-Avowed Cyber Conflict. 2019 11th International Conference on Cyber Conflict (CyCon). 900:1–13.

The unprecedented transparency shown by the Netherlands intelligence services in exposing Russian GRU officers in October 2018 is indicative of a number of new trends in state handling of cyber conflict. US public indictments of foreign state intelligence officials, and the UK's deliberate provision of information allowing the global media to “dox” GRU officers implicated in the Salisbury poison attack in early 2018, set a precedent for revealing information that previously would have been confidential. This is a major departure from previous practice where the details of state-sponsored cyber attacks would only be discovered through lengthy investigative journalism (as with Stuxnet) or through the efforts of cybersecurity corporations (as with Red October). This paper uses case studies to illustrate the nature of this departure and consider its impact, including potentially substantial implications for state handling of cyber conflict. The paper examines these implications, including:
· The effect of transparency on perception of conflict. Greater public knowledge of attacks will lead to greater public acceptance that countermeasures should be taken. This may extend to public preparedness to accept that a state of declared or undeclared war exists with a cyber aggressor.
· The resulting effect on legality. This adds a new element to the long-running debates on the legality of cyber attacks or counter-attacks, by affecting the point at which a state of conflict is politically and socially, even if not legally, judged to exist.
· The further resulting effect on permissions and authorities to conduct cyber attacks, in the form of adjustment to the glaring imbalance between the means and methods available to aggressors (especially those who believe themselves already to be in conflict) and defenders. Greater openness has already intensified public and political questioning of the restraint shown by NATO and EU nations in responding to Russian actions; this trend will continue.
· Consequences for deterrence, both specifically within cyber conflict and also more broadly in deterring hostile actions.
In sum, the paper brings together the direct and immediate policy implications, for a range of nations and for NATO, of the new apparent policy of transparency.

2020-05-22
Jemal, Jay, Kornegay, Kevin T..  2019.  Security Assessment of Blockchains in Heterogenous IoT Networks: Invited Presentation. 2019 53rd Annual Conference on Information Sciences and Systems (CISS). :1–4.

As blockchain technology has become better understood in recent years, and its capability to solve enterprise business use cases has become evident, technologists have been exploring blockchain technology to solve use cases that have daunted industries for years. Unlike existing technologies, one of the key features of blockchain technology is its unparalleled capability to provide traceability, accountability, and immutable records that can be accessed at any point in time. One application area of interest for blockchain is securing heterogeneous networks. This paper explores the security challenges in a heterogeneous network of IoT devices and whether blockchain can be a viable solution. Using an experimental approach, we explore the possibility of using blockchain technology to secure IoT devices, validate IoT device transactions, and establish a chain of trust to secure an IoT device mesh network, as well as investigate the plausibility of using immutable transactions for forensic analysis.

2020-04-13
R P, Jagadeesh Chandra Bose, Singi, Kapil, Kaulgud, Vikrant, Phokela, Kanchanjot Kaur, Podder, Sanjay.  2019.  Framework for Trustworthy Software Development. 2019 34th IEEE/ACM International Conference on Automated Software Engineering Workshop (ASEW). :45–48.
Intelligent software applications are becoming ubiquitous and pervasive, affecting various aspects of our lives and livelihoods. At the same time, the risks to which these systems expose organizations and end users are growing dramatically. Trustworthiness of software applications is becoming a paramount necessity. Trust is to be regarded as a first-class citizen in the total product life cycle and should be addressed across all stages of software development. Trust can be viewed from two facets: one at an algorithmic level (e.g., bias-free, discrimination-aware, explainable and interpretable techniques) and the other at a process level, by making development processes more transparent, auditable, and adherent to regulations and best practices. In this paper, we address the latter and propose a blockchain-enabled governance framework for building trustworthy software. Our framework supports the recording, monitoring, and analysis of various activities throughout the application development life cycle, thereby bringing in transparency and auditability. It facilitates the specification of regulations and best practices, verifies adherence to them, raises alerts on non-compliance, and prescribes remedial measures.
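A minimal sketch of the auditability property the framework targets, using a hash-chained append-only log as a stand-in for the paper's blockchain layer; all class and field names are ours.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only, hash-chained log of development-lifecycle events
    (illustrates tamper-evident auditability, not the paper's implementation)."""

    def __init__(self):
        self.entries = []

    def record(self, actor, activity, artifact):
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"actor": actor, "activity": activity,
                "artifact": artifact, "ts": time.time(), "prev": prev}
        # Each entry commits to its predecessor, so rewriting history breaks the chain.
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)

    def verify(self):
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True
```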
2020-03-30
Jentzsch, Sophie F., Hochgeschwender, Nico.  2019.  Don't Forget Your Roots! Using Provenance Data for Transparent and Explainable Development of Machine Learning Models. 2019 34th IEEE/ACM International Conference on Automated Software Engineering Workshop (ASEW). :37–40.
Explaining the reasoning and behaviour of artificial intelligent systems to human users becomes increasingly urgent, especially in the field of machine learning. Many recent contributions approach this issue with post-hoc methods, meaning they consider the final system and its outcomes, while the roots of the included artefacts are widely neglected. However, we argue in this position paper that there needs to be a stronger focus on the development process. Without insight into the specific design decisions and meta-information that accrue during development, an accurate explanation of the resulting model is hardly possible. To remedy this situation, we propose to increase process transparency by applying provenance methods, which also serves as a basis for increased explainability.
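As a sketch of the kind of provenance capture the authors advocate, the helper below records the data, code version, and design decisions behind a trained model; the field set and function names are our own assumptions.

```python
import hashlib
import json
import subprocess

def model_provenance(dataset_path, hyperparams, metrics):
    """Capture provenance metadata at training time (illustrative field set)."""
    with open(dataset_path, "rb") as f:
        data_digest = hashlib.sha256(f.read()).hexdigest()
    # Record the exact code version, assuming training runs inside a git checkout.
    commit = subprocess.run(["git", "rev-parse", "HEAD"],
                            capture_output=True, text=True).stdout.strip()
    return {
        "dataset_sha256": data_digest,   # exactly which data produced the model
        "code_commit": commit,           # exactly which code version trained it
        "hyperparameters": hyperparams,  # design decisions that shaped the model
        "metrics": metrics,              # evaluation context for later explanation
    }

# Usage: persist alongside the model artefact.
record = model_provenance("train.csv", {"lr": 0.01, "epochs": 20}, {"acc": 0.91})
print(json.dumps(record, indent=2))
```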
2020-01-27
Akinrolabu, Olusola, New, Steve, Martin, Andrew.  2019.  Assessing the Security Risks of Multicloud SaaS Applications: A Real-World Case Study. 2019 6th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/ 2019 5th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom). :81–88.

Cloud computing is widely believed to be the future of computing. It has grown from being a promising idea to one of the fastest-growing research and development paradigms of the computing industry. However, security and privacy concerns represent a significant hindrance to the widespread adoption of cloud computing services. Likewise, attributes of the cloud, such as multi-tenancy, a dynamic supply chain, limited visibility of security controls, and system complexity, have exacerbated the challenge of assessing cloud risks. In this paper, we conduct a real-world case study to validate the use of a supply-chain-inclusive risk assessment model in assessing the risks of a multicloud SaaS application. Using the components of the Cloud Supply Chain Cyber Risk Assessment (CSCCRA) model, we show how the model enables cloud service providers (CSPs) to identify critical suppliers, map their supply chain, identify weak security spots within the chain, and analyse the risk of the SaaS application, while also presenting the value of the risk in monetary terms. A key novelty of the CSCCRA model is that it caters for the complexities involved in the delivery of SaaS applications and adapts to the dynamic nature of the cloud, enabling CSPs to conduct risk assessments at a higher frequency, in response to changes in the supply chain.
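To illustrate presenting risk in monetary terms, here is a simple annualized-loss-expectancy style calculation over a supply chain; this is an illustration of the general idea only, not the CSCCRA algorithm, and the supplier figures are invented.

```python
def annualized_loss_expectancy(suppliers):
    """Express supply-chain risk in dollars: sum of probability x impact
    per supplier per year (generic ALE, not the CSCCRA method itself)."""
    return sum(s["breach_prob_per_year"] * s["impact_usd"] for s in suppliers)

# Hypothetical supply chain for a multicloud SaaS application.
chain = [
    {"name": "auth-provider", "breach_prob_per_year": 0.05, "impact_usd": 2_000_000},
    {"name": "cdn",           "breach_prob_per_year": 0.02, "impact_usd": 500_000},
]
print(f"Expected annual loss: ${annualized_loss_expectancy(chain):,.0f}")
# Expected annual loss: $110,000
```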

2019-08-26
Hasircioglu, Burak, Pignolet, Yvonne-Anne, Sivanthi, Thanikesavan.  2018.  Transparent Fault Tolerance for Real-Time Automation Systems. Proceedings of the 1st International Workshop on Internet of People, Assistive Robots and Things. :7–12.

Developing software is hard. Developing software that is resilient and does not crash on unexpected inputs or events is even harder, especially with IoT devices and real-time requirements, e.g., due to interactions with human beings. Therefore, there is a need for a software architecture that helps software developers build fault-tolerant software with as little pain and effort as possible. To this end, we have designed a fault tolerance framework for automation systems that lets developers remain mostly oblivious to fault tolerance issues, so they can focus on the application logic encapsulated in (micro)services. That is, the developer only needs to specify the required fault tolerance level by description, not implementation; the fault tolerance aspects are transparent to the developer, as the framework takes care of them. This approach is particularly suited to the development of mixed-criticality systems, where different parts have very different and demanding functional and non-functional requirements. Such systems need highly specialized developers, and removing the burden of fault tolerance results in faster time to market and safer, more dependable systems.
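A toy sketch of the "specify by description, not implementation" idea: the developer declares a fault-tolerance level on a service and a wrapper handles restarts transparently. This is entirely our own illustration, not the paper's framework.

```python
import functools
import time

def fault_tolerant(retries=3, backoff=0.5):
    """Declarative fault-tolerance annotation (illustrative): the developer
    states the required level; retries and backoff stay out of the app logic."""
    def wrap(service):
        @functools.wraps(service)
        def run(*args, **kwargs):
            for attempt in range(retries + 1):
                try:
                    return service(*args, **kwargs)
                except Exception:
                    if attempt == retries:
                        raise  # fault tolerance budget exhausted
                    time.sleep(backoff * (2 ** attempt))  # exponential backoff
        return run
    return wrap

@fault_tolerant(retries=2)
def read_sensor():
    """Application logic only; recovery is handled by the wrapper."""
    ...
```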

2018-11-14
Wakenshaw, S. Y. L., Maple, C., Schraefel, M. C., Gomer, R., Ghirardello, K..  2018.  Mechanisms for Meaningful Consent in Internet of Things. Living in the Internet of Things: Cybersecurity of the IoT - 2018. :1–10.

Consent is a key measure for privacy protection and needs to be 'meaningful' to give people informational power. It is increasingly important that individuals are provided with real choices and are empowered to negotiate for meaningful consent. Meaningful consent is an important area for consideration in IoT systems, since privacy is a significant factor impacting adoption of IoT. Obtaining meaningful consent is becoming increasingly challenging in IoT environments. It is proposed that an "apparency, pragmatic/semantic transparency model" adopted for data management could make consent more meaningful, that is, visible, controllable and understandable. The model has illustrated the 'why' and 'what' issues regarding data management for potential meaningful consent [1]. In this paper, we focus on the 'how' issue, i.e. how to implement the model in IoT systems. We discuss apparency by focusing on the interactions and data actions in the IoT system; pragmatic transparency by centring on the privacy risks and threats of data actions; and semantic transparency by focusing on the terms and language used by individuals and the experts. We believe that our discussion will elicit more research on the 'apparency model' in IoT for meaningful consent.

2017-05-18
Chachmon, Nadav, Richins, Daniel, Cohn, Robert, Christensson, Magnus, Cui, Wenzhi, Reddi, Vijay Janapa.  2016.  Simulation and Analysis Engine for Scale-Out Workloads. Proceedings of the 2016 International Conference on Supercomputing. :22:1–22:13.

We introduce a system-level Simulation and Analysis Engine (SAE) framework based on dynamic binary instrumentation for fine-grained and customizable instruction-level introspection of everything that executes on the processor. SAE can instrument the BIOS, kernel, drivers, and user processes. It can also instrument multiple systems simultaneously using a single instrumentation interface, which is essential for studying scale-out applications. SAE is an x86 instruction set simulator designed specifically to enable rapid prototyping, evaluation, and validation of architectural extensions and program analysis tools using its flexible APIs. It is fast enough to execute full platform workloads (a modern operating system can boot in a few minutes), thus enabling research, evaluation, and validation of complex functionalities related to multicore configurations, virtualization, security, and more. To reach high speeds, SAE couples tightly with a virtual platform and employs both a just-in-time (JIT) compiler that helps simulate simple instructions efficiently and a fast interpreter for simulating new or complex instructions. We describe SAE's architecture and instrumentation engine design and show the framework's usefulness for single- and multi-system architectural and program analysis studies.
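To illustrate instruction-level instrumentation in the abstract's sense, here is a toy interpreter with per-instruction callbacks; it shows the concept only, and neither the opcode set nor the callback signature reflects SAE's actual API.

```python
def run_instrumented(program, callbacks):
    """Tiny stack-machine interpreter with instruction-level hooks.

    program  : list of (opcode, operand) pairs
    callbacks: functions invoked before every instruction executes,
               the point where analysis tools would attach.
    """
    stack, pc = [], 0
    while pc < len(program):
        op, arg = program[pc]
        for cb in callbacks:                   # analysis tools hook in here
            cb(pc, op, arg, list(stack))       # pass a copy of machine state
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "JMP":
            pc = arg
            continue
        pc += 1
    return stack

# Usage: trace every executed instruction.
trace = []
run_instrumented([("PUSH", 2), ("PUSH", 3), ("ADD", None)],
                 [lambda pc, op, arg, st: trace.append((pc, op))])
print(trace)  # [(0, 'PUSH'), (1, 'PUSH'), (2, 'ADD')]
```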

2017-05-16
Kizilcec, René F..  2016.  How Much Information?: Effects of Transparency on Trust in an Algorithmic Interface. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. :2390–2395.

The rising prevalence of algorithmic interfaces, such as curated feeds in online news, raises new questions for designers, scholars, and critics of media. This work focuses on how transparent design of algorithmic interfaces can promote awareness and foster trust. A two-stage process of how transparency affects trust was hypothesized, drawing on theories of information processing and procedural justice. In an online field experiment, three levels of system transparency were tested in the high-stakes context of peer assessment. Individuals whose expectations were violated (by receiving a lower grade than expected) trusted the system less, unless the grading algorithm was made more transparent through explanation. However, providing too much information eroded this trust. Attitudes of individuals whose expectations were met did not vary with transparency. Results are discussed in terms of a dual-process model of attitude change and the depth of justification of perceived inconsistency. Designing for trust requires balanced interface transparency: not too little and not too much.

2015-04-30
Riveiro, M., Lebram, M., Warston, H..  2014.  On visualizing threat evaluation configuration processes: A design proposal. 2014 17th International Conference on Information Fusion (FUSION). :1–8.

Threat evaluation is concerned with estimating the intent, capability, and opportunity of detected objects in relation to our own assets in an area of interest. Inferring whether a target is threatening, and to what degree, is far from a trivial task. Expert operators normally have at their disposal different support systems that analyze the incoming data and provide recommendations for actions. Since the ultimate responsibility lies with the operators, it is crucial that they trust and know how to configure and use these systems, as well as have a good understanding of their inner workings, strengths, and limitations. To limit the negative effects of inadequate cooperation between operators and their support systems, this paper presents a design proposal that aims at making the threat evaluation process more transparent. We focus on the initialization, configuration, and preparation phases of the threat evaluation process, supporting the user in analyzing the behavior of the system with respect to the relevant parameters involved in the threat estimations. To do so, we follow a known design process model and implement our suggestions in a proof-of-concept prototype that we evaluate with military expert system designers.