Biblio
Filters: Keyword is annotations
Improving Anomaly Detection with a Self-Supervised Task Based on Generative Adversarial Network. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :3563–3567.
2022. Existing anomaly detection models achieve success in detecting abnormal images with generative adversarial networks despite insufficient annotation of anomalous samples. However, existing models cannot accurately identify anomalous samples that are close to normal samples. We assume that the main reason is that these methods ignore the diversity of patterns in normal samples. To alleviate this issue, this paper proposes a novel anomaly detection framework based on a generative adversarial network, called ADe-GAN. More concretely, we construct a self-supervised learning task to fully explore the pattern information and latent representations of input images. In the inference stage, we design a new abnormality scoring approach that jointly considers pattern information and reconstruction errors to improve the performance of anomaly detection. Extensive experiments show that ADe-GAN outperforms state-of-the-art methods on several real-world datasets.
ISSN: 2379-190X
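The joint scoring idea described in the ADe-GAN abstract can be illustrated with a minimal sketch: a weighted combination of a reconstruction error and a self-supervised pattern-prediction error. The function name, inputs, and the simple weighted-sum form below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def abnormality_score(x, x_rec, p_true, p_pred, lam=0.5):
    """Toy abnormality score combining a reconstruction error with a
    pattern-prediction error, in the spirit of ADe-GAN's joint scoring.
    All names and the weighting are illustrative assumptions."""
    recon_err = np.mean((x - x_rec) ** 2)          # pixel-wise reconstruction error
    pattern_err = np.mean((p_true - p_pred) ** 2)  # self-supervised pattern error
    return lam * recon_err + (1.0 - lam) * pattern_err

# A faithful reconstruction with a correct pattern prediction scores low...
x = np.ones((4, 4))
low = abnormality_score(x, x, np.array([1.0, 0.0]), np.array([1.0, 0.0]))
# ...while a poor reconstruction with a wrong pattern prediction scores high.
high = abnormality_score(x, np.zeros((4, 4)), np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```

Samples whose score exceeds a threshold calibrated on normal data would then be flagged as anomalies.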
ZeeStar: Private Smart Contracts by Homomorphic Encryption and Zero-knowledge Proofs. 2022 IEEE Symposium on Security and Privacy (SP). :179–197.
2022. Data privacy is a key concern for smart contracts handling sensitive data. The existing work zkay addresses this concern by allowing developers without cryptographic expertise to enforce data privacy. However, while zkay avoids fundamental limitations of other private smart contract systems, it cannot express key applications that involve operations on foreign data. We present ZeeStar, a language and compiler allowing non-experts to instantiate private smart contracts and supporting operations on foreign data. The ZeeStar language allows developers to ergonomically specify privacy constraints using zkay’s privacy annotations. The ZeeStar compiler then provably realizes these constraints by combining non-interactive zero-knowledge proofs and additively homomorphic encryption. We implemented ZeeStar for the public blockchain Ethereum. We demonstrated its expressiveness by encoding 12 example contracts, including oblivious transfer and a private payment system like Zether. ZeeStar is practical: it prepares transactions for our contracts in at most 54.7s, at an average cost of 339k gas.
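The additive homomorphism that ZeeStar's compiler relies on can be demonstrated with a tiny textbook Paillier instance. The parameters here are toy-sized and for illustration only; ZeeStar's actual scheme and parameters may differ. Multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts.

```python
from math import gcd
import random

# Tiny textbook Paillier, illustrating additively homomorphic encryption:
# Enc(a) * Enc(b) mod n^2 decrypts to a + b. Toy parameters only.
p, q = 17, 19
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse (Python 3.8+)

def enc(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 12, 30
assert dec(enc(a)) == a
# Homomorphic addition: multiply ciphertexts, decrypt the sum.
assert dec((enc(a) * enc(b)) % n2) == a + b
```

This is exactly the kind of operation on encrypted foreign data (e.g. adding an amount to another party's encrypted balance) that a private payment contract needs.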
Gradual Security Types and Gradual Guarantees. 2021 IEEE 34th Computer Security Foundations Symposium (CSF). :1–16.
2021. Information flow type systems enforce the security property of noninterference by detecting unauthorized data flows at compile-time. However, they require precise type annotations, making them difficult to use in practice as much of the legacy infrastructure is written in untyped or dynamically-typed languages. Gradual typing seamlessly integrates static and dynamic typing, providing the best of both approaches, and has been applied to information flow control, where information flow monitors are derived from gradual security types. Prior work on gradual information flow typing uncovered tensions between noninterference and the dynamic gradual guarantee: the property that less precise security type annotations in a program should not cause more runtime errors. This paper re-examines the connection between gradual information flow types and information flow monitors to identify the root cause of the tension between the gradual guarantees and noninterference. We develop runtime semantics for a simple imperative language with gradual information flow types that provides both noninterference and gradual guarantees. We leverage a proof technique developed for FlowML and reduce noninterference proofs to preservation proofs.
Efficient Human-In-The-Loop Object Detection using Bi-Directional Deep SORT and Annotation-Free Segment Identification. 2020 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). :1226–1233.
2020. The present study proposes a method for detecting objects with a high recall rate for human-supported video annotation. In recent years, automatic annotation techniques such as object detection and tracking have become more powerful; however, detection and tracking of occluded objects, small objects, and blurred objects remain difficult. In order to annotate such objects, manual annotation is inevitably required. For this reason, we envision a human-supported video annotation framework in which over-detected objects (i.e., false positives) are allowed in order to minimize oversights (i.e., false negatives) in automatic annotation, and the over-detected objects are then removed manually. This study attempts to achieve human-in-the-loop object detection with an emphasis on suppressing oversights in the former stage of the aforementioned annotation framework: bi-directional deep SORT is proposed to reliably capture missed objects, and annotation-free segment identification (AFSID) is proposed to identify video frames in which manual annotation is not required. These methods reinforce each other, yielding an increase in the detection rate while reducing the burden of human intervention. Experimental comparisons using a pedestrian video dataset demonstrated that bi-directional deep SORT with AFSID captured object candidates with a higher recall rate than the existing deep SORT, while reducing manpower costs compared to manual annotation at regular intervals.
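As a hypothetical sketch of the bi-directional idea: detections recovered by a backward tracking pass can be merged into the forward pass per frame whenever they do not overlap an existing forward detection. The data layout, function names, and IoU threshold below are assumptions for illustration, not the paper's implementation.

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def merge_frame(forward, backward, thresh=0.5):
    """Union of forward-pass boxes with backward-pass boxes that no
    forward box already covers (favoring recall over precision)."""
    merged = list(forward)
    for box in backward:
        if all(iou(box, f) < thresh for f in merged):
            merged.append(box)  # object recovered only by the backward pass
    return merged
```

Over-detections produced by this union are exactly what the envisioned framework expects a human to remove afterwards.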
Interpretation of Sentiment Analysis with Human-in-the-Loop. 2020 IEEE International Conference on Big Data (Big Data). :3099–3108.
2020. Human-in-the-Loop has been receiving special attention from the data science and machine learning community. It is essential to realize the advantages of human feedback and the pressing need for manual annotation to improve machine learning performance. Recent advancements in natural language processing (NLP) and machine learning have created unique challenges and opportunities for digital humanities research. In particular, there are ample opportunities for NLP and machine learning researchers to analyze data from literary texts and use these complex source texts to broaden our understanding of human sentiment using the human-in-the-loop approach. This paper presents our understanding of how human annotators differ from machine annotators in sentiment analysis tasks and how these differences can contribute to designing systems for the "human in the loop" sentiment analysis in complex, unstructured texts. We further explore the challenges and benefits of the human-machine collaboration for sentiment analysis using a case study in Greek tragedy and address some open questions about collaborative annotation for sentiments in literary texts. We focus primarily on (i) an analysis of the challenges in sentiment analysis tasks for humans and machines, and (ii) whether consistent annotation results are generated from multiple human annotators and multiple machine annotators. For human annotators, we have used a survey-based approach with about 60 college students. We have selected six popular sentiment analysis tools for machine annotators, including VADER, CoreNLP's sentiment annotator, TextBlob, LIME, Glove+LSTM, and RoBERTa. We have conducted a qualitative and quantitative evaluation with the human-in-the-loop approach and confirmed our observations on sentiment tasks using the Greek tragedy case study.
A Simple Data Augmentation Method to Improve the Performance of Named Entity Recognition Models in Medical Domain. 2021 6th International Conference on Computer Science and Engineering (UBMK). :763–768.
2021. Easy Data Augmentation was originally developed for text classification tasks. It consists of four basic methods: Synonym Replacement, Random Insertion, Random Deletion, and Random Swap. They yield accuracy improvements on several deep neural network models. In this study, we apply these methods to a new domain: we augment Named Entity Recognition datasets from the medical domain. Although the augmentation task is much more difficult due to the nature of named entities, which consist of words or word groups within sentences, we show that we can improve named entity recognition performance.
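The four EDA operations are simple enough to sketch directly on a tokenized sentence. The toy synonym table below stands in for a real resource such as WordNet, and none of this is the paper's exact code; a label-aware NER variant would additionally avoid deleting or swapping tokens inside entity spans.

```python
import random

# Toy stand-in for a real synonym resource such as WordNet.
SYNONYMS = {"quick": ["fast", "rapid"], "happy": ["glad", "joyful"]}

def synonym_replacement(tokens):
    out = list(tokens)
    candidates = [i for i, t in enumerate(out) if t in SYNONYMS]
    if candidates:
        i = random.choice(candidates)
        out[i] = random.choice(SYNONYMS[out[i]])  # swap in a synonym
    return out

def random_insertion(tokens):
    out = list(tokens)
    syns = [s for t in tokens for s in SYNONYMS.get(t, [])]
    if syns:
        out.insert(random.randrange(len(out) + 1), random.choice(syns))
    return out

def random_deletion(tokens, p=0.1):
    kept = [t for t in tokens if random.random() > p]
    return kept or [random.choice(tokens)]  # never return an empty sentence

def random_swap(tokens):
    out = list(tokens)
    if len(out) >= 2:
        i, j = random.sample(range(len(out)), 2)  # two distinct positions
        out[i], out[j] = out[j], out[i]
    return out
```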
Secure Compilation of Constant-Resource Programs. 2021 IEEE 34th Computer Security Foundations Symposium (CSF). :1–12.
2021. Observational non-interference (ONI) is a generic information-flow policy for side-channel leakage. Informally, a program is ONI-secure if observing program leakage during execution does not reveal any information about secrets. Formally, ONI is parametrized by a leakage function l, and different instances of ONI can be recovered through different instantiations of l. One popular instance of ONI is the cryptographic constant-time (CCT) policy, which is widely used in cryptographic libraries to protect against timing and cache attacks. Informally, a program is CCT-secure if it does not branch on secrets and does not perform secret-dependent memory accesses. Another instance of ONI is the constant-resource (CR) policy, a relaxation of the CCT policy which is used in Amazon's s2n implementation of TLS and in several other security applications. Informally, a program is CR-secure if its cost (modelled by a tick operator over an arbitrary semi-group) does not depend on secrets. In this paper, we consider the problem of preserving ONI by compilation. Prior work on the preservation of the CCT policy develops proof techniques for showing that main compiler optimisations preserve the CCT policy. However, these proof techniques critically rely on the fact that the semi-group used for modelling leakage satisfies the non-cancelling property: l1 + l1' = l2 + l2' ⇒ l1 = l2 ∧ l1' = l2'. Unfortunately, this non-cancelling property fails for the CR policy, because its underlying semi-group is (ℕ, +), and it is currently not known how to extend existing techniques to policies that do not satisfy non-cancellation. We propose a methodology for proving the preservation of the CR policy during a program transformation. We present an implementation of some elementary compiler passes, and apply the methodology to prove the preservation of these passes. Our results have been mechanically verified using the Coq proof assistant.
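The failure of the property l1 + l1' = l2 + l2' ⇒ l1 = l2 ∧ l1' = l2' over the semi-group (ℕ, +) is easy to verify by exhaustive search over small naturals; for instance, 0 + 1 = 1 + 0 even though the summands differ.

```python
from itertools import product

# Search (N, +) for a counterexample to
#   l1 + l1' = l2 + l2'  =>  l1 = l2 and l1' = l2'
def counterexample(bound=3):
    for l1, l1p, l2, l2p in product(range(bound), repeat=4):
        if l1 + l1p == l2 + l2p and (l1, l1p) != (l2, l2p):
            return (l1, l1p, l2, l2p)
    return None
```

The search immediately finds such a tuple, which is why the proof techniques developed for the CCT policy do not carry over to the CR policy's cost model.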
SSF: Smart city Semantics Framework for reusability of semantic data. 2021 International Conference on Information and Communication Technology Convergence (ICTC). :1625–1627.
2021. Semantic data carries semantic information about the relationships between the information and resources collected in a smart city, so that different domains and data can be organically connected. Various services using semantic data, such as public data integration for smart cities, semantic search, and linked open data, are emerging, and services that open and freely use semantic data are also increasing. By using semantic data, it is possible to create a variety of services regardless of platform and resource characteristics. However, despite the many advantages of semantic data, it is not easy to use because it requires a high understanding of semantic technologies such as SPARQL. Therefore, in this paper, we propose a semantic framework for users of semantic data so that new services can be created without a deep understanding of semantics. The semantics framework includes a template-based annotator that automatically generates semantic data from user input, and a semantic REST API that allows semantic data to be utilized without understanding SPARQL.
BBAM: Bounding Box Attribution Map for Weakly Supervised Semantic and Instance Segmentation. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :2643–2651.
2021. Weakly supervised segmentation methods using bounding box annotations focus on obtaining a pixel-level mask from each box containing an object. Existing methods typically depend on a class-agnostic mask generator, which operates on the low-level information intrinsic to an image. In this work, we utilize higher-level information from the behavior of a trained object detector, by seeking the smallest areas of the image from which the object detector produces almost the same result as it does from the whole image. These areas constitute a bounding-box attribution map (BBAM), which identifies the target object in its bounding box and thus serves as pseudo ground-truth for weakly supervised semantic and instance segmentation. This approach significantly outperforms recent comparable techniques on both the PASCAL VOC and MS COCO benchmarks in weakly supervised semantic and instance segmentation. In addition, we provide a detailed analysis of our method, offering deeper insight into the behavior of the BBAM.
Generation of Textual Explanations in XAI: The Case of Semantic Annotation. 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–6.
2021. Semantic image annotation is a field of paramount importance in which deep learning excels. However, some application domains, like security or medicine, may need an explanation of this annotation. Explainable Artificial Intelligence is an answer to this need. In this work, an explanation is a sentence in natural language addressed to human users to provide them with clues about the process that leads to the decision: the assignment of labels to image parts. We focus on semantic image annotation with fuzzy logic, which has proven to be a useful framework that captures both the imprecision of image segmentation and the vagueness of human spatial knowledge and vocabulary. In this paper, we present an algorithm for generating textual explanations of the semantic annotation of image regions.
The ASPIRE Framework for Software Protection. Proceedings of the 2016 ACM Workshop on Software PROtection. :91–92.
2016. In the ASPIRE research project, a software protection tool flow was designed and prototyped that targets native ARM Android code. This tool flow supports the deployment of a number of protections against man-at-the-end attacks. In this tutorial, an overview of the tool flow will be presented and attendees will participate in a hands-on demonstration. In addition, we will present an overview of the decision support systems developed in the project to facilitate the use of the protection tool flow.