Biblio

Filters: Author is Rocha, Anderson
2022-06-10
Yang, Jing, Vega-Oliveros, Didier, Seibt, Tais, Rocha, Anderson.  2021.  Scalable Fact-checking with Human-in-the-Loop. 2021 IEEE International Workshop on Information Forensics and Security (WIFS). :1–6.
Researchers have been investigating automated solutions for fact-checking on various fronts. However, current approaches often overlook the fact that the amount of information released every day keeps escalating and that much of it overlaps. To accelerate fact-checking, we bridge this gap by proposing a new pipeline that groups similar messages and summarizes them into aggregated claims. Specifically, we first clean a set of social media posts (e.g., tweets) and build a graph of all posts based on their semantics; then, we apply two clustering methods to group the messages for subsequent claim summarization. We evaluate the summaries both quantitatively with ROUGE scores and qualitatively with human evaluation. We also generate a graph of summaries to verify that there is no significant overlap among them. The pipeline reduced 28,818 original messages to 700 summary claims, showing its potential to speed up fact-checking by organizing and selecting representative claims from massive amounts of disorganized and redundant messages.
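As a rough illustration of the kind of pipeline the abstract describes, the sketch below embeds posts with a sentence encoder, links semantically similar ones in a graph, clusters them, and keeps one representative message per cluster. The encoder name, similarity threshold, and choice of community detection for the clustering step are assumptions made for the sketch, not the authors' configuration.

```python
# Minimal sketch: group semantically similar posts and pick representative claims.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

def representative_claims(posts, threshold=0.75):
    model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed encoder, not the paper's
    emb = model.encode(posts, normalize_embeddings=True)
    sim = cosine_similarity(emb)

    # Graph of posts: an edge whenever two posts are semantically close.
    g = nx.Graph()
    g.add_nodes_from(range(len(posts)))
    for i in range(len(posts)):
        for j in range(i + 1, len(posts)):
            if sim[i, j] >= threshold:
                g.add_edge(i, j, weight=float(sim[i, j]))

    # Community detection stands in for the paper's two clustering methods.
    claims = []
    for community in greedy_modularity_communities(g):
        members = sorted(community)
        centroid = emb[members].mean(axis=0)
        # Keep the member closest to the cluster centroid as the aggregated claim.
        best = max(members, key=lambda k: float(np.dot(emb[k], centroid)))
        claims.append(posts[best])
    return claims
```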
2022-04-12
Lavi, Bahram, Nascimento, José, Rocha, Anderson.  2021.  Semi-Supervised Feature Embedding for Data Sanitization in Real-World Events. ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :2495–2499.
With the rapid growth of data sharing through social media networks, determining which data items are relevant to a particular subject becomes paramount. We address the issue of establishing which images represent an event of interest through a semi-supervised learning technique. The method learns consistent and shared features related to an event from a small set of examples and propagates them to an unlabeled set. We investigate the behavior of five image feature representations, considering low- and high-level features and their combinations. We evaluate the effectiveness of the feature-embedding approach on five datasets collected from real-world events.
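A minimal sketch of the propagation idea follows: given features for a small labeled set of event images and a larger unlabeled set, labels are spread over a similarity graph. scikit-learn's LabelSpreading is used here as a stand-in for the paper's own semi-supervised embedding, and feature extraction is assumed to happen elsewhere.

```python
# Stand-in for the label-propagation step over image features.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

def propagate_event_labels(features_labeled, labels, features_unlabeled):
    X = np.vstack([features_labeled, features_unlabeled])
    # scikit-learn marks unlabeled samples with -1.
    y = np.concatenate([labels, -np.ones(len(features_unlabeled), dtype=int)])
    model = LabelSpreading(kernel="knn", n_neighbors=7)  # assumed settings
    model.fit(X, y)
    # Predicted relevance (event / non-event) for the unlabeled images.
    return model.transduction_[len(features_labeled):]
```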
2020-03-30
Bharati, Aparna, Moreira, Daniel, Brogan, Joel, Hale, Patricia, Bowyer, Kevin, Flynn, Patrick, Rocha, Anderson, Scheirer, Walter.  2019.  Beyond Pixels: Image Provenance Analysis Leveraging Metadata. 2019 IEEE Winter Conference on Applications of Computer Vision (WACV). :1692–1702.
Creative works, whether paintings or memes, follow unique journeys that result in their final form. Understanding these journeys, a process known as "provenance analysis," provides rich insights into the use, motivation, and authenticity underlying any given work. The application of this type of study to the expanse of unregulated content on the Internet is what we consider in this paper. Provenance analysis provides a snapshot of the chronology and validity of content as it is uploaded, re-uploaded, and modified over time. Although still in its infancy, automated provenance analysis for online multimedia is already being applied to different types of content. Most current works seek to build provenance graphs based on the shared content between images or videos. This can be a computationally expensive task, especially when considering the vast influx of content that the Internet sees every day. Utilizing non-content-based information, such as timestamps, geotags, and camera IDs, can help provide important insights into the path a particular image or video has traveled during its time on the Internet without large computational overhead. This paper tests the scope and applicability of metadata-based inferences for provenance graph construction in two different scenarios: digital image forensics and cultural analytics.
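The sketch below illustrates one simple form of metadata-only provenance linking: items that share a camera ID are chained in timestamp order, with no pixel comparison at all. The record fields ("id", "timestamp", "camera_id") are illustrative assumptions, not the paper's data model.

```python
# Hypothetical metadata-based provenance graph: chain captures from the same
# camera in time order; edges are cheap ancestry hypotheses, no content used.
from itertools import groupby
from operator import itemgetter
import networkx as nx

def metadata_provenance_graph(items):
    g = nx.DiGraph()
    g.add_nodes_from(item["id"] for item in items)
    by_camera = itemgetter("camera_id")
    for _, group in groupby(sorted(items, key=by_camera), key=by_camera):
        chain = sorted(group, key=itemgetter("timestamp"))
        # Edge from the earlier capture to the later one.
        for earlier, later in zip(chain, chain[1:]):
            g.add_edge(earlier["id"], later["id"])
    return g
```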
2019-08-12
Peixoto, Bruno Malveira, Avila, Sandra, Dias, Zanoni, Rocha, Anderson.  2018.  Breaking Down Violence: A Deep-learning Strategy to Model and Classify Violence in Videos. Proceedings of the 13th International Conference on Availability, Reliability and Security. :50:1–50:7.
Detecting violence in videos through automatic means is significant for law enforcement and for the analysis of surveillance footage with the intent of maintaining public safety. Moreover, it can be a great tool for protecting children from accessing inappropriate content and for helping parents make better-informed decisions about what their kids should watch. However, this is a challenging problem, since the very definition of violence is broad and highly subjective. Hence, detecting such nuances in videos with no human supervision is not only a technical but also a conceptual problem. With this in mind, we explore how to better describe the idea of violence for a convolutional neural network by breaking it into more objective and concrete parts. Initially, our method uses independent networks to learn features for more specific concepts related to violence, such as fights, explosions, and blood. We then use these features to classify each concept and later fuse them in a meta-classification to describe violence. We also explore how to represent time-based events in still images as network inputs, since many violent acts are described in terms of movement. We show that using more specific concepts is an intuitive and effective solution, besides being complementary in forming a more robust definition of violence. When compared to other methods for violence detection, this approach achieves better classification quality while using only automatic features.
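A brief late-fusion sketch of the meta-classification idea: per-concept scores (fights, explosions, blood, ...) produced by separate networks are concatenated and fed to a single classifier that decides violent versus non-violent. The concept models are assumed to exist already, and an SVM is used here only as a stand-in for whatever meta-classifier the authors chose.

```python
# Late fusion of per-concept scores into a single violence decision.
import numpy as np
from sklearn.svm import SVC

def train_meta_classifier(concept_scores, labels):
    # concept_scores: list of (n_videos, n_outputs) arrays, one per concept network.
    X = np.hstack(concept_scores)
    meta = SVC(kernel="rbf", probability=True)  # assumed meta-classifier
    meta.fit(X, labels)
    return meta

def predict_violence(meta, concept_scores):
    # Probability that each video is violent, according to the fused classifier.
    return meta.predict_proba(np.hstack(concept_scores))[:, 1]
```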