BBAM: Bounding Box Attribution Map for Weakly Supervised Semantic and Instance Segmentation

Title: BBAM: Bounding Box Attribution Map for Weakly Supervised Semantic and Instance Segmentation
Publication Type: Conference Paper
Year of Publication: 2021
Authors: Lee, Jungbeom; Yi, Jihun; Shin, Chaehun; Yoon, Sungroh
Conference Name: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Date Published: June 2021
Keywords: annotations, attribution, benchmark testing, composability, computer vision, detectors, generators, human behavior, image segmentation, metrics, pubcrawl, semantics
Abstract: Weakly supervised segmentation methods using bounding box annotations focus on obtaining a pixel-level mask from each box containing an object. Existing methods typically depend on a class-agnostic mask generator, which operates on the low-level information intrinsic to an image. In this work, we utilize higher-level information from the behavior of a trained object detector, by seeking the smallest areas of the image from which the object detector produces almost the same result as it does from the whole image. These areas constitute a bounding-box attribution map (BBAM), which identifies the target object in its bounding box and thus serves as pseudo ground-truth for weakly supervised semantic and instance segmentation. This approach significantly outperforms recent comparable techniques on both the PASCAL VOC and MS COCO benchmarks in weakly supervised semantic and instance segmentation. In addition, we provide a detailed analysis of our method, offering deeper insight into the behavior of the BBAM.
DOI: 10.1109/CVPR46437.2021.00267
Citation Key: lee_bbam_2021
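The core idea in the abstract — find the smallest image region from which a trained detector produces almost the same output as from the whole image — can be sketched as a small optimization. The toy below is an illustrative assumption, not the authors' implementation: a hypothetical linear "detector" `f` stands in for a trained object detector, and a per-pixel mask `M = sigmoid(theta)` is fit to minimize output discrepancy plus a sparsity penalty `lam * sum(M)`.

```python
import numpy as np

# Toy sketch of the BBAM objective (not the paper's code):
#   L(M) = (f(x * M) - f(x))**2 + lam * sum(M)
# i.e. keep the detector's output while shrinking the mask.
# All names (f, lam, the 16x16 toy image) are illustrative assumptions.

rng = np.random.default_rng(0)
H = W = 16
x = rng.random((H, W))            # toy "image" with positive pixels

# Hypothetical stand-in detector: a fixed linear score that only
# depends on a 4x4 "object" region; BBAM itself perturbs the box and
# class predictions of a trained object detector.
w = np.zeros((H, W))
w[4:8, 4:8] = 1.0 + rng.random((4, 4))
f = lambda img: float(np.sum(w * img))

target = f(x)
theta = np.zeros((H, W))          # mask logits; M = sigmoid(theta)
lam, lr = 0.05, 0.5

for _ in range(1000):
    M = 1.0 / (1.0 + np.exp(-theta))
    diff = f(x * M) - target
    grad_M = 2.0 * diff * (w * x) + lam       # dL/dM for the linear f
    theta -= lr * grad_M * M * (1.0 - M)      # chain rule through sigmoid

M = 1.0 / (1.0 + np.exp(-theta))
inside = M[4:8, 4:8].mean()       # pixels the detector relies on
outside = (M.sum() - M[4:8, 4:8].sum()) / (M.size - 16)
```

After optimization the mask concentrates on the region the detector actually uses (`inside` near 1, `outside` near 0), which is the attribution-map behavior the abstract describes; in the paper, the resulting areas serve as pseudo ground-truth masks for segmentation.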