Biblio

Found 221 results

Filters: Keyword is visualization
2018-06-20
Luo, J. S., Lo, D. C. T..  2017.  Binary malware image classification using machine learning with local binary pattern. 2017 IEEE International Conference on Big Data (Big Data). :4664–4667.

Malware classification is a critical part of cyber-security. Traditional methodologies for malware classification typically rely on static and dynamic analysis to identify malware. In this paper, a malware classification methodology based on binary images and local binary pattern (LBP) features is proposed. First, malware binaries are rendered as images and reorganized into 3-by-3 grids, from which LBP features are extracted. Second, LBP is applied to the malware images to extract features, since it is well suited to pattern and texture classification. Finally, TensorFlow, a machine learning library, is used to classify the malware images with the LBP features. Performance comparisons among different classifiers and image descriptors, such as GIST (a spatial envelope) and LBP, demonstrate that the proposed approach outperforms the others.
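To make the pipeline described above concrete, here is a minimal Python sketch of the same idea: a binary is rendered as a grayscale image and an LBP histogram is extracted as a feature vector. The image width, the LBP parameters, and the use of scikit-image are illustrative assumptions, not details taken from the paper.

    # Minimal sketch (assumptions noted above): render a binary as an image, extract LBP features.
    import numpy as np
    from skimage.feature import local_binary_pattern

    def binary_to_image(path, width=256):
        data = np.fromfile(path, dtype=np.uint8)
        rows = len(data) // width
        return data[:rows * width].reshape(rows, width)

    def lbp_histogram(image, points=8, radius=1):
        lbp = local_binary_pattern(image, points, radius, method="uniform")
        bins = points + 2                                   # number of "uniform" patterns
        hist, _ = np.histogram(lbp, bins=bins, range=(0, bins), density=True)
        return hist                                         # feature vector for a classifier

The resulting histograms can then be fed to any off-the-shelf classifier in place of the TensorFlow model used in the paper.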

Ren, Z., Chen, G..  2017.  EntropyVis: Malware classification. 2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI). :1–6.

Malware writers often develop malware with automated measures, so the number of malware samples has increased dramatically. Automated measures tend to repeatedly reuse significant modules, which form the basis for identifying malware variants and discriminating malware families. Thus, we propose a novel visualization analysis method for researching malware similarity. This method converts malicious Windows Portable Executable (PE) files into local entropy images for observing the internal features of malware, and then normalizes the local entropy images into entropy pixel images for malware classification. We take advantage of the Jaccard index to measure similarities between entropy pixel images and the k-Nearest Neighbor (kNN) classification algorithm to assign entropy pixel images to different malware families. Preliminary experimental results show that our visualization method can discriminate malware families effectively.
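As a rough illustration of the entropy-image and Jaccard/kNN steps described above, the following sketch computes a local-entropy image from a PE file and classifies it by nearest neighbour under a generalised Jaccard similarity. The block size, image size, quantisation levels, and the 1-NN simplification are assumptions made here, not the paper's parameters.

    # Minimal sketch (assumptions noted above): local-entropy "pixel image" plus Jaccard 1-NN.
    import numpy as np

    def entropy_image(path, block=256, side=64, levels=16):
        data = np.fromfile(path, dtype=np.uint8)
        blocks = data[:len(data) // block * block].reshape(-1, block)
        ent = []
        for b in blocks:
            counts = np.bincount(b, minlength=256) / block
            p = counts[counts > 0]
            ent.append(-(p * np.log2(p)).sum())             # Shannon entropy, in [0, 8] bits
        ent = np.array(ent)
        img = np.zeros(side * side)
        img[:min(len(ent), side * side)] = ent[:side * side]
        return np.floor(img / 8.0 * (levels - 1)).astype(int).reshape(side, side)

    def jaccard(a, b):
        # Generalised (multiset) Jaccard similarity between two quantised entropy images.
        return np.minimum(a, b).sum() / max(np.maximum(a, b).sum(), 1)

    def predict_family(query, labelled):
        # labelled: list of (entropy image, family name) pairs; 1-NN by Jaccard similarity.
        return max(labelled, key=lambda x: jaccard(query, x[0]))[1]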

2018-06-11
Maines, C. L., Zhou, B., Tang, S., Shi, Q..  2017.  Towards a Framework for the Extension and Visualisation of Cyber Security Requirements in Modelling Languages. 2017 10th International Conference on Developments in eSystems Engineering (DeSE). :71–76.
Every so often papers are published presenting a new extension for modelling cyber security requirements in Business Process Model and Notation (BPMN). The frequent production of new extensions by experts underscores the need for a richer and more usable representation of security requirements in BPMN processes. In this paper, we present our work considering an analysis of existing extensions and identify the notational issues present within each of them. We discuss how there is as yet no single extension that represents a comprehensive range of cyber security concepts. Consequently, there is no adequate solution for accurately specifying cyber security requirements within BPMN. In order to address this, we propose a new framework that can be used to extend, visualise and verify cyber security requirements in not only BPMN but any other existing modelling language. The framework comprises the three core roles necessary for the successful development of a security extension, with each of these further subdivided into the respective components each role must complete.
2018-04-04
Gajjar, V., Khandhediya, Y., Gurnani, A..  2017.  Human Detection and Tracking for Video Surveillance: A Cognitive Science Approach. 2017 IEEE International Conference on Computer Vision Workshops (ICCVW). :2805–2809.

With crimes on the rise all around the world, video surveillance is becoming more important day by day. Due to the lack of human resources to monitor this increasing number of cameras manually, new computer vision algorithms to perform lower and higher level tasks are being developed. We have developed a new method that incorporates the widely used Histograms of Oriented Gradients (HOG), the theory of visual saliency, and the saliency prediction model Deep Multi-Level Network to detect human beings in video sequences. Furthermore, we implemented the k-means algorithm to cluster the HOG feature vectors of the positively detected windows and determined the path followed by a person in the video. We achieved a detection precision of 83.11% and a recall of 41.27%. We obtained these results 76.866 times faster than classification on normal images.
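The clustering step above can be pictured with a short sketch: HOG descriptors of the positively detected windows are clustered with k-means so that windows belonging to the same person can be linked over time. The window size, HOG parameters, use of scikit-image/scikit-learn, and the value of k are assumptions for illustration only.

    # Minimal sketch (assumptions noted above): cluster HOG descriptors of detected windows.
    import numpy as np
    from skimage.feature import hog
    from sklearn.cluster import KMeans

    def hog_descriptor(window):
        # window: e.g. a 128x64 grayscale patch around a positive detection.
        return hog(window, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), feature_vector=True)

    def cluster_detections(windows, k=3):
        feats = np.stack([hog_descriptor(w) for w in windows])
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(feats)
        return km.labels_                                   # one cluster id per detection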

2018-03-19
Liu, B., Zhu, Z., Yang, Y..  2017.  Convolutional Neural Networks Based Scale-Adaptive Kernelized Correlation Filter for Robust Visual Object Tracking. 2017 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC). :423–428.

Visual object tracking is challenging when the object's appearance undergoes significant changes, such as scale change, background clutter, occlusion, and so on. In this paper, we crop multiscale templates of different sizes around the object and feed them into the network to pretrain it to adapt to size changes of the tracked object. Different from previous tracking methods based on deep convolutional neural networks (CNNs), we exploit a deep Residual Network (ResNet) to train a multiscale object appearance model offline on ImageNet, and the features from the pretrained network are then transferred to tracking tasks. Meanwhile, the proposed method combines multilayer convolutional features, making it robust to disturbance, scale change, and occlusion. In addition, we fuse a multiscale search strategy into three kernelized correlation filters, which strengthens the tracker's ability to adapt to scale changes of the object. Unlike previous methods, we directly learn object appearance change by integrating multiscale templates into the ResNet. We compared our method with other CNN-based or correlation-filter tracking methods; the experimental results show that our tracking method is superior to existing state-of-the-art tracking methods on the Object Tracking Benchmark (OTB-2015) and the Visual Object Tracking Benchmark (VOT-2015).
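For readers unfamiliar with correlation-filter trackers, the sketch below shows the standard kernelized correlation filter (KCF) training and detection equations that the paper builds on. The CNN feature extraction, cosine windowing, multilayer fusion, and multiscale search of the proposed method are omitted, and the kernel width and regularisation value are illustrative assumptions.

    # Minimal sketch (assumptions noted above): single-channel KCF training and detection.
    import numpy as np

    def gaussian_correlation(x, z, sigma=0.5):
        # Gaussian kernel correlation of two equally sized patches, via the Fourier domain.
        c = np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(z))).real
        d = (x ** 2).sum() + (z ** 2).sum() - 2 * c
        return np.exp(-np.maximum(d, 0) / (sigma ** 2 * x.size))

    def train_filter(x, y, lam=1e-4):
        # x: feature patch around the target; y: desired Gaussian-shaped response map.
        k = gaussian_correlation(x, x)
        return np.fft.fft2(y) / (np.fft.fft2(k) + lam)      # dual coefficients alpha_hat

    def detect(alpha_hat, x_model, z):
        # z: feature patch in the new frame; the response peak gives the target translation.
        k = gaussian_correlation(x_model, z)
        response = np.fft.ifft2(alpha_hat * np.fft.fft2(k)).real
        return np.unravel_index(response.argmax(), response.shape)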

2018-02-28
Su, J. C., Wu, C., Jiang, H., Maji, S..  2017.  Reasoning About Fine-Grained Attribute Phrases Using Reference Games. 2017 IEEE International Conference on Computer Vision (ICCV). :418–427.

We present a framework for learning to describe fine-grained visual differences between instances using attribute phrases. Attribute phrases capture distinguishing aspects of an object (e.g., “propeller on the nose” or “door near the wing” for airplanes) in a compositional manner. Instances within a category can be described by a set of these phrases and collectively they span the space of semantic attributes for a category. We collect a large dataset of such phrases by asking annotators to describe several visual differences between a pair of instances within a category. We then learn to describe and ground these phrases to images in the context of a reference game between a speaker and a listener. The goal of a speaker is to describe attributes of an image that allow the listener to correctly identify it within a pair. Data collected in a pairwise manner improves the ability of the speaker to generate, and the ability of the listener to interpret visual descriptions. Moreover, due to the compositionality of attribute phrases, the trained listeners can interpret descriptions not seen during training for image retrieval, and the speakers can generate attribute-based explanations for differences between previously unseen categories. We also show that embedding an image into the semantic space of attribute phrases derived from listeners offers 20% improvement in accuracy over existing attribute-based representations on the FGVC-aircraft dataset.

Arellanes, D., Lau, K. K..  2017.  Exogenous Connectors for Hierarchical Service Composition. 2017 IEEE 10th Conference on Service-Oriented Computing and Applications (SOCA). :125–132.

Service composition is currently done by (hierarchical) orchestration and choreography. However, these approaches do not support explicit control flow and total compositionality, which are crucial for the scalability of service-oriented systems. In this paper, we propose exogenous connectors for service composition. These connectors support both explicit control flow and total compositionality in hierarchical service composition. To validate and evaluate our proposal, we present a case study based on the popular MusicCorp.
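A minimal sketch of the exogenous-connector idea follows, under the assumption that connectors such as a sequencer and a selector coordinate services from the outside; the class names and the checkout example are invented for illustration and are not taken from the paper or from MusicCorp.

    # Minimal sketch (assumptions noted above): control flow lives in connectors, not services.
    class Service:
        # An atomic computation unit; it performs work but contains no control flow.
        def __init__(self, fn):
            self.fn = fn
        def invoke(self, request):
            return self.fn(request)

    class Sequencer:
        # Exogenous connector: invokes its sub-units in order, threading the result through.
        def __init__(self, *units):
            self.units = units
        def invoke(self, request):
            for unit in self.units:
                request = unit.invoke(request)
            return request

    class Selector:
        # Exogenous connector: routes the request to exactly one sub-unit.
        def __init__(self, choose, *units):
            self.choose, self.units = choose, units
        def invoke(self, request):
            return self.units[self.choose(request)].invoke(request)

    # Hierarchical composition: connectors compose services and other connectors alike,
    # so the whole composite is itself a unit with the same invoke() interface.
    order = Sequencer(
        Service(lambda r: {**r, "priced": True}),
        Selector(lambda r: 0 if r.get("priced") else 1,
                 Service(lambda r: {**r, "paid": True}),    # normal path
                 Service(lambda r: r)))                     # fallback path
    print(order.invoke({"item": "album"}))                  # {'item': 'album', 'priced': True, 'paid': True}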

2018-02-21
Overbye, T. J., Mao, Z., Shetye, K. S., Weber, J. D..  2017.  An interactive, extensible environment for power system simulation on the PMU time frame with a cyber security application. 2017 IEEE Texas Power and Energy Conference (TPEC). :1–6.

Power system simulation environments with appropriate time-fidelity are needed to enable rapid testing of new smart grid technologies and for coupled simulations of the underlying cyber infrastructure. This paper presents such an environment which operates with power system models in the PMU time frame, including data visualization and interactive control action capabilities. The flexible and extensible capabilities are demonstrated by interfacing with a cyber infrastructure simulation.

2018-02-15
Yonetani, R., Boddeti, V. N., Kitani, K. M., Sato, Y..  2017.  Privacy-Preserving Visual Learning Using Doubly Permuted Homomorphic Encryption. 2017 IEEE International Conference on Computer Vision (ICCV). :2059–2069.

We propose a privacy-preserving framework for learning visual classifiers by leveraging distributed private image data. This framework is designed to aggregate multiple classifiers updated locally using private data and to ensure that no private information about the data is exposed during and after its learning procedure. We utilize a homomorphic cryptosystem that can aggregate the local classifiers while they are encrypted and thus kept secret. To overcome the high computational cost of homomorphic encryption of high-dimensional classifiers, we (1) impose sparsity constraints on local classifier updates and (2) propose a novel efficient encryption scheme named doubly-permuted homomorphic encryption (DPHE) which is tailored to sparse high-dimensional data. DPHE (i) decomposes sparse data into its constituent non-zero values and their corresponding support indices, (ii) applies homomorphic encryption only to the non-zero values, and (iii) employs double permutations on the support indices to make them secret. Our experimental evaluation on several public datasets shows that the proposed approach achieves comparable performance against state-of-the-art visual recognition methods while preserving privacy and significantly outperforms other privacy-preserving methods.
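A simplified sketch of the DPHE idea follows, using the third-party phe (Paillier) library: only the non-zero values of a sparse update are encrypted, and their support indices are hidden behind a secret permutation. The paper uses two permutations (hence "doubly permuted"); only one is shown here, and the key size and dimensions are illustrative assumptions.

    # Minimal sketch (assumptions noted above): encrypt only non-zero values, permute the support.
    import numpy as np
    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

    def encrypt_sparse_update(w, pi):
        # w: high-dimensional sparse weight update; pi: secret permutation of all coordinates.
        support = np.flatnonzero(w)                          # positions of the non-zero values
        enc_values = [public_key.encrypt(float(w[i])) for i in support]
        hidden_support = pi[support]                         # permuted (secret) support indices
        return hidden_support, enc_values

    rng = np.random.default_rng(0)
    pi = rng.permutation(10_000)                             # shared secret permutation
    w = np.zeros(10_000)
    w[[3, 42, 999]] = [0.5, -1.2, 0.7]
    hidden_idx, ciphertexts = encrypt_sparse_update(w, pi)
    total = ciphertexts[0] + ciphertexts[1]                  # additively homomorphic aggregation
    print(private_key.decrypt(total))                        # approximately -0.7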

2018-02-14
Kulyk, O., Reinheimer, B. M., Gerber, P., Volk, F., Volkamer, M., Mühlhäuser, M..  2017.  Advancing Trust Visualisations for Wider Applicability and User Acceptance. 2017 IEEE Trustcom/BigDataSE/ICESS. :562–569.
There are only a few visualisations targeting the communication of trust statements. Even though there are some advanced and scientifically founded visualisations, such as the opinion triangle, the human trust interface, and T-Viz, the stars interface known from e-commerce platforms is by far the most common one. In this paper, we propose two trust visualisations based on T-Viz, which was recently proposed and successfully evaluated in large user studies. Despite being the most promising proposal, its design is not primarily based on findings from human-computer interaction or cognitive psychology. Our visualisations aim to integrate such findings and to potentially improve decision making in terms of correctness and efficiency. A large user study reveals that our proposed visualisations outperform T-Viz in these factors.
2018-02-06
Gavgani, M. H., Eftekharnejad, S..  2017.  A Graph Model for Enhancing Situational Awareness in Power Systems. 2017 19th International Conference on Intelligent System Application to Power Systems (ISAP). :1–6.

As societies are becoming more dependent on the power grids, the security issues and blackout threats are more emphasized. This paper proposes a new graph model for online visualization and assessment of power grid security. The proposed model integrates topology and power flow information to estimate and visualize interdependencies between the lines in the form of line dependency graph (LDG) and immediate threats graph (ITG). These models enable the system operator to predict the impact of line outage and identify the most vulnerable and critical links in the power system. Line Vulnerability Index (LVI) and Line Criticality Index (LCI) are introduced as two indices extracted from LDG to aid the operator in decision making and contingency selection. This package can be useful in enhancing situational awareness in power grid operation by visualization and estimation of system threats. The proposed approach is tested for security analysis of IEEE 30-bus and IEEE 118-bus systems and the results are discussed.
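One way to picture the line dependency graph (LDG) and the two indices is the sketch below, which assumes line outage distribution factors (LODFs) as the coupling measure and simple weighted in-/out-degree sums as proxies for LVI and LCI; these are assumptions made for illustration, not the paper's exact definitions.

    # Minimal sketch (assumptions noted above): directed edge i -> j means an outage of
    # line i significantly increases the loading of line j.
    import networkx as nx

    def build_ldg(lines, lodf, flow, limit, threshold=0.1):
        # lodf[(j, i)]: change in flow on line j per unit pre-outage flow on outaged line i.
        g = nx.DiGraph()
        g.add_nodes_from(lines)
        for i in lines:
            for j in lines:
                if i == j:
                    continue
                post = flow[j] + lodf[(j, i)] * flow[i]      # flow on j after the outage of i
                overload = max(post / limit[j] - 1.0, 0.0)
                if overload > threshold:
                    g.add_edge(i, j, weight=overload)
        return g

    def line_criticality_index(g, i):
        # How strongly an outage of line i threatens the rest of the system (LCI proxy).
        return sum(d["weight"] for _, _, d in g.out_edges(i, data=True))

    def line_vulnerability_index(g, j):
        # How exposed line j is to outages of other lines (LVI proxy).
        return sum(d["weight"] for _, _, d in g.in_edges(j, data=True))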

Huang, Lulu, Matwin, Stan, de Carvalho, Eder J., Minghim, Rosane.  2017.  Active Learning with Visualization for Text Data. Proceedings of the 2017 ACM Workshop on Exploratory Search and Interactive Data Analytics. :69–74.

Labeled datasets are always limited, and oftentimes the quantity of labeled data is a bottleneck for data analytics. This especially affects supervised machine learning methods, which require labeled data for models to learn from. Active learning algorithms have been proposed to help achieve good analytic models with limited labeling effort, by determining which additional instance labels will be most beneficial for learning for a given model. Active learning is consistent with interactive analytics as it proceeds in a cycle in which the unlabeled data is automatically explored. However, in active learning users have no control over the instances to be labeled, and for text data the annotation interface usually shows only the document. Both of these constraints seem to affect the performance of an active learning model. We hypothesize that visualization techniques, particularly interactive ones, will help to address these constraints. In this paper, we implement a pilot study of visualization in active learning for text classification, with an interactive labeling interface. We compare the results of three experiments. Early results indicate that visualization improves high-performance machine learning model building with an active learning algorithm.
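The underlying active-learning loop can be sketched as plain uncertainty sampling for text classification; the interactive visualization and labeling interface that the paper studies is not reproduced here, and the vectoriser, model, and query strategy are illustrative assumptions.

    # Minimal sketch (assumptions noted above): pool-based uncertainty sampling for text.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    def active_learning(texts, oracle, seed_idx, rounds=10, batch=5):
        # texts: list of documents; oracle(i) returns the label of document i (a human annotator).
        X = TfidfVectorizer(max_features=5000).fit_transform(texts)
        labels = {i: oracle(i) for i in seed_idx}            # labels gathered so far
        for _ in range(rounds):
            known = sorted(labels)
            clf = LogisticRegression(max_iter=1000).fit(X[known], [labels[i] for i in known])
            pool = [i for i in range(len(texts)) if i not in labels]
            if not pool:
                break
            proba = clf.predict_proba(X[pool])
            uncertainty = 1.0 - proba.max(axis=1)            # least-confident sampling
            for j in np.argsort(-uncertainty)[:batch]:       # query the most uncertain documents
                labels[pool[j]] = oracle(pool[j])
            # (The paper adds an interactive visualization here so users can also pick instances.)
        return clf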

2018-01-23
McDuff, D., Soleymani, M..  2017.  Large-scale Affective Content Analysis: Combining Media Content Features and Facial Reactions. 2017 12th IEEE International Conference on Automatic Face Gesture Recognition (FG 2017). :339–345.

We present a novel multimodal fusion model for affective content analysis, combining visual, audio and deep visual-sentiment descriptors from the media content with automated facial action measurements from naturalistic responses to the media. We collected a dataset of 48,867 facial responses to 384 media clips and extracted a rich feature set from the facial responses and media content. The stimulus videos were validated to be informative, inspiring, persuasive, sentimental or amusing. By combining the features, we were able to obtain a classification accuracy of 63% (weighted F1-score: 0.62) for a five-class task. This was a significant improvement over using the media content features alone. By analyzing the feature sets independently, we found that states of informed and persuaded were difficult to differentiate from facial responses alone due to the presence of similar sets of action units in each state (AU 2 occurring frequently in both cases). Facial actions were beneficial in differentiating between amused and informed states whereas media content features alone performed less well due to similarities in the visual and audio make up of the content. We highlight examples of content and reactions from each class. This is the first affective content analysis based on reactions of 10,000s of people.

2018-01-10
Vincur, J., Navrat, P., Polasek, I..  2017.  VR City: Software Analysis in Virtual Reality Environment. 2017 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :509–516.
This paper presents a software visualization tool that utilizes a modified city metaphor to represent a software system and related analysis data in a virtual reality environment. To better address all three kinds of software aspects, we propose a new layouting algorithm that provides a higher level of detail and positions the buildings according to the coupling between the classes that they represent. The resulting layout allows us to visualize software metrics and source code modifications at the granularity of methods, visualize method invocations involved in program execution, and support remodularization analysis. To further reduce the cognitive load and increase the efficiency of 3D visualization, we allow users to observe and interact with our city in an immersive virtual reality environment that also provides a source code browsing feature. We demonstrate the use of our approach on two open-source systems.

Ouali, C., Dumouchel, P., Gupta, V..  2017.  Robust video fingerprints using positions of salient regions. 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :3041–3045.

This paper describes a video fingerprinting system that is highly robust to audio and video transformations. The proposed system adapts a robust audio fingerprint extraction approach to video fingerprinting. The audio fingerprinting system converts the spectrogram into binary images, and then encodes the positions of salient regions selected from each binary image. Visual features are extracted in a similar way from the video images. We propose two visual fingerprint generation methods where fingerprints encode the positions of salient regions of greyscale video images. Salient regions of the first method are selected based on the intensity values of the image, while the second method identifies the regions that represent the highest variations between two successive images. The similarity between two fingerprints is defined as the intersection between their elements. The search algorithm is sped up by an efficient implementation on a Graphics Processing Unit (GPU). We evaluate the performance of the proposed video system on the TRECVID 2009 and 2010 datasets, and we show that this system achieves promising results and outperforms other state-of-the-art video copy detection methods for queries that do not include geometric transformations. In addition, we show the effectiveness of this system for a challenging audio+video copy detection task.
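The position-based fingerprints can be sketched as follows: each grey-scale or frame-difference image keeps the coordinates of its k most salient pixels as a set, and two fingerprints are compared by set intersection. The value of k and the brightest-pixel notion of saliency are simplifying assumptions; the paper selects salient regions from binarised images rather than raw pixels.

    # Minimal sketch (assumptions noted above): fingerprints as sets of salient positions.
    import numpy as np

    def frame_fingerprint(image, k=64):
        flat = np.asarray(image, dtype=float).ravel()
        top = np.argpartition(flat, -k)[-k:]                 # indices of the k most salient pixels
        return set(int(i) for i in top)                      # positions encode the frame

    def difference_fingerprint(prev, curr, k=64):
        # Second variant: salient regions of the variation between two successive frames.
        return frame_fingerprint(np.abs(curr.astype(float) - prev.astype(float)), k)

    def similarity(fp_a, fp_b):
        return len(fp_a & fp_b)                              # intersection of position sets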
2017-12-28
Chowdhary, A., Dixit, V. H., Tiwari, N., Kyung, S., Huang, D., Ahn, G. J..  2017.  Science DMZ: SDN based secured cloud testbed. 2017 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN). :1–2.

Software Defined Networking (SDN) presents a unique opportunity to manage and orchestrate cloud networks. Educational institutions, like many other industries, face a lot of security threats. We have established an SDN-enabled Demilitarized Zone (DMZ), Science DMZ, to serve as a testbed for securing the ASU Internet2 environment. Science DMZ allows researchers to conduct in-depth analysis of security attacks and take necessary countermeasures using an SDN-based command and control (C&C) center. Demo URL: https://www.youtube.com/watch?v=8yo2lTNV3r4

2017-12-20
Park, A. J., Quadari, R. N., Tsang, H. H..  2017.  Phishing website detection framework through web scraping and data mining. 2017 8th IEEE Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON). :680–684.

Phishers often exploit users' trust in the appearance of a site by using webpages that are visually similar to an authentic site. In the past, various research studies have tried to identify and classify the factors contributing to the detection of phishing websites. The focus of this research is to establish a strong relationship between those identified (content-based) heuristics and the legitimacy of a website by analyzing training sets of websites (both phishing and legitimate) and, in the process, to analyze new patterns and report findings. Many existing phishing detection tools are often not very accurate because they depend mostly on old databases of previously identified phishing websites. However, thousands of new phishing websites appear every year targeting financial institutions, cloud storage/file hosting sites, government websites, and others. This paper presents a framework called Phishing-Detective that detects phishing websites based on existing and newly found heuristics. For this framework, a web crawler was developed to scrape the contents of phishing and legitimate websites. These contents were analyzed to rate the heuristics and their contribution scale factor towards the illegitimacy of a website. The data set collected by the web scraper was then analyzed using a data mining tool to find patterns and report findings. A case study shows how this framework can be used to detect a phishing website. This research is still in progress but shows a new way of finding and using heuristics and the sum of their contributing weights to effectively and accurately detect phishing websites. Further development of this framework is discussed at the end of the paper.
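A toy sketch of weighted content heuristics of this kind is shown below; the heuristic names, weights, and threshold are invented for illustration and are not the ones rated by the Phishing-Detective framework.

    # Minimal sketch (assumptions noted above): sum of weighted content-based heuristics.
    HEURISTIC_WEIGHTS = {
        "external_form_action": 3.0,     # login form posts to a different domain
        "ip_address_url": 2.5,           # raw IP address instead of a hostname
        "many_external_scripts": 1.5,
        "brand_in_subdomain_only": 2.0,
        "no_https": 1.0,
    }

    def phishing_score(page_features):
        # page_features: dict of heuristic name -> bool, produced by the web scraper.
        return sum(w for name, w in HEURISTIC_WEIGHTS.items() if page_features.get(name))

    def is_phishing(page_features, threshold=4.0):
        return phishing_score(page_features) >= threshold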

2017-12-12
Hellmann, B., Ahlers, V., Rodosek, G. D..  2017.  Integrating visual analysis of network security and management of detection system configurations. 2017 9th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS). 2:1020–1025.

A problem in managing today's ever-growing computer networks is analyzing the events detected by intrusion detection systems and classifying whether an event was correctly detected or not. When a false positive is detected by the user, changes to the configuration must be made and evaluated before they can be adopted for productive use. This paper describes an approach for a visual analysis framework that integrates the monitoring and analysis of events and the resulting changes to the configuration of detection systems after finding false alarms, together with a preliminary simulation and evaluation of the changes.

2017-12-04
Mayer, N., Feltus, C..  2017.  Evaluation of the risk and security overlay of ArchiMate to model information system security risks. 2017 IEEE 21st International Enterprise Distributed Object Computing Workshop (EDOCW). :106–116.

We evaluated the support proposed by the RSO to represent graphically our EAM-ISSRM (Enterprise Architecture Management - Information System Security Risk Management) integrated model. The evaluation of the RSO visual notation has been done at two different levels: completeness with regard to the EAM-ISSRM integrated model (Section III) and cognitive effectiveness, relying on the nine principles established by D. Moody ["The 'Physics' of Notations: Toward a Scientific Basis for Constructing Visual Notations in Software Engineering," IEEE Trans. Softw. Eng., vol. 35, no. 6, pp. 756-779, Nov. 2009] (Section IV). Regarding completeness, the coverage of the EAM-ISSRM integrated model by the RSO is complete apart from 'Event'. As discussed in Section III, this lack is negligible and we can consider the RSO an appropriate notation to support the EAM-ISSRM integrated model from a completeness point of view. Regarding cognitive effectiveness, many gaps have been identified with regard to the nine principles established by Moody. Although no quantitative analysis has been performed to objectify this conclusion, the RSO cannot reasonably be considered an appropriate notation from a cognitive effectiveness point of view, and there is room to propose a notation that is better in this respect. This paper is focused on assessing the RSO without suggesting improvements based on the conclusions drawn. As a consequence, our objective for future work is to propose a more cognitively effective visual notation for the EAM-ISSRM integrated model. The approach currently considered is to operationalize Moody's principles into concrete metrics and requirements, taking into account the needs and profile of the target group of our notation (information security risk managers) through persona development and a user experience map. With such an approach, we will be able to make decisions on the necessary trade-offs about our visual syntax, taking care of a specific context. We also aim at validating our proposal(s) with the help of tools and approaches extracted from cognitive psychology research applied to the HCI domain (e.g., eye tracking, heuristic evaluation, user experience evaluation…).

2017-09-15
Wang, Gang, Zhang, Xinyi, Tang, Shiliang, Zheng, Haitao, Zhao, Ben Y..  2016.  Unsupervised Clickstream Clustering for User Behavior Analysis. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. :225–236.

Online services are increasingly dependent on user participation. Whether it's online social networks or crowdsourcing services, understanding user behavior is important yet challenging. In this paper, we build an unsupervised system to capture dominating user behaviors from clickstream data (traces of users' click events), and visualize the detected behaviors in an intuitive manner. Our system identifies "clusters" of similar users by partitioning a similarity graph (nodes are users; edges are weighted by clickstream similarity). The partitioning process leverages iterative feature pruning to capture the natural hierarchy within user clusters and produce intuitive features for visualizing and understanding captured user behaviors. For evaluation, we present case studies on two large-scale clickstream traces (142 million events) from real social networks. Our system effectively identifies previously unknown behaviors, e.g., dormant users, hostile chatters. Also, our user study shows people can easily interpret identified behaviors using our visualization tool.
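The similarity-graph step can be illustrated with a short sketch in which users are nodes, edges are weighted by clickstream similarity, and a modularity-based partition yields behaviour clusters. Jaccard similarity over click n-grams and greedy modularity clustering are stand-ins chosen here; the paper's iterative feature pruning and its exact similarity measure are not reproduced.

    # Minimal sketch (assumptions noted above): similarity graph over users, then graph clustering.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    def ngrams(events, n=2):
        return {tuple(events[i:i + n]) for i in range(len(events) - n + 1)}

    def similarity_graph(clickstreams, min_sim=0.2, n=2):
        # clickstreams: dict mapping user id -> ordered list of click event types.
        grams = {u: ngrams(ev, n) for u, ev in clickstreams.items()}
        g = nx.Graph()
        g.add_nodes_from(grams)
        users = list(grams)
        for i, u in enumerate(users):
            for v in users[i + 1:]:
                union = len(grams[u] | grams[v]) or 1
                sim = len(grams[u] & grams[v]) / union       # Jaccard similarity of behaviours
                if sim >= min_sim:
                    g.add_edge(u, v, weight=sim)
        return g

    def behaviour_clusters(g):
        return list(greedy_modularity_communities(g, weight="weight"))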

2017-03-20
Shi, Yang, Zhang, Yaoxue, Zhou, Fangfang, Zhao, Ying, Wang, Guojun, Shi, Ronghua, Liang, Xing.  2016.  IDSPlanet: A Novel Radial Visualization of Intrusion Detection Alerts. Proceedings of the 9th International Symposium on Visual Information Communication and Interaction. :25–29.

In this article, we present a novel radial visualization of IDS alerts, named IDSPlanet, which helps administrators identify false positives, analyze attack patterns, and understand evolving network conditions. Inspired by celestial bodies, IDSPlanet is composed of Chrono Rings, Alert Continents, and Interactive Core. These components correspond with temporal features of alert types, patterns of behavior in affected hosts, and correlations amongst alert types, attackers and targets. The visualization provides an informative picture for the status of the network. In addition, IDSPlanet offers different interactions and monitoring modes, which allow users to interact with high-interest individuals in detail as well as to explore overall pattern.

2017-03-08
Tsao, Chia-Chin, Chen, Yan-Ying, Hou, Yu-Lin, Hsu, Winston H..  2015.  Identify Visual Human Signature in community via wearable camera. 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :2229–2233.

With the increasing popularity of wearable devices, information becomes much more easily available. However, personal information sharing still poses great challenges because of privacy issues. We propose the idea of a Visual Human Signature (VHS), which can represent each person uniquely even when captured in different views/poses by wearable cameras. We evaluate the performance of multiple effective modalities for recognizing an identity, including facial appearance, visual patches, facial attributes, and clothing attributes. We propose to emphasize significant dimensions and perform weighted voting fusion to incorporate the modalities and improve VHS recognition. By jointly considering multiple modalities, the VHS recognition rate can reach 51% on frontal images and 48% in the more challenging environment, and our approach surpasses the average-fusion baseline by 25% and 16%, respectively. We also introduce the Multiview Celebrity Identity Dataset (MCID), a new dataset containing hundreds of identities with different views and clothing for comprehensive evaluation.
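The weighted voting fusion can be pictured with a small sketch; the modality names, weights, and scores below are invented for illustration (the paper learns per-dimension significance, which is collapsed to one scalar weight per modality here).

    # Minimal sketch (assumptions noted above): weighted fusion of per-modality identity scores.
    import numpy as np

    def fuse_scores(scores, weights):
        # scores: dict of modality -> list of similarity scores over candidate identities
        # weights: dict of modality -> non-negative weight for that modality
        fused = sum(weights[m] * np.asarray(s, dtype=float) for m, s in scores.items())
        return int(np.argmax(fused))                         # index of the predicted identity

    scores = {"face": [0.2, 0.7, 0.1], "clothing": [0.3, 0.4, 0.3], "patches": [0.1, 0.6, 0.3]}
    weights = {"face": 0.5, "clothing": 0.2, "patches": 0.3}
    predicted = fuse_scores(scores, weights)                 # -> 1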

Chriskos, P., Zoidi, O., Tefas, A., Pitas, I..  2015.  De-identifying facial images using projections on hyperspheres. 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG). 04:1–6.

A major issue that arises from mass visual media distribution in modern video sharing, social media, and cloud services is the issue of privacy. Malicious users can use these services to track the actions of certain individuals and/or groups, thus violating their privacy. As a result, the need arises to hinder automatic facial image identification in images and videos. In this paper we propose a method for de-identifying facial images. Contrary to most de-identification methods, this method manipulates facial images so that humans can still recognize the individual or individuals in an image or video frame, but at the same time common automatic identification algorithms fail to do so. This is achieved by projecting the facial images onto a hypersphere. The conducted experiments verify that this method is effective in reducing the classification accuracy to under 10%. Furthermore, in the resulting images the subject can still be identified by human viewers.
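The projection step can be sketched as mapping a flattened facial image to the nearest point on a hypersphere of radius r around a chosen centre; the centre (e.g., a mean face) and radius used below are illustrative assumptions rather than the paper's settings.

    # Minimal sketch (assumptions noted above): project a face image onto a hypersphere.
    import numpy as np

    def project_on_hypersphere(image, centre, radius):
        x = image.astype(float).ravel()
        d = x - centre
        norm = np.linalg.norm(d) or 1.0
        projected = centre + radius * d / norm               # nearest point on the sphere
        return np.clip(projected, 0, 255).reshape(image.shape).astype(np.uint8)

    # Example usage (hypothetical data): centre at the mean face of a gallery of aligned faces.
    # gallery: array of shape (n_faces, h, w); face: array of shape (h, w)
    # centre = gallery.reshape(len(gallery), -1).mean(axis=0)
    # deidentified = project_on_hypersphere(face, centre, radius=2000.0)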

Nakashima, Y., Koyama, T., Yokoya, N., Babaguchi, N..  2015.  Facial expression preserving privacy protection using image melding. 2015 IEEE International Conference on Multimedia and Expo (ICME). :1–6.

An enormous number of images are currently shared through social networking services such as Facebook. These images usually contain the appearance of people and may violate their privacy if they are published without permission from each person. To remedy this privacy concern, visual privacy protection, such as blurring, is applied to the facial regions of people who have not given permission. However, in addition to degrading image quality, this may spoil the context of the image: if some people are filtered while others are not, the missing facial expressions make comprehension of the image difficult. This paper proposes an image melding-based method that modifies facial regions in a visually unintrusive way while preserving facial expression. Our experimental results demonstrate that the proposed method can retain facial expression while protecting privacy.