Biblio

Filters: Keyword is attribution
2022-01-25
Sun, Hao, Xu, Yanjie, Kuang, Gangyao, Chen, Jin.  2021.  Adversarial Robustness Evaluation of Deep Convolutional Neural Network Based SAR ATR Algorithm. 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS. :5263–5266.
Robustness, both to accidental and to malevolent perturbations, is a crucial determinant of the successful deployment of deep convolutional neural network based SAR ATR systems in various security-sensitive applications. This paper performs a detailed adversarial robustness evaluation of deep convolutional neural network based SAR ATR models across two publicly available SAR target recognition datasets. For each model, seven different adversarial perturbations, ranging from gradient-based optimization to self-supervised feature distortion, are generated for each testing image. Besides adversarial average recognition accuracy, feature attribution techniques are also adopted to analyze the feature diffusion effect of adversarial attacks, which promotes the understanding of the vulnerability of deep learning models.
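To make the evaluation protocol concrete, here is a minimal sketch of an accuracy-under-attack loop using one of the gradient-based perturbations mentioned above (FGSM), written in PyTorch; the model, loader, and epsilon value are illustrative assumptions rather than the authors' exact setup.

```python
import torch
import torch.nn.functional as F

def accuracy_under_fgsm(model, loader, epsilon=0.03, device="cpu"):
    """Recognition accuracy when every test image is perturbed by FGSM."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x.requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad, = torch.autograd.grad(loss, x)
        x_adv = (x + epsilon * grad.sign()).clamp(0, 1)   # one-step gradient attack
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```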
Islam, Muhammad Aminul, Veal, Charlie, Gouru, Yashaswini, Anderson, Derek T..  2021.  Attribution Modeling for Deep Morphological Neural Networks using Saliency Maps. 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
Mathematical morphology has been explored in deep learning architectures, as a substitute for convolution, for problems like pattern recognition and object detection. One major advantage of using morphology in deep learning is the utility of morphological erosion and dilation. Specifically, these operations naturally embody interpretability due to their underlying connections to the analysis of geometric structures. While the use of these operations results in explainable learned filters, morphological deep learning lacks attribution modeling, i.e., a paradigm to specify what areas of the original observed image are important. In contrast, convolution-based deep learning has achieved attribution modeling through a variety of neural eXplainable Artificial Intelligence (XAI) paradigms (e.g., saliency maps, integrated gradients, guided backpropagation, and gradient class activation mapping). Thus, a problem for morphology-based deep learning is that these XAI methods do not have a morphological interpretation due to the differences in the underlying mathematics. Herein, we extend the neural XAI paradigm of saliency maps to morphological deep learning, and by doing so, provide an example of morphological attribution modeling. Furthermore, our qualitative results highlight some advantages of using morphological attribution modeling.
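To ground the two ingredients the abstract combines, here is a small PyTorch sketch of a grayscale dilation layer (a max-plus filter) and a vanilla saliency map computed through it by autograd; the single-channel input, odd kernel size, and plain gradient saliency are simplifying assumptions, not the paper's exact construction.

```python
import torch
import torch.nn.functional as F

def dilation2d(x, w):
    """Grayscale dilation of x (B, 1, H, W) by structuring element w (k, k), k odd."""
    k = w.shape[-1]
    patches = F.unfold(x, k, padding=k // 2)             # (B, k*k, H*W) sliding windows
    out = (patches + w.flatten().view(1, -1, 1)).max(dim=1).values
    return out.view(x.shape[0], 1, *x.shape[-2:])        # max-plus instead of multiply-add

def morph_saliency(x, w):
    """Vanilla saliency through the morphological layer: |d output / d input|."""
    x = x.clone().requires_grad_(True)
    dilation2d(x, w).sum().backward()
    return x.grad.abs()
```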
Ryabko, Boris, Savina, Nadezhda.  2021.  Development of an information-theoretical method of attribution of literary texts. 2021 XVII International Symposium "Problems of Redundancy in Information and Control Systems" (REDUNDANCY). :70–73.
We propose a method for the attribution of literary texts, developed within the framework of information theory and mathematical statistics. Using the proposed method, the following two problems of disputed authorship in Russian and Soviet literature were investigated: i) the problem of false attribution of some novels to Nekrasov, and ii) the problem of dubious attribution of two novels to Bulgakov. The research has shown the high efficiency of the data-compression method for the attribution of literary texts.
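The data-compression idea is compact enough to sketch: attribute a disputed text to the candidate whose corpus yields the smallest increase in compressed size when the text is appended. The zlib compressor and this particular scoring rule are illustrative stand-ins, not the authors' exact construction.

```python
import zlib

def compressed_size(text: str) -> int:
    return len(zlib.compress(text.encode("utf-8"), 9))

def attribute_text(disputed: str, corpora: dict) -> str:
    """Pick the author whose corpus 'predicts' the disputed text best, i.e.,
    whose compressed size grows least when the disputed text is appended."""
    scores = {author: compressed_size(corpus + disputed) - compressed_size(corpus)
              for author, corpus in corpora.items()}
    return min(scores, key=scores.get)
```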
Nakhodchi, Sanaz, Zolfaghari, Behrouz, Yazdinejad, Abbas, Dehghantanha, Ali.  2021.  SteelEye: An Application-Layer Attack Detection and Attribution Model in Industrial Control Systems using Semi-Deep Learning. 2021 18th International Conference on Privacy, Security and Trust (PST). :1–8.
The security of Industrial Control Systems is of high importance, as they play a critical role in the uninterrupted services provided by Critical Infrastructure operators. Due to the large number of devices and their geographical distribution, Industrial Control Systems need efficient automatic cyber-attack detection and attribution methods, which calls for AI-based approaches. This paper proposes a model called SteelEye, based on Semi-Deep Learning, for accurate detection and attribution of cyber-attacks at the application layer in industrial control systems. The proposed model relies on a Bag-of-Features representation for accurate detection of cyber-attacks and utilizes Categorical Boosting as the base predictor for attack attribution. Empirical results demonstrate that SteelEye remarkably outperforms state-of-the-art cyber-attack detection and attribution methods in terms of accuracy, precision, recall, and F1-score.
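As a hedged sketch of the pipeline shape described here, a bag-of-features front end feeding a Categorical Boosting predictor, the following uses scikit-learn's CountVectorizer as a stand-in feature extractor; SteelEye's actual features and hyperparameters are not specified in the abstract.

```python
from sklearn.feature_extraction.text import CountVectorizer
from catboost import CatBoostClassifier  # pip install catboost

def train_steeleye_like(payloads, labels):
    """payloads: raw application-layer messages; labels: attack class / actor."""
    bag = CountVectorizer(analyzer="char", ngram_range=(1, 2))  # stand-in bag of features
    X = bag.fit_transform(payloads)
    clf = CatBoostClassifier(iterations=200, verbose=False)
    clf.fit(X, labels)
    return bag, clf
```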
Marulli, Fiammetta, Balzanella, Antonio, Campanile, Lelio, Iacono, Mauro, Mastroianni, Michele.  2021.  Exploring a Federated Learning Approach to Enhance Authorship Attribution of Misleading Information from Heterogeneous Sources. 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
Authorship Attribution (AA) is currently applied in several applications, including fraud detection and anti-plagiarism checks; the task can leverage stylometry and Natural Language Processing techniques. In this work, we explored some strategies to enhance the performance of an AA task for the automatic detection of false and misleading information (e.g., fake news). We set up a text classification model for AA based on stylometry, exploiting recurrent deep neural networks, and implemented two learning tasks trained on the same collection of fake and real news, comparing their performance: one based on a Federated Learning architecture, the other on a centralized architecture. The goal was to discriminate potentially fake information from true information when the fake news comes from heterogeneous sources with different styles. Preliminary experiments show that the distributed approach significantly improves recall with respect to the centralized model. As expected, precision was lower in the distributed model. This aspect, coupled with the statistical heterogeneity of the data, represents an open issue that will be further investigated in future work.
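The federated half of such a comparison reduces, in its simplest form, to federated averaging of client updates. A minimal NumPy sketch under that assumption (the authors' recurrent model and training details are not reproduced here):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One FedAvg round: size-weighted average of per-client parameter arrays.
    client_weights: list (one entry per client) of lists of np.ndarray."""
    total = float(sum(client_sizes))
    return [sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
            for i in range(len(client_weights[0]))]
```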
Goh, Gary S. W., Lapuschkin, Sebastian, Weber, Leander, Samek, Wojciech, Binder, Alexander.  2021.  Understanding Integrated Gradients with SmoothTaylor for Deep Neural Network Attribution. 2020 25th International Conference on Pattern Recognition (ICPR). :4949–4956.
Integrated Gradients, as an attribution method for deep neural network models, is simple to implement. However, it suffers from noisy explanations, which hampers interpretability. The SmoothGrad technique was proposed to solve this noisiness issue and smoothen the attribution maps of any gradient-based attribution method. In this paper, we present SmoothTaylor, a novel theoretical concept bridging Integrated Gradients and SmoothGrad from the perspective of Taylor's theorem. We apply the methods to the image classification problem, using the ILSVRC2012 ImageNet object recognition dataset and a couple of pretrained image models to generate attribution maps. These attribution maps are empirically evaluated using quantitative measures of sensitivity and noise level. We further propose adaptive noising to optimize the noise-scale hyperparameter. From our experiments, we find that the SmoothTaylor approach together with adaptive noising generates better-quality saliency maps, with less noise and higher sensitivity to the relevant points in the input space, compared to Integrated Gradients.
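The SmoothGrad idea that SmoothTaylor generalizes is easy to sketch: average gradient attributions over Gaussian-perturbed copies of the input. PyTorch version; sigma is the noise-scale hyperparameter that the paper's adaptive noising tunes, and the default values below are placeholders.

```python
import torch

def smoothgrad(model, x, target, n_samples=50, sigma=0.15):
    """Average input-gradient maps over noisy copies of x (shape: (1, C, H, W))."""
    total = torch.zeros_like(x)
    for _ in range(n_samples):
        noisy = (x + sigma * torch.randn_like(x)).requires_grad_(True)
        score = model(noisy)[0, target]
        grad, = torch.autograd.grad(score, noisy)
        total += grad
    return total / n_samples
```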
Jha, Ashish, Novikova, Evgeniya S., Tokarev, Dmitry, Fedorchenko, Elena V..  2021.  Feature Selection for Attacker Attribution in Industrial Automation & Control Systems. 2021 IV International Conference on Control in Technical Systems (CTS). :220–223.
Modern Industrial Automation & Control Systems (IACS) are an essential part of critical infrastructures and services. They are used in health, power, water, and transportation systems, and the impact of cyberattacks on IACS could be severe, resulting, for example, in damage to the environment or harm to public or employee safety and health. Thus, making IACS safe and secure against cyberattacks is extremely important. The attacker model is one of the key elements in risk assessment and other security-related information system management tasks. The aim of the study is to specify the attacker's profile based on the analysis of network and system events. The paper presents an approach to the selection of attacker-profile attributes from raw network and system events of the Linux OS. To evaluate the approach, experiments were performed on data collected during the Global CPTC 2019 competition.
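A minimal sketch of the kind of attribute-selection step described, using mutual information as the relevance score; the feature matrix, labels, and the choice of scoring function are illustrative assumptions.

```python
from sklearn.feature_selection import mutual_info_classif

def rank_profile_attributes(X, y, names):
    """Rank candidate attacker-profile attributes by mutual information with the label.
    X: (n_events, n_attributes) feature matrix; y: attacker/profile labels."""
    scores = mutual_info_classif(X, y, random_state=0)
    return sorted(zip(names, scores), key=lambda t: -t[1])
```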
Lee, Jungbeom, Yi, Jihun, Shin, Chaehun, Yoon, Sungroh.  2021.  BBAM: Bounding Box Attribution Map for Weakly Supervised Semantic and Instance Segmentation. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :2643–2651.
Weakly supervised segmentation methods using bounding box annotations focus on obtaining a pixel-level mask from each box containing an object. Existing methods typically depend on a class-agnostic mask generator, which operates on the low-level information intrinsic to an image. In this work, we utilize higher-level information from the behavior of a trained object detector, by seeking the smallest areas of the image from which the object detector produces almost the same result as it does from the whole image. These areas constitute a bounding-box attribution map (BBAM), which identifies the target object in its bounding box and thus serves as pseudo ground-truth for weakly supervised semantic and instance segmentation. This approach significantly outperforms recent comparable techniques on both the PASCAL VOC and MS COCO benchmarks in weakly supervised semantic and instance segmentation. In addition, we provide a detailed analysis of our method, offering deeper insight into the behavior of the BBAM.
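The core search the abstract describes, finding the smallest region from which the detector's output barely changes, can be sketched as a mask optimization; the sigmoid parameterization, L1 sparsity term, and scalar detector score below are illustrative simplifications, not BBAM's exact objective.

```python
import torch

def minimal_mask(score_fn, x, steps=200, lam=0.05, lr=0.1):
    """Learn a mask m so that score_fn(x * m) stays close to score_fn(x) while m is sparse.
    score_fn: maps an image tensor to a scalar detector output."""
    logits = torch.zeros_like(x, requires_grad=True)
    target = score_fn(x).detach()
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        m = torch.sigmoid(logits)
        loss = (score_fn(x * m) - target).pow(2) + lam * m.mean()  # fidelity + sparsity
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(logits).detach()     # high values mark the attributed region
```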
Taspinar, Samet, Mohanty, Manoranjan, Memon, Nasir.  2021.  Effect of Video Pixel-Binning on Source Attribution of Mixed Media. ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :2545–2549.
Photo Response Non-Uniformity (PRNU) noise obtained from images or videos is used as a camera fingerprint to attribute visual objects captured by a camera. The PRNU-based source attribution method, however, fails when there is misalignment between the fingerprint and the query object. One example of such a misalignment, which has been overlooked in the field, is caused by the in-camera resizing that a video may have been subjected to. This paper investigates the attribution of visual media in the context of matching a video query object to an image fingerprint, or vice versa. Specifically, this paper focuses on improving camera attribution performance by taking into account the effects of binning, a commonly used in-camera resizing technique applied to video. We experimentally show that the True Positive Rate (TPR) obtained when binning is taken into account is approximately 3% higher.
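A rough sketch of the PRNU matching step that underlies this kind of attribution: estimate a noise residual with a denoising filter, then compare residuals by normalized correlation. The Gaussian denoiser is a crude stand-in for the wavelet-based extractors used in practice, and the binning compensation studied in the paper is not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img):
    """Crude PRNU-style residual: image minus a denoised version of itself."""
    img = img.astype(np.float64)
    return img - gaussian_filter(img, sigma=1.0)

def normalized_corr(a, b):
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def same_camera(fingerprint, query_img, threshold=0.01):
    return normalized_corr(fingerprint, noise_residual(query_img)) > threshold
```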
2021-05-13
Wu, Xiaohe, Calderon, Juan, Obeng, Morrison.  2021.  Attribution Based Approach for Adversarial Example Generation. SoutheastCon 2021. :1–6.
Neural networks with deep architectures have been used to construct state-of-the-art classifiers that can match human-level accuracy in areas such as image classification. However, many of these classifiers can be fooled by examples slightly modified from their original forms. In this work, we propose a novel approach for generating adversarial examples that makes use only of attribution information for the features and perturbs only features that are highly influential to the output of the classifier. We call this approach Attribution Based Adversarial Generation (ABAG). To demonstrate the effectiveness of this approach, three somewhat arbitrary algorithms are proposed and examined. In the first algorithm, all non-zero attributions are utilized and the associated features perturbed; in the second algorithm, only the top-n most positive and top-n most negative attributions are used and the corresponding features perturbed; and in the third algorithm, the level of perturbation is increased iteratively until an adversarial example is discovered. All three algorithms are implemented, and experiments are performed on the well-known MNIST dataset. Experimental results show that adversarial examples can be generated very efficiently, proving the validity and efficacy of ABAG - utilizing attributions for the generation of adversarial examples. Furthermore, as shown by examples, ABAG can be adapted to provide a systematic search approach that generates adversarial examples by perturbing a minimal number of features.
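A hedged sketch of the third variant described above: rank features by attribution (plain gradient-times-input here, as a stand-in attribution method), perturb only the top-n, and increase the perturbation until the label flips. The step size and n are illustrative.

```python
import torch

def abag_iterative(model, x, y, n=20, step=0.05, max_iter=40):
    """x: input with batch dim (1, ...); y: true class. Perturb only the n
    highest-attribution features, growing the perturbation until misclassification."""
    x_req = x.clone().requires_grad_(True)
    grad, = torch.autograd.grad(model(x_req)[0, y], x_req)
    attr = (grad * x_req).detach().flatten()          # gradient-times-input attribution
    idx = attr.abs().topk(n).indices                  # most influential features
    x_adv = x.detach().flatten().clone()
    for _ in range(max_iter):
        x_adv[idx] -= step * attr[idx].sign()         # push against the class evidence
        if model(x_adv.view_as(x)).argmax(dim=1).item() != y:
            break                                     # adversarial example found
    return x_adv.view_as(x)
```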
Jaafar, Fehmi, Avellaneda, Florent, Alikacem, El-Hackemi.  2020.  Demystifying the Cyber Attribution: An Exploratory Study. 2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :35–40.
Current cyber attribution approaches use a variety of datasets and analytical techniques to distill information useful for identifying cyber attackers. At the same time, practitioners and researchers in cyber attribution face several technical and regulatory challenges. In this paper, we describe the main challenges of cyber attribution and survey the state-of-the-art approaches used to face these challenges. We then present an exploratory study that performs cyber attack attribution based on pattern recognition from real data. In our study, we use attack pattern discovery and identification based on real data collection and analysis.
Fernandes, Steven, Raj, Sunny, Ewetz, Rickard, Pannu, Jodh Singh, Kumar Jha, Sumit, Ortiz, Eddy, Vintila, Iustina, Salter, Margaret.  2020.  Detecting Deepfake Videos using Attribution-Based Confidence Metric. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :1250–1259.
Recent advances in generative adversarial networks have made detecting fake videos a challenging task. In this paper, we propose the application of the state-of-the-art attribution-based confidence (ABC) metric for detecting deepfake videos. The ABC metric does not require access to the training data or training a calibration model on the validation data. The ABC metric can be used to draw inferences even when only the trained model is available. Here, we utilize the ABC metric to characterize whether a video is original or fake. The deep learning model is trained only on original videos. The ABC metric uses the trained model to generate confidence values. For original videos, the confidence values are greater than 0.94.
Bansal, Naman, Agarwal, Chirag, Nguyen, Anh.  2020.  SAM: The Sensitivity of Attribution Methods to Hyperparameters. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :11–21.
Attribution methods can provide powerful insights into the reasons for a classifier's decision. We argue that a key desideratum of an explanation method is its robustness to input hyperparameters, which are often randomly set or empirically tuned. High sensitivity to arbitrary hyperparameter choices not only impedes reproducibility but also calls into question the correctness of an explanation and undermines the trust of end-users. In this paper, we provide a thorough empirical study of the sensitivity of existing attribution methods. We found an alarming trend: many methods are highly sensitive to changes in their common hyperparameters; even changing a random seed can yield a different explanation! Interestingly, such sensitivity is not reflected in the average explanation accuracy scores over the dataset as commonly reported in the literature. In addition, explanations generated for robust classifiers (i.e., those trained to be invariant to pixel-wise perturbations) are surprisingly more robust than those generated for regular classifiers.
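The paper's core measurement can be sketched in a few lines: produce two explanations that differ only in a random seed and compare them. Pearson correlation is one of several possible similarity measures, and explain_fn stands for any attribution method that consumes global NumPy randomness.

```python
import numpy as np

def seed_sensitivity(explain_fn, x, seeds=(0, 1)):
    """Similarity of two attribution maps that differ only in the random seed."""
    maps = []
    for s in seeds:
        np.random.seed(s)                 # the 'hyperparameter' being varied
        maps.append(np.asarray(explain_fn(x)).ravel())
    return float(np.corrcoef(maps[0], maps[1])[0, 1])  # 1.0 = perfectly stable
```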
Peck, Sarah Marie, Khan, Mohammad Maifi Hasan, Fahim, Md Abdullah Al, Coman, Emil N, Jensen, Theodore, Albayram, Yusuf.  2020.  Who Would Bob Blame? Factors in Blame Attribution in Cyberattacks Among the Non-Adopting Population in the Context of 2FA. 2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC). :778–789.
This study focuses on identifying the factors contributing to a sense of personal responsibility that could improve understanding of insecure cybersecurity behavior and guide research toward more effective messaging targeting non-adopting populations. Towards that, we ran a 2 (account type) × 2 (usage scenario) × 2 (message type) between-group study with 237 United States adult participants on Amazon MTurk, and investigated how the non-adopting population allocates blame, and under what circumstances they blame the end user, among the parties who hold responsibility: the software companies holding data, the attackers exposing data, and others. We find that users primarily hold service providers accountable for breaches, but feel those same companies should not enforce stronger security policies on users. Results indicate that people do hold end users accountable for their behavior in the event of a breach, especially when the users' behavior affects others. Implications of our findings for risk communication are discussed in the paper.
Song, Jie, Chen, Yixin, Ye, Jingwen, Wang, Xinchao, Shen, Chengchao, Mao, Feng, Song, Mingli.  2020.  DEPARA: Deep Attribution Graph for Deep Knowledge Transferability. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :3921–3929.
Exploring the intrinsic interconnections between the knowledge encoded in PRe-trained Deep Neural Networks (PR-DNNs) of heterogeneous tasks sheds light on their mutual transferability, and consequently enables knowledge transfer from one task to another so as to reduce the training effort of the latter. In this paper, we propose the DEeP Attribution gRAph (DEPARA) to investigate the transferability of knowledge learned from PR-DNNs. In DEPARA, nodes correspond to the inputs and are represented by their vectorized attribution maps with regard to the outputs of the PR-DNN. Edges denote the relatedness between inputs and are measured by the similarity of their features extracted from the PR-DNN. The knowledge transferability of two PR-DNNs is measured by the similarity of their corresponding DEPARAs. We apply DEPARA to two important yet under-studied problems in transfer learning: pre-trained model selection and layer selection. Extensive experiments are conducted to demonstrate the effectiveness and superiority of the proposed method in solving both problems. Code, data and models reproducing the results in this paper are available at https://github.com/zju-vipa/DEPARA.
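Following the abstract's description, a compressed sketch of the DEPARA comparison: the node term compares vectorized attribution maps across the two models, and the edge term compares the models' pairwise feature-affinity matrices. Cosine similarity and the equal-weight aggregation are assumptions for illustration.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def affinity(feats):
    """feats: (n_inputs, d) features from one model; edges = input-input relatedness."""
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)
    return f @ f.T

def depara_similarity(attr_a, feat_a, attr_b, feat_b):
    """attr_*: (n_inputs, p) vectorized attribution maps; feat_*: (n_inputs, d)."""
    node = np.mean([cosine(u, v) for u, v in zip(attr_a, attr_b)])
    edge = cosine(affinity(feat_a).ravel(), affinity(feat_b).ravel())
    return (node + edge) / 2.0            # illustrative equal-weight aggregation
```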
Kayes, A.S.M., Hammoudeh, Mohammad, Badsha, Shahriar, Watters, Paul A., Ng, Alex, Mohammed, Fatma, Islam, Mofakharul.  2020.  Responsibility Attribution Against Data Breaches. 2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT). :498–503.
Electronic crimes like data breaches in healthcare systems are often fundamental failures of access control mechanisms. Most current access control systems do not provide an accessible way to engage users in decision-making processes about who should have access to what data and when. We advocate that a policy ontology can contribute to the development of an effective access control system by attributing responsibility for data breaches. We propose a responsibility attribution model as a theoretical construct and discuss its implications by introducing a cost model for data breach countermeasures. Then, a policy ontology is presented to realize the proposed responsibility and cost models. An experimental study on the performance of the proposed framework is conducted with respect to a more generic access control framework. The practicality of the proposed solution is demonstrated through a case study from the healthcare domain.
Xu, Shawn, Venugopalan, Subhashini, Sundararajan, Mukund.  2020.  Attribution in Scale and Space. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :9677–9686.
We study the attribution problem for deep networks applied to perception tasks. For vision tasks, attribution techniques attribute the prediction of a network to the pixels of the input image. We propose a new technique called Blur Integrated Gradients (Blur IG). This technique has several advantages over other methods. First, it can tell at what scale a network recognizes an object. It produces scores in the scale/frequency dimension, which we find captures interesting phenomena. Second, it satisfies the scale-space axioms, which imply that it employs perturbations that are free of artifacts. We therefore produce explanations that are cleaner and consistent with the operation of deep networks. Third, it eliminates the need for the baseline parameter of Integrated Gradients for perception tasks. This is desirable because the choice of baseline has a significant effect on the explanations. We compare the proposed technique against previous techniques and demonstrate application on three tasks: ImageNet object recognition, Diabetic Retinopathy prediction, and AudioSet audio event identification. Code and examples are at https://github.com/PAIR-code/saliency.
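Under the usual Riemann-sum reading of path attributions, Blur IG can be approximated by walking from a heavily blurred image toward the original and accumulating gradient times the per-step image change. The Gaussian blur schedule and step count below are illustrative simplifications, not the paper's exact discretization.

```python
import torch
from torchvision.transforms.functional import gaussian_blur

def blur_ig(model, x, target, max_sigma=16.0, steps=20):
    """x: (1, C, H, W). Discrete path attribution from blurred(x) to x."""
    sigmas = [max_sigma * (1 - k / steps) for k in range(steps + 1)]   # sharp at the end
    path = [gaussian_blur(x, kernel_size=2 * int(3 * s) + 1, sigma=s) if s > 0 else x
            for s in sigmas]
    attr = torch.zeros_like(x)
    for prev, cur in zip(path[:-1], path[1:]):
        cur = cur.detach().requires_grad_(True)
        grad, = torch.autograd.grad(model(cur)[0, target], cur)
        attr += grad * (cur.detach() - prev)          # Riemann term along the blur path
    return attr
```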
Kumar, Sachin, Gupta, Garima, Prasad, Ranjitha, Chatterjee, Arnab, Vig, Lovekesh, Shroff, Gautam.  2020.  CAMTA: Causal Attention Model for Multi-touch Attribution. 2020 International Conference on Data Mining Workshops (ICDMW). :79–86.
Advertising channels have evolved from conventional print media, billboards, and radio advertising to online digital advertising (ad), where users are exposed to a sequence of ad campaigns via social networks, display ads, search, etc. While advertisers revisit the design of ad campaigns to concurrently serve the requirements emerging from new ad channels, it is also critical for advertisers to estimate the contribution from touch-points (views, clicks, conversions) on different channels, based on the sequence of customer actions. This process of contribution measurement is often referred to as multi-touch attribution (MTA). In this work, we propose CAMTA, a novel deep recurrent neural network architecture that provides a causal attribution mechanism for user-personalised MTA in the context of observational data. CAMTA minimizes the selection bias in channel assignment across time-steps and touch-points. Furthermore, it utilizes the users' pre-conversion actions in a principled way in order to predict per-channel attribution. To quantitatively benchmark the proposed MTA model, we employ the real-world Criteo dataset and demonstrate the superior prediction accuracy of CAMTA compared to several baselines. In addition, we provide results for budget allocation and user-behaviour modeling based on the predicted channel attribution.
S, Naveen, Puzis, Rami, Angappan, Kumaresan.  2020.  Deep Learning for Threat Actor Attribution from Threat Reports. 2020 4th International Conference on Computer, Communication and Signal Processing (ICCCSP). :1–6.
Threat actor attribution is the task of identifying the attacker responsible for an attack. This often requires expert analysis and involves a lot of time. There have been attempts to detect a threat actor using machine learning techniques on information obtained from the analysis of malware samples. Such techniques can only identify the attack; inferring the attacker from it is far from trivial, because several attackers may adopt the same attack method. A state-of-the-art method performs attribution of threat actors from Threat Intelligence text reports using machine learning and NLP techniques. We use the same set of threat reports on Advanced Persistent Threats (APT). In this paper, we propose a Deep Learning architecture to attribute threat actors based on threat reports obtained from various Threat Intelligence sources. Our work uses neural networks to perform the attribution task and shows that our method makes attribution more accurate than other techniques and state-of-the-art methods.
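For a sense of the task's shape, here is a hedged baseline that maps report text to an actor label; a TF-IDF plus logistic-regression pipeline stands in for the paper's deep architecture, whose details the abstract does not give.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def build_report_attributor():
    """Baseline report-text -> threat-actor classifier (stand-in for the deep model)."""
    return make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        LogisticRegression(max_iter=1000),
    )

# Hypothetical usage:
#   clf = build_report_attributor()
#   clf.fit(report_texts, actor_labels)
#   clf.predict(["Spear-phishing campaign against financial institutions ..."])
```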
2021-01-11
Kim, Y.-K., Lee, J. J., Go, M.-H., Lee, K..  2020.  Analysis of the Asymmetrical Relationships between State Actors and APT Threat Groups. 2020 International Conference on Information and Communication Technology Convergence (ICTC). :695–700.
During the Cold War era, countries in asymmetrical relationships often demonstrated how lower-tier nation states required the alliance and support of top-tier nation states. This statement no longer stands true, as countries such as North Korea have exploited global financial institutions through various malware such as WANNACRY V0, V1, V2, evtsys.exe, and the BRAMBUL WORM. Top-tier nation states such as the U.S. are unable to use diplomatic clout or to retaliate against the deferrer. Our study examined the affidavit filed against the North Korean hacker Park Jin Hyok, which was provided by the FBI. Our paper focuses on the operations and campaigns carried out by the Lazarus Group, concentrating on the key factors of the infrastructure and artifacts. Owing to its nature, deterrence in the cyber realm is far more complex than nuclear deterrence. We focused on the Sony Pictures Entertainment incident for our study. In this study, we discuss how cyber deterrence can be employed when different nation states share an asymmetrical relationship. Furthermore, we focus on contestability and attribution, key factors that make cyber deterrence difficult.
2020-08-28
Avellaneda, Florent, Alikacem, El-Hackemi, Jaafar, Fehmi.  2019.  Using Attack Pattern for Cyber Attack Attribution. 2019 International Conference on Cybersecurity (ICoCSec). :1–6.
A cyber attack is a malicious and deliberate attempt by an individual or organization to breach the integrity, confidentiality, and/or availability of the data or services of the information system of another individual or organization. Being able to attribute a cyber attack is crucial for security, but attribution is also known to be a difficult problem. The main reason there is currently no solution that automatically identifies the initiator of an attack is that attackers usually use proxies, i.e., intermediate nodes that relay a host over the network. In this paper, we propose to formalize the problem of identifying the initiator of a cyber attack. We show that if the attack scenario used by the attacker is known, then we are able to resolve the cyber attribution problem. Indeed, we propose a model to formalize these attack scenarios, which we call attack patterns, and give an efficient algorithm to search for attack patterns in a communication history. Finally, we experimentally show the relevance of our approach.
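A toy rendering of the search problem being formalized: given a communication history of (time, src, dst) events, follow relayed connections backward from the victim; hosts with no earlier inbound connection are candidate initiators. The chain-following rule is a simplification of the paper's attack-pattern model.

```python
def candidate_initiators(history, victim, window=60.0):
    """history: list of (time, src, dst) tuples. Walk proxy chains back from the victim."""
    frontier, initiators = {victim}, set()
    for t, src, dst in sorted(history, key=lambda e: e[0], reverse=True):
        if dst in frontier:
            relayed = any(d0 == src and t0 < t <= t0 + window
                          for t0, s0, d0 in history)
            (frontier if relayed else initiators).add(src)
    return initiators

# Hypothetical usage: candidate_initiators([(0.0, "A", "P"), (1.0, "P", "V")], "V")
# returns {"A"} once the chain A -> P -> V is unwound.
```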

Parafita, Álvaro, Vitrià, Jordi.  2019.  Explaining Visual Models by Causal Attribution. 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). :4167–4175.
Model explanations based on pure observational data cannot reliably compute the effects of features, due to their inability to estimate how altering each factor could affect the rest. We argue that explanations should be based on the causal model of the data and on derived intervened causal models, which represent the data distribution subject to interventions. With these models, we can compute counterfactuals: new samples that tell us how the model reacts to feature changes in our input. We propose a novel explanation methodology based on Causal Counterfactuals and identify the limitations of current Image Generative Models in their application to counterfactual creation.

Perry, Lior, Shapira, Bracha, Puzis, Rami.  2019.  NO-DOUBT: Attack Attribution Based On Threat Intelligence Reports. 2019 IEEE International Conference on Intelligence and Security Informatics (ISI). :80–85.
The task of attack attribution, i.e., identifying the entity responsible for an attack, is complicated and usually requires the involvement of an experienced security expert. Prior attempts to automate attack attribution apply various machine learning techniques to features extracted from the malware's code and behavior in order to identify other similar malware whose authors are known. However, the same malware can be reused by multiple actors, and the actor who performed an attack using a malware sample might differ from the malware's author. Moreover, information collected during an incident may contain many clues about the identity of the attacker in addition to the malware used. In this paper, we propose a method of attack attribution based on textual analysis of threat intelligence reports, using state-of-the-art algorithms and models from the fields of machine learning and natural language processing (NLP). We have developed a new text representation algorithm which captures the context of the words and requires minimal feature engineering. Our approach relies on a vector space representation of incident reports derived from a small collection of labeled reports and a large corpus of general security literature. Both datasets have been made available to the research community. Experimental results show that the proposed representation can attribute attacks more accurately than the baseline representations. In addition, we show how the proposed approach can be used to identify novel, previously unseen threat actors and to identify similarities between known threat actors.

Traylor, Terry, Straub, Jeremy, Gurmeet, Snell, Nicholas.  2019.  Classifying Fake News Articles Using Natural Language Processing to Identify In-Article Attribution as a Supervised Learning Estimator. 2019 IEEE 13th International Conference on Semantic Computing (ICSC). :445–449.
Intentionally deceptive content presented under the guise of legitimate journalism is a worldwide information accuracy and integrity problem that affects opinion forming, decision making, and voting patterns. Most so-called 'fake news' is initially distributed over social media conduits like Facebook and Twitter and later finds its way onto mainstream media platforms such as traditional television and radio news. The fake news stories that are initially seeded over social media platforms share key linguistic characteristics, such as making excessive use of unsubstantiated hyperbole and non-attributed quoted content. In this paper, the results of a fake news identification study that documents the performance of a fake news classifier are presented. The Textblob, Natural Language, and SciPy Toolkits were used to develop a novel fake news detector that uses quoted attribution in a Bayesian machine learning system as a key feature to estimate the likelihood that a news article is fake. The resulting process achieves a precision of 63.333% in assessing the likelihood that an article with quotes is fake. This process is called influence mining, and this novel technique is presented as a method that can enable fake news and even propaganda detection. In this paper, the research process, technical analysis, technical linguistics work, and classifier performance and results are presented. The paper concludes with a discussion of how the current system will evolve into an influence mining system.
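A hedged sketch of the described estimator: extract quote-attribution cues as features and feed them to a Bayesian classifier. The regular expression and two-feature design are illustrative; the study's actual linguistic features are richer.

```python
import re
import numpy as np
from sklearn.naive_bayes import GaussianNB

ATTRIBUTED = re.compile(r'"[^"]+"\s*,?\s*(?:said|says|according to|stated)', re.I)

def quote_features(article):
    """Per-article cues: rate of attributed quotes and overall quote rate."""
    words = max(len(article.split()), 1)
    return [len(ATTRIBUTED.findall(article)) / words,
            article.count('"') / (2.0 * words)]

def train_fake_news_classifier(articles, labels):      # labels: 1 = fake, 0 = real
    X = np.array([quote_features(a) for a in articles])
    return GaussianNB().fit(X, labels)
```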

BOUGHACI, Dalila, BENMESBAH, Mounir, ZEBIRI, Aniss.  2019.  An improved N-grams based Model for Authorship Attribution. 2019 International Conference on Computer and Information Sciences (ICCIS). :1–6.
Authorship attribution is the problem of studying an anonymous text and finding the corresponding author in a set of candidate authors. In this paper, we propose a method based on an N-gram model for the problem of authorship attribution. Several measures are used to assign an anonymous text to an author. The different variants of the proposed method are implemented and validated on PAN benchmarks. The numerical results are encouraging and demonstrate the benefit of the proposed idea.
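The n-gram approach lends itself to a compact sketch: build a character n-gram profile per candidate author and attribute by profile similarity. Cosine similarity over raw counts is just one of the 'several measures' such methods use, and is not necessarily the paper's.

```python
from collections import Counter
from math import sqrt

def ngram_profile(text, n=3):
    """Character n-gram frequency profile of a text."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_sim(p, q):
    dot = sum(p[g] * q[g] for g in p.keys() & q.keys())
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / (norm + 1e-12)

def attribute_by_ngrams(anonymous, corpora):
    """corpora: {author: concatenated known texts}. Return the closest author."""
    target = ngram_profile(anonymous)
    return max(corpora, key=lambda a: cosine_sim(target, ngram_profile(corpora[a])))
```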