Biblio

Found 350 results

Filters: Keyword is data mining
2022-02-07
Abdel-Fattah, Farhan, AlTamimi, Fadel, Farhan, Khalid A..  2021.  Machine Learning and Data Mining in Cybersecurity. 2021 International Conference on Information Technology (ICIT). :952–956.
A wireless technology, the Mobile Ad hoc Network (MANET), which connects a group of mobile devices such as phones, laptops, and tablets, suffers from critical security problems, so traditional defense mechanisms such as Intrusion Detection System (IDS) techniques are not sufficient to safeguard and protect a MANET from malicious actions performed by intruders. Due to the dynamic decentralized structure and distributed architecture of MANETs, and their rapid growth over the years, a vulnerable MANET does not need to change its infrastructure; rather, intelligent and advanced methods should be used to secure it and prevent intrusions. This paper focuses essentially on machine learning methodologies and algorithms that address the shortcomings of the IDS as a first line of defense and overcome the security issues MANETs experience. Threats such as black hole attacks, routing loops, network partitioning, selfishness, sleep deprivation, and denial of service (DoS) may be easily classified and recognized using machine learning methodologies and algorithms. Machine learning methodologies and algorithms also help find ways to reduce and resolve mischievous and harmful attacks such as intimidation and prying. The paper describes a few machine learning algorithms in detail, such as neural networks, the support vector machine (SVM) algorithm, and k-nearest neighbors, and explains how these methodologies help MANETs resolve their security problems.
2022-04-01
Markina, Maria S., Markin, Pavel V., Voevodin, Vladislav A., Burenok, Dmitry S..  2021.  Methodology for Quantifying the Materiality of Audit Evidence Using Expert Assessments and Their Ranking. 2021 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (ElConRus). :2390—2393.
An information security audit is a process of obtaining objective audit evidence and evaluating it objectively for compliance with audit criteria. Given resource constraints, it is advisable when developing an audit program to focus on obtaining evidence that has a significant impact on the audit's effectiveness. The person managing the audit program therefore faces the pressing task of developing an audit program that takes into account the information content of the extracted evidence and the resource constraints. In practice, evidence cannot be evaluated correctly directly on numerical scales, so less informative scales must be used. The purpose of this research is to develop a methodology for assessing the materiality of audit evidence using expert assessments, their statistical processing, and a transition to quantitative scales. As a result, the person managing the audit program gets a tool for developing an effective audit program.
2022-04-18
Bothos, Ioannis, Vlachos, Vasileios, Kyriazanos, Dimitris M., Stamatiou, Ioannis, Thanos, Konstantinos Georgios, Tzamalis, Pantelis, Nikoletseas, Sotirios, Thomopoulos, Stelios C.A..  2021.  Modelling Cyber-Risk in an Economic Perspective. 2021 IEEE International Conference on Cyber Security and Resilience (CSR). :372–377.
In this paper, we present a theoretical approach to econometric modelling for the estimation of cyber-security risk, using time-series analysis methods and, alternatively, a Machine Learning (ML) based deep learning methodology. We also present work performed in the framework of the SAINT H2020 Project [1] concerning innovative data mining techniques, based on automated web scraping, for retrieving the relevant time-series data. We conclude with a review of emerging challenges in cyber-risk assessment brought by the rapid development of adversarial AI.
2022-06-14
Yasa, Ray Novita, Buana, I Komang Setia, Girinoto, Setiawan, Hermawan, Hadiprakoso, Raden Budiarto.  2021.  Modified RNP Privacy Protection Data Mining Method as Big Data Security. 2021 International Conference on Informatics, Multimedia, Cyber and Information System (ICIMCIS). :30–34.
Privacy-Preserving Data Mining (PPDM) has become an exciting topic in recent decades due to the growing interest in big data and data mining. It is a technique for securing data while still preserving the privacy contained in it. This paper provides an alternative perturbation-based PPDM technique, carried out by modifying the RNP algorithm. The novelty of this paper lies in modifications to several steps of the method, each with a specific purpose. The first modification narrows the selection of the disturbance value so that the number of attributes replaced in each record is only as many as the attributes in the original data, no more, with no need to repeat; the second derives the perturbation function from the cumulative distribution function and uses it to find the probability distribution function, so that the selection of replacement data has a clear basis. The experimental results on twenty-five perturbed data sets show that the modified RNP algorithm balances data utility and security level by selecting the appropriate disturbance value and perturbation value. The level of security is measured using privacy metrics in the form of value difference, average transformation of data, and percentage of retained values. The method presented in this paper is well suited to be applied to actual data that requires privacy preservation.
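As a rough illustration of the perturbation idea described above (replacing values with noise whose distribution is derived from a cumulative distribution function), the following Python sketch applies additive Laplace noise generated through the inverse CDF; it is a generic perturbation-based PPDM example, not the authors' modified RNP algorithm, and the toy records are made up.

    import numpy as np

    def perturb_numeric(data, scale=1.0, rng=None):
        """Additive-noise perturbation of numeric attributes.

        Generic sketch: each value becomes value + noise, where the noise is
        obtained by pushing a uniform draw through the inverse CDF of a chosen
        distribution (here: Laplace). Not the modified RNP algorithm itself.
        """
        rng = rng or np.random.default_rng(0)
        u = rng.uniform(-0.5, 0.5, size=data.shape)                  # uniform draws
        noise = -scale * np.sign(u) * np.log(1.0 - 2.0 * np.abs(u))  # Laplace inverse CDF
        return data + noise

    original = np.array([[52.0, 180.0], [34.0, 165.0], [47.0, 172.0]])  # toy records
    perturbed = perturb_numeric(original, scale=2.0)
    print(np.round(perturbed, 2))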
2022-10-12
Li, Chunzhi.  2021.  A Phishing Detection Method Based on Data Mining. 2021 3rd International Conference on Applied Machine Learning (ICAML). :202—205.
Data mining is a very important technology in the current era of data explosion. With the informatization of society and the transparency and openness of information, network security issues have become a focus of concern for people all over the world. This paper compares the accuracy of multiple machine learning methods and two deep learning frameworks when using lexical features to detect and classify malicious URLs. The results show that the Random Forest, an ensemble learning method for classification, is superior to the 8 other machine learning methods considered. Furthermore, the Random Forest is even superior to some popular deep neural network models built with well-known frameworks such as TensorFlow and PyTorch when using lexical features to detect and classify malicious URLs.
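As a rough illustration of the lexical-feature approach compared in the paper, the following Python sketch trains a random forest on a handful of simple URL features; the example URLs, labels, and feature set are illustrative stand-ins, not the paper's dataset or configuration.

    from urllib.parse import urlparse
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def lexical_features(url):
        """A few simple lexical features of a URL (length, digits, symbols, depth)."""
        parsed = urlparse(url)
        return [
            len(url),
            len(parsed.netloc),
            sum(c.isdigit() for c in url),
            url.count('-') + url.count('_'),
            url.count('.'),
            parsed.path.count('/'),
            int('@' in url),
        ]

    urls = ["http://example.com/index.html",
            "http://login-secure-update.example-account.xyz/verify.php?id=1234",
            "https://news.example.org/2021/article",
            "http://198.51.100.23/paypal.com/confirm"]
    labels = [0, 1, 0, 1]                      # 0 = benign, 1 = malicious (toy labels)

    X = np.array([lexical_features(u) for u in urls])
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
    print(clf.predict([lexical_features("http://secure-verify-account.example.biz/login")]))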
2022-03-14
Vykopal, Jan, Čeleda, Pavel, Seda, Pavel, Švábenský, Valdemar, Tovarňák, Daniel.  2021.  Scalable Learning Environments for Teaching Cybersecurity Hands-on. 2021 IEEE Frontiers in Education Conference (FIE). :1—9.
This Innovative Practice full paper describes a technical innovation for scalable teaching of cybersecurity hands-on classes using interactive learning environments. Hands-on experience significantly improves the practical skills of learners. However, the preparation and delivery of hands-on classes usually do not scale. Teaching even small groups of students requires a substantial effort to prepare the class environment and practical assignments. Further issues are associated with teaching large classes, providing feedback, and analyzing learning gains. We present our research effort and practical experience in designing and using learning environments that scale up hands-on cybersecurity classes. The environments support virtual networks with full-fledged operating systems and devices that emulate real-world systems. The classes are organized as simultaneous training sessions with cybersecurity assignments and learners' assessment. For big classes, with the goal of developing learners' skills and providing formative assessment, we run the environment locally, either in a computer lab or at learners' own desktops or laptops. For classes that exercise the developed skills and feature summative assessment, we use an on-premises cloud environment. Our approach is unique in supporting both types of deployment. The environment is described as code using open and standard formats, defining individual hosts and their networking, configuration of the hosts, and tasks that the students have to solve. The environment can be repeatedly created for different classes on a massive scale or for each student on demand. Moreover, the approach enables learning analytics and educational data mining of learners' interactions with the environment. These analyses inform the instructor about the student's progress during the class and enable the learner to reflect on a finished training session. Thanks to this, we can improve the student class experience and motivation for further learning. Using the presented environments, the KYPO Cyber Range Platform and Cyber Sandbox Creator, we delivered the classes on-site or remotely for various target groups of learners (K-12, university students, and professional learners). The learners value the realistic nature of the environments that enable exercising theoretical concepts and tools. The instructors value time-efficiency when preparing and deploying the hands-on activities. Engineering and computing educators can freely use our software, which we have released under an open-source license. We also provide detailed documentation and exemplary hands-on training to help other educators adopt our teaching innovations and enable sharing of reusable components within the community.
2022-03-15
Zhou, Zequan, Wang, Yupeng, Luo, Xiling, Bai, Yi, Wang, Xiaochao, Zeng, Feng.  2021.  Secure Accountable Dynamic Storage Integrity Verification. 2021 IEEE SmartWorld, Ubiquitous Intelligence Computing, Advanced Trusted Computing, Scalable Computing Communications, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/IOP/SCI). :440—447.
Integrity verification of cloud data is of great importance for secure and effective cloud storage, since attackers can change the data even though it is encrypted. Traditional integrity verification schemes only let the client know the integrity status of the remote data. When the data is corrupted, the system cannot hold the server accountable. Besides, almost all existing schemes assume that the users are credible. Instead, especially in a dynamic operation environment, users can deny their behaviors and let the server bear the penalty of data loss. To address the issues above, we propose an accountable dynamic storage integrity verification (ADS-IV) scheme which provides means to detect or eliminate misbehavior by all participants. Meanwhile, we modify the Invertible Bloom Filter (IBF) to recover the corrupted data and use the Mahalanobis distance to calculate the degree of damage. We prove that our scheme is secure under the Computational Diffie-Hellman (CDH) assumption and the Discrete Logarithm (DL) assumption, and that the audit process is privacy-preserving. The experimental results demonstrate that the computational complexity of the audit is constant; the storage overhead is O(√n), which is only 1/400 of the size of the original data; and the whole communication overhead is O(1). As a result, the proposed scheme is suitable not only for large-scale cloud data storage systems, but also for systems with sensitive data, such as banking systems, medical systems, and so on.
2022-11-18
Singh, Karan Kumar, B S, Radhika, Shyamasundar, R K.  2021.  SEFlowViz: A Visualization Tool for SELinux Policy Analysis. 2021 12th International Conference on Information and Communication Systems (ICICS). :439—444.
SELinux policies used in practice are generally large and complex. As a result, it is difficult for the policy writers to completely understand the policy and ensure that the policy meets the intended security goals. To remedy this, we have developed a tool called SEFlowViz that helps in visualizing the information flows of a policy and thereby helps in creating flow-secure policies. The tool uses the graph database Neo4j to visualize the policy. Along with visualization, the tool also supports extracting various information regarding the policy and its components through queries. Furthermore, the tool also supports the addition and deletion of rules which is useful in converting inconsistent policies into consistent policies.
2022-04-12
Nair, Viswajit Vinod, van Staalduinen, Mark, Oosterman, Dion T..  2021.  Template Clustering for the Foundational Analysis of the Dark Web. 2021 IEEE International Conference on Big Data (Big Data). :2542—2549.
The rapid rise of the Dark Web and supportive technologies has served as the backbone facilitating online illegal activity worldwide. These illegal activities, supported by anonymisation technologies such as Tor, have become increasingly elusive to law enforcement agencies. Despite several successful law enforcement operations, illegal activity on the Dark Web is still growing. There are approaches to monitor, mine, and research the Dark Web, all with varying degrees of success. Given the complexity and dynamics of the services offered, we recognize the need for in-depth analysis of the Dark Web with regard to its infrastructures, actors, types of abuse, and their relationships. This involves the challenging task of information extraction from the very heterogeneous collection of web pages that make up the Dark Web. Most providers develop their services on top of standard frameworks such as WordPress, Simple Machines Forum, phpBB, and several other frameworks. As a result, these service providers publish a significant number of pages based on similar structural and stylistic templates. We propose an efficient, scalable, repeatable, and accurate approach to cluster Dark Web pages based on those structural and stylistic features. Extracting relevant information from those clusters should make it feasible to conduct in-depth Dark Web analysis. This paper presents our clustering algorithm to accelerate information extraction and, as a result, improve attribution of digital traces to infrastructures or individuals in the fight against cybercrime.
2022-05-19
Rabbani, Mustafa Raza, Bashar, Abu, Atif, Mohd, Jreisat, Ammar, Zulfikar, Zehra, Naseem, Yusra.  2021.  Text mining and visual analytics in research: Exploring the innovative tools. 2021 International Conference on Decision Aid Sciences and Application (DASA). :1087–1091.
The aim of the study is to present an advanced overview and potential applications of the innovative tools, software, and methods used for data visualization, text mining, scientific mapping, and bibliometric analysis. Text mining and data visualization have been topics of research for several years for academic researchers and practitioners. With the advancement of technology and innovation in data analysis techniques, there are many online and offline software tools available for text mining and visualisation. The purpose of this study is to present an advanced overview of the latest, sophisticated, and innovative tools available for this purpose. The unique characteristic of this study is that it provides an overview, with examples, of the five most widely adopted software tools in social science research: VOSviewer, Biblioshiny, Gephi, HistCite, and CiteSpace. This study will contribute to the academic literature and will help researchers and practitioners apply these tools in future research to present their findings in a more scientific manner.
2021-12-21
Jeong, Jang Hyeon, Kim, Jong Beom, Choi, Seong Gon.  2021.  Zero-Day Attack Packet Highlighting System. 2021 23rd International Conference on Advanced Communication Technology (ICACT). :200–204.
This paper presents a Zero-Day Attack Packet Highlighting System. The proposed system outputs zero-day attack packet information from flows extracted as the result of regression inspection of packets stored in a flow-based PCA. It also highlights the raw data of any packet matched with a rule. In addition, we design communication protocols for sending and receiving data within the proposed system. The purpose of the proposed system is to solve existing flow-based problems and provide users with the raw data of zero-day packets so that they can analyze the raw data of those packets.
2022-01-10
Gong, Jianhu.  2021.  Network Information Security Pipeline Based on Grey Relational Cluster and Neural Networks. 2021 5th International Conference on Computing Methodologies and Communication (ICCMC). :971–975.
A network information security pipeline based on grey relational clustering and neural networks is designed and implemented in this paper. The method is based on the principle that the optimal selected feature set must contain the feature with the highest information entropy gain with respect to the data set category. First, the feature with the largest information gain is selected from all features as the search starting point, and then the classification labels of the sample data set are fully considered. For better performance, neural networks are employed. A network's learning ability is directly determined by its complexity, and learning general complex problems and large sample data will bring about a dramatic increase in network scale. The proposed model is validated through simulation.
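As a rough illustration of the selection principle described above (start the search from the feature with the highest information gain with respect to the class label), the following Python sketch ranks features by mutual information on a standard toy dataset; it is not the paper's grey relational clustering pipeline.

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.feature_selection import mutual_info_classif

    # Rank features by their information gain with respect to the class label
    # (mutual information is used here as the entropy-gain estimate).
    X, y = load_iris(return_X_y=True)
    gain = mutual_info_classif(X, y, random_state=0)
    order = np.argsort(gain)[::-1]
    print("feature ranking by information gain:", order)
    print("search starting point (highest gain):", order[0])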
2022-05-19
Zhang, Xiaoyu, Fujiwara, Takanori, Chandrasegaran, Senthil, Brundage, Michael P., Sexton, Thurston, Dima, Alden, Ma, Kwan-Liu.  2021.  A Visual Analytics Approach for the Diagnosis of Heterogeneous and Multidimensional Machine Maintenance Data. 2021 IEEE 14th Pacific Visualization Symposium (PacificVis). :196–205.
Analysis of large, high-dimensional, and heterogeneous datasets is challenging, as no one technique is suitable for visualizing and clustering such data in order to make sense of the underlying information. For instance, heterogeneous logs detailing machine repair and maintenance in an organization often need to be analyzed to diagnose errors and identify abnormal patterns, formalize root-cause analyses, and plan preventive maintenance. Such real-world datasets are also beset by issues such as inconsistent and/or missing entries. To conduct an effective diagnosis, it is important to extract and understand patterns from the data with support from analytic algorithms (e.g., finding that certain kinds of machine complaints occur more in the summer) while involving the human-in-the-loop. To address these challenges, we adopt existing techniques for dimensionality reduction (DR) and clustering of numerical, categorical, and text data dimensions, and introduce a visual analytics approach that uses multiple coordinated views to connect DR + clustering results across each kind of data dimension. To help analysts label the clusters, each clustering view is supplemented with techniques and visualizations that contrast a cluster of interest with the rest of the dataset. Our approach helps analysts make sense of machine maintenance logs and their errors; the gained insights then help them carry out preventive maintenance. We illustrate and evaluate our approach through use cases and expert studies, respectively, and discuss generalization of the approach to other heterogeneous data.
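As a rough illustration of the DR + clustering backbone that the coordinated views are built on, the following Python sketch reduces a toy numerical block with PCA and clusters the embedding with k-means; the paper itself couples such results across numerical, categorical, and text dimensions.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    # Toy numerical block standing in for one kind of data dimension.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(4, 1, (50, 8))])

    embedding = PCA(n_components=2, random_state=0).fit_transform(X)       # DR step
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)  # clustering step
    print(embedding[:3])
    print(labels[:10])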
2022-10-20
Nahar, Nazmun, Ahmed, Md. Kawsher, Miah, Tareq, Alam, Shahriar, Rahman, Kh. Mustafizur, Rabbi, Md. Anayt.  2021.  Implementation of Android Based Text to Image Steganography Using 512-Bit Algorithm with LSB Technique. 2021 5th International Conference on Electrical Information and Communication Technology (EICT). :1—6.
Steganography security is a major concern in today's information-driven world; communication takes place in a way that hides information secretly. Steganography is the technique of hiding secret data within an ordinary, non-secret file, text message, or image. This technique avoids detection of the secret data, which is then extracted at its destination. The main reason for using steganography is that any secret message can be hidden behind an ordinary file. This work presents a unique technique for image steganography based on a 512-bit algorithm. Producing a secure stego image is a very challenging task. Therefore, we used least significant bit (LSB) techniques for implementing the stego and cover images, and data encryption and decryption are used to embed text and replace data in the least significant bits for a better approach. An Android-based interface is used for the encryption and decryption techniques evaluated in this process. Contribution: this research works with 512 bits of data simultaneously in a block cipher to reduce the time complexity of the system, and the Android platform is used for the data encryption and decryption process. The steganography model works with a stego image that interacts with LSB techniques for data hiding.
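As a rough illustration of the LSB embedding step described above, the following Python sketch hides a short message in the least significant bits of a toy pixel array and recovers it; the paper's 512-bit block cipher and Android interface are omitted, and the random array stands in for a real cover image.

    import numpy as np

    def embed_lsb(pixels, message):
        """Hide message bytes in the least significant bit of each pixel value."""
        bits = np.unpackbits(np.frombuffer(message.encode("utf-8"), dtype=np.uint8))
        flat = pixels.flatten()
        if bits.size > flat.size:
            raise ValueError("cover image too small for message")
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite the LSBs
        return flat.reshape(pixels.shape)

    def extract_lsb(pixels, length):
        """Recover `length` bytes from the LSBs of the stego image."""
        bits = pixels.flatten()[:length * 8] & 1
        return np.packbits(bits).tobytes().decode("utf-8")

    cover = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
    stego = embed_lsb(cover, "secret")
    print(extract_lsb(stego, len("secret")))   # -> "secret"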
2022-09-20
Ndemeye, Bosco, Hussain, Shahid, Norris, Boyana.  2021.  Threshold-Based Analysis of the Code Quality of High-Performance Computing Software Packages. 2021 IEEE 21st International Conference on Software Quality, Reliability and Security Companion (QRS-C). :222—228.
Many popular metrics used for the quantification of the quality or complexity of a codebase (e.g. cyclomatic complexity) were developed in the 1970s or 1980s when source code sizes were significantly smaller than they are today, and before a number of modern programming language features were introduced in different languages. Thus, the many thresholds that were suggested by researchers for deciding whether a given function is lacking in a given quality dimension need to be updated. In the pursuit of this goal, we study a number of open-source high-performance codes, each of which has been in development for more than 15 years—a characteristic which we take to imply good design to score them in terms of their source codes' quality and to relax the above-mentioned thresholds. First, we employ the LLVM/Clang compiler infrastructure and introduce a Clang AST tool to gather AST-based metrics, as well as an LLVM IR pass for those based on a source code's static call graph. Second, we perform statistical analysis to identify the reference thresholds of 22 code quality and callgraph-related metrics at a fine grained level.
2022-02-07
Zhang, Ruichao, Wang, Shang, Burton, Renee, Hoang, Minh, Hu, Juhua, Nascimento, Anderson C A.  2021.  Clustering Analysis of Email Malware Campaigns. 2021 IEEE International Conference on Cyber Security and Resilience (CSR). :95–102.
The task of malware labeling on real datasets faces huge challenges—ever-changing datasets and a lack of ground-truth labels—owing to the rapid growth of malware. Clustering malware into their respective families is a well known tool for improving the efficiency of the malware labeling process. In this paper, we addressed the challenge of clustering email malware and carried out a cluster analysis on a real dataset collected from email campaigns over a 13-month period. Our main original contribution is to analyze the usefulness of email header information for malware clustering (a novel approach proposed by Burton [1]) and compare it with features collected from the malware directly. We compare clustering based on email header information with traditional features extracted from varied resources provided by VirusTotal [2], including static and dynamic analysis. We show that email header information achieves excellent performance.
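As a rough illustration of header-based campaign clustering, the following Python sketch hashes a few hypothetical header fields per email into fixed-size vectors and clusters them; the fields, values, and cluster count are illustrative, not the paper's 13-month dataset or Burton's exact feature set.

    from sklearn.feature_extraction import FeatureHasher
    from sklearn.cluster import KMeans

    # Hypothetical header fields per email; real campaigns would use many more.
    emails = [
        {"subject": "Invoice 4421", "from_domain": "billing-alerts.biz", "mailer": "PHPMailer"},
        {"subject": "Invoice 9983", "from_domain": "billing-alerts.biz", "mailer": "PHPMailer"},
        {"subject": "Your parcel",  "from_domain": "track-delivery.info", "mailer": "Outlook"},
        {"subject": "Parcel notice","from_domain": "track-delivery.info", "mailer": "Outlook"},
    ]
    features = [{f"{k}={v}": 1 for k, v in e.items()} for e in emails]  # one-hot style dicts
    X = FeatureHasher(n_features=64).transform(features)                # hashed vectors
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(labels)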
2022-03-10
Qin, Shuangling, Xu, Chaozhi, Zhang, Fang, Jiang, Tao, Ge, Wei, Li, Jihong.  2021.  Research on Application of Chinese Natural Language Processing in Constructing Knowledge Graph of Chronic Diseases. 2021 International Conference on Communications, Information System and Computer Engineering (CISCE). :271—274.
A knowledge graph can describe the concepts in the objective world and the relationships between these concepts in a structured way, and can identify, discover, and infer the relationships between things and concepts. It has been developed in the field of medical and health care. In this paper, natural language processing methods such as named entity recognition and relationship extraction have been used to build a chronic disease knowledge graph. This method is beneficial for forecast analysis of chronic diseases, network monitoring, basic education, etc. The research in this paper can greatly help medical experts in the treatment of chronic diseases, assist primary clinicians in making more scientific decisions, and help patients with chronic diseases improve medical efficiency. It also has practical significance for clinical scientific research on chronic diseases.
2022-11-18
Islam, Md Rofiqul, Cerny, Tomas.  2021.  Business Process Extraction Using Static Analysis. 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE). :1202–1204.
Business process mining of a large-scale project has many benefits, such as finding vulnerabilities, improving processes, collecting data for data science, and generating a clearer and simpler representation. The general approach to process mining is to turn event data such as application logs into insights and actions. Observing logs broad enough to depict the whole business logic scenario of a large project can become very costly due to difficult environment setup, unavailability of users, the presence of unreachable or hardly reachable log statements, etc. Using static source code analysis to extract logs and arrange them in perfect runtime execution order is a potential way to solve the problem and reduce the cost of the business process mining operation.
2022-04-19
Sun, Dengdi, Lv, Xiangjie, Huang, Shilei, Yao, Lin, Ding, Zhuanlian.  2021.  Salient Object Detection Based on Multi-layer Cascade and Fine Boundary. 2021 17th International Conference on Computational Intelligence and Security (CIS). :299–303.
Due to the continuous improvement of deep learning, salient object detection based on deep learning has been a hot topic in computer vision. Fully Convolutional Networks (FCNs) have become the mainstream method for salient target detection. In this article, we propose a new end-to-end multi-level feature fusion module (MCFB), successfully achieving the goal of extracting rich multi-scale global information by integrating semantic and detailed information. In our module, we obtain different levels of feature maps through convolution, and then cascade the different levels of feature maps, fully considering the global information, to obtain a rough saliency image. We also propose an optimization module on top of our base module to further optimize the feature map. To obtain a clearer boundary, we use a self-defined loss function to optimize the learning process, which includes the Intersection-over-Union (IoU), Binary Cross-Entropy (BCE), and Structural Similarity (SSIM) losses. The module can extract global information to a greater extent while obtaining clearer boundaries. Compared with some existing representative methods, this method achieves good results.
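As a rough illustration of the hybrid boundary-aware loss described above, the following PyTorch-style Python sketch combines the BCE and soft-IoU terms (the SSIM term is omitted for brevity); it is a generic formulation, not the authors' exact loss.

    import torch
    import torch.nn.functional as F

    def hybrid_loss(pred_logits, target):
        """BCE + soft-IoU part of a hybrid saliency loss (SSIM term omitted)."""
        bce = F.binary_cross_entropy_with_logits(pred_logits, target)
        prob = torch.sigmoid(pred_logits)
        inter = (prob * target).sum(dim=(1, 2, 3))
        union = (prob + target - prob * target).sum(dim=(1, 2, 3))
        iou = 1.0 - (inter + 1.0) / (union + 1.0)     # soft IoU loss per sample
        return bce + iou.mean()

    pred = torch.randn(2, 1, 64, 64)                   # raw network outputs (toy)
    mask = (torch.rand(2, 1, 64, 64) > 0.5).float()    # toy ground-truth masks
    print(hybrid_loss(pred, mask))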
2022-07-05
Bae, Jin Hee, Kim, Minwoo, Lim, Joon S..  2021.  Emotion Detection and Analysis from Facial Image using Distance between Coordinates Feature. 2021 International Conference on Information and Communication Technology Convergence (ICTC). :494—497.
Facial expression recognition has long been established as a subject of continuous research in various fields. In this study, feature extraction was conducted by calculating the distances between facial landmarks in an image. The extracted features describing the relationship between each pair of landmarks were then analyzed and used to classify five facial expressions. We increased the data and label reliability through labeling work with multiple observers. Additionally, faces were recognized from the original data, and landmark coordinates were extracted and used as features. A genetic algorithm was used to select features that were relatively more helpful for classification. We performed facial expression classification and analysis using the method proposed in this study, and the results show the validity and effectiveness of the proposed method.
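As a rough illustration of the distance-between-coordinates features, the following Python sketch computes all pairwise Euclidean distances between a handful of made-up landmark points; real use would start from detected facial landmarks and feed the distances to the genetic-algorithm feature selection described in the paper.

    import numpy as np
    from itertools import combinations

    def distance_features(landmarks):
        """Pairwise Euclidean distances between facial landmark coordinates."""
        return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                         for i, j in combinations(range(len(landmarks)), 2)])

    # Toy landmark set: 5 (x, y) points standing in for detected facial landmarks.
    landmarks = np.array([[30, 40], [70, 40], [50, 60], [38, 85], [62, 85]], dtype=float)
    features = distance_features(landmarks)
    print(features.shape, features[:3])   # 10 pairwise distances for 5 points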
2022-06-07
He, Weiyu, Wu, Xu, Wu, Jingchen, Xie, Xiaqing, Qiu, Lirong, Sun, Lijuan.  2021.  Insider Threat Detection Based on User Historical Behavior and Attention Mechanism. 2021 IEEE Sixth International Conference on Data Science in Cyberspace (DSC). :564–569.
Insider threats cause enterprises or organizations to suffer property loss and reputation damage. User behavior analysis is the mainstream method of insider threat detection, but due to the lack of fine-grained detection and the inability to effectively capture the behavior patterns of individual users, the accuracy and precision of detection are insufficient. To solve this problem, this paper designs an insider threat detection method based on user historical behavior and an attention mechanism: Long Short-Term Memory (LSTM) is used to extract user behavior sequence information, Attention Based on User Historical Behavior (ABUHB) learns the differences between different user behaviors, and Bidirectional LSTM (Bi-LSTM) learns the evolution of different user behavior patterns, finally realizing fine-grained detection of abnormal user behavior. To evaluate the effectiveness of this method, experiments are conducted on the CMU-CERT Insider Threat Dataset. The experimental results show that the effectiveness of this method is 3.1% to 6.3% higher than that of other comparative models, and that it can detect insider threats in different user behaviors at fine granularity.
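As a rough illustration of the Bi-LSTM-plus-attention shape of such a detector, the following PyTorch sketch embeds a sequence of behavior-event ids, encodes it with a bidirectional LSTM, pools the hidden states with a learned attention weighting, and classifies the result; the layer sizes and vocabulary are arbitrary, and this is not the paper's exact ABUHB architecture.

    import torch
    import torch.nn as nn

    class BiLSTMAttention(nn.Module):
        """Generic Bi-LSTM + attention classifier over behavior sequences."""
        def __init__(self, n_actions=50, embed_dim=32, hidden=64, n_classes=2):
            super().__init__()
            self.embed = nn.Embedding(n_actions, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
            self.att = nn.Linear(2 * hidden, 1)       # scores each time step
            self.out = nn.Linear(2 * hidden, n_classes)

        def forward(self, seq):                       # seq: (batch, time) action ids
            h, _ = self.lstm(self.embed(seq))         # (batch, time, 2*hidden)
            w = torch.softmax(self.att(h), dim=1)     # attention weights over time
            context = (w * h).sum(dim=1)              # weighted pooling
            return self.out(context)

    model = BiLSTMAttention()
    logits = model(torch.randint(0, 50, (8, 20)))     # 8 users, 20 actions each
    print(logits.shape)                               # torch.Size([8, 2])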
2022-02-24
Chiu, Chih-Chieh, Tsai, Pang-Wei, Yang, Chu-Sing.  2021.  PIDS: An Essential Personal Information Detection System for Small Business Enterprise. 2021 International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME). :01–06.
Since personal data protection laws are on the way in many countries, how to use data mining methods to secure sensitive information has become a challenge for enterprises. To make sure every employee follows the company's data protection strategy, it may take a lot of time and cost to seek out and scan thousands of folders and files on user equipment, ensuring that the file contents meet IT security policies. Hence, this paper proposes a lightweight, pattern-based detection system, PIDS, which is expected to enable affordable data leakage prevention with reasonable cost and high efficiency in small business enterprises. For verification and evaluation, PIDS has been deployed on more than 100,000 PCs of collaborating enterprises, and the feedback shows that the system is able to fulfill its original design functionality of finding violating or sensitive content efficiently.
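As a rough illustration of pattern-based scanning for sensitive content, the following Python sketch walks a directory tree and counts regex matches per file; the patterns shown are hypothetical placeholders, not the PIDS rule set.

    import os
    import re

    # Hypothetical patterns standing in for an enterprise's sensitive-data rules;
    # a real deployment would use locale-specific ID, phone, and card formats.
    PATTERNS = {
        "email":       re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "phone":       re.compile(r"\b\d{2,4}[- ]?\d{3,4}[- ]?\d{3,4}\b"),
    }

    def scan_file(path):
        """Return {pattern name: hit count} for one text file."""
        try:
            text = open(path, encoding="utf-8", errors="ignore").read()
        except OSError:
            return {}
        return {name: len(rx.findall(text)) for name, rx in PATTERNS.items()}

    def scan_tree(root):
        """Walk a directory tree and report files containing potential personal data."""
        for folder, _, files in os.walk(root):
            for name in files:
                hits = scan_file(os.path.join(folder, name))
                if any(hits.values()):
                    print(os.path.join(folder, name), hits)

    scan_tree(".")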
2022-02-25
Bolbol, Noor, Barhoom, Tawfiq.  2021.  Mitigating Web Scrapers using Markup Randomization. 2021 Palestinian International Conference on Information and Communication Technology (PICICT). :157—162.

Web scraping is the technique of extracting desired data in an automated way by scanning the internal links and content of a website, an activity usually performed by systematically programmed bots. This paper explains our proposed solution to protect blog content from theft and from being copied to other destinations by mitigating scraping bots. To achieve our purpose we apply two steps at two levels: the first, at the main blog page level, mitigates the work of crawler bots by adding extra empty article anchors among real articles; the second, at the article page level, adds a random number of empty and hidden spans with randomly generated text within the article's body. To assess this solution we apply it to a local project developed in PHP using the Laravel framework, and define four criteria that measure its effectiveness. The results show that the file size before and after applying the solution is unchanged, and the processing time increases by only a few milliseconds, which is still within the acceptable range. Using an HTML-similarity tool we get very good results showing that the style remains symmetric, with only a few changes to the structure. Finally, to assess the effect on bots, a scraper bot was reused and the programmed middleware produced the expected results. These results show that the solution is feasible to adopt and use to protect blog content.
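As a rough illustration of the article-level randomization step, the following Python sketch scatters hidden spans of random text between an article's paragraphs; the authors implemented this as middleware in PHP/Laravel, so this is only a language-neutral sketch of the idea.

    import random
    import string

    def random_text(n=12):
        """Random filler text for decoy spans."""
        return "".join(random.choices(string.ascii_lowercase + " ", k=n))

    def add_decoy_spans(article_html, max_decoys=5):
        """Insert hidden spans with random text after each paragraph of an article."""
        out = []
        for p in article_html.split("</p>"):
            out.append(p)
            if p.strip():
                out.append("</p>")
                for _ in range(random.randint(1, max_decoys)):
                    out.append(f'<span style="display:none">{random_text()}</span>')
        return "".join(out)

    article = "<p>Real paragraph one.</p><p>Real paragraph two.</p>"
    print(add_decoy_spans(article))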

2022-08-12
Chao, Wang, Qun, Li, XiaoHu, Wang, TianYu, Ren, JiaHan, Dong, GuangXin, Guo, EnJie, Shi.  2020.  An Android Application Vulnerability Mining Method Based On Static and Dynamic Analysis. 2020 IEEE 5th Information Technology and Mechatronics Engineering Conference (ITOEC). :599–603.
Given the respective advantages and limitations of the two kinds of vulnerability mining methods for Android applications, static and dynamic analysis, this paper proposes an Android application vulnerability mining method based on a combination of the two. First, the static analysis method is used to obtain basic vulnerability analysis results for the application, and input test cases for dynamic analysis are constructed on this basis. Fuzz testing of the inputs is carried out in a real-device environment, the application security vulnerabilities are verified with taint analysis, and finally an application vulnerability report is obtained. Experimental results show that, compared with static analysis alone, the method can significantly improve the accuracy of vulnerability mining.
2022-09-09
Xu, Rong-Zhen, He, Meng-Ke.  2020.  Application of Deep Learning Neural Network in Online Supply Chain Financial Credit Risk Assessment. 2020 International Conference on Computer Information and Big Data Applications (CIBDA). :224—232.
Under the background of "Internet +", in order to deeply mine the credit risk behind online supply chain financial big data, this paper proposes an online supply chain financial credit risk assessment method based on a deep belief network (DBN). First, a deep belief network evaluation model composed of a Restricted Boltzmann Machine (RBM) and a SOFTMAX classifier is established, and performance evaluation tests on three kinds of data sets are carried out using this model. Factor analysis is used to select 8 indicators from 21; these are input into the RBM for conversion into a more scientific evaluation index and finally input into SOFTMAX for evaluation. This DBN-based method of online supply chain financial credit risk assessment is applied to an example for verification. The results show that the evaluation accuracy of this method is 96.04%, which is higher than that of the SVM method and the Logistic method, with better rationality.
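As a rough illustration of the RBM-plus-softmax stage, the following Python sketch chains scikit-learn's BernoulliRBM feature transformer into a logistic (softmax) classifier on random toy indicators; the data, dimensions, and hyperparameters are placeholders, not the paper's 8 factor-analysis indicators or its DBN configuration.

    import numpy as np
    from sklearn.neural_network import BernoulliRBM
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    # Toy indicator matrix: 200 firms x 8 indicators scaled to [0, 1], with a made-up risk label.
    rng = np.random.default_rng(0)
    X = rng.random((200, 8))
    y = (X[:, :4].mean(axis=1) > 0.5).astype(int)

    model = Pipeline([
        ("rbm", BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20, random_state=0)),
        ("softmax", LogisticRegression(max_iter=1000)),
    ])
    model.fit(X, y)
    print("training accuracy:", model.score(X, y))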