Biblio

Found 5734 results

Filters: Keyword is Human Behavior
2020-10-12
Khayat, Mohamad, Barka, Ezedin, Sallabi, Farag.  2019.  SDN-Based Secure Healthcare Monitoring System (SDN-SHMS). 2019 28th International Conference on Computer Communication and Networks (ICCCN). :1–7.
Healthcare experts and researchers have been promoting the need for IoT-based remote health monitoring systems that take care of the health of elderly people. However, such systems may generate large amounts of data, which makes the security and privacy of that data imperative. This paper studies the security and privacy concerns of existing Healthcare Monitoring Systems (HMS) and proposes a reference architecture (security integration framework) for managing IoT-based healthcare monitoring systems that ensures security, privacy, and reliable service delivery for patients and elderly people, reducing and avoiding health-related risks. The proposed framework takes the form of a state-of-the-art security platform for HMS built on the emerging Software-Defined Networking (SDN) paradigm. The integration framework eliminates dependency on specific software or vendors for the different security systems, and allows the HMS to benefit from the functional and secure applications and services provided by the SDN platform.
MacMahon, Silvana Togneri, Alfano, Marco, Lenzitti, Biagio, Bosco, Giosuè Lo, McCaffery, Fergal, Taibi, Davide, Helfert, Markus.  2019.  Improving Communication in Risk Management of Health Information Technology Systems by means of Medical Text Simplification. 2019 IEEE Symposium on Computers and Communications (ISCC). :1135–1140.
Health Information Technology Systems (HITS) are increasingly used to improve the quality of patient care while reducing costs. These systems have been developed in response to changing models of care, which, driven by the increased incidence of chronic disease, have shifted towards an ongoing, technology-supported relationship between patient and care team. However, the use of HITS may increase the risk to patient safety and security. While standards can be used to address and manage these risks, significant communication problems exist between experts working in different departments. These departments operate in silos, often leading to communication breakdowns. For example, risk management stakeholders who are not clinicians may struggle to understand, define and manage risks associated with these systems when talking to medical professionals, as they do not understand medical terminology or the associated care processes. In order to overcome this communication problem, we propose the use of the "Three Amigos" approach together with the SIMPLE tool, which has been developed to assist patients in understanding medical terms. This paper examines how the "Three Amigos" approach and the SIMPLE tool can be used to improve the estimation of severity of risk by non-clinical risk management stakeholders, and provides a practical example of their use in a ten-step risk management process.
Chia, Pern Hui, Desfontaines, Damien, Perera, Irippuge Milinda, Simmons-Marengo, Daniel, Li, Chao, Day, Wei-Yen, Wang, Qiushi, Guevara, Miguel.  2019.  KHyperLogLog: Estimating Reidentifiability and Joinability of Large Data at Scale. 2019 IEEE Symposium on Security and Privacy (SP). :350–364.
Understanding the privacy relevant characteristics of data sets, such as reidentifiability and joinability, is crucial for data governance, yet can be difficult for large data sets. While computing the data characteristics by brute force is straightforward, the scale of systems and data collected by large organizations demands an efficient approach. We present KHyperLogLog (KHLL), an algorithm based on approximate counting techniques that can estimate the reidentifiability and joinability risks of very large databases using linear runtime and minimal memory. KHLL enables one to measure reidentifiability of data quantitatively, rather than based on expert judgement or manual reviews. Meanwhile, joinability analysis using KHLL helps ensure the separation of pseudonymous and identified data sets. We describe how organizations can use KHLL to improve protection of user privacy. The efficiency of KHLL allows one to schedule periodic analyses that detect any deviations from the expected risks over time as a regression test for privacy. We validate the performance and accuracy of KHLL through experiments using proprietary and publicly available data sets.
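The KHLL algorithm itself is not reproduced in the abstract; purely to illustrate the idea, below is a minimal Python sketch of a KMV-style (K Minimum Values) sketch in the spirit of KHLL, with the per-value user counters kept as exact sets instead of the HyperLogLogs the real algorithm uses. The field names, the value of K, and the risk threshold are all hypothetical.

```python
import hashlib

K = 1024  # number of minimum hashes retained (hypothetical tuning value)

def h64(value):
    """Map a value to a 64-bit integer hash."""
    return int.from_bytes(hashlib.sha256(str(value).encode()).digest()[:8], "big")

def kmv_sketch(records, field, id_field):
    """Keep the K smallest field-value hashes; for each, track the user IDs
    seen with that value (exact sets stand in for KHLL's HyperLogLogs)."""
    sketch = {}
    for rec in records:
        hv = h64(rec[field])
        if hv in sketch:
            sketch[hv].add(rec[id_field])
        elif len(sketch) < K:
            sketch[hv] = {rec[id_field]}
        elif hv < max(sketch):
            del sketch[max(sketch)]
            sketch[hv] = {rec[id_field]}
    return sketch

def uniqueness_ratio(sketch, threshold=10):
    """Estimated fraction of field values seen with fewer than `threshold`
    distinct users -- a rough proxy for reidentifiability risk."""
    risky = sum(1 for ids in sketch.values() if len(ids) < threshold)
    return risky / len(sketch) if sketch else 0.0

records = [{"uid": i, "zip": 1000 + i % 50} for i in range(5000)]
print(uniqueness_ratio(kmv_sketch(records, "zip", "uid")))  # 0.0: no rare values
```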
Puspitaningrum, Diyah, Fernando, Julio, Afriando, Edo, Utama, Ferzha Putra, Rahmadini, Rina, Pinata, Y.  2019.  Finding Local Experts for Dynamic Recommendations Using Lazy Random Walk. 2019 7th International Conference on Cyber and IT Service Management (CITSM). 7:1–6.
Statistics-based privacy-aware recommender systems make suggestions more powerful by extracting knowledge from the log of social contact interactions, but unfortunately they are static; moreover, advice from local experts is effective in finding specific business categories in a particular area. We propose a dynamic recommender algorithm based on a lazy random walk that recommends top-ranked shopping places to potentially interested visitors. We consider both local authority and topical authority. The algorithm was tested on Foursquare shopping data sets of 5 cities in Indonesia with k-steps = 5, 7, 9 (lazy) random walks, and the results were compared with other state-of-the-art ranking techniques. The results show that it can reach high precision scores (0.5, 0.37, and 0.26 on p@1, p@3, and p@5 respectively, for k=5). The algorithm also scales well with respect to execution time. The advantage of dynamicity is that the database used to power the recommender system does not need to be updated very frequently to produce good recommendations.
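As a rough illustration of the ranking step, here is a minimal sketch of a k-step lazy random walk over a toy place-similarity graph; the adjacency matrix, the laziness parameter alpha, and the step count are hypothetical stand-ins for the paper's construction.

```python
import numpy as np

def lazy_random_walk_scores(A, k=5, alpha=0.5):
    """Score nodes with a k-step lazy random walk on adjacency matrix A:
    with probability alpha the walker stays put (the 'lazy' part),
    otherwise it moves to a random neighbour."""
    P = A / A.sum(axis=1, keepdims=True)            # row-stochastic transitions
    L = alpha * np.eye(len(A)) + (1 - alpha) * P    # lazy transition matrix
    scores = np.full(len(A), 1.0 / len(A))          # uniform starting distribution
    for _ in range(k):
        scores = scores @ L
    return scores

# Toy graph: 4 shopping places, edge weights e.g. shared check-ins.
A = np.array([[0, 3, 1, 0],
              [3, 0, 2, 1],
              [1, 2, 0, 4],
              [0, 1, 4, 0]], dtype=float)
print("recommended order:", np.argsort(-lazy_random_walk_scores(A, k=5)))
```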
D'Angelo, Mirko, Gerasimou, Simos, Ghahremani, Sona, Grohmann, Johannes, Nunes, Ingrid, Pournaras, Evangelos, Tomforde, Sven.  2019.  On Learning in Collective Self-Adaptive Systems: State of Practice and a 3D Framework. 2019 IEEE/ACM 14th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS). :13–24.
Collective self-adaptive systems (CSAS) are distributed and interconnected systems composed of multiple agents that can perform complex tasks such as environmental data collection, search and rescue operations, and discovery of natural resources. By providing individual agents with learning capabilities, CSAS can cope with challenges related to distributed sensing and decision-making and operate in uncertain environments. This unique characteristic of CSAS enables the collective to exhibit robust behaviour while achieving system-wide and agent-specific goals. Although learning has been explored in many CSAS applications, selecting suitable learning models and techniques remains a significant challenge that is heavily influenced by expert knowledge. We address this gap by performing a multifaceted analysis of existing CSAS with learning capabilities reported in the literature. Based on this analysis, we introduce a 3D framework that illustrates the learning aspects of CSAS considering the dimensions of autonomy, knowledge access, and behaviour, and facilitates the selection of learning techniques and models. Finally, using example applications from this analysis, we derive open challenges and highlight the need for research on collaborative, resilient and privacy-aware mechanisms for CSAS.
Foreman, Zackary, Bekman, Thomas, Augustine, Thomas, Jafarian, Haadi.  2019.  PAVSS: Privacy Assessment Vulnerability Scoring System. 2019 International Conference on Computational Science and Computational Intelligence (CSCI). :160–165.
Currently, the collection and use of consumer information from online sources by business entities is guided by the Fair Information Practice Principles set forth by the Federal Trade Commission in the United States. These guidelines are inadequate, outdated, and provide little protection for consumers. Moreover, there are many techniques to anonymize the stored data collected by large companies and governments. However, what does not exist is a framework capable of evaluating and scoring the effects of this information in the event of a data breach. In this work, a framework for scoring and evaluating the vulnerability of private data is presented. This framework is created to be used in parallel with currently adopted frameworks that score and evaluate other areas of software deficiencies, such as CVSS and CWSS. It is dubbed the Privacy Assessment Vulnerability Scoring System (PAVSS) and quantifies the privacy-breach vulnerability an individual takes on when using an online platform. The framework is based on a set of hypotheses about user behavior, inherent properties of an online platform, and the usefulness of available data in performing a cyber attack. The weight each of these metrics has within our model is determined by surveying cybersecurity experts. Finally, we test the validity of our user-behavior-based hypotheses, and indirectly our model, by analyzing user posts from a large Twitter data set.
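The actual PAVSS metrics and their survey-derived weights are defined in the paper itself; the sketch below only shows the general shape of such a weighted scoring computation, with entirely hypothetical metric names and weights.

```python
# Hypothetical metrics and weights -- the real PAVSS factors come from the
# paper's hypotheses and its expert survey, not from this sketch.
WEIGHTS = {"data_sensitivity": 0.35,
           "platform_exposure": 0.25,
           "attack_usefulness": 0.40}

def pavss_style_score(metrics, weights=WEIGHTS):
    """Fold normalized metric values (0..1) into a 0..10 score, in the
    spirit of CVSS-like weighted scoring systems."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return 10 * sum(weights[m] * metrics[m] for m in weights)

print(pavss_style_score({"data_sensitivity": 0.8,
                         "platform_exposure": 0.5,
                         "attack_usefulness": 0.9}))  # 7.65
```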
Luma, Artan, Abazi, Blerton, Aliu, Azir.  2019.  An approach to Privacy on Recommended Systems. 2019 3rd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT). :1–5.
Recommender systems are very popular nowadays. They are used online to help users find desired products quickly, and they appear on almost every website, especially those of big companies such as Facebook, eBay, Amazon, Netflix, and others. In specific cases, these systems help the user find a book, movie, article, or product of his or her preference, and they are also used on social networks to meet friends who share similar interests in different fields. Companies use these systems because they bring substantial benefits very quickly. To generate more accurate recommendations, recommender systems rely on the user's personal information, e.g., ratings, observation history, personal profiles, etc. The use of these systems is often necessary, but the way this information is collected, and the privacy of that information, are almost constantly ignored. Many users are unaware of how their information is collected and how it is used. This paper discusses how recommender systems work in different online companies and how safely they can be used without compromising user privacy. Given the widespread use of these systems, an important issue has arisen regarding user privacy and security: collecting personal information for recommender systems increases the risk of unwanted exposure of that information. As a result of this paper, the reader will be aware of how recommender systems function and of the way they collect and use information; privacy protection techniques against recommender systems are also discussed.
2020-10-05
Kang, Anqi.  2018.  Collaborative Filtering Algorithm Based on Trust and Information Entropy. 2018 International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS). 3:262–266.

In order to improve the accuracy of similarity, an improved collaborative filtering algorithm based on trust and information entropy is proposed in this paper. Firstly, the direct trust between users is determined from the users' ratings to explore their potential trust relationships, and a time decay function is introduced to model how a user's interest decays over time. Secondly, direct trust and indirect trust are combined to obtain the overall trust, which is weighted with the Pearson similarity to obtain the trust similarity. Then, information entropy theory is introduced to calculate a similarity based on weighted information entropy. Finally, the trust similarity and the entropy-based similarity are weighted to obtain a combined similarity, which is used to predict the target user's ratings and create the recommendation. Simulations show that the improved algorithm has higher recommendation accuracy and can provide a more accurate and reliable recommendation service.
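The exact formulas (time decay, indirect trust propagation) are given in the paper; the sketch below illustrates only the final step, weighting a trust-adjusted Pearson similarity with an entropy-based weight, using hypothetical mixing parameters.

```python
import numpy as np

def pearson(a, b):
    """Pearson similarity over the items co-rated by both users."""
    mask = (a > 0) & (b > 0)
    if mask.sum() < 2:
        return 0.0
    x, y = a[mask] - a[mask].mean(), b[mask] - b[mask].mean()
    denom = np.sqrt((x * x).sum() * (y * y).sum())
    return float((x * y).sum() / denom) if denom else 0.0

def entropy_weight(a, b, bins=6):
    """Down-weight user pairs whose rating gaps are erratic, using the
    Shannon entropy of the distribution of co-rated rating differences."""
    mask = (a > 0) & (b > 0)
    if not mask.any():
        return 0.0
    p = np.bincount(np.abs(a[mask] - b[mask]), minlength=bins) / mask.sum()
    p = p[p > 0]
    return 1.0 / (1.0 + -(p * np.log2(p)).sum())  # higher entropy, lower weight

def combined_similarity(a, b, trust, beta=0.5, gamma=0.5):
    """`trust` stands for the paper's combined direct/indirect trust value,
    computed elsewhere; beta and gamma are hypothetical mixing weights."""
    trust_sim = beta * trust + (1 - beta) * pearson(a, b)
    return gamma * trust_sim + (1 - gamma) * entropy_weight(a, b)

a = np.array([5, 3, 0, 4, 4]); b = np.array([4, 3, 2, 4, 5])
print(combined_similarity(a, b, trust=0.7))  # 0.55
```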

Ong, Desmond, Soh, Harold, Zaki, Jamil, Goodman, Noah.  2019.  Applying Probabilistic Programming to Affective Computing. IEEE Transactions on Affective Computing. :1–1.

Affective Computing is a rapidly growing field spurred by advancements in artificial intelligence, but it is often held back by the inability to translate psychological theories of emotion into tractable computational models. To address this, we propose a probabilistic programming approach to affective computing, which models psychologically-grounded theories as generative models of emotion and implements them as stochastic, executable computer programs. We first review probabilistic approaches that integrate reasoning about emotions with reasoning about other latent mental states (e.g., beliefs, desires) in context. Recently-developed probabilistic programming languages offer several key desiderata over previous approaches, such as: (i) flexibility in representing emotions and emotional processes; (ii) modularity and compositionality; (iii) integration with deep learning libraries that facilitate efficient inference and learning from large, naturalistic data; and (iv) ease of adoption. Furthermore, a probabilistic programming framework provides a standardized platform for theory-building and experimentation: competing theories (e.g., of appraisal or other emotional processes) can be easily compared via modular substitution of code followed by model comparison. To jumpstart adoption, we illustrate our points with executable code that researchers can easily modify for their own models. We end with a discussion of applications and future directions of the probabilistic programming approach.
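As a toy illustration of "theories as generative models", the following sketch forward-simulates a minimal appraisal-style emotion model in plain Python; in a real probabilistic programming language the sampling statements would be traced so that inference can condition on observed emotions. All distributions and parameters here are invented for illustration.

```python
import random

def emotion_model(prize=100):
    """Toy appraisal-theory generative model: sample a latent expectation,
    sample an outcome from the world, appraise the outcome against the
    expectation, and emit a noisy emotional response."""
    expectation = random.gauss(prize * 0.5, 10)   # latent belief about the outcome
    outcome = random.choice([0, prize])           # what actually happens
    appraisal = outcome - expectation             # better or worse than expected?
    valence = max(-1.0, min(1.0, appraisal / prize))
    return valence + random.gauss(0, 0.05)        # observed emotion intensity

# Forward simulation; a PPL would instead infer latent states from data.
samples = [emotion_model() for _ in range(1000)]
print(sum(samples) / len(samples))
```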

Liu, Donglei, Niu, Zhendong, Zhang, Chunxia, Zhang, Jiadi.  2019.  Multi-Scale Deformable CNN for Answer Selection. IEEE Access. 7:164986–164995.

The answer selection task is one of the most important issues within automatic question answering systems, and it aims to automatically find accurate answers to questions. Traditional methods for this task use manually generated features based on tf-idf and n-gram models to represent texts, and then select the right answers according to the similarity between the representations of questions and candidate answers. Nowadays, many question answering systems adopt deep neural networks such as convolutional neural networks (CNN) to generate text features automatically, and obtain better performance than traditional methods. A CNN can extract consecutive n-gram features of fixed length by sliding fixed-length convolutional kernels over the whole word sequence. However, due to the complex semantic compositionality of natural language, there are many phrases of variable length composed of non-consecutive words, such as phrases whose constituents are separated by other words within the same sentence, and the traditional CNN is unable to extract variable-length or non-consecutive n-gram features. In this paper, we propose a multi-scale deformable convolutional neural network that captures non-consecutive n-gram features by adding offsets to the convolutional kernel, and we propose stacking multiple deformable convolutional layers to mine multi-scale n-gram features, generating longer n-grams in higher layers. Furthermore, we apply the proposed model to the task of answer selection. Experimental results on public datasets demonstrate the effectiveness of our proposed model in answer selection.
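To make non-consecutive n-gram extraction concrete, here is a toy sketch in which each kernel tap reads an offset position in the word sequence; real deformable convolutions learn fractional offsets and interpolate bilinearly, so the fixed integer offsets below are a simplification.

```python
import numpy as np

def deformable_ngram_features(E, offsets, W):
    """Toy 1-D 'deformable' convolution over a word-embedding matrix E
    (seq_len x dim): tap j of the kernel reads position i + j + offsets[j]
    instead of i + j, so an extracted 'n-gram' may skip intervening words."""
    seq_len, dim = E.shape
    n = len(offsets)
    feats = []
    for i in range(seq_len):
        idx = [min(seq_len - 1, max(0, i + j + offsets[j])) for j in range(n)]
        window = E[idx].reshape(-1)                 # gathered, non-consecutive taps
        feats.append(np.maximum(0.0, window @ W))   # ReLU of the kernel response
    return np.array(feats)

E = np.random.randn(10, 8)        # 10 words, 8-dim embeddings
W = np.random.randn(3 * 8, 4)     # 3-tap kernel, 4 output channels
print(deformable_ngram_features(E, offsets=[0, 2, 4], W=W).shape)  # (10, 4)
```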

Joseph, Matthew, Mao, Jieming, Neel, Seth, Roth, Aaron.  2019.  The Role of Interactivity in Local Differential Privacy. 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS). :94–105.

We study the power of interactivity in local differential privacy. First, we focus on the difference between fully interactive and sequentially interactive protocols. Sequentially interactive protocols may query users adaptively in sequence, but they cannot return to previously queried users. The vast majority of existing lower bounds for local differential privacy apply only to sequentially interactive protocols, and before this paper it was not known whether fully interactive protocols were more powerful. We resolve this question. First, we classify locally private protocols by their compositionality, the multiplicative factor by which the sum of a protocol's single-round privacy parameters exceeds its overall privacy guarantee. We then show how to efficiently transform any fully interactive compositional protocol into an equivalent sequentially interactive protocol with a blowup in sample complexity linear in this compositionality. Next, we show that our reduction is tight by exhibiting a family of problems such that any sequentially interactive protocol requires this blowup in sample complexity over a fully interactive compositional protocol. We then turn our attention to hypothesis testing problems. We show that for a large class of compound hypothesis testing problems - which include all simple hypothesis testing problems as a special case - a simple noninteractive test is optimal among the class of all (possibly fully interactive) tests.
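In the terminology quoted above, the central quantity and the cost of the reduction can be summarized as follows (a paraphrase in standard notation, not the authors' exact statement): for a fully interactive ε-locally-private protocol with single-round parameters ε_1, …, ε_T, its compositionality k and the sample complexity of the transformed sequentially interactive protocol satisfy

```latex
k \;=\; \frac{\sum_{t=1}^{T} \varepsilon_t}{\varepsilon},
\qquad
n_{\mathrm{seq}} \;=\; O\!\left(k \cdot n_{\mathrm{full}}\right).
```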

Gamba, Matteo, Azizpour, Hossein, Carlsson, Stefan, Björkman, Mårten.  2019.  On the Geometry of Rectifier Convolutional Neural Networks. 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). :793–797.

While recent studies have shed light on the expressivity, complexity and compositionality of convolutional networks, the real inductive bias of the family of functions reachable by gradient descent on natural data is still unknown. By exploiting symmetries in the preactivation space of convolutional layers, we present preliminary empirical evidence of regularities in the preimage of trained rectifier networks, in terms of arrangements of polytopes, and relate it to the nonlinear transformations applied by the network to its input.

Li, Xilai, Song, Xi, Wu, Tianfu.  2019.  AOGNets: Compositional Grammatical Architectures for Deep Learning. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :6213–6223.

Neural architectures are the foundation for improving the performance of deep neural networks (DNNs). This paper presents deep compositional grammatical architectures that harness the best of two worlds: grammar models and DNNs. The proposed architectures integrate the compositionality and reconfigurability of the former with the capability of learning rich features of the latter in a principled way. We utilize an AND-OR Grammar (AOG) as the network generator and call the resulting networks AOGNets. An AOGNet consists of a number of stages, each of which is composed of a number of AOG building blocks. An AOG building block splits its input feature map into N groups along the feature channels and then treats them as a sentence of N words. It then jointly realizes a phrase structure grammar and a dependency grammar in parsing the "sentence" bottom-up for better feature exploration and reuse, providing a unified framework for the best practices developed in state-of-the-art DNNs. In experiments, AOGNet is tested on the ImageNet-1K classification benchmark and the MS-COCO object detection and segmentation benchmark. On ImageNet-1K, AOGNet obtains better performance than ResNet and most of its variants, ResNeXt and its attention-based variants such as SENet, DenseNet, and DualPathNet. AOGNet also obtains the best model interpretability score using network dissection, and shows better potential in adversarial defense. On MS-COCO, AOGNet obtains better performance than the ResNet and ResNeXt backbones in Mask R-CNN.
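As a toy illustration of the "sentence of N words" idea, the sketch below splits a feature map into channel groups and recombines them with AND (concatenation) and OR (elementwise choice) nodes; the real building block interleaves learned convolutions at every node and parses with an actual grammar, so this shows only the compositional skeleton.

```python
import numpy as np

def and_node(children):
    """AND-node: compose child 'words' by concatenating channel groups."""
    return np.concatenate(children, axis=0)

def or_node(alternatives):
    """OR-node: choose among alternative parses via elementwise max."""
    return np.maximum.reduce(alternatives)

def aog_block(feature_map, n_groups=4):
    """Toy AOG building block over a CxHxW feature map: split channels
    into N 'words', build two phrases, and OR the full parse against
    the identity (unparsed) branch."""
    words = np.split(feature_map, n_groups, axis=0)
    left = and_node(words[:2])       # phrase (w0 w1)
    right = and_node(words[2:])      # phrase (w2 w3)
    parsed = and_node([left, right])
    return or_node([parsed, feature_map])

x = np.random.randn(16, 8, 8)
print(aog_block(x).shape)  # (16, 8, 8)
```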

Rakotonirina, Itsaka, Köpf, Boris.  2019.  On Aggregation of Information in Timing Attacks. 2019 IEEE European Symposium on Security and Privacy (EuroS P). :387–400.

A key question for characterising a system's vulnerability against timing attacks is whether or not it allows an adversary to aggregate information about a secret over multiple timing measurements. Existing approaches for reasoning about this aggregate information rely on strong assumptions about the capabilities of the adversary in terms of measurement and computation, which is why they fall short in modelling, explaining, or synthesising real-world attacks against cryptosystems such as RSA or AES. In this paper we present a novel model for reasoning about information aggregation in timing attacks. The model is based on a novel abstraction of timing measurements that better captures the capabilities of real-world adversaries, and a notion of compositionality of programs that explains attacks by divide-and-conquer. Our model thus lifts important limiting assumptions made in prior work and enables us to give the first uniform explanation of high-profile timing attacks in the language of information-flow analysis.

2020-09-28
Liu, Kai, Zhou, Yun, Wang, Qingyong, Zhu, Xianqiang.  2019.  Vulnerability Severity Prediction With Deep Neural Network. 2019 5th International Conference on Big Data and Information Analytics (BigDIA). :114–119.
The high frequency of network security incidents has brought many negative effects, and even huge economic losses, to countries, enterprises and individuals in recent years, so more and more attention has been paid to the problem of network security. In order to evaluate newly disclosed vulnerability descriptions accurately, and to reduce both the workload of experts and the false negative rate of traditional methods, multiple deep learning methods for vulnerability text classification are proposed in this paper. The standard Cross Site Scripting (XSS) vulnerability text data is processed first, and then classified using three kinds of deep neural networks (CNN, LSTM, TextRCNN) and one traditional machine learning method (XGBoost). The dropout ratio of the optimal CNN network, the epochs of all deep neural networks, and the training set data were tuned via experiments to improve the fit on our target task. The results show that the deep learning methods evaluate vulnerability risk levels better than traditional machine learning methods, but cost more time. We train our models on various training sets and test with the same testing set. The performance and utility of the recurrent convolutional neural network (TextRCNN) is highest of all methods, with a classification accuracy of 93.95%.
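As an illustration of the kind of CNN text classifier evaluated here, the following is a minimal PyTorch sketch; the vocabulary size, embedding dimension, kernel sizes, dropout ratio, and number of severity classes are placeholder values, not the paper's tuned settings.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """Minimal CNN classifier over tokenized vulnerability descriptions."""
    def __init__(self, vocab=20000, dim=128, n_classes=4, kernels=(3, 4, 5)):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.convs = nn.ModuleList([nn.Conv1d(dim, 64, k) for k in kernels])
        self.drop = nn.Dropout(0.5)   # dropout ratio tuned experimentally in the paper
        self.fc = nn.Linear(64 * len(kernels), n_classes)

    def forward(self, tokens):                     # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)       # -> (batch, dim, seq_len)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(self.drop(torch.cat(pooled, dim=1)))

model = TextCNN()
logits = model(torch.randint(0, 20000, (8, 60)))   # batch of 8 vulnerability texts
print(logits.shape)                                # torch.Size([8, 4])
```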
Li, Lin, Wei, Linfeng.  2019.  Automatic XSS Detection and Automatic Anti-Anti-Virus Payload Generation. 2019 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC). :71–76.
In the Web 2.0 era, user interaction makes Web applications more diverse, but brings threats, among which XSS vulnerabilities are common and pernicious. In order to improve the efficiency of XSS detection, this paper investigates the parameter characteristics of malicious XSS attacks. We identify whether a parameter is malicious by classifying user input parameters with an SVM algorithm. The original malicious XSS parameters are then deformed through reinforcement learning with a DQN algorithm so that they evade a rule-based WAF (anti-anti-virus); based on this method, we can determine whether a specific WAF is secure. The above model yields a more efficient automatic XSS detection tool and a more targeted automatic anti-anti-virus payload generation tool. This paper also explores the automatic generation of XSS attack code with an RNN LSTM algorithm.
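A minimal sketch of the SVM-based malicious-parameter classification step, assuming character n-gram TF-IDF features (the paper's actual feature extraction may differ); the tiny training set below is invented for illustration, where a real deployment would train on large labelled traffic logs.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy labelled parameter values: 0 = benign, 1 = malicious.
params = ["id=42", "page=home", "q=<script>alert(1)</script>",
          "name=bob", "cb=javascript:alert(document.cookie)"]
labels = [0, 0, 1, 0, 1]

# Character n-grams cope with the obfuscation tricks common in XSS payloads.
clf = make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
                    LinearSVC())
clf.fit(params, labels)
print(clf.predict(["user=<img src=x onerror=alert(1)>"]))  # expect [1]
```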
Akaishi, Sota, Uda, Ryuya.  2019.  Classification of XSS Attacks by Machine Learning with Frequency of Appearance and Co-occurrence. 2019 53rd Annual Conference on Information Sciences and Systems (CISS). :1–6.
Cross-site scripting (XSS) is one of the most common attacks on the web. It enables session hijacking via HTTP cookies, information collection via fake HTML input forms, and phishing via dummy sites. As a countermeasure against XSS attacks, machine learning has attracted a lot of attention, and there is existing research in which SVM, Random Forest and SCW are used for attack detection. However, these studies suffer from data sets that are too small or unbalanced, and from preprocessing methods for string vectorization that cause misclassification. The highest classification accuracy in existing research was 98%. In this paper, we therefore improved the preprocessing for vectorization by using word2vec to capture the frequency of appearance and co-occurrence of the words in XSS attack scripts, and we used a large data set to decrease the deviation of the data. Furthermore, we evaluated the classification results with two procedures: an inappropriate one that some researchers tend to select by mistake, and an appropriate one that can be applied to an attack detection filter in a real environment.
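As a sketch of the word2vec-based preprocessing, the snippet below trains embeddings on tokenized scripts and averages them into fixed-length feature vectors for a downstream classifier; the toy corpus, vector size, and mean-pooling choice are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from gensim.models import Word2Vec

# Toy tokenized inputs; the paper trains on a large corpus of XSS payloads
# and benign requests so frequency and co-occurrence statistics are stable.
corpus = [["<script>", "alert", "(", "1", ")", "</script>"],
          ["<img", "src", "=", "x", "onerror", "=", "alert", "(", "1", ")", ">"],
          ["hello", "world", "search", "query"]]

w2v = Word2Vec(corpus, vector_size=32, window=3, min_count=1, epochs=50)

def vectorize(tokens):
    """Represent an input as the mean of its token vectors: one simple way
    to turn co-occurrence-based embeddings into fixed-length features."""
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(32, dtype=np.float32)

print(vectorize(["alert", "(", "1", ")"]).shape)  # (32,)
```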
Lv, Chengcheng, Zhang, Long, Zeng, Fanping, Zhang, Jian.  2019.  Adaptive Random Testing for XSS Vulnerability. 2019 26th Asia-Pacific Software Engineering Conference (APSEC). :63–69.
XSS is one of the common vulnerabilities in web applications. Many black-box testing tools collect a large number of payloads and traverse them to find one that can be successfully injected, but they are not very efficient, and previous research has paid little attention to improving the efficiency of black-box testing for XSS vulnerability detection. To improve testing efficiency, we developed an XSS testing tool that collects 6128 payloads and uses a headless browser to detect XSS vulnerabilities. The tool can discover XSS vulnerabilities quickly using the ART (Adaptive Random Testing) method. We conducted an experiment using 3 extensively adopted open source vulnerable benchmarks and 2 actual websites to evaluate the ART method. The experimental results indicate that the ART method can effectively improve on the fuzzing method by more than 27.1% in reducing the number of attempts before accomplishing a successful injection.
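The essence of ART is to spread test cases out: the sketch below picks, from a random candidate set, the payload farthest (by minimum distance) from everything already tried, using a stdlib string similarity as a stand-in for whatever distance metric the tool actually employs.

```python
import random
from difflib import SequenceMatcher

def dissimilarity(a, b):
    return 1.0 - SequenceMatcher(None, a, b).ratio()

def next_payload(executed, pool, n_candidates=10):
    """ART step: from a random candidate set, pick the payload farthest
    (by minimum distance) from all previously executed payloads, so that
    successive attempts spread over the payload space."""
    candidates = random.sample(pool, min(n_candidates, len(pool)))
    return max(candidates,
               key=lambda c: min((dissimilarity(c, e) for e in executed),
                                 default=1.0))

pool = ["<script>alert(1)</script>", "<img src=x onerror=alert(1)>",
        "<svg/onload=alert(1)>", "javascript:alert(1)"]
executed = ["<script>alert(1)</script>"]
print(next_payload(executed, pool))
```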
Zhang, Xun, Zhao, Jinxiong, Yang, Fan, Zhang, Qin, Li, Zhiru, Gong, Bo, Zhi, Yong, Zhang, Xuejun.  2019.  An Automated Composite Scanning Tool with Multiple Vulnerabilities. 2019 IEEE 3rd Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC). :1060–1064.
In order to protect network security effectively, detecting system vulnerabilities becomes an indispensable process. Here, vulnerability detection modules with three functions are assembled into a single device, and a composite detection tool with multiple functions is proposed to deal with some frequent vulnerabilities. The tool covers three types of vulnerability detection: cross-site scripting attacks, SQL injection, and directory traversal. We first introduce the principle of each type of vulnerability; then, we describe the detection method for each type; finally, we detail the defenses against each type. The benefits are: first, the cost of manual testing is eliminated; second, work efficiency is greatly improved; and third, the network can be operated safely from the start.
Ibrahim, Ahmed, El-Ramly, Mohammad, Badr, Amr.  2019.  Beware of the Vulnerability! How Vulnerable are GitHub's Most Popular PHP Applications? 2019 IEEE/ACS 16th International Conference on Computer Systems and Applications (AICCSA). :1–7.
The presence of software vulnerabilities is a serious threat to any software project. Exploiting them can compromise system availability, data integrity, and confidentiality. Unfortunately, many open source projects go for years with undetected, ready-to-exploit critical vulnerabilities. In this study, we investigate the presence of software vulnerabilities in open source projects and the factors that influence this presence. We analyzed the top 100 open source PHP applications on GitHub using a static analysis vulnerability scanner to examine how common software vulnerabilities are. We also discuss which vulnerabilities are most present and what factors contribute to their presence. We found that 27% of these projects are insecure, with a median of 3 vulnerabilities per vulnerable project. The most common type is injection vulnerabilities, which made up 58% of all detected vulnerabilities; of these, cross-site scripting (XSS) was the most common, making up 43.5% of all vulnerabilities found. Statistical analysis revealed that project activities like branching, pulling, and committing have a moderate positive correlation with the number of vulnerabilities in a project, while other factors like project popularity, number of releases, and number of issues had almost no influence. We recommend that open source project owners set secure code development guidelines for their project members and establish secure code reviews as part of the project's development process.
Simos, Dimitris E., Garn, Bernhard, Zivanovic, Jovan, Leithner, Manuel.  2019.  Practical Combinatorial Testing for XSS Detection using Locally Optimized Attack Models. 2019 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW). :122–130.
In this paper, we present a combinatorial testing methodology for automated black-box security testing of complex web applications, focusing on the identification of Cross-site Scripting (XSS) vulnerabilities. We introduce a new modelling scheme for test case generation of XSS attack vectors consisting of locally optimized attack models. The modelling approach takes into account the response and behavior of the web application and is particularly efficient when used in conjunction with combinatorial testing. In addition to the modelling scheme, we present a research prototype of a security testing tool called XSSInjector, which executes attack vectors generated by our methodology against web applications. The tool also employs a newly developed test oracle for detecting XSS, which allows us to precisely identify whether injected JavaScript is actually executed and thus eliminate false positives. Our testing methodology is sufficiently generic to be applied to any web application that returns HTML code. We describe the foundations of our approach and validate it via an extensive case study using a verification framework and real-world web applications. In particular, we have found several new critical vulnerabilities in popular forum software, library management systems and gallery packages.
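As an illustration of deriving attack vectors from a parameterized attack model, here is a toy sketch using the full Cartesian product; actual combinatorial testing would instead generate a t-way covering array over the parameters (which is what makes the approach scale), and the model below is hypothetical, not the paper's locally optimized one.

```python
from itertools import product

# Hypothetical attack model: each parameter is one degree of freedom of an
# XSS vector. Real combinatorial testing replaces the full product with a
# t-way covering array over these parameters.
ATTACK_MODEL = {
    "tag":     ["<script>", "<img", "<svg"],
    "handler": ["", " onerror=", " onload="],
    "payload": ["alert(1)", "fetch('//evil.example')"],
    "close":   [">", "//>"],
}

def attack_vectors(model):
    keys = list(model)
    for combo in product(*(model[k] for k in keys)):
        yield "".join(combo)

vectors = list(attack_vectors(ATTACK_MODEL))
print(len(vectors), vectors[0])  # 36 toy vectors
```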
Rodriguez, German, Torres, Jenny, Flores, Pamela, Benavides, Eduardo, Nuñez-Agurto, Daniel.  2019.  XSStudent: Proposal to Avoid Cross-Site Scripting (XSS) Attacks in Universities. 2019 3rd Cyber Security in Networking Conference (CSNet). :142–149.
QR codes are a means of offering more direct and instant access to information. However, QR codes have shown a deficiency: they are a very powerful attack vector, for example for executing phishing attacks. In this study, we propose a solution that allows controlling access to the information offered by QR codes. Through a scanner designed in App Inventor, called XSStudent, we have built a system that analyzes the obtained URLs and compares them with a previously trained system. The study was carried out by means of a controlled attack on university users who, through a flyer with a QR code and a fictitious link, accessed a page infected with JavaScript code that allowed a successful cross-site scripting attack. The results indicate that 100% of the users are vulnerable to this type of attack, and also that, with our proposal, an attack executed at the universities using the BeEF software would be totally blocked.
Patel, Keyur.  2019.  A Survey on Vulnerability Assessment Penetration Testing for Secure Communication. 2019 3rd International Conference on Trends in Electronics and Informatics (ICOEI). :320–325.
As technology grows rapidly, the development of systems and software becomes more complex, and for this reason the security of software and web applications becomes more vulnerable. In the last two decades, the use of internet applications and security hacking activities have both risen sharply. Organizations face the major challenge of securing their web applications against rapidly increasing cyber threats, because they cannot compromise the security of their sensitive information. Vulnerability Assessment and Penetration Testing techniques may help organizations find security loopholes; a weakness can become an asset for the attacker if the organization is not aware of it. Vulnerability Assessment and Penetration Testing helps an organization cover its security loopholes and determine whether its security arrangements are working as per defined policies. To close these gaps and mitigate the threats, it is necessary to install security patches. This paper includes a survey on current vulnerabilities, the determination of those vulnerabilities, the methodology and tools used for that determination, and how to secure organizations from cyber threats.
Mohammadi, Mahmoud, Chu, Bill, Richter Lipford, Heather.  2019.  Automated Repair of Cross-Site Scripting Vulnerabilities through Unit Testing. 2019 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW). :370–377.
Many web applications are vulnerable to Cross Site Scripting (XSS) attacks, enabling attackers to steal sensitive information and commit fraud. Much research in this area has focused on detecting vulnerable web pages using static and dynamic program analysis. The best practice for preventing XSS vulnerabilities is to encode untrusted dynamic content; however, a common programming error is the use of the wrong type of encoder to sanitize untrusted data, leaving the application vulnerable. We propose a new approach that can automatically fix this common type of XSS vulnerability in many situations. The approach is integrated into the software maintenance life cycle through unit testing. Vulnerable code is refactored to reflect the suggested encoder and then verified using an attack evaluating mechanism to find a proper repair. Evaluation of this approach has been conducted on an open source medical record application with over 200 web pages written in JSP.
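A minimal sketch of the core idea, context-sensitive encoder selection checked by a unit-test-style oracle, using Python stdlib encoders as stand-ins for the encoders a JSP application would use; the context names and the probe string are illustrative.

```python
import html
from urllib.parse import quote

# Context -> encoder map; applying the wrong entry is precisely the
# programming error the paper's repair targets. Names are illustrative.
ENCODERS = {
    "html_body":      lambda s: html.escape(s, quote=False),  # <div>HERE</div>
    "html_attribute": html.escape,                            # value="HERE"
    "url_parameter":  quote,                                  # href="/x?q=HERE"
}

def render(context, untrusted):
    return ENCODERS[context](untrusted)

# Unit-test-style check in the spirit of the paper's attack-evaluating
# mechanism: the encoded output must not contain an executable script.
probe = '"><script>alert(1)</script>'
for ctx in ENCODERS:
    assert "<script>" not in render(ctx, probe), f"{ctx} repair failed"
print("all contexts neutralize the probe")
```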
Andreoletti, Davide, Rottondi, Cristina, Giordano, Silvia, Verticale, Giacomo, Tornatore, Massimo.  2019.  An Open Privacy-Preserving and Scalable Protocol for a Network-Neutrality Compliant Caching. ICC 2019 - 2019 IEEE International Conference on Communications (ICC). :1–6.
The distribution of video content generated by Content Providers (CPs) significantly contributes to congestion within the networks of Internet Service Providers (ISPs). To alleviate this problem, CPs can serve a portion of their catalogues to end users directly from servers (i.e., caches) located inside the ISP network. Users served from caches perceive an increased QoS (e.g., reduced average retrieval latency), and for this reason caching can be considered a form of traffic prioritization. Hence, since cache storage is limited, its subdivision among several CPs may lead to discrimination. A static subdivision that assigns each CP the same portion of storage is neutral but ineffective, because it does not consider the different popularities of the CPs' contents. A more effective strategy is to divide the cache among the CPs proportionally to the popularity of their contents; however, CPs consider this information sensitive and are reluctant to disclose it. In this work, we propose a protocol based on the Shamir Secret Sharing (SSS) scheme that allows the ISP to calculate the portion of cache storage that a CP is entitled to receive while guaranteeing network neutrality and resource efficiency, without violating the CP's privacy. The protocol is executed by the ISP, the CPs and a Regulator Authority (RA) that guarantees the actual enforcement of a fair subdivision of the cache storage and the preservation of privacy. We perform extensive simulations and show that our approach leads to higher hit rates (i.e., the percentage of requests served by the cache) than the static approach, with particularly significant advantages when the cache storage is limited.
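As an illustration of the SSS building block, the sketch below shares each CP's private popularity count and exploits the additive homomorphism of Shamir shares, so that only the aggregate is ever reconstructed; the field prime, threshold, and counts are illustrative, and the paper's full protocol involving the RA is not reproduced.

```python
import random

P = 2**61 - 1  # prime field modulus (illustrative choice)

def share(secret, n, t):
    """Split `secret` into n Shamir shares with reconstruction threshold t."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the shared value."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

# Each CP secret-shares its content popularity; adding shares pointwise
# shares the sum, so only the total ever becomes known.
popularities = [120, 75, 305]                      # three CPs, kept private
all_shares = [share(p, n=3, t=2) for p in popularities]
summed = [(x, sum(s[k][1] for s in all_shares) % P)
          for k, (x, _) in enumerate(all_shares[0])]
print(reconstruct(summed[:2]))                     # 500: the sum, nothing more
```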