Biblio

Found 19480 results

Z
Zuxing Gu, Hong Song, Yu Jiang, Jeonghone Choi, Hongjiang He, Lui Sha, Ming Gu.  2016.  An integrated Medical CPS for early detection of paroxysmal sympathetic hyperactivity. 2016 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). :818-822.
Zuva, Keneilwe, Zuva, Tranos.  2017.  Diversity and Serendipity in Recommender Systems. Proceedings of the International Conference on Big Data and Internet of Thing. :120–124.

The present age of digital information has produced a heterogeneous online environment in which it is a formidable task for an ordinary user to search for and locate the required online resources in a timely manner. Recommender systems were introduced to address this information-overload problem. However, the majority of recommendation algorithms focus on the accuracy of the recommendations, leaving out other important aspects of a good recommendation such as diversity and serendipity. This also results in low coverage, as long-tail items are often left out of the recommendations. In this paper, we present and explore a recommendation technique that ensures that diversity, accuracy and serendipity are all factored into the recommendations. The proposed algorithm performed well compared to other algorithms in the literature.
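To make the diversity and serendipity notions concrete, the following is a minimal Python sketch, not the authors' algorithm, of two metrics commonly used for them: intra-list diversity (average pairwise dissimilarity of the recommended items) and an unexpectedness-based serendipity score. The item vectors and the expected/relevant sets are assumptions for the example.

```python
import numpy as np

def intra_list_diversity(item_vectors):
    """Average pairwise cosine distance of the recommended items."""
    v = np.asarray(item_vectors, dtype=float)
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    sim = v @ v.T
    n = len(v)
    # mean of (1 - cosine similarity) over the n*(n-1) ordered pairs
    return (n * n - sim.sum()) / (n * (n - 1)) if n > 1 else 0.0

def serendipity(recommended, expected, relevant):
    """Fraction of recommendations that are both unexpected and relevant."""
    unexpected = set(recommended) - set(expected)
    return len(unexpected & set(relevant)) / len(recommended)

recs = ["i1", "i2", "i3"]
print(intra_list_diversity([[1, 0], [0.9, 0.1], [0, 1]]))         # higher = more diverse
print(serendipity(recs, expected=["i1"], relevant=["i2", "i3"]))  # 2/3
```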

Zurek, E.E., Gamarra, A.M.R., Escorcia, G.J.R., Gutierrez, C., Bayona, H., Perez, R., Garcia, X..  2014.  Spectral analysis techniques for acoustic fingerprints recognition. Image, Signal Processing and Artificial Vision (STSIVA), 2014 XIX Symposium on. :1-5.

This article presents the results of a process for recognizing acoustic fingerprints from a noise source using spectral characteristics of the signal. Principal Component Analysis (PCA) is applied to reduce the dimensionality of the extracted features, and a classifier is then implemented using the k-nearest neighbors (KNN) method to identify the pattern of the audio signal. This classifier is compared with an Artificial Neural Network (ANN) implementation. It was necessary to apply a filtering system to the acquired signals to reduce the 60 Hz noise generated by imperfections in the acquisition system. The methods described in this paper were used for vessel recognition.
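A minimal sketch of the recognition pipeline the abstract describes, with assumed details (sampling rate, feature length, and toy data): a 60 Hz notch filter for the acquisition noise, magnitude-spectrum features, then PCA for dimensionality reduction feeding a k-NN classifier.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

fs = 8000  # assumed sampling rate (Hz)
b, a = iirnotch(w0=60.0, Q=30.0, fs=fs)  # notch filter for the 60 Hz mains noise

def spectral_features(signal):
    clean = filtfilt(b, a, signal)            # remove acquisition interference
    return np.abs(np.fft.rfft(clean))[:256]   # crude magnitude-spectrum features

# X: one row of spectral features per recording; y: noise-source labels (toy data)
X = np.array([spectral_features(np.random.randn(fs)) for _ in range(20)])
y = np.array([0, 1] * 10)

clf = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=3))
clf.fit(X, y)
print(clf.predict(X[:2]))
```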

Zúquete, André.  2016.  Exploitation of Dual Channel Transmissions to Increase Security and Reliability in Classic Bluetooth Piconets. Proceedings of the 12th ACM Symposium on QoS and Security for Wireless and Mobile Networks. :55–60.

In this paper we discuss several improvements to the security and reliability of a classic Bluetooth network (piconet) that arise from being able to transmit the same frame on two frequencies in each slot, instead of only one as in the current standard. Furthermore, we build upon this possibility and show that piconet participants can explore many strategies to increase the security of their communications by confounding eavesdroppers, such as multiple hopping sequences, random selection of a hopping sequence on each transmission slot, and variable frame encryption per hopping sequence. Finally, all of this can be decided independently by any piconet participant, without having to agree in real time on some type of service with the other participants of the same piconet.
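As a rough illustration of one of these strategies, the sketch below (a toy under assumed details, not the paper's protocol) derives per-slot channels from a shared secret and picks one of several hopping sequences at random in each slot, so an eavesdropper who does not know the secret cannot predict the two frequencies used.

```python
import hashlib
import secrets

CHANNELS = 79  # classic Bluetooth RF channels

def channel(shared_secret: bytes, seq_id: int, slot: int) -> int:
    """Derive the channel used by hopping sequence seq_id in a given slot."""
    h = hashlib.sha256(shared_secret + bytes([seq_id]) + slot.to_bytes(4, "big"))
    return h.digest()[0] % CHANNELS

secret = b"piconet-link-key"  # hypothetical shared key
n_sequences = 4               # multiple hopping sequences per participant

for slot in range(3):
    seq = secrets.randbelow(n_sequences)  # chosen independently in each slot
    f1 = channel(secret, seq, slot)
    f2 = channel(secret, seq + n_sequences, slot)  # second frequency per slot
    print(f"slot {slot}: sequence {seq} -> channels {f1} and {f2}")
```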

Zuo, Zhiqiang, Tian, Ran, Wang, Yijing.  2021.  Bipartite Consensus for Multi-Agent Systems with Differential Privacy Constraint. 2021 40th Chinese Control Conference (CCC). :5062–5067.
This paper studies the differential privacy-preserving problem of discrete-time multi-agent systems (MASs) with antagonistic information, where the connected signed graph is structurally balanced. First, we introduce the bipartite consensus definitions in the sense of mean square and almost sure convergence, respectively. Second, some criteria for mean square and almost sure bipartite consensus are derived, where the eventual value is related to the gauge matrix and the agents' initial states. Third, we design the ε-differential privacy algorithm and characterize the tradeoff between differential privacy and system performance. Finally, simulations validate the effectiveness of the proposed algorithm.
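The following toy simulation sketches the setting, with an assumed signed graph, gains, and noise schedule rather than the paper's exact algorithm: agents on a structurally balanced signed graph exchange Laplace-perturbed states and converge to values that agree in modulus but differ in sign across the two groups.

```python
import numpy as np

A = np.array([[0, 1, -1],
              [1, 0, -1],
              [-1, -1, 0]], dtype=float)  # signed adjacency; balanced groups {1,2} vs {3}
sigma = np.array([1.0, 1.0, -1.0])        # gauge: +1 / -1 per group
x = np.array([4.0, 2.0, -3.0])            # initial states
eps, step = 1.0, 0.1

rng = np.random.default_rng(0)
for k in range(200):
    noise = rng.laplace(scale=(1.0 / eps) * 0.95 ** k, size=3)  # decaying Laplace noise
    y = x + noise                          # privatized states that are broadcast
    # bipartite consensus update: x_i += step * sum_j |a_ij| * (sgn(a_ij)*y_j - x_i)
    for i in range(3):
        x[i] += step * sum(abs(A[i, j]) * (np.sign(A[i, j]) * y[j] - x[i])
                           for j in range(3) if A[i, j] != 0)

print(x)          # roughly c, c, -c: agreement in modulus, opposite signs
print(sigma * x)  # the gauge transformation recovers an ordinary consensus
```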
Zuo, Xinbin, Pang, Xue, Zhang, Pengping, Zhang, Junsan, Dong, Tao, Zhang, Peiying.  2020.  A Security-Aware Software-Defined IoT Network Architecture. 2020 IEEE Computing, Communications and IoT Applications (ComComAp). :1–5.
With the improvement of people's living standards, more and more users and devices, including a large amount of infrastructure, are connected to the network; these devices constitute the Internet of Things (IoT). With the rapid expansion of devices in the IoT, data transmission between devices has become more complex, and security issues face greater challenges. SDN is a mature network architecture whose security has been affirmed by the industry; it separates the data layer from the control layer, thus greatly improving the security of the network. In this paper, we apply SDN to the IoT and propose an IoT network architecture based on SDN. In this architecture, we not only make use of the security features of SDN, but also deploy different security modules in each layer of SDN to integrate, analyze and plan the various data flowing through the IoT, which improves the security performance of the network. In the end, we give a comprehensive introduction to the system and verify its performance.
Zuo, Xiaojiang, Wang, Xiao, Han, Rui.  2022.  An Empirical Analysis of CAPTCHA Image Design Choices in Cloud Services. IEEE INFOCOM 2022 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :1–6.
Cloud services use CAPTCHAs to protect themselves from malicious programs. With the explosive development of AI technology and the emergence of third-party recognition services, the factors that influence a CAPTCHA's security are becoming more complex. In such a situation, evaluating the security of mainstream CAPTCHAs in cloud services can guide providers toward better CAPTCHA design choices. In this paper, we evaluate and analyze the security of six mainstream CAPTCHA image designs in public cloud services. Based on the evaluation results, we make some suggestions on CAPTCHA image design choices to cloud service providers. In addition, we particularly discuss the CAPTCHA images adopted by Facebook and Twitter. The evaluations are separated into two stages: (i) using AI techniques alone; and (ii) using both AI techniques and third-party services. The former is based on open-source models; the latter is conducted under our proposed framework: CAPTCHAMix.
Zuo, Pengfei, Hua, Yu, Wang, Cong, Xia, Wen, Cao, Shunde, Zhou, Yukun, Sun, Yuanyuan.  2017.  Mitigating Traffic-Based Side Channel Attacks in Bandwidth-Efficient Cloud Storage. Proceedings of the 2017 Symposium on Cloud Computing. :638–638.

Data deduplication [3] is able to effectively identify and eliminate redundant data and only maintain a single copy of files and chunks. Hence, it is widely used in cloud storage systems to save the users' network bandwidth for uploading data. However, the occurrence of deduplication can be easily identified by monitoring and analyzing network traffic, which leads to the risk of user privacy leakage. The attacker can carry out a very dangerous side channel attack, i.e., the learn-the-remaining-information (LRI) attack, to reveal users' privacy information by exploiting the side channel of network traffic in deduplication [1]. In the LRI attack, the attacker knows a large part of the target file in the cloud and tries to learn the remaining unknown parts via uploading all possible versions of the file's content. For example, the attacker knows all the contents of the target file X except the sensitive information θ. To learn the sensitive information, the attacker needs to upload m files with all possible values of θ, respectively. If a file X_d with the value θ_d is deduplicated and the other files are not, the attacker knows that the information θ = θ_d. In the threat model of the LRI attack, we consider a general cloud storage service model that includes two entities, i.e., the user and the cloud storage server. The attack is launched by users who aim to steal the privacy information of other users [1]. The attacker can act as a user via its own account or use multiple accounts to disguise as multiple users. The cloud storage server communicates with the users through the Internet. The connections from the clients to the cloud storage server are encrypted by the SSL or TLS protocol. Hence, the attacker can monitor and measure the amount of network traffic between the client and server but cannot intercept and analyze the contents of the transmitted data due to the encryption. The attacker can then perform sophisticated traffic analysis with sufficient computing resources. We propose a simple yet effective scheme, called the randomized redundant chunk scheme (RRCS), to significantly mitigate the risk of the LRI attack while maintaining the high bandwidth efficiency of deduplication. The basic idea behind RRCS is to add randomized redundant chunks to mix up the real deduplication states of files used for the LRI attack, which effectively obfuscates the view of the attacker, who attempts to exploit the side channel of network traffic for the LRI attack. RRCS includes three key function modules: range generation (RG), secure bounds setting (SBS), and security-irrelevant redundancy elimination (SRE). When uploading the random-number redundant chunks, RRCS first uses RG to generate a fixed range [0, λN] (λ ∈ (0,1]), in which the number of added redundant chunks is randomly chosen, where N is the total number of chunks in a file and λ is a system parameter. However, the fixed range may cause a security issue. SBS is used to deal with the bounds of the fixed range to avoid this security issue. There may also exist security-irrelevant redundant chunks in RRCS; SRE reduces the security-irrelevant redundant chunks to improve the deduplication efficiency. The design details are presented in our technical report [5]. Our security analysis demonstrates that RRCS can significantly reduce the risk of the LRI attack [5].
We examine the performance of RRCS using three real-world trace-based datasets, i.e., Fslhomes [2], MacOS [2], and Onefull [4], and compare RRCS with the randomized threshold scheme (RTS) [1]. Our experimental results show that source-based deduplication eliminates 100% data redundancy, which however has no security guarantee. File-level (chunk-level) RTS only eliminates 8.1% – 16.8% (9.8% – 20.3%) redundancy, due to only eliminating the redundancy of the files (chunks) that have many copies. RRCS with λ = 0.5 eliminates 76.1% – 78.0% redundancy and RRCS with λ = 1 eliminates 47.9% – 53.6% redundancy.
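A minimal sketch of the core RRCS idea (only the RG module; SBS and SRE from the technical report are omitted, and the chunk selection here is simplified): upload the genuinely needed chunks plus a number of redundant chunks drawn uniformly from [0, λN], so that the traffic volume no longer reveals the true deduplication state.

```python
import secrets

def chunks_to_upload(total_chunks: int, deduplicated: set, lam: float = 0.5):
    """Indices to transmit: non-duplicate chunks plus random redundant padding."""
    needed = [i for i in range(total_chunks) if i not in deduplicated]
    n_redundant = secrets.randbelow(int(lam * total_chunks) + 1)  # uniform in [0, lam*N]
    padding = sorted(deduplicated)[:n_redundant]  # re-send some duplicate chunks
    return needed + padding

# A file of N = 20 chunks where the server already stores chunks 0-14:
sent = chunks_to_upload(20, deduplicated=set(range(15)), lam=0.5)
print(len(sent))  # varies between 5 and 15, masking the real dedup count
```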

Zuo, Langyi.  2022.  Comparison between the Traditional and Computerized Cognitive Training Programs in Treating Mild Cognitive Impairment. 2022 2nd International Conference on Electronic Information Engineering and Computer Technology (EIECT). :119–124.
MCI patients can benefit from cognitive training programs that improve their cognitive capabilities or delay the decline of cognition. This paper evaluated three commonly seen categories of cognitive training programs (non-computerized/traditional cognitive training (TCT), computerized cognitive training (CCT), and virtual/augmented reality cognitive training (VR/AR CT)) based on six aspects: stimulation strength, user-friendliness, expandability, customizability/personalization, convenience, and motivation/atmosphere. In addition, recent applications of each type of CT are offered. Finally, a conclusion is drawn in which no single CT outperformed the others, and the most applicable scenario of each type of CT is also provided.
Zuo, Jinxin, Guo, Ziyu, Gan, Jiefu, Lu, Yueming.  2021.  Enhancing Continuous Service of Information Systems Based on Cyber Resilience. 2021 IEEE Sixth International Conference on Data Science in Cyberspace (DSC). :535–542.

Cyber resilience has become a strategic point of information security in recent years. In the face of complex attack methods and severe internal and external threats, it is difficult to achieve 100% protection of information systems. It is necessary to enhance the continuous service of information systems based on network resiliency and take appropriate compensation measures in case of protection failure, to ensure that the mission can still be achieved under attack. This paper surveys the definition, cycle, and states of cyber resilience, and interprets the cyber resiliency engineering framework, to better understand cyber resilience. In addition, we discuss the evolution of security architecture and analyze the impact of cyber resiliency on it. Finally, strategies and schemes for enhancing cyber resilience, represented by zero trust and endogenous security, are discussed.

Zuo, C., Shao, J., Liu, Z., Ling, Y., Wei, G..  2017.  Hidden-Token Searchable Public-Key Encryption. 2017 IEEE Trustcom/BigDataSE/ICESS. :248–254.

In this paper, we propose a variant of searchable public-key encryption named hidden-token searchable public-key encryption, with two new security properties: token anonymity and one-token-per-trapdoor. With the former security notion, the client can obtain the search token from the data owner without revealing any information about the underlying keyword. Meanwhile, the client cannot derive more than one token from one trapdoor generated by the data owner, according to the latter security notion. Furthermore, we present a concrete hidden-token searchable public-key encryption scheme together with security proofs in the random oracle model.

Zum Felde, Hendrik Meyer, Morbitzer, Mathias, Schütte, Julian.  2021.  Securing Remote Policy Enforcement by a Multi-Enclave based Attestation Architecture. 2021 IEEE 19th International Conference on Embedded and Ubiquitous Computing (EUC). :102–108.
The concept of usage control goes beyond traditional access control by regulating not only the retrieval but also the processing of data. To be able to remotely enforce usage control policy, the processing party requires a trusted execution environment such as Intel SGX, which creates so-called enclaves. In this paper we introduce Multi Enclave based Code from Template (MECT), an SGX-based architecture for trusted remote policy enforcement. MECT uses a multi-enclave approach in which an enclave generation service dynamically generates enclaves from pre-defined code and dynamic policy parameters. This approach leads to a small trusted computing base and highly simplified attestation while preserving functionality benefits. Our proof-of-concept implementation consumes customisable code from templates. We compare the implementation with other architectures regarding the trusted computing base, flexibility, performance, and modularity. This comparison highlights the security benefits of MECT for remote attestation.
Zulkipli, Nurul Huda Nik, Wills, Gary B..  2017.  An Event-based Access Control for IoT. Proceedings of the Second International Conference on Internet of Things, Data and Cloud Computing. :121:1–121:4.

The Internet of Things (IoT) comes together with connections between sensors and devices. These smart devices have been upgraded from standalone devices that can only handle one specific task at a time to interactive devices that can handle multiple tasks concurrently. However, this technology has been exposed to many vulnerabilities, especially malicious attacks on the devices. Given the constraints of the IoT and the low-security mechanisms applied, malicious attacks could exploit sensor vulnerabilities to provide wrong data, which can lead to wrong interpretation and actuation for the users. Due to these problems, this short paper presents an event-based access control framework that considers integrity, privacy and authenticity in IoT devices.

Zulkarnine, A. T., Frank, R., Monk, B., Mitchell, J., Davies, G..  2016.  Surfacing collaborated networks in dark web to find illicit and criminal content. 2016 IEEE Conference on Intelligence and Security Informatics (ISI). :109–114.
The Tor network, a hidden part of the Internet, is becoming an ideal hosting ground for illegal activities and services, including large drug markets, financial fraud, espionage, and child sexual abuse. Researchers and law enforcement rely on manual investigations, which are both time-consuming and ultimately inefficient. The first part of this paper explores illicit and criminal content identified by prominent researchers on the dark web. We previously developed a web crawler that automatically searched websites on the Internet based on pre-defined keywords and followed the hyperlinks in order to create a map of the network. This crawler demonstrated previous success in locating and extracting data on child exploitation images, videos, keywords and linkages on the public Internet. However, as Tor functions differently at the TCP level and uses socket connections, further technical challenges are faced when crawling Tor. Other inherent challenges for advanced Tor crawling include scalability, content selection tradeoffs, and social obligation. We discuss these challenges and the measures taken to meet them. Our modified web crawler for Tor, termed the “Dark Crawler”, has been able to access Tor while simultaneously accessing the public Internet. We present initial findings regarding what extremist and terrorist content is present in Tor and how this content is connected in a mapped network that facilitates dark web crimes. Our results so far indicate that the most popular websites in the dark web are acting as catalysts for dark web expansion by providing the necessary knowledge base, support and services to build Tor hidden services and onion websites.
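A hedged sketch of the dual-access idea, assuming a local Tor daemon listening on 127.0.0.1:9050 and the requests[socks] extra installed (the URLs and keywords are placeholders): onion URLs are fetched through the Tor SOCKS proxy while clearnet URLs are fetched directly, and hyperlinks are followed breadth-first.

```python
import requests
from urllib.parse import urljoin, urlsplit
from html.parser import HTMLParser

TOR_PROXIES = {"http": "socks5h://127.0.0.1:9050",
               "https": "socks5h://127.0.0.1:9050"}

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [v for k, v in attrs if k == "href" and v]

def fetch(url: str) -> str:
    host = urlsplit(url).hostname or ""
    proxies = TOR_PROXIES if host.endswith(".onion") else None  # route onions via Tor
    return requests.get(url, proxies=proxies, timeout=60).text

def crawl(seed: str, keywords: list, limit: int = 50) -> list:
    seen, queue, hits = set(), [seed], []
    while queue and len(seen) < limit:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = fetch(url)
        except requests.RequestException:
            continue
        if any(kw in html.lower() for kw in keywords):
            hits.append(url)              # page matches the pre-defined keywords
        parser = LinkExtractor()
        parser.feed(html)
        queue += [urljoin(url, link) for link in parser.links]
    return hits
```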
Ž
Žulj, S., Delija, D., Sirovatka, G..  2020.  Analysis of secure data deletion and recovery with common digital forensic tools and procedures. 2020 43rd International Convention on Information, Communication and Electronic Technology (MIPRO). :1607–1610.
This paper presents how student practicals are developed and used for the important tasks forensic specialists have to perform when using common digital forensic tools for data deletion and data recovery from various types of digital media and live systems. Digital forensic tools like EnCase, FTK Imager, BlackLight, and open-source tools are discussed in the developed practical scenarios. This paper shows how these tools can be used to train and enhance student understanding of the capabilities and limitations of digital forensic tools in uncommon digital forensic scenarios. The practicals encourage students to efficiently use digital forensic tools in the various professional scenarios that they will encounter.
Z
Zulfa, Mulki Indana, Hartanto, Rudy, Permanasari, Adhistya Erna, Ali, Waleed.  2021.  Web Caching Strategy Optimization Based on Ant Colony Optimization and Genetic Algorithm. 2021 International Seminar on Intelligent Technology and Its Applications (ISITIA). :75–81.
Web caching is a strategy that can be used to speed up website access on the client side. This strategy is implemented by storing as many popular web objects as possible on the cache server. All web objects stored on a cache server are called cached data. Requests for cached web data on the cache server are much faster than requests directly to the origin server. Not all web objects can fit on the cache server due to its limited capacity. Therefore, optimizing the cached data in a web caching strategy determines which web objects can enter the cache server so as to maximize profit. This paper simulates web caching strategy optimization with a knapsack problem approach using Ant Colony Optimization (ACO), a Genetic Algorithm (GA), and a combination of the two. Knapsack profit is measured by the number of web objects that can be entered into the cache server while minimizing the objective function value. The simulation results show that the combination of ACO and GA is faster at producing an optimal solution and is not easily trapped in a local optimum.
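A minimal GA sketch of the knapsack formulation (sizes, profits, and GA parameters are invented for illustration; the ACO and hybrid variants optimize the same objective): each bit decides whether a web object enters the cache, and fitness is the total profit of the selected objects when they fit within capacity.

```python
import random

sizes  = [12, 7, 11, 8, 9, 6, 5, 14]    # web object sizes (e.g., KB)
profit = [40, 35, 18, 4, 10, 2, 8, 32]  # popularity-derived profit per object
CAPACITY, POP, GENS = 30, 40, 100

def fitness(bits):
    weight = sum(s for s, b in zip(sizes, bits) if b)
    return sum(p for p, b in zip(profit, bits) if b) if weight <= CAPACITY else 0

def evolve():
    pop = [[random.randint(0, 1) for _ in sizes] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:POP // 2]                     # keep the best half
        children = []
        while len(children) < POP - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(sizes))  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:              # bit-flip mutation
                i = random.randrange(len(sizes))
                child[i] ^= 1
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))  # cache admission vector and its knapsack profit
```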
Zulfa, Mulki Indana, Hartanto, Rudy, Permanasari, Adhistya Erna.  2021.  Performance Comparison of Swarm Intelligence Algorithms for Web Caching Strategy. 2021 IEEE International Conference on Communication, Networks and Satellite (COMNETSAT). :45–51.
Web caching is one strategy that can be used to speed up response times by storing frequently accessed data in the cache server. Given the cache server's limited capacity, it is necessary to determine the priority of cached data that can enter the cache server. This study simulated cached-data prioritization based on an objective function, as a characteristic of problem-solving using an optimization approach. The objective function of web caching is formulated based on the variables data size, access count, and frequency-time access. We then use the knapsack problem method to find the optimal solution. The simulations run three swarm intelligence algorithms, Ant Colony Optimization (ACO), Genetic Algorithm (GA), and Binary Particle Swarm Optimization (BPSO), divided into several scenarios. The simulation results show that the GA algorithm is relatively stable and converges quickly. The ACO algorithm has the advantage of a non-random initial solution, since it follows the pheromone trail. The BPSO algorithm is the fastest, but the resulting solution quality is not as good as that of ACO and GA.
Zukran, Busra, Siraj, Maheyzah Md.  2021.  Performance Comparison on SQL Injection and XSS Detection using Open Source Vulnerability Scanners. 2021 International Conference on Data Science and Its Applications (ICoDSA). :61–65.

Web technologies are typically built under time constraints and with security vulnerabilities. Automatic software vulnerability scanners are common tools among software developers for detecting such vulnerabilities. SQL Injection and Cross-Site Scripting (XSS) are two of the most widespread and dangerous vulnerabilities in web apps, causing harm to users. It is very important to be able to trust the findings of site vulnerability scanning software; without a clear idea of the accuracy and coverage of open-source tools, it is difficult to analyze the results that an automatic vulnerability scanner provides. It is also important to compare the key figures of automated vulnerability scanners, because there are many kinds of scanners on the market, and such a comparison can be useful for deciding which scanner performs better in terms of SQL Injection and Cross-Site Scripting (XSS) vulnerabilities. In this paper, a method by Jose Fonseca et al. is used to compare open-source automated vulnerability scanners based on detection coverage, and a method by Yuki Makino and Vitaly Klyuev is used for the precision rate. The criteria vulnerabilities are injected into web applications, which are then scanned by the scanners. The results are then compared by analyzing the precision rate and detection coverage of vulnerability detection. Two leading open-source automated vulnerability scanners are evaluated: the scanners utilized in this paper are OWASP ZAP and Skipfish. The results show that, in terms of precision rate and detection coverage, OWASP ZAP performs better than Skipfish, by a factor of two for precision rate and with almost the same result for detection coverage, where OWASP ZAP detects a higher number of high-severity vulnerabilities.
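For reference, the two measures compared in the paper reduce to simple ratios over the injected and reported vulnerabilities; the sketch below uses placeholder counts, not the paper's results.

```python
def precision(true_positives: int, reported: int) -> float:
    """Share of reported findings that are real vulnerabilities."""
    return true_positives / reported if reported else 0.0

def detection_coverage(true_positives: int, injected: int) -> float:
    """Share of the injected vulnerabilities the scanner actually found."""
    return true_positives / injected if injected else 0.0

# Hypothetical counts for 25 injected SQLi/XSS vulnerabilities:
for name, tp, reported in [("OWASP ZAP", 18, 20), ("Skipfish", 9, 21)]:
    print(name, precision(tp, reported), detection_coverage(tp, injected=25))
```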

Zuin, Gianlucca, Chaimowicz, Luiz, Veloso, Adriano.  2018.  Learning Transferable Features For Open-Domain Question Answering. 2018 International Joint Conference on Neural Networks (IJCNN). :1–8.

Corpora used to learn open-domain Question-Answering (QA) models are typically collected from a wide variety of topics or domains. Since QA requires understanding natural language, open-domain QA models generally need very large training corpora. A simple way to alleviate data demand is to restrict the domain covered by the QA model, leading thus to domain-specific QA models. While learning improved QA models for a specific domain is still challenging due to the lack of sufficient training data in the topic of interest, additional training data can be obtained from related topic domains. Thus, instead of learning a single open-domain QA model, we investigate domain adaptation approaches in order to create multiple improved domain-specific QA models. We demonstrate that this can be achieved by stratifying the source dataset, without the need to search for complementary data, unlike many other domain adaptation approaches. We propose a deep architecture that jointly exploits convolutional and recurrent networks for learning domain-specific features while transferring domain-shared features. That is, we use transferable features to enable model adaptation from multiple source domains. We consider different transference approaches designed to learn span-level and sentence-level QA models. We found that domain adaptation greatly improves sentence-level QA performance, and span-level QA benefits from sentence information. Finally, we also show that a simple clustering algorithm may be employed when the topic domains are unknown, and the resulting loss in accuracy is negligible.

Zügner, Daniel, Akbarnejad, Amir, Günnemann, Stephan.  2018.  Adversarial Attacks on Neural Networks for Graph Data. Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. :2847-2856.
Deep learning models for graphs have achieved strong performance on the task of node classification. Despite their proliferation, there is currently no study of their robustness to adversarial attacks. Yet, in domains where they are likely to be used, e.g. the web, adversaries are common. Can deep learning models for graphs be easily fooled? In this work, we introduce the first study of adversarial attacks on attributed graphs, specifically focusing on models exploiting ideas of graph convolutions. In addition to attacks at test time, we tackle the more challenging class of poisoning/causative attacks, which focus on the training phase of a machine learning model. We generate adversarial perturbations targeting the node's features and the graph structure, thus taking the dependencies between instances into account. Moreover, we ensure that the perturbations remain unnoticeable by preserving important data characteristics. To cope with the underlying discrete domain we propose an efficient algorithm, Nettack, exploiting incremental computations. Our experimental study shows that the accuracy of node classification significantly drops even when performing only a few perturbations. Even more, our attacks are transferable: the learned attacks generalize to other state-of-the-art node classification models and unsupervised approaches, and are likewise successful even when only limited knowledge about the graph is given.
Zuech, Richard, Hancock, John, Khoshgoftaar, Taghi M..  2021.  Feature Popularity Between Different Web Attacks with Supervised Feature Selection Rankers. 2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA). :30–37.
We introduce the novel concept of feature popularity with three different web attacks and big data from the CSE-CIC-IDS2018 dataset: Brute Force, SQL Injection, and XSS web attacks. Feature popularity is based upon ensemble Feature Selection Techniques (FSTs) and allows us to more easily understand common important features between different cyberattacks, for two main reasons. First, feature popularity lists can be generated to provide an easy comprehension of important features across different attacks. Second, the Jaccard similarity metric can provide a quantitative score for how similar feature subsets are between different attacks. Both of these approaches not only provide more explainable and easier-to-understand models, but they can also reduce the complexity of implementing models in real-world systems. Four supervised learning-based FSTs are used to generate feature subsets for each of our three different web attack datasets, and then our feature popularity frameworks are applied. For these three web attacks, the XSS and SQL Injection feature subsets are the most similar per the Jaccard similarity. The most popular features across all three web attacks are: Flow_Bytes_s, Flow_IAT_Max, and Flow_Packets_s. While this introductory study is only a simple example using only three web attacks, this feature popularity concept can be easily extended, allowing an automated framework to more easily determine the most popular features across a very large number of attacks and features.
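The two building blocks, Jaccard similarity between per-attack feature subsets and a popularity count across attacks, are straightforward to reproduce; the subsets below are illustrative stand-ins built around the three features named above, not the paper's full rankings.

```python
from collections import Counter

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

subsets = {
    "BruteForce":   {"Flow_Bytes_s", "Flow_IAT_Max", "Fwd_Pkt_Len_Max"},
    "SQLInjection": {"Flow_Bytes_s", "Flow_IAT_Max", "Flow_Packets_s"},
    "XSS":          {"Flow_Bytes_s", "Flow_IAT_Max", "Flow_Packets_s"},
}

print(jaccard(subsets["XSS"], subsets["SQLInjection"]))  # 1.0: the most similar pair

popularity = Counter(f for feats in subsets.values() for f in feats)
print(popularity.most_common(3))  # Flow_Bytes_s and Flow_IAT_Max lead
```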
Zubov, Ilya G., Lysenko, Nikolai V., Labkov, Gleb M..  2019.  Detection of the Information Hidden in Image by Convolutional Neural Networks. 2019 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus). :393–394.

This article shows the possibility of detecting hidden information in images. This is an approach to steganalysis in which basic data about the image and information about the method used to hide the information are unknown. The architecture of the convolutional neural network makes it possible to detect small changes in the image with high probability.
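A minimal PyTorch sketch in the spirit of such detectors (the architecture is an assumption, not the authors' network): a fixed high-pass filter exposes the faint residuals left by embedding, and a small CNN classifies cover vs. stego.

```python
import torch
import torch.nn as nn

class StegoCNN(nn.Module):
    def __init__(self):
        super().__init__()
        hp = torch.tensor([[[[-1., 2., -1.],
                             [ 2., -4., 2.],
                             [-1., 2., -1.]]]]) / 4.0  # simple high-pass kernel
        self.hpf = nn.Conv2d(1, 1, 3, padding=1, bias=False)
        self.hpf.weight = nn.Parameter(hp, requires_grad=False)  # fixed, not learned
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(), nn.AvgPool2d(2),
            nn.Conv2d(8, 16, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 2)  # cover vs. stego logits

    def forward(self, x):
        x = self.hpf(x)                      # amplify embedding residuals
        x = self.features(x).flatten(1)
        return self.head(x)

logits = StegoCNN()(torch.randn(4, 1, 64, 64))  # a batch of grayscale images
print(logits.shape)  # torch.Size([4, 2])
```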

Zubaydi, H. D., Anbar, M., Wey, C. Y..  2017.  Review on Detection Techniques against DDoS Attacks on a Software-Defined Networking Controller. 2017 Palestinian International Conference on Information and Communication Technology (PICICT). :10–16.

The evolution of information and communication technologies has brought new challenges in managing the Internet. Software-Defined Networking (SDN) aims to provide easily configured and remotely controlled networks based on centralized control. Since SDN will be the next disruption in networking, SDN security has become a hot research topic because of its importance in communication systems. A centralized controller can become a focal point of attack, so preventing attacks on the controller is a priority. The whole network is affected if an attacker gains access to the controller. One of the attacks that affects the SDN controller is the DDoS attack. This paper reviews the different detection techniques that are available to prevent DDoS attacks, the characteristics of these techniques, and the issues that may arise when using them.

Zubarev, Dmytro, Skarga-Bandurova, Inna.  2019.  Cross-Site Scripting for Graphic Data: Vulnerabilities and Prevention. 2019 10th International Conference on Dependable Systems, Services and Technologies (DESSERT). :154–160.

In this paper, we present an overview of the problems associated with cross-site scripting (XSS) in the graphical content of web applications. A brief analysis of the vulnerabilities of graphical files and the factors responsible for making SVG images vulnerable to XSS attacks is given. XML treatment methods and their practical testing are performed. As a result, a set of rules for protecting the graphic content of websites and preventing XSS vulnerabilities is proposed.
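As an illustration of the kind of rule proposed, the sketch below strips script elements, event-handler attributes, and javascript: URLs from an SVG document; it is a minimal example under assumed requirements, and production code should prefer a maintained sanitizer library.

```python
import xml.etree.ElementTree as ET

UNSAFE_TAGS = {"script", "foreignObject"}

def sanitize_svg(svg_text: str) -> str:
    root = ET.fromstring(svg_text)
    for elem in root.iter():
        for child in list(elem):
            tag = child.tag.split("}")[-1] if isinstance(child.tag, str) else ""
            if tag in UNSAFE_TAGS:
                elem.remove(child)            # drop <script>, <foreignObject>
        for attr in list(elem.attrib):
            name = attr.split("}")[-1].lower()
            value = elem.attrib[attr].strip().lower()
            if name.startswith("on") or value.startswith("javascript:"):
                del elem.attrib[attr]         # drop onload=..., javascript: URLs
    return ET.tostring(root, encoding="unicode")

payload = ('<svg xmlns="http://www.w3.org/2000/svg" onload="alert(1)">'
           '<script>alert(1)</script><rect width="10" height="10"/></svg>')
print(sanitize_svg(payload))  # only the <rect> survives
```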

Zouari, J., Hamdi, M., Kim, T. H..  2017.  A privacy-preserving homomorphic encryption scheme for the Internet of Things. 2017 13th International Wireless Communications and Mobile Computing Conference (IWCMC). :1939–1944.

The Internet of Things is a disruptive paradigm based on the cooperation of a plethora of heterogeneous smart things to collect, transmit, and analyze data from the ambient environment. To this end, many monitored variables are combined by a data analysis module in order to implement efficient context-aware decision mechanisms. To ensure resource efficiency, aggregation is a long-established solution; however, it is applicable only in the case of a single sensed variable. We extend the use of aggregation to the complex context of the IoT by proposing a novel approach for the secure cooperation of smart things while granting confidentiality and integrity. Traditional solutions for data concealment in resource-constrained devices rely on hop-by-hop or end-to-end encryption, which are shown to be inefficient in our context. We use a more sophisticated scheme relying on homomorphic encryption, which by itself is not compromise resilient, and we combine fully additive encryption with fully additive secret sharing to fulfill the required properties. Thorough security analysis and performance evaluation show a viable tradeoff between security and efficiency for our scheme.
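A small sketch of the fully additive secret-sharing half of the combination (the public modulus and three-aggregator layout are assumptions; the pairing with additively homomorphic encryption is omitted): each smart thing splits its reading into random shares, so no single party sees a reading, yet the shares still sum to the true aggregate.

```python
import secrets

P = 2**61 - 1  # public prime modulus (assumed system parameter)

def share(value: int, n: int) -> list:
    """Split value into n additive shares modulo P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)  # shares sum to value mod P
    return shares

readings = [23, 17, 42]                 # sensed values from three smart things
all_shares = [share(r, n=3) for r in readings]

# Each aggregator sums the shares it receives (one column per aggregator);
# only the combination of all partial sums reveals the aggregate, never a reading.
partials = [sum(col) % P for col in zip(*all_shares)]
print(sum(partials) % P)                # 82 == 23 + 17 + 42
```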