Biblio

Found 391 results

Filters: Keyword is Databases
2017-02-23
B. Yang, E. Martiri.  2015.  "Using Honey Templates to Augment Hash Based Biometric Template Protection". 2015 IEEE 39th Annual Computer Software and Applications Conference. 3:312-316.

Hash-based biometric template protection schemes (BTPS), such as fuzzy commitment, fuzzy vault, and secure sketch, address the privacy leakage concern of storing plain biometric templates in a database by using cryptographic hash calculation for template verification. However, cryptographic hashes provide only computational security, and cracking them leaks the biometric features protected by these BTPS; furthermore, existing BTPS are rarely able to detect during a verification process whether a probe template has been leaked from the database (i.e., whether it is being used by an imposter or a genuine user). In this paper we tailor the "honeywords" idea, which was proposed to detect the cracking of hashed passwords, to enable the detectability of biometric template database leakage. However, unlike passwords, the biometric features encoded in a template cannot be renewed after being cracked and thus cannot be protected by the honeyword idea in a straightforward way. To enable the honeyword idea on biometrics, diversifiability (and thus renewability) is required of the biometric features. We propose to use BTPS for this purpose and present a machine learning based protected template generation protocol that ensures the best anonymity of the generated sugar template (derived from a user's genuine biometric feature) among the honey ones (derived from synthesized biometric features).
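
As a rough, hedged illustration of the leak-detection idea (not the paper's protocol), the sketch below stores the hash of one genuine "sugar" template among hashes of synthesized "honey" ones and raises an alarm when a honey hash is presented. It assumes template bytes that hash reproducibly, which is exactly what the BTPS layer described above would have to provide; the function names and the use of SHA-256 are illustrative assumptions.

```python
import hashlib
import secrets

def protect(sugar_template: bytes, honey_templates: list[bytes]):
    """Store the hash of the genuine (sugar) template among hashes of decoy (honey) ones."""
    entries = [(hashlib.sha256(t).hexdigest(), False) for t in honey_templates]
    entries.append((hashlib.sha256(sugar_template).hexdigest(), True))
    secrets.SystemRandom().shuffle(entries)             # hide which entry is genuine
    record = [digest for digest, _ in entries]          # this list goes into the (leakable) database
    sugar_index = next(i for i, (_, genuine) in enumerate(entries) if genuine)
    return record, sugar_index                          # sugar_index kept by a separate checker

def verify(probe: bytes, record: list[str], sugar_index: int) -> str:
    """Accept only the sugar hash; a match on a honey hash signals a database leak."""
    digest = hashlib.sha256(probe).hexdigest()
    if digest == record[sugar_index]:
        return "accept"
    if digest in record:
        return "alarm: honey template presented, database likely leaked"
    return "reject"
```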

C. Zhang, W. Zhang, H. Mu.  2015.  "A Mutual Authentication Security RFID Protocol Based on Time Stamp". 2015 First International Conference on Computational Intelligence Theory, Systems and Applications (CCITSA). :166-170.

In RFID technology, the privacy of low-cost tags has been a hot issue in recent years. We present a new mutual authentication protocol built on time stamps, a hash function, and a PRNG. This paper analyzes common attacks against RFID and the relevant solutions, and compares the security performance of our protocol with the original security authentication protocol. The protocol not only speeds up the proof procedure and saves cost, but also protects the RFID system from replay, cloning, and DoS attacks, among others.
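
A minimal sketch of a timestamp-plus-hash challenge/response of the kind the abstract describes, not the authors' actual protocol; the key store, hash construction, and allowed clock skew are assumptions made for illustration.

```python
import hashlib
import os
import time

SHARED_KEYS = {"tag42": os.urandom(16)}          # hypothetical back-end key store

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

# Reader -> Tag: a fresh timestamp and a random challenge
ts = str(int(time.time())).encode()
challenge = os.urandom(8)

def tag_response(key: bytes) -> bytes:
    """Tag -> Reader: proves knowledge of its key without sending the ID in clear."""
    return h(key, ts, challenge)

def backend_verify(resp: bytes, max_skew: int = 5):
    """Back end authenticates the tag, then answers so the tag can authenticate the reader."""
    if abs(int(time.time()) - int(ts)) > max_skew:   # stale timestamp -> likely replay
        return None
    for key in SHARED_KEYS.values():
        if resp == h(key, ts, challenge):
            return h(key, challenge, ts)             # second value closes mutual authentication
    return None

print(backend_verify(tag_response(SHARED_KEYS["tag42"])) is not None)   # True
```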

2017-02-21
S. Lohit, K. Kulkarni, P. Turaga, J. Wang, A. C. Sankaranarayanan.  2015.  "Reconstruction-free inference on compressive measurements". 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :16-24.

Spatial-multiplexing cameras have emerged as a promising alternative to classical imaging devices, often enabling acquisition of `more for less'. One popular architecture for spatial multiplexing is the single-pixel camera (SPC), which acquires coded measurements of the scene with pseudo-random spatial masks. Significant theoretical developments over the past few years provide a means for reconstruction of the original imagery from coded measurements at sub-Nyquist sampling rates. Yet, accurate reconstruction generally requires high measurement rates and high signal-to-noise ratios. In this paper, we ask whether one can perform high-level visual inference problems (e.g., face recognition or action recognition) from compressive cameras without the need for image reconstruction. This is an interesting question since in many practical scenarios our goals extend beyond image reconstruction. However, most inference tasks require non-linear features and it is not clear how to extract such features directly from compressed measurements. In this paper, we show that one can extract nontrivial correlational features directly, without reconstruction of the imagery. As a specific example, we consider the problem of face recognition beyond the visible spectrum, e.g., in the short-wave infrared (SWIR) region, where pixels are expensive. We base our framework on smashed filters, which suggest that inner products between high-dimensional signals can be computed in the compressive domain to a high degree of accuracy. We collect a new face image dataset of 30 subjects, obtained using an SPC. Using face recognition as an example, we show that one can indeed perform reconstruction-free inference with a very small loss of accuracy at very high compression ratios of 100 and more.
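
The smashed-filter premise, that inner products survive compressive measurement, can be checked numerically. The sketch below assumes a generic Gaussian measurement matrix rather than the SPC's binary masks, and correlated synthetic signals in place of face images.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4096, 512                                  # ambient dimension vs. number of measurements
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random measurement matrix (SPC-like, assumed Gaussian)

x1 = rng.standard_normal(n)                       # stand-in for a probe face image
x2 = x1 + 0.2 * rng.standard_normal(n)            # a correlated gallery image

y1, y2 = Phi @ x1, Phi @ x2                       # compressive measurements, never reconstructed

print(np.dot(x1, x2))                             # inner product in the original domain
print(np.dot(y1, y2))                             # close to the same value in the compressed domain
```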

Wensheng Chen, Hui Li, Jun Lu, Chaoqi Yu, Fuxing Chen.  2015.  "Routing in the Centralized Identifier Network". 2015 10th International Conference on Communications and Networking in China (ChinaCom). :73-78.

We propose a clean-slate network architecture called Centralized Identifier Network (CIN) which jointly considers the ideas of control plane/forwarding plane separation and identifier/locator separation. In this architecture, a controller cluster gathers the routers' link states, computes routes, and hands them out, while a tailor-made router without routing calculation functionality forwards packets and communicates with its controller. Furthermore, each router or host owns a globally unique ID, and a host is registered to a router whose ID becomes the host's locator. Control plane/forwarding plane separation enables CIN to easily re-split network functions into finer optional building blocks for sufficient flexibility and adaptability, while identifier/locator separation helps CIN deal with serious scaling problems and offers support for host mobility. This article mainly presents the routing mechanism of CIN, and numerical results are provided to demonstrate the performance of the proposed mechanism.
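
For illustration only, the following sketch shows the kind of computation a CIN controller cluster might perform: run shortest-path over the gathered link states and hand each router a table mapping host IDs to next hops toward the hosts' registered routers. The data structures and function names are assumptions, not taken from the paper.

```python
import heapq

def shortest_paths(links, src):
    """Plain Dijkstra over the gathered link states (cost-weighted adjacency dict)."""
    dist, prev, heap = {src: 0}, {}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, cost in links.get(u, {}).items():
            if d + cost < dist.get(v, float("inf")):
                dist[v], prev[v] = d + cost, u
                heapq.heappush(heap, (d + cost, v))
    return prev

def forwarding_table(links, router_id, host_locations):
    """Map each destination host ID to the next hop toward its registered router."""
    prev = shortest_paths(links, router_id)
    table = {}
    for host_id, attach_router in host_locations.items():
        hop = attach_router
        while prev.get(hop) not in (None, router_id):   # walk back toward this router
            hop = prev[hop]
        table[host_id] = hop                            # the host's locator resolves to a next hop
    return table

links = {"r1": {"r2": 1, "r3": 4}, "r2": {"r1": 1, "r3": 1}, "r3": {"r1": 4, "r2": 1}}
hosts = {"hostA": "r3"}                                 # hostA registered at r3, so r3 is its locator
print(forwarding_table(links, "r1", hosts))             # {'hostA': 'r2'}: reach r3 via r2, cost 2
```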

2017-02-14
N. Nakagawa, Y. Teshigawara, R. Sasaki.  2015.  "Development of a Detection and Responding System for Malware Communications by Using OpenFlow and Its Evaluation". 2015 Fourth International Conference on Cyber Security, Cyber Warfare, and Digital Forensic (CyberSec). :46-51.

Advanced Persistent Threat (APT) attacks, which have become prevalent in recent years, can be classified into four phases: the initial compromise phase, the attacking infrastructure building phase, the penetration and exploration phase, and the mission execution phase. The malware on infected terminals attempts various communications from the attacking infrastructure building phase onward. In this research, using OpenFlow technology for virtual networks, we developed a system that identifies infected terminals by detecting the communication events of malware in APT attacks. In addition, we prevent information fraud by using OpenFlow for real-time path control. To evaluate our system, we executed malware infection experiments with a simulation tool for APT attacks and with malware samples, comparing against an existing network that uses only entry control measures. As a result, we confirmed that the developed system is effective.
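
A hedged sketch of the response side of such a system: when a communication event matches a malware-like pattern, a flow rule redirecting the terminal's traffic is pushed to the switch. The event kinds, the `push_flow` helper, and `ANALYSIS_PORT` are hypothetical stand-ins for a real OpenFlow controller API, which is not reproduced here.

```python
ANALYSIS_PORT = 99                       # assumed switch port leading to an inspection segment

def push_flow(datapath, rule):
    """Stub for the controller-specific call that installs an OpenFlow entry."""
    print(f"install on {datapath}: {rule}")

SUSPICIOUS_EVENTS = {"dns_to_unregistered_domain", "periodic_http_beacon", "smb_lateral_scan"}

def on_communication_event(event):
    """When a terminal shows malware-like communication, redirect its traffic in real time."""
    if event["kind"] not in SUSPICIOUS_EVENTS:
        return
    rule = {
        "match":    {"eth_src": event["src_mac"]},                 # the suspected infected terminal
        "actions":  [{"type": "OUTPUT", "port": ANALYSIS_PORT}],   # steer to analysis instead of dropping
        "priority": 1000,
    }
    push_flow(datapath=event["switch"], rule=rule)

on_communication_event({"kind": "periodic_http_beacon",
                        "src_mac": "00:11:22:33:44:55", "switch": "s1"})
```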

K. P. B. Anushka, Chamantha, A. P. Karunaweera, P. R. Priyashantha, H. D. R. Wickramasinghe, W. A. V. M. G. Wijethunge.  2015.  "Case study on exploitation, detection and prevention of user account DoS through Advanced Persistent Threats". 2015 Fifteenth International Conference on Advances in ICT for Emerging Regions (ICTer). :190-194.

Security analysts implement various security mechanisms to protect systems from attackers. Even though these mechanisms try to secure systems, a talented attacker may use these same techniques to launch a sophisticated attack. This paper discusses such an attack, user account Denial of Service (DoS), in which an attacker uses the account lockout feature of an application to lock out all user accounts, causing an enterprise-wide DoS. The attack was simulated using a stealthy attack mechanism, Advanced Persistent Threats (APT), with an XMPP-based botnet. Through the simulation, the researchers discuss the patterns associated with the attack, which can be used to detect it in real time, and how the attack can be prevented from the perspective of developers, system engineers, and security analysts.

2015-05-06
Xinhai Zhang, Persson, M., Nyberg, M., Mokhtari, B., Einarson, A., Linder, H., Westman, J., DeJiu Chen, Torngren, M..  2014.  Experience on applying software architecture recovery to automotive embedded systems. Software Maintenance, Reengineering and Reverse Engineering (CSMR-WCRE), 2014 Software Evolution Week - IEEE Conference on. :379-382.

The importance and potential advantages of a comprehensive product architecture description are well described in the literature. However, developing such a description takes additional resources, and it is difficult to keep it consistent with evolving implementations. This paper presents an approach and industrial experience based on architecture recovery from source code at the truck manufacturer Scania CV AB. The extracted representation of the architecture is presented in several views and verified at the CAN signal level. Lessons learned are discussed.
 

Potdar, M.S., Manekar, A.S., Kadu, R.D..  2014.  Android "Health-Dr." Application for Synchronous Information Sharing. Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on. :265-269.

Android "Health-DR." is innovative idea for ambulatory appliances. In rapid developing technology, we are providing "Health-DR." application for the insurance agent, dispensary, patients, physician, annals management (security) for annals. So principally, the ample of record are maintain in to the hospitals. The application just needs to be installed in the customer site with IT environment. Main purpose of our application is to provide the healthy environment to the patient. Our cream focus is on the "Health-DR." application meet to the patient regiment. For the personal use of member, we provide authentication service strategy for "Health-DR." application. Prospective strategy includes: Professional Authentications (User Authentication) by doctor to the patient, actuary and dispensary. Remote access is available to the medical annals, doctor affability and patient affability. "Health-DR." provides expertness anytime and anywhere. The application is middleware to isolate the information from affability management, client discovery and transit of database. Annotations of records are kept in the bibliography. Mainly, this paper focuses on the conversion of E-Health application with flexible surroundings.
 

Boukhtouta, A., Lakhdari, N.-E., Debbabi, M..  2014.  Inferring Malware Family through Application Protocol Sequences Signature. New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on. :1-5.

The dazzling emergence of cyber-threats strains today's cyberspace, which needs practical and efficient capabilities for malware traffic detection. In this paper, we propose an extension to an initial research effort towards fingerprinting malicious traffic, putting the emphasis on attributing maliciousness to malware families. The technique proposed in the previous work establishes a synergy between automatic dynamic analysis of malware and machine learning to fingerprint badness in network traffic. Machine learning algorithms are used with features that exploit only high-level properties of traffic packets (e.g., packet headers). Besides the detection of malicious packets, we want to enhance the fingerprinting capability with the identification of the malware families responsible for the generation of malicious packets. The identification of the underlying malware family is derived from a sequence of application protocols, which is used as a signature for the family in question. Furthermore, our results show that our technique achieves a promising malware family identification rate with low false positives.
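
A toy sketch of the signature idea, assuming that per-family signatures are ordered application-protocol sequences and that flows have already been labeled with their application protocol; the signatures shown are invented for illustration.

```python
# Hypothetical family signatures: ordered application-protocol sequences observed after infection.
FAMILY_SIGNATURES = {
    "family_A": ("dns", "http", "irc"),
    "family_B": ("dns", "https", "smtp"),
}

def protocol_sequence(flows):
    """Collapse a host's time-ordered flows into the sequence of application protocols used."""
    seq = []
    for flow in sorted(flows, key=lambda f: f["ts"]):
        proto = flow["app_proto"]
        if not seq or seq[-1] != proto:            # keep order, drop immediate repeats
            seq.append(proto)
    return tuple(seq)

def attribute_family(flows):
    seq = protocol_sequence(flows)
    for family, signature in FAMILY_SIGNATURES.items():
        it = iter(seq)
        # the signature must appear as an ordered subsequence of the observed protocols
        if all(p in it for p in signature):
            return family
    return "unknown"

flows = [{"ts": 1, "app_proto": "dns"}, {"ts": 2, "app_proto": "http"},
         {"ts": 3, "app_proto": "http"}, {"ts": 4, "app_proto": "irc"}]
print(attribute_family(flows))                     # family_A
```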

Zhongming Jin, Cheng Li, Yue Lin, Deng Cai.  2014.  Density Sensitive Hashing. Cybernetics, IEEE Transactions on. 44:1362-1371.

Nearest neighbor search is a fundamental problem in various research fields such as machine learning, data mining, and pattern recognition. Recently, hashing-based approaches, for example locality sensitive hashing (LSH), have proved effective for scalable high-dimensional nearest neighbor search. Many hashing algorithms have their theoretical roots in random projection. Since these algorithms generate the hash tables (projections) randomly, a large number of hash tables (i.e., long codewords) are required to achieve both high precision and recall. To address this limitation, we propose a novel hashing algorithm called density sensitive hashing (DSH), which can be regarded as an extension of LSH. By exploring the geometric structure of the data, DSH avoids purely random projection selection and uses those projective functions that best agree with the distribution of the data. Extensive experimental results on real-world data sets show that the proposed method achieves better performance than state-of-the-art hashing approaches.
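
For contrast with DSH, the following sketch shows classical random-projection LSH, whose projections are data-independent; DSH's contribution is to replace the random matrix with projections that follow the data distribution, while the binarization step stays the same. This is illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def lsh_codes(X, n_bits=16):
    """Classical random-projection LSH: the sign of random projections gives binary codes."""
    W = rng.standard_normal((X.shape[1], n_bits))    # purely random, data-independent directions
    return (X @ W > 0).astype(np.uint8)

# DSH would replace W with projections chosen to agree with the data's geometric structure
# (e.g. directions separating density-based groups); the hashing step itself is identical.
X = rng.standard_normal((1000, 64))
codes = lsh_codes(X)
print(codes.shape, codes[0])
```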

Jingkuan Song, Yi Yang, Xuelong Li, Zi Huang, Yang Yang.  2014.  Robust Hashing With Local Models for Approximate Similarity Search. Cybernetics, IEEE Transactions on. 44:1225-1236.

Similarity search plays an important role in many applications involving high-dimensional data. Due to the known dimensionality curse, the performance of most existing indexing structures degrades quickly as the feature dimensionality increases. Hashing methods, such as locality sensitive hashing (LSH) and its variants, have been widely used to achieve fast approximate similarity search by trading search quality for efficiency. However, most existing hashing methods make use of randomized algorithms to generate hash codes without considering the specific structural information in the data. In this paper, we propose a novel hashing method, namely, robust hashing with local models (RHLM), which learns a set of robust hash functions to map the high-dimensional data points into binary hash codes by effectively utilizing local structural information. In RHLM, for each individual data point in the training dataset, a local hashing model is learned and used to predict the hash codes of its neighboring data points. The local models from all the data points are globally aligned so that an optimal hash code can be assigned to each data point. After obtaining the hash codes of all the training data points, we design a robust method by employing ℓ2,1-norm minimization on the loss function to learn effective hash functions, which are then used to map each database point into its hash code. Given a query data point, the search process first maps it into the query hash code by the hash functions and then explores the buckets, which have similar hash codes to the query hash code. Extensive experimental results conducted on real-life datasets show that the proposed RHLM outperforms the state-of-the-art methods in terms of search quality and efficiency.
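
Independent of how the hash functions are learned, query processing in such schemes typically hashes the query and explores nearby buckets. A small sketch under that assumption (multi-probe by flipping up to `radius` bits) is shown below; it is not the RHLM training procedure.

```python
import numpy as np
from collections import defaultdict
from itertools import combinations

def build_index(codes):
    """Group database points into buckets keyed by their binary hash code."""
    buckets = defaultdict(list)
    for i, c in enumerate(codes):
        buckets[tuple(c)].append(i)
    return buckets

def probe(query_code, buckets, radius=1):
    """Explore the query's own bucket plus all buckets within the given Hamming radius."""
    hits = list(buckets.get(tuple(query_code), []))
    for r in range(1, radius + 1):
        for flip in combinations(range(len(query_code)), r):
            neighbor = query_code.copy()
            neighbor[list(flip)] ^= 1                  # flip r bits of the query code
            hits.extend(buckets.get(tuple(neighbor), []))
    return hits

codes = np.array([[0, 1, 1, 0], [0, 1, 1, 1], [1, 0, 0, 1]], dtype=np.uint8)
index = build_index(codes)
print(probe(np.array([0, 1, 1, 0], dtype=np.uint8), index))   # [0, 1]: exact and 1-bit neighbors
```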
 

Khobragade, P.K., Malik, L.G..  2014.  Data Generation and Analysis for Digital Forensic Application Using Data Mining. Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on. :458-462.

Cyber crime produces huge volumes of log and transactional data, which means plenty of data to store and analyze, and it is difficult for forensic investigators to spend enough time finding clues in and analyzing all of it. Network forensic analysis involves network traces and the detection of attacks; the traces include Intrusion Detection System and firewall logs, logs generated by network services and applications, and packet captures by sniffers. Because so much data is generated for every event of action, it is hard for investigators to find clues and analyze that data. Network forensics deals with the monitoring, capturing, recording, and analysis of network traffic for detecting intrusions and investigating them. This paper focuses on data collection from the cyber system and the web browser. FTK 4.0 is discussed for memory forensic analysis and remote system forensics, the output of which is to be used as evidence for aiding an investigation.
 

Janbeglou, M., Naderi, H., Brownlee, N..  2014.  Effectiveness of DNS-Based Security Approaches in Large-Scale Networks. Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on. :524-529.

The Domain Name System (DNS) is widely seen as a vital protocol of the modern Internet. For example, popular services like load balancers and Content Delivery Networks rely heavily on DNS. Because of its important role, DNS is also a desirable target for malicious activities such as spamming, phishing, and botnets. To protect networks against these attacks, a number of DNS-based security approaches have been proposed. The key goal of our study is to measure the effectiveness of security approaches that rely on DNS in large-scale networks. For this purpose, we answer the following questions: How often is DNS used? Are most Internet flows established after contacting DNS? In this study, we collected data from the University of Auckland campus network, with more than 33,000 Internet users, and processed it to find out how DNS is being used. Moreover, we studied the flows that were established with and without contacting DNS. Our results show that less than 5 percent of the observed flows use DNS. Therefore, we argue that security approaches that depend solely on DNS are not sufficient to protect large-scale networks.
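
A minimal sketch of the kind of measurement the study performs: estimate the share of flows whose destination address was resolved via DNS by the same client shortly beforehand. The record formats and the lookback window are assumptions.

```python
from collections import defaultdict

def fraction_of_flows_after_dns(dns_answers, flows, ttl_window=300):
    """
    dns_answers: list of (ts, client_ip, resolved_ip)
    flows:       list of (ts, client_ip, dest_ip)
    Returns the share of flows whose destination was resolved by the client shortly before.
    """
    seen = defaultdict(list)                    # (client, dest) -> resolution timestamps
    for ts, client, resolved in dns_answers:
        seen[(client, resolved)].append(ts)

    preceded = 0
    for ts, client, dest in flows:
        if any(0 <= ts - t <= ttl_window for t in seen.get((client, dest), [])):
            preceded += 1
    return preceded / len(flows) if flows else 0.0

dns = [(10, "10.0.0.5", "93.184.216.34")]
flows = [(12, "10.0.0.5", "93.184.216.34"), (15, "10.0.0.5", "198.51.100.7")]
print(fraction_of_flows_after_dns(dns, flows))  # 0.5
```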

Leong, P., Liming Lu.  2014.  Multiagent Web for the Internet of Things. Information Science and Applications (ICISA), 2014 International Conference on. :1-4.

The Internet of Things (IoT) is a network of networks where massively large numbers of objects or things are interconnected through the network. The Internet of Things brings along many new possibilities for applications to improve human comfort and quality of life. Complex systems such as the Internet of Things are difficult to manage because of the emergent behaviours that arise from the complex interactions between their constituent parts. Our key contribution in this paper is a proposed multiagent web for the Internet of Things, together with a corresponding data management architecture. The multiagent architecture provides autonomic characteristics that make the IoT manageable. In addition, the multiagent web allows for flexible processing on heterogeneous platforms, as we leverage web protocols such as HTTP and language-independent data formats such as JSON for communication between agents. The architecture we propose enables a scalable architecture and infrastructure for a web-scale multiagent Internet of Things.
 

2015-05-05
Lomotey, R.K., Deters, R..  2014.  Terms Mining in Document-Based NoSQL: Response to Unstructured Data. Big Data (BigData Congress), 2014 IEEE International Congress on. :661-668.

Unstructured data mining has become topical recently due to the availability of high-dimensional and voluminous digital content (known as "Big Data") across the enterprise spectrum. Relational Database Management Systems (RDBMS) have been employed over the past decades for content storage and management, but the ever-growing heterogeneity in today's data calls for a new storage approach. Thus, the NoSQL database has emerged as the preferred storage facility nowadays, since it supports unstructured data storage. This creates the need to explore efficient data mining techniques for such NoSQL systems, since the available tools and frameworks designed for RDBMS are often not directly applicable. In this paper, we focus on topic and term mining, based on clustering, in document-based NoSQL. This is achieved by adapting the architectural design of an analytics-as-a-service framework and employing the Viterbi algorithm to enhance the accuracy of term classification in the system. The results from the pilot testing of our work show higher accuracy in comparison to some previously proposed techniques, such as parallel search.
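
The generic Viterbi recursion the paper builds on can be sketched as follows (log-space dynamic programming over a small HMM); how the paper maps terms and topics onto states and emissions is not reproduced here.

```python
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Most probable state sequence for an observation sequence (log-space Viterbi)."""
    n_states, T = len(start_p), len(obs)
    score = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    score[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for s in range(n_states):
            cand = score[t - 1] + np.log(trans_p[:, s])   # best predecessor for state s
            back[t, s] = int(np.argmax(cand))
            score[t, s] = cand[back[t, s]] + np.log(emit_p[s, obs[t]])
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):                          # trace the back-pointers
        path.append(back[t, path[-1]])
    return path[::-1]

start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.4, 0.6]])
emit  = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2], start, trans, emit))              # [0, 0, 1]
```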

Jandel, M., Svenson, P., Johansson, R..  2014.  Fusing restricted information. Information Fusion (FUSION), 2014 17th International Conference on. :1-9.

Information fusion deals with the integration and merging of data and information from multiple (heterogeneous) sources. In many cases, the information that needs to be fused has a security classification. The result of the fusion process is then by necessity restricted with the strictest information security classification of the inputs. This has severe drawbacks and limits the possible dissemination of the fusion results. It leads to decreased situational awareness: the organization holds information that would enable a better situation picture, but since parts of the information are restricted, it is not possible to distribute the most correct situational information. In this paper, we take steps towards defining fusion and data mining processes that can be used even when all the underlying data cannot be disseminated. The method we propose here could be used to produce a classifier from which all the sensitive information has been removed and for which it can be shown that an antagonist cannot, even in principle, obtain knowledge about the classified information by using the classifier or situation picture.
 

Aydin, A., Alkhalaf, M., Bultan, T..  2014.  Automated Test Generation from Vulnerability Signatures. Software Testing, Verification and Validation (ICST), 2014 IEEE Seventh International Conference on. :193-202.

Web applications need to validate and sanitize user inputs in order to avoid attacks such as Cross Site Scripting (XSS) and SQL Injection. Writing string manipulation code for input validation and sanitization is an error-prone process leading to many vulnerabilities in real-world web applications. Automata-based static string analysis techniques can be used to automatically compute vulnerability signatures (represented as automata) that characterize all the inputs that can exploit a vulnerability. However, there are several factors that limit the applicability of static string analysis techniques in general: 1) the undecidability of static string analysis requires the use of approximations, leading to false positives, 2) static string analysis tools do not handle all string operations, 3) the dynamic nature of scripting languages makes static analysis difficult. In this paper, we show that vulnerability signatures computed for deliberately insecure web applications (developed for demonstrating different types of vulnerabilities) can be used to generate test cases for other applications. Given a vulnerability signature represented as an automaton, we present algorithms for test case generation based on state, transition, and path coverage. These automatically generated test cases can be used to test applications that are not analyzable statically, and to discover attack strings that demonstrate how the vulnerabilities can be exploited.
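
As an illustration of coverage-driven generation from a signature automaton (not the paper's algorithms), the sketch below produces one input string per transition by joining a shortest prefix reaching the transition with a shortest suffix reaching an accepting state.

```python
from collections import deque

def transition_coverage_tests(automaton, start, accepting):
    """
    automaton: {state: {symbol: next_state}} representing a vulnerability signature.
    Returns one input string per transition, each extended to an accepting state,
    so every transition is exercised by at least one generated test case.
    """
    def shortest_string(source, goal_test):
        queue, seen = deque([(source, "")]), {source}
        while queue:
            state, s = queue.popleft()
            if goal_test(state):
                return s
            for sym, nxt in automaton.get(state, {}).items():
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, s + sym))
        return None

    tests = []
    for state, edges in automaton.items():
        for sym, nxt in edges.items():
            prefix = shortest_string(start, lambda q: q == state)
            suffix = shortest_string(nxt, lambda q: q in accepting)
            if prefix is not None and suffix is not None:
                tests.append(prefix + sym + suffix)
    return tests

# Toy signature: strings that contain "<s" followed by ">" reach the accepting state.
auto = {0: {"a": 0, "<": 1}, 1: {"s": 2}, 2: {">": 3}, 3: {}}
print(transition_coverage_tests(auto, start=0, accepting={3}))
```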
 

Coelho Martins da Fonseca, J.C., Amorim Vieira, M.P..  2014.  A Practical Experience on the Impact of Plugins in Web Security. Reliable Distributed Systems (SRDS), 2014 IEEE 33rd International Symposium on. :21-30.

In an attempt to support customization, many web applications allow the integration of third-party server-side plugins that offer diverse functionality, but also open an additional door for security vulnerabilities. In this paper we study the use of static code analysis tools to detect vulnerabilities in the plugins of the web application. The goal is twofold: 1) to study the effectiveness of static analysis on the detection of web application plugin vulnerabilities, and 2) to understand the potential impact of those plugins in the security of the core web application. We use two static code analyzers to evaluate a large number of plugins for a widely used Content Management System. Results show that many plugins that are currently deployed worldwide have dangerous Cross Site Scripting and SQL Injection vulnerabilities that can be easily exploited, and that even widely used static analysis tools may present disappointing vulnerability coverage and false positive rates.

Blankstein, A., Freedman, M.J..  2014.  Automating Isolation and Least Privilege in Web Services. Security and Privacy (SP), 2014 IEEE Symposium on. :133-148.

In many client-facing applications, a vulnerability in any part can compromise the entire application. This paper describes the design and implementation of Passe, a system that protects a data store from unintended data leaks and unauthorized writes even in the face of application compromise. Passe automatically splits (previously shared-memory-space) applications into sandboxed processes. Passe limits communication between those components and the types of accesses each component can make to shared storage, such as a backend database. In order to limit components to their least privilege, Passe uses dynamic analysis on developer-supplied end-to-end test cases to learn data and control-flow relationships between database queries and previous query results, and it then strongly enforces those relationships. Our prototype of Passe acts as a drop-in replacement for the Django web framework. By running eleven unmodified, off-the-shelf applications in Passe, we demonstrate its ability to provide strong security guarantees (Passe correctly enforced 96% of the applications' policies) with little additional overhead. Additionally, in the web-specific setting of the prototype, we also mitigate the cross-component effects of cross-site scripting (XSS) attacks by combining browser HTML5 sandboxing techniques with our automatic component separation.
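
A very loose sketch of the enforcement idea, assuming constraints of the form "this query's argument must equal the result of an earlier query in the same request" have already been learned from trusted test runs; the class and its interface are invented for illustration and say nothing about Passe's real mechanism.

```python
class QueryProxy:
    """Hypothetical database proxy that enforces learned data-flow relationships."""

    def __init__(self):
        self.learned = {}     # query template -> index of the prior result it must match
        self.history = []     # results seen so far in the current request

    def execute(self, template, arg, run_query):
        dep = self.learned.get(template)
        if dep is not None and (dep >= len(self.history) or self.history[dep] != arg):
            raise PermissionError("query argument violates learned data-flow constraint")
        result = run_query(template, arg)
        self.history.append(result)
        return result

proxy = QueryProxy()
# Learned offline: the notes query's argument always equals the result of the first query.
proxy.learned["SELECT * FROM notes WHERE owner = ?"] = 0

uid = proxy.execute("SELECT id FROM users WHERE name = ?", "alice", lambda t, a: 7)
print(proxy.execute("SELECT * FROM notes WHERE owner = ?", uid, lambda t, a: ["note"]))  # allowed
# Passing any owner other than uid would raise PermissionError even if the component is compromised.
```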

Buja, G., Bin Abd Jalil, K., Bt Hj Mohd Ali, F., Rahman, T.F.A..  2014.  Detection model for SQL injection attack: An approach for preventing a web application from the SQL injection attack. Computer Applications and Industrial Electronics (ISCAIE), 2014 IEEE Symposium on. :60-64.

Over the past 20 years the use of the web in daily life has kept increasing and has now become the trend, and as use of the web grows, so does the use of web applications. Apparently most web applications that exist today have some vulnerability that could be exploited by an unauthorized person. Some well-known web application vulnerabilities are Structured Query Language (SQL) Injection, Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF). By exploiting these web application vulnerabilities, a system cracker can gain information about users and damage the reputation of the respective organization. Usually the developers of web applications do not realize that their web applications have vulnerabilities; they only realize it when there is an attack on, or manipulation of, their code by someone. This is understandable because a web application contains thousands of lines of code, so it is not easy to detect loopholes. Nowadays, as hacking tools and hacking tutorials are easier to obtain, many new hackers are emerging. Even though SQL injection is very easy to protect against, a large number of systems on the internet are still vulnerable to this type of attack because a few subtle conditions can go undetected. Therefore, in this paper we propose a detection model for detecting and recognizing the SQL Injection web vulnerability based on defined and identified criteria. In addition, the proposed detection model is able to generate a report on the vulnerability level of the web application. As a consequence, the proposed detection model should be able to decrease the possibility of SQL Injection attacks being launched against the web application.
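
A hedged sketch of criteria-based detection with a per-request vulnerability-level report, in the spirit of the abstract; the patterns and thresholds below are illustrative assumptions, not the paper's defined criteria.

```python
import re

# Illustrative criteria only; a real detection model combines many more signals.
SQLI_PATTERNS = [
    r"(?i)\bunion\b.+\bselect\b",        # UNION-based extraction
    r"(?i)\bor\b\s+1\s*=\s*1",           # tautology injected into a WHERE clause
    r"--|#|/\*",                         # comment sequences used to cut off the query
    r"(?i);\s*(drop|delete|update)\b",   # stacked destructive statement
]

def classify_request(value: str) -> dict:
    """Match an input value against the criteria and report a vulnerability level."""
    hits = [p for p in SQLI_PATTERNS if re.search(p, value)]
    level = "high" if len(hits) >= 2 else "medium" if hits else "none"
    return {"input": value, "matched_criteria": hits, "vulnerability_level": level}

print(classify_request("id=5 OR 1=1 --"))
```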

Bozic, J., Wotawa, F..  2014.  Security Testing Based on Attack Patterns. Software Testing, Verification and Validation Workshops (ICSTW), 2014 IEEE Seventh International Conference on. :4-11.

Testing for security-related issues is an important task of growing interest due to the vast number of applications and services available over the internet. In practice, security testing is often performed manually, with the consequences of higher costs and no integration of security testing into today's agile software development processes. To bring security testing into practice, many different approaches have been suggested, including fuzz testing and model-based testing. Most of these approaches rely on models of the system or the application domain. In this paper we suggest formalizing attack patterns from which test cases can be generated and even executed automatically. Hence, testing for known attacks can be easily integrated into software development processes where automated testing, e.g., for daily builds, is a requirement. The approach makes use of UML state charts. Besides discussing the approach, we illustrate it using a case study.

Rocha, T.S., Souto, E..  2014.  ETSSDetector: A Tool to Automatically Detect Cross-Site Scripting Vulnerabilities. Network Computing and Applications (NCA), 2014 IEEE 13th International Symposium on. :306-309.

The inappropriate use of features intended to improve the usability and interactivity of web applications has resulted in the emergence of various threats, including Cross-Site Scripting (XSS) attacks. In this work, we developed ETSSDetector, a generic and modular web vulnerability scanner that automatically analyzes web applications to find XSS vulnerabilities. ETSSDetector is able to identify and analyze all data entry points of the application and generate specific code injection tests for each one. The results show that the correct filling of the input fields with only valid information ensures better effectiveness of the tests, increasing the detection rate of XSS attacks.
 

Coras, F., Saucez, D., Iannone, L., Donnet, B..  2014.  On the performance of the LISP beta network. Networking Conference, 2014 IFIP. :1-9.

The future Internet has been a hot topic during the past decade and many approaches towards this future Internet, ranging from incremental evolution to complete clean-slate ones, have been proposed. One of the propositions, LISP, advocates the separation of the identifier and the locator roles of IP addresses to reduce BGP churn and BGP table size. Up to now, however, most studies concerning LISP have been theoretical and, in fact, little is known about the actual LISP deployment performance. In this paper, we fill this gap through measurement campaigns carried out on the LISP Beta Network. More precisely, we evaluate the performance of the two key components of the infrastructure: the control plane (i.e., the mapping system) and the interworking mechanism (i.e., communication between LISP and non-LISP sites). Our measurements highlight that the performance offered by the LISP interworking infrastructure is strongly dependent on BGP routing policies. If we exclude misconfigured nodes, the mapping system typically provides reliable performance and relatively low median mapping resolution delays. Although the bias is not very important, control plane performance favors USA sites as a result of their larger LISP user base, but also because the European infrastructure appears to be less reliable.
 

Mittal, D., Kaur, D., Aggarwal, A..  2014.  Secure Data Mining in Cloud Using Homomorphic Encryption. Cloud Computing in Emerging Markets (CCEM), 2014 IEEE International Conference on. :1-7.

With the advancement in technology, industry, e-commerce and research, a large amount of complex and pervasive digital data is being generated, increasing at an exponential rate and often termed Big Data. Traditional data storage systems are not able to handle Big Data, and analyzing it becomes a challenge that cannot be met by traditional analytic tools. Cloud Computing can resolve the problem of handling, storing and analyzing Big Data, as it distributes the data among cloudlets. No doubt Cloud Computing is the best answer available to the problem of Big Data storage and its analysis, but having said that, there is always a potential risk to the security of Big Data storage in Cloud Computing, which needs to be addressed. Data privacy is one of the major issues when storing Big Data in a Cloud environment. Data mining based attacks, a major threat to the data, allow an adversary or an unauthorized user to infer valuable and sensitive information by analyzing the results generated from computations performed on the raw data. This work proposes a secure k-means data mining approach in which the data are assumed to be distributed among different hosts, preserving the privacy of the data. The approach is able to maintain the correctness and validity of the existing k-means and to generate the final results even in the distributed environment.
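
To make the privacy-preserving aggregation step concrete, the sketch below uses a toy Paillier keypair (tiny primes, for demonstration only): each host encrypts its local per-cluster sums and counts, the aggregator adds them under encryption, and only the key holder recovers the totals needed to update a centroid. This illustrates additively homomorphic aggregation in general, not the paper's specific protocol.

```python
import math
import random

# Toy Paillier keypair with tiny primes, purely to demonstrate additive homomorphism.
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def he_add(c1, c2):
    return (c1 * c2) % n2          # multiplying ciphertexts adds the plaintexts

# Each host encrypts its per-cluster sums and counts; the aggregator adds them
# under encryption, and only the key holder sees the global totals.
host_a = {"sum": encrypt(120), "count": encrypt(4)}     # e.g. 4 points summing to 120
host_b = {"sum": encrypt(305), "count": encrypt(6)}

enc_sum = he_add(host_a["sum"], host_b["sum"])
enc_cnt = he_add(host_a["count"], host_b["count"])
print(decrypt(enc_sum) / decrypt(enc_cnt))              # updated centroid coordinate: 42.5
```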