Biblio
Cloud computing is an Internet-based technology that has emerged rapidly in the last few years due to the popular, in-demand services required by various institutions, organizations, and individuals. Structured, unstructured, and semi-structured data are being transferred to cloud servers at a record pace. Institutions, businesses, and organizations are shifting ever more workloads onto cloud servers; given the high cost, space, and maintenance issues that come with big data, cloud computing has become a strong candidate for data storage. In a cloud environment, data is clearly not yet completely secure against inside and outside attacks and intrusions, because cloud servers are under the control of a third party. Data security therefore becomes an important concern when sensitive data is stored in the cloud. In this paper, we give an overview of the characteristics and state of the art of big data; discuss the top threats, open issues, and current challenges in data security and privacy and their impact on business from a future research perspective; and review and analyze previous and recent frameworks and architectures for data security that continue to be developed against these threats, in order to improve how data is kept and stored in the cloud environment.
As the volume of data in various industries and government sectors grows exponentially, the `7V' concept of big data aims to create new value by indiscriminately collecting and analyzing information from various fields. At the same time, as the ICT industry ecosystem matures, big data utilization is threatened by privacy attacks such as infringement arising from the sheer amount of data. Maintaining a controllable privacy level requires recommended de-identification techniques. This paper examines these de-identification processes and three commonly used privacy models. Furthermore, it presents use cases in which these technologies can be adopted and outlines future development directions.
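To make the de-identification idea concrete, the sketch below generalizes quasi-identifiers and checks a k-anonymity-style condition, one of the commonly used privacy models; the record fields, the generalization rules, and k = 3 are illustrative assumptions rather than the paper's exact models.

```python
# A minimal de-identification sketch (not the paper's exact method): generalize
# quasi-identifiers and check a k-anonymity-style group-size condition.
# Record fields and k = 3 are hypothetical illustration choices.
from collections import Counter

def generalize(record):
    """Coarsen quasi-identifiers: bucket age into decades, truncate the ZIP code."""
    return (record["age"] // 10 * 10, record["zip"][:3] + "**")

def is_k_anonymous(records, k):
    """True if every combination of generalized quasi-identifiers occurs at least k times."""
    groups = Counter(generalize(r) for r in records)
    return all(count >= k for count in groups.values())

records = [
    {"age": 34, "zip": "30301", "diagnosis": "flu"},
    {"age": 36, "zip": "30305", "diagnosis": "cold"},
    {"age": 38, "zip": "30309", "diagnosis": "asthma"},
]
print(is_k_anonymous(records, k=3))  # True: all three fall into the (30, "303**") group
```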
A big data platform provides business units with data platforms, data products, and data services by integrating all data to fully analyze and exploit its intrinsic value. The data accessed by big data platforms may include users' private and sensitive information, such as a user's hotel stay history or payment information, which is at risk of leakage. This paper first analyzes the risks of data leakage, then introduces in detail the theoretical basis and common methods of data desensitization technology, and finally puts forward an effective ASCII-based credit supervision application for market entities. The application is committed to solving the problems of insufficient breadth and depth of data utilization by the enterprises involved, lagging regulatory laws and standards, the separation of credit construction from market supervision business, and the credit constraints of data governance.
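As a rough illustration of common desensitization methods (not the paper's credit supervision system), the sketch below pseudonymizes identifiers with a salted hash and partially masks payment card numbers; the field names, sample record, and salt handling are hypothetical.

```python
# A minimal data-desensitization sketch (illustrative only): pseudonymize identifiers
# with a salted hash and partially mask payment card numbers.
import hashlib

SALT = b"demo-salt"  # hypothetical salt; a real deployment would manage secrets properly

def pseudonymize(value: str) -> str:
    """Replace an identifier with a stable, non-reversible token."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

def mask_card(pan: str) -> str:
    """Keep only the last four digits of a payment card number."""
    return "*" * (len(pan) - 4) + pan[-4:]

record = {"guest_id": "user-1001", "card": "6222020200112233", "hotel": "Example Inn"}
safe = {
    "guest_id": pseudonymize(record["guest_id"]),
    "card": mask_card(record["card"]),
    "hotel": record["hotel"],  # non-sensitive field passes through unchanged
}
print(safe)
```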
With the development of mobile Internet technology, GPS technology and social software have become widely used in people's lives, and the problem of protecting the privacy of location-trajectory big data is becoming more and more serious. Traditional location-trajectory privacy protection methods require certain background knowledge and are difficult to adapt to the privacy protection of massive data. Differential privacy protects against attacks by randomly perturbing the raw data. The method used in this paper first samples the location trajectory, forms irregular polygons from the high-frequency access points among the sampled location data, and calculates the center of gravity of each polygon; a differential privacy algorithm then adds noise to each center of gravity to form a new one, and the new centers of gravity are connected to form a new trajectory. This achieves the goal of protecting the location trajectory. The results show that the differential privacy algorithm can effectively protect the location trajectory by adding noise.
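A minimal sketch of the centroid-perturbation step is given below: compute the center of gravity of a set of high-frequency location points and add Laplace-mechanism noise to it. The epsilon, sensitivity, and coordinates are hypothetical assumptions, and the paper's polygon construction is not reproduced.

```python
# Sketch of adding Laplace noise to a polygon's center of gravity, as in the
# Laplace mechanism of differential privacy. Epsilon, sensitivity, and the sample
# coordinates are illustrative assumptions.
import random

def centroid(points):
    """Center of gravity of the polygon's vertices (simple vertex average)."""
    xs, ys = zip(*points)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def laplace_noise(scale):
    """Laplace(0, scale) sample, drawn as the difference of two exponential samples."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def perturb_centroid(points, epsilon=0.5, sensitivity=0.01):
    """Add Laplace noise with scale = sensitivity / epsilon to each coordinate."""
    cx, cy = centroid(points)
    scale = sensitivity / epsilon
    return cx + laplace_noise(scale), cy + laplace_noise(scale)

# Hypothetical high-frequency access points (longitude, latitude).
polygon = [(116.397, 39.908), (116.401, 39.912), (116.395, 39.915)]
print(perturb_centroid(polygon))  # perturbed center of gravity for the new trajectory
```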
Aiming at the poor stability and low accuracy of current methods for processing communication data, this paper studies the informatization of nonlinear frequency-hopping communication data under a big data security evaluation framework. A frequency-hopping mediation module is added to the frequency-hopping communication security evaluation framework to discretize the communication interference information, and the parameters of the nonlinear frequency-hopping communication data are corrected and converted with a fast clustering analysis algorithm, completing the informatization processing of the nonlinear frequency-hopping communication data under the framework. Finally, experiments show that this approach can effectively improve accuracy and stability.
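As a loose illustration of the clustering-based correction step (an assumption about the approach, not taken from the paper), the sketch below discretizes noisy frequency-hopping parameters by replacing each sample with its nearest cluster center from a simple one-dimensional k-means pass; k = 3 and the sample frequencies are hypothetical.

```python
# Illustrative 1-D k-means used to discretize noisy hop-frequency samples by
# snapping each sample to its nearest cluster center. Not the paper's algorithm.
def kmeans_1d(samples, k=3, iters=20):
    srt = sorted(samples)
    centers = [srt[i * (len(srt) - 1) // (k - 1)] for i in range(k)]  # spread initial centers
    for _ in range(iters):
        groups = [[] for _ in centers]
        for s in samples:
            idx = min(range(len(centers)), key=lambda i: abs(s - centers[i]))
            groups[idx].append(s)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers

def discretize(samples, centers):
    """Map every sample onto its nearest cluster center."""
    return [min(centers, key=lambda c: abs(s - c)) for s in samples]

hops_mhz = [902.1, 902.4, 915.0, 914.7, 927.9, 928.2, 902.2]  # hypothetical noisy hops
centers = kmeans_1d(hops_mhz, k=3)
print(discretize(hops_mhz, centers))
```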
In recent years, almost all real-world operations have been transferred to the cyber world, and these computers connect with each other via the Internet. As a result, there is an increasing number of security breaches of networks whose administrators cannot protect them from all types of attacks. Although most of these attacks can be prevented with firewalls, encryption mechanisms, access controls, and password protection mechanisms, the emergence of new types of attacks means that a dynamic intrusion detection mechanism is always needed in the information security market. To keep an Intrusion Detection System (IDS) dynamic, it should be updated using a modern learning mechanism. The neural network approach is one of the most commonly preferred algorithms for training such a system. Moreover, with the increasing power of parallel computing and the use of big data for training, deep learning has been applied to many modern real-world problems. Therefore, in this paper we propose an IDS that uses GPU-powered deep learning algorithms. Experimental results, collected on the widely used KDD99 dataset, show that the GPU speeds up training by up to 6.48 times, depending on the number of hidden layers and the nodes in them. Additionally, we compare different optimizers to help researchers select the best one for their ongoing or future work.
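A minimal sketch of such a GPU-trained classifier is shown below, using PyTorch on KDD99-style records (41 numeric features, 5 coarse attack classes). The random tensors stand in for a preprocessed KDD99 split, and the layer sizes, optimizer, and epoch count are illustrative assumptions rather than the authors' configuration.

```python
# Sketch: feed-forward IDS classifier trained on the GPU when one is available.
import torch
from torch import nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(                  # two hidden layers, one of many possible configurations
    nn.Linear(41, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 5),
).to(device)

features = torch.randn(1024, 41, device=device)        # placeholder feature batch
labels = torch.randint(0, 5, (1024,), device=device)   # placeholder attack-class labels

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # one of the optimizers to compare

for epoch in range(10):
    optimizer.zero_grad()
    loss = criterion(model(features), labels)
    loss.backward()
    optimizer.step()
print(f"final training loss: {loss.item():.4f}")
```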
Nowadays, big data is receiving more and more attention in both academic and industrial research, and with its development people pay more attention to data security. A significant feature of big data is its large size. To improve the encryption speed for such large volumes of data, this paper uses deep pipelining and full expansion techniques to implement the AES encryption algorithm on an FPGA. The design achieves a throughput of 31.30 Gbps with a minimum latency of 0.134 µs. It can quickly encrypt large amounts of data and provides technical support for the development of big data.
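For context, the sketch below measures bulk AES-CTR encryption throughput in software using the `cryptography` package; it does not reproduce the paper's FPGA pipeline, and the key size, nonce, and 64 MiB payload are illustrative assumptions.

```python
# Software-side sketch of bulk AES encryption for comparison with a hardware pipeline.
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)                      # AES-128 key
nonce = os.urandom(16)                    # CTR-mode initial counter block
payload = os.urandom(64 * 1024 * 1024)    # 64 MiB of sample "big data"

encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
start = time.perf_counter()
ciphertext = encryptor.update(payload) + encryptor.finalize()
elapsed = time.perf_counter() - start

gbps = len(payload) * 8 / elapsed / 1e9
print(f"software AES-CTR throughput: {gbps:.2f} Gbps")  # compare with the 31.30 Gbps FPGA figure
```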
Efficient application of Internet of Battlefield Things (IoBT) technology on the battlefield calls for innovative solutions to control and manage the deluge of heterogeneous IoBT devices. This paper presents an innovative paradigm to address heterogeneity in controlling IoBT and IoT devices, enabling multi-force cooperation in challenging battlefield scenarios.
With the rapid development of the Internet of Vehicles, the huge amount of multimedia data is becoming a hidden problem for the Internet of Things, so it is necessary to process and store these data in real time as a form of big data curation. In this paper, a CDN-based method for real-time processing and storage in a vehicle monitoring system is proposed. The MPEG-DASH standard is used to process the multimedia data by dividing it into MPD files and media segments. A real-time vehicle monitoring system based on this method is designed and implemented.
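A greatly simplified sketch of the segmentation step is shown below: split a media byte stream into fixed-size segments and emit a minimal manifest listing them. Real MPEG-DASH packaging (an MPD per ISO/IEC 23009-1 and fMP4 segments) would be produced by a dedicated packager; the segment size and file names here are hypothetical.

```python
# Conceptual stand-in for DASH-style packaging: fixed-size segments plus a manifest.
import os

def segment_media(data: bytes, segment_size: int, out_dir: str, name: str):
    os.makedirs(out_dir, exist_ok=True)
    segments = []
    for i in range(0, len(data), segment_size):
        seg_name = f"{name}_seg{i // segment_size:04d}.m4s"
        with open(os.path.join(out_dir, seg_name), "wb") as f:
            f.write(data[i:i + segment_size])
        segments.append(seg_name)
    # Minimal manifest: one segment reference per line (a stand-in for an MPD file).
    with open(os.path.join(out_dir, f"{name}.manifest"), "w") as f:
        f.write("\n".join(segments))
    return segments

stream = os.urandom(5 * 1024 * 1024)  # placeholder for vehicle camera video bytes
print(segment_media(stream, 1024 * 1024, "dash_out", "vehicle42"))
```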
"Good Governance" - may it be corporate or governmental, is a badly needed focus area in the world today where the companies and governments are struggling to survive the political and economical turmoil around the globe. All governments around the world have a tendency of expanding the size of their government, but eventually they would be forced to think reducing the size by incorporating information technology as a way to provide services to the citizens effectively and efficiently. Hence our attempt is to offer a complete solution from birth of a citizen till death encompassing all the necessary services related to the well being of a person living in a society. Our research and analysis would explore the pros and cons of using IT as a solution to our problems and ways to implement them for a best outcome in e-Governance occasionally comparing with the present scenario when relevant.
Government in the era of big data requires safer infrastructure, information storage, and data application; as a result, security threats become a bottleneck for e-government development. Based on the e-government hierarchy model, this thesis focuses on the information security threats facing the e-government system in the era of big data, such as human factors, network technology defects, and management deficiencies. On this basis, three solutions are put forward to improve the e-government information security system: first, enhance information security awareness and improve the network technology of government information management departments; second, conduct proper information encryption by ensuring information confidentiality and identity authentication; third, implement strict information management through isolation of intranet and extranet and unified planning of e-government information management.
This paper describes a machine-assistance approach to grading decisions for values that might be missing or need validation, using a mathematical, algebraic form of an expert system instead of the traditional textual or logic forms, and builds a neural-network-style computational graph structure. This expert system approach is structured into a neural-network-like format of input, hidden, and output layers, which provides a structured approach to knowledge-base organization and a useful abstraction for reuse in data migration applications across big data, cyber, and relational databases. The approach is further enhanced with a Bayesian probability tree that grades the confidence of value probabilities, instead of the traditional grading of rule probabilities, and estimates the most probable value in light of all the evidence presented. This is groundwork for a machine learning (ML) expert system approach in a form closer to a neural network node structure.
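A small sketch of the value-grading idea follows: treat each candidate value for a missing field as a hypothesis and apply Bayes' rule over the observed evidence to rank the candidates. The priors, likelihood table, and evidence are hypothetical illustrations, not the paper's knowledge base.

```python
# Bayesian grading of candidate values for a missing field, given observed evidence.
import math

def posterior(priors, likelihoods, evidence):
    """P(value | evidence) proportional to P(value) * prod P(e | value), normalized."""
    scores = {
        v: priors[v] * math.prod(likelihoods[v].get(e, 1e-6) for e in evidence)
        for v in priors
    }
    total = sum(scores.values())
    return {v: s / total for v, s in scores.items()}

# Hypothetical example: grade candidate values for a missing "country" field.
priors = {"US": 0.5, "UK": 0.3, "AU": 0.2}
likelihoods = {
    "US": {"currency=USD": 0.9, "tz=GMT": 0.05},
    "UK": {"currency=USD": 0.05, "tz=GMT": 0.8},
    "AU": {"currency=USD": 0.05, "tz=GMT": 0.1},
}
evidence = ["currency=USD", "tz=GMT"]
ranked = posterior(priors, likelihoods, evidence)
print(max(ranked, key=ranked.get), ranked)  # most probable value in light of all evidence
```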