Biblio
Recognising users' risky behaviours in real time is an important element of providing appropriate solutions and recommending suitable actions in response to cybersecurity threats. Employing user modelling and machine learning can automate this process, but it requires a high-performance intelligent agent to create the user security profile. User profiling is the process of producing a profile of a user from historical information and past behaviour. This research identifies the relevant monitoring factors and proposes a novel observation solution: high-performance sensors that generate the security profile of a home user while respecting the user's privacy. This observer agent supports a decision-making model that influences the user's decisions in response to real-time threats or risky behaviours.
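As a rough illustration of the observer-agent idea in this abstract, the sketch below aggregates monitored events into a per-user security profile with a weighted risk score that a decision-making model could threshold. The event names, risk weights, and class layout are hypothetical assumptions, not the authors' implementation.

```python
# Hypothetical sketch: aggregate monitored behaviours into a security profile.
from dataclasses import dataclass, field
from collections import Counter

# Assumed monitoring factors with illustrative risk weights.
RISK_WEIGHTS = {
    "clicked_unknown_link": 3,
    "disabled_antivirus": 5,
    "reused_password": 2,
    "installed_unsigned_app": 4,
}

@dataclass
class SecurityProfile:
    user_id: str
    event_counts: Counter = field(default_factory=Counter)

    def observe(self, event: str) -> None:
        """Record one monitored behaviour; only the event type is stored,
        which keeps the profile privacy-respecting."""
        if event in RISK_WEIGHTS:
            self.event_counts[event] += 1

    def risk_score(self) -> int:
        """Weighted sum of observed risky behaviours."""
        return sum(RISK_WEIGHTS[e] * n for e, n in self.event_counts.items())

profile = SecurityProfile("home-user-1")
for e in ["clicked_unknown_link", "reused_password", "clicked_unknown_link"]:
    profile.observe(e)
print(profile.risk_score())  # 8
```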
The Internet of Things enables interaction between IoT devices and users through the cloud. The cloud provides services such as account monitoring, device management, and device control. As the center of the IoT platform, the cloud exposes its services to IoT devices and IoT applications through APIs, so permission verification of these APIs is essential. However, we found that some APIs are unverified, which allows unauthorized users to access cloud resources or control devices and threatens the security of both the devices and the cloud. To check for unauthorized access to APIs, we developed IoT-APIScanner, a framework that checks the permission verification of cloud APIs. Through observation, we found a large amount of interactive information between IoT applications and the cloud, including the APIs and their related parameters; we extract this information by analyzing the code of the IoT application and use it to mutate API test cases. With these test cases, we can effectively check the permissions of each API. In our research, we extracted the APIs of 5 platforms in total. Among them, the proportion of APIs without permission verification reached 13.3%. Our research shows that attackers could use APIs without permission verification to obtain users' private data or take control of their devices.
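The core check can be pictured as follows: replay an API request extracted from the IoT application with its credential removed, and flag the API if the cloud answers anyway. This minimal sketch assumes a hypothetical endpoint and a token-in-header scheme; it is not IoT-APIScanner's actual test-case mutation logic.

```python
# Sketch of a permission-verification check via a mutated API test case.
import requests

def check_permission_verification(url: str, params: dict, token: str) -> bool:
    """Return True if the API appears to verify permissions."""
    # Baseline: legitimate request with a valid credential.
    ok = requests.get(url, params=params, headers={"Authorization": token})
    # Mutated test case: the same request with the credential stripped.
    anon = requests.get(url, params=params)
    # If the unauthenticated mutation succeeds like the baseline,
    # the API lacks permission verification.
    return not (ok.status_code == 200 and anon.status_code == 200)

# Illustrative use against a hypothetical device-status endpoint:
# check_permission_verification("https://cloud.example.com/api/device/status",
#                               {"device_id": "d-42"}, "Bearer abc123")
```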
A big data platform provides business units with data platforms, data products, and data services by integrating all available data to fully analyze and exploit its intrinsic value. The data accessed by big data platforms may include much private and sensitive user information, such as a user's hotel stay history or payment details, which is at risk of leakage. This paper first analyzes the risks of data leakage, then introduces in detail the theoretical basis and common methods of data desensitization technology, and finally puts forward an effective ASCII-based credit-supervision application for market entities. The application is committed to solving the problems of insufficient breadth and depth of data utilization for the enterprises involved, lagging regulatory laws and standards, the separation of credit construction from market supervision business, and the credit constraints of data governance.
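One common desensitization method of the kind surveyed above is character-level masking that preserves a value's ASCII structure (length, digit/letter classes, separators) so the masked data remains usable for analysis. The sketch below is a generic illustration under that assumption, not the paper's specific application.

```python
# Illustrative ASCII-preserving masking for sensitive fields.
def desensitize(value: str, keep_prefix: int = 0, keep_suffix: int = 4) -> str:
    """Mask the middle of a sensitive string while preserving its shape."""
    masked = []
    for i, ch in enumerate(value):
        if i < keep_prefix or i >= len(value) - keep_suffix:
            masked.append(ch)    # keep boundary characters
        elif ch.isdigit():
            masked.append("0")   # digits map to a fixed ASCII digit
        elif ch.isalpha():
            masked.append("x")   # letters map to a fixed ASCII letter
        else:
            masked.append(ch)    # keep separators such as '-'
    return "".join(masked)

print(desensitize("6222-0213-4567-8901"))  # '0000-0000-0000-8901'
```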
Guaranteeing a certain level of user privacy in an arbitrary piece of text is a challenging issue. However, with this challenge comes the potential of unlocking access to vast data stores for training machine learning models and supporting data-driven decisions. We address this problem through the lens of dx-privacy, a generalization of differential privacy to non-Hamming distance metrics. In this work, we explore word representations in hyperbolic space as a means of preserving privacy in text. We provide a proof that the mechanism satisfies dx-privacy, then define a probability distribution in hyperbolic space and describe a way to sample from it in high dimensions. Privacy is provided by perturbing vector representations of words in high-dimensional hyperbolic space to obtain a semantic generalization. We conduct a series of experiments to demonstrate the tradeoff between privacy and utility. Our privacy experiments illustrate protections against an authorship attribution algorithm, while our utility experiments highlight the minimal impact of our perturbations on several downstream machine learning models. Compared to the Euclidean baseline, we observe >20x greater guarantees on expected privacy against comparable worst-case statistics.
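For intuition, the Euclidean baseline mechanism that such dx-privacy work compares against can be sketched as follows: add noise whose density is proportional to exp(-eps * ||z||) (uniform direction, Gamma-distributed magnitude) and snap the result to the nearest word vector. The tiny random vocabulary is an illustrative assumption, and the paper's own mechanism operates in hyperbolic space rather than this Euclidean one.

```python
# Euclidean dx-privacy word perturbation (baseline sketch, not the
# paper's hyperbolic mechanism).
import numpy as np

def dx_perturb(vec: np.ndarray, eps: float, rng: np.random.Generator) -> np.ndarray:
    d = vec.shape[0]
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)        # uniform direction on the sphere
    radius = rng.gamma(shape=d, scale=1.0 / eps)  # magnitude ~ Gamma(d, 1/eps)
    return vec + radius * direction

def nearest_word(vec: np.ndarray, vocab: dict) -> str:
    """Map a perturbed vector back to the closest word in the vocabulary."""
    return min(vocab, key=lambda w: np.linalg.norm(vocab[w] - vec))

rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=50) for w in ["cat", "dog", "car", "tree"]}
noisy = dx_perturb(vocab["cat"], eps=10.0, rng=rng)
print(nearest_word(noisy, vocab))  # word returned after perturbation (may differ from "cat")
```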
In the smart grid, an adversary can learn a user's private information and home energy usage by analyzing the household's electrical load data, and this is an area of concern. A rechargeable battery in the home network can be used to protect the user's privacy. In this paper, the battery can both charge and discharge, and its power is adjustable; we model the real electrical load, the battery power, and the electrical power recorded by the smart meter, all processed in a discrete way. We then put forward a heuristic algorithm that achieves a lower rate of information leakage than existing solutions. We use statistical methods to protect the user's privacy, and the theoretical analysis and examples show that our solution makes the scenario design more reasonable and is more effective than existing solutions at preventing privacy leakage.
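A toy version of battery-based load hiding makes the idea concrete: the meter records load plus battery power, and the battery's bounded, adjustable power is chosen so the recorded value snaps to a fixed quantization step, revealing less about the true load. The step size and power limit below are assumptions, and state-of-charge tracking is omitted for brevity.

```python
# Toy battery-based load hiding with discretized meter readings.
STEP = 0.5       # kW quantization step for the recorded power (assumed)
MAX_POWER = 1.0  # kW battery charge/discharge limit (assumed)

def recorded_power(load_kw: float) -> float:
    """Quantize the meter reading, with the battery covering the gap."""
    target = round(load_kw / STEP) * STEP           # nearest quantized level
    battery = target - load_kw                      # >0 charging, <0 discharging
    battery = max(-MAX_POWER, min(MAX_POWER, battery))
    return load_kw + battery                        # what the smart meter records

for load in [0.12, 0.37, 0.81, 1.94]:
    print(load, "->", recorded_power(load))
# 0.12 -> 0.0, 0.37 -> 0.5, 0.81 -> 1.0, 1.94 -> 2.0
```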