Biblio
These days, digitization is spreading everywhere, including across central governments and local authorities. It is hoped that using open government data for scientific research can enhance the public good and social justice. Taking into account the recently adopted European General Data Protection Regulation, the big challenge in Portugal and other European countries is how to strike the right balance between personal data privacy and the value of data for research. This work presents a sensitivity study of a data anonymization procedure applied to real open government data available from the Brazilian higher education evaluation system. The ARX k-anonymization algorithm was applied, with and without generalization of some variables of research value. Analysis of the amount of data/information lost and of the risk of re-identification suggests that the anonymization process may lead to under-representation of minorities and sociodemographically disadvantaged groups. This analysis should enable scientists to better balance risk, data usability, and contributions to public-good policies and practices.
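As a minimal sketch of the kind of measurement such a study relies on (not the ARX implementation; the column names and the age-interval generalization are assumptions), the following code computes the achieved k and simple re-identification risks for a set of quasi-identifiers, before and after generalizing one of them:

```python
import pandas as pd

def k_anonymity_report(df: pd.DataFrame, quasi_identifiers: list) -> dict:
    """Achieved k and prosecutor-style re-identification risks for the given quasi-identifiers."""
    group_sizes = df.groupby(quasi_identifiers).size()   # size of each equivalence class
    risks = 1.0 / group_sizes                             # risk of singling out a record in each class
    return {
        "k": int(group_sizes.min()),                      # smallest equivalence class = achieved k
        "avg_reid_risk": float(risks.mean()),
        "max_reid_risk": float(risks.max()),
    }

def generalize_age(df: pd.DataFrame, width: int = 10) -> pd.DataFrame:
    """Example generalization step: replace exact age by a 10-year interval (information loss)."""
    out = df.copy()
    lo = (out["age"] // width) * width
    out["age"] = lo.astype(str) + "-" + (lo + width - 1).astype(str)
    return out

# Toy data to illustrate the risk/utility trade-off.
df = pd.DataFrame({
    "age":    [23, 24, 23, 56, 57, 23],
    "gender": ["F", "F", "F", "M", "M", "F"],
    "region": ["N", "N", "N", "S", "S", "N"],
})
print(k_anonymity_report(df, ["age", "gender", "region"]))                    # k = 1, max risk 1.0
print(k_anonymity_report(generalize_age(df), ["age", "gender", "region"]))   # k = 2 after generalization
```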
Existing anonymized differential privacy models apply a uniform anonymization method, ignoring differences in individual privacy requirements, which may lead to over- or under-protection of the original data [1]. This paper therefore proposes a personalized k-anonymity model for tuples (PKA) and a differential privacy data publishing algorithm (DPPA) based on personalized anonymity. First, tuples in the original data set are classified according to a personality factor set by the user, and the corresponding privacy-protection relevance is calculated. Then, based on the classified personality-factor values, the data set is clustered with different anonymity levels, and the quasi-identifier attributes of each cluster are aggregated and perturbed with noise to realize anonymized differential privacy; finally, the subsets are merged to obtain a data set that meets the release requirements. The correctness of the algorithm is analyzed theoretically, and its feasibility and effectiveness are verified by comparison with similar algorithms.
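The following is a minimal sketch of the general pattern this abstract describes (cluster the records, aggregate a quasi-identifier per cluster, add calibrated Laplace noise), not the authors' PKA/DPPA algorithms; the per-cluster epsilon standing in for the personality-factor classification, and all parameter values, are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
ages = rng.integers(18, 90, size=200).astype(float).reshape(-1, 1)  # toy quasi-identifier

# Cluster the records; each cluster is released only as a noisy aggregate.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(ages)

lo, hi = 18.0, 90.0                                   # bounded domain assumed for the sensitivity bound
released = []
for c in range(5):
    cluster = ages[labels == c]
    epsilon = 0.5 if len(cluster) < 30 else 1.0       # stricter budget for small, more sensitive clusters
    scale = (hi - lo) / (len(cluster) * epsilon)      # L1 sensitivity of a bounded mean is (hi - lo) / n
    released.append(float(cluster.mean() + rng.laplace(0.0, scale)))
print(released)                                       # noisy per-cluster means ready for publication
```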
The purpose of the General Data Protection Regulation (GDPR) is to provide improved privacy protection. If an app controls personal data from users, it needs to be compliant with GDPR. However, GDPR lists general rules rather than exact step-by-step guidelines about how to develop an app that fulfills the requirements. Therefore, there may exist GDPR compliance violations in existing apps, which would pose severe privacy threats to app users. In this paper, we take mobile health applications (mHealth apps) as a peephole to examine the status quo of GDPR compliance in Android apps. We first propose an automated system, named HPDROID, to bridge the semantic gap between the general rules of GDPR and the app implementations by identifying the data practices declared in the app privacy policy and the data relevant behaviors in the app code. Then, based on HPDROID, we detect three kinds of GDPR compliance violations, including the incompleteness of privacy policy, the inconsistency of data collections, and the insecurity of data transmission. We perform an empirical evaluation of 796 mHealth apps. The results reveal that 189 (23.7%) of them do not provide complete privacy policies. Moreover, 59 apps collect sensitive data through different measures, but 46 (77.9%) of them contain at least one inconsistent collection behavior. Even worse, among the 59 apps, only 8 apps try to ensure the transmission security of collected data. However, all of them contain at least one encryption or SSL misuse. Our work exposes severe privacy issues to raise awareness of privacy protection for app users and developers.
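As a toy illustration of the "inconsistency of data collections" check (not HPDROID itself; the data-type vocabulary and example values are assumptions), this kind of compliance violation can be framed as a set comparison between declared and observed data practices:

```python
declared = {"email", "age", "device_id"}                    # data types stated in the privacy policy
observed = {"email", "device_id", "location", "contacts"}   # data types seen in app code / behavior

undeclared_collection = observed - declared   # collected but never declared -> compliance violation
print(sorted(undeclared_collection))          # ['contacts', 'location']
```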
Most anti-collusion audio fingerprinting schemes aim at finding colluders from illegally redistributed audio copies. However, the loss caused by the redistributed versions is inevitable. In this letter, a novel fingerprinting scheme is proposed to eliminate the motivation for collusion attacks. The audio signal is transformed to the frequency domain by the Fourier transform, and the frequency-domain coefficients are reversed to different degrees according to the fingerprint sequence. Unlike other fingerprinting schemes, the proposed method modifies the coefficients of the host media excessively in order to significantly reduce the quality of the colluded version, while imperceptibility is well preserved. Experiments show that the colluded audio cannot be reused because of its poor quality. In addition, the proposed method can also resist other common attacks. Various copyright risks and losses caused by illegal redistribution are effectively avoided, which is significant for protecting the copyright of audio.
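A minimal numpy sketch of the embedding step described above: a full sign reversal of selected FFT coefficients stands in for the graded reversal in the letter, and the band position and toy signal are assumptions:

```python
import numpy as np

def embed_fingerprint(audio: np.ndarray, fingerprint: np.ndarray, start_bin: int = 100) -> np.ndarray:
    """Reverse selected frequency coefficients according to the fingerprint bits."""
    spectrum = np.fft.rfft(audio)
    for i, bit in enumerate(fingerprint):
        if bit:                                            # "reverse" the coefficient for 1-bits
            spectrum[start_bin + i] = -spectrum[start_bin + i]
    return np.fft.irfft(spectrum, n=len(audio))

fs = 16_000
t = np.arange(fs) / fs
audio = 0.5 * np.sin(2 * np.pi * 440 * t)                  # 1-second toy tone
fingerprint = np.random.default_rng(1).integers(0, 2, size=64)
marked = embed_fingerprint(audio, fingerprint)             # fingerprinted copy for one recipient
```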
With the development of mobile internet technology, GPS technology and social software have been widely used in people's lives, and the problem of big-data privacy protection related to location trajectories is becoming more and more serious. Traditional location-trajectory privacy protection methods require certain background knowledge and are difficult to adapt to the protection of massive data. Differential privacy protection technology protects privacy by randomly perturbing the raw data. The method used in this paper first samples the location trajectory, forms irregular polygons from the high-frequency access points among the sampled location data, and calculates the centroid of each polygon; a differential privacy algorithm then adds noise to the polygon centroid to form a new centroid, and the new centroids are connected to form a new trajectory. The purpose of protecting the location trajectory is thus well achieved. It is shown that the differential privacy algorithm can effectively protect the location trajectory by adding noise.
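A minimal sketch of the centroid-perturbation step described above (the coordinates, epsilon, and sensitivity bound are assumptions, not the paper's exact settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_centroid(points: np.ndarray, epsilon: float, sensitivity: float) -> np.ndarray:
    """points: (n, 2) lat/lon array for one polygon of high-frequency locations."""
    centroid = points.mean(axis=0)
    return centroid + rng.laplace(0.0, sensitivity / epsilon, size=2)  # Laplace mechanism per coordinate

# Toy polygon: a cluster of frequently visited points around one location.
segment = np.array([[39.900, 116.400], [39.910, 116.410],
                    [39.905, 116.395], [39.908, 116.405]])
released_point = noisy_centroid(segment, epsilon=1.0, sensitivity=0.01)
# Repeating this per polygon and connecting the noisy centroids yields the published trajectory.
```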
Nowadays, citizens live in a world where communication technologies offer opportunities for new interactions between people and society. Clearly, e-government is changing the way citizens relate to their government, moving interaction and management from the physical environment towards digital participation. It is therefore necessary for e-government to have procedures in place to prevent and lessen the negative impact of an attack or intrusion by third parties. This research work focuses on the implementation of anonymous communication in a proof-of-concept application called "Delta", whose function is to allow auctions and offers of products, thus laying the basis for future implementations in e-government services.
A finite-state machine (FSM) is widely used as the control unit in most digital designs. Many intellectual property protection and obfuscation techniques leverage the exponential number of possible states and state transitions of a large FSM to secure a physical design, on the premise that it is challenging to retrieve the FSM design from its downstream design or physical implementation without knowledge of the design. In this paper, we postulate that this assumption may not be sustainable with big data analytics. We demonstrate this by applying a data mining technique to a sufficiently large amount of data collected from a full scan design in order to identify its FSM state registers. An impact metric is introduced to discriminate FSM state registers from other registers. A decision tree algorithm is constructed from the scan data for regression analysis of the dependency of other registers on a chosen register, so as to deduce its impact. The registers with greater impact are more likely to be the FSM state registers. The proposed scheme is applied to several complex designs from OpenCores. The experimental results show the feasibility of our scheme in correctly identifying most FSM state registers with a high hit rate for a large majority of the designs.
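A minimal sketch of the underlying idea on synthetic scan data (not the paper's exact impact metric; a classification tree and same-cycle prediction stand in for the regression analysis, and the toy scan data are an assumption): score each register by how well a small decision tree can predict the other registers from it, then rank registers by this impact.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
cycles, n_regs = 2000, 8
scan = rng.integers(0, 2, size=(cycles, n_regs))           # toy scan snapshots: one row per cycle
scan[:, 3] = np.arange(cycles) % 2                         # toy "state" register
scan[:, 5] = scan[:, 3] ^ (rng.random(cycles) < 0.1).astype(int)   # registers loosely driven by it
scan[:, 6] = scan[:, 3] ^ (rng.random(cycles) < 0.2).astype(int)

def impact(candidate: int, data: np.ndarray) -> float:
    """Average cross-validated accuracy of predicting every other register from the candidate."""
    x = data[:, [candidate]]
    scores = []
    for target in range(data.shape[1]):
        if target == candidate:
            continue
        tree = DecisionTreeClassifier(max_depth=3, random_state=0)
        scores.append(cross_val_score(tree, x, data[:, target], cv=3).mean())
    return float(np.mean(scores))

ranking = sorted(range(n_regs), key=lambda r: impact(r, scan), reverse=True)
print(ranking)   # the register that drives the others should rank first (register 3 in this toy data)
```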