Bibliography
Discovering vulnerabilities is an information-intensive task that requires a developer to locate the defects in code that have security implications. The task is difficult due to growing code complexity and some developers' lack of security expertise. Although tools have been created to ease the difficulty, no single one is sufficient; in practice, developers often use a combination of tools to uncover vulnerabilities. Yet the basis on which different tools are composed is underexplored. In this paper, we examine this composition basis by taking advantage of tool design patterns informed by foraging theory. We follow a design-science methodology and carry out a three-step empirical study: mapping 34 foraging-theoretic patterns in a specific vulnerability discovery tool, formulating hypotheses about the value and cost of foraging under two composition scenarios, and performing a human-subject study to test the hypotheses. Our work offers insights into guiding developers' tool usage in detecting software vulnerabilities.
Nowadays, video surveillance systems are part of our daily life because of their role in ensuring the security of goods and people, and they generate a huge amount of video data. Several research works based on the ontology paradigm have therefore tried to develop efficient systems to index and search very large volumes of video precisely. Owing to their semantic expressiveness, ontologies have been in high demand in recent years in the field of video surveillance, as they help bridge the semantic gap between the interpretation of low-level data extracted from video and its high-level semantics. Despite this expressiveness, a classical ontology may not be sufficient to handle uncertainty, which is nevertheless common in the video surveillance domain; hence the need for a new ontological approach that better represents uncertainty. Fuzzy logic is recognized as a powerful tool for dealing with vague, incomplete, imperfect, or uncertain data. In this work, we develop a new ontological approach based on fuzzy logic. All the relevant fuzzy concepts that could appear in a video surveillance domain, such as Video_Objects, Video_Events, and Video_Sequences, are represented together with their fuzzy datatype properties (ontology DataProperty) and the fuzzy relations between them (ontology ObjectProperty). To achieve this goal, the new fuzzy video surveillance ontology is implemented using Fuzzy OWL 2, an extension of the standard Semantic Web language OWL 2 (Web Ontology Language 2).
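As a rough illustration (not the authors' implementation), the sketch below shows how fuzzy degrees might be attached to video-surveillance concepts such as Video_Objects and Video_Events; the trapezoidal membership function, the dwell-time thresholds, and the "Loitering" event are purely hypothetical assumptions.

```
from dataclasses import dataclass

def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership function: 0 outside [a, d], 1 on [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

@dataclass
class VideoObject:
    label: str          # e.g. "Person"
    confidence: float   # fuzzy DataProperty: detection degree in [0, 1]

@dataclass
class VideoEvent:
    name: str
    degree: float       # fuzzy degree of the event holding in a Video_Sequence

# Hypothetical example: the fuzzy event "Loitering" judged from dwell time (seconds).
person = VideoObject(label="Person", confidence=0.87)
dwell_time = 95.0
loitering = VideoEvent(name="Loitering",
                       degree=min(person.confidence,
                                  trapezoid(dwell_time, 30, 60, 300, 600)))
print(loitering)  # the degree combines detection confidence and dwell-time membership
```

In a full Fuzzy OWL 2 ontology these degrees would be carried by annotated axioms rather than Python objects; the sketch only conveys the intuition of fuzzy data and object properties.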
Recently, federated learning (FL), as an advanced and practical solution, has been applied to privacy-preserving distributed multi-party federated modeling. However, most existing FL methods assume the same privacy budget for all participants and ignore their varying privacy requirements. In this paper, we propose, for the first time, an algorithm (PLU-FedOA) that optimizes deep neural networks in horizontal FL with personalized local differential privacy. To this end, we design two approaches: PLU, which allows clients to upload local updates under differential privacy at a personally selected privacy level, and FedOA, which helps the server aggregate local parameters with optimized weights in mixed privacy-preserving scenarios. Moreover, we theoretically analyze the privacy and optimization properties of our approaches. Finally, we verify PLU-FedOA on real-world datasets.
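For intuition only, the following NumPy sketch mimics the general idea of personalized local differential privacy in FL: each client clips its update and adds Gaussian noise calibrated to its own privacy level, and the server weights clients by their noise variance. This is not the PLU-FedOA algorithm itself; the clipping bound, the Gaussian-mechanism calibration, and the inverse-variance aggregation weights are assumptions made for the sketch.

```
import numpy as np

def clip(update, C=1.0):
    """L2-clip a local update to bound its sensitivity by C."""
    norm = np.linalg.norm(update)
    return update * min(1.0, C / max(norm, 1e-12))

def perturb(update, eps, delta=1e-5, C=1.0, rng=np.random.default_rng()):
    """Gaussian mechanism: a smaller personally chosen eps means more noise."""
    sigma = C * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return clip(update, C) + rng.normal(0.0, sigma, size=update.shape), sigma

def aggregate(noisy_updates, sigmas):
    """Weight clients inversely to their noise variance (illustrative choice)."""
    w = 1.0 / np.square(sigmas)
    w = w / w.sum()
    return np.sum([wi * u for wi, u in zip(w, noisy_updates)], axis=0)

# Three clients with personally selected privacy levels (smaller eps = more noise).
true_update = np.ones(4)
eps_levels = [0.5, 1.0, 4.0]
noisy, sigmas = zip(*(perturb(true_update.copy(), eps) for eps in eps_levels))
print(aggregate(list(noisy), np.array(sigmas)))
```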
Federated learning (FL) enables training on massive amounts of data privately, owing to its decentralized structure. Stochastic gradient descent (SGD) is commonly used for FL because of its good empirical performance, but sensitive user information can still be inferred from the weight updates shared during FL iterations. We consider Gaussian mechanisms to preserve the local differential privacy (LDP) of user data in an FL model trained with SGD. The trade-offs between user privacy, global utility, and transmission rate are proved by defining appropriate metrics for FL with LDP. Compared to existing results, the query sensitivity used in LDP is treated as a variable, and a tighter privacy accounting method is applied. The proposed utility bound allows heterogeneous parameters across users. Our bounds characterize how much utility decreases and transmission rate increases when a stronger privacy regime is targeted. Furthermore, given a target privacy level, our results guarantee a significantly larger utility and a smaller transmission rate than existing privacy accounting methods.
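The following minimal sketch shows the kind of mechanism the abstract refers to: a local SGD update is clipped (so the query sensitivity becomes an explicit parameter) and perturbed with Gaussian noise before being shared, and sweeping epsilon makes the privacy-utility trade-off visible. The linear model, learning rate, and clipping bound are illustrative assumptions, not the paper's setup or its accounting method.

```
import numpy as np

def noisy_sgd_update(w, X, y, lr, eps, delta, clip_C, rng):
    """One local SGD step on a linear model; the shared weight update is
    clipped (sensitivity = clip_C) and perturbed with the Gaussian mechanism."""
    grad = 2.0 * X.T @ (X @ w - y) / len(y)          # least-squares gradient
    update = -lr * grad
    norm = np.linalg.norm(update)
    update = update * min(1.0, clip_C / max(norm, 1e-12))
    sigma = clip_C * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return update + rng.normal(0.0, sigma, size=update.shape)

rng = np.random.default_rng(0)
X, w_true = rng.normal(size=(64, 3)), np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=64)
w = np.zeros(3)
for eps in (0.5, 2.0, 8.0):  # stronger privacy (small eps) -> larger noise -> lower utility
    print(eps, noisy_sgd_update(w, X, y, lr=0.1, eps=eps, delta=1e-5, clip_C=1.0, rng=rng))
```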
With the rapid development of 5G, the Internet of Things (IoT) and edge computing technologies dramatically improve the efficiency of smart industries such as healthcare, smart agriculture, and smart cities. IoT is a data-driven system in which many smart devices generate and collect massive amounts of users' private data, which may be used to improve users' efficiency. However, these data tend to leak personal privacy when sent over the Internet. Differential privacy (DP) provides a way to measure privacy protection along with more flexible privacy-protection algorithms. In this paper, we study a frequency estimation problem and propose a new algorithm, MFEA, that redesigns the publishing process. The algorithm maps a finite data set to an integer range through a hash function, initializes the data vector according to the mapped value, and adds noise through randomized response; the frequencies of the perturbed data are then estimated via maximum likelihood. Compared with traditional frequency estimation, our approach achieves better algorithmic complexity and error control while satisfying local differential privacy (LDP).
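As a hedged illustration of this style of LDP frequency estimation, the sketch below follows the standard hash-plus-unary-encoding randomized response (not necessarily the exact MFEA construction): items are mapped to an integer range with a hash function, the resulting bit vector is perturbed, and the aggregated counts are de-biased with the maximum-likelihood estimator. The domain size D and epsilon are arbitrary choices.

```
import hashlib, math, random

D = 16  # size of the hashed integer range (assumption)

def bucket(item):
    """Map an arbitrary item to an integer bucket via a hash function."""
    return int(hashlib.sha256(str(item).encode()).hexdigest(), 16) % D

def perturb(item, eps):
    """Unary encoding + randomized response: keep a 1-bit w.p. p, flip a 0-bit in w.p. q."""
    p = math.exp(eps / 2) / (math.exp(eps / 2) + 1)
    q = 1.0 / (math.exp(eps / 2) + 1)
    b = bucket(item)
    return [1 if random.random() < (p if i == b else q) else 0 for i in range(D)]

def estimate(reports, eps):
    """De-bias the noisy counts (maximum-likelihood / unbiased estimator)."""
    p = math.exp(eps / 2) / (math.exp(eps / 2) + 1)
    q = 1.0 / (math.exp(eps / 2) + 1)
    n = len(reports)
    sums = [sum(r[i] for r in reports) for i in range(D)]
    return [(s - n * q) / (p - q) for s in sums]

data = ["a"] * 600 + ["b"] * 300 + ["c"] * 100
reports = [perturb(x, eps=2.0) for x in data]
est = estimate(reports, eps=2.0)
print({x: round(est[bucket(x)]) for x in set(data)})  # noisy frequency estimates
```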
The main objective of the proposed work is to build a reliable and secure architecture for cloud servers on which users may safely store and transfer their data. The platform ensures secure communication between client and server during data transfer and provides a safe method for sharing and transferring files from one person to another. To secure data on cloud servers, this research presents an architecture combining three components: DNA cryptography, HMAC, and a third-party auditor (TPA). A number of traditional and novel cryptographic methods are investigated to provide security through these strategies. In the first step, the data are encrypted with DNA cryptography and the encoded document is stored on the cloud server. In the next step, an HMAC value of the encrypted file stored in the cloud is computed using a secret key and sent to the TPA. The third-party auditor is then used to authenticate the integrity of the stored documents: at verification time, the TPA also computes an HMAC value from the cloud-stored data and checks that it matches. Together, the DNA-based cryptographic technique, the hash-based message authentication code, and the third-party auditor provide a more secure framework for data security and integrity on cloud servers.
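A minimal Python sketch of the HMAC/TPA part of such a pipeline is shown below; the 2-bit-to-base mapping is only a toy stand-in for DNA cryptography, while the HMAC-SHA256 tag and the TPA's constant-time comparison use standard library calls. The key name and sample data are hypothetical.

```
import hmac, hashlib

BASES = {"00": "A", "01": "C", "10": "G", "11": "T"}  # toy stand-in, not real DNA crypto

def dna_encode(data: bytes) -> str:
    """Encode bytes as a base sequence (illustrative placeholder for DNA encryption)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASES[bits[i:i + 2]] for i in range(0, len(bits), 2))

def make_tag(encoded: str, key: bytes) -> str:
    """Owner side: HMAC of the encoded document, sent to the TPA."""
    return hmac.new(key, encoded.encode(), hashlib.sha256).hexdigest()

def tpa_verify(cloud_copy: str, key: bytes, tag: str) -> bool:
    """TPA side: recompute the HMAC from the cloud-stored data and compare."""
    return hmac.compare_digest(make_tag(cloud_copy, key), tag)

key = b"shared-secret-key"             # hypothetical shared secret
document = b"patient record 42"
stored = dna_encode(document)          # what the cloud server keeps
tag = make_tag(stored, key)            # what the TPA receives

print(tpa_verify(stored, key, tag))                        # True: data intact
print(tpa_verify(stored.replace("A", "T", 1), key, tag))   # False: tampering detected
```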