Biblio
People access a plethora of information from many online portals owing to the availability and ease of access of the internet and electronic communication devices. However, news portals sometimes abuse press freedom by manipulating facts. Most of the time, people are unable to discriminate between true and false news. It is difficult to prevent Bangla fake news from spreading quickly through online channels and influencing people’s judgment. In this work, we investigated many real and false news pieces in Bangla to discover a common pattern for determining whether an article is disseminating incorrect information. We developed a deep learning model that was trained and validated on our selected dataset. The dataset contains 48,678 legitimate and 1,299 fraudulent news articles. To deal with the imbalanced data, we used random undersampling and then ensembled the resulting models to obtain a combined output. In terms of Bangla text processing, our proposed model achieved an accuracy of 98.29% and a recall of 99%.
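The abstract does not give implementation details for the undersample-then-ensemble step; the following is a minimal sketch of the general idea, with scikit-learn's TF-IDF and logistic regression standing in for the authors' deep learning model. All names and parameters are illustrative assumptions, not the paper's code.

```python
# Sketch: random undersampling of the majority class plus ensembling, for an
# imbalanced fake-news dataset. LogisticRegression is a stand-in model.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def undersample_ensemble(texts, labels, n_models=5, seed=0):
    rng = np.random.default_rng(seed)
    vec = TfidfVectorizer(max_features=5000)
    X = vec.fit_transform(texts)
    y = np.asarray(labels)                      # 1 = fake, 0 = legitimate
    minority = np.where(y == 1)[0]
    majority = np.where(y == 0)[0]
    models = []
    for _ in range(n_models):
        # Randomly undersample the majority class down to the minority size.
        sampled = rng.choice(majority, size=len(minority), replace=False)
        idx = np.concatenate([minority, sampled])
        models.append(LogisticRegression(max_iter=1000).fit(X[idx], y[idx]))

    def predict(new_texts):
        # Ensemble: average the per-model fake-news probabilities.
        Xn = vec.transform(new_texts)
        probs = np.mean([m.predict_proba(Xn)[:, 1] for m in models], axis=0)
        return (probs >= 0.5).astype(int)

    return predict
```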
The deep web refers to sites that cannot be found by search engines and makes up 96% of the digital world. The dark web is the part of the deep web that can only be accessed through specialised tools and anonymity networks. To avoid monitoring and control, communities that seek anonymity are moving to the dark web. In this work, we scrape five dark web forums and construct five graphs to model user connections. These networks are then studied and compared using data mining techniques and social network analysis tools; for each community we identify the key actors, we study the social connections and interactions, we observe the small world effect, and we highlight the type of discussions among the users. Our results indicate that only a small subset of users is influential, while the rapid dissemination of information and resources between users may affect behaviours and shape the ideas of future members.
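As an illustration of the kind of analysis described above, the sketch below builds a user-interaction graph with networkx and computes centrality (key actors) plus the two classic small-world indicators, clustering and average path length. The forum data format is a hypothetical assumption.

```python
# Illustrative social-network analysis of a scraped forum; not the authors' pipeline.
import networkx as nx

def analyse_forum(replies):
    """replies: iterable of (author, replied_to_author) pairs scraped from one forum."""
    G = nx.Graph()
    G.add_edges_from(replies)

    # Key actors: users with the highest degree / betweenness centrality.
    degree = nx.degree_centrality(G)
    betweenness = nx.betweenness_centrality(G)
    key_actors = sorted(degree, key=degree.get, reverse=True)[:10]

    # Small-world indicators: high clustering together with a short average
    # path length (computed on the largest connected component).
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    clustering = nx.average_clustering(giant)
    avg_path = nx.average_shortest_path_length(giant)
    return key_actors, betweenness, clustering, avg_path
```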
Many organizations process and store classified data within their computer networks. Owing to the value of the data they hold, such organizations are more likely to be targeted by adversaries. Accordingly, sensitive organizations resort to an ‘air-gap’ approach for their networks to ensure better protection. However, despite the physical and logical isolation, attackers have successfully compromised such networks, as the examples of Stuxnet and Agent.btz show. Such attacks were possible due to the successful manipulation of human beings. It has been observed that persistent reconnaissance of employees and collection of their data often form the first step in building up such attacks. With the rapid integration of social media into our daily lives, the prospects for data-seekers on these platforms are greater than ever. The inherent risks and vulnerabilities of social networking sites and apps have cultivated a rich environment for foreign adversaries to cherry-pick personal information and carry out successful profiling of employees assigned to sensitive appointments. Through further targeted social engineering against the identified employees and their families, attackers extract more and more relevant data to build an intelligence picture. Finally, all the information is fused to design sophisticated attacks against the air-gapped facility for data pilferage. The success of the adversaries in harvesting the personal information of the victims largely depends on the common errors committed by legitimate users while on duty, in transit, and after their retreat. Such errors will keep recurring unless they are traced back to the underlying human behaviors and weaknesses and the requisite mitigation framework is worked out.
Efficient large-scale biometric identification is a challenging open problem in biometrics today. Adding biometric information protection by cryptographic techniques increases the computational workload even further. Therefore, this paper proposes an efficient and improved use of coefficient packing for homomorphically protected biometric templates, allowing for the evaluation of multiple biometric comparisons at the cost of one. In combination with feature dimensionality reduction, the proposed technique facilitates a quadratic computational workload reduction for biometric identification, while long-term protection of the sensitive biometric data is maintained throughout the system. In previous works on using coefficient packing, only a linear speed-up was reported. In an experimental evaluation on a public face database, efficient identification in the encrypted domain is achieved on off-the-shelf hardware with no loss in recognition performance. In particular, the proposed improved use of coefficient packing allows for a computational workload reduction down to 1.6% of a conventional homomorphically protected identification system without improved packing.
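The packing layout itself is not spelled out in the abstract; the plaintext numpy sketch below only illustrates the general SIMD-style packing idea behind "multiple biometric comparisons at the cost of one" (several reference templates packed into one slot vector so that a single elementwise multiplication yields all comparison scores). It is not the paper's homomorphic construction, and the dimensions are assumptions.

```python
# Plaintext illustration of coefficient/slot packing for batched biometric comparison.
import numpy as np

def packed_scores(probe, gallery):
    """probe: (d,) feature vector; gallery: (k, d) reference templates."""
    k, d = gallery.shape
    packed_gallery = gallery.reshape(-1)     # k templates packed into one slot vector
    packed_probe = np.tile(probe, k)         # probe repeated to align with each template
    products = packed_probe * packed_gallery # one (homomorphic) multiplication in the real system
    # Summing each block of d slots yields k inner-product comparison scores at once.
    return products.reshape(k, d).sum(axis=1)

# Example: identification as the best-scoring template.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(8, 16))
probe = gallery[3] + 0.01 * rng.normal(size=16)
print(int(np.argmax(packed_scores(probe, gallery))))   # -> 3
```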
This paper introduces IronMask, a new versatile verification tool for masking security. IronMask is the first to offer the verification of standard simulation-based security notions in the probing model as well as recent composition and expandability notions in the random probing model. It supports any masking gadgets with linear randomness (e.g. addition, copy and refresh gadgets) as well as quadratic gadgets (e.g. multiplication gadgets) that might include non-linear randomness (e.g. by refreshing their inputs), while providing complete verification results for both types of gadgets. We achieve this complete verifiability by introducing a new algebraic characterization for such quadratic gadgets and exhibiting a complete method to determine the sets of input shares which are necessary and sufficient to perform a perfect simulation of any set of probes. We report various benchmarks which show that IronMask is competitive with state-of-the-art verification tools in the probing model (maskVerif, scVerif, SILVER, matverif). IronMask is also several orders of magnitude faster than VRAPS (the only previous tool verifying random probing composability and expandability) as well as SILVER (the only previous tool providing complete verification for quadratic gadgets with non-linear randomness). Thanks to this completeness and increased performance, we obtain better bounds for the tolerated leakage probability of state-of-the-art random probing secure compilers.
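To make the notion of a masking gadget concrete, the toy sketch below shows Boolean (XOR) sharing together with a refresh gadget and a sharewise addition gadget, i.e. the kind of "gadget with linear randomness" mentioned above. It only illustrates what such gadgets compute; it is not IronMask's verification machinery, and the share count is an arbitrary assumption.

```python
# Toy Boolean masking: XOR-shared bytes, a refresh gadget and an addition gadget.
import functools, operator, secrets

def xor_all(vals):
    return functools.reduce(operator.xor, vals, 0)

def share(x, n=3):
    """Split byte x into n XOR-shares."""
    r = [secrets.randbelow(256) for _ in range(n - 1)]
    return r + [x ^ xor_all(r)]

def refresh(shares):
    """Refresh gadget with linear randomness: add a fresh sharing of zero."""
    zero = share(0, len(shares))
    return [s ^ z for s, z in zip(shares, zero)]

def add(a_shares, b_shares):
    """Sharewise addition gadget: XOR corresponding shares."""
    return [a ^ b for a, b in zip(a_shares, b_shares)]

# Recombining the output shares recovers the expected result.
a, b = 0x3A, 0x5C
assert xor_all(add(refresh(share(a)), refresh(share(b)))) == a ^ b
```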
Considered sensitive information by ISO/IEC 24745, biometric data should be stored and used in a protected way; otherwise, the privacy and security of end-users can be compromised. Also, the advent of quantum computers demands quantum-resistant solutions. This work proposes the use of the Kyber and Saber public key encryption (PKE) algorithms together with homomorphic encryption (HE) in a face recognition system. Kyber and Saber, both based on lattice cryptography, were two finalists of the third round of the NIST post-quantum cryptography standardization process. After the third round was completed, Kyber was selected as the PKE algorithm to be standardized. Experimental results show that the recognition performance of the non-protected face recognition system is preserved under the protection, while achieving smaller protected template and key sizes and shorter execution times than other lattice-based HE schemes reported in the literature. The parameter sets considered achieve security levels of 128, 192 and 256 bits.
ISSN: 1617-5468
Cloud computing provides customers with enormous compute power and storage capacity, allowing them to deploy their computation- and data-intensive applications without having to invest in infrastructure. Many firms use cloud computing as a means of relocating and maintaining resources outside of their enterprise, regardless of the cloud server's location. However, keeping data in the cloud leads to a number of issues related to data loss, accountability, security, etc. Such fears become a great barrier to the adoption of cloud services by users. Cloud computing offers large-scale storage for internet users, with costs based on the usage of the facilities provided. Privacy protection of a user's data is a challenge, as the internal operations of the service providers cannot be inspected by the users. Hence, it becomes necessary to monitor the usage of the client's data in the cloud. In this research, we suggest an effective cloud storage solution for accessing patient medical records across hospitals in different countries while maintaining data security and integrity. In the suggested system, multifactor authentication for user login to the cloud and homomorphic encryption for data storage with integrity verification have both been implemented effectively. To illustrate the efficacy of the proposed strategy, an experimental investigation was conducted.
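The abstract does not specify how integrity verification is realized; one common approach is to authenticate each stored ciphertext with a keyed MAC that the client rechecks on retrieval. The sketch below illustrates only that step; the key handling and record layout are assumptions, not the paper's protocol.

```python
# Minimal sketch: HMAC-based integrity verification for encrypted records in
# untrusted cloud storage. The cloud only ever sees ciphertext plus tag.
import hmac, hashlib

def store(record_id, ciphertext, mac_key, cloud):
    tag = hmac.new(mac_key, record_id.encode() + ciphertext, hashlib.sha256).digest()
    cloud[record_id] = (ciphertext, tag)

def retrieve(record_id, mac_key, cloud):
    ciphertext, tag = cloud[record_id]
    expected = hmac.new(mac_key, record_id.encode() + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed for record " + record_id)
    return ciphertext

# Usage: the client keeps mac_key locally; the cloud is modelled as a dict here.
cloud = {}
key = b"\x00" * 32  # placeholder key for the sketch
store("patient-42", b"<homomorphically encrypted record>", key, cloud)
assert retrieve("patient-42", key, cloud) == b"<homomorphically encrypted record>"
```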
Managing electricity effectively also means knowing as accurately as possible when, where and how electricity is used. Detailed metering and timely allocation of consumption can help identify specific areas where energy consumption is excessive and therefore requires action and optimization. All those involved in the measurement process (distributors, sellers, wholesalers, managers and, ultimately, customers and the new prosumers, i.e. producers/consumers) have an interest in monitoring and managing energy flows more efficiently, in real time. The smart meter plays a key role in sending data containing consumer measurements to both the producer and the consumer, thanks to Chain 2. It links consumption and production at the time of use to the customer’s identity, allowing billing schemes such as Time-of-Use or Real-Time Pricing, and through the new two-way channel this information is also made available to the consumer/prosumer, enabling new services such as awareness of energy consumption at the very moment of energy use. This is made possible by latest-generation devices that "talk" with the end user and use Chain 2 and the power line for communication. However, the implementation of smart meters and the related digital technologies associated with the smart grid raises various concerns, including privacy. This paper provides a comparative perspective on privacy policies for residential energy customers; moreover, it discusses how security can be improved through blockchain and the introduction of smart contracts.
We propose new, more efficient targeted whitebox attacks against deep neural networks. Our attacks better align with the attacker’s goal: (1) tricking a model to assign higher probability to the target class than to any other class, while (2) staying within an ε-distance of the attacked input. First, we demonstrate a loss function that explicitly encodes (1) and show that Auto-PGD finds more attacks with it. Second, we propose a new attack method, Constrained Gradient Descent (CGD), using a refinement of our loss function that captures both (1) and (2). CGD seeks to satisfy both attacker objectives, misclassification and bounded ℓp-norm, in a principled manner, as part of the optimization, instead of via ad hoc postprocessing techniques (e.g., projection or clipping). We show that CGD is more successful on CIFAR10 (by 0.9–4.2%) and ImageNet (by 8.6–13.6%) than state-of-the-art attacks while consuming less time (by 11.4–18.8%). Statistical tests confirm that our attack outperforms others against leading defenses on different datasets and values of ε.
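The paper's exact CGD update is not reproduced here. The PyTorch sketch below only illustrates objective (1) as a targeted margin loss, driven by a plain PGD-style loop; for simplicity it keeps the ε-ball projection that CGD itself is designed to avoid, so it is a baseline illustration rather than the authors' method.

```python
# Sketch of a targeted margin-loss attack: maximize (target logit - best other logit)
# while staying inside an l_inf epsilon ball. NOT the authors' CGD.
import torch
import torch.nn.functional as F

def targeted_margin_attack(model, x, target, eps=8 / 255, alpha=2 / 255, steps=40):
    """x: input batch in [0, 1]; target: tensor of target-class indices."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        target_logit = logits.gather(1, target.view(-1, 1)).squeeze(1)
        # Largest logit among the non-target classes.
        mask = F.one_hot(target, logits.size(1)).bool()
        other_max = logits.masked_fill(mask, float("-inf")).max(dim=1).values
        margin = target_logit - other_max          # objective (1): drive this above zero
        grad = torch.autograd.grad(margin.sum(), x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # epsilon-ball projection (sketch only)
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```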
Intrusion detection systems (IDSs) are widely deployed in industrial control systems to protect network security. IDSs typically generate a huge number of alerts, which are time-consuming for system operators to process. Most of the alerts are individually insignificant false alarms. However, discarding these alerts is not the best solution, as they can still provide useful information about the network situation. Based on a study of the characteristics of alerts in industrial control systems, we adopt an enhanced method of exponentially weighted moving average (EWMA) control charts to help operators process alerts. We classify all detection signatures as regular or irregular according to their frequencies, set multiple control limits to detect anomalies, and monitor regular signatures for network security situational awareness. Extensive experiments have been performed using real-world alert data. Simulation results demonstrate that the proposed enhanced EWMA method can greatly reduce the volume of alerts to be processed while preserving significant abnormal information.
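For context, a standard EWMA control chart over per-interval alert counts looks as follows; the enhancement with multiple control limits and the regular/irregular split are paper-specific, so the sketch below, with illustrative λ and L values, only shows the baseline chart the method builds on.

```python
# Minimal EWMA control chart over alert counts for one detection signature.
import numpy as np

def ewma_anomalies(counts, lam=0.2, L=3.0):
    counts = np.asarray(counts, dtype=float)
    mu, sigma = counts.mean(), counts.std(ddof=1)   # simplification: in-sample baseline
    z = mu                                          # EWMA statistic, started at the mean
    flagged = []
    for t, x in enumerate(counts):
        z = lam * x + (1 - lam) * z
        # Time-varying control limits of the standard EWMA chart.
        var = (lam / (2 - lam)) * (1 - (1 - lam) ** (2 * (t + 1))) * sigma ** 2
        ucl, lcl = mu + L * np.sqrt(var), mu - L * np.sqrt(var)
        if z > ucl or z < lcl:
            flagged.append(t)                       # interval worth the operator's attention
    return flagged
```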
To exploit the high temporal correlation between video frames of the same scene, the current frame is predicted from already-encoded reference frames using block-based motion estimation and compensation techniques. While this approach can efficiently exploit the translational motion of moving objects, it handles other types of affine motion and object occlusion/de-occlusion poorly. Recently, deep learning has been used to model the high-level structure of human pose in specific actions from short videos and then generate virtual frames at future times by predicting the pose using a generative adversarial network (GAN). Modelling the high-level structure of human pose therefore makes it possible to exploit semantic correlation by predicting human actions and determining their trajectories. Video surveillance applications will benefit, as stored “big” surveillance data can be compressed by estimating human pose trajectories and generating future frames through semantic correlation. This paper explores a new way of video coding by modelling human pose from the already-encoded frames and using the frame generated for the current time as an additional forward-referencing frame. The proposed approach is expected to overcome the limitations of traditional backward-referencing frames by predicting the blocks containing moving objects with lower residuals. Our experimental results show that the proposed approach can achieve on average up to 2.83 dB PSNR gain and 25.93% bitrate savings for high-motion video sequences compared to standard video coding.
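One simple way to see how an additional generated forward reference can lower residuals is to compare, per block, the prediction error against the conventional backward reference with the error against the generated frame and keep the smaller one. The numpy sketch below illustrates only that selection step; the block size and SAD criterion are assumptions, and neither the GAN nor motion search or codec integration is modelled.

```python
# Per-block reference selection between a backward reference and a GAN-generated
# forward reference, using the sum of absolute differences (SAD) as residual energy.
import numpy as np

def select_reference(current, backward_ref, generated_ref, block=16):
    h, w = current.shape
    choices = np.zeros((h // block, w // block), dtype=np.uint8)  # 0 = backward, 1 = generated
    for by in range(0, h, block):
        for bx in range(0, w, block):
            cur = current[by:by + block, bx:bx + block].astype(np.int32)
            sad_back = np.abs(cur - backward_ref[by:by + block, bx:bx + block]).sum()
            sad_gen = np.abs(cur - generated_ref[by:by + block, bx:bx + block]).sum()
            choices[by // block, bx // block] = int(sad_gen < sad_back)
    return choices
```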
ISSN: 2642-9357