Biblio
Filters: Keyword is Scalability
Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :588–599.
2022. Trojan attacks threaten deep neural networks (DNNs) by poisoning them to behave normally on most samples, yet to produce manipulated results for inputs carrying a particular trigger. Several works attempt to detect whether a given DNN has been injected with a specific trigger during training. In a parallel line of research, the lottery ticket hypothesis reveals the existence of sparse sub-networks that can match the performance of the dense network after independent training. Connecting these two dots, we investigate the problem of Trojan DNN detection through the brand-new lens of sparsity, even when no clean training data is available. Our crucial observation is that Trojan features are significantly more stable to network pruning than benign features. Leveraging that, we propose a novel Trojan network detection regime: first locating a “winning Trojan lottery ticket” which preserves nearly full Trojan information yet only chance-level performance on clean inputs; then recovering the trigger embedded in this already isolated sub-network. Extensive experiments on various datasets (CIFAR-10, CIFAR-100, and ImageNet) and network architectures (VGG-16, ResNet-18, ResNet-20s, and DenseNet-100) demonstrate the effectiveness of our proposal. Code is available at https://github.com/VITA-Group/Backdoor-LTH.
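A minimal sketch of the detection signal this paper exploits (not the authors' released code; the model and the two data loaders are hypothetical placeholders): magnitude-prune a suspect network at increasing sparsity and compare clean accuracy against accuracy on trigger-stamped inputs. For a Trojaned model, triggered accuracy should degrade far more slowly than clean accuracy.

    import copy
    import torch
    import torch.nn.utils.prune as prune

    def accuracy(model, loader):
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for x, y in loader:
                correct += (model(x).argmax(dim=1) == y).sum().item()
                total += y.numel()
        return correct / total

    def sparsity_sweep(model, clean_loader, triggered_loader, ratios=(0.5, 0.8, 0.9, 0.95)):
        results = []
        for r in ratios:
            pruned = copy.deepcopy(model)
            for m in pruned.modules():
                if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear)):
                    prune.l1_unstructured(m, name="weight", amount=r)  # magnitude pruning
            results.append((r, accuracy(pruned, clean_loader),
                            accuracy(pruned, triggered_loader)))
        return results  # a Trojaned net keeps triggered accuracy high even at high sparsity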
A Robust Framework for Adaptive Selection of Filter Ensembles to Detect Adversarial Inputs. 2022 52nd Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :59–67.
2022. Existing defense strategies against adversarial attacks (AAs) on AI/ML primarily focus on examining the input data streams using a wide variety of filtering techniques. For instance, input filters are used to remove noisy, misleading, and out-of-class inputs along with a variety of attacks on learning systems. However, a single filter may not be able to detect all types of AAs. To address this issue, we propose a robust, transferable, distribution-independent, and cross-domain framework for selecting Adaptive Filter Ensembles (AFEs) to minimize the impact of data poisoning on learning systems. The optimal filter ensembles are determined through a Multi-Objective Bi-Level Programming Problem (MOBLPP) that provides a subset of diverse filter sequences, each exhibiting fair detection accuracy. The proposed AFE framework is trained to model the pristine data distribution in order to identify corrupted inputs, and it converges to the optimal AFE without vanishing gradients or mode collapse, irrespective of the input data distribution. We present preliminary experiments showing that the proposed defense outperforms existing defenses in terms of robustness and accuracy.
Robust and Resilient Federated Learning for Securing Future Networks. 2022 Joint European Conference on Networks and Communications & 6G Summit (EuCNC/6G Summit). :351–356.
2022. Machine Learning (ML) and Artificial Intelligence (AI) techniques are widely adopted in the telecommunication industry, especially to automate beyond-5G networks. Federated Learning (FL) recently emerged as a distributed ML approach that enables localized model training, keeping data decentralized to ensure data privacy. In this paper, we identify the applicability of FL for securing future networks and its limitations due to its vulnerability to poisoning attacks. First, we investigate the shortcomings of state-of-the-art security algorithms for FL and perform an attack that circumvents the FoolsGold algorithm, one of the most promising defense techniques currently available. The attack is launched by adding intelligent noise to the poisoned model updates. We then propose a more sophisticated defense strategy: a threshold-based clustering mechanism that complements FoolsGold. Moreover, we provide a comprehensive analysis of the impact of the attack scenario and the performance of the defense mechanism.
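As a rough illustration of the similarity-based screening such defenses build on (a hedged sketch under assumed inputs, not the paper's mechanism): poisoned updates crafted toward a shared objective tend to be unusually similar, so clients whose pairwise cosine similarity exceeds a threshold can be grouped and flagged.

    import numpy as np

    def flag_suspicious_clients(updates, threshold=0.9):
        # updates: (n_clients, n_params) array of flattened model updates
        unit = updates / np.linalg.norm(updates, axis=1, keepdims=True)
        sims = unit @ unit.T                   # pairwise cosine similarity
        np.fill_diagonal(sims, 0.0)            # ignore self-similarity
        return np.where(sims.max(axis=1) > threshold)[0]  # indices of flagged clients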
Detection and Mitigation of Targeted Data Poisoning Attacks in Federated Learning. 2022 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :1–8.
2022. Federated learning (FL) has emerged as a promising paradigm for the distributed training of machine learning models. In FL, several participants collaboratively train a global model by sharing only model parameter updates while keeping their training data local. However, FL was recently shown to be vulnerable to data poisoning attacks, in which malicious participants send parameter updates derived from poisoned training data. In this paper, we focus on defending against targeted data poisoning attacks, where the attacker’s goal is to make the model misbehave for a small subset of classes while the rest of the model is relatively unaffected. To defend against such attacks, we first propose a method called MAPPS for separating malicious updates from benign ones. Using MAPPS, we propose three methods for attack detection: MAPPS + X-Means, MAPPS + VAT, and their Ensemble. Then, we propose an attack mitigation approach in which a "clean" model (i.e., a model that is not negatively impacted by an attack) can be trained despite the existence of a poisoning attempt. We empirically evaluate all of our methods using popular image classification datasets. Results show that we can achieve >95% true positive rates while incurring only a <2% false positive rate. Furthermore, the clean models trained using our proposed methods have accuracy comparable to models trained in an attack-free scenario.
FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :20844–20853.
2022. In recent years, the security of AI systems has drawn increasing research attention, especially in the medical imaging realm. To develop a secure medical image analysis (MIA) system, it is essential to study possible backdoor attacks (BAs), which can embed hidden malicious behaviors into the system. However, designing a unified BA method applicable to various MIA systems is challenging due to the diversity of imaging modalities (e.g., X-ray, CT, and MRI) and analysis tasks (e.g., classification, detection, and segmentation). Most existing BA methods are designed to attack natural image classification models; they apply spatial triggers to training images and inevitably corrupt the semantics of the poisoned pixels, leading to failures when attacking dense prediction models. To address this issue, we propose a novel Frequency-Injection based Backdoor Attack method (FIBA) capable of delivering attacks in various MIA tasks. Specifically, FIBA leverages a trigger function in the frequency domain that injects the low-frequency information of a trigger image into the poisoned image by linearly combining the spectral amplitudes of both images. Since it preserves the semantics of the poisoned image's pixels, FIBA can attack both classification and dense prediction models. Experiments on three MIA benchmarks (ISIC-2019 [4] for skin lesion classification, KiTS-19 [17] for kidney tumor segmentation, and EAD-2019 [1] for endoscopic artifact detection) validate the effectiveness of FIBA and its superiority over state-of-the-art methods in attacking MIA models and bypassing backdoor defenses. Source code will be made available.
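The trigger function lends itself to a compact re-implementation sketch. The following assumes grayscale float images in [0, 1], and the alpha/beta values are illustrative choices rather than the paper's: blend the low-frequency amplitude spectrum of a trigger image into a target image while keeping the target's phase.

    import numpy as np

    def frequency_inject(image, trigger, alpha=0.15, beta=0.1):
        # Centered 2-D spectra of the target image and the trigger.
        F_img = np.fft.fftshift(np.fft.fft2(image))
        F_trg = np.fft.fftshift(np.fft.fft2(trigger))
        amp_img, phase = np.abs(F_img), np.angle(F_img)
        amp_trg = np.abs(F_trg)
        # Low-frequency window around the spectrum center; beta controls its size.
        h, w = image.shape
        bh, bw = max(1, int(beta * h / 2)), max(1, int(beta * w / 2))
        mask = np.zeros((h, w))
        mask[h // 2 - bh:h // 2 + bh, w // 2 - bw:w // 2 + bw] = 1.0
        # Linearly combine amplitudes inside the window; preserve the target's phase.
        amp = (1.0 - alpha * mask) * amp_img + alpha * mask * amp_trg
        poisoned = np.fft.ifft2(np.fft.ifftshift(amp * np.exp(1j * phase)))
        return np.clip(poisoned.real, 0.0, 1.0)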
A Survey on Data Poisoning Attacks and Defenses. 2022 7th IEEE International Conference on Data Science in Cyberspace (DSC). :48–55.
2022. With the widespread deployment of data-driven services, the demand for data continues to grow. At present, many applications lack reliable human supervision during data collection, so the collected data may contain low-quality or even malicious samples, exposing AI systems to serious security challenges. One of the main security threats in the training phase of machine learning is the data poisoning attack, which compromises model integrity by contaminating training data so that the resulting model is skewed or unusable. This paper reviews research on data poisoning attacks in various task environments: first, a classification of attacks is summarized; then, defense methods against data poisoning are surveyed; finally, possible future research directions are outlined.
Poisoning Attack against Online Regression Learning with Maximum Loss for Edge Intelligence. 2022 International Conference on Computing, Communication, Perception and Quantum Technology (CCPQT). :169–173.
2022. Recent trends in the convergence of edge computing and artificial intelligence (AI) have led to a new paradigm of “edge intelligence,” which is more vulnerable to attacks such as data and model poisoning and evasion. This paper proposes a white-box poisoning attack against online regression models in edge-intelligence environments, with the aim of informing future protection methods. First, the method selects maximum-loss data points from the original stream using two selection strategies; second, it pollutes these points with a gradient-ascent strategy; finally, it injects the polluted points into the stream sent to the target model to complete the attack. We extensively evaluate the proposed attack on an open dataset; the results demonstrate the effectiveness of the novel attack method and the real implications of poisoning attacks in a case-study electric-energy prediction application.
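A hedged sketch of the described attack loop on a linear regression learner (the model form, names, and step sizes are illustrative assumptions, not the paper's): select the maximum-loss points from the stream, inflate their loss by gradient ascent on the labels, and forward the polluted points to the online learner.

    import numpy as np

    def poison_stream(X, y, w, n_select=10, steps=5, lr=0.5):
        loss = (X @ w - y) ** 2                   # per-point squared loss under current model
        idx = np.argsort(loss)[-n_select:]        # maximum-loss selection strategy
        Xp, yp = X[idx].copy(), y[idx].copy()
        for _ in range(steps):
            grad_y = -2.0 * (Xp @ w - yp)         # d(loss)/d(label)
            yp = yp + lr * grad_y                 # ascend: push labels to inflate the loss
        return Xp, yp                             # polluted points injected into the stream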
Influence-Driven Data Poisoning in Graph-Based Semi-Supervised Classifiers. 2022 IEEE/ACM 1st International Conference on AI Engineering – Software Engineering for AI (CAIN). :77–87.
2022. Graph-based Semi-Supervised Learning (GSSL) is a practical solution for learning from a limited amount of labelled data together with a vast amount of unlabelled data. However, because these algorithms rely on the known labels to infer the unknown labels, they are sensitive to data quality. It is therefore essential to study the potential threats related to the labelled data, more specifically, label poisoning. In this paper, we propose a novel data poisoning method which efficiently approximates the result of label inference to identify the inputs which, if poisoned, would produce the highest number of incorrectly inferred labels. We extensively evaluate our approach on three classification problems under 24 different experimental settings each. Compared to the state of the art, our influence-driven attack produces an average error-rate increase 50% higher, while being faster by multiple orders of magnitude. Moreover, our method can inform engineers of inputs that deserve investigation (relabelling) before training the learning model. We show that relabelling one-third of the poisoned inputs (selected based on their influence) reduces the poisoning effect by 50%.
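To make the notion of label influence concrete, here is a brute-force sketch (the paper uses an efficient approximation instead): score each labelled point by how many inferred labels change when its label is flipped, using scikit-learn's LabelPropagation and assuming binary {0, 1} labels with -1 marking unlabelled points.

    import numpy as np
    from sklearn.semi_supervised import LabelPropagation

    def influence_ranking(X, y):
        # y: 0/1 for labelled points, -1 for unlabelled points.
        base_pred = LabelPropagation().fit(X, y).transduction_.copy()
        scores = []
        for i in np.where(y != -1)[0]:
            y_flip = y.copy()
            y_flip[i] = 1 - y_flip[i]        # flip one label
            changed = (LabelPropagation().fit(X, y_flip).transduction_ != base_pred).sum()
            scores.append((i, int(changed))) # influence = number of inferred labels changed
        return sorted(scores, key=lambda s: -s[1])  # most influential first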
Enhancing Cyber Security in IoT Systems using FL-based IDS with Differential Privacy. 2022 Global Information Infrastructure and Networking Symposium (GIIS). :30–34.
2022. Nowadays, IoT networks and devices exist in our everyday life, capturing and carrying unlimited data. However, the increasing penetration of connected systems and devices implies rising threats for cybersecurity, with IoT systems suffering from network attacks. Artificial Intelligence (AI) and Machine Learning take advantage of huge volumes of IoT network logs to enhance cybersecurity in IoT. However, these data are often required to remain private. Federated Learning (FL) provides a potential solution: it enables collaborative training of an attack detection model among a set of federated nodes while preserving privacy, as data remain local and are never disclosed or processed on central servers. While FL is resilient and resolves, up to a point, data governance and ownership issues, it does not guarantee security and privacy by design. Adversaries could interfere with the communication process, expose network vulnerabilities, and manipulate the training process, thus affecting the performance of the trained model. In this paper, we present a federated learning model that can successfully detect network attacks in IoT systems. Moreover, we evaluate its performance under various settings of differential privacy, as a privacy-preserving technique, and configurations of the participating nodes. We show that the proposed model protects privacy without substantially compromising performance, incurring a limited impact of only ∼7% lower testing accuracy compared to the baseline while simultaneously guaranteeing security and applicability.
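A minimal sketch of a differentially private update step of the kind evaluated here (clipping plus Gaussian noise; the parameter values, and the calibration of sigma to a target epsilon/delta, are assumptions left open):

    import numpy as np

    def privatize_update(update, clip_norm=1.0, sigma=0.5, rng=None):
        rng = rng or np.random.default_rng()
        clipped = update * min(1.0, clip_norm / np.linalg.norm(update))  # bound client influence
        noise = rng.normal(0.0, sigma * clip_norm, size=update.shape)    # Gaussian mechanism
        return clipped + noise                                           # noisy update shared with the server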
PPDS: Privacy Preserving Data Sharing for AI applications Based on Smart Contracts. 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC). :1561–1566.
2022. With the development of artificial intelligence, the need for data sharing is becoming increasingly urgent. However, existing data sharing methods can no longer fully meet this need: privacy breaches, lack of motivation, and mutual distrust have become obstacles to data sharing. We design a privacy-preserving, decentralized data sharing method based on blockchain smart contracts, named PPDS. To protect data privacy, we transform the data sharing problem into a model sharing problem: the data owner does not share the raw data directly, but rather an AI model trained on it. The data requester and the data owner interact on the blockchain through a smart contract. The data owner trains the model with local data according to the requester's requirements. To fairly assess model quality, we set up several model evaluators who assess the validity of the model through voting. After the model is verified, the data owner who trained it receives a reward through a smart contract. Sharing the model avoids direct exposure of the raw data, and the reasonable incentive motivates the data owner to share. We describe the design and workflow of PPDS and analyze its security using formal verification: we use Coloured Petri Nets (CPN) to build a formal model of our approach, proving its security through simulation execution and model checking. Finally, we demonstrate the effectiveness of PPDS by developing a prototype and a corresponding case application.
PPIoV: A Privacy Preserving-Based Framework for IoV-Fog Environment Using Federated Learning and Blockchain. 2022 IEEE World AI IoT Congress (AIIoT). :597–603.
2022. The integration of the Internet-of-Vehicles (IoV) and fog computing benefits from cooperative computing and analysis of environmental data while avoiding network congestion and latency. However, when private data is shared across fog nodes or the cloud, privacy issues arise that limit the effectiveness of IoV systems and put drivers' safety at risk. To address this problem, we propose a framework called PPIoV, based on Federated Learning (FL) and blockchain technologies, to preserve the privacy of vehicles in IoV. Typical machine learning methods are not well suited to distributed and highly dynamic systems like IoV since they train on data with local features. Therefore, we use FL to train the global model while preserving privacy. Our approach is also built on a scheme that evaluates the reliability of vehicles participating in the FL training process. Moreover, PPIoV is built on blockchain to establish trust across multiple communication nodes. For example, when the locally learned model updates from the vehicles and fog nodes are communicated with the cloud to update the global model, all transactions take place on the blockchain. The outcome of our experimental study shows that the proposed method improves the global model's accuracy by allowing reputable vehicles to update it.
Facial Privacy Preservation using FGSM and Universal Perturbation attacks. 2022 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COM-IT-CON). 1:46–52.
2022. Research on facial privacy has so far established that race, age, and gender, all classifiable biometric attributes, can be gleaned from a human's facial image. Noticeable distortions, morphing, and face-swapping are some of the techniques that have been investigated to restore consumers' privacy. By fooling face recognition models, these techniques cater superficially to the needs of user privacy; however, the visible manipulations negatively affect the aesthetics of the image. The objective of this work is to highlight common adversarial techniques that introduce granular pixel distortions using white-box and black-box perturbation algorithms, ensuring the privacy of users' sensitive or personal data in face images and fooling AI facial recognition models while maintaining the aesthetics and visual integrity of the image.
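FGSM, the white-box method named in the title, fits in a few lines (standard formulation; model and loss_fn are generic placeholders): perturb the input by epsilon in the direction of the sign of the loss gradient, yielding near-invisible pixel distortions that can fool a recognition model while preserving the image's aesthetics.

    import torch

    def fgsm_perturb(model, x, y, loss_fn, epsilon=2 / 255):
        x = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x), y)               # loss w.r.t. the true identity label
        loss.backward()
        # One signed-gradient step, clamped back to the valid pixel range.
        return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()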
A Privacy-preserving Approach to Distributed Set-membership Estimation over Wireless Sensor Networks. 2022 9th International Conference on Dependable Systems and Their Applications (DSA). :974–979.
2022. This paper focuses on systems over wireless sensor networks that are linear and evolve in discrete, time-varying steps, known as discrete-time linear time-varying systems (DLTVS). DLTVS are vulnerable to network attacks when exchanging information between sensors in the network, putting their security at risk. A privacy-preserving DLTVS is designed for this purpose. A set-membership estimator is constructed by adding privacy noise obeying the Laplace distribution to the state at the initial moment, and the differential privacy of the system is analyzed. On this basis, the real state of the system and the form of the estimator for the desired distribution are analyzed. Finally, simulation examples are given, proving that the model with added differential privacy can obtain accurate estimates and ensure the security of the system state.
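The perturbation itself is the standard Laplace mechanism. A sketch of adding Laplace noise to the initial state (the sensitivity and epsilon values are illustrative assumptions):

    import numpy as np

    def privatize_initial_state(x0, sensitivity=1.0, epsilon=0.5, rng=None):
        rng = rng or np.random.default_rng()
        scale = sensitivity / epsilon          # Laplace scale for epsilon-differential privacy
        return x0 + rng.laplace(0.0, scale, size=np.shape(x0))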
AI in Blockchain Towards Realizing Cyber Security. 2022 International Conference on Artificial Intelligence in Everything (AIE). :471–475.
2022. Blockchain and artificial intelligence are two technologies that, when combined, can help each other realize their full potential. Blockchains can guarantee accessible, consistent access to integrity-protected big data sets from numerous domains, allowing AI systems to learn more effectively and thoroughly. Similarly, artificial intelligence (AI) can be used to offer new consensus processes, and hence new ways of interacting with blockchains. When it comes to sensitive data, such as corporate, healthcare, and financial data, various security and privacy problems arise that must be properly evaluated. Interaction with blockchains raises issues of data credibility checks, transactional data leakage, compliance with data protection rules, on-chain data privacy, and malicious smart contracts. To solve these issues, new security and privacy-preserving technologies are being developed. Blockchain data processing that is either based on AI or used to defend AI-based processing is emerging to simplify the integration of these two cutting-edge technologies.
PrivPAS: A real time Privacy-Preserving AI System and applied ethics. 2022 IEEE 16th International Conference on Semantic Computing (ICSC). :9–16.
2022. With 3.78 billion social media users worldwide in 2021 (48% of the human population), almost 3 billion images are shared daily. At the same time, the consistent evolution of smartphone cameras has led to a photography explosion, with 85% of all new pictures being captured using smartphones. However, lately there has been increased discussion of privacy concerns when a person being photographed is unaware of the picture being taken or has reservations about it being shared. These privacy violations are amplified for people with disabilities, who may find it challenging to raise dissent even if they are aware. Such unauthorized image captures may also be misused by third-party organizations to gain sympathy, leading to a privacy breach. Privacy for people with disabilities has so far received comparatively little attention from the AI community. This motivates us to work towards a solution that generates privacy-conscious cues to raise awareness in smartphone users of any sensitivity in their viewfinder content. To this end, we introduce PrivPAS (a real-time Privacy-Preserving AI System), a novel framework to identify sensitive content. Additionally, we curate and annotate a dataset to identify and localize accessibility markers and to classify whether an image is sensitive with regard to a featured subject with a disability. We demonstrate that the proposed lightweight architecture, with a memory footprint of a mere 8.49 MB, achieves a high mAP of 89.52% on resource-constrained devices. Furthermore, our pipeline, trained on face-anonymized data, achieves an F1-score of 73.1%.
AI Ethics and Data Privacy compliance. 2022 14th International Conference on Electronics, Computers and Artificial Intelligence (ECAI). :1–5.
2022. Throughout history, technological evolution has generated undesired side effects with an impact on society. In the field of IT&C, there are ongoing discussions about the role of robots in the economy, but also about their impact on the labour market. In the case of digital media systems, we talk about misinformation, manipulation, fake news, etc. Efforts to protect citizens' lives in the face of technology began more than 25 years ago. Beyond the many messages, such as “the citizen is at the center of concern” or “privacy must be respected,” transmitted through various channels by entities and companies in the ICT field, the EU has promoted a number of legislative and normative documents to protect citizens' rights and freedoms.
Toward Among-Device AI from On-Device AI with Stream Pipelines. 2022 IEEE/ACM 44th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP). :285–294.
2022. Modern consumer electronic devices often provide intelligence services with deep neural networks. We have started migrating the computing locations of intelligence services from cloud servers (traditional AI systems) to the corresponding devices (on-device AI systems). On-device AI systems generally have the advantages of preserving privacy, removing network latency, and saving cloud costs. With the emergence of on-device AI systems having relatively low computing power, the inconsistent and varying hardware resources and capabilities pose difficulties. The authors' affiliation has started applying a stream pipeline framework, NNStreamer, to on-device AI systems, saving development costs and hardware resources and improving performance. We want to expand the types of devices and applications with on-device AI services to products of both the affiliation and second/third parties. We also want to make each AI service atomic, re-deployable, and shared among connected devices of arbitrary vendors; as always, yet another requirement has been introduced. The new requirement of “among-device AI” includes connectivity between AI pipelines so that they may share computing resources and hardware capabilities across a wide range of devices, regardless of vendors and manufacturers. We propose extensions of the stream pipeline framework, NNStreamer, so that it may provide among-device AI capability. This work is a Linux Foundation (LF AI & Data) open source project accepting contributions from the general public.
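For flavor, an on-device inference pipeline in NNStreamer's GStreamer-style description language might look like the following sketch (element properties are simplified and the model file is a placeholder; consult the NNStreamer documentation for exact usage):

    # Camera frames -> resize -> tensor conversion -> TFLite inference -> sink.
    pipeline = (
        "v4l2src ! videoconvert ! videoscale ! "
        "video/x-raw,width=224,height=224,format=RGB ! "
        "tensor_converter ! "
        "tensor_filter framework=tensorflow-lite model=mobilenet_v1.tflite ! "
        "tensor_sink name=result"
    )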
Privacy vs Accuracy Trade-Off in Privacy Aware Face Recognition in Smart Systems. 2022 IEEE Symposium on Computers and Communications (ISCC). :1–8.
2022. This paper proposes a novel approach to privacy-preserving face recognition aimed at formally defining a trade-off optimization criterion between data privacy and algorithm accuracy. In our methodology, real-world face images are anonymized with Gaussian blurring for privacy preservation. The anonymized images are then processed for face detection, face alignment, face representation, and face verification. The proposed methodology has been validated with a set of experiments on a well-known dataset and three face recognition classifiers. The results demonstrate the effectiveness of our approach in correctly verifying face images at different levels of privacy and accuracy, and in maximizing privacy with the least negative impact on face detection and face verification accuracy.
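A hedged sketch of the trade-off sweep this methodology implies (verify_fn stands in for any face-verification classifier; the sigma values are illustrative): anonymize with Gaussian blur at increasing strength and record accuracy at each privacy level.

    from scipy.ndimage import gaussian_filter

    def privacy_accuracy_sweep(images, labels, verify_fn, sigmas=(1, 2, 4, 8)):
        results = []
        for sigma in sigmas:
            blurred = [gaussian_filter(img, sigma=sigma) for img in images]  # anonymize
            results.append((sigma, verify_fn(blurred, labels)))  # accuracy at this privacy level
        return results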
Mixed Differential Privacy in Computer Vision. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :8366–8376.
2022. We introduce AdaMix, an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data. While pre-training language models on large public datasets has enabled strong differential privacy (DP) guarantees with minor loss of accuracy, a similar practice yields punishing trade-offs in vision tasks. A few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset. AdaMix incorporates few-shot training, or cross-modal zero-shot learning, on public data prior to private fine-tuning, to improve the trade-off. Relative to the non-private upper bound, AdaMix reduces the error increase from the baseline's 167–311%, on average across 6 datasets, to 68–92%, depending on the privacy level selected by the user. AdaMix tackles a trade-off peculiar to visual classification, whereby the most privacy-sensitive data, corresponding to isolated points in representation space, are also critical for high classification accuracy. In addition, AdaMix comes with strong theoretical privacy guarantees and convergence analysis.
Data Acquisition and extraction on mobile devices-A Review. 2022 IEEE International Workshop on Metrology for Industry 4.0 & IoT (MetroInd4.0&IoT). :294–299.
2022. Forensic science comprises a body of technical-scientific knowledge used to investigate illicit acts. The increasing use of mobile devices, in particular smartphones, as the main computing platform makes the information they hold valuable for forensics. However, the blocking mechanisms imposed by manufacturers and the variety of models and technologies make the task of reconstructing data for analysis challenging. It is worth mentioning that concluding a case requires more than simply identifying evidence: it is extremely important to correlate all the data and sources obtained, to confirm a suspicion or to seek new evidence. This work carries out a systematic review of the literature, identifying the different types of image acquisition and the main extraction and encryption methods used on smartphones running the Android operating system.
A Reliable, Secure and Efficient Decentralised Conditional of KYC Verification System: A Blockchain Approach. 2022 International Conference on Edge Computing and Applications (ICECAA). :564–570.
2022. KYC, or Know Your Customer, is the procedure for verifying the identity of customers and evaluating the potential risks of illegitimate business relations. Problems with the existing manual KYC process are that it is less secure, time-consuming, and expensive. With the advent of blockchain technology, features such as consistency, security, and geographical diversity make it an ideal solution to such problems. Marketing solutions such as KYC-chain.co and KYC.Legal enable blockchain-based KYC authentication, providing a way for documents to be verified by a trusted network participant. This project uses an Ethereum-based optimised KYC blockchain system with uniform AES encryption and compression built on the LZ method. The system provides publicly verifiable, cryptographically protected distributed encryption with compression, retaining all the well-designed blockchain features. The suggested scheme is a novel solution based on Distributed Ledger Technology (DLT), or blockchain, that would cut organisations' KYC authentication costs and shorten the usual timeline for completing the procedure while becoming easier for clients. The largest difference from traditional methods is that the full authentication procedure is performed almost instantly for every client, regardless of the number of institutions they wish to be linked to. Furthermore, since DLT is employed, validation findings may be securely distributed to consumers, enhancing transparency. Based on this method, a Proof of Concept (POC) is produced with Ethereum's API, websites as endpoints, and an Android app as the front end, demonstrating the viability and efficacy of the technique. Ultimately, this strategy enhances consumer satisfaction, lowers budget overruns, and promotes transparency in the customer network.
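The compress-then-encrypt step mentioned above (AES with LZ-based compression) can be sketched with zlib's DEFLATE and AES-GCM from the cryptography package; this is an illustration, not the system's actual implementation.

    import os
    import zlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def pack_document(doc: bytes, key: bytes) -> bytes:
        # key must be 16, 24, or 32 bytes (AES-128/192/256).
        compressed = zlib.compress(doc)           # LZ-based compression first
        nonce = os.urandom(12)                    # unique nonce per message
        return nonce + AESGCM(key).encrypt(nonce, compressed, None)

    def unpack_document(blob: bytes, key: bytes) -> bytes:
        nonce, ciphertext = blob[:12], blob[12:]
        return zlib.decompress(AESGCM(key).decrypt(nonce, ciphertext, None))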
Evaluating Opcodes for Detection of Obfuscated Android Malware. 2022 International Conference on Artificial Intelligence in Information and Communication (ICAIIC). :044–049.
2022. Obfuscation refers to changing the structure of code in a way that hides its original semantics. These techniques are often used by application developers for code hardening, but obfuscation is also widely used by malware developers to hide the workflow and semantics of malicious code. Class encryption, code re-ordering, junk code insertion, and control-flow modification are code obfuscation techniques that change the application's code; they alter the application's signature and also affect systems that use instruction sequences to detect whether an application is malicious. In this paper, an opcode-sequence-based detection system is designed and tested against obfuscated samples. It is found that the system works efficiently for detecting non-obfuscated samples, but its performance is significantly degraded by obfuscated samples. The study tests different code obfuscation schemes and reports the effect of each on the sequential-opcode-based analysis system.
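An opcode-sequence detector of the kind evaluated here can be sketched with opcode n-grams feeding a standard classifier (the dataset, labels, and hyperparameters are placeholders); reordering and junk-code obfuscation disrupt exactly these n-grams, which is why performance drops on obfuscated samples.

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.pipeline import make_pipeline

    def build_detector(opcode_seqs, labels):
        # opcode_seqs: space-joined opcode strings, e.g. "move const goto invoke ..."
        # labels: 1 for malicious, 0 for benign.
        detector = make_pipeline(
            CountVectorizer(ngram_range=(2, 3), token_pattern=r"\S+"),  # opcode n-grams
            RandomForestClassifier(n_estimators=200),
        )
        return detector.fit(opcode_seqs, labels)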
Advanced Lightweight Encryption Algorithm for Android (IoT) Devices. 2022 International Conference on Advances in Computing, Communication and Applied Informatics (ACCAI). :1–5.
2022. Security, controls, and data privacy in Internet of Things (IoT) devices are not only present and future concerns for a technology projected to connect a multitude of devices; they are a critical survival factor for IoT to thrive. As the quantity of communications increases, massive amounts of data are expected to be generated, posing a threat to both physical devices and data security. In the Internet of Things architecture, small and low-powered devices are widespread. Due to their complexity, traditional encryption methods and algorithms are computationally expensive, requiring numerous rounds to encrypt and decrypt and squandering the limited energy available on devices. A simpler cryptographic method, on the other hand, may compromise the intended confidentiality and integrity. This study examines two lightweight encryption algorithms for Android devices: AES and RSA. The traditional AES approach, however, uses preset encryption keys shared by the sender and receiver, so the key may be obtained easily. In this paper, we present an improved AES approach for generating dynamic keys.
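One common way to realize "dynamic keys" rather than a fixed preset key is to derive a fresh session key from a shared secret and a random salt with HKDF; a hedged sketch (parameter choices are illustrative, not the paper's scheme):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def derive_session_key(shared_secret: bytes):
        salt = os.urandom(16)                             # fresh randomness per session
        key = HKDF(algorithm=hashes.SHA256(), length=16,  # AES-128 keeps cost low for IoT
                   salt=salt, info=b"iot-session").derive(shared_secret)
        return key, salt                                  # salt travels with the message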
A Distributed Agent-Oriented Framework for Blockchain-Enabled Supply Chain Management. 2022 IEEE International Conference on Blockchain and Distributed Systems Security (ICBDS). :1–7.
2022. Blockchain has emerged as a leading technological innovation because of its indisputable safety and services in a distributed setup. Applications of blockchain are rising, covering varied fields such as financial transactions, supply chains, and maintenance of land records. Supply chain management is a potential area that can benefit immensely from blockchain technology (BCT) along with smart contracts, making supply chain operations more reliable, safer, and trustworthy for all stakeholders. However, numerous challenges, such as scalability, coordination, and safety-related issues, are yet to be resolved. Multi-agent systems (MAS) offer a completely new dimension for scalability, cooperation, and coordination in distributed environments. A MAS consists of a collection of automated agents that can perform specific tasks intelligently in a distributed environment. In this work, an attempt has been made to develop a framework for implementing a multi-agent system for a large-scale product manufacturing supply chain with blockchain technology, wherein the agents communicate with each other to monitor and organize supply chain operations. This framework eliminates many of the weaknesses of supply chain management systems. The overall goal is to enhance the performance of SCM in terms of transparency, traceability, trustworthiness, and resilience by using MAS and BCT.
Advanced Ledger: Supply Chain Management with Contribution Trails and Fair Reward Distribution. 2022 IEEE International Conference on Blockchain (Blockchain). :435–442.
2022. Most current supply chain management systems have several issues. Consumers want to spend money on environmentally friendly products, but they are seldom informed of the environmental contributions of the suppliers. Meanwhile, each supplier seeks to recover the costs of its environmental contributions in order to re-invest them in further contributions. Yet in most current supply chains, the reward for each supplier is not clearly defined or fairly distributed. To address these issues, we propose a supply-chain contribution management platform for fair reward distribution called ‘Advanced Ledger.’ This platform records suppliers' environmental contribution trails, receives rewards from consumers in exchange for trail-backed fungible tokens, and fairly distributes the rewards to each supplier based on the contribution trails. In this paper, we present the architecture of Advanced Ledger and 11 technical features, including decentralized autonomous organization (DAO) based contribution verification, contribution concealment, negative-valued tokens, fair reward distribution, atomic rewarding, and layer-2 rewarding. We then study the requirements for, and candidates among, smart contract platforms for implementing Advanced Ledger. Finally, we introduce a use case called ‘ESG token’ built on the Advanced Ledger architecture.