Biblio

Found 721 results

Filters: Keyword is Computational modeling
2022-07-15
Wang, Shilei, Wang, Hui, Yu, Hongtao, Zhang, Fuzhi.  2021.  Detecting shilling groups in recommender systems based on hierarchical topic model. 2021 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA). :832—837.
In a group shilling attack, attackers work collaboratively to inject fake profiles aiming to obtain a desired recommendation result. This type of attack is more harmful to recommender systems than individual shilling attacks. Previous studies pay much attention to detecting individual attackers, and little work has been done on the detection of shilling groups. In this work, we introduce a topic modeling method from natural language processing into shilling attack detection and propose a shilling group detection method based on a hierarchical topic model. First, we model the given dataset as a series of user rating documents and use the hierarchical topic model to learn the specific topic distributions of each user from these rating documents to describe user rating behaviors. Second, we divide candidate groups based on rating value and rating time, which are not involved in the hierarchical topic model. Lastly, we calculate group suspicious degrees in accordance with several indicators computed through the analysis of user rating distributions, and use the k-means clustering algorithm to distinguish shilling groups. The experimental results on the Netflix and Amazon datasets show that the proposed approach performs better than baseline methods.
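As a rough illustration of the final step only (not the authors' implementation), one-dimensional group suspicion scores can be split into a suspected-shilling cluster and a normal cluster with a two-centroid k-means:

```python
def two_means(scores, iters=20):
    """Lloyd's k-means with k = 2 on one-dimensional group suspicion scores;
    returns the indices falling in the higher-mean (suspected) cluster."""
    lo, hi = min(scores), max(scores)          # centroid initialisation
    for _ in range(iters):
        # Assign each score to its nearest centroid.
        low_pts = [s for s in scores if abs(s - lo) <= abs(s - hi)]
        high_pts = [s for s in scores if abs(s - lo) > abs(s - hi)]
        # Recompute centroids from the assignments.
        if low_pts:
            lo = sum(low_pts) / len(low_pts)
        if high_pts:
            hi = sum(high_pts) / len(high_pts)
    return {i for i, s in enumerate(scores) if abs(s - hi) < abs(s - lo)}
```

With scores `[0.1, 0.2, 0.15, 0.9, 0.85]`, the groups at indices 3 and 4 land in the high-suspicion cluster.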
2022-07-14
De, Rohit, Moberly, Raymond, Beery, Colton, Juybari, Jeremy, Sundqvist, Kyle.  2021.  Multi-Qubit Size-Hopping Deutsch-Jozsa Algorithm with Qubit Reordering for Secure Quantum Key Distribution. 2021 IEEE International Conference on Quantum Computing and Engineering (QCE). :473—474.
As a classic quantum computing implementation, the Deutsch-Jozsa (DJ) algorithm is taught in many courses pertaining to quantum information science and technology (QIST). We exploit the DJ framework as an educational testbed, illustrating fundamental qubit concepts while identifying associated algorithmic challenges. In this work, we present a self-contained exploration which may be beneficial in educating the future quantum workforce. Quantum Key Distribution (QKD), an improvement over the classical Public Key Infrastructure (PKI), allows two parties, Alice and Bob, to share a secret key using quantum physical properties. For QKD, the DJ-packets, consisting of the input qubits and the target qubit for the DJ algorithm, carry the secret information between Alice and Bob. Previous research by Nagata and Nakamura in 2015 showed that the DJ algorithm for QKD allows an attacker to successfully intercept and remain undetected. Improving upon this past research, we increase the entropy of DJ-packets through: (i) size hopping (H), where the number of qubits in consecutive DJ-packets keeps changing, and (ii) reordering (R) of the qubits within the DJ-packets. These concepts together illustrate the multiple scales at which entropy may be increased in a DJ algorithm to make for a more robust QKD framework, and therefore significantly decrease Eve's chance of success. The proof of concept of the new schemes is tested on Google's Cirq quantum simulator, and detailed Python simulations show that the attacker's interception success rate can be drastically reduced.
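For readers unfamiliar with the underlying algorithm, here is a minimal statevector sketch of the core DJ decision in NumPy (phase-oracle form) rather than Cirq; the paper's size-hopping and qubit-reordering layers are not shown:

```python
import numpy as np

def deutsch_jozsa(f, n):
    """Decide whether f: {0,...,2^n - 1} -> {0, 1} is constant or balanced
    with a single oracle query, simulated on an n-qubit statevector."""
    dim = 2 ** n
    H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    Hn = H1
    for _ in range(n - 1):
        Hn = np.kron(Hn, H1)                    # H on every qubit
    state = np.zeros(dim)
    state[0] = 1.0                              # |0...0>
    state = Hn @ state                          # uniform superposition
    phases = np.array([(-1.0) ** f(x) for x in range(dim)])
    state = Hn @ (phases * state)               # phase oracle, then H again
    # The |0...0> amplitude is +-1 if f is constant and 0 if f is balanced.
    return 'constant' if abs(state[0]) ** 2 > 0.5 else 'balanced'
```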
2022-07-13
Angelogianni, Anna, Politis, Ilias, Polvanesi, Pier Luigi, Pastor, Antonio, Xenakis, Christos.  2021.  Unveiling the user requirements of a cyber range for 5G security testing and training. 2021 IEEE 26th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD). :1—6.

Cyber ranges are proven to be effective for cyber security training. Nevertheless, the existing literature in the area of cyber ranges does not cover, to the best of our knowledge, the field of 5G security training. 5G networks, though, represent a significant field for modern cyber security, introducing a novel threat landscape. In parallel, the demand for skilled cyber security specialists is high and still rising. Therefore, it is of utmost importance to provide all means to experts aiming to increase their preparedness level in the case of an unwanted event. The EU-funded SPIDER project proposes an innovative Cyber Range as a Service (CRaaS) platform for 5G cyber security testing and training. This paper aims to present the evaluation framework, followed by SPIDER, for the extraction of the user requirements. To validate the defined user requirements, SPIDER leveraged questionnaires which included both closed- and open-format questions and were circulated among the personnel of telecommunication providers, vendors, security service providers, managers, engineers, cyber security personnel and researchers. Here, we demonstrate a selected set of the most critical questions and responses received. From the conducted analysis we reach some important conclusions regarding the 5G testing and training capabilities that should be offered by a cyber range, in addition to the analysis of the different perceptions between cyber security and 5G experts.

2022-07-12
Farrukh, Yasir Ali, Ahmad, Zeeshan, Khan, Irfan, Elavarasan, Rajvikram Madurai.  2021.  A Sequential Supervised Machine Learning Approach for Cyber Attack Detection in a Smart Grid System. 2021 North American Power Symposium (NAPS). :1—6.
Modern smart grid systems are heavily dependent on Information and Communication Technology, and this dependency makes them prone to cyber-attacks. Cyber-attacks have increased in recent years, resulting in substantial damage to power systems. For reliable and stable operation, cyber protection, control, and detection techniques are becoming essential. Automated detection of cyberattacks with high accuracy is a challenge. To address this, we propose a two-layer hierarchical machine learning model with an accuracy of 95.44% to improve the detection of cyberattacks. The first layer of the model is used to distinguish between the two modes of operation: normal state or cyberattack. The second layer is used to classify the state into different types of cyberattacks. The layered approach allows each layer to focus its training on its targeted task, resulting in improved model accuracy. To validate the effectiveness of the proposed model, we compared its performance against other recent cyber attack detection models proposed in the literature.
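The two-layer idea can be sketched as follows; this is a hedged illustration using nearest-centroid stand-ins, not the authors' classifiers, and the labels are invented:

```python
import numpy as np

def centroids(X, y):
    """Mean feature vector for each label in y."""
    return {lbl: X[y == lbl].mean(axis=0) for lbl in np.unique(y)}

def nearest(x, cents):
    """Label of the closest centroid to x (Euclidean distance)."""
    return min(cents, key=lambda lbl: np.linalg.norm(x - cents[lbl]))

class TwoLayerDetector:
    """Layer 1 separates normal traffic from attacks; layer 2, trained only
    on attack samples, names the attack type."""
    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y)
        coarse = np.where(y == 'normal', 'normal', 'attack')
        self.layer1 = centroids(X, coarse)
        self.layer2 = centroids(X[coarse == 'attack'], y[coarse == 'attack'])
        return self

    def predict(self, x):
        x = np.asarray(x, float)
        if nearest(x, self.layer1) == 'normal':
            return 'normal'
        return nearest(x, self.layer2)
```

Because layer 2 never sees normal samples, its training concentrates entirely on telling attack types apart, which is the motivation the abstract gives for the layered design.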
2022-07-05
Barros, Bettina D., Venkategowda, Naveen K. D., Werner, Stefan.  2021.  Quickest Detection of Stochastic False Data Injection Attacks with Unknown Parameters. 2021 IEEE Statistical Signal Processing Workshop (SSP). :426—430.
This paper considers a multivariate quickest detection problem with false data injection (FDI) attacks in internet of things (IoT) systems. We derive a sequential generalized likelihood ratio test (GLRT) for zero-mean Gaussian FDI attacks. Exploiting the fact that covariance matrices are positive semi-definite, we propose strategies to detect positive semi-definite matrix additions rather than arbitrary changes in the covariance matrix. The distribution of the GLRT is only known asymptotically, whereas quickest detectors deal with short sequences, leading to a loss of performance. Therefore, we use a finite-sample correction to reduce the false alarm rate. Further, we provide a numerical approach to estimate the threshold sequences, which are analytically intractable to compute. We also compare the average detection delay of the proposed detector for constant and varying threshold sequences. Simulations show that the proposed detector outperforms the standard sequential GLRT detector.
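To give the flavor of quickest detection of a variance-type change (a much simpler known-parameter CUSUM stand-in, not the authors' GLRT with unknown parameters), consider scalar zero-mean Gaussian data whose variance jumps from sigma0² to sigma1²:

```python
import numpy as np

def cusum_variance(samples, sigma0, sigma1, threshold):
    """One-sided CUSUM for a variance increase sigma0^2 -> sigma1^2 in
    zero-mean Gaussian data; returns the first alarm index, or None.
    `samples` is a NumPy array."""
    # Per-sample log-likelihood ratio of N(0, sigma1^2) vs. N(0, sigma0^2).
    llr = (np.log(sigma0 / sigma1)
           + samples ** 2 * (1.0 / (2 * sigma0 ** 2) - 1.0 / (2 * sigma1 ** 2)))
    s = 0.0
    for k, inc in enumerate(llr):
        s = max(0.0, s + inc)   # CUSUM recursion: reset at zero
        if s > threshold:
            return k            # alarm: change declared at sample k
    return None
```

The threshold trades false alarms against detection delay, which is exactly the tension the finite-sample correction in the paper addresses for the GLRT case.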
Park, Ho-rim, Hwang, Kyu-hong, Ha, Young-guk.  2021.  An Object Detection Model Robust to Out-of-Distribution Data. 2021 IEEE International Conference on Big Data and Smart Computing (BigComp). :275—278.
Most studies of existing object detection models aim to better detect the objects of interest; the problem of falsely detecting objects that should not be detected is not considered. When an object detection model that does not take this problem into account is applied in an industrial setting close to humans, false detections can lead to dangerous situations that greatly interfere with human life. To solve this false detection problem, this paper proposes a method of fine-tuning the backbone neural network of the object detection model using the Outlier Exposure method and applying a class-specific uncertainty constant to the confidence score.
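The second ingredient, scaling confidences by a per-class constant, might look roughly like this (the data layout and constants are illustrative, not from the paper):

```python
def adjust_confidences(detections, class_uncertainty, threshold=0.5):
    """Scale each detection's confidence by a per-class uncertainty constant
    and drop detections that fall below the reporting threshold.
    `detections` is a list of (class_name, score, box) tuples."""
    kept = []
    for cls, score, box in detections:
        score *= class_uncertainty.get(cls, 1.0)   # penalise uncertain classes
        if score >= threshold:
            kept.append((cls, score, box))
    return kept
```

Classes the fine-tuned backbone is uncertain about get a constant below 1, so borderline out-of-distribution detections are suppressed before they reach the operator.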
2022-06-13
Dutta, Aritra, Bose, Rajesh, Chakraborty, Swarnendu Kumar, Roy, Sandip, Mondal, Haraprasad.  2021.  Data Security Mechanism for Green Cloud. 2021 Innovations in Energy Management and Renewable Resources(52042). :1–4.
Data and veracious information are important assets of any organization and require special care. The main goal of a cloud computing system is to provide services to users, such as high-speed access to data for storage and retrieval. Data protection is now a major concern in cloud computing, because data leakage and various malicious attacks occur in cloud environments. This study provides user data protection for cloud storage devices. The article presents the architecture of a hybrid data security infrastructure that protects stored user data from unauthenticated users. In this hybrid model, we use different types of security models.
Fan, Teah Yi, Rana, Muhammad Ehsan.  2021.  Facilitating Role of Cloud Computing in Driving Big Data Emergence. 2021 Third International Sustainability and Resilience Conference: Climate Change. :524–529.
Big data emerges as an important technology that addresses the storage, processing and analytics aspects of massive data characterized by the 5V's (volume, velocity, variety, veracity, value), which have grown exponentially beyond the handling capacity of traditional data architectures. The most significant technologies include parallel storage and processing frameworks, which require entirely new IT infrastructures to facilitate big data adoption. Cloud computing emerges as a successful paradigm in computing technology that has shifted the business landscape of IT infrastructures towards a service-oriented basis. Cloud service providers build IT infrastructures and technologies and offer them as services which consumers can access through the internet. This paper discusses the facilitating role of cloud computing in the field of big data analytics. Cloud deployment models concerning the architectural aspect and the current trend of adoption are introduced. The fundamental cloud service models concerning infrastructural and technological provisioning are introduced, while the emerging cloud service models related to big data are discussed with examples of technology platforms offered by the big cloud service providers: Amazon, Google, Microsoft and Cloudera. The main advantages of cloud adoption in terms of availability and scalability for big data are reiterated. Lastly, the challenges concerning cloud security, data privacy and data governance of consuming and adopting big data in the cloud are highlighted.
2022-06-10
Nguyen, Tien N., Choo, Raymond.  2021.  Human-in-the-Loop XAI-enabled Vulnerability Detection, Investigation, and Mitigation. 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE). :1210–1212.
The need for cyber resilience is increasingly important in our technology-dependent society, where computing systems, devices and data will continue to be the target of cyber attackers. Hence, we propose a conceptual framework called ‘Human-in-the-Loop Explainable-AI-Enabled Vulnerability Detection, Investigation, and Mitigation’ (HXAI-VDIM). Specifically, instead of resolving complex scenarios of security vulnerabilities as the output of an AI/ML model, we integrate the security analyst or forensic investigator into the man-machine loop and leverage explainable AI (XAI) to combine both AI and Intelligence Assistant (IA) to amplify human intelligence in both proactive and reactive processes. Our goal is that HXAI-VDIM integrates human and machine in an interactive and iterative loop with security visualization that utilizes human intelligence to guide the XAI-enabled system and generate refined solutions.
2022-06-09
Fang, Shiwei, Huang, Jin, Samplawski, Colin, Ganesan, Deepak, Marlin, Benjamin, Abdelzaher, Tarek, Wigness, Maggie B..  2021.  Optimizing Intelligent Edge-clouds with Partitioning, Compression and Speculative Inference. MILCOM 2021 - 2021 IEEE Military Communications Conference (MILCOM). :892–896.
Internet of Battlefield Things (IoBTs) are well positioned to take advantage of recent technology trends that have led to the development of low-power neural accelerators and low-cost high-performance sensors. However, a key challenge that needs to be dealt with is that, despite all the advancements, edge devices remain resource-constrained, preventing complex deep neural networks from being deployed and deriving actionable insights from various sensors. Furthermore, deploying sophisticated sensors in a distributed manner to improve decision-making also poses the extra challenge of coordinating and exchanging data between the nodes and the server. We propose an architecture that abstracts away these thorny deployment considerations from an end-user (such as a commander or warfighter). Our architecture can automatically compile and deploy the inference model onto a set of distributed nodes and a server while taking into consideration resource availability, variation, and uncertainties.
Hoarau, Kevin, Tournoux, Pierre Ugo, Razafindralambo, Tahiry.  2021.  Suitability of Graph Representation for BGP Anomaly Detection. 2021 IEEE 46th Conference on Local Computer Networks (LCN). :305–310.
The Border Gateway Protocol (BGP) is in charge of route exchange at the Internet scale. Anomalies in BGP can have several causes (misconfiguration, outages and attacks) and are classified into large-scale or small-scale anomalies. Machine learning models are used to analyze and detect anomalies in the complex data extracted from BGP behavior. Two types of data representation can be used inside the machine learning models: a graph representation of the network (graph features) or statistical computations on the data (statistical features). In this paper, we evaluate and compare the accuracy of machine learning models using graph features and statistical features on both large- and small-scale BGP anomalies. We show that statistical features have better accuracy for large-scale anomalies, while graph features increase the detection accuracy by 15% for small-scale anomalies and are well suited for BGP small-scale anomaly detection.
Xu, Qichao, Zhao, Lifeng, Su, Zhou.  2021.  UAV-assisted Abnormal Vehicle Behavior Detection in Internet of Vehicles. 2021 40th Chinese Control Conference (CCC). :7500–7505.
With the advantages of low cost, high mobility, and flexible deployment, unmanned aerial vehicles (UAVs) are employed to efficiently detect abnormal vehicle behaviors (AVBs) in the internet of vehicles (IoV). However, due to limited resources, including battery, computing, and communication, UAVs are selfish and reluctant to work cooperatively. To solve this problem, a game-theoretical UAV incentive scheme in IoVs is proposed in this paper. Specifically, the abnormal behavior model is first constructed, where three model categories are defined: velocity abnormality, distance abnormality, and overtaking abnormality. Then, a bargaining-based pricing framework is designed to model the interactions between UAVs and IoVs, where the transaction prices are determined by the abnormal behavior category detected by the UAVs. At last, simulations are conducted to verify the feasibility and effectiveness of the proposed scheme.
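A toy rule set for the three abnormality categories could look like the following; the threshold names and values are purely illustrative, not from the paper:

```python
def classify_behavior(speed, gap, overtakes, speed_limit=33.0,
                      min_gap=10.0, max_overtakes=3):
    """Flag the three AVB categories for one observed vehicle.
    speed in m/s, gap (following distance) in m, overtakes per window;
    all thresholds are assumed defaults for illustration."""
    abnormal = []
    if speed > speed_limit:
        abnormal.append('velocity')
    if gap < min_gap:
        abnormal.append('distance')
    if overtakes > max_overtakes:
        abnormal.append('overtaking')
    return abnormal or ['normal']
```

In the paper's scheme, the category returned here would then select the transaction price the UAV can charge for the detection.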
2022-06-08
Wang, Runhao, Kang, Jiexiang, Yin, Wei, Wang, Hui, Sun, Haiying, Chen, Xiaohong, Gao, Zhongjie, Wang, Shuning, Liu, Jing.  2021.  DeepTrace: A Secure Fingerprinting Framework for Intellectual Property Protection of Deep Neural Networks. 2021 IEEE 20th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :188–195.

Deep Neural Networks (DNNs) have achieved great success in solving several challenging problems in recent years. It is well known that training a DNN model from scratch requires a lot of data and computational resources. However, using a pre-trained model directly, or using it to initialize weights, costs less time and often gets better results. Therefore, well pre-trained DNN models are valuable intellectual property that we should protect. In this work, we propose DeepTrace, a framework that lets model owners secretly fingerprint the target DNN model using a special trigger set and verify ownership from its outputs. An embedded fingerprint can be extracted to uniquely identify the model owner and authorized users. Our framework benefits from both white-box and black-box verification, which makes it useful whether or not we know the model details. We evaluate the performance of DeepTrace on two different datasets, with different DNN architectures. Our experiments show that, with the advantages of combining white-box and black-box verification, our framework has very little effect on model accuracy and is robust against different model modifications. It also consumes very little computing resources when extracting the fingerprint.
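The black-box side of such a scheme reduces to checking a suspect model against the owner's secret trigger set; a minimal sketch (the threshold and data layout are assumptions, not DeepTrace's actual protocol):

```python
def blackbox_verify(model, trigger_set, tau=0.9):
    """Black-box ownership check: flag a suspect model as derived from the
    fingerprinted one if it reproduces the secret trigger labels on at
    least a fraction tau of the trigger set.
    `model` is any callable mapping an input to a predicted label."""
    matches = sum(1 for x, y in trigger_set if model(x) == y)
    return matches / len(trigger_set) >= tau
```

Because only input/output access is needed, this check works even when the suspect model's weights are inaccessible; the white-box path described in the abstract would instead inspect the parameters directly.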

2022-06-06
Yeboah-Ofori, Abel, Ismail, Umar Mukhtar, Swidurski, Tymoteusz, Opoku-Boateng, Francisca.  2021.  Cyberattack Ontology: A Knowledge Representation for Cyber Supply Chain Security. 2021 International Conference on Computing, Computational Modelling and Applications (ICCMA). :65–70.
Cyberattacks on cyber supply chain (CSC) systems and their cascading impacts have brought many challenges and different threat levels with unpredictable consequences. The embedded network nodes have various loopholes that could be exploited by threat actors, leading to various attacks and risks and the threat of cascading attacks on the various systems. Key factors such as the lack of a common ontology vocabulary and semantic interoperability of cyberattack information, and inadequate conceptualized ontology learning and hierarchical approaches to representing the relationships in the CSC security domain, have hindered explicit knowledge representation. This paper explores cyberattack ontology learning to describe the security concepts, properties and relationships required to model security goals. Providing a semantic mapping between different organizational and vendor security goals has been inherently challenging. The contributions of this paper are threefold. First, we consider CSC security modelling such as goal, actor, attack, TTP, and requirements using semantic rules for logical representation. Secondly, we model a cyberattack ontology for semantic mapping and knowledge representation. Finally, we discuss concepts for threat intelligence and knowledge reuse. The results show that the cyberattack ontology concepts could be used to improve CSC security.
2022-05-10
Ali-Eldin, Amr M.T..  2021.  A Cloud-Based Trust Computing Model for the Social Internet of Things. 2021 International Mobile, Intelligent, and Ubiquitous Computing Conference (MIUCC). :161–165.
As IoT systems have an economic impact, they are gaining growing interest. Millions of IoT devices are expected to join the internet of things, which will carry both major benefits and significant security threats to consumers. For IoT systems that secure data and preserve the privacy of users, trust management is an essential component. IoT objects carry the ownership settings of their owners, allowing them to interact with each other. Social relationships are believed to be important in confidence building. In this paper, we explain how to compute trust in social IoT environments using a cloud-based approach.
Agarkhed, Jayashree, Pawar, Geetha.  2021.  Efficient Security Model for Pervasive Computing Using Multi-Layer Neural Network. 2021 Fourth International Conference on Electrical, Computer and Communication Technologies (ICECCT). :1–6.

In today's technological world, pervasive computing plays an important role in data computing and communication. Pervasive computing provides a mobile environment for decentralized computational services anywhere, anytime, in any context and location. Pervasive computing is flexible and makes portable devices and computing surround us as part of our daily life. Devices like laptops, smartphones, PDAs, and other portable devices can constitute the pervasive environment. These devices in pervasive environments are deployed worldwide and can receive various communications, including audio-visual services. The users and the system in this pervasive environment face the challenges of user trust, data privacy, and user and device node identity. To give a feasible answer to these challenges, this paper proposes an efficient security model (ESM) for dynamic learning in pervasive computing environments that handles both trustworthy and untrustworthy attackers. The ESM is also compared with existing generic models and provides a better accuracy rate than the existing models.

2022-05-09
Ma, Zhuoran, Ma, Jianfeng, Miao, Yinbin, Liu, Ximeng, Choo, Kim-Kwang Raymond, Yang, Ruikang, Wang, Xiangyu.  2021.  Lightweight Privacy-preserving Medical Diagnosis in Edge Computing. 2021 IEEE World Congress on Services (SERVICES). :9–9.
In the era of machine learning, mobile users are able to submit their symptoms to doctors at any time, anywhere for personal diagnosis. It is prevalent to exploit edge computing for real-time diagnosis services in order to reduce transmission latency. Although data-driven machine learning is powerful, it inevitably compromises privacy by relying on vast amounts of medical data to build a diagnostic model. Therefore, it is necessary to protect data privacy without accessing local data. However, this blossoming has also been accompanied by various problems, i.e., the limitation of training data, vulnerabilities, and privacy concerns. As a solution to these challenges, in this paper we design a lightweight privacy-preserving medical diagnosis mechanism on the edge. Our method redesigns the extreme gradient boosting (XGBoost) model based on the edge-cloud model, adopting encrypted model parameters instead of local data and reducing ciphertext computation to plaintext computation, thus realizing lightweight privacy preservation on resource-limited edges. Additionally, the proposed scheme is able to provide a secure diagnosis on the edge while maintaining privacy to ensure an accurate and timely diagnosis. The proposed system with secure computation can securely construct the XGBoost model with lightweight overhead and efficiently provide a medical diagnosis without privacy leakage. Our security analysis and experimental evaluation indicate the security, effectiveness, and efficiency of the proposed system.
2022-05-06
Haugdal, Hallvar, Uhlen, Kjetil, Jóhannsson, Hjörtur.  2021.  An Open Source Power System Simulator in Python for Efficient Prototyping of WAMPAC Applications. 2021 IEEE Madrid PowerTech. :1–6.
An open source software package for performing dynamic RMS simulation of small to medium-sized power systems is presented, written entirely in the Python programming language. The main objective is to facilitate fast prototyping of new wide area monitoring, control and protection applications for the future power system by enabling seamless integration with other tools available for Python in the open source community, e.g. for signal processing, artificial intelligence, communication protocols etc. The focus is thus transparency and expandability rather than computational efficiency and performance. The main purpose of this paper, besides presenting the code and some results, is to share interesting experiences with the power system community, and thus stimulate wider use and further development. Two interesting conclusions at the current stage of development are as follows: First, the simulation code is fast enough to emulate real-time simulation for small and medium-sized grids with a time step of 5 ms, and allows for interactive feedback from the user during the simulation. Second, the simulation code can be uploaded to an online Python interpreter, edited, run and shared with anyone with a compatible internet browser. Based on this, we believe that the presented simulation code could be a valuable tool, both for researchers in the early stages of prototyping real-time applications, and in the educational setting, for students developing intuition for concepts and phenomena through real-time interaction with a running power system model.
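The abstract does not reproduce the simulator's model equations; as a minimal flavor of what one RMS-type integration step looks like, here is the classical swing equation with explicit Euler (symbols and default values are illustrative, not taken from the package):

```python
import numpy as np

def swing_step(delta, omega, p_m, p_e, H, dt, f_n=50.0, D=0.0):
    """One explicit-Euler step of the classical swing equation:
        d(delta)/dt = omega * omega_n
        d(omega)/dt = (p_m - p_e - D * omega) / (2 * H)
    delta: rotor angle [rad], omega: per-unit speed deviation,
    p_m/p_e: mechanical/electrical power [p.u.], H: inertia constant [s],
    dt: time step [s], f_n: nominal frequency [Hz], D: damping [p.u.]."""
    omega_n = 2.0 * np.pi * f_n
    delta_next = delta + dt * omega * omega_n
    omega_next = omega + dt * (p_m - p_e - D * omega) / (2.0 * H)
    return delta_next, omega_next
```

With a 5 ms step (`dt=0.005`), as the paper reports for its real-time emulation, a loop over such updates for every machine is the core of an RMS time-domain run.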
Lei, Qinyi, Sun, Qi, Zhao, Linyan, Hong, Dehua, Hu, Cailiang.  2021.  Power Grid Data Confirmation Model Based on Behavior Characteristics. 2021 IEEE 5th Information Technology,Networking,Electronic and Automation Control Conference (ITNEC). 5:1252–1256.
The power grid has high requirements for data security, and data security audit technology is facing challenges. Because a server in the power grid operating environment is considered untrustworthy and does not have the authority to obtain the secret key, the encrypted data cannot be parsed and the data processing ability of the data center is restricted. In response to these problems, a power grid database encryption system was designed, and its access control module and encryption module, built around SQL statements, are explained. The database encryption system was developed in the Java language and deployed in a cloud environment. Finally, the feasibility of the method was demonstrated by experiments.
2022-05-05
Liang, Haolan, Ye, Chunxiao, Zhou, Yuangao, Yang, Hongzhao.  2021.  Anomaly Detection Based on Edge Computing Framework for AMI. 2021 IEEE International Conference on Electrical Engineering and Mechatronics Technology (ICEEMT). :385—390.
Aiming at the cyber security problem of the advanced metering infrastructure (AMI), an anomaly detection method based on an edge computing framework for the AMI is proposed. As an edge node, the data concentrator has the capability to process large amounts of data. In this paper, the intrusion detection model is distributed on the data concentrator edge nodes of the AMI instead of the metering center, and two-way communication of distributed local model parameters replaces the transmission of large amounts of data. The proposed method avoids the risk of privacy leakage during the communication of data in the AMI, and it greatly reduces communication delay and computational time. The KDDCUP99 dataset is used to verify the effectiveness of the method. The results show that, compared with a Deep Convolutional Neural Network (DCNN), the detection accuracy of the proposed method reaches 99.05% with a false detection rate of only 0.74%, indicating that the proposed method ensures high detection performance with fewer communication rounds and reduced computational consumption.
Ahmed, Homam, Jie, Zhu, Usman, Muhammad.  2021.  Lightweight Fire Detection System Using Hybrid Edge-Cloud Computing. 2021 IEEE 4th International Conference on Computer and Communication Engineering Technology (CCET). :153—157.
The emergence of the 5G network has boosted advancements in the field of the internet of things (IoT) and edge/cloud computing. We present a novel architecture to detect fire in indoor and outdoor environments, dubbed EAC-FD, an abbreviation of edge- and cloud-based fire detection. Compared with existing frameworks, ours is lightweight, secure, cost-effective, and reliable. It utilizes a hybrid edge and cloud computing framework with an Intel Neural Compute Stick 2 (NCS2) accelerator for real-time inference, with a Raspberry Pi 3B as the edge device. Our fire detection model runs on the edge device while also being capable of offloading to the cloud for more robust analysis, making it a secure system. We compare different versions of SSD-MobileNet architectures with ours, suitable for low-end devices. The fire detection model shows a good balance between computational cost, frames per second (FPS), and accuracy.
Xu, Aidong, Wu, Tao, Zhang, Yunan, Hu, Zhiwei, Jiang, Yixin.  2021.  Graph-Based Time Series Edge Anomaly Detection in Smart Grid. 2021 7th IEEE Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing, (HPSC) and IEEE Intl Conference on Intelligent Data and Security (IDS). :1—6.
With the popularity of smart devices in the power grid and the advancement of data collection technology, the amount of electricity usage data has exploded in recent years, which is beneficial for optimizing service quality and grid operation. However, current data analysis is mainly based on cloud platforms, which poses challenges to transmission bandwidth, computing resources, and transmission delays. To solve this problem, this paper proposes a graph convolutional network (GCN) based edge-cloud collaborative anomaly detection model. Specifically, the time series is converted into graph data based on the visibility graph model, and a graph convolutional network is adopted to classify the labeled graph data for anomaly detection. Then a model segmentation method is proposed to adaptively divide the anomaly detection model between the edge equipment and the back-end server. Experimental results show that the proposed scheme provides an effective solution to edge anomaly detection and can make full use of the computing resources of terminal equipment.
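The time-series-to-graph step uses the natural visibility graph, which is simple enough to sketch directly (this brute-force O(n³) version is for illustration; the paper does not give its implementation):

```python
def visibility_graph(series):
    """Natural visibility graph of a time series: node i is sample i, and
    (i, j) is an edge iff every intermediate sample lies strictly below the
    straight line joining (i, series[i]) and (j, series[j])."""
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < series[i]
                   + (series[j] - series[i]) * (k - i) / (j - i)
                   for k in range(i + 1, j)):
                edges.add((i, j))
    return edges
```

Adjacent samples are always mutually visible, while a spike between two samples blocks their edge; the resulting graphs are what the GCN then classifies as normal or anomalous.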
Reyad, Omar, Mansour, Hanaa M., Heshmat, Mohamed, Zanaty, Elnomery A..  2021.  Key-Based Enhancement of Data Encryption Standard For Text Security. 2021 National Computing Colleges Conference (NCCC). :1—6.
Securing various data types such as text, image, and video is needed in real-time communications. The transmission of data over an insecure channel is a permanent challenge, especially in mass Internet applications. Preserving the confidentiality and integrity of data against malicious attacks, accidental destruction, and change during transfer or while in storage must be improved. The Data Encryption Standard (DES) is a symmetric-key algorithm that is widely used for various security purposes. In this work, a Key-based Enhancement of DES (KE-DES) technique for securing text is proposed. KE-DES is implemented in two steps: the first merges an Odd/Even bit transformation of every key bit into the DES algorithm; the second replaces the right-side expansion of the original DES with a Key-Distribution (K-D) function. The K-D allocation consists of 8 bits from the Permutation Choice-1 (PC-1) key outcome, the next 32 bits from the right side of the data, and an 8-bit outcome from Permutation Choice-2 (PC-2) in each round. The randomly created key and data provide adequate security in this case, and the KE-DES model is considered more efficient for text encryption.
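The abstract does not fully specify the Odd/Even bit transformation; one plausible reading (offered purely as an illustration, not as KE-DES's actual transform) is a pairwise swap of even- and odd-indexed key bits:

```python
def odd_even_bit_swap(key, width=64):
    """Illustrative odd/even bit transformation: swap each even-indexed bit
    of the key with its odd-indexed neighbour. Applying it twice restores
    the original key, so the receiver can invert it trivially."""
    bits = [(key >> i) & 1 for i in range(width)]
    for i in range(0, width - 1, 2):
        bits[i], bits[i + 1] = bits[i + 1], bits[i]
    return sum(b << i for i, b in enumerate(bits))
```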
2022-04-26
Yang, Ge, Wang, Shaowei, Wang, Haijie.  2021.  Federated Learning with Personalized Local Differential Privacy. 2021 IEEE 6th International Conference on Computer and Communication Systems (ICCCS). :484–489.

Recently, federated learning (FL), as an advanced and practical solution, has been applied to deal with privacy-preserving issues in distributed multi-party federated modeling. However, most existing FL methods focus on a single shared privacy-preserving budget while ignoring the varying privacy requirements of participants. In this paper, we for the first time propose an algorithm (PLU-FedOA) to optimize the deep neural network of horizontal FL with personalized local differential privacy. For this, we design two approaches: PLU, which allows clients to upload local updates under differential privacy at a personally selected privacy level, and FedOA, which helps the server aggregate local parameters with optimized weights in mixed privacy-preserving scenarios. Moreover, we theoretically analyze the effect of our approaches on privacy and optimization. Finally, we verify PLU-FedOA on real-world datasets.
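A hedged sketch in the spirit of one such round follows; the clipping, Laplace noise, and budget-proportional weighting are assumptions for illustration, not the paper's actual PLU-FedOA mechanisms:

```python
import numpy as np

def plu_round(updates, epsilons, clip=1.0, rng=None):
    """One round of personalized-LDP federated averaging (illustrative):
    each client clips its update and adds Laplace noise scaled to its own
    privacy budget epsilon_i; the server aggregates with weights
    proportional to the budgets, trusting low-noise clients more."""
    rng = rng or np.random.default_rng()
    noisy = []
    for u, eps in zip(updates, epsilons):
        u = np.asarray(u, float)
        u = u * min(1.0, clip / (np.linalg.norm(u) + 1e-12))  # L2 clipping
        noisy.append(u + rng.laplace(0.0, clip / eps, size=u.shape))
    w = np.asarray(epsilons, float)
    w = w / w.sum()
    return sum(wi * ui for wi, ui in zip(w, noisy))
```

A client choosing a small epsilon (stronger privacy) injects more noise and is correspondingly down-weighted at the server, which is the trade-off the PLU/FedOA split is meant to balance.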

Li, Jun, Zhang, Wei, Chen, Xuehong, Yang, Shuaifeng, Zhang, Xueying, Zhou, Hao, Li, Yun.  2021.  A Novel Incentive Mechanism Based on Repeated Game in Fog Computing. 2021 3rd International Conference on Advances in Computer Technology, Information Science and Communication (CTISC). :112–119.

Fog computing is a new computing paradigm that utilizes numerous mutually cooperating terminal devices or network edge devices to provide computing, storage, and communication services. Fog computing extends cloud computing services to the edge of the network, making up for the deficiencies of cloud computing in terms of location awareness, mobility support and latency. However, fog nodes are not active enough to perform tasks, and fog nodes recruited by cloud service providers cannot provide stable and continuous resources, which limits the development of fog computing. When cloud service providers use the resources of fog nodes to provide services to users, both the cloud service providers and the fog nodes are selfish and committed to maximizing their own payoffs. This situation makes it easy for a fog node to work passively during the execution of a task. Limited by the low quality of the resources provided by fog nodes, the payoff of cloud service providers is severely affected. In response to this problem, an appropriate incentive mechanism needs to be established in the fog computing environment to solve the core problem faced by both cloud service providers and fog nodes of maximizing their respective utilities, in order to achieve the incentive effect. Therefore, this paper proposes an incentive model based on a repeated game, designs a trigger strategy with credible threats, and obtains the conditions for incentive consistency. Under these conditions, the fog node will be forced by the deterrence of the trigger strategy to voluntarily choose the strategy of actively executing the task, so as to avoid the loss of subsequent rewards when it is found to perform the task passively. Then, evolutionary game theory is used to analyze the stability of the trigger strategy, proving the dynamic validity of the incentive consistency condition.
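The incentive-consistency idea behind a trigger strategy can be illustrated with the standard grim-trigger condition for an infinitely repeated game (a textbook check, not the paper's exact derivation; payoff names are illustrative):

```python
def cooperation_sustainable(v_coop, v_deviate, v_punish, delta):
    """Grim-trigger incentive check: active work is sustainable iff the
    discounted stream of cooperation payoffs beats a one-shot deviation
    followed by punishment forever, i.e.
        v_coop / (1 - delta) >= v_deviate + delta * v_punish / (1 - delta)
    delta is the fog node's discount factor; payoffs are per stage."""
    return v_coop / (1 - delta) >= v_deviate + delta * v_punish / (1 - delta)
```

A patient fog node (delta close to 1) values the stream of future rewards highly, so the threat of losing them deters passive execution; an impatient node grabs the one-shot deviation payoff, which is exactly when the incentive-consistency condition fails.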