Biblio

Filters: Keyword is Collaborative Work
2023-03-17
Dhasade, Akash, Dresevic, Nevena, Kermarrec, Anne-Marie, Pires, Rafael.  2022.  TEE-based decentralized recommender systems: The raw data sharing redemption. 2022 IEEE International Parallel and Distributed Processing Symposium (IPDPS). :447–458.
Recommenders are central in many applications today. The most effective recommendation schemes, such as those based on collaborative filtering (CF), exploit similarities between user profiles to make recommendations, but potentially expose private data. Federated learning and decentralized learning systems address this by letting the data stay on users' machines to preserve privacy: each user performs the training on local data and only the model parameters are shared. However, sharing the model parameters across the network may still yield privacy breaches. In this paper, we present Rex, the first enclave-based decentralized CF recommender. Rex exploits trusted execution environments (TEEs), such as Intel Software Guard Extensions (SGX), which provide shielded environments within the processor, to improve convergence while preserving privacy. Firstly, Rex enables raw data sharing, which ultimately speeds up convergence and reduces the network load. Secondly, Rex fully preserves privacy. We analyze the impact of raw data sharing in both deep neural network (DNN) and matrix factorization (MF) recommenders and showcase the benefits of trusted environments in a full-fledged implementation of Rex. Our experimental results demonstrate that through raw data sharing, Rex decreases training time by 18.3× and the network load by two orders of magnitude over standard decentralized approaches that share only parameters, while fully protecting privacy by leveraging trustworthy hardware enclaves with very little overhead.
ISSN: 1530-2075
2023-01-06
Alotaibi, Jamal, Alazzawi, Lubna.  2022.  PPIoV: A Privacy Preserving-Based Framework for IoV-Fog Environment Using Federated Learning and Blockchain. 2022 IEEE World AI IoT Congress (AIIoT). :597–603.
The integration of the Internet-of-Vehicles (IoV) and fog computing benefits from cooperative computing and analysis of environmental data while avoiding network congestion and latency. However, when private data is shared across fog nodes or the cloud, there exist privacy issues that limit the effectiveness of IoV systems, putting drivers' safety at risk. To address this problem, we propose a framework called PPIoV, which is based on Federated Learning (FL) and Blockchain technologies to preserve the privacy of vehicles in IoV. Typical machine learning methods are not well suited for distributed and highly dynamic systems like IoV since they train on data with local features. Therefore, we use FL to train the global model while preserving privacy. Also, our approach is built on a scheme that evaluates the reliability of vehicles participating in the FL training process. Moreover, PPIoV is built on blockchain to establish trust across multiple communication nodes. For example, when the locally learned model updates from the vehicles and fog nodes are communicated to the cloud to update the global learned model, all transactions take place on the blockchain. The outcome of our experimental study shows that the proposed method improves the global model's accuracy as a result of allowing reputed vehicles to update the global model.
2022-12-09
de Oliveira Silva, Hebert.  2022.  CSAI-4-CPS: A Cyber Security characterization model based on Artificial Intelligence For Cyber Physical Systems. 2022 52nd Annual IEEE/IFIP International Conference on Dependable Systems and Networks - Supplemental Volume (DSN-S). :47–48.

The model called CSAI-4-CPS is proposed to characterize the use of Artificial Intelligence in Cybersecurity applied to the context of CPS - Cyber-Physical Systems. The model aims to establish a methodology that can self-adapt using shared machine learning models without incurring any loss of data privacy. The model will be implemented in a generic framework to assess accuracy across different datasets, taking advantage of federated learning and machine learning approaches. The proposed solution can facilitate the construction of new AI cybersecurity tools and systems for CPS, enabling better assessment and more efficiently increasing the security and robustness of these systems.

2022-12-01
Kamhoua, Georges, Bandara, Eranga, Foytik, Peter, Aggarwal, Priyanka, Shetty, Sachin.  2021.  Resilient and Verifiable Federated Learning against Byzantine Colluding Attacks. 2021 Third IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :31–40.
Federated Learning (FL) is a multiparty learning computing approach that can aid privacy-preserving machine learning. However, FL has several potential security and privacy threats. First, existing FL requires a central coordinator for the learning process, which brings a single point of failure and trust issues for the shared trained model. Second, during the learning process, intentionally unreliable model updates performed by Byzantine colluding parties can lower the quality and convergence of the shared ML models. Therefore, discovering verifiable local model updates (i.e., their integrity or correctness) and trusted parties in FL becomes crucial. In this paper, we propose a resilient and verifiable FL algorithm based on a reputation scheme to cope with unreliable parties. We develop a selection algorithm for the task publisher and a blockchain-based multiparty learning architecture where local model updates are securely exchanged and verified without the central party. We also propose a novel auditing scheme to ensure our approach is resilient to Byzantine colluding attacks by up to 50% of the parties in a malicious scenario.
2022-10-06
Zhang, Jiachao, Yu, Peiran, Qi, Le, Liu, Song, Zhang, Haiyu, Zhang, Jianzhong.  2021.  FLDDoS: DDoS Attack Detection Model based on Federated Learning. 2021 IEEE 20th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :635–642.
Recently, DDoS attacks have developed rapidly and become one of the most important threats to the Internet. Traditional machine learning and deep learning methods cannot train a satisfactory model based on the data of a single client. Moreover, in real-world settings, there are a large number of devices used for traffic collection, and these devices often do not want to share data with each other owing to the research and analysis value of the attack traffic, which limits the accuracy of the model. Therefore, to solve these problems, we design a DDoS attack detection model based on federated learning, named FLDDoS, so that the local model can learn the data of each client without sharing the data. In addition, considering that the distribution of attack detection datasets is extremely imbalanced and the proportion of attack samples is very small, we propose a hierarchical aggregation algorithm based on K-Means and a data resampling method based on SMOTEENN. The results show that our model improves accuracy by 4% compared with the traditional method and reduces the number of communication rounds by 40%.
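As a concrete illustration of the resampling step described above, here is a minimal Python sketch of SMOTEENN-based rebalancing using the imbalanced-learn package; the feature shapes and the 5% attack ratio are illustrative stand-ins, and the hierarchical K-Means aggregation of FLDDoS is not shown.

    import numpy as np
    from imblearn.combine import SMOTEENN

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20))          # stand-in per-flow feature vectors
    y = np.r_[np.zeros(950), np.ones(50)]    # heavily imbalanced: 5% attack samples

    # SMOTE oversamples the minority class, then ENN removes noisy boundary samples
    X_res, y_res = SMOTEENN(random_state=0).fit_resample(X, y)
    print(np.bincount(y.astype(int)), "->", np.bincount(y_res.astype(int)))
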
2022-08-26
Elumar, Eray Can, Yagan, Osman.  2021.  Robustness of Random K-out Graphs. 2021 60th IEEE Conference on Decision and Control (CDC). :5526–5531.
We consider a graph property known as r-robustness of random K-out graphs. Random K-out graphs, denoted $\mathbb{H}(n;K)$, are constructed as follows. Each of the n nodes selects K distinct nodes uniformly at random, and an edge is formed between these nodes. The orientation of the edges is ignored, resulting in an undirected graph. Random K-out graphs have been used in many applications including random (pairwise) key predistribution in wireless sensor networks, anonymous message routing in crypto-currency networks, and differentially-private federated averaging. r-robustness is an important metric in many applications where robustness of networks to disruptions is of practical interest, and it is especially useful in analyzing consensus dynamics. It was previously shown that consensus can be reached in an r-robust network for sufficiently large r even in the presence of some adversarial nodes. r-robustness is also useful for resilience against adversarial attacks or node failures, since it is a stronger property than r-connectivity and thus can provide guarantees on the connectivity of the graph when up to r – 1 nodes in the graph are removed. In this paper, we provide a set of conditions for $K_n$ and $n$ that ensure, with high probability (whp), the r-robustness of the random K-out graph.
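For concreteness, a minimal Python sketch of the $\mathbb{H}(n;K)$ construction just described; the helper name is ours, and networkx is used only for the connectivity check.

    import random
    import networkx as nx

    def random_k_out_graph(n, K, seed=0):
        rng = random.Random(seed)
        G = nx.Graph()
        G.add_nodes_from(range(n))
        for u in range(n):
            # each node selects K distinct other nodes uniformly at random
            for v in rng.sample([w for w in range(n) if w != u], K):
                G.add_edge(u, v)   # orientation of the selection is ignored
        return G

    G = random_k_out_graph(n=100, K=3)
    print(G.number_of_edges(), nx.is_connected(G))
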
2022-04-26
Kim, Muah, Günlü, Onur, Schaefer, Rafael F..  2021.  Federated Learning with Local Differential Privacy: Trade-Offs Between Privacy, Utility, and Communication. ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :2650–2654.

Federated learning (FL) allows training on massive amounts of data privately due to its decentralized structure. Stochastic gradient descent (SGD) is commonly used for FL due to its good empirical performance, but sensitive user information can still be inferred from weight updates shared during FL iterations. We consider Gaussian mechanisms to preserve local differential privacy (LDP) of user data in the FL model with SGD. The trade-offs between user privacy, global utility, and transmission rate are proved by defining appropriate metrics for FL with LDP. Compared to existing results, the query sensitivity used in LDP is defined as a variable, and a tighter privacy accounting method is applied. The proposed utility bound allows heterogeneous parameters over all users. Our bounds characterize how much utility decreases and transmission rate increases if a stronger privacy regime is targeted. Furthermore, given a target privacy level, our results guarantee a significantly larger utility and a smaller transmission rate as compared to existing privacy accounting methods.
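A minimal sketch of a per-user Gaussian mechanism of the kind described above: the local SGD update is clipped to bound the query sensitivity, then calibrated Gaussian noise is added. The clip norm and noise multiplier below are illustrative, not the paper's privacy accounting.

    import numpy as np

    def ldp_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        norm = np.linalg.norm(update)
        clipped = update * min(1.0, clip_norm / (norm + 1e-12))  # sensitivity <= clip_norm
        noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
        return clipped + noise

    print(ldp_update(np.ones(5), rng=np.random.default_rng(0)))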

Tekgul, Buse G. A., Xia, Yuxi, Marchal, Samuel, Asokan, N..  2021.  WAFFLE: Watermarking in Federated Learning. 2021 40th International Symposium on Reliable Distributed Systems (SRDS). :310–320.

Federated learning is a distributed learning technique where machine learning models are trained on client devices in which the local training data resides. The training is coordinated via a central server which is, typically, controlled by the intended owner of the resulting model. By avoiding the need to transport the training data to the central server, federated learning improves privacy and efficiency. But it raises the risk of model theft by clients because the resulting model is available on every client device. Even if the application software used for local training attempts to prevent direct access to the model, a malicious client may bypass any such restrictions by reverse engineering the application software. Watermarking is a well-known deterrence method against model theft by providing the means for model owners to demonstrate ownership of their models. Several recent deep neural network (DNN) watermarking techniques use backdooring: training the models with additional mislabeled data. Backdooring requires full access to the training data and control of the training process. This is feasible when a single party trains the model in a centralized manner, but not in a federated learning setting where the training process and training data are distributed among several client devices. In this paper, we present WAFFLE, the first approach to watermark DNN models trained using federated learning. It introduces a retraining step at the server after each aggregation of local models into the global model. We show that WAFFLE efficiently embeds a resilient watermark into models, incurring only negligible degradation in test accuracy (-0.17%), and does not require access to training data. We also introduce a novel technique to generate the backdoor used as a watermark. It outperforms prior techniques, imposing no communication overhead and low computational overhead (+3.2%). (The research report version of this paper is available at https://arxiv.org/abs/2008.07298, and the code for reproducing our work can be found at https://github.com/ssg-research/WAFFLE.)
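A minimal PyTorch sketch of the server-side retraining step WAFFLE introduces, under our own simplifying assumptions (a linear stand-in for the global model and random trigger patterns as the watermark set):

    import torch
    import torch.nn as nn

    def embed_watermark(global_model, wm_inputs, wm_labels, epochs=5, lr=0.01):
        # run at the server after each aggregation round, with no client data
        opt = torch.optim.SGD(global_model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(global_model(wm_inputs), wm_labels)
            loss.backward()
            opt.step()
        return global_model

    model = nn.Linear(784, 10)           # stand-in for the aggregated global model
    wm_x = torch.randn(32, 784)          # server-generated trigger patterns
    wm_y = torch.randint(0, 10, (32,))   # their (mis)labels, i.e., the backdoor
    embed_watermark(model, wm_x, wm_y)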

2022-04-18
Zhang, Junpeng, Li, Mengqian, Zeng, Shuiguang, Xie, Bin, Zhao, Dongmei.  2021.  A Survey on Security and Privacy Threats to Federated Learning. 2021 International Conference on Networking and Network Applications (NaNA). :319–326.
Federated learning (FL) has nourished a promising scheme to solve the data silo problem, enabling multiple clients to construct a joint model without centralizing their data. The critical concern for flourishing FL applications is how to build a secure and privacy-preserving learning environment. It is thus highly necessary to comprehensively identify and classify potential threats so that FL can be utilized under security guarantees. This paper starts from the perspective of attacks launched by different computing participants to construct a unique threat classification, highlighting the significant attacks, e.g., poisoning attacks, inference attacks, and generative adversarial network (GAN) attacks. Our study shows that existing FL protocols do not always provide sufficient security, admitting various attacks from both clients and servers. GAN attacks pose the most significant of these threats, given the invisibility of the attack process. Moreover, we present a detailed review of several defense mechanisms and approaches to resist privacy risks and security breaches, and summarize their respective advantages and weaknesses. Finally, we conclude the paper with a discussion of open challenges and some potential research directions.
2022-04-12
Guo, Yifan, Wang, Qianlong, Ji, Tianxi, Wang, Xufei, Li, Pan.  2021.  Resisting Distributed Backdoor Attacks in Federated Learning: A Dynamic Norm Clipping Approach. 2021 IEEE International Conference on Big Data (Big Data). :1172–1182.
With the advance in artificial intelligence and high-dimensional data analysis, federated learning (FL) has emerged to allow distributed data providers to collaboratively learn without direct access to local sensitive data. However, limiting access to individual providers' data inevitably incurs security issues. For instance, backdoor attacks, one of the most popular data poisoning attacks in FL, severely threaten the integrity and utility of the FL system. In particular, backdoor attacks launched by multiple collusive attackers, i.e., distributed backdoor attacks, can achieve high attack success rates and are hard to detect. Existing defensive approaches, like model inspection or model sanitization, often require access to a portion of local training data, which renders them inapplicable to FL scenarios. Recently, the norm clipping approach was developed to effectively defend against distributed backdoor attacks in FL without relying on local training data. However, we discover that adversaries can still bypass this defense scheme through robust training due to its unchanged norm clipping threshold. In this paper, we propose a novel defense scheme to resist distributed backdoor attacks in FL. In particular, we first identify that the main reason for the failure of the norm clipping scheme is its fixed threshold in the training process, which cannot capture the dynamic nature of benign local updates during the global model's convergence. Motivated by this, we devise a novel defense mechanism that dynamically adjusts the norm clipping threshold of local updates. Moreover, we provide a convergence analysis of our defense scheme. By evaluating it on four non-IID public datasets, we observe that our defense scheme can effectively resist distributed backdoor attacks and ensure the global model's convergence. Notably, our scheme reduces the attack success rates by 84.23% on average compared with existing defense schemes.
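As a sketch of the dynamic-threshold idea (the paper's exact adjustment rule may differ), the clipping bound can be tied to a robust statistic of the current round's update norms rather than fixed in advance:

    import numpy as np

    def clip_updates(updates):
        norms = [np.linalg.norm(u) for u in updates]
        tau = np.median(norms)       # threshold tracks the dynamics of benign updates
        return [u * min(1.0, tau / (n + 1e-12)) for u, n in zip(updates, norms)]

    rng = np.random.default_rng(0)
    updates = [rng.normal(size=10) for _ in range(9)] + [50 * rng.normal(size=10)]
    print([round(float(np.linalg.norm(u)), 2) for u in clip_updates(updates)])
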
2022-03-23
Jiang, Yupeng, Li, Yong, Zhou, Yipeng, Zheng, Xi.  2021.  Sybil Attacks and Defense on Differential Privacy based Federated Learning. 2021 IEEE 20th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :355–362.
In federated learning, machine learning and deep learning models are trained globally on distributed devices. The state-of-the-art privacy-preserving technique in the context of federated learning is user-level differential privacy. However, such a mechanism is vulnerable to some specific model poisoning attacks such as Sybil attacks. A malicious adversary could create multiple fake clients or collude compromised devices in Sybil attacks to directly manipulate model updates. Recent works on defenses against model poisoning attacks have difficulty detecting Sybil attacks when differential privacy is utilized, as it masks clients' model updates with perturbation. In this work, we implement the first Sybil attacks on differential-privacy-based federated learning architectures and show their impact on model convergence. We randomly compromise some clients by manipulating different noise levels, reflected by the local privacy budget ε of differential privacy with the Laplace mechanism, on the local model updates of these Sybil clients. As a result, the global model's convergence rate decreases, or training even diverges. We apply our attacks to two recent aggregation defense mechanisms, called Krum and Trimmed Mean. Our evaluation results on the MNIST and CIFAR-10 datasets show that our attacks effectively slow down the convergence of the global models. We then propose a defense that monitors the average loss of all participants in each round for convergence anomaly detection and defends against our Sybil attacks based on the training loss reported from randomly selected sets of clients acting as judging panels. Our empirical study demonstrates that our defense effectively mitigates the impact of our Sybil attacks.
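A minimal sketch of the attack vector: Sybil clients apply the Laplace mechanism with a manipulated local privacy budget ε, injecting far more noise than honest clients. The parameters here are illustrative.

    import numpy as np

    def dp_update(update, sensitivity=1.0, eps=1.0, rng=None):
        rng = np.random.default_rng(0) if rng is None else rng
        return update + rng.laplace(0.0, sensitivity / eps, size=update.shape)

    honest = dp_update(np.zeros(10), eps=1.0)   # noise calibrated to the agreed budget
    sybil = dp_update(np.zeros(10), eps=0.01)   # tiny epsilon -> overwhelming noise
    print(np.linalg.norm(honest), np.linalg.norm(sybil))
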
2022-02-24
Gao, Wei, Guo, Shangwei, Zhang, Tianwei, Qiu, Han, Wen, Yonggang, Liu, Yang.  2021.  Privacy-Preserving Collaborative Learning with Automatic Transformation Search. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :114–123.
Collaborative learning has gained great popularity due to its benefit of data privacy protection: participants can jointly train a Deep Learning model without sharing their training sets. However, recent works discovered that an adversary can fully recover the sensitive training samples from the shared gradients. Such reconstruction attacks pose severe threats to collaborative learning. Hence, effective mitigation solutions are urgently desired. In this paper, we propose to leverage data augmentation to defeat reconstruction attacks: by preprocessing sensitive images with carefully selected transformation policies, it becomes infeasible for the adversary to extract any useful information from the corresponding gradients. We design a novel search method to automatically discover qualified policies. We adopt two new metrics to quantify the impacts of transformations on data privacy and model usability, which can significantly accelerate the search speed. Comprehensive evaluations demonstrate that the policies discovered by our method can defeat existing reconstruction attacks in collaborative learning, with high efficiency and negligible impact on model performance.
2022-02-07
Gao, Tan, Li, Xudong, Chen, Wen.  2021.  Co-training For Image-Based Malware Classification. 2021 IEEE Asia-Pacific Conference on Image Processing, Electronics and Computers (IPEC). :568–572.
A malware detection model based on semi-supervised learning is proposed in this paper. Our model comprises three main parts: malware visualization, feature extraction, and classification. Firstly, malware visualization converts malware into grayscale images; then the features of the images are extracted to reflect the coding patterns of the malware; finally, a collaborative learning model is applied to malware detection using both labeled and unlabeled software samples. The proposed model was evaluated on two commonly used benchmark datasets. The results demonstrate that, compared with traditional methods, our model not only reduces the cost of sample labeling but also improves detection accuracy by incorporating unlabeled samples into the collaborative learning process, thereby achieving higher classification performance.
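A minimal sketch of the visualization step: interpret the raw binary as unsigned bytes and reshape it into a fixed-width grayscale image. The file path and image width below are illustrative.

    import numpy as np

    def malware_to_image(path, width=256):
        data = np.fromfile(path, dtype=np.uint8)   # each byte becomes one pixel
        rows = len(data) // width
        return data[: rows * width].reshape(rows, width)

    # img = malware_to_image("sample.bin")  # hypothetical sample; features are
    # then extracted from img for the co-training classifier
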
2022-01-31
Liu, Yong, Zhu, Xinghua, Wang, Jianzong, Xiao, Jing.  2021.  A Quantitative Metric for Privacy Leakage in Federated Learning. ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :3065–3069.
In the federated learning system, parameter gradients are shared among participants and the central modulator, while the original data never leave their protected source domain. However, the gradient itself might carry enough information for precise inference of the original data. By reporting their parameter gradients to the central server, client datasets are exposed to inference attacks from adversaries. In this paper, we propose a quantitative metric based on mutual information for clients to evaluate the potential risk of information leakage in their gradients. Mutual information has received increasing attention in the machine learning and data mining community over the past few years. However, existing mutual information estimation methods cannot handle high-dimensional variables. In this paper, we propose a novel method to approximate the mutual information between the high-dimensional gradients and batched input data. Experimental results show that the proposed metric reliably reflects the extent of information leakage in federated learning. In addition, using the proposed metric, we investigate the influential factors of risk level. It is proven that the risk of information leakage is related to the status of the task model, as well as the inherent data distribution.
2022-01-25
Marulli, Fiammetta, Balzanella, Antonio, Campanile, Lelio, Iacono, Mauro, Mastroianni, Michele.  2021.  Exploring a Federated Learning Approach to Enhance Authorship Attribution of Misleading Information from Heterogeneous Sources. 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
Authorship Attribution (AA) is currently applied in several settings, among them fraud detection and anti-plagiarism checks; this task can leverage stylometry and Natural Language Processing techniques. In this work, we explored strategies to enhance the performance of an AA task for the automatic detection of false and misleading information (e.g., fake news). We set up a text classification model for AA based on stylometry, exploiting recurrent deep neural networks, and implemented two learning tasks trained on the same collection of fake and real news, comparing their performances: one based on a Federated Learning architecture, the other on a centralized architecture. The goal was to discriminate potential fake information from true information when the fake news comes from heterogeneous sources with different styles. Preliminary experiments show that the distributed approach significantly improves recall with respect to the centralized model. As expected, precision was lower in the distributed model. This aspect, coupled with the statistical heterogeneity of the data, raises open issues that will be further investigated in future work.
2021-12-21
Li, Yan, Lu, Yifei, Li, Shuren.  2021.  EZAC: Encrypted Zero-Day Applications Classification Using CNN and K-Means. 2021 IEEE 24th International Conference on Computer Supported Cooperative Work in Design (CSCWD). :378–383.
With the rapid development of traffic encryption technology and the continuous emergence of various network services, the classification of encrypted zero-day applications has become a major challenge in network supervision. More seriously, many attackers utilize zero-day applications to hide their attack behaviors and make attacks undetectable. However, there are very few existing studies on zero-day applications. Existing works usually select and label zero-day applications from unlabeled datasets, which does not amount to true zero-day application classification. To address the classification of zero-day applications, this paper proposes an Encrypted Zero-day Applications Classification (EZAC) method that combines a Convolutional Neural Network (CNN) and K-Means, which can effectively classify zero-day applications. We first use the CNN to classify the flows; for flows that may be zero-day applications, we use K-Means to divide them into several categories, which are then manually labeled. Experimental results show that EZAC achieves 97.4% accuracy on a public dataset (CIC-Darknet2020), which outperforms state-of-the-art methods.
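A minimal sketch of the two-stage pipeline as described: flows whose CNN softmax confidence is low are treated as zero-day candidates and grouped by K-Means for manual labeling. The CNN outputs, features, and threshold below are stand-ins.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    probs = rng.dirichlet(np.ones(5), size=200)   # stand-in CNN softmax outputs
    feats = rng.normal(size=(200, 32))            # stand-in per-flow feature vectors

    candidates = probs.max(axis=1) < 0.8          # low confidence -> possible zero-day
    clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(feats[candidates])
    print(candidates.sum(), "candidate flows grouped into", len(set(clusters)), "clusters")
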
Ayed, Mohamed Ali, Talhi, Chamseddine.  2021.  Federated Learning for Anomaly-Based Intrusion Detection. 2021 International Symposium on Networks, Computers and Communications (ISNCC). :1–8.
We are witnessing severe zero-day cyber attacks. Machine learning-based anomaly detection is definitely the most efficient defence-in-depth approach. It consists of analyzing the network traffic in order to distinguish normal behaviour from abnormal behaviour. This approach is usually implemented in a central server where all the network traffic is analyzed, which can raise privacy issues. In fact, with the increasing adoption of Cloud infrastructures, it is important to reduce as much as possible the outsourcing of such sensitive information to the various network nodes. A better approach is to ask each node to analyze its own data and then to exchange its learning findings (the model) with a coordinator. In this paper, we investigate the application of federated learning for network-based intrusion detection. Our experiment was conducted on the CICIDS2017 dataset. We present federated learning applied to a deep learning algorithm (a CNN) based on model averaging. It is a self-learning system for detecting anomalies caused by malicious adversaries without human intervention, and it can cope with new and unknown attacks without decreasing performance. These experiments demonstrate that this approach is effective in detecting intrusions.
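A minimal sketch of the model-averaging step underlying this approach: each node trains locally and only the learned weights reach the coordinator, which averages them layer by layer. The shapes and values are illustrative.

    import numpy as np

    def federated_average(client_weights):
        # element-wise mean of each layer's weights across clients
        return [np.mean(layer, axis=0) for layer in zip(*client_weights)]

    clients = [[np.full(4, float(i)), np.full(2, 10.0 * i)] for i in range(1, 4)]
    print(federated_average(clients))   # averaged layers: [2,2,2,2] and [20,20]
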
2021-12-20
Masuda, Hiroki, Kita, Kentaro, Koizumi, Yuki, Takemasa, Junji, Hasegawa, Toru.  2021.  Model Fragmentation, Shuffle and Aggregation to Mitigate Model Inversion in Federated Learning. 2021 IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN). :1–6.
Federated learning is a privacy-preserving learning system where participants locally update a shared model with their own training data. Despite the advantage that training data are not sent to a server, there is still a risk that a state-of-the-art model inversion attack, which may be conducted by the server, infers training data from the models updated by the participants, referred to as individual models. A solution to prevent such attacks is differential privacy, where each participant adds noise to the individual model before sending it to the server. Differential privacy, however, sacrifices the quality of the shared model in compensation for the fact that participants' training data are not leaked. This paper proposes a federated learning system that is resistant to model inversion attacks without sacrificing the quality of the shared model. The core idea is that each participant divides the individual model into model fragments, shuffles, and aggregates them to prevent adversaries from inferring training data. The other benefit of the proposed system is that the resulting shared model is identical to the shared model generated with naive federated learning.
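A minimal numpy sketch of the fragment-shuffle-aggregate idea: shuffling fragment ownership (simulated locally here) hides which participant produced which fragment, while the aggregate, and hence the shared model, is unchanged.

    import numpy as np

    rng = np.random.default_rng(0)
    models = [rng.normal(size=12) for _ in range(4)]    # individual models
    fragments = [list(np.split(m, 3)) for m in models]  # 3 fragments per model

    # permute ownership at every fragment position so that no single sequence of
    # fragments reconstructs one participant's individual model
    for pos in range(3):
        col = [f[pos] for f in fragments]
        for f, j in zip(fragments, rng.permutation(len(col))):
            f[pos] = col[j]

    aggregate = np.mean([np.concatenate(f) for f in fragments], axis=0)
    print(np.allclose(aggregate, np.mean(models, axis=0)))   # True: shared model intact
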
Luo, Xinjian, Wu, Yuncheng, Xiao, Xiaokui, Ooi, Beng Chin.  2021.  Feature Inference Attack on Model Predictions in Vertical Federated Learning. 2021 IEEE 37th International Conference on Data Engineering (ICDE). :181–192.
Federated learning (FL) is an emerging paradigm for facilitating multiple organizations' data collaboration without revealing their private data to each other. Recently, vertical FL, where the participating organizations hold the same set of samples but with disjoint features and only one organization owns the labels, has received increased attention. This paper presents several feature inference attack methods to investigate the potential privacy leakages in the model prediction stage of vertical FL. The attack methods consider the most stringent setting that the adversary controls only the trained vertical FL model and the model predictions, relying on no background information of the attack target's data distribution. We first propose two specific attacks on the logistic regression (LR) and decision tree (DT) models, according to individual prediction output. We further design a general attack method based on multiple prediction outputs accumulated by the adversary to handle complex models, such as neural networks (NN) and random forest (RF) models. Experimental evaluations demonstrate the effectiveness of the proposed attacks and highlight the need for designing private mechanisms to protect the prediction outputs in vertical FL.
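For the logistic regression case, a single prediction reduces the attack to solving the logit equation for the one unknown feature, as in this minimal sketch (the weights and feature values are illustrative; the adversary is assumed to know the model and all other features):

    import numpy as np

    w = np.array([0.8, -1.2, 2.0])   # trained LR model, known to the adversary
    b = 0.3
    x_true = np.array([1.5, 0.7, -0.4])
    p = 1.0 / (1.0 + np.exp(-(w @ x_true + b)))   # observed prediction output

    logit = np.log(p / (1.0 - p))                 # invert the sigmoid
    x2 = (logit - b - w[0] * x_true[0] - w[1] * x_true[1]) / w[2]
    print(x2, "vs true", x_true[2])               # recovers the hidden feature
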
2021-11-29
Albó, Laia, Beardsley, Marc, Amarasinghe, Ishari, Hernández-Leo, Davinia.  2020.  Individual versus Computer-Supported Collaborative Self-Explanations: How Do Their Writing Analytics Differ? 2020 IEEE 20th International Conference on Advanced Learning Technologies (ICALT). :132–134.
Researchers have demonstrated the effectiveness of self-explanations (SE) as an instructional practice and study strategy. However, there is a lack of work studying the characteristics of SE responses prompted by collaborative activities. In this paper, we use writing analytics to investigate differences between SE text responses resulting from individual versus collaborative learning activities. A Coh-Metrix analysis suggests that students in the collaborative SE activity demonstrated a higher level of comprehension. Future research should explore how writing analytics can be incorporated into CSCL systems to support student performance of SE activities.
Joo, Seong-Soon, You, Woongsshik, Pyo, Cheol Sig, Kahng, Hyun-Kook.  2020.  An Organizational Structure for the Thing-User Community Formation. 2020 International Conference on Information and Communication Technology Convergence (ICTC). :1124–1127.
The special feature of thing-user-centric communication is that thing-users can form a society autonomously and collaborate to solve problems. To share experiences and knowledge, thing-users form, join, and leave communities. A thing-user who needs help from other thing-users to accomplish a mission searches thing-user communities and nominates thing-users from the discovered communities to organize a collaborative work group. A thing-user community should perform the social construction process autonomously and needs principles and procedures for community formation and collaboration within thing-user communities. This paper defines thing-user communities and proposes an organizational structure for thing-user community formation.
2021-10-12
Yang, Howard H., Arafa, Ahmed, Quek, Tony Q. S., Vincent Poor, H..  2020.  Age-Based Scheduling Policy for Federated Learning in Mobile Edge Networks. ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :8743–8747.
Federated learning (FL) is a machine learning model that preserves data privacy in the training process. Specifically, FL brings the model directly to the user equipments (UEs) for local training, where an edge server periodically collects the trained parameters to produce an improved model and sends it back to the UEs. However, since communication usually occurs through a limited spectrum, only a portion of the UEs can update their parameters upon each global aggregation. As such, new scheduling algorithms have to be engineered to facilitate the full implementation of FL. In this paper, based on a metric termed the age of update (AoU), we propose a scheduling policy by jointly accounting for the staleness of the received parameters and the instantaneous channel qualities to improve the running efficiency of FL. The proposed algorithm has low complexity and its effectiveness is demonstrated by Monte Carlo simulations.
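A minimal sketch of age-aware scheduling in the spirit of the abstract: rank UEs by a score combining the age of their last update with instantaneous channel quality and schedule the top k. The product score is our own illustrative choice, not the paper's policy.

    import numpy as np

    def schedule(age, channel_gain, k):
        score = age * channel_gain       # stale UEs on good channels come first
        return np.argsort(score)[-k:][::-1]

    rng = np.random.default_rng(0)
    age = rng.integers(1, 10, size=8)    # rounds since each UE last updated
    gain = rng.rayleigh(size=8)          # instantaneous channel qualities
    print(schedule(age, gain, k=3))      # indices of the UEs scheduled this round
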
2021-03-01
Nasir, J., Norman, U., Bruno, B., Dillenbourg, P..  2020.  When Positive Perception of the Robot Has No Effect on Learning. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :313–320.
Humanoid robots, with a focus on personalised social behaviours, are increasingly being deployed in educational settings to support learning. However, crafting pedagogical HRI designs and robot interventions that have a real, positive impact on participants' learning, as well as effectively measuring such impact, is still an open challenge. As a first effort in tackling the issue, in this paper we propose a novel robot-mediated, collaborative problem-solving activity for school children, called JUSThink, aimed at improving their computational thinking skills. JUSThink will serve as a baseline and reference for investigating how the robot's behaviour can influence the engagement of the children with the activity, as well as their collaboration and mutual understanding while working on it. To this end, this first iteration aims at investigating (i) participants' engagement with the activity (Intrinsic Motivation Inventory, IMI), their mutual understanding (IMI-like) and their perception of the robot (Godspeed Questionnaire); and (ii) participants' performance during the activity, using several performance and learning metrics. We carried out an extensive user study in two international schools in Switzerland, in which around 100 children participated in pairs in one-hour-long interactions with the activity. Surprisingly, we observe that while a team's performance significantly affects how team members evaluate their competence, mutual understanding and task engagement, it does not affect their perception of the robot and its helpfulness, a fact which highlights the need for baseline studies and multi-dimensional evaluation metrics when assessing the impact of robots in educational activities.
2020-11-04
Yuan, X., Zhang, T., Shama, A. A., Xu, J., Yang, L., Ellis, J., He, W., Waters, C..  2019.  Teaching Cybersecurity Using Guided Inquiry Collaborative Learning. 2019 IEEE Frontiers in Education Conference (FIE). :1–6.

This Innovate Practice Full Paper describes our experience with teaching cybersecurity topics using guided inquiry collaborative learning. The goal is to not only develop the students' in-depth technical knowledge, but also “soft skills” such as communication, attitude, teamwork, networking, problem-solving and critical thinking. This paper reports our experience with developing and using Guided Inquiry Collaborative Learning materials on the topics of firewalls and IPsec. Pre- and post-surveys were conducted to assess the effectiveness of the developed materials and teaching methods in terms of learning outcomes, attitudes, learning experience and motivation. Analysis of the survey data shows that students had improved learning outcomes, class participation, and interest with Guided Inquiry Collaborative Learning.

2019-09-26
Mishra, B., Jena, D..  2018.  CCA Secure Proxy Re-Encryption Scheme for Secure Sharing of Files through Cloud Storage. 2018 Fifth International Conference on Emerging Applications of Information Technology (EAIT). :1–6.

Cloud Storage Services (CSS) provide unbounded, robust file storage capability and facilitate pay-per-use and collaborative work for end users. But due to security issues, such as lack of confidentiality and malicious insiders, they have not gained widespread acceptance for storing sensitive information. Researchers have proposed proxy re-encryption schemes for secure data sharing through the cloud. With the advancement of computing technologies and the advent of quantum computing algorithms, the security of existing schemes can be compromised within seconds. Hence there is a need for designing security schemes that are resistant to quantum computing. In this paper, a secure file sharing scheme through cloud storage using the proxy re-encryption technique is proposed. The proposed scheme is proven to be chosen-ciphertext secure (CCA) under the hardness of the ring-LWE search problem in the random oracle model. The proposed scheme outperforms existing CCA-secure schemes in terms of re-encryption time and decryption time for encrypted files, resulting in an efficient file sharing scheme through cloud storage.
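To make the proxy re-encryption workflow concrete, here is a deliberately toy Python sketch in the style of ElGamal-based PRE (BBS98); it is not the paper's ring-LWE scheme, is not CCA secure, uses tiny parameters, and glosses over how the re-encryption key is generated without revealing secrets.

    import random

    p, q, g = 467, 233, 4   # toy group: g generates the order-q subgroup of Z_p*

    def keygen():
        sk = random.randrange(1, q)
        return sk, pow(g, sk, p)                      # (secret key, public key)

    def encrypt(pk, m):                               # m must lie in Z_p*
        k = random.randrange(1, q)
        return (m * pow(g, k, p) % p, pow(pk, k, p))  # (m*g^k, g^(sk*k))

    def decrypt(sk, ct):
        c1, c2 = ct
        gk = pow(c2, pow(sk, -1, q), p)               # recover g^k
        return c1 * pow(gk, -1, p) % p

    def rekey(sk_a, sk_b):
        # toy shortcut: real schemes derive rk without handing sk_b to Alice
        return sk_b * pow(sk_a, -1, q) % q

    def reencrypt(rk, ct):
        c1, c2 = ct
        return (c1, pow(c2, rk, p))                   # now decryptable by Bob

    a_sk, a_pk = keygen()
    b_sk, b_pk = keygen()
    ct = encrypt(a_pk, 42)                            # file key encrypted to Alice
    ct_b = reencrypt(rekey(a_sk, b_sk), ct)           # cloud proxy re-encrypts
    assert decrypt(b_sk, ct_b) == 42                  # Bob recovers the file key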