Biblio

Filters: Keyword is Brain modeling
2023-08-03
Duan, Xiaowei, Han, Yiliang, Wang, Chao, Ni, Huanhuan.  2022.  Optimization of Encrypted Communication Model Based on Generative Adversarial Network. 2022 International Conference on Blockchain Technology and Information Security (ICBCTIS). :20–24.
With the progress of cryptography and computer science, designing cryptographic algorithms with deep learning has become an innovative research direction. Google Brain designed a communication model using a generative adversarial network and explored encrypted communication algorithms based on machine learning. However, the encrypted communication model it designed lacks quantitative evaluation, and when some plaintexts and keys are leaked at the same time, the security of communication cannot be guaranteed. This paper optimizes that model to enhance security and improve communication speed by adjusting the optimizer, modifying the activation function, and adding batch normalization. Experiments were performed on 16-bit and 64-bit plaintext communication. With a plaintext and key leak rate of 0.75, the decryption error rate of the decryptor is 0.01 and the attacker cannot guess any valid information about the communication.
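The Google Brain work this abstract builds on is the adversarial neural cryptography setup in which networks Alice, Bob, and Eve are trained against each other. Below is a minimal PyTorch sketch of that general setup, not the optimized model evaluated in the paper; the layer sizes, the placement of batch normalization, and the per-bit error measure are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of an Alice/Bob/Eve adversarial
# cryptography setup, assuming PyTorch; sizes are illustrative only.
import torch
import torch.nn as nn

N_BITS = 16  # one of the plaintext/key lengths reported in the experiments

class Party(nn.Module):
    """Fully connected network mapping its inputs to N_BITS values in [-1, 1]."""
    def __init__(self, in_bits):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_bits, 2 * N_BITS), nn.ReLU(),
            nn.BatchNorm1d(2 * N_BITS),      # batch normalization, per the optimization described
            nn.Linear(2 * N_BITS, N_BITS), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

alice = Party(2 * N_BITS)   # sees plaintext + key, emits ciphertext
bob   = Party(2 * N_BITS)   # sees ciphertext + key, reconstructs plaintext
eve   = Party(N_BITS)       # adversary: sees ciphertext only

def bit_error(pred, target):
    return torch.mean(torch.abs(pred - target)) / 2   # 0 = perfect, 1 = all bits wrong

p = torch.randint(0, 2, (32, N_BITS)).float() * 2 - 1   # plaintext bits in {-1, +1}
k = torch.randint(0, 2, (32, N_BITS)).float() * 2 - 1   # shared key bits
c = alice(torch.cat([p, k], dim=1))
bob_loss = bit_error(bob(torch.cat([c, k], dim=1)), p)
eve_loss = bit_error(eve(c), p)
# Training alternates: Alice and Bob minimize bob_loss while driving eve_loss
# toward chance (0.5); Eve is trained separately to minimize eve_loss.
```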
2023-03-17
Li, Sukun, Liu, Xiaoxing.  2022.  Toward a BCI-Based Personalized Recommender System Using Deep Learning. 2022 IEEE 8th Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing, (HPSC) and IEEE Intl Conference on Intelligent Data and Security (IDS). :180–185.
A recommender system is a filtering application that uses personalized information from acquired big data to predict a user's preferences. Traditional recommender systems primarily rely on keywords or scene patterns; users' subjective emotion data are rarely utilized for preference prediction. Novel brain-computer interfaces hold great promise for intelligent applications that rely on collected user data, such as recommender systems. This paper describes a deep learning method that uses brain-computer interface (BCI) based neural measures to predict a user's preference for short music videos. Our models are employed for both population-wide and individualized preference prediction. The recognition method is based on dynamic histogram measurement and a deep neural network for distinctive feature extraction and improved classification. Our models achieve 97.21%, 94.72%, 94.86%, and 96.34% classification accuracy on two-class, three-class, four-class, and nine-class individualized predictions. The findings provide evidence that a personalized recommender system based on an implicit BCI has the potential to succeed.
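As a rough illustration of the pipeline the abstract outlines (histogram-style features from EEG windows feeding a deep classifier), here is a hedged Keras sketch. The paper's exact "dynamic histogram measurement" is not specified here, so the binning, channel count, and network shape below are placeholders rather than the authors' method.

```python
# Illustrative sketch only: per-channel histogram features feeding a small
# dense classifier; numpy/keras shapes and parameters are assumptions.
import numpy as np
from tensorflow import keras

def histogram_features(window, bins=16):
    """Per-channel amplitude histogram, flattened into one feature vector.
    window: (channels, samples) array of EEG values; the fixed range is a placeholder."""
    feats = [np.histogram(ch, bins=bins, range=(-100, 100), density=True)[0]
             for ch in window]
    return np.concatenate(feats)

n_channels, n_samples, n_classes = 14, 256, 9         # hypothetical sizes
X = np.stack([histogram_features(20 * np.random.randn(n_channels, n_samples))
              for _ in range(64)])                    # placeholder EEG windows
y = np.random.randint(0, n_classes, 64)               # placeholder preference labels

model = keras.Sequential([
    keras.layers.Input(shape=(X.shape[1],)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=2, verbose=0)                  # toy run on random data
```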
2021-03-29
Schiliro, F., Moustafa, N., Beheshti, A..  2020.  Cognitive Privacy: AI-enabled Privacy using EEG Signals in the Internet of Things. 2020 IEEE 6th International Conference on Dependability in Sensor, Cloud and Big Data Systems and Application (DependSys). :73–79.

With the advent of Industry 4.0, the Internet of Things (IoT) and Artificial Intelligence (AI), smart entities are now able to read the minds of users by extracting cognitive patterns from electroencephalogram (EEG) signals. Such brain data may include users' experiences, emotions, motivations, and other previously private mental and psychological processes. Accordingly, users' cognitive privacy may be violated, and the right to cognitive privacy should protect individuals against unconsented intrusion by third parties into their brain data as well as against the unauthorized collection of those data. This has caused growing concern among users and industry experts that laws are needed to protect the rights to cognitive liberty, mental privacy, mental integrity, and psychological continuity. In this paper, we propose an AI-enabled EEG model, namely Cognitive Privacy, that aims to protect data and to classify users and their tasks from EEG data. We present a model that protects data from disclosure using normalized correlation analysis and classifies subjects (i.e., a multi-class classification problem) and their tasks (i.e., eyes open versus eyes closed, a binary classification problem) using a long short-term memory (LSTM) deep learning approach. The model has been evaluated on the PhysioNet BCI EEG data set, and the results reveal high performance in classifying users and their tasks while achieving high data privacy.
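A minimal sketch of the two components named in the abstract, assuming PyTorch: a normalized-correlation measure between EEG segments and an LSTM classifier over EEG windows. The 64-channel, 160-sample window shape is loosely based on the PhysioNet BCI recordings and is an assumption, as are all layer sizes.

```python
# Sketch only: normalized correlation between signals plus an LSTM
# classifier for EEG windows; shapes and hyperparameters are assumptions.
import numpy as np
import torch
import torch.nn as nn

def normalized_correlation(a, b):
    """Pearson-style normalized correlation between two 1-D signals."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.mean(a * b))

class EEGLSTM(nn.Module):
    def __init__(self, n_channels=64, hidden=128, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # classify from the last time step

# Toy forward pass: eyes-open vs eyes-closed as a binary task
model = EEGLSTM(n_classes=2)
window = torch.randn(8, 160, 64)           # 8 windows, 160 samples, 64 channels
logits = model(window)
print(logits.shape)                        # torch.Size([8, 2])
```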

Pranav, E., Kamal, S., Chandran, C. Satheesh, Supriya, M. H..  2020.  Facial Emotion Recognition Using Deep Convolutional Neural Network. 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS). :317–320.

The rapid growth of artificial intelligence has contributed a great deal to the technology world. As traditional algorithms failed to meet human needs in real time, machine learning and deep learning algorithms have achieved great success in applications such as classification systems, recommendation systems, and pattern recognition. Emotion plays a vital role in determining the thoughts, behaviour, and feelings of a human. An emotion recognition system can be built by utilizing the benefits of deep learning, and applications such as feedback analysis and face unlocking can be implemented with good accuracy. The main focus of this work is to create a Deep Convolutional Neural Network (DCNN) model that classifies five different human facial emotions. The model is trained, tested, and validated using a manually collected image dataset.
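As a sketch of the kind of DCNN the abstract describes for five facial-emotion classes, here is a small Keras model; the 48x48 grayscale input and the specific layer stack are illustrative assumptions, not the authors' published architecture.

```python
# Illustrative five-class facial-emotion DCNN, assuming Keras; all layer
# choices and the input size are placeholders.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(48, 48, 1)),                       # grayscale face crop
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(5, activation="softmax"),                 # 5 emotion classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```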

Ozdemir, M. A., Elagoz, B., Soy, A. Alaybeyoglu, Akan, A..  2020.  Deep Learning Based Facial Emotion Recognition System. 2020 Medical Technologies Congress (TIPTEKNO). :1–4.

This study aimed to recognize emotional state from facial images using deep learning. In the study, which was approved by an ethics committee, a custom data set was created from videos of 20 male and 20 female participants simulating 7 different facial expressions (happy, sad, surprised, angry, disgusted, scared, and neutral). First, the videos were divided into image frames, and face images were then segmented from the frames using the Haar library. The custom data set obtained after this image preprocessing contains more than 25 thousand images. The proposed convolutional neural network (CNN) architecture, which mimics the LeNet architecture, was trained with this custom dataset. In the experiments with the proposed CNN architecture, the training loss was 0.0115, the training accuracy 99.62%, the validation loss 0.0109, and the validation accuracy 99.71%.
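A preprocessing sketch of the frame-extraction and Haar-cascade face segmentation step the abstract describes, assuming OpenCV; the file name and the 32x32 crop size are placeholders.

```python
# Video -> frames -> Haar face crops, assuming opencv-python; not the
# authors' pipeline, just the standard Haar cascade usage.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_crops(video_path, size=(32, 32)):
    """Return resized grayscale face crops from every frame of a video."""
    cap = cv2.VideoCapture(video_path)
    crops = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
            crops.append(cv2.resize(gray[y:y + h, x:x + w], size))
    cap.release()
    return crops

# crops = face_crops("participant01_happy.mp4")   # hypothetical file name
```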

2021-03-22
Kumar, S. A., Kumar, A., Bajaj, V., Singh, G. K..  2020.  An Improved Fuzzy Min–Max Neural Network for Data Classification. IEEE Transactions on Fuzzy Systems. 28:1910–1924.
The hyperbox classifier is an efficient tool for modern pattern classification problems due to its transparency and rigorous use of Euclidean geometry. The fuzzy min-max (FMM) network efficiently implements the hyperbox classifier and has been modified several times to yield better classification accuracy. However, the obtained accuracy is still not up to the mark. Therefore, in this paper, a new improved FMM (IFMM) network is proposed to increase the accuracy rate. In the proposed IFMM network, a modified constraint is employed to check the expandability of a hyperbox. It also uses the semiperimeter of the hyperbox along with a k-nearest mechanism to select the expandable hyperbox. In the proposed IFMM, the contraction rules of conventional FMM and enhanced FMM (EFMM) are also modified using the semiperimeter of a hyperbox in order to balance the sizes of the two overlapping hyperboxes. Experimental results show that the proposed IFMM network outperforms FMM, k-nearest FMM, and EFMM by yielding a higher accuracy rate with fewer hyperboxes. The proposed methods are also applied to histopathological images to determine the best magnification factor for classification.
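For orientation, here is a small NumPy sketch of the hyperbox machinery that FMM-style classifiers build on: a fuzzy membership function and the classic expansion test bounded by the size parameter theta. This is the conventional FMM formulation, not the paper's semiperimeter and k-nearest improvements.

```python
# Conventional FMM hyperbox membership and expansion test (sketch only);
# gamma and theta values are illustrative.
import numpy as np

def membership(x, v, w, gamma=4.0):
    """Degree to which point x belongs to the hyperbox [v, w];
    1.0 means fully contained, decreasing with distance outside the box."""
    left = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, v - x)))
    right = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, x - w)))
    return float(np.mean((left + right) / 2))

def can_expand(x, v, w, theta=0.3):
    """Classic FMM expansion test: the expanded box must stay within the
    average size bound theta across dimensions."""
    new_v, new_w = np.minimum(v, x), np.maximum(w, x)
    return np.sum(new_w - new_v) <= theta * len(x)

v = np.array([0.2, 0.2]); w = np.array([0.4, 0.5])   # one 2-D hyperbox
x = np.array([0.45, 0.45])                           # candidate pattern
print(membership(x, v, w), can_expand(x, v, w))       # e.g. 0.95 True
```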
2020-12-14
Willcox, G., Rosenberg, L., Domnauer, C..  2020.  Analysis of Human Behaviors in Real-Time Swarms. 2020 10th Annual Computing and Communication Workshop and Conference (CCWC). :0104–0109.
Many species reach group decisions by deliberating in real-time systems. This natural process, known as Swarm Intelligence (SI), has been studied extensively in a range of social organisms, from schools of fish to swarms of bees. A new technique called Artificial Swarm Intelligence (ASI) has enabled networked human groups to reach decisions in systems modeled after natural swarms. The present research seeks to understand the behavioral dynamics of such “human swarms.” Data was collected from ten human groups, each having between 21 and 25 members. The groups were tasked with answering a set of 25 ordered ranking questions on a 1-5 scale, first independently by survey and then collaboratively as a real-time swarm. We found that groups reached significantly different answers, on average, by swarm versus survey (p=0.02). Initially, the distribution of individual responses in each swarm differed little from the distribution of survey responses, but through the process of real-time deliberation, the swarm's average answer changed significantly. We discuss possible interpretations of this dynamic behavior. Importantly, we find that the swarm's answer is not simply the arithmetic mean of the initial individual “votes,” as in a survey, suggesting a more complex mechanism is at play, one that relies on the time-varying behaviors of the participants in swarms. Finally, we publish a set of data that enables other researchers to analyze human behaviors in real-time swarms.
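As an illustration of the kind of paired comparison reported above (swarm answer versus survey mean on the same 25 questions), here is a SciPy sketch using a paired t-test on placeholder numbers; the study's actual data and statistical procedure are not reproduced here.

```python
# Placeholder paired comparison of swarm vs survey answers, assuming SciPy;
# the values are random, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
survey_means = rng.uniform(1, 5, 25)                       # mean 1-5 rating per question
swarm_answers = survey_means + rng.normal(0.3, 0.4, 25)    # swarm shifts the answer

t_stat, p_value = stats.ttest_rel(swarm_answers, survey_means)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```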
Willcox, G., Rosenberg, L., Burgman, M., Marcoci, A..  2020.  Prioritizing Policy Objectives in Polarized Groups using Artificial Swarm Intelligence. 2020 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA). :1–9.
Groups often struggle to reach decisions, especially when populations are strongly divided by conflicting views. Traditional methods for collective decision-making involve polling individuals and aggregating results. In recent years, a new method called Artificial Swarm Intelligence (ASI) has been developed that enables networked human groups to deliberate in real-time systems, moderated by artificial intelligence algorithms. While traditional voting methods aggregate input provided by isolated participants, Swarm-based methods enable participants to influence each other and converge on solutions together. In this study we compare the output of traditional methods such as Majority vote and Borda count to the Swarm method on a set of divisive policy issues. We find that the rankings generated using ASI and the Borda Count methods are often rated as significantly more satisfactory than those generated by the Majority vote system (p < 0.05). This result held for both the population that generated the rankings (the “in-group”) and the population that did not (the “out-group”): the in-group ranked the Swarm prioritizations as 9.6% more satisfactory than the Majority prioritizations, while the out-group ranked the Swarm prioritizations as 6.5% more satisfactory than the Majority prioritizations. This effect also held even when the out-group was subject to a demographic sampling bias of 10% (i.e. the out-group was composed of 10% more Labour voters than the in-group). The Swarm method was the only method to be perceived as more satisfactory to the “out-group” than the voting group.
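For reference, a minimal Python sketch of the Borda count aggregation that the study compares against the Swarm and Majority vote methods; the ballots are placeholders.

```python
# Borda count over ranked ballots: the top choice on an n-candidate ballot
# earns n-1 points, the next n-2, and so on.
from collections import defaultdict

def borda_count(rankings):
    """rankings: list of per-voter orderings, best first.
    Returns candidates sorted by total Borda score (descending)."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, candidate in enumerate(ranking):
            scores[candidate] += n - 1 - position
    return sorted(scores, key=scores.get, reverse=True)

ballots = [["policy A", "policy B", "policy C"],
           ["policy B", "policy A", "policy C"],
           ["policy B", "policy C", "policy A"]]
print(borda_count(ballots))    # ['policy B', 'policy A', 'policy C']
```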
2020-11-23
Wang, M., Hussein, A., Rojas, R. F., Shafi, K., Abbass, H. A..  2018.  EEG-Based Neural Correlates of Trust in Human-Autonomy Interaction. 2018 IEEE Symposium Series on Computational Intelligence (SSCI). :350–357.
This paper aims at identifying the neural correlates of human trust in autonomous systems using electroencephalography (EEG) signals. Quantifying the relationship between trust and brain activity allows for real-time assessment of human trust in automation. This line of effort contributes to the design of trusted autonomous systems and, more generally, to modeling human-autonomy interaction. To study the correlates of trust, we use an investment game in which artificial agents with different levels of trustworthiness are employed. We collected EEG signals from 10 human subjects while they played the game, then computed three types of features from these signals, capturing the signal's time dependency, complexity, and power spectrum using an autoregressive (AR) model, sample entropy, and Fourier analysis, respectively. Results of a mixed-model analysis showed significant correlation between human trust and EEG features from certain electrodes. The frontal and occipital areas are identified as the predominant brain areas correlated with trust.
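A hedged NumPy/SciPy sketch of the three feature families the abstract names: autoregressive coefficients (time dependency), sample entropy (complexity), and spectral band power (power spectrum). The AR order, entropy parameters, frequency band, and sampling rate are illustrative assumptions, not the study's settings.

```python
# Single-channel EEG feature sketches; all parameters are placeholders.
import numpy as np
from scipy.signal import welch

def ar_coefficients(x, order=4):
    """Least-squares fit of an AR(order) model to one channel."""
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def sample_entropy(x, m=2, r=None):
    """Naive O(n^2) sample entropy; tolerance r defaults to 0.2 * std."""
    r = 0.2 * np.std(x) if r is None else r
    def matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return np.sum(d <= r) - len(templates)      # exclude self-matches
    return -np.log(matches(m + 1) / matches(m))

def band_power(x, fs=256, lo=8, hi=13):
    """Alpha-band power from the Welch power spectral density."""
    f, pxx = welch(x, fs=fs, nperseg=min(256, len(x)))
    band = (f >= lo) & (f <= hi)
    return float(np.trapz(pxx[band], f[band]))

eeg = np.random.randn(1024)                 # placeholder single-channel signal
print(ar_coefficients(eeg), sample_entropy(eeg), band_power(eeg))
```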
2019-02-22
Hu, D., Wang, L., Jiang, W., Zheng, S., Li, B..  2018.  A Novel Image Steganography Method via Deep Convolutional Generative Adversarial Networks. IEEE Access. 6:38303-38314.

The security of image steganography is an important basis for evaluating steganography algorithms. Steganography has recently made great progress in its long-term confrontation with steganalysis. To improve security, image steganography must be able to resist detection by steganalysis algorithms. Traditional embedding-based steganography embeds the secret information into the content of an image, which unavoidably leaves a trace of the modification that can be detected by increasingly advanced machine-learning-based steganalysis algorithms. The concept of steganography without embedding (SWE), which does not need to modify the data of the carrier image, emerged to evade detection by such machine-learning-based steganalysis algorithms. In this paper, we propose a novel image SWE method based on deep convolutional generative adversarial networks. We map the secret information into a noise vector and use the trained generator neural network model to generate the carrier image from that noise vector. No modification or embedding operations are required during image generation, and the information contained in the image can be extracted successfully by another neural network, called the extractor, after training. The experimental results show that this method offers highly accurate information extraction and a strong ability to resist detection by state-of-the-art image steganalysis algorithms.
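A conceptual sketch of the secret-to-noise mapping step that steganography without embedding relies on, with the DCGAN generator and extractor left as placeholders; the specific mapping below (one bit per latent coordinate, sign-coded with jitter) is an assumption for illustration, not the paper's scheme.

```python
# Toy secret-to-noise mapping for an SWE-style pipeline; the generator and
# extractor networks are placeholders and not implemented here.
import numpy as np

NOISE_DIM = 100          # typical DCGAN latent size; a placeholder here

def bits_to_noise(bits, jitter=0.2):
    """Encode secret bits into a latent vector: bit 1 -> positive region,
    bit 0 -> negative region, plus small jitter so z still looks noise-like."""
    assert len(bits) <= NOISE_DIM
    z = np.random.normal(0.0, jitter, NOISE_DIM)
    for i, b in enumerate(bits):
        z[i] += 1.0 if b else -1.0
    return z

def noise_to_bits(z, n_bits):
    """Reference inverse mapping; in the SWE setting a trained extractor
    network recovers the bits from the generated image instead."""
    return [1 if z[i] > 0 else 0 for i in range(n_bits)]

secret = [1, 0, 1, 1, 0, 0, 1, 0]
z = bits_to_noise(secret)
# carrier = generator(z)          # a pretrained DCGAN generator would produce the carrier image
# recovered = extractor(carrier)  # a trained extractor network would read the bits back
print(noise_to_bits(z, len(secret)) == secret)
```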

2018-12-10
Volz, V., Majchrzak, K., Preuss, M..  2018.  A Social Science-based Approach to Explanations for (Game) AI. 2018 IEEE Conference on Computational Intelligence and Games (CIG). :1–2.

The current AI revolution provides us with many new, but often very complex, algorithmic systems. This complexity limits not only the understanding but also the acceptance of, e.g., deep learning methods. In recent years, explainable AI (XAI) has been proposed as a remedy. However, this research is rarely supported by publications on explanations from the social sciences. We suggest a bottom-up approach to explanations for (game) AI, starting from a baseline definition of understandability informed by the concept of limited human working memory. We detail our approach and demonstrate its application to two games from the GVGAI framework. Finally, we discuss our vision of how additional concepts from the social sciences can be integrated into our proposed approach and how the results can be generalised.

2018-04-11
Gebhardt, D., Parikh, K., Dzieciuch, I., Walton, M., Hoang, N. A. V..  2017.  Hunting for Naval Mines with Deep Neural Networks. OCEANS 2017 - Anchorage. :1–5.

Explosive naval mines pose a threat to ocean- and sea-faring vessels, both military and civilian. This work applies deep neural network (DNN) methods to the problem of detecting mine-like objects (MLO) on the seafloor in side-scan sonar imagery. We explored how DNN depth, memory requirements, calculation requirements, and training data distribution affect detection efficacy. A visualization technique (class activation map) was incorporated to aid a user in interpreting the model's behavior. We found that modest DNN model sizes yielded better accuracy (98%) than very simple DNN models (93%) and a support vector machine (78%). The largest DNN models achieved a <1% efficacy increase at the cost of a 17x increase in trainable parameter count and computation requirements. In contrast to DNNs popularized for many-class image recognition tasks, the models for this task require far fewer computational resources (0.3% of parameters) and are suitable for embedded use within an autonomous unmanned underwater vehicle.
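A NumPy sketch of the class activation map (CAM) idea credited above for interpretability: weight the final convolutional feature maps by the classifier weights of the predicted class, then normalize the result for display. Shapes are illustrative; the paper's network is not reproduced.

```python
# Class activation map computation in the original CAM style (global
# average pooling before the classifier); shapes are placeholders.
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """feature_maps: (H, W, K) activations from the last conv layer.
    class_weights: (K,) dense-layer weights for the predicted class."""
    cam = np.tensordot(feature_maps, class_weights, axes=([2], [0]))  # (H, W)
    cam = np.maximum(cam, 0)                  # keep positive evidence only
    return cam / (cam.max() + 1e-8)           # normalize to [0, 1] for display

fmaps = np.random.rand(8, 8, 64)              # placeholder sonar feature maps
weights = np.random.randn(64)                 # placeholder "mine" class weights
heatmap = class_activation_map(fmaps, weights)
print(heatmap.shape)                          # (8, 8); upsample to overlay on the sonar image
```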