Biblio

Filters: Keyword is Functional magnetic resonance imaging
2022-01-31
Peitek, Norman, Apel, Sven, Parnin, Chris, Brechmann, André, Siegmund, Janet.  2021.  Program Comprehension and Code Complexity Metrics: An fMRI Study. 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). :524–536.
Background: Researchers and practitioners have been using code complexity metrics for decades to predict how developers comprehend a program. While it is plausible and tempting to use code metrics for this purpose, their validity is debated, since they rely on simple code properties and rarely consider particularities of human cognition. Aims: We investigate whether and how code complexity metrics reflect the difficulty of program comprehension. Method: We have conducted a functional magnetic resonance imaging (fMRI) study with 19 participants observing program comprehension of short code snippets at varying complexity levels. We dissected four classes of code complexity metrics and their relationship to neuronal, behavioral, and subjective correlates of program comprehension, overall analyzing more than 41 metrics. Results: While our data corroborate that complexity metrics can, to a limited degree, explain programmers' cognition in program comprehension, fMRI allowed us to gain insights into why some code properties are difficult to process. In particular, a code's textual size drives programmers' attention, and vocabulary size burdens programmers' working memory. Conclusion: Our results provide neuroscientific evidence supporting warnings of prior research questioning the validity of code complexity metrics and pin down factors relevant to program comprehension. Future Work: We outline several follow-up experiments investigating fine-grained effects of code complexity and describe possible refinements to code complexity metrics.
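The two code properties the abstract singles out, textual size and vocabulary size, can be made concrete with a small sketch. The following is only an illustration (not the paper's metric suite; the helper functions textual_size and vocabulary_size are invented here), counting non-blank lines and distinct tokens of a Python snippet:

# A minimal sketch of two simple code complexity measures: textual size
# (non-blank lines of code) and vocabulary size (distinct tokens).
import io
import tokenize

def textual_size(snippet: str) -> int:
    """Count non-blank lines in the snippet."""
    return sum(1 for line in snippet.splitlines() if line.strip())

def vocabulary_size(snippet: str) -> int:
    """Count distinct identifiers, keywords, operators, and literals."""
    tokens = tokenize.generate_tokens(io.StringIO(snippet).readline)
    vocab = {
        tok.string
        for tok in tokens
        if tok.type in (tokenize.NAME, tokenize.OP, tokenize.NUMBER, tokenize.STRING)
    }
    return len(vocab)

snippet = """
def mean(values):
    total = 0
    for v in values:
        total += v
    return total / len(values)
"""

print("textual size:", textual_size(snippet))        # non-blank lines
print("vocabulary size:", vocabulary_size(snippet))  # distinct names/operators/literals

Metrics of this flavor only look at surface properties of the text, which is exactly the limitation the study probes with fMRI evidence.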
2020-04-20
Kundu, Suprateek, Suthaharan, Shan.  2019.  Privacy-Preserving Predictive Model Using Factor Analysis for Neuroscience Applications. 2019 IEEE 5th Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing, (HPSC) and IEEE Intl Conference on Intelligent Data and Security (IDS). :67–73.
The purpose of this article is to present an algorithm which maximizes prediction accuracy under a linear regression model while preserving data privacy. This approach anonymizes the data such that the privacy of the original features is fully guaranteed, and the deterioration in predictive accuracy using the anonymized data is minimal. The proposed algorithm employs two stages: the first stage uses a probabilistic latent factor approach to anonymize the original features into a collection of lower-dimensional latent factors, while the second stage uses an optimization algorithm to tune the anonymized data further, in a way that ensures a minimal loss in prediction accuracy under the predictive approach specified by the user. We demonstrate the advantages of our approach via numerical studies and apply our method to high-dimensional neuroimaging data where the goal is to predict the behavior of adolescents and teenagers based on functional magnetic resonance imaging (fMRI) measurements.
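The two-stage idea can be sketched roughly as follows, assuming scikit-learn's FactorAnalysis as a stand-in for the paper's probabilistic latent factor model; the authors' second-stage tuning optimization is not reproduced, so this only illustrates releasing low-dimensional latent scores in place of the raw features:

# A simplified sketch: share latent factor scores instead of raw features,
# then fit the user-specified predictive model on the anonymized factors.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                               # stand-in for high-dimensional fMRI features
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=200)   # synthetic behavioral score

# Stage 1: anonymize the original features as lower-dimensional latent factors.
fa = FactorAnalysis(n_components=10, random_state=0)
Z = fa.fit_transform(X)        # only Z would be released, not X

# Stage 2 (simplified here): fit the predictive model on the anonymized factors.
model = LinearRegression().fit(Z, y)
print("R^2 on anonymized factors:", model.score(Z, y))

The design choice is that downstream users never see the original features, only the latent scores, while the regression they care about is fit on those scores.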
2020-02-18
Han, Chihye, Yoon, Wonjun, Kwon, Gihyun, Kim, Daeshik, Nam, Seungkyu.  2019.  Representation of White- and Black-Box Adversarial Examples in Deep Neural Networks and Humans: A Functional Magnetic Resonance Imaging Study. 2019 International Joint Conference on Neural Networks (IJCNN). :1–8.
The recent success of brain-inspired deep neural networks (DNNs) in solving complex, high-level visual tasks has led to rising expectations for their potential to match the human visual system. However, DNNs exhibit idiosyncrasies that suggest their visual representation and processing might be substantially different from human vision. One limitation of DNNs is that they are vulnerable to adversarial examples, input images on which subtle, carefully designed noises are added to fool a machine classifier. The robustness of the human visual system against adversarial examples is potentially of great importance as it could uncover a key mechanistic feature that machine vision is yet to incorporate. In this study, we compare the visual representations of white- and black-box adversarial examples in DNNs and humans by leveraging functional magnetic resonance imaging (fMRI). We find a small but significant difference in representation patterns for different (i.e. white- versus black-box) types of adversarial examples for both humans and DNNs. However, human performance on categorical judgment is not degraded by noise regardless of the type unlike DNN. These results suggest that adversarial examples may be differentially represented in the human visual system, but unable to affect the perceptual experience.
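For context, a white-box adversarial example of the kind compared in the study can be sketched with a standard FGSM-style perturbation; the toy model, epsilon value, and random input below are illustrative stand-ins, not the authors' attack settings or stimuli:

# A minimal white-box adversarial perturbation in the FGSM style: push the
# input along the sign of the loss gradient to fool a classifier.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy stand-in classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in image
y_true = torch.tensor([3])                        # stand-in label

# White-box attack: the adversary uses the model's own gradients.
loss = loss_fn(model(x), y_true)
loss.backward()
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())

A black-box variant, by contrast, would craft the perturbation without access to the target model's gradients (for example, via a substitute model), which is the distinction the fMRI comparison in the paper turns on.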