Biblio

Filters: Keyword is self-organising feature maps
2021-03-22
Hikawa, H.  2020.  Nested Pipeline Hardware Self-Organizing Map for High Dimensional Vectors. 2020 27th IEEE International Conference on Electronics, Circuits and Systems (ICECS). :1–4.
This paper proposes a hardware Self-Organizing Map (SOM) for high-dimensional vectors. The proposed SOM is based on a nested architecture with pipeline processing. Due to its homogeneous modular structure, the nested architecture provides high expandability. The original nested SOM was designed to handle low-dimensional vectors with fully parallel computation, and it yielded very high performance. In this paper, the architecture is extended to handle much higher-dimensional vectors by using sequential computation, which requires multiple clock cycles to process a single vector. To increase performance, the proposed architecture employs pipeline computation, in which the search for the winner neuron and the weight vector update are carried out simultaneously. The operable clock frequency of the system was 60 MHz, and its throughput reached 15012 million connection updates per second (MCUPS).
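A minimal software sketch (not the paper's hardware design) of the two SOM operations the architecture pipelines, winner search and weight vector update; the map size, learning rate, and Gaussian neighbourhood below are illustrative assumptions, not the paper's parameters:

    import numpy as np

    def som_step(weights, x, lr=0.1, sigma=1.0):
        """weights: (rows, cols, dim) map; x: (dim,) input vector."""
        rows, cols, _ = weights.shape
        # Winner search: the neuron whose weight vector is closest to the input.
        dists = np.linalg.norm(weights - x, axis=2)
        r, c = np.unravel_index(np.argmin(dists), (rows, cols))
        # Weight update: pull neighbours of the winner towards the input,
        # scaled by a Gaussian neighbourhood function on the map grid.
        rr, cc = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
        h = np.exp(-((rr - r) ** 2 + (cc - c) ** 2) / (2 * sigma ** 2))
        weights += lr * h[..., None] * (x - weights)
        return (r, c)

    weights = np.random.rand(8, 8, 64)   # assumed 8x8 map, 64-dimensional vectors
    winner = som_step(weights, np.random.rand(64))

In the proposed hardware, these two steps run concurrently in a pipeline rather than one after the other as in this sequential sketch.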
2021-03-01
Tan, R., Khan, N., Guan, L.  2020.  Locality Guided Neural Networks for Explainable Artificial Intelligence. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.
In current deep network architectures, deeper layers tend to contain hundreds of independent neurons, which makes it hard for humans to understand how they interact with each other. By organizing the neurons by correlation, humans can observe how clusters of neighbouring neurons interact with each other. In this paper, we propose a novel back-propagation training algorithm, called Locality Guided Neural Network (LGNN), that preserves locality between neighbouring neurons within each layer of a deep network. Heavily motivated by the Self-Organizing Map (SOM), the method enforces a local topology on each layer of a deep network such that neighbouring neurons are highly correlated with each other. This work contributes to the domain of Explainable Artificial Intelligence (XAI), which aims to alleviate the black-box nature of current AI methods and make them understandable to humans. Our method aims to achieve XAI in deep learning without changing the structure of current models or requiring any post-processing. This paper focuses on Convolutional Neural Networks (CNNs), but the method can in principle be applied to any type of deep learning architecture. In our experiments, we train various VGG and Wide ResNet (WRN) networks for image classification on CIFAR100. In-depth analyses presenting both qualitative and quantitative results demonstrate that our method is capable of enforcing a topology on each layer while achieving a small increase in classification accuracy.
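A hedged sketch of a locality-style penalty in the spirit of LGNN: it encourages neighbouring output channels of a convolutional layer to have similar weights along an assumed one-dimensional topology. The exact loss used in the paper may differ, and the layer name and coefficient are placeholders:

    import torch

    def locality_penalty(conv_weight):
        """conv_weight: (out_channels, in_channels, k, k) tensor."""
        w = conv_weight.flatten(start_dim=1)      # one row per output channel
        # Penalize dissimilarity between each channel and its immediate neighbour.
        return ((w[1:] - w[:-1]) ** 2).sum()

    # Usage sketch: add the penalty, scaled by a small coefficient, to the task loss.
    # loss = criterion(logits, targets) + 1e-4 * locality_penalty(model.conv1.weight)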
2019-03-06
Mito, M., Murata, K., Eguchi, D., Mori, Y., Toyonaga, M.  2018.  A Data Reconstruction Method for The Big-Data Analysis. 2018 9th International Conference on Awareness Science and Technology (iCAST). :319–323.
In recent years, the big-data approach has become important within various business operations and sales judgment tactics. However, numerous privacy problems limit the progress of big-data analysis technologies. To mitigate such problems, several privacy-preserving methods have been proposed, e.g., anonymization, extreme-value record elimination, and fully encrypted analysis. However, fears of privacy cracking still remain and prevent the open use of big-data by other, external organizations. We propose a big-data reconstruction method that does not intrinsically use privacy data. The method uses only the statistical features of the big-data, i.e., its attribute histograms and their correlation coefficients. To verify whether valuable information can be extracted using this method, we evaluate the reconstructed data by using a Self-Organizing Map (SOM) as one of the big-data analysis tools. The results show that the same pieces of information are extracted from the reconstructed data as from the original big-data.
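A hedged sketch of how records might be regenerated from attribute histograms and a correlation matrix alone, using a Gaussian-copula-style sampling step; the paper's exact reconstruction procedure may differ, and the toy attributes and correlation value below are assumptions:

    import numpy as np
    from scipy import stats

    def reconstruct(bin_edges, bin_counts, corr, n):
        """bin_edges[j]: j-th attribute's histogram edges (len nbins+1, np.array);
        bin_counts[j]: matching counts; corr: (d, d) correlation matrix; n: records."""
        d = len(bin_counts)
        # Draw correlated uniform marginals via a Gaussian copula built from corr.
        z = np.random.multivariate_normal(np.zeros(d), corr, size=n)
        u = stats.norm.cdf(z)
        data = np.empty((n, d))
        for j in range(d):
            # Invert the j-th attribute's empirical CDF derived from its histogram.
            cdf = np.cumsum(bin_counts[j]) / np.sum(bin_counts[j])
            idx = np.minimum(np.searchsorted(cdf, u[:, j]), len(bin_counts[j]) - 1)
            lo, hi = bin_edges[j][idx], bin_edges[j][idx + 1]
            data[:, j] = lo + np.random.rand(n) * (hi - lo)
        return data

    # Toy example: two attributes with an assumed positive correlation of 0.4.
    edges = [np.linspace(0, 100, 11), np.linspace(0, 10, 11)]
    counts = [np.random.randint(1, 50, 10), np.random.randint(1, 50, 10)]
    synthetic = reconstruct(edges, counts, np.array([[1.0, 0.4], [0.4, 1.0]]), n=1000)

No individual records are needed as input, which is the property the paper relies on for privacy.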
2018-03-05
Mfula, H., Nurminen, J. K.  2017.  Adaptive Root Cause Analysis for Self-Healing in 5G Networks. 2017 International Conference on High Performance Computing & Simulation (HPCS). :136–143.

Root cause analysis (RCA) is a common and recurring task performed by operators of cellular networks. It is done mainly to keep customers satisfied with the quality of offered services and to maximize return on investment (ROI) by minimizing and, where possible, eliminating the root causes of faults in cellular networks. Currently, the detection and diagnosis of faults or potential faults is still a slow, manual process, often carried out by network experts who manually analyze and correlate various pieces of network data, such as alarms, call traces, configuration management (CM) data, and key performance indicator (KPI) data, in order to come up with the most probable root cause of a given network fault. In this paper, we propose an automated fault detection and diagnosis solution called adaptive root cause analysis (ARCA). The solution uses measurements and other network data together with Bayesian network theory to perform automated, evidence-based RCA. Compared to the current common practice, our solution is faster due to automation of the entire RCA process. It is also cheaper because it requires few or no personnel to operate, and it improves efficiency through domain knowledge reuse during adaptive learning. As it uses a probabilistic Bayesian classifier, it can work with incomplete data and can handle large datasets with complex probability combinations. Experimental results on stratified synthesized data validate the feasibility of using such a solution as a key part of self-healing (SH), especially in emerging self-organizing network (SON) based solutions in LTE-Advanced (LTE-A) and 5G.
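A hedged sketch of the kind of naive Bayesian inference such an evidence-based RCA relies on: given possibly incomplete evidence (alarms, KPI states), rank candidate root causes by posterior probability. The causes, symptoms, and probabilities below are invented placeholders, not values from the paper:

    # Assumed priors P(cause) and likelihood tables P(symptom = value | cause).
    priors = {"cell_outage": 0.2, "misconfigured_tilt": 0.3, "backhaul_fault": 0.5}
    likelihood = {
        ("alarm_link_down", True): {"cell_outage": 0.1, "misconfigured_tilt": 0.05, "backhaul_fault": 0.9},
        ("kpi_drop_rate_high", True): {"cell_outage": 0.8, "misconfigured_tilt": 0.6, "backhaul_fault": 0.7},
    }

    def rank_root_causes(evidence):
        """evidence: dict like {"alarm_link_down": True}; missing symptoms are simply ignored."""
        scores = dict(priors)
        for symptom, value in evidence.items():
            table = likelihood.get((symptom, value))
            if table is None:          # unseen evidence is skipped, so incomplete data still works
                continue
            for cause in scores:
                scores[cause] *= table[cause]
        total = sum(scores.values())
        return sorted(((c, p / total) for c, p in scores.items()), key=lambda x: -x[1])

    print(rank_root_causes({"alarm_link_down": True, "kpi_drop_rate_high": True}))

Skipping unknown evidence is one simple way a Bayesian classifier tolerates incomplete data, which is the property the abstract highlights.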

2017-12-28
Boucher, A., Badri, M.  2017.  Predicting Fault-Prone Classes in Object-Oriented Software: An Adaptation of an Unsupervised Hybrid SOM Algorithm. 2017 IEEE International Conference on Software Quality, Reliability and Security (QRS). :306–317.

Many fault-proneness prediction models have been proposed in the literature to identify fault-prone code in software systems. Most approaches use fault data history and supervised learning algorithms to build these models. However, since fault data history is not always available, some approaches instead suggest semi-supervised or unsupervised fault-proneness prediction models. The HySOM model, proposed in the literature, uses function-level source code metrics to predict fault-prone functions in software systems without using any fault data. In this paper, we adapt the HySOM approach to object-oriented software systems to predict fault-prone code at class-level granularity using object-oriented source code metrics. This adaptation makes it easier to prioritize the efforts of the testing team, since unit tests in object-oriented software systems are typically written for classes rather than for individual methods. Our adaptation also generalizes one main element of the HySOM model: the calculation of the source code metric threshold values. We conducted an empirical study using 12 public datasets. Results show that the adaptation of the HySOM model to class-level fault-proneness prediction improves the consistency and the performance of the model. We additionally compared the performance of the adapted model to supervised approaches based on the Naive Bayes Network, ANN, and Random Forest algorithms. A rough sketch of the threshold-driven, unsupervised labelling idea is given below.
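The sketch only illustrates unsupervised, threshold-based flagging of classes from object-oriented metrics (the full HySOM model additionally combines a SOM with an ANN); the threshold rule and voting fraction are illustrative assumptions, not the adapted model from the paper:

    import numpy as np

    def derive_thresholds(metrics):
        """metrics: (n_classes, n_metrics) array of OO metrics (e.g. WMC, CBO, LOC).
        Assumed rule: one threshold per metric from its own distribution, no fault data."""
        return np.mean(metrics, axis=0) + np.std(metrics, axis=0)

    def predict_fault_prone(metrics, thresholds, vote_fraction=0.5):
        # Flag a class as fault-prone when at least vote_fraction of its metrics
        # exceed their thresholds.
        exceeded = metrics > thresholds
        return exceeded.mean(axis=1) >= vote_fraction

    metrics = np.random.rand(100, 6) * 50     # toy metric values for 100 classes
    labels = predict_fault_prone(metrics, derive_thresholds(metrics))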