Biblio
Filters: Author is Cody, Tyler
On Valuing the Impact of Machine Learning Faults to Cyber-Physical Production Systems. 2022 IEEE International Conference on Omni-layer Intelligent Systems (COINS). 2022. :1-6.
Machine learning (ML) has been applied in prognostics and health management (PHM) to monitor and predict the health of industrial machinery. The use of PHM in production systems creates a cyber-physical, omni-layer system. While ML offers statistical improvements over previous methods, and brings statistical models to bear on new systems and PHM tasks, it is susceptible to performance degradation when the behavior of the systems from which it receives its inputs changes. Natural changes such as physical wear and engineered changes such as maintenance and rebuild procedures are catalysts for performance degradation, and both are inherent to production systems. Drawing from data on the impact of maintenance procedures on ML performance in hydraulic actuators, this paper presents a simulation study that investigates how long it takes for ML performance degradation to create a difference in the throughput of a serial production system. In particular, this investigation compares the performance of an ML model learned on data collected before a rebuild procedure is conducted on a hydraulic actuator with that of an ML model transfer learned on data collected after the rebuild procedure. Transfer learning is able to mitigate performance degradation, but there is still a significant impact on throughput. The conclusion is drawn that ML faults can have drastic, non-linear effects on the throughput of production systems.
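As a rough illustration of the mitigation strategy the abstract describes, the sketch below trains a health classifier on pre-rebuild data and then transfer learns it on a small post-rebuild sample by freezing the feature layers and fine-tuning the output head. This is not the paper's code: the network, the sensor features, the health label, and the rebuild-induced distribution shift are all hypothetical stand-ins.

# Illustrative sketch (not the paper's code): fine-tune a pre-rebuild health
# classifier on a small amount of post-rebuild data to mitigate degradation.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_data(n, shift=0.0):
    # Hypothetical sensor features; `shift` stands in for the distribution
    # change introduced by the rebuild procedure.
    x = torch.randn(n, 8) + shift
    y = (x.sum(dim=1) > shift * 8).long()  # hypothetical health label
    return x, y

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))

def train(model, x, y, epochs=50, lr=1e-2, params=None):
    opt = torch.optim.Adam(params if params is not None else model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# 1) Learn on data collected before the rebuild procedure.
x_pre, y_pre = make_data(2000, shift=0.0)
train(model, x_pre, y_pre)

# 2) Transfer learn: freeze the feature layer and fine-tune only the output
#    head on a small sample collected after the rebuild.
x_post, y_post = make_data(200, shift=1.5)
for p in model[0].parameters():
    p.requires_grad = False
train(model, x_post, y_post, epochs=30, params=model[2].parameters())

The freeze-and-fine-tune split is one common way to transfer with little post-rebuild data; the paper's own transfer learning procedure may differ.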
Heterogeneous Transfer in Deep Learning for Spectrogram Classification in Cognitive Communications. 2021 IEEE Cognitive Communications for Aerospace Applications Workshop (CCAAW). 2021. :1-5.
Machine learning offers performance improvements and novel functionality, but its life cycle performance is understudied. In areas like cognitive communications, where systems are long-lived, life cycle trade-offs are key to system design. Herein, we consider the use of deep learning to classify spectrograms. We vary the label-space over which the network makes classifications, as may emerge with changes in use over a system’s life cycle, and compare heterogeneous transfer learning performance across label-spaces between model architectures. Our results offer an empirical example of life cycle challenges to using machine learning for cognitive communications. They evidence important trade-offs among performance, training time, and sensitivity to the order in which the label-space is changed. And they show that fine-tuning can be used in the heterogeneous transfer of spectrogram classifiers.
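To make the heterogeneous transfer idea concrete, the sketch below swaps the output head of a small spectrogram classifier when the label-space changes and fine-tunes on data labeled in the new space. It is not the paper's code: the network architecture, input size, and the 4-class and 6-class label-spaces are assumptions chosen only for illustration.

# Illustrative sketch (not the paper's code): heterogeneous transfer of a
# spectrogram classifier by replacing the output head when the label-space
# changes, then fine-tuning on the new label-space.
import torch
import torch.nn as nn

torch.manual_seed(0)

class SpectrogramNet(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Linear(8 * 4 * 4, n_classes)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.head(z)

def train(model, x, y, epochs=20, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# Hypothetical spectrograms: a batch of 1-channel 64x64 "images".
x_old = torch.randn(256, 1, 64, 64)
y_old = torch.randint(0, 4, (256,))        # original 4-class label-space
model = SpectrogramNet(n_classes=4)
train(model, x_old, y_old)

# The label-space changes over the system's life cycle: swap in a freshly
# initialized head sized for 6 classes and fine-tune on relabeled data.
x_new = torch.randn(128, 1, 64, 64)
y_new = torch.randint(0, 6, (128,))
model.head = nn.Linear(8 * 4 * 4, 6)
train(model, x_new, y_new, epochs=10)

Here the convolutional features carry over across label-spaces while the head is re-learned; whether to also freeze the features during fine-tuning is one of the performance/training-time trade-offs the abstract points to.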