Biblio

Filters: Author is Beling, Peter
2022-12-09
Cody, Tyler, Adams, Stephen, Beling, Peter, Freeman, Laura.  2022.  On Valuing the Impact of Machine Learning Faults to Cyber-Physical Production Systems. 2022 IEEE International Conference on Omni-layer Intelligent Systems (COINS). :1–6.
Machine learning (ML) has been applied in prognostics and health management (PHM) to monitor and predict the health of industrial machinery. The use of PHM in production systems creates a cyber-physical, omni-layer system. While ML offers statistical improvements over previous methods, and brings statistical models to bear on new systems and PHM tasks, it is susceptible to performance degradation when the behavior of the systems from which it receives its inputs changes. Natural changes such as physical wear and engineered changes such as maintenance and rebuild procedures are catalysts for performance degradation, and both are inherent to production systems. Drawing from data on the impact of maintenance procedures on ML performance in hydraulic actuators, this paper presents a simulation study that investigates how long it takes for ML performance degradation to create a difference in the throughput of a serial production system. In particular, this investigation compares the performance of an ML model learned on data collected before a rebuild procedure is conducted on a hydraulic actuator with that of an ML model transfer-learned on data collected after the rebuild procedure. Transfer learning is able to mitigate performance degradation, but there is still a significant impact on throughput. The conclusion is drawn that ML faults can have drastic, non-linear effects on the throughput of production systems.
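The mitigation the abstract describes, transfer learning a model from pre-rebuild to post-rebuild data, can be sketched as warm-started fine-tuning. The Python sketch below is a hypothetical illustration of that idea, not the paper's implementation: the sensor feature dimensions, sample sizes, and MLP architecture are all assumptions made for the example.

```python
# Hypothetical sketch: fine-tune a pre-rebuild health classifier on a small
# post-rebuild sample. All data here is synthetic; nothing is from the paper.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Invented sensor data: a large pre-rebuild (source) set and a small
# post-rebuild (target) sample whose distribution has shifted.
X_pre = rng.normal(0.0, 1.0, (1000, 8))
y_pre = rng.integers(0, 2, 1000)
X_post = rng.normal(0.5, 1.2, (50, 8))
y_post = rng.integers(0, 2, 50)

# Source model: a health classifier learned on pre-rebuild data only.
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
model.fit(X_pre, y_pre)

# Transfer learning: keep the learned weights (warm_start) and take a few
# additional training passes over the small post-rebuild sample.
model.set_params(warm_start=True, max_iter=20)
model.fit(X_post, y_post)

print("post-rebuild accuracy:", model.score(X_post, y_post))
```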
2022-08-12
Berman, Maxwell, Adams, Stephen, Sherburne, Tim, Fleming, Cody, Beling, Peter.  2019.  Active Learning to Improve Static Analysis. 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA). :1322–1327.
Static analysis tools are programs that run on source code prior to its compilation to binary executables and attempt to find flaws or defects in the code during the early stages of development. If left unresolved, these flaws could pose security risks. While numerous static analysis tools exist, no single tool is optimal, so many static analysis tools are often used together to analyze code. Further, some of the alerts generated by static analysis tools are low-priority or false alarms. Machine learning algorithms have been developed to distinguish between true alerts and false alarms; however, significant human effort must be dedicated to labeling data sets for training. This study investigates the use of active learning to reduce the number of labeled alerts needed to adequately train a classifier. The numerical experiments demonstrate that a query-by-committee active learning algorithm can be utilized to significantly reduce the number of labeled alerts needed to achieve performance similar to that of a classifier trained on a data set of nearly 60,000 labeled alerts.
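As a rough illustration of the query-by-committee approach the abstract refers to, the following Python sketch trains a bootstrap committee on a small labeled seed set and repeatedly queries the pool alert the committee disagrees on most. The synthetic data, committee size, and query budget are assumptions for illustration; none of the paper's alert features, labels, or models are reproduced here.

```python
# Hypothetical query-by-committee loop over synthetic "alert" data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in for static analysis alerts: X holds alert features, y holds
# true-alert (1) vs. false-alarm (0) labels. Entirely synthetic.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

labeled = list(rng.choice(len(X), size=20, replace=False))  # small seed set
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(30):  # query budget: how many alerts an analyst will label
    # Committee of trees, each fit on a bootstrap resample of the labeled set.
    committee = []
    for seed in range(5):
        idx = rng.choice(labeled, size=len(labeled), replace=True)
        committee.append(DecisionTreeClassifier(random_state=seed).fit(X[idx], y[idx]))
    # Query the pool alert on which the committee's votes are most split.
    votes = np.stack([m.predict(X[pool]) for m in committee])
    frac_positive = votes.mean(axis=0)
    query = pool.pop(int(np.argmin(np.abs(frac_positive - 0.5))))
    labeled.append(query)  # in practice, an analyst labels this alert

# Final classifier trained on the seed set plus the queried labels.
final = DecisionTreeClassifier(random_state=0).fit(X[labeled], y[labeled])
print("accuracy with", len(labeled), "labels:", final.score(X, y))
```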