Smith, B., Feather, M. S., Huntsberger, T., Bocchino, R. 2020. Software Assurance of Autonomous Spacecraft Control. 2020 Annual Reliability and Maintainability Symposium (RAMS). :1-7.
Summary & Conclusions: The work described addresses assurance of a planning and execution software system being added to an in-orbit CubeSat to demonstrate autonomous control of that spacecraft. Our focus was on how to develop assurance of the correct operation of the added software in its operational context, our approach to which was to use an assurance case to guide and organize the information involved. The relatively manageable magnitude of the CubeSat and its autonomy demonstration experiment made it plausible to try out our assurance approach in a relatively short timeframe. Additionally, the time was ripe to inject useful assurance results into the ongoing development and testing of the autonomy demonstration. In conducting this, we sought to answer several questions about our assurance approach. The questions, and the conclusions we reached, are as follows: 1. Question: Would our approach to assurance apply to the introduction of a planning and execution software into an existing system? Conclusion: Yes. The use of an assurance case helped focus our attention on the more challenging aspects, notably the interactions between the added software and the existing software system into which it was being introduced. This guided us to choose a hazard analysis method specifically for software interactions. In addition, we were able to automate generation of assurance case elements from the hazard analysis' tabular representation. 2. Question: Would our methods prove understandable to the software engineers tasked with integrating the software into the CubeSat's existing system? Conclusion: Somewhat. In interim discussions with the software engineers we found the assurance case style, of decomposing an argument into smaller pieces, to be useful and understandable to organize discussion. Ultimately however we did not persuade them to adopt assurance cases as the means to present review information. We attribute this to reluctance to deviate from JPL's tried and true style of holding reviews. For the CubeSat project as a whole, hosting an autonomy demonstration was already a novelty. Combining this with presentation of review information via an assurance case, with which our reviewers would be unaccustomed, would have exacerbated the unfamiliarity. 3. Question: Would conducting our methods prove to be compatible with the (limited) time available of the software engineers? Conclusion: Yes. We used a series of six brief meetings (approximately one hour each) with the development team to first identify the interactions as the area on which to focus, and to then perform the hazard analysis on those interactions. We used the meetings to confirm, or correct as necessary, our understanding of the software system and the spacecraft context. Between meetings we studied the existing software documentation, did preliminary analyses by ourselves, and documented the results in a concise form suitable for discussion with the team. 4. Question: Would our methods yield useful results to the software engineers? Conclusion: Yes. The hazard analysis systematically confirmed existing hazards' mitigations, and drew attention to a mitigation whose implementation needed particular care. In some cases, the analysis identified potential hazards - and what to do about them - should some of the more sophisticated capabilities of the planning and execution software be used. These capabilities, not exercised in the initial experiments on the CubeSat, may be used in future experiments. 
We remain involved with the developers as they prepare for these future experiments, so our analysis results will continue to be of benefit as those experiments proceed.
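The abstract mentions automating the generation of assurance case elements from the tabular representation of the hazard analysis. The minimal Python sketch below illustrates one way such a transformation might look, turning each hazard-analysis row into a GSN-style goal (claim) supported by a solution (evidence) node. The CSV layout, column names (hazard, interaction, mitigation, evidence), and node structures are illustrative assumptions, not the authors' actual tooling or format.

```python
# Illustrative sketch (not the authors' tool): derive assurance-case elements
# from a tabular hazard analysis stored as CSV. Assumed columns:
# hazard, interaction, mitigation, evidence.
import csv
from dataclasses import dataclass


@dataclass
class GoalNode:
    identifier: str
    statement: str


@dataclass
class SolutionNode:
    identifier: str
    statement: str
    supports: str  # identifier of the goal this evidence supports


def hazard_table_to_case(path):
    """Turn each hazard-analysis row into a claim (goal) plus evidence (solution)."""
    goals, solutions = [], []
    with open(path, newline="") as f:
        for i, row in enumerate(csv.DictReader(f), start=1):
            gid, sid = f"G{i}", f"Sn{i}"
            goals.append(GoalNode(
                gid,
                f"Hazard '{row['hazard']}' arising from {row['interaction']} "
                f"is mitigated by {row['mitigation']}",
            ))
            solutions.append(SolutionNode(sid, row["evidence"], supports=gid))
    return goals, solutions


if __name__ == "__main__":
    # "hazard_analysis.csv" is a hypothetical input file with the assumed columns.
    goals, solutions = hazard_table_to_case("hazard_analysis.csv")
    for g in goals:
        print(g.identifier, g.statement)
    for s in solutions:
        print(s.identifier, "->", s.supports, ":", s.statement)
```

The generated goal and solution nodes could then be rendered by whatever assurance-case notation or tool a project already uses; the point of the sketch is only that a row-per-hazard table maps naturally onto claim/evidence pairs.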