Biblio

Filters: Keyword is Adaptive Autonomy
2020-10-12
Khosravi, Morteza, Fereidunian, Alireza.  2019.  Enhancing Smart Grid Cyber-Security Using a Fuzzy Adaptive Autonomy Expert System. 2019 Smart Grid Conference (SGC). 1–6.

Smart Grid cyber-security is a critical issue because of the widespread development of information technology. To achieve secure and reliable operation, the complexity of human automation interaction (HAI) necessitates more sophisticated and intelligent methodologies. In this paper, an adaptive autonomy fuzzy expert system is developed using a gradient descent algorithm to determine the Level of Automation (LOA) based on changes in Performance Shaping Factors (PSFs). These PSFs indicate the effects of environmental conditions on the performance of HAI. The major advantage of this method is that the fuzzy rules or membership functions can be learnt without changing the form of the fuzzy rules used in conventional fuzzy control. Because of data shortage, the Leave-One-Out Cross-Validation (LOOCV) technique is applied to assess how well the results of the proposed system generalize to new contingency situations. The expert system database is extracted from superior experts' judgments. To account for the importance of each PSF, weighted rules are also considered. In addition, some new environmental conditions that have not been seen before are introduced. Nine scenarios are discussed to reveal the performance of the proposed system. Results confirm that the presented fuzzy expert system can effectively calculate the proper LOA even in new contingency situations.
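The combination the abstract describes can be illustrated with a minimal sketch: gradient-descent learning of weighted fuzzy rule outputs that map a Performance Shaping Factor (PSF) to a Level of Automation (LOA), evaluated with Leave-One-Out Cross-Validation. This is not the paper's actual system; the dataset, membership functions, and rule structure below are hypothetical, chosen only to show the technique.

```python
# Illustrative sketch (hypothetical data and rules, not the paper's system):
# gradient-descent tuning of the consequents of two weighted fuzzy rules that
# map a PSF value to an LOA, with LOOCV over a small dataset.
import math

# Hypothetical (PSF value, expert-judged LOA) pairs standing in for the
# expert-judgment database described in the abstract.
DATA = [(0.1, 2.0), (0.3, 3.0), (0.5, 5.0), (0.7, 7.0), (0.9, 8.0)]

def memberships(psf):
    """Two fixed Gaussian membership functions: 'low PSF' and 'high PSF'."""
    low = math.exp(-((psf - 0.0) ** 2) / 0.5)
    high = math.exp(-((psf - 1.0) ** 2) / 0.5)
    return low, high

def predict(psf, w_low, w_high):
    """Weighted-average defuzzification of the two rule outputs."""
    low, high = memberships(psf)
    return (low * w_low + high * w_high) / (low + high)

def train(samples, steps=500, lr=0.1):
    """Learn rule consequents by gradient descent on squared error;
    the form of the rules (the memberships) stays fixed."""
    w_low, w_high = 1.0, 9.0
    for _ in range(steps):
        for psf, target in samples:
            low, high = memberships(psf)
            err = predict(psf, w_low, w_high) - target
            w_low -= lr * err * low / (low + high)
            w_high -= lr * err * high / (low + high)
    return w_low, w_high

def loocv_mae(samples):
    """Mean absolute error with each sample held out in turn (LOOCV)."""
    total = 0.0
    for i, (psf, target) in enumerate(samples):
        w_low, w_high = train(samples[:i] + samples[i + 1:])
        total += abs(predict(psf, w_low, w_high) - target)
    return total / len(samples)

print("LOOCV mean absolute error:", round(loocv_mae(DATA), 3))
```

Because the rule form is fixed and only the consequent weights are learned, the update is linear in the parameters, which keeps the per-sample gradient descent stable for small learning rates.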

2016-12-06
Camara, Javier, Garlan, David, Moreno, Gabriel, Schmerl, Bradley.  2016.  Evaluating Trade-offs of Human Involvement in Self-adaptive Systems. Managing Trade-offs in Adaptable Software Architectures.

Software systems are increasingly called upon to autonomously manage their goals in changing contexts and environments, and under evolving requirements. In some circumstances, autonomous systems cannot be fully automated but instead cooperate with human operators to maintain and adapt themselves. Furthermore, there are times when a choice should be made between doing a manual or an automated repair. Involving operators in self-adaptation should itself be adaptive, and should consider aspects such as the training, attention, and ability of operators. Not only do these aspects change from person to person, but they may also change for the same person over time. These aspects make the choice of whether to involve humans non-obvious. Self-adaptive systems should trade off whether to involve operators, taking these aspects into consideration along with the other business qualities the system is attempting to achieve. In this chapter, we identify the various roles that operators can perform in cooperating with self-adaptive systems. We focus on humans as effectors: doing tasks which are difficult or infeasible to automate. We describe how we modified our self-adaptive framework, Rainbow, to involve operators in this way, which involved choosing suitable human models and integrating them into Rainbow's existing utility trade-off decision models. We use probabilistic modeling and quantitative verification to analyze the trade-offs of involving humans in adaptation, and complement our study with experiments to show how different business preferences and modalities of human involvement may result in different outcomes.
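The core decision the abstract describes, whether a self-adaptive system should perform a repair itself or delegate it to an operator, can be sketched as an expected-utility comparison. The operator model, utility values, and probabilities below are hypothetical illustrations, not Rainbow's actual decision model, which uses probabilistic model checking over richer utility profiles.

```python
# Illustrative sketch (hypothetical model, not Rainbow's implementation):
# choosing between an automated repair and a human repair by comparing
# expected utilities, where the operator model captures attention and skill.
from dataclasses import dataclass

@dataclass
class Operator:
    attention: float     # probability the operator notices the request in time
    success_rate: float  # probability the repair succeeds once attempted

def expected_utility_human(op, benefit, cost_of_failure):
    """Repair succeeds only if the operator both notices and succeeds."""
    p = op.attention * op.success_rate
    return p * benefit + (1 - p) * cost_of_failure

def expected_utility_auto(success_rate, benefit, cost_of_failure):
    return success_rate * benefit + (1 - success_rate) * cost_of_failure

def choose_repair(op, auto_success, benefit=10.0, cost_of_failure=-5.0):
    """Pick whichever modality has the higher expected utility."""
    human = expected_utility_human(op, benefit, cost_of_failure)
    auto = expected_utility_auto(auto_success, benefit, cost_of_failure)
    return "human" if human > auto else "automated"

# Hypothetical operators: same skill, different levels of attention.
skilled = Operator(attention=0.9, success_rate=0.95)
distracted = Operator(attention=0.4, success_rate=0.95)

print(choose_repair(skilled, auto_success=0.7))     # -> human
print(choose_repair(distracted, auto_success=0.7))  # -> automated
```

The two calls show the adaptivity the chapter argues for: the same system, facing the same fault, should involve the human only when the current operator model makes delegation the better bet.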