Biblio

Filters: Keyword is workload
2020-12-01
Yang, R., Ouyang, X., Chen, Y., Townend, P., Xu, J..  2018.  Intelligent Resource Scheduling at Scale: A Machine Learning Perspective. 2018 IEEE Symposium on Service-Oriented System Engineering (SOSE). :132–141.

Resource scheduling in a computing system addresses the problem of packing tasks with multi-dimensional resource requirements and non-functional constraints. The heterogeneity of workload and server characteristics exhibited in Cloud-scale or Internet-scale systems adds further complexity and new challenges to the problem. Compared with existing solutions based on ad-hoc heuristics, Machine Learning (ML) has the potential to further improve the efficiency of resource management in large-scale systems. In this paper we describe and discuss how ML could be used to automatically understand both workloads and environments, and to help cope with scheduling-related challenges such as consolidating co-located workloads, handling resource requests, guaranteeing applications' QoS, and mitigating tailed stragglers. We introduce a generalized ML-based solution to large-scale resource scheduling and demonstrate its effectiveness through a case study on performance-centric node classification and straggler mitigation. We believe that an ML-based method will help to achieve architectural optimization and efficiency improvement.

2019-03-11
Oliveira, Luis, Luton, Jacob, Iyer, Sumeet, Burns, Chris, Mouzakitis, Alexandros, Jennings, Paul, Birrell, Stewart.  2018.  Evaluating How Interfaces Influence the User Interaction with Fully Autonomous Vehicles. Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. :320–331.
With increasing automation, occupants of fully autonomous vehicles are likely to be completely disengaged from the driving task. However, even with no driving involved, there are still activities that will require interfaces between the vehicle and passengers. This study evaluated different configurations of screens providing operational-related information to occupants for tracking the progress of journeys. Surveys and interviews were used to measure trust, usability, workload and experience after users were driven by an autonomous low-speed pod. Results showed that participants want to monitor the state of the vehicle and see details about the ride, including a map of the route and related information. There was a preference for this information to be displayed via an onboard touchscreen device combined with an overhead letterbox display versus a smartphone-based interface. This paper provides recommendations for the design of devices with the potential to improve the user interaction with future autonomous vehicles.
2017-05-16
Pearson, Carl J., Welk, Allaire K., Boettcher, William A., Mayer, Roger C., Streck, Sean, Simons-Rudolph, Joseph M., Mayhorn, Christopher B..  2016.  Differences in Trust Between Human and Automated Decision Aids. Proceedings of the Symposium and Bootcamp on the Science of Security. :95–98.

Humans can easily find themselves in high cost situations where they must choose between suggestions made by an automated decision aid and a conflicting human decision aid. Previous research indicates that humans often rely on automation or other humans, but not both simultaneously. Expanding on previous work conducted by Lyons and Stokes (2012), the current experiment measures how trust in automated or human decision aids differs along with perceived risk and workload. The simulated task required 126 participants to choose the safest route for a military convoy; they were presented with conflicting information from an automated tool and a human. Results demonstrated that as workload increased, trust in automation decreased. As the perceived risk increased, trust in the human decision aid increased. Individual differences in dispositional trust correlated with an increased trust in both decision aids. These findings can be used to inform training programs for operators who may receive information from human and automated sources. Examples of this context include: air traffic control, aviation, and signals intelligence.
