
Gonzalez, Cleotilde, Lerch, Javier F, Lebiere, Christian.  2003.  Instance-based learning in dynamic decision making. Cognitive Science. 27:591–635.

This paper presents a learning theory pertinent to dynamic decision making (DDM) called instance-based learning theory (IBLT). IBLT proposes five learning mechanisms in the context of a decision-making process: instance-based knowledge, recognition-based retrieval, adaptive strategies, necessity-based choice, and feedback updates. IBLT suggests that, in DDM, people learn through the accumulation and refinement of instances that contain the decision-making situation, the action taken, and the utility of the decision. As decision makers interact with a dynamic task, they recognize a situation according to its similarity to past instances, adapt their judgment strategies from heuristic-based to instance-based, and refine the accumulated knowledge according to feedback on the results of their actions. IBLT’s learning mechanisms have been implemented in an ACT-R cognitive model. Through a series of experiments, this paper shows how IBLT’s learning mechanisms closely approximate the relative trend magnitude and performance of human data. Although the cognitive model is bounded within the context of a particular dynamic task, IBLT is a general theory of decision making applicable to other dynamic environments.
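
To make the instance-based learning cycle concrete, the following is a minimal Python sketch of the loop described above: instances of (situation, action, utility) are accumulated, retrieved by match to the current situation, blended into an expected value, and refined by outcome feedback. The activation/blending scheme, parameter values, and the toy task are illustrative assumptions, not the ACT-R model from the paper.

```python
import math
import random

# Minimal sketch of the instance-based learning loop, assuming a simplified
# activation/blending scheme; the decay parameter, default utility, and the
# toy task below are illustrative, not the paper's ACT-R implementation.

class IBLAgent:
    def __init__(self, default_utility=10.0, decay=0.5):
        self.default_utility = default_utility  # drives necessity-based exploration
        self.decay = decay                      # memory decay rate
        self.instances = []                     # (situation, action, utility, [use times])
        self.t = 0

    def _strength(self, timestamps):
        # Power-law decay: recently and frequently used instances weigh more.
        return sum((self.t - tk) ** (-self.decay) for tk in timestamps)

    def choose(self, situation, actions):
        self.t += 1
        best_action, best_value = None, float("-inf")
        for action in actions:
            matches = [inst for inst in self.instances
                       if inst[0] == situation and inst[1] == action]
            if not matches:
                value = self.default_utility  # no experience yet: rely on the default
            else:
                # Blend stored utilities, weighted by each instance's memory strength.
                weights = [self._strength(ts) for _, _, _, ts in matches]
                total = sum(weights)
                value = sum(w * u for (_, _, u, _), w in zip(matches, weights)) / total
            if value > best_value:
                best_action, best_value = action, value
        return best_action

    def feedback(self, situation, action, utility):
        # Feedback update: reinforce an existing instance or store a new one.
        for inst in self.instances:
            if inst[0] == situation and inst[1] == action and inst[2] == utility:
                inst[3].append(self.t)
                return
        self.instances.append((situation, action, utility, [self.t]))

# Toy usage: repeated choice between a safe and a risky option with noisy payoffs.
agent = IBLAgent()
for _ in range(100):
    action = agent.choose("market", ["safe", "risky"])
    payoff = 5.0 if action == "safe" else random.choice([0.0, 12.0])
    agent.feedback("market", action, payoff)
```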

Cranford, Edward A, Gonzalez, Cleotilde, Aggarwal, Palvi, Lebiere, Christian.  2019.  Towards Personalized Deceptive Signaling for Cyber Defense Using Cognitive Models.

Recent research in cybersecurity has begun to develop active defense strategies that combine game-theoretic optimization of the allocation of limited defenses with deceptive signaling. While effective, these algorithms are optimized against perfectly rational adversaries. In a laboratory experiment, we pit humans against the defense algorithm in an online game designed to simulate an insider-attack scenario. Humans attack far more often than predicted under perfect rationality, making it vitally important to optimize defenses against human bounded rationality. We propose a cognitive model, based on instance-based learning theory and built in ACT-R, that accurately predicts human performance and biases in the game. We show that the algorithm does not defend well, largely due to its static nature and its lack of adaptation to the particular individual’s actions. We therefore propose an adaptive method of signaling that uses the cognitive model to trace an individual’s experience in real time in order to optimize defenses. We discuss the results and implications of personalized defense.
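
As a rough illustration of the model-traced signaling idea, the sketch below has the defender track one attacker's past reactions to signals and send a deceptive "monitored" signal on an uncovered target only when the traced estimate says that attacker is still likely to comply. The function names, compliance estimate, and threshold are assumptions for illustration, not the algorithm from the paper.

```python
# Hedged sketch of personalized, model-traced signaling. The defender keeps a
# per-attacker history of (was_signaled, withdrew) pairs, estimates how often
# this individual has complied with past signals, and deceives on uncovered
# targets only while that estimate stays above a threshold.

def estimated_compliance(history, prior=0.5):
    """Fraction of past signaled trials on which this attacker withdrew,
    smoothed with a prior so sparse histories are not over-trusted."""
    signaled = [withdrew for was_signaled, withdrew in history if was_signaled]
    if not signaled:
        return prior
    return (prior + sum(signaled)) / (1 + len(signaled))  # Laplace-style smoothing

def send_deceptive_signal(history, target_is_covered, threshold=0.4):
    """Signal truthfully on covered targets; on uncovered targets, deceive
    only if this attacker's traced compliance is still high enough."""
    if target_is_covered:
        return True
    return estimated_compliance(history) >= threshold

# Example: this attacker ignored 3 of the last 4 signals, so stop "crying wolf".
history = [(True, 1), (True, 0), (True, 0), (True, 0)]
print(send_deceptive_signal(history, target_is_covered=False))  # -> False
```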