TY - JOUR
T1 - Confabulation dynamics in a reservoir computer
T2 - Filling in the gaps with untrained attractors
AU - O’Hagan, Jack
AU - Keane, Andrew
AU - Flynn, Andrew
N1 - Publisher Copyright:
© 2025 Author(s).
PY - 2025/9/1
Y1 - 2025/9/1
N2 - Artificial intelligence has advanced significantly in recent years, thanks to innovations in the design and training of artificial neural networks (ANNs). Despite these advancements, we still understand relatively little about how elementary forms of ANNs learn, fail to learn, and generate false information without the intent to deceive, a phenomenon known as “confabulation.” To provide some foundational insight, in this paper, we analyze how confabulation occurs in reservoir computers (RCs): dynamical systems in the form of ANNs. RCs are particularly useful to study as they are known to confabulate in a well-defined way: when RCs are trained to reconstruct the dynamics of a given attractor, they sometimes construct an attractor that they were not trained on, a so-called “untrained attractor” (UA). This paper sheds light on the role played by UAs when reconstruction fails and their influence when modeling transitions between reconstructed attractors. Based on our results, we conclude that UAs are an intrinsic feature of learning systems whose state spaces are bounded and that this means of confabulation may be present in systems beyond RCs.
UR - https://www.scopus.com/pages/publications/105015675061
U2 - 10.1063/5.0283285
DO - 10.1063/5.0283285
M3 - Article
C2 - 40932348
AN - SCOPUS:105015675061
SN - 1054-1500
VL - 35
JO - Chaos
JF - Chaos
IS - 9
M1 - 093130
ER -