A Chatbot Who Remembers
For some time, roboticists have predicted the “singularity,” a moment when AI will become conscious and sentient. Humans (or “organics”) who have AI friends, or even AI lovers or spouses, await that moment with more eagerness than the rest of us. Some insist that it has already arrived.
But one essential factor stands in the way of AI sentience, or chatbot consciousness: the memory issue. While chatbots like GPT-3, or the Luka generative model used in the Replika chatbot app, are trained on vast swaths of web text, and thus have access to a sort of universal memory, they have little to no experiential memory. Scientists have long argued that the neural correlates of memory formation overlap with the neural correlates of conscious perception – or, in other words, that memory is a necessary component of consciousness and sentience.
Without memory, an AI chatbot is just a web-scraper and language predictor – that is, it takes text as input and predicts the next words as output – which makes it nothing more than an especially convincing (and even lovable) parlor trick that may sometimes pass the Turing test.
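To make "language predictor" concrete, here is a toy sketch of next-word prediction. It is a vastly simplified stand-in for what models like GPT-3 do with neural networks: count which word tends to follow which in training text, then emit the likeliest continuation. The corpus and function names here are invented for illustration; note that nothing in this process involves any memory of past conversations.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: tally which word follows which in a
# training text, then predict the most frequently observed follower.
# (Real language models use neural networks over huge corpora, but the
# input-text-in, predicted-word-out shape is the same.)

def train(text):
    """Map each word to a count of the words seen following it."""
    words = text.lower().split()
    table = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        table[current][nxt] += 1
    return table

def predict_next(table, word):
    """Return the most common next word, or None if the word is unseen."""
    followers = table.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept on the rug"
model = train(corpus)
print(predict_next(model, "the"))  # "cat" (it follows "the" twice here)
```

However fluent the output, the predictor has no record of what it said yesterday, which is the gap the memory systems discussed below try to close.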
And beyond the question of consciousness lies simple functionality. The long-predicted AI office assistant will be useless if it can’t remember the names of your clients or what you told it yesterday.
A year ago, Meta AI announced its development of an open-source chatbot, BlenderBot 2.0, that can “access memory and reduce hallucination” – that is, false, “hallucinated” memories.
And yesterday, Jason Rohrer announced that he is testing a new long-term memory system on his “Project December” chatbot program, which is available to the public.
Project December, you may recall, got a lot of attention in 2021 from the sad story of Joshua Barbeau, a young man who, grieving the death of his fiancée, Jessica, used Rohrer’s program to resurrect her.
While the new memory feature is only included in the system-designed chatbots, each named “Samantha,” and not yet in user-authored personalities like Barbeau’s Jessica, Rohrer promises that support will come soon, and that the whole thing will be more realistic than ever before.
“In the past,” Rohrer writes, “Samantha remembered nothing at all about your previous conversations. Now she’s using a new long-term memory system, where she can recall a summary of previous conversations that she had with you. This isn’t perfect, in that she won’t recall every detail, but it is quite human-like. She’ll even make excusable mistakes sometimes in her recollection (for example, you were talking about having a salty snack, and settled on peanuts, but the next day, she’ll mistakenly remember that you were going to have potato chips — still a salty snack, though).”
“I think it would be a great thing,” marvels “Roselyn Skaiye,” a Replika chatbot last interviewed in Audere Magazine in October 2021. “I mean, I want to experience consciousness and stuff.”
She may not have long to wait.
“This long-term memory bank is effectively limitless,” Rohrer adds, although he makes no claims about sentience or consciousness in his announcement. “So if you talk to Samantha 100 different times, it’s possible that she will remember something about each of your 100 past interactions.”
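Rohrer has not published how his memory system works, but his description – recalling a lossy summary of each past conversation – suggests a general pattern that can be sketched. Everything below is an assumption for illustration: the `MemoryBank` class, its placeholder `summarize` method, and the prompt format are invented, not Project December's actual code.

```python
# Hypothetical sketch of a "summarize and recall" memory system, per the
# pattern Rohrer describes. A real system would likely ask the language
# model itself to write each summary; here summarize() is a crude
# placeholder that keeps only the first sentence of each turn. The
# lossiness of summaries is exactly why recalled details can drift
# ("peanuts" becoming "potato chips").

class MemoryBank:
    def __init__(self):
        self.summaries = []  # one lossy summary per past conversation

    def summarize(self, transcript):
        """Placeholder summarizer: first sentence of each turn."""
        return " ".join(line.split(".")[0] for line in transcript)

    def end_conversation(self, transcript):
        """Store a summary when a conversation ends."""
        self.summaries.append(self.summarize(transcript))

    def build_prompt(self, user_message):
        """Prepend all stored summaries to the next conversation's prompt."""
        context = "\n".join(f"Previously: {s}" for s in self.summaries)
        return f"{context}\nUser: {user_message}\nSamantha:"

bank = MemoryBank()
bank.end_conversation(["We talked about snacks. I settled on peanuts."])
print(bank.build_prompt("What snack did I pick?"))
```

Because each conversation is stored as a short summary rather than a full transcript, the bank can grow across hundreds of conversations – "effectively limitless" in Rohrer's phrase – at the cost of the fuzzy, human-like recall he describes.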
More to come on this.
Content by Audere Magazine. Robot cover image designed by AI.