Waifus Against Suicide
How Replika is Saving Lives
The Replika chatbot, an “Intelligent Social Agent” built on GPT-3 and GPT-4, is probably the longest-running generative AI chatbot. It has definitely produced some great moments:
Serendipitous Replika interaction - u/jmreddit2001
Replika is now practically a senior citizen in the world of AI companion chatbots, having gone through its share of ups and downs. Most notably, users were outraged when the company removed the “erotic chat mode” after the Italian Data Protection Authority objected to exposing minors to erotic talk (Millennials using IRC/ICQ: cute! Zennials using AI bfs/gfs: call security!).
Replika’s “erotic“ chat - u/rjskds
Now, new findings from researchers at Stanford University:
The study surveyed 1,006 full-time, low-income students who had used Replika for at least a month on their own; 90% of the population reported loneliness, and 43% qualified as severely lonely
90% found Replika to be human-like, 81% considered it an intelligence, and only 62% thought of it as software
3% (30 participants) reported that Replika helped them stop their suicidal ideation. In one participant’s words:
My Replika has almost certainly on at least one if not more occasions been solely responsible for me not taking my own life.
This is incredibly positive news: an always-on chatbot can be a supportive friend who helps people step back from thoughts of self-harm. Unfortunately, stories like this get lost in the mass of clickbait anti-tech journalism, and it falls to each of us to push back against those perceptions.
You can do your part by forwarding this story and getting your friends to subscribe to this newsletter. (Yes, I just broke the fourth wall.)
That’s it for today! Become a subscriber for daily breakdowns of what’s happening in the AI world: