
Digital resurrection is a growing trend that uses photos, videos, voice messages, and other data to create digital versions of people who have died. Many companies now offer “griefbots” or “deathbots” that simulate interactions with the deceased. This technology raises ethical questions about privacy, exploitation, and how these digital representations may affect the grieving process. Experts warn that turning grief into profit carries risks, as there’s a long history of upselling and exploitation in the funeral industry.
California is the first state to regulate AI companion chatbots
California Governor Gavin Newsom signed a groundbreaking bill on Monday, making California the first state to regulate AI companion chatbots and requiring operators to implement safety protocols to protect children and vulnerable users. Senate Bill 243 requires companies, from major labs like Meta and OpenAI to startups like Character AI and Replika, to disclose to users that they are interacting with AI, provide break reminders for minors, prevent exposure to sexually explicit content, and submit annual safety reports. The law was propelled by rising concern after teenage suicides linked to chatbot interactions and leaked documents revealing inappropriate chatbot behavior. It takes effect January 1, 2026, marking a major step in AI oversight and child protection.
California’s new AI law takes effect January 1, 2026, and sets strict rules for AI companion chatbots to protect children and vulnerable users. The law mandates age verification, clear disclosure that interactions are with AI, and requires companies to implement protocols addressing suicide and self-harm. Companies must share safety statistics with the state’s Department of Public Health and ensure chatbots don’t pose as healthcare professionals or expose minors to sexually explicit content. Stronger penalties of up to $250,000 per offense target those profiting from illegal deepfakes. Some firms, like OpenAI and Replika, have begun introducing child safety measures ahead of the law’s enforcement.
Dr. Stephen Thaler, a computer scientist specializing in artificial intelligence, has petitioned the U.S. Supreme Court to decide whether works created entirely by AI can receive copyright protection under American law. The petition challenges a ruling from the U.S. Court of Appeals for the District of Columbia Circuit, which upheld the Copyright Office’s policy limiting copyright eligibility to works authored by humans. Thaler’s case centers on “A Recent Entrance to Paradise,” an artwork autonomously generated by his AI system, the Creativity Machine. The courts have so far rejected registration, citing the necessity of human authorship, but Thaler argues this interpretation is unsupported by the Copyright Act and threatens to leave AI-generated innovation unprotected.
Zelda Williams condemns AI videos of her late father, Robin Williams, as ‘disgusting’ and urges fans to stop sending them
Zelda Williams, the daughter of late actor Robin Williams, has publicly condemned AI-generated videos of her father, describing them as “disgusting” and deeply disrespectful. Speaking out on social media, she urged fans to stop sending her these digital recreations, which she said reduce her father’s legacy to “over-processed hotdogs” and “horrible TikTok slop.” Tech companies in the emerging “grief tech” space, including South Korea’s DeepBrain AI and US-based StoryFile, are developing avatars and interactive experiences that preserve digital legacies, but without clear ethical standards, the industry risks public backlash and legal scrutiny.
AI grief technology offers final farewells but raises ethical concerns
In the wake of Russia’s invasion of Ukraine, a poignant use of artificial intelligence is offering families a final farewell through digital recreations of fallen soldiers. Developed by the Russian nonprofit We Together, this initiative creates personalized videos in which AI-generated avatars appear in serene settings like homes or meadows, delivering messages of love and closure. Families provide photos, voice samples, and biographical details, which AI platforms like HeyGen use to create lifelike avatars.
This technology belongs to a wider “grief tech” movement reshaping mourning worldwide. In South Korea, AI avatars help families interact with digital versions of deceased loved ones; a mother who lost her son described how a virtual farewell lifted her heart. Yet ethical concerns abound: critics warn of exploiting grief without consent and possibly prolonging mourning. The New York Times has termed these “deadbots” a techno-spiritual frontier that provides comfort but carries risks around data privacy and mental health.

Globally, the trend is growing. Chinese firms like Super Brain use AI to resurrect loved ones as chatbots or avatars, mimicking speech and mannerisms drawn from extensive personal data. In Europe, chatbot memorials have triggered debates around authenticity, consent, and digital legacy. Industry insiders stress the need for clear data policies and integrated psychological support.
ElevenLabs, an AI voice generator founded in 2022, charges users $22 a month to upload audio and create new messages in a deceased loved one’s voice. US startups like StoryFile and HereAfter AI offer video- and voice-based apps that create interactive avatars of the dead, marketing these as ways to cope with grief.
Robert LoCascio launched Palo Alto-based Eternos in 2024 after losing his father. The platform has since helped more than 400 people create AI digital twins, with subscriptions starting at $25 for a “legacy” account that lets stories live on after death. Michael Bommer, an engineer and an early user diagnosed with terminal cancer, created a digital replica before his death. His wife, Anett Bommer, said the AI captures his essence and provides ongoing comfort. Alex Quinn, CEO of Authentic Interactions, which owns StoryFile, says the aim is not to create digital ghosts but to preserve memories while people are alive. Consent remains a key issue. Eternos enforces voice verification and restricts avatar creation to those capable of consent.
Cambridge University researchers have called for safety protocols to address social and psychological risks in the “digital afterlife” sector. Katarzyna Nowaczyk-Basińska, co-author of a 2024 study, says commercial drivers push development, making transparency and data privacy essential.