
Digital Necromancy: How Generative AI Is Being Used to Bring Back the Deceased

  • Generative AI like ChatGPT is being used for "digital necromancy" to bring back the dead through their digital traces.

  • This continues longstanding grieving practices of keeping bonds with the dead through images, texts, etc.

  • Startups are training AI on the deceased's data to create bots to interact with (see the sketch after this list).

  • Some worry this violates the dead's integrity, but we often imagine what they'd say anyway.

  • Rather than changing practices, this technology resonates with how we already relate to the dead through media as conduits to memory.
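
These startup pipelines are not public, so the snippet below is only a minimal sketch of one common approach: conditioning a general-purpose chat model on a handful of a person's saved messages so that its replies loosely imitate their voice. The model name, prompt format, and example messages are all assumptions for illustration, not any company's actual system.

```python
# Minimal sketch (not any startup's actual pipeline): prompt a general-purpose
# chat model with a persona assembled from a person's saved messages so that
# replies loosely imitate their voice. Model name and prompt format are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical excerpts from the deceased person's texts and emails (illustrative only).
saved_messages = [
    "Don't worry so much, love. Everything sorts itself out eventually.",
    "Put the kettle on and tell me everything from the beginning.",
    "Your grandfather used to say the garden knows more than the gardener.",
]

persona_prompt = (
    "You are imitating the conversational style of a specific person, "
    "based only on the example messages below. Stay in their voice.\n\n"
    + "\n".join(f"- {m}" for m in saved_messages)
)

def ask(question: str) -> str:
    """Return a reply written in the style of the stored messages."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-completion model would do
        messages=[
            {"role": "system", "content": persona_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("I had a hard week. What would you tell me?"))
```

In practice, the quality of such a bot depends far more on how much of the person's writing is available than on the prompting code itself, which is part of why the articles below stress the labor and resources involved.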

theconversation.com
Relevant topic timeline:
### Summary
Creating chatbot replicas of dead loved ones is possible with powerful language models like ChatGPT, but it requires significant labor and resources to maintain their online presence. Digital death care practices require upkeep, and devices and websites eventually decay. The creation of AI replicas raises ethical questions and can cause emotional distress for those left behind.

### Facts
- It is feasible to create convincing chatbot replicas of dead loved ones using powerful language models like ChatGPT.
- Maintaining automated systems, including replicas of the dead, requires significant labor and resources.
- Digital death care practices involve managing passwords, navigating smart homes, and updating electronic records.
- Devices, formats, and websites also decay over time due to planned obsolescence.
- Early attempts to create AI replicas of dead humans have shown limitations and have often failed.
- Creating convincing replicas of dead humans requires vast resources and carries astronomical financial costs.
- The authority to create replicas is a matter of debate, and not everyone may want to be reincarnated as a chatbot.
- Developers and companies control how long chatbot replicas persist, often planning mortality into the systems.
- The use of generative AI to revive dead actors raises concerns about personality rights and can harm living workers.
- AI versions of people can be created without the knowledge or consent of living kin.
- The creation of AI replicas exposes the power relations, infrastructures, and networked labor behind digital production.
- Maintaining these creations can have psychological costs for those left behind.
Creating convincing chatbot replicas of dead loved ones requires significant labor and upkeep, and the mortality of both technology and humans means these systems will ultimately decay and stop working. The authority to create such replicas, and the potential implications for privacy and grieving processes, are also important considerations in the development of AI-backed replicas of the dead.
Companies are adopting generative AI technologies such as copilots, assistants, and chatbots, but many HR and IT professionals are still figuring out how these technologies work and how to implement them effectively. Despite the excitement and potential, the generative AI market is still young and vendors are still developing their solutions.
Companies are using AI to create lifelike avatars of people, including those who have died.
Advances in artificial intelligence technology have allowed a Holocaust campaigner's son to create a conversational AI video of his deceased mother, enabling her to answer questions from loved ones at her own funeral. The technology, developed by StoryFile, records participants' answers about their lives and creates an interactive video that can respond to questions as if having a normal conversation, preserving personal stories for future generations. While some see the technology as a way to cope with grief and preserve memories, others express concerns about potential ethical and emotional implications.
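StoryFile has not published its implementation, so the snippet below only illustrates the general retrieval idea the article describes: matching a mourner's question to the closest pre-recorded answer rather than generating new speech. The TF-IDF matching, interview prompts, and clip names are assumptions for illustration.

```python
# Simplified sketch of a retrieval-style "interactive video": pick the recorded
# clip whose interview prompt is most similar to the question being asked.
# Uses TF-IDF cosine similarity as a stand-in for whatever matching the real
# product performs; prompts and clip names are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical recorded interview: prompts the person answered on camera,
# paired with references to their recorded answers.
recorded = [
    ("Where did you grow up?", "clip_childhood.mp4"),
    ("How did you meet your husband?", "clip_meeting_husband.mp4"),
    ("What do you want your grandchildren to remember?", "clip_message_to_family.mp4"),
]

prompts = [q for q, _ in recorded]
vectorizer = TfidfVectorizer().fit(prompts)
prompt_vectors = vectorizer.transform(prompts)

def best_clip(asked: str) -> str:
    """Return the recorded clip whose interview prompt best matches the question."""
    scores = cosine_similarity(vectorizer.transform([asked]), prompt_vectors)[0]
    return recorded[scores.argmax()][1]

print(best_clip("Tell me about your childhood"))  # -> clip_childhood.mp4
```

Retrieval of this kind only replays what the person actually said, which is one reason it is often framed as memory preservation rather than the generative replication discussed elsewhere in this timeline.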
Generative AI tools like ChatGPT could change the nature of certain jobs, breaking them down into smaller, less-skilled roles, potentially leading to job degradation and lower pay while also creating new job opportunities. The impact of generative AI on the workforce is uncertain, but it is important for workers to advocate for better conditions and be prepared for potential changes.
"Generative" AI is being explored in various fields such as healthcare and art, but there are concerns regarding privacy and theft that need to be addressed.
Generative artificial intelligence, such as ChatGPT, is increasingly being used by students and professors in education, with some finding it helpful for tasks like outlining papers, while others are concerned about the potential for cheating and the quality of AI-generated responses.
Generative AI is a form of artificial intelligence that can create various forms of content, such as images, text, music, and virtual worlds, by learning patterns and rules from existing data, and its emergence raises ethical questions regarding authenticity, intellectual property, and job displacement.
Generative AI, such as ChatGPT, is evolving to incorporate multi-modality, fusing text, images, sounds, and more to create richer and more capable programs that can collaborate with teams and contribute to continuous learning and robotics, prompting an arms race among tech giants like Microsoft and Google.
Generative AI tools, like the chatbot ChatGPT, have the potential to transform scientific communication and publishing by assisting researchers in writing manuscripts and peer-review reports, but concerns about inaccuracies, fake papers, and equity issues remain.
AI tools like ChatGPT are becoming increasingly popular for managing and summarizing vast amounts of information, but they also have the potential to shape how we think and what information is perpetuated, raising concerns about bias and misinformation. While generative AI has the potential to revolutionize society, it is essential to develop AI literacy, encourage critical thinking, and maintain human autonomy to ensure these tools help us create the future we desire.