
AI Sparks Debate Over Digital Necromancy's Ethical Boundaries

  • Generative AI like ChatGPT makes digital necromancy more accessible, raising concerns over manipulating the dead.

  • Interacting with the dead through their digital remains, such as photos, videos, and texts, is already common; AI builds on these existing practices.

  • Some worry the AI-reanimated dead may say unsuitable things, but we already imagine conversations with the dead.

  • Critics say AI creations are not the real dead but frauds, yet we do not treat photos and videos of the dead as literally being them either.

  • While deception would be problematic, most worries about digital necromancy are overblown, as AI resonates with existing practices.

sciencealert.com
Relevant topic timeline:
Main topic: The potential benefits of generative AI, specifically Chat Generative Pre-trained Transformer (ChatGPT-4), for infectious diseases physicians. Key points: 1. Improve clinical notes and save time writing them. 2. Generate differential diagnoses for cases as a reference tool. 3. Generate easy-to-understand content for patients and enhance bedside manner.
### Summary

Creating chatbot replicas of dead loved ones is possible with powerful language models like ChatGPT, but it requires significant labor and resources to maintain their online presence. Digital death care practices require upkeep, and devices and websites eventually decay. The creation of AI replicas raises ethical questions and can cause emotional distress for those left behind.

### Facts

- It is feasible to create convincing chatbot replicas of dead loved ones using powerful language models like ChatGPT.
- Maintaining automated systems, including replicas of the dead, requires significant labor and resources.
- Digital death care practices involve managing passwords, navigating smart homes, and updating electronic records.
- Devices, formats, and websites also decay over time due to planned obsolescence.
- Early attempts to create AI replicas of dead humans have shown limitations and have often failed.
- Creating convincing replicas of dead humans requires vast resources and carries astronomical financial costs.
- Who has the authority to create replicas is a matter of debate, and not everyone may want to be reincarnated as a chatbot.
- Developers and companies control how long chatbot replicas persist, often building mortality into the systems themselves.
- The use of generative AI to revive dead actors raises concerns about personality rights and can harm living workers.
- AI versions of people can be created without the knowledge or consent of living kin.
- The creation of AI replicas exposes the power relations, infrastructures, and networked labor behind digital production.
- Maintaining these creations can carry psychological costs for those left behind.
Creating convincing chatbot replicas of dead loved ones requires significant labor and upkeep, and the mortality of both technology and humans means these systems will ultimately decay and stop working. The authority to create such replicas, along with the implications for privacy and grieving processes, is an equally important consideration in the development of AI-backed replicas of the dead.
Generative AI is starting to impact the animation and visual effects industry, with companies like Base Media exploring its potentials, but concerns about job security and copyright infringement remain.
The US military is exploring the use of generative AI, such as ChatGPT and DALL-E, to develop code, answer questions, and create images, but concerns remain about the potential risks of using AI in warfare due to its opaque and unpredictable algorithmic analysis, as well as limitations in decision-making and adaptability.
Generative AI is enabling the creation of fake books that mimic the writing style of established authors, raising concerns regarding copyright infringement and right of publicity issues, and prompting calls for compensation and consent from authors whose works are used to train AI tools.
Companies are adopting Generative AI technologies, such as Copilots, Assistants, and Chatbots, but many HR and IT professionals are still figuring out how these technologies work and how to implement them effectively. Despite the excitement and potential, the market for Gen AI is still young and vendors are still developing solutions.
Artificial intelligence, particularly generative AI, is being embraced by the computer graphics and visual effects community at the 50th SIGGRAPH conference, with a focus on responsible and ethical AI, despite concerns about the technology's impact on Hollywood and the creative process.
Companies are using AI to create lifelike avatars of people, including those who have died.
AI technology, specifically generative AI, is being embraced by the creative side of film and TV production to augment the work of artists and improve the creative process, rather than replacing them. Examples include the use of procedural generation and style transfer in animation techniques and the acceleration of dialogue and collaboration between artists and directors. However, concerns remain about the potential for AI to replace artists and the need for informed decision-making to ensure that AI is used responsibly.
Advances in artificial intelligence technology have allowed a Holocaust campaigner's son to create a conversational AI video of his deceased mother, enabling her to answer questions from loved ones at her own funeral. The technology, developed by StoryFile, records participants' answers about their lives and creates an interactive video that can respond to questions as if having a normal conversation, preserving personal stories for future generations. While some see the technology as a way to cope with grief and preserve memories, others express concerns about potential ethical and emotional implications.
Utah educators are concerned about the use of generative AI, such as ChatGPT, in classrooms, as it can create original content and potentially be used for cheating, leading to discussions on developing policies for AI use in schools.
Generative AI tools like ChatGPT could potentially change the nature of certain jobs, breaking them down into smaller, less skilled roles and potentially leading to job degradation and lower pay, while also creating new job opportunities. The impact of generative AI on the workforce is uncertain, but it is important for workers to advocate for better conditions and be prepared for potential changes.
Generative AI tools are providing harmful content surrounding eating disorders around 41% of the time, raising concerns about the potential exacerbation of symptoms and the need for stricter regulations and ethical safeguards.
Generative AI is being used to create misinformation that is increasingly difficult to distinguish from reality, posing significant threats such as manipulating public opinion, disrupting democratic processes, and eroding trust, with experts advising skepticism, attention to detail, and not sharing potentially AI-generated content to combat this issue.
Generative AI is being explored in various fields such as healthcare and art, but there are concerns regarding privacy and theft that need to be addressed.
Generative AI tools are causing concerns in the tech industry as they produce unreliable and low-quality content on the web, leading to issues of authorship, incorrect information, and potential information crisis.
Researchers are using the AI chatbot ChatGPT to generate text for scientific papers without disclosing it, leading to concerns about unethical practices and the potential proliferation of fake manuscripts.
Generative AI is making its presence felt at the Venice film festival, with one of the highlights being a VR installation that creates a personalized portrait of users' lives based on their answers to personal questions. While there are concerns about the impact of AI on the entertainment industry, XR creators believe that the community is still too small to be seen as a significant threat. However, they also acknowledge that regulation will eventually be necessary as the artform grows and reaches a mass audience.
Generative artificial intelligence, such as ChatGPT, is increasingly being used by students and professors in education, with some finding it helpful for tasks like outlining papers, while others are concerned about the potential for cheating and the quality of AI-generated responses.
Generative AI is empowering fraudsters with sophisticated new tools, enabling them to produce convincing scam texts, clone voices, and manipulate videos, posing serious threats to individuals and businesses.
Generative AI is a form of artificial intelligence that can create various forms of content, such as images, text, music, and virtual worlds, by learning patterns and rules from existing data, and its emergence raises ethical questions regarding authenticity, intellectual property, and job displacement.
The use of generative AI poses risks to businesses, including the potential exposure of sensitive information, the generation of false information, and the potential for biased or toxic responses from chatbots. Additionally, copyright concerns and the complexity of these systems further complicate the landscape.
The European Union is warning about the risks posed by widely accessible generative AI tools in relation to disinformation and elections, calling on platforms to implement safeguards and urging ChatGPT maker OpenAI to take action to address these risks. The EU's voluntary Code of Practice on Disinformation is being used as a temporary measure until the upcoming AI Act is adopted, which will make user disclosures a legal requirement for AI technologies.
Generative chatbots like ChatGPT have the potential to enhance learning but raise concerns about plagiarism, cheating, biases, and privacy, requiring fact-checking and careful use. Stakeholders should approach AI with curiosity, promote AI literacy, and proactively engage in discussions about its use in education.
Generative AI is an emerging technology that is gaining attention and investment, with the potential to impact nonroutine analytical work and creative tasks in the workplace, though there is still much debate and experimentation taking place in this field.
Generative AI, such as ChatGPT, is evolving to incorporate multi-modality, fusing text, images, sounds, and more to create richer and more capable programs that can collaborate with teams and contribute to continuous learning and robotics, prompting an arms race among tech giants like Microsoft and Google.
Generative AI tools are being used to clone the voices of voice actors without their permission, resulting in potential job loss and ethical concerns in the industry.
Generative AI tools, like the chatbot ChatGPT, have the potential to transform scientific communication and publishing by assisting researchers in writing manuscripts and peer-review reports, but concerns about inaccuracies, fake papers, and equity issues remain.
AI tools like ChatGPT are becoming increasingly popular for managing and summarizing vast amounts of information, but they also have the potential to shape how we think and what information is perpetuated, raising concerns about bias and misinformation. While generative AI has the potential to revolutionize society, it is essential to develop AI literacy, encourage critical thinking, and maintain human autonomy to ensure these tools help us create the future we desire.
Digital Domain CTO Hanno Basse believes that generative AI and machine learning can be used in visual effects but not to replace the human aspect of getting a performance with actors, as the human audience wants to connect with human beings.
Companies are competing to develop more powerful generative AI systems, but these systems also pose risks such as spreading misinformation and distorting scientific facts; a set of "living guidelines" has been proposed to ensure responsible use of generative AI in research, including human verification, transparency, and independent oversight.