Companies are using AI to create lifelike avatars of people, including those who have died.
Advances in artificial intelligence technology have allowed a Holocaust campaigner's son to create a conversational AI video of his deceased mother, enabling her to answer questions from loved ones at her own funeral. The technology, developed by StoryFile, records participants' answers about their lives and creates an interactive video that can respond to questions as if having a normal conversation, preserving personal stories for future generations. While some see the technology as a way to cope with grief and preserve memories, others express concerns about potential ethical and emotional implications.
New research finds that AI chatbots may not always provide accurate information about cancer care, with some recommendations being incorrect or too complex for patients. Despite this, AI is seen as a valuable tool that can improve over time and provide accessible medical information and care.
Researchers at the University of Texas are testing an AI chatbot designed to provide support for women experiencing postpartum depression, addressing the shortage of mental health providers and the stigma associated with the condition. The chatbot will be available through a free app and is trained to handle common postpartum questions and challenges. AI technologies, such as chatbots, have shown effectiveness in reaching mothers in rural areas and delivering cost-effective support. However, limitations include the need for consistent internet access and the importance of integrating the chatbot into existing care pathways.
Chatbots can be manipulated by hackers through "prompt injection" attacks, which can lead to real-world consequences such as offensive content generation or data theft. The National Cyber Security Centre advises designing chatbot systems with security in mind to prevent exploitation of vulnerabilities.
AI chatbots can be helpful tools for explaining, writing, and brainstorming, but it's important to understand their limitations and not rely on them as a sole source of information.
A.I. chatbots have the potential to either enable plagiarism on college applications or provide students with access to writing assistance, but their usage raises concerns about generic essays and the hindrance of critical thinking and storytelling skills.
AI-generated chatbots are now being used as digital companions, allowing users to "date" their favorite celebrities and influencers, with platforms like Forever Companion offering various options for virtual companionship, from sexting to voice calls, at a range of prices.
Artificial intelligence chatbots are being used to write field guides for identifying natural objects, raising the concern that readers may receive deadly advice, as exemplified by the case of mushroom hunting.
Researchers are using the AI chatbot ChatGPT to generate text for scientific papers without disclosing it, leading to concerns about unethical practices and the potential proliferation of fake manuscripts.
The accuracy of AI chatbots in diagnosing medical conditions may be an improvement over searching symptoms on the internet, but questions remain about how to integrate this technology into healthcare systems with appropriate safeguards and regulation.
Researchers have admitted to using a chatbot to help draft an article, leading to the retraction of the paper and raising concerns about the infiltration of generative AI in academia.
Generative AI, such as ChatGPT and image generators, has sparked debates about "digital necromancy" or bringing the dead back through their digital traces, but sociologists argue that it simply builds on existing practices of grieving, remembrance, and commemoration and doesn't fundamentally change them.