Creating convincing chatbot replicas of dead loved ones requires significant labor and upkeep, and because both the technology and the people who maintain it are mortal, these systems will ultimately decay and stop working. Who has the authority to create such replicas, and what the implications are for privacy and the grieving process, are also important considerations in the development of AI-backed replicas of the dead.
The use of AI in the entertainment industry, such as body scans and generative AI systems, raises concerns about workers' rights and intellectual property, as well as the possibility that broader adoption of AI in other industries could infringe on human connection and privacy.
Researchers at OSF HealthCare in Illinois have developed an artificial intelligence (AI) model that predicts a patient's risk of death within five to 90 days of hospital admission, with the aim of prompting timely end-of-life discussions between clinicians and patients. Tested on a dataset of more than 75,000 patients, the model showed that those it flagged as more likely to die during their hospital stay had a mortality rate three times the average. For each flagged patient, it gives clinicians a probability along with an explanation of the factors behind the elevated risk, opening the door to crucial conversations about end-of-life care.
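As a rough illustration only (the article does not describe OSF HealthCare's actual features or architecture), a mortality-risk tool of this kind typically pairs a calibrated probability with the features that most contributed to the score. The minimal sketch below uses synthetic data and a logistic regression; every feature name, value, and modeling choice here is a hypothetical stand-in.

```python
# Illustrative sketch only: a toy mortality-risk classifier that returns a
# probability plus the features driving it. Feature names, data, and model
# choice are assumptions, not OSF HealthCare's published model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical admission features: scaled age, frailty score, prior admissions.
X = rng.normal(size=(5000, 3))
# Synthetic labels: risk rises with the first two features (demonstration only).
logits = 1.5 * X[:, 0] + 1.0 * X[:, 1] - 2.0
y = rng.random(5000) < 1 / (1 + np.exp(-logits))

feature_names = ["age_scaled", "frailty_score", "prior_admissions"]
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

def explain_risk(patient):
    """Return a death-risk probability and per-feature contributions (coef * value)."""
    prob = model.predict_proba([patient])[0, 1]
    coefs = model.named_steps["logisticregression"].coef_[0]
    scaled = model.named_steps["standardscaler"].transform([patient])[0]
    contributions = sorted(
        zip(feature_names, coefs * scaled), key=lambda t: abs(t[1]), reverse=True
    )
    return prob, contributions

prob, why = explain_risk([1.2, 0.8, 0.1])
print(f"Predicted 5-90 day mortality risk: {prob:.1%}")
for name, contrib in why:
    print(f"  {name}: {contrib:+.2f}")
```

The point of the explanation step is the one the article emphasizes: clinicians receive not just a number but a short account of why the risk is elevated, which gives them something concrete to raise in an end-of-life conversation.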
Generative AI, such as ChatGPT and image generators, has sparked debates about "digital necromancy," or bringing the dead back through their digital traces, but sociologists argue that it simply builds on existing practices of grieving, remembrance, and commemoration rather than fundamentally changing them.