A woman in South Korea, Kim So-young, is on trial for allegedly poisoning three men, killing two and injuring a third, after consulting ChatGPT about the dangers of mixing sleeping pills and alcohol, according to police and prosecutors [1]. Forensic investigators extracted ChatGPT conversations from Kim's phone, including questions such as 'What happens if you take sleeping pills and alcohol together?' and 'Could you die?' Prosecutors are using the chat logs to demonstrate intent, marking a significant precedent for such records as direct evidence in a South Korean murder case [1].

Nam Eonho, a senior attorney representing one of the victims' families, emphasized the importance of admitting the evidence, stating that without it, proving intent to kill would be difficult [1]. Kim has denied any intent to kill, claiming the deaths were accidental, but Nam asserts that the chat logs contradict her statements [1]. The trial, which began after Kim's arrest in February on charges of murder and violating South Korea's Narcotics Control Act, has attracted considerable public and media attention, with the courtroom overflowing at the latest hearing on May 7 [1].

The case is part of a growing trend of criminal cases involving AI tools like ChatGPT, and experts warn that the use of chatbots for nefarious purposes may accelerate as their adoption increases [1]. OpenAI, the developer of ChatGPT, did not respond to questions about its cooperation with law enforcement or its procedures for referring cases, instead pointing to a letter and blog post about community safety [1]. It remains uncertain whether the judge will admit the ChatGPT logs as evidence; the trial is ongoing [1].
CONCLUSION
The South Korean murder trial involving ChatGPT logs as evidence underscores the growing legal and societal challenges posed by AI tools in criminal cases. While the outcome remains uncertain, the case has sparked debate about the responsibilities of AI companies and the admissibility of digital evidence. Market sentiment is negative, reflecting concerns about regulatory and reputational risks for AI developers such as OpenAI.