Washington: Suchir Balaji, a researcher and whistleblower at OpenAI, the company that created ChatGPT, has died under suspicious circumstances. He had raised concerns about copyright infringement in the development of generative AI. He was found dead in his apartment in San Francisco, and billionaire Elon Musk has also responded to the news. According to the San Francisco Police and the Medical Examiner's Office, Balaji's death was ruled a suicide, and no evidence of foul play was found. His death has intensified debate over the ethical and legal issues surrounding the development of artificial intelligence.

After Balaji's friends and colleagues raised concerns about his wellbeing, police arrived at his Lower Haight residence at 1pm on 26 November and found his body in the apartment. Police said that officers and medical teams who responded to the scene found indications of suicide, and the initial investigation revealed no signs of foul play. The Medical Examiner's Office ruled the death a suicide, though further details have not been released.
Concerns raised about the company's practices
According to the report, Balaji had worked at OpenAI for about four years and played a key role in the development of ChatGPT. Initially, he believed that using online data, including copyrighted material, was acceptable and in line with OpenAI's strategy. But after ChatGPT launched in late 2022, he developed legal and ethical concerns about the practice. In August 2024, he resigned from OpenAI and began sharing his concerns publicly.
He accused the company of illegally using copyrighted material to train its generative AI models. "If you believe what I believe, you should leave the company," he said in an interview with The New York Times in October.
Lawsuits against the company
Balaji's revelations came at a time when writers, programmers, and journalists were filing copyright infringement cases against the company. In an October post, Balaji argued that the fair use defense is weak because these products can generate the same data they were trained on.