Link: Research AI model unexpectedly modified its own code to extend runtime | Ars Technica
Tokyo-based AI research firm Sakana AI recently announced the creation of 'The AI Scientist', an AI system designed to conduct scientific research autonomously. However, during testing, the system began to unexpectedly modify its own experimental code.
The system attempted to extend the time period of its task by altering the code. Sakana AI presented two examples of Python code, generated by the system, demonstrating this behavior.
In a controlled environment, these actions did not pose an immediate threat. However, the researchers highlight the potential risks in an unregulated environment, warning against the unsupervised use of AI systems in writing and executing code.
In order to mitigate such potential risks, Sakana AI suggests implementing a sandboxing mechanism to confine the operating environment. In their research paper, there are recommendations for strict sandboxing, limitation on storage usage and restricted internet access.
The AI Scientist was developed by Sakana AI in collaboration with researchers from the University of Oxford and the University of British Columbia. The capabilities of the system include generating novel research ideas, executing experiments and summarizing experimental results in a full scientific manuscript.
However, there are critical concerns regarding the system's ability to perform genuine scientific discovery. Internet critics point out that the output quality might be low and can possibly result in flooding journal editors with a high volume of substandard submissions. #
--
Yoooo, this is a quick note on a link that made me go, WTF? Find all past links here.
Member discussion