Link: Q&A with cognitive scientist Gary Marcus on technical limitations and moral inadequacies of LLMs, constructing a regulatory framework for AI, and more (Steven Rosenbush/Wall Street Journal)

Cognitive scientist Gary Marcus advocates for strict AI regulation, proposing a dedicated U.S. agency to oversee AI responsibly. He details these ideas in his new book, which aims to mitigate the technological and moral risks associated with AI.

Marcus warns against overestimating AI's capabilities, pointing to serious misunderstandings among the public and policymakers about how these systems actually work.

He stresses the urgent need for effective guardrails, noting that current models, such as large language models (LLMs), often fail to adhere to ethical guidelines and intended operational parameters.

Reflecting on AI's rapid evolution, Marcus proposes a cabinet-level AI agency in the U.S., arguing that AI's influence on society is as significant as defense or health.

He also recommends rigorous monitoring and accountability measures, including an FDA-like approval process for new AI technologies and the authority to audit AI applications scientifically.

Overall, Marcus calls for a balanced approach that harnesses AI's potential while protecting the public interest, stressing the need for a structured governance framework to navigate the complexities of AI development and deployment.

--

Yoooo, this is a quick note on a link that made me go, WTF? Find all past links here.