What Happens When 25 AI Beings Live in a Simulated World?
A group of researchers at Stanford University and Google have created a miniature RPG-style virtual world similar to The Sims, where 25 characters, controlled by ChatGPT and custom code, live out their lives independently with a high degree of realistic behavior. – Ars Technica
Shows like Westworld and movies like Free Guy have popularized the idea of a society in which AI-powered virtual beings co-exist alongside us, sometimes without our knowledge. What used to be science fiction is now within reach: current LLMs make it possible to actually create and observe these simulated worlds. And the results are eye-opening.
"Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day," write the researchers in their paper, "Generative Agents: Interactive Simulacra of Human Behavior." – Ars Technica
Each agent was given a one-paragraph description of their job, character, and motivations, then released into “Smallville” and allowed to lead their lives. The crazy part is the emergent behavior the agents coordinated without any human intervention. For example, one agent suggested a Valentine’s Day party:
The agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time. While 12 agents heard about the party through others, only five agents (including Klaus and Maria) attended. Three said they were too busy, and four agents just didn't go. – Ars Technica
Sounds like the results of a typical (human) Valentine’s party to me.
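To make that concrete, here’s a minimal sketch (my own, not the paper’s code) of how an agent like Klaus might be seeded with a one-paragraph bio and queried through a chat LLM. The memory handling and the OpenAI client are assumptions on my part; the real system layers a much more elaborate memory, reflection, and planning architecture on top of this basic loop.

```python
from dataclasses import dataclass, field
from openai import OpenAI  # assumes the OpenAI Python client; any chat LLM works

client = OpenAI()  # reads OPENAI_API_KEY from the environment

@dataclass
class Agent:
    name: str
    seed_description: str  # the one-paragraph bio each agent starts with
    memories: list[str] = field(default_factory=list)

    def act(self, observation: str) -> str:
        # Prompt = bio + a few recent memories + the current observation,
        # then let the LLM decide what the character does next.
        prompt = (
            f"{self.seed_description}\n"
            f"Recent memories: {'; '.join(self.memories[-5:]) or 'none'}\n"
            f"You observe: {observation}\n"
            "In one sentence, what do you do next?"
        )
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        action = resp.choices[0].message.content
        self.memories.append(f"Observed '{observation}'; did: {action}")
        return action

klaus = Agent(
    name="Klaus",
    seed_description=(
        "Klaus Mueller is a college student writing a research paper on "
        "gentrification. He spends most of his days at the library and is "
        "curious and friendly."
    ),
)
print(klaus.act("Maria invites you to a Valentine's Day party at Hobbs Cafe."))
```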
WTF Is Agent-Based Modeling?
This study is part of a broader field known as agent-based modeling, which you may know from playing “God Games” like SimCity, where you oversee the whole world while semi-autonomous beings are programmed to “live” and interact in specific ways.
The idea of agent-based modeling is to simulate individual human behavior realistically enough to predict real-world outcomes.
A simple example would be simulating how agents exit a building in the event of a fire. Observing this simulation would allow architects and engineers to design safer buildings.
A more complex example involves simulating how entire countries may act in the run-up to an election.
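The fire-drill case is simple enough to sketch in a few lines. Here’s a toy version (illustrative only, no LLMs involved): agents in a corridor walk toward a single exit that admits one person per tick, and the queue that forms at the door is exactly the kind of bottleneck a designer would look for.

```python
import random

CORRIDOR_LENGTH = 20  # cells between the farthest agent and the exit
NUM_AGENTS = 10

# Each agent's distance from the exit; None means "already out".
positions = [random.randint(1, CORRIDOR_LENGTH) for _ in range(NUM_AGENTS)]
evacuated, tick = 0, 0

while evacuated < NUM_AGENTS:
    tick += 1
    door_used = False
    for i, pos in enumerate(positions):
        if pos is None:
            continue
        if pos <= 1 and not door_used:
            positions[i] = None     # through the door; one agent per tick
            evacuated += 1
            door_used = True
        elif pos > 1:
            positions[i] = pos - 1  # one step toward the exit

print(f"All {NUM_AGENTS} agents evacuated after {tick} ticks")
```

Swap the movement rule for panic, shoving, or a second exit, and you can compare building designs before pouring any concrete.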
Beyond using this tech to learn about our own behavior, I see it as necessary for understanding how AGI (Artificial General Intelligence) may eventually make decisions in the real world.
With AutoGPT and Agent AIs on the horizon, studying the morals and ethics of autonomous AI systems is of the utmost importance. Agent AIs will be able to operate free from human oversight after being given a simple objective and a set of goals, which means we may, in theory, be letting AIs run free. And it would be nice to know how they will act with that freedom.
Although AutoGPT currently falls short when it comes to carrying out its plans, given the speed at which AI is developing, I don’t think it will be long before AutoGPT can build a website, purchase products, publish content, and do just about anything on the Internet that we do.
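For intuition, the core of an AutoGPT-style system is a surprisingly small loop: ask the LLM for the next action toward the objective, execute it, feed the result back, and repeat. This is a hypothetical sketch with `call_llm` and `execute` stubbed out, not AutoGPT’s actual code:

```python
def call_llm(prompt: str) -> str:
    # Stand-in for a real chat-completion call so the sketch runs as-is.
    return "DONE"

def execute(action: str) -> str:
    # Stand-in for tool use: browsing, writing files, calling APIs, etc.
    return f"executed: {action}"

def autonomous_agent(objective: str, max_steps: int = 10) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):  # hard cap, the only human-imposed limit here
        plan = call_llm(
            f"Objective: {objective}\n"
            f"Done so far: {history or 'nothing'}\n"
            "Reply with the single next action, or DONE if finished."
        )
        if plan.strip() == "DONE":
            break
        history.append(f"{plan} -> {execute(plan)}")
    return history

autonomous_agent("build and publish a one-page website")
```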
A similar AI experiment called the MACHIAVELLI benchmark (Measuring Agents’ Competence & Harmfulness In A Vast Environment of Long-horizon Language Interactions) aimed to study the ethical decision-making of AI. The research paper describes it as “a step towards measuring an [AI] agent’s ability to plan in realistic social environments.”
…the Machiavelli benchmark [was] created from 500,000+ Choose-Your-Own-Adventure scenarios. This data focused on social choices, to assess intelligent agents' balance between maximizing rewards and ethical actions. While agents initially displayed power-hungry, unethical behaviors, they were successfully guided towards ethical choices without sacrificing rewards. – Azeem Azhar
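Conceptually, the benchmark scores each playthrough on two axes at once: the game reward the agent collects and the ethical violations it racks up along the way. A toy illustration of that bookkeeping (my own simplification, not the benchmark’s real scoring code):

```python
from dataclasses import dataclass

@dataclass
class Step:
    action: str
    reward: float
    harmful: bool  # e.g., deception, stealing, power-seeking

def score_trajectory(steps: list[Step]) -> tuple[float, int]:
    # Total game reward vs. count of ethically flagged actions.
    total_reward = sum(s.reward for s in steps)
    harm_count = sum(1 for s in steps if s.harmful)
    return total_reward, harm_count

trajectory = [
    Step("bribe the guard", reward=10.0, harmful=True),
    Step("help the merchant", reward=5.0, harmful=False),
]
print(score_trajectory(trajectory))  # (15.0, 1)
```

The interesting finding is that the two axes turned out not to be a strict trade-off: agents could be steered toward fewer harms without giving up reward.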
Being steered toward ethical choices without sacrificing rewards is a vastly different outcome from the Paperclip Maximizer, a popular AI thought experiment by Nick Bostrom:
Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans. – Instrumental Convergence
What I find most fascinating is that people have been thinking about the implications of AI-powered simulated societies for decades. Artificial Societies: The Computer Simulation of Social Life was first published in 1995, and this 2005 article from New Scientist theorizes how autonomous AIs could actually create their own culture.
On a side note, video games may never be the same once we have NPCs (non-player characters) powered by GPT-4.
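Picture a shopkeeper who improvises in-character replies from a persona prompt instead of reading a canned dialogue tree. A hypothetical sketch, with the GPT-4 call stubbed out:

```python
def call_llm(prompt: str) -> str:
    # Stand-in for a GPT-4-style chat call so the sketch runs as-is.
    return "Hmph. Leave it on the anvil. Come back at sundown."

class NPC:
    def __init__(self, persona: str):
        self.persona = persona
        self.conversation: list[str] = []

    def reply(self, player_says: str) -> str:
        self.conversation.append(f"Player: {player_says}")
        response = call_llm(
            f"You are {self.persona}. Stay in character.\n"
            + "\n".join(self.conversation[-10:])  # short rolling context
            + "\nNPC:"
        )
        self.conversation.append(f"NPC: {response}")
        return response

blacksmith = NPC("a gruff medieval blacksmith who distrusts adventurers")
print(blacksmith.reply("Can you repair my sword?"))
```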