What Will It Take to Trust AI’s Decisions Over Your Own?
Sam Altman has been touring the world to get out of the Silicon Valley bubble, to share his optimistic perspective, and (mostly) to soothe the worries of European regulators. He stands firm on the belief that the benefits of AI greatly outweigh the downsides, but we’re naturally prone to focus on the negatives.
His optimism is to be expected. “I’ve never seen convincing evidence that what we do with better tools is to work less,” he claims. However, I’m not sure that message resonates.
Trust is a major hurdle. Not necessarily trust in what these tools can do. It only takes one glimpse of AI matching, or getting close to, what you pride yourself on doing well to be impressed. The hurdle is trusting the AI’s perspective, and the output it produces, over your own.
When an algorithm recommends ways to improve business outcomes, do employees trust it? Conventional wisdom suggests that understanding the inner workings of artificial intelligence (AI) can raise confidence in such programs.
Yet, new research finds the opposite holds true.
In fact, knowing less about how an algorithm works—but following its advice based on trusting the people who designed and tested it—can lead to better decision-making and financial results for businesses, say researchers affiliated with the Laboratory for Innovation Science at Harvard (LISH). – Harvard Business School
What would it take for you to trust an AI’s decision over your own?
It’s an interesting thought prompt. But the answers to that question really are at the core of widespread AI adoption.
More than anything, we’re on the verge of a major shift in education around the world. How we receive information, internalize it, and learn to put it to use will all be ultra-personalized. We’re also going to create a new standard for what counts as proficiency or expertise in a skill, since AI augmentation will be a major part of how we operate.
I think we’re inevitably going to figure out how to teach the billions of “other people” to use and get something out of AI. That change – like getting a smartphone in everyone’s hands – will happen. More important is mitigating the people who use AI for nefarious purposes. They are the ones who will have the biggest impact on the timeline.
With open-source models, endless AI extensions, and real competition at the highest level, we’re dealing with a lot of moving pieces that can go wrong.
“People competing with each other to make better and better models is awesome,” Sam Altman said. “As long as we’re not competing in a way that would put safety at risk — if we’re competing for models while raising the bar around safety — I think it’s a good thing.”
In fact, there isn’t going to be one model that rules them all. Some models will become more specialized. Some models will be better at some tasks than others. “There are going to be a ton of models in the world. I think the trajectory we’re on is that it’s going to be a fundamental enabling of technology,” he said. – TechCrunch
Google beat OpenAI to connecting its LLM to the Internet, and Bard outperforms ChatGPT on many AI-powered search functions. Anthropic’s Claude (still in beta) can comprehend 100,000 tokens (about 75,000 words) of input without losing track of the context – a massive leap for LLMs that has people talking about Claude dethroning ChatGPT.
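For a back-of-the-envelope sense of that tokens-to-words conversion, here’s a minimal sketch using OpenAI’s open-source tiktoken tokenizer as a stand-in. Anthropic uses its own tokenizer, so the ratio is only an approximation, and the sample text here is purely illustrative:

```python
# Rough illustration of the tokens-to-words ratio mentioned above.
# tiktoken's cl100k_base encoding is OpenAI's tokenizer, not Anthropic's,
# so treat the resulting numbers as ballpark figures only.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Trust is a major hurdle for AI adoption. " * 500  # sample prose
tokens = enc.encode(text)
words = text.split()

print(f"{len(words)} words -> {len(tokens)} tokens "
      f"(~{len(words) / len(tokens):.2f} words per token)")

# English prose tends to land around 0.75 words per token, which is how a
# 100,000-token context window works out to roughly 75,000 words.
```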
At this point, it’s not just OpenAI’s game anymore. There are a lot more players on the field, some with clear advantages over OpenAI. Sam Altman is looking to be the figurehead of reason. It’s a risky position to occupy. But he seems to handle public pressure a little better than Zuckerberg did as the figurehead of the social Internet.