24 Hours with GPT-4
It’s been about 24 hours since OpenAI released GPT-4. My notes over the last few weeks have talked extensively about how multimodal large language models (MLLMs) will change human-AI interaction, and GPT-4 appears to be the first MLLM open to the public. In practice, this means the system can parse both text and image inputs (though it can still only respond via text).
Although OpenAI hasn’t clarified exactly how the model was upgraded (details such as model size and training data remain undisclosed), it’s clear from early users that it’s a substantial improvement.
Currently, only ChatGPT Plus subscribers can use GPT-4, so I haven’t had any hands-on experience with it yet; you’ll have to wait a little for my full review. Additionally, I believe a federal ruling on copyright for AI-generated work is expected tomorrow, which I’m watching closely.
Nonetheless, I wanted to share some of the amazing experiments people have already run in the 24 hours since GPT-4 was released:

- GPT-4 is acing nearly every standardized test for the professional world.
- GPT-4 analyzed a handwritten note and designed a working website based on it.
- Joshua Browder is back at it again, upgrading his DoNotPay model for automated litigation.
- GPT-4 can identify security vulnerabilities in blockchain smart contracts (a sketch of this kind of prompt follows the list).
- GPT-4 wrote a complete Snake game for someone with zero coding knowledge.
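To make the smart-contract example concrete, here’s a minimal sketch of what such an audit prompt could look like through the openai Python library (v0.27-era ChatCompletion interface). Everything here is illustrative: GPT-4 API access was still waitlisted at launch, the Vault contract is a toy example with a deliberately planted reentrancy bug, and YOUR_API_KEY is a placeholder.

```python
# Minimal sketch: asking GPT-4 to audit a Solidity contract.
# Assumes GPT-4 API access (waitlisted at launch) and the
# openai package's v0.27-era ChatCompletion interface.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

# Toy contract with a classic reentrancy bug: the external call in
# withdraw() happens before the sender's balance is zeroed out.
CONTRACT = """
pragma solidity ^0.8.0;

contract Vault {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw() external {
        uint256 amount = balances[msg.sender];
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
        balances[msg.sender] = 0;
    }
}
"""

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "You are a smart-contract security auditor. "
                       "List every vulnerability you find and suggest a fix.",
        },
        {"role": "user", "content": CONTRACT},
    ],
)

print(response.choices[0].message.content)
```

If the model is doing its job, the response should flag the reentrancy issue and suggest zeroing the balance before the external call (or using a reentrancy guard).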