Anthropic’s Claude 2, a competitor to ChatGPT, is now available to the public.
Anthropic was founded by former senior OpenAI employees Daniela and Dario Amodei. The startup purports to be a more ethically driven company that makes generative AI safe and “steerable,” according to its website. Notably, it was reported in February that Google had invested in the AI startup. Claude 2 is the second version of the company’s large language model; the original Claude was previously available only to businesses.
Unlike the original Claude, Claude 2 is available via a public-facing beta site (as well as an API). According to the announcement, Claude 2 scored 76.5 percent on the multiple choice section of the Bar exam and in the 90th percentile on the reading and writing portions of the GRE. Its coding skills have also improved over its predecessor’s: it scored 71.2 percent on a Python coding test, compared to Claude’s 56 percent.
How to access Claude 2
If you’re in the US or UK, go to the Claude 2 page on Anthropic’s website, and you can sign up for free. Just click “Talk to Claude,” and you’ll be prompted to provide an email address. After you confirm the address you enter, you’ll be ready to go.
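Developers can also reach Claude 2 through the API mentioned above. As a rough sketch only (the endpoint, the "Human:/Assistant:" prompt framing, and the "claude-2" model identifier below reflect Anthropic's early text-completions API and are assumptions, not details from this article), a request body might be assembled like this:

```python
import json

# Assumed endpoint for Anthropic's legacy completions API (not from the article):
API_URL = "https://api.anthropic.com/v1/complete"

def build_claude2_request(prompt: str, max_tokens: int = 300) -> str:
    """Assemble a JSON request body for a hypothetical Claude 2 completion call.

    The "\n\nHuman:" / "\n\nAssistant:" framing follows the prompt format
    Anthropic documented for its early Claude models.
    """
    payload = {
        "model": "claude-2",  # assumed model identifier
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    }
    return json.dumps(payload)

# Build (but don't send) a sample request body.
body = build_claude2_request("Explain the right against self-incrimination.")
print(body)
```

Actually sending the request would also require an API key from Anthropic, passed in a request header, and an HTTP client to POST the body to the endpoint.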
Is Claude 2 as good as ChatGPT?
It’s still early, but for most purposes, Claude 2 is a decent alternative to ChatGPT performance-wise. That said, it seems to be slightly less nuanced when explaining, for instance, the philosophy of the right against self-incrimination using the Socratic method (as in a test by Mashable’s Mike Pearl).
Claude 2’s response
Claude 2’s response to a prompt asking it to explain the right against self-incrimination using the Socratic method was pretty decent.
But ChatGPT’s response was more thorough and had more dialog.
Claude 2’s test scores are comparable to, albeit slightly lower than, those of GPT-4, which powers the flagship version of ChatGPT. But Anthropic seeks to set itself apart from OpenAI by being a more responsible and ethical alternative. “We have an internal red-teaming evaluation that scores our models on a large representative set of harmful prompts, using an automated test while we also regularly check the results manually,” said the announcement. This is meant to ensure that Claude 2 is less susceptible to jailbreaks or nefarious uses.