>The ask is simple: let us use your models for anything that's technically legal.
> Weapons development, intelligence collection, battlefield operations, mass surveillance of American citizens.
> OpenAI said yes.
> Google said yes.
> xAI said yes.
> Anthropic said no.
Is that accurate? Did all the labs, other than Anthropic, say yes to allowing their models to be used for weapons development and mass surveillance of Americans?
Or is the poster overlooking some nuance?
This seems like an easy Google search. After 15 seconds of googling, I see this:
https://www.wsj.com/livecoverage/stock-market-today-dow-sp-5...
I don't think the poster knows what he's talking about. He's not a reporter, and he doesn't work for any of those companies, per his profile. I believe he's doing a lot of guesswork.
Pretty sure an LLM wrote it anyway
WSJ agrees with this story
> WSJ agrees with this story
Link?
Here's the link:
https://www.wsj.com/livecoverage/stock-market-today-dow-sp-5...
Pentagon might ask contractors to certify they don't use Anthropic's Claude (wsj.com)
https://news.ycombinator.com/item?id=47057294
Ironic that it's Anthropic that is actually focused on the thing OpenAI was founded on: safety.
I read recently that they removed "safely" from their mission statement, or at least have been moving away from it for a while.
https://theconversation.com/openai-has-deleted-the-word-safe...
The Claude chatbot for the general public won't even answer questions related to military AI. It won't even answer a question like whether a list of new AI research papers contains any dual-use work that might be of concern from an AI safety viewpoint.
Anthropic has been killing it with the marketing recently. Wouldn't put it past them for this to be one more brand campaign.
Previously: https://news.ycombinator.com/item?id=47035607
100% AI slop tweet. Probably real news, but if you actually care about getting people to care, then you can't be slopping out writing like this.
I think you're right. It called the whole story into question for me until I saw the WSJ link.