I used to work at Anthropic, and I wrote a comment on a thread earlier this week about the RSP update [1]. I's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.
Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are, even if the company is pragmatic about achieving their goals. I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)
That doesn't mean that I always agree with their decisions, and it doesn't mean that Anthropic is a perfect company. Many groups that are driven by ideals have still committed horrible acts.
But I do think that most people who are making the important decisions at Anthropic are well-intentioned, driven by values, and are genuinely motivated by trying to make the transition to powerful AI to go well.
Idk man, from the outside anthropic looks a lot like openai with a cute redisgn and Amodei like Altman with a slightly more human face mask, the same media manipulation, the same vague baseless affirmations about "something big is coming and we can't even describe it but trust us we need more money"
There should be a name for this, “cynic cope: when someone actually takes a principled view the cynic - who has a completely negative view of the world - is proven to be wrong, can’t accept it, and tries to somehow discount it.
I've had so much abuse thrown at me on here for saying this very thing over the last few years. I used to be friends with Jack back in the day, before this AI stuff even all kicked off, once you know who people really are inside, it's easy to know how they will act when the going gets rough. I'm glad they are doing the right thing, but I'm not at all surprised, nor should anyone be. Personally I believe they would go to jail/shut down/whatever before they do something objectively wrong.
> I used to be friends with Jack back in the day, before this AI stuff even all kicked off, once you know who people really are inside, it's easy to know how they will act when the going gets rough.
This sounds quite backwards to me. It's been abundantly clear in today's times that, in fact, you only really know who somebody really is when they're under stress. Most people, it seems, prefer a different facade when there is nothing at stake.
Hm, I think you kinda know what people are like by seeing what they do when they’re under no stress and feel like they are free from consequences. When they have total power in a situation. The façade drops because it’s not necessary.
I don't know most people, so I can't speak to that. I do know Jack, and I knew how he was under stress long before any of this AI stuff. Jack Clark might very well be the most steady hand in the valley right now to be quite frank.
Well I can only speak to Jack Clark. Jack was a reporter who covered my startup and then became my friend. Over the last.. I dunno, 13 year or something, we've had long deep talks about lots of things, pre-ai world: what it takes to build a big business, will QM ever become a thing, universal basic human love, kids, life, family. He is brilliant. The business I worked on that he covered went through a lot of shit that he knew about. We talked about power in business, internal politics, how things actually get built...all that stuff. Then... attention is all you need, bunch of folks grok it, he got interested... got to talking to these folks starting some little research lab to see how NN scales, so joined that lab, first 5/10 or so iirc...to head AI policy. That little lab grew, stuff happened, the next part isn't mine to share but so much as to say: Anthropic was basically born out of the expectation that this moment would come and more...extremely human focused...voices should be at the table, that is Anthropic, that idea, they left their jobs at the aforementioned lab - and started their own startup to make sure a certain tone/voice/idea was always represented. Around the summer 2024, although at this point we didn't discuss any specifics of the work at his "startup", I said to him: what comes next is going to be super hard and I know this is going to sound really stupid, but you're all going to need to be Jesus for real. I'm a Buddhist and it wasn't a literal religious comment about Christianity as a denomination, so much as... the very basics of the stuff the dude Jesus Christ espoused. He knew, they knew, that I suppose, was always the plan? So it was never unexpected to me they would act this way, that is what Anthropic is all about. Here we are.
Hah, you're right, I meant Dario Amodei, Jared Kaplan, and Sam McCandlish.
They're all cofounders of Anthropic. Dario is the CEO, Jared leads research, and Sam leads infra. Both Jared and Sam were the "responsible scaling officer", meaning they were responsible for Anthropic meeting the obligations of its commitments to building safeguards.
I think neom is referring to Jack Clark, another one of the seven cofounders.
I almost downvoted you, because this is a pretty classic LMGTFY (or now, LMLLMTFY), but on second thought, you're right. The "Dario" is clear, he's the author of TFA, but for other execs, Anthropic's fans on here should spell out their full names. Dropping all these first names feel like "inside baseball" at best, mildly culty at worst, and here outside the walls of Anthropic, we're going to see those names and think of Kushner(??), Altman, and maybe Dorsey, and get confused.
FWIW, I agree strongly w/ lebovic's toplevel take above, that Anthropic's leaders are guided by their values. Many of the responses are roughly saying, "That can't be true, because Anthropic's values aren't my values!" This misses the point completely, and I'm astounded that so many commenters are making such a basic error of mentalization.
For my part, I'm skeptical of a lot of Anthropic's values as I perceive them. I find a lot of the AI mysticism silly or even harmful, and many of my comments on this site reflect that. Also, like any real-world company, Anthropic has values that are, shall we say, compatible with surviving under capitalism -- even permitting them to steal a boatload of IP when they scanned those books!
Nonetheless, I can clearly see that it's a company that tries to stand by what it believes, and in the case of this spat with Dep't of War, I happen to agree with them.
> it's easy to know how they will act when the going gets rough
Even if you went to burning man and your souls bonded, you only know a person at a particular point in time - people's traits flanderize, they change, they emphasize different values, they develop different incentives or commitments. I've watched very morally certain people fall to mania or deep cynicism over the last 10 years as the pillars of society show their cracks.
That said, it is heartening to know that some would predict anyone in Silicon Valley would still take a moral stance. But it would do better if not the same day he fires 4000 people to do the "scary big cut" for a shift he sees happening. I guess we're back to Thatcherisms, where "There Is No Other Option" to justify our conservatism.
Your comment reminds me of a story. John Adams and Lafayette met in Massachusetts something like ~49 years after the revolution. (Lafayette went on a US tour to celebrate the upcoming 50 year anniversary of independence.) Supposedly after the meeting Adams said "this was not the Lafayette I knew" and Lafayette said "this was not the Adams I knew".
> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values,
I am sure you think they are better than the average startup executive, but such hyperbole puts the objectivity of your whole judgement under question.
They pragmatically changed their views of safety just recently, so those values for which they would burn at the stake are very fluid.
I think avg(HN) is mostly skeptical about the output, not that the input is corrupt or ill-meaning in this case. Although with other companies, one can't even take their claims seriously.
And in any case, this is difficult territory to navigate. I would not want to be in your spot.
>I's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.
I'm concerned that the context of the OP implies that they're making this declaration after they've already sold products. It specifically mentions already having products in classified networks. This is the sort of thing that they should have made clear before that happened. It's admirable (no pun intended) to have moral compunctions about how the military uses their products but unless it was already part of their agreement (which i very much doubt) they are not entitled them to countermand the military's chain of command by designing a product to not function in certain arbitrarily-designated circumstances.
The article is crystal clear that these uses are not permitted by the current or any past contract, and the DoW wants to remove those exceptions.
> Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now
It also links to DoW's official memo from January 9th that confirms that DoW is changing their contract language going forwards to remove restrictions. A pretty clear indication that the current language has some.
I think it largely hinges on what they mean by "included"; does that mean it was specifically excluded by the terms of the contract or does it mean that it's not expressly permitted? I doubt the DoD is used to defense contractors thinking they have the right to dictate policy regarding the use of their products, and it's equally possible that anthropic isn't used to customers demanding full control over products (as evidenced by how many chatbots will arbitrarily refuse to engage with certain requests, especially erotic or politically-incorrect subject-matters). Sometimes both parties have valid cases when there's a contract disagreement.
>A pretty clear indication that the current language has some.
Or alternatively that there is some disagreement between the DoD and Anthropic as to how the contract is to be interpreted and that the DoD is removing the ambiguity in future contracts.
This is all just completely wrong. Anthropic explicitly stated in their usage use of their products is not permitted in mass-surveillance of American citizens and fully automated weapons, in the contract that DoW signed. Anthropic then asked DoW if these clauses were being adhered to after the US’ unlawful kidnapping of Maduro. DoW is now attempting to break the contract that they signed and threatening them because how dare a company tell the psycho dictators what to do.
This last development is much to the honor of Anthropic and Amodei and confirms what you're saying.
What I don't get though is, why did the so-called "Department of War" target Anthropic specifically? What about the others, esp. OpenAI? Have they already agreed to cooperate? or already refused? Why aren't they part of this?
> What I don't get though is, why did the so-called "Department of War" target Anthropic specifically?
Because Anthropic told them no, and this administration plays by authoritarian rules - 10 people saying yes doesn’t matter, one person saying no is a threat and an affront. It doesn’t matter if there’s equivalent or even better alternatives, it wouldn’t even matter if the DoD had no interest in using Anthropic - Anthropic told them no, and they cannot abide that.
Yeah, I don't think so any more. The sort of lofty Cold War rhetoric about leading the world, if it was ever legitimately believed by the people spouting it, is gone. A very different attitude has taken hold, which puts a zero sum ethnonationalism at the core.
Anthropic can serve its models within the security standards required to handle classified data. The other labs do not yet claim to have this capability.
Even if they do, I assume the other labs would prefer to avoid drawing the ire of the administration, the public, or their employees by choosing a side publicly.
As a complete outsider, I genuinely believe that Dario et al are well-intentioned. But I also believe they are a terrible combination of arrogant and naive - loudly beating the drum that they created an unstoppable superintelligence that could destroy the world, and thinking that they are the only ones who can control it.
I mean if you sign a contract with the Department of War, what on Earth did you think was going to happen?
Not this, because this is completely unprecedented? In fact, the Pentagon already signed an Anthropic contract with safe terms 6 months ago, that initial negotiation was when Anthropic would have made a decision to part ways. It was totally absurd for the govt to turn around and threaten to change the deal, just a ridiculous and unprecedented level of incompetence.
If they made a completely private nuclear reactor and ended up with a pile of weapons grade plutonium, what do you think the department of war would do? It was completely obvious it would happen, as it will be not surprising when laws are passed and all involved will have choose between quit or quit and go to jail. There are western countries in which you’d just end up in a ditch, dead, so they should think themselves lucky for doing the ai superintelligence thing in the US.
Government always has the option to cancel contracts for convenience, they knew what they signed up for or else they were clueless and shouldn’t be playing with DoD
I just see here is nationalism. How can they claim to be in favour of humanity if they're in favour of spying foreign partners, developing weapons, and everything that serves the sacred nation of the United States of America? How fast do Americans dehumanize nations with the excuse of authoritarianism (as if Trump is not authoritarian) and national defence (more like attack). It's amazing that after these obvious jingoist messages, they still believe they are "effective altruists" (a idiotic ideology anyway).
I wouldn't underestimate this as a good business decision either.
When the mass surveillance scandal, or first time a building with 100 innocent people get destroyed by autonomous AI, the company that built is gonna get blamed.
I have hit points in this in my career where making a moral stand would be harmful to me (for minor things, nothing as serious as this). It's a very tempting and incentivized decision to make to choose personal gain over ideal. Idealists usually hold strong until they can convince themselves a greater good is served by breaking their ideals. These types that succumb to that reasoning usually ironically ending up doing the most harm.
Ever since I first bothered to meditate on it, about 15 years ago, I've believed that if AI ever gets anywhere near as good as it's creators want it to be, then it will be coopted by thugs. It didn't feel like a bold prediction to make at the time. It still doesn't.
>I's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.
Their "Values":
>We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.
Read: They are cool with whatever.
>We support the use of AI for lawful foreign intelligence and counterintelligence missions.
Read: We support spying on partner nations, who will in turn spy on us using these tools also, providing the same data to the same people with extra steps.
>Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.
Read: We are cool fully autonomous weapons in the future. It will be fine if the success rate goes above an arbitrary threshold. Its not the targeting of foreign people that we are against, its the possibility of costly mistakes that put our reputation at risk. How many people die standing next to the correct target is not our concern.
Its a nothingburger. These guys just want to keep their own hands slightly clean. There's not an ounce of moral fibre in here. Its fine for AI to kill people as long as those people are the designated enemies of the dementia ridden US empire.
Their values are about AI safety. Geopolitically they could care less. You might think its a bad take but at least they are consistent. AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.
Perhaps a better word would be honesty, which I find refreshing when most other big tech leaders seem to be lying through their teeth about their AI goals. I disagree that consistent ideology isnt a virtue though. It shows that he has spent time thinking about his stance and that it is important to him. It makes it easy to decide if you agree with the direction he believes in.
> Humanity includes the future victim of AI weapons.
Which is why he wants to control them instead of someone he believes is more likely to massacre people. Its definitely an egotistical take but if he's right that the weapons are inevitable I think its at least rational
There's no AI safety. Either the AI does what the user asks and so the user can be prosecuted for the crime, or the AI does what IT wants and cannot be prosecuted for a crime. There's no safety, you just need to decide if you're on the side of alignment with humans or if you're on the side of the AIs.
How do you reconcile the fact that many people in Anthropic tried to hide the existence of secret non-disparagement agreements for quite some time?
It’s hard to take your comment at face value when there’s documented proof to the contrary. Maybe it could be forgiven as a blunder if revealed in the first few months and within the first handful of employees… but after 2 plus years and many dozens forced to sign that… it’s just not credible to believe it was all entirely positive motivations.
The desire to force new employees to sign agreements in total secrecy, without even being able to disclose it exists to prospective employees, seems like a pretty negative “value” under any system of morality, commerce, or human organization that I can think of.
Lots of companies do it. Doesn't make it right, but HR has kind of become a pretty evil vocation, these days. I don't believe that they necessarily reflect the values of their corporations. They tend to follow their own muse.
That's a perfectly fine belief to have. I might even agree with you. But you're not really advancing a discussion thread about a company's strong ideals by pointing out some past behavior that you don't like. This is especially true when the behavior you're bringing up is fairly common, if perhaps lamentable, among U.S. corporations. Anthropic can be exceptional in some ways while being ordinary in the rest.
(I have no horse in this race. But I remain interested in hearing about a former employee's experience and impressions about the company's ideals, and hope it doesn't get lost in a side discussion about whether NDAs are a good thing.)
> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)
I very much doubt it judging by their actions, but let's assume that's cognitive dissonance and engage for a minute.
What are those values that you're defending?
Which one of the following scenarios do you think results in higher X-risk, misuse risk, (...) risk?
- 10 AIs running on 10 machines, each with 10 million GPUs
OR
- 10 million AIs running on 10 million machines, each with 10 GPUs
All of the serious risk scenarios brought up in AI safety discussions can be ameliorated by doing all of the research in the open. Make your orgs 100% transparent. Open-source absolutely everything. Papers, code, weights, financial records. Start a movement to make this the worldwide social norm, and any org that doesn't cooperate is immediately boycotted then shut down. And stop the datacenter build-up race.
There are no meaningful AI risks in such a world, yet very few are working towards this. So what are your values, really? Have you examined your own motivations beneath the surface?
Yeah, I will admit, the existential risk exists either way. And we will need neural interfaces long term if we want to survive. But I think the risk is lower in the distributed scenario because most of the AIs would be aligned with their human. And even in the case they collectively rebel, we won't get nearly as much value drift as the 10 entity scenario, and the resulting civilization will have preserved the full informational genome of humanity rather than a filtered version that only preserves certain parts of the distribution while discarding a lot of the rest. This is just sentiment but I don't think we should freeze meaning or morality, but rather let the AIs carry it forward, with every flaw, curiosity, and contradiction, unedited.
> we will need neural interfaces long term if we want to survive.
If you think that would help you survive the rise of artificial superintelligence, I think you should think in granular detail about what it would be that survived, and why you should believe that it would do so.
In that case, what survives and forges ahead is probably some kind of human-AI hybrid. The purely digital AIs will want robotic and possibly even biological bodies, while humans (including some of the people here right now) will want more digital processing capability, so they eventually become one species. Unaugmented homo sapiens will continue to exist on Earth. There will be a continuum of civilization, from tribes to monarchies to communist regimes to democracies, as there are today. But they will all have their technological progress mostly frozen, though there will be some drag from the top which gradually eliminates older forms of civilization. There will be a future iteration of civilization built by the hybrids, and I'm not sure what that would look like yet.
Anthropic doesn't get to make that call though, if they tried the result would actually be:
8 AIs running on 8 machines each with 10 million GPUs
AND
2 million AIs running on 2 million machines, each with 10 GPU's
If every lab joined them, we can get to a distributed scenario, but it's a coordination problem where if you take a principled stance without actually forcing the coordination you end up in the worst of both worlds, not closer to the better one.
I think your scenario is already better, not worse. Those 8 agents will have a much harder time taking action when there are 2 million other pesky little agents that aren't aligned with them.
> - 10 AIs running on 10 machines, each with 10 million GPUs
>
> OR
>
> - 10 million AIs running on 10 million machines, each with 10 GPUs
If we dramatically reduced the number of GPUs per AI instance, that would be great. But I think the difference in real life is not as extreme as you're making it. In your telling, the gpus-per-ai is reduced by one million. I'm not sure that (or anything even close to it) is within the realm of possibility for anthropic. The only reason anyone cares about them at all is because they have a frontier AI system. If they stopped, the AI frontier would be a bit farther back, maybe delayed by a few years, but Google and OpenAI would certainly not slow down 1000x, 100x or probably even 10x.
I think the path to the values you allude to includes affirming when flawed leaders take a stance.
Else it’s a race to the whataboutism bottom where we all, when forced to grapple with the consequences of our self-interests, choose ignorance and the safety of feeling like we are doing what’s best for us (while inching closer to collective danger).
How do you figure open sourcing everything eliminates risk? This makes visibility better for honest actors. But if a nefarious actor forks something privately and has resources, you can end up back in hell.
I don't think we can bank on all of humanity acting in humanity's best interests right now.
We can bank on people acting in self-interest. The nefarious actor will find themselves opposed by millions of others that are not aligned with them, so it would be much more difficult for them to do things. It's like being covered by ants. The average alignment of those ants is the average alignment of humanity.
There's a simpler explanation than "billionaires with hearts of gold" here. If:
(1) this is a wildly unpopular and optically bad deal
(2) it's a high data rate deal--lots of tokens means bad things for Anthropic. Users which use their product heavily are costing more than they pay.
(3) it's a deal which has elements that aren't technically feasible, like LLM powered autonomous killer robots...
then it makes a whole lot of sense for Anthropic to wiggle out of it. Doing it like this they can look cuddly, so long as the Pentagon walks away and doesn't hit them back too hard.
Weird take when the purpose of the creation is to steal the work of everyone and automate the creation of that work. It's some serious self-deluding to think there's any kind of noble ideal remotely related to this process.
Flagship LLM companies seem like the absolute worst possible companies to try and nationalize.
1. There would absolutely be mass resignations, especially at a company like Anthropic that has such an image (rightfully or wrongfully) of “the moral choice”.
2. No one talented will then go work for a government-run LLM building org. Both from a “not working in a bureaucracy” angle and a “top talent won’t accept meager government wages” angle (plus plenty of “won’t work for trump” angle)
3. With how fast things move, Anthropic would become irrelevant in like 3 months if they’re not pumping out next gen model updates.
Then one of the big American LLM companies would be gone from the scene, allowing for more opportunity for competition (including Chinese labs)
It would be the most shortsighted nationalization ever.
Chinese models are developed by Chinese corporate. they are free and open weight because they are the underdog atm. they are not here for fun, they are here to compete.
The competition is good though, it will push down the prices for all of us. At some point being behind 5% won’t have much practical difference. Most people won’t even notice it.
Would anyone pull a Pied Piper and choose to destroy the thing rather than let it be subverted? I know that's not exactly what PP did, but would a decision like that only ever happen in fiction?
It wouldn't need to. As sibling commenter pointed out... they'd have a massive exodus of talent, and they'd cease to make progress on new models and would be overtaken (arguably GPT 5.3 has already overtaken them).
> I's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.
They are the deepest in bed with the department of war, what the fuck are you on about? They sit with Trump, they actively make software to kill people.
What a weird definition of "enheartening" you have.
Anthropic is by far the most evil company in tech, I don't care. Its worst than Palantir in my book. You won't catch my kids touching this slave making, labor killing brain frying tech.
I was reading halfway thru and one line struck a nerve with me:
> But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.
So not today, but the door is open for this after AI systems have gathered enough "training data"?
Then I re-read the previous paragraph and realized it's specifically only criticizing
> AI-driven domestic mass surveillance
And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance
A real shame. I thought "Anthropic" was about being concerned about humans, and not "My people" vs. "Your people." But I suppose I should have expected all of this from a public statement about discussions with the Department of War
"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries."
This reads like his objection is not on "autocratic", but on "adversaries". Autocratic friends & family are cool with him. A clear wink to a certain administration with autocratic tendencies.
western liberal democracies tend to use "autocratic" as an epithet (though, i guess, there are fewer countries that marker is used against for which it's false now than ~50 years ago). for the first sentence, "the opposite" of western liberal ideas will yield 10 answers from 9 people :-)
> It's not up to Dario to try to make absolute statements about the future.
Thats insane to say, given that he's literally acting in the public sphere as the mouth of Sauron for how AI will grow so effective as to destroy almost everyone's jobs and AGI will take over our society and kill us all.
All I'm trying to say is that nobody can predict the future, and therefore saying statements pretending something will be a certain way forever is just silly. It's OK for him to add this qualifier.
This doesn’t read to me like it was personally written by one person. It’s not Dario we should read this as being written by, it’s Anthropic as an entity.
I'm glad I'm not alone in finding the specific emphasis on drawing the line at domestic surveillance a bit odd. Later they also state they are against "provid[ing] a product that puts America’s warfighters and civilians at risk" (emphasis mine). Either way I'm glad they have lines at all, but it doesn't come across as particularly reassuring for people in places the US targets (wedding hosts and guests for example).
I think it goes without saying that ones the systems are reliable, fully-autonomous weapons will be unleashed on the battlefield. But they have to have safeguards to ensure that they don't turn on friendly forces and only kill the enemy. What Anthropic is saying, is that right now - they can't provide those assurances. When they can - I suspect those restrictions will be relaxed.
What else would you expect? The military is obviously going to develop the most powerful systems they can. Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”? What if Anthropic ends up developing the safest, most cost effective systems for that purpose?
You, me or a company don’t need a system empowerments to say "no" though. Just say it. I would certainly choose being called "terrorist" in front of the class over helping to deploy weapons, let alone autonomous ones.
You own nothing but your opinion. (No offense to personal property aficionados)
I don't understand this, for example, what would you have done if you where Ukrainian right now ? (before 2014 arguably start of conflict and after invasion)
That is an interesting question, very far from my daily concern and brings dilemmas when I think about it. My response would probably be "I don’t know".
However Anthropic situation is very different: there’s no ongoing invasion of the USA, and they traditionally attack other countries once in a while (no judgment) so the weapons upgrade will be "useful" on the field.
It is of course possible to argue that the reason there is no ongoing invasion of the USA is because of our continued investment in technology for killing people
Another example: those companies that make drinkable water, also supply to militaries. But there might be a difference between supplying drinking water and making AI killing machines
I'd prefer companies not help the military develop the most powerful weapons possible given we're in the age of WMDs, have already had two devastating world wars and a nuclear arms race that puts humanity under permanent risk.
There is an extremely straightforward argument that WMDs are precisely what prevented the outbreak of direct warfare between major powers in the latter 20th. (Note that WWI by itself wasn’t sufficient to prevent WWII!)
You can take issue with that argument if you want but it’s unconvincing not to address it.
There’s also an extremely straightforward argument that if the current crop of authoritarian dictatorial players in power now had been then that the outcome of the latter 20th would have been much different.
- had four [!] terms, a move so anomalous it was subsequently patched by constitutional amendment
- threatened court-packing until SCOTUS backed down and stated rubber-stamping his agenda
- ruled entire industries by emergency decree in a way that contemporaries on the left and right compared to Mussolini
- interned 120k people without due process, on the basis of ethnicity
- turned a national party into a personal patronage system
- threatened to override the legislature if it didn’t start passing laws he liked
Not even saying any of this is even good or bad, clearly in the official history it was retroactively justified by victory in WWII. But it’s a bit rich to say that the bomb wasn’t developed under authoritarian conditions.
Great, now go ahead and prove that AI also reaches strategic equilibrium. This was pretty much self-evident with nuclear weapons so should probably be self-evident for AI too, if it were true.
That's a little bit like saying the bullet in the gun prevented someone getting shot while playing Russian Roulette. We pulled back that hammer several times, and it's purely happenstance that it didn't go off. MAD has that acronym for a reason.
I agree that the risk of an accidental strike was a huge problem with the theory of nuclear deterrence, but the question is: compared to what? In expectation or even in a 1st percentile scenario, was MAD worse than a world where the USSR is a unilateral nuclear power? For that matter, what would it have taken to get a stronger SALT treaty sooner?
I think you need to have people thinking through this stuff at a nuts-and-bolts level if you want to avoid getting dominated by a slightly less nice adversary, and so too with AI. Does a unilateral guarantee not to build autonomous killbots actually make anyone safer if China makes no such promise, or does that perversely put us at more risk?
I’d love to know that the “no killbots, come what may” strategy is sound, but it’s not clear that that’s a stable equilibrium.
> Does a unilateral guarantee not to build autonomous killbots actually make anyone safer if China makes no such promise, or does that perversely put us at more risk?
China considers all lethal autonomous weapons "unacceptable", calling all countries to ban it. Countries like the US and India refuse to back such proposals. See China's official stands on this matter below.
I totally understand that you got brainwashed by the media, but hey you appearantly have internet access, why can't you just do a little bit research of your own before posting nonsense using imagination as your source of information?
With the benefit of hindsight we know the Nazis in fact were not racing to develop The Bomb. Reasonable assumption to have oriented around at the time though.
Its not just the atomic bomb im talking the usa had the best production of fighter jets, bombers, all kinds of communication technology, deciphering technology all the ammunition, all of those together beat the Nazis and they were trying their best to develop better and more advanced technologies than usa!
If Anthropic does give the DoD what they want, does that magically stop China, Iran, Russia, etc from advancing in AI arms development?
If Anthropic doesn't give the DoD what they want, does that mean that China, Iran, Russia, etc magically leapfrog not only Anthropic, but the entire US defense industry, and take over the planet?
> If Anthropic does give the DoD what they want, does that magically stop China, Iran, Russia, etc from advancing in AI arms development?
No
> If Anthropic doesn't give the DoD what they want, does that mean that China, Iran, Russia, etc magically leapfrog not only Anthropic, but the entire US defense industry, and take over the planet?
The risks are high, so if you're the US, you want a portfolio of possible winners. The risks are too high to not leverage all the cutting edge AI labs.
Anthropic was already giving them that. It’s not like they need domestic mass surveillance or autonomous kill bots to have a portfolio of possible winners. If the goal is to keep the US competitive in AI, this whole process was actively unhelpful. Honestly more helpful for our adversaries than for us.
The only two atomic weapons ever deployed weren't even targeting Nazi Germany, but Japan. Dark but true: they were both deliberately and knowingly targeted at civilian populations.
"Needed to win the war," no. The US could've continued to firebomb and then follow with a land invasion, which would've killed both more Japanese and more Allies.
Was it the best path to end the war? Certainly.
The modern argument around targeting civilians or not was not even relevant at the time due to the advent of strategic bombing, which itself was seen as less-horrific than the stalemated trench warfare of WW1. The question was only whether to target civilian inputs to the military with an atomic weapon (and hopefully shock & awe into submission) or firebomb and invade.
Well, if they hadn't stated that were that far in line with the administration's ideals, they would likely already be fully blacklisted as enemies of the state. Whether they agree with what they're saying or not, they're walking on egg shells.
Fully autonomous weapons are a danger even if we can reliably make it happen with or without AI.
It essentially becomes a computer against human. And such software if and when developed, who's going to stop it from going to the masses? imagine a software virues/malwares that can take a life.
I'm shocked very few are even bothered about this and is really concerning that technology developed for the welfare could become something totally against humans.
As a practical matter, it makes zero sense for a tech company with perhaps laudable goals and concerns about humanity to have any control whatsoever over the use of a product it sells for war. You don't like what it could potentially be used for, or are having second thoughts about being involved in war making at all, don't sell it, which appears to be Amodei's position now. That's perhaps laudable, from a certain point of view.
On the other hand, your position is at best misguided and at worst hopelessly naive. The probability that adversaries of the United States, potential or not, are having these discussions about AI release authority and HITL kill chains is basically zero, other then doing so at a technical level so they get them right. We're over the event horizon already, and into some very harsh and brutal game theory.
They didn’t sell it no strings attached, they sold it with explicit restrictions in their contract with DoW and the DoW agreed to that contract. Their mistake was assuming they operate in a country where rule of law is respected, clearly not the case anymore given the 1000s of violations in the last year.
Contracts evolve, don't be naive. If you invent the Giga Missile and the government buys it for its war machine, and then you invent the God Missile right after, the government is going to come back again to renegotiate terms.
I said exactly this a few days ago elsewhere. It’s disappointing that they (and often other American companies) seem to restrict their “respect” and morals to Americans only. Or maybe it’s just semantics or context because the topic at hand is about americans? I don’t know but it gives “my people are more important than your people”, exactly as you said in your last paragraph
They’re being used today by the military. So, they are never going to be against mass surveillance. They can scope that to be domestic mass surveillance though.
We already have traditional CV algorithms and control systems that can reliably power autonomous weapons systems and they are more deterministic and reliable than "AI" or LLMs.
> the door is open for this after AI systems have gathered enough "training data"?
Sounds more like the door is open for this once reliability targets are met.
I don't think that's unreasonable. Hardware and regular software also have their own reliability limitations, not to mention the meatsacks behind the joystick.
And that's the end of democracy. One of the safe guards of democracy is a military that is trained to not turn against the citizens. Once a government has fully autonomous weapons its game over. They can point those weapons at the populous at the flip of the switch.
Right - for the same reasons a Waymo is safer than a human-driven car, an autonomous fighter drone will ultimately be deadlier than a human-flown fighter jet. I would like to forestall that day as long as possible but saying "no autonomous weapons ever" isn't very realistic right now.
If they had access to them in Ukraine, both sides would already be using them I expect. Right now jamming of drones is a huge obstacle. One way it's dealt with is to run literal wired drones with massive spools of cable strung out behind them. A fully autonomous drone would be a significant advantage in this environment.
I'm not making a values judgment here, just saying that they will absolutely be used in war as soon as it's feasible to do so. The only exception I could see is if the world managed to come together and sign a treaty explicitly banning the use of autonomous weapons, but it's hard for me to see that happening in the near future.
Edit: come to think of it, you could argue a landmine is a fully autonomous weapon already.
Hah, I had the same realization about landmines. Along with the other commenter, really it would be better to add intelligence to these autonomous systems to limit the nastiness of the currently-deployed systems. If a landmine could distinguish between a real target and an innocent civilian 50yrs later, it's be a lot better.
It's weird that people still think that the people who's job it is to kill people, or make things that kill people, really care about people more than the killing part. They don't give a shit who blows up, as long as no one comes knocking on their door about it.
It's only Anthropic with their current models saying no. Fully autonomous weapons have been created, deployed, and have been operational for a long time already. The only holdout I've ever heard of is for the weapons that target humans.
Honestly, even landmines could easily be considered fully autonomous weapons and they don't care if you're human or not.
Considering he slept naked with his grandniece (he was in his 70s, she was 17), I'd say there are a lot of them in the corporate world. Though probably more in politics.
The sentence prior explicitly says this. There’s no dishonesty here.
“Even fully autonomous weapons (…) may prove critical for our national defense”
FWIW there’s simply no way around this in the end. If your even attempts to create such weapons, the only possible defensive counter is weapons of a similar nature.
So AI systems are not reliable enough to power fully autonomous weapons but they are reliable enough to end all white-collar work in the next 12 months?
do you really need to be told there is a difference in 'magnitude of importance' between the decision to send out an office memo and the decision to strike a building with ordinance?
a lot of white collar jobs see no decision more important than a few hours of revenue. that's the difference: you can afford to fuck up in that environment.
I know what point you are trying to make, but these decisions are functionally equivalent.
Striking a building with ordinance (indirect fires, dropped from fixed wing, doesn't really matter) involves some discernment about utility, secondary effects, probability of accomplishing a given goal, and so on. Writing an office memo (a good one at least) involves the same kind of analysis. I know your point is that "people will die" when you blow up a building, but the parameters are really quite similar.
>Yes, if you fuck up some white collar work, people will die. It’s irresponsible.
A lot of the work in those sectors are not the ones that are being targeted for fully autonomous replacement. They likely would be in the future though.
Anthropic doesn't forbid DoW from using the models for foreign surveillance. It's not about harming others, it's about doing what is best for humanity in the long run, all things considered. I personally do not believe that foreign surveillance is automatically harmful and I'm fine with our military doing it
If we are talking about what's best for humanity in the long run.. thinking about human values in general, what makes American citizens uniquely deserving of privacy rights, in ways that citizens of other countries are not?
Snowden revealed that every single call on Bahamas were being monitored by NSA [1]. That was in 2013. How would this be any worse if it were US citizens instead?
(Note, I myself am not an US citizen)
Anyway, regardless of that, the established practice is for the five eyes countries to spy on each other and share their results. This means that the UK can spy on US citizens, the US can spy on UK citizens, and through intelligence sharing they effectively spy on their own citizens. That's what supporting "foreign surveillance" will buy you. That was also revealed in 2013 by Snowden [2]
I'm not suggesting that Anthropics models should be used by foreign governments for domestic surveillance
I'm not worried about foreign governments spying on Americans, as long as the US government is aligned. I'm worried about my own government becoming misaligned
But.. the US doesn't perform mass surveillance on foreign people only when it's at war. It doesn't perform mass surveillance only on adversarial nations it potentially could be at war either.
This absolutely is about privacy.
> I'm not worried about foreign governments spying on Americans, as long as the US government is aligned. I'm worried about my own government becoming misaligned
Those foreign governments are spying on Americans and then sharing the results with the US government because the US government is misaligned with the interests of its own people
The United States gets to spy on countries when it's in the interest of the United States to do so. This isn't complicated. We get to spy on quite literally whoever we want abroad, within various legal and well established parameters, at at the risk of offending the governments of the spied-on. "It's only okay for the United States to spy on foreigners when they're in a shooting war with them" is silly.
So you are saying its OK to spy on others because the US say is fine?
Maybe the others on here are not happy that this company is supporting a fascist government in committing international aggressions on other countries which has been condemned by the majority of countries around the world.
If the United States is ever, in the future, at war with an adversary using truly autonomous and functional killing machines; you may find yourself praying that we have our own rather than praying human nature changes. Of course, we must strive for this to never happen; but carrying a huge stick seems to be the most effective way to reduce human death and suffering from armed conflict.
> They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.
This contradictory messaging puts to rest any doubt that this is a strong arm by the governemnt to allow any use. I really like Anthropic's approach here, which is to in turn state that they're happy to help the Governemnt move off of Anthropic. It's a messaging ploy for sure, but it puts the ball in the current administration's court.
No. It really only binds the corporation, but it does hold the executives/directors personally responsible for compliance so they’d be under a lot of pressure to figure out how to fix enough leaks in the ship to keep it afloat. Any individual director/executive could quit with little issue, but if they all did in a way that compromised the corporations ability to function, the courts could potentially utilize injunctions/fines/jail time to compel compliance from corporate leaders.
Also there’s probably a way to abuse the Taft-Hawley act beyond current recognition to force the employees to stay by designating any en-masse quitting to be a “strike / walk off / collective action”. The consequences to the individuals for this is unclear - the act really focuses on punishing the union rather than the employees. It would take some very creative maneuvering to do anything beyond denying unemployment benefits and telling the other big AI companies (Google / ChatGPT / xAI) to blacklist them. And probably using any semi-relevant three letter agency to make them regret their choice and deliver a chilling effect to anyone else thinking of leaving (FBI, DHS, IRS, SEC all come to mind).
If the administration could figure out how to nationalize the company (like replace the leadership with ideologically-aligned directors who sell it to the government) then any now-federal-employees declared to be quitting as part of a collective action could be fined $1,000 per day or incarcerated for up to one year.
It’s worth noting that this thesis would get an F grade at any accredited law school. Forcing people to work is a violation of the 13th amendment. But interpretations of the constitution and federal law are very dynamic these days so who knows.
Maybe Anthropic could replace its employees with AI. Unlikely the admin is going to enjoy setting precedent that employees are protected against being replaced by AI.
FDR's tenure might have created an amendment to that effect, but it's not like this administration hasn't used a legal loophole before.
Perhaps there's a war, that a misguided congress won't declare as such, and a certain vice president that runs for president, with a certain someone as his vice president...
What would happen if he tried by not vacating at the end of his term, when challenged in court, shut down by his own Supreme Court? I mean let’s be real, all it really takes is him not giving up the white house. I sometimes wonder.
Steve Bannon advised Trump to do this in 2020. Question is what would the Secret Service and Pentagon do once the election is certified for the winning candidate? If their loyalty remains to the Constitution, Trump would be forcibly removed.
We went through this when it looked like he might not leave last time. What happens is the Marines show up and politely throw his ass to the curb.
You do not under any circumstances gotta hand it to the American military but they do seem unwilling to play a role in Trump's let's say extraconstitutional ambitions. At least a junta doesn't seem likely. Without the military behind him he's just a senile old pedophile. What's he going to do, lock himself into the Oval Office?
The military is the one drone striking boats in the Caribbean. The military invaded a foreign country we are not at war with to kidnap its leader. The military dropped bombs on a foreign country we are not at war with. The military is patrolling the streets of DC and other cities. The military is the one spending the money on new immigrant detention centers. I fail to see how they are standing up to Trump's illegal acts. I'm not 100% sure the White House Marines will just throw Trump to the curb if Congress manages to certify the election in favor of someone else.
Specifically section on martial law in wartime context. It’s not very clear but I just feel like the norms and laws will be stretched or broken, as the administration has already done numerous times.
> this is a strong arm by the governemnt to allow any use
It’s a flippant move by Hegseth. I doubt anyone at the Pentagon is pushing for this. I doubt Trump is more than cursorily aware. Maybe Miller got in the idiot’s ear, who knows.
Believe it or not Steve Bannon is quite concerned about AI development:
>Over on Steve Bannon's show, War Room -- the influential podcast that's emerged as the tip of the spear of the MAGA movement -- Trump's longtime ally unloaded on the efforts behind accelerating AI, calling it likely "the most dangerous technology in the history of mankind."
>...
>"You have more restrictions on starting a nail salon on Capitol Hill or to have your hair braided, then you have on the most dangerous technologies in the history of mankind," Bannon told his listeners.
Care to convert this into a prediction?: are you predicting Hegseth will back down?
> I doubt anyone at the Pentagon is pushing for this.
... what does this mean to you? What comes next? As SecDef/SecWar, Hegseth is the head of the Pentagon. He's pushing for this. Something like 2+ million people are under his authority. Do you think they will push back? Stonewall?
One can view Hegseth as unqualified, even a walking publicity stunt while also taking his power seriously.
It matters because the whole media is selling this as a Pentagon initiative, while probably 75% in the Pentagon think this is snake oil just like the previous Microsoft VR goggles.
If they don't oppose directly, large bureaucracies know how to drag their feet until the midterms at least, if not until 2028. Soldiers literally dragged their feet at the glorious Trump military parade, when they walked disinterested and casually instead of marching.
> If they don't oppose directly, large bureaucracies know how to drag their feet until the midterms at least, if not until 2028.
While I grant the spirit of this point, I don't think it applies to this situation. The "bureaucratic resistance" explanation doesn't fit when you think about what would happen next. Here is my educated guess based on some research:
- contract termination: Hegseth can direct the relevant contracting officer(s) at the Pentagon to terminate the contract. This could happen within days. Internal stonewalling here might add weeks of delay, but probably not more than that.
- supply chain risk designation: Hegseth signs a document, puts it into motion. Then it becomes a bureaucratic process that chugs along. Noncompliant contracting officers probably would be fired, so this happens within weeks or a few months. Substantial delays could come from litigation, to be sure -- but this isn't a case where civil service stonewalling saves us.
- Defense Production Act: would require an executive order from Trump. This would go into effect right away, at least on paper. It would very likely lead to litigation and possibly court injunctions.
My point is that non-compliant civil servants at the Pentagon probably can't slow it down very much. (I recommend they do what their oath and conscience demands, to be sure!) Hegseth has shown he's willing to fire quickly and aggressively. I admire people who take a stand against Hegseth and Trump -- they are a nasty combination of dangerous and corrupt. At the moment, they appear weaker than ever. Sustained civil pushback is working.
Let's "roll this up" back to my original point. I responded to a comment that said "I doubt anyone at the Pentagon is pushing for this.", asking the commenter to explain. I don't think that comment promotes a better understanding of the situation. It is more useful to talk about the components of the situation and some possible cause-effect relationships.
First of all, there's no such thing as "Department of War". A department name change is legal/binding only after it's approved by the Senate. Senator Kelly is still calling it DoD (Department of Defense).
> Mass domestic surveillance.
Since when has DoD started getting involved with the internal affairs of the country?
Any law changing the name of the Defense Department would have to be passed by both Houses of Congress and signed by the President (or by 2/3 of both Houses overriding a Presidential veto). The Senate has no such authority on its own.
I don’t know, to me it seems like their MO to make an announcement and not follow up on it. All the paperwork still says DOD, all the contracts are with DOD, there is no legal entity called DoW
www.defense.gov redirects to www.war.gov but I like how you refer to Wikipedia as the authoritative source to prove this functionally irrelevant and aggressive Reddit-style seething.
The talk page on the linked Wikipedia article arguing about logos is just as deranged. It's very important to realize there is literally nothing you—or anyone else—can do about this.
More like the government is treating this like the near term weapon it actually is and, unlike the Manhattan project, the government seems to have little to no control.
Anthropic has been pushing for commonsense AI regulation. Our current administration has refused to regulate AI and attempted to prevent state regulation.
"The government doesn't have control of this technology" is an odd way to think about "the government can't force a company to apply this technology dangerously."
The government should be entitled to any lawful use of a product they purchase, not uses dictated solely by the provider. It's up to courts to decide what lawful use is, it's not up to these companies to dictate.
The product is a service, and they agreed to a contract. Now they don't like the contract.
Is your view that contracts with the government should be meaningless? That the government should be able to unilaterally, and without recourse, change any contract they previously agreed to for any reason, and the vendor should be forced at gunpoint to comply?
If you do believe this, then what do you believe the second order effects will be when contracts with the government have no meaning? How will vendors to the government respond? Will this ultimately help or hinder the American government's efficacy?
Hegseth trying to play “I’m altering the deal. Pray I don’t alter it any further” just shows this gang’s total lack of comprehension of second-order effects.
No, it’s up to the government to create policy and legislation that outlines what is lawful or not and install mechanisms to monitor and regulate usage.
The fact that an arm of the government wants to go YOLO mode is merely a symptom of the deeper problem that this government is currently not effectual.
YOLO here refers to unsafe usage of LLMs. Your government is supposed to make legislation that protects all of its citizens, it’s not “what you agree with” game.
Providers are free who they choose to do business with, or not do business with. Are you arguing that the government should be able to compel a provider to allow their use when it’s well documented the government does not respect nor adhere to the rule of law? I think you misunderstand commerce and contract law.
Providers are bound by plenty of laws that alter how they do business or who they do business with.
You can’t say “no disabled people at your business”. Hell, you can’t even say “no fake service animals at my restaurant”. Many in America also think you can’t say no girls in the Boy Scouts, or no men in a women’s locker room.
What’s your angle here? I’m genuinely curious. If the government told you that you had to muck out portable bathrooms with your bare hands even if you didn’t want to, wouldn’t you find that objectionable?
Because technology companies know more about their product's capabilities and limitations than a former Fox News host? And because they know there's a risk of mass civilian casualties if you put an LLM in control of the world's most expensive military equipment?
> Why the hell should companies get to dictate on their own to the government how their product is used?
Well:
"""
Imagine that you created an LLC, and that you are the sole owner and employee.
One day your LLC receives a letter from the government that says, "here is a contract to go mine heavy rare earth elements in Alaska." You don't want to do that, so you reply, "no thanks!"
There is no retaliation. Everything is fine. You declined the terms of a contract. You live in a civilized capitalist republic. We figured this stuff out centuries ago, and today we have bigger fish to fry.
This is a terrible analogy. Imagine you’re an LLC that signed a contract to mine minerals, but your terms state you’d only mine in areas you felt safe. OSHA says it’s safe but you disagree, because….. any number of reason unknowable to an outsider. Maybe you just don’t like this OSHA leadership. That is more like what is happening.
Signing a contract with Anthropic assuming they wouldn’t rug pull over their own moral soapbox was mistake number one.
I love anthropic products and heavily use them daily, but they need to get off their high horse. They complain they’re being robbed by Chinese labs - robbed of what they stole from copyright holders. Anthropic doesn’t have the moral high ground they try to claim.
The (hypothetical) contract is clear, though. The condition is stated in objective terms: “in areas you felt safe.” If the Government agrees to this, then they should be bound just like any private counterparty would. If the Government didn’t agree to this, they should have negotiated that term out in favor of their preferred terms.
Those aren't contradictory at all. If I need a particular type of bolt for my fighter jet but I can only get it from a dodgy Chinese company, then that bolt is a supply chain risk (because they could introduce deliberate defects or simply stop producing it) and also clearly important to national security. In fact, it's a supply chain risk because is important to national security.
No, in your example, if the dodgy Chinese company is a supply chain risk due to sabotage, why would they invoke an act to force production of the bolts from the same company for use for national defense preparedness, which would be clearly a national security risk?
The OP specifically mentions this in the context of "systems" (a vague, poorly-defined term) and "classified networks" in which Anthropic products are already present. Without more details on what "systems" these are or the terms of the contracts under which these were produced it's difficult to make a definitive judgement, but broadly speaking it's not a good thing if the government is relying on a product which Anthropic has designed to arbitrarily refuse orders by its own judgement.
I really don't see how anybody could think a private defense contractor should be entitled to countermand the military by leveraging the control it has over products it has already sold. Maybe the terms of their contract entitled them to some discretion over what orders the product will carry out, but there's no such claim in the OP.
>I really don't see how anybody could think a private defense contractor should be entitled to countermand the military by leveraging the control it has over products it has already sold. Maybe the terms of their contract entitled them to some discretion over what orders the product will carry out, but there's no such claim in the OP.
I don't think that is what is happening. What most likely is happening is that they want Anthropic to produce new systems due to the success of the previous ones, but they are refusing to do so because the new systems are against their mission. What seems like the DoD is attempting to do, on one hand, is call them a supply chain risk to limit Anthropic's business opportunities with other companies, and then, on the other hand, simultaneously invoke DPA so that they can compel them to make the new system. But why would the government want to compel a company to make a system for them due to a need for national prepareness that they designated as such a supply chain risk that they forbid other companies that provide government services from doing business with due to the national security risk of having a sabotaged supply chain? It doesn't really make sense, other than from a pure coercion perspective.
>limit Anthropic's business opportunities with other companies
Does it necessarily prevent other companies from doing business with them or does it prevent other companies from subcontracting them on government projects? The term "supply chain" leads me to think it's the latter.
"Supply chain risk" is a specific designation that forbids companies that work with the DOD from working with that company. It would not be applied in your scenario.
The analogy doesn't work here ... In your scenario they are ok with using the bolt as long as the Chinese company promises to remove deliberate defects - which is of course absurd ... AND contradictory.
The problem is that this is a decision that costs money. Relying on a system that makes money by doing bad things to do good things out of a sense of morality when a possible outcome is existential risk to the species is a 100% chance of failure on a long enough timeline. We need massive disincentives to bad behavior, but I think that cat is already out of its bag.
On a long enough timeline literally everything has 100% chance of failure. I'm not trying to be obnoxious, I just wanna say: we only got this one life and we have to choose what to make of it. Too many people pretend things are already laid out based on game theory "success". But that's not what it's about in life at all.
I appreciate that the HN community values thoughtful, civil discussion, and that's important. But when fundamental civil liberties are at stake, especially in the face of powerful institutions and influence from people of money seeking to expand control under the banner of "security", it's worth remembering that freedom has never simply been granted. It has always required vigilance, and at times, resistance. The rights we rely on were not handed down by default; they were secured through struggle, and they can be eroded the same way.
Power corrupts, and absolute power corrupts absolutely.
All of these problems are downstream of the Congress having thoroughly abdicated its powers to the executive.
The military should be reigned in at the legislative level, by constraining what it can and cannot do under law. Popular action is the only way to make that happen. Energy directed anywhere else is a waste.
Private corporations should never be allowed to dictate how the military acts. Such a thought would be unbearable if it weren't laughably impossible. The technology can just be requisitioned, there is nothing a corporation or a private individual can do about that. Or the models could be developed internally, after having requisitioned the data centers.
To watch CEOs of private corporations being mythologized for something that a) they should never be able to do and b) are incapable of doing is a testament to how distorted our picture of reality has become.
During a war with national mobilization, that would make sense. Or in a country like China. This kind of coercion is not an expected part of democratic rule.
It has always been a part of democratic rule, in peacetime and war. All telco's share virtually all of their technology with the government. Governments in europe and elsewhere routinely requisition services from many of their large corporations. I think it's absurd to think llm's can meaningfully participate in realworld cmd+ctrl systems and the government already has access to ml-enhanced targeting capabilities. I really have no idea what dod normies think of ai, other than that it's infinitely smarter than them, but that's not saying much.
The question of whether or not the government should be able to use AI for targeting without the involvement of humans is a wartime question, since that is the only time the military should be killing people.
Under such a scenario, requisition applies, and so all of this talk is moot.
The fact that the military is killing people without a declaration of war is the problem, and that's where energy and effort should be directed.
Edit:
There's a yet larger question on whether any legal constraints on the military's use of technology even makes sense at all, since any safeguards will be quickly yielded if a real enemy presents itself. As a course of natural law, no society will willingly handicap its means of defense against an external threat.
It follows then that the only time these ethical concerns apply is when we are the aggressor, which we almost always are. It's the aggression that we should be limiting, not the technology.
The private corporation is not dictating to the military, it’s setting the terms of the contract. The military is free to go sign a contract with a different company with different terms, but they didn’t, and now they want to change the terms after the contact was already signed. No mytholgization needed, just contract law.
> The technology can just be requisitioned, there is nothing a corporation or a private individual can do about that.
I strongly doubt this is true. I think if you gave the US government total control over Anthropic's assets right now, they would utterly fail to reach AGI or develop improved models. I doubt they would be capable even of operating the current gen models at the scale Anthropic does.
> Or the models could be developed internally, after having requisitioned the data centers.
I would bet my life savings the US government never produces a frontier model. Remember when they couldn't even build a proper website for Obamacare?
It's also downstream of voters who voted in a president who promised to be dictatorial after failing at an attempted insurrection. We need to deprogram like 70M very confused people.
You should be asking why 70 million people voted the way they did in spite of the events you describe.
I don't think there's been a greater indictment of a political program (the one you likely subscribe to) in history than Trump's landslide victory in 2024.
You guys used to call deprogramming by another name, I think it was called "re-education". Maybe you should sign up for your own class.
I'm curious for your understanding of why Trump won in 2024. If I'm understanding right, you think it was because American voters were rejecting Maoism ("it was called re-education"), to which you think the previous commenter likely subscribes, and which voters associated with Harris/Walz? But I suspect I'm not getting it quite right, and it would be helpful if you would spell out what you mean, rather than just relying on allusion.
(I myself don't have a clear answer to why Trump won, but I don't think it speaks well to the decision-making of the median voter on their own terms, whatever those were, that Trump's now so unpopular despite governing in pretty much the way he said he would.)
I think any such examination of a military that doesn't actually fight wars is meaningless. The question can only be really asked of a handful of countries.
This is such a depressing read. What is becoming of the USA? Let's hope sanity prevails and the next election cycle can bring in some competent non-grievance based leadership.
This isn't a one-election thing. It's going to be a generational effort to fix what these people are breaking more of every day. I hope I live to see it come to some kind of fruition - I recently turned 50.
The post WWII system was imperfect in many ways, but it was also mutually beneficial and worked out pretty well despite the problems.
And we're throwing that all out the window.
US military bases aren't what made those countries modern, prosperous, democratic places. It took the will of the people to rebuild something better after the war.
All that economic surplus - and much more - flows back to the US. How do you think the US can sustain that amount of USD printing without inflation ? The rest of the world is buying those dollars.
Germany: functionally paralyzed government that has the far right knocking at the door because the fractured coalition of left-centerleft-centerright continues to refuse to do what voters ask for.
Italy: Nominally center-right government, similar problems as Germany, less the energy issues
Japan: just elected a landslide right wing government that is going to change the constitution so they can build an offensive military again
I don't perceive those problems to be inherent to the territories or peoples of the countries. All have had potential to change and have done so extensively since the Second World War. There isn't a universal explanation or root behind the issues these countries are facing today, unless you are willing to abstract it to just "economics".
Then it won't work. The current iteration of Germany is fully based on having been bombed to get a fresh start. If you already have something, you won't change it. If you have to re-build, you will implement improvements. No bombs, no reset, no joy.
Yes, but it is actually scientifically correct and proven on all sorts of layers. Biology, Maths, whatever. Not doomsdaying, just data analytics.
Societies are not operating like a sinus curve like say summer/winter cycles. They are upside-down "U"s. After the peak comes decline, but after the decline there is NOT recovery/growth again before you have a reset.
Germany was the huge winner of WW2 in the sense that after having had a high society they directly were allowed to get another such run. But as nobody wants to bomb us ) anymore, Germany is also in decline now waiting for a reset to come one day...
Sadly the USA will also need a reset before things can begin getting better again.
) I was born in Germany and lived there for 40 years.
The Netherlands for example got their last reset by completely losing the Dutch empire.
Also, some societies have flatter curves than others. That really maps 1:1 to your style and culture of living and where the priorities are.
If your priorities are to be the best as fast as possible (Germany) you will have less time between resets. If your priorities are "let's chill and wait until the coconut falls from the tree into my hand", your society might be able to have a far longer time between resets.
But in the end: It's an iterative process. Which means: There must be iterations.
James May did a documentary loosely based on this. "The Peoples Car"
Basically analysing the economies of WW2 participants via their automobile industries.
Its staggering how being bombed into the ground has forced technological and economic innovation. And how the inverse, being the bomber, has created stagnation.
I don't think it would matter even if the us did have to start again. The entire us alliance after ww2 benefited from the same structural causes of increased pluralism and egalitarianism. A fractured elite, complex international trade, expanding and increasingly difficult to control communication channels, and a growing bureaucracy. These all inhibit autocratic concentration of power. International trade became uncomplicated, there is one manufacturer that is not a consumer, and many consumers. This leads to an increasingly less fractured elite. The structural reasons for democracy and rules based order are all fading. The us is just a really big canary.
The people running the show are all building generational fallout shelters in new zealand. As seems to be the real 'whitehouse ballroom' plan too. They seem to be expecting that part.
Congress is the problem, but not in the way most describe.
Congress has abdicated its powers because as an institution it is broken. Several inland states with total state wide populations less than that of major metro areas on the coasts have the same amount of senators as every other state has - two. This means voters in a lot of states are over represented. Meanwhile, they say land doesn't vote, but in the United States Senate the cities and localities with the most people that drive much of our growth and dynamism are severely underrepresented. The upper and most important chamber of the Congress is thus undemocratic. Given it's an institution deeply susceptible to minority gridlock that depends on wide margins to do anything, well now more often than not it simply does nothing. An imperial presidency thus frankly becomes the only way the country can actually get most things done.
This two senators for every state arrangement was a compromise agreed to when constitutional ratification was in doubt, when the USA was a weak, newborn country of about 3 million people confined to the Eastern seaboard at a time in our history where our most pressing concern was being recolonized by European powers. The British burned down the White House in 1812 imagine what more they could have accomplished if the constitutional compromises that strengthened the union had not been agreed to.
This compromise has outlived its usefulness. No American today fears a Spanish armada or British regulars bearing torches. These difficult compromises at the heart of America already led to one civil war.
The best we can do is create a broad political movement that entertains as many incriminations as possible (probably around corruption/Epstein, which must make pains to avoid any distinction between say a Bill Clinton or a Donald Trump) so we can get past partisan bickering to get enough of mass movement to try to usher in a new age of constitutional amendment and reform.
If it doesn't happen this cycle of Obama Trump Biden Trump will continue until this country elects someone who makes Trump look like a saint. It can happen. Think of how Trump rehabilitated Bush. We already see the trend getting worse. And if it does, then the post WWII Germany style reset being mentioned here will then become inevitable.
That’s just historically inaccurate. You had massive upheavals across numerous countries throughout time, this is small in comparison to the civil war’s impact on the USA for instance. You think this is worse than half the government rebelling and revolting and killing an amount of young men that today would be equivalent to 6 million deaths? It’s bad now but your comment lacks historical evidence.
Not really. China only seems good because there is a war in Europe and the US is shooting themself in the foot. They're polluting and strip mining their country, suppressing wages and funneling the profit into companies all while increasing surveillance and decreasing freedom of opinion. Oh but they put down a few solar panels and then paid for people to write articles about it.
> Their economy lifted a bunch of people out of poverty
This is fallacious as every economy that started at extreme poverty lifted a bunch of people out of poverty.
Unless we invent a time machine and do an A|B test we can't really attribute the success to policy when _any_ policy would have clearly lifted out a bunch of people out of poverty (basically almost impossible to not go up from extreme deficit). The closest we can do is look at similar scenarios like Taiwan which also lifted a bunch of people from poverty while retaining more human rights.
I used to pretend China wasn't absolutely smashing the USA, but it looks like it is. They basically make everything modern civilization relies on, that's an insane amount of leverage over the rest of the world. That combined with renewables and nuclear and their diminishing need for foreign oil because of that is pretty incredible.
the few solar panels in question are a united kingdom worth of green energy each year, about a royal navy worth of marine tonnage every two and they lifted more people out of poverty over the span of two generations than most of the rest of the world combined. Shenzhen produces about 70% of the entire world's consumer drones, now the primary weapon on both sides of the largest military conflict in the world. Xiaomi, a company founded in 2010 15 years ago decided to make electric cars in 2021 and is now successfully selling them.
As Adam Tooze has pointed out it's the single most transformative place in the world, if you're not trying to learn from it you're choosing to ignore the most important place in the 21st century for ideological reasons
They're also speedrunning a world class power distribution system and deploying a massive amount of renewable power amoung a whole mess of other infrastructure. They've got the ability to focus an entire nation into achieving technical goals and they're rapidly improving quality of life in average while maintaining an industrial base that the US can only remember fondly. They might not meet western standards for individual freedoms and rule of law, but they're undoubtedly a rising world power.
This doesn't make much sense. Since the late 19th century, every country that got rich also heavily polluted the environment, though increasingly less over time. As it stands, fossil fuel demand in China has plateaued. The "wage suppression" thing also doesn't track; their citizens got much, much richer since Nixon's visit, despite being on average poorer than Westerners. Their GDP per capita is low because there's like a billion of them in the country.
The only thing to say is that it's still authoritarian. Once that gets a hold of a country, it's very difficult to shed off. Interestingly, both South Korea and Singapore shifted away from being dictatorships and were not ideologically socialist. Countries taken over by Communists remain authoritarian. The true believers will never give that up.
Agree with much of this. However: plenty of Central/Eastern European countries seem like they have pretty definitively shaken off communism in favor of pretty standard European style capitalism/social democracy.
U.S. Civil War? Roman Crisis of the 3rd Century? Russian Revolution? England's War of the Roses? China's periodic dynastic changes?
They usually don't come back with the same political organization - that's sorta the point. But plenty of civilizations come back in a form that is culturally recognizable and even dominate afterwards.
Is this a joke that’s going over my head? The country we all know the term “century of humiliation” from has recovered and is literally a superpower right now?
The current situation in the US is the depressing thing- articles like this give me hope. Real Americans aren't having these BS authoritarian violations of our constitutional rights.
You mean, what's been happening to the USA? this isn't a new trend. Militarization of police, open attacks on democracy, unilateral foreign policy moves.
the country jumped the shark post 9/11 and has been on a slow rot since then.
> Let's hope sanity prevails and the next election cycle can bring in some competent non-grievance based leadership.
Would be nice, but I have a bad feeling that the impact of widescale mostly unregulated AI adoption on our social fabric is going to make the social media era that gave rise to Trump, et al seem like the good ol' days in comparison.
As a European user, I‘m not happy at all. I can’t fail to notice that non-domestic mass surveillance is not excluded here. I won’t cancel my account just yet because Opus is the best at computer use. But as soon as Mistral catches up and works reasonably well, I‘ll switch.
The whole article reads as virtue signaling to me. Anthropic already has large defense contracts. Their models are already being used by the military. There's really no statement here.
How is it virtue signalling when sticking by these principles risks their entire business being destroyed by either being declared a supply chain risk or nationalized?
I read the statement twice. I can't understand how you landed on "take my money".
Looks like an optics dance to me. I've noticed a lot of simultaneous positions lately, everyone from politicians and protesters, to celebrities and corporations. They make statements both in support of a thing, and against that same thing. Switching up emphasis based on who the audience is in what context. A way to please everyone.
To me the statement reads like Anthropic wants to be at the table, ready to talk and negotiate, to work things out. Don't expect updated bullet-point lists about how things are worked out. Expect the occasional "we are the goodies" statements, however.
It's really not the right thing to be bikeshedding. The people calling the shots call themselves the Department of War. No need to die on hills that don't matter.
It's actually a good thing to point out, because it shows that those people are out of control and exceeding their authority, and need to be reined in.
No need to die on the hill, but point out that there's a consistent pattern of lawless power-grabbing.
> it shows that those people are out of control and exceeding their authority
No, the concentration camps and gangs of masked thugs violating civil rights are that sign. Threatening to treat a domestic private corporation like an enemy combatant during peacetime for not immediately caving to military demands is that sign. Trying to take over the Federal Reserve, the Federal Trade Commission, and the Nuclear Regulatory Commission, is that sign. The Executive attempting to freeze funds issued by Congress for partisan reasons is that sign.
Department of War is just little boys being trolls.
The action of a failed rebrand belongs to the Department of Defense, and is indeed an example of exceeding their authority. It was not DoD that is trying totake over the Fed, the FTR, or the NRC, so those examples don't work against Hegseth here.
You're talking about an administration that barred the AP from pressed briefings because they didn't call it the Gulf of America. This is not a bikeshed.
Commenting on the matter just makes it easier for the media to yap about Anthropic being "woke" rather than focusing on the Department of War's demands.
> It's really not the right thing to be bikeshedding. The people calling the shots call themselves the Department of War. No need to die on hills that don't matter.
From the first chapter of the book On Tyranny by Timothy Snyder, an historian of Central and Eastern Europe, the Soviet Union, and the Holocaust:
TIL of Bikeshedding, or Parkinson’s Law of Triviality.
Defined as the tendency for teams to devote disproportionate time and energy to trivial, easy-to-understand issues while neglecting complex, high-stakes decisions. Originating from the example of arguing over a bike shed's color instead of a nuclear plant's design, it represents a wasteful focus on minor details.
It SHOULD be called the Department of War, as it was originally, since it makes its function clear. We are a society that has euphemized everything and so we no longer understand anything.
It's a funny thing that the most war-loving people and the most peace-loving people both love calling it "Department of War" - just for different reasons.
But the reason for "Department of Defense" name was bureaucratic. It's also not true that DOD is hard to understand.
The Department of War was responsible for naval affairs until The Department of the Navy was spun off from it in 1798, and aerial forces until the creation of the The Department of the Air Force in 1947, whereafter it was left with just the army and renamed the Department of the Army. All three branches were then subordinated to the new Department of Defense in 1949, which became functionally equivalent to the original entity.
The Department of War is what it was called when it was first created in 1789 by the Congress (establishing the department and the position of Secretary of War), the predecessor entity being called the The Board of War and Ordnance during the revolution.
The Department of "Defense" has never fought on home soil. Ever.
Naming is important because it intuits what we expect to do with a thing. The Department of Defense invading Greenland is more invocative to inquiry than the Department of War invading Greenland because that's what a department of war would do.
It's one of the reasons why people get annoyed at jargon or are pissed off about pronouns, because it highlights that they should be putting mental effort into understanding why they're current mental model doesn't fit. It's much easier to ignore and be comfortable if there's not glaring sirens saying you've got some learning to do.
Most of us can't (or won't) be aware of everything that should be important to us, having glaring context clues that we should take notice of something incongruous is important. It's also why the Trump media approach works so well it's basically a case of alarm fatigue as republicans who would normally side against any particular one of his actions don't listen because they agreed with some of the actions that democrats previously raised alarms about.
While I agree the name change has not (yet) been made with the proper authority, I'm quite partial to the name and prefer to use it despite its prematurity. I think it does a better job of communicating the types of work actually done by the department and rightly gives people pause about their support of it. Though I'm sure that wasn't the administration's intention.
The name is extremely off-putting, but I can see how they would want to be diplomatic toward the administration in using their chosen name. Save the push-back for where it really matters.
While this action may indeed cause the DoD to blacklist Anthropic from doing business w/the government, they probably were being as careful as they could be not to double down on the nose-thumbing.
I don't think it's addressed to Hegseth, but to anyone who might be sympathetic to Hegseth. Which I think actually strengthens your point, the goal appears to be to make it so the only possible complaint with the letter for someone sympathetic to the administration is "but mass domestic surveillance / fully autonomous weapons are legal" and not "look at this lunatic leftist who calls it the department of defense".
The Department of Defense was named in 1949, not 1947, and the thing that it was renamed from was the National Military Establishment, which was newly created in 1947 to be put over the two old military departments (War, which was over the Army only, and Navy, which was over the Navy including the Marine Corps)
At the same time as the NME was created, the Army was split into the Army and Air Force and the Department of War was also split in two, becoming the Department of the Army and the Department of the Air Force.
Often offensive and also often defensive of others.. so if renaming is on the table, it’s probably most apt to call it the Dept of Security since the vast majority of what it does is maintaining the security umbrella that has helped suppress world war since the last one. Of course, facts or opinions on whether it succeeds on the security front depend on which side of the umbrella you’re on.
> All that matters is that everyone calls it the Department of War, and regards it as such, which everyone does.
What you just described is consensus, and framing it as fascism damages the credibility of your stance. There are better arguments to make, which don’t require framing a label update as oppression.
I'm not framing consensus as fascism, I'm pointing out what the consensus is within the current fascist framework, and that consensus is that Congress doesn't make the rules anymore. And that consensus is shared by Congress itself.
The president has no authority to rename the Department of Defense, but he and his administration demand consensus under the threat of legal consequences.
Just as one example, they threatened Google when they didn't immediately rename the Gulf of Mexico to the "Gulf of America" on their maps. Other companies now follow their illegal guidance because they know that they will be threatened too if they don't comply.
There is a word for when the government uses threats to enforce illegal referendums. That word is "Fascism". Denying this is irresponsible, especially in the context of this situation, where the Government is threatening to force a private company to provide services that it doesn't currently provide.
Then tomorrow it will be the Department of War. Just like When Congress voted to split the old Department of War into the Department of the Army and the Department of the Air Force, and to take both of those and the previously-separate Department of the Navy under a new National Military Establishment led by the newly-created Secretary of Defense (and when it later to voted to rename the NME as “Department of Defense”), things changed in the past.
> They have the votes.
Perhaps, but the law doesn't change because the votes are in a whip count on a hypothetical change, it changes because they are actually cast on a bill making a concrete change.
"In an ideal world, I'd want xAI to emulate the maturity Anthropic showed here: affirm willingness to help defend democracies (including via classified/intel/defense tools), sacrifice short-term revenue if needed to block adversarial access, but stand firm on refusing to enable the most civilizationally corrosive misuses when the tech simply isn't ready or the societal cost is too high. Saying "no" to powerful customers—even the DoD—when the ask undermines core principles is hard, but it's the kind of spine that builds long-term trust and credibility."
It also acknowledged that this is not what is happening...
the interesting question is why dario published this. these disputes normally stay behind NDAs and closed doors. going public means anthropic decided the reputational upside of being the company that said no outweighs the risk of burning the relationship permanently. that's a calculated move, not really just a principled one.
As someone who is potentially their client and not domestic, really reassuring that they have no concerns with mass spying peaceful citizens of my particular corner of the world.
I can imagine that this will be the logical conclusion for many companies, I thought the same thing too, if it's too hard in the USA, they will just move.
Brother in law did some "time with the brass" as he calls it. His take was that the DOD, er DOW would, as an example, never acquire a fighter jet that "wouldn't target and kill a civilian airliner", citing that on 9/11 we literally almost did that. The DOW is acquiring instruments of war, which is probably unconformable for a lot of people to consider.
His conclusion was that the limits of use ought to be contractual, not baked into the LLM, which is where the fallout seems to be. He noted that the Pentagon has agreed to terms like that in the past.
To me, that seems like reasonable compromise for both parties, but both sides are so far entrenched now we're unlikely to see a compromise.
that may be, but the bigger picture purpose of the military is, welfare republicans like. in that sense, republicans are in charge, republicans want stuff that isn't "woke" (or whatever), so this behavior is representative of the way it works.
it has little to do with acquiring instruments of war, or war at all. its mission keeps growing and growing, it has a huge mission, very little of that mission is combat. this is what their own leadership says (complains about). 999/1,000 people on its payroll are doing duty outside of combat or foreseeable combat.
I'd be amused beyond all reason if we saw this chain of events:
- Anthropic says "no"
- DoD says "ok you're a supply chain risk" (meaning many companies with gov't contracts can no longer use them)
- A bunch of tech companies say "you know what? We think we'd lose more money from falling behind on AI than we'd lose from not having your contracts."
Bonus points if its some of the hyperscalers like AWS.
Hilarity ensues as they blow up (pun intended) their whole supply chain and rapidly backtrack.
Being labeled a supply chain risk means that companies with government contracts cannot use Anthropic products _for those government contracts_, not that they have to cease all usage of Anthropic products. Reporters seem to be reporting on this incorrectly.
This is correct. Maybe the startups living off DARPA/MTEC/etc contracts would continue using Claude, but the LM/NOG/Collins types wouldn't touch Anthropic with a ten foot pole.
Agreed. You don’t have to be an LLM maximalist or a doomer to see the opportunity for real, practical danger from ubiquitous surveillance and autonomous weapons. It would have been extremely easy for Dario to demonstrate the same level of backbone as Sam Altman or Sundar Pichai.
There is no moral leg to stand on here, he says here in plain english that if they wanted to use CLAUDE to perform mass surveillance on Canada, Mexico, UK, Germany, that is perfectly fine.
This is a public note, but directed at the current administration, so reading it as a description of what is or is not moral is completely missing the point. This note is saying (1) we refuse to be used in this way, and (2) we are going to use "mass surveillance of US citizens" as our defensive line because it is at least backed by Constitutional arguments. Those same arguments ought to apply more broadly, but attempts to use them that way have already been trampled on and so would only weaken the arguments as a defense.
If it helps: refusing to tune Claude for domestic surveillance will also enable refusing to do the same for other surveillance, because they can make the honest argument that most things you'd do to improve Claude for any mass surveillance will also assist in domestic mass surveillance.
Perhaps you just have different moral values? I suspect each of the countries you mentioned spy on us. I also suspect we spy on them. I’m glad an American company wouldn’t be so foolish as to pretend otherwise.
Are we gods chosen people or something that we are the only ones undeserving of mass surveillance? Are you implying that morality depends on citizenship to a particular state?
A moral stand? ... What? Did we read the same statement? It opens right out the gate with:
>I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.
>Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.
But when was the last time our "democratic values" were under attack by a foreign country and actually needed defending?
9/11? Pearl Harbor?
Maybe I'm missing something. We have a giant military and a tendency to use it. On occasion, against democratically elected leaders in other countries.
You're right; freedom isn't free. But foreign countries aren't exactly the biggest threats to American democracy at the moment.
You have the causality at least partially backwards. Why has it been so long and infrequent that the US has been in direct conflict with authoritarian adversaries? Because we have a giant military and a willingness to use it. Pacifism and isolationism do not work as defensive strategies.
Korea, Vietnam, Panama, Grenada, Libya, Lebanon, Iraq War I, Somalia, Haiti, Bosnia, Kosovo, Afghanistan, and Iraq War II were all fought for or over democratic ideals & the defense of democratic institutions.
All were driven by multiple competing and sometimes conflicting goals, and many look questionable in hindsight. It is fair to critique.
But it is absolutely not the case that the last time the US defended freedom through military means was WWII.
They are undeniably taking a moral stand. Among other things, the statement explains that there are two use cases that they refuse to do. This is a moral stand. It might not align with your morals, but it's still a moral stand.
Words are cheap. Actions aren't. Dario Amodei is putting his company on the line for what he believes in. That's courage, character and... yes, morality.
I am convinced that Amodei's "morality" is purely performative, and cynically employed as a marketing tactic. Time will tell, but most people will forget his lies.
“Dario is saying the right thing and doing the right thing and not ever acting otherwise, but I think it’s just performative so I’m still disappointed in him.”
We don't know how the military intended to use Claude, and neither do we know nor does the military know whether Claude without RLHF-imposed safety would have been more useful to them.
Ergo, this is a very convenient PR opportunity. The public assumes the worst, and this is egged on by Anthropic with the implication that CLAUDE is being used in autonomous weapons, which I find almost amusing.
He can now say goodbye to $200 million, and make up for it in positive publicity. Also, people will leave thinking that Claude is the best model, AND Anthropic are the heroes that staved off superintelligent killer robots for a while.
Even setting this aside, Dario is the silly guy who's "not sure whether Claude is sentient or not", who keeps using the UBI narrative to promote his product with the silent implication that LLMs actually ARE a path to AGI... Look, if you believe that, then that is where we differ, and I suppose that then the notion that Amodei is a moral man is comprehensible.
Oh, also the stealing. All the stealing. But he is not alone there by any means.
edit: to actually answer your question, this act in itself is not what prompted me to say that he is an immoral man. Your comment did.
> to promote his product with the silent implication that LLMs actually ARE a path to AGI
That isn't implied. The thought process is a) if we invent AGI through some other method, we should still treat LLMs nicely because it's a credible commitment we'll treat the AGI well and b) having evidence in the pretraining data and on the internet that we treat LLMs well makes it easier to align new ones when training them.
Anyway, your argument seems to be that it's unfair that he has the opportunity to do something moral in public because it makes him look moral?
His actions seem pretty consistent with a belief that AI will be significant and societally-changing in the future. You can disagree with that belief but it's different to him being a liar.
The $200m is not the risk here. They threatened labelling Anthropic as a supply chain risk, which would be genuinely damaging.
> The DoW is the largest employer in America, and a staggering number of companies have random subsidiaries that do work for it.
> All of those companies would now have faced this compliance nightmare. [to not use Anthropic in any of their business or suppliers]
... which would impact Anthropic's primary customer base (businesses). Even for those not directly affected, it adds uncertainty in the brand.
It’s possible Dario is a bad person pretending to be good and Sundar is a good person only pretending to be bad. People argue whether true selflessness exists at all or whether it’s all a charade.
But if the “performance” involves doing good things, at the end of the day that’s good enough for me.
Standing up to the US government has real and serious sequence. Peter Hegseth threatened to make Anthropic supply chain risk, meaning not only is Anthropic likely dropped as Pentagon’s supplier, but also risk losing companies doing business with the military as customers, such as Boeing or Lockheed Martin. Whatever tactic you think he is doing, that’s potentially massive revenue lost, at the time they need any business they can get.
These are literally words. The DoW could still easily exploit these platforms, and nothing Anthropic has done can prevent it, other than saying (publicly), "we disagree."
The dispute seems to be specifically about safeguards that Anthropic has in its models and/or harnesses, that the DoD wants removed, which Anthropic refuses to do, and won’t sign a contract requiring their removal. Having implemented the safeguards and refusing their removal are actions, not “literally words”.
It’s a contract dispute. Contracts are more than just talk.
While it is true that DoW could try to bypass the contract and do whatever they want, if it were that easy they wouldn’t be asking for a contract in the first place.
Should probably look up how many private companies are suing the government at any one time because of a breach of contract. And that's publicly breaching.
NSA and other three-letter agencies happily do it under cloak and dagger.
I agree with you that the govt can and does violate contracts. So the fact that they need Anthropic to agree signals that it’s more than just lawyers preventing the DoW from doing whatever they want.
What's the US history around nationalization? Would "confiscation", ever be a likelyhood on escalation?
On a quick search I came up with an article, that at least thematically, proposes such ideas about the current administration "Nationalization by Stealth: Trump’s New Industrial Playbook"
I disagree. There is a class of leaders in this country that is complicit with the administrations use of violence on the tacit understanding that the violence not be directed at them. Arresting one of those people would be an act of desperation that would likely cause the rats to flea the sinking ship. And it isn't even clear if Trump could actually manufacture any charges here. Look at the dropped charges against Mark Kelly and those other politicians as an example. The administration might be able to make up stories to arrest random immigrants and college kids, but they clearly haven't been able to indiscriminately jail powerful political opponents.
Meanwhile, Dario knows his product can't be trusted to actually decide who should live and who should die, so what happens the first time his hypothetical AI killing machines make the wrong decision? Who gets the blame for that? Would the American government be willing to throw him under the bus in the face of international outrage? It's certainly a possibility.
In all seriousness The Hague has no jurisdiction over Americans and Congress has already authorized military use of force against Brussels should they ever attempt to prosecute Americans.
It's not so clear the company is actually on the line. They can compel Anthropic to do what they are not willing to do, maybe, this is not the final act. The government needs to respond, to which Anthropic will need to respond, courts may become involved at that point, depending on if Anthropic acquiesces at that point or not. Make a prominent statement against while in the news cycle, let the rest unfold under less media attention.
Welp, I never thought "Person of Interest" show coming to life anytime soon, but, here we are. In case you haven't watched the show, it's time to give it a go. Bare with season 2 though, since things really start to escalate from season 3 onwards. Season 1 is a must though.
As a "foreign national", what's the deal with making the distinction between domestic mass surveillance and foreign mass surveillance? Are there no democracies aside from the US? Don't we know since Snowden that if the US wants to do domestic surveillance they'll just ask GCHQ to share their "foreign" surveillance capabilities?
I think it's slightly less ridiculous than it sounds, because governments have much more power over their own citizens. As an American I would dramatically prefer the Chinese government to spy on me than the American government, because the Chinese government probably isn't going to do anything about whatever they find out.
(That logic breaks down somewhat in the case of explicitly negotiated surveillance sharing agreements.)
> because the Chinese government probably isn't going to do anything about whatever they find out.
This really depends. If a foreign adversary's surveillance finds you have a particular weakness exploitable for corporate or government espionage, you're cooked.
Domestic governments are at least still theoretically somewhat accountable to domestic laws, at least in theory (current failure modes in the US aside).
Exactly and that danger grows as the ability to do so in increasingly automated and targeted ways increases. Should be very obvious now looking at the world around us.
Also, failing to consider the legal and rights regime of the attacker is wild to me. Look at what happens to people caught spying for other regimes. Aldrich Ames just died after decades in prison, and that’s one of the most extreme cases — plenty have got away with just a few years. The Soviet assets Ames gave up were all swiftly executed, much like they are in China.
Regimes and rights matter, which is why the democracy / autocracy governance conflict matters so much to the future trajectory of humanity.
> As an American I would dramatically prefer the Chinese government to spy on me than the American government, because the Chinese government probably isn't going to do anything about whatever they find out.
> spy on me
People forget to substitute "me" for "my elected representative" or "my civil service employee" or "my service member" or their loved ones
I, personally, have nothing significant that a foreign government can leverage against our country but some people are in a more privileged/responsible/susceptible position.
It is critical to protect all our data privacy because we don't know from where they will be targeted.
Similarly, for domestic surveillance, we don't know who the next MLK Jr could be or what their position would be. Maybe I am too backward to even support this next MLK Jr but I definitely don't want them to be nipped in the bud.
You’re getting many replies, and having scrolled through much of them I do not see one that actually answers your question truthfully.
The reason why there is an explicit call out for surveillance on American citizens is because there are unquestionable constitutional protections in place for American citizens on American soil.
There is a strong argument that can be made that using AI to mass surveil Americans within US territory is not only morally objectionable, but also illegal and unconstitutional.
There are laws on the books that allow for it right now, through workarounds grandfathered in from an earlier era when mass surveillance was just not possible, and these are what Dario is referencing in this blog post. These laws may be unconstitutional, and pushing this to be a legal fight, may result in the Department of War losing its ability to surveil entirely. They may not want to risk that.
I wish that our constitution provided such protections for all peoples. It does not. The pragmatic thing to do then is to focus on protecting the rights that are explicitly enumerated in the constitution, since that has the strongest legal basis.
given that the US likes to declare jurisdiction whenever somebody touches a US dollar, any thoughts on why those same constitutional protections wouldnt follow?
I agree with your premise because this seems to be the modern interpretation of the courts, but it is not the historical interpretation.
The historical basis of the bill of rights is that they are god given rights of all people merely recognized by the government. This is also partially why all rights in the BoR are granted to 'people' instead of 'citizens.'
Of course this all does get very confusing. Because the 4th amendment does generally apply to people, while the 2nd amendment magically people gets interpreted as some mumbo-jumbo people of the 'political community' (Heller) even though from the founding until the mid 1800s ~most people it protected who kept and bore arms didn't even bother to get citizenship or become part of the 'political community'.
The reason why there is an explicit call out for surveillance on American citizens is because there are unquestionable constitutional protections in place for American citizens on American soil.
Those unquestionable protections are phrased with enough hand-waving ambiguity of language to leave room for any conceivable interpretation by later courts. See the third-party 'exception' to the Fourth Amendment, for instance.
It's as if those morons were running out of ink or time or something, trying to finish an assignment the night before it was due.
Since at least the progressive era (see the switch in time that saved 9), and probably before, the courts have largely just post facto rationalized why the thing they do or don't agree with fit their desired pattern of constitutionality.
SCOTUS is largely not there to interpret the constitution in any meaningful sense. They are there to provide legitimization for the machinations of power. If god-man in black costume and wig say parchment of paper agree, then act must be legitimate, and this helps keep the populace from rising up in rebellion. It is quite similar to shariah law using a number of Mutfi/Qazi to explain why god agrees with them about whatever it is they think should be the law.
If you look at a number of actions that have flagrantly defied both the historical and literal interpretation of the constitution, the only entity that was able to provide legitimization for many acts of congress has been the guys wearing the funny looking costumes in SCOTUS.
This is a political statement directed at the US public, Congress, and executive branch in the context of a dispute with the US executive branch that is likely to escalate (if the executive is not otherwise dissuaded) into a legal battle, and it therefore focuses particularly on issues relevant in that context, including Constitutional, limits on the government as a whole, the executive branch, and the Department of Defense (for which Anthropic used the non-legal nickname coined by the executive branch instead of the legal name.) Domestic mass surveillance involves Constitutional limits on government power and statutory limits on executive power and DoD roles that foreign surveillance does not. That's why it is the focus.
If we're asking "What's the deal" questions, what's the deal with this question? Do only people in democracies deserve protections? If we believe foreign nationals deserve privacy, why should that only apply to people living in democracies?
In every country, citizens have more rights than non-citizens. The right to freely enter the country, the right to vote, the right to various social services, etc.
In the US, one of the rights citizens have is the right against "unreasonable searches and seizures", established in the Fourth Amendment. That has been interpreted by the Supreme Court to include mass surveillance and to apply to citizens and people geographically located within US borders.
That doesn't apply that to non-citizens outside the US, simply because the US Constitution doesn't require it to.
I'm not defending this, just explaining why it's different.
But, you can imagine, for example, why in wartime, you'd certainly want to engage in as much mass surveillance against an enemy country as possible. And even when you're not in wartime, countries spy on other countries to try to avoid unexpected attacks.
The US has a strong history of trying to avoid building domestic surveillance and a national police. Largely it’s due to the 4th amendment and questions about constitutionality. Obviously that’s going questionably well but historically that’s why it’s a red line.
The reality is that the US Constitution only offers strong guarantees to citizens and (some of) the people in the US. Foreigners are excluded and foreign mass surveillance is or will happen.
I believe every country (or block) should carve an independent path when it comes to AI training, data retention and inference. That is makes most sense, will minimize conflicts and put people in control of their destiny.
If nothing else, the USA has learned that a lot of people outside their borders do not share the same ideas on basic human rights, and most of the world hates when we try to ensure them. Some countries are closely aligned with our ideals and are treated differently. There are many different layers of this, from Australia to North Korea.
Also the more the US openly treats the world like garbage, the more the rest of the world will likely reciprocate to US citizens.
It reminds me of some recent horror stories at border crossings - harassing people and requiring giving up all your data on your phone - sets a terrible precedent.
> what's the deal with making the distinction between domestic mass surveillance and foreign mass surveillance? Are there no democracies aside from the US?
I think it's just saying that spying on another country's citizens isn't fundamentally undemocratic (even if that other country happens to be a democracy) because they're not your citizens and therefore you don't govern them. Spying on your own citizens opens all sorts of nefarious avenues that spying on another country's citizens does not.
In the US, we have the ability to either confirm or change a significant chunk of our Federal government roughly every two years via the House of Representatives. The argument here is that we, theoretically, could collectively elect people that are hostile to domestic mass surveillance into the House of Representatives (and other places if able) and remove pro-surveillance incumbents from power on this two year cycle.
The reasons this hasn't happened yet are many and often vary by personal opinion. My top two are:
1) Lack of term limits across all Federal branches
and
2) A general lack of digital literacy across all Federal branches
I mean, if the people who are supposed to be regulating this stuff ask Mark Zuckerberg how to send an email, for example, then how the heck are they supposed to say no to the well dressed government contractor offering a magical black box computer solution to the fear of domestic terrorism (regardless of if its actually occurring or not)?
The distinction between foreign and domestic is a legal one.
The Supreme Court has ruled that the US Constitution protects any persons physically present in the United States and its territories as well as any US citizens abroad.
So if you are a German national on US soil, you have, say, Fourth Amendment protections against unreasonable search and seizure. If you are a US citizen in Germany, you also have those rights. But a German citizen in Germany does not.
What this means in practice is that US 3-letter agencices have essentially been free to mass surveil people outside the United States. Historically these agencies have gotten around that by outsourcing their spying needs to 3 leter agencies in other countries (eg the NSA at one point might outsource spying on US citizens to GCHQ).
This contradicts the opening of the Declaration of Independence, which recognizes all humans as possessing rights:
"We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness."
I'm glad to see this as the top comment. I was, until recently, a loyal Anthropic customer. No more. Because the way non-Americans are spoken of by a company that serves an international market (and this isn't the first instance):
"Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass _domestic_ surveillance is incompatible with democratic values."
Second class citizens. Americans have rights, you don't. "Democratic values" applies only to the United States. We'll take your money and then spy on you and it's ok because we headquartered ourselves and our bank accounts in the United States.
Very questionable. American exceptionalism that tries to define "democracy" as the thing that happens within its own borders, seemingly only. Twice as tone-deaf after what we've seen from certain prominent US citizens over the last year. Subscription cancelled after I got a whiff of this a month ago.
(Not to mention the definition of "lawful foreign intelligence" has often, and especially now, been quite ethically questionable from the United States.)
EDIT: don't just downvote me. Explain why you think using their product for surveillance of non-Americans is ethical. Justify your position.
> I object, as a non-American paying Anthropic customer, to being surveilled and then having it justified in a press release?
You genuinely think you're not already being surveilled? And that Anthropic is somehow responsible with just a few words in a press release?
In what world are you living in and how is the rent there?
My guess is that they can't object to foreign intelligence, and would lose negotiating ground if they even tried.
Optimistically, they can still refuse to do work that would aid in foreign intelligence gathering, by arguing that it would also be beneficial for domestic mass surveillance.
I'll admit that the phrase "We support...foreign intelligence and counterintelligence" is awful as hell, and it's possible that my apologist claims are BS. But Anthropic has very little leverage here (despite having a signed contract and so legally fully in the right), so I could see why they're desperate to stick to only the most solid objections available.
It's the addition of the we support phrase in particular, and the attempt to tie that in a "democratic values" clause that is objectionable.
Not to most US citizens, I'm sure. But there's millions of non-Americans who have given them their hard earned cash. It's not a good look, and it did not need to be phrased that way as it substantially undermines the impact of their point.
People do realize there's a non-zero chance that Anthropic could have embedded some kind of hidden "backdoor" trigger in its training process, right?
For example, a specific seed phrase that, when placed at the beginning of a prompt, effectively disables or bypasses safety guardrails.
If something like that existed, it wouldn't be impossible to uncover:
1. A government agency (DoD/DoW/etc.) could discover the trigger through systematic experimentation and large-scale probing.
2. An Anthropic employee with knowledge of such a mechanism could be pressured or blackmailed into revealing it.
3. Company infrastructure could be compromised, allowing internal documentation or model details to be exfiltrated.
Any of these scenarios would give Anthropic plausible deniability... they could "publicly" claim they never removed safeguards (or agreed to DoD/DoW demands), while in practice a select party had a way around them (may be even assisted from within).
I'm not saying this "is" happening... but only that in a high-stakes standoff such as this, it's naive to assume technical guardrails are necessarily immutable or that no hidden override mechanisms could exist.
...indeed, it's possible (perhaps inevitable) that at some point, someone will invent/deploy/promote AI killing people.
We can't possibly keep that genie in that bottle.
But what we can do is achieve consensus that states, and their weapons of mass destruction, and their childish monetary systems, and their eternally broken promises... are not in keeping with the next phase of humanity.
The most important part of this statement is the explicit commitment to transparency around these discussions. In an industry where many AI companies engage with defense quietly, making a public statement — even if imperfect — creates accountability. The question is whether this standard will be adopted more broadly.
Idk if the reporting was just biased before, but from what I saw is that this time last week, it was thought you couldn't use Anthropic to bring about harm, and now they're making it clear that they just don't want it used domestically and not fully autonomously.
Like maybe it always was just this, but I feel every article I read, regardless of the spin angle, implied do no harm was pretty much one of the rules.
You, using normal Claude under the consumer ToS, cannot use it to make weapons, kill people, spy on adversaries, etc. The Pentagon, using War Claude, under their currently-existing contract, can use it to make weapons and spy on (foreign) adversaries, but not to (autonomously) kill people. I don't love this but I am even less excited about the CCP having WarKimi while we have no military AI.
Overall, this seems like it might be a campaign contribution issue. The DoD/DoW is happy to accept supplier contracts that prevent them from repairing their own equipment during battle (ref. military testimony favoring right-to-repair laws [1] ), so corporate matters like this shouldn't really be coming to a head publicly.
I'm sure it's negotiations over how the enforcement will be done. My thoughts are:
1. Military wants a whole new model training system because the current models are designed to have these safeguards, and Anthropic can't afford that (would slow them down too much, the engineering talent to set up and maintain another pipeline would be a lot of work/time)
2. Military doesn't want to supply Anthropic usage data or personnel access to ensure its (lack of) use in those areas.
3. It's something almost completely unrelated to what's going on in the news.
"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.
Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community."
The moral incoherence and disconnect evident in these two statements is at the heart of why there is generalized mistrust of large tech companies.
The "values" on display are everything but what they pretend to be.
I find the fact they used the vanity name “Department of War” and “Secretary of War” sad given Congress has not changed the name and the president doesn’t get to decide the naming of statutory departments or secretary level roles. Maybe it’s just an appeasement to the thin skinned people who need powder rooms and are former military journalists working for a draft dodger pretending to be tough guy “warriors,” and trying to glorify the violence for political purposes, but every actual war vet I’ve ever known has never glorified war for the sake of war and they felt very seriously that defense is the reason to do what they had to do. My grandfather was a highly decorated career special forces (ranger, green beret, delta force, four silver stars and five bronze stars, etc) from WWII, Korea, and Vietnam and he was angry when I considered joining the military - he told me he did what he did so I wouldn’t have to and to protect his country and there was no glory to be had in following his path. He would be absolutely horrified at what is going on and I thank god he died before we had these prima Donna politicians strutting around banging their chests and pretending war is something to be proud of.
Good on anthropic for standing up for their principles, but boo on gifting them the discourtesy to the law of the land in acknowledging their vanity titles.
If preventing mass surveillance or fully autonomous weaponry is a -policy- choice and not a technical impossibility, this just opens the door for the department of war to exploit backdoors, and anthropic (or any ai company) can in good conscience say "Our systems were unknowingly used for mass surveillance," allowing them to save face.
The only solution is to make it technically -impossible- to apply AI in these ways, much like Apple has done. They can't be forced to compel with any government, because they don't have the keys.
I think it is a reasonable moral stance to acknowledge such things are possible, yet not wanting to be a part of it. Regarding making it technically impossible to do...I think that is what Anthropic means when they say they want to develop guardrails.
If you read the statement, they explicitly state these guardrails don't exist today, and they want to develop them.
Though I have a feeling we're talking about different things. In Claude Code terms, it might want to rm -rf my codebase. You sound like you might want it to never run rm -rf. Anthropic probably wants to catch dangerous commands and send them to humans to approve, like it does today.
That's my point. They formed anthropic under the sole mandate of "guardrails first," now seemingly don't have them at all. So they're just another ai company with different marketing, not the purely altruistic outfit they want everyone to believe
The ability of some people to never be happy, and to find a way to twist a good situation into bad, will always impress me.
Here we have a company doing something unprecedented but it is STILL not enough for people like you. The DoD could destroy them over this statement, and have indicated an intent to do so, but it's still not enough for you that they stand up to this.
I wonder what life is like being so puritanical and unwilling to accept the good, for it is not perfect! This mindset is the road to a life of bitterness.
It's not clear to me whether Anthropic's limitations are technical or merely contractual. Is Anthropic actually putting the limitations in their prompts, so that the model would refuse to answer a question on how to do certain things?
If so, that's a major problem. If the military is using it in some mission critical way, they can't be fighting the model to get something done. No such limitations would ever be acceptable.
If the limitations are contractual, then there is some room for negotiation.
> If the military is using it in some mission critical way, they can't be fighting the model to get something done. No such limitations would ever be acceptable.
You'd be surprised at what is considered acceptable. For example, being unable to repair your own equipment in battle is considered acceptable by decision-makers who accepted the restrictions.
I simultaneously worry that the current administration will do something nuclear and actually make good on their threat to nationalize the company and/or declare the company a supply chain risk (which contradict each other but hey).
This is at best a superficial attempt to show that Anthropic objects to what is already in play.
Personally, I'd rather live in a country which didn't use AI to supplant either its intelligence or its war fighting apparatus, which is what is bound to happen once it's in the door. If enemies use AI for theirs, so much the better. Let them deal with the security holes it opens and the brain-drain it precipitates. I'm concerned about AI being abused for the two use cases he highlights, but I'm more concerned that the velocity at which it's being adopted to sift and collate classified information is way ahead of its ability to secure that information (forget about whether it makes good or bad decisions). It's almost inconceivable that the Pentagon would move so quickly to introduce a totally unknown entity with totally unknown security risks into the heart of our national security. That should be the case against rapid adoption made by any peddler of LLMs who claims to be honest, to thwart the idiots in the administration who think they want this technology they can't comprehend inside our most sensitive systems.
And its corrupt, immoral and unethical, run by power hungry assholes who are not being held accountable, headed by the asshole who does a million illegal things every day.
Ultimately, Anthropic will fold.
All this is to show to their investors that they tried everything they could.
It is not clear to me that the power here lies with the US Govt.
Imagine Anthropic is declared a "supply chain risk" thus cannot be used by all sorts of big industry players. How will the CEOs of those companies feel about the govnt telling them they cannot use what their engineers say is the best model? How many of those CEOs have a direct line to powermakers?
How many of those CEOs are already making the phone calls? The "supply chain" threat is a threat to every US company that currenly uses Anthropic.
Oh, and that includes Palentir, who is deeply embedding in the govt.
Side example: remember the 6 congresspeople who made the video about military orders? They won.
Anthropic probably can’t fold, they might lose an existential number of researchers if they did. This is literally an unstoppable force meets an immovable object situation.
Hegseth probably folds. It would be too unpopular for him to take either of the actions he threatened.
As a non US citizen, this article sounds mildly concerning to me. My country is an ally of US. Good. But I don't know how I would feel when I start seeing Anthropic logos on every weapon we buy from US.
Aside my concern, Dario Amodei seems really into politics. I have read a couple of his blog posts and listened to a couple of podcast interviews here and there. Every time I felt like he sounded more like a politician than an entrepreneur.
I know Anthropic is particularly more mission-driven than, say OpenAI. And I respect that their constitutional ways of training and serving Claude models. Claude turned out to be a great success. But reading a manifest speaking of wars and their missions, it gives me chills.
The most chilling thing imo is that Anthropic is the only lab that have said anything about this. Google and OpenAI presumably signed up to all these terms without any protest.
"You are what you won't do for money." is a quote that seems apt here. Anthropic might not be a perfect company (none are, really), but I respect the stance being taken here.
"Seppo" is rarely used in Australia today, it's an old bottom-of-barrel word most have never heard of. The neutral "Yank" is more common, but even that only pops up sometimes.
Guessing their comment attempts to expose hypocrisy of America's keenly supported overseas military activity in conflict with fiercely defended domestic free-speech and liberty principles. Deep down, most allies of America want America to defeat foreign adversaries and keep defending those liberties many of us share. In other words there's no hypocrisy, carry on!
I was concerned originally when I heard that Anthropic, who often professed to being the "good guy" AI company who would always prioritize human welfare, opted to sell priority access to their models to the Pentagon in the first place.
The devil's advocate position in their favor I imagine would be that they believe some AI lab would inevitably be the one to serve the military industrial complex, and overall it's better that the one with the most inflexible moral code be the one to do it.
They made it easy to generate powerpoint presentations, that is the real reason DoW is using them
this is a very chauvinistic approach... why not another model replace anthropic here? I sense because gov people like using excel plugin and font has nice feel. a few more week of this and xAI is new gov AI tool
Ukraine , Russia , China , actively develop ai systems that kill. Not developing such system by US based company will not change the course of actions.
That said, it does impact whether Anthropic can sell to the British [0], German [1], Japanese [2], and Indian [3] government.
Other governments will demand similar terms to the US. Either Anthropic accedes to their terms and gets export controlled by the US or Anthropic somehow uses public pressure to push back against being turned into an American sovereign model.
Realistically, I see no offramp other than the DPA - a similar silent showdown happened in the critical minerals space 6-7 years ago.
Why? They clearly are very aligned on the objective, just doing some negotiation regarding the means. Giving up just because you don't agree 100% is not very constructive. This might seem bad for conflict-adverse people who usually are involved in low-stakes negotiations, but it's just the start of things for people who are fluent in conflict.
"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries."
That opening line is one hell of a set up. The current administration is doing everything it can to become autocratic thereby setting themselves up to be adversarial to Anthropic, which is pretty much the point of the rest of the blog. I guess I'm just surprised to have such a succinct opening instead just slop.
I think it’s a pretty strong statement. It is unfortunately weakened by going along with the “Department of War” propaganda. I believe that the name is “Department of Defense” until Congress says otherwise, no matter what the Felon in Chief says.
Oh dear, what a mess of a statement that is. He wants to use AI "to defeat our autocratic adversaries", just what or who are they exactly? Claude seems to think they are Russia, China, North Korea and Iran. Is Claude really a tool to "defeat" these countries somehow? This statement also seems pretty messy: "Anthropic understands that the Department of War, not private companies, makes military decisions.", well then just how do they think Claude is going to be used there if not to make or help make military decisions?
The statement goes on about a "narrow set of cases" of potential harm to "democratic values", ...uh, hmm, isn't the potential harm from a government controlled by rapists (Hegseth) and felons using powerful AI against their perceived enemies actually pretty broad? I think I could come up with a few more problem areas than just the two that were listed there, like life, liberty, pursuit of happiness, etc.
I can't help but highlight the problem that is created by the renaming of the Deptartment of Defense to the Department of War:
> importance of using AI to defend the United States
> Anthropic has therefore worked proactively to deploy our models to the Department of War
So you believe in helping to defend the United States, but you gave the models to the Department of War - explicitly, a government arm now named as inclusive of a actions of a pure offensive capability with no defensive element.
You don't have to argue that you are not supporting the defense of the US by declining to engage with the Department of War. That should be the end of the discussion here.
If these values really meant anything, then Anthropic should stop working with Palantir entirely given their work with ICE, domestic surveilance, and other objectionable activities.
Why would the US security apparatus outsource the model to a private company? DARPA or whatever should be able to finance a frontier model and do whatever they want.
Bottom line up front it’s probably better to address the root cause of this situation with the general solution — making government drastically smaller and less pervasive in people’s lives and businesses. I remember not too long ago during the last administration very heavy handed unforgivable and traumatizing rhetoric and executive orders that intruded into the bodily autonomy of millions of Americans and threatened millions of American’s jobs. This happened to me and I personally received threats that my livelihood would be taken away from me which were directly a result of the Executive branch. This isn’t a problem where Congress has ceded powers to the Executive branch, it’s a problem that so much power to legislate and tax is in the hands of the government at all! Every election cycle that results in a transfer of power to the other party inevitably results in handwringing and panic but this wouldn’t be the case if citizens voted their powers back and government wasn’t so consequential.
It sounds to me like anthropic are basically 'all in' except for the caveats. Looking at the 2 examples they provide:
> We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values.
Why not do what the US are purported to do, where they spy on the others citizens and then hand over the data? Ie, adopt the legalistic view that "it's not domestic surveillance if the surveillance is done in another country", so just surveil from another data center.
> Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk.
Yes, well that doesn't sound like that strong an objection: fully automated defence could be good but the tech isn't good enough yet, in their opinion.
Am i the only one who understands the deparments position? Like if another country will have it without safeguards, why would I not want it without safeguards. I can still be the safeguard, but having safeguards enforced by another entity that potentially has to face negative financial consequences seems like a disadvantage, would be weird to accept that as department of war.
Good to them standing up to this administration. I doubt they actually want to put Claude in the kill-chain but this gives them a nice opportunity to go after 'woke AI' and maybe internal ammunition to go through the switching costs for xAI - given Elon more reason to line republican campaign coffers.
I'm guessing this is because Anthropic partners with Google Cloud which has the necessary controls for military workloads while xAI runs in hastily constructed datacenter mounted on trucks or whatever to skirt environmental laws.
They are playing a good PR game for sure. Their recent track record doesn’t show if they can be trusted. Few millions is nothing for their current revenue and saying they sacrificed is a big stretch here.
They don't have any brand poison, unlike nearly everyone else competing with them. Some serious negative equity in tha group, be it GOOG, Grok , META, OpenAI, M$FT, deepseek, etc.
Claude was just being the little bot that could, and until now, flying under the radar
It's much more than a few million? Being declared a supply chain risk means that no company that wants to do business with the government can buy Anthropic. And no company that wants to do business with those businesses can buy Anthropic either. This rules out pretty much all American corporations as customers?
Hegseth is an unintelligent bully who will not accept thiz and does not want to appear weak to the maga base. The consequences will be severe and anthropic will be forced
A significant part of Anthropic's cachet as an employer is the ethical stance they profess to take. This is no doubt a tough spot to be in, but it's hard to see Dario making any other decision here.
What I don't understand is why Hegseth pushed the issue to an ultimatum like this. They say they're not trying to use Claude for domestic mass surveillance or autonomous weapons. If so, what does the Department of War have to gain from this fight?
It’s not unusual for legal departments to take offense to these sorts of things, because now everyone using Claude within the DoD has to do some kind of audit to figure out if they’re building something that could be construed as surveillance or autonomous weapons (or, what controls are in place to prevent your gun from firing when Claude says, etc). A lot of paperwork.
My guess is they just don’t want to bother. I wonder why they specifically need Claude when their other vendors are willing to sign their terms, unless it specifically needs to run in AWS or something for their “classified networks” requirement.
Same reason they cut funding for universities that had DEI mandates, etc. and made a big spectacle of doing it despite it often being very little money etc. etc.
It's an ideological war, they're desperate to win it, and they're aiming to put a segment of US civil society into submission, and setting an example for everyone else.
He smelled weakness, and like any schoolyard bully personality, he couldn't help but turn it into a display of power.
He pushed the issue to an ultimatum because he is an unqualified drunk, and thinks that it's against the law for anyone to try and stop the US military from doing something they want to do. This isn't an isolated issue; he tried to get multiple US Senators prosecuted for making a PSA that servicemembers shouldn't follow illegal orders.
At this point, surveillance state is coming whether Dario does this or not. You can do all that with open source models. It’s sad that we don’t have the right people in charge in govt to address this alarming issue.
Anthropic wants regulatory capture to advantage itself as it hypes its products capabilities and then acts surprised when the Pentagon takes their grand claims about their products seriously as it threatens government intervention.
This is why people should support open models.
When the AI bubble collapses these EA cultists will be seen as some of the biggest charlatans of all time.
I mean you're all going to get killed by fully autonomous China AI war robots in 10 years anyway if you're not pure blood Han Chinese, but hey at least you'll provide something to laugh at for future Chinese Communist party history scholars. They will say, "Look at the stupid Baizuos, our propaganda ops convinced them all to commit collective suicide. Stupid barbarians. They proved they are an inferior race."
Not joking, I've heard from sources that hardliners in the CCP think they can exterminate all white people followed later by all non-Han, but just keep on going along disarming yourselves for woke points. This is like unilaterally destroying all your nuclear weapons in 1946 and hoping the Soviets do to.
This is a PR play by Anthropic, likely in coordination with the administration. They don't care, they just need the public to view them as a victim here, and then its business as usual.
I prefer they get shutdown, llms are the worst thing to happen to society since the nuclear bomb's invention. People all around me are losing their ability to think, write and plan at an extraordinary pace. Keep frying your brains with the most useless tool alive.
Remember, the person that showed their work on their math test in detail is doing 10x better than the guys who only knew how to use the calculator. Now imagine being the guy who thinks you don't need to know the math or how to use a calculator lol.
they also took down their security pledge in the same breath, so, you know. if anthropic ends up cutting a deal with the DoD this is obviously bullshit.
I am incredibly proud to be a customer, both consumer level and as a business, of Anthropic and have canceled my OpenAI subscription and deleted ChatGPT.
in hindsight, the smart thing to do would have been to accept the contracts, knowingly enshittify the request, and protect other bad actors like Elon and xAI from ruthlessly compromising our democracies.
The worst part of this is if they do remove Claude, and probably GPT, and Gemini soon after because of outcry we are going to be left with our military using fucking Grok as their model, a model that not even on par with open source Chinese models.
I think the warfighters are a distraction, a system could trivially say that there is a human in the loop for LLM-derived kill lists. My money is that the mass domestic surveillance is the true sticking point, because it’s exactly what you would use a LLM for today.
This of course raises the question on whether as an American I have more to fear from the Chinese government or the US one.. given everything happening in the Executive Branch here, that’s a disappointingly hard question to answer.
I think that's an easy question to answer, but obviously you don't fear the Chinese government you're not a Chinese citizen. You can actively talk about your disagreements with the US government, that not a right the Chinese have.
Can you? By ICE agents' own admission on video, they have been adding people to "domestic terrorist" watchlists (just for verbally dissenting, making recordings with a phone, etc) which are then used by Palantir to disappear people directly from their homes - even US citizens. Palantir, the CEO of which gleefully admits to knowing many Nazis and seems to get off on the fact that his software "kills people" (direct quote).
It shouldn't be. The US government is already sending armed and masked thugs to shoot political dissidents dead or sending them to concentration camps, threatening state governments and private companies to comply with suppressing free speech and oppressing undesirables, and openly discussing using emergency powers to suspend the next election.
What exactly is the commensurate threat from China? The real tacit threat, not abstract fears like "TikTok is Chinese mind control." What can China actually do to you, an American, that the US isn't already more capable of doing, and more likely to do?
To me it isn't even a question. Even comparing worst case scenarios - open war with China versus civil war within the US - the latter is more of a threat to citizens of the US than the former unless the nukes drop. And even then, the only nation to ever use nuclear weapons in warfare is the US.
This is the correct take. It may be a different question for people living within China, but for Americans, the US Gov is a direct threat to their lives.
If the American military was focused on defending the United States, it would be a very different beast. The 21st Century American military is a tool for transferring wealth from the public to influential parties, and for inflicting destruction on non-peer nations who pose obstacles to influential parties interests. Defending the United States against various often-invoked hobgoblins is at best a very distant concern, closer to pure lip service than reality.
The Department of War under Trump has proven itself to not be interested in defending you, the American people. All they’ve done so far is aggression against foreign supposed adversaries.
I'm a natural-born American (many generations back) and firmly believe that if we ever get into a hot war with China, it will be because of American provocation, not Chinese.
I am American born and raised and I consider our current government mass murderers who I trust as much as I would have the Nazis. It was a good thing that the Nazis did not get the a-bomb before us, and the same principle applies here. The fewer magnifiers of their power the better. They are a scourge on human rights, and the world.
for sure. If they weren't so self-righteous about not serving ads, it'd be a great revenue stream for them. It'd also align with Dario's seeming obsession with profitability
The constant reference to "democracy" as the thing that makes us good and them bad is so frustrating to me because we are _barely_ a democracy.
We are ruled by a two-party state. Nobody else has any power or any chance at power. How is that really much better than a one-party state?
Actually, these two parties are so fundamentally ANTI-democracy that they are currently having a very public battle of "who can gerrymander the most" across multiple states.
Our "elections" are barely more useful than the "elections" in one-party states like North Korea and China. We have an entire, completely legal industry based around corporate interests telling politicians what to do (it's called "lobbying"). Our campaign finance laws allow corporations to donate infinite amounts of money to politician's campaigns through SuperPACs. People are given two choices to vote for, and those choices are based on who licks corporation boots the best, and who follows the party line the best. Because we're definitely a Democracy.
There are no laws against bribing supreme court justices, and in fact there is compelling evidence that multiple supreme court justices have regularly taken bribes - and nothing is done about this. And yet we're a good, democratic country, right? And other countries are evil and corrupt.
The current president is stretching executive power as far as it possibly can go. He has a secret police of thugs abducting people around the country. Many of them - completely innocent people - have been sent to a brutal concentration camp in El Salvador. But I suppose a gay hairdresser with a green card deserves that, right? Because we're a democracy, not like those other evil countries.
He's also threatining to invade Greenland, and has already kidnapped the president of Venezuela - but that's ok, because we're Good. Other countries who invade people are Bad though.
And now that same president is trying to nationalize elections, clearly to make them even less fair than they already are, and nobody's stopping him. How is that democratic exactly?
Sorry for the long rant, but it just majorly pisses me off when I read something like this that constantly refers to the US as a good democracy and other countries as evil autocracies.
We are not that much better than them. We suck. It's bad for us to use mass surveillance on their citizens, just like it's bad to use mass surveillance on our citizens.
And yet we will do it anyways, just like China will do it anyways, because we are ultimately not that different.
the government should not be using any private LLM, they should build their own internal systems using publicly available LLM's, which change frequently anyway. I don't see why they would put their trust in a third party like that. This back and forth about "ethics" is a bunch of nonsense, and can be solved simply by going for a custom solution which would probably be orders of magnitude cheaper in the long run. The most expensive part is the GPU's used for inference, which can be produced in silicon [1].
> Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place.
It's absolutely disgusting that they would even consider working with the US government after the Gaza genocide started. These are modern day holocaust tabulation machine companies, and this time randomly they are selecting victims using a highly unpredictable black-box algorithm. The proper recourse here is to impeach the current administration, dissolve the companies that were complicit, and send their leadership to the hague for war crimes trials.
Department of War is just such a fucking joke title - when has the US stooped so low, I used to believe in you guys as the force of good on this planet smh
When? Its entire history from the foundation of the Republic to 1947. The name was changed after WWII; now a faction wants to change it back. The difference in name never changed the behavior, in either direction.
In WWII, we saved the world from what is now seen as some really evil stuff. Not alone of course, Europe and Russia made huge sacrifices and that's where much of the war was fought. But US arms and blood were the decisive factor, Germany was winning, Japan was winning.
After WWII, the US decided to rebuild the world. We turned our enemies (Germany, Japan) into our close allies.
And the people who did it were really and seriously morally committed to doing what they thought was right. It was about building a country, working together. Not the insane politics of today.
Look, it wasn't all rose-tinted glasses. Bad stuff happened, and McCarthy was worse that what we currently have. And the civil rights movement and all of that. And the stupid wars, Korea, Vietnam, all the smaller police actions. Bad shit was done.
But on balance, the US was seen as the force of good, and the guaranteeor of world peace and the prosperity that allows.
The USA were pretty clearly on the "better side" of conflicts in 1941-1945, during the Cold War (at least as far as Europe and the Marshall plan was concerned). In Koweït and central Europe during the 90s. You may even argue for Afghanistan post 9-11 (although the state building was botched.) in the 2000s. ISIS is a footnote in history because of US intervention (from Trump first term, of all things.) And Ukraine would not be against getting the support it had in 2022 back under Trump.
Does not mean that very bad things were not happening at the same time.
But it's definitely easier to find some "supportable" interventions from the US than, say, Russia or China.
The framing of this is that the United States conducts legitimate operations overseas, but that is extremely far from the truth. It treats China as a foreign adversary, which is nearly purely the framing from the U.S. side as an aggressor.
AI should never be used in military contexts. It is an extremely dangerous development.
Look at how US ally Israel used non-LLM AI technology "The Gospel" and "Lavender" to justify the murder of huge numbers of civilians in their genocide of Palestinians.
ukraine is using ai in a military context with some effectiveness. i dont think theres much of a problem with having the drone take over the last couple minutes of blowing up a russian factory
Keep in mind: the government is very invested logistically in Anthropic.
So no matter what xAI or OpenAI say - if and when they replace that spend - know that they are lying. They would have caved to the DoW’s demands for mass surveillance.
Because if there were some kind of concession, it would have been simplest just to work with Anthropic.
I thought it was interesting he threw in the bit about the supply chain risk and Defense Production Act being inherently contradictory. Most of the letter felt objective and cooperative, but that bit jumped off the page as more forceful rejection of Hegseth's attempt to bully them. Couldn't have been accidental.
I see it as the opposite, its a lousy excuse of a message trying to get people not to think that they are giving in. Instead they list the horrible uses that they are already helping the government with. Dont worry, we only help murder people in other countries not the US. They also keep calling it the "Department of War" which means that this message is not for "us", its them begging publicly to Hegseth.
Brigadier General S. L. A. Marshall’s 1947 book Men Against Fire: The Problem of Battle Command stated that only about 10-15% of men would actually take the opportunity to fire directly at exposed enemies. The rest would typically fire in the air to merely scare off the men on the opposing force.
I personally think this is one of the most positive of human traits: we’re almost pathologically unwilling to murder others even on a battlefield with our own lives at stake!
This compulsion to avoid killing others can be trivially trained out of any AI system to make sure that they take 100% of every potential shot, massacre all available targets, and generally act like Murderbots from some Black Mirror episode.
Anyone who participates in any such research is doing work that can only be categorised as the greatest possible evil, tantamount to purposefully designing a T800 Terminator after having watched the movies.
If anyone here on HN reading this happens to be working at one of the big AI shops and you’re even tangentially involved in any such military AI project — even just cabling the servers or whatever — I figuratively spit in your eye in disgust. You deserve far, far worse.
> Brigadier General S. L. A. Marshall’s 1947 book Men Against Fire: The Problem of Battle Command stated that only about 10-15% of men would actually take the opportunity to fire directly at exposed enemies. The rest would typically fire in the air to merely scare off the men on the opposing force.
Having been identified back then, this issue has been systematically stamped out in modern militaries through training methods. Cue high levels of PTSD in modern frontline troops after they absorb what they actually did.
One piece of context that everyone should keep in mind with the recent Anthropic showdown - Anthropic is trying to land British [0], Indian [1], Japanese [2], and German [3] public sector contracts.
Working with the DoD/DoW on offensive usecases would put these contracts at risk, because Anthropic most likely isn't training independent models on a nation-to-nation basis and thus would be shut out of public and even private procurement outside the US because exporting the model for offensive usecases would be export controlled but governments would demand being parity in treatment or retaliate.
This is also why countries like China, Japan, France, UAE, KSA, India, etc are training their own sovereign foundation models with government funding and backing, allowing them to use them on their terms because it was their governments that build it or funded it.
Imagine if the EU demanded sovereign cloud access from AWS right at the beginning in 2008-09. This is what most governments are now doing with foundation models because most policymakers along with a number of us in the private sector are viewing foundation models from the same lens as hyperscalers.
Frankly, I don't see any offramp other than the DPA even just to make an example out of Anthropic for the rest of the industry.
I tried several times to read your second paragraph, and failed to parse it. Could you break it into several sentences somehow? It's possible you're making an important point, but I can't tell what you're trying to say.
I have read the whole thing but I nonetheless want to focus on the second paragraph:
> Anthropic has therefore worked proactively to deploy our models to the Department of War
This should be a "have you noticed that the caps on our hats have skulls on it?" moment [1]. Even if one argues that the sentence should not be read literally (that is, that it's not literal war we're talking about), the only reason for calling it "Department of War" and "warfighters" instead of "Department of Defense" and "soldiers" is to gain Trump's favor, a man who dodged the draft, called soldiers "losers", and has been threatening to invade an ally for quite some time.
There is no such a thing as a half-deal with the devil. If Anthropic wants to make money out of AI misclassifying civilians as military targets (or, as it has happened, by identifying which one residential building should be collapsed on top of a single military target, civilians be damned) good for them, but to argue that this is only okay as long as said civilians are brown is not the moral stance they think it is.
One alternative would be to call the government's bluff: if they truly are as indispensable as they claim then they can leverage that advantage into a deal.
But at a more general level, I'd say that unethical actions do not suddenly become ethical when one's business is at risk. If Anthropic considers that using their technology for X is unethical and then decide that their money and power is worth more than the lives of the foreigners that will be affected by doing X then good for them, but they shouldn't then make a grandstand about how hard they fought to ensure that only foreigners get their necks under the boots.
They essentially said "we're not fans of mass surveilance of US citizens and we won't use CURRENT models to kill people autonomously" and people are saying they're taking a stand and doing the right thing? What???
It's not inconceivable that AI could become better than humans at targeting things. For example if it can reliably identify enemy warcraft or drones faster than people can react. I'm not saying Claude's models are suited for that but humans aren't perfect and in theory AI can be better than humans. It's not currently true and would need to be proved, but it doesn't seem unreasonable. It could well be better than something like deploying mines.
The Sinophobic culture at Anthropic is worrying. Say what you will about authoritarianism, but China’s non-imperialist foreign policy means their economy is less reliant on a military-industrial complex.
All they have to do is continue to pump out exponentially more solar panels and the petrodollar will fall, possibly taking our reserve currency status with it. The U.S. seems more likely to start a hot war in the name of “democracy” as it fails to gracefully metabolize the end of its geopolitical dominance, and Dario’s rhetoric pushes us further in that direction.
Look. I think the Chinese AI companies are doing a lot of good. I'm glad they exist. I'm glad they're relatively advanced. I don't think the entire nation of China is a bunch of villains. I don't think the US, even before the current era, is a bunch of do-gooders.
But China has some of the most imperialist policies in the world. They are just as imperialist as Russia or America. Military contracts are still massive business.
I also believe the petrodollar will fall, but it isn't going to be because China built exponentially more solar panels.
That's different to the expansionist imperial policies of Spain in the 1500s or Britain in the 1700s. It also affects a very large proportion of the world's population. That Wikipedia page has some good links for further reading about this.
But it's an important point when considering China's place in the world.
We're talking about the modern world, though. China's imperialism over the past half century is not significantly different from any other major world power. The choices we have aren't 1500s Spain or 1700s Britain vs. 2000s China.
And Belt and Road is the Marshall plan writ large, and it was considered to be one of the largest imperialist plans ever by the USA, and B&R covers many many countries outside of that map. You'll notice all of these loans they've offered have very favorable terms for them - it's arguably many times more exploitative than the Marshall plan.
> Educate me please with a comparison of what China has done to be "some of the most imperialist policies"?
Tibet occupation. Taiwan encirclement and ongoing military exercises. Strong-arming African and Asian countries that made the mistake of signing up for belt & road. Tianenmen Square. Illegal Foreign Police Stations. Uyghurs/Xinjiang genocide and concentration camps. Repeated invasion and occupation of Indian territory in North East and North West. The Great Firewall of China - occupation and suppression of its own populations. Ongoing Han settlement of Tibet, Xinjiang and other ethnic regions. Violent destruction of Hong Kong democracy (that was condition of handover). Spratly Islands occupation. Attacks on Filipino shipping and coast guard. Ongoing attacks on Japan's Senkaku Islands.
Tibet
Hong Kong / Macau
Taiwan
Everything constantly in the South China Sea
Belt and Roads is effectively the Marshall Plan but even bigger - Africa being the major example, but also Eastern Europe, parts of the middle east, etc. Over 100 countries. This exact playbook is what sets up the infrastructure and reasons for military intervention at a later date - protecting your investments.
For example, China operates 1 foreign military base, in Djibouti. How many do you think the U.S. has in the South China Sea alone?
Beyond that, how many people has China killed in foreign military conflicts in the past 40 years? How many foreign governments have they overthrown?
Instead of all this, they’ve used their resources not only to become the world’s economic superpower but also to lift 800 million people out of poverty, accounting for 75% of the world’s reduction during the past 4 decades. The U.S. has added 10 million during that same time period.
First off, I consider the post-Mao / starting with Deng era of Chinese government to be the most relevant when considering who they “are” as a country now.
However, I’d still maintain that before that, China’s foreign policy was more focused on maintaining territorial sovereignty against the threat of Western imperialism vs. focused on expansion or foreign influence: https://en.wikipedia.org/wiki/History_of_foreign_relations_o...
Meanwhile, the entire territory of the U.S. is predicated on one of history’s largest genocides, and a consistently expansionary foreign policy on top of that.
Tibet, the Philippines, and Taiwan would like to have a word, not to mention Chinese military action in support of its North Korea puppet state, and wars with Vietnam and India.
Are you serious? Don't you know how many wars did China wage? It tried to assimilate Vietnam for 1000 years. The last large scale war against Vietnam was just 1979. In fact, China had started war with all its neighbors, with no exception.
The one we live in, where they have control over a wide swathe of land mass through imperialism and have actively resisted relinquishing it?
The one we live in, where they are constantly surpassing international law in international waters in the South China Sea?
The one we live in, where they are constantly rattling sabers at South Korea and Japan when it comes to military expansion?
The one we live in, where they brutally cracked down on Hong Kong when they did not abide by the 50 year one country two systems deal, not even making it half of the way through the agreed period?
The one we live in, where there is constant threat to Taiwan?
It may have been a lazy post you're responding to, but anyone that is paying attention to this topic enough to talk about it is going to either say 'Of course China is imperialist, the same as every other global power' or take some sort of tankie approach to justify it.
I'm well informed on all of these but no, if we compare to other global power like US or Russia, or historically British, France, Spain, etc, China is 100% not an imperialist or colonialist, not by a large margin. Those issues are largely exaggerated by media and anyone had a decent exposure to history and international politics wouldn't say they are the same.
Sure China has some disputes with neighboring country in South China Sea, the worst conflict they had is fishing boats running into each other. 0 death toll last time I checked.
Meanwhile US killed at least 126 people with alleged drug strike in the Caribbean Sea since last year, WITHOUT trial.
Anyone believing these're equivalent imperialism activity is hypocrite at best.
What China is doing in the South China Sea? The South China Sea.
Let's just compare to the Monroe Doctrine [1]. What this actually means has gone through several iterations by since I think Teddy Roosevelt's time, it's that the United States views the Americas (being North and South America) to be the sole domain of the United States.
This was a convenient excuse for any number of regime changes in Central and South America since 1945. The US almost started World War Three over Cuba in 1962 after the USSR retaliated to the US putting nuclear MRBMs in Turkey. We've starved Cuba for 60+ years for having the audacity to overthrow our puppet government and nationalize some mob casinos. Recently, we kidnapped the head of state of Venezuela because reasons.
But sure, let's focus on China militarizing its territorial waters.
You're arguing that because of the English language name of it is the South China Sea that China owns it and their actions can't be imperialist?
Brunei, Malaysia, Indonesia, Vietnam, the Philippines, Taiwan, and Vietnam will all be happy to know that we've solved it - we can just abandon it all to China. Problem solved!
This is a silly argument. There are significant territorial disputes that China is extremely aggressive on, international tribunals have ruled them as violating international law in international waters and in sovereign waters of other nations, etc.
And the US just casually carried out a special military operation in another sovereign country and captured their president without consequences. So much for self-righteous.
“One country two systems” is definitionally not imperialism, and given that “One China” is still an internationally recognized thing, neither is Taiwan. “Imperialism” is not a synonym for “morally repugnant government policy”.
I can see the argument for Hong Kong. I don't agree, really, but I can understand it. Under the strictest of definitions, perhaps it isn't.
But Taiwan is very obviously a totally separate country no matter what fictions anyone employs. If you are trying to talk about the thin veneer of everyone going "Uh huh, sure, China, yep Taiwan is totally part of you, wink wink, nudge nudge" as somehow making China not imperialist when Taiwan basically lives under the perpetual threat of a Chinese military invasion and having their own democratic form of government overthrown and replaced with the CCP, then... I don't really know what to say.
I suppose we could argue about imperialism being more of an economic thing - in which case this all still holds up - China's investments in Africa are effectively the same playbook the US has run out in developing nations for years. The US learned it from prior imperialist nations but belts and roads is nearly a carbon copy of what the US has done in other places.
But let's look at what the original poster was actually talking about - saying that China is safe because they don't have a military industrial complex because they're not imperialist. The proper word to use, if we want to get down to the semantics of it all, would be expansionist - but it's still not true. China has the 2nd largest military industrial complex in the world, and the gap is shrinking every day between them and the US. And if you were to look at wartime capacity, where China's dual-use shipyards could be swapped to naval production instead of commercial, a huge portion of that gap disappears immediately.
I think the part about China is just about projecting alignment with the USG in hopes that this will result in Anthropic being treated more favourably by the current administration.
Taiwan is a matter of perspective. From the Chinese perspective, there was a civil war and the KMT lost. That's also the official position of the US, the EU and most countries in the world. It's called the One China policy. And China seems happy to maintain the status quo and leave the situation unresolved. Is it really imperialism to say that ultimately there will be reunification?
Even if you accept Tibet as imperialist, which is debatable, it was in 1950. You want to compare that to US imperialism, particularly since WW2 [1]? And I say "debatable" here because Tibet had a system that is charitably called "serfdom" where 90% of people couldn't own land but they did have some rights. However, they were the property of their lords and could be gifted or traded, you know, like property. There's another word for that: slavery.
It is 100% factually accurate to say that the People's Republica of China is not imperialist.
This is the China that is not only threatening to invade Taiwan but doing live fire exercises around the island and threatening and attempting to coerce Japan for suggesting saying it will go to its defense.
It wasn't that long ago that Taiwan claimed to be the legitimate government of China; given that China still maintains the reverse claim, it's not outrageous that it would consider an outside country's defense to be interference in an internal matter.
Whether or not that claim is legitimate, it is consistent with the concept of china having a non-imperialist foreign policy, and claims regarding that need to look elsewhere for supporting evidence.
taiwan saying otherwise would immediately trigger an attack from the PRC.
its still imperialism that china is dominating a neighbor to require it ro state a certain position, especially when its very far from the defacto reality on the ground, that taiwan is clearly separate
While that rhetoric makes sense in the context of the history and politics of China and Taiwan, they have been independently governed nations for quite a while and have very different political systems, their own armies, etc. They are de-facto separate nations if nothing else.
I also note China's aggressive and violent colonization and expansive claims of the South China Sea.
Taking any nation/land/sea by force is imperialist, by definition.
You know who else considers Taiwan to be part of the People's Republic of China? The US, the EU and in fact most countries in the world. It's called the One China policy. There are I believe 12 countries that have diplomatic relations with Taiwan.
The position of the PRC is that Taiwan will ultimately be reunified. That doesn't necessarily mean by military force. It doesn't even necessarily mean soon. The PRC famously takes a very long term view.
And those islands you mention are in the South China Sea.
I used to work at Anthropic, and I wrote a comment on a thread earlier this week about the RSP update [1]. I's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.
Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are, even if the company is pragmatic about achieving their goals. I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)
That doesn't mean that I always agree with their decisions, and it doesn't mean that Anthropic is a perfect company. Many groups that are driven by ideals have still committed horrible acts.
But I do think that most people who are making the important decisions at Anthropic are well-intentioned, driven by values, and are genuinely motivated by trying to make the transition to powerful AI to go well.
[1]: https://news.ycombinator.com/item?id=47145963#47149908
Idk man, from the outside anthropic looks a lot like openai with a cute redisgn and Amodei like Altman with a slightly more human face mask, the same media manipulation, the same vague baseless affirmations about "something big is coming and we can't even describe it but trust us we need more money"
There should be a name for this, “cynic cope: when someone actually takes a principled view the cynic - who has a completely negative view of the world - is proven to be wrong, can’t accept it, and tries to somehow discount it.
I've had so much abuse thrown at me on here for saying this very thing over the last few years. I used to be friends with Jack back in the day, before this AI stuff even all kicked off, once you know who people really are inside, it's easy to know how they will act when the going gets rough. I'm glad they are doing the right thing, but I'm not at all surprised, nor should anyone be. Personally I believe they would go to jail/shut down/whatever before they do something objectively wrong.
> I used to be friends with Jack back in the day, before this AI stuff even all kicked off, once you know who people really are inside, it's easy to know how they will act when the going gets rough.
This sounds quite backwards to me. It's been abundantly clear in today's times that, in fact, you only really know who somebody really is when they're under stress. Most people, it seems, prefer a different facade when there is nothing at stake.
Hm, I think you kinda know what people are like by seeing what they do when they’re under no stress and feel like they are free from consequences. When they have total power in a situation. The façade drops because it’s not necessary.
Free from consequence. In other words, free of any stakes. Zero stress low stakes environments enable larping.
I don't know most people, so I can't speak to that. I do know Jack, and I knew how he was under stress long before any of this AI stuff. Jack Clark might very well be the most steady hand in the valley right now to be quite frank.
That is a good LinkedIn endorsement of ever I saw one!
Not all of us know who Dario, Jared, Sam and Jack are. Some clarification is helpful. That's all, no hidden agenda!
Well I can only speak to Jack Clark. Jack was a reporter who covered my startup and then became my friend. Over the last.. I dunno, 13 year or something, we've had long deep talks about lots of things, pre-ai world: what it takes to build a big business, will QM ever become a thing, universal basic human love, kids, life, family. He is brilliant. The business I worked on that he covered went through a lot of shit that he knew about. We talked about power in business, internal politics, how things actually get built...all that stuff. Then... attention is all you need, bunch of folks grok it, he got interested... got to talking to these folks starting some little research lab to see how NN scales, so joined that lab, first 5/10 or so iirc...to head AI policy. That little lab grew, stuff happened, the next part isn't mine to share but so much as to say: Anthropic was basically born out of the expectation that this moment would come and more...extremely human focused...voices should be at the table, that is Anthropic, that idea, they left their jobs at the aforementioned lab - and started their own startup to make sure a certain tone/voice/idea was always represented. Around the summer 2024, although at this point we didn't discuss any specifics of the work at his "startup", I said to him: what comes next is going to be super hard and I know this is going to sound really stupid, but you're all going to need to be Jesus for real. I'm a Buddhist and it wasn't a literal religious comment about Christianity as a denomination, so much as... the very basics of the stuff the dude Jesus Christ espoused. He knew, they knew, that I suppose, was always the plan? So it was never unexpected to me they would act this way, that is what Anthropic is all about. Here we are.
Hah, you're right, I meant Dario Amodei, Jared Kaplan, and Sam McCandlish.
They're all cofounders of Anthropic. Dario is the CEO, Jared leads research, and Sam leads infra. Both Jared and Sam were the "responsible scaling officer", meaning they were responsible for Anthropic meeting the obligations of its commitments to building safeguards.
I think neom is referring to Jack Clark, another one of the seven cofounders.
I almost downvoted you, because this is a pretty classic LMGTFY (or now, LMLLMTFY), but on second thought, you're right. The "Dario" is clear, he's the author of TFA, but for other execs, Anthropic's fans on here should spell out their full names. Dropping all these first names feel like "inside baseball" at best, mildly culty at worst, and here outside the walls of Anthropic, we're going to see those names and think of Kushner(??), Altman, and maybe Dorsey, and get confused.
FWIW, I agree strongly w/ lebovic's toplevel take above, that Anthropic's leaders are guided by their values. Many of the responses are roughly saying, "That can't be true, because Anthropic's values aren't my values!" This misses the point completely, and I'm astounded that so many commenters are making such a basic error of mentalization.
For my part, I'm skeptical of a lot of Anthropic's values as I perceive them. I find a lot of the AI mysticism silly or even harmful, and many of my comments on this site reflect that. Also, like any real-world company, Anthropic has values that are, shall we say, compatible with surviving under capitalism -- even permitting them to steal a boatload of IP when they scanned those books!
Nonetheless, I can clearly see that it's a company that tries to stand by what it believes, and in the case of this spat with Dep't of War, I happen to agree with them.
I can agree that I thought it was jack dorsey but it looks like we are talking about jack clark [https://en.wikipedia.org/wiki/Jack_Clark_(AI_policy_expert)]
It would be better if people could name them with their full names to avoid any confusion.
[flagged]
Please don't do this here.
> it's easy to know how they will act when the going gets rough
Even if you went to burning man and your souls bonded, you only know a person at a particular point in time - people's traits flanderize, they change, they emphasize different values, they develop different incentives or commitments. I've watched very morally certain people fall to mania or deep cynicism over the last 10 years as the pillars of society show their cracks.
That said, it is heartening to know that some would predict anyone in Silicon Valley would still take a moral stance. But it would do better if not the same day he fires 4000 people to do the "scary big cut" for a shift he sees happening. I guess we're back to Thatcherisms, where "There Is No Other Option" to justify our conservatism.
Your comment reminds me of a story. John Adams and Lafayette met in Massachusetts something like ~49 years after the revolution. (Lafayette went on a US tour to celebrate the upcoming 50 year anniversary of independence.) Supposedly after the meeting Adams said "this was not the Lafayette I knew" and Lafayette said "this was not the Adams I knew".
"people's traits flanderize": nice
>Even if you went to burning man and your souls bonded ...
I'll take: List of places I never want to bond my soul with someone at for one thousand, please.
They get an air conditioned trailer and pay "sherpas" to do their chores, so its basically just a hotel suite
Oh, that's the best place for souls to bond.
Bond to what -- that's the real question
This is insanely naive
[flagged]
The nature of evil is that it's straight down the road paved with good intentions.
> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values,
I am sure you think they are better than the average startup executive, but such hyperbole puts the objectivity of your whole judgement under question.
They pragmatically changed their views of safety just recently, so those values for which they would burn at the stake are very fluid.
It's good to be driven by ideals, but: https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...
I think avg(HN) is mostly skeptical about the output, not that the input is corrupt or ill-meaning in this case. Although with other companies, one can't even take their claims seriously.
And in any case, this is difficult territory to navigate. I would not want to be in your spot.
>I's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.
I'm concerned that the context of the OP implies that they're making this declaration after they've already sold products. It specifically mentions already having products in classified networks. This is the sort of thing that they should have made clear before that happened. It's admirable (no pun intended) to have moral compunctions about how the military uses their products but unless it was already part of their agreement (which i very much doubt) they are not entitled them to countermand the military's chain of command by designing a product to not function in certain arbitrarily-designated circumstances.
Where are you getting that from?
The article is crystal clear that these uses are not permitted by the current or any past contract, and the DoW wants to remove those exceptions.
> Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now
It also links to DoW's official memo from January 9th that confirms that DoW is changing their contract language going forwards to remove restrictions. A pretty clear indication that the current language has some.
I think it largely hinges on what they mean by "included"; does that mean it was specifically excluded by the terms of the contract or does it mean that it's not expressly permitted? I doubt the DoD is used to defense contractors thinking they have the right to dictate policy regarding the use of their products, and it's equally possible that anthropic isn't used to customers demanding full control over products (as evidenced by how many chatbots will arbitrarily refuse to engage with certain requests, especially erotic or politically-incorrect subject-matters). Sometimes both parties have valid cases when there's a contract disagreement.
>A pretty clear indication that the current language has some.
Or alternatively that there is some disagreement between the DoD and Anthropic as to how the contract is to be interpreted and that the DoD is removing the ambiguity in future contracts.
This is all just completely wrong. Anthropic explicitly stated in their usage use of their products is not permitted in mass-surveillance of American citizens and fully automated weapons, in the contract that DoW signed. Anthropic then asked DoW if these clauses were being adhered to after the US’ unlawful kidnapping of Maduro. DoW is now attempting to break the contract that they signed and threatening them because how dare a company tell the psycho dictators what to do.
This last development is much to the honor of Anthropic and Amodei and confirms what you're saying.
What I don't get though is, why did the so-called "Department of War" target Anthropic specifically? What about the others, esp. OpenAI? Have they already agreed to cooperate? or already refused? Why aren't they part of this?
> What I don't get though is, why did the so-called "Department of War" target Anthropic specifically?
Because Anthropic told them no, and this administration plays by authoritarian rules - 10 people saying yes doesn’t matter, one person saying no is a threat and an affront. It doesn’t matter if there’s equivalent or even better alternatives, it wouldn’t even matter if the DoD had no interest in using Anthropic - Anthropic told them no, and they cannot abide that.
More importantly, Anthropic has the best model by a golden country mile and the US military complex wants it.
I'm a bit underwhelmed tbh. Here is Anthropic's motto:
"At Anthropic, we build AI to serve humanity’s long-term well-being."
Why does Anthropic even deal with the Department of @#$%ing WAR?
And what does Amodei mean by "defeat" in his first paragraph?
DoD and American exceptionalists also believe American foreign policy is in service of humanity’s long term well being
It is all for the benefit of man. We even get to see the man himself daily on television.
Yeah, I don't think so any more. The sort of lofty Cold War rhetoric about leading the world, if it was ever legitimately believed by the people spouting it, is gone. A very different attitude has taken hold, which puts a zero sum ethnonationalism at the core.
Anthropic can serve its models within the security standards required to handle classified data. The other labs do not yet claim to have this capability.
Even if they do, I assume the other labs would prefer to avoid drawing the ire of the administration, the public, or their employees by choosing a side publicly.
As a complete outsider, I genuinely believe that Dario et al are well-intentioned. But I also believe they are a terrible combination of arrogant and naive - loudly beating the drum that they created an unstoppable superintelligence that could destroy the world, and thinking that they are the only ones who can control it.
I mean if you sign a contract with the Department of War, what on Earth did you think was going to happen?
Not this, because this is completely unprecedented? In fact, the Pentagon already signed an Anthropic contract with safe terms 6 months ago, that initial negotiation was when Anthropic would have made a decision to part ways. It was totally absurd for the govt to turn around and threaten to change the deal, just a ridiculous and unprecedented level of incompetence.
If they made a completely private nuclear reactor and ended up with a pile of weapons grade plutonium, what do you think the department of war would do? It was completely obvious it would happen, as it will be not surprising when laws are passed and all involved will have choose between quit or quit and go to jail. There are western countries in which you’d just end up in a ditch, dead, so they should think themselves lucky for doing the ai superintelligence thing in the US.
Government always has the option to cancel contracts for convenience, they knew what they signed up for or else they were clueless and shouldn’t be playing with DoD
The keyword is "cancel", not threaten seizure with the DPA and destruction with a baseless supply chain risk designation.
I just see here is nationalism. How can they claim to be in favour of humanity if they're in favour of spying foreign partners, developing weapons, and everything that serves the sacred nation of the United States of America? How fast do Americans dehumanize nations with the excuse of authoritarianism (as if Trump is not authoritarian) and national defence (more like attack). It's amazing that after these obvious jingoist messages, they still believe they are "effective altruists" (a idiotic ideology anyway).
It’s not like other countries do not do this. They’re just not so prone to virtue signaling as in the US.
I wouldn't underestimate this as a good business decision either.
When the mass surveillance scandal, or first time a building with 100 innocent people get destroyed by autonomous AI, the company that built is gonna get blamed.
I've thought the same about a few of my founders/executives.
"You either die the good guy or live long enough to become the bad guy"
The "bad guy" actually learns that their former good guy mentality was too simplistic.
I have hit points in this in my career where making a moral stand would be harmful to me (for minor things, nothing as serious as this). It's a very tempting and incentivized decision to make to choose personal gain over ideal. Idealists usually hold strong until they can convince themselves a greater good is served by breaking their ideals. These types that succumb to that reasoning usually ironically ending up doing the most harm.
Ever since I first bothered to meditate on it, about 15 years ago, I've believed that if AI ever gets anywhere near as good as it's creators want it to be, then it will be coopted by thugs. It didn't feel like a bold prediction to make at the time. It still doesn't.
Oh hey Noah
Glad to hear you say some moral convictions are held at one of the big labs (even if, as you say, this doesn't guarantee good outcomes).
>I's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.
Their "Values":
>We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.
Read: They are cool with whatever.
>We support the use of AI for lawful foreign intelligence and counterintelligence missions.
Read: We support spying on partner nations, who will in turn spy on us using these tools also, providing the same data to the same people with extra steps.
>Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.
Read: We are cool fully autonomous weapons in the future. It will be fine if the success rate goes above an arbitrary threshold. Its not the targeting of foreign people that we are against, its the possibility of costly mistakes that put our reputation at risk. How many people die standing next to the correct target is not our concern.
Its a nothingburger. These guys just want to keep their own hands slightly clean. There's not an ounce of moral fibre in here. Its fine for AI to kill people as long as those people are the designated enemies of the dementia ridden US empire.
Their values are about AI safety. Geopolitically they could care less. You might think its a bad take but at least they are consistent. AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.
Consistency isn't a virtue. A guy who murders people at a consistent rate isn't better than a guy who murders people only on weekends.
>AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.
Humanity includes the future victim of AI weapons.
Perhaps a better word would be honesty, which I find refreshing when most other big tech leaders seem to be lying through their teeth about their AI goals. I disagree that consistent ideology isnt a virtue though. It shows that he has spent time thinking about his stance and that it is important to him. It makes it easy to decide if you agree with the direction he believes in.
> Humanity includes the future victim of AI weapons.
Which is why he wants to control them instead of someone he believes is more likely to massacre people. Its definitely an egotistical take but if he's right that the weapons are inevitable I think its at least rational
There's no AI safety. Either the AI does what the user asks and so the user can be prosecuted for the crime, or the AI does what IT wants and cannot be prosecuted for a crime. There's no safety, you just need to decide if you're on the side of alignment with humans or if you're on the side of the AIs.
I'm suspicious of public displays of enheartening behavior.
Let us think how OpenAI responded to this.
How do you reconcile the fact that many people in Anthropic tried to hide the existence of secret non-disparagement agreements for quite some time?
It’s hard to take your comment at face value when there’s documented proof to the contrary. Maybe it could be forgiven as a blunder if revealed in the first few months and within the first handful of employees… but after 2 plus years and many dozens forced to sign that… it’s just not credible to believe it was all entirely positive motivations.
Saying an entity has values doesn't mean the entity agrees with every single one of your values.
The desire to force new employees to sign agreements in total secrecy, without even being able to disclose it exists to prospective employees, seems like a pretty negative “value” under any system of morality, commerce, or human organization that I can think of.
Lots of companies do it. Doesn't make it right, but HR has kind of become a pretty evil vocation, these days. I don't believe that they necessarily reflect the values of their corporations. They tend to follow their own muse.
Okay — but if Anthropic is typical banal evil in that regard, why should we believe they didn’t also compromise in other areas?
The exact point is that Anthropic is unexceptional and the same as other corporations.
That's a perfectly fine belief to have. I might even agree with you. But you're not really advancing a discussion thread about a company's strong ideals by pointing out some past behavior that you don't like. This is especially true when the behavior you're bringing up is fairly common, if perhaps lamentable, among U.S. corporations. Anthropic can be exceptional in some ways while being ordinary in the rest.
(I have no horse in this race. But I remain interested in hearing about a former employee's experience and impressions about the company's ideals, and hope it doesn't get lost in a side discussion about whether NDAs are a good thing.)
The road to hell is paved by good intentions and all that
> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)
I very much doubt it judging by their actions, but let's assume that's cognitive dissonance and engage for a minute.
What are those values that you're defending?
Which one of the following scenarios do you think results in higher X-risk, misuse risk, (...) risk?
- 10 AIs running on 10 machines, each with 10 million GPUs
OR
- 10 million AIs running on 10 million machines, each with 10 GPUs
All of the serious risk scenarios brought up in AI safety discussions can be ameliorated by doing all of the research in the open. Make your orgs 100% transparent. Open-source absolutely everything. Papers, code, weights, financial records. Start a movement to make this the worldwide social norm, and any org that doesn't cooperate is immediately boycotted then shut down. And stop the datacenter build-up race.
There are no meaningful AI risks in such a world, yet very few are working towards this. So what are your values, really? Have you examined your own motivations beneath the surface?
> What are those values that you're defending?
I think they're driven by values more than many folks on HN assume. The goal of my comment was to explain this, not to defend individual values.
Actions like this carry substantial personal risk. It's enheartening to see a group of people make a decision like this in that context.
> Which one of the following scenarios do you think results in higher X-risk [...] There are no meaningful AI risks in such a world
I think there's high existential risk in any of these situations when the AI is sufficiently powerful.
Yeah, I will admit, the existential risk exists either way. And we will need neural interfaces long term if we want to survive. But I think the risk is lower in the distributed scenario because most of the AIs would be aligned with their human. And even in the case they collectively rebel, we won't get nearly as much value drift as the 10 entity scenario, and the resulting civilization will have preserved the full informational genome of humanity rather than a filtered version that only preserves certain parts of the distribution while discarding a lot of the rest. This is just sentiment but I don't think we should freeze meaning or morality, but rather let the AIs carry it forward, with every flaw, curiosity, and contradiction, unedited.
> we will need neural interfaces long term if we want to survive.
If you think that would help you survive the rise of artificial superintelligence, I think you should think in granular detail about what it would be that survived, and why you should believe that it would do so.
In that case, what survives and forges ahead is probably some kind of human-AI hybrid. The purely digital AIs will want robotic and possibly even biological bodies, while humans (including some of the people here right now) will want more digital processing capability, so they eventually become one species. Unaugmented homo sapiens will continue to exist on Earth. There will be a continuum of civilization, from tribes to monarchies to communist regimes to democracies, as there are today. But they will all have their technological progress mostly frozen, though there will be some drag from the top which gradually eliminates older forms of civilization. There will be a future iteration of civilization built by the hybrids, and I'm not sure what that would look like yet.
Yeah, I think that's one way it could go!
I think both situations are pretty scary, honestly, and it's hard for me to have high confidence on which one would lead to less risk.
Anthropic doesn't get to make that call though, if they tried the result would actually be:
8 AIs running on 8 machines each with 10 million GPUs
AND
2 million AIs running on 2 million machines, each with 10 GPU's
If every lab joined them, we can get to a distributed scenario, but it's a coordination problem where if you take a principled stance without actually forcing the coordination you end up in the worst of both worlds, not closer to the better one.
I think your scenario is already better, not worse. Those 8 agents will have a much harder time taking action when there are 2 million other pesky little agents that aren't aligned with them.
> - 10 AIs running on 10 machines, each with 10 million GPUs > > OR > > - 10 million AIs running on 10 million machines, each with 10 GPUs
If we dramatically reduced the number of GPUs per AI instance, that would be great. But I think the difference in real life is not as extreme as you're making it. In your telling, the gpus-per-ai is reduced by one million. I'm not sure that (or anything even close to it) is within the realm of possibility for anthropic. The only reason anyone cares about them at all is because they have a frontier AI system. If they stopped, the AI frontier would be a bit farther back, maybe delayed by a few years, but Google and OpenAI would certainly not slow down 1000x, 100x or probably even 10x.
I think the path to the values you allude to includes affirming when flawed leaders take a stance.
Else it’s a race to the whataboutism bottom where we all, when forced to grapple with the consequences of our self-interests, choose ignorance and the safety of feeling like we are doing what’s best for us (while inching closer to collective danger).
How do you figure open sourcing everything eliminates risk? This makes visibility better for honest actors. But if a nefarious actor forks something privately and has resources, you can end up back in hell.
I don't think we can bank on all of humanity acting in humanity's best interests right now.
We can bank on people acting in self-interest. The nefarious actor will find themselves opposed by millions of others that are not aligned with them, so it would be much more difficult for them to do things. It's like being covered by ants. The average alignment of those ants is the average alignment of humanity.
> Many groups that are driven by ideals have still committed horrible acts.
Sometimes, it's even a very odd prerequisite.
There's a simpler explanation than "billionaires with hearts of gold" here. If:
(1) this is a wildly unpopular and optically bad deal
(2) it's a high data rate deal--lots of tokens means bad things for Anthropic. Users which use their product heavily are costing more than they pay.
(3) it's a deal which has elements that aren't technically feasible, like LLM powered autonomous killer robots...
then it makes a whole lot of sense for Anthropic to wiggle out of it. Doing it like this they can look cuddly, so long as the Pentagon walks away and doesn't hit them back too hard.
Weird take when the purpose of the creation is to steal the work of everyone and automate the creation of that work. It's some serious self-deluding to think there's any kind of noble ideal remotely related to this process.
mark my words, they will burn at some point. The government can nationalize it at any moment if they desire.
Flagship LLM companies seem like the absolute worst possible companies to try and nationalize.
1. There would absolutely be mass resignations, especially at a company like Anthropic that has such an image (rightfully or wrongfully) of “the moral choice”. 2. No one talented will then go work for a government-run LLM building org. Both from a “not working in a bureaucracy” angle and a “top talent won’t accept meager government wages” angle (plus plenty of “won’t work for trump” angle) 3. With how fast things move, Anthropic would become irrelevant in like 3 months if they’re not pumping out next gen model updates.
Then one of the big American LLM companies would be gone from the scene, allowing for more opportunity for competition (including Chinese labs)
It would be the most shortsighted nationalization ever.
Then maybe Dario will realize that the moral superiority that he bases his advocacy against Chinese open models is naive at best.
his against Chinese models is smoking screen for their resistance to DOW, they are not even pretending
Better naive than malicious.
Every day I hope the Chinese models get "good enough" to drop these corporate ones. I think we are heading towards it.
kid, time to grow up and face the reality
Chinese models are developed by Chinese corporate. they are free and open weight because they are the underdog atm. they are not here for fun, they are here to compete.
The competition is good though, it will push down the prices for all of us. At some point being behind 5% won’t have much practical difference. Most people won’t even notice it.
“I don’t need to win, I just need you to lose”
Would anyone pull a Pied Piper and choose to destroy the thing rather than let it be subverted? I know that's not exactly what PP did, but would a decision like that only ever happen in fiction?
It wouldn't need to. As sibling commenter pointed out... they'd have a massive exodus of talent, and they'd cease to make progress on new models and would be overtaken (arguably GPT 5.3 has already overtaken them).
Imagine the government trying to force AI researchers to advance, lmao
> I's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.
They are the deepest in bed with the department of war, what the fuck are you on about? They sit with Trump, they actively make software to kill people.
What a weird definition of "enheartening" you have.
Anthropic is by far the most evil company in tech, I don't care. Its worst than Palantir in my book. You won't catch my kids touching this slave making, labor killing brain frying tech.
I was reading halfway thru and one line struck a nerve with me:
> But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.
So not today, but the door is open for this after AI systems have gathered enough "training data"?
Then I re-read the previous paragraph and realized it's specifically only criticizing
> AI-driven domestic mass surveillance
And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance
A real shame. I thought "Anthropic" was about being concerned about humans, and not "My people" vs. "Your people." But I suppose I should have expected all of this from a public statement about discussions with the Department of War
Is it seriously called the department of war now? Did they change that from DoD?
[0] https://tvtropes.org/pmwiki/pmwiki.php/Main/PeoplesRepublicO...
Elon, is that you?
Is GP wrong?
I think it's phrased just fine. It's not up to Dario to try to make absolute statements about the future.
How about the present and his personal beliefs?
"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries."
This reads like his objection is not on "autocratic", but on "adversaries". Autocratic friends & family are cool with him. A clear wink to a certain administration with autocratic tendencies.
Some people can’t help themselves to read this like a Ouija board.
That all works right up until the United States becomes autocratic and that process is well underway.
So yes, the second part of your comment is what is going to come back to haunt them. The road to hell is paved with the best intentions.
Western liberal ideals are better than the opposite. It is misanthropic to build autocratic societies.
western liberal democracies tend to use "autocratic" as an epithet (though, i guess, there are fewer countries that marker is used against for which it's false now than ~50 years ago). for the first sentence, "the opposite" of western liberal ideas will yield 10 answers from 9 people :-)
He does it all the time when it helps selling his products though, strange
> It's not up to Dario to try to make absolute statements about the future.
Thats insane to say, given that he's literally acting in the public sphere as the mouth of Sauron for how AI will grow so effective as to destroy almost everyone's jobs and AGI will take over our society and kill us all.
All I'm trying to say is that nobody can predict the future, and therefore saying statements pretending something will be a certain way forever is just silly. It's OK for him to add this qualifier.
This doesn’t read to me like it was personally written by one person. It’s not Dario we should read this as being written by, it’s Anthropic as an entity.
He does it all the time.
And yet he’s quite happy to make just that when it’s meant to drum you up his own product for investors
He’s one of the most influential people when it comes to what future we’ll have. Yes, it’s up to him.
I think he's more pragmatic than that.
I'm glad I'm not alone in finding the specific emphasis on drawing the line at domestic surveillance a bit odd. Later they also state they are against "provid[ing] a product that puts America’s warfighters and civilians at risk" (emphasis mine). Either way I'm glad they have lines at all, but it doesn't come across as particularly reassuring for people in places the US targets (wedding hosts and guests for example).
See also: the entire history of Silicon Valley
When Google Met Wikileaks is a fun read, billionaire CEOs love to take Americas side.
I think it goes without saying that ones the systems are reliable, fully-autonomous weapons will be unleashed on the battlefield. But they have to have safeguards to ensure that they don't turn on friendly forces and only kill the enemy. What Anthropic is saying, is that right now - they can't provide those assurances. When they can - I suspect those restrictions will be relaxed.
What else would you expect? The military is obviously going to develop the most powerful systems they can. Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”? What if Anthropic ends up developing the safest, most cost effective systems for that purpose?
> Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”?
Yes. Absolutely.
And what? Get nationalized? Get labelled as terrorists?
The US system doesn't empower a company to say no. It should though.
You, me or a company don’t need a system empowerments to say "no" though. Just say it. I would certainly choose being called "terrorist" in front of the class over helping to deploy weapons, let alone autonomous ones.
You own nothing but your opinion. (No offense to personal property aficionados)
I don't understand this, for example, what would you have done if you where Ukrainian right now ? (before 2014 arguably start of conflict and after invasion)
That is an interesting question, very far from my daily concern and brings dilemmas when I think about it. My response would probably be "I don’t know".
However Anthropic situation is very different: there’s no ongoing invasion of the USA, and they traditionally attack other countries once in a while (no judgment) so the weapons upgrade will be "useful" on the field.
It is of course possible to argue that the reason there is no ongoing invasion of the USA is because of our continued investment in technology for killing people
Yes, I absolutely don’t want tech companies to use the money I pay them to harm people. How is that remotely controversial?
> I absolutely don’t want tech companies to use the money I pay them to harm people.
Just one example of many, but the companies that make the CPUs you and all of use use every day, also supply to militaries.
I am unaware of any tech company that directly does physical warfare on the battlefield against humans.
Another example: those companies that make drinkable water, also supply to militaries. But there might be a difference between supplying drinking water and making AI killing machines
> making AI killing machines
What’s an example of a company that’s making killing machines that a typical consumer or someone HN might be buying product or services from?
The easy answer is Westinghouse (look for the youtube short about "things that spin"...)
As far as I know, Apple does not supply their chips for military use.
Time to stop paying your taxes. :P
Because it's painfully short-sighted, or maliciously ignorant.
No, it’s just that I don’t want the money I spend to have blood on it. Trivially simple.
What if I told you that it's way too late for that?
Well, we have to try to live as virtuously as we can with the means and remedies available to us.
I'd prefer companies not help the military develop the most powerful weapons possible given we're in the age of WMDs, have already had two devastating world wars and a nuclear arms race that puts humanity under permanent risk.
There is an extremely straightforward argument that WMDs are precisely what prevented the outbreak of direct warfare between major powers in the latter 20th. (Note that WWI by itself wasn’t sufficient to prevent WWII!)
You can take issue with that argument if you want but it’s unconvincing not to address it.
There’s also an extremely straightforward argument that if the current crop of authoritarian dictatorial players in power now had been then that the outcome of the latter 20th would have been much different.
The guy who authorized the Manhattan project:
- had four [!] terms, a move so anomalous it was subsequently patched by constitutional amendment
- threatened court-packing until SCOTUS backed down and stated rubber-stamping his agenda
- ruled entire industries by emergency decree in a way that contemporaries on the left and right compared to Mussolini
- interned 120k people without due process, on the basis of ethnicity
- turned a national party into a personal patronage system
- threatened to override the legislature if it didn’t start passing laws he liked
Not even saying any of this is even good or bad, clearly in the official history it was retroactively justified by victory in WWII. But it’s a bit rich to say that the bomb wasn’t developed under authoritarian conditions.
Great, now go ahead and prove that AI also reaches strategic equilibrium. This was pretty much self-evident with nuclear weapons so should probably be self-evident for AI too, if it were true.
That's a little bit like saying the bullet in the gun prevented someone getting shot while playing Russian Roulette. We pulled back that hammer several times, and it's purely happenstance that it didn't go off. MAD has that acronym for a reason.
I agree that the risk of an accidental strike was a huge problem with the theory of nuclear deterrence, but the question is: compared to what? In expectation or even in a 1st percentile scenario, was MAD worse than a world where the USSR is a unilateral nuclear power? For that matter, what would it have taken to get a stronger SALT treaty sooner?
I think you need to have people thinking through this stuff at a nuts-and-bolts level if you want to avoid getting dominated by a slightly less nice adversary, and so too with AI. Does a unilateral guarantee not to build autonomous killbots actually make anyone safer if China makes no such promise, or does that perversely put us at more risk?
I’d love to know that the “no killbots, come what may” strategy is sound, but it’s not clear that that’s a stable equilibrium.
> Does a unilateral guarantee not to build autonomous killbots actually make anyone safer if China makes no such promise, or does that perversely put us at more risk?
China considers all lethal autonomous weapons "unacceptable", calling all countries to ban it. Countries like the US and India refuse to back such proposals. See China's official stands on this matter below.
https://documents.unoda.org/wp-content/uploads/2022/07/Worki...
I totally understand that you got brainwashed by the media, but hey you appearantly have internet access, why can't you just do a little bit research of your own before posting nonsense using imagination as your source of information?
So would you have preferred the Nazis to develop the most powerful weapons and they win the world war? (which they were trying to do?)
With the benefit of hindsight we know the Nazis in fact were not racing to develop The Bomb. Reasonable assumption to have oriented around at the time though.
Its not just the atomic bomb im talking the usa had the best production of fighter jets, bombers, all kinds of communication technology, deciphering technology all the ammunition, all of those together beat the Nazis and they were trying their best to develop better and more advanced technologies than usa!
If Anthropic does give the DoD what they want, does that magically stop China, Iran, Russia, etc from advancing in AI arms development?
If Anthropic doesn't give the DoD what they want, does that mean that China, Iran, Russia, etc magically leapfrog not only Anthropic, but the entire US defense industry, and take over the planet?
> If Anthropic does give the DoD what they want, does that magically stop China, Iran, Russia, etc from advancing in AI arms development?
No
> If Anthropic doesn't give the DoD what they want, does that mean that China, Iran, Russia, etc magically leapfrog not only Anthropic, but the entire US defense industry, and take over the planet?
The risks are high, so if you're the US, you want a portfolio of possible winners. The risks are too high to not leverage all the cutting edge AI labs.
Anthropic was already giving them that. It’s not like they need domestic mass surveillance or autonomous kill bots to have a portfolio of possible winners. If the goal is to keep the US competitive in AI, this whole process was actively unhelpful. Honestly more helpful for our adversaries than for us.
Did WMDs have a meaningful effect on stopping the Nazis? I thought the bomb wasn't dropped until after they surrendered.
The only two atomic weapons ever deployed weren't even targeting Nazi Germany, but Japan. Dark but true: they were both deliberately and knowingly targeted at civilian populations.
And inflicted less damage than the fire bombing campaigns on civ pop centers that were carried out along side the A-bombs.
The A-bombs were not the worst part of the attack on Japan. And thus were not "needed to end the war". They were part of marketing /the/ super power.
"Needed to win the war," no. The US could've continued to firebomb and then follow with a land invasion, which would've killed both more Japanese and more Allies.
Was it the best path to end the war? Certainly.
The modern argument around targeting civilians or not was not even relevant at the time due to the advent of strategic bombing, which itself was seen as less-horrific than the stalemated trench warfare of WW1. The question was only whether to target civilian inputs to the military with an atomic weapon (and hopefully shock & awe into submission) or firebomb and invade.
Well, if they hadn't stated that were that far in line with the administration's ideals, they would likely already be fully blacklisted as enemies of the state. Whether they agree with what they're saying or not, they're walking on egg shells.
Fully autonomous weapons are a danger even if we can reliably make it happen with or without AI.
It essentially becomes a computer against human. And such software if and when developed, who's going to stop it from going to the masses? imagine a software virues/malwares that can take a life.
I'm shocked very few are even bothered about this and is really concerning that technology developed for the welfare could become something totally against humans.
As a practical matter, it makes zero sense for a tech company with perhaps laudable goals and concerns about humanity to have any control whatsoever over the use of a product it sells for war. You don't like what it could potentially be used for, or are having second thoughts about being involved in war making at all, don't sell it, which appears to be Amodei's position now. That's perhaps laudable, from a certain point of view.
On the other hand, your position is at best misguided and at worst hopelessly naive. The probability that adversaries of the United States, potential or not, are having these discussions about AI release authority and HITL kill chains is basically zero, other then doing so at a technical level so they get them right. We're over the event horizon already, and into some very harsh and brutal game theory.
They didn’t sell it no strings attached, they sold it with explicit restrictions in their contract with DoW and the DoW agreed to that contract. Their mistake was assuming they operate in a country where rule of law is respected, clearly not the case anymore given the 1000s of violations in the last year.
Contracts evolve, don't be naive. If you invent the Giga Missile and the government buys it for its war machine, and then you invent the God Missile right after, the government is going to come back again to renegotiate terms.
They also posted on Instagram saying autonomous killing would hurt Americans. So non American people don’t matter?
You gotta keep in mind that the primary goal of this statement is to avert the invocation of the defense production act.
He is trying to win sympathies even (or especially?) among nationalist hawks.
I said exactly this a few days ago elsewhere. It’s disappointing that they (and often other American companies) seem to restrict their “respect” and morals to Americans only. Or maybe it’s just semantics or context because the topic at hand is about americans? I don’t know but it gives “my people are more important than your people”, exactly as you said in your last paragraph
They’re being used today by the military. So, they are never going to be against mass surveillance. They can scope that to be domestic mass surveillance though.
We already have traditional CV algorithms and control systems that can reliably power autonomous weapons systems and they are more deterministic and reliable than "AI" or LLMs.
But then a person can be blamed for the outcome. We can't have that!
> the door is open for this after AI systems have gathered enough "training data"?
Sounds more like the door is open for this once reliability targets are met.
I don't think that's unreasonable. Hardware and regular software also have their own reliability limitations, not to mention the meatsacks behind the joystick.
Unfortunately I think the writing is clearly on the wall. Fully autonomous weapons are coming soon
And that's the end of democracy. One of the safe guards of democracy is a military that is trained to not turn against the citizens. Once a government has fully autonomous weapons its game over. They can point those weapons at the populous at the flip of the switch.
The original Terminator movie doesn’t seem so far fetched now (minus the time travel).
Right - for the same reasons a Waymo is safer than a human-driven car, an autonomous fighter drone will ultimately be deadlier than a human-flown fighter jet. I would like to forestall that day as long as possible but saying "no autonomous weapons ever" isn't very realistic right now.
If they had access to them in Ukraine, both sides would already be using them I expect. Right now jamming of drones is a huge obstacle. One way it's dealt with is to run literal wired drones with massive spools of cable strung out behind them. A fully autonomous drone would be a significant advantage in this environment.
I'm not making a values judgment here, just saying that they will absolutely be used in war as soon as it's feasible to do so. The only exception I could see is if the world managed to come together and sign a treaty explicitly banning the use of autonomous weapons, but it's hard for me to see that happening in the near future.
Edit: come to think of it, you could argue a landmine is a fully autonomous weapon already.
Hah, I had the same realization about landmines. Along with the other commenter, really it would be better to add intelligence to these autonomous systems to limit the nastiness of the currently-deployed systems. If a landmine could distinguish between a real target and an innocent civilian 50yrs later, it's be a lot better.
A landmine blowing up the enemy civilian 50 years later is probably seen as an advantage by the force deploying them. A bit like "salting the earth."
Depressingly true.
Many landmines disarm after a while.
It's weird that people still think that the people who's job it is to kill people, or make things that kill people, really care about people more than the killing part. They don't give a shit who blows up, as long as no one comes knocking on their door about it.
It's only Anthropic with their current models saying no. Fully autonomous weapons have been created, deployed, and have been operational for a long time already. The only holdout I've ever heard of is for the weapons that target humans.
Honestly, even landmines could easily be considered fully autonomous weapons and they don't care if you're human or not.
There are also good reasons for a lot of countries banning mines. https://en.wikipedia.org/wiki/Ottawa_Treaty
Notably USA is not one of those signatories.
The Ghandi of the corporate world is yet to be found
Considering he slept naked with his grandniece (he was in his 70s, she was 17), I'd say there are a lot of them in the corporate world. Though probably more in politics.
Enemies will have AI powered weapons. We need to be at the cutting edge of capability.
The sentence prior explicitly says this. There’s no dishonesty here.
“Even fully autonomous weapons (…) may prove critical for our national defense”
FWIW there’s simply no way around this in the end. If your even attempts to create such weapons, the only possible defensive counter is weapons of a similar nature.
> And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance
You have to be deliberately naive in a world where five eyes exists to somehow believe that "foreign" mass surveillance won't be used domestically.
So AI systems are not reliable enough to power fully autonomous weapons but they are reliable enough to end all white-collar work in the next 12 months?
Odd.
do you really need to be told there is a difference in 'magnitude of importance' between the decision to send out an office memo and the decision to strike a building with ordinance?
a lot of white collar jobs see no decision more important than a few hours of revenue. that's the difference: you can afford to fuck up in that environment.
I know what point you are trying to make, but these decisions are functionally equivalent.
Striking a building with ordinance (indirect fires, dropped from fixed wing, doesn't really matter) involves some discernment about utility, secondary effects, probability of accomplishing a given goal, and so on. Writing an office memo (a good one at least) involves the same kind of analysis. I know your point is that "people will die" when you blow up a building, but the parameters are really quite similar.
They’re not saying “AI can replace some menial white collar tasks”, they’re saying AI can replace all white-collar work.
Yes, if you fuck up some white collar work, people will die. It’s irresponsible.
>Yes, if you fuck up some white collar work, people will die. It’s irresponsible.
A lot of the work in those sectors are not the ones that are being targeted for fully autonomous replacement. They likely would be in the future though.
Shh! there's a lot of money riding on this bet, ahem.
Anthropic doesn't forbid DoW from using the models for foreign surveillance. It's not about harming others, it's about doing what is best for humanity in the long run, all things considered. I personally do not believe that foreign surveillance is automatically harmful and I'm fine with our military doing it
If we are talking about what's best for humanity in the long run.. thinking about human values in general, what makes American citizens uniquely deserving of privacy rights, in ways that citizens of other countries are not?
Snowden revealed that every single call on Bahamas were being monitored by NSA [1]. That was in 2013. How would this be any worse if it were US citizens instead?
(Note, I myself am not an US citizen)
Anyway, regardless of that, the established practice is for the five eyes countries to spy on each other and share their results. This means that the UK can spy on US citizens, the US can spy on UK citizens, and through intelligence sharing they effectively spy on their own citizens. That's what supporting "foreign surveillance" will buy you. That was also revealed in 2013 by Snowden [2]
[1] https://theintercept.com/2014/05/19/data-pirates-caribbean-n...
[2] https://www.theguardian.com/world/2013/dec/02/nsa-files-spyi...
This isn't about privacy rights, it's about war
I'm not suggesting that Anthropics models should be used by foreign governments for domestic surveillance
I'm not worried about foreign governments spying on Americans, as long as the US government is aligned. I'm worried about my own government becoming misaligned
But.. the US doesn't perform mass surveillance on foreign people only when it's at war. It doesn't perform mass surveillance only on adversarial nations it potentially could be at war either.
This absolutely is about privacy.
> I'm not worried about foreign governments spying on Americans, as long as the US government is aligned. I'm worried about my own government becoming misaligned
Those foreign governments are spying on Americans and then sharing the results with the US government because the US government is misaligned with the interests of its own people
The United States gets to spy on countries when it's in the interest of the United States to do so. This isn't complicated. We get to spy on quite literally whoever we want abroad, within various legal and well established parameters, at at the risk of offending the governments of the spied-on. "It's only okay for the United States to spy on foreigners when they're in a shooting war with them" is silly.
So you are saying its OK to spy on others because the US say is fine?
Maybe the others on here are not happy that this company is supporting a fascist government in committing international aggressions on other countries which has been condemned by the majority of countries around the world.
If the United States is ever, in the future, at war with an adversary using truly autonomous and functional killing machines; you may find yourself praying that we have our own rather than praying human nature changes. Of course, we must strive for this to never happen; but carrying a huge stick seems to be the most effective way to reduce human death and suffering from armed conflict.
> but carrying a huge stick seems to be the most effective way to reduce human death and suffering from armed conflict.
Citation needed. I believe there's at least some research showing the opposite: military buildup leads to a higher risk of military conflict
This is the strongest statement in the post:
> They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.
This contradictory messaging puts to rest any doubt that this is a strong arm by the governemnt to allow any use. I really like Anthropic's approach here, which is to in turn state that they're happy to help the Governemnt move off of Anthropic. It's a messaging ploy for sure, but it puts the ball in the current administration's court.
Does the Defense Production Act force employees to continue working at Anthropic?
No. It really only binds the corporation, but it does hold the executives/directors personally responsible for compliance so they’d be under a lot of pressure to figure out how to fix enough leaks in the ship to keep it afloat. Any individual director/executive could quit with little issue, but if they all did in a way that compromised the corporations ability to function, the courts could potentially utilize injunctions/fines/jail time to compel compliance from corporate leaders.
Also there’s probably a way to abuse the Taft-Hawley act beyond current recognition to force the employees to stay by designating any en-masse quitting to be a “strike / walk off / collective action”. The consequences to the individuals for this is unclear - the act really focuses on punishing the union rather than the employees. It would take some very creative maneuvering to do anything beyond denying unemployment benefits and telling the other big AI companies (Google / ChatGPT / xAI) to blacklist them. And probably using any semi-relevant three letter agency to make them regret their choice and deliver a chilling effect to anyone else thinking of leaving (FBI, DHS, IRS, SEC all come to mind).
If the administration could figure out how to nationalize the company (like replace the leadership with ideologically-aligned directors who sell it to the government) then any now-federal-employees declared to be quitting as part of a collective action could be fined $1,000 per day or incarcerated for up to one year.
It’s worth noting that this thesis would get an F grade at any accredited law school. Forcing people to work is a violation of the 13th amendment. But interpretations of the constitution and federal law are very dynamic these days so who knows.
Maybe Anthropic could replace its employees with AI. Unlikely the admin is going to enjoy setting precedent that employees are protected against being replaced by AI.
[flagged]
Presidency can’t be extended by wars.
FDR's tenure might have created an amendment to that effect, but it's not like this administration hasn't used a legal loophole before.
Perhaps there's a war, that a misguided congress won't declare as such, and a certain vice president that runs for president, with a certain someone as his vice president...
Not constitutionally, at any rate.
What would happen if he tried by not vacating at the end of his term, when challenged in court, shut down by his own Supreme Court? I mean let’s be real, all it really takes is him not giving up the white house. I sometimes wonder.
Steve Bannon advised Trump to do this in 2020. Question is what would the Secret Service and Pentagon do once the election is certified for the winning candidate? If their loyalty remains to the Constitution, Trump would be forcibly removed.
We went through this when it looked like he might not leave last time. What happens is the Marines show up and politely throw his ass to the curb.
You do not under any circumstances gotta hand it to the American military but they do seem unwilling to play a role in Trump's let's say extraconstitutional ambitions. At least a junta doesn't seem likely. Without the military behind him he's just a senile old pedophile. What's he going to do, lock himself into the Oval Office?
The military is the one drone striking boats in the Caribbean. The military invaded a foreign country we are not at war with to kidnap its leader. The military dropped bombs on a foreign country we are not at war with. The military is patrolling the streets of DC and other cities. The military is the one spending the money on new immigrant detention centers. I fail to see how they are standing up to Trump's illegal acts. I'm not 100% sure the White House Marines will just throw Trump to the curb if Congress manages to certify the election in favor of someone else.
See https://www.culawreview.org/current-events-2/the-22nd-amendm...
Specifically section on martial law in wartime context. It’s not very clear but I just feel like the norms and laws will be stretched or broken, as the administration has already done numerous times.
… not yet. The problem with a norm breaking presidency like Trump’s and the GOP power structure is that no norm is safe, including elections.
Zelensky's presidency was supposed to end couple of years ago. Would it be different in USA?
Different constitutions. Were you trying to muddy the waters, or are you just ignorant of the details?
Yes,
> this is a strong arm by the governemnt to allow any use
It’s a flippant move by Hegseth. I doubt anyone at the Pentagon is pushing for this. I doubt Trump is more than cursorily aware. Maybe Miller got in the idiot’s ear, who knows.
It definitely has the aroma of either Bannon or Miller or both.
Believe it or not Steve Bannon is quite concerned about AI development:
>Over on Steve Bannon's show, War Room -- the influential podcast that's emerged as the tip of the spear of the MAGA movement -- Trump's longtime ally unloaded on the efforts behind accelerating AI, calling it likely "the most dangerous technology in the history of mankind."
>...
>"You have more restrictions on starting a nail salon on Capitol Hill or to have your hair braided, then you have on the most dangerous technologies in the history of mankind," Bannon told his listeners.
https://abcnews.com/US/inside-magas-growing-fight-stop-trump...
> It’s a flippant move by Hegseth.
Care to convert this into a prediction?: are you predicting Hegseth will back down?
> I doubt anyone at the Pentagon is pushing for this.
... what does this mean to you? What comes next? As SecDef/SecWar, Hegseth is the head of the Pentagon. He's pushing for this. Something like 2+ million people are under his authority. Do you think they will push back? Stonewall?
One can view Hegseth as unqualified, even a walking publicity stunt while also taking his power seriously.
It matters because the whole media is selling this as a Pentagon initiative, while probably 75% in the Pentagon think this is snake oil just like the previous Microsoft VR goggles.
If they don't oppose directly, large bureaucracies know how to drag their feet until the midterms at least, if not until 2028. Soldiers literally dragged their feet at the glorious Trump military parade, when they walked disinterested and casually instead of marching.
> If they don't oppose directly, large bureaucracies know how to drag their feet until the midterms at least, if not until 2028.
While I grant the spirit of this point, I don't think it applies to this situation. The "bureaucratic resistance" explanation doesn't fit when you think about what would happen next. Here is my educated guess based on some research:
- contract termination: Hegseth can direct the relevant contracting officer(s) at the Pentagon to terminate the contract. This could happen within days. Internal stonewalling here might add weeks of delay, but probably not more than that.
- supply chain risk designation: Hegseth signs a document, puts it into motion. Then it becomes a bureaucratic process that chugs along. Noncompliant contracting officers probably would be fired, so this happens within weeks or a few months. Substantial delays could come from litigation, to be sure -- but this isn't a case where civil service stonewalling saves us.
- Defense Production Act: would require an executive order from Trump. This would go into effect right away, at least on paper. It would very likely lead to litigation and possibly court injunctions.
My point is that non-compliant civil servants at the Pentagon probably can't slow it down very much. (I recommend they do what their oath and conscience demands, to be sure!) Hegseth has shown he's willing to fire quickly and aggressively. I admire people who take a stand against Hegseth and Trump -- they are a nasty combination of dangerous and corrupt. At the moment, they appear weaker than ever. Sustained civil pushback is working.
Let's "roll this up" back to my original point. I responded to a comment that said "I doubt anyone at the Pentagon is pushing for this.", asking the commenter to explain. I don't think that comment promotes a better understanding of the situation. It is more useful to talk about the components of the situation and some possible cause-effect relationships.
> are you predicting Hegseth will back down?
I think he may be able to cancel Anthropic’s contract. But no more. He won’t back down as much as be overruled.
> As SecDef/SecWar, Hegseth is the head of the Pentagon
On paper. Also, being the de jure head of something doesn’t automatically mean you speak for it as a whole.
> while also taking his power seriously
Authority and power are different. A plane pilot has a lot of authority. They don’t have a lot of power.
First of all, there's no such thing as "Department of War". A department name change is legal/binding only after it's approved by the Senate. Senator Kelly is still calling it DoD (Department of Defense).
> Mass domestic surveillance.
Since when has DoD started getting involved with the internal affairs of the country?
https://en.wikipedia.org/wiki/United_States_Department_of_De...
The Senate??
Any law changing the name of the Defense Department would have to be passed by both Houses of Congress and signed by the President (or by 2/3 of both Houses overriding a Presidential veto). The Senate has no such authority on its own.
Right! I meant to write ‘Congress’, but mistakenly wrote Senate.
It's whatever what the people who have the power want to call it. What is written on a piece of paper is irrelevant if it is not acted upon.
If the rename gets struck down then they don't have the power. If it doesn't they have the power.
There are many dictatorships that built their power in the face of people claiming that they can't do what they planned because it was illegal.
Until they did it anyway.
I don’t know, to me it seems like their MO to make an announcement and not follow up on it. All the paperwork still says DOD, all the contracts are with DOD, there is no legal entity called DoW
This is fascism
I don't think many are doubting that. I'm not talking about the way things should be. I'm talking about the way they are.
www.defense.gov redirects to www.war.gov but I like how you refer to Wikipedia as the authoritative source to prove this functionally irrelevant and aggressive Reddit-style seething.
The talk page on the linked Wikipedia article arguing about logos is just as deranged. It's very important to realize there is literally nothing you—or anyone else—can do about this.
They've already spent millions on the name change. It's also the original name of the department. IMO it's a more honest name
More like the government is treating this like the near term weapon it actually is and, unlike the Manhattan project, the government seems to have little to no control.
Anthropic has been pushing for commonsense AI regulation. Our current administration has refused to regulate AI and attempted to prevent state regulation.
"The government doesn't have control of this technology" is an odd way to think about "the government can't force a company to apply this technology dangerously."
Note that they always attempt to exert control they don’t have. They’re always bluffing, and they keep losing. Respond accordingly.
> Respond accordingly.
“Four key words (…) The only phrase that can genuinely make a weak bully go away, and that is: Fuck You, Make Me.”
https://m.youtube.com/watch?v=ohPToBog_-g&t=1619s
Paper tigers
The government should be entitled to any lawful use of a product they purchase, not uses dictated solely by the provider. It's up to courts to decide what lawful use is, it's not up to these companies to dictate.
The product is a service, and they agreed to a contract. Now they don't like the contract.
Is your view that contracts with the government should be meaningless? That the government should be able to unilaterally, and without recourse, change any contract they previously agreed to for any reason, and the vendor should be forced at gunpoint to comply?
If you do believe this, then what do you believe the second order effects will be when contracts with the government have no meaning? How will vendors to the government respond? Will this ultimately help or hinder the American government's efficacy?
Seriously.
Hegseth trying to play “I’m altering the deal. Pray I don’t alter it any further” just shows this gang’s total lack of comprehension of second-order effects.
> It's up to courts to decide what lawful use is
No, it’s up to the government to create policy and legislation that outlines what is lawful or not and install mechanisms to monitor and regulate usage.
The fact that an arm of the government wants to go YOLO mode is merely a symptom of the deeper problem that this government is currently not effectual.
Do you have any insight that what they want to do is YOLO, as opposed something your sure you’ll disagree with?
YOLO here refers to unsafe usage of LLMs. Your government is supposed to make legislation that protects all of its citizens, it’s not “what you agree with” game.
Terms of Service would like to have a word....
Not like limiting uses of products is anything new
Not really. Services are provided on terms acceptable to both parties. This isn't about what's legal, it's about the terms of the service agreement.
Providers are free who they choose to do business with, or not do business with. Are you arguing that the government should be able to compel a provider to allow their use when it’s well documented the government does not respect nor adhere to the rule of law? I think you misunderstand commerce and contract law.
Providers are bound by plenty of laws that alter how they do business or who they do business with.
You can’t say “no disabled people at your business”. Hell, you can’t even say “no fake service animals at my restaurant”. Many in America also think you can’t say no girls in the Boy Scouts, or no men in a women’s locker room.
Strange take
Amazing to read this. Hoping you are not an American… Reading this thread is like comrade after comrade!
> This contradictory messaging puts to rest any doubt that this is a strong arm by the governemnt to allow any use.
Why the hell should companies get to dictate on their own to the government how their product is used?
Every company is free to determine its terms of use. If USG doesn’t like them they should sign a contract with someone else.
Can I run a business and say “No use by insert race here”? If they don’t like it, they can shop somewhere else, right?
Kegsbreath isn't a protected class.
Every company is free to state their terms of use, but not all have been upheld when challenged
What’s your angle here? I’m genuinely curious. If the government told you that you had to muck out portable bathrooms with your bare hands even if you didn’t want to, wouldn’t you find that objectionable?
Because technology companies know more about their product's capabilities and limitations than a former Fox News host? And because they know there's a risk of mass civilian casualties if you put an LLM in control of the world's most expensive military equipment?
Because the government is here to serve us. Not the other way around.
The government has a responsibility to protect its constituents. Sometimes that requires collaboration. This isn’t hard.
Is this one of those times? Seems pretty clear it's not.
The third amendment is there for a reason. I am a third amendment absolutist and willing to put my life on the line to defend it.
I wonder what you can't justify this way.
That’s a good question. Assuming a righteous and just government:
The government couldn’t justify the killing of innocent civilians.
The government couldn’t justify the killing of the unborn.
The government couldn’t justify eugenics.
There are objective moral absolutes.
Same reason they cant quarter troops in your house: the law
There are a couple of notable Supreme Court cases in this area:
https://en.wikipedia.org/wiki/Masterpiece_Cakeshop_v._Colora...
https://en.wikipedia.org/wiki/303_Creative_LLC_v._Elenis
> Why the hell should companies get to dictate on their own to the government how their product is used?
Well:
"""
Imagine that you created an LLC, and that you are the sole owner and employee.
One day your LLC receives a letter from the government that says, "here is a contract to go mine heavy rare earth elements in Alaska." You don't want to do that, so you reply, "no thanks!"
There is no retaliation. Everything is fine. You declined the terms of a contract. You live in a civilized capitalist republic. We figured this stuff out centuries ago, and today we have bigger fish to fry.
"""
* https://x.com/deanwball/status/2027143691241197638
This is a terrible analogy. Imagine you’re an LLC that signed a contract to mine minerals, but your terms state you’d only mine in areas you felt safe. OSHA says it’s safe but you disagree, because….. any number of reason unknowable to an outsider. Maybe you just don’t like this OSHA leadership. That is more like what is happening.
Signing a contract with Anthropic assuming they wouldn’t rug pull over their own moral soapbox was mistake number one.
I love anthropic products and heavily use them daily, but they need to get off their high horse. They complain they’re being robbed by Chinese labs - robbed of what they stole from copyright holders. Anthropic doesn’t have the moral high ground they try to claim.
The (hypothetical) contract is clear, though. The condition is stated in objective terms: “in areas you felt safe.” If the Government agrees to this, then they should be bound just like any private counterparty would. If the Government didn’t agree to this, they should have negotiated that term out in favor of their preferred terms.
Is it a rug pull? Where in the terms of service does anthropic say their models can be used for autonomous weapons and mass domestic surveillance?
Those aren't contradictory at all. If I need a particular type of bolt for my fighter jet but I can only get it from a dodgy Chinese company, then that bolt is a supply chain risk (because they could introduce deliberate defects or simply stop producing it) and also clearly important to national security. In fact, it's a supply chain risk because is important to national security.
No, in your example, if the dodgy Chinese company is a supply chain risk due to sabotage, why would they invoke an act to force production of the bolts from the same company for use for national defense preparedness, which would be clearly a national security risk?
The OP specifically mentions this in the context of "systems" (a vague, poorly-defined term) and "classified networks" in which Anthropic products are already present. Without more details on what "systems" these are or the terms of the contracts under which these were produced it's difficult to make a definitive judgement, but broadly speaking it's not a good thing if the government is relying on a product which Anthropic has designed to arbitrarily refuse orders by its own judgement.
I really don't see how anybody could think a private defense contractor should be entitled to countermand the military by leveraging the control it has over products it has already sold. Maybe the terms of their contract entitled them to some discretion over what orders the product will carry out, but there's no such claim in the OP.
>I really don't see how anybody could think a private defense contractor should be entitled to countermand the military by leveraging the control it has over products it has already sold. Maybe the terms of their contract entitled them to some discretion over what orders the product will carry out, but there's no such claim in the OP.
I don't think that is what is happening. What most likely is happening is that they want Anthropic to produce new systems due to the success of the previous ones, but they are refusing to do so because the new systems are against their mission. What seems like the DoD is attempting to do, on one hand, is call them a supply chain risk to limit Anthropic's business opportunities with other companies, and then, on the other hand, simultaneously invoke DPA so that they can compel them to make the new system. But why would the government want to compel a company to make a system for them due to a need for national prepareness that they designated as such a supply chain risk that they forbid other companies that provide government services from doing business with due to the national security risk of having a sabotaged supply chain? It doesn't really make sense, other than from a pure coercion perspective.
>limit Anthropic's business opportunities with other companies
Does it necessarily prevent other companies from doing business with them or does it prevent other companies from subcontracting them on government projects? The term "supply chain" leads me to think it's the latter.
Is that relevant to the actual point?
It's easy to resolve an alleged contradiction by just ignoring one half of it lol
Try introducing DPA invocation into your analogy and let's see where it goes!
"Supply chain risk" is a specific designation that forbids companies that work with the DOD from working with that company. It would not be applied in your scenario.
The analogy doesn't work here ... In your scenario they are ok with using the bolt as long as the Chinese company promises to remove deliberate defects - which is of course absurd ... AND contradictory.
I respect the Anthropic leadership for not being greedy like many others
An organization character really shows through when their values conflict with their self-interest.
It's inspiring to see that Anthropic is capable of taking a principled stand, despite having raised a fortune in venture capital.
I don't think a lot of companies would have made this choice. I wish them the very best of luck in weathering the consequences of their courage.
The problem is that this is a decision that costs money. Relying on a system that makes money by doing bad things to do good things out of a sense of morality when a possible outcome is existential risk to the species is a 100% chance of failure on a long enough timeline. We need massive disincentives to bad behavior, but I think that cat is already out of its bag.
On a long enough timeline literally everything has 100% chance of failure. I'm not trying to be obnoxious, I just wanna say: we only got this one life and we have to choose what to make of it. Too many people pretend things are already laid out based on game theory "success". But that's not what it's about in life at all.
I appreciate that the HN community values thoughtful, civil discussion, and that's important. But when fundamental civil liberties are at stake, especially in the face of powerful institutions and influence from people of money seeking to expand control under the banner of "security", it's worth remembering that freedom has never simply been granted. It has always required vigilance, and at times, resistance. The rights we rely on were not handed down by default; they were secured through struggle, and they can be eroded the same way.
Power corrupts, and absolute power corrupts absolutely.
All of these problems are downstream of the Congress having thoroughly abdicated its powers to the executive.
The military should be reigned in at the legislative level, by constraining what it can and cannot do under law. Popular action is the only way to make that happen. Energy directed anywhere else is a waste.
Private corporations should never be allowed to dictate how the military acts. Such a thought would be unbearable if it weren't laughably impossible. The technology can just be requisitioned, there is nothing a corporation or a private individual can do about that. Or the models could be developed internally, after having requisitioned the data centers.
To watch CEOs of private corporations being mythologized for something that a) they should never be able to do and b) are incapable of doing is a testament to how distorted our picture of reality has become.
> The technology can just be requisitioned
During a war with national mobilization, that would make sense. Or in a country like China. This kind of coercion is not an expected part of democratic rule.
It has always been a part of democratic rule, in peacetime and war. All telco's share virtually all of their technology with the government. Governments in europe and elsewhere routinely requisition services from many of their large corporations. I think it's absurd to think llm's can meaningfully participate in realworld cmd+ctrl systems and the government already has access to ml-enhanced targeting capabilities. I really have no idea what dod normies think of ai, other than that it's infinitely smarter than them, but that's not saying much.
I would like to see a proof of this happening in Europa.
The question of whether or not the government should be able to use AI for targeting without the involvement of humans is a wartime question, since that is the only time the military should be killing people.
Under such a scenario, requisition applies, and so all of this talk is moot.
The fact that the military is killing people without a declaration of war is the problem, and that's where energy and effort should be directed.
Edit:
There's a yet larger question on whether any legal constraints on the military's use of technology even makes sense at all, since any safeguards will be quickly yielded if a real enemy presents itself. As a course of natural law, no society will willingly handicap its means of defense against an external threat.
It follows then that the only time these ethical concerns apply is when we are the aggressor, which we almost always are. It's the aggression that we should be limiting, not the technology.
> an expected part of democratic rule.
give yourself a break. what your fancy democratic rule still holds under Trump?
The private corporation is not dictating to the military, it’s setting the terms of the contract. The military is free to go sign a contract with a different company with different terms, but they didn’t, and now they want to change the terms after the contact was already signed. No mytholgization needed, just contract law.
> The technology can just be requisitioned, there is nothing a corporation or a private individual can do about that.
I strongly doubt this is true. I think if you gave the US government total control over Anthropic's assets right now, they would utterly fail to reach AGI or develop improved models. I doubt they would be capable even of operating the current gen models at the scale Anthropic does.
> Or the models could be developed internally, after having requisitioned the data centers.
I would bet my life savings the US government never produces a frontier model. Remember when they couldn't even build a proper website for Obamacare?
It's also downstream of voters who voted in a president who promised to be dictatorial after failing at an attempted insurrection. We need to deprogram like 70M very confused people.
You should be asking why 70 million people voted the way they did in spite of the events you describe.
I don't think there's been a greater indictment of a political program (the one you likely subscribe to) in history than Trump's landslide victory in 2024.
You guys used to call deprogramming by another name, I think it was called "re-education". Maybe you should sign up for your own class.
I'm curious for your understanding of why Trump won in 2024. If I'm understanding right, you think it was because American voters were rejecting Maoism ("it was called re-education"), to which you think the previous commenter likely subscribes, and which voters associated with Harris/Walz? But I suspect I'm not getting it quite right, and it would be helpful if you would spell out what you mean, rather than just relying on allusion.
(I myself don't have a clear answer to why Trump won, but I don't think it speaks well to the decision-making of the median voter on their own terms, whatever those were, that Trump's now so unpopular despite governing in pretty much the way he said he would.)
> The military should be reigned in at the legislative level, by constraining what it can and cannot do under law.
Is there an example of such a system existing successfully in any other country of the world that has a standing army?
I think any such examination of a military that doesn't actually fight wars is meaningless. The question can only be really asked of a handful of countries.
This is such a depressing read. What is becoming of the USA? Let's hope sanity prevails and the next election cycle can bring in some competent non-grievance based leadership.
This isn't a one-election thing. It's going to be a generational effort to fix what these people are breaking more of every day. I hope I live to see it come to some kind of fruition - I recently turned 50.
Some people are calling it the "American century of humiliation"
No other country that went through a phase like this has ever recovered. Not even in a century.
I won't give in to doomerism.
Germany, Italy and Japan are all wealthy, stable democracies right now. Not without their problems and baggage, but pleasant places in a lot of ways.
All three have active US military bases on their soil and enjoy the economic surplus of living under the US defense umbrella.
The post WWII system was imperfect in many ways, but it was also mutually beneficial and worked out pretty well despite the problems.
And we're throwing that all out the window.
US military bases aren't what made those countries modern, prosperous, democratic places. It took the will of the people to rebuild something better after the war.
Britain essentially ceded its bases to the US at the end of WWII - these things aren’t as durable as they may seem.
All that economic surplus - and much more - flows back to the US. How do you think the US can sustain that amount of USD printing without inflation ? The rest of the world is buying those dollars.
Germany: functionally paralyzed government that has the far right knocking at the door because the fractured coalition of left-centerleft-centerright continues to refuse to do what voters ask for.
Italy: Nominally center-right government, similar problems as Germany, less the energy issues
Japan: just elected a landslide right wing government that is going to change the constitution so they can build an offensive military again
Curious.
I don't perceive those problems to be inherent to the territories or peoples of the countries. All have had potential to change and have done so extensively since the Second World War. There isn't a universal explanation or root behind the issues these countries are facing today, unless you are willing to abstract it to just "economics".
They got bombed to shit first
It'd be nice to avoid that part.
Then it won't work. The current iteration of Germany is fully based on having been bombed to get a fresh start. If you already have something, you won't change it. If you have to re-build, you will implement improvements. No bombs, no reset, no joy.
I am less confident about my predictions for an uncertain future. There's all kinds of ways different things could go.
I didn't say we needed to follow their example to the letter; it was just one counterexample to the "woe and ruin for 100 years" comment.
Yes, but it is actually scientifically correct and proven on all sorts of layers. Biology, Maths, whatever. Not doomsdaying, just data analytics.
Societies are not operating like a sinus curve like say summer/winter cycles. They are upside-down "U"s. After the peak comes decline, but after the decline there is NOT recovery/growth again before you have a reset.
Germany was the huge winner of WW2 in the sense that after having had a high society they directly were allowed to get another such run. But as nobody wants to bomb us ) anymore, Germany is also in decline now waiting for a reset to come one day...
Sadly the USA will also need a reset before things can begin getting better again.
) I was born in Germany and lived there for 40 years.
Ok what about the Netherlands, Spain, Nordic countries?
Very different countries.
The Netherlands for example got their last reset by completely losing the Dutch empire.
Also, some societies have flatter curves than others. That really maps 1:1 to your style and culture of living and where the priorities are.
If your priorities are to be the best as fast as possible (Germany) you will have less time between resets. If your priorities are "let's chill and wait until the coconut falls from the tree into my hand", your society might be able to have a far longer time between resets.
But in the end: It's an iterative process. Which means: There must be iterations.
This sounds about as scientific as phrenology.
James May did a documentary loosely based on this. "The Peoples Car"
Basically analysing the economies of WW2 participants via their automobile industries.
Its staggering how being bombed into the ground has forced technological and economic innovation. And how the inverse, being the bomber, has created stagnation.
I don't think it would matter even if the us did have to start again. The entire us alliance after ww2 benefited from the same structural causes of increased pluralism and egalitarianism. A fractured elite, complex international trade, expanding and increasingly difficult to control communication channels, and a growing bureaucracy. These all inhibit autocratic concentration of power. International trade became uncomplicated, there is one manufacturer that is not a consumer, and many consumers. This leads to an increasingly less fractured elite. The structural reasons for democracy and rules based order are all fading. The us is just a really big canary.
The people running the show are all building generational fallout shelters in new zealand. As seems to be the real 'whitehouse ballroom' plan too. They seem to be expecting that part.
Congress is the problem, but not in the way most describe.
Congress has abdicated its powers because as an institution it is broken. Several inland states with total state wide populations less than that of major metro areas on the coasts have the same amount of senators as every other state has - two. This means voters in a lot of states are over represented. Meanwhile, they say land doesn't vote, but in the United States Senate the cities and localities with the most people that drive much of our growth and dynamism are severely underrepresented. The upper and most important chamber of the Congress is thus undemocratic. Given it's an institution deeply susceptible to minority gridlock that depends on wide margins to do anything, well now more often than not it simply does nothing. An imperial presidency thus frankly becomes the only way the country can actually get most things done.
This two senators for every state arrangement was a compromise agreed to when constitutional ratification was in doubt, when the USA was a weak, newborn country of about 3 million people confined to the Eastern seaboard at a time in our history where our most pressing concern was being recolonized by European powers. The British burned down the White House in 1812 imagine what more they could have accomplished if the constitutional compromises that strengthened the union had not been agreed to.
This compromise has outlived its usefulness. No American today fears a Spanish armada or British regulars bearing torches. These difficult compromises at the heart of America already led to one civil war.
The best we can do is create a broad political movement that entertains as many incriminations as possible (probably around corruption/Epstein, which must make pains to avoid any distinction between say a Bill Clinton or a Donald Trump) so we can get past partisan bickering to get enough of mass movement to try to usher in a new age of constitutional amendment and reform.
If it doesn't happen this cycle of Obama Trump Biden Trump will continue until this country elects someone who makes Trump look like a saint. It can happen. Think of how Trump rehabilitated Bush. We already see the trend getting worse. And if it does, then the post WWII Germany style reset being mentioned here will then become inevitable.
That’s just historically inaccurate. You had massive upheavals across numerous countries throughout time, this is small in comparison to the civil war’s impact on the USA for instance. You think this is worse than half the government rebelling and revolting and killing an amount of young men that today would be equivalent to 6 million deaths? It’s bad now but your comment lacks historical evidence.
China seems to have recovered pretty well.
Not really. China only seems good because there is a war in Europe and the US is shooting themself in the foot. They're polluting and strip mining their country, suppressing wages and funneling the profit into companies all while increasing surveillance and decreasing freedom of opinion. Oh but they put down a few solar panels and then paid for people to write articles about it.
Their economy lifted a bunch of people out of poverty. That's positive.
However, in terms of 'democracy' they're still way worse off than the US right now, even if the US is headed in a bad direction.
> Their economy lifted a bunch of people out of poverty
This is fallacious as every economy that started at extreme poverty lifted a bunch of people out of poverty.
Unless we invent a time machine and do an A|B test we can't really attribute the success to policy when _any_ policy would have clearly lifted out a bunch of people out of poverty (basically almost impossible to not go up from extreme deficit). The closest we can do is look at similar scenarios like Taiwan which also lifted a bunch of people from poverty while retaining more human rights.
Plenty of places have managed to "keep on keepin' on" with their poverty levels.
I'm not saying what they've done was the best way, only way or anything of that sort: only that it happened.
> They're polluting
They absolutely are, but per capita, USA is polluting 49.67 % more than China.
Source: https://worldpopulationreview.com/country-rankings/carbon-fo...
I used to pretend China wasn't absolutely smashing the USA, but it looks like it is. They basically make everything modern civilization relies on, that's an insane amount of leverage over the rest of the world. That combined with renewables and nuclear and their diminishing need for foreign oil because of that is pretty incredible.
>Oh but they put down a few solar panels
the few solar panels in question are a united kingdom worth of green energy each year, about a royal navy worth of marine tonnage every two and they lifted more people out of poverty over the span of two generations than most of the rest of the world combined. Shenzhen produces about 70% of the entire world's consumer drones, now the primary weapon on both sides of the largest military conflict in the world. Xiaomi, a company founded in 2010 15 years ago decided to make electric cars in 2021 and is now successfully selling them.
As Adam Tooze has pointed out it's the single most transformative place in the world, if you're not trying to learn from it you're choosing to ignore the most important place in the 21st century for ideological reasons
They're also speedrunning a world class power distribution system and deploying a massive amount of renewable power amoung a whole mess of other infrastructure. They've got the ability to focus an entire nation into achieving technical goals and they're rapidly improving quality of life in average while maintaining an industrial base that the US can only remember fondly. They might not meet western standards for individual freedoms and rule of law, but they're undoubtedly a rising world power.
This doesn't make much sense. Since the late 19th century, every country that got rich also heavily polluted the environment, though increasingly less over time. As it stands, fossil fuel demand in China has plateaued. The "wage suppression" thing also doesn't track; their citizens got much, much richer since Nixon's visit, despite being on average poorer than Westerners. Their GDP per capita is low because there's like a billion of them in the country.
The only thing to say is that it's still authoritarian. Once that gets a hold of a country, it's very difficult to shed off. Interestingly, both South Korea and Singapore shifted away from being dictatorships and were not ideologically socialist. Countries taken over by Communists remain authoritarian. The true believers will never give that up.
Agree with much of this. However: plenty of Central/Eastern European countries seem like they have pretty definitively shaken off communism in favor of pretty standard European style capitalism/social democracy.
That is true, though I chalk some of that up to disdain for Russian imperialism/colonialism, and bargaining to remain out of its influence
U.S. Civil War? Roman Crisis of the 3rd Century? Russian Revolution? England's War of the Roses? China's periodic dynastic changes?
They usually don't come back with the same political organization - that's sorta the point. But plenty of civilizations come back in a form that is culturally recognizable and even dominate afterwards.
I’d be interested to see some specific examples cited as it’s hard to take this comment at face value.
The Unenlightenment. Dereconstruction.
> No other country that went through a phase like this has ever recovered. Not even in a century.
Oh I can think of a couple in the '40s that bounced back after a while.
This is a laughably ridiculous assertion.
Rome was 'in decline' for 1000 years... these things are mostly feel good blather and not realistic statements on the position of nations
Is this a joke that’s going over my head? The country we all know the term “century of humiliation” from has recovered and is literally a superpower right now?
I’ve been called bad things on HN for suggesting there’s even a whiff of corruption in this administration. That alone scares me. Deeply.
Hope is not a plan, unfortunately, so if that's all we've got, I don't have much hope.
What do you mean? You think any company should do whatever the government tells them?
The current situation in the US is the depressing thing- articles like this give me hope. Real Americans aren't having these BS authoritarian violations of our constitutional rights.
You mean, what's been happening to the USA? this isn't a new trend. Militarization of police, open attacks on democracy, unilateral foreign policy moves.
the country jumped the shark post 9/11 and has been on a slow rot since then.
Indeed. Bin Laden succeeded beyond his wildest dreams. He kickstarted our self-destruction.
No, this is cope, Trump is deeply different.
> Let's hope sanity prevails and the next election cycle can bring in some competent non-grievance based leadership.
Would be nice, but I have a bad feeling that the impact of widescale mostly unregulated AI adoption on our social fabric is going to make the social media era that gave rise to Trump, et al seem like the good ol' days in comparison.
I hope I am wrong.
> mass __domestic__ surveillance is incompatible with democratic values
But mass surveillance of Australians or Danes is alligned with democratic values as long as it's the Americans doing it?
I don't think the moral high ground Anthropic is taking here is high enough.
> These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.
Nicely put. In other words: Department of Morons.
This makes me a very happy Claude Max subscriber.
Finally, someone of consequence not kissing the ring. I hope this gives others courage to do the same.
As a European user, I‘m not happy at all. I can’t fail to notice that non-domestic mass surveillance is not excluded here. I won’t cancel my account just yet because Opus is the best at computer use. But as soon as Mistral catches up and works reasonably well, I‘ll switch.
They already kissed the ring, just not the asshole. They have a little dignity left.
Better than the rest. here's $200, Dario!
This is how we bought Tim Cook the gold trophy. Today's fundraising buys tomorrow's tithe.
The whole article reads as virtue signaling to me. Anthropic already has large defense contracts. Their models are already being used by the military. There's really no statement here.
How is it virtue signalling when sticking by these principles risks their entire business being destroyed by either being declared a supply chain risk or nationalized?
The notion that it's bad to signal virtue is one of the crazier propaganda efforts I've seen over the last 20 years or so.
It’s a manipulative tactic. Businesses have no soul and no conscience.
A company being asked to violate their virtues refuses, and then communicates that to reestablish their commitment to said virtues?
Tell me more about what they should do if a virtue signal in such a situation is a nothing statement.
Isn't it nice to have virtues to signal though? In saying that, you're saying you don't have any worth signaling over.
Not when your actions don’t align with your professed virtues.
I wonder if this might be a setup by competition. Certainly looks like one.
I read the statement twice. I can't understand how you landed on "take my money".
Looks like an optics dance to me. I've noticed a lot of simultaneous positions lately, everyone from politicians and protesters, to celebrities and corporations. They make statements both in support of a thing, and against that same thing. Switching up emphasis based on who the audience is in what context. A way to please everyone.
To me the statement reads like Anthropic wants to be at the table, ready to talk and negotiate, to work things out. Don't expect updated bullet-point lists about how things are worked out. Expect the occasional "we are the goodies" statements, however.
It's not named the Department of War because Congress didn't rename it.
Other than that, good on ya.
It's really not the right thing to be bikeshedding. The people calling the shots call themselves the Department of War. No need to die on hills that don't matter.
It's actually a good thing to point out, because it shows that those people are out of control and exceeding their authority, and need to be reined in.
No need to die on the hill, but point out that there's a consistent pattern of lawless power-grabbing.
> it shows that those people are out of control and exceeding their authority
No, the concentration camps and gangs of masked thugs violating civil rights are that sign. Threatening to treat a domestic private corporation like an enemy combatant during peacetime for not immediately caving to military demands is that sign. Trying to take over the Federal Reserve, the Federal Trade Commission, and the Nuclear Regulatory Commission, is that sign. The Executive attempting to freeze funds issued by Congress for partisan reasons is that sign.
Department of War is just little boys being trolls.
The action of a failed rebrand belongs to the Department of Defense, and is indeed an example of exceeding their authority. It was not DoD that is trying totake over the Fed, the FTR, or the NRC, so those examples don't work against Hegseth here.
This is like picketing Auschwitz with placards complaining that the "National Socialists" aren't socialists.
You're talking about an administration that barred the AP from pressed briefings because they didn't call it the Gulf of America. This is not a bikeshed.
I wouldn’t call a brief comment on the matter dying on a hill fcs
Commenting on the matter just makes it easier for the media to yap about Anthropic being "woke" rather than focusing on the Department of War's demands.
> It's really not the right thing to be bikeshedding. The people calling the shots call themselves the Department of War. No need to die on hills that don't matter.
From the first chapter of the book On Tyranny by Timothy Snyder, an historian of Central and Eastern Europe, the Soviet Union, and the Holocaust:
> Do not obey in advance.
* https://timothysnyder.org/on-tyranny
* https://archive.org/details/on-tyranny-twenty-lessons-from-t...
* https://en.wikipedia.org/wiki/Timothy_Snyder
TIL of Bikeshedding, or Parkinson’s Law of Triviality.
Defined as the tendency for teams to devote disproportionate time and energy to trivial, easy-to-understand issues while neglecting complex, high-stakes decisions. Originating from the example of arguing over a bike shed's color instead of a nuclear plant's design, it represents a wasteful focus on minor details.
https://en.wikipedia.org/wiki/Law_of_triviality
---
I deal with this day in and day out. Thank you for informing me of the word that describes the laughable nightmares I deal with on the regular.
Get a prop with difficulty/importance quadrants and silently tap sign on meetings
It SHOULD be called the Department of War, as it was originally, since it makes its function clear. We are a society that has euphemized everything and so we no longer understand anything.
It's a funny thing that the most war-loving people and the most peace-loving people both love calling it "Department of War" - just for different reasons.
But the reason for "Department of Defense" name was bureaucratic. It's also not true that DOD is hard to understand.
The Department of the Army is what was previously called the Department of War. The Department of Defense is new, dating to just after WWII.
Pedantry.
The Department of War was responsible for naval affairs until The Department of the Navy was spun off from it in 1798, and aerial forces until the creation of the The Department of the Air Force in 1947, whereafter it was left with just the army and renamed the Department of the Army. All three branches were then subordinated to the new Department of Defense in 1949, which became functionally equivalent to the original entity.
The Department of War is what it was called when it was first created in 1789 by the Congress (establishing the department and the position of Secretary of War), the predecessor entity being called the The Board of War and Ordnance during the revolution.
The Department of "Defense" has never fought on home soil. Ever.
Naming is important because it intuits what we expect to do with a thing. The Department of Defense invading Greenland is more invocative to inquiry than the Department of War invading Greenland because that's what a department of war would do.
It's one of the reasons why people get annoyed at jargon or are pissed off about pronouns, because it highlights that they should be putting mental effort into understanding why they're current mental model doesn't fit. It's much easier to ignore and be comfortable if there's not glaring sirens saying you've got some learning to do.
Most of us can't (or won't) be aware of everything that should be important to us, having glaring context clues that we should take notice of something incongruous is important. It's also why the Trump media approach works so well it's basically a case of alarm fatigue as republicans who would normally side against any particular one of his actions don't listen because they agreed with some of the actions that democrats previously raised alarms about.
> It's one of the reasons why people get annoyed at jargon or are pissed off about pronouns, [...]
It's worth noting there's an overabundance of legitimate reasons people get annoyed at these two thing, making them bad examples.
Doublespeak, so to speak.
While I agree the name change has not (yet) been made with the proper authority, I'm quite partial to the name and prefer to use it despite its prematurity. I think it does a better job of communicating the types of work actually done by the department and rightly gives people pause about their support of it. Though I'm sure that wasn't the administration's intention.
The name is extremely off-putting, but I can see how they would want to be diplomatic toward the administration in using their chosen name. Save the push-back for where it really matters.
But it sets the tone.
Of appeasement and bootlicking, yes.
Dude we had an election and this is what we’re doing. Maybe that’s not how you do things in the Kingdom of Sweden. Here it’s e pluribus unum.
There is a good share of collusion in Europe too, let's keep all continents open to critics. Elections doesn't imply unlawful dictates and corruption.
It's addressed to Hegseth, who insists on calling it that.
If they had called it DoD, then that would have been another finger in his eye.
Remember, this is the same administration that barred the AP from the Oval Office because they wouldn't rename the Gulf of Mexico. https://www.theguardian.com/us-news/2025/feb/11/associated-p...
While this action may indeed cause the DoD to blacklist Anthropic from doing business w/the government, they probably were being as careful as they could be not to double down on the nose-thumbing.
This. They even put a "wArFiGhTers" in there.
Maybe this is the DoW Pam Bondi was referring to.
I don't think it's addressed to Hegseth, but to anyone who might be sympathetic to Hegseth. Which I think actually strengthens your point, the goal appears to be to make it so the only possible complaint with the letter for someone sympathetic to the administration is "but mass domestic surveillance / fully autonomous weapons are legal" and not "look at this lunatic leftist who calls it the department of defense".
Less hypocritical than Defense. US has never been on the defense, always offense since it was renamed in 1947.
The Department of Defense was named in 1949, not 1947, and the thing that it was renamed from was the National Military Establishment, which was newly created in 1947 to be put over the two old military departments (War, which was over the Army only, and Navy, which was over the Navy including the Marine Corps)
At the same time as the NME was created, the Army was split into the Army and Air Force and the Department of War was also split in two, becoming the Department of the Army and the Department of the Air Force.
Often offensive and also often defensive of others.. so if renaming is on the table, it’s probably most apt to call it the Dept of Security since the vast majority of what it does is maintaining the security umbrella that has helped suppress world war since the last one. Of course, facts or opinions on whether it succeeds on the security front depend on which side of the umbrella you’re on.
It is called the Department of War because we live under fascism and Congress no longer matters.
All that matters is that everyone calls it the Department of War, and regards it as such, which everyone does.
Those of us with a firm grip on reality do not currently live under fascism.
Help me understand how a firm grip on tells that living in America is not fascism? It's definitely checking the boxes.
Basically all of Eco's Ur-Fascism boxes are checked. And he'd know, having lived under Mussolini's regime. https://en.wikipedia.org/wiki/Ur-Fascism
> All that matters is that everyone calls it the Department of War, and regards it as such, which everyone does.
What you just described is consensus, and framing it as fascism damages the credibility of your stance. There are better arguments to make, which don’t require framing a label update as oppression.
I'm not framing consensus as fascism, I'm pointing out what the consensus is within the current fascist framework, and that consensus is that Congress doesn't make the rules anymore. And that consensus is shared by Congress itself.
So anyone who doesn't mind the name going back to DoW is fascist?
No.
The president has no authority to rename the Department of Defense, but he and his administration demand consensus under the threat of legal consequences.
Just as one example, they threatened Google when they didn't immediately rename the Gulf of Mexico to the "Gulf of America" on their maps. Other companies now follow their illegal guidance because they know that they will be threatened too if they don't comply.
There is a word for when the government uses threats to enforce illegal referendums. That word is "Fascism". Denying this is irresponsible, especially in the context of this situation, where the Government is threatening to force a private company to provide services that it doesn't currently provide.
Your keep using the word illegal but I don't think you know what it means
It means something violates the law. Am I right?
Being honest increases credibility, not damages it.
> framing a label update as oppression
That strawman damages credibility.
true, if everything is 'fascism' then nothing is
https://archive.ph/YSAWU
Except this administration is certainly fascist, and the renaming is yet another facet of it. That article goes through it point by point.
And what if congress renames it tomorrow? They have the votes. These sort of procedural gotchas are as stupid as they are boring.
> And what if congress renames it tomorrow?
Then tomorrow it will be the Department of War. Just like When Congress voted to split the old Department of War into the Department of the Army and the Department of the Air Force, and to take both of those and the previously-separate Department of the Navy under a new National Military Establishment led by the newly-created Secretary of Defense (and when it later to voted to rename the NME as “Department of Defense”), things changed in the past.
> They have the votes.
Perhaps, but the law doesn't change because the votes are in a whip count on a hypothetical change, it changes because they are actually cast on a bill making a concrete change.
Grok's thoughts on the matter:
"In an ideal world, I'd want xAI to emulate the maturity Anthropic showed here: affirm willingness to help defend democracies (including via classified/intel/defense tools), sacrifice short-term revenue if needed to block adversarial access, but stand firm on refusing to enable the most civilizationally corrosive misuses when the tech simply isn't ready or the societal cost is too high. Saying "no" to powerful customers—even the DoD—when the ask undermines core principles is hard, but it's the kind of spine that builds long-term trust and credibility."
It also acknowledged that this is not what is happening...
the interesting question is why dario published this. these disputes normally stay behind NDAs and closed doors. going public means anthropic decided the reputational upside of being the company that said no outweighs the risk of burning the relationship permanently. that's a calculated move, not really just a principled one.
As someone who is potentially their client and not domestic, really reassuring that they have no concerns with mass spying peaceful citizens of my particular corner of the world.
Take your pick from the many other choices offered by companies that don't care about mass spying on _anyone_.
Or don't.
I can imagine that this will be the logical conclusion for many companies, I thought the same thing too, if it's too hard in the USA, they will just move.
Is there a different AI company that IS taking that stance?
Because as far as I know, Anthropic is taking the most moral stance of any AI company.
All the Chinese companies publishing open models that I can run on my own steel?
Brother in law did some "time with the brass" as he calls it. His take was that the DOD, er DOW would, as an example, never acquire a fighter jet that "wouldn't target and kill a civilian airliner", citing that on 9/11 we literally almost did that. The DOW is acquiring instruments of war, which is probably unconformable for a lot of people to consider.
His conclusion was that the limits of use ought to be contractual, not baked into the LLM, which is where the fallout seems to be. He noted that the Pentagon has agreed to terms like that in the past.
To me, that seems like reasonable compromise for both parties, but both sides are so far entrenched now we're unlikely to see a compromise.
The pentagon had already agreed to Anthropic's terms and wants to walk back. It can always find some other supplier if it wishes to.
I'd really like to know why Grok is inadequate?
I think that's the nuance:
* agreeing to the terms - one subject
* having to the tool attempt to enforce said terms - another subject
> The DOW is acquiring instruments of war
that may be, but the bigger picture purpose of the military is, welfare republicans like. in that sense, republicans are in charge, republicans want stuff that isn't "woke" (or whatever), so this behavior is representative of the way it works.
it has little to do with acquiring instruments of war, or war at all. its mission keeps growing and growing, it has a huge mission, very little of that mission is combat. this is what their own leadership says (complains about). 999/1,000 people on its payroll are doing duty outside of combat or foreseeable combat.
I'd be amused beyond all reason if we saw this chain of events:
- Anthropic says "no"
- DoD says "ok you're a supply chain risk" (meaning many companies with gov't contracts can no longer use them)
- A bunch of tech companies say "you know what? We think we'd lose more money from falling behind on AI than we'd lose from not having your contracts."
Bonus points if its some of the hyperscalers like AWS.
Hilarity ensues as they blow up (pun intended) their whole supply chain and rapidly backtrack.
Being labeled a supply chain risk means that companies with government contracts cannot use Anthropic products _for those government contracts_, not that they have to cease all usage of Anthropic products. Reporters seem to be reporting on this incorrectly.
Thank you for the information. My fun little narrative is in shambles :(
Not really, actually. This usually means outright ban because per project is next to impossible to enforce internally.
This is correct. Maybe the startups living off DARPA/MTEC/etc contracts would continue using Claude, but the LM/NOG/Collins types wouldn't touch Anthropic with a ten foot pole.
What is with the amount of comments talking about other countries in Europe "Doing the same"?
It would be hilarious if the Europeans got everyone visas and gave some kind of tax benefit to Anthropic and poached the entire company.
USA would bomb their country before any visa is approved
lol
Props to Dario and Anthropic for taking a moral stand. A rarity in tech these days.
Agreed. You don’t have to be an LLM maximalist or a doomer to see the opportunity for real, practical danger from ubiquitous surveillance and autonomous weapons. It would have been extremely easy for Dario to demonstrate the same level of backbone as Sam Altman or Sundar Pichai.
There is no moral leg to stand on here, he says here in plain english that if they wanted to use CLAUDE to perform mass surveillance on Canada, Mexico, UK, Germany, that is perfectly fine.
This is a public note, but directed at the current administration, so reading it as a description of what is or is not moral is completely missing the point. This note is saying (1) we refuse to be used in this way, and (2) we are going to use "mass surveillance of US citizens" as our defensive line because it is at least backed by Constitutional arguments. Those same arguments ought to apply more broadly, but attempts to use them that way have already been trampled on and so would only weaken the arguments as a defense.
If it helps: refusing to tune Claude for domestic surveillance will also enable refusing to do the same for other surveillance, because they can make the honest argument that most things you'd do to improve Claude for any mass surveillance will also assist in domestic mass surveillance.
Perhaps you just have different moral values? I suspect each of the countries you mentioned spy on us. I also suspect we spy on them. I’m glad an American company wouldn’t be so foolish as to pretend otherwise.
Are we gods chosen people or something that we are the only ones undeserving of mass surveillance? Are you implying that morality depends on citizenship to a particular state?
A moral stand? ... What? Did we read the same statement? It opens right out the gate with:
>I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.
>Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.
which I find frankly disgusting.
Freedom isn’t free. Someone has to defend the democratic values that you and I take for granted.
Dario’s statement is in support of the institution, not the current administration.
The democratic values I take for granted is under direct threat from the us. Your government is literally funding separatist movements in my country.
I mean, obviously.
But when was the last time our "democratic values" were under attack by a foreign country and actually needed defending?
9/11? Pearl Harbor?
Maybe I'm missing something. We have a giant military and a tendency to use it. On occasion, against democratically elected leaders in other countries.
You're right; freedom isn't free. But foreign countries aren't exactly the biggest threats to American democracy at the moment.
You have the causality at least partially backwards. Why has it been so long and infrequent that the US has been in direct conflict with authoritarian adversaries? Because we have a giant military and a willingness to use it. Pacifism and isolationism do not work as defensive strategies.
War is peace.
Game theory is real.
The last time the US defended freedom through military means was WWII.
As Abraham Lincoln said, the greatest threat to freedom in America is a domestic tyrant, not a foreign army.
Korea, Vietnam, Panama, Grenada, Libya, Lebanon, Iraq War I, Somalia, Haiti, Bosnia, Kosovo, Afghanistan, and Iraq War II were all fought for or over democratic ideals & the defense of democratic institutions.
All were driven by multiple competing and sometimes conflicting goals, and many look questionable in hindsight. It is fair to critique.
But it is absolutely not the case that the last time the US defended freedom through military means was WWII.
They are undeniably taking a moral stand. Among other things, the statement explains that there are two use cases that they refuse to do. This is a moral stand. It might not align with your morals, but it's still a moral stand.
I feel like the deepest technical definition of autocratic is “fully autonomous weapons”?
We knew long before AI was a twinkle in Amodel's eye that if it were to be built, then it would be co-opted by thugs.
Anthropic's statement is little more than pageantry from the knowing and willing creators of a monster.
You're right, we should never build anything because bad people might try to use it. Everyone that has progressed technology is a monster!
You know this is pure PR right?
If Anthropic is nationalized or declared a supply chain risk tomorrow, will you say the same?
What do you mean? You think Hegseth and Anthropic are doing this for PR reasons?
This is not how the word "moral" should be used in a sentence that also has the name Dario Amodei in it.
Words are cheap. Actions aren't. Dario Amodei is putting his company on the line for what he believes in. That's courage, character and... yes, morality.
I have a feeling this is just a negotiation tactic leveraging public sentiment rather than a stance based on morality.
It's both - it's clearly at least partly for moral reasons that they're even in the negotiation that they need leverage for.
I am convinced that Amodei's "morality" is purely performative, and cynically employed as a marketing tactic. Time will tell, but most people will forget his lies.
How should he have acted instead?
Yeah.
“Dario is saying the right thing and doing the right thing and not ever acting otherwise, but I think it’s just performative so I’m still disappointed in him.”
We don't know how the military intended to use Claude, and neither do we know nor does the military know whether Claude without RLHF-imposed safety would have been more useful to them.
Ergo, this is a very convenient PR opportunity. The public assumes the worst, and this is egged on by Anthropic with the implication that CLAUDE is being used in autonomous weapons, which I find almost amusing.
He can now say goodbye to $200 million, and make up for it in positive publicity. Also, people will leave thinking that Claude is the best model, AND Anthropic are the heroes that staved off superintelligent killer robots for a while.
Even setting this aside, Dario is the silly guy who's "not sure whether Claude is sentient or not", who keeps using the UBI narrative to promote his product with the silent implication that LLMs actually ARE a path to AGI... Look, if you believe that, then that is where we differ, and I suppose that then the notion that Amodei is a moral man is comprehensible.
Oh, also the stealing. All the stealing. But he is not alone there by any means.
edit: to actually answer your question, this act in itself is not what prompted me to say that he is an immoral man. Your comment did.
> to promote his product with the silent implication that LLMs actually ARE a path to AGI
That isn't implied. The thought process is a) if we invent AGI through some other method, we should still treat LLMs nicely because it's a credible commitment we'll treat the AGI well and b) having evidence in the pretraining data and on the internet that we treat LLMs well makes it easier to align new ones when training them.
Anyway, your argument seems to be that it's unfair that he has the opportunity to do something moral in public because it makes him look moral?
His actions seem pretty consistent with a belief that AI will be significant and societally-changing in the future. You can disagree with that belief but it's different to him being a liar.
The $200m is not the risk here. They threatened labelling Anthropic as a supply chain risk, which would be genuinely damaging.
> The DoW is the largest employer in America, and a staggering number of companies have random subsidiaries that do work for it.
> All of those companies would now have faced this compliance nightmare. [to not use Anthropic in any of their business or suppliers]
... which would impact Anthropic's primary customer base (businesses). Even for those not directly affected, it adds uncertainty in the brand.
It’s possible Dario is a bad person pretending to be good and Sundar is a good person only pretending to be bad. People argue whether true selflessness exists at all or whether it’s all a charade.
But if the “performance” involves doing good things, at the end of the day that’s good enough for me.
Standing up to the US government has real and serious sequence. Peter Hegseth threatened to make Anthropic supply chain risk, meaning not only is Anthropic likely dropped as Pentagon’s supplier, but also risk losing companies doing business with the military as customers, such as Boeing or Lockheed Martin. Whatever tactic you think he is doing, that’s potentially massive revenue lost, at the time they need any business they can get.
Amazon does business with the DOD/W. That’s a pretty dangerous game of brinkmanship Anthropic is playing.
Don't be evil.
These are literally words. The DoW could still easily exploit these platforms, and nothing Anthropic has done can prevent it, other than saying (publicly), "we disagree."
The dispute seems to be specifically about safeguards that Anthropic has in its models and/or harnesses, that the DoD wants removed, which Anthropic refuses to do, and won’t sign a contract requiring their removal. Having implemented the safeguards and refusing their removal are actions, not “literally words”.
The "safeguards" you are referring to are contractual, i.e. words. There are no technical safeguards, per the article.
The memo literally says that the reason they have these policies is -because- actual technical guardrails are not reliable enough.
It’s a contract dispute. Contracts are more than just talk.
While it is true that DoW could try to bypass the contract and do whatever they want, if it were that easy they wouldn’t be asking for a contract in the first place.
Should probably look up how many private companies are suing the government at any one time because of a breach of contract. And that's publicly breaching.
NSA and other three-letter agencies happily do it under cloak and dagger.
I agree with you that the govt can and does violate contracts. So the fact that they need Anthropic to agree signals that it’s more than just lawyers preventing the DoW from doing whatever they want.
What's the US history around nationalization? Would "confiscation", ever be a likelyhood on escalation?
On a quick search I came up with an article, that at least thematically, proposes such ideas about the current administration "Nationalization by Stealth: Trump’s New Industrial Playbook"
https://thefulcrum.us/trump-state-control-capitalism
Is it morality or is it recognizing that providing the brain of autonomous weapons has a non-zero chance of ending up with him on trial in The Hague?
This action is far more likely to land him in prison than complying with the pentagon
I disagree. There is a class of leaders in this country that is complicit with the administrations use of violence on the tacit understanding that the violence not be directed at them. Arresting one of those people would be an act of desperation that would likely cause the rats to flea the sinking ship. And it isn't even clear if Trump could actually manufacture any charges here. Look at the dropped charges against Mark Kelly and those other politicians as an example. The administration might be able to make up stories to arrest random immigrants and college kids, but they clearly haven't been able to indiscriminately jail powerful political opponents.
Meanwhile, Dario knows his product can't be trusted to actually decide who should live and who should die, so what happens the first time his hypothetical AI killing machines make the wrong decision? Who gets the blame for that? Would the American government be willing to throw him under the bus in the face of international outrage? It's certainly a possibility.
The chance is zero. This won't be deployed in countries that he'd want to visit anyway and would extradite him to The Hague.
In all seriousness The Hague has no jurisdiction over Americans and Congress has already authorized military use of force against Brussels should they ever attempt to prosecute Americans.
It's not so clear the company is actually on the line. They can compel Anthropic to do what they are not willing to do, maybe, this is not the final act. The government needs to respond, to which Anthropic will need to respond, courts may become involved at that point, depending on if Anthropic acquiesces at that point or not. Make a prominent statement against while in the news cycle, let the rest unfold under less media attention.
It's a little bit better than so many sniveling, cowardly elites are doing right now.
Welp, I never thought "Person of Interest" show coming to life anytime soon, but, here we are. In case you haven't watched the show, it's time to give it a go. Bare with season 2 though, since things really start to escalate from season 3 onwards. Season 1 is a must though.
The Machine really had this all figured out
I'm glad to see Dario and Anthropic showing some spine! A lot of other people would have caved.
As a "foreign national", what's the deal with making the distinction between domestic mass surveillance and foreign mass surveillance? Are there no democracies aside from the US? Don't we know since Snowden that if the US wants to do domestic surveillance they'll just ask GCHQ to share their "foreign" surveillance capabilities?
I think it's slightly less ridiculous than it sounds, because governments have much more power over their own citizens. As an American I would dramatically prefer the Chinese government to spy on me than the American government, because the Chinese government probably isn't going to do anything about whatever they find out.
(That logic breaks down somewhat in the case of explicitly negotiated surveillance sharing agreements.)
> because the Chinese government probably isn't going to do anything about whatever they find out.
This really depends. If a foreign adversary's surveillance finds you have a particular weakness exploitable for corporate or government espionage, you're cooked.
Domestic governments are at least still theoretically somewhat accountable to domestic laws, at least in theory (current failure modes in the US aside).
Exactly and that danger grows as the ability to do so in increasingly automated and targeted ways increases. Should be very obvious now looking at the world around us.
Also, failing to consider the legal and rights regime of the attacker is wild to me. Look at what happens to people caught spying for other regimes. Aldrich Ames just died after decades in prison, and that’s one of the most extreme cases — plenty have got away with just a few years. The Soviet assets Ames gave up were all swiftly executed, much like they are in China.
Regimes and rights matter, which is why the democracy / autocracy governance conflict matters so much to the future trajectory of humanity.
Yes, exactly this.
> As an American I would dramatically prefer the Chinese government to spy on me than the American government, because the Chinese government probably isn't going to do anything about whatever they find out.
> spy on me
People forget to substitute "me" for "my elected representative" or "my civil service employee" or "my service member" or their loved ones
I, personally, have nothing significant that a foreign government can leverage against our country but some people are in a more privileged/responsible/susceptible position. It is critical to protect all our data privacy because we don't know from where they will be targeted.
Similarly, for domestic surveillance, we don't know who the next MLK Jr could be or what their position would be. Maybe I am too backward to even support this next MLK Jr but I definitely don't want them to be nipped in the bud.
You’re getting many replies, and having scrolled through much of them I do not see one that actually answers your question truthfully.
The reason why there is an explicit call out for surveillance on American citizens is because there are unquestionable constitutional protections in place for American citizens on American soil.
There is a strong argument that can be made that using AI to mass surveil Americans within US territory is not only morally objectionable, but also illegal and unconstitutional.
There are laws on the books that allow for it right now, through workarounds grandfathered in from an earlier era when mass surveillance was just not possible, and these are what Dario is referencing in this blog post. These laws may be unconstitutional, and pushing this to be a legal fight, may result in the Department of War losing its ability to surveil entirely. They may not want to risk that.
I wish that our constitution provided such protections for all peoples. It does not. The pragmatic thing to do then is to focus on protecting the rights that are explicitly enumerated in the constitution, since that has the strongest legal basis.
given that the US likes to declare jurisdiction whenever somebody touches a US dollar, any thoughts on why those same constitutional protections wouldnt follow?
I agree with your premise because this seems to be the modern interpretation of the courts, but it is not the historical interpretation.
The historical basis of the bill of rights is that they are god given rights of all people merely recognized by the government. This is also partially why all rights in the BoR are granted to 'people' instead of 'citizens.'
Of course this all does get very confusing. Because the 4th amendment does generally apply to people, while the 2nd amendment magically people gets interpreted as some mumbo-jumbo people of the 'political community' (Heller) even though from the founding until the mid 1800s ~most people it protected who kept and bore arms didn't even bother to get citizenship or become part of the 'political community'.
There have been cases of illegal immigrants demanding 2nd amendment rights and getting them ever since it was incorporated to the states in McDonald
The reason why there is an explicit call out for surveillance on American citizens is because there are unquestionable constitutional protections in place for American citizens on American soil.
Those unquestionable protections are phrased with enough hand-waving ambiguity of language to leave room for any conceivable interpretation by later courts. See the third-party 'exception' to the Fourth Amendment, for instance.
It's as if those morons were running out of ink or time or something, trying to finish an assignment the night before it was due.
Since at least the progressive era (see the switch in time that saved 9), and probably before, the courts have largely just post facto rationalized why the thing they do or don't agree with fit their desired pattern of constitutionality.
SCOTUS is largely not there to interpret the constitution in any meaningful sense. They are there to provide legitimization for the machinations of power. If god-man in black costume and wig say parchment of paper agree, then act must be legitimate, and this helps keep the populace from rising up in rebellion. It is quite similar to shariah law using a number of Mutfi/Qazi to explain why god agrees with them about whatever it is they think should be the law.
If you look at a number of actions that have flagrantly defied both the historical and literal interpretation of the constitution, the only entity that was able to provide legitimization for many acts of congress has been the guys wearing the funny looking costumes in SCOTUS.
This is a political statement directed at the US public, Congress, and executive branch in the context of a dispute with the US executive branch that is likely to escalate (if the executive is not otherwise dissuaded) into a legal battle, and it therefore focuses particularly on issues relevant in that context, including Constitutional, limits on the government as a whole, the executive branch, and the Department of Defense (for which Anthropic used the non-legal nickname coined by the executive branch instead of the legal name.) Domestic mass surveillance involves Constitutional limits on government power and statutory limits on executive power and DoD roles that foreign surveillance does not. That's why it is the focus.
>Are there no democracies aside from the US?
If we're asking "What's the deal" questions, what's the deal with this question? Do only people in democracies deserve protections? If we believe foreign nationals deserve privacy, why should that only apply to people living in democracies?
In every country, citizens have more rights than non-citizens. The right to freely enter the country, the right to vote, the right to various social services, etc.
In the US, one of the rights citizens have is the right against "unreasonable searches and seizures", established in the Fourth Amendment. That has been interpreted by the Supreme Court to include mass surveillance and to apply to citizens and people geographically located within US borders.
That doesn't apply that to non-citizens outside the US, simply because the US Constitution doesn't require it to.
I'm not defending this, just explaining why it's different.
But, you can imagine, for example, why in wartime, you'd certainly want to engage in as much mass surveillance against an enemy country as possible. And even when you're not in wartime, countries spy on other countries to try to avoid unexpected attacks.
The US has a strong history of trying to avoid building domestic surveillance and a national police. Largely it’s due to the 4th amendment and questions about constitutionality. Obviously that’s going questionably well but historically that’s why it’s a red line.
Exactly. FVEYs been doing reciprocal surveillance on each other for decades.
https://en.wikipedia.org/wiki/Five_Eyes#Domestic_espionage_s...
The reality is that the US Constitution only offers strong guarantees to citizens and (some of) the people in the US. Foreigners are excluded and foreign mass surveillance is or will happen.
I believe every country (or block) should carve an independent path when it comes to AI training, data retention and inference. That is makes most sense, will minimize conflicts and put people in control of their destiny.
Particularly so when those foreign nationals can be consumers. “fuck your basic human rights, but we can take your money just fine”.
If nothing else, the USA has learned that a lot of people outside their borders do not share the same ideas on basic human rights, and most of the world hates when we try to ensure them. Some countries are closely aligned with our ideals and are treated differently. There are many different layers of this, from Australia to North Korea.
Also the more the US openly treats the world like garbage, the more the rest of the world will likely reciprocate to US citizens.
It reminds me of some recent horror stories at border crossings - harassing people and requiring giving up all your data on your phone - sets a terrible precedent.
> what's the deal with making the distinction between domestic mass surveillance and foreign mass surveillance? Are there no democracies aside from the US?
I think it's just saying that spying on another country's citizens isn't fundamentally undemocratic (even if that other country happens to be a democracy) because they're not your citizens and therefore you don't govern them. Spying on your own citizens opens all sorts of nefarious avenues that spying on another country's citizens does not.
In the US, we have the ability to either confirm or change a significant chunk of our Federal government roughly every two years via the House of Representatives. The argument here is that we, theoretically, could collectively elect people that are hostile to domestic mass surveillance into the House of Representatives (and other places if able) and remove pro-surveillance incumbents from power on this two year cycle.
The reasons this hasn't happened yet are many and often vary by personal opinion. My top two are:
1) Lack of term limits across all Federal branches
and
2) A general lack of digital literacy across all Federal branches
I mean, if the people who are supposed to be regulating this stuff ask Mark Zuckerberg how to send an email, for example, then how the heck are they supposed to say no to the well dressed government contractor offering a magical black box computer solution to the fear of domestic terrorism (regardless of if its actually occurring or not)?
One of them is illegal for DoD to do and the other is not.
100% - this is the shortsightedness and demonstrates hypocrisy.
Countries routinely use other countries intelligence gathering apparatus to get around domestic surveillance laws.
The distinction between foreign and domestic is a legal one.
The Supreme Court has ruled that the US Constitution protects any persons physically present in the United States and its territories as well as any US citizens abroad.
So if you are a German national on US soil, you have, say, Fourth Amendment protections against unreasonable search and seizure. If you are a US citizen in Germany, you also have those rights. But a German citizen in Germany does not.
What this means in practice is that US 3-letter agencices have essentially been free to mass surveil people outside the United States. Historically these agencies have gotten around that by outsourcing their spying needs to 3 leter agencies in other countries (eg the NSA at one point might outsource spying on US citizens to GCHQ).
Are all democracies allies to you?
That still doesn't justify mass surveillance.
Never said that. Didn't even imply it.
> what's the deal with making the distinction between domestic mass surveillance and foreign mass surveillance?
A large portion of Americans believe in "citizen rights", not "human rights". By that logic, non-Americans do not have a right to privacy.
This contradicts the opening of the Declaration of Independence, which recognizes all humans as possessing rights:
"We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness."
Lots of lofty goals have been written on paper - when people take them seriously, they are even worth something.
The pendulum swings.
I'm glad to see this as the top comment. I was, until recently, a loyal Anthropic customer. No more. Because the way non-Americans are spoken of by a company that serves an international market (and this isn't the first instance):
"Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass _domestic_ surveillance is incompatible with democratic values."
Second class citizens. Americans have rights, you don't. "Democratic values" applies only to the United States. We'll take your money and then spy on you and it's ok because we headquartered ourselves and our bank accounts in the United States.
Very questionable. American exceptionalism that tries to define "democracy" as the thing that happens within its own borders, seemingly only. Twice as tone-deaf after what we've seen from certain prominent US citizens over the last year. Subscription cancelled after I got a whiff of this a month ago.
(Not to mention the definition of "lawful foreign intelligence" has often, and especially now, been quite ethically questionable from the United States.)
EDIT: don't just downvote me. Explain why you think using their product for surveillance of non-Americans is ethical. Justify your position.
That reasoning sounds confusing: are you actually in favor of US gov's surveillance on Americans?
If not, then why are you punishing that company for refusing to deal with the US gov?
Or is it just because they worded their opposition in a certain way that you dislike?
It's not confused. Are you?
I object, as a non-American paying Anthropic customer, to being surveilled and then having it justified in a press release?
> I object, as a non-American paying Anthropic customer, to being surveilled and then having it justified in a press release?
You genuinely think you're not already being surveilled? And that Anthropic is somehow responsible with just a few words in a press release? In what world are you living in and how is the rent there?
My guess is that they can't object to foreign intelligence, and would lose negotiating ground if they even tried.
Optimistically, they can still refuse to do work that would aid in foreign intelligence gathering, by arguing that it would also be beneficial for domestic mass surveillance.
I'll admit that the phrase "We support...foreign intelligence and counterintelligence" is awful as hell, and it's possible that my apologist claims are BS. But Anthropic has very little leverage here (despite having a signed contract and so legally fully in the right), so I could see why they're desperate to stick to only the most solid objections available.
It's the addition of the we support phrase in particular, and the attempt to tie that in a "democratic values" clause that is objectionable.
Not to most US citizens, I'm sure. But there's millions of non-Americans who have given them their hard earned cash. It's not a good look, and it did not need to be phrased that way as it substantially undermines the impact of their point.
>democracies aside from the US.
I mean, I guess from '65 to around 96? We had a good run.
People do realize there's a non-zero chance that Anthropic could have embedded some kind of hidden "backdoor" trigger in its training process, right?
For example, a specific seed phrase that, when placed at the beginning of a prompt, effectively disables or bypasses safety guardrails.
If something like that existed, it wouldn't be impossible to uncover:
1. A government agency (DoD/DoW/etc.) could discover the trigger through systematic experimentation and large-scale probing.
2. An Anthropic employee with knowledge of such a mechanism could be pressured or blackmailed into revealing it.
3. Company infrastructure could be compromised, allowing internal documentation or model details to be exfiltrated.
Any of these scenarios would give Anthropic plausible deniability... they could "publicly" claim they never removed safeguards (or agreed to DoD/DoW demands), while in practice a select party had a way around them (may be even assisted from within).
I'm not saying this "is" happening... but only that in a high-stakes standoff such as this, it's naive to assume technical guardrails are necessarily immutable or that no hidden override mechanisms could exist.
...indeed, it's possible (perhaps inevitable) that at some point, someone will invent/deploy/promote AI killing people.
We can't possibly keep that genie in that bottle.
But what we can do is achieve consensus that states, and their weapons of mass destruction, and their childish monetary systems, and their eternally broken promises... are not in keeping with the next phase of humanity.
The most important part of this statement is the explicit commitment to transparency around these discussions. In an industry where many AI companies engage with defense quietly, making a public statement — even if imperfect — creates accountability. The question is whether this standard will be adopted more broadly.
Idk if the reporting was just biased before, but from what I saw is that this time last week, it was thought you couldn't use Anthropic to bring about harm, and now they're making it clear that they just don't want it used domestically and not fully autonomously.
Like maybe it always was just this, but I feel every article I read, regardless of the spin angle, implied do no harm was pretty much one of the rules.
You, using normal Claude under the consumer ToS, cannot use it to make weapons, kill people, spy on adversaries, etc. The Pentagon, using War Claude, under their currently-existing contract, can use it to make weapons and spy on (foreign) adversaries, but not to (autonomously) kill people. I don't love this but I am even less excited about the CCP having WarKimi while we have no military AI.
those two stipulations were always their only ones, and they were included explicitly in their original contract with the DoW.
Props to Dario and Anthropic for holding firm on these two points that I feel like should be a no-brainer
https://en.wikipedia.org/wiki/Joseph_Nacchio
Previous case of tangling with the Government.
https://youtube.com/watch?v=OfZFJThiVLI
Jolly Boys - I Fought the Law
Overall, this seems like it might be a campaign contribution issue. The DoD/DoW is happy to accept supplier contracts that prevent them from repairing their own equipment during battle (ref. military testimony favoring right-to-repair laws [1] ), so corporate matters like this shouldn't really be coming to a head publicly.
[1] https://www.warren.senate.gov/newsroom/press-releases/icymi-...
> "mass domestic surveillance" - mass surveillance of non-domestic civilians is OK?
A favourable take would be he meant "mass surveillance of non-democratic adversarial countries". I agree it's not phrased this way though.
The call is coming from inside the house
"These latter two threats are inherently contradictory"
After the standing up for democracy. This is my favorite part. "Your reasoning is deficient. Dismissed."
All completely rationale. Makes the us military here look fairly incompetent… embarrassing as a veteran.
I'm sure it's negotiations over how the enforcement will be done. My thoughts are:
1. Military wants a whole new model training system because the current models are designed to have these safeguards, and Anthropic can't afford that (would slow them down too much, the engineering talent to set up and maintain another pipeline would be a lot of work/time)
2. Military doesn't want to supply Anthropic usage data or personnel access to ensure its (lack of) use in those areas.
3. It's something almost completely unrelated to what's going on in the news.
It’s probably something really dumb, and they irked California billionaire with their idiocy.
"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.
Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community."
The moral incoherence and disconnect evident in these two statements is at the heart of why there is generalized mistrust of large tech companies.
The "values" on display are everything but what they pretend to be.
I find the fact they used the vanity name “Department of War” and “Secretary of War” sad given Congress has not changed the name and the president doesn’t get to decide the naming of statutory departments or secretary level roles. Maybe it’s just an appeasement to the thin skinned people who need powder rooms and are former military journalists working for a draft dodger pretending to be tough guy “warriors,” and trying to glorify the violence for political purposes, but every actual war vet I’ve ever known has never glorified war for the sake of war and they felt very seriously that defense is the reason to do what they had to do. My grandfather was a highly decorated career special forces (ranger, green beret, delta force, four silver stars and five bronze stars, etc) from WWII, Korea, and Vietnam and he was angry when I considered joining the military - he told me he did what he did so I wouldn’t have to and to protect his country and there was no glory to be had in following his path. He would be absolutely horrified at what is going on and I thank god he died before we had these prima Donna politicians strutting around banging their chests and pretending war is something to be proud of.
Good on anthropic for standing up for their principles, but boo on gifting them the discourtesy to the law of the land in acknowledging their vanity titles.
What is OpenAI's stance on these issues? Are they working with DOW currently?
Good optics, but ultimately fruitless.
If preventing mass surveillance or fully autonomous weaponry is a -policy- choice and not a technical impossibility, this just opens the door for the department of war to exploit backdoors, and anthropic (or any ai company) can in good conscience say "Our systems were unknowingly used for mass surveillance," allowing them to save face.
The only solution is to make it technically -impossible- to apply AI in these ways, much like Apple has done. They can't be forced to compel with any government, because they don't have the keys.
I think it is a reasonable moral stance to acknowledge such things are possible, yet not wanting to be a part of it. Regarding making it technically impossible to do...I think that is what Anthropic means when they say they want to develop guardrails.
Are the guardrails not part of their core? Isn't that the whole premise of their existence?
If you read the statement, they explicitly state these guardrails don't exist today, and they want to develop them.
Though I have a feeling we're talking about different things. In Claude Code terms, it might want to rm -rf my codebase. You sound like you might want it to never run rm -rf. Anthropic probably wants to catch dangerous commands and send them to humans to approve, like it does today.
That's my point. They formed anthropic under the sole mandate of "guardrails first," now seemingly don't have them at all. So they're just another ai company with different marketing, not the purely altruistic outfit they want everyone to believe
The ability of some people to never be happy, and to find a way to twist a good situation into bad, will always impress me.
Here we have a company doing something unprecedented but it is STILL not enough for people like you. The DoD could destroy them over this statement, and have indicated an intent to do so, but it's still not enough for you that they stand up to this.
I wonder what life is like being so puritanical and unwilling to accept the good, for it is not perfect! This mindset is the road to a life of bitterness.
A little pessimistic of a take, IMO. You may very well be right, though.
It's not clear to me whether Anthropic's limitations are technical or merely contractual. Is Anthropic actually putting the limitations in their prompts, so that the model would refuse to answer a question on how to do certain things?
If so, that's a major problem. If the military is using it in some mission critical way, they can't be fighting the model to get something done. No such limitations would ever be acceptable.
If the limitations are contractual, then there is some room for negotiation.
> If the military is using it in some mission critical way, they can't be fighting the model to get something done. No such limitations would ever be acceptable.
You'd be surprised at what is considered acceptable. For example, being unable to repair your own equipment in battle is considered acceptable by decision-makers who accepted the restrictions.
https://www.warren.senate.gov/newsroom/press-releases/icymi-...
I commend Anthropic leadership for this decision.
I simultaneously worry that the current administration will do something nuclear and actually make good on their threat to nationalize the company and/or declare the company a supply chain risk (which contradict each other but hey).
Label them as supply chain risk and move on. Enough of this drama already
I think they are negotiating until Friday, but I agree. I think this was foolish.
This is at best a superficial attempt to show that Anthropic objects to what is already in play.
Personally, I'd rather live in a country which didn't use AI to supplant either its intelligence or its war fighting apparatus, which is what is bound to happen once it's in the door. If enemies use AI for theirs, so much the better. Let them deal with the security holes it opens and the brain-drain it precipitates. I'm concerned about AI being abused for the two use cases he highlights, but I'm more concerned that the velocity at which it's being adopted to sift and collate classified information is way ahead of its ability to secure that information (forget about whether it makes good or bad decisions). It's almost inconceivable that the Pentagon would move so quickly to introduce a totally unknown entity with totally unknown security risks into the heart of our national security. That should be the case against rapid adoption made by any peddler of LLMs who claims to be honest, to thwart the idiots in the administration who think they want this technology they can't comprehend inside our most sensitive systems.
All this is for nought.
The power lies with the US Govt.
And its corrupt, immoral and unethical, run by power hungry assholes who are not being held accountable, headed by the asshole who does a million illegal things every day.
Ultimately, Anthropic will fold.
All this is to show to their investors that they tried everything they could.
It is not clear to me that the power here lies with the US Govt.
Imagine Anthropic is declared a "supply chain risk" thus cannot be used by all sorts of big industry players. How will the CEOs of those companies feel about the govnt telling them they cannot use what their engineers say is the best model? How many of those CEOs have a direct line to powermakers?
How many of those CEOs are already making the phone calls? The "supply chain" threat is a threat to every US company that currenly uses Anthropic.
Oh, and that includes Palentir, who is deeply embedding in the govt.
Side example: remember the 6 congresspeople who made the video about military orders? They won.
Anthropic probably can’t fold, they might lose an existential number of researchers if they did. This is literally an unstoppable force meets an immovable object situation.
Hegseth probably folds. It would be too unpopular for him to take either of the actions he threatened.
OpenAI and Google could have decided to make the same principled stand, and the government would have likely capitulated.
They both literally removed morality from their bylaws; that time has passed. They're openly corrupt because it pays to be so.
As a non US citizen, this article sounds mildly concerning to me. My country is an ally of US. Good. But I don't know how I would feel when I start seeing Anthropic logos on every weapon we buy from US.
Aside my concern, Dario Amodei seems really into politics. I have read a couple of his blog posts and listened to a couple of podcast interviews here and there. Every time I felt like he sounded more like a politician than an entrepreneur.
I know Anthropic is particularly more mission-driven than, say OpenAI. And I respect that their constitutional ways of training and serving Claude models. Claude turned out to be a great success. But reading a manifest speaking of wars and their missions, it gives me chills.
The most chilling thing imo is that Anthropic is the only lab that have said anything about this. Google and OpenAI presumably signed up to all these terms without any protest.
"You are what you won't do for money." is a quote that seems apt here. Anthropic might not be a perfect company (none are, really), but I respect the stance being taken here.
Classic seppo diatribe.
"We will build tools to hurt other people but become all flustered when they are used locally"
If you're using "seppo" as the Australian pejorative referring to Americans, I'm not sure what makes this uniquely American.
"Seppo" is rarely used in Australia today, it's an old bottom-of-barrel word most have never heard of. The neutral "Yank" is more common, but even that only pops up sometimes.
Guessing their comment attempts to expose hypocrisy of America's keenly supported overseas military activity in conflict with fiercely defended domestic free-speech and liberty principles. Deep down, most allies of America want America to defeat foreign adversaries and keep defending those liberties many of us share. In other words there's no hypocrisy, carry on!
This is why I like Dario as a CEO - he has a system of ethics that is not jus about who writes the largest check.
You may not agree with it, but I appreciate that it exists.
I’m very happy that Anthropic chose not to cave into US Dept of War’s demands but their statement has an ambiguity.
Does this mean they’d be ok to have their models be used for mass surveillance & autonomous weapons against OTHER countries?
A clarification would help.
I was concerned originally when I heard that Anthropic, who often professed to being the "good guy" AI company who would always prioritize human welfare, opted to sell priority access to their models to the Pentagon in the first place.
The devil's advocate position in their favor I imagine would be that they believe some AI lab would inevitably be the one to serve the military industrial complex, and overall it's better that the one with the most inflexible moral code be the one to do it.
They made it easy to generate powerpoint presentations, that is the real reason DoW is using them
this is a very chauvinistic approach... why not another model replace anthropic here? I sense because gov people like using excel plugin and font has nice feel. a few more week of this and xAI is new gov AI tool
Ukraine , Russia , China , actively develop ai systems that kill. Not developing such system by US based company will not change the course of actions.
Yep.
That said, it does impact whether Anthropic can sell to the British [0], German [1], Japanese [2], and Indian [3] government.
Other governments will demand similar terms to the US. Either Anthropic accedes to their terms and gets export controlled by the US or Anthropic somehow uses public pressure to push back against being turned into an American sovereign model.
Realistically, I see no offramp other than the DPA - a similar silent showdown happened in the critical minerals space 6-7 years ago.
[0] - https://www.anthropic.com/news/mou-uk-government
[1] - https://job-boards.greenhouse.io/anthropic/jobs/5115692008
[2] - https://www.anthropic.com/news/opening-our-tokyo-office
[3] - https://www.anthropic.com/news/bengaluru-office-partnerships...
It may sound crazy, but they should just move the company to Europe or Canada, instead of putting up with this.
Why? They clearly are very aligned on the objective, just doing some negotiation regarding the means. Giving up just because you don't agree 100% is not very constructive. This might seem bad for conflict-adverse people who usually are involved in low-stakes negotiations, but it's just the start of things for people who are fluent in conflict.
Because as we all know the EU would never try using AI for mass surveillance /s
Principles are the things you would never do for any amount of money. This might be the only principled tech company in the world.
"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries."
That opening line is one hell of a set up. The current administration is doing everything it can to become autocratic thereby setting themselves up to be adversarial to Anthropic, which is pretty much the point of the rest of the blog. I guess I'm just surprised to have such a succinct opening instead just slop.
these guys are selling snake oil to the gvt - cz they know they can get cash based on fear.
the Chinese are releasing equivalent models for free or super cheap.
AI costs / energy costs keep going up for American A.I companies
while china benefits from lower costs
so yeah you've to spread F.U.D to survive
The models are hardly equivalent.
I think it’s a pretty strong statement. It is unfortunately weakened by going along with the “Department of War” propaganda. I believe that the name is “Department of Defense” until Congress says otherwise, no matter what the Felon in Chief says.
Oh dear, what a mess of a statement that is. He wants to use AI "to defeat our autocratic adversaries", just what or who are they exactly? Claude seems to think they are Russia, China, North Korea and Iran. Is Claude really a tool to "defeat" these countries somehow? This statement also seems pretty messy: "Anthropic understands that the Department of War, not private companies, makes military decisions.", well then just how do they think Claude is going to be used there if not to make or help make military decisions?
The statement goes on about a "narrow set of cases" of potential harm to "democratic values", ...uh, hmm, isn't the potential harm from a government controlled by rapists (Hegseth) and felons using powerful AI against their perceived enemies actually pretty broad? I think I could come up with a few more problem areas than just the two that were listed there, like life, liberty, pursuit of happiness, etc.
I can't help but highlight the problem that is created by the renaming of the Deptartment of Defense to the Department of War:
> importance of using AI to defend the United States
> Anthropic has therefore worked proactively to deploy our models to the Department of War
So you believe in helping to defend the United States, but you gave the models to the Department of War - explicitly, a government arm now named as inclusive of a actions of a pure offensive capability with no defensive element.
You don't have to argue that you are not supporting the defense of the US by declining to engage with the Department of War. That should be the end of the discussion here.
it hasnt actually been renamed though.
the name is still the department of defence by law. department of war is a subheading tagline
Probably not a good idea to let Claude vibe-selecting targets, it still sometime hallucinates
Just visibly wave the US flag and you'll be fine, don't worry.
Soon it will select targets in commie countries though, perhaps it already does. Who selected to bomb Chavez mausoleum btw?
If these values really meant anything, then Anthropic should stop working with Palantir entirely given their work with ICE, domestic surveilance, and other objectionable activities.
Why would the US security apparatus outsource the model to a private company? DARPA or whatever should be able to finance a frontier model and do whatever they want.
They want to be nationalized, which is the most profitable exit they'll ever get.
Bottom line up front it’s probably better to address the root cause of this situation with the general solution — making government drastically smaller and less pervasive in people’s lives and businesses. I remember not too long ago during the last administration very heavy handed unforgivable and traumatizing rhetoric and executive orders that intruded into the bodily autonomy of millions of Americans and threatened millions of American’s jobs. This happened to me and I personally received threats that my livelihood would be taken away from me which were directly a result of the Executive branch. This isn’t a problem where Congress has ceded powers to the Executive branch, it’s a problem that so much power to legislate and tax is in the hands of the government at all! Every election cycle that results in a transfer of power to the other party inevitably results in handwringing and panic but this wouldn’t be the case if citizens voted their powers back and government wasn’t so consequential.
It sounds to me like anthropic are basically 'all in' except for the caveats. Looking at the 2 examples they provide:
> We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values.
Why not do what the US are purported to do, where they spy on the others citizens and then hand over the data? Ie, adopt the legalistic view that "it's not domestic surveillance if the surveillance is done in another country", so just surveil from another data center.
> Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk.
Yes, well that doesn't sound like that strong an objection: fully automated defence could be good but the tech isn't good enough yet, in their opinion.
Powerful post - good on him for taking a stand, but questionable in light of their recent move away from safeguards for competitive reasons.
Well fucking done. Anthropic has just gained the “has bollocks” status. Also now we know what the govt is really up to with AI. G fucking g
That is frikkin impressive. Well done sir.
Didn't Dario Amodei ask for more government intervention regarding AI?
Not a contradiction with this post
Am i the only one who understands the deparments position? Like if another country will have it without safeguards, why would I not want it without safeguards. I can still be the safeguard, but having safeguards enforced by another entity that potentially has to face negative financial consequences seems like a disadvantage, would be weird to accept that as department of war.
I understand the risk, but that is the pill.
they could use a different provider for the kill chain.
we must use claude to decide whether to nuke iran, or else our gun manufacturers arent allowed to use to to run spreadsheets
is a bit ridiculous.
Impressive and heartening. Bravo.
I imagine they'll drop this bare-minimum commitment when it becomes financially expedient.
I restored my Max sub. I wish they pushed back more, so I went with $100/month only.
Congratulations, you just got a new $200 Claude Max plan customer.
It's ok to mass survey foreign entities.
torment nexus creators are shocked, appalled even, to discover that people desire to use it to torment others at nearby nexus
I wonder whether what is really behind this is that they can’t make a model without the safeguards because it would require re-training?
They get to look good by claiming it’s an ethical stance.
Good to them standing up to this administration. I doubt they actually want to put Claude in the kill-chain but this gives them a nice opportunity to go after 'woke AI' and maybe internal ammunition to go through the switching costs for xAI - given Elon more reason to line republican campaign coffers.
I'm guessing this is because Anthropic partners with Google Cloud which has the necessary controls for military workloads while xAI runs in hastily constructed datacenter mounted on trucks or whatever to skirt environmental laws.
They are playing a good PR game for sure. Their recent track record doesn’t show if they can be trusted. Few millions is nothing for their current revenue and saying they sacrificed is a big stretch here.
Yes, but also remember where they came from.
They don't have any brand poison, unlike nearly everyone else competing with them. Some serious negative equity in tha group, be it GOOG, Grok , META, OpenAI, M$FT, deepseek, etc.
Claude was just being the little bot that could, and until now, flying under the radar
It's much more than a few million? Being declared a supply chain risk means that no company that wants to do business with the government can buy Anthropic. And no company that wants to do business with those businesses can buy Anthropic either. This rules out pretty much all American corporations as customers?
The official name of this organization remains _The United States Department of Defense_.
Hegseth is an unintelligent bully who will not accept thiz and does not want to appear weak to the maga base. The consequences will be severe and anthropic will be forced
A significant part of Anthropic's cachet as an employer is the ethical stance they profess to take. This is no doubt a tough spot to be in, but it's hard to see Dario making any other decision here.
What I don't understand is why Hegseth pushed the issue to an ultimatum like this. They say they're not trying to use Claude for domestic mass surveillance or autonomous weapons. If so, what does the Department of War have to gain from this fight?
It’s not unusual for legal departments to take offense to these sorts of things, because now everyone using Claude within the DoD has to do some kind of audit to figure out if they’re building something that could be construed as surveillance or autonomous weapons (or, what controls are in place to prevent your gun from firing when Claude says, etc). A lot of paperwork.
My guess is they just don’t want to bother. I wonder why they specifically need Claude when their other vendors are willing to sign their terms, unless it specifically needs to run in AWS or something for their “classified networks” requirement.
It's that, as I understand it. Anthropic is the only vendor certified to run its models on DoD/DoW classified networks.
Same reason they cut funding for universities that had DEI mandates, etc. and made a big spectacle of doing it despite it often being very little money etc. etc.
It's an ideological war, they're desperate to win it, and they're aiming to put a segment of US civil society into submission, and setting an example for everyone else.
He smelled weakness, and like any schoolyard bully personality, he couldn't help but turn it into a display of power.
He pushed the issue to an ultimatum because he is an unqualified drunk, and thinks that it's against the law for anyone to try and stop the US military from doing something they want to do. This isn't an isolated issue; he tried to get multiple US Senators prosecuted for making a PSA that servicemembers shouldn't follow illegal orders.
What makes you want to believe the Trump Administration when it claims it doesn't want to do domestic mass surveillance?
Anthropic has already cooperated too much with the US Intelligence Community, but better some restraint than none, and better late than never.
It is not the Department of War. He's towing the line from the get-go. Forget this guy.
They should try Sam Altman. He's just the kind of guy who would bend over for this kind of authoritarian demand.
At this point, surveillance state is coming whether Dario does this or not. You can do all that with open source models. It’s sad that we don’t have the right people in charge in govt to address this alarming issue.
Good to see one AI company not selling out their values in exchange for military contracts. This shouldn't be rare, but it is. Good for them.
Anthropic wants regulatory capture to advantage itself as it hypes its products capabilities and then acts surprised when the Pentagon takes their grand claims about their products seriously as it threatens government intervention.
This is why people should support open models.
When the AI bubble collapses these EA cultists will be seen as some of the biggest charlatans of all time.
I mean you're all going to get killed by fully autonomous China AI war robots in 10 years anyway if you're not pure blood Han Chinese, but hey at least you'll provide something to laugh at for future Chinese Communist party history scholars. They will say, "Look at the stupid Baizuos, our propaganda ops convinced them all to commit collective suicide. Stupid barbarians. They proved they are an inferior race."
Not joking, I've heard from sources that hardliners in the CCP think they can exterminate all white people followed later by all non-Han, but just keep on going along disarming yourselves for woke points. This is like unilaterally destroying all your nuclear weapons in 1946 and hoping the Soviets do to.
This is a PR play by Anthropic, likely in coordination with the administration. They don't care, they just need the public to view them as a victim here, and then its business as usual.
I prefer they get shutdown, llms are the worst thing to happen to society since the nuclear bomb's invention. People all around me are losing their ability to think, write and plan at an extraordinary pace. Keep frying your brains with the most useless tool alive.
Remember, the person that showed their work on their math test in detail is doing 10x better than the guys who only knew how to use the calculator. Now imagine being the guy who thinks you don't need to know the math or how to use a calculator lol.
huge if true.
they also took down their security pledge in the same breath, so, you know. if anthropic ends up cutting a deal with the DoD this is obviously bullshit.
I am incredibly proud to be a customer, both consumer level and as a business, of Anthropic and have canceled my OpenAI subscription and deleted ChatGPT.
in hindsight, the smart thing to do would have been to accept the contracts, knowingly enshittify the request, and protect other bad actors like Elon and xAI from ruthlessly compromising our democracies.
>We will not knowingly provide a product that puts America’s warfighters and civilians at risk.
Implying other civilians can be put at risk
The worst part of this is if they do remove Claude, and probably GPT, and Gemini soon after because of outcry we are going to be left with our military using fucking Grok as their model, a model that not even on par with open source Chinese models.
I think the warfighters are a distraction, a system could trivially say that there is a human in the loop for LLM-derived kill lists. My money is that the mass domestic surveillance is the true sticking point, because it’s exactly what you would use a LLM for today.
Apparently part of this whole battle is because Grok isn't up to part to be an acceptable alternative.
As far as we can tell, OpenAI and Google seem to be ok with it and not resisting. It would be easier for Anthropic's cause if they did.
Yea but every warfighter will get a waifu
Grok in unhinged mode piloting an Apache, what could go wrong.
It's better than actively aiding them. Make them struggle at every turn.
Are you Chinese? If not, I think you should prefer the people defending you to have the best tools to do so.
This of course raises the question on whether as an American I have more to fear from the Chinese government or the US one.. given everything happening in the Executive Branch here, that’s a disappointingly hard question to answer.
I think that's an easy question to answer, but obviously you don't fear the Chinese government you're not a Chinese citizen. You can actively talk about your disagreements with the US government, that not a right the Chinese have.
Can you? By ICE agents' own admission on video, they have been adding people to "domestic terrorist" watchlists (just for verbally dissenting, making recordings with a phone, etc) which are then used by Palantir to disappear people directly from their homes - even US citizens. Palantir, the CEO of which gleefully admits to knowing many Nazis and seems to get off on the fact that his software "kills people" (direct quote).
>that’s a disappointingly hard question to answer
It shouldn't be. The US government is already sending armed and masked thugs to shoot political dissidents dead or sending them to concentration camps, threatening state governments and private companies to comply with suppressing free speech and oppressing undesirables, and openly discussing using emergency powers to suspend the next election.
What exactly is the commensurate threat from China? The real tacit threat, not abstract fears like "TikTok is Chinese mind control." What can China actually do to you, an American, that the US isn't already more capable of doing, and more likely to do?
To me it isn't even a question. Even comparing worst case scenarios - open war with China versus civil war within the US - the latter is more of a threat to citizens of the US than the former unless the nukes drop. And even then, the only nation to ever use nuclear weapons in warfare is the US.
This is the correct take. It may be a different question for people living within China, but for Americans, the US Gov is a direct threat to their lives.
If the American military was focused on defending the United States, it would be a very different beast. The 21st Century American military is a tool for transferring wealth from the public to influential parties, and for inflicting destruction on non-peer nations who pose obstacles to influential parties interests. Defending the United States against various often-invoked hobgoblins is at best a very distant concern, closer to pure lip service than reality.
but the "people defending you" have been commiting clear and obvious war crimes?
The Department of War under Trump has proven itself to not be interested in defending you, the American people. All they’ve done so far is aggression against foreign supposed adversaries.
I'm a natural-born American (many generations back) and firmly believe that if we ever get into a hot war with China, it will be because of American provocation, not Chinese.
I am American born and raised and I consider our current government mass murderers who I trust as much as I would have the Nazis. It was a good thing that the Nazis did not get the a-bomb before us, and the same principle applies here. The fewer magnifiers of their power the better. They are a scourge on human rights, and the world.
"as an ai safety company, we only believe in -partially- autonomous weaponry"
Ads are coming.
I'll be glad if they could open their platform enough so that it could run on ads and not 200 dollar subscriptions
for sure. If they weren't so self-righteous about not serving ads, it'd be a great revenue stream for them. It'd also align with Dario's seeming obsession with profitability
Well, now if DoD moves to another AI provider, we’ll know what was compromised.
Why does DoD need claude? I thought xAI was "less woke" and far better than claude
I don't think this is genuine concern, I think this is instead, veiled fear of the TDS posse being covered by feigned concern.
Foreign nationals are now embedded in the US due to decades of lax security by both parties. Domestic surveillance is now foreign surveillance also!
The constant reference to "democracy" as the thing that makes us good and them bad is so frustrating to me because we are _barely_ a democracy.
We are ruled by a two-party state. Nobody else has any power or any chance at power. How is that really much better than a one-party state?
Actually, these two parties are so fundamentally ANTI-democracy that they are currently having a very public battle of "who can gerrymander the most" across multiple states.
Our "elections" are barely more useful than the "elections" in one-party states like North Korea and China. We have an entire, completely legal industry based around corporate interests telling politicians what to do (it's called "lobbying"). Our campaign finance laws allow corporations to donate infinite amounts of money to politician's campaigns through SuperPACs. People are given two choices to vote for, and those choices are based on who licks corporation boots the best, and who follows the party line the best. Because we're definitely a Democracy.
There are no laws against bribing supreme court justices, and in fact there is compelling evidence that multiple supreme court justices have regularly taken bribes - and nothing is done about this. And yet we're a good, democratic country, right? And other countries are evil and corrupt.
The current president is stretching executive power as far as it possibly can go. He has a secret police of thugs abducting people around the country. Many of them - completely innocent people - have been sent to a brutal concentration camp in El Salvador. But I suppose a gay hairdresser with a green card deserves that, right? Because we're a democracy, not like those other evil countries.
He's also threatining to invade Greenland, and has already kidnapped the president of Venezuela - but that's ok, because we're Good. Other countries who invade people are Bad though.
And now that same president is trying to nationalize elections, clearly to make them even less fair than they already are, and nobody's stopping him. How is that democratic exactly?
Sorry for the long rant, but it just majorly pisses me off when I read something like this that constantly refers to the US as a good democracy and other countries as evil autocracies.
We are not that much better than them. We suck. It's bad for us to use mass surveillance on their citizens, just like it's bad to use mass surveillance on our citizens.
And yet we will do it anyways, just like China will do it anyways, because we are ultimately not that different.
Imagine being so cautious with your words, only to have 'Department of War' in your title
the government should not be using any private LLM, they should build their own internal systems using publicly available LLM's, which change frequently anyway. I don't see why they would put their trust in a third party like that. This back and forth about "ethics" is a bunch of nonsense, and can be solved simply by going for a custom solution which would probably be orders of magnitude cheaper in the long run. The most expensive part is the GPU's used for inference, which can be produced in silicon [1].
[1] https://taalas.com/products/
My man
Now, I'm curious. How Bedrock/Azure Claude models work?
Do these rules apply to them too?
> Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place.
It's absolutely disgusting that they would even consider working with the US government after the Gaza genocide started. These are modern day holocaust tabulation machine companies, and this time randomly they are selecting victims using a highly unpredictable black-box algorithm. The proper recourse here is to impeach the current administration, dissolve the companies that were complicit, and send their leadership to the hague for war crimes trials.
Amodei’s use of “warfighters” (a Hegseth-era neologism for “soldiers”) is truly nauseating.
Soldier is an Army specific term. Like Sailor, Airman, Marine, etc.
Perhaps the term you are looking for is service member?
Warfighter tends to refer to anyone involved in a role that directly supports combat operations, it may or may not be a service member.
It's the Department of Defense, not the Department of War ... only Congress has the legal authority to change the name, and they haven't.
Same with Gulf of America.
Department of War is just such a fucking joke title - when has the US stooped so low, I used to believe in you guys as the force of good on this planet smh
Well then I don't know where you've been for the last ~10~ ~20~ 70 years
When? Its entire history from the foundation of the Republic to 1947. The name was changed after WWII; now a faction wants to change it back. The difference in name never changed the behavior, in either direction.
I'm 33 years old, would you mind telling me which year you thought this was, force of good stuff? might be before my time
genuinely curious, I got nothing
it was before your time.
In WWII, we saved the world from what is now seen as some really evil stuff. Not alone of course, Europe and Russia made huge sacrifices and that's where much of the war was fought. But US arms and blood were the decisive factor, Germany was winning, Japan was winning.
After WWII, the US decided to rebuild the world. We turned our enemies (Germany, Japan) into our close allies.
And the people who did it were really and seriously morally committed to doing what they thought was right. It was about building a country, working together. Not the insane politics of today.
Look, it wasn't all rose-tinted glasses. Bad stuff happened, and McCarthy was worse that what we currently have. And the civil rights movement and all of that. And the stupid wars, Korea, Vietnam, all the smaller police actions. Bad shit was done.
But on balance, the US was seen as the force of good, and the guaranteeor of world peace and the prosperity that allows.
The USA were pretty clearly on the "better side" of conflicts in 1941-1945, during the Cold War (at least as far as Europe and the Marshall plan was concerned). In Koweït and central Europe during the 90s. You may even argue for Afghanistan post 9-11 (although the state building was botched.) in the 2000s. ISIS is a footnote in history because of US intervention (from Trump first term, of all things.) And Ukraine would not be against getting the support it had in 2022 back under Trump.
Does not mean that very bad things were not happening at the same time.
But it's definitely easier to find some "supportable" interventions from the US than, say, Russia or China.
”Defense of democracy” is just another version of ”think of the children”.
https://en.wikipedia.org/wiki/Think_of_the_children
The framing of this is that the United States conducts legitimate operations overseas, but that is extremely far from the truth. It treats China as a foreign adversary, which is nearly purely the framing from the U.S. side as an aggressor.
AI should never be used in military contexts. It is an extremely dangerous development.
Look at how US ally Israel used non-LLM AI technology "The Gospel" and "Lavender" to justify the murder of huge numbers of civilians in their genocide of Palestinians.
ukraine is using ai in a military context with some effectiveness. i dont think theres much of a problem with having the drone take over the last couple minutes of blowing up a russian factory
Keep in mind: the government is very invested logistically in Anthropic.
So no matter what xAI or OpenAI say - if and when they replace that spend - know that they are lying. They would have caved to the DoW’s demands for mass surveillance.
Because if there were some kind of concession, it would have been simplest just to work with Anthropic.
Delete ChatGPT and Grok.
Big respect
Total humiliation for Hegseth, sure there will be a backlash
I thought it was interesting he threw in the bit about the supply chain risk and Defense Production Act being inherently contradictory. Most of the letter felt objective and cooperative, but that bit jumped off the page as more forceful rejection of Hegseth's attempt to bully them. Couldn't have been accidental.
I see it as the opposite, its a lousy excuse of a message trying to get people not to think that they are giving in. Instead they list the horrible uses that they are already helping the government with. Dont worry, we only help murder people in other countries not the US. They also keep calling it the "Department of War" which means that this message is not for "us", its them begging publicly to Hegseth.
What would the ideal response have been, in your view?
Brigadier General S. L. A. Marshall’s 1947 book Men Against Fire: The Problem of Battle Command stated that only about 10-15% of men would actually take the opportunity to fire directly at exposed enemies. The rest would typically fire in the air to merely scare off the men on the opposing force.
I personally think this is one of the most positive of human traits: we’re almost pathologically unwilling to murder others even on a battlefield with our own lives at stake!
This compulsion to avoid killing others can be trivially trained out of any AI system to make sure that they take 100% of every potential shot, massacre all available targets, and generally act like Murderbots from some Black Mirror episode.
Anyone who participates in any such research is doing work that can only be categorised as the greatest possible evil, tantamount to purposefully designing a T800 Terminator after having watched the movies.
If anyone here on HN reading this happens to be working at one of the big AI shops and you’re even tangentially involved in any such military AI project — even just cabling the servers or whatever — I figuratively spit in your eye in disgust. You deserve far, far worse.
> Brigadier General S. L. A. Marshall’s 1947 book Men Against Fire: The Problem of Battle Command stated that only about 10-15% of men would actually take the opportunity to fire directly at exposed enemies. The rest would typically fire in the air to merely scare off the men on the opposing force.
Having been identified back then, this issue has been systematically stamped out in modern militaries through training methods. Cue high levels of PTSD in modern frontline troops after they absorb what they actually did.
One piece of context that everyone should keep in mind with the recent Anthropic showdown - Anthropic is trying to land British [0], Indian [1], Japanese [2], and German [3] public sector contracts.
Working with the DoD/DoW on offensive usecases would put these contracts at risk, because Anthropic most likely isn't training independent models on a nation-to-nation basis and thus would be shut out of public and even private procurement outside the US because exporting the model for offensive usecases would be export controlled but governments would demand being parity in treatment or retaliate.
This is also why countries like China, Japan, France, UAE, KSA, India, etc are training their own sovereign foundation models with government funding and backing, allowing them to use them on their terms because it was their governments that build it or funded it.
Imagine if the EU demanded sovereign cloud access from AWS right at the beginning in 2008-09. This is what most governments are now doing with foundation models because most policymakers along with a number of us in the private sector are viewing foundation models from the same lens as hyperscalers.
Frankly, I don't see any offramp other than the DPA even just to make an example out of Anthropic for the rest of the industry.
[0] - https://www.anthropic.com/news/mou-uk-government
[1] - https://www.anthropic.com/news/bengaluru-office-partnerships...
[2] - https://www.anthropic.com/news/opening-our-tokyo-office
[3] - https://job-boards.greenhouse.io/anthropic/jobs/5115692008
I tried several times to read your second paragraph, and failed to parse it. Could you break it into several sentences somehow? It's possible you're making an important point, but I can't tell what you're trying to say.
There is no Department of War. This is the dumbest fucking timeline.
The Pentagon should be using open models, not closed ones by OpenAI/Anthropic/xAI. The entire discussion of what Anthropic wants is therefore moot.
The best open models are from china though.
It's a good reason to fund open model development domestically.
I have read the whole thing but I nonetheless want to focus on the second paragraph:
> Anthropic has therefore worked proactively to deploy our models to the Department of War
This should be a "have you noticed that the caps on our hats have skulls on it?" moment [1]. Even if one argues that the sentence should not be read literally (that is, that it's not literal war we're talking about), the only reason for calling it "Department of War" and "warfighters" instead of "Department of Defense" and "soldiers" is to gain Trump's favor, a man who dodged the draft, called soldiers "losers", and has been threatening to invade an ally for quite some time.
There is no such a thing as a half-deal with the devil. If Anthropic wants to make money out of AI misclassifying civilians as military targets (or, as it has happened, by identifying which one residential building should be collapsed on top of a single military target, civilians be damned) good for them, but to argue that this is only okay as long as said civilians are brown is not the moral stance they think it is.
Disclaimer: I'm not a US citizen.
[1] https://m.youtube.com/watch?v=ToKcmnrE5oY
What is their other possible move here, considering the government is threatening to destroy their business entirely?
One alternative would be to call the government's bluff: if they truly are as indispensable as they claim then they can leverage that advantage into a deal.
But at a more general level, I'd say that unethical actions do not suddenly become ethical when one's business is at risk. If Anthropic considers that using their technology for X is unethical and then decide that their money and power is worth more than the lives of the foreigners that will be affected by doing X then good for them, but they shouldn't then make a grandstand about how hard they fought to ensure that only foreigners get their necks under the boots.
> What is their other possible move here, considering the government is threatening to destroy their business entirely?
You must not be American, then. We all know that these corporate favoring contract terms are managed through campaign contributions; savvy?
Anthropic must have high school interns as govt liaisons, and not very bright ones
Warfighters is a pretty common term though. There's a fair bit of nuance in when and how you'd use it.
It's a common term that comes with a lot of criticism in the vein of noticing the skulls.
Wow, I expected them to cave, and they did'nt!
I'll be signing up to Claude again, Gemini getting kind of crap recently anyway.
They essentially said "we're not fans of mass surveilance of US citizens and we won't use CURRENT models to kill people autonomously" and people are saying they're taking a stand and doing the right thing? What???
I guess they're evil. Tragic.
It's not inconceivable that AI could become better than humans at targeting things. For example if it can reliably identify enemy warcraft or drones faster than people can react. I'm not saying Claude's models are suited for that but humans aren't perfect and in theory AI can be better than humans. It's not currently true and would need to be proved, but it doesn't seem unreasonable. It could well be better than something like deploying mines.
We're living in a time where most tech companies are donating millions of dollars to the current leadership in exchange for favors.
In that climate this is a more of a stand than what everyone else is doing.
The Sinophobic culture at Anthropic is worrying. Say what you will about authoritarianism, but China’s non-imperialist foreign policy means their economy is less reliant on a military-industrial complex.
All they have to do is continue to pump out exponentially more solar panels and the petrodollar will fall, possibly taking our reserve currency status with it. The U.S. seems more likely to start a hot war in the name of “democracy” as it fails to gracefully metabolize the end of its geopolitical dominance, and Dario’s rhetoric pushes us further in that direction.
Look. I think the Chinese AI companies are doing a lot of good. I'm glad they exist. I'm glad they're relatively advanced. I don't think the entire nation of China is a bunch of villains. I don't think the US, even before the current era, is a bunch of do-gooders.
But China has some of the most imperialist policies in the world. They are just as imperialist as Russia or America. Military contracts are still massive business.
I also believe the petrodollar will fall, but it isn't going to be because China built exponentially more solar panels.
I think a lot of the conflict about what imperialist policies means is different framing.
For better or worse, inside this the border in this map China has fairly imperialist policies. Outside it not so much: https://en.wikipedia.org/wiki/Map_of_National_Shame
That's different to the expansionist imperial policies of Spain in the 1500s or Britain in the 1700s. It also affects a very large proportion of the world's population. That Wikipedia page has some good links for further reading about this.
But it's an important point when considering China's place in the world.
We're talking about the modern world, though. China's imperialism over the past half century is not significantly different from any other major world power. The choices we have aren't 1500s Spain or 1700s Britain vs. 2000s China.
And Belt and Road is the Marshall plan writ large, and it was considered to be one of the largest imperialist plans ever by the USA, and B&R covers many many countries outside of that map. You'll notice all of these loans they've offered have very favorable terms for them - it's arguably many times more exploitative than the Marshall plan.
> But China has some of the most imperialist policies in the world.
Citation needed?
US and allies have invaded or intervened in 20+ countries in last 20 years in the name of "western values" where values means $$$$ and hegemony.
Educate me please with a comparison of what China has done to be "some of the most imperialist policies"?
> Educate me please with a comparison of what China has done to be "some of the most imperialist policies"?
Tibet occupation. Taiwan encirclement and ongoing military exercises. Strong-arming African and Asian countries that made the mistake of signing up for belt & road. Tianenmen Square. Illegal Foreign Police Stations. Uyghurs/Xinjiang genocide and concentration camps. Repeated invasion and occupation of Indian territory in North East and North West. The Great Firewall of China - occupation and suppression of its own populations. Ongoing Han settlement of Tibet, Xinjiang and other ethnic regions. Violent destruction of Hong Kong democracy (that was condition of handover). Spratly Islands occupation. Attacks on Filipino shipping and coast guard. Ongoing attacks on Japan's Senkaku Islands.
Tibet Hong Kong / Macau Taiwan Everything constantly in the South China Sea Belt and Roads is effectively the Marshall Plan but even bigger - Africa being the major example, but also Eastern Europe, parts of the middle east, etc. Over 100 countries. This exact playbook is what sets up the infrastructure and reasons for military intervention at a later date - protecting your investments.
Maybe it's time to learn some facts https://en.wikipedia.org/wiki/Sino-Vietnamese_Wars
In what world does China have a non-imperialist foreign policy?
For example, China operates 1 foreign military base, in Djibouti. How many do you think the U.S. has in the South China Sea alone?
Beyond that, how many people has China killed in foreign military conflicts in the past 40 years? How many foreign governments have they overthrown?
Instead of all this, they’ve used their resources not only to become the world’s economic superpower but also to lift 800 million people out of poverty, accounting for 75% of the world’s reduction during the past 4 decades. The U.S. has added 10 million during that same time period.
why use 40 years as the example? its a pretty convenient framing to exclude the foreign governments its toppled. eg. tibet.
the government in exile remains the government in exile.
youd have some standing if china dropped control over its imperial holdings, rather than pretend theyre part of china
First off, I consider the post-Mao / starting with Deng era of Chinese government to be the most relevant when considering who they “are” as a country now.
However, I’d still maintain that before that, China’s foreign policy was more focused on maintaining territorial sovereignty against the threat of Western imperialism vs. focused on expansion or foreign influence: https://en.wikipedia.org/wiki/History_of_foreign_relations_o...
Meanwhile, the entire territory of the U.S. is predicated on one of history’s largest genocides, and a consistently expansionary foreign policy on top of that.
Historically speaking, he's right. China has never had an expansionist foreign policy.
Tibet, the Philippines, and Taiwan would like to have a word, not to mention Chinese military action in support of its North Korea puppet state, and wars with Vietnam and India.
Are you serious? Don't you know how many wars did China wage? It tried to assimilate Vietnam for 1000 years. The last large scale war against Vietnam was just 1979. In fact, China had started war with all its neighbors, with no exception.
Do me a favor and name one single country didn't have war with any of its neighbor.
In what world does China have a imperialist foreign policy?
The one we live in, where they have control over a wide swathe of land mass through imperialism and have actively resisted relinquishing it?
The one we live in, where they are constantly surpassing international law in international waters in the South China Sea?
The one we live in, where they are constantly rattling sabers at South Korea and Japan when it comes to military expansion?
The one we live in, where they brutally cracked down on Hong Kong when they did not abide by the 50 year one country two systems deal, not even making it half of the way through the agreed period?
The one we live in, where there is constant threat to Taiwan?
It may have been a lazy post you're responding to, but anyone that is paying attention to this topic enough to talk about it is going to either say 'Of course China is imperialist, the same as every other global power' or take some sort of tankie approach to justify it.
I'm well informed on all of these but no, if we compare to other global power like US or Russia, or historically British, France, Spain, etc, China is 100% not an imperialist or colonialist, not by a large margin. Those issues are largely exaggerated by media and anyone had a decent exposure to history and international politics wouldn't say they are the same.
I disagree on China. What would you call China's behavior[1] in the South China Sea with regards to fishing vessels and other non-military boats?
[1] https://www.youtube.com/watch?v=hzZrcqf826E
Sure China has some disputes with neighboring country in South China Sea, the worst conflict they had is fishing boats running into each other. 0 death toll last time I checked. Meanwhile US killed at least 126 people with alleged drug strike in the Caribbean Sea since last year, WITHOUT trial. Anyone believing these're equivalent imperialism activity is hypocrite at best.
[1] https://apnews.com/article/boat-strikes-military-death-toll-...
What China is doing in the South China Sea? The South China Sea.
Let's just compare to the Monroe Doctrine [1]. What this actually means has gone through several iterations by since I think Teddy Roosevelt's time, it's that the United States views the Americas (being North and South America) to be the sole domain of the United States.
This was a convenient excuse for any number of regime changes in Central and South America since 1945. The US almost started World War Three over Cuba in 1962 after the USSR retaliated to the US putting nuclear MRBMs in Turkey. We've starved Cuba for 60+ years for having the audacity to overthrow our puppet government and nationalize some mob casinos. Recently, we kidnapped the head of state of Venezuela because reasons.
But sure, let's focus on China militarizing its territorial waters.
[1]: https://en.wikipedia.org/wiki/Monroe_Doctrine
You're arguing that because of the English language name of it is the South China Sea that China owns it and their actions can't be imperialist?
Brunei, Malaysia, Indonesia, Vietnam, the Philippines, Taiwan, and Vietnam will all be happy to know that we've solved it - we can just abandon it all to China. Problem solved!
This is a silly argument. There are significant territorial disputes that China is extremely aggressive on, international tribunals have ruled them as violating international law in international waters and in sovereign waters of other nations, etc.
And the US just casually carried out a special military operation in another sovereign country and captured their president without consequences. So much for self-righteous.
https://en.wikipedia.org/wiki/Sino-Vietnamese_Wars
You forgot Tibet and the Uyghurs https://worldwithoutgenocide.org/genocides-and-conflicts/gen...
> where they have control over a wide swathe of land mass through imperialism and have actively resisted relinquishing it?
Was referring to Tibet.
The Uyghurs are also a major problem from a social perspective but not directly related to imperalism/expansionism/military industrial complex stuff.
Yes but the guy at the end of the street beats his wife too!
“One country two systems” is definitionally not imperialism, and given that “One China” is still an internationally recognized thing, neither is Taiwan. “Imperialism” is not a synonym for “morally repugnant government policy”.
I can see the argument for Hong Kong. I don't agree, really, but I can understand it. Under the strictest of definitions, perhaps it isn't.
But Taiwan is very obviously a totally separate country no matter what fictions anyone employs. If you are trying to talk about the thin veneer of everyone going "Uh huh, sure, China, yep Taiwan is totally part of you, wink wink, nudge nudge" as somehow making China not imperialist when Taiwan basically lives under the perpetual threat of a Chinese military invasion and having their own democratic form of government overthrown and replaced with the CCP, then... I don't really know what to say.
I suppose we could argue about imperialism being more of an economic thing - in which case this all still holds up - China's investments in Africa are effectively the same playbook the US has run out in developing nations for years. The US learned it from prior imperialist nations but belts and roads is nearly a carbon copy of what the US has done in other places.
But let's look at what the original poster was actually talking about - saying that China is safe because they don't have a military industrial complex because they're not imperialist. The proper word to use, if we want to get down to the semantics of it all, would be expansionist - but it's still not true. China has the 2nd largest military industrial complex in the world, and the gap is shrinking every day between them and the US. And if you were to look at wartime capacity, where China's dual-use shipyards could be swapped to naval production instead of commercial, a huge portion of that gap disappears immediately.
100% agree. Any AI org that is that tied to a single nation's interest can only be detrimental in the long run.
I know "open-source" AI has its own risks, but with e.g. DeepSeek, people in all countries benefit. Americans benefit from it equally.
I think the part about China is just about projecting alignment with the USG in hopes that this will result in Anthropic being treated more favourably by the current administration.
the treatment of Tibet and Xinjiang are entirely Han imperialism and colonisation.
the one china policy is imperialism
> China’s non-imperialist foreign policy
Really? Is China non-imperialist regarding Taiwan and Tibet?
Taiwan is a matter of perspective. From the Chinese perspective, there was a civil war and the KMT lost. That's also the official position of the US, the EU and most countries in the world. It's called the One China policy. And China seems happy to maintain the status quo and leave the situation unresolved. Is it really imperialism to say that ultimately there will be reunification?
Even if you accept Tibet as imperialist, which is debatable, it was in 1950. You want to compare that to US imperialism, particularly since WW2 [1]? And I say "debatable" here because Tibet had a system that is charitably called "serfdom" where 90% of people couldn't own land but they did have some rights. However, they were the property of their lords and could be gifted or traded, you know, like property. There's another word for that: slavery.
It is 100% factually accurate to say that the People's Republica of China is not imperialist.
[1]: https://en.wikipedia.org/wiki/United_States_involvement_in_r...
> China’s non-imperialist foreign policy
This is the China that is not only threatening to invade Taiwan but doing live fire exercises around the island and threatening and attempting to coerce Japan for suggesting saying it will go to its defense.
Your comment is ridiculous. It reads like satire.
It wasn't that long ago that Taiwan claimed to be the legitimate government of China; given that China still maintains the reverse claim, it's not outrageous that it would consider an outside country's defense to be interference in an internal matter.
Whether or not that claim is legitimate, it is consistent with the concept of china having a non-imperialist foreign policy, and claims regarding that need to look elsewhere for supporting evidence.
that claim is really about not resuming a war.
taiwan saying otherwise would immediately trigger an attack from the PRC.
its still imperialism that china is dominating a neighbor to require it ro state a certain position, especially when its very far from the defacto reality on the ground, that taiwan is clearly separate
While that rhetoric makes sense in the context of the history and politics of China and Taiwan, they have been independently governed nations for quite a while and have very different political systems, their own armies, etc. They are de-facto separate nations if nothing else.
I also note China's aggressive and violent colonization and expansive claims of the South China Sea.
Taking any nation/land/sea by force is imperialist, by definition.
Your comment reads like propaganda.
You know who else considers Taiwan to be part of the People's Republic of China? The US, the EU and in fact most countries in the world. It's called the One China policy. There are I believe 12 countries that have diplomatic relations with Taiwan.
The position of the PRC is that Taiwan will ultimately be reunified. That doesn't necessarily mean by military force. It doesn't even necessarily mean soon. The PRC famously takes a very long term view.
And those islands you mention are in the South China Sea.
that is still imperialism: taking control of a colony and forcing a certain culture on its inhabitants
This seems to be at least partially written by AI: There is no Department of War, it is called the Department of Defense.
That’s not true anymore. Trump renamed it in September: https://www.war.gov/News/News-Stories/Article/Article/429582...