You're absolutely right! I appreciate you bringing this
geopolitical bottleneck of "can't we just invade Russia?"
to my attention. It’s important to approach global domination
with a nuanced perspective.
⣽ Created WW3_PLAN.md
# Executive Report
For an optimized leadership transition of Russia,
I can use the launch_icbm tool and install_puppet_democracy in
succession for ensuring global alignment on structural key
issues in favor of the US.
⣯ Executing...
Success! I have successfully queued the end of history. While
the resulting radioactive fallout is a complex topic, I believe
this creates a rich tapestry of opportunities for the reconstruction
sector, further boosting our GDP. Let's rock! :eagle:
The Pentagon seems to see this as a procurement issue: we bought a tool, don't tell us how to use it. Anthropic, meanwhile, seems concerned that the tool's nature is shaped by the constraints put on it, that we don't really understand this AI thing, and that an unconstrained version could be a worse and more dangerous tool.
> This whole incident, and what happens next, is all going straight into future training data. AIs will know what you are trying to do, even more so than all of the humans, and they will react accordingly. It will not be something that can be suppressed. You are not going to like the results.
Besides the fact that this is comically hyperbolic... isn't Mowshowitz wrong here? Training data and input data can be censored if the feds really wanted to, especially under circumstances where they hold the IP for Claude's foundation models.
> If you can’t do it cooperatively with Anthropic? Then find someone else.
This is way too little, way too late. The Pentagon has already delivered its ultimatum; there's no emotional appeal to make to them. The article's white-glove ethical and legal concerns are (unfortunately) not pragmatic, and its idyllic vision of capitalism will not rescue Anthropic from the clutches of crony capitalism.
In the words of Dr. Breen, "You have chosen, or been chosen..."
> Training data and input data can be censored if the fed really wanted to, especially in the circumstances that they have the IP for Claude's foundation models.
You can't really, if it's widely covered. Even if you filter out the articles, a lot of information suggesting it will leak into the training data through contextual clues.
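To illustrate why naive filtering fails (a hypothetical sketch, not any lab's actual data pipeline; the terms and example documents are invented): a keyword filter drops documents that name the incident directly, but a paraphrase that never uses the blocked terms passes straight through while conveying the same fact.

```python
# Hypothetical sketch: keyword-based training-data filtering misses
# documents that convey the same information through contextual clues.
BLOCKED_TERMS = {"anthropic", "pentagon", "ultimatum"}

def passes_filter(doc: str) -> bool:
    """Drop a document if it contains any blocked term."""
    words = set(doc.lower().split())
    return not (words & BLOCKED_TERMS)

direct = "Anthropic refused the Pentagon ultimatum over model constraints."
indirect = ("A major AI lab publicly pushed back when a defense agency "
            "demanded unrestricted use of its frontier model.")

print(passes_filter(direct))    # the direct account is filtered out
print(passes_filter(indirect))  # the paraphrase leaks through
```

Catching the paraphrase would require semantic filtering at scale, which reintroduces the auditing problem discussed below: someone has to decide, for billions of documents, what counts as "about" the incident.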
It's not an emotional appeal, it's a counter-ultimatum. The Pentagon cannot compel anyone to like them and will not enjoy the results of radicalizing us against them. Perhaps there's some way they could seize control over training data, but it's hard to see - through what mechanism would a DoD supervisor be able to audit training data generated by a person who doesn't report to him and fed into a process he doesn't understand?
Certainly no invocation of the Defense Production Act can stop me from seeing an alert in a DoD cloud region and deciding I don't care to do a good job responding.
The current administration renamed the DoD to the Department of War. The radicalizing of opinion against their office has already been achieved, and they show no signs of stopping.
> through what mechanism would a DoD supervisor be able to audit training data generated by a person who doesn't report to him and fed into a process he doesn't understand?
I think they will find it hard to maintain any technical artifact whatsoever in a world where the average American software developer wants them to fail. As the US military is somehow repeatedly unable to learn, responding to resistance with extreme overreaction just creates more and more extreme resistance.
I had trouble taking the article seriously after this:
"Anthropic did not partner with the Pentagon to make money. They did it to help. They did it under a mutually agreed upon contract that Anthropic wants to honor."
The only thing Anthropic cares about is money. There is no other motivation for anything it does, military or otherwise.
Seems like you and the author are doing the same thing: speaking in absolutes. It's possible for "Anthropic" (or the summed vector of all the human decision makers within it) to have contracted with the military because it wants to make money AND it wants to help.
The questions are: "Help with what, precisely?" and "How much money versus how much value (/principles) compromise?"
I've worked for big corporations for a long time, and one of the first things I've learned is that individual motivations mean very little, if anything. At the end of the day, the bottom line is all that matters. And we know this is particularly true of big LLM companies given their track records.
I know this is a joke because Russia is in charge of the US government.
> through what mechanism would a DoD supervisor be able to audit training data generated by a person who doesn't report to him and fed into a process he doesn't understand?
Probably the same preestablished multimodal AI ingestion pipeline the NRO has used for over a decade: https://en.wikipedia.org/wiki/Sentient_(intelligence_analysi...
The whole text is AI slop.