Thank you for your new substack, very illuminating (and scary) reading.
It's as if someone used the show "Person of Interest" as a guidebook for how to build (and weaponize) the all-seeing Eye of Sauron that we have today.
"This page is private"
Fixed!
The idea that it's harder to query and delete everything relating to a person from a well-organized graph than from the typical corporate patchwork of data systems seems very improbable. The post also reads like a barely tweaked Gemini output. I'm not a Palantir fan, but this feels flimsy.
Fair point. I realize that I oversimplified. My main argument is that the Ontology isn't a clean internal graph. It ingests from sources Palantir doesn't own or control, so deleting your node doesn't touch the upstream data. And inferred edges (risk scores, behavioral patterns) were never stored as discrete objects. You can't delete an inference.
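To make the "you can't delete an inference" point concrete, here is a toy sketch, with entirely hypothetical names and a deliberately simplified data model (not Palantir's actual Ontology): a derived score is materialized outside the graph, so removing the node and its edges leaves the inference behind.

```python
# Toy graph illustrating "deleting the node doesn't delete the inference".
# Hypothetical data; not Palantir's actual data model.

nodes = {"alice", "acme_corp"}
edges = {("alice", "employed_by", "acme_corp")}

# A derived inference, materialized outside the graph itself.
risk_scores = {"alice": 0.7}  # computed once from graph features

def delete_node(name):
    """Remove a node and every edge that touches it."""
    nodes.discard(name)
    for e in list(edges):
        if name in (e[0], e[2]):
            edges.discard(e)

delete_node("alice")
print(edges)        # set() -- the edge is gone
print(risk_scores)  # {'alice': 0.7} -- the inference survives
```

Erasing the derived score requires knowing it exists and where it lives, which is exactly what an outside party can't see.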
And, I will hold my hand up to say I did use an LLM (Claude, actually). But only to make the text read and flow better (something I definitely won't do again). The underlying research is my own and something I am very passionate about. Thank you for your feedback! I appreciate it. :)
I appreciate the work you're doing! Speaking as someone who had to manually execute CCPA right to delete requests in a past life, the state of this capability in most enterprises is pretty lacking. I hope to read stronger/clearer content from you on this topic in the future.
Thank you! I think it is important and I am learning as I go.
Here is a longer piece I wrote on (Palantir's) ELITE that you might like: https://frontierlabs.substack.com/p/the-pins-are-people
>Here’s why this changes everything: most AI accountability frameworks assume a discrete, auditable dataset. EU’s GDPR gives you the right to erasure — the right to delete your data. But GDPR was written for databases. The Ontology is a graph. You can delete a node. You can’t easily delete the edges i.e, the inferred relationships between you and everything else the system has connected you to.
Edges are personal data according to GDPR so this is completely wrong. Almost all things to which the GDPR applies are edges.
'impossiblefork likes stories' is an edge.
Ontologies are also old. It's been a big research area since like the 90s.
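A minimal sketch of that point, with made-up data: if personal data lives in edges (subject, predicate, object triples), then each edge is itself a data record, and erasure just means dropping every triple that names the person.

```python
# Personal data stored as graph edges (subject, predicate, object).
# 'impossiblefork likes stories' is itself a data record.
triples = [
    ("impossiblefork", "likes", "stories"),
    ("impossiblefork", "phone", "+46-555-0100"),
    ("someone_else", "likes", "music"),
]

def erase(store, person):
    """GDPR-style erasure over edges: drop every triple naming the person."""
    return [t for t in store if person not in t]

triples = erase(triples, "impossiblefork")
print(triples)  # [('someone_else', 'likes', 'music')]
```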
Fair correction - I should have been more precise.
The point I was reaching for is a practical enforcement one: there is no standardized technical mechanism for verifying that edges have actually been deleted from an opaque, continuously updated knowledge graph. Regulators have audit powers, but graph deletion verification, i.e., confirming that relational inferences are gone and not just that a node was removed, has no established standard. Controllers can assert compliance in ways that are genuinely difficult to challenge in practice.
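To illustrate why verification is hard, here is the naive audit check an auditor might run, assuming full read access to the materialized edge set (which outside parties generally don't have); all names here are hypothetical. The check can confirm no remaining edge mentions a person, but it cannot see inferences stored outside the graph, which is the gap.

```python
# Naive "deletion verification": confirm no remaining edge mentions the
# person. Hypothetical sketch; it requires full read access to the
# materialized graph and is blind to derived data stored elsewhere.

def node_fully_erased(edge_set, person):
    """True if no remaining edge mentions the person."""
    return all(person not in edge for edge in edge_set)

remaining = [("bob", "likes", "music")]
print(node_fully_erased(remaining, "alice"))  # True
print(node_fully_erased(remaining, "bob"))    # False
```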
Ah, then I don't disagree.
Facebook has shadow profiles and collects phone numbers from these contacts.
You could certainly include phone numbers and residential addresses as edges that should be deleted for compliance.
That is the easy case, right?
The Ontology problem is one layer harder. The edges I'm describing are inferred, i.e., risk scores, behavioral patterns, connections between a person and a geography. There's no standardized form for them and no agreed technical definition of what deletion even means. That's where the enforcement gap is sharpest, and that's what I intended to address in the piece.
Yes, but that is already legally required. If you aren't storing a name but storing a phone number, and somebody has asked you to delete his personal data, you have broken the law.
I doubt they're legally required to delete your phone number from your friend's phone scan, nor your friend's pictures with your face.
That's the shadow profile.
Your phone number is personal information, so they are. They aren't allowed to save your phone number from your friend's phone scan in the first place.
Doing that kind of thing is a serious crime. In Sweden it'd be something like "unapproved intelligence activity against a person" with a multi-year prison sentence, since you're gathering data about someone by deceptive means. We're thus not talking about GDPR any more, but about prison sentences for espionage-adjacent stuff.
AIDR
"Here's why this changes everything" - I am begging people to not just paste the output of LLM, the writing is so bloody turgid I can't stand it.
By all means chuck a few things through and read it, but please please please don't make me read your slop - put some effort in.
Point taken, mate.
The underlying research is mine (one that I am actually very passionate about), but I did run it through an LLM to smooth the flow. Not something I will do again. Thanks for the feedback!
it's funny because you'd think these trillion parameter models trained on the entirety of humanity's written works would be amazing at writing, but instead all the models just converge to the same tired overly-enthusiastic phrases
:D well, good news is, they haven't passed the Turing Test completely ... just yet!
the legal question is settled. edges are personal data under gdpr. the practical question is who audits a knowledge graph to verify deletion actually happened. palantir knows the answer is nobody.
Precisely! :) You might also appreciate - https://frontierlabs.substack.com/p/the-pins-are-people