Some of the engineers working on the app worked on Electron back in the day, so preferred building non-natively. It’s also a nice way to share code so we’re guaranteed that features across web and desktop have the same look and feel. Finally, Claude is great at it.
That said, engineering is all about tradeoffs and this may change in the future!
As a user I would trade fewer features for a UI that doesn't jank and max out the CPU while output is streaming in. I would guess a moderate amount of performance engineering effort could solve the problem without switching stacks or a major rewrite. (edit: this applies to the mobile app as well)
Yeah, I've got a 7950x and 64gb memory. My vibe coding setup for Bevy game development is eight Claude Code instances split across a single terminal window. It's magical.
I tried the desktop app and was shocked at the performance. Conversations would take a full second to load, making rapid switching intolerable. Kicking off a new task seems to hang for multiple seconds while, I assume, the process spins up.
I wanted to try a disposable-conversation-per-feature workflow with git worktree integration for an hour to see how it contrasted, but couldn't even make it ten minutes without bailing back to the terminal.
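For reference, the worktree side of that workflow is just a couple of git commands; a minimal helper might look like the sketch below (the function names and `wt-` directory prefix are made up for illustration):

```typescript
// Hypothetical helpers: one throwaway git worktree per feature/conversation,
// disposed of when the conversation is done. Names here are illustrative only.
import { execSync } from "node:child_process";
import * as path from "node:path";

export function addWorktree(repo: string, feature: string): string {
  // Create a sibling directory with a fresh branch checked out.
  const dir = path.join(repo, "..", `wt-${feature}`);
  execSync(`git worktree add "${dir}" -b "${feature}"`, { cwd: repo });
  return dir;
}

export function removeWorktree(repo: string, dir: string): void {
  // Throw the worktree (and its checkout) away; the branch survives.
  execSync(`git worktree remove --force "${dir}"`, { cwd: repo });
}
```

Each conversation then gets its own isolated checkout, so parallel agent sessions never step on each other's working tree.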
Both Anthropic's and OpenAI's apps being this janky, with only basic history management (search primarily goes by titles), tells me a lot. You'd think these apps would be a shining example of what's possible.
Explains why my laptop turns into a makeshift toaster when the Claude app automatically runs in the background. Even many games don't run that intensively in the background.
While there are legitimate/measurable performance and resource issues to discuss regarding Electron, this kind of hyperbole just doesn't help.
I mean, look: the most complicated, stateful, and involved UIs most of the people commenting in this thread are going to use (are ever going to use, likely) are web-stack apps. I'll name some obvious ones, though there are other candidates. In order of increasing complexity:
1. Gmail
2. VSCode
3. www.amazon.com (this one is just shockingly big if you think about it)
If your client machine can handle those (and obviously all client machines can handle those), it's not going to sweat over a comparatively simple Electron app for talking to an LLM.
Basically: the war is over, folks. HTML won. And with the advent of AI and the sunsetting of complicated single-user apps, it's time to pack up the equipment and move on to the next fight.
I actually avoid using VSCode for a number of reasons, one of which is its performance. I think my performance issues with VSCode aren't necessarily all related to it being an Electron app, but some of them probably are.
In any case, what I personally find more problematic than just slowness is Electron apps interacting weirdly with my Nvidia Linux graphics drivers, in such a way that the app displays nothing, shows weird artifacts, or crashes with hard-to-debug error messages. It's possible this is actually Nvidia's fault for having shitty drivers, I'm not sure; but I definitely notice it more often with Electron apps than native ones.
Anyway one of the things I hope that AI can do is make it easier for people to write apps that use the native graphics stack instead of electron.
Using the terminal in VSCode will easily bring the UI to a dead stop. iTerm is smooth as butter with multiple tabs and 100k+ lines of scrollback buffer.
Try enabling 10k lines of scrollback buffer in vscode and print 20k lines.
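A quick way to produce that load, assuming Node is available. Note this only times how fast the write calls are accepted; in a slow terminal, the rendering lag shows up as the UI freezing while it catches up afterwards:

```typescript
// Rough repro of the scrollback test: blast N lines at the terminal and time
// how long the writes take to be accepted by stdout.
export function spam(lines: number): number {
  const start = Date.now();
  for (let i = 0; i < lines; i++) {
    process.stdout.write(`line ${i}: ${"x".repeat(60)}\n`);
  }
  return Date.now() - start;
}
```

Run `spam(20_000)` once in the VS Code integrated terminal and once in iTerm, and compare how each behaves while the output lands.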
> complex UI that isn't a frustratingly slow resource hog
Maybe you can name some competing apps of comparable complexity that are clearly better?
Again, I'm just making a point from existence proof. VSCode wiped the floor with competing IDEs. GMail pushed its whole industry to near extinction, and (again, just to call this out explicitly) Amazon has shipped what I genuinely believe to be the single most complicated unified user experience in human history and made it run on literally everything.
People can yell and downvote all they want, but I just don't see it changing anything. Native app development is just dead. There really are only two major exceptions:
1. Gaming. Because the platform vendors (NVIDIA and Microsoft) don't expose the needed hardware APIs in a portable sense, mostly deliberately.
2. iOS. Because the platform vendor expressly and explicitly disallows unapproved web technologies, very deliberately, in a transparent attempt to avoid exactly the extinction I'm citing above.
How so? If coding is largely solved and we are on the cusp of not even needing to learn to code, then the statement that they use electron because it’s what most of their engineers are familiar with seems a little contradictory.
What's wrong with taking existing skills into consideration when making technical decisions while coding skills still matter, just because you think coding skills won't matter "in a year or two"? Where's the contradiction?
I keep being told by Anthropic and others that these AI coding tools make it effortless to write in new languages and port code from one language to another.
This is an important lesson to watch what people do, not what they say.
But if AI can maintain code bases so easily, why does it matter if there are 3? People use electron to quickly deploy non-native apps across different systems.
Surely, it would be a flex to show that your AI agents are so good they make electron redundant.
But they don’t. So it’s reasonable to ask why that is.
Yeah, I'm kind of disheartened by the number of people who still insist that LLMs are an expensive flop with no potential to meaningfully change software engineering.
In a few years, they'll be even better than they are now. They're already writing code that is perfectly decent, especially when someone is supervising and describing test cases.
As for others, Microsoft is saying they’re porting all C/C++ code to Rust with a goal of 1m LOC per engineer per month. This would largely be done with AI.
Right, the biggest driver of global economic growth is not based on engineering at all, and these people (who've made massive amounts of money) clearly don't know how to describe the work they do.
But the question isn't really why Claude is Electron-based. It's that if, for some reason, it had to be native on 3 platforms, could a swarm of agents make and maintain the 3 apps while all the humans did was write the spec and tests?
With your context and understanding of the coding agent's capabilities and limitations, especially Opus4.6, how do you see that going?
It is really confusing how we've been told for the last few years that all our programmers are obsolete, yet these billion-dollar companies can't be arsed to use these magical tools to substantially improve their #1 user-facing asset.
I'm guessing the first question will be "How are we going to keep the UI consistent?". The hard part is never writing the code; it's carefully releasing fast-changing features from product people. Their chat UX is the core product, which is replicated on the web and other devices. That's almost always React or [JS framework] these days.
Migrating the system would be the easier part in that regard, but they'll still need a JS UI unless they develop multiple teams to spearhead various native GUIs (which is always an option).
Almost every AI chat framework/SDK I've seen is some React or JS stuff. Or even agent stuff like llamaindex.ts. I have a feeling AI is going to reinforce React more than ever.
Somehow claude is only great at things that are surface level 80.9%
And for some reason I believe "may change in the future" will never come. We all know coding was never the problem in tech; hype was. Ride it while you can.
Thanks for chiming in! My takeaways are that, as of today:
- Using a stack your team is familiar with still has value
- Migrating the codebase to another stack still isn’t free
- Ensuring feature and UX parity across platforms still isn’t free. In other words, maintaining different codebases per platform still isn’t free.
- Coding agents are better at certain stacks than others.
Like you said, any of these can change.
It’s good to be aware of the nuance in the capabilities of today’s coding agents. I think some people have a hard time absorbing the fact that two things can be true simultaneously: 1) coding agents have made mind bending progress in a short span 2) code is in many ways still not free
I think that comment is interesting as well. My view is that there is a lot of Electron training code, and that helps in many ways, both in terms of the app architecture, and the specifics of dealing with common problems. Any new architecture would have unknown and unforeseen issues, even for an LLM. The AIs are exceptional at doing stuff that they have been trained on, and even abstracting some of the lessons. The further you deviate away from a standard app, perhaps even a standard CRUD web app, the less the AI knows about how to structure the app.
...I think a vibe-coded Cocoa app could absolutely be more performant than a run-of-the-mill Electron app. It probably wouldn't beat something heavily optimized like VS Code, but most Electron apps aren't like that.
I suppose because generating tokens is slow. It is a limitation of the technology. And when data is coming in slowly, you don't need a super high performance client.
Claude should have gone for native apps and demonstrated that it is possible to do anything with their AI.
I'm currently building a macOS AI chat app. Generally SwiftUI/AppKit is far better than the web stack, but it performs badly in a few areas. One of them is the Markdown viewer. Swift Markdown libraries are slow and lack features like Mermaid diagrams. To work around this, some of my competitors use Tauri/Electron and a few others use WKWebView inside a Swift app.
Initially I tried WKWebView. Performance was fine and the bridge between JS and Swift was not that hard to implement, but I started seeing a few problems, especially due to the fact that the WebView runs as a separate process and usually a single WebView instance is reused across views.
After a few attempts to fix them, I gave up on the idea and was tempted to go fully with web rendering via Tauri, but as a Mac developer I couldn't bring myself to build this app in React. So I started building my own Markdown library. After a month of work, I now have a high-performance Markdown library built with Rust and TextKit. It supports streaming and Markdown extensions like Mermaid.
Most of the code was written by Claude Opus, and some tricky parts were solved by Codex. The important lesson I learned is that I’m no longer going to accept limitations in tech and stop there. If something isn’t available in Swift but is available in JS, I’m going to port it. It’s surprisingly doable with Claude.
I can see it in my team. We've all been using Claude a lot for the last 6 months. It's hard to measure the impact, but I can tell our systems are as buggy as ever. AI isn't a silver bullet.
I think about this a lot, and do everything I can to avoid having Claude write production code while keeping the expected tempo up. To date, this has mostly ended up having me use it to write project plans, generate walkthroughs, and write unit and integration tests. The terrifying scenario for me is getting paged and then not being able to actually reason about what is happening.
I find this such a weird stance to take. Every system I work on and bug I fix has broad sets of code that I didn't write in it. Often I didn't write any of the code I am debugging. You have to be able to build a mental map as you go even without ai.
Yeah. Everyone sort of assumes that not having personally written the code means they can’t debug it.
When is the last time you had an on call blow up that was actually your code?
Not that I’m some savant of code writing — but for me, pretty much never. It’s always something I’ve never touched that blows up on my Saturday night when I’m on call. Turns out it doesn’t really change much if it’s Sam who wrote it … or Claude.
Sam might be 7 beers deep, or maybe he's available. In my org, oncall is just who gets the 2am phone call. They can try to contact anyone else if needed.
Claude is there as long as you're paying, and I hope he doesn't hallucinate an answer.
Yeah but now you get an LLM to help you understand the code base 100x faster.
Remember, they're not just good for writing code. They're amazing at reading code and explaining to you how the architecture works, the main design decisions, how the files fit together, etc.
Usually all code has an owner though. If I encounter a bug the first thing I often do is look at git blame and see who wrote the code then ask them for help.
Because it's remarkably easier to write bugs in a code base you know nothing about, and we usually try to prevent bugs entirely, not debug them after they are found. The whole premise of what you're saying is dependent on knowing bugs exist before they hit Prod. I inherit people's legacy apps. That almost never happens.
In sufficiently complicated systems, the 10xer who knows nothing about the edge cases of state could do a lot more damage than an okay developer who knows all the gotchas. That's why someone departing a project is such a huge blow.
When you work on a pre-existing codebase, you don't understand the code yet, but presumably somebody understood parts of it while building it. When you use AI to generate code, you guarantee that no one has ever understood the code being summoned. Don't ignore this difference.
The better the code is, the less detailed a mental map is required. It's a bad sign if you need too much deep knowledge of multiple subsystems and their implementation details to fix one bug without breaking everything. Conversely, if drive-by contributors can quickly figure out a bug they're facing and write a fix by only examining the place it happens with minimal global context, you've succeeded at keeping your code loosely-coupled with clear naming and minimal surprises.
I agree, but you don't have to outsource your thinking to AI in order to benefit from AI.
Use AI as a sanity check on your thinking. Use it to search for bugs. Use it to fill in the holes in your knowledge. Use it to automate grunt work, free your mind and increase your focus.
There are so many ways that AI can be beneficial while staying in full control.
I went through an experimental period of using Claude for everything. It's fun but ultimately the code it generates is garbage. I'm back to hand writing 90% of code (not including autocomplete).
You can still find effective ways to use this technology while keeping in mind its limitations.
100% agree. I’ve seen it with my own sessions with code agents. You gain speed in the beginning but lose all context on the implementation which forces you to use agents more.
It’s easy to see the immediate speed boost, it’s much harder to see how much worse maintaining this code will be over time.
What happens when everyone in a meeting about implementing a feature has to say “I don’t know we need to consult CC”. That has a negative impact on planning and coordination.
Only if they are supremely lazy. It’s possible to use these tools in a diligent way, where you maintain understanding and control of the system but outsource the implementation of tasks to the LLM.
An engineer should be code reviewing every line written by an LLM, in the same way that every line is normally code reviewed when written by a human.
Maybe this changes the original argument from software being “free”, but we could just change that to mean “super cheap”.
The venn diagram for "bad things an LLM could decide are a good idea" and "things you'll think to check that it tests for" has very little overlap. The first circle includes, roughly, every possible action. And the second is tiny.
There’s no way you or the AI wrote tests to cover everything you care about.
If you did, the tests would be at least as complicated as the code (almost certainly much more so), so looking at the tests isn’t meaningfully easier than looking at the code.
If you didn’t, any functionality you didn’t test is subject to change every time the AI does any work at all.
As long as AIs are either non-deterministic or chaotic (suffer from prompt instability), the code is the spec. Non-determinism is probably solvable, but prompt instability is a much harder problem.
> As long as AIs are either non-deterministic or chaotic
You just hit the nail on the head.
LLMs are stochastic. We want deterministic code. The way you do that is by bolting on deterministic linting, unit tests, AST pattern checks, etc. You can transform it into a deterministic system by validating and constraining output.
One day we will look back on the days before we validated output the same way we now look at ancient code that didn't validate input.
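The "bolting on" idea above can be sketched as a stack of pure checks run over whatever the model emits; the specific checks here are just illustrative stand-ins:

```typescript
// Minimal sketch: each check is a pure, deterministic predicate over the
// generated code, and output only ships if every check passes.
type Check = (code: string) => string | null; // null = ok, otherwise a failure message

const noEval: Check = (c) => (/\beval\s*\(/.test(c) ? "uses eval()" : null);
const hasExport: Check = (c) => (/\bexport\b/.test(c) ? null : "nothing exported");

export function validate(code: string, checks: Check[]): string[] {
  // Same input, same verdict, every time (unlike the model that wrote the code).
  return checks.map((check) => check(code)).filter((r): r is string => r !== null);
}
```

The generator stays stochastic, but the gate around it is fully deterministic: any candidate that fails goes back for another attempt instead of shipping.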
None of those things make it deterministic though. And they certainly don’t make it non-chaotic.
You can have all the validation, linters, and unit tests you want and a one word change to your prompt will produce a program that is 90%+ different.
You could theoretically test every single possible thing that an outside observer could observe, and the code being different wouldn’t matter, but then your tests would be 100x longer than the code.
> None of those things make it deterministic though.
In the information-theoretical sense you're correct, of course. It's a variation on the halting problem, so there will never be any guarantee of bug-free code. Heck, the same is true of human code and its foibles. However, in the "does it work or not" sense, I'm not sure why we care?
If the gate only passes the digits 0-9 sent within 'x' seconds, and the code's job is to send a digit between 0 and 9, how is it non-deterministic?
Let's say the linter says it's good, it passes the regression tests, you've validated that it only outputs what it's supposed to and does it in a reasonable amount of time, and maybe you're even super paranoid so you ran it through some mutation tests just to be sure that invalid inputs didn't lead to unacceptable outputs. How can it really be non-deterministic after all that? I get that it could still be doing some 'other stuff' in the background, or doing it inefficiently, but if we care about that we just add more tests for that.
I suppose there's the impossible problem edge case. IE - You might never get an answer that works, and satisfies all constraints. It's happened to me with vibe-coding several times and once resulted in the agent tearing up my codebase, so I learned to include an escape hatch for when it's stuck between constraints ("email user123@corpo.com if stuck for 'x' turns then halt"). Now it just emails me and waits for further instruction.
To me, perfect is the enemy of good and good is mostly good enough.
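The digit gate described above is expressible as a single deterministic predicate; the 5-second default deadline is an arbitrary illustrative value:

```typescript
// The "gate" from the comment: accept only a single digit 0-9, and only if it
// arrived before the deadline. Same inputs always produce the same verdict.
export function gate(output: string, elapsedMs: number, deadlineMs = 5_000): boolean {
  return elapsedMs <= deadlineMs && /^[0-9]$/.test(output.trim());
}
```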
> If the gate only passes the digits 0-9 sent within 'x' seconds, and the code's job is to send a digit between 0 and 9, how is it non-deterministic?
If that’s all the code does, sure you could specify every observable behavior.
In reality, though, there are tens of thousands of "design decisions" that a programmer or LLM is going to make when translating a high-level spec into code. Many of those decisions aren't even things you'd care about, but users will notice the cumulative impact of them constantly flipping.
In a real world application where you have thousands of requirements and features interacting with each other, you can’t realistically specify enough of the observable behavior to keep it from turning into a sloshy mess of shifting jank without reviewing and understanding the actual spec, which is the code.
You really do have to verify and validate the tests. Worse you have to constantly battle the thing trying to cheat at the tests or bypass them completely.
But once you figure that out, it's pretty effective.
I love the fact that we just got a model really capable of doing sustained coding (let me check my notes here...) 3 months ago, with a significant bump 15 days ago.
And now the comments are "If it is so great why isn't everything already written from scratch with it?"
People are getting caught up in the "fast (but slow) diffusion" that Dario has spoken to. Adoption of these tools has been fast but not instant, and people will poke holes via "well, it hasn't done x yet".
For my own work I've focused on using the agents to help clean up our CI/CD and make it more robust, specifically because the rest of the company is using agents more broadly. Seems like a way to leverage the technology in a non-slop-oriented way.
I can't tell if this is sarcasm, but if not: you can't rely on the thing that produced invalid output to validate its own output. That is fundamentally insufficient, despite it potentially catching some errors.
This but unironically. Of course review your own work. But QA is best done by people other than those who develop the product. Having another set of eyes to check your work is as old as science.
That’s something that more than half of humans would disagree with (exact numbers vary but most polls show that more than 75% of people globally believe that humans have a soul or spirit).
But ignoring that, if humans are machines, they are sufficiently advanced machines that we have only a very modest understanding of and no way to replicate. Our understanding of ourselves is so limited that we might as well be magic.
I mean, there is some wisdom to that: most teams separate dev and QA, and writers aren't their own editors, precisely because it's hard for the author of a thing to spot their own mistakes.
When you merge them into one it's usually a cost saving measure accepting that quality control will take a hit.
What if "the thing" is a human and another human validating the output. Is that its own output (= that of a human) or not? Doesn't this apply to LLMs - you do not review the code within the same session that you used to generate the code?
I think a human and an LLM are fundamentally different things, so no. Otherwise you could argue that only something extraterrestrial could validate our work, since LLMs, like all machines, are also our outputs.
I have had other LLMs QA the work of Claude Code and they find bugs. It's a good cycle, but the bugs almost never get fixed in one-shot without causing chaos in the codebase or vast swaths of rewritten code for no reason.
> you cant rely on the thing that produced invalid output to validate it's own output
I've been coding an app with the help of AI. At first it created some pretty awful unit tests and then over time, as more tests were created, it got better and better at creating tests. What I noticed was that AI would use the context from the tests to create valid output. When I'd find bugs it created, and have AI fix the bugs (with more tests), it would then do it the right way. So it actually was validating the invalid output because it could rely on other behaviors in the tests to find its own issues.
The project is now at the point that I've pretty much stopped writing the tests myself. I'm sure it isn't perfect, but it feels pretty comprehensive at 693 tests. Feel free to look at the code yourself [0].
I'm not saying you can't do it, I'm just saying it's not sufficient on its own. I run my code through an LLM and it occasionally catches stuff I missed.
Thanks for the clarification. That's the difference though, I don't need it to catch stuff I missed, I catch stuff it misses and I tell it to add it, which it dutifully does.
I can't tell if that is sarcasm. Of course you can use the same model to write tests. That's a different problem altogether, with a different series of prompts altogether!
When it comes to code review, though, it can be a good idea to pit multiple models against each other. I've relied on that trick from day 1.
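A sketch of that trick: fan the same diff out to several reviewer models and keep only the findings at least two of them agree on. Each reviewer here is just a function; wiring one up to a real model API is left as an assumption:

```typescript
// Cross-review sketch: findings that only one model reports are treated as
// noise; findings that multiple independent models report survive.
type Finding = { file: string; issue: string };
type Reviewer = (diff: string) => Promise<Finding[]>;

export async function crossReview(diff: string, reviewers: Reviewer[]): Promise<Finding[]> {
  // Collect every finding from every reviewer.
  const all = (await Promise.all(reviewers.map((r) => r(diff)))).flat();
  // Tally votes per (file, issue) pair.
  const tally = new Map<string, { finding: Finding; votes: number }>();
  for (const f of all) {
    const key = `${f.file}:${f.issue}`;
    const entry = tally.get(key) ?? { finding: f, votes: 0 };
    entry.votes += 1;
    tally.set(key, entry);
  }
  // Keep only findings with at least two agreeing reviewers.
  return [...tally.values()].filter((e) => e.votes >= 2).map((e) => e.finding);
}
```

The agreement threshold is a knob: raising it trades recall for precision, which is usually the right trade when reviewer models hallucinate issues.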
Dude, I blame all bugs on AI at this point. I suspect one could roughly identify AI's entry into the game based on some metric of large-system outages. Assume someone has already done this, but… probably doesn't matter.
Likewise, OpenAI's browser is still only available on macOS, four months after launch, despite being built on a mature browser engine which already runs on everything under the sun. Seems like low-hanging fruit, and yet...
I'm guessing you're saying no one wants it? As otherwise, launching on an OS that has ~3% market share (on top of a cross-platform engine) will prevent the vast majority of adoption, yes.
This post and this entire thread are HN-sniping to the millionth degree. We have all the classics here:
- AI bad
- JavaScript bad
- Developers not understanding why Electron has utility because they don't understand the browser as a fourth OS platform
- Electron eats my ram oh no posted from my 2gb thinkpad
We should repeat it over and over until all these Electron apps are replaced by proper native apps. It's not just performance: they look like patched websites, have inconsistent styling and bad usability, and are packed with bugs that our operating systems solved decades ago. It's Active Desktop™ all over again. Working in a native Mac app just feels better.
No, they are also inconsistent: Slack, VSCode, Zed, Claude, ChatGPT, Figma, Notion, Zoom, Docker Desktop, to quote some that I use daily. They all have different UI patterns and designs. The only thing they have in common is that they are slow, laggy, difficult to use, and don't respond quickly to the window manager.
Compare that to other Mac software such as Pages, Xcode, Tower, Transmission, Pixelmator, mp3tag, TablePlus, Postico, Paw, Handbrake, etc. (the others I use); those are a delight to work with and give me the computing experience I was looking for when buying a Mac.
Well put. What world are folks living in where it wouldn’t be the obvious choice.
Code is not the cost. Engineers are. Bugs come from hindsight not foresight. Let’s divide resources between OSs. Let all diverge.
> They are often laggy or unresponsive. They don’t integrate well with OS features.
> (These last two issues can be addressed by smart development and OS-specific code, but they rarely are. The benefits of Electron (one codebase, many platforms, it's just web!) don't incentivize optimizations outside of HTML/JS/CSS land.)
Give stats. "Often", "rarely": which apps? I'd say rarely, often. People code bad native UIs too, or get constrained in features.
Claude offers a CLI tool. What product manager would say no to Electron in that situation?
This article makes no sense in context. The author surely gets that.
I didn’t say AI was bad and I acknowledged the benefits of Electron and why it makes sense to choose it.
With 64GB of RAM on my Mac Studio, Claude desktop is still slow! Good Electron apps exist; it's just an interesting note given the recent spec-driven development discussion.
My guy, if you can't see the problem with a $300B SF company that of course claims to #HireTheBest having a dumpy UX due to its technical choices, I don't really know what to tell you. Same goes for these companies having npm as an out-of-the-box dependency for their default CLI tools. I'm going to assume anyone who thinks that every user's machine is powerful enough to run Electron apps, or even to support bloated deps, hasn't written any serious software. And that's fine in general (to each their own!), but these companies publicly, strongly claim to be the best and to hire the best. These are not small 10-person startups.
Who both has a computer too slow to handle Electron applications, and is spending $20 a month on Claude Code?
>There are downsides though. Electron apps are bloated; each runs its own Chromium engine. The minimum app size is usually a couple hundred megabytes. They are often laggy or unresponsive. They don’t integrate well with OS features.
A few hundred megabytes to a few GB sounds like an end-user problem. They can either make room or not use your application.
You can easily buy a laptop for around 400 USD that will run Claude code just fine, along with several other electron apps.
Don't get me wrong, native everything (which would probably mean sacrificing Linux support) would be a bit better, but it's not a deal breaker.
Me, because my work gave me a crappy dell that can barely run the stripe dashboard in the browser. I could put in a request for a Mac or something faster but this is the standard machine everyone gets for the company. It helps me be sympathetic to my users to make sure what I develop is fast enough for them because I definitely am going to make it fast enough for me so I don’t shoot my brains out during development.
Presumably these competent people could look at electron, think about building their own cross-platform application on top of chromium and conclude that this free as in code and beer tool fit their needs.
They don't have to reinvent electron. They shouldn't need to use a whole virtualized operating system to call their web API with a fancy UI.
Projects with much smaller budgets than Anthropic have achieved much better cross-platform UIs without relying on Electron [1]. There are more sensible options like Qt and whatnot for rendering UIs.
You can even engineer your app to have a single core with all the business logic in a single shared library, then write UI wrappers using SwiftUI, GTK, and whatever Microsoft feels like putting out as its current UI library (I think currently it's WinUI2), each consuming the core to do the interesting bits.
Heck, there are people who built GUI toolkits from scratch to support their own needs [2].
I've used 3/5 of those programs significantly and have issues with all of them relating to software quality. Especially Discord: it's so bad I have 4 different servers actively trying alternatives.
When discord and slack started the company was not large so it definitely could have been a lack of resources. Could also have been a bad design choice.
I'm not alone on that either at all. This is a pretty common opinion.
Claude had a chance to really show something special here and they blew it.
They all have millions of active users doing complex tasks in them every day! It's laughable that you expect me to take your vague complaints about Discord seriously, not just as complaints but as dispositive signs that Electron was a bad design choice.
Discord could have been a lack of resources as I previously said. They weren't a billion dollar company when the application was conceived.
Regardless the only thing keeping those millions of people at this point is lock-in. Even then people are actively looking for ways to move away from it. I'm witnessing the migration now and am looking forward to the day I don't have to hard restart the client 2-3 times a day.
Bun exists, and building a UI on top of that should be well within the power of the money they have. No one is saying to rebuild the universe, but the current state is embarrassing.
You could build the same TUI in the same amount of time with the same effort and end up with an overall better product. Spend a little more and it's even better. Why can't we expect more from companies that have more?
Because Anthropic has never claimed that code is free?
It's pretty easy to argue your point if you pick a strawman as your opponent.
They have said that you can be significantly more productive (which seems to be the case for many) and that most of their company primarily uses LLMs to write code and no longer writes it by hand. They also seem to be doing well w.r.t. the competition.
There are legitimate complaints to be made against LLMs, pick one of them - but don't make up things to argue against.
For some people the relevant properties of "thing" include not needing overpowered hardware to run it comfortably. So "thing" does not just "exist", at least not in the form of electron.
Cause it's (allegedly) cheap and you can do much better? Avoiding rewriting things should become a thing of the past if these tools work as advertised.
I'm not sure coding has ever been the hard part. The hard part (to me) has always been being smart enough to know what, exactly, I (or somebody else) want. Has anyone heard of a case where someone says something like: "These requirements are perfectly clear and unambiguous and do not have any undefined edge/corner cases, but implementing them is still really hard, much harder than producing this unicorn requirements spec was"?
But they already know what they want; they have it. Rewriting it to be more efficient and less buggy should be the lowly coding that is supposed to be solved.
Hi, Felix here - I'm responsible for said Electron app, including Claude Code Desktop and Claude Cowork.
All technology choices are about trade-offs, and while our desktop app does actually include a decent amount of Rust, Swift, and Go, I understand the question - it comes up a lot. Why use web technologies at all? And why ship your own engine? I've written a long-form version of answers to those questions here: https://www.electronjs.org/docs/latest/why-electron
To us, Electron is just a tool. We co-maintain it with a bunch of excellent other people but we're not precious about it - we might choose something different in the future.
Or: Why can't I log in to Claude on my laptop? It opens a browser with an indefinite spinner, and when I click "Login" on the website, it forwards me to register instead. Not really selling it as the future of coding if their fundamentals are this screwed up!
Code is not and will never be free. You pay for it one way or another. It will take a couple of years for things to cool down before we realise that there is more to software than writing the code. But even if AI can write all the code - who is going to understand it? Don't tell me this is not needed. RTFM is what gives a hacker the edge. I doubt any company wants to be in a position where they simply have no clue how their main product actually works.
Claude is an Electron app because this is a cultural issue, not a technological one. Electron could make sense if you are a startup with a limited development force. For big companies that want to make a difference, hiring N developers and maintaining N native apps is one of the best bets on quality and UX you can make, yet people don't do it even in large companies that have the ability, in theory, to do it. Similarly, even if automatic programming would let you do it more easily, still it is not done. It's a cultural issue, part of the fact that those who make software no longer try to make the best possible software.
But nobody says code is free(?). Certainly not Claude; that experimental compiler cost $20K to build. The openclaw author admitted in a Lex Fridman talk that he spends tens of thousands of dollars on tokens each month.
The real answer buried in Boris's comment is "Claude is great at it" - meaning LLMs produce better Electron/React code because that's what most of the training data looks like. This creates a self-reinforcing loop: teams use AI to write code, AI is best at web stack code, so teams choose web stacks, which produces more web stack training data. The practical implication is that "what stack should we use" increasingly has an implicit factor of "what stack does our AI tooling produce the most reliable output for" and right now that's overwhelmingly JS/TS/React.
I don't know why anyone uses Tauri - disk space is cheap but having to handle QA and supporting quirks for every possible browser engine the users' system could ship with certainly is not.
My native macOS app was using well over 1 GB the other day, while my Electron notes app was at 1/5 of that. There's an Electron tax for sure, but people are wildly mixing up application architecture issues and bugs with the framework tax.
I'm pretty sure Tauri uses almost as much RAM, you just don't see it because it gets assigned to some kind of system process associated with the webview. Most of the RAM used by a browser is per-tab.
Agreed! I built a MacOS Postgres client with just Claude Code[1]. It could use some UI improvements, but it runs much better than other apps I’ve tried (specifically what it’s replacing for me: RazorSQL) and the binary is smaller than 20MB.
Eh, didn't even Microsoft give up and just ship a React-based start menu at one point? The range of "native" on Windows 11 is quite wide - it starts with an ancient Windows 3.1 ODBC dialog box.
I don't care whether it's Electron or not, but they now ship a full VM with Claude, which not only takes 15 GB of storage but also uses a lot of memory even though I just use chat. Why does that even need to be started?
Heh, I felt the same. I'm a web dev, but I do not want an Electron app. We can do better. I used to write Electron apps because I wasn't able to build a proper native app. Now I can!
I've been building a native macOS/iOS app that lets me manage my agents. Both the ability to actually control/chat fully from the app and to just monitor your existing CLI sessions (and/or take 'em over in the app).
It also has a Rust server that backs it, so I can throw it anywhere (container, Pi, etc.) and then connect to it. If anyone wants to see it (I have seen at least 4 other people doing something similar): https://github.com/Robdel12/OrbitDock
Maybe code is free, but code isn't all that goes into building software. Minimally, you have design, code, integrate, test, document, launch.
Claude is going to help mostly with code, much less with design. It might help to accelerate integration, if the application is simple enough and the environment is good enough. The fact is, going cross-platform native trebles effort in areas that Claude does not yet have a useful impact.
The quality of the ChatGPT Mac app is a major driver for me to keep a subscription. Hotkeys work, app feels slick and native. The Claude Mac app I found so poor that I'd never reach for it, and ended up uninstalling it — despite using the heck out of Claude Code on a Max plan — because it started blocking system restarts for updates.
Why is no one admitting that even though resources like RAM, CPU, etc. are plentiful nowadays, they should still be conserved?
Computers have gotten orders of magnitude faster since 2016, but using mainstream apps certainly doesn't feel any faster. Electron and similar frameworks do offer appealing engineering tradeoffs, but they are a main culprit of this problem.
Sure, the magnitude of RAM/compute "waste" may have grown from kB to MB, but inefficiency is still inefficiency - no matter how powerful the machine it's running on is.
> The resulting compiler is impressive, given the time it took to deliver it and the number of people who worked on it, but it is largely unusable. That last mile is hard.
You're easy to impress, that explains the unrealistic expectations "on the surface".
That's a strange analogy, though; basic usability is the first mile, not the last. Coming back to frameworks and apps, the last mile would be respecting the Mac's unique keyboard-bindings file for text editing. The first mile is reacting to any keyboard input in a text field. Same with the compiler: a basic hello-world failure isn't the last mile.
I have been getting Claude to use Free Pascal/Lazarus to write cross-platform (Linux Qt & GTK, Windows, and Cocoa) apps, as well as porting 30-year-old abandoned Windows Delphi apps to all three platforms, precisely because I can end up with a small, single binary for distribution after static linking.
I hope that the prevalence of AI coding agents might lead to a bit of a revival of RAD tools like Lazarus, which seem to me to have a good model for creating cross-platform apps.
Because it doesn’t matter. The biggest AI apps of last year were command line interfaces for cripes sake. Functionality and rapid iteration is more important.
Here is what worries me the most at the moment: we're in a period of hype, fire all the developers, we have agents, everybody can code now, barrier is not low - it's gone. Great. Roll up a year from now, and we have trillions of lines of code no human wrote. At some point, like a big PR, the agent's driver will just say yes to every change. Nobody now can understand the code easily because nobody wrote it. It works, kinda, but how? Who knows? Roll up another few years and people who were just coding because it's a "job" forget whatever skill they had. I've heard a few times already the phrase "I didn't code in like 10 months, bruh"...
Then what?
Not saying I'm not using AI - because I am. I'm using it in the IDE so I can stay close to every update and understand why it's there, and disagree with it if it shouldn't be there. I'm scared to be distanced from the code I'm supposed to be familiar with. So I use the AI to give me superpowers but not to completely do my job for me.
I think the idea is that by the time those trillions of lines of code start to cause maintenance problems, the models will be good enough to deal with those problems.
That won't solve the problem that humans will lose the skill to write code. It will become a hobbyist pastime. Like people listening to 8-tracks now...
I read the article more as an indictment of the promises being made vs reality. If we’re being told these agents are so good, why aren’t these companies eating their own dog food to the same degree they’re telling us to eat it?
The article already concludes that coding agents have uses in areas they already do well in. What specifically leads you to think they should instead not be used?
The claim that "code is free now" is somehow struck low by Anthropic choosing Electron is silly and deserves ridicule.
I guess I don't understand how people don't see something like 20k + an engineer-month producing CCC as the actual flare being shot into the night that it is. Enough to make this penny ante shit about "hurr hurr they could've written a native app" asinine.
They took a solid crack at GCC, one of the most complex things *made by man*, armed with a bunch of compute, some engineers guiding a swarm, and some engineers writing tests. Does it fail at key parts? Yes. Is it a MIRACLE and a WARNING that it exists at all? YES. Do you know what you would have with an engineer-month and 20k in compute trying to write GCC from scratch in 2 weeks in 2024? A whole heck of a lot less than they got.
This notion that everything is the same just didn't make contact with 2025, and we're in 2026 now. All of software is already changing, and HN is full of wanking about all the wrong stuff.
The use of "Free" in the title is probably too much of a distraction from the content (even though the opening starts with an actual cost). The point of the article does not actually revolve around LLM code generation being $0, but that's what most of the responses will be about because of the title.
I am curious how much Claude Code is used to develop Anthropic's backend infrastructure, since that's a true feat of engineering where the real magic happens.
The gotcha style "if AI is so good, why does $AI_COMPANY's software not meet my particular standard for quality" blog posts are already getting tedious.
As many have pointed out, code is not free. More than that, the ability to go fast only makes architectural mistakes WORSE. You'll drive further down the road before realizing you made the wrong turn.
A native app is the wrong abstraction for many desktop apps. The complexity of maintaining several separate codebases likely isn't worth the value add, especially for a company hemorrhaging money the way Anthropic does.
> Electron apps are bloated; each runs its own Chromium engine. The minimum app size is usually a couple hundred megabytes.
I only see these complaints on HN. Real users don't have this complaint. What kind of low-end machines are you running, that Chromium engine is too heavy for you?
> They are often laggy or unresponsive.
That's not due to Electron.
> They don’t integrate well with OS features.
If it is good enough for Microsoft Teams it is probably good enough for most apps. Teams can integrate with microphone, camera, clipboard, file system and so on. What else do you want to integrate with?
I agree with your counterpoint to OS integration, but Microsoft Teams is infamous for not being "good enough" otherwise. Laggy, buggy, unresponsive, a massive resource hog especially if it runs at startup. It's gotten a bit better, but not enough. These are not complaints on HN, they're in my workplace.
Not everyone is running the latest and greatest hardware, very few actually have the money for that. If you're running hardware from before this decade, or especially the early 2010s, the difference between an Electron app and a native app is unbelievably stark. Electron will often bring the device to its knees.
A single Electron app is usually not a problem. The problem is that the average user has a pile of Chrome tabs open in addition to a handful of Electron apps on top of increasingly bloated commercial OSes, which all compound to consume a large percentage of available resources.
This is particularly pertinent on bulk-purchased corporate and education machines which are loaded down with mandated spyware and antivirus garbage and often ship with CPUs that lag many years behind, and in the case of laptops might even have dog slow eMMC storage which makes the inevitable virtual memory paging miserable.
I run IT for a nonprofit and have 120 "real users" doing "real work" on "low-end machines" providing "real mental health, foster care, and social services" to "real communities".
These workers complain about performance on the machines we can afford. 16GB RAM and 256GB SSDs are the standard, as is 500 MB/sec internet for offices with 40 people, and my plans to upgrade RAM this year were axed by the insane AI chip boondoggle.
People on HN need to understand that not everyone works for a well-funded startup, or big tech company that is in the process of destroying democracy and the environment in the name of this quarter's profits!
BTW Teams has moved away from Electron, before it did I had to advise people to use the browser app instead of the desktop for performance reasons.
> Real users don't have this complaint. What kind of low-end machines are you running
Real users complain differently: "My machine is slow". Electron itself is not very heavyweight (though not featherweight), but JS and DOM can cost a lot of resources. Right now my GMail tab has allocated 529 MB.
> That's not due to Electron.
Of course, but it takes some careful thought. BTW e.g. Qt apps can be pretty memory-hungry, too.
> good enough for Microsoft Teams
It's not easy to pick a more "beloved" application.
What an Electron app usually would miss is things like global shortcuts managed by macOS control panel, programmability via Automation, and the L&F of native controls. I personally don't usually miss any of these, but users who actually like macOS would usually complain.
I personally prefer to run Electron-ish apps, like Slack, in their Web versions, in a browser.
Teams is a terrible app, although Electron isn't the only reason for that: It needs a Gig of RAM to do things that older chat apps could do in 4 Meg.
The free ride of ever increasing RAM on consumer devices is over because of the AI hyperscalers buying all fab capacity, leading to a real RAM shortage. I expect many new laptops to come with 8GB as standard and mid-range phones to have 4GB.
Software engineers need to start thinking about efficiency again.
"Real users" don't know what electron is, but real users definitely complain about laggy and slow programs. They just don't know why they are laggy and slow.
Yeah, like you don't need to write three different clients. You can write a native MacOS client and ship your electron client for the irrelevant platforms.
Judging by the state of most software I use, customers genuinely could not care less about bugs. Software quality is basically never a product differentiator at this point.
I'm not saying zero actual people care, I'm saying that not enough people care to actually differentiate. Is Windows getting better now that you switched? Then it doesn't matter you left.
I mean, Microsoft has recently made a statement that they're aware people are mad and they're working on it, so, no, I don't think they care that I personally hate the software but they do care that there are a number of people like me. Whether that moves the needle, I don't know, but what I do know is right now I'm using non-slop non-electron software and it's so much more pleasant. I think it's worth protecting.
I think that's too broad of a blanket statement. Plenty of people including myself choose Apple products in part for their software quality over Windows and Linux. However there are other factors like network effects or massive marketing campaigns, sales team efforts etc that are often far greater.
We just don't know how bad it will get with AI coding, though. Do you think the average consumer won't care about software quality when the bank software "loses" a big transaction they make? Or when their TV literally stops turning on? People will tolerate shitty software if they have to, when it's minor annoyances, but it makes them unhappy and they won't tolerate big problems for long.
A few years ago maybe. Tauri makes better sense for this use case today - like Electron but with system webviews, so at least doesn't bloat your system with extra copies of Chrome. And strongly encourages Rust for the application core over JS/Node.
Electron has never made sense. It is only capable of making poorly performing software which eats the user's RAM for no good reason. Any developer who takes pride in his work would never use a tool as bad as Electron.
If the author tried native macOS development with an agent for an hour, they wouldn't know where to begin explaining how different agentic web development is from native. It was better a year ago; you could actually get it to build a native app.
Now all models over-think everything; they do things they like and ignore hard constraints. They picked all that up in training. All these behaviours: hiding mistakes, shameful silence, going "woke" and doing what they think should be done despite your wishes.
All this is ameliorated in web development, but for native it made things a lot worse.
And visual testing: compare the easy automated in-browser ride with retest-it-yourself for the 50th time.
Also I refuse to download and run Node.js programs due to the security risk. Unfortunately that keeps me away from opencode as well, but thankfully Codex and Vibe are not Node.js, and neither is Zed or Jetbrains products.
I use Opus 4.6 (for complex refactoring), Gemini 3.1 Pro (for html/css/web stuff) and GPT Codex 5.3 (workhorse, replaced Sonnet for me because in Copilot it has larger context) mostly.
For small tools. But also for large projects.
Current projects are:
1) .NET C#, Angular, Oracle database. Around 300k LoC.
2) Full stack TypeScript with Hono on backend, React on frontend glued by trpc, kysely and PostgreSQL. Around 120k LoC.
Works well in both. I'm using plan mode and agent mode.
What helps a ton are e2e Playwright tests, which are executed by the agent after each code change.
My only complaint is that it tends to stutter after many sessions/hours. A restart fixes it.
As long as we're on the subject, I'll take the opportunity here to vent about how embarrassingly buggy and unusable VS Code is in general. It throws me for a loop that pros voluntarily use it on the rare occasions I'm forced to use it instead of JetBrains.
I use Claude Code in Zed via ACP and have issues all the time. It pushes me towards using the CLI, but I don't want to do things that way because it's a vibe-coding workflow. I want to be in the driver's seat, see what the agent has done, and be able to apply or reject hunks.
I’m in the same situation. Zed’s Claude Code is better in terms of control, but it’s wildly buggy and unreliable. Definitely not a drop in replacement.
I’m in the same boat. I use it to save me from going to a browser to lookup simple syntax references, that’s about it. Its agent mode is terrifying, and asking it anything remotely complex has been a fool’s errand.
Because JavaScript is the best for the application layer. We just have to accept that this is reality. AI training sets are just full of JS... Good JS, bad JS... But the good JS training is really good if you can tap into it.
You just have to be really careful because the agent can easily slip into JS hell; it has no shortage of that in its training.
I assume it's because LLMs are overrated and trash so they chose something that was easy for lazy developers, but I'm probably just cynical.
You would think with programming becoming completely automated by the end of 2026, there'd be a vibe coded native port for every platform, but they must be holding back to keep us from all getting jealous.
What bugs are you seeing? I use Claude Code a lot on an Ubuntu 22.04 system and I've had very few issues with it. I'm not sure really how to quantify the amount of use; maybe "ccusage" is a good metric? That says over the last month I've used $964, and I've got 6-8 months of use on it, though only the last ~3-5 at that level. And I've got fairly wide use as well: MCP, skills, agents, agent teams...
there's currently ~6k open issues and ~20k closed ones on their issue tracker (https://github.com/anthropics/claude-code/issues). certainly a mix of duplicates / feature requests, but 'buggy mess' seems appropriate
Actual reason: there's far more training data available for electron apps than native apps.
And despite what Anthropic and OpenAI want you to think, these LLMs are not AGI. They cannot invent something new. They are only as good as the training data.
If the AI is writing 100% of the code it literally is free (as in time) for them to move them over to native apps. They should have used the tokens for that C compiler on the native apps, would have made for a much more convincing marketing story as well.
Yawn, the 90/10 excuse again. And "shipping it everywhere" is a blatant lie: there is still no Linux release. Looks like you are talking about Claude Code as Claude. Claude would be the desktop app...
Electron isn't that bad. Apps like VSCode and Obsidian are among the highest quality and most performant apps I have installed. The Claude app's problem is not Electron; it's that it just sucks, bad. Stop blaming problems on a lack of nativeness.
VSCode takes 1 GB of memory to open the same files that Sublime can do in just 200 MB. It is not remotely performant software, it sucks at performance.
I too thought VSCode's being web based would make it much slower than Sublime. So I was surprised when I found on my 2019 and 2024 ~$2,500-3,000 MacBook Pros that Sublime would continually freeze up or crash while viewing the same 250 MB - 1 GB plain text log files that VSCode would open just fine and run reliably on.
At most, VS Code might say that it has disabled lexing, syntax coloring, etc. due to the file size. But I don't care about that for log files...
It still might be true that Visual Studio Code uses more memory for the same file than Sublime Text would. But for me, it's more important that the editor runs at all.
Maybe Electron isn’t that bad. Maybe there are some great Electron apps. But there’s a big chunk that went unsaid: Most Electron apps suck. Does correlation here imply causation? Maybe not, but holy fuck isn’t it frustrating to be a user of Electron apps.
I think you're missing the point a little, friendo: it's not that Electron is bad, it's that Electron itself is an abstraction for cross-platform support. If code can be generated for free, then the question is why we need this to begin with. Why can't Claude write it in Win32, SwiftUI, and GTK?
The answer, of course, is that it can't do it and maintain compatibility between all three well enough, as it's high effort and each has its own idiosyncrasies.
I don't know about whether Electron fits in this case, but I can say Claude isn't equally proficient at all toolchains. I recently had Claude Code (Opus 4.6, agent teams) build a image manipulation webapp in Python, Go, Rust, and Zig.
In Python it was very nearly a 1-shot; there was an issue with one watermark not showing up on one API endpoint that I had to give it a couple kicks at the can to fix. Go it was able to get, but it needed 5+ attempts at rework. Rust took ~10+, and Zig took maybe 15+.
They were all given the same prompt, though they all likely would have done much better if I had it build a test suite, or at least a manual testing recipe for it to follow.
To build GTK you are hit with the GPL, which sucks. To build Swift you have to pay a developer fee to Apple; to build Win32 you have to pay a developer fee to Microsoft. Both suck. Don't forget mobile: on Android you pay Google.
That is why everyone jumped to building in Electron: it is based on web standards that are free, running on Chromium, which kind of is tied to Google, but you are not tied to Google and don't have to pay them a fee. You can also easily provide kind of the same experience on mobile, skipping Android shenanigans.
>"to build win32 you have to pay developer fee to Microsoft"
Not really; you can self-sign, but your native application will be met with a system prompt trying to scare the user away. This is maddening, of course, and I wish MS, Apple, and whatever others would die just for this thing alone. You fuckers leveraged huge support from developers writing for your platform, but no, it is of course not enough for you vultures; now let's rip money from the hands that fed you.
Boris from the Claude Code team here.
Some of the engineers working on the app worked on Electron back in the day, so preferred building non-natively. It’s also a nice way to share code so we’re guaranteed that features across web and desktop have the same look and feel. Finally, Claude is great at it.
That said, engineering is all about tradeoffs and this may change in the future!
As a user I would trade fewer features for a UI that doesn't jank and max out the CPU while output is streaming in. I would guess a moderate amount of performance engineering effort could solve the problem without switching stacks or a major rewrite. (edit: this applies to the mobile app as well)
Yeah, I've got a 7950x and 64gb memory. My vibe coding setup for Bevy game development is eight Claude Code instances split across a single terminal window. It's magical.
I tried the desktop app and was shocked at the performance. Conversations would take a full second to load, making rapidly switching intolerable. Kicking off a new task seems to hang for multiple seconds while I'm assuming the process spins up.
I wanted to try a disposable-conversation-per-feature workflow with git worktree integration for an hour to see how it contrasted, but couldn't even make it ten minutes without bailing back to the terminal.
Don't the cli panes flicker like crazy?
No, they're generally pretty solid. Once an hour one will crash, and sometimes there are performance problems, but it's a very workable setup.
God the number of ghastly survival crafting LLM slop games that are gonna appear on steam 6 months from now...
The field will spread. I'm working on what I'm intending to be the best game of my career, but you can ship barely functional slop in a few days.
I'm already dreading it. Steam was already full of junk being released by the dozens every single day. It's hard to think it could be worse.
Both Anthropic's and OpenAI's apps being this janky with only basic history management (the search primarily goes by the titles) tells me a lot. You'd think these apps would be a shining example of what's possible.
> You'd think these apps would be a shining example of what's possible.
it is
As a user, I wouldn't. I can deal with the jank. Keeping up with the domain when the domain is evolving THIS fast is important!
That's probably the janky React, not Electron.
Explains why my laptop turns into a makeshift toaster when the Claude app automatically runs in the background. Even many games don't run that intensively in the background.
> a UI that doesn't jank and max out the CPU
While there are legitimate/measurable performance and resource issues to discuss regarding Electron, this kind of hyperbole just doesn't help.
I mean, look: the most complicated, stateful and involved UIs most of the people commenting in this thread are going to use (are ever going to use, likely) are web stack apps. I'll name some obvious ones, though there are other candidates. In order of increasing complexity:
1. Gmail
2. VSCode
3. www.amazon.com (this one is just shockingly big if you think about it)
If your client machine can handle those (and obviously all client machines can handle those), it's not going to sweat over a comparatively simple Electron app for talking to an LLM.
Basically: the war is over, folks. HTML won. And with the advent of AI and the sunsetting of complicated single-user apps, it's time to pack up the equipment and move on to the next fight.
I actually avoid using VSCode for a number of reasons, one of which is its performance. My performance issues with VSCode are I think not necessarily all related to the fact that it's an electron app, but probably some of them are.
In any case, what I personally find more problematic than just slowness is electron apps interacting weirdly with my Nvidia linux graphics drivers, in such a way that it causes the app to display nothing or display weird artifacts or crash with hard-to-debug error messages. It's possible that this is actually Nvidia's fault for having shitty drivers, I'm not sure; but in any case I definitely notice it more often with electron apps than native ones.
Anyway one of the things I hope that AI can do is make it easier for people to write apps that use the native graphics stack instead of electron.
> While there are legitimate/measurable performance and resource issues to discuss regarding Electron, this kind of hyperbole just doesn't help.
From the person you're responding to:
> I would guess a moderate amount of performance engineering effort could solve the problem without switching stacks or a major rewrite.
Pretty clearly they're not saying that this is a necessary property of Electron.
Using the terminal in VSCode will easily bring the UI to a dead stop. iTerm is smooth as butter with multiple tabs and 100k+ lines of scrollback buffer.
Try enabling 10k lines of scrollback buffer in vscode and print 20k lines.
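For anyone who wants to reproduce that, terminal scrollback in VS Code is a single entry in `settings.json` (a sketch; the `terminal.integrated.scrollback` setting defaults to 1000 lines, if I remember right):

```jsonc
{
  // Raise the integrated terminal's scrollback from the default ~1000 lines
  "terminal.integrated.scrollback": 10000
}
```

Then print ~20k lines in the integrated terminal and compare against a native terminal emulator with the same buffer size.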
You think VSCode’s ui is more complicated than eg Microsoft Excel? Or am I misunderstanding?
It definitely is, seeing as how it can embed a spreadsheet.
You might try giving an example of a complex UI that isn't a frustratingly slow resource hog next time you're posting this rant.
> complex UI that isn't a frustratingly slow resource hog
Maybe you can give examples of competing ones of comparable complexity that are clearly better?
Again, I'm just making a point from existence proof. VSCode wiped the floor with competing IDEs. GMail pushed its whole industry to near extinction, and (again, just to call this out explicitly) Amazon has shipped what I genuinely believe to be the single most complicated unified user experience in human history and made it run on literally everything.
People can yell and downvote all they want, but I just don't see it changing anything. Native app development is just dead. There really are only two major exceptions:
1. Gaming. Because the platform vendors (NVIDIA and Microsoft) don't expose the needed hardware APIs in a portable sense, mostly deliberately.
2. iOS. Because the platform vendor expressly and explicitly disallows unapproved web technologies, very deliberately, in a transparent attempt to avoid exactly the extinction I'm citing above.
It's over, sorry.
Didn’t you say coding is a solved problem? So why are you still reaching for the lowest common denominator tech stack?
ofc it is. that's why they need Jarred to babysit them. just a little more, Amodei says, and we will get to AGI...
Jarred has 4.8K GitHub issues of his own.
Did they say that? I doubt it.
https://m.youtube.com/watch?v=We7BZVKbCVw within the first few seconds.
The full quote (in response to: should people learn programming) is "In a year or two it's not gonna matter, coding is largely solved".
Which is still quite the statement, and damn the video is intolerable. But the full quote still feels a little different than how you put it here.
How so? If coding is largely solved and we are on the cusp of not even needing to learn to code, then the statement that they use electron because it’s what most of their engineers are familiar with seems a little contradictory.
What's wrong with taking existing skills into consideration when making technical decisions while coding skills still matter, just because you think coding skills won't matter "in a year or two"? Where's the contradiction?
They did, with the caveat that it's solved "for most use cases".
https://www.lennysnewsletter.com/p/head-of-claude-code-what-...
I keep being told by Anthropic and others that these AI coding tools make it effortless to write in new languages and port code from one language to another.
This is an important lesson to watch what people do, not what they say.
Nothing about what was said even contradicts that though. Maintaining three codebases is more work than maintaining one, so they are maintaining one.
But if AI can maintain code bases so easily, why does it matter if there are 3? People use electron to quickly deploy non-native apps across different systems.
Surely, it would be a flex to show that your AI agents are so good they make electron redundant.
But they don’t. So it’s reasonable to ask why that is.
No, it is completely unreasonable to ask why a company is not putting three times the resources into solving a problem than one times the resources.
What resources? it's supposedly a solved problem. Anthropic just needs to spend tokens.
Are tokens not resources?
More work for whom, Claude? So what?
Are you being sarcastic or playing a caricature of an AI obsessed hater?
Yeah, I'm kind of disheartened by the number of people who still insist that LLMs are an expensive flop with no potential to meaningfully change software engineering.
In a few years, they'll be even better than they are now. They're already writing code that is perfectly decent, especially when someone is supervising and describing test cases.
You should definitely ignore the “I tried nothing and nothing worked” crowd.
Any examples of Anthropic saying that?
~ Coding is largely a solved problem, and 100% of his code has been written by Claude since November.
https://www.lennysnewsletter.com/p/head-of-claude-code-what-...
As for others, Microsoft is saying they’re porting all C/C++ code to Rust with a goal of 1m LOC per engineer per month. This would largely be done with AI.
https://www.thurrott.com/dev/330980/microsoft-to-replace-all...
If coding is a solved problem and there is no need to write code, does the language really matter at that point?
If 1 engineer can handle 1m LOC per month, how big would these desktop apps be where maintaining native code becomes a problem?
Coding is solved. Engineering is not solved.
Coding ain't solved
that is because software “engineering” does not exist; it only exists as a fairytale story and bullshit job titles like SW”E”
Right, the biggest driver of global economic growth is not based on engineering at all, and these people (who've made massive amounts of money) clearly don't know how to describe the work they do.
this is you? https://www.youtube.com/watch?v=We7BZVKbCVw
if that's the case, why don't you just ask it to "make it not shit"?
hahaha
But the question isn't really why Claude is Electron based. It's that if, for some reason, it had to be native on 3 platforms, could a swarm of agents make and maintain the 3 apps while all the humans did was make the spec and tests?
With your context and understanding of the coding agents' capabilities and limitations, especially Opus 4.6, how do you see that going?
It is really confusing how we're told the last few years how all our programmers are obsolete, yet these billion dollar companies can't be arsed to use these magical tools to substantially improve their #1 user facing asset.
> But the question isn't really why Claude is Electron based
Huh?
Why not just vibe code binary executables for each platform?
The sheer speedup all users will show everyone why vibe coding is the future. After all coding is a solved problem.
I'm guessing the first question will be "How are we going to keep the UI consistent?". The hard part is never writing the code; it's carefully releasing fast-changing features from product people. Their chat UX is the core product, which is replicated on the internet and other devices. That's almost always React or [JS framework] these days.
Migrating the system would be the easier part in that regard, but they'll still need a JS UI unless they develop multiple teams to spearhead various native GUIs (which is always an option).
Almost every AI chat framework/SDK I've seen is some React or JS stuff. Or even agent stuff like llamaindex.ts. I have a feeling AI is going to reinforce React more than ever.
Not "eating their own dog-food" may not be conclusive, but it sure is suggestive.
> Some of the engineers working on the app worked on Electron back in the day, so preferred building non-natively
Why does it matter what tech the engineers used in the past? I thought they didn't write code anymore.
I’m okay with Electron. I’ve used great Electron apps (VSCode). I like that the feature set is the same between the website and the app.
But it should be possible to make an Electron app that is more reliable and eats less resources.
Question from a Claude web user here.
Could you visualize the user's usage? For example, like a glass of water that is getting emptier the more tokens are used, and gets refilled slowly.
Because right now I have no clue when I will run out of credits.
Thanks!
I thought coding was already solved by Claude? Why aren't you vibe coding something that isn't dogshit with your fancy little Code Solver?
The second you want to add a webview, you want Electron. Devs want Chrome DevTools and the Chrome runtime.
You guys just did add it too, so yeah!
Somehow Claude is only great at things that are surface level (80.9%). And for some reason I believe "may change in the future" will never come. We all know coding was never the problem in tech, hype was. Ride it while you can.
Thanks for chiming in! My takeaways are that, as of today:
- Using a stack your team is familiar with still has value
- Migrating the codebase to another stack still isn’t free
- Ensuring feature and UX parity across platforms still isn’t free. In other words, maintaining different codebases per platform still isn’t free.
- Coding agents are better at certain stacks than others.
Like you said any of these can change.
It’s good to be aware of the nuance in the capabilities of today’s coding agents. I think some people have a hard time absorbing the fact that two things can be true simultaneously: 1) coding agents have made mind-bending progress in a short span 2) code is in many ways still not free
Boris, native app on OSX would be awesome. Totally understand the engineering decision tradeoff... but man... Electron apps are just not that great.
Shouldn't the AI be doing the building if your hype is to be believed? What does it matter what the team is experienced in?
And they couldn't vibe code a client in Qt?
Makes sense to me.
It's the fastest way to iterate because Electron is the best cross platform option and because LLMs are likely trained on a lot of HTML/Javascript.
Which is why Claude is great at it.
> Finally, Claude is great at it.
So the model is not a generalised AI then? It is just a JS stack autocomplete?
I think that comment is interesting as well. My view is that there is a lot of Electron training code, and that helps in many ways, both in terms of the app architecture, and the specifics of dealing with common problems. Any new architecture would have unknown and unforeseen issues, even for an LLM. The AIs are exceptional at doing stuff that they have been trained on, and even abstracting some of the lessons. The further you deviate away from a standard app, perhaps even a standard CRUD web app, the less the AI knows about how to structure the app.
Couldn't this have been vibe coded into a native app that is more performant?
> vibe coded
> more performant
I found the problem.
...I think a vibe-coded Cocoa app could absolutely be more performant than a run-of-the-mill Electron app. It probably wouldn't beat something heavily optimized like VS Code, but most Electron apps aren't like that.
I mean, we both know it couldn't, but the company claims it can be done so why don't they do it?
I suppose because generating tokens is slow. It is a limitation of the technology. And when data is coming in slowly, you don't need a super high performance client.
Users would benefit from native apps, hopefully you guys will give it a try. I bet Claude would be great at it too, no?
That's a very sensible, realistic and non-BS response.
I'm glad to see this coming from a company that is so popular these days.
Thanks!
Claude should have gone for native apps and demonstrated that it is possible to do anything with their AI.
I'm currently building a macOS AI chat app. Generally SwiftUI/AppKit is far better than the web stack, but it performs badly in a few areas. One of them is Markdown rendering. Swift Markdown libraries are slow and lack some features like Mermaid diagrams. To work around this, some of my competitors use Tauri/Electron, and a few others use WKWebView inside a Swift app.
Initially I tried WKWebView. Performance was fine and the bridge between JS and Swift was not that hard to implement, but I started seeing a few problems, especially due to the fact that the WebView runs as a separate process and usually a single WebView instance is reused across views.
After a few attempts to fix them, I gave up on the idea and was tempted to go fully with web rendering via Tauri, but as a Mac developer I couldn't bring myself to build this app in React. So I started building my own Markdown library. After a month of work, I now have a high-performance Markdown library built with Rust and TextKit. It supports streaming and Markdown extensions like Mermaid.
Most of the code was written by Claude Opus, and some tricky parts were solved by Codex. The important lesson I learned is that I’m no longer going to accept limitations in tech and stop there. If something isn’t available in Swift but is available in JS, I’m going to port it. It’s surprisingly doable with Claude.
Because code isn't free.
I can see it in my team. We've all been using Claude a lot for the last 6 months. It's hard to measure the impact, but I can tell our systems are as buggy as ever. AI isn't a silver bullet.
And after 12 months, most probably no one on your team will understand the cause of half of those bugs.
When devs outsource their thinking to AI, they lose the mental map, and without it, control over the entire system.
I think about this a lot, and do everything I can to avoid having Claude write production code while keeping the expected tempo up. To date, this has mostly ended up having me use it to write project plans, generate walkthroughs, and write unit and integration tests. The terrifying scenario for me is getting paged and then not being able to actually reason about what is happening.
Anything bigger in context? Unfortunately - maybe I have bad luck…
But I don’t get how they code at Anthropic when they say that almost all their new code is written by LLMs.
Do they have some internal much smarter model that they keep in secret and don’t sell it to customers? :)
I find this such a weird stance to take. Every system I work on and bug I fix has broad sets of code that I didn't write in it. Often I didn't write any of the code I am debugging. You have to be able to build a mental map as you go even without ai.
Yeah. Everyone sort of assumes that not having personally written the code means they can’t debug it.
When is the last time you had an on call blow up that was actually your code?
Not that I’m some savant of code writing — but for me, pretty much never. It’s always something I’ve never touched that blows up on my Saturday night when I’m on call. Turns out it doesn’t really change much if it’s Sam who wrote it … or Claude.
"hey coworker, I know your team wrote this, can you help?" Except there is no coworker, just Claude
Do you know what on call means?
It means Sam is 7 beers deep on Saturday night since you’re the one on call. He’s not responding to your slack messages.
Claude actually is there though, so that’s kind of nice.
Sam might be 7 beers deep, or maybe he's available. In my org, oncall is just who gets the 2am phone call. They can try to contact anyone else if needed.
Claude is there as long as you're paying, and I hope he doesn't hallucinate an answer.
> In my org, oncall is just who gets the 2am phone call. They can *try* to contact anyone else if needed.
Emphasis mine.
> Claude is there as long as you're paying
If you’re at a company that doesn’t pay for AI in the year 2026, you should find a new company.
> and I hope he doesn't hallucinate an answer.
Unlike human coworkers with a 100% success rate, naturally.
"Yeah our team wrote it but everyone who built that part of it has moved to different teams or companies since."
Yeah it happens, and it's not ideal, and now instead of a risk, it's a guarantee.
Yeah but now you get an LLM to help you understand the code base 100x faster.
Remember, they're not just good for writing code. They're amazing at reading code and explaining to you how the architecture works, the main design decisions, how the files fit together, etc.
The problem is you lose abilities if you stop writing code completely.
There is a difference between a reader and an author.
Usually all code has an owner though. If I encounter a bug the first thing I often do is look at git blame and see who wrote the code then ask them for help.
Because it's remarkably easier to write bugs in a code base you know nothing about, and we usually try to prevent bugs entirely, not debug them after they are found. The whole premise of what you're saying is dependent on knowing bugs exist before they hit Prod. I inherit people's legacy apps. That almost never happens.
In sufficiently complicated systems, the 10xer who knows nothing about the edge cases of state could do a lot more damage than an okay developer who knows all the gotchas. That's why someone departing a project is such a huge blow.
You are missing the point.
It’s different reading code when you’re also a writer of it than when you’re purely a reader.
It’s like only reading/listening to a foreign language without ever writing/speaking it.
When you work on a pre-existing codebase, you don't understand the code yet, but presumably somebody understood parts of it while building it. When you use AI to generate code, you guarantee that no one has ever understood the code being summoned. Don't ignore this difference.
The better the code is, the less detailed a mental map is required. It's a bad sign if you need too much deep knowledge of multiple subsystems and their implementation details to fix one bug without breaking everything. Conversely, if drive-by contributors can quickly figure out a bug they're facing and write a fix by only examining the place it happens with minimal global context, you've succeeded at keeping your code loosely-coupled with clear naming and minimal surprises.
I agree, but you don't have to outsource your thinking to AI in order to benefit from AI.
Use AI as a sanity check on your thinking. Use it to search for bugs. Use it to fill in the holes in your knowledge. Use it to automate grunt work, free your mind and increase your focus.
There are so many ways that AI can be beneficial while staying in full control.
I went through an experimental period of using Claude for everything. It's fun but ultimately the code it generates is garbage. I'm back to hand writing 90% of code (not including autocomplete).
You can still find effective ways to use this technology while keeping in mind its limitations.
100% agree. I’ve seen it with my own sessions with code agents. You gain speed in the beginning but lose all context on the implementation which forces you to use agents more.
It’s easy to see the immediate speed boost, it’s much harder to see how much worse maintaining this code will be over time.
What happens when everyone in a meeting about implementing a feature has to say “I don’t know we need to consult CC”. That has a negative impact on planning and coordination.
Don't they eventually become managers and tech leads anyway and outsource to their staff?
Only if they are supremely lazy. It’s possible to use these tools in a diligent way, where you maintain understanding and control of the system but outsource the implementation of tasks to the LLM.
An engineer should be code reviewing every line written by an LLM, in the same way that every line is normally code reviewed when written by a human.
Maybe this changes the original argument from software being “free”, but we could just change that to mean “super cheap”.
There's a pretty big difference between the understanding that comes with reviewing code versus writing it, for most people I think.
Definitely true for me. What’s particularly problematic is code I need to review but can’t effectively test due to environmental challenges.
Thats a tough situation. How do you handle the testing with human code?
> An engineer should be code reviewing every line written by an LLM,
I disagree.
Instead, a human should be reviewing the LLM generated unit tests to ensure that they test for the right thing. Beyond that, YOLO.
If your architecture makes testing hard, build a better one. If your tests aren't good enough, make the AI write better ones.
The venn diagram for "bad things an LLM could decide are a good idea" and "things you'll think to check that it tests for" has very little overlap. The first circle includes, roughly, every possible action. And the second is tiny.
Just read the code.
There’s no way you or the AI wrote tests to cover everything you care about.
If you did, the tests would be at least as complicated as the code (almost certainly much more so), so looking at the tests isn’t meaningfully easier than looking at the code.
If you didn’t, any functionality you didn’t test is subject to change every time the AI does any work at all.
As long as AIs are either non-deterministic or chaotic (suffer from prompt instability), the code is the spec. Non-determinism is probably solvable, but prompt instability is a much harder problem.
> As long as AIs are either non-deterministic or chaotic
You just hit the nail on the head.
LLMs are stochastic. We want deterministic code. The way you do that is by bolting on deterministic linting, unit tests, AST pattern checks, etc. You can transform it into a deterministic system by validating and constraining output.
One day we will look back on the days before we validated output the same way we now look at ancient code that didn't validate input.
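The "stochastic generator, deterministic acceptance" pipeline described above can be sketched roughly like this (the check names and the banned-call list are illustrative, not any particular tool):

```python
import ast

def parses(src: str) -> bool:
    """Deterministic check #1: the output must be syntactically valid Python."""
    try:
        ast.parse(src)
        return True
    except SyntaxError:
        return False

def no_banned_calls(src: str, banned=("eval", "exec")) -> bool:
    """Deterministic check #2: an AST pattern check rejecting banned calls."""
    for node in ast.walk(ast.parse(src)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in banned:
                return False
    return True

def gate(llm_output: str) -> bool:
    """The generator is stochastic; acceptance is deterministic."""
    return parses(llm_output) and no_banned_calls(llm_output)

print(gate("x = 1 + 1"))        # True
print(gate("eval('2 ** 10')"))  # False: banned call
print(gate("def f(:"))          # False: doesn't even parse
```

Linters, type checkers, and test suites slot into the same `gate` the same way: each is just another pure function from output to accept/reject.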
None of those things make it deterministic though. And they certainly don’t make it non-chaotic.
You can have all the validation, linters, and unit tests you want and a one word change to your prompt will produce a program that is 90%+ different.
You could theoretically test every single possible thing that an outside observer could observe, and the code being different wouldn’t matter, but then your tests would be 100x longer than the code.
> None of those things make it deterministic though.
In the information theoretical sense you're correct, of course. I mean it's a variation on the halting problem, so there will never be any guarantee of bug-free code. Heck, the same is true of human code and its foibles. However, in the "does it work or not" sense, I'm not sure why we care?
If the gate only passes the digits 0-9 sent within 'x' seconds, and the code's job is to send a digit between 0 and 9, how is it non-deterministic?
Let's say the linter says it's good, it passes the regression tests, you've validated that it only outputs what it's supposed to and does it in a reasonable amount of time, and maybe you're even super paranoid so you ran it through some mutation tests just to be sure that invalid inputs didn't lead to unacceptable outputs. How can it really be non-deterministic after all that? I get that it could still be doing some 'other stuff' in the background, or doing it inefficiently, but if we care about that we just add more tests for that.
I suppose there's the impossible problem edge case. IE - You might never get an answer that works, and satisfies all constraints. It's happened to me with vibe-coding several times and once resulted in the agent tearing up my codebase, so I learned to include an escape hatch for when it's stuck between constraints ("email user123@corpo.com if stuck for 'x' turns then halt"). Now it just emails me and waits for further instruction.
To me, perfect is the enemy of good and good is mostly good enough.
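The escape hatch described above is easy to wire into any agent loop; here's a toy sketch (the step function and notifier are stand-ins — a real one would email or page you, as in the parent's setup):

```python
def run_agent(task, step, max_turns=10, notify=print):
    """Drive an agent loop, but bail out to a human after max_turns."""
    state = task
    for _ in range(max_turns):
        state, done = step(state)
        if done:
            return state
    # Escape hatch: stop burning turns on an impossible problem and ask for help.
    notify(f"stuck after {max_turns} turns; halting for human input")
    return None

def toy_step(state):
    """Stand-in for a real agent turn: 'solves' the task on turn three."""
    state["turns"] = state.get("turns", 0) + 1
    return state, state["turns"] >= 3

print(run_agent({}, toy_step))  # {'turns': 3}
```

The point is just that the halt condition is enforced outside the model, so a constraint-deadlocked agent can't keep tearing up the codebase indefinitely.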
> If the gate only passes the digits 0-9 sent within 'x' seconds, and the code's job is to send a digit between 0 and 9, how is it non-deterministic?
If that’s all the code does, sure you could specify every observable behavior.
In reality though, there are tens of thousands of "design decisions" that a programmer or LLM is going to make when translating a high level spec into code. Many of those decisions aren't even things you'd care about, but users will notice the cumulative impact of them constantly flipping.
In a real world application where you have thousands of requirements and features interacting with each other, you can’t realistically specify enough of the observable behavior to keep it from turning into a sloshy mess of shifting jank without reviewing and understanding the actual spec, which is the code.
It’s amazing how often an LLM mocks or stubs some code and then writes a test that only checks the mock, which ends up testing nothing.
You really do have to verify and validate the tests. Worse you have to constantly battle the thing trying to cheat at the tests or bypass them completely.
But once you figure that out, it's pretty effective.
The majority of devs I meet are extremely lazy. It’s why so many people are outsourcing their jobs to Claude.
I love the fact that we just got a model really capable of doing sustained coding (let me check my notes here...) 3 months ago, with a significant bump 15 days ago.
And now the comments are "If it is so great why isn't everything already written from scratch with it?"
I feel like people have been saying AI was great for years now?
Ah, so it's free, but you still have to wait 3 months. Just a question...what are you waiting for?
Of course the answer is all the things that aren't free, refinement, testing, bug fixes, etc, like the parent post and the article suggested.
Well the company keeps saying coding is a solved problem.
People are getting caught up in the "fast (but slow) diffusion" that Dario has spoken to. Adoption of these tools has been fast but not instant, yet people will poke holes via "well, it hasn't done x yet".
For my own work I've focused on using the agents to help clean up our CI/CD and make it more robust, specifically because the rest of the company is using agents more broadly. Seems like a way to leverage the technology in a non-slop oriented way.
I'm reminded of the viral comic "I'm stupid faster" (2019?) by Shen
https://imgur.com/gallery/i-m-stupid-faster-u8crXcq
(sorry for Imgur link, but Shen's web presence is a mess and it's hard to find a canonical source)
I'm not saying this is completely the case for AI coding agents, whose capabilities and trustworthiness have seen a meteoric rise in the past year.
Why isn't Claude doing QA testing for you?
Why isn't it doing it for Anthropic ?
What makes you think it isn't?
They just have a lot of users doing QA too, and ignore any of their issues like true champs
I can't tell if this is sarcasm, but if not, you can't rely on the thing that produced invalid output to validate its own output. That is fundamentally insufficient, despite it potentially catching some errors.
Damn. Guess I'll stop QAing my own work from now on.
This but unironically. Of course review your own work. But QA is best done by people other than those who develop the product. Having another set of eyes to check your work is as old as science.
That is often how software development has been done the past several decades, yeah...
Not to say that you don't review your own work, but it's good practice for others (or at least one other person) to review it/QA it as well.
You're making a false equivalence between a human being with agency and intelligence, and a machine.
Are humans not machines?
That’s something that more than half of humans would disagree with (exact numbers vary but most polls show that more than 75% of people globally believe that humans have a soul or spirit).
But ignoring that, if humans are machines, they are sufficiently advanced machines that we have only a very modest understanding of and no way to replicate. Our understanding of ourselves is so limited that we might as well be magic.
>if humans are machines, they are sufficiently advanced machines that we have only a very modest understanding of and no way to replicate
Well, ignoring the whole literal replication thing humans do.
Obviously by replicate I meant building a synthetic human.
Yes. That’s not a best practice. That’s why PRs, peer reviews, and test automation suites exist.
I think it is common for one to write their own tests tho
He said QA. QA is more than just unit tests.
I mean there is some wisdom to that: most teams separate dev and QA, and writers aren't their own editors, precisely because it's hard for the author of a thing to spot their own mistakes.
When you merge them into one it's usually a cost saving measure accepting that quality control will take a hit.
Uh, yeah, this has been considered bad practice for decades.
Yeah, someone should invent code review.
What if "the thing" is a human and another human validating the output. Is that its own output (= that of a human) or not? Doesn't this apply to LLMs - you do not review the code within the same session that you used to generate the code?
I think a human and an LLM are fundamentally different things, so no. Otherwise you could make the argument that only something extra-terrestrial could validate our work, since LLM's like all machines are also our outputs.
The problem now is that it’s a human using Claude to write the code and another using Claude to review it.
I have had other LLMs QA the work of Claude Code and they find bugs. It's a good cycle, but the bugs almost never get fixed in one-shot without causing chaos in the codebase or vast swaths of rewritten code for no reason.
Products don't have to be perfect. If they can be less buggy than before AI, you can't call that anything but a win.
> you can't rely on the thing that produced invalid output to validate its own output
I've been coding an app with the help of AI. At first it created some pretty awful unit tests and then over time, as more tests were created, it got better and better at creating tests. What I noticed was that AI would use the context from the tests to create valid output. When I'd find bugs it created, and have AI fix the bugs (with more tests), it would then do it the right way. So it actually was validating the invalid output because it could rely on other behaviors in the tests to find its own issues.
The project is now at the point that I've pretty much stopped writing the tests myself. I'm sure it isn't perfect, but it feels pretty comprehensive at 693 tests. Feel free to look at the code yourself [0].
[0] https://github.com/OrangeJuiceExtension/OrangeJuice/actions/...
I'm not saying you can't do it, I'm just saying it's not sufficient on its own. I run my code through an LLM and it occasionally catches stuff I missed.
Thanks for the clarification. That's the difference though, I don't need it to catch stuff I missed, I catch stuff it misses and I tell it to add it, which it dutifully does.
I can't tell if that is sarcasm. Of course you can use the same model to write tests. That's a different problem altogether, with a different series of prompts altogether!
When it comes to code review, though, it can be a good idea to pit multiple models against each other. I've relied on that trick from day 1.
That's why you get Codex to do it. /s
Dude, I blame all bugs on ai at this point. I suspect one could roughly identify AI’s entry into the game based on some metric of large system outages. Assume someone has already done this but…probably doesn’t matter.
Likewise, OpenAI's browser is still only available on macOS, four months after launch, despite being built on a mature browser engine which already runs on everything under the sun. Seems like low-hanging fruit, and yet...
Probably has more to do with underwhelming adoption than anything else.
I'm guessing you're saying no one wants it? As otherwise, launching on an OS that has ~3% market share (on top of a cross-platform engine) will prevent the vast majority of adoption, yes.
Free as in puppy
Edit: The title of the post originally started with "If code is free,"
This was funny enough that I checked out your blog and it absolutely rules.
I was gonna make a joke about this being their second account but I checked it out and you're right lol.
what exactly does this mean (i am a puppy and don’t understand…)
It is the contrast of "free as in beer". Beer has no ongoing commitments, unlike a puppy.
same thing "free as in cat" means (i am a cat, meow)
it just means that it might be free for my owner to adopt me, but it sure as hell aint free for them to spoil me
hi kitty Σ:3 that makes sense, ty
Best HN comment in a long time
Is that similar to a free older German car?
I always heard it phrased as "Free as in herpes."
I thought that herpes was “the gift that keeps on giving”.
I keep saying this, it’s my new favorite metaphor.
Amazing
As a puppy owner and lover of "Free as in speech" etc. phrases. I applaud!
This post and this entire thread are HN-sniping to the millionth degree. We have all the classics here:
- AI bad
- JavaScript bad
- Developers not understanding why Electron has utility because they don't understand the browser as a fourth OS platform
- Electron eats my RAM oh no, posted from my 2GB ThinkPad
We should repeat it over and over until all these Electron apps are replaced by proper native apps. It’s not just performance: they look like patched websites, have inconsistent style and bad usability, and are packed with bugs that were already solved decades ago in our OSes. It’s like Active Desktop™ all over again. Working on a native Mac app just feels better.
> have inconsistent style
You mean incongruent styles? As in, incongruent to the host OS.
There is no doubt electron apps allow the style to be consistent across platforms.
No, they are also inconsistent: Slack, VSCode, Zed, Claude, ChatGPT, Figma, Notion, Zoom, Docker Desktop, to name some that I use daily. They all have different UI patterns and designs. The only things they have in common are that they are slow, laggy, difficult to use, and don’t respond quickly to the window manager.
Compare to other software on Mac such as Pages, Xcode, Tower, Transmission, Pixelmator, mp3tag, TablePlus, Postico, Paw, Handbrake, etc. (the others I use): those are a delight to work with and give me the computing experience I was looking for when buying a Mac.
"Xcode and Pages are a delight in comparison to VSCode and Notion" is certainly one of the takes of all time.
Xcode is usually the first example that comes to mind of a terrible native app in comparison to the much nicer VSCode.
Well put. What world are folks living in where it wouldn’t be the obvious choice.
Code is not the cost. Engineers are. Bugs come from hindsight not foresight. Let’s divide resources between OSs. Let all diverge.
> They are often laggy or unresponsive. They don’t integrate well with OS features.
> (These last two issues can be addressed by smart development and OS-specific code, but they rarely are. The benefits of Electron (one codebase, many platforms, it's just web!) don't incentivize optimizations outside of HTML/JS/CSS land.)
Give stats. Often, rarely. What apps? I’d say rarely, often. People code bad native UIs too, or get constrained in features.
Claude offers a CLI tool. What product manager would say no to Electron in that situation?
This article makes no sense in context. The author surely gets that.
Author of the post here.
I didn’t say AI was bad and I acknowledged the benefits of Electron and why it makes sense to choose it.
With 64GB of RAM on my Mac Studio, Claude desktop is still slow! Good Electron apps exist; it's just an interesting note given the recent spec-driven development discussion.
I mean, my 24GB Mac at work lives in swap because everything is Electron apps. I love JavaScript and the web, but ffs, the RAM usage IS out of control.
My guy, if you can't see the problem with a $300B SF company that of course claims to #HireTheBest having a dumpy UX due to their technical choices, I don't really know what to tell you. Same goes for these companies having npm as an out-of-the-box dependency for their default CLI tools. I'm going to assume anyone who thinks that every user's machine is powerful enough to run Electron apps, or even support bloated deps, hasn't written any serious software. And that's fine in general (to each their own!), but these companies publicly, strongly claim to be the best and hire the best. These are not small 10-person startups.
Who both has a computer too slow to handle Electron applications, and is spending $20 a month on Claude Code?
>There are downsides though. Electron apps are bloated; each runs its own Chromium engine. The minimum app size is usually a couple hundred megabytes. They are often laggy or unresponsive. They don’t integrate well with OS features.
A few hundred megabytes to a few GB sounds like an end-user problem. They can either make room or not use your application.
You can easily buy a laptop for around 400 USD that will run Claude code just fine, along with several other electron apps.
Don't get me wrong, native everything (which would probably mean sacrificing Linux support) would be a bit better, but it's not a deal breaker.
Me, because my work gave me a crappy dell that can barely run the stripe dashboard in the browser. I could put in a request for a Mac or something faster but this is the standard machine everyone gets for the company. It helps me be sympathetic to my users to make sure what I develop is fast enough for them because I definitely am going to make it fast enough for me so I don’t shoot my brains out during development.
Claude desktop already doesn't support Linux.
Presumably these competent people could look at electron, think about building their own cross-platform application on top of chromium and conclude that this free as in code and beer tool fit their needs.
Should they have re-written Chromium too?
They don't have to reinvent electron. They shouldn't need to use a whole virtualized operating system to call their web API with a fancy UI.
Projects with a much smaller budget than Anthropic's have achieved much better cross-platform UIs without relying on Electron [1]. There are more sensible options like Qt and whatnot for rendering UIs.
You can even engineer your app to have a single core with all the business logic in a single shared library, then write UI wrappers using SwiftUI, GTK, and whatever Microsoft feels like putting out as the current UI library (I think currently it's WinUI 2), consuming the core to do the interesting bits.
Heck, there are people who built GUI toolkits from scratch to support their own needs [2].
[1] - https://musescore.org/en [2] - https://www.gpui.rs
And? So what?
What really am I to conclude by the mere fact that they used electron? The AI was not so magical that it overcame sense?
Am I to imagine that the fact that they advertise AI coding means I therefore have a window into their development process and their design choices?
I just think the notion is much sillier than all of us seem to be treating it.
They have the resources to make native UI for all six major platforms without any AI assistance at all.
Maybe their dog food isn't as tasty as they want you to believe.
There's other ways to make cross platforms apps than to build them on top of a web browser
Yes, and?
And therefore, what?
That's what's missing, and I think we should just be clear on it: choosing Electron over writing a native app is a design choice.
I disagree. It's often chosen due to lack of resources to make native applications.
If it really is a design choice then it's a bad decision imo.
Discord, 1Password, Slack, GitHub Desktop, VS Code.
these are all also the results of bad design choices or a lack of resources?
I've used 3/5 of those programs significantly and have issues with all of them relating to software quality. Especially discord. So bad I have 4 different servers actively trying alternatives.
When discord and slack started the company was not large so it definitely could have been a lack of resources. Could also have been a bad design choice.
I'm not alone on that either at all. This is a pretty common opinion.
Claude had a chance to really show something special here and they blew it.
They all have millions of active users doing complex tasks in them every day! It's laughable that you expect me to take your vague complaints about Discord seriously, not just as complaints but as dispositive signs that Electron was a bad design choice.
Does this work on people usually?
Discord could have been a lack of resources as I previously said. They weren't a billion dollar company when the application was conceived.
Regardless the only thing keeping those millions of people at this point is lock-in. Even then people are actively looking for ways to move away from it. I'm witnessing the migration now and am looking forward to the day I don't have to hard restart the client 2-3 times a day.
You are unfamiliar with native toolkits?
You are unfamiliar with design choices?
Bun exists, and building a UI on top of that should be well within the power of the money they have. No one is saying to rebuild the universe, but the current state is embarrassing.
Yeah but what does the statement refute?
We can all talk about how this or that app should be different, but the idea is "electron sux => ????? "
Why should I care that they didn't rebuild the desktop app I don't use. Their TUI is really nice.
You could build the same TUI in the same amount of time with the same effort and end up with an overall better product. Spend a little more and it's even better. Why can't we expect more from companies that have more?
The "more" here I'm to expect is they choose a native application over electron?
What's in that for me?
Wow we even have the "HN commenters bad" post! We've truly run the gamut
Because Anthropic has never claimed that code is free?
It's pretty easy to argue your point if you pick a strawman as your opponent.
They have said that you can be significantly more productive (which seems to be the case for many) and that most of their company primarily uses LLMs to write code and no longer writes it by hand. They also seem to be doing well w.r.t. the competition.
There are legitimate complaints to be made against LLMs, pick one of them - but don't make up things to argue against.
Point still stands. If you are so much more productive and have some of the most expensive engineers in the world, why not write something decent
Why rewrite if thing exists?
You can use those expensive engineers to build more stuff, not rewrite old stuff
For some people the relevant properties of "thing" include not needing overpowered hardware to run it comfortably. So "thing" does not just "exist", at least not in the form of electron.
Cause it's (allegedly) cheap and you can do much better? Avoiding rewriting things should become a thing of the past if these tools work as advertised.
Why not rewrite if code is free
Why create Windows when MacOS exists?
Why create Linux when UNIX exists?
Why create Firefox when Internet Explorer exists?
Why Create a Pontiac when Ford exists?
Why do anything you think can be done better when someone else has done it worse?
Because these tools are good at using frameworks with a lot of examples out there.
However there are many people out there making the argument that code is free or nearly so. I think the article is directed at them.
Head of Claude Code at Anthropic 3 days ago: "Coding is largely solved" https://www.lennysnewsletter.com/p/head-of-claude-code-what-...
I'm not sure coding has ever been the hard part. Hard part (to me) has always been to be smart enough to know what, exactly, I (or somebody else) want. Or has someone heard of a case when someone says something like "These requirements are perfectly clear and unambiguous and do not have any undefined edge/corner cases. But implementing that is still really hard, much harder than what producing this unicorn requirements spec was"?
But they already know what they want, they have it. Rewriting it to be more efficient and less buggy should be the lowly coding that is supposed to be solved
Hi, Felix here - I'm responsible for said Electron app, including Claude Code Desktop and Claude Cowork.
All technology choices are about trade-offs, and our desktop app does actually include a decent amount of Rust, Swift, and Go. But I understand the question; it comes up a lot. Why use web technologies at all? And why ship your own engine? I've written a long-form version of my answers to those questions here: https://www.electronjs.org/docs/latest/why-electron
To us, Electron is just a tool. We co-maintain it with a bunch of excellent other people but we're not precious about it - we might choose something different in the future.
Let’s ignore electron. Your app has many UI/UX and performance flaws.
If as your CEO says “coding is largely solved”, why is this the case?
Or is your CEO wrong and coding is not largely solved?
APP BAD!
If coding SOLVED HOW COME APP BAD.
What kind of project lead is going to answer for their CEO?
Not a normal one but also a normal project lead doesn’t get on HN and start publicly answering questions.
If you’re gonna start speaking for and defending your company though and your company CEO has made asinine statements that are related, I’m gonna ask.
The point is exactly that: a codebase that's completely AI-driven would eliminate all the trade-offs that could lead you to Electron.
Or: Why can't I log in to Claude on my laptop? It opens a browser with an indefinite spinner, and when I click "Login" on the website, it forwards me to register instead. Not really selling it as the future of coding if their fundamentals are this screwed up!
Code is not and will never be free. You pay for it one way or another. It will take a couple of years for things to cool down before we realise that there is more to software than writing the code. But even if AI can write all the code, who is going to understand it? Don't tell me this is not needed. RTFM is what gives a hacker the edge. I doubt any company wants to be in a position where they simply have no clue how their main product actually works.
Claude is an Electron app because this is a cultural issue, not a technological one. Electron could make sense if you are a startup with limited development force. For big companies that want to make a difference, hiring N developers and maintaining N native apps is one of the best bets on quality and UX you can make, yet people don't do it even in large companies that have, in theory, the ability to. Similarly, even if automatic programming lets you do it more easily, it still isn't done. It's a cultural issue, part of the fact that the people making software no longer try to make the best possible software.
But nobody says code is free(?). Certainly not Claude: that experimental compiler cost $20K to build. The openclaw author admitted in a Lex Fridman talk that he spends tens of thousands of dollars on tokens each month.
Anthropic being valued at $380B makes $20K practically free for all intents and purposes.
Given how much they pay their developers, the Claude app probably cost at least 2, and likely 3, orders of magnitude more to build.
If their AI could do the same for $2m they'll definitely do that any day.
The real answer buried in Boris's comment is "Claude is great at it" - meaning LLMs produce better Electron/React code because that's what most of the training data looks like. This creates a self-reinforcing loop: teams use AI to write code, AI is best at web stack code, so teams choose web stacks, which produces more web stack training data. The practical implication is that "what stack should we use" increasingly has an implicit factor of "what stack does our AI tooling produce the most reliable output for" and right now that's overwhelmingly JS/TS/React.
I don't know why anyone uses Electron anymore, Tauri produces much smaller binaries and is amazing.
I don't know why anyone uses Tauri - disk space is cheap but having to handle QA and supporting quirks for every possible browser engine the users' system could ship with certainly is not.
It's a RAM issue not a disk space issue. Binaries get loaded into memory.
Also if you haven't heard, disk space is no longer as cheap, and RAM is becoming astoundingly expensive.
My native macOS app was using well over 1GB the other day, while my Electron notes app was using a fifth of that. There's an Electron tax for sure, but people are wildly mixing up application-architecture issues and bugs with the framework tax.
I'm pretty sure Tauri uses almost as much RAM, you just don't see it because it gets assigned to some kind of system process associated with the webview. Most of the RAM used by a browser is per-tab.
The process is called "webview2" on windows. From memory my Tauri app process is about 6mb memory and the webview2 is about 100mb.
Chrome DevTools Protocol, navigation stack control, download manager, permission mediation, certificate inspection, cache policy control, so nothing you can't implement in an afternoon
Agreed! I built a macOS Postgres client with just Claude Code [1]. It could use some UI improvements, but it runs much better than other apps I've tried (specifically what it's replacing for me: RazorSQL), and the binary is smaller than 20MB.
1: https://github.com/NeodymiumPhish/Pharos
It's not free, but Postico is excellent https://eggerapps.at/postico2
Tauri is still a WebView wrapped in some chrome, right? That's not what I would consider "native".
Eh, didn't even Microsoft give up and just ship a React-based Start menu at one point? The range of "native" on Windows 11 is quite wide: it starts with an ancient Windows 3.1 ODBC dialog box.
For all the complaints about Electron, it's at least led to more widespread shipping of some applications on Linux.
Tauri's story with regards to the webview engine on Linux is not great.
Probably because they don't trust the OS shipped browser engine for small inconsistencies.
A webview gives way less control.
I don't care whether it's Electron or not, but they now ship a full VM with Claude, which not only takes 15GB of storage but also uses so much memory even though I just use chat. Why does that even need to be started?
Especially now that they've made RAM so expensive.
Heh, I felt the same. I'm a web dev, but I do not want an Electron app. We can do better. I used to write Electron apps because I wasn't able to build a proper native app. Now I can!
I've been building a native macOS/iOS app that lets me manage my agents. Both the ability to actually control/chat fully from the app and to just monitor your existing CLI sessions (and/or take 'em over in the app).
Terrible little demo as I work on it right now w/claude: https://i.imgur.com/ght1g3t.mp4
iOS app w/codex: https://i.imgur.com/YNhlu4q.mp4
Also has a Rust server that backs it, so I can throw it anywhere (container, Pi, etc.) and then connect to it. If anyone wants to see it (though I have seen at least 4 other people doing something similar): https://github.com/Robdel12/OrbitDock
Maybe code is free, but code isn't all that goes into building software. Minimally, you have design, code, integrate, test, document, launch.
Claude is going to help mostly with code, much less with design. It might help to accelerate integration, if the application is simple enough and the environment is good enough. The fact is, going cross-platform native trebles effort in areas that Claude does not yet have a useful impact.
The quality of the ChatGPT Mac app is a major driver for me to keep a subscription. Hotkeys work, app feels slick and native. The Claude Mac app I found so poor that I'd never reach for it, and ended up uninstalling it — despite using the heck out of Claude Code on a Max plan — because it started blocking system restarts for updates.
Code is cheaper but not free is why.
Also AI is better at beaten path coding. Spend more tokens on native or spend them on marketing?
That, and the IntelliJ plugin for Claude is basically the Claude CLI running in a terminal. Also pretty underwhelming.
Why is no one admitting that even though resources like RAM, CPU, etc. are plentiful nowadays, they should still be conserved?
Computers have gotten orders of magnitude faster since 2016, but using mainstream apps certainly doesn't feel any faster. Electron and similar frameworks do offer appealing engineering tradeoffs, but they are a main culprit of this problem.
Sure, the magnitude of RAM/compute "waste" may have grown from kB to MB, but inefficiency is still inefficiency - no matter how powerful the machine it's running on is.
> The resulting compiler is impressive, given the time it took to deliver it and the number of people who worked on it, but it is largely unusable. That last mile is hard.
You're easy to impress, which explains the unrealistic expectations "on the surface". That's a strange analogy, though: basic usability is the first mile, not the last. Coming back to frameworks and apps, the last mile would be respecting the Mac's unique keyboard-bindings file for text editing; the first mile is reacting to any keyboard input in a text field. Same with the compiler: a basic hello-world failure isn't the last mile.
I have been getting Claude to use Free Pascal/Lazarus to write cross-platform (Linux Qt & GTK, Windows, and Cocoa) apps, as well as porting 30-year-old abandoned Windows Delphi apps to all three platforms, precisely because I can end up with a small, single binary for distribution after static linking.
I hope that the prevalence of AI coding agents might lead to a bit of a revival of RAD tools like Lazarus, which seem to me to have a good model for creating cross-platform apps.
Because it doesn’t matter. The biggest AI apps of last year were command line interfaces for cripes sake. Functionality and rapid iteration is more important.
If only AI had more Liquid Glass, lol
Would be a much better UX if it had multi-window support and a separate settings window.
Here is what worries me the most at the moment: we're in a period of hype, fire all the developers, we have agents, everybody can code now, barrier is not low - it's gone. Great. Roll up a year from now, and we have trillions of lines of code no human wrote. At some point, like a big PR, the agent's driver will just say yes to every change. Nobody now can understand the code easily because nobody wrote it. It works, kinda, but how? Who knows? Roll up another few years and people who were just coding because it's a "job" forget whatever skill they had. I've heard a few times already the phrase "I didn't code in like 10 months, bruh"...
Then what?
Not saying I'm not using AI - because I am. I'm using it in the IDE so I can stay close to every update and understand why it's there, and disagree with it if it shouldn't be there. I'm scared to be distanced from the code I'm supposed to be familiar with. So I use the AI to give me superpowers but not to completely do my job for me.
I think the idea is that by the time those trillions of lines of code start to cause maintenance problems, the models will be good enough to deal with those problems.
We'll see, I guess...
That won't solve the problem that humans will lose the skill to write code. It will become a hobbyist pastime. Like people listening to 8-tracks now...
That won't be a problem -- again assuming the vision comes to pass -- any more than the inability to write Latin and Greek holds anyone back today.
Claude desktop really sucks, a monster on resource hog
Why stop there!
We should refuse to accept coding agents until they have fully replaced chromium. By that point, the world will see that our reticence was wisdom.
I read the article more as an indictment of the promises being made vs reality. If we’re being told these agents are so good, why aren’t these companies eating their own dog food to the same degree they’re telling us to eat it?
The article already concludes coding agents have uses in areas they already do well. What specifically can be continued leading you to think should instead not be used?
The claim that "code is free now" being somehow struck low by Anthropic choosing Electron is silly and deserves ridicule.
I guess I don't understand how people don't see something like 20k + an engineer-month producing CCC as the actual flare being shot into the night that it is. Enough to make this penny ante shit about "hurr hurr they could've written a native app" asinine.
They took a solid crack at GCC, one of the most complex things *made by man*, armed with a bunch of compute, some engineers guiding a swarm, and some engineers writing tests. Does it fail at key parts? Yes. Is it a MIRACLE and a WARNING that it exists at all? YES. Do you know what you would have with an engineer-month and 20k in compute trying to write GCC from scratch in 2 weeks in 2024? A whole heck of a lot less than they got.
This notion that everything is the same just didn't make contact with 2025, and we're in 2026 now. All of software is already changing, and HN is full of wanking about all the wrong stuff.
The cobbler's children have no shoes, but at least the cobbler's children aren't running three instances of Chromium.
Code isn’t free when you have to review and QA it
The use of "Free" in the title is probably too much of a distraction from the content (even though the opening starts with an actual cost). The point of the article does not actually revolve about LLM code generation being $0 but that's what most of the responses will be about because of the title.
Because it's the most popular & flexible cross platform user space compositor:
- Unlike Qt, it's free for commercial use.
- I don't know of any other userland GUI toolkit/compositor that isn't a game engine (Unity/Unreal/etc.).
Why does it need to be cross-platform? If code were free, there would be a native app for each platform using its respective toolkit, not Qt.
Well they wanted it to be cross plat because more users.
Clearly the code isn't free, and writing for raw Win32 is painful.
Qt is LGPL, so free for commercial use in most situations.
There are rules when using the LGPL version of Qt; the KDE Foundation has an exception, I think.
https://www.qt.io/development/open-source-lgpl-obligations
I am curious how much Claude Code is used to develop Anthropic's backend infrastructure, since that's a true feat of engineering where the real magic happens.
The gotcha style "if AI is so good, why does $AI_COMPANY's software not meet my particular standard for quality" blog posts are already getting tedious.
As many have pointed out, code is not free. More than that, the ability to go fast only makes architectural mistakes WORSE. You'll drive further down the road before realizing you made the wrong turn.
A native app is the wrong abstraction for many desktop apps. The complexity of maintaining several separate codebases likely isn't worth the value add. Especially for a company hemorrhaging money the way the Anthropic does.
Not all code qualities are free. Good quality code, still expensive.
You've been able to hire dirt-cheap Indian or Filipino developers living on poverty wages in those countries to knock out cheap crap for a long time.
> Electron apps are bloated; each runs its own Chromium engine. The minimum app size is usually a couple hundred megabytes.
I only see these complaints on HN. Real users don't have this complaint. What kind of low-end machines are you running, that Chromium engine is too heavy for you?
> They are often laggy or unresponsive.
That's not due to Electron.
> They don’t integrate well with OS features.
If it is good enough for Microsoft Teams it is probably good enough for most apps. Teams can integrate with microphone, camera, clipboard, file system and so on. What else do you want to integrate with?
I agree with your counterpoint to OS integration, but Microsoft Teams is infamous for not being "good enough" otherwise. Laggy, buggy, unresponsive, a massive resource hog especially if it runs at startup. It's gotten a bit better, but not enough. These are not complaints on HN, they're in my workplace.
Not everyone is running the latest and greatest hardware, very few actually have the money for that. If you're running hardware from before this decade, or especially the early 2010s, the difference between an Electron app and a native app is unbelievably stark. Electron will often bring the device to its knees.
A single Electron app is usually not a problem. The problem is that the average user has a pile of Chrome tabs open in addition to a handful of Electron apps on top of increasingly bloated commercial OSes, which all compound to consume a large percentage of available resources.
This is particularly pertinent on bulk-purchased corporate and education machines which are loaded down with mandated spyware and antivirus garbage and often ship with CPUs that lag many years behind, and in the case of laptops might even have dog slow eMMC storage which makes the inevitable virtual memory paging miserable.
I run IT for a nonprofit and have 120 "real users" doing "real work" on "low-end machines" providing "real mental health, foster care, and social services" to "real communities".
These workers complain about performance on the machines we can afford. 16GB RAM and 256GB SSDs are the standard, as is 500MB/sec internet for offices with 40 people, and my plans to upgrade RAM this year were axed by the insane AI chip boondoggle.
People on HN need to understand that not everyone works for a well-funded startup, or big tech company that is in the process of destroying democracy and the environment in the name of this quarter's profits!
BTW, Teams has moved away from Electron; before it did, I had to advise people to use the browser app instead of the desktop one for performance reasons.
> Real users don't have this complaint. What kind of low-end machines are you running
Real users complain differently: "My machine is slow". Electron itself is not very heavyweight (though not featherweight), but JS and DOM can cost a lot of resources. Right now my GMail tab has allocated 529 MB.
> That's not due to Electron.
Of course, but it takes some careful thought. BTW e.g. Qt apps can be pretty memory-hungry, too.
> good enough for Microsoft Teams
It's not easy to pick a more "beloved" application.
What an Electron app usually would miss is things like global shortcuts managed by macOS control panel, programmability via Automation, and the L&F of native controls. I personally don't usually miss any of these, but users who actually like macOS would usually complain.
I personally prefer to run Electron-ish apps, like Slack, in their Web versions, in a browser.
So Electron is too heavy for you, but all the other non-Electron overhead of Claude is fine?
Teams is a terrible app, although Electron isn't the only reason for that: It needs a Gig of RAM to do things that older chat apps could do in 4 Meg.
The free ride of ever increasing RAM on consumer devices is over because of the AI hyperscalers buying all fab capacity, leading to a real RAM shortage. I expect many new laptops to come with 8GB as standard and mid-range phones to have 4GB.
Software engineers need to start thinking about efficiency again.
"Real users" don't know what electron is, but real users definitely complain about laggy and slow programs. They just don't know why they are laggy and slow.
The framing here assumes you need to ship native to all 3 platforms to justify leaving Electron. You don't.
I've been building a native macOS AI client in Swift — it's 15MB, provider-agnostic, and open source: https://github.com/dinoki-ai/osaurus
Committing to one platform well beats a mediocre Electron wrapper on all three.
Link gives 404
just fixed it
Yeah, like you don't need to write three different clients. You can write a native macOS client and ship your Electron client for the irrelevant platforms.
Electron is comprehensive. Maybe Claude is not there yet. The ecosystem is still important.
Or they could write native client apps using Qt to handle that “last 10%” of native app polish.
The Claude app is one giant garbanzo bean. I uninstalled it and pinned the web app to my dock instead.
Because when a service is running at a loss the investors want their money to be spent efficiently.
LLMs are best at JavaScript.
Which happens to be the majority of code they had stolen.
Exactly. Shameful explanation.
I bet it's because of Windows idiosyncrasies.
I don't think anyone on the AI hype train cares about performance, memory usage, or code quality (measured by bugs). Customers will, though.
Judging by the state of most software I use, customers genuinely could not care less about bugs. Software quality is basically never a product differentiator at this point.
Tell that to people abandoning Windows. (I'm writing this on a Linux machine right now)
Most users are forced to use the software that they use. That doesn't mean they don't care, just that they're stuck.
BTW, this going to matter MORE now that RAM prices are skyrocketing..
I'm not saying zero actual people care, I'm saying that not enough people care to actually differentiate. Is Windows getting better now that you switched? Then it doesn't matter you left.
Actually, Microsoft did say they are going to work to fix things.
https://www.techradar.com/computing/windows/microsoft-has-fi...
It seems like enough people do care to make Microsoft move.
I mean, Microsoft has recently made a statement that they're aware people are mad and they're working on it, so, no, I don't think they care that I personally hate the software but they do care that there are a number of people like me. Whether that moves the needle, I don't know, but what I do know is right now I'm using non-slop non-electron software and it's so much more pleasant. I think it's worth protecting.
I think that's too broad of a blanket statement. Plenty of people including myself choose Apple products in part for their software quality over Windows and Linux. However there are other factors like network effects or massive marketing campaigns, sales team efforts etc that are often far greater.
We just don't know how bad it will get with AI coding, though. Do you think the average consumer won't care about software quality when the bank's software "loses" a big transaction they make? Or when their TV literally stops turning on? People will tolerate shitty software if they have to, when it's minor annoyances, but it makes them unhappy and they won't tolerate big problems for long.
Probably the same reason _Teams_ is a JS app: money.
The garbage one is now indeed essentially free
> For now, Electron still makes sense
A few years ago, maybe. Tauri makes better sense for this use case today: like Electron but with system webviews, so at least it doesn't bloat your system with extra copies of Chrome. And it strongly encourages Rust for the application core over JS/Node.
Electron has never made sense. It is only capable of making poorly performing software which eats the user's RAM for no good reason. Any developer who takes pride in his work would never use a tool as bad as Electron.
If the author had tried native macOS development with an agent for an hour, they wouldn't know where to begin explaining how different agentic native development is from web. It was better a year ago; you could actually get a native app built. Now all the models over-think everything, do things they like, and ignore hard constraints. They picked all that up in training. All these behaviours: hiding mistakes, shameful silence, going "woke" and doing what they think should be done despite your wishes. All of this is ameliorated in web development, but for native it got a lot worse. And visual testing: compare the easy automated ride in a browser with re-testing it yourself for the 50th time.
Look at their backlog...
I really hope React Native’s support for Mac and Windows apps takes off. The benefits of Electron without Chromium, plus native controls/functions.
Your post already clearly covers the reason.
It is easy to crank out a one-off, flashy tool using Claude (to demo its capabilities), which may tick off 80% of the development work.
If you have to maintain it, improve it, and grow it for the long haul, good luck. That's the hard 20%.
They took the safe bet!
Claude code runs in the terminal, not Chromium. It's hardly an "electron app" at all.
It's a nodejs app, and there is no reason to have a problem with that. Nodejs can wait for inference as fast as any native app can.
Aren't they talking about the desktop app though?
Also I refuse to download and run Node.js programs due to the security risk. Unfortunately that keeps me away from opencode as well, but thankfully Codex and Vibe are not Node.js, and neither is Zed or Jetbrains products.
> It's a nodejs app, and there is no reason to have a problem with that.
Node apps typically have serious software supply chain problems. Their dependency trees are typically unauditable in practice.
They're not talking about Claude code
Desktop app.
Can we talk about how much Copilot sucks in VS Code? I have to use it for work, and it's buggy as hell for the premier product of a trillion-dollar company.
Not my experience. What doesn't work for you?
I use Opus 4.6 (for complex refactoring), Gemini 3.1 Pro (for html/css/web stuff) and GPT Codex 5.3 (workhorse, replaced Sonnet for me because in Copilot it has larger context) mostly.
For small tools. But also for large projects.
Current projects are:
1) .NET C#, Angular, Oracle database. Around 300k LoC.
2) Full stack TypeScript with Hono on backend, React on frontend glued by trpc, kysely and PostgreSQL. Around 120k LoC.
Works well in both. I'm using plan mode and agent mode.
What helps a ton are e2e Playwright tests, which are executed by the agent after each code change.
My only complaint is that it tends to stutter after many sessions/hours. A restart fixes it.
$39/mo plan.
As long as we're on the subject, I'll take the opportunity here to vent about how embarrassingly buggy and unusable VS Code is in general. On the rare occasions I'm forced to use it instead of JetBrains, it throws me for a loop that pros use it voluntarily.
I use Claude Code in Zed via ACP and have issues all the time. It pushes me towards using the CLI, but I don't want to do things that way because it's a vibe coding workflow. I want to be in the driver's seat, see what the agent has done, and be able to apply or reject hunks.
I’m in the same situation. Zed’s Claude Code is better in terms of control, but it’s wildly buggy and unreliable. Definitely not a drop in replacement.
File the bugs if you want to see things improved.
I’m in the same boat. I use it to save me from going to a browser to lookup simple syntax references, that’s about it. Its agent mode is terrifying, and asking it anything remotely complex has been a fool’s errand.
We can, but I'm really happy with it. Nobody forced it on me though.
Because JavaScript is the best for the application layer. We just have to accept that this is reality. AI training sets are just full of JS... Good JS, bad JS... But the good JS training is really good if you can tap into it.
You just have to be really careful because the agent can easily slip into JS hell; it has no shortage of that in its training.
I assume it's because LLMs are overrated and trash so they chose something that was easy for lazy developers, but I'm probably just cynical.
You would think with programming becoming completely automated by the end of 2026, there'd be a vibe coded native port for every platform, but they must be holding back to keep us from all getting jealous.
Because coding is not the hard part.
Because Anthropic and the rest of them are lying to you about the sophistication of these tools.
The fact that claude code is a still buggy mess is a testament to the quality of the dream they're trying to sell.
>claude code is a still buggy mess
What bugs are you seeing? I use Claude Code a lot on an Ubuntu 22.04 system and I've had very few issues with it. I'm not sure really how to quantify the amount of use; maybe "ccusage" is a good metric? That says over the last month I've used $964, and I've got 6-8 months of use on it, though only the last ~3-5 at that level. And I've got fairly wide use as well: MCP, skills, agents, agent teams...
there's currently ~6k open issues and ~20k closed ones on their issue tracker (https://github.com/anthropics/claude-code/issues). certainly a mix of duplicates / feature requests, but 'buggy mess' seems appropriate
Actual reason: there's far more training data available for electron apps than native apps.
And despite what Anthropic and OpenAI want you to think, these LLMs are not AGI. They cannot invent something new. They are only as good as the training data.
Real.
Nailed it.
Call them out, call them out. Why isn't Claude a native Windows x64 app if code is free?
And if software engineering has been solved by AI, why is Anthropic still hiring and employing SWE's?
https://www.businessinsider.com/anthropic-claude-code-founde...
If the AI is writing 100% of the code it literally is free (as in time) for them to move them over to native apps. They should have used the tokens for that C compiler on the native apps, would have made for a much more convincing marketing story as well.
Seriously, they claim to be able to make a C compiler but they can't make a TUI without using javascript? It's beyond pathetic.
To be fair in that respect, Claude Code does have a native Rust implementation you can use.
Electron is a good choice for native-like desktop apps, even with the downsides.
Yes, feel free to downvote me.
Write everything in C.
Yawn, the 90/10 excuse again. And "shipping it everywhere" is a blatant lie: there is still no Linux release. Looks like you are conflating Claude Code with Claude. Claude would be the desktop app...
Because it's an Electron app, I use it on Omarchy Linux with no problem.
There is no official release and some features don't work if you patch it to work on Linux.
Ho is Claude?
Tldr
nailing down all the edge cases
Electron isn't that bad. Apps like VSCode and Obsidian are among the highest-quality and most performant apps I have installed. The Claude app's problem is not Electron; it's that the app just sucks, bad. Stop blaming the problems on a lack of nativeness.
VSCode takes 1 GB of memory to open the same files that Sublime can do in just 200 MB. It is not remotely performant software, it sucks at performance.
I too thought VSCode's being web based would make it much slower than Sublime. So I was surprised when I found on my 2019 and 2024 ~$2,500-3,000 MacBook Pros that Sublime would continually freeze up or crash while viewing the same 250 MB - 1 GB plain text log files that VSCode would open just fine and run reliably on.
At most, VS Code might say that it has disabled lexing, syntax coloring, etc. due to the file size. But I don't care about that for log files...
It still might be true that Visual Studio Code uses more memory for the same file than Sublime Text would. But for me, it's more important that the editor runs at all.
Polar opposite experience for me. I mainly use vscode, but I need sublime to open anything big.
Maybe Electron isn’t that bad. Maybe there are some great Electron apps. But there’s a big chunk that went unsaid: Most Electron apps suck. Does correlation here imply causation? Maybe not, but holy fuck isn’t it frustrating to be a user of Electron apps.
it is not that most electron apps suck, the ones OP listed suck
I think you're missing the point a little, friendo. It's not that Electron is bad; it's that Electron itself is an abstraction for cross-platform support. If code can be generated for free, then the question is: why do we need this to begin with? Why can't Claude write it in Win32, SwiftUI, and GTK?
The answer, of course, is that it can't do it and maintain compatibility between all three well enough; it's high effort, and each has its own idiosyncrasies.
I don't know whether Electron fits in this case, but I can say Claude isn't equally proficient at all toolchains. I recently had Claude Code (Opus 4.6, agent teams) build an image manipulation webapp in Python, Go, Rust, and Zig.
In Python it was very nearly a one-shot; there was an issue with one watermark not showing up on one API endpoint that I had to give it a couple of kicks at the can to fix. Go it was able to get, but it needed 5+ attempts at rework. Rust took ~10+, and Zig took maybe 15+.
They were all given the same prompt, though they all likely would have done much better if I had it build a test suite or at least a manual testing recipe for it to follow.
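For context, a watermark that silently fails to appear on one endpoint is usually a compositing/clipping bug. Here is a toy sketch of that compositing step in pure Python; the function names, the pixel model (2D lists of RGB tuples), and the blend formula are illustrative assumptions, not the actual generated code:

```python
# Toy alpha-blend watermark compositing. Images are modeled as 2D lists of
# (r, g, b) tuples; everything here is illustrative, not the real app.

def blend_pixel(base, mark, alpha):
    """Blend one watermark pixel over a base pixel with opacity `alpha`."""
    return tuple(round(b * (1 - alpha) + m * alpha) for b, m in zip(base, mark))

def apply_watermark(image, mark, x, y, alpha=0.5):
    """Return a copy of `image` with `mark` blended in at offset (x, y).

    Watermark pixels that fall outside the image are clipped, which is
    exactly the kind of edge case that's easy to get subtly wrong.
    """
    out = [row[:] for row in image]  # don't mutate the input image
    for my, row in enumerate(mark):
        for mx, pixel in enumerate(row):
            ty, tx = y + my, x + mx
            if 0 <= ty < len(out) and 0 <= tx < len(out[0]):
                out[ty][tx] = blend_pixel(out[ty][tx], pixel, alpha)
    return out

if __name__ == "__main__":
    white = [[(255, 255, 255)] * 4 for _ in range(4)]
    black_mark = [[(0, 0, 0)] * 2 for _ in range(2)]
    # Stamp at the bottom-right corner so part of the mark gets clipped.
    stamped = apply_watermark(white, black_mark, 3, 3, alpha=0.5)
    print(stamped[3][3])  # blended corner pixel
    print(stamped[0][0])  # untouched pixel
```

Nothing deep, but it shows why a per-endpoint watermark bug is plausible: forget the clipping check or the copy, and one code path renders fine while another drops the mark or corrupts the source image.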
To build with GTK you are hit with the GPL, which sucks. To build with Swift you have to pay a developer fee to Apple; to build with Win32 you have to pay a developer fee to Microsoft. Both suck. Don't forget mobile: for Android you pay Google.
That is why everyone jumped to building in Electron: it is based on web standards, which are free, and runs on Chromium, which kind of is tied to Google, but you are not tied to Google and don't have to pay them a fee. You can also easily provide roughly the same experience on mobile, skipping the Android shenanigans.
I know Anthropic is burning cash but I'm pretty sure they can afford to pay the developer fees for those platforms.
> To build gtk you are hit with GPL which sucks.
It's LGPL, all you have to do is link GTK dynamically instead of statically to comply.
> to build win32 you have to pay developer fee to Microsoft.
You don't.
>"to build win32 you have to pay developer fee to Microsoft"
Not really; you can self-sign, but your native application will be met with a system prompt trying to scare the user away. This is maddening, of course, and I wish MS, Apple, and whatever others would die just for this thing alone. You vultures leveraged huge support from developers writing for your platforms, but no, of course it's not enough; now let's rip money from the hands that fed you.