super valid concerns, but i believe all of them are fixable.
true that llms are stochastic by nature, but with models like opus 4.5/4.6 the quality is on another order of magnitude. and all of this can be backed by live tests that give feedback on the code as it's written.
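to make that live-test loop a bit more concrete, here's a rough python sketch -- generate_patch and run_tests are made-up placeholders, not any real api; the point is just that test output flows back into the next generation round:

    # rough sketch of "live tests as feedback"; generate_patch() is a
    # placeholder for whatever model call you wire in
    import subprocess

    def run_tests() -> tuple[bool, str]:
        # run the project's test suite and capture its output to use as feedback
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        return result.returncode == 0, result.stdout + result.stderr

    def generate_patch(task: str, feedback: str = "") -> None:
        # placeholder: ask the model to write/modify code for the task,
        # handing it the latest test output as feedback
        raise NotImplementedError("wire this to your model api of choice")

    def build_with_feedback(task: str, max_rounds: int = 5) -> bool:
        feedback = ""
        for _ in range(max_rounds):
            generate_patch(task, feedback)
            passed, feedback = run_tests()
            if passed:
                return True
        return False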
the "kernel" here is merely an analogy, just like how we don't care about lsp and all the details in old ide's, and just use claude code and such, the ai native os will abstract away all these details, like kernel, ...
the failure rate will depend on what kind of projects you're asking for and how much you're customizing your os. this is where the tests need to be in place (and where the healer jumps in to fix the config format, ...)
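and a similarly rough sketch of what the config healer could look like -- ask_model_to_fix is hypothetical; the only hard rule is that the repaired config has to parse before it's accepted:

    # hypothetical "healer": if a config stops parsing, hand it to the model
    # along with the parse error, and only accept the fix if it parses again
    import json

    def ask_model_to_fix(broken: str, error: str) -> str:
        raise NotImplementedError("wire this to your model api of choice")

    def heal_config(path: str, max_attempts: int = 3) -> dict:
        with open(path) as f:
            text = f.read()
        for _ in range(max_attempts):
            try:
                return json.loads(text)  # parses -> good (or healed) config
            except json.JSONDecodeError as err:
                text = ask_model_to_fix(text, str(err))
        raise RuntimeError(f"healer could not repair {path}")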