233 comments

  • paxys 2 days ago ago

    > In comparison with the previous Java service, the updated backend delivers a 40% increase in performance, along with improved scalability, security, and availability.

    As is always the case with such rewrites, the big question is whether the improvements came from the choice of language or because they updated a crusty legacy codebase and fixed bugs/bottlenecks.

    • mpweiher 2 days ago ago

      With the 90% reduction in memory consumption, I'd wager that most if not all the performance improvement came from that. In fact, it is a little surprising that hardware utilization only dropped 50%.

      Reduced memory consumption for cloud applications was apparently also the primary reason IBM was interested in Swift for a while. Most cloud applications apparently sit mostly idle most of the time, so the number of clients you can multiplex on a single physical host is limited by memory consumption, not CPU throughput.

      And Java, with the JIT and the GC, has horrible memory consumption.

      • tgma 2 days ago ago

        IBM is a huge and quite balkanized company and I don't think there was ever a centralized push towards Swift outside some excited parties. With that, I would note that circa 2018 there was a DRAM shortage in the industry and people started thinking more about memory conservation in datacenter workloads.

      • eosterlund a day ago ago

        The thing about DRAM is that it isn't SRAM; cost matters. You struggle to find deployment environments that have less than 1 GB of DRAM available per core, because at that point ~95% of the HW cost is typically CPU anyway. Shrinking that further is kind of pointless, so people don't do it. Hence, when utilizing 16 cores, you get at least 16 GB of DRAM that comes with it, whether you choose to use it or not. If you use only 10% of that memory by removing the garbage from the heap, then while lower seems better and all that, it's not necessarily any cheaper in actual memory spending, if both fit within the same minimum 1 GB/core shape you can buy anyway. It might just under-utilize the memory resources you paid for in a minimum memory-per-CPU shape, which isn't necessarily a win. Utilizing the memory you bought isn't wasting it.

        Each extra GB per core you add to your shape actually costs something, so every GB/core that can be saved results in actual cost savings. But even then, usually every extra GB/core is ~5% of the CPU cost. Hence, even when going from 10 GB/core (sort of a lot) to 1 GB/core, that only translates to ballpark ~50% less HW cost. Since they did not mention how many cores these instances have, it's hard to know what GB/core ratios were used before and after, and hence whether there were any real cost savings in memory at all, and if so what the relative memory cost savings might have been compared to CPU cost.

      • conradev 2 days ago ago

        This is why serverless is taking off on the opposite end of the spectrum (and why it’s so cheap)

        You can share memory not only at the machine level, but between different applications.

      • dgs_sgd 2 days ago ago

        Interesting. If IBM was trying to solve for memory consumption, why do you think they picked Swift over alternatives that might also have achieved lower memory consumption?

        • dagmx 2 days ago ago

          Swift is at a good middle ground of performance, safety and ease of use.

          It has higher-level ergonomics that something like Rust lacks (as much as I like Rust myself), doesn’t have many of the pitfalls of Go (error handling is much better, for example), and is relatively easy to pick up. It’s also in the same performance ballpark as Rust or C++.

          It’s not perfect by any means; it has several issues, but it’s quickly becoming my preferred language as well for knocking out projects.

          • ngcc_hk an hour ago ago

            What are those issues, I wonder? I used it for one project and am not sure how deep the rabbit hole might be. Just wondering.

        • tgma 2 days ago ago

          Which other sufficiently popular modern language that's more efficient than Java lacks a tracing GC?

          Rust and Swift are pretty much the only two choices, and Rust is arguably much more of a pain in the ass for the average joe enterprise coder.

    • tiffanyh 2 days ago ago

      I'd typically agree with your comment but ...

      Given that they also experienced a 90% reduction in memory usage (presumably Java's tracing GC vs Swift's compile-time ARC) - it seems more likely the gains are in fact from the difference in languages.

      • javanonymous 2 days ago ago

        The JVM tends to use as much memory as it can for performance reasons. It is not a reliable indicator of how much memory it actually needs. Why spend resources on clearing memory if there's still unused memory left?

        If memory is an issue, you can set a limit and the JVM will probably still work fine.
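
        A hedged sketch of what setting such a limit looks like (standard HotSpot flags; the values and app.jar are illustrative and need tuning per workload):

        ```shell
        # Hard-cap the heap at 512 MB and commit it up front:
        java -Xmx512m -Xms512m -jar app.jar

        # Or, in a container, bound the heap to a fraction of available RAM:
        java -XX:MaxRAMPercentage=50.0 -jar app.jar
        ```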

        • tgma 2 days ago ago

          Your comment is definitely true. However, if you study GC performance papers from the past three or four decades, they pretty much conclude that GC overhead, amortized, can be on par with manual alloc/free [1], but the usually unwritten assumption is that they have unbounded memory for that to be true. If you study how much memory you need in practice to avoid an amortized performance loss, you'd arrive at 2-3x, so I'd claim it is fair to assume Java needs 2-3x as much memory as Swift/C++/Rust to run comfortably.

          You can actually witness this to some degree on Android vs iPhone. An iPhone comfortably runs with 4GB RAM where an Android would be slow as a dog.

          [1]: I don't dispute the results, but I also like to note that as a researcher in Computer Science in that domain, you were probably looking to prove how great GC is, not the opposite.

          • wiseowise 2 days ago ago

            Android doesn’t even run the JVM.

            > iPhone comfortably runs with 4GB RAM and Android would be slow as dog.

            This has nothing to do with RAM. Without load, Android wouldn’t even push 2GB; it would still be slower than iPhone because of the different trade-offs they make in architecture.

            • tgma 2 days ago ago

              The point was GC cost in general, not which Java/JVM implementation you choose. Try comparing two Androids with the same chipset at 4GB vs 8GB RAM.

              Anyhow, that was just an anecdotal, unscientific experiment to give you some idea; obviously they are two different codebases. The literature is there to quantify the matter, as I noted.

              • pjmlp a day ago ago

                Android for a very long time lacked a quality JIT, AOT and GC implementation, and then each device is a snowflake of whatever changes each OEM has done to the device.

                Unless one knows exactly what ART version is installed on the device, what build options from AOSP were used on the firmware image, and what is the mainline version deployed via PlayStore (if on Android 12 or later), there are zero conclusions that one can take out of it.

                Also, iOS applications tend to just die when there is no more memory to make use of, due to the lack of paging and to memory fragmentation.

        • cellularmitosis 2 days ago ago

          If it were such an easy problem to fix, don’t you think they would have done so rather than rewriting in Swift?

          • cogman10 2 days ago ago

            Having been in development for a long time, no.

            Frankly, it just takes some motivated senior devs and the tantalizing ability to put out the OP blog post, and you've got something management will sign off on. Bonus points: you get to talk about how amazing it was to use Apple tech to get the job done.

            I don't think they seriously approached this because the article only mentioned tuning G1GC. The fact is, they should have been talking about ZGC, AppCDS, and probably Graal if pause times and startup times were really that big a problem for them. Heck, even CRaC should have been mentioned.

            It is not hard to get a JVM to start up in sub-second time. Here's one framework where that's literally the glossy print on the front page. [1]

            [1] https://quarkus.io/
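
            For reference, a sketch of the options being named (real OpenJDK flags; app.jar and the archive name are placeholders):

            ```shell
            # ZGC (generational mode, JDK 21+) for sub-millisecond pauses:
            java -XX:+UseZGC -XX:+ZGenerational -jar app.jar

            # AppCDS: record a class-data archive on a training run...
            java -XX:ArchiveClassesAtExit=app.jsa -jar app.jar
            # ...then reuse it for faster startup:
            java -XX:SharedArchiveFile=app.jsa -jar app.jar
            ```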

            • jamesfinlayson 2 days ago ago

              Yep, resume-driven development. I remember that at a previous company, a small group of people pushed a Go rewrite to speed everything up. The serious speed improvements actually came from re-architecting (eliminating a heavy custom framework, using a message queue instead of handling requests synchronously, etc.). They would have been better off fixing the original system so that everything could benefit from the improvements, not just the tiny bits they carved off.

              Then the next annual report talked about improved scalability because of this amazing technology from Google.

              • geodel 2 days ago ago

                Resume-driven development would be using some random-ass Java framework to pad a resume. Apple using Apple technologies seems more like a corporate mandate.

            • geodel 2 days ago ago

              If Apple does not dogfood their own technology for production systems, what chance do they have of telling 3rd-party users that Swift is ready for prime time?

              Delving into Java arcana instead of getting first-hand experience developing in Swift would have been a great opportunity wasted to improve Swift.

              • cogman10 2 days ago ago

                I agree if this was a brand new system.

                However, they chose to replace an existing system with Swift. The "arcana" I mentioned are startup options that are easily found and safe to apply. It's about as magical as "-O2" is to C++.

                Sure, this may have been the right choice if the reason was to exercise Swift. However, we shouldn't pretend like there was nothing to do to make Java better. The steps I described are like 1 or 2 days' worth of dev work. How much time do you think a rewrite took?

                • MBCook 2 days ago ago

                  Apple has explicitly stated that they want to move as much of their stuff to Swift as possible.

                  I’m sure you’re right, there must have been ways to improve the Java deployment. But if they wanted to reduce resource usage and doing it in Swift aligned with some other company goal, it would make sense that they might just go straight to this.

                • geodel 2 days ago ago

                  The Amazon CEO was once asked about new competitors trying to build cloud infrastructure fast. His reply was: "You cannot compress experience."

                  Saving a few weeks or months by learning a 3rd-party technology instead of applying and improving first-party technology would be amateurish.

                  > However, we shouldn't pretend like there was nothing to do to make Java better.

                  This seems like a constant refrain: that Apple, or anyone choosing their own tech over someone else's, owes an absolutely fair shot to the stuff they didn't choose. That is simply not the way the world works.

                  Yes, there are endless stories of companies spending enormous resources to optimize their Java stack, even up to working with the core Java team at Oracle to improve JVM innards. But those companies are just (albeit heavy) users of the core technology rather than developers of a competing one. Apple is not one of those users; they are developers.

                  • cogman10 2 days ago ago

                    > Yes, there are endless stories of companies spending enormous resources to optimize their Java stack

                    And not what I'm advocating for. Sometimes rewrites are necessary.

                    What I'm advocating is exercising a few well documented and fairly well known jvm flags that aren't particularly fiddly.

                    The jvm does have endless knobs, most of which you shouldn't touch and instead should let the heuristics do their work. These flags I'm mentioning are not that.

                    Swapping G1GC for ZGC, for example, would have resolved one of their major complaints about GC impact under load. If the live set isn't near the max heap size, then pause times are sub-millisecond.

                    > This seems like constant refrain that Apple or anyone choosing their own tech over someone else's owe absolute fair shot to stuff they didn't choose. This is simply not the way world works.

                    The reason for this refrain is that Java is a very well-known tech and easy to hire for (and one which Amazon, whom you cite, uses heavily). And Apple had already adopted Java and written a product with it (I suspect they have several).

                    I would not be saying any of this if the article were a generic benchmark and comparison of Java with Swift. I would not fault Apple for saying "we are rewriting in Swift to minimize the number of languages used internally and improve the Swift ecosystem".

                    I'm taking umbrage at them trying to sell this as an absolute necessity because of performance constraints while making questionable statements about the cause.

                    And, heck, the need to tweak some flags would be a valid thing to call out in the article: "we got the performance we wanted with the default compiler options of Swift. To achieve the same thing with Java requires multiple changes from the default settings." I personally don't find it compelling, but it's honest and would sway someone who wants something that "just works" without fiddling.

                  • pjmlp 2 days ago ago

                    I remember the days when Apple developed their own JVM, ported WebObjects from Objective-C to Java, and even had it as the main application language for a little while, uncertain if the Object Pascal/C++ educated developers on their ecosystem would ever bother to learn Objective-C when transitioning to OS X.

          • selcuka 2 days ago ago

            Nothing at IBM is ever straightforward.

            Decades ago, I was working with three IBM employees on a client project. During a discussion about a backup solution, one of them suggested that we migrate all customer data into DB2 on a daily basis and then back up the DB2 database.

            I asked why we couldn't just back up the client's existing database directly, skipping the migration step. The response? "Because we commercially want to sell DB2."

          • Someone 2 days ago ago

            You tune for what you have/can get. Machines with less memory tend to have slower CPUs. That may make it impossible to tune for (close to) 100% CPU and memory usage.

            And yes, Apple is huge and rich, so they can get fast machines with less memory, but they likely have other tasks with different requirements they want to run on the same hardware.

          • tgma 2 days ago ago

            No one gets promoted fixing bugs. Rewriting an entire system is a great way to achieve that.

          • wiseowise 2 days ago ago

            But then you don’t get promo and “fun” work!

      • plorkyeran 2 days ago ago

        The typical rule of thumb is that getting good performance out of tracing GC requires doubling your memory usage, so a 90% reduction suggests that they made significant improvements on top of the language switch.

        • mrighele 2 days ago ago

          The 90% reduction doesn't necessarily have to be related only to GC.

          In my experience Java is a memory hog even compared to other garbage collected languages (that's my main gripe about the language).

          I think a good part of the reason is that, if you exclude primitive types, almost everything in Java is a heap-allocated object, and Java objects are fairly "fat": every single instance has a header of between 96 and 128 bits on 64-bit architectures [1]. That's... a lot. Just by making the headers smaller (the topic of that link) you can get a 20% decrease in heap usage and improvements in CPU and GC time [2].

          My hope is that once value classes arrive [3][4] and libraries start to use them, we will see a substantial decrease in heap usage in the average Java app.

          [1] https://openjdk.org/jeps/450

          [2] https://openjdk.org/jeps/519

          [3] https://openjdk.org/jeps/401

          [4] https://www.youtube.com/watch?v=Dhn-JgZaBWo

          • cogman10 2 days ago ago

            The Java GC approach is somewhat unusual compared to other languages. There are multiple GCs, and pretty much all of them are moving collectors. That means the JVM fairly rarely ends up freeing memory that it has claimed: a big spike will mean that it holds onto the spike's worth of memory.

            Many other GCed languages, such as Swift, CPython, and Go, do not use a moving collector. Instead, they allocate and pin memory and free it when no longer in use.

            The benefit of the JVM approach is that heap allocations are wicked fast on pretty much all of its collectors. Generally, an allocation is a check to see if space is available and a pointer bump. In the other languages, you are bound to end up using a skiplist and/or arena allocator provided by your malloc implementation. Roughly O(log n) vs O(1) in performance terms.
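
            To make the pointer-bump point concrete, here is a toy sketch of bump allocation (illustrative code, not real JVM internals; a real VM bumps inside thread-local allocation buffers and triggers a GC when full):

            ```java
            // Toy bump-pointer arena: allocation in a compacting heap is just a
            // bounds check plus a pointer bump.
            class BumpArena {
                private final byte[] heap;
                private int top = 0;  // the "bump pointer": next free offset

                BumpArena(int size) { heap = new byte[size]; }

                // O(1) allocate: returns the offset of the new block,
                // or -1 when the arena is full (a real VM would GC here).
                int allocate(int bytes) {
                    if (top + bytes > heap.length) return -1;
                    int offset = top;
                    top += bytes;
                    return offset;
                }
            }

            public class BumpDemo {
                public static void main(String[] args) {
                    BumpArena arena = new BumpArena(64);
                    System.out.println(arena.allocate(16)); // 0: starts at the base
                    System.out.println(arena.allocate(16)); // 16: right behind it
                    System.out.println(arena.allocate(64)); // -1: arena exhausted
                }
            }
            ```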

            Don't get me wrong, the object header does eat a fair chunk of memory - roughly double what another language will take. However, a lot of people confuse the memory which the JVM has claimed from the OS (and which the OS thus reports) with the memory the JVM is actively using. Those are 2 different things.

            It just so happens that, for moving collectors like the ones the JVM uses, more reserved memory means fewer garbage collections and less time spent garbage collecting.

            • mrighele 2 days ago ago

              A moving garbage collector is not that rare; other languages have one. C# has one, for example, and so do OCaml and SBCL.

              I know about the trade-offs that a moving GC makes, but the rule of thumb is about double the memory usage, not ten times more, as a 90% reduction would seem to imply.

            • sweetjuly 2 days ago ago

              > Many other GCed languages, such as swift

              Swift is not garbage collected; it uses reference counting, so memory there is freed immediately when the last reference to it goes away.

          • formerly_proven 2 days ago ago

            If Java's 128-bit object headers are already fairly fat, then what adjective applies to CPython's? An empty list [] is about a whole cache line. Trivial Python objects are barely smaller.

        • username223 2 days ago ago

          I remember the 2x rule from 20 years ago - do you know if things have changed? If locality is more important now, tracing GC might never be as performant as reference counting. Either you use 2x the memory and thrash your cache, or you use less and spend too much CPU time collecting.

          • jeroenhd 2 days ago ago

            Java has had AOT compilation for a while, so traditional GC and its massive overhead are no longer a strict necessity. Even AOT Java will probably stay behind Swift or any other natively compiled language in terms of memory usage, but it shouldn't be that drastic.

            As for performance and locality, Java's on-the-fly pointer reordering/compression can give it an edge over even some compiled languages in certain algorithms. Hard to say if that's relevant for whatever web framework Apple based their service on, but I wouldn't discount Java's locality optimisations just because it uses a GC.

            • pjmlp 2 days ago ago

              "For a while" means since around 2000, although toolchains like Excelsior JET and WebSphere Real Time, among others, were only available to companies that cared enough to pay for AOT compilers and JIT caches.

              Nowadays, to add to your comment, all major free-beer implementations (OpenJDK, OpenJ9, GraalVM, and the ART cousin) do AOT and JIT caches.

              Even without Valhalla, there are quite a few tricks possible with Panama: one can manually create C-like struct memory layouts.

              Yes, it is a lot of boilerplate; however, one can get around the boilerplate with AI (maybe), or just write the C declarations and point jextract at them.
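
              A minimal sketch of that Panama approach (assumes JDK 22+, where the FFM API is final; the struct and all names here are illustrative):

              ```java
              import java.lang.foreign.*;
              import java.lang.invoke.VarHandle;

              // A C-like `struct { int x; int y; }` described as a memory layout
              // and stored in a native memory segment.
              public class PointLayout {
                  static final StructLayout POINT = MemoryLayout.structLayout(
                          ValueLayout.JAVA_INT.withName("x"),
                          ValueLayout.JAVA_INT.withName("y"));
                  static final VarHandle X = POINT.varHandle(MemoryLayout.PathElement.groupElement("x"));
                  static final VarHandle Y = POINT.varHandle(MemoryLayout.PathElement.groupElement("y"));

                  public static void main(String[] args) {
                      try (Arena arena = Arena.ofConfined()) {
                          MemorySegment p = arena.allocate(POINT); // 8 contiguous native bytes, no object header
                          X.set(p, 0L, 3);
                          Y.set(p, 0L, 4);
                          System.out.println(X.get(p, 0L) + "," + Y.get(p, 0L)); // 3,4
                      }
                  }
              }
              ```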

            • username223 2 days ago ago

              > Java has had AOT compilation for a while, so traditional GC and its massive overhead are no longer a strict necessity.

              You mean it does escape analysis and stack-allocates what it can? That would definitely help, but not eliminate the GC. Or are you thinking of something else?

              Thinking about it more, I remember that Java also has some performance-hostile design decisions baked in (e.g. almost everything's an Object, arrays aren't packed, dynamic dispatch everywhere). Swift doesn't have that legacy to deal with.

              • acdha 2 days ago ago

                Java also has a lot of culture around writing optimization-resistant code, so there's the question of whether you're talking about the language itself or various widespread libraries, especially if they're old enough to have patterns designed around aesthetics or now-moot language limitations rather than performance.

                I’ve replaced Java code with Python a few times, and each time, even though we did it for maintenance (more Python devs available), we saw memory usage more than halved while performance at least doubled, because the code used simpler functions and structures. Java has a far more advanced GC and JIT, but at some point the weight of code and indirection wins out.

                • winrid 2 days ago ago

                  That's interesting. I did a line by line rewrite of a large Django route to Quarkus and it was 10x faster, not using async or anything.

                  • acdha a day ago ago

                    That’s why I said “culture” - by all rights the JVM should win that competition. I wrote a bit more about the most recent one in a sibling comment but I’d summarize it as “the JVM can’t stop an enterprise Java developer”.

                    https://news.ycombinator.com/item?id=44179589

                    • pjmlp a day ago ago

                      Enterprise developers and architects would be the same, regardless of the programming language.

                      I am old enough to have seen enterprise C and C++ developers.

                      Where do you think stuff like DCE, CORBA, and DCOM came from?

                      Also, many of the things people blame Java for were born as Smalltalk, Objective-C, and C++ frameworks before being rewritten in Java.

                      Since we are in an Apple discussion thread, here are some Objective-C identifiers from Apple frameworks:

                      https://github.com/Quotation/LongestCocoa

                      I also advise getting hold of the original WebObjects documentation in Objective-C, before its port to Java.

                      • acdha a day ago ago

                        > Enterprise developers and architects would be the same, regardless of the programming language.

                        This is true to some extent, but the reason I focused on culture is that there are patterns which people learn and pass on differently in each language. For example, enterprise COBOL programmers didn't duplicate data in memory to the same extent, not only due to hardware constraints but also because there wasn't a culture telling every young programmer that this was the exemplar style to follow.

                        I totally agree about C++ having had the same problems, but most of the enterprise folks jumped to Java or C#, which, it felt like, improved the ratio of performance-sensitive developers in the community of people still writing C++. Python had a bit of that, especially in the 2000s, but a lot of the Very Serious Architects didn't like the language and so they didn't influence the community anywhere near as much.

                        I’m not saying everyone involved is terrible; I just find it interesting how we like to talk about software engineering when a lot of the major factors are basically things people want to believe are good.

                • throwaway2037 2 days ago ago

                      > I’ve replaced Java code with Python a few times ... while performance at least doubled
                  
                  Are you saying you made Python code run twice as fast as Java code? I have written lots of both. I really struggle to make Python go fast. What am I doing wrong?

                  • acdha a day ago ago

                    More precisely, when deploying the new microservice, it used less than half as much CPU to process more requests per second.

                    This is not “Java slow, Python fast” – I expected it to be the reverse – but rather that the developers who cranked out a messy Spring app somehow managed to cancel out all of the work the JVM developers have done without doing anything obviously wrong. There wasn’t a single bottleneck, just death by a thousand cuts with data access patterns, indirection, very deep stack traces, etc.

                    I have no doubt that there are people here who could’ve rewritten it in better Java for significant wins but the goal with the rewrite was to align a project originally written by a departed team with a larger suite of Python code for the rest of the app, and to deal with various correctness issues. Using Pydantic for the data models not only reduced the amount of code significantly, it flushed out a bunch of inconsistency in the input validation and that’s what I’d been looking for along with reusing our common code libraries for consistency. The performance win was just gravy and, to be clear, I don’t think that’s saying anything about the JVM other than that it does not yet have an optimization to call an LLM to make code less enterprise-y.

                    • throwaway2037 a day ago ago

                      Okay, I understand your point. Basically, you rewrote an awful (clickbait-worthy) enterprisey Java web app into a reasonable, maintainable Python web app. I am sympathetic. Yes, I agree: I have seen, sadly, far more trashy enterprisey Java apps than not. Why? I don't know. The incentives are not well-aligned.

                      As a counterpoint: look at Crazy Bob's (Lee, R.I.P.) Google Guice, Norman Maurer's Netty.IO, or Tim Fox's Vert.x. All of them are examples of how to write ultra-lean, low-level, high-performance modern Java apps... but these are frequently overlooked in favor of hiring cheap, low-skill Java devs to write "yet another Spring app".

                      • p2detar 5 hours ago ago

                        IMO the “Spring fever” is the most horrible thing that has happened to Java. There genuinely are developers and companies that reduce the whole language and its ecosystem to Spring. This is just sad. I’m glad that I have been working with Java for 15+ years and have never touched any Spring stuff whatsoever.

                      • acdha a day ago ago

                        > but these are frequently overlooked in favor of hiring cheap, low-skill Java devs to write "yet another Spring app"

                        Yeah, that’s why I labeled it culture, since it was totally a business failure, with contracting companies basically doing the "why hire these expensive people when we get paid the same either way?" thing. No point in ranting about the language; it can't fix the business. But unfortunately there's a ton of inertia around that kind of development, and a lot of people have been trained that way. I imagine this must be very frustrating for the Java team at Oracle, knowing that their hard work is going to be buried by half of their users.

          • cogman10 2 days ago ago

            It all depends, but one major advantage of the way the JVM GCs is that related memory will tend to be colocated. This is particularly true of the serial, parallel, and G1GC collectors.

            Let's say you have an object graph that looks like A -> B -> C. Even if the allocations of A, B, and C happened at very different times, with other allocations in between, the next time the GC runs, as it traverses the graph it will place them in memory as [A, B, C], assuming A is still live. That means even if the memory originally looks something like [A, D, B, Q, R, S, T, C], the act of collecting and compacting has a tendency to colocate.

            • username223 2 days ago ago

              That's the theory -- a compacting collector will reduce fragmentation and can put linked objects next to each other. On the other hand, a reference count lives in the object, so you're likely using that cache line already when you change it.

              I don't know which of these is more important on a modern machine, and it probably depends upon the workload.

              • cogman10 2 days ago ago

                The problem is memory colocation, not RC management. But I agree, it'll likely be workload-dependent. One major positive aspect of RC is that the execution costs are very predictable. There's little external state which can negatively impact performance (like the GC currently running).

                The downside is fragmentation and the CPU time required for memory management. If you have an A -> B -> C chain where A is the only owner of B and B is the only owner of C, then when A's count hits 0, it has to do 2 pointer hops to deallocate B and then C (plus arena management for the deallocs).

                One of the big benefits of JVM moving style collectors is that when A dies, the collector does not need to visit B or C to deallocate them. The collector only visits and moves live memory.
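
                A toy simulation of that difference (illustrative names, not any real runtime's API): releasing the last reference to A has to walk and free B and C too, while a moving collector would simply never visit them.

                ```java
                import java.util.ArrayList;
                import java.util.List;

                // Toy reference-counted node owning a single child reference.
                class RcNode {
                    final String name;
                    final RcNode child;   // this node owns one reference to child
                    int refCount = 1;     // one reference held by the creator

                    RcNode(String name, RcNode child) {
                        this.name = name;
                        this.child = child;
                    }

                    // Drop one reference; on reaching zero, free this node and
                    // cascade the release down the ownership chain.
                    void release(List<String> freed) {
                        if (--refCount == 0) {
                            freed.add(name);
                            if (child != null) child.release(freed);
                        }
                    }
                }

                public class RcDemo {
                    public static void main(String[] args) {
                        RcNode a = new RcNode("A", new RcNode("B", new RcNode("C", null)));
                        List<String> freed = new ArrayList<>();
                        a.release(freed);          // releasing A cascades through B and C
                        System.out.println(freed); // [A, B, C]
                    }
                }
                ```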

                • dwaite 2 days ago ago

                  > The downside is fragmentation and the CPU time required for memory management. If you have an A -> B -> C chain where A is the only owner of the B and B is the only owner of C, then when A hits 0, it has to do 2 pointer hops to deallocate B and then deallocate C (plus arena management for the deallocs).

                  I suspect this puts greater emphasis on functionality like value types and flexibility in compositionally creating objects. You can trend toward larger objects rather than nesting inner objects for functionality. For example, you can use tagged unions to represent optionality rather than pointers.

                  The cost of deep A -> B -> C relationships in Java comes during collections, which still default to halting. The difference is that a reference-counting GC will evaluate these chains while removing objects, while a tracing GC will evaluate live objects.

                  So, garbage collection is expensive for ref-counting if you are creating large transient datasets, and is expensive for a tracing GC if you are retaining large datasets.

        • geodel 2 days ago ago

          That is just the GC part. Another big difference is reference types (Java) vs value types (Swift).

      • loxs 2 days ago ago

        The Java runtime is a beast. Even the fact that another runtime is capable of doing a similar thing is impressive, never mind that it might be better. Even being on par makes it interesting enough for me to maybe try it on my own.

    • favorited 2 days ago ago

      The post notes that the user-facing app was "introduced in the fall of 2024," so presumably the services aren't that legacy.

      • remus 2 days ago ago

        You can learn a lot when writing V2 of a thing though. You've got lots of real world experience about what worked and what didn't work with the previous design, so lots of opportunity for making data structures that suit the problem more closely and so forth.

      • isodev 2 days ago ago

        But did they write the backend from scratch or was it based on a number of “com.apple.libs.backend-core…” that tend to bring in repeating logic and facilities they have in all their servers? Or was it a PoC they promoted to MVP and now they’re taking time to rewrite “properly” with support for whatever features are coming next?

    • Someone 2 days ago ago

      My $0.02 is that Java's lack of value types (still in the works), while Swift has them, is a large reason for the efficiency gains.

      • CharlieDigital 2 days ago ago

        As a C# dev, I guess I've just taken it for granted that we have value types. Learned something new today (that Java apparently does not).

        • MBCook 2 days ago ago

          It does for primitives.

          For user defined stuff we’ve recently gained records, which are a step in that direction, and a full solution is coming.

          • SigmundA 2 days ago ago

            What about structs?

            • dwaite 2 days ago ago

              Basically no.

              Even records are not value types, but rather classes limited to value-like semantics - e.g. they can't extend other classes, they are expected to be immutable by default with modification creating a new record instance, and the like.

              The JVM theoretically can perform escape analysis to see that a record behaves a certain way and can be stack allocated, or embedded within the storage of an aggregating object rather than having a separate heap allocation.

              A C# struct gets boxed to adapt it to certain things like an Object state parameter on a call. The JVM theoretically would just notice this possibility and decide to make the record heap-allocated from the start.

              I say theoretically because I have not tracked if this feature is implemented yet, or what the limitations are if it has been.

            • pjmlp 2 days ago ago

              Currently only via Panama, creating the memory layout manually in native memory segments.

              Valhalla is supposed to bring language-level support; the biggest issue is how to introduce value types without breaking the ABI of everything that's already on Maven Central.

              Similar to the whole async/await engineering effort in .NET Framework, on how to introduce it, without adding new MSIL bytecodes, or requiring new CLR capabilities.

            • cogman10 2 days ago ago

              I'm not sure about the semantics of structs in C#.

              What Java is getting in the future is immutable data values, where the value itself, rather than a reference to it, is the representation.

              When you have something like

                 class Foo {
                   int a;
                   int b;
                 }
              
                 var c = new Foo();
              
              in java, effectively the representation of `c` is a reference which ultimately points to the heap storage locations of `a, b`. In C++ terms, you could think of the interactions as being `c->b`.

              When values land, the representation of `c` can instead be (the JVM gets to decide, it could keep the old definition for various performance reasons) something like [type, a, b]. Or in C++ terms the memory layout can be analogous to the following:

                  struct Foo { int a; int b; };
              
                  struct Foo c;
                  c.a = 1;
                  c.b = 2;
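              In the meantime, records are today's closest approximation: they copy the value-style API (state-based equality, immutability) but remain ordinary reference types, which is easy to verify. A standalone sketch, reusing the `Foo` name from above:

```java
public class Main {
    // A record: state-based equals/hashCode, but still a heap-allocated class today
    record Foo(int a, int b) {}

    public static void main(String[] args) {
        Foo x = new Foo(1, 2);
        Foo y = new Foo(1, 2);
        System.out.println(x.equals(y)); // true: compares component state
        System.out.println(x == y);      // false: two distinct heap objects
    }
}
```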
      • pjmlp 2 days ago ago

        However you can make use of Panama to work around that, even if it isn't the best experience in the world.

        Create C like structs, in regards to memory layout segments, and access them via the Panama APIs.

      • dehrmann 2 days ago ago

        I would have guessed it's boxed primitives.
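        Each boxed primitive outside the JVM's small-value cache (-128..127) is a separate heap object, which identity comparison makes visible. A minimal sketch:

```java
public class Main {
    public static void main(String[] args) {
        Integer a = 127, b = 127; // inside the Integer cache: same object
        Integer c = 128, d = 128; // outside the cache: freshly boxed each time
        System.out.println(a == b);      // true
        System.out.println(c == d);      // false
        System.out.println(c.equals(d)); // true: value comparison still works
    }
}
```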

        • throwaway2037 2 days ago ago

          Is that still a thing in 2025? There are so many third party libraries that offer primitive collections. Example: https://github.com/carrotsearch/hppc

          • dehrmann a day ago ago

            If you're not specifically concerned about memory use, why would you use a third-party library?

    • gt0 2 days ago ago

      Agree, this is almost always where the benefits come from. You get to write v2 of the software with v1 to learn from.

    • rs186 2 days ago ago

      Yes! The post would have been much more informative if it did an in-depth analysis of where the performance gain comes from. But Apple being Apple, I don't think they'll ever want to expose details on their internal systems, and we probably can only get such hand wavy statements.

      • MBCook 2 days ago ago

        I suspect that didn’t fit into the goal of the blog post.

        I don’t think it’s meant to be a postmortem on figuring out what was going on and a solution, but more a mini white paper to point out Swift can be used on the server and has some nice benefits there.

        So the exact problems with the Java implementation don’t matter past “it’s heavy and slow to start up, even though it does a good job”.

    • tialaramex 2 days ago ago

      Sure, maybe you can get money to have some businesses try out rewriting their line-of-business software in the same language versus in a different language and get some results.

      My expectation is that if you put the work in you can get actual hard numbers, which will promptly be ignored by every future person asking the same "question" with the same implied answer.

      If the "just rewrite it and it'll be better" people were as right as they often seem to believe they are, a big mystery is JWZ's "Cascade of Attention-Deficit Teenagers" phenomenon. In this scenario the same software is rewritten, over, and over, and over, yet it doesn't get faster and doesn't even fix many serious bugs.

      • tilne 2 days ago ago

        For others who hadn't heard of CADT either: https://www.jwz.org/doc/cadt.html

        I confess to having been part of the cascade at various parts of my career.

      • staplers 2 days ago ago

          If the "just rewrite it and it'll be better" people were as right as they often seem to believe
        
        Generally speaking, technological progress over thousands of years serves to validate this. Sure, in the short term we might see some slippage depending on talent/expertise, but with education and updated application of learnings, it's generally true.
    • BonoboIO 2 days ago ago

      Imagine what rust or go could have achieved

      • dontlaugh 2 days ago ago

        Go is similar to Swift when it comes to mandatory costly abstractions.

        It’s only Rust (or C++, at the cost of safety) that has mostly zero-cost abstractions.

        • airspeedswift 2 days ago ago

          Swift, Rust, and C++ all share the same underlying techniques for implementing zero-cost abstractions (primarily, fully-specialized generics). The distinction in Swift's case is that generics can also be executed without specialization (which is what allows generic methods to be called over a stable ABI boundary).

          Swift and Rust also allow their protocols to be erased and dispatched dynamically (dyn in Rust, any in Swift). But in both languages that's more of a "when you need it" thing, generics are the preferred tool.
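          Java sits at the other end of this design space: generics are erased at compile time, so there is no specialized form for the runtime to execute at all. A quick way to see the erasure:

```java
import java.util.ArrayList;
import java.util.List;

public class Main {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();
        // Both instantiations erase to one runtime class, so there is no
        // per-element-type ArrayList for the JIT to fully specialize.
        System.out.println(strings.getClass() == ints.getClass()); // true
    }
}
```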

          • dontlaugh 2 days ago ago

            To an approximation, but the stdlib and libraries will have a bias. In practice, abstractions in Rust and C++ are more often actually zero-cost than in Go or Swift.

            This is not a bad thing, I was just pointing out that Go doesn't have a performance advantage over Swift.

        • frizlab 2 days ago ago

          Swift has them too now (non-copyable types).

        • rafram 2 days ago ago

          > It’s only Rust (or C++, but unsafe) that have mostly zero-cost abstractions.

          This just isn't true. It's good marketing hype for Rust, but any language with an optimizing compiler (JIT or AOT) has plenty of "zero-cost abstractions."

    • misiek08 2 days ago ago

      I always love to see such comments. On the JVM you use crap like Spring and over-engineer everything: 20 types, interfaces, and objects to keep a single string in memory.

      The JVM also likes memory, but it can be tailored to look okayish; still worse than its competitors.

      • paxys 2 days ago ago

        And I'm 100% sure you can do the same in Swift.

        • eikenberry 2 days ago ago

          It's not technical, it's cultural. Different community conventions.

        • jbverschoor 2 days ago ago

          Sure, and you can also write beautiful code in PHP, or shit code in Java.

          It’s the history, the standard libs, and all the legacy tutorials that never get erased from the net.

        • jen20 2 days ago ago

          Swift's limitations around reflection actually make it surprisingly difficult to create a typical Java-style mess with IOC containers and so forth.

          • pjmlp 2 days ago ago

            Ever heard of Swift macros?

            Do you know where Java EE comes from?

            It started as an Objective-C framework, a language which Swift has full interoperability with.

            https://en.wikipedia.org/wiki/Distributed_Objects_Everywhere

            • jen20 a day ago ago

              > Ever heard of Swift macros?

              Yes, having lived on the daily build bleeding edge of Swift for several years, including while macros were being developed, I have indeed heard of them.

              > Do you know where Java EE comes from?

              Fully aware of the history.

              The point stands: it is substantially harder with Swift to make the kind of Spring-style mess that JVM apps typically become (of course there are exceptions: I typically suggest people write Java like Martin Thompson instead of Martin Fowler). Furthermore, _people just don’t do it_. I imagine you could count the number of Swift server apps using an IoC container on no hands.

              • pjmlp a day ago ago

                First the amount of Swift server apps has to grow to a number that is actually relevant for enterprise architects to pay attention.

                Then I can start counting.

                • jen20 a day ago ago

                  That is the reason I used percentages, rather than absolute. For Java, every single app I’ve ever seen has been a massive shit show. For Swift, 0 of the 40-50 I’ve seen are.

                  • pjmlp 16 hours ago ago

                    For Swift to be a shit show on Fortune 500 Linux and Windows servers, someone has to start shipping it in volume, regardless of percentages.

                    Any language can be a shit show when enough people beyond the adoption curve write code in it, from the leetcoders with MIT degrees, to the six-week bootcamp learners shipping single functions as a package, to architects designing future-proof architectures on whiteboards with SAFe.

                    When Swift finally crosses into this world, then we can compare how much of it has survived world-scale adoption exposure, beyond the cozy Apple ecosystem.

      • bdangubic 2 days ago ago

        same - in every single language with incompetent team

  • quux 2 days ago ago

    I'm hoping to hear some good news at WWDC for swift development in editors other than Xcode (VSCode, Neovim, etc.) Last year they said "we need to meet backend developers where they are" and announced plans to improve sourcekit-lsp and other efforts.

    • st3fan 2 days ago ago

      This project is getting more mature https://github.com/swiftlang/vscode-swift

      And https://github.com/swiftlang/sourcekit-lsp can be used in any LSP-compatible editor like Neovim.

    • candiddevmike 2 days ago ago

      IMO, Apple has quite the track record of never "meeting X where they are". They could make Xcode cross platform, but they never will.

      • klausa 2 days ago ago

        Xcode is probably like, one of the top… 3? 5? biggest macOS-native applications in the world.

        Making it cross-platform would require either reimplementing it from scratch, or doing a Safari-on-Windows level of shenanigans of reimplementing AppKit on other platforms.

        • rafram 2 days ago ago

          > Safari-on-Windows level of shenanigans of reimplementing AppKit on other platforms

          I was curious about this, so I downloaded it to take a look. It doesn't look like they actually shipped AppKit, at least as a separate DLL, but they did ship DLLs for Foundation, Core Graphics, and a few other core macOS frameworks.

          • quux 2 days ago ago

            Yep, they did. I think those DLLs have their origin in NeXT’s OpenStep for Windows product, which let you develop Windows apps using the NeXT APIs.

        • AdamN 2 days ago ago

          Xcode on Windows/Linux would be wacky and not worth the effort - it's very tightly coupled to MacOS so effectively impossible imho. People targeting MacOS/iOS aren't typically running Windows on the desktop. The more critical thing to meeting developers where they are would be for the entire developer loop to be doable from a JetBrains IDE on MacOS.

          • cosmic_cheese 2 days ago ago

            > People targeting MacOS/iOS aren't typically running Windows on the desktop.

            Or if they are, they're treating macOS/iOS as “blind targets” where those platforms are rarely if ever QA’d or dogfooded.

            > The more critical thing to meeting developers where they are would be for the entire developer loop to be doable from a JetBrains IDE on MacOS.

            I think most current Apple platform devs would be happiest if they were equipped with the tools to build their own IDEs/toolchains/etc, so e.g. Panic's Nova could feasibly have an iOS/macOS/etc dev plugin or someone could turn Sublime Text into a minimalistic Apple platform IDE. JetBrains IDEs certainly have a wide following but among the longtime Mac user devs in particular they’re not seen as quite the panacea they’re presented as in the larger dev community.

      • st3fan 2 days ago ago

        There would be no point in doing that. All the devs they care about already have Macs. If you are on Windows or Linux and you want to work on Swift server side code then you can use their official LSP or VSCode extension in your favorite editor.

      • paxys 2 days ago ago

        Who even wants Xcode to be cross platform? People can barely tolerate it on macs.

        What you really mean is people want the iOS development toolchain to be cross platform, and that would mean porting iOS to run in a hypervisor on linux/windows (to get the simulator to work). That is way too big a lift to make sense.

        • atonse 2 days ago ago

          Exactly.

          I’ve never wanted Xcode in more places. When I used to be a native mobile dev, I wanted to not have to use Xcode.

          And it’s technically possible. But totally not smooth as of a few years ago.

    • kridsdale1 2 days ago ago

      I’ve been writing iOS apps primarily for 15 years. I haven’t had to use Xcode since about 2016 since Facebook and Google have fully capable VSCode based editors with distributed builds in the Linux clouds. It’s pretty great, but I don’t know of an open source version of this system. That is, integration with Bazel/Buck that can build iOS on non-Mac hardware.

    • DavidPiper 2 days ago ago

      Swift SourceKit LSP + VSCode is actually pretty good these days. I recently got it working with CMake for a Swift / CMake project, and the only real pain in that was getting CMake set up properly. Everything else worked out of the box with VSCode's extension manager.

    • lawgimenez 2 days ago ago

      My main wish for Apple this upcoming WWDC is to make Xcode not make my M2 super hot.

    • elpakal 2 days ago ago

      I've wanted this for years. FWIW I've used CLion's Swift plugin for my pure SPM projects and it's actually decent.

    • sureglymop 2 days ago ago

      I hope so too! We even have an official Kotlin LSP now... maybe that can serve as inspiration.

  • maximilianroos 2 days ago ago

    > One of the challenges faced by our Java service was its inability to quickly provision and decommission instances due to the overhead of the JVM. ... To efficiently manage this, we aim to scale down when demand is low and scale up as demand peaks in different regions.

    but this seems to be a totally asynchronous service with extremely liberal latency requirements:

    > On a regular interval, Password Monitoring checks a user’s passwords against a continuously updated and curated list of passwords that are known to have been exposed in a leak.

    why not just run the checks at the backend's discretion?

    • potatolicious 2 days ago ago

      > "why not just run the checks at the backend's discretion?"

      Because the other side may not be listening when the compute is done, and you don't want to cache the result of the computation because of privacy.

      The sequence of events is:

      1. Phone fires off a request to the backend.

      2. Phone waits for a response from the backend.

      The gap between 1 and 2 cannot be long because the phone is burning battery the entire time while it's waiting, so there are limits to how long you can reasonably expect the device to wait before it hangs up.

      In a less privacy-sensitive architecture you could:

      1. Phone fires off a request to the backend, and gets a token for response lookup later.

      2. Phone checks for a response later with the token.

      But that requires the backend to hold onto the response, which for privacy-sensitive applications you don't want!
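      A minimal sketch of that second, token-based flow (all names here are illustrative, not Apple's actual API), showing exactly what the server is forced to retain:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class Main {
    // The privacy problem in one line: results must sit here until polled
    static final Map<String, String> pending = new ConcurrentHashMap<>();

    static String submit(String hashedPassword) {
        String token = UUID.randomUUID().toString();
        pending.put(token, compute(hashedPassword)); // compute() stands in for the real leak check
        return token;
    }

    static String poll(String token) {
        return pending.remove(token); // result lingers until the client comes back
    }

    static String compute(String input) {
        return input.contains("pwned") ? "leaked" : "ok";
    }

    public static void main(String[] args) {
        String token = submit("pwned-hash");
        System.out.println(poll(token)); // leaked
        System.out.println(poll(token)); // null: gone after first read
    }
}
```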

      • paxys 2 days ago ago

        Especially since the request contains the user's (hashed) passwords. You definitely don't want to be holding that on the server for longer than necessary.

      • ivan_gammel 2 days ago ago

        Is it really a problem? Client can pass an encryption key with the request and then collect encrypted result later. As long as computation is done and result is encrypted, server can forget the key, so cache is no longer a privacy concern.

        • potatolicious 2 days ago ago

          You can, and in situations where the computation is unavoidably long that's what you'd do. But if you can do a bit of work to guarantee the computation is fast then it removes a potential failure mode from the system - a particularly nasty one at that.

          If you forget to dump the key (or if the deletion is not clean) then you've got an absolute whopper of a privacy breach.

          Also worth noting that you can't dump the key until the computation is complete, so you'd need to persist the key in some way which opens up another failure surface. Again, if it can't be avoided that's one thing, but if it can you'd rather not have the key persist at all.

          • ivan_gammel a day ago ago

              UPDATE checks SET result = ?, key = NULL

            Is it that hard?

            Also I don’t think persisting a key generated per task is a big privacy issue.
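            A sketch of the proposed scheme (illustrative only; ECB mode is used purely to keep the example short, where real code would use an authenticated mode like AES-GCM):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class Main {
    public static void main(String[] args) throws Exception {
        // Client generates a key and sends it along with the request
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(128);
        SecretKey clientKey = gen.generateKey();

        // Server: encrypt the finished result, then drop its copy of the key
        // (the "UPDATE checks SET result=?, key=null" step)
        Cipher enc = Cipher.getInstance("AES/ECB/PKCS5Padding");
        enc.init(Cipher.ENCRYPT_MODE, clientKey);
        byte[] cachedResult = enc.doFinal("result: ok".getBytes());
        SecretKey serverKeyCopy = null; // cache now holds only ciphertext

        // Client collects later and decrypts with the key it kept
        Cipher dec = Cipher.getInstance("AES/ECB/PKCS5Padding");
        dec.init(Cipher.DECRYPT_MODE, clientKey);
        System.out.println(new String(dec.doFinal(cachedResult)));
    }
}
```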

      • maximilianroos 2 days ago ago

        thanks!

    • lilyball 2 days ago ago

      > why not just run the checks at the backend's discretion?

      Presumably it's a combination of needing to do it while the computer is awake and online, and also the Passwords app probably refreshes the data on launch if it hasn't updated recently.

    • undefined 2 days ago ago
      [deleted]
  • mtrovo 2 days ago ago

    Without deeper profiling of the Java application it's hard not to read this whole piece as advertorial content. Where exactly were the bottlenecks in Java, and where did the biggest gains come from in Swift? The service scope looks so simple that there might have been some underlying problem with the previous version, be it the code not scaling well with the irregular, batchy nature of the traffic, or custom cryptography code not taking advantage of the latest native IO constructs.

    And I'm not defending Java by any means, more often than not Java is like an 80s Volvo: incredibly reliable, but you'll spend more time figuring out its strange noises than actually driving it at full speed.

    • CharlesW 2 days ago ago

      > Without a deeper profiling analysis of the Java application it's hard to not consider this whole piece just advertorial content.

      I'd be surprised if anything Apple wrote would satisfy you. TFA makes it clear that they first optimized the Java version as much as they could under Java's GC, that they evaluated several languages (not just Swift) once it became clear that a rewrite was necessary, that they "benchmarked performance throughout the process of development and deployment", and that they shared before/after benchmarks.

      • cogman10 2 days ago ago

        Ok, I'm pretty skeptical that they actually did optimize what they could. In fact, reading between the lines it sounds like they barely tried at all.

        For example, they mention G1GC as being better than what was originally there but not good enough. Yet the problems they mention, prolonged GC pauses, indicates that G1GC was not the right collector for them. Instead, they should have been using ZGC.
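        For what it's worth, checking which collector a JVM instance actually ended up with takes only a few lines (the bean names vary by collector, e.g. G1 vs. ZGC):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class Main {
    public static void main(String[] args) {
        // Prints one name per active collector, e.g. "G1 Young Generation"
        // under the default G1GC, or ZGC-specific beans under -XX:+UseZGC
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName());
        }
    }
}
```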

        The call-out of G1GC as being "new" is also pretty odd, given that it shipped back in Java 7 and became the default collector in Java 9, released in 2017. Meaning, they are talking about a roughly decade-old collector as if it were brand new. Did they JUST update to Java 11?

        And if they are complaining about startup time, then why no mention of AppCDS usage? Or more extreme, CRaC? What about doing an AOT compile with Graal?

        The only mention they have is the garbage collector, which is simply just one aspect of the JVM.

        And, not for nothing, the JVM has made pretty humongous strides in startup time and GC performance across versions. There are pretty large performance wins going from 11->17 and 17->21.

        I'm sorry, but this really reads as Apple marketing looking for a reason to tout swift as being super great.

        • CharlesW 2 days ago ago

          As someone who hasn't used Java professionally, this was helpful informed skepticism. Thanks!

          > Did they JUST update to java 11?

          As an LTS release with "extended" support until 2032, that certainly seems possible.

          Another near-certain factor in this decision was that Apple has an extreme, trauma-born abhorrence of critical external dependencies. With "premier" support for 11 LTS having ended last fall, it makes me wonder if a primary lever for this choice was the question of whether it was better to (1) spend time evaluating whether one of Oracle's subsequent Java LTS releases would solve their performance issues, or instead (2) use that time to dogfood server-side Swift (a homegrown, "more open" language that they understand better than anyone in the world) by applying it to a real-world problem at Apple scale, with the goal of eventually replacing all of their Java-based back-ends.

          • vips7L 2 days ago ago

            Just because it’s supported (receives security updates) doesn’t mean that it isn’t still an ancient and antiquated runtime. Java 11 was released almost 7 years ago. That is thousands of commits and performance improvements behind the latest JDK 24.

            • dwaite 2 days ago ago

              JDK 24 is not a LTS release, with 1st party support ending in September. It is also 6 1/2 years newer than Java 11.

              I would imagine many companies would not use anything newer than JDK 21, the latest LTS release.

              • mtrovo 2 days ago ago

                I don't get your point.

                JDK 11 LTS is from 2018 and after that Oracle shipped two more LTS releases: JDK 17 in 2021 and JDK 21 in 2023. On top of that, Oracle is committed to releasing an LTS every 2 years, with the next one planned for later this year.

                Using an LTS doesn't mean you have to create applications on the oldest available release, it means that if you target the latest LTS release your application is going to have a predictable and supported runtime for a very long time.

                If they had to start a Java 11 project in 2024 that just points to a deeper organizational problem bigger than just GC.

      • wiseowise 2 days ago ago

        > TFA makes it clear that they first optimized the Java version as much as it could be under Java's GC

        No, it doesn’t.

  • mring33621 2 days ago ago

    I understand there may be some bias in this article, but the resource usage improvements are hard to ignore for a company that pays for cloud compute/memory usage.

    I'm gonna look into server-side Swift.

    Looks like it'll take some fiddling to find the right non-xcode tools approach for developing on linux.

    I prefer Jetbrains tools over VSCode, if anyone has any hints in that direction.

    • dhosek 2 days ago ago

      I don’t know that there’s anything magical about Swift in particular; the gains likely come from running without the JVM's big runtime and from the memory-management model. Go will probably give improvements (and likely be an easier mental adjustment from a JVM language), and Rust, once you mentally adapt to its model, will give even bigger improvements. Swift has the advantage of being object-oriented (if that’s an advantage in your mind), but I was thinking 10 years ago, as microservices took off, that Java/Spring apps were not going to work so great with the quick start-up/shut-down the cloud/microservice model wants.

      • manmal 2 days ago ago

        Will Rust really be better than Swift in such a context? Doesn’t that more or less depend on what kind of memory management is being used in Rust?

        • dhosek 2 days ago ago

          The memory management strategies in Rust and Swift are similar. The big performance hit in Swift would come in method dispatch where it follows Objective-C strategies, which results in indirect calls for object methods. This is possible in Rust (using `Box<dyn Trait>`), but discouraged in critical paths in favor of either static dispatch by means of enums or reified calls through Rust generics. Swift has similar capabilities, of course, but with both languages, the best performance requires consciously making choices in development.

          • airspeedswift 2 days ago ago

            > The big performance hit in Swift would come in method dispatch where it’s following Objective C strategies which would result in indirect calls for object methods.

            While Swift can use Objective-C's message sending to communicate with Objective-C libraries, that isn't its primary dispatch mechanism. In the case of the service described in the article, it isn't even available (since it runs on Linux, which does not have an Objective-C runtime implementation).

            Instead, like Rust, Swift's primary dispatch is static by default (either directly on types, or via reified generics), with dynamic dispatch possible via any (which is similar to Rust's dyn). Swift also has vtable-based dispatch when using subclassing, but again this is opt-in like any/dyn is.

          • manmal 2 days ago ago

            Adding to what the sibling comment from airspeedswift said, there is not ever a need to use dynamic dispatch in Swift outside of iOS apps. Idiomatic apps will have many internal packages/targets that don't even import any Objective-C based libraries and can be compiled and run with just the swift toolchain. You can even inline all your frequent function calls. Idiomatic Swift also uses classes sparingly, and most are ideally declared final.

    • pharaohgeek 2 days ago ago

      Sadly, Jetbrains no longer sells AppCode or supports their Swift plugins for CLion. I WISH they would open source the plugins, as CLion is far superior to Xcode. For now, though, we're really stuck with either Xcode (Mac) or VSCode (wherever). That said, I am really starting to love server-side Swift using Vapor. Swift, as a language, is great to develop in.

    • eviks 2 days ago ago

      If resource usage improvement is entirely due to bias, it should be easy to ignore?

  • StackRiff 2 days ago ago

    I wonder what Apple is using for production monitoring, observability, and performance profiling of Swift applications on Linux. In my experience this is one of the key missing pieces in the Swift server ecosystem.

    You can link against jemalloc, and use Google perftools to get heap and CPU profiles, but it's challenging to make good use of them, especially with Swift's name mangling and aggressive inlining.

  • maz1b 2 days ago ago

    I've always wondered why larger enterprises (which have the resources to hire specialists) don't hire people with expertise in things like Rust or Elixir/Phoenix.

    It's one thing to say that we want to hire commonly available developers like in Java or C#, but if you have a long term plan and execution strategy, why not pick technology that may pay off larger dividends?

    ITT: I get why they chose Swift, it's Apple's own in house technology. Makes total sense, not knocking that at all. Nice writeup.

    • neepi 2 days ago ago

      It’s because Java and c# are so commoditised we don’t have to pay people as much.

      Also it’s about the short-term balance sheet, not long-term product management, in the SME and small-SaaS space. I don’t think anyone other than the developers gives a shit.

    • geodel 2 days ago ago

      Many reasons:

      There are not enough Rust experts in the world for a typical enterprise to hire and benefit from it.

      Elixir/Phoenix are not order-of-magnitude improvements over Java the way Rust is; they are marginal improvements. Enterprises don't care for that.

    • dwaite 2 days ago ago

      it sounds like you are asking why enterprises aren't choosing to write Rust code vs Java code. I wouldn't turn away a developer just because they had experience with Elixir.

      It really comes down to whether there's someone making the case that a particular application/subsystem has specialized needs that would warrant hiring experts, and whether they can make the case successfully that the system should use technologies that would require additional training and impose additional project risk.

      Until you are dealing not with enterprise applications but actual services, it can be difficult to even maintain development teams for maintenance - if your one ruby dev leaves, there may be nobody keeping the app running.

      Even when you are producing services - if they are monolithic, you'll also be strongly encouraged to stick with a single technology stack.

      • maz1b 18 hours ago ago

        No, I was making the point that if the goal is lower resource usage, more throughput, concurrency, speed/perf, etc., a larger company can get that "unfair advantage" from things like Rust or Elixir because it has the resources, as compared to SMBs or startups.

        Of course, every company and org has to see whats best and feasible for them. Valid points you brought up no doubt.

  • zapnuk 2 days ago ago

    Very interesting. I wish they had gone into a little more detail about the other technologies involved.

    Was the Java service built with Spring (Boot)?

    What other technologies were considered?

    I'd assume Go was among them. Was it just that Go's type system is too simplistic, or were there other factors?

    • geodel 2 days ago ago

      Swift is Apple's own language. They have all the experts, from the lowest level to the highest.

      Writing a long-winded report/article for a fair technical evaluation of competing technologies would be an utter waste of time, and no one would believe it if the answer were still Swift.

      > I'd assume Go was among them. ...

      I don't see any reason to evaluate Go at all.

      • latchkey 2 days ago ago

        > I don't see any reason to evaluate Go at all.

        https://devblogs.microsoft.com/typescript/typescript-native-...

        • MBCook 2 days ago ago

          But that ignores the fact Apple has a MASSIVE investment in Swift.

          I think they already use Go in places, but they’ve clearly stated their intention to use Swift as much as possible where it’s reasonable.

          I suspect they didn’t evaluate C++, Rust, Go, Erlang, Node, and 12 other things.

          They have the experience from other Swift services to know it will perform well. Their people already know and use it.

          If Swift (and the libraries used) weren’t good enough they’d get their people to improve it and then wait to switch off Java.

          If you go to a Java shop and say you want to use C# for something Java can do, they’ll probably say to use Java.

          I don’t read this post as “Swift is the best thing out there” but simply “hey Swift works great here too where you might not expect, it’s an option you might not have known about”.

          • latchkey 2 days ago ago

            Microsoft has a massive investment in C#, but they still evaluated (and picked) golang.

            • MBCook 2 days ago ago

              For TypeScript’s compiler, yes. I can see some real benefits, like Go is already common for some open source software they want to collaborate with non-MS people on. I suspect C# is much less common for that, and when targeting pure performance I suspect a bytecode language like C# wouldn’t have the same large gain.

              I’m not in the .NET ecosystem so I don’t know if native AOT compilation to machine code is an option.

              But anyway, in this case Apple is making an internal service for themselves. I think a better comparison for MS would be if they chose to rewrite some Windows service’s server back end. Would they choose Go for that?

              I don’t know.

              • pjmlp 2 days ago ago

                Native AOT is quite good enough nowadays, I think there were other politics at play.

                Azure team has no issues using AI to convert from C++ to Rust, see RustNation UK 2025 talks.

                Also, they mention the reasoning being a port, not a rewrite, yet they still had to rewrite the whole AST data structures anyway, due to the weaker type system in Go.

                Finally, the WebAssembly tooling to support Blazor is much more mature than what Go has.

              • latchkey 2 days ago ago

                The real question isn’t whether they would choose it, but whether they’d be willing to evaluate it. Given their past behavior, as I mentioned above, it seems they are open to assessing options and selecting the best tool for the job.

                • MBCook 2 days ago ago

                  They are certainly far more open these days. I remember when it was Microsoft tools and Microsoft languages on Microsoft servers or nothing.

                  They’d have never touched Go with a 10-foot pole.

              • wiseowise 2 days ago ago

                You just stated a comment ago that Apple chose Swift because of its massive investment. Microsoft has a much, MUCH bigger investment in C#, yet you find all the reasons why your original argument is invalid.

                The article is just marketing for a team looking for a promo; there’s no deep meaning or larger Apple scheme here.

        • pjmlp 2 days ago ago

          I don't buy the reasoning.

          First of all, it is a missed opportunity for Microsoft to have another vector for people to learn C#.

          Secondly, at the BUILD session, Anders ended up explaining that they needed to rewrite the AST data structures anyway, given that Go's type system is much inferior to TypeScript's.

          And Go's story on WebAssembly is quite poor compared with the Blazor toolchain; they are hoping Google will make the necessary improvements required for the TypeScript playground and for running VSCode in the browser.

          Finally, some of the key developers involved in this effort were laid off during the latest round.

        • dontlaugh 2 days ago ago

          That still seems like a long term mistake to me, an evolutionary dead end.

          • latchkey 2 days ago ago

            Does it matter? Today, they get a 10x improvement by switching. Mission accomplished.

            X years from now, another language will come along and then they can switch to that for whatever benefit it has. It is just the nature of these things in technology.

            • dontlaugh 2 days ago ago

              It can become a limitation much sooner than that. They are in a situation where they won’t be able to improve much further, due to unavoidable costly abstractions in Go. If they’d picked something lower-level, more would have been possible after this first switch.

              • throwaway2037 2 days ago ago

                FYI: I have no dog in this fight.

                    > unavoidable costly abstractions in Go
                
                Can you share some?

                • dontlaugh 2 days ago ago

                  Implicit struct copies, garbage collection, interface dispatch, etc.

              • latchkey 2 days ago ago

                My mistake, I shouldn't have put a number there since that is what you focused on.

                Rewriting it in assembly is the way to go, but that has other tradeoffs.

                • dontlaugh 2 days ago ago

                  I’m not focused on a number, I’m just pointing out Go’s optimisation potential is lower than other options.

                  Of course it’s a trade off and their reasons are fine, but rewrites are expensive and disruptive. I would have picked something that can avoid a second rewrite later on.

                  • latchkey 2 days ago ago

                    [flagged]

                    • dontlaugh 2 days ago ago

                      I'm well aware of how they're doing it. It makes sense to start with a mostly-automated rewrite, what they confusingly call a "port".

                      But after that step, the end result will be maintained and changed manually. It makes perfect sense to make improvements (including to performance) this way. All I'm questioning is the choice of target, since it excludes some possible future improvements. If you're rewriting (semi-automated or not), it's an opportunity to future-proof as well.

                      I don't understand why you're being so confrontational about mere technical disagreement.

                      • latchkey 2 days ago ago

                        I'm not disagreeing with you and it is weird that my comment above got flagged given that I'm not attacking you. ¯\_(ツ)_/¯

                        • dontlaugh 2 days ago ago

                          I don't know, I didn't flag it.

                          I'm disagreeing with MS's choice and you seemed to disagree with that, even claiming I hadn't read their rewrite plan and reasoning.

                          Doesn't really matter.

          • hardwaresofton 2 days ago ago

            It absolutely was. All it takes is a quick look around other language ecosystems and JS itself.

            Rust is an excellent language for embedding in other languages and underpinning developer tools.

            That said, someday the new typescript binary will compile to WebAssembly, and it won’t matter much anyway.

            • dontlaugh 2 days ago ago

              There is that opposite approach, yes. Add low-level control to TypeScript and get it to compile to WebAssembly, then the compiler itself can be fast as well.

              I suspect they wanted the compiler speed more than they wanted a WASM target, though.

        • geodel 2 days ago ago

          Yeah, while they are at it they can also learn on how to write OS for ARM architecture from Microsoft.

    • anuragsoni 2 days ago ago

      Apple maintains servicetalk[1] (java networking framework built on top of netty), so I'm guessing this is one potential JVM framework that was being used.

      [1] https://github.com/apple/servicetalk

  • miffy900 2 days ago ago

    I'm going to call it now: Swift on the backend is pointless for everyone but Apple, and trying to make it take off as a backend service language is as pointless as porting Xcode cross-platform and trying to lure non-Apple devs into using it over, say, VSCode, any JetBrains IDE, or Visual Studio.

    The choices that already serve the market are just too numerous and the existing tooling is already far greater in features, functionality and capability than anything that Apple can provide. What's funny is that they actually have the money to dedicate resources to this space to compete with C#, Java, go or Rust, but they're not going to because it's just too far afield of their core business. Any backend service written in Swift is not going to be running on a Mac in the cloud, and probably won't be serving just iPhone/iPad clients exclusively, so why bother when we know Apple leadership will treat it as an afterthought.

    If it does take off, I'm betting it will be because the open source community provides a solution, not Apple, and even then it will be in a tiny niche. Indeed, this entire project is enabled by Vapor, an open source Swift project that I'm guessing the team only chose because Vapor as a project finally reached the requisite level of maturity for what they wanted. It's not like Apple went out on their own and built their own web framework in Swift, like Microsoft does with C# and ASP.NET. All of this makes me feel even more skeptical about Swift on the backend. Apple won't do anything specifically to give Swift a leg up in the backend space, beyond the basics, like porting Swift to Linux, but will avail themselves of open source stuff that other people built.

    • jeremymcanally 2 days ago ago

      Apple certainly do have web frameworks they’ve written in Swift.

      No, I don’t know why they aren’t publicly available (at least yet). But I do know they power a number of public facing services.

    • eviks 2 days ago ago

      > they're not going to because it's just too far afield of their core business

      Apple car was farther?

      • hu3 a day ago ago

        and it's dead

    • georgeecollins 2 days ago ago

      I am really curious what Hacker News' take on Swift is. It feels like I heard more about it five years ago. People seem to like Go, Rust is cool (I learned that one), and Python, C#, Java seem here to stay.

      I mean I am asking as general purpose language, not just back end.

      • wiseowise 2 days ago ago

        There was really big excitement about Swift everywhere, but Apple decided not to capitalize on it and hype died off. Swift is pretty much Mac/iOS language and there are no indications that it is going to change anytime soon.

      • dwaite 2 days ago ago

        Rust and Swift share a lot of core concepts.

        Swift has a few extra bits for Objective-C compatibility (when running on Apple platforms), but otherwise the biggest differences come to design and ergonomics of the language itself.

        If I need complete control over memory for say a web server, or am writing code to run on an embedded device (e.g. without Linux), I'll use Rust - even if Swift technically has the features to meet my needs.

        That's reversed if I am creating mobile or desktop apps. For web services like in this article I'd probably prefer neither, but given a limited choice I would still probably pick Swift.

      • MBCook 2 days ago ago

        I really like it. It feels clean to me and easy to read, but has great async and other modern features. A bit like a compiled TypeScript without all the baggage that comes from JS’s existence.

        It has its issues in places, mostly the compiler, but I love getting to develop in it.

        I suspect you hear less about it because it’s no longer new and the open source community doesn’t seem to care about it that much, unlike Rust.

        It still seems to be viewed as “the iOS language” even though it can do more than that and is available on other platforms.

      • pjmlp 2 days ago ago

        Fine language for anyone on Apple's ecosystem.

        If one would rather be OS agnostic, then not so much.

      • hu3 a day ago ago

        As for HN, I see comments about swift becoming complex and bloated with many ways to do the same thing. Like C++ish.

  • misiek08 2 days ago ago

    I would love to see such a post from Microsoft about C#: "We looked into a few alternatives and chose our own language; it fit the task, worked great, and came out best overall in terms of technology, devex and maintainability."

    Great read, thanks for sharing! To me this signals maturity - sharing stuff instead of making obscure secrets out of basics that exist at many companies <3

  • artdigital 2 days ago ago

    Hearing a company like Apple is using swift on the server side changed my view on server Swift

    I’ll definitely take a look again, great to see it becoming mature enough to be another viable option

    • k_bx 2 days ago ago

      The big questions are: how's package management, community, ease of forking, patching and other dependency management related stuff. Server-side development needs those questions to have good answers.

  • netbioserror 2 days ago ago

    A win on the board for ARC? Can anyone with more deep GC knowledge explain why this might be, considering the tweaks they made to G1? Is this a specific workload advantage, or does it apply generally?

    • username223 2 days ago ago

      It has been a long time, but I worked on GC at one point. My guess is that it’s the memory footprint (and loss of cache locality) that’s to blame. Copying or mark/sweep can be fast, but only if you give them lots of extra memory to accumulate garbage between GC passes. ARC does a little extra CPU work all the time, but it has good locality, and doesn’t need the extra memory.

    • adamwk 2 days ago ago

      The way to write performant Swift is classless - so without ARC - but classes have been necessary in many areas. Recently, Swift has been adding a lot of ownership features similar to Rust's to make it possible to do more without classes, and I'd guess Apple teams would be using those.

      • sunnybeetroot 2 days ago ago

        What areas are classes necessary assuming you’re talking about pure Swift and not iOS?

        • adamwk 2 days ago ago

          Where copying is expensive (most Swift collections are backed by classes for COW semantics). Where location stability is necessary (where you need a mutex). Resource allocation. Those are the primary examples that noncopyable types are meant to address.

          • sunnybeetroot 2 days ago ago

            Ah yes great examples, thank you for explaining.

  • cogman10 2 days ago ago

    I'm honestly very skeptical that Apple actually did everything they could to make Java startup fast and the GC perform well under load.

    G1GC is a fine collector, but if pause time is really important they should have used ZGC.

    And if startup is a problem, recent versions of Java have introduced AppCDS which has gotten quite good throughout the releases.

    And if that wasn't good enough, Graal has for a long time offered AOT compilation which gives you both fast startup and lower memory utilization.

    None of these things are particularly hard to add into a build pipeline or deployment, they simply require Apple to use the latest version of Java.

  • latchkey 2 days ago ago

    "A faster bootstrap time is a crucial requirement to support this dynamic scaling strategy."

    Google AppEngine has been doing this successfully for a decade. It is not easy, but it is possible.

  • YooLi 2 days ago ago

    Any idea of what the server platform is? Linux?

    • timsneath 2 days ago ago

      It runs on Linux-based infrastructure. We've updated the blog post to clarify this. Thanks!

    • dale_huevo 2 days ago ago

      It's well-known Apple is a big RHEL customer, including Linux on Azure, lending credence to the saying that every Mac is built upon Windows (Hyper-V).

      Fairly certain the iTunes store, their web store, etc. are all built upon enterprise Linux as well.

      And there's nothing wrong with that. Use the best tool for the job. Most car owners have never looked in the engine compartment.

      • gorbypark 2 days ago ago

        I don’t think many azure services are Linux on hyper-v, are they? Azure (afaik) is quite heavy on bare metal Linux.

        • pjmlp 2 days ago ago

          The OS powering Azure is Windows, even if about 60% of the VMs run Linux workloads, as per official numbers.

          https://techcommunity.microsoft.com/blog/windowsosplatform/a...

        • miffy900 2 days ago ago

          Many of their SaaS offerings are, but many infrastructure offerings are running Windows Server underneath. Rent out any Azure VM, and regardless of guest OS, it's using Hyper-v underneath as the hypervisor.

      • jen20 2 days ago ago

        > including Linux on Azure

        Are you sure about this?

    • cosmic_cheese 2 days ago ago

      In this case probably Linux, but Apple also uses custom M-series based servers running a hardened Darwin-based OS for LLM processing of user data.

      • gorbypark 2 days ago ago

        I really really want some kind of write up on that custom m-series hardware. Not very much has even leaked out.

      • saagarjha 2 days ago ago

        I’m not even sure those have been turned on tbh

    • undefined 2 days ago ago
      [deleted]
    • ezfe 2 days ago ago

      Almost certainly

    • paxys 2 days ago ago

      Can't really imagine what else it would be.

      • MBCook 2 days ago ago

        Apple is known to use Linux on servers, but I could see a case for BSD being in the running.

        The macOS userland is based on BSD so you’d get a nice little fit there. And it’s not like some common BSD is bad at doing the job of a server. I know it can do great things.

        Who knows if it was ever discussed. They wouldn’t want Windows (licensing, optics, etc) and macOS isn’t tuned to be a high performance server. Linux is incredibly performant and ubiquitous and a very very safe choice.

        • cosmic_cheese 2 days ago ago

          Another BSD that Apple used at one point was NetBSD, which is what they ran on their AirPort wireless routers prior to their discontinuation.

  • keyle 2 days ago ago

    It's nice to see a success story using Vapor, and to know Apple uses Vapor themselves. As a long-time user of Laravel, which Vapor is inspired by, it gives me hope that Vapor has legs.

    Swift was a lovely language, up until recently - now it should be renamed Swift++.

  • sylens 2 days ago ago

    I'd like to give server-side Swift a go but it's hard when the best tooling is tied to one platform.

    • manmal 2 days ago ago

      In case you mean Xcode by "best tooling", I'd disagree. I spend 8h every day with Xcode, and often find VSCode preferable. I'd just give it a go on Linux tbh.

    • t089 2 days ago ago

      Cursor with the official Swift Extension is superior to Xcode for general purpose programming in Swift (even on macOS).

  • undefined 2 days ago ago
    [deleted]
  • msie 2 days ago ago

    This looks cool except that I see them use Cassandra in their stack. I was using an old version of Cassandra and found it a nightmare to admin.

  • schlch 2 days ago ago

    I did some iOS development last year. I did not like the iOS part of it but swift itself feels nice.

    Is swift web yet?

  • jbverschoor 2 days ago ago

    The amount of Apple hate is staggering. Replace Swift with Rust, and this would have 10x the upvotes and only positive hype.

    • rochak 2 days ago ago

      By now, everyone and their family knows that you can get a lot of upvotes just by including Rust in the title. It is what it is.

  • sneak 2 days ago ago

    > This feature has a server component, running on Linux-based infrastructure, that is maintained by Apple.

    Somewhat surprised Apple doesn’t run their services on XNU on internal Xserve-like devices (or work with or contribute to Asahi to get Linux working great natively on all Mx CPUs).

    A 1U Mac Studio would be killer. I doubt it’d even be a huge engineering effort given that they’ve already done most of the work for the Studio.

    • MBCook 2 days ago ago

      They are known to use cloud services from Azure (and I assume other providers).

      If they’re going to run in public clouds + on prem, Linux makes sense. And if you’re doing Linux the x86-64 currently makes a ton of sense too.

      As you mentioned they’d have to contribute to Asahi, which would take up resources. Even ignoring that the price/performance on Apple hardware in server workloads may not be worth it.

      Even if it’s slightly better (they don’t pay retail) the fact that it’s a different setup than they run in Azure plus special work to rack them compared to bog-standard 1U PCs, etc may mean it simply isn’t worth it.

  • ohdeargodno 2 days ago ago

    >Java’s G1 Garbage Collector (GC) mitigated some limitations of earlier collectors

    The... G1GC that has shipped since JDK 6 and been the default since JDK 9?

    >managing garbage collection at scale remains a challenge due to issues like prolonged GC pauses under high loads, increased performance overhead, and the complexity of fine-tuning for diverse workloads.

    Man, if only we had invented other garbage collectors since, like ZGC (https://openjdk.org/jeps/377), production-ready in JDK 15, or even Shenandoah (https://wiki.openjdk.org/display/shenandoah/Main), backported all the way to JDK 8 - collectors with sub-millisecond pause goals that scale incredibly well to even terabytes of RAM, without really needing much GC tuning.

    > inability to quickly provision and decommission instances due to the overhead of the JVM

    Man, if only we had invented things like AOT compilation in Java, and even native builds. We could call it GraalVM or something (https://www.graalvm.org/).

    >In Java, we relied heavily on inheritance, which can lead to complex class hierarchies and tight coupling.

    Man if only you could literally use interfaces the same way you use protocols. Imagine even using a JVM language like Kotlin that provides interface delegation! I guess we'll have to keep shooting ourselves in the foot.
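
    For illustration, a rough sketch of that composition/delegation style in plain Java (all names here are made up; hand-rolled, since Java has no Kotlin-style `by` delegation):

```java
// Composition via interfaces: behavior is injected, not inherited.
interface Greeter {
    String greet(String name);
}

final class PlainGreeter implements Greeter {
    public String greet(String name) { return "Hello, " + name; }
}

// A decorator that delegates to whatever Greeter it wraps,
// instead of subclassing a concrete greeter.
final class ShoutingGreeter implements Greeter {
    private final Greeter inner;
    ShoutingGreeter(Greeter inner) { this.inner = inner; }
    public String greet(String name) {
        return inner.greet(name).toUpperCase() + "!";
    }
}

public class CompositionSketch {
    public static void main(String[] args) {
        Greeter g = new ShoutingGreeter(new PlainGreeter());
        System.out.println(g.greet("world")); // prints "HELLO, WORLD!"
    }
}
```

    Swapping or stacking behaviors is then a matter of wrapping, with no class hierarchy involved.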

    >Swift’s optional type and safe unwrapping mechanisms eliminate the need for null checks everywhere, reducing the risk of null pointer exceptions and enhancing code readability.

    I'll grant them this one. But man, if only NullAway existed :(
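
    Even without NullAway, java.util.Optional covers a lot of the same safe-unwrapping ground as Swift's optionals; a quick sketch (names made up for illustration):

```java
import java.util.Optional;

public class OptionalSketch {
    // A lookup that can legitimately fail returns Optional, not null.
    static Optional<String> emailFor(String user) {
        return "alice".equals(user)
                ? Optional.of("alice@example.com")
                : Optional.empty();
    }

    public static void main(String[] args) {
        // map/orElse chains replace nested null checks.
        String domain = emailFor("bob")
                .map(e -> e.substring(e.indexOf('@') + 1))
                .orElse("<none>");
        System.out.println(domain); // prints "<none>"
    }
}
```

    The difference is that nothing forces Java APIs to use it, whereas Swift bakes optionality into the type system.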

    >Swift’s async/await support is a nice addition, streamlining how we handle async tasks.[ ... ] We can now write async code that reads like sync code

    Putting aside the fact that Swift's whole MainActor based async story is so shit that I'd enjoy writing async in Rust with tokio and that Swift 6 is the cause of so many headaches because their guidelines on async have been terrible: Man, if only the JDK included things like virtual threads that would make writing async code feel like sync code with a tiny wrapper around. We could call such a thing that collects many threads... A Loom! (https://openjdk.org/projects/loom/)
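
    A minimal Loom sketch (JDK 21+, names made up) - blocking-style code that scales like async:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class LoomSketch {
    public static void main(String[] args) throws Exception {
        // One cheap virtual thread per task; Thread.sleep parks the
        // virtual thread without tying up an OS carrier thread.
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Future<Integer>> results = new ArrayList<>();
            for (int i = 0; i < 1_000; i++) {
                final int n = i;
                results.add(pool.submit(() -> {
                    Thread.sleep(5); // reads like sync code
                    return n;
                }));
            }
            long sum = 0;
            for (Future<Integer> f : results) sum += f.get();
            System.out.println(sum); // prints 499500
        }
    }
}
```

    No async/await coloring, no executor annotations - the same thread-per-request style Java always had, just cheap now.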

    >Overall, our experience with Swift has been overwhelmingly positive and we were able to finish the rewrite much faster than initially estimated

    You rewrote an existing service with existing documentation, using a language and libraries you entirely control, and it was fast? Wow!

    >In addition to an excellent support system and tooling, Swift’s inherent emphasis on modularity and extensibility

    I would rather rewrite all my build scripts in Ant scripts that call `make` manually before calling SPM "excellent tooling", but okay.

    Anyways, sorry for the sarcasm, but this is just an Apple ad for Apple's Swift language. Writing low-allocation Java code is possible, and while writing efficient arenas is not... Java's GC generations are arenas. In the same way, yes, it's more performant. Maybe because their indirection-heavy, inheritance-abusing code led to pointer chasing everywhere, and not having that opportunity in Swift means they can't waste the performance?

    Most of what's holding performance back is coding standards and bad habits written at a dark time when the Gang of Four had burrowed its way into every programmer's mind. Add in some reflection for every endpoint, because Java back then really loved that for some reason (mostly because writing source generation tasks or compiler plugins was absolute hell back then, compared to the mere "meh" it is today). With a bit of luck, any network serialization also used reflection (thanks GSON & Jackson), and you know just where your throughput has gone.

    They had extremely verbose, existing Java 8 code, and just decided to rewrite it in Swift, because that's most of what happens at Apple these days. Everything outlined in this post is post-hoc rationalization. Had it failed, this post would never have happened. Modern Java, while still a bit rough in parts (and I absolutely understand language preference; I'd much rather maintain something I enjoy writing in), can and will absolutely compete with Swift in every aspect. It just requires getting rid of bad habits - habits that you either cannot, or never learned to, carry into Swift.

    Also I've never had Java tell me it can't figure out the type of my thirty chained collector calls, so maybe Hindley-Milner was not the place to go.

    • geodel 2 days ago ago

      Well, if Apple is not gonna use and promote Swift, who will? They surely can't count on you for that role.

      • ohdeargodno 2 days ago ago

        You can use and promote Swift without being disingenuous.

        Swift has some great attributes, and is almost a very pleasant systems language, give or take some aspects that can mostly be attributed to Apple going "we need this feature for iOS apps".

        An article that merely provides some unverifiable claims about how much better it is for them (and they represent a large percentage of the best knowledge about Swift on Earth) is about as useful as an AI-generated spam site. Anyone making decisions about using Swift on the backend based on this post would be a clown.

    • Daedren 2 days ago ago

      The funny part is that there's probably zero chance they used Swift 6 for this, because it's impossible to have released Swift 6 without having, you know, actually used the language for something.

  • 5cott0 2 days ago ago

    now fix AppStore Connect

  • time4tea 2 days ago ago

    Absolute bs.

    This falls into the category of "we rewrote a thing written by people that didn't know what they were doing, and it was better"

    Large class hierarchies ("favour composition over inheritance" has been the advice since the 1990s!), poorly written async code (not even needed in Java, thanks to virtual threads), poor startup times (probably due to Spring), huge heap sizes (likely memory leaks or other poor code, compounded by an inability to restart due to routing and the poor startup times).

    Yawn.