I really like Julia as a language but I have struggled to adopt it and be productive in it. Part of it is because of the JIT runtime and a sub-par LSP (at least when I last tried).
To those who regularly write Julia code, what is your workflow? The whole thing with Revise.jl did not suit me, honestly. I have enjoyed programming in Rust orders of magnitude more because there's no runtime and you can do AOT compilation. My intention is not to write scripts but high-performance numerical/scientific code, and with Julia's JIT-based design, rapid iteration (to me at least) feels slower than in Rust (!).
Yup, the LSP is bad. There is a new LSP being rewritten based on JET.jl, a static code analyzer. This should be faster than the old LSP, which (I'm not 100% sure, but I think) works by loading all the modules into a Julia instance and querying it for symbols and docs.
Punchline: I rewrote the code to look almost identical to C++, hand-held the compiler by adding @-macros to disable safety checks, and forced SIMD codegen and fastmath on.
End result: code that is uglier and still much slower than C++. Kind of a shame.
I was once a bit of a Julia performance expert, but moved toward C++ for hobby projects even while still using Julia professionally.
I wrote a blog post at the time with exactly that punchline (not explicitly stated, but just look at the code!): https://spmd.org/posts/multithreadedallocations/ The example was similar to a real production-critical hot path from work.
Maybe things changed since I left Julia, but that was December 2023, years after this blog post.
Hey, what happened to LoopModels?
This is 7 years old. Julia is a totally different language by now.
As a quick anecdote, in our take-home interview exercise, we usually receive answers in C++ or Julia, and the two fastest answers have been in Julia.
I'd have to guess that this is because of ease of use. C++ lets you get as close to the metal as you choose to, so there is no reason why a C++ solution shouldn't be at least as fast as one written in any other language, and yet ...
Of course it also depends on what additional libraries you are using, especially when it comes to parallel/GPU programming in C++, but it's easy to believe that Julia out of the box makes it easy to write high-performance parallel software.
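As a rough illustration of that out-of-the-box story (function name hypothetical; requires starting Julia with threads enabled, e.g. `julia --threads=auto`), multithreading a loop needs nothing beyond the standard library:

```julia
# Sketch: elementwise sum parallelized across threads with one macro.
# `eachindex(a, b)` also checks that the two arrays have matching axes.
function psum!(out, a, b)
    Threads.@threads for i in eachindex(a, b)
        @inbounds out[i] = a[i] + b[i]
    end
    out
end

a = rand(10^6); b = rand(10^6)
psum!(similar(a), a, b)
```

The C++ equivalent typically means reaching for OpenMP, TBB, or hand-rolled thread pools, which is presumably the "ease of use" gap being described.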
> This is 7 years old.
Yeah, I actually totally forgot to check the date...
Hardly seems worth the effort; perhaps things have improved since 2019. It would be interesting to see an updated benchmark, but if you're going to end up with code that looks like C++ to get proper performance, you might as well write it in C++. My biggest problem with Julia is that they decided to use column-major ordering for multi-dimensional arrays (i.e. Fortran/MATLAB style). This makes interoperability with C/C++ and Python's NumPy a real pain, since you can't do zero-copy array sharing between the two without one side being forced into strided access. For that reason alone I haven't adopted it in any of my workflows.
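A small sketch of that layout mismatch, assuming a flat buffer handed over from a row-major C/NumPy side (values and names are illustrative):

```julia
# A 2x3 row-major buffer as C/NumPy would lay it out in memory:
buf = [1.0, 2.0, 3.0,    # row 1 in the C layout
       4.0, 5.0, 6.0]    # row 2

# Naively wrapping it as 2x3 column-major scrambles the elements:
wrong = reshape(buf, 2, 3)        # wrong[1, 2] == 3.0, not 2.0

# Zero-copy workaround: reshape with dims swapped, then view it
# transposed. Indices now match the C side, but walking a "row"
# becomes the strided access complained about above.
a = transpose(reshape(buf, 3, 2)) # a[1, 2] == 2.0
```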
> code that is uglier and still much slower than C++.
Oh, such a shame indeed! They didn't even manage to produce better-looking code, at least?? Julia was looking great in 2019, but it was still very buggy, so I stopped looking. I had hopes that by now it would be a good choice over C++ and Rust with similar performance.
There's simply no way it'd ever have similar performance to those. It's not possible.
I have always seen it as a potential alternative to Java, and definitely better than Python.
My experience working in it professionally was that it was... fine. But the GC in it was not good under load and not competitive with Java's.
From the sound of your post I'm guessing you view Julia as a general-purpose language. I'd consider it general purpose insofar as the application leans into fast numerical computing, with everything else secondary. It can do most of the things other languages do reasonably well, but that's not why you would pick Julia for a project over, say, Java. You pick it because you want to write fast numerical code and express it elegantly. All of the other typical "glue" things you need to ship a product are secondary to that, but good enough to get the job done.
The key to performance with the GC in Julia is not allocating, but it has gotten substantially better since 2019.
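A minimal sketch of the "don't allocate in the hot loop" idea (function names hypothetical):

```julia
# Allocating version: the broadcast builds a fresh array on every call,
# feeding the GC.
step(x, g, lr) = x .- lr .* g

# Non-allocating version: `.-=` fuses into a single in-place loop that
# mutates x, so the hot path never touches the allocator.
step!(x, g, lr) = (x .-= lr .* g; x)

x = rand(1000); g = rand(1000)
step!(x, g, 0.01)                 # mutates x in place
# @allocated step!(x, g, 0.01)    # typically 0 on a warm call
```

The bang-suffix naming (`step!`) is the ecosystem convention for mutating functions, which makes the allocation-free paths easy to spot.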
How hard was it to maintain a large Julia code base rather than, say, an OOP or Rust one? It has an interesting paradigm. I feel like it could get really messy.
Personally I never struggled. You can employ interfaces and maintain them judiciously.
But interfaces are informal. Not using a monorepo, say, makes it harder to be sure whether you broke downstream or not (via downstream's unit tests).
But freedom from Rust's orphan rule etc. means you can decompose large code into fragments easily, while getting almost Zig-style specialization yet the ease of use of Python (for consumers). I would say this takes a fair bit of skill to wield safely/in a maintainable fashion, though, and many packages (including my own) are not extremely mature.
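A toy sketch of the kind of informal interface being described (all names hypothetical):

```julia
# Informal "interface": any type that defines `area` participates.
# Nothing enforces the contract; it's a convention, which is both
# the flexibility and the maintenance risk discussed above.
abstract type Shape end

struct Circle <: Shape
    r::Float64
end

struct Square <: Shape
    s::Float64
end

area(c::Circle) = π * c.r^2
area(sq::Square) = sq.s^2

# A downstream package can add its own Shape and `area` method
# without touching this code (no orphan rule in the way), and
# `total_area` specializes per concrete element type.
total_area(shapes) = sum(area, shapes)
```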
I personally think it requires discipline, I saw it go both ways.
I was never an expert in the language, but I worked alongside people who were, and they generally made nice code.
But there were a few places where I saw intensely confusing patterns from overloading with multimethods: code that became hard to follow and had poor encapsulation.
I don't get the appeal. It's like an OSS MATLAB, but all contributions are used directly so the language developers can make money for a parent company? Most OSS languages aren't run that way. Seems kind of scammy.
It always amuses me when people assume that the nefarious scheme is taking open source contributions and selling them. That's not the nefarious scheme. The nefarious scheme is going to partners, funding agencies and investors and saying "look at this unique capability / important research / profitable business opportunity that we can do together, but oops, all of our code is written in Julia, so I guess we better pay some people to maintain it so it'll all come crashing down, wouldn't want that to happen".
Also, I'm of course using "nefarious" in jest here in both cases. While we don't directly try to monetize our open source work, I respect that sometimes people need to do that. As long as people are transparent about it, I don't have a problem. Doing the thing we're doing seems to work, but it's a lot harder, because you have to build a successful piece of software and one (or multiple) successful something elses that have a critical dependency on it. It's like hitting the lottery twice.
I wouldn't say nefarious, but I don't know how I feel about the power structure. I could see it being very much a one way venture for most participants. I'd have to think about it before actually using the language.
Your baseline for comparison is a company that doesn't give anything away for free?
Also, contributing in open source is a choice, not a mandate. I greatly benefit from Julia and its ecosystem, so I chose to contribute back some of my work; no one forced me. I chose the MIT license because I want other people to be able to make money with it, just like I make money with other people's MIT-licensed stuff.
the parent company is a consumer of Julia, and has no formal role in oversight or governance; they are of course invested in the success and performance of the language, but so are all other users!
Seems kind of contradictory with the other comment, which states that they decide what features are prioritized. I guess not, since it could be an informal process.
It's interesting. I like the more opaque approach Rust takes. Rust has its own issues, but it seems less corporately motivated. Maybe that's why it has more corporations using it? You aren't going to end up with the core maintainers of the language rug-pulling packages or language features to slow down competitors who are also using the tool. I say competitors because it looks like they are making money through consultancies and very broad applications of the niche language.
Weird stuff to have to think about. I just want to write code
Meh, I've never been associated with the company, and AFAICT they provide value through platforms for enterprises. Not everyone gets OSS sponsorships to fund a team (and using a social media presence to achieve this was a post-Julia phenomenon).
It’s nothing like Google-the-ad-company influencing Chrome. The company consumes Julia for products to sell, rather. Maybe this affects the ordering of features landing, but… meh.
Dang, haven’t read much on Julia as of late. I remember using it for a CS 300-level course around 2016 when learning about tokenizing and parsing as part of language fundamentals. Julia has undoubtedly made some significant performance improvements since then. Would love to see a follow-up that explores what, if anything, from this still holds true and what improvements can be made.
I wonder how Mojo ranks alongside Julia. Mojo was discussed here yesterday. Mojo seems to be more Python-focused, while Julia is very much focused on scientific computation. I may be wrong.
Recent discussion on Julia Discourse: https://discourse.julialang.org/t/making-julia-as-fast-as-c/
Very interesting post, and I think it exposes the limitations of the Julia compiler. Note that an old version of the compiler is used (1.0.3, from 2019).
One could say that we can almost replicate the semantics of a C++ program while writing in Julia, for example by removing bounds checks on arrays or removing hidden memory allocations.
But the goal of a language for numerical computing is capturing mathematical formulas using high-level constructs closer to the original representation, while compiling to efficient code.
Domain scientists want to play with the math and the formulas, not do common subexpression elimination in their programs. Just curious to see how it evolves.
I think the best compromise would be to get the best of both worlds: perform bounds checks by default, but have a compiler flag which skips them. That might break many programs written with the default behaviour in mind, but it allows performing additional optimizations.
This is exactly what Julia does. Bounds checks are on by default, and you can disable them either locally, via the `@inbounds` macro, or globally, with the `--check-bounds=no` compiler flag.
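For example (function name hypothetical), the local opt-out looks like this:

```julia
v = [1.0, 2.0, 3.0]
# v[4]  # bounds checks are on by default, so this throws BoundsError

# Local opt-out: the author asserts the indices are in range, and
# @inbounds tells the compiler to skip the checks in this block.
function sum_unsafe(v)
    s = 0.0
    @inbounds for i in eachindex(v)
        s += v[i]
    end
    s
end

sum_unsafe(v)

# Global opt-out (command line, whole session):
#   julia --check-bounds=no script.jl
```

`--check-bounds=yes` also exists to force checks back on everywhere, `@inbounds` annotations included, which is handy when debugging.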
Phew. A 7-year-old post about a 10-year-old language. Triggers all the LLMs posting empty generic responses ("Very interesting, exposes limitations...").
Prelude of what's to come in the self-reinforcing cycle of machines talking to machines and drowning everything else.
From 2019