Please change the title to the original, "Actors: A Model Of Concurrent Computation In Distributed Systems".
I'm not normally a stickler for HN's rule about title preservation, but in this case the "in distributed systems" part is crucial, because IMO the urge to use both the actor model (and its relative, CSP) in non-distributed systems solely in order to achieve concurrency has been a massive boondoggle and a huge dead end. Which is to say, if you're within a single process, what you want is structured concurrency ( https://vorpus.org/blog/notes-on-structured-concurrency-or-g... ), not the unstructured concurrency that is inherent to a distributed system.
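For readers who haven't followed the link: the core idea of structured concurrency is that concurrent tasks live inside a lexical scope and cannot outlive it. A minimal sketch, using Python's asyncio.TaskGroup as a stand-in for the Trio nurseries the linked post describes:

```python
# Minimal sketch of structured concurrency using asyncio.TaskGroup (Python 3.11+).
# Every task started inside the `async with` block must finish (or be cancelled)
# before the block exits, so concurrency never leaks past the scope that created it.
import asyncio

async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)                 # stand-in for real I/O
    return f"{name} done"

async def main() -> None:
    async with asyncio.TaskGroup() as tg:
        t1 = tg.create_task(fetch("a", 0.1))
        t2 = tg.create_task(fetch("b", 0.2))
    # Both tasks are guaranteed complete here; an unhandled error in either
    # would have propagated to this caller instead of being silently dropped.
    print(t1.result(), t2.result())

asyncio.run(main())
```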
I'm working on a REST API server backed by a git repo. Having an actor responsible for all git operations saved me a lot of trouble: with every git operation serialised through it, I didn't have to guard against concurrent git operations myself.
Using actors also greatly simplified other parts of the app.
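Not the poster's actual code, but a minimal sketch of the pattern being described, with hypothetical names: a single actor task owns the repo and drains a mailbox, so git operations are serialised by construction and callers just await a reply.

```python
# Hypothetical sketch: one actor serialises every "git" operation. Callers put
# (operation, reply-future) pairs into the mailbox and await the reply; the
# single worker loop runs the operations strictly one at a time.
import asyncio

class GitActor:
    def __init__(self) -> None:
        self._mailbox: asyncio.Queue = asyncio.Queue()
        self._worker = asyncio.create_task(self._run())   # needs a running loop

    async def _run(self) -> None:
        while True:
            op, reply = await self._mailbox.get()
            try:
                reply.set_result(op())                    # operations never overlap
            except Exception as exc:
                reply.set_exception(exc)

    async def call(self, op):
        reply = asyncio.get_running_loop().create_future()
        await self._mailbox.put((op, reply))
        return await reply

async def main() -> None:
    actor = GitActor()
    # stand-ins for real git operations (commit, push, ...)
    print(await actor.call(lambda: "committed"))
    print(await actor.call(lambda: "pushed"))

asyncio.run(main())
```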
So you're just using actors to limit concurrency? Why not use a mutex?
In this simple case they're more or less equivalent if the only task is limiting concurrency, but in general mutex usage multiplies, and soon enough someone else has created a deadlock situation.
Extending the system, however, reveals some benefits: locking usually means stopping and waiting, whereas enqueuing something lets you wait for it in parallel with waiting for something else that is enqueued.
I think it very much comes down to history and philosophy. Actors are philosophically cleaner (and have gained popularity along with some success stories), but back in the 90s, when computers were mostly physically single-threaded and memory was scarce, the mutex looked like a "cheap good choice" for "all" multithreading issues, since it could be a simple lock word, whilst actors would need mailbox buffering (allocations... brr), etc., which felt "bloated". (In the end, it turned out that separate heavyweight OS-supported threads were often the bottleneck once thread and core counts got larger.)
Mutexes are quite often still the base primitive at the bottom of lower level implementations if compare-and-swap isn't enough, whilst actors generally are a higher level abstraction (better suited to "general" programming).
This might be a question of personal preference. At the design stage I already find it more approachable to think in separate responsibilities, and that translates naturally to actors. Thinking about the app, it's much easier for me to think "send the message to the actor" than "call that function that takes the necessary mutex". With mutexes, I think the separation of concerns is not as strong, and you might end up with a function taking multiple mutexes that can interfere with each other. With the actor model, I feel there is less risk (though I'm sure seasoned mutex users would question this).
You are using mutexes; they're in the Actor message queues, amongst other places. "Just use mutexes" suggests a lack of experience with them: they are very difficult to get both correct and scalable. By keeping them inside the Actor system, a lot of complexity is removed from the layers above. Actors are not always the right choice, but when they are, they're a very useful and simplifying abstraction.
Horses for courses, as they say.
Because actors were invented to overcome deadlocks caused by mutexes. See page 137. With mutexes you can forget concurrency safety.
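For context, the classic deadlock is two threads taking the same two locks in opposite orders; a minimal sketch (the acquire timeout is only there so the demo terminates instead of hanging):

```python
# Two threads each hold one lock and wait for the other: the textbook deadlock.
# Funnelling the work through a single actor's mailbox rules this shape out.
import threading, time

a, b = threading.Lock(), threading.Lock()

def worker(first, second, name):
    with first:
        time.sleep(0.1)                      # give the other thread time to grab its lock
        if second.acquire(timeout=1):        # in real code this would block forever
            second.release()
            print(f"{name}: finished")
        else:
            print(f"{name}: would have deadlocked")

t1 = threading.Thread(target=worker, args=(a, b, "worker-1"))
t2 = threading.Thread(target=worker, args=(b, a, "worker-2"))
t1.start(); t2.start(); t1.join(); t2.join()
```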
Nurseries sound similar to run-to-completion schedulers [0].
> IMO the urge to use both the actor model (and its relative, CSP) in non-distributed systems solely in order to achieve concurrency has been a massive boondoggle
Can't you model any concurrent non-distributed system as a concurrent distributed system?
0. https://en.wikipedia.org/wiki/Run-to-completion_scheduling
Hmm, you think?
I’m currently engineering a system that uses an actor framework to describe graphs of concurrent processing. We’re going to a lot of trouble to set up a system that can inflate a description into a running pipeline, along with nesting subgraphs inside a given node.
It’s all in-process though, so my ears are perking up at your comment. Would you relax your statement for cases where flexibility is important? E.g. we don’t want to write one particular arrangement of concurrent operations, but rather want to create a meta system that lets us string together arbitrary ones. Would you agree that the actor abstraction becomes useful again for such cases?
Data flow graphs could arguably be called structured concurrency (granted, of nodes that resemble actors).
FWIW, this has become a perfectly cromulent pattern over the decades.
It allows highly concurrent computation limited only by the size and shape of the graph while allowing all the payloads to be implemented in simple single-threaded code.
The flow graph pattern can also be extended into a distributed system by having certain nodes carry side-effects that transfer data to other systems running in other contexts. This extension does not need any particularly advanced design changes and, most importantly, the changes are limited to just the "entrance" and "exit" nodes that communicate between contexts.
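A rough sketch of the shape being described, with hypothetical names and no particular framework assumed: each node runs a plain single-threaded payload function, the graph is just queues wiring nodes together, and only an "exit" node would need to know how to reach another context.

```python
# Hypothetical sketch of a tiny flow graph: payloads are simple sequential
# functions; concurrency comes only from the graph wiring them together.
import queue, threading

def node(payload, inbox, outbox):
    def run():
        for item in iter(inbox.get, None):   # None is the shutdown sentinel
            outbox.put(payload(item))
        outbox.put(None)
    threading.Thread(target=run, daemon=True).start()

q_in, q_mid, q_out = queue.Queue(), queue.Queue(), queue.Queue()
node(lambda x: x * 2, q_in, q_mid)           # an ordinary transform node
node(lambda x: f"result={x}", q_mid, q_out)  # an "exit" node could ship this elsewhere

for i in range(3):
    q_in.put(i)
q_in.put(None)
print([item for item in iter(q_out.get, None)])   # ['result=0', 'result=2', 'result=4']
```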
I am curious to learn more about your system. In particular, what language or mechanism you use for the description of the graph.
> we don’t want to write one particular arrangement of concurrent operations, but rather want to create a meta system that lets us string together arbitrary ones. Would you agree that the actor abstraction becomes useful again for such cases?
Actors are still just too general and uncontrolled, unless you absolutely can't express the thing you want any other way. Based on your description, have you looked at iteratee-style abstractions and/or something like Haskell's Conduit? In my experience those are powerful enough to express anything you want (including, critically, being able to write a "middle piece of a pipeline" as a reusable value), but still controlled and safe in a way that actor-based systems aren't.
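Not Conduit itself, but a rough Python analogue of the point about reusable middle pieces: each stage of the pipeline is an ordinary value that can be composed and reused, with no actors or shared mutable state involved.

```python
# Generator-based pipeline stages as plain, reusable values (an analogue of
# iteratee/Conduit-style composition, not the real library).
def double(items):                 # a reusable "middle piece of a pipeline"
    for x in items:
        yield x * 2

def take(n, items):                # another reusable middle piece
    for i, x in enumerate(items):
        if i >= n:
            return
        yield x

pipeline = take(3, double(range(1000)))   # stages compose like ordinary values
print(list(pipeline))                     # [0, 2, 4]
```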
CSP in Go makes concurrency look pleasant compared to the async monstrosities I've seen in C#.
> both the actor model (and its relative, CSP) in non-distributed systems solely in order to achieve concurrency has been a massive boondoggle and a huge dead end.
Why is that so?
Well, lots of people have tried it and spent a lot of money on it and don't seem to have derived any benefit from doing so.
Actors can be made to do structured concurrency, as long as you allow actors to wait for responses from other actors and implement a hierarchy so that if an actor dies, its children do as well. And that's how I use them! So I have to say the OP is just ignorant of how actors are used in practice.
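A minimal sketch of those two ingredients, with hypothetical names: the parent sends a message and waits for the child's reply, and the child's lifetime is tied to the parent's, so if the parent exits or dies the child is taken down with it.

```python
# Hypothetical sketch: request/reply between actors plus a parent/child link.
import asyncio

async def child(mailbox: asyncio.Queue) -> None:
    while True:
        msg, reply = await mailbox.get()
        reply.set_result(msg.upper())          # stand-in for the child's real work

async def parent() -> None:
    mailbox: asyncio.Queue = asyncio.Queue()
    child_task = asyncio.create_task(child(mailbox))   # child spawned by the parent
    try:
        reply = asyncio.get_running_loop().create_future()
        await mailbox.put(("hello", reply))
        print(await reply)                     # parent waits for the response: HELLO
    finally:
        child_task.cancel()                    # parent exiting (or crashing) kills the child

asyncio.run(parent())
```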
Except for Akka in Java, and the entirety of Erlang and its children Elixir and Gleam. You obviously can scale those to multiple systems, but they provide a lot of benefit in local single-process scenarios too, IMO.
Things like data pipelines, games, etc.
If I'm not mistaken ROOM (ObjecTime, Rational Rose RealTime) was also heavily based on it. I worked in a company that developed real time software for printing machines with it and liked it a lot.
I've worked on a number of systems that used Akka in a non-distributed way and it was always an overengineered approach that made the system more complex for no benefit.
Fair; I worked a lot on data pipelines and found the actor model worked well in that context. I particularly enjoyed it in the Elixir ecosystem, where I was building on top of Broadway [0].
Probably has to do with not fighting the semantics of the language.
[0] https://elixir-broadway.org/
Really depends on the ergonomics of the language. In Erlang/Elixir/BEAM languages, it's incredibly ergonomic to write code that runs on distributed systems; you have to try really hard to do the inverse. Java's ergonomics, even with Akka, lend themselves to certain design patterns that don't suit writing code for distributed systems.
Eh?
I've written a non-distributed app that uses the Actor model and it's been very successful. It concurrently collects data from hundreds of REST endpoints, a typical run may make 500,000 REST requests, with 250 actors making simultaneous requests - I've tested with 1,000 but that tends to pound the REST servers into the ground. Any failed requests are re-queued. The requests aren't independent, request type C may depend on request types A & B being completed first as it requires data from them, so there's a declarative dependency graph mechanism that does the scheduling.
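This isn't the commenter's code, just a rough sketch (hypothetical names) of the general shape of such a declarative dependency mechanism: each request type lists what it depends on, and it is dispatched only once those results exist, while independent types run concurrently.

```python
# Hypothetical sketch: request type C runs only after A and B have completed,
# while independent types run concurrently.
import asyncio

DEPENDS_ON = {"A": [], "B": [], "C": ["A", "B"]}

async def fetch(kind: str) -> str:
    await asyncio.sleep(0.1)                   # stand-in for a batch of REST requests
    return f"{kind} data"

async def run_all() -> dict:
    loop = asyncio.get_running_loop()
    done = {kind: loop.create_future() for kind in DEPENDS_ON}

    async def run(kind: str) -> None:
        await asyncio.gather(*(done[dep] for dep in DEPENDS_ON[kind]))  # wait for deps
        done[kind].set_result(await fetch(kind))

    async with asyncio.TaskGroup() as tg:      # Python 3.11+
        for kind in DEPENDS_ON:
            tg.create_task(run(kind))
    return {kind: fut.result() for kind, fut in done.items()}

print(asyncio.run(run_all()))
```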
I started off using Akka but then the license changed and Pekko wasn't a thing yet, so I wrote my own single-process minimalist Actor framework - I only needed message queues, actor pools & supervision to handle scheduling and request failures, so that's all I wrote. It can easily handle 1m messages a second.
I have no idea why that's a "huge dead end", Actors are a model that's a very close fit to my use case, why on earth wouldn't I use it? That "nurseries" link is way TL;DR but it appears to be rubbishing other options in order to promote its particular model. The level of concurrency it provides seems to be very limited and some of it is just plain wrong - "in most concurrency systems, unhandled errors in background tasks are simply discarded". Err, no.
Big Rule 0: No Dogmas: Use The Right Tool For The Job.
A more legible version: https://dspace.mit.edu/handle/1721.1/6952
https://en.wikipedia.org/wiki/Gul_Agha_(computer_scientist)
The first link returns a 403.
The actor model is one of those things that really seduces me on paper, but my only exposure to it was in my consulting career, and that was to help migrate away from it. The use case seemed particularly well suited (integrating a bunch of remote devices with spotty connections), but it was practically a nightmare to debug... which was a problem, since it was buggy.
To be fair, the problem was probably that particular implementation, but I'm wondering if there's any successful rollout of that model at any significant scale out there.
I was on a team that built a fairly big telco project for machine-to-machine communication, using Akka actors. It was okay-ish; the only thing that I hated was how the pattern spread through the whole code base.
It doesn't feel like 1985; it feels very 2015. Really good insights, especially remembering the hardware they had back then, ~14 years before Google took off.
I think Microsoft Orleans, Erlang OTP and Scala Play are probably the most famous examples in use today.
Orleans is pretty cool! The project has matured nicely over the years (it's been something like 10 years?) and they have some research papers attached to it if you like reading up on the details. The NuGet stats indicate a healthy amount of downloads too, more than one might expect.
One of the single most important things I've done in my career was going down the Actor Model framework rabbit hole about 8 or 9 years ago and reading a bunch of books on the topic. They contained a ton of hidden philosophy, amazing reasoning, conversations about real-time vs. eventual consistency, the Two Generals Problem, just a ton of enriching stuff: ways to think about data flows, the direction of the flow, immutability, event-logged systems and on and on. At the time CQS/CQRS was making heavy waves and everyone was trying to implement DDD and event-based systems (and/or service buses with tons of nasty queues...), and the Actor Model (and F#, for that matter) was such a clean, fresh breath of air amid all the Enterprise complexity.
Would highly recommend going down this path for anyone with time on their hands; it's time well spent. I still call on that knowledge frequently, even when doing OOP.
Do any of the books you read on the topic stand out as something you'd recommend?
Not books, but some inspiring resources. FModel [0] is a set of patterns for functional reactive DDD on top of event sourcing. In particular, the Decider pattern is a great way to model aggregates and test them using Scenarios that read like Gherkin in code (given.. when.. then). It combines well with actors used to represent aggregates.
On the BEAM, used by Erlang, Elixir, and Gleam, actors are called processes, and this guide [1] delves into domain modeling with them.
[0] https://fraktalio.com/fmodel/
[1] https://happihacking.com/blog/posts/2025/the-gnome-village/
You can always join the Orleans Discord
Applied Akka Patterns by Michael Nash and Wade Waldron (O'Reilly) was very digestible and relevant at the time; it might be dated by now. Just read the intro to get the vibe.
These days I would recommend picking a framework and then asking Claude & friends to do a deep dive with you and build out an example project. Ask it to explain concepts, architecture, trade-offs, scalability considerations, hosting considerations, compare the framework with others, hook it up to storage systems (SQLite, PostgreSQL, blob storage) and so on. Try running the pieces within a WireGuard network, too. Very interesting learning to be found.
I was disappointed when MS discontinued Axum, which I found pleasant to use; I thought the language-based approach was nicer than a library-based solution like Orleans.
The Axum language had `domain` types, which could contain one or more `agent`s and some state. Agents could have multiple functions and could share domain state, but could not access state in other domains directly. The programming model was passing messages between agents over a typed `channel` using directional infix operators, which could also be used to build process pipelines. The channels could carry `schema` types and a state-machine-like protocol spec for message ordering.
It didn't have "classes", but Axum files could live in the same project as regular C# files and call into them. The C# compiler that came with it was modified to introduce an `isolated` keyword for classes, which prevented them from accessing `static` fields; this was key to ensuring state didn't escape the domain.
The software and most of the information was scrubbed from MS's own website, but you can find an archived copy of the manual [1]. I still have a copy of the software installer somewhere, but I doubt it would work on any recent Windows.
Sadly, this project was axed before MS had embraced open source. It would've been nice if they had released the source when they decided to discontinue working on it.
[1]:https://web.archive.org/web/20110629202213/http://download.m...
I would think Akka in the Java world is more famous than Orleans.
Akka's not open source anymore so people tend to look at similar or competing systems like Scala Play.
Apache Pekko is an open-source fork of Akka from before their licensing changes.
That's probably what they meant by "Scala Play".
May be of interest: Pony Language is designed from the ground up to support the Actor model.
https://www.ponylang.io/
Mandatory mention of notable actor languages:
Missing: (1985)