As someone who plays the violin (very poorly), I don't think this sounds like a violin at all. It is very folksy and synthetic sounding. They are clearly plucking, but it sounds similar to bowing, which is really strange. I could definitely replicate that quality of model, but I think I have heard much better models elsewhere.
It was a finite element simulation of a CT-scanned violin, but as they note,
“If there’s anything that’s sounding mechanical to it, it’s because we’re using the exact same time function, or standard way of plucking, for each note,” says Makris, who is himself a lute player. “A musician will adapt the way they’re plucking, to put a little more feeling on certain notes than others. But there could be subtleties which we could incorporate and refine.”
The article makes it sound like this is a very new idea, but physical models of musical instruments, including the violin, have been around for over 40 years. Daisy Bell, the first piece of computer music (and one of the pieces performed by their model), used a physical model of the human singing voice based on measurements of the human vocal tract, and that was done in 1962.
Julius Smith wrote a pretty comprehensive textbook on building physical models of musical instruments, available online. Here, for example, is a chapter on modeling bowed-string sounds: https://ccrma.stanford.edu/~jos/pasp/Bowed_Strings.html
> Daisy Bell, the first piece of computer music (and one of the pieces performed by their model), used a physical model of the human singing voice based on measurements of the human vocal tract, and that was done in 1962
From the article:
> As a demonstration, the researchers applied the computational violin to play two short excerpts: one from “Bach’s Fugue in G Minor,” and another from “Daisy Bell” — a nod to the first song that was ever produced by a computer-synthesized voice.
“As it is, the new computational model is the first to generate realistic sound based on the laws of physics and acoustics.”
Ouch: this is completely inaccurate. Physical modeling has its roots in the 80s, and Stefan Bilbao has been doing FDM-based methods for over 20 years. I think he discusses FEM in Numerical Sound Synthesis.
I'm assuming the intended meaning is that this was the first time the approach led to "realistic" sound?
If this is their definition of "realistic" sound then I'm horrified
That's also not the case. There have been some really accurate physically-modeled instruments for at least 20 years.
Also, aschkually, a violin is on the "easier" end of making it sound realistic. It's one of the "tutorial" models you go through when you start learning about this (resonators + reverb get you 80% of the way there). It's much harder to do any plucking sound (guitar, piano), and much, much harder to model percussion accurately (cymbals, drums) in such a way that the sound doesn't come out dry and very evidently synthetic.
Source: I was very invested in this in the 2000s, although as a hobby, not professionally.
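For anyone curious what that "tutorial" level looks like, here's a minimal sketch in that spirit (every parameter is invented for illustration, not taken from any real model): a Karplus-Strong plucked string fed through a single two-pole resonator standing in for the body.

```python
import numpy as np

def karplus_strong(freq, duration, sr=44100, damping=0.996):
    """Plucked-string synthesis: a noise burst circulating in a
    delay line with a two-point averaging lowpass in the loop."""
    period = int(sr / freq)                  # delay-line length sets the pitch
    buf = np.random.uniform(-1, 1, period)   # initial noise burst = the "pluck"
    out = np.empty(int(sr * duration))
    for i in range(len(out)):
        out[i] = buf[i % period]
        # averaging lowpasses the loop, so high partials decay faster
        buf[i % period] = damping * 0.5 * (buf[i % period] + buf[(i + 1) % period])
    return out

def body_resonance(signal, freq, sr=44100, r=0.99):
    """Crude 'body': one two-pole resonator tuned near a body mode."""
    w = 2 * np.pi * freq / sr
    a1, a2 = -2 * r * np.cos(w), r * r       # biquad feedback coefficients
    y = np.zeros_like(signal)
    for i in range(len(signal)):
        y[i] = signal[i] - a1 * y[i - 1] - a2 * y[i - 2]
    return y / np.max(np.abs(y))

note = karplus_strong(440.0, 1.0)
sound = body_resonance(note, 280.0)          # 280 Hz: a made-up "body mode"
```

It's nowhere near a serious model, but it shows how far a delay line plus a resonator already gets you, and why the remaining 20% (realistic attacks, body radiation) is where all the work is.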
Someone made a virtual car engine that was able to generate realistic sounds a few years ago.
https://www.youtube.com/watch?v=RKT-sKtR970
The coolest thing about this to me is that he managed to plug a trumpet into the same engine and it sort of... just worked.
Reminds me of last year's SIGGRAPH paper about cello-playing animation:
https://youtu.be/ODR6eQOjm9w
https://github.com/Qzping/ELGAR
It's just fun to see solutions to problems you didn't even know to exist.
Semi-related?
"Show HN: Anyma V, a hybrid physical modelling virtual instrument" 01-aug-2024 https://news.ycombinator.com/item?id=41132104 29 comments
"Show HN: I built a synthesizer based on 3D physics" 02-may-2025 https://news.ycombinator.com/item?id=43873074 123 comments
Bowed instruments are very cool to model because of the nonlinear slip of the bow against the string. I'm a bit curious why bowing wasn't discussed or used in the violin example, just plucking. Do luthiers test violins more by plucking than bowing?
They briefly address this in the article:
> Violin bowing, the researchers say, is a much more complicated interaction to model.
Thanks, I missed that.
It's probably harder to model and the results "aren't quite there yet".
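For a feel of why bowing is harder, here's a toy stick-slip sketch (nothing to do with the paper's method; every constant is made up): a damped oscillator stands in for one point on the string, driven by a friction force that is strong while the string moves with the bow and weak once it slips.

```python
import numpy as np

def bowed_point(duration=0.25, sr=44100, bow_speed=0.2, bow_force=2.0):
    """Toy stick-slip: a damped mass-spring 'string point' driven by a
    friction force depending on the bow/string relative velocity."""
    w0 = 2 * np.pi * 440.0            # string resonance in rad/s
    dt = 1.0 / sr
    x, v = 0.0, 0.0                   # displacement and velocity
    out = np.empty(int(sr * duration))
    for i in range(len(out)):
        dv = v - bow_speed            # relative velocity, string minus bow
        # friction is strongest near dv = 0 (sticking) and falls off as
        # the string slips: the nonlinearity that makes bowing hard
        friction = bow_force * np.sign(-dv) / (1.0 + 10.0 * abs(dv))
        a = -w0 * w0 * x - 20.0 * v + 1000.0 * friction
        v += a * dt                    # semi-implicit Euler step
        x += v * dt
        out[i] = x
    return out / np.max(np.abs(out))

signal = bowed_point()
```

The discontinuous friction curve is exactly what linear resonator tricks can't capture, which is presumably why the researchers started with plucking.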
It doesn't sound like a real violin at all. A professional violinist would immediately tell that something is wrong.
I played other instruments (not very professionally) for 10 years when I was much younger, and I was often surrounded by the sound of violins; these don't sound realistic at all.
My main instrument was the saxophone, and whenever I hear an AI/artificial saxophone somewhere I notice it right away, but I'm very curious whether I've ever been a victim of the toupee fallacy.
I wonder whether there's a good test/game where you have to guess whether a given sound of a musical instrument is real or not.
To my ears as an amateur violinist and synth enthusiast, it sounds tinny and dry.
The plucking demo was disappointing. How does it compare to something like the SWAM Engine[1]? Given the title, I was expecting more.
[1] https://www.youtube.com/watch?v=UB-5uPcWVVE
Not sure if that's news; Audio Modeling[1] has been doing this for quite a long time now. The big plus of physical modeling over sampling is disk size: instead of tens of GB of samples, you get a 15 MB plugin.
It's much more difficult to use, though - you have to control lots of aspects of the simulation (using automation in DAW or MIDI controllers) to make it sound actually realistic.
OK, I guess this is more of a tool for luthiers than for composers or music producers.
[1] https://audiomodeling.com/
The first version of Pianoteq came out back in 2006. There are apparently some exotic mid-90s synths with claims of being physically modeled too; I don't know how accurate that is.
I currently use a Raspberry Pi running Pianoteq as the sound output for my digital piano. It got a reluctant stamp of approval from my pianist son, although of course he prefers the physical response of even a poor acoustic piano.
Pianoteq is amazing with a good controller like a big Kawai VPC1 or the fanciest Fatar action in the Studiologic "GT" models. It is very responsive. I've been using it for over a decade and the sound keeps improving.
The combination of Pianoteq and a sample-based piano is pretty nice too, though tough to do on a Pi.
Good speakers improve the experience because you get your room resonance etc.
The coolest thing: you can change the temperament. So if you're playing music from before equal temperament, you can hear what different keys used to sound like! Very interesting, especially with Bach.
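A quick back-of-the-envelope shows what that temperament switch actually changes (quarter-comma meantone used here as one example of a pre-equal tuning; the note selection is just for illustration):

```python
A4 = 440.0
C4 = A4 * 2 ** (-9 / 12)             # C4 in equal temperament, ~261.63 Hz

def equal_temp(semitones_from_c4):
    # equal temperament: every semitone is the same ratio, 2^(1/12)
    return C4 * 2 ** (semitones_from_c4 / 12)

# Quarter-comma meantone, one historical pre-equal tuning: fifths are
# narrowed to 5^(1/4) so that four of them stack to a pure major third (5/4).
meantone = {"C": 1.0, "E": 5 / 4, "G": 5 ** 0.25}    # ratios above C4
equal = {"C": equal_temp(0), "E": equal_temp(4), "G": equal_temp(7)}

for name in meantone:
    print(f"{name}: equal {equal[name]:7.2f} Hz, "
          f"meantone {C4 * meantone[name]:7.2f} Hz")
```

The meantone E comes out roughly 14 cents flat of the equal-tempered E, and because the narrowing is distributed unevenly around the circle of fifths, each key picks up its own character — the effect you hear when switching temperaments for Bach.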
I agree with your son: there is nothing like a real piano. There are interesting attempts at combining the digital and the mechanical with soundboard transducers from Kawai and Yamaha; I haven't used them, but I would like to.
Pianoteq is more like spectral modelling. The sound lacks some of the movement and bloom of a real piano.
90s physical modelling was a very simplified modular kind of modelling. Instead of analogue oscillators and filters you had "string" models, "pipe" models, various resonators, and so on.
The models were interesting, but still quite crude and basic.
This project is the most physical kind of physical modelling: a brute-force, unsimplified model of the entire instrument body and string system.
It doesn't try to "model a resonator", it models blocks of wood with various holes, and calculates how they distort and radiate as sound passes through them.
It's ridiculously expensive computationally, but it's also the only way to get all of the nuances of the sound.
I expect they're already working on a stick-slip model for bowing.
Theoretically you could use the same technique to model a piano or guitar, and you would get something indistinguishable from a real instrument.
You'd likely need a supercomputer to run the model in anything approaching real time.
But the advantage is that once you've got it you can do insane things like replace the strings with wood instead of metal, or use different metals, or "build" nonphysical pianos that are fifty feet long and have linear overtones all the way down to the bass.
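The simplest relative of that brute-force idea is a finite-difference scheme for the 1D wave equation: discretize the string itself and step the physics forward, rather than hand-building a "string model". A rough sketch (all constants picked for illustration; c is chosen so the fundamental c/2L lands near 220 Hz):

```python
import numpy as np

def fdm_string(duration=0.5, sr=44100, L=0.65, c=286.0, n=64, loss=1e-5):
    """Explicit finite-difference scheme for the damped 1D wave equation.
    Scaling this idea up to 3D elasticity over a CT-scanned geometry is
    what makes the full approach so expensive."""
    dx = L / (n - 1)
    dt = 1.0 / sr
    lam2 = (c * dt / dx) ** 2         # squared Courant number, must be <= 1
    assert lam2 <= 1.0, "unstable: raise sr or lower n"
    j = np.arange(n)
    pluck = n // 4                    # pluck position at 1/4 of the string
    u = np.minimum(j / pluck, (n - 1 - j) / (n - 1 - pluck))  # triangular shape
    u_prev = u.copy()                 # zero initial velocity: a released pluck
    out = np.empty(int(sr * duration))
    for i in range(len(out)):
        u_next = np.zeros(n)          # endpoints stay clamped at zero
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + lam2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_next *= 1.0 - loss          # crude frequency-independent damping
        u_prev, u = u, u_next
        out[i] = u[n // 8]            # read displacement at one "pickup" point
    return out

signal = fdm_string()
```

Even this toy version hints at the appeal: the pitch, the overtone structure, and the pluck-position comb filtering all fall out of the physics, and you can change the "instrument" just by changing L, c, or the loss term.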
Pianoteq was quite computationally heavy when it came out, and arguably it still is. It was a challenge to get it to run in real time on a Raspberry Pi 4.
I can tell the difference between Pianoteq and a real piano, but I can't in general tell the difference between Pianoteq and a recording of a piano. Maybe there's some insane level of hi-fi gear which would let me, idk? But in general, when it's good enough for Steinway, Petrof, and my conservatory-student son to give their stamp of approval, I think it's good enough for me as well :) Quite a few of the insane things you mention you can already do with Pianoteq's physical model (e.g. emulating a 20 m grand), and I suspect they keep a few knobs to themselves to sell virtual instruments.
I can tell the difference between Pianoteq and a real piano, but I can't in general tell the difference between Pianoteq and a recording of a piano.
That's a great way to put it. There's no way to fully reproduce that live sound, but compared to anything played through speakers, Pianoteq is indistinguishable from a real piano.
Out of the box it sounds a little too perfect, but just setting the Condition to the midway point (1.0) fixes that.
Do you have an analog sustain pedal? The fine control with partial pedaling made some difference for me with regard to Pianoteq's feel.
I don't know how many levels it has, but it's definitely more than 2 :) I am a lousy pianist anyway; it's my son who's serious.
As a disclaimer, I haven't read the article, nor do I know much about simulating instruments in particular, but I just want to point out that accurately simulating the physics of a musical instrument is most likely still a very difficult problem.
I have no doubt there have been analytical/semi-analytical models around for decades. But a program that can take an arbitrary geometry (or class thereof) with specific materials and simulate the high-frequency vibrations and body interactions with high fidelity (not through ad hoc models) is probably still out of scope for real-time simulation.
My point is really that there are often families of models that deal with one thing, from semi-analytical models first coded in Fortran in the 80s that run in milliseconds but are only valid in certain configurations with a low degree of accuracy, to "first principles" simulations that may well require a supercomputer to produce results to a useful degree of accuracy (and not in real time). So, just because you see someone claim they can "simulate X", and another makes the same claim 40 years later, that doesn't mean they're doing the same thing.
For instance, aeronautics has XFOIL. It's a semi-analytical model first devised in the 80s that computes aerodynamic coefficients for a certain class of airfoils (NACA). My understanding is that it's a very clever, and industrially significant, piece of code, but ultimately it works in a narrow regime with some heavy simplifications. You can now get results from it in real time on a webpage. A proper CFD calculation for a NACA wing takes on the order of minutes to hours on a workstation (depending on requested precision and settings, e.g. air speed), and while closer to first principles, it still uses physical simplifications (RANS). So yeah, although nominally people have been "simulating airfoils" for 40 years, the techniques have been refined considerably, and will continue to be (practical LES and, someday, DNS). People may still be "simulating airfoils" a century from now, in ever more accurate (nailing things down within the constraints), high-fidelity (lifting constraints), and generic ways.
Back to instruments: this is a difficult coupled problem, at fairly high frequencies (higher frequencies = more expensive), with possible fluid-structure interactions, not to mention the geometries are fairly complex (even getting a workable mesh is nontrivial). My uneducated guess is that we're still at either the semi-analytical or the "considerably simplified first principles" stage for this type of problem. Just like with DNS, I'm sure you could "just resolve the scales and run it with a really tiny time step", and this is liable to be similarly expensive (a million-dollar single simulation). Additionally, you have to deal with the human ear, which is perhaps more unforgiving than an error plot for drag or lift. So I wouldn't dismiss news of instrument simulation as stale just because someone produced similar artifacts in the past; the methods will continue to evolve considerably.