Writing N-body gravity simulations code in Python (alvinng4.github.io)
130 points by dargscisyhp 3 days ago | 24 comments
kristel100 2 hours ago [-]
N-body problems are the gateway drug into numerical physics. Writing one from scratch is a rite of passage. Bonus points if you hit that sweet spot where performance doesn’t completely tank.
umpalumpaaa 13 hours ago [-]
Every time I see anything on the N-body problem I am reminded of my final high school project... I had 2-3 weeks to write an n-body simulation. Back then I used C++ and my hardware was really bad (a 2 GHz single-core CPU or so…). The result was not really impressive because it did not really work. :D But I was able to show that my code correctly predicted that the Moon and the Earth would eventually collide if neither body was given any initial velocity. I went into this totally naive and blind, but it was a ton of fun.
taneq 11 hours ago [-]
> my hardware was really bad (2 GHz single core CPU or so…)

laughs in 32kb of RAM

Sounds like a great project, though. There are a lot of fundamental concepts involved in something like this, so it's a great learning exercise (and fun as well!) :)

antognini 13 hours ago [-]
Once you have the matrix implementation in Step 2 (Implementation 3), it's rather straightforward to extend your N-body simulator to run on a GPU with Jax --- you can just add `import jax.numpy as jnp` and replace all the `np.`s with `jnp.`s.

For a few-body system (e.g., the Solar System) this probably won't provide any speedup. But once you get to ~100 bodies you should start to see substantial speedups by running the simulator on a GPU.
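A rough sketch of the matrix-style pairwise acceleration this refers to (function and variable names here are illustrative, not the article's); per the comment above, porting it to JAX should mostly be a matter of the import line:

```python
import numpy as np  # for a GPU: `import jax.numpy as jnp` and swap np. -> jnp.

G = 1.0  # gravitational constant in simulation units (assumed)

def acceleration(x, m, eps=1e-12):
    """Vectorized pairwise gravitational acceleration for N bodies.

    x: (N, 3) positions, m: (N,) masses.
    """
    r = x[None, :, :] - x[:, None, :]        # (N, N, 3): r[i, j] = x[j] - x[i]
    d2 = np.sum(r**2, axis=-1) + eps         # softened squared distances
    inv_r3 = d2 ** -1.5
    inv_r3 = inv_r3 * (1.0 - np.eye(x.shape[0]))   # zero out self-interaction
    # a_i = G * sum_j m_j * (x_j - x_i) / |x_j - x_i|^3
    return G * np.sum(m[None, :, None] * r * inv_r3[:, :, None], axis=1)

x = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
m = np.array([1.0, 1.0])
print(acceleration(x, m))  # body 0 pulled toward +x, body 1 toward -x
```

Because every pair is computed at once, the work is a dense (N, N) matrix, which is exactly the shape of computation GPUs like.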

tantalor 11 hours ago [-]
> rather straightforward

For the programmer, yes it is easy enough.

But there is a lot of complexity hidden behind that change. If you care about how your tools work that might be a problem (I'm not judging).

almostgotcaught 8 hours ago [-]
People think high-level libraries are magic fairy dust - "abstractions! abstractions! abstractions!". But there's no such thing as an abstraction in real life, only heuristics.
munch0r 9 hours ago [-]
Not true for regular GPUs like the RTX 5090. They have atrocious float64 performance compared to a CPU. You need a special GPU designed for scientific computation (one with many float64 cores).
the__alchemist 9 hours ago [-]
This is a confusing situation: GPUs seem great for scientific computation; you often want f64 for scientific computation; yet GPUs aren't good with f64.

I'm trying to evaluate if I can get away with f32 for GPU use for my molecular docking software. Might be OK, but I've hit cases broadly where f64 is fine, but f32 is not.

I suppose this is because the dominant uses (games and AI/ML) run on f32, or for AI even lower precision‽
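A tiny illustration of the failure mode (not tied to any particular GPU or to the docking code above): position differences between nearby bodies far from the origin cancel catastrophically in f32, which is one common reason f64 is fine where f32 is not.

```python
import numpy as np

# Two "bodies" 1.0 apart, but ~1e8 from the origin.
a64 = np.float64(1e8 + 1.0)
b64 = np.float64(1e8)
a32, b32 = np.float32(a64), np.float32(b64)

print(a64 - b64)  # 1.0  -- float64 keeps ~15-16 significant digits
print(a32 - b32)  # 0.0  -- float32 keeps only ~7, so the 1.0 is lost entirely
```

Whether this bites depends on the dynamic range of the coordinates; recentering the frame can sometimes buy f32 back.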

the__alchemist 10 hours ago [-]
Next logical optimization: Barnes Hut? Groups source bodies using a recursive tree of cubes. Gives huge speedups with high body counts. FMM is a step after, which also groups target bodies. Much more complicated to implement.
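A minimal 2D quadtree sketch of the Barnes-Hut idea (the real thing is usually a 3D octree; all names, the opening angle, and the simplifications here are illustrative, and coincident bodies aren't handled): each cell stores its total mass and center of mass, and any cell that subtends less than the opening angle is treated as a single point mass.

```python
import numpy as np

G = 1.0       # gravitational constant in simulation units (assumed)
THETA = 0.5   # opening angle: smaller is more accurate but slower

class Node:
    """A square cell of the quadtree: aggregate mass plus center of mass."""
    def __init__(self, center, half):
        self.center = np.asarray(center, float)  # cell center
        self.half = float(half)                  # half the cell's side length
        self.mass = 0.0
        self.com = np.zeros(2)                   # center of mass
        self.children = None                     # four sub-cells once split
        self.body = None                         # position of a lone body

    def insert(self, pos, m):
        pos = np.asarray(pos, float)
        if self.mass == 0.0 and self.children is None:   # empty leaf
            self.body, self.com, self.mass = pos, pos, m
            return
        if self.children is None:                        # occupied leaf: split
            self._split()
        self.com = (self.com * self.mass + pos * m) / (self.mass + m)
        self.mass += m
        self._child_for(pos).insert(pos, m)

    def _split(self):
        h = self.half / 2
        self.children = [Node(self.center + np.array([sx, sy]) * h, h)
                         for sx in (-1, 1) for sy in (-1, 1)]
        old, self.body = self.body, None
        self._child_for(old).insert(old, self.mass)      # re-home the old body

    def _child_for(self, pos):
        return self.children[2 * (pos[0] > self.center[0])
                             + (pos[1] > self.center[1])]

    def accel(self, pos, eps=1e-9):
        d = self.com - pos
        r = np.linalg.norm(d)
        if r < eps:                              # skip self-interaction
            return np.zeros(2)
        # Leaf, or a cell far enough away: treat it as one point mass.
        if self.children is None or (2 * self.half) / r < THETA:
            return G * self.mass * d / r**3
        return sum((c.accel(pos, eps) for c in self.children if c.mass > 0),
                   np.zeros(2))
```

Building the tree is O(N log N) and each acceleration query walks only the cells it cannot approximate, which is where the speedup over the O(N²) direct sum comes from.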
RpFLCL 9 hours ago [-]
That's mentioned on the "Conclusions" page of TFA:

> Large-scale simulation: So far, we have only focused on systems with a few objects. What about large-scale systems with thousands or millions of objects? Turns out it is not so easy because the computation of gravity scales as O(N²). Have a look at Barnes-Hut algorithm to see how to speed up the simulation. In fact, we have documentations about it on this website as well. You may try to implement it in some low-level language like C or C++.

kkylin 5 hours ago [-]
C or C++? Ha! I've implemented FMM in Fortran 77. (Not my choice; it was a summer internship and the "boss" wanted it that way.) It was a little painful.
munchler 13 hours ago [-]
In Step 2 (Gravity), why are we summing over the cube of the distance between the bodies in the denominator?

Edit: To answer myself, I think this is because one of the factors is to normalize the vector between the two bodies to length 1, and the other two factors are the standard inverse square relationship.

itishappy 8 hours ago [-]
You got it. The familiar inverse-square formula uses a unit vector (strictly this is the force F, not the acceleration):

    F = G * m1 * m2 / |r|^2 * r_unit

    r_unit = r / |r|

Substituting r_unit folds the extra |r| into the denominator:

    F = G * m1 * m2 / |r|^3 * r
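The same identity in a few lines of NumPy (the masses and separation here are made-up illustration values):

```python
import numpy as np

G = 6.674e-11                         # gravitational constant, SI units
m1, m2 = 5.0, 3.0                     # illustrative masses, kg
r = np.array([3.0, 4.0, 0.0])         # vector from body 1 to body 2, m
d = np.linalg.norm(r)                 # |r| = 5.0

F_unit = G * m1 * m2 / d**2 * (r / d)  # inverse square times unit vector
F_cube = G * m1 * m2 / d**3 * r        # identical: |r|^3 in the denominator

assert np.allclose(F_unit, F_cube)
```

The cubed form avoids normalizing the vector separately, which is why it shows up in vectorized implementations.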
quantadev 7 hours ago [-]
The force itself is `G * m1 * m2 / (r^2)`. That's a pure magnitude. The direction of the force is just the unit vector going from m1 to m2. You need it to be a unit vector or else you're multiplying up to something higher than that force. However, I don't get why you'd ever cube the 'r'. Never seen that. I don't think it's right, tbh.
mkoubaa 11 hours ago [-]
My favorite thing about this kind of code is that people are constantly inventing new techniques for time integration. It's the sort of thing you'd think was a solved problem, but when you read about time integration strategies you realize how rich the space of solutions is. And they are more like engineering problems than pure math, with tradeoffs and better fits depending on the kind of system.
JKCalhoun 9 hours ago [-]
Decades ago ... I think it was the Computer Recreations column in "Scientific American" ... since computers were less capable then, the strategy was to advance all the bodies by some amount of time — call it a "tick". When bodies got close, the tick got smaller, making the calculations more precise. When they were further apart, you could run the solar system on generalities.
sampo 3 hours ago [-]
Adaptive time step RKF algorithm is explained in section 5:

https://alvinng4.github.io/grav_sim/5_steps_to_n_body_simula...

leumassuehtam 8 hours ago [-]
One way of doing that is, at each time step, to perform two integrations with different accuracies, one giving you x_1 and the other x_2. The error is estimated by the difference |x_1 - x_2|, and you make the error match your tolerance by adjusting the time step.

Naturally, this difference becomes big when two objects are close together, since the acceleration will induce a large change of velocity, and a smaller time step is needed to keep the error under control.
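A toy sketch of that scheme (names and the step-control rule are illustrative, and a real integrator would use something better than Euler): take one full step (x_1) and two half steps (x_2), use |x_1 - x_2| as the local error estimate, and halve dt until it meets the tolerance.

```python
import numpy as np

def euler(f, x, t, dt):
    """One explicit Euler step for dx/dt = f(t, x)."""
    return x + dt * f(t, x)

def adaptive_step(f, x, t, dt, tol=1e-6):
    while True:
        x1 = euler(f, x, t, dt)               # one full step
        xh = euler(f, x, t, dt / 2)           # two half steps...
        x2 = euler(f, xh, t + dt / 2, dt / 2)
        err = np.max(np.abs(x1 - x2))         # local error estimate
        if err <= tol:
            # Accept the more accurate result; let dt grow again afterwards.
            return x2, t + dt, dt * 2
        dt /= 2                               # e.g. during a close encounter

# Exponential decay x' = -x as a stand-in for a rapidly changing state:
x, t, dt = adaptive_step(lambda t, x: -x, 1.0, 0.0, 0.1)
```

Production codes (RKF45, Dormand-Prince) get the two accuracies almost for free by embedding a lower-order formula inside a higher-order one, rather than stepping twice.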

hermitcrab 52 minutes ago [-]
I have run into the problem where a constant time step can suddenly result in bodies getting flung out of the simulation because they pass very close to each other.

Your solution sounds interesting, but isn't it only practical when you have a small number of bodies?

halfcat 9 hours ago [-]
So in this kind of simulation, as we look closer, uncertainty is reduced.

But in our simulation (reality, or whatever), uncertainty increases the closer we look.

Does that suggest we don’t live in a simulation? Or is “looking closer increases uncertainty” something that would emerge from a nested simulation?

quantadev 6 hours ago [-]
The reason uncertainty goes up the closer we try to observe something in physics is that everything is ultimately waves of specific wavelengths. So if you try to "zoom in" on one of the 'humps' of a sine wave, for example, you don't see anything but a line that gets straighter and straighter, which is basically a loss of information. This is what the Heisenberg uncertainty principle is about. Not the "Say My Name" one, the other one. hahaha.

And yes, that's a dramatic over-simplification of the uncertainty principle, but it conveys the concept.

gitroom 11 hours ago [-]
Man I tried doing this once and my brain nearly melted lol props for actually making it work
quantadev 7 hours ago [-]
I was fascinated with doing particle simulations in "C" language as a teenager in the late 1980s on my VGA monitor! I would do both gravity and charged particles.

Once particles accelerate they'll just 'jump past' each other rather than collide, of course, if you have no repulsive forces. I realized you had to take smaller and smaller time slices to get things to slow down and keep running when the singularities "got near". I had a theory even back then that this is what nature is doing with General Relativity making time slow down near masses: slowing down to avoid singularities. It's a legitimate theory. Maybe this difficulty is why "God" (if there's a creator) decided to make everything actually "waves" as much as particles, because waves don't have the problem of singularities.

mclau157 15 hours ago [-]
Very well done!