
Hi HN,

Have you ever run GNU Parallel on a powerful machine just to find one core pegged at 100% while the rest sit mostly idle?

I hit that wall...so I built forkrun.

forkrun is a self-tuning, drop-in replacement for GNU Parallel (and xargs -P) designed for high-frequency, low-latency shell workloads on modern multicore and NUMA hardware (e.g., log processing, text transforms, HPC data prep pipelines).

On my 14-core/28-thread i9-7940x it achieves:

- 200,000+ batch dispatches/sec (vs ~500 for GNU Parallel)

- ~95–99% CPU utilization across all 28 logical cores (vs ~6% for GNU Parallel)

- Typically 50×–400× faster on real high-frequency low-latency workloads (vs GNU Parallel)

These benchmarks are intentionally worst-case (near-zero work per task), where dispatch overhead dominates. This is exactly the regime where GNU Parallel and similar tools struggle — and where forkrun is designed to perform.

A few of the techniques that make this possible:

- Born-local NUMA: stdin is splice()'d into a shared memfd, then pages are placed on the target NUMA node via set_mempolicy(MPOL_BIND) before any worker touches them, making the memfd NUMA-striped.

- SIMD scanning: per-node indexers use AVX2/NEON to find line boundaries at memory bandwidth and publish byte-offsets and line-counts into per-node lock-free rings.

- Lock-free claiming: workers claim batches with a single atomic_fetch_add — no locks, no CAS retry loops; contention is reduced to a single atomic on one cache line.

- Memory management: a background thread uses fallocate(PUNCH_HOLE) to reclaim space without breaking the logical offset system.

…and that’s just the surface. The implementation uses many additional systems-level techniques (phase-aware tail handling, adaptive batching, early-flush detection, etc.) to eliminate overhead at every stage.

In its fastest (-b) mode (fixed-size batches, minimal processing), it can exceed 1B lines/sec. In typical streaming workloads it's often 50×–400× faster than GNU Parallel.

forkrun ships as a single bash file with an embedded, self-extracting C extension — no Perl, no Python, no install, full native support for parallelizing arbitrary shell functions. The binary is built in public GitHub Actions so you can trace it back to CI (see the GitHub "Blame" on the line containing the base64 embeddings).

- Benchmarking scripts and raw results: https://github.com/jkool702/forkrun/blob/main/BENCHMARKS

- Architecture deep-dive: https://github.com/jkool702/forkrun/blob/main/DOCS

- Repo: https://github.com/jkool702/forkrun

Trying it is literally two commands:

    . frun.bash    # OR  `. <(curl https://raw.githubusercontent.com/jkool702/forkrun/main/frun.bash)`
    frun shell_func_or_cmd < inputs
Happy to answer questions.



> Have you ever run GNU Parallel on a powerful machine just to find one core pegged at 100% while the rest sit mostly idle?

Not exactly, but maybe I haven't used large enough NUMA machines to run tiny jobs?

I think usually parallel saturates my CPU and I'd guess most CPU schedulers are NUMA-aware at this point.

If you care about short tasks, maybe parallel is the wrong tool; but if picking the task to run is the slow part AND you prefer throughput over latency, maybe you need batching instead of a faster job-scheduling tool.

I'm pretty sure parallel has some flags to allow batching up to K elements, so maybe your process can take several inputs at once. Alternatively you can also bundle inputs as you generate them, but that might require a larger change to both the process that runs tasks and the one that generates the inputs for them.


parallel works fine so long as the time per job is on the order of seconds or longer.

Let me give you an example of a "worst-case" scenario for parallel. Start by making a file on a tmpfs with 10 million newlines

    yes $'\n' | head -n 10000000 > /tmp/f1
So, now let's see how long it takes parallel to push all these lines through a no-op. This measures the pure "overhead of distributing 10 million lines in batches". I'll set it to use all my CPU cores (`-j $(nproc)`) and to use multiple lines per batch (`-m`).

    time { parallel -j $(nproc) -m : </tmp/f1; }

    real    2m51.062s
    user    2m52.191s
    sys     0m6.800s
Average CPU utilization here (on my 14c/28t i9-7940X) is CPU time / real time

    (172.191 + 6.8) / 171.062 = 1.0463516152 CPUs utilized
Note that there is 1 process that is pegged at 100% usage the entire time that isn't doing any "work" in terms of processing lines - it's just distributing lines to workers. If we assume that thread averaged about 0.98 cores utilized, it means that throughout the run it managed to keep around 0.066 out of 28 CPUs saturated with actual work.

Now let's try with frun

    . ./frun.bash
    time { frun : </tmp/f1; }

    real    0m0.559s
    user    0m10.409s
    sys     0m0.201s
CPU utilization is

    ( 10.409 + .201 ) / .559 = 18.9803220036 CPUs utilized
Let's compare the wall clock times

    171.062 / 0.559 = 306x speedup
Interestingly, if we look at the ratio of CPU utilization (spent on real work):

    18.9803220036 / 0.066 = 287x more CPU usage doing actual work
which gives a pretty straightforward story - forkrun is ~300x faster here because it is utilizing ~300x more CPU actually doing work.

This regime of "high frequency low latency tasks" - millions or billions of tasks that take milliseconds or microseconds each - is the regime where forkrun excels and tools like parallel fall apart.

Side note: if I bump it to 100 million newlines:

    time { frun : </tmp/f1; }

    real    0m4.212s
    user    1m52.397s
    sys     0m1.019s
CPU utilization:

    ( 112.397 + 1.019 ) / 4.212 = 26.9268 CPUs utilized
which on a 14c/28t CPU doing no-ops...isn't bad.

>Have you ever run GNU Parallel on a powerful machine just to find one core pegged at 100% while the rest sit mostly idle?

Yes, to my extreme frustration. Thank you, I'm installing this right now while I read the rest of your comment.


How did it work for you?

Thanks for making and thanks for sharing :)

I’m not much of a parallel-tools user, but I can appreciate your craft and know how rewarding these odysseys can be :)

What was the biggest “aha” moment when you worked out how things interlock, or where you needed to make changes A and B at the same time because either on its own slowed things down? Etc. And what is the single biggest-impact design choice?

And if you’re objective, what could be done to other tools to make them competitive?


So, in forkrun's development there have been a few "AHA!" moments. Most of them were accompanied by a full re-write (current forkrun is v3).

The 1st AHA, and the basis for the original forkrun, was that you could eliminate a HUGE amount of the overhead of parallelizing things in shell if you use persistent workers, have them run things for you in a loop, and distribute data to them. This is why the project is called "forkrun" - it's short for "first you FORK, then you RUN".

The 2nd AHA, which spawned forkrun v2, was that you could distribute work without a central coordinator thread (which inevitably becomes the bottleneck). forkrun v2 did this by having 1 process dump data into a tmpfile on a ramdisk, then all the workers read from this file using a shared file descriptor and a lightweight pipe-based lock: write a newline into a shared anonymous pipe, read from pipe to acquire lock, write newline back to pipe to release it. FIFO naturally queues up waiters. This version actually worked really well, but it was a "serial read, parallel execute" design. Furthermore, the time it took to acquire and release a lock meant the design topped out at ~7 million lines per second. Nothing would make it faster, since that was the locking overhead.

The 3rd AHA was that I could make a very fast (SIMD-accelerated) delimiter scanner, post the byte offsets where lines (or batches of lines) started in the global data file, and then workers could claim batches and read data in parallel, making the design fully "parallel read + parallel execute".

The 4th AHA was regarding NUMA: instead of reactively re-shuffling data between nodes, just put it on the right node to begin with. Furthermore, determine the "right node" using real-time backpressure from the nodes, with a 3-chunk buffer to ensure the nodes are always fed with data. This one didn't need a rewrite, but it is why forkrun scales SO WELL with NUMA.


> And if you’re objective, what could be done to other tools to make them competitive?

I wanted to reply separately to this bit, because I needed a bit of time to think about and respond to it.

To be frank, parallel optimizes for "breadth of features" and has, for example, the ability to coordinate distributed computing over ssh. But it fundamentally assumes that the workload itself will take dramatically longer than the coordination.

To really be competitive in "high-frequency low-latency workloads", where you have millions of inputs and each only takes microseconds, you would need a complete rewrite with an entirely different way of thinking.

Let me drop a few numbers just to drive this point home. Parallel is capable of batching and distributing around 500 batches of work a second. forkrun, in its "pass arguments via quoted cmdline args" mode is capable of batching and distributing around 10,000 batches a second. This is mostly limited by how fast bash can assemble long strings of quoted arguments to pass via the command line. In forkrun's `-s` mode, which bypasses bash entirely and splices data directly to the stdin of what you are parallelizing, forkrun is capable of batching and distributing over 200,000 batches a second.

The biggest architectural hurdle most existing tools have, and the one that makes forkrun's batch distribution rate impossible for them to achieve, is that almost all of them use a central distributor thread that forks each individual call (which is very expensive), and that distributor is ALWAYS the bottleneck in high-frequency low-latency workloads. Pushing past this requires moving to a persistent worker model without a central coordinator. That alone necessitates a complete rewrite for basically all the existing tools.

That said, forkrun takes it so much further:

* It uses a SIMD-accelerated delimiter scanner + lock-free async-io to allow for workers to not only execute in parallel but to read inputs to run in parallel.

* It doesn't just use a standard "lock-free" design with CAS retry loops everywhere - it treats the problem like a physical pipeline of data flow and structurally eliminates contention between workers. The literal only "contention" is a single atomic on a single cache line - namely when a worker claims a batch by running `atomic_fetch_add` on a global monotonically increasing index (`read_idx`).

* It doesn't use heuristics - it uses a proper closed-loop control system. There is a 3-stage ramp-up (saturate workers -> geometric ramp -> backpressure-guided PID) to dynamically determine the batch size and the number of workers that works extremely well for all input types with 0 manual tuning.

* It keeps complexity in the slow path. Claiming a batch of lines literally just involves reading a couple shared mmap'ed vars and an `atomic_fetch_add` op in the fast path, which is why it can break 1 billion lines a second. The complexity is all so the slow path degrades gracefully, which is where it smartly trades latency for throughput (but only when throughput is limited by stdin to begin with).

* It treats NUMA as 1st class and chooses the "obvious in hindsight" path to just put data on the correct NUMA node from the very start instead of re-shuffling it between nodes reactively later.

I could go on, but the TL;DR is: to be competitive, other tools would really need to try and solve the "high-frequency low-latency stream parallelization" problem from first principles like forkrun did.


Great read, thanks :)

This is the kind of buzz I search out in my own programming :)

Have fun and keep challenged :)


Please don't support only curl for installation. There are many package registries you can use; e.g., https://github.com/aquaproj/aqua-registry

There's no "install" - you just need to source the `frun.bash` file. Downloading frun.bash and sourcing it works just fine. Directly sourcing a curl stream that grabs frun.bash from the git repo is just an alternate approach. It is not "required" by any means.

This is great!

Forkrun is part of a vanishingly small number of projects written since the 1990s that get real work done as far as multicore computing goes.

I'm not super-familiar with NUMA, but hopefully its concepts might be applicable to other architectures. I noticed that you mentioned things like atomic add in the readme, so that gives me confidence that you really understand this stuff at a deep level.

My use case might eventually be to write a self-parallelizing programming language where higher-order methods run as isolated processes. Everything would be const by default to make imperative code available in a functional runtime. Then the compiler could turn loops and conditionals into higher-order methods since there are no side effects. Any mutability could be provided by monads enforcing the imperative shell, functional core pattern so that we could track state changes and enumerate all exceptional cases.

Basically we could write JavaScript/C-style code having MATLAB-style matrix operators that runs thousands of times faster than current languages, without the friction/limitations of shaders or the cognitive overhead of OpenCL/CUDA.

-

I feel that pretty much all modern computer architectures are designed incorrectly, which I've ranted about countless times on HN. The issue is that real workloads mostly wait for memory, since the CPU can run hundreds of times faster than load/store, especially for cache and branch prediction misses. So fabs invested billions of dollars into cache and branch prediction (that was the incorrect part).

They should have invested in multicore with local memories acting together as a content-addressable memory. Then fork with copy-on-write would have provided parallelism for free.

Instead, CPU progress (and arguably Moore's law itself) ended around 2007 with the arrival of the iPhone and Android, which sent R&D money to low-cost and low-power embedded chips. So the world was forced to jump on the GPU bandwagon, doubling down endlessly on SIMD instead of giving us MIMD.

Leaving us with what we have today: a dumpster fire of incompatible paradigms like OpenGL, Direct3D, Vulkan, Metal, TPUs, etc.

When we could have had transputers with unlimited compute and memory, scaling linearly with cost, that could run 3D and AI libraries as abstraction layers. Sadly that's only available in cloud computing currently.

We just got lucky that neural nets can run on GPUs. It would have been better to have access to the dozen or so other machine learning algorithms, especially genetic algorithms (which run poorly on GPUs).

Maybe your work can help bridge that gap.


I appreciate the high praise re: forkrun.

forkrun's NUMA approach is really largely based on the idea that, as you said, "real workloads mostly wait for memory". The waiting for memory gets worse under NUMA because accessing memory on a different chiplet or a different socket means accessing data that is physically farther from the CPU and thus has higher latency. forkrun takes a somewhat unique approach to dealing with this: instead of taking data in, putting it somewhere, and reshuffling it around based on demand, it immediately puts it on the correct NUMA node's memory as it comes in. This creates a NUMA-striped global data memfd. On NUMA systems, forkrun duplicates most of its machinery (indexer + scanner + worker pool) per node, and each node's machinery is only offered chunks of the global data memfd that are already in node-local memory.

This directly aims to solve (or at least reduce the effect from) "CPUs waiting for memory" on NUMA systems, where the wait (if memory has to cross sockets) can be substantial.


I am using a 9950x3D processor and didn't see any slow-down nor cpu sitting idle, I suggest you read the man-pages more clearly :P

I know this is obviously sarcasm and it made me laugh but I'm pretty sad HN couldn't catch it.

No. I was being earnest! Works for me TM

Why the hell do you curl? Additionally, why advertise it when you've only just uploaded it? Nobody should install something that new...

curl isn't required - you just need to source the `frun.bash` file. Downloading frun.bash and sourcing it works just fine. Directly sourcing a curl stream that grabs frun.bash from the GitHub repo is just an alternate approach. It is not "required" by any means.


