Hacker News

An RTX 3090 has 24GB of memory; a quantized Llama 70B takes around 60GB. You can offload a few layers to the GPU, but most of them will run on the CPU at terrible speeds.
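A quick back-of-envelope for why 70B doesn't fit in 24GB. The bits-per-weight figures below are rough averages for common llama.cpp-style quants (my assumption, not from this thread), and KV cache and activations add several more GB on top:

```python
# Approximate weight memory for a quantized 70B model.
# bits_per_weight values are rough quant averages (assumptions);
# KV cache / activations are NOT included.

def weight_gib(params_billion, bits_per_weight):
    """Approximate weight memory in GiB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 2**30

q4 = weight_gib(70, 4.5)  # a ~4.5 bpw quant (Q4_K_M-ish)
q6 = weight_gib(70, 6.6)  # a ~6.6 bpw quant (Q6_K-ish)

print(f"70B @ ~4.5 bpw: {q4:.0f} GiB")  # already above a 3090's 24GB
print(f"70B @ ~6.6 bpw: {q6:.0f} GiB")
```

Either way the weights alone blow past a single 24GB card, which is why most layers end up on the CPU.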


You're not required to put the whole model in a single GPU.

You can buy a 24GB GPU for $150-ish (P40).


Wow, that's a really good idea. I could potentially buy 4 Nvidia P40s for the same price as a 3090 and run inference on pretty much any model I want.
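The arithmetic, using the $150/P40 figure quoted above (3090 street prices vary, so the "same price" claim is approximate):

```python
# Aggregate VRAM and cost for 4x P40, using the thread's rough
# $150/card price. A used 3090 costs roughly this total (assumption).
p40_vram_gb, p40_price = 24, 150
n_cards = 4

total_vram = n_cards * p40_vram_gb  # 96 GB of VRAM
total_cost = n_cards * p40_price    # $600

print(f"{total_vram} GB for ${total_cost}")
```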


For reference, for readers: Nvidia GPU generations by support status.

SUPPORTED

=========

* Ada / Hopper / A4xxx (but not A4000)

* Ampere / A3xxx

* Turing / Quadro RTX / GTX 16xx / RTX 20XX / Volta / Tesla

EOL 2023/2024

=============

* Pascal / Quadro P / Geforce GTX 10XX / Tesla

Unsupported

===========

* Maxwell

* Kepler

* Fermi

* Tesla (yes, this one pops up over and over, chaotically)

* Curie

Older generations don't really do GPGPU. The older cards are also quite slow relative to modern ones! A lot of the ancient workstation cards can run big models cheaply, but (1) with incredible software complexity and (2) very slowly, even relative to modern CPUs.

Blender rendering very much isn't ML, but it is a nice, standardized benchmark:

https://opendata.blender.org/

As a point of reference: a P40 scores 774 in Blender rendering, while a 4090 scores 11,321. There are CPUs ($$$) around the 2,000 mark, so roughly dual-P40 territory. It's hard for me to justify a P40-style GPU over something like a 4060 Ti 16GB (3,800), an Arc A770 16GB (1,900), or a 7600 XT 16GB (1,300). They cost more, but the speed difference is nontrivial, as are the compatibility difference and support life. A lot of work is going into supporting modern Intel / AMD GPUs, while ancient ones are being deprecated.
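Putting the quoted scores side by side (Blender throughput is only a loose proxy for ML performance, so treat the ratios as rough):

```python
# Relative Blender Open Data scores, as quoted in the comment above.
scores = {
    "P40": 774,
    "RTX 4090": 11321,
    "RTX 4060 Ti 16GB": 3800,
    "Arc A770 16GB": 1900,
    "RX 7600 XT 16GB": 1300,
}
relative = {name: s / scores["P40"] for name, s in scores.items()}
for name, r in sorted(relative.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {r:.1f}x a P40")
```

So even the budget modern cards are roughly 2-5x a P40 on raw throughput, and a 4090 is nearly 15x.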


The P40 is essentially a faster 1080 with 24GB of RAM. For many tasks (including LLMs) it's easy to be memory-bandwidth bottlenecked, and if you are, they are more evenly matched. (Newer hardware has more bandwidth, sure, but not in a cost-proportional manner.)

I find that my hosts using 9x P40 do inference on 70B models MUCH MUCH faster than, e.g., a dual 7763, and cost a lot less. ... and can also support 200B-parameter models!

For the price of a single 4090, which doesn't have enough RAM to run anything I'm interested in, I can have slower cards with cumulatively 15 times the memory and cumulatively 3.5 times the memory bandwidth.


Interesting.

Technically, the P40 is rated at an impressive 347.1GB/sec of memory bandwidth, and the 4060 at a slightly lower 272GB/sec. For bandwidth-limited workloads, the P40 still wins.

The 4090 is about 3-4x that, but as you point out, is not cost-competitive.
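A common rule of thumb makes the bandwidth argument concrete: in single-stream decoding, every generated token has to stream the full weights through the memory bus once, so tokens/sec is bounded above by bandwidth divided by model size. This sketch ignores compute limits, KV-cache traffic, and multi-GPU overheads, and the 37GB model size is my own rough figure for a ~4.5 bpw 70B quant (it wouldn't fit on a single P40, so read these as per-card ceilings, not benchmarks):

```python
# Upper bound on single-stream decode speed for a bandwidth-limited
# GPU: tok/s <= bandwidth / model size. A rule-of-thumb sketch only;
# ignores compute limits, KV-cache reads, and multi-GPU overheads.

def decode_tok_per_s(bandwidth_gb_s, model_gb):
    return bandwidth_gb_s / model_gb

model_gb = 37  # rough weights size for a ~4.5 bpw 70B quant (assumption)

p40 = decode_tok_per_s(347.1, model_gb)   # P40: 347.1 GB/s (quoted above)
r4060 = decode_tok_per_s(272.0, model_gb) # 4060: 272 GB/s (quoted above)

print(f"P40 ceiling:  {p40:.1f} tok/s")
print(f"4060 ceiling: {r4060:.1f} tok/s")
```

By this measure the old P40 really does out-ceiling the 4060 on big models, which is the whole "bandwidth-bound" point.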

What do you use to fit 9x P40 cards in one machine, supply them with 2-3kW of power, and keep them cooled? The best I've found are older rackmount servers, and the ones I was looking at stopped short of that.


I technically have 10, plus a 100GbE card, in them, but due to Nvidia chipset limitations, using more than 9 in a single task is a pain. (Also, IIRC, one of the slots is only 8x in my chassis.)

Supermicro has made a couple of 5U chassis that will take 10 double-width cards and provide adequate power and cooling. The SYS-521GE-TNRT is one such example. (I'm not sure off the top of my head which mine are; they're not labeled on the chassis, but they may be that.)

They're pricey new, but they show up on eBay for $1-2k. The last ones I bought I paid $1800 for; I think for the earlier set I paid $1500/ea, around the time that Ethereum GPU mining ended. (I have no clue why someone was using chassis like these for GPU mining, but I'm glad to have benefited!)


Just make sure you're comfortable with manually compiling bitsandbytes and generally combining a software stack of almost-out-of-date libraries.


The P40 still works with CUDA 12.2 at the moment. I used to use K80s (which I think I paid like $50 for!), which turned into a huge mess of dealing with older libraries, especially since essentially all ML stuff is on a crazy upgrade cadence, with everything constantly breaking even without having to deal with orphaned old software.

You can get GPU server chassis that have 10 PCIe slots too, for around $2k on eBay. But note that there is a hardware limitation on the PCIe cards such that each card can only directly communicate with 8 others at a time. Beware: they're LOUD, even by the standards of server hardware.

Oh, also, the Nvidia Tesla power connectors have CPU-connector-like polarity instead of PCIe, so at least in my chassis I needed to adapt them.

Also keep in mind that if you aren't using a special GPU chassis, the Tesla cards don't have fans, so you have to provide cooling yourself.


That's a good point. Are you referring to the out-of-date CUDA libraries?


I don't remember exactly (either CUDA directly, or the cuDNN version used by flash-attention)... Anyway, /r/LocalLLaMA has a few instances of such builds. It might be really worthwhile looking that up before buying.


Can that be split across multiple GPUs? i.e. what if I have 4xV100-DGXS-32GBs?



