My Linux machine has 64 GiB of RAM, which is like 128 GiB of Mac RAM. It’s still not enough.
Serious question: what are you using all that RAM for? I’m having a hard time justifying upgrading one of my laptops to 32 GiB, never mind 64 GiB.
For me in particular: I’m a software developer who works on developer tools, so I run a lot of tests in VMs to cover different operating systems. I just finished running a test suite that used up over 50 gigs of RAM for a dozen VMs.
Same, 48c/96t with 192 GB of RAM.
make -j is fun; htop triggers epilepsy.
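For anyone trying this at home: a bare make -j spawns unlimited parallel jobs, which is exactly how a big build tree eats 192 GB. A minimal sketch of the tamer version (nothing here is specific to any one project):

    # One job per CPU core instead of unlimited jobs;
    # an uncapped `make -j` can exhaust RAM on a large tree.
    make -j"$(nproc)"

    # Watch every core light up while it runs.
    htop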
Few VMs, but tons of LXC containers; it’s like having one machine that runs 20 systems in parallel, and really fast.
Have containers for dev, for browsing, for Wine. The dream finally made manifest.
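For reference, spinning one of those up with plain LXC looks roughly like this (the container name and distro here are just examples):

    # Create an unprivileged Debian container from the download template
    lxc-create -n dev -t download -- -d debian -r bookworm -a amd64

    # Start it and get a shell inside
    lxc-start -n dev
    lxc-attach -n dev

Since containers share the host kernel instead of booting a full OS each, twenty of them cost far less RAM than twenty VMs.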
If you game, modding uses a lot. It can get to the point of needing more than 32 GB, but rarely.
Usually, you’d want 64 GB or more for things like video editing, 3D modeling, running simulations, LLMs, or virtual machines.
I use virtual machines and run local LLMs. LLMs need VRAM rather than CPU RAM; you shouldn’t be doing it on a laptop without a serious NPU or GPU, if at all. I don’t know whether I’ll be using VMs heavily on this machine, but that would be a good reason to have more RAM. Even so, 32 GiB should be enough for a few VMs running concurrently.
That’s fair. I put it there as more of a possible use case than something you should be doing consistently.
Although an iGPU can perform quite well when given a lot of RAM, AFAIK.
Honestly, I think that for many people on a laptop or phone, doing LLM stuff remotely makes way more sense. It’s just too power-intensive to do a lot of that on battery. That doesn’t mean giving up control of the hardware: I keep a machine with a beefy GPU connected to the network and can use it remotely. And something like Stable Diffusion normally requires only pretty limited bandwidth to use remotely.
If people really need to do a bunch of local LLM work (say they have a hefty power source but lack connectivity, or they’re running software that needs to move a lot of data back and forth to the LLM hardware), I’d consider lugging around a small headless LLM box with a beefy GPU alongside the laptop, plugging the box into the laptop via Ethernet or whatnot, and doing the LLM work on the headless box. Laptops are just not a fantastic form factor for heavy number crunching; they have limited ability to dissipate heat and tight space constraints to work with.
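A sketch of what that setup could look like, assuming the box serves an OpenAI-style HTTP API via something like llama.cpp’s llama-server (the hostname, port, and model path here are made up):

    # On the headless box:
    llama-server -m ./models/some-model.gguf --host 0.0.0.0 --port 8080

    # On the laptop, over the Ethernet link:
    curl http://llmbox.local:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"messages": [{"role": "user", "content": "hello"}]}'

Only prompts and completions cross the wire, so even a slow link is plenty.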
Yeah, it is easier to do it on a desktop or over a network; that’s what I was trying to imply. Having an NPU can help, though. Regardless, I’d rather be using my own server than something like ChatGPT.
Photo editing, video transcoding.
k8s
Any memory that’s going unused by apps is going to be used by the OS for caching disk contents. That’s not as significant with SSDs as with rotational drives, but it’s still providing a benefit, albeit one with diminishing returns as the cache grows.
That being said, if this is a laptop and if you shut down or hibernate your laptop on a regular basis, then you’re going to be flushing the memory cache all the time, and it may buy you less.
IIRC, Apple’s default mode of operation on their laptops these days is to just have them sleep, not hibernate, so a Mac user would probably benefit from that cache.
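On Linux you can watch this happening: the kernel reports the page cache separately from app memory, and it gets evicted on demand when apps want the RAM back:

    # "buff/cache" is the disk cache; "available" estimates what apps
    # could still claim, since the cache is evicted on demand.
    free -h

    # Same numbers straight from the kernel:
    grep -E '^(MemFree|MemAvailable|Cached):' /proc/meminfo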
Outside of storage servers and ZFS, no one is buying RAM specifically to use it as disk cache. You’ll also find that Windows laptops are designed to be left in sleep rather than hibernate.