Zero. My office workstation has 48 GB of RAM, my home computer has 64 (I went a bit overboard). I have very bad memories of swap thrashing and the computer becoming totally unresponsive until I forced a reset; if I manage to fill up so much RAM, I very much prefer the offending process to die instead of killing the whole computer.
It's funny how people think they're disabling swapping just because they don't have a swap file. Where do you think mmap()-ed file pages go? Your machine can still reclaim resident file-backed pages (either discarding them if they're clean, or writing them to their backing file if dirty) and reload them later. That's... swap.
Instead of achieving responsiveness by disabling swap entirely (which is silly, because everyone has some very cold pages that don't deserve to be stuck in memory), people should mlockall() essential processes, tune the kernel's swap propensity (vm.swappiness), and so on.
Also, I wish we'd just do away with the separation between the anonymous-memory and file-backed memory subsystems entirely. The only special thing about MAP_ANONYMOUS should be that its backing file is the swap file.
mmap is not swap. It's using the same virtual memory mechanisms to load/dump pages to disk. The policy for when to read and write those pages is completely different.
When the room for memory mapped files gets low enough you get bad thrashing anyway, so the policy difference isn't that important.
Having no swap limits how much you can overburden your computer, but you also hit problems earlier. Here's some example numbers for 64GB of memory: With swap you can go up to 62GB of active program data (85GB allocated and used) before you have performance issues. Without swap you can go up to 45GB of active program data (63GB allocated and used) before you hit a brick wall of either thrashing or killing processes. The no-swap version is better at maintaining snappiness within its happy range, but it's a tradeoff.
It is doing exactly what swap is doing. That it's swap with a different policy doesn't make it not-swap.
Also, that separate policy shouldn't even exist. For LRU/active-list/inactive-list purposes, why does it matter whether a page is anonymous or file-backed? If you need it, you need it, and if you don't, you don't. No reason for anonymous and file-backed memory to be separate sub-sub-systems under vm.
Anonymous memory and files have different access patterns. Files are frequently read sequentially and only once, so there is no need to keep them in memory. When files are read, cached memory pages are put into inactive file LRU first and promoted to active LRU only on the second access.
It is possible to unify things behind a single mechanism, yet apply different policies to different instances of this mechanism depending on circumstances and heuristics. We do not need almost entirely disjoint paging systems in the Linux kernel to notice that some kinds of memory have access patterns different from other kinds of memory. Instead of guessing based on whether someone used MAP_ANONYMOUS, we should observe what a program is actually doing.
All LRUs, file-backed and anonymous, are handled by the same code. There are some conditionals here and there: executable pages are promoted to the active LRU on first access, locked pages (if I remember correctly) too, but all pages go through the same cycle. See this Linux Plumbers conference presentation https://www.youtube.com/watch?v=0bnWQF7WQP0 with the following slides https://d3s.mff.cuni.cz/files/teaching/nswi161/2023_24/08_li...
I'm not an expert, but aren't you just reducing the choice of what pages can be offloaded from RAM? Without swap space, only file-backed pages can be written out to reclaim RAM for other uses (including caching). With swap space, rarely used anonymous memory can be written out as well.
Swap space is not just for overcommitting memory (in fact, I suspect nowadays it rarely ever is), but also for improving performance by maximizing efficient usage of RAM.
With 48GB, you're probably fine, but run a few VMs or large programs, and you're backing your kernel into a corner in terms of making RAM available for efficient caching.
I have 64GB of RAM and 16GB of swap. Swap is small enough it can't get really out of hand.
I have memories from like 20 years ago that even when I had plenty of RAM, and plenty of it was free, I would get random OOM killer events relatively regularly. Adding just a tiny bit of swap made that stop happening.
I'm like 90% sure at this point it's just a stupid superstition I carry. But I'm not gonna stop doing it even though it is stupid.
Same here, though I settled on 32GB of swap because I have a 4TB SSD (caught a good sale on a Samsung EVO SSD at Newegg). But whenever I run `top`, I constantly see:
MiB Swap: 32768.0 total, 32768.0 free, 0.0 used.
I could safely get away with 4GB of swap, and see no difference.
Luckily we're not in the spinning HDDs thrashing a working set in and out of 128 MB of primary memory days anymore. We have laptops that ship with SSDs that read/write at 6 GB/s.
I was experimenting with some graphics algorithm and had a memory leak where it would leak the uncompressed 12 MP image with every iteration. I was browsing the web when waiting for it to process when I wondered why it was taking so long. That's when I noticed it was using 80+ GB of swap just holding onto all those dead frames. It finished and meanwhile it had no noticeable performance impact on whatever else I was doing.
I did similar with my 32GB laptop, but it was fairly flaky for ~4 years, and I just recently put 48GB of swap on and it's been so much better. It's using over 20GB of the swap. There are cases in Linux where running without swap results in situations very similar to swapping too much.
Oh god... My company uses Teams and it is one of the main reasons why I installed so much RAM on my workstation. Part of me wishes that the recent increase in RAM prices will force companies to reduce the ridiculous memory footprint of their software.
I ran with a setup like this for a bit, but I experienced far worse thrashing (and far more sudden onset) than I did with swap enabled. You need to take some extra steps to get a quick and graceful failure on RAM exhaustion.
On systems with 32/64/128 GB of ram, I'll typically have a 1GB or 2GB swap. Just so that the system can page out here and there to run optimally. Depending on the system, swap is typically either empty or just has a couple hundred MB kicking around.
On what OS are you using these settings? I found that Windows will refuse to allocate more virtual memory once the commit charge hits the total RAM size, even if there is plenty of physical memory left to use.
I have 64 GiB of RAM, and programs would start to crash at only 25 GiB of physical memory usage in some workloads because of the high commit charge. I had to re-enable a 64 GiB swap file just to be able to actually use my RAM.
My understanding is that Linux will not fail the allocation, and will instead invoke the OOM killer when too much virtual memory actually becomes active. Not sure how macOS handles it.
Linux. I also don't typically run any individual workload that would consume all system ram. Not loading a giant scientific model into memory all at once for fast processing or anything like that, so mileage will certainly vary based on requirements.
Windows: I set min size to whatever is necessary to make RAM+swap add up to ~2 GBytes per CPU thread, to avoid problems with parallel Visual Studio builds. (See, e.g., https://devblogs.microsoft.com/cppblog/precompiled-header-pc...) Performance is typically fine with ~0.75+ GBytes RAM per job, but if the swapfile isn't preconfigured then Windows can seemingly end up sometimes refusing to grow it fast enough. Safest to configure it first.
macOS: never found a reason not to just let it do whatever it does. There's a hard limit of ~100 GBytes of swap anyway, for some reason, so either you'll never run out, or macOS is not for you.
Linux: I've always gone for 1x physical RAM, though with modern RAM sizes I don't really know why any more
My work laptop currently has 96GB of RAM. 32 of it is allocated to the graphics portion of the APU. I have 128GB (2x) of swap allocated, since I sometimes do big FPGA synthesis runs, which take up 50GB of RAM on their own. Add another two IDEs and a browser, and my 64GB of remaining RAM is full.
Fwiw you’ll see technical reasons for swap being a bad idea on servers. These are valid. Virtualised servers don’t really have great ways to make swap work.
On a personal setup though there's no reason not to have swap space. Your main RAM gets to cache more files if you let the OS have some space to place allocated but never actually used objects.
As in, 'I don't use swap because I don't use all my RAM' isn't valid, since free RAM caches files on all major OSes. You pretty much always end up using all your RAM. Having swap is purely a win: it lets you cache even more.
But then you're putting data that used to be on RAM on storage, in order to keep copies of stored data on RAM. Without any advance knowledge of access patterns, it doesn't seem like it buys you anything.
In 1993, you could refresh the home page of Netscape (Mosaic) every day and it would mention new sites that had been added. That became unmanageable quickly, which is when two dudes from Stanford started a directory.
I've been trying to track down "What's New" for a long long time. If memory serves, there was a daily email titled "What's New on the World Wide Web" - very possibly the source for this monthly summary.
It was a fascinating way to experience the early WWW's exponential growth. It started out small, but once it began to grow, you could see it expanding faster and faster practically in real time.
At first it only took seconds to give the daily list a good once over. Over time it started taking minutes, then 20 minutes or half an hour (if things weren't too busy at work), and eventually it morphed into almost another full time job. There was just no way to keep up. Around that time they stopped sending it out.
From a historical point of view, these daily emails and monthly summaries would be a terrific resource for those interested in the early Web. It's hard to believe now that there was once a time when you could literally check out every new Web site as they came online.
If you exclude commercial, .edu, and .gov sites, it's still doable today to track new unique websites. There are fewer and fewer personal webpages, thanks to Instagram, Facebook, etc.
I once asked for funding from a Scottish business angel.
He confided to me that his biggest mistake in life was saying no on a phone call by a certain Tim Berners-Lee, who was looking for someone to help implement a browser for the "World Wide Web".
"Why did you reject him?" I asked. "'World Wide Web' sounded pretentious." said the man who got independently wealthy by selling a company that produced hypertext software (incl. browsers) for technical documentation running on Sun workstations...
...TBL turned to the NCSA team in the U.S. instead, and the rest is history.
One early tool in this space was Navipress (which AOL bought out and made into AOLpress) which is notable for having been used by a certain Tim Berners-Lee to write a book:
https://unix.stackexchange.com/questions/536436/linux-modify... suggests there may be risks involved in using efivar to configure Apple hardware, as there probably isn't any testing or validation of the variables you set. But if you know what you're doing, you should have roughly the same control as you'd have on native macOS, I believe.
you should just be able to go to data -> all records -> type company name.
By the way, if the company is public, it brings up stock ticker, SEC link and all layoff related news. Plus all historical WARN notices by that company.