Someone should get the ArchiveTeam onto it (if they aren’t already)
I’m in the same boat, but haven’t taken the plunge yet. I’ve been following paperless for a while now, but every time I look at scanners I’m blown away by their prices…
Based on this thread, it’s the deduplication that requires a lot of RAM.
See also: https://wiki.freebsd.org/ZFSTuningGuide
Edit: from my understanding, the pool shouldn’t become inaccessible, though, only slow. So there might be another issue.
Edit2: here’s a guide to check whether your system is limited by zfs’ memory consumption: https://github.com/openzfs/zfs/issues/10251
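For a quick first check on Linux, a Python sketch along these lines compares the current ARC size against its configured maximum (this assumes OpenZFS on Linux exposing /proc/spl/kstat/zfs/arcstats; FreeBSD exposes the same counters via sysctl instead):

    # Minimal sketch: report ZFS ARC size vs. its limit.
    # Assumes Linux + OpenZFS; the kstat path below doesn't exist on FreeBSD.

    def read_arcstats(path="/proc/spl/kstat/zfs/arcstats"):
        stats = {}
        with open(path) as f:
            # first two lines are the kstat header and column names
            for line in f.readlines()[2:]:
                parts = line.split()
                if len(parts) == 3:
                    name, _type, value = parts
                    stats[name] = int(value)
        return stats

    stats = read_arcstats()
    size_gib = stats["size"] / 2**30
    limit_gib = stats["c_max"] / 2**30
    print(f"ARC size: {size_gib:.2f} GiB of {limit_gib:.2f} GiB max")

If the ARC is pinned at its ceiling while dedup is enabled, that’s a hint the dedup table doesn’t fit in memory; the linked issue walks through a more thorough diagnosis.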
I’m not sure whether it makes sense trying to discuss this with you, but let’s try…
You couldn’t know how much traffic you saved, because you didn’t load the ad. The ad could be 1 KB, 1 MB, or 1 GB, but since you never loaded it, you wouldn’t know its size. Without knowing its size, you can’t calculate the savings.
As mentioned elsewhere in the thread, you would have to directly compare two machines visiting the same pages, and even then it’s probably only approximate, because the two machines might be served different ads.
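To make the methodology concrete, the comparison would boil down to something like this Python sketch (the page names and byte counts are entirely made up for illustration; in practice you’d measure bytes transferred per page on each machine):

    # Hypothetical sketch: estimate ad-blocking savings by visiting the same
    # pages on two machines and comparing total bytes transferred.
    # All numbers below are invented, not measurements.

    bytes_with_ads = {"news-site": 4_200_000, "video-site": 9_800_000, "blog": 1_100_000}
    bytes_blocked  = {"news-site": 1_600_000, "video-site": 8_900_000, "blog":   900_000}

    total_with = sum(bytes_with_ads.values())
    total_blocked = sum(bytes_blocked.values())
    saved = total_with - total_blocked
    print(f"approximate savings: {saved / total_with:.0%} ({saved / 1e6:.1f} MB)")

Even this only yields an estimate, since ad networks may serve the two machines different creatives on every visit.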