Hacker News | shadowpho's comments

tradeoffs :)

Love Immich. Runs smoothly on an AMD 4700U ($200) with minimal CPU/RAM usage

I agree, and to me it's simple: a $200 new PC does this task just fine.

Yep and it also runs 30+ more containers without a hitch! Crazy times we live in.

What about the $10b to build the facility (including clean air/water/chemicals/etc)?

Rent a warehouse.

Rent a warehouse in one of the non Han dominated areas of China, where you can use all you want from the city's drinking water supply and pump all your used chemicals into the nearby river. Make sure to totally automate your robotic production line so you don't need to employ any locals.

It would be cheaper to bulldoze the warehouse and start over.

And then spend the next five years building the actual fab around it? Like what’s the plan here.

No you're right. My math is very off.

You should deactivate your bc donation link since you admitted this

Internet says to keep humidity below 50%, so a dehumidifier.

> the NAS in idle consumes more power than my UNAS Pro with 4x8TB HDD and 2X8TB SSD, as well as a Mac mini M1 with a 2TB Samsung T7 SSD, and my 4 access points and 4 protect cameras combined.

Are your drives spun down? 70W is a pretty low bar. The NAS by itself is probably 40W with drives, the Mac mini is another 7-10W (especially at the wall), and now we're at 50W, leaving 20W for 4 APs and the cameras


Drives are spinning. 4x 8TB WD Red Plus, which use 3.4W each at idle; assuming 20W for the NAS itself, that's ~34W (measured 35W). The Mac mini uses 4.6W idle (headless). PoE consumption (measured by the switch) is 37W (I'm aware there's overhead in the AC/DC conversion).

All in all, the total consumption at the wall is 96W, but as I have written in another comment, I was 7-8W off, meaning my quoted setup uses 7-8W more than the 66.7W the OP's NAS idles at.
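For a quick sanity check, here's the same arithmetic written out in Python (the per-component numbers are the quoted estimates above; the 20W NAS base draw is the same assumption as in the comment):

    # Back-of-the-envelope check of the idle power figures above.
    # Per-component numbers are the quoted estimates/measurements, not new data.
    drives_w = 4 * 3.4      # 4x 8TB WD Red Plus, ~3.4W each at idle
    nas_base_w = 20.0       # assumed draw of the NAS itself
    mac_mini_w = 4.6        # M1 Mac mini, headless idle
    poe_w = 37.0            # 4 APs + 4 cameras, as reported by the switch

    subtotal = drives_w + nas_base_w + mac_mini_w + poe_w
    print(f"component subtotal: {subtotal:.1f}W")           # ~75W
    print(f"vs the OP's 66.7W:  +{subtotal - 66.7:.1f}W")   # roughly 8W more
    # The 96W wall reading additionally includes the switch itself
    # plus AC/DC conversion losses.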


>3.6V is considered the nominal voltage, certainly not the low end cut off.

This is not right; 3.6V certainly can be the cut-off, depending on the device and battery.

One thing you are not considering is discharge after the cut-off. The fuel gauge, protection circuitry, cut-off circuitry, and the battery itself all have some discharge.

So you don't want the cut-off to be too low, because then the battery is permanently dead after sitting unused for some period of time.

You want to leave some margin there.

Depending on the product, battery chemistry, and design, I have seen cut-offs anywhere from 3.0V to 3.6V.


Anyone setting cut-off at 3.6V either is using it in some insanely industrial, ludicrous application where you need to handle cases like multiple years in storage... or doesn't know how to properly design their protection circuitry.

The margin is already there at 3.0V. You can still recharge batteries discharged below 3.0V. It just becomes dicey below ~2.5V.


>Anyone setting cut-off at 3.6V either is using it in some insanely industrial, ludicrous application where you need to handle cases like multiple years in storage... or doesn't know how to properly design their protection circuitry.

It really depends on the application, battery size, and leakage. In the consumer electronics world, for example, there's often a requirement to make sure the device turns on after sitting on a shelf for anywhere from half a year to two years.

Then, when you do the math, you end up needing to set the limit somewhere around 3-3.6V.
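A rough sketch of the kind of math I mean, in Python. All of the numbers are made-up placeholders (cell size, reserve fractions, standby current), not figures from any real product:

    # How long can a pack sit after hitting the cut-off before it drifts
    # below the ~2.5V damage threshold? All numbers are illustrative only.
    capacity_mah = 500.0     # hypothetical small consumer-device cell
    standby_ua = 8.0         # fuel gauge + protection + cut-off circuitry
    reserve_fraction = {     # rough guess at capacity remaining below each cut-off
        3.0: 0.03,
        3.3: 0.07,
        3.6: 0.15,
    }

    drain_mah_per_month = standby_ua / 1000.0 * 24 * 30
    for cutoff_v, frac in reserve_fraction.items():
        reserve_mah = capacity_mah * frac
        months = reserve_mah / drain_mah_per_month
        print(f"cut-off {cutoff_v}V -> ~{reserve_mah:.0f} mAh reserve, ~{months:.0f} months before damage")
    # Add self-discharge on top of this, and the higher cut-offs are what
    # get you through a 1-2 year shelf requirement.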

>The margin is already there at 3.0V. You can still recharge batteries discharged below 3.0V. It just becomes dicey below ~2.5V.

The margin isn't big enough for some products. Furthermore, some of the more leading-edge batteries (in terms of energy density) have higher leakage, which requires leaving more margin.


^^ this


>Even fast SSDs can get away with half of their lanes and you’ll never notice except in rare cases of sustained large file transfers

Sorta yes, but kinda the other way around: you'll mostly notice it in short, high bursts of I/O. This is mostly the case for people who use them to run remotely mounted VMs.

Nowadays all NVMe drives have a cache on board (DDR3 memory is common), which is how they manage to keep up those high speeds. However, once you exhaust the cache, speeds drop dramatically.

But your point is valid that very few people actually notice a difference


You're pretty far off the mark about SSD caching. A majority of consumer SSDs are now DRAMless, and can still exceed PCIe 4.0 x4 bandwidth for sequential transfers. Only a seriously outdated SSD would still be using DDR3; good ones should be using LPDDR4 or maybe DDR4. And when an SSD does have DRAM, it isn't there for the sake of caching your data, it's for caching the drive's internal metadata that tracks the mapping of logical block addresses to physical NAND flash pages.


Here's a page comparing 8 modern SSDs' caches; notice how they all fall off once the cache is full.

https://pcpartpicker.com/forums/topic/423337-animated-graphs...


That has nothing to do with DRAM; that would be completely obvious if you stopped to think about the cache sizes implied by writing at 5-6GB/s for tens of seconds before speeds drop. Nobody's putting 100+ GB of DRAM on a single SSD. You get at most 1GB of DRAM per 1TB of NAND.

What those graphs illustrate is SLC caching: writing faster by storing one bit per NAND flash memory cell (imprecisely), then eventually re-packing that data to store three or four memory bits per cell (as is necessary to achieve the drive's nominal capacity). Note that this only directly affects write operations; reading data at several GB/s is possible even for data that's stored in TLC/QLC cells, and can be sustained for the entire capacity of the drive.
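A quick sanity check on the sizes involved (round, generic numbers, not figures pulled from those particular drives):

    # If the fast region were DRAM, how big would the DRAM have to be?
    # Round, generic numbers; not taken from the linked graphs.
    burst_gb_per_s = 6.0    # typical "fast region" write speed
    burst_seconds = 30      # roughly how long the fast region lasts
    print(f"implied cache if it were DRAM: ~{burst_gb_per_s * burst_seconds:.0f} GB")  # ~180 GB

    drive_tb = 2
    print(f"actual DRAM on a {drive_tb}TB drive: ~{drive_tb * 1} GB")  # ~1GB per 1TB of NAND

    # An SLC cache instead borrows TLC capacity at 1 bit/cell rather than 3,
    # so free TLC space can temporarily serve as a very large write cache.
    free_tlc_gb = 1500
    print(f"possible dynamic SLC cache: ~{free_tlc_gb / 3:.0f} GB")  # ~500 GB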


Interesting, that makes sense, but my point still stands: all 8 of those devices fall off 4-8x in speed at some point, meaning that for sustained sequential transfers the speed drops and they could get by with 4-8x fewer lanes.


The performance drop due to SLC caching only applies to writes. Sequential reads (and often, even random reads at sufficiently high queue depth) will still more or less saturate the PCIe link. Most workloads and use cases read a lot more data than they write.
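For reference, the rough link-bandwidth math behind "half the lanes" (theoretical rates after encoding; real-world throughput is somewhat lower):

    # Approximate PCIe link bandwidth by generation and lane count,
    # after 128b/130b encoding; protocol overhead shaves a bit more off.
    gen_gt_per_s = {3: 8.0, 4: 16.0, 5: 32.0}

    for gen, gt in gen_gt_per_s.items():
        for lanes in (2, 4):
            gb_per_s = gt * lanes * 128 / 130 / 8
            print(f"PCIe {gen}.0 x{lanes}: ~{gb_per_s:.1f} GB/s")
    # A gen4 drive on x2 still has ~3.9 GB/s, which is why losing half the
    # lanes is mostly invisible outside sustained sequential transfers.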


2x M.2 is usually reserved for more expensive (>$200) mini PCs, or NAS-oriented mini PCs, which have trade-offs.

ECC RAM is rare because very few people are asking for it, and it costs extra.


The N100/N150/N97 have similar performance. Power seems to be 6-12W at idle, depending on the build. RAM is usually limited to 16GB. Low number of PCIe lanes (so NAS builds are limited). Cost used to be $100, but it has now gone up to $120+.

On the AMD side I have a 4700U and a 5700U: similar idle power (12W), similar cost ($200 with 32GB of RAM, now more expensive). A lot more capable than the N100, at a cost.

I use a whole bunch of mini PCs in my lab; they are so much cheaper to run, both in electricity and upfront cost.
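To put a number on "cheaper to run", a rough idle-cost sketch (the $0.15/kWh rate and the 60W comparison box are assumptions, not measurements):

    # Rough annual electricity cost at idle. The rate and the 60W "older
    # desktop" figure are assumptions for illustration.
    rate_per_kwh = 0.15
    for label, watts in [("N100 mini PC", 9), ("4700U mini PC", 12), ("older desktop", 60)]:
        kwh_per_year = watts * 24 * 365 / 1000
        print(f"{label:14s} {watts:3d}W -> {kwh_per_year:4.0f} kWh/yr, ~${kwh_per_year * rate_per_kwh:.0f}/yr")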


While the N100 is documented with a 16GB limit, it's known to have no problems with a 32GB module. I run one myself.


>Affordable multi-gigabit fiber is widely available in plenty of metropolitan areas in the US

Press X to doubt. Isn't a large part of the country under Comcast (aka crappy monopolistic cable)?


That's why I specified that it's widely available in plenty of metropolitan areas, not a large part of the country. Internet service absolutely is abysmal in the US as a whole, but many large cities do have affordable access to fiber.


I have >1 Gbps service from them.

