Hacker News | DanielHB's comments

> Atomic architecture

> [...]

> Each of these having only one consumer means they’re equivalent of inline code but cost us more to acquire (npm requests, tar extraction, bandwidth, etc.).

It costs FAR more than dependency install time. It has a runtime cost too, and in frontend code that goes through a bundler it also costs extra bundle size and extra build time.


Not really related to the topic, but I recently set up a baby-cam with ffmpeg by just telling ffmpeg to stream to the broadcast address on my home network and I can now open the stream from VLC on any device in the household.

A very heavy-handed solution, but super simple. A single one-liner. Just thought I'd share a weird trick I found.


It could be a little more efficient to use a multicast address. Even if you don't have any special multicast routing set up, all the receiving machines should be able to discard the traffic a bit earlier in the pipeline.


yeah I tried that for a bit but didn't manage to get multicasting to work. My internal network is fine even with the extra broadcast traffic. This is not a permanent installation, eventually I won't need the babycam anymore.

I saw some other solutions using nginx to serve the stream, but that was much more complicated; just broadcasting is a one-liner (in a systemd daemon).
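For reference, a sketch of that kind of setup as a systemd unit. The unit name, device path, broadcast address, port, and codec flags are all assumptions, not the commenter's actual config; adjust for your own camera and subnet:

```ini
# /etc/systemd/system/babycam.service (hypothetical unit)
[Unit]
Description=Broadcast webcam as MPEG-TS over UDP
After=network-online.target

[Service]
# Grab the first V4L2 camera and broadcast to the whole /24 subnet.
# Any device on the LAN can then open the stream in VLC: udp://@:5000
ExecStart=/usr/bin/ffmpeg -f v4l2 -i /dev/video0 \
    -c:v libx264 -preset ultrafast -tune zerolatency \
    -f mpegts "udp://192.168.1.255:5000?broadcast=1"
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The `broadcast=1` option on ffmpeg's udp protocol enables SO_BROADCAST, which is what lets a single sender reach every host on the subnet.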


This stuff is used a lot in browser fingerprinting for tracking purposes. More privacy-focused browsers usually feed randomized info.


Encoding all the atomic data and relative positions of a single human cell would probably take a good chunk of all the hard drives in the world. A cell is not like a silicon chip, where 99% of it is just repeating the same patterns.
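Rough arithmetic behind that intuition (all figures here are order-of-magnitude assumptions, not measured values): a flat per-atom snapshot is already petabytes, and anything that encodes pairwise relative structure grows quadratically.

```python
# Back-of-envelope estimate; both constants are assumptions for illustration.
ATOMS_PER_CELL = 10**14   # commonly cited order of magnitude for a human cell
BYTES_PER_ATOM = 32       # element id + 3 float64 coordinates + padding

snapshot_bytes = ATOMS_PER_CELL * BYTES_PER_ATOM
print(f"flat snapshot: {snapshot_bytes / 1e15:.1f} PB")  # flat snapshot: 3.2 PB

# Encoding *relative* positions (pairwise relations) explodes quadratically:
pairs = ATOMS_PER_CELL * (ATOMS_PER_CELL - 1) // 2
print(f"pairwise relations: {pairs:.2e}")                # ~5e27 pairs
```

Even the flat snapshot is thousands of large drives; the relational encoding is where it gets truly astronomical.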


In my generation (80s-90s), pretty much everyone in Brazil who was born in a hospital was born through C-section. Only recently has the practice of defaulting to C-section begun to fade.


I am at a point where I just installed Bazzite on a mini-itx PC and bought a gyroscope mouse (also called a flymouse) and use steam big picture mode. Access to a proper browser (with adblock) and a proper keyboard more than makes up for the UX problems.

I just wish modern browsers had the (old, pre-Chromium) Opera style of spatial navigation. Gyroscope mice work well enough, but spatial navigation is the main feature I've missed since I switched off old Opera:

https://blog.codinghorror.com/spatial-navigation-and-opera/


Orgs are not ruthless like that; anything less than a certain % of org revenue is not worth bothering with unless it creates _more_ work for the person responsible than fixing it would.

Add some % if the person who gets the extra work from the problem is not the same as the person who needs to fix it. People will happily leave things in a broken state if no one calls them out on it.


> speak to the staff, collate what they have to say, and launder it back to the boss

My wife is a management consultant and this is _exactly_ what she does in half of her projects. But it is a bit more sinister than that: the management consultant feeds the info back to the _top_ bosses, bypassing the middle-management hellscape.

For example, she did a project for a big bank where she interviewed 70 or so people; her main output was a streamlined virtual machine requisition flow (which included merging a couple of teams together and configuring the ticketing system they already had). It used to take devs 6 months to get a VM. I bet the devs were yelling at their middle managers to sort it out, but their managers either couldn't or didn't want to bring it up with upper management along with a plan on how to do it.

I joke that companies could just do this internally: have some people interview the leaf nodes in the org to find the top-down initiatives that would help work get done. But companies simply don't do this.


You mean implement a b-tree live, or on a whiteboard? That is insane.


Basically, any test that involves binary trees (sorry - "btree" is a somewhat different thing).

Realistically, most programmers never see another binary tree after they leave school.

It's a "youth-pass filter": people right out of college will ace them; us oldsters are less likely to do as well (unless we cram for them). In forty years of programming, I never encountered a single one in the wild. A lot of our image processing algorithms involved a decent amount of data crawling, so they had some relation to binary trees (which shows why they teach them), but the way they were handled was much different.
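For anyone who hasn't seen one since school, the archetypal whiteboard exercise looks something like this (a minimal sketch in Python, not any particular interview's question):

```python
# Minimal binary search tree: insert and lookup.
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Insert `key`, returning the (possibly new) root; duplicates ignored."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def contains(root, key):
    """Walk down the tree, going left or right by comparison."""
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

root = None
for k in [5, 3, 8, 1]:
    root = insert(root, k)
print(contains(root, 8), contains(root, 7))  # True False
```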


I am not much of a devops person, but if you run your own DB on a VPS with Docker containers, don't you also need to handle all of this manually?

1) Creating and restoring backups

2) Unoptimized disk access for DB usage (can this even be tuned from inside Docker?)

3) Disk failure due to non-standard use-case

4) Sharding is quite difficult to set up

5) Monitoring is quite different from normal server monitoring

But surely, for a small app, running the DB on one big server is probably still much cheaper. I just wonder how hard it really is and how often you actually run into problems.
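As an illustration of point 1, "handling backups manually" often ends up as a small script like the sketch below. The container name, database user, database name, and paths are all hypothetical, and this assumes a Postgres container dumped with pg_dump:

```python
# Sketch of manual backups for a hypothetical Postgres container.
import datetime
import pathlib
import subprocess

def prune(names, keep=7):
    """Return the dump files to delete, keeping the newest `keep`
    (date-stamped names sort chronologically)."""
    names = sorted(names)
    return names[:-keep] if len(names) > keep else []

def backup(container="pg", outdir="/var/backups/pg", keep=7):
    out = pathlib.Path(outdir)
    out.mkdir(parents=True, exist_ok=True)
    dump = out / f"db-{datetime.date.today():%Y%m%d}.sql"
    # pg_dump runs inside the container; the dump file lands on the host.
    with open(dump, "wb") as f:
        subprocess.run(
            ["docker", "exec", container, "pg_dump", "-U", "postgres", "app"],
            stdout=f, check=True)
    # Simple rotation: keep only the newest `keep` dumps.
    for old in prune(list(out.glob("db-*.sql")), keep=keep):
        old.unlink()
```

Note this covers only the happy path; restore testing, off-host copies, and alerting when the cron job fails are exactly the parts a managed service handles for you.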


My guess is some people have never worked with the constraints of time and reliability. They think setting up a database is just running a few commands from a tutorial, or they're very experienced and understand the pitfalls well; most people don't fall into the latter category.

But to answer your question: running your own DB is hard if you don't want to lose or corrupt your data. AWS is reliable and relatively cheap, at least during the bootstrapping and scaling stages.

