I thought we were arguing for rapid iteration and continuous integration? Why would I NOT live as I preach? How can I teach others if I'm not learning the bugs, quirks, and workarounds BEFORE those I work with?
tl;dr: I agree, and healthy human organizations should never scale beyond ~5 levels of hierarchy, which is totally manageable via basic recursive JOINs in an RDBMS without fancy stored procedures or graph theory.
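To make that concrete: a minimal sketch, assuming a hypothetical employees(id, name, manager_id) table. I'm using SQLite through Python purely for brevity; the same WITH RECURSIVE query works in any modern RDBMS.

    # Minimal sketch: walking a ~5-level org chart with a recursive CTE.
    # The "employees" table and its contents are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, manager_id INTEGER);
        INSERT INTO employees VALUES
            (1, 'CEO', NULL),
            (2, 'VP Eng', 1),
            (3, 'Eng Manager', 2),
            (4, 'Engineer', 3);
    """)

    # Expand the whole chain of command, starting from the top.
    rows = conn.execute("""
        WITH RECURSIVE chain(id, name, depth) AS (
            SELECT id, name, 0 FROM employees WHERE manager_id IS NULL
            UNION ALL
            SELECT e.id, e.name, c.depth + 1
            FROM employees e JOIN chain c ON e.manager_id = c.id
        )
        SELECT name, depth FROM chain ORDER BY depth
    """).fetchall()

    for name, depth in rows:
        print("  " * depth + name)

At ~5 levels you could even skip the CTE entirely and chain a fixed number of self-joins.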
I like to use Dunbar's Number (100-250) to approximate the levels of hierarchy in human organizations. The idea is that these organizations are most efficient when organizational layers don't exceed ~150 elements, due to the implementation details of the human brain.
Basically, you can do log_{150}(N) to get a very rough idea of how complex the organization of N people should be. This works for small startups and entire countries. Of course, startups should probably get comfortable with the idea of teams well before hitting >100 employees. Teams can then scale into departments (with new subteams), and once there are many departments, add a regional layer, a strategic/executive layer, and so on.
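A quick back-of-the-envelope version of that formula (the headcounts are just illustrative):

    # Rough "layers of hierarchy" estimate: log base 150 of headcount.
    # Headcounts below are illustrative, not real data.
    import math

    def org_layers(n, dunbar=150):
        """Approximate number of organizational layers for n people."""
        return math.log(n, dunbar)

    for n in (30, 1_000, 50_000, 330_000_000):
        print(f"{n:>12,} people -> ~{org_layers(n):.1f} layers")

Which spits out roughly 1.4 layers for a 1,000-person company and about 4 for a country the size of the US.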
One interesting fact is that the USA population has roughly increased by a multiple of Dunbar's number since its organizational structure was codified in its Constitution. Perhaps time for another look?
I would argue that Dunbar's Number is the wrong number to use for this. At least not by just naively dropping it in.
Remember, it represents the total number of stable social relationships a person can maintain. If you're looking to allow your employees to have personal lives, you'll want to leave ample room for their family and friends.
Maybe an important question to ask is, how much of your employees' social-emotional carrying capacity is it appropriate to consume? If 10%, then 15-25 is your number. If 20%, then 30-50 is your number.
I would argue that there's a reason for the general size of military formations: a change of size of about half an order of magnitude per level. Much more than that for subordinate organizations makes it difficult to know what those orgs are actually doing. So your product team might be ~8 people. The next level up is 3-5 product teams. Then 3-5 of those. And so forth. It actually scales with remarkably few levels of management, and it also leaves space for free-form connections between people on other teams.
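Rough sketch of that math, using illustrative numbers (8-person teams, a span of ~sqrt(10) ≈ 3.2 units per level):

    # How many management levels a "half order of magnitude per level" span needs.
    # Team size and span are illustrative, matching the comment above.
    import math

    def levels_needed(headcount, team_size=8, span=10 ** 0.5):
        """Levels of management above the individual contributors."""
        teams = headcount / team_size
        return max(1, math.ceil(math.log(teams, span)))

    for n in (8, 100, 1_000, 10_000, 100_000):
        print(f"{n:>7,} people -> ~{levels_needed(n)} levels of management")

Even 100,000 people come out to single-digit levels of management.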
It's definitely a big ballpark, but I think Dunbar's Number is a good place to start. If you have managers spending 10% or less of their lives on management, I don't think the organization will be very healthy. Management should be a high commitment, high compensation role.
It's also definitely an upper limit rather than a lower limit. Big bureaucracies with many layers of management and small teams can work well, but no one can really individually manage 1,000 subordinates.
> One interesting fact is that the USA population has roughly increased by a multiple of Dunbar's number since its organizational structure was codified in its Constitution. Perhaps time for another look?
I have had that exact same idle thought.
In 1813, each of the 182 US Representatives represented on average ~40,000 Americans. Today, each of our 435 Reps stands for roughly 760,000 people. That's over an order of magnitude of growth. To keep the same rate of representation, we'd need over 8,000 Representatives, which is clearly too large a body to get anything done.
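The arithmetic, with approximate population figures:

    # The arithmetic behind the comparison; population figures are approximate.
    pop_1813, reps_1813 = 7_250_000, 182
    pop_today, reps_today = 332_000_000, 435

    ratio_1813 = pop_1813 / reps_1813      # ~40,000 people per Representative
    ratio_today = pop_today / reps_today   # ~760,000 people per Representative

    print(f"1813:  ~{ratio_1813:,.0f} people per Rep")
    print(f"Today: ~{ratio_today:,.0f} people per Rep")
    print(f"Growth in the ratio: ~{ratio_today / ratio_1813:.0f}x")
    print(f"Reps needed to keep the 1813 ratio: ~{pop_today / ratio_1813:,.0f}")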
So we're probably well beyond the point where we could benefit from a large House of Subrepresentatives and then a smaller House of Superrepresentatives that aggregates them.
This is wise, but in the healthcare field, there are some pretty huge trees of things that you need to deal with sometimes. I've been involved with building out a structure a lot like MeSH[0] and some disease trees similar to ICD. Some of my implementations I would definitely do differently now because both the tools and my experience have improved. MeSH's "addresses" even match the ltree syntax, so it would probably make a lot of sense to use that.
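As a rough illustration of why those dotted MeSH-style addresses map so neatly onto ltree (the tree numbers and names below are just illustrative, and the prefix check is what ltree's <@ operator gives you natively, with index support):

    # MeSH-style tree numbers are dotted paths, e.g. "C04.557.470" -- the same
    # label.label.label shape as a Postgres ltree value. The entries below are
    # illustrative, not pulled from the real vocabulary.

    def labels(path: str) -> list[str]:
        return path.split(".")

    def is_descendant(path: str, ancestor: str) -> bool:
        """True if `path` is `ancestor` or sits anywhere underneath it.
        This is the check ltree spells `path <@ ancestor`."""
        return labels(path)[: len(labels(ancestor))] == labels(ancestor)

    terms = {
        "C04": "Neoplasms",
        "C04.557": "Neoplasms by Histologic Type",
        "C04.557.470": "Neoplasms, Glandular and Epithelial",
    }

    # Everything under C04.557 -- in SQL: WHERE path <@ 'C04.557'
    for path, name in terms.items():
        if is_descendant(path, "C04.557"):
            print(path, name)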
Ouch! Then we have the choice between a community-provided binary that is cross-validated by multiple build servers of multiple distros, and a vendor-provided binary which is deliberately different and has legal restrictions. Which one would we consider to be more trustworthy?
They are really sturdy pieces of kit and they're UTF-8 too so great for custom characters etc. However, they're really expensive in the UK (where I'm from) and the interface is seriously old and clunky so there is room for improvement, especially if you're looking for a more customisable solution.
That would be https://quay.io/, but also the internet, since rkt (or rather appc discovery) just relies on the DNS/URL hierarchy to refer to images. Any web server can be a "registry".
I'm one of the people working in the OCI community. Discovery/distribution is something that I care a lot about personally, and the whole "any web server can be a registry" idea is definitely where I want OCI to go with this. As someone who helps develop a distribution (openSUSE / SUSE Linux Enterprise), my opinion is that the current state of image distribution really needs to be improved.
I also recently talked to some CoreOS devs at DockerCon and have started considering extending rkt to better support OCI runtimes (and images, though images are "supported" at the moment). Exciting times.
The name doesn't really work in this project's favor.
UNIX pipes are stream interfaces, whereas this looks to be message-based - that's a fundamental difference.
Named pipes are uniquely identified in a well-defined local namespace, i.e. the filesystem, whereas this seems to be an abstraction on top of Kafka with service discovery TBD.
AF_UNIX on Linux supports SOCK_DGRAM and SOCK_SEQPACKET in addition to SOCK_STREAM, so having message-based interfaces won't be much of a change for some users.
SOCK_DGRAM at least appears to be widely supported (FreeBSD, Illumos, and OpenBSD), and SOCK_SEQPACKET is at least supported by OpenBSD (I'm going by what's in various docs/man pages).
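A quick sketch of what datagram semantics buy you on a local socket, using Python's socket module (Unix-only):

    # Each send() on a SOCK_DGRAM Unix socket is one message, and recv()
    # hands it back whole -- no framing layer needed, unlike SOCK_STREAM.
    import socket

    a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)
    a.send(b"first message")
    a.send(b"second message")

    print(b.recv(4096))   # b'first message'  -- boundaries preserved
    print(b.recv(4096))   # b'second message'
    a.close(); b.close()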
Generally when a protocol is referred to as message-based it's because there's some framing around the payload. That can make it impractical compared to a streaming protocol that just passes data back and forth. In some real-time cases, you can't tolerate the latency of waiting to bundle multiple items into a single payload, but you also can't tolerate the overhead of framing many small messages.
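For instance, a minimal length-prefix framing sketch (hypothetical 4-byte header) shows where that overhead comes from:

    # Minimal length-prefix framing: the 4-byte header is pure overhead,
    # which is what hurts when you're pushing lots of tiny messages.
    import struct

    def frame(payload: bytes) -> bytes:
        return struct.pack("!I", len(payload)) + payload

    def unframe(stream: bytes):
        """Split a byte stream back into the framed messages it contains."""
        offset = 0
        while offset < len(stream):
            (length,) = struct.unpack_from("!I", stream, offset)
            offset += 4
            yield stream[offset:offset + length]
            offset += length

    wire = b"".join(frame(m) for m in [b"x"] * 3)   # three 1-byte messages
    print(len(wire))             # 15 bytes on the wire for 3 bytes of payload
    print(list(unframe(wire)))   # [b'x', b'x', b'x']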
rkt actually uses systemd-nspawn to run and namespace the container. What rkt "adds" before that is downloading, verifying, and managing the container image, and setting up the cgroups for resource limits.