How does the Aeron compare to the Embody? I like chairs that are tough and firm on the back (I also prefer sleeping on the floor and use no padding) and get constantly disappointed by soft furniture.
I have an Aeron at home and an Embody at work. I much prefer the Embody. It's firm but with springiness right where you want it (like in the mid back). Definitely try before you buy though.
I wouldn't really call either of them soft, but the Aeron definitely has more give in certain places. The Embody is pretty evenly firm in all places.
I don't understand why you are downvoted. Filesystem access is another big one! Chrome has the technological advantage and sports an enormous kitchen sink of features. One has to make a hard decision: build something that will only work in one browser, try to cut corners with graceful (or crippling) degradation, or wait potentially a long time until a spec is standardized and implemented everywhere. The last option may mean others hit the market sooner, so it is understandable that we end up with web apps that function in a single browser only. Hopefully this improves in the future.
I built audiomass ( https://audiomass.co ). It is not a dev tool but a general productivity tool: it lets people record, edit and manipulate audio directly in the browser.
I just wanted something that works as promised and respects the user (open source, 70kb total payload size, no ads, no tracking, feature complete, etc)
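For anyone curious how the recording piece works under the hood, here is a minimal TypeScript sketch using the browser's standard MediaRecorder API. It is not AudioMass's actual code, just an illustration of the primitives involved; the clip duration is an arbitrary parameter.

    // Minimal sketch of in-browser audio recording (not AudioMass's actual code).
    async function recordClip(durationMs: number): Promise<Blob> {
      // Ask the user for microphone access.
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
      const recorder = new MediaRecorder(stream);
      const chunks: BlobPart[] = [];
      recorder.ondataavailable = (e) => chunks.push(e.data);

      return new Promise((resolve) => {
        recorder.onstop = () => {
          // Release the microphone and hand back the recorded audio.
          stream.getTracks().forEach((t) => t.stop());
          resolve(new Blob(chunks, { type: recorder.mimeType }));
        };
        recorder.start();
        setTimeout(() => recorder.stop(), durationMs);
      });
    }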
It does not matter that much to the ruler. If they know that they will be fine then they will not care.
OTOH they were elected by their population, so statistically the population should be happy (this is the case in Poland, where the corrupt/hypocritical government has been elected, I think, three times in a row now. This is painful for the part of the population that is against them, but at some point you have the country you have, or you leave).
I believe that consciousness is a simulation our brain is running. Our brain is a computer that runs human simulations.
This is why we anthropomorphize everything. Why we buy our dogs a little dog-house that looks like a people house. Dog doesn't care.
Why we play such different roles - a cruel boss may be a loving father a few hours later. Likewise, why, when people take on a persona and ask "What would X do?", they find courage or get to act differently (a common trick in sales).
It is why religions made god in man's image. Gods that fit human archetypes, roles or feelings (the father, the mother, lust, war) etc.
And of course, why people with multiple personality disorder get to have such vast personality changes and ups and downs.
What is the evolutionary benefit? Empathy.
If I can simulate what you feel, I get to understand what you are going through, I may be kinder to you. And this way we get to cooperate and build a civilization that is not based on swarm mechanics (like ants, or bees).
Consciousness will soon have its Galileo moment. And we will be shocked to discover there is nothing special about our current human-centric world of "consciousness".
Calling consciousness a simulation doesn't resolve anything. A simulation is the imitation of a process, what is the process that consciousness is imitating?
Also, saying there's nothing special about the human-centric world of consciousness is somewhat of a contradiction. Being human-centric is incredibly special: as far as we know, the brain is the only structure in the solar system that exhibits consciousness, and such structures may well exist in only a tiny fraction of solar systems in the universe.
If that isn't special, then nothing is special and the word has no meaning.
I'm also sympathetic to explanations in that vein. In particular, that one of the main pieces of it is our simulation of ourselves - a recursive mirrored lens.
Yes, I am with you. I really like Joscha Bach's theory that our brains hallucinate ourselves, almost as a fantasy character that goes through life. (Check out his appearance on the Lex Fridman podcast; mind-blowing talk by Joscha.)
I'm not sure we're celebrating it. It's something to be noted. Some applications might find it useful, for running data analysis on the blockchain, but I don't foresee a majority of ETH validators running on this service. Best to avoid Amazon in general given their propensity for censorship.
Amazon getting into crypto adds a ton of credibility from an outsider's perspective. Even if you're not a fan of Amazon, this should have a positive impact on crypto as a whole.
I agree with you. However, I've spent years and years trying to compile software that came with cryptic install instructions, or having the author insist that since it works on their machine I'm just doing something stupid. Docker was largely able to fix that.
It's a somewhat odd solution to an all-too-common problem, but any solution is better than living with such an annoying problem. (Source: I made Docker the de facto cross-team communication standard in my company. "I'll just give you a Docker container, no need to fight to get the correct version of nvidia-smi working on your machine" type of thing.)
It probably depends on the space and the type of software you're working on. If it's frontend applications, for example, then it's overkill. But if somebody wants you to, say, install multiple Elasticsearch versions plus some global binaries plus a bunch of different GPU drivers on your machine (you get the idea), then Docker is a big net positive, both for getting something to compile without drama and for not polluting your host OS (or VM) with conflicting software packages.
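To make that concrete, here is roughly what running two pinned Elasticsearch versions side by side looks like, with nothing installed on the host. The image tags and ports are just examples, not anything from the parent comment:

    # Two pinned Elasticsearch versions, each isolated in its own container.
    docker run -d --name es6 -p 9200:9200 -e "discovery.type=single-node" \
      docker.elastic.co/elasticsearch/elasticsearch:6.8.23

    docker run -d --name es7 -p 9201:9200 -e "discovery.type=single-node" \
      docker.elastic.co/elasticsearch/elasticsearch:7.17.9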
Completely agree. The whole compile chain for most software, the reliance on linked libraries, implicit dependencies like locale settings changing behavior, and basically decades of weird accidents and hacks to get around memory and disk size limits can be a nightmare to deal with. If you're using slow dynamic languages or modern frontend bundlers, all the implicit C extension compilations and dependencies can still be a pain.
The list goes on and on. It's bizarre to me to think of this as the true, good way of doing software and to think of Docker as lazy. Docker certainly has its own problems too, but it does a decent job of encapsulating the decades of craziness we've come to rely on so heavily. And it lets you test these things alongside your own software when updating versions and be sure you run the same thing in production.
If docker isn’t your preferred solution to these problems that’s fine, but I don’t get why it’s so popular on HN to pretend that docker is literally useless and nobody in their right mind would ever use it except to pad their resume with buzzwords.
When library versions start to cause problems (like V3.5 having a bug, so you need to roll back to V3.4), that's when ./configure && make starts to break down.
Yeah, it happens with .so files, .dlls ("DLL hell"), package managers and more. But that's where things like containers come in to help: "I tested Library Foo version V3.4 and that's what you get in the Docker image." No surprises from Foo V3.5 or V3.6; you get exactly what the developer tested on their box.
Be it a .dll, a .so, a #include library, some version of Python (2.7 with import six), some crazy version of a Ruby Gem that just won't work on Debian for some reason (but works on Red Hat)... etc. etc.
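As a rough sketch of what that pinning looks like on the image side (libfoo and its version are hypothetical stand-ins for "Library Foo V3.4"):

    # Hypothetical Dockerfile: ship the exact library version that was tested.
    FROM debian:bookworm-slim
    # apt can pin an exact package version with pkg=version.
    RUN apt-get update && apt-get install -y libfoo1=3.4-1 \
        && rm -rf /var/lib/apt/lists/*
    COPY ./app /usr/local/bin/app
    CMD ["/usr/local/bin/app"]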
There are basically two options: carefully maintain up-to-date dependencies (engineer around DLL hell with lots of automated testing and stay well-versed in the changelogs of your dependencies), or compile a bunch of CVEs into production software.
There really isn't any middle ground (except to not use third-party libraries at all).
That assumes you are using free software without a support contract, where the vendor has no incentive to maintain long-term support for libraries by applying only security patches without adding any features to old versions. I understand this goes against the culture of only using the latest, or of "fixing" vulns by upgrading to a more recent version (which may have a different API or untested changes to existing APIs).
That makes sense for a hobbyist community but not so much for production.
In a former job we needed to fork and maintain patches ourselves, keeping an eye on the CVE databases and mailing lists and applying only security patches as needed rather than upgrading versions. We managed to be proactive and avoid 90% of the patches by turning stuff off or ripping it out of the build entirely. For example, with OpenSSH we ripped out PAM and built it without LDAP support, no Kerberos support, etc., and kept patching it when vulns came out. You'd be amazed at how many vulns don't affect you if you turn off 90% of the functionality and only use what you need.
We needed to do this as we were selling embedded software that had stability requirements and was supported (by us).
It drove people nuts: they would run a Nessus scan, do a version check, then look the version up in a database and conclude our software was vulnerable. To quiet the scanners we changed the banners, but people would still do fingerprinting, at which point we started putting messages like X-custom-build into our banners and explained to pentesters that they need to actually pentest to verify vulns rather than fingerprinting and doing vuln DB lookups.
Point being, at some point you need to maintain stuff and have stable APIs if you want long-lasting code that runs well and addresses known vulns. You don't do that by constantly changing your dependencies; you do it by removing complexity, assigning long-term owners, and spending money to maintain your dependencies.
So either you pay the library vendor to make LTS versions, or you pay in house staff to do that, or you push the risk onto the customer.
Aren't we conflating compile complexities with runtime complexities here? There are plenty of open-source applications that offer pre-compiled binaries.
That difference isn't as black and white as you're making it out to be, sometimes it's just a design decision whether certain work is done at compile time or runtime. And both kinds of issues, runtime or compile-time, can be caused by the kinds of problems I'm talking about like unspecified dependencies.
This is why I wish GitHub actually allowed automated compilation. That way we could all see exactly how binaries are compiled and wouldn't need to set up a build environment for each open source project we want to build ourselves.
I am totally on board with the idea of improving productivity. The issue I see is that this is avoiding a deeper problem - namely that the software stack requires a max-level wizard to set up from scratch each time.
Refactoring your application so that it can be cloned, built and run within 2-3 keypresses is something that should be strongly considered. For us, these are the steps required to stand up an entirely new stack from source:
0. Create new Windows Server VM, and install git + .NET Core SDK.
1. Clone our repository's main branch.
2. Run dotnet build to produce a Self-Contained Deployment
3. Run the application with --console argument or install as a service.
This is literally all that is required. The application will create & migrate its internal SQLite databases automatically. There is no other software or 3rd party services which must be set up as a prerequisite. Development experience is the same, you just attach debugger via VS rather than start console or service.
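In concrete terms that amounts to something like the following. The repository URL, app name and paths are placeholders, and dotnet publish with a runtime identifier is shown as one common way to produce a self-contained deployment:

    git clone https://example.com/our-org/our-app.git
    cd our-app
    # Produce a self-contained deployment (no .NET runtime needed on the target).
    dotnet publish -c Release -r win-x64 --self-contained true
    # Run interactively...
    .\OurApp.exe --console
    # ...or register it as a Windows service (sc.exe needs the space after binPath=).
    sc.exe create OurApp binPath= "C:\apps\our-app\OurApp.exe"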
We also role play putting certain types of operational intelligence into our software. We ask questions like "Can our application understand its environment regarding XYZ and respond automatically?"
The issue Docker solves for me is not the complexity or number of steps but the compatibility.
I built a service that is installed in 10 lines and could be run through a makefile, but I assume specific versions of each system library, and I don't intend to test against the hundreds of possible combinations of system dependencies or just assume it will be compatible anyway.
The dev running the container won't be building their own Debian install with the specific versions required in my doc just to run the install script from there; they just instantiate the container and run with it.
The energy cost of keeping sockets open for every tab can get out of control quickly. If users expect it (e.g. multiplayer games or collaboration apps), that might be fine. But fetching a tiny page and then keeping a socket active just in case somebody comes back two hours later and clicks on some link is not so great. Sure, one could close the connection and reopen it when the user comes back, but again it feels like too much machinery for something as simple as a website.
Nonetheless the idea is not bad. I just feel that HTTP/2 with its multiplexing is a nice middle ground, and it comes essentially for free.
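For what it's worth, the "close when idle, reopen on demand" pattern is only a few lines of TypeScript; the endpoint URL and the 60-second idle timeout below are made up for illustration:

    // Sketch: keep the socket open only while the user is actually active.
    const ENDPOINT = "wss://example.com/updates"; // placeholder URL
    const IDLE_MS = 60_000; // arbitrary idle timeout

    let socket: WebSocket | null = null;
    let idleTimer: ReturnType<typeof setTimeout> | undefined;

    function getSocket(): WebSocket {
      if (!socket || socket.readyState === WebSocket.CLOSED) {
        socket = new WebSocket(ENDPOINT);
        socket.onmessage = (e) => console.log("update:", e.data);
      }
      // Any activity resets the idle countdown; close the socket when it expires.
      clearTimeout(idleTimer);
      idleTimer = setTimeout(() => {
        socket?.close();
        socket = null;
      }, IDLE_MS);
      return socket;
    }

    // Only touch the socket when the user actually does something.
    document.addEventListener("click", () => getSocket());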