Or to put it another way: browser technology is asked to do so much that vendors had to go and invent a new language to keep up. So that they could serve ads.
For us ordinary folks trying to write good applications that can be maintained by one person and scale reasonably well, there's justifiable reason to jump off this rollercoaster and work in a more humble environment with modest perf/safety tradeoffs and native code executables (e.g. Go, Basic, Pascal).
The "so they could serve ads" meme is facile, reductionist, sophomoric. Actually, the fact that Google et al. were able to identify a profit function that could be optimized to fund the last 15 years of technology is an incredible achievement. In ~2000 it wasn't at all obvious that the internet could find a robust funding model.
Where the funding comes from is practically irrelevant, especially if it is a feedback function (i.e., an economic phenomenon, not "phone the legislature to fund more research!!1").
New funding models are good, too, of course. But the tone of the anti-ad camp is asinine.
A realpolitik view of tech only holds under the assumption that any new technology is good technology. It leaves no room for balance or sustainability, and I believe the trajectory of the Web is unsustainable and therefore fundamentally doomed despite its near-term wonders.
The SV camp has said nothing to convince me otherwise.
>Where the funding comes from is practically irrelevant, especially if it is a feedback function (i.e., an economic phenomenon, not "phone the legislature to fund more research!!1").
I prefer phoning the legislature -- it's more democratic and less driven by private interests.
Dynamic types, OO with some extension for functional style, inherited Algol syntax with some homegrown innovations, batteries-included design. Where they differ, it's largely in whether they bend towards consistent style (Python) or more late-binding power (Ruby).
Get yourself into a classroom again. It doesn't have to be an ambitious course. It's a familiar structure and you've been out of it long enough that it'll seem refreshing. While you do that, go hit the gym if you aren't already. (Or if you want to do it on a budget, get a set of resistance bands.) Set lots of simple goals with structure and regularity. Journal your progress. This will get you back into the thick of things without the nasty obligations of the workplace - by the holidays you might have a good plan together.
Or in other descriptive terms, Forth could be called "point-free" or "tacit". There is no structure given by the code's context, no named arguments to describe what is in scope - all you have is what is on the stack at the moment of execution, and global variables that the program might access by convention.
Point-free can be a very concise and flexible strategy, but it's also error-prone, since in Forth nothing stops a word from consuming or returning an unbalanced number of arguments and leaking the stack. This class of error is eliminated by the syntax of Algol-style languages, but can be reproduced easily if you build your own stack machine.
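This failure mode is easy to reproduce outside Forth. A minimal sketch in Python - a toy stack machine, not real Forth semantics, with hypothetical word names - where one word's declared stack effect doesn't match its behavior:

```python
# Toy stack machine (hypothetical, not real Forth semantics):
# words are functions that pop their inputs and push their outputs.
stack = []

def push(n):
    stack.append(n)

def add():      # declared stack effect: ( a b -- a+b )
    b = stack.pop()
    a = stack.pop()
    stack.append(a + b)

def bad_dup():  # claims ( a -- a a ) but only pushes once,
    a = stack.pop()
    stack.append(a)  # under-produces: later words see the wrong depth

push(2); push(3); add()    # stack is now [5]
push(4); bad_dup(); add()  # bad_dup left one 4 instead of two, so add()
                           # silently consumes the 5 meant for later code
print(stack)               # [9] -- depth 1, where the caller expected [5, 8]
```

Nothing in the host language flags the mismatch; the error only surfaces later as a wrong depth or a wrong value, which is exactly the leak in question.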
I've cottoned on to the idea of focusing on feedback loops instead of "practices" lately as a way to deal with the kinds of evolutionary changes projects tend to go through.
Practices tend to emerge within a specific project situation as a pragmatic way of addressing concerns. When the practices are communicated across projects, a game of telephone is played, the nuances are lost, and the meaning is eventually replaced with dogma.
Consider which would be more useful: a workout log that suggests how much additional difficulty you should add each week, or a fixed plan found in a magazine that prescribes an exact workout for each week?
This is the struggle I think programmers really face, because the codebase at any moment in time needs an appropriate "workout plan" to successfully reach the next stage. Sometimes a form of cheat can be used to accelerate it towards a goal, but the concrete progress is reliant on a similar formula to progressive overload cycles.
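To make the analogy concrete, here is a toy sketch in Python - the function name, numbers, and increment are all hypothetical - of a feedback-driven suggestion versus a fixed plan: the next load depends on what the log says actually happened, not on the calendar week.

```python
# Hypothetical progressive-overload rule: add difficulty only when the
# logged feedback shows last session's target was actually met.
def next_load(logged_reps, target_reps, current_load, increment=2.5):
    if logged_reps >= target_reps:  # feedback says ready: add difficulty
        return current_load + increment
    return current_load             # otherwise hold steady and retry

print(next_load(10, 8, 50.0))  # 52.5 -- target met, load goes up
print(next_load(5, 8, 50.0))   # 50.0 -- target missed, load holds
```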
This focus on feedback also guides healthy cutoffs between prototyping and production solutions: the production solution only makes sense once prototype learning has been done, and the prototype likewise stops making sense when it conflicts with the demands of feedback.
1. I left it in the hands of other people
2. I failed to keep it accessible to myself
It's easy to lean too far in one or the other direction by leaving stuff on a commercial service or forgotten on a single device without backups. For most folks, a paper wallet in a safe is the appropriate mix, since it follows traditional physical security patterns and ensures some protection from theft or damage. A strong secondary option is to be online but obscure and not advertise where your valuable data rests - perhaps your keys exist on a backup service, but they're tucked away such that an attacker has to think to look for them, and to do some forensics to track down their location. This buys time to hear the alarm bells of "your password was reset" and rotate anything valuable out of the compromised accounts.
Under no circumstances would I keep the money within any of these dedicated services: even though I use Coinbase and other exchanges, it's too easy to employ social engineering and privilege escalation to get in and take everything, so any value stored in them has to be considered "hot", and I only keep the amounts I want to trade on them (which at this moment is $0).
I believe the main size optimization here comes from having only teletype and batch processing interfaces. When you do that, the surface area of the UI, and hence the amount of supporting code, drops tremendously. These early tools also skimped on error messages and checks, so the user experience was generally one of confusion and catastrophic error, despite being small and simple. You really needed the documentation to have any hope of understanding an old system.
Now we have operating systems that go out of their way to automate away everything and present it in real-time with multitasking and custom graphical elements everywhere, while also supporting many more protocols and hardware interfaces - wireless networking, GPU APIs, etc. The biggest growing pains seem to be past - things went from simple and stable to complex and unstable in the 1990s and then to complex and stable but insecure now. There's a lot of room to mature all of these features, but there aren't as many novel ones.
I keep a minimal form of journal for all life events. I briefly describe a thing I did or a thing that happened, and how much energy it took me to go through it on a 0 to 4 scale, where 0 is breathing and 4 is a major life crisis. If I think there was a lesson, I write the lesson too.
This creates a feedback loop for stress reduction: I aim both to mitigate the likelihood of having a crisis and to train a calm response by learning how to deal with the smaller stuff, so that the day is full of 1's. I chuck it all in a text file once a day, and return to it because my todo is also in there, so I always have a backlog. I don't distinguish between work and home tasks, although I might if I were going in to an office.
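As an illustration, here's a toy tally in Python over a hypothetical entry format - the `level | description | lesson` layout is an assumption, just one way such a file could look - showing how the "day full of 1's" goal could be checked:

```python
# Hypothetical journal lines: "level | description | optional lesson"
from collections import Counter

entries = [
    "1 | morning walk",
    "2 | fixed the billing bug | lesson: write the test first",
    "1 | groceries",
]

# Tally how many entries landed at each energy level (0-4);
# a day full of 1's means the higher levels stay near zero.
levels = Counter(int(line.split("|")[0]) for line in entries)
print(levels[1], levels[4])  # 2 0
```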
Long form diaries I don't really engage with because they'd sap relatively more time - I would get caught in trying to make it a storytelling experience.
That is to say, there are probably hundreds of things, off the top of your head, that bother you and probably bother others. To the best of your ability, choose one that you have a clear "thread to pull on" - as in, there is an action you could take that might not be elegant, might not scale, might be politically heated, might put you in a position beyond your understanding, or might ultimately require a team or financing. But you could do it NOW and not just dream about it, consequences be damned. When the potential is scary like that, it means you actually hold a lot of leverage to unleash new forces, just by starting on it and not stopping.
Most of software isn't like that: it's predictable in its design, it automates a thing that was done slower or less effectively before. It fits into the system and stays within the lines. So you also won't find many examples for the particular thread you're pulling on, and that's expected.
If you do this and it's something you personally care about and will pour heart and soul into, you're doing about as much as anyone could hope for. You won't and can't get all of it right - but what people need isn't perfection, so much as a vehicle that will last well enough for the journey.