
I mean, how far are you willing to take that argument? Every decade has just been a new abstraction. Imagine the people who flipped switches or wrote raw assembly complaining that, with your no-effort tools, they don't "understand" you now. Or those who don't "understand" why you use autocomplete and a fancy IDE, preferring a simple text editor.

I say this as someone who cut my teeth on this stuff growing up and watched the evolution: it's both. And at some point it's honestly elitism and gatekeeping. I sort of cringe when it's called a "craft," because it's not like woodworking or something. The process is full of joy, but so is the end result, and the nature of our industry is that the process is ALWAYS changing.

You accumulate a depth of knowledge and watch it wash away in a few years. That kind of change, and the bigger kind of change that AI brings, scares people, so they start clinging to the work like it's some kind of centuries-old trade lol.


Consider federal government remote work. You're not going to make FAANG money, but if you're remote, you don't need it. You need a job that supports your living needs and your mental health goals. For folks who have a STEM degree, the US Patent and Trademark Office is currently paying ~$90k/year for remote patent examiners.

This macro environment will pass; you just need a place to hang your hat until your foundation is poured and cured. It will also help you build your resume while others cannot find junior work. Best wishes.

https://www.usds.gov/apply

https://18f.gsa.gov/join/

https://www.usajobs.gov/search/results/?rmi=true

(Your state government might have great remote opportunities as well; I would encourage you to spend a few hours researching.)


OO code for domain modeling might be, to date, the single greatest source of disillusionment in my career.

There are absolutely use cases where it works very well. GUI toolkits come to mind. But for general line-of-business domain modeling, I keep noticing two big mismatches between the OO paradigm and the problem at hand. First and foremost, allowing subtyping into your business domain model is a trap. The problem is that your business rules are subject to change, you likely have limited or even no control over how they change, and the people who do get to make those decisions don't know and don't care about the Liskov Substitution Principle. In short, using one of the headline features of OOP for business domain modeling exposes you to outsize risk of being forced to start doing it wrong, regardless of your intentions or skill level. (Incidentally, this phenomenon is just a specific example of premature abstraction being the root of all evil.)
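To make the trap concrete, here is a minimal hypothetical sketch (all names invented for illustration): a hierarchy whose base-class contract a later rule change quietly breaks.

    // Hypothetical Discount hierarchy: safe on day one.
    abstract class Discount {
      // Implicit contract every caller relies on: the result never
      // exceeds the input price.
      abstract apply(price: number): number;
    }

    class PercentageDiscount extends Discount {
      constructor(private percent: number) { super(); }
      apply(price: number): number {
        return price * (1 - this.percent / 100);
      }
    }

    // Year two: the business decides some "discounts" can add a handling
    // fee. The people who own the rules don't care about LSP, so it lands
    // in the hierarchy and silently violates the contract at every call site.
    class HandlingAdjustment extends Discount {
      apply(price: number): number {
        return price + 2.5; // no longer "never exceeds the input"
      }
    }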

And then, second, dynamic dispatch makes it harder for newcomers to figure out the business logic by reading the code. It creates a bit of a catch-22 situation where figuring out which methods will run when - an essential part of understanding how the code behaves - almost requires already knowing how the code works. Not literally, of course, but reading unfamiliar code that uses dynamic dispatch is an advanced skill, and nobody enjoys it. This problem can be mitigated with documentation, but that solution is unsatisfying. Just using procedural code and banging out whatever boilerplate you need to get things working with static dispatch creates less additional work than writing and maintaining satisfactory documentation for an object-oriented codebase, and it comes with the added advantage that it cannot fall out of sync with what the code actually does.
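For contrast, a sketch of the procedural alternative (types invented for illustration): with static dispatch the whole behavior is readable in one place, at the cost of some boilerplate.

    // Discriminated union instead of a class hierarchy.
    type Invoice =
      | { kind: "domestic"; subtotal: number }
      | { kind: "export"; subtotal: number };

    // Statically dispatched: a newcomer can read every branch right here,
    // instead of hunting down which override runs for which subclass.
    function computeTotal(invoice: Invoice): number {
      switch (invoice.kind) {
        case "domestic":
          return invoice.subtotal * 1.08; // hypothetical tax rule
        case "export":
          return invoice.subtotal; // no tax applied
      }
    }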

Incidentally, Donald Knuth made a similar observation in his interview in the book Coders at Work. He expressed dissatisfaction with OOP on the grounds that, for the purposes of maintainability, he found code reuse to be less valuable than modifiability and readability.


This is a great book and I quote this 1998 article she wrote for Salon all the time: https://www.salon.com/1998/05/12/feature_321/

"The dumbing-down of programming"

> The computer was suddenly revealed as palimpsest. The machine that is everywhere hailed as the very incarnation of the new had revealed itself to be not so new after all, but a series of skins, layer on layer, winding around the messy, evolving idea of the computing machine. Under Windows was DOS; under DOS, BASIC; and under them both the date of its origins recorded like a birth memory. Here was the very opposite of the authoritative, all-knowing system with its pretty screenful of icons. Here was the antidote to Microsoft's many protections. The mere impulse toward Linux had led me into an act of desktop archaeology. And down under all those piles of stuff, the secret was written: We build our computers the way we build our cities -- over time, without a plan, on top of ruins.

I repeat the last sentence to my students all the time ("We build our computers the way we build our cities -- over time, without a plan, on top of ruins.")

There's no way to understand why our computers work the way they do without understanding the human, social, and economic factors involved in their production. And foregrounding the human element often makes it easier to explain what's going on and why.


If you're not sure about the tests, you can use the quick framework in this comment to help you find your letters.

Extraverted (E) vs Introverted (I)

“E” generally means gaining energy from other people, while “I” means people drain your batteries. This one is not always immediately obvious. In general, "E"s talk more in groups of people and "I"s think more. Even though I spend a lot of time at home with my wife, I'm an E - I get energy from social interactions.

Sensing (S) vs Intuiting (N)

This is about how you process new information. S people see what's actually in front of them - they think and talk in specifics. In general, S types are better at detail-oriented work.

N types intuit things, so they sometimes aren't great at focusing on what's in front of them, but they are great at coming up with ideas and next steps based on what they see.

Feeling (F) vs Thinking (T)

Everyone feels and everyone thinks. A good way to judge this is people's reactions to situations. Feelers react with empathy first, thinkers react with problem solving first.

Very basic example: your friend comes in with their arm bleeding. What is your very first reaction?

"Oh no! What happened?" - Feeler

"You should go to the hospital!" or "Let me get something to wrap that", etc etc.

In general, feelers are more likely to feel empathy for someone, even if they think that person is dead wrong or disagree with them.

Judging (J) vs Perceiving (P)

This is about how you make decisions. And it has nothing to do with the dictionary definitions of judging and perceiving.

Some defining traits of Js:

Achieving the goal is more important than the process.

You are comfortable making decisions with limited information.

For Ps:

Being true to your moral system is more important than achieving the goal.

You prefer to collect more information before making decisions.

The side effect of these two traits is that Js tend to have steadier lives with more commitment, while Ps tend to have a broader and more varied range of life experiences.

---

Lastly, the temperaments:

ExxJ - Organizes people

IxxJ - Keeps systems running, also good at absorbing and teaching information.

ExxP - Collectors of experiences, achievements, pleasures, etc.

IxxP - Being true to your convictions

---

Now, people who study function stacks are going to shit on this comment, saying the letters don't mean anything and it's all about functions like Extraverted Thinking. But I find these letter rules make a great shortcut for 97% of people.


1. Are you a manager? If so, you will be limited in how successful you can be at this.

2. Use something like Clockwise to compress meeting times

3. Block your time so people literally can't schedule you

4. Ask for an agenda

5. Actively challenge status update meetings which serve no one. Use tools where you can pull the data. Or ask to have the data pushed to you (email). If this can't be done, work to fix that problem instead.

My biggest piece of advice? Say no to 30-minute meetings. Ask for longer meetings. This is counterintuitive, I think, but 30-minute meetings give people enough time to get started talking about something and then no ability to finish the discussion. They become filled with "let's circle back" or "let's not get into the weeds" type chatter. No, let's get into the damn weeds and, as a result, get on the same page. For certain things, I demand 2-hour meetings so we can actually hash stuff out.


Sorry, free speech isn't really a human right, even in HN. I definitely haven't earned it. Maybe in the future.

I work on multiple small multiplayer games. We have shared modules across the front and back end. We split our code into what is game specific, what is library level and what is framework level.

We got the simple stuff right the first time, but it's not until project #3 that we are actually building the framework properly.

A library exposes utilities that an application chooses to use.

A framework constrains the application to a precise system and offers a lot of power within that system.

Frameworks are the most risky to write.

You should always prefer libraries over frameworks until you are certain not only of the problem domain, but also of the future of the company.
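A small TypeScript sketch of the distinction (the handler interface is invented for illustration):

    // Library: utilities the application chooses to call; we stay in control.
    import { format } from "date-fns"; // assumes date-fns is installed
    const stamp = format(new Date(), "yyyy-MM-dd");

    // Framework: inversion of control. The application supplies code that
    // conforms to the framework's contract, and the framework decides when
    // (and with what logging, error handling, etc.) that code actually runs.
    interface ApiHandler {
      validate(input: unknown): boolean;
      execute(input: unknown): Promise<object>;
    }

    const routes = new Map<string, ApiHandler>();

    function registerRoute(path: string, handler: ApiHandler): void {
      // a real framework would invoke validate/execute from its own
      // request loop, wrapping them in its constraints
      routes.set(path, handler);
    }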

On our first project, we really just prioritized features and time to market.

When I was designing the Go-based backend for our newest project, I spent almost two weeks iterating on the design of the framework: the set of constraints around what an API call would do. I implemented a system for syncing JSON objects between the client and server - a system I had been thinking about for four years. It works really well and felt good to implement.

The constraints I created offer us a lot of power. Logs are automatic and consistent. We can run analytics on the diff of every single player interaction. The constraints guide developers into writing performant code.
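A minimal sketch of that kind of constraint (all names invented; our real system is in Go, this is just the shape of the idea): every call goes through one wrapper, so consistent logs and per-interaction diffs don't depend on developer discipline.

    type PlayerState = Record<string, unknown>;

    // Shallow diff of two JSON-ish states: which keys changed, and to what.
    function diff(before: PlayerState, after: PlayerState): PlayerState {
      const changed: PlayerState = {};
      for (const key of new Set([...Object.keys(before), ...Object.keys(after)])) {
        if (JSON.stringify(before[key]) !== JSON.stringify(after[key])) {
          changed[key] = after[key];
        }
      }
      return changed;
    }

    // Every API call is forced through this wrapper; handlers can only
    // transform state, so logging and analytics come for free.
    async function handleCall(
      name: string,
      state: PlayerState,
      handler: (s: PlayerState) => Promise<PlayerState>,
    ): Promise<PlayerState> {
      const after = await handler(structuredClone(state));
      console.log(name, JSON.stringify(diff(state, after))); // automatic, consistent
      return after;
    }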

However, there are flaws in the framework, and those flaws could become landmines. There are also conceivable designs the framework cannot represent; I have a strong opinion on how to extend the framework to support them, but another dev might implement such a feature in a spaghetti-code way. And I can't really justify spending another week iterating on those flaws.

When I was pondering where to draw the line on iterating at the framework level, I reminisced fondly about how simple the first product's backend is: each API call written as a unique snowflake, with all its reads, writes, and errors in a straight line, divided into simple functions with no rules. But then I remember the flaws: the database load is high; reads and writes happen inside deep function calls; the same record is read multiple times in a single API call; the response structure is inconsistent, and the client has to cope with that; the logs are incomplete; and we are unable to comprehensively analyze our players' data.

Abstractions are useful. It's hard to get them right. You're better off dumping all the Legos on the floor before you try to sort them.


Thanks for sharing this paper. As a music language designer and developer (https://glicol.org/), I have a special feeling about this paper as the research gap mentioned in the abstract is exactly one of my goals: to bridge the note representation and sound synthesis. And there are other axes to consider: real-time audio performance, coding ergonomics, collaborations, etc. There is no doubt that these languages have now become musical interfaces and part of instruments (laptops). And they have now another role: the medium between humans and AI.

TDD can be great if you know exactly what you're building and what you must test for, but it gets in the way when you're rapidly prototyping and exploring.

Just don't be dogmatic about practices like TDD. It's always a tradeoff. Being a good engineer means understanding when the tradeoff is worth it and when it isn't. Be wary of development teams, or people, that force you to practice certain methodologies regardless of context.


> Libraries, UI frameworks, build systems, preprocessing tools, the JS community seems to have a knack for rebuilding everything from the ground up with almost zero backward compatibility with a frequency I haven't seen anywhere

The modern web industry isn't that old yet. Give it time. We've been writing C++ since 1985.

React is 10 years old. Vue is 9 years old. Angular is 11 years old. The only "real" new kid on the block is Svelte, which is 7 years old now.

The web constantly strives towards simplicity. React was developed because AngularJS was this huge enterprise framework of global state; React includes immutability to make things simpler. Vue was developed because its creators saw React (then class-based) as too complicated, and it's loved to this day, even by the HN crowd, for its simplicity. Svelte, again, was developed because its developers saw the possibility of making DOM updates simpler. Again, the HN crowd and its users love it, because Svelte really is dead simple. It's been my drug of choice for 2 years now, and I'm seeing unbelievable productivity among friends and colleagues who choose to use it.

The amount of libraries you see can be explained by JS's huge open source crowd, perhaps the largest of any language. There is no controlling entity, so people are free to write their own open source abstractions. We as an industry should see this as a good thing, because there are mountains of code now available to everyone, free of charge under MIT licensing. Startups can write applications that reach millions and generate billions without worrying about Qt licensing costs or paying for a Delphi compiler toolset. They need one codebase for an application that runs virtually the same on every device, mobile, tablet, or desktop alike.

> Another example, I recently had to upgrade a Vue 2 application to Vue 3, because Vue 2 is going EOL this year. Well, apparently that means I also had to change the build system from Webpack to Vite

Again, Vite strives for simplicity. Vite is so simple that it took the industry by storm. Gone are the days of webpack 4 era configuration files that made you want to rip your hair out. Vite was written by Evan You, the man behind Vue, which could give you an indicator of the quality of the tool.

I hate working on our C++/Qt enterprise product, because each change takes ages for me to see and verify. If that product were a web app (which we are currently looking into), I could see changes instantly, with application state persisting through recompilations.

Grizzled seasoned developers have longed for a native application toolset that runs on every device for decades. Some just miss that web browsers do exactly that, with the added benefit of being a near perfect sandbox. Of course it's not perfect (yet?), but perfect is the enemy of good.


The main problem with ORM is the "object" part. I feel object-oriented programming had its chance and it failed. Both event sourcing and relational models (i.e. "table modeling") are better models for programming systems.

So by using an ORM, you are (at least in the name, and as traditionally done) taking a better programming model (relational models) and mapping it to an inferior one (OOP) -- also getting a ton of leaky abstractions involved too.

I wish there were more "ORMs" that were made with the assumption that Object-Oriented Programming is BAD, and started out with assumptions of wanting to program either in an event sourcing model and/or a relational model, and see where that would take things.
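As a sketch of what relational-first could look like (table and column names invented): rows stay plain records shaped like the tables, and behavior lives in functions over those records rather than in mapped objects.

    // Rows mirror the tables directly; there is no identity map, lazy
    // loading, or inheritance to leak through the abstraction.
    interface OrderRow  { id: number; customer_id: number; placed_at: string }
    interface OrderLine { order_id: number; sku: string; qty: number; unit_price: number }

    // Derived values are computed from rows, not cached on an "Order" object.
    function orderTotal(lines: OrderLine[]): number {
      return lines.reduce((sum, l) => sum + l.qty * l.unit_price, 0);
    }

    // "Joins" are explicit functions over plain data.
    function linesFor(order: OrderRow, allLines: OrderLine[]): OrderLine[] {
      return allLines.filter(l => l.order_id === order.id);
    }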

Related:

Fred Brooks, The Mythical Man Month (1975): "Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowchart; it'll be obvious."

Out of the tar pit paper: https://curtclifton.net/papers/MoseleyMarks06a.pdf

--

My take on the software industry today is that OOP failed to produce good and maintainable code, but instead of fixing that, we are adding microservices to try to contain the damage done by OOP.

If more people moved to event sourcing, functional code, and a focus on data over code, there would be much less need for microservices (or they would simply become less relevant).


Normally I finish Bruce's essays with a sense of clarity, having read a sharp analysis. Not that there's anything technically amiss with this one - it's all terrifyingly clear - but I think he bit off a little more than he could chew. The result is indigestible. I personally would have split this into several shorter pieces.

When it's all put together this way, though, I can't help thinking we've shot ourselves in the foot. As computer scientists and developers, we've probably already lost control.

We owe a debt of honesty to the world to say so now and stop pretending otherwise. Then we can, as a society, revisit Postman's Seven Questions:

1. What is the problem that this new technology solves?

2. Whose problem is it?

3. What new problems do we create by solving this problem?

4. Which people and institutions will be most impacted by a technological solution?

5. What changes in language occur as the result of technological change?

6. Which shifts in economic and political power might result when this technology is adopted?

7. What alternative (and unintended) uses might be made of this technology?


If you’re really interested in polyhedra and how to code them, you might want to check out this resource: https://polyhedra.tessera.li/

I use the following prompt (just an example; in a real scenario I would describe the situation in greater detail).

Example of prompt:

Three Minute Therapy is based on the teachings of Albert Ellis, who developed Rational Emotive Behavior Therapy, a foundation of Cognitive Behavior Therapy. It is a short-term, low-cost process that is most effective if you are looking for a scientific, in-the-present, active treatment for coping with life's difficulties. Use the A-B-C-D-E-F flowchart described in the Three Minute Therapy Exercise (TME) to address the following concern:

I made a mistake at work and now people are making fun of me.

Example of response:

A: Activating Event: I made a mistake at work.

B: Beliefs about the event: I am embarrassed that I am being made fun of.

C: Consequences: I feel anxious, ashamed, and inadequate.

D: Dispute: My mistake does not define me. Mistakes are normal and part of life. Everyone makes mistakes, and I can learn from this and move on.

E: Effective New Belief: I am capable and this mistake does not define me.

F: New Consequences: I am able to take responsibility and learn from this experience, and not feel embarrassed or ashamed.


You might be interested in these YouTube archive scripts: https://github.com/TheFrenchGhosty/TheFrenchGhostys-Ultimate...

I agree it's gotten too complicated. In my mind, the evolution has gone like this:

1) Plain HTML/CSS, built using desktop tools like FrontPage/Dreamweaver

2) Server-side scripts (CGI/ASP/ColdFusion) to dynamically generate HTML/CSS. JavaScript or VBScript for "Dynamic HTML".

3) JavaScript wins, VBScript dies. JavaScript is idiosyncratic between browsers. Flash fills the niche for dynamic client UX that generally works on most browsers.

4) iPhone happens. Safari won't support Flash, and devices that do support Flash or a subset of it have terrible battery life.

5) JavaScript has matured. jQuery patches the remaining cross-browser issues.

6) Server-side frameworks and platforms like ASP.NET, Ruby on Rails, PHP, and NodeJS mature.

7) JS further matures. Cross-browser code without jQuery becomes feasible, but binding to JSON is still clunky. Frameworks like Angular and Vue help with data binding and other common problems in developing JS/CSS/HTML clients.

8) If you solve enough problems, you end up with a framework to build a complete client, transpiled from various languages (TypeScript, SASS/LESS, some markup). Angular, Vue, React, and other SPA frameworks evolve further.

9) Some UX/SEO/perf issues with SPA approach. SPA frameworks evolve to support server-side rendering.

10) Static sites generated by frameworks reduce the need for a server at runtime. Build the site and host it on cheap CDNs.

...and this is where we are now. As someone more used to ASP.NET + Vue/Angular, I'm still having a hard time transitioning to server-side rendering, Webpack, etc. It just feels like too much ceremony and too many dependencies. I think the industry is waiting for some solution that is low-dependency, widely adopted, and open. I haven't found my perfect platform yet, but I feel it looks like this: put a few files in source control and very easily spin up sites that look okay by default but are easy to customize. It supports building sortable/filterable grids, lists, and forms against some server API (REST/GraphQL/etc.) with minimal code, and it simplifies auth. It also has a rich ecosystem of components.


I've seen this happen so much with IT people it became a bit of a cliché: around 35, pure IT is not enough anymore.

IT people just drop out. The simple cases become managers or architects. The more advanced cases start a bakery or go work in a call center. One of the most extreme cases was a very intelligent, very cynical, very anti-religion guy who just quit without warning and joined the Hare Krishnas. We got photos of him in red clothes doing some kind of ritual. Huh?

A big part of it, for me, is that IT just doesn't learn. Every 5 years a new generation pops up, invents completely new tooling, and makes all the mistakes of the previous generation. Now your knowledge is obsolete, and after you relearn everything, your tooling is worse than where you started. Enter a few years of slow tooling maturation with a very predictable outcome, after which a new generation pops up, declares the existing stuff too complicated, and reinvents everything again. 35 is 4 or 5 of these cycles, bringing to the fore the huge, wasteful uselessness of it all. "Learning your whole life" is a nice slogan, but it becomes very pointless.

The survivors who continue in IT deal with it somehow. You enter a new cycle knowing there will be change but not much advancement, and you don't learn the stack as deeply as you used to. You get a life outside IT: kids, hobbies, social events. You let the youngsters run before you, smile when they do better than your old tech would, and compare with your older tools when they get stuck. And you keep some of your tech enthusiasm just for the hell of it.


(going with the definition that frameworks are "your code gets called into by the community code" where libraries are "you call into the community code", please correct me if I'm wrong)

There are some very nice FP frameworks, for example Phoenix LiveView, or Phoenix in general. Elixir has lots of mini-frameworks (like the Enumerable and Access protocols). They're a joy to work with and pretty easy to understand. I'm not really sure what constitutes a plugin, but I've written what I would consider "plugins" for the BEAM VM (loaded at runtime, with an exposed API behaviour called into by the system), and they're pretty smooth and easy to handle.

Arguably all of the BEAM OTP (standard library) and the patterns that packages expose for building persistent services are FP frameworks. If I may make a more apples-to-apples comparison of the Erlang BEAM framework to the BeOS C++ framework (both highly concurrent, message-passing actor frameworks): while I loved programming for BeOS, IMO doing everything functionally is a far, far smoother experience than doing everything with objects. Both I enjoyed far more than programming against the Dalvik (Android) Java framework (though that was circa 2009).
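A rough TypeScript analogue of that behaviour pattern (names invented; the real thing is OTP): the framework owns the loop and the state, and the "plugin" is just a record of functions it calls into. No subclassing required.

    // Framework side: defines the contract and drives the loop.
    interface Plugin<S> {
      init(): S;                        // build the initial state
      handle(state: S, msg: string): S; // return the next state
    }

    function run<S>(plugin: Plugin<S>, messages: string[]): S {
      // community code calling into your code, functionally
      return messages.reduce((s, m) => plugin.handle(s, m), plugin.init());
    }

    // Application side: plain functions, no classes.
    const counter: Plugin<number> = {
      init: () => 0,
      handle: (n, msg) => (msg === "inc" ? n + 1 : n),
    };

    // run(counter, ["inc", "inc", "noop"]) === 2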

Anyway, having experience in all four quadrants, I don't think it's as simple as "frameworks/plugins -> OOP; libraries -> FP".

