Hacker News | FlyingAvatar's comments

I am a former Director of Technology for a small/medium business who transitioned to freelance work three years ago.

Open to fractional CTO roles (particular interest in non-profits), AI/LLM process integration, and web/mobile app development projects.

  Location: Rhode Island, USA or REMOTE
  Remote: Yes
  Willing to Relocate: Mostly no, but possible for the right opportunity
  Technologies: Vue/Vite/Pinia, HTML/CSS/JS, Python, AWS (S3/Lambda/CloudFront/Dynamo/RDS)
                OpenAI/Anthropic APIs, Security/PCI audits and compliance 
  Resume: https://www.linkedin.com/in/andrewjbent/
  Email: gmail - andrew.bent


Disclaimer: I'm not the OP, and there are certainly places where using recursive type definitions is justified.

My interpretation of OP's point is that excessive complexity can be a "code smell" on its own. You want the complexity of the solution to match the complexity of the job, as well as both the team that is building it and the one that is likely to maintain it.

As amused as I am by the idea of a dev team being debased by the inelegance of basic bitch programming, the daily reality of the majority of software development in industry is "basic bitch" teams working on "basic bitch" problems. I would argue this is a significant reason why software development roles are so much at risk of being replaced by AI.

To me, it's similar to the choice one faces as one's vocabulary improves. Knowing and using more esoteric words might allow adding nuance to ideas, but it also risks excluding others from understanding them; or, more wastefully, it can become intelligence signalling more than useful communication.

tldr: Complexity is important when it's required, but possibly detrimental when it's not.


I don't really buy the comparison. If you're really unlucky, you can get cancer from a "safe dose" of radiation.

Low exposures to both are statistically less likely to hurt you than large doses. We pick a line to call "safe", but complete safety is not guaranteed in either case.


There is a natural level of radioactivity which the body is adapted to compensate for. Small additional doses of radiation can therefore be neglected. This is not true for substances like asbestos.


There is even the "hormesis hypothesis" which posits that low levels above background might be beneficial for human health.


There are actually naturally occurring asbestos fibers in the air, caused by weathering of asbestos-containing rocks.


Talking to an audio-enabled LLM is definitely "simpler" in terms of device interaction than navigating menus and such. Also having less GUI focus would feel simpler to me.

I find myself missing the experience of earlier iPhones, when it didn't feel like I had so much crammed into my phone.

I can imagine using a device that I interact with primarily by talking with it, and the GUI is secondary or non-existent. For the bulk of what I use my phone for other than consuming video / doom-scrolling (which I could use much less of anyway), I think a voice interface would be preferable.

Initially "Apple Intelligence" was very exciting to think about, in that having a Siri you could actually talk to would open up a lot of possibilities, but we've seen essentially no progress in that direction.


A speech- or LLM-based device is only simpler if it works perfectly. As it stands today, it's far from perfect. When it makes a mistake, if there is nothing to fall back on, I would think that would be very frustrating. I run into these types of issues on a daily basis with current LLMs.

I would liken it to a non-responsive touch screen. The magic of the modern smartphone evaporates if it stops responding to touch.

I miss the early iPhone as well. I actually set up my Home Screen to mirror the original iPhone's app layout when iOS 26 was released. I still have other apps in the App Library if/when I need them, but I have my basic setup the way things were in 2007… at least for now. I've actually liked it quite a bit so far.

I’m curious if Siri with Apple Intelligence will live up to some of what they showed in the ads last year, but at this point I need to experience it working with zero issues for an extended period of time before I even start to think about an AI-first device, let alone an AI-only one. I haven’t seen AI from anyone that can perform at that level. Much like full self-driving, I feel like this is going to be something that is perpetually five years away.


Agreed that if the primary UI is voice, it will need multiple nines of success to avoid being frustrating.

Even so, I still frequently use Siri now despite it being much less successful than that.


Yeah, the convenience of pairing and switching is something I didn't think of.


It's pretty difficult to imagine.

Apple did a ton of work on the power efficiency of iOS on their own ARM chips for iPhone for a decade before introducing the M1.

Since iOS and macOS share the same code base (even when they were on different architectures) it makes much more sense to simplify to a single chip architecture that they already had major expertise with and total control over.

There would be little to no upside for cutting Intel in on it.


Isn't it also easier to license ARM, given that licensing is the whole point of Arm's business?

It's not like Intel or AMD are known for letting other customize their existing chip designs.


Apple was a very early investor in ARM and is one of the few with a perpetual license of ARM tech


And an architectural license that lets them modify the ISA, I believe


Intel and AMD both sell quite a lot of customized chips, at least in the server space. As one example, any EC2 R7i or R7a instance you have is not running on a Sapphire Rapids or EPYC processor that you could buy, but on one customized for AWS. I would presume that other cloud providers have similar deals worked out.


I have wondered this as well, and my best guess is so that two times can be diffed without converting them to a signed type. With 64 bits especially, the extra bit isn't buying you anything useful.
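To illustrate the point, here is a small hypothetical Python sketch (Python integers are arbitrary-precision, so unsigned 64-bit arithmetic is simulated with a mask): diffing an earlier unsigned timestamp against a later one wraps around to a huge value, while the signed diff is just negative.

```python
# Simulate uint64 wraparound; Python ints don't overflow on their own.
MASK64 = (1 << 64) - 1

def diff_u64(a: int, b: int) -> int:
    """a - b with unsigned 64-bit wraparound semantics."""
    return (a - b) & MASK64

earlier, later = 1_000, 2_000  # e.g. seconds since some epoch

print(earlier - later)           # signed diff: -1000, as expected
print(diff_u64(earlier, later))  # unsigned diff wraps: 18446744073709550616
```

With a signed 64-bit type, the subtraction just works in either direction, and the lost top bit only matters for dates hundreds of billions of years out.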


The vast majority of internet bandwidth is people streaming video. Shaving a few megs from a webpage load would be the tiniest drop in the bucket.

I am all for efficiency, but optimizing everywhere is a recipe for using up the resources to actually optimize where it matters.


In that spirit I have a userscript, ironically called Youtube HD[0], that with one edit sets the resolution to 'medium', i.e. 360p. On a laptop that's plenty for talking-head content (the softening is actually nice), and I only find myself switching to 480p if there's small text on screen.

It's a small thing, but as you say internet video is relatively heavy.

To reduce my AI footprint I use the udm=14 trick[1] to kill AI in Google search. It generally gives better results too.
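For the record, the trick is just an extra query parameter. A tiny Python helper (the function name is mine, not any official API) to build such a URL:

```python
from urllib.parse import urlencode

# udm=14 selects Google's plain "Web" results view, with no AI overview.
def web_only_search_url(query: str) -> str:
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": 14})

print(web_only_search_url("hacker news"))
# https://www.google.com/search?q=hacker+news&udm=14
```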

For general web browsing the best single tip is running uBlock Origin. If you can master medium[2] or hard mode (which will require un-breaking/whitelisting sites) it saves more bandwidth and has better privacy.[3]
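For reference, and hedging that this is just my reading of the Blocking-mode wiki linked at [3]: medium mode amounts to adding two dynamic filtering rules under uBlock's "My rules" pane, then un-breaking individual sites with per-site noop rules such as `example.com * 3p-script noop`.

```
* * 3p-frame block
* * 3p-script block
```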

To go all-out on bandwidth conservation, LocalCDN[4] and CleanURLs[5] are good. "Set it and forget it," improves privacy and load times, and saves a bit of energy.

Sorry this got long. Cheers

[0] https://greasyfork.org/en/scripts/23661-youtube-hd

[1] https://arstechnica.com/gadgets/2024/05/google-searchs-udm14...

[2] https://old.reddit.com/r/uBlockOrigin/comments/1j5tktg/ubloc...

[3] https://github.com/gorhill/ublock/wiki/Blocking-mode

[4] https://www.localcdn.org/

[5] https://github.com/ClearURLs/Addon


I've been using uBlock in advanced mode with 3rd party frames and scripts blocked. I recommend it, but it is indeed a pain to find the minimum set of things you need to unblock to make a website work, involving lots of refreshing.

Once you find it for a website you can just save it though so you don't need to go through it again.

LocalCDN is indeed a no-brainer for privacy! Set and forget.


> but optimizing everywhere is a recipe for using up the resources to actually optimize where it matters.

Is it? My front end engineer spending 90 minutes cutting dependencies out of the site isn’t going to deny YouTube the opportunity to improve their streaming algorithms.


It might do the opposite. We need to teach engineers of all stripes how to analyse and fix performance problems if we’re going to do anything about them.


If you turn this into an open problem, without hypothetical limits on what a frontend engineer can do, it becomes more interesting and more impactful in real life. That said, the engineer is a human being who could use that time in myriad other ways that would do more to help the environment.


That's exactly it, but I fully expected whataboutism under my comment. If I had mentioned video streaming as a disclaimer, I'd probably have gotten crypto or Shein as counter "arguments".

Everyone needs to be aware that we are part of an environment that has limited resources beyond "money" and act accordingly, whatever the scale.


I feel this way sometimes about recycling. I am very diligent about it, washing out my cans and jars, separating my plastics. And then I watch my neighbour fill our bin with plastic bottles, last-season clothes and uneaten food.


Recycling is mostly a scam. Most municipalities don't bother separating out the plastics and papers that would be recyclable, decontaminating them, etc. because it would be too expensive. They just trash them.


At least you and your neighbor are operating on the same scale. Don't stop making those individual choices, but more members of the populace making them is not how the problem gets fixed; businesses and whole industries are the real culprits.


Yes, but drops in the bucket count. If I take anything away from your statement, it's that people should be selective about where to use video for communication and where not.


> The vast majority of internet bandwidth is people streaming video. Shaving a few megs from a webpage load would be the tiniest drop in the bucket.

Is it really? I was surprised to see that surfing newspaper websites or Facebook produces more traffic per unit of time than Netflix or YouTube. Of course there's a lot of embedded video in ads, which could perhaps count as streaming video.


Care to share that article? I find that hard to believe.


No article, sorry; it's just what the bandwidth display on my home router shows. I could post some screenshots, but I don't care to answer everyone who tries to debunk them. The mobile version of Facebook is, by the way, much better optimized than the full webpage. I guess desktop browser users are a small minority.


Well Facebook has video on it. Highly unlikely that a static site is going to even approach watching a video.


Did you actually look?

Typical websites are not static; they include a huge amount of JavaScript and other assets from different ad networks, analytics tools, etc. It looks like most of it isn't cached. Video delivery, on the other hand, is incredibly well optimized, because everyone knows it's data-intensive.


It may surprise you how heavy Facebook is these days


The problem is that a lot of people DO have their own websites, over which they have some control. It's not like a million people optimizing their own websites will have any influence over what Google does with YouTube, for instance...


A million people is a very strong political force.

A million determined voters can easily force laws to be made that require YouTube to be more efficient.

I often think about how orthodox we humans are. We rarely consider paths outside of social norms.

- Modern western society has weakened support for mass action to the point where it is literally an unfathomable "black swan" perspective in public discourse.

- Spending a few million dollars on TV ads to get someone elected is a lot cheaper than whatever Bill Gates spends on NGOs, and for all the money he spent it seems like aid is getting cut off.

- Hiring or acting as a hitman to kill someone to achieve your goal is a lot cheaper than the other options above. It seems like this concept, for better or worse, is not quite in the public consciousness currently. The 1960s-1970s era of assassinations has truly come and gone.


I sort of agree...but not really, because you'll never get a situation where a million people can vote on a specific law about making YT more efficient. One needs to muster some sort of general political will to even get that to be an issue, and that takes a lot more than a million people.

Personally, if a referendum were held tomorrow to disband Google, I would vote yes for that...but good luck getting that referendum to be held.


It matters at web scale though.

It's like how industrial manufacturers are the biggest carbon emitters; compared to them, I'm just a drop in the ocean. But that doesn't mean I don't also have a responsibility to recycle, because the cumulative effect of everyone like me recycling quickly becomes massive.

Similarly, if every web host did their bit with static content, you’d still see a big reduction at a global scale.

And you’re right, it shouldn’t be the end of the story. However, that doesn’t mean it’s a wasted effort or an irrelevant optimisation.


I feel better about limiting the size of my drop in the bucket than I would about saying my drop doesn't matter, even if it doesn't. I get my internet through my phone's hotspot with its 15-gig-a-month plan, and I generally don't use the entire 15 gigs. My phone and laptop are pretty much the only high tech I have; my audio interface is probably third in line and my oven probably fourth (self-cleaning). The furnace stays at 50 all winter, even when it is -40 out, and if it is above freezing the furnace is turned off. I've never had a car; I walk and bike everywhere, including for groceries and laundry, and have used motorized transport maybe a dozen times in the past decade.

A nice side effect of these choices is that I only spend a small part of my pay. Never had a credit card, never had debt, just saved my money until I had enough that the purchase was no big deal.

I don't really have an issue with people who say that their drop doesn't matter so why should they worry, but I don't understand it; it seems like they just needlessly complicate their lives. Not too long ago my neighbor was bragging about how effective all the money he spent on energy-efficient windows, insulation, etc. had been; he saved loads of money that winter. Even so, his heating bill was still nearly three times mine, despite his wood stove offsetting it, and despite my house being almost the same size, barely insulated, and having 70-year-old windows. I just put on a sweater instead of turning up the heat.



Talking about video streaming, I have a question for big tech companies: why? Why are we still talking about optimising HTML, CSS, and JS in 2025? This is tech from 35 years ago. Why can't browsers adopt a system like video streaming, where you "stream" a binary of your site? The server could publish a link to the uncompressed source so anyone can inspect it, keeping the spirit of the open web alive.

Do you realise how many years web developers have spent obsessing over this document-based legacy system and how to improve its performance? Not just years: their whole careers! How many cool technologies were created in the last 35 years? I lost count. Honestly, why are big tech companies still building on top of a legacy system, forcing web developers to waste their time on things like performance tweaks instead of focusing on what actually matters: the product?


> Why can't browsers adopt a system like video streaming, where you "stream" a binary of your site?

I'll have to speculate what you mean

1. If you mean drawing pixels directly instead of relying on HTML, it's going to be slower. (either because of network lag or because of WASM overhead)

2. If you mean streaming video to the browser and rendering your site server-side, it will break features like resizing the window or turning a phone sideways, and it will be hideously expensive to host.

3. It will break all accessibility features like Android's built-in screen reader, because you aren't going to maintain all the screen reader and braille stuff that everyone might need server-side, and if you do, you're going to break the workflow for someone who relies on a custom tweak to it.

4. If you are drawing pixels from scratch, you also have to re-implement things like selecting and copying text, which is possible but rarely practical.

5. A really good GUI toolkit like Qt or Chromium will take 50-100 MB. Say you can trim your site's server-side toolkit down to 10 MB somehow. If you are very very lucky, you can share some of that in the browser's cache with other sites, _if_ you are using the same exact version of the toolkit, on the same CDN. Now you are locked into using a CDN. Now your website costs 10 MB for everyone loading it with a fresh cache.

You can definitely do this if your site _needs_ it. Like, you can't build OpenStreetMap without JS, you can't build chat apps without `fetch`, and there are certain things where drawing every pixel yourself and running a custom client-side GUI toolkit might make sense. But it's like 1% of sites.

I hate HTML but it's a local minimum. For animals, weight is a type of strength, for software, popularity is a type of strength. It is really hard to beat something that's installed everywhere.


Thanks for explaining this in such detail


I think you mistake HTML/CSS for what they were 30 years ago: "documents to be viewed".

HTML/CSS/JS is the only fully open stack for building cross-platform application interfaces: free as in beer, not owned by a single entity, and standardized by multinational standards bodies. And it does this excellently. With Electron you can even build native apps with HTML/CSS/JS.

There are actual web apps, not just "websites", being built. Web apps are not HTML with jQuery sprinkled around; they are genuinely heavy applications.


I'm talking about big ideas. Bigger than WebAssembly. My message was about the future of the www, the next‑gen web, not the past.


OK now I think you don't understand all the implications of the status quo.

Anyone writing about a "future view" or "next gen" would have to prove to me that they really understand the current state of things.


I spent 10 years teaching computer science and some of my ex-students now work at Google, Amazon, Uber, and Microsoft. They still come by to say hi. It's teachers who inspire change, and we do it by talking about the past, present, and future of tech in classrooms, not online forums.

I like reminding students about Steve Jobs. While others pushed for web apps, he launched the App Store alongside the iPhone in 2008 and changed everything. I ask my students, why not stick with web apps? Why go native?

Hopefully, these questions get them thinking about new platforms, new technologies and solutions, and maybe even spark ideas that lead to something better.

Just picture this: it's 2007, and you're standing in front of Steve Jobs telling him: “You don't understand anything about the web, Steve.”

Yeah, good luck with that. For that reason I'll politely decline your offer to prove how much I know about this topic, but you're more than welcome to share your own perspective.


Practically it is owned by Google, or maybe Google + Apple


That’s already how it works.

The binary is a compressed artefact and the stream is a TLS pipe. But the principle is the same.

In fact videos streams over the web are actually based on how HTTP documents are chunked and retrieved, rather than the other way around.


Thanks for clarifying that.

My goal was to question the way we do things. Game streaming, for example, works differently. The game runs entirely on a remote server. The server has a GPU/CPU combo that renders the game in real time and user input is sent back to the server via UDP/WebRTC or a TCP tunnel.

Also, native mobile apps work quite differently from both video and game streaming.

Bottom line: we've seen some great innovations over the last 35 years, and I'm hoping to see plenty more. The most important thing is to question everything we do, to have that "yes, but why?" mindset.


Almost everything these days is built on top of HTTP simply because it’s easy.

Video streaming is still HTTP and the data is chunked. The reason data is chunked is so that you can have adaptive bitrate. Ie if one chunk takes too long to download then the next chunk will be the same time slice but at a lower bitrate. All served over HTTP.
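That chunk-by-chunk decision can be sketched in a few lines of Python (the bitrate ladder and thresholds here are invented for illustration, not any real player's algorithm):

```python
# Toy adaptive-bitrate step: compare how long the last chunk took to
# download against how much playback time it contains, then move up or
# down a hypothetical bitrate ladder.
LADDER_KBPS = [400, 800, 1600, 3200]

def next_bitrate_index(current: int, chunk_seconds: float,
                       download_seconds: float) -> int:
    if download_seconds > chunk_seconds and current > 0:
        return current - 1  # falling behind real time: drop quality
    if download_seconds < chunk_seconds * 0.5 and current < len(LADDER_KBPS) - 1:
        return current + 1  # lots of headroom: raise quality
    return current          # otherwise hold steady

# A 4-second chunk that took 6 seconds to fetch forces a step down.
print(LADDER_KBPS[next_bitrate_index(2, 4.0, 6.0)])  # 800
```

Because every chunk is an ordinary HTTP GET, this logic lives entirely in the client and needs nothing special from the server.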

HTTP itself has evolved massively over the years too. HTTP3 is actually a binary protocol built on top of UDP.

You’ve got the right attitude questioning why things are. But you’re also not alone. Which is why modern HTTP doesn’t look anything like its 90s predecessor and why so many public facing systems have adopted HTTP.

You do see proprietary video streaming protocols too. But the reason they’re not generally used for public-facing services is the same reason HTTP is everywhere: firewalls and routers work fine with HTTP because HTTP is ubiquitous. Whereas proprietary formats, and particularly ones based on UDP, don’t always play nicely with routers and firewalls.

In fact, UDP in particular is troublesome for routers and firewalls.

I’ve worked in the broadcasting sector for a while but it’s been a long long time since I’ve worked in the games industry. So I don’t know what is commonly used for game streaming. But for video streaming, they moved away from proprietary protocols to HTTP. As latency drops and bandwidth increases, I would expect to see the same trend with game streaming too.


I see, I didn't know this


1. How does that help avoid wasting resources? It needs more energy and traffic.

2. Everything in our world is dwarves standing on the shoulders of giants. Ripping everything up and creating something completely new is, most of the time, an idea that sounds better than it really would be. Anyone who thinks otherwise is usually too young to have seen this pattern.


The ideal HTML I have in mind is a DOM tree represented entirely in TLV binary, with a compiled .so file instead of .js, and the unpacked data used directly as C data structures. Zero copy, no parsing (data validation is unavoidable, but) that's certainly fast.
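As a rough illustration of the TLV (type-length-value) idea, with tag values and framing invented for the example: each node is a one-byte type tag, a 4-byte length, then the payload, so a reader can walk the tree with fixed-offset reads instead of text parsing.

```python
import struct

ELEMENT, TEXT = 0x01, 0x02  # made-up tag values

def tlv(tag: int, payload: bytes) -> bytes:
    """Encode one node: 1-byte tag, 4-byte little-endian length, payload."""
    return struct.pack("<BI", tag, len(payload)) + payload

# <p>hi</p> as a nested record: an ELEMENT whose payload is its tag
# name followed by one TEXT child.
node = tlv(ELEMENT, tlv(TEXT, b"p") + tlv(TEXT, b"hi"))

tag, length = struct.unpack_from("<BI", node)
print(tag, length, len(node))  # 1 13 18
```

A real zero-copy design would mmap such a buffer and cast it straight to C structs, which is where the "no parsing" win comes from.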


I would have loved this in 1993. Not that I don't now, but I would have had a real use for it then.


At least I tried to make it look like a 1993 website


https://github.com/stravu/crystal

This is still alpha, but I have been using it to handle the multi-agent worktree paradigm.

