The Cost of Frameworks (aerotwist.com)
89 points by antouank on Nov 17, 2015 | hide | past | favorite | 54 comments



(Disclaimer: I'm the author of Mithril.js)

Is it just my impression, or is Paul on some sort of crusade to downplay the dev ergonomics of React and to convince people that it's "slow"?

TodoMVC benchmarks have been done before: https://github.com/pygy/todomvc-perf-comparison . So sure, there's room for performance improvements in mainstream frameworks, and React is not the fastest thing in the universe, but come on. Maintaining a large project in vanilla js is largely equivalent to writing code in assembler when there are good C compilers: it's doable, and required in a handful of situations, but not really wise for 99% of real world projects.

Re: Tom's response: something he didn't mention (which is not surprising, since he's an Ember dev) is that frameworks do sometimes detract from the end-user experience by imposing "opinionated" complexity and assumptions that might prevent devs from doing certain specific things, leaving them to settle for suboptimal UX - the old adage of "if you want to deviate from the holy way(tm), you're on your own".

I'm kinda in the middle of the two opinions: it's definitely important to have access to the "metal" (both in terms of actually being able to code against low level APIs, and in terms of the amount of effort required to wade through framework abstractions in order to get there), but even using vanilla js, a complex app does need a "framework" (in the sense of having rules for where things should be and how they should interact with one another, and in the sense that any non-trivial app will have "library-level" plumbing). So, why not meet in the middle and use a lightweight framework that does 95% of things well enough to actually be used in non-trivial mobile apps[1] but that doesn't have high enough byte count to be bloated?

[1] http://en.lichess.org/mobile


Seems to have gotten the HN hug-o-death, so here is the cached version: http://webcache.googleusercontent.com/search?q=cache:cPuIbiv...


> frameworks let you manage the complexity of your application

Close.

Front-end frameworks are one solution to the problem of managing the complexity involved in making an app.

After working with a few of them, I'm not really sure that they're the best ones.


Could you elaborate on what some other solutions are?


I'm not the parent commenter but here are a few that come to my mind:

    - Functions
    - Abstraction
    - Modularity
    - Design patterns
    - Programming paradigms
    - DSLs
    - Conventions for naming/formatting/arrangement
Although I grant that any of these could possibly work even better when codified into a framework.


Frameworks will implement a number of these for you.


Microservices


Guy that makes a framework replying that frameworks are good. Aha....


Well, I believe it is a "good thing" when you believe in what you do and have arguments to back up what you believe in.


Advocate something you do: cynical self-interest

Advocate something you don't do: hypocrite


Guy that makes a framework thinks that frameworks are good. Perfect sense to me.


I see what you are doing, but that's not relevant to his point.


It seems like a zero-sum choice: the problem is that any real-world project that starts off as vanilla, without a framework, is almost assured to mutate into its own framework as the complexity of the project matures. Most of the pain points and pleasure points that frameworks bring to the table are slowly replicated with self-built devices.

So the real choice here is not vanilla vs framework but do you want to use someone's framework or build your own.


It seems JS developers are recreating the debates that were already had, numerous times, long ago.

Yes, using a JS framework is going to be slower than writing and optimizing code by hand, in the same way that you can write faster code in assembler than in any high-level language, or that using GTK on top of X is going to be slower than writing directly to the graphics card.

Yet we still use them because they mean fewer bugs (you are not reinventing the wheel every time), they encourage code reuse, they add structure, they let other devs enter the code easily, and they open you up to a vast library of modules. Most importantly, they allow shipping features faster, and that's where they impact users.

Users care more about having a usable product than a fast one that does nothing (with some exceptions, obviously). We can't afford to optimize everything by hand and ship at the same time.

Sure, in some use cases it will be too slow, but then you identify those particular cases and take the time to optimize them, even bypassing the framework if you need to.

But don't decide not to use a framework just because the general use case will be a bit slower.

Or as Knuth put it decades ago: "Premature optimization is the root of all evil."

The complete quote being, because it's relevant here:

> "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%"


The interesting thing, for me, is that I recall the "golden age" of cross-platform GUI widgeting toolkits to be 1997-2001. AWT, Swing, SWT, wxWidgets, Qt, Gtk, Tk, MFC, XUL, and many others.

And this coincided almost perfectly with the rise of the web and the end of desktop GUI supremacy. The few big desktop successes in the 2000s (uTorrent, DropBox) often used no framework at all or had minimalist UIs where they used little-known corners of the OS but moved the heavy interface lifting to the web.

I'm wondering if this is part of the general pattern of things reaching perfection just as they become obsolete. Right when a technology reaches mainstream adoption, everyone has an opinion about how to do it "right", and by that point all the major opportunities have been plucked clean.


These aren't "small efficiencies". The Ember TodoMVC takes over 40 times as long to start as the vanilla JS version.


We have an Ember mobile app, with native wrapper. Loads in ~1 second on a modern phone, which feels reasonable.

If it took 25ms in vanilla JS (40x quicker), I'd still consider that a small efficiency gain in our case (from a user's perspective).

---

(I should mention that there is no network latency: since it's an installed app, the JS code is bundled, obviously.)


Yes, but the problem is that you can have slower performance during loading, or slower performance overall, and this all comes down to the fact that the DOM has no native way to explicitly batch updates, and doing things like touching offset* properties can trigger layouts/repaints. ReactJS uses DOM tree diffing and merging, but that can also get you into trouble with the GC (this may be a solved issue; I don't use ReactJS, so my bad if this isn't correct...).

Our product, Elevate Web Builder, uses an in-memory element representation with a DOM change management architecture that avoids all of these issues. However, because of this design, you have to use our product and our framework in order to benefit from it, because it sits as a virtual layer on top of the DOM.

What the DOM really needs, but probably won't get because there could be some serious side-effects from bad coding, is a set of simple reference-counted beginUpdate/endUpdate methods on each DOM element that isolate that portion of the DOM tree from repaints, but not layouts. That way, dimensional information is always immediately available, but painting is always handled as a single last step.
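Sketched in plain JS (on an ordinary object, since the DOM has no such API today), the reference-counted beginUpdate/endUpdate idea might look like this; the names are illustrative:

```javascript
// Hypothetical sketch of reference-counted update batching: while the
// lock count is non-zero, changes only mark the element dirty; the
// outermost endUpdate triggers a single repaint.
function makeUpdatable(paint) {
  let lockCount = 0;  // nesting depth of beginUpdate calls
  let dirty = false;  // a change happened while locked
  return {
    beginUpdate() { lockCount++; },
    endUpdate() {
      if (--lockCount === 0 && dirty) {
        dirty = false;
        paint();  // one repaint for the whole batch
      }
    },
    change() {
      if (lockCount > 0) {
        dirty = true;  // deferred until the outermost endUpdate
      } else {
        paint();       // unbatched change repaints immediately
      }
    },
  };
}

let paints = 0;
const el = makeUpdatable(() => paints++);
el.beginUpdate();
el.change();
el.change();
el.change();    // still no repaint: we're inside a batch
el.endUpdate();
console.log(paints);  // 1
```

Layout information stays synchronously readable throughout; only painting is deferred, which is the point of the proposal.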

I'm meandering here, but the point is that improvements in the DOM management layer of the browsers could drastically cut the necessity of frameworks whose primary purpose is to work around deficiencies in the DOM. Fix the browsers, and you fix the load time. Cutting out the frameworks just ends up trading different kinds of pain...


The DOM already does dirty-checking on manipulation. Manipulating the DOM is cheap; it's manipulating the DOM while also getting dimension or style information that's expensive.

None of the major frameworks solve this problem. The closest may be React, which strongly discourages you from interacting with real DOM nodes or doing anything stateful, but even then, if you really want to touch that offsetWidth you can trivially destroy React's performance. Maybe the best alternative is just strict coding standards that say "Don't do that!"
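The "don't do that" discipline can at least be mechanized: separate all reads from all writes, in the style of libraries like fastdom. A rough sketch, using a plain object in place of a real DOM node:

```javascript
// Queue DOM reads and writes separately; flushing runs every read
// first (layout is computed at most once), then every write (which
// merely dirties the tree for the next frame).
const reads = [];
const writes = [];

const measure = fn => reads.push(fn);
const mutate = fn => writes.push(fn);

function flush() {
  reads.splice(0).forEach(fn => fn());
  writes.splice(0).forEach(fn => fn());
}

// Stand-in for a DOM node; a real app would pass actual elements and
// call flush() from requestAnimationFrame.
const el = { offsetWidth: 120, style: {} };
let width;
measure(() => { width = el.offsetWidth; });           // read phase
mutate(() => { el.style.width = width * 2 + 'px'; }); // write phase
flush();
console.log(el.style.width);  // '240px'
```

This keeps reads and writes non-interleaved without a browser API, at the cost of exactly the cognitive load the Chrome PMs worried about.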

I'd talked with a few Chrome PMs when I was working on Material Design for Google Search and discussed the possibility of some sort of batching API for layout-causing operations. They weren't against it in principle, but nobody could think of an API that was something we'd actually use as web developers. The problem is that thinking about your app in terms of "This expression causes layout, this expression dirties the DOM" is a big cognitive load, and making those two classes of expressions non-interleavable means you suddenly need to structure your whole app in different ways.

There'd also been talk of doing a subset of HTML that includes only the operations that can be done quickly, but to my knowledge, that never went anywhere. You run into problems with the whole huge installed base of the web; if you're going to throw everything out and start from scratch, why not just use a native mobile app, or even raw OpenGL ES commands?


Dart's html library used to make all methods that cause layout asynchronous. Instead of Element.offsetHeight, you had Element.offset, which returned a Future<Rect> (a Promise in JS). Code ended up looking like:

    fooify(element) {
      element.offset().then((rect) {
        // do stuff
        otherelement.offset().then((rect) {
          // do more stuff
        });
      });
    }
Understandably, developers didn't like it, and unfortunately it was removed. This was before Futures and Streams had been made completely ubiquitous in Dart, and before async/await. Now the code would look like:

    fooify(element) async {
      var h = await element.offsetHeight;
      // do stuff
      var h2 = await otherelement.offsetHeight;
      // do more stuff
    }
which is much more palatable.

> There'd also been talk of doing a subset of HTML that includes only the operations that can be done quickly, but to my knowledge, that never went anywhere. You run into problems with the whole huge installed base of the web; if you're going to throw everything out and start from scratch, why not just use a native mobile app, or even raw OpenGL ES commands?

Interestingly, this was basically the origin of Flutter. It started as a fork of Blink with the worst things removed, still using the DOM and JS, but they kept removing and removing, eventually dropping CSS and the DOM entirely and replacing JavaScript with Dart. Now things are fast, but they aren't like the web either.


It's no surprise that mutating data structures is cheap, and React's virtual DOM doesn't aim to solve expensive layout calculation.

What virtual DOM does is simplify application development by removing the concept of time, so the programmer doesn't have to think about how to mutate the existing state to get to the new state. It's the cheapness of creating new virtual nodes that's interesting [1], and quickly applying them to the DOM with diffing.

[1] https://jsperf.com/virtual-dom-vs-real-dom
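The "cheap virtual nodes, then diff" idea in miniature (an illustration of the concept, not React's actual reconciliation algorithm):

```javascript
// Virtual nodes are plain objects; diffing two trees yields a patch
// list describing the DOM operations to apply.
const h = (tag, children = []) => ({ tag, children });

function diff(oldNode, newNode, path = 'root', patches = []) {
  if (oldNode === undefined) {
    patches.push({ op: 'create', path, node: newNode });
  } else if (newNode === undefined) {
    patches.push({ op: 'remove', path });
  } else if (oldNode.tag !== newNode.tag) {
    patches.push({ op: 'replace', path, node: newNode });
  } else {
    const len = Math.max(oldNode.children.length, newNode.children.length);
    for (let i = 0; i < len; i++) {
      diff(oldNode.children[i], newNode.children[i], `${path}.${i}`, patches);
    }
  }
  return patches;
}

const prev = h('ul', [h('li'), h('li')]);
const next = h('ul', [h('li'), h('li'), h('li')]);
const patches = diff(prev, next);
console.log(patches);  // one 'create' patch for the third <li>
```

The programmer just describes the whole new tree each time; the diff works out the minimal mutations, which is the "removing the concept of time" point above.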


Wasn't the subset of HTML AMP? https://www.ampproject.org/how-it-works/


I don't think so - it was different people working on it, from the Chrome team side, and it involved changes to Blink or an outright fork rather than anything server-side.


Thanks for the response.

> The DOM already does dirty-checking on manipulation. Manipulating the DOM is cheap, it's manipulating the DOM while also getting dimension or style information that's expensive.

Yes, absolutely, and I wasn't clear on this point. The catch, however, is that once you start using JS for dynamic rendering of UI elements, as opposed to static HTML, this becomes quite the problem. Simple things like dragging an element around or other types of basic interactivity require knowing where things are in real time.

> None of the major frameworks solve this problem. The closest may be React, which strongly discourages you from interacting with real DOM nodes or doing anything stateful, but even then, if you really want to touch that offsetWidth you can trivially destroy React's performance. Maybe the best alternative is just strict coding standards that say "Don't do that!"

The only way I know of around this problem is what we did: maintain a virtual element for each actual DOM element, and have the virtual element act as a gatekeeper to the DOM. Want to read the width? No problem, here it is as a simple integer property, and the DOM isn't touched. But, as you can imagine, this sends you down the road to hell pretty quickly, and we ended up managing everything manually. We effectively use the browser as a display device. And, of course, the code size is larger than we would like...and we're back to the topic at hand. :-)
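The gatekeeper approach might be sketched like this (the names are made up for illustration, not Elevate Web Builder's actual API):

```javascript
// A virtual element caches layout values and buffers style writes, so
// reads are plain property lookups and the real DOM is only touched
// in one batch at flush() time.
class VirtualElement {
  constructor(domNode) {
    this.dom = domNode;       // the real DOM node (mocked below)
    this.cachedWidth = 0;     // last width we set, in pixels
    this.pendingStyles = {};  // writes buffered until flush()
  }
  get width() {
    return this.cachedWidth;  // no layout is ever forced by a read
  }
  set width(px) {
    this.cachedWidth = px;
    this.pendingStyles.width = px + 'px';
  }
  flush() {
    Object.assign(this.dom.style, this.pendingStyles);
    this.pendingStyles = {};
  }
}

const node = { style: {} };  // stand-in for a DOM node
const v = new VirtualElement(node);
v.width = 300;
console.log(v.width);           // 300 (read from the cache)
v.flush();
console.log(node.style.width);  // '300px' (written in one batch)
```

The catch is exactly what the parent says: once every property is mirrored this way, the virtual layer has to manage everything, and the code size grows accordingly.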

> I'd talked with a few Chrome PMs when I was working on Material Design for Google Search and discussed the possibility of some sort of batching API for layout-causing operations. They weren't against it in principle, but nobody could think of an API that was something we'd actually use as web developers. The problem is that thinking about your app in terms of "This expression causes layout, this expression dirties the DOM" is a big cognitive load, and making those two classes of expressions non-interleavable means you suddenly need to structure your whole app in different ways.

Having come from many years doing desktop development on Windows, I've become accustomed to BeginUpdate/EndUpdate as it's a pretty standard way of batching updates that cause repaints. Of course, Windows also punts on layout, so there's a pretty big difference right there.

> There'd also been talk of doing a subset of HTML that includes only the operations that can be done quickly, but to my knowledge, that never went anywhere. You run into problems with the whole huge installed base of the web; if you're going to throw everything out and start from scratch, why not just use a native mobile app, or even raw OpenGL ES commands?

I would say that you're leaving out a whole host of things that make the browser such an attractive host environment for applications, absent automatic layouts. There's the painting capabilities alone. Ever try to deal with transparency in Windows? It's a nightmare, but in the browser it just works. The DOM element tree model itself is beautiful and a perfectly natural way of visualizing a UI. In general, it's just a more complete environment for the developer, and there's no arcane API to work around: access to the built-in functionality is just an object property away. Then there's the single codebase, easy distribution/installs.....

I just think that browser developers need to actively push towards making the UI functionality more generalized and display-oriented, and less specific to its roots of static documents that always require some sort of layout. If a piece of JS code places an absolutely-positioned element somewhere in the browser window, there should be zero penalty for reading its layout information after doing so.


I'd be interested in seeing https://lhorie.github.io/mithril/ added to this list.


I've yet to see a non-trivial (100+ screens, 10+ devs over 10+ months) production-level app written in "vanilla JS" without any framework (popular open-source or home-grown "monster"). You need _something_ to structure your code and take care of repetitive/boring details.


Google Search, from when it first got Javascript in 2008 to when it adopted Closure in 2012.


What most people don't understand is that even if you don't use a framework, you end up building your own.


The flip side of this is that you end up building your own that is tailored to the particular requirements, users, and resources available to your product. That can be a huge advantage, given that differentiation is the only thing that will make your product stand out in the marketplace.

Are you in a latency-critical domain, which Google Search is? Then optimize to minimize the number of bytes shipped to the user and startup time. Do you have 100+ screenfuls of information? Then optimize so that making new screens is easy. Do users load your app once, leave it open in the background, but then need to perform a number of interactions quickly? Then ship down the initial interface quickly and progressively lazy-load bundles of functionality that are just a JS function away on click, like GMail does.

Incidentally, one of the main reasons Google Search finally adopted Closure was because it had to integrate with Google+, which was all done using Closure. So again, the choice of a framework was driven by product concerns.

It's not a failure when a company builds a home-grown framework that is carefully tailored to the product needs that they've discovered over the last few years. Indeed, having the resources and domain knowledge to build your own framework is a far bigger success than building an app with the hot framework of the month and failing because you look just like every other app out there.


How many apps in the world require the kind of specialised framework that GMail does? 10? 100? 1000? Sure, everyone can invent their own framework, very "specialised" and "tailored" to a given app's needs, but is it going to give that app a competitive advantage? Probably sometimes. Probably not most of the time.

Constructing your own framework has cost. Sure, you will acquire a lot of knowledge doing it, but is it going to help the bottom line? Maybe.

Once again, are you seriously advocating that _everyone_ should do "vanilla JS" without any frameworks?


I'm advocating that everyone start with vanilla JS without frameworks. See how far it will take you. Once you've built a simple prototype, put it in front of users, made a few changes, and have a sense of how the product will evolve to satisfy those users, then you can choose a framework. And you can choose it with a lot more knowledge about what your needs will be than if you tried to guess ahead of time, and you won't be steered away from particular customer desires because the framework makes them difficult and other stuff easy.

Rewriting your product is not failure, particularly when done at the very early stages where it only takes a week or so anyway.

As a nice side benefit, this also avoids all the holy wars about which framework is best, since you can put off the decision until you have hard data about which is best for you.


If you're going to rewrite it anyways, then why not just pick a random framework? "No framework" is still a set of trade-offs, and for better or for worse, most teams aren't made of superstar devs. A lot of them appreciate the hand-holding through the where-to-put-what drudgery at the beginning of a project and the presence of a community to ask questions.


For a few reasons:

1.) Just by choosing to use a framework, you incur costs: download time, initialization time, complexity & bug surface area, etc. You should make sure that you gain benefits commensurate with those costs. If you start by picking a random framework, then a.) how can you measure what the costs were? and b.) how do you know what sort of benefits would be most useful to you?

2.) Relatedly, starting from a baseline random framework can teach you lessons that are not true, and then you base your product decisions on them. For example, there's a common meme that you cannot have smooth 60fps animations on mobile websites. This is not true; however, getting smooth 60fps animations on the mobile web requires careful attention to which elements will render on the GPU and which won't, and no framework currently in existence will help you with that. Back in 2008 when I did my first startup, it was generally believed that Javascript was too slow for games; not true: jQuery was too slow for games. Doing things that others believe to be impossible is the essence of competitive advantage.

3.) Frameworks make certain tasks easy at the expense of making other tasks hard. If you start with a random framework, you will be incentivized to do things in the same way that everyone else who uses that framework is. That's why almost every web 2.0 startup started from around 2007-2010 looks like a frontend over a database. Looking like everyone else is a recipe for failure in business.

4.) The framework authors are all subject to the constraints of the browser; however, browser vendors (and people who program directly to web APIs) are not subject to the constraints of the framework. The set of programs that you can write efficiently without a framework is a strict superset of the set that you can write efficiently with one.

5.) It's easier to add code than to remove it. If you start with no framework and then decide that you need one, you've written only the code that you actually did need for your problem. If you start with a random framework and then decide it's not appropriate, you need to backtrack and identify what you were actually trying to accomplish before you shoehorned it into the framework.


1) This line of argumentation seems like selective reasoning. For example, one could say: ok, I need ajax. Let's use a `fetch` polyfill. Wait, you can't abort the request with it? Fine, XMLHttpRequest then. How do I take my object and serialize it to a querystring again? Oh, I need to change an HTTP header to make my server accept the request body as JSON? Should I just use jQuery? It's overkill. What about Reqwest? Does it do what I need? Is it maintained? Etc, etc. Point is: you don't know what kind of costs you're going to incur, regardless of whether you're using a framework or not.

2) Going off 1), one could come to the (obviously ridiculous) conclusion that AJAX is really hard. Or that frontend build systems are not worth the trouble (I actually see this "myth" in the wild, and people wasting time because of it). 60fps on mobile is really not a "competitive advantage", even if you're going into the hyper-saturated gaming market (and in that case, you really ought to be writing ObjC/Java if you care about speed at all). Competitive advantage is about creativity and relentlessly exploiting opportunities; it's rarely the case that an obscure piece of software trivia makes a huge difference.

3) Frameworks exist on a spectrum: if your argument were true, one could just pick an obscure framework. But honestly, it strikes me as wishful thinking to suggest that framework choice is the primary driving factor for a product (that's why the role of product manager exists).

4) There's a very real and recognized constraint that affects all low-level systems compared to higher-level abstractions: complexity. jQuery is a DSL for avoiding 10000+ LOC of DOM API code. Virtual DOM (or any "retained mode" templating system) helps avoid 10000+ LOC of jQuery spaghetti. Component systems let you create more DSLs, etc, etc.

5) It is easier to add code than to remove it, and that's precisely why most systems in the wild have technical debt: organic growth. When you start from scratch, in my experience (especially with a team over a period of time), it can get pretty hard to untangle the various subsystems of an organically grown codebase. We had this issue at my previous company: there was a monolithic architecture for a mission-critical system, and the 10-year-old Session class was essentially impossible to replace without breaking everything. Speaking from experience, frameworks actually make it easier to bail out because there are only so many idioms you need to expect, and the code is more-or-less organized even if it is a shitstorm by any other metric. With non-framework code, you get to spend a lot of time evaluating, for the umpteenth time, why exactly foo is seemingly replaceable, but actually not without breaking bar, baz and quux.


I think almost everyone actually understands that. What I'm not sure about is whether most of the people who say it are clear on the fact that that may not be a bad thing.

Frameworks can have their own complexity and developer overhead, they can demand you spend more time solving the problems they impose (cough AngularJS), they can be ill-suited to the requirements and/or problem domain you're working in.

Building your own isn't always the sensible choice, but it can be.


This. A large codebase without a framework almost always becomes a de facto framework; otherwise it accumulates too much technical debt and is rewritten in a framework. =)


Right. I was more after examples of apps in the wild done by an "average" (whatever that means) group of developers in an "average" dev shop. A group of rock-star developers will do the "right thing" most of the time, regardless of the technology. The question is: can we scale this experience to the majority of web applications that most developers keep on coding?


Big picture, this is a good reminder for when the next big thing comes along <cough>React</cough>, but he forgets where we came from. Prior to Angular, the major way of doing web apps was with rendered templates that would replace whole swaths of HTML, so the "updating the DOM" step truly was more expensive than the Javascript driving it. This is the problem that Angular (and newer frameworks/libraries) sought to address.

But there's one more consideration...isomorphic rendering. If a framework can be rendered on the server-side then the time-to-interaction only matters if it's longer than it takes for the user to mentally process the page and take an action.


> If a framework can be rendered on the server-side then the time-to-interaction only matters if it's longer than it takes for the user to mentally process the page and take an action.

Yes, but time-to-interaction can be long when connectivity is poor. Connectivity is frequently poor.


Turbolinks is a good alternative if you want to stay on the Rails path. While Turbolinks 2 just replaced <body>, with Turbolinks 3 you can replace partials without resorting to js.erb templates.


Turbolinks should be turned off by default. Rails developers went way too far with it, it actually can f-ck up third party scripts on a page. Opinionated web frameworks are good, but they should stick to server-side tasks.

Turbolinks should have been an extension, not in the core. With these kinds of gimmicks, and the constant breaking of APIs from version to version, Rails is committing adoption suicide.


This could be a very dumb question, since I don't use any JS frameworks (on the client side), but… would it make sense to standardize (on the web platform) certain aspects of JS frameworks (their internals), so that the frameworks themselves could become leaner?

My thinking is, if the frameworks do similar things, i.e. provide similar functionality, maybe it would make sense to have browsers provide (standard) APIs for (some of) that functionality.


This is exactly what Web Components are: a standardized, minimal, HTML-compatible component model that's understood by the browser. It solves DOM encapsulation, style scoping, element lifecycle, and DOM composition.

Frameworks and libraries can then build on top of that to provide templating, data-binding, additional lifecycle stages, and other helpers, which Polymer and X-Tags do.


This has happened, is happening, and will continue to happen. Huge swathes of the standardised additions to the web platform over the last fifteen years have been inspired or informed by features that originally appeared in JS libraries and frameworks. Library and framework developers participate in standards bodies and give feedback to browser vendors (many of them work for browser vendors), and help to drive what goes into the web platform, based on what they implemented and what they need for their software.

The problem is, software development is a moving target. The current feature-set of the web platform would let you easily build a state of the art application... in 2005. But things have moved on. People's ambitions and expectations for the functionality, responsiveness across devices, performance, aesthetic appeal, touch friendliness, accessibility and offline capability of their web apps have skyrocketed. And at the same time, we've learned more about the coding strategies and patterns that work (and don't work) for writing large, ambitious web applications.

This is a good thing, because it means the web is moving forward, and so are we. However, it also means we're never going to reach a promised land where the web platform does everything we could ever need it to. There will always be new demands and new technology — retina screens, VR, fingerprint scanners, etc. — and the web platform will need to catch up in those areas. There will always be new frameworks, like React, that overturn existing best practices and experiment with new ways of doing things. This means there will always be a need for libraries and frameworks at the cutting edge of web development, to pioneer the paths that can later be paved via standardisation.

We also need to be wary of premature standardisation. Web components have arguably suffered from this, although its proponents couldn't have predicted it when they began. For years, component-based encapsulation libraries/frameworks gained very little traction on the web, and so Google launched an effort to deliver components natively via a set of standardised APIs. However, halfway through their effort, Angular and React blew up, and suddenly everyone was writing components as directives and JSX components. This has caused some friction, because components as implemented by these frameworks (and thus coded by the majority of web devs) don't quite match the vision set out in web components. Web components will still deliver useful features, such as Shadow DOM, which will find use in these libraries, but had the web components effort started after Angular and React had appeared, its design would likely have looked somewhat different.


I read half of this article before I realized the author was talking exclusively about Javascript frameworks (when I saw the diagram with the frameworks he tested).

There are more frameworks in the world than Javascript frameworks, if you are only going to discuss a subset of them you should make this clear in the headline/introduction. A heading such as "The Cost of Javascript Frameworks" would be appropriate.


Developer ergonomics and agility serves the users' needs in a different way. If I can more quickly deliver value to my users, that's a good thing. It's not black and white though. These are trade-offs and need to be weighed for a given context/project.


To everyone defending JS frameworks: create a challenge that you believe requires a framework and see if someone can solve it more elegantly in vanilla JS.

Years ago when browser compatibility required 100s of workarounds for each browser and version, it made sense to use a framework to hide the complexity and keep up to date. But with modern browsers this just isn't the case.


- render a list of items, with a panel showing additional details for the selected item.

- update the view when changes from a backend data source arrive, either via some sort of polling mechanism or a persistent connection.

- allow the user to edit fields on the selected item.

- automatically save any pending changes when navigating away from the page.

AKA every CRUD app ever. You are going to need a framework (most commonly MV*) to deal with this in a maintainable way. Whether you use a vendor or create your own, you will end up with a framework.
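Even the smallest maintainable answer to that list tends to grow the same plumbing, a store that notifies views of changes, which is exactly the parent's point about home-grown MV*. A hypothetical minimal version:

```javascript
// Minimal observable store: views subscribe, updates replace state and
// trigger re-renders. Every framework-less CRUD app reinvents some
// version of this.
function createStore(initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    subscribe: fn => listeners.push(fn),
    update(patch) {
      state = { ...state, ...patch };      // immutable-style update
      listeners.forEach(fn => fn(state));  // notify the views
    },
  };
}

const store = createStore({ items: [], selectedId: null });
let rendered;
store.subscribe(s => { rendered = 'selected: ' + s.selectedId; });
store.update({ selectedId: 42 });
console.log(rendered);  // 'selected: 42'
```

Add routing, server sync, and save-on-navigate to this, and you have written a framework, vendor or not.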


When you say "vanilla JS", do you mean "just JS with other JS-only libs for special controls, etc.", or "just JS"? Because if the latter is the case, then the challenge is easy:

Virtual list controls that need to manage thousands of items/rows.

Hint: you're going to need a custom scroll bar.
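The windowing arithmetic at the heart of such a virtual list is small; it's everything around it (the spacer sized to totalCount * rowHeight, the custom scroll bar, node recycling) that snowballs into a framework. A sketch of the core calculation:

```javascript
// Given the scroll offset, compute which slice of the (possibly
// thousands of) rows actually needs real DOM nodes.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalCount) {
  const first = Math.floor(scrollTop / rowHeight);
  const count = Math.ceil(viewportHeight / rowHeight) + 1; // +1: partial rows
  return { first, last: Math.min(first + count, totalCount) };
}

// 10,000 rows of 20px, scrolled 1000px down a 400px viewport:
console.log(visibleRange(1000, 400, 20, 10000));  // { first: 50, last: 71 }
```

Only rows 50 through 70 get DOM nodes; the rest are represented by the spacer, which is what makes the scroll bar track the full list.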


Random things off the top of my head that are a pain without a framework:

- take object of arbitrary depth {a: 1, b: [1, 2, {c: 3}]} and send as querystring parameters to Rails/Node/PHP/whatever backend

- take two parallel ajax calls and run some code when they're both done

- SPA w/ parameterized urls
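For what it's worth, the first item is a few lines of vanilla JS, though the point stands that you then write and maintain those lines yourself. A sketch of Rails-style bracket serialization (an illustration, not a drop-in for jQuery's $.param):

```javascript
// Serialize a nested object into a[b][c]=1-style query parameters,
// the bracket notation Rails/PHP backends expect.
function toQuery(obj, prefix) {
  const parts = [];
  for (const [key, value] of Object.entries(obj)) {
    const name = prefix ? prefix + '[' + key + ']' : key;
    if (value !== null && typeof value === 'object') {
      parts.push(toQuery(value, name));  // recurse into arrays/objects
    } else {
      parts.push(encodeURIComponent(name) + '=' + encodeURIComponent(value));
    }
  }
  return parts.join('&');
}

console.log(decodeURIComponent(toQuery({ a: 1, b: [1, 2, { c: 3 }] })));
// a=1&b[0]=1&b[1]=2&b[2][c]=3
```

The second item is what Promise.all covers; the third, parameterized SPA routing, is the one that genuinely pushes you toward a framework.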


Users want a right product first, then a fast one.


Very much this. Fast is very important for usability, but it doesn't matter if it doesn't work correctly. Time and time again in my consulting engagements it proves far easier to get it right first, and then refine and optimize later.


Users want a working product first, then a right one.



