
It is slow, and I presume that’s because competent developers wrote it clean. It’s quite possible that it’s not clean either and was just written by developers incapable of performance or cleanliness. That possibility doesn’t detract from my argument - there’s no point in discussing performance or clean code with them if they’re incapable of either.


There is a lot of inherent complexity there, though. VS inherently needs to support multiple debugging engines, and multiple transports for communicating with the debuggee (remote vs. local debugging). That's OK; two layers of dynamic dispatch is still fast in this context.
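
Roughly what "two layers of dynamic dispatch" means here, as a minimal sketch. All names are invented for illustration, not actual VS internals: the front end calls through an abstract engine, which calls through an abstract transport, so each request costs two virtual calls.

```cpp
#include <cassert>
#include <memory>
#include <string>

// Hypothetical sketch (invented names, not VS internals): one abstraction
// layer picks the debugging engine, a second picks the transport.
struct Transport {
    virtual ~Transport() = default;
    virtual std::string roundTrip(const std::string& request) = 0;
};

struct LocalTransport : Transport {
    std::string roundTrip(const std::string& request) override {
        return "local:" + request;   // same-machine call in reality
    }
};

struct RemoteTransport : Transport {
    std::string roundTrip(const std::string& request) override {
        return "remote:" + request;  // would go over a socket in reality
    }
};

struct DebugEngine {
    virtual ~DebugEngine() = default;
    virtual std::string readLocal(const std::string& name) = 0;
};

struct NativeEngine : DebugEngine {
    explicit NativeEngine(std::unique_ptr<Transport> t)
        : transport(std::move(t)) {}
    std::string readLocal(const std::string& name) override {
        // First virtual dispatch chose this engine; the second chooses
        // the transport. Two indirect calls per request is cheap.
        return transport->roundTrip("read " + name);
    }
    std::unique_ptr<Transport> transport;
};
```

The point of the sketch: the indirection itself costs nanoseconds per request; it cannot explain a watch window that is visibly slow.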

Do we update call stack and locals window at the same time, or do we update each one as early as possible but show inconsistent data?

Do we fetch call stack for all stopped threads, or do we wait until the user chooses to focus on a different thread/display all threads using Parallel Stacks windows?

If Watch result is a collection, do we load its members or wait for the user to expand the tree?
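
The usual answer to that question is deferred expansion. A hypothetical sketch (not VS's actual data model): children of a watch result are only fetched the first time the user opens the tree node, so a 10,000-element collection costs one evaluation up front, not 10,001.

```cpp
#include <functional>
#include <string>
#include <vector>

// Hypothetical watch-tree node with lazily fetched children (C++17).
struct WatchNode {
    std::string display;                                    // e.g. "[0] = 7"
    std::function<std::vector<WatchNode>()> fetchChildren;  // deferred work
    std::vector<WatchNode> children;
    bool expanded = false;

    void expand() {
        // Only pay for member evaluation when the user opens the node,
        // and only once.
        if (!expanded && fetchChildren) {
            children = fetchChildren();
            expanded = true;
        }
    }
};
```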

If the call frame has changed, the strings in Watch need to be re-parsed to refer to the new frame's variables. Do we do this work every time, or do we optimise for stepping within a single function? Is it worth it if only a few of the debugging engines can support that optimisation?

Each Watch needs to be interpreted in a way that catches all exceptions and creates an error message instead of crashing the debuggee.
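
The catch-everything requirement on its own is a small wrapper; a minimal sketch (hypothetical names, not the actual evaluation path, which has to run code in the debuggee's context):

```cpp
#include <functional>
#include <stdexcept>
#include <string>

// Sketch: any exception thrown while interpreting a watch expression
// becomes an error string shown in the watch window, rather than
// propagating and taking the session down.
std::string evaluateWatch(const std::function<std::string()>& interpret) {
    try {
        return interpret();
    } catch (const std::exception& e) {
        return std::string("<error: ") + e.what() + ">";
    } catch (...) {
        return "<error: unknown exception>";
    }
}
```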

If the user puts a breakpoint at the end of a loop and holds down F5, watching the Locals to decide when to switch to stepping, is that a supported use case?

These are the product decisions that lead to the performance, not micro details of how the code was laid out to achieve the goals. I mean, I don't have access to their source code, so I could still be proven wrong. What's an example of a fast debugger?


> What’s an example of a fast debugger?

My apologies in advance if this feels like a gotcha, but… Casey ended up swapping to RemedyBG for debugging. He made a video about why. From about 0:50 onwards he talks about the speed and feature set of the watch window https://m.youtube.com/watch?v=r9eQth4Q5jg

Also, I don’t mean to be rude by ignoring all the questions you posed; they’re good questions, I just don’t have answers for them (I would hope the developers whose full-time job it is to build debuggers would, though).


> Casey ended up swapping to RemedyBG for debugging.

Thank you! I had no idea it existed.

The watch window in VS is slower on my 2 GHz laptop than the one in Turbo Debugger was on a 10 MHz 8086. That simply makes no sense. I know that the compilers do more optimization and that the debugging info therefore has to be more complex today. It still makes no sense!


Don’t get me started. I know I sound like an incurable fanboy for Casey because I mention him in every comment, but that’s only because I am an incurable fanboy for him: his polemics about software being thousands of times slower than it should be these days strike a deep chord with me.

In one of his rants he goes to the effort of booting up a version of Visual Studio from 20 years ago, on a 20 year old machine, to demonstrate that it really truly was way faster back then than it is now: https://youtu.be/GC-0tCy4P1U at 36 minutes in.


> https://youtu.be/GC-0tCy4P1U

That was a beautiful rant :)


Of course it detracts from your argument, because your argument is using it as evidence that clean code is not good for performance.


No, Casey’s argument is that clean code is not good for performance. The counter-argument to Casey’s argument is that when performance matters, don’t write clean. My counter-counter-argument is that performance is clearly missing even when it matters, so developers evidently can’t figure out when performance matters, and therefore the counter-argument won’t work.

Bob is saying:

  doCleanCode()
  // doUglyFastCode()
Casey wants to swap that around because it’s bad for performance:

  // doCleanCode()
  doUglyFastCode()
Responses say “fine, don’t do clean code when performance matters”:

  if (performanceMatters()) { 
    // doCleanCode()
    doUglyFastCode()
  }
  else {
    doCleanCode()
    // doUglyFastCode()
  }
I’m saying “most developers’ implementation of `performanceMatters` is bugged: it always returns false”. My evidence is: “here, look at all these cases where `performanceMatters` should return true, and yet we’re obviously getting the results of the else clause”.

You’re objecting that I don’t know the bad performance in a given case is because of a `doCleanCode` call, I haven’t looked at the source, it could very well be for other reasons:

  if(performanceMatters()) {
    doDirectDrawToScreen()
  }
  else {
    doDispatchDebouncedStateChangeToUserInterfaceFramework()
  }

Can you see how that doesn’t detract from my argument? We’re obviously never getting to the `performanceMatters() == true` branch of the conditional, so putting Casey’s suggestion in that branch of the conditional means we never do it. It does not matter if my evidence for “we never get to the `performanceMatters` branch” comes from statements that include `doCleanCode` or not.


So you are saying not “here they chose Clean Code” but “here they chose not to prioritise performance”. You’re arguing that the only reason for the poor performance is that the developers didn’t think it mattered?


Sort of? I completely agree with Casey’s argument that Clean Code makes for poor performance, but I’m not making that argument here, he’s made it for me. I am arguing that many developers are really bad at knowing when performance matters, so if they implement the advice of “write clean code, except when performance matters” they are never going to think performance matters and always write clean code. Does that help?



