Personally, I expect specifications to be implemented properly. That's it.
About "commercial applications", let's face it: those "enterprise solutions" cost far more not because they are 10-1000x "better", but because the price includes generous "bonuses" for senior staff.
Perhaps the author does not want to name the failing vendors, giving them time to contact him with some attractive offers. Or am I too suspicious? :)
Too bad they still have only a dual-channel memory controller and the same 32 KB/32 KB L1 cache. That means much of that power is still wasted waiting on memory (a max memory bandwidth of 45.8 GB/s, seriously?).
Not sure why people are so excited about these processors.
They also still have only 16 PCIe lanes from the CPU, which is disappointing. Ryzen's 20 isn't exactly lavish, but it's enough for the common pairing of x16 for the GPU + x4 for the primary NVMe drive.
For typical desktop workloads memory bandwidth is not that important. They will likely release Xeon-W counterparts later with similar frequencies but more memory channels and PCIe lanes for those who need them.
The i7-5820K is from the "high end desktop" line of chips, derived from Xeon workstation chips. The modern equivalent is chips like the i9-10900X (note X not K), which does have quad channel DDR4-2933 and 48 PCIe lanes. Clock speeds are a bit lower though.
My main reason for using _proper_ dynamic languages (or better, dynamic-language VMs: Lisps, APLs, etc.) is the REPL/interactive debugger (stack restarts, walking the stack freely, changing/fixing functions while debugging).
The overall idea is keeping state across changes. That's of utmost importance for doing complex things.
And conversely:
Restarting/recompiling to fix a small error while calculating something heavy? No, thanks.
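A minimal sketch of what "keeping state across changes" means in practice, with plain Python standing in for a Lisp/Smalltalk image (the names `dataset` and `score` are made up for illustration): the expensive result stays live in the session while the buggy function is redefined, so nothing has to be recomputed and nothing restarts.

```python
# Sketch: REPL-driven development. `dataset` survives while `score`
# is redefined live; compare this to a full restart/recompile cycle.
dataset = [x * x for x in range(1_000_000)]   # pretend this took hours to build

def score(data):
    # first attempt: subtle bug, divides by one element too many
    return sum(data) / (len(data) + 1)

buggy = score(dataset)

def score(data):
    # redefined live in the session; `dataset` was NOT recomputed
    return sum(data) / len(data)

fixed = score(dataset)
print(fixed > buggy)   # True: the fix took effect without a restart
```

A Lisp with restarts goes further still: you can fix `score` from inside the debugger at the point of failure and resume the interrupted computation, rather than re-running it.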
BTW, Rust guys promised REPL by the end of 2018 :)
Agree.
Moreover, the Rust compiler contains some dark corners which nobody wants to deal with. See https://github.com/rust-lang/rust/issues/38528 for an example. Basically, it means the Rust compiler can suddenly take exponential time and space to compile.
That bug bites hard in any code heavy on iterators (an often-praised Rust feature!). It has a reliable reproduction test case, yet it's already a year old and was down-prioritized!
Hard to believe anybody uses Rust for a real large project given so little attention to such crucial details.
I mean, that thread has a comment from less than a day ago, and Niko says:
> I'm going to lower this from P-high to reflect reality. I'm still eager to investigate but haven't had time, and making it P-high is not helping =)
P-high means someone is actively assigned and working on it, so yes, in some sense this is a down-prioritization, but only from "someone is working on this, so put your work somewhere else" to "this is open to work on". The de-prioritization may actually lead to it getting fixed sooner, since Niko is a busy guy.
So, "nobody wants to deal with" feels like a mischaracterization to me here.
Well, yes.
The last comment says the issue is still there :)
I mean, this bug alone in fact nullifies the entire incremental-compilation effort. It's kind of weird.
> The de-prioritization may lead to it getting fixed sooner, as Niko is a busy guy
Broken multi-threading, broken string, ...
Care to elaborate? The UTF-8 move, perhaps? Which native compiler/RAD got UTF-8 for free? Virtually every C/C++ project was "broken" in that respect at some point, or still is.
I use both Lazarus and FPC from trunk (in between releases), and even there backward compatibility is a top priority for the FPC devs.
I highly doubt it's production code.
Nobody uses both objects (pointers) and interfaces simultaneously; it's a quick way to disaster.
In practice Delphi/FPC interfaces serve their purpose.
As a Delphi/FPC/Lazarus veteran (10+ years), comparing one of the quickest compilers around with the Qt/Boost C++ development cycle looks like a joke. The edit-compile cycle is much faster and scales without issues up to 1e6+ LOC (yes, modules/incremental compilation done right from the start).
Not apples to apples, but it's a pretty good tradeoff for C++ IMO: you get very quick turnaround because the bulk of the UI-tweaking cycle happens in QML or the RAD tools, and you only need slow-compiling C++ where native performance is required.
For me, while I like fast compile times as much as the next person, it's not a deal breaker; the workflow/environment and library features are. I.e., can they easily deliver the required value to my customers? QML gives me a good middle ground between productivity and quick turnaround on one side, and native integration and native performance on the other. If you're unwilling to make that tradeoff, well... then you're limiting yourself to the tools that don't make that tradeoff (which may be perfectly fine, of course).
It's not a good tradeoff for the end user. As it's not native, you end up with a much slower application with more memory overhead and bloat. Lazarus compiles to small native applications that are easily installable for the user, fast, and light on both memory and disk space.
I say this next piece as both a developer and an end user: developers of desktop applications are getting out of hand with how they treat these things. We are now at the point where a large segment of the developer population has so little regard for the end user that they believe bloatware Electron solutions are a good choice for a "native" text-based chat application.
EDIT: reading your reply to the sibling comment, I think I probably misinterpreted what you were referring to, so my response below may be replying to the wrong thing.
Regarding your first paragraph, Qt/QML performance is very, very good: memory use isn't insane (in my personal experience at least), rendering is a solid 60 fps, and animations are ultra smooth. Maybe Lazarus is better, but "not a good tradeoff for the user", at least in Qt's case, just isn't true.
Regarding your second paragraph, I completely agree, but it's not C++ developers using frameworks like Qt who are doing that. They're primarily web developers using what they're familiar with (JavaScript) to develop desktop applications. An Electron application is very different from a Qt application.
Even with QML, which uses JavaScript, the bulk of the Qt framework is written in C++: the declarative QML is compiled to a scene graph on load, the rendering is done with OpenGL and shaders, and any heavy lifting or performance-sensitive code can be done in C++ (Qt makes it VERY easy to call C++ from JS and JS from C++). Typically only non-performance-sensitive glue logic is in JS. This is very different from Electron, and Qt (even with QML) is still primarily a C++ framework.
> As it's not native you end up with a much slower running application with more memory overhead and bloat.
For Qt that's just not true: you can reach a fluid, animated 1080p/60fps UI on small embedded boards such as Raspberry Pis. All the rendering is done using a nifty OpenGL pipeline.
To clarify, I was referring to the language choices here: comparing Lazarus with Python in this particular case, as well as the even worse performance of Electron/NodeJS.
I certainly was not intending to imply that Qt is not performant; it certainly is. Qt is the default UI for Lazarus applications as well, though Lazarus supports other UI toolkits out of the box.