>How is it possible that I can write efficient and provably-safe code in C# without a degree in type theory?
Because Anders Hejlsberg is one of the greatest language architects and the C# team are continuing that tradition.
The only grudge I have against them is that they have been promising us discriminated unions forever and are still discussing how to implement them. I think that is the greatest feature C# is missing.
Other than that, C# is mostly perfect. It has a good blend of functional and OOP; you can do both low-level and high-level code. You can target both the VM and the bare hardware. You can write nearly every type of code besides systems programming (due to the garbage collector): web backend, web front-end, services, desktop, mobile apps, microcontroller stuff, games and everything else. It has very good libraries and frameworks for whatever you need. The experience with Visual Studio is stellar.
And the community is great. And for most domains there is generally one library or framework everybody uses, so not only do you not have to ask what to use for a new feature or project, you also find very good examples and help if you need them.
It feels like a better, more straightforward version of Java, less verbose and less boilerplate-y. So that's why .NET didn't need its own Kotlin.
Sure, it can't meet the speed of Rust or C++ for some tasks because of the garbage collector. But provided you AOT compile, disable the garbage collector and do manual memory management, it should get close.
How does C# fare in terms of portability these days? I checked years ago, and at the time, for non-Windows OSes you had to use Mono. But whether your application was going to work or not also depended on which graphics libraries you were using, e.g. WinForms wasn't going to work on Mono. At the time, C# was presented to me as a better Java, but to me it seemed that Java had true cross-platform compatibility while C# was going to work nicely only on Windows, unless you did some proper planning to check beforehand which libraries were working with Mono.
Back in the day Mono had surprisingly good WinForms support on Gtk. It was never going to win awards for pretty and could never hit true compatibility with P/Invoke calls to esoteric Win32 APIs, but it was good enough to run any simple WinForms app you wanted to write for it and ran some of the "popular" ones just fine. (That old Mono WinForms support was recently donated to Wine, which seems like a good home for it.)
.NET has moved to being directly cross-platform today and is great at server/console app cross-platform now, but its support for cross-platform UI is still relatively nascent. The official effort is called MAUI, has mostly but not exclusively focused on mobile, and it is being developed in the open (as open source does) and leaves a lot to be desired, including by its relatively slow pace compared to how fast the server/console app cross-platform stuff moves. The Linux desktop support, specifically, seems constantly in need of open source contributors that it can't find.
You'll see a bunch of mentions of third-party options Avalonia and Uno Platform doing very well in that space, though, so there is interesting competition, at least.
.NET is totally cross platform these days. Our company develops locally on Windows and deploys to Linux. I’m the only team member on Mac and it works flawlessly.
If you only care about Linux on x86-64 or some ARM it is cross-platform. Getting .NET on FreeBSD is possible, but it isn't supported at all. QNX from what I can tell seems like it should be possible, but a quick search didn't find anyone who succeeded (a few have asked). My company has an in-house non-POSIX OS useful for some embedded things; forget about it. There are a lot of CPUs out there that it won't work on.
.NET has some small cross platform abilities, but calling it totally cross platform is wrong.
- Application development targets on iOS and Android use Mono. Android can be targeted as linux-bionic with regular CoreCLR, but it's pretty niche. iOS has experimental NativeAOT support but nothing set in stone yet, there are similar plans for Android too.
- ARMv6 requires building the runtime with the Mono target. Building the runtime is actually quite easy compared to other projects of similar size. There are community-published docker images for .NET 7 but I haven't seen any for .NET 8.
- WASM also uses Mono for the time being. There is a NativeAOT-LLVM experiment which promises significant bundle-size and performance improvements.
- For all the FreeBSD slander, .NET does a decent job at supporting it - it is listed in all sorts of OS enums, dotnet/runtime actively accepts patches to improve its support and there are contributions and considerations to ensure it does not break. It is present in https://www.freshports.org/lang/dotnet
At the end of the day, I can run .NET on my router with OpenWRT or a Raspberry Pi 4 and all the laptops and desktops. This is already quite a good level given it's a completely self-contained platform. It takes a lot of engineering effort to support everything.
That's still pretty much cross-platform for all practical purposes, as it supports far more platforms than most software anyway. After all, cross-platform only means that it runs on multiple platforms, not on all possible or even technically feasible platforms. Being cross-platform usually means much easier porting, but that porting still has to be done somehow.
Weird, I opened a binary I built years ago on Windows in Mono, and it was WinForms and rendered correctly. I think you mean WPF and the later GUI techs. WinForms has rendered nicely on Mono for a while now, I think?
There's a lot of options, but also the latest of .NET (not Framework) just runs natively on Linux, Mac and Windows, and there's a few open source UI libraries as mentioned by others like Avalonia that allow your UI to run on any OS.
The issue at the time was having a WinForms application run also on macOS, and IIRC at the time WinForms wasn't supported outside of Windows. Maybe Mono on Windows is still different from Mono on macOS. Anyway, the situation seems to be much better now. I'm not going to invest time into C# at the moment, since I'm in the Java ecosystem and I'm currently taking some time to practice with Kotlin. But it's good to know that now C# is an option as well.
Forgot to include my OS: I ran a .NET (Framework, I think) .exe I built on Windows in 2020 on Linux with Mono in 2024, and it worked and looked (aside from thematic differences) like I remembered it looking.
Now that I thought a bit more about it, I think I unlocked a memory of WinForms working on some macOS versions and not others. Maybe it was even just supported on 32 bits versions and not in 64 bit versions. One way or another, the bottom line was that it wasn't going to work on the latest macOS version at the time. But I actually tried it on Linux and it worked there.
You should check out AvaloniaUI [0] or Uno Platform [1] if you want to target web/mobile/Windows/Linux/macOS.
If building for the web, ASP.NET Core runs on Linux servers as well as Windows.
And there's MAUI [2] (not a fan of this); you are better off with the others.
In summary, C# and .NET are cross-platform; third-party developers build better frameworks and tools for other platforms while Microsoft prefers to develop for the Microsoft ecosystem.
> And there's MAUI [2] (not a fan of this); you are better off with the others.
I will say MS has been obsessed with trying to take a slice of the mobile pie.
However their Xamarin/WPF stuff left so much to be desired and was such a Jenga tower that I totally get the community direction to go with a framework you ostensibly have more control over, vs learning that certain WPF elements are causes of e.g. memory leaks...
If you're doing ASP.NET Core webdev, it's seamless. Runs in Linux docker containers. Developers in my team have either Windows (Visual Studio) or Linux or Mac (Rider) machines.
I mean since Mono it has completely changed. They are about to release .NET 9 which is the 8th version (there was no v4 to reduce confusion with the legacy .NET Framework) since being cross-platform.
Mono was a third party glorified hack to get C# to work on other OS. .NET has been natively cross platform with an entirely new compiler and framework since mid 2016.
> Mono was a third party glorified hack to get C# to work on other OS.
Indeed, this is what I didn't like back then. Java has official support for other OSes, which C# was lacking at the time. Good to hear that things changed now.
Except that the GC makes it exactly not viable for games, and it's one of the biggest problems Unity devs run into. I agree it's a great language, but it's not a do-it-all.
Unity has literally the worst implementation of C# out there right now. Not only is it running Mono instead of .NET (Core) but it's also not even using Mono's generational GC (SGen). They have been working on switching from Mono to .NET for years now because Mono isn't being updated to support newer C# versions but it will also be a significant performance boost, according to one of the Unity developers in this area [1].
IL2CPP, Unity's C# to C++ compiler, does not help for any of this. It just allows Unity to support platforms where JIT is not allowed or possible. The GC is the same if using Mono or IL2CPP. The performance of code is also roughly identical to Mono on average, which may be surprising, but if you inspect the generated code you'll see why [2].
They did not - it is still a work in progress with no announced target release date. They also have no current plans to upgrade the GC being used by IL2CPP (their C# AOT compiler).
I could argue the opposite - GC makes it more viable for games. "GC is bad" misses too much nuance. It goes like this: the developer very quickly and productively gets a minimum viable game going using naive C# code. Management and investors are happy with the speed of progress. Developers see frame-rate stutters, they learn about hot-path profiling, gen 0/1/2 GC behavior & how to keep GC extremely fast, stackalloc, array pooling, Span<T>, native alloc; progressively enhancing quickly until there are no problems. These advanced concepts are quick and low-risk to use, and many of them are what you would be doing in other languages anyway.
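A rough sketch of the stackalloc / Span&lt;T&gt; / array-pooling techniques mentioned above; the method names and sizes here are illustrative, not from any real engine:

```csharp
using System;
using System.Buffers;

static class FrameAlloc
{
    // Small fixed-size scratch space lives on the stack and never touches the GC.
    public static int SumSmall(ReadOnlySpan<int> input)
    {
        Span<int> scratch = stackalloc int[16];
        input.Slice(0, Math.Min(input.Length, scratch.Length)).CopyTo(scratch);
        int sum = 0;
        foreach (int v in scratch) sum += v;
        return sum;
    }

    // Larger per-frame buffers are rented from a shared pool instead of allocated.
    public static int SumLarge(int count)
    {
        int[] buffer = ArrayPool<int>.Shared.Rent(count);
        try
        {
            for (int i = 0; i < count; i++) buffer[i] = i;
            int sum = 0;
            for (int i = 0; i < count; i++) sum += buffer[i];
            return sum;
        }
        finally
        {
            ArrayPool<int>.Shared.Return(buffer); // reused next frame, no garbage
        }
    }

    static void Main()
    {
        Console.WriteLine(SumSmall(new[] { 1, 2, 3 })); // 6
        Console.WriteLine(SumLarge(1000));              // 499500
    }
}
```

Neither path produces per-call garbage once the pool is warm, which is exactly what keeps gen-0 collections off the hot path.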
The reason we might see FPS drops in games is usually not C# and its GC. It's mostly poor usage of the graphics pipeline and a lack of optimization. As a former game developer I had to do a lot of optimization so our games would run nicely on mobile phones with modest hardware.
That entirely depends on the game. Recent example is Risk of Rain 2, which had frequent hitches caused by the C# garbage collector. Someone made a mod to fix this by delaying the garbage collection until the next load-screen — in other words, controlled memory leakage.
The developers of Risk of Rain 2 were undoubtedly aware of the hitches, but it interfered with their vision of the game, and affected users were left with a degraded experience.
It's worth mentioning that when game developers scope out the features of their game, available tech informs the feature-set. Faster languages thus enable a wider feature-set.
> It's worth mentioning that when game developers scope out the features of their game, available tech informs the feature-set. Faster languages thus enable a wider feature-set.
This is true, but developer productivity also informs the feature set.
A game could support all possible features if written carefully in bare metal C. But it would take two decades to finish and the company would go out of business.
Game developers are always navigating the complex boundary around "How quickly can I ship the features I want with acceptable performance?"
Given that hardware is getting faster and human brains are not, I expect that over time higher level languages become a better fit for games. I think C# (and other statically typed GC languages) are a good balance right now between good enough runtime performance and better developer velocity than C++.
> frequent hitches caused by the C# garbage collector
They probably create too much garbage. It’s equally easy to slow down C++ code with too many malloc/free calls made by the standard library collections and smart pointers.
The solution is the same for both languages: allocate memory in large blocks, implement object pools and/or arena allocators on top of these blocks.
Neither the C++ nor the C# standard library has much support for that design pattern. In both languages, it’s something programmers have to implement themselves. I did things like that multiple times in both languages. I found that, when necessary, it’s not terribly hard to implement in either C++ or C#.
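As a sketch of the object-pool pattern described above (the `Projectile` type and pool size are made up for illustration):

```csharp
using System;
using System.Collections.Generic;

sealed class Projectile
{
    public float X, Y;
    public void Reset() { X = 0; Y = 0; }
}

// Minimal object pool: allocate up front, then recycle instead of creating garbage.
sealed class Pool<T> where T : class, new()
{
    private readonly Stack<T> _free = new();

    public Pool(int prewarm)
    {
        for (int i = 0; i < prewarm; i++) _free.Push(new T()); // one-time allocations
    }

    // Reuse an existing instance when possible; only allocate when the pool is dry.
    public T Rent() => _free.Count > 0 ? _free.Pop() : new T();

    public void Return(T item) => _free.Push(item);
}

class PoolDemo
{
    static void Main()
    {
        var pool = new Pool<Projectile>(prewarm: 8);
        var p = pool.Rent();
        p.X = 10;
        p.Reset();       // caller resets state before handing the object back
        pool.Return(p);
        var q = pool.Rent();
        Console.WriteLine(ReferenceEquals(p, q)); // True: the instance was reused
    }
}
```

A real engine would add arena-style bulk resets and thread-safety, but the core idea fits in a page in either language.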
> In both languages, it’s something programmers have to implement themselves.
I think this is where the difference between these languages and rust shines - Rust seems to make these things explicit, C++/C# hides behind compiler warnings.
Some things you can't do as a result in Rust, but if the Rust community cares it could port those features (e.g. make an always-stack-allocated type).
Codebase velocity is important to consider in addition to dev velocity: if the code needs to be significantly altered to support a concept that was swept under the rug (e.g. object pools/memory arenas), then that feature is less likely to be used and harder to implement later on.
As you say, it's not hard to do or a difficult concept to grasp, once a dev knows about them, but making things explicit is why we use strongly typed languages in the first place...
The GC that Unity is using is extremely bad by today's standards. C# everywhere else has a significantly better GC.
In this game's case though they possibly didn't do much optimization to reduce GC by pooling, etc. Unity has very good profiling tools to track down allocations built in so they could have easily found significant sources of GC allocations and reduced them. I work on one of the larger Unity games and we always profile and try to pool everything to reduce GC hitches.
A good datapoint, thanks.
Extending my original point - C# got really good in the last 5 years with regards to performance & low-level features. There might be an entrenched opinion problem to overcome here.
Anybody writing a game should be writing in a game engine. There are too many things you want in a game that just come "free" from an engine that you will spend years writing by hand.
A GC can work, or not, when writing a game engine. However, everybody who writes a significant graphical game engine in a GC language learns how to fight the garbage collector - at the very least delaying GC until between frames. Often they treat the game like safety-critical code: preallocate all buffers so that there is no garbage in the first place (or perhaps minimal garbage). Without garbage collection you might technically use more CPU cycles, but in general they are spread out more over time and so are more consistent.
It's hard to use C# without creating garbage. But it's not impossible. Usually you'd just create some arenas for your important stuff, and avoid allocating a lot of transient objects such as enumerators etc. So long as you can generate 0 bytes of allocation each frame, you won't need a GC no matter how many frames you render.
The question is only this: does it become so convoluted that you could just as well have used C++?
Enumerators are usually value types as long as you use the concrete type. Using the interface will box it. You can work around this by simply using List<T> as the type instead of the IEnumerable.
You have to jump through some hoops but it's really not that convoluted and miles easier than good C++.
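The boxing point above can be observed directly with `GC.GetAllocatedBytesForCurrentThread` (available on modern .NET); this is a rough illustration, not a proper benchmark:

```csharp
using System;
using System.Collections.Generic;

class EnumeratorDemo
{
    public static long Measure(Action body)
    {
        long before = GC.GetAllocatedBytesForCurrentThread();
        body();
        return GC.GetAllocatedBytesForCurrentThread() - before;
    }

    static void Main()
    {
        var list = new List<int> { 1, 2, 3 };
        IEnumerable<int> asInterface = list;

        // Warm up both paths first so JIT/delegate setup isn't measured.
        foreach (var _ in list) { }
        foreach (var _ in asInterface) { }

        long direct = Measure(() => { foreach (var _ in list) { } });
        long boxed  = Measure(() => { foreach (var _ in asInterface) { } });

        // foreach over the concrete List<T> uses its struct enumerator
        // (typically 0 bytes); the interface path boxes the enumerator
        // (a small heap allocation per loop).
        Console.WriteLine($"List<T>: {direct} B, IEnumerable<T>: {boxed} B");
    }
}
```

The same loop body allocates or not depending purely on the declared type, which is exactly the kind of invisible difference being discussed here.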
The problem with it is that you don't know. The fundamental language construct "foreach" is one that may or may not allocate and it's hard for you as a developer to be sure. Many other low level things do this or at least used to (events/boxing/params arrays, ...).
I wish there was an attribute in C# like "[MustNotAllocate]" which fails the compilation on known allocations such as these. It's otherwise very easy to accidentally introduce some tiny allocation into a hot loop, and it only manifests as a tiny pause after 20 minutes of runtime.
Most often you do know whether an API allocates. It is always possible to microbenchmark it with [MemoryDiagnoser] or profile it with VS or Rider. I absolutely love Rider's dynamic program analysis that just runs alongside me running an application with F5, ideally in release, and then I can go through every single allocation site and decide what to do.
Even when allocations happen, .NET is much more tolerant of allocation traffic than, for example, Go. You can absolutely live with a few allocations here and there. If all you have are small transient allocations, it means that the live object count will be very low, and all such allocations will die in Gen 0. In scenarios like these, it is common to see only infrequent, sub-500µs GC pauses.
Last but not least, .NET is continuously being improved - pretty much all standard library methods already allocate only what's necessary (which can mean nothing at all), and with each release everything that has room for optimization gets optimized further. .NET 9 comes with object stack allocation / escape analysis enabled by default, and .NET 10 will improve this further. Even without this, LINQ for example is well-behaved and can be used far more liberally than in the past.
It might sound surprising to many here but among all GC-based platforms, .NET gives you the most tools to manage the memory and control allocations. There is a learning curve to this, but you will find yourself fighting them much more rarely in performance-critical code than in alternatives.
While this would be nice for certain applications, I'm not sure it's really needed in general. Most people writing C# don't have to know about these things, simply because it doesn't matter in many applications. If you're writing performance-critical C#, you're already on a weird language subset and know you way around these issues. Plus, allocations in hot loops stand out very prominently in a profiler.
That being said, .NET includes lots of performance-focused analyzers, directing you to faster and less-allocatey equivalents. There surely also is one on NuGet that could flag foreach over a class-based enumerator (or LINQ usage on a collection that can be foreach-ed allocation-free). If not, it's very easy to write and you get compiler and IDE warnings about the things you care about.
At work we use C# a lot and adding custom analyzers ensuring code patterns we prefer or require has been one of the best things we did this year, as everyone on the team requires a bit less institutional knowledge and just gets warnings when they do something wrong, perhaps even with a code fix to automatically fix the issue.
If you are calling SomeType.SomeMethod(a, b, c) then you don't know what combinations of a, b, c could allocate unless you can peek into it or try every combination of a, b and c. So it's hard to know in the general case even with profiling and testing.
At least for Unity, the actual problem lies in IL2CPP and not C#. I have professionally used C# in real-time game servers and GC was never a big issue. (We did use C++ in the lower layer but only for the availability of Boost.Asio, database connectors and scripting engines.)
Unity lets you use either IL2CPP (AOT) or Mono (JIT). Either way it will use the Boehm GC, which is a lot worse than the .NET GC. If your game servers weren't using Unity then they were using a better GC.
Yeah, we rolled our own server framework in .NET mainly because we were doing MMOs and there were no off-the-shelf frameworks (including Unity's) explicitly designed for that. In fact, I believe this is still mostly true today.
Unity used Mono, which wasn't the best C# implementation performance-wise. After Mono changed its license, instead of paying for it, Unity chose to implement their infamous IL2CPP, which wasn't better.
Now they want to use CoreCLR which is miles better than both Mono and IL2CPP.
Except that it is a matter of developer skill (and of Unity using Mono with its lame GC implementation), as proven by CAPCOM's custom .NET Core fork-based engine used for Devil May Cry on the PlayStation 5.
GC in modern .NET runtime is quite fast. You can get very low latency collections in the normal workstation GC mode.
Also, if you invoke GC intentionally at convenient timing boundaries (i.e., after each frame), you may observe that the maximum delay is more controllable. Letting the runtime pick when to do GC is what usually burns people. Don't let the garbage pile up across 1000 frames. Take it out every chance you get.
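A minimal sketch of that idea using real .NET APIs (`GCSettings.LatencyMode` and the `GC.Collect` overload that targets a generation); the "frame" loop is just a stand-in:

```csharp
using System;
using System.Runtime;

class FrameLoop
{
    static void Main()
    {
        // Ask the runtime to avoid blocking gen-2 collections during gameplay.
        GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;

        for (int frame = 0; frame < 3; frame++)
        {
            SimulateFrame();

            // Take out the gen-0 trash at a moment we control,
            // instead of letting it pile up across many frames.
            GC.Collect(0, GCCollectionMode.Forced, blocking: true);
        }
        Console.WriteLine("done");
    }

    static void SimulateFrame()
    {
        // Pretend per-frame garbage.
        var temp = new byte[1024];
        temp[0] = 1;
    }
}
```

Collecting only gen 0 keeps each pause tiny at the cost of some throughput, which is the trade-off discussed in the replies below.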
You're basically trading off worse throughput for better latency.
If you forcibly run the GC every frame, it's going to burn cycles repeatedly analyzing the same still-alive objects over and over again. So the overall performance will suffer.
But it means that you don't have a big pile of garbage accumulating across many frames that will eventually cause a large pause when the GC runs and has to visit all of it.
For interactive software like games, it is often the right idea to sacrifice maximum overall efficiency for more predictable stable latency.
This might be more problematic under CoreCLR than under Unity. Prematurely invoking GC will cause objects that are more likely to die in Gen 0 to be promoted to Gen 1, accumulate there and then die there. This will cause unnecessary inter-generational traffic and will extend object lifetimes longer than strictly necessary. Because live object count is the main factor that affects pause duration, this may be undesirable.
If you could even just pass an array of objects to be collected or something, this would be so much easier.
Magic, code or otherwise, sucks when the spell/library/runtime has different expectations than your own.
You expect levitation to apply to people, but the runtime only levitates carbon-based life forms. You end up levitating people without their effects (weapons/armor), to the embarrassment of everyone.
There should be no magic, everything should be parameterized, the GC is a dangerous call, but it should be exposed as well (and lots of dire warnings issued to those using it).
> If you could even just pass an array of objects to be collected or something
If you have a bunch of objects in an array that you have a reference to such that you can pass it, then, by definition, those objects are not garbage, since they're still accessible to the program.
At least for this instance you have a good idea which objects are "ripe" for collection. There should be some way to specify "collect these, my infra objects don't need to be".
Unity (and its GC) is not representative of the performance you get with CoreCLR.
The article discusses ref lifetime analysis that does have relationship with GC, but it does not force you into using one. Byrefs are very special - they can hold references to stack, to GC-owned memory and to unmanaged memory. You can get a pointer to device mapped memory and wrap it with a Span<T> and it will "just work".
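A minimal illustration of that point: the same `Span<T>`-taking function works over stack, GC-heap, and unmanaged memory. This sketch uses `NativeMemory` (.NET 6+) and needs `unsafe` enabled:

```csharp
using System;
using System.Runtime.InteropServices;

class SpanSources
{
    // One function, three memory sources - the byref abstraction hides the difference.
    static int Sum(Span<int> values)
    {
        int sum = 0;
        foreach (int v in values) sum += v;
        return sum;
    }

    static unsafe void Main()
    {
        // 1. Stack memory.
        Span<int> onStack = stackalloc int[] { 1, 2, 3 };

        // 2. GC-owned memory.
        Span<int> onHeap = new int[] { 4, 5, 6 };

        // 3. Unmanaged memory.
        int* native = (int*)NativeMemory.Alloc(3 * sizeof(int));
        var unmanaged = new Span<int>(native, 3);
        unmanaged[0] = 7; unmanaged[1] = 8; unmanaged[2] = 9;

        Console.WriteLine(Sum(onStack) + Sum(onHeap) + Sum(unmanaged)); // 45
        NativeMemory.Free(native);
    }
}
```

Device-mapped memory would slot into case 3 the same way; the GC never needs to know about it.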
Well, when I worked in Unity I used to compile C# code with the LLVM backend. It was as fast as C++ code would be. So Unity is perhaps an example in favor of C#.
C# has much better primitives for controlling memory layout than Java (structs, reified generics).
BUT it's definitely not a language designed for no-gc so there are footguns everywhere - that's why Rider ships special static analysis tools that will warn you about this. So you can keep GC out of your critical paths, but it won't be pretty at that point. But better than Java :D
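For a quick illustration of the layout-control point (the `Particle` struct here is made up): an array of structs is one contiguous block with no per-object headers, and reified generics mean the values are stored inline rather than as boxed references.

```csharp
using System;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct Particle
{
    public float X, Y, Z;   // 12 bytes
    public float Velocity;  // +4 bytes -> 16 bytes total, no object header
}

class LayoutDemo
{
    static void Main()
    {
        // Unsafe.SizeOf reflects the actual managed layout.
        Console.WriteLine(Unsafe.SizeOf<Particle>()); // 16

        var particles = new Particle[1000]; // one allocation, cache-friendly
        particles[0].X = 1f;                // in-place access, no indirection
        Console.WriteLine(particles[0].X);
    }
}
```

In Java, a `Particle[]` of classes would be an array of pointers to 1000 separately allocated objects; this is the primitive C# gives you that Java lacks.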
Possibly prettier than C and C++ still. Every time I write something and think "this could use C", I use C, and then I remember why I was using C# for low-level implementation in the first place.
It's not as sophisticated and good of a choice as Rust, but it also offers "simpler" experience, and in my highly biased opinion pointers-based code with struct abstractions in C# are easier to reason about and compose than more rudimentary C way of doing it, and less error-prone and difficult to work with than C++. And building final product takes way less time because the tooling is so much friendlier.
> The only grudge I have against them is they promised us discriminated unions since forever and they are still discussing how to implement it. I think that is the greatest feature C# is missing.
To ease the wait you could try Dunet (discriminated union source generator).
The DU stuff is enormous once you consider all the corners it touches. Especially with refinements. E.g. in code like
if (s is string || s is int) {
// what's the type of s here? is it "string | int" ?
}
And not to mention that the BCL should probably get new overloads using DU's for some APIs. But there is at least a work in progress now, after years of nothing.
One of the claimed benefits of .NET Core was that they could improve the runtime at a much faster pace than .NET Framework did, especially if that meant adding new features or even IL opcodes. And they've done this before, with a big one (IMO) being ref fields in ref structs. Lately, when it comes to developing C#, the language design team has frustratingly been trying to shoehorn everything into the compiler instead of modifying the runtime. Then they say the runtime should be modified to pattern-match what they output. If DUs are to be implemented fully in C#, niche optimizations would probably be impossible. This means Optional<T>, when T is a class, would take two words.
My long term experience with Visual Studio is the inverse of stellar.
In order to submit bugs with Microsoft, the application redirects the end user to their website via a web socket. The company I work for has extra security and this breaks, preventing me from filing a cornucopia of bugs with Visual Studio. I cannot even file a bug on how the submit system is broken with Visual Studio.
Closing and re-opening Visual Studio is a daily task, most often during refactoring across multiple parts. Creating new classes has inconsistent template usage in the second-most-recent released version. Compile error message history can become stale and inconsistent where the output console does not. Pasting content into the resource manager is still broken during tab entry. Modal dialogs still cover the screen during debug. And those don't even touch the inconsistent and buggy user experience.
C# is a tool, and like all tools it is good for some things and really bad for others. No tool is perfect. You can still use a ball-peen hammer for roofing, but it would be better to have a roofing hammer. I would use Swift on iOS and Kotlin on Android for those platform projects, even though I don't know those languages, and wouldn't use C#.
I assume you mean just the Windows Visual Studio? The Mac version is not exactly on par with the Windows one. Yeah, C# is great, but one would need the Windows version of VS (NOT VS Code) to take full advantage of C#. For me that is a deal breaker, when the DX of a language is tied to a proprietary IDE from MS.
Mac Visual Studio isn't Visual Studio; it's something else that they stuck the Visual Studio label on. They are about as related as Java and JavaScript (which are, famously, as related as car is to carpet).
Have you tried VS Code for C# (yes, ASP.NET Core)? The debugging is just broken. I would not recommend any sane person use VS Code for large C# project development.
IIRC the last couple of releases had some new/overhauled features they said were built for both from the same code, so they seemed to be starting down the path of slowly converging them, before they changed their minds and discontinued the Mac version I guess.
I’m sure you can point to many things Rider is better at, but I’ve found enough sharp edges (including, annoyingly, it not being able to infer types that Roslyn can) that it’s not a sell for me. Visual Studio is also much faster for me and supports NCrunch.
Never heard of NCrunch, but I googled it and it's described as
> the ultimate live testing tool for Microsoft Visual Studio and JetBrains Rider
So it seems at least that part of your critique is outdated.
I'm not sure what you mean about the inference, I've never had any problem with that that I can remember. And it can be a bit slow to start up or analyze a project at first load but in return it gives much better code completion and such.
I absolutely love F#! Two things though: adoption is low so you kind of can't use it professionally and most libraries are written in C#, so you kind of use them in a non idiomatic way.
However amusing, this comparison doesn’t seem accurate to me. F# may appear more challenging only to someone who is already accustomed to OO programming languages. For people just starting to code, without pre-existing habits, learning F# could be much easier than learning C#
But knowing how to effectively program F# requires you to understand OOP and the functional abstractions. Most libraries in .NET still target C#, so understanding C# syntax and mentally translating it to F# is often required. If your application doesn't require many outside dependencies that might be different, but for most projects F# will require a lot more learning.
I have been learning F# for a while now, and while the functional side that is pushed heavily is a joy to use, anything that touches the 'outside world' is going to have way more resources for C# as far as libraries, official documentation, general information including tutorials etc. You will need to understand and work with those.
So you really do need to understand C# syntax and semantics. Additionally there are a few concepts that seem the same in each language but have different implementations and are not compatible (async vs tasks, records) so there is additional stuff to know about when mentally translating between C# and F#.
I really want to love F# but keep banging my head against the wall. Elixir, while not being typed yet and not being as general-purpose, at least allows me to be productive with its outstanding documentation and abundance of tutorials and books on both the core language and domain-specific applications. It is also very easy to mentally translate Erlang to Elixir and vice versa in the very few occasions needed.
Its ironic that the thing that is hard to learn about F# is C#, or more to the point, the patterns/idioms in C# libraries and frameworks. I've seen the same reaction more from people coming from other ecosystems personally working with F#. There's a lot of stuff in C# that people in Java/C# land take for granted that you just don't have to learn in other languages (Javascript, Go, Python, etc) - lots of OOP patterns, frameworks, etc. Staying in the F# lane seems to be easier but can be limiting, but at least you know you won't be stuck if you need an SDK/Library/etc.
The flipside is that adopting F# is less risky as a result - if there isn't a library or you are stuck you can always bridge to these .NET libraries. Its similar I think with other shared runtime languages (e.g. Scala, Kotlin, Clojure, etc). You do need to understand the ecosystem as a whole at some point and how it structures things.
Gleam from a language perspective seems really nice, but it's in its ramp-up stage. I will go through the Gleam Exercism track and keep an eye on it. It would be great if it became the general-purpose, typed, pragmatic functional language with a large ecosystem I am after!
I've found this when seeing F# team adoption in the past especially if coming from outside the .NET ecosystem (no previous .net knowledge). It is easier learning F# for a number of reasons BUT as per another comment when you need to use the "inter-op feature" (i.e. using C# libs) then the learning curve widens quickly especially if using C# like frameworks - libraries are typically still OK/relatively easy. I see interop to large frameworks and moving to different idiomatic C# styles as an advanced F# topic for these adopters.
While it's good to have the escape hatch - it makes adopting F# less of a risk, since you always have the whole .NET ecosystem at your fingertips - if the C# framework being adopted is complex (e.g. uses a lot of implicits) it requires good mentoring and learning to bridge the gap. Usually at this point things like IDE support, mocking, etc., which weren't needed as much before, become heavily needed (as in a typical C# code base). Many C# libraries are therefore not that easy IMO, but with C#-native templates etc. it becomes more approachable if you're coming from that side.
Where people stumble, I've found, is in the differences in code structure, the introduction of patterns (vs. F#'s "just functions" ideal), dependency injection, convention-based things (ASP.NET is a big framework with lots of convention-based programming), and other C# idioms that F# libraries would rather not have due to their complexity. Generally .NET libraries are easy from F#; it's the very OOP frameworks that make people from outside the C#/Java/OOP ecosystem pause a bit, at least in my experience. There are good articles in the F# space on libraries vs. frameworks illustrating this point, if I recall.
I'm hardly missing discriminated unions anymore (sure, exhaustiveness checking would be nice) since the introduction of switch expressions, which in combination with records handle most practical cases.
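The combination described above can be sketched roughly like this - an abstract record with sealed subtypes matched by a switch expression. The type names (`Shape`, `Circle`, `Rectangle`) are illustrative only:

```csharp
using System;

// A closed hierarchy emulated with an abstract record and sealed
// record subtypes - a common stand-in for a discriminated union.
public abstract record Shape;
public sealed record Circle(double Radius) : Shape;
public sealed record Rectangle(double Width, double Height) : Shape;

public static class Geometry
{
    // A switch expression with type patterns plays the role of a
    // pattern match. The discard arm is required because the compiler
    // cannot prove the hierarchy is closed - this is exactly the
    // missing exhaustiveness check the comment mentions.
    public static double Area(Shape shape) => shape switch
    {
        Circle c    => Math.PI * c.Radius * c.Radius,
        Rectangle r => r.Width * r.Height,
        _           => throw new ArgumentOutOfRangeException(nameof(shape)),
    };
}
```

With real discriminated unions the `_` arm would be unnecessary and adding a new subtype would produce a compile error at every non-exhaustive match; here it only fails at runtime.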
That power is usually considered too reckless to retain and simultaneously too cumbersome to actually use, partly because it was never planned in the first place.
C++ concepts restrain that recklessness, and people hate them for it. Rust will get most of that power when const generic expressions are finally stabilized. I like C#, but as the article says, it isn't really borrow checking, so you don't get fearless concurrency. If I want to do coarse-grained multithreading (not just a parallel for loop or iterator to speed up math), I only want to use Rust now. Once I stopped having to think around thread-safety issues and data consistency, I didn't want to go back. But for something single-threaded, C# or Go are great and performant.
Is Rust really bulletproof, though? I've spent a lot of time fixing concurrency bugs (race conditions); it's one of those things I'm very, very good at, but even then it feels like you're Indiana Jones dodging the hidden traps.
Haskell promises to solve concurrency, and the Rust crowd is always claiming that it's impossible to write buggy code in Rust... and the jump from C/C++/C#/Go to Rust is much smaller than to Haskell.
You can still leak memory via a container. You can still create deadlocks. You can still throw a panic (with a backtrace). It does not solve the halting problem. But if it compiles, you will not have any form of undefined behavior: no reading or writing out of bounds, no use-after-free, no dangling pointers, and no data getting modified across threads in an inconsistent way. If something needs a lock, the compiler will tell you with an error.
I'm so glad it doesn't. There is absolutely no need for it and when it's used it usually makes a big mess. It goes in the same pile as multiple inheritance.
GVM (generic virtual method) dispatch is notoriously slow(-ish), yeah. But it does not require JIT - otherwise it wouldn't work with NativeAOT :) (The latter can also auto-seal methods and unconditionally devirtualize members with few implementations, which does a good job; guarded devirtualization with the JIT does this even better, however.)
I remember when this feature was specifically not available with NativeAOT.
It's good that it is now, but how can it be implemented in a way that has truly separate instantiations of generics at runtime when calls cross assembly boundaries? There's no single good place to generate a specialization when the virtual method body is in one assembly while the type parameter passed to it is a type in another assembly.
> how can it be implemented in a way that has truly separate instantiations of generics at runtime, when calls cross assembly boundaries
There are no assembly boundaries under NativeAOT :)
Even with JIT compilation, the main concern - and what requires special handling - is collectible assemblies. In either case it just JITs the implementation. The cost comes from the lookup: you have to look up the virtual member implementation and then the specific generic instantiation of it, which is what makes it more expensive. NativeAOT has definitive knowledge of all generic instantiations that exist, since it must compile all code and the final binary has no JIT.
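For readers unfamiliar with the feature being discussed, here is a minimal sketch of a generic virtual method - one method body in source, but a distinct instantiation per type argument at runtime, which is why dispatch must look up both the implementation and the instantiation. The names (`IHandler`, `EchoHandler`) are illustrative:

```csharp
using System;

public interface IHandler
{
    // A generic method on an interface: calling it through the
    // interface is "GVM dispatch" - the runtime must find the
    // implementing method AND the instantiation for the T used.
    string Process<T>(T value);
}

public sealed class EchoHandler : IHandler
{
    // Process<int>, Process<string>, ... are separate instantiations.
    // With a JIT they are produced on demand; under NativeAOT the
    // compiler must discover and precompile every one that can occur.
    public string Process<T>(T value) => $"{typeof(T).Name}: {value}";
}
```

For example, `((IHandler)new EchoHandler()).Process(42)` yields `"Int32: 42"`, going through the generic-virtual lookup described above rather than an ordinary vtable slot.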