Hacker News | chusk3's comments

It also solves a more subtle problem - when people install a tool globally they install latest _at that time_. But the world doesn't stand still, and things get updated. `dnx` always uses the latest available version by default (you can of course pin with `dnx <package>@<version>`) so that you never get out of date.
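To sketch what that looks like in practice (the tool name `dotnetsay` here is just a placeholder):

```shell
# Runs the latest published version of the tool each time
dnx dotnetsay "hello"

# Pin to a specific version when you need reproducibility
dnx dotnetsay@1.0.5 "hello"
```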


Hi, I'm the maintainer for the VSCode F# support. Rider support for F# is great, that team is very engaged, and I'd strongly consider it if

* you have mixed C#/F# projects in a solution (C# and F# support in VSCode can't communicate today)

* you use Rider for other technology

* you want paid support for your editor tools

* you prefer IDE-style experiences rather than editor/extension-style experiences

* you want to take advantage of Rider features like their accelerated build caching


I appreciate the honesty and straightforwardness in this post given your position. Thank you.


Capturing these guidelines is one of the primary reasons that https://clig.dev/ exists.


They've worked for several years now on dotnet, but the type provider author has to do some work to allow their type provider to compile and target that runtime.


We've been baking this functionality directly into the .NET SDK for a couple releases now: https://github.com/dotnet/sdk-container-builds

It's really nice to derive mostly-complete container images from information your build system already has available, and the speed/UX benefits are great too!
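For anyone curious what that looks like, a minimal sketch (property names are from the sdk-container-builds docs; the repository name and tags here are hypothetical):

```xml
<!-- csproj fragment: container metadata the SDK uses when publishing -->
<PropertyGroup>
  <ContainerRepository>my-app</ContainerRepository>
  <ContainerImageTags>1.2.3;latest</ContainerImageTags>
</PropertyGroup>
```

With that in place, `dotnet publish /t:PublishContainer` produces the image directly, no Dockerfile required.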


Assuming you're talking about FsAutoComplete and this was recently, that's nothing to do with the .NET Runtime and entirely a coding mistake that I made that we've released a fix for :)


This is the most Hacker News comment I have ever seen, lol. The actual guy behind the bug here replying to comments with a mea culpa :D


Oh! HAHA it was FsAutoComplete. right on. Thank you!


Mainly what I was getting at here is that there are often repo-level files that are part of the .NET build process and that are easy to forget to include in your build context, or in the initial COPY command that most folks do before a package restore to take advantage of Dockerfile layer caching. Right now, as a user, you have to be aware of the repo/file layouts and get your build contexts right if you build inside a multi-stage Dockerfile.

IMO for a significant part of the user base there's no reason to have to manage that at all - .NET is capable of cross-targeting enough to not need to perform the build inside of a container. That keeps the user in the 'build context' that they are used to, and we can use all of that context to still end up at the ideal result - a correct container, with all of their app dependencies.
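For context, the multi-stage pattern being described usually looks something like this (paths and file names are illustrative); forgetting repo-level files like Directory.Build.props or NuGet.config in that first COPY is the easy mistake:

```dockerfile
# Build stage: copy only project/repo-level files first so the restore layer caches
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY MyApp.csproj Directory.Build.props NuGet.config ./
RUN dotnet restore
# Now copy the rest of the source and publish
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: only the published output
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```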


Hey folks - author here, happy to answer any questions about the feature or what we're hoping to do with it.

Broadly we just want to lower barriers to containerization for all .NET developers. Jib/Ko/etc are proven patterns in this field, and we saw an opportunity to use the existing infrastructure of MSBuild to reduce the amount of concepts our users would need to know in order to be successful in their journey to the cloud. On top of that, having the feature in SDK provides some opportunities to help users adhere to conventions around container labeling (or customize container metadata entirely!) so we can make .NET containers good citizens in the container ecosystem overall.


First, let me just say that this seems great. It looks like a perfect way to use reasonable defaults (project name, Version) and use the existing `dotnet publish` infrastructure to make containers. And I love how the blog post has both a simple CLI example, and a GitHub Actions yaml example! So thank you.

Now for the problem:

I still don't understand why other people compile dotnet projects in containers. Today, we have many containers built on a monolith, and it looks like if I make containers via `-p:PublishProfile=DefaultContainer`--for example, 20 containers--then that CI build is going to compile our codebase 20 separate times. With `-p:PublishProfile=DefaultContainer`, the long build is mostly duplicated in each container. Right?

So I have one major problem preventing me from adopting this: it's compiling in the container, which balloons our build time.

It's entirely possible I'm missing something obvious or misinterpreting the situation, and if so, please let me know. I'm mostly immune to feeling shame and appreciate feedback.


There is some benefit to building inside a container - it keeps your build environment consistent across team members and makes it easier to replicate your CI.

Having said that, because the .NET toolchain is capable of cross-targeting, this feature should enable broad swaths of users to not need to build inside a container to get a container created. So I completely agree with your puzzlement here and would hope that this feature leads to a reduction in that particular pattern.


> it keeps your build environment consistent across team members

I have never had .NET build issues due to environment inconsistencies across team members. I think NuGet is pretty good at making the dependencies consistent. No need for containers.


I personally appreciate the ability to build on any machine - a newly set up dev machine, or a new build machine - without having to worry about whether I have all the various dependencies installed for a successful build. Not all of my build dependencies can be handled with NuGet.


Thank you for taking questions. Why do we need to run a managed language inside a Docker container? Doesn't the VM provide sufficient sandboxing?


IMO running a managed runtime like .NET inside a container isn't done as a security measure (like sandboxing) - instead it's done for uniformity and ease of deployment to the infinite number of cloud services/hosting providers that understand containers. Making it easy to make containers for .NET applications means that it's easy to go to any hosting model of your choice, instead of waiting for $NEXT_BIG_CLOUD to provide .NET-runtime runners for their bespoke service.


> Jib/Ko/etc are proven patterns in this field, and we saw an opportunity to use the existing infrastructure of MSBuild to reduce the amount of concepts our users would need to know in order to be successful in their journey to the cloud.

Hah, I don't know - my experiences with Jib have only ever been negative. Having something like a Dockerfile that lets you customize everything that goes into the container, and only having to worry about your app as a .jar file, seemed like a better option to me, rather than having some plugin that integrates with your build tooling and feels infinitely more opaque all of a sudden: https://cloud.google.com/java/getting-started/jib

Essentially if you'd need a bunch of custom packages, e.g. some non-open-source fonts so your PDF export in your Java app would work correctly, you'd still probably need a custom base image, thus slightly negating the benefits of this apparent simplification: https://cloud.google.com/java/getting-started/jib#base-image

In addition, the images that were generated (last I tried) didn't have proper timestamps and thus showed up in Docker as created decades ago, which might be good from a reproducible build perspective (same code --> same image), but still felt unintuitive when you actually looked at the images.

But hey, maybe I'm just used to Dockerfiles and not needing a different plugin for each separate technology stack - looking at any application as just a Dockerfile (or a similar equivalent) regardless of whether it runs Java, Ruby, .NET, Python, Node or something else under the hood has always seemed like a good idea.

I'm glad that people who like alternative approaches have those options!

Personally (bit of a tangent here), I also found things like dealing with memory limits in the JVM to be problematic (e.g. the container needs a bit of free memory not to OOM, so the JVM needs to leave a bit free, but Xmx is not the actual limit and will still be exceeded; alas, there is no actual JVM_MAX_MEMORY_LIMIT_MB parameter, so it's a bit of a pain if you want stable containers that don't crash), so it's nice that various different technologies are getting attention, be it Jib, .NET or something else!

.NET just generally seems like a pretty sane and performant option (primarily for web development, but for other use cases as well), especially with how it feels like most of what you need comes out of the box vs the more fragmented nature of other stacks (e.g. Spring and its plugins like Hibernate/myBatis/jOOQ in Java land).

In summary: I still believe that this (much like the other tools in the space) will be good for people who don't want to learn all of the concepts of what Docker/Buildah/... provide you with, and will make building containers for your particular stack easier. Though this will come at the expense of having multiple separate tools for different tech stacks, which may or may not erase some of the benefits, depending on how polyglot your stack is.


You make great points about the need for customization and the boundaries of solutions that aren't based on Dockerfiles. Our approach to that problem is twofold, though both parts are still only in the planning stage:

* eventually providing an 'eject' mechanism to create the matching Dockerfile for a given project. This serves as a basis for any customization you might need, as well as a base language that many existing tools can understand.

* making it easy to include arbitrary image layers by reference in your container through a syntax like `<ContainerLayer Include="<layer SHA ref>" />`. This makes it easy to grab already-built components and inject them into your build.

I entirely agree with your summary. More choices, but all built on the same standard foundation :)


This is similar in aim to FSharp.Formatting[1], which has been used for a while to generate syntax highlighting and hover tooltips for code samples and API docs in the F# ecosystem. Very cool to see!

[1]: https://github.com/fsprojects/FSharp.Formatting


Oh that has on-hover tips, that's really cool


In F# we call it 'collect' instead, partially for this reason.
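For anyone unfamiliar, a minimal F# sketch of `collect` (the same operation other languages call flatMap):

```fsharp
// Map each element to a list, then flatten the results
[1; 2; 3] |> List.collect (fun x -> [x; x * 10])
// [1; 10; 2; 20; 3; 30]
```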


When I tried to come up with a better name for flatMap, I got 'gather'.

So maybe 'collect' is a better idea.

