Working with VMs always felt difficult because of this. So authoring was built into Docker. Now you can use Apptron to author and embed a Linux system on a web page. This aspect is usable, but it's only going to get better.
Apptron uses v86 because it's fast. I'd love for somebody to add 64-bit support to v86. However, Apptron is not tied to v86. We could add Bochs like c2w, or even JSLinux for 64-bit; I just don't think it would be fast enough to be useful for most people.
Apptron is built on Wanix, which is sort of like a Plan 9-inspired ... micro hypervisor? Looking forward to a future where it ties different environments/OSes together.
https://www.youtube.com/watch?v=kGBeT8lwbo0
Do the BridgeSupport files annotate whether you own returned objects and need to release them? Sprinkling in runtime.SetFinalizer calls based on ownership would be slightly nicer than exposing release/retain.
I thought I wrapped NSAutoreleasePool, but maybe I just used it directly via the dynamic API. If something is not wrapped in the source, that doesn't mean you can't use it: objc.Get("NSAutoreleasePool").Alloc().Init()
Fortunately, I can afford some leaks at the moment, so if avoiding them is critical to anybody else and I'm doing something wrong, just submit a PR.
Ah, I don't just mean the class, I mean you need to have an autoreleasepool block active before calling into AppKit otherwise you can leak memory on every call. It doesn't look like you're using pools yet, or documenting that users of your library should use them.
> Cocoa always expects code to be executed within an autorelease pool block, otherwise autoreleased objects do not get released and your application leaks memory
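A minimal sketch of the bracketing pattern being described, assuming a dynamic binding shaped like the `objc.Get("NSAutoreleasePool").Alloc().Init()` call mentioned above. The `object` stub and `drains` counter here are stand-ins so the pattern compiles on its own, not the library's actual API:

```go
package main

import "fmt"

// Stand-ins so the pattern compiles without a real objc bridge; in the
// actual library these would be the binding's object type and Get function.
type object struct{ class string }

var drains int // counts pool drains, just for this demo

func (o object) Alloc() object { return o }
func (o object) Init() object  { return o }
func (o object) Release()      { drains++ } // under MRR, -release on a pool also drains it

func Get(class string) object { return object{class: class} }

// WithPool brackets fn in an NSAutoreleasePool -- the manual-retain-release
// counterpart of an @autoreleasepool block. Without an active pool, every
// autoreleased object AppKit returns is leaked.
func WithPool(fn func()) {
	pool := Get("NSAutoreleasePool").Alloc().Init()
	defer pool.Release()
	fn()
}

func main() {
	WithPool(func() {
		// ... AppKit calls that return autoreleased objects go here ...
		fmt.Println("inside pool")
	})
}
```

A helper like this could also be the place to document the requirement, so library users don't have to know the Cocoa rule themselves.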
The page you linked is not actually ambiguous, though perhaps a bit tricky to read. It says:
1. If you're compiling Objective C in ARC mode, you can't use NSAutoreleasePool directly, and must instead use @autoreleasepool.
2. In manual reference counting mode you can use either NSAutoreleasePool or @autoreleasepool, but the latter has lower overhead. (This may matter if e.g. you're draining the autorelease pool on every iteration of a loop to reduce memory spikes.)
Under the hood -- at least on the version I disassembled -- NSAutoreleasePool's -init and -release methods wrap the CoreFoundation CFAutoreleasePoolPush and CFAutoreleasePoolPop functions, which in turn call the runtime's objc_autoreleasePoolPush and objc_autoreleasePoolPop functions, which are the things that @autoreleasepool will cause the compiler to emit directly.
Keep in mind that the reason the `@autoreleasepool` syntax is faster is primarily due to ARC optimizations (which don't apply here, since you're not using ARC).
Calling the `_objc_autoreleasePoolXX` functions is still likely to be faster than using NSAutoreleasePool objects, but only because you're avoiding the Objective-C message sends.
So, I wrote Dokku, helped design Docker, was co-architect of Flynn, and am now in an R&D partnership with Deis. I also made Localtunnel, RequestBin, Hacker Dojo, etc., if anybody is keeping track. I guess I'm not who this question was intended for, but I figured I'd share some context.
tl;dr: after a long lull between 2013 and 2014, activity has picked up and Dokku is slowly working towards a 1.0, including a major refactoring to address a lot of issues and bring it up to modern standards. In fact, here is our refactoring doc in progress:
https://github.com/progrium/dokku/wiki/Refactoring
I feel the flaws of these projects quite deeply, especially Dokku. But I'm surprised to still run into so many people (just not as many here) who really love Dokku. Without them I would not be motivated to come back to it and make it what it should be.
Keep in mind Dokku was the first killer app for Docker and a lot of my design influence on Docker was to be able to easily make something like Dokku and eventually systems like Flynn, Deis, etc. But also so many other things...
I'm involved as an independent collaborator again in Docker's internal extension efforts, and after I saw this I got ClusterHQ looped in to that working group. Then we talked about collaborating on a Go port of Powerstrip, so I'm working on that here:
https://github.com/ClusterHQ/powerstrip/tree/golang
Do you know if Powerstrip could be used to extend the entire family of Docker tools: machine, compose (fig), swarm?
I think it would be interesting, for example, to explore development of an extension that "guided" those tools regarding where (virtually speaking) they should re/deploy machines and containers, based on metrics collected by a 3rd party service with which that extension communicated.
I apologize if my question is a bit naïve; at present, I've been spending a lot of time with the porcelain and haven't gotten into the internals enough to understand how all the Docker pieces truly fit together.
Powerstrip can be used to prototype extending anything which speaks the Docker API. So in theory it could be used in front of swarm, as well as behind it.
I'm also interested in finding a way to prototype extensions to the Docker CLI experience... this is the next logical step, and something we should talk about. :)
Just to be clear, as well, Powerstrip is all about finding ways to prototype things. I believe that the way it gets used can go a long way toward figuring out what the best extension points in Docker itself are.
IMO, the powerstrip project will be a success precisely if we can throw it away in a few months because we've used the results from the powerstrip experiment to build the right extensions mechanism into Docker itself that we can use instead ;)
Fig speaks to the Powerstrip HTTP API, which in turn speaks to the weave adapter.
It would definitely be interesting to have some kind of cadvisor adapter (or equivalent) that was automatically feeding back metrics to some kind of scheduler.
The main point is to allow the vanilla Docker client to interact with Powerstrip and therefore, by extension, all of the existing orchestration tools.