Hacker News | new | past | comments | ask | show | jobs | submit | progrium's comments

Working with VMs always felt difficult because of this. So authoring was built into Docker. Now you can use Apptron to author and embed a Linux system on a web page. This aspect is usable, but it's only going to get better.

It's getting there. Among other things, it's probably the quickest way to author a Linux environment to embed on the web: https://www.youtube.com/watch?v=aGOHvWArOOE

Apptron uses v86 because it's fast. Would love for somebody to add 64-bit support to v86. However, Apptron is not tied to v86. We could add Bochs like c2w does, or even JSLinux, for 64-bit; I just don't think it would be fast enough to be useful for most.

Apptron is built on Wanix, which is sort of like a Plan9-inspired ... micro hypervisor? Looking forward to a future where it ties different environments/OS's together. https://www.youtube.com/watch?v=kGBeT8lwbo0


Damn, who wrote this amazing piece of software


What a genius the original maintainer of this software is


This is a good question that should be answered in the readme. Would love an issue for it to also collect data.


age-old problem, i should definitely warn people about the memory management implications. what would you put in the readme for this?


Do the BridgeSupport files annotate whether you own returned objects and need to release them? Sprinkling in runtime.SetFinalizer calls based on ownership would be slightly nicer than exposing release/retain.


I thought I wrapped NSAutoreleasePool, but maybe I just used it directly dynamically. If something is not wrapped in the source, that doesn't mean you can't use it: objc.Get("NSAutoreleasePool").Alloc().Init()

Fortunately I can afford some leaks at the moment, so if that's critical to anybody else and I'm doing something wrong, just submit a PR.


Ah, I don't just mean the class, I mean you need to have an autoreleasepool block active before calling into AppKit otherwise you can leak memory on every call. It doesn't look like you're using pools yet, or documenting that users of your library should use them.

See here: https://developer.apple.com/library/archive/documentation/Co...

> Cocoa always expects code to be executed within an autorelease pool block, otherwise autoreleased objects do not get released and your application leaks memory


I guess I will look into this, as that really sounds like syntactic sugar for something more basic, like using the class.

I have a hard time keeping up with their changes but you might be right: https://developer.apple.com/documentation/foundation/nsautor...

Oddly it says you cannot use them directly, but later implies maybe they are just less efficient. It would be nice if somebody made an issue for this.


The page you linked is not actually ambiguous, though perhaps a bit tricky to read. It says:

1. If you're compiling Objective-C in ARC mode, you can't use NSAutoreleasePool directly, and must instead use @autoreleasepool.

2. In manual reference counting mode you can use either NSAutoreleasePool or @autoreleasepool, but the latter has lower overhead. (This may matter if e.g. you're draining the autorelease pool on every iteration of a loop to reduce memory spikes.)

Under the hood -- at least on the version I disassembled -- NSAutoreleasePool's -init and -release methods wrap the CoreFoundation CFAutoreleasePoolPush and CFAutoreleasePoolPop functions, which in turn call the runtime's objc_autoreleasePoolPush and objc_autoreleasePoolPop functions, which are the things that @autoreleasepool will cause the compiler to emit directly.


This answer says the block is more efficient than manually managing NSAutoreleasePool objects: https://stackoverflow.com/a/12448176

This answer looks like a better overview of what the runtime is doing: https://stackoverflow.com/a/21010442

The @autoreleasepool block seems equivalent to this:

    ctx = _objc_autoreleasePoolPush()
    defer _objc_autoreleasePoolPop(ctx)

You could maybe provide sugar for it like this: https://play.golang.org/p/dljXN3BdEGr


The implementation of your "sugar" can be shortened:

https://play.golang.org/p/8v4EL2B_c_t


Awesome, can you throw that into an issue?



wow, thanks!


Keep in mind that the reason the `@autoreleasepool` syntax is faster is primarily due to ARC optimizations (which don't apply here, since you're not using ARC).

Calling the `_objc_autoreleasePoolXX` functions is still likely to be faster than using NSAutoreleasePool objects, but only because you're avoiding the Objective-C message sends.


sorry, I meant memory management, not garbage collection. again, they're just convenience methods wrapping those exact functions on NSObject


also helped design Docker (the good parts) and a bunch of other stuff


So, I wrote Dokku, helped design Docker, was co-architect of Flynn, and am now in R&D partnership with Deis. I also made Localtunnel, RequestBin, Hacker Dojo, etc etc if anybody is keeping track. I guess I'm not who this question was intended for, but I figured I'd share some context.

I can't speak as much to Flynn right now, but here is a fairly recent blog post about the current status of Dokku: http://progrium.com/blog/2014/10/28/deis-breathes-new-life-i...

tl;dr: after a long lull between 2013 and 2014, its activity has picked up and it is slowly working towards a 1.0, including a major refactoring to address a lot of issues and bring it up to modern standards. In fact, here is our refactoring doc in progress: https://github.com/progrium/dokku/wiki/Refactoring

I feel the flaws of these projects quite deeply, especially Dokku. But I'm surprised to still run into so many people (just not as many here) that really love Dokku. Without them I would not be motivated to come back to it and make it what it should be.

Keep in mind Dokku was the first killer app for Docker and a lot of my design influence on Docker was to be able to easily make something like Dokku and eventually systems like Flynn, Deis, etc. But also so many other things...


I'm involved as an independent collaborator again in Docker's internal extension efforts, and after I saw this I got ClusterHQ looped in to that working group. Then we talked about collaborating on a Go port of Powerstrip, so I'm working on that here: https://github.com/ClusterHQ/powerstrip/tree/golang

We're also planning to combine the Docker event stream plugin system I prototyped previously into Powerstrip: https://github.com/progrium/docker-plugins

And eventually we'll have native extension support in Docker that allows much more extensibility, but all this is a good first step IMO.


Do you know if Powerstrip could be used to extend the entire family of Docker tools: machine, compose (fig), swarm?

I think it would be interesting, for example, to explore development of an extension that "guided" those tools regarding where (virtually speaking) they should re/deploy machines and containers, based on metrics collected by a 3rd party service with which that extension communicated.

I apologize if my question is a bit naïve; at present, I've been spending a lot of time with the porcelain and haven't gotten into the internals enough to understand how all the Docker pieces truly fit together.


Hey Michael!

Powerstrip can be used to prototype extending anything which speaks the Docker API. So in theory it could be used in front of swarm, as well as behind it.

I'm also interested in finding a way to prototype extensions to the Docker CLI experience... this is the next logical step, and something we should talk about. :)

Just to be clear, as well, Powerstrip is all about finding ways to prototype things. I believe that the way it gets used can go a long way toward figuring out what the best extension points in Docker itself are.

IMO, the Powerstrip project will be a success precisely if we can throw it away in a few months because we've used the results from the Powerstrip experiment to build the right extension mechanism into Docker itself that we can use instead ;)

Cheers, Luke


Hi,

Because Powerstrip presents the standard Docker HTTP API, it can certainly be used to interact with any other tool that talks standard Docker HTTP.

For example - I put together a small run-through in powerstrip-weave where fig can be used to allocate weave IP addresses:

https://github.com/binocarlos/powerstrip-weave/tree/master/e...

Fig speaks to the Powerstrip HTTP API, which in turn speaks to the weave adapter.

It would definitely be interesting to have some kind of cadvisor adapter (or equivalent) that was automatically feeding back metrics to some kind of scheduler.

The main point is to allow the vanilla Docker client to interact with Powerstrip, and therefore, by extension, all of the existing orchestration tools.

