We recently implemented a simple streaming server based on OpenResty and Redis, with Redis serving as the buffer. We load-tested it with up to 2,000 mountpoints (128 kbit/s MP3 streams) and 15,000 clients.
The architecture has some nice properties. For example, keeping the buffer out of process means (besides an obvious performance hit) that the streaming server can be redeployed seamlessly without losing the buffer.
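The buffer is essentially a capped ring of recent audio chunks per mountpoint. Here's a minimal in-process sketch of that idea (in the real setup this state lives in Redis, roughly LPUSH plus LTRIM, which is what lets it survive a redeploy; class and method names here are made up for illustration):

```python
from collections import deque

class MountpointBuffer:
    """Keeps the last `max_chunks` audio chunks for one mountpoint.

    In the actual architecture this buffer lives in Redis, so it
    outlives a restart of the streaming server. The in-process deque
    below only illustrates the semantics.
    """

    def __init__(self, max_chunks=32):
        # Oldest chunks are evicted automatically once the cap is hit.
        self.chunks = deque(maxlen=max_chunks)

    def push(self, chunk: bytes):
        self.chunks.append(chunk)

    def backlog(self) -> bytes:
        # What a newly connected client receives before tailing live data.
        return b"".join(self.chunks)

buf = MountpointBuffer(max_chunks=3)
for c in (b"a", b"b", b"c", b"d"):
    buf.push(c)
print(buf.backlog())  # the oldest chunk has been evicted
```

A new listener gets the backlog immediately (so playback starts without waiting for a buffer to fill) and then follows the live stream.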
I just moved from a late-2014 MBP running Arch to a Precision 5520 (more or less the business variant of the XPS 15) running Alpine. Both feature an i7 HQ, 16 GB RAM, and a 512 GB SSD. Both were about 2200€.
The Precision is a good laptop, but it's the first laptop I've owned that isn't better in every way than its predecessor.
On par:
* Fast (for a laptop) processor.
* Fast SSD. 900 MB/s on the MBP, 400 MB/s on the Precision. Fast enough either way.
* 16GB RAM. Just enough for me.
Good:
* Linux support. Very important for me. Suspend to RAM on lid close works reliably. WLAN works. Display dimming just works (as on the MBP), and there's no coil whine.
* Display. Native HD (I had to use Linux's crappy scaling on the MBP). Non-glare. (And the MBP had the display stain issue.)
* 3 years of on-site warranty; if on-site isn't possible, I get to keep the SSD.
* Can change RAM, SSD, and battery without voiding the warranty.
Bad:
* Display. I'd prefer 16:10.
* Sound is slightly worse when the laptop is used on a table, and dismal when it's used... on the lap.
* Keyboard. The one on the new Precision is worse than the one on the three-year-old MBP. I'll use the UHK in the office anyway.
I was excited for a moment, then found out Neovim doesn't support -x. And this is why: https://github.com/neovim/neovim/issues/694 (tl;dr: Vim's built-in encryption is insecure, so it was removed from Neovim for the time being). Would have been nice, though.
Point 3 is the reason nginx is the recommended proxy in front of web apps with limited concurrency (for example Ruby with Unicorn; see http://unicorn.bogomips.org/PHILOSOPHY.html for an explanation) when slow clients are to be expected. nginx protects the web app's workers from being blocked by slow clients, and Outlook Webmail seems to behave just like one. I don't know off-hand how to tune this behavior if one wants to avoid it, but this property is the main reason we use nginx.
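A hedged sketch of the usual setup: with proxy_buffering on (which is nginx's default, shown here only for emphasis), nginx absorbs the upstream response into its own buffers and trickles it out to the slow client itself, so the Unicorn worker is freed immediately. The socket path and buffer sizes below are assumptions, not a recommendation:

```nginx
upstream unicorn {
    # Unicorn listening on a local Unix socket (path is an assumption)
    server unix:/run/unicorn.sock fail_timeout=0;
}

server {
    listen 80;

    location / {
        # nginx reads the full upstream response into buffers (spilling
        # to a temp file if needed) and feeds the slow client itself,
        # releasing the Unicorn worker right away.
        proxy_buffering on;    # the default; shown for emphasis
        proxy_buffers 16 16k;
        proxy_pass http://unicorn;
    }
}
```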
This sounds like something else. In the Outlook case, their servers seem to use the connection as a stream (which is actually valid, although not really supported by browsers outside of the event-stream class), where the server only writes small chunks of data at a time. But the server there isn't hindered from writing by a slow client - it simply has no more data to write at that point in time.
Actually, we (laut.fm) are using the HTTP Push Module for our public API, for a live stream of the tracks played on our ~1500 Icecast stations. We offer 3 formats: http://api.laut.fm/song_change.stream.json for a line-separated JSON HTTP stream, ws://api.laut.fm/song_change.ws.json for the same as a WebSocket endpoint, and http://api.laut.fm/song_change.chunk.json for the last x songs. It's not really high volume, just 6 to 7 messages per second, but it has run basically unattended for years now and I'm pretty happy with it.
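Consuming a line-separated JSON stream like that is simple on the client side: read line by line, parse each non-empty line as one JSON document. A minimal sketch (the field names in the sample payload are made up; real payloads come from the endpoint above):

```python
import io
import json

def iter_json_lines(stream):
    """Yield one parsed object per non-empty line of a
    line-separated JSON stream."""
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)

# Simulated stream; real code would iterate over an HTTP response body.
sample = io.StringIO(
    '{"station": "demo", "title": "Song A"}\n'
    '{"station": "demo", "title": "Song B"}\n'
)
songs = list(iter_json_lines(sample))
print(songs[1]["title"])  # Song B
```

The newline framing is what makes this consumable by almost anything, from curl in a shell loop to a browser reading the response incrementally.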
Is there any reason to upgrade to the new one (other than the new features; admittedly I haven't really looked into Nchan)?
Here's a page on the differences between Nchan and the Push Module: https://nchan.slact.net/upgrade . One important thing I forgot to add is that the Push Module suffered from memory fragmentation under high load, and with a fixed-size shared-memory chunk that could mean running out of usable shared memory in a long-running nginx process. If you're not experiencing that, don't need to scale up, and aren't drawn to the new features, don't upgrade -- certainly not yet.
Maybe in a month or two, when Nchan makes its way into the nginx-extras Debian package (replacing the Push Module), consider upgrading.
We're using a home-baked solution built around our own config server (https://github.com/niko/goconfd), which evaluates templates POSTed to it. Combined with some trivial shell loops over a blocking POST and an HAProxy restart, this does the job well enough. If anybody is interested in the details, I'm happy to elaborate.
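The shell side can be as small as a loop around a blocking POST: render the template via the config server, write the result, reload HAProxy. A sketch under assumptions - the goconfd endpoint path, the blocking behavior, and the file locations are all hypothetical, not goconfd's actual API:

```shell
#!/bin/sh
# Hypothetical goconfd URL and HAProxy paths; adjust to taste.
GOCONFD_URL="${GOCONFD_URL:-http://localhost:8080}"
HAPROXY_CFG="${HAPROXY_CFG:-/etc/haproxy/haproxy.cfg}"

fetch_config() {
    # POST the template; the server renders it and (in this sketch)
    # blocks until one of the referenced keys changes.
    curl -fs -X POST --data-binary @haproxy.cfg.tmpl "$GOCONFD_URL/render"
}

reload_haproxy() {
    # -sf hands the listening sockets over and soft-stops the old process.
    haproxy -f "$HAPROXY_CFG" -sf "$(cat /var/run/haproxy.pid 2>/dev/null)"
}

watch_once() {
    new_cfg="$(fetch_config)" && [ -n "$new_cfg" ] || return 1
    printf '%s\n' "$new_cfg" > "$HAPROXY_CFG"
    reload_haproxy
}

# The actual loop; the sleep guards against a crashed config server.
# while true; do watch_once || sleep 5; done
```

Because the POST blocks until something changes, the loop is idle almost all the time and reacts within one round trip when a value is updated.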