jayjohnson's comments

Good timing! I am using my own AI stack (Keras + TensorFlow) to predict in-game hackers in ARK: Survival Evolved on an AWS EC2 instance. Here's some background on the fully open-sourced stack: https://github.com/jay-johnson/train-ai-with-django-swagger-... with docs at http://antinex.readthedocs.io/ I would love some players, but I'm still load testing how many players the game server can support while making real-time predictions without impacting the game. Reach out if you want to try it out!


Not the same as HubPress (which looks like something I wish I had known about), but just a month ago I wanted to run an SSG locally (one that wasn't using Jekyll), so I built this one for deploying nginx + Sphinx using Docker. It auto-converts RST markup files into Bootstrap + Bootswatch themed HTML posts (and has a built-in search engine). http://jaypjohnson.com/2016-06-25-host-a-technical-blog-with...


These are great points, but with the post I wanted to share how I approach designing and building Docker containers, because I hope it saves others time building them. When I started trying to deploy containers to production without Docker Compose, it was painful. Now that I know what that time sink feels like, I focus on applying configuration management during initialization inside the container instead of at the Dockerfile RUN directive (which requires a rebuild).
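A minimal sketch of what "configuration at initialization" can look like: a hypothetical entrypoint script (all names here are made up) that renders config from env vars each time the container starts, so changing settings never forces an image rebuild.

```shell
#!/bin/sh
# Hypothetical entrypoint.sh: apply configuration when the container
# starts, driven by env vars, instead of baking values in with a
# Dockerfile RUN step (which would require rebuilding the image).
: "${DB_HOST:=localhost}"
: "${DB_PORT:=5432}"

# Render the app config from the environment at startup time.
cat > /tmp/app.conf <<EOF
db_host=${DB_HOST}
db_port=${DB_PORT}
EOF

echo "configured db at ${DB_HOST}:${DB_PORT}"
# exec "$@"  # then hand control to the real application process
```

The same image can then be pointed at dev, staging, or production just by changing the environment it is launched with.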

Back to your points: would you be interested in using this container approach to build a database container cluster in Docker and then benchmark it against a working imgadm(1M) + SmartOS environment? On the Docker side, when I scale out a cluster of runtime-only database containers, I use the volume-mounting attribute that is commented out in the repo's compose file (https://github.com/jay-johnson/docker-schema-prototyping-wit...) and host the database files outside the containers in persistent storage that is mounted and available on the hosts where the containers run. For building the database container in this proposed solution, I would still use this Docker Compose-centric approach in both development and production, because I can build the container once and avoid a rebuild that would invalidate testing the container within my DevOps artifact pipeline.
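As a hedged sketch (hypothetical service and path names, not the repo's actual file), the volume-mounting idea amounts to:

```yaml
# Hypothetical docker-compose.yml fragment: keep the data files on
# host-mounted persistent storage so the runtime-only container can
# be rebuilt or replaced without touching the database files.
version: "2"
services:
  db:
    image: postgres
    volumes:
      - /mnt/shared-storage/db-data:/var/lib/postgresql/data
```

The container stays disposable; only the host-side directory needs to be backed up or replicated.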

Wrapping up: I live more on the application development side, so I cannot speak specifically to SmartOS or imgadm(1M), but the Docker documentation and community have been very helpful for finding technical solutions that helped me build and launch products (go team!). If you are interested, we can discuss in more detail how I would approach benchmarking these two database environments. I am a big proponent of testing everything, and I would enjoy discussing how to tackle a db perf/load/HA test harness like my message simulator (https://github.com/GetLevvel/message-simulator).

Feel free to connect with me on LinkedIn anytime: https://www.linkedin.com/in/jay-johnson-27a68b8a


imgadm(1M) fetch time depends on the type and size of the image, as well as the speed of one's network connection. This is performed only once.

vmadm(1M) provisioning can take anywhere from 5 to 25 seconds.

Oracle DB provisioning with my program takes 45 minutes, as it performs a CREATE DATABASE and is constant irrespective of hardware.


Agreed. That's a great point for a simple use case, and it is how I started. Now that I build for Docker Compose out of the gate, I can deploy a composition across a multi-host Docker Swarm with one command (usually just changing the network driver to overlay and utilizing labels), and, more importantly, I do not have to rebuild my container on production. Back in Docker 1.9, I found myself running too many variations to get the same behavior that Compose now handles natively with the new labels + overlay functionality (reference http://blog.levvel.io/blog-post/running-distributed-docker-s...). I was deploying to specific hosts using `docker run` with manually assigned env vars:

# docker run -itd --name=AppDeployedToNode1 --env="constraint:node==swarm1.internallevvel.com" busybox

# docker run -itd --name=AppDeployedToNode2 --env="constraint:node==swarm2.internallevvel.com" busybox

# docker run -itd --name=AppDeployedToNode3 --env="constraint:node==swarm3.internallevvel.com" busybox

RabbitMQ Example - https://gist.github.com/jay-johnson/2673ce4df42317667908#fil...

Looking back, it feels like a lot of effort just to deploy three busybox instances or that initial RabbitMQ cluster, and that effort to handle the "production deployment case" as early as possible is what set me on the path to the Docker Compose-centric approach discussed in the link above.
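For comparison, a hedged sketch of how one of those `docker run` placements might look in a compose file (hypothetical names; compose v2-era syntax with classic-Swarm scheduling constraints):

```yaml
# Hypothetical docker-compose.yml: a service pinned to a Swarm node
# via a scheduling constraint, attached to a multi-host overlay network.
version: "2"
services:
  app:
    image: busybox
    environment:
      - "constraint:node==swarm1.internallevvel.com"
    networks:
      - appnet
networks:
  appnet:
    driver: overlay
```

One `docker-compose up -d` then replaces the per-host `docker run` invocations above.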


I am pretty new to Kubernetes. What are some of the gaps between Docker Compose and Kubernetes' deployment orchestration? I am pretty happy with Docker 1.10.3, but I am always interested in hearing about something better/cleaner (I don't know what I don't know). I'm looking over your GitHub for some samples at the moment.


(1) Containers in a Kubernetes pod share the same IP address. This one is huge, in that it means I am not addressing containers through port mappings, but by IP address. A single pod is a collection of containers, much like a single unit of a compose.yml where things can be linked together.

(2) Building on (1), SkyDNS allows pods to be named.

(3) Building on (2), we can define Services that dynamically select pods. That means if I have a pod that depends on another pod, I can instead have it depend on a Service. The individual pods that make up the Service can come and go, allowing updates and maintenance to be decoupled.
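A hedged sketch of point (3), with hypothetical names: a Service that selects pods by label, so a dependent addresses the stable Service name instead of specific pods.

```yaml
# Hypothetical manifest: any pod labeled app=db backs this Service,
# so consumers target "db" while individual pods come and go.
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  selector:
    app: db
  ports:
    - port: 5432
```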

Those are the basics. Note that I remember seeing a presentation from the Docker folks debating whether they should implement something like this. The idea is too useful not to use.

This is sufficiently useful that it is worth running a single-node kubelet on your dev machine instead of Docker Compose. Compose is still fine if you are only working with a single microservice/app; when n > 1, the pods and services of K8s start making a lot more sense.


So if I take a sample out of your Matsuri repo, how does this get changed as an "Overridable": https://github.com/shopappsio/matsuri/blob/f966480380b685d34...


let() defines a memoized method. When you inherit from, or include, that module in a new class, you can redefine the same let(), and the new class will use the redefined method instead of the inherited one. You can do that with any let() that gets defined, which lets you use as much or as little of the shared code as you want.
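For anyone following along, here is a minimal sketch of the pattern (hypothetical code, not Matsuri's actual implementation): a class-level `let` macro that defines a memoized instance method which a subclass can redefine.

```ruby
# Minimal RSpec-style let(): a macro that defines a memoized
# instance method; redefining it in a subclass overrides the parent.
module Lettable
  def let(name, &blk)
    define_method(name) do
      @__lets ||= {}
      # Evaluate the block once per instance and cache the result.
      @__lets.fetch(name) { @__lets[name] = instance_eval(&blk) }
    end
  end
end

class Base
  extend Lettable
  let(:image) { "busybox" }
end

class Custom < Base
  let(:image) { "alpine" } # the redefinition wins for Custom instances
end

puts Base.new.image   # busybox
puts Custom.new.image # alpine
```

Anything not redefined in `Custom` falls through to the inherited definition, which is what makes sharing "as much or as little" of the code work.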


By the way, if you want to talk over email, feel free to drop me a line at talktohosh at gmail dot com.


Agreed. It is not a good idea to store the db user/password in a docker-compose.yml file, but it is nice for demonstrating a container that's more than a "hello world", and for non-production use cases.
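One hedged alternative (a compose v2-era sketch with a hypothetical credentials file kept out of version control):

```yaml
# Hypothetical fragment: reference an env file instead of putting
# credentials inline; the file itself stays out of the repo.
version: "2"
services:
  db:
    image: postgres
    env_file:
      - ./db-credentials.env  # holds POSTGRES_USER / POSTGRES_PASSWORD
```

The compose file can then be committed as-is while each environment supplies its own credentials file.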


I totally agree, and this is not a great idea for newcomers. I do find it easier to debug with all the bells and whistles turned on, but yes, the cap_add entries are NOT production-ready. I will remove them to prevent confusion for those looking to get started with Docker.
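For context, the kind of debug-only grant being discussed looks roughly like this (hypothetical fragment; the capability chosen is just illustrative):

```yaml
# Hypothetical fragment: extra Linux capabilities are handy while
# debugging (e.g. attaching strace/gdb) but should not ship as-is.
version: "2"
services:
  db:
    image: postgres
    cap_add:
      - SYS_PTRACE
```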


Nice. I had the same experience with DO last September. The same month, Braintree asked me to write a Ruby project that took about three days, and then on the call they only asked about Python SQLAlchemy database queries. I don't know what the disconnect is between these larger companies and their HR staff. By the time a third one asked me to write a Node.js REST API from scratch, I just chuckled and said thanks for your time. I doubt I will commit more than a few hours in the future to any throwaway project just for a callback. Why can't we get paid like freelancers for the time spent writing sample code? If it isn't good enough, pay for my time (up to a max) and both parties can move on. Time is more valuable than the chance of getting past that initial callback and further screens.


You should send them an invoice.


This is good feedback with great solutions. We are novices at all things marketing and advertising. Would you be interested in jumping on a Hangout/Skype call to discuss more with us?

Jay


Are you asking me or sycren or us both?


We are interested in figuring out what to build and how to market this, and both of you have had great feedback for us.

I was reading your post at https://news.ycombinator.com/item?id=7321013 and it sounds spot on with what we are looking to do right now.

Are you still looking to build something? If so, here's my email: jay@flowstacks.com. Let's get started!


I've got a couple of ideas I'd like to dabble with using APIs.

I will sign up in the next few days to give the platform a test.

I don't mind sharing feedback to help you grow, but remember that my use of the platform alone should not be your sole feedback tool; you'll need consensus among a large number of testers to gauge the usefulness of the software.

Also, remember not to cater to users' every need. Some requests are too specific to implement and will bring no value to the product itself.


How intrusive would the developer environment have to be? We're a "host your own REST service" development shop, so deploying and testing your own GitHub repo is done through a dashboard on a web page. In our eyes, everything outside of code development and code hosting should take place in a web page, complemented with command-line tools for advanced control.

The feedback we have heard is that the editor of choice is near and dear to each developer, and it would be difficult to force a new editing tool on anyone. Code hosting already has a good solution in GitHub, which leaves testing and deployment to the platform. Since we focus on giving away two free REST services for publishing your own Jobs, we put deployment and testing into our web dashboard (and CLI tool). Testing a REST API is easy and already integrated into our dashboard, similar to Postman. The only code a developer has to write is a Job, which is just three simple parts: inputs, outputs, and tasks. We wanted .NET debugging power for every developer from our dashboard, so we built it to flag tests that fail validation based on your test inputs and outputs versus the test's expected outputs.

Let me know what you think. Here's our beta vision, based on the feedback we've received, for quickly developing and deploying REST API services for your own Jobs: https://flowstacks.com/developer/

Cheers

