Hacker News | watersco's comments

Aha! (http://www.aha.io) | Rails & Front End Engineering | Remote

Aha! is looking for experienced Ruby on Rails, JavaScript, and front-end engineers to develop rich interactive experiences in React with a Rails backend. Aha! is the #1 tool for product managers to plan strategy and roadmaps.

Aha! is profitable, you can work from anywhere in North America and we offer excellent benefits.

We use our own product to manage our work (which is especially rewarding), we deploy continuously, and we are developing in Rails/CoffeeScript/React/d3. Our entire team is remote - primarily in the US and Canada.

http://www.aha.io | email: amy@aha.io


I have a counter-example. I built an OS X application for remotely tailing logs on servers (http://www.remotetailapp.com/). Originally I did it to scratch an itch when I was administering a bunch of servers over SSH. I added Heroku log support when I started working on a Heroku project. I had dreams that it might grow into a self-sustaining side project, but it hasn't really taken off. The key lesson that I learnt is that the technology solves only part of the problem. Just because it is downloadable software rather than SaaS doesn't mean that you don't need to put effort into marketing and sales.


This gets at the key challenge with creating an MVP - how do you avoid ending up with minimal functionality that doesn't provide any delight? If your MVP doesn't have enough to inspire use (let alone love), how will you learn enough for the next iteration?


Why only 11 buckets? That seems like the bigger issue. Shaving an O(n) lookup down to O(n/11) - still linear - seems silly. Why not have 1000 buckets and get two more orders of magnitude of improvement in lookup performance?

Of course that would expose how bad the hash function really is. Saving a few cycles in the hash function and then chaining through 17k linked-list entries doesn't make any sense.
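Rough arithmetic behind the two-orders-of-magnitude claim (the object count below is hypothetical, chosen only to be consistent with chains on the order of 17k entries):

```python
# Back-of-the-envelope: with n keys uniformly hashed into b buckets,
# an unsuccessful lookup walks a chain of about n/b entries.
n = 11 * 17_000  # hypothetical object count implied by ~17k-entry chains over 11 buckets

for b in (11, 1000):
    print(f"{b:>5} buckets -> avg chain ~ {n // b:,} entries")
```

With 1000 buckets the same data gives chains of a couple hundred entries instead of tens of thousands - assuming, of course, a hash function that actually spreads the keys.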


Because it was a design decision from 20 years ago for small n hash tables. It's baked into the design of his object db files.


To be fair, we don't know the number of objects -- most of these 125k accesses could be lookups. But then again, if the number of objects is that small, a hashtable strikes me as overkill. Otherwise, yes, he should increase the bucket count until the load factor is reasonable.

But there's no denying how atrocious the hash function is. To quote:

> first and last characters of the name of the object, adds them together and mods the result by the number of buckets, which is 11

I don't know much about the name representation, but I'm guessing it's human readable ASCII. Which means your keys are confined to a very narrow range, and they'll be distributed along the same lines as the language itself (English or whatever). That means collisions up the wazoo.
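The weakness is easy to demonstrate. Here is a sketch of the quoted scheme (the function name and sample keys are my own hypothetical choices, not from the original code): any two keys that share their first and last characters collide unconditionally, and because ASCII identifiers draw from a narrow character range, even unrelated keys pile into a handful of buckets.

```python
from collections import Counter

def weak_hash(name, buckets=11):
    # The quoted scheme: first char + last char, mod the bucket count (11).
    return (ord(name[0]) + ord(name[-1])) % buckets

# Hypothetical but plausible ASCII object names.
names = ["main.c", "util.c", "parse.c", "lexer.c", "config.c",
         "main.h", "util.h", "parse.h", "lexer.h", "config.h"]

# Show how the keys clump: bucket index -> how many names landed there.
print(Counter(weak_hash(n) for n in names).most_common())
```

Note that "main.c" and "misc.c" (same first and last character) can never be separated by this hash, no matter how many buckets you add - which is why growing the table alone wouldn't fully fix the chaining problem.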

