
Do you have a pricing page? And any information about your company /anything else to suggest whether you're going to be around in 12 months?


No prices set in stone yet, but it will be around 10 per user per month, 20 for pro accounts.

There will be an option to self-host as a single stateless executable / Docker container (plus a DB), and also to do two-way sync with GitHub Issues, Jira, Notion, and Linear. So the company going away should be less of a danger than with pt.
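
For anyone wondering what "single stateless executable / Docker container (plus a DB)" usually looks like in practice, here is a minimal, hypothetical docker-compose sketch; the image name, port, and credentials are placeholders I made up, not the real product's values:

```yaml
services:
  app:
    image: example/tracker:latest        # placeholder: the single stateless executable
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://tracker:secret@db:5432/tracker
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: tracker
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: tracker
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

The statelessness is the point: you can restart or replicate the app container freely because all durable state lives in the database volume.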


A paragraph-by-paragraph "dumbed down" translation of your original words would be pretty neat to have for starters: both to understand what you mean and to understand the lingo.


I'm hardly the best person to give a point-by-point on how modern neural networks work. The original paper that more or less brought together a bunch of ideas that were floating around is 2017's "Attention Is All You Need" (those folks are almost certainly going to win a Turing Award), which built on a bunch of `seq2seq` and Neural Turing Machine work that was in the ether before that.
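
For the curious, the core operation that paper names is scaled dot-product attention, which fits in a few lines of `numpy` (my own toy sketch, not the paper's reference code):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)         # (n_q, n_k) similarity logits
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)      # row-wise softmax
    return w @ V                            # weighted sum of the values

# Toy shapes: 4 query positions attending over 6 key/value positions.
rng = np.random.default_rng(0)
Q, K = rng.normal(size=(4, 8)), rng.normal(size=(6, 8))
V = rng.normal(size=(6, 8))
print(attention(Q, K, V).shape)  # (4, 8)
```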

Karpathy has a great YouTube series where he gets into the details from `numpy` on up, and on the more implementation side, George Hotz is live-coding the obliteration of PyTorch as the performance champion as we speak.

Altman being kind of a dubious-seeming guy who pretty clearly doesn't regard the word "charity" the same way the dictionary does is more-or-less common knowledge, though not often mentioned by aspiring YC applicants for obvious reasons.

Mistral is a French AI company founded by former big hitters from DeepMind and Meta that brought the best of 2023's public-domain developments into one model in particular, a model that shattered all expectations of both what was realistic with open weights and what was possible without a Bond Villain posture. That model is "Mixtral", an 8-way mixture-of-experts model using a whole bag of tricks, key among them:

- gated mixture of experts in attention models (a minimal routing sketch follows this list)
- sliding-window attention / context
- direct preference optimization (probably the big one, and probably the one OpenAI is struggling to keep up with; more institutionally than technically, since a bunch of bigshots have a lot of skin in the InstructGPT/RLHF/PPO game)
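
To make the first bullet concrete, here's a toy top-2 gated-routing sketch in `numpy` (my own illustration of the Mixtral-style idea, not Mistral's actual code; the shapes and linear "experts" are made up for the example):

```python
import numpy as np

def top2_moe(x, gate_W, experts):
    """Top-2 gated mixture-of-experts routing for one token activation x.

    gate_W: (d, n_experts) router weights; experts: callables (d,) -> (d,).
    Only the two highest-scoring experts run; their outputs are blended by
    a softmax over just those two router scores.
    """
    logits = x @ gate_W
    top2 = np.argsort(logits)[-2:]        # indices of the two best experts
    w = np.exp(logits[top2] - logits[top2].max())
    w /= w.sum()                          # softmax over the selected pair only
    return sum(wi * experts[i](x) for wi, i in zip(w, top2))

# Toy usage: 8 invented linear experts over a 16-dim activation.
rng = np.random.default_rng(0)
d, n = 16, 8
experts = [lambda x, M=rng.normal(size=(d, d)): x @ M for _ in range(n)]
gate_W = rng.normal(size=(d, n))
print(top2_moe(rng.normal(size=d), gate_W, experts).shape)  # (16,)
```

The win is that each token pays the compute cost of two experts while the model carries the capacity of all eight.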

It's common knowledge that GPT-4 and derivatives were mixture models but no one had done it blindingly well in an open way until recently.
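
Since DPO gets called out above as the big one: the published objective (Rafailov et al., 2023) is short enough to sketch in a few lines, with toy log-probs standing in for real model outputs:

```python
import numpy as np

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO objective for one preference pair.

    Inputs are summed log-probs of the chosen/rejected completions under the
    trained policy (pi_*) and a frozen reference model (ref_*); beta scales
    the implicit KL penalty. Loss = -log sigmoid(beta * margin).
    """
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    return -np.log(1.0 / (1.0 + np.exp(-beta * margin)))

# Toy numbers: the policy prefers the chosen answer more than the reference
# does, so the margin is positive and the loss dips below log(2).
print(dpo_loss(pi_chosen=-20.0, pi_rejected=-25.0,
               ref_chosen=-22.0, ref_rejected=-24.0))  # ~0.554
```

No reward model, no PPO rollout machinery: just a classification-style loss on preference pairs, which is a big part of why it caught on so fast.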

SaaS companies doing "AI as a service" have a big wall in front of them called "60%+ of the TAM can't upload their data to random-ass cloud providers, much less one run by a guy recently fired by his own board of directors". For big chunks of finance (SOX, PCI, a bunch of stuff), medical (HIPAA, others), defense (clearance, others), insurance, you get the idea: on-premise is the play for "AI stuff".

A scrappy group of hackers too numerous to enumerate, but exemplified by `ggerganov` and collaborators, `TheBloke` and his backers, George Hotz and other TinyGrad contributors, and best exemplified in the "enough money to fuck with foundation models" sense by Mistral at the moment, are pulling a Torvalds and making all of this free-as-in-I-can-download-and-run-it. This gets very little airtime, all things considered, because roughly no one sees a low-effort path to monetizing it in the capital-E Enterprise: that involves serious work and a very low shadiness factor, which seems an awful lot like hard work to your bog-standard SaaS hustler and offers almost no mega data-mining opportunity to the somnambulant FAANG crowd. So it's kind of a fringe thing in spite of being clearly the future.
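
If you want to see how little ceremony the download-and-run-it path involves, here's a sketch using ggerganov's llama.cpp through its `llama-cpp-python` bindings (pip install llama-cpp-python); the model path is a placeholder for whatever quantized GGUF file you grab, e.g. one of TheBloke's conversions:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./mixtral-8x7b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,      # context window
    n_gpu_layers=0,  # 0 = pure CPU; raise it to offload layers to a GPU
)

# Everything below runs on your own hardware; nothing leaves the machine.
out = llm("[INST] Summarize HIPAA in one sentence. [/INST]", max_tokens=128)
print(out["choices"][0]["text"])
```

That no-network property is exactly what makes this stack viable for the compliance-bound buyers mentioned above.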


The "but verify" here is the difficult bit. Calibrating the verification frequency and intensity is the art of balancing micromanaging vs disengagement.

The reality is you verify by being more hands-on than you or your report want to be in the long term, but a good manager explains what you're doing and that the intensity is time-limited and there to build trust.


Not difficult at all.

Define metrics that you both agree are an accurate measure of accountability for the role.

If the metrics are off pace, have a convo about what's happening and what needs to change to get them back on pace.

If they can never get back on pace, then there is a problem with either the metric (trust in the system) or the person (trust in the employee).

Again, it all comes down to trust.


It's not difficult at all. If you find that someone needs constant micromanagement and doesn't get things done otherwise, it's probably best to part ways with that employee.

Assuming, of course, that you gave them a fair chance initially.


There is a well-known (albeit mythic) portrayal of this that disagrees with you, putting the lazy-clevers in charge.

https://quoteinvestigator.com/2014/02/28/clever-lazy/?amp=1

I think it requires a certain definition of "lazy" that basically translates to "delegate everything you possibly can" - which we all know can be hard work in itself.


I think his point is that even with the current levels of vigilance and competence (which unarguably can and should be improved), the actual impact was "minimal".

In other words, the expected scale of "disaster" here matters when assessing risk vs current operational fitness.


It's a bit strange to describe a total cost of $100+ billion as "minimal".


I am talking about actual effects, not effects due to overreaction.

If you count effects due to overreaction, you can also count Germany's Atomausstieg (nuclear phase-out), which almost certainly materially contributed to Putin's decision to invade Ukraine...


Well put, thank you!


That's not large in this context. It would need to be large enough to cover several thousand pounds of legal defense fees to investigate and argue against the claim. So at least several tens of thousands of dollars.

Source: been through an insurance claim on a business that burnt to the ground with months of finished product stock (and which our accountant had accidentally under-insured; very painful!)


Please don't let these rather aggressive comments from Ruminator and others stop you commenting.

You put forward perfectly reasonable observations and it is frustrating when someone tries to silence others using the appalling Credentials Fallacy.

It is perfectly logical to say that, at the macro level, our "take risks, move fast" strategy will produce more failures, whilst at the micro level being very disappointed at each failure.

Now, if this were a manned mission with lives at stake, I would expect the risk approach to be modulated accordingly. But even so, astronauts are not civilian passengers, and even they knowingly embrace flying at high risk. It would be interesting to know how (and if) SpaceX has approached de-risking manned flight, because the PR from blowing up humans is not good whether you're NASA or a private company.


I know this is a meta-comment but I thought this was a wonderful exchange - a misunderstanding was explained clearly and respectfully and then acknowledged accordingly. A great example of the maturity of the readership here, which is refreshing to see for someone relatively new to HN. Kudos to you both.


and tell us the results...

