Hacker News | positron26's comments

Do we mean managed or PG on K8s like CNPG? In all cases, I use the infra to simplify things like having disk redundancy and failover nodes, not because 12GB is interesting.

Primarily managed PG, since you still need setup/maintenance/monitoring on your own K8s solution.

You guys are doing monitoring? ;-)

This is the answer. We need online learning for our own code bases and macros.

> A user has asked if GNU Unifont can be used with commercial (non-free) software.

One can be forgiven for thinking the author means to imply that all commercial software is non-free. It is a further disappointment that anyone has to ask.

Open source was right to get rid of the intentional and unintentionally anti-commercial motifs that only got in the way of paid open source development.


Ironically, it's the FSF which discourages the use of "commercial" to mean "non-free":

https://www.gnu.org/philosophy/words-to-avoid.en.html#Commer...


Obviously they discourage it, because the terms aren't equivalent and conflating them creates confusion. Stallman himself was selling copies of Emacs while releasing it under a Free license.

Some may be confused into thinking this reply is a correction. I don't mean to appear to rebut.

We know that the FSF is aware of the problem. The trouble can only be if we expect more success from repeating the same tactics for the next forty years. I would blame no one for expecting the FSF to stay the course and to achieve similar effects. I would also not blame them for choosing a different path for themselves and recommending so to others.


> One can be forgiven for thinking the author means to imply that all commercial software is non-free.

Do they mean to imply this? It can also be read as a clarification about the mentioned software, not all commercial software in general. Could just be poor wording.

> Open source was right to get rid of the intentional and unintentionally anti-commercial motifs that only got in the way of paid open source development.

Open source did succeed in avoiding the problem present in the English language, but in doing so, it shifted focus away from freedom and onto different confusing motifs. A rare word like 'libre' arguably does an even better job while staying true to the original ideas behind the term 'free'.


I do believe it was just poor wording.

I don't feel strong disagreement with the four freedoms, but the biggest reason I've gone fully _OSS and intentionally avoid "free/libre" is because I don't want to endorse the FSF tactics and because I want to encourage others to demand more radical innovations instead of forty more years of the same.

What I find most disappointing when I talk to the FSF is that if I bring up social finance and technically enabled social decisions that can make social finance a lot more effective, it is rather as if I have spoken some alien language. I believe the non-programmer needs a lever to choose the development model used by programs they rely on. To the FSF insiders, such thinking is so orthogonal as to generate no reaction. If I say "a billion users are important," they refute the necessity. They are content to be monastic, conveniently propped up by donations for saying nice things. I find such abandonment inexcusable, and I get fired up talking about it.


There's also the implication that all non-commercial software is free. There's plenty of non-free (as in libre) software released by hobbyists.

Because of the Ukraine conflict, the phrase "mission command" came to my attention. It's about C2 rather than leadership but another one of those gems we might filter out in our "Bay Area" (you're all terminally online Europeans / teenagers jk) bubble.

The idea of mission command is pretty simple. If you see an incidental opportunity that will contribute to the big picture and pursuing it won't compromise the objective of your orders, take it. IIRC they call it something like "scoped initiative."

If you see an incidental opportunity that you can't take because it would compromise your local objective, you escalate. Up the chain, in the larger scope, that incidental opportunity that would compromise the objective of the smaller unit may be addressable using some available resources of the bigger unit.

It works by deduction and beautifully because you get the best of both individual initiative and large-scale coordination. It's an example where from-first-principle CS and pragmatic emergent systems resonate because it's near a morally true optimum.
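The escalation rule described above can be sketched as a simple walk up the chain of command. This is only an illustrative toy (the `Unit` class, `spare_capacity` field, and all names are invented for the sketch, not anything from doctrine): the most local unit takes an incidental opportunity if it can do so without touching resources committed to its own objective; otherwise the opportunity escalates to the next level up.

```python
from dataclasses import dataclass

@dataclass
class Unit:
    name: str
    spare_capacity: int  # resources NOT committed to this unit's objective

def handle_opportunity(cost, chain):
    """Walk up the chain of command, most local unit first.

    The first unit that can pursue the opportunity without dipping into
    resources committed to its own objective takes it; otherwise the
    opportunity escalates. Returns the name of the unit that took it,
    or None if no level can take it safely.
    """
    for unit in chain:
        if unit.spare_capacity >= cost:
            unit.spare_capacity -= cost
            return unit.name
    return None  # pursuing it would compromise an objective at every level

squad = Unit("squad", spare_capacity=1)
company = Unit("company", spare_capacity=5)
battalion = Unit("battalion", spare_capacity=20)

# A cheap opportunity is taken locally; a costlier one escalates.
print(handle_opportunity(1, [squad, company, battalion]))  # squad
print(handle_opportunity(4, [squad, company, battalion]))  # company
```

The point of the sketch is that no global coordinator is needed: local initiative and large-scale coordination both fall out of the same escalation rule.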

In the context of OP, knowing the objective of the 1-2 organization levels above you is all the transparency that is ever necessary. Neurons aren't smart. Information flows in a network are smart. Don't trust people who start performing and asking for transparency because ninety-nine times out of ten, they can't do better with what they ask for but will make everyone else do worse by breaking the cohesion.

And finally I read OP. It's a vapid feel-good long-form tweet that is nothing compared to the comment section.


> ninety-nine times out of ten

that gave me a chuckle


Please just talk about capital and leverage like an adult. Do you expect a CFO and their team to look at the math and say, "Well, we figured out that we can speed up adoption and bring forward billions of dollars of revenue by spending fewer billions from capital injection and debt deals this year" and then not do it?

Adults tell jokes too, especially gallows humor, and to great effect.

Ergo I propose grandparent commentator inject more humor in their clear understanding of leverage and debt to widen your, my, and their audiences' understanding regarding debt and leverage beyond your proposed metaphor of the toddler CFO failing the marshmallow challenge.


What doesn't work are the predictions of Uber's collapse, of which there were many, cheered on by a great many who still gather here looking for the next things to see through.

I am personally betting on Uber’s collapse for the obvious reason: it won’t compete with robotaxis and AV companies would rather have customers on their own apps rather than Uber’s platform.

Just unsure about the timing


> Just unsure about the timing

Right after we get nuclear fusion and a million people on Mars.


Lol I can’t remember the last time I was driven by a human.

That sounds like a pretty bad memory. Unless you're like 3 and learned to read/write pretty fast, I guess?

profound insight

Uber actually has a service that's worth paying for. I can't say I feel the same about most AI slop factories.

Hardware growth is slow and predictable, but one breakthrough algorithm completely undercuts any finance hypothesis premised on compute not flowing out of the cloud and back to the edges and into the phones.

This is a kind of risk that finance people are completely blind to. OpenAI won't tell them because it keeps capital cheap. Startups that must take a chance on hardware capability remaining centralized won't even bother analyzing the possibility. With so many actors incentivized not to know, or not to bother asking the question, that is where the biggest systemic risk lies.

The real whiplash will come from extrapolation. If an algorithm advance shows up promising to halve hardware requirements, finance heads will reason that we haven't hit the floor yet. A lot of capital will eventually re-deploy, but in the meantime, a great deal of it will slow down, stop, or reverse gears and get un-deployed.


AI has a kind of Jevons paradox approach to efficiency improvements, unfortunately - if you halve the compute requirements with an algorithmic advance, you can run a model twice as big.

The large SOTA models have hit very diminishing returns on further scaling, I think. So you’d rather double the number of models you can run in parallel.
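The tradeoff in the two comments above is just arithmetic: a fixed compute budget plus an advance that halves per-model cost can be spent either on one model twice as big or on twice as many models in parallel. A toy sketch with illustrative numbers (the budget and per-model cost are made up):

```python
# Jevons-style tradeoff: halved per-model compute under a fixed budget.
budget_flops = 1e21        # fixed compute budget (hypothetical)
cost_per_model = 1e20      # compute per model before the advance

models_before = budget_flops / cost_per_model  # 10 models fit
cost_after = cost_per_model / 2                # the advance halves cost
models_after = budget_flops / cost_after       # 20 models now fit

print(int(models_before), int(models_after))   # 10 20
```

Whether the freed compute goes to one bigger model or to parallelism depends on where scaling returns diminish, which is exactly the disagreement in the thread.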

The schedule grows small. I have stopped writing new Elisp and will learn CL in order to adopt Lem.

A few years back, this schedule included smart voices attempting to exercise some cultural leadership. It was bright, well-meaning, and largely right. Being right does not stop RMS. It inspires him to travel in an alternative direction of his choosing for the rest of his life.


Could you expand on your comment a bit please? I've not heard of Lem before now. How does it compare to Emacs? Also your comment about cultural leadership. I'm not sure what you're referring to specifically. (Asking in good faith out of curiosity).

The FSF is the only home to the most dogmatic and narrow minded human beings I have ever met in my life. I dare not begin writing why. It is an essay. I have priorities to build things independent of the FSF proclivity for shortcomings.

The reason I recommend Lem is because CL is a general purpose programming language. The two-way flow of professional code in and out of the editor is a tremendous advantage that pays all sorts of dividends to libraries, innovation, runtimes, and tooling. Elisp is a Lisp, but affectionately known as "the worst of the Lisps" among serious Lisp programmers.

Some among the Emacs community are not blameless. Today's AI naysayers are just yesteryear's tree-sitter doubters who said "We don't need all that fancy JSON garbledeygook" about LSP adoption. They were against an X frontend. They were against cl-anything in the symbol space. As the rock weathered away, the most abrasive sands remained. Proud they are of the lost atoll upon which no coral may grow.


Lem is an Emacs-like editor built in Common Lisp. It's very impressive and usable for its age and I can see why some people see it as a better Emacs. Still has nowhere near the mindshare of Emacs, though, and it has a long way to go before it can match the Emacs ecosystem.

And the UI runs on WebView.

It doesn't. There is a terminal frontend, a web rendering frontend, and a deprecated SDL frontend. The web frontend was explicitly developed to speed up development: the graphics are described in CL (the part being accelerated), and those descriptions can later be served by another frontend should some technical need emerge. Anyone acting like this is Electron is either leaping to conclusions or being intentionally misleading.

Electron is not WebView my friend.

GP is a longstanding pita in the emacs community who has yet to come to terms with FOSS being a financial black hole.

You're obviously not very well acquainted. Positron is explicitly aligned with _OSS thinking. "Free/libre" is how the FSF moralizes use of their GPL in order to acquire more copyright assignment from programmers who pay code into their racket so that the FSF can then lord over donations they draw by promising yet another project.

> I think people who kept saying there is no moat in AI is about to be shocked at how strong of a moat there actually is for ChatGPT.

Game on. The systemic risk to the AI build-out happens when memory management techniques similar to gaming and training techniques that make them usable reduce the runtime memory footprints from gigabytes to megabytes, much of which fits in L2. When that happens, the data center will bleed back to the edges. Demand will find its way into private, small, local AI that is consultative, online trained, and adapted to the user's common use cases. The asymptote is emergent symbolic reasoning, and symbolic reasoning is serial computation that fits on a single core CPU. Game on, industry.
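The gigabytes-to-L2 claim above can be given a back-of-envelope shape: how many successive halvings of runtime memory footprint would it take to get from a multi-gigabyte model down to something that fits in L2? The sizes here are illustrative assumptions, not figures from the comment:

```python
import math

# How many halvings from a multi-GiB resident model to an L2-sized one?
start_bytes = 8 * 1024**3   # 8 GiB resident model (hypothetical)
target_bytes = 4 * 1024**2  # 4 MiB L2 cache (hypothetical)

halvings = math.ceil(math.log2(start_bytes / target_bytes))
print(halvings)  # 11
```

Eleven successive halvings is a tall order, which is why the comment frames it as a systemic risk rather than a near-term forecast.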


Fundamentally, if the nozzle temperatures can't possibly withstand what they are extruding without eroding, we can either:

- balance an exothermic reaction (self-propagating high-temperature synthesis) to occur just after leaving the nozzle

- externally apply the heat with laser or plasma arc etc

The limit of externally applying heating is when the heat flux has to be so high that some material vaporizes and pops. An exothermic reaction within the material overcomes this limitation.


The other alternative is like the current state of the art in 3D-printing ceramics: either replace a high percentage of the filament with clay and fire it as a post-processing step, which burns off the plastic, or print a clay/water slurry directly and fire it after drying.

But I don't think we'd end up with the basalt being very filamentous.


If the binder that gives you something printable at low temperature doesn't integrate into the final result through chemical reaction, you are almost assuredly going to get a high porosity mess where the binder had to vaporize out.

If instead the binder and precursor can melt, react, and expand into a solid that precipitates out because of a super high melting point, the expansion will ensure that you get a fully dense part that can be machined back down.

