Hacker News | ramshanker's comments

I can imagine this becoming a mainstream PCIe expansion card. Back in the day we had a separate graphics card, audio card, etc. Now an AI card. To upgrade the PC to the latest model, we could buy a new card, load up the drivers, and boom: intelligence upgrade of the PC. This would be so cool.

This is exactly what's going to happen. Assuming no civilization-crippling or Great Filter events, anyway. At this point I fail to see how it could go any other way. The path has already been traveled, and governments (along with many other large organizations) will demand this functionality for themselves, which will eventually have a consumer market as well.

Another commenter mentioned how we keep cycling between local and server-based compute/storage as the dominant approach, and the cycle itself seems to be almost a law of nature. Nonetheless, regardless of where we're currently at in the cycle, there will always be both large and small players who want everything on-prem as much as possible.


I was all praise for Cerberus, and now this! $30M for a PCIe card in hand really makes it approachable for many startups.

I will try the newer version again. Last I tried, 2 years or so back, it was crashing for me.

Personal context: I am a civil engineer, and our requirements from CAD software are a lot simpler than mechanical engineering's. Here on HN, whenever I see people discussing CAD, it's the mechanical version of parts and 3D printing.

Shameless Plug: I have decided to try building my own! Over a long enough timeline, it is doable, including the UI/UX part.

https://mv.ramshanker.in/


UI/UX is not the difficult part. The hard part is the geometric modeling kernel.

If you've ever done UI/UX research and worked with volunteer developers who only care about technical problems, you'd know it is the hard part. Good UI/UX is hard to begin with; it's even harder when no one is interested in front-end development.

In practice everybody uses an off the shelf modeling kernel like Parasolid, ACIS, C3D, or OpenCascade.

The history of FreeCAD proves that UI/UX is the hard part.

Is there a reason you don't just use FreeCAD, SolveSpace, Dune3D etc instead of attempting to develop all of this from scratch given that all of this software is open source in any case?

As I said, all of these are optimized for mechanical engineering, to the best of my knowledge. In civil, there is a lot of standardization in the 3D parts and a lot more focus on the 2D side. A major part of building design uses standard steel sections. On the mechanical side, apart from nuts and bolts, everything seems to be custom, and software interfaces prioritize those use cases.

Think of I-beams: all major countries have national standards for their shapes and sizes. There are many "devil in the details" nuances.
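To sketch what "standardized 3D parts" means in practice: a civil CAD tool mostly picks profiles from national tables rather than modeling geometry freehand. A minimal illustration (the section names are real designations, but the dimensions and masses here are approximate and only for illustration; the function and data layout are hypothetical, not from any actual tool):

```python
# Hypothetical sketch: placing a beam means looking up a standard profile,
# not modeling it from scratch. Dimensions/masses are approximate.
STANDARD_I_SECTIONS = {
    # name: (depth_mm, flange_width_mm, approx_mass_kg_per_m)
    "IPE 200":  (200, 100, 22.4),   # European (EN) rolled section
    "W8x31":    (203, 203, 46.1),   # American (AISC) wide-flange
    "ISMB 200": (200, 100, 25.4),   # Indian standard medium beam
}

def place_beam(name: str, start, end):
    """Return a beam instance from a standard section and two endpoints (mm)."""
    depth, width, mass_per_m = STANDARD_I_SECTIONS[name]
    length_m = sum((e - s) ** 2 for s, e in zip(start, end)) ** 0.5 / 1000
    return {"section": name, "depth_mm": depth, "weight_kg": mass_per_m * length_m}

print(place_beam("IPE 200", (0, 0, 0), (6000, 0, 0)))
```

The point is that the per-country section catalog, not the solid modeler, carries most of the domain knowledge.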

So I'm giving it a go myself. If not for others, at least to scratch my own itch. This is one aspect of open source.


At least for my personal open source project[1], it has been a >5x boost in speed, motivation, operating knowledge level, etc. In some places I even put an inline comment, "this generated function is not understood completely"! Or maybe a question about specific syntax (C++20).

[1] https://github.com/ramshankerji/Vishwakarma/


> this generated function is not understood completely

I think this kind of stuff is OK for the most part. I think it's a thrilling part of computer science: building systems so complex they're just on the brink of what can be fully understood by a single person. It's what sets software engineering apart from other engineering fields, where it's unacceptable not to fully understand the engineering: factories, buildings, bridges, ships, infrastructure, and such.


What? It's not ok at all. If you don't understand what the code does, you have no business submitting that code.

3 days late clarification: it is for personal projects! Not for submitting pull requests to established projects.

At least xAI now has a revenue-generating backer: SpaceX.

Others must pull up their revenue numbers.


SpaceX makes $16B in revenue per year, with $7B in EBITDA (which doesn't account for the cost of rockets)... so assume, what, $3B in free cash flow per year? And that's being generous.

That's about what Google creates in free cash every 2 weeks.
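The two-week comparison checks out roughly. Using the comment's own $3B SpaceX estimate and a ballpark ~$70B/year for Alphabet's free cash flow (an outside figure I'm assuming here, not from the thread):

```python
# Back-of-envelope check of the comparison above. All inputs are
# assumptions: $3B/yr is the thread's SpaceX guess, ~$70B/yr is a
# rough recent figure for Alphabet's free cash flow.
spacex_fcf_per_year = 3e9
google_fcf_per_year = 70e9

google_fcf_per_two_weeks = google_fcf_per_year * 14 / 365
print(f"Google FCF per 2 weeks: ${google_fcf_per_two_weeks / 1e9:.1f}B")
# ~ $2.7B every two weeks, i.e. close to SpaceX's estimated annual FCF.
```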


SpaceX can also raise their prices for government launches to pretty much anything and still get business, because they are essentially a monopoly.

So why haven't they already?

I can think of many possible reasons offhand:

1. They've been in Growth mode, where it's common for companies to prioritize capturing the market over being profitable.

2. They've had no problems with money since proving their effectiveness. They can raise capital at favorable valuations (and hold secondary sales) whenever they want. It has been one of the hottest private stocks that people clamor to own.

3. As a private company whose dominant shareholder is the CEO, nobody can pressure them to raise prices. This typically changes after an IPO.

4. Previous government administrations would likely have resisted paying them much more than they charge the private sector or other governments. The new administration has proven they will do favors for companies that are friendly to them.

5. For a while it seemed they might soon have viable competition for manned spaceflight (e.g. Starliner), but only in 2024 did we see how bad those alternatives are.

6. The low cost is a point of pride for Musk who liked to prove how much more efficiently he could do spaceflight than NASA.


Do we get any model architecture details, like parameter count etc.? A few months back we used to talk more about this; now it's mostly about model capabilities.

I'm honestly not sure what you mean. The frontier labs have kept their architectures secret since GPT-3.5.

At the very least, Gemini 3's flyer claims 1T parameters.

Not just that: previously many orgs outsourced to consultancies. Now that the consultancies have also started outsourcing to AI, soon all businesses may cut out the middleman and have in-house IT teams outsourcing work to AI instead of consultancies!


Thank you for shaking me once more .....


This year: DirectX 12 documentation. Lots of it. Harry Potter part 7 (again). Read parts 1 and 2 to my kid.


Is there anything preventing incumbents from developing their own ASIC equivalent of a Google TPU?

Or maybe GPUs are not really that different from TPUs.

To me, one of the secret sauces of AI chips is memory and interconnect bandwidth. For memory, everyone is using the same HBM series, so no differentiation there. Multi-chassis interconnect is already not the bottleneck vs. compute. So GPUs aren't any worse.

Amateur guesswork!
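One way to see why memory bandwidth dominates: at small batch sizes, LLM token generation has to stream every parameter from memory for each token, so bandwidth, not compute, caps throughput. A rough roofline-style sketch (the chip and model numbers are approximate public figures I'm assuming for illustration, not from the thread):

```python
# Rough upper bound: decode tokens/s <= memory bandwidth / model size in bytes.
# Inputs are illustrative assumptions (approximate public specs).
def max_tokens_per_s(mem_bandwidth_gb_s: float, params_billions: float,
                     bytes_per_param: int = 2) -> float:
    """Bandwidth-bound ceiling on single-stream decode throughput."""
    model_gb = params_billions * bytes_per_param
    return mem_bandwidth_gb_s / model_gb

# e.g. a ~3350 GB/s HBM3 accelerator serving a 70B-parameter fp16 model:
rate = max_tokens_per_s(3350, 70)
print(f"~{rate:.0f} tokens/s upper bound")
```

Since every vendor buys the same HBM stacks, this ceiling is roughly the same for everyone, which is the "no differentiation" point above.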


> developing their own ASIC equivalent of a google TPU?

It'll be the same story as Apple and their Mx chips: lots have tried, and none have matched them even after many generations. And not many companies have the pockets to build a successful chip at scale and be efficient.

