
I just can't imagine we are close to letting LLMs do electrical work.

What I notice that I don't see talked about much is how "steerable" the output is.

I think this is a big reason one-shots are used as examples.

Once you get past one-shots, so much of the output depends on the context the previous prompts have created.

Instead of one-shots, try something that requires 3 different prompts on a subject with uncertainty involved. Do 4 or 5 iterations and you will often get wildly different results.

It doesn't seem like we have a word for this. A "hallucination" is when we know what the output should be and it is just wrong. This is more like the user steering the model toward an answer when there is a lot of uncertainty about what the right answer would even be.

To me this always comes back to the problem that the models are not grounded in reality.

Letting LLMs do electrical work without grounding in reality would be insane. No pun intended.





You'd have to use subagents that limit context, give them only the tools they need, and provide explicit instructions.
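A minimal sketch of that idea: a parent agent dispatches a subagent with a trimmed context and a single whitelisted tool, rather than the full conversation and tool set. All names here (`run_subagent`, `lookup_wire_gauge`, the table values) are hypothetical, and the subagent is a stub where a real LLM call would go.

```python
def lookup_wire_gauge(amps: float) -> str:
    """Toy tool: pick a copper wire gauge for a 120 V branch circuit."""
    # Simplified ampacity table -- illustrative values, not code-compliant advice.
    table = [(15, "14 AWG"), (20, "12 AWG"), (30, "10 AWG")]
    for rating, gauge in table:
        if amps <= rating:
            return gauge
    return "consult an engineer"

def run_subagent(task: str, tools: dict, context: str) -> dict:
    """Stand-in for an LLM subagent call: it sees only `context` and `tools`,
    never the parent's full conversation history."""
    # A real implementation would prompt a model here; this stub just
    # demonstrates the restricted interface.
    return {"task": task, "allowed_tools": sorted(tools), "context": context}

# The parent agent hands over exactly one tool and one sentence of context.
result = run_subagent(
    task="size the conductor for a 20 A outlet circuit",
    tools={"lookup_wire_gauge": lookup_wire_gauge},
    context="Residential 120 V branch circuit, 20 A breaker.",
)
```

The point of the restriction is that whatever the subagent hallucinates, it can only act through the narrow tool surface it was given.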

I think they'll never be great at switchgear rooms but apartment outlet circuitry? Why not?

I have a very rigid workflow for what I want as outputs, so shaping the inputs using an LLM looks promising. You don't need to automate everything; high-level choices should be made by a human.


The most promising aspect for machine learning in electrical and electronic systems is the quantity of precise and correct training data we already have, which keeps growing. This is excellent for tasks such as ASIC/FPGA/general chip design, PCB design, electrical systems design, AOI (automated optical inspection), etc.

The main job of existing tools is rule-based checking: flagging errors for attention (like a compiler), because there is simply too much for a human to think about. The rules are based on physics and manufacturing constraints--precise, known quantities--so output correctness can be verified up to 100%. The output is a known-functioning solution and/or simulation (unless the tool itself is flawed).
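A toy sketch of such a rule-based check, in the compiler-like spirit described above: enumerate every violation of a manufacturing constraint rather than guessing. The clearance value and pad layout are illustrative, not taken from any real tool or standard.

```python
import math

MIN_CLEARANCE_MM = 0.2  # hypothetical manufacturing constraint

pads = [  # (name, x_mm, y_mm)
    ("U1.1", 0.00, 0.0),
    ("U1.2", 0.15, 0.0),   # deliberately too close to U1.1
    ("R3.1", 5.00, 5.0),
]

def clearance_errors(pads, min_clearance):
    """Flag every pad pair closer than the minimum clearance."""
    errors = []
    for i in range(len(pads)):
        for j in range(i + 1, len(pads)):
            (a, ax, ay), (b, bx, by) = pads[i], pads[j]
            d = math.hypot(ax - bx, ay - by)
            if d < min_clearance:
                errors.append(f"{a} and {b}: {d:.2f} mm < {min_clearance} mm")
    return errors

for e in clearance_errors(pads, MIN_CLEARANCE_MM):
    print("DRC:", e)
```

Like a compiler, the check is exhaustive and deterministic: the same input always yields the same list of violations.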

Most of these design tools include auto-design (chips) / auto-routing (PCBs) features, but they are notoriously poor because they are too heavily rule-based. They resemble Photoshop's "Content-Aware Fill" feature (released 15 years ago!), where the algorithm tries to fill a selection by guessing values from the surrounding pixels. It can work exceptionally well, until it doesn't, because it lacks the right context, at which point the work needs to be done manually (by someone knowledgeable).

"Hallucinogenic" or diffusion-based AI (LLM) algorithms do not readily learn or repeat procedures with high accuracy; instead they look at the problem holistically, much like a human, with the weights of the neural net lighting up with possible solutions. Any rules are loose, context-based, interconnected, often invisible, and all based on experience.

LLM tools as features on the design side could be very promising: existing rule-based algorithms could be integrated into the feedback loop to ground the model in reality and refresh its context. Combined with precise rule-based checking and excellent-quality training data, this is a very promising path, more so than in most fields, since the final output can still be rule-checked by existing algorithms.
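The design-loop feedback described above can be sketched as: a generator (stand-in for an LLM) proposes a design, a deterministic rule checker verifies it, and any violations are fed back into the next proposal. Every name and rule value here is hypothetical; the `propose` stub replaces a real model call.

```python
def rule_check(design: dict) -> list:
    """Deterministic checker: the 'grounding in reality' the loop relies on."""
    violations = []
    if design.get("trace_width_mm", 0) < 0.15:
        violations.append("trace width below 0.15 mm minimum")
    if design.get("layers", 0) > 4:
        violations.append("exceeds 4-layer budget")
    return violations

def propose(prompt: str, feedback: list) -> dict:
    """Stub for an LLM call; here it simply widens traces when told to."""
    width = 0.10 if not feedback else 0.20
    return {"trace_width_mm": width, "layers": 2}

def design_loop(prompt: str, max_iters: int = 3) -> dict:
    """Iterate propose -> check until the design passes all rules."""
    feedback = []
    for _ in range(max_iters):
        design = propose(prompt, feedback)
        feedback = rule_check(design)
        if not feedback:
            return design  # passes every rule-based check
    raise RuntimeError(f"no passing design after {max_iters} iterations: {feedback}")

final = design_loop("route a 2-layer sensor board")
```

Whatever the model hallucinates, only a design that passes the checker ever leaves the loop, which is exactly why this field is better positioned than most.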

In the near future I expect basic designs will be achievable with minimal knowledge. EEs and electrical-designer "experts" will only be needed to design and build the tools, to verify designs, and to implement complex or critical projects.

In a sane world, this drop in the knowledge barrier should encourage and grow the entire field, as worldwide costs for new systems and upgrades decrease. It has the potential to boost global standards of living. We shouldn't have to worry about losing jobs, nor weigh extortionately priced tools against selling our data.



