tverbeure's comments | Hacker News

> Once the design gets past toy size,

Do you consider 800+ mm² slabs of 3 nm silicon still toy size? Because there's a very high chance that those were written in Verilog, and I've never had to chase sim-vs-synthesis mismatches.

> Verilog gives you enough rope.

Yes. If you don't know what you're doing and don't follow the industry standard practices.


> Why don't VHDL and Verilog just simulate what hardware does?

Real hardware has hold violations. If you get your delta cycles wrong, that's exactly what you get in VHDL...

They're both modeling languages. They can model high-level RTL or gate level, and they can behave very differently if you're not careful. "Just simulate what the hardware does" is itself an ambiguous statement. Sometimes you want one model, sometimes the other.
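The delta-cycle idea under discussion can be sketched in plain Python. This is a toy illustration of the concept, not either language's actual scheduler; `run_delta_cycles` and the lambda "processes" are invented names for this sketch:

```python
# Toy delta-cycle scheduler: processes read current signal values and
# propose updates; updates are applied only between delta cycles, so
# the order in which processes run never affects the outcome.
def run_delta_cycles(signals, processes, max_deltas=100):
    for _ in range(max_deltas):
        pending = {}
        for proc in processes:
            pending.update(proc(signals))   # reads see *old* values only
        if all(signals.get(k) == v for k, v in pending.items()):
            return signals                  # stable: no more events
        signals.update(pending)             # one delta cycle elapses
    raise RuntimeError("no steady state (combinational loop?)")

# Two "processes": b follows a, c follows b. A change on a takes two
# delta cycles to reach c, regardless of process evaluation order.
procs = [lambda s: {"b": s["a"]}, lambda s: {"c": s["b"]}]
out = run_delta_cycles({"a": 1, "b": 0, "c": 0}, procs)
print(out)  # {'a': 1, 'b': 1, 'c': 1}
```

Because reads only ever see values from before the current delta cycle, reversing the process list gives the same final result, which is the determinism property being debated.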


AFAIK, creating latches is just as easy in Verilog as in VHDL. They use the same model to determine when to create one.

But with a solid design flow (which should include linting tools like Spyglass for both VHDL and Verilog), it’s not a major concern.


SystemVerilog basically fixes this with always_comb vs always_latch.

There's no major implementation that doesn't warn, or even fail the flow, on accidental latch logic inside an always_comb.
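The shared inference rule can be made concrete with a toy Python model (purely illustrative; `eval_block` and `is_latch` are invented for this sketch, and no synthesis tool actually works this way):

```python
# Toy model of latch inference: a combinational block must drive its
# output on every path through the code. If some input combination
# assigns nothing, the output must hold its previous value -- which
# means the tool has to infer a latch.
def eval_block(sel, a, prev_out):
    if sel:
        return a          # assigned on this path
    return prev_out       # no assignment: previous value held (latch!)

def is_latch(block, inputs, probe=object()):
    # If any input combination lets the sentinel "previous value"
    # leak through to the output, the block has memory.
    return any(block(*combo, probe) is probe for combo in inputs)

print(is_latch(eval_block, [(0, 5), (1, 5)]))  # True: sel == 0 holds state
```

Adding an `else` branch (or a default assignment before the `if`) covers every path and makes `is_latch` return False, which is exactly what `always_comb` forces the tool to check for.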


I used to be a huge VHDL proponent, talk about the delta cycle stuff, give VHDL classes at work to new college grads and such. And then I moved to the West Coast and was forced to start using Verilog.

And in the 21 years since, I’ve never once run into an actual simulation determinism issue.

It’s not bad to have a strict simulation model, but if some very basic coding style rules are followed (which everybody does), it’s just not a problem.

I don’t agree at all with the statement that Verilog fails when things become too complex. The world’s most complex chips are built with it. If there were even a slight chance that chips couldn’t be designed reliably with it, that wouldn’t be the case.

Anyway, not really relevant, but this all reminds me of the famous Verilog vs VHDL contest of 1997: https://danluu.com/verilog-vs-vhdl/


On a practical level, you're right, most of my team's work is done in Verilog.

That being said, I still have a preference for the VHDL simulation model. A design that builds correctness directly into the language structure is inherently more elegant than one that relies on coding conventions to constrain behavior.


My memory is definitely rusty on this, but you can easily construct cases where the VHDL delta cycle model creates problems where it doesn’t for Verilog.

I remember inserting clock signal assignments in VHDL to get a balanced delta cycle clock tree. In Verilog, that all simply gets flattened.

I can describe the VHDL delta cycle model pretty well, and I can’t for Verilog, yet the Verilog model has given me fewer issues in practice.

As for elegance: I can’t stand the verboseness of VHDL anymore. :-)


Reassigning clocks to another signal name will quickly get you into trouble in ways that just don’t happen on real hardware.

Have you heard about clock buffers and hold violations?

No, is that the latest craze?

The question for me is: where do I capture the physical reality the model is supposed to describe? A simulation model can be very elegant, but does it represent how physical things really behave? Can we even expect to do that at RTL, or only further down the design flow? As the name suggests, we are talking about transferring data between registers; that is what I can expect to describe at the RTL.

At the end of the day, what I write will become an electrical circuit, in an FPGA or an ASIC (or both). Having exact modelling with wire delays, capacitance, crosstalk, and cell behavior too early makes it impossible to simulate fast enough to iterate. So we need a more idealized world, while keeping in mind that (1) it is an idealized world and (2) sooner or later the rubber will meet the road.

To me, Verilog and SystemVerilog allow me to do this efficiently. Warts and all.

Oh, and also: where in my toolchain is my VHDL model translated/transformed into Verilog? How good is that translation? How much does the dual licensing cost?

Things like mixed-language simulation, formal verification between a Verilog netlist and RTL in Verilog, mapping to cell libraries in Verilog, integration of IP cores written in SystemVerilog with your model?

Are the tools for VHDL as well tested as those for Verilog? How big is the VHDL team at the tool vendor, library vendor, IP vendor, and fab vendor compared to the Verilog/SV teams? Can I expect the same support as a VHDL user as for Verilog? How much money does a vendor earn from VHDL customers compared to Verilog/SV? How easy is it to find employees with VHDL experience?

VHDL may be a very nice language for simulation, but the engineering and business side is messy, and dev time and money can't be ignored. Getting things done as fast and cheaply as possible while still meeting a lot of functional and business requirements is what we as engineers are responsible for. Does VHDL make that easier or not?


> where in my toolchain is my VHDL model translated/transformed into Verilog?

It's not? Why would it?

As much as I like Verilog, VHDL is a first class RTL language just like Verilog. I've done plenty of chips that contain both VHDL and Verilog. They both translate directly to gate level.

These days, most EDA tools use the Verific parser and elaborator front-ends. The tool-specific magic happens after that, and that API is language agnostic.

> How easy is it to find employees with VHDL experience?

On the East Coast and in Europe: much easier than finding employees with Verilog experience. (At least that was the case 20 years ago, I have no clue how it is today.)

One thing that has changed a lot is that SystemVerilog is now the general language of choice for verification, which helps give (System)Verilog an edge for RTL design too.


Involved in FPGA and ASIC projects since 1997. Predominantly in Europe, nowadays more Asia and some in the US. Since ~2010 I have only seen VHDL in small chips targeting only FPGAs, and in government-heavy projects like defence and space. Nowadays these are also by and large SV. The ratio is something like one VHDL project for every 20 Verilog/SV projects. They teach VHDL at universities, and then people get to experience SV as soon as they enter the market.

Typical issues are still as given before. Many small IP vendors, especially for communication and networking, use, understand, and support only SV. I agree that SV for verification is a big driver.


The geographic constraint is probably the real answer to "which is better" for most people. You learn what your team uses and what your local jobs demand. Theoretical elegance matters less than "can I get hired next month".

Over the years I have run Altera, Lattice, and Xilinx parts... and almost all reasonably complex projects were done in Verilog. If I recall correctly, Xilinx fully integrated its Synopsys export workflow a few years back, but I'm not sure where that went after the mergers.

The Amaranth HDL python project does look fun =3

https://github.com/amaranth-lang/amaranth


This actually sounds a bit like a C/C++ argument. Roughly: Yes, you can easily write incorrect code but when some basic coding conventions are followed, UAF/double free/buffer overflows/... are just not a problem. After all, some of the world's most complex software is built with C / C++. If you couldn't write software reliably with C / C++, that could never be the case.

I.e. just because teams manage to do something with a tool does not mean the tool didn't impede (or vice versa, enable) the result. It just says that it's possible. A qualitative comparison with other tools cannot be established on that basis.


There are folks trying to make HDL easier, and vendor neutral. Not sure why people were upset by mentioning the project...

https://github.com/amaranth-lang/amaranth

While VHDL makes a fun academic toy language, it has always been Verilog in commercial settings. Both languages can develop hard-to-trace bugs when the optimizer decides to simply remove things it thinks are unused. =3


How does this compare to Chisel [1]? I never could get around the whole Scala tooling; it seemed a bit over the top. Though I guess it is a bit more mature and probably more enterprisey.

[1] https://github.com/chipsalliance/chisel


> i never could get around the whole scala tooling

Scala is popular in places like Alphabet, which apparently allow Go and Scala projects in production.

However, I agree that while Scala is very powerful in some ways, it just doesn't have a fun aesthetic. If one has to go spelunking for scalable hardware accelerators, a vendor's Linux DMA LLVM C/C++ API is probably less fragile.

For my simple projects, one zynq 7020 per node is way more than we should ever need. =3


> While VHDL makes a fun academic toy language, ...

I spent the first half of my career working at some of the largest companies at the time on huge communication ASICs that were all written in VHDL; there was no Verilog in sight.

As much as I prefer to write Verilog now, VHDL is without question a more robust and better specified language, with features that Verilog only gained a decade later through SystemVerilog.

There's a reason why almost all major EDA tools support VHDL just as well as Verilog.


Note, amaranth supports both VHDL and Verilog language targets.

Cheers =3


I disagree. We've produced numerous complex chips with VHDL over the last 30 years. Most of the vendor models we have to integrate with are Verilog, so perhaps it is more popular, but that's no problem for us. We've found plenty of bugs for both VHDL and Verilog in the commercial tooling we use, neither is particularly worse (providing you're happy to steer clear of the more recent VHDL language features).

VHDL still dominates in medical, military, avionics, space etc. and it's generally considered the safer RTL language, any industry that requires functional safety seems to prefer it.

It's also the most used language for FPGA in Europe but that's probably mostly cultural.


Option A: use Amazon Prime Video to watch shows. Share your viewing habits with Walmart/Vizio and Amazon.

Option B: use Amazon Prime Video to watch shows. Share your viewing habits with Amazon.


Using a Firestick means Amazon collects data on what you watch regardless of the platform you view it on. Also ads.

I have more control over where Amazon shows me ads versus my own TV. Someone will always collect the data; I prefer that USA's Amazon does it rather than South Korea's Samsung. And I've never seen ads on the Firestick, BUT that could be because I'm a Prime member, which actually works very well in my zip code (there is an Amazon distribution hub about 10 minutes away from my home).

-48V! :-)

One of the choke points of all modern video codecs that focus on potential high compression ratios is the arithmetic entropy coding. CABAC for h264 and h265, 16-symbol arithmetic coding for AV1. There is no way to parallelize that AFAIK: the next symbol depends on the previous one. All you can do is a bit of speculative decoding but that doesn’t go very deep. Even when implemented in hardware, the arithmetic decoding is hard to parallelize.

This is especially a choke point when you use these codecs for high quality settings. The prediction and filtering steps later in the decoding pipeline are relatively easy to parallelize.

High-throughput codecs like ProRes don't use arithmetic coding but a much simpler, table-based coding scheme.
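The serial dependency can be sketched with a stripped-down binary range coder in Python (a toy without renormalization or adaptive probabilities, nothing like a real CABAC implementation; `encode` and `decode` are invented names):

```python
# Toy binary range coder: each decoded bit narrows [low, low + rng)
# and the *next* split point depends on it, so decoding is inherently
# serial -- you cannot start symbol n+1 before finishing symbol n.
def encode(bits, p0=0.5, precision=32):
    low, rng = 0, (1 << precision) - 1
    for b in bits:
        split = int(rng * p0)
        if b == 0:
            rng = split
        else:
            low, rng = low + split, rng - split
    return low

def decode(code, n, p0=0.5, precision=32):
    low, rng, out = 0, (1 << precision) - 1, []
    for _ in range(n):
        split = int(rng * p0)
        if code < low + split:          # depends on all previous bits
            out.append(0); rng = split
        else:
            out.append(1); low, rng = low + split, rng - split
    return out

msg = [1, 0, 1, 1, 0, 0, 1]
print(decode(encode(msg), len(msg)) == msg)  # True
```

Note how `split`, `low`, and `rng` in the decoder depend on every previously decoded bit: there is no point where two symbols can be decoded independently, which is why parallelism has to come from coarser structures like slices or tiles.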


FFV1's range coder has higher complexity than CABAC. The issue is serialization. Mainstream codecs require that a block depend on previously decoded blocks. Tiles exist, but they're so much larger, and so rarely used, that they may as well not exist.


In most civilized societies, there's an extremely high chance that somebody wielding a knife doesn't have a gun.


Wielding a knife is a deadly threat so I am not sure what the relevance is.


The relevance is that you don’t need to assume that the knife wielding person can hit you from a distance.

One way or the other, this doesn’t seem to be a problem in other countries.


Pulling a percentage out of my ass that can't be terribly inaccurate: in 99% of police encounters with guns drawn, the police are under 21 ft away, at which distance a knife is as dangerous as a gun.

If someone is less than 21 ft from you and they are going to use a knife against you, then you should still draw a gun just as often as if they had a gun. So if, at under 21 ft, you think guns should be drawn less often because they have knives, you should also think guns should be drawn roughly that same amount less often no matter which of the two weapons they had.


And one way or the other, none of that is a problem in other countries.


I don't dispute that. But in most of those 'other countries' literally anyone could be hiding a knife as easily as someone in the US could be hiding a gun. So in the vast majority of cases, where people are already right next to each other and either a knife or a gun could kill someone, whether it is a knife or a gun is almost a moot point. It's only at a distance that you can treat a knife as a less lethal threat. Therefore the problem, in the vast majority of cases, lies with the police.


A knife is not equivalent to a gun. One is a kitchen utensil that can be used as a rather ineffective weapon; the other is a tool literally designed for eliminating life as efficiently as possible.

Disarming someone who poses a threat with a knife (especially via the use of modern equipment) is absolutely possible and can be performed in most cases with training, even with just one officer. Meanwhile, disarming someone with a gun is a much more complex task, often requiring a coordinated effort from multiple officers


>a knife (especially via the use of modern equipment) is absolutely possible and can be performed in most cases with training

I want to see you attempt that in real life when someone is within 21 feet. If you watched enough training videos and the literal flood of body camera videos that show even tasers are more often than not ineffective, you would not speak so casually about a split-second life or death situation.


> I want to see you attempt that in real life when someone is within 21 feet

Sadly I did not film it, but you could have been! I have attended multiple classes during which I had to disarm people with knives and other weird objects. It is absolutely possible with the right circumstances and training, but it's a completely different story when it comes to guns; the element of luck is much more meaningful, as an instructor who was shot quite a few times in their career pointed out.


You can, but you absolutely should not try it just because you could in training; it's a last resort for when it's too late and all other options are gone.


> You can but you absolutely should not try just because you can in training

For incapable civilians like me - absolutely. But I expect more from police officers than from myself when it comes to non-lethal disarming capabilities


>But I expect more from police officers than from myself when it comes to non-lethal disarming capabilities

It's asinine to expect someone to put their life in danger for little to no benefit just because they are trained for it and there is a chance it might go well instead of certain death.

If someone comes at someone with a knife (which is deadly force), they should expect to be met with equal or greater force; this isn't some game with retries. Others don't have to go along with your stupid games just because you drag them into them. Play stupid games, win stupid prizes.


> It's asinine to expect someone to put their life in danger for little to no benefit just because they are trained for it and there is a chance it might go well instead of certain death.

Whether it's asinine or not depends on context. Where I live, it is expected for police officers to protect you with their lives from a lethal threat if necessary, both legally and socially, and failure to commit to that means that you shouldn't be a cop

For the US, I think that the Uvalde shooting revealed a lot about what people actually expect from cops in life threatening situations, never mind what they are legally obligated to do


I'm glad you're making this point. It's something that only people trained in combat would know, and it's very non-intuitive. But it has to do with reaction times, how quickly the person wielding the gun can pull the trigger, and how quickly the person wielding the knife can move. That 21 feet can close blindingly fast.


> under 21 ft away, at which distance a knife is as dangerous as a gun.

No, it is not as dangerous.

To use a knife from 7 meters away, you have to throw it, which takes way more hand movement and time. While you should not rely on it, it is very feasible to just move out of the way of a thrown knife.

The other possibility is getting closer to you. Running that distance will take about 2 seconds. (Not a lot, but definitely not as dangerous as a handgun.)


The statistic isn't related to thrown weapons. It's how quickly you can close the space between you and your adversary, as well as how much bearing drift you can create as you do so.


[flagged]


Then you shouldn't be a police officer. We can't have a society where police shoot first and ask questions later just because they want to make sure there is zero risk to them.


> Basically if someone has physical access to device, its game over.

It took more than a decade to exploit this vulnerability, and even then there are fairly trivial countermeasures that could have been used to prevent it (and that are implemented in other platforms).

Nothing is unhackable, but it requires a very peculiar definition of "game over".

(And as others have pointed out: only early versions of the Xbox One were vulnerable to this attack.)


The incentives to hack the Xbox One were few: easy sideloading, no exclusives, and not a great performance-per-dollar ratio either. It is the opposite of Nintendo consoles if you think about it, and Nintendo consoles are notorious for having a really quick homebrew scene.


Every time a console gets hacked, the checklist of SOC security architects grows a little longer: boot ROMs written in a formally verifiable language, hardware glitch detectors, CPUs running in lockstep to guard against glitches, checks against out-of-order completion of security phases, random delay insertion, and so forth.

When it comes to SOC security, the past is not a good predictor of the present. The previous Nintendo SOC was designed 15 years ago. A lot has been learned since. It's become increasingly harder to bypass these mechanisms.

The fact that it took 13 years to hack the Xbox One is not because it's not an attractive platform: because of its high profile, it has been a popular subject for security research grad students from the moment it was released. And if anything, the complexity of the current hack shows how much SOC security has progressed over the years.
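Two of the countermeasures listed above, redundant (lockstep-style) checking and random delay insertion, can be caricatured in a few lines of Python. This is only a conceptual sketch, not real secure-boot code; `verify_signature` is an invented name, and real implementations do this in hardware or hardened firmware:

```python
import random, hashlib

# Conceptual sketch of two anti-glitch idioms: evaluate a security
# check redundantly (so one glitched comparison isn't enough), and
# insert a random delay (so an attacker can't time a voltage glitch
# to land on a fixed instruction).
def verify_signature(image, expected_digest):
    for _ in range(random.randrange(1000)):  # random delay insertion
        pass
    ok1 = hashlib.sha256(image).digest() == expected_digest
    ok2 = hashlib.sha256(image).digest() == expected_digest
    if ok1 != ok2:                           # redundant-check mismatch
        raise RuntimeError("glitch detected")
    return ok1 and ok2

image = b"boot stage 2"
digest = hashlib.sha256(image).digest()
print(verify_signature(image, digest))  # True
```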


How do you spin his reply of "100%" to a "white solidarity is the only way to survive" tweet as not very extreme?

