Wow, it's surprising to me that there's a huge difference, and in that direction. I was always used to resource requirements increasing through synthesis -> P&R -> logic simulation -> full-chip simulation. I guess times have changed since I worked on chip design.
No, the RAM is mainly used by Verilator partitioning the simulation logic across multiple threads. The Scala part (Chisel -> FIRRTL) runs pretty happily with a few gigs of RAM.
yeah I'm super ignorant here, but wondering: say you simulated this, could you run a VM OS against it? What kind of scale is it, personal computer or server (i7/Xeon)?
Or you just use it to compile programs against it?
edit: I did skim the readme; seeing Verilog makes me think FPGA
If you're simulating a complex CPU's HDL in Verilator then think in terms of single digit thousands of instructions per second. Vs single digit billions of instructions per second when it's made into an SoC.
i.e. on the order of a million times slower than the final product.
On an FPGA you might get 20 to 50 million instructions per second, only 100 times slower than the final SoC product. But for a large modern OoO design you'll need an FPGA costing $15000 (for a VCU118) or even more.
You could simulate this on an FPGA platform, though FPGAs big enough to hold a large design aren't going to be cheap. Usually when you run simulations on a PC you're just running very specific test cases.
I've run CPU simulations on machines with 64GB of RAM before and it took several hours just to get to single-user shell in Linux. Different CPU design and computer, but the point is it's not something you'd typically use interactively.
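As a rough sanity check of those numbers (assuming, and this is only an assumption, that a minimal Linux boot to a shell takes on the order of 10^8 to 10^9 instructions): at ~5,000 instructions per second in Verilator that's 10^8 / 5,000 ≈ 20,000 s ≈ 5.5 hours at the low end, which is consistent with the "several hours" above, while the same boot at ~30 MIPS on an FPGA would take seconds to minutes.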
This project is the first that tickles my brain in the right serendipitous ways; it merges topics of my recent interests.
However, after a VERY short perusal, I grew a giant sense of empathy for non-native English speakers. The readme is gentle enough for English speakers (i.e. 95%+ English), yet I still felt like I was muddling through, renaming tokens in my mind as I went. That quickly showed me two things.
1. It reminds me why I never seem to finish classic Russian literature… I so often get lost in the introductory parade of names that are a cache miss against my usual set of names.
2. This is perhaps a significant cultural muscle that English speakers have never needed to build, since the world has largely been using English (in some capacity) for significantly longer than my life span. As my favorite joke says in the punch line: “what do you call someone who only knows one language… uni-lingual… jk: American”
PS: it seems like there could be an open registry maintained by “Americans like me” who would rather pre-process the code for tokens within the docs and src… seems like a “DefinitelyTyped style” definitions registry would be very niche, but SUPER useful.
> as my favorite joke says in the punch line “what do you call someone who only knows one language… uni-lingual… jk: American”
"Another thing to keep in mind, when you get to feeling bad about being monolingual, is that the fair question is not 'how many languages do you know?' It is, 'of the languages spoken by five million people or more within a thousand miles or so of where you live, what percentage do you know?'"
> of the languages spoken by five million people or more within a thousand miles or so of where you live, what percentage do you know
By that metric you shouldn't feel bad for not speaking Russian in most of Russia, or for not knowing the most common languages in your immediate surroundings in large swaths of Africa (i.e. most Bantu languages would be excluded).
Lots of the heavily multilingual people in the world also have a lot of irl interactions that necessitate knowing languages other than their mother tongue. Of course that’s not the only reason to learn languages, but it is both common and effective. So in terms of expected number of languages spoken I think it’s a good baseline.
In the case of the US, the languages that would meet that criterion in most parts of the country would be English and Spanish. But there are also hierarchies around languages, i.e. people who speak the more dominant languages are less likely to speak the less dominant ones, while speakers of the less dominant languages are expected to speak the more dominant ones and suffer higher consequences if they don't.
I prefer to view it as "what's the likelihood that, with your current knowledge of languages, you're able to communicate with any person you may presumably want to speak to in the future."
This is why it's so much easier to only speak English than it is to only speak another language.
> This is why it's so much easier to only speak English than it is to only speak another language.
It’s pretty easy to only speak German within the DACH countries. Huge online communities as well that speak German.
I'd wager it's similar for several other large languages, e.g. Spanish or Chinese; on the one hand they are even larger, on the other hand they probably don't have the same advanced dubbing industry that we have.
I am Spanish; we have a very strong dubbing industry and pretty much all movies have been dubbed since forever. In fact nowadays we usually get two dubs, one for Spain and one for LATAM, with serious online fights about which one is better xD
Oh interesting, I'd heard it's almost never the case elsewhere that people are as attached to dubs as they are here, where people often don't really care about the original voices but rather about the dubbers, who might even get movie billing.
> one for Spain and one for LATAM
What’s the difference there for someone who is only bilingual (English and German)? :D
There's a bit of variety in the vocabulary used between LATAM and Spain, and honestly even between LATAM countries there's variance.
An example that comes to mind, as a first-year Spanish student (I'm doing my best, but fact-check me because I'm very much still learning!) with a Latin wife, is "el carro," which means car and is common in some Spanish-speaking countries, while others might use "el coche" -- I believe this dialectal difference even exists within Latin American Spanish-speaking countries!
There are differences in pronunciation too but that obviously doesn't apply to subtitles
Regarding "car", there is a third option: "auto", which if I am not mistaken, is the preferred word in the Southern Cone.
But yes, it is mostly a difference of vocabulary, accent and pronunciation, and also how or if to translate the names of characters and the movies themselves.
Does the RISC-V org itself list/document suggested insn fusion sequences anywhere? I can't quite figure out whether they do, because the technical wiki they use is quite hard to navigate and search, and their unofficial GitHub repos are not much better.
The ISA spec itself suggests one fusion - that of mul and mulh (lower and upper halves of n-bit * n-bit -> 2n-bit product).
The only other source I know of was this 2016 paper that suggested specific macro fusions. https://arxiv.org/abs/1607.02318 This predates the B extension so some of those fusions are dedicated instructions now, but I suppose Xiangshan still implements them for the sake of software that was not compiled with B enabled.
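To make "fusion" concrete: the idea is that the decoder recognises an adjacent pair of simple instructions and issues them as one internal op. Here's a purely illustrative Scala sketch (not XiangShan's decoder; DecodedInsn and fusibleMulPair are made-up names) for the mulh/mul idiom the M-extension spec recommends keeping back-to-back:

    // Illustrative only: the spec-recommended sequence is
    //   mulh / mulhsu / mulhu  rdh, rs1, rs2
    //   mul                    rdl, rs1, rs2   (same rs1/rs2; rdh distinct from rs1/rs2)
    // so hardware can fuse the pair into a single widening multiply.
    case class DecodedInsn(op: String, rd: Int, rs1: Int, rs2: Int)

    def fusibleMulPair(a: DecodedInsn, b: DecodedInsn): Boolean =
      Set("mulh", "mulhsu", "mulhu").contains(a.op) &&
        b.op == "mul" &&
        a.rs1 == b.rs1 && a.rs2 == b.rs2 &&  // identical source operands
        a.rd != a.rs1 && a.rd != a.rs2       // rdh must not clobber the sources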
That's talking about the list of pseudo-instructions, which are just special-cased assembly mnemonics for a single RISC insn. I'm wondering whether there's also a documented list of suggested fusable sequences, comprising multiple insns each.
It's important to be aware of the significant progress that China is making in AI, robotics, and processors, and admirable that they are open-sourcing so much of it.
That said, an open ISA certainly invites people amenable to open-sourcing their implementations, which is why more exist for RISC-V than for x86 or ARM.
Quite possibly the open-sourcing is a part of the success story. The Chinese culture is built around incessant reproducing and gradual improvement, which dates back at least to the times of Confucius. Some think it's mindless copycatting, but it's more about learning deeply and then standing on others' shoulders. It's the "fork this" attitude that meshes quite well with open source, but is understandably infuriating for the IP-licensing / patents crowd.
This doesn't mean that the openness extends easily to those outside China, its culture, and especially its language though.
I don't see that much open source coming from China.
For example, the popular Arm Rockchip offerings are so bad at contributing to mainline Linux or even sharing documentation that the single person supporting them dropped it, due to lack of funding and being unable to get any contact who would share documentation.
IMHO, Chinese open source does at best the same thing many western companies do:
open-source the things that are not your secret sauce/income, or things which would benefit you if people adopt them.
With the added caveat that China in general is notorious for not abiding by patent laws and for copying.
I think many companies in China have a different relationship with IP. They realize that people copy from each other; they will gladly copy from others and expect the same in return, yet they also realize that being hard to copy gives them a competitive advantage.
In the west, being hard to copy often just means slapping a proprietary license on your software and not worrying about much else. The "source available" model is a prime example: the software is trivial to copy in theory, but everybody knows there would be legal consequences, so nobody serious actually does it.
In China, this doesn't work, so you have to obfuscate your code, build anti-reverse-engineering protections etc. This is similar to how the gaming market works in the west.
There are plenty of great writeups on reverse-engineering Chinese apps and hardware, and their reverse-engineering protections are often far more elaborate than what those same apps would have if made by western companies.
>The Chinese culture is built around incessant reproducing and gradual improvement
all successful cultures are, they're just divided on whether they admit it or not. Germany was late to industrialization and a lot of our businesses went to Britain under the pretense of doing business and copied chemical plant factory layouts one-to-one, but we like the "we're the country of engineers and thinkers" story much better than "we were a bit slow and ripped the British and French off".
The Chinese just don't pretend. I've spent a lot of time in China, and around the time of FB's attempted Snapchat acquisition there was a running joke in tech circles there that FB is the most Chinese of the American companies because of Zuckerberg's relentless copying. The valley works the exact same way; they're just more likely to pretend everything is the result of some boy genius making it up.
Yes. And liberal democracy (the US in particular) is far from perfect. But it is objectively better than a one party dictatorship. It is also less "threatening" from an international relations perspective.
There, one of the things they mentioned was that they were hoping that RISC-V in general, and their cores in particular, becomes a platform for academic research. That way, cutting-edge things are shared in the open and the industry benefits globally.
The optimistic view of this is that it's hoping for a better future for all. The cynical view is that open source is much harder to sanction/export control. The truth is probably somewhere in the middle.
But it’s true that most inventions and innovations in the last 100 years have come from Americans (with work from immigrants, descendants of immigrants, etc). China’s progress economically and technologically has almost entirely been copying those innovations and competing on price due to poor labor and environmental laws and conditions. In many cases it literally involved espionage, and in others it involved unethical stealing from partners or even customers (for example companies that used a manufacturer there).
>> China’s progress economically and technologically has almost entirely been copying those innovations and competing on price due to poor labor and environmental laws and conditions.
That may have been true for a while when they were catching up. They are now leading R&D in a number of areas. They do have more issues (fraud, piling on co-authors) in published research, but there is a lot of legit stuff going on too. There are also cases where China has always been ahead of the US - flat panel displays have never really been manufactured in the U.S. for example. A lot of their innovations are also seen as "just cost reduction" on our end, but that still counts and is part of why they're still competitive even as their labor costs go up.
A large chunk of our western innovations are rooted in "1936" German research. US space program? From von Braun, who was allowed onto US soil through Operation Paperclip. Axial jet engine? From the Junkers Motoren engine propelling the Me 262. Modern submarines? From the Type XXI U-boats. Assault rifle? The MP 43. Highways, synthetic fuel, a lot of modern chemistry... So no, the US did not come up with most of the last 100 years' innovations.
My family comes from a village in Belgium where the 12th SS Panzer Division committed mass murder, so I don't glorify these people, thank you. But you have to recognize what your modern comfort has to do with these people, and lower your moral standards accordingly. I'll help you: we are bad people with low standards and a high, unjustified self-esteem.
It's idiotic to focus on the last 100 years. Those American innovations you mention came on the shoulders of innovations from the Arabs, Romans, Ottomans, and many other cultures/societies before them. Where is the praise for them?
Why are we so focused on who is the best? We seriously need a population reset. People can be so idiotic and self-absorbed.
Oh yeah sure there are many, but "most" feels like an overreach?
Just off the top of my head I can think of so many: TV, radar, penicillin, jet engines, audio cassettes, the first programmable computers, let alone the ARM CPU you are probably reading this on right now AND the lithium-ion battery that is powering it, GSM, DSL, Ariane launchers (not to mention the pioneering German work in WW2), DNA, the Large Hadron Collider, MP3, electron microscopes, MRI scanners, the human genome project, TFT screens, IVF and ICSI, the atomic bomb, bagless vacuums - the list is long and varied, and that is just off the top of my head.
Sure the US might have more tech giants than anywhere else, but that isn't where everything happens by a long shot.
> Oh yeah sure there are many, but "most" feels like an overreach?
Definitely, that's what I'm saying btw. Before WWII most inventions and discoveries were actually made in Europe, and even though the US took the leadership after that point (in part thanks to the immigration of Jewish and Eastern European scientists), it never was responsible for “most” of them.
Universities always make things open source. I also make everything open source that I make for the university.
It's because you cannot make money with it anyway and the university often allows it, so then it's really good for your CV.
In Europe a lot of stuff is open source too, and I can honestly say there is no plot by the government to destroy the United States. I don't think Linux is a plan by the Finnish government to destroy the USA.
Most Chinese people, in my experience, also don't really have any country outside of China on their radar, unless they are in some trading business. China is just so huge compared to, for example, the US, that they really live in their own bubble. It's not something they think about on a monthly, let alone daily, basis, in my personal experience.
And if you want to do a commercial spin-off from your university work, it might be easier to put it all out as MIT/BSD/Apache and then start a completely independent company than to deal with all the paperwork.
This makes life easier for competitors, so it's definitely a tradeoff, but it's the right tradeoff in some circumstances.
I think this is greatly overstated. Generally it's just sheer volume of capital and dedication to building a business that dictates market advantage, not any given person's knowledge or know-how (though that's obviously worth paying for if you have the capital). I'd argue the primary reason why proprietary ownership of software exists isn't to make life harder for competitors, but rather to make it harder for folks to figure out what they're actually paying for and to bypass the fees to actually fix (let alone improve) the software.
Another motive could be to undermine western dominance in chip design. You give away, for free, a good-enough thing that competes with your competitor's moat. If the gap is not that significant, this can ruin the business for your competitor.
You can see the same with Apple and the rest of big tech funding OpenStreetMap, for example, vs Google Maps. Even decreasing the competitor's income is worth it strategically. Or Satya Nadella saying in an interview that if ChatGPT just makes Google's search costs higher, that's already a big win for Microsoft.
Realistically, it's probably a group inside the Chinese Academy of Sciences wanting a larger impact factor for their review, similar to all other state-sponsored research institutes.
The strategy is to do PR for China, market for adoption (to break chip duopoly), get contributions (free work), and build momentum to a point where the tech is commoditized (so they can be immune to sanctions).
Cool to see another project using Chisel. Anyone in the industry know what direction things are moving in? Seems like Verilog and VHDL are ripe for disruption. I remember one of them as outright unpleasant from university.
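For anyone who hasn't used it: Chisel is an HDL embedded in Scala, so a hardware generator is ordinary Scala code that elaborates to FIRRTL and then to Verilog. A minimal toy sketch (my own example, not anything from XiangShan) looks roughly like this:

    import chisel3._

    // Toy example: a registered adder. Chisel generators are plain Scala
    // classes, so parameterisation and code reuse come from the host language.
    class RegisteredAdder(width: Int) extends Module {
      val io = IO(new Bundle {
        val a   = Input(UInt(width.W))
        val b   = Input(UInt(width.W))
        val sum = Output(UInt(width.W))
      })
      io.sum := RegNext(io.a + io.b)  // register the combinational sum
    }

You then hand such a generator to the Chisel/FIRRTL toolchain (e.g. ChiselStage) to emit Verilog, which is the Chisel -> FIRRTL flow mentioned elsewhere in the thread.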
What has changed that they are now ripe for disruption? My BSc thesis supervisor runs a company that tries to disrupt Verilog and VHDL and they've been at it for well over a decade. I don't think they're making much of a dent yet but I didn't keep up with the space: https://github.com/clash-lang/clash-compiler. There's also a RISCV implementation written in it.
All the tooling in this space is so expensive to build and expensive to verify, with such a small market that it seems almost impossible for something to ever challenge the incumbent ecosystems in a reasonable span of time.
I wonder what 'high performance' means in that context. So far, RISC-V has been rather underwhelming in regards to performance when compared to ARM or Loongarch (3A6000).
That's simply not true. RISC-V cores, SoCs, and boards perform very comparably to, or even better than, Arm devices with a similar µarch, e.g. SiFive U74 vs Arm A55, SiFive P550 vs Arm A72, T-Head C906 vs ARM1176.
While Arm CPUs have existed for 40 years, the initial RISC-V spec was frozen and formally published only in mid 2019 and most of the CPUs currently available to buy were designed around that time.
This is quite normal. For example the Arm A76 core that features in the latest Raspberry Pi 5 and RK3588 boards (Rock 5, Orange Pi 5) was announced in May 2018, and took until 2022/3 to be available in consumer SBCs. The A53, A55, and A72 had similar time spans from announcement until being available in SBCs: Pi 3 / Odroid C2; Odroid C4; Pi 4.
Only in November 2024 was a RISC-V ISA spec suitable for high performance phones, laptops, desktops, servers frozen and published: RVA23. Cores meeting this spec and with expected performance comparable to Arm A78 up to maybe X2 have been announced, but it will be several years before they are available in products.
RISC-V is simply very very new. It is doing very well in the markets which it currently addresses.
"The latest version of XiangShan achieves a normalized score of 45 at 3GHz on SPECint 2006. With the performance comparable to ARM Neoverse N2, it is the highest performing open-source processor to our knowledge."
TL;DR: in RTL simulation, XiangShanV3 out-performs my current desktop (Zen 1) per cycle by 2x in the tested scalar benchmarks. It's probably closer to 1.5x on regular code.
This roughly matches the reported SPECint2006 scores:
* XiangShanV2: 9.55/GHz
* XiangShanV3: 15/GHz
* Zen2 [1]: 10.5/GHz at boost, 9/GHz at base frequency (closest to Zen1 I could find)
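Spelling out the per-clock ratio from those figures: 15 / 10.5 ≈ 1.43 against Zen2 at boost, and 15 / 9 ≈ 1.67 at base, which lines up with the "closer to 1.5x on regular code" estimate above.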
There are probably some redundant steps, but this worked for me last time I tried.