drujensen's comments | Hacker News

100%.

Three years ago I modified his 6502 breadboard project, adding a keyboard and a display, and then modified MS BASIC to run on it. I also found a way to extend the RAM to 31KB. I called it eater-one.

All of his videos and kits are a must if you have never understood how a CPU works. I highly recommend them.

https://github.com/drujensen/eater-one


Another suggestion is Anders Nielsen's channel - his video on a single breadboard 6502 computer (really a 6507) with a RIOT is really good: https://youtu.be/s3t2QMukBRs


This reminds me of the studies done related to traffic lights and stop signs.

Removing traffic lights and stop signs actually reduces accidents: drivers become more careful and alert when driving through intersections, which reduces speeds.

Developers will adapt to their toolset. If you have a statically typed language, you trust it to deal with type-related issues and become more lax about testing for them. When you develop in a dynamically typed language like Ruby, you tend to write more tests and not trust a compiler (because you don't have one). This is why you will find most Ruby developers are really good at writing tests and embracing TDD.


Your point is valid, but you move really quickly past just how slow drivers have to go when there aren't traffic lights. As with everything, they're a helpful tool for efficient traffic, just like static compilation.


I can't speak for all Ruby developers, but I found that I could read a pull request from just about anyone I worked with and find a spot where they hadn't covered a possible nil with a test. And yes, we had coverage checks.

A type system can keep you from having to write those tests.


> A type system can keep you from having to write those tests.

Because with a proper static lang (hint: not Java, not C#), nil doesn't exist? Right.


Those languages don't have null safety, but plenty of languages do. Rust, Kotlin, Swift, Haskell, etc.

The claim is true: a type system _can_ prevent null-related issues and eliminate the need to account for them in tests. That's not the same as saying every type system does.
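
For example, here's a quick, hypothetical Rust sketch (the function and values are made up) showing the compiler forcing you to handle the "no value" case before you can use the value at all:

    // A hypothetical lookup that may or may not find a user's email.
    fn find_email(user_id: u32) -> Option<String> {
        if user_id == 42 {
            Some("dru@example.com".to_string())
        } else {
            None // no null here; the absence is encoded in the type
        }
    }

    fn main() {
        // The compiler will not let us treat an Option<String> as a String,
        // so the "forgot to check for nil" bug can't compile in the first place.
        match find_email(7) {
            Some(email) => println!("send to {email}"),
            None => println!("no email on file"),
        }
    }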


Doesn't C# support this by enabling Nullable?


They all have nulls but a static lang will warn you that the value can be null.


This is false. There are plenty of languages without pervasive implicit nullability. Check out Haskell, Rust, and OCaml.


You are right. I should have said some. The point I wanted to make was that having type safety is better for catching nulls than having no types at all.


It can if you choose to return Optionals instead of nulls.


That's a good analogy, because just as an intersection with enough throughput makes relying on drivers to navigate their way through unrealistic, once a codebase reaches a certain size or complexity it becomes really time consuming to follow untyped logic all over the place, and you run the risk of a rockstar developer putting a scooter object into the side door of your minivan object.

Static typing gives you assurances and tools with which to test your assumptions in the code, for those times when reading the whole stack is cumbersome, and you need to defend against less careful developers. It also transfers a bit of knowledge between developers in a trivial way that would otherwise be a pain to communicate.


I think this analogy is close to the dynamic vs. static debate. However, there are probably more factors to consider, such as the competence of the driver (will the driver even care to slow down?), the location of the intersection (an intersection just around a shallow corner), and the value of the driver's car (does the driver care about a little damage?).

In my experience similar arguments hold for software developers. Especially caring can be a big factor; i.e. the "move fast, break things" mentality.

I've been back and forth between typed and untyped languages (somewhere in the range of Haskell and Tcl) and personally prefer less typing when hacking things together and more typing for high quality software. I'm currently working an infra job where we use both Ansible and Terraform. They're not direct competitors, but I tend to prefer Terraform over Ansible when possible, as Terraform gives me more "static" guarantees, which translates to more confidence when we apply our code.


As a former amateur physicist who has read a couple of books over the years :)

To me it's quite simple. We haven't detected what is causing the waves that the particle of light is riding on. The particle of light is like a surfboard riding a wave, and it will always hit the shore in the interference pattern. Einstein and Bohr were both right.

What has been the wrong assumption over the years is that the light is generating the waves. It seems obvious to me that something else outside of the light (that we haven't detected yet) is generating the waves.

My amateur physicist guess is the waves are generated by the clock cycles of the computer simulation we are in. All computers require a clock to function. Why would our universe be any different?


Not saying you're wrong in the slightest, but why would our universe be like a computer? When our understanding of the universe was very primitive we thought everything was alive (animism). Then with a little more technology we thought the universe was like an orrery or a clock (mechanism)[0]. Then a little more and we think it is like a computer (Turingism?). Occasionally you'll hear it's like a hologram, a simulation (what's it simulating?), a graph [1], or some other faddish concept.

We just like to make metaphors to put this thing in a box that we can't understand. But perhaps at its most fundamental level it will defy comprehension or even definition.

[0] In the philosophical sense; https://en.wikipedia.org/wiki/Mechanism_(philosophy)

[1] https://syncedreview.com/2020/04/17/stephen-wolfram-the-path...


What's the difference between a clock and a computer? And there's no difference between a computer and a computer simulation.

Being a hologram is something I see as a significantly different type of comment. That's about how a sphere of space is mathematically equivalent to a flat 2d shell using equivalent but warped physics. It doesn't change anything about the nature of the universe except sort of the number of dimensions. And it's orthogonal to those ideas.

I'm not sure how to categorize the graph thing but it's not widespread at all.


> why would our universe be like a computer?

Because information is fundamental.

Computers just so happen to be our best tools in the information domain.


> The particle of light is like a surf board, riding a wave and will always hit the shore in the interference pattern.

This is exactly Bohm's Pilot Wave theory that the article talks about. It has been debunked to some extent, but I believe the debunking is still somewhat controversial among proponents. There are neat macro-level simulations of it called "walking droplets" if you search for them.

https://en.wikipedia.org/wiki/Pilot_wave_theory

https://www.pml.unc.edu/walking-drops


But then every point in space has its own clock, so you'd have gazillions of tiny clocks instead of one big global clock that ticks for the entire universe/reality.

Or perhaps if we do have one big global clock, your distance from it warps/distorts other parts of reality at each point in space? Maybe there is a limit to how far this heartbeat travels (edge of the universe)?

You can also then ask, at what speed or clock cycle is reality "rendering" and is it the same speed everywhere? It would seem each point renders itself and there would be no big global processor doing the rendering.

Then finally if you really want to dig deep, ask why is it being rendered to begin with. Might need some psychedelics for this one instead of math.

What if there are no clocks nor any points, what if matter is just a condensate of frequencies/sound/harmonics - that is, there is a "great piano player" and the "sound" it emits is the universe, just a side effect. If it stops playing, the universe disappears/collapses into nothingness.

note: a point here refers to a point in a massive 3D grid of points.


That's why time slows down near heavy objects: there is more calculation to be done, and you don't want to drop frames.


>All computers require a clock to function. Why would our universe be any different?

Clocks are not required at all, even for digital computers. They only make designing computers a lot simpler.


Clockless computers don't have a global clock but they still have components that "tick".


Depending on the design, no more than an abacus has components that "tick". And I'd never say an abacus requires a clock to function.

If even a single bit gets hung up in a straightforward clockless design, everything else waits for it.

None of that is really visible to something inside the computer, though. The OP is describing calculation steps, not clock cycles.


Assuming that it is true, what is the clock/CPU speed?


The released speed, or the overclocked speed? Let's face it, if the universe isn't overclocked, then we've got some serious hacking to do.


I would assume: 1 / t_P (the Planck time) ≈ 1.8549×10^43 Hz
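
For what it's worth, that number is just the inverse of the Planck time:

    t_P = \sqrt{\hbar G / c^5} \approx 5.391 \times 10^{-44}\,\mathrm{s}
    f_P = 1 / t_P \approx 1.8549 \times 10^{43}\,\mathrm{Hz}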


lol, yes it did.


Your project is informative and awesome, I hope you don't mind the attention.


Hardware used is listed at the top of the readme. https://github.com/drujensen/fib/blob/master/README.md


yup, this was my original goal.


OK, owner of the repo here. This project was purely to show the macro differences between interpreted Ruby and compiled Crystal to beginner Ruby devs at a meetup.

I decided to add the top languages on GitHub to help give some idea of how Crystal performed against them.

I'm happy to see so much discussion about it and was taken by surprise when my inbox was full this morning. Thanks @anonfunction. ;-)

The breaking benchmark examples were just that: examples of how to break the benchmark. I didn't expect to get a memoized version in every language, and I really don't think comparing them as a performance benchmark makes much sense. Let me know if I'm wrong about that.
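
For anyone wondering what "breaking the benchmark" means here: a memoized version only computes each value once, so it no longer measures the huge number of recursive calls the naive version makes. A rough sketch of the idea in Rust (not from the repo, names and numbers are just illustrative):

    // Naive version: roughly 2^n calls, which is what the benchmark measures.
    fn fib(n: u64) -> u64 {
        if n <= 1 { n } else { fib(n - 1) + fib(n - 2) }
    }

    // "Broken" version: memoization collapses the call tree to O(n) work,
    // so the timing no longer reflects function-call overhead at all.
    fn fib_memo(n: usize, cache: &mut Vec<Option<u64>>) -> u64 {
        if n <= 1 {
            return n as u64;
        }
        if let Some(v) = cache[n] {
            return v;
        }
        let v = fib_memo(n - 1, cache) + fib_memo(n - 2, cache);
        cache[n] = Some(v);
        v
    }

    fn main() {
        let n = 35;
        let mut cache = vec![None; n + 1];
        println!("naive: {}", fib(n as u64));
        println!("memo:  {}", fib_memo(n, &mut cache));
    }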

I am fine with adding all of your change requests and will try to keep the benchmarks up to date.


Breaking benchmarks... Is that like the time I fiddled with the corresponding C program trying to get it to run at least in the same ballpark as the Rust version? That was until I determined that the Rust optimizer noted that the results of the 'meat' of the test were never used so it just optimized the whole thing away. :) (I thought it was a little disingenuous for the Rust folks to use this as an example of how performant Rust was.)
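
If anyone hits the same thing: the usual fix on the Rust side is std::hint::black_box, which tells the optimizer to assume a value is used so it can't delete the work being timed. A minimal sketch (not the actual program from that comparison; the workload is made up):

    use std::hint::black_box;

    // Stand-in for the "meat" of the benchmark.
    fn expensive(n: u64) -> u64 {
        (0..n).fold(0u64, |acc, i| acc.wrapping_add(i * i))
    }

    fn main() {
        // Without black_box the result is never observed, so the optimizer
        // is free to throw the whole loop away and report a near-zero time.
        let _ = black_box(expensive(black_box(10_000_000)));
    }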


I used to benchmark using Fibonacci, but the recursive method is just awful because it does a lot of function calls and you're essentially benchmarking that. Then I switched to finding primes using the Sieve of Sundaram. It uses arrays, hashes/maps/dicts, and two loops. It also wastes a lot of memory if you don't split your search domain into ranges. The surprise was that Go and D (in that order) turned out to be faster than Rust, mainly due to Rust's HashMap SipHash algorithm. I gave up trying to use other hash libraries (SeaHash specifically) that are not part of the standard lib, because it was quite frustrating compared to D.
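
For anyone unfamiliar with it, a rough Rust sketch of the Sieve of Sundaram, using a plain Vec<bool> rather than a hash map, just to show the shape of the two loops:

    // Sieve of Sundaram: mark every i + j + 2*i*j; the unmarked k then give
    // the odd primes 2*k + 1, so this returns all primes below 2*n + 2.
    fn sundaram(n: usize) -> Vec<usize> {
        let mut marked = vec![false; n + 1];
        for i in 1..=n {
            let mut j = i;
            while i + j + 2 * i * j <= n {
                marked[i + j + 2 * i * j] = true;
                j += 1;
            }
        }
        let mut primes = vec![2]; // 2 is the only even prime, added by hand
        for k in 1..=n {
            if !marked[k] {
                primes.push(2 * k + 1);
            }
        }
        primes
    }

    fn main() {
        println!("{:?}", sundaram(30)); // primes up to 62
    }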


> gave up trying to use other hash libraries

fwiw a Rust k-nucleotide program using indexmap:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


> really don't think comparing them from a performance benchmark makes much sense

No, it really doesn't — but if you provide times we all know that's exactly what people will do.

That's why the same comparison was removed from the benchmarks game and replaced with tasks that were still toy but more than a dozen lines.

> adding all of your change requests

These are the programs that were replaced:

https://salsa.debian.org/benchmarksgame-team/archive-alioth-...

https://salsa.debian.org/benchmarksgame-team/archive-alioth-...

