For GBDK-2020 we've been using the 6502 support in SDCC to support the NES as a target console for about 2 years alongside the existing Game Boy and SMS/Game Gear targets.
The 6502 port has been usable, but doesn't seem fully mature. There has been a lot of code churn for it during the last 12 months compared to the z80/sm83 ports as it gets improved. Recently (their recommended pre-release build 15614) this seems to have resulted in some breaking regressions that we haven't fully tracked down.
Perhaps this port is getting less testing coverage than the z80/sm83 port. Unsure. The majority of the 6502 work seems to be done by a newer member of their team, with the longer term members seeming to be somewhat hands-off. That might be an additional factor.
Edit: BTW, the 6502 port in SDCC at build 15267 (~4.5.0+) has been reasonably stable and usable, and is what we based our last GBDK-2020 release on (6 months ago).
Ah thank you, that is all very helpful - I've been using 4.4.0 which is fine for Z80 code, but yeah had the feeling 6502 code generation could be improved.
With regard to code size in this comparison someone associated with llvm-mos remarked that some factors are: their libc is written in C and tries to be multi-platform friendly, stdio takes up space, the division functions are large, and their float support is not asm optimized.
I wasn't really thinking of the binary sizes presented in the benchmarks, but more in general. 6502 assembler is compact enough if you are manipulating bytes, but not if you are manipulating 16 bit pointers or doing things like array indexing, which is where a 16-bit virtual machine (with zero page registers?) would help. Obviously there is a trade-off between speed and memory size, but on a 6502 target both are an issue and it'd be useful to be able to choose - perhaps VM by default and native code for "fast" procedures or code sections.
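For illustration only, here is a rough Python model of the idea: a small file of 16-bit "registers" (which on a real 6502 would live as byte pairs in zero page) driven by a tiny dispatch loop, Sweet16-style. The opcodes and encoding here are invented for the sketch, not SDCC's or Sweet16's actual instruction set.

```python
# Sketch of a Sweet16-style 16-bit virtual machine: a handful of
# 16-bit "registers" (zero-page byte pairs on a real 6502) and a
# dispatch loop. Opcodes here are invented for illustration.
SET, ADD, LD_IND, HALT = 0, 1, 2, 3

def run(program, memory, nregs=8):
    regs = [0] * nregs          # would be 16 zero-page bytes on a 6502
    pc = 0
    while True:
        op = program[pc]
        if op == SET:           # SET r, imm16: regs[r] = imm
            r, lo, hi = program[pc+1:pc+4]
            regs[r] = lo | (hi << 8)
            pc += 4
        elif op == ADD:         # ADD r, s: 16-bit add with wraparound
            r, s = program[pc+1:pc+3]
            regs[r] = (regs[r] + regs[s]) & 0xFFFF
            pc += 3
        elif op == LD_IND:      # LD_IND r, s: load byte at address regs[s]
            r, s = program[pc+1:pc+3]
            regs[r] = memory[regs[s]]
            pc += 3
        elif op == HALT:
            return regs

mem = bytearray(256)
mem[0x42] = 99
prog = [SET, 0, 0x40, 0x00,   # r0 = 0x0040 (a "pointer")
        SET, 1, 0x02, 0x00,   # r1 = 2
        ADD, 0, 1,            # r0 += r1 -> 0x0042
        LD_IND, 2, 0,         # r2 = mem[r0]
        HALT]
regs = run(prog, mem)
print(regs[2])  # -> 99
```

The pointer arithmetic and indirect load each cost one compact VM instruction here, where native 6502 code would need multi-instruction 16-bit sequences; the price is the dispatch overhead per operation.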
A lot of the C library outside of math isn't going to be speed critical - things like IO and heap for example, and there could also be dual versions to choose from if needed. Especially for retrocomputing, IO devices themselves were so slow that software overhead is less important.
More often than not the slow IO devices were coupled with optimized speed-critical code, due to cost savings or hardware simplification. Heap allocation rarely works well on a 6502 machine: there are no 16-bit stack pointers and it's just slower than doing without. However, I tend to agree that a middle-ground 16-bit virtual machine is a great idea. The first one I ever saw was Sweet16 by Woz.
I agree about heap - too much overhead to be a great approach on such a constrained target, but of course the standard library for C has to include it all the same.
Memory is better allocated in more of a customized application specific way, such as an arena allocator, or just avoid dynamic allocation altogether if possible.
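As a sketch of the arena idea (a minimal bump allocator in Python, names invented for illustration): allocation is just a pointer increment, and everything is freed at once by resetting the pointer, which on a 6502 target would be a zero-page pointer into a fixed buffer.

```python
# Minimal bump ("arena") allocator sketch: allocation is a pointer
# increment; there is no per-allocation free, the whole arena is
# released at once by resetting the pointer.
class Arena:
    def __init__(self, size):
        self.buf = bytearray(size)
        self.top = 0

    def alloc(self, n):
        if self.top + n > len(self.buf):
            raise MemoryError("arena exhausted")
        off = self.top
        self.top += n
        return off              # offset into self.buf

    def reset(self):            # free everything in O(1)
        self.top = 0

arena = Arena(1024)
a = arena.alloc(16)             # -> 0
b = arena.alloc(32)             # -> 16
arena.reset()                   # e.g. between frames or game states
```

Compared to a general-purpose heap there is no free list, no coalescing, and no fragmentation bookkeeping, which is exactly the overhead a constrained target wants to avoid.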
I was co-author of Acorn's ISO-Pascal system for the 6502-based BBC micro (16KB or 32KB RAM) back in the day, and one part I was proud of was a pretty full featured (for the time) code editor that was included, written in 4KB of heavily optimized assembler. The memory allocation I used was just to take ownership of all free RAM, and maintain the edit buffer before the cursor at one end of memory, and the buffer content after the cursor at the other end. This meant that as you typed and entered new text, it was just appended to the "before cursor" block, with no text movement or memory allocation needed.
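The scheme described above is essentially what is now called a gap buffer. A minimal Python sketch of the idea (a modern reconstruction for illustration, not the original assembler):

```python
# Two-ended edit buffer ("gap buffer"): text before the cursor grows
# from the front, text after the cursor sits at the back, and typing
# just appends into the gap with no text movement or allocation.
class GapBuffer:
    def __init__(self, size):
        self.buf = bytearray(size)
        self.pre = 0                 # end of before-cursor block
        self.post = size             # start of after-cursor block

    def insert(self, ch):            # typing: append, no text movement
        assert self.pre < self.post, "buffer full"
        self.buf[self.pre] = ch
        self.pre += 1

    def cursor_left(self):           # moving the cursor copies one byte
        self.post -= 1
        self.pre -= 1
        self.buf[self.post] = self.buf[self.pre]

    def text(self):
        return bytes(self.buf[:self.pre] + self.buf[self.post:])

gb = GapBuffer(16)
for ch in b"abc":
    gb.insert(ch)
gb.cursor_left()                     # cursor now between 'b' and 'c'
gb.insert(ord("X"))
print(gb.text())  # -> b'abXc'
```

Insertion at the cursor is O(1); only cursor movement pays a per-byte copy, which matches the typing-heavy workload of an editor.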
Presumably it would be straightforward to port the GB code generation to the Intel 8080 / Z80. There have been a few attempts for LLVM for those CPUs over the years. But none which panned out, I think?
Most attempts at developing new LLVM downstream architectures simply fail at keeping up with upstream LLVM, especially across major releases. Perhaps these projects should focus a bit more on getting at least some of their simpler self-contained changes to be adopted upstream, such as custom optimization passes. Once that is done successfully, it might be easier to make an argument for also including support for a newly added ISA, especially a well-known ISA that can act as convenient reference code for the project as a whole.
The CE-dev community's LLVM back-end for the (e)Z80 'panned out' in that it produced pretty decent Z80 assembly code, but like most hobby-level back-ends the effort to keep up to date with LLVM's changes overwhelmed the few active contributors, and it's now three years since the last release. It still works, so long as you're OK using the older LLVM (and clang).
This is why these back-ends aren't accepted by the LLVM project: without a significant commitment to supporting them, they're a liability for LLVM.
There is a variation on the new-pcb and components approach called the "Ultra Boy Colour" which doesn't require any parts from an OEM Game Boy Color.
It can use a clone CPU that was manufactured (until recently?) for the "GB Boy Colour" (and could also be pulled from the sort-of-failed SGB-like clone called the "Extension Converter for GB").
> I redesigned the Gameboy Color from the ground up
> based on the premise that it wouldn't require any
> components to be harvested from a real GBC, so it
> would be compatible with both the original GBC and
> clone hardware taken from a GB Boy Colour- since
> the KF2007 clone CPU from the GBBC is practically
> a drop-in replacement for the CGB CPU found in a
> real GBC- no software compatibility issues at all.
GBStudio does use cart ROM banking (for code and assets), but additional cart RAM (SRAM) is typically only used for save data, if I remember correctly.
In general for Game Boy games the constraining factors are most often CPU and ROM (for large worlds and lots of graphics) and less so RAM.
Some of GBStudio's core is in C and some is in asm. The underlying compiler (SDCC) has been making noticeable gains in recent years, which helps. Plenty of room for SDCC to improve still, but very usable for projects.
FWIW, there is a large Game Boy homebrew competition put on by the community that just started this past week and runs for 3 months. (disclaimer: I'm one of the organizers)
Many participants (of all skill levels) will submit games written in GBStudio, and some will also write games in ASM and C.
Garmin watches work well with the desktop sports tracking program MyTourBook.
I've never had to activate a Garmin watch or register it online in order to use it. Not connecting it to Garmin services and apps may limit access to 3rd-party apps, or at least make it harder to load them, though. (I haven't researched side loading since the base features are fine for me.)
MyTourBook supports FIT/GPX/TCX/etc., has maps, calendars, charts and activity classification. It's built with Java, so it can run on at least Windows and Linux. There may be other good programs now, but as of a couple of years ago it appeared to be among the better options.
It's open source (and they were supportive of PRs for improving Garmin import).
Right, anonymized data that shares your vitals 24/7 (which are fairly unique) and location data which totally cannot infer your home and work locations. You’re deluded if you believe this can truly be anonymized and stay anonymized.
Don't a lot of people start and/or end their regular fitness runs or bike rides at their doorstep? The thing about these fitness tracking tools is that you start wanting to track all your activities to get the statistical trends. It becomes integrated into your day to day lifestyle.
I currently live in a metropolitan area, so no, I don't. I usually have a brisk walk until I reach the place I consider the starting point, since it's not very comfortable to run where I live, and I don't start tracking the exercise until then.
Most of my friends who exercise also do it at places at least a couple of hundred meters away from their home. The ones who live outside the city probably do start their run right outside their doors, though.
Yes, that is common practice for some athletes. The Garmin Connect platform has pretty good privacy controls which allow users to hide activities or mask tracks near certain locations. Of course if Garmin gets hacked then all of the raw data could be exposed, but the same concern applies to smartphone platforms as well.
This is correct. A phone number is NOT required to enable 2FA, at least in my experience within the last few months.
I set up 2FA to use Yubikey hardware keys for a Google account, and was then allowed to generate app passwords. No phone number has ever been attached to the account.
I do agree that not allowing app-passwords to be generated without setting up 2FA is coercive and seems hard to justify, and it is plausible that it is being used to push people into attaching their phone numbers to their accounts. If I recall right, the current language for the setup process skews heavily toward phone numbers and does not do a good job of highlighting other (more privacy oriented) alternatives (as may be evidenced at least in the case of OP).
This may be a recent change; a few years ago when I tried this, I was definitely unable to add Yubikeys to a Google account until I added phone-based 2FA first.
If now it's just 'not recommended' then this is an improvement.
https://twitter.com/FG_Software/status/1491044035884371971
"Official #Wordle dictionary implemented, and the game can now select a solution from all those found in the original for the cost of 1 extra bit per word!
Uncompressed size (Raw text files): 76060 bytes
Compressed size: 26256 bytes"
https://twitter.com/FG_Software/status/1495298243668099073
"Words are stored in 2 bytes: 15 bits data, 1 bit to check if it's a solution. They're all sorted alphabetically, so I can algorithmically determine the first 2 letters with a lookup table, and stick the last 3 letters in 15 bits.
Bit more to it but you can't fit it all in a Tweet"
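One plausible realization of the 2-byte encoding the tweet describes (a guess at the scheme from its description, not FG_Software's actual code): the last 3 letters take 3 x 5 = 15 bits, the remaining bit flags whether the word is a valid solution, and the first 2 letters aren't stored per-word at all, since with the list sorted alphabetically a small prefix table mapping each 2-letter prefix to its starting index recovers them.

```python
# Pack a 5-letter word into 2 bytes: 15 bits for the last 3 letters
# (5 bits each), 1 bit for the solution flag. The 2-letter prefix is
# implied by the word's position in the sorted list (via a lookup
# table) and passed in separately when unpacking.
def pack(word, is_solution):
    c, d, e = (ord(ch) - ord('a') for ch in word[2:])   # last 3 letters
    bits = (c << 10) | (d << 5) | e                      # 15 bits
    return (bits << 1) | int(is_solution)                # +1 solution bit

def unpack(value, prefix):
    bits, sol = value >> 1, bool(value & 1)
    c, d, e = (bits >> 10) & 31, (bits >> 5) & 31, bits & 31
    return prefix + ''.join(chr(ord('a') + x) for x in (c, d, e)), sol

v = pack("crane", True)
assert v < 0x10000                  # fits in 2 bytes
print(unpack(v, "cr"))  # -> ('crane', True)
```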
I've been working on a Game Boy Color (and regular GB) fork that in current builds uses the compression by arpruss.
https://github.com/bbbbbr/gb-wordle
The linked GBC version is my fork with some improvements (and more in the works).
The current published release uses a similar compression approach by zeta_two, but in current builds I've switched to the compression by arpruss since total data + decompression code size is now a couple hundred bytes smaller.
Not sure how big the word dict is in your latest version, but you can do much better simply by reordering how you create your index.
With alphabet in order, assembling letters ABCDE: 17345 bytes
With alphabet in order, assembling letters EDCBA: 16949 bytes
With alphabet order tweaked, assembling letters EDCBA: 16309 bytes
Where tweaked means you build your offset as if each position was ordered like this ([::-1] means reverse if you're unfamiliar with Python).
You can also use a prefix rather than variable length encoding, this means you can use 2 bits to represent a number bigger than 2^14, rather than 3. This might hurt your ability to decode though, as you'll have bits that cross byte boundaries.
You can get much smaller using length 3 varints rather than 7 (13,110 bytes), but I presume that would perform worse on GB hardware than staying byte aligned.
Not sure what technique you're using for the answers list, but the compress5.py suggests it's doing a basic bitmap.
Base bitmap is 12972 bits, or 1622 bytes (your file lists 1619, not sure why it's 3 bytes smaller, but all the same). You can "skip encode" (I don't know the formal name for this technique) into 1232 bytes by encoding runs of three [0, 0, 0] as [0], and anything else as [1, X, X, X], saving another 390 bytes.
I tried all combinations of runs between 1 and 7, and 3 is optimal.
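A minimal Python sketch of that skip encoding, as I understand it from the description above: scan the bitmap in groups of 3 bits, emit a single 0 bit for an all-zero group, and a 1 bit followed by the group otherwise.

```python
# "Skip encode" a bitmap: each run-length-3 group of zeros collapses
# to one 0 bit; any other group becomes [1, b0, b1, b2]. Bits are
# modeled as a list of ints here for clarity, not packed into bytes.
def skip_encode(bits, run=3):
    out = []
    for i in range(0, len(bits), run):
        group = bits[i:i+run]
        if any(group):
            out.append(1)
            out.extend(group)
        else:
            out.append(0)
    return out

def skip_decode(bits, run=3):
    out, i = [], 0
    while i < len(bits):
        if bits[i]:                       # literal group follows
            out.extend(bits[i+1:i+1+run])
            i += 1 + run
        else:                             # skipped all-zero group
            out.extend([0] * run)
            i += 1
    return out

bitmap = [0,0,0, 1,0,0, 0,0,0, 0,1,1]
enc = skip_encode(bitmap)
assert skip_decode(enc) == bitmap
print(len(bitmap), len(enc))  # -> 12 10
```

The win depends on the density of set bits: sparse bitmaps trade 3 bits for 1 on zero groups at the cost of 1 extra bit per non-zero group, which is why a run length of 3 came out optimal for this particular solution bitmap.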
ASM will tend to be more efficient, especially if written by someone with experience and skill, but that isn't enough to prevent other tools from being used successfully. It's described in much more detail in the linked article.