With 'normal' DNS, UDP is the default and TCP is used if the packet size becomes too large. There are other TCP-only variants such as DoT (DNS over TLS) and DoH (DNS over HTTPS).
I don't think performance would matter much with some basic caching (or even just OS-level caching), but there is limited memory on an ESP, so maybe that's it. I've never noticed issues with DoT or DoH, which are theoretically much heavier protocols.
That’s odd because DNS is the quintessential UDP-based protocol. “From the time of its origin in 1983 the DNS has used the User Datagram Protocol (UDP) for transport over IP.” DNS over TCP was only introduced as a later addition (admittedly, in 1989).
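The UDP-to-TCP fallback hinges on a single bit in the DNS header: the TC (truncation) flag. A minimal sketch of the check a resolver performs, using fabricated header bytes rather than a real capture:

```python
import struct

def is_truncated(dns_response: bytes) -> bool:
    """Check the TC (truncation) bit in a DNS header.

    A resolver that sees TC=1 on a UDP response is expected to
    retry the same query over TCP (RFC 1035, section 4.2.1).
    """
    (flags,) = struct.unpack_from("!H", dns_response, 2)  # flags word follows the 2-byte ID
    return bool(flags & 0x0200)  # TC is bit 9 of the 16-bit flags field

# Minimal fabricated 12-byte headers (ID, flags, and four zeroed counts):
truncated_header = struct.pack("!HHHHHH", 0x1234, 0x8200, 1, 0, 0, 0)  # QR=1, TC=1
normal_header = struct.pack("!HHHHHH", 0x1234, 0x8000, 1, 0, 0, 0)     # QR=1, TC=0

print(is_truncated(truncated_header))  # True
print(is_truncated(normal_header))     # False
```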
Using a hotplate is like driving with one eye closed. Sure you’ll get there, but your accuracy will suffer!
Though I’ve had many a SOIC magically realign on the hot plate (they were pretty forgiving), it’s not worth it to run without proper temperature profiling, even if it’s from a hacked toaster oven :)
I've always wanted to build an "I told you so" webapp.
Something that would store phrases associated with people, with an incremental counter. Obviously the lists of phrases, people, and counts are unique to each user.
This way I can say to my daughter, "I've told you <checks webapp> 46 times now to clean up your room!"
Add an immutable time stamp to the phrase, so I can say
“Alice warned Bob about that problem on DD-MM-YYYY” so that when Bob gets hit with it 6 months later, exactly as Alice predicted, Bob can’t say “You never told me that”
As someone with a near-perfect verbal memory (I can recall conversations verbatim from my whole life), it’s incredibly frustrating to have people deny that I told them something when I can recall the exact date, time, and words of when I did :P
I am not an expert, but I think it depends on your MCU. (Some may have a better implementation of a soft pull-up than others.)
I've only implemented i2c on esp32-s2, and left the pullup footprints unpopulated at first. Eventually I came to the same conclusion and put two 10k resistors on and called it a day. Cost was negligible.
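For anyone sizing those resistors rather than defaulting to 10k: the I2C-bus specification gives rough bounds, a minimum set by the driver's sink-current limit and a maximum set by the allowed rise time. A back-of-envelope calculation, where the bus capacitance and rise-time figures below are assumptions, not measurements:

```python
# Back-of-envelope I2C pull-up sizing (formulas from the I2C-bus
# specification; the bus values here are assumed, not measured).
V_DD = 3.3      # supply voltage (V)
V_OL = 0.4      # max allowed low-level voltage (V)
I_OL = 3e-3     # max sink current of the driver (A)
T_R  = 300e-9   # max rise time for 400 kHz fast mode (s)
C_B  = 100e-12  # estimated bus capacitance (F)

r_min = (V_DD - V_OL) / I_OL   # too small and the pin can't pull the line low
r_max = T_R / (0.8473 * C_B)   # too large and the rising edges get too slow

print(f"R_min = {r_min:.0f} ohm, R_max = {r_max:.0f} ohm")
```

At 100 kHz standard mode the rise-time budget grows to 1000 ns, so R_max roughly triples, which is why the common 10k value usually works fine on a short, slow bus.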
Extended memory was memory above 1MB that real mode DOS couldn't access directly but could through a driver that put processors capable of protected mode (the 286--sort of--and later) into protected mode temporarily.
Expanded memory bank switched the additional memory into an area of high memory below 1MB.
Basically, with DOS, a program had to be designed and the system configured for one or the other.
> that put processors capable of protected mode (the 286--sort of--and later) into protected mode temporarily.
Or, on the 386 and later, permanently, running DOS as a vm86 task instead. EMM386 and similar did that. It was very transparent, so you usually wouldn't notice, and even for programs that would like to drive protected mode themselves, there was an interface called DPMI (earlier VCPI) with which they could wrestle protected mode away from the memory manager (being the exception to "permanent").
DOS/4GW, which most probably know from DOOM and other games, used that interface.
For software that did not use DPMI/VCPI, you usually got a message like "CPU already in protected mode" and had to disable your memory manager (usually by rebooting).
Great question, and a puzzler for me at first. From my perspective only expanded memory mattered (a bank-switched block in the memory space below 1 MB), because the apps I was using at first didn't support extended memory, which requires switching into protected mode, and the effort to add that to real-mode programs was high.
It's extra confusing because initially expanded memory was done with add-on cards and later extended memory drivers (on 286 and up) could emulate expanded.
I don't really recall extended memory ever being used much on PCs running a real mode operating system like DOS. I definitely had an expanded memory add-on board at one point though.
There were a lot of what were essentially kludges in the latter days of DOS that weren't really made unnecessary until Windows 3.0 (or other OSes with a protected mode) and the 80386 and later processors.
> I don't really recall extended memory ever being used much on PCs running a real mode operating system like DOS.
Extended Memory/XMS was pretty much a DOS thing, though. Anything using protected mode would not need it, and be able to just access the memory directly.
> that weren't really made unnecessary until Windows 3.0 (or other OSs with a protected mode)
Windows 3.0 was "sort of" a protected mode OS. The actual Windows part ran as a 16-bit task, similar to DOS in EMM386. That changed, again "sort of", with Windows 3.1. There was a thing called "Win32s" to run 32-bit Windows applications, but most of Windows was still a single 16-bit task.
Windows NT was the real deal, though, shedding its real-mode roots with essentially a reimplementation. It only became part of the "mainstream" Windows versions with Windows XP (Windows 2000 was already NT-based, but still marketed alongside the "legacy" Windows 95/98/ME).
I have a VERY vague memory that you could copy Win32s from 3.1 into 3.0 to help run 32-bit apps there. There might have been some fiddling with some sys.ini involved. But I might just be dreaming awake or something...
I can imagine that you're right! Would not be too surprising if Win32s was self-contained enough, and the differences between 3.0 and 3.1 small enough (although there were larger ones at least in some other aspects), that this could be done reasonably. (Though most apps were still likely to be 16-bit apps at that time, so not making any use of Win32s.)
The original 8086 processor could directly address up to 1 megabyte of memory. The PC memory map allocated 640 KB of that for RAM.
The first kind of further memory expansion was classic bank-switching. Within that 1 megabyte address space, you insert a window, say 32 kilobytes, which can be set to some 32 KB section of the extra memory. Same way more than 64 KB was added to systems like the Apple II, which could only address 64 KB. It's a pain to program for this kind of arrangement.
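That windowing scheme can be sketched as a toy simulation (purely illustrative, with a hypothetical interface; real EMS boards mapped 16 KB pages through an INT 67h API):

```python
# Toy simulation of bank switching: a small "window" in the CPU's
# address space maps onto one 32 KB bank of a much larger memory pool.
WINDOW = 32 * 1024

class BankedMemory:
    def __init__(self, total_banks: int):
        self.pool = bytearray(total_banks * WINDOW)  # the "extra" memory
        self.bank = 0                                # currently mapped bank

    def select_bank(self, n: int) -> None:
        self.bank = n  # on real hardware: an OUT to the card's bank register

    def read(self, offset: int) -> int:
        # Every access goes through the window; only one bank is visible
        # at a time, which is what makes this painful to program against.
        return self.pool[self.bank * WINDOW + offset]

    def write(self, offset: int, value: int) -> None:
        self.pool[self.bank * WINDOW + offset] = value

mem = BankedMemory(total_banks=16)  # 512 KB of extra memory
mem.select_bank(3)
mem.write(0, 0xAA)
mem.select_bank(0)
print(mem.read(0))  # 0   -- same offset, different bank, different byte
mem.select_bank(3)
print(mem.read(0))  # 170 -- the value written earlier reappears
```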
Later, when running in protected mode, the Intel 286 would be able to directly address up to 16 megabytes of RAM, and the Intel 386 up to 4 gigabytes. Memory managers running in protected mode that could access this memory directly were created, which would handle memory for DOS applications.
Those two approaches for > 1 MB are XMS and EMS. Though I can't remember which is which!
Brings back memories. I remember my parents being astonished when little me fixed our old IBM PC/XT, which was not booting from the hard disk, using a low-level format "hack" I found in a copy of PC Magazine [0]. The trick was to boot from floppy first and trigger a BIOS interrupt to do the low-level format.
Yes, but to be super pedantic, it stayed within the 1MB barrier[1] that real mode could "officially" address. It was perfectly okay to have your EMS window above the A000h segment in memory, i.e. above 640k, if no device was using it.
[1] Plus some change known as the "high memory area" (HMA), an artifact of how linear addresses are calculated from segment and offset.
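The arithmetic behind that artifact, as a quick sketch: with 16-bit segment and offset registers, the highest addressable real-mode combination is FFFF:FFFF, which lands almost 64 KB past the 1 MB line. On an 8086 that address wrapped around to zero; on a 286 or later with the A20 line enabled it doesn't, and that sliver is the HMA.

```python
def linear(segment: int, offset: int) -> int:
    # Real-mode linear address: segment * 16 + offset
    return (segment << 4) + offset

# Highest address reachable from real mode:
top = linear(0xFFFF, 0xFFFF)
print(hex(top))        # 0x10ffef -- past the 20-bit 1 MB boundary
print(top - 0x100000)  # 65519 bytes above the 1 MB line: the HMA
```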
Fantastic write up - thank you! I also love hearing about why each part was chosen. As an electronics hobbyist I often mimic portions of others' designs but I'm not always entirely sure why a certain part was chosen among the various options.
Do you prototype all this on a breadboard before making the PCB and picking specific parts? I'd be curious to hear more about your process. I feel like I always need to test everything I build on a breadboard first since I inevitably miss some small detail if I go straight to schematic + PCB design.
Great question! It really depends. If I'm working with a part I haven't ever worked with before, I'll often make a minimal breakout board that I can talk to with a devboard like an Arduino or Feather.
If I'm pretty familiar with everything, I generally dive into a rough PCB layout and debug from there. If I absolutely can't get anything to work, I'll go back to the drawing board and possibly do some little breakouts, for example: https://twitter.com/theavalkyrie/status/1457845661370568709
Edit: for this specific project I did make breakouts for testing the MOSFETs and solenoid drivers. I'm glad I did, since I was able to try out a couple of different options for each: https://twitter.com/theavalkyrie/status/1550878465876004865
Seconding the poster above: really enjoyed the reasoning around how and why you picked parts, and the intent of the various circuits.
Do you have any books you'd recommend on electronics? I feel I understand the basics, like what various components do in isolation, but I'd love to level up and understand circuit design the way you're approaching it here.
I'll echo sibling's comment about just doing it. Theory only takes you so far, it's important to actually build and experiment, even if it's just in a simulator.
I can recommend Practical Electronics for Inventors as a solid base of projects to experiment with and learn from. If you want to learn how to improve circuits and optimize for specific behaviors, there is a wealth of information in manufacturer application notes. A lot of the protection circuitry used in my article follows advice found in application notes.
IMO the best way is to just pick a project and work on it. Using application notes and datasheets lets you piece together circuits, and you pick up design patterns along the way.
Usually when I read through a build-out description like this, the author writes it for an audience that has a lot more background knowledge than I do. This article, on the other hand, was lucid and the explication helped me follow along.
I got to the part about 3.3V being derived from 5V and thought to myself "why not get it from 24V?" and then just a tiny bit later, there's the bit about USB and how Starfish can just take 5V from the USB host, and it clicked.
I also learned about Sparkfun's QWIIC connector in the aside about the chosen i2c bus switch having two "extra" channels. Love it, thanks for the article, Thea.
Thank you for the kind words! I'm glad it was useful.
> why not get it from 24V?
There are DC-DC converters in that same family that can do 4.75~36V -> 3.3V, but since I needed the 5V anyway for the LEDs and I/Os, tossing an inexpensive and small LDO for 5V -> 3.3V is an easy call.
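For a sense of the trade-off: an LDO burns the entire input-to-output differential as heat, so the math only works when that differential and the current are small. A rough check (the 100 mA load current below is an assumption, not a figure from the article):

```python
# Rough LDO dissipation check (100 mA load is an assumed figure).
V_IN, V_OUT = 5.0, 3.3
I_LOAD = 0.100                    # A, assumed draw on the 3.3 V rail

p_diss = (V_IN - V_OUT) * I_LOAD  # heat dissipated inside the LDO
eff = V_OUT / V_IN                # best-case linear-regulator efficiency

print(f"{p_diss * 1000:.0f} mW dissipated, {eff:.0%} efficient")
```

Dropping 24 V straight to 3.3 V linearly at the same load would dissipate over 2 W, which is why that conversion wants a switcher rather than an LDO.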
FWIW, when I was thinking more about text-based literate circuit design, one of the advantages I thought of was to leave ample place to comment upon selection rationale for (and subsequent experience with) particular components.
Unless you're manually injecting 0x08 into a database record, isn't it the UI layer that interprets your keyboard's backspace? There's no way to type a backspace without literally pressing Backspace :)
Regarding the null: if the implementation is C-based, theoretically your password just stops there. All other chars after that would be ignored.
Now I wonder, what would other non-C languages do if they see 0x00 in a string?
A password entry control doesn't have to interpret backspace the same way a regular text input control does (obviously meaning you'd need an alternative method of correcting typos). The 08 character can still be part of the control's "text" property that is ultimately hashed/stored etc.
As for nulls, languages not based on C handle them fine, but you'd have to be very careful they never get passed to an OS-level function, which will nearly always treat them as a terminator.
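A quick illustration of that split, using Python: the language itself carries an embedded NUL happily, but an OS-facing call like `open` rejects it outright.

```python
import hashlib

s = "pass\x00word"   # Python strings hold embedded NULs just fine
print(len(s))        # 9 -- the NUL counts as an ordinary character
print("word" in s)   # True -- string operations ignore it entirely

# The NUL even changes the hash, so it can genuinely be part of a password:
a = hashlib.sha256(b"pass\x00word").hexdigest()
b = hashlib.sha256(b"password").hexdigest()
print(a == b)        # False

# But hand the string to an OS-level path API and it is rejected:
try:
    open(s)
except ValueError as e:
    print(e)         # embedded null byte
```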
The author cites performance reasons, but at this scale even the uplink to Cloudflare would be negligible, no?