ASCII is part of Unicode, so the same applies to Unicode: it's also «a limited set. Yes, errors are made and occur. They're reasonably easy to code defensively against».
> Unicode ... vastly expands the attack interface.
Are you aware, offhand, of the maximum number of Unicode code points that can exist?
And how that compares to the number of 7-bit ASCII characters?
And how many special cases would have to be considered?
Scale matters.
The risk:reward ratio for 7-bit ASCII is low and manageable. The expressive capability is high. No, it's not a perfect representation for all languages. It is, however, a sufficient one where common understanding is necessary.
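For concreteness, here is a minimal sketch of the scale argument. The figures are standard: the Unicode codespace runs U+0000 through U+10FFFF, while 7-bit ASCII has 128 characters.

```python
# Compare the size of the Unicode codespace with 7-bit ASCII.
UNICODE_MAX = 0x10FFFF                  # highest valid Unicode code point
unicode_codepoints = UNICODE_MAX + 1    # 1,114,112 possible code points
ascii_characters = 2 ** 7               # 128 characters in 7-bit ASCII

print(f"Unicode codespace: {unicode_codepoints:,} code points")
print(f"7-bit ASCII:       {ascii_characters} characters")
print(f"Ratio:             ~{unicode_codepoints // ascii_characters:,}x")
```

Every one of those code points is a potential special case somewhere in the pipeline.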
The low number of ASCII symbols forces developers to reuse them for all purposes. The number of security holes involving ASCII characters is an order of magnitude (or more) larger than with Unicode. The problem of apple vs appIe (all Latin) vs аррІе (all Cyrillic) vs elppa (all Latin in reverse) is well known.
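The homoglyph problem is easy to demonstrate. Below is a minimal Python sketch; note the script check is a simplified heuristic keyed off character names, not a full implementation of Unicode's confusables detection (UTS #39):

```python
import unicodedata

latin = "apple"                              # all Latin letters
cyrillic = "\u0430\u0440\u0440\u0406\u0435"  # Cyrillic homoglyphs; renders like "apple"

# Visually near-identical, yet unequal as strings:
print(latin == cyrillic)                     # False

# Show what the Cyrillic string actually contains:
for ch in cyrillic:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")

# A common partial defence: reject mixed-script identifiers.
# Simplified heuristic only, not a substitute for UTS #39.
def script_of(ch: str) -> str:
    return unicodedata.name(ch).split()[0]   # e.g. "LATIN", "CYRILLIC"

def is_single_script(s: str) -> bool:
    return len({script_of(ch) for ch in s}) <= 1

print(is_single_script("apple"))             # True  (all Latin)
print(is_single_script("\u0430pple"))        # False (Cyrillic 'а' + Latin)
```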
> Unicode ... vastly expands the attack interface.
Limit yourself to 0..9 to be safe.