
There are definitely a lot of difficult issues and trade-offs in implementing true "WYSIWYG" editors that let you lay out text and graphics that look exactly the same on screen and in print.

This Cedar demonstration video has some great examples of many of these issues. The Tioga text editor was "WYSIMOLWYG": what you see is more or less what you get.

https://www.youtube.com/watch?v=z_dt7NG38V4

Some problems were due to technology and aren't as bad any more. Cedar and NeWS used black-and-white screens and didn't have anti-aliased fonts. CPUs were much slower, memory was much tighter, and graphics weren't accelerated, so even with color displays (which were usually only 8 bits deep, with a colormap that made anti-aliasing very difficult), anti-aliased text wasn't practical.

And fonts rendered at screen resolution have totally different measurements from high-resolution printed fonts, so words have different widths and get flowed and wrapped differently, and the formatting comes out quite different.
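The effect can be sketched with a toy greedy line wrapper fed two hypothetical metric tables for the same font (the widths are made up for illustration; real metrics come from the rasterizer or the AFM files):

```python
# Hypothetical advance widths (in points) for the same 10 pt font:
# coarse screen metrics rounded to whole pixels vs. fractional printer metrics.
SCREEN = {"a": 6.0, "b": 6.0, " ": 3.0}
PRINTER = {"a": 5.6, "b": 5.8, " ": 2.8}

def wrap(text, widths, line_width):
    """Greedy word wrap: start a new line when the next word won't fit."""
    lines, current, used = [], [], 0.0
    for word in text.split():
        w = sum(widths[c] for c in word)
        space = widths[" "] if current else 0.0
        if current and used + space + w > line_width:
            lines.append(" ".join(current))
            current, used = [word], w
        else:
            current.append(word)
            used += space + w
    if current:
        lines.append(" ".join(current))
    return lines

text = "bbbb bbbb aaaa"
print(wrap(text, SCREEN, 50))   # ['bbbb', 'bbbb', 'aaaa']
print(wrap(text, PRINTER, 50))  # ['bbbb bbbb', 'aaaa']
```

The same text at the same nominal size breaks into three lines with screen metrics but two with printer metrics, which is exactly why a screen-metric editor and the printed page disagree about layout.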

NeWS had a "setprintermatch" primitive that made the font rendering machinery use the official precise printer measurements for fonts, instead of the actual screen pixel measurements. That was useful for the PostScript previewer, and for editors previewing how formatted text would look. (There was a nice port of FrameMaker to NeWS, but I don't remember how its preview features worked.) But text rendered with printer metrics looks terrible and is hard to read, so you wouldn't want to actually edit text like that.

NeWS 1.1 manual, page 146:

http://www.bitsavers.org/pdf/sun/NeWS/NeWS_1.1_Manual_198801...

>setprintermatch: (boolean setprintermatch - ) Sets the current value of the printermatch flag in the graphics state to boolean. When printer matching is enabled text output to the display will be forced to match exactly text output to a printer. The metrics used by the printer will be imposed on the display fonts. This will usually cause displayed text to look bunched up and generally reduce readability. With printer matching disabled, readability will be maximized, but the character metrics for the display will not correspond to the printer. See also: currentprintermatch

Windows are resizable, and while text is scalable, you still want to be able to read and edit text in narrow windows, and not waste screen space or get giant text in wide windows. So reflowing text differently for the screen while editing is very useful.

Also, rendering text in a WYSIWYG previewer that precisely scales text is slow the first time and uses a lot of memory, because that particular point size of the font must be rendered into the cache: the system can't just round the point size to a whole number that may already be cached (like the scalable "jet" NeWS terminal emulator did to avoid blowing the cache). So it's better if the system renders all text in the same restricted set of point sizes, instead of scaling it continuously.
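A minimal sketch of that caching trade-off (the rounding behavior is paraphrased from the "jet" description above; the class and names are invented for illustration):

```python
# A font cache keyed either on the exact requested point size (precise
# WYSIWYG preview) or on the size rounded to a whole number (the "jet"
# strategy), showing how rounding lets nearby sizes share one entry.
class FontCache:
    def __init__(self, round_sizes):
        self.round_sizes = round_sizes
        self.cache = {}

    def get(self, face, size):
        key = (face, round(size) if self.round_sizes else size)
        if key not in self.cache:
            # Stand-in for the expensive rasterization step.
            self.cache[key] = f"bitmaps for {face} at {key[1]}pt"
        return self.cache[key]

sizes = [11.8, 11.9, 12.0, 12.1, 12.2]  # continuous scaling
rounded = FontCache(round_sizes=True)
exact = FontCache(round_sizes=False)
for s in sizes:
    rounded.get("Times", s)
    exact.get("Times", s)
print(len(rounded.cache), len(exact.cache))  # 1 5
```

Five slightly different zoom levels cost one rasterization with rounding, but five cache entries (and five slow first renders) when every fractional size must match the printer exactly.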

Also it's costly in terms of screen space to edit two columns of text at once, or pages at a time.

It's just ergonomically easier, and wastes less screen space, to edit formatted text in one continuous non-paginated column, resizable to any window width you want at the time, formatted as close as possible to how it will look in print, but without the exact same measurements.

And in this day and age, most text that people write is intended to be read in many different media, formats, screen and font sizes, and layouts. Now it's about WYSIWYG editing of text and graphics the way they will appear in a web browser, and browsers come in all shapes and sizes (thus Chrome Developer Tools' device preview mode).

Now printing text on paper is only an afterthought, something wasteful to be avoided. So fewer people actually need to edit two-column paginated text just like it will appear printed on paper in a journal, or fuss about exact text layout and worry about terrible "rivers" ruining their otherwise perfectly justified paragraphs.

https://en.wikipedia.org/wiki/River_(typography)

https://en.wikipedia.org/wiki/WYSIWYG

WYSIAWYG; what you see is almost what you get, similar to WYSIMOLWYG.

WYSIMOLWYG, what you see is more or less what you get, recognizing that most WYSIWYG implementations are imperfect.

http://foldoc.org/WYSIWYG

What You See Is What You Get <jargon>

(WYSIWYG) /wiz'ee-wig/ Describes a user interface for a document preparation system under which changes are represented by displaying a more-or-less accurate image of the way the document will finally appear, e.g. when printed. This is in contrast to one that uses more-or-less obscure commands that do not result in immediate visual feedback.

True WYSIWYG in environments supporting multiple fonts or graphics is rarely-attained; there are variants of this term to express real-world manifestations including WYSIAWYG (What You See Is Almost What You Get) and WYSIMOLWYG (What You See Is More or Less What You Get). All these can be mildly derogatory, as they are often used to refer to dumbed-down user-friendly interfaces targeted at non-programmers; a hacker has no fear of obscure commands (compare WYSIAYG). On the other hand, Emacs was one of the very first WYSIWYG editors, replacing (actually, at first overlaying) the extremely obscure, command-based TECO.

See also WIMP.



The Acorn Archimedes had anti-aliased text using 8 grey levels on a 4- or 8-bit display, on computers with 0.5 MB of RAM in 1990... http://telcontar.net/Misc/GUI/RISCOS/#text so in theory it could have been practical for Sun ...


If your display only has gray levels, then anti-aliasing is easy: you don't need to worry about rendering text over non-white backgrounds, and you don't need to worry about a colormap. But 8-bit displays with colormaps make anti-aliased text much harder to support in practice, especially over arbitrary backgrounds (it would look terrible as well as being slow).

NeWS used a color cube with only a certain number of grays, which were only useful for anti-aliasing black text against a white background (or the other way around), and you didn't have many colors to choose from for anti-aliased colored text or text over colored backgrounds. It's much easier to render anti-aliased text in 24 bits, or in pure gray scale, since you aren't restricted to an 8-bit color palette and can efficiently blend colors mathematically. With a colormap you instead have to look up RGB values, blend them to get the ideal desired color, and then search for the nearest colormap entry (or index the color cube for the nearest color) for each pixel, and that entry won't be very near, so the result looks flat and washed out without dithering.

It's visually better to render the whole image with anti-aliased text in full 24 bits, then use error diffusion dithering over the entire image at once (to avoid flat washed-out regions, and to distribute the error over space) to make an 8-bit palettized version, but that's quite slow and memory intensive.
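A minimal sketch of that kind of whole-image error diffusion, using the classic Floyd-Steinberg weights on a grayscale image (a real palettized pipeline would diffuse the error in RGB and map to palette indices):

```python
# Floyd-Steinberg error diffusion: quantize each pixel to the nearest
# palette level, then push the quantization error onto not-yet-visited
# neighbors so it averages out over space instead of flattening regions.
def dither(image, levels):
    """Quantize a grayscale image (list of rows of floats) in place."""
    h, w = len(image), len(image[0])
    step = 255 / (levels - 1)
    for y in range(h):
        for x in range(w):
            old = image[y][x]
            new = round(old / step) * step
            image[y][x] = new
            err = old - new
            # Classic weights: 7/16 right, 3/16, 5/16, 1/16 on the row below.
            if x + 1 < w:
                image[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    image[y + 1][x - 1] += err * 3 / 16
                image[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    image[y + 1][x + 1] += err * 1 / 16

# A flat mid-gray region: naive 1-bit quantization would go solid black,
# but dithering produces a black/white mix that averages out to gray.
img = [[100.0] * 8 for _ in range(8)]
dither(img, 2)
```

After the call, every pixel is exactly 0 or 255, with the mix approximating the original gray; doing this over the whole framebuffer at once is what makes it slow and memory hungry.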

Of course your application (like a monochrome PostScript previewer) could switch in an all-gray ramp colormap for efficiently rendering and beautifully displaying anti-aliased text, but then all the other windows on the screen would be shown in the wrong colormap, and the colormap would flash back and forth when you moved your focus between windows.


Ah, yes, Acorn didn't try to anti-alias properly over varying backgrounds. ArtWorks did full anti-aliasing, but it was quite a lot slower to render: https://en.wikipedia.org/wiki/ArtWorks




