Why can't a display behave exactly like bit-mapped memory? That is, you set a pixel in memory, and that pixel changes on the screen, at that moment.
That is, like a vector display, but using the LEDs of a flat screen. There's no electron beam scanning over the phosphors as there was in a cathode ray tube.
--
The LEDs can switch only so fast - but does this latency prevent them from being switched independently?
The LEDs need to stay on for long enough for the human visual system to perceive them (and without flickering etc) - but this could be managed in other ways.
I think the main reason, apart from inertia, is that the larger market for displays is TVs, where the concept of updating the whole frame (frames per second) is even more entrenched. Though there's no reason video couldn't be displayed the same way; it just pushes a kind of compression out to the display itself. Light from real objects is not emitted one frame at a time.
I assume you're talking about LCD? Not too many LED displays around (though the principle is still similar).
While there's no scanning electron beam, the electronics on these displays are still optimized for scanning. That is, they push a whole bunch of adjacent pixels in every clock cycle. If you try to use them for random access, your refresh rate is going to fall dramatically. The problem isn't whether or not each pixel can be switched independently; it's how to efficiently address the pixels and move data from the display controller to the display.
(editing with some more info)
If you changed the monitor's protocol to push X, Y, pixel and souped up the on-board electronics, your pixel rate would probably fall by an order of magnitude. So your 80Hz display is now an 8Hz display (for full frames). In terms of how each pixel behaves they are independent (each has its own transistor and capacitor), but the addressing is on a grid. So you can select your row and then set a whole bunch of pixels in this row (for example)...
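To make that order-of-magnitude guess concrete, here's a rough cost model. The settle and clock times are made up, chosen only so the numbers land in a plausible range; real gate-settle and source-driver timings vary by panel.

```python
# Rough cost model for updating pixels on a row/column-addressed panel.
# Timing numbers are illustrative assumptions, not real panel specs.

ROWS, COLS = 1080, 1920
T_ROW_SELECT = 50e-9   # assumed: time to assert a gate line and let it settle (s)
T_COL_WRITE  = 5e-9    # assumed: time to clock one pixel value out of the source driver (s)

def full_frame_raster():
    """Sequential scan: pay the row-select cost once per row."""
    return ROWS * (T_ROW_SELECT + COLS * T_COL_WRITE)

def random_access(n_pixels):
    """Scattered writes: pay the row-select cost for every single pixel."""
    return n_pixels * (T_ROW_SELECT + T_COL_WRITE)

print(f"raster full frame:        {full_frame_raster()*1e3:.1f} ms")
print(f"random-access full frame: {random_access(ROWS*COLS)*1e3:.1f} ms")
```

With those assumed timings the sequential scan comes out around 10 ms per frame (roughly 96Hz), while per-pixel addressing takes about 114 ms (roughly 9Hz), i.e. about the order-of-magnitude drop guessed above, because the row-select overhead is paid per pixel instead of per row.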
Details will vary between displays, but my point is that because the pixels are on a grid, sequential access is going to be faster. This is not unlike memory. For example, here's a description of how an active-matrix TFT-LCD panel is driven:
"The TFT-LCD panel of the AMLCD is
scanned sequentially line-by-line from
top to bottom. Each line is selected by
applying a pulse of +20V to gate line Gn,
which turns on the TFTs in that specific
row. Rows are deselected by applying
–5V to G
n-1
and G
n+1, which turns off
all the TFTs in the deselected rows and
then the data signal is applied from the
source driver to the pixel electrode.
The voltage applied from the
source driver, called ‘gray-scale voltage,’ decides the luminance of the pixel. The storage capacitor (CS) maintains
the luminance of the pixel until the
next frame signal voltage is applied.
In this way, the next line is selected
to turn on all the TFTs, then the data
signal is fed from the source driver and
hence scanning is done."
Selecting rows like that is very similar to how DRAM works, just with the word size being the number of subpixels in a row, and with no read port. Bit-level random access is inefficient, but you don't have to write to the rows in sequential order, and you don't have to update all the other rows before issuing another update for the first row. That's purely a limitation of the current driving circuitry, but a replacement like G-SYNC doesn't have to be bound by sequential rasterization any more than it has to stick to a fixed refresh rate.
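To illustrate the shape of that idea, here's a sketch of a hypothetical row-addressed update message. This is not G-SYNC's actual protocol or any existing display link, just what "rows in any order" could look like.

```python
# Hypothetical row-addressed update link: each message names a gate line and
# carries that row's pixel data, so rows can be refreshed in any order and
# at independent rates. A sketch, not an existing display protocol.

from dataclasses import dataclass
from typing import List

@dataclass
class RowUpdate:
    row: int              # which gate line to drive
    pixels: List[int]     # gray-scale values for every column in that row

def apply_updates(updates: List[RowUpdate]) -> None:
    # Rows may arrive out of order and may repeat; rows that get no update
    # simply keep holding their last value on the storage capacitors.
    for u in updates:
        print(f"row {u.row}: rewrite {len(u.pixels)} subpixels")

# e.g. refresh only the two rows that changed, bottom one first
apply_updates([RowUpdate(900, [0] * 1920), RowUpdate(12, [0] * 1920)])
```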
But it most likely works that way because (analog) displays have always worked like that, and display technology needed to stay as compatible as possible across CRTs and TFTs.
From my point of view you have not explained why it's not technologically feasible. You're merely describing that current display tech doesn't work that way... of course it doesn't.
It's hard to speak about feasibility in absolute terms here. I would disagree that the current tech is the way it is simply because of the history of CRTs (though there's definitely some influence). Displays have evolved to their current technology by optimizing for things like manufacturability, price and performance. Those obviously come ahead of CRT compatibility.
The motivation for the gridded layout is clear, I think? You have this grid of transistors and you need to address them individually. Being able to drive an entire line and then select the columns is a good and relatively cheap solution. Now you can drive all pixels in one line concurrently if you need to, and the performance of a single pixel becomes less of a bottleneck.
So the row/column grid structure isn't a result of needing to be compatible with CRTs. Also, accessing in sequence naturally lets you simply send the data and clock down the line; random access would require either multiplexing the coordinates onto the bus or widening it.
I would imagine it's possible to design a random-access LCD. You would need better-performing individual pixels, you would almost certainly need more layers and more conductors, and you would complicate your interfaces and protocols. So you end up with a more complex and expensive system for practically little benefit. In many applications (games, video) all pixels change every frame anyway.
Sub-scanning a rectangular portion of the display is maybe a more reasonable target.
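To put rough numbers on "practically little benefit": a back-of-the-envelope comparison, assuming a 1080p panel, 24-bit pixels and about 22 bits of x/y address per individually addressed write.

```python
# Back-of-the-envelope: when does sending (x, y, value) tuples beat streaming
# the whole frame? All numbers are illustrative assumptions (1080p, 24-bit colour).

COLS, ROWS = 1920, 1080
PIXEL_BITS = 24
ADDR_BITS = 11 + 11            # enough to address 1920 columns and 1080 rows

frame_bits = COLS * ROWS * PIXEL_BITS

def addressed_bits(changed_fraction):
    changed = int(COLS * ROWS * changed_fraction)
    return changed * (ADDR_BITS + PIXEL_BITS)

for f in (0.01, 0.1, 0.5, 1.0):
    ratio = addressed_bits(f) / frame_bits
    print(f"{f:4.0%} of pixels changed -> addressed update is "
          f"{ratio:.2f}x the size of a full frame")
```

With those assumptions, addressed updates only pay off when well under half the pixels change per frame, which games and video rarely satisfy.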
Because of overhead. It's much slower to communicate "turn pixel at X,Y on" to the display for millions of individual pixels than it is to communicate "here's all the millions of pixels you need to display in sequence" in one message.
Because telling the display to update one pixel at a time would require crazy bandwidth, and besides, that's not how monitors, video cards, drivers or software work.
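For a sense of scale, again assuming 1080p at 60Hz, 24-bit pixels and 22-bit addresses:

```python
# Rough link-rate comparison: streaming full frames vs. sending one addressed
# message per pixel. All numbers are illustrative assumptions.

COLS, ROWS, FPS = 1920, 1080, 60
PIXEL_BITS = 24
ADDR_BITS = 22                  # x and y coordinates for a 1080p grid

pixels_per_second = COLS * ROWS * FPS            # ~124 million updates/s

stream_gbps    = pixels_per_second * PIXEL_BITS / 1e9
addressed_gbps = pixels_per_second * (ADDR_BITS + PIXEL_BITS) / 1e9

print(f"pixel updates per second: {pixels_per_second/1e6:.0f} million")
print(f"full-frame streaming:     {stream_gbps:.1f} Gbit/s")
print(f"per-pixel messages:       {addressed_gbps:.1f} Gbit/s")
```

Even before any per-message framing overhead, per-pixel messages nearly double a link that is already carrying a few gigabits per second.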