Sounds plausible to me. Adding virtual memory to work around limited physical memory and the lack of an MMU is exactly the kind of software abstraction that engineers building an operating system would want to create.
Your reasoning is valid, but it only explains virtual memory, not processes. Processes are primarily a means of isolation. If anything, they're liable to increase memory usage, because each process's lookup tables have to be stored somewhere.
I didn't want to contaminate readers with my understanding. But enough time has passed now.
OP says:
> The computers for which UNIX was intended had a very small address space; too small for most usable end-user applications. To solve this problem, the creators of UNIX used the concept of a process.
My understanding is that processes were designed as a way to isolate users from each other on a time-shared machine. Limited RAM was certainly a related constraint, but it wasn't the direct reason processes were designed.
> A large application was written so that it consisted of several smaller programs, each of which ran in its own address space.
Pipelines don't save you space. All the processes in a pipeline run concurrently, so their memory usage overlaps in time. If anything, splitting an application into processes increases memory usage, because each process may not use all of the memory allocated to it.
Pipelines were designed as a way to reuse a small vocabulary of shared tools. Efficiency was only a secondary concern; the primary goal was making it easy to build elegant programs for simple tasks.
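To make the concurrency point concrete, here's a rough C sketch of roughly what the shell does for a two-stage pipeline (the "producer" and "consumer" stages here are made up for illustration): it forks both stages up front and connects them with a pipe, so their address spaces exist at the same time rather than one replacing the other.

```c
/* Minimal sketch of a two-stage pipeline, analogous to `producer | consumer`.
 * Both children are started before either finishes, so their memory usage
 * overlaps in time. Stage bodies are placeholders, not real programs. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t producer = fork();
    if (producer == 0) {                 /* first stage: writes into the pipe */
        close(fd[0]);
        dup2(fd[1], STDOUT_FILENO);
        close(fd[1]);
        for (int i = 0; i < 3; i++)
            printf("line %d\n", i);
        exit(0);                         /* exit() flushes stdio into the pipe */
    }

    pid_t consumer = fork();
    if (consumer == 0) {                 /* second stage: reads from the pipe */
        close(fd[1]);
        dup2(fd[0], STDIN_FILENO);
        close(fd[0]);
        char buf[64];
        while (fgets(buf, sizeof buf, stdin) != NULL)
            fputs(buf, stdout);          /* sees EOF once all write ends close */
        exit(0);
    }

    /* Parent closes its copies of both ends and waits for both stages. */
    close(fd[0]);
    close(fd[1]);
    waitpid(producer, NULL, 0);
    waitpid(consumer, NULL, 0);
    return 0;
}
```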