R had a REPL from day one (or near it) because the S it was copying did. You could save your "workspace" or "session" and so on. That it was spartan compared to Jupyter, or that Jupyter itself might look spartan next to MathWorks' GUI for Matlab, doesn't change the "waiting/attention deficit disorder (ADD)" aspect.
When you are being exploratory, even waiting half a second to a few seconds for a build is enough time for many brains to drift from why they pressed ENTER. When you are being careful, that wait is an acceptable cost for longer-term correctness/stability/performance/readability by others. It's the transition from "write once, never think about it again" to "write for posterity, including maybe just for oneself" - from "one-liners" to "formatted code". There are many ways to express it, but it underwrites most of the important "contextual optimizations" for users of all these software ecosystems - not just "speed/memory" optimization, but what they have to type/enter/do. It's only technical debt if you keep using it, and often you don't know if or when that might happen. Otherwise it's more like "free money".
These mental modes are different enough that linked articles elsewhere here talk about typeA vs typeB data science. The very same person can be in either mode and context switch, but as with anything, some people are better at or prefer one mode over the other. The population at large is bimodal enough (pun intended) that "hiring" often has no role for someone who can both do the high-level/science-y stuff and their own low-level support code. I once mentioned this to Travis Oliphant at a lunch and his response was "Yeah..two different skill sets". Such a person just sits in the valley between the two modes (or has coverage of both, or can switch more easily, or at all). This is only one of many such valleys, but it's the relevant one for this thread. People in general are drawn away from the valleys toward the modes and their exemplars, and that accounts for a big portion of "oversimplification in the wild".
This separation is new-ish. At the dawn of computing in the 50s..70s, when FORTRAN ruled, doing scientific programming meant learning to context switch or simply living in the low-level work mode. Then computers got a million times faster and it became easier to have specialized roles, exploit more talent, and build up ecosystems around that specialization.
FWIW, there was no single cause for Python adoption. I watched it languish through all of the 90s, largely viewed as too risky/illegitimate. Then in the early noughties a bunch of things happened all at once: Google blessed it right as Google itself took off; numpy/f2py/Pyrex/Cython united rather than divided (unlike the py2/py3 split soon after); a critical mass of libs arrived - not only scipy, but Mercurial etc.; latter-day deep learning toolkits like tensorflow/pytorch rode the surrounding neural net hype; and, compared to Matlab etc., Python was generally low cost and simple to integrate (command, string, file, network, etc. handling as well as graphics output) - right up until dependency graphs "got hard" (which they now are), driving Docker as a near necessity. These all kind of fed off each other in spite of many deep problems/shortcuts in CPython's design that will cause trouble forever. So, today Python is a mess and getting worse, which is why libs will stay monoliths - the easiest human way to fight the chaos energy.
Nim is not perfect, either. For a practicing scientist, there is probably not yet enough "this is already done for me, with usage on StackOverflow as a one-liner", but the science ecosystem is growing [1], and you can call in and out of Python/R (sketched below). I mean, research statisticians will still tell you that you need R since even Python doesn't have enough... All software sucks. Some does suck less, though. I think Nim sucks less, but you should form your own opinions. [2]
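To make the "call in/out of Python" claim concrete, here is a minimal sketch of the Nim-to-Python direction. It assumes the third-party nimpy package (nimble install nimpy) and a Python with numpy on your path; the specific calls are just illustrative, not the only way to do it.

```nim
# Minimal sketch, assuming the third-party `nimpy` package
# (nimble install nimpy) and a Python install with numpy.
import nimpy

let np = pyImport("numpy")            # load a Python module from Nim
let xs = np.linspace(0.0, 1.0, 5)     # call numpy as if it were a Nim proc
echo xs                               # prints the Python-side repr of the array

let total = np.sum(xs).to(float)      # pull a Python result back into a Nim float
echo total                            # 2.5
```

The reverse direction (exposing Nim procs to Python) is also covered by nimpy's exportpy pragma, and the scinim umbrella in [1] collects the more science-specific pieces.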
[1] https://github.com/scinim/
[2] https://nim-lang.org/