"To close the gates" is only reasonable when you're working (a) not for self-actualization, (b) not for fun, (c) not for learning, (d) not for public good or your understanding thereof, (e) not for any other reason not directly connected with extracting rents from a broad audience. As I see this as the only path where corporations can outpace you with AI capabilities.
And remember how many good products have been abandoned or killed by corporations for not being profitable enough. So you're not very likely to be chased down even if you do intend to extract rents from a broad audience.
The article is spreading dangerous FUD aimed (perhaps inadvertently) at hindering the free and open sharing of ideas, and innovation.
The article is me coping with my existential crisis, trying to explore and accept my fears by writing it. And by exploring these ideas I think I found some vision for my stance in all this, or hope if you will. I hope these feelings will turn out to be real, and that I can write a positive blog post as well, but at this point I can't be certain whether they will survive scrutiny or are just warm fuzzy delusions and the next level of cope (I've had such periods a few times in the last year).
I'm just trying to say that I am definitely not trying to deliberately spread FUD to hinder the open web, if that was your impression :P
Texts, like it or not, do not respect our intentions and live their own lives... The developments are indeed disturbing, and I also puzzle over the ways to navigate the mess they create.
The unfolding you foresee is definitely not to be dismissed: we've already seen quite a few people in this discussion sharing the same view (which I perceive as almost totally ungrounded) and ready to act accordingly. But I think we should "grow the box" for our own good, despite not getting proper credit from all those who benefit, because while our ideas lose credit, they gain impact, even the ones that would have had no credit anyway.
What makes me deeply concerned is the economic consequences. Although I believe that this time it's not different and eventually everything will play out to the greater public good, I also believe that this time it's not different and the transition is going to be very harsh. And I'm completely lost here.
Another thing is that I treasure my (very modest) role in public enlightenment. But advances in AI make me feel obsolete in this role, as I really cannot say more than a properly prompted LLM. The only enlightenment still in demand seems to be enlightenment in prompting. I wonder whether others have the same feeling, and what it can lead to.
(please forgive my midnight musings, I just had to say that at last)
I strongly support your point, but the example is still sand-in-the-eyes for me. I hold that a single symbol should not alter the semantics of a program, and that there should never, ever be sequences of one-symbol syntactic elements.
In an Ada-like language, it would be something like
  generic
     type Path_Type implements As_Path_Ref;
     type Reader implements IO.File_Reader;
  function Read(Path: Path_Type) return Reader.Result_Vector_Type | Reader.Error_Type is
     function Inner_Read(P: Path_Type) return Read'Result_Type is
     begin
        File: mutable auto := try IO.Open_File(P);
        Bytes: mutable auto := Reader.Result_Vector_Type.Create();
        try Reader.Read_To_End(File, in out Bytes);
        return Bytes;
     end;
  begin
     return Inner_Read(Path.As_Ref());
  end Read;
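For contrast, a rough analogue in Python (not the hypothetical Ada-like language above, just an illustration): here error propagation is fully implicit via exceptions, so there is no per-call marker corresponding to `try` at all.

```python
from pathlib import Path

def read(path: Path) -> bytes:
    # Stands in for IO.Open_File + Reader.Read_To_End in the sketch above;
    # any OSError simply propagates to the caller, with no visible
    # one-symbol (or one-keyword) marker at the call site.
    with open(path, "rb") as f:
        return f.read()
```

Implicit propagation is the opposite pole from one-symbol markers; the `try`-keyword style above sits in between.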
Huh. My brain says logically I should find that better, but the -lack- of punctuation is making it really tricky to skim read the way I do most languages.
I'm not arguing that the example you found sand-in-the-eyes is necessarily good, but my mental skim-reading algorithm copes with it much better.
It is indeed harder to skim, and I find myself relying much more on syntax highlighting and the file outline when working in Ada than in C++. Not due to a lack of punctuation, though, which is all in place but serves only a guiding role (it is MLs that tend to abolish all unnecessary punctuation), but because of the overall dense style.
But while it is harder to _skim_, it is easier to _read_, as you don't have to concentrate on and decipher the syntax at the risk of missing some crucial element (oh, how I hate missing ampersands in C++!).
Working in vehicle routing optimization for MSB, I would like to share a couple of insights/observations.
1. People can be very creative in solving their problems with your features (and bugs!), even ones that seem completely unrelated at first thought. The features just have to be (a) observable, and (b) speak a language your users understand, which mathematically oriented or generic packages do not.
2. On the other hand, the quite plausible (to me) approach of "obtain an initial solution — adjust for ad-hoc constraints — reoptimize the rest" constantly fails as "too complex", with users reverting to Excel instead.
However, I still stubbornly believe that mathematical optimization cannot do everything, and we should aim for domain-specific decision-support systems that are primarily manual where optimization is only a part of solution building, and UX really matters in that process.
But AI (as we have it today and in the foreseeable future) is nowhere near the definition of a species. It is an enormous _server farm_ doing series of matrix multiplications followed by nonlinear transformations, with us humans supplying the input data (as well as hardware, electricity, and maintenance) and assigning meaning to the outputs. And we have no idea how to do "AI" any other way.
I agree that AI may be dangerous if used for destructive purposes, or if trusted too much with critical tasks (and the hype that "we're so dangerously near a superintelligence" makes the latter much more likely, in my opinion). But that humanity will be displaced by autonomous server farms? No way.
(As to the original comment, I think that a bunch of nearly(?) demented elders holding nuclear buttons is a much worse (and immediate!) threat than a server farm which we finally conclude to be intelligent.)
>But AI (as we have it today and in the foreseeable future) is nowhere near the definition of a species. It is an enormous _server farm_ doing series of matrix multiplications followed by nonlinear transformations, with us humans supplying the input data (as well as hardware, electricity, and maintenance) and assigning meaning to the outputs. And we have no idea how to do "AI" any other way.
One gorilla says to the other: "Those human brains are just synapses firing. They depend on nature to survive. Not a problem"
Another way of thinking about it... Suppose we create a server emulation of a highly intelligent, manipulative serial killer, and speed up the emulation so it thinks 1000x as fast as a human. How do you feel about this? Is the fact that it's "just a server farm" reassuring?
> "Those human brains are just synapses firing. They depend on nature to survive. Not a problem"
When it comes to AI, we have just a lone detached brain, not in control of anything, so it cannot even "fire" by itself: someone has to provide its inputs.
> Suppose we create a server emulation of a highly intelligent, manipulative serial killer <...> How do you feel about this?
Quite indifferent: the only use I can see for such a simulation is game development, and there it would be huge overkill.
>When it comes to AI, we have just a lone detached brain, not in control of anything, so it cannot even "fire" by itself: someone has to provide its inputs.
By assumption, this AI has already been created, so presumably someone is willing to do that -- and given its superhuman manipulation abilities, their willingness will probably not change.
Then the AI's abilities are effectively limited by those of its malicious operator, who has to understand and perform what the AI suggests and feed back the results.
> If the malicious operator has a superhuman advisor, that will increase their ability
Only when it comes to information processing. The inputs may (and will) be incomplete, incorrect, ambiguously formulated... The outputs may (and will) be misunderstood. And misunderstood instructions may (and will) be poorly performed.
> If the emulation gets connected to the internet, it can work way faster. Many jobs can be done remotely
What one man has connected, another can always disconnect... And not everything is on the internet.
> abandon wishful thinking and actually consider the possibility of a worst-case scenario
The problem with all those scenarios is equating superintelligence with omniscience and omnipotence, which is plain wrong. Physics matters.
Rows are tuples, not objects, and treated as such throughout the code. Only the needed data is selected, in the form most appropriate to the task at hand, constructed in a hand-written SQL query, maybe even tailored to the DB/task specifics. Inserts/updates are also specific to the task, appropriately grouped, and also performed using plain SQL. Data pipelines are directly visible in the code; all DB accesses are explicit.
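A minimal sketch of that style using Python's sqlite3 (the table and data are invented for illustration): the query itself shapes the result, and rows come back as plain tuples.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [("alice", 10.0), ("bob", 25.5), ("alice", 4.5)],
)

# Hand-written SQL selects exactly the data the task needs,
# already in the shape the task needs it:
rows = conn.execute(
    "SELECT customer, SUM(total) FROM orders"
    " GROUP BY customer ORDER BY customer"
).fetchall()
# rows are plain tuples: [("alice", 14.5), ("bob", 25.5)]
```

No mapping layer: the data pipeline is exactly what the query says it is.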
Maybe we need to use a different acronym than ORM, because to me the thing we can all agree we need is code that emits SQL. If you can't agree that projects need generated SQL because SQL is dog water for composition, then we can't really agree on anything.
Probably so: I can't agree with that particular inference.
1. Very often we need generated SQL because writing SQL for primitive CRUD operations is hellishly tedious and error-prone (as is writing the UI forms connected to these CRUD endpoints, so I prefer to generate those too).
2. Structured Query Language being very poorly structured is indeed a huge resource drain when developing and maintaining complex queries. PRQL and the like try to address this, but that's an entirely different level of abstraction.
3. Unfortunately, when efficiency matters we have to resort to writing hand-optimized SQL. And this usually happens exactly when we terribly need a well-composing query language.
I'd argue that "code that emits SQL" is never an inherent need but a possible development-time saver: we need code that emits SQL in those cases (and only those cases) where it saves a meaningful amount of development time compared to just writing the SQL.
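The CRUD case from point 1 is where generation pays off most clearly. A toy sketch (table and column names invented) of deriving the four basic statements from a column list, rather than hand-writing each:

```python
def crud_sql(table: str, cols: list[str], key: str = "id") -> dict[str, str]:
    """Generate the four basic statements for a table ('?'-style placeholders)."""
    col_list = ", ".join(cols)
    placeholders = ", ".join("?" for _ in cols)
    assignments = ", ".join(f"{c} = ?" for c in cols)
    return {
        "insert": f"INSERT INTO {table} ({col_list}) VALUES ({placeholders})",
        "select": f"SELECT {key}, {col_list} FROM {table} WHERE {key} = ?",
        "update": f"UPDATE {table} SET {assignments} WHERE {key} = ?",
        "delete": f"DELETE FROM {table} WHERE {key} = ?",
    }

print(crud_sql("users", ["name", "email"])["insert"])
# INSERT INTO users (name, email) VALUES (?, ?)
```

Thirty lines like this can replace hundreds of near-identical hand-written statements, while anything non-trivial stays as hand-written SQL.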
Doesn't it also mean that any non-trivial migration (e.g. one that requires data transformation, or that needs to be structured to minimize locking) has to be defined elsewhere, leaving you with two different sources of migrations, plus some (ad hoc) means of coordinating the two?
(I would say that it is conceptually perverse for a client of a system to have authority over it. Specifically, for a database client to define its schema.)
(not so much of a reply, but more of my thoughts on the discussion in the replies)
I would say the topic is two-sided.
The first is when we do greenfield development (maybe of some new part of already existing software): the domain is not really well known, and the direction of future development even less so. So there is not much to think about: document what is known, make a rough layout of the system, and go coding. Investing too much in design at the early stages may result in something (a) overcomplicated, (b) missing very important parts of the domain and thus irrelevant to the problem, (c) having nothing to do with how the software will actually evolve.
The second (and it is this side I think the post is about) is when we change some already working part. This time it pays hugely to ponder how best to accommodate the change (and the other information the change request brings to our understanding of the domain) into the software before jumping to code. This way I've managed to reduce what was initially thought to take days (if not weeks) of coding to writing just a couple of lines, or even just renaming an input field in our UI. No amount of exploratory coding of the initial solution would yield such tremendous savings in development time and software complexity.
At the microlevel (where we pass actual data objects between functions), the difference in the amount of work required between designing data layout "on paper" and "in code" is often negligible and not in favor of "paper", because some important interactions can sneak out of sight.
I do data flow diagrams a lot (to understand the domain, figure out dependencies, and draw rough component and procedure boundaries) but leave the details of data formats and APIs to exploratory coding. It still makes me change the diagrams, because I've missed something.
My perspective is that using NoSQL does not save time in data modeling and migrations. Moreover, one has to pay in increased time for these activities, because
(a) in most cases, data has to follow some model in order to be processable anyway; the question is whether we formally document and enforce it in relational storage, or leave it to external means (which we have to implement ourselves) in order to benefit from some specifically optimized non-relational storage,
(b) NoSQL DBs return data (almost) as stored, one cannot rearrange results as freely as with SQL queries, not even close, thus much more careful design is required (effectively, one has to design not only schema but also the appropriate denormalization of it),
(c) migrations are manual and painful, so one had better arrive at the right design at once rather than iterate on it.
That is, of course, if one doesn't want to deal with piles of shitty code and even more shitty data.