Hacker News

That's ridiculous. Programming is not all about multi-core performance. Also, programs do not all need to directly implement parallelism to be performant. The architecture can support parallel processing with sequential languages. A great example of that is Rails, when set up with mongrel processes, each of which can run on its own core if necessary.

Hyperbole, hyperbole, hyperbole <-- summary of this article.



This isn't really true; you can't just start spinning up multiple mongrels with no consequence to your application. If this were true, there would be no need for locking the database in your application code for a transaction. Plus you can only spin up so many mongrels before your database performance starts to suffer.


That's irrelevant to the discussion at hand - if anything, it goes to show that parallelism is already integrated into the rails architecture via transactions.


It's not built into rails transactions; you explicitly have to lock tables/rows (depending on the db/engine you're running) inside of transactions to avoid conflicts. We ran into this problem with Poll Everywhere when we had to update a counter cache column. We had something along the lines of

Poll.transaction { increment!(:results_counter) }

This worked fine with one mongrel in our dev and test environments but when we threw that out to our cluster of mongrels, we got all sorts of locking errors when a torrent of people would vote. To resolve the issue we had to add:

Poll.transaction { lock!; increment!(:results_counter) }

If this isn't a bottleneck or leaky abstraction then I don't know what is. Locks are ugly and I consider them a hack. In our case an RDBMS probably isn't the best data store solution.
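[Editor's note: the race behind those locking errors can be reproduced without Rails at all. Here's a minimal plain-Ruby sketch (variable names are illustrative): each worker does a read-modify-write on a shared counter, so concurrent increments can be silently lost unless the whole sequence is serialized, which is what lock! accomplishes at the row level.]

```ruby
# Plain-Ruby sketch of the lost-update race behind the counter cache bug.
# Each worker does read -> modify -> write; without a lock, two workers
# can read the same value and one increment is silently lost.
results_counter = 0
workers = 10.times.map do
  Thread.new do
    1_000.times do
      value = results_counter   # read
      value += 1                # modify a private copy
      results_counter = value   # write back, possibly clobbering a peer
    end
  end
end
workers.each(&:join)
# results_counter may be anything up to 10_000 at this point.

# The fix has the same shape as adding lock! inside the transaction:
# serialize the whole read-modify-write so each increment sees the
# latest committed value.
results_counter = 0
lock = Mutex.new
workers = 10.times.map do
  Thread.new do
    1_000.times { lock.synchronize { results_counter += 1 } }
  end
end
workers.each(&:join)
puts results_counter  # always 10_000 with the lock held
```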


you should stop posting until you take classes in operating systems and compilers. really


You're a condescending fuckwit.


who is trying to make you less dumb


Since when does a decent database lock tables for concurrent transactions?


Right, I just hope in a year or two when we get 64 core machines they will sell RAM in the petshop by the kilo so we can feed it cheaply to those 256 hungry mongrels ;-)


That's ridiculous. Programming is not all about multi-core performance.

it will be. you're getting beaten over the head by chip designers telling you that your future cpu is going to consist of a (possibly large) array of processing cores with a high-capacity bus connecting them. they are telling you this is the only way they can give you higher performance. you had better start believing them because these systems are starting to get delivered now.

A great example of that is Rails, when set up with mongrel processes, each of which can run on its own core if necessary

?????? so mongrel comes with its own OS kernel that has better support for multicore than linux and freebsd? wow!! coolzzz!


?????? so mongrel comes with its own OS kernel that has better support for multicore than linux and freebsd? wow!! coolzzz!

Hi. You may not have noticed it, but this site is not Reddit. Please try to keep commentary like this to a minimum, where 0 is the minimum.

Anyway, please also read the posts you are replying to. They are saying that many applications get concurrency "for free", since the database library handles the concurrency for them. Yes, this can be a bottleneck, but it is a fundamental problem with the notion of locking. If you want maximum performance, don't lock. If you want absolute data integrity, you have to lock. That's a problem.

Concurrency should definitely be a part of CS programs, but Intel's thread library isn't the way to do it. CL-STM or Haskell's STM would be much better.
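[Editor's note: MRI has no STM built in, but the idea can be approximated in plain Ruby for illustration: run the transaction lock-free against a versioned snapshot, and commit only if nobody else committed in the meantime, retrying otherwise. The OptimisticCell class below is an invented sketch, not a real library.]

```ruby
# Invented sketch of an STM-style optimistic cell: the computation runs
# with no lock held, and only the commit itself is serialized.
class OptimisticCell
  def initialize(value)
    @mutex = Mutex.new
    @value = value
    @version = 0
  end

  # Yield a snapshot to the block; commit the result only if no other
  # commit has happened since the snapshot was taken, otherwise retry.
  def update
    loop do
      snap_value, snap_version = @mutex.synchronize { [@value, @version] }
      new_value = yield(snap_value)   # computed with no lock held
      committed = @mutex.synchronize do
        if @version == snap_version
          @value = new_value
          @version += 1
          true
        else
          false
        end
      end
      return new_value if committed
    end
  end

  def value
    @mutex.synchronize { @value }
  end
end

cell = OptimisticCell.new(0)
workers = 8.times.map do
  Thread.new { 500.times { cell.update { |v| v + 1 } } }
end
workers.each(&:join)
puts cell.value  # 4000: no lost updates, yet no lock held while computing
```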


> you're getting beaten over the head by chip designers telling you that your future cpu is going to consist of a (possibly large) array of processing cores with a high-capacity bus connecting them. they are telling you this is the only way they can give you higher performance. you had better start believing them because these systems are starting to get delivered now.

If your chip designers are telling you that they're building a large array of processing cores connected with a high-capacity bus, you need to get some new chip designers.

If you've got a bus that can actually support a modest number of cores, your cores are too wimpy and should be built with whatever was used for the bus.

More likely, you actually have a saturated bus that is the system bottleneck, so your cores are spending most of their time waiting for access.

There is no silver bullet. Many problems are bound by bisection bandwidth. The more cores, the worse the problem. You end up devoting proportionally more space and power to communication as you increase the number of processors.


"you're getting beaten over the head by chip designers telling you that your future cpu is going to consist of a (possibly large) array of processing cores with a high-capacity bus connecting them."

Sorry, but I don't buy that. We're also moving to a thin client world where we don't actually need that much power on our thin clients.

Of course the chip makers are saying that - they want to sell more chips. They have to come up with some other number they can increase.


Can you envision a world where your "thin client" is powered by 25 cores clocked at a low speed like 100 MHz? Why not?

Do you really think performance doesn't need to be increased from the current status quo? The number of cores will only go up from here.


No I can't. That would be simply ridiculous. The thirst for power is not infinite. At some point, most people will have enough power to do everything they need (unless they're using, say, Windows, which will always require 10 times more computing power than the previous version).


And 640 kilobytes of RAM should be enough for anybody.

There will always be an increase in power demand. To think otherwise is short-sighted. If you could keep what we have now or have a sentient computer sitting on your desk, which would you choose?


Personally? I'd keep what I have. There is nothing better or more rewarding than squeezing more performance out of fixed limited hardware. Where is the fun if you can just buy twice as many servers? The day hardware is free, is a very very sad day for programmers.

Of course there will be massive demands in the world of servers, research, gaming etc, but that's not everything.


"The day hardware is free, is a very very sad day for programmers."

It may be a sad day for those of you who program primarily for the challenge, but for those of us who want to get stuff done, it'll be a joyous day. :)


"Sorry, but I don't buy that. We're also moving to a thin client world where we don't actually need that much power on our thin clients."

Yeah, we've heard it before, several times, from the moment that networking was invented and onwards. Why will the push for this stick _this_ time around?


Sorry, but I don't buy that

then you clearly aren't buying new high-end servers for data crunching either, because these are already multicore


As surprising as that may sound, 99% of developers out there are not buying new high-end servers for data crunching, no.


what is in the cpus of those machines will be in the cpus of all machines. jesus, how much more legit can you get than intel telling you this is coming?


Intel may try to sell many-core but who wants to buy it?


you. you aren't going to be given the choice, these are coming to a laptop near you...like on your lap


So you're saying that no one will ever sell lower cost single core laptops any more? hrmmm I'm skeptical.


Um... you consider a chip maker, telling you that you need to buy new chips "legit"?


No, but I consider a chip maker telling me what kind of chips they're going to be making legit. Especially when all the other chip makers are saying the same thing (All the other chip makers: AMD, Freescale, Intel, Sun, Marvell, and others)


Itanium


"?????? so mongrel comes with its own OS kernel that has better support for multicore than linux and freebsd? wow!! coolzzz!"

In fact, quite the opposite. Handling concurrency by having multiple share-nothing processes relies on the OS to handle the scheduling and core assignment.

edit: "real" (system) processes.
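[Editor's note: the share-nothing shape described here can be sketched in a few lines of plain Ruby (POSIX platforms only, since it uses fork): each worker is a separate OS process with its own address space, like one mongrel; results come back over pipes; and core assignment is left entirely to the kernel. The workload is made up for illustration.]

```ruby
# Share-nothing sketch: four independent OS processes, no shared memory,
# scheduling and core assignment handled entirely by the kernel.
readers = 4.times.map do |i|
  reader, writer = IO.pipe
  fork do
    reader.close
    # Each worker computes in its own address space.
    writer.puts((1..1_000).sum * (i + 1))
    writer.close
    exit!
  end
  writer.close   # parent keeps only the read end
  reader
end

results = readers.map { |r| Integer(r.read.strip) }
Process.waitall
puts results.inspect  # => [500500, 1001000, 1501500, 2002000]
```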


uh, yeah, i know that. i thought the "coolzzz" would convey my sarcasm


Your sarcasm implies that mongrel's scheduling and IPC were inferior to the OS's, which is not the case.


I don't write OS kernels, I write rails applications. I don't give a rat's ass about the OS kernel. I don't even give a rat's ass about how Mongrel is programmed. The Rails applications themselves don't need to be altered to run in a multi-core environment. Mongrel naturally scales to as many servers or CPUs as you want to run it on, since there is no interaction between different mongrel instances; all of that happens at the database.

Let me make that point even clearer: I don't give a shit how the database has been programmed. Someone there has obviously had to think about parallelism, but I don't need to, because I'm not writing a fricken database.

Got it?


I don't write OS kernels, I write rails applications. I don't give a rat's ass about the OS kernel

then stop spouting off uninformed comments about how processes are scheduled

The Rails applications themselves don't need to be altered to run in a multi-core environment.

nor does any other program compiled for that architecture. it's the OS that schedules processes, not your userland program. the point is, some programs can be written in a way that makes it easier for the OS to exploit multicore. since ruby is not a functional language, my guess is that it would tend not to help the kernel exploit these resources. but obviously in the worst case, a process can run inside one core and never get the advantages of the rest of the chip architecture. this is about exploiting multicore

Got it?

yes, i get that you know very little about how computers function


nor does any other program compiled for that architecture

Then none of the people writing those programs need to know or care about parallelism.

Therefore, the core message of the article is brain-dead.


NO. why don't you READ before you reply

a program compiled for a multicore CPU will RUN. the question is how OPTIMALLY does it run. a program with no potential for parallelism will not get any parallelism. it will run, but run slow compared to programs designed for parallelism.

programs written to exploit parallelism will be programs that bring new approaches to data and state. functional languages provide this today, which is why lots of people think they will be the way forward for multicore.
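[Editor's note: that distinction can be made concrete with a small invented example. A pure, side-effect-free map over independent items has obvious parallelism to exploit, here farmed out to forked processes over pipes, whereas code that threads mutable state through every step gives the scheduler nothing to work with. The parallel_map helper below is hypothetical, not a real library function, and relies on POSIX fork.]

```ruby
# Hypothetical parallel_map: a pure function applied to independent
# chunks can run in separate processes with no coordination at all.
def parallel_map(items, workers: 4)
  chunks = items.each_slice((items.size / workers.to_f).ceil).to_a
  readers = chunks.map do |chunk|
    reader, writer = IO.pipe
    fork do
      reader.close
      # The child maps its chunk and ships the result back serialized.
      writer.write(Marshal.dump(chunk.map { |x| yield(x) }))
      writer.close
      exit!
    end
    writer.close
    reader
  end
  results = readers.flat_map { |r| Marshal.load(r.read) }
  Process.waitall
  results
end

squares = parallel_map((1..8).to_a) { |x| x * x }
puts squares.inspect  # => [1, 4, 9, 16, 25, 36, 49, 64]
```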

honestly i think you are just bordering on being a troll. why don't you do some reading on this topic before writing more uninformed replies



