Hacker News

What's the process to get that type of change request merged? I suspect it's complex to find reviewers for such a broad change set.


I imagine going through all the individual owners, or Linus on a treadmill, would be the way to go.


Disclaimer: I know nothing about the Linux kernel!

It sounds to me like this work is re-organizing and moving around existing code in the header files to reduce the amount of code the C compiler has to wade through during a Linux build.

So one way to verify the changes is to compare the binary files (or the resulting executable) built the old way vs the new way. In theory, they should be identical. (Or I could be misunderstanding the whole thing and you should stop reading now!)

I did something like this for a Prime minicomputer emulator project: https://github.com/prirun/p50em

It was initially coded for a PowerPC chip because that is big-endian like the minicomputer being emulated, and to be honest, I wasn't sure I could even do the emulation, let alone do it and be mentally swapping bytes all the time. After about 10 years of it working well, I spent a week or two reworking it to add byte-swapping for Intel CPUs.

The method I used was to first add macros to all memory and register references. This was done on the PowerPC build, and after each set of incremental changes, I'd verify that the executable created with macros was identical to the original executable without macros. If it wasn't, I had goofed somewhere, so I backed up and added the changes in smaller sections until I found the problem. I didn't have to do any testing of the resulting executable to verify thousands of code changes. Practically every line of code in the 10K LOC project was touched because of the new macros, but each macro should have been a no-op on the PowerPC build with no byte swapping.

Next I built it on an Intel CPU where the byte-swap macros were enabled and it basically worked, maybe with a tweak here or there.

As a further test, I ran the PowerPC emulator through a bunch of existing tests with tracing enabled and the clock rate fixed. This records the machine state after each instruction, and with a fixed clock, yields a reproducible instruction trace. Then I ran the same tests with the Intel version and compared the instruction traces. If there were any differences, it was because I forgot to add a swap macro somewhere.

After a few months of running it 24x7 myself w/o problems, a customer did find a bug in the serial device I/O subsystem where I had forgotten to add a macro. I hadn't done any testing of this subsystem (terminal I/O on real serial RS232 devices like printers.)

If something similar is being done with these Linux changes, verifying them may not be as hard as it seems initially.


There were code changes, e.g. search for "per_task" in the cover letter (https://lore.kernel.org/lkml/YdIfz+LMewetSaEB@gmail.com/T/#u). Also, a lot of things were uninlined.


What will likely happen: each maintainer will check their subsystem/driver, standard tests will be run, and someone like Linus will go over the overall methodology.

Ingo has likely done a bunch of tests himself, of course.

As someone pointed out, it’s only like 0.13% of the kernel.


I think step 1 is “be Ingo or someone comparably legendary” :)



