Hacker News | rapidlua's comments

For Go specifically, I find ko-build handy. It builds on the host (leveraging Go cross-compilation and taking advantage of caches) and outputs a Docker image.


> How to manage pointer+offset address integrity/legality inside the kernel (for instance) has a proof by examples a-plenty in the other code

Let me provide some context here. These annotations aren't there to help the compiler/linter; they exist to aid external tooling. The kernel can load BPF programs (JIT-compiled bytecode). BPF programs can invoke kernel functions, and some kernel entities can be implemented or augmented with BPF.

It is paramount to ensure that types are compatible at the boundaries and that constraints such as RCU locking are respected.

The kernel build records type information in a BTF blob. Some aspects aren't captured in the C type system, such as RCU protection; that is what the annotations are for. The verifier relies on the BTF.
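To make the idea concrete, here's a minimal, self-contained Go sketch. The KF_* names mirror the kernel's kfunc flags, but the flag values and the flag sets assigned below are illustrative, not copied from kernel sources:

```go
package main

import "fmt"

// The kernel records, alongside each kfunc's type signature, extra flags
// that plain C types cannot express. Flag names mirror the kernel's KF_*
// kfunc flags; values and assignments here are illustrative.
const (
	KF_ACQUIRE = 1 << iota // returns a refcounted object
	KF_RELEASE             // releases its refcounted argument
	KF_RCU                 // argument must be in an RCU read-side section
)

var kfuncFlags = map[string]int{
	"bpf_task_acquire": KF_ACQUIRE | KF_RCU,
	"bpf_task_release": KF_RELEASE,
}

// checkCall is a toy "verifier" check: reject a call when the caller
// does not hold the RCU read lock but the kfunc demands it.
func checkCall(name string, rcuHeld bool) error {
	if kfuncFlags[name]&KF_RCU != 0 && !rcuHeld {
		return fmt.Errorf("%s requires rcu_read_lock", name)
	}
	return nil
}

func main() {
	fmt.Println(checkCall("bpf_task_acquire", false))
	fmt.Println(checkCall("bpf_task_acquire", true))
}
```

The real verifier performs this kind of check at program load time, using the flag sets the kernel build emitted into BTF.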


Go build is fundamentally better? How so? Go build is so light on features that adding generated files to source control is the norm in Go land.


Generated files are noise.

Newer languages' build systems have built-in version resolution to resolve dependencies together, along with smarter ways to reference dependencies without #include.

And that's better


> Packages may not circularly reference each other.

Actually possible with go:linkname.


Fair, but I think we can classify that under "unsafe" and ignore it under normal circumstances. I can also say things like "Go doesn't have pointer arithmetic" with a straight face, even though unsafe permits pointer arithmetic just fine. If you're programming with that routinely, you're out of the bounds of my advice for architecture anyhow. Whether for good or bad reasons would be left as an exercise for the architect in question.
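For the record, a minimal sketch of what that unsafe pointer arithmetic looks like (unsafe.Add is Go 1.17+; the array and offset are arbitrary):

```go
package main

import (
	"fmt"
	"unsafe"
)

// elem2 returns xs[2] by raw pointer arithmetic instead of indexing:
// advance the base pointer by two elements' worth of bytes.
func elem2(xs *[4]int32) int32 {
	p := unsafe.Pointer(&xs[0])
	p = unsafe.Add(p, 2*unsafe.Sizeof(xs[0]))
	return *(*int32)(p)
}

func main() {
	xs := [4]int32{10, 20, 30, 40}
	fmt.Println(elem2(&xs)) // prints 30
}
```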


Looking at the diagram on the SQLite VFS page, I didn't think I was going overboard designing my driver around 3 packages: https://sqlite.org/vfs.html

One layer wraps the SQLite C API, below it lives a pure Go VFS, and above it all sits the database/sql driver.

Even this coarse split (with a collection of internal packages, the biggest of them "utils") is enough to need various band-aids to accommodate the impossibility of circular dependencies.

I honestly don't think it helps much.

At the module level, there are obvious benefits from the impossibility. Just like all the pain around v2 modules can be justified, even if I find it annoying.

When packages are also the only layer at which you can enforce visibility, it becomes worse.


See the last case of how to handle circular dependencies.

Ports are special cases. Always are.


Is it just me, or does the piece not explain how to make edits without messing up the hunk?


If you were actually editing a patch file, that would be a concern; but in Git, you only edit one hunk at a time, so all's fine as long as you don't mess up the context lines: Git will ignore the source and destination line numbers and counts that your editing has likely rendered incorrect. (An easy way to mess up the context lines is to have your editor strip trailing whitespace on save, as the unified diff syntax for an empty context line is a single space. If anybody asks, in no way did this cause me to be puzzled for literal weeks by hunk edits failing seemingly at random.)


I do occasionally attempt to edit patch files produced by git-format-patch. Frequently I end up with a corrupt patch. Still curious how to fix those numbers.


With git format-patch I’d say take a worktree, git apply the patch to its intended base, commit, rebase, and git format-patch again :)

Otherwise, well, the numbers are -(old start line number),(old line count) +(new start line number),(new line count) for the entire hunk introduced by @@ (whether it contains one group of changed lines or more). I'm sure you see how to fix them up, but accumulating a line number shift as you go through the file sounds very fiddly and error-prone. It also sounds like something the computer should be able to do for you (given the old patch and the new messed-up patch whose hunks correspond one-to-one to the old).
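To make the arithmetic concrete, here's a hypothetical hunk. The old count is the number of context plus deleted lines (4 below), the new count is context plus added lines (5 below):

```diff
@@ -10,4 +10,5 @@
 alpha
-beta
+beta2
+beta3
 gamma
 delta
```

If you now edit the hunk to add one more `+` line, only the new count changes, giving `@@ -10,4 +10,6 @@`. Any later hunk in the same file keeps its old start line but needs its new start line shifted by an additional +1, which is exactly the accumulating shift that makes manual fixes so fiddly.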

ETA: I seem to have been nerd-sniped. Mind the bugs, etc etc:

  #!/usr/bin/awk -f
  # usage: awk -f fix-patch.awk ORIGINAL EDITED > FIXED
  # assumes hunks in ORIGINAL are grouped by file and sorted by line number
  # assumes every unchanged or deleted line in ORIGINAL remains in EDITED
  # can get confused by lines starting with @@ before start of diff
  
  function flush() {
      coline += odelta; odelta = (coline + cosize) - (oline[n] + osize[n])
      cnline += ndelta; ndelta = (cnline + cnsize) - (nline[n] + nsize[n])
      if (hunk) printf "@@ -%d,%d +%d,%d %s", coline, cosize, cnline, cnsize, hunk
      hunk = ""; coline = cosize = cnline = cnsize = 0
  }
  
  BEGIN { FS = "[-+, ]+" }
  FNR == 1 { n = 0 }
  /^@@ / { n++; coline = $2; cnline = $4 }
  coline { cosize += /^[- ]/; cnsize += /^[+ ]/ }
  /^@@ / && FNR == NR { oline[n] = $2; osize[n] = $3; nline[n] = $4; nsize[n] = $5 }
  cosize == osize[n] && !/^\+/ { flush() } END { flush() }
  FNR == NR { next }
  !hunk && /^\+\+\+/ { odelta = ndelta = 0 }
  /^@@ / { sub(/^@@ [-0-9,]+ [+0-9,]+ /, "") }
  coline { hunk = hunk $0 "\n"; next }
  { print }


Yeah, I was hoping for some simple rubric/trick/rules to help when needing to edit a diff file manually. Since 'git add -p' updates the hunk metadata for you, it handles what I'd consider the hard part.


Re nuclear reactor: higher-tier virtualization products work flawlessly. It's a shame what garbage VirtualBox turned into over the years.


It was fun to read, but it would’ve probably been easier to rely on seccomp filters instead.


Hardly. For starters, Wasm doesn't guarantee that a piece of code terminates in bounded time. eBPF makes further safety guarantees, such as that any lock acquired must be released.


The eBPF termination checker is buggy anyway; you cannot rely on it.


You can apply additional static checks to Wasm, e.g. control flow analysis, and reject programs without obvious loop bounds or unbalanced locking operations. Or you could apply dynamic techniques like tracking acquired locks and automatically releasing them, or charging fuel (gas). The latter is quite common for blockchain runtimes based on Wasm.


Great writeup, thoroughly enjoyed! The provided "lab" is especially appreciated.

I'm curious if you had reasons to not use veth in noarp mode.


Not a bad suggestion, may even make the "pwru" analysis a bit simpler. Definitely worth playing with. You can even give it a try in the lab and lmk how it fares :).


Thank you for bpftrace! It was a vital aid for kernel spelunking. Very excited to see vmtest. I built a similar tool in the past [1] but never achieved this level of polish.

[1] https://github.com/mejedi/vmwrap

