I used a different Ghidra MCP server (LaurieWired's) to, umm, liberate some software recently. I can’t express how fun and straightforward it was to analyze the binary and generate a keygen.
I learnt a ton in the process. I highly recommend others do the same; it’s a really fun way to spend an evening.
I have some old software I wrote that calls home to a server that no longer exists to do a cert check that would never pass before it will install. I tried writing my own Ghidra tool, skill, agent, and MCP, and I still can’t seem to figure it out. I’m positive it’s a “human skill” issue, but man… ironic that this pops up the week after I gave up trying.
The output is “identical” if you’re using the same model. Consider them separate front ends really.
OpenCode - you can easily see the changed files on your branch, it defaults to being permissive with tools, you can easily fork a session to fix unrelated or small issues without consuming context, you can add more “modes” like build and plan as a first-class construct, and you aren’t limited to just Anthropic models.
Claude Code - you can do all of the above in one way or another, not least if you use an IDE with it.
I started using OpenCode to get away from the horrible Claude flicker, but it seems they have fixed that recently. I don’t think I’ll be going back, though, and I use Opus/Sonnet exclusively.
Literally the most important thing you could have included in that README is a few samples of how it looks, and you might have found some people to answer the questions you have.
Even an examples directory in the repo with some samples would have helped.
Honestly, the fact that mobile browsers don't provide a way to see the contents of the title attribute is a severe UX failing on the part of the browser developers, not the website developers, who are literally using the attribute as intended.
Artificial limits, because they have 40 paid licenses that they cannot use, because of a non-disclosed assignment limit that is NOT mentioned on the pricing page or in the ToS.
If a company doesn't respond to this it tells you they likely only respond to lawsuits. As a paying customer whose business operations are impacted, you should have standing to sue. Your company could potentially extract from Zoom the entirety of the money that their dumb decision made your company lose. Consult a lawyer for actual advice and next steps.
Of course, it's also possible you signed a contract that basically says "we can just decide not to work and you can't do anything about it" in which case, sucks, and fire whoever negotiates your B2B contracts. But also, those clauses can be void if the violation is serious enough.
Probably not worth the effort for a couple days of downtime; we'll just move somewhere else.
But I agree. I recognize the silence in that forum thread that was locked without a resolution: some boss said "let them complain or pay, we don't care about them otherwise".
Ignoring the size of the HTML in addition to the CSS is fun, but not really fair when talking about code golf. Beyond a few numbers, you need to include some JavaScript to generate a million list elements. And those bytes count…
That was the idea, but from what I remember, people chose their own meanings and conventions pretty much from the start. So much about the internet was envisioned with a completely different use case than what it was actually used for; it’s amazing things even kinda worked out in the end.
DNS allows search, so we really should have started rejecting everything that isn't qualified with an end dot, as punishment to ICANN. Instead, random common names might be treated differently on every network, you have to make sure these people can't issue certs that will be trusted in your own network, etc.
Now, prioritizing unambiguous naming would be somewhat acceptable if ICANN were Taco Bell and just a steward of naming on the side.
I'm not sure what you mean by "DNS allows search" -- by the usual definition of "search", the DNS doesn't: it is a lookup mechanism. I'm also not sure who "we" are in your idea or what you mean by "qualified with an end dot": all domains that get looked up implicitly have a "." (a zero length label that signifies the end of the query name) if it isn't explicit.
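The implicit-root-label point can be made concrete: in DNS wire format every query name ends with a zero-length root label, so writing the trailing dot yourself changes nothing about what gets looked up. A minimal Python sketch (illustrative only, not a full wire-format encoder):

```python
def to_wire_labels(name: str) -> list[bytes]:
    """Split a domain name into its DNS labels, ending with the
    zero-length root label -- which is present whether or not the
    name was written with an explicit trailing dot."""
    if not name.endswith("."):
        name += "."  # the root label is always there implicitly
    return [label.encode("ascii") for label in name.split(".")]

# Both spellings produce the same label sequence:
print(to_wire_labels("example.com"))   # [b'example', b'com', b'']
print(to_wire_labels("example.com."))  # [b'example', b'com', b'']
```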
If you are not a consumer on an ISP emulating dialup, it is quite likely that a popular name in a naming convention, e.g. 'mercury', resolves to something for you and to something else for someone at a different firm (mercury.intranet.[firm].not-so-stupid-tld). A cert is possibly not a fully qualified one, so when ICANN gives away mercury, you need to append .asshat to everything ICANN names.
(Two firms have an unambiguous situation because they don't trust each other's private roots, but they both trust a cert issued under the public trust as an fqdn, which is why expanding TLDs is a form of theft/breakage against every intranet.)
Ah, resolver (not DNS) search paths. They were a really bad idea that can and do lead to leaked queries that can result in all sorts of unpleasantness and risks.
As for certs, AFAIK, you can't get a certificate for a non-fqdn from a public CA since 2015.
If ICANN sells www as a TLD, then your use of www as a machine name you may refer to unqualified is a risk, because virtually every piece of software in the world respects public issuance unless you delete it all, if you even can.
The DNS naming confusion was largely dealt with by having a small number of TLDs and rarely referring to complex things like partially specified subdomains, but every once in a while a fool named their machine com, org, or net. (Though these as subdomains were far more toxic.)
I've done plenty of interesting things but a distributed correction attempt for ICANN's incompetence is never going to be adequate. You can read their own work on gTLDs in the past to know they understand this.
There's no leak being discussed. Everyone in the world sets up resolving, and it is what it is with the current TLDs. When ICANN needs more coke money, they potentially break every node in the world, and a distributed group of thousands has to check whether something bad happened.
There is the argument that ICANN should no longer be consulted by nodes of consequence, ever, but that amounts to arguing that they have failed 100% in their responsibilities.
If you don't care at all about zone delegation and global resolution then you obviously don't have an opinion on how to evaluate ICANNs stewardship of global domain delegation.
"We have run out of IPv4 addresses, but there is NAT" is not a satisfying answer to start with. But "we have let ICANN pollute naming, so let's implement shadow naming everywhere" is an even less satisfactory answer.
I think that the scenario here is where the queries are explicitly not leaking, and you've raised a red herring.
If I understand correctly, the scenario is an internal machine named "george", which is being properly search-pathed and looked up as "george.example.org." with nothing leaking anywhere, becoming vulnerable to Walmart being able to issue certificates in the name "george", because the DNS client library's search pathing is not read out by the layers that simply know the machine as "george".
I'm not totally convinced by the premise here that certificate checkers never read out the final fully-qualified domain name from getaddrinfo().
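The search-path expansion described in that scenario can be sketched roughly. This is a simplified, hypothetical approximation of resolv.conf-style search-list and ndots behavior, not a real resolver:

```python
def candidate_queries(name: str, search_list: list[str], ndots: int = 1) -> list[str]:
    """Roughly mimic how a libc-style stub resolver expands a name
    using a resolv.conf search list and the ndots rule. Simplified
    sketch; real resolvers have more edge cases."""
    if name.endswith("."):
        return [name]  # already fully qualified: search is not applied
    candidates = []
    if name.count(".") >= ndots:
        candidates.append(name + ".")  # enough dots: try as-is first
    candidates += [f"{name}.{domain}." for domain in search_list]
    if name.count(".") < ndots:
        candidates.append(name + ".")  # too few dots: as-is tried last
    return candidates

# An unqualified "george" is expanded via the search list first:
print(candidate_queries("george", ["example.org"]))
# ['george.example.org.', 'george.']
```

The point being that the layers above the resolver only ever see "george"; the fully qualified name exists only inside this expansion step.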
This isn't a red herring at all. This is DNS resolution and client PKIX implementation. You could fix your whole network to not import anything from outside, ban all BYOD, etc., or you could fire the ICANN clowns who think they need to make changes to the reserved list because... why? Money, corruption, self-importance?
HN is full of people from SaaS startups who in essence want to buy the perfect 900 number. But DNS and delegation goes far deeper than selling one name for $20 and going to other $20 names to store your code and email at other SaaS providers.
With AI churning out a pretend blog or news site or web shop in a few minutes, that would be hard to enforce. I’m with you on the necessary death to TLDs, though.
> tsbro solves this by completely bypassing the browser's import system using synchronous XHR, transpile with swc wasm and a sophisticated ESM-to-CJS transpiler so that synchronous require is used everywhere:
> I learnt a ton in the progress. I highly recommend others do the same, it’s a really fun way of spending an evening.
I will certainly be giving this MCP server a go.