
The thing I found frustrating was that I wanted to merge changes using just files, without their sync server (i.e. just import another atuin sqlite DB), so I raised a PR to support that.

They closed it (which is fine) but there is no offline migrate alternative.

It's a shame; fair enough, it's their project, but I don't think my wishes and the project's are very aligned.

I keep half meaning to move back to zsh-histdb (I think that's what it was called) but haven't found an impetus to.

I'll probably check if there's a file based sync option next time I switch machines and decide then.


> They closed it (which is fine) but there is no offline migrate alternative.

If it helps, you can copy the history table from one DB to another in three simple lines of SQLite. I’ve done it myself a few times with zero fuss.
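If anyone wants it in script form, here's a sketch of the attach-and-copy approach using Python's sqlite3. The two-column schema in the demo is made up for illustration; I'm assuming the table is named `history` as in atuin's DB, and `SELECT *` copies whatever columns are actually there:

```python
import os
import sqlite3
import tempfile

def merge_history(dst_path, src_path):
    """Copy rows from src's `history` table into dst, skipping rows
    whose primary key already exists (INSERT OR IGNORE)."""
    con = sqlite3.connect(dst_path)
    try:
        con.execute("ATTACH DATABASE ? AS src", (src_path,))
        con.execute("INSERT OR IGNORE INTO history SELECT * FROM src.history")
        con.commit()
    finally:
        con.close()

# Demo with two throwaway DBs and a simplified schema.
tmp = tempfile.mkdtemp()
a, b = os.path.join(tmp, "a.db"), os.path.join(tmp, "b.db")
for path, rows in [(a, [("1", "ls"), ("2", "cd /tmp")]),
                   (b, [("2", "cd /tmp"), ("3", "git status")])]:
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE history (id TEXT PRIMARY KEY, command TEXT)")
    con.executemany("INSERT INTO history VALUES (?, ?)", rows)
    con.commit()
    con.close()

merge_history(a, b)
con = sqlite3.connect(a)
merged = con.execute("SELECT COUNT(*) FROM history").fetchone()[0]
con.close()
print(merged)  # 3: the duplicate id "2" is ignored
```

Back up both files first; if the two machines' schema versions differ, the blind `SELECT *` will fail rather than silently mangle anything.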


For anyone else curious after reading "-bashi" 40 times:

(Not going to quote directly because the damn site doesn't allow copy-pasting, so they don't get a link; paraphrased):

Kirai-bashi would literally translate as "dislike-chopsticks" and means bad chopstick table manners. Hashi is chopsticks, and bashi is its voiced form.

So the bashi suffix/word on the end of all of these just means chopsticks it seems.


To add to this, voicing is also a way for Japanese words to become more “coherent”, the same way you write “dislike-chopsticks” as one combined noun, and not “dislike chopsticks”.

Someone downvoted this, but the poster is correct, so there was absolutely no reason for downvoting.

Rendaku, i.e. the voicing of the initial consonant, happens in native Japanese words (i.e. not in Japanese words of Chinese origin), in most cases when they are part of a compound word and are not the initial element. This indeed serves to distinguish a sequence of unrelated words from a compound word.

There are exceptions where rendaku does not happen, but typically whenever a word like hashi becomes part of a compound word it will be voiced to -bashi.

"H" is a special case among the consonants, because in old Japanese it was pronounced as "p", which is why it is voiced as "b". Later, in initial positions the pronunciation was changed to "f" and even later the pronunciation was changed to "h". The "f" pronunciation has been retained only before "u", like in Fuji. In non initial positions, the original "p" has become later "v" and even later "w".

These pronunciation changes happened after the creation of the hiragana and katakana syllabaries, so they were not reflected in writing. The orthographic reform that was forced after WWII has brought the written form of the words closer to the pronunciation, e.g. by writing consistently "w" where it is pronounced so. Before WWII, many words written now with "-wa-" were still written with "-ha-", a spelling that has been preserved now only in the particle "wa" (like the spelling corresponding to the old pronunciation "wo" has been preserved for the particle "o").

While the Japanese orthographic reform had some positive effects, simplifying Japanese writing a little, it also made it difficult for someone who knows only modern written Japanese to read the Japanese books published before WWII, where many different kanji are used and the hiragana transcriptions also differ.

I assume that this was actually an effect intended by the American occupation forces, as a similar policy was applied by the Russians in all the territories of the Soviet Union (except the Baltic countries), where they forced the native populations to change their writing systems to the Cyrillic alphabet, in order to make it difficult for the younger generations to read anything dating from before the Russian occupation.


> The "f" pronunciation has been retained only before "u", like in Fuji.

Well, there is a convention that syllables starting with h- are spelled with f- (in foreign transcription) if the following vowel is -u. There's not much difference in the pronunciation itself; maybe there was more of one when the spelling convention was set.


At least in the recent past and probably also today in some Japanese dialects, the "f" pronunciation must have been retained before "u".

For example, in some Okinawan dialects the "f" pronunciation has been retained before all vowels.

Because of this, after Okinawa was occupied by Japan in the last quarter of the 19th century, the Japanese used "fu" before vowels to transcribe the Okinawan pronunciation. For instance, the Okinawan syllable "ha" (pronounced "fa") was transcribed by the Japanese as "fua", because writing it as "ha" would have resulted in too different a pronunciation.

So at least by that time "fu" must have been still perceived as clearly different from "ha", "hi", "he" and "ho".


> At least in the recent past and probably also today in some Japanese dialects, the "f" pronunciation must have been retained before "u".

I wasn't disputing that as to the recent past.

I searched up some Japanese-language videos on youtube as a followup, and I can report:

A noticeable "f" is present before "u" in many cases. (I found it in the words "tofu" and "daifuku", plus the obvious English loanwords "soft", "firm", and "waffuru". My best guess as to the vowel following "f" is "u" for "soft" and "a" for "firm".)

But, not consistently. You don't have to pronounce the syllable that way. (Observed also in "tofu" and "daifuku".)

The nature of my low-effort search precludes any statements about dialectal variation. I wouldn't want to claim that the syllable onsets are "clearly different" to modern speakers today. But (1) the option to have an "f" is still present in -u syllables, and (2) the existence of common loanwords where the foreign sound is recognized is, if anything, going to serve to strengthen awareness of the hypothetical difference.


I agree with you.

I was explaining the historical pronunciation, because without knowing it, for the English speakers there are many puzzling things related to the syllables starting with "h", e.g. why "hashi" is voiced to "-bashi", why hiragana "huji" is transcribed to Latin "Fuji", why the particle "wa" is written "ha" in hiragana, why the capital city of Okinawa, which is now written "Naha" (because now the traditional Okinawan pronunciation does not matter any more) can be found in older texts written as "Nafua", why "Yawara" (the original native name of what is now called "jiu-jitsu", through translation into Sino-Japanese) was written in hiragana as "yahara" in the old books, and so on.

As you have mentioned, modern Japanese frequently uses "fu" before vowels or in final position when transcribing words borrowed from English or other languages, to mark the consonant "f", which otherwise does not exist in Japanese, and in these borrowed words it is more likely to be pronounced as English "f".


From the title I thought this might be a reaction to the "Microslop" epithet and a commitment to increase code quality and reduce bugs.

Guess not.

It's a shame, I'd appreciate more than a single 9 of uptime from GitHub (luckily I don't need to interact with anything else Microsoft related)


One thing to note.

They were quite conservative in their approach, so the only reviews rejected were from people who had agreed not to use an LLM and almost definitely did use one (since hidden watermarked instructions were fed to the LLMs).

This means the true number of people that used LLMs in their reviews (even in group A, which had agreed not to) is likely higher.

Also worth noting: 10% of these authors used them in more than half of their reviews.


Yes, and for those in group B I'd suspect many were doing exactly what these cheaters in group A were doing: submitting the unaltered output of an LLM as their review.

The rejection is based on the dishonesty of explicitly committing to standard A and then knowingly violating it, not on LLM use as such. I think that's pretty fair, considering that everyone could have just chosen B if they wanted to.

Sure, I'm just pointing out that the 2% headline figure is very conservative, if not misleading, as a far greater, unknown number in group B will have done exactly the same (which I doubt ICML or those submitting papers actually want). This is probably a first step in clamping down on anyone doing this.

> I'm pretty sure most kids older than 12 do have access to kitchen knives. And actively use them too.

True, and it's the parents' responsibility to ensure that children won't injure themselves with the knives, or take them to school, or whatever.


Yes, but my point was that they are not handled the same way as guns (and many other things), and that you simply can't enforce some things.

Can you describe how you hooked Claude up to FreeCAD?

I've messed around with FreeCAD a bit (I'm still a beginner) and was just saying today that I'd like to play around with using LLMs for 3D modelling.

EDIT: I found this mcp[0] after searching, was it that? The docs mention Claude desktop, but I assume it works fine with Claude code too, right?

[0] https://github.com/neka-nat/freecad-mcp


Hey, I'm the OP. I originally started with FreeCAD. There's not much to "hook up" to Claude. It can natively write for FreeCAD. You don't need to use the FreeCAD editor and can point to an external, local file with an import. At that point there's not much more than pointing your LLM to that file. You'll need to tell the FreeCAD desktop app to update on changes.

Eventually I moved to JSCAD for the application mentioned in my blog post because I realized I wanted a more complex UI (which meant a web app) than what FreeCAD provided natively. If you're looking for something simple with some var statements though, FreeCAD might be enough.

In my experience, the MCP isn't really needed. Claude at least already can write the code pretty well. The problems are more with getting it to understand the output, which the blog post covers.


Sorry, what format of local file can it work with?

I can see it working with scad, and then having that generate some things. I'd imagine it'd struggle with an STL file. I don't know much about the format of FCStd files, but I'd find it surprising if that worked fine. Obviously it could be alright with three.js code and the like.

It might be my lack of knowledge, because I've mostly just used FreeCAD to create and edit things and then exported to STL (which doesn't feel like the thing Claude would be good at modifying).


I didn't hook up anything to anything; I copied and pasted the Python output of Claude into the Python console of FreeCAD. Run, check, delete, repeat.
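For anyone who hasn't tried the console workflow: the scripts involved are tiny. A minimal sketch of the sort of thing you'd paste in (only runnable inside FreeCAD or its bundled interpreter; the shape, names, and dimensions are made up for illustration):

```python
# Paste into FreeCAD's Python console (View > Panels > Python console).
import FreeCAD
import Part

doc = FreeCAD.newDocument("Demo")

# A 40x20x10 mm block with a 4 mm radius hole through its centre.
box = Part.makeBox(40, 20, 10)
hole = Part.makeCylinder(4, 10, FreeCAD.Vector(20, 10, 0))
shape = box.cut(hole)  # boolean subtraction

obj = doc.addObject("Part::Feature", "Bracket")
obj.Shape = shape
doc.recompute()
```

If a generated script goes wrong you can just close the document without saving and paste the next attempt, which is what makes the run/check/delete loop cheap.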

Cool project. Why is it called the Baochip/Dabao?

Is it big Bao? Or take-away (just learnt the second meaning), or something else?


Personally, I love eating "bao" (a style of dumplings), but also coincidentally, a homophone of "bao" in Chinese (different character 保, similar sound) has a meaning of "protect; defend. keep; maintain; preserve. guarantee; ensure". So it means both things to me - one of my favorite foods, and also describes the technology.

"dabao" is just a pun on that - means "take-away" or "to-go". The dabao evaluation board is basically a baochip in a "to-go" package.


That would explain the naming of OpenBao, a fork of Hashicorp Vault. Goes with the other fork's name (OpenTofu) as well as the meaning you just mentioned.

I think it’s take-away, or to go. Like when you order some food to go.

This is what I think. I saw someone else on HN suggest providing an `X-User-Age` header to these sites, and giving parents a password-protected page to set it in the browser/OS.

Responsibility should be on the website not to provide the content if the header is sent with an inappropriate age, and on the parent to set it up on the device, or not to give a child a device without child-safe restrictions.

It seems very obviously simple to me, and I don't see why any of these other systems have gained steam everywhere all of a sudden (apart from a desire to enhance tracking).


Seems simple until you try to figure out what's allowed for what age, which surely will differ by country at a minimum.

To me that's a geo-IP lookup on the online service, which they kinda need to do anyway, so that seems fine?

(if there are further restrictions then it gets messy, but I feel like that's the current state of things anyways? at least for online services which I'm mostly speaking about here.)

Mostly my point is I don't think attestation is required. I think that responsibility should fall upon parents, and I don't want to have to give my ID to any online sites, because I don't remotely trust them to keep that safe. I'm less worried about them storing a number I send them about how old I am.
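The server side of that scheme really is trivial; a minimal sketch (the header name, country codes, and minimum ages here are all made up for illustration; no such standard exists today):

```python
# Hypothetical check for a proposed X-User-Age header, combined with a
# per-country minimum age the site would already know from geo-IP.
MIN_AGE_BY_COUNTRY = {"US": 18, "DE": 16, "GB": 18}  # illustrative values
DEFAULT_MIN_AGE = 18

def is_allowed(headers: dict, country: str) -> bool:
    """Deny access unless the declared age meets the local minimum.
    Absent or malformed headers fail closed."""
    raw = headers.get("X-User-Age")
    if raw is None:
        return False
    try:
        age = int(raw)
    except ValueError:
        return False
    return age >= MIN_AGE_BY_COUNTRY.get(country, DEFAULT_MIN_AGE)

print(is_allowed({"X-User-Age": "17"}, "DE"))  # True: 17 >= 16
print(is_allowed({"X-User-Age": "17"}, "US"))  # False: under 18
print(is_allowed({}, "US"))                    # False: fail closed
```

The point is that the site only ever sees a self-declared number, not an ID document; enforcement of who may set that number lives on the parent-controlled device.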


There's ~195 countries with 195 sets of laws.

And 50 US states.


Yeah, and frankly, if you have a porn site (for example) you already need to deal with the different country restrictions.

Having no restrictions would be great, but since a bunch of countries are passing these laws I'd appreciate having a minimally invasive version instead.


I don't understand; Claude Code already has automatic prompt caching built in.[0] How does this change things?

[0] https://code.claude.com/docs/en/costs


It'd be great if they went to Mistral!

