tlarkworthy's comments

It's like GET <namespace>/object, PUT <namespace>/object. To me it's the most obvious mapping of HTTP to immutable object key-value storage you could imagine.
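A minimal sketch of that mapping, as an in-memory stand-in (obviously not the real service; auth, metadata, multipart, and listing are all omitted):

```python
# Hypothetical in-memory stand-in for the HTTP-verb-to-key-value mapping.
store = {}

def handle(method, key, body=None):
    if method == "PUT":      # PUT <namespace>/object -> write
        store[key] = body
        return 200, None
    if method == "GET":      # GET <namespace>/object -> read
        return (200, store[key]) if key in store else (404, None)
    if method == "DELETE":   # DELETE <namespace>/object -> remove (idempotent)
        store.pop(key, None)
        return 204, None
    return 405, None
```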

It is bad that the control plane responses can be malformed XML (e.g. keys are not escaped correctly if you put XML control characters in object paths), but that can be forgiven as an oversight.

It's not perfect, but I don't think it's a strange API at all.


That may be what S3 is like, but what the S3 API is is this: https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/s3

My browser prints that out to 413 pages with a naive print preview. You can squeeze it to 350 pretty reasonably with a bit of scaling before the type starts getting awfully small on the page.

Yes, there's a simple API with simple capabilities struggling to get out there, but pointing that out is merely the first step on the thousand-mile journey of determining what, exactly, that is. "Everybody uses 10% of Microsoft Word, the problem is, they all use a different 10%", basically. If you sat down with even 5 relevant stakeholders and tried to define that "simple API" you'd be shocked what you discover and how badly Hyrum's Law will bite you even at that scale.


> That may be what S3 is like, but what the S3 API is is this: https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/s3

> My browser prints that out to 413 pages with a naive print preview. You can squeeze it to 350 pretty reasonably with a bit of scaling before it starts getting to awfully small type on the page.

idk why you link to the Go SDK docs when you can link to the actual API reference documentation: https://docs.aws.amazon.com/AmazonS3/latest/API/API_Operatio... and its PDF version: https://docs.aws.amazon.com/pdfs/AmazonS3/latest/API/s3-api.... (just 3874 pages)


It's better to link to a leading S3-compatible API docs page. You get a better measure of the essential complexity:

https://developers.cloudflare.com/r2/api/s3/api/

It's not that much; most of the weirder S3 APIs are optional, orthogonal APIs, which is good design.


Because it had the best "on one HTML page" representation I found in the couple of languages I looked at.

That page crashes Safari for me on iOS.

It gets complex with ACLs for permissions, lifecycle controls, header controls, and a bunch of other features that are needed at S3's scale but not at a smaller provider's scale.

And many S3-compatible alternatives (probably most, apart from the big ones like Ceph) don't implement all of the features.

For example, for lifecycles Backblaze has a completely different JSON syntax.


Last I checked the user guide to the API was 3500 pages.

3500 pages to describe upload and download, basically. That is pretty strange in my book.


Even download and upload get tricky if you consider stuff like serving buckets as static sites, or stuff like signed upload URLs.

Now with the trivial part off the table, let's consider storage classes, security and ACLs, lifecycle management, events, etc.


Everything uses poorly documented, sometimes inconsistent HTTP headers that read like afterthoughts/tech debt. An S3-standard implementation has to have Amazon branding all over it (x-amz-) which is gross.

I suspect they learned a lot over the years and the API shows the scars. In their defense, they did go first.

I mean… it’s straight up an Amazon product, not like it’s an IETF standard or something.

!!!

I’ve seen a lot of bad takes and this is one of them.

Listing keys is weird (is it V1 or V2?).

The authentication relies on an obtuse and idiosyncratic signature algorithm.
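For the curious, the algorithm in question is SigV4. Its signing key is derived through a chain of HMAC-SHA256 steps; a sketch of just the key-derivation part, following AWS's documented scheme (the canonical-request hashing layered on top of this is where most of the pain lives):

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the AWS SigV4 signing key via the documented HMAC-SHA256 chain."""
    def _hmac(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()

    k_date = _hmac(("AWS4" + secret_key).encode(), date)  # date like "20150830"
    k_region = _hmac(k_date, region)                      # region like "us-east-1"
    k_service = _hmac(k_region, service)                  # service like "s3"
    return _hmac(k_service, "aws4_request")               # terminal string is fixed
```

Every request then gets signed with a key scoped to that day, region, and service, which is part of why the scheme feels so idiosyncratic compared to a plain bearer token.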

And S3 in practice responds with malformed XML, as you point out.

Protocol-wise, I have trouble liking it over WebDAV. And that's depressing.


HTTP isn't really a great backplane for object storage.

Etag and cache control headers?


My grandfather was still talking of this in the 90s. A very good joke!


I fell for it. I wasn't born in 1957, but to this day I remember the picture. I must have seen it in a newspaper when I was around 5. It was before TV. I just accepted the picture as ground truth and it stuck with me for many years.

It came as quite a shock when I discovered as an adult that spaghetti was made from flour.


No no, you've still got it wrong!

It's made from a flower, a rare but now successfully domesticated flower. The Tu-Tue flower does require extensive processing, sort of like how corn has to be soaked in something, like ashes, to release its nutrients.

Tu-tue requires a similar process, but just as with natives in the New World and corn, ancient Romans simply knew that washing the flowers in a hot spring near Getti made the final product palatable, without knowing why.

Spa being, of course, Latin for 'hot wash'; thus spa-getti.

Hope this helps.


Markov chains learn a fixed distribution, but transformers learn the distribution of distributions and latch onto what the current distribution is based on evidence seen so far. That's where the single-shot learning in transformers comes from. Markov chains can't do that; they will not change the underlying distribution as they read.
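A toy way to see the "fixed distribution" point: once a bigram Markov chain is trained, its transition table is frozen, and nothing it reads at generation time updates it (token names here are arbitrary):

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count bigrams once; the resulting transition table is frozen."""
    counts = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        counts[a][b] += 1
    # Normalize counts into per-state next-token probabilities.
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

model = train_bigram(["a", "b", "a", "b", "a", "c"])
# P(next | "a") is now fixed forever, no matter what context the chain
# later consumes; a transformer, by contrast, can sharpen its predictions
# in-context as evidence accumulates.
```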


https://www.lesswrong.com/posts/gTZ2SxesbHckJ3CkF/transforme... explains this more; see the part about the mixed-state presentation & belief synchronization.

>Another way to think about our claim is that transformers perform two types of inference: one to infer the structure of the data-generating process, and another meta-inference to update it's internal beliefs over which state the data-generating process is in, given some history of finite data (ie the context window). This second type of inference can be thought of as the algorithmic or computational structure of synchronizing to the hidden structure of the data-generating process.


Oh yeah, I read that article and could not find it again. Thank you.

It really opened my mind to what is special about transformers.


You put something out on the internet; likely no one cares, but sometimes people will point out a better way of doing it. You gain in knowledge. I learnt a lot from public critique... it makes you better and more knowledgeable. Harness the crowd, let it out.


Well, that inspired me to research getting those tests in Germany.

=> heart panel plus

https://en.minu.synlab.ee/heart-panel-plus/

I don't need doctors, I can get ChatGPT to analyse the results.


Yeah I was inspired after https://news.ycombinator.com/item?id=43998472 which is also very concrete


I love everything they've written and also Sketch is really good.


It unblocks that workflow, that's why it's so great. You can have a single script with inline dependencies that are auto installed on execution. That can expand to importing other files, but there is very little setup tax to get started with a script and it does not block expansion.
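The inline-dependency mechanism being described is PEP 723 script metadata, which uv reads before running the script; a minimal sketch (requests is just an example dependency, and the URL is a placeholder):

```python
# /// script
# requires-python = ">=3.12"
# dependencies = ["requests"]
# ///
# Running this with `uv run script.py` resolves and installs requests into a
# throwaway environment first; no project file or virtualenv management needed.
import requests

print(requests.get("https://example.com").status_code)
```

The nice property is exactly the one described above: the dependency declaration lives in the file itself, so there is no setup tax, and graduating to a real project later doesn't invalidate the script.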


It's not about single-file scripts, it's about having a "sandbox" environment in which various things can be messed with before abstracting anything out into a project.


This is a divide among different Python devs it seems. A lot of people are running scripts, which I will do eventually, but I spend a ton of time in a REPL experimenting and testing.


Yeah, it's this experiment-first workflow that's not so well supported.


Interesting, but I would like details on how spatial data is handled specifically.


Cool, but it's a 12MB WASM blob. I wish there were a way of making these WASM builds significantly smaller.


The funny thing is that the performance of a 12MB WASM blob is probably superior to most Shiny apps with more than light traffic.


Usually they are shipped in a compressed form. If the 12MB is the compressed size, it could be that it represents the entire R runtime needed to support the general R REPL. It could be possible to reduce the payload by compiling only what's necessary to run a particular R program into the wasm binary. That should cut the size down considerably.

