It's like GET <namespace>/object, PUT <namespace>/object. To me it's the most obvious mapping of HTTP to immutable object key-value storage you could imagine.
It is bad that the control-plane responses can be malformed XML (e.g. keys are not escaped correctly if you put XML control characters in object paths), but that can be forgiven as an oversight.
It's not perfect, but I don't think it's a strange API at all.
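A toy sketch of that mapping (illustrative only, not the real S3 wire protocol): a dict-backed store where PUT writes an immutable object under a path and GET reads it back.

```python
# Minimal sketch of an HTTP-verb -> object-store mapping.
# Illustrative only: real S3 adds auth, metadata headers, and XML error bodies.

class ObjectStore:
    def __init__(self):
        self._objects = {}  # "<namespace>/<key>" -> bytes

    def handle(self, verb, path, body=None):
        if verb == "PUT":
            self._objects[path] = body       # store the object as-is
            return 200, b""
        if verb == "GET":
            if path in self._objects:
                return 200, self._objects[path]
            return 404, b"NoSuchKey"         # S3 reports this as an XML error body
        return 405, b"MethodNotAllowed"

store = ObjectStore()
store.handle("PUT", "my-bucket/hello.txt", b"hi")
status, data = store.handle("GET", "my-bucket/hello.txt")
print(status, data)  # 200 b'hi'
```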
My browser prints that out to 413 pages with a naive print preview. You can squeeze it down to 350 pretty reasonably with a bit of scaling before the type starts getting awfully small on the page.
Yes, there's a simple API with simple capabilities struggling to get out there, but pointing that out is merely the first step on the thousand-mile journey of determining what, exactly, that is. "Everybody uses 10% of Microsoft Word, the problem is, they all use a different 10%", basically. If you sat down with even 5 relevant stakeholders and tried to define that "simple API" you'd be shocked what you discover and how badly Hyrum's Law will bite you even at that scale.
> My browser prints that out to 413 pages with a naive print preview. You can squeeze it down to 350 pretty reasonably with a bit of scaling before the type starts getting awfully small on the page.
It gets complex with ACLs for permissions, lifecycle rules, header controls, and a bunch of other features that are needed at S3's scale but not at a smaller provider's scale.
And many S3-compatible alternatives (probably most, apart from the big ones like Ceph) don't implement all of the features.
For example, for lifecycle rules Backblaze has a completely different JSON syntax.
Everything uses poorly documented, sometimes inconsistent HTTP headers that read like afterthoughts/tech debt. An implementation of the S3 "standard" has to have Amazon branding all over it (x-amz-*), which is gross.
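To make the x-amz-* branding concrete: a Signature Version 4 request carries headers like x-amz-date and x-amz-content-sha256 (the hex SHA-256 of the request payload). A minimal sketch of building just those two headers (the full Authorization signature is omitted):

```python
import hashlib
from datetime import datetime, timezone

def amz_headers(payload: bytes) -> dict:
    """Build two common x-amz-* headers for a SigV4 request.
    Sketch only: a real request also needs the signed Authorization header."""
    return {
        # Hex SHA-256 digest of the payload, required by Signature Version 4.
        "x-amz-content-sha256": hashlib.sha256(payload).hexdigest(),
        # Timestamp in the ISO-8601 "basic" format SigV4 expects, e.g. 20240101T000000Z.
        "x-amz-date": datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ"),
    }

headers = amz_headers(b"hello world")
print(headers["x-amz-content-sha256"])
```

Any vendor that wants drop-in compatibility has to accept and emit these Amazon-named headers verbatim.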
I fell for it. I wasn't born in 1957, but to this day I remember the picture. I must have seen it in a newspaper when I was around 5. It was before we had a TV. I just accepted the picture as ground truth and it stuck with me for many years.
It came as quite a shock when I discovered as an adult that spaghetti was made from flour.
It's made from a flower, a rare but now successfully domesticated flower. The Tu-Tue flower does require extensive processing, sort of like how corn has to be soaked in something, like ashes, to release its nutrients.
Tu-tue requires a similar process, but just as with the natives of the New World and corn, ancient Romans simply knew that washing the flowers in a hot spring near Getti made the final product palatable, without knowing why.
Spa being of course, latin for 'hot wash', thus spa-getti.
Markov chains learn a fixed distribution, but transformers learn a distribution over distributions and latch onto the current one based on the evidence seen so far. That's where the single-shot learning in transformers comes from. Markov chains can't do that: they will not change the underlying distribution as they read.
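A toy illustration of the difference (a hand-rolled Bayesian sketch, not an actual transformer): a Markov chain's transition probabilities stay fixed, while an in-context learner can maintain a posterior over candidate data-generating processes and sharpen it token by token.

```python
# Two candidate coin sources; the "meta-learner" updates a posterior over
# which source is generating the stream as each token arrives.
# A fixed Markov chain, by contrast, would keep P(H) constant throughout.

sources = {"mostly_heads": 0.9, "mostly_tails": 0.1}  # P(H) under each hypothesis
posterior = {name: 0.5 for name in sources}           # uniform prior

def observe(token: str) -> None:
    """Bayes update of the posterior on one observed token ('H' or 'T')."""
    for name, p_heads in sources.items():
        likelihood = p_heads if token == "H" else 1 - p_heads
        posterior[name] *= likelihood
    total = sum(posterior.values())
    for name in posterior:
        posterior[name] /= total

for tok in "HHHH":  # four heads in a row
    observe(tok)

print(posterior["mostly_heads"])  # close to 1 after only four observations
```

The posterior concentrates on "mostly_heads" after a handful of tokens; that rapid in-context adaptation is the behavior a fixed-transition Markov chain cannot exhibit.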
>Another way to think about our claim is that transformers perform two types of inference: one to infer the structure of the data-generating process, and another meta-inference to update its internal beliefs over which state the data-generating process is in, given some history of finite data (i.e. the context window). This second type of inference can be thought of as the algorithmic or computational structure of synchronizing to the hidden structure of the data-generating process.
You put something out on the internet; likely no one cares, but sometimes people will point out a better way of doing it, and you gain knowledge. I learnt a lot from public critique... it makes you better and more knowledgeable. Harness the crowd, let it out.
It unblocks that workflow, that's why it's so great.
You can have a single script with inline dependencies that are auto-installed on execution. That can expand to importing other files, but there is very little setup tax to get started with a script, and it does not block expansion.
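For context, the inline-dependency format being described is PEP 723 inline script metadata, which tools like uv honor; the dependency and URL below are placeholders, not anything from the thread.

```python
# /// script
# requires-python = ">=3.12"
# dependencies = ["requests"]
# ///
# Running `uv run script.py` resolves the metadata above, creates an
# environment with requests installed, and executes the script --
# no pyproject.toml or virtualenv setup required.
import requests

print(requests.get("https://example.com").status_code)
```

The metadata lives in the file itself, so the script stays a single copy-pasteable artifact until you decide to grow it into a project.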
It's not about single-file scripts, it's about having a "sandbox" environment in which various things can be messed with before abstracting anything out into a project.
This is a divide among different Python devs it seems. A lot of people are running scripts, which I will do eventually, but I spend a ton of time in a REPL experimenting and testing.
Usually they are shipped in compressed form. If the 12 MB is compressed, it could be that it represents the entire R runtime needed to support the general R REPL. It might be possible to reduce the payload by compiling only what's necessary to run a particular R program into the wasm binary; that should cut the size down considerably.