p1necone's comments | Hacker News

Being limited to 8GB of RAM is genuinely the only thing on that list I care about (no backlight and no fast charging are teetering on the edge of me caring, but they aren't worth multiple hundreds of dollars) - Apple silicon is so fast now that (at least for my purposes) the performance segmentation between price points is basically meaningless.

A keyboard backlight is such a cheap and useful addition to a keyboard, it feels insulting not to get it. I cannot believe this is one of the ways they decided to cheap out.

I wouldn’t even care about the 8GB of ram if I could just add some myself.


> A keyboard backlight is such a cheap and useful addition to a keyboard

Useless LEDs that burn battery budget.

The thing everyone seems to be missing is this isn't a laptop for you or me. It is to compete with Chromebooks in the educational market, and to have a SKU to sell in developing countries.


Thank goodness they removed this fantastic thing everyone wants to give you an extra fourteen seconds of use time per battery charge. Come on man.

As for the importance of it, if you want to give these to kids, you should have something more rugged, more replaceable, and more built for all kinds of environments (including kids who don’t have a conveniently well-lit place to focus on schoolwork at home).

A large school could have thousands upon thousands of broken Chromebooks waiting to be shipped off - literally multiple pallets. I’ve seen it more than once. Absolutely nobody is begging for an unrepairable, unexpandable, more-expensive version of what they all already have. It’s garbage for school, dead out of the gate.


>> fantastic thing everyone wants

I wouldn't normally comment on such stuff as it's clearly a personal preference, but just to underline that it is in fact a preference rather than something "everyone" wants: I have used keyboard lighting exactly once in the ~decade it's been available to me. On a laptop with a predictable keyboard, it genuinely doesn't matter to me.

(On a laptop with an unpredictable keyboard, the light mitigates the problem rather than fixing it :)


Why do you need to see your keyboard?

Touch typing is a useful skill for everyone to have and doesn't take long to acquire.

Not to mention even the light of the display should be enough for you to be able to read the key caps if you really need to. Keyboard backlight seems like a gimmick with limited use to me. I always thought it was purely aesthetic.


You're sitting back in a chair watching YouTube in the dark. Hit F for fullscreen. (OK, that was the easy level because of the key bump.) Now hit L to skip 10 seconds forward. Now hit < and > to adjust speed.

The backlighting is useful. But no, it's not for typing, for most people.


"everyone wants"? I am not even sure I understand the utility. Typing in the dark? For, idk, living in a cave?

14 seconds? Lights are expensive when it comes to batteries.

> I wouldn’t even care about the 8GB of ram if I could just add some myself.

I think that’s pretty unreasonable when they’re using an iPhone SoC to keep it cheap because they have massive volume. It was only ever available in 8GB and never designed for user upgradable memory because it’s for a phone.


It’s basically a web browser machine, that’s fine.

Damn, everyone is using AI for copyediting now aren't they? Once you notice the patterns you see it everywhere.

* "This isn't X. It's Y"

* "Some sentence emphasizing something. Describing the same thing with different framing. Describing it a third time but punchier.

* The em-dash of course

* A hard to describe sense of "cheesiness"

I only hope the models get good enough to not be so samey in the future.


Once you see it you can't unsee it. Although maybe this is how corporate blogslop has always been, and we're just now noticing that it's infected everything.

> "These are not complaints, merely observations."

> "There are repairable laptops, and then there are ThinkPads."

> "iFixit approached the relationship as collaborators, not critics."

> "[...] they didn’t declare victory and go home. They kept pushing."

> "Designing for repairability doesn’t mean compromising innovation or premium experiences; when done well, it actually drives smarter innovation, better modularity, and more resilient platforms."

> "It would be one thing to make a highly repairable but low-volume niche device or concept. Instead, Lenovo just threw down a gauntlet by notching a 10/10 repairability score on their mainstream-iest business laptop."

> "This is [...] how repair goes from being an enthusiast’s “nice-to-have” to being baked into procurement checklists and fleet-management decisions."


There's a desperate grasping for drama and simplicity about it -- same as most mass-media news stories. I recall reading somewhere that the two watchwords of journalism are "simplify, and exaggerate". Maybe add to that: "Make all your metaphors cliches, so the reader doesn't have to think about what is meant."

Yeah, it's weird. It's like one person writes articles for the whole world. Probably will be fixed in a few AI iterations to present more styles, but right now it's everywhere. Articles, even forum posts.

I found a way to 'de-smell' LLM copy: tell it to take a second pass that processes the text output with the William Burroughs cut-up method. Works well for a small subset of use cases.

Presumably the smelly AI text problem is just ... a problem that will be solved. Or maybe we'll just get used to it.


I believe it's already a solved problem, especially with base models (pre-RL), but they still push the LLM voice, either to make it easy to identify or because they think it's likeable. So it's not that OAI, Anthropic, and Google can't get rid of the assistant voice; it's that they don't want to.

We've gone the wrong direction on the verbosity scale.

Unless I'm reading for pleasure, I want everything in concise summaries. I don't need flowery language. Or even complete sentences.

Maybe an LLM verbosity slider that dynamically truncates text we don't need. I'll dial mine down.



I recently destroyed the screen on a Google Pixel during a repair following a shoddily-written set of iFixIt instructions. I wish I had checked the comments, where many people complained that the instruction was wrong.

It was about a very fragile part of the process, and the error of omission seemed atypical for iFixIt. It made me suspect the instructions might not have been wholly human written. I feel a bit vindicated for that suspicion.

The most generous interpretation I can have for this type of article is that it's a second-order phenomenon. If it was written by a human, it was written by one who consumes a lot of AI generated content and whose standards for what they produce have slipped.


I’ve only tried doing a phone repair per iFixit’s instructions once, and the instructions sucked. They explained in excruciating detail how to take the phone apart and then the instructions just ended. No details on reassembly.

>A hard to describe sense of "cheesiness"

This is the "Reddit" factor. I picked up on it being LLM written with this sentence:

"This is the treacherous, final-boss stage where repairability usually dies,"


Ah, yes, everything needs to be phrased as an existential crossroads now. Same thing the other day when I was debating between olives or pickles on my pizza.

Now that I know pickles are a pizza topping, maybe.

LLMs bring up the “final boss” analogy a lot too. I’ve gotten that in my own prompts.

> I only hope the models get good enough to not be so samey in the future.

Why would you hope to be more easily fooled?


Not GP but I'm personally hoping that if I'm inevitably doomed to be exposed to this horseshit every day that it becomes tolerable to read. For world-shaking language-based superintelligences, they can't write to save their very expensive lives.

> I'm personally hoping that if I'm inevitably doomed to be exposed to this horseshit every day that it becomes tolerable to read.

Thank you for replying, but that doesn’t answer the question. Why would you want to make made up bullshit output more tolerable to read? Being intolerable to read is a feature, it’s a useful signal to know a piece of text may not have had human review, and that you should spend your time reading something else.

I use that same strategy with website consent banners. If a website is so invasive that they go out of their way to make rejection hard (which, by the way, is against the law), I know it’s a company not worth supporting.


What annoys me the most is that the information has become much less dense. There's a lot of unnecessary repetition. I feel like I need to feed every article through an LLM just to get a summary of it.

If only a human could edit the output before posting.

Ironically, the editors probably haven't opened a text editor for months.

> everyone is using AI for copyediting now aren't they?

If the studies that say humans prefer AI writers are to be believed, then you'd be a fool not to.


Depends on the type of human you want to attract.

* "This isn't X. It's Y"

I find that Gemini uses that phrase way too much.


Ugh I have actually started hating Gemini for this specifically.

Em dashes aren’t an actual tell IMO. Many people use them.

Surely you mean: Em dashes aren’t an actual tell IMO — many people use them.

Maybe he isn't one but has a close friend who is? That would describe me.

Em dashes aren’t an actual tell IMO: many people use them.

There are dozens of us!

— dozens!

It is though if the rest of the prose is trash.

Joke's on you—humans write trash all the time.

I don’t mind the AI-generated aspect. I mind the lack of caring that it looks like AI slop.

It indicates the baseline competency of the AI user (or whomever they are trusting to use it), and it will hurt brand trust, and trust in humans, even more.

I'm glad I haven't let AI write much for me; it's better for it to help me develop my ideas and writing, and for me to do the work to learn, explore, and end up with something where my brain is in the gym. Passive generation might not always map well to passive consumption.


Generate with carefully steered AI, sanity check carefully. For a big enough project writing actually comprehensive test coverage completely by hand could be months of work.

Even state-of-the-art AI models seem to have no taste, or sense of 'hang on, what's even the point of this test?', so I've seen them diligently write hundreds of completely pointless tests, and sometimes the reason they're pointless is some subtle thing that's hard to notice amongst all the legit-looking expect code.


My plan for this in my current toy language project is to allow things like 'import * from Foo', but save a package.lock-esque file somewhere on first build - after that you need to run some kind of '--update' command to bring in totally new names.

The problem I'm trying to solve is less about general discoverability and more about ensuring that purely additive changes in libraries aren't technically breaking due to the risk of name clashes, though.
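
Rough sketch of the mechanism in Python, as a stand-in for the toy language (all names here are made up): wildcard imports resolve against the pinned name list from the lockfile rather than against whatever the library currently exports, so additive library changes can't introduce clashes until you opt in with '--update'.

  import json, os

  def resolve_wildcard(module_name, exported_names, lock_path, update=False):
      """Names that 'import * from module_name' should bring into scope."""
      lock = {}
      if os.path.exists(lock_path):
          with open(lock_path) as f:
              lock = json.load(f)

      if update or module_name not in lock:
          # First build (or explicit --update): pin whatever the module exports now.
          lock[module_name] = sorted(exported_names)
          with open(lock_path, "w") as f:
              json.dump(lock, f, indent=2)

      # Later builds ignore names the library added after the pin.
      return set(lock[module_name]) & set(exported_names)

The intersection also means a name the library later removes just quietly disappears instead of erroring, which may or may not be what you want.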


I haven't seen one yet, but theoretically a case that secures the tablet in a holder that has a proper hinge (instead of the typical kickstand style) attached would work. You'd have to weight the keyboard a bit but there's no reason it wouldn't work, and effectively give you the exact same form factor as a laptop.

That sounds like the existing Magic Keyboard for the current iPad airs and pros, can you explain the difference a bit more?

i bought a magic keyboard for my 11" ipad pro and ultimately didn't use it much. it does have a traditional laptop-style hinge, but the way the ipad mounts to the case brings it forward over the keyboard more than with a regular laptop. the hinge also doesn't allow for a very wide range of motion (even compared to macbooks). finally, the center of gravity is really high compared to a laptop which makes it awkward to use as a literal laptop or when lying down.

it definitely looks cool (i could see the design having been inspired by the OG Mac and 20th Anniversary Mac) but works best on a stable surface; plus if you want to use it purely as a tablet, you're left with a big clunky keyboard case to deal with.

the idea of a laptop/tablet combo is cool but i haven't seen the concept executed very successfully from either starting point.


the point of that hinge, besides weight distribution, is to make it easy to reach and touch the bottom of the screen, and so that it's not fully perpendicular to your finger.

and that particular subproblem of using a tablet as a notebook - it solves it well! but it's still a little weird when you try to use it like a laptop. maybe this is a cop-out but it definitely feels like a product that would not have passed the Jobs test in its current form.

I think they just don't know about the Magic Keyboard.

You would be correct. If the ipad let you use full osx it would be pretty attractive to me and I probably would have spent the 5 minutes needed to discover the magic keyboard, but unfortunately the idea of buying a computing device with such insanely powerful hardware but being locked into standard tablet UX really doesn't excite me.

I'm no statistician, but the part about halfway through that says not to use PRNGs for random assignment into bins seems wrong to me?

Sure, I can understand why for a research trial you might just want to be totally safe and use a source of true randomness, but for all practical purposes a decent PRNG used for sorting balls into buckets is totally indistinguishable from true randomness, is it not?

I was half expecting this to have been written a few decades ago when really bad PRNGs were in common usage, but the article seems to be timestamped 2025.
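
For what it's worth, here is the sort of thing I mean - a toy sketch in Python using the stdlib Mersenne Twister (my own illustration, not anything from the article):

  import random

  def assign_to_bins(subject_ids, n_bins=2, seed=42):
      rng = random.Random(seed)  # Mersenne Twister, a decent non-crypto PRNG
      ids = list(subject_ids)
      rng.shuffle(ids)           # random permutation, so the bins come out balanced
      return {sid: i % n_bins for i, sid in enumerate(ids)}

  print(assign_to_bins(range(10)))

For any experiment of realistic size I'd be surprised if you could tell the resulting allocation apart from one produced by a hardware RNG.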


Plenty of previously well regarded PRNGs are distinguishable in quite surprising ways.

Perhaps you could say otherwise for a CSPRNG.


In PRNGs there is a compromise between speed and the quality of their statistical properties.

So you must choose a PRNG wisely, depending on the intended purpose.

There are PRNGs good enough for any application, including those that use cryptographic mixing functions, but in many cases people prefer the fastest PRNGs.

The problems appear when the fastest PRNGs are used in applications for which they are not good enough, so the PRNG choice must be done carefully, whenever it is likely to matter.

With recent CPUs, the PRNG choice is much simpler than in the past. They can produce high quality random numbers by using AES at a rate only a few times lower than they can fill memory.

Because of this, the speed gap between the fastest PRNGs and good PRNGs has become much narrower than in the past. Therefore, if you choose a very good PRNG you do not lose much speed, so you can make this choice much more often.

Many kinds of non-cryptographic PRNGs have become obsolete, i.e. all those that are slower than the PRNGs using AES, SHA-2 or SHA-1, which use the dedicated hardware included in modern CPUs.

The non-cryptographic PRNGs that remain useful, due to superior speed, contain a linear congruential generator or a Galois field counter, which guarantee maximum period and allow sequence jumps and the ability to generate multiple independent random streams, together with some non-linear mixing function for the output, which improves the statistical properties.
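
To make the counter-based structure concrete, here is a toy sketch in Python using SHA-256 from the standard library (purely illustrative - a real generator of this kind would use the hardware AES instructions mentioned above rather than a Python-level hash loop):

  import hashlib, struct

  class HashCounterPRNG:
      """Counter mode: each output block is hash(seed || counter)."""
      def __init__(self, seed: bytes):
          self.seed = seed
          self.counter = 0

      def next_block(self) -> bytes:
          block = hashlib.sha256(self.seed + struct.pack("<Q", self.counter)).digest()
          self.counter += 1
          return block

      def next_u64(self) -> int:
          return struct.unpack("<Q", self.next_block()[:8])[0]

  rng = HashCounterPRNG(b"example-seed")
  print([rng.next_u64() for _ in range(3)])

Sequence jumps and independent streams fall out for free: set the counter directly, or derive a different seed per stream.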


Note this doesn't apply to GPUs among other things. To that end, counter based PRNGs such as Philox that employ a weakened cryptographic function are useful.


For almost all practical purposes, a decent PRNG is just as good as a CSPRNG. Cryptographic security only becomes relevant in an adversarial situation. Otherwise, you would need whatever weakness exists in the PRNG to be somehow correlated with what you are trying to measure.


If practical purposes include simulating physical processes, then the problems with PRNGs become quite important.


What is an example there?


> It had a notion of "type"s which were functions that returned a boolean 1 only if given a valid value for the type being defined.

I've got a hobby language that combines this with compile time code execution to get static typing - or I should say that's the plan, it's really just a tokenizer and half of a parser at the moment - I should get back to it.

The cool side effect of this is that properly validating dynamic values at runtime is just as ergonomic as casting - you just call the type function on the value at runtime.
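
In Python terms (just a sketch of the idea - the syntax in my language is different and these names are made up), it looks something like:

  # A "type" is just a predicate: it returns True only for valid values.
  def PositiveInt(v):
      return isinstance(v, int) and v > 0

  def NonEmptyStr(v):
      return isinstance(v, str) and len(v) > 0

  def check(type_fn, value):
      # Runtime validation is just calling the type function, like a checked cast.
      if not type_fn(value):
          raise TypeError(f"{value!r} is not a valid {type_fn.__name__}")
      return value

  user_id = check(PositiveInt, 42)   # fine
  # check(NonEmptyStr, "")           # would raise TypeError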


> I expected to see measures of the economic productivity generated as a result of artificial intelligence use.

>Instead, what I'm seeing is measures of artificial intelligence use.

Fun fact: this is also how most large companies are measuring their productivity increases from AI usage ;), alongside asking employees to tell them how much faster AI is making them while simultaneously telling them they're expected to go faster with AI.


When your OKRs for the past year include "internal adoption of ai tools"


It is weird right? I don't remember any other time in my career where I've been measured based on how I'm doing the work.

In my experience, "good management" meant striving to isolate measurements as much as possible to output/productivity.


The generous interpretation is that it's meant to incentivize "carpenters who refuse to use power tools" for their own good.


productivity is such a nebulous concept in knowledge work - an amalgamation of mostly-qualitative measures that get baked into quantitative measures that are mostly just bad data


Economic productivity is absolutely not nebulous. It's a measure of GDP per hour worked.

https://ourworldindata.org/grapher/labor-productivity-per-ho...


And in a business you can easily measure total profit and divide by total hours worked.

When you try and break it down to various products and cost centers is where it comes unstuck. It’s hard to impossible to measure the productivity of various teams contributing to one product, let alone a range of different products.


You can thank agile for that


You don't seem to like agile, whatever that word even means.


On the contrary. I like agile for when you don’t know exactly what you’re building but you can react quickly to change and try to capture it.

Moving fast and breaking things, agile.

On the other hand. When you know what you want to build but it’s a very large endeavor that takes careful planning and coordination across departments, traditional waterfall method still works best.

You can break that down into an agile-fall process with SAFe and Scrum of Scrums and all that PM mumbo jumbo if you need to. Or just kanban it.

In the end it’s just a mode of working.


Knowing exactly what you want to build is pretty rare and is pretty much limited to "rewriting an existing system" or some pretty narrow set of projects.

In general, delaying infrastructure decisions as far into the process as possible usually yields better infrastructure, because the farther along you are, the more knowledge you have about the problem.

...that being said, I do dislike how agile gets used as an excuse for not doing any planning where you really should, and where you have enough information to at least pick a direction.


If someone comes to you and says: "I want to build a platform that does WhizzyWhatsIt for my customers, it has to be on AWS so it's mingled with my existing infrastructure. It needs to provide an admin portal so that I can set WhizzyWhatsIt prices and watch user acquisition make my bank account go brrrrrtt. It needs the ability for my quasi-illegal affiliate marketing ring to be able to whitelabel and brand it as their own for a cut of the profits."

This is obviously satire, but there's a clear ask and some features, and from there you know what you need to have to even achieve those features. What project management process would you employ? Agile? Waterfall? Agile-fall? Kanban? Call me in 6 months?


Probably waterfall the stuff that has actual clear functions and integrations (if you can extract everything the system takes in and what it does with it, there is no reason to agile it), then slowly work through the current mess, documenting it at each step while trying to replace it with something better.

Replacing an existing system (and especially one you didn't write) is pretty much always the hardest case.


Nice way to make all that data meaningless. I already know some people whose jobs have pushed adoption of AI tools, and it's clear that whether or not it meaningfully impacts their speed, it's not going to do them any favors to say it doesn't, even when it doesn't.


Is this really true? Mobile device users are mostly forced to use apps rather than the browser for most stuff, and people on desktop PCs/laptops are probably either using them for gaming (all desktop apps) or for work, where a lot of stuff is desktop apps.

Sure regular consumer stuff like social media is webapps (if they're not mobile only), and if you're interacting with like salesforce or a customer support tracker or an issue tracker or something you're likely using a webapp, but the move to mobile devices for most consumer stuff means that people still using PCs are largely power users.


> if you're interacting with like salesforce or a customer support tracker or an issue tracker or something you're likely using a webapp

Precisely. I think most knowledge work (especially at businesses) still happens in the browser. That is the workflow we want to target!


I had some success recently making small hacks for NES/Famicom ROMs using Claude, despite not having a lick of knowledge about 6502 assembly or the NES hardware, but I struggled with doing any more in-depth disassembly or code changes, so this popping up is serendipitous - I know what I'm doing this weekend.

