Hacker News | Centigonal's comments

It's the new SEO. Enterprise software vendors are already tailoring their marketing pages and docs to influence AI for when some VP asks their chatbot "which XYZ software should we consider buying?" This is something a coworker is facing at work as we try to leverage AI to speed up market discovery/tool selection work.

“They were careless people, David, Megan, and Larry -- they smashed up things and creatures and then retreated back into their money or their vast carelessness or whatever it was that kept them together, and let other people clean up the mess they had made.”

This is a wild take. Good frameworks come with clever, well-thought-out abstractions and defensive patterns for dealing with common problems in the space the framework covers. Frameworks are also often well-documented and well-supported by the community, creating common ways of doing things with well-understood strengths and weaknesses.

In some cases, it's going to make sense to drop your dependency and have AI write that functionality inline, but the idea that the AI coding best practice is to drop all frameworks and build your own vibe-coded supply chain de novo for every product is ludicrous. At that point, we should just cut out the middleman and have the LLMs write machine code to fulfill our natural language product specs.


The other thing that's dumb about this is that frameworks usually consolidate repetitive boilerplate, so it's going to cost a lot more tokens for an AI to inline everything a framework does.

Yeah, definitely a stupid take from OP. LLMs are very strong at using frameworks; frameworks make it easier to hire people to work on your codebase, and they make things easier for future uses of LLMs, since the models will have a lot of framework details in their training data, etc.

GP means the `si` URL parameter, which is a token that helps Google track how their videos are being shared.
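For illustration, a minimal Python sketch of stripping that parameter from a shared link before passing it on (the function name is my own; this assumes the token lives in the query string, as it does in youtu.be share URLs):

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def strip_tracking(url: str) -> str:
    """Return the URL with the `si` share-tracking parameter removed."""
    parts = urlparse(url)
    # Keep every query parameter except `si`.
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != "si"]
    return urlunparse(parts._replace(query=urlencode(query)))

print(strip_tracking("https://youtu.be/dQw4w9WgXcQ?si=AbC123"))
# → https://youtu.be/dQw4w9WgXcQ
```

Other parameters (like `v` on full watch URLs) are preserved, since only the tracking token is filtered out.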

How transient depends on the problem space. In chess, centaurs were transient. In architecture or CAD, they have been the norm for decades.

I don't understand why you're being downvoted. This is a topic worth discussing.

Like every previous invention that improves productivity (cf. copiers, steam power, the wheel), this wave of AI is making certain forms of labor redundant, creating or further enriching a class of industrialists, and enabling individuals to become even more productive.

This could create a golden age, or a dark age -- most likely, it will create both. The industrial revolution created Dickensian London, the Luddite rebellion & ensuing massacres, and Blake's "dark satanic mills," but it also gave me my wardrobe of cool $30 band T-shirts and my beloved Amtrak train service.

Now is the time to talk about how we predict incentive structures will cause this technology to be used, and what levers we have at our disposal to tilt it toward "golden age."


Considering the usage of LLMs by many people as a sort of friend or psychologist, we also get to look forward to a new form of control over people. These things earn people's "trust," and there is no reason why they couldn't be used to sway people's opinions. Not to mention the devious and subtle ways they can advertise to people.

Also, these productivity gains aren't used to reduce working time for the same number of people, but instead to reduce the number of people needed to do the same amount of work. Working people get to see the productivity benefits via worsening material conditions.


People need to develop memetic immunity to AI flattery. It's exactly like how conspiracy sites on the Internet worked. A lot of people got one-shot in the beginning, but 10 years later nearly everyone understands that you can't just believe what you read on the Internet.

You'd be surprised. I'm sorry in advance if I sound condescending, I just don't know how to rephrase this: please, look around at how effective the internet is nowadays -- more and more effective, I'd say -- at pushing "alternative truth" with the obvious goal of covering up dirty business, wars, and even worse crimes.

People have had several thousand years to develop immunity to flattery, and yet here we are with a President whose aides have to put his name in every paragraph of a memo to get him to read it.

https://www.independent.co.uk/news/world/americas/donald-tru...

At an individual level, we have a lot of psychological plasticity and can work to overcome our limitations. At societal scale, though, we are social primates and any system that takes advantage of natural social primate behavior is likely to succeed indefinitely.


Unlike every previous invention that improves productivity, it is making every form of labor redundant.


AIUI, in most lines of work AI is being used to replace/augment pointless paper-pushing jobs. It doesn't seem to be all that useful for real, productive work.

Coding may be a limited exception, but even then the AI's job is to be basically a dumb (if sometimes knowledgeable) code monkey. You still need to do all the architecture and detailed design work if you want something maintainable at the end of the day.


Real productive work like what? What do you think all this hubbub with robotics is about?

I mean, I know what you are getting at. I agree with you on the current state of the art. But advancements beyond this point threaten everyone's job. I don't see a moat for 95% of human labor.

There's no reason why you couldn't figure out an AI to assemble "the architecture and detailed design work". I mean I hope it's the case that the state of the art stays like this forever, I'm just not counting on it.


Robotics is nothing new; we had robots in factories in the 1980s. The jobs of modern factory workers are mostly about attending to robots and other automated systems.

> There's no reason why you couldn't figure out an AI to assemble "the architecture and detailed design work".

I'd like to see that, because it would mean that AIs have managed to stay at least somewhat coherent over longer work contexts.

The closest you get to this (AIUI) is with AIs trying to prove complex math theorems, where the proof checking system itself enforces the presence of effective large-scale structure. But that's an outside system keeping the AI on a very tight leash with immediate feedback, and not letting it go off-track.


> It doesn't seem to be all that useful for real, productive work.

Even the most pointless bullshit job accomplishes a societal function by transferring wages from a likely wealthy large corporation to an individual worker who has bills to pay.

Eliminating bullshit jobs might be good from an economic efficiency perspective, but people still gotta eat.


The logic of American economic policy relies on a large velocity of money driven by consumer habits. It is tautological, and it is obsolete in the face of the elite trying to minimize wage expenses.

How is it obsolete? If everyone is unemployed and a few AI barons are obscenely wealthy, the velocity of money will be low because most people will be broke.

Seems to me like that's still a worthy target if chasing it fights that outcome.



If the only point is distributing money, then the pointless bullshit job is an unnecessary complication.

It's not unnecessary to the person who uses it to pay their bills.

I think GP meant that the money could be distributed directly without the job in between, i.e. UBI.

Of course that comes with its own set of problems, e.g. that you will lose training, connections, the ability to exert influence through the job or any hope of building a career.


That's certainly true.

But one is well-advised to inflate and test the new lifeboat before jumping out of the current one, not after.


People fought back. Who is fighting back now?

Capitalists have openly gloated in public about wanting to replace at least one profession. That was months or years ago. What are people doing in response? Discussing incentive structures?

SC coders paid hundreds of thousands a year are just letting this happen to them. “Nothing to be done about another 15K round of layoffs, onlookers say”


> Capitalists have openly gloated in public about wanting to replace at least one profession. That was months or years ago. What are people doing in response?

Great, let them try. They'll find out that AI makes the human SC coder more productive, not less. Everyone knows that AI has little to nothing to do with the layoffs; it's just a silly excuse to give their investors better optics. Nobody wants to admit that maybe they overhired a bit after the whole COVID mess.


This is exactly it: nobody is going to do anything about it.

Buggy-whip makers inconsolable!


People regularly circumvent "blocking technology" (i.e. DRM) because they want to watch a TV show on a plane with no wi-fi, or because they want to save $20 on a cartridge of printer ink. If someone wants to kill another human being and evade detection, I'm sure they'll find a way to print their part.


I would say it's not a crime to circumvent DRM or whatever, but then I remember the DMCA exists.


Good article, reflects my experience hiring at a small services firm, too.

One thing I'd add re: "non-obviousness." There are also tarpits; people who make you think "I can't believe my luck! How has the market missed someone this good!?" At this point, I have enough scar tissue that I immediately doubt my first instinct here. If someone is amazing on paper/in interviews and they aren't working somewhere more prestigious than my corner of the industry, there is often a catch: an abrasive personality, an uncanny ability to talk technically about systems they can't actually implement, a tendency to disappear from time to time. For these candidates, I try to focus the rest of the interview process on clearing all possible risks and surfacing any red flags we may have missed, while getting the candidate excited to work with us assuming everything comes back clean.


Great point, definitely a possibility. I think I've gotten lucky in the past here where either the process caught that kind of abnormality early in the funnel, or these folks just happened to actually be super early in their careers and just hadn't had anybody take a chance on them.

Do you find that in the tarpit scenario they will typically have a work history hinting at these quirks?


Sometimes!

One person had 3-4 positions out of college, all between 8 and 14 months. Turns out they would join a large company, do nothing, and wait until they got let go. Not sure why they tried this at our smaller org, where the behavior was much more obvious.

Another flag for me is when an earlier-stage candidate claims deep expertise in multiple not-closely-related technologies. We hired one person who had deep ML, databases, and cloud services expertise - we have people like that on staff, so no problem, right? Turns out they struggled to do any of those (despite great performance on the take-home and really good, almost textbook-y answers in the interviews - this was before FinalRound and similar, so I assume they just prepped really well and had help from a friend). Now, I try to tease out the narrative of how they developed expertise in each area (e.g. "I started as a business analyst making dashboards, but then I got really interested in how databases worked and ended up building my company's first data warehouse"), which tends to be pretty illuminating in its own right. This sounds a little obvious, but a surprising number of candidates will explain their work history without ever mapping it to the skills they developed at each role unless prompted.

There were a few with really good resumes who got caught out during the interview process. Testing explicitly for humility in the interview helped a lot with this.


To underscore this: the boneheaded decision Tesla is making is forcing customers to choose between a $99/mo subscription for FSD, and no ACC or lanekeeping assist otherwise. It's like letting people buy a subscription to the iPhone Pro Max 17 or not have any phone at all.

By the way, FSD ("full self-driving") is just as inaccurately named as Autopilot. I don't know why Tesla can't call their technology, like, CyberDrive or something else that isn't glaringly inaccurate.


Autopilot is just cruise control/lane keep assist/slow down when the car ahead of you does.

It's not close to FSD. Tesla wouldn't call FSD "Autopilot" because autopilots in the aircraft industry are pretty dumb (the first autopilot was literally a rope tied to the aircraft control stick). FSD used to be the expensive paid add-on feature, while Autopilot was a more reasonably priced upgrade.


I think they will release a dumbed-down FSD that's more akin to Autopilot, but for like $20 per month.


Thought the S was for supervised?


It is not. Though I noticed their main marketing page for FSD uses "Full Self-Driving (Supervised)". Not sure if this is new or how new.


macOS has this feature as well. It used to be called "Allow my iCloud account to unlock my disk," but it keeps getting renamed and moved around in new macOS versions. I think it's now tied together with remote password resets into one option called "allow user to reset password using Apple Account."


To be fair, that makes it even more ominous with Apple. At least Microsoft explicitly informs you during setup and isn't trying to hide it behind some vague language about "resetting your password."

