anon84873628's comments | Hacker News

Well, reporting the largest abuses of non-free software companies could be seen as a corollary to that.

I also found this confusing. And given how thorough and precise the author was with other elements, it seems like a deliberate gloss.

I'm skeptical of LLM "reasoning" but they sure as hell know a lot. That's what the embeddings are: a giant semantic relationship between concepts.

Embeddings are still mostly just vectors into n-dimensional K-means clusters. It isn't "knowing" that two things are related with evidence to back it up; it's guessing that two things are statistically likely to be related, based on trained patterns, and running with it without evidence.

It has no "semantic understanding" as we would define it. It's just increasingly good at winning cluster lotteries because we've increased the amount of training data to incredible heights.
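To make the vector framing concrete, here's a toy sketch with made-up four-dimensional vectors (real models use hundreds or thousands of dimensions, and these numbers come from no actual model). "Relatedness" falls out of geometric closeness, typically measured with cosine similarity:

```python
import math

# Toy 4-dimensional "embeddings" (made-up numbers, not from any real model).
embeddings = {
    "king":  [0.9, 0.8, 0.1, 0.2],
    "queen": [0.9, 0.7, 0.2, 0.2],
    "apple": [0.1, 0.2, 0.9, 0.8],
}

def cosine_similarity(a, b):
    """Similarity of direction, in [-1, 1]; higher means 'more related'."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "king" and "queen" point in nearly the same direction; "apple" does not.
# The model never checked any evidence -- the geometry is the whole claim.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # ~0.99
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # ~0.33
```

The point of the sketch: nothing here encodes *why* two concepts are related, only that their vectors ended up near each other during training.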


Encyclopedias and Wikipedia know a lot too. Knowledge isn't of much use on its own; it's about how you use it.

I agree with you, but a big drawback is that the accuracy or confidence of their output can't be estimated.

So they surely know a lot, but you are never sure if the info is correct or not.


Um, have you heard about the drone warfare in Ukraine?

There is a lot more than 3D printing going into that.

Yes, but would they have been able to develop the production capacity for resistance without it?

Certainly seems like the advantages of 3D printing came in clutch exactly when they were needed.


I think you’re missing the thrust of my comment and the responses. Nobody is saying 3-D printers are worthless, but if you remember what it was like when they were first emerging into the mainstream, you would think we would all have one in our living rooms by now just spitting out everything we need constantly. We would all be building our own furniture and repairing every niche thing in our house with them. We’d all be on some magical network sharing files with each other. We’d have a massive surge in printed guns.

Everything was theorized and it all was a variation of “nothing will be the same for anyone ever again,” not “some specific areas will be really different.”


Why not coach the people to use the AI correctly and continue rewriting until it is the correct length and level of detail? This whole thread is full of people talking as if you can only one-shot these things, or as if they are incapable of being succinct.

Verbosity and repetitiveness? Which tools are you using?

Tell it that you want a succinct professional email and it will do that. Give it examples of your own writing and it will match that style. If there's something you don't like, tell it to rewrite the part differently.

These are literally the things language models are best at.


> Tell it that you want a succinct professional email and it will do that. Give it examples of your own writing and it will match that style.

This is not what the parent I replied to indicated, nor what people usually do.


There's no reason to assume that their output is as bad as whatever you have come to expect.

The thing is, eventually these products will be more integrated into business workflows and have access to all the context, so the three paragraph expansion probably will be a significant improvement upon the original input.

And either that person won't be employed anymore, or the thing they were asking for in the first place will be automated for them.

I've already got my agent building a dossier on everyone we interact with. I haven't started training it on their writing style so I can mirror it back to them... yet.


Oh I know. In the past month I’ve moved several thousand dollars in spending away from companies that turned their support into a useless understaffed AI program.

The disease has spread to six figure enterprise contracts hallucinating about their own APIs.


This is a pretty gross privacy violation but it's also just... So depressing.

My employer already records every scrap of communications, I'm running everything on corporate infrastructure, and they sent the information to me.

Giving the AI knowledge of the org chart, who works on what, how they prefer to communicate, what their goals/biases are, is no different than what every ape implicitly collects in their own head.


As these products improve, one person sending the output and not the prompt will remain useless. The prompt captures the intent and level of real consideration of the person sending it, the receiver can augment that with additional information if they want to.

That's like saying I should just send the English teacher a description of what my essay will be about, instead of actually writing it.

It seems like no one responding to this understands scoped context retrieval.


Professional communication has a completely different goal than a student essay, and it's weird you conflate the two. A student paper is useless as an artifact, the actual value is for the student to learn how to write the paper. If a coworker sends me a long email for me to read it should provide some actual value.

I'm arguing against people who essentially say that running the LLM is useless; just send the prompt. Obviously that is true if the person adds zero value beyond it, but then that person probably sucked as a colleague before LLMs anyway. When you use an LLM agent correctly you are adding value beyond just the prompt, and those three additional paragraphs won't just be extra noise. Especially if the agent is automatically fed your personal context.

An essay states a hypothesis and then uses first- and second-party sources to validate it. I'm not conflating anything; it's just a good abstract example of knowledge-synthesis work, which is why we make kids write them.

A business strategy proposal is nothing more than a specific type of essay where the research sources are internal research results, market trend analysis, etc.

A technical design doc is an essay about the best way to implement a feature.

An "executive summary" is just an abstract, and the MBR puts the latest research citations and raw results in bullet points.


> I've already got my agent building a dossier for everyone we interact with. I haven't started training it on their writing style so I can mirror back to them... yet.

have you asked these people how they feel about this? have you asked them for permission, for their consent to do this with their communications to you?

what you’re doing sounds incredibly creepy. like, meta/facebook kind of creepy. granted, it’s at a more limited scale, but it’s still creepy af dude.

fwiw, if i was your colleague and you asked me how i felt about you doing this with me, i’d be seeing about getting HR involved.


Um, I absolutely expect my colleagues to update their internal model of me every time we communicate, to a greater or lesser degree depending on how much that communication deviates from their expectations, or how much new information it contains. In fact, that is essentially the purpose of communication.

Do you think you are not constantly being "influenced" to do what people want from you?

What do you think happens during a peer review or promotion decision?

What do you think the pile of data in SharePoint / GDrive represents?

You think HR will care about someone taking prolific detailed notes at work?

I did phrase my comment in a glib way to draw out this type of reaction. But this type of stuff is what "intelligence augmentation" will include, and the corporate panopticon is already alive and well anyway.


their mental model. the human being’s mental model. the one in our private head. not some model on a corporate server, some secret “dossier” on every interaction you’ve ever had with them. you’re basically creating your own black book / surveillance tool on everyone you interact with dude.

just because the corporations do this to us doesn’t make it okay to do it to each other. just because your employer does it doesn’t mean it’s okay to do to your co-workers. like, there has to be a degree of trust between colleagues dude.

compiling a record of every single thing anyone has ever said to you, an individual human being who is not a corporation or a machine, all for the purposes of “it makes my emails better” is just plain fucking creepy.

i think you might need some time away from the screen. seriously.

> i did phrase my comment in a glib way to draw out this type of reaction.

maybe, just maybe, it would be a good idea to take a bit of time to seriously think about why being glib about this super creepy thing you’re doing is not a good thing.

bit of self-reflection. the thing us humans are supposedly still capable of doing and the machines are not.


Really makes you wonder about free will and information determinism.

Well, implicit in the TOS of things like Gmail, etc. is already permission to do this.

That’s not how the real world works. You will be kicked out of the workplace and rightly so.

does that make it morally okay to do with your colleagues?

like, jfc, these are fucking people we’re talking about building “dossiers” on. people the person works with, where a degree of trust and bonding is necessary. people they probably spend at least a quarter of their waking hours interacting with.

and your defence for it is “well, google does it”.

the best engineers know what not to build. they don’t build every single thing under the sun because they can.

also, don’t you have to explicitly agree to google’s terms for that stuff to use their services?


Currently they are inferior.


It is a type of executive dysfunction or mental illness. They need to be in a conservatorship.

I think people are skipping over the fact that Google has had cars driving around taking photos for 20 years. I imagine that was used to build the world model in the first place.
