Hacker News | rafaelero's comments

Location: Portugal

Remote: Yes

Willing to relocate: Yes

Technologies: Python, Deep Learning (PyTorch), NLP, Graph Knowledge Systems, Vector Search (Cosine Similarity), PostgreSQL, React/Svelte.

Résumé/CV: https://www.linkedin.com/in/rodrigo-heck-7280a218a/

Email: rodrigo.heck29@gmail.com

Hi HN, I’m Rodrigo. I’m a software engineer working at the intersection of applied ML and scalable systems. Lately, I’ve been obsessed with solving the "context window" problem through graph-based knowledge representations. My recent work focuses on treating graphs not just as data stores, but as a method for information compression and long-term memory permanence in AI agents. I develop pipelines that integrate LLMs with structured memory to allow for more efficient knowledge generalization and recall.

Key areas of expertise:

1. AI Memory Architectures: Building persistent, graph-oriented systems for reasoning and long-term retrieval.

2. ML Systems Engineering: Developing TTS systems, semantic search tools, and RAG pipelines that go beyond simple vector lookups.

3. Full-Stack Foundations: Bridging the gap between a PyTorch model and a production-ready Svelte/React interface, backed by robust Linux/Postgres infrastructure.

I’m looking for a role where I can contribute to the "Next Step" of LLM integration—moving past simple chat interfaces toward systems with true persistent memory and structured reasoning.
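To make the graph-as-memory idea concrete, here's a toy sketch of the kind of triple-store recall layer I mean (all names here, `GraphMemory`, `add_fact`, `recall`, are hypothetical illustrations, not my actual codebase):

```python
from collections import defaultdict

class GraphMemory:
    """Toy triple store: facts are (subject, relation, object) edges."""
    def __init__(self):
        self.edges = defaultdict(list)  # subject -> [(relation, object)]

    def add_fact(self, subj, rel, obj):
        self.edges[subj].append((rel, obj))

    def recall(self, entity, max_hops=2):
        # Breadth-first expansion: collect all facts within max_hops of entity.
        # This is the "compression" angle: you retrieve a small, relevant
        # subgraph instead of replaying a whole conversation history.
        frontier, seen, facts = {entity}, set(), []
        for _ in range(max_hops):
            next_frontier = set()
            for s in frontier:
                if s in seen:
                    continue
                seen.add(s)
                for rel, obj in self.edges[s]:
                    facts.append((s, rel, obj))
                    next_frontier.add(obj)
            frontier = next_frontier
        return facts

mem = GraphMemory()
mem.add_fact("agent", "works_on", "graph memory")
mem.add_fact("graph memory", "compresses", "context")
print(mem.recall("agent"))  # both facts, reachable within 2 hops
```

In a real pipeline the recalled subgraph would be serialized into the LLM prompt, which is where the context-window savings come from.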


I like you.


Wanna get married?

(Ah man, I’ve done it again. Please don’t hurt me, for intruding on your personal circumstances with my mouth sounds and finger symbols)


I have no idea where this idea that the Internet is toxic to children comes from. Is this some kind of moral panic? Weren't most of you children/adolescents during the 2000s?


Are you saying that social media isn't harmful to children?


This is like rhetorically asking, "Are you saying that Doom and Marilyn Manson aren't harmful to children?"

The problem with social media isn't the inherent mixing of children and technology, as if web browsers and phones exert some action-at-a-distance force that undermines society; it's the 20 years or so these companies spent weaponizing their products into an infinite Skinner box. Duck walk, Zuckerberg.

This is all assuming good-faith interest in "the children," which we cannot assume when what governments stand to gain from this is a total, global surveillance state.


Last time I checked, there's no scientific consensus on whether social media causes harm at all. The best studies have found null or very small effects. So yeah, I'm skeptical that it's harmful.


The equivalence of emotions to reward functions seems pretty obvious to me. Emotions are what compel us to act on the environment.


Idk, we seem to be at the cusp of autonomous driving. Transportation is something like ~8% of the world's GDP. Payroll is, what, 30% of that? It seems like we could already recoup all AI investment by conquering this one application.
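Back of the envelope, using those two figures plus an assumed world GDP of roughly $100T (a rounded guess; the actual number varies by year and source):

```python
# All three inputs are rough assumptions, not sourced figures.
world_gdp = 100e12          # USD, assumed ~$100T
transport_share = 0.08      # ~8% of GDP is transportation
payroll_share = 0.30        # ~30% of that sector is payroll

addressable_payroll = world_gdp * transport_share * payroll_share
print(f"${addressable_payroll / 1e12:.1f}T per year")  # → $2.4T per year
```

So on these assumptions the addressable payroll is on the order of a couple trillion dollars a year, which is the scale of the claim above.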


Transportation AI does not remotely require the kind of heavy investment needed to run LLMs.


Yeah, I suspect the reason the author didn't find a relationship between IQ and happiness / life satisfaction is that those studies were overcontrolling for intermediate variables. If money makes us happier and people with high IQ make more money, you will underestimate the relationship if you control for income.
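You can see the overcontrol effect in a toy simulation: generate data where IQ affects happiness only through income, then compare the raw correlation with the partial correlation controlling for income (all coefficients below are made-up illustration values):

```python
import random
import statistics as st
from math import sqrt

random.seed(0)
n = 20_000
iq = [random.gauss(100, 15) for _ in range(n)]
# Income depends on IQ; happiness depends only on income (full mediation).
income = [0.5 * q + random.gauss(0, 10) for q in iq]
happy = [0.3 * m + random.gauss(0, 5) for m in income]

def corr(x, y):
    mx, my = st.fmean(x), st.fmean(y)
    cov = st.fmean((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (st.pstdev(x) * st.pstdev(y))

r_ih = corr(iq, happy)                    # total IQ-happiness association
r_im, r_mh = corr(iq, income), corr(income, happy)
# Partial correlation of IQ and happiness, controlling for income:
partial = (r_ih - r_im * r_mh) / sqrt((1 - r_im**2) * (1 - r_mh**2))
print(round(r_ih, 2), round(partial, 2))
```

The raw correlation comes out clearly positive, while the income-controlled estimate collapses toward zero: exactly the underestimate described above, even though IQ genuinely raises happiness in this simulated world.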


Location: Portugal

Remote: Yes

Willing to relocate: Sure

Technologies: Deep Learning (PyTorch), ReactJS, Svelte, Natural Language Processing (NLP), Python (Flask, OpenAI API), Graph Knowledge Systems, Vector Search (Cosine Similarity), PostgreSQL, Linux System Administration

Résumé/CV: https://www.toptal.com/resume/rodrigo-heck

Email: rodrigo.heck@toptal.com

Lately, I’ve been exploring graph-based knowledge representations as a method for information compression and long-term memory permanence in AI systems — developing pipelines that integrate LLMs with structured memory to retain and generalize knowledge efficiently.

My background bridges applied machine learning and software engineering, from building text-to-speech systems and semantic retrieval tools to experimenting with persistent, graph-oriented architectures for reasoning and recall. I’m looking for opportunities at the intersection of AI systems engineering, knowledge representation, and scalable memory architectures.


It's honestly not that deep. If AI increases productivity, we should accept it. If it doesn't, the hype will eventually fade out. In any case, attachment to the craft is a bit cringe. Technological progress trumps any emotional attachment.


I sincerely wonder how some people go through life with zero emotional attachment to any of their hobbies or passions - or maybe you're just extremely unfortunate and live your life completely focused on producing output for someone else.


Contributing to human welfare through technological advances and productivity gains is a much better place to deposit my hopes and emotions, imo.


The problem with this approach to text generation is that it's still not flexible enough. If, during inference, the model changes its mind and wants to output something considerably different, it can't, because there are too many tokens already locked in place.


That's not true. You could just have looked at the first GIF animation in the OP and seen that tokens disappear; the only part that stays untouched is the prompt. Adding noise is part of the diffusion process, and the code that does it is even posted in the article (ctrl-F "def diffusion_collator").


Looks like you are correct.


Could maybe be solved by reintroducing noise steps in between denoising steps?
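That remasking idea can be sketched with a toy stand-in for the model (`toy_denoiser`, `generate`, and all constants here are hypothetical illustrations, not from the article):

```python
import random

MASK = "<mask>"
random.seed(1)

def toy_denoiser(tokens):
    # Stand-in for a real diffusion LM: fills each masked slot with a guess.
    vocab = ["the", "cat", "sat", "on", "mat"]
    return [random.choice(vocab) if t == MASK else t for t in tokens]

def generate(length=8, steps=5, remask_frac=0.25):
    seq = [MASK] * length
    for step in range(steps):
        seq = toy_denoiser(seq)
        if step < steps - 1:
            # Re-noise between denoising steps: re-mask a random fraction
            # of positions so the model gets a chance to revise earlier
            # commitments instead of being locked in.
            k = max(1, int(remask_frac * length))
            for i in random.sample(range(length), k):
                seq[i] = MASK
    return seq  # last step denoises without remasking, so no masks remain

print(generate())
```

A real implementation would re-mask the lowest-confidence positions rather than random ones, but the control flow is the same.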


Hasn't anybody added a backspace token to an LLM's output vocabulary yet?


Things are already going severely wrong in 1% of cases. At this point, not getting a second opinion from an LLM is irresponsible, imo.

