Well, like I said, there are hidden incentives behind the scenes; in my case, the hidden incentive is that the requester/client is one of the company's subpar brokers, and the PM probably decided to just offer an average level of commitment, not going above and beyond. Hence the plan was to do exactly what the broker wants, even though that was messy and inferior. You can't write down that kind of motivation on paper anywhere.
---
I said it because I did the analysis and realized that if I implement the original version, which is basically a crazy way to iteratively solve the MIP problem, it's much harder to reason about internally, and much harder to code correctly. But obviously it keeps the broker happy ("the developer is doing exactly what I said").
For what it’s worth, I’ve fallen into the trap of building an “ideal” system that I don’t use, whether that’s a personal knowledge DB, automations for tracking habits, etc.
The thing I’ve learned is that a new habit should have really, really minimal maintenance and require minimal new skill sets beyond the actual habit. Start with pen and paper, and make small optimizations over time. Only once you have ingrained the habit of doing the thing should you worry about optimizing it.
This does not match my observations. Also, what I've heard from experts is that 'intelligent' people are more suggestible. The way society measures intelligence is thinking speed, which tends to correlate with learning speed.
Some people learn surface-level information quickly without deep integration; what educational researchers sometimes call "shallow learning." And specialization can create blind spots.
I'm sure there is an association between personality traits {openness, conscientiousness, extraversion, agreeableness, neuroticism} and preferences to specialize or learn broadly. That is separate from the phenomenon of nearly all cognitive tasks being positively correlated with each other, e.g. verbal scores are positively correlated with math and musical scores. This is referred to as the g factor in the literature.
My overall point being: yes, people learn differently, but it is also true that there exist outliers in general intelligence.
I've very often seen people with good memories regarded as intelligent. They integrate "knowledge" by just recording verbatim phrases. That takes them a very long way... But when the time comes to analyze something, they break down. I've fallen into that myself: people I regarded as intelligent, because they "knew" so many things, could not keep up with the most basic syllogism; they were just stupid.
In my experience, when I am able to pick something up quickly it's because I can exploit cross-domain knowledge. I have ready-made analogies to things I understand, or I understand the domain that informs the fundamentals of the new domain.
You still need two standard deviations above the average college student to get into med school, as a rough proxy for intelligence. The bottom threshold for doctors is certainly higher than for lawyers.
In general your comment was false. You're just lying and making things up. There are lower-tier medical schools in California, Massachusetts, and almost every other state. The state, whether it's Kansas or somewhere else, is almost totally irrelevant to the quality of physicians produced.
No I'm not. I'm referring to a specific bad school(s) in Kansas. I never made a comment about Kansas itself.
I never said the state is correlated with the quality of the doctor, or even that the quality of the school is associated with the quality of the doctor. You made that up. Which makes you the liar.
>If you're referring to a specific school then name the school instead of making lame low-effort comments about a state.
You're fucking right. I should've named the specific school. (And I didn't make a comment about the state; I made a comment about school(s) in the state, which is not about all schools in the state.)
That's what I should do. What you should do is: don't accuse me of lying and then lie yourself. Read the comment more carefully. Don't assume shit.
No point in continuing this. We both get it and this thread is going nowhere.
Huh? So your complaint is that you keep getting black doctors? That’s dumb, but whatever - why not just… get a white doctor? Or Asian or whatever you think is the smart one?
Black students (among other minorities) with unacceptable MCAT scores (as in, scores that would get an applicant of another race rejected) are admitted at 6-10x the rate of other applicants with similar scores. The motivation is that doctors should match the demographic they treat, and minority doctors are underrepresented, so they should be accepted at higher rates: https://www.uclahealth.org/news/article/clinical-outcomes-pa...
The obvious outcome is that minority students, being less prepared as measured by the MCAT and somewhat set up for failure, have a much higher failure rate, with black students being 85% more likely to leave medical school than white students: https://news.yale.edu/2023/07/31/black-md-phd-students-exper...
The system was, as is the stated goal by all, set up to pass minority students who would previously have been rejected, at every step of their becoming a doctor, to provide a net positive for minority populations, since it's accepted that you'll get the best outcome if your doctor is the same race as you.
Thank you, this is it exactly. And my biggest concern isn't the % of unprepared students who leave medical school. It's the % who stay and get passed through.
"You know what they call the most unqualified insert-identity-here person in the med school class of 2025 who squeezed by because it would look bad if they didn't?"
"Dr."
Which is why 99% AI-driven diagnosis can't come fast enough.
I feel like only the last paragraph was relevant here - I don’t particularly care why he wants a doctor of race X instead of Y. My question remains the same: why not just… get a doctor with the skin color you prefer? You’re not exactly assigned one for life at birth.
I think you're bringing up a different (very related) point than him, with both points having truth:
1. You're free to pick the race of your doctor. Matching your race is a data-driven positive.
2. 30 years ago, the system made sure a minority doctor was at least as competent as a white doctor (and probably more so, due to discrimination). These days, the system is intentionally and deliberately set up to help pass less competent minority students, due to the positives of #1.
The memories are the moat. Regardless of current or future capability, most users already view ChatGPT as a personal confidant that they are investing energy into building a relationship with. That will be a far stronger moat than anything else.
I’ve had a couple of instances where, when I describe a requirement, ChatGPT would not list an open-source project like n8n and happened to only remember paid alternatives.
It’s an advertiser’s wet dream, being able to slowly creep and manipulate even the most uninterested people into using a product.
And it’s so personalized that ChatGPT may even refuse to tell you about products that are not paying them a cut, and this can put a company entirely out of business, because unlike with search engines, the customer might not even learn about your product despite directly asking for it.
Pretty sure FTC rules force bloggers to disclose if they're being paid to promote a product. Maybe someone will be able to make a lot of money suing OpenAI if they violate those rules.
Maybe not direct automation, but an ask-respond loop over your HA data: how are you optimizing your electricity and heating/cooling with respect to local rates, etc.?
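As a concrete (hypothetical) example of the kind of rate optimization that loop could surface: given hourly electricity rates, pick the cheapest contiguous window to run a deferrable load like a water heater. The rates below are made-up illustrative values, not real tariff data, and `cheapest_window` is a sketch, not a Home Assistant API.

```python
# Hypothetical sketch: choose the cheapest contiguous block of hours to
# run a deferrable load (e.g. a water heater), given hourly rates.

def cheapest_window(rates, hours_needed):
    """Return (start_hour, total_cost) of the cheapest contiguous run."""
    best_start, best_cost = 0, float("inf")
    for start in range(len(rates) - hours_needed + 1):
        cost = sum(rates[start:start + hours_needed])
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start, best_cost

# 24 hourly rates in cents/kWh (illustrative values only)
rates = [12, 11, 10, 9, 9, 10, 14, 18, 22, 20, 19, 18,
         17, 16, 16, 17, 19, 24, 26, 25, 21, 17, 14, 13]
start, cost = cheapest_window(rates, 3)
print(start, cost)  # → 2 28 (hours 2-4: 10 + 9 + 9)
```

In practice you'd feed the real rate schedule and your HA usage history into something like this (or hand both to an LLM in the ask-respond loop) instead of hardcoding values.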
1. Shareholders are entrusting the CEO with their money to turn it into more money. Networking can absolutely be worth that much. Especially at the startup stage, the CEO is selling the ability to build the money maker, which requires knowing the right people.
2. In your hypothetical, who is deciding these high-level "investments" into the company? Who is accountable for their strategic success or failure?
Sure, but that is less work. You can also have separate LLM QA prompts that assess test-suite behavior against production behavior.
Ultimately you are right, the buck needs to stop somewhere, but at least in my experience, the more quality/test checks you add as LLM workflows, the higher the rate of success.