kristopolous's comments

This was like the biggest tech drama of 2007, but mostly because Thiel successfully ran the journalism company that reported it out of business.

https://en.wikipedia.org/wiki/Gawker


While I have no sympathy for Thiel, outing someone against their will is just wrong.

Journalism has to be responsible.

This article is a clusterfuck, no pun intended.


You mean because a trashy news article outed somebody as gay for clicks? By helping someone else whose sex tape was leaked without their consent?

I’m sure you wouldn’t want your own private life leaked this way.


If journalists were constrained by the consent of every public figure and institution they mention, all you'd have is flowery propaganda.

You can see what this would look like already by searching for prnewswire https://news.google.com/publications/CAAqKQgKIiNDQklTRkFnTWF...

Companies pay to place those things and some outfits run them. It'd all look like that.

Anyway, journalists reach out for comment and are supposed to consider the response of the parties involved, but that's about it.


Seriously? Would you be okay with a "journalist" showing the whole world your sextape?

1. I'm not a public figure

2. Sure. Have a great time


Not publishing articles on other people’s sex lives is not “flowery propaganda”. Christ.

That was an awful thing to defend. Have a sense of shame and apologise, Chris.


No. Finger wag elsewhere. Some journalism looks like TMZ and Business Insider.

You know it’s wrong.

You're litigating something I had no involvement in that happened 20 years ago as if I currently have the moral agency to change the outcome.

I don't know why you're engaging this way but this conversation is definitionally a waste of time.


I’m litigating this conversation - you supported outing somebody and publishing someone else’s sex tape. As an adult human being you know this is wrong. You are mistaken and you know you are mistaken.


No. I support the concept of an open society with an institution of journalism that isn't stultified by powerful forces seeking to control information.

Sometimes that's sex tapes and Epstein Island

It's ok. We're not going to resolve things here


Who are also active in Republican politics.

https://cht.sh/ is my fave

curl it.

    curl cht.sh/ffmpeg

you can even search

    curl cht.sh/~screenshot


They've been around a couple years. This is the first model that has really broken into the anglosphere.

Keep tabs on aihubmix, the Chinese OpenRouter, if you want to stay on top of the latest models. They track the likes of Baichuan, Doubao, BAAI (Beijing Academy), Meituan, 01.AI (Yi), Xiaomi, etc.

Much larger Chinese coverage than OpenRouter.


> This is the first model that has really broken into the anglosphere.

Before Step 3.5 Flash, I'd been hearing a lot about ACEStep being the only open-weights competitor to Suno.


>first model that has really broken into the anglosphere.

Do you know of a couple of interesting ones that haven't yet?


Doubao (ByteDance) Seed models are interesting.

Keep your eye on Baidu's Ernie https://ernie.baidu.com/

Artificial Analysis is generally on top of everything:

https://artificialanalysis.ai/leaderboards/models

Those two are really the new players

Nanbeige, which they haven't benchmarked, just put out a shockingly good 3B model https://huggingface.co/Nanbeige - specifically https://huggingface.co/Nanbeige/Nanbeige4.1-3B

You have to tweak the hyperparameters like they say, but I'm getting quality output, commensurate with maybe a 32B model, in exchange for a huge thinking lag.

It's the new LFM 2.5


Never heard of Nanbeige, thanks for sharing. "Good" is subjective though; which tasks can I use it for, and where should I avoid it?

It's a 3B model. Fire it up. If you have Ollama, just do this:

    ollama create nanbeige-custom -f <(curl https://day50.dev/Nanbeige4.1-params.Modelfile)
That has the hyperparameters already in there. Then you can try it out

It's taking up about 2.5GB of RAM.

My test query is always "compare rust and go with code samples". I'm telling you, the thinking token count is... high...

Here's what I got https://day50.dev/rust_v_go.md

I just tried it on a 4gb raspberry pi and a 2012 era x230 with an i5-3210. Worked.

It'll take about 45 minutes on the Pi, which, you know, isn't OOM... so there's that.


Thanks!

Doesn't work. You don't have full control over every color all the time and you're going to make assumptions. That's kinda the point.

My streaming markdown renderer does something like that https://github.com/day50-dev/Streamdown

You define a baseline color in HSV and then everything else is a multiplier of that

For example:

    [style]
    HSV = [0.7, 0.5, 0.5]
    Dark = { H = 1.0, S = 1.2, V = 0.25 }   # Make dark elements darker and slightly more saturated
    Symbol = { H = 1.0, S = 1.8, V = 1.8 }  # Make symbols more vibrant

As a result, you can simply shift the base HSV to your preference in the config and things don't look like garbage; you don't have to hand-tweak every color to get something legible.

For example, this simple loop https://github.com/day50-dev/Streamdown?tab=readme-ov-file#c...

It's effectively a swatch
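The multiplier idea can be sketched in a few lines of Python. This is a hypothetical illustration using the stdlib `colorsys` module, not Streamdown's actual code; the `derive` helper and variable names are mine:

```python
import colorsys

# Hypothetical sketch of the multiplier scheme (not Streamdown's actual code):
# each element's color is the baseline HSV scaled per channel, then clamped.
def derive(base_hsv, mult):
    h, s, v = (min(1.0, b * m) for b, m in zip(base_hsv, mult))
    r, g, b = colorsys.hsv_to_rgb(h, s, v)
    return tuple(round(c * 255) for c in (r, g, b))

base = (0.7, 0.5, 0.5)                  # the [style] HSV baseline
dark = derive(base, (1.0, 1.2, 0.25))   # Dark multipliers: much darker
symbol = derive(base, (1.0, 1.8, 1.8))  # Symbol multipliers: more vibrant
```

Shift the baseline and every derived color moves with it, which is why the palette stays coherent.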


They should actually be science - mostly reproducing/validating things in rigorous methodical ways.

There are far too many adults who don't seem to grasp the basic principles of what the discipline of science is.

At the earliest levels, fairs should instead be replaced with kids having to come up with experiments showing that something is or is not real, based on an existing demonstration.

For instance, I'll make the claim that each different color of Trix has a unique flavor, and that you can test this because when you eat a certain color you always taste the same flavor. Then the kids have to come up with experiments showing that although that observation is true, the claim that the colors are flavored differently is false.

Another example: I'll claim that a tall, slender glass holds more liquid than a short, wide one of equal volume, and as evidence I'll show that companies charge more for the tall, slender one and people willingly pay more for it.

Essentially you start out with something where the response is "that's true but," and then an experiment is constructed from there.
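The Trix claim, for instance, can be checked with a blinded taste test. A toy simulation (made-up numbers, purely to illustrate the statistics) shows what the data look like if every color actually shares one flavor:

```python
import random

# Toy simulation: if all colors taste identical, a blindfolded taster's
# color guesses can only land at chance level (1/4 for four colors).
random.seed(0)
colors = ["red", "green", "blue", "yellow"]
trials = 400
correct = sum(random.choice(colors) == random.choice(colors) for _ in range(trials))
accuracy = correct / trials  # hovers near 0.25, falsifying "unique flavors"
```

Accuracy pinned to chance is exactly the "that's true, but" result: you always taste the same flavor, yet the colors aren't flavored differently.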

This approach allows for some important lessons.

1. You don't have to successfully explain the phenomena to demonstrate the claim is false.

2. That science is about skepticism until you're forced to accept something essentially by exhaustion of objections

3. That poorly constructed tests can appear to reproduce a phenomenon that you're trying to isolate, etc.

The change is to not make it open ended or have projects or theme fairs. No posterboards or presentations.

Instead, it's a form of debate club with interlocutors, where groups try to fake out and fool the others, who in turn have to separate the real from the fake.

The general public accuses people of putting their "trust in science"; this approach will kill that. No, science is all about strong methodological distrust - that's one of its basic premises.


> 1. You don't have to successfully explain the phenomena to demonstrate the claim is false.

Sometimes I feel like this is taken to the extreme by non-scientists who say that lack of evidence is in itself evidence. Depending on the circumstance and the tests, that can be true, but it's often a default mode.


> Sometimes I feel like this is taken to the extreme for non-scientists that say that lack of evidence is in itself evidence.

But of course, the lack of evidence is itself evidence, if you have a sufficiently large data sample and haven't seen the thing you're looking for. Keep pursuing the increasingly unlikely outcome, and you're just engaged in science-flavored religious catechism.

I see the fallacy routinely misapplied by all sides of most hot-button science-meets-politics issues. A great many scientists will regularly substitute their own pet theories for conclusions, and strenuously ignore the lack of supporting evidence, citing the old "absence of evidence is not evidence of absence" saw. Then they turn around and mock "non-scientists" for doing the same thing. Neither side is right, of course, but dressing in a lab coat doesn't make it better.

Just to circle it back to the topic of science education, I'd love to see a science curriculum at the middle- and high-school level that equipped people to reason through this kind of thing by focusing on tearing apart pop research. A "science fair" is actually hard to do well just because most science fails, but a "scientific bullshit fair" would have almost infinite fertile ground from nutrition studies alone.


You can believe in a phenomenon and still do good science. It all depends on whether your experimental design is free of bias. Randomizing, blinding, instrumentation, pre-registration, statistical rigor - there are all sorts of ways to do this. I say this because I think non-scientists regularly say that science is biased because scientists are biased. The cool thing about science is that you don't need any pretense of personal objectivity as long as your experiment is independent.

> I say this because I think non-scientists regularly say that science is biased because scientists are biased.

As in any other field, the science is only as good as the scientist who produced it. For example, there is a serious reproducibility crisis in multiple fields, like psychology and the social sciences. In the latter, it is hard to say whether it's due to systemic educational failure of the PhD students in those fields, or to field and personal politics merging too tightly.

Unfortunately, all it takes is one bad scientist to discredit the rest, e.g., Wakefield.


I know what you are saying, but I'm arguing that science is great because it produces better output than the people that make it as long as they stick to good methods.

As for reproducibility, it's my opinion that it has more to do with incentives and constraints than with the ethics or intellectual capacity of the researchers (although those are real components too).


> I'm arguing that science is great because it produces better output than the people that make it as long as they stick to good methods.

I don’t think we have a contradiction here. What I am saying is that science is made by people, and we as scientists have to be extremely vigilant today not to let “ends justify the means” crowd to use the name of science for their own agenda.


Yeah, I might be nitpicking. I agree with you. I get frustrated when people conflate a neutral scientific method with a neutral scientist: the idea that a scientist cannot explore political/ideological topics with robust methods because they are biased. Do you see the sleight of hand I'm talking about? I've noticed it a lot in the reproducibility discourse.

> Do you see the sleight of hand I'm talking about?

Yes. I have sociologists in my extended friend group, and we had a couple of heated discussions on why interviewing 20 subjects is not sufficient for the conclusions they draw. But, it turns out, it's the norm in their specific field. Go figure.


Did you get to the end of the article?

Still misses the mark.

It should be designed for the 95% of students who are not going to be scientists, so that they become better citizens and we aren't flooded with misguided people when it comes to funding and public policy. Instead, we want people to have the basic literacy to know that, say, MMS (Miracle Mineral Solution) is wildly unsound.

It should be seen as a society-wide improvement project, like the decline of smoking or how people are exercising more and eating healthier than half a century ago.

This is the same kind of project.

People should know "What is science? What do people who do it do? If someone claims to do science, how can I know if their claims are legitimate? What are the red flags? If I'm presented with a scientific looking document, what questions should I ask?"

They should know it's a self-correcting system and not a belief tribe.

Example: My friend sent me this anti-vax report a few years ago showing how children who didn't get vaccinated had a lower reported occurrence of a number of diseases. I mean obviously - the parents who distrust clinicians aren't going to get their child diagnosed. Of course the reported occurrence is lower. Measurement bias.

That's the kind of thing I'm talking about. A graph that was so convincing to my friend shouldn't have been. They should have been inculcated from an early age to ask such questions and shouldn't have fallen for such bad science.
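That ascertainment problem is easy to demonstrate with a toy simulation (all numbers made up for illustration): both groups get sick at the same true rate, but illnesses in one group get diagnosed and recorded far less often.

```python
import random

random.seed(1)
TRUE_RATE = 0.10  # identical underlying disease rate in both groups
# Chance that an illness actually gets diagnosed and recorded (made-up numbers):
DIAGNOSIS_RATE = {"vaccinated": 0.9, "unvaccinated": 0.4}

def reported_rate(group, n=10_000):
    sick = sum(random.random() < TRUE_RATE for _ in range(n))
    recorded = sum(random.random() < DIAGNOSIS_RATE[group] for _ in range(sick))
    return recorded / n

# The unvaccinated group *reports* fewer cases despite an equal true rate.
```

The graph drawn from `reported_rate` would look exactly like my friend's chart, with zero difference in the actual disease burden.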

There are a bunch of subjects the general public can probably slack on without dire consequences. Adults can be ignorant of, say, American literature, chemistry, or ancient history without much effect. Core scientific literacy, however, is proving to be one of the important ones.


If it's for the 95% of students who aren't interested in being scientists, then it shouldn't be an extracurricular science fair but part (maybe a big part) of the regular curriculum in science class. Science fairs are for the science enthusiasts, I think.

Yep, this should be mandatory. The "science nerds" can still nerd out on more complex topics, but the "normies" should be required to prove a basic level of scientific literacy.

still misses the mark

Okay, but we still want you to admit that GP was subsumed by the article.


It's a different targeting. The article is great in teaching modern empirical methods

However, the problem lies in the lesson of New Math, which was an effort to teach the mathematics of actual mathematicians starting in elementary school, as opposed to the number-manipulating arithmetic most people need.

As a result only future mathematicians really understood it and most people were baffled by it.

I like the sentiment but we have to acknowledge that our favorite thing in the world, whatever it is, is simply unapproachable to others

I run into this problem constantly with my software efforts. I think my stuff is obvious but everybody else thinks it's too arcane and obscure.

To reference a deep cut from 1945, Norman Corwin, Variety magazine, article "Radio not in a class by itself"

"[Radio] rises no higher and sinks no lower than the society which produces it." A few paragraphs later, "I believe people get the kind of radio, or pictures, or theater, or press they deserve... The gist of what I am saying is that the radio of this country cannot be considered apart from the general culture... If the American people support soap operas and tolerate singing commercials; if they pay higher honor to Gildersleeve than to Beethoven, then it is not primarily the job of radio to elevate their tastes."

And so with education - we can only build from the legos in the bucket.


I like the science and engineering fairs, though, because "I built a thing" doesn't fit into the scientific model. I always had this trouble at the fair with robotics, because what's the hypothesis? "I hypothesize I can build a robot that..." - no, that's not science, that's engineering.

To be fair, a lot of science doesn't follow the scientific method. I've yet to see an applied mathematician (to speak only of what I know) come up with a hypothesis, it's usually rather: here's how people solve this problem currently, this has this and that drawback, and our paper introduces a new method that solves bigger problems faster/new classes of problems.

The same could be said of theoretical work: here, we tightened up an inequality.

This is also research, not all of it is experimental!


Yeah I get it when giving projects to kids it's easier to be like "Here are the 5 sections you have to do" and then grade them on how well they did the 5 sections... but that's really limiting the spirit of the thing if the idea was to let the kids off the leash and see where they can take their minds.

I concur - research can include both scientific and engineering research.

I note MIT (like many universities) has a department of Electrical Engineering and Computer "Science".


It's interesting seeing the EECS and CS+CompENG programs splitting into two CompE and AI programs currently. This is happening in my department where we are standing up an AI major and we're all asking "Is the CS department the AI department now or what? Where do all the systems people go?"

It's also really damned hard to come up with an interesting, novel question that is testable with resources available to the average schoolchild in a reasonable amount of time.

Allowing engineering opens up the workable space by quite a bit.


Questions don't need to be new to be science.

My brother and I, for example, did an experiment where we tested the pH of various water bodies around us. The hypothesis was based on local drainage patterns.

Not a new question... Still scientific.


That's not science then. Just because engineering is equally enigmatic to most people as science doesn't mean they're the same.

Science means applying the scientific method.

"I hypothesize that method X is the best way to construct a robot that does Y and I've tested methods A, B, and C to validate that claim"

That's science.

Just building something can actually be a pursuit of non-science... this is why many engineers have not-invented-here syndrome. They think they're thinking scientifically, but they aren't. Thus, instead of checking their assumptions, they run with them.


I think you're agreeing with me? Or maybe you meant to reply to someone else because that's my point: building the robot is not science, but it's something I think kids should still do, that's why science and engineering fairs are my preference. Or maybe there should be engineering fairs?

No I'm saying a science and engineering fair doesn't really make sense. There's science in engineering, but just any engineering task is not scientific necessarily.

Oh, I see. I think they're combined as one fair instead of a separate science fair and an engineering fair because there's not enough students to justify having two separate events.

Fortunately computer science is often in the school of engineering.

We did ISEF in high school. I always wanted to build something but just couldn't figure out a way to justify it. My teacher usually just said "do an experiment, please." I usually just did some lame science project that didn't really produce anything interesting.

Freshman year: effect of light wavelength on basil plant growth. I shined a black light, a regular light bulb, and a very bright IR light at some basil plants. I probably could have made it better by doing colored lights with controlled lux levels. Didn't win anything and the judges were unimpressed.

Sophomore year: effect of water pH on electrolysis gas production. I varied the pH of some water and put it in an electrolysis apparatus. I actually got 3rd place in the high school chemistry group, surprisingly. It wasn't a very rigorous project, but I guess no one else did anything terribly interesting. Not enough to go on to the regional or state level (not sure what came next). Even my parents were surprised.


btw, if you want to keep up with the FreeDOS community, the Facebook group is unfortunately the best place to do it https://www.facebook.com/groups/freedosproject/

I refuse to create a facebook account. There are alternatives[0].

0. http://www.svardos.org/?p=forum


Great for svardos users, not ideal for the rest of the FreeDOS community, however.

[edit] Additionally, it looks like people's IP addresses are included with their posts. YUCK!


> [edit]Additionally, it looks like people's IP address is included with their posts. YUCK!

As an aside... I wonder how many people use VPNs among the HN crowd. I've been on a VPN so long, I feel really exposed when I'm forced to access anything without one.


I've never heard of SonarQube... this looks very enterprisey... isn't this just prompt engineering over the source with a harness? Why am I clicking through all this signup flow?

I'd buy the "put this in your .git/hooks" workflow... but I don't know what's going on with this thing.

The strongest opensource contributors tend to be kinda weird - like they don't have a google account and use some kind of libre phone os that you've never heard of.

What a "real" solution would look like is some kind of "guardrails" format where they can use an LSP or Tree-sitter to define do's and don'ts and then have a secondary auditing LLM punt the code back.

There may be tools (CodeRabbit?) that do this... but that's realistically what the solution will be - local LLMs, self-orchestrated.


SonarQube does static analysis and lets you set your own levels. Yes, enterprises use it for code and test quality as well as security checks.

I was just saying that good engineers can guide GenAI into creating good code bases. Seeing that I got voted down, not everyone agrees.


Eh, it sounded like you were hawking your own product. It doesn't look like you are - this seems to be a widely adopted Fortune 100 product without large brand-name awareness - but that's the risk with HN.

There's a lot of people trying to hustle their stuff on here. Strongly frowned upon unless it's genuinely free and even then...

Maybe something like "at work we use something called SonarQube and I've been using it on my own stuff; it works really nicely" might have been better.


I was mostly pointing out that you can still create reusable open source software with GenAI. I couldn't care less what tools you use, but I do think strong engineering principles are the common denominator.

MVC is why there are 3 languages: HTML, CSS, and JavaScript.

The separation is already there

People have just failed to understand it


> The separation is already there

I wonder how you would map these three onto M, V, and C :-)


In the late 1990s there were a number of W3C working groups that were very familiar with MVC paradigms.

Out of those multi-year working groups came Cascading Style Sheets and their revisions, along with JavaScript features like DOM access.

The dominant paradigm is to let their work go unread and call it a flex.

I've frequently been belittled and mocked online when I encourage people to take a deep dive and learn something new.

I'm not here to debate people who think mockery and dunking on people that have done hard work is good faith behavior


Then surely we don't need to add random extra elements to our Content purely to lay it out properly, which is the job of the View?
