Hacker News | sebow's comments

If you post pictures of yourself on X and don't want grok to "bikini you", block grok.

Yes, under the TOS, what Grok is doing is not the "fault" of Grok (the causal factor behind the post is two humans: the poster and the prompter; human intent is what initiates the generated post, not the bot, just as a gun is fired by a human, not by strong winds). You could argue it's the fault of the "prompter", but then we circle back to the cat-and-mouse censorship issue. And no, I don't want a more censored Grok version that's unable to "bikini a NAS" (which is what I've been fortunate to witness) just because "new internet users" don't understand what the Internet is. (Yes, I know you can obviously fine-tune the model to allow funny generations and deny explicit/spicy generations.)

If X implemented what the so-called "moralists" want, it would just turn into Facebook.

And for the "protect the children" folks, it's really disappointing how we're always coming back to this bullsh*t excuse every time a moral issue arises. Blocking grok is a fix both for the person who doesn't want to get edited AND the user who doesn't want to see grok replies(in case the posts don't get the NSFW tag in time).

Ironically, a decent number of the people who want to censor Grok are Bluesky users, where "lolicon" and similar dubious degenerate content is posted non-stop AS HUMAN-MADE content. Or what, just because it's an AI it's suddenly a problem? The fact that you can "strip" someone by tweeting at a bot?

And lastly, sex sells. If people haven't figured out that "bikinis", "boobs", and everything related to sex will be what wins the AI/AGI/etc. race (it happens in ANY industry), then it's their problem. Dystopian? Sure, but it's not an issue you can win with moral arguments like "don't strip me". You will get stripped down if it creates 1M impressions and drives engagement. You will not convince Musk (or any person who makes such a decision) to stop Grok from "stripping you", because the alternative is that other non-Grok/xAI/etc. entities/people will make the content, drive the engagement, and make the money.


In the last electoral cycle I saw firsthand censorship applied to remote acquaintances because of the newly added EU DSA (which in and of itself would not be a huge disaster [by EU standards] if it weren't accompanied by arrests), which was used as justification over some posts on TikTok and X; therefore I don't really care who hurts the pro-censorship faction within the EU. People have been arrested in Western Europe for speech online for more than a decade now, but now it also happens in Eastern Europe, where I live, bringing back communist-era "vibes". You'll excuse me if the anti-Trump or anti-US (because of the current administration) rhetoric doesn't move me on this.

Or let me guess, "Trump bad, and therefore we should accept the DSA/Chat Control 2.0/3.0/etc."? Sorry, I don't care. And people who think this is only about the recent X fine are also wrong (this started last year, when Thierry Breton started influencing European elections while also boasting about how he can annul such elections without repercussions; you can deduce what I'm talking about by asking an LLM). This is in part the US government protecting private companies (and thus itself) from fines, sure, but the broader point about censorship within the West applies. Everything that hurts the people making legislation about the Internet (or software in general) within the EU should be welcomed with open arms.

EU apologists would rather change the subject and talk about Trump and the polarizing social environment in the US than acknowledge that within the EU there's not even a chance for discourse about any policy (especially the nonexistent free speech) due to the aforementioned laws. The same people will act surprised when extreme positions on the EU are adopted by an ever-increasing number of people "until morale improves".


The EU does far too little to prevent election influencing. From Cambridge Analytica to proof of foreign bribery to the algorithmic promotion of bot content by X and Meta specifically intended to undermine democracies, there's plenty of election fixing happening, and the EU should be much more aggressive about preventing it.

Individual free speech is not - of course - ethically or politically identical to "free speech" produced by weaponised industrial content farms funded by corporations and foreign actors.


Everybody knows about Cambridge Analytica being used in the US/UK, but, for example, almost no one knows that Cambridge Analytica was also used by political parties within the EU (I won't give specific names [for now], but parties [from Italy, Malta, CZ, and Romania], members of the euro-parliamentary groups EPP/RE/SD, in the 2014-2016 period). Why did nothing happen back then? Those parties were usually pro-EU, so it's not really surprising that no such "scandal" was discovered until later, when Cambridge Analytica was being used in the UK/US.

And the Cambridge Analytica "phenomenon" is not really something you can realistically prevent. I'm sure it happens now with some other, better firm (Palantir, probably), but this is really beside the point. The point is that normal citizens, like you and me, are effectively censored on suspicion before any burden of proof is met. Nothing says "protecting democracy" like deleting posts from social media and then finding out the context.

> Individual free speech is not - of course - ethically or politically identical to "free speech" produced by weaponised industrial content farms funded by corporations and foreign actors.

Sure, nobody likes bots/paid shills. But of course, in a normal society you have to prove those posts are made by actual bots/content farms before taking any action. Otherwise it's just censorship. Election interference always happens, without exception, but degrees vary. This is not to say we shouldn't point out when it happens, but that we should not censor our own citizens because "the models indicate a pattern akin to foreign entities." Patterns do not meet the burden of proof, and thus employing a "crowdfunded" fact-checking system like Community Notes or the one from YouTube is at least partly the actual solution, instead of directly removing content. Under DSA, you can effectively remove content without providing burden of proof regarding the identity of the poster. Platforms must provide a "statement of reasons" (Article 17) to affected users for any removal, including appeal rights, but this does not impose pre-removal identity checks on posters.


> Under DSA, you can effectively remove content without providing burden of proof regarding the identity of the poster. Platforms must provide a "statement of reasons" (Article 17) to affected users for any removal, including appeal rights, but this does not impose pre-removal identity checks on posters.

Unlike any other legislation, globally, the DSA actually has tools to contest this.

Take a look at out-of-court dispute settlement bodies.

Hell - you have more power to gain accountability under the DSA than you do under the US system.


Cambridge Analytica was also caught interfering in elections here in India.


"Interference" in elections, even foreign interference, is not a new problem. It has been a problem for at least 2500 years. The nice thing about a democracy, though, is you still have to convince masses of people to vote a certain way, rather than simply influencing a few bureaucrats/aristocrats. And well, if masses of people can be convinced to vote for something you don't like, in a democracy it's your responsibility to show them why they're wrong, rather than treating them like dummies without the intellectual capacity to make their own responsible decisions. If you think people are too stupid to make decisions in the face of the wrong propaganda, you are conceding that you don't believe in democracy at all - at best you believe in stage-managed popular support to make your non-democratic government appear legitimate.

The EU doesn't want to accept that millions of people don't share the EU elite consensus on several issues - usually still a minority of people, but a substantial minority. Instead of recognizing their responsibility to steer the ship of state with the winds of the times, they are simply declaring all bad political opinions to be the result of the Russians, the Americans, or the corporations, or some combination of the three. Countries in which serious conversations are had about banning one of the most popular political parties for wrongthink can only ironically be considered democratic.


The only arrest (including jail time) I've heard of over internet shit was someone named Tate, and I'm pretty sure it was over suspicion of online pimping/hustling (not sure how it ended up), so I would love to know who was arrested because of the DSA, to see if it matches.


it is perfectly legitimate to want to regulate foreign (and domestic) media companies


I disagree in principle, but let's say the people decide to do so. Not only are those not media companies in the US (under Section 230); in the EU too, social networks like Facebook/Instagram/etc. are treated legally as "public squares" and not as media companies like the BBC/etc. When you defame somebody on Instagram, you're the one held legally responsible, not Meta. Why would social networks be responsible for DSA violations made by the users? This is beyond the fact that implementing an "instant-takedowns" censorship mechanism is draconian. The DSA's Articles 16-17 do not require the person reporting the content (who can also be anonymous, which is ironic) to provide >legally sufficient< evidence for the takedown. This goes directly against what I would consider "normal" in a society where you're innocent until proven guilty. The "trusted flaggers" (Article 22) do need to submit more evidence, but that just becomes a problem of partisanship and bias. This basically means you can report someone for illegal activity, provide evidence that is legally insufficient, and the content is taken down, with the "battle" starting afterwards.

YouTube's DMCA takedown system (copyright being a legally more serious issue than what the DSA is supposed to protect against) is not and cannot be perfect (proven by the fact that content is unjustly taken down all the time). The DSA is just the same, except more vague, more complicated, and (imo) ultimately worse.

The DSA has an appeal mechanism, with an option for out-of-court settlements, which means you can employ independent fact-checkers (certified by Digital Services Coordinators (DSCs)); the list of certified bodies is, of course, maintained by the European Commission. The problem is that these DSCs are appointed by each country's government, which means there's potential room for conflicts of interest not only at a national level (I find it hard to believe appointed DSCs are completely impartial to the government that appointed them) but also at an EU-wide level (certified fact-checking bodies are supposedly not influenced by the EC when judging cases pertaining to the EU in international cases).


> Why would social networks be responsible for DSA violations made by the users?

because we don't want to suffer the same fate as the US?

a demented proto-dictator co-opting our political systems because facebook decided it's good for engagement

if that makes their business non-viable, well, what a shame

not as if we'd be losing any tax revenue as a result


Internet ID + Covid policies + OPSEC, are you seriously saying you think the EU has not suffered the same fate as the US?


You obviously have no idea how the EU or Europe works. Go read something other than social media.


Please convince me how government funding is better than the private sector. Before people jump on the "late capitalism and everything will be profit-incentivized" bandwagon, I fail to see how things like finding a good new medicine/the next propulsion system/the most efficient new energy solution/etc. cannot be linked to the more theoretical fields, which I'm assuming are some of, if not most of, the positions/areas of science affected by this.

Everything can be "sold", especially in today's age with the new methods of discoverability. But I would argue scientists don't need to "sell" something in the capitalist sense. They need to link the hope of a new discovery to inventors, innovators, and entrepreneurs. Sure, some things might "fail" to continue by failing to adjust to the markets, or some scientific discoveries might be used for (ethically) bad things, but this is (1) inevitable and (2) the responsibility of the scientists and the people buying the end product/service. If I'm not mistaken, most bad/evil/etc. discoveries throughout history were made by scientists working FOR the government/king/etc. If anything, democratizing science through capitalist markets seems like a more beneficial way to develop self-sustaining science. The key thing is transparency, which can be less present in the private sector, especially when corruption is involved (assuming transparency is demanded by the government at all).


Let me counter with this: Can you point out one country in the post world war era that had minimal government investment in science but had very productive scientific output? Or can you point out one country where scientific productivity increased after public sector investment in science was slashed?


> Everything can be "sold"

How do you sell having lost $50M on research which ultimately went nowhere?

If you can't, then how do you guarantee that your research will always bear fruit?

The bottom line is: You have to be willing to fund MASSIVELY-expensive losses in addition to wins in order to make real progress. Scientists aren't magicians.

For every success there are countless failures which you don't hear about.


Because governments fund basic research and the private sector does not. Also, the results of private-sector research are secret and patented, and generally don't create competitive markets.


The amount of basic research funded by the private sector has been growing for decades. It is now a large percentage of the total in the US.

Government investment didn’t decline, private investment massively grew. Same thing happened in applied research decades earlier.


That's not how science works. Fundamental science works on much longer timescales (10/20/50/100 years) that are not accessible to companies.


This is anecdotal, but as a current PhD student who was doing research at a large tech company for a few years prior, I can say the incentives for an individual are very different across the two settings. In tech, even in a research role, there was little to no incentive to dive deeper into potentially high-risk, high-reward research, because your career trajectory was determined by maximizing certain metrics for promotion cases. The general vibe among my coworkers was: spend your day on the guaranteed-progress projects and then go home. This was actively incentivized by leadership, who asked for frequent progress updates, especially as AI began to take off.

As a grad student so far, though, I've found the incentives to be very locally driven, and the kind of research you can do is almost wholly determined by yourself and your advisor. This can be good or bad, but if you find an advisor who is in a stable spot (tenured or nearly tenured) and not a jerk, they'll generally give you leeway to pursue what you believe to be high-impact work even if it doesn't align with the general consensus on what to do next, especially if you have proven credentials and a clear research plan in mind. Additionally, progress is largely driven by the individual, so there's a larger personal motivation to really delve into a problem and be consumed by it.

For me personally, I have access to significantly fewer resources than before but have gained the freedom and time to not be attached to the paper mill or some measurable metric, and I am spending months of my time trying to get at a deeper problem than I ever would have been able to in industry. While this may be different from the usual narrative about academia, I think it's truer than people say, since there are such huge variations in how academia works as a result of the school, the advisor, and the individual researchers themselves. The disgruntled tend to complain the most, while those happy with the field are busy doing other things. I'd compare my experience in academia thus far to the startup side of the research world, whereas industry jobs (at least in tech) consume far more resources and are pressed to provide steady, measurable impact. Maybe it's upsetting that we do waste some resources on stupid research, which does exist, but the odds of getting a researcher like Einstein dedicating 10 years to discovering relativity in an industry job are vanishingly small.

I'll probably be unsuccessful, but there are hundreds of people in my field pursuing related but different approaches, and this kind of swarm approach is more likely to yield a fundamental discovery at the population level than the tight alignment of goals found in private research, which would do a great job building on any basic science discovered in academia. I don't think it's wasted resources if 99 researchers fail in different ways and 1 succeeds, since traversing the tree is inherently valuable even if most of the leaf nodes are failures. That's far more likely to happen in academia, imo, than in industry.

It's not that private-sector funding is inherently worse, but in reality it is different, and as such it will lead to different results due to how people and our economic system at large work. While I'm sure there are exceptions where individuals at private research labs are highly motivated and feel the push to go the extra mile and try to find some deeper truth than is necessary for their personal well-being, in my experience many doing research at these companies are apathetic as a direct result of the environment in which the research is conducted. It's hard to feel motivated to make a large step in basic science when you think it'll just be consumed by the large institution you exist within, whose stock price you have no real effect on, rather than being open-sourced for people's benefit. We should have diversity in how we fund science.


Thank you for the detailed insight. You've touched on an aspect that outsiders (like me) cannot truly grasp but can only guess about: motivation. And it's definitely true, motivation in the private sector is somewhat harder (you've explained it best), at least compared to the majority of private companies; but, as you've mentioned, it doesn't seem like a problem with the system itself so much as with the kinds of environments that grow inside companies. Corporate culture is, more often than not, very toxic, especially when big money is involved (and/or big ideas; in science, the subject of research can matter even more than money).

Or maybe it is a problem with the structure that fosters the environment. What comes to mind is the exceptional case of OpenAI, which started as a nonprofit. Sure, it "ended badly" because of the known drama, but my guess is that, besides the money poured into it, it thrived because researchers had a kind of "emotional safety net," meaning they weren't pressured for results as much. Probably the reason some startups perform much better, too.

I think career continuity matters, and you don't necessarily get that in the private sector. This discontinuity then leads to practical work discontinuity, which means less work done (amplified by the non-decentralized nature of working in private research compared to shared science in public, as you've explained).

My bottom line is that the private field could do better, and frankly it's kind of their loss. What I'm curious about is whether a "semi-private" approach is better: a non-profit or some kind of foundation. I guess in practice they're still private, but whether the money part can be "solved" through crowdfunding/some modern methods and whether they're viable long-term remains to be seen. One thing is for sure: a culture appreciative of science will definitely open more doors into novel methods of funding and organizing (maybe in the future these methods could rival the "traditional ways" of public science).


(If you're OP: this is not a solution per se but more of a generalist rant; just so you don't waste critical time)

People talk about changing laws or technical solutions, but the inconvenient truth is that technically literate people should peer-pressure nearby friends/family/etc. into being more aware of such possibilities. I've done so, to the extent that some people find it either borderline schizophrenic or paranoid (to my "luck", I live in an ex-communist country, where most people are usually skeptical of strangers in many contexts, so this group of people is relatively small).

People who know better bear a responsibility towards helping those who don't, towards those who are too kind (or naive) for their own good. Even though I'm the "tech guy" in my close circles (family, friends, etc.), like many here, I often do the >opposite< of what other "pro-technologists" do these days: I don't encourage people, especially the older generation OR the more tech-illiterate, to use more technology, because doing so obviously "injects" another attack vector into their lives. These days this is increasingly not possible; everything gets digitalized to the detriment of such groups, but this also delves into the politics of keeping "older options" (cash, paper trails, etc.) available even when digitalization happens. Often the older options are more secure, though obviously less convenient.

This is a non-solution, yes, but it is the correct way to approach this (imo), as more and more places LEGALLY force the digitalization of different institutions (banking, government agencies, etc.), which inherently either adds to, or worse, completely shifts the risk into virtual spaces. This is why a "legal" solution is more often than not either a slow one or a completely pointless one. It will always be an arms race between scammers (who operate more effectively [in theory] due to their decentralized nature) and the governments/banks/etc., which operate in a more centralized fashion, thus demanding and imposing more control over all included parties. A legal route will always demand more than it's worth.

I digress with my shift into politics, but the bottom line is this: don't let your peers/family/loved ones get into these situations. If you have "an authoritative" voice regarding tech, use it to cultivate awareness of the dangers first, before cultivating hype or anything else. (Obviously not talking about anyone specifically, but about the whole "geeksphere".)

Good luck to you and your family.


The bigger issue here (imvho) is that financial institutions/systems/companies accept (maybe even invite) or tolerate a small degree of fraud, as it's "good" for the system.


I'm more impressed by the fact that there's still DDR2 going around. I know DDR3 is still alive and well, even still manufactured (I myself noticed new DDR3 kits appearing, which is weird); but I didn't know DDR2 was still in stock. I'm assuming industrial/embedded applications still use it for obvious reasons, but I have to wonder to what degree DDR2 kits are still being produced.


Surprisingly, boatloads of it, by Chinese manufacturers. Nothing really shady about it (standard concerns about raw materials excepted); it is still used mainly for random embedded stuff where there is a need for a memory module but the design is from a time when DDR3 chips were not available. A ubiquitous example is those DVD players from random Chinese brands that are based on MediaTek's designs from 2004(-ish).


AI & social media are only exacerbating the decline of morality in society, spreading it and making it more visible. Morality has been "breaking", objectively speaking, for at least a century, most noticeably with the advent of postmodernism.


Firstly, EU =/= Europe.

Secondly: the death of the EU cannot come soon enough. 10 years ago, euroskeptics like me were wrongfully called "russophiles" (lmao), even though I'm from a country that is constantly threatened by Russia with drones, propaganda, etc. (RO, for the curious). Ironically enough, for those coming from ex-communist countries, the EU sure looks increasingly like the USSR, but with blue instead of red. It's infinitely better than communism, sure, but the optics and path of the EU resemble those of the USSR in its "wellbeing of the workers" (and other socio-cultural issues) propaganda phase (the irony being Russia here, obviously).

History never repeats, but it rhymes. A calcified supranational institution descends into authoritarianism in the later stages of its existence. Reform never happens, and if it does, it's at face value. It's much "easier" (for the people in those institutions) to double down on the status quo than to reform; and obviously it becomes increasingly harder for the vast majority of people to voice their opinions or concerns, especially those not aligned with the status quo. (* The key difference here is between NATO, which is US-led, and the EU, which is still Western Europe-led; the US is still a functional democracy, unlike EU institutions.)

Although it’s not really a very complex topic and the causal factors are relatively simple (at least to identify, solutions are much harder to propose [mainly due to the mentioned constant double-downing]), it would take a long time to explain/convince why the existence of the “current” EU is detrimental to Europeans (at least to the people not aligned with the status-quo). So-called benefits stopped at the common market treaty (EEC) iteration of the EU. Security is not and should not be in the EU’s purview, we have NATO for that. And the most obvious issues that the EU keeps worsening are socio-cultural positions that either (1) dilute the differences between different nations (2) [in case (1) was not a “problem” due to shared values] directly propose completely different values and/or positions. There’s no “objective morality” debate to be had here, democracy does not inherently mean choosing the most “scientific”/“moral”/(any other metric) position in policy: it simply means choosing what the majority of people want (If you want to change policy: change people’s minds). The double-downing of the “EU regime” is usually defended with the rhetoric that it does so in the name of “democracy”, “morality”, “tolerance”, “objective wellbeing of society”, etc. but there’s no mechanism for true democracy if the EU undermines/punishes the will of individual nations on the pretext that “it does not conform to EU-wide proposed policy”. This leads to “multi-step” issues and other regional conflicts between interests of nations (the winning bloc [usually the wealthier one] gets to basically impose policy on the smaller one).

I’m quite off-topic on the subject of privacy, but my key point is this: don’t expect things to get better; or at least not without a huge cost. Recent pullbacks from the EU regarding the DSA/AI Act/GDPR are done so out of necessity: the EU is losing ground massively (= money) in the tech space due to stupid policies made by dumb bureaucrats. Half-assed “reforms” like these will not make huge improvements (mainly due to the unchanged fiscal policies [which are going to get worse: upcoming euro stablecoin]) but will keep the EU afloat amongst those who can’t see the sinking ship. Oh, you like privacy? Well, expect it to get worse, as eID is surely coming for all citizens in the name of safety (which has been eroded due to “our” [i.e. regime’s] stupid policies).

Finally, as I foresee some will keep replying with “Russian <something>”, let me just say this: pray the downfall of the EU won’t be an opportunity for Russia to do anything. At this rate in the regional conflict, Russia doesn’t look good long-term, but if history has tried to teach us anything, it is that Russia’s unpredictable.


I'm not sure that focusing on this mudslinging towards OpenAI (or any other company, for that matter) will achieve anything. It didn't work in the past and it won't work in the future. The reality is that parents, guardians, teachers, and society as a whole need to be held responsible (at least morally, if not legally) in order to address the core issue of suicide and similar behaviors, such as murder.

Besides the obvious compromise in quality that companies would have to make to appease the 'karens' of society (not to mention the additional compliance and regulatory burden imposed on new companies), wouldn't it be simpler to just have users take a basic 'TOS test' when creating an account? Sure, it's inconvenient, but at least the company would be legally protected. The purpose is obviously not to protect companies, but to move the spotlight towards the real causal factors.

No matter how simple the TOS acceptance process becomes, people will still find a way to blame the product or company, ignoring the core issue of how someone got into a mental state where they use LLMs to cause self-harm. I don't see people suing rope manufacturing companies for facilitating suicide.


(Imo) They will turn to all of these (especially to porn and gambling) when the core model of "enhance your life" slowly fades away. The "academia space", the teens/boomers demographics, all of those will stop using OpenAI at scale if they're bombarded with vices (porn, gambling, etc.).

Ads & referrals are already in the works, and people are generally tolerant of those. But, as with any company, appearances matter. ChatGPT will definitely lose users at the slightest possibility of having non-sanitized content served to more morally sensible groups.


there will be differentiated cohorts


Same here on 2 accounts, both having 2FA through a hardware key (YubiKey; though passkeys show the same behavior). At some point today (a few hours ago) both my desktop and phone got redirected to x.com/account/access, where the loop started.

Frustratingly enough, I had already done the "re-enrollment" a long time ago (basically when they announced it was mandatory), but it seems like that was pointless (hopefully not).

I saw some prompts about "birdhouse", re-did the enrollment, and, badly enough (I think I dug my own hole with this one), it asked to remove the other 2FA option (SMS), to which I clicked yes.

This might sound bad, but I sincerely hope X fixes it somehow, and that all the keys enrolled/re-(re-[etc.])-enrolled are not lost, especially those that were not added today. It might be a good idea (in practice, though bad for security) to fully disable this new "https://x.com/account/access?flow=two-factor-security-key-po..." garbage, as I don't see myself contacting X support anytime soon (for obvious reasons).


(replying instead of editing for timestamp purposes) I clicked "enroll" randomly again > an "error has occurred" message appeared > the page randomly refreshed and everything works now.

