Hacker News | Libidinalecon's comments

No offense, but you really can't possibly understand how bad this place is if you have never been in VRChat.

Picture a 30-something guy in a hotdog avatar telling children how he can't help being a pervert.

Picture playing a game of chess in a chess room that should be really cool for all ages. Then a drunk woman starts telling the room about the blowjobs she has given. Of course you can tell from the voices that some of those listening are little kids.

If you put on a headset and go into VRChat right now, you too can have the same experience. Anyone who says this is not true is completely full of shit, because everyone inside VRChat knows it; it is almost an inside joke.

I would never bag on someone for being socially awkward. I was so awkward as a teenager. Social awkwardness is not the problem at all.

Oh yeah, how about kids running around yelling the n-word for no reason other than that they can? That is standard.

If you never used modern VR, the immersion is incredible. That is what makes the VRChat experience so disturbing.


Most people with any sense avoid public instances. Most of the healthy interactions in VRC are almost certainly in highly curated Friends instances at the broadest and Invite or Invite+.

The VR raves though at least kind of work. You can do a lot visually with the medium and sound.

The problem is that generally VRChat is like a masked ball with a combination of alcoholics, repressed perverted losers, obnoxious personalities and children.

Anyone who downvotes this is, to me, suspect as part of that ingroup.

It is one thing to be socially awkward. I was quite awkward when I was young too. VRChat is something else. Like the worst aspects of a 90s chat room but with immersion and real voices.


So it's not just me...

If you have hardware synths, you are going to have a decent MIDI and audio interface, so this is not a problem. It wasn't even a problem 25 years ago. There is no reason for consumer-grade audio to be able to do this, because most people will never use it.

I have maybe 20 hardware synths and I do a lot of sequencing. And yes, it wasn't a problem 25 years ago; that is exactly why I still use an Atari STe! :-) But today it is a problem. It is just not possible to do complex and tight sequencing today with a normal Windows, Mac or Linux computer, even with my RME PCIe card. Your argument, "it wasn't a problem decades ago, so it cannot be one today either", is simply not correct.

MIDI from a browser suffers from slowdowns because JavaScript is just too slow and single-threaded. There are ways around it, but those are all workarounds.

Just move the tab out of focus and you will see how it handles sending clock. I moved to a hardware-based, external clock signal, using SPP to force syncs between my tools, and rtmidi + C.
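To make the clock/SPP setup above concrete, here is a small sketch of the numbers involved, assuming the standard 24 PPQN MIDI clock and the 14-bit Song Position Pointer encoding from the MIDI 1.0 spec (the function names are mine, not from rtmidi):

```python
# MIDI clock is 24 pulses per quarter note; Song Position Pointer (0xF2)
# counts position in 16th notes, split into two 7-bit data bytes.

def clock_interval_seconds(bpm):
    """Time between successive 0xF8 clock bytes at 24 PPQN."""
    return 60.0 / bpm / 24.0

def song_position_pointer(sixteenths):
    """Build an SPP message for a position given in 16th notes."""
    if not 0 <= sixteenths <= 0x3FFF:
        raise ValueError("SPP position must fit in 14 bits")
    lsb = sixteenths & 0x7F
    msb = (sixteenths >> 7) & 0x7F
    return bytes([0xF2, lsb, msb])

# At 120 BPM a clock pulse must go out every ~20.8 ms; any OS or
# browser-side jitter on that interval is audible as timing slop.
print(round(clock_interval_seconds(120) * 1000, 3))  # 20.833 ms per pulse
print(song_position_pointer(256).hex())              # f20002
```

Whatever actually sends these bytes (rtmidi, a browser, anything else) has to hit that ~20.8 ms cadence consistently, which is exactly what a background tab or a non-realtime scheduler fails to do.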


From what I understand, MIDI messages can have timestamps into the future, but that implies buffering on the receiver end. Do most MIDI instruments not support enough buffering to overcome lag? Because in sequencing, the future is pretty well known.

MIDI 1.0 messages do not have timestamps. (Sys Real Time does, but notes and controllers don't.) Timing is managed by the MIDI sender, and any buffering happens in the interface.

MIDI over MIDI cables is fundamentally not a tight protocol. If you play a four note chord there's a significant time offset between the first and last note, even with running status.

With early MIDI you had a lot of information going down a single cable, so you might have a couple of drum hits, a chord, maybe a bass and lead note all at the same moment.

Cabled MIDI can't handle that. It doesn't have the bandwidth.
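The bandwidth claim is easy to check with back-of-envelope arithmetic, assuming the standard DIN rate of 31250 baud with 8N1 framing (10 bits on the wire per byte):

```python
BAUD = 31250
BITS_PER_BYTE = 10  # 1 start bit + 8 data bits + 1 stop bit

def wire_time_ms(n_bytes):
    """Time to serialize n bytes over a DIN MIDI cable."""
    return n_bytes * BITS_PER_BYTE / BAUD * 1000

# One note-on is 3 bytes (status + note + velocity).
single_note = wire_time_ms(3)          # ~0.96 ms

# A four-note chord with running status: one status byte,
# then 4 * 2 data bytes = 9 bytes total.
chord = wire_time_ms(9)                # ~2.88 ms first-to-last

# A busy beat: 2 drum hits + 4-note chord + bass + lead = 8 note-ons
# on different channels (no running status) = 24 bytes.
busy_beat = wire_time_ms(24)           # ~7.68 ms spread

print(f"{single_note:.2f} {chord:.2f} {busy_beat:.2f}")
```

So events that are nominally simultaneous get smeared across several milliseconds purely by serialization, before any receiver-side slop is added.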

Traditional 80s/90s hardware was also slow to respond because the microprocessors were underpowered. So you often had timing slop on both send and receive.

MIDI over USB should be much tighter because the bandwidth is a good few orders of magnitude higher. Receive slop can still be a problem, but much less than it used to be.

MIDI in a DAW sent directly to VSTs should be sample-accurate, but not everyone manages that. You'll often get a pause at the loop point in Ableton, for example.

The faster the CPU, the less of a problem this is.

If you're rendering to disk instead of playing live it shouldn't be a problem at all.


Bandwidth never was the problem with MIDI; it is actually enough. But you're right that with _some_ devices in the 80s/90s the processor was under-powered for the bandwidth. For example, my Roland Alpha Juno 2 from 1986 is somewhat under-powered and not the tightest, but my Casio CZ-5000, also from 1986, does just fine! I mean, this is almost 40 years ago, and there were devices that could handle it without problems. The problem with USB, though, is that it buffers in a "non real time safe" way, which leads to unpredictable jitter and interrupts. That means that, for MIDI, USB is worse than the original DIN connection.

I am not talking about MIDI in a DAW without any physical connections; that works just fine.


> MIDI over USB should be much tighter because the bandwidth is a good few orders of magnitude higher.

"should be" != "is". The Atari ST had a ROCK SOLID MIDI clock and direct, bare-metal hardware access that meant the CPU could control the signals directly, with known precise timing. This is simply not possible with modern operating systems and hardware interfaces because of all the abstraction layers, with attendant time indeterminacy, that have been inserted in between. It's physically impossible to match the low latency and jitter of an Atari ST doing MIDI with a modern system.


Yes, they have timestamps. But if you do buffer (or better to say, delay it), you introduce latency, which is even worse than jitter. The ideal is 0 latency. And there is another downside with buffering: you would need the buffer time to be identical at every device you trigger, otherwise you do not stay in sync.

Edit: Actually, MIDI note-on events that are being sent to devices do _not_ have a timestamp! Only events that are persisted in a file may have timestamps.


These opinions are not helpful.

What, do you think it is conscious and the answers are just deceptive?

We really need a national campaign on phenomenology 101.

Gemini outputs this correctly. It doesn't "experience" the passage of time.

The models don't experience the passage of time because they are not finite beings in the world.

They are like a new category of book. We don't say the math textbook "knows" math, because the book doesn't "know" anything. The book isn't bored sitting on the shelf because no one is reading it.


Not to mention that language models don't experience ANYTHING.

Anyone can get a better explanation from Gemini directly by asking it "can you explain how you don't experience anything?"


No this misses the point.

I wanted to eat unlimited junk food when I was a kid but my parents wouldn't let me.

You can change it even to unlimited protein shakes. It is the same point. It is almost like kids are kind of stupid if you let them do whatever they want.


Unknown nobodies like Dave Brubeck.

You should care because people vote and the social consequences are going to be devastating.

It is easy for me to take this perspective too because I never had much student debt or children.

The median though is getting crushed if they went to college and are paying for daycare.

If you are getting crushed for going to school and having children that is a pretty clear breakdown of the social contract.

The consequences are obvious. People are going to vote in socialist policies and the whole engine is going to get thrown in reverse.

The "let them eat cake" strategy is never the smart strategy.

It is not obvious at all that our system is even compatible with the internet. If the starting conditions are 1999, it would seem like the system is imploding. It is easy to pretend everything is working out economically when we borrowed 30 trillion dollars from the future during that time.


> The median though is getting crushed if they went to college and are paying for daycare.

> If you are getting crushed for going to school and having children that is a pretty clear breakdown of the social contract.

That isn't a factor in wealth inequality. Inequality is how much money people have relative to people like Musk and Bezos, or just local business owners. The poor side of that comparison always has so little wealth/income that their circumstances don't really matter. Someone poor will be sitting in the ±$100k band and not be particularly creditworthy. When compared to a millionaire, the gap is still going to be about a million dollars whether they're on the crushed or non-crushed side of the band.

Part of the reason the economic situation gets so bad is that people keep trying to shift the conversation to inequality instead of talking about what actually matters: living standards and opportunities. And about convincing people to value accumulating capital: we've been playing this game for centuries, and inter-generational savings could have had a real impact if people focused on being effective about it.


"The airplane wing broke and fell off during flight"

"Well humans break their leg too!"

It is just a mindlessly stupid response and a giant category error.

The way an airplane wing fails and the way a human limb fails are not at all in the same category.

There is even another layer to this: comparing LLMs to the brain might be wrong, because the mereological fallacy is attributing "thinking" to the brain rather than to the person/system as a whole.


You are right that the wing/leg comparison is often lazy rhetoric: we hold engineered systems to different failure standards for good reason.

But you are misusing the mereological fallacy. It does not dismiss LLM/brain comparisons: it actually strengthens them. If the brain does not "think" (the person does), then LLMs do not "think" either. Both are subsystems in larger systems. That is not a category error; it is a structural similarity.

This does not excuse LLM limitations - rimeice's concern about two unreliable parties is valid. But dismissing comparisons as "category errors" without examining which properties are being compared is just as lazy as the wing/leg response.


Totally agree. I just got a free trial month, I guess to try to bring me back to ChatGPT, but I don't really know what to ask it to see whether it is on par with Gemini.

I really have a sinking feeling right now, actually, about what an absolute giant waste of capital all this is.

I am glad for all the venture capital behind all this to subsidize my intellectual noodlings on a super computer but my god what have we done?

This is so much fun, but it doesn't feel like we are getting closer to "AGI" after using Gemini for about 100 hours or so now. The first day, maybe, but not now, when you see how off it can still be all the time.

