Cute but also: a small village has its lights flicker whenever Momo wants a treat. Also, you can actually play with your dog and give them treats yourself instead of tasking a random text generator with that bit.
It failed. The outcome was that Europeans see it as "yet another nonsense" coming from the US. Also, it barely made the news because of other nonsense coming from the US, and coverage is generally limited to "international news".
Also, we don’t actually have censorship in Europe, not in the way the US is trying to suggest.
Yet your ISPs don't give you access to the full Internet. First it's porn (age verification), then it's soccer, then it's social media (ID verification), then it's libraries. Soon, even stuff that you take for granted, such as playing an online game, may require age/ID verification. At this rate, all you will be able to access will be center-left Euro propaganda.
Are you forgetting how the Americans blocked Stormfront and Silk Road? They don't have full access to the Internet either, they're just not so obviously totalitarian about it as the Europeans.
The ISPs do what our elected governments direct them to do. It’s how democracy works. If you don’t like what people are voting for, get into politics and talk to your community. Or at least email your MEP. There is no conspiracy here.
Cute that you think that's how it works. I guess you also think that everyone who voted for the current administration agrees with them on everything they do and voted them in for exactly that. I am at least glad you didn't say that if you don't like how it works, you should move elsewhere.
I know that’s how it works and I also know it’s not a zero-sum game. That’s why every law or policy gets time for comments and debate, and sometimes policy gets revised. It’s how governance works.
But if you feel you have the perfect solutions, then by all means get yourself on the ballot so we can finally see the light.
What websites a person is allowed to access should not be a matter of debate, it is for the individual to decide. Other people's opinions are not relevant. Even if 99% of people think a person should not be able to access a website, it is still their right to do so and they have no need to justify it.
Democracy is for deciding what to do with taxpayer money. It shouldn't be a mechanism by which people can vote to take away other people's freedoms.
Does that apply to websites full of CSAM, or that sell real-time streams of for-hire animal torture, or that provide hitman-hiring services, or...
I think your view on how government and the internet works is somewhat outdated. Social media is not just "what websites a person is allowed access to" and government is so much more than what we do with taxpayer money.
The US is evidently a poor example of what a fully formed government is so I wouldn't use that as a basis for one's world view.
It's only the UK that does that, and they're not in the EU. Many US states do the same, and the administration wants to ban porn completely and jail those who make it.
I read through this drivel and it's nothing more than conjecture and anecdotes from someone who seems never to have been to Europe. Nearly every example of his critique is of the UK, not Europe as a whole, and each of them has plenty of counterexamples of the same thing happening in the US.
In short: nonsense. Completely made up narrative filled with quotes from same-belief people, claiming moral outrage about issues they either don't understand or wilfully misrepresent.
I'm pretty sure the President and CEO of the leading free expression organization today understands what he's talking about and is fully aware that there are bad things happening in this vein in the US.
Do you have any specific disagreements you can share with the criticism of the actual content that the parent comment gave, or do you think that the author's job title is more important than whether what they said is actually correct?
Well, I mean, I think it's pretty obvious that when someone claims that the author is "claiming moral outrage about issues they either don't understand or wilfully misrepresent" then what he does for a living matters.
Sure, but if I'm trying to verify the accuracy of their claims, their job cuts both ways: it gives them potential subject expertise, but also potential bias toward making exactly the claims being criticized. So it doesn't really clear anything up, and I'm back to trying to understand whether there's any counterargument to the criticism other than their pedigree.
The parent comment in question has essentially zero in the way of supporting evidence. The author's first claim happened to be verifiable. I attempted to verify it, and it was pretty clearly false.
What's asserted without evidence can be dismissed without evidence.
I guess I might not have been clear. I'm specifically wondering about this part, which is what I was referring to about the critique they gave of the actual content:
> Nearly every example of his critique is of the UK, not Europe as a whole, and each of them has plenty of counterexamples of the same thing happening in the US.
Separated from the ad hominems on both sides, it seems like a pretty reasonable criticism to me. It doesn't seem obvious to me that it should be dismissed as irrelevant.
The guy who literally actively helped to create the current USA situation? Yeah. All the while he pontificated about free speech, he had clear favorites whose speech mattered and who should shut up.
When the left is censoring more (as was true in the run-up to Trump's election), of course a free speech organization will be opposing left-wing censorship more frequently.
Trump's election was a reaction to left-wing cancel culture. If people had listened to FIRE, and refuted bad ideas instead of censoring them, maybe Trump wouldn't have been elected: https://qr.ae/pYCVXO
>Nearly every example of his critique is of the UK
I just used a word count tool to sanity-check this claim. It said there are 1061 words about the UK and 1684 words about non-UK countries.
You appear to be fibbing about easy-to-check facts. Anyone who trusts you on your harder-to-verify claims is a fool.
There seems to be a bit of a pattern I've noticed with Europeans on HN. They criticize the US constantly, yet flip out instantly when their countries are criticized, to the point of reflexively lying about stuff which is easily checkable.
I can sorta understand lying about claims which are hard to verify. It's distasteful, but I can understand why a certain type of person would do it. But, why lie about stuff which takes under 60 seconds to check? What are you trying to accomplish?
BTW, I hope you aren't in Germany. It's a crime to insult someone or spread malicious gossip online in Germany. Your usage of "drivel" might be considered an insult which could get your phone confiscated: https://www.youtube.com/watch?v=-bMzFDpfDwc#t=3m
There was a comma, after which it said "not in the way the US is trying to suggest." You evidently missed that part, or are you saying that it is in exactly the way the US is trying to suggest?
I also see another side of the problem: too many services are proxied via CloudFlare, making them easy to disrupt all at once. Folks really need to try to choose alternatives instead of feeding the “world firewall”.
How is that a bad thing? Our goal should be to maximize the amount of collateral damage that any censorship causes, with the ideal case being that the only two choices available to the censors are "no censorship at all" or "completely air gap yourself like North Korea".
That extreme centralization makes the single choke-point vulnerable to all kinds of other problems. The web is supposed to be decentralized and distributed.
I agree with you on the technical premise, but I think the point made was that the bigger the disruption, the greater the backlash and swift reversal, in ideal theory at least.
I'd hardly call centralization a "hypothetical" issue: we've already seen that governments are willing to issue gag orders so that we can't even find out what they're doing inside major companies. That's clearly a lot easier to do when there's a single central point of control.
If there's a single central point of control, then that also means an outage takes everything offline, instead of just 1-2 tools. That also makes it a bigger target for attackers.
It doesn't even need to be an attacker - CloudFlare themselves have managed to take down impressive portions of the internet more times than should be accepted just this year.
So do you apply the same logic for measures gov/Apple/etc put out about on-device scanning and e2e messaging stuff? It's always "hypothetical" until it hits the fan.
Sure, I agree there are bad things about extreme centralization. I'm just saying that the increased collateral damage of censorship is a silver lining of it, not one of the bad things about it.
Some people genuinely believe the european copyright system (and La Liga and the Spanish judiciary) has more than 0% legitimacy… is it truly that hard to imagine?
I think the fact that there is no one company behind it helps a great deal. Monetising user attention is simply incompatible with the architecture of the fediverse - even Meta couldn’t twist it into that and eventually decided to give up.
I’m quite literally experiencing a physical reaction whenever I need to browse some algorithmic timeline. Even YouTube: what used to be a couple of related videos is now a wall of “recommendations”, and the unskippable ads on every video are more relevant than the actual videos…
Mastodon and related platforms (for me, mainly Loops) are a breath of fresh air, and I wish more people could (re)learn to enjoy that.
YouTube recommendations are very well tuned for me. You need to mark videos "not interested" and downvote stuff you don't actually like, as well as stopping videos when you've decided you're not interested. This and other aspects WILL improve your recommended feed. So if your recommended feed sucks, well that's on you there bud, you can influence it completely.
Do you use youtube intending to be drawn into watching things you never intended to watch? I don't want a feed but the people operating these sites do not care that they are destroying people's time. Go to twitter, click on "following". Next time you sign in, somehow it's on "For you" (the algorithmic feed).
Thankfully on Youtube I can completely disable recommendations on the site and I use it purely as a source of information, not as a dopamine addiction funnel.
I just get the same videos recommended over and over. I liked it when the YT feed would recommend new and different stuff I might find interesting. Now it's just hyperoptimized to get me to click on ads.
Cyberspace promised that we could all work together to create things, like one species coming together to solve problems. Now, in 2026, we need a “space” for every little tribe…
Exactly. There never was a declaration of independence of cyberspace. BUT government and law moved too slowly by years and years. And they have, of course, not learned their lesson.
For example: suing Napster 2 years after it launched. And that was just because it was an extremely clear-cut case. By the time they did that, there were 10 such networks, none of which were sued, and none of which had clear laws or court decisions stating one way or the other whether they were legal.
And when we're talking about a vague issue, for example how copyright affects search engines, the first actually settled case (which was still a far cry from establishing the rules) happened in 2006, 16 years after the first search engine started operating and over 8 years after Google started its meteoric rise. The specific decision the courts deigned to make, after 16 years? That caching a page so it can be used to build a search index does not by itself violate copyright. Great, well, that covers it then. My point is, by then the cat was out of the bag, had run to the neighbor's house, and had 6 kittens, who each had 6 kittens themselves; one of its grandchildren ate the sandwich the judge was hoping to have for lunch, another got adopted by the president of the US, and the rest invaded and destroyed the houses of the publishers that tried to protect their copyright.
Imagine the insanity, the damage that any real court decision against search engines would do today. "No you can't show previews". "Ads don't respect trademarks". There is no room for any such decisions now. The few decisions they have made (in >30 years) have amplified the damage to the victims that the court system tried to help (just ask a few newspapers).
Of course, none of this has instilled any sense of reasonableness, modesty or urgency in any parliament, court or even executive around the globe. For instance, they could PRE-clarify the laws before AI takes over 5 industries. Does AI training violate copyright? What are the rights of an employee that gets fired because AI does their job? No government felt the need to answer the copyright question when it mattered, 7 years ago, and there is ZERO action on the second question. Are they planning to answer the people displacement question once 99% of companies have done it because competition forced them to?
Now any answer they give on the copyright front is beside the point, since no court or parliament actually has the power to order existing (potentially law-violating) models to be destroyed. Once again, they have placed themselves in a position where they are totally irrelevant. One might say the question now is whether you violate copyright by training a model using a model that was itself trained in violation of copyright. Perhaps that one is still relevant. But nothing will be done.
And please, it doesn't matter what your position is on the issue. Can model training violate copyright? Yes or no? We live in a democracy and no decision is made. This is an important part of why big companies get to openly violate laws on an unprecedented scale for billions and billions without consequences while kids sometimes get locked up for stealing a single candy.
That assumes LLMs are relevant and will be around a year from now. Let’s not forget NFTs.
Your comment is also blind to the absurd amount of research and projects which are born here but later move to look for funding.
So the EU is not irrelevant, on the contrary, we’re just mourning the fall of the US and transitioning to an independent future. Who would’ve thought we’d end up needing to build a copy of everything…
I used it as another “there was a strong tech push but ultimately we couldn’t make it work” kind of idea. With NFTs the grift was immediately visible, with LLMs it’s a bit harder, the whole “AI” facade gives people hope - I want to believe and stuff.
Rubio is a mouthpiece for a regime that’s not qualified to discuss Europe, or even his very own US of A. All he meant in his speech is that his government has chosen isolation
Swift never felt truly open source either. That people can propose evolution points doesn’t change the fact that Apple still holds all the keys and pushes whatever priorities they need, even if they’re not a good idea (e.g. Concurrency, Swift Testing etc)
Also, funnily enough, all the cross-platform work happens in small workgroups, some even looking for funding … anyway.
Apple has always been 'transactional' when it comes to OSS: they open source things only when it serves a strategic purpose. They open-sourced Swift only because they needed the community to build an ecosystem around their platform.
Yeah, well, sure they've done some work around LLVM/Clang, WebKit, CUPS, but it's really not proportional to the size and the influence they still have.
Compare them to Google, with - TensorFlow, k8s, Android (nominally), Golang, Chrome, and a long tail of other shit. Or Meta - PyTorch and the Llama model series. Or even Microsoft, which has dramatically reversed course from its "open source is a cancer" era (yeah, they were openly saying that, can you believe it?) to becoming one of the largest contributors on GitHub.
Apple, I've heard, even has the harshest restrictions about it: some teams are just not permitted to contribute to OSS in any way. Obsessively secretive, and at what price? No wonder Apple's software products are horrendously bad, if not all the time then too often, and on their own hardware too.
I wouldn't mind if Swift dies, I'm glad Objective-C is no longer relevant. In fact, I can't wait for Swift to die sooner.
Sort of an exception that proves the rule. Yes, it's great and was released for free. But at least partially that's not a strategic decision from Apple but just a requirement of the LGPLv2 license[1] under which they received it (as KHTML) originally.
And even then, it was Blink and not WebKit that ended up providing better value to the community.
[1] It does bear pointing out that lots of the new work is dual-licensed as 2-clause BSD also. Though no one is really trying to test a BSD-only WebKit derivative, as the resulting "Here's why this is not a derived work of the software's obvious ancestor" argument would be awfully dicey to try to defend. The Ship of Theseus is not a recognized legal principle, and clean rooms have historically been clean for a reason.
>> some teams are just not permitted to contribute to OSS in any way
My understanding is that by default you are not allowed to contribute to open source even if it's your own project. Exceptions are made for teams whose function is to work on those open-source projects, e.g. Swift/LLVM/etc...
I talked to an apple engineer at a bar years ago and he said they aren’t allowed to work on _anything_ including side projects without getting approval first. Seemed like a total wtf moment to me.
I have never had a non wtf moment talking to an apple software engineer at a bar.
I can recall one explaining to me in the mid-2010s that the next iPhone would be literally impossible to jailbreak in any capacity, with 100% confidence.
I could not understand how someone that capable (he was truly bright) could be that certain. That is pure 90s security arrogance. The only secure computer is one powered off in a vault, and even then I am not convinced.
Multiple exploits were eventually found anyway.
We never exchanged names. That’s the only way to interact with engineers like that and talk in real terms.
No, as far as I know, at Apple this is strict: you cannot contribute to OSS, period. Not from your own equipment nor your friend's, not even during a vacation. It may cost you your job. Of course, it's not universal for every team, but on the teams where I know a few people, that's what I heard. Some companies just don't give a single fuck about what you want or need, or where your ideals lie.
I suspect it's not just Apple. I have "lost" so many good GitHub friends: incredible artisans and contributors who got well-paid jobs, and then suddenly... not a single green dot on the wall since. That's sad. I hope they're getting paid more than enough.
Every programming job I've ever had, I've been required at certain points to make open source contributions. Granted, that was always "we have an issue with this OSS library/software we use, your task this sprint is to get that fixed".
I won't say never, but it would take an exceedingly large comp plan for me to sign paperwork forbidding me from working on hobby projects. That's pretty orwellian. I'm not allowed to work on hobby projects on company time, but that seems fair, since I also can't spend work hours doing non-programming hobbies either.
The fact that Swift is an Apple baby should indeed be considered a red flag. I know there are some Objective-C lovers out there but I think it is an abomination.
Apple is (was?) good at hardware design and UX, but they're pretty bad at producing software.
For what it’s worth, ObjC is not Apple’s brainchild. It just came along for the ride when they chose NEXTSTEP as the basis for Mac OS X.
I haven’t used it in a couple decades, but I do remember it fondly. I also suspect I’d hate it nowadays. Its roots are in a language that seemed revolutionary in the 80s and 90s - Smalltalk - and the melding of it with C also seemed revolutionary at the time. But the very same features that made it great then probably (just speculating - again I haven’t used it in a couple decades) aren’t so great now because a different evolutionary tree leapfrogged ahead of it. So most investment went into developing different solutions to the same problems, and ObjC, like Smalltalk, ends up being a weird anachronism that doesn’t play so nicely with modern tooling.
I've never written whole applications in ObjC but have had to dabble with it as part of Ardour (ardour.org) implementation details for macOS.
I think it's a great language! As long as you can tolerate dynamic dispatch, you really do get the best of C/C++ combined with its run-time manipulable object type system. I have no reason to use it for more code than I have to, but I never grimace if I know I'm going to have to deal with it. Method swizzling is such a neat trick!
It is, and that’s part of what I loved about it. But it’s also the kind of trick that can quickly become a source of chaos on a project with many contributors and a lot of contributor churn, like we tend to get nowadays. Because - and this was the real point of Dijkstra’s famous paper; GOTO was just the most salient concrete example at the time - control flow mechanisms tend to be inscrutable in proportion to their power.
And, much like what happened to GOTO 40 years ago, language designers have invented less powerful language features that are perfectly acceptable 90% solutions. e.g. nowadays I’d generally pick higher order functions or the strategy pattern over method swizzling because they’re more amenable to static analysis and easier to trace with typical IDE tooling.
I don't really want to defend method swizzling (it's grotesque from some entirely reasonable perspectives). However, it does work on external/3rd party code (e.g. audio plugins) even when you don't have control over their source code. I'm not sure you can pull that off with "better" approaches ...
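The interposition trick discussed above can be sketched with a rough analogue: Python's classes are open at runtime, so "monkey-patching" a method gives the same wrap-code-you-don't-own effect that swizzling gives in Objective-C. This is only an illustration of the idea, not real swizzling, and the `ThirdPartyPlugin` class is made up for the example.

```python
class ThirdPartyPlugin:
    """Stands in for external code whose source we cannot edit."""
    def process(self) -> str:
        return "original"

# Keep a handle to the original implementation, the way a swizzled
# Objective-C method typically calls through to the original IMP.
_original_process = ThirdPartyPlugin.process

def patched_process(self) -> str:
    # Interpose logging (or any other behavior) around the original call,
    # without touching the third-party source.
    return f"logged({_original_process(self)})"

# Swap the implementation at runtime; every existing and future instance
# now routes through the patched version.
ThirdPartyPlugin.process = patched_process

print(ThirdPartyPlugin().process())  # -> logged(original)
```

As the thread notes, the real Objective-C mechanism (`method_exchangeImplementations` in the runtime) works even on compiled binaries you never see the source of, which is where it earns its keep over the tamer alternatives.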
Many of the built-in types in Objective-C have names beginning with “NS”, like “NSString”. The NS stands for NeXTSTEP. I always found it insane that so many years later, every iPhone on Earth was running software written in a language released in the 80s. It’s definitely a weird language, but really quite pleasant once you get used to it, especially compared to other languages from the same time period. It’s truly remarkable they made something with such staying power.
>It’s truly remarkable they made something with such staying power
What has had the staying power is the API because that API is for an operating system that has had that staying power. As you hint, the macOS of today is simply the evolution of NeXTSTEP (released in 1989). And iOS is just a light version of it.
But 1989 is not all that remarkable. The Linux API (POSIX) was introduced in 1988, though work started in 1984, and it was based on an API that emerged in the 70s. And the Windows API goes back to 1985. Apple's is the newest API of the three.
As far as languages go, the Ladybird team is abandoning Swift to stick with C++ which was released back in 1979. And of course C++ is just an evolution of C which goes back to 1972 and which almost all of Linux is still written in.
And what is Ladybird, even? It is an HTML interpreter. HTML was introduced in 1993. Guess what operating system HTML and the first web browser were created on. That's right: NeXTSTEP.
In some ways ObjC’s and the NEXTSTEP API’s staying power is more impressive because they survived the failure of their relatively small patron organization. POSIX and C++ were developed at and supported by tech titans - the 1970s and 1980s equivalents of FAANG. Meanwhile back at the turn of the century we had all witnessed the demise of NeXT and many of us were anticipating the demise of Apple, and there was no particularly strong reason to believe that a union of the two would fare any better, let alone grow to become one of the A’s in FAANG.
I actually suspect that ObjC and the NeXT APIs played a big part in that success. I know they’ve fallen out of favor now, and for reasons I have to assume are good. But back in the early 2000s, the difference in how quickly I could develop a good GUI for OS X compared to what I was used to on Windows and GNOME was life changing. It attracted a bunch of developers to the platform, not just me, which spurred an accumulation of applications with noticeably better UX that, in turn, helped fuel Apple’s consumer sentiment revival.
Good take. Even back in the 1990s, OpenStep was thought to be the best way to develop a Windows app. But NeXT charged per-seat licenses, so it didn't get much use outside of Wall Street or other places where Jobs would personally show up. And of course something like iPhone is easier when they already had a UI framework and an IDE and etc.
Assuming you mean C (C++ is an 80s child), that’s trivially true because devices with an ObjC SDK are a strict subset of devices that are running on C.
Yes, that is why I don't find it "insane" like the grandparent does, like yeah, devices run old languages because those languages work well for their intended purpose.
You should feel that C’s longevity is insane. How many languages have come and gone in the meantime? C is truly an impressive language that profoundly moved humanity forward. If that’s not insane (used colloquially) to you, then what is?
NeXT was more or less an Apple spinoff that was later acquired by Apple. Objective-C was created because using standards is contrary to the company culture. And with Swift they are painting themselves into a corner.
> Objective-C was created because using standards is contrary to the company culture
Objective-C was actually created by a company called Stepstone that wanted what they saw as the productivity benefits of Smalltalk (OOP) with the performance and portability of C. Originally, Objective-C was seen as a C "pre-compiler".
One of the companies that licensed Objective-C was NeXT. They also saw pervasive OOP as a more productive way to build GUI applications. That was the core value proposition of NeXT.
NeXT ended up basically taking over Objective-C, and then it became a core part of Apple when Apple bought NeXT to create the next generation of macOS (the one we have now).
So, Objective-C was actually born attempting to "use standards" (C instead of Smalltalk) and really has nothing to do with Apple culture. Of course, Apple and NeXT were both brought into the world by Steve Jobs.
> Objective-C was created because using standards is contrary to the company culture.
What language would you have suggested for that mission and that era? Self or Smalltalk and give up on performance on 25-MHz-class processors? C or Pascal and give up an excellent object system with dynamic dispatch?
C's a great language in 1985, and a great starting point. But development of UI software is one of those areas where object oriented software really shines. What if we could get all the advantages of C as a procedural language, but graft on top an extremely lightweight object system with a spec of < 20 pages to take advantage of these new 1980s-era developments in software engineering, while keeping 100% of the maturity and performance of the C ecosystem? We could call it Objective-C.
Years ago I wrote a toy Lisp implementation in Objective-C, ignoring Apple’s standard library and implementing my own class hierarchy. At that point it was basically standard C plus Smalltalk object dispatch, and it was a very cool language for that type of project.
I haven’t used it in Apple’s ecosystem, so maybe I am way off base here. But it seems to me that it was Apple’s effort to evolve the language away from its systems roots into a more suitable applications language that caused all the ugliness.
Some refer to the “Tim Cook doctrine” as a reason for Swift’s existence. It’s not meant to be good, just to fulfill the purpose of controlling that part of their products, so they don’t have to rely on someone else’s tooling.
That doesn’t really make sense though. I thought that they hired Lattner to work on LLVM/clang so they could have a non-gpl compiler and to make whatever extensions they wanted to C/Obj-C. Remember when they added (essentially) closures to C to serve their internal purposes?
So they already got what they wanted without inventing a new language. There must be some other reason.
The Accidental Tech podcast had a long interview with Lattner about Swift in 2017 [0]. He makes it out as something that started as a side-project / exploration without much of an agenda, which grew mostly because of the positive feedback the project got from other developers. He had recently left Apple back then, and supposedly left the future of Swift in other people's hands.
I definitely agree with the first point - it's not meant to be the best.
On the second part, I think the big thing was that they needed something that would interop with Objective-C well and that's not something that any language was going to do if Apple didn't make it. Swift gave Apple something that software engineers would like a ton more than Objective-C.
I think it's also important to remember that in 2010/2014 (when swift started and when it was released), the ecosystem was a lot different. Oracle v Google was still going on and wasn't finished until 2021. So Java really wasn't on the table. Kotlin hit 1.0 in 2016 and really wasn't at a stage to be used when Apple was creating Swift. Rust was still undergoing massive changes.
And a big part of it was simply that they wanted something that would be an easy transition from Objective-C without requiring a lot of bridging or wrappers. Swift accomplished that, but it also meant that a lot of decisions around Swift were made to accommodate Apple, not things that might be generally useful to the larger community.
All languages have this to an extent. For example, Go uses a non-copying GC because Google wanted it to work with their existing C++ code more easily. Copying GCs are hard to get 100% correct when you're dealing with an outside runtime that doesn't expect things to be moved around in memory. This decision probably isn't what would be the best for most of the non-Google community, but it's also something that could be reconsidered in the future since it's an implementation detail rather than a language detail.
I'm not sure any non-Apple language would have bent over backwards to accommodate Objective-C. But also, what would Apple have chosen circa 2010, when work on Swift started? Go was (and to an extent still is) "we only do things these three Googlers think are a good idea", it was basically brand-new at the time, and even today Go doesn't really have a UI framework. Kotlin hadn't been released when work started on Swift. C# was still closed source. Rust had barely appeared and was still undergoing a lot of big changes through Swift's release. Python and other dynamic languages weren't going to fit the bill. There really wasn't anything that existed then which could have been used instead of Swift. Maybe D could have been used.
But also, is Swift bad? I think that some of the type inference stuff that makes compiles slow is genuinely a bad choice and I think the language could have used a little more editing, but it's pretty good. What's better that doesn't come with a garbage collector? I think Rust's borrow checker would have pissed off way too many people. I think Apple needed a language without a garbage collector for their desktop OS and it's also meant better battery life and lower RAM usage on mobile.
If you're looking for a language that doesn't have a garbage collector, what's better? Heck, what's even available? Zig is nice, but you're kinda doing manual memory management. I like Rust, but it's a much steeper learning curve than most languages. There's Nim, but its ARC-style system came 5+ years after Swift's introduction.
So even today and even without Objective-C, it's hard to see a language that would fit what Apple wants: a safe, non-GC language that doesn't require Rust-style stuff.
I think that their culture of trying to invent their own standards is generally bad, but it is even worse when it is a programming language. I believe they are painting themselves into a corner.
>For example, Go uses a non-copying GC because Google wanted it to work with their existing C++ code more easily. Copying GCs are hard to get 100% correct when you're dealing with an outside runtime that doesn't expect things to be moved around in memory.
Do you have a source for this?
C# has a copying GC, and easy interop with C has always been one of its strengths. From the perspective of the user, all you need to do is to "pin" a pointer to a GC-allocated object before you access it from C so that the collector avoids moving it.
I always thought it had more to do with keeping the implementation simpler during the early stages of development, with the possibility of making it a copying GC some time in the future (mentioned somewhere in the stdlib's sources, I think), but it never came to fruition because Go's non-copying GC was fast enough and a lot of code has since been written with the assumption that memory never moves. Adding a copying GC today would probably break a lot of existing code.
To add to this, whatever was to become Obj-C's successor needed to be just as or more well-suited for UI programming with AppKit/UIKit as Obj-C was. That alone narrows the list of candidates a lot.