Hacker News | new | past | comments | ask | show | jobs | submit | Arcaire's comments | login

Care to explain why it's overused in comparison to other toolkits/frameworks/libraries? Taking into account, of course, the target use-cases.


There's a six-hour old reddit thread[0] on this article with comments from (mostly) Australians in /r/australia, for the interested. It may provide additional context and opinions, although I'm sure you can imagine what the overwhelming opinion is going to be.

[0] https://www.reddit.com/r/australia/comments/5t3arf/nbn_ceo_s...


I believe the parent meant that it's not a bad idea to reevaluate the usage of any software that is business-critical and may have similar repercussions should your license to use it expire.


That is exactly what parent meant.


I don't / wouldn't do that simply because my handwriting pushes the boundaries of the word 'atrocious'. I'd imagine others who spend the majority of their time using computers are in the same boat.


> Yes I know you can mitigate this with LTS but that's a big compromise.

Genuine question as an Ubuntu user: where do you win using Debian on a laptop? I totally understand using it on servers, as it is the definition of 'rock solid' and you can ensure that it will work 100% of the time.

Admittedly I don't pay a huge amount of attention to the differences, but isn't using Ubuntu LTS compromising in the same way as Debian stable? Specifically, packages are oriented for stability. Consequently, you don't get improvements to the kernel, or to other packages you may use frequently. On my laptop, I don't want to use kernel 3.6 without the latest improvements for Intel processors, the Direct Rendering Manager, and power saving. I don't particularly want to use quite outdated versions of GNOME either. Six months, to me, is a solid amount of time to sit on a release.

Some people suggest (misguidedly, according to Debian volunteers) using Unstable on a workstation / laptop. When I tried that, it outright broke my system entirely because I was using an Nvidia graphics card, so I just installed Ubuntu and it worked from first boot.

Using testing is sworn against by almost everybody, as it's the worst of both worlds - it's not stable and it's not fixed quickly.

Stable plus backports, suggested elsewhere in these comments, sounds nice only if you consider ensuring that water drains equally out of each hole in a colander an adequate use of your time. You then have what I'm fairly sure is explicitly listed as "don't do this" on the DontBreakDebian[0] page.

[0] https://wiki.debian.org/DontBreakDebian


If you run Debian on your servers then having the same software, and the same versions, available is good for consistency.

There's something to be said for running multiple distributions so you spot binaries in different places, etc, but really running the same system "everywhere" has more value to me.

My laptop/desktop run Debian stable, my servers run Debian stable, and so I know what to expect.

When I need things that aren't available, or have to be backported, I know how to do that. For extreme changes I can use containers, or virtual machines. But for the past few years I've not been convinced by the feature-churn or value added by Ubuntu. (Especially when you see that their "universe" isn't really supported by anybody - hell, even looking at bug reports for their supported packages is often depressing. Bugs stay open for a very long time with no updates because of a lack of people willing/able to fix them - combined with forum advice which is often the blind leading the blind.)


> Using testing is sworn against by almost everybody, as it's the worst of both worlds - it's not stable and it's not fixed quickly.

Nonsense. Testing is stable and up to date. Ubuntu is based off of testing. Debian's rolling release system makes upgrades easy.


Do you happen to have a link to that? I'm interested to hear it.


Haven't listened to it yet, but it seems like this is the episode mentioned: http://www.se-radio.net/2016/06/se-radio-episode-261-david-h...


DRM there refers to Direct Rendering Manager[0].

A few days ago the author those changes are listed under (Dave Airlie) was on the HN frontpage for his comments on an AMD RFC[1].

[0] https://en.wikipedia.org/wiki/Direct_Rendering_Manager

[1] https://news.ycombinator.com/item?id=13136426


so when the AMD patch was refused, it was refused for this LTS version of the kernel? that's gotta be a huge blow against AMD, no?


The merge window for 4.9 closed long before this recent spate of publicity. The AMDGPU patch would have been on track for more like 4.11.


ah ok, thanks for the information.


Really? They seem to be opening up quite a bit more with regards to Linux (.NET Core, WSL, etc).


To me it looks like they are giving people running linux the minimum tools to run that stuff, but at the same time not giving enough to encourage devs to stay on Linux (WSL goes in the opposite direction, its purpose is to help devs move to Windows). But this is all IMHO, take it with a grain of salt.

By the way, since it's based off MonoDevelop, you can just use it on Linux.


I've been using Visual Studio Code on Linux to do some TypeScript development. It's actually pretty nice, and works really well in a general JavaScript environment (npm, etc).


We've been waiting for years for a simple Xamarin Studio for Linux, even one supporting only Ubuntu and letting the other distros manage the packages themselves... The reality is it seems they don't want to bring Xamarin to Linux at all; sadly, Miguel has been very clear about that over the years.


IIRC he said they discontinued Xamarin for Linux because of a lack of demand


The idea is to limit exposed routes to the standard CRUD routes: create (/new), read (/show), update (/edit), delete (/destroy).


Ugh, no. The idea is to use standard HTTP verbs (GET, POST, PUT, DELETE) with noun routes, like:

    Create: POST /resources (or PUT /resources/42)  
    Read: GET /resources/42  
    Update: PUT /resources/42 (or PATCH /resources/42)  
    Delete: DELETE /resources/42
    List/Search: GET /resources
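To make the verb-plus-noun idea concrete, here is a minimal sketch of such a routing table as a plain dispatch function. The patterns, action names, and `dispatch` helper are all illustrative inventions, not any particular framework's API:

```python
import re

# Illustrative dispatch table mapping (HTTP verb, path pattern) -> action.
# Order matters: item routes (/resources/42) are checked before the
# collection route (/resources), so GET can serve both "read" and "list".
ROUTES = [
    ("POST",   r"^/resources$",      "create"),
    ("GET",    r"^/resources/\d+$",  "read"),
    ("PUT",    r"^/resources/\d+$",  "update"),
    ("PATCH",  r"^/resources/\d+$",  "update"),
    ("DELETE", r"^/resources/\d+$",  "delete"),
    ("GET",    r"^/resources$",      "list"),
]

def dispatch(verb, path):
    """Return the CRUD action for a request, or None if nothing matches."""
    for v, pattern, action in ROUTES:
        if v == verb and re.match(pattern, path):
            return action
    return None
```

Note that the same path maps to different actions depending only on the verb, which is the point: the URL names the resource, the method names the operation.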


I thought the idea was to make an API behave like a web page by using hyperlinks that represent the state of the application, so that in theory a "smart" client could navigate an API autonomously, like a human interacts with a web page? Because otherwise it's no different from RPC.


I think you are thinking of HATEOAS. Read about it here: https://en.wikipedia.org/wiki/HATEOAS?wprov=sfla1
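A minimal sketch of that idea, assuming an illustrative `_links` payload shape (not any formal standard): the server embeds hyperlinks in each response, and the client navigates by link relation rather than hard-coding URLs.

```python
# Hypothetical HATEOAS-style response: the available transitions (here
# "self" and "cancel") are discovered from the document, not assumed.
response = {
    "id": 42,
    "status": "shipped",
    "_links": {
        "self":   {"href": "/orders/42"},
        "cancel": {"href": "/orders/42/cancel"},
    },
}

def follow(doc, rel):
    """Return the URL for a link relation, or None if the server omitted it."""
    link = doc.get("_links", {}).get(rel)
    return link["href"] if link else None
```

A client built this way only breaks when a relation disappears, not when the server moves a URL, which is what distinguishes this from plain RPC.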


This was linked previously, and the single comment therein linked to a discussion on reddit[0] about the issue.

Of note, this change has been reverted now[1].

[0] https://www.reddit.com/r/golang/comments/5alxa3/gos_alias_pr...

[1] https://github.com/golang/go/issues/16339#issuecomment-25852...

