kkylin's comments | Hacker News

I had an even older iPad I was happily using for similar use cases. Until one day a family member bricked it and I needed to factory reset. No big deal, I thought -- nothing important on it. Turns out it needed to phone home to do the factory reset, and since the server it wanted to talk to was no longer up (or perhaps the address changed?) I couldn't factory reset the iPad.

If someone has a work-around I'd love to hear it. Until then, or until Apple changes this design, I think I'm done with iPads. I don't want to pay that much to "own" something that Apple can simply make obsolete by reconfiguring or turning off a server somewhere.

Edit: fix typo


You should be able to do a DFU restore, but when it phones home it'll require a software upgrade.


Apple recently had an issue with expired certs they had to remedy. That tends to be their bottleneck now.


Yeah, that just tripped me up trying to recommission a 2012 MacBook Pro.

Couldn't connect to wifi except through a password-less hotspot. Then I couldn't get online because nothing with SSL was working.

I didn't have a pen drive, so I had to FTP off another machine via my phone hotspot. We got there though!


There are “service providers” on eBay that I’ve used in the past to unlock iPads used by former, unresponsive employees. Not sure about your exact situation; in my case they defeated the iCloud lock. Was about $100 a pop. All done remotely.


There’s simply no way this happened; Apple has servers running that’ll talk to even the earliest iDevices.

Temporary downtime? Maybe.


In my couple decades as an academic mathematician I've only ever met one. He was a strong advocate, and got me to install and try it, but I could never convert to using it full-time.


I've had the same dream! Thanks for the pointer.



Coyotes are on their way too


Came here to say this -- looks like the data assimilation is still done the "old fashioned" way. I wonder how long that will last?


There are multiple efforts, and a good number of VCs, working on AI DA systems. DA is fundamentally a hand-crafted optimization process, much like training a NN. I once reimplemented an EnKF in PyTorch and it was amazingly fast. But our observations are so dirty and sparse. ECMWF has tuned their system very well. NOAA definitely has the potential to be even better, but I see no hope of that any time soon, IMHO.
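For readers unfamiliar with the algorithm being discussed: the analysis step of a stochastic (perturbed-observations) ensemble Kalman filter is only a few lines of linear algebra, which is why reimplementing it in a framework like PyTorch is straightforward. A minimal NumPy sketch (my own illustration, not the commenter's code; no inflation or localization, linear observation operator assumed):

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Stochastic EnKF analysis step (perturbed observations).
    X: (n, N) ensemble of state vectors (columns are members)
    y: (m,) observation vector
    H: (m, n) linear observation operator
    R: (m, m) observation-error covariance
    """
    n, N = X.shape
    # Ensemble anomalies about the ensemble mean
    A = X - X.mean(axis=1, keepdims=True)
    HA = H @ A
    # Sample covariances P H^T and H P H^T (no localization in this sketch)
    PHt = A @ HA.T / (N - 1)
    HPHt = HA @ HA.T / (N - 1)
    # Kalman gain
    K = PHt @ np.linalg.inv(HPHt + R)
    # Perturb the observations once per member, then update each member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)
```

The innovation term `Y - H @ X` is exactly where real systems hurt: with dirty, sparse observations, most of the work is in quality control and the observation operator, not in this update.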


Yup. Back in my day there was 1.00, a Civil Engineering course, a pretty standard intro to programming in plain old C. I don't know if it still exists. There was nothing of that sort in EECS, though there are lots of IAP courses (which take place in January, before spring semester starts). IMO a month is about right to spend on (leisurely) picking up a programming language for fun. A friend and I learned APL that way.


In 2004 or so, 1.00 was an intro-to-Java course. I took it very cynically to pad out my units; I was a Course 6 senior at the time. I got side-eyed by TAs a lot.


Yes, 1.00 was popular with Course 6ers who wanted easy units.


When I took 1.00 it was FORTRAN IV on an IBM 370... with actual punch cards, in batch.


We've still got this:

https://en.wikipedia.org/wiki/Eunice_aphroditois

Thankfully they don't live on land.


That's fresh nightmare fuel all right


Not really bothered by snakes, sharks or spiders. But those things (and cave centipedes) look terrifying.


I have been using mu4e for years, and am generally happy with it, and yet... I've never recommended it to anyone else. Unlike, say, org-mode or magit, which I'd happily evangelize.

The pain points are what other commenters have said:

- I don't find the default config a good fit for me, and run it heavily customized. As someone said, everything in Emacs turns into a project...

- Performance can be an issue, especially indexing new mail (and especially if, like me, you like to lug around a local copy of most of your email). On a laptop while traveling this used to be more of a problem, but newer versions are noticeably quicker and newer laptops have better battery life.

- HTML rendering isn't great. Thankfully I don't get too many important messages that aren't just plain text. This might be a reasonable use case for xwidget-webkit, though I'd imagine there are security/privacy issues to work out. (Another Emacs project -- yay!)

When I started I thought it would be an efficient way to get through lots of emails, and it has been for the most part. I'm just not sure I've saved time overall unless one counts the hours configuring it as "entertainment / hobby" rather than "work".


I too am a bit surprised this made it to the front page. Mu4e is definitely niche, and I wouldn't crow about it like I do org or magit. I've been using it for less than a month, and it will be a while before I know whether it's a net win.

Also, the real test would have been my much more voluminous work email!

The HTML rendering isn't great, as you said, but you are two keystrokes from opening that email in a browser, if you have to.

And I have tweaked the config several times now, but I think that's mostly because I'm changing my (and the charity's) email, which involves a lot of shuffling about. Again, in six months, I'll have another look and decide whether it _really_ helped.


:-)

I don't question this decision is sometimes (often) driven by the need to increase publication count. (Which, in turn, happens because people find it esaier to count papers than read them.) But there is a counterpoint here, which is that if you write say a 50-pager (not super common but also not unusual in my area, applied math) and spread several interesting results throughout, odds are good many things in the middle will never see the light of day. Of course one can organize the paper in a way to try to mitigate the effects of this, but sometimes it is better and cleaner to break a long paper into shorter pieces that people can actually digest.


Well put. Nobody wants salami slices, but nobody wants War and Peace either (most of the time). Both are problems, even if papers are more often too short than too long.


Not only that, but in the academic world 20 papers with 50 citations each are worth more than one paper with 1000. Even though the total citation count is the same, the former gives you an h-index of 20 (and an i10-index of 20) while the latter gives you an h-index of only 1 (ditto for i10).
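The arithmetic here follows directly from the definition: the h-index is the largest h such that h of your papers each have at least h citations. A short sketch checking the two scenarios in the comment:

```python
def h_index(citations):
    """h-index: the largest h such that h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    # Count positions where the i-th best paper (1-indexed) still has >= i citations
    return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

# 20 papers with 50 citations each -> h-index 20
# one paper with 1000 citations    -> h-index 1
```

So the single blockbuster paper is capped at h = 1 no matter how many citations it collects, which is exactly the distortion being described.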

Though truthfully it's hard to say what's better. All of these can be hacked (a common way to hack citations is to publish surveys; you also just get more citations by being at a prestigious institution, or by being prestigious yourself). The metric is really naïve, but it's commonly used because actually evaluating the merits of individual works is quite time-consuming and itself an incredibly noisy process. But hey, publish or perish, am I right?[0]

[0] https://www.sciencealert.com/peter-higgs-says-he-wouldn-t-ha...


That's a fantastic example of "that which gets measured gets optimized." The academic world's fascination with these citation metrics is hilarious; it is so reminiscent of programmers optimizing for whatever metric management has decided is the true measure of programmer productivity: object code size, lines of code, tickets closed, and so on...


It's definitely a toxic part of academia. Honestly if it weren't for that I'd take an academic job over an industry one in a heartbeat.

Some irony: my PhD was in machine learning. Every intro course now (including mine) discusses reward hacking (aka Goodhart's law). The irony is that the ML community has dialed this problem up to 11. My peers who optimize for this push out 10-20 papers a year. I think that's too many, and it means most of those papers are low impact. I have citation counts similar to theirs but a lower h-index, and they definitely get more prestige for that, even though it's harder to publish frequently in my domain (my experiments take a lot longer). I'm with Higgs, though: it's a lazy metric and IMO does more harm than good.


in the academic world 20 papers with 50 citations is worth more than one paper with 1000

It depends. If your goal is to get a job at OpenAI or DeepMind, one famous paper might be better.


That's exactly right. A couple more things:

- Differentiating a function composed of simpler pieces always "converges" (the process terminates): one just applies the chain rule. Among other things, this is why automatic differentiation is a thing.

- If you have an analytic function (a function expressible locally as a power series), a surprisingly useful trick is to turn differentiation into integration via the Cauchy integral formula. Provided a good contour can be found, this gives a nice way to evaluate derivatives numerically.
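To make the second point concrete, here is a minimal sketch (my own illustration) of differentiation via the Cauchy integral formula, using the simplest possible contour, a circle of radius r around the point. The trapezoid rule is spectrally accurate for periodic integrands, so even a modest number of sample points gives near machine-precision derivatives, provided the circle stays inside the function's domain of analyticity:

```python
import numpy as np
from math import factorial

def cauchy_derivative(f, a, n=1, r=0.5, m=64):
    """n-th derivative of an analytic function f at a via the Cauchy
    integral formula:
        f^(n)(a) = n!/(2*pi*i) * contour integral of f(z)/(z-a)^(n+1) dz.
    Parametrizing the contour as z = a + r*e^{i*theta} and applying the
    m-point trapezoid rule reduces the integral to a plain average.
    Assumes f is analytic on and inside the circle of radius r about a."""
    theta = 2 * np.pi * np.arange(m) / m
    z = a + r * np.exp(1j * theta)
    return factorial(n) * np.mean(f(z) * np.exp(-1j * n * theta)) / r**n
```

For example, `cauchy_derivative(np.exp, 0.0, n=3)` recovers the third derivative of exp at 0 (which is 1) to high accuracy, whereas high-order finite differences at the same cost would be swamped by rounding error.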

