A link shortener is such an easy thing to code; it's essentially one database table plus a redirect. On top of that, there are many open-source libraries that implement link shortening, analytics included. Even so, Bitly and Rebrandly list customers (on their websites) like Toyota, Cisco, Oracle, Monday.com, the New York Times, etc.
Are these companies unable to build a link shortener? It's also so easy to migrate off a shortener service. If they could build one, and migrating away is trivial, yet they still pay for these services, there must be another reason. And that reason is that they simply don't want to. This has nothing to do with AI.
I run a software company and one of the reasons customers say they want to migrate from their homegrown spreadsheet is because the guy who built it left. A freaking spreadsheet!
Such blog posts and probably many comments here are the perfect answer to "Tell me you don't run a real business without telling me you don't run a real business"
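To back up the "one database table with a redirect" claim, here's roughly how small the core is. This is a toy sketch (an in-memory dict standing in for the table, hypothetical function names), not any vendor's implementation:

```python
import secrets

# The whole core of a link shortener: one table (here a dict)
# mapping a short code to a long URL, plus a redirect lookup.
links = {}  # in production this would be a single database table

def shorten(url):
    code = secrets.token_urlsafe(4)  # short random slug
    links[code] = url
    return code

def resolve(code):
    # An HTTP handler would answer a 301/302 with this URL as Location,
    # and increment a counter if you want "analytics".
    return links.get(code)

code = shorten("https://example.com/some/very/long/path")
assert resolve(code) == "https://example.com/some/very/long/path"
```

Everything else (custom domains, click analytics, dashboards) is commodity plumbing, which is exactly the point: the customers aren't paying because it's hard.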
Regarding your last comment: the majority of people here are coastal with FAT paychecks, slinging code for VCs. It's a totally different universe from running a SaaS. That said, it's still a valuable forum.
To tackle this, I highly recommend the percentile technique from the MIT paper "A Structured Approach to Strategic Decisions". [1]
If you are judging any dimension, say priority, then assigning a percentile rating to each task (i.e., where does this task stand relative to all the others?) can be quite helpful for overcoming the "SuperEvenMoreImportantEmergencyBugfix" cases. This is what the article is suggesting too.
And if you end up assigning the 90th percentile to more than 50% of the tasks, you know your judgement is off and can correct it accordingly. It can also be standardized much better across the organization: everyone can now judge their own judgements.
Rating on multiple dimensions with low correlations that matter to your company, say signup rate, retention, security, etc., and adding them up is a good way to avoid missing something important. It's not important to assign weights; equal weights are fine. [2]
I'm finding this technique quite useful for deciding what to work on next, with fewer doubts about priority. I failed to make Trello work for this and use a spreadsheet instead.
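The spreadsheet version of this can be sketched in a few lines. The dimension names and scores below are my own illustration, not from the cited paper:

```python
# Hypothetical task ratings: a percentile (0-100) per dimension,
# i.e. where each task stands relative to all the others.
tasks = {
    "fix-signup-bug": {"signup_rate": 95, "retention": 40, "security": 10},
    "add-2fa":        {"signup_rate": 20, "retention": 30, "security": 98},
    "tweak-footer":   {"signup_rate": 10, "retention": 5,  "security": 5},
}

def total_score(scores):
    # Equal weights: just sum the percentile ratings.
    return sum(scores.values())

ranked = sorted(tasks, key=lambda t: total_score(tasks[t]), reverse=True)
print(ranked)  # highest combined score first

# Sanity check on your own judgement: if more than half the tasks sit at
# or above the 90th percentile on a dimension, the ratings are inflated.
inflated = sum(1 for s in tasks.values() if s["signup_rate"] >= 90)
assert inflated <= len(tasks) / 2, "too many 'top 10%' tasks -- recalibrate"
```

Summing low-correlation dimensions is what keeps a task that's mediocre on one axis but critical on another (like "add-2fa" here) from being buried.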
To add to this, I have a six-year-old iPad. There are no issues with it except the operating system itself. Hulu won't install because it needs iOS 11 for some reason, and iOS won't update to 11 because the device is too old. Netflix works and updates just fine.
The iPad is in perfect condition with no hardware issues. It's the software that will turn it into e-waste soon.
My Android phone updates all the apps (even many OS services) without any problem.
Have you ever downloaded Hulu with your account? If so, it should let you download the “last compatible version”. It worked a few months ago for me on my old first gen iPad.
If you haven't downloaded it before, you can download it with an older version of iTunes, or on a newer device if you have one, and then it will let you download it on your old device.
I use the https://github.com/dan-v/algo fork, which combines a WireGuard VPN with Pi-hole. It takes minutes to spin up a VPN server on DigitalOcean and have it working on all my devices. I'm very happy with this setup.
The Pi-hole dashboard is quite useful for seeing what's being blocked and for adding new domains/lists easily. For example, I also add all Facebook domains (https://github.com/jmdugan/blocklists/blob/master/corporatio...) and sometimes Hacker News when I want to be productive.
Just a few thoughts I've been mulling for a while about this topic:
Machine learning is something that I believe can take advantage of analog computing. A machine learning algorithm does not need highly precise or accurate representations, and most current ML processing units already use fewer bits (usually 8).
However, even if we use fewer bits, the engineering effort (design, layout, lithography, etc.) that goes into making the processing unit still assumes those few bits are error-free. The manufacturing process treats it like any other digital circuit: the data-processing path is assumed to be fault-free (e.g., the MSB and LSB are treated the same). Digital circuits also demand more power than analog equivalents.
If an analog circuit can be designed for such algorithms, not only could it be much faster, it would probably consume far less power. With very high bandwidth at low power, an analog processing chip might give us a much better playground for trying advanced algorithms. The materials could then be optimized, and we might end up with something like a brain.
Brains (across all animals) process far more information for the power they consume.
Digital circuits give us low-level reliability, so they are really good for simple control. Analog circuits and biology don't give us that, but they can give us high-level reliability while delegating the low-level reliability to digital counterparts.
I think you're wrong about the ML precision. You need high precision for most recursive machine learning tasks, because otherwise you're compounding errors.
Typically you can't even use floating-point representation: it's not accurate enough.
Disagree. https://arxiv.org/abs/1805.08691 demonstrates that an 8-bit architecture for a pre-trained CNN provides more than acceptable results, with lower latency and higher throughput than a higher-precision version.
Analog can actually be much more precise than 8-bit, since there's no quantization noise (yes, this isn't A2D, but any intermediate result can take middle values, which only makes the final value more precise than a digital signal's).
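To make the 8-bit point concrete, here is a minimal sketch of symmetric int8 weight quantization, the general kind of low-precision scheme such papers evaluate (the weights here are synthetic, not from any real network):

```python
import random

# Pretend these are pre-trained CNN weights.
random.seed(0)
weights = [random.gauss(0, 0.1) for _ in range(1000)]

# Symmetric quantization: map [-max|w|, +max|w|] onto integer codes [-127, 127].
scale = max(abs(w) for w in weights) / 127

def quantize(w):
    q = round(w / scale)             # nearest int8 code
    return max(-127, min(127, q))    # clamp to the int8 range

def dequantize(q):
    return q * scale

# Worst-case round-trip error is scale/2 -- tiny next to the weights themselves,
# which is why 8-bit inference loses so little accuracy in practice.
err = max(abs(w - dequantize(quantize(w))) for w in weights)
print(f"max round-trip error {err:.6f} vs scale {scale:.6f}")
```

The quantization noise is bounded by half a code step; an analog value, by contrast, isn't snapped to a grid at all, which is the parent comment's point about intermediate results.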
This is one of the most amazing things about humans to me. I marvel at how it's possible to get so much information about our universe by observing so little. Astronomy is another field I'm really impressed with. It's possible to infer things about galaxies, stars, planets, etc. just by observing some light that travels from them to our little blue planet. Absolutely crazy.
How many such techniques have we not discovered yet, or lack the technology to apply?
I really enjoyed reading Our Mathematical Universe by Max Tegmark, which explores such things.
I always love reading about the crazy new techniques for extracting text from rotten old paper and papyrus. I distinctly remember reading an article a few months back about someone detecting trace chemicals on an old pocket bible or something, and deducing the owner had liver disease or something equally insane. Wish I could find that article!
Under the right conditions, proteins can survive for millions of years. In recent years, proteomic studies of art works and archeological remains have yielded biological information of startling clarity, revealing gossamer-thin layers of fish glue on seventeenth-century religious sculptures and identifying children’s milk teeth from pits of previously unrecognizable Neolithic bones.
Regarding astronomy, the more amazing things are what we've been able to predict by mathematical inference before we had the technology to validate the predictions.
On similar lines, IMHO, Mendeleev's periodic table is also quite amazing. It is now close to 150 years old!
From a random page on the web:
Mendeleev realized that the physical and chemical properties of elements were related to their atomic mass in a 'periodic' way, and arranged them so that groups of elements with similar properties fell into vertical columns in his table.
Gaps and predictions

Sometimes this method of arranging elements meant there were gaps in his horizontal rows or 'periods'. But instead of seeing this as a problem, Mendeleev thought it simply meant that the elements which belonged in the gaps had not yet been discovered. He was also able to work out the atomic masses of the missing elements, and so predict their properties. And when they were discovered, Mendeleev turned out to be right. For example, he predicted the properties of an undiscovered element that should fit below aluminum in his table. When this element, called gallium, was discovered in 1875, its properties were found to be close to Mendeleev's predictions. Two other predicted elements were later discovered, lending further credibility to Mendeleev's table.
Yes. I generally like to do a reverse calculation when evaluating such offers: to get $1 million out of this risk, the company has to exit at $1 billion if I have a 0.1% stake. How likely is that? And that's before considering dilution, preferred stock, option-exercise problems, etc.
Joining a BigCo can give you $1 million (above a startup salary) in 5 years with very high probability.
Ditto! I have been at a startup that was pretty successful and generated a payout for me of about $1M post taxes.
The original grant plus all the refreshers originally amounted to ~$8M (I was among the first 3 employees), but joining early means that at every single round you'll be massively diluted (20%+, and there are many rounds from seed up to a series D/E). And that's without counting the liquidation preference (which in my case was a good 1x non-participating) and other stuff (e.g., issuing new shares for the newly hired fancy CEO who will help sell the company; refreshers having a higher cost basis; ...).
If you join early, expect your relative slice of the pie to shrink by roughly an order of magnitude. In the best case.
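The reverse calculation and the dilution effect above can be sketched in a few lines. The figures (0.1% stake, 20% dilution, six rounds) are the hypothetical numbers from these comments, not from any real offer:

```python
def payout(exit_value, stake):
    """Gross proceeds for a given ownership fraction at exit (ignoring
    preferences, taxes, and exercise costs)."""
    return exit_value * stake

def diluted_stake(initial_stake, dilution_per_round, rounds):
    """Each financing round multiplies your stake by (1 - dilution)."""
    return initial_stake * (1 - dilution_per_round) ** rounds

stake = 0.001  # 0.1%
# A 0.1% stake needs roughly a $1B exit to be worth ~$1M -- before dilution.
print(payout(1_000_000_000, stake))

# Six rounds (seed through series D/E) at 20% dilution each leaves you
# with about a quarter of your original slice.
print(diluted_stake(stake, 0.20, 6))
```

At 20% per round, ten rounds would cut the slice by 0.8^10 ≈ 0.11, which is the "order of magnitude" figure once preferences and new-share issuance are piled on top.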
He has lost all credibility as a trustworthy individual. Anything he builds will be seen as tainted by IP stolen from Google (whether true or not). Most companies will avoid doing business with him for fear of being pulled into a lawsuit and/or bad PR. This will put a major drag on his company in attracting engineers, investment, etc. He is definitely an extremely talented engineer, but his problems may be beyond any technical ingenuity he can come up with.
I feel that the time for Lyft and Uber to file for an IPO is running out. If Waymo One starts getting popular and enters the news cycle more and more, the valuations of both companies will be judged against the progress of self-driving cars.