It feels like Apple is “a square peg in a round hole” when it comes to AI, at least for now.
They are not the hardware provider like Nvidia, and they don’t do software and services like OpenAI or even Microsoft/Oracle, so they are struggling to find a foothold here. I am sure they are working on a lot of things, but the only way to showcase them is through their phone, which ironically feels like not the best path for Apple.
Apple’s best option is to run LLMs locally on the phone and claim privacy (which is true), but they may end up in the same Siri-vs-others situation, where Siri is always the dumber one.
Being late to the AI race, or not entering it from the training side, is not necessarily bad; others have burned tons of money. If Apple enters hardware-first (hardware-only?), it may disrupt the status quo from the consumer side. It's not impossible that they'll produce hardware everybody will want for running local models that are on par with closed ones. If that happens, it may change the flow of real money (as opposed to investor money based on imaginary valuations, which can evaporate).
They are the leader in manufacturing consumer systems with sufficient high-bandwidth memory to enable decent-sized LLMs to be run locally with reasonable performance. If you want to run something that needs >=32GB of memory (which is frankly bottom-end for a somewhat capable LLM), they're your only widely available choice (otherwise you've got the rare Strix Halo AI Max+ 395 chip, you need multiple GPUs, or maybe a self-build based around a Threadripper).
This might not be widely recognised, as the proportion of people wanting to run capable LLMs locally is likely a rounding error versus the people who use ChatGPT/Claude/Gemini regularly. It's also not something that Apple market on, as they can't monetize it. However, as memory and compute gradually get cheaper, and perhaps as local LLMs continue to improve in ability, it may become more and more relevant.
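To put rough numbers on that >=32GB figure: memory for the weights alone scales with parameter count times bits per weight. A quick back-of-envelope sketch (my own illustration; the model sizes are just examples):

    # Rough memory needed for model weights alone:
    # params * bits-per-weight / 8 bytes, plus headroom
    # for the KV cache and runtime overhead on top.
    GiB = 1024 ** 3

    def weight_memory_gib(params_billion, bits_per_weight):
        return params_billion * 1e9 * bits_per_weight / 8 / GiB

    for params, bits in [(8, 4), (32, 4), (70, 4), (70, 16)]:
        print(f"{params}B @ {bits}-bit: ~{weight_memory_gib(params, bits):.0f} GiB")

    # 8B  @ 4-bit:  ~4 GiB
    # 32B @ 4-bit:  ~15 GiB
    # 70B @ 4-bit:  ~33 GiB
    # 70B @ 16-bit: ~130 GiB

So a 4-bit 70B model already eats ~33 GiB before you count the KV cache, which is exactly the range where a high-memory Mac is currently the only off-the-shelf option.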
All the current use cases, the ones that caught the public eye, just don't need locally run LLMs. Apple has to come up with functionality that works with on-device LLMs, and that is hard to do. There aren't that many use cases for it, as the input vectors all map to an app or the camera. Even then, a full-fledged LLM is always better than a quantized, low-precision one running locally. Yeah, increased compute is the way, but it's not a silver bullet, as vision- and audio-bound LLMs require large amounts of memory.
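For concreteness, here is what running one of those quantized, low-precision models locally looks like today, e.g. via llama-cpp-python with a 4-bit GGUF file (a minimal sketch; the model path is just a placeholder for whatever you've downloaded):

    # pip install llama-cpp-python
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/some-8b-model.Q4_K_M.gguf",  # 4-bit quant, ~4-5 GiB
        n_ctx=4096,  # context window; KV cache memory grows with this
    )

    out = llm("Q: Why is local inference private? A:", max_tokens=64)
    print(out["choices"][0]["text"])

It works, but that Q4 quantization is precisely the quality trade-off being described: the hosted frontier models aren't making it.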
I am absolutely looking forward to robust, on-device AI. I would rather not send my data to a third party who, in all likelihood, will use it to build ad-driven, sensationalist, addictive experiences.
It will be interesting to see how this plays out.