As is the case with just about all tech service offerings these days, whether they charge you or not, my first question is: how much of the info it extracts from me to provide this service is it going to hoard and then resell to others for the sake of "improving user experience" or some other bullshit justification for extracting more money?
The ugly thing is that this vast privacy invasion and resell has become so pervasively normalized that even the fucking services which you pay for almost universally sell off to others ad nauseam anything about you that isn't nailed down.
Since this thing offers to give you personalized AI based on everything you see and do in your day, that's some very private, juicy info to resell.
Perhaps "limitless" refers to what they are allowed to do with the data/information they collect.
NB: Data/information can be transferred to other entities in ways that do not meet the definition of "resell".[1]
In addition to transfer, there must be limits on _use_. Obviously a "restriction" like, "The company shall be limited to using the data to improve the service" is meaningless. Defeating privacy improves the service.
The CEO said on Twitter: "In fact, we built Confidential Cloud in such a way that only you can decrypt your data. Your employer, we as software providers, and the government cannot decrypt your data without your permission, even with a subpoena to do so."
So I think a monthly subscription is the business model, not ads.
I do think they’re unlikely to sell user data, but it’s important to note that their privacy claims aren’t true and it would be possible for them to[1].
This sidesteps the question, though. Transcription isn’t happening on the device, which means your raw audio is sent off to someone else’s computer. Whether or not it is finally stored in some secure manner is irrelevant at that point.
They do make some effort to push the “confidential cloud” aspect of their product, where the AI is supposed to somehow operate on your encrypted data, but these days I only believe it when I see the white paper. It is unfortunately quite common in our industry to posture as more security-conscious and privacy-focused than you actually are.
I’m gonna doubt they’re operating AI on encrypted cloud data. In other words, there’s no technical reason they couldn’t be listening to your conversations.
If these guys have accomplished homomorphic encryption they shouldn’t be building a wearable, they should be licensing their IP to Apple.
Every rumor points at Apple announcing OS integration of local LLM assistant stuff at WWDC in June, with specialized chips to go with it. I suspect they're going to Sherlock a lot of small "AI" companies simultaneously.
1. The data is encrypted and they can’t operate on it in the cloud.
2. The data is not encrypted.
3. The data is encrypted and they have built a state of the art homomorphic encryption algorithm for their AI to operate on.
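To unpack option 3: homomorphic encryption means the server can compute on ciphertexts without ever decrypting them. A toy illustration of the idea, using textbook RSA's multiplicative homomorphism with tiny, completely insecure parameters chosen purely for demonstration:

```python
# Textbook RSA is multiplicatively homomorphic: multiplying two
# ciphertexts gives a valid ciphertext of the product of the plaintexts.
# These tiny parameters are illustrative only -- utterly insecure.
p, q = 61, 53
n = p * q        # modulus: 3233
e = 17           # public exponent
d = 2753         # private exponent (e*d = 1 mod lcm(p-1, q-1))

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

m1, m2 = 6, 7
c1, c2 = encrypt(m1), encrypt(m2)

# A server holding only ciphertexts can multiply them...
c_product = (c1 * c2) % n

# ...and the key holder decrypts the product of the plaintexts.
assert decrypt(c_product) == (m1 * m2) % n  # 42
```

Fully homomorphic schemes that also support additions (and hence arbitrary computation) exist, but they are orders of magnitude slower than plaintext compute, which is why running a large AI model over encrypted audio would be a genuine, license-it-to-Apple breakthrough rather than a startup feature bullet.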
When I saw them compare it to E2EE, since that’s at least a specific thing that can’t really be misinterpreted, I thought they were serious, but turns out it’s not at all[1] and they are advertising themselves as being far more private and secure than they actually are. Considering their investor list[2], maybe this is more common than we realise?
It's all sent to OpenAI for the LLM processing, so you'd better make sure you're happy with them getting text transcripts of everything that ever shows up on your computer screen.
Especially disappointing given that before they renamed the company they were built on the idea that everything was stored locally. Seems like a pivot solely to gain from the AI hype cycle.
I really wish someone would make a pendant like this which is focussed on local-only use. It could connect to a mobile app or computer for transcription and summarisation, but using local models or an API key that the user chooses. I would pay serious money (>$500) for a high-quality device that did this, if it was backed by an open-source platform/infrastructure for processing and managing the audio. Even better if it was written as an extension or fork of something like Obsidian, or if the data structures were fairly open and there was an API for querying the data in interesting ways.
Recording and processing audio locally is something I already do. But there's absolutely no way I'm EVER going to trust a startup with this sort of data, and anyone in any serious job would almost certainly be unable to use it for work.
I guess the risk is that someone would sell a crappy pendant that uses the same software, but I still think there's a potential niche here for people who want something that just works.
I'd also purchase the software from the App Store if it was otherwise only available as a non-signed binary.
Yes, a local-only always-on recorder/transcriber would be a killer product. Combine with a local LLM for summarization, and a simple search engine. But there's no way in hell I'm paying a subscription for this, or letting some 3rd party have this data.
It doesn't seem like this should actually be that difficult to build an open-source version of, modulo battery-life concerns.
I have been wanting this since I first used Whisper and saw its capabilities. Nothing you can buy off the shelf is sufficient at the moment, and if you DIY it, it'll be too big.
I actually pitched this idea as a project for students on an MSc EE course a friend runs. Sadly none of the students picked it, but we looked at it in some detail and it’s so simple that even the dev boards + battery + mic array could have been made pretty small (according to him anyway). I really hope someone with the hardware chops fancies scratching the itch!
This looks incredible. Site looks fantastic. In a perfect world, I'd pre-order immediately.
I sure as hell am NOT subjecting friends and family to being recorded and having those recordings going to some cloud I don't control.
Just this morning I got an email from AT&T about how they regret that my data was leaked but it's not so bad, sign up for this monitoring service. I have no choice but to use certain services. I can choose not to engage with this beautiful product.
I haven’t personally used the AI Pin, but based on reviews, it doesn’t feel like these devices are going to feel productive until a much higher % of queries are answered on-device (without going to the cloud).
MKBHD asked his pin what he was looking at, and in the time it took Humane’s pin to take a pic, send it up to the cloud, decide which model was most appropriate, and narrate a really long (possibly hallucinatory) answer, he simply took his phone out and Google Lens answered correctly with a lot of time to spare.
I know these are first-gen products and the fact that this one would presumably have a much higher degree of context on my personal life might make this better (without speaking to privacy implications). Still, over-reliance on the cloud and the fact that this doesn’t interface with my phone (where most reminders, alarms, messages, etc, happen) is going to make these devices a tough sell.
Hardware is hard. We would have happily done this as a native iPhone or Watch app if it were possible. We explored that first. But, unfortunately both iOS and watchOS don't support it. In particular, anytime you listen to audio (a YouTube video, phone call, etc) it turns off the microphone and you have to remember to turn it back on.
I pre-ordered the Rewind and noticed that I'll be upgraded for free! But my question is about the subscription: will there be any discounts for the pro plan?
Also, I do notice that the free version has 10 free hours a month of AI features.
How easy would it be for me to export the recordings, and use it with say Google Gemini Pro for transcribing?
Do you plan to add support for microsoft calendars and email? I have gmail but none of my work stuff is in there so I can't really make great use of this :(
Aren't there wiretapping laws that make recording conversations in certain situations illegal? I love the idea of this type of technology but it seems tricky to deploy from a moral and legal perspective.
I've previously worked on the Google Assistant from several angles, and this demo'd awesome, and I was very surprised. Then I saw it is listed as "soon" on the roadmap. Then I remembered the demo was very much a video, not a demo.
I might be jaded from years of bigco, and I'm rooting for you, hopefully you're already set enough financially you can ignore this, innovate, and already have teams of people demo'ing a solution internally:
As a company, you can't get trust back. Fudging a bit and projecting what you'll have when you ship is very tempting when the competition is this thick. Having this much competition also implies there will be choice, and given the use case, it's likely people will always opt for choices that appear more trustworthy.
On a completely separate note, I've seen many, many, teams of extremely bright people be funded for 2-4 years on things that you'd think "it can't be that hard..." and it turns out it's impossible. Not this specifically, but voice adjacent stuff.
Again, rooting for you, but a forthright version of me would have just said it'll never work as demo'd. It's worth considering what impact it'll have long-term on your success if even just 20% of what's on the roadmap doesn't work out, given you're already talking about it in the present tense and it's absolutely key to your user trust story. 20% is conservative in my experience.
I'm sure you've considered how the BIPA law in Illinois applies since it's one of (if not the) strongest biometrics privacy laws in the U.S. Could you share some detail on how you store and process unknown voices before consent?
Your wording in this comment (and the twitter/comment video) gives off the same vibes as the Google April 1st videos for things like Gmail Motion (https://www.youtube.com/playlist?list=PLAD8wFTLnQKeDsINWn8Wj...). I honestly thought this was full sarcasm at first.
Just a tip - consent shouldn't be a mode, it should be the default. Might want to re-think how you market the idea because done correctly, it is a powerful feature.
So you record people without consent and pinky swear that your software will “unrecord” after the fact? I don’t know if you can just hand wave that away… ship has probably sailed on anyone enforcing these kinds of laws I guess, good luck
I'm reminded of how I felt it was unfair that the hotword detector for assistants was discussed as "recording", but it's really just parsing a byte stream, never storing it.
Voice ID systems I'm familiar with work on a similar premise.
Given how new this product is, shouldn't the trailer focus on what it does and how to use it in daily life, rather than showing off the different colors and finishes of the product? With iPhones and Pixels, I can understand this focus. No one needs to explain what they do. Not this one. This is the first time I am hearing/seeing you. So, show me what you can do, not what colors or materials you're made with.
The value of taking notes isn’t in the text content that is produced. It’s in the actual act of taking the notes, that’s what actually creates meaningful connections in your brain and leads to greater insight and on demand memory later.
Would you rather have someone who when asked a question can answer it on demand or someone who has to go searching for a needle in their haystack of voice / text / LLM summarized notes?
That’s why people take notes. Not to produce text. It seems like every few years a new crop of dictation as a service companies crop up and learn this lesson all over again. Users who can’t be bothered to take their own notes never go back searching through them.
They don't really give any specifics and I'm not sure if they give you the keys or explain how the keys are derived (which I assume must be based on your login if they don't make you enter it otherwise they must be able to decrypt it whenever they want) but they mention they worked with Latacora[1]. Also curious if anyone else has any ideas on how they prevent themselves from being able to decrypt user data while implying they're not using E2EE[1].
Edit: I just tried it. They don't give you encryption keys you need to enter when signing in and the server literally sends you your transcripts with no encryption. Maybe they're including a key somehow derived when signing in with Google/a magic link in the request, but I don't think anything would stop them from just logging API responses even if that was the case. They're definitely not using E2EE. They might just be encrypting at rest and storing their keys in AWS KMS which sounds like false advertising.
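For context on what a key "derived when signing in" could look like: the standard approach is password-based key derivation, e.g. PBKDF2 from the Python standard library. The salt and iteration count below are illustrative, not anything Limitless documents:

```python
import hashlib
import os

def derive_key(secret: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Derive a 32-byte encryption key from a login secret via PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, iterations)

salt = os.urandom(16)   # stored alongside the ciphertext; need not be secret
key = derive_key("user-login-secret", salt)

# Deterministic: the same secret + salt always re-derives the same key,
# so a client could reconstruct it at sign-in without the server storing it.
assert key == derive_key("user-login-secret", salt)
assert len(key) == 32
```

Note the catch, though: signing in with Google or a magic link gives the client no stable secret to feed into such a derivation, which lines up with the observation above that you're never asked to enter a key.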
Confidential Cloud is similar to end-to-end encryption, but with the added benefit of letting your personalized AI work for you even when you aren’t using the app.
You control who can decrypt your data
Your employer, we as software providers, and the government cannot decrypt your data without your permission, even with a subpoena to do so.
Your data is anonymized
There is an initial mapping from your application request to an anonymous ID, but after that, even we don't know whose encrypted data is whose.
Your data will never be sold
Not only do we pledge never to sell your data, but we couldn't even if we wanted to because we can't decrypt it without your permission.
Envelope encryption with unique secret keys
Data keys are used to encrypt your data and are themselves encrypted under a secret key. No one, including us, can export the secret keys.
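The envelope pattern described above is a standard construction: each record gets its own data key, the data key is wrapped (encrypted) under a master key, and only the wrapped key is stored next to the ciphertext. A stdlib-only sketch of the structure; the XOR keystream cipher here is a toy stand-in for AES, not real cryptography:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy SHA-256-based XOR stream cipher (AES stand-in, NOT secure).
    The same call both encrypts and decrypts."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

master_key = secrets.token_bytes(32)       # lives in the KMS, never exported

def envelope_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    data_key = secrets.token_bytes(32)     # fresh key per record
    ciphertext = keystream_xor(data_key, plaintext)
    wrapped_key = keystream_xor(master_key, data_key)  # wrap under master key
    return ciphertext, wrapped_key         # store both; discard data_key

def envelope_decrypt(ciphertext: bytes, wrapped_key: bytes) -> bytes:
    data_key = keystream_xor(master_key, wrapped_key)  # unwrap via the KMS
    return keystream_xor(data_key, ciphertext)

ct, wk = envelope_encrypt(b"meeting transcript")
assert envelope_decrypt(ct, wk) == b"meeting transcript"
```

This is exactly the catch the thread keeps pointing at: whoever can ask the KMS to unwrap data keys (i.e. the service itself) can decrypt everything. Envelope encryption protects against a stolen disk or database dump, not against the provider.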
Limitless is encrypting at rest, not using end-to-end encryption.
E2EE suggests that only the user (or at least only people the user knows about, in the case of e.g. group chats) is able to see/access the decrypted data, which is false. Limitless does not decrypt data on the client using a key only the user has access to; it decrypts the data on the server (in this case using AWS KMS) and sends it to the client.

Even if we remove just decrypting everyone’s data using AWS KMS out of the equation (since the user does not control the key), you could trivially write a Cloudflare Worker (since you use Cloudflare on your API subdomain) that simply sends the (unencrypted) API response, along with the email from the Supabase JWT used in the header, to a server that accumulates everyone’s recording names, transcripts, generated notes and generated summaries. If someone gained access to your Cloudflare account they could also do this.

You’re advertising Limitless as if you aren’t able to see people’s transcripts even if you wanted to, which is false. Even your employer can, if they TLS MitM you with their own TLS certificates, which is not rare.

On the other hand, Signal cannot see your data unless they modify client code, nor can your employer unless they install a modified Signal client on your device or install spyware on your device that reads decrypted data from memory. This is what separates encrypting at rest from E2EE (which you say your solution is just as secure as, and better than) for the end user, and it feels like false advertising. Limitless, your employer and a potential hacker can all read your data, at minimum while you’re using Limitless.
There is no way to anonymize transcripts where I and others refer to me by name and reference uniquely identifiable information. What happens when the AI provider is served with a warrant or subpoena for transcripts mentioning MacsHeadroom in the 30 day window?
Your "anonymization" does nothing. You have major egg on your face right now.
The only remotely anonymous version of a service like this involves a private LLM endpoint.
I have Mixtral 8x22B and CommandR+ running on my home LLM server. How can I use Limitless without handing my "anonymized" (but NOT actually anonymized in any way, shape or form) PII over to a third party who retains it for at least 30 days (or longer if someone with a badge or gavel asks)?
This is nonsense. If you can send the data over to third party AI providers for transcription and summarization, you can just as well send it to anyone else who asks for it (e.g. government wiretap, advertisers) whilst removing the "anonymization" part. It doesn't matter that you cannot export the keys used to decrypt in-memory if you control the code that then processes the decrypted data.
It is not at all "similar to end-to-end encryption".
Sorry, but what? How do you do any of the AI training or inference without decrypting the data in the cloud? I seriously doubt it's all running locally on the pendant.
I'm speechless, honestly. Are you seriously bragging about the sheer amount of scrolling animations and other useless but highly annoying bling that you've managed to spring onto the poor visitors of the website? Because, yes, that is a truly impressive amount of it, but the only thing I can take away from it all is that you don't care about usability.
Dear god, just watching some of those videos is painful. Like when trying to scroll literally two lines of text on the screen instead highlights them word by word and pops up icons to "illustrate" each word: https://twitter.com/Stammy/status/1779867509388165203. I don't think I've seen a website condescendingly "speaking slow" to a user before...
Big fan of Rewind (despite the impact on battery life!). My favorite part of it is that everything is verifiably local. I'll take an inferior AI model to shipping recordings of everything I see/hear to someone else.
Hard pass on Limitless. I don't care how good the encryption is, sending this kind of thing to anyone's cloud is a nope for me.
It's not clear from your website who you are and where you are based. Given that you want so much personal data, it might be better to put that up front and center to gain some trust.
We wouldn’t be where we are today without our existing customers, so all Rewind Pro subscribers get Limitless Pro for free.
We are shifting our focus to Limitless because we think it’s a better approach to solving the same problems. In fact, we plan to implement many of your favorite Rewind features directly in Limitless.
We are so bullish on Limitless that we decided to change our company name from Rewind to Limitless.
That said, we have no plans to stop supporting Rewind.
You can even use both products side-by-side and decide for yourself which one you like better.
Since the start of February, Rewind seems to have had 3 PRs merged for macOS (they auto deploy on new commits to main for both iOS TestFlight and macOS) and it’s had known bugs that have been around for months so I’m not holding my breath. Hopefully it’s at least updated for new macOS releases but I’m expecting an announcement in the future saying it won’t be supported anymore or it just not getting any updates for a year.