Thanks a lot for your notes! Building a wrapper around ChatGPT, I couldn't hold myself back from some self-irony, hence "yet another". Contexts do seem very useful to me, and so far they're the feature most interesting to the paying users I've got, so maybe you're right that I should put this feature front and center in this app's marketing.
My aim with this app was to provide a sleek macOS ChatGPT/GPT-4 experience for users who can't be bothered to sign up for an OpenAI developer account. Of course, along the way I considered that there is a sizeable audience who would prefer to avoid any third-party proxy and use their keys directly. Unfortunately, as DerJacques mentioned in this thread, it is not certain that allowing this would pass Apple's review. However, we could try? If I see enough interest in the "bring your own keys" model, I could offer that version of the app outside the App Store. Would you be willing to pay a ~$30 one-time purchase for such an app?
As a person who is quite annoyed by slow websites, I'm really not that bothered by ChatGPT (as chat.openai.com). So a native ChatGPT Mac app would be a hard sell for me. I'm only interested in API access, which OpenAI doesn't provide a UI for.
I paid 5€ for Machato, and would probably pay up to $10 to try out other apps. For me, $30 is already in territory where I would not pay upfront, but only after regularly using the app for at least a few weeks. That's pretty much what I paid for Apollo Ultra Lifetime, after many months of regular use. Oh dear.
Also, thanks! Very nice to hear that someone appreciates my work, means a lot! Sad to hear Apple didn't allow you to offer the "bring your own keys" model. I figured it would be an attractive option for more tech-savvy users to avoid the Apple tax. As app developers, we could still charge for the app, but not for usage. For more professional applications like coding with GPT-4, that would make a noticeable difference in end users' spending.
Yes, they considered it a double-whammy trademark violation: Mac + GPT = total criminal. Nonetheless, I'm getting requests to bring the contexts feature to iOS as well, so maybe whatever happens happens for a reason.
I was also rejected over the app description, but I decided to argue against their ruling. I changed the description slightly, and when submitting a new build I pointed out that all my materials are written in accordance with OpenAI's brand guidelines, and the app successfully passed the review.
Your data is yours; the app is designed to never record or log your queries. The monetisation is simple: reselling OpenAI services in a convenient package. I hope that one day it will be possible to ship a local model of comparable quality, and simply charge for the convenience of the app while allowing unlimited use. Until then, the plan is to offer various levels of subscription and work out a reselling model that works both for users and for the sustainability of my indie app dev studio efforts.
Overall, I learned to appreciate the predictability, high standards, and documentation of the web platform. With Apple tech, the moment you stray from the neatly designed WWDC demos, you find yourself more or less alone against a not-so-well-documented platform, having to work with decades-old APIs and strange behaviours. It's much harder to find ready-made answers, and if anything, ChatGPT itself was more help than Stack Overflow in dealing with weird bugs.
Coming to native development, you kind of expect that you'll be writing beautiful code in the spirit of Apple (beautiful outside, beautiful inside), but you quickly realise that the closed nature of the platform works against it, and very often you have to resort to hackity-hack solutions, so your code is anything but beautiful after all.
Nonetheless, I enjoyed building a native app; there is something different about it, like you're building something physical, something you can feel and touch and experience. And I've barely scratched the surface of what's possible, constrained by time as a solo dev. Swift itself is a beautiful language and a pleasure to work with, and SwiftUI is very easy to pick up if you're familiar with React. Until you have to build something non-standard and find hackity ways to do it.
What surprised me the most is the ratio of product to around-the-product work. The core app itself was ready within a couple of weeks, but it lacked payments, user quotas, and trials. Implementing these took about 70% of the time I spent on the app. It's incredibly hard to implement subscriptions as a solo developer, with both Stripe and App Store options presenting their own challenges. To the point that, for my next apps, whether an app can be sold as a one-time purchase will be a major factor. For the sake of learning, I decided to go all-in on the Apple ecosystem, using Sign in with Apple and distributing through the Mac App Store. Looking forward to seeing how it works out.
If so, yeah, AppKit has some warts, and it isn't all that well documented. That's how it's been since I got started with it back in the early-mid 2000s, when your best sources for learning were random blog posts or books (the latter of which I couldn't afford as a teenager).
If you ever do iOS dev, UIKit is a lot nicer to use in almost every way. It's been polished and modernized a great deal in comparison, and because iOS as a platform is so much more popular/important, it's thoroughly documented end to end.
Still, AppKit does have some advantages, like its batteries-included nature which allows one to build complex apps with few or no third party dependencies.
100% SwiftUI! As a new Apple developer, I wanted to use their latest and greatest. It does get restrictive sometimes, but compared to having to dive into AppKit/UIKit, the occasional hackity-hack SwiftUI solution is the lesser evil IMO.
Ahh yeah, SwiftUI isn’t fully baked yet unfortunately, particularly on macOS. I’m just now starting to use it in significant capacities in iOS projects and haven’t yet started on Mac because of that.
When you install the app you get a trial with 100,000 tokens. 1 token = 1 OpenAI ChatGPT token, or 1/30 of a GPT-4 token (aligned with OpenAI pricing).
The in-app purchase grants you 4 million tokens per month, with unused tokens rolling over to the next month.
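The pricing above boils down to a simple conversion. A quick sketch of the arithmetic (hypothetical helper code for illustration, not the app's actual billing logic; only the 1:1 and 30:1 ratios come from the description above):

```python
# App-token accounting as described above:
# 1 app token covers 1 ChatGPT token; 1 GPT-4 token costs 30 app tokens.
RATES = {"chatgpt": 1, "gpt-4": 30}

def app_tokens_used(model: str, model_tokens: int) -> int:
    """Convert raw model tokens into app tokens at the model's rate."""
    return model_tokens * RATES[model]

# The 100,000-token trial covers 100,000 ChatGPT tokens,
# or roughly 3,333 GPT-4 tokens.
trial = 100_000
print(trial // RATES["gpt-4"])  # 3333
```

So the 4 million monthly tokens would stretch to about 133,000 GPT-4 tokens if used exclusively on GPT-4.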
My aim is to provide a sleek macOS ChatGPT/GPT-4 experience for people who can't be bothered to sign up for an OpenAI developer account and bring their own API keys.
Alternatively, if there is demand, I was thinking of charging just for the convenience of my app and letting people "bring your own keys". Let me know if that would be a more attractive option for you. Not sure how that would fly with Apple's in-app purchase policies, but it could be worth a try if that's what people want.
I'm a big believer in local AI. Unfortunately, it's just not there yet, but with the way things are going, it certainly seems like we'll have OpenAI-level models running locally soon. When that happens, I'd love to be on the frontier of bringing them to the public as consumer apps. Until then, I've decided to pretend it's private and work on the user experience, so that when the tech is here, it can be shipped fast.
I mean that as an app developer, I wish I could run models as powerful as ChatGPT on consumer hardware. Unfortunately, at the moment that's not possible, despite all the great progress and projects like llama.cpp. However, I can see this tech becoming viable very soon. So, as a developer focusing on consumer apps, but concerned about my own and my users' privacy, I've decided that it's worth using OpenAI for now, for the sake of exploring possible new user experiences and interactions. The moment local tech is viable, I'll be the first to opt out of sending my data to OpenAI and use LLMs locally. Until then, we pretend that one day it's going to be fully private and local, and explore UX/UI with OpenAI at the backend. The best I can do at the moment is respect user privacy myself: the app's backend is designed to handle your queries only briefly in memory, never logging or recording them, only passing them to OpenAI for streaming responses and measuring usage for billing.
To be frank, this makes the website extremely deceptive. That isn't "Private by design" - it's the opposite. You choose to prioritize the better AI assistant _over_ privacy.
The claim that "Your data is never recorded" may even be legally fraudulent. I'm glad that you don't record the data, but OpenAI does - making the claim as written a lie.
I understand why you made these decisions, but the website should not contain borderline false advertising.
You're right, the website should contain a more explicit note that user data is subject to OpenAI's data retention policies. (However, I do mention that "Your data is never recorded, and opted out of future model training according with OpenAI API policies".) OpenAI did change their policies recently, excluding all API data from future model training and promising to delete it within 30 days. I wouldn't call it false advertising, but note taken: users must be told explicitly that their data is sent to OpenAI.