When it comes to UI, most of us know native wins on feel and simplicity. Cross-platform UI tends to create new headaches rather than solve them. Still, having more options like this pushes the ecosystem forward. Let's just hope the platform docs stay sharp and open for devs who want to build confidently.
I think some of the React vs Backbone debate misses how web projects often evolve in unpredictable ways. Most 'tiny' apps pick up complexity as features are added, so it's useful to build on a platform that scales smoothly, encourages best practices, and grows with you.
React has become the platform it is because teams can reliably ship, maintain the codebase, and onboard new folks. Preferring the 'right tool for the job' is fine, but in real life that means sticking to tools that won't bite you a year later.
Open source drive tools live or die on three things.
1) Simple sync that never surprises.
2) Clean conflict handling you can explain to a non-technical friend.
3) Zero-drama upgrades.
If Twake nails those and keeps a sane on-prem story with S3 and LDAP, it has a shot. The harder part is trust and docs: a clear threat model, crisp migration guides from Drive and Dropbox, and a tiny CLI that just works on a headless box. Do these and teams will try it for real work, not just weekend tests.
I'd add a fourth: "Make it easy to do backups and verify they're correct".
I don't think I've ever considered a data store without that being one of my top concerns. The anxiety comes from real-life experience: the business I worked at had backups enabled for the primary data store for years, but when something finally happened and we lost some production data, we quickly discovered the backups weren't actually restorable and had been corrupted the whole time.
Heh - I once made a little chunk of change because a former client from ten years prior discovered their shiny "DVD/CD" backups had succumbed to bit-rot and they needed some source code.
I grabbed the hard drive off the shelf, put it in an enclosure, and handed them the source code... (At the time, every time I upgraded my system I would just keep my old drives, so I had a stack of them: buy a new external enclosure, slot it, and park it.)
Depends. Even something basic like "check that the produced artifact is a valid .zip/.tar.gz" can be enough in the beginning, and probably would have prevented the issue I shared above.
Then once you grow or need higher reliability, you can start adding more advanced checks, like verifying it has the tables and data structures you expect, and so on.
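As a minimal sketch of that first tier of checking, in std-only Rust (the path and the size floor are made up for illustration): verify the artifact at least starts with the gzip magic bytes and isn't suspiciously small. A real check would go further and decompress the stream end to end, or do a trial restore.

```rust
use std::fs::File;
use std::io::Read;

// Cheapest-possible sanity check for a .tar.gz backup artifact:
// does it start with the gzip magic bytes and have a plausible size?
// This won't catch deep corruption, but it flags empty or
// wrongly-written files immediately.
fn looks_like_gzip(path: &str) -> std::io::Result<bool> {
    let mut f = File::open(path)?;
    let mut magic = [0u8; 2];
    f.read_exact(&mut magic)?;
    // Every gzip stream begins with 0x1f 0x8b. The 1 KiB floor is an
    // arbitrary illustrative threshold; tune it to your data.
    Ok(magic == [0x1f, 0x8b] && f.metadata()?.len() > 1024)
}

fn main() -> std::io::Result<()> {
    // Hypothetical path, for illustration only.
    if !looks_like_gzip("/backups/db-latest.tar.gz")? {
        eprintln!("backup artifact failed basic sanity check");
        std::process::exit(1);
    }
    Ok(())
}
```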
I had a funny one: I somewhat regularly test an SQL backup, and one day the restore didn't work. It worked the second time, and the third, and the fourth. I have no idea why it failed that once. It turned into a permanent background process in the back of my head, an endless what-if loop.
I'm not sure what your point is. Business continuity requires a disaster recovery plan that must be tested regularly. It might be considered slog work, but like taking out the garbage, it's non-negotiable and must be done.
"Great, first you wanted more money to buy compute and storage for dev and staging separate from production, and now you even more for 'testing backups'?!"
I'd like a manual "sync now" option. Sometimes I put stuff in Google Drive using Windows Explorer and it's not immediately obvious whether it's syncing, why it is or isn't, or what I need to do to make it sync.
I've got a theory that progress bars for core functionality, and the manual triggers that go with them, are out of favor in modern software because they create a stage for an error to be displayed and set expectations the customer can lean on. Less detail in the errors shown to customers removes their ability to identify a software problem as unique to them or shared among others.
I think you're right, and I think I insufficiently considered malice as the reason for a lot of this type of minimalism. The generic "Something Went Wrong" message is great for the vendor, because it doesn't even hint at whether the problem is with the server (all the vendor's fault), the network (not the vendor's fault), or the client (maybe the vendor's fault, maybe the customer just needs to update). Users can only do brute-force things like swiping the app away and reopening it, and eventually they just give up.
Syncing should be under the user's control: the user should be able to trigger or abort a sync, and there should be some sort of progress indicator.
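As a hedged sketch of what that could look like under the hood (nothing here reflects any real client's API; all names are made up): a background sync worker that reports progress over a channel and can be aborted by the user at any point.

```rust
use std::sync::mpsc;
use std::thread;

enum Progress {
    Synced { done: usize, total: usize },
    Finished,
    Aborted,
}

// Spawn a sync worker; return a progress receiver for the UI and an
// abort sender the user can trigger.
fn start_sync(files: Vec<String>) -> (mpsc::Receiver<Progress>, mpsc::Sender<()>) {
    let (progress_tx, progress_rx) = mpsc::channel();
    let (abort_tx, abort_rx) = mpsc::channel::<()>();
    let total = files.len();
    thread::spawn(move || {
        for (i, _file) in files.iter().enumerate() {
            // Check for a user-initiated abort before each file.
            if abort_rx.try_recv().is_ok() {
                let _ = progress_tx.send(Progress::Aborted);
                return;
            }
            // ... actually upload `_file` here ...
            let _ = progress_tx.send(Progress::Synced { done: i + 1, total });
        }
        let _ = progress_tx.send(Progress::Finished);
    });
    (progress_rx, abort_tx)
}

fn main() {
    let (progress, _abort) = start_sync(vec!["a.txt".into(), "b.txt".into()]);
    for event in progress {
        match event {
            Progress::Synced { done, total } => println!("synced {done}/{total}"),
            Progress::Finished => println!("all files synced"),
            Progress::Aborted => println!("sync aborted by user"),
        }
    }
}
```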
Seeing frameworks like this pop up reminds me how much the LLM ecosystem is moving toward more modular and hardware-aware solutions. Performance at lower compute cost will be key as adoption spreads beyond the tech giants.
Curious to see how devs plug this into real-time apps; so much room for lightweight innovation now.
It's amazing how much DIY problem-solving comes out of necessity. Always interesting to see basic household tech getting repurposed in creative ways.
An interesting consideration is how the chosen modularization approach can impact onboarding time for new contributors. A well-structured breakdown might not just aid initial development speed, but also reduce ramp-up friction for future team members or external collaborators. This is an important factor that's often underestimated in solo-driven projects.
I love how it shows that you can have both performance and safety without jumping through hoops. The little details of memory management really shape how we end up writing code, often without even noticing. Rust makes it possible to get close to the hardware while still keeping you out of the usual trouble spots. It's one of those things you appreciate more the longer you use it.
Erm, what? The article contradicts this, so I'd push back on "without jumping through hoops". The article itself demonstrates six layers of abstraction (Vec<T> -> RawVec<T> -> RawVecInner -> Unique<u8> -> NonNull<u8> -> *const u8) built on unsafe primitives. The standard library does the hoop-jumping for you, which is valuable, but the hoops exist; they're just relocated.
I bet Vec's implementation is full of unsafe blocks and careful invariant management. Users face different hoops too: lifetime wrangling, fighting the borrow checker on valid patterns, Pin semantics, etc. Rust trades runtime overhead for upfront costs: compile-time complexity and developer time wrestling with the borrow checker. That's often the right trade, but it's not hoop-free.
The "close to hardware" claim needs qualification too. You're close to hardware through abstractions that hide significant complexity. Ada gives you runtime checks, and SPARK (a subset of Ada) adds formal correctness guarantees at the cost of proof effort. C gives you actual hardware access but manual memory management. Each has trade-offs; Rust's aren't magically absent.
I had no idea, that is wild, especially considering how loudly everyone has been proclaiming "Rust is safe". Thanks for the info though.
One should wonder what else is in there... but they don't wonder; they're sadly rigid in their thinking that Rust is the perfect memory-safe language with zero runtime overhead.
It's not real unsoundness, because it only applies within the private implementation details of Vec<T>: you still can't cause UB from outside code. But it is a real oversight, and it shows how hard writing unsafe code is.
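A tiny sketch of why privacy is what contains the blast radius here (toy types, not Vec itself): the invariant lives behind private fields, so outside code can't break it even though a bug inside the module could.

```rust
mod buf {
    pub struct ToyBuf {
        data: Vec<u8>,
        len: usize, // invariant: len <= data.len()
    }

    impl ToyBuf {
        pub fn new() -> Self {
            ToyBuf { data: vec![0; 8], len: 0 }
        }

        pub fn get(&self, i: usize) -> Option<u8> {
            // Sound only while the invariant above holds; code
            // *inside* this module could break it, outside code can't.
            if i < self.len { Some(self.data[i]) } else { None }
        }
    }
}

fn main() {
    let b = buf::ToyBuf::new();
    // b.len = 100; // rejected by the compiler: `len` is private
    assert_eq!(b.get(0), None);
}
```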
I come from a Swift/Kotlin background and I've been learning Rust for fun in my spare time.
From what I heard online, I was expecting it to be a lot harder to understand!
The moving/borrowing/stack/heap stuff isn't simple by any means, and I'm sure it will get harder as I go, but it's just not as daunting as I'd expected.
It helps that I love compiler errors and Rust is full of them :D Every error the compiler catches is an error my QA/users don't
The language and associated tooling keep improving.
Over the course of the last decade I've made several attempts to learn Rust, but I was always frustrated by code that reasonably should have compiled but didn't, and I had to run the build to even find that out.
Now we have rust-analyzer and non-lexical lifetimes, both tremendously improving the overall experience.
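For instance, a minimal illustration of what NLL changed:

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0];
    println!("{first}");
    // Under the old lexical borrow checker, the borrow held by `first`
    // lasted until the end of the scope, so this push was rejected.
    // With non-lexical lifetimes the borrow ends at its last use
    // above, and this compiles.
    v.push(4);
}
```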
I still don't enjoy the fact that borrows are at the struct level, so you can't just borrow one field (there's even a discussion on that somewhere in Rust's repo), but I can work around it.
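A minimal sketch of that struct-level limitation, with hypothetical types:

```rust
struct State {
    items: Vec<i32>,
    total: i32,
}

impl State {
    fn total_mut(&mut self) -> &mut i32 {
        &mut self.total
    }
}

fn main() {
    let mut s = State { items: vec![1, 2, 3], total: 0 };

    // Rejected: the method borrows *all* of `s` mutably, even though
    // it only touches `total`, so iterating `s.items` conflicts:
    // for x in &s.items { *s.total_mut() += *x; }

    // Workaround: borrow the fields directly; within one function
    // body the compiler does track disjoint field borrows.
    for x in &s.items {
        s.total += *x;
    }
    println!("{}", s.total);
}
```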
To drive the point home: I'm a frontend developer. This is a very different environment compared to what I'm used to, yet I can be productive in it, even if at a slower pace in comparison.
Rust is not the most productive language by any means... it is hardened AF if you avoid unsafe, but productive?
I can code much faster in almost any language compared to Rust; it creates mental overhead. To compare it to its siblings: Swift and C++ (yes, even C++, given a bit of knowledge of good practices) let you produce stuff more easily than Rust.
It's just that Rust, compared to C++, comes extra-hardened. But now go refactor your Rust code once you've started adding lifetimes... things get coupled quickly. It is particularly good at hardening and particularly bad at prototyping.
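A small example of that coupling, with made-up types: one borrowed field, and the lifetime parameter climbs through every struct that contains it.

```rust
// One field borrows, and the lifetime parameter spreads upward:
struct Parser<'a> {
    input: &'a str,
}

struct Session<'a> {
    parser: Parser<'a>, // now infected
}

struct App<'a> {
    session: Session<'a>, // and so on, all the way up
}

impl<'a> App<'a> {
    fn new(input: &'a str) -> Self {
        App { session: Session { parser: Parser { input } } }
    }
}

fn main() {
    let text = String::from("fn main() {}");
    let app = App::new(&text);
    println!("{}", app.session.parser.input);
}
```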
Those two seem to be at odds when you have borrowing without a garbage collector, since you need to mark the lifetime in some way.
Rust is not that bad at prototyping; you just need to use the right feature set when writing quick prototype code. Don't use lifetimes; use clone() and Rc/Arc freely to get around borrow-checker issues. Use .unwrap() or .expect("etc.") whenever you're not sure how to pass an error back to the caller. Experiment with the "Any" trait for dynamic typing and downcasts, etc. The final code will still be very high-performance for prototype code, and you can use the added boilerplate to guide a refactoring into a "proper" implementation once the design stabilizes.
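A quick sketch of that prototype-mode style (all names illustrative):

```rust
use std::rc::Rc;

#[derive(Clone)]
struct Config {
    url: String,
}

fn main() {
    // Share with Rc instead of threading lifetimes through everything.
    let config = Rc::new(Config { url: "http://localhost".into() });
    let for_worker = Rc::clone(&config);

    // Fail fast instead of designing error types up front.
    let retries: u32 = "3".parse().expect("retries must be a number");

    // Clone freely; optimize later once the design stabilizes.
    let url_copy = for_worker.url.clone();
    println!("{url_copy} x{retries}");
}
```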
In fact, most of the time I would favor styles like this even outside of prototyping.
In the end, refactoring is natural anyway. I would reserve lifetimes for the very obvious cases with controlled propagation, or for performance-sensitive spots.
> It helps that I love compiler errors and Rust is full of them :D Every error the compiler catches is an error my QA/users don't
amen! I despise Python even though I used to love it, and that's because it's full of runtime errors and unchecked exceptions. The very instant I learned more strongly typed languages, I decided never to go back.
And then comes the point where you hit a scenario that is, you'd think, actually something simple, but turns out to be really tough to express in a strongly, statically typed language, and you take a step back and consider for a moment how simple it would be to express in a language like Python, while still being memory safe.
Static typing is great, and I enjoy it too when it's practical, but I don't enjoy it when it becomes more complicated than the actual thing I want to express, due to how the type system of that particular language, or third-party tool in Python's case, works.
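To make that concrete with a hedged, made-up example: a heterogeneous nested value is one literal in Python, while a statically typed language makes you name the shape first.

```rust
// The Python one-liner `value = ["a", 1, ["b", 2]]` needs an explicit
// recursive type before you can even write the data down:
enum Value {
    Str(String),
    Num(i64),
    List(Vec<Value>),
}

fn main() {
    let _value = Value::List(vec![
        Value::Str("a".into()),
        Value::Num(1),
        Value::List(vec![Value::Str("b".into()), Value::Num(2)]),
    ]);
}
```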
The new foundation could be a turning point for React, but whether it truly decentralizes decision-making depends on how governance works in practice, not just on the list of corporate sponsors. Open source foundations have helped some projects thrive by formalizing community input, but they can also be slow to adapt if board dynamics favor stability over innovation. The real question is whether small developer voices and radical ideas will shape React's future, or if practical influence stays with the largest sponsors. Compared to one company's oversight, a well-run foundation can make React less vulnerable to a single vendor's agenda—but only if its structures actively foster broad participation and accountability. We'll see if React's evolution speeds up or settles into consensus-driven conservatism.
Are you implicitly complaining that React is not moving fast enough? What the JS ecosystem needs is for some big players to CHILL a bit and take backwards compat more seriously.
This looks nice. WinBoat gives teams a simple way to put everyone on Linux without losing access to Windows apps when needed. There's no need for fancy cloud setups or switching between lots of devices: just one system and quick access to what works.
Onboarding is easier for everyone, and IT does less work with only one setup to care for. This means companies can pick what’s best without making things messy or complicated.
For what you're describing, there are already enterprise-grade solutions that are even simpler and more robust, such as Azure Virtual Desktop with RemoteApps, and the even more mature and battle-tested Citrix XenApps / Cloud.
Like for my work, I use a Linux laptop, and access our Windows-only apps and environments via Citrix and it works really well. And a good chunk of our apps are cloud-based anyways so we just need a web browser to access them.
I also own a MacBook and have an Android phone, and I can access my work environment from all my devices. So at least for our workplace, the end-user OS has been largely irrelevant.
This reads like AI and almost none of it makes sense. Why would a Windows desktop fleet be more heterogeneous than a Linux one? Why would Linux be an easier onboarding experience?
Orgs use Windows because non-technical users expect it and execs don't get fired for choosing Microsoft.
Onboarding non-technical people to a computer where half the apps run in a different, virtualized OS, with all the things that can go wrong with that, sounds like a nightmare.
How are you gonna justify buying a Windows license for each user and then not just using that, instead forcing them onto some interface they're unfamiliar with?
I get the vision, but ultimately, if they need to run Windows apps for work, just have them run Windows.
There are places where people should consider Linux, but this isn't one of them.
Coding agents tend to assume that the development environment is static and predictable, but real codebases are full of subtle, moving parts - tooling versions, custom scripts, CI quirks, and non-standard file layouts.
Many agents break down not because the code is too complex, but because invisible, "boring" infrastructure details trip them up. Human developers subconsciously navigate these pitfalls using tribal memory and accumulated hacks, while agents bluff through them until confronted by an edge case. This is why even trivial tasks intermittently fail with automation agents: you're fighting not logic errors but mismatches with the real, lived context. Upgrading this context-awareness would be a genuine step change.
Yep. One of the things I've found agents always have a lot of trouble with is anything related to OpenTelemetry. There's a thing you call that uses some global somewhere, there's a docker container or two, and there are timing issues. It takes multiple tries to get anything right. Of course this is hard for a human too if you haven't used otel before...