When I tried this in the past, it was non-trivial because the editorial changes are mixed with the technical changes. Reverting the editorial changes broke the technical changes.
Monospace text is objectively less dense, which means you have to move your eyes more. Every eye movement is an opportunity for error. Monospace text only makes sense when seeing exact character counts matters (which it often does in computer code).
One could argue that lower density, as well as standardised widths, significantly reduces the opportunity for error compared to cluttered text that is constantly varying how it is displayed. Perhaps moving your eyes more increases the opportunity for error by 10%, but easier-to-parse characters decrease it by 20%?
You might be comfortable taking that risk yourself, but if you misrepresent your FOSS contributions as your own copyright you impose that risk on third parties. Tricking people into infringing your employer's copyright is asshole behavior.
I'd be surprised if there was any actual burden on the upstream maintainer to care whether I was on my lunch break or whether I was on the clock when I made the fix.
Although in this particular case, I tend to agree with Igor as he was employed as a system administrator not a software developer so it's unlikely that there were any real contractual constraints imposed on him in relation to copyright or invention transfer.
Unlike with fuel, we're not burning the EVs, so even if China cuts off the supply we can keep using the ones we've already got. It would be inconvenient, but not an urgent problem like loss of access to fuel.
Only true for a plug-in hybrid with a series drivetrain (a.k.a. "extended range electric vehicle"). The more common type has two parallel drivetrains linked with clutches, so you still have all the drawbacks of a conventional internal combustion engine drivetrain when you're using it.
> The more common type has two parallel drivetrains linked with clutches, so you still have all the drawbacks of a conventional internal combustion engine drivetrain when you're using it
I don't know about the whole world, but in both the US and Europe nearly half of the hybrids on the road are from Toyota, so unless nearly everything else is two parallel drivetrains linked with clutches, whatever Toyota does is the more common type.
Toyota uses a series-parallel system that works by having a planetary gear system that connects the ICE, a large electric motor, a small electric motor, and a drive shaft all together.
The planetary gear system functions as a power splitting device and a continuously variable transmission. It lets them direct power flow in a bunch of different ways. Here's a summary based on Wikipedia. (MB == the bigger battery, 12V == the regular 12V battery, ICE == the ICE engine, MG1 == the smaller electric motor, MG2 == the larger electric motor):
This is a big part of why Toyota hybrids are at the top of reliability rankings. Compared to a pure ICE vehicle they replace the clutch, the transmission, the starter motor, the alternator, the reverse gear set, and the flywheel with the planetary gear power splitting device, the two electric motors, and electronics. The power splitting device has very few moving parts--just the gears themselves, a pawl that can mechanically lock the gears when parked, and fluid pumps. The gears only move by rotating, unlike in a conventional transmission where they also change position. This makes their hybrids mechanically much simpler than a pure ICE vehicle.
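The power-split behavior falls out of a single linear constraint on the three shaft speeds of the planetary set. A minimal sketch in Python (the tooth counts are the ones commonly reported for early Prius models; treat them, and the function name, as illustrative):

```python
def mg1_rpm(ice_rpm, ring_rpm, sun_teeth=30, ring_teeth=78):
    """Planetary gear speed constraint:
        (S + R) * carrier = S * sun + R * ring
    In Toyota's arrangement the ICE drives the carrier, MG1 sits on
    the sun gear, and MG2 plus the wheels are on the ring gear.
    Solve the constraint for the sun (MG1) speed."""
    return ((sun_teeth + ring_teeth) * ice_rpm - ring_teeth * ring_rpm) / sun_teeth

# Engine off while the car cruises: MG1 just spins backward freely.
print(mg1_rpm(0, 2000))      # -5200.0
# All three shafts at the same speed satisfy the constraint trivially.
print(mg1_rpm(1000, 1000))   # 1000.0
```

By controlling MG1's speed electrically, the system can put the ICE at any speed it likes for a given road speed, which is why the same gear set also acts as a continuously variable transmission.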
The UK is well suited to wind power, already has many wind turbines, and continues to install more. We have a good amount of solar panels too. Renewables provide the majority of electrical power when conditions are good and the share will only increase. Electric vehicles avoid the biggest weakness of renewables (unreliable base load), because they can be set to charge unattended when cheap electricity is available. Electricity suppliers offer variable rate tariffs specifically for electric vehicles.
Once you start running the numbers, the cost of the solar and wind capacity needed to power an electric car is about 10% of the purchase price. And consider that they have a battery that can store a week's worth of energy and spend 95% of the time just sitting. Basically not a problem.
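Running those numbers explicitly (the consumption, capacity-factor, and cost figures below are rough assumptions for illustration, not sourced data):

```python
miles_per_year = 12_000
miles_per_kwh = 3.5                               # assumed EV efficiency
kwh_per_year = miles_per_year / miles_per_kwh     # ~3,430 kWh/year
average_kw = kwh_per_year / 8760                  # ~0.39 kW continuous draw
capacity_factor = 0.25                            # assumed wind/solar blend
nameplate_kw = average_kw / capacity_factor       # ~1.6 kW installed capacity
cost_per_kw = 1_500                               # assumed $/kW installed
print(round(nameplate_kw * cost_per_kw))          # ~2348
```

A couple of thousand dollars of generating capacity against a car costing tens of thousands lands in the neighborhood of the 10% figure above.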
Software moving the mouse cursor is only acceptable when the window is full-screen. If the user makes an application go full-screen, they are opting out of the normal desktop UI conventions. It's expected that full-screen software completely takes over the UI, and there are legitimate uses for moving the mouse cursor in full-screen software, e.g. centering an invisible cursor every frame in a first-person shooter game so endless view rotation is possible. But if it's windowed then it should be impossible.
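The recentering trick amounts to: each frame, read how far the hidden cursor drifted from the screen center, use that drift as rotation input, then warp the cursor back. A toy sketch of the per-frame logic (pure function with hypothetical names; a real game would feed the returned warp position to the platform's cursor-warp API):

```python
def mouselook_step(cursor, center):
    """One frame of FPS-style mouselook with a hidden, recentered cursor.
    Returns the view-rotation delta for this frame and the position to
    warp the cursor back to (the screen center), so the next frame's
    movement is never clipped by a screen edge."""
    dx = cursor[0] - center[0]
    dy = cursor[1] - center[1]
    return (dx, dy), center

delta, warp_to = mouselook_step((970, 530), (960, 540))
print(delta, warp_to)   # (10, -10) (960, 540)
```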
Blender (3D modeling & animation software) implements this cool thing when rotating/resizing objects: if the mouse cursor moves out of the window it reappears on the other side (enabling resizing/rotating ad infinitum).
I think a better way to implement that feature would be a mechanism for programs to temporarily enable off-screen mouse cursors. This should also track the position where the cursor would be if it had been clipped to the screen boundary as normal, and immediately return the cursor to that position when the off-screen mode ends. Note that the OS returns the cursor, not the application, so applications can't abuse this mechanism for repositioning the cursor.
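A sketch of the state tracking that proposal implies (a hypothetical mechanism, not any existing OS API): the OS keeps two positions, the unclipped one applications read while the mode is active, and the normally clipped one it snaps back to when the mode ends.

```python
class OffscreenCursor:
    """Hypothetical OS-side bookkeeping for a temporary off-screen
    cursor mode. 'raw' is the unclipped position the application sees;
    'clipped' is where the cursor would normally be."""
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.raw = (0, 0)
        self.clipped = (0, 0)

    def move(self, dx, dy):
        x, y = self.raw
        self.raw = (x + dx, y + dy)
        cx, cy = self.raw
        self.clipped = (min(max(cx, 0), self.width - 1),
                        min(max(cy, 0), self.height - 1))

    def end_offscreen_mode(self):
        # The OS, not the application, snaps the cursor back, so apps
        # can't abuse the mode to reposition the cursor arbitrarily.
        self.raw = self.clipped
        return self.raw
```

The key property is that ending the mode is idempotent and application-independent: whatever the app did with the unclipped position, the visible cursor lands where normal clipping would have left it.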
It's better because it's the minimum change to mouse cursor behavior that allows the feature to work. You don't need to see the cursor while it's off-screen because the point is to manipulate the 3D object, and you can look at the 3D object instead. The same is true for things like controls in an audio DAW, which might also benefit from off-screen mouse movement.
If there's really a case where you need to see the exact position of the cursor while it's off-screen, you could display it wrapped around only while it's actually off-screen. But this would potentially confuse new users, so it should be optional and disabled by default.
> You don't need to see the cursor while it's off-screen because the point is to manipulate the 3D object, and you can look at the 3D object instead.
Disagreed. Seeing the cursor at all times gives you some point of reference, and once you release the tool, you know where your cursor is.
> If there's really a case where you need to see the exact position of the cursor while it's off-screen, you could display it wrapped around only while it's actually off-screen.
I don’t understand what this means. If it’s not off-screen then it’s automatically also not wrapped around.
> But this would potentially confuse new users, so it should be optional and disabled by default.
This presumes that “cursor is suddenly allowed to be off-screen and not visible” is less confusing.
>Seeing the cursor at all times gives you some point of reference, and once you release the tool, you know where your cursor is.
Seeing is an inferior means of knowing where the cursor is compared to intuition. When I move the cursor, I know where it is with no conscious effort because I treat it as part of my hand. I disable mouse acceleration to make this easier. I don't need to look at my hand to know where my hand is. My subjective experience of mouse clicking is the same: I look at the target and the mouse cursor automatically appears there. If you allow software to move the mouse cursor you weaken this intuition.
>I don’t understand what this means. If it’s not off-screen then it’s automatically also not wrapped around.
When the cursor moves off-screen, it could be displayed at its position modulo the screen width/height. Additionally, the cursor shape could be changed to make it obvious it's not the true position. This might make sense if you really need to know the exact off-screen position and the GUI control you're manipulating doesn't provide sufficiently precise feedback.
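The wrapped display position is a one-liner; Python's `%` already returns a non-negative result for a positive modulus, so the same expression handles cursors off either edge:

```python
def wrapped_display_pos(x, y, width, height):
    # Where to draw an off-screen cursor, wrapped around the screen.
    return x % width, y % height

print(wrapped_display_pos(-10, 500, 1920, 1080))    # (1910, 500)
print(wrapped_display_pos(2000, 1200, 1920, 1080))  # (80, 120)
```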
>This presumes that “cursor is suddenly allowed to be off-screen and not visible” is less confusing.
It is less confusing because other than extending the range of the mouse off-screen, the mouse behavior doesn't change. As soon as the off-screen action finishes, the mouse cursor snaps back to the position it would have otherwise been in.
An alternative option would be to snap back to the position the cursor was in when the special off-screen mode was initiated. This might actually be better, because it makes the off-screen mousing mode an extension of moving the mouse while it's lifted off the mouse pad, which users already have intuition for.
> Seeing is an inferior means of knowing where the cursor is compared to intuition. When I move the cursor, I know where it is with no conscious effort [...]
Whether you realize it consciously or not, visual feedback is a critical part of this loop.
> As soon as the off-screen action finishes, the mouse cursor snaps back to the position it would have otherwise been in.
The cursor jumping to the edge of the screen, which is not somewhere the user ever saw it and may be outside of the application, seems worse than any current issue while still being insufficient for most legitimate use-cases.
I don't really see any fake cursor approach that isn't going to behave awkwardly in practice - e.g: is it your real (invisible?) cursor or fake cursor that can click to focus another application, and what happens to your cursors when you do so?
Just letting the user deny mouse control for an app (like on Wayland) seems sufficient to solve your annoyance. Maybe adding a separate permission for control while unfocused, since that's rarer. No need to break all windowed applications with reason to capture/move the mouse.
I generally never want programs to go fullscreen because I like to keep taskbar shown, so I can keep track of time, notifications and whatnot.
Well-designed video games that rely on fast and precise mouse input capture the cursor during gameplay until a menu is shown.
The only times I have to go fullscreen are for games that fail to capture the cursor, where accidentally clicking outside of the game window leads to a loss.
Can't imagine a non-game program other than a video player that I would want fullscreen.
> But if it's windowed then it should be impossible.
I have one monitor, so fairly often have games/editors windowed with something else alongside them (a video, documentation, …). There are also uses where the mouse is only captured temporarily - like FPS-controls flying mode in Godot and Blender. Some image editors also allow for things like moving the cursor with arrow keys, which I find useful.
> But if it's windowed then it should be impossible.
I worked on several apps for the visually impaired that automatically move the mouse cursor to different UI elements in the front-most application, regardless of the window state. It’s a good reminder that “impossible” often just means “I haven’t accounted for that use case yet.”
If it's part of the OS's standard accessibility framework then it's acceptable. The important point is that applications shouldn't be able to arbitrarily move the mouse in situations when it's unexpected.
Coming from Linux, the accessibility framework is just another series of programs. My main a11y program is a tiny little binary that uses the keyboard to move the mouse around at will; I certainly don't want the system to try and restrict that.
You are arguing for uniformity. It does make a lot of sense: the global UI makes a considerable effort to build a single perfect UI, but that can only work if the apps actually make use of it.
But why shouldn’t the global UI itself make use of mouse warping?
> The important point is that applications shouldn't be able to arbitrarily move the mouse in situations when it's unexpected.
That is quite a different statement from "It should be impossible." What should be impossible is for the OS to prevent this type of usage when it is clearly useful. Beyond accessibility, I use these features to automate testing of native macOS GUI apps.
Character counting errors are a side effect of tokenization, which is a performance optimization. If we scaled the hardware big enough we could train on raw bytes and avoid it.
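A toy illustration of why tokenization hides character counts (a greedy longest-match tokenizer over a two-word vocabulary, not real BPE): the model receives two opaque symbols where a byte-level model would see every character.

```python
def greedy_tokenize(text, vocab):
    """Toy greedy longest-match tokenizer (illustrative only)."""
    tokens, i = [], 0
    while i < len(text):
        for end in range(len(text), i, -1):
            if text[i:end] in vocab:
                tokens.append(text[i:end])
                i = end
                break
        else:
            tokens.append(text[i])  # fall back to single characters
            i += 1
    return tokens

tokens = greedy_tokenize("strawberry", {"straw", "berry"})
print(tokens)                             # ['straw', 'berry'] -> 2 symbols
print(len("strawberry".encode("utf-8")))  # 10 bytes a byte-level model sees
print("strawberry".count("r"))            # 3
```

To count the r's, a token-level model has to have memorized the spelling of each token; a byte-level model can count them directly, at the cost of sequences several times longer.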