
I think stacking windows look better and make for a cool screenshot when trying to sell the thing. But tiling windows are more ergonomic for actually using the infernal machine.

For me the revelation was that I have never said "Oh boy, I sure am glad this window partially overlaps this other window." I either want one full-screen window or a few windows side by side, so why do I have to handle this myself? And so I went to the dark side, a tiling window manager. To the point that it really chafes now when I use stacking windows; it feels like I spend most of the time shuffling windows around.

To ease the overlapping-window pain, many Linux window managers have a feature where the focused window does not have to be the top window, and this makes things a lot better: you can be looking at the top window while typing/clicking on the partially obscured bottom window.
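
Many WMs expose this as a "focus follows mouse, no auto-raise" setting; for illustration, a tiny sketch of scripting the same behavior by hand (assuming X11 and the xdotool utility):

    import subprocess

    # Click-select a (possibly obscured) window, then hand it keyboard focus
    # without raising it above the windows that overlap it.
    win = subprocess.check_output(["xdotool", "selectwindow"]).decode().strip()
    subprocess.run(["xdotool", "windowfocus", win], check=True)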


I think stacking windows make more sense in the context of the pre-OS X Macintosh UI. The Mac was built entirely around the concept of spatial manipulation. When you opened a folder, it would remember the exact position and size of its window, where all of the icons were, where your scroll bar was, whether the folder was set for icon, list, or detail view, etc. Every window was permanently and unambiguously associated with a single folder. This made navigating the computer possible entirely through muscle memory. Just like you know where all the light switches and doorknobs are in your house after a few months, you would gain the ability to navigate through files on your computer extremely quickly because when you double click on a folder you already knew where it would open and what would show up in the window. Instead of remembering a file path, you would remember a series of mouse motions to get where you wanted with very little conscious thought. Obviously this workflow isn't suitable for everyone, but a lot of people enjoyed it and I think it's a shame that Apple decided to throw it out for Mac OS X.

Another feature that gets a lot of flak from some Linux users is desktop icons. This is something else that a lot of UIs screwed up (maybe stemming from Windows 95? I'm not sure). The classic Mac UI let you drag whatever files you were currently working on to the desktop and do whatever you needed to do. Then when you were done with the files, you could highlight them and select "Put Away" and they'd all get moved back to their original locations. The desktop was a temporary space for what you were actively working on, not a giant catch-all storage location like how modern UIs treat it.


The primary value of overlapping windows is spatial memory: you remember where a given window is positioned on a 2D surface. The moment I grasped this I had the “oh boy I sure am glad this window partially overlaps this other window.”

(At one point, I worked on a single desktop with around 20 windows, no dock, just windows, on my 14in MacBook at 125% DPI scaling. Too much, but possible. Now I keep only 6-7 windows.)

This is not to say that dynamic window management is worse. Far from it. But it excels at dynamic, rapidly changing environments, where at almost any given moment something is either opening, closing, or changing its dimensions. This is usually the case with specialized programs like web browsers or IDEs, but not with the main system WM.

The main problem is that overlapping windows and automatic window management are incompatible. The former assumes that the user sets the dimensions and is always right, which makes the latter powerless to follow any efficient algorithm. To give an example, if you manage your windows with a dock and “maximize” button, they’d break overlapping patterns.

> I either want one full-screen window or a few windows side by side.

You’re not wrong to work like this, but it may be a byproduct of modern hybrid systems making it harder to fully internalize the overlapping windows concept.


Most of the time, I want the active application window in the middle of the screen, but not necessarily filling the whole screen or the whole height, and also not necessarily centered. The window position and size depends on its contents, what sidebars it has, and so on. This inherently leads to overlapping windows. I use a tool that automatically moves and resizes windows to the application-specific desired position, while also having the ability to arrange a split-screen view using keyboard shortcuts when needed.

To be fair, in the era when resolutions like 512x342 or 640x480 were common, overlapping windows were quite useful.

Focus management is such a tricky beast.

It's the duality of user interface design. Two forces at war with each other.

The professional interface is a complete mess: flat not nested, functionality duplicated all over the place, widgets strewn across the screen like a toddler just got done playing with Legos. Exactly what one needs when they will be working with it for hours on end.

Contrast that with the casual interface: nested, one way to do things, neat compartments for everything. What is needed to gently guide the user through an unfamiliar task they may only do once a year.

And this is ignoring the dark side, the "designer" interface, where it just has to look good, functionality be damned. Take note: the big lie about design is that it exists in a vacuum, that there can be an independent design title. Real design is fundamentally a holistic process that has to consider and integrate all aspects, including deep engineering. A real designer is an engineer with taste, a rare find to be sure.


And then the professional never uses it anyway because he knows all the shortcut keys!

> The professional interface is a complete mess: flat not nested, functionality duplicated all over the place, widgets strewn across the screen like a toddler just got done playing with Legos. Exactly what one needs when they will be working with it for hours on end.

Neovim users disagree.


It's a flat (technically modal, but that does not make it more casual) interface with everything as invisible hotkeys and a near-command-line interface (the Legos all over the place; actually, in this case a better analogy would be Legos all over the place under water in a bathtub).

No, it fits.


Obviously (/s) the solution is to change to a sunset-centered day: the new day starts at sunset so people can get up late and enjoy the maximum number of daylight hours.

I always find it strange how particular people are about the numbers attached to a purely astronomical phenomenon (myself included, but I am pretty hard in the "let the sun figure it out" camp). If they want more "daylight" hours, then get up at a time to enjoy them. But people would rather bend over backwards fiddling with the numbers, as if that is going to change how long a day is.


The problem is that work does its best to capture all of my daylight hours.

I think that midnight should be around current 4AM because that's the brief moment when party people are already asleep and work people aren't awake yet.

Does the night belong to the day it follows or the day it precedes?

Does it become Friday at dawn, at sunset, at noon, or at midnight?

This is all convention and not something that can be decided objectively.


The new day ought to begin at the darkest hour, opposite of high noon, which is apparently called 'solar midnight'.
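
For what it's worth, a rough sketch of where solar midnight falls, from longitude alone (this ignores the equation of time, which shifts true solar time by up to ~16 minutes):

    # Approximate solar midnight in hours UTC: solar noon comes 1 hour
    # later for every 15 degrees of longitude west of the prime meridian.
    def solar_midnight_utc(longitude_deg: float) -> float:
        solar_noon = 12.0 - longitude_deg / 15.0
        return (solar_noon + 12.0) % 24.0

    print(solar_midnight_utc(-122.4))  # San Francisco: ~8.2h UTC, i.e. ~00:10 PST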

It is probably true, but it is also a useless statistic; let me explain.

8 billion people, average lifespan 70 years: 115 million people die each year. Percentage in capitalist economic zones... no clue, does China count? But probably 80 to 90%, so about 10 years for capitalism to cause a billion deaths.
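
A quick back-of-envelope check of those numbers (every input is a guess; the 85% is just the midpoint of that 80-90% range):

    population = 8_000_000_000
    lifespan_years = 70
    deaths_per_year = population / lifespan_years     # ~114 million/year
    capitalist_share = 0.85                           # guessed midpoint
    attributed = deaths_per_year * capitalist_share   # ~97 million/year
    print(f"one billion in ~{1e9 / attributed:.1f} years")  # ~10.3 years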


What a ridiculous law; it smells of some sort of frog-boiling scheme to me.

Step 1: "Let's see if we can get away with imposing a small, easy requirement, you know, 'think of the children.'"

Step 2: "Now that we have a foot in the door, let's see if we can get some real tracking in place, for the children of course."

Anyhow: as far as I can tell, compliance on Linux would be as simple as

    echo $YEAR_BORN > ~/.config/ca_ab_1043
It's an accessible interface (it is the same user interface many Linux programs use), applications can use a well-known API to access the data (the common Unix filesystem interface), and it only presents the minimum needed information to the application.
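
For illustration, a minimal sketch of the consuming side of that scheme (the path and one-line format are the hypothetical above, and the bracket cutoffs are invented; the actual law specifies neither):

    from datetime import date
    from pathlib import Path

    # Reduce the declared birth year to a coarse age bracket, which is
    # closer to the minimum an application should actually see.
    def age_bracket(path=Path.home() / ".config" / "ca_ab_1043") -> str:
        year_born = int(path.read_text().strip())
        age = date.today().year - year_born
        if age >= 18:
            return "18+"
        return "13-17" if age >= 13 else "under-13"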

The requested info is age range, not actual year born. So that's actually giving too much info, and could be breaking the law. Especially because 18+ is a completely valid range.

I have a theory that the Renaissance, and perhaps more critically the Industrial Revolution that followed, was in large part driven by coffee.

The Middle Ages: things are a bit sleepy, dopey. Everybody is drinking beer all the time. Progress runs at a slow pace.

Then there is this popular new brew sweeping the scene, and boy howdy does it get you up and going. Now people are waking up and doing things.

Caffeine. It's a hell of a drug.


It’s more accurate to say that the “modern era” (1600s and onwards, the Enlightenment, etc.) was boosted by coffee, because the Renaissance was largely over by the time the bean arrived from Arabia.

Definitely a lot of modern ideas and institutions had their origins in coffee shops, though.


> Definitely a lot of modern ideas and institutions had their origins in coffee shops, though.

There are accounts of discussions between Robert Hooke, Edmund Halley, and Isaac Newton in a London coffee house. It's a wine bar now and not notably highbrow :)


Lloyd's, the insurance company, was founded as a coffee house.


Erdos famously took amphetamines his whole life, and they made him fabulously productive:

> In 1979, Graham bet Erdös $500 that he couldn't stop taking amphetamines for a month. Erdös accepted the challenge, and went cold turkey for thirty days. After Graham paid up--and wrote the $500 off as a business expense--Erdös said, "You've showed me I'm not an addict. But I didn't get any work done. I'd get up in the morning and stare at a blank piece of paper. I'd have no ideas, just like an ordinary person. You've set mathematics back a month." He promptly resumed taking pills, and mathematics was the better for it.

I think about this a lot. I drink a lot of coffee and I feel reasonably productive. But hey, maybe I should try something a bit stronger...


You’re in good company. Tom Standage makes the same argument in his book A History of the World in Six Glasses.

https://www.goodreads.com/book/show/3872.A_History_of_the_Wo...


> I have a theory that the Renaissance, and perhaps more critically the Industrial Revolution that followed, was in large part driven by coffee.

Don't forget the concentrated wealth created during the Trans-Atlantic slave trade through the use and selling of slaves by the Portuguese between Africa and South America


Now the curious thing will be if people attribute the rapid pace of technological development in this new century to the advent of widespread amphetamine. A large number of Stanford students are on it, and many other top universities are likely similar.

To get the coffee and other things, European men had to be sent out on ships to rape the world, and they would only do that if they were drunk. The Renaissance and the Industrial Revolution were built on the spoils of exploitation, of which coffee was one.

Yes, I've been thinking this as well. Although earlier civilisations probably also consumed lots of stimulants: Mayas, Incas, probably countless more.

Plus, plenty of humans, maybe half, do not respond to caffeine in positive ways. While one half are evangelical, the rest manage with water.

True

I had wondered about the same for nicotine, being a neurostimulant.

Turns out Otis Redding was singing about the Renaissance in "Cigarettes and Coffee".

It gets you up and going until you build tolerance; then it becomes a need.

Nah, coffee really didn't do much for me; I started drinking daily at 30.

And your parents could? Maybe coffee improved their cognitive functions so you were born smart.

Now I am curious. Understand that I am from the States and consequently have zero intuition as to what a VAT is. But... the hard drive importer is directly using the HDDs and as such is not adding any value to the item, so why are they paying a value-added tax?

If I had to guess it is probably on the value that could have been added to the item.


It’s just the name for sales tax. Why is there a tax on sales, isn’t a sale a discount? Then is the sales tax negative because it’s the tax on the difference between the full price and the discounted price? You’d probably end up with a refund for buying the thing, unless your state has no sales tax.

Sales tax is actually very different, because it is usually either cumulative (added at each step of the chain) or charged only at the last one, whereas VAT is deducted at all but the last step of the chain.

Yea, the idea is that the VAT effectively taxes the added value in each step of the value chain, because there's a limit to how much you can charge for an item or service. E.g. a 25% VAT does not necessarily mean the goods become 25% more expensive; most of those 25% would have been profit for the reseller, intermediaries, and manufacturer if it were not for the VAT. Perhaps a little counter-intuitively, a high VAT keeps prices down and business efficient, because every intermediary is indirectly taxed even though the VAT is only charged to the final consumer.
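
A toy example of that deduction chain, with made-up prices and a 25% rate (not any particular country's rules):

    # Each party charges VAT on its sale but deducts the VAT paid on its
    # inputs, so each remits tax only on the value it added.
    RATE = 0.25
    chain = [("manufacturer", 100.0), ("wholesaler", 160.0), ("retailer", 200.0)]
    prev = 0.0
    for name, price in chain:
        remitted = (price - prev) * RATE
        print(f"{name}: adds {price - prev:.0f}, remits {remitted:.2f}")
        prev = price
    # Total remitted: 200 * 0.25 = 50, all ultimately paid by the consumer.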

TVs are optimized around decoding video; at least they can generally do this at full speed. This is coupled with the cheapest CPU the manufacturer can find. Even this would be manageable; there have been great UIs on weaker hardware. But then they want to program everything in html/javascript/css 7-layer lasagna stacks, and this is where things start to get bad. Then the marketing team gets their slimy hands in and proceeds to stuff the telemetry in until full. It is still "technically" usable, but nobody is enjoying the experience. Package it up and sell it to some rube as a "Smart" TV.

> But then they want to program everything in html/javascript/css 7-layer lasagna stacks, and this is where things start to get bad.

The alternative is that every TV SoC has its own SDK and most of them don't get [your preferred streaming app]. Those apps that get ported would probably perform better, but most TV makers don't want to take the risk of missing out on an app that will lead customers to someone else. LG and Samsung do stand apart with WebOS and Tizen, but those aren't exactly high performing UXes either.

At the end of the day, I'm not sure if 'UX is not so bad' is a marketable feature for a TV, much as I'd like it to be.

My personal journey has led me to standalone Rokus, but I'd love to find something that can do "everything": I want to play 4K Blu-ray discs from the network, without transcoding and with the full HDR10+ (when available) and bitstreamed Atmos and the silly menus; regular Blu-ray and DVD too. I would like a selection of top-tier streaming apps to work properly (at least Netflix and Amazon; one of the heavily ad-supported ones that has a lot of 80s tail content would be nice too); it needs to have a spouse-acceptable interface; and it shouldn't cost more than $100.

Roku + optical player works pretty well. My living room TV has that; I'm running out of patience for apps running on the projector in the theater, and it'd be nice if I could get a new box that replaces the apps and the optical player, so I could move the 4K optical player to the living room.

People say Apple TV or Nvidia Shield, but they're both pricey and I'm not sure either really does 4K Blu-ray with menus?


The history behind BAR is also fascinating.

There was the Total Annihilation RTS, and while it had the normal 2D overhead view, all the data was in 3D.

A Swedish gaming clan put together an accelerated full-3D engine to replay Total Annihilation recorded demos. As it got more and more features, they realized that most of what was needed to play TA was being recreated, so they closed the loop and made it into a full game engine, which they called SpringRTS. There was the default, accurate TA game code, but there was also a very popular mod that was not afraid to change things a bit, basically "we like Total Annihilation but also think it could be better", and they called it Balanced Annihilation.

We are almost there. BA lived under the Spring project for a few years, but really, when you think about it, there are IP problems with it using the TA assets; also, I suspect someone wanted to do engine work but was having a hard time upstreaming it. So it forked off the Spring project, they rebuilt all the units (same unit, different skin), are doing a ton of great engine work, and called it BAR (retronymed into Beyond All Reason, but I suspect it originally stood for Balanced Annihilation Reborn or something like that). So BAR is basically a highly modified, legally distinct Total Annihilation.

Zero-K is another great RTS based on this engine. It drifts further off the TA formula than BAR does.


I think that is the way it is headed. But you never know. Sometimes when comparing, it helps me to reduce these things down to lower levels.

What is a battery? A chemical cell to store hydrogen and oxygen (true, it does not have to be hydrogen and oxygen, but it usually is) to later get energy out of. For example lead-acid (stores the oxygen in the lead-sulfate plates and the hydrogen in the sulfuric acid liquid) or nickel-metal hydride (charges into separate oxygen and hydrogen compounds, discharges into water); the lithium cell replaces hydrogen with lithium.

Consider a pure hydrogen/oxygen fuel cell: it could be run in reverse (charged) to get the hydrogen and oxygen, and run forward (discharged) to get electricity out of it. So it is a sort of battery, a gas battery. Gas batteries are generally a bad idea, mainly because they have to be so big; much time and effort is spent finding liquids that can undergo the oxidation/reduction reactions at a reasonable temperature.

But now consider that there is quite a bit of oxygen in the air: if we did not have to store the oxygen, our battery could be much more efficient. This is the theory behind free-air batteries. And what if our battery did not have to run at a reasonable temperature? We could then use a heat engine to get the energy out. And thus the Mirai: they are shipping half of the charged fluid to run in a high-temperature reaction with the other half (atmospheric oxygen) to drive a heat engine that provides motive power.
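
For concreteness, the textbook overall discharge reactions for the two cases mentioned (standard chemistry, not from the comment itself):

    % Lead-acid cell, discharging:
    \mathrm{Pb + PbO_2 + 2\,H_2SO_4 \longrightarrow 2\,PbSO_4 + 2\,H_2O}
    % Hydrogen/oxygen fuel cell (the "gas battery"), run forward:
    \mathrm{2\,H_2 + O_2 \longrightarrow 2\,H_2O}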

As opposed to having the customer run the full chemical plant to charge and store the charged fluids to run in a fuel cell to turn an electric motor for motive power. Honestly, they are both insane in their own way. But shipping high-energy fluids tends to give better energy density. Perhaps the greatest problem in this case is that hydrogen is in gaseous form (not very dense), so it has no real advantage. Unfortunately, one of the best ways to retain hydrogen in a liquid form is carbon.
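
Some rough, commonly cited ballpark figures (my numbers, order of magnitude only) make the density point concrete:

    # Gaseous hydrogen wins per kilogram by a mile but loses per liter.
    MJ_PER_KG = {"H2 (700 bar)": 120.0, "gasoline": 44.0, "Li-ion cell": 0.9}
    KG_PER_L  = {"H2 (700 bar)": 0.042, "gasoline": 0.74, "Li-ion cell": 2.5}
    for fuel in MJ_PER_KG:
        print(f"{fuel}: {MJ_PER_KG[fuel] * KG_PER_L[fuel]:.1f} MJ/L")
    # ~5 MJ/L for 700-bar H2 vs ~33 MJ/L for gasoline: the "not very dense" problem.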

