yeah indeed, I just found myself having way more ideas than I'll ever be able to build... so why not share them :) I'm planning to add an idea every day... will keep me busy for a while
Completely agree. But look how much support he maintained in Hollywood, even winning an Oscar for Best Director. The fact that someone who raped a child can maintain such support should tell us that getting people and institutions to really go after harassers (who, while awful, are not child-rape level of awful) is going to be an uphill battle.
There was, and like so many other things these days, it's pointless listening to partisan summaries. If you want to know, dig and read the source material.
5 day old account and a statement like that? Sounds a bit like a CableCo shill to me.
Comcast offers service at speed tiers. When you pay for a speed tier you would expect to be able to use that speed a reasonable amount of time without having to spend extra. Nobody is expecting to max out their 150 Mbps connection 24/7 but that same connection starts hitting additional fees after 14 hours of maxing that connection in a month.
Data caps are a clear money grab because they need new revenue for their dying TV business. A business dying because of their failure to innovate and constant gouging of customers due to monopolistic practices.
I expect to be able to max out my advertised connection bandwidth 24/7. I don't see why I shouldn't have that expectation, even if it's physically impossible for everyone to do so at once. Perhaps they shouldn't advertise those speeds alongside unlimited data, then.
Metered service would be fine, if the marginal cost per byte were fixed or decreasing. When using an additional 25% of traffic causes one's bill to go up by 75% (especially when most of the price is fixed infrastructure costs), it's clear that the goal is just to gouge a captive userbase.
The article doesn't provide any argument to support the ridiculous headline.
Before Comcast had a punitive financial threshold they used to throttle the heaviest users. Now they simply charge those users, while presumably the majority of people are cognizant that they should use some discretion to avoid the fees.
Another user opines "it’s still limited by your maximum throughput and the number of days in a month," and this argument seriously rubs me the wrong way because it's effectively a tragedy-of-the-commons argument. I love having blisteringly fast internet when I need to download something, etc. But I realize I don't have a committed 500 Mbps across the internet, and not far from me in the network it's a shared resource.
I was responding to the claim that an unthrottled connection is “infinite.” I wasn’t actually making an argument about throttling or overage charges.
I actually think charging for use makes a lot of sense. There’s no reason my monthly bill should be the same as my neighbor when they use it for nothing but email.
As a thought experiment: what if the caps went away tomorrow? Is it possible that the network would become saturated? If so, the caps might be reasonable. If not, how do you know that's the case?
I'm not any happier with Comcast than anyone else; I recently moved from 1Gbps service to 250Mbps service with them, and I was always bumping up against the 'cap' on my 1Gbps service. I want a better provider. But nothing in the article proves that the caps are useless or a money grab.
Stating that a headline doesn't match an article is a different claim from stating that an article fails to be convincing.
One can disagree with the case they're making. But the headline was (now changed here): "Comcast disabled throttling system, proving data cap is just a money grab." The article covers both the fact that Comcast has disabled their throttling system, and why the writer thinks the remaining data cap is a money-grab.
Whether you agree that the data caps are money-grabs or not is completely irrelevant to whether the headline is (was) appropriate. Your argument seems to be with the article, not the headline.
The headline is like the label on a box, and your argument seems to be with the contents of the box.
Cost of network construction and maintenance is always passed on to the customers, though. Without considering Comcast specifically, it'd generally be somewhat unfair and unreasonable if most users had to pay more to subsidize the "elite 0.1%"'s data usage. So I think it's not as black-and-white as you make it out to be.
A serious practical problem with threads mirrors the same problem with C++, which is that many programmers reach for it first when they should be reaching for it last. Both of these technologies are like swallowing glass, and the wise programmer will avoid them if at all possible.
I don't claim these are "go-to" solutions, but only that there are multiple solutions to pick from.
One solution is processes (mentioned in the post). Fork a process which does your computationally expensive thing and then get the result when you are done. For the security-minded, we've seen this make a bit of a comeback because separate processes can be run with more restrictions and can crash without corrupting the caller. We see this in things like Chrome, where the browser, renderers, and plugins are split up into separate processes. And many of Apple's frameworks have been refactored under the hood to use separate processes to try to further fortify the OS against exploits.
Another solution is to break up the work and process it in increments. For example, rather than trying to load a data file in one shot, read a fraction of the bytes, then on the next event loop, read some more. Repeat until done. This can work with async (like in JavaScript) or with a poll model. Additionally, if you have coroutines (like in Lua), they are great for this because each coroutine has its own encapsulated state, so you don't have to manually track how far along you are in your execution.
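The coroutine point can be sketched with a Python generator (Lua coroutines behave similarly): each resume reads one chunk, and the generator itself remembers where it left off, so an event loop can interleave other work between steps.

```python
# Sketch: incremental reading via a generator; no manual progress tracking.
import io

def read_in_chunks(f, chunk_size=4):
    # The generator keeps its own file position across resumes.
    while True:
        chunk = f.read(chunk_size)
        if not chunk:
            return
        yield chunk

data = io.BytesIO(b"0123456789")
steps = list(read_in_chunks(data))  # an event loop would pull one per tick
```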
More expensive to start than threads, and far more expensive and complex and restrictive to move data around. Sounds like with the exception of some specific corner cases, threads are a better solution.
> Another solution is to break up the work and process it in increments

Either the tasks are broken into ridiculously fine-grained bits that are hard to make sense of or keep track of, or you still get a blocking UI. Furthermore, this solution is computationally more expensive.
Fork/exec time for extra processes is usually unimportant. If data transfer is truly a bottleneck, shared memory is as fast as threading.
These costs, though, are generally trivial compared to the lifecycle costs of dealing with multithreaded code. Isolation in processes greatly enhances debuggability, and it's almost impossible to produce a truly bug-free threaded program. Even a heavily tested threaded program will often break mysteriously when compiled with a different compiler/libraries, or even when seemingly irrelevant code changes are made. It's a tar pit.
Maybe, but, on Linux, processes and threads are almost the same thing.
Additionally, even where a process is a bit more expensive to create, it is not enough to block the UI thread from being responsive. I have first hand experience with this on different operating systems, including Windows, and it is more than fast enough to keep the UI completely responsive.
> and far more expensive and complex and restrictive to move data around.
Not necessarily. For threading, synchronization patterns are not necessarily simple. (This is why computer science courses spend time on these principles.)
Furthermore, some languages and frameworks provide really nice IPC mechanisms. Apple's new XPC frameworks are pretty nice and make it pretty easy to do.
> Either the tasks are broken into ridiculously fine-grained bits that are hard to make sense of or keep track of, or you still get a blocking UI.
As I mentioned, coroutines make this dirt easy. In principle, this doesn't have to be hard.
> Furthermore, the solution is computationally more expensive.
That doesn't really follow. The underlying task is where the computation happens. You are just moving it, either to a process, a thread, or dividing it up, or something else (e.g. sending it to a server to process). At the end of the day, it is the same work, just moved.
Yes, you might need some state flags for breaking up the work, but threading also requires resources such as creating and running the thread, locks for protecting your shared data, and so forth. There is no free lunch any way you do this.
Processes might be more expensive but they do have advantages.
If you do use a lot of CPU time, spawning a process instead of a thread might not have any noticeable impact at all.
Additionally, IPC isolates the process, meaning it can be more resistant to hostile takeover (if you drop privs correctly), and additionally you avoid any and all shared state that could possibly result in unforeseen bugs.
You will want to have two (conceptually) independent entities; it doesn't matter whether they are processes or threads. Depending on the architecture, they may not even live on the same machine. One entity will deal with user input, which will cause some work to be requested. The other entity will perform the work and report results. You pass messages between them.
The exact architecture will vary according to your needs. There was one project I was involved with which, contrary to what Joel Spolsky would say, we recommended be entirely rewritten. The biggest problem? Spaghetti code and threads. Or rather, the way threads were misused. You see, there was no logical module separation; they had global variables all over the place, with many threads accessing them. There were even multiple threads writing to the same file (and of course, file corruption was one of the issues). To try to contain the madness, there was a ridiculous amount of locking going on. They really only needed one thread, files and cron jobs...
For the rewrite, since we were a temporary team and could not trust whoever picked up maintenance of the code to do the right thing, we split it into not only different modules, but entirely different services. Since the only supported platform was Linux (and Ubuntu at that), we used d-bus for messaging.
This had the not entirely unexpected side effect of allowing completely independent development and "deployment", way before microservices became a buzzword. You could also restart services independently and the UI would update accordingly when they were down.
Even then, at least one of these services used threads (as tasks). Threads are great when they are tasks, as they have well-defined inputs, outputs and lifecycle.
At another project, I had to call a library which did not have a "thread-safe" version. A group at another branch was using Java, and they were arguing that it would be "impossible" to use that library without threads. The main problem was, as expected, that the library used some shared state. We would just fork() and call the library, and let the OS handle the rest.
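The fork() trick above can be sketched like this in Python on a Unix-like OS (the library routine here is a hypothetical stand-in): the non-thread-safe call runs in a child process, so whatever shared state it mutates dies with the child.

```python
# Sketch: isolate a non-thread-safe call in a forked child (Unix only).
import os

def not_thread_safe():
    # Hypothetical stand-in for the problematic library routine.
    return 7

r, w = os.pipe()
pid = os.fork()
if pid == 0:                         # child: make the call, send the result
    os.close(r)
    os.write(w, str(not_thread_safe()).encode())
    os._exit(0)
else:                                # parent: read the result, reap the child
    os.close(w)
    result = int(os.read(r, 64))
    os.waitpid(pid, 0)
```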
Threads are a nice tool, but that is only one of the available tools in your toolbox. Carpenters don't reach for a circular saw unless there is no other way, because it is a dangerous, messy and unwieldy tool.
I just rewrite an expensive task so that it explicitly processes a chunk that takes a limited amount of time...which also helps with running out of resources in many cases.
If a Canadian province decided to rename itself Maine, pretty sure Americans would have zero f*s to give about it. To quote Peter Griffin, "Who cares?!!"