The actual algorithms used for autopilots are really simple.
A simple autopilot will do nothing more than hold the wings level so the pilot doesn't need to continually keep their hands on the controls. From there, more functionality is added as layers.
Heading mode makes the autopilot hold a given compass heading. Nav mode layers a computer on top of that which changes the requested heading each time the aircraft reaches a nav point.
A more advanced autopilot adds speed-hold and altitude-hold modes that trade altitude against speed. Combined with the auto-throttle, the autopilot can fly constant-rate climbs and descents.
The autoland mode simply combines heading mode with a constant-rate descent to follow the glide slope.
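To make "layers of simple modes" concrete, here is a hypothetical sketch (not any real avionics code) of a heading-hold mode as a bare proportional controller that commands a bank angle; real autopilots add integral/derivative terms, rate limiting, and mode logic on top:

```python
def heading_hold(current_heading, target_heading, gain=0.5, max_bank=25.0):
    """Command a bank angle (degrees) to turn toward the target heading.

    Hypothetical proportional controller for illustration only.
    """
    # Shortest signed angular difference, in the range (-180, 180].
    error = (target_heading - current_heading + 180.0) % 360.0 - 180.0
    # Bank proportionally to the error, clipped to a safe maximum.
    return max(-max_bank, min(max_bank, gain * error))
```

Nav mode would then be nothing more than a loop that swaps `target_heading` whenever a waypoint is reached, which is why the pilot can always reason about what the box is doing.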
While it's true that a modern autopilot can take off, fly to a destination, and land without the pilots touching the flight controls, the pilots are required to constantly switch between the various modes, feed it nav points, and adjust its settings. At the same time, they are doing a bunch of other tasks to keep the plane flying and safe.
The simplicity is very intentional. Pilots are expected to know what the autopilot is doing at any point and understand why the autopilot is doing that. They are expected to spot when it's doing something weird very quickly and disconnect it.
It requires constant monitoring and adjustment: density altitude changes, waypoint changes, ATC inputs, and turbulence (which may mean disabling it or altering its inputs). I am a low-hour pilot, but I have already had one need to be disconnected due to a malfunction.
It’s basically a glorified cruise control for airplanes. It will fly a general route, but anything beyond that requires manual input (at least for small plane autopilots).
A good example might be a cop rerouting traffic. This would be an ATC route change in the airplane world, requiring you to reprogram waypoints. I'm afraid of what Tesla's "autopilot" does in this situation.
One last addition - in the airplane world, the PIC (pilot in command) has ultimate control of and responsibility for the airplane. A ton of disclaimers and training go along with a real autopilot to make sure you know what its limitations are. Tesla has also failed here, IMO.
I sympathize, yet encapsulation is appropriate for some projects. Not all users of a library want to frequently update their own code. Forbidding encapsulation and deprecation would increase this cost. Also, it's comforting to be able to refactor the guts of a class without harming users.
The issue is less about encapsulation and more about language-enforced encapsulation. Encapsulation is good, but encapsulation that's enforced by the language is debatable.
> If the language isn't enforcing the encapsulation, how is it encapsulated?
What do you mean by "enforce"? Java's private modifier doesn't enforce encapsulation. Javascript's objects do not have a private modifier, but they still provide encapsulation via closures. It's hard to have a meaningful discussion when loose terms like "enforce" are thrown around.
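The closure trick isn't specific to Javascript. A minimal sketch of the same idea in Python, with no access modifier in sight:

```python
def make_counter():
    # 'count' lives only inside this closure; there is no attribute
    # on any object that a caller could reach in and poke.
    count = 0

    def increment():
        nonlocal count
        count += 1
        return count

    return increment

counter = make_counter()
counter()  # the only way to touch the hidden state
```

Here the state is genuinely unreachable from outside, which is arguably stronger "enforcement" than a `private` keyword that reflection can bypass.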
Sure, the private access modifier doesn't strictly "enforce" encapsulation. Perhaps the term should be "language supported".
I guess, for me, the point of private is to clearly communicate the intent of the interface (small "i" interface) of a type. That intent is generally "don't use this, use this other part instead" or "if you couple to this, it may break on you".
There are other ways of expressing that intent, I just really like having the compiler keep me and my collaborators from making stupid mistakes.
Most dynamic languages use convention, which seems to work pretty well in practice. My take is that if other developers are accessing the encapsulated parts of your library, then your API or your documentation is broken - possibly both - and you should, like, fix that. Not use language features to lock them out.
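In Python, for instance, the convention is a leading underscore: linters warn, but the language never blocks you. A small sketch (the class is hypothetical):

```python
class Connection:
    """Public API is send(); everything underscored is private by convention."""

    def __init__(self):
        self._retries = 3  # "private" by convention only

    def send(self, data):
        return self._attempt(data)

    def _attempt(self, data):
        # Internal helper: callers *can* reach it, but pylint will warn them.
        return f"sent {data} (retries={self._retries})"

conn = Connection()
conn.send("ping")   # the blessed path
conn._retries = 5   # nothing stops you; you own the consequences
```

The underscore documents the boundary; crossing it is a deliberate act rather than an impossibility.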
One of the things that I have used access controls for is to simplify the exposed surface that collaborators work with. This isn't a condescending "I don't trust you" intention. It's more along the lines of "of all the types and methods here, you only need to know about this small subset". It's customer service, I tell ya!
And if the API turns out to be insufficient because it doesn't expose enough, that's fine. It's always easier to expose something later than to make it private later.
Maybe easier for you (very debatable), but never easier for the guy who needed something exposed today to get his job done.
I have never been bitten by overly promiscuous code in Python. The times I've gone beyond the published API, I knew I was doing it, so I knew I had to keep track of it. And I've gone deep here (replacing Django's database handling in their unit testing framework).
On the other hand, I can't count how many times I've been stuck in Java figuring out how to get around somebody's final class or private method that I really needed to tweak just a little bit or, worse, I needed access to a field I can see in my debugger. Needing to reflect through to get at it is STUPID.
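For contrast, going beyond the published API in Python is a one-liner, even for name-mangled "private" attributes; a hypothetical illustration:

```python
class Widget:
    def __init__(self):
        # Double underscore triggers name mangling to _Widget__secret,
        # Python's strongest form of "private".
        self.__secret = "internal state"

w = Widget()
# No reflection API needed: the mangled name is documented and stable.
print(w._Widget__secret)
```

You still know you're off the reservation, but the language doesn't make you fight it.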
You may have a planes-that-crash bias there. You don't notice all the times you benefit from a well encapsulated service because it just works the right way. The rare handful of times when something that would be useful is marked private is what sticks out in your mind.
I know that I would prefer to work with an interface with 10 public classes with 50 public methods than an interface with 100 public classes with 5000 public methods.
The conceptual weight of wading through all of that stuff has a cost. There is often value in not knowing or caring about implementation details.
Although I do agree that final/sealed is generally just mean-spirited and pointless.
Your argument assumes that you need language-enforced mechanisms in order to define a well-encapsulated API. That is simply not the case, as the many well-designed APIs in Python, Ruby, and Lisp attest (and many poorly designed APIs in Java as well). You can use documentation, conventions, and other mechanisms to define the publicly exposed API so one doesn't have to know the guts.
But when I need to deviate from that API, for whatever reason is important to me but not to the library designer (from a bug to a weird environmental issue specific to me), if I can't get done what I need to, the language is getting in my way instead of helping me.
I totally agree that language-enforced mechanisms aren't the only way to get meaningful encapsulation. People can (and do) write good code or bad code in any language.
I was just trying to point out that there are both benefits and drawbacks to language-enforced encapsulation, and the benefits may be less obvious than the drawbacks, which are generally more painful.
You have a good point there. You generally have to think about what you are exposing if the language demands it. And I think it is great when a language guides you into doing the right thing. Guides being the operative word.
Given the choice, I'll choose consenting adults. :)
Yes, I do a lot of work in C#, which has properties that are code-compatible with public fields. The last time I did a Java project, I was surprised at how much I missed them.
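Python has an analogous feature: `@property` lets what started as a plain attribute later grow logic without breaking call sites. A sketch of the idea (not C# syntax, and the class is made up):

```python
class Temperature:
    def __init__(self, celsius):
        self._celsius = celsius  # started life as a public field

    @property
    def celsius(self):
        return self._celsius

    @celsius.setter
    def celsius(self, value):
        # Validation added later; callers still write t.celsius = 25.
        if value < -273.15:
            raise ValueError("below absolute zero")
        self._celsius = value

t = Temperature(20)
t.celsius = 25  # same syntax as a bare field access
```

This is why field-compatible properties reduce the pressure to hide everything preemptively: you can expose a field today and encapsulate it tomorrow.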
Your view differs from some of the OO language designers. For example, Stroustrup defines encapsulation as "the enforcement of abstraction by mechanisms that prevent access to implementation details of an object or a group of objects except through a well-defined interface. C++ enforces encapsulation of private and protected members of a class..."
Every useful software engineering term is actually undefined, until you define it for the purpose of some discussion. Encapsulation, strong typing, object orientation, you name it, it's undefined. By undefined I do not mean "completely without meaning", but that the term is used so many ways that the information content of pointing at something and calling it "encapsulation" is actually very low.
Some OO traditions choose to combine encapsulation with language-enforced access control. Some schools then teach that if you don't have enforced access control, you don't have encapsulation. They're right... by their definition. They are not right by all definitions. If you don't lay out the definitions you are using when you explain whether one is necessary to the other, you're just making undefined statements. And usually one will be related to the other by definition, which means the other basic alternative is to make a vacuous argument.
I say that like a lot of other things that are mistaken as language features, encapsulation is an attribute of the program, not the language. Encapsulation is when there is a clear boundary of code that accesses a certain data structure. I have seen many C programs that have perfectly well encapsulated data structures, despite the lack of language support for access control enforcement. But that's just my definition. It is not the definition.
> Your view differs from some of the OO language designers.
Oh come on, that's not true at all. C++ just happens to use access modifiers to provide its brand of encapsulation. However, languages without the `private` reserved word can still provide encapsulation -- they're not the same thing.
What if Java had no notion of private? It would be very difficult to provide data hiding (not that the members are truly hidden anyway, but that's beside the point), so instead you would be forced to put little flags on your names and warnings in your documentation delineating the parts that people shouldn't touch. If they touched them anyway, that's their fault, no?
Encapsulation can be achieved quite efficiently if you have closures ... you can have good encapsulation (i.e. preventing access to the implementation details of an object) even in Javascript.
Many dynamic languages also have tools you can use to make your life easier ... with Python I'm using pylint/pychecker to keep me honest. My Emacs instance screams at me whenever I access a protected field of some object.
Also ... private/protected fields or final classes have caused much trouble for me. Overriding the behavior of a class is the easiest way to workaround various bugs without modifying the original source ... which in some cases is a PITA, while in other cases is impossible.
I once worked on a Java project that used a commercial library with no source code ... to fix a stupid bug I had to manipulate the bytecode at runtime. Which shows again that private/protected/final access modifiers are pretty useless as guarantees ... a determined developer can get past them.
It's just that you begin to hate life a little bit more.
I reported to a manager that attempted to measure our productivity fixing bugs by comparing the lines of code before and after the fix. Most fixes reduced the lines of code, so our numbers were usually either negative or close to zero. A couple weeks later the manager interrogated the team as to why we were so unproductive.
Another fun experiment is to measure developers using lines of code, and then switch to unit test coverage. For some reason, the code always seems to shrink.
Counting lines of code for maintenance work is a little like evaluating your mechanic based on the weight difference of your car before and after he fixes it.
Has the Java brand completely evolved from an enterprise Sun product into a counterculture language? With GWT and other JVM languages, Java is starting to look cool again.
It's funny that you characterize Sun as "enterprise", when Java is now an Oracle product (which is even more "enterprise") while Sun, at least sometimes, was actually pretty cool.
No way it could be as ugly as the x86. The x86 has been with us since the 8080 days (the 8086 was essentially a 16-bit 8085, done in a hurry while the iAPX 432 was running late and before it flopped). It's a kludge wrapped in another kludge.
It's not a generic VM. Sure it's got a lot of basic instructions for adding (signed) numbers together but a lot of important instructions are very high-level and very specific to Java. It was never designed to run anything else. So non-Java languages ported to the JVM do involve kludges. The comparison to x86 isn't far off.
Yes, and implementing things like tail call elimination requires major kludges.
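The usual kludge is a trampoline: instead of making the tail call directly, each step returns a thunk and a driver loop invokes it, so the stack never grows. A minimal sketch of the pattern (in Python, which also lacks tail call elimination):

```python
def trampoline(thunk):
    # Keep invoking until we get a non-callable result: constant stack depth.
    while callable(thunk):
        thunk = thunk()
    return thunk

def countdown(n):
    if n == 0:
        return "done"
    # Return a thunk instead of making a real (stack-consuming) tail call.
    return lambda: countdown(n - 1)

trampoline(countdown(100000))  # would overflow the stack if called directly
```

Languages targeting the JVM (Clojure, Scala) ship variations of exactly this trick because the bytecode offers no tail-call instruction.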
But, although we could debate stack-based vs. register-based VM instruction set design, some of the limitations of the JVM are not inherent to its design, but rather to its inability to evolve.
Who's going to introduce bytecode extensions to the JVM now, and correctly manage the upgrades, the backward-compatibility process, etc.?
Mature products sometimes have these kinds of issues. But, who knows, perhaps there will be a time frame when such a change will be welcomed by the community (along with Java 7 closures?), even if it causes some migration problems.
> Java 7 is much friendlier to dynamic languages ... it would need to be really inelegant to warrant a comparison with the x86
Not that much, and not that inelegant ... at least the x86 has evolved a lot since the 8086. The JVM itself evolved only in its internal architecture; the bytecode has stayed almost the same since Java 1.
The best VM for multiple languages will soon prove to be LLVM, if only because it makes your code cross-platform while being low-level enough and not being strangled by a standards body.
It's actually easier to build a compiler for LLVM than it is for the JVM ... you might not go at first with a generational GC, and the speed might be terrible ... but at least you have room to grow ...
I have yet to see a language on top of the JVM beat LuaJIT2 (even Java itself can hardly beat it in simpler benchmarks). And the optimizations in Java7 can be achieved today in Java6 (with lots of workarounds, of course).
Saying the x86 evolved is like saying it would be evolution if I grew half a dozen tentacles, wings, two more unconnected brains, an exoskeleton, and poison sacs.
If you start with a can opener and attach seats, an engine, wheels, a transmission, and a steering wheel, all in ways that let it still sit in your kitchen and open cans, is it a car or a can opener? That's the x86.
I am not saying the JVM is the VM to end all VMs. It's just that it's nicely done. Much unlike the x86.