If what you are referring to as the standard library is in that 1.5MB, then disregard my comment on LOC.
If it's in that remaining 12MB or so of stuff, then I'm wondering if LOC counts should include what is in there that is required for these programs to run.
Look at it this way. If I download 12MB of code and then I write 500 lines, does that mean I am a master of writing small, compact code?
Sure, if you ignore the 12MB I had to download first.
I'm not singling out Python. Perl, Ruby, etc. are equally large.
The point is you are downloading thousands of LOC to enable you to write "short" programs.
Nothing wrong with that. But those 12MB that were needed beforehand... should we just ignore all that when we count LOC?
Maybe one has to do embedded work to have an appreciation for memory and storage limitations and thus the sheer size of these scripting libraries.
Yes, we should ignore library-code LOC, as there's no associated cognitive/maintenance overhead, which is what we are really trying to count. I have happily used (C)Python for a decade without peeking at the source. Same goes for, say, math.h.
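To make the disagreement concrete, here is a rough sketch of the kind of counting being argued about: tally only non-blank, non-comment lines, and run it once on your own project and once on the interpreter's library directory. The function name and the example paths are mine, not anything standard.

```python
import os

def count_loc(root, exts=(".py",)):
    """Count non-blank, non-comment lines under a directory tree.

    A rough approximation of "lines of code": blank lines and
    lines that are only a # comment are skipped.
    """
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(exts):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                for line in f:
                    stripped = line.strip()
                    if stripped and not stripped.startswith("#"):
                        total += 1
    return total

# Compare your 500-line script with the library it leans on, e.g.:
# import sysconfig
# print(count_loc("my_project"))
# print(count_loc(sysconfig.get_path("stdlib")))
```

Whether the second number "counts" is exactly the question in this thread.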
"It's an interesting project but practically speaking $100 (no contract) can already buy a pretty good smartphone these days... and one with a more "unofficially open" SoC too."
Does that smartphone with a "more unofficially open" SoC boot from SD card? Is it OS agnostic?
I would actually be willing to pay more than the going rate for a "smartphone" that allowed me to use my own bootloader and my own choice of OS, and one with no ties to search engine ad sales companies or the like.

I just want a handheld computer with a decent enclosure and which can be controlled with time-tested, open source drivers. It does not need to have impressive specs, just a small form factor and easy control with any open source OS.
As soon as they have something that boots BSD, where starting the graphics layer is optional, and it has decent networking, I will definitely purchase one of these. I wouldn't mind a little FORTH, too.
NetBSD and FreeBSD were already ported to the Neo Freerunner a few years ago, and you actually don't have to start the graphics layer on any OS. You can also already use whatever programming language you like; it runs a standard operating system just like your desktop does, only compiled for a different CPU architecture (ARM).
If there's any BSD running on OMAP3 boards like the BeagleBoard, then running it on the Neo900 will be trivial and you can do it yourself. Any documentation needed for more specific support will be public, so you can do that yourself as well.
also, Openmoko is long in the past. Long live OpenPhoenux! :)
RPi doesn't allow you to use your own bootloader. You cannot even start it without a closed blob, so it contradicts your previous requirements.
I have yet to find a device that meets all my requirements (some of which I did not mention).
In RPI's favor I will say that 1. it is not "in development" but is "on sale" and 2. it does not come with Linux preinstalled. Letting users choose their own OS and not limiting the choices is a start in the right direction. There is of course further to go. Alas, it does not stop at the bootloader.
So, where do you recommend purchasing the device(s) you mentioned when they eventually go on sale?
Neo900 probably won't come with any OS preinstalled, except perhaps some Debian-based BSP which you'll be encouraged to replace with the OS of your choice; it isn't meant to be actively used, just to make it easy to test that all the hardware works as it should.
It is a small project with a very small target audience (currently about 350), so units won't be built for shelf stock, as there's simply not enough money to do that; therefore, if you eventually want one, you should express your interest now by donating at least 100 EUR, so you'll be counted in component sourcing. The donation will act as a rebate on the price of the device when it's out (in a few months).
More details: http://neo900.org/#donate
Where to buy? Straight from its manufacturer, of course.
There's also the GTA04 - http://gta04.org/ - which has already been out for a few years, but it is now unfortunately hard to obtain, and it seems there's not enough interest from customers to do the next production batch (well, most potential customers were probably stolen by Neo900 :P). Neo900 is based on the GTA04 and produced by the same company.
And of course there's also the good old Openmoko Neo Freerunner, which, despite being massively underpowered, meets all the requirements you mentioned. It should still be available at some retailers, like Golden Delicious Computers, Pulster or IDA Systems.
Also, you might want to look at the MIPS-based Qi-Hardware Ben NanoNote. It's not a phone, but it also meets your requirements and it's available (though supplies seem to be nearly gone, so you had better hurry if you decide you want it; AFAIK Pulster and IDA Systems still have it, but you can count the remaining units on one hand). I don't know if anyone has run any BSD on it, but I can't see why it shouldn't be possible.
So, to make it clear: every device I mentioned so far in all my comments, except the Neo900, already went on sale. With the Neo900 it's a matter of a few months.
Among future devices also worth noting, there's the DragonBox Pyra, an OpenPandora successor, which will also provide an optional modem (the same one as in the Neo900, btw - those two are actually sister projects, sharing some developers).
The only thing the Raspberry Pi beats any of those devices on is price, but once you count the money and time spent making a Raspberry Pi an acceptable option when operating on battery, that probably won't be true anymore.
Part of the problem is that many years ago certain people decided it would be a good idea to tightly couple email to domain names (DNS). Previously email needed only IP addresses to work.

The result is that now when you are configuring SMTP you also have to configure DNS. That means more things that can go wrong, and more things to check as you are setting things up.
It also means you may need to pay a fee for a domain name. This is because we all submit to the notion of an ICANN root and commercial registrars selling (renting) names that cost nothing to create. Thus email is not solely under your control. You generally have to play the ICANN DNS game, only because your email recipients are playing. Nothing stops anyone from running their own root, though, and this is what is done with private DNS inside organizations.
And then, as if that DNS complication was not already enough to take control of email away from you, you have various schemes trying to prevent spam that discriminate for or against mail you send based on IP address and domain name.

Can you operate email without DNS? Technically yes. There was a time before DNS, and email worked just fine. Practically speaking, today you need DNS, whether it's under ICANN's root or your own.
All this hassle steers you to just accept third party email hosting. Profiting from this arrangement has become a career for many a man. And with "the cloud" many are hoping to cash in yet again, as organizations who once ran and controlled their own email feel pressured to let a cloud computing vendor control it for them.

The fact that all this third party control makes warrantless search and surveillance so easy is but one side effect. Centralising hundreds and thousands of accounts in third parties makes the spammer's job easier, too. If you think about it, there are many unwanted side effects of centralizing email. When every sender and recipient are connected directly to each other via a network, why would you want to prevent them from sending messages to each other directly?
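That direct path still exists, and it is short. A minimal sketch of delivering a message straight to the recipient's mail exchanger, with no third-party relay in between: the standard library can't do MX lookups itself, so here the MX records are assumed to have been resolved out of band (e.g. with `dig +short MX example.org`), and all the hostnames and addresses are hypothetical.

```python
import smtplib

def pick_mx(records):
    """Given (preference, hostname) pairs from an MX lookup,
    return the hostname an SMTP client should try first
    (the lowest preference value wins)."""
    return min(records)[1]

def send_direct(sender, recipient, message, mx_host):
    """Hand the message straight to the recipient's own MX --
    sender talks to recipient's server, nobody else involved."""
    with smtplib.SMTP(mx_host, 25) as smtp:
        smtp.sendmail(sender, recipient, message)

# Hypothetical usage, once you've resolved the MX records:
# mx = pick_mx([(20, "mx2.example.org"), (10, "mx1.example.org")])
# send_direct("me@example.org", "you@example.org", msg, mx)
```

Every spam-filtering scheme mentioned above sits on top of exactly this exchange, judging it by the connecting IP and the names involved.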
With the constant connectivity and bandwidth we have today in many places, the centralisation and outsourcing of email is baffling to me... it is nonsensical... until you remember how much of a PITA it is setting up email. :)

It's no wonder we let third parties handle it. Is this PITA by design? Who cares? Let's just fix it. More of these projects should exist, or be made public (I imagine many of these are personal setups now being released for public use). I have my own that uses qmail.
DNS is not the issue with mail and MTAs. Setting up an MX record is something you can do after googling and reading for about ten minutes. I have only anecdotal evidence to prove this, but that's basically how I set up my own first mail server.
What was a lot more difficult was setting up the actual mailserver itself. Even a simple two-mailbox operation was an exercise in frustration when it came to trying to get mail working on a little VPS of mine. Shit, you have to make the sendmail config. How balls-out insane is that?
More recently, there are working tutorials for getting yourself a dovecot/postfix server running, which are relatively easy to understand (thanks, digitalocean!), but I just checked the first one my google search turned up and it's 2,800 words long. 20 pages if you were to print it out dead-tree style.
I can give you a tutorial on DNS and MX records in much less time than it would take to go through setting up any MTA on linux, and that's the trouble.
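For comparison, the entire "DNS side" of receiving mail for a domain is roughly two records in the zone. A hypothetical fragment (example.org and the addresses are placeholders from the documentation ranges, not a real setup):

```zone
; Zone fragment for example.org -- the MX record names the host
; that accepts mail for the domain; the A record gives it an address.
example.org.        IN  MX  10 mail.example.org.
mail.example.org.   IN  A   203.0.113.25
```

That really is the ten-minute part; the MTA configuration behind mail.example.org is where the 20 pages go.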
But I am actually referring to something different: hosting your own mailserver. So when I say "set up DNS" I mean set up a DNS server, not simply an MX record. This allows you to create your own domain names and hence email addresses. As I said above, these email addresses are valid so long as you and the recipient use the same DNS root (e.g., ICANN's root in the case of the public internet).
As bad as things are in terms of the relative difficulty of setup, I think there are defenders of the status quo for email, and I imagine this explains how I could be downvoted for my comment.

Don't get me wrong, I love email. It is the reliance on others to handle 100% of it that troubles me. It is purely a control issue.
No, you said that. I said that having email so closely coupled to DNS is part of the problem.

I'm not sure why you would have to remember IP addresses. We routinely "dial" telephone numbers by selecting from a list of contacts. IP addresses are approximately the same length as telephone numbers. The folk wisdom is that people can remember about 7 digits. But even if you disagree on all of this, what does that have to do with letting someone else control our email? The issue here is control, not whether we use names or numbers or something else when we enter the address of the recipient.
The nonportability of IP addresses is a problem in its own right, but I don't see the relevance here. Again, you are trying to engage me in a debate over domain names versus IP numbers. Perhaps that is an interesting issue, but here I am interested only in the issue of control over email (and, because email and DNS have been coupled together, DNS). And that is what the OP is interested in as well.
I said that email and DNS are closely linked and that this makes email more challenging for any user to control: first, because it complicates the setup, and second, because DNS as we currently accept it is controlled by third parties. You are trying to suggest that I am advocating against having email addresses that use names instead of numbers. I am not.

If email is linked to DNS, and someone else controls DNS, then you cannot control email. If you disagree with the preceding statement then please explain.
People seem confident with OpenSSH's authentication mechanism. Why not use that?

At some point one has to trust that the IP address one is sending/retrieving data to/from is the correct one. That's easier said than done if some host wants to keep changing its IP address every few days.

The SSL PKI scheme (the SSL approach to authentication), as implemented for public websites, is not much of a confidence-builder, IMO. Opinions may differ. If websites maintained consistent IP addresses and we could authenticate these machines using OpenSSH keys, I would be more willing to believe we could verify their "authenticity".
Exactly. The problem with SSL/TLS is that it relies on a broken promise with the CAs, and as we have seen several times it is extremely easy to exploit. Hello, rogue CAs.
At least with devices running iOS, if you let the charge drop to zero and it loses its cached network settings, then when you try to reconnect to your home wifi router it will look for a file called library/test/success.html at www.apple.com which contains one line: Success.

If that file is missing, you will not be allowed to connect to your own home LAN, because Apple thinks you are behind a "captive portal". I redirect *.apple.com to stop iAd, ntp and other Apple crap even when I'm connected to the internet, so unlike the OP, this is a problem even when www.apple.com is not down for maintenance.

One solution is to redirect www.apple.com to your own httpd serving a local copy of /library/test/success.html.
In short, you are wrong. Redirect apple.com domains to your own httpd and read your logs. There is lots of dialing home coming from iOS, and indeed some of it will affect your ability to read content.
Do you have to compile everything from source, e.g. gcc, glibc, etc.? There is no need to get into the whole "reflections on trusting trust" spiel, but I imagine at some point you must rely on your distro? Just use the chromium package from your distro. If not, why not build on a desktop/server and copy the package to the laptop?
I would not recommend that. I compiled Chromium a few months ago on my i7 desktop with 8 GB of RAM and it took at least an hour to finish, if not longer. I don't think RAM is that much of an issue, but I wouldn't go lower than 4 GB.
What then?