It's exactly that: a single image, plus a depth map calculated from a series of shots made upwards. Both are bundled in the fake-bokeh image. There are obviously no more pixels. However...
If you choose your subject wisely, the effect can be pretty believable using a simple displacement map.
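For the curious, the displacement-map idea is roughly this: shift each pixel sideways by an amount proportional to its depth value, so near things move more than far things. A minimal sketch with numpy (the `shift` parameter and the toy arrays are invented for illustration, not taken from any of the apps discussed):

```python
import numpy as np

def displace(image, depth, shift=8):
    """Fake a small head-movement parallax: sample each output pixel
    from a source position shifted horizontally by shift * depth,
    so nearer pixels (higher depth value, 0..1) move more."""
    h, w = depth.shape
    ys, xs = np.indices((h, w))
    # clamp sampled coordinates to the image bounds
    src_x = np.clip((xs - shift * depth).astype(int), 0, w - 1)
    return image[ys, src_x]

# toy example: 2x4 grayscale image, "near" object on the right half
img = np.arange(8).reshape(2, 4)
dep = np.array([[0, 0, 1, 1], [0, 0, 1, 1]], dtype=float)
out = displace(img, dep, shift=1)
```

Note how the foreground columns get duplicated pixels rather than new ones — which is exactly the "no more pixels" limitation above.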
On iOS, though, it's something different. There's no depth map in there, just a flat image moving opposite to your hand movements.
It sounds like there are two issues here. One is how the depth map is generated, and the other is how the resulting image file is formatted. For the former, several still images are collected while the camera moves, which provides parallax that can be used to generate the depth map. For the latter, I don't know, but it would certainly be possible to bundle both the depth map and multiple "layers" of photograph that could be used to actually expose new pixels in the background when the foreground moves.
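The parallax-to-depth step boils down to: a feature that shifts a lot between shots is close, one that barely shifts is far, with depth ≈ focal length × baseline / disparity. A toy sketch of just that arithmetic (all numbers invented; real pipelines also have to match the features and calibrate the camera):

```python
import numpy as np

focal_px = 1000.0    # focal length in pixels (assumed)
baseline_m = 0.02    # camera moved ~2 cm between shots (assumed)

# pixel shift (disparity) of three tracked features between two shots
disparities_px = np.array([40.0, 10.0, 2.0])

# larger disparity -> smaller depth (closer to the camera)
depths_m = focal_px * baseline_m / disparities_px
```

With these made-up numbers the three features come out at 0.5 m, 2 m, and 10 m, which is the whole trick: one small camera motion yields a relative depth ordering for the scene.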
There is an app for iOS - seene.co. The amount of movement you have to do to capture enough pixels is prohibitive for my taste. I think Google has nailed it - it's super simple.
As for storing the layers - you would only have the "from above" pixels, and only a few of them. That's probably why there is only a Lens Blur in their app in the first place.
If you just want a small displacement effect like on depthy, the key is no sharp edges in the depth map. We will try to tackle this in one of the upcoming updates...
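Softening the depth map is cheap to sketch: blur it before warping, so a hard foreground/background jump becomes a ramp and the warp stretches pixels instead of tearing at the silhouette. A minimal numpy version using a repeated 4-neighbour box blur (a stand-in for whatever filter depthy actually uses; repeated passes approximate a Gaussian):

```python
import numpy as np

def soften(depth, passes=3):
    """Remove sharp depth edges before the displacement pass by
    repeatedly averaging each pixel with its 4-neighbourhood
    (edges replicated via padding)."""
    d = depth.astype(float)
    for _ in range(passes):
        p = np.pad(d, 1, mode='edge')
        d = (p[1:-1, 1:-1] + p[:-2, 1:-1] + p[2:, 1:-1]
             + p[1:-1, :-2] + p[1:-1, 2:]) / 5.0
    return d

step = np.zeros((5, 5))
step[:, 3:] = 1.0          # hard 0 -> 1 depth edge (a silhouette)
smooth = soften(step)      # the jump becomes a gradual ramp
```

After blurring, pixels near the old edge take intermediate depth values, so the displacement varies smoothly across the silhouette instead of snapping.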
Now that you've explained the iOS 7 trick, it's obvious. (Given the environment: a background-filling image reacting to physical device movement, as opposed to an image on a website.)
There are more pixels captured compared to a single-shot image, as you said: "series of shots made upwards". So it captures some pixels that are hidden when the camera moves upwards, but if the simulated parallax is bigger than the original camera movement, there will still be missing pixels. This could probably be improved by making bigger movements with the camera, as with other 3D reconstruction software.