I am pretty sure that in the 1990s the general opinion, in the right circles, was that this digital thing was a flash in the pan and that the real action was in new film emulsions and new film formats. We were excited by the Ektar films and the TMax films, and we pretty much assumed the progression would continue. Soon we'd see new film cameras with ultra-flat film planes to take advantage of the newer super-high-resolution emulsions, and so on. There was a certain amount of derision about APS film, but there was broad consensus, I think, that there was future work to be done on improved film cassette formats (maybe APS-style auto-loading, but in a decent size?). I was there, but there have been some kids and beer between then and now, so my memory is a little vague.
Of course, we know where that went.
Now we have the DSLR crowd, looking onwards to more megapixels, and maybe something that really takes advantage of the on-chip phase-detection circuits, and why can't they do ETTR exposure modes? And surely the way forward is supporting DNG RAW format in-camera, etc.
Sound familiar? The single lens, single sensor solution is perfect and eternal, it will never ever go away, it will always dominate. The future is fiddling with the details and making it ever better. There are rumors of single lens, single sensor cameras with super high dynamic range, or one-billion-ISO performance, or whatever. It's all extremely familiar.
I don't know where the future is. I do know the general theme that is being dismissed, though:
Computational photography is the category, and it's here to stay. If I had to predict, which luckily I am not being forced to do, I would predict that it's the core of future photographic tech. I don't know which variant of it will win out. Quite likely, something we haven't seen yet. We have light-field work, with a single sensor and a bunch of microlens widgets (Lytro, Raytrix). We have multiple-sensor, multiple-lens systems (Light). There's all manner of research stuff ("this camera sees around corners!") that gets talked about at TED conferences.
People actually use it now, with focus stacking and HDR techniques, albeit clumsily. Interestingly, when someone dares to build it into a camera, the establishment dismisses it. Imagine, if you will, focus stacking in-camera. Draw a line on the live-view touchscreen: "make all these points in focus." Clickwhirclickwhirrclickwhirrrrrr DONE. The camera is so much better positioned to do focus stacking and HDR than the user is that it's not even funny. But the establishment is pretty opposed to computational photography as a thing, so, no.
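The core of focus stacking, by the way, is not exotic: shoot several frames focused at different distances, measure per-pixel sharpness in each, and keep the sharpest source for every pixel. A minimal sketch of that idea in Python with NumPy follows; the sharpness measure (a discrete Laplacian) and the per-pixel argmax are my assumptions for illustration, not any actual camera's pipeline, which would add alignment, smoothing of the selection map, and so on.

```python
import numpy as np

def laplacian(img):
    # Discrete Laplacian magnitude as a crude per-pixel sharpness measure:
    # in-focus regions have strong local contrast, blurred regions don't.
    p = np.pad(img, 1, mode="edge")
    return np.abs(p[:-2, 1:-1] + p[2:, 1:-1] +
                  p[1:-1, :-2] + p[1:-1, 2:] - 4 * img)

def focus_stack(frames):
    # frames: list of aligned grayscale images (2D float arrays) of the
    # same scene, each focused at a different distance.
    sharpness = np.stack([laplacian(f) for f in frames])  # (n, H, W)
    best = np.argmax(sharpness, axis=0)   # index of sharpest frame per pixel
    stack = np.stack(frames)
    # Pull each output pixel from whichever frame was sharpest there.
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

A real in-camera version would also need to align the frames (focus breathing shifts magnification between shots) and blend across the selection boundaries, but the camera has all the information it needs to do that unattended.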
We're seeing a lot of experimentation with form factors and user interfaces, leading to more derision from the establishment. These things don't even look like a DSLR, therefore they are lame and stupid.
What's it going to mean? I'm not sure.
A commonality in these technologies seems to be some sort of 3D information, which has implications for editing. It's probably a lot easier to remove the inconvenient mailbox in the background, and you can set depth of field in post. These are basic technical details, and I feel like somewhere in those kinds of details is the thing, or set of things, that unpacks into something much more important, the thing that actually changes everything.
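Setting depth of field in post, given a depth map, reduces to something quite simple: blur each pixel in proportion to how far its depth is from the plane you want in focus. Here's a toy sketch in Python with NumPy; the box blur, the linear depth-to-radius mapping, and the `strength` parameter are all my own illustrative assumptions, not how Lytro or Light actually do it (real renderers use lens-shaped blur kernels and handle occlusion edges).

```python
import numpy as np

def box_blur(img, radius):
    # Separable box blur; radius 0 returns the image unchanged.
    if radius == 0:
        return img
    k = 2 * radius + 1
    p = np.pad(img, radius, mode="edge")
    # Horizontal then vertical sliding-window mean.
    out = np.stack([p[:, i:i + img.shape[1]] for i in range(k)]).mean(axis=0)
    out = np.stack([out[i:i + img.shape[0], :] for i in range(k)]).mean(axis=0)
    return out

def refocus(img, depth, focal_depth, strength=2.0):
    # Blur each pixel in proportion to its distance (in depth) from the
    # chosen focal plane -- a crude synthetic depth of field.
    radii = np.round(strength * np.abs(depth - focal_depth)).astype(int)
    out = np.empty_like(img)
    for r in np.unique(radii):
        # Blur the whole frame at this radius, keep only matching pixels.
        out[radii == r] = box_blur(img, int(r))[radii == r]
    return out
```

The point isn't the specific kernel; it's that once per-pixel depth exists, "choose the focal plane" becomes a slider in an editor rather than a decision locked in at exposure time.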
Nobody guessed that digital photography was going to result in a billion pictures and a million memes a day. Nobody guessed that we'd have web sites (what are those?) devoted entirely to letting people put funny captions onto pictures (upload your own, or select from our collection of stolen pix!). Digital slammed us with these massive social changes, and it's not even clear what, exactly, about digital did that. Something about ubiquity, something about malleability of the product, something about web-native formats.
Is there something about mobile, about phones, that's going to turn up here? Probably. The chain of digital photography equals JPEGs equals the native format of the web was a huge component of the last upheaval. What about phones and small tablets? They're small, but they are, or will be, super high resolution. Apple's trying something out with their Live Photos (um, hello, Vine? Snapchat?) but I'm not sure that's quite it.
Anyone who's edited photos on a phone knows that malleability of the product kind of drops away. Yeah, you can do some stuff, but ain't nobody doing frequency-separation retouching on a phone. The pictures may still be malleable, but at a higher level. If not literally with voice commands, then at a level where you could express it as such: "crop this, open the shadows," as opposed to pixel-level stuff with layers, masks, and brushes.
Something about computational photography, something about phones, that we haven't noticed yet. Possibly it hasn't been invented yet.