A refinement of the previous remarks.
Envision a device, call it an L32. It's a medium-sized tablet, maybe a bit on the thick side. Tripod socket on the long side and another on the short side. Good quality display. Camera... stuff on the back. The details don't matter, it's computational photography stuff.
Here's how you use it:
Take a bunch of pictures. There's depth information natively in the pictures, so selecting objects in the scene is as simple as tapping them. Composite together a picture out of parts in a few taps and swipes. Set the focus point with a tap and select depth of field. Here's the subject, here's the background, focus on the eyes. Shallow DoF... no, a little more. There.
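The tap-to-select step is simple enough to sketch: with a per-pixel depth map, a tap grabs everything near the tapped depth. This is a toy illustration, not any shipping API; the function name and tolerance are made up, and a real system would refine the mask with color and edge cues.

```python
import numpy as np

def select_at_tap(depth, x, y, tolerance=0.05):
    """Return a boolean mask of pixels whose depth is within `tolerance`
    of the tapped pixel's depth. A crude stand-in for tap-to-select."""
    return np.abs(depth - depth[y, x]) < tolerance

# Toy depth map: a subject at depth 0.3 against a background at 1.0.
depth = np.full((8, 8), 1.0)
depth[2:6, 2:6] = 0.3

mask = select_at_tap(depth, x=3, y=3)   # tap lands on the subject
```

Depth alone gets you surprisingly far here: the subject and background separate cleanly because they sit at different distances, which is exactly why taps-not-lassos becomes plausible.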
Pick a lighting setup from your library or start fresh. Adjust position, color, and diffusion to taste to relight your composite. Five lights. Move the hair light back, main up, up, up, there. Dim the background light a touch, move the fill in a little. Dark card over there to control that spill. Now warm the color up a little... Yep.
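The relighting idea reduces to something like this sketch: given per-pixel albedo, surface normals, and 3-D positions (all recoverable from a depth-aware capture), re-shade the scene under an arbitrary list of virtual point lights. Plain Lambertian shading only, and every name here is hypothetical; a real renderer would add shadows, speculars, and per-light modifiers (softboxes, cards, grids).

```python
import numpy as np

def relight(albedo, normals, positions, lights):
    """Lambertian re-shading under a list of (position, color, intensity)
    point lights. Inverse-square falloff, no shadows or speculars."""
    out = np.zeros_like(albedo)
    for light_pos, color, intensity in lights:
        to_light = light_pos - positions                   # (H, W, 3)
        dist = np.linalg.norm(to_light, axis=-1, keepdims=True)
        direction = to_light / np.maximum(dist, 1e-6)
        ndotl = np.clip((normals * direction).sum(-1, keepdims=True), 0.0, 1.0)
        out += albedo * color * intensity * ndotl / np.maximum(dist ** 2, 1e-6)
    return out

# Toy scene: a flat gray card facing the camera (+z).
h, w = 4, 4
albedo = np.full((h, w, 3), 0.5)
normals = np.zeros((h, w, 3)); normals[..., 2] = 1.0
xx, yy = np.meshgrid(np.arange(w, dtype=float), np.arange(h, dtype=float))
positions = np.stack([xx, yy, np.zeros_like(xx)], axis=-1)

# Same card, lit from in front vs. from behind.
lit = relight(albedo, normals, positions,
              [(np.array([1.5, 1.5, 2.0]), np.ones(3), 4.0)])
unlit = relight(albedo, normals, positions,
                [(np.array([1.5, 1.5, -2.0]), np.ones(3), 4.0)])
```

Dragging the hair light back or dimming the background light is then just editing entries in the `lights` list and re-shading, which is why the adjust-to-taste loop can feel immediate.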
Save it as a 16-bit TIFF to the SD card. With, because why not, a complete set of mask layers for the various objects in frame. Off to Photoshop we go for retouching.
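The export step might look like this sketch, using the third-party `tifffile` library. The one-mask-page-per-object layout and the file name are my assumptions, not anything the imagined device specifies.

```python
import os
import tempfile

import numpy as np
import tifffile  # third-party; a common tool for multi-page TIFF I/O

# Toy "capture": a 16-bit gradient image plus one mask per selected object.
image = (np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64) * 65535).astype(np.uint16)
subject_mask = np.zeros((64, 64), dtype=np.uint16)
subject_mask[16:48, 16:48] = 65535

# Multi-page TIFF: page 0 is the image, later pages are the object masks.
path = os.path.join(tempfile.mkdtemp(), "l32_capture.tif")
tifffile.imwrite(path, np.stack([image, subject_mask]))

pages = tifffile.imread(path)   # round-trip to confirm the layout survives
```

Photoshop reads multi-page TIFFs, so the masks arrive ready to convert into selections, which is the whole point of shipping them alongside the image.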
Is that worth something to you? If the Light guys are to be believed we can build this now. The first edition would be slow, battery life would be awful, and sometimes it just wouldn't do that great of a job. You might have to do some poking at surfaces in the picture and cueing the software "that's skin, dummy, not chrome!" here and there. But the second model would be better.