
Wednesday, November 4, 2015

ETTR

Expose To The Right. Often cited as a desirable exposure mode that camera companies willfully refuse to implement because they are dumb and don't listen to photographers. Let us set aside the fact that ETTR is a dumb idea.

Let's actually consider this. What would it take?

What is it? Well, usually the pundit will say that the camera should select the greatest exposure which will blow out at most some percentage (which translates trivially into some number) of pixels. Question: Before or after Bayer de-mosaic? Question: What do the error bars look like? If the camera should blow no more than 10,000 pixels, does that mean plus or minus 10 pixels? Plus or minus 1,000 pixels? Or do we insist on exactly 10,000 pixels?
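
To make the ambiguity concrete, here is a minimal sketch of the acceptance test, in Python for illustration. Every constant is invented for the example; picking the real values is exactly the design work those questions point at.

```python
import numpy as np

SATURATION = 16383   # full scale for a hypothetical 14-bit sensor
MAX_BLOWN = 10_000   # the pixel budget from the example above
TOLERANCE = 1_000    # plus or minus how many? Nobody ever says.

def blown_count(raw: np.ndarray) -> int:
    """Count saturated photosites in the raw frame, before de-mosaic."""
    return int(np.count_nonzero(raw >= SATURATION))

def exposure_acceptable(raw: np.ndarray) -> bool:
    """True if the frame stays within the blown-pixel budget."""
    return blown_count(raw) <= MAX_BLOWN + TOLERANCE
```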

How will the camera do this? Well, it has to take one or more exposures, and count blown pixels. If too many pixels are blown out, then the camera has to guess how much to reduce the exposure by. It can't calculate it, it has to guess, and try. The pixels are blown, we don't know by how much, after all. The built-in meter will probably help.

Let's suppose that the camera has a worst case of 5 guesses to get close enough to meet the criteria. At a 60 fps frame rate (let's assume we can retask some of the video logic for this process), we're talking almost 1/10 of a second of lag here just to calculate the exposure. Oops. Note: the more precisely we want to manage the number of blown pixels, the more test exposures we're going to need, with consequences that flow through the system.
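
Sketched out, the guess-and-retest loop looks something like the following, reusing exposure_acceptable() from the sketch above. capture_test_frame() and meter_hint() are hypothetical stand-ins for the sensor readout and the built-in meter, not anybody's real firmware API.

```python
FRAME_RATE = 60.0   # fps, assuming we can retask the video logic
MAX_GUESSES = 5     # the worst case assumed above

def find_ettr_exposure(ev_start: float) -> tuple[float, float]:
    """Walk the exposure down until the blown-pixel budget is met.
    Returns the chosen EV and the lag burned on test frames."""
    ev = ev_start
    for attempt in range(1, MAX_GUESSES + 1):
        raw = capture_test_frame(ev)   # hypothetical: one full readout
        if exposure_acceptable(raw):
            break
        # Clipped pixels carry no magnitude information, so the size of
        # the correction cannot be computed, only guessed, nudged by
        # whatever the meter suggests.
        ev -= meter_hint(raw)          # hypothetical meter assist
    lag = attempt / FRAME_RATE         # 5 frames at 60 fps is ~0.083 s
    return ev, lag
```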

How will we do these calculations? We're not going to chew through 5 frames of 24 megapixels each with some dippy little micro-controller, it's got to be the image processing engine. Question: Can we program the engine to count blown pixels? Question: Is there a way to get the count of blown pixels out of the image processing engine into the exposure control logic? Note: If either answer is no then we need to be looking at some new features in the next generation image processing chip, and ETTR exposure mode is impossible in this generation.

So now we're processing frames. We can reduce the shutter lag by doing rolling exposure estimation, but now we're running the image processing engine constantly, draining the batteries at a brisk clip. Question: How can we best architect this to give acceptable shutter lag performance in most cases, without excessively compromising battery life?
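
In caricature, the rolling variant is something like this, reusing the stand-in helpers above. The structural point is that the sensor and the image engine never get to sleep.

```python
def live_view_loop(ev: float) -> float:
    """Refine the EV guess on every live-view frame, so an answer is
    ready at shutter press. A thought experiment, not firmware."""
    while live_view_active():          # hypothetical state query
        raw = capture_test_frame(ev)   # the engine runs on every frame
        if not exposure_acceptable(raw):
            ev -= meter_hint(raw)      # continuous refinement
        display(raw)                   # and the EVF draws regardless
    return ev
```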

So our best case development plan here involves several design meetings sorting out exactly what the feature actually does, how it works, and how to balance the various compromises. All compromises will produce noticeable shutter lag at least some of the time, and will reduce battery life by some degree. Let us assume that we do not need to build a new chip.

Next up is writing the software: some for the main processor (menus, settings, and so on), some for actually setting exposure, and some for the image processing engine to do the heavy lifting.

Then there's the manuals and the QA and the manufacturing and the translations and the reviews and all that stuff.

I am not seeing this as less than 1 month of full-time-equivalent staff time, assuming it's even doable inside the current camera architecture. A week from this guy, a day there from her, a couple days over there. A month is a light estimate.

An FTE (full-time-equivalent) at this level runs a fully loaded cost of something like $150,000 to $300,000 a year (salary, benefits, rent on office space, janitorial services, depreciation on computers and furniture, insurance, etc.), so a month is going to run you $12,500 to $25,000, roughly. You need to recover that in profits, and since you're not running a commune here you need a multiplier on that, so you need to see a reasonable potential for something like $40,000 to $100,000 in profits. Assuming that you're looking at gross margins of something like 30%, you need to see $120,000 to $300,000 in sales (to the dealers; call it $150,000 to $350,000 retail). If it takes 2 months instead of 1, or you need to redo a chip, you could be getting into "a million bucks, retail" territory, which is hundreds of bodies.
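
For the skeptical, here is that arithmetic spelled out. Every figure is my own rough estimate from the paragraph above, not market data.

```python
fte_per_year = (150_000, 300_000)   # fully loaded cost per FTE, low and high
per_month = [c / 12 for c in fte_per_year]
print(per_month)                    # [12500.0, 25000.0]

profit_needed = (40_000, 100_000)   # a month of cost times a profit multiplier
gross_margin = 0.30
sales_to_dealers = [p / gross_margin for p in profit_needed]
print(sales_to_dealers)             # roughly $133k and $333k; call it
                                    # $120,000 to $300,000 in round numbers
```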

Now, when you do release this thing, the people who clamored for it will, you can be assured, complain loudly about the shutter lag and battery life issues introduced by ETTR, and urge their readers to wait "for the second generation when they will have finally sorted out these critical issues, we (sigh) hope." So your actual increased revenue will be somewhere in the area of $0, plus or minus.

I dunno about you but I cannot for the life of me see why camera companies aren't rolling out this simple, obvious, feature.

16 comments:

  1. eh? How does the question itself arise in the first place, though? Doing the ETTR thing with exposure compensation is basically the same thing as having a lower ISO, since the shutter speed drops anyway... Maybe a little difference, but I can't see it.

    So they'd have to invent a new standard for sensitivity as well, an ETTR 100, 200, 400... for no visual improvement.

  2. I think the 'need' for this no longer exists with the current generation of cameras, or probably any camera after about 2012. Shadow noise is really just not the problem it was when ETTR was proposed as a workaround.

    Replies
    1. That is, roughly, what Ctein says, and he's a fairly clever fellow.

  3. OK, so my unreasonable "simple" demand has always been a hyperfocal distance setting. Or even some kind of rough'n'ready "zone focus". Given its universal absence, yet obvious utility, I assume it's either impossible or too difficult to implement. Something to do with the difficulty of setting the lens focussing distance from an internally-passed parameter, rather than in response to a real external object, maybe? Otherwise it's just simple arithmetic...

    Mike

    Replies
    1. So the lens is focused to the hyperfocal distance for the current aperture in Av and M, and racked as needed per the computed aperture in Tv?

      That IS kind of cool! Probably take a week's effort ;)
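
      For the record, the arithmetic really is simple: the standard hyperfocal formula. A sketch, where the 0.030 mm circle of confusion is the usual full-frame convention and would change per sensor format:

      ```python
      def hyperfocal_m(focal_mm: float, f_number: float,
                       coc_mm: float = 0.030) -> float:
          """Hyperfocal distance in metres: focus here, and everything
          from half this distance to infinity is acceptably sharp."""
          return (focal_mm ** 2 / (f_number * coc_mm) + focal_mm) / 1000.0

      print(round(hyperfocal_m(28, 8), 1))   # 28mm at f/8: ~3.3 m; sharp
                                             # from ~1.65 m to infinity
      ```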

  4. Couldn't you just use lenses with a depth of field scale on them and manually set your hyperfocal point in a second or so?

    Replies
    1. Well, of course... But most of us don't own (or use) such lenses these days. I miss precisely that -- align the "infinity" symbol with the f/stop, and Job Done. Even my Fuji lenses (the only ones I now own with f/stops) can't do that. And wouldn't it be good to have hyperfocal d-o-f with zoom lenses?

      Mike

    2. Sorry, when I say "the only ones with f/stops" I really mean "the only ones with any useful information engraved on them at all"... Obviously, having an f/stop ring is irrelevant to setting hyperfocal distance. The manual focus ring on every lens I own is completely unhooked from any relationship to the actual distance focussed. Understand, you're dealing with a complete amateur, here, Kirk -- a [whisper it] kit-zoom user... ;)

      Mike

    3. My guess is that Kirk is being a little cheeky here.

    4. Damn, you think I've been out-ironied? Ah, you Americans... ;)

      Mike

  5. I leave the histogram running full time, pseudo-real-time, on my XE1. Could the camera not just read the data that drives that display, and expose accordingly? Perhaps not as precise as your description, but, in tandem with today's dynamic range, close enough? (another reader who found you via Kirk and bookmarked you!)

    Replies
    1. Hi and welcome to my blog!

      I don't think that helps much, actually. You still take one exposure and calculate a histogram, and then try to adjust. If it's blown out the top, you just have to guess, and it will take a couple stabs in the worst case.

      If you're under, you could probably do OK with a second exposure, sure. It would be imprecise, which might or might not satisfy the pundits ;)

  6. It really wouldn't be that difficult to implement this in any modern camera with an EVF or even a live histogram. The source data is already provided live, and the camera simply needs to adjust exposure based on the count of pixels likely to fully expose, which in my Panasonic GX8 have already been identified and are happily flashing zebras!

    The argument about the grossed up costs of manufacturing changes may be valid, but the statements about technical difficulty are plain wrong. If a manufacturer wanted to achieve this, preferably as part of the normal product development cycle, it really wouldn't be remotely hard.

    Replies
    1. With respect, please read more carefully.

      Suppose the camera is set up in Av, at ISO 100, f/5.6. At a shutter speed of 1/250 the display is half blinkies. Be the camera. You know you have to adjust the shutter speed to a faster one.

      Quick now, what's your target shutter speed?

      Hint: You don't know and you can't calculate it from the data on hand.

      You have to do the same thing the user does, which is jog the shutter speed around a bit (either in reality, or virtually), and make some more tests. This takes time. Which means either shutter lag, or rolling calculation which means the EVF/LiveView is running more of the time, which drains the battery.

      Which is what I actually said in the original post.

  7. I get that adding / changing features after the fact can be costly, but what I don't get is why camera manufacturers don't do a better job of it when they design their _next_ cameras.

    Presumably there's more room in the budget to implement improvements when the camera exists only inside of a computer and not yet in tangible form?
