Here are more things to think about.
I've been thinking about how little of what we look at has real colors any more; so much of what we see is a perceptual color created by mixing a small set of dyes or lights. We spend hours looking at computer screens, and more hours looking at printed materials. We spend almost no time looking at apples, and no time at all really looking at an apple.
Color-managed workflow, that paragon, that ideal state of perfection, is all about managing those perceptual colors. It tries to ensure, and largely succeeds in ensuring, that the perceptual color I see on one monitor is the same perceptual color I see on another is the same perceptual color I see on the printed page.
A good process can actually do this, for a pretty broad range of colors. The color my visual cortex comes up with based on the Red, Green, and Blue light sources on one monitor is more or less the same as the color my visual cortex comes up with based on the Cyan, Magenta, Yellow, and Black inks used to print the page. With certain fairly stringent restrictions.
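To make that RGB-and-ink equivalence concrete, here's a toy sketch in Python. A real color-managed workflow goes through ICC profiles and a device-independent space; this naive formula is my own illustration, not how any actual CMS works, and it only shows that one nominal color has both a light recipe and an ink recipe:

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB -> CMYK conversion (no ICC profile, purely illustrative).

    Channels are floats in [0, 1]. K soaks up the common darkness; the
    remaining color is split among cyan, magenta, and yellow.
    """
    k = 1.0 - max(r, g, b)
    if k == 1.0:                      # pure black: avoid dividing by zero
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k


def cmyk_to_rgb(c, m, y, k):
    """Inverse of the naive conversion above."""
    return ((1.0 - c) * (1.0 - k),
            (1.0 - m) * (1.0 - k),
            (1.0 - y) * (1.0 - k))


# One nominal orange-ish color, expressed both ways:
light_recipe = (0.8, 0.4, 0.2)
ink_recipe = rgb_to_cmyk(*light_recipe)
```

The "fairly stringent restrictions" live precisely in the gap this toy version papers over: real inks and real phosphors don't span the same gamut, so the round trip only works for colors both devices can reach.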
What's missing from this? Did you notice the elephant that we're not talking about?
There's nothing in there that makes sure that the color on the monitor, or the page, looks like the color of the thing I photographed.
A side note. We occasionally see some eggheaded idiot explaining that the Red Channel is Clipped, and how that's Unacceptable. This same person would not complain that the highlights are blown where the sun appears in the frame, because anyone -- even an idiot -- knows that the sun is simply out of range. If you're going to get a picture of anything else, you have to let the sun go. Yet, when photographing an object with an out-of-gamut color, precisely the same problem exists. One or more color channels will clip. Fixing it will destroy the rest of the picture.
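The trade-off can be sketched in a few lines of Python. The numbers here are hypothetical, not from any real camera or color space; a color too saturated for the output space shows up with a channel above 1.0, and you only get two bad choices:

```python
def clip(rgb):
    """Clamp each channel into the encodable [0, 1] range."""
    return tuple(min(1.0, max(0.0, c)) for c in rgb)


# Hypothetical saturated red flower whose red channel is out of range:
flower = (1.4, 0.3, 0.2)

# Choice 1: clip. The red channel flattens to 1.0, so the hue and
# saturation of the flower shift -- the "Clipped Red Channel".
clipped = clip(flower)

# Choice 2: "fix" it by scaling the whole exposure down until red fits.
# The flower is saved, but every other color in the frame darkens too.
scale = 1.0 / max(flower)
rescued = tuple(c * scale for c in flower)
```

That's the sun problem in miniature: choice 2 destroys the rest of the picture to rescue one channel, which is why letting it clip is often the right call.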
Some experiments to try: Use your favorite method for getting a "correct" white balance, and photograph a handful of small objects, all in one frame. Use a grey card or whatever you like to "get the white balance correct". Produce an image file with correct white balance. Now hold your objects up to the monitor, or set them near the monitor in good light. Do the colors on the screen match the objects? What happens when you tweak the color to get one of the objects spot-on?
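Mechanically, the gray-card step amounts to choosing one set of per-channel gains so the card reads neutral, then applying those same gains to every pixel. A minimal sketch, with made-up patch values:

```python
def white_balance(pixels, gray_patch):
    """Scale channels so gray_patch comes out neutral; clip to [0, 1].

    pixels: list of (r, g, b) floats in [0, 1]; gray_patch: the measured
    color of the gray card under the scene's light.
    """
    target = sum(gray_patch) / 3.0
    gains = tuple(target / c for c in gray_patch)
    return [tuple(min(1.0, p[i] * gains[i]) for i in range(3))
            for p in pixels]


gray = (0.50, 0.45, 0.55)             # slightly warm reading off the card
objects = [gray, (0.60, 0.30, 0.20)]  # the card plus one colored object
balanced = white_balance(objects, gray)
```

The card comes out neutral, but nothing here guarantees that the other object's on-screen color matches the object itself: the gains are one global compromise, which is why nudging them to nail one object shifts all the others.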
In my tests the answer is "ha, ha, ha, not even close" but your mileage may vary. The hell of it is that on-screen they look pretty darn good. They're convincing when compared with my memory of the object, but place the object close to the screen, and the illusion falls away. The object I was most interested in was not even out of gamut; I was able to persuade my monitor to produce the color quite closely without much trouble. Correcting the picture for that color simply ruined the others, though. With an out-of-gamut color, I cannot imagine the situation would be any better.
Cameras, for excellent technical reasons, tend not to actually see things the way eyes do. Some cameras do better than others.
The result of this experiment might be liberating. If the prospect of getting the color accurate is taken away from you, then you don't have to worry about it any more. Make your picture look nice.