A short note today about leading lines. We use those to lead the viewer into a photo and call attention to the subject. You can use a wide-angle lens, and you can look for lines naturally occurring in the environment. Like the perspective lines here, in the parking lot at Place d’Orléans mall, that seem to point to Rose:
As soon as I saw those “Alhambra-like” columns, I knew we had a photo. It’s all about opening your eyes.
This, incidentally, is one of those images that can also work very well in Black and White – here, with the super-cool grainy Tri-X film look – and you really need to see it full size to judge:
In this particular case I am not sure which one I prefer – I love both. What do you think? Let me know!
When we fix images, as we do daily in the store (www.michaelwillemsphoto.com), sometimes it is easy – and sometimes we need a lot more effort. Like in this before/after example:
White balance isn’t enough – not even close. For these colours I needed to use Lightroom’s white balance, extensive HSL, and especially the new excellent “Color Grading” tool. If you haven’t needed it yet – you will. And then a coloured local adjustment brush to add some skin colour, quite often – this is an art as much as it is a craft and a science.
But then there’s also Photoshop to remove the small imperfections, and an AI-based de-noise tool to lower noise. (“AI” stands for “Artificial Intelligence” – it’s not the name “AL”…)
In the end, it is always worth it. Memories preserved. Because when the photos fade, the memory itself fades.
You can use gels (colour filters) for correction. Here, from 2015, is a post with an example.
Take this: I am lit pretty much OK by my flash, with the camera set to FLASH white balance, but the background is a tungsten light, so it looks red. I happen to like that, but what if I want that background to look normal, white, the way it looks to me?
Well… can I not just set the white balance to Tungsten?
No, because then, while the background would look good, the parts lit by the flash would look all blue, like this:
Part 1 of the solution: make the light on me come from a tungsten light source too, so we both look red. We do this by adding a CTO (Colour Temperature Orange) gel to the flash.
Part 2 of the solution: Now you can set the white balance on your camera to “Tungsten”, and both I and the background will look neutral:
Done. Now we both look normal.
So, in summary: when you are dealing with a colour-cast ambient light, gel your flash to that same colour cast, and then adjust your white balance setting to that colour cast.
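To see why this works, remember that white balance is really just per-channel gain: the camera boosts the channels that the light source left weak. Here is a minimal numpy sketch of that idea – not any camera’s actual algorithm, and the RGB values are purely illustrative. Once the gel makes the flash match the tungsten ambient, one set of gains neutralizes everything in the frame at once:

```python
import numpy as np

def white_balance(image, neutral_rgb):
    """Scale channels so a patch that should be grey (neutral_rgb)
    comes out with equal R, G and B values."""
    neutral = np.asarray(neutral_rgb, dtype=float)
    gains = neutral.mean() / neutral          # boost the weak channels
    return np.clip(image * gains, 0, 255)

# A grey card under tungsten light, shot at daylight WB, looks orange.
# (Illustrative numbers, not measured values.)
tungsten_grey = [230, 180, 120]
img = np.full((2, 2, 3), tungsten_grey, dtype=float)

fixed = white_balance(img, tungsten_grey)
# After correction, all three channels of the card are equal: neutral grey.
```

This is also why you cannot fix a mixed flash-plus-tungsten scene with white balance alone: a single set of gains can only neutralize one colour cast, which is exactly what the CTO gel arranges.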
Today, a repeat of a 2015 post that is particularly useful for travel photographers.
With the camera on a tripod and exposure set to manual, I can take pictures like these, one by one:
…and so on. As mentioned, I am using a tripod, so the only thing that varies is me (I used a self-timer).
And then I can use Photoshop or the GIMP (the latter is a free equivalent) to do things like this very easily:
Or even this:
OK… so a cool trick. You do this with layers and masks. The user interface is hellishly complicated, but once you know the silly UI, the process itself is very simple. It’s the only thing I keep the GIMP for.
So. Why would I think this is useful, other than for fun?
Well… think. You can also use it the other way. Instead of replacing the wall with me, replace me with the wall. And now you can perhaps see a benefit looming.
No? Think on. You are at the Eiffel Tower. Or the Grand Canyon lookout point. Or whatever tourist attraction you can think of. What do you see? Tourists. Right. It attracts them: that’s why it is a tourist attraction.
But not in the same spot all the time. So all you need to do is the same as I did here: take a bunch of pictures – say 10–20 of them – so that you have each spot of the attraction at least once without a covering tourist. Then you put them into layers – one each – in PS, and manually remove the tourists. One by one, poof… they disappear.
Or you go one step further: depending on your version, you can use File > Scripts > Statistics. Choose “median” and select the photos, and you end up automatically with an Eiffel Tower without tourists, a Grand Canyon without other onlookers, and so on.
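The median trick works because each tourist only covers a given pixel in a minority of frames, so the per-pixel median “outvotes” them with the background. A minimal numpy sketch of the same statistic Photoshop’s Statistics script computes (toy one-dimensional “images”, illustrative values):

```python
import numpy as np

def remove_tourists(frames):
    """Stack aligned tripod frames and take the per-pixel median.
    Anything present in fewer than half the frames disappears."""
    return np.median(np.stack(frames), axis=0)

# Five frames of a "wall" (pixel value 100); a tourist (value 0)
# blocks a different pixel in each frame.
frames = [np.full(5, 100.0) for _ in range(5)]
for i, f in enumerate(frames):
    f[i] = 0.0                    # tourist stands at pixel i in frame i

clean = remove_tourists(frames)   # every pixel is back to 100: no tourists
```

Note that this only works if the frames are pixel-aligned – hence the tripod – and if every spot is tourist-free in at least half the shots, which is why you take 10–20 of them.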
Google DeepDream is the neural network technology Google developed to visualize what a deep neural network “sees” when it looks at a given image. The algorithm has since become a new form of psychedelic and abstract art.
Often, the “dreams” are pretty disturbing. Like this pretty well known “Dog Spaghetti” image:
Scary, but then… that is the stuff dreams are made of, right?
And here’s one of mine – two images together, tulips and a frog, and how the AI, after crunching away at them for a while, makes sense of them: