post-handling images

and other variables in web design.

On the web, pretty much everything is about appearance – content included. If visual content on the web is not what it appears to be, few think twice about it. No-one expects to find unmanipulated images anymore, for instance, as we all apply at least some manipulation to the images we present – even if it is just the camera settings that affect the outcome.

Many regular photos on this site are quite sharp – sharper than in real life, that is. This is of course intentional, as I want details to stand out. So I sharpen them as part of the post-processing I apply to all images on this site.

Sometimes I also intentionally overdo the sharpening of all or parts of an image to create an effect, which may make it look a bit unreal. All as part of how I choose to present content on a particular page.

As I quit using Photoshop years ago, all image post-handling is nowadays done in GIMP. GIMP supposedly lacks some “advanced” features compared to Photoshop, but none that I have missed so far.

My idea is to optimize otherwise good, natural photos for the web, which means I don't apply much in the way of “artistic effects”. It is all about trying to “tell a story” in pictures and text, so actual “image distortion” doesn't do much for me.

pretty basic…

I always set cameras to max image-resolution (not using raw), which gives me the maximum to work with during post-processing.
For my Canon EOS 400D, max-res is 3888px by 2592px. For the other cameras I sometimes use, max-res is a little less, but at least twice what I use on the web.

“standard” image post-processing procedure:
  • Checking and correcting image-rotation – also small deviations of less than a degree.
  • Cropping image, if necessary.
  • Resizing image for its intended place and “max-size” on a page.
    (Some images are saved in several sizes, so I can “pick and choose” later and/or use them in several places on various pages.)
  • Checking and correcting light / contrast – a little goes a long way.
  • Adding edge-sharpening – several small steps until I'm happy with it.
  • Checking and correcting color-saturation.
  • Saving in a dedicated folder – I often use the date in file names, so I know when photos were snapped without including metadata.

Following the “standard” procedure listed above, it may take from a couple of minutes to maybe 15 minutes to process each photo in a series. Nearly all photos I have taken myself fall into this “rather easy” category, but there are of course exceptions…

“non-standard” procedures for special cases:
  • Blurring and smearing of details, followed by careful re-sharpening.
  • Partial deconstruction and/or reconstruction of objects, px by px and layer by layer if necessary.

There is nothing special in the “non-standard” procedures. But, as I like to fine-tune the details until everything looks “just right” to me, it may take hours, even days, to get a single image ready for launch. Many attempts may be reversed or deleted in the process.

lining up and resizing on-line…

Of course, launching an image is just half the work. In fluid design (what some call “responsive”), all images on a page have to work in context on any screen-size from the largest to the smallest.

I do not often switch or replace images to go with variations in screen-width or screen-resolution between devices. In most cases the gain from switching is minimal, and such techniques often increase the actual download size/time.

For resizing images to go with the variable page-width – compressing them, that is – max-width is the most used tool. There are several examples on this page, and all images that should stay smaller than the width of their containers are given their own classes with specific max-width values.
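
A minimal sketch of how this can look in CSS – the class name below is a placeholder of my own, not one actually used on this site:

  /* default: no image grows wider than its container */
  img { max-width: 100%; height: auto; }

  /* example class for an image that should stay smaller than its container */
  .img-half { max-width: 50%; }

With that in place, an image is simply classed in the markup, e.g. <img class="img-half" src="photo.jpg" alt="">, and the browser does the rest.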

Note: I am reusing one variant of the image above for all examples on this page.

1: line-up basics…

Here I am using basic float and compression techniques to line up two images on the right side of a paragraph, and scale them at fixed percentages of the width of that paragraph.

Compressing images for large screens to fit on small screens doesn't increase actual resolution. It does increase “available resolution” though, and works well for all but very special images.

The images used (again) are the same, with one given a max-width of 45% and the other a max-width of 20%. Their inherent width equals 100% of the container's width on the widest screens, which is the max-width they default to if no max-width class is applied to them. The method is really that simple.
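
Roughly like this, with placeholder class names of my own choosing:

  /* two images floated right inside the paragraph,
     capped at fixed percentages of the paragraph's width */
  .lineup { float: right; height: auto; }
  .w45 { max-width: 45%; }
  .w20 { max-width: 20%; }

  /* hypothetical markup:
     <img class="lineup w45" src="photo.jpg" alt="">
     <img class="lineup w20" src="photo.jpg" alt=""> */

As the paragraph narrows on smaller screens, the percentages keep both images scaling in step with it.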

Note that I usually do not compress images initially. Images should ideally be no wider than the chosen initial size on the widest screens, to avoid initial compression. Thus, when sizing images in GIMP I have already decided where those images go on a particular page and what initial size they should have there.

(FYI: max image-width in the main column on this site is 670px. It is 460px in side-notes, and 1200px in the addendum. Most images are sized smaller than those max-values in GIMP, and max-width is then declared – classed in – to let images stay at inherent size for as long as possible while the browser-window is sized down.)
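
As a worked example with made-up numbers, in the spirit of the above:

  /* main column: max 670px wide (as noted above).
     If an image is exported from GIMP at, say, 300px wide and given
     a max-width of 45%, then 45% of a full-width column is ~301px,
     so the image shows at its inherent 300px on the widest screens,
     and only starts compressing once the column drops below ~667px. */
  .w45 { max-width: 45%; }

The 300px export size is a hypothetical figure, chosen only to show how the inherent image size and the percentage cap relate.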

2: arranging images…

For image-overlapping and over-the-edge arrangement on pages, margin-offsets on floating images are used a lot on my sites.

There are several examples of that on this page too, but the same image-lineup, with negative margin-right added to both images, shows how it works in its simplest form.

By having classes for margin-offsets on floats in both px and %, and combining them with classes for relative offsets in both px and %, pretty much any form of image-overlapping and over-the-edge positioning that behaves as intended in fluid designs can be achieved.
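
A sketch of what such utility classes can look like – the names and values are illustrative, not the exact set used on this site:

  /* margin offsets on floats, in px and in % of the container width;
     a negative margin-right on a right float pulls it over the container's edge */
  .pull-right-30px { margin-right: -30px; }
  .pull-right-5pct { margin-right: -5%; }

  /* relative offsets, for nudging a float without affecting surrounding flow */
  .nudge-up-20px { position: relative; top: -20px; }
  .nudge-left-3pct { position: relative; left: -3%; }

Combining one class from each group on a floated, percentage-capped image is usually all it takes to get a controlled overlap that holds up in a fluid layout.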

To repeat: having images auto-size with their containers, and having floating images pulled and pushed around by their margins, are what I have found to be the most cross-browser reliable image lineup and positioning methods for responsive designs.

In that respect nothing much has changed over the last decade or so, apart from the fact that browser-support for these techniques has improved. Again, it is a set of very simple but under-utilized techniques that are supported to perfection by all major browsers today.

3: layering elements…

Lastly, I may want to control image-shadows for such an image-overlapping, to keep shadows from covering up anything. In this example the small image should be in front of the larger one, but the shadow on the smaller image should stay behind that of the larger image.

This is of course achieved by transferring all image-styles onto an element containing the image, and manipulating the z-index.
It is otherwise the same old margin manipulation technique at play, and it works just as well now as it did in the past.
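
One way to sketch that stacking – the wrapper markup, class names and values here are my own illustration of the principle, not the exact styles used on this page:

  /* hypothetical markup:
     <span class="img-box front"><img src="small.jpg" alt=""></span>
     <span class="img-box back"><img src="large.jpg" alt=""></span> */

  .img-box {
    float: right;
    box-shadow: 4px 4px 10px rgba(0, 0, 0, 0.4);  /* shadow moved onto the wrapper */
  }
  .img-box img {
    display: block;
    width: 100%;          /* image fills its wrapper */
    position: relative;   /* needed for z-index to apply to the image itself */
  }
  .img-box.back { max-width: 45%; margin-right: -12%; }  /* overlap via negative margin */
  .img-box.back img { z-index: 1; }    /* large image: above both shadows */
  .img-box.front { max-width: 20%; }
  .img-box.front img { z-index: 2; }   /* small image: on top of everything */

Since the wrappers themselves are un-positioned floats, their shadows paint in source order underneath both images, while the z-index values on the relatively positioned images decide which image ends up in front.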

Some front-end coders seem to regard use of negative margins in this way as a hack to be avoided. Well, the method is based on W3C standards, so what's “hacky” about it?

Anyway, the above is a small set of the many techniques I have for controlling images on web pages, and all my methods are standards-based.

recycling old news.

After having been in web design for about 15 years, I have found that most of what is “new” in web standards is about improving or replacing the methods by which we can achieve the same layouts / designs that we attempted all those years ago.

Not much that is actually “new and useful” in more recent web standards has been given broad support across browser-land. It is a slow process, and there isn't much that can be done from my end to speed it up.

Be that as it may: as long as I personally have a good insight into how the various browsers actually handle the techniques that are available to me today, the slow progress doesn't cause me any real problems.

sincerely, georg

Weeki Wachee 07.may.2015
last rev: 10.oct.2020


