Image Ethics Shortcourse for Research Scientists

image of graph showing increase in image fraud cases

An increasing number of journals are paying greater attention to images at publication. Some actively check submitted images for manipulation using software.

Furthermore, according to an internal report at the Office of Research Integrity (ORI) by John Krueger, the proportion of ORI cases involving falsified images has increased since 1990 to approximately 80% of all cases.

When publishing, and even after your lab has published, authors are responsible for incorporating Good Laboratory Practices into all aspects of their research, including the acquisition and processing of images. This practice is generally known as “image integrity”: keeping the published image as close as possible to the original image, and to what was once seen by eye, without arbitrarily enhancing parts of the image to make experimental evidence stand out from everything else.

Two major types of images in light microscopy

What kind of images are covered in this shortcourse?

  • Images primarily from microscopes
  • Brightfield images (dark features on lighter background)
  • Darkfield images (bright features on a dark background)

Darkfield images include those images in which fluorescent labels are used on specimens. Brightfield images include those in which chromogenic labels are used, such as H&E.

Images defined by intent

What is the intent of images?

  • As representative images showing what was once seen
  • As quantitative images for measurement and analysis
  • As images to aid visualization

In this shortcourse, the primary image intent is representative. Many publications’ author guidelines define ways in which representative images are to be post-processed, with specific guidelines for electrophoretic samples (gels, blots, etc.). Images intended for visualization, and images other than electrophoretic samples, are not always specifically addressed.

In general, images for visualization can undergo extensive post-processing changes, provided details about the changes are included.

Images for quantitation, with the exception of those images intended for densitometry, may also include post-processing filters (such as median or bandpass filters) to aid in the segmentation of features and would also include details about post-processing in methods.
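As an illustration of the kind of segmentation-oriented filtering mentioned above, a median filter knocks out single-pixel noise before thresholding. Below is a minimal pure-NumPy sketch (the helper name `median_filter_3x3` is ours, not from any package; in practice you would use your analysis software’s built-in filter and report its use in Methods):

```python
import numpy as np

def median_filter_3x3(img):
    """Apply a 3x3 median filter (edge pixels left unchanged).

    Median filtering suppresses single-pixel noise before thresholding,
    which makes feature segmentation more robust.
    """
    out = img.copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = np.median(img[y-1:y+2, x-1:x+2])
    return out

# A bright single-pixel speck in a flat background is removed:
img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 255                     # isolated noise pixel
filtered = median_filter_3x3(img)
print(filtered[2, 2])               # -> 0 (the speck is gone)
```

Note that the original image is untouched: the filter returns a copy, in keeping with the rule of never writing over originals.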

Representative images are expected to be minimally processed. These post-processing changes have historically gone unreported, but should be reported.

Images intended for densitometry are not post-processed except for cropping, rotating, setting image dimensions and resolutions for publication. Some exceptions may include post-processing for flatfield correction (correcting for uneven illumination, best done when acquiring images on microscopes) and, possibly, linear histogram changes to match microscopy sessions over time. Do not remove artifacts, such as specks, from gels and blots. If you do, report it.

If working with images intended for densitometry and unaware of language used in the paragraph above, be sure to research this method of measurement before acquiring images. Note that electrophoretic samples are often acquired on scanners, and scanners are generally not subject to uneven illumination and the need to flatfield correct. However, electrophoretic samples must be properly acquired so that non-stained areas of the image are not pure white (oversaturated).


Remember RA RA!

  • Record what you’ve done
  • Archive original images
  • Report what you’ve done
  • Apply any changes made in post-processing equally and globally to all related images.


  • Electrophoretic samples (gels, blots, etc.): do NOT change tones in these samples in post-processing, to avoid potential accusations of scientific misconduct down the road.
  • Samples meant for densitometry: do NOT change tones in these images either.


Save in Manufacturer’s Format
If you have newer versions of software and you save images in the manufacturer’s format (e.g., the CZI format for Zeiss files), metadata is often included with the image that records all of your microscope and camera settings.

Otherwise, take extensive notes on microscope and camera settings.

Record what you have done to images in post processing. Some software programs offer a means to do that.
In Photoshop, a History Log can be recorded in versions more recent than CS (v. 10) (see image above).
You can see what was done to an image under File > File Info, in the History tab. When you select Both for Edit Log Items in the History Log settings, the record of what was done is saved both to the image and to a text file. Once configured, the History Log runs in the background from then on (Photoshop remembers the setting).
Image J:
Workaround: Record a macro of your steps that you have applied to all images: Plugins>Macros>Record.
Image Pro Plus (Media Cybernetics):
This post-processing program provides an audit trail. Search the program’s help for “audit trail” to find the function.

retain original images slide


  • Do NOT save over original images.
  • Save in manufacturer’s format (if it exists).
  • Save to a DVD/CD that is NOT rewriteable: use DVD- (minus), DVD+ (plus), or combination minus/plus discs. Better still, save to Blu-ray, a hard drive, the cloud, or a means provided by your institution.
  • Do NOT store originals in PowerPoint, or you run the risk of having the image’s pixel resolution reduced (see above).
  • Provide some means in post-processing software to eliminate the possibility of writing over an original image, such as an automated method that duplicates the image and then closes the original (something that can be done, for example, in Photoshop).
report what you've done image

Always report all post-processing steps; either in Methods, or in ancillary material provided to both reviewers and to publications.

Researchers and Principal Investigators hesitate to report that images have been “enhanced” because it could open the door to questions about the degree of enhancement (anecdotal evidence from talking with scientists). Too many don’t wish to open that door.

However, as more journals spot-check images, it is no longer useful or advisable to withhold information. By withholding post-processing steps, researchers open the door to accusations of scientific misconduct. Furthermore, experiments cannot be repeated at other labs with similar results when post-processing methods are withheld.

A rationale for enhancement can be included in the methods, such as the statement shown in the image above, or for the reason most enhancements are done: to fill the tonal dynamic range of the image, or to brighten, or to match one image’s brightness to another (when images are representative and not showing densitometric information).

gamut image

Gamut is the range of colors and tones that are reproducible to a particular output. Monitors can reproduce a larger (and different) range of colors and tones than a printing press (see top and bottom images above). Therefore an adjustment to fit colors and tones to a printing press would be a legitimate rationale for making tonal and color adjustments as long as the publication allows it.


  • In post-processing, be sure to apply changes to the whole image, and not to parts of it.
  • Apply any changes to one image equally to all related images.
  • Do not add to existing picture from other pictures (splice pictures in).
  • Do not add features from other pictures to existing picture (splicing in features)
  • Images should be minimally processed
  • Avoid resampling images for publication, if possible.

Post-processing steps should not include enhancement such as: over-brightening experimental evidence to max values (saturation); minimizing non-specific or undesired parts of the specimen that could be mistaken for evidence (intentional obscuring of visual information); or changes made solely to improve visualization of hidden or obscured details.

If experimental images are described as “brighter” or “darker” than control images, then tones should not be adjusted, except to match exposures linearly over several microscope sessions to the same “baseline.”

Do NOT apply tonal changes to electrophoretic samples, even when the publication does not specifically prohibit it. Your lab will potentially be at risk for accusations of scientific misconduct. Do NOT acquire electrophoretic images with pure white backgrounds.

Changes that can be applied are typically found in the author’s guidelines for the publication of interest. The guidelines for “Nature” do a good job of describing what is and isn’t acceptable. You can also find a bibliography at the “Nature” site.

Image Size box in Photoshop

Post Processing in Photoshop and Using the Image Size Function
Images taken with cameras and other detectors should contain adequate pixel resolution for high quality reproduction. Generally, a pixel dimension greater than 1000 pixels across is desirable; however, for low fluorescence images smaller pixel resolutions are often required for detection (in that instance, the images are expected to be small “thumbnails” in publication).

When sending images to publishers, often the image size needs to be determined, along with the output resolution (in dots per inch or mm, or pixels per inch or mm). While the reference to “dots” is a specific reference to the creation of halftones for printing presses in which more than one pixel is used for each dot, it’s likely that in the author’s instructions “dots” and “pixels” are synonymous (call the publisher if you wish to have the unit of measurement clarified).

Many users rely on Photoshop to fit images to publication dimensions and pixel resolutions. The Image Size function in Photoshop is often used incorrectly and has been called the most dangerous function in Photoshop. Part of that has to do with a misunderstanding of the dialog box itself:
1. The Pixel Dimensions show the (Pixel) Resolution of your image. This is the inherent pixel resolution of your image.
2. The box marked Resolution: is NOT the resolution of your image. This is the resolution of your image were it to be printed. In other words, this is the Print Resolution of your image, and it depends upon the dimensions you set.
3. The box marked Resample Image is what you check when you want to change the number of pixels that make up your image to fewer or more pixels. Each pixel is a sample of part of your specimen: resampling means that part of your specimen will be re-made from more or fewer pixels.

The Resample box is normally left UNchecked, EXCEPT when you need to resample for publication. When going to Powerpoint, Illustrator, Word, etc., only the Width and Height are set to “tell” that software its dimensions. Ignore Resolution.

When going to publication, set the Width and Height to the column dimensions and the Resolution to 300 or 400 (depending on the publication). Attempt to fit images to column widths without having to resample: it is acceptable not to fill the column edge to edge with your figure or image. If images are too small and you wish for the images to span 2 columns, consider fitting them to one column instead, since spanning would require resampling.
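The arithmetic behind the dialog is worth keeping in mind: printed width (in inches) × resolution (in pixels per inch) = pixel dimension. A quick check with hypothetical numbers:

```python
# Print size vs. pixel resolution: width_px = width_in * ppi.
# Example (hypothetical numbers): a 1400-pixel-wide image printed in a
# 3.5-inch column reproduces at 400 ppi -- no resampling is needed.
width_px = 1400
column_in = 3.5
ppi = width_px / column_in
print(ppi)  # -> 400.0
```

If the result falls below the publication’s required resolution, either the printed dimensions must shrink or the image must be resampled.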

As a general rule, resample figures to smaller dimensions rather than larger. Resampling with bicubic interpolation (a method that determines new pixels from neighboring 4×4 pixel information) results in clearer images when the new dimensions are divisible by 8 (according to the author’s experience). If bilinear interpolation is used (a method using 2×2 neighboring pixels to determine the resampled pixel at each position), doubling or halving the pixel resolution results in the most accurate resampling (see Avoiding Twisted Pixels: Ethical Guidelines for the Appropriate Use and Manipulation of Scientific Digital Images).
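To see why halving is clean, consider downsampling by exactly 2: every output pixel is the average of one 2×2 block, with no fractional pixel positions to interpolate. A minimal NumPy sketch (not Photoshop’s algorithm, just an illustration; the helper name is ours):

```python
import numpy as np

def downsample_by_2(img):
    """Halve pixel resolution by averaging each 2x2 block.

    Exact halving avoids fractional pixel positions, which is why it
    resamples cleanly; dimensions must be even here.
    """
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

img = np.array([[0, 0, 4, 4],
                [0, 0, 4, 4],
                [8, 8, 12, 12],
                [8, 8, 12, 12]], dtype=float)
small = downsample_by_2(img)
print(small)  # each 2x2 block collapses to its average: [[0. 4.] [8. 12.]]
```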

Resampling in Image j: Use Image > Scale and choose interpolation.

While post-processing has led to scientific misconduct and attracts a great deal of attention, almost no attention is paid to errors made when acquiring or displaying images.
The list above details common mistakes. Some of these can be “repaired” in post-processing, but the following likely result in unusable, unmeasurable, and unpublishable images:

  • Use of Automatic Exposure with images in which densitometry will be done
  • No flatfield correction with images in which densitometry will be done (unless measuring from the same location)
  • Oversaturation of pixels resulting in detail permanently obscured

Each of the mistakes will be covered one by one in what follows.

Common Mistake #1: Poor Display of Images on Computer Screen and in Software
Images should be displayed on a computer monitor that shows all the tones. If the monitor is in a well-lit room or next to window light, the likelihood of seeing all the tones in the image is drastically reduced.

And if the monitor suffers from a limited viewing angle (called the “pitch”), then both colors and tones will be misinterpreted by any person viewing images off axis.

All the tones on the images above should be visible. If you can see all the tones only when moving your head in relation to the screen, the viewing angle is too small, or you should be viewing further away.

The numbers under the square tones in images above indicate the tonal value on a scale of 0 (pure black) to 255 (pure white).
You can more effectively test your screen by going to this link.

Computer monitors should be calibrated with a hardware calibration device, and/or use a large-pitch screen such as an IPS (In-Plane Switching) monitor or equivalent (generally unavailable on twisted-nematic laptop screens).

image ethics bit depth image

The raw image display (leftmost image) viewed in Photoshop shows an image saved with 12 bits of tonal information (0–4095 potential tonal values) on an x-axis of 16 bits (0–65,535 tonal values). Note that 12 bits comprise only 1/16th of the 16-bit range, so the raw 12-bit image appears dark in Photoshop. After scaling to 16 bits (middle image), the tones are displayed as they would reproduce. In Image J, using automated “Appearance” settings, the image (rightmost image, from a different field) is auto-scaled for display to show tones from 330–1860 on the x-axis, and appears to be optimally bright when that is not true of the underlying image. Note that the display can be changed in Image J to prevent autoscaling: Edit > Options > Appearance: select an x-axis to match the bit depth of the image.

Common Mistake #2: Incorrect Display of Images on Monitor
Imaging and analysis software may display images in such a way that they are arbitrarily auto-brightened. Sometimes images are also displayed so that no discrete pixels can be seen even at the highest zoom. The rationale is benign, making it easier for users to visualize images; but the danger is that users see a completely incorrect image.

In NOT seeing the true tones of the image on the screen, or pixelation at reasonable zooms (e.g., 100% zoom), users are prevented from seeing how images may reproduce.

Furthermore, it eliminates a standard viewing environment in workgroups and when images are shared with other workgroups.

Knowing the bit depth of images is absolutely critical, because mistakes are made when making tonal corrections and, even worse, images can potentially be saved by accident with incorrect tonal values. Conventions in consumer imaging include only 8-, 16- and 32-bit images in the TIF format. In scientific imaging, 10-, 12- and 14-bit images can be placed in the TIF format, but these lower bit depths are either displayed on a 16-bit scale in consumer software, leading to a darker image display (as in Photoshop), or automatically scaled to 8 bits.

The practical consequence of bit depth is the number of tones contained in the image. 16-bit images contain 65,536 possible tones (0–65,535, or 2^16 including 0), whereas 12-bit images contain 4,096 possible tones (0–4095, or 2^12 including 0). Thus, a 12-bit image displayed with a 16-bit x-axis would display only tones a user could not make out, because these sit in the darkest range.
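The 1/16th relationship, and the linear scaling that fixes the display, can be verified directly (the pixel values below are hypothetical 12-bit data):

```python
import numpy as np

# A 12-bit image (values 0-4095) stored in a 16-bit container appears
# nearly black, because 4095 is only ~1/16th of the 16-bit maximum 65535.
max_12bit = 2**12 - 1     # 4095
max_16bit = 2**16 - 1     # 65535
print(max_12bit / max_16bit)       # about 0.0625, i.e. roughly 1/16th

# Scaling by 16 (a left shift by 4 bits) fills the 16-bit range for display:
img12 = np.array([0, 1024, 4095], dtype=np.uint16)  # hypothetical 12-bit data
img16 = img12 * 16
print(img16)                       # values 0, 16384, 65520
```

The scaling is linear, so relative tonal relationships are preserved; only the display brightness changes.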

The figure above shows how images are displayed in Image J and in Photoshop when the “Appearance” setting is on automatic in Image J. If using other software for your work, determine whether your images are set automatically, and whether that means your images are auto-scaled to appear optimal when displayed. If images are auto-scaled, you will not see the inherent brightness or darkness of your image.

Note that an optimal appearance as a result of auto-scaling is useful when visualizing images: when adjusting tones and when changing bit depths (e.g., from 12-bit to 8-bit) for publication, however, unexpected results can occur.

It’s best to know the bit depth of your image and then to set the display to match the bit depth. In Image J that is done as follows: Edit > Options > Appearance: select x-axis to match bit depth of image.

calibration standards in microscopy

Common Mistake #3: No Calibrated Slide Taken at Microscopy Sessions
Some method for determining that microscope and camera settings are the same one day as they are the next must be implemented. Otherwise, it is difficult to determine that conditions are the same. This is especially important when troubleshooting unusual results at a microscopy session.

Calibration absolutely must be done when measuring optical intensities and densities, unless ratioing within images against a constant, such as what is done with calcium ion imaging.

Calibration can be against known standards, or reference standards. A reference standard provides an average tonal value at the first microscopy session, and at subsequent sessions, when necessary, the same average tonal value is achieved by adjusting exposure levels or attenuating light levels.
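Matching a later session to the baseline by a linear scale factor might be sketched as follows (the function name and numbers are hypothetical; any such adjustment should be reported when publishing):

```python
import numpy as np

def scale_to_baseline(img, baseline_mean):
    """Linearly scale an image so its mean matches a baseline reference.

    Used to match microscopy sessions over time against a reference
    standard; the scaling is linear, so relative intensities are preserved.
    """
    factor = baseline_mean / img.mean()
    return img * factor, factor

# Hypothetical reference-standard readings from a second session:
session2 = np.array([40.0, 60.0, 80.0])
matched, factor = scale_to_baseline(session2, baseline_mean=120.0)
print(factor)          # -> 2.0 (session 2 was half as bright as baseline)
print(matched.mean())  # -> 120.0
```

In practice it is preferable to adjust exposure or attenuate light at acquisition time so that the standard reads the same, rather than rescaling afterward.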

Two sources for calibration devices are shown in the above image, the top image for fluorescence and the bottom for electrophoretic samples (gels, blots, etc).

Other sources for fluorescent standards include:
Argolight slide:
Fluorescent beads: Spherotech (for varied intensities)

Common Mistake #4: No Koehler Illumination (for Brightfield images)
Under the stage, on microscopes intended for brightfield viewing and imaging, a condenser (lens) can be adjusted to widen or narrow the cone of light that strikes the sample.

The ideal cone of light to provide the highest resolution is referred to as “Koehler” Illumination (for info on how to set, go to YouTube link).

Often the condenser aperture setting is used to reduce the brightness of the light going to the eyepiece or camera, or to increase contrast. If the cone of light is narrowed too far, false structures can be created in the image (see image to right in above figure).

Note that some microscope systems automatically set Koehler.

Common Mistake #5: No White Balancing, or Poor White Balance
When taking images of Color Brightfield samples (e.g., H&E stained samples), the camera must be white balanced at the beginning of the session, or colors will be incorrectly interpreted by the camera.

Generally, the camera is white balanced by clicking a button AFTER the sample is moved away from the imaging area.

It is critical that any automatic white balancing is then turned off, or each image will be slightly different in color.

Check with the microscope salesperson to understand how to white balance, or with a colleague who knows the software. Note that once the light is attenuated to the ideal brightness the light must stay at the same setting or color balancing must be done every time the light intensity is attenuated. However, microscopes using xenon and LED light sources are not subject to great differences in color temperature when light is attenuated.

Common Mistake #6: No Frame Averaging
When a sample is static (e.g. a fixed sample on a microscope slide), and when fluorescence is low, take more than one picture and average these together (frame averaging). This method will reduce the noise that commonly accompanies the long exposures and high gain necessary to image dim samples.
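The noise reduction follows from basic statistics: averaging N frames of a static sample reduces zero-mean noise by roughly √N. A simulated sketch with hypothetical numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = 100.0          # the static sample's true intensity
n_frames = 16

# Each frame is signal plus zero-mean noise (sigma = 10, hypothetical):
frames = true_signal + rng.normal(0, 10, size=(n_frames, 64, 64))
averaged = frames.mean(axis=0)

# Averaging N frames reduces the noise by roughly sqrt(N):
print(frames[0].std())   # single-frame noise, about 10
print(averaged.std())    # averaged noise, about 10 / sqrt(16) = 2.5
```

With 16 frames the noise drops by about a factor of 4, which is why frame averaging is so effective for dim, static samples.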

In confocal imaging, the most common method for frame averaging is known as Kalman Averaging.

Note that pixelation (as opposed to noise) can also result from taking pictures without frame averaging, especially when images are collected on a confocal.

Picture of image with JPEG artifacts

Common Mistake #7: Saving as a High (Lossy) Compression JPEG
When saving from many software acquisition programs as a JPEG image, a high compression is applied. In so doing, visual data is thrown away and replaced with a “blocky” appearance (see bottom image on above figure).

JPEG compression is known as “LOSSY” compression. The word “Lossy” implies what the compression does: it loses visual data.

Additionally, JPEG images may not retain metadata, so information about the image is thrown away.

Note that in Photoshop, JPEG compression can be accomplished with low compression. Low compression (a value of 12 in Photoshop) throws away data that is imperceptible to the eye.

gamma changes picture

Common Mistake #8: Gamma Settings at Values Other than 1 (one)
Gamma settings at a value of 1 retain linear gray tones, assuming that the acquisition device records tones linearly (not always 100% true, especially when noise interferes close to min tonal values).

When stating that one tone is brighter or darker than another, and when performing densitometry on images, a gamma of 1 is crucial because it preserves linearity.

gamma change is necessary as shown by this image

However, when samples contain a limited number of tones, a change in the gamma is necessary to collect the image as it is seen by eye (as seen in image above). Conversely, when the dynamic range of tones in the sample is greater than the dynamic range of the camera, a gamma change is essential to collect all the tonal information. In these instances, it is useful to change the gamma. Be sure to report this change in Methods, or in ancillary material when publishing.

information on what gamma means

Gamma is applied, according to the formula shown on the slide above, to each pixel of an image. Note that a gamma calculation is often more complex than what is described.
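In its simplified textbook form, the per-pixel formula is output = max × (input / max)^gamma. A sketch assuming 8-bit tones (as noted above, real devices often use a more complex transfer function):

```python
import numpy as np

def apply_gamma(img, gamma, max_val=255):
    """Apply output = max_val * (input / max_val) ** gamma to each pixel.

    gamma == 1 leaves tones unchanged (linear); gamma < 1 brightens
    mid-tones; gamma > 1 darkens them. Endpoints (0 and max) are fixed.
    """
    return max_val * (img / max_val) ** gamma

tones = np.array([0.0, 64.0, 128.0, 255.0])
print(apply_gamma(tones, 1.0))   # unchanged: linear tones preserved
print(apply_gamma(tones, 0.5))   # mid-tones brightened, endpoints fixed
```

Because only gamma = 1 leaves the mapping linear, any other value invalidates densitometric comparisons unless reported and accounted for.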

Common Mistake #9: Taking Pictures that Exceed the Dynamic Range of the Camera
When a picture is acquired, and parts of the image are at max (pure white) and min (pure black) tones, you over- and under-expose these areas. The result is a permanent loss of detail.

It is inevitable that a small portion of significant tones (not artifact) will be at max and min values (about 1 percent), but any more than that results in loss of detail in structures.

Note that artifact (debris, detritus) will often be at max or min values, but these are not significant parts of the image.

LUT applied to images

Many image acquisition systems in the sciences have a way to overlay a graphic interface on your image, called a LUT (Look Up Table). The LUT will notify you that you have under- and over-exposed parts of your image, as in the image above.

You may also be able to bring up a histogram. By looking at the x-axis, you can see when you have spread the histogram to either the right (max) or left (min) end. If you have, you are under- and over-exposing your image.

If the image is over-exposed (at max reading for the image bit depth: 255 for 8-bit, 4095 for 12-bit, and 65,535 for 16-bit), you can decrease the dwell time, shutter, exposure, gain, aperture, etc. If the image’s black level is at a reading of 0, you can increase the Black Level (also found as Contrast).
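The same check can be done numerically on a saved image: count the pixels sitting at the tonal extremes. A minimal sketch (the helper name and the ~1% rule of thumb from above are illustrative):

```python
import numpy as np

def saturation_report(img, max_val):
    """Return the fraction of pixels at the min (0) and max tonal values.

    More than about 1% at either end suggests clipped detail that cannot
    be recovered in post-processing.
    """
    n = img.size
    return (img == 0).sum() / n, (img == max_val).sum() / n

# Hypothetical 8-bit image with a clipped highlight region:
img = np.full((100, 100), 180, dtype=np.uint8)
img[:20, :20] = 255                      # 4% of pixels over-exposed
lo, hi = saturation_report(img, max_val=255)
print(hi)  # -> 0.04, above the ~1% rule of thumb: reduce exposure
```

For 12- or 16-bit images, pass `max_val=4095` or `max_val=65535` respectively.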

Uneven illumination correction

Common Mistake #10: Not Correcting for Uneven Illumination
Microscope images, especially at lower magnification (10x, 4x, 2x, etc) suffer from uneven illumination from edge to edge. Uneven illumination can be corrected by taking a flatfield image (also known as a Blank Field or Shading image). The flatfield image is divided into the original image to create an evenly illuminated image.
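The correction itself is a per-pixel division. A minimal sketch (the helper is illustrative, not from any package; most acquisition software provides this function, and a background/dark image taken with the light source off may also be subtracted first):

```python
import numpy as np

def flatfield_correct(raw, flat):
    """Divide out uneven illumination using a blank-field (flatfield) image.

    corrected = raw / flat * mean(flat), so the overall brightness is
    preserved while the illumination pattern is removed.
    """
    flat = flat.astype(float)
    return raw.astype(float) / flat * flat.mean()

# Hypothetical field twice as bright on the left as on the right:
flat = np.array([[200.0, 100.0], [200.0, 100.0]])
raw = np.array([[100.0, 50.0], [200.0, 100.0]])   # same density per row
corrected = flatfield_correct(raw, flat)
print(corrected)  # each row becomes uniform: [[75. 75.] [150. 150.]]
```

After correction, features of equal density read equally regardless of where they sit in the field, which is exactly what densitometry requires.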

Flatfield correction must be done when measuring optical intensity or density, unless measuring from the same location on the sample, or when measuring the entire field, assuming the uneven illumination pattern is the same for all images.

Additionally, flatfield correction is useful when montaging (stitching) images together so that brighter sides of images do not border darker sides of the next image.

Many camera acquisition software packages include a means to flatfield correct. This also involves taking a background image, which is an image of the sample with the light source turned off.

Find out from a colleague or microscope salesperson how to flatfield correct when acquiring images.

Note that uneven illumination can result in false densitometric measurements, depending on how the baseline is set (see bottom image above).

Common Mistake #11: Using Automatic Exposure
Method for setting exposure
Generally, at a microscopy session, the first picture is taken of a representative sample with experimental evidence, the more evidence the better (e.g., the microscope slide with the most fluorescent labeling). To make setting the exposure easier, you may use an automated method to determine exposure on this first sample, generally by clicking an “Auto Exposure” button. Then disable the automatic exposure (if available) and manually set the exposure while looking at a LUT overlay or a histogram, until the brightest significant part of the sample is near the max value (non-significant labeling would include artifacts).

Keep the same manual setting for the remainder of your pictures.

Otherwise, if you have artifacts in the image, the exposure will change from image to image (see above image on left without black features vs image on right).

Manual exposure is mandatory when reading intensities or densities of images.


See the Instruction Page for an index of all instructional videos and documents on post-processing and microscope methods.