Avoiding Twisted Pixels: Ethical Guidelines for the Appropriate Use and Manipulation of Scientific Digital Images

Abstract

Digital imaging has provided scientists with new opportunities to acquire and manipulate data using techniques that were difficult or impossible to employ in the past. Because digital images are easier to manipulate than film images, new problems have emerged. One growing concern in the scientific community is that digital images are not being handled with sufficient care. The problem is twofold: (1) the very small, yet troubling, number of intentional falsifications that have been identified, and (2) the more common unintentional, inappropriate manipulation of images for publication. Journals and professional societies have begun to address the issue with specific digital imaging guidelines. Unfortunately, the guidelines provided often do not come with instructions to explain their importance. Thus they deal with what should or should not be done, but not the associated 'why' that is required for understanding the rules. This article proposes 12 guidelines for scientific digital image manipulation and discusses the technical reasons behind these guidelines. These guidelines can be incorporated into lab meetings and graduate student training in order to provoke discussion and begin to bring an end to the culture of "data beautification".

Keywords: Digital image, Ethics, Manipulation, Image processing, Microscopy

Introduction

The transition to digital image data over the last two decades has provided scientists with a wealth of opportunities for furthering their research. In particular, semiconductor-based sensors have allowed microscopists (as well as astronomers, radiologists, et al.) to move away from photographic film and tube-based cameras. Charge-coupled device (CCD) 1 technology allows the capture of digital images at high speed, high spatial resolution, and high bit depth (albeit not always at the same time or with the same camera).

The scientific community is growing concerned that scientists are not taking sufficient care when handling digital image data (Abraham 2007; Abraham et al. 2008; Benos 2006; Castillo 2008; Committee on Ensuring the Utility and Integrity of Research Data in a Digital Age, National Academy of Sciences 2009; Couzin 2006; Nature Cell Biology Editorial 2004a, b, 2007, 2009; Nature Editorial 2006; Gravitz 2006; Guneri and Akdeniz 2004; Krueger 2002, 2005; MacKenzie et al. 2006; McInnes 2001; Nouraei et al. 2005; Paalman 2000; Pearson 2005, 2007; Pritt et al. 2003; Rossner and Yamada 2004; Schekman 2008; Wager et al. 2009). Many of the things that were done in the darkroom years ago are no longer appropriate choices for preparing digital images for publication. In addition, the damage to the credibility of science caused by the publication of falsified image data has created a climate in which the full disclosure of image manipulations is becoming a requirement (Nature 2009).

A strong parallel can be drawn between scientific publication and photojournalism. Concerns regarding the truthfulness of images and the potential loss of credibility have been voiced since the early days of newspaper photography (Wheeler 2002). The National Press Photographers Association (NPPA) has long had a professional code of ethics, last updated in 2004 (NPPA 2004), and the Association drafted a statement of principle specific to digital images as early as 1990 (NPPA 1990). The NPPA website includes an on-line ethics training module entitled "Ethics in the Age of Digital Photography" that was written in 1999 (Long 1999).

Scientists have not been as proactive as photojournalists in producing a code of ethics regarding the appropriate manipulation of digital images. The scientific community first began to express concern about digital images in the early 1990s (Anderson 1994; Taubes 1994). At that time, there were no known cases of fraudulently manipulated digital images, but already journal editors were calling for the development of policies to guard against fraud and to determine boundaries for acceptable image manipulations (Anderson 1994). Informal discussions on the Confocal and Microscopy listservers (Archives of the Confocal listserver 1995; Archives of the Microscopy listserver 1998), as well as several short articles in Microscopy Today (Oliver 1998; Revel 1993a, b), indicated that microscopists and microscopy core facility managers were trying to determine what was appropriate. In the late 1990s, a short list of digital imaging guidelines was posted on the World Wide Web (WWW) by the Fred Hutchinson Cancer Center (Image Analysis Lab ca. 1995). Radiologists (Richardson et al. 1994) and pathologists (Furness 1997) expressed their concern about the potential for fraud, but the proposed guidelines were very general in nature. The American Academy of Dermatology (1997) established a fairly clear policy in 1997, possibly making it the first professional society to go on record with specific digital image guidelines. By 1998, the Council of Biology Editors had begun to make recommendations regarding the workflow aspects of digital imaging; however, they did not explicitly delve into ethical issues (Rossner et al. 1998).

Arguably, the first article to clearly discuss the ethics of the manipulation of scientific digital images appeared in 2000. Because it appeared in a specialty journal, Journal of Biocommunication, it did not receive a wide audience (Hayden 2000). 2 In 2001 some of the guidelines presented below were published as a short, ten-point essay on the WWW, entitled "Digital Imaging: Ethics" (Cromey 2001). In 2001–2003, several good articles were published in a number of journals (Microscopy Society of America 2003; Cutrone and Grimalt 2001; Krueger 2002; Pritt et al. 2003; Suvarna and Ansary 2001). The publication in 2004 of the high-profile article by Rossner and Yamada (2004) in the Journal of Cell Biology (JCB) finally brought the issues related to scientific image manipulation to the forefront and, for the first time, provided an overview of the problem from an editor's perspective. The article revealed that the JCB had begun the practice of closely examining images in accepted, peer-reviewed manuscripts in 2002 (Rossner and Yamada 2004). If the journal had questions about the image data in a manuscript, the editors requested more information from the authors (Pearson 2005). According to JCB managing editor Michael Rossner, approximately 25% of accepted manuscripts contained at least one figure that needed to be remade, due to manipulations deemed in violation of the journal's published guidelines (Rossner 2006). In about 1% of accepted manuscripts, the JCB has revoked an article's acceptance due to fraudulent image manipulations that affected the interpretation of the data (Rossner 2006), 3 occasionally reporting this information to an institutional investigative body. In a 2006 interview, Rossner observed that the 25% value had "held steady over time" (Nature Editorial 2006). 4,5

The United States Department of Health and Human Services, Office of Research Integrity (ORI) has been tracking image falsification issues, and has seen an increasing number of cases in which the misuse of image data met the criteria for allegations of scientific misconduct (Krueger 2005) (up from 4.1% of cases in 1993–1994 to 68% in 2007–2008, Krueger 2009). ORI investigator John Krueger believes that in many instances where students falsified data, the appropriate oversight of a mentor might have averted the problem (Krueger 2002; Wright et al. 2008).

Despite the plea of an editorial in the Journal of Clinical Investigation that problem scientists "stop misbehaving" (Neill 2006), the high level of over-manipulated images reported by the JCB indicates an ignorance of the issues involved in working with digital images and of the seductive power of image-processing software. Since the vast majority of these cases do not meet the classification of fraud, it would appear that educating scientists is an important first step in changing the culture of "data beautification" (Nature Cell Biology Editorial 2006b). The problem may be far greater than even the JCB appreciated. If journals like the JCB have such a high percentage of problem images, in spite of their well-known screening program, one should ask what is happening at journals that do not forensically examine submitted images.

Journals and professional societies have begun to specifically address the issue of appropriate image manipulation; however, there is occasional disagreement in the details. 6,7 In addition, the policy statements and instructions to authors do little to educate readers and society members as to why some manipulations are appropriate and others are not. The guidelines for ethical digital image use and manipulation provided below are accompanied by a brief explanation of the scientific importance of each guideline in order to make this a useful catalyst for discussion in lab meetings and graduate student training programs. Such education is badly needed, since—in the author's experience—the problem is not the few individuals who intentionally falsify images, but the many who are ignorant of basic principles.

The Guidelines 8

1. Scientific Digital Images are Data that can be Compromised by Inappropriate Manipulations

Digital images should be acquired in a manner that does not intend to deceive the viewer or to obscure important information that might allow for alternative interpretations of the data. Images should be acquired from well-designed experiments with all the appropriate controls and using properly maintained and aligned instrumentation (North 2006). Images should be representative of the variability in the specimen (McNamara 2006). This means that the user has carefully studied the specimen and is very familiar with the appearance of the control and normal specimens.

Because manipulating digital image data is so easy, users sometimes feel they can work with images with less scientific rigor than would typically be applied to numerical data. The greyscale or color data in a digital image represent a numerical sampling of the specimen as presented by the data acquisition system (e.g., optical systems, like telescopes and microscopes) to the sensor (e.g., cameras, photodiodes, or photomultipliers). The data acquisition system and sensor are subject to all of the limitations and aberrations that physics and instrument design (engineering, electronics, software, etc.) may impose on the two devices. To the observer, the image data may appear to accurately represent what can be seen. However, it is the user's responsibility to understand the limitations of the particular instrument.

The process of image acquisition often allows the user to choose specific settings on the acquisition device. Using the example of fluorescence microscopy, inexperienced users tend to over-saturate detectors in the pursuit of a brighter image, or to be far too aggressive in setting the black level in an attempt to achieve a cleaner background. Manipulating the data acquisition settings in this way is poor technique and misrepresents the data. Over-saturation creates whole areas in an image where the pixel values are at the maximum value (e.g., 255 or white in an 8-bit greyscale image). (See Fig. 1.) Any subtleties of information in that area are lost and the "washed out" information can never be recovered. Since the digital image becomes the record of the experiment, that record will always be incomplete. Aggressive use of black level settings can create an almost perfectly "clean" background with the pixel values set at the minimum value (e.g., 0 or black in a greyscale image). A low level of background signal is common to most specimens, as is a certain amount of camera and detector noise. A perfectly clean background would be unlikely, and probably indicates inappropriate acquisition settings and/or overly zealous image processing (Adler 2005). The presence of some background, rather than calling for artificial elimination, should be considered a reassurance of the authenticity of the data (Nature Editorial 2006).
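
As a practical check, the image histogram makes clipping at either end of the intensity range easy to detect; most acquisition software displays one live. The short sketch below is a minimal illustration only (it is not from the original article and assumes the image has been loaded into a NumPy array, for example with the tifffile package; the filename is hypothetical).

import numpy as np
import tifffile  # assumed helper library for reading TIFF files

img = tifffile.imread("original_image.tif")  # hypothetical 8-bit greyscale image

# Pixels pinned at the extremes suggest clipping: 0 = crushed blacks, 255 = saturation.
n_black = np.count_nonzero(img == 0)
n_white = np.count_nonzero(img == 255)
total = img.size
print(f"pixels at 0 (black): {n_black} ({100.0 * n_black / total:.2f}%)")
print(f"pixels at 255 (saturated): {n_white} ({100.0 * n_white / total:.2f}%)")

# A full histogram makes truncation spikes at either end easy to spot (compare Fig. 1).
hist, _ = np.histogram(img, bins=256, range=(0, 256))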

Fig. 1

Histograms and images. Confocal microscopy image of a mollusk embryo at the 4-cell stage, showing the cytoskeleton of a single cell. The image is courtesy of James Cooley and Lisa Nagy, University of Arizona. Unprocessed image—The original 8-bit (256 shades of grey) image. No post-acquisition image processing was performed. The intensity of this image ranges from the darkest pixel value of 11 to the brightest pixel value of 186. The intensity histogram scale, by convention, runs from darkest on the left, to brightest on the right. There are no true black or white pixels in this image. Appropriately processed image—The same image, after an appropriate contrast/histogram stretch. Using the Photoshop levels tool, the value of 11 from the original image was re-mapped to black (=0) and the value of 186 was remapped to white (=255). Note that the shape of the histogram is essentially the same as in the unprocessed image. The gaps in the histogram are a result of the contrast/histogram stretch. This is generally considered an acceptable image processing step. With color images that will be used for illustrative purposes, it can be useful to apply the levels tool in this way to each of the red, green, and blue channels. If the color images are for quantitative use, or if the relationships of the intensities or colors to one another will be interpreted in any way, this is not recommended. Over-processed image—The same image, this time with a contrast/histogram stretch that was too aggressive. Using the Photoshop levels tool, the value of 20 from the original image was re-mapped to black and the value of 145 was re-mapped to white. Compare the shape of the intensity histogram with the original. Note that the data at each end of the histogram have changed. The data at the ends of the original histogram have been truncated, creating the spikes at black and white (arrows). Nothing scientific can be inferred about these white and black pixels, as their relationship to the rest of the data has been lost. This is a common image processing mistake, arrived at by a number of different techniques, as users try to create striking, "contrasty" images. Boxes—50 × 50 pixel areas from the same area in the un-processed and overprocessed images above. The areas have been enlarged using the Photoshop CS2 nearest neighbor algorithm. Box 1—Note the loss of information in the darkest pixels. The loss is easier to see in the intensity histogram than in the image (arrow). Scientists are often not interested in this end of the histogram; however, backgrounds that are too "clean" do not accurately represent real biology. Box 2—Note the over-saturation of many of the brightest pixels in this image (arrow). Since many journals are using on-line images as the "journal of record", the data of record are missing some of the fine detail that may be of more interest to the reader than they were to the authors

Correct use of the full dynamic range of a given detector during acquisition will give the best scientific results. If the images do not present well, then appropriate and properly documented image processing can be performed. Even at the image processing step, the data must not be mathematically altered in a way that recreates the conditions of over-saturation and artificially clean backgrounds.

Once the data have been acquired, they are typically displayed on a computer screen as a matrix or grid of individual picture elements (pixels). Each pixel has an underlying numerical value that tells the computer what shade of grey or color the pixel represents. 9 Greyscale images are typically displayed as 256 shades of grey (8 bit), with the ability to acquire data with as many as 65,536 shades of grey (16 bit), although very few computer monitors can display more than 256 shades of grey. Color images are a mix of red, green, and blue values, ranging from 8 bits of each color (16.7 million color shades), up to 16 bits of each color (281 trillion color shades). Most computer monitors are able to display a theoretical maximum of 16.7 million color shades (24 bit). Human beings may only be able to accurately discern approximately 30 grey shades and somewhat more than a thousand colors (Russ 2004), 10 with the result that an image often contains more information than can be detected by the eye. John Russ, author of The Image Processing Handbook (Russ 1998), states that "The basic message is that humans are not very good observers, that our vision system ignores a lot of information, that having names and labels for recognized features is very important, and that we often think we see what we expect to see" (Russ 2004). Given the limitations of humans when working with digital image data, it is imperative that the data be acquired and manipulated with care.
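
The bit-depth figures quoted above follow directly from powers of two; the trivial calculation below (an informal sketch, not part of the original text) reproduces them.

# The number of distinct levels at a given bit depth is 2**bits.
for bits in (8, 16):
    print(f"{bits}-bit greyscale: {2 ** bits:,} shades")        # 256 and 65,536
print(f"24-bit RGB (8 bits per channel): {2 ** 24:,} colors")   # 16,777,216 (~16.7 million)
print(f"48-bit RGB (16 bits per channel): {2 ** 48:,} colors")  # 281,474,976,710,656 (~281 trillion)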

2. Manipulation of Digital Images Should only be Performed on a Copy of the Unprocessed Image Data File (Always Keep the Original Data File Safe and Unchanged!)

The original, unprocessed image data file is the standard to which the final processed image can be compared. The investigator must be able to compare the original image to a manuscript figure before submitting it for publication, to ensure that important scientific information has not inadvertently been processed out. A copy of the unaltered image is also the strongest protection against accusations of misconduct. A growing number of journals have indicated in their instructions to authors that the editors may request a copy of the original image if questions arise regarding the image processing (Journal of Cell Biology 2009; Nature 2009). Finally, working with a copy of the original image allows users to recover from a mistake made during image processing. The Microscopy Society of America has suggested that all data be copied immediately to read-only media such as CD-R or DVD-R, to prevent the possibility of over-writing the data (MacKenzie et al. 2006); however, given the ever-increasing size of data sets (Committee on Ensuring the Utility and Integrity of Research Data in a Digital Age, National Academy of Sciences 2009), other forms of high-capacity, redundant, protected storage are probably more appropriate.

In many cases the original image file may be in a manufacturer's proprietary file format that includes metadata containing some or all of the instrument settings used during acquisition, and may include magnification information. This supplemental information is vital if a user needs to repeat an experiment. Some journals require these data acquisition settings as part of an article's online supplemental materials (Journal of Cell Biology 2009; Nature 2009). It is often necessary to convert the image data from a manufacturer's proprietary image file format into a more universal file format. The Microscopy Society of America currently recommends only the tagged image file format (TIFF) for scientific images (MacKenzie et al. 2006). TIFF is a loss-less file format capable of storing image information in a wide range of bit depths, and TIFF files can be opened by almost every image-processing program currently available. The JPEG file format is not appropriate for scientific digital images (see guideline 10 for a more detailed explanation).

Research that falls under the United States Food and Drug Administration's "Final Rule on Electronic Records and Electronic Signatures" (21 CFR part 11) is subject to specific requirements for maintaining the integrity of the original image (Horn and Sterchi 2005; Tengowski 2004). The nature of the work done by forensic scientists (Scientific Working Group Imaging Technology 2004) and in clinical/diagnostic labs (Pritt et al. 2003) requires particular attention to protecting the integrity of the original image. Given the ability to analyze images from online articles and supplementary materials, scientists need to ensure that their data retention practices allow access to the unprocessed data in the event their research is questioned sometime in the future (Committee on Ensuring the Utility and Integrity of Research Data in a Digital Age, National Academy of Sciences 2009).

3. Simple Adjustments to the Entire Image are Usually Acceptable

Simple adjustments include techniques that are similar to standard photographic darkroom techniques (e.g., different contrast grades of paper, changes in development time). With digital images, this includes reasonable adjustments of the brightness and contrast, levels, and gamma settings. Users should become familiar with and understand the intensity histogram graph that is part of most image processing programs in order to protect against over-processing an image. (See Fig. 1.) Photoshop users should avoid that program's auto contrast, auto levels and auto color tools: these image adjustment tools tend to over-process images, and it is nearly impossible to specifically report what the tools have done to the image.

Brightness and contrast are useful adjustments when performed in moderation. These adjustments are primarily tools for greyscale images, although they will work on color images. Brightness shifts the entire intensity histogram to the right (increasing brightness) or to the left (decreasing brightness). Contrast expands the intensity histogram (increasing contrast) or contracts the histogram (decreasing contrast). Large adjustments to brightness and contrast are usually not advisable, since it is very easy to truncate or over-saturate the intensity information in the image.

Performing a contrast or histogram stretch can be useful for transforming a low-contrast image into a higher contrast image. In Photoshop, this is typically performed with the black and white triangles of the Levels tool. When performing this technique, the darkest level on the intensity histogram is re-mapped to black and the brightest level is remapped to white. No additional information is added, no information is lost, and, by spreading the contrast over the entire available range of intensities, the new image appears to be much higher contrast than the unprocessed image. Color correction of images (e.g., to accommodate the different sensitivities to color that are often found in digital cameras) can be performed similarly by adjusting the red, green and blue channels individually. A contrast stretch is usually considered a simple adjustment. Although the term "histogram equalization" may sound similar to a histogram stretch, it is a different form of image processing that is not a linear operation. This technique can alter the relationship between brightness and structure in an image (Russ 1998). The use of histogram equalization must be reported in the figure legend or the methods section.
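
The contrast/histogram stretch described above is a simple linear remapping of intensities. A minimal sketch follows (assuming an 8-bit greyscale image in a NumPy array; in practice the black and white points are read from the image histogram, as with the Photoshop Levels tool).

import numpy as np

def contrast_stretch(img, black_point, white_point):
    # Linearly remap [black_point, white_point] onto [0, 255] for an 8-bit image.
    scaled = (img.astype(np.float64) - black_point) * 255.0 / (white_point - black_point)
    return np.clip(scaled, 0, 255).astype(np.uint8)

# For the unprocessed image in Fig. 1 (darkest pixel 11, brightest 186):
# stretched = contrast_stretch(img, 11, 186)
# Choosing 20 and 145 instead would truncate both ends of the histogram, producing
# the spikes at black and white shown in the over-processed example in Fig. 1.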

Image gamma is the digital equivalent of the photographic exposure-density curve (Russ 1998). Changing the gamma primarily affects the mid-range of intensities and does not change the darker or lighter intensity pixels to the same extent (see Fig. 2). Because gamma is described by a curve, this adjustment is considered non-linear. In almost all cases, changes to image gamma must be declared in the figure legend. Those working with images on Apple Macintosh computers (gamma = 1.8) or Microsoft Windows computers (gamma = 2.2) should note that different operating systems make different assumptions about the gamma settings inherent in the monitor display. Computer monitor calibration devices, while primarily intended for the graphic design community, can be used to ensure that all members of a research group are seeing the images under similar conditions. Users of Adobe Photoshop should run the Adobe gamma utility to ensure that their monitors are calibrated to display images correctly (Adobe Systems 2002).

Fig. 2

Gamma. A greyscale gradient from pure black to white was created using Adobe Photoshop CS3 with an assumed gamma level of 1.0. Gamma values of 1.5, 1.25, 0.75 and 0.5 were applied to the gradient using ImageJ 1.37 (Rasband 1997–2009). A line profile analysis was performed at each gamma level using ImageJ. The curves were plotted and smoothed with a polynomial trendline in MS Excel and the trendlines are presented in the graph. The x axis is the distance along the gradient and the y axis is the 8-bit greyscale intensity value. To determine how much a greyscale intensity value has been shifted by the application of gamma (a non-linear function), find the place where a y axis gridline intersects the gamma = 1 trendline (solid line), and follow it over to the left or right along the gridline to one of the other gamma trendlines. Gamma enhances the mid-range intensity values more than the extremes of dark or light. Below the graph are examples of the gradients with gamma values = 0.5, 1.0 and 1.5. The arrows show where the greyscale intensity of 96 falls on each gradient
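
In numerical terms, a gamma adjustment raises the normalized intensity to a power, which is why the mid-tones move while black and white stay fixed. The sketch below uses the convention applied in Fig. 2 (ImageJ-style, where gamma < 1 brightens mid-tones); note that some programs, including the Photoshop Levels dialog, effectively use the reciprocal exponent, so the direction of the change depends on the software. This example is illustrative and assumes NumPy.

import numpy as np

def apply_gamma(img, gamma):
    # Normalize to 0-1, raise to the gamma power, and rescale to 8-bit.
    normalized = img.astype(np.float64) / 255.0
    return np.clip(255.0 * normalized ** gamma, 0, 255).astype(np.uint8)

# Mid-grey illustrates the non-linearity; the extremes 0 and 255 are unchanged.
# 255 * (128/255) ** 0.5 -> ~181 (brightened)
# 255 * (128/255) ** 1.5 -> ~91  (darkened)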

What constitutes a "reasonable" adjustment of image settings such as brightness and contrast, etc.? The instructions to authors of the Journal of Cell Biology rightly describe such adjustments as those that "do not obscure, eliminate, or misrepresent any information present in the original" (2009). Keeping track of the "protocol" used to manipulate individual images will allow the user to reproduce the processing that was performed on the image data and report it to the journal in the event of questions.

4. Cropping an Image is Usually Acceptable

To crop an image is "to cut off part of an image, such as unneeded sections of a graphic or extra space around the borders" (Microsoft Corporation 1997). Often, cropping is used to simplify an image so that the viewer is not distracted by extraneous information. In many cases, acquiring an image involves a judgment as to what is important. Framing the image during acquisition is, in a sense, a form of cropping and scientists should make every effort to avoid observer bias when acquiring images. Acquiring adequate numbers of representative images of controls and treated groups (McNamara 2006; North 2006) allows a user the opportunity to review the data later, in order to ensure that the interpretation is still valid.

Cropping is usually considered an acceptable form of image manipulation (MacKenzie et al. 2006; Rossner and Yamada 2004). When cropping an image, researchers should carefully examine their motivation. Is the crop intended to improve the "composition" of the image? Or is it being used to remove something that they do not like or understand in the image, or to hide something that disagrees with their lab's preferred hypothesis? Legitimate reasons for cropping include centering an object of interest, trimming "empty" space around the edges of an image, and excluding a piece of debris. Questionable forms of cropping include removing information in a way that changes the context of what remains in the image after it has been cropped. An example of changing the context would be cropping out dead or dying cells to display only a healthy cell, or cropping out gel bands that might disagree with the hypothesis being proposed in the paper.

When cropping an image, it is important to leave enough pixels so that the image will reproduce well in a scientific journal. Most journals require a minimum of 300 pixels per inch (ppi or dpi) for digital images, meaning that a figure that is 3.5 inches wide must be 1050 pixels across. If the image has to be cropped so tightly that an inadequate number of pixels will remain, the image should be reacquired at a higher magnification or higher pixel density. Enlarging an image with a small number of pixels can lead to artifacts (see guideline 12). In other words, one should not use image manipulation to replace good science. Rather than enlarge a picture that has too few pixels, one should reacquire the image with an adequate number of pixels to correctly reproduce the data. It is simply good scientific technique.
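
Whether a cropped image still contains enough pixels for print is a matter of simple arithmetic; the sketch below merely restates the 300 ppi example from the text (the 7-inch figure width is a hypothetical value for illustration).

def min_pixels(width_inches, ppi=300):
    # Minimum pixel width needed to print at the journal's required resolution.
    return int(width_inches * ppi)

print(min_pixels(3.5))   # 1050 pixels for a figure 3.5 inches wide at 300 ppi
print(min_pixels(7.0))   # 2100 pixels for a hypothetical 7-inch-wide figure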

5. Digital Images that will be Compared to one Another Should be Acquired under Identical Conditions, and any Post-acquisition Image Processing Should also be Identical

Accurate comparison of digital images from treated and control samples is difficult or impossible if the images have not been acquired under similar conditions (North 2006; Waters 2009). This can be a problem, particularly with confocal fluorescence images and other imaging modalities where the signal is electronically amplified. The process of signal amplification may cause small features in some images to appear larger or smaller than they do in other images solely due to the degree of amplification and artifacts of aliasing. (See Fig. 3.)

Fig. 3

a Sampling—theoretical. This illustration shows three bright (intensity = 255) spots that are aligned in different ways with the pixels on a sensor. Each spot is the same size; in each row the detectors are sampling the spot at higher frequencies. The left-most spot can be accurately measured at all three sampling levels. The other two spots give widely varying measurements that are more accurate as the sampling level increases. At 10× oversampling (not shown), all three spots measure exactly 10 pixels in both the x and y axes. In this example, it is assumed that there is no space between the pixels, though in reality this is never the case. In places where the spot did not fill the pixel, the mean intensity was measured using ImageJ (Rasband 1997–2009). This is similar to how a CCD camera treats a partially-illuminated pixel. These reduced-intensity pixels demonstrate how a feature is displayed using aliasing. If a user is too aggressive when processing an image, the aliased pixels can become bright enough to look like a real feature. This would be particularly problematic in the 1 to 1 sampling image, since the three spots could be blended together by over-processing, thus leading to a possible misinterpretation of the data. Note, this example assumes maximum brightness in every portion of the spot. In reduced light conditions (e.g., fluorescence microscopy), these assumptions may not be correct. b Sampling—example. A small area from the same field of view as that used in Fig. 4 was imaged on a Zeiss LSM 510 confocal microscope at 2048 × 2048 pixels, 1024 × 1024, 512 × 512, 256 × 256 and 128 × 128 pixels using the same optical magnification. The areas have been enlarged using the Photoshop CS3 nearest neighbor algorithm and each image was individually contrast stretched and a gamma adjustment of 1.1 was applied to clearly show the pixilation

Scientists would do well to ask their students and staff about the conditions under which images that are to be compared were acquired. Shared instrumentation facilities are frequented more often by students and staff members than by principal investigators. While principal investigators are usually not involved in the specific day-to-day decisions about instrument settings, they are often deeply involved in the final interpretation of the data. If the instrument's acquisition settings vary widely for each image and the images are judged without consideration of the effect of the settings on the image, errors in interpretation can occur. Thus, an incomplete understanding of data acquisition can inadvertently lead to bad science.

When images are to be compared to one another, the processing of the individual images should be identical. This includes acquisition techniques such as background subtraction or white-level balancing, which should be documented in the methods section. The same principle applies to publication figures, especially if multiple images will be published together in a single figure. This assists the reader in understanding how each image relates to the others in the group. Individual images within a figure should only be processed differently if there are compelling reasons to do so. In such cases, the differences must be explained in the methods section or the figure legend. Honesty and completeness are the best policies.

6. Manipulations that are Specific to one Area of an Image and are not Performed on Other Areas are Questionable

This includes image processing procedures that are analogous to the darkroom techniques of dodging and burning. Purists believe that selective enhancement should never be performed. However, there are rare occasions when it may be legitimate to enhance a specific area in an image. An example of a legitimate use of selective enhancement is a 16-bit greyscale image that has important features at both extremes of its dynamic range. To enable readers to see both features, a scientist might decide to selectively enhance the image. If portions of a published image are selectively enhanced, however, the author must clearly state this in the figure legend. Selective enhancement of an image that is not reported would most likely be viewed as research misconduct.

Nearly every other type of selective image enhancement would probably be considered inappropriate. This includes the selective enhancement of specific bands or lanes in gel images. Some types of software filters for digital images can be selective in their application within an image (Russ 1998). See guideline 7 for more comments on software filters.

7. Use of Software Filters to Improve Image Quality is Usually not Recommended for Biological Images

Commercial software designed for desktop publishing cannot be relied upon to appropriately and scientifically manipulate the data in a digital image. Digital image filters are mathematical functions (frequently convolution kernels) that numerically change the data in the image (Spring et al. 2007). If the filters are not used carefully, they can create artifacts in an image that may lead the user to misinterpret the data (Baird and Cohen 1999). If filters must be used, they should be noted in the figure legend. The notation should include the software version, filter name(s), and any special settings that were used (Nature 2009).
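
To make this concrete, the sketch below (an illustration only; NumPy and SciPy are assumptions of this example, not tools discussed in the article) applies a common 3 × 3 sharpening kernel. Every output pixel becomes a weighted sum of its neighbours, which is exactly how an aggressive kernel can manufacture halos or exaggerate noise that was not present in the original data.

import numpy as np
from scipy import ndimage  # assumed library; any convolution routine behaves similarly

img = np.random.randint(0, 256, (512, 512)).astype(np.float64)  # stand-in for real image data

# A typical sharpening kernel: centre weight 5, four neighbours -1 (weights sum to 1).
kernel = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]], dtype=np.float64)

sharpened = ndimage.convolve(img, kernel, mode="nearest")

# Filtered values can overshoot the valid 0-255 range, one of the ways artifacts arise.
print(sharpened.min(), sharpened.max())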

Software filters and to some extent "cloning" (see guideline 8) have been used to clean up the background of images. Scientists should keep in mind the possibility that a reader may evaluate the data in a way the authors had not considered. Perhaps the collagen matrix, support media, interface between two structures, or other seemingly unimportant features in the image contain information that will spark an idea for a reader's research. If authors change these "unimportant" parts of an image to enhance aspects of the image that they regard as important, they have misrepresented the data and misled the reader, and possibly removed an opportunity for a serendipitous finding (Russ 2004). "Data beautification" is a form of misrepresentation even when it does not completely cross over the line to outright, intentional falsification (Nature Editorial 2006).

8. Cloning or Copying Objects into a Digital Image, from Other Parts of the Same Image or from a Different Image, is very Questionable

Users often consider employing the clone stamp tool (Photoshop, or similar tools in other programs) to "clean up" a dirty preparation. This tool is used to take a sample of a region in an image, and then apply it over other regions of the same or a different image (Adobe Systems 2005). This tool is often used to cover up imperfections in the image. If an image requires this much processing, the best solution is to take another image from the specimen, or from a new specimen prepared under the same conditions. Use of this, or other retouching tools such as the Adobe Photoshop Spot Healing Brush, Healing Brush, Patch, Dodge, Burn, Smudge and Sponge tools, is a form of selective image processing, and is inappropriate for images presenting scientific data.

The use of cloning or copying techniques specifically to create or move objects in an image that did not exist there originally is research misconduct (falsification or fabrication). Cloning and copying of data have frequently been used to falsify images. 11 Because of the historical misuse of these tools, the undeclared use of cloning in a published image could lead to charges of research misconduct. Examples of misconduct would include copying gel bands into an existing gel image to create a new result, or any other image "seamlessly" created from the combination of portions of two or more images.

In some instances the combination of two images into a single figure is appropriate, as long as it is clear to the editor, reviewers, and journal readers that the two parts come from separate images. A common example would be aligning lanes from two or more gels into one figure. Most journals require a line or small gap between the combined images to clearly show that they are from separate gels (Journal of Cell Biology 2009; Nature 2009). A Nature Cell Biology editorial explains that, while authors should "avoid splicing different gels together", if splicing is unavoidable, authors should "clearly demark the point of splicing and avoid overextending quantitative interpretations across splices." (2004a)

9. Intensity Measurements Should be Performed on Uniformly Processed Image Data, and the Data Should be Calibrated to a Known Standard

Intensity measurements are difficult to perform in an appropriately rigorous and scientific manner (Zwier et al. 2004). Many variables must be considered and controlled for, before the data can be regarded as meaningful (Russ 2004). An example in which achieving the necessary rigor is difficult is the analysis of colocalization data from fluorescence microscopy, which is essentially the comparison of intensity data from two different wavelengths. Unfortunately, many papers do not provide sufficient detail to reassure the reader that the intensity data were correctly gathered and interpreted.

A particular difficulty with intensity measurements is the need for a robust and repeatable standard to calibrate or normalize the data. Developing such standards is not a trivial task.

Intensity measurements of light-absorbing materials (also known as densitometry) are typically more straightforward to perform than measurements of light-emitting materials (i.e., fluorescence). This is because light-absorbing materials (e.g., DAB staining used in immunohistochemistry 12) do not change appreciably in intensity after exposure to light. Light-emitting materials, such as fluorescent dyes, are prone to fading after exposure to light, making them difficult to use when measuring intensity values.

The many types of microscopes that are used to capture fluorescence images are subject to a number of known fluctuations over time, as well as other limitations due to physical and electronic parameters (Pawley 2000). Scientists who are unaware of, or cannot account or control for, the limitations of the acquisition instrument, should not perform intensity measurements.

In general, intensity measurements should be performed on unprocessed data, or at most on carefully and uniformly processed data, to avoid potential artifacts. If normalization, flat-field correction, calibration, or any other image processing is performed on the data, it should be performed uniformly across all the data, and the procedures should be carefully described in the methods section of an article. Scientists should be extremely cautious about using software filters (i.e., convolution kernels), since the filters may introduce unexpected artifacts in the images (see guideline 7), possibly leading to the measurement of artifacts rather than actual data.
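
As one example of processing that is legitimate when applied uniformly and reported, a conventional flat-field correction is sketched below. This is a schematic only: the dark and flat reference images are assumed to have been acquired on the same instrument, under conditions that would be described in the methods section.

import numpy as np

def flat_field_correct(raw, dark, flat):
    # Conventional correction: (raw - dark) / (flat - dark), rescaled by the mean gain
    # so that corrected intensities remain comparable across the data set.
    raw, dark, flat = (a.astype(np.float64) for a in (raw, dark, flat))
    gain = flat - dark
    corrected = (raw - dark) / np.where(gain == 0, 1.0, gain)
    return corrected * gain.mean()

# Apply the identical correction (same dark and flat references) to every image that
# will be measured or compared, and describe the procedure in the methods section.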

10. Avoid the use of Lossy Compression

Image file compression comes in two basic types. Loss-less file compression reduces the size of the image file while maintaining the integrity of the image data. An example of loss-less file compression is the compressed TIFF file format, which uses the Lempel–Ziv–Welch (LZW) algorithm. This type of file compression is appropriate for scientific data, although compressed TIFF files are not universally supported by imaging software.

The other, more common, type of image file compression is lossy file compression provided by the JPEG and other file formats. JPEG is an ISO/ITU 13 standard file format for storing images that uses the discrete cosine transform to compress the file (Microsoft Corporation 1997). The JPEG file format is unsuitable for images of scientific data because of the changes that the transform makes to the image data. The discrete cosine transform changes the spatial resolution of the image and the intensity value of any given pixel (Russ 2004). The Joint Photographic Experts Group (JPEG) says that "many aspects of scientific and industrial usage involve subsequent processing of a digital image, for example to enhance features or count items. Using any form of lossy compression for images in this context may create problems—after all the information thrown away during lossy compression is generally that information that is imperceptible to a human eye—not necessarily showing the same characteristics as computer image processing software" (Joint Photographic Experts Group 2007).

John Russ adds, "The reason for recording images in scientific studies is not to keep remembrances of familiar objects and scenes, but to record the unfamiliar. If it is not possible to know beforehand what details may turn out to be important, it is not wise to discard them. And if measurement of features is contemplated (to measure size, shape, position or color information), then lossy compression, which alters all of those values, must be avoided" (Russ 2004). The Journal of Cell Biology simply states, "It is tempting to acquire your image files in JPEG format to save disk space, but doing so compromises your data. Always use TIF format." (Rossner and O'Donnell 2004). An excellent tutorial demonstrating the problems with using JPEG for scientific images is available at Florida State University's Molecular Expressions website. 14 See also the example in Fig. 4.

Fig. 4

JPEG compression. The left image is of an Invitrogen/Molecular Probes (Eugene, OR) FluoCells #2 slide stained for anti-α-tubulin, captured with a Zeiss LSM 510 confocal microscope. The file was exported from the native Zeiss LSM file format to TIFF and cropped slightly. The TIFF image was then saved as a JPEG file in Photoshop CS3 (save for web and devices) using either the 100 quality factor (minimal compression) or the 60 quality factor (higher compression, with a barely noticeable level of artifacts). Since digital images are a representation of the numerical intensity values for each pixel, image comparison can be performed using simple mathematics. If two images are identical, subtracting one image from the other should yield a product that equals 0. Since JPEG images can change individual pixel values above or below their original value, subtracting two images and then adding 128 will prevent negative values from being missed. A value of 128 (mid-grey) indicates that there has been no change from the original image to the JPEG image. The top intensity histogram is the result when the TIFF image was subtracted from itself and 128 was added. Since the images were identical, the resulting image has a histogram where all the values are an intensity of 128. The middle intensity histogram is of the image that results from subtracting the JPEG (QF 100) from the TIFF image and adding 128. The bottom intensity histogram was made in a similar manner using the JPEG (QF 60) image. Where these two histograms deviate from 128, the original intensity data have been compromised. The image on the right is the image that was created when the JPEG (QF 60) was subtracted from the original TIFF image and then 128 was added to the result. This image was not processed additionally. There is a considerable amount of information that was altered in the JPEG (QF60) image

If the user must save a file as JPEG—for example, to post an image on a web site—they should perform the compression as the final step and use the highest quality factor available. Opening and saving a JPEG image multiple times applies the compression algorithm to the image repeatedly, which degrades the image each time.
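
The subtraction test described in the legend of Fig. 4 is straightforward to reproduce. The sketch below is an approximation of that procedure (the Pillow and NumPy packages and the filenames are assumptions of this example; Pillow's quality setting of 60 only roughly corresponds to the "QF 60" export used in the figure), including the +128 offset so that negative differences are not lost.

import numpy as np
from PIL import Image  # Pillow, assumed here for reading and writing the files

original = np.asarray(Image.open("original.tif").convert("L")).astype(np.int16)

# Save a JPEG copy and read it back in.
Image.fromarray(original.astype(np.uint8)).save("copy.jpg", quality=60)
jpeg = np.asarray(Image.open("copy.jpg").convert("L")).astype(np.int16)

# Difference image with a +128 offset: a uniform value of 128 means "no change".
diff = np.clip(original - jpeg + 128, 0, 255).astype(np.uint8)
changed = np.count_nonzero(diff != 128)
print(f"{changed} of {diff.size} pixels were altered by JPEG compression")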

Users should be aware that some other computer programs use lossy compression. The Adobe Acrobat program is often configured by default to apply JPEG compression to images embedded in documents that are being transformed into PDF files. Microsoft PowerPoint is used by some labs to lay out figures for publication. PowerPoint is undoubtedly simpler to use than most raster image manipulation programs (e.g., Adobe Photoshop, Corel Photo-Paint, ImageJ); however, resizing images in this program can cause the loss of data (see guideline 12), and compressing the PPT (PowerPoint) file may apply the JPEG algorithm to all the embedded images. PowerPoint is designed to prepare presentations at screen resolutions (72 dots per inch [dpi]), rather than print resolutions (typically 300 dpi for images, 1200 dpi for text and line art). PowerPoint is not a good tool for assembling figures for print, and many journals will not accept files in the PPT format or TIFF files that have been exported from PowerPoint (Journal of Cell Biology 2009).

11. Magnification and Resolution are Important

Digital images of real world objects sample the object such that each pixel in the image has a scale. This scale may be in light-years per pixel for telescope images, or in tenths of microns per pixel for microscope images. Ideally, the scale is the same in both the X and Y dimensions; however, this is not always the case. In confocal microscopy and other sectioning techniques, the XY pixel also represents a volume (voxel), because the image includes a Z dimension (Russ 2004). The Z dimension in confocal microscopy is typically two to three times larger than that of the XY resolution (Benham 2002), an issue that should be noted in order to avoid misinterpretation.

It is imperative that the scale of the pixels in the XY and Z dimensions be maintained so that features in the image can be correctly interpreted. The magnification of an image is determined by the difference between the original scale of the pixel and the scale of the pixel in its final form (e.g., paper printout, projected on the wall of a large lecture hall). Since it is often impossible to know in advance what the final magnification will be, a scale bar of known size is the best way to express the magnification. Journals may resize an author's submitted figure to better fit the dimensions of the printed page; resizing by the journal makes any magnification number provided by the author in the figure legend incorrect, whereas a scale bar will resize along with the image. In addition, the outdated practice of stating the magnification of a light microscope objective in the figure legend—without taking into account other instrument optics and image processing—is sloppy science, and omits important information.
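
Calculating a scale bar requires only the calibrated pixel size, which is usually stored in the acquisition metadata. The numbers below are hypothetical, chosen purely for illustration.

def scale_bar_length_px(bar_length_um, um_per_pixel):
    # Number of pixels that a scale bar of the given physical length should span.
    return round(bar_length_um / um_per_pixel)

# Hypothetical calibration of 0.1 micron per pixel (e.g., read from the microscope metadata):
print(scale_bar_length_px(10, 0.1))   # a 10 micron bar spans 100 pixels
# Burn in (or overlay) the bar before any resizing, so that it scales along with the data.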

Microscopes can, in several imaging modes, visualize objects that are smaller than the diffraction-limited resolution of the instrument. Visualizing sub-resolution structures is only possible if the objects are well separated from other objects in the image field. Resolution is defined as the ability to separate two closely adjacent objects, and is limited by the diffraction of the imaging system. Sub-resolution objects typically appear to be the same size as objects that are at the actual diffraction-limited resolution; however, this perception is an artifact (Waters 2009). Because of this problem, measurements of objects in this size range are likely to be inaccurate.

Another important issue with sampling small objects using digital image capture is the need to correctly oversample the object. (See Fig. 3.) The Nyquist–Shannon sampling theorem suggests that the smallest resolvable object in the image should be oversampled at least two times in X and Y (Spring et al. 2006b). If light is limited, higher levels of sampling (hyper-sampling) can yield increased accuracy of feature measurements (particularly for larger objects), although often with a resulting loss of contrast (Spring et al. 2006a). Because adequate contrast is essential to correctly resolve structures in microscopy, 2.5–3 times oversampling is a more appropriate middle ground between sampling and contrast (Spring et al. 2006c). Scientists studying moving objects should also consider oversampling on the temporal scale to avoid artifacts. 15

The reason for 2.5–3× oversampling is to avoid image artifacts. Undersampling an image—using too few pixels to accurately describe a small feature—can yield aliasing artifacts that may misrepresent the feature's true size and shape. In addition, aliasing artifacts may masquerade as real structures (Spring et al. 2006b, c) in the sample (see Fig. 5), which can lead to misinterpretation of the image data. While oversampling does not lead to artifacts, it does not increase the spatial resolution of the image beyond the diffraction limit (Spring et al. 2006c). When in doubt, oversample.
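
A back-of-the-envelope calculation shows what 2.5–3× oversampling means in practice. The 250 nm resolution used below is an illustrative value (roughly what a high-numerical-aperture objective achieves with visible light), not a number taken from the article.

def max_pixel_size_nm(resolution_nm, sampling_factor=2.5):
    # Largest pixel size (referred to the specimen) that still oversamples the optical
    # resolution by the requested factor; the Nyquist minimum is a factor of 2.
    return resolution_nm / sampling_factor

resolution = 250.0  # nm, illustrative diffraction-limited resolution
print(max_pixel_size_nm(resolution, 2.0))   # 125 nm per pixel: bare Nyquist minimum
print(max_pixel_size_nm(resolution, 2.5))   # 100 nm per pixel
print(max_pixel_size_nm(resolution, 3.0))   # ~83 nm per pixel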

Fig. 5

Moiré. The top image is of a portion of a diatom acquired using a Zeiss LSM 510 confocal microscope in differential interference contrast mode. The bottom image is a demonstration image of the kind of moiré artifact that can occur in images that have repeating structures and have been incorrectly down-sampled or were initially undersampled. Note the curved artifacts (arrows) in the diatom in the bottom image. This artifact is the result of aliasing the periodic features in the diatom. This is a somewhat extreme example; however, users need to be aware that down-sampling an image (i.e., reducing the total number of pixels in X and Y) can reduce the information content of an image, and may introduce unwanted, and unnoticed, artifacts. Noise was removed from the top image using Photoshop CS2's despeckle filter and a conservative contrast stretch performed to enhance the image. The bottom image is a screen capture taken when the above image was viewed at 33% on screen and then the captured image was enlarged using the Photoshop CS3 nearest neighbor algorithm

12. Be Careful when Changing the Size (in Pixels) of a Digital Image

All the care taken to correctly acquire digital images can be for naught if the image is post-processed injudiciously. Because the original image data file may not precisely fit the journal page, users typically need to reduce or enlarge the image to make the image(s) fit. Changing the size of an image (the number of pixels in X and Y) can introduce aliasing artifacts.

Decreasing the image size in pixels reduces the spatial resolution in an image. If the size reduction is not by a power of two, the software program performs an interpolation to create both a new XY resolution and new intensity values for each pixel. Very small or thin structures may disappear or be affected disproportionately by a reduction in the image size (see Fig. 3b). If the image has regular, repeating structures, size reductions have the potential to create moiré artifacts in the final image. Wavy moiré artifacts can best be visualized by viewing a highly structured image on a computer monitor at less than 100% enlargement (see Fig. 5); however, moiré artifacts can also occur during image size reduction or from the interaction of the halftone printing process with the image.

Increasing the image size in pixels also causes the software program to interpolate the new XY resolution and pixel-intensity values. Enlarging an image does not increase the spatial resolution; to the contrary, it may make specific features more difficult to resolve because aliasing artifacts tend to make the edges of features less distinct.

When enlarging or reducing an image in size, users should insert a magnification scale bar prior to changing the total number of pixels in an image. Magnification may be nearly impossible to calculate afterwards. Ideally the image size should be changed only once to prevent resizing artifacts from compounding one another. 16
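
If resizing cannot be avoided, it should be done exactly once, with the interpolation method chosen deliberately and reported. A sketch using the Pillow package (an assumption of this example, as are the filenames) follows.

from PIL import Image  # Pillow, assumed for this illustration

img = Image.open("figure_panel.tif")  # hypothetical file, scale bar already added

# Resize once, to the final published width, with an explicitly chosen (and reported)
# interpolation method rather than whatever a layout program applies by default.
target_width = 1050  # e.g., 3.5 inches at 300 ppi
scale = target_width / img.width
resized = img.resize((target_width, round(img.height * scale)), resample=Image.LANCZOS)
resized.save("figure_panel_final.tif")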

Conclusions

In the past, capturing images to film required a certain degree of skill, as well as the expense of film, chemicals, paper, and labor. In many ways, film techniques were more-or-less self-regulating from an ethical standpoint, because creating an intentionally false image was not a trivial task. With the advent of digital imaging, the expense of maintaining a fully equipped darkroom has been replaced with the much lower cost of the popular Adobe Photoshop (Foster 2000) (or similar programs) and some additional RAM for the lab computer. Any technical barriers to inappropriate image manipulation have been greatly reduced, resulting in the possibility of doing almost anything to an image with just the click of a mouse.

That possibility needs limits. Journals and professional societies have only in the last few years begun to establish those limits. Meanwhile, scientists have felt pressure to submit cleaner and more beautiful images (Pearson 2005). Biomedical photographers such as Felice Frankel have advocated for more attractive images (Frankel 2002, 2004). Stunningly beautiful journal covers, which typically feature the most attractive image in a given issue, have added to the pressure. One would do well to remember that "beautiful pictures are not inherently any more truthful than ugly ones" (Mullin 1998).

Research on human memory has shown that humans have better recall of news stories if they are accompanied by an image (David 1998). Other research has shown that doctored photographs can affect people's memories of, and attitudes about, recent historical events (Sacchi et al. 2007). Extrapolating this research into the way readers interpret scientific digital images, it would seem that readers would be just as likely to remember the images and information from a falsified article as those from a true article, even if they later learned about the falsified article. A recent literature survey found that an estimated 95% of the authors who cited papers that were found to have been falsified (published ORI findings) were seemingly unaware that the papers had been retracted or were mentioned in a misconduct finding (Neale et al. 2010). Given the growing percentage of falsified images found in ORI cases (Krueger 2009), it would seem that fraudulent images are a significant short and long-term problem for the scientific record.

Over the last 10–15 years there have been a few highly public instances of falsified images (Abbott 1997; Aldhous and Reich 2009; Bagley 2009; Greenberg 1996; Katsnelson 2007; McCabe and Wright 2000; Normile 2009; Rolph and McNerthney 2007; Rossner 2006; Vogel 2006, 2008; Weissmann 2006; Xin 2006; Young 2008), but most of the problem lies with the lack of a basic understanding of how to properly handle image data. Graduate student training often includes a mandatory ethics class (students supported by NIH training grants are required to receive responsible conduct of research training, NIH 1994, 2009) and many of these courses now include discussion of the Rossner and Yamada paper (2004). This is a good start, but until students (and their more established colleagues) understand the reasoning behind the detailed instructions to authors found in many journals, these instructions will continue to be poorly understood and compliance will suffer. It is hoped that the present article will begin to help scientists and students understand the importance of handling images carefully.

In addition, the disconnect between the generations in the lab needs to be addressed (Couzin 2006). Often younger members of the lab are more familiar with Photoshop than their research supervisors, so students become the creators of figures submitted for publication. Laboratory heads need to check the work of students and staff to ensure that image data are not over-processed or otherwise inappropriately processed (Wright et al. 2008). A publication figure should be compared with the original image data long before the manuscript is submitted. This will ensure that the science is interpreted correctly and that any embarrassing image processing mistakes are caught beforehand. This type of supervision and review is an important responsibility for every author, certainly for the senior-most author (Nature Cell Biology Editorial 2006a, b), and it is one aspect of what the ORI refers to as the "responsible conduct of research" (Steneck 2007).

The final piece of the puzzle belongs to the journals. Many editors do not have the resources to screen submitted images in a manner similar to that of the Journal of Cell Biology (Couzin 2006). Some journals are hopeful that digital image forensics software developed by Dartmouth engineering professor Hany Farid (Coburn 2008; Pearson 2006) or the Rigour™ software (Suprock Technologies 2009) will be able to automatically flag manipulated images. Michael Rossner warns that reliance on software to screen images may lead to a false sense of security for the journals (Rossner 2008), and others have expressed concern about the potential damage to a scientist's reputation that could be caused by a false positive flagged by software (Young 2008). Many journals have not yet formulated specific guidelines for the appropriate manipulation of images. Other journals have adopted all or part of the wording from the JCB's instructions to authors, 17 possibly because this wording is recommended by the Council of Science Editors (2009), or because of the Rossner and Yamada article (2004). Unfortunately, there are still journals whose figure submission guidelines read more like instructions for graphics professionals than for scientists. Until there is a fairly unified response from journals as to what constitutes appropriate image manipulation, the problem of "data beautification" will continue to plague science.

Ethical guidelines for the appropriate use and manipulation of scientific digital images

  1. Scientific digital images are data that can be compromised by inappropriate manipulations.

  2. Manipulation of digital images should only be performed on a copy of the unprocessed image data file (Always keep the original data file safe and unchanged!).

  3. Simple adjustments to the entire image are usually acceptable (see the sketch following this list, which illustrates guidelines 2, 3, and 5).

  4. Cropping an image is usually acceptable.

  5. Digital images that will be compared to one another should be acquired under identical conditions, and any post-acquisition image processing should also be identical.

  6. Manipulations that are specific to one area of an image and are not performed on other areas are questionable.

  7. Use of software filters to improve image quality is usually not recommended for biological images.

  8. Cloning or copying objects into a digital image, from other parts of the same image or from a different image, is very questionable.

  9. Intensity measurements should be performed on uniformly processed image data, and the data should be calibrated to a known standard.

  10. Avoid the use of lossy compression.

  11. Magnification and resolution are important.

  12. Be careful when changing the size (in pixels) of a digital image.
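
To make guidelines 2, 3, and 5 concrete, the following is a minimal sketch in Python (NumPy is assumed to be available; the image arrays, black point, and white point are illustrative placeholders, not part of the guidelines themselves). It works on a copy of the raw data, applies a single linear adjustment to the entire frame, and applies the identical adjustment to both images that will be compared.

    import numpy as np

    def linear_stretch(image, black, white):
        # One linear brightness/contrast adjustment applied to every pixel (guideline 3).
        data = image.astype(np.float64)   # astype returns a copy; the original array is untouched (guideline 2)
        scaled = (data - black) / float(white - black)
        return np.clip(scaled, 0.0, 1.0)

    # Hypothetical 12-bit frames from the same experiment (placeholder data).
    control = np.random.randint(0, 4096, (512, 512))
    treated = np.random.randint(0, 4096, (512, 512))

    # Identical display settings for images that will be compared (guideline 5).
    control_display = linear_stretch(control, black=100, white=3000)
    treated_display = linear_stretch(treated, black=100, white=3000)

In such a workflow the unprocessed files are archived unchanged; only the display copies are adjusted, and any further processing is reported in the figure legend.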

These guidelines can also be found as part of the "Online Learning Tool for Research Integrity and Image Processing", a website whose development was funded by a grant from the Office of Research Integrity. See: http://www.uab.edu/researchintegrityandimages/ or: http://ori.dhhs.gov/education/products/RIandImages/ (Retrieved 12/06/2009)

Acknowledgements

This essay began as a brief two-page newsletter article in February of 2001 that was intended primarily for graduate students and staff. As the guidelines have been refined and revised over the last several years, I have benefited greatly from the insight and feedback of colleagues at the University of Arizona, with specific thanks to: Carl Boswell, David Elliott, Patty Jansma, R. Clark Lantz, Claire Payne, Dana Wise, and Jeb Zirato. Additional feedback from John Krueger of the Office of Research Integrity, and Sara Vollmer of the University of Alabama—Birmingham, is appreciated. The author would like to specifically thank Michael W. Davidson and his colleagues at the Molecular Expressions website (Florida State University) for developing the online resources that carefully explain some of the technical concepts referred to in this article. Adobe and Photoshop are registered trademarks of Adobe Systems Incorporated, San Jose, CA. Microsoft, Powerpoint, and Windows are registered trademarks of the Microsoft Corporation, Redmond, WA. Apple and Macintosh are registered trademarks of Apple Computer, Inc., Cupertino, CA. Corel and Photo-Paint are registered trademarks of the Corel Corporation, Ottawa, Ontario, Canada. This work was supported in part by the Southwest Environmental Health Sciences Center (SWEHSC), a National Institute of Environmental Health Sciences (NIEHS) funded center (ES006694). The views, opinions, and conclusions of this essay are not necessarily those of the SWEHSC, the NIEHS, or the University of Arizona.

Footnotes

Originally presented at the University of Alabama at Birmingham/Office of Research Integrity conference entitled "Statistics, Images, and Misconduct", held September 2006.

1Underlined terms are defined in a glossary that is provided.

2While that article had many excellent points, several of the examples described image manipulations that were performed on specific areas of the image. Today, many of these manipulations would be considered falsifications or fabrications, unless the figure legend or methods section contained a detailed explanation of how the figures were created. Interestingly, Mr. Hayden has become more conservative with regard to image manipulations since that article was written in the year 2000. (See interview with Jaime Hayden [Couzin 2006].)

3In at least one instance of which he is aware, Rossner has seen a paper that was rejected by the JCB and subsequently published in a different journal without correction of the inappropriate image manipulations (Young 2008).

4Not all journals employ this pre-publication examination. Interestingly, if pre-publication screening had been applied to the infamous Hwang stem cell cloning paper in Science (Neill 2006), questions might have been raised before its publication, instead of afterwards (Rossner 2006).

5A survey performed by the American Journal of Respiratory Cell and Molecular Biology found that figures in 23% of the accepted articles in that journal had images that had undergone some alteration, including 'erasure or filling in of parts of the background, splicing of bands from one gel into another, and "cloning,"' (Abraham et al. 2008). The Journal of Clinical Investigation has seen some evidence of tampering in 10–20 accepted articles per year, and about 5–10 of those papers warranted a more thorough investigation (JCI publishes about 300–350 articles per year) (Young 2008). A pilot study carried out by Blood found that "approximately 20% of accepted manuscripts contained one or more figures with digital images that had been manipulated inappropriately" (Shattil 2007).

6The Microscopy Society of America position on Ethical Digital Imaging considers gamma correction to be a "generally, acceptable (non-reportable) imaging operation" (Microscopy Society of America 2003). The Instructions to Authors for the JCB state that "Non-linear adjustments (e.g., changes to gamma settings) must be disclosed in the figure legend" (Journal of Cell Biology 2009). The Nature Publishing Group states "If 'Pseudo-coloring' and nonlinear adjustment (for example 'gamma changes') are used, this must be disclosed" (Nature 2009).
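
As an illustration of why gamma adjustments are singled out as non-linear, the short Python sketch below (NumPy assumed; the pixel values are illustrative) raises each normalized pixel value to the power 1/gamma, so equal steps in the input no longer produce equal steps in the output, unlike a linear brightness or contrast stretch.

    import numpy as np

    def adjust_gamma(image, gamma):
        # Non-linear: each normalized pixel value is raised to the power 1/gamma.
        normalized = image.astype(np.float64) / 255.0
        return normalized ** (1.0 / gamma)

    levels = np.array([0, 64, 128, 192, 255], dtype=np.uint8)
    print(adjust_gamma(levels, 2.0))
    # Equal input steps of 64 become unequal output steps: shadow values are
    # stretched and highlight values are compressed, which is why the
    # adjustment must be disclosed rather than treated as a routine contrast change.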

7Phillip Sharp, co-chair of the National Academy of Sciences committee that was initially tasked with coming up with general data handling guidelines (a task that was begun in response to a call from the editors of major journals regarding the problem of inappropriate image manipulation), acknowledged in an interview in Science (Kaiser 2009) that the committee found it impossible to agree on acceptable image manipulation guidelines. Sharp said that "The problem was that every time a panelist made a detailed proposal, another member would say it would not work in their field…" (Kaiser 2009). The National Academy's report, Ensuring the Integrity, Accessibility, and Stewardship of Research Data in the Digital Age (Committee on Ensuring the Utility and Integrity of Research Data in a Digital Age, National Academy of Sciences 2009), ultimately dealt with issues of researchers' responsibilities for data integrity, data accessibility, and archiving.

8Although these guidelines are strongly influenced by the author's background in biological microscopy, most of the guidelines have broad application to scientific images of all types.

9For more information on the "Basic Properties of Digital Images," see the Molecular Expressions web site at: http://micro.magnet.fsu.edu/primer/digitalimaging/digitalimagebasics.html (Retrieved 12/06/2009).

10These numbers (Russ 2004) are at the lower end of the scale. There are higher values that are quoted (without references) for the number of grey shades and colors the human eye can perceive. Methodologies for accurately determining these values are difficult. Do trained observers, such as artists and photographers, see more colors than the average person? No one appears to know for sure. The possibility that some women may have a fourth optical pigment (Jameson et al. 2001) complicates this issue even more.

11John Krueger (2007) Office of Research Integrity, U.S. Department of Health and Human Services, personal communication.

12Caveat "…using DAB as a chromogen is problematic because a linear relationship between the amount of antigen and staining intensity exists only at low levels of the latter." (Bernardo et al. 2009; see also Taylor and Levenson 2006).

13ISO = International Organization for Standardization, ITU = International Telecommunication Union.

14JPEG artifacts—http://micro.magnet.fsu.edu/primer/java/digitalimaging/processing/jpegcompression/ (Retrieved 12/07/2009).
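
A minimal sketch of the effect of lossy compression, assuming Python with NumPy and the Pillow library installed (the file names, quality setting, and placeholder image are illustrative):

    import numpy as np
    from PIL import Image

    # Placeholder 8-bit grayscale image standing in for acquired data.
    original = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
    image = Image.fromarray(original)

    image.save("copy_lossless.png")             # lossless: pixel values are preserved
    image.save("copy_lossy.jpg", quality=75)    # lossy: block-based compression alters the data

    png_pixels = np.asarray(Image.open("copy_lossless.png"))
    jpg_pixels = np.asarray(Image.open("copy_lossy.jpg"))

    print(np.array_equal(original, png_pixels))   # True
    print(np.array_equal(original, jpg_pixels))   # almost always False

Once the altered values have been written to disk, the original intensities cannot be recovered, which is why guideline 10 recommends avoiding lossy formats for primary image data.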

15An example of a temporal sampling artifact can be seen when watching the wheels on an automobile. At certain speeds it will appear as if the wheels are rotating in a direction that is opposite of the direction of travel of the vehicle. This artifact, sometimes referred to as the "wagon wheel effect", has been known for a long time ("Why Movie Wheels Turn Backward; An explanation of the illusion and a suggested method for correcting it," 1918) and is caused by the sampling rate of the image capture device or the rate at which the eye/brain processes the images (Purves et al. 1996).
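
The arithmetic behind the artifact can be illustrated with hypothetical numbers (a sketch, not a measurement): a wheel turning 22 times per second filmed at 24 frames per second advances 22/24 of a turn between frames, which the eye cannot distinguish from slipping 2/24 of a turn backward.

    frame_rate = 24.0    # frames per second (illustrative)
    wheel_rate = 22.0    # true wheel rotations per second (illustrative)

    turns_per_frame = wheel_rate / frame_rate                          # about 0.917 of a turn per frame
    apparent_rate = (turns_per_frame - round(turns_per_frame)) * frame_rate
    print(apparent_rate)   # approximately -2: the wheel appears to spin backward at 2 turns per second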

16For resizing tips for Adobe Photoshop, see "Potentially the most dangerous dialog box in Adobe Photoshop™"—http://swehsc.pharmacy.arizona.edu/exppath/resources/pdf/Photoshop_Image_Size_dialog_box.pdf (Retrieved 12/06/2009).
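
A minimal sketch of why resizing deserves caution, assuming Python with NumPy and Pillow (the sizes and resampling filters are illustrative): upsampling interpolates pixel values that were never recorded, and downsampling discards or averages values that were.

    import numpy as np
    from PIL import Image

    # Placeholder 8-bit image standing in for acquired data.
    original = Image.fromarray(np.random.randint(0, 256, (100, 100), dtype=np.uint8))

    # Upsampling: most pixels in the enlarged image are interpolated estimates, not measured values.
    enlarged = original.resize((200, 200), resample=Image.BICUBIC)

    # Downsampling: recorded values are combined and fine detail is lost.
    reduced = original.resize((50, 50), resample=Image.BILINEAR)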

17Based on a Google search (November 2009), the following journals (a non-exhaustive list) have image submission and manipulation guidelines that are very similar, or identical, to the wording used by the Journal of Cell Biology: J Gen. Physiology, J Exp. Medicine, Biology of the Cell, J App. Physiology, Biochem J, J Invest Derm, Blood, J. Exp. Botany, J Endocrinology, ASN Neuro, European Resp. Rev., and Diabetes. In addition, publisher Springer (http://www.springer.com/authors?SGWID=0-111-7-574914-0) uses similar wording.
