Thoughts on image analysis. Many of the resources (URLs) below were sent by members of the confocal listserv in response to a question I posted asking for references for teaching reliable acquisition and analysis of microscopy images.

Quantifying microscopy images: top 10 tips for image acquisition, from the team behind the open-source image analysis software CellProfiler, is an excellent brief introduction to proper imaging and realistic expectations.

Based on reading Part I and the beginning of Part II, this online book appears to be an excellent introductory text on what a micrograph is, how to get a good one, and how to quantify it using ImageJ or similar software.
This is an excellent overview, but I have to disagree with its enthusiastic embrace of Fiji over ImageJ. Fiji is rife with complex plugins that new users apply to their images without any idea of what is going on under the hood. They apply the plugins, get answers, and report them as truth, without ever characterizing what the numbers mean. (This is discussed below.) Beginners should start with plain ImageJ, not jump to Fiji.

Perhaps too technical for beginners, lacking step-by-step figures, but a good overview of the issues in fluorescence microscopy.
Fluorescence microscopy - avoiding the pitfalls. Claire M. Brown. Journal of Cell Science 2007 120: 1703-1705; doi: 10.1242/jcs.03433

This article by Douglas W Cromey is required reading for any discussion of the issue of image enhancements or manipulations in the sciences.
Importantly, it addresses image manipulations during acquisition that render the so-called "raw data" inherently corrupt, and it gives specific examples to support the criticism it levies on other articles:
"[P]olicy statements and instructions to authors do little to educate readers and society members as to why some manipulations are appropriate and others are not."
What is called for, and should be heeded, is educating scientists.
"Such education is badly needed, since—in the author’s experience—the problem is not the few individuals who intentionally falsify images, but the many who are ignorant of basic principles."
This article by David Piston does an excellent job of outlining the problems of using automated tools without understanding how they work. (One of the misused ones I see constantly is colocalization software.) Six years after its publication, there are phone apps for colony counting and other imaging lab tasks. Does anyone using these apps know how they work? As the opening anecdote makes clear, they may lead to failure. More important, they keep researchers from really looking at their experiments, using their eyes to see what is in front of them and perceive any glimmer of subtlety. He alludes to this when he says, "young scientists often don't realize that they need to ask questions," but puts more of the blame on lack of "time and resources for students to learn enough about how their equipment and techniques work to be able to use them to best advantage."
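To make the colocalization point concrete, here is a small illustration of my own (not from Piston's article): the Pearson coefficient, one of the numbers a colocalization button typically reports, computed by hand on synthetic two-channel data. Including the empty background inflates the value, which is exactly the kind of behavior a user has to understand before reporting the number as truth. The image sizes and intensity ranges are arbitrary.

```python
# Hypothetical sketch: Pearson correlation between two channels, with and
# without the background pixels. The channels are statistically independent
# inside the signal region, yet the whole-frame coefficient looks high.
import numpy as np

rng = np.random.default_rng(0)

# Two "channels": signal only in the top-left quarter, zero elsewhere.
red = np.zeros((64, 64))
green = np.zeros((64, 64))
red[:32, :32] = rng.uniform(100, 200, (32, 32))
green[:32, :32] = rng.uniform(100, 200, (32, 32))  # independent of red

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length arrays."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

# Over the whole frame, the shared on/off pattern dominates: high coefficient.
r_whole = pearson(red, green)
# Restricted to the signal region, the intensities are uncorrelated: near zero.
mask = red > 0
r_masked = pearson(red[mask], green[mask])
print(f"whole frame: {r_whole:.2f}, signal only: {r_masked:.2f}")
```

Same pixels, two very different answers; which one the software reports, and over which region, is precisely what the user needs to know.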
Part of the problem is that nobody is training students how to do proper imaging. Piston says, "In the past, much of this practical training was conducted by a lab's principal investigator, who is now spending increasing time chasing funding." But principal investigators often don't know the material themselves. When users come to a core facility, they want results immediately, like the art class that sends home objects to be pinned to the refrigerator instead of teaching students how to be artists, to observe the world, to apply craft. Proper imaging is a hole in biology graduate education.
Training could be contracted out to "summer courses such as those offered by the Marine Biological Laboratory in Woods Hole, Massachusetts. More and more of these intensive short courses are being offered worldwide, but they are always oversubscribed." These courses also conflict with course and lab work at the primary institution, and they may be expensive.
Regardless, the article is an important call to add didactic imaging education to the list of formulating hypotheses, designing research approaches, writing grant applications and manuscripts, and amassing knowledge of the particular biological field being studied.


Other fun &/or scary stuff:

This video shows features being erased from images, with the resulting holes then "corrected" or (re)invented using surrounding cues &/or libraries of expected features (such as facial features). Cosmetically or aesthetically fascinating, but what does this augur for journalistic or scientific imaging? Ouch! I won't be surprised when scientists use this to fix histology images where the tissue is torn. On the other hand, it may turn out to be useful for mapping structures; AI may see connections & classifications that people are missing.

This is an aggressive denoising method that appears to work. Here is the video version. What does this mean for scientific imaging? Will it work for low-light, noisy images where the phenomena being interrogated really are unknown? To what extent will AI correctly identify the structures and convert them to a form amenable to human vision? To what extent will divergent (thus irrelevant, misleading, wrong) images be generated? And for AI itself, do the images need to be presentable to humans at all?
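The quantification worry applies even before any AI enters the picture. Here is a toy sketch of my own (the image size, noise model, and filter are all made up for illustration): even the mildest smoothing "denoiser" changes the numbers you would measure from a low-light frame, so a denoised image is no longer the raw data.

```python
# Hypothetical sketch: a 3x3 mean filter, the gentlest possible denoiser,
# still lowers the peak intensity of a dim structure in a shot-noise-limited
# frame. Any measurement made after denoising inherits this bias.
import numpy as np

rng = np.random.default_rng(1)

# Noisy low-light frame with one dim 3x3 "structure" in the middle.
img = rng.poisson(2.0, (21, 21)).astype(float)  # background shot noise
img[9:12, 9:12] += 10.0                          # the real signal

def box_blur(a):
    """3x3 mean filter; edge pixels are left untouched."""
    out = a.copy()
    for i in range(1, a.shape[0] - 1):
        for j in range(1, a.shape[1] - 1):
            out[i, j] = a[i - 1:i + 2, j - 1:j + 2].mean()
    return out

smooth = box_blur(img)
print("peak before:", img.max(), "peak after:", round(smooth.max(), 2))
```

A fancier AI denoiser may shift the numbers less obviously than this, but the question of what it did to the measured values remains, and with a learned method the answer is much harder to characterize.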

This article was billed as essential reading for correlative microscopy, for taking images from different modalities and matching them. The abstract appears highly relevant, but the article is largely handwaving and has the additional weakness of favoring "predictive" microscopy, which is making stuff up. When looking at something new, the whole point is that it is new, without prior knowledge. The algorithm would need to be trained to flag the unexpected.

Amusing how this trained neural net fails; not amusing if an AI encounters something unusual in radiology imaging that is needed for diagnosis or surgery and fails to report something odd, or makes up a result that is irrelevant or harmful. Part of the problem is context, which comes largely from breadth. In the failed examples of recognizing goats and sheep, the tag generator does not have a deep or broad enough network of meaning.