From telescopes to satellite cameras to electron microscopes, scientists are producing more images than they can manually inspect. This tutorial will introduce automated image analysis using the "images as numpy arrays" abstraction, run through various fundamental image analysis operations (filters, morphology, segmentation), and finally complete one or two more advanced real-world examples.
Image analysis is central to a mind-boggling number of scientific endeavors. Google needs it for their self-driving cars and to match satellite imagery with mapping data. Neuroscientists need it to understand the brain. NASA needs it to map asteroids and save the human race. It is, however, a relatively underdeveloped area of scientific computing. Attendees will leave this tutorial confident of their ability to extract information from their images in Python.
Attendees will need a working knowledge of numpy arrays, but no further knowledge of images or voxels or other doodads. After a brief introduction to the idea that images are just arrays and vice versa, we will introduce fundamental image analysis operations: filters, which can be used to extract features such as edges, corners, and spots in an image; morphology, inferring shape properties by modifying the image through local operations; and segmentation, the division of an image into meaningful regions.
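As a flavor of the "images are just arrays" idea, here is a minimal sketch of all three operations on a synthetic image, using scipy.ndimage (the exact libraries and functions used in the tutorial may differ; the array values here are invented for illustration):

```python
import numpy as np
from scipy import ndimage as ndi

# An "image" is just a 2D array of intensity values:
# here, a bright square on a dark background.
image = np.zeros((16, 16))
image[4:12, 4:12] = 1.0

# Filter: a Sobel filter responds to intensity changes,
# highlighting the square's edges.
edges = ndi.sobel(image, axis=0)

# Morphology: greyscale erosion shrinks bright regions
# by taking a local minimum over a 3x3 neighborhood.
eroded = ndi.grey_erosion(image, size=(3, 3))

# Segmentation: threshold, then label connected regions.
labels, n_regions = ndi.label(image > 0.5)
```

Each step maps an array to an array, so the operations compose naturally.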
We will then combine all of these concepts and apply them to several real-world examples of scientific image analysis: given an image of a pothole, measuring its size in pixels; comparing the fluorescence intensity of a protein of interest in the centromeres vs the rest of the chromosome; and observing the distribution of cells invading a wound site.
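The first of these tasks, measuring an object's size in pixels, can be sketched by chaining thresholding, labeling, and per-region pixel counts (the image here is a hypothetical stand-in, not real pothole data):

```python
import numpy as np
from scipy import ndimage as ndi

# Hypothetical stand-in for a photograph of a pothole:
# a bright 10x16 blob on a dark background.
image = np.zeros((32, 32))
image[10:20, 8:24] = 0.9

# Segment by thresholding, then label connected regions.
mask = image > 0.5
labels, n = ndi.label(mask)

# Sum the mask over each labeled region to get its area in pixels,
# and take the largest region as the object of interest.
areas = ndi.sum(mask, labels, index=range(1, n + 1))
size_in_pixels = int(areas.max())  # 10 * 16 = 160 for this blob
```

The same threshold-label-measure pattern underlies the other two examples as well, with the measurement step swapped out.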
Attendees will also be encouraged to bring their own image analysis problems to the session for guidance, and, if time allows, we will cover more advanced topics such as image registration and stitching.
The entire tutorial will be conducted in the IPython notebook, with various code cells left blank for attendees to fill in as exercises.