Self-driving cameras: automated camera capture for biological imaging
McGirr, Joseph (2021) Self-driving cameras: automated camera capture for biological imaging. MRes thesis, University of Nottingham.
Abstract
Efficient quantitative analysis of plant traits is critical to keep pace with advances in molecular and genetic plant breeding tools. Machine learning has shown impressive results in automating many of these analytical processes; however, most of the algorithms rely on a surplus of high-quality biological imagery. This data is currently collected in labs via static camera systems, which provide consistent images but are difficult to tailor to individual plants, species, or tasks. Current research in autonomous camera systems uses object detection or tracking methods to control the camera. Unfortunately, this approach quickly breaks down for static biological imagery: large inter- and intra-species variations, even within the same specimen, make object detection less robust, and stationary targets make tracking unusable. Inspired by the success of deep learning in the autonomous driving space, we apply an end-to-end learned approach that directly maps saliency-augmented input frames from a monocular RGB camera to a pan-tilt-zoom (PTZ) actuation. Our results show that the model correctly classifies which direction to move the camera in 87% of instances, with average offset errors of 250 and 140 pixels, respectively, on a 1920x1080 image. Results on a much smaller, plant-only dataset demonstrate the applicability of the model to biological imagery, and we show that saliency improves accuracy by up to 4%.
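To make the described input-output mapping concrete, the following is a minimal sketch of a two-headed network that takes a saliency-augmented RGB frame and produces a camera-direction classification alongside a pixel-offset regression. The 4-channel input (RGB plus saliency map), the layer sizes, and the number of direction classes are illustrative assumptions, not the architecture used in the thesis.

# Illustrative sketch only: a shared CNN backbone with two heads,
# one classifying which direction to move the camera and one
# regressing the horizontal/vertical pixel offset to the target.
import torch
import torch.nn as nn

class PTZController(nn.Module):
    def __init__(self, num_directions: int = 9):  # assumed class count
        super().__init__()
        # 4 input channels: RGB frame concatenated with a saliency map.
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Head 1: direction of the PTZ actuation (classification).
        self.direction_head = nn.Linear(64, num_directions)
        # Head 2: pixel offset to the target (regression, x and y).
        self.offset_head = nn.Linear(64, 2)

    def forward(self, frame_with_saliency: torch.Tensor):
        features = self.backbone(frame_with_saliency)
        return self.direction_head(features), self.offset_head(features)

# Example: one 1920x1080 frame with its saliency channel appended.
model = PTZController()
x = torch.randn(1, 4, 1080, 1920)
direction_logits, pixel_offset = model(x)
print(direction_logits.shape, pixel_offset.shape)  # (1, 9) and (1, 2)

The two heads mirror the two reported metrics: the classification accuracy (87%) corresponds to the direction head, and the average pixel offset errors correspond to the regression head.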