Thursday, September 19, 2013

A Raspberry Pi based multispectral camera

 

I have been working on constructing a new version of the DIY multispectral camera that I began last year. The first version was based on a paper by Shaw et al., a group at Montana State University. Their MS camera made use of solid-state DVR boards with 1MP black-and-white cameras. An XBee-based radio relay triggered all the cameras to take photos onto 4 separate SD cards. Each camera has a filter to create the Red, NIR, Green, and Blue bands. Put together, these give a 4-band multispectral image with good control over the contents of each band (due to the pass/cut filters that select a range of wavelengths of interest for each camera). 
 
Last year, I replicated what they were doing but found that the firmware on the DVR cards (http://www.electronics123.net/amazon/datasheet/dvr8100.pdf) left a lot to be desired. The software interface on the cards is clunky and would randomly stop working. With the control software entirely in the firmware, there wasn't much I could do to fix things or troubleshoot.  
 
With the release of the Raspberry Pi camera, I figured this might offer a cheaper and more robust solution. The RPi camera is also 5MP, which offers better resolution. With a separate Raspberry Pi controlling each camera, I can potentially add more sophisticated on-the-fly processing, run some sanity checks, and generally make the whole thing more robust. Potentially I can create NDVIs on the fly so that the analysis part is complete by the time the images are downloaded. The demonstration that the IR filter can be removed clinched the decision for me, and I dove into building a new RPi-based version. 
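The on-the-fly NDVI idea boils down to one formula, NDVI = (NIR - Red) / (NIR + Red). A minimal sketch with NumPy, assuming the red and NIR frames have already been loaded as aligned arrays (the alignment step is the hard part and is not shown here):

```python
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red), with 0 where both bands are dark."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    denom = nir + red
    out = np.zeros_like(denom)
    # Only divide where the denominator is nonzero to avoid NaNs.
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out
```

Values land in [-1, 1], with healthy vegetation pushed toward 1 because leaves reflect strongly in the NIR band.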
 
For this I have 4 Raspberry Pis driving 4 separate CMOS cameras. Separate controllers are needed for two reasons: (1) the Pi only has one camera port; (2) in an aerial setting one needs to take the photos all at the same time, not in sequence, since the camera is moving.  Ultimately, I will add the filters to restrict the light reaching each camera.  Each camera Pi runs a Python script that listens on a GPIO pin for a trigger signal. A fifth Pi runs as the master. It has a GPS and a real-time clock on board to track its location and the time.  When sufficient distance from the previous photo has been covered, the master Pi triggers the other cameras over GPIO. The master then FTPs the images back to a central location and (when the script is done) these are assembled into a single multispectral image using the GDAL libraries (aligned and so on). A UBEC provides power to the whole thing so that it can be run off of the balancing plug of a LiPo battery (assuming it's installed in some sort of UAV/plane/copter). 
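The distance-based trigger logic on the master is straightforward: compare each new GPS fix against the position of the last photo and fire once the spacing is reached. A sketch of that decision, with the 20 m spacing as my placeholder value (the real loop would pulse a GPIO output pin on each trigger, omitted here since the wiring is hardware-specific):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_trigger(prev_fix, fix, spacing_m=20.0):
    """Fire the shutter once we have moved spacing_m from the last photo.
    Fixes are (lat, lon) tuples; prev_fix is None before the first photo."""
    if prev_fix is None:
        return True
    return haversine_m(prev_fix[0], prev_fix[1], fix[0], fix[1]) >= spacing_m
```

The haversine formula is overkill over a 20 m hop, but it costs nothing and behaves sensibly everywhere on the globe.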
 
As of last night, it is all assembled (see photo) and I'm working on the software part of the project.  I've got the master program working with the GPS and so on, plus the basic triggering scripts for the individual cameras. Using GPIO on the Pi is pretty sweet, though having to run the scripts as sudo is a bit wonky IMHO. Ultimately the individual Pis will boot headless and begin running their scripts. I've got the master Pi now functioning as a wireless access point so that the other Pis can connect to it. I still have to work on getting the transfers going. That will require making sure that the script can find the right images that go together. One way to do that might be to have the slave Pis FTP to the master using a numbering scheme that makes clear what goes with what. That is work yet to be done... 
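One way the numbering scheme could work: each slave names its upload with its band and the trigger sequence number, and the master buckets the uploads by that number before stacking. The band names and filename pattern below are my invention, not what is actually running on the Pis:

```python
def image_name(band, seq):
    """Filename a slave would use when uploading, e.g. img_00042_nir.jpg."""
    return "img_{:05d}_{}.jpg".format(seq, band)

def group_by_trigger(filenames):
    """Bucket uploaded filenames by trigger number, so each bucket holds
    one frame per band, ready to stack into a multispectral image."""
    groups = {}
    for name in filenames:
        seq = int(name.split("_")[1])
        groups.setdefault(seq, []).append(name)
    return groups
```

A bucket with fewer than four files then flags a missed trigger on one of the slaves, which is exactly the kind of sanity check that was impossible with the old DVR firmware.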
 