Thursday, September 19, 2013

A Raspberry Pi based multispectral camera


I have been working on constructing a new version of the DIY multispectral camera that I began last year. The first version was based on a paper by Shaw et al., a group at Montana State University. Their MS camera made use of solid-state DVR boards with 1MP black-and-white cameras. An XBee-based radio relay triggered all the cameras to take photos onto 4 separate SD cards. Each of the cameras has a filter to create the Red, NIR, Green, and Blue bands. Put together, these yield a 4-band multispectral image with good control over the contents of each band (thanks to the pass/cut filters that select a range of wavelengths of interest for each camera). 
Last year, I replicated what they were doing but found that the firmware on the DVR cards left a lot to be desired. The software interface on the cards is clunky and randomly stopped working. With the control software entirely in the firmware, there wasn't much I could do to fix things or troubleshoot.  
With the release of the Raspberry Pi camera, I figured this might offer a cheaper and more robust solution. The RPi camera is also 5MP, which offers better resolution. With a separate Raspberry Pi controlling each camera, I can potentially add more sophisticated on-the-fly processing, include some sanity checks, and generally make the whole thing more robust. Potentially I can create NDVIs on the fly so that the analysis is complete by the time the images are downloaded. The demonstration that the IR filter can be removed clinched the decision for me, and I dove into building a new RPi-based version. 
For this I have 4 Raspberry Pis driving 4 separate CMOS cameras. Separate controllers are needed for two reasons: (1) the Pi only has one camera port, and (2) in an aerial setting one needs to take all the photos at the same time -- not in sequence, since the camera is moving.  Ultimately, I will add the filters to restrict the light reaching each of the cameras.  Each camera runs a Python script that listens on a GPIO pin for a trigger signal. A fifth Pi runs as a master. It has a GPS and a real-time clock on board to track its location and the time.  When sufficient distance from the previous photo has been covered, the master Pi triggers the other cameras over GPIO. The master then FTPs the images back to a central location and (when the script is done), these are assembled into a single multispectral image using the GDAL libraries (aligned and so on). A UBEC provides power to the whole thing so that it can be run off of the balancing plug of a LiPo battery (assuming it's installed in some sort of UAV/plane/copter). 
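The distance-gated triggering on the master Pi can be sketched roughly as follows. This is a minimal illustration, not my actual script: the function names and the 20 m spacing are made up, and the actual GPIO call on the Pi would go where `should_trigger` returns True.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def should_trigger(last_fix, current_fix, min_spacing_m=20.0):
    """Fire the camera trigger once the craft has moved far enough."""
    if last_fix is None:
        return True  # always take the first photo
    return haversine_m(*last_fix, *current_fix) >= min_spacing_m
```

On the master, `should_trigger` would gate the GPIO write that pulses the trigger pin shared with the four camera Pis.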
As of last night, it is all assembled (see photo) and I'm working on the software part of the project.  I've got the master program working with the GPS and so on, plus the basic triggering scripts for the individual cameras. Using GPIO on the Pi is pretty sweet, though having to run the scripts as sudo is a bit wonky IMHO. Ultimately the individual Pis will boot in headless fashion and begin running their scripts. I've got the master Pi now functioning as a wireless access point so that the other Pis can connect to it. I still have to work on getting the transfers going. That will require making sure that the script can find the right images that go together. One way to do that might be to have the slave Pis FTP to the master using a numbering scheme that makes it clear what goes with what. That is work yet to be done... 
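One possible version of that numbering scheme, sketched below under assumed names: suppose each slave Pi uploads files named like `cam2_0013.jpg` (camera ID, then shot number). The master can then group files by shot number and only stack the sets where all four cameras reported in.

```python
import re
from collections import defaultdict

# Hypothetical naming scheme: "cam<camera-id>_<shot-number>.jpg"
FNAME_RE = re.compile(r"cam(\d+)_(\d+)\.jpg")

def group_shots(filenames, n_cameras=4):
    """Return {shot_number: [filenames]} for shots seen by all cameras."""
    shots = defaultdict(list)
    for name in filenames:
        m = FNAME_RE.fullmatch(name)
        if m:
            shots[int(m.group(2))].append(name)
    # keep only complete sets -- a shot missing from any camera can't be stacked
    return {num: sorted(names) for num, names in shots.items()
            if len(names) == n_cameras}
```

Each complete set could then be handed to the GDAL-based alignment-and-stacking step.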


Friday, September 13, 2013

The Dangers of the Recent "Anything Goes" Trend in Archaeology...


Last night, the Ig Nobel Prizes were announced:

ARCHAEOLOGY PRIZE Brian Crandall and Peter Stahl, for parboiling a dead shrew, and then swallowing the shrew without chewing, and then carefully examining everything excreted during subsequent days — all so they could see which bones would dissolve inside the human digestive system, and which bones would not.

REFERENCE: “Human Digestive Effects on a Micromammalian Skeleton,” Peter W. Stahl and Brian D. Crandall, Journal of Archaeological Science, vol. 22, November 1995, pp. 789–97.

Thursday, September 5, 2013

Mobius ActionCam as a Near Infrared Camera

I have been working on modifying cameras to allow us to take remote photos in the near-infrared from our various platforms (quadcopters, fixed wing, etc.). Doing so gives us the ability to generate NDVI images and study differences in vegetation health (and thus potentially spot archaeological features, à la crop marks, etc.). The criteria we've had are that the cameras must be (1) light and ideally (2) cheap. We know we can do this for Canon cameras and get good results.  What is required is taking the camera apart and removing the IR filter that is ordinarily placed in front of the CMOS or CCD sensor.  Some cameras are easier than others to modify in this way.  I have done this for a Canon A380 and for a couple of GoPro Heros (Naked and 2). We also sent one of our Ricoh GR III cameras to a commercial service to have this done (this cost a couple of hundred bucks).

With these point-and-shoot cameras there are some drawbacks.  First, they are mechanical and have parts in them that are easy to mess up once you take the cameras apart. I've screwed up one or two of our cheaper Canons this way -- once a spring gets sprung, they can be nearly impossible to reassemble.  Second, they are heavier than they need to be. With remote sensing we are basically focusing at infinity and we want a relatively wide-angle view. As a result, we don't need all the focusing mechanics or any optical zooming and whatnot. This just complicates the entire camera and weighs it down.  Third, they are somewhat expensive. Once you start opening up cameras and yanking out parts, it is preferable to have the cameras cost well under $100 each.


Iam Bouret, one of our fantastic collaborators on our NSF REU program, suggested that we look into the Mobius ActionCam as a possible candidate for our NIR camera. This camera costs about $70, has a 5MP sensor, is configurable via open firmware and software, is solid-state, and is easy to take apart.  A Windows-based GUI allows the camera to be customized. Using this interface one can set the resolution of the images, the default modes, and the intervalometer.  

Taking the camera apart was very easy; there is a guide on the website that you can follow along with. Basically there are a couple of screws holding the case together, and the lens is removable by detaching a ribbon cable from the main board. Once you do that, you can open the back of the lens and remove the IR filter (it sits immediately above the CMOS sensor). Then the entire thing goes back together in the opposite order. There are thankfully no springs or mechanical bits to make the job messy - it really is about a 10-minute job. 


Once you remove the IR filter you have a camera that records images that include a portion of the near-infrared spectrum.  This is great, but all you are doing is letting more light into the camera. The results are still split into the RGB bands - there are just additional wavelengths present (this is why the IR filter is used in the first place - to restrict the image to the visible spectrum).  To isolate the NIR you need to add a filter so that the near IR ends up in one of the bands.  In black-and-white film photography this is usually done with a yellow filter that cuts out the blue and thus shifts the spectrum up in wavelength so that you get near IR (N), red (R), and green (G).  An alternative that seems to be preferable for digital images is to use a blue filter to eliminate the red.  What you get then is a file that records just NGB (the red is removed). Ideally one would take simultaneous shots with a regular color camera to provide 4-band images (NRGB), but that requires 2 cameras and a way of registering the two sets of images. 
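Given an NGB file like that, the NDVI calculation can be sketched as below. This is a simplified illustration, and it assumes the common infrablue arrangement where the file's first channel (where red used to be) carries the NIR signal and the third channel carries visible blue, which stands in for the visible band in the usual (NIR - VIS) / (NIR + VIS) formula; the function name is mine, not from any particular package.

```python
import numpy as np

def ndvi_from_ngb(ngb):
    """NDVI from an infrablue (NGB) image array of shape (H, W, 3).

    Assumes channel 0 holds NIR and channel 2 holds visible blue,
    computing (NIR - VIS) / (NIR + VIS) per pixel.
    """
    nir = ngb[..., 0].astype(np.float64)
    vis = ngb[..., 2].astype(np.float64)
    denom = nir + vis
    denom[denom == 0] = 1.0  # avoid division by zero on fully dark pixels
    return (nir - vis) / denom
```

Healthy vegetation reflects strongly in the NIR and absorbs visible light, so it should push the result toward +1.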

Jeffrey Warren of the Infragram project on Public Lab sent me a piece of their "infrablue" filter to use with the NIR-modified camera. This filter seems to be well suited for removing the red while allowing the near IR band to be collected by the CMOS sensor.  Here is a sample NIR image (compare to the RGB).

Finally, I ran the NIR image through the Infragram sandbox to generate an NDVI. The results look pretty good, but I'll have to check the spectra to see if we are getting good separation of the bands.  Apparently the Infragram folks have found that some of the cheap-o CMOS camera sensors are set up in a way that lets a lot of blue leak into the NIR band. Obviously this will screw the image up for analytic purposes. From what I understand, this is caused in some cases by the fact that a Bayer RGB filter is not always used to separate the light into 3 bands. So there is a need to search for a cheap camera that still has good properties in terms of splitting the bands.  

Given these results, though, it appears that the Mobius ActionCam might do the trick. I'll be interested to hear what the folks at the Infragram project think.