Friday, March 28, 2014

Catching a Mouse with Cheese (or candy) !


So at this year's career day at Mahopac High School I was asked to talk about what it's like to be a software engineer (or maybe I'm an architect – I dunno, I'm sort of an "all of the above" person I guess). Well, last year I had embedded a number of LEDs into my poster board and, using my Arduino, ran a sketch that would light the LEDs in predefined patterns based on the setting of a 4-position DIP switch. The idea was that the flashing lights (flashing in all kinds of patterns) would draw the students to my table. It worked – a little. I got some interest – kids wanted to flip the switches to get new patterns, a few of the kids talked about the programming they were doing, and some had even heard of the Arduino or Raspberry Pi (gasp!)

So this year, I thought I'd try something similar. I have the same poster board with the lights (see my previous blog entry "Too much time on my hands") but this time I put a Parallax PING))) sensor on it. Taking the lead from my friend John Cohn, I put this together in a single sitting (something we call a "two beer integration" – i.e., doing the work, or integration, within two beers).

Here's the idea:

  • Put the Arduino in front of my poster pointing out towards the students.
  • If somebody comes within 2-3 feet of my board, the LEDs start "firing" in random patterns, making for a "pretty" display – when they move away (further than 2-3 feet), the LEDs stop firing.
    • I'm also putting candy out in front of my poster this year ;-)




  • When a student gets within 10 inches of the Arduino (to, say, grab a piece of candy) my spotlight will flash on them! When they aren't within 10 inches it will shut off.

The hope is that I can add a little fun to the event – I figure if I can break the ice maybe I can talk about technology, Arduinos, engineering and programming.

Here's what I did:

The code itself was pretty simple – I just started with the example code in the Arduino IDE, and rather than just printing out the distance of an object, I used the distance to either light up the LEDs or turn on the spotlight. Here's a code snippet:

if (inches < 30)
{
  digitalWrite(light, LOW);   // turn off the last LED we lit
  light = random(6, 13);      // pick a random LED pin from 6-12
  digitalWrite(light, HIGH);  // and fire it
}
else
{
  digitalWrite(light, LOW);   // nobody close – lights off
}

if (inches < 10)
{
  digitalWrite(4, HIGH);      // spotlight on (pin 4)
}
else
{
  digitalWrite(4, LOW);       // spotlight off
}


What a great event!!! I had more students than I could talk to!! They LOVED it – who would have thought such a simple sensor would attract that much attention. And before anybody says it, yes – they came for the candy – but the important thing is they were engaged in discussion AND they were interested – so I'd say it was a success!!!

Check out a video of one of the students and his reaction on my YouTube page



Here's a picture of the full board:



Saturday, February 2, 2013

Raspberry Pi Timelapse camera

I was looking for another project for my Raspberry Pi and my USB camera (similar to the Internet Accessible Christmas tree I did previously). I read an article on the Adafruit blog where someone did a time lapse movie of their greenhouse, and I thought that might be a fun project to try. During the winter, my wife and I move a bunch of our "outside plants" into my office. One of them started coming to life and flowering (which was kinda nice during the dreary winter months). So, when there was a new bud, I got to work on my "time lapse" Raspberry Pi camera. The idea was to take a single-frame picture every 5 minutes until the bud bloomed into a flower, and then, using software I found on the Internet (something called JPGVideo), "assemble" the frames into a movie. One of the problems with this approach was nighttime: I needed some way to "light" the flower during the evening hours. But hey, I'm doing this with a little mini Linux box – anything should be possible. So here's what I came up with:
The first part is the camera. I connected my old Logitech Quickcam to the USB port of my Raspberry Pi, and using the "fswebcam" software I can grab a single frame image. Like I said, one of the problems was lighting. So rather than leave the lights on in my office for 24 hours, I used an Adafruit PowerSwitch Tail to turn on an old photographer's spotlight every time I wanted to take a picture. The setup in my office looks like this:



So I wrote a little script that when run:

  1. creates a unique filename (based on the date/time) for the image
  2. turns on the light
  3. pauses 5 seconds
  4. snaps a picture
  5. pauses 5 seconds
  6. and then turns the light off.


Using the Pi's crontab I just run this script every 5 minutes and quickly build up a set of images. On the Pi I used the command "crontab -e" to edit my cron table. Cron enables users to schedule jobs (commands or shell scripts) to run periodically at certain times or dates in a Unix environment.

The format of a user’s cron table is:

MIN HOUR DOM MON DOW CMD 

These fields accept the following values:

MIN  - Minute, 0 to 59
HOUR - Hour, 0 to 23
DOM  - Day of month, 1 to 31
MON  - Month, 1 to 12
DOW  - Day of week, 0 to 6 (Sunday = 0)
CMD  - The command to be executed

So in my case, I use:

0,5,10,15,20,25,30,35,40,45,50,55 * * * * /home/pi/matt/takepic

Which simply says: when the minute is 0, 5, 10, 15 (etc. – any value specified) during any hour (the *), on every day (the *), for any month or day-of-the-week – execute the command "takepic". (Most crons also accept the step shorthand */5 in the minute field, which means the same thing.) My script (/home/pi/matt/takepic) looks like this:

#!/bin/sh
filename=$(date +"%m-%d-%y-%H%M%S")
/usr/local/bin/gpio mode 0 out
/usr/local/bin/gpio write 0 1
sleep 5
fswebcam -d /dev/video0 /home/pi/matt/$filename.jpg
sleep 5
/usr/local/bin/gpio write 0 0


The most important part of the script is the creation of a unique filename for the image that is captured. The format of the filename is simply:

Month-Day-Year-(hour)(minute)(seconds).jpg

so for example, on January 31, 2013 at 10:45 the name would be: 01-31-13-104501.jpg (the value of the seconds field could vary slightly).

To turn the light on for the picture, I used the GPIO package to “flip” the value on I/O pin 17 – which is connected to the Adafruit Powertail. Turning the pin “on” (/usr/local/bin/gpio write 0 1) turns on the light – and then writing a zero to the pin turns it off (/usr/local/bin/gpio write 0 0).

So once I have all these images (and I'm over 1,000 images at this point) they need to be assembled into a movie. I use a great little program called JPGVideo on my Windows laptop. The main point of JPGVideo is to create videos out of web camera images. Every so often I move the images from my Raspberry Pi to my laptop to "compile" them into a movie.

The JPGVideo program is a very slick little application that does just that. The first thing you need to do is "configure" the program – basically, set up the input/output directories as well as how to read the files. Under the Configuration option you can activate the "drop identical frames" feature to skip images that are nearly the same as the previous one. Through some experimentation I chose to drop frames that differed by less than 5%. I also chose to set the frames per second to 3 (but again, depending on how many images you have, this number can vary).

So the movie I created (over a couple of days) of the blooming of my plant is on my YouTube account here.

After I did this I had another idea – filming the melting of an ice cube - but I think the reverse is actually better – here’s the other movie.

Tuesday, December 4, 2012

Merry Christmas Pi



I just received my Raspberry Pi and was looking for a project to get my feet wet with. Given that it supports a full Linux distribution and comes with built-in USB ports and an Ethernet adapter, the types of projects someone could tackle are quite different from what I normally do with my Arduinos. The idea I came up with was to control the power to a device and watch the change via a video camera – all controlled through a simple web application. Looking around my office, I thought it might be fun to allow Internet control to turn the lights on/off for a small Christmas tree on my book shelf. Looking at it from a high level, here's what I created:


Basically I'm using a program called fswebcam (see below) to capture an image every 2 seconds. This image gets stored in the webserver's DocumentRoot (so it can be accessed via the web). The HTML that is served contains some code that reloads the image in the browser every few seconds (sort of a "poor man's" streaming solution). The webpage also contains two buttons that execute CGI programs to turn the lights either on or off (basically flipping one of the Raspberry Pi's I/O pins).

Webcam

The first step was to experiment with a USB connected webcam to see if I could capture pictures on the raspberry pi.   I have an old Logitech Connectix Quickcam so I was hoping to use that.  A quick search on Google landed me on the SLB Labs page that describes a cool little program called: fswebcam.
fswebcam is a great application that easily allows you to programmatically capture single image frames at a specified timer interval.    For my Raspberry Pi I had to install the package, but that was simple enough using the command:
sudo apt-get install fswebcam

The command I then use to start grabbing pictures looks like:



fswebcam -q --subtitle "Matt's Christmas Tree" -d /dev/video0 -l 2 --set gain=55 --set exposure=250 /var/www/tree-view.jpg

  • -q = quiet mode
  • --subtitle text – sets the text in the lower left-hand portion of the image
  • -d device – the source device to capture from (/dev/video0 here)
  • -l n – loop every n seconds (take an image every n seconds)
  • --set name=value – sets a device control (gain and exposure in my case)

Finally the last parameter on the command line is the filename (image) to be created.
There are LOTS of options for fswebcam, so I'm not going to list them all here. But let me talk about one that I found useful:



fswebcam --list-options

This lists the available controls and their current values for the default source device. In my case, the controls available are "gain" and "exposure" (thus I'm able to set them in the call to fswebcam with the --set flag).


Web Server

For the web server portion of the project I initially looked at lighttpd – after spending about an hour trying to get some basic CGI working, I gave up and reverted to the tried-and-true Apache2 implementation – it was just easier. So I won't go into the details of installing and running Apache (there's lots written about that). To install apache2 on my Raspberry Pi, simply use the command:

sudo apt-get install apache2

There are two locations that you'll typically need:

/var/www is the default DocumentRoot (where the HTML to be served is located)
/usr/lib/cgi-bin is the directory (or folder) for your CGI programs (which I’ll use to turn the tree on or off)

The HTML  
        
My design basically calls for a webpage that reloads a single image every 2-3 seconds to watch for activity with the tree lights. A good part of this code was derived from the discussions I found over on the webdeveloper.com site (see: http://www.webdeveloper.com/forum/showthread.php?188022-Webcam-image-refresh-with-preloading). The full HTML page can be downloaded here.

So to start off, I created a "simple" webpage that would display an updated image every 2 seconds (served from the Raspberry Pi). That page looks like:



<!DOCTYPE html>
<html>
<head>
</head>
<body>
<script language="javascript" type="text/javascript" >
<!--
function ChangeMedia(){
var d = new Date();
var t = d.getTime();
var url = this.location.href;
document.getElementById('camera').src = url + "/tree-view.jpg?" + t;
}
var reloadcam = setInterval("ChangeMedia()",2000);
-->
</script>
<h1>My Office Christmas Tree </h1>
<img src="tree-view.jpg" alt="Loading..." name="camera" width="450" height="335" border="0" id="camera" />
</body>
</html>



The HTML is pretty simple at this point. It's basically a page that displays an image (tree-view.jpg). The real "power" is in the Javascript. There is a routine defined called ChangeMedia() which, when invoked, updates the element labeled "camera" with a new image. Note that to ensure we always fetch a fresh image, the Javascript adds a parameter to the URL (the time of day) so that the HTTP request "looks" like a unique URL – this way caching won't get in the way. The real key is the setInterval("ChangeMedia()", 2000) – this sets a timer to call ChangeMedia() every 2000 ms, so the image gets refreshed every 2 seconds.

Controlling I/O with the Pi

So now that I have an image of the tree being served from the Raspberry Pi, the next step is to turn the power on and off under program control. What I decided to do was use a PowerSwitch Tail from Adafruit. The PowerSwitch Tail is a compact 120V 3-pronged extension cord (with a relay board embedded in the middle). By connecting one of the General Purpose I/O (GPIO) pins to the relay (using two screw terminals) and turning the pin "on", power is "passed" across the relay, turning on my tree. Setting the GPIO pin on the Raspberry Pi "low" (off) cuts power at the relay and turns my tree "off".

Getting access to the Raspberry Pi’s I/O pins is a little “challenging” (I guess that’s speaking as an active user of Arduino where access to the pins is really simple using the headers on the board).   The easiest method I’ve seen is a breakout board (again from Adafruit) called the Raspberry Pi Cobbler – it simply provides a cable and breakout board from the Pi to a breadboard for easy access (disclaimer:  I am not affiliated at all with Adafruit – I just LOVE their products/services). 

So once I had access to the I/O pins, I needed an easy way to turn them on and off.  Again, a quick search on the Internet turned up a great package WiringPi.  There’s a great set of examples and descriptions of the package at drogon.net.   Once the WiringPi code is compiled, turning pins on and off is as simple as:


                         gpio mode 0 out
                              (sets pin 0 – really GPIO 17 – to output)
                         gpio write 0 1
                              (writes a binary 1 (high) to pin 0 – GPIO 17)


The WiringPi code makes accessing the I/O pins a little easier by renumbering them from 0-7 rather than using their device pin numbers – again see the drogon site (https://projects.drogon.net/raspberry-pi/wiringpi/pins/)

Name      WiringPi    Raspberry Pi Pin
GPIO 0       0              17
GPIO 1       1              18
GPIO 2       2              27
GPIO 3       3              22
GPIO 4       4              23
GPIO 5       5              24
GPIO 6       6              25
GPIO 7       7               4

Putting it all Together

The last part is sending a request from the webpage to the Raspberry Pi that will turn the PowerSwitch Tail either on or off. For that I decided to add two buttons to the webpage, so when a user clicks one of the buttons the webpage makes a request for a CGI called "tree-on.sh", which looks like:

#!/bin/sh
echo "Content-type: text/html\n\n"
echo ""
gpio mode 0 out
gpio write 0 1
date >> /var/www/turned.on


The echo statements at the top of the script return valid HTTP headers so the CGIs can be called without producing an error. For tracking purposes I added a log file (called /var/www/turned.on) that keeps track of when the tree was turned on. There is a "sister" CGI called "tree-off.sh" that is exactly the same except that it issues a 'gpio write 0 0' instead of a 1 (in other words, it turns the pin off (zero) rather than on (one)).
The HTML was modified to add two buttons to allow the tree to be turned on and off, so it now looks like:


<h1>My Office Christmas Tree </h1>
<img src="tree-view.jpg" alt="Loading..." name="camera" width="450" height="335" border="0" id="camera" />
<p>Feel free to light me up or turn me off !</p>

<button onclick="turnon()">Turn Me ON</button>
<button onclick="turnoff()">Turn Me OFF</button>

</body>
</html>







This requires two new javascript functions to be defined - turnon() and turnoff() which look like:


function turnon() {
  xmlhttp = new XMLHttpRequest();
  xmlhttp.open("GET", "/cgi-bin/tree-on.sh", true);
  xmlhttp.send();
}

function turnoff() {
  xmlhttp = new XMLHttpRequest();
  xmlhttp.open("GET", "/cgi-bin/tree-off.sh", true);
  xmlhttp.send();
}

In the end, the final page looks like:



 

Monday, October 8, 2012

Arduino's, MQTT and IoT



I’ve been experimenting with MQTT lately as a way to get devices to “talk” to each other across the Internet.   This is all related to the concept of the Internet of Things.   According to Wikipedia:
“The Internet of Things refers to uniquely identifiable objects (things) and their virtual representations in an Internet-like structure”  (see entry here).

The idea is that in the near future there will be a large number of devices on the internet all communicating with each other or to larger entities – so we need to have an easy (or painless) way to get these devices to communicate.  

According to its website, MQTT is a machine-to-machine (M2M)/"Internet of Things" connectivity protocol. It was designed as an extremely lightweight publish/subscribe messaging transport. It is useful for connections with remote locations where a small code footprint is required and/or network bandwidth is at a premium.
Hey, my Arduino has a small footprint – this seems like a perfect fit.

MQTT is a simple method for allowing devices to "publish" data while other devices "subscribe" to it. Any number of devices can subscribe to a "channel" of data – thus if one device wanted to broadcast something to a large number of subscribers, a single message would do the trick. A device can subscribe or publish to as many channels of data as it wants. Key to this is a single "message broker" which acts like a hub. The hub "worries" about who is subscribed to a specific channel of data and then simply sends the data along as appropriate.


So I was thinking: "what if I put my Arduino up on the Internet and had it subscribe to a specific data channel?" If it "saw" data, it could receive it and act appropriately. So I dug through my parts box and found a tri-color LED (still in its original packaging that I no doubt had good intentions for) and decided to use that. The idea would be that if you published the word "red" on a channel, my Arduino would make the LED turn red. Say "blue" and it would change to blue; say "green" and it turns green. Not overly useful, but I think kinda interesting. Here's a Fritzing diagram of what I have.


For testing I use some great software from mosquitto.org to publish messages to my MQTT broker (which is running on an IBM implementation at realtime.ngi.ibm.com). So I have my Arduino running Nick O'Leary's MQTT code and listening on a channel called "MattTest" (yes, case is important).

I have two "status" LEDs on my breadboard. One "lights up" when the Ethernet initializes on the network; when the Arduino successfully subscribes to the channel, the Ethernet LED goes off and the "subscribed" LED comes on – so you know the system is ready. After that, the words red, blue, green or clear published on the channel will cause the tri-color LED to change.

Next up, controlling it from an iOS application called UIF (also from IBM).   Watch this space.
If you're interested in the code – let me know.

Tuesday, August 21, 2012

Too much time on my hands ?


Last year I was asked to attend my local high school’s career day event here in Mahopac, NY.
I love doing these things – I participate in engineering week in the elementary schools, career day in the middle school, etc. Most of these are sessions where I like to get the students interacting with each other – here's a picture of some of my elementary students building the tallest free-standing structure possible out of a single piece of paper and a 1.5-foot length of masking tape. It typically goes really well – especially when I turn it into a competition where the classroom that produces the tallest structure (of the 5-6 classes I visit) gets a box or two of Dunkin' Donuts munchkins (this pushes the interest into the stratosphere ;-)

So, I was asked to set up a table at the high school (like a trade show) where over 300 students could stop by and talk about the opportunities in the various areas. So I was thinking – how can I draw attention to my table and get the most traffic possible?

I've been spending quite a bit of time with my Arduinos, using them to build sensors for some work on the Internet of Things – I'll post about that later. So what I decided to do was get a bunch of Light Emitting Diodes (LEDs), insert them into my poster board, and then use my Arduino to drive them in a variety of patterns – sort of like a flashing billboard. Here's the schematic of what I built.


There are 8 LEDs on the board, alternating between red and green. Each one corresponds to a bit in an 8-bit integer – so if I send a decimal 129 out the port (binary 1000 0001), it turns on the first and last LEDs and sets the others off. Basically the code (available if anybody is interested) runs through a sequence of numbers, turning each LED on or off based on the binary value. To make a "moving" dot, I would send the sequence:

01, 02, 04, 08, 16, 32, 64, 128 

(reversing the sequence causes the light to “bounce” from one side to the other).

If you look at the circuit diagram there is a series of switches. Depending on the configuration of the switches I use a different set of light patterns – so when the students stopped by to talk about it, they could flip switches and get a wide variety of patterns flashing. There's also a variable resistor to allow the students to control the speed of the "moving" LED. Overall it seemed to cause quite a bit of interest (and turned out to be fun). Here's what the board looked like:


Tuesday, November 30, 2010

Becoming Agile (take the step)

I talk to lots of teams that want to become more Agile in their processes. The other day I realized that I’ve actually internalized the scrum master role (at least I think I have). I noticed that every day, when I’m talking to teams I work with, I’m always trying to dig into what they are doing and understand what problems need to be removed from their path - and then removing the obstacles.

Often the problem is something like what happened to me the other day when I was trying to send a package to Denmark. I went to the mailroom, dropped off the box, and filled out the export form. About 3 days later the mailroom guy calls me and says:

“Your package hasn’t gone out yet. I’m waiting for the guys in the other building to make a decision about how we ship, and I can’t do anything until they tell me”. Sigh.

So, out of habit, I just ask "who is it?"

I look up the person's phone number, make a call and POW – roadblock removed within about 30 minutes. Turns out they, in turn, were waiting on someone in California ("oh, you know, they never called me back").

I mean, it really does work – you just need to take the extra step of making that phone call and asking the question "why is this being held up?" – and what I've found is that 9 out of 10 times the roadblock goes away!

I’ve noticed myself doing this more and more lately – I just don’t accept the answer “I’m waiting on (x)”. I guess talking about Agile (and living it for so long) – I’ve really internalized it.

Try it: It’s amazing how fast things can really get done – just take that extra step!

Sunday, November 14, 2010

Discovering Agility

I’m flying back from a pretty successful trip to Copenhagen. I was here as part of something IBM calls an “Innovation Discovery Workshop”. We met with one of IBM Denmark’s largest customers. It was my first time doing one of these, so I wasn’t sure what to expect. Lots of high level, smart people talking about how the bank could change or adapt its business in an agile or nimble fashion based on customer needs – interesting but scary. The topic of the session was Business Agility - I was asked to attend so I could bring my perspectives on Agile Software Development into the discussion.
The two day session was held at a Spa/Hotel just outside of Copenhagen in a town called Skodsborg. My first challenge when I landed on Wednesday morning was finding a train from the airport to the small town about 40 minutes away. I have to say, I was pretty impressed with myself, having found the train and very easily wandering around the town to find my way to the Kurhotel and Spa.
Anyway, there were about 15 people from Nordea there and maybe 10 IBM'ers, all from a variety of disciplines – and if you know me, I know NOTHING about banking and money, so I was feeling a little out of my element!!! Anyway, the idea behind the two days was to explore areas within the bank's process and business model where they could be more innovative and agile in an attempt to respond to changes in the business climate. The first day wasn't too bad – lots of good conversation and getting to know each other – the wonderful three-course meal at the end of the day (complete with three courses of wine) helped to loosen everyone up.
I got to speak on the morning of the second day. I was scheduled to talk about Agile software development, and specifically I was going to focus on Scrum as an Agile technique. Unfortunately (for me) I got to follow one of our Distinguished Engineers and the VP of Innovation in the CIO's office: Francoise Legoues. She did an outstanding job – simply "wowing" the room and I think really setting the stage for just how innovative IBM can be. Lucky for me (not): a very tough act to follow.
So, armed with my lucky pink shirt, I jumped in feet first and gave it my all! I think it went well – and in typical "Matt Style" (very animated and excited) I think I convinced them I know a little about the topic. There were lots of questions and comments afterwards – so I always take that as a good sign! I focused not so much on how to do Agile for software development, but rather how to "think" with an Agile and iterative (small step) mindset. They're doing Agile development already – in fact, Nordea is at the beginning of their Agile journey, and like most teams just starting out, they have lots of questions (and lots of confusion). They're going to do fine though – they are very committed and very passionate about what they're doing! It's really exciting to see!
Did we help them? I don't know. They seemed to really get a lot out of it, and watching Nordea talk amongst themselves about how to proceed (using ideas we forged together), I would say we did a good job!

I didn't get to do much sightseeing. The day ended around 4:00pm on Friday, and a quick look at geocaching.com showed that there was a quick geocache about 0.2 miles from my hotel. After a brisk walk I quickly found the "cave" where the cache was hidden. It was really pretty cool. Around the back of that big opening was a little opening with stairs that led to the top (where the cache was hidden). Of course I left a Samstone there in memory of Sam Cohn (the son of a fellow IBM'er, John Cohn) who was tragically killed in a car accident at the age of 14.
I did make it into downtown Copenhagen for dinner with two of the IBM workshop attendees: Adam Cutler and John Vergo – John works about 200 yards from me in Hawthorne, NY, but we had to travel about 4,000 miles to meet – go figure! Other than the rain, it was a good night. It felt like Copenhagen should feel: wet and cold – in a good (European) way!
So, back home and back to the grind. Business travel for this year is done (I’m pretty sure).