Apr 5, 2015

Added v2.3 support to Facebook::OpenGraph

Last weekend I added Graph API v2.3 support to Facebook::OpenGraph and released it to CPAN. This module rarely needs much code change to support a new API version, because it focuses on a minimal implementation with maximum convenience. When I designed it, I did not want developers to have to learn both the Graph API and this module's own conventions; a client module should be as thin as possible so developers barely have to be conscious of its existence.
This time, however, Graph API v2.3 makes a major change to the /oauth/access_token response: it used to return a URL-encoded query string, but from this version on it returns a JSON object. So I added several new methods and some code changes, as below:
  • Facebook::OpenGraph::Response now has methods for Graph API version comparison
  • Facebook::OpenGraph checks the API version, and if it is v2.3 or later, it parses the /oauth/access_token response body as a JSON object
These version comparison methods are handy. $response->api_version returns the API version reported by the Graph API itself, which I think is safer than relying on the version the developer specified, because the requested version and the version actually served can sometimes differ, as the documentation describes:
we now return an HTTP response header called 'facebook-api-version' which indicates which API version your app is actually experiencing - this may be different to the API version you specify in your request due to upgrades.
I have not been creating brand-new Facebook apps lately, so there might be some edge cases I am missing. I will be happy to have your feedback on GitHub.

Jan 18, 2015

Progress Report: Add mbed AudioCODEC (TLV320AIC23B) support to Raspbian

Last Cyber Monday I purchased a new Raspberry Pi B+ and some peripherals from Adafruit. I already had a RasPi B and was fairly satisfied with my previous camcorder project, but the cool new features, including the HAT concept and improved electrical stability, seemed attractive to me. So I decided to migrate my camcorder project to the RasPi B+ and, hopefully, add audio support.
Here are some features my camcorder previously offered:
  • GPS logging
  • Live preview on PiTFT with current driving speed and recording status
  • Video recording
My first plan for adding audio recording was to connect an electret microphone amplifier via an ADC, but that turned out to be a bad idea: the combination of an ADC over I2C and Linux (a time-sliced OS) cannot sustain a sampling rate of 40 kHz or more. Even a 400 kHz I2C bus carries only about 400 kbit/s, which is already below the roughly 640 kbit/s needed for 16-bit samples at 40 kHz, and OS scheduling jitter makes it worse.
Instead, I decided to use my mbed AudioCODEC. The latest Raspbian kernel does not include support for this device, so I had to do a lot of searching. What I found really helpful were koalo's article and jasaw's project. With their help, what I have done so far is 1) adding the device driver, 2) cross-compiling the Raspbian kernel, 3) applying the modification to the running B+, and 4) wiring. I am going to describe each step to show what I did and how things failed.

Add device driver

There are multiple ways to add a device driver. You may simply build the required files and install them, which I think is the fastest way because you never have to compile the whole Raspbian kernel. However, for my own understanding and later convenience, I chose to add the required files to their proper locations, modify Kconfig and the Makefile, and cross-compile the entire Raspbian kernel. All modifications can be found here and are self-explanatory.
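For reference, a machine driver like this is typically wired into the ALSA SoC build with a Kconfig entry and a Makefile rule roughly like the following; the symbol names, file names, and paths here are assumptions for illustration, not lines copied from my actual patch:

    # sound/soc/bcm/Kconfig (path is an assumption)
    config SND_BCM2708_SOC_RPI_MBED
            tristate "Support for the mbed AudioCODEC (TLV320AIC23B)"
            depends on SND_BCM2708_SOC_I2S
            select SND_SOC_TLV320AIC23
            help
              Say Y or M to build the ALSA SoC machine driver for the mbed AudioCODEC.

    # sound/soc/bcm/Makefile (path is an assumption)
    snd-soc-rpi-mbed-objs := rpi-mbed.o
    obj-$(CONFIG_SND_BCM2708_SOC_RPI_MBED) += snd-soc-rpi-mbed.o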

Cross compile Raspbian kernel

To avoid gcc version issues and OS X's case-insensitive file system (which gets in the way of kernel builds), I prepared a fresh Debian environment on VirtualBox. This way, if anything goes wrong, I can simply throw the environment away and start all over again.
First I cloned my kernel repository to $HOME/dev/oklahomer/linux/ and the tools repository to $HOME/dev/raspberrypi/tools/. Then I exported some environment variables as below:
➜ linux git:(feature/mbed_support) export CCPREFIX=$HOME/dev/raspberrypi/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian-x64/bin/arm-linux-gnueabihf-
➜ linux git:(feature/mbed_support) export MODULE_TEMP_PATH=~/work/modules
Then run make as below. The output of my `make menuconfig` is located here.
➜ linux git:(feature/mbed_support) make ARCH=arm CROSS_COMPILE=${CCPREFIX} menuconfig
➜ linux git:(feature/mbed_support) make ARCH=arm CROSS_COMPILE=${CCPREFIX} -j4
Then it complains about the absence of GLIBC_2.14 as below:
/home/oklahomer/dev/raspberrypi/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian-x64/bin/arm-linux-gnueabihf-gcc: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found (required by /home/oklahomer/dev/raspberrypi/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian-x64/bin/arm-linux-gnueabihf-gcc)
Follow the instructions described here and install glibc >= 2.14:
  1. Add the following lines to /etc/apt/sources.list
    deb http://ftp.iinet.net.au/debian/debian wheezy main contrib non-free
    deb http://ftp.iinet.net.au/debian/debian wheezy-backports main
    deb http://ftp.iinet.net.au/debian/debian jessie main contrib non-free
  2. Add the following lines to /etc/apt/preferences
    Package: *
    Pin: release a=testing
    Pin-Priority: 10
    
    Package: *
    Pin: release a=stable
    Pin-Priority: 900
  3. Install
    ➜ linux git:(feature/mbed_support) sudo apt-get install -t testing libc6-dev
Try again.
➜ linux git:(feature/mbed_support) make ARCH=arm CROSS_COMPILE=${CCPREFIX} -j4
➜ linux git:(feature/mbed_support) make ARCH=arm CROSS_COMPILE=${CCPREFIX} INSTALL_MOD_PATH=${MODULE_TEMP_PATH} modules
➜ linux git:(feature/mbed_support) make ARCH=arm CROSS_COMPILE=${CCPREFIX} INSTALL_MOD_PATH=${MODULE_TEMP_PATH} modules_install
Prepare the files to scp:
➜ linux git:(feature/mbed_support) find ./ -name zImage
./arch/arm/boot/zImage
➜ linux git:(feature/mbed_support) mv arch/arm/boot/zImage ~/work/.
➜ linux git:(feature/mbed_support) cd ~/work/
➜ work tar czf modules.tar.gz modules
➜ work  ls -l
total 15576
drwxr-xr-x 3 oklahomer oklahomer     4096 Jan  4 14:36 modules
-rw-r--r-- 1 oklahomer oklahomer 12688435 Jan  4 15:20 modules.tar.gz
-rwxr-xr-x 1 oklahomer oklahomer  3254856 Jan  4 15:14 zImage
Finally send zImage and modules.tar.gz to RasPi B+.

Apply changes

SSH into the RasPi B+ and follow these steps.
  1. Place modules
    cd /tmp
    tar xzf modules.tar.gz
    cd /lib
    mv modules modules_org
    mv /tmp/modules/lib/modules /lib
    chown -R root:root /lib/modules
  2. Place kernel image
    cd /boot
    mv kernel.img kernel.img.org
    cp /tmp/zImage kernel.img
  3. Reboot
  4. Check update/upgrade
    1. sudo apt-get update
    2. sudo apt-get upgrade
  5. Add modules to /etc/modules
    snd-bcm2835
    
    # i2c related modules are required for i2s
    i2c-bcm2708
    i2c-dev
    
    # for i2s
    snd_soc_bcm2708_i2s
    bcm2708_dmaengine
    
    # for mbed AudioCODEC
    snd_soc_tlv320aic23
    snd_soc_rpi_mbed
  6. Reboot
  7. Check i2cdetect.
    It seemed O.K. to me, since UU at 0x1b indicates that the address is reserved by a kernel driver. At least I thought so.
  8. Check aplay.
  9. Check arecord.
    Example commands for these checks are sketched below.
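The checks in steps 7 through 9 can be done with commands along these lines; the bus and device numbers are examples, so adjust them to whatever your setup reports:

    sudo i2cdetect -y 1                                     # 0x1b should show up as UU
    aplay -l                                                # list ALSA playback devices
    arecord -l                                              # list ALSA capture devices
    arecord -D plughw:0 -f S16_LE -r 44100 -d 5 test.wav    # five-second test recording
    aplay test.wav                                          # play it back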

Wiring

The RasPi B+ has a different GPIO pin layout from the older model. For the mapping, I referred to this PDF.
I am not 100% sure about the mapping below, and I suspect it has something to do with the problems I describe later.
mbed AudioCodec   |     Raspberry Pi
----------------- +---------------------
    BCLK   (I2S)  |       P5 - 03
    3V3           |       3V3
    DIN    (I2S)  |       P5 - 06
    DOUT   (I2S)  |       P5 - 05
    SCLK   (I2C)  |       P1 - 05
    SDIN   (I2C)  |       P1 - 03
    GND           |       GND

Problems

Here is the `dmesg` output that confuses me:
[    5.067470] i2c i2c-1: Failed to register i2c client tas5713 at 0x1b (-16)
[    5.338188] i2c i2c-1: Can't create device at 0x1b
[   11.813418] snd-rpi-mbed snd-rpi-mbed.0:  tlv320aic23-hifi <-> bcm2708-i2s.0 mapping ok
[   11.933579] tlv320aic23-codec 1-001b: ASoC: Capture Source DAPM update failed: -5
The entire output is located in this gist. I am not sure whether it is the direct cause, but `aplay` does not give me any sound at all. I will edit this post or write a new one when I make progress.

Nov 3, 2014

Trying out picamera's overlay function: Raspberry Pi drive recorder with GPS logger

In the last article I introduced the new version of picamera and its new overlay feature. This feature lets us add overlays on top of the preview: you no longer have to continuously capture images from the camera and draw text or images on them before sending them to the display device as a preview; just render the preview with start_preview() and add as many overlay layers as you wish to show statistics, recording status, or anything else you want.


Below are examples of both implementations.

1. Capture image, add text to it, and send it to display device continuously

This was the only way to modify the preview until picamera 1.8 came out. Because this implementation builds the preview by continuously updating a captured image, the frame rate drops as the image modification process gets slower: the slower the loop, the fewer frames are captured and the poorer the preview quality. A rough sketch of this approach follows.
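This is not my original code, just a minimal sketch of the idea; it assumes pygame with SDL_FBDEV pointed at the PiTFT and Pillow for drawing, and the loop speed directly caps the preview frame rate:

    import io
    import os

    import picamera
    import pygame
    from PIL import Image, ImageDraw

    os.environ['SDL_FBDEV'] = '/dev/fb1'   # send pygame output to the PiTFT
    pygame.init()
    screen = pygame.display.set_mode((320, 240))

    with picamera.PiCamera() as camera:
        camera.resolution = (320, 240)
        stream = io.BytesIO()
        # Each iteration captures one frame, draws text on it, and blits it to the
        # screen, so a slow loop body means a choppy preview.
        for _ in camera.capture_continuous(stream, format='rgb', use_video_port=True):
            stream.seek(0)
            frame = Image.frombytes('RGB', (320, 240), stream.read())
            ImageDraw.Draw(frame).text((10, 10), 'REC  42 km/h', fill=(255, 0, 0))
            surface = pygame.image.fromstring(frame.tobytes(), frame.size, 'RGB')
            screen.blit(surface, (0, 0))
            pygame.display.update()
            stream.seek(0)
            stream.truncate()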

2. Show preview on the bottom layer and add overlay to show text

This way the preview is shown with start_preview(), just like the plain procedure, without any modification. Text or an additional image can be added as an overlay on top, and that layer can be replaced at whatever frequency you like. As you probably figured out, the additional layer doesn't affect preview quality. The catch is that the preview output goes directly to the default display device, /dev/fb0, while the PiTFT is assigned to /dev/fb1, so we need to copy the /dev/fb0 output to /dev/fb1 with rpi-fbcp. Fbcp takes a snapshot of /dev/fb0, copies it to /dev/fb1, and waits 25 ms before repeating. Snapshotting takes about 10 ms, so with the 25 ms delay it gives roughly 1000/(10+25) ≈ 28 fps. A sketch of this overlay approach follows.
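Again, this is only a rough sketch of the idea rather than my original code; it assumes picamera 1.8 or later and Pillow, and in the real recorder the text layer would be re-rendered periodically with the current status instead of just sleeping:

    import time

    import picamera
    from PIL import Image, ImageDraw

    with picamera.PiCamera() as camera:
        camera.resolution = (320, 240)
        camera.start_preview()   # the GPU renders the preview on a lower layer

        # Overlay buffers need a width that is a multiple of 32 and a height that is
        # a multiple of 16; 320x240 already satisfies both.
        pad = Image.new('RGB', (320, 240))
        ImageDraw.Draw(pad).text((10, 10), 'REC  42 km/h', fill=(255, 0, 0))

        overlay = camera.add_overlay(pad.tobytes(), size=pad.size, layer=3, alpha=64)
        try:
            time.sleep(30)       # preview keeps running; only the text layer is static here
        finally:
            camera.remove_overlay(overlay)
            camera.stop_preview()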

Comparison

Despite the latency rpi-fbcp introduces, the overlay solution works better for me. I also like that it lets us keep the preview and overlay implementations separate.
Now I'll take a break and wait till PiTFT HAT comes out.

EDIT 2015-01-25: The PiTFT HAT is now on the market.

Sep 6, 2014

What I've learned about MMAL and picamera: Raspberry Pi drive recorder with GPS logger

To interact with the Raspberry Pi Camera Board from Python, I chose a library called picamera. It provides an interface to control the camera without using the raspistill, raspivid, or raspiyuv utilities. Actually it provides even more features, such as taking still images while recording video, modifying options during recording, and so on. I think this library is reliable because it is well maintained by its owner, waveform80, and is used in Adafruit's sample project. While reading its code, I found it very important to understand the concepts and basics of the Multi-Media Abstraction Layer (MMAL), so I began reading MMAL's specification. I am going to introduce what I have learned about MMAL and how picamera works with it.

MMAL

First of all, the Raspberry Pi has a BCM2835 SoC with a VideoCore IV GPU inside. As I already mentioned, MMAL stands for Multi-Media Abstraction Layer, and it provides a lower-level interface to the multimedia components running on that VideoCore. MMAL actually runs on top of OpenMAX, so there is another layer below it, but I decided to ignore that; as long as MMAL works as an abstraction layer and fulfils my requirements, I do not have to be aware of anything behind it.

Here is how MMAL works.
  1. The client creates an MMAL component via the API.
  2. When creation is done, a context is returned to the client.
  3. This context provides at least one input port and one output port. The number of ports and their formats vary depending on what the component represents. The format of an input port must be set when the component is created; the format of an output port may be decided later and can be modified by the client.
  4. The client and the component exchange data by sending buffer headers to each other through the input/output ports. A component can also exchange buffer headers with other connected components.
  5. There are two kinds of buffer headers:
    • One contains metadata and a pointer to the actual data.
      The payload, the actual data being exchanged, is not stored in the buffer header itself, so its memory can be allocated outside of MMAL.
    • The other contains event data.
  6. These buffer headers are pooled, and each of them gets recycled when it is no longer referenced.

How picamera works with MMAL

The good thing about MMAL is that components can be connected to each other so they can exchange buffer headers. picamera, a well-designed library, creates and uses several different kinds of components to work with the Raspberry Pi Camera Board. My favorite part is that it creates a splitter component that receives the video output from the camera component and splits it into four output ports. This way we can capture an image via the video port while video recording is in progress.

Components created on initialization

When picamera is initialized, it creates the components below:

Camera component

This is the most important one. Everything starts from here. This component provides 3 output ports:
  • preview
  • video
  • still
The other components I am going to introduce receive data directly or indirectly from these output ports.

Preview component

Basically we have two components here.
One of them has its input port connected to the camera component's preview output. On initialization, and whenever an actual preview is not needed, the null-sink component is connected. This is necessary because if no component is connected to the camera component's preview output, the camera doesn't measure exposure and captured images gradually fade to black. When start_preview() is called, picamera creates a preview renderer component and replaces the null sink with this preview component, which actually shows the preview.

Splitter component

Its input port is connected to the camera component's video output. It has four output ports, and each one carries a copy of the camera component's video output. This way we can record from one of its output ports and capture an image via another at the same time. Yes, capturing an image via the video port is faster, and you might want to use it (camera.capture('foo.rgb', use_video_port=True)). A small sketch of recording and capturing at the same time follows.
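Here is a minimal sketch, assuming only that picamera is installed, of what the splitter makes possible: video keeps recording while stills are grabbed through the video port.

    import picamera

    with picamera.PiCamera() as camera:
        camera.resolution = (1280, 720)
        camera.start_recording('drive.h264')       # consumes one splitter output
        try:
            for i in range(3):
                camera.wait_recording(5)            # keep recording and surface encoder errors
                # use_video_port=True captures from another splitter output,
                # so the recording above is never interrupted
                camera.capture('frame_%02d.jpg' % i, use_video_port=True)
        finally:
            camera.stop_recording()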

Other components

Some other kinds of components, including encoders and renderers, are created when necessary. Thanks to waveform80, a new version of picamera, 1.8, was released today, and it provides an interface to create and manage MMAL renderer components. This must be really handy when you want to add an overlay to your preview. I, for one, wanted this feature so badly that I asked @waveform80 about the release schedule last night.
He was nice enough to respond with a sense of humor, and he really did release it!
I am going to rewrite the live preview part and try its new features over this weekend.
EDIT: Follow-up article is now available.

Aug 31, 2014

Start messing around with Python: Raspberry Pi drive recorder with GPS logger

Previously I finished setting up the Raspberry Pi, attaching the required peripheral devices, testing them, installing the required software packages, and making a backup. It's a good time to start writing some code for this project. When the coding is done, the prototype looks like this.
A live preview with the current recording status and driving speed is displayed on the PiTFT. The good thing is that it can record and preview at the same time. I didn't really have to display the current speed, but I added it anyway because my wife always cares about how fast I'm driving. Maybe I could subtract 10 or 20 from the displayed speed to make her comfortable.
I'm going to explain what modules I created and how they work together.

Modules

For this project, I decided to go with the modules below:
  • run.py
  • Odyssey.py
  • GPSController.py
  • PiCamController.py

run.py

This module is basically responsible for three things:
  • It sets the environment variables used for gpsd and the PiTFT.
  • It maps each tactile button to its corresponding function.
  • It initializes Odyssey and starts the camcorder application.
This way the other modules don't have to deal with GPIO and can concentrate on their own processes and logic. In fact, run.py is the only module that imports RPi.GPIO. A rough sketch of the idea follows.
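This sketch is not the actual project code; the GPIO pin numbers and the Odyssey method names are made up for illustration:

    import os

    import RPi.GPIO as GPIO
    from Odyssey import Odyssey

    # Environment variables for the PiTFT and gpsd (values are illustrative).
    os.environ['SDL_FBDEV'] = '/dev/fb1'

    # Hypothetical mapping of tactile-button GPIO pins to Odyssey methods.
    BUTTONS = {
        17: 'toggle_recording',
        22: 'toggle_preview',
        23: 'toggle_gps_logging',
    }

    GPIO.setmode(GPIO.BCM)
    odyssey = Odyssey()

    def make_handler(method_name):
        def handler(channel):
            getattr(odyssey, method_name)()
        return handler

    for pin, method in BUTTONS.items():
        GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)
        GPIO.add_event_detect(pin, GPIO.FALLING, callback=make_handler(method), bouncetime=300)

    odyssey.run()   # hypothetical blocking entry point for the camcorder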

Odyssey.py

This is the key part of the project. On initialization it creates GPSController and PiCamController instances and stores them so it can manage them on user interaction. It provides the interface to toggle preview, recording, and GPS logging, and run.py uses this interface.

GPSController.py

Obviously this handles the GPS device. On initialization it starts gpsd so we can receive data from the GPS Breakout. It provides access to the GPS data, and while recording is active it logs the current location and speed to a designated file every five seconds. A rough sketch of the idea follows.
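This is a rough sketch of the GPSController idea, not the actual code; it assumes the python-gps bindings and that gpsd is already running, and the attribute names are illustrative:

    import threading
    import time

    from gps import gps, WATCH_ENABLE

    class GPSController(threading.Thread):
        def __init__(self, log_path='/home/pi/gps.log'):
            super(GPSController, self).__init__()
            self.daemon = True
            self.session = gps(mode=WATCH_ENABLE)   # connect to the local gpsd
            self.log_path = log_path
            self.logging_enabled = False            # toggled by Odyssey in the real project

        def run(self):
            while True:
                report = self.session.next()        # blocks until gpsd sends a report
                if self.logging_enabled and report['class'] == 'TPV':
                    with open(self.log_path, 'a') as f:
                        f.write('%s\t%s\t%s\t%s\n' % (
                            getattr(report, 'time', ''),
                            getattr(report, 'lat', ''),
                            getattr(report, 'lon', ''),
                            getattr(report, 'speed', ''),
                        ))
                    time.sleep(5)                   # log roughly every five seconds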

PiCamController.py

While GPSController deals with the GPS device, this one deals with the Raspberry Pi Camera and the PiTFT. I started with separate modules for the camera and the PiTFT, PreviewController.py and PiCamController.py, but the camera and the preview work so closely together (e.g. they share the same python-picamera instance) that I combined them into one module.

How they work together

GPSController and PiCamController both inherit from threading.Thread, so each of them runs in its own thread. This way the two instances don't block each other, and Odyssey still has control over them.

I'm going to explain how each module works in later articles. Making live preview and recording work at the same time, and overlaying the current status on the preview, were somewhat tricky; I'll spend some time explaining them, too.

Jul 21, 2014

Make a copy of your SD card: Raspberry Pi drive recorder with GPS logger

In the previous steps we installed the required hardware and software and finished the basic configuration, so we can now take photos, shoot video, output data to the external touchscreen, and fetch location data. Before writing the Python code to make those modules work together, I think we should take some time to make a copy of the disk we have been working on. This way, even if you make a huge mistake in the near future and everything gets messed up, you can install the saved disk image and start from where we are now. It's just like playing your favorite RPG: you don't wanna start from scratch, right?

Making a copy

Just like when we installed the Raspbian OS to an empty SD card, we use the dd command to make a disk image. Insert the SD card into your Mac and run `diskutil list` to see which disk to use. Then execute `sudo dd if=/dev/disk1 of=Odyssey-Mk1_20140721.dmg`, changing the 'if' (input file) and 'of' (output file) values to suit your environment and preferences. My 8GB SD card took 20-30 minutes to copy; it will take longer with a larger SD card.

Installing

When you want to install the saved disk image, the process is almost the same as installing Raspbian to a fresh SD card. First insert the SD card you want to write the image to, then unmount it just like in the initial installation, and run `sudo dd if=Odyssey-Mk1_20140721.dmg of=/dev/disk1`. Again, set the 'if' and 'of' values to suit your environment.
Writing takes longer than making the image; in my case it took more than two hours, so be patient.

My trick

If your SD card is larger and you chose 'Expand Filesystem' during the initial configuration, both steps take much longer to complete. So I keep my 8GB SD card as the master and make copies of it; that way dd doesn't have to copy every byte of a 64GB card. Then I install this image onto a larger micro SD card. Right after the installation, the micro SD card only uses 8GB of its space because the image came from an 8GB card.
So I run `sudo raspi-config` again and choose 'Expand Filesystem' to use the remaining space.

Jul 20, 2014

Setting up GPS module: Raspberry Pi drive recorder with GPS logger

Previously we attached the PiTFT and finished the touchscreen configuration. Now we are going to add a GPS module to fetch location data.

For my project I chose Adafruit's Ultimate GPS Breakout. I looked at several GPS modules on the web, and Adafruit's has a detailed official guide and many reference articles by users. The specs seemed good, too.

Freeing UART

The official guide introduces two ways of connecting the module: via a USB-to-TTL serial adapter, or directly to the Pi's UART pins on the GPIO header.

The former seems easier and the guide recommends it, but I chose the latter because I didn't want to give up one of the two USB ports. Doing so requires some extra steps.

First you need to edit /boot/cmdline.txt

You need to remove the parts that include console=ttyAMA0, because we are going to use ttyAMA0 for the GPS from now on. I also added rpitestmode=1 because other users' blog entries seemed to include it.
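For reference, on the wheezy-era Raspbian I was using, the stock line looked roughly like the following (your exact parameters may differ); the console=ttyAMA0,115200 and kgdboc=ttyAMA0,115200 entries are the ones to remove:

    dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline rootwait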

Second you edit /etc/inittab
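The goal here is to stop the serial console getty from holding ttyAMA0. On that Raspbian the relevant line looks roughly like the one below, and commenting it out is enough:

    #T0:23:respawn:/sbin/getty -L ttyAMA0 115200 vt100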

Then reboot to activate the changes. This step is done.

Installing required packages

Before connecting the GPS module, install the required packages. They include an interface you can later use to interact with GPS data from Python code.
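If I remember correctly, the packages are the ones from Adafruit's guide; treat this command as a sketch rather than an exact record of what I ran:

    sudo apt-get install gpsd gpsd-clients python-gps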

Connecting GPS module

This time we use the cobbler cable I introduced in my previous article about the PiTFT. It looks like the captures below.



As shown in the second capture, connect the GPS module's VIN to the Pi's 5V, GND to GND, RX to TXD, and TX to RXD. Note that TX and RX are cross-wired, since the module's input is the Pi's output and vice versa.

Check and debug

Execute `sudo gpsd /dev/ttyAMA0 -F /var/run/gpsd.sock` and gpsd is running. Check whether you can properly read the incoming data with `cgps -s`. If it's working, the output looks something like the capture below.

In my case it didn't work at first, and I had to go through a debugging process.
To add a debug option to gpsd, add -D followed by a number indicating the debug level, e.g. `sudo gpsd /dev/ttyAMA0 -N -D3 -F /var/run/gpsd.sock`. I saw messages like the ones below; it kept saying 'Satellite data no good (1 of 1).'

I couldn't really tell whether the wiring had a problem or the GPS module simply wasn't receiving data, so I checked what was coming in on /dev/ttyAMA0.

The GPS module was placed by a window, the soldering seemed O.K., and it looked like the module was receiving some partial data. I couldn't really tell what to do, so I posted to Adafruit's forum. As the forum topic shows, they were kind and helped me a lot. Now my GPS module works perfectly and I receive consistent data.

Recommended

I didn't know the GPS module was so sensitive until I ran into this problem. Now I have an external antenna attached; you can put the antenna on the roof of your car to receive a better signal. To do this you'll need the external antenna itself and an adapter cable that fits the breakout's antenna connector.
Basically, we have now installed all the required packages and attached all the modules we need. Next we are going to make a copy of the OS before we make a big mistake and are forced to start all over again.