1. Raspberry NAS

    I’ve had a Raspberry Pi lying around for some time, and I finally found the time to set it up as a small NAS for my home. After some poking around I ended up with a nicely working system that exposes shares to my Mac computers and streams to the TVs, with everything backed up on a secondary USB disk.

    The hardware setup is as follows:

    • Raspberry Pi Model B, with a 4GB flash card
    • Two external USB drives of the same size; the first is used for file sharing, the second for backup.

    I installed and configured the following software:

    Raspbian

    Download Raspbian “Wheezy” from http://www.raspberrypi.org/downloads and write it to the flash card. For the first boot you need a keyboard and screen attached to the Pi to do the initial configuration.

    To install software, use apt-get; to enable the software to run at every boot, use update-rc.d. Example:

    apt-get install netatalk
    update-rc.d netatalk defaults
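
    The same pattern applies to the other packages used further down in this post. As a rough sketch of installing them in one go (package names as found in Raspbian Wheezy; double-check with apt-cache search if one of them is missing):

    apt-get install autofs hdparm minidlna rsync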

    udev

    udev is essential when working with USB devices on Linux: it makes sure that USB devices always get assigned the same device name. I tried different mapping strategies and finally found the method described here to work best, i.e. mapping based on the serial number of the devices. Note that the command udevadm info crashes on Raspbian; use the command usb-devices instead to get the serial IDs. After getting the serial IDs of my two USB disks I created a new file /etc/udev/rules.d/50-usb-hd.rules with the following content:

    SUBSYSTEMS=="usb", ATTRS{serial}=="DEF10C3BCD6C", KERNEL=="sd?", NAME="%k", SYMLINK+="usb-share", GROUP="storage"
    SUBSYSTEMS=="usb", ATTRS{serial}=="DEF10C3BCD6C", KERNEL=="sd?1", NAME="%k", SYMLINK+="usb-share", GROUP="storage"
    SUBSYSTEMS=="usb", ATTRS{serial}=="000000560938", KERNEL=="sd?", NAME="%k", SYMLINK+="usb-backup", GROUP="storage"
    SUBSYSTEMS=="usb", ATTRS{serial}=="000000560938", KERNEL=="sd?1", NAME="%k", SYMLINK+="usb-backup", GROUP="storage"
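
    The serial numbers used in the rules above come from the usb-devices output; a quick way to list them for all attached devices (the exact field layout may vary a little between kernel versions) is:

    usb-devices | grep -i serialnumber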

    To test, restart udev (you may even need a reboot), and then check if the devices are created.
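
    A minimal sketch of that check, assuming a udev version new enough to have the udevadm subcommands below (otherwise a reboot is the simplest route):

    udevadm control --reload-rules
    udevadm trigger
    ls -l /dev/usb-share /dev/usb-backup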

    automount

    Good. Old. Automount. First introduced in Solaris, autofs is used to automatically mount filesystems. Using automount instead of hard mounting is way safer with USB drives: should one of your disks or controllers die for some reason, it won’t bring down your system. Edit your /etc/auto.master and add the line:

    /media /etc/auto.media --timeout=100,defaults,user,exec

    Then add a new file /etc/auto.media with the following content:

    usb-share-hd -fstype=auto :/dev/usb-share
    usb-backup-hd -fstype=auto :/dev/usb-backup

    After restarting autofs, check that it’s working: first cd to /media and do an ls. The directory should be empty. Then type cd usb-share-hd, which should mount the file system.
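
    A minimal sketch of those steps (the comments describe what I expect to see, not guaranteed output):

    /etc/init.d/autofs restart
    cd /media
    ls                       # should be empty, nothing is mounted yet
    cd /media/usb-share-hd   # this triggers the automount
    mount | grep usb-share   # the share should now appear in the mount list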

    Netatalk

    There used to be CAP; luckily today we have Netatalk for file sharing with Macs (you would want a Samba server if you run Windows). After you’ve set up udev and autofs you just need to edit /etc/netatalk/AppleVolumes.default to add your shares. In my case I added the line:

    /media/usb-share-hd          "USB Drive"
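
    After editing the file, restart netatalk; the share should then be reachable from the Mac. A sketch of connecting manually (raspberrypi.local is an assumption that relies on Bonjour/avahi; use the Pi’s IP address otherwise):

    /etc/init.d/netatalk restart

    # then, on the Mac:
    open afp://raspberrypi.local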

    hdparm

    hdparm is used to spin down your hard drives when they are not in use. This actually proved to be the trickiest part of getting my home NAS up and running. There are many tools for configuring hard drives to spin down. I tried all of them; on my hardware the only one that worked was hdparm. This may depend on your disks and USB adapters. After installing hdparm, test it by running:

    hdparm -S1 /dev/usb-share

    This should spin down your drive after 5 seconds of inactivity (the -S value is in units of 5 seconds). If it works, go ahead and add the following lines to your /etc/hdparm.conf and add hdparm to the init.d startup sequence:

    /dev/usb-share {
            write_cache = off
            spindown_time = 120
    }
    /dev/usb-backup {
            write_cache = off
            spindown_time = 120 
    }
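
    A sketch of wiring it into the boot sequence, following the same update-rc.d pattern as above (assuming the hdparm package ships an init.d script), and of checking that a drive has actually spun down; hdparm -C reports the drive’s power state and should say standby once it has:

    update-rc.d hdparm defaults
    hdparm -C /dev/usb-share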

    minidlna

    minidlna handles streaming videos to UPnP-enabled TVs (I also use kissdx for streaming to my old Kiss DP-1500). If you don’t have a UPnP-enabled TV, an excellent option is to run OpenELEC on a Raspberry Pi.

    minidlna is easily configured by adding the following line to /etc/minidlna.conf:

    media_dir=/media/usb-share-hd/Video
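
    More directories can be added with a media-type prefix (A for audio, P for pictures, V for video); the paths here are just placeholders for illustration:

    media_dir=A,/media/usb-share-hd/Music
    media_dir=P,/media/usb-share-hd/Pictures

    Then restart minidlna (/etc/init.d/minidlna restart) and enable it at boot with update-rc.d, as for the other services.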

    rsync

    For backup I actually set up two disks: one that I share using netatalk, and one that I use for backup. It would be wonderful to have a full ZFS stack on Debian, but from what I’ve seen it’s still fairly unstable, so I opted for a tried and tested solution, using rsync to sync files from the shared disk to the backup disk:

    rsync -a /media/usb-share-hd /media/usb-backup-hd

    That does the job. I actually run it manually right now, to be on the safe side.
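
    When running it by hand I’d recommend a dry run first to see what would be copied. A sketch (note that trailing slashes change how rsync maps the directory trees, so keep them consistent):

    rsync -avn /media/usb-share-hd/ /media/usb-backup-hd/   # dry run, lists what would be transferred
    rsync -av /media/usb-share-hd/ /media/usb-backup-hd/    # the actual sync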

    And that’s it. Once you’ve configured everything, remember to take a backup of the flash card!

     
  2. Foundational Issues in Touch-Surface Stroke Gesture Design — An Integrative Review

    Some years ago I was researching the use of sound as feedback in basic Human-Computer Interaction tasks. I did the work at IBM Research together with Shumin Zhai and Per Ola Kristensson, using their ShapeWriter product. Now the results of the research are collected in an integrative review, available in Foundations and Trends in Human-Computer Interaction. A sample chapter is available here.

     
  3. iOS and Facebook

    I’ve started working on a new app, this time looking into iOS Facebook integration. With iOS 6 out, the promise is great native Facebook integration for your app. So what exactly does the native integration make possible?

    • Facebook app install - Sign up for your app with one click, without having to go through the Facebook app switch, HTML web views, etc.
    • Facebook login - If the user is already logged into Facebook on iOS and has already installed your Facebook app, nothing more is needed; you don’t need to show a login screen
    • Ability to publish directly from your app with one click

    These features are basic but work very well; they remove a lot of the complexity of using OAuth2 web views inside native apps.

    Next question: Which SDK to use, Apple’s or Facebook’s? Luckily the answer is easy: the new Facebook SDK incorporates the functionality of Apple’s native integration, so if you use Facebook’s SDK you get all the extra functionality that isn’t part of iOS in addition to the native platform features.

    Going deeper, there are actually more Facebook iOS goodies available: it’s now possible to deep link from the Facebook app into your very own app. This can be done directly from status updates coming from your app. When a user taps such a status update he is taken straight to a screen within your app; if your app is not installed, the user is taken to its App Store page instead (thanks to @techdonovan for the tip). To me this is a really nice feature that helps lower the barrier for social, viral distribution.

     
  4. Augmented Listening

    During the summer I released my first iPhone app, Listener (written Lis10er to emphasize the digital aspect of the app). The app is a sound installation, an experiment with audio and augmented reality. The app contains music and sounds composed by the guys from the U.S.O. Project. Instead of listening to the same sounds over and over, the app combines sounds recorded with the microphone with sounds contained in the app.

    You can hear what it sounds like over at SoundCloud (http://soundcloud.com/lis10er). This week Lis10er is available at a special low price of only €0.79, compared to the normal price of €2.99.

     
  5. Site-wide split testing

    For the past few weeks I’ve been looking into a practical setup for split testing. Content Experiments from Google is free but really only allows simple split testing of individual pages; testing a site template or running site-wide content experiments is not supported and quickly gets complex.

    Instead I’ve been looking into Visual Website Optimizer (VWO) and the cheaper Optimizely. VWO works great; I have yet to run my first experiment on Optimizely.

    I’ve been looking around for ways to set up my site-wide experiment and after some searching found the Domain Access Drupal module. Domain Access is intended for running several sites off the same installation and content base. It wasn’t designed for split testing, but to me it appears almost ideal: it lets you run the same site off several URLs, say www.mysite.com and v2.mysite.com, and then swap out the theme, or even the content, depending on which URL you hit. I’m now setting this up to run my first experiment, so stay tuned for the results.

     
  6. Bluetooth Development Kit

    Bluetooth development kit has finally arrived, now it’s time to dust off the soldering iron and start soldering…


     
  7. Drupal VPS

    For some time I’ve been using VPS hosting instead of a fully managed PHP server for my Drupal experiments. This gives me the freedom to combine PHP with other technologies such as Java and JavaScript. I use Java for running Solr. I’m also experimenting with building scalable HTML5 and mobile apps using NodeJS and the sexy Vert.x framework.

    I’ve collected a set of instructions that I use on a freshly installed Debian Squeeze system. The setup is used for hosting a PHP website with Apache, MySQL and Apache Solr for search integration. The instructions are Drupal-specific but should be a good starting point for any PHP-based hosting setup.

     
  8. Wireless Sensing Platform: Selecting a Chipset

    I’m working on building my own sensing platform, and I’ve decided to go for Bluetooth Low Energy.

    The next step is to find a chipset and a good development platform. The key criterion is cost: I need something that can be produced cheaply and at the same time is easy to manage. I also don’t want to waste time and energy on designing my own PCB.

    A quick search revealed that the fairly new CC2541 chipset from Texas Instruments is really cheap and also seems to be feature complete. With volume discounts it comes in below 2 euros per chip, which is exactly the kind of thing I’m looking for. They also sell a development kit for around 250 euros, but it doesn’t come with a PCB suitable for production use. Luckily other companies make those.

    I found the OLP425 from connectBlue, which looked really nice but unfortunately costs around 50 USD. At that price the final product would easily go beyond 100 euros, which is really too much for the hobby market I want to target.

    BlueGiga’s BLE112 is more interesting: at 20 euros per board it’s much more affordable. They also seem to have good support, with developer forums and open source PCB designs, so this is what I’ll be using. I’ve now ordered my dev kit and can’t wait to get started.

     
  9. Wireless Sensing Platform: Choosing a Standard

    I want to build a remote sensor for use at home. For now it’s just for hobby use, but who knows, maybe this can become a business. I’ve been looking at different technologies, and I’d prefer something fairly standard so it’s easy to extend in the future. I’ve been looking at ZigBee, Ant+ and Bluetooth 4.0 Low Energy (BLE). The ZigBee boards are really interesting: small, fairly cheap and low on energy consumption. Then there’s Ant+, which is used a lot in wellness for personal monitoring but doesn’t seem to have caught on beyond that. Finally there’s BLE, which is comparable to ZigBee in many ways.

    So what exactly is the difference between ZigBee and BLE? Well, for one, ZigBee has been around longer and there are many more development kits and boards available. Bluetooth 4.0 is fairly new and only found in the latest phones. But maybe the fact that it is in phones is the second big differentiator: I have yet to see a laptop or phone equipped with a ZigBee chipset. Third, there’s a difference in how the protocols have been designed: ZigBee is optimized for mesh networking, Bluetooth for star-based networks.

    While I really like the mesh-optimized features of ZigBee, I think BLE’s integration into consumer devices is going to be the killer feature for me.
     
  10. Reverse OAuth

    OAuth2 is fast becoming the de facto standard for identity sharing and single sign-on.

    OAuth is a protocol that addresses the user’s need for one central identity store, i.e. one username and password for many services. OAuth also addresses the security concerns around REST that were raised in the heated REST vs. SOAP debates a few years ago.

    Now, one thing that is entirely missing from the OAuth2 spec is the ability for third parties to authenticate a user to an identity store. In the current spec the user can only identify himself by whatever means (username, password) the identity store has put in place.

    Consider this: A user is browsing the Internet using a mobile phone and 3G connection. Here the mobile operator already knows the phone number (MSISDN) of the user and thus the identity of the user. However, there is no way for the user to allow the mobile operator to authenticate to the identity store on behalf of the user. 

    What is possible today with OAuth is for the telco to act as an identity provider. However, with already established identity providers such as Facebook, Google, OpenID and LiveID, I’m not sure how much value that adds in the end. Instead, with what we could call Reverse OAuth, a user could authorize their telco to authenticate them at the identity provider whenever they are on the mobile network. That way the user would no longer need to enter Facebook, LiveID or other credentials when browsing from the mobile phone.

    Allowing trusted parties (in the example above, a telco) to authenticate the user could improve the user experience. It could also give the telco a standard way to share valuable identity information.