Raspberry Pi Camera sends Pictures to ownCloud

Of course, there are many ways to take pictures with your Raspberry Pi camera and save them somewhere.

I use my Raspberry Pi camera to observe my flat when I am away. I want to do this in as secure a way as possible, i.e., I do not want to open any special ports on my router, nor do I want to send unencrypted images over the web.

Inspired by http://blog.davidsingleton.org/raspberry-pi-webcam-a-gentle-intro-to-crontab/, my idea was to take a photo every 5 or 10 minutes and save it to my ownCloud server via WebDAV with SSL encryption.

Step by step:

1.) Make sure you have a Raspberry Pi with a camera module and an ownCloud installation reachable over HTTPS somewhere out on the web (with the Pictures app activated).

2.) Mount your ownCloud drive on your Raspberry Pi: create the directory /home/pi/owncloud and add the following line to /etc/fstab:

https://[your.owncloud.domain.name]/remote.php/webdav/ /home/pi/owncloud davfs rw,noexec,noauto,user,async,_netdev,uid=pi,gid=pi 0 0

then mount the drive by calling mount owncloud/, or by calling echo -e “y” | mount owncloud/ if your server has a self-signed certificate, as mine does.
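If you do not want to type your ownCloud credentials on every mount, davfs2 can read them from its secrets file. A minimal sketch of the format (the user name and password below are of course placeholders):

```
# /etc/davfs2/secrets (or ~/.davfs2/secrets for the pi user), mode 600:
# <mount point> <username> <password>
/home/pi/owncloud mycloudusername mysecretpassword
```

With this in place, mounting from a cron job works without any interactive prompt for credentials.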

3.) My idea is to take a picture, say, every 10 minutes, around the clock, without any need to remove old pictures manually. So, write a script like the following, name it “take_photo.sh” and make it executable:

#!/bin/bash
# Name the file after the current time (HHMM); after 24 hours the same
# name comes around again and the old photo is simply overwritten.
filename="/home/pi/owncloud/myflat_$(date +%H%M).jpg"
# Take the photo locally first, then move it onto the WebDAV mount,
# so no partially written file ever appears in ownCloud.
raspistill -o /home/pi/image.jpg
mv /home/pi/image.jpg "$filename"

4.) Set up the crontab by calling crontab -e and add the following line (it runs the script every 10 minutes):

*/10 * * * * /home/pi/take_photo.sh

5.) You are finished. Your Raspberry Pi will take a photo every 10 minutes and save it to your ownCloud, where you have a beautiful picture viewer that you can access wherever you are, on whatever device.

Compute SPSS-like mean index variables

Consider the problem described at theanalysisfactor.com.

The point is to compute meaningful mean index variables while missing values are present. In R you have the argument na.rm to tell a function – here the mean() function – what to do with missing values. Setting this to true – mean(…, na.rm=TRUE) – makes the function use all non-missing values, even if there is only one. In the other case – mean(…, na.rm=FALSE) – the function will return NA even if there is only one missing value.

To handle this situation I have written a very handy function that works like the MEAN.{X}() function in SPSS, where {X} denotes the minimal number of variables that have to be non-missing to be incorporated in computing the mean value.

My single line R function looks like

spss.row.means <- function(vars, not.na=0) {
  apply(vars, 1, function(x) ifelse(sum(!is.na(x)) >= not.na, mean(x, na.rm=TRUE), NA))
}

As the first argument you pass the variables (in columns); the second argument is the minimal number of variables that must be non-missing.
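A quick usage sketch (repeating the function definition so the snippet is self-contained; the item values are made up for illustration):

```r
spss.row.means <- function(vars, not.na = 0) {
  apply(vars, 1, function(x)
    ifelse(sum(!is.na(x)) >= not.na, mean(x, na.rm = TRUE), NA))
}

# Three items with missing values; require at least 2 valid answers per row.
items <- cbind(v1 = c(1, 1, NA),
               v2 = c(2, NA, NA),
               v3 = c(3, 3, 3))

spss.row.means(items, not.na = 2)
# [1]  2  2 NA
```

The third row has only one non-missing value, so it falls below the threshold and gets NA, exactly as MEAN.2() in SPSS would do.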

Have fun(ction)!

 

The Beauty of Unique Alphanumeric Random Codes

My challenge was to generate random codes of variable length and size. The codes should all be unique and built of alphanumeric symbols. I wrote the following function using the built-in function sample():

alpha.rnd <- function(length, size) {
	if (size > 36^length) stop("\n size cannot be greater than 36^length")
	b <- NULL
	repeat {
		# build `size` codes of `length` alphanumeric characters each
		a <- NULL
		for (i in 1:length) {
			a <- paste0(sample(c(letters, 0:9), size, replace = TRUE), a)
		}
		b <- c(a, b)
		if (sum(duplicated(b)) == 0) break
		# draw only as many new codes as there were duplicates, then retry
		size <- sum(duplicated(b))
		b <- b[!duplicated(b)]
	}
	return(b)
}

Example:

> set.seed(77)
> alpha.rnd(3,5)
[1] "5qk" "94z" "f45" "au8" "qq0"

Enjoy!

Transition from Google Reader to Feedly to Tiny Tiny RSS (via Google Reader)

As Google Reader has been telling me for a few months now, “Reader will not be available after July 1, 2013. Please be sure to back up your data.”, I looked for a new solution to read all my feeds. After googling for a second, I thought I had found the solution: Feedly.

I was happy with Feedly until I saw a tweet mentioning Tiny Tiny RSS as an alternative to Google Reader. So I downloaded and installed Tiny Tiny RSS on my own server. The installation went smoothly, but how to get all my feeds in…

As Feedly does not provide any export functionality, I had to export my feeds from Google Reader and manually add the feeds I had subscribed to since the transition from Google Reader to Feedly. To import into Tiny Tiny RSS you need to activate the plugin “googlereaderimport” to import starred items and “import_export” to import all other feeds.

On my Samsung Galaxy S3 I have installed the Tiny Tiny RSS app. It's a great app that does exactly what I want, and it is also possible to take unread feeds offline.

To conclude, there is one main reason to use Tiny Tiny RSS in favor of Feedly, namely data sovereignty (“Datenhoheit”). It matters all the more to me as you are not even able to export your data from Feedly.

Happy RSSeading!

Compiling R v3.1.0 with MKL Support on OS X v10.8.4

Inspired by Compiling R 3.0.1 with MKL support, I compiled R v3.1.0 with MKL (Intel® Math Kernel Library) support on OS X v10.8.4, wondering whether I could see any increase in performance without the need to use parallelism.

First of all you have to download and install MKL for OS X. Unfortunately there is no single package including only the library; instead you have to download either Intel® C++ Composer XE 2011 for Mac OS X or Intel® Fortran Composer XE 2011 for Mac OS X (see also “How can I download the Intel® IPP and Intel® MKL for Mac OS* X?“). Both are not freely available, but you can download and install the “Free 30-Day Evaluation Version”, as I did with Intel® C++ Composer XE 2011. The installation procedure works fine, and I trusted the final screen of the installation process, which told me that the installation had completed successfully.

Then the big nightmare began when trying to compile/build R from source…

Before you start, you have to install all the developer tools, such as Xcode and a Fortran compiler.

My first attempt was to compile R v3.0.1. After a few tries I ended up with these configure parameters:

./configure --enable-R-shlib --enable-threads=posix --with-lapack --with-blas="-fopenmp -m64 -I$MKLROOT/include -L$MKLROOT/lib/intel64 -lmkl_gf_lp64 -lmkl_gnu_thread -lmkl_core -lpthread -lm" r_arch=x86_64 SHELL="/bin/bash" CC="gcc -arch x86_64 -std=gnu99" CXX="g++ -arch x86_64" OBJC="gcc -arch x86_64" F77="gfortran -arch x86_64" FC="gfortran -arch x86_64" --with-system-zlib

Both variables MKLROOT and LD_LIBRARY_PATH must be defined before, e.g., with the following command:

source /opt/intel/mkl/bin/mklvars.sh intel64

“configure” went fine, but then “make” ended up with errors, namely:

** testing if installed package can be loaded
*** arch - R
ERROR: sub-architecture 'R' is not installed
*** arch - x86_64
ERROR: loading failed for ‘R’
* removing ‘/Users/phsz/Downloads/R-3.0.1/library/MASS’
make[2]: *** [MASS.ts] Error 1
make[1]: *** [recommended-packages] Error 2
make: *** [stamp-recommended] Error 2

After googling for a while I found the hint to try the R-devel version from svn/trunk, i.e., R v3.1.0. Here everything went right with “make”. Errors occurred on “make install”; nevertheless I found that R was working if run with the “--arch” option, i.e., by executing

R --arch=x86_64

First, I ran the R benchmark, which showed impressive results:

R 3.1.0 with MKL:

Total time for all 15 tests_________________________ (sec):  7.47700000000002
Overall mean (sum of I, II and III trimmed means/3)_ (sec):  0.443668541312943

R 3.0.1 without MKL:

Total time for all 15 tests_________________________ (sec):  34.8980000000001
Overall mean (sum of I, II and III trimmed means/3)_ (sec):  1.34762633295107
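If you want a quick do-it-yourself check instead of the full benchmark, the following minimal sketch (not the benchmark I ran) times a large matrix multiplication, which is exactly the kind of operation an optimized BLAS such as MKL accelerates:

```r
# Time a large matrix product; with MKL linked in, this should run
# several times faster than with the reference BLAS shipped with R.
set.seed(1)
n <- 1000
A <- matrix(rnorm(n * n), n, n)
elapsed <- system.time(B <- A %*% A)["elapsed"]
print(elapsed)
```

Comparing the elapsed time between the MKL build and a stock build gives a rough impression of the speed-up on BLAS-bound operations.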

My unspoken hope was to be able to increase the performance of functions like unidimTest(). But in hindsight it clearly could not perform better, because such functions are dominated by chains and loops rather than by linear algebra…

Have fun and be prepared for some more happenings during compiling!

BTW uninstalling MKL is easy:

sudo /opt/intel/composer_xe_2013.3.171/uninstall_ccompxe.sh

 

Parallel computing unidimTest in IRT

In the R package ltm by Dimitris Rizopoulos there is a function called unidimTest(). Computations of this function are very power-consuming due to the Monte Carlo procedure used inside. Without parallelization, only one core of your (surely) multicore computer is used for this computation. A simple modification of unidimTest() makes it possible to use the easy-to-use R package foreach for parallel computing, together with the R packages parallel and doParallel. See also parallel-r-loops-for-windows-and-linux by Vik Paruchuri.
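The foreach pattern itself can be sketched with a toy body (the squaring is just a placeholder for a real bootstrap replication):

```r
library(foreach)
library(doParallel)

# Register two workers; on a multicore machine you can use more.
registerDoParallel(cores = 2)

# Each iteration runs on its own worker; .combine = "rbind" stacks the
# per-iteration results row by row, just like T.boot is filled below.
res <- foreach(b = 1:4, .combine = "rbind") %dopar% {
  b^2  # placeholder for one Monte Carlo replication
}

stopImplicitCluster()
res  # a 4 x 1 matrix containing 1, 4, 9, 16
```

Because the iterations are independent, the result is identical to the sequential loop, only the order of execution changes.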

I have changed the lines of unidimTest()

for (b in 1:B) {
  if (!missing(object)) 
    z.vals <- rnorm(n, ablts, se.ablts)
  data.new <- rmvlogis(n, parms, IRT = IRT, z.vals = z.vals)
  T.boot[b, ] <- eigenRho(data.new)$ev
}

to

T.boot[,] <- foreach(b=1:B, .combine="rbind", .errorhandling="pass") %dopar%
{
  #if (!missing(object)) 
  z.vals <- rnorm(n, ablts, se.ablts)
  data.new <- rmvlogis(n, parms, IRT = IRT, z.vals = z.vals)
  eigenRho(data.new)$ev   	
}

and named the function unidimTest.p().

The function is called as

library(foreach)
library(parallel)
library(doParallel)

registerDoParallel(cores=4) # put in here how many cores you want to be used

set.seed(666)
proc <- proc.time()
uni.p <- unidimTest.p(fit.rasch.u, B=100)
proc.time() - proc

and works, at least, in a Mac OS X (10.8.3) environment with R version 3.0.1.

 

 

Organizing life with Samsung Galaxy S3

My life is organized as follows:

A) I have dates/appointments.
B) I have todos.
C) I take notes.
D) I have contacts.
E) I save bookmarks.
F) I share files.

I want to have all this in sync on all my devices, i.e., currently on my

  • [MBP] MacBook Pro (OS X 10.8)
  • [S3] Samsung Galaxy S3 (Android OS 4.1.2)
  • [iPod] iPod Touch 4th Generation (iOS 6).

I have the following cloud based solutions for…

A) owncloud.org
I have ownCloud installed on my own server. The apps I use are iCal (MBP), S Planner (S3) and Calendar (iPod). Together with S Planner I have installed CalDAV-sync beta to work with ownCloud via the CalDAV protocol.

B) astrid.com
Astrid is great for todo lists. I use the Astrid app (iPod, S3) and the web app (MBP).

C) gnotes.me
Gnotes is a powerful and easy-to-use service for taking notes. The Android app (S3) is really great and a little more powerful than the iOS app (iPod). The web app (MBP) is also a bit restricted, but easy to use.

D) owncloud.org
For contacts I use Contacts (MBP, iPod, S3). To sync with ownCloud I use CardDAV-sync beta.

E) bit.ly
This service is great to save and share bookmarks across all my devices. On iOS there is the bitly app (iPod) and on the other devices I use the (mobile optimized) web app (MBP, S3).

F) dropbox.com
Dropbox is essential for file sharing across devices and for uploading photos taken with the S3. On the mobile devices (S3, iPod) I use the Dropbox app, and on the desktop (MBP) the native app as well as the web app.

 

 

Multiple varying coefficients with multiple group-level predictors

Ever tried to set up a multilevel model for e.g. classical educational settings?
I do it with R and JAGS using the rjags package. If you want to use a BUGS-like language on Mac OS X together with R, you have to use JAGS.
My book recommendation for everything about multilevel models is “Data Analysis Using Regression and Multilevel/Hierarchical Models” (2007) by Andrew Gelman and Jennifer Hill.
I was able to set up the BUGS code of the book running in JAGS.
But be careful! There is an error in the BUGS code of Chapter 17.2 of the aforementioned book.
The error is in the file “17.2_Bugs_codes.bug” in the section “# Multiple varying coefficients with multiple group-level predictors” on line 131:
B.raw[j,1:K] ~ dnorm (B.raw.hat[j,], Tau.B.raw[,])
should be: 

B.raw[j,1:K] ~ dmnorm (B.raw.hat[j,], Tau.B.raw[,])
as on page 380, line 6 of the book.
A second error is on line 138:
G[k,l] <- xi[k] + G.raw[k,l]
should be: 

G[k,l] <- xi[k] * G.raw[k,l]
The complete corrected code looks like:
# Multiple varying coefficients with multiple group-level predictors
model {
  for (i in 1:n){
    y[i] ~ dnorm(y.hat[i], tau.y)
    y.hat[i] <- inprod(B[classid[i],],X[i,])
  }
  tau.y <- pow(sigma.y, -2)
  sigma.y ~ dunif(0, 100)

  for (k in 1:K){
    for (j in 1:J){
      B[j,k] <- xi[k] * B.raw[j,k]
    }
    xi[k] ~ dunif(0, 100)
  }  
  for (j in 1:J){
    B.raw[j,1:K] ~ dmnorm(B.raw.hat[j,], Tau.B.raw[,]) #this line is erroneous in 17.2_Bugs_codes.bug!
    for (k in 1:K){
      B.raw.hat[j,k] <- inprod(G.raw[k,], U[j,])
    }
  }
  for (k in 1:K){
    for (l in 1:L){
      G[k,l] <- xi[k] * G.raw[k,l] #this line is erroneous in 17.2_Bugs_codes.bug!
      G.raw[k,l] ~ dnorm(0, .0001)
    }
  }

  Tau.B.raw[1:K,1:K] ~ dwish(W[,], df)
  df <- K+1
  Sigma.B.raw[1:K,1:K] <- inverse(Tau.B.raw[,])
  for (k in 1:K){
    for (k.prime in 1:K){
      rho.B[k,k.prime] <- Sigma.B.raw[k,k.prime]/
        sqrt(Sigma.B.raw[k,k]*Sigma.B.raw[k.prime,k.prime])
    }
    sigma.B[k] <- abs(xi[k])*sqrt(Sigma.B.raw[k,k])
  }
}
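With the corrected model saved to a file, running it from R via rjags looks roughly like the following sketch. The file name is hypothetical, and the data objects – n, y, X, classid, J, K, L, U – are assumed to be prepared from your data set beforehand; W is the scale matrix of the Wishart prior:

```
library(rjags)

# All names must match those in the model block above; K is the number
# of individual-level predictors, L the number of group-level predictors.
jags.data <- list(n = n, y = y, X = X, classid = classid,
                  J = J, K = K, L = L, U = U, W = diag(K))

jm <- jags.model("17.2_multiple_varying_coef.bug", data = jags.data,
                 n.chains = 3)
update(jm, 1000)  # burn-in
post <- coda.samples(jm, c("B", "G", "sigma.y"), n.iter = 5000)
summary(post)
```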