Counter Update

I just finished my latest improvements to the legacy version of my counter script:
I added the lookup for ISPs and dynamic scaling for the axis legend.
Going forward, I will convert the whole system into something more sophisticated, e.g. using a data-warehouse approach. The first version of the data model is finished.

So, from the old version (which was just some plain tables floating around),

I created a new model with more tables, connected to each other. Basically, I divided the model into a basic fact (the count) and some dimensions (one for every value):
With my current data (around 22k facts), I already have to limit the number of facts queried from the database. I wrote a short script to migrate all old datasets to the new data model.
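
As an illustration of what such a star schema can look like, here is a minimal sketch. The table and column names are invented for this post, not the actual model, and SQLite stands in for the real database purely for demonstration:

```shell
# Hypothetical star schema: one fact table plus one dimension per value.
# All names here are made up for illustration.
sqlite3 :memory: <<'SQL'
CREATE TABLE dim_isp  (isp_id  INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY, day  TEXT UNIQUE);
CREATE TABLE fact_count (
    fact_id INTEGER PRIMARY KEY,
    isp_id  INTEGER REFERENCES dim_isp(isp_id),
    date_id INTEGER REFERENCES dim_date(date_id),
    hits    INTEGER
);
INSERT INTO dim_isp  (name) VALUES ('Example ISP');
INSERT INTO dim_date (day)  VALUES ('2012-03-01');
INSERT INTO fact_count (isp_id, date_id, hits) VALUES (1, 1, 42);
-- Limiting the facts pulled from the database, as described above:
SELECT d.day, i.name, f.hits
  FROM fact_count f
  JOIN dim_isp  i ON i.isp_id  = f.isp_id
  JOIN dim_date d ON d.date_id = f.date_id
 ORDER BY d.day DESC LIMIT 100;
SQL
```

The point of the dimension tables is that each distinct value (ISP, date, …) is stored once and the fact rows only carry small integer keys, which keeps the fact table narrow even as it grows.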

Joyent SmartOS VM MySQL Setup

I just played around with the new SmartOS from Joyent (Homepage). I followed a basic tutorial from a blog called opusmagnus to set up my basic SmartOS machine on VirtualBox. You basically just insert the latest ISO image and start up the VM (SmartOS runs from a live medium – a DVD or USB image – so all your hard-disk space belongs to your VMs).

I will just summarize the basic steps to set up a basic VM for MySQL (you should also read the original post here).
After you have installed your VM (set up networking and the ZFS pool – you should use more than 3 virtual hard disks), log in to your new system and perform the following steps:

Check for VM-Templates to install:

# dsadm avail
UUID                                 OS      PUBLISHED  URN                    
9dd7a770-59c7-11e1-a8f6-bfd6347ab0a7 smartos 2012-02-18 sdc:sdc:percona:1.3.8  
467ca742-4873-11e1-80ea-37290b38d2eb smartos 2012-02-14 sdc:sdc:smartos64:1.5.3
7ecb80f6-4872-11e1-badb-3f567348a4b1 smartos 2012-02-14 sdc:sdc:smartos:1.5.3  
1796eb3a-48d3-11e1-94db-3ba91709fad9 smartos 2012-01-27 sdc:sdc:riak:1.5.5     
86112bde-43c4-11e1-84df-8f7fd850d78d linux   2012-01-25 sdc:sdc:centos6:0.1.1  
...
5fef6eda-05f2-11e1-90fc-13dac5e4a347 smartos 2011-11-03 sdc:sdc:percona:1.2.2  
d91f80a6-03fe-11e1-8f84-df589c77d57b smartos 2011-11-01 sdc:sdc:percona:1.2.1  
...
9199134c-dd79-11e0-8b74-1b3601ba6206 smartos 2011-09-12 sdc:sdc:riak:1.4.1     
3fcf35d2-dd79-11e0-bdcd-b3c7ac8aeea6 smartos 2011-09-12 sdc:sdc:mysql:1.4.1    
...
7456f2b0-67ac-11e0-b5ec-832e6cf079d5 smartos 2011-04-15 sdc:sdc:nodejs:1.1.3   
febaa412-6417-11e0-bc56-535d219f2590 smartos 2011-04-11 sdc:sdc:smartos:1.3.12

I chose a Percona VM (a VM with MySQL and some backup tools pre-installed – more here).

# dsadm import  a9380908-ea0e-11e0-aeee-4ba794c83c33
a9380908-ea0e-11e0-aeee-4ba794c83c33 doesnt exist. continuing with install
a9380908-ea0e-11e0-aeee-4ba794c83c33 successfully installed

Then you need to create a basic VM config file, /tmp/percona-vm:

{
        "alias": "percona-vm",
        "brand": "joyent",
        "dataset_uuid": "a9380908-ea0e-11e0-aeee-4ba794c83c33",
        "dns_domain": "haussleiter.de",
        "quota": "10",
        "nics": [
                {
                        "nic_tag": "admin",
                        "ip": "192.168.178.42",
                        "netmask": "255.255.255.0",
                        "gateway": "192.168.178.1"
                }
        ]
}

You are now able to create your VM:

# vmadm create -f /tmp/percona-vm
Successfully created df4108a7-c3af-4372-b959-6066c70661e9

You can check if your new VM is running:

# vmadm list
UUID                                  TYPE  RAM      STATE             ALIAS
df4108a7-c3af-4372-b959-6066c70661e9  OS    256      running           percona-vm

# ping 192.168.178.42
192.168.178.42 is alive

You can log into your VM (which is, to be precise, a zone) with the zlogin command:

# zlogin df4108a7-c3af-4372-b959-6066c70661e9
[Connected to zone 'df4108a7-c3af-4372-b959-6066c70661e9' pts/2]

Because I could not find any information about the predefined MySQL password, I just changed it using standard MySQL commands.
First you need to stop the MySQL service:

# svcadm disable mysql:percona

Then you need to start MySQL again, skipping the user-credential tables:

# mysqld_safe --skip-grant-tables &

Enter the MySQL command-line client and change the root password:

# mysql -uroot
mysql> use mysql;
mysql> update user set password=PASSWORD("NEW-ROOT-PASSWORD") where User='root';
mysql> flush privileges;
mysql> quit;

You now need to shut down this MySQL instance again (find its PID with prstat, then kill it):

# prstat
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP       
...
 10921 root     4060K 3016K cpu2     1    0   0:00:00 0.0% prstat/1
 10914 root     3200K 2236K sleep    1    0   0:00:00 0.0% bash/1
 10913 root     3172K 2244K sleep    1    0   0:00:00 0.0% login/1
 10894 mysql     207M   37M sleep    1    0   0:00:00 0.0% mysqld/27
...
kill 10894

You can now start the MySQL daemon again:

# svcadm enable mysql:percona

I hope I will have time to look into more features of SmartOS. It seems to be a great system for virtualizing a lot of different services. It also supports virtualizing Windows VMs with KVM.

maven is not ant

Besides the many legitimate criticisms that one can level at the build tool Maven, the incorrect use of its features is often one of the main reasons for poor performance (i.e. long build times).

You often see modules linked to one another via relative paths. That may be common practice in Ant builds, but if you keep in mind that every module of a project should be able to be checked out and built on its own, it becomes clear that you need a more abstract way to connect individual modules.

And this is exactly what dependencies are for. A complete (and useful) Maven build system necessarily includes a working (in-house) Maven repository (be it Archiva or Nexus). Such a repository serves not only as storage for artifacts, but also as the hub that connects the individual modules.

Assume a project has the following structure:

Root
  |-Modul A
  |-Modul B
    |-Modul C

Assume further that module A depends on C, because C contains, for example, XSD schemas that are updated separately by a domain expert but are needed for generating Java classes in A.

If this were done Ant-style, something like the following would be conceivable:

<project>
...
    <build>
        <plugins>
            ...
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>xmlbeans-maven-plugin</artifactId>
                <version>2.3.3</version>
                <executions>
                    <execution>
                        <phase>generate-sources</phase>
                        <goals>
                            <goal>xmlbeans</goal>
                        </goals>
                    </execution>
                </executions>
                <inherited>true</inherited>
                <configuration>
                    ...
                    <schemaDirectory>../B/C/xsd</schemaDirectory>
                    <classGenerationDirectory>${project.build.directory}/classes</classGenerationDirectory>
                </configuration>
            </plugin>
            ...
        </plugins>
    </build>
...
</project>

This is bound to fail at the latest during the release build, because there every module is checked out separately into its own target directory (and at that point the relative paths no longer resolve).

So what can you do?
The maven-dependency-plugin comes to the rescue: it allows the current module to access the contents of a single artifact directly. For the plugin to be able to process the artifact, it should (logically) also be listed as a dependency in the POM. This even ensures that the most recent code is always used: if an artifact in the repository is newer than the one cached locally, the newer one is pulled in.

The configuration above would therefore change as follows:

<project>
...
    <dependencies>
        ...
        <dependency>
            <groupId>com.example</groupId>
            <artifactId>C</artifactId>
            <version>${project.version}</version>
        </dependency>
        ...
    </dependencies>
    <build>
        <plugins>
            ...
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-dependency-plugin</artifactId>
                <version>2.3</version>
                <executions>
                    <execution>
                        <id>src-dependencies</id>
                        <phase>generate-sources</phase>
                        <goals>
                            <!-- use copy-dependencies instead if you don't want to explode the
                                sources -->
                            <goal>unpack</goal>
                        </goals>
                        <configuration>
                            <artifactItems>
                                <artifactItem>
                                    <groupId>com.example</groupId>
                                    <artifactId>C</artifactId>
                                    <version>${project.version}</version>
                                    <classifier>resources</classifier>
                                    <type>zip</type>
                                    <includes>**/*.xsd</includes>
                                    <overWrite>true</overWrite>
                                    <outputDirectory>${project.build.directory}/C</outputDirectory>
                                </artifactItem>
                            </artifactItems>
                        </configuration>
                    </execution>
                </executions>
            </plugin>            
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>xmlbeans-maven-plugin</artifactId>
                <version>2.3.3</version>
                <executions>
                    <execution>
                        <phase>generate-sources</phase>
                        <goals>
                            <goal>xmlbeans</goal>
                        </goals>
                    </execution>
                </executions>
                <inherited>true</inherited>
                <configuration>
                    ...
                    <schemaDirectory>${project.build.directory}/C/xsd</schemaDirectory>
                    <classGenerationDirectory>${project.build.directory}/classes</classGenerationDirectory>
                </configuration>
            </plugin>
            ...
        </plugins>
    </build>
...
</project>

Note that the order of the two plugins matters here; otherwise the code generation cannot access the unpacked files of the dependency. Admittedly, this is a simple example, but in my experience the urge to help oneself with relative paths inside Maven files is quite widespread :-/.

Show Build-Information in your iOS App About Panel

Sometimes it might be useful to have an exact piece of information about which version of an app is currently running. Especially if you have a decent testing group, it is important to track the version in which a bug appears. The goal of this post is to achieve an info panel like this in your application:
You get the application version (from the application bundle), the repository revision, and the date of the last commit.

Picture 1: Example Application About Dialog

We use Subversion's built-in keyword substitution to replace given keywords with repository information; more about this topic here. There is also a way to use this method with Git, but I have not tested it yet. You can find out more about this here.
The first step is to create a file template you can import in your code, through which you can access all the necessary details:

#define APP_VERSION \
    [[[NSBundle mainBundle] infoDictionary] \
        objectForKey:@"CFBundleVersion"]
#define APP_EXECUTABLE \
    [[[NSBundle mainBundle] infoDictionary] \
        objectForKey:@"CFBundleExecutable"]
#define APP_NAME \
    [[[NSBundle mainBundle] infoDictionary] \
        objectForKey:@"CFBundleName"]
#define APP_BUILD_REVISION @"$Rev$"
#define APP_BUILD_DATE @"$Date$"
#define APP_LAST_AUTHOR @"$Author$"

Code 1: version.h template
The next step is to tell Subversion to replace the placeholders with the repository values.
You do this by setting the svn:keywords property on the file.
After that, the values will be updated with every commit of the file version.h.

svn propset svn:keywords 'Revision Author Date' version.h

Code 2: setting svn:keywords on version.h
The very last step is to make sure that version.h is updated each time you make a change to your application. Assuming you build your app every time you make a change, you can use the build phases built into Xcode to force an update of version.h. We use the trick that every change to a property of version.h counts as a modification of the file itself.
So we create a small bash script that sets the property build to a new value. After that, version.h needs to be committed as a new version.

#!/bin/sh
DATE=`date`
HOST=`hostname`
svn propset build "$HOST $DATE" version.h

Code 3: buildUpdate.sh
Now we need to add a run of buildUpdate.sh to our build cycle (Picture 2 & Picture 3).

Picture 2: Project Target Settings

Picture 3: Insert Script Call

After a successful commit, the file “version.h” will look something like this:

#define APP_VERSION \
    [[[NSBundle mainBundle] infoDictionary] \
        objectForKey:@"CFBundleVersion"]
#define APP_EXECUTABLE \
    [[[NSBundle mainBundle] infoDictionary] \
        objectForKey:@"CFBundleExecutable"]
#define APP_NAME \
    [[[NSBundle mainBundle] infoDictionary] \
        objectForKey:@"CFBundleName"]
#define APP_BUILD_REVISION @"$Rev: 1047 $"
#define APP_BUILD_DATE @"$Date: 2011-01-21 18:53:38 +0100 (Fri, 21 Jan 2011) $"
#define APP_LAST_AUTHOR @"$Author: phaus $"

Code 4: updated version.h
You might post-process the output (e.g. filter out the $ delimiters or reformat the date) to get more polished strings.
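
For example, the $-delimiters can be stripped with one small regular expression. This is my own sketch, not part of the original post; in the app itself you would apply the same pattern with NSString operations, the sed line just shows the idea:

```shell
# Turn '$Rev: 1047 $' into '1047'; the same pattern covers $Date$ and $Author$.
echo '$Rev: 1047 $'     | sed -E 's/^\$[A-Za-z]+: (.*) \$$/\1/'   # prints: 1047
echo '$Author: phaus $' | sed -E 's/^\$[A-Za-z]+: (.*) \$$/\1/'   # prints: phaus
```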

Using UIAutomation for Multilanguage iOS Applications

With iOS 4.0, Apple introduced a new test framework for automated UI testing: UI Automation. Based on JavaScript and built into Instruments, UI Automation is a very useful tool during the development of iOS applications.
A very good introduction to UI Automation is here and here.
During the development of an iOS application, we decided to port it to iOS 4.0 and also to use UI Automation for regression testing (before that we used GHUnit tests for component testing – but that's another story).
As we are primarily a company dealing with web-based applications, it took almost no effort to deal with the JavaScript syntax of UI Automation. But we had to deal with the fact that we were developing a dual-language application (de and en), and therefore needed a way to test the whole UI in both languages.
If you are familiar with UI Automation, you probably know that the framework uses the accessibility labels of your UI, and often button labels as well. So you have to deal with the actual language of the current UI setting. But wait – there is already a valid mapping from a given key to the different languages. If you internationalize your application, you use so-called Localizable.strings files for your language mapping (more here).
So we just need a way to move our already existing mapping into the UI Automation world. UI Automation supports importing separate JavaScript files, so you can use your own libraries and settings. So I built a conversion script that translates your various Localizable.strings files to JavaScript and collects all languages in one big collection.
So, for example, strings like these:

    "Library" = "Bibliothek";
    "Shop" = "Kiosk";

will be converted to:

UIA.Localizables = {
    "de":{
        ...
        "Library" : "Bibliothek",
        "Shop" : "Kiosk",
        ...
    },
    "English":{
    }
    ...
}
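
The core of such a conversion can be sketched in one line of shell. This is my own sketch, not the script attached to this post: it rewrites each `"Key" = "Value";` line into a JavaScript `"Key" : "Value",` entry; a real script would additionally wrap the result in the `UIA.Localizables = { "de": { ... } }` structure shown above:

```shell
# Convert Localizable.strings entries (fed inline here) into JS object entries.
printf '%s\n' '"Library" = "Bibliothek";' '"Shop" = "Kiosk";' \
  | sed -E 's/^[[:space:]]*"(.*)" = "(.*)";[[:space:]]*$/        "\1" : "\2",/'
```

Comment lines and multi-line values in the .strings file would need extra handling, which is why the attached script is longer than this one-liner.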

The next step is to determine, during your UI Automation test, which language setting you need to load from your localization file.
It is possible to read out some system settings during a UI Automation test. The basic functions to find the current language and to read the correct language map look like this:

UIA.getCurrentLang = function(){
    if(application.preferencesValueForKey("AppleLanguages")[0]  == "en")
        return "English";
    else
        return application.preferencesValueForKey("AppleLanguages")[0];
}
UIA.getCurrentLocalizables = function(){
    return UIA.Localizables[UIA.getCurrentLang()];
}
var Localizable = UIA.getCurrentLocalizables();

The first function is necessary to work around a quirk of recent Xcode versions (some people call it a bug 🙂).
Now we can simply use our strings within our test cases:

#import "lib/Localizables.js"
function delay(seconds){
    UIATarget.localTarget().delay(seconds);
}
function tapTab(name){
    var window = UIATarget.localTarget().frontMostApp().mainWindow();
    window.tabBar().buttons()[name].tap();
}
var window = UIATarget.localTarget().frontMostApp().mainWindow();
tapTab(Localizable['Library']);
delay(1);
tapTab(Localizable['Shop']);
delay(7);

I attached the conversion script to this post.
You just need to adjust the source and destination folders of your i18n files and the UIAutomation tests directory.
Download file

Philipp's 5 mins: Graph-Fun with AJAX and Canvas

I have always searched for an efficient way to add dynamic diagrams to a web project without using Flash or other plugin-based magic.
With the canvas tag supported in almost all mainstream browsers, I thought it would be a good time to create a short demo of how things work out.
You need at least two parts for this demo. First of all, you need a source JSON feed. For this demo I just hacked together a very basic PHP script:

<?php
header('Content-type: application/json');
echo'{';
echo '"value":"' . rand(0, 60) . '"';
echo '}';
?>

The result is something like:

{"value":"34"}

Secondly, you need a web page where you insert your canvas element, load the data from the JSON feed, and draw the changing values onto the canvas element.
For better performance, we implement pulling the data and drawing it as two parallel cycles. The common data storage is an array of 300 values (for our diagram with a width of 300px).
We use two additional JS files. The first one creates our XMLHttpRequest object and handles the response in a callback method. The second one parses the JSON feed into a JavaScript object in a safe way (an ordinary eval works, but is too insecure).
Our main script works in several steps.
First we initialize an array with empty elements:

function init(){
    for(var i=0; i < 300; i++){
        randomValues[i] = 0;
    }
}

This step is optional, but then you have a nice “zero line” at the beginning.

Secondly, we have a method that pushes a new value onto the existing array and drops the first entry if the length of the array exceeds 300.

function addValue(arr, value){
    if(arr.push(value) > 300){
        arr.shift();
    }
}

The next two methods are needed to send our Ajax request and to handle the response in a callback method.
Basically, the callback method just calls the addValue method.

The timeout variable is set to 200 ms, so the script calls our backend every 200 ms and then adds a new value to our array.

function pullValue(){
    sendRequest('random.php',handleRandomRequest);
    setTimeout(pullValue, timeout);
}

function handleRandomRequest(req) {
    var text = JSON.parse(req.responseText);
    addValue(randomValues, text.value);
}

The last method is for the drawing functionality:

function draw(){
    ctx.clearRect(0, 0, 300, 60);
    ctx.fillStyle = "rgba(101,101,101, 0.5)";
    ctx.fillRect (0, 0, 300, 60);
    ctx.lineWidth = 1;
    ctx.strokeStyle = 'blue';
    ctx.beginPath();
    ctx.moveTo(1, 60-parseInt(randomValues[0]));
    for (var i=1; i<randomValues.length; i++){
        value = 60-parseInt(randomValues[i]);
        ctx.lineTo(i,value);
    }
    ctx.stroke();
    setTimeout(draw, timeout);
}

ctx is a 2D context of the canvas element.
On every call of the draw method, all elements of the array are painted. The first element is always the start point.
Because the canvas coordinate system has its origin (0,0) in the upper left corner, while the origin of our diagram should be in the lower left corner, you have to subtract the array values from 60 to get the right drawing coordinates.
This method also runs every 200 ms, so the two cycles – pulling the data and drawing it – stay in step.

Here you can see the script in action.

MacOS update 10.6.4 and GPGMail

So the latest MacOS update broke the GPGMail plugin again. But it seems that the old UUID trick saves the day.
First, close Mail.app.
Then you need to find out the new PluginCompatibilityUUIDs of Mail.app and the Message framework:

cat /System/Library/Frameworks/Message.framework/Resources/Info.plist | grep UUID -A 1
cat /Applications/Mail.app/Contents/Info.plist | grep UUID -A 1
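
To pull out just the UUID strings, the grep can be extended with a small sed filter. This is a sketch against an inline sample fragment; the real files live at the paths above and their UUIDs will differ:

```shell
# Sample plist fragment standing in for Mail.app's Info.plist (UUID is made up).
printf '%s\n' '<key>PluginCompatibilityUUID</key>' \
              '<string>36CCB8BB-2207-45C4-9F01-57BBD328FA76</string>' \
  | grep UUID -A 1 \
  | sed -n 's/.*<string>\(.*\)<\/string>.*/\1/p'
```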

Then you need to open the GPGMail bundle.
Look into /Users/[username]/Library/Mail/Bundles.
You can open GPGMail.mailbundle via the context menu. You need to add the two new UUIDs to the SupportedPluginCompatibilityUUIDs node in the Info.plist file.

Save and then restart Mail.app.

via gpgmail-users@lists.sourceforge.net and @morrow

creating JNI with Swig

I am currently playing around with JNI and Java, due to a colleague's question about making the connect features of jack-audio (http://jackaudio.org) accessible to Java.
There is already a Java lib (http://jjack.berlios.de) with some features, but some needed ones still seem to be missing.
So I started today to have a look into SWIG (http://swig.org):
"SWIG is a software development tool that connects programs written in C and C++ with a variety of high-level programming languages."
After some hours of research I ended up with some facts:
To create a Java binding to a given C/C++ program or library, you need one or more interface files (*.i) containing the necessary SWIG module descriptions.
There is an example on the SWIG homepage (http://www.swig.org/Doc1.3/SWIGDocumentation.html#Introduction) explaining the SWIG workflow.
It starts with a C file, example.c:

/* File : example.c */
double  My_variable  = 3.0;

/* Compute factorial of n */
int  fact(int n) {
    if (n <= 1) 
        return 1;
    else 
        return n*fact(n-1);
}

/* Compute n mod m */
int my_mod(int n, int m) {
    return(n % m);
}

The matching example.i file looks like the following:

/* File : example.i */
%module example
%{
/* Put headers and other declarations here */
    extern double My_variable;
    extern int    fact(int);
    extern int    my_mod(int n, int m);
%}
extern double My_variable;
extern int    fact(int);
extern int    my_mod(int n, int m);

As you can see, the interface file has a similar syntax, with some additional meta information.
You can now create your JNI bindings:

swig -java example.i

There are also flags for various other languages:
-allegrocl - Generate ALLEGROCL wrappers
-chicken - Generate CHICKEN wrappers
-clisp - Generate CLISP wrappers
-cffi - Generate CFFI wrappers
-csharp - Generate C# wrappers
-guile - Generate Guile wrappers
-java - Generate Java wrappers
-lua - Generate Lua wrappers
-modula3 - Generate Modula 3 wrappers
-mzscheme - Generate Mzscheme wrappers
-ocaml - Generate Ocaml wrappers
-octave - Generate Octave wrappers
-perl - Generate Perl wrappers
-php - Generate PHP wrappers
-pike - Generate Pike wrappers
-python - Generate Python wrappers
-r - Generate R (aka GNU S) wrappers
-ruby - Generate Ruby wrappers
-sexp - Generate Lisp S-Expressions wrappers
-tcl - Generate Tcl wrappers
-uffi - Generate Common Lisp / UFFI wrappers
-xml - Generate XML wrappers

As a result you get three new files:

  • example.java
  • exampleJNI.java
  • example_wrap.c

The file example_wrap.c can be used to compile the needed library for your JNI access.
The two Java files are the basic JNI implementation:

    class exampleJNI {
        public final static native void My_variable_set(double jarg1);
        public final static native double My_variable_get();
        public final static native int fact(int jarg1);
        public final static native int my_mod(int jarg1, int jarg2);
    }

And here is a basic Java example of how to access these functions:

public class example {
    public static void setMy_variable(double value) {
        exampleJNI.My_variable_set(value);
    }
    public static double getMy_variable() {
        return exampleJNI.My_variable_get();
    }
    public static int fact(int arg0) {
        return exampleJNI.fact(arg0);
    }
    public static int my_mod(int n, int m) {
        return exampleJNI.my_mod(n, m);
    }
}

To get into working with SWIG, I can recommend the sources of the G4Java project.
There is also a Maven plugin to use SWIG from within your Maven build: http://java.freehep.org/freehep-swig-plugin.
I am currently trying to create the necessary interface files from the jack-audio sources to use them for a first run of SWIG. For Python and Tcl you can use cmake to create these files.

Restoring a MacOS Hard-Disk Backup with dd

Behind its tidy interface, MacOS's Disk Utility offers an extensive tool for working with hard disks.
However, there are some problems that often require a detour via the Terminal.
I used the tool to back up the hard disk of a newly acquired netbook before playing with various Linux distributions :-).
This was necessary because that disk contains a recovery partition from which the Windows system can be restored if needed.
Disk Utility makes it very easy to pull an image of a complete device (a hard disk). Empty regions are skipped, so the 160 GB disk boils down to an image of barely 8 GB. Up to this point I still believed that I could simply restore the image later.
Oh, and: for the backup I removed the 2.5″ SATA disk and attached it to my MacBook Pro with a USB-SATA adapter.
The structure of the disk looks as follows:

bash-3.2# diskutil list
...
/dev/disk1
   #:  TYPE                    NAME    SIZE       IDENTIFIER
   0:  FDisk_partition_scheme          *160.0 GB  disk1
   1:  Windows_NTFS            System  85.9 GB    disk1s1
   2:  DOS_FAT_32                      69.6 GB    disk1s2
   3:  0xDE                            4.5 GB     disk1s4
...

To the casual reader nothing special stands out here, but the last partition is of type EISA configuration and cannot be mounted by MacOS. Interestingly, Disk Utility is nevertheless able to include this partition when backing up the complete device into one overall image. Unfortunately, restoring at the device level is not supported :-).
That means it is possible to restore the partition with the (current) Windows installation (NTFS) and a further partition with update data (FAT32), but the actual recovery partition remains lost in nirvana :-/. Furthermore, this requires the target disk and the backup image to have identical partition structures – i.e. if a Linux installer has created its own partition scheme, restoring the backup is no longer so simple.
What helps with both problems is the Unix tool dd.
First of all, we need to find out the two device names. To do this, we mount the backup image and attach the hard disk to the Mac again.
Then we list the disk devices:

bash-3.2# diskutil list
/dev/disk0
   #:  TYPE                    NAME       SIZE       IDENTIFIER
   0:  GUID_partition_scheme              *200.0 GB  disk0
   1:  EFI                                209.7 MB   disk0s1
   2:  Apple_HFS               Imotep HD  199.7 GB   disk0s2
/dev/disk1
   #:  TYPE                    NAME       SIZE       IDENTIFIER
   0:  FDisk_partition_scheme             *160.0 GB  disk1
   1:  DOS_FAT_32              DISK1S1    84.9 GB    disk1s1
   2:  Linux_Swap                         970.6 MB   disk1s3
   3:  DOS_FAT_32                         69.6 GB    disk1s2
   4:  0xDE                               4.5 GB     disk1s4
/dev/disk2
   #:  TYPE                    NAME       SIZE       IDENTIFIER
   0:  FDisk_partition_scheme             *160.0 GB  disk2
   1:  Windows_NTFS            System     85.9 GB    disk2s1
   2:  DOS_FAT_32                         69.6 GB    disk2s2
   3:  0xDE                               4.5 GB     disk2s4

Our source is /dev/disk2, our target /dev/disk1. First of all, we copy the MBR from the image onto the hard disk (the partition table is stored there as well, which saves us the tedious re-partitioning). The MBR sits within the first 512 bytes of a hard disk, so we copy device to device, not partition to partition:

bash-3.2# sudo dd if=/dev/disk2 of=/dev/disk1 bs=512 count=1

Now we are able to restore the two visible partitions via Disk Utility. Select the target disk; under the "Restore" tab, drag the target partition into the "Destination" field and the source partition from the mounted image into the "Source" field. If the program reports an error, it may be necessary to deactivate the partitions first (select the partition and deactivate it via the toolbar). After a few minutes the backup should be restored. This is a big advantage over dd, because dd restores the data sector by sector (transferring even zero sectors 1:1), while Disk Utility transfers only the actual data and skips zero sectors.
What remains is the last, invisible partition. This we copy once more with dd. To avoid copying kilobyte by kilobyte, we choose 512 MB blocks:

dd if=/dev/disk2s4 of=/dev/disk1s4 bs=512m


Although it is only about 5 GB, the copy takes quite some time – which gives an idea of how time-consuming restoring the complete 160 GB via dd would be.
This insight cost me half a night :-). Maybe someone will face the same problem one day (e.g. backing up/restoring pure Linux partitions).