Install CoreOS on Proxmox


Some words before we start…

Hello Blog, it’s been a while. I still have to deliver the last part of the Munin Plugin Development Series (Part 1, 2, 3).

Today I would like to write about setting up a CoreOS environment on Proxmox. Proxmox is a Debian-based distribution that bundles a web UI for OpenVZ and KVM with some great tools for clustering and multi-tenancy installations. I have been using Proxmox as a hosting platform for some years now and I am still amazed by its stability and by the way things have worked out so far. I plan to create another series about things around Proxmox (e.g. cluster setup using Tinc, live migration of VMs, and the overall network setup).

But now, let’s dive into the Topic…

 

VM Setup

My Proxmox host uses private networks, both for OpenVZ containers and for KVM VMs.
Both private networks have internet access via the standard Linux IP forwarding facilities.
Configuration is done via iptables, e.g. for our private KVM network 10.10.0.0/24:


iptables -t nat -A POSTROUTING -s 10.10.0.0/24 -o eth0 -j SNAT --to ${EXT_IP}
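For the SNAT rule to take effect, kernel IP forwarding must be enabled as well. A minimal sketch (assuming a Debian-style /etc/sysctl.conf; ${EXT_IP} is the host's external address, as above):

```shell
# Enable IPv4 forwarding for the running system
sysctl -w net.ipv4.ip_forward=1
# Persist across reboots (assumption: Debian-style sysctl.conf)
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
# NAT the private KVM network out via eth0
iptables -t nat -A POSTROUTING -s 10.10.0.0/24 -o eth0 -j SNAT --to ${EXT_IP}
```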

Now, create a (KVM) VM in Proxmox. I picked 2 cores and 2 GB of RAM. Choose VirtIO for the disk as well as the network. This provides much better performance and works out of the box, since CoreOS has built-in support for VirtIO.

The basic steps of the setup are shown in the screenshots (setup1 through setup8).

Now start your VM and open the console (screenshot: start.new.vm).

 

Preparations

Download the CoreOS ISO

[user@proxmox]# pwd
/var/lib/vz/template/iso
[user@proxmox]# wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_iso_image.iso

Note your public SSH key

[user@proxmox]# cat ~/.ssh/id_rsa.pub

Become root

coreos ~ # sudo su - root

Update the root password

coreos ~ # passwd

Set up the basic network

coreos ~ # ifconfig eth0 10.10.0.111 netmask 255.255.255.0 up

SSH into your system

[root@cleopatra iso]# ssh root@10.10.0.111
The authenticity of host '10.10.0.111 (10.10.0.111)' can't be established.
RSA key fingerprint is XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX.
Are you sure you want to continue connecting (yes/no)? yes    
root@10.10.0.111's password:
CoreOS stable (766.3.0)
Update Strategy: No Reboots  

Finish the network configuration

coreos ~ # route add default gw 10.10.0.1
coreos ~ # echo "nameserver 8.8.8.8" > /etc/resolv.conf

Installation

see https://coreos.com/os/docs/latest/installing-to-disk.html

Download Config Template

coreos ~ # wget https://gist.githubusercontent.com/phaus/e52241b66576d4484f6f/raw/9032faaa69bc05ebc8b08efb518f2a90bfef4dca/coreos1-config-coreos.yml

Adjust the Configuration as required

coreos ~ # cat coreos1-config-coreos.yml
#cloud-config
hostname: "coreos1"

# include one or more SSH public keys
ssh_authorized_keys:
  - ssh-rsa XXX

coreos:

  units:
    - name: systemd-networkd
      command: stop
    - name: 00-static.network
      runtime: true
      content:  |
        [Match]
        Name=eth*
        [Network]
        Gateway=10.10.0.1
        Address=10.10.0.111/24    
        DNS=8.8.8.8    
    - name: systemd-networkd
      command: start
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start

Replace XXX with your public SSH key.
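Instead of editing the file by hand, you can splice the key in with sed – a sketch, assuming your key sits at the default path ~/.ssh/id_rsa.pub and the template contains the literal placeholder ssh-rsa XXX:

```shell
# Replace the placeholder key in the cloud-config with your real public key.
# Assumptions: key at ~/.ssh/id_rsa.pub, placeholder is literally "ssh-rsa XXX".
PUBKEY="$(cat ~/.ssh/id_rsa.pub)"
# Use | as the sed delimiter, since the key contains / characters
sed -i "s|ssh-rsa XXX|${PUBKEY}|" coreos1-config-coreos.yml
grep "ssh-" coreos1-config-coreos.yml   # sanity check: your key should show up
```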

Install CoreOS to /dev/vda (it is vda since VirtIO devices are mapped to vdX)

coreos ~ # coreos-install -d /dev/vda -C stable -c ~/coreos1-config-coreos.yml
Checking availability of "local-file"
Fetching user-data from datasource of type "local-file"
Downloading the signature for http://stable.release.core-os.net/amd64-usr/766.3.0/coreos_production_image.bin.bz2...
2015-09-28 20:59:39 URL:http://stable.release.core-os.net/amd64-usr/766.3.0/coreos_production_image.bin.bz2.sig [543/543] -> "/tmp/coreos-install.2oAX9KwZlj/coreos_production_image.bin.bz2.sig" [1]
Downloading, writing and verifying coreos_production_image.bin.bz2...
2015-09-28 21:00:09 URL:http://stable.release.core-os.net/amd64-usr/766.3.0/coreos_production_image.bin.bz2 [195132425/195132425] -> "-" [1]
gpg: Signature made Wed Sep  2 04:32:09 2015 UTC using RSA key ID E5676EFC
gpg: key 93D2DCB4 marked as ultimately trusted
gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: Good signature from "CoreOS Buildbot (Offical Builds) <buildbot@coreos.com>" [ultimate]
gpg: Note: This key has expired!
Primary key fingerprint: 0412 7D0B FABE C887 1FFB  2CCE 50E0 8855 93D2 DCB4
     Subkey fingerprint: EEFA 7555 E481 D026 CC40  D8E6 A5A9 6635 E567 6EFC
Installing cloud-config...
Success! CoreOS stable 766.3.0 is installed on /dev/vda

Check your Installation

coreos ~ # mount /dev/vda9 /mnt
coreos ~ # cd /mnt/

Please keep in mind that most of the configuration takes place during the first boot of your new instance.

Time for a Shutdown

coreos ~ # shutdown -h now
PolicyKit daemon disconnected from the bus.
We are no longer a registered authentication agent.
Connection to 10.10.0.111 closed by remote host.
Connection to 10.10.0.111 closed.    

First Boot

Start the VM again (this time it should boot from the internal disk – you can also remove the ISO file, just to be sure). The node should now also apply the correct network configuration.

You should see something like this (screenshot: start.instance):

 

SSH into your new node

[root@cleopatra iso]# ssh core@10.10.0.111

You might get this Warning:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx.
Please contact your system administrator.
Add correct host key in /root/.ssh/known_hosts to get rid of this message.
Offending RSA key in /root/.ssh/known_hosts:13
RSA host key for 10.10.0.111 has changed and you have requested strict checking.
Host key verification failed

That is fine, since the CoreOS host just changed its SSH host key. Just remove the problematic line (in this case line 13) from your /root/.ssh/known_hosts.
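Instead of deleting the line by hand, ssh-keygen can remove the stale entry for you:

```shell
# Drop all known_hosts entries for the host whose key changed
ssh-keygen -R 10.10.0.111
```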

After that you should be fine:

[user@proxmox]# ssh core@10.10.0.111
Last login: Tue Sep 29 08:50:48 2015 from 10.10.0.1
CoreOS stable (766.3.0)
Failed Units: 1
  user-cloudinit@var-lib-coreos\x2dinstall-user_data.service
core@coreos1 ~ $ sudo -s
coreos1 core #

Now we need to fix the configuration. But before that, we should create two more CoreOS hosts, so that we have a cluster ready.

Writing Munin Plugins pt2: counting VPNd Connections


Preamble

Every Munin Plugin should have a preamble by default:

#!/usr/bin/env perl
# -*- perl -*-

=head1 NAME

dar_vpnd - a plugin for displaying VPN stats for the Darwin (Mac OS) vpnd service.

=head1 INTERPRETATION

The Plugin displays the number of active VPN connections.

=head1 CONFIGURATION

No Configuration necessary!

=head1 AUTHOR

Philipp Haussleiter <philipp@haussleiter.de> (email)

=head1 LICENSE

GPLv2

=cut

# MAIN
use warnings;
use strict;

As you can see, this Plugin will use Perl as the Plugin language.

After that you have some information about the Plugin Usage:

  • Name of the Plugin + some description
  • Interpretation of the delivered Data
  • Information about the Plugins Configuration (not necessary here, we will see that in the other Plugins)
  • Author Name + Contact Email
  • License

# MAIN marks the beginning of the (main) code.

Next you see some Perl setup: use strict and enable warnings.

Gathering Data

First you should always have a basic idea of how you want to collect your data (e.g. which user will use which command to get which kind of data).

For example, we can find all VPN connections on Mac OS (Server) by searching the process list for pppd processes.

ps -ef | grep ppp
    0   144     1   0  5Mär14 ??        10:35.34 vpnd -x -i com.apple.ppp.l2tp
    0 29881   144   0  4:12pm ??         0:00.04 pppd serverid com.apple.ppp.l2tp nodetach proxyarp plugin L2TP.ppp ms-dns 10.XXX.YYY.1 debug logfile /var/log/ppp/vpnd.log idle 7200 noidlesend lcp-echo-interval 60 lcp-echo-failure 5 mru 1500 mtu 1280 receive-all ip-src-address-filter 1 novj noccp intercept-dhcp require-mschap-v2 plugin DSAuth.ppp plugin2 DSACL.ppp l2tpmode answer :10.XXX.YYY.233
    0 22567   144   0  4:12pm ??         0:00.04 pppd serverid com.apple.ppp.l2tp nodetach proxyarp plugin L2TP.ppp ms-dns 10.XXX.YYY.1 debug logfile /var/log/ppp/vpnd.log idle 7200 noidlesend lcp-echo-interval 60 lcp-echo-failure 5 mru 1500 mtu 1080 receive-all ip-src-address-filter 1 novj noccp intercept-dhcp require-mschap-v2 plugin DSAuth.ppp plugin2 DSACL.ppp l2tpmode answer :10.XXX.YYY.234    

To collect only the IPs, we need some more regexp work using awk:

ps -ef | awk '/[p]ppd/ {print substr($NF,2);}'
10.XXX.YYY.233
10.XXX.YYY.234

We are only interested in the total connection count, so we use wc to count all IPs:

ps -ef | awk '/[p]ppd/ {print substr($NF,2);}' | wc -l
       2

So we now have a basic command that gives us the count of currently connected users.
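If you want to verify the pipeline without a live vpnd, you can feed it canned ps lines (the sample below is shortened, made-up output):

```shell
# Two fake pppd lines mimicking the ps -ef output shown above
SAMPLE='0 29881 144 0 4:12pm ?? 0:00.04 pppd serverid com.apple.ppp.l2tp l2tpmode answer :10.0.0.233
0 22567 144 0 4:12pm ?? 0:00.04 pppd serverid com.apple.ppp.l2tp l2tpmode answer :10.0.0.234'
echo "$SAMPLE" | awk '/[p]ppd/ {print substr($NF,2);}'           # prints the two client IPs
echo "$SAMPLE" | awk '/[p]ppd/ {print substr($NF,2);}' | wc -l   # connection count: 2
```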

Configuration

The next thing is how your data should be handled by the Munin system.
Your plugin needs to provide information about the field setup.

The most basic (Perl) Code looks like this:

if ( exists $ARGV[0] and $ARGV[0] eq "config" ) {
    # Config Output
    print "...";    
} else {
    # Data Output
    print "...";
}

For more information about fieldnames, see the Munin plugin documentation.

Our Plugin Source looks like this:

# MAIN
use warnings;
use strict;


my $cmd = "ps -ef | awk '/[p]ppd/ {print substr(\$NF,2);}' | wc -l";

if ( exists $ARGV[0] and $ARGV[0] eq "config" ) {
    print "graph_category VPN\n";
    print "graph_args --base 1024 -r --lower-limit 0\n";    
    print "graph_title Number of VPN Connections\n";
    print "graph_vlabel VPN Connections\n";
    print "graph_info The Graph shows the Number of VPN Connections\n"; 
    print "connections.label Number of VPN Connections\n";
    print "connections.type GAUGE\n";   
} else {
    my $output = `$cmd`;
    print "connections.value $output";
}

Implementation

To test the Plugin you can use munin-run:

> /opt/local/sbin/munin-run dar_vpnd config
graph_category VPN
graph_args --base 1024 -r --lower-limit 0
graph_title Number of VPN Connections
graph_vlabel VPN Connections
graph_info The Graph shows the Number of VPN Connections
connections.label Number of VPN Connections
connections.type GAUGE
> /opt/local/sbin/munin-run dar_vpnd
connections.value        1

Example Graphs

Some basic long-term graphs look like this (graph: munin_vpnd_connections_macos).

Writing Munin Plugins pt1: Overview


Writing your own Munin Plugins

Around February this year, we at innoQ had the need to set up a Mac OS based CI for a project. Besides building and integrating some standard Java software, we also had to set up a test environment with Solaris/WebLogic, a Mac OS system to run CI for an iOS application, and a Linux system containing the Jenkins CI itself.
Additionally, the whole setup should be reachable via VPN (the iOS application itself should also be able to reach the resources via VPN).

To have the fewest possible obstacles in setting up the iOS CI and the iOS (iPad) VPN access, we decided to use Mac OS Server as the basic host OS. As the resource needs of the other systems (Solaris/WebLogic, Linux/Jenkins) are somewhat limited, we also decided to do a basic VM setup with VMware Fusion.

Since we have a decent Munin monitoring setup in our company for all our systems, we needed monitoring for all services used in this setup. Beside the standard plugins (like network/CPU/RAM/disk), that was basically:

  • Jenkins CI
  • VMware Fusion
  • VPN

After searching through the Munin plugin repository, we couldn't find any plugins providing the necessary monitoring. So the only choice was to write our own set of plugins. Since all three plugins use different approaches for collecting their data, I plan to write three different posts here, one for each plugin. The sources are available online here and might be added to the main Munin repo as soon as the pull requests are accepted.

How Munin works

But first, a brief overview of Munin. Munin is a TCP-based service that normally has one master and one node on each system that needs to be monitored. The master periodically asks all nodes for monitoring updates.
The node service delivering the updated data runs on port 4949 by default. To add some level of security, you normally whitelist the IPs that are allowed to query a node's data.

You can use plain telnet to access a node's data:

telnet localhost 4949
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
# munin node at amun

Every node delivers information about specific services provided by plugins. To get an overview of the configured plugins, you issue a list command:

# munin node at amun
list
df df_inode fusion_mem fusion_pcpu fusion_pmem if_en0 if_err_en0 load lpstat netstat ntp_offset processes users

A plugin always provides a configuration output and a data output. By default, when you query a plugin, you get the data output:

# munin node at amun
df
_dev_disk1s2.value 34
_dev_disk0s1.value 48
_dev_disk3s2.value 62
_dev_disk2s1.value 6
_dev_disk2s2.value 32

To trigger the config output, you need to append config to your command:

# munin node at amun
df config
graph_title Filesystem usage (in %)
graph_args --upper-limit 100 -l 0
graph_vlabel %
graph_scale no
_dev_disk0s1.label /Volumes/Untitled
_dev_disk1s2.label /
_dev_disk2s1.label /Volumes/System-reserviert
_dev_disk2s2.label /Volumes/Windows 7
_dev_disk3s2.label /Volumes/Data

You can also use the tool munin-run for a basic test (it is installed along with your munin-node binary):

 munin-run df
_dev_disk1s2.value 34
_dev_disk0s1.value 48
_dev_disk3s2.value 62
_dev_disk2s1.value 6
_dev_disk2s2.value 32
munin-run df config
graph_title Filesystem usage (in %)
graph_args --upper-limit 100 -l 0
graph_vlabel %
graph_scale no
_dev_disk0s1.label /Volumes/Untitled
_dev_disk1s2.label /
_dev_disk2s1.label /Volumes/System-reserviert
_dev_disk2s2.label /Volumes/Windows 7
_dev_disk3s2.label /Volumes/Data

Summary

So a plugin needs to provide output in both modes:

  • The configuration output when run with the config argument
  • The (normal) data output when called without additional arguments

Plugins are scripts that can be written in any programming language supported by the node's shell (e.g. Bash, Perl, Python, etc.).
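As a sketch of these two modes, a minimal plugin in plain shell could look like this (graph and field names are made up for illustration):

```shell
#!/bin/sh
# Minimal munin plugin sketch: "config" prints the graph definition,
# any other invocation prints the current value (logged-in users here).
case "$1" in
config)
    echo "graph_title Logged-in users"
    echo "graph_vlabel users"
    echo "users.label users"
    echo "users.type GAUGE"
    ;;
*)
    echo "users.value $(who | wc -l)"
    ;;
esac
```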

Since it is one of the easier plugins, we will look at the plugin monitoring the VPN connections on our Mac OS Server in the next post.

Build and Test Project TOX under MacOS


Some Steps to do

  1. You need Xcode with the CLI tools installed (see here)
  2. If you are using MacPorts (you really should), you need to install all necessary Dependencies:
    port install libtool automake autoconf libconfig-hr libsodium cmake
  3. Checkout the Project TOX Core Repository:
    git clone --recursive https://github.com/irungentoo/ProjectTox-Core.git
  4. cd ProjectTox-Core
    cmake .
    make all
  5. You need two tools:
    DHT_bootstrap in /other
    and nTox in /testing
  6. Bootstrap Tox (aka get your Public Key):
    ./DHT_bootstrap
    Keys saved successfully
    Public key: EA7D7BD2566A208F83F81F8876DE6C1BDC1F8CA1788300296E5D4F4CB142CD77
    Port: 33445

    The key is also in PUBLIC_ID.txt in the same Directory.

  7. Run nTox like so:
    ./ntox 198.46.136.167 33445 728925473812C7AAC482BE7250BCCAD0B8CB9F737BF3D42ABD34459C1768F854

    Where:

    198.46.136.167 – some Tox node
    33445 – port of that Tox node
    728925473812C7AAC482BE7250BCCAD0B8CB9F737BF3D42ABD34459C1768F854 – public key of that Tox node
  8. Et voilà:
    /h for list of commands
    [i] ID: C759C4FC6511CEED3EC846C0921229CA909F37CAA2DCB1D8B31479C5838DF94C
    >>

    You can add a friend:

    /f ##PUBLIC_ID##

    List your friends:

    /l

    Message a friend:

    /m ##friend_list_index##  ##message##

Fixing Redirects of a Play! App behind an Apache2 SSL Proxy


So you just finished your first Play! app, and you want to run it behind Apache2 as an HTTPS proxy, because you do not want your user credentials to be transmitted in clear text.

So a very basic Apache Configuration looks like this:

    <IfModule mod_ssl.c>

        Listen 443
        SSLRandomSeed startup builtin
        SSLRandomSeed connect builtin

        <VirtualHost _default_:443>

            SSLEngine on
            ServerName example.com
            ServerAdmin admin@example.com
            ErrorLog /var/log/apache2/ssl_error_log

            SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
            SSLCertificateFile /etc/apache2/ssl/example/newcert.pem
            SSLCertificateKeyFile /etc/apache2/ssl/example/webserver.nopass.key
            SSLCACertificateFile /etc/apache2/ssl/demoCA/cacert.pem
            SSLCARevocationPath /etc/apache2/ssl/certs/demoCA/crl

            ProxyPass               /play            http://127.0.0.1:9000/play
            ProxyPassReverse        /play            http://127.0.0.1:9000/play

        </VirtualHost>

    </IfModule>

I already explained how to run a Play! application within an application context. Here our context is just “play”, but you can set it to something else. You can also set up separate instances with different ports (over 9000!!!).

You should alter two settings in your conf/application.conf

    # you need to add this
    context=/play

    # you need to uncomment this to prevent Play! from serving aside your Apache2 Proxy
    http.address=127.0.0.1

    # you may uncomment and change this port number
    # http.port=9000

Time for a test run. At first glance it seems to work pretty nicely. But as soon as you want to use the nifty routing redirects from Play!, the whole system breaks, because Play! still thinks it runs on plain HTTP on port 9000. To solve this, you need to change two things:

  1. Make Apache2 send a specific header indicating that the request was sent through a proxy
  2. Make Play! fix the redirect to the correct URL

The first part is pretty easy. Just add

    RequestHeader set X_FORWARDED_PROTO 'https'

within your VirtualHost Tag – you may need to enable the headers module first to make that work.

The second part is a little bit more difficult. You have to add a before filter to your application controller:

    public class Application extends Controller {

        @Before
        private static void checkSSL() {
            if (request.headers.get("x_forwarded_proto") != null
                    && "https".equals(request.headers.get("x_forwarded_proto").value())) {
                request.secure = true;
                request.port = 443;
            }
            if (request.headers.get("x-forwarded-server") != null) {
                request.domain = request.headers.get("x-forwarded-server").value();
            }
        }
        ...
    }

You can find the sources on GitHub.
By the way, the same problem might also appear in Rails apps; I might write about that later on.

Run local/remote terminal commands with java using ssh


Sometimes you want to use some CLI tools before you create or search for a native JNI binding.
The common way is to use the Java Process class. But then you might face two problems I had to deal with in several projects in the past:

  1. There is a (really small) number of CLI tools that give no constant output over STDOUT (the standard output the Process class uses)
  2. There is no “elegant” way to integrate a process call into your project.

To solve this problem I created a basic helper class that calls the system over SSH (with the convenience of working remotely and the side effect of always getting STD-compatible output).
I primarily use it for a fun project called SAM, which I started some months ago to try to build a management tool for Unices and Windows with a very low client-side footprint.

The first class encapsulates the basic SSH calls:

 
// some imports... 
 public class SystemHelper {
    private Runtime r;
    private String sshPrefix = "";

    // call with $user and 127.0.0.1 to run local command.     
    public SystemHelper(String user, String ip) {
        r = Runtime.getRuntime();
        sshPrefix = " ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no " + user + "@" + ip;
    }

    public void runCommand(String command, ProcessParser pp) {
        try {
            Logger.info("running: " + this.sshPrefix + " " + command);
            Process p = r.exec(this.sshPrefix + " " + command);
            InputStream in = p.getInputStream();
            BufferedInputStream buf = new BufferedInputStream(in);
            InputStreamReader inread = new InputStreamReader(buf);
            BufferedReader bufferedreader = new BufferedReader(inread);
            pp.parse(bufferedreader);
            try {
                if (p.waitFor() != 0) {
                    Logger.info("exit value = " + p.exitValue());
                }
            } catch (InterruptedException e) {
                System.err.println(e);
            } finally {
                // Close the InputStream                 
                bufferedreader.close();
                inread.close();
                buf.close();
                in.close();
            }
        } catch (IOException ex) {
            Logger.error(ex.getLocalizedMessage());
        }
    }
}

ProcessParser is an interface that defines the method parse, which accepts a BufferedReader for parsing the output of the process. Unfortunately, there is currently no timeout to kill a hanging SSH call.

 public interface ProcessParser { 
     public void parse(BufferedReader bufferedreader); 
 } 

Using the most basic output parser (SimpeOutputPP) looks like this:

 
    public String getPublicSSHKey() {
        SimpeOutputPP so = new SimpeOutputPP();
        String command = "cat ~/.ssh/id_rsa.pub";
        runCommand(command, so);
        if (!so.getOutput().isEmpty()) {
            return so.getOutput().get(0);
        }
        return "";
    }

This returns just the public SSH key of the current user. I implemented some more parsers for the output of apt (dpkg), rpm and pacman. You can find them in the GitHub project here.

Multicast – a Closer Look


Since things were a bit thrown together at today's round table, I sat down once more and read up on the subject.
I will simply try to reproduce the questions that came up and then answer them with fitting texts:

  1. What is multicast?
    Multicast is a message transmission from one point to a group of receivers (also called a point-to-multipoint connection).
    Besides that, there are further kinds of transmission:

    • Unicast: a point-to-point connection (the classic client<->server connection)
    • Broadcast and anycast transmission (“I am here – who else?” – a ping to x.x.x.255)
    • Geocast, a special multicast that is spatially limited.
  2. What are the advantages of multicast over unicast?
    Simultaneous messages to several receivers are possible without the sender's bandwidth changing (for the sender it is as if it were sending a message to a single receiver).
    With packet-oriented data transmission, the duplication of the packets happens at each distributor (switch, router) along the route.
  3. How does this work exactly?
    Multicast is the usual term for IP multicast, which makes it possible to efficiently send data to many receivers in IP networks at the same time. This happens with a special multicast address. In IPv4 the address range 224.0.0.0 to 239.255.255.255 (class D) is reserved for this; in IPv6, every address starting with FF00::/8.
    When transmitted over Ethernet, the IPv4 and IPv6 multicast addresses are mapped to special pseudo MAC addresses, so that the network card can already filter for relevant traffic.
  4. Okaaaay… who uses something like that? Is it useful?
    Yes, it is. Well-known applications are:

    • audio and video streaming (protocols like RTP)
    • clustering, and routing with the Routing Information Protocol (RIP) version 2
    • it is required for a working AppleTalk network
    • the Service Location Protocol and multicast DNS, run as a partial implementation of Zeroconf multicast (Rendezvous – by now Bonjour):
      • automatic assignment of IP addresses without a DHCP server
      • translating hostnames to IP addresses without a DNS server
      • automatic discovery of services on the local network without a central directory server
    • on Windows it is used in the Simple Service Discovery Protocol
    • further multicast protocols:
      • Internet Relay Chat (IRC) forms networks that realize a simple TCP-based multicast tree – who would have thought ^^
      • there are considerations to retrofit multicast into Jabber.
  5. And this also works over the internet, or what?
    Yes and no… so:
    Multicast packets are not processed by most routers on the internet. Therefore, multicast-capable subnets are connected to multicast backbones (MBones) via tunnels.
    To coordinate multicast packets between several networks, special multicast routing protocols are used.
  6. I see… very nice – can anything go wrong or get mixed up?
    With certain address ranges, some switches have problems forwarding multicast messages.
    The addresses from 224.0.0.0 to 224.255.255.255 are reserved for routing protocols, and routers send no IP multicast datagrams for these addresses. The addresses from 239.0.0.0 to 239.255.255.255 are reserved for scoping; forwarding within this address range is likewise switch-dependent. Addresses in the range 225.x.x.x to 238.x.x.x are freely available.
  7. Epilogue:
    Multicasting is becoming popular again because IPTV is based on it.
    For distributed chat networks, it is by now generally accepted that they cannot be realized by means of IP multicast.
    The use of further multicast protocols is therefore unavoidable. There we have it!
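The address ranges above can be sketched as a tiny classification helper (illustrative shell only – it tests just the first octet, IPv4 multicast being 224.0.0.0/4):

```shell
# Succeeds if the given dotted-quad IPv4 address is a multicast address
# (first octet 224-239); illustrative only, no input validation.
is_multicast() {
    first=${1%%.*}
    [ "$first" -ge 224 ] && [ "$first" -le 239 ]
}
is_multicast 228.5.6.7   && echo "228.5.6.7 is multicast"
is_multicast 192.168.1.1 || echo "192.168.1.1 is not multicast"
```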

Shamelessly copied from http://de.wikipedia.org/wiki/Multicast – partly shortened and slightly adapted.
Blame the early hour for the wording.
To relax afterwards, a little Java – a simple chat server:
First the “server” – which is actually also part of every client:

public class NameServer {
        // the multicast group address sent to new members
        private static final String GROUP_HOST = "228.5.6.7";
        private static final int PORT = 1234;      // for this server
        private static final int BUFSIZE = 1024;   // max size of a message
        private DatagramSocket serverSock;

        // holds the names of the current members of the multicast group
        private ArrayList<String> groupMembers;

        public NameServer() {
            try {  // try to create a socket for the server
                serverSock = new DatagramSocket(PORT);
            } catch (SocketException se) {
                System.out.println(se);
                System.exit(1);
            }
            groupMembers = new ArrayList<String>();
            waitForPackets();
        }

        // this is the typical server while-loop method again
        private void waitForPackets() {
            DatagramPacket receivePacket;
            byte data[];
            System.out.println("Ready for client messages");
            try {
              while (true) {
                data = new byte[BUFSIZE];  // set up an empty packet
                receivePacket = new DatagramPacket(data, data.length);
                serverSock.receive( receivePacket );  // wait for a packet

                // extract client address, port, message
                InetAddress clientAddr = receivePacket.getAddress();
                int clientPort = receivePacket.getPort();
                String clientMsg = new String( receivePacket.getData(), 0, receivePacket.getLength() ).trim();
                processClient(clientMsg, clientAddr, clientPort);
              }
            } catch (IOException ioe) {
                System.out.println(ioe);
            }
        }
        ...
    }

And here the “client” part:
   

public class MultiChat {
   
      // timeout used when waiting in receive()
      private static final int TIME_OUT = 5000;   // 5 secs
     
      // max size of a message
      private static final int PACKET_SIZE = 1024;
      // NameServer address and port constants
      private static final String SERVER_HOST = "localhost";
      private static final int SERVER_PORT = 1234; 
      /* The multicast port. The multicast group address is
      obtained from the NameServer object. */
      private static final int GROUP_PORT = 5555; 
      // for communication with the NameServer
      private DatagramSocket clientSock;
      private InetAddress serverAddr; 
      // for communication with the multicast group
      private MulticastSocket groupSock;
      private InetAddress groupAddr;  
      public MultiChat(String nm){
         /* Attempt to register name and get multicast group
         address from the NameServer */
         makeClientSock();
         waitForPackets();
      } // end of MultiChat();
     
      private void makeClientSock(){
        try {   // try to create the client's socket
          clientSock = new DatagramSocket();
          clientSock.setSoTimeout(TIME_OUT);  // include a time-out
        }catch( SocketException se ) {
          se.printStackTrace();
          System.exit(1);
        } 
        try {  // NameServer address string --> IP no.
          serverAddr = InetAddress.getByName(SERVER_HOST);
        }catch( UnknownHostException uhe) {
          uhe.printStackTrace();
          System.exit(1);
        }
      }  // end of makeClientSock()
     
      private void waitForPackets(){
        DatagramPacket packet;
        byte data[];
        try {
          while (true) {
           data = new byte[PACKET_SIZE];    // set up an empty packet
            packet = new DatagramPacket(data, data.length);
            groupSock.receive(packet);  // wait for a packet
            processPacket(packet);
          }
        }catch(IOException ioe){ 
            System.out.println(ioe); 
            }
      }  // end of waitForPackets() 
    }

As you can see quite well, it is not substantially different from creating the sockets directly. I found all of this in the book “Killer Game Programming in Java” – it is really called that – and every now and then it discusses really interesting things (link: here!).
So, that's it.
In the coming days I want to write a bit about OSGi – once the nice book arrives.
And – sadly for a current reason – about IPMI, and why it is almost outrageous that a _bundled_ Linux tool is broken, so that the vendor recommends setting up a Windows machine just for this. Nothing against Windows, but why bundle a broken tool in the first place?
P.S.: Write again sometime 😉

Enforcing SSL in a Java Application Server


Morning :-),
slowly I am getting all my problems solved 🙂 – about time :-D.
Regarding this post:
I am still working on the implementation with JAAS, but I have found out that you can slip a filter into the server that checks on every request whether an SSL connection exists, and if not, enforces one via a redirect.
The whole thing works via filters (just like Ruby on Rails does it, too).
You create a class that implements javax.servlet.Filter.
Then you register it via web.xml:

    <filter>
        <filter-name>SSLFilter</filter-name>
        <filter-class>de.hausswolff.cotodo.security.SSLFilter</filter-class>
    </filter>
    <filter-mapping>
        <filter-name>SSLFilter</filter-name>
        <url-pattern>/*</url-pattern>
    </filter-mapping>

That means all requests sent to the application are passed through this filter.
In detail, the filter checks whether request.getScheme() equals “https”; otherwise it simply redirects to the appropriate page.
In my case (since a non-HTTPS connection only exists for a user who is not logged in yet) that is a login page.
I hope that I can complete my login implementation tomorrow.
For everyone who wants to take a closer look at the features of JEE 5:
At http://java.sun.com/developer/releases/petstore/ you can download the petstore example application, which demonstrates the new features (incl. AJAX). Duke's Bank apparently is not used anymore 🙂