Per Brage's Blog

const String ABOUT = "Somehow related to code";

Monthly Archives: March 2012

Event Broker using Rx and SignalR (Part 2: Implementation)

In part one I wrote about the reasoning and background behind why I chose to have some fun creating my own event broker. I then took you through a scenario, the events that make up the scenario, and finished the post by specifying a few interfaces that form the fluent API of the event broker.

Now it’s time to implement the first pieces of the broker. Let’s just throw ourselves at it, shall we?

Local event streams

Subscribing to a local event stream is something that can be implemented quite easily using Reactive Extensions (or LINQ to Events, if you prefer that name). Out of the box, Reactive Extensions provides two classes that simplify the implementation dramatically: Subject<T> and EventLoopScheduler.
Basically, Subject<T> is both an IObservable<T> and an IObserver<T> that handles all publishing, subscriptions, disposing etc. for us, whereas the EventLoopScheduler class ensures that events are scheduled on a designated thread.

So without further ado, here is the first draft of our Reactive Extensions event broker for local event streams, implemented with the interfaces described in part one of this series.

    public class EventBroker : IEventBroker
    {
        private readonly IScheduler _scheduler;
        private readonly ISubject<IEvent> _subject;
        public EventBroker()
        {
            _scheduler = new EventLoopScheduler();
            _subject = new Subject<IEvent>();
        }
        public IEventBroker Publish<TEvent>(TEvent @event)
            where TEvent : IEvent
        {
            _subject.OnNext(@event);
            return this;
        }
        public ISubscribe Locally()
        {
            return this;
        }
        public ISubscribe Remotely(String remoteEventStreamUri)
        {
            throw new NotImplementedException();
        }
        ISubscribe ISubscribe.Subscribe<TEvent>(Action<TEvent> onConsume)
        {
            ((ISubscribe)this).Subscribe(null, onConsume);
            return this;
        }
        ISubscribe ISubscribe.Subscribe<TEvent>(Func<TEvent, Boolean> filter,
                                                Action<TEvent> onConsume)
        {
            _subject.Where(o => o is TEvent).Cast<TEvent>()
                    .Where(filter ?? (x => true))
                    .ObserveOn(_scheduler) // consume on the event loop thread
                    .Subscribe(onConsume);
            return this;
        }
        public void Dispose()
        {
            _subject.OnCompleted();
        }
    }

Remote event streams

To be able to subscribe to remote event streams, we need to communicate and send our events over the network. There are many ways of doing this, but fortunately there is a library that I guess no one can have missed, with all the buzz it has created lately, namely SignalR. Using SignalR it becomes very easy to create a publisher that subscribers can connect to and receive events from. What makes it even better is that SignalR has built-in support for Reactive Extensions, allowing us to reuse and provide the same API for both remote and local subscriptions.

SignalR does not, however, provide any safety against loss of data; clients will just pick up listening to new events as they reconnect, totally oblivious to the events they missed while disconnected. This is one of the major reasons behind the caveat in the first post of this series.

SignalR can be hosted in various ways, but for this example I will use the self-hosting server, since I'm only using console applications in the example.

Starting SignalR Self-Hosting Server

SignalR provides a class named PersistentConnection that you inherit from, overriding various methods to handle negotiation, received data etc. But we will only use the broadcast feature of the server connection in this example, so we create an empty class named EchoConnection.

    public class EchoConnection : PersistentConnection
    {
    }

We can now start the self-hosting server, extract the connection through the provided dependency resolver, and then use this connection to broadcast events to all subscribers.
To start the event broker in self-hosting mode, we add an overloaded constructor taking a publishingUri, on which the server will register itself.

        private readonly IConnection _serverConnection;

        public EventBroker(String publishingUri)
            : this()
        {
            var server = new SignalR.Hosting.Self.Server(publishingUri);
            server.MapConnection<EchoConnection>("/echo"); // the route name is just an example
            server.Start();

            var connectionManager = server.DependencyResolver.Resolve<IConnectionManager>();
            _serverConnection = connectionManager.GetConnection<EchoConnection>();
        }

Connecting to SignalR servers

We need to refactor a bit to be able to connect to a remote SignalR server (e.g. our remote event stream). First we add a stack of client connections, meaning that this event broker can be a subscriber to zero or more other event publishers. I guess a dictionary with named publishers might be a better idea, but a Stack will suffice for this example.
Let's add two private methods that will help us register subscriptions, both locally and remotely. First is GetCurrentConnection(), which peeks at the top connection of the client stack; this will always contain the current remote we are adding subscriptions to. The other method, GetCurrentObservable<TEvent>(), will, depending on whether we are registering local or remote subscriptions, return an IObservable<TEvent> from either the Subject<T> or the client connection.

This code also shows how SignalR supports Reactive Extensions, by calling AsObservable<>() on the current client connection.

        private readonly Stack<Connection> _clientConnections;
        private Boolean _inLocalSubscriptionMode; // toggled by Locally() and Remotely()

        private IObservable<TEvent> GetCurrentObservable<TEvent>()
        {
            return _inLocalSubscriptionMode
                ? _subject.Where(o => o is TEvent).Cast<TEvent>()
                : GetCurrentConnection().AsObservable<TEvent>();
        }

        private Connection GetCurrentConnection()
        {
            return _clientConnections.Peek();
        }

But wait a minute, there is a problem here that we haven't addressed yet. I don't know if this is intended or a bug in SignalR, but when you use AsObservable<TEvent>() on a client connection, it doesn't filter incoming events by TEvent; SignalR will rather try to deserialize every incoming event (that would be all events of all types published) into the TEvent type. Some events might work, some fail, some get mixed up, and this is definitely not how we want it to work. So the solution to this problem is to take care of serialization and deserialization ourselves, and not rely on SignalR's default serialization.

Json.NET has a TypeNameHandling setting that can be used to add the type name of an object as metadata under the property name $type. Let's use this feature and verify this property on all incoming events. What we'll do is use AsObservable() to get an IObservable<String>, apply some type filtering and deserialization to that instance, and so end up with an observable of events again. The code below has been refactored to accomplish this.

    private IObservable<TEvent> GetCurrentObservable<TEvent>()
    {
        return _inLocalSubscriptionMode
            ? _subject.Where(o => o is TEvent).Cast<TEvent>()
            : GetCurrentConnection()
                .AsObservable() // raw JSON strings, not deserialized events
                .Where(json => json.Contains(typeof(TEvent).FullName)) // check $type metadata
                .Select(json => JsonConvert.DeserializeObject<TEvent>(json));
    }

With those two methods in place we can refactor the main Subscribe method to use GetCurrentObservable<TEvent>(), and also add the subscription to our subscriptions collection, so we can close and dispose the subscriptions when exiting the application. With those small changes we can now subscribe to a remote event stream.

        ISubscribe ISubscribe.Subscribe<TEvent>(Func<TEvent, Boolean> filter,
                                                Action<TEvent> onConsume)
        {
            _subscriptions.Add(GetCurrentObservable<TEvent>()
                .Where(filter ?? (x => true))
                .Subscribe(onConsume)); // kept so it can be disposed on exit
            return this;
        }


Servers are up, connections can be made. Now we just need to broadcast events.

The publishing method has two new lines of code; those last two now broadcast the event if the event broker was instantiated with a publishing URI and thereby registered itself as a self-hosting server. Remember from above that we also wanted to take care of our own serialization?

        public IEventBroker Publish<TEvent>(TEvent @event)
            where TEvent : IEvent
        {
            _subject.OnNext(@event);

            if (_serverConnection != null)
                _serverConnection.Broadcast(JsonConvert.SerializeObject(
                    @event, Formatting.None,
                    new JsonSerializerSettings { TypeNameHandling = TypeNameHandling.All }));

            return this;
        }


Full source at my GitHub repository

Part 1: A Fluent API
Part 2: Implementation
Part 3: Event Consumers
Part 4: Solving the Scenario

Event Broker using Rx and SignalR (Part 1: A Fluent API)

Caveat: Please understand that this is a very simple event broker that was implemented during a short period of time, and that it does NOT include anything like message reliability, acknowledgement, queuing or similar things to prevent data loss. The remote feature is built on top of SignalR and publishes events 'live' for anyone listening at the time of publishing, meaning that subscribers can't pick up where they left off after a connection loss. The event broker was developed for fun and educational purposes only and is provided as is. I would suggest not using, nor basing your own implementation on, this example for any mission-critical application/software. If you need a reliable and performant service/message bus, I suggest you take a look at NServiceBus, MassTransit or similar products. Thank you!


Recently, I started to build what will be a very small application based on the architectural pattern CQRS. When done, the application is supposed to be released for free and will be launched on a very small scale for a few select people! At the start of development, I was unsure of the hosting environment and what technologies would be available to me (e.g. what I could install and things like that). So one of my basic requirements became to use as few frameworks, servers and third-party products as possible, unless I knew for sure that they could be embedded in my code or would work in the specific hosting environment. Things have however changed, and the original hosting provider has been replaced with AppHarbor; at the time of writing, I am still in the process of switching.

In CQRS we use an event store to persist all events, which are the results of applying commands in our domain. When you persist them, you also want to dispatch the events to the read side, so they can be transformed and persisted in the read model. This is typically done with a service bus; it doesn't have to be, but from my point of view it's definitely the preferred way of doing it. Due to licensing, hosting, another use-case, and the fact that I'm still trying to limit the use of third-party products, I decided I would have some fun developing something temporary and very basic to handle the event dispatching to the read side (or read sides, as the application will have more than one read model), and hence my EventBroker implementation was born. I deliberately chose the name EventBroker instead of ServiceBus or MessageBroker, as I will only use this component to publish and broadcast events. Even so, in my opinion events are the only thing you should ever send over a service bus, but I guess that would be another discussion.

To provide a nice example I created a scenario that will be used throughout the series to base implementation on. The hard part was inventing a scenario described only by events, as we are not creating a full system-wide implementation with client, commands and the works.

Ordering a product – An example scenario

To begin with, I wrote this science fiction scenario taking place in the year 3068. Humans are scattered all over the universe since our beloved planet Earth ceased to exist during the 300 years' war in the middle of the previous millennium! But after a few pages, I realized I was writing a science fiction novel rather than a blog post about an event broker. So I refactored my scenario into a small online computer shop, which sells computers and components over the internet (exciting, huh?!). The scenario consists of a web site, which ships all its orders from two stocks: one stock handles off-the-shelf computers and laptops in various configurations, whereas the other handles only computer components. To protect these two stocks from stockouts, there is also procurement, which resupplies the stocks as their products reach their respective reorder points.

There are likely hundreds or even thousands of events within this context, but for this example I narrowed them down to these three.

ProductOrderedEvent: Published by the website as soon as a product has been ordered. (Yes, my shop has a horrible user experience, as customers can only order one product at a time.) Both stocks subscribe to this event so they can prepare, pack and ship the ordered product. The website itself also subscribes to this event locally, to be able to send out order confirmation emails.

ProductShippedEvent: This event is published by both stocks as soon as an ordered product has been packed and shipped. In this example we only subscribe to this event locally, for sending out shipping confirmation emails.

Published by the stocks as inventory gets low on a certain product, and subscribed to by procurement to order resupplies.

The scenario and these three events will be enough to demonstrate the entirety of the event broker, but before implementing our broker we will be defining our fluent API through a few simple interfaces.

Defining our fluent API

Looking at the events above and their short descriptions, it’s not hard to notice that we want to subscribe to both local and remote event streams. And it doesn’t take much brain activity to realize that our local event subscriptions don’t need to be routed through any network stack. Hence we can divide the event broker’s subscriptions into two types, local and remote subscriptions. We also need to be able to filter events, as for example, the stocks are not interested in all ProductOrderedEvents, but rather the ones they can process and complete. We also want to publish events, but we don’t want to separate that into local and remote publishes. A publish is a publish, it shouldn’t care who is listening, may it be a local or remote subscription.

So let’s start with defining the interfaces that will fulfill the above requirements we derived out of our scenario.

The IPublish interface defines our Publish method, but there aren't really any options or method-chaining paths after you call the Publish method. The return type could actually be just void, but we might as well return the IEventBroker interface so we can reset our path and make all choices available again after a call to the Publish method. This will also allow us to chain Publish calls, for those small use-cases where that would actually be useful.

    public interface IPublish
    {
        IEventBroker Publish<TEvent>(TEvent @event)
            where TEvent : IEvent;
    }

The ISubscribe interface defines our subscription methods, and we currently have two of them: one that subscribes to a specific event, and one that subscribes to a specific event matching a predicate, our filter. In our example scenario we will use this type of filtering for our stocks when they subscribe to the ProductOrderedEvent, as one stock only wants information about computers and laptops, while the other stock wants events about all other products. Each Subscribe method will return the ISubscribe interface, which will allow us to chain subscriptions. Also, the ISubscribe interface will be implemented explicitly to force the end user to use the Locally() or Remotely(remoteEventStreamUri) methods first, and then add subscriptions; otherwise we wouldn't know what to register our subscriptions upon.

    public interface ISubscribe : ISubscriptionSource
    {
        ISubscribe Subscribe<TEvent>(Action<TEvent> onConsume)
            where TEvent : IEvent;

        ISubscribe Subscribe<TEvent>(Func<TEvent, Boolean> filter, Action<TEvent> onConsume)
            where TEvent : IEvent;
    }

Did you notice that I added the ISubscriptionSource on the ISubscribe interface? By doing that, we will be able to switch registering subscriptions between local and one or more remote event streams without completing the statement.

    public interface ISubscriptionSource
    {
        ISubscribe Locally();
        ISubscribe Remotely(String remoteEventStreamUri);
    }

Now the IEventBroker interface just needs to inherit from all the interfaces we defined above. Remember, ISubscriptionSource is included through our ISubscribe interface.

    public interface IEventBroker : IDisposable, IPublish, ISubscribe
    {
    }

We have every interface we need to start implementing the broker now, but that will have to wait until the next post, which will be ready and posted together with this one.


By faking the event broker, our fluent API will now allow us to write statements like the one below, even though we haven’t even written a single line of implementation code yet. In the next part of the series we will make this work, and after that we will continue adding a few more features to the event broker in upcoming posts.

Example of chaining subscriptions


Example of chaining our publish method

    eventBroker.Publish(new ProductOrderedEvent())
               .Publish(new ProductShippedEvent());


Full source at my GitHub repository

Part 1: A Fluent API
Part 2: Implementation
Part 3: Event Consumers
Part 4: Solving the Scenario

Release: Alias Be Gone extension for Visual Studio 2010

Alias Be Gone is a Visual Studio 2010 extension that replaces C# aliases with their .NET CLR types, providing a keyboard shortcut to quickly replace all aliases with their CLR equivalents in the current active document. Alias Be Gone also provides optional snippets that let you develop using aliases and quickly convert them into CLR types when you double-tap Tab, like any other snippet.

Target audience

Are you a C# developer who is also a CLR purist that would never use aliases like short, int or long in your code? Do you have a war in your shop where other developers replace your beautiful code with those pesky aliases colored like keywords? Then this extension is for you!

Alias Be Gone is your extension to quickly replace all C# aliases with their corresponding CLR types, putting you one step ahead in the fight against C# aliases.


  • Install the Alias Be Gone extension through Visual Studio 2010's Extension Manager like any other Visual Studio extension. If you have something else bound to Ctrl+K, Ctrl+J you may need to bind the extension to another shortcut.
  • Optional! If you want to install the bundled snippets, you can pull down the Edit menu after restarting Visual Studio and choose Install Snippets, right under the Alias Be Gone menu command.


Alias Be Gone provides a menu command with a shortcut (Ctrl+K, Ctrl+J) to quickly replace all aliases. Simply open a C# code file and use the Alias Be Gone shortcut, and all your aliases will be converted into CLR types.

If you installed the bundled snippets, you can develop using the alias names and then press Tab twice, like any other snippet. For example, type bool and then press Tab twice to instantly replace it with Boolean.


Alias Be Gone on Visual Studio Gallery
Full source on GitHub

HTML5/JavaScript Cube (Part 1: Applying old school)

In my first post I showed a picture of a custom design that contains a spinning cube in the upper left corner. That cube wasn't supposed to end up in the design, since it originally was an endeavor I started to try out the HTML5 canvas. I figured I would do that by applying some old 3D programming knowledge I picked up in the early nineties, when a couple of friends and I did a lot of 3D programming (or at least tried to). For reasons I don't even remember today, I didn't have time to dig further into the abyss of 3D programming, and the high point of my 3D programming career ended with a Gouraud-shaded dolphin spinning in an X11 window. I did attempt to create a 3D world with a plane flying through it, but for the reasons already stated I never got around to getting it working.

This time around I only wanted to create a basic flat shaded cube! I know, it’s 2012 and how sexy is a spinning cube these days, really? Even so, I wanted to take some time blogging about two ways of accomplishing the same task and I will try to keep this first post as simple as possible! So here is part one of creating a spinning cube using HTML5/JavaScript.

Creating the mesh

First of all we need to construct the cube, and to create the cube we need to create some vertices, connect the vertices into polygons, and then finally add them to a mesh. These three concepts are what's needed to model a simple 3D object, so I started out by creating a small mesh creator that produces vertices, polygons and finally a cube mesh!


A vertex represents a coordinate in 3D space and is a construct of three values: X, Y and Z. These three values specify how the vertex relates to the center of the object we are modeling. I made a small function that produces an object representing a vertex, which contains both the initial values (the values we use at design time and base our calculations on) and the current values (the calculated values used during drawing).

    var createVertex = function (x, y, z) {
        return {
            "initialX": x,
            "initialY": y,
            "initialZ": z,
            "currentX": 0,
            "currentY": 0,
            "currentZ": 0
        };
    };


Polygons are flat geometry shapes consisting of straight lines that are joined to form a circuit. Each corner of the polygon is defined by a vertex. For example, to form a triangle polygon we need to have 3 vertices, one for each corner of the triangle. The same vertex can however take part in multiple polygons.


Polygons come in various forms and shapes, but for 3D programming we always use triangles, otherwise we get into trouble on more advanced topics. However, for this cube I am using square polygons, for the single reason that I didn't want to develop my own interpolating triangle method to avoid the gap between aligned polygons. (I'm actually surprised no browser solves this issue built-in.) Anyway, this small function is all we need to produce a square polygon.

    var createPolygon = function (vA, vB, vC, vD) {
        return {
            "vertices": [vA, vB, vC, vD],
            "averageZ": null
        };
    };

The averageZ property is something I will use later on for both shading and polygon sorting.


A mesh is the actual 3D object we are modeling, and it consists of all the vertices and polygons in such a way that it looks like an object, in our case, a cube.

    var createCubeMesh = function () {
        var mesh = { "polygons": null };
        var size = 70;

        var vertices = [];
        vertices.push(createVertex(size, size, size));
        vertices.push(createVertex(size, -size, size));
        vertices.push(createVertex(-size, size, size));
        vertices.push(createVertex(-size, -size, size));
        vertices.push(createVertex(size, size, -size));
        vertices.push(createVertex(size, -size, -size));
        vertices.push(createVertex(-size, size, -size));
        vertices.push(createVertex(-size, -size, -size));

        var polygons = mesh.polygons = [];
        polygons.push(createPolygon(vertices[0], vertices[2], vertices[3], vertices[1]));
        polygons.push(createPolygon(vertices[0], vertices[2], vertices[6], vertices[4]));
        polygons.push(createPolygon(vertices[0], vertices[1], vertices[5], vertices[4]));
        polygons.push(createPolygon(vertices[1], vertices[3], vertices[7], vertices[5]));
        polygons.push(createPolygon(vertices[3], vertices[2], vertices[6], vertices[7]));
        polygons.push(createPolygon(vertices[4], vertices[6], vertices[7], vertices[5]));

        return mesh;
    };

Right now we can actually get a cube drawn on our canvas since we can just omit the Z value of each vertex and draw each polygon of the mesh. We wouldn’t really see a 3D cube though, but rather a square as we would only see one side of the cube. (You’re right, this does depend on the initial values in each vertex, and it could have been designed from another angle.)

Adding rotation

Rotation is where we need to start using some math, and we need linear algebra to apply rotation to each vertex. Now, I'm definitely not an expert in linear algebra and won't go into the depths of the topic. But if you want to dig deeper, I suggest you Google it! I'm quite sure you will find good information among the almost 10 million hits you will get!

To get our cube to rotate, we basically need to create one matrix for each rotation around an axis, that means one for each of the X-axis, Y-axis and Z-axis. Each of these matrixes will be calculated based on an identity matrix with the degrees (converted to radians) of rotation we wish to apply.

When all three rotation matrixes have been created we need to multiply them together to get a single matrix that represents all three rotations. We can then use this combined matrix to calculate each and every vertex and apply our rotation to them. For this we need two functions, first one to multiply matrixes, and then another one to apply a matrix to a vertex.

    var multiplyMatrixes = function (matrix1, matrix2) {
        var matrix = createIdentityMatrix();
        for (var i = 0; i < 3; i++) {
            for (var j = 0; j < 3; j++) {
                matrix[i][j] =
                    (matrix2[i][0] * matrix1[0][j]) +
                    (matrix2[i][1] * matrix1[1][j]) +
                    (matrix2[i][2] * matrix1[2][j]);
            }
        }
        return matrix;
    };

    var applyMatrixToVertex = function (matrix, vertex) {
        vertex.currentX = (vertex.initialX * matrix[0][0]) + 
                          (vertex.initialY * matrix[0][1]) + 
                          (vertex.initialZ * matrix[0][2]);

        vertex.currentY = (vertex.initialX * matrix[1][0]) + 
                          (vertex.initialY * matrix[1][1]) + 
                          (vertex.initialZ * matrix[1][2]);

        vertex.currentZ = (vertex.initialX * matrix[2][0]) + 
                          (vertex.initialY * matrix[2][1]) + 
                          (vertex.initialZ * matrix[2][2]);

        return vertex;
    };
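The rotation matrixes themselves aren't shown above, so here is a sketch of how they can be built. createIdentityMatrix is the helper used by multiplyMatrixes; the createRotationMatrixX/Y/Z names are my own, and each takes its angle in degrees and converts it to radians internally.

```javascript
// 3x3 identity matrix as a plain array of arrays.
var createIdentityMatrix = function () {
    return [[1, 0, 0],
            [0, 1, 0],
            [0, 0, 1]];
};

// One rotation matrix per axis, built for the matrix * column-vector
// convention that applyMatrixToVertex uses.
var createRotationMatrixX = function (degrees) {
    var r = degrees * Math.PI / 180;
    return [[1, 0,            0],
            [0, Math.cos(r), -Math.sin(r)],
            [0, Math.sin(r),  Math.cos(r)]];
};

var createRotationMatrixY = function (degrees) {
    var r = degrees * Math.PI / 180;
    return [[ Math.cos(r), 0, Math.sin(r)],
            [ 0,           1, 0],
            [-Math.sin(r), 0, Math.cos(r)]];
};

var createRotationMatrixZ = function (degrees) {
    var r = degrees * Math.PI / 180;
    return [[Math.cos(r), -Math.sin(r), 0],
            [Math.sin(r),  Math.cos(r), 0],
            [0,            0,           1]];
};
```

Multiplying the three results together with multiplyMatrixes (in a fixed order) gives the single combined matrix that applyMatrixToVertex expects.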

Some final calculations

Applying perspective

After we have calculated the rotation using matrixes, we need to apply some perspective to our cube; that is, the further away a vertex, polygon or mesh is, the smaller it should look. Think of a long train before you get hit by it: the locomotive looks pretty huge while the last wagon is pretty small, almost like you could squash it between your fingers (that is, if you are fast enough before YOU get squashed). So applying perspective is basically that: as Z gets larger, distance increases and the object should look smaller. There are various ways of applying perspective, but this simple z-divide function does the trick well enough.

    var applyPerspective = function (vertex) {
        vertex.currentX = vertex.currentX * perspectiveCoefficient / 
                          (vertex.currentZ + perspectiveCoefficient);
        vertex.currentY = vertex.currentY * perspectiveCoefficient / 
                          (vertex.currentZ + perspectiveCoefficient);
    };


When drawing the polygons on the canvas we can’t just draw them in the order they were modeled, we need to draw them in the order of their location on the Z-axis, from back to front. A real 3D engine would not even draw those in the back that are covered by polygons in front of them, or polygons that are facing away from the camera, but by simply sorting the polygons we can achieve the same result without complex calculations.

First we need to calculate the Z average of each polygon,

    var calculateZAverage = function (polygon) {
        var zSum = 0;
        for (var i = 0; i < polygon.vertices.length; i++) {
            zSum += polygon.vertices[i].currentZ;
        }
        polygon.averageZ = zSum / polygon.vertices.length;
    };

And then we simply sort the polygons,

    var sortPolygons = function (polygons) {
        return polygons.sort(function (polygon1, polygon2) {
            return polygon2.averageZ - polygon1.averageZ;
        });
    };

Flat shading

Using this average Z value of each polygon we can now also apply flat shading, meaning that depending on a polygon's average Z we can apply a color between 0 and 255, which means from black to white in this example. This type of shading is also a trick to avoid introducing light sources into our scene, which would require us to calculate polygon normals, distances from light sources, their directions etc.
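As a sketch of how such a calculateShade function could look (the ±200 depth range, the 128 midpoint and the clamping are my own assumptions, not necessarily what the real source uses):

```javascript
// Map a polygon's average Z onto a grey level: far polygons (large Z)
// get darker, near ones lighter. The 200 divisor is an assumed scene depth.
var calculateShade = function (polygon) {
    var shade = Math.round(128 - polygon.averageZ * 128 / 200);
    return Math.min(255, Math.max(0, shade)); // clamp to the valid rgb range
};
```

The resulting grey value plugs straight into an rgb(...) fill style when drawing the polygon.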


We have finally reached the stage where it’s time to draw the polygons on the canvas, which should result in a cube rotating on our screen. After initialization we add a timer calling a refresh function which basically does 4 things.

  1. Increase the angle
  2. Calculate the rotation, perspective and z-order
  3. Clear the current canvas
  4. Draw the cube
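Steps 1 and 2 can be seen in isolation in this self-contained sketch, which restates applyMatrixToVertex and applyPerspective from above together with a Z-rotation matrix, and pushes a single vertex through one frame's rotation and perspective. Only Z-rotation is shown for brevity, and perspectiveCoefficient = 300 is an assumed value.

```javascript
// Self-contained demo of one frame's math: rotate a vertex, then apply
// perspective. perspectiveCoefficient = 300 is an assumed value.
var perspectiveCoefficient = 300;

var createVertex = function (x, y, z) {
    return { initialX: x, initialY: y, initialZ: z,
             currentX: 0, currentY: 0, currentZ: 0 };
};

var createRotationMatrixZ = function (degrees) {
    var r = degrees * Math.PI / 180; // degrees to radians
    return [[Math.cos(r), -Math.sin(r), 0],
            [Math.sin(r),  Math.cos(r), 0],
            [0,            0,           1]];
};

var applyMatrixToVertex = function (matrix, vertex) {
    vertex.currentX = vertex.initialX * matrix[0][0] +
                      vertex.initialY * matrix[0][1] +
                      vertex.initialZ * matrix[0][2];
    vertex.currentY = vertex.initialX * matrix[1][0] +
                      vertex.initialY * matrix[1][1] +
                      vertex.initialZ * matrix[1][2];
    vertex.currentZ = vertex.initialX * matrix[2][0] +
                      vertex.initialY * matrix[2][1] +
                      vertex.initialZ * matrix[2][2];
    return vertex;
};

var applyPerspective = function (vertex) {
    vertex.currentX = vertex.currentX * perspectiveCoefficient /
                      (vertex.currentZ + perspectiveCoefficient);
    vertex.currentY = vertex.currentY * perspectiveCoefficient /
                      (vertex.currentZ + perspectiveCoefficient);
};

// One frame: rotate (70, 0, 0) by 90 degrees around Z, then project.
// currentZ stays 0, so perspective leaves the point at roughly (0, 70).
var v = applyMatrixToVertex(createRotationMatrixZ(90), createVertex(70, 0, 0));
applyPerspective(v);
```

In the real refresh loop, those same two steps run for every vertex of every polygon, followed by calculateZAverage and sortPolygons, clearing the canvas, and finally drawing the mesh.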

The actual drawing functions of the mesh and the polygon looks like this.

    var drawMesh = function () {
        for (var k = 0; k < mesh.polygons.length; k++)
            drawPolygon(mesh.polygons[k]);
    };
    var drawPolygon = function (polygon) {
        var shade = calculateShade(polygon);
        ctx.fillStyle = 'rgb(' + shade + ',' + shade + ',' + shade + ')';
        ctx.beginPath();
        ctx.moveTo(polygon.vertices[0].currentX, polygon.vertices[0].currentY);
        for (var i = 1; i < polygon.vertices.length; i++) {
            ctx.lineTo(polygon.vertices[i].currentX, polygon.vertices[i].currentY);
        }
        ctx.closePath();
        ctx.fill();
    };


Here is a picture of the result! Produced by about 250 lines of JavaScript!



Starting a blog (or how not to!)

Starting a blog these days is something you do in minutes or even seconds, either by using a blogging service or a one-click installer, which is available on most personal web space hosting. It should be easy enough to get a blog going, right? In my case I made it a little more complicated than it has to be, and I thought I'd start this blog by writing about how not to start one!

It all started a few years ago when I was occasionally tempted to write posts but didn't have a blog to publish them to, and once I started looking at creating a blog I felt I had nothing to write about. This catch-22 went on for quite some time, and if I dug deep enough, I really didn't have the urge to get it going either. Still, it has annoyed me at times when I really wanted to write a post about something!

Last fall I started playing around with different services, which resulted in several “Hello world!” posts showing up on various sites. There are probably some still around, and they all look like the post you can find on this blog dated August 2011. Eventually I registered a hosted web space, where I planned to get going with BlogEngine.NET using a custom design that I made. It even had a spinning 3D cube in its upper left corner that I wrote in HTML5/JavaScript. Yes, I did have fun writing it at the time even though I strayed from my goal; starting a blog!

My custom design made for BlogEngine.NET

This is a screenshot of the design I created for my blog last fall! Maybe in the future I might convert this into a WordPress theme and transfer my blog to some web hosting site again!

After poking around, pulling my hair (ok, that's a lie, I don't have any), and fixing things to bring order to my design and other aspects of the blog, I got more and more frustrated, and I caught this feeling that this wasn't what I was supposed to be doing! I was doing it wrong, and it was certainly not the quickest way to get a blog up and running. Instead of simply being able to write posts, I was neck-deep in technical issues that shouldn't have been there in the first place.

That continued for a while until I got bored with it again, and I just put the whole blog thing on hold until recently. This time around I’m taking the shortcut, the easy way to escape all troubles! Let me just use the one click install feature to get WordPress running on my hosting site, choose an existing theme and get going!


No, not really! Apparently my hosted web space is slower than slow, and after some consideration I felt the only viable thing now, if I wanted to get started, was to head over to and get it done. The main reason to go with is solely that if I ever want to go back and design my own theme, or convert the one above, I can always get some web space (preferably fast) and move there quite easily.

Starting a blog shouldn’t be about getting web space, creating a design or developing your own blog engine (I was actually thinking of that first, sigh!). Just start it! That is what I finally did and the reason why you can read this post, yay!

So then, what can you expect from me and this blog? All posts will in some way be related (sometimes maybe far-fetched) to developing software, be it craftsmanship, modeling, agile, UI, and whatnot. But one thing is for sure: we'll have to wait and see if I manage to put together a glorious mix that will make this soup taste …… something!