Per Brage's Blog

const String ABOUT = "Somehow related to code";

HTML5/JavaScript Cube (Part 2: Improving with Three.js)

In part one I showed you how to create a spinning cube by building a mesh and applying basic linear algebra to make it rotate. The development was fun in itself, but looking at the surrounding world and what has happened over the past 15 years or so, evolution has definitely leaped forward. These days there are tons of 3D engines, frameworks, libraries and what not, so there is really no need to do all that work yourself. As it happens, there is a JavaScript 3D library named Three.js, which enables you to create stunning 3D graphics with a small amount of effort. I thought it would be fun to see if I could improve my cube by refactoring it to use Three.js, and at the same time add some new features like texture mapping and light sources.

However, when adding features like texture mapping, a rather large browser compatibility issue presents itself, which I think requires a bit of explanation. Three.js has three main renderers: CanvasRenderer, SVGRenderer and WebGLRenderer (with some variations of WebGLRenderers). They all have a few pros and cons, but I'm focusing on the problems related to my features.

  • CanvasRenderer

    Works in all major browsers, since it's just an HTML5 canvas and JavaScript. The bad part is that the texture mapping routine is not perspective correct, and there is no interpolation in the triangle drawing routine, which produces gaps between aligned polygons. And when I tested it, I couldn't even get the texture mapping to work in IE9.

  • SVGRenderer

    Doesn't support texture mapping at all, so that is a no-go, at least for the new features I want to add.

  • WebGLRenderer

    The best of the three renderers, but the problem is that WebGL is not supported by all major browsers, most notably IE. It seems people are still using IE for some unknown reason, but in my opinion they should just remove the 6 from http://www.ie6countdown.com/. Even so, I'm not sure IE will ever support WebGL, since it's based on OpenGL, which is a direct competitor to Microsoft's DirectX. Microsoft has also openly stated that WebGL is insecure, as it directly exposes hardware functionality to the web. I guess they may have a point there!

This leaves a problem with Three.js: no single renderer allows me to texture map a cube and have it work in all major browsers (speaking of recent versions). For the sake of this post and the challenge I started, I will use the WebGLRenderer and strictly require Chrome or Firefox to view it. So if you can't click and see the demo below, switch to a real browser.

Refactoring to use Three.js

In my new cube I have reduced the code from about 250 lines to 90 (non-minified JavaScript). The features I wanted to add, texture mapping and light sources, were of course provided by Three.js out of the box. The refactored code boils down to a few simple steps: create a renderer and a scene, add a camera and a cube, and then add our light sources to the scene. I still have to take care of the actual spinning myself, but that can now be done without any linear algebra.

Here are the few very simple functions that replace most of my code from part one.


var buildCamera = function () {
    var camera = new THREE.PerspectiveCamera(50, viewPortWidth / viewPortHeight, 1, 1000);
    camera.position.z = 500;
    return camera;
};

var buildCube = function () {
    var cubeGeometry = new THREE.CubeGeometry(200, 200, 200, 1, 1, 1, buildMaterials());
    return new THREE.Mesh(cubeGeometry, new THREE.MeshFaceMaterial());
};

var buildDirectionalLight = function () {
    var directionalLight = new THREE.DirectionalLight(0xffffff);
    directionalLight.position.set(1, 1, 1).normalize();
    return directionalLight;
};

var buildAmbientLight = function () {
    return new THREE.AmbientLight(0x999999);
};

The refresh function is similar to before, but Three.js takes care of the drawing now. So the refresh function was refactored into a calculateCube function that increases the angle, sets the angle on the cube's rotation properties, and then renders the scene.


var calculateCube = function (cube, scene, camera) {
    angle++;
    cube.rotation.x = degreesToRadians(angle);
    cube.rotation.y = degreesToRadians(angle);
    cube.rotation.z = degreesToRadians(angle);
    renderer.render(scene, camera);
};

Now we simply add the output of each and every function above to the scene, and then start a timer to call our calculateCube function.

        var scene = new THREE.Scene();
        var camera = buildCamera();
        var cube = buildCube();

        scene.add(camera);
        scene.add(cube);
        scene.add(buildAmbientLight());
        scene.add(buildDirectionalLight());

        setInterval(function () {
            calculateCube(cube, scene, camera);
        }, 20);

Result

[Image: the spinning, textured cube rendered with Three.js]

Links

Live demo of spinning cube using Three.js
Full source at my blog-examples repository on GitHub

Demonstrations of the Three.js library.

Misconceptions about domain events

There have been a lot of posts around the web, even for years now, trying to explain domain events. More often than not, I see people fail at understanding what domain events are. I guess it is often developers that blog about it, and when they hear domain events, they inherently think of system events.

Domain events have nothing to do with system events, event sourcing, or technical and architectural patterns like CQRS. Domain events are things that actually happen in your customer's business, not in your software. I think it is about time people understood that domain driven design is more about modeling complex behaviors in a business, and not so much about actual development (especially not building frameworks). Development in domain driven design is an iterative approach, where you continuously test the knowledge you gain from your domain experts through discussions and modeling. Repeating this, trying to reach an agreement with your domain experts, is what will bring clarity to your model and pave your way towards a supple design. This is part of something Eric Evans calls the Whirlpool, which I guess could be a blog post of its own.

The domain event pattern was not included in the blue bible, but it is definitely a major building block in domain driven design. Here is an excerpt from the domain event pattern, as defined by Eric Evans.

“Model information about activity in the domain as a series of discrete events. Represent each event as a domain object. These are distinct from system events that reflect activity within the software itself, although often a system event is associated with a domain event, either as part of a response to the domain event or as a way of carrying information about the domain event into the system.

A domain event is a full-fledged part of the domain model, a representation of something that happened in the domain. Ignore irrelevant domain activity while making explicit the events that the domain experts want to track or be notified of, or which are associated with state change in the other model objects.” – Eric Evans

Think of domain events as a log, or history table of things that have happened. A domain event is implemented more or less the same way as a value object, but typically without any behavior. An event is something that has already happened, and therefore cannot be changed, hence they are just like value objects, immutable! Do you often change things that happened in your past? If so, could you drop me a mail and enlighten me?
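As a purely illustrative sketch (the event name and properties below are invented, not taken from any particular model), such an event could look like this in C#:

    // Purely illustrative: a domain event modeled as a small, immutable, value-object-like class.
    // The event name and properties are invented for the example.
    public class CustomerRelocatedEvent
    {
        public CustomerRelocatedEvent(Guid customerId, string newAddress, DateTime occurredOn)
        {
            CustomerId = customerId;
            NewAddress = newAddress;
            OccurredOn = occurredOn;
        }

        // All state is set once in the constructor and never changes afterwards.
        public Guid CustomerId { get; private set; }
        public string NewAddress { get; private set; }
        public DateTime OccurredOn { get; private set; }
    }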

Let Mighty Moose open failed tests in MonoDevelop

A few days ago I blogged about how to use Mighty Moose in Ubuntu, and provided a guide on how you could continuously run your SpecFlow/NUnit tests while working in MonoDevelop. If you have not read that post you may want to check it out first, as it is a prerequisite to this one. You can find it here. While writing that post, there was this one thing that I left out on purpose, as it was not really necessary to get everything working. But, it is a nice feature to have, so I thought I would make a quick post and show you how to configure it.

Take a look at this screenshot: a test failed! I have marked it and clicked the Test output link. You will notice that Mighty Moose correctly displays the reason why our test failed, but there is not much information in the stack trace.

To get Mighty Moose to display more information in the stack trace of a failed test, we need to configure nunit-console to hand out more information while running our tests. To do this, we can edit the /usr/bin/nunit-console file and add a debug parameter. It should look something like this:

#!/bin/sh

exec /usr/bin/cli --debug /usr/lib/nunit/nunit-console.exe "$@"

If we restart everything, run the tests again and then click the Test output link, we will see that Mighty Moose now has more information in the stack trace, and most importantly, we can see the file and the line number that failed our test.

Now that our stack trace includes a file and a line number, we can tell Mighty Moose to use that information together with MonoDevelop. Just add the following few lines to your AutoTest.config under ~/.local/share/MightyMoose/ and then restart everything.

<CodeEditor>
    <Executable>monodevelop</Executable>
</CodeEditor>

Now, double-click a failed test in Mighty Moose and it will automatically open the code file in MonoDevelop, and put the cursor on the line that failed the test. Enjoy!

Using Mighty Moose with SpecFlow/NUnit on Mono/.NET4 (Ubuntu)

The past few days, I’ve been working on getting a Mono environment up and running in Ubuntu 11.10. The reason for this is that I have two projects I believe would benefit greatly from being able to run on Mono. I also reckoned this would be a perfect time to refresh my Linux knowledge, learn something new, and have some fun. My requirements were pretty basic, and they were as follows:

  • Use latest stable Ubuntu
  • Use latest Mono and MonoDevelop
  • Use SpecFlow for my scenario tests
  • Use NUnit for my unit tests

I started on these requirements, trying to solve them one by one. But after having tons of issues, especially getting my tests to run with the latest version of MonoDevelop’s NUnit integration, I shouted out my frustration on Twitter. That’s when Greg replied with the following message:

Yes, why didn’t I even think of that? I’ve used Mighty Moose with Visual Studio before, while working on some small projects, and it’s a great product. Also, I actually knew there was a cross-platform standalone client available. Silly me! The first thought I had: why even bother getting MonoDevelop’s NUnit integration to work, since it has only been failing me so far? Instead, let’s go all-in for the Mighty Moose approach!

I quickly extended my requirements:

  • Use Mighty Moose to run all my tests. Continuously!

Getting these last three requirements running together wasn’t as easy as I first had hoped, and it took me a couple of evenings of resolving issues while trying to configure NUnit, SpecFlow and Mighty Moose! I thought perhaps others might be interested in how to get this working without spending the amount of time I did, hence I decided to write this blog post. I hope you will find it useful, but if not, at least it will be available to me the next time I need to configure a Mono-environment!

Installing Ubuntu

I created a VHD with VirtualBox and installed Ubuntu on it, which will allow me to boot Ubuntu natively, and use it as a VM. It took me some time to get it configured, and set up the way I want it, as it has been like 10 years since I last used any kind of Linux distribution for my desktop. I won’t go into depths of how to install Ubuntu, other than mentioning that I downloaded Ubuntu 11.10, installed it, and upgraded it with all the latest packages. There are tons of information on the internet about these things, go Google it if you get stuck here!

One thing worth mentioning though is that you should definitely install Compiz, which will help you resolve key-binding issues. If you are like me and have a Visual Studio background, keys like F10 to step over while debugging are something you might not want to re-learn. Using Compiz you can resolve the issue of certain keys being assigned to functions in the Unity desktop, which makes them unavailable to applications like MonoDevelop.

Getting the latest versions of Mono and MonoDevelop

The Ubuntu repository provides older versions of Mono and MonoDevelop, but I wanted to get my hands on more recent versions. Badgerports.org is a repository that provides recent builds of Mono and MonoDevelop, as well as other related packages. Currently, they have MonoDevelop 2.8.6.3, which isn’t the cutting-edge-latest, but so far I find it rather stable, and it’s recent enough. To set up badgerports.org, please follow the steps here.

Caveat: Upgrading MonoDevelop may break MonoDevelop’s built-in NUnit test runner. This blog post will not deal with how to fix that issue, as we will use Mighty Moose to run all tests. If you rely on this integration, then don’t upgrade to MonoDevelop 2.8.

After you’ve set up badgerports.org according to the instructions, open a terminal window and issue the following commands.

sudo apt-get update
sudo apt-get upgrade -t lucid mono-complete
sudo apt-get dist-upgrade
sudo apt-get install -t lucid monodevelop

There are other Mono-related packages you may want to install, but the ones above are enough to fulfill the requirements.

Installing SpecFlow

Installing SpecFlow at first seemed like a rather easy task, but confusion caused by weird crashing errors from SpecFlow’s generator threw me for a loop, a long loop. However, what for some time looked quite hopeless did in fact have a quite easy solution. Here’s how to get it installed. Launch MonoDevelop and head into the Add-in Manager under the Tools menu. Check the gallery and search for SpecFlow. You should find SpecFlow Support under IDE extensions. Mark it and click Install.

Now we need the latest SpecFlow binaries; download the zip archive from here (I used version 1.8.1). Extract the archive, head into the tools folder, and then execute the following commands to install all the required SpecFlow assemblies into the Mono GAC, and also to make the command-line utility available by typing, yes that’s right: specflow


sudo gacutil -i Gherkin.dll
sudo gacutil -i IKVM.OpenJDK.Core.dll
sudo gacutil -i IKVM.OpenJDK.Security.dll
sudo gacutil -i IKVM.OpenJDK.Text.dll
sudo gacutil -i IKVM.OpenJDK.Util.dll
sudo gacutil -i IKVM.Runtime.dll
sudo gacutil -i TechTalk.SpecFlow.dll
sudo gacutil -i TechTalk.SpecFlow.Generator.dll
sudo gacutil -i TechTalk.SpecFlow.Parser.dll
sudo gacutil -i TechTalk.SpecFlow.Reporting.dll
sudo gacutil -i TechTalk.SpecFlow.Utils.dll
sudo mv specflow.exe /usr/bin/specflow
cd /usr/bin
sudo chown root:root specflow
sudo chmod 755 specflow

Voila! SpecFlow can now generate code-behinds correctly in MonoDevelop! Also, remember to add a reference to TechTalk.SpecFlow.dll in your assembly containing your specifications.

As SpecFlow uses the NUnit framework, we now need to get NUnit running somehow.

Installing and configuring NUnit for Mono/.NET4

This was probably the most annoying part of it all. MonoDevelop’s NUnit integration is compiled against a certain version of NUnit, which it expects in the Mono GAC. When I actually got the correct version installed, MonoDevelop started throwing errors about needing an even earlier version of NUnit, sigh! I also got a bunch of weird errors that I didn’t actually know how to solve, like missing methods and classes within NUnit core, which also seemed like typical version issues. The other issue was that I just couldn’t get any NUnit test runner to run tests written in Mono/.NET4.

As I had decided to use Mighty Moose, the solution was to break free from all the test runners I had been trying to configure so far. Instead, to get Mighty Moose running our tests, we can focus on the nunit-console runner. If we get that working properly, we can configure Mighty Moose to use it.

NUnit may or may not be installed on your system, but issuing the following command will make sure you have it installed.

    sudo apt-get install nunit

Then head into the configuration located at /usr/lib/nunit/, and edit the nunit-console.exe.config file. Just under the <configuration> tag, add the following lines:

    <startup>
        <requiredRuntime version="v4.0.30319"/>
    </startup>

And then add the following two lines under the <runtime> tag (the first one might already be there).

    <legacyUnhandledExceptionPolicy enabled="1" /> 
    <loadFromRemoteSources enabled="true" /> 

Now you should be able to use the nunit-console test runner on a Mono/.NET4 unit-test project.

Installing and configuring Mighty Moose

You have reached the last part of this guide, and hopefully you haven’t had any troubles so far. Mighty Moose is what will tie the knot, and I won’t keep you any longer. Download the cross platform standalone client and extract the files somewhere and head into that location, then issue the following commands:

    sudo mkdir /usr/bin/continuoustests
    sudo cp -R . /usr/bin/continuoustests/
    cd /usr/bin/continuoustests/
    find -name "*.exe"|sudo xargs chmod +x
    cd /usr/bin
    sudo touch mightymoose
    sudo chmod +x mightymoose

Now open the mightymoose file we just created in a text editor and paste these lines in it.

#!/bin/sh

exec /usr/bin/mono /usr/bin/continuoustests/ContinuousTests.exe $(pwd)

Before we can move on, we need Mighty Moose to create a config file for us. Just issue these commands below, fill in the information and configure Mighty Moose to your liking. Also, remember to close Mighty Moose afterwards. (You may receive errors here, but just ignore them)

    cd <your solution directory>
    mightymoose

As already mentioned, we now need to switch test runner in Mighty Moose, and preferably use the nunit-console runner we got working with Mono/.NET4. Locate your AutoTest.config under ~/.local/share/MightyMoose/ and open it in a text editor. Add these tags within the <configuration> tag.

    <UseAutoTestTestRunner>false</UseAutoTestTestRunner>
    <NUnitTestRunner>/usr/bin/nunit-console</NUnitTestRunner>
    <BuildExecutable>/usr/bin/xbuild</BuildExecutable>

What these three lines do is: first, switch off the built-in test runner; second, tell Mighty Moose to use nunit-console for NUnit tests. The last line will allow Mighty Moose to be notified when you save files. So instead of having your tests run when you compile your solution, Mighty Moose will now run affected tests when you save a file.

Now we are done! Each time you want to start Mighty Moose, you can just issue the following two commands to start having continuous tests of your project.

    cd <your solution directory>
    mightymoose

Result

If you have followed all the steps above, you should now have a working MonoDevelop environment, with Mighty Moose running all your SpecFlow/NUnit tests each time you save a file. I’ve been using this for a couple of days now, and it works great so far. What I really miss, though, is my favorite isolation framework, FakeItEasy, as it sadly doesn’t work on Mono. But Rhino.Mocks works right out of the box and will do for now.

Last but not least

I want to send out my best regards to Svein A. Ackenhausen and Greg Young for their awesome 24/7 support while I’ve been working on this environment. Without your help this would have taken ages, had it succeeded at all.

Thank you!

Links

VirtualBox
Ubuntu
Mono
MonoDevelop
Continuous Tests (Mighty Moose)
SpecFlow
Rhino.Mocks
Badgerports.org

Boost pair-programming with a remote session!

Do you work in an environment that doesn’t promote pair programming? Or worse, where it’s not even allowed? Or do you find yourself in a highly productive and positive environment, which really wants pairing to thrive? Perhaps you are already experts in pairing and have formalized a process around it? No matter the situation, I guess you are reading this post out of an interest in boosting your pair programming.

Remote! what? NO!?

I guess the title immediately hits most of you seasoned agile practitioners with an alarming red screaming alert: anti-pattern, anti-pattern! Yes, not long ago I would have agreed with you on that thought, until recently when I actually tried it for myself. First, let’s go over the short story that led up to our remote pair-programming session, and then take a look at the positive effects and how it may boost your pairing!

What on earth made us pair remotely?

A colleague of mine and I needed to get a feature done, which in itself wasn’t particularly hard, but it consisted of several tasks, some easier, some harder. We both had a rough idea of how to solve it, since each of us had previous experience on the topic and we had spent a lot of time in meetings discussing how to attack the feature in a way that would suit our platform.

The time had come to transform a very basic proof of concept that was already in place into something great. We booked a meeting so we wouldn’t get disturbed, locked ourselves up in a war room, and joined forces for the duration of a day. We had a very nice session indeed! We used one computer hooked up to a projector, which displayed the code all over the wall, and we had whiteboards to model, discuss and test ideas. We started out by transforming the proof of concept implementation we already had, writing unit tests, deferring coding tasks and so on, and all in all, it felt good. We got far, accomplished a lot, and we ended the session with smiles on our faces after a great day spent together, and, speaking for both of us, looking forward to the next day when we would continue in much the same way.

When we finally got around to starting a new session the day after, my wife called and I needed to leave to get our daughter to the hospital, so the session ended before it had even started. We didn’t get much done that day, obviously!

Shame on those who give up!

The next morning, I decided to work from home in case I needed to go back to the hospital, and so did my colleague, for other reasons. But we decided to keep pairing no matter the distance, and we continued where we had left off. We fired up a shared desktop and launched Visual Studio, in which we could both code at the same time. We also used our phones with headsets to be able to talk (we call for free within our company; otherwise we would have used something like Skype or a similar product).

We went on pairing over the phone and this shared desktop, switching back and forth between driver and observer! It felt very natural, without having to use any kind of time boxing or a clock telling us when to switch. At the end of the day, after about six hours of remote pairing, we both felt that this was a very nice way of pairing, and that it removed several of the impediments we often felt when pairing at the office.

What are the positive effects then?

These are the key areas we felt were boosted by a remote session, compared to pairing either co-located or at our desks:

  • Intimate

    The feeling of a truly shared coding experience; it almost felt like I was writing code through my pairing partner. It doesn’t get better than that!

  • Natural switching

    At any point during our coding session we could switch back and forth between driver and observer. It felt very natural, without having to ask for permission, move a laptop/keyboard or swap chairs. We just started coding, and sometimes it brought laughter upon us, as silly things can happen with two keyboards and one Visual Studio.

  • Focus

    • External distractions

      We were working from home, and I guess it was quite easy for us to avoid getting disturbed! But had we been at the office and somehow communicated through our headsets, I think we could have achieved the same result. No one would interrupt a colleague who is obviously on the phone, much too busy to be disturbed with questions or even a cheerful “Good morning”! Having no external distractions allowed us to enter an intense focus that I think neither of us could sustain in our open-plan office.

    • Internal distractions

      Communicating over the phone, with our headsets, removed almost all internal distractions, as there was no time or opportunity to alt-tab into an email client, chat, Twitter, surfing, texting or whatever people usually do while coding. Out of respect, interest and a shared goal we were focused on one thing, and one thing only, together, which led to no task switching whatsoever.

  • Breaks

    Having breaks is great, and everyone needs both shorter and longer breaks to stay focused and keep working at a sustainable pace. Again, as we were on our phones with headsets, we could take short breaks to go get a drink or whatever we wanted and still remain focused. If I was coding, my colleague just took over while I was gone grabbing coffee, and vice versa. For our long breaks we hung up and left our computers to get some detachment, which enabled us to come back refreshed and ready for new endeavors.

Conclusion

All in all, I must say I was very pleasantly surprised after this full day of remote pair-programming, and I can only recommend it, whether you are beginners or experienced pair programmers. At least I will not think twice before heading into a remote session again if the opportunity presents itself!

My colleague mentioned in this article, who also reviewed this blog post, can be found on Twitter as @perakerberg. You can also find his software quality blog here.

Event Broker using Rx and SignalR (Part 4: Solving the Scenario)

The time has come to start implementing the scenario I invented and then refactored from a science fiction novel into a simple online shop, which just happens to sell computers and components. As promised earlier, by the time you are reading this post the full source will be available in my GitHub repository; just follow the link below.

Configuration

This post will wrap up the series with the fluent configuration for our brokers: Website, ComponentStock, ComputerStock and Procurement. Let’s begin with the website, since that’s where everything starts.

Website

The website creates the event broker by specifying a publishingUri, which will register the event broker so subscribers can connect to it. We can also see how a local subscription is added here, which we will use for sending out confirmation mails within our ProductOrderedEventConsumer. Then we start ordering products and publish the events using the OrderProduct() method, which just randomly creates ProductOrderedEvents.


            using (var eventBroker = new EventBroker("http://localhost:53000/"))
            {
                eventBroker.Locally().Subscribe(new ProductOrderedEventConsumer());

                Console.WriteLine("Press any key to start ordering products");
                Console.ReadKey();

                for (var i = 0; i < 30; i++)
                {
                    eventBroker.Publish(OrderProduct());
                    Thread.Sleep(200);
                }

                Console.ReadKey();
            }
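The OrderProduct() method itself isn’t shown in the post; purely as an illustration (the product names and groups below are made up, not taken from the real source), it might look something like this:

    // Illustrative sketch only: randomly creates ProductOrderedEvents for the demo.
    private static readonly Random Random = new Random();
    private static readonly string[] ProductGroups = { "Laptop", "Computer", "Component" };

    private static ProductOrderedEvent OrderProduct()
    {
        var productGroup = ProductGroups[Random.Next(ProductGroups.Length)];
        return new ProductOrderedEvent
        {
            ProductName = productGroup + " #" + Random.Next(1, 1000),
            ProductGroup = productGroup
        };
    }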

Computer and Component stock

Both our computer and component stocks register themselves to allow remote subscribers, while also remotely subscribing to ProductOrderedEvent (through an event consumer). Internally, the event consumer for the computer stock makes use of a specification to filter the incoming events, whereas the component stock uses lambdas, just to show the difference.


            using (var eventBroker = new EventBroker("http://localhost:53001/"))
            {
                eventBroker.ConnectionStatus += (s, ev) => 
                    Console.WriteLine("Component stock: " + (ev.Success ? "Connected!" : ev.ErrorMessage));
                eventBroker
                    .Locally()
                        .Subscribe<ProductShippedEvent>(x => 
                            Console.WriteLine(String.Format("{0} order packed and shipped", x.ProductName)))
                    .Remotely("http://localhost:53000/")
                        .Subscribe(new ProductOrderedEventConsumer(eventBroker));

                Console.ReadKey();
            }


    public class ProductOrderedEventConsumer : EventConsumer<ProductOrderedEvent>
    {
        private readonly IEventBroker _eventBroker;
        private readonly Random _random;

        public ProductOrderedEventConsumer(IEventBroker eventBroker)
        {
            _eventBroker = eventBroker;
            _random = new Random();

            RegisterSpecification(new ItemsInLaptopOrComputerProductGroupSpecification());
        }

        public override void Handle(ProductOrderedEvent @event)
        {
            _eventBroker.Publish(new ProductShippedEvent
                                     {
                                         ProductName = @event.ProductName
                                     });

            if (_random.Next(10) > 5)
                _eventBroker.Publish(new ProductOrderPointReachedEvent()
                                         {
                                             ProductName = @event.ProductName
                                         });
        }
    }

Procurement

The last of our configurations! We set up remote subscriptions to our stocks and start listening for events telling us that the order point was reached, so procurement can order new products to fill up our stocks. Here you can also see an example of dual remote subscriptions added through the fluent API.


            using (var eventBroker = new EventBroker())
            {
                eventBroker.ConnectionStatus += (s, ev) => 
                    Console.WriteLine("Procurement: " + (ev.Success ? "Connected!" 
                                                                          : ev.ErrorMessage));
                eventBroker.Remotely("http://localhost:53001/")
                                .Subscribe(new ProductOrderPointReachedEventConsumer())
                            .Remotely("http://localhost:53002/")
                                .Subscribe(new ProductOrderPointReachedEventConsumer());

                Console.ReadKey();
            }

Result

By running the solution you will see four console windows, which will display information when they receive and process an event. I added links to images for each console window as an example of how they would look after completing 30 product orders. But a better example of the result would be to run the source available in my repository.

Website console window
Component stock console window
Computer stock console window
Procurement console window

Links

Source
Full source at my GitHub repository

Navigation
Part 1: A Fluent API
Part 2: Implementation
Part 3: Event Consumers
Part 4: Solving the Scenario

Is it an Entity or a Value object?

This is a question that seems to surface over and over again when I talk to fellow developers, and while it is somewhat clear to many what the difference between entities and value objects is, it seems less clear when to actually use an entity and when to use a value object. For some reason, people also seem to favor entities over value objects when modeling, when it really should be the other way around. If you are working with only entities, then my friend, it sadly seems you are not alone out there!

Why is this? And why are people so reluctant to implement and use value objects?

I figured I would try to clear up the misconceptions a bit, and provide two examples that I often use to illustrate the difference between entities and value objects. Both of them are insights taught by Eric Evans during his DDD Immersion class, which I attended in Stockholm about a year ago. They are slightly modified, as I don’t remember the exact phrases Eric used, and I have also extended and twisted each of the examples.

Context

In domain driven design, context is everything, and it is also a key factor when choosing between modeling an object as an entity or as a value object. A context (or bounded context, to be more specific) has explicitly defined borders and is in itself an application of its own. Within this context we define a model, create our ubiquitous language and implement behavior, among other things, to fulfill our requirements.

The main thing here is to understand that a context’s model is an abstracted view of either your, or your customer’s, business, and will not solve all the problems of a domain. An object in one context might be something completely different in another context, even though both contexts are within the same domain and the object has the same name. It all depends on which abstraction your requirements emphasize, and this is also the reason why two customers operating in the same domain may have two completely different models. In one of those models an object might be a value object, whereas in the other model it is an entity.

The – $100 bill! – example

You and your colleague are each holding one of your own $100 bills, and there isn’t anything wrong with them or anything like that. If I asked you to swap those two bills, neither of you would really care; you would swap them, be none the wiser, and move along like nothing had happened. After all, neither of you earned nor lost money! A $100 bill is a $100 bill, and you would most likely only compare the number of zeros printed on the bill. After comparing that value you wouldn’t really care which one you hold in your hand, and since we do not care about instance, and we compare them by state, we are talking about typical value objects.

But doesn’t a $100 bill have an identification number? Well, actually it does, and that means that in some context the exact same bill you are holding in your hand is very important to somebody. And in that context it might actually be an entity. However, even though the $100 bill has an identification number and we may think that makes it an entity, it is not necessarily so. As always, it depends on the context.

With the same context as above, how many of you would swap your credit cards, if each card had a $100 balance on it?

The – A glass of water! – example

Imagine you and I just sat down to have a meeting. I pour two glasses of water, and then give you the opportunity to pick one of the glasses. You would not really care which one of the glasses you picked, since they are just two glasses on a desk. So far, in this context, both glasses are the same, and their equality would be determined by the type of glass, the amount of water, and perhaps the quality of the water. Since we do not care about instance, and we compare by state, we are talking about typical value objects once again.

Now, let’s redo the same scenario, but as I pour the two glasses of water, I take a sip from one of the glasses. Most people (I don’t know about you, though) would by default pick the glass I have not taken a sip from, as the other glass has immediately become my glass of water. At this point, the type of glass, amount of water and quality are no longer of any concern, because in this new context I have, by the sip I took, polluted one glass as mine, and thus imposed an identity on it. So as this context has evolved, both glasses of water are now entities, even though I only sipped from one of them.

You could argue that the glass of water is still a value object, but now attached to a person object. But I didn’t swallow a glass; I drank from the contents of a glass. That content might in itself be a value object, which was temporarily attached to the glass but is now attached to the person object. So now we have separated the glass of water into two objects, and we have to reevaluate the whole scenario as our context has evolved.

For fun, let’s back up and twist this sip-to-impose-identity example further, and I will generalize a bit here (sorry, all smokers). Smokers often ask for the butt of a cigarette if they are temporarily out of smokes. They don’t mind putting a cigarette to their mouth that someone else has already smoked on, but at the same time they wouldn’t share a glass of water with the same person. So by taking a sip from a cigarette, we may still be using value objects.
What we changed here is the object we interact with, and by changing from a glass of water to a cigarette we also changed an attribute of the context. It might be that we are in two completely different contexts, but if we are in the same context, a cigarette and a glass of water would most likely not inherit from a shared abstraction like Product, as they would be modeled and implemented quite differently. Watch out for those generalized abstractions, as they will most likely impede you from reaching a supple design.

Consider the context above and these method signatures. Which would you prefer? These,


    void Drink(IDrink drink);
    void Smoke(ICigarette cigarette);

or


    void Sip(IProduct product);

Again, it comes down to our context and how we want to implement our behavior. And while we are on the subject of behavior, we actually changed a few behaviors in the model without really mentioning it, but the most important change to the scenario above is that we can no longer assume that taking a sip from something will automatically be okay with a person object.

Those examples didn’t help me at all

Here is some information and a few tips about value objects that you might want to use as guidance.

  • Context

    Always understand the context you are in, listen to the language and the concepts that your domain experts talk about. Without a clear perception of your context, you are much more likely to fail before you even get started.

  • Identity through state

    A value object’s identity is based on its state. Two value objects with the same property values would be considered the same object (see the sketch after this list).

  • Immutability

    Value objects are immutable, which means you cannot change a value object’s state without replacing the whole object. Immutable objects are always easier to handle and understand, and since immutable objects are inherently thread-safe, they are a better choice for today’s multi-threaded applications.

  • Interchangeable

    Value objects are throwaway objects. During a transaction you may fetch one, and then by using the Side-Effect Free pattern, which is especially useful when dealing with value objects, create n-objects before reaching your final instance, which is the one that will be persisted.

  • Explicit identity

    Even though the object you are trying to model has an explicit identity, do not get fooled into making it an entity just because it has this identity. Does the identity mean anything in your context?

  • Temporary name

    At times when you are unsure of what an object is and what it should do, it can help to give your object some random name that doesn’t mean anything. This will allow you, and people around you, to not get stuck on a particular concept, and hopefully you will be able to continue refining your model. This will also help you to stick with a value object implementation as long as possible.

  • Refactoring

    When you need to refactor your model, it will always be easier to make an entity out of a value object, than the other way around.
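To make the identity-through-state and immutability points concrete, here is a small, purely illustrative C# value object (the Money type below is an invented example, not part of any model discussed in this post):

    // Purely illustrative value object: immutable, and compared by state rather than by reference.
    public sealed class Money
    {
        public Money(decimal amount, string currency)
        {
            Amount = amount;
            Currency = currency;
        }

        public decimal Amount { get; private set; }
        public string Currency { get; private set; }

        // Identity through state: two instances with the same values are the same value.
        public override bool Equals(object obj)
        {
            var other = obj as Money;
            return other != null && Amount == other.Amount && Currency == other.Currency;
        }

        public override int GetHashCode()
        {
            return Amount.GetHashCode() ^ (Currency ?? string.Empty).GetHashCode();
        }

        // Immutability: "changing" a value object means creating a new one.
        public Money Add(Money other)
        {
            // A real model would verify that the currencies match; omitted for brevity.
            return new Money(Amount + other.Amount, Currency);
        }
    }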

Still in doubt?

When in doubt about what an object is and how it should be modeled, always favor value objects. I will let Bart repeat it a few times on his chalkboard, and perhaps you will remember this chant and think twice the next time you are about to add another entity to your model.

I will always favor value objects

Event Broker using Rx and SignalR (Part 3: Event Consumers)

If you go back and look at the result from the first post of the series, you will notice that during registration of subscriptions we add filter predicates and assign actions to be executed as events are consumed. This isn’t really such a good idea, since we will end up having all our logic in our configuration, which will clutter the code, among other bad things! So what can we do about it?

Event Consumers

The solution to our problem is to outsource both filtering and processing into event consumers. For those of you familiar with CQRS, just think of a command handler that implements IHandle<T>, but for an event. With event consumers we end up with classes that handle each particular event. We gain simplicity and separation, and, if they are named correctly, end up with more declarative code.

Let’s start with an interface describing an event consumer, and an abstract base implementation that provides a Register method, which helps us register a Func<TEvent, Boolean> (i.e. a filter). The func is added to our multicast delegate property, which allows us to add several filters to an event consumer.

    public interface IEventConsumer<in TEvent> : IHandle<TEvent>
    {
        Func<TEvent, Boolean> Filters { get; }
    }
    public abstract class EventConsumer<TEvent> : IEventConsumer<TEvent>
        where TEvent : IEvent
    {
        public Func<TEvent, Boolean> Filters { get; private set; }
 
        protected void Register(Func<TEvent, Boolean> filter)
        {
            if (Filters == null)
                Filters = filter;
            else
                Filters += filter;
        }
 
        public abstract void Handle(TEvent message);
    }
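The IHandle<T> interface that IEventConsumer builds on isn’t shown in the post; given the abstract Handle method above, it presumably amounts to no more than this sketch (the real definition lives in the full source):

    // Assumed shape of IHandle<T>; the actual definition ships with the full source.
    public interface IHandle<in TMessage>
    {
        void Handle(TMessage message);
    }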

Let’s implement the ProductOrderedEventConsumer that will be used at the component stock in our scenario. When the component stock receives a ProductOrderedEvent, we will have to make sure it won’t act upon events for laptops and computers, as the computer stock will handle those two types of products. To accomplish this we just register a filter in the constructor to exclude all events for products in the laptop or computer product group. The handle method will now only process events matching our registered filter.

Speaking of the Handle method, it won’t do much more than publish a ProductShippedEvent, and at random, the ProductOrderPointReachedEvent which will simulate that our inventory is getting low on a particular product.

    public class ProductOrderedEventConsumer : EventConsumer<ProductOrderedEvent>
    {
        private readonly IEventBroker _eventBroker;
        private readonly Random _random;

        public ProductOrderedEventConsumer(IEventBroker eventBroker)
        {
            _eventBroker = eventBroker;
            _random = new Random();

            Register(x => x.ProductGroup != "Laptop" && x.ProductGroup != "Computer");
        }

        public override void Handle(ProductOrderedEvent @event)
        {
            _eventBroker.Publish(new ProductShippedEvent
            {
                ProductName = @event.ProductName
            });
            
            if (_random.Next(10) > 5)
                _eventBroker.Publish(new ProductOrderPointReachedEvent()
                {
                    ProductName = @event.ProductName
                });
        }
    }

Using Specification pattern to apply filtering

Nitpicker corner: Yes, specifications can be a bit cumbersome, and yes, they do add a lot of code that could at times be written with a few simple lambdas. But remember the ubiquitous language? Specifications allow us to communicate about rules and predicates with everyone involved in developing our software! Being able to talk about ‘items in the laptop or computer product group’ instead of the ‘x rocket x dot product group equals laptop or x dot product group equals computer’ predicate provides a lot of value that should not be neglected! At the same time we need to be pragmatic about it and not implement specifications for every little equality check we create, as that would definitely overwhelm our code base. For this particular use case we might be overdoing it, but I wanted to show a simple example of using specifications.

The specification pattern in its essence matches an element against a predicate and responds with a boolean indicating whether the input data satisfies the predicate. This interface describes it rather well.

    public interface ISpecification<TElement>
    {
        Boolean IsSatisfiedBy(TElement element);
    }

There is also a simple abstract base class (available in the full source) that I use to avoid repeating myself. There are far more advanced implementations of the specification pattern available online, and I would suggest using one of them if you want to start using specifications in your code. Below is the implementation of the ItemsInLaptopOrComputerProductGroup specification I mentioned earlier, which we will use in our scenario to filter incoming events in the computer stock.

    public class ItemsInLaptopOrComputerProductGroupSpecification:Specification<ProductOrderedEvent>
    {
        public ItemsInLaptopOrComputerProductGroupSpecification()
        {
            AssignPredicate(x => x.ProductGroup == "Laptop" || x.ProductGroup == "Computer");
        }
    }
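For reference, a minimal version of that abstract base might look like the sketch below; the real class ships with the full source, so treat this as an assumption about its shape rather than the actual implementation.

    // Sketch only: a minimal Specification<T> base consistent with how it is used above.
    // The actual class ships with the full source.
    public abstract class Specification<TElement> : ISpecification<TElement>
    {
        private Func<TElement, Boolean> _predicate;

        protected void AssignPredicate(Func<TElement, Boolean> predicate)
        {
            _predicate = predicate;
        }

        public Boolean IsSatisfiedBy(TElement element)
        {
            return _predicate != null && _predicate(element);
        }
    }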

Now we have a specification that declares intent with a name instead of a lambda. All we need now is a way of registering this specification into our event consumer, and that can be done with the addition of this method.

        protected void Register(ISpecification<TEvent> specification)
        {
            Register(specification.IsSatisfiedBy);
        }

Registering a specification now becomes a single line of code within our event consumers.

        Register(new ItemsInLaptopOrComputerProductGroupSpecification());

Links

Source
Full source at my GitHub repository

Navigation
Part 1: A Fluent API
Part 2: Implementation
Part 3: Event Consumers
Part 4: Solving the Scenario

Event Broker using Rx and SignalR (Part 2: Implementation)

In part one I wrote about my reasoning and the background of why I chose to have some fun creating my own event broker. I then took you through a scenario and the events that make it up, and finished the post by specifying a few interfaces that make up the fluent API of the event broker.

Now it’s time to implement the first pieces of the broker. Let’s just throw ourselves at it, shall we?

Local event streams

Subscribing to a local event stream is something that can be implemented quite easily using Reactive Extensions (or LINQ to Events, if you prefer that name). Out of the box, Reactive Extensions provides two classes that simplify the implementation dramatically: Subject<T> and EventLoopScheduler. Basically, Subject<T> is both an IObservable<T> and an IObserver<T>, and handles all publishing, subscriptions, disposing and so on for us, whereas the EventLoopScheduler class ensures that events are observed on a designated thread.

So without further ado, here is the first draft of our Reactive Extensions event broker for local event streams, implemented with the interfaces described in part one of this series.


    public class EventBroker : IEventBroker
    {
        private readonly IScheduler _scheduler;
        private readonly ISubject<IEvent> _subject;
        
        public EventBroker()
        {
            _scheduler = new EventLoopScheduler();
            _subject = new Subject<IEvent>();
        }
 
        public IEventBroker Publish<TEvent>(TEvent @event) 
            where TEvent : IEvent
        {
            _subject.OnNext(@event);
            return this;
        }
 
        public ISubscribe Locally()
        {
            return this;
        }
 
        public ISubscribe Remotely(String remoteEventStreamUri)
        {
            throw new NotImplementedException();
        }
 
        ISubscribe ISubscribe.Subscribe<TEvent>(Action<TEvent> onConsume) 
        {
            ((ISubscribe)this).Subscribe(null, onConsume);
            return this;
        }
 
        ISubscribe ISubscribe.Subscribe<TEvent>(Func<TEvent, Boolean> filter, 
                                                Action<TEvent> onConsume) 
        {
            _subject.Where(o => o is TEvent).Cast<TEvent>()
                                .Where(filter ?? (x => true))
                                .ObserveOn(_scheduler)
                                .Subscribe(onConsume);
            return this;
        }
 
        public void Dispose()
        {
        }
    }

Remote event streams

To be able to subscribe to remote event streams, we need to communicate and send our events over the network. There are many ways of doing this, but fortunately there is this library that I guess no one can have missed with all the buzz it has created lately, namely SignalR. Using SignalR it becomes very easy to create a publisher that subscribers can connect to and receive events from. What makes it even better is that SignalR has built-in support for Reactive Extensions, allowing us to reuse and provide the same API for both remote and local subscriptions.

SignalR does, however, not provide any safety against data loss; clients will just pick up listening to new events as they reconnect, totally oblivious to the events they missed while disconnected. This is one of the major reasons behind the caveat in the first post of the series.

SignalR can be hosted in various ways, but for this example I will use the self-hosting server, since I’m only using console applications in the example.

Starting SignalR Self-Hosting Server

SignalR provides a class named PersistentConnection that you inherit from, overriding various methods to handle negotiation, data received and so on. But we will only use the broadcast feature of the server connection in this example, so we will just create an empty class named EchoConnection.


    public class EchoConnection : PersistentConnection
    {
    }

We can now start the self-hosting server, extract the connection through the provided dependency resolver, and then use this connection to broadcast events to all subscribers.
To start the event broker in self-hosting mode we add an overloaded constructor taking a publishingUri, which the server will register itself on.


        private readonly IConnection _serverConnection;

        public EventBroker(String publishingUri)
            : this()
        {
            var server = new SignalR.Hosting.Self.Server(publishingUri);
            server.MapConnection<EchoConnection>("/echo");
            server.Start();
 
            var connectionManager = server.DependencyResolver.Resolve<IConnectionManager>();
            _serverConnection = connectionManager.GetConnection<EchoConnection>();
        }

Connecting to SignalR servers

We need to refactor a bit to be able to connect to a remote SignalR server (e.g. our remote event stream). First we add a stack of client connections, meaning that this event broker can be a subscriber to zero or more other event publishers. I guess a dictionary with named publishers might be a good idea, but a stack will suffice for this example.
Let’s add two private methods that will help us register subscriptions, both locally and remotely. The first is GetCurrentConnection(), which peeks at the top connection of the client stack; that will always be the current remote we are adding subscriptions to. The other method, GetCurrentObservable<TEvent>(), will, depending on whether we are registering local or remote subscriptions, return an IObservable<TEvent> from either the Subject<T> or the client connection.

This code also shows how SignalR supports Reactive Extensions by using the AsObservable<>() on the current client connection.


        private readonly Stack<Connection> _clientConnections;

        private IObservable<TEvent> GetCurrentObservable<TEvent>()
        {
            return _inLocalSubscriptionMode ? _subject.Where(o => o is TEvent).Cast<TEvent>()
                                            : GetCurrentConnection().AsObservable<TEvent>();
        }
 
        private Connection GetCurrentConnection()
        {
            return _clientConnections.Peek();
        }

But wait a minute, there is a problem here that we haven’t addressed yet. I don’t know if this is intended or a bug in SignalR, but when you use AsObservable<TEvent>() on a client connection, it doesn’t mean that incoming events are filtered by TEvent; SignalR will rather try to deserialize every incoming event (that would be all events of all types published) into the TEvent type. Some events might work, some fail, some get mixed up, and this is definitely not how we want it to work. So the solution to this problem is to take care of the serialization and deserialization ourselves, and not rely on SignalR’s default serialization.

Json.NET has a TypeNameHandling setting that can be used to add the type name of an object as metadata under the property name $type. Let’s use this feature and verify this property on all incoming events. What we’ll do is use AsObservable() to get an IObservable<String>, apply some type filtering and deserialization on that instance, and then use AsObservable() again. The code below has been refactored to accomplish this.


    private IObservable<TEvent> GetCurrentObservable<TEvent>()
    {
        return _inLocalSubscriptionMode ? _subject.Where(o => o is TEvent).Cast<TEvent>()
                                        : GetCurrentConnection()
                                                    .AsObservable()
                                                    .Where(IsEventOfCorrectType<TEvent>)
                                                    .Select(JsonConvert.DeserializeObject<TEvent>)
                                                    .AsObservable();
    }
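Neither the IsEventOfCorrectType<TEvent> filter nor the serializer settings used later are shown in the post; under the assumption that the $type metadata carries the full type name, they might look roughly like this:

    // Sketch only (not from the post): serializer settings that emit the concrete type name
    // as $type metadata, and a filter that checks incoming JSON against the expected event type.
    // Uses Newtonsoft.Json and Newtonsoft.Json.Linq.
    private readonly JsonSerializerSettings _includeTypeJsonSetting =
        new JsonSerializerSettings { TypeNameHandling = TypeNameHandling.All };

    private static Boolean IsEventOfCorrectType<TEvent>(String json)
    {
        var typeMetadata = JObject.Parse(json)["$type"];
        return typeMetadata != null &&
               typeMetadata.ToString().StartsWith(typeof(TEvent).FullName);
    }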

With those two methods in place we can refactor the main Subscribe method to use GetCurrentObservable<TEvent>(), and also add the subscription to our subscriptions collection, so we can close and dispose them when exiting the application. With those small changes we can now subscribe to a remote event stream.


        ISubscribe ISubscribe.Subscribe<TEvent>(Func<TEvent, Boolean> filter, 
                                                Action<TEvent> onConsume)
        {
            _subscriptions.Add(GetCurrentObservable<TEvent>()
                                .Where(filter ?? (x => true))
                                .ObserveOn(_scheduler)
                                .Subscribe(onConsume));
            return this;
        }

Publishing

Servers are up, connections can be made. Now we just need to broadcast events.

The publishing method has two new lines of code; those last two broadcast the event if the event broker was instantiated with a publishing URI and thereby registered itself as a self-hosting server. Remember from above that we also wanted to take care of our own serialization?


        public IEventBroker Publish<TEvent>(TEvent @event)
            where TEvent : IEvent
        {
            _subject.OnNext(@event);

            if (_serverConnection != null)
                _serverConnection.Broadcast(JsonConvert.SerializeObject(@event, 
                                                                        Formatting.None, 
                                                                        _includeTypeJsonSetting));

            return this;
        }

Links

Source
Full source at my GitHub repository

Navigation
Part 1: A Fluent API
Part 2: Implementation
Part 3: Event Consumers
Part 4: Solving the Scenario

Event Broker using Rx and SignalR (Part 1: A Fluent API)

Caveat: Please understand that this is a very simple event broker that was implemented during a short period of time, and that it does NOT include anything like message reliability, acknowledgement, queuing or similar mechanisms to prevent data loss. The remote feature is built on top of SignalR and publishes events ‘live’ to anyone listening at the time of publishing, meaning that subscribers can’t pick up where they left off after a connection loss. The event broker was developed for fun and educational purposes only and is provided as is. I would suggest not using it, nor basing your own implementation on this example, for any mission critical application/software. If you need a reliable and performant service/message bus, I suggest you take a look at NServiceBus, MassTransit or similar products. Thank you!

Background

Recently, I started to build what will be a very small application based on the architectural pattern CQRS. When done, the application is supposed to be released for free and will be launched on a very small scale for a few select people! At the start of development I was unsure of the hosting environment and what technologies would be available to me (e.g. what I could install and things like that). So one of my basic requirements became to use no, or as few, frameworks, servers and/or third-party products as possible, unless I knew for sure that they could be embedded in my code or would work in the specific hosting environment. Things have changed, however, and the original hosting provider has been replaced with AppHarbor (or, at the time of writing, the switch is still in progress).

In CQRS we use an event store to persist all events which are the results of applying commands in our domain. At the time you persist them, you also want to dispatch the events to the read side, so they can be transformed and persisted in the read model. This is typically done with a service bus; it doesn’t have to be, but from my point of view it’s definitely the preferred way of doing it. Due to licensing, hosting, another use-case, and the fact that I’m still trying to limit the use of third-party products, I decided I would have some fun developing something temporary and very basic to handle the event dispatching to the read side (or read sides, as the application will have more than one read model), and hence my EventBroker implementation was born. I deliberately chose the name EventBroker instead of ServiceBus or MessageBroker, as I will only use this component to publish and broadcast events. Even so, in my opinion events are the only thing you should ever send over a service bus, but I guess that would be another discussion.

To provide a nice example I created a scenario that will be used throughout the series to base implementation on. The hard part was inventing a scenario described only by events, as we are not creating a full system-wide implementation with client, commands and the works.

Ordering a product – An example scenario

To begin with, I wrote this science fiction scenario taking place in the year 3068. Humans are scattered all over the universe since our beloved planet Earth ceased to exist during the 300 years’ war in the middle of the previous millennium! But after a few pages, I realized I was writing a science fiction novel rather than a blog post about an event broker. So I refactored my scenario into a small online computer shop, which sells computers and components over the internet (Exciting, huh?!). The scenario consists of a web site, which ships all its orders from two stocks, where one stock handles off-the-shelf computers and laptops in various configurations, while the other stock only handles computer components. To protect these two stocks from stockouts, there is also procurement, which resupplies the stocks as their products reach their respective order points.

There are likely hundreds or even thousands of events within this context, but for this example I narrowed them down to these three.

ProductOrderedEvent
Published by the website as soon as a product has been ordered (yes, my shop has a horrible user experience, as customers can only order one product at a time). Both stocks subscribe to this event so they can prepare, pack and ship the ordered product. The website itself also subscribes to this event locally, to be able to send out order confirmation emails.

ProductShippedEvent
This event is published by both stocks as soon as an ordered product has been packed and shipped. In this example we only subscribe to this event locally for sending out shipping confirmation emails.

ProductOrderPointReachedEvent
Published by the stocks as inventory gets low on a certain product, and subscribed to by procurement to order resupplies.
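
The event classes themselves are never shown in this post. As a minimal sketch, assuming IEvent is just a marker interface and that each event carries a couple of illustrative (hypothetical) properties, they could look something like this:


    // Assumed marker interface; the broker's generic constraints only require
    // that published and subscribed types implement IEvent.
    public interface IEvent
    {
    }

    // Illustrative event classes; the properties are hypothetical and only here
    // to make the scenario concrete.
    public class ProductOrderedEvent : IEvent
    {
        public Guid ProductId { get; set; }
        public String ProductCategory { get; set; } // e.g. "Computers" or "Components"
    }

    public class ProductShippedEvent : IEvent
    {
        public Guid OrderId { get; set; }
    }

    public class ProductOrderPointReachedEvent : IEvent
    {
        public Guid ProductId { get; set; }
    }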

The scenario and these three events will be enough to demonstrate the entirety of the event broker, but before implementing our broker we will be defining our fluent API through a few simple interfaces.

Defining our fluent API

Looking at the events above and their short descriptions, it’s not hard to notice that we want to subscribe to both local and remote event streams. And it doesn’t take much brain activity to realize that our local event subscriptions don’t need to be routed through any network stack. Hence we can divide the event broker’s subscriptions into two types: local and remote subscriptions. We also need to be able to filter events, as, for example, the stocks are not interested in all ProductOrderedEvents, but rather the ones they can process and complete. We also want to publish events, but we don’t want to separate that into local and remote publishes. A publish is a publish; it shouldn’t care who is listening, be it a local or a remote subscription.

So let’s start with defining the interfaces that will fulfill the above requirements we derived out of our scenario.

IPublish
The IPublish interface defines our Publish method, but there aren’t really any options or method-chaining paths after you call the publish method. The return type could actually be just void, but we might as well return the IEventBroker interface so we can reset our path and make all choices available again after a call to the publish method. This will also allow us to chain publish methods, for those small use-cases where that would actually be useful.


    public interface IPublish
    {
        IEventBroker Publish<TEvent>(TEvent @event)
            where TEvent : IEvent;
    }

ISubscribe
The ISubscribe interface defines our subscription methods, and we currently have two of them: one that subscribes to a specific event, and one that subscribes to a specific event matching a predicate, our filter. In our example scenario we will use this type of filtering for our stocks when they subscribe to the ProductOrderedEvent, as one stock only wants information about computers and laptops, while the other stock wants events about all other products. Each Subscribe method will return the ISubscribe interface, which will allow us to chain subscriptions. Also, the ISubscribe interface will be implemented explicitly to force the end user to use the Locally() or Remotely(remoteEventStreamUri) methods first, and then add subscriptions; otherwise we wouldn’t know what to register our subscriptions upon.


    public interface ISubscribe : ISubscriptionSource
    {
        ISubscribe Subscribe<TEvent>(Action<TEvent> onConsume)
            where TEvent : IEvent;

        ISubscribe Subscribe<TEvent>(Func<TEvent, Boolean> filter, Action<TEvent> onConsume)
            where TEvent : IEvent;
    }
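
As an illustration of the filtered overload, using the hypothetical ProductCategory property from the event sketch earlier, the computer stock could subscribe along these lines:


    // Hypothetical usage: the computer stock only consumes orders for computers
    // and laptops, ignoring all other product orders.
    eventBroker.Locally()
               .Subscribe<ProductOrderedEvent>(
                   e => e.ProductCategory == "Computers",
                   e => Console.WriteLine("Preparing shipment for product {0}", e.ProductId));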

ISubscriptionSource
Did you notice that I added ISubscriptionSource to the ISubscribe interface? By doing that, we can switch between registering subscriptions on the local stream and on one or more remote event streams without completing the statement.


    public interface ISubscriptionSource
    {
        ISubscribe Locally();
        ISubscribe Remotely(String remoteEventStreamUri);
    }

IEventBroker
Now the IEventBroker interface just needs to inherit from all the interfaces we defined above. Remember, ISubscriptionSource is included through our ISubscribe interface.


    public interface IEventBroker : IDisposable, IPublish, ISubscribe
    {
    }

We have every interface we need to start implementing the broker now, but that will have to wait until the next post, which will be ready and posted together with this one.

Result

By faking the event broker, our fluent API already allows us to write statements like the ones below, even though we haven’t written a single line of implementation code yet. In the next part of the series we will make this work, and after that we will continue adding a few more features to the event broker in upcoming posts.

Example of chaining subscriptions


    eventBroker.Locally()
                    .Subscribe<ProductOrderedEvent>(Console.WriteLine)
                    .Subscribe<ProductOrderPointReachedEvent>(Console.WriteLine)
               .Remotely("http://localhost:53005/")
                    .Subscribe<ProductOrderedEvent>(Console.WriteLine);

Example of chaining our publish method


    eventBroker.Publish(new ProductOrderedEvent())
               .Publish(new ProductShippedEvent());
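
For completeness, here is a minimal sketch, entirely my own assumption and not code from the post, of what such a faked event broker could look like. It simply returns itself from every call, which is enough for the statements above to compile and run as no-ops:


    // A throw-away fake of IEventBroker; every member is a no-op that returns
    // the broker itself so the fluent chains above can be exercised.
    public sealed class FakeEventBroker : IEventBroker
    {
        public ISubscribe Locally() { return this; }

        public ISubscribe Remotely(String remoteEventStreamUri) { return this; }

        public IEventBroker Publish<TEvent>(TEvent @event)
            where TEvent : IEvent
        {
            return this;
        }

        ISubscribe ISubscribe.Subscribe<TEvent>(Action<TEvent> onConsume)
        {
            return this;
        }

        ISubscribe ISubscribe.Subscribe<TEvent>(Func<TEvent, Boolean> filter,
                                                Action<TEvent> onConsume)
        {
            return this;
        }

        public void Dispose() { }
    }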

Links

Source
Full source at my GitHub repository

Navigation
Part 1: A Fluent API
Part 2: Implementation
Part 3: Event Consumers
Part 4: Solving the Scenario