Per Brage's Blog

const String ABOUT = "Somehow related to code";

Monthly Archives: April 2012

Misconceptions about domain events

There have been a lot of posts around the web, even for years now, trying to explain domain events. More often than not, I see people fail at understanding what domain events are. I guess it is often developers who blog about it, and when they hear domain events, they inherently think of system events.

Domain events have nothing to do with system events, event sourcing, or any technical or architectural pattern like CQRS. Domain events are things that actually happen in your customer's business, not in your software. I think it is about time people understood that domain driven design is more about modeling complex behaviors in a business, and not so much about actual development (especially not building frameworks). Development in domain driven design is an iterative approach, where you continuously test the knowledge you gain from your domain experts through discussions and modeling. Repeating this, trying to reach an agreement with your domain experts, is what will bring clarity to your model, and it will also pave the way towards a supple design. This is part of something that Eric calls the Whirlpool, which I guess could be a blog post of its own.

The domain event pattern was not included in the blue bible, but it is definitely a major building block in domain driven design. Here is an excerpt from the domain event pattern, as defined by Eric Evans.

“Model information about activity in the domain as a series of discrete events. Represent each event as a domain object. These are distinct from system events that reflect activity within the software itself, although often a system event is associated with a domain event, either as part of a response to the domain event or as a way of carrying information about the domain event into the system.

A domain event is a full-fledged part of the domain model, a representation of something that happened in the domain. Ignore irrelevant domain activity while making explicit the events that the domain experts want to track or be notified of, or which are associated with state change in the other model objects.” – Eric Evans

Think of domain events as a log, or history table, of things that have happened. A domain event is implemented more or less the same way as a value object, but typically without any behavior. An event is something that has already happened and therefore cannot be changed; hence, just like value objects, domain events are immutable! Do you often change things that happened in your past? If so, could you drop me a mail and enlighten me?
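To make this concrete, here is a minimal sketch of my own (the event and its properties are invented for illustration, not taken from Evans' text) of a domain event implemented like an immutable value object:

    public sealed class CustomerRelocatedEvent
    {
        public CustomerRelocatedEvent(Guid customerId, String newAddress, DateTime occurredOn)
        {
            CustomerId = customerId;
            NewAddress = newAddress;
            OccurredOn = occurredOn;
        }

        // All state is set at construction and exposed through getters only;
        // the event describes something that has already happened, so nothing
        // about it can ever change.
        public Guid CustomerId { get; private set; }
        public String NewAddress { get; private set; }
        public DateTime OccurredOn { get; private set; }
    }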

Let Mighty Moose open failed tests in MonoDevelop

A few days ago I blogged about how to use Mighty Moose in Ubuntu, and provided a guide on how you could continuously run your SpecFlow/NUnit tests while working in MonoDevelop. If you have not read that post you may want to check it out first, as it is a prerequisite to this one. You can find it here. While writing that post, there was this one thing that I left out on purpose, as it was not really necessary to get everything working. But, it is a nice feature to have, so I thought I would make a quick post and show you how to configure it.

Take a look at this screenshot. A test failed! I have marked it and clicked the Test output link. You will notice that Mighty Moose correctly displays the reason why our test failed, but there is not much information in the stack trace.

To get Mighty Moose to display more information in the stack trace of a failed test, we need to configure nunit-console to hand out more information while running our tests. To do this, we can edit the /usr/bin/nunit-console file and add a debug parameter. It should look something like this:

#!/bin/sh

exec /usr/bin/cli --debug /usr/lib/nunit/nunit-console.exe "$@"

If we restart everything, run the tests again, and then click the Test output link, we will see that Mighty Moose now has more information in the stack trace, and most importantly, we can see the file and the line number that failed our test.

Now that our stack trace includes a file and a line number, we can tell Mighty Moose to use that information together with MonoDevelop. Just add the following few lines to your AutoTest.config under ~/.local/share/MightyMoose/ and then restart everything.

<CodeEditor>
    <Executable>monodevelop</Executable>
</CodeEditor>

Now, double-click a failed test in Mighty Moose and it will automatically open the code file in MonoDevelop, and put the cursor on the line that failed the test. Enjoy!

Using Mighty Moose with SpecFlow/NUnit on Mono/.NET4 (Ubuntu)

The past few days, I’ve been working on getting a Mono environment up and running in Ubuntu 11.10. The reason for this is that I have two projects I believe would benefit greatly from being able to run on Mono. I also reckoned this would be a perfect time to refresh my Linux knowledge, learn something new, and have some fun. My requirements were pretty basic, and they were as follows:

  • Use latest stable Ubuntu
  • Use latest Mono and MonoDevelop
  • Use SpecFlow for my scenario tests
  • Use NUnit for my unit tests

I started on these requirements, trying to solve them one by one. But after having tons of issues, especially getting my tests to run with the latest version of MonoDevelop's NUnit integration, I shouted out my frustration on Twitter. That's when Greg replied, suggesting I use Mighty Moose instead.

Yes, why didn't I even think of that? I've used Mighty Moose with Visual Studio before, while working on some small projects, and it's a great product. I also knew there was a cross-platform standalone client available. Silly me! My first thought: why even bother getting MonoDevelop's NUnit integration to work, since it has only been failing me so far? Instead, let's go all-in on the Mighty Moose approach!

I quickly extended my requirements:

  • Use Mighty Moose to run all my tests. Continuously!

Getting these last three requirements running together wasn’t as easy as I first had hoped, and it took me a couple of evenings of resolving issues while trying to configure NUnit, SpecFlow and Mighty Moose! I thought perhaps others might be interested in how to get this working without spending the amount of time I did, hence I decided to write this blog post. I hope you will find it useful, but if not, at least it will be available to me the next time I need to configure a Mono-environment!

Installing Ubuntu

I created a VHD with VirtualBox and installed Ubuntu on it, which allows me to boot Ubuntu natively as well as use it as a VM. It took me some time to get it configured and set up the way I want it, as it has been about 10 years since I last used any kind of Linux distribution on my desktop. I won't go into the depths of how to install Ubuntu, other than mentioning that I downloaded Ubuntu 11.10, installed it, and upgraded it with all the latest packages. There is plenty of information on the internet about these things; go Google it if you get stuck here!

One thing worth mentioning though is that you should definitely install Compiz, which will help you resolve key-binding issues. If you are like me and have a Visual Studio background, keys like F10 to step over while debugging are something you might not want to re-learn. The problem is that certain keys are assigned to functions in the Unity desktop, making them unavailable to applications like MonoDevelop; using Compiz you can reassign them.

Getting the latest versions of Mono and MonoDevelop

The Ubuntu repository provides older versions of Mono and MonoDevelop, but I wanted to get my hands on more recent versions. Badgerports.org is a repository that provides recent builds of Mono and MonoDevelop, as well as other related packages. Currently, they have MonoDevelop 2.8.6.3, which isn’t the cutting-edge-latest, but so far I find it rather stable, and it’s recent enough. To set up badgerports.org, please follow the steps here.

Caveat: Upgrading MonoDevelop may break MonoDevelop’s built-in NUnit test runner. This blog post will not deal with how to fix that issue, as we will use Mighty Moose to run all tests. If you rely on this integration, then don’t upgrade to MonoDevelop 2.8.

After you've set up badgerports.org according to the instructions, open a terminal window and issue the following commands.

sudo apt-get update
sudo apt-get upgrade -t lucid mono-complete
sudo apt-get dist-upgrade
sudo apt-get install -t lucid monodevelop

There are other mono related packages you may want to install, but the above ones are enough to fulfill the requirements.

Installing SpecFlow

Installing SpecFlow at first seemed like a rather easy task, but confusion caused by weird crashing errors from SpecFlow's generator threw me for a loop, a long loop. However, what for some time looked quite hopeless did in fact have a rather easy solution. Here's how to get it installed: launch MonoDevelop and head into the Add-in Manager under the Tools menu. Check the gallery and search for SpecFlow; you should find SpecFlow Support under IDE extensions. Mark it and click Install.

Now we need the latest SpecFlow binaries; download the zip archive from here (I used version 1.8.1). Extract the archive, head into the tools folder, and then execute the following commands to install all the required SpecFlow assemblies into the Mono GAC, and to make the command-line utility available by typing, yes that's right: specflow


sudo gacutil -i Gherkin.dll
sudo gacutil -i IKVM.OpenJDK.Core.dll
sudo gacutil -i IKVM.OpenJDK.Security.dll
sudo gacutil -i IKVM.OpenJDK.Text.dll
sudo gacutil -i IKVM.OpenJDK.Util.dll
sudo gacutil -i IKVM.Runtime.dll
sudo gacutil -i TechTalk.SpecFlow.dll
sudo gacutil -i TechTalk.SpecFlow.Generator.dll
sudo gacutil -i TechTalk.SpecFlow.Parser.dll
sudo gacutil -i TechTalk.SpecFlow.Reporting.dll
sudo gacutil -i TechTalk.SpecFlow.Utils.dll
sudo mv specflow.exe /usr/bin/specflow
cd /usr/bin
sudo chown root:root specflow
sudo chmod 755 specflow

Voila! SpecFlow can now generate code-behinds correctly in MonoDevelop! Also, remember to add a reference to TechTalk.SpecFlow.dll in your assembly containing your specifications.

As SpecFlow uses the NUnit framework, we now need to get NUnit running somehow.

Installing and configuring NUnit for Mono/.NET4

This was probably the most annoying part of it all. MonoDevelop's NUnit integration is compiled against a specific version of NUnit, which it expects to find in the Mono GAC. When I finally got the correct version installed, MonoDevelop started throwing errors about needing an even earlier version of NUnit, sigh! I also got a bunch of weird errors that I didn't know how to solve, like missing methods and classes within the NUnit core, which also seemed like typical version issues. The other problem was that I just couldn't get any NUnit test runner to run tests written for Mono/.NET4.

Since I had decided to use Mighty Moose, the solution was to break free from all the test runners I had been trying to configure so far, and instead focus on the nunit-console runner. If we get that working properly, we can configure Mighty Moose to use it.

NUnit may or may not be installed on your system, but issuing the following command will make sure you have it installed.

    sudo apt-get install nunit

Then head into /usr/lib/nunit/ and edit the nunit-console.exe.config file. Just under the <configuration> tag, add the following lines:

    <startup>
        <requiredRuntime version="v4.0.30319"/>
    </startup>

And then add the following two lines under the <runtime> tag (the first one might already be there).

    <legacyUnhandledExceptionPolicy enabled="1" /> 
    <loadFromRemoteSources enabled="true" /> 

Now you should be able to use the nunit-console test runner on a Mono/.NET4 unit-test project.

Installing and configuring Mighty Moose

You have reached the last part of this guide, and hopefully you haven't had any trouble so far. Mighty Moose is what will tie everything together, and I won't keep you any longer. Download the cross-platform standalone client, extract the files somewhere, head into that location, and then issue the following commands:

    sudo mkdir /usr/bin/continuoustests
    sudo cp -R . /usr/bin/continuoustests/
    cd /usr/bin/continuoustests/
    find . -name "*.exe" | sudo xargs chmod +x
    cd /usr/bin
    sudo touch mightymoose
    sudo chmod +x mightymoose

Now open the mightymoose file we just created in a text editor and paste these lines in it.

#!/bin/sh

exec /usr/bin/mono /usr/bin/continuoustests/ContinuousTests.exe $(pwd)

Before we can move on, we need Mighty Moose to create a config file for us. Just issue the commands below, fill in the information, and configure Mighty Moose to your liking. Also, remember to close Mighty Moose afterwards. (You may receive errors here; just ignore them.)

    cd <your solution directory>
    mightymoose

As already mentioned, we now need to switch test runner in Mighty Moose, and preferably use the nunit-console runner we got working with Mono/.NET4. Locate your AutoTest.config under ~/.local/share/MightyMoose/ and open it in a text editor. Add these tags within the <configuration> tag.

    <UseAutoTestTestRunner>false</UseAutoTestTestRunner>
    <NUnitTestRunner>/usr/bin/nunit-console</NUnitTestRunner>
    <BuildExecutable>/usr/bin/xbuild</BuildExecutable>

What these three lines do is: first, switch off the built-in test runner; then, tell Mighty Moose to use nunit-console for NUnit tests. The last line allows Mighty Moose to be notified when you save files, so instead of having your tests run when you compile your solution, Mighty Moose will now run the affected tests whenever you save a file.

Now we are done! Each time you want to start Mighty Moose, you can just issue the following two commands to start having continuous tests of your project.

    cd <your solution directory>
    mightymoose

Result

If you have followed all the steps above, you should now have a working MonoDevelop environment, with Mighty Moose running all your SpecFlow/NUnit tests each time you save a file. I've been using this for a couple of days now, and it works great so far. What I really miss, though, is my favorite isolation framework, FakeItEasy, which sadly doesn't work on Mono. But Rhino.Mocks works right out of the box and will do for now.

Last but not least

I want to send my best regards to Svein A. Ackenhausen and Greg Young for their awesome 24/7 support while I've been working on this environment. Without your help this would have taken ages, if it had succeeded at all.

Thank you!

Links

VirtualBox
Ubuntu
Mono
MonoDevelop
Continuous Tests (Mighty Moose)
SpecFlow
Rhino.Mocks
Badgerports.org

Boost pair-programming with a remote session!

Do you work in an environment that doesn't promote pair programming? Or worse, where it's not even allowed? Or do you find yourself in a highly productive and positive environment that really wants pairing to thrive? Perhaps you are already experts at pairing and have formalized a process around it? No matter the situation, I guess you are reading this post out of an interest in boosting your pair programming.

Remote! what? NO!?

I guess the title immediately hits most of you seasoned agile practitioners with an alarming red screaming alert: anti-pattern, anti-pattern! Not long ago, I would have agreed with you, but then I actually tried it for myself. First, let's go over the short story that led up to our remote pair-programming session, and then take a look at the positive effects and how it may boost your pairing!

What on earth made us pair remotely?

A colleague of mine and I needed to get a feature done, which in itself wasn't particularly hard, but it consisted of several tasks, some easier, some harder. We both had a rough idea of how to solve the feature, since each of us had previous experience with the topic, and we had spent a lot of time in meetings discussing various angles of attack that would suit our platform.

The time had come to transform a very basic proof of concept that was already in place into something great. We booked a meeting so we wouldn't get disturbed, locked ourselves up in a war room, and joined forces for the duration of a day. We had a very nice session indeed! We used one computer hooked up to a projector, which displayed the code all over the wall, and we had whiteboards to model, discuss and test ideas. We started out by transforming the proof of concept implementation we already had, writing unit tests, deferring coding tasks, etc., and all in all, it felt good. We got far, accomplished a lot, and ended the session with a smile on our faces after a great day spent together, and, speaking for both of us, looking forward to the next day, when we would continue in much the same way.

When we finally got around to starting a new session the day after, my wife called and I needed to leave to get our daughter to the hospital, so the session ended before it had even started. We didn't get much done that day, obviously!

Shame on those who give up!

The next morning, I decided to work from home in case I needed to go back to the hospital, and so did my colleague, for other reasons. But we decided to keep pairing no matter the distance, and we continued where we had left off. We fired up a shared desktop and launched Visual Studio, in which we could both code at the same time. We also used our phones with headsets so we could talk (we call for free within our company; otherwise we would have used something like Skype or a similar product).

We went on pairing over the phone and this shared desktop, switching back and forth between driver and observer! It felt very natural, without any kind of time boxing or clock telling us when to switch. At the end of the day, after about six hours of remote pairing, we both felt that this was a very nice way of pairing, and that it removed several of the impediments we often felt when pairing at the office.

What are the positive effects then?

These are the key areas we felt were boosted by a remote session, compared to pairing either co-located or at our desks:

  • Intimate

    The feeling of a truly shared coding experience; it almost felt like I was writing code through my pairing partner. It doesn't get better than that!

  • Natural switching

    At any point during our coding session, we could switch back and forth between driver and observer. It felt very natural, without having to ask permission, move a laptop or keyboard, or swap chairs. We just started coding, and sometimes it brought laughter upon us, as silly things can happen with two keyboards and one Visual Studio.

  • Focus

    • External distractions

      We were working from home, and I guess it was quite easy for us to avoid getting disturbed! But had we been at the office, communicating through our headsets, I think we could have achieved the same result. No one would interrupt a colleague who is obviously on the phone, much too busy to be disturbed with questions or even a cheerful "Good Morning"! Having no external distractions allowed us to enter an intense focus that I think neither of us could sustain in our open-plan office.

    • Internal distractions

      Communicating over the phone, with our headsets, removed almost all internal distractions, as there was no time, or opportunity, to alt-tab into an email client, chat, Twitter, surfing, texting or whatever people usually do while coding. Out of respect, interest and a shared goal, we were focused on one thing, and one thing only, together, which led to no task switching whatsoever.

  • Breaks

    Having breaks is great, and everyone needs both shorter and longer breaks to stay focused and keep working at a sustainable pace. Again, as we were on our phones with headsets, we could take short breaks to go get a drink or whatever we wanted and still remain focused. If I was coding, my colleague just took over while I was gone grabbing coffee, and vice versa. For our long breaks we hung up and left our computers to get some detachment, which enabled us to come back refreshed and ready for new endeavors.

Conclusion

All in all, I must say I was very pleasantly surprised after this full day of remote pair-programming, and I can only recommend it, whether you are beginners or experienced pair programmers. At least I won't think twice before heading into a remote session again if the opportunity presents itself!

My colleague mentioned in this article, who also reviewed this blog post, can be found on Twitter as @perakerberg. You can also find his software quality blog here.

Event Broker using Rx and SignalR (Part 4: Solving the Scenario)

The time has come to start implementing the scenario; the scenario I invented, and then refactored from a science fiction novel into this simple online shop, which just happens to sell computers and components. As promised earlier, by the time you read this post, the full source will be available in my GitHub repository; just follow the link below.

Configuration

This post will wrap up the series with the fluent configuration for our brokers: Website, ComponentStock, ComputerStock and Procurement. Let's begin with the website, since that's where everything starts.

Website

The website creates the event broker by specifying a publishingUri, which will register the event broker so subscribers can connect to it. We can also see how a local subscription is added here, which we will use for sending out confirmation mails within our ProductOrderedEventConsumer. Then we start ordering products and publish the events using the OrderProduct() method, which just randomly creates ProductOrderedEvents.


            using (var eventBroker = new EventBroker("http://localhost:53000/"))
            {
                eventBroker.Locally().Subscribe(new ProductOrderedEventConsumer());

                Console.WriteLine("Press any key to start ordering products");
                Console.ReadKey();

                for (var i = 0; i < 30; i++)
                {
                    eventBroker.Publish(OrderProduct());
                    Thread.Sleep(200);
                }

                Console.ReadKey();
            }
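OrderProduct() itself isn't shown here; the real implementation is in the full source. A hypothetical sketch, with made-up product names and groups, could look something like this:

        private static readonly Random Random = new Random();
        private static readonly String[] ProductGroups = { "Laptop", "Computer", "Memory", "Harddrive" };

        private static ProductOrderedEvent OrderProduct()
        {
            // Pick a random product group and fabricate a product name from it.
            var group = ProductGroups[Random.Next(ProductGroups.Length)];

            return new ProductOrderedEvent
                       {
                           ProductName = group + " #" + Random.Next(1000),
                           ProductGroup = group
                       };
        }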

Computer and Component stock

Both our computer and component stocks register themselves to allow remote subscribers, while they also remotely subscribe to the ProductOrderedEvent (through an event consumer). The event consumer for the computer stock, shown below the configuration, makes use of a specification to filter the incoming events, whereas the component stock uses lambdas, to show the difference.


            using (var eventBroker = new EventBroker("http://localhost:53001/"))
            {
                eventBroker.ConnectionStatus += (s, ev) => 
                    Console.WriteLine("Component stock: " + (ev.Success ? "Connected!" : ev.ErrorMessage));
                eventBroker
                    .Locally()
                        .Subscribe<ProductShippedEvent>(x => 
                            Console.WriteLine(String.Format("{0} order packed and shipped", x.ProductName)))
                    .Remotely("http://localhost:53000/")
                        .Subscribe(new ProductOrderedEventConsumer(eventBroker));

                Console.ReadKey();
            }


    public class ProductOrderedEventConsumer : EventConsumer<ProductOrderedEvent>
    {
        private readonly IEventBroker _eventBroker;
        private readonly Random _random;

        public ProductOrderedEventConsumer(IEventBroker eventBroker)
        {
            _eventBroker = eventBroker;
            _random = new Random();

            Register(new ItemsInLaptopOrComputerProductGroupSpecification());
        }

        public override void Handle(ProductOrderedEvent @event)
        {
            _eventBroker.Publish(new ProductShippedEvent
                                     {
                                         ProductName = @event.ProductName
                                     });

            if (_random.Next(10) > 5)
                _eventBroker.Publish(new ProductOrderPointReachedEvent()
                                         {
                                             ProductName = @event.ProductName
                                         });
        }
    }

Procurement

The last of our configurations! We set up remote subscriptions to our stocks and start listening for events telling us that the order point was reached, so procurement can order new products to fill up our stocks. Here you can also see an example of dual remote subscriptions added through the fluent API.


            using (var eventBroker = new EventBroker())
            {
                eventBroker.ConnectionStatus += (s, ev) => 
                    Console.WriteLine("Procurement: " + (ev.Success ? "Connected!" 
                                                                          : ev.ErrorMessage));
                eventBroker.Remotely("http://localhost:53001/")
                                .Subscribe(new ProductOrderPointReachedEventConsumer())
                            .Remotely("http://localhost:53002/")
                                .Subscribe(new ProductOrderPointReachedEventConsumer());

                Console.ReadKey();
            }

Result

By running the solution you will see four console windows, which display information as they receive and process events. I added links to images of each console window as an example of how they look after completing 30 product orders. But a better way to see the result is to run the source available in my repository.

Website console window
Component stock console window
Computer stock console window
Procurement console window

Links

Source
Full source at my GitHub repository

Navigation
Part 1: A Fluent API
Part 2: Implementation
Part 3: Event Consumers
Part 4: Solving the Scenario

Is it an Entity or a Value object?

This is a question that seems to surface over and over again when I talk to fellow developers, and while it is somewhat clear to many what the difference between entities and value objects is, it seems it is not clear when to actually use an entity and when to use a value object. For some reason, people also seem to favor entities over value objects when modeling, when it really should be the other way around. If you are working with only entities, then my friend, it sadly seems you are not alone out there!

Why is this? And why are people so reluctant to implement and use value objects?

I figured I would try to clear up the misconceptions a bit, and provide two examples that I often use to illustrate the difference between entities and value objects. Both of them are insights taught by Eric Evans during his DDD Immersion class, which I attended in Stockholm about a year ago. They are slightly modified, as I don't remember the exact phrases used by Eric, and I have also extended and twisted each of the examples.

Context

In domain driven design, context is everything, and it is also a key factor when choosing between modeling an object as an entity or as a value object. A context (or bounded context, to be more specific) has explicitly defined borders and is in itself an application of its own. Within this context we define a model, create our ubiquitous language, and implement behavior, among other things, to fulfill our requirements.

The main thing here is to understand that a context's model is an abstracted view of your own, or your customer's, business, and will not solve all the problems of a domain. An object in one context might be something completely different in another context, even though both contexts are within the same domain and the object has the same name. It all depends on which abstraction your requirements emphasize, and this is also the reason why two customers operating in the same domain may have two completely different models. In one of those models an object might be a value object, whereas in the other model it is an entity.

The – $100 bill! – example

You and your colleague are each holding one of your own $100 bills, and there isn't anything wrong with them or anything like that. If I asked you to swap the two bills, neither of you would really care; you would swap them, be none the wiser, and move along like nothing had happened. After all, neither of you earned nor lost money! A $100 bill is a $100 bill, and you would most likely only compare the number of zeros printed on it. After comparing that value, you wouldn't really care which one you hold in your hand. Since we do not care about instance, and we compare them by state, we are talking about typical value objects.

But doesn't a $100 bill have an identification number? Well, actually it does, and that means that in some context, the exact same bill you are holding in your hand is very important to somebody. In that context, it might actually be an entity. However, even though the $100 bill has an identification number, and we may think that makes it an entity, it is not necessarily so. As always, it depends on the context.

With the same context as above, how many of you would swap your credit cards, if each card had a $100 balance on it?

The – A glass of water! – example

Imagine you and I have just sat down to have a meeting. I pour two glasses of water, and then give you the opportunity to pick one of the glasses. You would not really care which of the glasses you picked, since they are just two glasses on a desk. So far, in this context, both glasses are the same, and their equality would be determined by the type of glass, the amount of water, and perhaps the quality of the water. Since we do not care about instance, and we compare by state, we are once again talking about typical value objects.

Now, let's redo the same scenario, but as I pour the two glasses of water, I take a sip from one of them. Most people (I don't know about you, though) would by default pick the glass I have not taken a sip from, as the other glass has immediately become my glass of water. At this point, the type of glass, the amount of water, and the quality are no longer of any concern, because in this new context I have, by the sip I took, polluted one glass as mine, thus imposing an identity on it. So as this context has evolved, both glasses of water are now entities, even though I only sipped from one of them.

You could argue that the glass of water is still a value object, but now attached to a person object. But I didn't swallow a glass; I drank some of the contents of a glass. That content might in itself be a value object, which was temporarily attached to the glass, but is now attached to the person object. So now we have separated the glass of water into two objects, and we have to reevaluate the whole scenario as our context has evolved.

For fun, let's back up and twist this sip-to-impose-identity example further, and I will generalize a bit here (sorry, all smokers). Smokers often ask for the butt of a cigarette if they are temporarily out of smokes. They don't mind putting a cigarette to their mouth that someone else has already smoked, but at the same time they wouldn't share a glass of water with the same person. So by taking a sip from a cigarette, we may still be using value objects.

What we changed here is the object we interact with, and by changing from a glass of water to a cigarette we also changed an attribute of the context. It might be that we are in two completely different contexts, but if we are in the same context, a cigarette and a glass of water would most likely not inherit from a shared abstraction like Product, as they would be modeled and implemented quite differently. Watch out for those generalized abstractions, as they will most likely impede you in reaching a supple design.

Consider the context above and these method signatures, which would you prefer? These,


    void Drink(IDrink drink);
    void Smoke(ICigarette cigarette);

or


    void Sip(IProduct product);

Again, it comes down to our context and how we want to implement our behavior. And while we are on the subject of behavior, we actually changed a few behaviors in the model without really mentioning it. The most important change to the scenario above is that we can no longer assume that taking a sip from something will automatically be okay with a person object.

Those examples didn’t help me at all

Here is some information, and a few tips, about value objects that you might want to use as guidance.

  • Context

    Always understand the context you are in; listen to the language and the concepts that your domain experts talk about. Without a clear perception of your context, you are much more likely to fail before you even get started.

  • Identity through state

    A value object's identity is based on its state. Two value objects with the same property values are considered the same object.

  • Immutability

    Value objects are immutable, which means you cannot change a value object's state without replacing the whole object. Immutable objects are easier to handle and understand, and since they are inherently thread-safe, they are a better choice for today's multi-threaded applications. (See the sketch after this list.)

  • Interchangeable

    Value objects are throwaway objects. During a transaction you may fetch one, and then, by using side-effect-free functions, a pattern especially useful when dealing with value objects, create n objects before reaching your final instance, which is the one that will be persisted.

  • Explicit identity

    Even though the object you are trying to model has an explicit identity, do not get fooled into making it an entity just because it has this identity. Does the identity mean anything in your context?

  • Temporary name

    At times when you are unsure of what an object is and what it should do, it can help to give your object some random name that doesn't mean anything. This will allow you, and the people around you, not to get stuck on a particular concept, and hopefully you will be able to continue refining your model. It will also help you stick with a value object implementation as long as possible.

  • Refactoring

    When you need to refactor your model, it will always be easier to make an entity out of a value object, than the other way around.
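To tie a few of these points together, here is a minimal sketch of a value object (my own example, not from the DDD Immersion class): it is immutable, its identity is based entirely on state, and "changing" it means creating a new instance.

    public sealed class Money
    {
        public Money(Decimal amount, String currency)
        {
            Amount = amount;
            Currency = currency;
        }

        public Decimal Amount { get; private set; }
        public String Currency { get; private set; }

        // Interchangeable and side-effect free: instead of mutating this
        // instance, we return a new value object.
        public Money Add(Money other)
        {
            if (Currency != other.Currency)
                throw new InvalidOperationException("Cannot add different currencies");

            return new Money(Amount + other.Amount, Currency);
        }

        // Identity through state: two instances holding the same amount and
        // currency are the same value, just like two $100 bills.
        public override Boolean Equals(Object obj)
        {
            var other = obj as Money;
            return other != null
                && Amount == other.Amount
                && Currency == other.Currency;
        }

        public override Int32 GetHashCode()
        {
            return Amount.GetHashCode() ^ Currency.GetHashCode();
        }
    }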

Still in doubt?

When in doubt about what an object is and how it should be modeled, always favor value objects. I will let Bart repeat it a few times on his chalkboard, and perhaps you will remember this chant and think twice the next time you are about to add another entity to your model.

I will always favor value objects

Event Broker using Rx and SignalR (Part 3: Event Consumers)

If you go back and look at the result from the first post of the series, you will notice that during the registration of subscriptions, we add filter predicates and assign actions to be executed as events are consumed. This isn't really such a good idea, since we end up with all our logic in our configuration, which clutters the code, among other bad things! So what can we do about it?

Event Consumers

The solution to our problem is to outsource both filtering and processing into event consumers. For those of you familiar with CQRS, just think of a command handler that implements IHandle<T>, but for an event. With event consumers we end up with one class handling each particular event. We gain simplicity and separation, and, with good naming, more declarative code.
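The IHandle<T> interface itself isn't spelled out in this post; judging from how it is used below, it is presumably nothing more than this:

    public interface IHandle<in TMessage>
    {
        void Handle(TMessage message);
    }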

Let's start with an interface describing an event consumer, together with an abstract class implementation that provides a register method to help us register a Func<TEvent, Boolean> (i.e. a filter). The func is added to our multicast delegate property, which allows us to add several filters to an event consumer.

    public interface IEventConsumer<in TEvent> : IHandle<TEvent>
    {
        Func<TEvent, Boolean> Filters { get; }
    }
    public abstract class EventConsumer<TEvent> : IEventConsumer<TEvent>
        where TEvent : IEvent
    {
        public Func<TEvent, Boolean> Filters { get; private set; }
 
        protected void Register(Func<TEvent, Boolean> filter)
        {
            if (Filters == null)
                Filters = filter;
            else
                Filters += filter;
        }
 
        public abstract void Handle(TEvent message);
    }
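One thing to note about the multicast delegate: invoking Filters directly would only return the result of the last filter added. I won't show the broker internals here, but a sketch of how all registered filters could be evaluated (my assumption, not code from the post; it requires System.Linq) might look like this:

    public static class FilterEvaluator
    {
        public static Boolean PassesAllFilters<TEvent>(IEventConsumer<TEvent> consumer, TEvent @event)
        {
            // No filters registered means the consumer accepts every event.
            if (consumer.Filters == null)
                return true;

            // Walk the invocation list so every filter gets a say, instead of
            // only the last delegate in the multicast chain.
            return consumer.Filters
                .GetInvocationList()
                .Cast<Func<TEvent, Boolean>>()
                .All(filter => filter(@event));
        }
    }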

Let's implement the ProductOrderedEventConsumer that will be used by the component stock in our scenario. When the component stock receives a ProductOrderedEvent, we have to make sure it won't act upon events for laptops and computers, as the computer stock handles those two types of products. To accomplish this, we just register a filter in the constructor that excludes all events for products in the laptop or computer product groups. The Handle method will then only process events matching our registered filter.

Speaking of the Handle method, it won't do much more than publish a ProductShippedEvent and, at random, a ProductOrderPointReachedEvent, which simulates that our inventory is getting low on a particular product.
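The two event classes aren't shown in this post either, but judging from how they are constructed below, they are presumably just simple data carriers implementing IEvent, something like:

    // Hypothetical sketches; the real definitions are in the full source.
    public class ProductShippedEvent : IEvent
    {
        public String ProductName { get; set; }
    }

    public class ProductOrderPointReachedEvent : IEvent
    {
        public String ProductName { get; set; }
    }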

    public class ProductOrderedEventConsumer : EventConsumer<ProductOrderedEvent>
    {
        private readonly IEventBroker _eventBroker;
        private readonly Random _random;

        public ProductOrderedEventConsumer(IEventBroker eventBroker)
        {
            _eventBroker = eventBroker;
            _random = new Random();

            Register(x => x.ProductGroup != "Laptop" && x.ProductGroup != "Computer");
        }

        public override void Handle(ProductOrderedEvent @event)
        {
            _eventBroker.Publish(new ProductShippedEvent
            {
                ProductName = @event.ProductName
            });
            
            if (_random.Next(10) > 5)
                _eventBroker.Publish(new ProductOrderPointReachedEvent()
                {
                    ProductName = @event.ProductName
                });
        }
    }

Using Specification pattern to apply filtering

Nitpicker corner: Yes, specifications can be a bit cumbersome, and yes, they add a lot of code for things that could at times be written with a few simple lambdas. But remember the ubiquitous language? Specifications allow us to communicate about rules and predicates with everyone involved in developing our software! Being able to talk about the 'Items in laptop or computer product group' specification, instead of the 'x rocket x dot product group equals laptops or x dot product group equals computers' predicate, provides a lot of value not to be neglected! At the same time, we need to be pragmatic about it and not implement specifications for every little equality check we create, as that would definitely overwhelm our code base. For this particular use-case we might be overdoing it, but I wanted to show a simple example of using specifications.

The specification pattern in its essence matches an element against a predicate, and responds with a boolean indicating whether the element satisfies the predicate. This interface describes it rather well.

    public interface ISpecification<TElement>
    {
        Boolean IsSatisfiedBy(TElement element);
    }

There is also a simple abstract class (available in the full source) that I use to avoid repeating myself. There are far more advanced implementations of the specification pattern available online, and I would suggest using one of those if you want to start using specifications in your code. Below is the implementation of the ItemsInLaptopOrComputerProductGroup specification I mentioned earlier, which we will use in our scenario to filter incoming events in the computer stock.
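But first, since the abstract class itself isn't shown in this post, here is a minimal sketch of what it could look like, inferred from the AssignPredicate and IsSatisfiedBy usage (the real implementation may differ):

    public abstract class Specification<TElement> : ISpecification<TElement>
    {
        private Func<TElement, Boolean> _predicate;

        // Derived specifications assign their predicate once, in the constructor.
        protected void AssignPredicate(Func<TElement, Boolean> predicate)
        {
            _predicate = predicate;
        }

        public Boolean IsSatisfiedBy(TElement element)
        {
            return _predicate != null && _predicate(element);
        }
    }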

    public class ItemsInLaptopOrComputerProductGroupSpecification : Specification<ProductOrderedEvent>
    {
        public ItemsInLaptopOrComputerProductGroupSpecification()
        {
            AssignPredicate(x => x.ProductGroup == "Laptop" || x.ProductGroup == "Computer");
        }
    }

Now we have a specification that declares its intent with a name instead of a lambda. All we need now is a way of registering this specification with our event consumer, and that can be done by adding this method:

        protected void Register(ISpecification<TEvent> specification)
        {
            Register(specification.IsSatisfiedBy);
        }

Registering a specification now becomes a single line of code within our event consumers.

        Register(new ItemsInLaptopOrComputerProductGroupSpecification());

Links

Source
Full source at my GitHub repository

Navigation
Part 1: A Fluent API
Part 2: Implementation
Part 3: Event Consumers
Part 4: Solving the Scenario