Monday, January 6, 2014

Introduction to NCQRS

At Cognitive X Solutions, we use a piece of technology called NCQRS, a .NET implementation of the CQRS (Command Query Responsibility Segregation) pattern. At its heart, CQRS is the simple notion that you can use a different model to update information than the model you use to read it. This simple notion leads to some profound consequences for the design of information systems. Normally, when we build information systems, we think in terms of CRUD (Create, Read, Update, Delete).  With CRUD, the expectation is that when you perform a database-altering operation (Create, Update, Delete), the next Read will reflect that change.  Not necessarily so with a CQRS system: the read and write models are actually separate, and the read model may take a bit of time to catch up to the write model, because with CQRS the machine that handled the update is not necessarily the machine responsible for the read model.

So why would anyone want to use a CQRS data model?  Because it allows easy scalability. Before I can explain how, you first need to understand how CQRS works.  First off, CQRS is a pattern that enables Domain-Driven Design, and it is object-oriented by nature.  All modifying operations occur against an object, but not directly: you send a command to the CQRS system, and it in turn performs the modification (create, update, delete).  Example:  You have an object that represents a Client, and to change their shipping address, you issue a ChangeShippingAddress command to the CQRS system.  The Client object then generates events that get answered (e.g. ShippingAddressChanged), and it is the answering of these events that performs the actual update.
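To make that flow concrete, here is a stripped-down sketch in plain C#. The Client / ChangeShippingAddress / ShippingAddressChanged names come from the example above; the plumbing (the Apply method, the uncommitted-events list) is purely illustrative and is not the actual NCQRS API:

```csharp
using System;
using System.Collections.Generic;

// Event: a record of something that happened to the domain object.
public class ShippingAddressChanged
{
    public Guid ClientId;
    public string NewAddress;
}

public class Client
{
    public Guid Id { get; private set; }
    public string ShippingAddress { get; private set; }

    // Events raised but not yet published to the read side.
    public readonly List<object> UncommittedEvents = new List<object>();

    public Client(Guid id) { Id = id; }

    // Handling the command: validate, then raise an event.
    // Note that this method does NOT mutate state directly.
    public void ChangeShippingAddress(string newAddress)
    {
        if (String.IsNullOrEmpty(newAddress))
            throw new ArgumentException("Address required");
        Apply(new ShippingAddressChanged { ClientId = Id, NewAddress = newAddress });
    }

    // Answering the event performs the actual state change; the same
    // handler can replay historical events to rebuild the object.
    private void Apply(ShippingAddressChanged e)
    {
        ShippingAddress = e.NewAddress;
        UncommittedEvents.Add(e); // later published to read-model updaters
    }
}
```

The key point is that the command method only raises an event; the event handler is what changes state, which is also what makes replaying an event stream to rebuild an object possible.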

So how does this command -> object -> event -> update process make for scalability?  Simple: the computer that issued the command isn't necessarily the computer that generates the events, nor does it have to be the computer that handles the events.  In fact, you can go as far as having each domain object (e.g. Client, Inventory, Bill) handled by a separate computer, and each event generated by the domain objects handled by different computers.  This allows you to create a massively scalable system that can start as a single computer and very easily span multiple systems as demand dictates (especially if you introduce an Enterprise Service Bus (ESB) or some other type of messaging middleware).
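The reason this distribution is cheap is that commands and events travel as messages, so senders never know where handlers live. An illustrative sketch (again, not NCQRS itself): the interface below could be backed by the in-memory bus shown here today, and by a queue or ESB tomorrow, without the publishing code changing:

```csharp
using System;
using System.Collections.Generic;

// Minimal message-bus abstraction; hypothetical names for illustration.
public interface IMessageBus
{
    void Publish(object message);
    void Subscribe<T>(Action<T> handler);
}

// In-memory implementation: dispatches by the runtime type of the message.
// A production system would swap this for a broker-backed implementation.
public class InMemoryBus : IMessageBus
{
    private readonly Dictionary<Type, List<Action<object>>> handlers =
        new Dictionary<Type, List<Action<object>>>();

    public void Subscribe<T>(Action<T> handler)
    {
        if (!handlers.ContainsKey(typeof(T)))
            handlers[typeof(T)] = new List<Action<object>>();
        handlers[typeof(T)].Add(m => handler((T)m));
    }

    public void Publish(object message)
    {
        List<Action<object>> list;
        if (handlers.TryGetValue(message.GetType(), out list))
            foreach (var h in list) h(message);
    }
}
```

Because every hop is a message, moving a handler to another machine is a deployment decision, not a code change.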

So over the next series of posts, I will lead you through developing a complex CQRS system with NCQRS.

Wednesday, November 27, 2013

Back from the Dead

Well not exactly, I didn't die (but the time since my last post might lead some to believe I did...).

It's been almost 5 years since I posted, and a lot has changed since then.  I'm working at a new company (Cognitive X), got a new job title (Senior Software Architect), got married (Yeah!  Amazing woman, beautiful too), and have 2 kids (one's 12, the other 16 months).  But since this is a "work-related" blog, I'm going to focus on what I'm doing at work.

The key things I've been involved in recently, and the subjects of my posts over the next while, are:


  • Domain-Driven Design
  • CQRS (specifically NCQRS) and Event Sourcing
  • Aspect-Oriented Programming (AOP), especially PostSharp for .NET (as our company is an official partner and I will be doing a lot of webinars for them each month)
  • Message / Event Driven Development
  • Enterprise Service Bus / Service-Oriented Architecture (mainly with MassTransit)
  • Visual Studio 2013 (with a name like ASP.NET Addict, you can guess that I'm going to be focusing on web technologies, and with Visual Studio 2013, there are a lot of web goodies in there!!)
So I'm working on my first posts in a long while (this one doesn't count :-) ).  I'm not sure which one I will start with; probably NCQRS and Dynamic Snapshotting, as I have taken the existing code surrounding that and amped it up seriously (increased the speed of NCQRS in general by 50% and dynamic snapshotting by over 500%.  I've actually gotten it to a point where trying to optimize it further actually slows it down, pretty cool huh?).

Wednesday, March 19, 2008

Time to Get Organized

Well, it's been a while since I posted to this blog... I've been quite busy lately working on a new project called The Organizing Connection. It is built using ASP.NET (but that should be obvious :-), but what is new is that the video will be served up via a custom Silverlight-based video player with integrated AJAX video lookup. It is also an example of securing video files so that only logged-in users can actually view the content, without having to resort to DRM (yes, you can intercept the video and save it to disk if you are clever enough and have paid the $200/year to belong to the site...). So please, don't go trying to prove me wrong.

We are having quite a bit of fun. I will let you know when it launches.

UPDATE:  Site never launched.  The customer walked away from the project still owing us $30,000 (which nearly killed the company as we were a young company).

Tuesday, August 7, 2007

Pay Attention Class!!

Well, here's something scary: I'm going to be helping "mold" the next generation of Media Developers at McKenzie College. It is mainly a Flash-focused course (Egad, you're not using Silverlight? I would love to, but the tools are not quite there yet, it's still alpha, and there isn't a market here in New Brunswick for it yet. But as soon as it's ready, I'm going to start using it for Trimedia projects). We are going to teach them the basics of color, lighting, and art (we actually start them with pencil drawing and oil painting; we want "real" media developers first, digital second). Then we are going to teach them the basics of HTML and databases (probably using MySQL, since they are on iBooks), and then teach them C. Yes, you heard that right, C, not C#. We want to start with a language that doesn't require an understanding of OOAD and forces them to deal with types, control flow, the basics. Then we will be moving to Flash and ActionScript 3.0 with a focus on Game Design in Flash. It should be interesting.

So for my fellow ASP.NET Addicts, whose ears are still ringing from hearing Apple and Flash rather than their Microsoft counterparts, it wasn't for lack of trying. I suggested that we focus more on C#, ASP.NET, and SQL Server 2005, but McKenzie is an Adobe licensed training center with a deal with Apple to provide each student with a laptop, so there wasn't much I could do :-) The bottom line is I finally get to have input into the training of students (instead of getting them after they come out of 13 months of programming school and don't even know what a HashMap, Set, or List is. Come on, schools, at least teach your students about some kind of Collection API, because they will spend a great deal of their web development career dealing with them).

Wednesday, July 18, 2007

Rules, Rules, Rules !!

When we were all teenagers, we all hated rules. Do this, don't do that. I was a fairly well-behaved, albeit strange, teenager, so rules weren't a problem for me (well, except the one that I had to do all my chores before I could use the computer). But the kind of rules I'm talking about here are something a bit different.


Enter Microsoft's .NET Framework 3.0 and the new technologies of Windows Workflow Foundation (Workflow, or WF for short), Windows Communication Foundation (WCF), and Windows Presentation Foundation (WPF). For this post, I'm going to focus on Workflow, and more precisely the Rules engine within Workflow. But before I go there, let's talk about what Workflow is.


Workflow is just that: the flow of work. It allows you to model a business process within your program and execute it. The age-old example is requesting vacation time. In a large company, if you want vacation time, you have to request it, and you can only have it if someone else from your department hasn't already requested that time off, if your manager approves it, etc. With Workflow, you can automate most of this process. Let's see how: You go to a website and select the start and end dates for your vacation, and the workflow begins its job. It looks at the date range you have selected, makes sure that you still have that many days available for vacation, checks the calendar for your department to make sure the days are available, and notifies you via email if they are not (and terminates the workflow). Once it verifies the availability, it sends an email to your manager with all the necessary info. Within that email are two buttons: Approve and Decline. At this point the workflow goes to "sleep" and is persisted to the database (to free up memory). Once your manager opens the email and clicks a button, the workflow is reloaded and picks up where it left off. If he clicked the Decline button, you receive a "Sorry" email. If he clicked the Approve button, the calendar is updated with your vacation days and you receive an "Enjoy your vacation" email. Through this whole process, the only human interaction was you making the request and your manager clicking a single button. All the complex work is performed by the workflow. This is wonderful, especially considering the development of the workflow is all done visually:



As you can see from this picture, you can define and visualize your workflow very easily, and thanks to Visual Studio integration (download it here for Visual Studio 2005 or get the latest Beta of Visual Studio 2008 from here), you can easily and visually debug your workflow. This comes in really handy when dealing with logic snafus (when you start mixing !, &&, and parentheses, things can get crazy).


A lesser-known part of the Windows Workflow system is the Rules engine hidden beneath parts of certain activities (each part of a workflow is called an activity; they are the colored boxes in the picture). Any time you can set a condition and have the choice of using a "declarative condition", you are using Rules.



What rules come down to is simply a CodeDOM XML document that gets parsed and converted into code that runs against the workflow, allowing you to define complex conditions that take into account the whole workflow, not just the immediate state. The rules can be stored separately from the workflow itself (in an XML file, a database, a web service, basically wherever you want to put them), allowing you to change the logic without having to recompile the workflow. (For more on CodeDOM, go here.)


Editing the rules within Visual Studio is quite easy thanks to the built-in Rules Editor, which comes complete with IntelliSense (though a dumbed-down version compared to the main code editor).

That's all fine and great if you are using workflows. But what if you want to use the Rules engine separately from workflows? Well, that's actually quite easy. It really comes down to only 4-5 classes that you have to deal with, and instead of using the Workflow as the target object, you can use whatever object you want as the root.

Let me demonstrate. Assume we have a CodeDOM document defining a ruleset (I'll show you later how to generate said document), the following code will convert it back into a RuleSet:




// Requires references to the .NET 3.0 workflow assemblies:
// using System.IO;
// using System.Xml;
// using System.Workflow.Activities.Rules;
// using System.Workflow.ComponentModel.Serialization;

private RuleSet DeserializeRuleSet(string ruleSetXmlDefinition)
{
    if (String.IsNullOrEmpty(ruleSetXmlDefinition))
    {
        return null;
    }

    // WorkflowMarkupSerializer understands the CodeDOM-based rules markup.
    WorkflowMarkupSerializer serializer = new WorkflowMarkupSerializer();
    using (StringReader stringReader = new StringReader(ruleSetXmlDefinition))
    using (XmlTextReader reader = new XmlTextReader(stringReader))
    {
        return serializer.Deserialize(reader) as RuleSet;
    }
}


Once you deserialize the document (and because that method simply takes a string, the source can be anywhere), you have to run the RuleSet. That is done with the following block of code:





  // root is the object the rules will read and modify; validation checks
  // the RuleSet against that object's type before execution.
  RuleValidation validation = new RuleValidation(root.GetType(), null);
  RuleExecution execution = new RuleExecution(validation, root);
  ruleSet.Execute(execution);

root is the object against which to run the RuleSet. Within the RuleSet, you basically specify a boolean condition (the "If" part of an If/Else statement), what to do if it is true, and what to do if it is false. You can specify multiple sets of If/Else rules within a single CodeDOM document; all of them will be run against your root object.
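If it helps to see the shape of this without the WF assemblies, a RuleSet boils down to something like the following plain-C# sketch. This is illustrative only; the real engine builds the conditions and actions from the CodeDOM XML rather than from lambdas:

```csharp
using System;
using System.Collections.Generic;

// One rule: a boolean condition plus a "then" and an optional "else" action.
public class Rule<T>
{
    public Func<T, bool> Condition;
    public Action<T> ThenAction;
    public Action<T> ElseAction;
}

// A rule set: every rule runs against the same root object.
public class SimpleRuleSet<T>
{
    private readonly List<Rule<T>> rules = new List<Rule<T>>();

    public void Add(Rule<T> rule) { rules.Add(rule); }

    public void Execute(T root)
    {
        foreach (var rule in rules)
        {
            if (rule.Condition(root)) rule.ThenAction?.Invoke(root);
            else rule.ElseAction?.Invoke(root);
        }
    }
}
```

The real engine adds quite a bit on top of this (validation against the root type, rule priorities, chaining when one rule's action changes a field another rule reads), but the If/Then/Else-against-a-root core is the same.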



Now, I know most of you who are programmers are saying "Great, show me the code." Click Here to go to the page to download the code. This is the ExternalRuleSet Demo project that first taught me how to use RuleSets. What!? You actually thought I figured this out on my own? Unfortunately, no. My thanks goes to Matt Winkler who wrote the example. Matt is the "technical evangelist" for Windows Workflow.



Ok, back to the code. Basically, there are 4 projects in the solution that you downloaded. The first, the ExternalRuleSet library, contains two classes: RuleSetInfo (used to hold the name and version of the RuleSet you want to load from the database, as you can version the rules) and RuleSetData (which holds the data loaded from the database and contains the code to deserialize the XML data back into a RuleSet). The second library contains the RuleSetService. This class is responsible for loading the RuleSetData from the database and is used within the Workflow Engine to provide a RuleSet service to the Engine (if you want to use it that way). The third project is the PolicyActivities project, which provides you with an activity that you can place on your workflow to run the RuleSet against the workflow. But I basically stole the code for running workflows out of here and put it in my own project. The final project is the most important, as it is actually the rules editor. That project is called .... RuleSetTool. It is the RuleSetTool that generates and stores the CodeDOM XML document within the database.



So go ahead, take a look at the code and see all the cool things you can do with Rules!!

Friday, July 13, 2007

"Sell Out"

Yes, I admit it. I am a sell out, a turncoat, and a fair-weather friend, at least as far as development platforms and methodologies go.

You see, it all began 5 years ago.... (Cue the time warp ripple effect)... when I had just bought my first computer. My previous 2 computers were bought by my Dad (thanks Dad) and were an NEC V20 (no, that's not a Vic-20, but a 12 MHz XT clone, almost a 286) and a 386SX Magnavox (it ran Windows 3.1; I was so excited, it had 8MB of RAM, ah those were the days). Anyways, enough rambling, back to the point. I had just discovered Linux, and now that I had a new computer, it was time to install it! I was taking Computer Science at UNB at the time and could do most of my homework on Linux (C, C++, Modula-2, COBOL; everything except Visual Basic, for which I had to dual-boot Windows 98). I was enjoying the fact that I could install so many programs for free. It was a dream come true for a poor struggling university student.

As time went on and I moved into the work force, I discovered Java, then Java running on Linux, and then a web development framework called Tapestry. It was awesome. It was a Model View Controller framework with a property pull model. Most frameworks at the time were push-model: if you wanted to change the text on a button, you had to issue a command to change it. With Tapestry, you changed the string that the button monitored, and the button changed its text. I know it sounds petty, but it really was a better model. In fact, Visual Studio 2008 and to some extent .NET Framework 3.0 will/do support this model, 5 years after Tapestry had it, and I realize that WebObjects on the Mac probably had it too, as it inspired Tapestry (just in case an Apple Web geek reads this).

So I loved my Tapestry, and built many a website with it. Discovered Eclipse, loved it. Got into Object Relational Mapping, first with Torque and then with Hibernate. (For those who don't know, ORM basically means that instead of having to deal with data from a database as fields, I could model the whole record as an object and deal with it in a very Object-Oriented (OO) way.) Life was good. Then my job went down the tubes and I was looking for new work.

That is when I met ASP.NET (I was freelancing for a great company, I just can't tell you their name). It was at ***** that I discovered ASP.NET, and suddenly all of the stuff I used to have to do by hand (like crafting HTML, CSS, XML, SQL statements, etc.) or had to build from scratch (don't even go there) was included. It was a hard learning curve, but now that I know it, I can never go back. Too much is given to you "out of the box". I guess there is a reason why Microsoft has gotten to the top (other than the stealing, the lies, the cheating, and the buying off of key officials. Well, ok, some of that I made up. Well, the last one is made up as far as I know....)

Let me give you an example: Portals. I couldn't find a good portal technology that didn't mean rewriting our whole website system (and we had built a good one) from scratch, or trying to adapt the Sun Microsystems Portlet Specification to the platform. When I opened the Visual Studio 2005 box, it was included!! All I had to do was build or buy portlets (WebParts, as Microsoft calls them). Then there was Authorization / Authentication, Data Presentation (Grids, DataLists, Forms, etc.), Validation (though Tapestry had that), Navigation and Menus, and Reporting. Then there was C#, which finally introduced Properties (though to be fair, Delphi / Pascal had them long before C#, and I loved Delphi before I got into Web Development) and operator overloading. And now there is LINQ, Entity Data Models, and AJAX Extensions. It's too late, I am hooked.

The only regret I have is that Tapestry had an awesome IoC container, something Microsoft has yet to catch onto (though they are trying with their Web Client Factory, but it's just not the same...). I'm going to look into Spring.NET and see if they are doing any better.

So I admit it, I'm a "Sell Out", but I would do it all over again.

Welcome to my corner of the World

Welcome to the inside of my head!! Well, not really, but this is where I share my insights, thoughts, ideas, and tutorials on everything related to ASP.NET, Web 2.0, the Web in general, and things completely unrelated to the Web (though I'll keep that to a minimum).

The reason I started this blog was in response to some suggestions made at a GAS (Gaming, Animation, and Simulation) meeting, which is a loosely knit group of companies, experts, and other such people working in the, well, Gaming, Animation, and Simulation industry!! We felt it was important to get our name out there and show the world that working in New Brunswick doesn't mean fishing or cutting down trees; there are some really cool companies here in the Maritimes, and people should really look into coming to work in New Brunswick. If you are young and straight out of school, kid, then go to Toronto. But when you get tired of city life and want to settle down somewhere you can actually know all the neighbours on your street, where you don't have to worry about being mugged at night in front of your house (assuming you make enough to even own a home), and where your kids can actually walk to school and you can bike to work (instead of driving 45 minutes), then remember I told you that working in New Brunswick is a good idea :-)

So without further fuss....

Welcome to my Blog