Friday, March 27, 2015

MVC and MVVM Together in Modern Web

I've spent the last few years heavily in the web space and really believe that we're finally in a pretty good spot with design patterns, support from browsers, web standards, and client hardware (smartphones) that can do more and more complex things.  But there is still so much going on, and things are changing quickly!

I want to look at how to use the MVC and MVVM patterns together to provide a very powerful client application.

Let's start by breaking down these patterns.

Model

Models are typically our business objects.  More specifically, these should be business value objects.  I like to make that distinction because these objects should be purely data (no methods).  After all, how would you transmit a method to the client along with your model?
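To make the distinction concrete, here is a minimal sketch of such a data-only model; the Product type and its properties are hypothetical:

```csharp
using System;

// A hypothetical business value object: properties only, no behavior.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public static class Program
{
    public static void Main()
    {
        // Nothing but data; any behavior lives elsewhere (controllers, view models).
        var p = new Product { Id = 1, Name = "Widget", Price = 9.99m };
        Console.WriteLine(p.Name);
    }
}
```

Because there are no methods, this object serializes cleanly to JSON or XML for transmission to the client.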

View

A representation of data.  A view uses a model to construct some representation, typically a visual representation.  The view is in charge of binding the model to the eventual markup in the case of ASP.NET MVC.

Controller

A controller is something that handles requests and returns responses.  I am leaving this slightly vague for the moment, but the main point is that controllers handle requests.

View Model

Typically a composition of models and other view models that is able to manipulate client-side objects and manage user interaction with the system.  For example, if you have a button on a page, there should exist some view model that will handle the click event.  Typically, this event handler will call out to a controller, receive a response asynchronously, and update the view accordingly.

That leads us to the second reason for a view model - to control the state of the page.  Whether an expander is opened or closed should be driven by an observable property of the View Model.

The View Model is delivered to the user in the View returned by the Controller after a request.


Let us consider a simple web example.
  1. User makes a request to my site to see a product catalog
  2. Request for markup is handled by a Controller
  3. Controller finds a View based on the route of the request
  4. Controller finds a Model based on the arguments of the request (route and/or message body)
  5. The Model is given to the View to construct some visual representation of the Model (render HTML)
  6. Within the generated markup, a view model must exist to handle user interaction and manage page state.  This is typically going to be some JavaScript object
  7. Markup is delivered to the client and the View Model is initialized
  8. Once the view model is initialized, it may call back to the server to fetch some data (e.g., product list metadata)
  9. Request for data is handled by a Controller
  10. Controller does not need to find a view since we are requesting data, it simply finds the Model and returns it as data
  11. The View Model receives the data asynchronously and updates the markup by rendering the list of products via template binding
At this point the user experience is improved: the site loads quickly, delivering the layout, minimal data, and a view model that can take over.  This improved responsiveness is very noticeable.  Consider the alternative, where all the data must be fetched before the view can be rendered and delivered: the user gets a blank web page with a spinner.

So, you can see, there are really two types of requests we are dealing with here.  Controllers that respond to requests with markup are typically referred to as MvcControllers.  Controllers that respond to requests with data (JSON, XML, file, ...) are typically referred to as ApiControllers.

In the above example, we're using MVC to get us into some interesting section of the application.  After the View is delivered, MVVM takes over: the View Model handles user interaction and manages page state.  Using MvcControllers to deliver data is not natural or easy if you are using ASP.NET MVC.  Web API (aka ApiControllers) was developed specifically to provide an easy way to return data in a commonly understood format (JSON, XML, iCal, ... think MIME types).

I find this a very powerful approach: the View Models become unit testable, instead of having to be tested via CodedUI or similar.  Once the View Model is tested, it's just a matter of binding a view, which isolates any remaining errors to binding errors.

Overall, I think this is going to become one of the more common patterns to see when developing new, modern web applications.

WCF Best Practices Part 1: Setting up the solution

Windows Communication Foundation (WCF) is one of the most commonly used technologies for connecting to services in the Microsoft stack.  I found it surprisingly hard to find any guidance on how to set up your projects when using WCF.  Please refer to this dnrtv episode for guidance on how to set up your projects.  It's an oldie, but it still applies today.

Some of the biggest takeaways that I finally uncovered from this episode were:
  1. Service References are not your friend - DO NOT USE THEM!
  2. There are 5 major components when dealing with WCF
While service references being harmful is the number one takeaway, I will cover the 5 components first, as that will give some context as to why they are bad.

1.  Contracts

Contracts are the core foundation of any service and the most critical aspect to get right as early as possible.  The contracts describe the agreement a client has with the service code driving it.  Contracts are not secret; in fact, keeping them secret would be counterproductive.  Suppose I want to consume a service but know nothing about it: how would I know what to request, or even how to talk to it (e.g., TCP, HTTP, ...)?

In the contracts, you will have two primary types of objects:
  1. Data Transfer Objects - DTOs
  2. Service Interfaces
Notice that only the interface to the service exists in this project, not the implementation code.

Contracts will exist in their own project so that they can be shared with clients (you can openly distribute this to your customers).  If the customer is able to use your contracts dll directly, that is less work for them.  If the customer cannot use your contracts dll directly, WCF supports discovery, which allows tools to auto-generate the types (e.g., Service References).

I highly, highly recommend decorating all of your DTOs with the [DataContract(Namespace = "something.that.is.not.the.default")] attribute.
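As a sketch, here is what a decorated DTO might look like, along with a quick round-trip through DataContractSerializer to show it survives the wire format; the ProductDto type and the namespace URI are hypothetical:

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;

// Hypothetical DTO: explicit namespace instead of the default tempuri.org one.
[DataContract(Namespace = "http://schemas.example.com/catalog/2015/03")]
public class ProductDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

public static class Program
{
    public static void Main()
    {
        var serializer = new DataContractSerializer(typeof(ProductDto));
        using (var ms = new MemoryStream())
        {
            // Serialize, rewind, and deserialize to verify the round trip.
            serializer.WriteObject(ms, new ProductDto { Id = 42, Name = "Widget" });
            ms.Position = 0;
            var copy = (ProductDto)serializer.ReadObject(ms);
            Console.WriteLine(copy.Id + " " + copy.Name);
        }
    }
}
```

Pinning the namespace this way means the wire format stays stable even if you later rename CLR namespaces or move the DTOs to another assembly.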

This project is typically a Class Library

2.  Client Proxy

This project contains proxy classes implementing the service in question.  In .NET, these classes would look something like:

// in the Contracts project
public interface IDoStuff
{
    ReturnedStuff DoStuff(WithStuff thisStuff);
}

// in the Client Proxy project
public class DoStuffProxy : ClientBase<IDoStuff>, IDoStuff
{
    public ReturnedStuff DoStuff(WithStuff thisStuff)
    {
        return this.Channel.DoStuff(thisStuff);
    }
}

This can easily be auto-generated, and it is not secret, so you can provide it openly to customers.  Tools like Service References will generate these proxy types for you.  As you can see, there is not much involved in generating these by hand, either.  svcutil (which Service References use under the covers) can generate them for you as well.

This project is typically a Class Library

3.  Service Implementation

Of course, you'll eventually have to provide a working implementation for the service contracts in the Contracts project.  This project will also contain additional types that support the service but are not simply DTOs, or that should not be given out to the customer because they contain proprietary functionality, etc.

This is your secret sauce and should not be given out.  This code can be updated at any time as long as it does not break the contract with the client, and we know it will not, as that would require a change in the Contracts project.

This project is typically a WCF Service Library

It is imperative that this be host independent, as you may want to host on-prem, in Azure, in AWS, ... or even on IIS and Apache.  Trust me, you are better off keeping this separate from the service host.

Service Libraries also have a nice feature: the WCF Test Client will open with a reference to your service if you start the service library directly.

4.  Service Host

If you are using IIS and/or IIS Express, the service host is extremely simple.  It contains only the web.config file and its transforms.  That's it.

This project is typically a WCF Service Application

5.  Client

Finally, we have the client.  A client is anything that uses your service.

Why 5 Separate Projects

Primarily, it keeps you honest.  You should not change an API to suit the needs of an implementation, and when the interface the service implements resides in a completely separate project, it really makes you consider whether that is the right thing to change.

It has the added benefit of providing bits you can give to customers which are not secret, so they don't have to generate those types themselves.  Think: barrier to entry.

Finally, it lends itself to easy deployment to multiple environments and even multiple platforms.

Service References

Service References are not your friend because Visual Studio tries to manage things for you, and when things break, you have no idea what you are looking at; worse, it sometimes breaks more when you 'fix' it.  Specifically, it tries to generate the DTO and proxy types and manage the web.config entries related to those services (it is very, very bad at this management).  Under the covers, it uses svcutil.exe, which you can use as well from the command line, and I guarantee you will do a better job!

The only reason to use a Service Reference would be to have it generate the types, which you then cleanse and add to source control as your own code, then remove the service reference.  At that point, you may as well use svcutil.exe.

When you own the contracts dll (maybe it belongs to another team within the company), or the service provider just gives out the contracts dll directly, use it*.

*One nice thing svcutil.exe can do is generate Task-based methods for the service interface, even when the service interface is not Task-based.  In this case, you may want to generate your own types anyway.


This was the biggest missing link for me when I jumped into WCF.  I started after version 4.5 was released, so the configuration files were super easy to figure out.  I just didn't understand how to structure the projects, so hopefully someone out there benefits from this!  :)

Tracing is EVIL: Part 1

So, for the last year or so I've been looking into application logging and event tracing.  I looked into ETW after listening to a .NET Rocks episode a long time ago, but quickly stopped when I realized that starting and stopping the tracing required elevated privileges on the server.  I haven't worked at many places where that would even be up for discussion...  So I eventually came across System.Diagnostics.TraceSource.  This seemed to be a good fit, as I could turn it on and off via configuration file, and I could add custom listeners to defer the 'write' to be handled by some external handler(s).
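For illustration, turning a TraceSource on and off from the configuration file looks roughly like this; the source name matches the example below, but the listener name and log file are hypothetical:

```xml
<!-- app.config / web.config: flip switchValue (e.g. to Off) without recompiling -->
<system.diagnostics>
  <sources>
    <source name="NS.TracingStuffLoaderWrapper" switchValue="Information">
      <listeners>
        <add name="log"
             type="System.Diagnostics.TextWriterTraceListener"
             initializeData="app.trace.log" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>
```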

The application under test is built around interfaces, with base classes and wrappers that wire in the tracing when they are constructed.

Let's bring in an example...

/// interface for loading stuff
public interface ILoadStuff {
    /// load the stuff with the specified id
    Stuff LoadStuff(int id);
}

public class TracingStuffLoaderWrapper : ILoadStuff {
    protected readonly ILoadStuff wrapped;
    protected readonly TraceSource trace;

    public TracingStuffLoaderWrapper() : this(new StuffLoaderProxy() /* WCF/service proxy */, new TraceSource("NS.TracingStuffLoaderWrapper", SourceLevels.Information)) {}

    public TracingStuffLoaderWrapper(ILoadStuff wrapped, TraceSource trace) {
        // arg null checks omitted
        this.wrapped = wrapped;
        this.trace = trace;
    }

    public Stuff LoadStuff(int id) {
       trace.TraceInformation("Request to load stuff with id `{0}`.", id);
       var stuff = wrapped.LoadStuff(id);
       trace.TraceInformation("Stuff fetched successfully.");
       return stuff;
    }
}

or

public abstract class TracingStuffLoaderSql : ILoadStuff {
    protected readonly TraceSource trace;

    protected TracingStuffLoaderSql() : this(new TraceSource("NS.StuffLoaderSql")){}
    protected TracingStuffLoaderSql(TraceSource trace){
        this.trace = trace;
    }

    protected abstract SqlConnection GetOpenConnection();
    protected abstract SqlCommand GenerateLoadCommand(int id, SqlConnection conn);
    protected abstract Stuff DecodeStuffFromDataSet(DataSet ds);

    public Stuff LoadStuff(int id){
        trace.TraceInformation("Request to load stuff with id `{0}`.", id);
        using(SqlConnection conn = GetOpenConnection()){
            trace.TraceInformation("Generating load command.");
            using(SqlCommand cmd = GenerateLoadCommand(id, conn)){
                trace.TraceInformation("Fetching data from database.");

                DataSet ds = new DataSet();
                new SqlDataAdapter(cmd).Fill(ds);
                trace.TraceInformation("Dataset fetched with {0} tables and {1} rows in the first table.", ds.Tables.Count, ds.Tables[0].Rows.Count);

                Stuff decoded = DecodeStuffFromDataSet(ds);
                trace.TraceInformation("Decoding succeeded: {0}", decoded);
                return decoded;
            }
        }
    }
}


Using this pattern, the StuffLoader can work against any SQL database: just implement the abstract methods in a derived class.  The subclass would hopefully do the right thing and use the trace source to write its own messages, but even without that, this gives a very good idea as to where a failure may have occurred.

If the trace prints "Generating load command." but does not print "Fetching data from database.", the failure is inside the subclass's GenerateLoadCommand implementation.  Otherwise, my base class is broken, and I should have a good idea where.

This approach was working fine while there were only a few traces happening here and there.  After introducing tracing into just about every operation in the system, I uncovered a drastic flaw in this approach!

This SO post shares exactly my feelings.  I could not believe that the tracing was synchronous and that the default trace listener performed so horribly.  The app actually started hanging and crashing from unresponsive tracing.  I had insisted that it could not be the tracing, as it *must* be asynchronous....

When profiling the application locally, except for start-up, the app uses very few resources (<5% CPU, <100MB RAM) across 3 WCF services and a website.  Yet with as few as 10 users, the site went from blazing fast to painfully slow.

Lesson learned.... DO NOT USE THE DEFAULT TRACE LISTENER.  However, no matter what I tried, I could not get the default trace to stop outputting.  I had to disable tracing completely and enable it only after an error was observed: try to reproduce to capture it (while horribly impacting the running application for all other users), then disable again.  This really sux.
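For sources you construct yourself, the usual mitigation is removing the Default listener programmatically, though, as noted above, your mileage may vary with config-driven sources.  A minimal sketch with hypothetical names:

```csharp
using System;
using System.Diagnostics;
using System.IO;

public static class Program
{
    public static void Main()
    {
        var trace = new TraceSource("NS.Demo", SourceLevels.Information);
        trace.Listeners.Remove("Default"); // drop the slow DefaultTraceListener

        // Capture trace output in memory so we can inspect it.
        var buffer = new StringWriter();
        trace.Listeners.Add(new TextWriterTraceListener(buffer));

        trace.TraceInformation("Request to load stuff with id `{0}`.", 42);
        trace.Flush();
        Console.WriteLine(buffer.ToString().Contains("id `42`"));
    }
}
```

Remember that all listeners here are still invoked synchronously on the calling thread; removing Default only removes the worst offender.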

Anyway, I've known about ETW for a while but hadn't really tried to use it.  I feel like ETW is the best choice for what I'm striving for.  In fact, the way I mistakenly assumed TraceSource worked is exactly how EventSource actually works, so lucky me :)

So, how do you convert the old TraceSource stuff (a .NET 2.0 feature) and implement the new EventSource stuff (a .NET 4.5/4.5.1 feature), you ask?
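As a taste of what's coming, here is a minimal EventSource sketch (System.Diagnostics.Tracing; the source and event names are hypothetical) mirroring the earlier TraceSource calls:

```csharp
using System;
using System.Diagnostics.Tracing;

// Hypothetical EventSource replacing the "NS.StuffLoader" TraceSource.
[EventSource(Name = "NS-StuffLoader")]
public sealed class StuffLoaderEventSource : EventSource
{
    public static readonly StuffLoaderEventSource Log = new StuffLoaderEventSource();

    [Event(1, Level = EventLevel.Informational)]
    public void LoadRequested(int id) { WriteEvent(1, id); }

    [Event(2, Level = EventLevel.Informational)]
    public void LoadCompleted(int id) { WriteEvent(2, id); }
}

public static class Program
{
    public static void Main()
    {
        // With no ETW session or EventListener attached, these calls are cheap no-ops.
        StuffLoaderEventSource.Log.LoadRequested(42);
        StuffLoaderEventSource.Log.LoadCompleted(42);
        Console.WriteLine(StuffLoaderEventSource.Log.IsEnabled());
    }
}
```

The key difference from TraceSource: the consumer (an ETW session or EventListener) decides whether events are collected, and disabled events cost almost nothing at the call site.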

Stay tuned for Part 2!

Thursday, March 5, 2015

WCF + WebAPI + Identity space

It's been quite a while since I've been able to finish a blog article (I still have quite a few pending), but I really want to get *something* out here just to get back into the habit.

Lately, I've been thinking about Identity and Microservices.  Today, there does not seem to be a great way to share identity from a client to a server, then to another (or many other) services.

I thought federated identity would provide a better mechanism, but it still just provides a single authentication point for the user; it doesn't really handle passing a request between services.

I recently listened to a .NET Rocks episode about IdentityServer and saw an NDC video about it.  There are a bunch of NDC videos around identity, and I'm excited to see the concept of an identity server for an application domain (all the microservices that comprise the actual service).

I'll have to investigate this further and see just how well OAuth and OpenID Connect really work.

Along those lines, I've been hearing more and more about WCF being 'dead' and Web API being the new standard.  Unfortunately, when I first heard about Web API years ago, I thought the same thing, but I didn't really understand why you would pick one over the other.  Having spent the last few years in this space pretty heavily, I've noticed a few things...

Firstly, I always hear about how hard WCF is to configure.  However, that has not been my experience; I have had nothing but an awesome time using WCF.  I think the fact that I came in at version 4.5 has a lot to do with my rosy attitude toward WCF.

File-less activation of services and default binding behaviors make deploying new services extremely easy.
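For example, a file-less (configuration-based) activation entry in web.config looks roughly like this; the relative address and service type are hypothetical:

```xml
<!-- No .svc file on disk: the service is activated from config alone -->
<system.serviceModel>
  <serviceHostingEnvironment>
    <serviceActivations>
      <add relativeAddress="Stuff.svc"
           service="MyCompany.Services.StuffService" />
    </serviceActivations>
  </serviceHostingEnvironment>
</system.serviceModel>
```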

Web API has been a blast as well.  From my JavaScript-based web application, I can call into my services with ease.  A simple GET or POST as the method type, a simple attribute on my API controller, and we're good: my service has an object and my client has HTTP.

I think one of the major roadblocks to getting more posts published is finding a way to show code and discuss it via text.  I will be trying a YouTube channel to discuss my thoughts about various technologies and will link the videos from my blog with any supporting content.

Stay tuned for a short presentation discussing when to use WCF and when to use Web API.