Sunday, December 20, 2015

Building Modern Web Applications - Screen Cast Series

I just uploaded my first screen cast series talking about developing modern web applications.

Part 1 creates a VM in Azure with Visual Studio 2015 installed.
Part 2 creates an open source project on GitHub.
Part 3 builds the first page using just vanilla JavaScript and jQuery.
Part 4 refactors out the first control using Knockout.js.
Part 5 converts the view models into TypeScript.
Part 6 composes a more complex control that is able to perform the searching.
Part 7 checks in the first iteration of the web page and publishes it to Azure.
Part 8 builds the helper library in a build-then-refactor iterative way.

This series builds out the UI first and refactors out the controls into testable, reusable components.  Going forward, I will look at making the controls responsive and at writing the app starting from the view models, then creating a UI based on those requirements.

Thursday, June 18, 2015

Typescript Union Types + Resharper 9.0

So I just burned away a day of my life tracking down why my TypeScript definition files were giving errors on the Union type definitions (eg, function(value:string|number){/*...*/}).

After much searching around, it seems to be a Resharper 9.0 problem.  If you look under the TypeScript section, only up to version 1.3 is supported, so this makes sense (union types arrived in TypeScript 1.4).  However, I could not figure out how to disable just the TypeScript features in Visual Studio.  So, then I decided to see if there was an update to Resharper to handle the new language version.

Using the Resharper menu item and checking for updates kept failing with an error.  So I went to the JetBrains site to see about updating from there.  I couldn't find anything for updating, only the full version download, which is apparently how you update.  After the installer launches, you are prompted to update installed features or install new features from scratch.

I eventually stumbled upon a blog post explaining that the update from 9.0 to 9.1 is jacked.  I don't mind that it doesn't work; however, I had to use Google to find this post, and it should have been more prominent on the web site IMO.

Anyway, hope this saves someone some time!

Wednesday, June 10, 2015

Coded UI Fluent Syntax

I have been looking into the CodedUI project template in Visual Studio for testing some web applications. I had a requirement to automate a browser without using Visual Studio / CodedUI and investigated Selenium directly. I really, really don't like the Selenium syntax for automation, and while I do like CodedUI's syntax better, I still do not like it. I have created some simple extensions to allow a more fluent searching syntax along with some helper classes that can be used to quickly create Page Models.  These extensions are similar to CUITe.

In this article, I assume that you are familiar with UI testing and the page model approach (see my previous post on the topic). I will discuss how I've applied page modeling to SPA-type sites and how the fluent syntax can make searching for elements way more terse.

To begin, I would like to stress that the type of testing that should be done in the UI layer should be scoped to testing the UI functionality and behavior. It should not be used to test a system's business logic or any other logic which is outside the scope of the UI. Business logic tests should use Unit Testing. Some of the things we should be testing are:
  1. Does an element show or hide in response to some user action? Eg, when clicking a close button on a dialog, does the dialog close within 3 seconds?
  2. When an input has an invalid value, does the validation show the user an expected message?
  3. Does an information screen have a valid value for each property? Eg, suppose we have a readonly display of a user's Address information. Do all of the fields have a non-blank value?
  4. Does the navigation follow expected flow? Eg, In a wizard, after completing step 1 and clicking the Save & Continue button with valid form values, does the page transition to step 2?
Things that we will not test include:
  1. After clicking save, did the entry appear in the database?
  2. Did some business-calculated field calculate correctly based on some input?
Of course these are just a few examples of the things we will and will not be testing. So, here's a simple example site:

<html>
  <head>
    <title>Simple automation site</title>
  </head>
  <body>
    <div id="mainBodyContainer">
        <button id="openButton">Open</button>
    </div>
    <div id="hiddenDialog" style="position: absolute; left: 0px; top: 0px; width: 200px; height:200px; display:none; background-color: #666;">
      <span>this is a hidden div</span>
      <button id="closeButton">Close</button>
    </div>
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
    <script>
        $(function(){
           $('#openButton').click(function(){
               $('#hiddenDialog').show();
               $('#openButton').hide();
            });
           $('#closeButton').click(function(){
               $('#hiddenDialog').hide();
               $('#openButton').show();
            });
         });
    </script>
  </body>
</html>

The test will check that the page loads with the hidden div display:none (not visible on the page). Then, it will search for the open button, click it, and verify that the div becomes visible. Then we'll close the div and check that it disappears. Very simple.

We'd write a test like this using the page model pattern (discussion of the used models afterward): 

[TestMethod]
public void ToggleHiddenDivTest()
{
    PageBaseModel pageModel = new PageBaseModel(BrowserWindow.Launch("http://path/to/simple/page"));
    Assert.IsFalse(pageModel.ToggleArea.IsVisible(), "The toggle area should start hidden.");
    ToggleAreaModel toggleModel = pageModel.ClickOpenButton();
    Assert.IsTrue(toggleModel.IsVisible(), "After clicking open, the toggle area should be shown.");
    Assert.IsFalse(String.IsNullOrWhiteSpace(toggleModel.DisplayText), "The toggle area should have non-empty display text.");
    toggleModel.ClickCloseButton();
    Assert.IsFalse(pageModel.ToggleArea.IsVisible(), "The toggle area should hide after pressing the close button.");
}


The models we'll use represent the various 'controls' on the page. Page modeling originated when web sites were more or less a page-to-page transition to accomplish some task. Nowadays, we find SPA-like sites where a single page may be used to accomplish a complex task without page transitions; instead we hide and show regions of the page. Any region on a page which represents a well defined control will have a page model. You could think of this as 'control' modeling instead, but we'll use the term Page Modeling as it is the standard terminology for this approach.

To start, I usually have a model that represents the entire page and call that the PageBaseModel or something similar to indicate it is a base model for the entire page.

When forming page models, I typically take the approach of protected properties for the elements used to implement some behavior on the control and expose public properties and methods to interact with the control and get state information or transition models.

In this example, the HtmlDocument is protected as it represents some technology specific container for this control. The ClickOpenButton method is used to transition between models and returns a ToggleAreaModel as the result of clicking. This represents the most likely next model that the user will interact with. Finally, there are properties that get the toggle area inner page model as well as the currently displayed text in the toggle box.

public class PageBaseModel
{
    protected readonly BrowserWindow Parent;
    protected HtmlDocument DocumentWindow
    {
      get { return new HtmlDocument(this.Parent); }
    }


    protected HtmlButton OpenButton
    {
        get
        {
            HtmlButton ret = new HtmlButton(this.DocumentWindow);
            ret.SearchProperties.Add(HtmlControl.PropertyNames.Id, "openButton", PropertyExpressionOperator.EqualTo);
            return ret;
        }
    }

    public PageBaseModel(BrowserWindow bw) { this.Parent = bw; }

    public ToggleAreaModel ToggleArea
    {
        get { return new ToggleAreaModel(this.Parent); }
    }

    public ToggleAreaModel ClickOpenButton() // need another method to test if the button is visible / enabled / has a clickable point....
    {
        Mouse.Click(this.OpenButton);
        return new ToggleAreaModel(this.Parent);
    }

}

public class ToggleAreaModel : PageBaseModel
{
    protected HtmlDiv Me
    {
        get
        {
            var ret = new HtmlDiv(this.DocumentWindow);
            ret.SearchProperties.Add("id", "hiddenDialog", PropertyExpressionOperator.EqualTo);
            return ret;
        }
    }

    protected HtmlSpan DisplayTextSpan
    {
        get
        {
            return new HtmlSpan(this.Me); // only span in the toggle area
        }
    }

    protected HtmlButton CloseButton
    {
        get
        {
            HtmlButton ret = new HtmlButton(this.Me);
            ret.SearchProperties.Add(HtmlControl.PropertyNames.Id, "closeButton", PropertyExpressionOperator.EqualTo);
            return ret;

            /*
              Assume there was no Me and you'd like to do the same thing...

              var container = new HtmlDiv(this.DocumentWindow);
              container.SearchProperties.Add("id", "hiddenDialog", PropertyExpressionOperator.EqualTo);

              HtmlButton button = new HtmlButton(container);
              button.SearchProperties.Add(HtmlControl.PropertyNames.Id, "closeButton", PropertyExpressionOperator.EqualTo);
              return button;
            */
        }
    }

    public ToggleAreaModel(BrowserWindow bw):base(bw)
    {
    }

    public string DisplayText
    {
        get { return this.DisplayTextSpan.InnerText; }
    }

    public bool IsVisible()
    {
        return this.Me.TryFind();
    }

    public PageBaseModel ClickCloseButton()
    {
        Mouse.Click(this.CloseButton);
        return new PageBaseModel(this.Parent);
    }
}


As you can see, most of the properties require at least three lines of code and a variable assignment just to return a control.  If the elements required additional search properties, you'd have lots of typing, as each additional property would require its own statement.  If you need to find elements 'along the way', you have to create a variable for each intermediate object and set its search properties just so you can pass it to the constructor of the next object... yikes.

Further, this feels 'backwards' to me, as I'm creating the element from which to search and passing in the scope and searching criteria.  I'd prefer to start with a parent scope and create a search tree with the ability to chain elements along the way.  I've created a GitHub repository containing the extensions, and they are also available on NuGet (under pre-release at the time of writing).

With the extensions, I can now write the model as:

    public class ToggleAreaModel : PageBaseModel
    {
        protected HtmlDiv Me
        {
            get { return this.DocumentWindow.Find<HtmlDiv>("hiddenDialog"); }
        }

        protected HtmlSpan DisplayTextSpan
        {
            get
            {
                 return this.Me.Find<HtmlSpan>(); // only span in the toggle area
            }
        }

        protected HtmlButton CloseButton
        {
            get
            {
                return this.Me.Find<HtmlButton>("closeButton");

                /*
                  Assume there was no inheritance and you'd like to do the same thing...

                  return this.DocumentWindow
                             .Find<HtmlDiv>("hiddenDialog")
                             .Find<HtmlButton>("closeButton");
                */
            }
        }

        public ToggleAreaModel(BrowserWindow bw) : base(bw)
        {
        }

        public string DisplayText
        {
            get { return this.DisplayTextSpan.InnerText; }
        }

        public bool IsVisible()
        {
            return this.Me.TryFind();
        }

        // exposes the close button as a clickable page model that transitions back to the base page
        public IClickablePageModel<PageBaseModel> CloseButtonModel
        {
            get { return this.CloseButton.AsPageModel(new PageBaseModel(this.Parent)); }
        }
    }


Notice how compact it becomes even when chaining additional properties.
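
To give a feel for what sits behind that syntax, here is a minimal sketch of what a Find<T> extension along these lines could look like.  This is illustrative only and is not the published library; it assumes the standard CodedUI types and the fact that each Html* control has a constructor taking its parent container.

using System;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UITesting.HtmlControls;

public static class FluentSearchExtensions
{
    // Creates a control of type T scoped to 'parent' and, when an id is supplied,
    // adds an Id search property so the control can be resolved lazily later.
    public static T Find<T>(this UITestControl parent, string id = null)
        where T : HtmlControl
    {
        var control = (T)Activator.CreateInstance(typeof(T), parent);
        if (!string.IsNullOrEmpty(id))
        {
            control.SearchProperties.Add(HtmlControl.PropertyNames.Id, id, PropertyExpressionOperator.EqualTo);
        }
        return control;
    }
}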

There is more to the extension library, but here's the fluent syntax portion.  I'll be diving into the Page Modeling and Wrapper classes in another post.

DNSimple + Fastmail Integration

I was trying to set up some email addresses for a domain I recently purchased and had to hunt around to figure out how the Fastmail integration works with DNSimple (the service from which I purchased the domain name).

Here's what I had to do...
  1. Log in to dnsimple and navigate to the domain you want to create email addresses for.  (https://dnsimple.com/domains/yourdomain.com)
  2. Click the services button on the left nav
  3. Click the Add or edit services link
  4. Find Fastmail in the list and click Add
    • This adds a few records, but not enough
  5. Go to Fastmail and login to your account.
  6. If you have a premium account, Click Advanced after you click user name in the top left
  7. Click Virtual Domains link
  8. Under the Domain section, add your domain.  See Fastmail docs about the subdomain and other options.  You can use defaults for now.
  9. Create your alias (eg, info@yourdomain.com => you@youremail.com)
    • Check the SRS checkbox
  10. Go down to the DKIM signing keys and copy the public key
  11. Go back to DNSimple and add a TXT record with
    • Name = mesmtp._domainkey.yourdomainnamehere.com
    • Content = public key copied in step 10
It took a while for everything to propagate, so give it an hour or two and see what happens.  This took me about an hour or so to get working, so hopefully it helps someone.

On a different note, I will probably cancel the Fastmail account in favor of SendGrid.

Hopefully more to come on this!

Page Modeling with MVVM - Component Driven, Testable Development

In a previous post, I looked at MVVM with MVC and how they can be used together in a modern web situation.  With this approach, the View Models are unit testable so that the User Interface can be tested without needing to be created.

So, what happens if you already have a UI?  Or, if you want to test the UI before creating it, how do you do that testing?  This is where page modeling comes to the rescue.  First I'll introduce page modeling at a high level, then I'll investigate several common scenarios and how you can use page modeling with MVVM to create an awesome, testable UI.

Page Modeling

The topic of page modeling has been around for quite some time and is popular with Selenium based testing solutions.  I like to describe page modeling as the process of:
  1. Identifying components of a user interface
  2. Describing the observations (properties) and behaviors (methods) of that component
  3. Understanding how more complex components are composed of other components (simple and/or complex).
At its core, page modeling is breaking down a UI into components, each with well defined behaviors (methods) and observations (properties).  Let's look at an example.

Customer Search Control

Let's say I have a Customer Search control that has a
  • First Name Edit
  • Last Name Edit
  • Search Button
  • Results Table
This can be modeled as three components.
  1. Customer State Manager

    The first and last name input boxes would form one component that can store user input about a customer (think: edit screen, new customer screen, search screen all share this component).
    • First Name Edit
    • Last Name Edit
  2. Customer Collection Manager

    The result table provides the user the ability to edit or delete any given customer in a collection.
    • Results Table
  3. Search Manager

    The search component uses the previously mentioned components and adds a search button (component composed of components). This component would use the first and last name inputs to form a search request. When the Search button is pressed, the component would create a search object, call a WebAPI, get the customer results asynchronously, and populate the result table with these results.
    • Customer State Manager
    • Customer Collection Manager
    • Search Button
Each component would have its own Page Model and likely its own View Model.

Luckily, having created a UI before putting into place any sort of testable design pattern does not mean all is lost. In fact, it is very easy to extract page models from any user interface. Page Models are then easily converted into nearly equivalent View Models. The above exercise of looking at a page and breaking it up into logical components is all that is required to create page models.

Creating View Models from Page Models

Let us continue with the above example. We now have Page Models that look something like the following :

// crude examples of page models
public class CustomerState
{
    public HtmlEdit FirstName {get; set;}
    public HtmlEdit LastName {get; set; }
}

public class ResultTable
{
    public IEnumerable<ResultRow> Rows {get; set; }
    public HtmlButton AddCustomer {get; set; }
}

public class ResultRow
{
    public HtmlRow InnerRow {get; set;}
    public HtmlButton DeleteButton{get; set;}
    public HtmlButton EditButton {get; set;}
    public HtmlSpan FirstName {get; set;}
    public HtmlSpan LastName {get; set;}
}

public class SearchControl
{
    public CustomerState SearchCriteria {get; set;}
    public ResultTable Results {get; set;}
    public HtmlButton SearchButton {get; set;}
}


Let's convert this into a Knockout.js style view model using TypeScript.

class CustomerState {
    FirstName: KnockoutObservable<string>;
    LastName: KnockoutObservable<string>;
    constructor(firstName?:string, lastName?:string){
        this.FirstName = ko.observable(firstName);
        this.LastName = ko.observable(lastName);
    }
}

class ResultTable {
    Rows: KnockoutObservableArray<ResultRow>;
    constructor(rows?:ResultRow[], button?:HTMLButtonElement){
        this.Rows = ko.observableArray(rows);
        if(button){ // binding can go into view or be passed to view model
            button.addEventListener("click", this.AddCustomer, false);
        }
    }

    AddCustomer(customerInitializer?:any){
        // add it, maybe call back to server, maybe not
    }
}

class ResultRow {
    FirstName: string;
    LastName: string;
    constructor(firstName:string, lastName:string, editButton?:HTMLButtonElement, deleteButton?:HTMLButtonElement){
        this.FirstName = firstName;
        this.LastName = lastName;
        if(editButton){
            editButton.addEventListener("click", this.EditCustomer, false);
        }
        if(deleteButton){
            deleteButton.addEventListener("click", this.DeleteCustomer, false);
        }
    }

    EditCustomer(){
        // open an edit dialog
    }

    DeleteCustomer(){
        // delete it
        // some sort of ajax call to web api to delete this customer 

    }
}

class CustomerSearch {
     State: CustomerState;
     Results: ResultTable;
     constructor(state?:CustomerState, results?:ResultTable){
         this.State = state || new CustomerState();
         this.Results = results || new ResultTable();
     }

     Search(){
        // look inside this.State, form a request, use ajax to call web api
        // handle success by populating the result table
     }
}


As you can see, the resulting View Models look extremely similar to the Page Models that were created.  Taken further, the Page Models could even have the same methods to encapsulate some behavior of the component.

public class CustomerState
{
    public HtmlEdit FirstName {get; set;}
    public HtmlEdit LastName {get; set;}
    public CustomerState SetFirstName(string firstName)
    {
        this.FirstName.Text = firstName;
        return this;
    }
    public CustomerState SetLastName(string lastName)
    {
        this.LastName.Text = lastName;
        return this;
    }
}

public class ResultTable
{
    public IEnumerable<ResultRow> Rows {get; set; }
    public HtmlButton AddCustomer {get; set; }
    public AddCustomerDialog ClickAddCustomer()
    {
        Mouse.Click(this.AddCustomer);
        return new AddCustomerDialog();
    }
}

public class ResultRow
{
    public HtmlRow InnerRow {get; set;}
    public ResultTable ParentTable {get; set;} // the table this row belongs to (used by ClickDeleteButton below)
    public HtmlButton DeleteButton {get; set;}
    public ResultTable ClickDeleteButton()
    {
       Mouse.Click(this.DeleteButton);
       return this.ParentTable;
    }
    public HtmlButton EditButton {get; set;}
    public EditCustomerDialog ClickEditButton()
    {
       Mouse.Click(this.EditButton);
       return new EditCustomerDialog();
    }
    public HtmlSpan FirstName {get; set;}
    public HtmlSpan LastName {get; set;}
}

public class SearchControl
{
    public CustomerState SearchCriteria {get; set;}
    public ResultTable Results {get; set;}
    public HtmlButton SearchButton {get; set;}
    public SearchControl ClickSearchButton()
    {
       Mouse.Click(this.SearchButton);
       return this;
    }
}


Testing

Regardless of how you design your UI, there is an easy path toward testability. What follows are some approaches you can take (in no particular order) to have a testable UI.

Page Models First

If you start by modeling the user interface with page models which describe the behavior of the user interface component (subtly different from a view model, which knows how to take inputs and translate them to the underlying system; more on that later), you can create a full suite of GUI tests without any user interface actually being created. Here is what that could look like...

[CodedUITest]
public class CustomerSearchTests
{

   protected BrowserWindow startingWindow;

   [TestInitialize]
   public void TestInitialize()
   {
       startingWindow = BrowserWindow.Launch("http://mysite.com");
       // manipulate the ui until we reach a search screen
   }

   [TestMethod]
   public void WhenSupplyingAFirstNameFilterAndSearching_ThenAllResultsContainTheFirstNameCriteriaInFirstNameResults()
   {
      SearchControl search = new SearchControl(startingWindow);
      search.SearchCriteria.SetFirstName("Mike");
      search.ClickSearchButton();
      Assert.IsTrue(search.Results.Rows.All(x => x.FirstName.InnerText.Contains("Mike")));
   }
}


This would be a great requirements specification for a developer writing the UI. This model even includes the specific UI element types to use (which may or may not be desirable). As the components are completed, the tests can be run and quick feedback is available to the developer as to whether the controls were created correctly. These page models could be easily converted into view models and the system could be tested via the view models.

View Models First

If you start by modeling the user interaction with your system, you can create a full suite of unit tests without any user interface (graphical or otherwise) being created. Given that the above view models were written in TypeScript (compiled to JavaScript), we would need to use a JavaScript test runner. However, for simplicity, imagine the equivalent view models in C#.

[TestClass]
public class CustomerSearchTests
{

   [TestInitialize]
   public void TestInitialize()
   {
       // nothing to do
   }

   [TestMethod]
   public void WhenSupplyingAFirstNameFilterAndSearching_ThenAllResultsContainTheFirstNameCriteriaInFirstNameResults()
   {
      ISearchControl search = ControlFactory.Create();
      search.SearchCriteria.SetFirstName("Mike");
      search.Search();
      Assert.IsTrue(search.Results.Rows.All(x => x.FirstName.Contains("Mike")));
   }
}


Not much different here than what is above.

UI First

If you create your UI first (or inherit an existing UI), we already looked at how easy it is to extract page models and view models.

Conclusion

Regardless of whether you inherit an existing solution with a not-so-testable UI or you are setting out on a beautiful greenfield project, creating tests for your user interfaces (both graphical and non-graphical) is such a simple task that there really is not much reason not to test. This has the side benefit of tested refactoring that bleeds into the business layer. As you test the UI and move into the interaction between the UI and the backing system, it is only natural to continue that journey into the systems that power your UI.

Welcome to the wonderful world of Testing! :)

Tuesday, May 26, 2015

Azure Websites now Web Apps

I have finally been able to get into using Windows Azure more and I'm loving it. I had been using it for the last two years on and off, but with the recent updates announced at Build, it's been amazing.

I found some great articles from Rick Anderson at Microsoft about MVC5, Azure, and Identity (including OAuth 2.0).  The series found here gets you to a production-worthy site in just a few hours that includes user registration, SMS, external logins, and all that base framework that just about any site needs.

So, here's the scenario...

I started a project in MVC4 and would like to migrate to MVC5 and use the ASP.NET Identity stuff with Entity Framework and Azure SQL.

I haven't found an article describing how to use an existing project with a database that it did not originally use. For example, the MVC4 site used abc.sqlserver.qualifiedname. After updating, I'd like to use def.sqlserver.qualifiedname on a BLANK database.

Using EF migrations enables this scenario.

From the package manager, use the following commands:

PM> enable-migrations
PM> add-migration Initial

Update your connection string for the connection which points to the users db (eg, DefaultConnection) from the abc (original) server to the def (new Azure SQL) server, and set the Initial Catalog to the blank database.

PM> update-database

The first command adds the required components to enable migrations, including adding a Migrations folder to your project.

The second command creates a migration representing the current DB schema.

The third command sends the necessary SQL to the Azure SQL server you pointed the connection string at above, bringing the blank database in line with the original database's schema.
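
For reference, the scaffolded Initial migration is just C# that describes the current schema.  Roughly (the actual class is generated from your model; the Identity users table shown here is trimmed down and only an illustration):

using System.Data.Entity.Migrations;

public partial class Initial : DbMigration
{
    public override void Up()
    {
        // one CreateTable call per table in the model; columns trimmed for brevity
        CreateTable(
            "dbo.AspNetUsers",
            c => new
                {
                    Id = c.String(nullable: false, maxLength: 128),
                    UserName = c.String(maxLength: 256),
                })
            .PrimaryKey(t => t.Id);
    }

    public override void Down()
    {
        DropTable("dbo.AspNetUsers");
    }
}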

Migrations are a great way of helping to enable continuous deployment of databases so I'm hoping EF turns out to be as great as everyone says :)

Friday, April 24, 2015

It's official - I can program in C#

I finally went and took the first of three exams required to become a Microsoft Certified Software Developer (MCSD).  I opted for the 70-483 Programming in C# exam over the 70-480 Programming in HTML5 with JavaScript and CSS3, though I do plan to take that in the future.

Tips:
  1. Study specifically for the exam
    The range of topics is quite large and it is likely there will be questions you are not familiar with.  Topics I was not familiar with:
    • Performance Counters
    • GAC tools
    • Specific Crypto Algorithms
  2. Carefully read each question
    There can be subtle verbiage that helps immediately rule out answers.
    Eg, You need to send large volumes of data securely using a hashing algorithm.  
    This would rule out any encryption algorithms.
  3. Get comfortable with the query syntax of LINQ
    I never use the 'query syntax' (eg, from c in collection where ...) and just about every question involving LINQ used that form; see the comparison after this list.
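
For anyone in the same boat, here is a trivial comparison of the two equivalent forms (illustrative only; requires using System.Linq):

var customers = new[] { new { FirstName = "Mike" }, new { FirstName = "Sara" } };

// Query syntax (what the exam favored)
var query = from c in customers
            where c.FirstName.StartsWith("M")
            select c.FirstName;

// Method syntax (what I normally write)
var method = customers.Where(c => c.FirstName.StartsWith("M"))
                      .Select(c => c.FirstName);
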
Overall
The exam was 55 questions with a maximum allowed time of 120 minutes.  This was more than enough time to read and think about each question before answering.  I finished the exam in just under an hour and never felt time pressure.  At the end, I had mistakenly not answered one question, and it informed me of this before letting me end the exam, so don't worry about skipping questions.  I do recommend taking the time to read and understand each question, which I can't stress enough.  Several times, the supplied code in the question gave away the answer; it just took a second to realize what was being asked.

Bonus
I didn't realize going into it, but by passing a single exam, you are considered a Microsoft Certified Professional (MCP) and have special recognition in the Microsoft Community. I'm excited to start digging into what's included. More on this to come :)

Friday, March 27, 2015

MVC and MVVM Together in Modern Web

I've spent the last few years heavily in the web space and really believe that we're finally in a pretty good spot with design patterns, support from browsers, web standards, client hardware (smart phones) that can do more and more complex things...  But there is still so much going on and things are changing quickly!

I want to look at how to use the MVC and MVVM patterns together to provide a very powerful client application.

Lets start by breaking down these patterns.

Model

Models are typically our business objects.  More specifically, these should be Business Value Objects.  I like to make that distinction as these objects should be purely data (no methods).  How are you going to transmit the method to the client with your model?

View

A representation of data.  A view uses a model to construct some representation, typically a visual representation.  The view is in charge of binding the model to the eventual markup in the case of ASP.NET MVC.

Controller*

A controller is something that handles requests and returns values.  I am leaving this slightly vague at the moment, but the main point is that they handle requests.

View Model

Typically a composition of models and other view models that is able to manipulate client-side objects and manage user interaction with the system.  For example, if you have a button on a page, there should exist some view model that will handle the click event.  Typically, this event handler will call out to a controller, take a response asynchronously, and update the view accordingly.

That leads us to the second reason for a view model - to control the state of the page.  Whether an expander is opened or closed should be driven by an observable property of the View Model.

The View Model is delivered to the user in the View returned by the Controller after a request.


Let us consider a simple web example.
  1. User makes a request to my site to see a product catalog
  2. Request for markup is handled by a Controller
  3. Controller Finds a View based on the Route of the request
  4. Controller Finds a Model based on the arguments of the request (route and/or message body)
  5. The Model is given to the View to construct some visual representation of the Model (render HTML)
  6. Within the generated markup, a view model must exist to handle user interaction and manage page state.  This is typically going to be some JavaScript object
  7. Markup is delivered to the client and the View Model is initialized
  8. Once the view model is initialized, it may call back to the server to fetch some data (eg, product list meta data)
  9. Request for data is handled by a Controller
  10. Controller does not need to find a view since we are requesting data, it simply finds the Model and returns it as data
  11. The View Model receives the data asynchronously and updates the markup by rendering the list of products via template binding
At this point our user experience is improved as the site loads quickly to deliver the layout and minimal data, plus a view model that can take over.  This improved responsiveness is very noticeable.  Consider the alternative where all the data must be fetched before the view can be rendered and delivered to the client.  The client gets a blank web page with a spinner.

So, you can see, there are really two types of requests we are dealing with here.  Controllers that respond to requests with markup are typically referred to as MvcControllers.  Controllers that respond to requests with data (json, xml, file, ...) are typically referred to as ApiControllers.

In the above example, we're using MVC to get us into some interesting section of the application.  After the View is delivered, MVVM takes over and the View Model handles user interaction and manages page state.  Using MvcControllers to deliver data is not natural or easy if you are using ASP.NET MVC.  WebAPI (aka, ApiControllers) was developed specifically to provide an easy way to deliver data in a commonly understood format (json, xml, ical, ... think mime).
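
To make the distinction concrete, here is a minimal sketch of the two controller flavors; the type names and data are made up purely for illustration.

// MvcController (System.Web.Mvc): responds to a markup request with a View.
public class CatalogController : Controller
{
    public ActionResult Index()
    {
        // renders the page shell; the view model in the markup takes over client side
        return View();
    }
}

// ApiController (System.Web.Http): responds to a data request with a Model.
public class ProductsController : ApiController
{
    public IEnumerable<ProductModel> Get()
    {
        // normally loaded from a repository/service; hard-coded for the sketch
        return new[] { new ProductModel { Name = "Widget" } };
    }
}

public class ProductModel
{
    public string Name { get; set; }
}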

I find this is a very powerful approach where the View Models become unit testable instead of having to be tested via CodedUI or similar.  Once the View Model is tested, it's just a matter of binding a view which isolates errors to binding errors.

Overall, I think this is going to become one of the more common patterns to see when developing new, modern web applications.

WCF Best Practices Part 1: Setting up the solution

Windows Communication Foundation (WCF) is one of the most commonly used technologies to connect to services in the Microsoft stack.  I found it surprisingly hard to find any guidance on how to set up your projects when using WCF.  Please refer to this dnrtv episode for that guidance.  It's an oldie, but it still applies today.

Some of the biggest takeaways that I finally uncovered from this episode were:
  1. Service References are not your friend - DO NOT USE THEM!
  2. There are 5 major components when dealing with WCF
While service references not being your friend is the number one takeaway, I will cover the 5 components first, as that will bring some context as to why they are bad.

1.  Contracts

Contracts are the core foundation of any service and are the most critical aspect to get right as early as possible.  The contracts describe the agreement that a client has with the service code driving it.  Contracts are not secret; in fact, keeping them secret would be counterproductive.  Suppose I want to consume a service but do not know how to interact with it.  How would I know how to request something, or even how to talk to it (eg, TCP, HTTP, ...)?

In the contracts, you will have two primary types of objects:
  1. Data Transfer Objects - DTOs
  2. Service Interfaces
Notice, only the interface to the service exists in this project, not the implementation code.

Contracts will exist in their own project so that they can be shared with clients (you can openly distribute this to your customers).  If the customer is able to use your contracts dll directly, less work for them.  If the customer cannot use your contracts dll directly, WCF supports metadata discovery, which allows tools to auto-generate the types (eg, Service References).

I highly, highly recommend decorating all of your DTOs with [DataContract(Namespace = "something.that.is.not.the.default")].
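
For example, a DTO in the Contracts project might look like the following (the type and namespace value are made up for illustration; the attributes require System.Runtime.Serialization):

[DataContract(Namespace = "http://schemas.example.com/stuff/2015/03")]
public class WithStuff
{
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public string Name { get; set; }
}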

This project is typically a Class Library

2.  Client Proxy

This project contains proxy classes implementing the service in question.  In .net, these classes would look something like:

// in the Contracts project
[ServiceContract]
public interface IDoStuff
{
    [OperationContract]
    ReturnedStuff DoStuff(WithStuff thisStuff);
}

// in the client Proxy
public class IDoStuffProxy : ClientBase<IDoStuff>, IDoStuff
{
    public ReturnedStuff DoStuff(WithStuff thisStuff)
    {
        return this.Channel.DoStuff(thisStuff);
    }
}

This can easily be auto-generated and is not secret, so you can provide it openly to customers.  Tools like Service References will generate these proxy types for you.  As you can see, there is not much involved in generating these by hand, either.  svcutil.exe (which Service References uses under the covers) can generate them for you as well.
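
For completeness, here is a rough usage sketch of such a proxy.  It assumes a client endpoint for IDoStuff is configured in the app/web.config (the parameterless ClientBase constructor picks up the default endpoint) and requires System.ServiceModel; the Id property comes from the illustrative DTO above.

var proxy = new IDoStuffProxy();
try
{
    ReturnedStuff result = proxy.DoStuff(new WithStuff { Id = 42 });
    proxy.Close();
}
catch (CommunicationException)
{
    proxy.Abort(); // never Close a faulted channel
    throw;
}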

This project is typically a Class Library

3.  Service Implementation

Of course you'll eventually have to provide some working implementation for the service contracts in the contracts project.  This project will also contain additional types that help support the service, but which are not simply DTOs or should not be given out to the customer because they contain some proprietary functionality, etc.

This is your secret sauce and should not be given out.  This code can be updated at any time as long as it does not break the contract with the client.  We know it will not as that would require a change in the Contracts project.

This project is typically a WCF Service Library

It is imperative that this be host independent, as you may want to host on-prem, in Azure, in AWS, ... or even on IIS and Apache.  Trust me, you are better off keeping this separate from the service host.

Service Libraries also have a nice feature where the WCF Test Client will open with a reference to your service if you start the service library directly.

4.  Service Host

If you are using IIS and/or IIS Express, the service host is extremely simple.  It contains only the web.config file and transforms.  That's it.

This project is typically a WCF Service Application

5.  Client

Finally, we have the client.  A client is anything that uses your service.

Why 5 Separate Projects

Primarily, it keeps you honest.  You should not change an API to suit the needs of an implementation, and when the interface the service implements resides in a completely separate project, it really makes you consider whether that is the right thing to change.

It has the added benefit of providing bits you can provide to customers which are not secret so they don't have to generate those types.  Think, Barrier to Entry.

Finally, it lends itself to easy deployment to multiple environments and even multiple platforms.

Service References

Are not your friend, because Visual Studio tries to manage things for you, and when things break, you have no idea what you are looking at; worse, it sometimes breaks more when you 'fix' it.  Specifically, it tries to generate the DTO and proxy types and manage the web.config entries related to those services (it is very, very bad at this management).  Under the covers, it uses svcutil.exe, which you can use yourself from the command line, and I guarantee you will do a better job!

The only reason to use a Service Reference would be to have it generate the types, which you then cleanse and add to source control as your own code, then remove the service reference.  At that point, you may as well use svcutil.exe.

When you own the contracts dll (maybe another team within the company) or the service provider just gives out the contracts dll directly, use it*.

*One nice thing the svcutil.exe can do is generate Task-based methods of the service interface, even when the service interface is not Task-based.  In this case, you may want to generate your own types anyway.


This was the biggest missing link for me when I jumped into WCF.  I started after version 4.5 was released so the configuration files were super easy to figure out.  I just didn't understand how to structure the projects so hopefully someone out there benefits from this!  :)

Tracing is EVIL: Part 1

So, for the last year or so I've been looking into application logging and event tracing.  I looked into ETW after listening to a .NET Rocks episode a long time ago, but quickly stopped when I realized that starting and stopping the tracing required elevated privileges on the server.  I haven't worked at many places where that would even be up for discussion...  So I eventually came across System.Diagnostics.TraceSource.  This seemed to be a good fit as I could turn it on and off via a configuration file and I could add custom listeners to defer the 'write' to be handled by some external handler(s).

The application in question is built around interfaces, with base classes and wrappers implementing the tracing where those objects are constructed.

Let's bring in an example...

/// interface for loading stuff
public interface ILoadStuff {
    /// load the stuff with the specified id
    Stuff LoadStuff(int id);
}

public class TracingStuffLoaderWrapper : ILoadStuff {
    protected readonly ILoadStuff wrapped;
    protected readonly TraceSource trace;

    public TracingStuffLoaderWrapper() : this(new StuffLoaderProxy()/*WCF/service proxy*/, new TraceSource("NS.TracingStuffLoaderWrapper", SourceLevels.Information)){}

    public TracingStuffLoaderWrapper(ILoadStuff wrapped, TraceSource trace){
        // arg null check
        this.wrapped = wrapped;
        this.trace = trace;
    }

    public Stuff LoadStuff(int id){
       trace.TraceInformation("Request to load stuff with id `{0}`.", id);
       var stuff = wrapped.LoadStuff(id);
       trace.TraceInformation("Stuff fetched successfully");
       return stuff;
    }
}

or

public abstract class TracingStuffLoaderSql : ILoadStuff {
    protected readonly TraceSource trace;
    protected TracingStuffLoaderSql() : this(new TraceSource("NS.StuffLoaderSql")){}
    protected TracingStuffLoaderSql(TraceSource source){
        this.trace = source;
    }

    protected abstract SqlConnection GetOpenConnection();
    protected abstract SqlCommand GenerateLoadCommand(int id, SqlConnection conn);
    protected abstract Stuff DecodeStuffFromDataSet(DataSet ds);

    public Stuff LoadStuff(int id){
        trace.TraceInformation("Request to load stuff with id `{0}`.", id);
        using(SqlConnection conn = GetOpenConnection()){
            trace.TraceInformation("Generating load command.");
            using(SqlCommand cmd = GenerateLoadCommand(id, conn)){
                trace.TraceInformation("Fetching data from database.");

                DataSet ds = new DataSet();
                new SqlDataAdapter(cmd).Fill(ds);
                trace.TraceInformation("Dataset fetched with {0} tables and {1} rows in the first table.", ds.Tables.Count, ds.Tables[0].Rows.Count);

                Stuff decoded = DecodeStuffFromDataSet(ds);
                trace.TraceInformation("Decoding succeeded: {0}", decoded);
                return decoded;
            }
        }
    }
}


Using this pattern, the StuffLoader can work against any SQL database; just implement the abstract command-generation and decoding methods in the derived class.  The subclass would hopefully do the right thing and use the trace source to write messages, but even without writing to it, this gives a very good idea as to where a failure may have occurred.

If the trace prints "Generating load command." and does not print "Fetching data from database." the failure is inside the GenerateLoadCommand subclass implementation.  Otherwise, my base class is broken and I should have a good idea where.

This approach was working fine as there were only a few traces happening here and there.  After introducing tracing into just about every operation in the system I've uncovered a drastic flaw in this approach!!!

This SO post shares exactly my feelings.  I could not believe that the tracing was synchronous and that the default trace listener performed so horribly.  The app actually started hanging and crashing from unresponsive tracing.  I insisted that it could not be the tracing, as it *must* be asynchronous....

When profiling the application locally, except for start up, the app has very little resource use (<5% CPU, < 100MB RAM) across 3 WCF services and a website.  With as few as 10 users, the site went from blazing fast to painfully slow.

Lesson learned.... DO NOT USE THE DEFAULT TRACE LISTENER.  However, no matter what I tried, I could not get the default trace to stop outputting.  I had to disable tracing completely and enable it only after an error was observed, try to reproduce the problem to capture a trace (while horribly impacting the running application for all other users), and then disable it again.  This really sux.

Anyway, so I've known about ETW for a while, but haven't really tried to use it.  I feel like ETW is the best choice for what I'm striving for.  In fact, the way I mistakenly assumed TraceSource worked is exactly how EventSource actually works, so lucky me :)

So, how do you convert from the old TraceSource stuff (a .NET 2.0 feature) to the new EventSource stuff (a .NET 4.5/4.5.1 feature), you ask?

Stay tuned for Part 2!

Thursday, March 5, 2015

WCF + WebAPI + Identity space

It's been quite a while since I've been able to finish a blog article (I still have quite a few pending), but I really want to get *something* out here just to get back into the habit.

Lately, I've been thinking about Identity and Microservices.  Today, there does not seem to be a great way to share identity from a client to a server, then to another (or many other) services.

I thought federated identity would provide a better mechanism, but it still just provides a single authentication point for the user; it doesn't really handle passing a request between services.

I recently listened to a .NET Rocks episode about Identity Server and saw an NDC video about it.  There are a bunch of videos on NDC around identity, and I'm excited to see the concept of an identity server for an application domain (all the microservices that comprise the actual service).

I'll have to investigate this further and see just how well OAuth and OpenID Connect really work.

Along those lines, I've been hearing more and more about WCF being 'dead' and WebAPI being the new standard.  Unfortunately, when I first heard about WebAPI years ago, I thought the same thing, but I didn't really understand why you would pick one over the other.  Having spent the last few years in this space pretty heavily, I've noticed a few things...

Firstly, I always hear about how hard WCF is to configure.  However, that has not been my experience.  I have had nothing but an awesome time using WCF.  I think the fact that I came in at version 4.5 has to do with my rosy attitude toward WCF.

File-less activation of services and default binding behaviors makes deploying new services extremely easy.

WebAPI has been a blast as well.  From my JavaScript-based web application, I can call into my services with ease.  A simple GET or POST as the method type, a simple attribute on my API controller, and we're good; my service gets an object and my client gets HTTP.
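
As a rough illustration of that service side (the controller, action, and request type here are made up, not from a real project; requires System.Web.Http):

public class CustomerSearchController : ApiController
{
    [HttpPost]
    public IHttpActionResult Search(SearchRequest request)
    {
        // 'request' is deserialized from the JSON the JavaScript client posted
        return Ok(new[] { new { request.FirstName } });
    }
}

public class SearchRequest
{
    public string FirstName { get; set; }
}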

I think one of the major roadblocks to getting more posts published is finding a way to show code and discuss it via text.  I will be trying a YouTube channel to discuss my thoughts about various technologies and link the videos from my blog with any supporting content.

Stay tuned for a short presentation discussing when to use WCF and when to use WebAPI.