A high level look at Angular 2

Developed by Google, Angular 2 is the newest version of the popular Single Page Application (SPA) framework. Angular 2 is a step in a new direction compared to previous versions, but has kept all the best characteristics and “lessons learnt” to deliver a fast, fully featured, and rich ecosystem.

About Angular 2

Angular 2 impacts your whole application, or at the very least a section of it, rather than specific pages. Angular 2 is best suited to new (“greenfield”) development, as it can be relatively tricky to migrate legacy code (Angular 1.x code) to the new version. Angular 2 has new concepts, syntax, methodologies, and opinions, but is broadly comparable to previous versions in the way it works.

If you have been following Angular development since “the early days” of beta 1, then it’s been a very rocky road for you. Even now (this post was written around the release of RC5), the API is still evolving and new features are being added. Whilst this experience has been hard for early adopters, I believe the end result will be a fantastic, easy to use, and performant framework with a lower barrier to entry for all.

Overview

The purpose of this post is to discuss the core concepts of Angular 2. We’re not looking to dive into the details at this point; a follow-up post on that will come later. We will discuss: pre-processors, build tools, components, dependency injection, interpolation, pipes, directives, event bindings, two-way data binding, lifecycle hooks, routing, services, and the HTTP client.

I have a side project on GitHub, named Angular2Todo, which is a todo application written with Angular 2, Universal Angular, and ASP.NET Core. If you’re interested in server-side rendered Angular, please check that out.

This post is based on my own experience with Angular 2, and most of the knowledge has come from developing real world applications, including this one on Github.

Going forward, we will refer to Angular 2 simply as “Angular”. Old versions of Angular will be referred to by their version number explicitly.

TypeScript, Babel, ES6/ES5

It is hard to talk about Angular 2 without discussing TypeScript (See TypeScript playground for a live interactive demo). Angular was originally written using AtScript, which was an extension of TypeScript (TypeScript with additional functionality). However, after much collaboration between the Angular team and the TypeScript team, it was decided to use TypeScript exclusively instead.

Angular is written using TypeScript. However, you don’t necessarily have to write your Angular code using TypeScript; you could use Babel, ES5, or ES6 if you prefer. If you are familiar with Google Dart, that is also supported.

I believe that Angular 2 applications are most commonly being developed using TypeScript, so I’ll use TypeScript throughout this post. If you need a summary of TypeScript, check out my TypeScript Beginners Guide on this website, DeveloperHandbook.com.

At a high level, TypeScript is JavaScript. You can convert all your existing JavaScript code to TypeScript as easily as changing the file extension from JS to TS. The most useful feature of TypeScript is its transpiler, which takes your TypeScript code (which is basically ES6), and converts it into ES5. Why would you want to do this? Most developers will want to utilise the new language features of ES6, ES7 and beyond, whilst not having to worry about cross browser compatibility. Support for ES6 is shaky at best on the desktop (Microsoft Edge, I’m looking at you), and very poor across mobile devices. TypeScript takes this pain away by converting your code to ES5, which is stable and does have excellent support.
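To make this concrete, here is a minimal sketch of what "TypeScript is JavaScript plus types" looks like in practice (the function name and values are just for illustration). The ES6 template literal below is exactly the kind of feature the transpiler can rewrite as plain ES5 string concatenation when targeting older browsers;

```typescript
// A plain JavaScript function becomes TypeScript by renaming the file,
// but adding type annotations lets the compiler catch mistakes before runtime.
function greet(name: string, year: number): string {
  // Template literals are ES6; when targeting ES5, the TypeScript
  // compiler emits equivalent string concatenation instead.
  return `Hello ${name}, welcome to ${year}!`;
}

const message = greet("Angular", 2016);
```

Passing a wrong type (for example, `greet(2016, "Angular")`) would be a compile-time error here, where plain JavaScript would fail silently at run time.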

Build tools

As we’re working with TypeScript, and potentially other tools as well, it makes sense to use build tools like Webpack or Gulp. Build tools can automate repetitive tasks, such as: transpiling the code (TypeScript to ES5), bundling (taking all your individual assets and merging them into one big file), minification (compressing that file for faster delivery to the client), and injection (referencing the new resource in the HTML).

Build tools can also watch for your changes, then automatically rebuild and refresh the browser, so that you can focus on writing code and developing your application.

The Angular documentation, once you get beyond the absolute basics, encourages you to split your components (we will discuss these more next) into individual concerns. Your JavaScript, styles (CSS) and markup (HTML) are to be placed in individual files (component.js, component.css, component.html). This results in a lot of “chatter” between the client and server and can slow down the user experience (particularly on slower, mobile devices). Build tools can solve this problem by automatically injecting the markup and styles into your JavaScript files at compile time. This is certainly not a task you would want to perform manually!

Personally, I have worked with both Gulp and Webpack when developing Angular applications. I prefer Webpack for how well it works, but I do not like the configuration aspect. Gulp is much easier to configure, but not as powerful (in my experience) as Webpack.

I have example Gulp and Webpack configuration files on GitHub, both of which have been used in real-world applications.

Components

Components are the basic building blocks of all Angular applications. Components are small and have their own state per instance (meaning you can reuse the same component many times on a single page without it colliding with other instances). Components closely follow the open standard for Web Components, but don’t have the same pain of cross browser support (the Web Components standard has not been finalised yet). Components are a group of directly related JavaScript (logic), CSS (style) and HTML (markup), which are largely self contained.

Components in Angular are defined using the @Component class decorator, which is placed on a class and takes “metadata” that describes the component and its dependencies.

A component might look like this;

import { Component } from '@angular/core';
import { ROUTER_DIRECTIVES } from '@angular/router';

@Component({
    selector: 'my-app',
    directives: [...ROUTER_DIRECTIVES],
    templateUrl: './app.component.html',
    styleUrls: ['./app.component.css']
})
export class AppComponent {

}

This is ES6/TypeScript code. We create a class, called AppComponent using the class keyword. The export keyword makes the class “public”, so that it can be referenced elsewhere throughout the application.

The @Component decorator takes an object that describes the component.

  • Selector: This is used in your markup. The selector is how you refer to the component in HTML. In this example, the markup would be: <my-app></my-app>
  • Directives: Other components that you want to use in this component's markup
  • TemplateUrl: The path on the file system to the markup
  • StyleUrls: A string array of all the CSS files used to style the component

There are many values that can be passed in here; the main ones are shown above.

About styling

Why does Angular load the styles in this manner? Take the following markup (I’ve trimmed this slightly for simplicity);

<tr>
    <td>...</td>
    <td>{{asDate(calibrationDue.date) | date}}</td>
    <td>{{asDate(calibrationDue.expiration) | date}}</td>
    <td>{{calibrationDue.registration}}</td>
    <td>{{calibrationDue.technician}}</td>
    <td>{{calibrationDue.customer}}</td>
    <td>{{calibrationDue.vehicleManufacturer}}</td>
</tr>

This is pre-compiled markup. The compiled, rendered markup looks something like this (again trimmed for simplicity);

<tr _ngcontent-dqf-10="" class="pointer">
    <td _ngcontent-dqf-10=""></td>
    <td _ngcontent-dqf-10="">Sep 29, 2014</td>
    <td _ngcontent-dqf-10="">Sep 29, 2016</td>
    <td _ngcontent-dqf-10="">AA11AA</td>
    <td _ngcontent-dqf-10="">John Smith</td>
    <td _ngcontent-dqf-10="">John Smith Transport</td>
    <td _ngcontent-dqf-10="">Ford</td>
</tr>

Notice the rather auto-generated looking attribute that has been added to the markup? It was auto-generated by Angular. The same attribute was also injected into the CSS. Why? To scope the CSS, preventing it from having any effect on the site outside of the component itself. Any CSS you write in a file referenced by a component cannot affect any other component or any other part of the site. The styles only affect the component itself. This is tremendously powerful and results in componentized CSS; effectively, CSS that does not cascade.

Angular 1.5+

Components were introduced in Angular 1.5 to help ease the transition (the upgrade path) from Angular 1 to Angular 2. If you are writing Angular 1 code currently and are looking to migrate to Angular 2 in the future, then consider re-writing your existing controllers into components to make your migration simpler in the future.

Dependency injection

Dependency injection in Angular is similar, in my experience, to dependency injection in many other languages/frameworks. Angular takes over managing the lifecycle of a component's (or service's) dependencies, and the dependencies of those dependencies.

When you need to use a dependency, a service for example, you inject it into your component through the constructor. Dependency injection example;

import { Component, OnInit } from '@angular/core';
import { TodoStore, Todo } from '../services/todo.service';

@Component({
  selector: 'app-footer',
  template: require('./footer.component.html')
})
export class FooterComponent implements OnInit {
  constructor(public todoStore: TodoStore) {
  }
}

In the above example, our component needs to use the TodoStore. Rather than creating a new instance of it inside the constructor, we add it as a parameter to the constructor. When Angular initialises our component, it looks at these parameters, finds an instance, then supplies it to the component.

Dependency injection in Angular is hierarchical. When a component needs a dependency, Angular looks to the component's ancestors (parents, grandparents, etc.) until an instance is found. Depending on how you construct your application, dependencies can be (and probably will be) singletons. However, as dependency injection in Angular is hierarchical, it is possible to have multiple instances of the same service. When a dependency is defined at the “root level” of the application, the service will always be a singleton.
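The hierarchical lookup can be sketched in plain TypeScript. This is an illustration of the idea only, not Angular's actual implementation (the Injector class and string tokens below are simplified stand-ins);

```typescript
// Minimal sketch: each injector holds its own instances and falls back
// to its parent when a token is not registered locally.
class Injector {
  private instances = new Map<string, unknown>();

  constructor(private parent?: Injector) {}

  provide(token: string, instance: unknown): void {
    this.instances.set(token, instance);
  }

  get<T>(token: string): T {
    if (this.instances.has(token)) {
      return this.instances.get(token) as T;
    }
    if (this.parent) {
      return this.parent.get<T>(token); // walk up the ancestor chain
    }
    throw new Error(`No provider for ${token}`);
  }
}

// A service registered at the root behaves as a singleton for all children...
const root = new Injector();
root.provide("TodoStore", { todos: [] });
const child = new Injector(root);

// ...unless a child provides its own instance, shadowing the root's.
const shadowing = new Injector(root);
shadowing.provide("TodoStore", { todos: ["local"] });
```

Here `child.get("TodoStore")` resolves to the root's instance, while `shadowing.get("TodoStore")` resolves to its own local one, which is how multiple instances of the same service can coexist.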

Angular takes care of initialising the dependency, and the dependencies of the dependency, so you only need to worry about using the dependency and not worry about managing it.

In the above code example, we are “referencing” each dependency using its type, so we don’t have to concern ourselves with making our code “minification safe”. In Angular 1, we define all our dependencies in a string array ($inject) so that Angular knows the name of each dependency at run time, even after the code has been mangled. This step is no longer necessary.

Why?

Why would you want to handle dependencies in this way? Code complexity is reduced, and unit tests become simpler. How? When testing, we can tell Angular to inject mock versions of our dependencies, to speed up the tests and ensure we’re only testing our own code and not Angular itself.

If you inject the HTTP client into a service, for example, you would not want to make real HTTP requests to the server when running your tests. Instead, you inject a mock version of the HTTP client and simulate the request/response, for faster, more consistent results.
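The mocking idea can be shown without any framework at all. In this hypothetical sketch, TodoService and HttpLike are illustrative names, and the HTTP call is simplified to a synchronous method so the example stays self-contained;

```typescript
// The service depends on an abstraction rather than a concrete
// HTTP client, so tests can substitute a mock.
interface HttpLike {
  get(url: string): string; // simplified synchronous signature for illustration
}

class TodoService {
  constructor(private http: HttpLike) {}

  loadTitles(): string {
    return this.http.get("/api/todos");
  }
}

// In a test, inject a mock that never touches the network.
const mockHttp: HttpLike = {
  get: (url: string) => `mock response for ${url}`,
};
const service = new TodoService(mockHttp);
const result = service.loadTitles();
```

In a real Angular test you would register the mock with the test injector instead of constructing the service by hand, but the principle is the same.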

Interpolation

Interpolation is perhaps the most familiar concept in Angular. Interpolation is the means of evaluating expressions and displaying their results.

Take the following example;

<div class="view">
  <input class="toggle" type="checkbox" (click)="toggleCompletion(todo)" [checked]="todo.completed">
  <label (dblclick)="editTodo(todo)">{{todo.title}}</label>
  <button class="destroy" (click)="remove(todo)"></button>
</div>

The interpolation code is inside the label element: {{todo.title}}. There is an object on the component, called todo. The object is a complex object, so it has many properties (and functions). In this case, we want to display the title of the todo to the user. Angular looks at this expression, determines the value of todo.title, and renders it in the view. Whenever the value changes, that change is evaluated and displayed automatically.

You can display the results of practically any expression. Take another example; {{2+2}}. The result is 4, so 4 will be displayed on the view.

It is also permitted to invoke functions in interpolation expressions. The following code is perfectly valid;

{{ getTheCurrentDate() }}

As long as the component has a function called getTheCurrentDate, Angular will invoke it and display the result. You should avoid calling functions in interpolation expressions where possible, as Angular will evaluate them more frequently than you might expect, which can hurt performance when the functions do a lot of work. Instead, favour properties that do not change frequently.
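One way to follow that advice is to compute a value when the data actually changes and bind the template to a plain property. A sketch of the pattern (the component and member names here are illustrative, and the class is shown without its Angular decorator for brevity);

```typescript
// Expose a precomputed property for the template ({{remainingCount}})
// instead of a function the template would call on every check.
class TodoListComponent {
  todos: string[] = ["a", "b", "c"];
  remainingCount = 0;

  // Call this whenever the todos collection changes, rather than
  // recounting inside the template on every change-detection run.
  updateRemaining(completed: number): void {
    this.remainingCount = this.todos.length - completed;
  }
}

const cmp = new TodoListComponent();
cmp.updateRemaining(1);
```

The template then reads {{remainingCount}}, which is a cheap property access rather than repeated work.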

Angular has a dependency on another open source project, called Zone.js. Zone.js intercepts asynchronous operations and informs Angular when changes may have occurred, which drives change detection. A full discussion of Zone.js is out of scope for this post.

Pipes

Pipes are usually used in conjunction with interpolation. Pipes, known as filters in Angular 1, are used to format data.

For example;

<div>
    {{todoStore.todos | json}}
</div>

The above code takes the array of todos, converts it to a JSON string, and displays the result in the view.

The following are pipes built in to Angular;
* Async – Automatically subscribes to observables, which are used extensively in Angular (Angular has a dependency on RxJS).
* Date – Used to format a date object. For example: Sunday 21st August 2016.
* Percent – Used to display a number as a percentage; you can pass in the number of decimal places to show.
* JSON – Used to “toString” JavaScript objects.
* Currency – Used to format numbers as currencies ($10.99 or £10.99).

Notice there is no OrderBy pipe in the list, like there was in Angular 1. That is because ordering was a particular pain point in Angular 1. Because of the way Angular 1 detected changes, the ordering would often run multiple times, which killed performance when working with large data sets. The Angular team have excluded the OrderBy pipe in favour of ordering being done by your own code, within the component or service.
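Doing the ordering in your own code is straightforward. A sketch of what that might look like for the todo example (the Todo shape and sort key are illustrative);

```typescript
// With no OrderBy pipe, sort once in component or service code and
// bind the template to the already-sorted array.
interface Todo {
  title: string;
  completed: boolean;
}

function sortByTitle(todos: Todo[]): Todo[] {
  // Return a new sorted array so the original input is left untouched.
  return [...todos].sort((a, b) => a.title.localeCompare(b.title));
}

const sorted = sortByTitle([
  { title: "walk the dog", completed: false },
  { title: "buy milk", completed: true },
]);
```

Because the sort runs only when you call it, you control exactly how often the work happens, instead of it being repeated on every change-detection pass.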

There are some other, less important pipes, but generally there are significantly fewer built-in pipes than in previous versions. This was a deliberate choice to keep the code base lean and clean.

Structural Directives

Structural directives directly affect the structure of the DOM (Document Object Model), or the elements within it. Structural directives can add, remove, and modify DOM elements.

The most commonly used structural directives are;

  • ngFor, which is used to loop through items in an array
  • ngIf, which adds/removes an element to/from the DOM depending on the result of an expression

The syntax for structural directives is different from the norm. Structural directives are prefixed with an asterisk (*). Take the following code;

<footer class="footer" *ngIf="todoStore.todos.length > 0">
    <span class="todo-count"><strong>{{todoStore.getRemaining().length}}</strong> {{todoStore.getRemaining().length == 1 ? 'item' : 'items'}} left</span>
    <button class="clear-completed" *ngIf="todoStore.getCompleted().length > 0" (click)="removeCompleted()">Clear completed</button>
</footer>

The structural directive is shown on the first line: *ngIf="todoStore.todos.length > 0". If the expression evaluates to true, the footer is rendered, and all expressions within it are evaluated too. When the expression evaluates to false, the DOM element and all of its children are thrown away, removed from the DOM. This saves Angular from having to evaluate code that the user is never going to see.

Below is an example of ngFor;

<ul class="todo-list">
    <li *ngFor="let todo of todoStore.todos">
        <app-todo [todo]="todo"></app-todo>   
    </li>
</ul>

On our component, we have a collection of todo items, which contains zero, one, or more items. For each todo in the collection, a new <li> is created. Angular scopes the <li> to the todo, so that any child element within the <li> can make use of it. In this example, the todo is passed to another component, called TodoComponent, whose responsibility is rendering a single todo to the view.

In a nutshell, ngFor is a loop. Each iteration of the loop creates a new element which “knows” about the current item.
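The loop described above can be sketched in plain TypeScript. This is an illustration of the concept only, not Angular's actual rendering mechanism;

```typescript
// Illustration only: *ngFor conceptually maps each item in the
// collection to a rendered element that "knows" about its own item.
interface Todo {
  title: string;
}

function renderList(todos: Todo[]): string[] {
  // One <li> per todo; each element is scoped to its own item.
  return todos.map((todo) => `<li>${todo.title}</li>`);
}

const items = renderList([{ title: "one" }, { title: "two" }]);
```

Angular's real implementation is far more sophisticated (it reuses DOM nodes as the collection changes, rather than rebuilding strings), but the mental model of "one scoped element per item" holds.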

Attribute Directives

Next, we have attribute directives, which are responsible for changing the appearance of DOM elements. When using the square-bracket syntax, it “feels” like manipulating the DOM in JavaScript.

Consider the following code sample;

<div>
    <p [style.background]="backgroundColor">Hello, AO.com!</p>
</div>

The backgroundColor is a property on the component. In code, we can set the value of backgroundColor to a color (yellow, or #ffff00, for example). The end result is an inline style being applied, with the background property set to yellow.


You are not limited to manipulating the style of the element; you can change just about any property of the element.

Take another example;

<li *ngFor="let todo of todoStore.todos" [class.completed]="todo.completed" [class.editing]="todo.editing">
    <app-todo [todo]="todo"></app-todo>   
</li>

In this example, we are adding the CSS classes completed and editing to the element when todo.completed or todo.editing is true. Likewise, when false, the classes are removed.

Likewise, we can control the checked state of a checkbox (an input field with a type of checkbox).

<input class="toggle" type="checkbox" (click)="toggleCompletion(todo)" [checked]="todo.completed">

When todo.completed is true, the Check Box is checked, otherwise, it is not checked.

Event Bindings

Event bindings are used to listen for user interactions, or events that get raised when the user interacts with the page or an element. Some events you may be interested in responding to: clicks, mouse moves, focus, blur, etc.

Event bindings are added to DOM elements and are denoted with brackets (parentheses). Angular keeps an internal list of all the events it understands. If you try to listen for an event that it doesn’t recognise, Angular will look to your code instead (you can define your own custom events).

When an event occurs, typically a function on your component is invoked. You can pass arguments to the function, such as state, the event object, DOM elements, arbitrary values, and more. It is also possible to pass state from one component to another using this mechanism.

Take the following example;

<label (dblclick)="editTodo(todo)">{{todo.title}}</label>

When the user double clicks on the label, the editTodo function is invoked. In this example, this element has a todo in scope, which is passed as an argument to the function.

Another example;

<input class="edit" *ngIf="todo.editing" [(ngModel)]="todo.title" (blur)="stopEditing(todo, todo.title)" (keyup.enter)="updateEditingTodo(todo, todo.title)"
  (keyup.escape)="cancelEditingTodo(todo)">

In this example, we are responding to the blur event (when the control loses focus) and key-presses (enter, escape) so we can perform an appropriate action when the user presses these keys on their keyboard.

Two way data binding

Two-way data binding is primarily used when binding input controls to properties on the component. When a user interacts with a form (types data into a text field), that change is automatically reflected on the component. Likewise, if a property that is bound to an input field is changed in code (either by the component itself, or by something else, say an Observable resolving), that change is reflected in the view immediately. Two-way data binding is a mechanism for synchronising data between the view and the component.

To utilise two way data binding, you use ngModel. The syntax for correct use of ngModel is known as the banana in a box syntax (click the link for a full explanation as to how the name came about).

In reality, the banana in a box syntax is an amalgamation of an attribute directive and an event binding. It is syntactic sugar for the two mechanisms being used together, which helps you write less code.

Take the following example;

<input class="new-todo" placeholder="What needs to be done?" autofocus="" [(ngModel)]="newTodoText" (keyup.enter)="addTodo()">

When the value entered into the input field changes, the property newTodoText is automatically updated.

Now consider the long hand version of the same code;

<input class="new-todo" placeholder="What needs to be done?" autofocus="" [ngModel]="newTodoText" (ngModelChange)="newTodoText=$event" (keyup.enter)="addTodo()">

A separate event binding (ngModelChange) is needed to assign the new value ($event) to the property on the component.

By combining the attribute directive and event binding, the code is more readable. The less code we can write, the easier our application will be to maintain over time.

Lifecycle Hooks

Lifecycle hooks can be thought of as events that are raised at key points during your component's lifecycle. Lifecycle hooks are callback functions that Angular invokes as the component passes through various transitional stages.

Put another way, lifecycle hooks help you to execute code at the right times during the initialisation and destruction of your components.

Let's say your component needs to request some data from an HTTP endpoint. Where do you put code like that? In the constructor? No; ideally the constructor should be kept as clean as possible and should, at most, initialise variables. Having your constructor do a bunch of work will probably make your application run more slowly. What you really need is a “load event”, and lifecycle hooks provide exactly that: they help you execute code at the right time.

The most commonly used lifecycle hooks;
* ngOnInit: The “load event”
* ngDoCheck: Raised when Angular is running its internal change detection
* ngAfterViewInit: Raised when the component view has finished initialising
* ngOnDestroy: Called when the class is being destroyed/cleaned up

Side note; TypeScript has a construct called interfaces. If you come from a .NET background, interfaces in TypeScript are the same as what you already know. For everybody else, interfaces can be best thought of as “contracts”, which promise that the class that implements the interface also implements all the properties/functions defined on the interface. If a class implements an interface, and that interface has a function defined called ngOnInit, then the compiler can guarantee that function has been implemented on the class. (If not, a compile time error is emitted). Angular exposes an interface, called OnInit, that has a single function defined, called ngOnInit. Implementing this interface is best practice, but not mandatory. Just having the function in your class is good enough.
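The "contract" idea can be shown in isolation. In this sketch, OnInitLike stands in for Angular's real OnInit interface, and ReportComponent is a hypothetical class; the point is that the compiler rejects any implementing class that omits ngOnInit;

```typescript
// Any class implementing this interface must provide an ngOnInit
// function, or compilation fails.
interface OnInitLike {
  ngOnInit(): void;
}

class ReportComponent implements OnInitLike {
  loaded = false;

  ngOnInit(): void {
    // "load event" logic goes here
    this.loaded = true;
  }
}

const report = new ReportComponent();
report.ngOnInit(); // Angular would invoke this for you at the right time
```

Deleting the ngOnInit method from ReportComponent would produce a compile-time error, which is exactly the guarantee the interface provides.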

Example usage:

import { Component, OnInit, Input } from '@angular/core';
import { TodoStore, Todo } from '../services/todo.service';

@Component({
  selector: 'app-todo',
  template: require('./todo.component.html')
})
export class TodoComponent implements OnInit {
  ngOnInit() {
      //"load" logic goes here
  }
}

Some time after the instance of the component has been created, Angular will invoke the ngOnInit function, assuming it exists, enabling you to run custom logic (and call that HTTP endpoint, if that is what you need to do).

Routing

Routing is how the user gets around your Angular application. Routing is navigation between views/components. State, in the form of parameters, can be passed between views and can be injected into your components. A route is a combination of a path and a component. Routes are referred to by their path in your markup using the routerLink attribute directive.

Example usage;

<a routerLink="home" routerLinkActive="active">Home</a> | <a routerLink="about" routerLinkActive="active">About</a>

The above code sample also shows the routerLinkActive attribute directive, which works in conjunction with routerLink: when the route referenced by the directive is active, the given CSS class (in this case, active) is applied to the element. Typically, you would use this to change the visual appearance of the DOM element, indicating to the user which page they are on.

Routing in Angular has been on a roller-coaster ride of changes throughout the alpha, beta and release candidate (RC) phases. The current iteration of the router uses a hierarchical router configuration approach to define routes, child routes, and to enable deep linking. For the sake of simplicity, we won’t discuss hierarchical routing/deep linking in this post, but be aware that it can be achieved by having multiple route configuration files at different levels of your application. Child routes are extensions of their parent's route.

Router configuration

To define the main routes of the application, we create a RouterConfig object, which is an array of routes. Most routes consist of a path and a component. Other routes can have additional properties; for example, wildcard routes decide what to do when the user navigates to a path that does not exist (using the redirectTo property).

Example usage;

import { RouterConfig } from '@angular/router';
import { HomeComponent } from './components/home/home.component';
import { AboutComponent } from './components/about/about.component';

export const routes: RouterConfig = [
    { path: '', redirectTo: 'home', pathMatch: 'full' },
    { path: 'home', component: HomeComponent },
    { path: 'about', component: AboutComponent },
    { path: '**', redirectTo: 'home' }
];

Here we have four routes defined;

  • Default route: No path is defined, so when the user hits the page with no route parameters, they are redirected to ‘home’ (defined next).
  • Home route: Displays the HomeComponent.
  • About route: Displays the AboutComponent.
  • Wildcard route (**): When the route is not recognised, the user is redirected to the ‘home’ route. This is a catch-all.

The RouterConfig object is then referred to in the application's bootstrapper (the bootstrap function, which loads the application and its component parts).

Example usage (some code omitted for brevity);

import { provideRouter } from '@angular/router';
import { routes } from './routes';

bootstrap(AppComponent, [
    ...HTTP_PROVIDERS,
    FormBuilder,
    TodoStore,
    provideRouter(routes)
]);

The provideRouter function is exposed by the Angular router, and takes the RouterConfig object we just created.

More router directives

Angular also helps control navigation with several additional directives.

  • routerOutlet: Tells Angular where to put the view/component (used within the application shell, typically the AppComponent).
  • CanActivate: Allows navigation to be cancelled (useful for restricting access to certain pages under certain circumstances, like trying to access a page when the user is not logged in).
  • CanDeactivate: Runs before the route is changed, and can also cancel navigation (useful when, for example, prompting the user to save changes they have made to a form).

Angular does not “just know” about these directives, as everything router-related lives within its own module. You must import the directives into your AppComponent's directives array;

import { Component } from '@angular/core';
import { ROUTER_DIRECTIVES } from '@angular/router';

@Component({
    selector: 'app',
    directives: [...ROUTER_DIRECTIVES],
    template: require('./app.component.html'),
    styleUrls: ['./app.component.css']
})
export class AppComponent {

}

The ROUTER_DIRECTIVES constant is a shortcut (an array, which includes all the directives previously discussed) to keep the code a bit cleaner.

Services

To end on a lighter note, services are reasonably straightforward in Angular. Services are classes that have the @Injectable decorator applied to them. The @Injectable decorator tells Angular that the class can be injected into components, directives, other services, and so on.

Services are used to share and abstract common functionality between one or more components. Services can help reduce code complexity and duplication. Depending on the configuration of your application (remember the hierarchical dependency injection?), services can be singletons and maintain state over the lifetime of the application. Services can also have their own dependencies, which are handled in the same way as the dependencies of your components.

Example usage;

@Injectable()
export class TodoStore {
    //Implementation omitted   
}

I like to use services to add an additional layer of abstraction over some built-in Angular services. Why? If the Angular API changes (and it has changed a lot in the past), I have a single place to make changes, and can get back up and running more quickly.
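A sketch of what that abstraction might look like. The names here (ApiClient, FrameworkHttp) are hypothetical, and the framework client is reduced to a synchronous stand-in so the example is self-contained;

```typescript
// Wrap the framework-provided client behind our own service so that
// framework API changes are absorbed in one place.
interface FrameworkHttp {
  get(url: string): string; // simplified stand-in for the real client
}

class ApiClient {
  constructor(private http: FrameworkHttp) {}

  // Components depend on this method; if the underlying framework API
  // changes, only this class needs updating.
  fetchTodos(): string {
    return this.http.get("/api/todos");
  }
}

const fakeHttp: FrameworkHttp = { get: (url) => `GET ${url}` };
const api = new ApiClient(fakeHttp);
```

Components then inject ApiClient rather than the framework client directly, so a breaking change in the framework touches one file instead of many.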

Summary

Angular 2 is an opinionated web application framework for developing single page applications (SPAs), and is actively developed by Google. Angular 2 is a move away from the original framework; it introduces new syntax and a “new way of doing things”. Important problems with past versions have been overcome (change detection/dirty checking was a major flaw of Angular 1) and the framework is taking a new direction. Whilst it is possible to write Angular 2 code in many ways, the preferred approach is to use TypeScript. There is a learning curve involved, and the tooling especially is very much in its infancy, but once over the initial hump Angular 2 can be a fantastic framework when used in the manner it was intended.

Angular 2 server side paging using ng2-pagination

Angular 2 is not quite out of beta yet (Beta 12 at the time of writing), but I’m in the full flow of developing with it for production use. A common feature, for good or bad, is to have lists/tables of data that the user can navigate through page by page, or even filter, to help find something useful.

Angular 2 doesn’t come with any out-of-the-box functionality to support this, so we have to implement it ourselves. And, of course, what that means today is using a third-party package!

To make this happen, we will utilise ng2-pagination, a great plugin, and Web API.

I’ve chosen Web API because that is what I’m using in my production app, but you could easily use ExpressJS or (insert your favourite RESTful framework here).

Checklist

Here is a checklist of what we will do to make this work;

  • Create a new Web API project (you could very easily use an existing project)
  • Enable CORS, as we will be using a separate development server for the Angular 2 project
  • Download the Angular 2 quick start, ng2-pagination and connect the dots
  • Expose some sample data for testing

I will try to stick with this order.

Web API (for the back end)

Open up Visual Studio (free version here) and create a new Web API project. I prefer to create an Empty project and add Web API.

Add a new controller, called DataController and add the following code;

public class DataModel
{
    public int Id { get; set; }
    public string Text { get; set; }
}

[RoutePrefix("api/data")]
public class DataController : ApiController
{
    private readonly List<DataModel> _data;

    public DataController()
    {
        _data = new List<DataModel>();

        for (var i = 0; i < 10000; i++)
        {
            _data.Add(new DataModel {Id = i + 1, Text = "Data Item " + (i + 1)});
        }
    }

    [HttpGet]
    [Route("{pageIndex:int}/{pageSize:int}")]
    public PagedResponse<DataModel> Get(int pageIndex, int pageSize)
    {
        return new PagedResponse<DataModel>(_data, pageIndex, pageSize);
    }
}

We don’t need to connect to a database to make this work, so we just dummy up 10,000 “items” and page through those instead. If you choose to use Entity Framework, the code is exactly the same, except you initialise a DbContext and query a Set instead.

PagedResponse

Add the following code;

public class PagedResponse<T>
{
    public PagedResponse(IEnumerable<T> data, int pageIndex, int pageSize)
    {
        Data = data.Skip((pageIndex - 1)*pageSize).Take(pageSize).ToList();
        Total = data.Count();
    }

    public int Total { get; set; }
    public ICollection<T> Data { get; set; }
}

PagedResponse exposes two properties: Total and Data. Total is the total number of records in the set, and Data is the subset of data itself. We have to include the total number of items in the set so that ng2-pagination knows how many pages there are in total. It will then generate some links/buttons to enable the user to skip forward several pages at once (or as many as required).
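
The paging arithmetic is easy to get wrong (note the pageIndex - 1 in the code above, because page numbers are 1-based). As a quick sanity check, here is the same Skip/Take logic sketched in TypeScript, using a hypothetical pageOf function;

```typescript
// Mirror of the C# PagedResponse logic: page numbers are 1-based,
// so page 1 skips 0 items, page 2 skips pageSize items, and so on.
function pageOf<T>(data: T[], pageIndex: number, pageSize: number): { total: number; data: T[] } {
    const skip = (pageIndex - 1) * pageSize;
    return {
        total: data.length,                      // total records in the whole set
        data: data.slice(skip, skip + pageSize)  // just the requested page
    };
}

// Page 2 of [1..10] with a page size of 3 is [4, 5, 6]
const items = Array.from({ length: 10 }, (_, i) => i + 1);
const page = pageOf(items, 2, 3);
```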

Enable CORS (Cross Origin Resource Sharing)

To enable communication between our client and server, we need to enable Cross Origin Resource Sharing (CORS) as they will be (at least during development) running under different servers.

To enable CORS, first install the following package (using NuGet);

Microsoft.AspNet.WebApi.Cors

Now open up WebApiConfig.cs and add the following to the Register method;

var cors = new EnableCorsAttribute("*", "*", "*");
config.EnableCors(cors);
config.MessageHandlers.Add(new PreflightRequestsHandler());

And add a new nested class, as shown;

public class PreflightRequestsHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (request.Headers.Contains("Origin") && request.Method.Method == "OPTIONS")
        {
            var response = new HttpResponseMessage {StatusCode = HttpStatusCode.OK};
            response.Headers.Add("Access-Control-Allow-Origin", "*");
            response.Headers.Add("Access-Control-Allow-Headers", "Origin, Content-Type, Accept, Authorization");
            response.Headers.Add("Access-Control-Allow-Methods", "*");
            var tsc = new TaskCompletionSource<HttpResponseMessage>();
            tsc.SetResult(response);
            return tsc.Task;
        }
        return base.SendAsync(request, cancellationToken);
    }
}

Now when Angular makes a request for data, it will send an OPTIONS request first to check access. This request will be intercepted above, and we will reply with an Access-Control-Allow-Origin header with the value “any” (represented with an asterisk).
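
The handler’s decision can be sketched as follows (a simplified model with hypothetical types, not the actual Web API classes);

```typescript
// Simplified model of the PreflightRequestsHandler decision: a request is
// treated as a CORS preflight when it is an OPTIONS request carrying an
// Origin header; anything else flows through to the normal pipeline.
interface SimpleRequest {
    method: string;
    headers: Record<string, string>;
}

function handlePreflight(request: SimpleRequest): Record<string, string> | null {
    if (request.method === "OPTIONS" && "Origin" in request.headers) {
        return {
            "Access-Control-Allow-Origin": "*",
            "Access-Control-Allow-Headers": "Origin, Content-Type, Accept, Authorization",
            "Access-Control-Allow-Methods": "*"
        };
    }
    return null; // not a preflight; let the request continue as normal
}
```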

Format JSON response

If, like me, you hate pascal-case JSON (ThisIsPascalCase), you will want to add the following code to your Application_Start method;

var formatters = GlobalConfiguration.Configuration.Formatters;
var jsonFormatter = formatters.JsonFormatter;
var settings = jsonFormatter.SerializerSettings;
settings.Formatting = Formatting.Indented;
settings.ContractResolver = new CamelCasePropertyNamesContractResolver();

Now let's set up the front end.

Front-end Angular 2 and ng2-pagination

If you head over to the Angular 2 quickstart, you will see there is a link to download the quick start source code. Go ahead and do that.

I’ll wait here.

OK, you're done? Let's continue.

Install ng2-pagination, and optionally bootstrap and jquery if you want this to look pretty. Skip those two if you don't mind a plain look.

npm install --save-dev ng2-pagination bootstrap jquery

Open up index.html and add the following scripts to the header;

<script src="node_modules/angular2/bundles/http.dev.js"></script>
<script src="node_modules/ng2-pagination/dist/ng2-pagination-bundle.js"></script>

<script src="node_modules/jquery/dist/jquery.js"></script>
<script src="node_modules/bootstrap/dist/js/bootstrap.js"></script>

Also add a link to the bootstrap CSS file, if required.

<link rel="stylesheet" href="node_modules/bootstrap/dist/css/bootstrap.css">

Notice we pulled in Http? We will use that for querying our back-end.

Add a new file to the app folder, called app.component.html. We will use this instead of having all of our markup and TypeScript code in the same file.

ng2-pagination

Open app.component.ts, delete everything, and add the following code instead;

import {Component, OnInit} from 'angular2/core';
import {Http, HTTP_PROVIDERS} from 'angular2/http';
import {Observable} from 'rxjs/Rx';
import 'rxjs/add/operator/map';
import 'rxjs/add/operator/do';
import {PaginatePipe, PaginationService, PaginationControlsCmp, IPaginationInstance} from 'ng2-pagination';

export interface PagedResponse<T> {
    total: number;
    data: T[];
}

export interface DataModel {
    id: number;
    data: string;
}

@Component({
    selector: 'my-app',
    templateUrl: './app/app.component.html',
    providers: [HTTP_PROVIDERS, PaginationService],
    directives: [PaginationControlsCmp],
    pipes: [PaginatePipe]
})
export class AppComponent implements OnInit {
    private _data: Observable<DataModel[]>;
    private _page: number = 1;
    private _total: number;

    constructor(private _http: Http) {

    }
}

A quick walk-through of what I’ve changed;

  • Removed the inline HTML and linked to the app.component.html file you created earlier. (This leads to a cleaner separation of concerns.)
  • Imported Observable, Map, and Do from RxJS. This will enable us to write cleaner async code without having to rely on promises.
  • Imported a couple of classes from angular2/http so that we can use the native Http client, and added HTTP_PROVIDERS as a provider.
  • Imported various objects required by ng2-pagination, and added them to providers, directives and pipes so we can access them through our view (which we will create later).
  • Defined two interfaces, PagedResponse<T> and DataModel. You may notice these are identical to those we created in our Web API project.
  • Added some variables, which we will discuss shortly.

We’ve got the basics in place that we need to call our data service and pass the data over to ng2-pagination. Now let's actually implement that process.

Retrieving data using Angular 2 Http

Eagle-eyed readers may have noticed that I’ve pulled in and implemented the OnInit interface, but not implemented the ngOnInit method yet.

Add the following method;

ngOnInit() {
    this.getPage(1);
}

When the page loads and is initialised, we want to automatically grab the first page of data. The above method will make that happen.

Note: If you are unfamiliar with ngOnInit, please read this helpful documentation on lifecycle hooks.

Now add the following code;

getPage(page: number) {
    this._data = this._http.get("http://localhost:52472/api/data/" + page + "/10")
        .do((res: any) => {
            this._total = res.json().total;
            this._page = page;
        })
        .map((res: any) => res.json().data);
}

The above method does the following;

  • Calls out to our Web API (you may need to change the port number depending on your set up)
  • Passes in two values, the first being the current page number, the second being the number of results to retrieve
  • Stores a reference to the resulting Observable in the _data variable. Once the request is complete, do is executed.
  • do takes a function (an arrow function in this case) that is executed for each item in the collection received from the server. We’ve set up our Web API method to return a single object, of type PagedResponse, so this function will only be executed once. We take this opportunity to update the current page (which is the same as the page number passed into the method in the first place) and the _total variable, which stores the total number of items in the entire set (not just the paged subset).
  • map is then used to parse the JSON response and pull out just the data. The way that RxJS works is that an event will be emitted to notify us that the collection has changed.
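
To make the do/map split concrete, here is the same flow modelled with plain functions (hypothetical names, no RxJS involved): the side effect records the total, and the transform returns just the data;

```typescript
// do() runs a side effect per emission; map() transforms the emission.
// Here the single emission is the parsed HTTP response, whose shape
// matches our PagedResponse interface: { total, data }.
interface Paged<T> { total: number; data: T[]; }

function processResponse<T>(
    paged: Paged<T>,
    sideEffect: (p: Paged<T>) => void   // the "do" step
): T[] {                                 // the "map" step: keep only the data
    sideEffect(paged);
    return paged.data;
}

let total = 0;
const rows = processResponse(
    { total: 10000, data: [{ id: 1, text: "Data Item 1" }] },
    p => { total = p.total; }            // record the total, like this._total
);
```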

Implement the view

Open app.component.html and add the following code;

<div class="container">
    <table class="table table-striped table-hover">
        <thead>
            <tr>
                <th>Id</th>
                <th>Text</th>
            </tr>
        </thead>
        <tbody>
            <tr *ngFor="#item of _data | async | paginate: { id: 'server', itemsPerPage: 10, currentPage: _page, totalItems: _total }">
                <td>{{item.id}}</td>
                <td>{{item.text}}</td>
            </tr>
        </tbody>
    </table>    
    <pagination-controls (pageChange)="getPage($event)" id="server"></pagination-controls>
</div>

There are a few key points of interest here;

  • On our repeater (*ngFor), we’ve used the async pipe. Under the hood, Angular subscribes to the Observable we pass to it and resolves the value automatically (asynchronously) when it becomes available.
  • We use the paginate pipe, and pass in an object containing the current page and the total number of items, so ng2-pagination can render itself properly.
  • Add the pagination-controls directive, which calls back to our getPage function when the user clicks a page number that they are not currently on.

As we know the current page and the number of items per page, we can efficiently ask the Web API to retrieve only the specific data we need.

So, why bother?

Some benefits;

  • Potentially reduced initial page load time, because less data has to be retrieved from the database, serialized, and transferred over the network.
  • Reduced memory usage on the client. Otherwise, all 10,000 records would have to be held in memory!
  • Reduced processing time. As only the paged data is stored in memory, there are far fewer records to iterate through!

Drawbacks;

  • Lots of small requests for data could reduce server performance (due to chattiness). Using an effective caching strategy is key here.
  • User experience could be degraded. If the server is slow to respond, the client may appear to be slow and could frustrate the user.

Summary

Using ng2-pagination, and with help from RxJS, we can easily add pagination to our pages. Doing so has the potential to reduce server load and initial page render time, and thus can result in a better user experience. A good caching strategy and server response times are important considerations when going to production.

Create a RESTful API with authentication using Web API and JWT

Web API is a feature of the ASP .NET framework that dramatically simplifies building RESTful (REST like) HTTP services that are cross platform and device and browser agnostic. With Web API, you can create endpoints that can be accessed using a combination of descriptive URLs and HTTP verbs. Those endpoints can serve data back to the caller as either JSON or XML that is standards compliant. With JSON Web Tokens (JWT), which are typically stateless, you can add an authentication and authorization layer, enabling you to restrict access to some or all of your API.

The purpose of this tutorial is to develop the beginnings of a Book Store API, using Microsoft Web API (with C#), which authenticates and authorizes each request, exposes OAuth2 endpoints, and returns data about books and reviews for consumption by the caller. The caller in this case will be Postman, a useful utility for querying APIs.

In a follow up to this post we will write a front end to interact with the API directly.

Set up

Open Visual Studio (I will be using Visual Studio 2015 Community edition, you can use whatever version you like) and create a new Empty project, ensuring you select the Web API option;

Where you save the project is up to you, but I will create my projects under C:\Source. For simplicity you might want to do the same.

Next, packages.

Packages

Open up the packages.config file. Some packages should have already been added to enable Web API itself. Please add the following additional packages;

install-package EntityFramework
install-package Microsoft.AspNet.Cors
install-package Microsoft.AspNet.Identity.Core
install-package Microsoft.AspNet.Identity.EntityFramework
install-package Microsoft.AspNet.Identity.Owin
install-package Microsoft.AspNet.WebApi.Cors
install-package Microsoft.AspNet.WebApi.Owin
install-package Microsoft.Owin.Cors
install-package Microsoft.Owin.Security.Jwt
install-package Microsoft.Owin.Host.SystemWeb
install-package System.IdentityModel.Tokens.Jwt
install-package Thinktecture.IdentityModel.Core

These are the minimum packages required to provide data persistence, enable CORS (Cross-Origin Resource Sharing), and enable generating and authenticating/authorizing Jwt’s.

Entity Framework

We will use Entity Framework for data persistence, using the Code-First approach. Entity Framework will take care of generating a database, adding tables, stored procedures and so on. As an added benefit, Entity Framework will also upgrade the schema automatically as we make changes. Entity Framework is perfect for rapid prototyping, which is what we are in essence doing here.

Create a new IdentityDbContext called BooksContext, which will give us Users, Roles and Claims in our database. I like to add this under a folder called Core, for organization. We will add our entities to this later.

namespace BooksAPI.Core
{
    using Microsoft.AspNet.Identity.EntityFramework;

    public class BooksContext : IdentityDbContext
    {

    }
}

Claims are used to describe useful information that the user has associated with them. We will use claims to tell the client which roles the user has. The benefit of roles is that we can prevent access to certain methods/controllers to a specific group of users, and permit access to others.

Add a DbMigrationsConfiguration class and allow automatic migrations, but prevent automatic data loss;

namespace BooksAPI.Core
{
    using System.Data.Entity.Migrations;

    public class Configuration : DbMigrationsConfiguration<BooksContext>
    {
        public Configuration()
        {
            AutomaticMigrationsEnabled = true;
            AutomaticMigrationDataLossAllowed = false;
        }
    }
}

Whilst losing data at this stage is not important (we will use a seed method later to populate our database), I like to turn this off now so I do not forget later.

Now tell Entity Framework how to update the database schema using an initializer, as follows;

namespace BooksAPI.Core
{
    using System.Data.Entity;

    public class Initializer : MigrateDatabaseToLatestVersion<BooksContext, Configuration>
    {
    }
}

This tells Entity Framework to go ahead and upgrade the database to the latest version automatically for us.

Finally, tell your application about the initializer by updating the Global.asax.cs file as follows;

namespace BooksAPI
{
    using System.Data.Entity;
    using System.Web;
    using System.Web.Http;
    using Core;

    public class WebApiApplication : HttpApplication
    {
        protected void Application_Start()
        {
            GlobalConfiguration.Configure(WebApiConfig.Register);
            Database.SetInitializer(new Initializer());
        }
    }
}

Data Provider

By default, Entity Framework will configure itself to use LocalDB. If this is not desirable, say you want to use SQL Express instead, you need to make the following adjustments;

Open the Web.config file and delete the following code;

<entityFramework>
    <defaultConnectionFactory type="System.Data.Entity.Infrastructure.LocalDbConnectionFactory, EntityFramework">
        <parameters>
            <parameter value="mssqllocaldb" />
        </parameters>
    </defaultConnectionFactory>
    <providers>
        <provider invariantName="System.Data.SqlClient" type="System.Data.Entity.SqlServer.SqlProviderServices, EntityFramework.SqlServer" />
    </providers>
</entityFramework>

And add the connection string;

<connectionStrings>
    <add name="BooksContext" providerName="System.Data.SqlClient" connectionString="Server=.;Database=Books;Trusted_Connection=True;" />
</connectionStrings>

Now we’re using SQL Server directly (whatever flavour that might be) rather than LocalDB.

JSON

Whilst we’re here, we might as well configure our application to return camel-case JSON (thisIsCamelCase), instead of the default pascal-case (ThisIsPascalCase).

Add the following code to your Application_Start method;

var formatters = GlobalConfiguration.Configuration.Formatters;
var jsonFormatter = formatters.JsonFormatter;
var settings = jsonFormatter.SerializerSettings;
settings.Formatting = Formatting.Indented;
settings.ContractResolver = new CamelCasePropertyNamesContractResolver();

There is nothing worse than pascal-case JavaScript.

CORS (Cross-Origin Resource Sharing)

Cross-Origin Resource Sharing, or CORS for short, is when a client requests access to a resource (an image, or say, data from an endpoint) from an origin (domain) that is different from the domain where the resource itself originates.

This step is completely optional. We are adding in CORS support here because when we come to write our client app in subsequent posts that follow on from this one, we will likely use a separate HTTP server (for testing and debugging purposes). When released to production, these two apps would use the same host (Internet Information Services (IIS)).

To enable CORS, open WebApiConfig.cs and add the following code to the beginning of the Register method;

var cors = new EnableCorsAttribute("*", "*", "*");
config.EnableCors(cors);
config.MessageHandlers.Add(new PreflightRequestsHandler());

And add the following class (in the same file if you prefer for quick reference);

public class PreflightRequestsHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (request.Headers.Contains("Origin") && request.Method.Method == "OPTIONS")
        {
            var response = new HttpResponseMessage {StatusCode = HttpStatusCode.OK};
            response.Headers.Add("Access-Control-Allow-Origin", "*");
            response.Headers.Add("Access-Control-Allow-Headers", "Origin, Content-Type, Accept, Authorization");
            response.Headers.Add("Access-Control-Allow-Methods", "*");
            var tsc = new TaskCompletionSource<HttpResponseMessage>();
            tsc.SetResult(response);
            return tsc.Task;
        }
        return base.SendAsync(request, cancellationToken);
    }
}

In the CORS workflow, before sending a DELETE, PUT or POST request, the client sends an OPTIONS request to check whether the server will accept a request from that origin. If the request domain and server domain are not the same, the server must include various access headers that describe which domains have access. To enable access to all domains, we just respond with an origin header (Access-Control-Allow-Origin) with an asterisk, which permits access from anywhere.

The Access-Control-Allow-Headers header describes which headers the API can accept/is expecting to receive. The Access-Control-Allow-Methods header describes which HTTP verbs are supported/permitted.

See Mozilla Developer Network (MDN) for a more comprehensive write-up on Cross-Origin Resource Sharing (CORS).

Data Model

With Entity Framework configured, let's create our data structure. The API will expose books, and books will have reviews.

Under the Models folder add a new class called Book. Add the following code;

namespace BooksAPI.Models
{
    using System.Collections.Generic;

    public class Book
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public string Description { get; set; }
        public decimal Price { get; set; }
        public string ImageUrl { get; set; }

        public virtual List<Review> Reviews { get; set; }
    }
}

And add Review, as shown;

namespace BooksAPI.Models
{
    public class Review
    {
        public int Id { get; set; }    
        public string Description { get; set; }    
        public int Rating { get; set; }
        public int BookId { get; set; }
    }
}

Add these entities to the IdentityDbContext we created earlier;

public class BooksContext : IdentityDbContext
{
    public DbSet<Book> Books { get; set; }
    public DbSet<Review> Reviews { get; set; }
}

Be sure to add in the necessary using directives.

A couple of helpful abstractions

We need to abstract a couple of the classes we will be making use of, in order to keep our code clean and ensure that everything works correctly.

Under the Core folder, add the following classes;

public class BookUserManager : UserManager<IdentityUser>
{
    public BookUserManager() : base(new BookUserStore())
    {
    }
}

We will make heavy use of the UserManager<T> in our project, and we don’t want to have to initialise it with a UserStore<T> every time we want to make use of it. Whilst adding this is not strictly necessary, it does go a long way to helping keep the code clean.

Now add another class for the UserStore, as shown;

public class BookUserStore : UserStore<IdentityUser>
{
    public BookUserStore() : base(new BooksContext())
    {
    }
}

This code is really important. If we fail to tell the UserStore which DbContext to use, it falls back to some default value.

A network-related or instance-specific error occurred while establishing a connection to SQL Server

I’m not sure what the default value is; all I know is that it doesn’t seem to correspond to our application's DbContext. This code will help prevent you from tearing your hair out later, wondering why you are getting the super-helpful error message shown above.

API Controller

We need to expose some data to our client (when we write it). Let's take advantage of Entity Framework's Seed method. The Seed method will pre-populate some books and reviews automatically for us.

Instead of dropping the code in directly for this class (it is very long), please refer to the Configuration.cs file on GitHub.

This code gives us a little bit of starting data to play with, instead of having to add a bunch of data manually each time we make changes to our schema that require the database to be re-initialized (not really in our case as we have an extremely simple data model, but in larger applications this is very useful).

Books Endpoint

Next, we want to create the RESTful endpoint that will retrieve all the books data. Create a new Web API controller called BooksController and add the following;

public class BooksController : ApiController
{
    [HttpGet]
    public async Task<IHttpActionResult> Get()
    {
        using (var context = new BooksContext())
        {
            return Ok(await context.Books.Include(x => x.Reviews).ToListAsync());
        }
    }
}

With this code we are fully exploiting recent changes to the .NET framework: the introduction of async and await. Writing asynchronous code in this manner allows the thread to be released whilst data (Books and Reviews) is being retrieved from the database and converted to objects to be consumed by our code. When the asynchronous operation is complete, the code picks up where it left off and continues executing. (By which we mean the hydrated data objects are passed to the underlying framework, converted to JSON/XML, and returned to the client.)

Reviews Endpoint

We’re also going to enable authorized users to post reviews and delete reviews. For this we will need a ReviewsController with the relevant Post and Delete methods.

Create a new Web API controller called ReviewsController and add the following code;

public class ReviewsController : ApiController
{
    [HttpPost]
    public async Task<IHttpActionResult> Post([FromBody] ReviewViewModel review)
    {
        using (var context = new BooksContext())
        {
            var book = await context.Books.FirstOrDefaultAsync(b => b.Id == review.BookId);
            if (book == null)
            {
                return NotFound();
            }

            var newReview = context.Reviews.Add(new Review
            {
                BookId = book.Id,
                Description = review.Description,
                Rating = review.Rating
            });

            await context.SaveChangesAsync();
            return Ok(new ReviewViewModel(newReview));
        }
    }

    [HttpDelete]
    public async Task<IHttpActionResult> Delete(int id)
    {
        using (var context = new BooksContext())
        {
            var review = await context.Reviews.FirstOrDefaultAsync(r => r.Id == id);
            if (review == null)
            {
                return NotFound();
            }

            context.Reviews.Remove(review);
            await context.SaveChangesAsync();
        }
        return Ok();
    }
}

There are a couple of good practices in play here that we need to highlight.

The first method, Post, allows the user to add a new review. Notice the parameter for the method;

[FromBody] ReviewViewModel review

The [FromBody] attribute tells Web API to look for the data for the method argument in the body of the HTTP message that we received from the client, and not in the URL. The second parameter is a view model that wraps around the Review entity itself. Add a new folder to your project called ViewModels, add a new class called ReviewViewModel and add the following code;

public class ReviewViewModel
{
    public ReviewViewModel()
    {
    }

    public ReviewViewModel(Review review)
    {
        if (review == null)
        {
            return;
        }

        BookId = review.BookId;
        Rating = review.Rating;
        Description = review.Description;
    }

    public int BookId { get; set; }
    public int Rating { get; set; }
    public string Description { get; set; }

    public Review ToReview()
    {
        return new Review
        {
            BookId = BookId,
            Description = Description,
            Rating = Rating
        };
    }
}

We are just copying all the properties from the Review entity to the ReviewViewModel entity and vice-versa. So why bother? The first reason is to help mitigate a well-known under/over-posting vulnerability (there is a good write-up about it here) inherent in most web services. It also helps prevent unwanted information being sent to the client. With this approach we have to explicitly expose data to the client by adding properties to the view model.
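
The same whitelist idea can be sketched in a few lines of TypeScript (hypothetical names; the moderatorNotes field is invented purely to illustrate what never crosses the boundary);

```typescript
// The entity may carry fields we never want to accept from, or leak to,
// the client. The view model is an explicit whitelist.
interface ReviewEntity {
    id: number;
    bookId: number;
    rating: number;
    description: string;
    moderatorNotes: string; // internal-only field (hypothetical)
}

interface ReviewViewModel {
    bookId: number;
    rating: number;
    description: string;
}

// Explicit copy: anything not listed here simply cannot be
// over-posted by the client or leaked back out to it.
function toViewModel(entity: ReviewEntity): ReviewViewModel {
    return {
        bookId: entity.bookId,
        rating: entity.rating,
        description: entity.description
    };
}
```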

For this scenario, this approach is probably overkill, but I highly recommend it: keeping your application secure is important, as is the need to prevent leaking potentially sensitive information. A tool I’ve used in the past to simplify this mapping code is AutoMapper. I highly recommend checking it out.

Important note: In order to keep our API RESTful, we return the newly created entity (or its view model representation) back to the client for consumption, removing the need to re-fetch the entire data set.

The Delete method is trivial. We accept the Id of the review we want to delete as a parameter, then fetch the entity and finally remove it from the collection. Calling SaveChangesAsync will make the change permanent.

Meaningful response codes

We want to return useful information back to the client as much as possible. Notice that the Post method returns NotFound(), which translates to a 404 HTTP status code, if the corresponding Book for the given review cannot be found. This is useful for client side error handling. Returning Ok() will return 200 (HTTP ‘Ok’ status code), which informs the client that the operation was successful.

Authentication and Authorization Using OAuth and JSON Web Tokens (JWT)

My preferred approach for dealing with authentication and authorization is to use JSON Web Tokens (JWT). We will open up an OAuth endpoint to client credentials and return a token which describes the user's claims. For each of the user's roles we will add a claim (which could be used to control which views the user has access to on the client side).
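
The claims idea can be sketched like this (a hypothetical shape for illustration; the real claims are built server-side when the token is issued, and a JWT payload is just base64url-encoded JSON that the client can inspect);

```typescript
// One claim per role, plus the user's name. The client can read these
// from the token payload to decide which views to show.
interface Claim { type: string; value: string; }

function buildClaims(userName: string, roles: string[]): Claim[] {
    return [
        { type: "name", value: userName },
        ...roles.map(role => ({ type: "role", value: role }))
    ];
}

const claims = buildClaims("jon", ["administrator"]);
```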

We use OWIN to add our OAuth configuration into the pipeline. Add a new class to the project called Startup.cs and add the following code;

using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof (BooksAPI.Startup))]

namespace BooksAPI
{
    public partial class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            ConfigureOAuth(app);
        }
    }
}

Notice that Startup is a partial class. I’ve done that because I want to keep this class as simple as possible; as the application becomes more complicated and we add more and more middleware, this class will grow considerably. You could use a static helper class here, but the preferred method from the MSDN documentation seems to lean towards using partial classes specifically.

Under the App_Start folder add a new class called Startup.OAuth.cs and add the following code;

using System;
using System.Configuration;
using BooksAPI.Core;
using BooksAPI.Identity;
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.EntityFramework;
using Microsoft.Owin;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.DataHandler.Encoder;
using Microsoft.Owin.Security.Jwt;
using Microsoft.Owin.Security.OAuth;
using Owin;

namespace BooksAPI
{
    public partial class Startup
    {
        public void ConfigureOAuth(IAppBuilder app)
        {            
        }
    }
}

Note. When I wrote this code originally I encountered a quirk. After spending hours pulling out my hair trying to figure out why something was not working, I eventually discovered that the ordering of the code in this class is very important. If you don’t copy the code in the exact same order, you may encounter unexpected behaviour. Please add the code in the same order as described below.

OAuth secrets

First, add the following code;

var issuer = ConfigurationManager.AppSettings["issuer"];
var secret = TextEncodings.Base64Url.Decode(ConfigurationManager.AppSettings["secret"]);

  • Issuer – a unique identifier for the entity that issued the token (not to be confused with Entity Framework’s entities)
  • Secret – a secret key used to secure the token and prevent tampering

I keep these values in the Web configuration file (Web.config). To be precise, I split these values out into their own configuration file called keys.config and add a reference to that file in the main Web.config. I do this so that I can exclude just the keys from source control by adding a line to my .gitignore file.

To do this, open Web.config and change the <appSettings> section as follows;

<appSettings file="keys.config">
</appSettings>

Now add a new file to your project called keys.config and add the following code;

<appSettings>
  <add key="issuer" value="http://localhost/"/>
  <add key="secret" value="IxrAjDoa2FqElO7IhrSrUJELhUckePEPVpaePlS_Xaw"/>
</appSettings>

Adding objects to the OWIN context

We can make use of OWIN to manage instances of objects for us, on a per request basis. The pattern is comparable to IoC, in that you tell the “container” how to create an instance of a specific type of object, then request the instance using a Get<T> method.

Add the following code;

app.CreatePerOwinContext(() => new BooksContext());
app.CreatePerOwinContext(() => new BookUserManager());

The first time we request an instance of BooksContext for example, the lambda expression will execute and a new BooksContext will be created and returned to us. Subsequent requests will return the same instance.
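
The behaviour is essentially a lazily-initialised, per-request cache. A minimal sketch of the pattern, with hypothetical names (the real OWIN context keys by type rather than by string);

```typescript
// Register a factory per key; the factory runs on the first get(),
// and the same instance is returned for the rest of the "request".
class RequestContext {
    private factories = new Map<string, () => unknown>();
    private instances = new Map<string, unknown>();

    register<T>(key: string, factory: () => T): void {
        this.factories.set(key, factory);
    }

    get<T>(key: string): T {
        if (!this.instances.has(key)) {
            const factory = this.factories.get(key);
            if (!factory) throw new Error(`No factory registered for ${key}`);
            this.instances.set(key, factory()); // lazy creation, first call only
        }
        return this.instances.get(key) as T;
    }
}

// Usage: the factory runs once; both gets return the same instance.
const ctx = new RequestContext();
let created = 0;
ctx.register("BooksContext", () => { created++; return { name: "BooksContext" }; });
const first = ctx.get("BooksContext");
const second = ctx.get("BooksContext");
```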

Important note: the lifecycle of these object instances is per-request. As soon as the request completes, the instances are cleaned up (disposed, if they implement IDisposable).
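For completeness, here is a hedged sketch of how a registered instance might later be retrieved inside a Web API controller. The GetOwinContext() extension method comes from the Microsoft.AspNet.WebApi.Owin package, and the generic Get&lt;T&gt;() extension from Microsoft.AspNet.Identity.Owin; the controller itself is illustrative only, not part of the project;

```csharp
using System.Linq;
using System.Net.Http;
using System.Web.Http;
using Microsoft.AspNet.Identity.Owin;

public class ExampleController : ApiController
{
    public IHttpActionResult Get()
    {
        // Both calls return the per-request instances created by the
        // factory lambdas registered with CreatePerOwinContext.
        var db = Request.GetOwinContext().Get<BooksContext>();
        var userManager = Request.GetOwinContext().Get<BookUserManager>();

        return Ok(db.Books.ToList());
    }
}
```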

Enabling Bearer Authentication/Authorization

To enable bearer authentication, add the following code;

app.UseJwtBearerAuthentication(new JwtBearerAuthenticationOptions
{
    AuthenticationMode = AuthenticationMode.Active,
    AllowedAudiences = new[] { "Any" },
    IssuerSecurityTokenProviders = new IIssuerSecurityTokenProvider[]
    {
        new SymmetricKeyIssuerSecurityTokenProvider(issuer, secret)
    }
});

The key takeaways of this code;

  • State who the audience is (we’re specifying “Any”, as this is a required field, but we’re not fully implementing audience validation).
  • State who is responsible for generating the tokens. Here we’re using SymmetricKeyIssuerSecurityTokenProvider and passing it our secret key to prevent tampering. We could use the X509CertificateSecurityTokenProvider, which uses an X509 certificate to secure the token, but I’ve found these to be overly complex in the past and I prefer a simpler implementation.

This code adds JWT bearer authentication to the OWIN pipeline.

Enabling OAuth

We need to expose an OAuth endpoint so that the client can request a token (by passing a user name and password).

Add the following code;

app.UseOAuthAuthorizationServer(new OAuthAuthorizationServerOptions
{
    AllowInsecureHttp = true,
    TokenEndpointPath = new PathString("/oauth2/token"),
    AccessTokenExpireTimeSpan = TimeSpan.FromMinutes(30),
    Provider = new CustomOAuthProvider(),
    AccessTokenFormat = new CustomJwtFormat(issuer)
});

Some important notes about this code;

  • We’re going to allow insecure HTTP requests whilst we are in development mode. You might want to disable this using an #if DEBUG directive so that you don’t allow insecure connections in production.
  • Open an endpoint under /oauth2/token that accepts POST requests.
  • When generating a token, make it expire after 30 minutes (1800 seconds).
  • We will use our own provider, CustomOAuthProvider, and formatter, CustomJwtFormat, to take care of authentication and building the actual token itself.

We need to write the provider and formatter next.

Formatting the JWT

Create a new class under the Identity folder called CustomJwtFormat.cs. Add the following code;

namespace BooksAPI.Identity
{
    using System;
    using System.Configuration;
    using System.IdentityModel.Tokens;
    using Microsoft.Owin.Security;
    using Microsoft.Owin.Security.DataHandler.Encoder;
    using Thinktecture.IdentityModel.Tokens;

    public class CustomJwtFormat : ISecureDataFormat<AuthenticationTicket>
    {
        private static readonly byte[] _secret = TextEncodings.Base64Url.Decode(ConfigurationManager.AppSettings["secret"]);
        private readonly string _issuer;

        public CustomJwtFormat(string issuer)
        {
            _issuer = issuer;
        }

        public string Protect(AuthenticationTicket data)
        {
            if (data == null)
            {
                throw new ArgumentNullException(nameof(data));
            }

            var signingKey = new HmacSigningCredentials(_secret);
            var issued = data.Properties.IssuedUtc;
            var expires = data.Properties.ExpiresUtc;

            return new JwtSecurityTokenHandler().WriteToken(new JwtSecurityToken(_issuer, null, data.Identity.Claims, issued.Value.UtcDateTime, expires.Value.UtcDateTime, signingKey));
        }

        public AuthenticationTicket Unprotect(string protectedText)
        {
            throw new NotImplementedException();
        }
    }
}

This is a complicated looking class, but it’s pretty straightforward. We’re just fetching all the information needed to generate the token (the claims, issued date, expiration date, and signing key), then generating the token and returning it.

Please note: Some of the code we are writing today was influenced by JSON Web Token in ASP.NET Web API 2 using OWIN by Taiseer Joudeh. I highly recommend checking it out.

The authentication bit

We’re almost there, honest! Now we want to authenticate the user. Create another class under the Identity folder, called CustomOAuthProvider.cs, and add the following code;

using System.Linq;
using System.Security.Claims;
using System.Security.Principal;
using System.Threading;
using System.Threading.Tasks;
using System.Web;
using BooksAPI.Core;
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.EntityFramework;
using Microsoft.AspNet.Identity.Owin;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.OAuth;

namespace BooksAPI.Identity
{
    public class CustomOAuthProvider : OAuthAuthorizationServerProvider
    {
        public override Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
        {
            context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] {"*"});

            var user = context.OwinContext.Get<BooksContext>().Users.FirstOrDefault(u => u.UserName == context.UserName);
            if (user == null || !context.OwinContext.Get<BookUserManager>().CheckPassword(user, context.Password))
            {
                context.SetError("invalid_grant", "The user name or password is incorrect");
                context.Rejected();
                return Task.FromResult<object>(null);
            }

            var ticket = new AuthenticationTicket(SetClaimsIdentity(context, user), new AuthenticationProperties());
            context.Validated(ticket);

            return Task.FromResult<object>(null);
        }

        public override Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context)
        {
            context.Validated();
            return Task.FromResult<object>(null);
        }

        private static ClaimsIdentity SetClaimsIdentity(OAuthGrantResourceOwnerCredentialsContext context, IdentityUser user)
        {
            var identity = new ClaimsIdentity("JWT");
            identity.AddClaim(new Claim(ClaimTypes.Name, context.UserName));
            identity.AddClaim(new Claim("sub", context.UserName));

            var userRoles = context.OwinContext.Get<BookUserManager>().GetRoles(user.Id);
            foreach (var role in userRoles)
            {
                identity.AddClaim(new Claim(ClaimTypes.Role, role));
            }

            return identity;
        }
    }
}

As we’re not checking the audience, when ValidateClientAuthentication is called we can just validate the request. When the request has a grant_type of password, which all our requests to the OAuth endpoint will have, the above GrantResourceOwnerCredentials method is executed. This method authenticates the user and creates the claims to be added to the JWT.

Testing

There are two tools you can use to test this.

Technique 1 – Using the browser

Open up a web browser, and navigate to the books URL.

You will see the list of books, displayed as XML. This is because Web API can serve up data as either XML or JSON. Personally, I prefer JSON these days.

Technique 2 (Preferred) – Using Postman

To make Web API respond with JSON, we need to send along an Accept header. The best tool to enable us to do this (for Google Chrome) is Postman. Download it and give it a go if you like.

Drop the same URL into the Enter request URL field, and click Send. Notice the response is in JSON;

This worked because Postman automatically adds the Accept header to each request. You can see this by clicking on the Headers tab. If the header isn’t there and you’re still getting XML back, just add the header as shown in the screenshot and re-send the request.

To test the delete method, change the HTTP verb to Delete and add the ReviewId to the end of the URL. For example; http://localhost:62996/api/reviews/9

Putting it all together

First, we need to restrict access to our endpoints.

Add a new file to the App_Start folder, called FilterConfig.cs and add the following code;

public class FilterConfig
{
    public static void Configure(HttpConfiguration config)
    {
        config.Filters.Add(new AuthorizeAttribute());
    }
}

And call the code from Global.asax.cs as follows;

GlobalConfiguration.Configure(FilterConfig.Configure);

Adding this code will restrict access to all endpoints (except the OAuth endpoint) to requests that have been authenticated (requests that send along a valid JWT).

You have much more fine-grained control here, if required. Instead of adding the above code, you could apply the AuthorizeAttribute to specific controllers or even specific methods. The added benefit is that you can also restrict access to specific users or specific roles;

Example code;

[Authorize(Roles = "Admin")]

The roles value (“Admin”) can be a comma-separated list. For us, restricting access to all endpoints will suffice.
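To illustrate the finer-grained options (this controller is hypothetical, not part of the project), the attribute can be applied per controller, per action, or with role restrictions;

```csharp
using System.Web.Http;

[Authorize] // every action on this controller requires an authenticated caller
public class ExampleReviewsController : ApiController
{
    [AllowAnonymous] // opts this single action out of authorization
    public IHttpActionResult Get()
    {
        return Ok();
    }

    [Authorize(Roles = "Admin,Moderator")] // comma-separated list of allowed roles
    public IHttpActionResult Delete(int id)
    {
        return Ok();
    }
}
```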

To test that this code is working correctly, simply make a GET request to the books endpoint;

GET http://localhost:62996/api/books

You should get the following response;

{
  "message": "Authorization has been denied for this request."
}

Great, it’s working. Now let’s fix that problem.

Make a POST request to the OAuth endpoint, and include the following;

  • Headers
    • Accept application/json
    • Accept-Language en-gb
    • Audience Any
  • Body
    • username administrator
    • password administrator123
    • grant_type password

Shown in the below screenshot;

Make sure you set the message type as x-www-form-urlencoded.

If you are interested, here is the raw message;

POST /oauth2/token HTTP/1.1
Host: localhost:62996
Accept: application/json
Accept-Language: en-gb
Audience: Any
Content-Type: application/x-www-form-urlencoded
Cache-Control: no-cache
Postman-Token: 8bc258b2-a08a-32ea-3cb2-2e7da46ddc09

username=administrator&password=administrator123&grant_type=password

The form data has been URL encoded and placed in the message body.

The web service should authenticate the request, and return a token (Shown in the response section in Postman). You can test that the authentication is working correctly by supplying an invalid username/password. In this case, you should get the following reply;

{
  "error": "invalid_grant"
}

This is deliberately vague to avoid giving any malicious users more information than they need.

Now to get a list of books, we need to call the endpoint passing in the token as a header.

Change the HTTP verb to GET and change the URL to; http://localhost:62996/api/books.

On the Headers tab in Postman, add the following additional headers;

Authorization Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1bmlxdWVfbmFtZSI6ImFkbWluaXN0cmF0b3IiLCJzdWIiOiJhZG1pbmlzdHJhdG9yIiwicm9sZSI6IkFkbWluaXN0cmF0b3IiLCJpc3MiOiJodHRwOi8vand0YXV0aHpzcnYuYXp1cmV3ZWJzaXRlcy5uZXQiLCJhdWQiOiJBbnkiLCJleHAiOjE0NTgwNDI4MjgsIm5iZiI6MTQ1ODA0MTAyOH0.uhrqQW6Ik_us1lvDXWJNKtsyxYlwKkUrCGXs-eQRWZQ

See screenshot below;

Success! We have data from our secure endpoint.
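The same two requests can also be scripted with HttpClient. This is a minimal sketch only (error handling omitted, and the crude token extraction assumes the JSON response shape shown by Postman; a real client would use a JSON library such as Json.NET);

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class ApiSmokeTest
{
    public static async Task RunAsync()
    {
        using (var client = new HttpClient { BaseAddress = new Uri("http://localhost:62996/") })
        {
            // 1. Request a token from the OAuth endpoint (x-www-form-urlencoded body).
            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["username"] = "administrator",
                ["password"] = "administrator123",
                ["grant_type"] = "password"
            });
            var tokenResponse = await client.PostAsync("oauth2/token", form);
            var json = await tokenResponse.Content.ReadAsStringAsync();

            // Crude extraction of the access_token value from the JSON payload.
            var token = json.Split(new[] { "\"access_token\":\"" }, StringSplitOptions.None)[1]
                            .Split('"')[0];

            // 2. Call the secured endpoint with the bearer token attached.
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", token);
            var books = await client.GetStringAsync("api/books");
            Console.WriteLine(books);
        }
    }
}
```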

Summary

In this introduction we looked at creating a project using Web API to issue and authenticate JWTs (JSON Web Tokens). We created a simple endpoint to retrieve a list of books, and also added the ability to get a specific book/review and delete reviews in a RESTful way.

This project is the foundation for subsequent posts that will explore creating a rich client side application, using modern JavaScript frameworks, which will enable authentication and authorization.

How to debug websites on your mobile device using Google Chrome

I can’t believe I have survived this long as a web developer without knowing you can debug websites (JavaScript, CSS, HTML, TypeScript etc.) directly on your mobile device using Google Chrome developer tools. If you are currently using emulators/simulators or testing solutions such as Browser Stack, you will love this easy and free solution.

Be warned, however, you will be expected to download 6+ gigabytes of stuff before the magic begins.

I’ve only tested this on my Samsung Galaxy S6 Edge (running Android 5.1.1) but I believe it also works on an iPhone.

Prerequisite Software

Before connecting your phone to your computer, please ensure you have all of the following software installed;

Set up your device

Setting up your device is pretty simple. Start by connecting it to your computer with a USB cable and activate “Developer Mode” via the settings menu. Rather than explain all the individual steps, just follow this helpful guide.

Time to start debugging

If you haven’t already done so, go ahead and connect your device to your PC via USB cable.

Launch Google Chrome on your device, and launch Google Chrome on your computer. Navigate to chrome://inspect and your device should be listed.

If your device is not listed, you probably need to restart the ADB (Android Debug Bridge) server. Run the following commands from a standard or administrator command prompt;

adb kill-server
adb start-server

If you still cannot see your device listed, please check out the troubleshooting guide.

When ready, click inspect just below the title of the tab with your open web page – or use the convenient Open tab with url field to quickly open a new tab.

Google Chrome will now open a full screen Developer tools window, with a preview of the web page on the left, with a console window and other helpful tabs (including everything you are used to when debugging web pages in the desktop browser).

You can set breakpoints, use the debugger keyword, and debug in the same way you’re used to.

Any changes made on the PC are automatically and instantly reflected on the device, and vice versa!

Summary

Google Chrome has an incredibly useful feature that allows for remote debugging on your Android or iOS device using the Google Chrome developer tools. The setup process involves downloading over 6GB of additional software, but that feels like a small price to pay for such a useful feature.

How to avoid burnout

You work hard 7 days a week, and you do your best to stay up to date with the latest industry trends. Inevitably, you become demoralized and demotivated, and eventually suffer a partial or full-on collapse where all your progress comes to a grinding halt. After a period of time (days, weeks, or months!) you get back on track and pick up where you left off, until the cycle repeats and you end up back where you started. I’ve been through this cycle several times, and I’ve even blogged about it before, but I have now learnt some techniques to break the endless cycle and find a more sustainable work-life balance.

Here are my 5 ultimate tips to avoid burnout.

Stop

Start by reducing your workload.

You are probably doing some or all of the following on a regular basis;

  1. Watching training videos, doing some form of professional online training
  2. Freelance or other paid work for friends, family, or professionally
  3. Contributing to open source, or some form of unpaid work where you have responsibilities and deadlines
  4. Your day job

You probably can’t stop doing your day job, so you will want to give that the highest precedence. However, I can’t tell you how many people I’ve met in my life who “forget” to take paid leave (holiday days) on a regular basis. I’ve known people who still have 15 or more holiday days available in early December, and who either lose those days or just take the money instead. You should ensure that you regularly take some time off from work, at least once a quarter, and actually have time to yourself or do something relaxing with close family and friends.

If you’re doing online training on a regular basis, you shouldn’t stress about it. Don’t try and watch 12 hours of Pluralsight videos at 3x speed every night… stretch it out over a week or longer; you will absorb the information better and ultimately get more from the training than you otherwise would.

Freelance or other paid work on top of your day job is a recipe for disaster. The stress of meeting additional deadlines, not being able to have face-to-face discussions with your client, and generally working 15 hours a day will rapidly accelerate burnout. Try not to take on freelance work if possible, or cap it at one project at any one time. The same goes for open source or otherwise unpaid work. Whilst typically not as stressful, the pressure of expectation can still sit on your shoulders, so try and keep it to a minimum.


Get a hobby

But software development is your hobby, right? For me that was certainly the case. I started programming as a hobbyist and eventually became a professional. Whilst I still consider software development to be a hobby, and I enjoy it a lot, I’ve since broadened my interests and now consider myself to have several hobbies.

Some ideas for new hobbies;

  1. Some form of physical exercise.  It might be working out (see my post on how I got fit), walking, hiking, skiing, cycling, or anything you like!  Exercise is excellent for stress relief and for refocusing the mind.  Exercising also leads to a healthier lifestyle and better sleep/eating patterns, which means more energy, which contributes significantly to reducing burnout.
  2. Learn a new skill.  I am in the process of teaching myself several new skills; DIY, plumbing, developing an understanding of the sciences (including quantum theory, advanced mathematics, astronomy/planetary science), and more.  But here is my killer advice: learn life skills.  What I mean is this; if you learn how to, for example, put up a shelf, that is a life skill.  The process of putting up a shelf is unlikely to change much; screws, nails, and hammers are pretty constant things.  In 10 years you will still know how to put up a shelf.  That’s the common problem with our industry; the technology evolves so rapidly that 90% of what you learnt 5 years ago is irrelevant.

Whatever you decide to do, try and have at least one other hobby, ideally one that other people can get involved with too.


Read

I didn’t start reading books on a regular basis until I was 25 years old.  The first book I read by choice, and not because somebody was forcing me to, was The Hobbit.  I loved the book and I was instantly hooked.  If you want a good science fiction read, I highly recommend checking out The Martian; it’s awesome!

I don’t limit myself to just fiction books though; I read a wide variety of books on subjects such as stock market investment, soft skills, autobiographies, and more.

So why read?  It’s simple: reading refocuses your mind on something different.  Let’s say you’ve been writing code all morning, and you’re stuck on a problem that you can’t fix.  If, at lunchtime, you go away from your computer and read a book for 30-45 minutes, when you get back to your desk you will be mentally refreshed.  In the meantime, the problem you were having earlier in the day has been percolating away at the back of your mind, and I can’t tell you how many times I’ve come back and fixed a difficult problem within just a few minutes.

Taking the time to step back and let your mind power down and focus on something else is a very useful technique for relaxing, de-stressing, and ultimately helping to prevent burnout.

Try and read every day… you never know, you might even enjoy it.


Spend more time with immediate family, and friends

This is the ultimate technique for preventing burnout: spending time with close friends and family.  Humans are very sociable beings, and benefit a lot from interacting with others.

Being sociable with others can trigger your body to release one of four feel-good chemicals: endorphins, oxytocin, serotonin, and dopamine.  This results in a happiness boost, which helps reduce stress and triggers a chain reaction where you are rewarded more the more you interact with others.  Having strong relationships with work colleagues can also have other unintended consequences, including faster career progression and priority when decision-makers are appointing people to interesting projects.

Back to family.  If you’re working all the time, you’re by definition spending less quality time with your significant other (wife, girlfriend, husband, etc).  Spending more time with them will result in a better quality of life, happiness and reduced risk of burnout.


Record your progress

If you absolutely must ignore all the prior advice, then please take away the advice given in this last point.  Record your progress.

The most effective way I have found to stay motivated and ward off burnout is to track your time and progress.  Take your freelance project, or whatever you are working on, and break it down into a list of tasks.  Then, as you work your way through each task, record how long it took to complete and physically tick it, cross it out, or in some way indicate that the task is finished.  At the end of each day or week, take time out to review the list and see how much progress you have made during that period.  Doing this methodically will help you remember that you are moving forward all the time and getting closer to your goals.

Tracking your forward progress and getting closer to your end goal is the ultimate technique for avoiding burnout.


Summary

Following this advice will help restore your work-life balance by making your work time much more focused, giving your brain time to slow down and better absorb new information, and generally making you happier in daily life thanks to the better relationships you will develop with the people who are important to you.  If you absolutely can’t follow the first four tips, make sure you at least record your progress so you can see yourself moving towards a goal over time.

