
React Authentication on ASP.NET Core with OAuth and Identity

React Authentication

In this post I explain how React authentication is implemented on ASP.NET Core in the RealTimeWeb.NET application.

The application supports two ways to authenticate:

  • By registering and entering a username and password
  • By connecting to an external login provider such as Facebook or Google.

ASP.NET Core supports cookie authentication out of the box. While this is enough to create classic web applications and protect controller view actions, it is not secure enough to protect the APIs used by a Single Page Application (SPA). The recommended way to protect an API is to use authentication tokens. The tokens used here are defined by the JWT standard (RFC 7519, JSON Web Tokens).

To generate these tokens we use AspNet.Security.OpenIdConnect.Server (ASOS), an advanced OAuth2/OpenID Connect server framework for ASP.NET Core. The generated tokens are validated using the ASP.NET JWT bearer authentication middleware.

Configuration

All authentication configuration is placed in Infrastructure/AuthenticationConfiguration.cs, which configures:

  • JWT bearer token authentication for API calls.
  • Cookie authentication for web calls.
  • The Google and Facebook external authentication providers. These are only set up when valid settings are found in the application configuration.
  • OpenIdConnect.Server, which provides the authorization and token endpoints used to authenticate from the React client application.

Username/password login

Authentication-UserNamePassword

The flow used for username and password authentication is rather simple (a client-side sketch follows the steps):

 

    1. The user enters a username and password in a textbox. (source)
    2. The token endpoint is called with the entered credentials. (source)
    3. When the credentials are correct, an access_token and a refresh_token are returned. Both tokens are stored in the local storage of the browser. (source)
    4. When an API call is made, the access_token is added to the header of the HTTP call to authenticate. When a valid access_token is used, a valid response is returned with HTTP code 200 (OK). (source)
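A minimal client-side sketch of steps 2-4. The token endpoint path ('/token') and the use of fetch are assumptions; the real code is behind the source links above.

    // Step 2: call the token endpoint with the entered credentials
    // (OAuth2 resource owner password grant; '/token' is an assumed path).
    function logOn(userName, password) {
        var body = 'grant_type=password' +
            '&username=' + encodeURIComponent(userName) +
            '&password=' + encodeURIComponent(password);

        return fetch('/token', {
            method: 'POST',
            headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
            body: body
        })
        .then(function (response) { return response.json(); })
        .then(function (tokens) {
            // Step 3: store both tokens in the local storage of the browser
            localStorage.setItem('access_token', tokens.access_token);
            localStorage.setItem('refresh_token', tokens.refresh_token);
        });
    }

    // Step 4: authenticate API calls by sending the access_token as a bearer token
    function callApi(url) {
        return fetch(url, {
            headers: { 'Authorization': 'Bearer ' + localStorage.getItem('access_token') }
        });
    }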

External provider login

OAuth also supports authentication with an external provider. In this example both Google and Facebook are supported, but adapters for most other providers, like GitHub, LinkedIn, …, can be found online in the AspNet contrib repository.

The flow to authenticate with an external provider is a bit more complex (a sketch of step 2 follows the steps):

Authentication-External-Login-1

 

    1. The user clicks on a specific external provider. (source)
    2. A new window is opened and loads the authorize endpoint: /account/authorize/connect?grant_type=authorization_code. (client and server source)
    3. This request is handled by the registered providers and redirects to the external provider's login page.
    4. When the login is successful, the external provider redirects back to our authorization complete endpoint: /account/authorize/complete. (source)
    5. When the authorization is completed, we redirect back to the client with the generated authorization_code. This additional step is needed to get both an access_token and a refresh_token, because the RFC does not allow the authorization endpoint to return a refresh_token.
    6. The token endpoint is called with the authorization_code. (source)
    7. When the authorization_code is correct, the access_token and refresh_token are returned. Both tokens are stored in the local storage of the browser. (source)
    8. When an API call is made, the access_token is added to the header of the HTTP call to authenticate. When a valid access_token is used, a valid response is returned with HTTP code 200 (OK). (source)
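A minimal sketch of step 2 (the URL is taken from the step above; the window name and size are illustrative):

    // Step 2: open the authorize endpoint in a popup window.
    // Steps 3-5 then continue server-side until the authorization_code
    // is handed back to the client.
    var url = '/account/authorize/connect?grant_type=authorization_code';
    window.open(url, 'external-login', 'width=600,height=600');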

Using a refresh_token

When a token is requested, both an access_token and a refresh_token are returned. The lifetime of an access_token is much shorter (20 minutes) than the lifetime of the refresh_token (24 hours). The goal is to request a new access_token by providing the refresh_token when the access_token has expired.

Refresh tokens are recommended because the user can stay logged in for a longer period without resending the credentials over the wire, and they can also be revoked by the server. The advantages and disadvantages of the revocation support are explained in this StackOverflow post. A client-side sketch of the refresh flow follows the steps below.

Authentication-Refresh-Token

    1. When the access_token expires, the API returns an HTTP 401 response.
    2. When the HTTP 401 response is received, the token endpoint is called with the refresh_token. (source)
    3. When the refresh_token is not expired, a new access_token and refresh_token are returned. Both tokens are stored in the local storage of the browser. (source)
    4. When an API call is made, the access_token is added to the header of the HTTP call to authenticate. When a valid access_token is used, a valid response is returned with HTTP code 200 (OK). (source)
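A minimal client-side sketch of the refresh flow (again assuming a '/token' endpoint and fetch; retry logic is omitted):

    // Steps 1-3: after an HTTP 401, request new tokens with the refresh_token.
    function refreshTokens() {
        var body = 'grant_type=refresh_token' +
            '&refresh_token=' + encodeURIComponent(localStorage.getItem('refresh_token'));

        return fetch('/token', {
            method: 'POST',
            headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
            body: body
        })
        .then(function (response) {
            if (!response.ok) throw new Error('Refresh failed; the user has to log on again');
            return response.json();
        })
        .then(function (tokens) {
            localStorage.setItem('access_token', tokens.access_token);
            localStorage.setItem('refresh_token', tokens.refresh_token);
        });
    }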

ASP.NET Identity

To manage users, ASP.NET Identity is used in combination with a custom user store that targets PostgreSQL; this will be explained in a future blog post. To be able to run the application locally without installing PostgreSQL, an in-memory store can be used by leaving the connection string empty. For more information about how to run the example locally, see the getting started post.

Credits

I would like to credit Taiseer Joudeh for his AngularJS Token Authentication blog posts. I would also like to thank Kévin Chalet for his support in getting ASOS to work!

Source code

The source code of the application can be found on GitHub:

https://github.com/tim-cools/RealTimeWeb.NET

Warning!

The application is still a work in progress, and new features will be added in the coming weeks…

Some of the technologies and frameworks used in this application are pre-release and likely to change. It is currently based on RC1 of .NET Core; I will try to update the code as soon as the final version is released. Ping me on Twitter if you have questions or issues.


RealTimeWeb.NET Blog Posts

This post is part of a blog post series about RealTimeWeb.NET application.

  1. RealTimeWeb.NET – A real time ASP.NET Core application
  2. Getting started with RealTimeWeb.NET
  3. RealTimeWeb.NET Front-end
    1. Creating an ASP.NET Core web application
    2. Single page application in React on ASP.NET Core
    3. React Authentication on ASP.NET Core with OAuth and Identity
    4. Real-time data pushed by web-sockets (SignalR) on ASP.NET Core
    5. Server-side rendering
  4. Real-time back-end
  5. Operations
  6. ...


Single page application in React on ASP.NET Core


What is React

React is an open-source client-side JavaScript framework for building user interfaces. It is developed by Facebook and used in their own products, as well as by some other major websites like Netflix, Imgur, Weather Underground and Feedly.

The two main reasons why they developed React were performance and simplicity. To achieve this, React moves away from templates and data-binding and uses JavaScript components with a one-way data flow instead. Combined with a virtual DOM, this decreases the number of updates to the real DOM and makes user interfaces more responsive to data changes and user interactions. In this post I briefly describe how to develop a single page application in React on ASP.NET Core.

Creating React components

The React JavaScript components represent a part of the view that will be rendered. They use JSX to describe the HTML tags needed to render the components on the page. JSX is a JavaScript extension that allows developers to define XML-like tags in their JavaScript code. This extension is not supported by browsers out of the box, so a transpiler like Babel is used to convert JSX into standard JavaScript. More about this later in this post.

In the example below, the MainPage component renders a div tag with some child components. You can also see that it passes data to the header by assigning values to the userAuthenticated and userName attributes.
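A hedged sketch of what this can look like (Header is an illustrative child component; the exact markup is in the repository):

    class MainPage extends React.Component {
        render() {
            return (
                <div>
                    <Header userAuthenticated={this.props.loggedOn}
                            userName={this.props.name} />
                    {this.props.children}
                </div>
            );
        }
    }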

Declaring a style for a component is as easy as assigning a JSON object to the style property of the component in the render method.

Similar to how we route URLs to specific views on the server side, we use a router on the client side to map specific URLs to a view.

One-way data flow

Components are structured in a hierarchical way, and they use two types of data to render the HTML. They receive input data (accessed via this.props) from the parent component, and a component can also maintain internal state data (accessed via this.state). The differences are outlined in detail here. When the data of a component changes, the component is rendered again based on the updated data. Properties also support property validation to ensure the data is correct during development.

React-Components

Managing state with Flux and Reflux

To achieve high performance, React recommends rendering your application from a single immutable state with a unidirectional data flow. This is supported by the Flux architecture and the Redux library.

React-Data-Flow-1

  • API: the back-end API, written in ASP.NET Core, used for authorization, documentation and the real-time data of the vehicles. This API is accessible over HTTP and web-sockets.
  • Services encapsulate the logic that calls the API or receives push messages from the API over web-sockets. This is a bit different from most Redux examples, but it seems more logical than placing the API calls in the views or in the actions themselves. It makes the actions and reducers responsible only for state changes, and it also makes the React components lighter and easier to test, because they don't need to know about the action creators and the dispatcher.
  • The Store holds the single immutable state. This state is composed of multiple reducers, each responsible for a part of the state.

    When the state is constructed or updated, each reducer is called to produce its part of the state, and these parts are combined into the single immutable application state.
  • Actions are methods that create action objects to update the state. Action objects are simple JavaScript objects that represent what has changed, or should change, in the state. These objects are dispatched to the store in order to update the state. (A small sketch follows the list.)
  • Reducers handle the actions and return a new version of the state, changed according to the received action. (See the sketch after the list.)
  • Components are the main views of the application. These views are connected to the global store by the connect method, which ensures that the view is notified when the state has changed. When the state changes, the mapStateToProps() method is invoked; it transforms the state into the data (props) necessary to render the component. For example:

    class MainPage extends React.Component {
        render() {
            return (
                <div>
                    {this.props.children}
                </div>
            );
        }
    }

    MainPage.propTypes = {
        loggedOn: PropTypes.bool,
        name: PropTypes.string
    };

    function mapStateToProps(state) {
        return {
            loggedOn: state.user.status === userStatus.authenticated,
            name: state.user.name
        };
    }

    export default connect(mapStateToProps)(MainPage);

  • Sub-components are declared by their parent component. They are not connected to the store directly, but receive their data from the parent component through properties (this.props).
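A minimal sketch of an action creator and a reducer (the action type and the status values are illustrative, not the application's real names):

    // Action creator: returns a plain action object describing the change.
    function userLoggedOn(name) {
        return { type: 'USER_LOGGED_ON', name: name };
    }

    // Reducer: returns a new version of its part of the state.
    function user(state = { status: 'anonymous', name: null }, action) {
        switch (action.type) {
            case 'USER_LOGGED_ON':
                return { ...state, status: 'authenticated', name: action.name };
            default:
                return state;
        }
    }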

Please have a look at the official Flux and Redux documentation for more detailed information about this pattern. As described above, in the sample application I diverge a bit from the standard pattern by encapsulating the API calls in services.

RealTimeWeb currently doesn’t support server-side rendering but it will be added in the near future…

Using ES2015

The JavaScript language has improved quite a lot in recent years, and the latest standard includes some really powerful features for creating more readable and maintainable code.

ECMAScript 6 is the newest version of the ECMAScript standard. This standard was ratified in June 2015. ES2015 is a significant update to the language, and the first major update to the language since ES5 was standardized in 2009. Implementation of these features in major JavaScript engines is underway now.

Quote taken from the Babel ES2015 page. ES2015 is not yet supported by most browsers, but that doesn't prevent us from using its features already. The language can be used by processing the JavaScript files with the Babel transpiler, which converts ES2015 source code into plain JavaScript that browsers understand. We also use it to transform JSX code into JavaScript.

Here are the four most powerful JavaScript features used in the sample application:

  • Modules are now supported at the language level through the import and export statements.
  • Arrow functions (() => {}), also called lambda functions, finally give JavaScript the condensed syntax for declaring anonymous functions found in most recent languages. For example:

    const items = messages.map(message => (
        <div>
            {message}
        </div>
    ));

  • Classes and inheritance are now supported by the class and extends keywords. Note that this is still prototype-based inheritance, which is a different model compared to other class-based programming languages.

  • Spread and rest operators (…) are used to split and combine arrays or objects. The object spread operator is especially powerful because it can be used to clone and extend objects, as in the sketch below.
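For example (a minimal sketch):

    const original = { a: 1, b: 2 };
    const clone = { ...original };              // clone
    const extended = { ...original, c: 3 };     // clone and extend

    const parts = [2, 3];
    const combined = [1, ...parts, 4];          // [1, 2, 3, 4]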

For a comprehensive list of the new features, take a look at the Babel web-site.

Managing JavaScript dependencies

Modern web applications consist of a large amount of JavaScript code these days, and a whole range of JavaScript libraries is available to use. These libraries are managed by the node package manager (npm). Npm uses a package.json file in the root of the web project to keep track of the dependencies, which makes it easy to (re)install the necessary JavaScript dependencies by calling npm install from the command line or during the build process.
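A minimal sketch of such a package.json (the name and versions are illustrative):

    {
      "name": "realtimeweb",
      "version": "1.0.0",
      "dependencies": {
        "react": "^0.14.0",
        "redux": "^3.0.0",
        "react-bootstrap": "^0.28.0"
      }
    }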

The main libraries used in RealTimeWeb are:

  • React, the client-side library developed by Facebook, for creating a single page application
  • Redux for managing state in the JavaScript application
  • React-Bootstrap, enabling the usage of the Twitter Bootstrap library from React

Gulp scripts

Gulp is a node.js task runner that supports many plug-ins. It is used to create build tasks and watchers. The most important plug-ins used in the application are Browserify and Babelify: Browserify combines all JavaScript files into a single downloadable file, and Babelify transpiles ES6/7 and JSX into plain JavaScript to enable the usage of new JS features and syntax.

JavaScript-Files

The main build task performs the following tasks in sequence:

  • Clean the wwwroot folder. This is the folder the web server uses to serve the static files of the web application.
  • Browserify is used to combine Client/src/app.js and all its dependencies into a single JavaScript file, wwwroot/scripts/app.js. Babelify is used to support ES6/7 and JSX features. Some libraries, like React and Redux, are excluded from this file and are combined in a separate vendor.js file.
  • Browserify is used to combine libraries like React and Redux into a wwwroot/scripts/vendor.js file. This improves the build time of the app.js file drastically. It also improves the load time of the page, because these libraries change much less frequently than the application files.
  • Static files are copied from Client/statics to wwwroot.

Check out Client/tools/build.js for the full gulp script.
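A minimal sketch of such a browserify task (the task name is illustrative, and it assumes the gulp, browserify, babelify and vinyl-source-stream packages; the real script is in Client/tools/build.js):

    const gulp = require('gulp');
    const browserify = require('browserify');
    const babelify = require('babelify');
    const source = require('vinyl-source-stream');

    gulp.task('build-app', () =>
        browserify('Client/src/app.js')
            .external(['react', 'redux'])   // these ship in vendor.js instead
            .transform(babelify)            // transpile ES6/7 and JSX
            .bundle()
            .pipe(source('app.js'))
            .pipe(gulp.dest('wwwroot/scripts')));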

Command line tools

Using command-line tools can improve productivity during development. Here are the tools used to build and test the JavaScript application. All tools should be executed from the web-application folder src/Soloco.RealTimeWeb.

To build all front-end files from the Client folder to the wwwroot folder, the gulp build script is executed.

Watch for changes in JS files and build instantly. This is the tool that you will keep open in a console window during development.

Build and copy only the application files. This performs a faster build because it doesn't clean the wwwroot folder and doesn't build vendor.js. This can be used during development.

Run all JS tests:

Run all JS tests and watch for changes. You can keep this running in a separate terminal during the development of the JavaScript application and tests.

Using React on ASP.NET Core

Loading the React SPA in an ASP.NET Core view is as easy as including the generated app.js and vendor.js files in the rendered HTML, for example:

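A minimal sketch of the script includes (the paths match the gulp build output described earlier; the exact Razor view is in the repository):

    <script src="~/scripts/vendor.js"></script>
    <script src="~/scripts/app.js"></script>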

Testing the React application

While developing a React application I found these three kinds of tests really valuable (a sketch of the first kind follows the list):

  1. Testing of the actions and their effect on the state.
  2. Mapping of the state to component properties.
    3. Rendering of the component. For this we mock the called service by using __Rewire__. Currently this test only verifies that the view renders without errors. While this is already valuable on its own, it can (or should) be extended with assertions on the rendered view.
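As an illustration of the first kind, a sketch of an action/state test in Jasmine style (the reducer and action names are hypothetical, matching the reducer sketch earlier in this post):

    describe('user reducer', () => {
        it('marks the user as authenticated on log on', () => {
            // hypothetical action and reducer, for illustration only
            const action = { type: 'USER_LOGGED_ON', name: 'john' };

            const state = user(undefined, action);

            expect(state.status).toBe('authenticated');
            expect(state.name).toBe('john');
        });
    });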

Next

In the next post I describe how I implemented authentication via OAuth for the React application.

Source code

The source code of the application can be found on GitHub:

https://github.com/tim-cools/RealTimeWeb.NET

Warning!

The application is still a work in progress, and new features will be added in the coming weeks…

Some of the technologies and frameworks used in this application are pre-release and likely to change. It is currently based on RC1 of .NET Core; I will try to update the code as soon as the final version is released. Ping me on Twitter if you have questions or issues.


RealTimeWeb.NET Blog Posts

This post is part of a blog post series about RealTimeWeb.NET application.

  1. RealTimeWeb.NET – A real time ASP.NET Core application
  2. Getting started with RealTimeWeb.NET
  3. RealTimeWeb.NET Front-end
    1. Creating an ASP.NET Core web application
    2. Single page application in React on ASP.NET Core
    3. React Authentication on ASP.NET Core with OAuth and Identity
    4. Real-time data pushed by web-sockets (SignalR) on ASP.NET Core
    5. Server-side rendering
  4. Real-time back-end
  5. Operations
  6. ...


ASP.NET Core web application


ASP.NET Core is the new web-development framework developed by Microsoft. The biggest advantages over the previous version of ASP.NET are that it is multi-platform out of the box and fully open-source. I selected the new ASP.NET Core framework for the development of this sample application because of the multi-platform support. In this post I briefly describe how to create a new ASP.NET Core project and how it differs from the ASP.NET Framework.

Screenshot

Create a new ASP.NET Core project

There is already a large amount of information available on the web about how to get started with ASP.NET Core, so I will not cover this in detail again; I will only reference the documentation needed to get started. Click here to view the instructions to install .NET Core on your local machine.

To create a new ASP.NET Core project you can choose between Visual Studio or Yeoman and the command line. Either approach creates a basic project structure with the necessary configuration and files to start a web project.

ASP.NET Core run-times and frameworks

The new ASP.NET Core framework runs on multiple operating systems out of the box. It achieves this multi-platform support by supporting multiple run-times:

  • The CLR is the original Windows run-time developed by Microsoft. It is not open source, but it has proven its stability over more than 14 years in production environments.
  • Mono is a multi-platform run-time developed by the community. It is more compatible with the original CLR than CoreCLR is, which means it runs most libraries and frameworks without any problems.
  • .NET Core (CoreCLR) is the new multi-platform run-time developed by Microsoft. It is open-source and supports Windows, Mac and Linux. It only supports a subset of the CLR, so it is currently not compatible with most CLR libraries.

Each application can run on one or more run-times by specifying the framework(s) it supports. The frameworks are not very well documented at the moment, but at least the following frameworks are available:

  • dnx451 is supported by CLR and Mono.
  • dnx46 targets the .NET Framework 4.6. Although Mono supports many C# 6 features, this framework is currently not (yet) supported on Mono.
  • dnxcore50 targets only .NET Core (CoreCLR).

Because several libraries I needed for the application do not support CoreCLR yet, I chose to build and deploy this application on the dnx451 framework. This framework is supported on both the classic CLR and the Mono run-time.

Project.json

Instead of using a more complicated MSBuild file, Microsoft decided to put the project definition in a project.json file, similar to the node.js package.json file. This file contains basic project information, the dependencies (NuGet libraries) and the supported frameworks. It also contains one or more commands that can be executed from the command line. In this case one command is defined, 'web', which starts Kestrel on port 3008. Kestrel is the cross-platform ASP.NET Core development web server used to run the web application.
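A minimal sketch of such a project.json (the dependency and its version are illustrative; the web command and dnx451 framework match the description above):

    {
      "dependencies": {
        "Microsoft.AspNet.Mvc": "6.0.0-rc1-final"
      },
      "commands": {
        "web": "Microsoft.AspNet.Server.Kestrel --server.urls http://localhost:3008"
      },
      "frameworks": {
        "dnx451": { }
      }
    }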

Click here to view the project.json file specification, and here for more specific ASP.NET Core project.json configuration.

Tooling

One of the great decisions they made is to invest in an improved command-line experience. I like this because it improves productivity and multi-platform support. Instead of running the application from your way-too-heavy Visual Studio, you can now run your applications from the command line with a single command.

To run a command defined in your project.json, you call dnx followed by the name of the command you want to run:
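For example, for the 'web' command defined above:

    dnx web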

.NET Core also supports a command-line .NET version manager (dnvm) and a package manager (dnu), which is great for automating environment setup, builds and deployments.

What is different

The following list describes the main differences between ASP.NET and ASP.NET Core:

  • Web pages and APIs are supported by a single model. Instead of using different controllers for serving web (MVC) and API (Web API) requests, there is only one base controller left, named Controller, which is used to implement both scenarios. Here you can see an example of an API controller and here an example of a web page controller.
  • Dependency injection is now a first-class citizen. The built-in support is very basic, but it is easy to plug in your favorite container. Luckily there are already containers that support the new dnx framework, like Autofac and StructureMap.
  • All application configuration code is now grouped in a single Startup.cs, and Global.asax is no longer supported. This class takes care of the container initialization and the web-application configuration. An example of the Startup.cs file can be viewed here.
  • Multiple environments are supported out of the box. Application configuration settings can now come from different sources instead of a single app.config file. This is a powerful way to vary the configuration per environment.

Conclusion

Microsoft had a good look at the competing web-development frameworks when they designed the new ASP.NET Core version. The result, in my opinion, is a modern web framework adapted to the needs of today's web projects.

Source code

The source code of the application can be found on GitHub:

https://github.com/tim-cools/RealTimeWeb.NET

Warning!

The application is still a work in progress, and new features will be added in the coming weeks…

Some of the technologies and frameworks used in this application are pre-release and likely to change. It is currently based on RC1 of .NET Core; I will try to update the code as soon as the final version is released. Ping me on Twitter if you have questions or issues.


RealTimeWeb.NET Blog Posts

This post is part of a blog post series about RealTimeWeb.NET application.

  1. RealTimeWeb.NET – A real time ASP.NET Core application
  2. Getting started with RealTimeWeb.NET
  3. RealTimeWeb.NET Front-end
    1. Creating an ASP.NET Core web application
    2. Single page application in React on ASP.NET Core
    3. React Authentication on ASP.NET Core with OAuth and Identity
    4. Real-time data pushed by web-sockets (SignalR) on ASP.NET Core
    5. Server-side rendering
  4. Real-time back-end
  5. Operations
  6. ...


RealTimeWeb.NET Front-end

Real-time web application on ASP.NET Core

RealTimeWeb is a real-time web application on ASP.NET Core with two main features:

  • Allow users to become a member by registering with a username/password or via an external social provider. Currently Google and Facebook authentication are implemented.
  • Display the real-time data of the vehicle monitor received from the Vehicle Monitor service.

Front-End-Components

Components

The application consists of the following parts:

  • The ASP.NET Core web application, responsible for serving the necessary web pages, JavaScript and style-sheet files.
    Read more…
  • The single page application written in React.
    Read more…
  • Authentication and authorization, implemented with OAuth and ASP.NET Identity.
    Read more…
  • WebSockets to push the real-time data from the vehicle monitor to the user, implemented using SignalR.
    Read more…

Source code

The source code of the application can be found on GitHub:

https://github.com/tim-cools/RealTimeWeb.NET

Warning!

The application is still a work in progress, and new features will be added in the coming weeks…

Some of the technologies and frameworks used in this application are pre-release and likely to change. It is currently based on RC1 of .NET Core; I will try to update the code as soon as the final version is released. Ping me on Twitter if you have questions or issues.


RealTimeWeb.NET Blog Posts

This post is part of a blog post series about RealTimeWeb.NET application.

  1. RealTimeWeb.NET – A real time ASP.NET Core application
  2. Getting started with RealTimeWeb.NET
  3. RealTimeWeb.NET Front-end
    1. Creating an ASP.NET Core web application
    2. Single page application in React on ASP.NET Core
    3. React Authentication on ASP.NET Core with OAuth and Identity
    4. Real-time data pushed by web-sockets (SignalR) on ASP.NET Core
    5. Server-side rendering
  4. Real-time back-end
  5. Operations
  6. ...


Getting started with RealTimeWeb.NET

Getting started

In order to run the application locally, you need to perform the following steps:

  • Install PostgreSQL 9.5
  • Install RabbitMQ
  • Configure application for the external providers: Google and Facebook

The first time the application runs, it will ask you to enter all configuration values. When the settings are saved, a configuration file appsettings.private.json is created in the web folder. This file is ignored by git. If you need to change the configuration later, you can:

  • Set “general:configured” to false in the appsettings.private.json file in the web folder (see the snippet below).
    When the application is restarted, the installation screen is shown again. (RECOMMENDED)
  • Edit the file manually.
  • Remove the file and restart the web application to start from scratch.
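A minimal sketch of the relevant part of appsettings.private.json (all other settings omitted):

    {
      "general": {
        "configured": false
      }
    }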

The web application should be restarted in order to reload the configuration and make the changes active.

Web-installation

All these values are optional and the application will run without them. When no connection string is defined, the application uses an in-memory data store. This means the data is only kept in memory for as long as the application runs.

Source code

The source code of the application can be found on GitHub:

https://github.com/tim-cools/RealTimeWeb.NET

Warning!

The application is still a work in progress, and new features will be added in the coming weeks…

Some of the technologies and frameworks used in this application are pre-release and likely to change. It is currently based on RC1 of .NET Core; I will try to update the code as soon as the final version is released. Ping me on Twitter if you have questions or issues.


RealTimeWeb.NET Blog Posts

This post is part of a blog post series about RealTimeWeb.NET application.

  1. RealTimeWeb.NET – A real time ASP.NET Core application
  2. Getting started with RealTimeWeb.NET
  3. RealTimeWeb.NET Front-end
    1. Creating an ASP.NET Core web application
    2. Single page application in React on ASP.NET Core
    3. React Authentication on ASP.NET Core with OAuth and Identity
    4. Real-time data pushed by web-sockets (SignalR) on ASP.NET Core
    5. Server-side rendering
  4. Real-time back-end
  5. Operations
  6. ...

Marten

Marten, PostgreSQL as document db for .NET


Since PostgreSQL included the JSONB column type a while ago, I have been thinking about using PostgreSQL as a document database, but I hadn't had time to implement something like this myself yet. So I was happily surprised when I stumbled upon Marten a few weeks ago. Marten is a new persistence library for .NET that provides document-db functionality on top of a PostgreSQL database; exactly what I was thinking about. It is created by Jeremy Miller, the creator of StructureMap and the Fubu stack. And though it is still at an early stage, the supported features already look promising:

  • Schema-less document persistence with ACID support
  • Linq support for querying
  • Unit of work with Identity Map and optional dirty checks
  • Optimized batch updates
  • Compatibility with the RavenDB client API (or close to it)
  • EventStore on top of PostgreSQL with sync and async projections

What is a Marten

A Marten is a cute animal living in many parts of the world. Check it out yourself:

14674805776_b9180bf828_o-300x214
Source: Flickpicpete

Why PostgreSQL

Since version 9.2, PostgreSQL has included more and more features supporting a JSON column type, which allows you to store and retrieve schema-less documents in the database. And since 9.4 it has included a binary JSON (jsonb) column type that supports indexing on fields in the JSON data. Besides that, PostgreSQL also supports all traditional database features, like ACID guarantees, which many document databases lack. This makes the database a good candidate for many software projects that need the productivity of storing schema-less documents combined with the advantages of traditional databases (e.g. multi-document transactions).

PostgreSQL also has a huge supporting open-source community, and many tools and extensions are available.

    References

    I will probably blog more about this topic later, as I have started contributing to the OSS project, and I hope to contribute more to this awesome project in the future. In the meantime you can find more info here:

    Ampion Bus

    Ampion Innovation road-trip through East-Africa

    Ampion Innovation Bus

    Recently, I was lucky to be part of the Ampion Venture Bus! Ampion is a Berlin-based organization dedicated to accelerating businesses and connecting social entrepreneurs, IT engineers and designers throughout the emerging world. A Venture Bus is a seven-day road trip where innovative ideas are developed from scratch with the guidance of mentors. Each bus carries around 40 young people with an average age of 27; half of them are international and the other half are from Africa. At the end of the week, the best teams pitch their ideas in front of a high-level jury, including investors and venture capitalists. The best startups of the week have a chance to apply to a 12-month Ampion Fellowship Program—including mentorship, a small grant and office space provided by Ampion.


    ampion
    I really believe the mobile and internet revolution has the power to change the life of the people in emerging countries, and I wanted to be part of this revolution. So I applied to go on the East Africa trip for 2015. Each candidate is carefully selected by a written interview and one or more technical interviews. Needless to say, I was so happy to be selected out of many candidates.

    This was an experience of a lifetime.

    Healthcare in Africa

    Our startups would be working on the theme of e-health. Healthcare is one of the most important causes on the planet—especially in Africa. There are so many deadly emerging diseases, including monkeypox virus, Rift Valley fever virus and, of course, the Ebola virus that was such a huge deal in 2014. But there are also a lot of reemerging diseases that cause havoc all the time on this continent: smallpox, malaria, tuberculosis, yellow fever. These diseases spread very rapidly, and it's obviously a major concern.

    Some African people aren’t well-informed about health issues due to cultural and religious beliefs. Africa also has a lot of issues with fake medicine, witch doctors, and uneducated people pretending they are qualified to perform medical care. We heard a story about a doctor that administered red soda water to a patient instead of blood. Sadly, death was the consequence of this deception. Another sad story we heard was about a fake gynecologist who was accused of drugging and raping his patients. Clearly, e-health is an important area to focus on in Africa, and we were excited to help out.

    The Tour

    Ampion runs five tours throughout all of Africa. I joined the East Africa tour, which started in Dar es Salaam (Tanzania) and traveled consecutively through Arusha (Tanzania), Nairobi (Kenya), Kisumu (Kenya), Kampala (Uganda) and Kigali (Rwanda).

    Map

    At each city, there were local hubs dedicated to supporting local entrepreneurship. All the hubs provide a free open community place that organizes mentoring programs for young people to start their own tech businesses. They guide young people with business advice—such as product development, customer/market validation, funding—and they also support with technological advice so that even non-technical people can start their own businesses.

    I have to admit I expected small, self-organized communities of people interested in startups. But these hubs were very well organized in high-end buildings with support from governments and universities. I was positively surprised about the advanced infrastructure available to support startups and social entrepreneurship in Africa. This indicates how little we in Europe know about African countries and the emerging world in general. We have so much to learn.

    The last stop of the tour was the Transform Africa Summit 2015 in Rwanda. The theme for this summit was “Accelerating Digital Innovation”. There were over 2,500 international participants—including Rwanda’s President Paul Kagame and respected people from top technology companies like Facebook. The top three winning teams of our tour got to pitch their ideas at the opening gala dinner in front of around 300 people.

    transform-africa

    Developing Ideas

    So how do you find innovative solutions for real-world problems? The process we used on the bus was design thinking. Design thinking is an iterative process that starts with defining the problems in order to develop and test solutions. And it goes like this:

    Design-Thinking

    To evaluate the business validity of the e-health solutions we found, we used the Lean Canvas model as proposed by the lean startup business modeling methodology:

    Business_Model_Canvas

    When it was proven that an e-health solution had business potential, we started focusing on pitching the solution and on presentation skills. The pitch was repeated and evaluated several times, so that in the end it could be delivered in front of the Rwanda ICT & Youth minister at kLab in Kigali. The three winning teams were announced:

    1. Mitambo is an online platform used to improve the maintenance and lifetime of medical devices. It’ll be used to educate and connect support engineers. It also allows one to monitor and diagnose machines online.
    2. The Waiting Line takes advantage of the time people have to wait in line before having access to a health service. This time is used to gather medical data and to educate the patient.
    3. mOkoa—the team I was part of—got third place with the solution to improve the control of disease outbreaks. More information is below.

    The mOkoa Team

    mOkoa-team

    As mentioned above, disease outbreaks are a horribly common problem in Africa. Detection rate is very slow, and the time that passes after an outbreak and before action is undertaken is still way too long. This causes a lot of unnecessary deaths that could be prevented.

    After defining the very real problems people face in East Africa surrounding diseases, we formed a team, came up with an idea, and named it. Our team was a multidisciplinary team of six people from four countries with many different backgrounds. Okoa is the Swahili word for “to save” and the m stands for “mobile”. So we combined the two words and made mOkoa. The goal of our team was to save lives with mobile technology!

    Our solution to the problem of disease outbreaks was to provide local health centers—called dispensaries—with a mobile tablet device that will be used to report patient symptoms and diagnoses. This tablet contains our mOkoa mobile app, which is used to enter all required data about the patients. The app will then send the data to the centralized mOkoa system by the mobile data network. Because not all rural areas are covered by mobile data access, the app will fall back on sending the information by USSD (SMS) if no mobile network is available.

    The collected information is analysed to create real-time heat maps of the spreading of diseases. Based on this information, alarms can be sent to health workers and people in the field by SMS. People who receive the alarm will know that they have to be careful and will also receive prevention information on how best to handle the outbreak.

    High-speed information about disease outbreaks will save many lives. Early detection is key, and my team and I really believe mOkoa will truly help with this. I was very honored to work alongside the teammates I worked with, and we really learned a lot while developing this startup idea.

    When the Shit Hits the Fan

    This blog post wouldn’t be complete without some general thoughts about Africa, now would it? There was the good, and there was the bad. I’ll start with the bad first.

    You probably won’t be surprised that we had some bad luck with the bus trip in Africa. I expected some problems myself, but the reality was a lot harder than I imagined. First off, the bus was not the newest and lacked A/C. Secondly, we had a lot of health issues—mostly due to insufficient food hygiene resulting in many toilet stops on some specific days. You can view a live report of the outbreak here. Due to these problems, the road trips consisted of 50 percent more daily driving time than originally planned: This was around 10-12 hours on the bus per day.

    Last but not least, my bags—along with two fellow passengers' bags—were stolen while traveling from Kenya to Uganda. Losing all your stuff in the middle of a trip is not really enjoyable; it involves a lot of unnecessary trips to supermarkets and shops. In the end, though, I was happy that we only had material loss and nothing really bad happened.

    Enjoying Africa

    All in all, I really enjoyed the trip from the beginning to the end. For me, it was an eye opener on how the people in Africa live, and I was positively surprised about the enthusiasm and engagement of the local people! The people are well-educated and well-aware of the shortcomings of their countries and the systems they live in. More importantly, they’re highly motivated to do something about their situation. I’m confident that some of the ideas developed on the bus will result in really successful businesses in the near future.

    All the African people were a lot of fun to hang out with. They have a great ability to reevaluate everything positively with a great sense of humor! Whenever something doesn’t go as planned, they say among themselves with a big smile: “T.I.A. This Is Africa!” They are proud about their countries. I’m very glad I got to see Africa and learn more about its diverse landscape and cultures through this trip.

    Thanks, guys, for the great trip!

    To see some amazing videos about the trip, check out Nick Van Langendonck’s YouTube channel! He was my partner-in-crime on the bus.

    Last but not least, here are a couple of pictures taken on the road trip.

    Enjoy! 😉

    the-ampion-bus-845x684

    working-on-the-bus-845x684

    great-rift-valley-845x684

    kilimanjaro-845x684

    Event Store Rest API Basics (Node.js)

    Event Store Node.js client

    I recently finished my first project written in Node.js with EventStore as the data store. I published some of the code used to access the EventStore on GitHub; feel free to play with it. The official documentation can be found on the EventStore web-site.

    The code to load and save the events lives in a repository that contains three methods: load, loadLast and save.

    Have a look at the implementation on GitHub, and at the Jasmine test file for the full examples.

    Loading the events of an aggregate

    The load method loads all events of a stream so they can be replayed on an aggregate. The events are loaded in batches (in this example, five at a time).

    The load method returns the current version of the stream and all stored events:

    When the stream doesn’t exist, the load method returns an empty event array with -1 as the version. -1 should be used as the version when saving events to a new stream.

    Load the last event of a stream

    To get the state of a projection, the last event of a stream is loaded. This is done by requesting the head of the stream.

    The loadLast method returns the version of the stream, and the name and body of the last event.

    Saving the events of an aggregate

    The save method appends the specified events to the stream. It also performs a concurrency check based on the version: when the version is not correct, the callback returns an error.
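    A minimal sketch of such a save, assuming EventStore's HTTP API and Node's standard http module (the host, port and error handling are illustrative; the real implementation is on GitHub):

        var http = require('http');

        // Appends events to a stream with an optimistic concurrency check.
        // expectedVersion: -1 for a new stream, otherwise the last known version.
        function save(streamName, expectedVersion, events, callback) {
            var body = JSON.stringify(events); // [{ eventId, eventType, data }]
            var request = http.request({
                host: 'localhost',
                port: 2113,
                path: '/streams/' + streamName,
                method: 'POST',
                headers: {
                    'Content-Type': 'application/vnd.eventstore.events+json',
                    'Content-Length': Buffer.byteLength(body),
                    'ES-ExpectedVersion': expectedVersion
                }
            }, function (response) {
                // 201 Created means the events were appended;
                // a 400 response signals a wrong expected version (concurrency conflict).
                if (response.statusCode === 201) return callback(null);
                callback(new Error('Save failed: HTTP ' + response.statusCode));
            });
            request.on('error', callback);
            request.end(body);
        }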

    What’s next

    In the next posts I will describe how I managed projections and applied event-sourcing to an aggregate in JavaScript.

    Source code

    A working project with these examples can be found on GitHub: https://github.com/tim-cools/EventStore-Node-Examples

    Event Store Projections by Example

    This post is part of a series:

    1. EventStore Client API Basics (C#)
    2. Counting events of a specific type
    3. Partition events based on data found in previous events
    4. Calculating an average per day
    5. The irresponsible gambler
    6. Distribute events to other streams
    7. Temporal Projection to generate alarms
    8. Projection in C#

    Multiple IIS Sites hosting the same application and NServiceBus

    Multiple IIS Sites for the same application

    When the same application is hosted in multiple IIS sites and the application hosts an NServiceBus endpoint, it is important to ensure that each site has a different endpoint name. Otherwise all sites read events from the same queue, and each site only receives a portion of the messages. This sounds obvious, but a bug related to this bugged me for a long time…

    Losing SignalR messages

    In a specific application where we use SignalR, we noticed really strange behavior: only some events were transported from our NSB processor to the SignalR client. Strangely enough, all events were logged in our application logging. Further investigation pointed out that the NSB events were processed by multiple app domains in the same process, after which we found out that the same application was hosted in multiple IIS sites with different bindings and configuration. This explained, of course, how we ended up with multiple processors reading from the same queue, and why we only received some messages on our SignalR client.

    nsb-problem

    NServiceBus configuration

    The fix is easy: instead of using the default or a static endpoint name, we define a dynamic endpoint name based on the IIS site name.