Documentation
Dolittle is an open-source, decentralized, distributed and event-driven microservice platform. The platform has been designed to build Line of Business applications without sacrificing architectural quality, code quality, maintainability or scalability.
Dedicated Runtime
Dolittle uses its own dedicated Runtime for managing connections to the event logs and other runtimes. This allows for easier decoupling of event producers and consumers and frees the pieces to be scaled independently.
Microservice First
At the heart of Dolittle sits the notion of decoupling. This makes it possible to take a system and break it into small, focused components that can be assembled together in any way one wants. When it is broken up, you get the benefit of scaling each individual piece on its own, rather than scaling the monolith equally across a number of machines. This gives a higher density, better resource utilization and ultimately better cost control.
Event-Driven
Dolittle is based on Event Sourcing, which means that the system's state is based on events.
An Event-Driven Architecture (EDA) promotes loose coupling because the producers of events do not know about the subscribers listening to them. This makes an Event-Driven Architecture better suited to today's distributed applications than the traditional request-response model.
PaaS Ready
Dolittle has its own PaaS (Platform as a Service) for hosting your Dolittle code; get in contact with us to learn more!
1 - Tutorials
Tutorials for the Dolittle platform
1.1 - Getting started
Get started with the Dolittle platform
Welcome to the tutorial for Dolittle, where you learn how to write a Microservice that keeps track of foods prepared by the chefs.
After this tutorial you will have:
- a running Dolittle environment with a Runtime and a MongoDB, and
- a Microservice that commits and handles Events.
Use the tabs to switch between the C# and TypeScript code examples. Full tutorial code available on GitHub for C# and TypeScript.
For a deeper dive into our Runtime, check our overview.
Setup
This tutorial expects you to have a basic understanding of C#, .NET and Docker.
Prerequisites:
Set up a .NET Core console project:
$ dotnet new console
$ dotnet add package Dolittle.SDK
This tutorial expects you to have a basic understanding of TypeScript, npm and Docker.
Prerequisites:
Set up a TypeScript NodeJS project using your favorite package manager. For this tutorial we use npm.
$ npm init
$ npm install -D typescript ts-node
$ npm install @dolittle/sdk
$ npx tsc --init
This tutorial makes use of experimental decorators. To enable them, make sure "experimentalDecorators" is set to true in your tsconfig.json.
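The relevant part of tsconfig.json looks like this:
// tsconfig.json (excerpt)
{
  "compilerOptions": {
    "experimentalDecorators": true
  }
}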
Create an EventType
First we'll create an EventType that represents that a dish has been prepared. Events represent changes in the system, "facts that have happened". As the event "has happened", it's immutable by definition, and we should name it in the past tense accordingly.
An EventType is a class that defines the properties of the event. It acts as a wrapper for the type of the event.
// DishPrepared.cs
using Dolittle.SDK.Events;

namespace Kitchen
{
    [EventType("1844473f-d714-4327-8b7f-5b3c2bdfc26a")]
    public class DishPrepared
    {
        public DishPrepared(string dish, string chef)
        {
            Dish = dish;
            Chef = chef;
        }
        public string Dish { get; }
        public string Chef { get; }
    }
}
The GUID given in the [EventType()] attribute is the EventTypeId, which is used to identify this EventType in the Runtime.
// DishPrepared.ts
import { eventType } from '@dolittle/sdk.events';

@eventType('1844473f-d714-4327-8b7f-5b3c2bdfc26a')
export class DishPrepared {
    constructor(readonly Dish: string, readonly Chef: string) {}
}
The GUID given in the @eventType() decorator is the EventTypeId, which is used to identify this EventType in the Runtime.
Create an EventHandler
Now we need something that can react to dishes that have been prepared. Let's create an EventHandler which prints the prepared dishes to the console.
// DishHandler.cs
using System;
using Dolittle.SDK.Events;
using Dolittle.SDK.Events.Handling;

namespace Kitchen
{
    [EventHandler("f2d366cf-c00a-4479-acc4-851e04b6fbba")]
    public class DishHandler
    {
        public void Handle(DishPrepared @event, EventContext eventContext)
        {
            Console.WriteLine($"{@event.Chef} has prepared {@event.Dish}. Yummm!");
        }
    }
}
When an event is committed, the Handle() method will be called for all the EventHandlers that handle that EventType.
The [EventHandler()] attribute identifies this event handler in the Runtime. It is used to keep track of which event the handler last processed, and to retry handling an event if the handler fails (throws an exception).
// DishHandler.ts
import { EventContext } from '@dolittle/sdk.events';
import { eventHandler, handles } from '@dolittle/sdk.events.handling';
import { DishPrepared } from './DishPrepared';

@eventHandler('f2d366cf-c00a-4479-acc4-851e04b6fbba')
export class DishHandler {
    @handles(DishPrepared)
    dishPrepared(event: DishPrepared, eventContext: EventContext) {
        console.log(`${event.Chef} has prepared ${event.Dish}. Yummm!`);
    }
}
When an event is committed, the method decorated with @handles(EventType) for that specific EventType will be called.
The @eventHandler() decorator identifies this event handler in the Runtime. It is used to keep track of which event the handler last processed, and to retry handling an event if the handler fails (throws an exception).
Connect the client and commit an event
Let's build a client that connects to the Runtime for a Microservice with the id "f39b1f61-d360-4675-b859-53c05c87c0e6". This sample Microservice is pre-configured in the development Docker image.
While configuring the client we register the EventTypes and EventHandlers so that the Runtime knows about them. Then we can prepare a delicious taco and commit it to the EventStore for the specified tenant.
// Program.cs
using Dolittle.SDK;
using Dolittle.SDK.Tenancy;

namespace Kitchen
{
    class Program
    {
        public static void Main()
        {
            var client = Client
                .ForMicroservice("f39b1f61-d360-4675-b859-53c05c87c0e6")
                .WithEventTypes(eventTypes =>
                    eventTypes.Register<DishPrepared>())
                .WithEventHandlers(builder =>
                    builder.RegisterEventHandler<DishHandler>())
                .Build();

            var preparedTaco = new DishPrepared("Bean Blaster Taco", "Mr. Taco");

            client.EventStore
                .ForTenant(TenantId.Development)
                .Commit(eventsBuilder =>
                    eventsBuilder
                        .CreateEvent(preparedTaco)
                        .FromEventSource("bfe6f6e4-ada2-4344-8a3b-65a3e1fe16e9"));

            // Blocks until the EventHandlers are finished, i.e. forever
            client.Start().Wait();
        }
    }
}
The GUID given in FromEventSource() is the EventSourceId, which is used to identify where the events come from.
// index.ts
import { Client } from '@dolittle/sdk';
import { TenantId } from '@dolittle/sdk.execution';
import { DishPrepared } from './DishPrepared';
import { DishHandler } from './DishHandler';

const client = Client
    .forMicroservice('f39b1f61-d360-4675-b859-53c05c87c0e6')
    .withEventTypes(eventTypes =>
        eventTypes.register(DishPrepared))
    .withEventHandlers(builder =>
        builder.register(DishHandler))
    .build();

const preparedTaco = new DishPrepared('Bean Blaster Taco', 'Mr. Taco');

client.eventStore
    .forTenant(TenantId.development)
    .commit(preparedTaco, 'bfe6f6e4-ada2-4344-8a3b-65a3e1fe16e9');
The GUID given in the commit() call is the EventSourceId, which is used to identify where the events come from.
Start the Dolittle environment
Start the Dolittle environment with all the necessary dependencies with the following command:
$ docker run -p 50053:50053 -p 27017:27017 dolittle/runtime:latest-development
This will start a container with the Dolittle Development Runtime on port 50053 and a MongoDB server on port 27017. The Runtime handles committing the events and the event handlers, while MongoDB is used for persistence.
Docker on Windows
Docker on Windows using the WSL2 backend can use massive amounts of RAM if not limited. Configuring a limit in the .wslconfig file can help greatly, as mentioned in this issue. The RAM usage is also lowered if you disable the WSL2 backend in the Docker Desktop settings.
Run your microservice
Run your code, and get a delicious serving of taco:
$ dotnet run
Mr. Taco has prepared Bean Blaster Taco. Yummm!
$ npx ts-node index.ts
Mr. Taco has prepared Bean Blaster Taco. Yummm!
What’s next
1.2 - Aggregates
Get started with Aggregates
Welcome to the tutorial for Dolittle, where you learn how to write a Microservice that keeps track of foods prepared by the chefs.
After this tutorial you will have:
- a running Dolittle environment with a Runtime and a MongoDB,
- a Microservice that commits and handles Events and
- a stateful aggregate root that applies events and is controlled by an invariant
Use the tabs to switch between the C# and TypeScript code examples. Full tutorial code available on GitHub for C# and TypeScript.
Prerequisites
This tutorial builds directly upon our getting started guide and assumes that you have gone through it: done the setup, created the EventType and EventHandler, and connected the client.
Create an AggregateRoot
An aggregate root is a class that upholds the rules (invariants) for the aggregates of that aggregate root. It encapsulates the domain objects, enforces business rules, and ensures that the aggregate can't be put into an invalid state. The aggregate root usually exposes methods that create and apply events.
There are essentially two types of aggregate roots: stateless and stateful. The aggregate root in this example is stateful because it tracks a value called _counter, which is used to enforce the invariant that no more than two dishes can be prepared. Stateful aggregate roots have On() methods that take in a single parameter, an event type. Each time an event of that type is applied to the aggregate root, the corresponding On() method will be called. It is important that the On() methods only update the internal state of the aggregate root!
// Kitchen.cs
using System;
using Dolittle.SDK.Aggregates;
using Dolittle.SDK.Events;

namespace Kitchen
{
    [AggregateRoot("01ad9a9f-711f-47a8-8549-43320f782a1e")]
    public class Kitchen : AggregateRoot
    {
        int _counter;

        public Kitchen(EventSourceId eventSource)
            : base(eventSource)
        {
        }

        public void PrepareDish(string dish, string chef)
        {
            if (_counter >= 2) throw new Exception("Cannot prepare more than 2 dishes");
            Apply(new DishPrepared(dish, chef));
            Console.WriteLine($"Kitchen Aggregate {EventSourceId} has applied {_counter} {typeof(DishPrepared)} events");
        }

        void On(DishPrepared @event)
            => _counter++;
    }
}
The GUID given in the [AggregateRoot()] attribute is the AggregateRootId, which is used to identify this AggregateRoot in the Runtime.
// Kitchen.ts
import { aggregateRoot, AggregateRoot, on } from '@dolittle/sdk.aggregates';
import { EventSourceId } from '@dolittle/sdk.events';
import { DishPrepared } from './DishPrepared';

@aggregateRoot('01ad9a9f-711f-47a8-8549-43320f782a1e')
export class Kitchen extends AggregateRoot {
    private _counter: number = 0;

    constructor(eventSourceId: EventSourceId) {
        super(eventSourceId);
    }

    prepareDish(dish: string, chef: string) {
        if (this._counter >= 2) throw new Error("Cannot prepare more than 2 dishes");
        this.apply(new DishPrepared(dish, chef));
        console.log(`Kitchen Aggregate ${this.eventSourceId} has applied ${this._counter} ${DishPrepared.name} events`);
    }

    @on(DishPrepared)
    onDishPrepared(event: DishPrepared) {
        this._counter++;
    }
}
The GUID given in the @aggregateRoot() decorator is the AggregateRootId, which is used to identify this AggregateRoot in the Runtime.
Apply the event through an aggregate of the Kitchen aggregate root
Let's expand upon the client built in the getting started guide, but instead of committing the event to the event store directly, we perform an action on the aggregate that eventually applies and commits the event.
// Program.cs
using Dolittle.SDK;
using Dolittle.SDK.Tenancy;

namespace Kitchen
{
    class Program
    {
        public static void Main()
        {
            var client = Client
                .ForMicroservice("f39b1f61-d360-4675-b859-53c05c87c0e6")
                .WithEventTypes(eventTypes =>
                    eventTypes.Register<DishPrepared>())
                .WithEventHandlers(builder =>
                    builder.RegisterEventHandler<DishHandler>())
                .Build();

            client
                .AggregateOf<Kitchen>("bfe6f6e4-ada2-4344-8a3b-65a3e1fe16e9", _ => _.ForTenant(TenantId.Development))
                .Perform(kitchen => kitchen.PrepareDish("Bean Blaster Taco", "Mr. Taco"));

            // Blocks until the EventHandlers are finished, i.e. forever
            client.Start().Wait();
        }
    }
}
The GUID given in AggregateOf<Kitchen>() is the EventSourceId, which is used to identify the aggregate of the aggregate root to perform the action on.
// index.ts
import { Client } from '@dolittle/sdk';
import { TenantId } from '@dolittle/sdk.execution';
import { DishPrepared } from './DishPrepared';
import { DishHandler } from './DishHandler';
import { Kitchen } from './Kitchen';

(async () => {
    const client = Client
        .forMicroservice('f39b1f61-d360-4675-b859-53c05c87c0e6')
        .withEventTypes(eventTypes =>
            eventTypes.register(DishPrepared))
        .withEventHandlers(builder =>
            builder.register(DishHandler))
        .build();

    await client
        .aggregateOf(Kitchen, 'bfe6f6e4-ada2-4344-8a3b-65a3e1fe16e9', _ => _.forTenant(TenantId.development))
        .perform(kitchen => kitchen.prepareDish('Bean Blaster Taco', 'Mr. Taco'));

    console.log('Done');
})();
The GUID given in the aggregateOf() call is the EventSourceId, which is used to identify the aggregate of the aggregate root to perform the action on.
Start the Dolittle environment
Start the Dolittle environment with all the necessary dependencies with the following command:
$ docker run -p 50053:50053 -p 27017:27017 dolittle/runtime:latest-development
This will start a container with the Dolittle Development Runtime on port 50053 and a MongoDB server on port 27017. The Runtime handles committing the events and the event handlers, while MongoDB is used for persistence.
Docker on Windows
Docker on Windows using the WSL2 backend can use massive amounts of RAM if not limited. Configuring a limit in the .wslconfig file can help greatly, as mentioned in this issue. The RAM usage is also lowered if you disable the WSL2 backend in the Docker Desktop settings.
Run your microservice
Run your code, and get a delicious serving of taco:
$ dotnet run
Mr. Taco has prepared Bean Blaster Taco. Yummm!
$ npx ts-node index.ts
Mr. Taco has prepared Bean Blaster Taco. Yummm!
What’s next
2 - Concepts
The essential concepts of Dolittle
The Concepts section helps you learn about the abstractions and components of Dolittle.
To learn how to write a Dolittle application read our tutorial.
2.1 - Overview
Get a high-level outline of Dolittle and its components
Dolittle is a decentralized, distributed, event-driven microservice platform built to harness the power of events. It’s a reliable ecosystem for microservices to thrive so that you can build complex applications with small, focused microservices that are loosely coupled, event driven and highly maintainable.
Components
- Events are “facts that have happened” in your system and they form the truth of the system.
- Event Handlers & Filters process events.
- The Runtime is the core of all Dolittle applications and manages connections from the SDKs and other Runtimes to its Event Store. The Runtime is packaged as a Docker image.
- The Event Store is the underlying database where the events are stored.
- The Head is the user code that uses the SDKs, which connect to the Runtime in the same way as a client (SDK) connects to a server (runtime).
- A Microservice is one or more Heads talking to a Runtime.
- Microservices can produce and consume events between each other over the Event Horizon.
Event-Driven
Dolittle uses a style of Event-Driven Architecture called Event Sourcing, which means to "capture all changes to an application's state as a sequence of events"; these events then form the "truth" of the system. Events cannot be changed or deleted, as they represent things that have happened.
With event sourcing your application's state is no longer stored as a snapshot of the current state, but rather as the whole history of state-changing events. These events can be replayed to recreate the state whenever needed, e.g. replayed in a test environment to see how the system would behave. The system can also produce the state it had at any point in time.
Event sourcing allows for high scalability thanks to being a very loosely coupled system; e.g. a stream of events can keep a set of in-memory databases updated instead of having to query a master database.
The history of events also forms an audit log to help with debugging and auditing.
Distributed & Decentralized
Dolittle applications are built from microservices that communicate with each other using events. These microservices can scale and fail independently as there is no centralized message bus like in Kafka. The Runtimes and event stores are independent of other parts of the system.
Microservice
A microservice consists of one or many heads talking to one Runtime. The core idea is that a microservice is an independently scalable unit of deployment that can be reused in other parts of the software however you like. Each microservice is autonomous and has its own resources and event store.
This diagram shows the anatomy of a microservice with one head.
Read Cache
The Read Cache in these pictures is not part of Dolittle. Different projections call for different solutions depending on the sort of load and the data to be stored.
Multi-tenancy
Multi-tenancy means that a single instance of the software and its supporting infrastructure serves multiple customers. Dolittle supports multi-tenancy by separating the event stores for each tenant so that each tenant only has access to its own data.
This diagram shows a microservice with 2 tenants, each of them with their own resources.
What Dolittle isn’t
Dolittle is not a traditional backend library nor an event driven message bus like Kafka. Dolittle uses Event Sourcing, which means that the state of the system is built from an append-only Event Store that has all the events ever produced by the application.
Dolittle does not provide a solution for read models/cache. Different situations call for different databases depending on the sort of load and data to be stored. The event store only defines how the events are written in the system, it doesn’t define how things are read or interpreted.
Dolittle isn’t a CQRS framework, but it used to be.
Technology
The Event Store is implemented with MongoDB.
What’s next
2.2 - Events
The source of truth in the system
An Event is a serializable representation of “a fact that has happened within your system”.
“A fact”
An event is a change (fact) within our system. The event itself contains all the relevant information concerning the change. At its simplest, an event can be represented by a name (type) if it’s enough to describe the change.
More usually, it is a simple Data Transfer Object (DTO) that contains state and properties that describe the change. It does not contain any calculations or behavior.
“that has happened”
As the event has happened, it cannot be changed, rejected, or deleted. This forms the basis of Event Sourcing. If you wish to change the action or the state change that the event encapsulates, then it is necessary to initiate an action that results in another event that nullifies the impact of the first event.
This is common in accounting, for example:
Sally adds $100 to her bank account, which would result in an event like "Added $100 to Sally's account". But if the bank accidentally adds $1000 instead of the $100, then a correcting event should be played, like "Subtracted $900 from Sally's account". And with event sourcing, this information is preserved in the event store, e.g. for later auditing purposes.
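As a sketch of how such a correcting pair could be modelled with the SDK's EventType attribute from the tutorial (the class names and GUIDs here are hypothetical, made up for illustration):
// AccountEvents.cs (hypothetical example)
using Dolittle.SDK.Events;

namespace Bank
{
    [EventType("3b7d4d9c-81f2-4e4a-9d23-64c2b0a1a001")]
    public class AmountAddedToAccount
    {
        public AmountAddedToAccount(string account, decimal amount)
        {
            Account = account;
            Amount = amount;
        }
        public string Account { get; }
        public decimal Amount { get; }
    }

    // the first event is never changed or deleted;
    // a second, correcting event nullifies its impact
    [EventType("9c4f2e6a-2d1b-4a8e-b7f3-5e6a7c8d9002")]
    public class AmountSubtractedFromAccount
    {
        public AmountSubtractedFromAccount(string account, decimal amount)
        {
            Account = account;
            Amount = amount;
        }
        public string Account { get; }
        public decimal Amount { get; }
    }
}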
Naming
To indicate that the event “has happened in the past”, it should be named as a verb in the past tense. Often it can contain the name of the entity that the change or action is affecting.
- ✅ DishPrepared
- ✅ ItemAddedToCart
- ❌ StartCooking
- ❌ AddItemToCart
“within your system”
An event represents something interesting that you wish to capture in your system. Instead of seeing state changes and actions as side effects, they are explicitly modeled within the system and captured within the name, state and shape of our Event.
State transitions are an important part of our problem space and should be modeled within our domain — Greg Young
Naming
An event should be expressed in language that makes sense in the domain, also known as Ubiquitous Language. You should avoid overly technical/CRUD-like events where such terms are not used in the domain.
For example, in the domain of opening up the kitchen for the day and adding a new item to the menu:
- ✅ KitchenOpened
- ✅ DishAddedToMenu
- ❌ TakeoutServerReady
- ❌ MenuListingElementUpdated
Main structure of an Event
This is a simplified structure of the main parts of an event. For the Runtime, the event is only a JSON string, which is saved into the Event Store.
Event {
    Content object
    EventLogSequenceNumber int
    EventSourceId Guid
    Public bool
    EventType {
        EventTypeId Guid
        Generation int
    }
}
For the whole structure of an event as defined in protobuf, please check Contracts.
Content
This is the content of the event to be committed. It needs to be serializable to JSON.
EventLogSequenceNumber
This is the event's position in the Event Log. It uniquely identifies the event.
EventSourceId
EventSourceId represents the source of the event, like a "primary key" in a traditional database. By default, partitioned event handlers use it for partitioning.
Public vs. Private
There is a basic distinction between private events and public events. In much the same way that you would not grant access to other applications to your internal database, you do not allow other applications to receive any of your private events.
Private events are only accessible within a single Tenant so that an event committed for one tenant cannot be handled outside of that tenant.
Public events are also accessible within a single tenant, but they can additionally be added to a public Stream through a public filter for other microservices to consume. Your public event streams essentially form a public API for the other microservices to subscribe to.
Changes to public events
Extra caution should be paid to changing public events so as not to break other microservices consuming those events. We're developing strategies for working with changes in your events and microservices.
EventType
An EventType is the combination of an EventTypeId, which uniquely identifies the type of event, and the event type's Generation.
This decouples the event from a programming language and enables the renaming of events as the domain language evolves.
For the Runtime, the event is just a JSON string. It doesn't know about the event's content, properties, or type (in its respective programming language). The Runtime saves the event to the event log and from that point the event is ready to be processed by the EventHandlers & Filters. For this event to be serialized to JSON and then deserialized back to a type that the client's filters and event handlers understand, an event type is required.
This diagram shows us a simplified view of committing a single event with the type DishPrepared. The Runtime receives the event and sends it back to us to be handled. Without the event type, the SDK wouldn't know how to deserialize the JSON message coming from the Runtime.
Event types are also important when wanting to deserialize events coming from other microservices. As the other microservice could be written in a completely different programming language, event types provide a level of abstraction for deserializing the events.
Why not use class/type names instead of GUIDs?
When consuming events from other microservices it’s important to remember that they name things according to their own domain and conventions.
As an extreme example, a microservice could have an event with the type CustomerRegistered. But in another microservice in a different domain, written in a different language, this event type could be called user_added.
GUIDs also solve the problem of duplicate names; it's not hard to imagine having multiple events with the type CustomerRegistered in your code, coming from different microservices.
Generations
Generations are still under development. At the moment they are best left alone.
As the code changes, the structures and contents of your events are also bound to change at some point. In most scenarios, you will see that you need to add more information to events. These iterations on the same event type are called generations. Whenever you add or change a property in an event, the generation should be incremented to reflect that it’s a new version of the event. This way the filters and handlers can handle different generations of an event.
2.3 - Streams
Get an overview of Event Streams
So, what is a stream? A stream is simply a list with two specific attributes:
- Streams are append-only, meaning that items can only be put at the very end of the stream and that the stream is not of a fixed length.
- Items in the stream are immutable. The items and their order cannot change.
An event stream is simply a stream of events. Each stream is uniquely identified within an Event Store by a GUID. An event can belong to many streams, and in most cases it will belong to at least two streams (one being the event log).
As streams are append-only, an event can be uniquely identified by its position in a stream, including in the event log.
Event streams are perhaps the most important part of the Dolittle platform. To get a different and more detailed perspective on streams, please read our section on event sourcing and streams.
Rules
There are rules on streams to maintain idempotency and the predictability of the Runtime. These rules are enforced by the Runtime:
- The ordering of the events cannot change
- Events can only be appended to the end of the stream
- Events cannot be removed from the stream
- A partitioned stream cannot be changed to be unpartitioned and vice versa
Partitions
If we dive deeper into event streams we'll see that we have two types of streams in the Runtime: partitioned and unpartitioned streams.
A partitioned stream is a stream that is split into chunks. These chunks are uniquely identified by a PartitionId (GUID). Each item in a partitioned stream can only belong to a single partition.
An unpartitioned stream only has one chunk, with a PartitionId of 00000000-0000-0000-0000-000000000000.
There are multiple reasons for partitioning streams. One of the benefits is that it gives developers a way to partition their events and control how they are processed in an Event Handler. Another reason for having partitions becomes apparent when needing to subscribe to other streams in other microservices. We'll talk more about that in the Event Horizon section.
Public vs Private Streams
There are two different types of event streams: public and private. Private streams are exposed within their Tenant, and public streams are additionally exposed to other microservices.
Through the Event Horizon other microservices can subscribe to your public streams. Using a public filter you can filter public events into public streams.
Stream Processor
A stream processor consists of an event stream and an event processor. It takes in a stream of events, calls the event processor to process the events in order, keeps track of which events have already been processed, which have failed and when to retry. Each stream processor can be seen as the lowest level unit-of-work in regards to streams and they all run at the same time, side by side, in parallel.
Since streams are also uniquely identified by a stream id, we can identify each stream processor by its SourceStream, EventProcessor pairing.
// structure of a StreamProcessor
StreamProcessor {
    SourceStream Guid
    EventProcessor Guid
    // the next event to be processed
    Position int
    // for keeping track of failures and retry attempts
    LastSuccesfullyProcessed DateTime
    RetryTime DateTime
    FailureReason string
    ProcessingAttempts int
    IsFailing bool
}
The stream processors play a central role in the Runtime. They enforce the most important rules of Event Sourcing: an event in a stream is not processed twice (unless the stream is being replayed), and no event in a stream is skipped while processing.
Stream processors are constructs that are internal to the Runtime and there is no way for the SDK to directly interact with stream processors.
Dealing with failures
What should happen when a processor fails? We cannot skip faulty events, which means that the event processor has to halt until we can successfully process the event. This problem can be mitigated with a partitioned stream because the processing only stops for that single partition. This way we can keep processing the event stream even though one, or several, of the partitions fail. The stream processor will at some point retry processing the failing partitions and continue normally if it succeeds.
Event Processors
There are 2 different types of event processors:
- Filters that can create new streams
- Processors that process the event in the user’s code
These are defined by the user with Event Handlers & Filters.
When the processing of an event is completed, it returns a processing result back to the stream processor. This result contains information on whether or not the processing succeeded. If it did not succeed, it will say how many times it has attempted to process that event, whether or not it should retry, and how long it will wait until retrying.
Multi-tenancy
When registering processors they are registered for every tenant in the Runtime, resulting in every tenant having their own copy of the stream processor.
Formula for calculating the total number of stream processors created:
(((2 x event handlers) + filters) x tenants) + event horizon subscriptions = stream processors
Let’s provide an example:
For a filter, only one stream processor is needed. But for an event handler we need two, because an event handler consists of both a filter and an event processor. If the Runtime has 10 tenants and the head has registered 20 event handlers, we'd end up with a total of 20 x 2 x 10 = 400 stream processors.
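Expressed as code, the formula can be written as a small helper (illustrative only, not part of any SDK):
// illustrative helper that evaluates the stream processor formula above
static int TotalStreamProcessors(int eventHandlers, int filters, int tenants, int eventHorizonSubscriptions)
    => (((2 * eventHandlers) + filters) * tenants) + eventHorizonSubscriptions;

// the example above: TotalStreamProcessors(20, 0, 10, 0) == 400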
2.4 - Event Handlers & Filters
Overview of event handlers and filters
In event-driven systems it is usually not enough to just say that an Event occurred; you'd expect that something should happen as a result of that event occurring as well.
In the Runtime we can register 2 different processors that can process events: Event Handlers and Filters. They take in a Stream of events as an input and do something to each individual event.
Each of these processors is a combination of one or more Stream Processors and an Event Processor. What it does to the event depends on what kind of processor it is. We'll talk more about the different processors later in this section.
Registration
In order to be able to deal with committed events, the heads need to register their processors. The Runtime offers endpoints which initiate the registration of the different processors. Only registered processors will be run. When the head disconnects from the Runtime, all of the registered processors will be automatically unregistered, and when it re-connects it will re-register them. Processors that have been unregistered are idle in the Runtime until they are re-registered.
Scope
Each processor processes events within a single scope. If not specified, they process events from the default scope. Events coming over the Event Horizon are saved to a scope defined by the event horizon Subscription.
Filters
The filter is a processor that creates a new stream of events from the event log. It is identified by a FilterId and it can create either a partitioned or an unpartitioned stream. The processing in the filter itself is however not partitioned, since it can only operate on the event log stream, which is an unpartitioned stream.
The filter is a powerful tool because it can create an entirely customized stream of events. It is up to the developer how to filter the events; during filtering both the content and the metadata of the event are available for the filter to consider. If the filter creates a partitioned stream, it also needs to include which partition the event belongs to.
However, with great power comes great responsibility. Filters cannot be changed in a way that breaks the rules of streams. If a change does, the Runtime will notice it and return a failed registration response to the head that tried to register the filter.
Public Filters
Since there are two types of streams, there are two kinds of filters: public and private. They function in the same way, except that private filters create private streams and public filters create public streams. Only public events can be filtered into a public stream.
Event Handlers
The event handler is a combination of a filter and an event processor. It is identified by an EventHandlerId, which will be the id of both the filter and the event processor.
The event handler's filter filters events based on the EventType that the event handler handles.
Event handlers can be either partitioned or unpartitioned. Partitioned event handlers use, by default, the EventSourceId of each event as the partition id. The filter follows the same rules for streams as other filters.
Changing event handlers
The event handler registration fails if your event handler suddenly stops handling an event type that it has already handled, or starts handling a new event type that has already occurred in the event log.
Multi-tenancy
When registering processors they are registered for every tenant in the Runtime, resulting in every tenant having their own copy of the Stream Processor.
2.5 - Tenants
What is a Tenant & Multi-tenancy
Dolittle supports having multiple tenants using the same software out of the box.
What is a Tenant?
A Tenant is a single client that's using the hosted software and infrastructure. In a SaaS (Software-as-a-Service) domain, a tenant would usually be a single customer using the service. The tenant has privileges and resources that only it has access to.
What is Multi-tenancy?
In a multi-tenant application, the same instance of the software is used to serve multiple tenants. An example of this would be an e-commerce SaaS: the same basic codebase is used by multiple different customers, each of whom has their own customers and their own data.
Multi-tenancy allows for easier scaling, sharing of infrastructure resources, and easier maintenance and updates to the software.
Multi-tenancy in Dolittle
In Dolittle, every tenant in a Microservice is identified by a GUID. Each tenant has their own Event Store, managed by the Runtime. These event stores are defined in the Runtime configuration files. The tenants all share the same Runtime, which is why you need to specify which tenant to connect to when using the SDKs.
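For example, the client from the getting started tutorial always names the tenant before touching the event store:
// every event store call goes through a specific tenant
client.EventStore
    .ForTenant(TenantId.Development)
    .Commit(eventsBuilder =>
        eventsBuilder
            .CreateEvent(preparedTaco)
            .FromEventSource("bfe6f6e4-ada2-4344-8a3b-65a3e1fe16e9"));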
2.6 - Event Horizon
Learn about Event Horizon, Subscriptions, Consumers and Producers
At the heart of the Dolittle runtime sits the concept of Event Horizon. Event horizon is the mechanism for a microservice to give Consent for another microservice to Subscribe to its Public Stream and receive Public Events.
Producer
The producer is a Tenant in a Microservice that has one or more public streams that a Consumer can subscribe to. Only public events are eligible for being filtered into a public stream.
Once an event moves past the event horizon, the producer will no longer see it. The producer doesn't know or care what happens with an event after it has gone past the event horizon.
Consent
The producer has to give consent for a consumer to subscribe to a Partition in the producer's public stream. Consents are defined in event-horizon-consents.json.
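As a rough sketch (the exact schema may differ between Runtime versions; all ids here are placeholders), a consent entry maps a producer tenant to the subscriptions it allows:
// event-horizon-consents.json (hedged sketch, placeholder ids)
{
    "<producer-tenant-id>": [
        {
            "microservice": "<consumer-microservice-id>",
            "tenant": "<consumer-tenant-id>",
            "stream": "<public-stream-id>",
            "partition": "<partition-id>",
            "consent": "<consent-id>"
        }
    ]
}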
Consumer
A consumer is a tenant that subscribes to a partition in one of the Producer's public streams. The events coming from the producer will be stored into a Scoped Event Log in the consumer's event store. This way, even if the producer is removed or deprecated, the produced events are still saved in the consumer.
To process events from a scoped event log you need scoped event handlers & filters.
The consumer sets up the subscription and will keep asking the producer for events. The producer's Runtime will check whether it has a consent for that specific subscription and will only allow events to flow if that consent exists. If the producer goes offline or doesn't consent, the consumer will keep retrying.
Subscription
A subscription is set up by the consumer to receive events from a producer. Additionally, the consumer has to add the producer to its microservices.json.
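A minimal sketch of such an entry, assuming the producer's Runtime is reachable on its private port (the values are placeholders):
// microservices.json (hedged sketch, placeholder values)
{
    "<producer-microservice-id>": {
        "host": "<producer-runtime-host>",
        "port": 50052
    }
}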
This is a simplified structure of a Subscription in the consumer.
Subscription {
    // the producer's microservice, tenant, public stream and partition
    MicroserviceId Guid
    TenantId Guid
    PublicStreamId Guid
    PartitionId Guid
    // the consumer's scoped event log
    ScopeId Guid
}
Multiple subscriptions to same scope
If multiple subscriptions route to the same scoped event log, the ordering of the events cannot be guaranteed. There is no way to know in which order the subscriber receives the events from multiple producers, as they are all independent of each other.
Event migration
We’re working on a solution for event migration strategies using Generations. As of now there is no mechanism for dealing with generations, so they are best left alone.
Extra caution should be paid to changing public events so as not to break other microservices consuming those events.
2.7 - Event Store
Introduction to the Event Store
An Event Store is a database optimized for storing Events in an Event Sourced system. The Runtime manages the connections and structure of the stored data. All Streams, Event Handlers & Filters, Aggregates and Event Horizon Subscriptions are kept track of inside the event store.
Events saved to the event store cannot be changed or deleted. It acts as the record of all events that have happened in the system from the beginning of time.
Each Tenant has their own event store database, which is configured in resources.json.
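A hedged sketch of what the per-tenant event store configuration could look like (placeholder values; consult the Runtime configuration docs for the exact schema):
// resources.json (hedged sketch, placeholder values)
{
    "<tenant-id>": {
        "eventStore": {
            "servers": [
                "<mongo-host>"
            ],
            "database": "event_store"
        }
    }
}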
Scope
Events that came over the Event Horizon need to be put into a scoped collection so they won’t be mixed with the other events from the system.
Scoped collections work the same way as other collections, except you can’t have Public Streams or Aggregates.
Default scope
Technically all collections are scoped, with the default ScopeId being 00000000-0000-0000-0000-000000000000. This is left out of the naming to make the event store more readable. When we talk about scoped concepts, we always refer to non-default scopes.
Structure of the Event Store
This is the structure of the event store implemented in MongoDB. It includes the following collections in the default Scope:
event-log
aggregates
stream-processor-states
stream-definitions
stream-<streamID>
public-stream-<streamID>
For scoped collections:
The following JSON structure examples have each property's BSON type as the value.
event-log
The Event Log includes all the Events committed to the event store in chronological order. All streams are derived from the event log.
Aggregate events have "wasAppliedByAggregate": true set, and events coming over the Event Horizon have "FromEventHorizon": true set.
This is the structure of a committed event:
{
    // this is the event's EventLogSequenceNumber,
    // which identifies the event uniquely within the event log
    "_id": "decimal",
    "Content": "object",
    // Aggregate metadata
    "Aggregate": {
        "wasAppliedByAggregate": "bool",
        // AggregateRootId
        "TypeId": "UUID",
        // AggregateRoot Version
        "TypeGeneration": "long",
        "Version": "decimal"
    },
    // EventHorizon metadata
    "EventHorizon": {
        "FromEventHorizon": "bool",
        "ExternalEventLogSequenceNumber": "decimal",
        "Received": "date",
        "Concent": "UUID"
    },
    // the committing microservice's metadata
    "ExecutionContext": {
        "Correlation": "UUID",
        "Microservice": "UUID",
        "Tenant": "UUID",
        "Version": "object",
        "Environment": "string"
    },
    // the event's metadata
    "Metadata": {
        "Occurred": "date",
        "EventSource": "UUID",
        // EventTypeId and Generation
        "TypeId": "UUID",
        "TypeGeneration": "long",
        "Public": "bool"
    }
}
aggregates
This collection keeps track of all instances of Aggregates registered with the Runtime.
{
    "EventSource": "UUID",
    // the AggregateRootId
    "AggregateType": "UUID",
    "Version": "decimal"
}
stream
A Stream contains all the events filtered into it. Its structure is the same as the event-log, with the extra Partition property used for partitions.
The stream's StreamId is added to the collection's name, e.g. a stream with the id 323bcdb2-5bbd-4f13-a7c3-b19bc2cc2452 would be in a collection called stream-323bcdb2-5bbd-4f13-a7c3-b19bc2cc2452.
{
    // same as an Event in the "event-log", plus Partition
    "Partition": "UUID"
}
public-stream
The same as a stream, except only for Public Streams, with the public prefix in the collection name. Public streams can only exist on the default scope.
stream-definitions
This collection contains all Filters registered with the Runtime.
Filters defined by an Event Handler have a type of EventTypeId, while other filters have a type of Remote.
{
    // id of the Stream the Filter creates
    "_id": "UUID",
    "Partitioned": "bool",
    "Public": "bool",
    "Filter": {
        "Type": "string",
        "Types": [
            // EventTypeIds to filter into the stream
        ]
    }
}
stream-processor-states
This collection keeps track of all Stream Processors and their state. Partitioned streams will have a FailingPartitions property for tracking the failure information per partition.
{
    "SourceStream": "UUID",
    "EventProcessor": "UUID",
    "Position": "decimal",
    "LastSuccesfullyProcessed": "date",
    // failure tracking information
    "RetryTime": "date",
    "FailureReason": "string",
    "ProcessingAttempts": "int",
    "IsFailing": "bool"
}
subscription-states
This collection keeps track of Event Horizon Subscriptions in a very similar way to stream-processor-states.
{
    // the producer's microservice, tenant and stream info
    "Microservice": "UUID",
    "Tenant": "UUID",
    "Stream": "UUID",
    "Partition": "UUID",
    "Position": "decimal",
    "LastSuccesfullyProcessed": "date",
    "RetryTime": "date",
    "FailureReason": "string",
    "ProcessingAttempts": "int",
    "IsFailing": "bool"
}
Commit vs Publish
We use the word Commit rather than Publish when talking about saving events to the event store. We want to emphasize that it's the event store that is the source of truth in the system. The act of calling filters/event handlers comes after the event has been committed to the event store. We also don't publish to any specific stream, event handler or microservice. After the event has been committed, it's ready to be picked up by any processor that listens to that type of event.
2.8 - Event Sourcing
Overview of Event Sourcing in the Dolittle Platform
Event Sourcing is an approach that derives the current state of an application from the sequential Events that have happened within the application. These events are stored to an append-only Event Store that acts as a record for all state changes in the system.
Events are facts and Event Sourcing is based on the incremental accretion of knowledge about our application / domain. Events in the log cannot be changed or deleted. They represent things that have happened. Thus, in the absence of a time machine, they cannot be made to un-happen.
Here's an overview of Event Sourcing:
Problem
A traditional model of dealing with data in applications is CRUD (create, read, update, delete). A typical example is to read data from the database, modify it, and update the current state of the data. Simple enough, but it has some limitations:
- Data operations are done directly against a central database, which can slow down performance and limit scalability
- The same piece of data is often accessed from multiple sources at the same time. To avoid conflicts, transactions and locks are needed
- Without additional auditing logs, the history of operations is lost. More importantly, the reason for changes is lost.
Advantages with Event Sourcing
- Horizontal scalability
- With an event store, it’s easy to separate change handling and state querying, allowing for easier horizontal scaling. The events and their projections can be scaled independently of each other.
- Event producers and consumers are decoupled and can be scaled independently.
- Flexibility
- The Event Handlers react to events committed to the event store. The handlers know about the event and its data, but they don’t know or care what caused the event. This provides great flexibility and can be easily extended/integrated with other systems.
- Replayable state
- The state of the application can be recreated by just re-applying the events. This enables rollbacks to any previous point in time.
- Temporal queries make it possible to determine the state of the application/entity at any point in time.
- Events are natural
- Audit log
- The whole history of changes is recorded in an append-only store for later auditing.
- Instead of being a simple record of reads/writes, the reason for change is saved within the events.
Problems with Event Sourcing
- Eventual consistency
- As the events are separated from the projections made from them, there will be some delay between committing an event and handling it in handlers and consumers.
- Event store is append-only
- As the event store is append-only, the only way to update an entity is to create a compensating event.
- Changing the structure of events is hard as the old events still exist in the store and need to also be handled.
Projections
The Event Store defines how the events are written in the system; it does not define or prescribe how things are read or interpreted. Committed events will be made available to any potential subscribers, which can process the events in any way they require. One common scenario is to update a read model/cache of one or multiple views, also known as a projection or materialized view. As the Event Store is not ideal for querying data, a prepopulated view that reacts to changes is used instead. Dolittle has no built-in support for a specific style of projection, as the requirements for that are out of scope of the platform.
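As an illustration, a simple projection can be nothing more than an event handler that keeps a read model up to date. This sketch reuses the tutorial's DishPrepared event; the handler id is hypothetical and the in-memory dictionary is a stand-in for a real database:
// DishCounterProjection.cs (illustrative sketch, hypothetical id)
using System.Collections.Generic;
using Dolittle.SDK.Events;
using Dolittle.SDK.Events.Handling;

namespace Kitchen
{
    [EventHandler("6aae5e24-8374-4c2a-9fd9-0a9aab1744c3")]
    public class DishCounterProjection
    {
        // stand-in read model; a real system would use a database
        static readonly Dictionary<string, int> DishesPerChef = new Dictionary<string, int>();

        public void Handle(DishPrepared @event, EventContext eventContext)
        {
            DishesPerChef.TryGetValue(@event.Chef, out var count);
            DishesPerChef[@event.Chef] = count + 1;
        }
    }
}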
Compensating events
To negate the effect of an Event that has happened, another Event has to occur that reverses the effect. This can be seen in any mature Accounting domain where the Ledger is an immutable event store or journal. Entries in the ledger cannot be changed. The current balance can be derived at any point by accumulating all the changes (entries) that have been made and summing them up (credits and debts). In the case of mistakes, an explicit correcting action would be made to fix the ledger.
Commit vs Publish
Dolittle doesn't publish events; rather, they are committed. Events are committed to the event log, from which any potential subscribers will pick up the event and process it. There is no way to "publish" to a particular subscriber, as all the events are available on the event log, but you can create a Filter that creates a Stream.
Reason for change
By capturing all changes in the forms of events and modeling the why of the change (in the form of the event itself), an Event Sourced system keeps as much information as possible.
A common example is an e-commerce site that wants to test a hypothesis:
A user who has an item in their shopping cart but does not proceed to buy it will be more likely to buy this item in the future
In a traditional CRUD system, where only the state of the shopping cart (or worse, completed orders) is captured, this hypothesis is hard to test. We do not have any knowledge that an item was added to the cart and then removed.
On the other hand, in an Event Sourced system with events like ItemAddedToCart and ItemRemovedFromCart, we can look back in time and check exactly how many people had an item in their cart at some point and did not buy it, and how many of them subsequently did. This requires no change to the production system and no waiting to gather sufficient data.
When creating an Event Sourced system we should not assume that we know the business value of all the data that the system generates, or that we always make well-informed decisions for what data to keep and what to discard.
Further reading
2.9 - Aggregates
Overview of Aggregates
An Aggregate is a Domain-driven design (DDD) term coined by Eric Evans. An aggregate is a collection of objects that represents a concept in your domain; it's not a container for items. It's bound together by an Aggregate Root, which upholds the rules (invariants) that keep the aggregate consistent. It encapsulates the domain objects, enforces business rules, and ensures that the aggregate can't be put into an invalid state.
Example
For example, in the domain of a restaurant, a Kitchen could be an aggregate, where it has domain objects like Chefs, Inventory and Menu and an operation PrepareDish.
The kitchen would make sure that:
- A Dish has to be on the Menu for it to be ordered
- The Inventory needs to have enough ingredients to make the Dish
- The Dish gets assigned to an available Chef
Here’s a simple C#ish example of what this aggregate root could look like:
public class Kitchen
{
    Chefs _chefs;
    Inventory _inventory;
    Menu _menu;

    public void PrepareDish(Dish dish)
    {
        if (!_menu.Contains(dish))
        {
            throw new DishNotOnMenu(dish);
        }
        foreach (var ingredient in dish.ingredients)
        {
            var foundIngredient = _inventory
                .GetIngredient(ingredient.Name);
            if (!foundIngredient)
            {
                throw new IngredientNotInInventory(ingredient);
            }

            if (foundIngredient.Amount < ingredient.Amount)
            {
                throw new InventoryOutOfIngredient(foundIngredient);
            }
        }
        var availableChef = _chefs.GetAvailableChef();
        if (!availableChef)
        {
            throw new NoAvailableChefs();
        }
        availableChef.IsAvailable = false;
    }
}
Aggregates in Dolittle
With Event Sourcing, the aggregates are the key components for enforcing the business rules and the state of domain objects. Dolittle has a concept called AggregateRoot in the Event Store that acts as an aggregate root to the AggregateEvents applied to it. The root holds a reference to all the aggregate events applied to it and it can fetch all of them.
Structure of an AggregateRoot
This is a simplified structure of the main parts of an aggregate root.
AggregateRoot {
    AggregateRootId Guid
    EventSourceId Guid
    Version int
    AggregateEvents AggregateEvent[] {
        EventSourceId Guid
        AggregateRootId Guid
        // normal Event properties also included
        ...
    }
}
AggregateRootId
Identifies this specific type of aggregate root. In the kitchen example this would be a unique id given to the Kitchen class to distinguish it from other aggregate roots.
EventSourceId
EventSourceId represents the source of the event, like a "primary key" in a traditional database. In the kitchen example this would be the unique id for each instance of the Kitchen aggregate root.
Version
Version is the position of the next AggregateEvent to be processed. It's incremented after each AggregateEvent has been applied by the AggregateRoot. This ensures that the root will always apply the events in the correct order.
AggregateEvents
The list holds the reference ids of the actual AggregateEvent instances that are stored in the Event Log. With this list, the root can ask the Runtime to fetch all of the events with matching EventSourceId and AggregateRootId.
Designing aggregates
When building your aggregates, roots and rules, it is helpful to ask yourself these questions:
- “What is the impact of breaking this rule?"
- “What happens in the domain if this rule is broken?"
- “Am I modelling a domain concern or a technical concern?"
- “Can this rule be broken for a moment or does it need to be enforced immediately?"
- “Do these rules and domain objects break together or can they be split into another aggregate?"
Further reading
3 - Platform
Overview of the Dolittle Platform
Dolittle Platform is our PaaS (Platform-as-a-Service) solution for hosting your Dolittle microservices in the cloud.
3.1 - Requirements
Requirements for running microservices in the Dolittle platform
To be compatible with the environment of the Dolittle platform, there are certain requirements we impose on your microservices.
If they are not met, your application might behave unexpectedly - or in the worst case - not work at all.
The following list of requirements is subject to change, but if you have an application running in our platform, we will always notify you before making any changes.
1. Your application must use the resource system
To ensure data privacy, security and proper segregation of your tenant’s data, our platform has a resource management system.
This system controls access and connection settings for resources on a per request basis and will provide your microservice with the necessary information for accessing these resources programmatically.
The connection information will not be the same as when developing locally, so you must not embed connection settings in your code.
This requirement applies to reading and writing data to databases or files, and to making API calls to services, both for internal resources provided by the Dolittle platform and for external 3rd-party services.
For the resource management system to work, and to protect your application and users from data leakage, we encrypt and authenticate all interactions with your application through the platform.
This means that your microservices will be completely isolated by default, and all endpoints that should be accessible outside our platform need to be exposed explicitly and configured with appropriate encryption and authentication schemes.
To enable same-origin authentication flows and adhere to internet best practices, the platform will take control of a set of URIs for the hostnames you have allocated to your application. The following paths and any sub-path of these (in any form of capitalisation) are reserved for the platform:
- /.well-known
- /robots.txt
- /sitemap
- /api/Dolittle
- /Dolittle
3. Your microservices must be stateless, scalable and probeable
To allow for efficient hosting of your application, we have to be able to upgrade, restart, move and scale your microservices to handle the load and perform necessary security upgrades.
This means that you must not rely on any in-memory state for anything apart from the per-transaction state, and you must not rely on there being a single instance of your microservices at any point in time.
To ensure that your microservices are healthy and ready to perform work, your microservices must expose both liveness and readiness probes.
The microservice should respond to the liveness probe whenever it has successfully started and is in a functional state, and should respond to the readiness probe whenever it is free to handle incoming requests from users.
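A minimal sketch of such probes, assuming an ASP.NET Core head; the endpoint paths here are illustrative, not platform-mandated:
// Startup.cs (hedged sketch, assuming ASP.NET Core health checks)
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
        => services.AddHealthChecks();

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseEndpoints(endpoints =>
        {
            // liveness: the process has started and is functional
            endpoints.MapHealthChecks("/healthz/live");
            // readiness: the service is free to handle incoming requests
            endpoints.MapHealthChecks("/healthz/ready");
        });
    }
}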
4. Your application must adhere to semantic versioning of your microservices
We rely on semantic versioning to properly track changes of your microservices (from an operational aspect) and to decide on the correct course of action when new versions of your microservices are built.
Minor or patch increments will result in automatic upgrades of your running microservices without any human interaction, while major increments require manual approval and potential updates of configuration or data structures.
This means that you must increment the major number when making changes to your microservices that require changes in the platform for your application to work properly.
5. Your frontend must be a static single-page application
To ensure that any user-facing frontend is served quickly and with minimal data-usage, we serve your frontend using separate servers with appropriate caching, compression and CDN strategies.
This means that your frontend must be built as a single-page application to static HTML, CSS and JavaScript files.
These files must be built and versioned alongside your backend microservices to ensure that the frontend and backend versions are aligned and function properly.
3.2 - Deploy an application
How to deploy an application in the Dolittle Platform
This guide is for the users of our Platform. If you aren’t already a user, please contact us to host your microservices!
Prerequisites
Familiar with the following:
- Docker containers
- Kubernetes
- Microsoft Azure
Recommendation
For users on Windows OS, we recommend that you use WSL/Ubuntu as your shell/terminal instead of CMD/PowerShell.
Installation
Install the following software:
Configuration
After an environment has been provisioned for you in the Dolittle PaaS, you will receive these details to use with the deployment commands in the following sections:
Subscription ID
Resource Group
Cluster Name
Application Namespace
ACR Registry
Image Repository
Deployment Name
Application URL
Setup
All commands are meant to be run in a terminal (Shell)
AZURE
Log in to Azure:
az login
AKS - Azure Kubernetes Service
Get credentials from Dolittle’s AKS cluster
az aks get-credentials -g <Resource Group> -n <Cluster Name> --subscription <Subscription ID>
ACR - Azure Container Registry
Get credentials to Azure Container Registry
az acr login -n <ACR Registry> --subscription <Subscription ID>
Deployment
To deploy a new version of your application, follow these steps. For <Tag>, use semantic versioning, e.g. “1.0.0”.
Docker
Build your image
docker build -t <Image Repository>:<Tag> .
Push the image to ACR
docker push <Image Repository>:<Tag>
Kubernetes
Patch the Kubernetes deployment to run your new version
kubectl patch --namespace <Application Namespace> deployment <Deployment Name> -p '{"spec": { "template": { "spec": { "containers": [{ "name":"head", "image": "<Image Repository>:<Tag>"}] }}}}'
Debugging
kubectl commands:
Show the status of your application pods
kubectl -n <Application Namespace> get pods
Show deployed version of your application
kubectl -n <Application Namespace> get deployment -o wide
Show the logs of the last deployed version of the application
kubectl -n <Application Namespace> logs deployments/<Deployment Name>
Logs for the application, last 100 lines
kubectl -n <Application Namespace> logs deployments/<Deployment Name> --tail=100
3.3 - Update configurations
How to update configuration files in the Dolittle Platform
This guide is for the users of our Platform. If you aren’t already a user, please contact us to host your microservices!
Prerequisites
Familiar with the following:
- Kubernetes
- Microsoft Azure
Recommendation
For users on Windows OS, we recommend that you use WSL/Ubuntu as your shell/terminal instead of CMD/PowerShell.
Installation
Install the following software:
- kubectl
- Azure CLI (az)
Configuration
After an environment has been provisioned for you in the Dolittle PaaS, you will receive a yaml file per environment. The files will be similar to this:
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: application-namespace
  name: app-dev-ms-env-variables
  labels:
    tenant: Customer
    application: App-Dev
    microservice: MS-A
data:
  OPENID_AUTHORITY: "yourapp.auth0.com"
  OPENID_CLIENT: "client-id"
  OPENID_CLIENTSECRET: "client-secret"
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: application-namespace
  name: app-dev-ms-config-files
  labels:
    tenant: Customer
    application: App-Dev
    microservice: MS-A
data:
  myapp.json: |
    {
      "somekey": "somevalue"
    }
The files represent configmap resources in Kubernetes. We recommend that you store the files in a version control system (VCS) of your choice.
Purpose
Each yaml file consists of 2 configmaps per microservice:
app-dev-ms-env-variables
: This configmap is for the environment variables that are passed to the container at startup.
app-dev-ms-config-files
: This configmap is for adding/overriding files. The default mount point is app/data.
Please do NOT edit/change the following:
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: application-namespace
  name: app-dev-ms-env-variables
  labels:
    tenant: Customer
    application: App-Dev
    microservice: MS-A
data:
The above mentioned data is vital to the deployment and must not be altered in any way. Any changes here may result in a forbidden response when the apply command is run.
You may alter the content under data:
OPENID_AUTHORITY: "yourapp.auth0.com"
OPENID_CLIENT: "client-id"
OPENID_CLIENTSECRET: "client-secret"
connectionstring__myconnection: "strings"
Alter existing or add new key/value pairs.
myapp.json: |
  {
    "somekey": "somevalue"
  }
customSetting.json: |
  {
    "settings": {
      "connection": "connectionstring"
    }
  }
Alter existing JSON data or add new entries; each key is linked to a specific file that will be available at runtime under app/data/.
Setup
You need to set up your AKS credentials.
Update configurations
To update the configurations:
kubectl apply -f <filename>
You must be in the directory of the yaml file before running the command.
To update/add a single key in the config:
kubectl patch -n <Application Namespace> configmap <Configmap Name> -p '{"data":{"my-key":"value that i want"}}'
To remove a single key from the configuration:
kubectl patch -n <Application Namespace> configmap <Configmap Name> -p '{"data":{"my-key":null}}'
See configurations
JSON output
kubectl get -n <Application Namespace> configmap <Configmap Name> -o json
YAML output:
kubectl get -n <Application Namespace> configmap <Configmap Name> -o yaml
For an advanced printout, you need a tool called jq for parsing the JSON in your shell:
kubectl get -n <Application Namespace> configmap <Configmap Name> -o json | jq -j '.data | to_entries | .[] | "\(.key): \(.value)\n"'
3.4 - Update secrets
How to update secrets in the Dolittle Platform
This guide is for the users of our Platform. If you aren’t already a user, please contact us to host your microservices!
Prerequisites
Familiar with the following:
- Kubernetes
- Microsoft Azure
Recommendation
For users on Windows OS, we recommend that you use WSL/Ubuntu as your shell/terminal instead of CMD/PowerShell.
Installation
Install the following software:
- kubectl
- Azure CLI (az)
Secrets
After an environment has been provisioned for you in the Dolittle PaaS, you will receive a yaml file per environment. The files will be similar to this:
---
apiVersion: v1
kind: Secret
metadata:
  namespace: application-namespace
  name: apps-dev-ms-secret-env-variables
  labels:
    tenant: Customer
    application: App-Dev
    microservice: MS-A
type: Opaque
data:
  OPENID_SECRET: b3BlbiBpZCBzZWNyZXQ=
The files represent Secret resources in Kubernetes. We recommend that you store the files in a version control system (VCS) of your choice.
Purpose
Each yaml file consists of a secret per microservice:
apps-dev-ms-secret-env-variables
: This secret is for the environment variables that are passed to the container at startup. One important thing to remember is that the values have to be encoded using base64.
Please do NOT edit/change the following:
---
apiVersion: v1
kind: Secret
metadata:
  namespace: application-namespace
  name: apps-dev-ms-secret-env-variables
  labels:
    tenant: Customer
    application: App-Dev
    microservice: MS-A
type: Opaque
data:
The above mentioned data is vital to the deployment and must not be altered in any way. Any changes here may result in a forbidden response when the apply command is run.
You may alter existing or add new key/value pairs.
OPENID_SECRET: b3BlbiBpZCBzZWNyZXQ=
DB_PASSWORD: c29tZSBwYXNzd29yZA==
Setup
You need to set up your AKS credentials.
Encode secrets
To encode values:
echo -n "my super secret pwd" | base64 -w0
The above command will give you:
bXkgc3VwZXIgc2VjcmV0IHB3ZA==
The value can then be added to the secrets:
MY_SECRET: bXkgc3VwZXIgc2VjcmV0IHB3ZA==
Update secrets
To update the secrets:
kubectl apply -f <filename>
You must be in the directory of the yaml file before running the command.
To update/add a single key in the secrets:
kubectl patch -n <Application Namespace> secret <Secrets Name> -p '{"data":{"my-key":"value that i want encoded using base64"}}'
To remove a single key from the secrets:
kubectl patch -n <Application Namespace> secret <Secrets Name> -p '{"data":{"my-key":null}}'
See secrets
JSON output:
kubectl get -n <Application Namespace> secret <Secrets Name> -o json
YAML output:
kubectl get -n <Application Namespace> secret <Secrets Name> -o yaml
For an advanced printout, you need a tool called jq for parsing the JSON in your shell:
kubectl get -n <Application Namespace> secret <Secrets Name> -o json | jq -j '.data | to_entries | .[] | "\(.key): \(.value)\n"'
3.5 - FAQ
Frequently asked questions about the Dolittle Platform
Can I login without allowing cookies?
If you’re getting strange results when logging in through Sentry or another OIDC service, check that you’re allowing cookies for the domains!
Without cookies you cannot log in - at all. Sorry!
4 - References
Reference documentation
This section contains reference documentation for the Dolittle platform.
4.1 - Runtime
Reference documentation for the Runtime configuration
This section contains reference documentation for the Dolittle Runtime.
4.1.1 - Configuration
Runtime configuration files reference
The Runtime uses JSON configuration files. The files are mounted to the .dolittle/ folder inside the Docker image.
| Configuration file | Required |
|--------------------|----------|
| tenants.json | ✔️ |
| resources.json | ✔️ |
| event-horizon-consents.json | ✔️ |
| microservices.json | |
| metrics.json | |
| endpoints.json | |
tenants.json
Required. Defines each Tenant in the Runtime.
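A minimal sketch, using the same placeholder notation as the files below - each tenant ID maps to a configuration object, which is typically left empty:
{
  <tenant-id>: {}
}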
resources.json
Required. Configurations for the Event Store per Tenant.
{
  <tenant-id>: {
    "eventStore": {
      "servers": [
        <MongoDB connection URI>
      ],
      "database": <MongoDB database name>,
      // defaults to 1000. MongoDB max connection amount
      "maxConnectionPoolSize": 1000
    }
  }
}
event-horizon-consents.json
Required. Defines the Consents a Producer tenant gives to Consumers so that they can receive events over the Event Horizon.
{
  // The producer tenant that gives the consent
  <tenant-id>: [
    {
      // the consumer's microservice and tenant to give consent to
      "microservice": <microservice-id>,
      "tenant": <tenant-id>,
      // the producer's public stream and partition to give consent to
      "stream": <stream-id>,
      "partition": <partition-id>,
      // an identifier for this consent
      "consent": <consent-id>
    }
  ]
}
Note
If there are no subscriptions, the file should only contain an empty JSON object {}.
microservices.json
Defines where the Producer microservices are so that the Consumer can Subscribe to them.
{
  // the id of the producer microservice
  <microservice-id>: {
    // the producer microservice's Runtime host and public port
    "host": <host>,
    "port": <port>
  }
}
endpoints.json
Defines the private and public ports for the Runtime.
{
  "public": {
    // default 50052
    "port": <port>
  },
  "private": {
    // default 50053
    "port": <port>
  }
}
metrics.json
The port to expose the Runtime's Prometheus metrics server on.
{
  // default 9700
  "Port": <port>
}
4.1.2 - Failures
The known failures and their associated codes
Event Store
| Code | Failure |
|------|---------|
| b6fcb5dd-a32b-435b-8bf4-ed96e846d460 | Event Store Unavailable |
| d08a30b0-56ab-43dc-8fe6-490320514d2f | Event Applied By Other Aggregate Root |
| b2acc526-ba3a-490e-9f15-9453c6f13b46 | Event Applied To Other Event Source |
| ad55fca7-476a-4f68-9411-1a3b087ab843 | Event Store Persistence Error |
| 6f0e6cab-c7e5-402e-a502-e095f9545297 | Event Store Consistency Error |
| eb508238-87ff-4519-a743-03be5196a83d | Event Store Sequence Is Out Of Order |
| 45a811d9-bdf7-4ee1-b9bc-3f248e761799 | Event Cannot Be Null |
| eb51284e-c7b4-4966-8da4-64a862f07560 | Aggregate Root Version Out Of Order |
| f25cccfb-3ae1-4969-bee6-906370ffbc2d | Aggregate Root Concurrency Conflict |
| ef3f1a42-9bc3-4d98-aa2a-942db7c56ac1 | No Events To Commit |
Filters
| Code | Failure |
|------|---------|
| d6060ba0-39bd-4815-8b0e-6b43b5f87bc5 | No Filter Registration Received |
| 2cdb6143-4f3d-49cb-bd58-68fd1376dab1 | Cannot Register Filter On Non Writeable Stream |
| f0480899-8aed-4191-b339-5121f4d9f2e2 | Failed To Register Filter |
Event Handlers
| Code | Failure |
|------|---------|
| 209a79c7-824c-4988-928b-0dd517746ca0 | No Event Handler Registration Received |
| 45b4c918-37a5-405c-9865-d032869b1d24 | Cannot Register Event Handler On Non Writeable Stream |
| dbfdfa15-e727-49f6-bed8-7a787954a4c6 | Failed To Register Event Handler |
Event Horizon
| Code | Failure |
|------|---------|
| 9b74482a-8eaa-47ab-ac1c-53d704e4e77d | Missing Microservice Configuration |
| a1b791cf-b704-4eb8-9877-de918c36b948 | Did Not Receive Subscription Response |
| 2ed211ce-7f9b-4a9f-ae9d-973bfe8aaf2b | Subscription Cancelled |
| be1ba4e6-81e3-49c4-bec2-6c7e262bfb77 | Missing Consent |
| 3f88dfb6-93d6-40d3-9d28-8be149f9e02d | Missing Subscription Arguments |
5 - Contributing
Contribute to the Dolittle platform
Dolittle is an open-source framework that is open for contributions.
This project has adopted the code of conduct defined by the Contributor Covenant to clarify expected behavior in our community. Read our Code of Conduct for more information.
Code
If you want to contribute with code, you can submit a pull request with your changes. It is highly recommended to read through all of our coding guidelines to see what we’re expecting from you as a contributor.
Documentation
Contributions can also be done through documentation; all of our repositories have a Documentation folder. It is highly recommended you read through our style guide and writing guide on documentation.
Issues
You can contribute by filing all of your issues under our Home repository.
5.1 - Guidelines
5.1.1 - The Vision
Learn about the Dolittle vision

Our vision at Dolittle is to build a platform for solving problems in line-of-business applications that is easy to use, increases developer productivity and remains easy to maintain.
While our vision remains constant, the details around what needs to be implemented shift over time as we learn more and gain experience on how the Dolittle framework is used in production. Dolittle will adapt as new techniques and technologies emerge.
Background
Dolittle targets the line of business type of application development. In this space there are very often requirements that are somewhat different from those of other types of applications. Unlike creating a web site with content, line of business applications have more advanced business logic and rules associated with them. In addition, most line of business applications tend to live for a long time once they are in use. Big rewrites are often not an option, as it involves a lot of work to capture existing features and domain logic in a new implementation. This means that one needs to think more about the maintainability of the product. In addition to this, in a fast moving world, code needs to be built in a way that allows for rapidly adapting to new requirements. It truly can be a life or death situation for a company if it is not able to adapt to market changes, competitors or users wanting new features. Traditional techniques for building software have issues related to this. N-tier architecture tends to mix concerns and responsibilities, leading to software that is hard to maintain. According to Fred Brooks and “The Mythical Man-Month”, 90% of the cost related to a typical system arises in the maintenance phase. This means that we should aim towards building our systems in a way that makes the maintenance phase as easy as possible.
The goal of Dolittle is to help make this better by focusing on bringing together good software patterns and practices,
and sticking to them without compromise. Dolittle embraces a set of practices described in this article and aims to adhere
to them fully.
History
The project was started by Einar Ingebrigtsen in late 2008, with the first public commits going out to Codeplex in early 2009. It was originally called Bifrost, and the source control history between 2009 and 2012 still sits there. The initial thought behind the project was to encapsulate commonly used building blocks. In 2009, Michael Smith and Einar took the project in a completely different direction after real world experience with traditional n-tier architecture and the discovery of commands. In 2012 it was moved to GitHub.
The original Bifrost repository can be found here.
From the beginning the project evolved through the needs we saw when consulting for different companies. Amongst these were Komplett. It has always had a high focus on delivering the building blocks needed to deliver true business value. This has been possible by engaging closely with domain experts and developers working on line of business solutions.
A presentation at NDC 2011 showcases the work that was done; you can find it here.
From 2012 to 2015 it was further developed at Statoil, driven by their needs for a critical LOB application: ProCoSys. In 2015, Børge Nordli became the primary Dolittle resource at Statoil, and in late 2015 he started maintaining a fork that was used by the project. Pull requests from the fork have been coming in steadily.
The design effort and thought that has gone into the project is the result of great collaboration over the years. Not only by the primary maintainers - Michael, Børge and Einar - but all colleagues and other contributors to the project.
5.1.2 - Code of conduct
Learn about what is expected from you on conduct
Contributor Covenant Code of Conduct
Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, gender identity and expression, level of experience,
nationality, personal appearance, race, religion, or sexual identity and
orientation.
Our Standards
Examples of behavior that contributes to creating a positive environment
include:
- Using welcoming and inclusive language
- Being respectful of differing viewpoints and experiences
- Gracefully accepting constructive criticism
- Focusing on what is best for the community
- Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
- The use of sexualized language or imagery and unwelcome sexual attention or
advances
- Trolling, insulting/derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or electronic
address, without explicit permission
- Other conduct which could reasonably be considered inappropriate in a
professional setting
Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at einar@dolittle.com. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project’s leadership.
Attribution
This Code of Conduct is adapted from the Contributor Covenant, version 1.4,
available at http://contributor-covenant.org/version/1/4
5.1.3 - How to Contribute
Learn about how to contribute
You can contribute through the issues on any of the repositories in all the
organizations found listed here. If you
want to contribute with code, you can submit a pull request with your changes.
Before contributing with code, it is highly recommended to read through all of
our documentation here to see what we’re expecting from you as a contributor.
This project has adopted the code of conduct defined by the
Contributor Covenant to clarify expected
behavior in our community. Read our Code of Conduct.
Contributions can also be done through documentation; all of our repositories have a Documentation folder, and more details on writing documentation can be found here.
5.1.4 - Core values
Learn about what we at Dolittle believe in
At Dolittle we believe that good software stems from a set of core values. These values guide us towards our core principles, and are also manifested in our development principles, which translate them into guidelines we use for our development. This page describes these core values to help put ourselves into the pit of success.
Privacy
We value privacy at all levels. It is at the core of everything we do. This means we will always strive towards making the right technology choices that let the owner of data have full control over where it is stored, with the ownership always being very clear. These things should always be at the back of every developer's mind when making choices. It is easy to forget that even a little log statement could violate this.
Empowering developers
The Dolittle mission is to empower developers to create great, sustainable,
maintainable software so that they can make their users feel like heroes.
This is part of our DNA - representing how we think and how we approach every
aspect of our product development. Our products range from libraries to frameworks
to tooling, and every step of the way we try to make it as easy as possible
for the developer consuming our technology.
Delivering business value
We empower developers because we want to make it easier to create great technical solutions without having to implement all the nitty-gritty details of doing so, so that our end-users - the developers we are building for - can focus on delivering the business value for their businesses. For Dolittle, the developers using our technology are our end-users and represent our business value. Our promise in this is that we will build relevant technology and not technology for technology's sake. Obviously, this is balanced with innovation; we will try things out, but we keep the feedback loop as tight as possible so that we can iterate and see whether things are delivering business value.
User focused
At the end of the day, whatever we are building and at any level - we build things
that affect an end user. There is always a person at the end of everything being
done. This is critical to remember. We build software to help people build software that is more relevant and improves the lives of the actual end user using the software.
With this in hand, we work hard to understand the persona; who are we building for
and what is the effect of building something.
Embracing change
The world is constantly changing, and so should software. Adapting to new knowledge, new opportunities and new challenges is how the world has always moved on. It is therefore a vital piece of Dolittle to be able to embrace this change. This is a mindset and something we strongly believe in, and it is also something we strive towards in our codebase; making it possible to adapt and change to new requirements without having to recreate everything.
Being pragmatic
Pragmatism is important; keeping things real, relevant and practical is at the core of this. However, it should not be treated as a trump card for taking shortcuts and not adhering to our principles - "as that would be the pragmatic way". It relates to the approach, which tool to use and, in general, the how. We also keep our focus on the outcome and never deviate from what outcome we are trying to achieve.
5.1.5 - Core principles
Learn about the core principles of Dolittle
Security
In everything we do, security is at the heart. We want users to feel secure when using systems built on top of the Dolittle frameworks and platform. Zero trust is a way of thinking that basically ensures that all data and resources are accessed in a secure manner.
Storage over compute
For everything we do at Dolittle and in the Dolittle frameworks, we always favor using more storage over compute. Compute power is always the most expensive part of systems, while storage is the cheapest. This means that if one has the chance and it is sustainable, duplicating data in storage for different purposes is preferred. Since the Dolittle architecture is built around events and the source of truth sits inside an event store, there is a great opportunity to leverage the storage capabilities out there and not be afraid of duplicates. This does, however, mean one needs to embrace the concept of eventual consistency.
Multi-tenancy
Since compute is the most expensive resource, the Dolittle frameworks and platform have been built from the ground up with multi-tenancy in mind. This basically means that a single process running the Dolittle runtime can represent multiple tenants of the application the runtime represents. This makes for a more optimal use of resources. Everything is based on the execution context, which holds the tenant information; from it we can, for instance, pick the correct connection string for a database, or other information for a resource.
Tenant segregation
With everything being multi-tenant we also focus on segregating the tenants. This principle means that we do not share a resource - unless we can cryptographically guarantee that data could not be shared between two tenants by accident. Everything in the Dolittle frameworks has been built from the ground up with this in mind, and with the resource system at play you'll be able to transparently work as if it was a single-tenant solution; the Dolittle frameworks in conjunction with the platform will then guarantee the correct resource.
Privacy
Data should in no way be made available to arbitrary personnel. Only if the data owner has consented should one get access to data. Much like what GDPR says for personal data and the consent framework defined there, business-to-business data should be treated in the same way. This means that developers trying to hunt down a bug shouldn't just be granted access to a production system and its data without the consent of the actual data owner. An application developer that builds a multi-tenant application might not even be the data owner, while its customers probably are. This should be governed in agreements between the application owner and the data owner.
Just enough software
A very core value we have at Dolittle is to not deliver more than just enough. We know the least at the beginning of a project, and the only way we can know if anything works is to put it into the hands of others. Only then can we really see what worked and what didn't. Therefore it is essential that we only do just enough. In the words of Sarah Lewis: “We thrive not when we've done it all, but when we still have more to do” (see her TED talk here). Others have said similar things with the same sentiment - like LinkedIn's Reid Hoffman, who said: “If you're not embarrassed by your first product release, you've released it too late.”
In order to be able to do this and guarantee a consistent level of quality, you have to have some core values and guiding principles to help you along the way. We have come up with a set of principles to make it easier to do so; read more here.
5.1.6 - Development principles
Learn about the development principles of Dolittle
We at Dolittle believe that properly crafted code makes for maintainable systems over time. Based on experience, we have found principles that help us do just that, and we've proven time and time again that it truly pays off to invest in this.
Consistency
One of the hardest things to accomplish is consistency, even within a single codebase. The Dolittle frameworks and platform span a number of projects and repositories, and it becomes increasingly important to stay consistent. Consistent in structure, naming, approach, principles and mindset. The consistency enables a high level of predictability and makes it easier to navigate for anyone using the Dolittle frameworks. For anyone maintaining the Dolittle frameworks, it means that it's easier to navigate and change context between tasks.
High cohesion
Rather than grouping artifacts by their technical nature, keep the things that are relevant to each other close. This makes it easier to navigate and provides a more consistent structure than dividing by technical nature. Anyone coming into a project to develop on a specific feature will have an easier time understanding and mastering that feature when it's all in the same location. Examples of division by technical nature would be keeping all your interfaces in an interface folder/namespace, or all your frontend components in a component folder - while what you're trying to focus on is the feature and everything related to the feature.
Cohesion is more than just at a file level within a feature; it is a mindset of keeping everything that belongs together close. That's why we apply this at a repository level as well.
High cohesion is core to the concept of a bounded context.
Divide only by the tier the artifacts belong to. See the examples below.
Frontend (Web)
+-- Bounded Context 1
| +-- Module 1
| +---- Feature 1
| | | View.html
| | | ViewModel.js
| | | Styles.css
| | | SomeRestAPI.cs
| | | SomeSignalRHub.cs
| +---- Feature 2
| | | View.html
| | | ViewModel.js
| | | Styles.css
| | | SomeRestAPI.cs
| | | SomeSignalRHub.cs
+-- Bounded Context 2
...
Domain
+-- Bounded Context 1
| +-- Module 1
| +---- Feature 1
| | | Command.cs
| | | CommandInputValidator.cs
| | | CommandBusinessValidator.cs
| | | CommandHandler.cs
| | | SecurityDescriptor.cs
| | | AggregateRoot.cs
| | | Service.cs
| +---- Feature 2
| | | Command.cs
| | | CommandInputValidator.cs
| | | CommandBusinessValidator.cs
| | | CommandHandler.cs
| | | SecurityDescriptor.cs
| | | AggregateRoot.cs
| | | Service.cs
+-- Bounded Context 2
...
Event
+-- Bounded Context 1
| +-- Module 1
| +---- Feature 1
| | | Event.cs
| +---- Feature 2
| | | Event.cs
+-- Bounded Context 2
...
Read
+-- Bounded Context 1
| +-- Module 1
| +---- Feature 1
| | | ReadModel.cs
| | | Query.cs
| | | QueryValidator.cs
| | | SecurityDescriptor.cs
| | | AggregateRoot.cs
| | | Service.cs
| +---- Feature 2
| | | ReadModel.cs
| | | Query.cs
| | | QueryValidator.cs
| | | SecurityDescriptor.cs
| | | AggregateRoot.cs
| | | Service.cs
+-- Bounded Context 2
...
Loose coupling
Automated testing - specifications
Part of being able to move fast with precision is having a good automated test regime; one that runs fast and can be relied upon for avoiding regressions. Dolittle was built from day one with automated tests, or rather Specs - specifications. You can read more about how Dolittle does this here.
SOLID
The SOLID principles aim to make it easier to create more maintainable software. They have been the core principles at play from the beginning of Dolittle. Below is a quick summary and some relations to Dolittle.
Single Responsibility Principle
Every class should have a single responsibility, and every method on a class should do only one thing. If it needs to do more things, it is most likely a coordinator and should delegate the actual work to its dependencies. This is true for types and methods alike.
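A small hypothetical example of a coordinator delegating the actual work:
// Hypothetical example: the importer coordinates, each dependency does one thing
public record Document(string Content);

public interface IDocumentParser { Document Parse(string path); }
public interface IDocumentStore { void Save(Document document); }

public class DocumentImporter
{
    readonly IDocumentParser _parser;
    readonly IDocumentStore _store;

    public DocumentImporter(IDocumentParser parser, IDocumentStore store)
    {
        _parser = parser;
        _store = store;
    }

    // Coordinates only - parsing and storing are delegated
    public void Import(string path) => _store.Save(_parser.Parse(path));
}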
Open / Closed Principle
Systems and their entities should be open for extension, but closed for modification. A good example of this is how you can extend your system quite easily by just adding a new event processor, without having to change the internals of Dolittle.
Liskov Substitution Principle
Objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program. An example of how Dolittle follows this is how the event store works. It has multiple implementations, and the contract promises what it can do; implementations need to adhere to that contract.
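A hypothetical illustration - not the actual Dolittle contract - of two implementations adhering to the same contract, so callers can be handed either one:
// Hypothetical contract and two substitutable implementations
using System.Collections.Generic;

public interface IEventStore
{
    void Commit(object @event);
}

public class MongoEventStore : IEventStore
{
    public void Commit(object @event) { /* persist to MongoDB */ }
}

public class InMemoryEventStore : IEventStore
{
    readonly List<object> _events = new();
    public void Commit(object @event) => _events.Add(@event);
}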
Interface Segregation Principle
Interfaces should represent a single purpose or concern. A good example in .NET would be IEnumerable and ICollection. Where IEnumerable concerns itself with being able to enumerate items, the ICollection interface is about modifying the collection by providing support for adding and removing. A concrete implementation of both is List.
Dependency Inversion Principle
Depend upon abstractions, not upon concrete implementations. Rather than a system knowing about concrete types and also taking on the responsibility for the lifecycle of its dependencies, we can quite easily define the dependencies we need at the constructor level and let a consumer provide them. This is often dealt with by introducing an IoC container into the system. Dolittle is built around this principle and relies on all dependencies being provided to it. It also assumes one has a container in place; read more here.
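A small hypothetical example of declaring dependencies at the constructor level:
// Hypothetical example: depend on the abstraction, let the consumer (or container) provide it
public interface IMessenger
{
    void Send(string message);
}

public class OrderService
{
    readonly IMessenger _messenger;

    // The dependency is declared on the constructor; an IoC container
    // or the caller decides which implementation to provide
    public OrderService(IMessenger messenger) => _messenger = messenger;

    public void Confirm(string orderId) => _messenger.Send($"Order {orderId} confirmed");
}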
Separation of Concerns
Another part of breaking up the system is to identify and understand the different concerns and separate these out. An example of this is in the frontend; take a view, for instance. It consists of flow, styling and logic. These are different concerns that we can extract into their own respective files and treat independently: HTML, CSS, JavaScript. Another good example is validation: instead of putting the validation as attributes on a model in C#, separate it into its own files, like Dolittle enforces. Read more in detail about it here.
Decoupling & Microservices
At the heart of Dolittle sits the notion of decoupling: making it possible to take a system and break it into small, focused lego pieces that can be assembled together in any way one wants to. This is at the core of what is referred to as Microservices - the ability to break up the software into smaller, more digestible components that make our software much easier to understand and maintain. When writing software in a decoupled manner, one gets the opportunity of composing it back together however one sees fit. You could compose it back into one application running inside a single process, or you could spread it across a cluster. It really is a deployment choice once the software gives you this freedom. When it is broken up you get the benefit of scaling each individual piece on its own, rather than scaling the monolith equally across a number of machines. This gives a higher density, better resource utilization and ultimately better cost control. With all the principles mentioned in this article, one should be able to produce such a system, and that is what Dolittle aims to help with.
Discovery
Dolittle relies heavily on different types of discovery mechanisms. For the C# code, discovery is all about types. It relies on being able to discover concrete types, as well as implementations of interfaces. Through this it can find the things it needs. You can read more about the type discovery mechanism here. It automatically knows about all the assemblies and the types in your system through the assembly discovery done at startup.
Convention over configuration
Read more about conventions here.
Cross Cutting Concerns
When concerns are separated out, some of them can be applied cross-cuttingly. Aspect-oriented programming is one way of applying these. Other ways could be more explicitly built into the code; something that Dolittle enables. The point of this is to be able to enforce code cross-cuttingly. Things that typically are repetitive tasks that a developer needs to remember to do are good candidates for this. It could also be more explicit, like the security descriptors in Dolittle that enable one to declaratively set up authorization rules across namespaces, for instance. This type of thinking can enable a lot of productivity and makes the code base less error-prone with regard to things that need to be remembered; it can be put in place once and relied upon. Patterns like chain-of-responsibility can help accomplish this without going all in on AOP.
Null
Null in code is often referred to as the billion dollar mistake. You MUST at all times try to avoid using null. If you have something that is optional, don't use null as a way to check whether or not it is provided. First of all, be explicit about what your dependencies are. A method should have overloads without the parameters that are optional. For implementations that are optional, provide a NullImplementation as the default instead. This makes program flow better and removes the need to deal with exceptions such as the NullReferenceException.
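A minimal sketch of this approach with hypothetical types:
// Hypothetical example of a NullImplementation as the default instead of null
public interface ILogger
{
    void Log(string message);
}

// The null object: safe to call, does nothing
public class NullLogger : ILogger
{
    public void Log(string message) { }
}

public class Worker
{
    readonly ILogger _logger;

    // The overload without the optional dependency defaults to the null object,
    // so callers never pass null and we never check for it
    public Worker() : this(new NullLogger()) { }
    public Worker(ILogger logger) => _logger = logger;

    public void DoWork() => _logger.Log("working");
}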
Runtime Exceptions
Exceptions should not be considered a way to control program flow. Exceptions should be treated as an exceptional state of the system, often caused by faulty infrastructure. At times there are exceptions that are valid due to developers not using an API correctly. As long as there is no way to recover, an exception is fine. You should not throw an exception and expect a caller of your API to deal with the recovery of the exception. Exceptions MUST be considered unrecoverable.
Examples of naming of exceptions can be found in C# Coding Styles.
Immutability
Mutability in code is a challenge. For instance, when dealing with threading, if an object used between two different threads is mutable, you basically have zero chance of guaranteeing its state. By making it immutable, and making it explicit that you create a new version of the object when mutating, you will avoid threading issues altogether. This is very core to typical functional programming languages, but is a good mindset regardless of language.
Mutability, however, goes even further; methods should never return a mutable type - they should protect their internals and take ownership of anything that can be mutated. That way you make your code very clear on responsibility. An example of this in C# would be returning List<>/IList<> from a method. Instead of returning this, you should return an IEnumerable<>. A List<> implements IEnumerable<>, so you don't need to convert it to an immutable type. This way the contract says that the caller cannot control mutation, and the responsibility becomes very clear.
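A small sketch of protecting internals this way:
// The internal list stays private; callers only get an IEnumerable<>
using System.Collections.Generic;

public class Order
{
    readonly List<string> _lines = new();

    public void AddLine(string line) => _lines.Add(line);

    // The contract says the caller cannot mutate the lines;
    // mutation remains the responsibility of Order itself
    public IEnumerable<string> Lines => _lines;
}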
5.1.7 - Patterns
Learn about some of the patterns we apply in Dolittle
Backend
Command Query Responsibility Segregation
Most systems have different requirements for the read and the write part of each bounded context. The requirements vary in what needs to be written in relation to what is being read and used. The performance characteristics are also, for the most part, different. Most line-of-business applications tend to read a lot more than they write. CQRS talks about totally segregating the read from the write and treating them uniquely. One often finds event sourcing associated with CQRS, something that Dolittle has embraced; it helps bridge the two sides while staying completely decoupled. It is an optional part of Dolittle, but highly recommended together with an event store.

Frontend
Model View View Model
MVVM is a variation of Martin Fowler's Presentation Model. It's the most commonly used pattern in XAML-based platforms such as WPF, Silverlight, UWP, Xamarin and more.
Model
The model refers to state being used typically originating from a server component such as a database.
It is often referred to as the domain model. In the context of Dolittle, this would typically be the ReadModel.
View
The view represents the structure and layout on the screen. It observes the ViewModel.
ViewModel
The ViewModel holds the state - the model - and also exposes behaviors that the view can utilize. In XAML the behaviors are represented by commands, something that wraps the behavior and provides a point for execution, but also the ability to check whether or not it can execute. This proves very handy when you want to validate things and not allow execution unless the input is valid or the user is authorized. Dolittle also has the concept of commands; these are slightly different, however. In fact, commands in Dolittle are a part of the domain. A command is the thing that describes the user's intent. You can read more about them here. In the Dolittle JavaScript frontend, however, the type of properties found in the XAML platforms can also be found. Read more about the frontend commands here.
Binding
Part of connecting the View with the ViewModel, and enabling it to observe it, is the concept of binding. Binding sits between the View and the ViewModel, and some implementations can even understand when values change and automatically react to the change. In XAML, this is accomplished by implementing interfaces like INotifyPropertyChanged, and INotifyCollectionChanged for collections. Dolittle has full client support both for XAML-based clients and for JavaScript / web-based clients. For XAML and what is supported, read more in detail here. For the JavaScript support, Dolittle has been built on top of Knockout, which provides observable() and observableArray(). Read more about the JavaScript support here.
A traditional MVVM would look something like this:

With the artifacts found in Dolittle and more separation in place with CQRS, the diagram looks slightly different:

You can read more details about the MVVM pattern here.
5.1.8 - Conventions
Learn about how Dolittle sees conventions
Part of a matured and maintainable solution is its conventions. All projects have them, and they get established over time. They are the things that say business logic goes here, this type of file goes there. The conventions established are often related to structure, and they help with consistency in your codebase.
Recipe driven development
It's not uncommon to have a wiki with things to remember for different types of code; recipes for what you need to remember to implement for that particular type of building block. These are great candidates for automation and can also be applied cross-cuttingly.
Convention over Configuration
Some systems require a lot of configuration to work, and it might not even just be a thing you do at the beginning - you have to add configuration over time as you move along. Dolittle believes that we can do a lot of this using conventions, and leans on the design paradigm of convention over configuration to do so. This helps lower the number of decisions a developer has to make, and as long as you stick with the conventions, it should all work out. It also helps if you want to change a convention, as you don't need to change a lot of configuration in addition to changing the convention that you might have enforced in structure.
Code Conventions
We have great opportunities with modern development environments to visit the code at build time, or reflect / introspect on the code at runtime. The benefits you can get from doing this are:
- Discover artifacts in your code to avoid having to explicitly add things in code; which then makes your code adhere to the open/closed principle
- Consistency; when things are discovered you enforce a consistency in the codebase
An example of this for frontend development is how Aurelia automatically hooks up views and view models based on the name being the same. In Dolittle we do a lot around discovery; in fact, it's one of the core things we do consistently.
The simplest example of a convention in play in Dolittle is during initialization: Dolittle will configure whatever IoC container you have hooked up with conventions. One default convention plays a part here, saying that an interface named IFoo will be bound to Foo, as long as they both sit in the same namespace. You'll see this throughout Dolittle internally as well; for instance, ICommandCoordinator is bound to CommandCoordinator.
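As a simplified sketch of how such a convention can be implemented with reflection - an illustration only, not Dolittle's actual implementation:
// Illustrative IFoo -> Foo convention lookup
using System;
using System.Linq;

public static class ConventionBinder
{
    public static Type FindDefaultImplementation(Type @interface)
    {
        if (!@interface.IsInterface || !@interface.Name.StartsWith("I"))
            return null;

        var candidateName = @interface.Name.Substring(1); // IFoo -> Foo
        return @interface.Assembly
            .GetTypes()
            .FirstOrDefault(type =>
                type.IsClass &&
                type.Name == candidateName &&
                type.Namespace == @interface.Namespace &&
                @interface.IsAssignableFrom(type));
    }
}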
The conventions at play are described throughout the documentation when it is relevant.
5.1.9 - Domain Driven Design
Learn about Domain Driven Design and how it fits with Dolittle
Dolittle set out from the beginning to embrace Domain-Driven Design and its concepts. The reason for this is that part of modelling a system is understanding the domain that the system is targeting, understanding the vocabulary used by the domain experts in that domain, and then being able to model exactly this. DDD is all about getting to a ubiquitous language that all team members use and understand.
Bounded context
In a large system you find that the system is not a single monolithic system, but rather a composition of smaller systems. Rather than modelling these together as one, bounded contexts play an important role in helping you separate the different sub-systems and model them on their own. Putting it all together in one model tends to become hard to maintain over time, and is often error-prone due to different requirements between the contexts that have yet to be properly defined. We often have some of the same data across a system and choose to model it only once - making the model include more than what is needed for specific purposes. This leads to bringing in more data than is needed and becomes a compromise. Take for instance the usage of Object-relational mapping and a single model for the entire system. If you have a model with relationships while in reality you have different requirements, you end up having to compromise on how you fetch it. For instance, if one of your features displays all the parts of the model including its children, it makes sense to eagerly fetch all of this to save roundtrips. While if the same model is used in a place where only the top aggregate holds the information you need, you want to be able to lazy-load it so that only the root gets loaded and not its children. The simple solution to this is to model each bounded context's models separately and use the power of the ORM to map to the database for the needs one has.
The core principle is to keep the different parts of your system apart and not take any dependency on any other contexts. All the details about a bounded context should be available in a context map. The context map then provides a high-level overview of the bounded context and its artifacts.
Building blocks
Domain Driven Design provides a set of building blocks to be able to model the domain. Dolittle aims to include most of these
building blocks as long as it makes sense.
Value Object
A value object is an object that contains attributes but has no conceptual identity. Value objects should be treated as immutable. In Dolittle you'll find the concept value object as a good example. Value objects do not hold an identity that makes them unique in a system. For instance, multiple persons can live at the same address, making the address a great candidate for a value object, as it is not a unique identifier.
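A minimal sketch of the address example - a C# record gives immutability and value equality for free:
// Attributes only, no identity; two addresses with the same attributes are equal
public record Address(string Street, string PostalCode, string City);

// new Address("Main St 1", "0150", "Oslo") == new Address("Main St 1", "0150", "Oslo") // true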
Aggregate
Aggregates represent a collection of objects that are bound together to form a root entity. In Dolittle you'll find the AggregateRoot that represents this. An important aspect of the aggregate in Dolittle, however, is that it does not expose any public state; whatever entities it relies on should only be used internally to perform business logic. The AggregateRoot is also what is known as an EventSource.
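A hypothetical sketch - not the Dolittle SDK API - of the shape of an aggregate root, with no public state and events as the only state transitions:
// Hypothetical aggregate root: internal state is only mutated by applying events
using System.Collections.Generic;

public record MoneyDeposited(decimal Amount);

public class BankAccount
{
    readonly List<object> _uncommittedEvents = new();
    decimal _balance; // internal state - never exposed publicly

    public void Deposit(decimal amount)
    {
        var @event = new MoneyDeposited(amount);
        _uncommittedEvents.Add(@event); // the aggregate is the event source
        On(@event);
    }

    void On(MoneyDeposited @event) => _balance += @event.Amount;
}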
Entity
Entities are the artifacts that aggregates use to form the root entity. They are uniquely identified in the system. For aggregate roots in Dolittle, it is about modelling the business logic that belongs together.
Repository
The repository pattern is all about providing an abstraction for working with domain objects while being storage agnostic, focused around the needs of the domain model. Since Dolittle is built around the concept of CQRS, the domain repository is one that knows how to work with aggregate roots.
Service
When operations conceptually do not belong to a domain object, you can pull in supporting services. These are not something the aggregate knows about, but rather something that knows about the aggregates and coordinates them. In Dolittle this would be the CommandHandler.
Domain Events
An important part of modelling the domain are the domain events. These are the things the domain experts talk about; the consequences, the things that happen in the system. Domain events represent the actual state transitions in a system. The AggregateRoot is the place where events are produced.
5.1.10 - Naming
Learn about Dolittle's naming conventions
One of the most important aspects of maintainable code is readability.
Being able to identify what something does just by reading the name.
This applies to files, type names, functions / methods - all the way through.
Abbreviations
You should not use abbreviations, unless they are well-known and understood abbreviations, such as XML or JSON or similar.
Plural for modules / namespaces / folders
Typically when working on features, the feature represents an artifact in the system. This artifact is often represented as a noun in the system, and the feature concerning the noun should be pluralized. An example would be Employee; the feature with everything related to this artifact would be Employees. Examples from our own code-base could be the Applications namespace, which holds Application. Similarly, ResourceTypes with ResourceType within it.
Database schemas, folders in systems or in general collections of artifacts should
similarly be named like this consistently.
Prefix / postfix
Having prefixes or postfixes on type names is often considered a code smell. It can be an indication that the name alone is not saying what it is actually doing. There is no reason to add the technical concern as a pre-/postfix.
Examples of pre-/postfixes you should avoid:
- Controller
- ViewModel
- Exception
- Factory
- Manager
Another common thing is to include the word Base as a prefix or postfix. This should not be there. Instead of adding pre-/postfixes, make the naming unambiguous.
Upper CamelCase vs lower camelCase
All C# code consistently uses upper CamelCase - also called PascalCase - while all JavaScript consistently uses lower camelCase, with the exception of types that can be instantiated, which use upper CamelCase. This last convention is common in the JavaScript space. Going between the two worlds, Dolittle makes sure to translate everything. During serialization, for instance, names are translated both ways, making it feel natural to a C# developer as well as a JavaScript developer.
5.1.11 - Versioning
Learn about how Dolittle is versioned
Versioning
Dolittle adheres to the Semantic Versioning v2 versioning scheme.
This gives the following: <major>.<minor>.<patch>.
Patch
Patches are improvements, bug fixes and similar, and are to be considered backwards compatible.
This maps to the following changelog labels: Fixed, Security
Minor
Minor releases contain new features / functionality and are to be considered backwards compatible.
This maps to the following changelog labels: Added, Deprecated
Major
A major release contains breaking changes and is not to be considered backwards compatible.
This maps to the following changelog labels: Changed, Removed
Pre-release
Pre-releases are considered an edge case and deviate from the normal versioning strategy. Dolittle as a general principle does not apply this in general releases, but might take advantage of it for special cases.
Changelog
Dolittle adheres to the guidance of the Keep a Changelog site.
Types of changes / labels
| Label | Description | Backwards compatible |
|-------|-------------|----------------------|
| Added | For new features | * |
| Changed | For changes in existing functionality | - |
| Deprecated | For soon to be removed features | * |
| Removed | For now removed features | - |
| Fixed | For any bug fixed | * |
| Security | In case of vulnerabilities | * |
Dependencies
Some package managers, like NuGet, have a strategy of resolving to the lowest possible version they can. This means that when you have an application consuming a dependency that has a dependency on something that gets a patch, the application does not necessarily gain the benefit of this patch.
e.g.
+- Application
+--- First level dependency
+--- Second level dependency
When patching the second level, the first level also needs to be updated, and the application itself needs to choose a wildcard dependency on either minor or patch to be able to get the patch. Dolittle recommends using a wildcard on minor; you can safely rely on the semantics of the versioning being accurate.
Source Control
All repositories have a master branch which holds the current released software at any point in time.
The branch gets tagged with the appropriate version based on each merged pull request coming in.
This means that every pull request that gets merged will have a unique version number associated with it.
Issues
Issues are to be associated with every pull request (read more here).
This information is used to create the changelog and versioning. Labeling of these issues is therefore vital.
| Change Log Label | Issue Label | Comment |
|------------------|-------------|---------|
| Added | - | Implicit if not having any other type label on the issue |
| Changed | breaking change | - |
| Deprecated | deprecation | - |
| Removed | removal | - |
| Fixed | bug | - |
| Security | security | - |
Future
Dolittle is working on automating this, deducing the changes from the code and its public APIs. This will in time make the process less error-prone.
5.1.12 - Definition of done
Learn about what we at Dolittle define as our definition of done
We have a clear definition of what we consider to be done. These are the exit criteria used to determine whether an implementation is complete. The actual coding is only one part of being done.
The definition of done is used actively when pull requests are reviewed.
This is our definition:
Functional software
Core to everything coming through a pull request is that it must be functional software. This means it should have been tested by the developer, confirmed tested by a tester, or covered by automated tests / specifications.
Adhering to our values and principles
It is expected that the code is adhering to our core principles and our
development principles, which are well founded in our
core values.
Following our expected structure
All code should be adhering to our repository structure and in general our
cohesion principles as found in development principles.
Has automated specifications (tests)
We do not look at code coverage as a metric, but we look at the logical coverage. We expect our code to
have automated specifications (tests) around it on a unit level. We look for behavioral specifications.
Has API documentation (XML, JsDoc, etc..)
All public APIs should have documentation around them, which is language specific and from which API documentation can be automatically extracted and generated for our documentation site.
Has general documentation
Documentation is important to have, and to maintain on changes. It is expected that any minor version bump contains the documentation for whatever is new, and a major version bump the documentation for what has changed. Read more on how to contribute to documentation here.
If exposing formats like JSON files that should be made available, create or remember to update the schema so it can be published, e.g. to Schema Store.
Ready to be deployed
Code should be ready to be deployed. Pull requests should never be made unless the code is ready to be deployed. The definition of what deployment means is defined by each repository. For some it means deploying a package that can be consumed publicly, which means it needs to be production ready. For other repositories, it could mean being ready to deploy to a staging environment - typically for applications being used by users.
5.1.13 - Pull requests
Learn about what we’re looking for with regards to pull requests
All of the Dolittle repositories are pull request
based. That means that nothing gets into any of the repositories without it being a pull request first.
The pull request as a gated concept means that we get to do code reviews and make sure the pull request adheres
to the definition of done.
Remotes and Forks
Contributors with access to a repository can send pull requests on the specific repository. For others, create your own fork and submit a pull request from the forked repository.
Delete branch after accepted pull request
After a pull request has been successfully merged, remember to delete the remote branch.
Open a draft pull request
We encourage opening a draft pull request to create a space to discuss the work being done and get feedback early in the process. It's a lot easier to change a design that is being evolved than one that is “final”.
5.1.14 - Just enough software
Learn about how we think about delivering just enough software
As described in our principles, we focus on delivering
just enough software. That means we think very iteratively at a macro level and deliver
just enough to make something work. We do not compromise on quality or our principles of working, but scope things down to be exactly what we need to get the feature ready to be used.
Every feature added to parts of the Dolittle platform has different development stages.
For each of these stages there is a set of REQUIRED capabilities associated with it.
The capabilities expected vary somewhat between the different types of features; due to the nature of a feature, some capabilities become OPTIONAL. Some things have a natural public endpoint, while others don't.
The driving force behind defining the different stages is to get a rapid feedback loop.
Deliver the bare minimum and gain experience and improve.
In no way does this mean that a feature is parked or finished, it should constantly evolve
and be iterated on.
We define the stages in general as:

| Stage | Description |
| ----- | ----------- |
| 0 | Proof of concept - proving a piece of functionality |
| 1 | Minimum viable solution - it should work, but does not have the experience of use yet |
| 2 | Basic tooling - typically something like CLI access - if applicable |
| 3 | Advanced tooling - typically in the sense of a UI - if applicable |
| 4 | Developer tools - extensions to supported developer tools like IDEs or similar |
Common
At the core of everything we do we require the following:
Specifications
All units of code should have automated specifications around them. The goal is not 100% coverage of code
lines, but close to 100% coverage of critical logic and the interactions between systems - which are mocked out.
Core Principles
At the heart of everything we do sits our core values,
core principles and development principles
that are to be considered required and prerequisites for this to work.
That means we build with the values and principles at hand.
For more details on contribution, read more here.
You should also read more about the vision.
Logging
Part of understanding a system in production is being able to bubble up what is going on and follow execution paths.
Logging helps with this and is a minimum requirement.
Cross Cutting
Throughout the different stages, hardening needs to be done. Once a feature has left the first stage, made it into a
system, and runs in production, the learning of what works and what doesn't comes. This learning needs to be fed
back into the product immediately and takes priority.
Stage 0
Some features need to be proven before they get commitment from the platform. In this phase you might piggyback
off of other people's work and take the shortest path to proving the functionality you're aiming for.
A concrete example in Dolittle has been the proving of inter-bounded-context communication, where the first
version that proved all the concepts was built on top of Kafka - even though Kafka was not a viable solution for
the long run.
Stage 0 is an OPTIONAL stage. Some functionality doesn't need this stage, as it is ready to be developed
into Stage 1 directly.
Stage 1
When coming up with new functionality it is very important to gain experience from it as soon as possible.
The first stage represents the MVP - Minimum Viable Product, or in our case solution. Getting it into
systems to prove what works and what does not, and feeding the results back as soon as possible, is the primary
objective of this stage.
Telemetry
Some features require the recording of telemetry. Telemetry is used by the platform to keep track of different
performance indicators, such as time spent processing, hit count or similar.
If the functionality being built has a natural set of performance indicators, or feeds indirectly into others, it needs
to use the telemetry system for this.
Telemetry is OPTIONAL and depends on the nature of the functionality being built.
API
The most important aspect of any new feature is to work on the design of the API - the surface that is being used.
Getting the implementation wrong is much more forgiving than getting the design of the API surface wrong.
Most effort should go into the design and the API.
An API is REQUIRED. Whether it is an internal API or a public API, it is still the contract and needs the most attention.
API Documentation
All code artifacts should adhere to the language specific approach to document the API. In C# for instance, it should be
in the form of XML documentation.
API Documentation is REQUIRED.
Public APIs - Interaction
Some features are expected to be used in Dolittle tools, be it CLI tools or other tooling.
All public APIs must dogfood Dolittle.
This means that all APIs are represented as Commands and Queries and have a full cycle. State changes must be represented as
events. A public API in Dolittle is not represented as a REST API, although that is one of the interaction layers available.
A REST API is just one of many options for the different entrypoints (Commands / Queries).
Public APIs are OPTIONAL. Not all functionality needs to be interacted with; some is just used internally.
Stage 2
A vital part of the success of a lot of features is the capability of interacting with them through tooling.
The tools organization holds the different tools, such as the CLI,
which is often a starting point for a lot of the tools.
Other tooling experience could also be small widgets in Web developer tools.
Stage 2 is OPTIONAL. Not all functionality needs to be exposed in tooling.
Stage 3
Part of the Dolittle platform is the Studio - the portal in which you
have the full overview of the runtime environment and management tools to help you manage running systems built with
Dolittle.
Stage 3 is OPTIONAL. Not all functionality needs to be exposed in Studio.
Stage 4
In order to make it simpler for developers, providing a proper tooling experience inside code editors or IDEs can make
functionality a lot more accessible. The tools organization holds the different tools, including
developer tools that extend these.
Stage 4 is OPTIONAL. Not all functionality needs to be exposed in Developer Tools.
5.1.15 - Repositories
Learn about how Dolittle structures its repositories
All of Dolittle repositories should be consistent in naming, structure and folder names.
This gives us a higher level of consistency and it makes it easier for us to create
cross cutting tools that can be applied to all of our repositories.
As part of a pull request review we look for this consistency and
make sure that everything is adhering to this structure.
One of the core principles is the high cohesion principle.
Keeping everything that belongs together close also applies to repositories. This is
why we keep everything related to a repository within the repository rather than separating it
by function. An added benefit is that it is much easier to adhere to the
definition of done.
Short names
We do not use short names for folders nor files. Examples you'll find in other repositories,
which might even be considered the de facto standard, are things like src and such.
We believe in things being ubiquitous and have a high focus on readability. Therefore, the
example above would instead be Source.
Structure
Below is the structure our repositories follow. All repositories might not have all elements,
but this is what is being adhered to.
<Root of repository>
│
└─── Documentation
└─── Samples
└─── Schemas
└─── Boilerplates
└─── Source
Documentation
All the documentation, with the exception of API documentation that is often generated
from source files, must be in the Documentation folder. Follow the
guide for contributing to the documentation.
All documentation is generated to our official site.
Put things in here in the expected format and structure, and it will eventually end up
on the documentation site.
Samples
Samples that show concrete examples directly linked to what the repository represents,
should be in the Samples
folder. If there are multiple samples, these should have
folders named in a way that makes it self explanatory for what they show within the Samples
folder.
Schemas
If the project exposes JSON formats that one wants to have published to the Schema Store,
they should be located in the Schemas folder.
BoilerPlates
Some projects have boiler plates that they use to make it easier for developers to get started.
This is typically used by the Dolittle Tooling.
All boiler plates should be in the BoilerPlates folder at the root of the project.
Source
All source representing the purpose of the repository, except samples, should be
within the Source folder.
5.1.16 - Editor config
Learn about how your editor should be configured
In the root of all projects there SHOULD be a .editorconfig file that governs how your editor should be configured.
If your editor does not support it, you need to set this up manually.
Default
All text files have these settings by default.

| Property | Setting |
| -------- | ------- |
| End of line | LF (Unix) |
| Indent | Spaces |
| Indent size | 4 |
YAML
For YAML files, the following properties are overridden.
| Property | Setting |
| -------- | ------- |
| Indent size | 2 |
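As an illustration, a minimal .editorconfig expressing the settings above could look like this (a sketch, not the authoritative file from the repositories):
root = true

[*]
end_of_line = lf
indent_style = space
indent_size = 4

[*.{yaml,yml}]
indent_size = 2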
5.1.17 - Logging
Learn about how you should use logging in your code
Logs are an important tool for developers to both understand program flow, and trace down bugs and errors as they appear in their software.
Comprehensive, cohesive and focused log messages are key to the efficacy of logs as a development tool.
To ensure that we empower developers with our software, we have put in place five guiding principles for writing log messages.
Structured log messages
Traditionally log messages have been stored as strings with data embedded with string formatting.
While being simple to store and transmit, these strings lose semantic and contextual information about data types and parameters.
This in turn makes searching and displaying log messages labour-intensive and reliant on specialized tools.
Modern logging frameworks support structured or semantic log messages.
These frameworks split the definition of the human readable log message from the data it contains.
All popular logging frameworks support the message template format specification, or a subset thereof.
logger.Trace("Committing events for {AggregateRoot} on {EventSource}", aggregateRoot.Id, eventSourceId);
TRACE 2020/04/03 12:19:58 Committing events for 9eb48567-c3ac-434b-90f1-26660723103b on 2fd8866a-9a4b-492b-8e98-791118552426
{
"level": "trace",
"timestamp": "2020-04-03T12:19:58.060Z",
"category": "Dolittle.Commands.Coordination.Runtime",
"template": "Committing events for {AggregateRoot} on {EventSource}",
"data": {
"AggregateRoot": "9eb48567-c3ac-434b-90f1-26660723103b",
"EventSource": "2fd8866a-9a4b-492b-8e98-791118552426"
}
}
Log message categories
To allow filtering of log messages from different parts of the source code during execution flow, log messages must contain a category.
In most languages this category is defined by the fully qualified name of the types that define the code executed, including the package or namespace in which the type resides.
These categories are commonly used during debugging to selectively enable Debug or Trace messages for parts of the software by defining filters on the log message output.
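As an illustration, with Microsoft's logging abstractions the category comes from the type parameter of the injected logger. This is a minimal sketch; CommandCoordinator is a hypothetical type:
using Microsoft.Extensions.Logging;

namespace Dolittle.Commands.Coordination.Runtime
{
    public class CommandCoordinator
    {
        readonly ILogger<CommandCoordinator> _logger;

        public CommandCoordinator(ILogger<CommandCoordinator> logger) => _logger = logger;

        public void Handle()
        {
            // Logged under the category "Dolittle.Commands.Coordination.Runtime.CommandCoordinator"
            _logger.LogDebug("Handling command");
        }
    }
}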
Log message levels
We define five log message levels that represent the intent or severity of the log message.
They are, in decreasing order of severity:
- Error - unrecoverable failure, resulting in end-user error.
- Warning - recoverable failure, performance or functionality is degraded.
- Information - information that is needed to use the software, and user activity traces.
- Debug - execution activity and sub-activity checkpoints.
- Trace - detailed execution trace with data that affects flow path.
Error
An error log message indicates that an unrecoverable failure has occurred, and that the current execution flow has stopped as a consequence of the failure.
The current activity that the software was performing cannot be completed, and this will therefore in most cases lead to an end-user error message being shown.
For languages that have the concept of exceptions or errors, these must be included in an error log message.
An error log message indicates that immediate action is required to recover full software functionality.
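For example, with Microsoft's logging abstractions the exception is passed as the first argument so the stack trace becomes part of the log message. This is a sketch; the event store, exception type and identifiers are illustrative:
try
{
    _eventStore.Save(eventStream);
}
catch (EventStoreUnavailable exception)
{
    // Include the exception so the failure can be diagnosed from the log
    _logger.LogError(exception, "Could not commit events for {EventSource}", eventSourceId);
    throw;
}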
Warning
While an error message indicates an unrecoverable failure, warning log messages indicate a recoverable failure or abnormal or unexpected behavior.
The current execution flow is able to continue to complete the current activity by recovering to a fail-safe state, albeit with possible degraded performance or functionality.
Typical examples would be an expected data structure that was not found but where it is possible to continue with default values, or multiple data structures found where there should only be one, but where it is safe to continue.
A warning log message indicates that cleanup or validation is required at a later point in time to recover or verify the intended software functionality.
Warning log messages are also used to warn developers about wrong usage of functionality, and deprecated functionality that will be removed in the future.
Information
Informational log messages track the general execution flow of the software, and provide the developer with required information to use the software correctly.
These log messages have long-term value, and typically include host startup information and user interactions with the application.
Information level log messages are typically the lowest severity messages that will be written by default, and must therefore not be used for messages that are not useful while the software is working as expected.
Debug
Debug log messages are used by developers to figure out where failures occur during execution flow while investigating interactively with the software.
These log messages represent high-level checkpoints of activities and sub-activities during execution flow, to give hints for which log message categories and source code to investigate in more detail.
Debug messages should not contain any data other than correlation and trace identifiers used to identify unique failing interactions.
Trace
Trace log messages are the most verbose of the log messages.
They are used by developers to figure out what caused a failure or an unexpected behavior, and should therefore contain the data that affects the execution flow path.
Typical uses of trace log messages are public methods on interface implementations, and contents of collections used for lookup.
Log output
The logs of an application are its source of truth. It is important that log messages are consistent in where they are outputted and in the format in which they are outputted. They should be outputted to a place where they can be easily retrieved by anyone who is supposed to read them. Log messages are normally outputted to the console, but they can also be appended to files. The log messages that are outputted should be readable and have a consistent style and format.
Configuring
We're not necessarily interested in all of the logging levels or all of the categories each time we run an application. The logging should be easily configurable so that we can choose what we want to see in terms of categories and levels. For instance, software running in a production environment should consider logging only information, warning and error log messages, while we may want to show more log messages when running in development mode. It is also important to keep in mind that logging can have a considerable performance cost. This is especially important to consider when deploying software with lots of logging to production environments.
Asp.Net Core
We're using Microsoft's logger in the Dolittle framework for .NET. We can use the appsettings.json file to configure the logging, and we can provide different configurations for different environments like production and development. Look here for information on Microsoft's logger.
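As an illustration, a minimal appsettings.json that shows information and above by default, but enables debug messages for Dolittle categories, could look like this (a sketch following Microsoft's logging configuration format; the "Dolittle" category filter is an assumption for the example):
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Dolittle": "Debug"
    }
  }
}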
Log message
Log messages should be written in a style that makes it easy to navigate and filter out irrelevant information, so that we can find the cause of any error simply by reviewing them. Logs should be focused and comprehensive for both humans and machines. They should also be consistent in format and style across platforms, languages and frameworks.
Stick to English
There are arguably many reasons to stick to English-only log messages. One technical reason is that English ensures that we stick to the ASCII character set.
This is important because we don't necessarily know what happens to the log message. If the log message uses special character sets, it might not render correctly or can become corrupt and thus unreadable.
Log context
Each log message should contain enough information so that the intended reader understands exactly what is going on without having to read any prior log messages. When we write log messages, it is in the context of the code we write and of where the log statement is, and it is easy to forget that this context is not implicit in the outputted log. Depending on their content, such log messages might not be comprehensible in the end.
There are possibly multiple aspects of 'context' in regards to logging. One might be the current environment or execution context of the application when the logging is performed; another might be domain-specific context, meaning information about where the logging takes place in the execution flow of an operation, or values of interest like IDs and names.
Log messages should have the appropriate information regarding the context relevant to the information that is intended to be communicated. For example for multi-threaded applications it would make sense to add information of the executing thread id and correlations between actions. And for multi-tenanted applications it would make sense to have information about the tenant the procedures are performed in.
It is important to consider the weight of the contextual information added to each log message. Adding lots of context information to every log message makes the log messages bloated and less human-readable. The amount of context information on a log message should be in proportion to the log message level. For instance, an information log message should not contain lots of contextual information that is not strictly needed for the end-user to use the software, while a trace or debug log message should contain the information necessary to deduce the cause of an error. For warning and error log messages that are produced as a result of an exception or error, it is important to include the stacktrace of the exception/error as part of the log message. Usually the methods or procedures that create log messages at these levels have their own parameter for an exception/error that outputs a log with the stacktrace nicely formatted.
For statically typed languages the namespace of the code executing the logging statement is usually provided with the log message which is helpful information for the developers in the case of troubleshooting.
Keep in mind the reader of the logs
We add logs to software because someone most likely has to read them someday. Thus it makes sense to keep in mind the target audience when writing log messages. Which person is most likely to read a log message affects all aspects of that log message; the content, level and category depend on it. Information log messages are intended for the end-user, while trace and debug messages are most likely only read in the case of troubleshooting, meaning that only developers will read them. The content of the log message should be targeted towards the intended audience.
Sensitive information like personally identifiable information, passwords and social security numbers has no place in log messages.
5.1.18 - Working Locally
Working Locally
Local packages
A lot of projects have a NuGet.config
file, in which you'll often find a local source, and if you do a …
… it basically fails if you don’t have the path it asks for.
If you’re not interested in being able to deploy packages locally between different projects, you can add an option
to ignore this:
$ dotnet restore --ignore-failed-sources
Be aware that the NuGet.config file is hierarchical in nature and sources can be disabled at any level. If you are not finding
packages in the source you are expecting, check for disabled sources in any NuGet.config file. It will look like:
<disabledPackageSources>
<add key="local" value="true" />
</disabledPackageSources>
Debugging locally
Be sure to read the README for DotNET.Build before starting.
If you want to debug an application into Dolittle's source code, follow these instructions:
1. You want to make sure that when building and packing the solutions, they use the locally generated packages (the ones the DeployPackagesLocally.sh script creates and copies into the right place in %HOME%/.nuget/packages).
   - This is not the case for Dolittle/DotNET.Fundamentals, since it does not have dependencies on other Dolittle packages.
   - For the other solutions, there should be a NuGet.Config file in the parent directory (the directory where the Build folder is present), and that file should have a reference to the local packages folder. This can be achieved by, for example, having a
     <add key="local" value="%HOME%/.nuget/packages"/>
     as a child of the packageSources tag in the configuration tag of the top-level NuGet.Config file.
   - Note that when you don't want the local v.1000 packages, this package feed source should either be disabled, or you can delete the local packages by running the DeleteLocalPackages.sh script in Build.
2. It is really important that you deploy the packages in the right order:
   1. Dolittle/DotNET.Fundamentals
   2. Dolittle/Runtime
   3. Dolittle/DotNET.SDK
   4. All other dependencies
   Note that the other dependencies should not have dependencies on each other; if they have, there can be trouble when creating the packages. If you're having trouble with dependencies (assemblies not loading or similar errors at startup), this might be the cause. Check whether the other dependencies have dependencies on each other, and build and package them in the correct order.
3. Make sure that the application that you want to debug also has a packageSource reference to %HOME%/.nuget/packages. Do a dotnet clean && nuget restore && dotnet restore to ensure that the solution is using the locally deployed packages.
4. Happy debugging!
Working across multiple projects
Most Dolittle projects have a submodule for dealing with builds and adding productivity to the development experience, which you can read more about here.
In this there is a file called DeployPackagesLocally.
Its purpose is to make it easier to work across multiple projects that generate packages which are dependencies of higher-level
projects.
The way it does this is to take advantage of the NuGet option of local packages.
It has been set up with an assumed structure between the different projects and organisations that Dolittle has.
From the base path in which you have your repositories, let's assume you have a Dolittle folder and the following structure:
+-- Dolittle
+-- Packages (Target for NuGet packages being deployed)
+-- DotNET.SDK
+-- Runtime
+-- DotNET.Fundamentals
+-- interaction (Organization: https://github.com/dolittle-interaction)
+---- AspNetCore
+---- ... other repos
+-- platform (Organization: https://github.com/dolittle-platform)
+---- Sentry
+---- ... other repos
As you may notice, there is a convention at play here: organizations are prefixed with dolittle-, and whatever comes after the dash is the name of the folder. This is not important, but gives you a sense of the thinking and conventions going into this. All the repositories found in the main Dolittle organization are considered "root" or core building blocks and do not belong in a sub-folder as such.
To enable a faster feedback loop, you can now start deploying packages locally, restore
directly from these,
and also enable local debugging directly.
In order to do this, simply run the script from a shell:
$ ./Build/DeployPackagesLocally.sh
This script is maintained in the Build git submodule. The script will find the correct Packages folder assuming that it is in a folder that is a direct parent of the project you are deploying. If you use the conventions outlined above with the Dolittle root folder and a Packages child folder, it should work as intended.
Known Issues
When trying to develop locally using the local packages that have been built from source, you should be aware of hard-coded versions in client code. The local packages will all have the version 2.0.0-alpha2.1000. Any hard-coded version will miss this local NuGet source and instead go to the Dolittle NuGet source and pull the appropriate version. Unfortunately, this will likely pull a whole host of
other versions of the framework DLLs that it relies on and lead to a "DLL hell" scenario. Most likely this will manifest itself in a runtime exception of System.IO.FileLoadException, with the message "The located assembly's manifest definition does not match the assembly reference". Be sure to scrutinize the output of your builds and ensure that no other versions of Dolittle are being installed.
As well as avoiding hard-coded versions, you should have locally built versions of all Dolittle framework DLLs used in your client project. For the same reason as with a hard-coded version, a version that is not built locally will not hit the local NuGet cache and will pull down a different version of the framework.
When using workspaces in VSCode, be aware that things may be excluded from the Workspace that include references to other versions of the Dolittle framework. These will not be detected by search from within VSCode.
5.1.19 - Issues
Learn about how to submit issues
Dolittle has a default issue template when you create an issue in GitHub.
The template is targeting bugs. For any bugs or problems, please follow the template.
Committing
When you're committing you can reference issues with hashtag # and the number of the issue.
This will link the issue and the commit, and the commit will show up as a comment on the
issue. This is very useful for transparency and helps the discussion.
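For example, a commit message referencing issue 712 could look like this (the message itself is illustrative):
$ git commit -m "Validate dish names before committing events, relates to #712"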
Branching
Creating a branch per issue is a good practice; it isolates the changes you're doing and relates
them to the issue. Name it so that it is clear which issue the branch is for: issue/# - e.g. issue/712.
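For example:
$ git checkout -b issue/712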
Pull requests
Pull requests must associate an issue by referencing it in the title or in the description with hashtag #, as
with commits.
5.1.20 - Runtime exceptions
Learn about how to work with runtime exceptions in code
Exceptions should not be used to control program flow. Exceptions should be treated as an exceptional state of the system,
often caused by faulty infrastructure. At times there are exceptions that are valid due to developers not using an API right.
As long as there is no way to recover, an exception is fine. You should not throw an exception and let a caller of your API
deal with the recovery of the exception. Exceptions MUST be considered unrecoverable.
Naming of exceptions is covered by the C# Coding Styles.
5.1.21 - C# coding styles
Learn about how to write C# in Dolittle
This is to be considered the coding standard for Dolittle. It is subject to automated
verification during automated builds and is also part of code reviews such as those done for
pull requests. Some things are common between languages, such as naming.
Values, principles and patterns & practices
It is assumed that all code written is adhering to our core values,
core principles and development principles.
On top of this we apply patterns that also reflect a lot of the mindset of things we do.
Read more here.
Compactness
In general, code should be compact in the sense that any "noise" of language artifacts or similar
that aren't really needed SHALL NOT be used. This is to increase readability, not decrease it.
Things that are implicit SHALL be left implicit and not turned into explicits.
Keywords
Use of var
Types are implicitly provided by the compiler and considered noise during declaration.
If one feels the need for explicitly declaring variables with their type, it is often a
symptom of something else being wrong - such as large methods that you can't get a feel
for straight away. This is most likely breaking the Single Responsibility Principle.
You MUST use var
and let the compiler infer the type implicitly.
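Example (the variable and element type are illustrative):
// Explicit type declaration - unnecessary noise:
// List<string> dishes = new List<string>();

// Let the compiler infer the type
var dishes = new List<string>();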
Private members
In C# the private modifier is not needed as this is the default modifier if nothing is specified.
Private members SHALL NOT have a private modifier.
Example:
public class SomeClass
{
string _someString;
}
this
Explicit use of this SHALL NOT be used. With the convention for prefixing private members,
the differentiation is clear.
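Example (a minimal sketch):
public class SomeClass
{
    string _someString;

    public void SetSomeString(string someString)
    {
        // _someString is clearly a member - no need for this._someString
        _someString = someString;
    }
}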
Prefixes and postfixes
A very common thing in naming is to include pre/post-fixes that describe the technical implementation,
or even the pattern that is being used in the implementation. This does not serve as useful information.
Examples of this are Manager, Helper, Repository, Controller and more (e.g. EmployeeRepository).
You SHOULD NOT pre- or postfix, but rather come up with a name that describes what it is.
Take the EmployeeRepository sample: the postfix Repository is not useful for the consumer;
a better name would be Employees.
Member variables
Member variables MUST be prefixed with an underscore.
Example:
public class SomeClass
{
string _someInstanceMember;
static string _someStaticMember;
}
One type per file
All files MUST contain only one type.
Class naming
Naming of classes SHALL be unambiguous and by name tell exactly what it is providing.
Example:
// Coordinates uncommitted event streams
public class UncommittedEventStreamCoordinator {}
Interface naming
It's been a common naming strategy to include I in front of any interface.
Prefixing with I can have another meaning as well: the actual word "I".
This can give better naming to interfaces and better meaning to names.
Examples:
// Implemented by types that can provide configuration
public interface ICanConfigure {}
// Implemented by a type that can provide a container instance
public interface ICanCreateContainer {}
You SHOULD try to look for this way of naming, as it provides a whole new level of expressing intent in the code.
Private methods
Private methods MUST be placed at the end of a class.
Example:
public class SomeClass
{
public void PublicMethod()
{
PrivateMethod();
}
void PrivateMethod()
{
}
}
Exceptions
flow
Exceptions are to be considered exceptional state. They MUST NOT be used to control
program flow. Exceptional state is typically caused by infrastructure problems or other
problems preventing normal flow from continuing.
types
You MUST create explicit exception types and NOT use the built-in ones.
The exception type can inherit from one of the standard ones.
Example:
public class SomethingIsNull : ArgumentException
{
public SomethingIsNull() : base("Something was null") {}
}
Throwing
If there is a reason to throw an exception, your validation code and actual throwing
MUST be in a separate private method.
Example:
public class SomeClass
{
public void PublicMethod(string something)
{
ThrowIfSomethingIsNull(something);
}
void ThrowIfSomethingIsNull(string something)
{
if( something == null ) throw new SomethingIsNull();
}
}
Async / Await
In C# the async / await keywords should be used with utmost care. Without really
thinking it through, they can bleed throughout your codebase without necessarily
a good reason. Alongside async / await comes the Task type that needs to be there.
In the places where threading is necessary, it MUST be dealt with internally to the
implementation and not bleed throughout its APIs. Dolittle has a very good handle on its
entrypoints, and from these entrypoints the need for scaling out across multiple threads
is rarely there. With the underlying infrastructure being relied on, web requests are
already threaded. Since we enter the system and return back as soon as possible, we have a
good grip on when this is needed. Threads can easily get out of hand and actually slow
down systems.
Exposing IList / ICollection
Public APIs SHALL NOT have mutable types as return types, such as IList or ICollection.
The responsibility for maintaining state should be with its owner. By exposing the
ability to change state outside the owner, you lose control over who can change state,
and unclear side-effects occur. Instead you should always expose immutable types
like IEnumerable.
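A minimal sketch of this (Employees and Employee are illustrative types):
public class Employees
{
    readonly List<Employee> _employees = new List<Employee>();

    public void Register(Employee employee) => _employees.Add(employee);

    // Expose an immutable view - consumers cannot mutate the underlying list
    public IEnumerable<Employee> All => _employees;
}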
Mutability
One of the biggest causes of side-effects in a system is the ability to mutate state, possibly
state one does not necessarily own. A typical example is something creating an instance of an object
and exposing public getters and setters for its properties, inviting anyone to change
this state. This makes it hard to track which part of the system actually changed the state.
Be very conscious about ownership of instances. Avoid mutability; most of the time it is
not needed. Instead, create new objects with the mutation in place.
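A minimal sketch of creating a new object with the mutation in place, rather than exposing setters (Dish is an illustrative type):
public class Dish
{
    public Dish(string name) => Name = name;

    public string Name { get; }

    // Instead of a setter, return a new instance with the change applied
    public Dish WithName(string name) => new Dish(name);
}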
5.1.22 - C# Specifications
Learn about how to write C# specifications
All the C# code has been specified by using Machine Specifications with an adapted style.
Since we’re using this for specifying units as well, we have a certain structure to reflect this. The structure is reflected in the folder structure and naming of files.
Folder structure
The basic folder structure we have is:
(project to specify).Specs
(namespace)
for_(unit to specify)
given
a_(context).cs
when_(behavior to specify).cs
A concrete sample of this would be:
Dolittle.Specs
Commands
for_CommandContext
given
a_command_context_for_a_simple_command_with_one_tracked_object.cs
when_committing.cs
The implementation SHOULD then look something like this:
public class when_committing : given.a_command_context_for_a_simple_command_with_one_tracked_object_with_one_uncommitted_event
{
static UncommittedEventStream event_stream;
Establish context = () => event_store_mock.Setup(e=>e.Save(Moq.It.IsAny<UncommittedEventStream>())).Callback((UncommittedEventStream s) => event_stream = s);
Because of = () => command_context.Commit();
It should_call_save = () => event_stream.ShouldNotBeNull();
It should_call_save_with_the_event_in_event_stream = () => event_stream.ShouldContainOnly(uncommitted_event);
It should_commit_aggregated_root = () => aggregated_root.CommitCalled.ShouldBeTrue();
}
The specifications should read out very clearly in plain English, which makes the code look very different from what we do for our units. For instance, we use underscore (_) as space in type names, variable names and the specification delegates. We also want to keep things as one-liners, so your Establish, Because and It statements should preferably be on one line. There are some cases where this does not make any sense, such as when you need to verify more complex scenarios. This also means that an It statement should be one assert.
Moq is used for handling mocking / faking of objects.
5.1.23 - Copyright header
Learn about the requirements of copyright headers in code files
Code files
All code files MUST have the following copyright header; this includes even automated test files, for all languages. The format needs to adhere to the following.
// Copyright (c) Dolittle. All rights reserved.
// Licensed under the MIT license. See LICENSE file in the project root for full license information.
For XML based languages, this would look like:
<!-- Copyright (c) Dolittle. All Rights Reserved. Licensed under the MIT License. See LICENSE file in the project root for full license information. -->
Other languages might have other ways to represents comments, for instance bash/shell scripts or similar:
# Copyright (c) Dolittle. All rights reserved.
# Licensed under the MIT license. See LICENSE file in the project root for full license information.
5.2 - Documentation
Documentation of documentation and how to write it
5.2.1 - Get started
Get started writing documentation locally
All of Dolittle's documentation is open-source and hosted on GitHub.
Add a new repository to the main Documentation repository
This guide teaches you how to add a new repository to the Dolittle documentation structure.
Start by cloning the Documentation repository and its submodules:
$ git clone --recursive https://github.com/dolittle/documentation
If you’ve already cloned it, you can get the submodules by doing the following:
$ git submodule update --init --recursive
1. Create documentation for the new repository
At the root of the working repository, create a Documentation folder with at least a matching _index.md,
and other markdown files if needed. Read our guide on structure for more information.
2. Adding the working repository as a submodule
In the Documentation repository, navigate to the Source/repositories/ folder and pull your working repository here as a submodule:
$ git submodule add <repository_url> <repository_name>
3. Linking submodules to content
The system relies on all documentation content sitting in the Source/content folder. This includes markdown files, images and other resources you link to your documentation.
The content folder contains the parent folders, with a matching _index.md, and the contents of the Documentation folder from the repository directly in this.
This is done by creating a symbolic link to the repository's Documentation folder.
<Documentation root>
└── Source
└── content
└── fundamentals
└── runtimes
└── ...
Open a shell and navigate to the correct sub-folder in the content folder, and then in the corresponding organisation folder.
Unix:
$ ln -s ../../repositories/<organisation-folder>/<repository>/Documentation <folder-name>
Windows:
c:> mklink /d <folder-name> ..\..\repositories\<organisation-folder>\<repository>\Documentation
Example:
Unix:
$ ln -s ../../repositories/runtime/Runtime/Documentation runtime
Windows:
c:> mklink /d runtime c:\Projects\Dolittle\Documentation\Source\repositories\runtime\Runtime\Documentation
Chances are you are contributing to the code of the repository, so you can leave it in place and maintain
code and documentation side-by-side.
All folder names given in this process will act as URL segments; be very careful about changing these after they have been deployed.
Writing
All documentation is written in markdown following the GitHub flavor.
Markdown can be written using simple text editors (Pico, Nano, Notepad), but more thorough editors like Visual Studio Code or Sublime Text are highly recommended. VSCode also has a markdown preview feature.
Read the writing guide and style guide for more information.
Happy documenting
5.2.2 - Writing guide
A guide on how to write documentation
This document is meant to be read alongside the style guide to provide concrete examples on formatting the document and syntax of different Hugo shortcodes.
Documentation overview
All Dolittle documentation is generated using Hugo 0.58.3, with the Dot theme.
Writing documentation
All files MUST have a metadata header at the top of the file following the Hugo Front Matter format. Some of this metadata gets put into the generated HTML file.
The keywords and title properties are used for searching, while the description shows up in the search results.
---
title: About contributing to documentation
description: Learn about how to contribute to documentation
keywords: Contributing
author: dolittle
# for topmost _index.md files add the correct repository property
repository: https://github.com/dolittle/Documentation
weight: 2
---
The main landing pages also have an icon attribute in the Front Matter. These icons are from the Themify icon pack.
Documentation filenames
All files MUST be lower cased, and words MUST be separated with a dash. Example: csharp-coding-styles.md.
Hugo also takes care of converting between dashes and underscores as well as lower- and uppercase.
Links
Within same repository
When adding links to other pages inside the same repository, DO NOT USE the file extension .md - otherwise the link
will be broken. For instance, linking to the API documentation is done by adding a markdown link
as follows:
Renders to:
API
Cross Repositories
Link pages from other repositories using Hugo's relref/ref functions inside the markdown.
External resources
Linking to external resources is done in the standard Markdown way:
[Dolittle Home](https://github.com/dolittle/home)
Looks like this:
Dolittle Home
Hugo supports Mermaid shortcodes to write diagrams. Mermaid SHOULD be favored over using images when possible. Examples of Mermaid.
Some diagrams/figures might not be possible to do using Mermaid; these can then be images. Beware however of how you create these images and make sure they comply with the look and feel.
Images
All images should be kept close to the markdown file using it.
To make sure the folders aren't getting cluttered and to have some structure, put images in an images folder.
Images should not have backgrounds that assume the background of the site; instead you SHOULD use file formats with support for transparency, such as png.
<repository root>
└── Documentation
└── MyArea
└── [markdown files]
└── images
[image files]
To display images use the standard markdown format:

Renders to:

The URL to the image needs to be fully qualified, typically pointing to the GitHub URL.
This is something being worked on and registered as an issue
here.
The path is relative to the document where you declare the link from.
Notices
Hugo supports different levels of alerts:
Tip
Use tips for practical, non-essential information.
{{% alert %}}
You can also create ReadModels with the CLI tool.
{{% /alert %}}
Renders to:
You can also create ReadModels with the CLI tool.
Warning
Use warnings for mandatory information that the user needs to know to protect the user from personal and/or data injury.
{{% alert color="warning" %}}
Do not remove `artifacts.json` if you do not know what you're doing.
{{% /alert %}}
Renders to:
Do not remove artifacts.json
if you do not know what you’re doing.
5.2.3 - Style guide
A set of standards for the documentation
This document is meant to serve as a guide for writing documentation. It’s not an exhaustive list, but serves as a starting point for conventions and best practices to follow while writing.
Comprehensive
Cover concepts in full, or not at all. Describe all of the functionality of a product. Do not omit functionality that you regard as irrelevant for the user. Do not write about what is not there yet. Stay in the present.
Describe what you see. Use explicit examples to demonstrate how a feature works. Provide instructions rather than descriptions. Present your information in the order that users experience the subject matter.
Avoid future tense (or using the term "will") whenever possible. For example, future tense ("The screen will display…") does not read as well as the present tense ("The screen displays…"). Remember, the users you are writing for most often refer to the documentation while they are using the system, not after or in advance of using the system.
Use simple present tense as much as possible. It avoids problems with consequences and time related communications, and is the easiest tense for translation.
Include (some) examples and tutorials in content. Many readers look first towards examples for quick answers, so including them will help save these people time. Try to write examples for the most common use cases, but not for everything.
Tone
Write in a neutral tone. Avoid humor, personal opinions, colloquial language and talking down to your reader. Stay factual, stay technical.
Example:
The applet is a handy little screen grabber.
Rewrite:
You use the applet to take screenshots.
Use active voice (subject-verb-object sequence) as it makes for more lively, interesting reading. It is more compelling than passive voice and helps to reduce word count. Examples.
Example:
The boilerplate is created by the CLI tool.
Rewrite:
The CLI tool creates the boilerplate.
Use second person (“you”) when speaking to or about the reader. Authors can refer to themselves in the first person (“I” in single-author articles or “we” in multiple-author articles) but should keep the focus on the reader.
Avoid sexist language. There is no need to identify gender in your instructions.
Use bold to emphasize text that is particularly important, bearing in mind that overusing bold reduces its impact and readability.
Use inline code for anything that the reader must type or enter: methods, classes, variables, code elements, files and folders.
Use italic when introducing a word that you will also define or are using in a special way. (Use rarely, and do not use for slang.)
Hyperlinks should surround the words which describe the link itself. Never use links like “click here” or “this page”.
Use tips for practical, non-essential information.
You can also create ReadModels with the CLI tool.
Use warnings for mandatory information that the user needs to know to protect the user from personal and/or data injury.
Do not remove artifacts.json
if you do not know what you’re doing.
Concise
Review your work frequently as you write your document. Ask yourself which words you can take out.
- Limit each sentence to less than 25 words.
Example:
Under normal operating conditions, the kernel does not always immediately write file data to the disks, storing it in a memory buffer and then periodically writing to the disks to speed up operations.
Rewrite:
Normally, the kernel stores the data in memory prior to periodically writing the data to the disk.
- Limit each paragraph to one topic, each sentence to one idea, each procedure step to one action.
Example:
The Workspace Switcher applet helps you navigate all of the virtual desktops available on your system. The X Window system, working in hand with a piece of software called a window manager, allows you to create more than one virtual desktop, known as workspaces, to organize your work, with different applications running in each workspace. The Workspace Switcher applet is a navigational tool to get around the various workspaces, providing a miniature road map in the GNOME panel showing all your workspaces and allowing you to switch easily between them.
Rewrite:
You can use the Workspace Switcher to add new workspaces to the GNOME Desktop. You can run different applications in each workspace. The Workspace Switcher applet provides a miniature map that shows all of your workspaces. You can use the Workspace Switcher applet to switch between workspaces.
- Aim for economical expression.
Omit weak modifiers such as "quite," "very," and "extremely." Avoid weak verbs such as "is," "are," "has," "have," "do," "does," "provide," and "support." (Weak modifiers have a diluting effect, and weak verbs require more wordy constructions.) A particularly weak verb construction to avoid is starting a sentence with "There is…" or "There are…".
- Prefer shorter words over longer alternatives.
Example: “helps” rather than “facilitates” and “uses” rather than “utilizes.”
- Use abbreviations as needed.
Spell out acronyms on first use. Avoid creating new abbreviations, as they can confuse rather than clarify concepts. Do not explain familiar abbreviations.
Example:
Dolittle uses Event Driven Architecture (EDA) and Command Query Responsibility Segregation (CQRS) patterns.
HTML and CSS are not programming languages.
Structure
Move from the known to the unknown, the old to the new, or the familiar to the unexpected. Structure content to help readers identify and skip over concepts which they already understand or see are not relevant to their immediate questions.
Avoid unnecessary subfolders. Don’t create subfolders that only contain a single page. Make the user have access to the pages with as few clicks as possible.
Headings and lists
Headings should be descriptive and concise. Use a level-one heading to start a broad subject area. Level-one headings are typically generic titles, such as Basic Skills, Getting Started, and so on. Use level-two, level-three, and level-four headings to chunk information into easy-to-identify sections. Do not use more than four heading levels.
Use specific titles that summarize the information in the associated sections. Avoid empty headings devoid of technical content such as “Going further,” “Next steps,” “Considerations,” and so on.
Use numbered lists when the entries in the list must follow a sequence. Use unnumbered lists where the entries are of the same importance and do not follow a sequence. Always introduce a list with a sentence or two.
External resources
This document is based on style guides from GNOME, IBM, Red Hat and Write The Docs.
5.2.4 - Structure overview
Understand the structure of Dolittle documentation
Structure internally
All documentation is inside the Source folder of Dolittle's Documentation repository. The two main pieces of this folder are content and repositories:
- Source/repositories contains submodules to Dolittle repositories.
- Source/content is the folder that Hugo uses to render dolittle.io, making it the root of the pages. It contains documentation and symlinks to each Source/repositories submodule's Documentation folder.
Defining folder hierarchy on dolittle.io
To add structure (sub-folders) to the content folder and make these visible, Hugo expects an _index.md inside the subfolders. The _index.md file acts as a landing page for the subfolder and should contain a Front Matter section. This defines the title, description, keywords and relative weighting in its parent tree.
---
title: Page Title
description: A short description of the pages contents
keywords: comma, separated, keywords, to, help, searching
author: authorname
weight: 2
---
_index.md files within subfolders should only contain the Front Matter and nothing else, unless needed. This makes the subfolder links on the sidebar work as dropdowns only, without linking to the content of the _index.md. We prefer this as it makes for a smoother experience on the site.
Only create subfolders when needed. Aim for a flat structure.
5.2.5 - API documentation
Learn about how to make sure APIs are documented
All public APIs MUST be documented regardless of language and use-case.
C#
All C# files MUST be documented using XML documentation comments.
For inheritance in documentation, you can use the <inheritdoc/> element.
JavaScript
All JavaScript files MUST be documented using JSDoc.