DotNet gRPC Internals

In this series of posts I will look into the details of gRPC in ASP.NET Core. In the previous post I created a simple service and a corresponding client. In this post I will focus on the internal implementation of ASP.NET Core's gRPC extension. gRPC (gRPC Remote Procedure Calls) is an open-source remote procedure call implementation based on modern (web) standards. gRPC uses HTTP/2 as its transport protocol and Protocol Buffers as its data format and interface definition language. It is typically used for back-channel (service-to-service) communication due to its efficiency. However, the efficiency comes at a cost: debugging and decoding messages is not as straightforward as with other protocols.

The Internals

In this post I will look into how the Grpc.AspNetCore NuGet package handles gRPC requests. As mentioned in the previous post, the package includes tooling that generates the base service code for our real service implementation. During startup, AddGrpc() registers dependencies in the DI container, but only a handful of classes are registered there today. The main activity happens when the service is mapped with the MapGrpcService<T>() call.

Internally, this method creates the HTTP call handlers for the service's methods. First, it checks whether the AddGrpc() call has registered the service dependencies, then it delegates the rest of the work to ServiceRouteBuilder<T>. ServiceRouteBuilder<T> is responsible for creating the ASP.NET Core endpoints, which has to happen before app.Run(); is called in Program.cs.
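
A minimal sketch of how these two calls line up in Program.cs (GreeterService is a placeholder for the service implementation created in the previous post):

    var builder = WebApplication.CreateBuilder(args);

    // AddGrpc() registers the gRPC dependencies in the DI container.
    builder.Services.AddGrpc();

    var app = builder.Build();

    // MapGrpcService<T>() delegates to ServiceRouteBuilder<T>, which creates
    // an ASP.NET Core endpoint for each gRPC method of the service.
    app.MapGrpcService<GreeterService>();

    // The endpoints must be in place before Run() starts handling requests.
    app.Run();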

Find out more


.NET Misconceptions

.NET 7 was released in autumn 2022. While .NET and C# have been steadily improving with new features year after year, as @markrendle wrote in 2017, .NET is having a renaissance. However, non-.NET developers have hardly been keeping up with the framework and the language. Developers familiar only with the 'classic' .NET voice plenty of misconceptions about modern .NET. This post aims to debunk these ideas.

History

A brief overview of the naming changes that have caused confusion over the past years.

  • The 'classic' .NET 4.x is referred to as .NET Framework these days. At the time of writing this post, the latest release is .NET Framework 4.8. Apart from minor releases, the Framework is not developed further; it only receives bug and security fixes.

  • .NET Core was an initiative to move .NET Framework to open source. It was released in versions 1, 2 and 3, in parallel with .NET Framework releases. The .NET Core branding has since been discontinued.

  • .NET Standard defines a set of APIs. It is a contract that enables bridging from .NET Framework to .NET Core and .NET 5.

  • .NET (5, 6, 7, etc.) is the continuation of .NET Core, and it also provides a migration strategy from .NET Framework. In the rest of the post, when I refer to .NET, I mean the latest release of this line.

Find out more


Deadlocking Pipes

I/O pipelines are special constructs that were added to .NET during its renaissance. Pipes help to solve the problem of buffering and parsing an incoming or outgoing stream of bytes. This is an inherently difficult problem to implement well from a performance point of view. The data chunks received on the input stream are unlikely to be delimited on message boundaries, which means a single chunk might contain only a partial message, or multiple messages followed by a partial one. Handling all of these cases by hand, taking care of buffering, growing or shrinking the buffer as needed, and avoiding excessive memory allocations, is challenging. Fortunately, System.IO.Pipelines helps to solve this problem.

Problem

The official documentation for System.IO.Pipelines shows the basic usage of pipes. It creates a pipe, then uses a reader and a writer to demonstrate how the pipe is used. Finally, it calls await Task.WhenAll(reading, writing); to wait for both tasks to complete. Reading the full documentation, it should be clear that using pipes requires great care from the developer.

In the writer implementation of the above sample, a while loop is used to write data into the pipe. When the write completes or an exception occurs, the code breaks out of the loop.
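
A rough sketch of this pattern is shown below (not the documentation's exact sample, and the message-parsing logic is omitted); the source stream and the FillPipeAsync/ReadPipeAsync names are illustrative only:

    using System;
    using System.Buffers;
    using System.IO;
    using System.IO.Pipelines;
    using System.Threading.Tasks;

    public static class PipeSample
    {
        public static async Task ProcessAsync(Stream source)
        {
            var pipe = new Pipe();
            Task writing = FillPipeAsync(source, pipe.Writer);
            Task reading = ReadPipeAsync(pipe.Reader);

            // Both loops are awaited together; if either side never completes
            // the pipe, the other side can hang here.
            await Task.WhenAll(reading, writing);
        }

        private static async Task FillPipeAsync(Stream source, PipeWriter writer)
        {
            while (true)
            {
                // Ask the pipe for a buffer to write into.
                Memory<byte> memory = writer.GetMemory(512);
                try
                {
                    int bytesRead = await source.ReadAsync(memory);
                    if (bytesRead == 0)
                    {
                        break; // End of the source stream.
                    }

                    // Tell the pipe how much data was written.
                    writer.Advance(bytesRead);
                }
                catch
                {
                    break; // An exception also breaks out of the loop.
                }

                // Make the written data available to the reader.
                FlushResult result = await writer.FlushAsync();
                if (result.IsCompleted)
                {
                    break;
                }
            }

            // Signal the reader that no more data is coming.
            await writer.CompleteAsync();
        }

        private static async Task ReadPipeAsync(PipeReader reader)
        {
            while (true)
            {
                ReadResult result = await reader.ReadAsync();
                ReadOnlySequence<byte> buffer = result.Buffer;

                // A real implementation would parse messages from the buffer here.

                // Mark the whole buffer as consumed.
                reader.AdvanceTo(buffer.End);

                if (result.IsCompleted)
                {
                    break;
                }
            }

            await reader.CompleteAsync();
        }
    }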

Find out more


DotNet gRPC Getting Started

In this series of posts I will look into the details of gRPC in ASP.NET Core. In this first post I will create a simple service and a corresponding client. In future posts I will focus on the internal implementation of ASP.NET Core's gRPC extension. gRPC (gRPC Remote Procedure Calls) is an open-source remote procedure call implementation based on modern (web) standards. gRPC uses HTTP/2 as its transport protocol and Protocol Buffers as its data format and interface definition language. It is typically used for back-channel (service-to-service) communication due to its efficiency. However, the efficiency comes at a cost: debugging and decoding messages is not as straightforward as with other protocols.

Creating a gRPC Service

This post provides a getting-started guide and a look into the internals of the Grpc.AspNetCore NuGet package. The official documentation is spot on for getting started:

  • create a proto file: the proto file defines the messages exchanged by the server and the client, as well as the operations that a client may invoke on the server

  • create a new ASP.NET Core project (using the dotnet CLI: dotnet new webapp)

  • add the Grpc.AspNetCore NuGet package to the project (dotnet add package Grpc.AspNetCore)

  • add the proto file as a Server gRPC service (a sketch of the resulting service implementation follows this list)
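
The last step makes the tooling generate a base class from the proto file. As a sketch, assuming a hypothetical Greeter service with a SayHello call defined in the proto file, the service implementation derives from that generated base class:

    using System.Threading.Tasks;
    using Grpc.Core;

    // Greeter.GreeterBase, HelloRequest and HelloReply are generated by the
    // Grpc.AspNetCore tooling from the (hypothetical) proto file.
    public class GreeterService : Greeter.GreeterBase
    {
        public override Task<HelloReply> SayHello(HelloRequest request, ServerCallContext context)
        {
            return Task.FromResult(new HelloReply { Message = $"Hello {request.Name}" });
        }
    }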

Find out more


HttpClient Diagnostics

HttpClient can propagate correlation IDs in the traceparent and tracestate HTTP headers. Every recent .NET release has had changes in this area; over the last few releases the following has changed:

  • Automatic ID propagation

  • ASP.NET Core creates a new parent activity for each request (by default)

  • Activity's DefaultIdFormat changed in .NET 5

  • ActivitySource was introduced

In .NET 6, HttpClient allows greater control over how trace IDs and span IDs are propagated on downstream HTTP calls. It accomplishes this with the help of DistributedContextPropagator, which comes with a few built-in propagation strategies. In this post I will test how these strategies work with OpenTelemetry and Jaeger.
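
As a quick sketch (assuming .NET 6 or later), a strategy can be selected process-wide at startup through the static Current property:

    using System.Diagnostics;

    // The default propagator injects the W3C traceparent/tracestate headers
    // based on the current Activity.
    DistributedContextPropagator.Current = DistributedContextPropagator.CreateDefaultPropagator();

    // Other built-in strategies:
    // CreatePassThroughPropagator() forwards the headers of the incoming request unchanged,
    // CreateNoOutputPropagator() suppresses header propagation entirely.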

Under the hood

Find out more