Codementor Events

Building a microservice with Golang, Kafka and DynamoDB — Part I

Published Apr 18, 2019

Making Golang and Kafka work together

In this two-part series, I want to talk about my initial experience building a vehicle-tracking microservice using Golang, Kafka and DynamoDB. I come from the Python world, building web apps and backend systems with Django, Flask, Postgres and RabbitMQ as a message broker. This first post covers how to wire these technologies together into a microservice skeleton; the next one covers integration with DynamoDB, plus simple optimizations and enhancements to make it scale. Why write it up? Not only did it take me a while to figure this out and settle on a clear, concise solution, but I also found these fairly interesting technologies to work with.


A Brief Intro about Microservices

In a traditional monolithic application, all of an organization’s features are written into one single application, often grouped by the business product they serve. Sometimes they’re grouped by type, such as controllers, models and factories. Other times, perhaps in a larger application, features are separated by concern (SoC), by feature or by domain: you may have an auth package, a users package and an articles package, each containing its own set of factories, services, repositories, models and so on. But ultimately they all live together in a single codebase.

A microservice architecture takes that second approach further, segregating those concerns into fine-grained, independently runnable codebases.


Why Microservices?

Complexity — Splitting features into microservices allows you to split code into smaller chunks. It harks back to the old Unix adage of ‘doing one thing well’. There’s a tendency with monoliths to allow domains to become tightly coupled with one another, and concerns to become blurred. This leads to riskier, more complex updates, potentially more bugs and more difficult integrations.

Scale — In a monolith, certain areas of code may be used more frequently than others. With a monolith, you can only scale the entire codebase. So if your auth service is hit constantly, you need to scale the entire codebase to cope with the load for just your auth service.

With microservices, everything is more granular, including scalability and managing spikes in demand. Demand may surge for one component of an app or a certain subset of data, and a microservices architecture enables you to scale only the app components impacted, rather than the entire application and underlying infrastructure. Along with the benefits of a microservice architecture (MSA) come a few downsides: maintaining many services, increased network usage, the difficulties of distributed systems, and deployment complexity.


Why Golang to build microservices?

Microservices are supported by just about every language; after all, microservices are a concept rather than a specific framework or tool. That said, some languages are better suited to, and have better support for, microservices than others. One language with great support is Golang.

Golang is very lightweight, very fast, and has fantastic support for concurrency, which is a powerful capability when running across several machines and cores.

Golang also ships a very powerful standard library for writing web services. Finally, there is a fantastic microservice framework for Go called go-micro, which we will use in this series.


Martin Fowler published a great overview of Microservices. When it comes to Golang the concepts remain the same; small single-responsibility services that communicate via HTTP, TCP, or Message Queue.

When publishing an API for public consumption, HTTP and JSON have emerged as the standard. However, my preference for inter-service communication is Google’s Protocol Buffers (protobuf).

Protocol Buffers are a way of encoding structured data in an efficient yet extensible format. Google uses Protocol Buffers for almost all of its internal RPC protocols and file formats.

Protocol Buffers allow services to exchange data under a defined contract (and without all the serialization overhead of JSON). Here, an API service accepts HTTP/JSON requests and then uses RPC with protobufs to communicate with internal RPC services. In the following example, each service is its own self-contained Golang application.

Below is how the location.proto file looks:

// declare proto syntax
syntax = "proto3";

// declare package name
package service.location;

// declare RPC services
service LocationService {
    rpc GetLocation(LocationRequest) returns (LocationResponse) {}
}

// declare request and response messages
message LocationRequest {
    int32 user_id = 1;
}

message LocationResponse {
    string timestamp = 1;
    double latitude = 2;
    double longitude = 3;
    int32 user_id = 4;
}

The API service will dump the location data into a Kafka topic. Before that, we need to create a Kafka producer using sarama, a Go client library for Apache Kafka, and sarama-cluster, a Go library providing cluster (consumer group) extensions for Sarama.

Kafka producer config

Check this out for more producer configuration options.

We publish messages through an implementation of the interface below:

type LocationTrackingProducer interface {
   PublishMessage(messageValue sarama.ByteEncoder, partitionHashKey string) error
}

And then on the service.location side, we have written a consumer that reads the data from the Kafka topic and stores it in DynamoDB.

Kafka consumer

In service.location we also implement GetLocation to expose it as an RPC method to other services and APIs. Below is how handler.go looks:

type LocationServiceHandler struct {
   repo repository.Repository
}

func NewLocationServiceHandler(repo repository.Repository) *LocationServiceHandler {
   return &LocationServiceHandler{repo: repo}
}

func (d *LocationServiceHandler) GetLocation(ctx context.Context, request *locationProto.LocationRequest, response *locationProto.LocationResponse) error {
   if request.UserId == 0 {
      return errors.New("received an invalid user ID")
   }

   // get location data from DynamoDB
   locationData, err := d.repo.GetLocation(request.UserId)
   if err != nil {
      return err
   }
   // copy into the response pointer; reassigning the pointer itself
   // would not propagate the data back to the caller
   *response = *locationData
   return nil
}
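The handler depends only on a repository abstraction, which is what makes it easy to test and to swap storage backends. A sketch of that interface, with a hypothetical in-memory implementation (the field names mirror the proto message; the real DynamoDB-backed repository is the subject of Part II):

```go
package main

import (
	"errors"
	"fmt"
)

// LocationResponse mirrors the proto message, purely for illustration.
type LocationResponse struct {
	Timestamp string
	Latitude  float64
	Longitude float64
	UserId    int32
}

// Repository is a sketch of the interface the handler depends on.
type Repository interface {
	GetLocation(userID int32) (*LocationResponse, error)
}

// memoryRepository is a hypothetical in-memory implementation, handy for tests.
type memoryRepository struct {
	data map[int32]*LocationResponse
}

func (r *memoryRepository) GetLocation(userID int32) (*LocationResponse, error) {
	loc, ok := r.data[userID]
	if !ok {
		return nil, errors.New("location not found")
	}
	return loc, nil
}

func main() {
	repo := &memoryRepository{data: map[int32]*LocationResponse{
		42: {Timestamp: "2019-04-18T10:00:00Z", Latitude: 12.97, Longitude: 77.59, UserId: 42},
	}}
	loc, err := repo.GetLocation(42)
	if err != nil {
		panic(err)
	}
	fmt.Println(loc.Latitude, loc.Longitude)
}
```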

We then register the above handler with a go-micro service; below is how main.go looks:

func main() {
   // Create the location repository.
   locationRepo := &LocationRepository{}

   // Create a new go-micro service.
   srv := micro.NewService(
      micro.Name("service.location"),
      micro.Version("latest"),
   )

   // Init will parse the command line flags.
   srv.Init()

   // Other services can obtain a client for this service with:
   // locationProto.NewLocationServiceClient("service.location", srv.Client())

   // Register the handler.
   locationProto.RegisterLocationServiceHandler(srv.Server(), handler.NewLocationServiceHandler(locationRepo))

   // Run the server.
   if err := srv.Run(); err != nil {
      fmt.Println(err)
   }
}

That’s it. The next part of this series will optimize the performance of the services implemented above. If you spot any bugs or mistakes, have feedback on this article, or would find anything else helpful, please drop a comment.

Building a microservice with Golang, Kafka and DynamoDB — Part II
The journey of API response time from 1.2 sec to under 50 ms (medium.com)


Further reading

https://www.nginx.com/blog/introduction-to-microservices/
https://martinfowler.com/articles/microservices.html
https://medium.facilelogin.com/ten-talks-on-microservices-you-cannot-miss-at-any-cost-7bbe5ab7f43f
https://ewanvalentine.io/microservices-in-golang-part-1/
http://microservices.io/patterns/monolithic.html
https://www.quora.com/How-is-Go-programming-language-used-in-microservice-architecture

That’s a wrap.

I hope you enjoyed it and learned something from it. Thanks.

Happy Learning.
