
Why I built a Serverless GraphQL API on AWS Lambda

Published Apr 30, 2019 · Last updated Oct 26, 2019

About me

I have been building web applications for over 15 years in many different industries, from startups in the financial and lottery spaces to Fortune 500 companies working in Machine Learning and AI. I have launched numerous companies as a Founder, Co-Founder, and CTO, and have directly managed teams of more than 20 engineers. I am a Sr. Software Engineer, CTO, and seasoned entrepreneur, familiar with the operational, legal, accounting, and leadership aspects of starting, building, and running a successful business.

Stability, Security and Scale - Now.

I needed to build an isolated sandbox system that would let a company offer a partner API to partners who needed to integrate with the company's existing platform. This platform is sophisticated, with 20+ services and an even larger number of supporting systems and infrastructure. The domain included event-driven architectures, scraping, and several other complex requirements.

The challenge was that this extension to the internal system needed to be built within a very short time-frame, around 1-2 weeks from start to finish. Given the complexity of the existing infrastructure, I had to find a way to build out the ecosystem without getting blocked by the constraints that were already in place.

To both move quickly and provide a vastly scalable, stable platform for the partners to use (in a number of countries around the world where connectivity could be a factor), I needed to be able to focus on building a GraphQL-based API that could be quickly updated and deployed on a globally available IaaS provider - AWS.

Why Serverless and GraphQL on AWS?

Having already been familiar with GraphQL and the various industry leaders in that space, I chose the GraphQL Yoga Lambda implementation so I could quickly deploy Serverless services with GraphQL endpoints that could be stitched together behind a single access endpoint. In addition, user and API credentials were based on AWS API Gateway and Cognito, which made it quick to create and manage users and API keys that could be scaled globally and quickly managed or disabled in the case of bad actors within the partner ecosystem. Basing all of this on TypeScript allowed type-safety to be present from the backend database to the UI.
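
To make this concrete, here is a minimal sketch of what a GraphQL Yoga Lambda entry point can look like, assuming the graphql-yoga 1.x API; the schema and resolver are illustrative placeholders rather than the real partner domain.

```typescript
// Minimal sketch of a GraphQL Yoga Lambda entry point (graphql-yoga 1.x API).
// The schema and resolver below are illustrative placeholders.
import { GraphQLServerLambda } from 'graphql-yoga'

const typeDefs = `
  type Query {
    hello(name: String): String!
  }
`

const resolvers = {
  Query: {
    hello: (_: unknown, { name }: { name?: string }) => `Hello ${name || 'partner'}`,
  },
}

const lambda = new GraphQLServerLambda({ typeDefs, resolvers })

// Exposed through API Gateway as the Lambda handlers.
export const server = lambda.graphqlHandler
export const playground = lambda.playgroundHandler
```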

The process of building a Serverless GraphQL API on AWS

The first step in the process was to set up a mono repo that would contain the various Serverless services as well as many other packages, including a React Native application for use on devices where we were required to provide partners with end-to-end tooling.

The first step of setting up the GraphQL APIs was to create a rough model of the domain and the expected GraphQL elements within each of the services. This domain model drove the types and schemas needed to enforce type-safety from the database to the UI. We applied this across several services, which were then schema-stitched into a single API endpoint.
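
As a rough illustration of the stitching step, the sketch below merges two per-service schemas into a single gateway schema using graphql-tools 4.x's mergeSchemas; the Partner and Order types are hypothetical stand-ins for the real domain model.

```typescript
// Illustrative sketch of stitching per-service schemas into one endpoint
// (graphql-tools 4.x style; types, fields, and data are hypothetical).
import { makeExecutableSchema, mergeSchemas } from 'graphql-tools'

const partnerSchema = makeExecutableSchema({
  typeDefs: `
    type Partner { id: ID! name: String! }
    type Query { partner(id: ID!): Partner }
  `,
  resolvers: {
    Query: {
      partner: (_: unknown, { id }: { id: string }) => ({ id, name: 'Acme' }),
    },
  },
})

const orderSchema = makeExecutableSchema({
  typeDefs: `
    type Order { id: ID! partnerId: ID! total: Float! }
    type Query { ordersForPartner(partnerId: ID!): [Order!]! }
  `,
  resolvers: {
    Query: {
      ordersForPartner: (_: unknown, { partnerId }: { partnerId: string }) => [
        { id: '1', partnerId, total: 42.0 },
      ],
    },
  },
})

// The gateway service exposes the merged schema behind a single endpoint.
export const schema = mergeSchemas({ schemas: [partnerSchema, orderSchema] })
```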

The second step was defining the resolvers for Queries and Mutations in GraphQL along with their associated React UIs. Each service handled its own Server Side Rendering (SSR), which was also stitched into various front-end contexts using the TailorJS project open-sourced by Zalando. This approach allowed each UI element and service to contain its respective functionality without creating a messy overlap on the UI side.
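
Here is a hedged sketch of the shape (not the substance) of one service's Query and Mutation resolvers; the Shipment type and the in-memory data source are invented stand-ins for the real domain and database client.

```typescript
// Sketch of a service's Query/Mutation resolvers with a typed context.
// Shipment, its fields, and the Map-backed store are hypothetical.
interface Shipment {
  id: string
  status: 'PENDING' | 'DELIVERED'
}

interface Context {
  shipments: Map<string, Shipment> // in reality a database client
}

export const resolvers = {
  Query: {
    shipment: (_: unknown, { id }: { id: string }, ctx: Context): Shipment | undefined =>
      ctx.shipments.get(id),
  },
  Mutation: {
    markDelivered: (_: unknown, { id }: { id: string }, ctx: Context): Shipment => {
      const shipment = ctx.shipments.get(id)
      if (!shipment) throw new Error(`Unknown shipment: ${id}`)
      const updated: Shipment = { ...shipment, status: 'DELIVERED' }
      ctx.shipments.set(id, updated)
      return updated
    },
  },
}
```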

The final part of the process was to set up a CI/CD system that could both run the tests and deploy the various services to several CloudFront custom domain distributions across multiple AWS regions, giving a global, low-latency API fronted by Route53 Latency-Based DNS rules and API Gateway custom domains.
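
As an illustration of the multi-region deployment step, a small script like the one below can drive the Serverless CLI once per region from the CI/CD pipeline; the region list, stage name, and service path are assumptions rather than our exact configuration.

```typescript
// Hypothetical CI/CD helper: deploy the same Serverless service to each region.
import { execSync } from 'child_process'

const regions = ['us-east-1', 'eu-central-1'] // assumed region list
const stage = 'prod' // assumed stage name

for (const region of regions) {
  // `serverless deploy` accepts --stage and --region flags.
  execSync(`npx serverless deploy --stage ${stage} --region ${region}`, {
    cwd: 'services/partner-api', // hypothetical service path
    stdio: 'inherit',
  })
}
```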

Challenges

Some of the challenges with this approach were around efficiently maintaining the concept of a user across all services. Using API tokens and JWTs combined with Cognito, we could resolve both the permissions of the API against the Partner (via API key) and the user based on Cognito JWT ID Tokens.
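
In practice this meant every request carried two identities. The sketch below shows one way to derive them when building the GraphQL context, assuming the API Gateway Lambda proxy event format; verifying the Cognito token's signature against the pool's JWKS is omitted for brevity and is required in a real deployment.

```typescript
// Sketch: tie a request to a partner (API Gateway API key) and a user
// (Cognito ID token). Field names follow the API Gateway Lambda proxy format;
// token signature verification is intentionally omitted here.
import { APIGatewayProxyEvent } from 'aws-lambda'
import { decode } from 'jsonwebtoken'

interface RequestIdentity {
  partnerApiKey?: string
  userId?: string
}

export function identityFromEvent(event: APIGatewayProxyEvent): RequestIdentity {
  // API Gateway attaches the validated API key of the calling partner.
  const partnerApiKey = event.requestContext.identity.apiKey || undefined

  // The Cognito ID token arrives as a bearer token; `sub` identifies the user.
  const auth = event.headers.Authorization || ''
  const token = auth.replace(/^Bearer\s+/i, '')
  const claims = token ? (decode(token) as { sub?: string } | null) : null

  return { partnerApiKey, userId: claims?.sub }
}
```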

Setting up the correct Serverless configuration was not hard, but it took some thought up-front to make it easy to add, update, and deploy changes and new layers to the system without impacting availability once partners started their integrations.

Takeaways

When building a Serverless system that has multiple layers of authorization, it's vital to get your User model right at the start. When creating User Pools in AWS Cognito, you can restrict read/write access so that certain properties can only be set when the user is created and can never be updated afterwards. We used this to secure both encrypted API Key items and some other data, ensuring users could never circumvent their sandboxed environments; changing those values would require creating a new user.
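
For illustration, this is roughly how such a locked-down attribute can be declared when the pool is created (AWS SDK v3 here; the pool name and attribute name are examples, not our actual configuration). With Mutable set to false, the value can only be supplied at user creation time.

```typescript
// Hedged sketch: define a Cognito custom attribute that is fixed at creation.
// Pool name and attribute name are illustrative examples.
import {
  CognitoIdentityProviderClient,
  CreateUserPoolCommand,
} from '@aws-sdk/client-cognito-identity-provider'

const client = new CognitoIdentityProviderClient({ region: 'us-east-1' })

export async function createPartnerUserPool() {
  return client.send(
    new CreateUserPoolCommand({
      PoolName: 'partner-sandbox-users',
      Schema: [
        {
          // Becomes custom:partner_api_key; Mutable: false means the value
          // is set once at user creation and can never be changed afterwards.
          Name: 'partner_api_key',
          AttributeDataType: 'String',
          Mutable: false,
        },
      ],
    }),
  )
}
```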

Additionally, to have a responsive API you need a minimum of two AWS regions for deployment (we started with US-EAST-1 and EU-CENTRAL-1) to ensure your API latency isn't affected by regional connectivity restrictions. Using VPN proxies can significantly improve confidence when testing the API's global availability and performance; we found this helped a lot when trying to diagnose specific implementation issues with partners.

One thing that surprised us in development was the realization that DRY and Serverless don't always mix well. We started with the intention of sharing code across services where needed, and could have done so using NPM packages, but ultimately decided to use the CI/CD pipeline to synchronize a "source of truth" across services by copying files at build time. This approach worked out in our favor when we realized we had various exceptions to the rule around several components that had "almost" shared business logic.
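
A build-time sync along these lines can be as simple as the sketch below; the folder layout and service names are hypothetical, and each service ends up with its own copy of the shared code that is free to diverge later.

```typescript
// Illustrative build-time sync script: copy a shared "source of truth" folder
// into each service before packaging, instead of publishing an npm package.
// Paths and service names are hypothetical.
import { cpSync } from 'fs'
import { join } from 'path'

const services = ['partner-api', 'orders-api', 'reporting-api']
const shared = join(__dirname, '..', 'shared')

for (const service of services) {
  // Each service gets its own copy, so it can diverge when the business
  // logic turns out to be only "almost" shared.
  cpSync(shared, join(__dirname, '..', 'services', service, 'src', 'shared'), {
    recursive: true,
  })
}
```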

Some advice

When advising anyone looking into using this stack, one of the key benefits is the ability to move quickly - very quickly. Don't worry about getting everything right at the start, because you can ultimately tear down and re-build everything from the ground up around the AWS and Serverless stacks. Up until you make your first release (and even after, without too much effort) you can tweak and completely re-deploy changes within minutes. Having gotten a couple of things wrong in our initial configuration of the Cognito User Pools, this ability allowed us to correct them quickly without affecting the partners' integration efforts, and saved us from having to do complex migrations later down the road.

Conclusion

As a result, my opinion is that this type of stack and deployment could be the future of web development. Being able to remove barriers on both the DevOps and developer fronts makes it easy to build and ship with confidence, without losing the ability to use the large number of services available in today's public clouds. Moreover, when the cost of operating these systems is tied to revenue-producing transactions, operating costs never become an issue until you're optimizing for profit in the distant future.
