
Scaffolding a NodeJS GraphQL API server

Published Nov 27, 2017 · Last updated May 25, 2018

Recently, I’ve been working on a new full-stack project and I wanted to take the opportunity to try my hand at creating a GraphQL API. I didn’t find many resources that walked through the whole back-end stack, from DB to ORM to API design, including tests, so I thought I could help some other folks by laying out what I ended up with.

TL;DR:

  1. Docker and docker-compose
  2. node server
  3. postgres database
  4. express and apollo-server-express to serve queries
  5. Sequelize ORM
  6. graphql schema language
  7. graphql-sequelize as the connector
  8. winston for logging
  9. passport with express-session and connect-session-sequelize for authentication and authorization
  10. mocha for testing

What is GraphQL?


The power of GraphQL, from https://www.graphql.com/

This one, luckily, has been covered extensively by the creators. If you’re thinking of implementing a GraphQL API, you should start by reading those docs in full. The short version is that it is a queryable schema for your API that allows clients to request only the data that they need, and gives a standardized system for exploring API endpoints without the need for reference documentation.
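As a tiny, made-up example (this user type is hypothetical, not from the project below), a client that only needs a user’s name and email can ask for exactly that:

{
  user(id: "1") {
    name
    email
  }
}

The response is JSON in precisely the same shape, with nothing extra over the wire.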

Why GraphQL?

I chose GraphQL because, after hearing a lot of buzz and playing around with some open GraphiQL endpoints, I was hooked by the idea of being able to declare a schema for my data and let the client handle forming the exact shape of the request. I loved the idea of transferring the minimum amount over the wire, batching requests together for optimal payloads, and caching extensively on both the client and server.

What else do you need?

This is the question that took me a long time to understand. There are plenty of great examples for getting started with a GraphQL server, but they all seemed to leave off after pointing out that the resolve function should be doing the work of fetching data to fulfill a particular field. So, from the ground up, these are the pieces that I put together:

  1. The containers for development and deployment
  2. The server
  3. The database
  4. The request server
  5. The ORM
  6. The ORM to GraphQL adapter
  7. The GraphQL schema
  8. Logging
  9. Authentication and authorization
  10. The tests

I’ll go through piece by piece and explain the technology I chose and the reasoning for it, then give a small example or place to learn more.


Containers

My first decision was to use containers to encapsulate my API server and database. My main motivation here was to simplify deployment: it is far easier for me to upload a set of images to Docker Hub and do a simple git pull && docker-compose up -d than it is to write scripts to set up my environment each time. Docker with docker-compose also allows for easy management of multiple environments for development, testing, and production. It also simplifies startup and teardown locally, where I am running several separate services for the app.

So, I needed to create a Dockerfile and a docker-compose.yml:

Dockerfile:

FROM mhart/alpine-node:9
RUN mkdir www/
WORKDIR www/
ADD . .
RUN npm install && npm run build
CMD npm run start

docker-compose.yml:

version: "3"
services:
  api:
    build: ./api
    image: crypto-dca-api:latest
    container_name: crypto-dca-api
    env_file: config/.env
    environment:
      - NODE_ENV=production
    ports:
      - 8088:8088

  db:
    build: ./db
    image: crypto-dca-db:latest
    container_name: crypto-dca-db
    env_file: config/.env
    volumes:
      - crypto-dca-db:/var/lib/postgresql/data
    ports:
      - 5432:5432

volumes:
  crypto-dca-db:
    driver: local

With this, I am able to run docker-compose up -d from the root of my project and spin up a server and database using a persistent volume. I also ended up creating separate containers for development and a different environment for testing. I’ll leave those as an exercise for the reader; this is just a basic example of what the production setup looks like.

To learn more about Docker and docker-compose, check out the official tutorial.

The server

You need a server to connect to your database and respond to GraphQL requests from the client.

I chose node as my server-side language because I come from a front-end background and it made sense to leverage my domain expertise when building a server. Node is a good choice because it has extremely robust community support around GraphQL and is highly portable and easy to run in a container.

With async/await support as of version 8, asynchronous control flow is much easier to manage, which is a huge boon when building a highly asynchronous API.
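As a quick illustration (the Wallet.findById call here is just a stand-in for any promise-returning query):

// Before async/await: promise chains
function getWalletName(id) {
  return Wallet.findById(id).then(wallet => wallet.name);
}

// With async/await (node 8+): reads like synchronous code
async function getWalletName(id) {
  const wallet = await Wallet.findById(id);
  return wallet.name;
}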

The database

I went with postgres as my database for a couple of reasons: It has extensive community support with ORMs, it is open-source, and it is lightweight. MySQL would work just as well, though PostgreSQL has a reputation for scaling better. Ultimately, the DB is well abstracted behind the ORM so it is relatively simple to swap later if necessary.

Here is the (dead simple) Dockerfile for my database:

FROM postgres:10
COPY ./setup-db.sh /docker-entrypoint-initdb.d/setup-db.sh

And my init file:

#!/bin/bash
set -e
POSTGRES="psql --username ${POSTGRES_USER}"
DATABASES=($POSTGRES_DEV_DB $POSTGRES_TEST_DB $POSTGRES_PROD_DB)

for i in ${DATABASES[@]}; do
  echo "Creating database: ${i}"
  psql -U postgres -tc "SELECT 1 FROM pg_database WHERE datname = '${i}'" | grep -q 1 || \
    psql -U postgres -c "CREATE DATABASE \"${i}\""
done

The init file simply creates the databases that do not exist on container startup.

The request server

With the basics out of the way, we can start putting the pieces together to actually serve requests to our API. I chose express as a basic web server framework because of its ubiquity in the node ecosystem, plugin support, and simple API.

Express allows us to listen on a port and respond to HTTP requests, but we need another layer to allow us to digest and respond to GraphQL requests. For this, I use apollo-server-express. It has an extremely simple API and does some mapping to allow us to define our schema in the node GraphQL schema language. Here’s what it looks like in action:

const bodyParser = require('body-parser');
const { graphqlExpress, graphiqlExpress } = require('apollo-server-express');
const logger = require('../helpers/logger');
const { NODE_ENV } = process.env;

module.exports = function (app) {
  const schema = require('../schema');

  app.use('/graphql', bodyParser.json(), (req, res, next) =>
    graphqlExpress({
      schema,
      context: { user: req.user }
    })(req, res, next)
  );

  if (NODE_ENV === 'development') {
    app.get('/graphiql', graphiqlExpress({ endpointURL: '/graphql' }));
  }

  logger.info(`Running a GraphQL API server at /graphql`);
};

All we’re doing here is setting up our root endpoints; we still need to define the mapping between the GraphQL query language and our database in our schema.

The ORM

In order to map between the query language of your database (SQL) and the native language of your server (JavaScript), you typically use an ORM. There are a few popular ORMs in JavaScript, but I decided to go with Sequelize because it is the most heavily maintained, comes with a CLI tool, and has lots of active community support.

To connect Sequelize to your database, you need to do a few things. Unfortunately, there is a gap between the existing version of sequelize-cli and the latest version of Sequelize (4). You can still use sequelize-cli to scaffold a Sequelize app, but you may need to make some modifications, especially to cli-generated models.

To get started, you can install sequelize-cli and run sequelize init from your project directory. By default, this will create a new directory structure with config, models, migrations, and seeders, as well as an index.js file in the models directory that creates a new instance of the ORM with the given configuration and associates all models with that instance.
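If you want to follow along, the scaffolding boils down to a couple of commands (assuming npm and the pg driver for Postgres):

npm install --save sequelize pg
npm install --save-dev sequelize-cli
./node_modules/.bin/sequelize init
# sequelize init creates config/, models/ (with index.js), migrations/, and seeders/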

I ended up splitting this into two files for easier testing:

build-db.js:

var env = process.env.NODE_ENV || 'development';
var config = require(__dirname + '/../config/config.js')[env];
var Sequelize = require('sequelize');

module.exports = function () {
  return config.use_env_variable
    ? new Sequelize(process.env[config.use_env_variable])
    : new Sequelize(config.database, config.username, config.password, config);
};

decorate-db.js:

const fs = require('fs');
const path = require('path');
const Sequelize = require('sequelize');
const modelPath = path.join(__dirname, '../models');

module.exports = function (sequelize) {
  const db = {};

  fs
    .readdirSync(modelPath)
    .filter(file => {
      return file.indexOf('.') === -1;
    })
    .forEach(folder => {
      const model = sequelize['import'](
        path.join(modelPath, folder, 'index.js')
      );
      db[model.name] = model;
    });

  Object.keys(db).forEach(modelName => {
    if (db[modelName].associate) {
      db[modelName].associate(db);
    }
  });

  db.sequelize = sequelize;
  db.Sequelize = Sequelize;

  return db;
};

From there, you can define your models by hand or use the CLI tool to help build them.

models/Wallet/index.js:

const { v4 } = require('uuid');

module.exports = (sequelize, DataTypes) => {
  const Wallet = sequelize.define('Wallet', {
    id: {
      primaryKey: true,
      type: DataTypes.STRING,
      defaultValue: () => v4()
    },
    name: {
      type: DataTypes.STRING,
      allowNull: false
    },
    address: {
      type: DataTypes.STRING,
      allowNull: false
    },
    local: {
      type: DataTypes.BOOLEAN,
      allowNull: false
    }
  });

  Wallet.associate = function ({ User, Wallet }) {
    Wallet.belongsTo(User);
  };

  return Wallet;
};

Once you are done, you should have an object in JS that you can import into other files that gives you full query access to your database.
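To make that concrete, here is a minimal sketch of wiring the two files together (the require paths are assumptions based on my layout):

const buildDb = require('./db/build-db');
const decorateDb = require('./db/decorate-db');

// Create the connection, then attach and associate every model
const db = decorateDb(buildDb());

// Any model is now available for querying
db.Wallet.findAll({ where: { local: true } })
  .then(wallets => console.log(`${wallets.length} local wallets`));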

One trick I found for easily generating both migrations and my initial database shape was to actually dump the state of my DB out to a file and use that as a raw SQL import. This saved me a lot of time writing migration syntax (which is not quite the same as model definition syntax), as well as writing seeders — I just have a few SQL files that I can load up as test states or initial seed state for development.

migrations/1-initial-state.js:

const { readFile } = require('fs');

module.exports = {
  up(migration) {
    return new Promise((res, rej) => {
      readFile('migrations/initial-tables.sql', 'utf8', (err, sql) => {
        if (err) return rej(err);
        migration.sequelize.query(sql, { raw: true })
          .then(res)
          .catch(rej);
      });
    });
  },
  down: (migration) => {
    return migration.dropAllTables();
  }
};

The ORM to GraphQL adapter

One critical package that we need to add is the code that allows us to easily map Sequelize models to GraphQL types, queries, and mutations. The aptly-named graphql-sequelize package does this quite well, providing two excellent abstractions that I will discuss below — a resolver for mapping GraphQL queries to Sequelize operations, and an attributeFields mapping allowing us to re-use our model definitions as GraphQL type field lists.

The GraphQL schema

Whew! All that work and we haven’t even written anything that GraphQL can understand yet. Don’t worry, we’re getting there. Now that we have a JavaScript representation of our database, we need to map that to a GraphQL schema.

There are two pieces of a GraphQL schema that we need to create. The first is a set of types that allows us to properly specify the form of our data. The second is the list of queries and mutations that we can use to search and manipulate our data.

Types

There are two main ways to create types. The first is a more manual process, where you specify the exact shape of the type for each model. That looks something like this:

models/Wallet/type.js:

const {
  GraphQLObjectType,
  GraphQLNonNull,
  GraphQLBoolean,
  GraphQLString
} = require('graphql');

module.exports = new GraphQLObjectType({
  name: 'Wallet',
  description: 'A wallet address',
  fields: () => ({
    id: {
      type: new GraphQLNonNull(GraphQLString),
      description: 'The id of the wallet'
    },
    name: {
      type: new GraphQLNonNull(GraphQLString),
      description: 'The name of the wallet'
    },
    address: {
      type: new GraphQLNonNull(GraphQLString),
      description: 'The address of the wallet'
    },
    local: {
      type: new GraphQLNonNull(GraphQLBoolean),
      description: 'Whether the wallet is local or on an exchange'
    }
  })
});

This approach gives us fine-grained control over all of the fields in our type, the ability to add metadata, and the option to create additional computed fields on a type that might not exist on the model.

The disadvantage is definitely the verbosity — it’s a lot of work to basically re-define all of your models as GraphQL types. Luckily, the graphql-sequelize package gives us a shortcut through attributeFields:

const { GraphQLObjectType } = require('graphql');
const { attributeFields } = require('graphql-sequelize');
const { Wallet } = require('../');

module.exports = new GraphQLObjectType({
  name: 'Wallet',
  description: 'A wallet address',
  fields: attributeFields(Wallet)
});

This saves us a good deal of typing but removes some of the expressiveness and discoverability that GraphQL enables us to create. I opted to do all of my types long-hand, but at the end of the day it’s up to you.
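You can also split the difference: spread attributeFields for the plain columns and hand-write only the extra or computed fields. A sketch (the displayName field is invented for illustration):

const {
  GraphQLObjectType,
  GraphQLNonNull,
  GraphQLString
} = require('graphql');
const { attributeFields } = require('graphql-sequelize');
const { Wallet } = require('../');

module.exports = new GraphQLObjectType({
  name: 'Wallet',
  description: 'A wallet address',
  fields: () => ({
    // Generated straight from the Sequelize model definition
    ...attributeFields(Wallet),
    // Hand-written computed field with no column behind it
    displayName: {
      type: new GraphQLNonNull(GraphQLString),
      description: 'Name plus a truncated address, for UI labels',
      resolve: wallet => `${wallet.name} (${wallet.address.slice(0, 6)})`
    }
  })
});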

Queries and Mutations

Types represent pieces of data in our schema, while queries and mutations represent ways of interacting with those pieces of data. I decided to create a few basic queries for each of my models — some enabling lookup, and some enabling modification. The resolver provided by graphql-sequelize makes creating these an absolute breeze, and begins to show some of the power behind coupling GraphQL with a good ORM.

models/Wallet/queries.js:

const {
  GraphQLNonNull,
  GraphQLString,
  GraphQLList
} = require('graphql');
const { Op: { iLike } } = require('sequelize');
const { resolver } = require('graphql-sequelize');
const walletType = require('./type');
const sort = require('../../helpers/sort');

module.exports = Wallet => ({
  wallet: {
    type: walletType,
    args: {
      id: {
        description: 'ID of wallet',
        type: new GraphQLNonNull(GraphQLString)
      }
    },
    resolve: resolver(Wallet, {
      after: result => result.length ? result[0] : result
    })
  },
  wallets: {
    type: new GraphQLList(walletType),
    resolve: resolver(Wallet)
  },
  walletSearch: {
    type: new GraphQLList(walletType),
    args: {
      query: {
        description: 'Fuzzy-matched name of wallet',
        type: new GraphQLNonNull(GraphQLString)
      }
    },
    resolve: resolver(Wallet, {
      dataLoader: false,
      before: (findOptions, args) => ({
        where: {
          name: { [iLike]: `%${args.query}%` }
        },
        order: [['name', 'ASC']],
        ...findOptions
      }),
      after: sort
    })
  }
});

You can see that for the most part, you just define the model and response type and graphql-sequelize handles the gruntwork of doing the lookup for you.

Mutations are quite similar, though you need to do the legwork of updating the model yourself:

models/User/mutations.js:

const {
  GraphQLNonNull,
  GraphQLString
} = require('graphql');
const userType = require('./type');
const { resolver } = require('graphql-sequelize');

module.exports = User => ({
  createUser: {
    type: userType,
    args: {
      name: {
        description: 'Unique username',
        type: new GraphQLNonNull(GraphQLString)
      },
      password: {
        description: 'Password',
        type: new GraphQLNonNull(GraphQLString)
      }
    },
    resolve: async function (root, { name, password }, context, info) {
      const user = await User.create({ name, password });
      return await resolver(User)(root, { id: user.id }, context, info);
    }
  }
});

With our types, queries, and mutations created, we just need to stitch everything together into a single schema and plug it into apollo-server-express:

schema.js:

const {
  GraphQLObjectType,
  GraphQLSchema
} = require('graphql');
const { queries, mutations } = require('./models/fields');

module.exports = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: 'RootQuery',
    fields: () => queries
  }),
  mutation: new GraphQLObjectType({
    name: 'RootMutation',
    fields: () => mutations
  })
});

Et voilà! We can now start hitting our server at /graphql and /graphiql and interacting with the schema on top of our database.

We do need a couple more pieces in order to have a robust API solution, however. Just being able to play with our API doesn’t mean that it’s tested, maintainable, or secured. I’ll talk briefly about how to check those pieces off as well.

Logging

Logging is a vital part of any project. It allows us to easily identify exactly what is happening with our app and track down bugs as they happen. After playing around with hand-rolled logs, I decided to outsource to a well-known package called winston. It allows me to set global log levels and to log to stdout, stderr, a file, or a remote API if I want.

helpers/logger.js:

const { NODE_ENV } = process.env;
const winston = require('winston');

let level, transports;

switch (NODE_ENV) {
  case 'development':
    level = 'verbose';
    transports = [new winston.transports.Console()];
    break;

  case 'production':
    level = 'verbose';
    transports = [
      new winston.transports.File({ filename: 'error.log', level: 'error' }),
      new winston.transports.File({ filename: 'combined.log', level: 'verbose' })
    ];
    break;
}

module.exports = winston.createLogger({
  level,
  transports
});

This allows me fine-grained control over exactly what gets logged where. In code I can specify the level of the message like so: logger.verbose(message);
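Elsewhere in the app, that looks like:

const logger = require('./helpers/logger');

logger.error('Failed to connect to database'); // logged at every level above
logger.info('GraphQL server started');         // logged at info and below
logger.verbose('Resolved wallets query');      // only logged at the verbose level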

Authentication and authorization

Any API, especially one that allows modification or retrieval of sensitive data, will need authentication and authorization. This is the best article I found on the subject, and it led me to implement authentication separately from my GraphQL API.

To piece it together, I used the stack of passport, express-session, and connect-session-sequelize. This allows me to use a passport provider to authenticate a user, then save the authentication token in a database session and store data in a cookie. On request, I can parse the cookie and use it to identify the user making the request. Here’s what it looks like:

routes/auth.js:

const bodyParser = require('body-parser');
const passport = require('passport');
const expressSession = require('express-session');
const Store = require('connect-session-sequelize')(expressSession.Store);
const flash = require('express-flash');
const LocalStrategy = require('passport-local').Strategy;
const logger = require('../helpers/logger');
const { SESSION_KEY } = process.env;

module.exports = function (app) {
  const db = require('../helpers/db').up();

  passport.use('local', new LocalStrategy(
    async (username, password, done) => {
      const { validLogin, user } = await db.User.checkPassword(username, password);
      return validLogin
        ? done(null, user)
        : done(null, false, { message: 'Invalid username or password' });
    }
  ));

  passport.serializeUser(function (user, done) {
    done(null, user.id);
  });

  passport.deserializeUser(async function (id, done) {
    const user = await db.User.findById(id);
    done(null, user);
  });

  app.use(expressSession({
    secret: SESSION_KEY,
    store: new Store({ db: db.sequelize }),
    resave: false,
    saveUninitialized: false
  }));

  app.use(passport.initialize());
  app.use(passport.session());
  app.use(flash());

  app.post(
    '/auth/local',
    bodyParser.urlencoded({ extended: true }),
    passport.authenticate('local'),
    (req, res) => res.send(req.user.id)
  );

  app.post(
    '/logout',
    async (req, res) => {
      req.logout();
      req.session.destroy(function (err) {
        err && logger.error(err);
        res.clearCookie('connect.sid');
        res.sendStatus(200);
      });
    }
  );
};

This allows us to do authorization because it places the user on every request object. So, if we look back at our GraphQL route, we see:

app.use('/graphql', bodyParser.json(), (req, res, next) =>
  graphqlExpress({
    schema,
    context: { user: req.user }
  })(req, res, next)
);

This allows us to access the current user as context in any of our queries. If we want to do access control, we can check in the resolve method of any protected query or mutation whether the current user is allowed to perform that particular action.
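A minimal sketch of what that check might look like (the requireUser helper is my invention here, not part of graphql-sequelize):

const { resolver } = require('graphql-sequelize');

// Wrap a resolver so it throws unless a user is present on the context
const requireUser = resolve => (root, args, context, info) => {
  if (!context.user) {
    throw new Error('You must be logged in to perform this action');
  }
  return resolve(root, args, context, info);
};

// Usage in a protected query:
// wallets: {
//   type: new GraphQLList(walletType),
//   resolve: requireUser(resolver(Wallet))
// }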

Tests

Ah, tests. The one thing we love to either obsess over or forget about entirely. Finding a good way to test this API has been more than a little challenging — Sequelize in particular seems to struggle with resetting to a good state and closing connections during testing. You’ll notice throughout the code that there are a lot of calls to helpers/db around — this allows us to lazily instantiate the DB when required, rather than assuming that the connection will be created at the application level.
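I never show helpers/db in this post, but conceptually it is just a lazy singleton along these lines (a sketch, not the literal file):

const buildDb = require('./build-db');
const decorateDb = require('./decorate-db');

let db;

module.exports = {
  // Build the connection on first use, reuse it afterwards
  up() {
    if (!db) {
      db = decorateDb(buildDb());
    }
    return db;
  },
  // Close the connection so test runs can tear down cleanly
  down() {
    if (!db) return Promise.resolve();
    const closing = db.sequelize.close();
    db = undefined;
    return closing;
  }
};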

Some ground rules for testing this app:

  1. Most tests should be integration tests. docker-compose makes it easy to spin up a sandboxed environment for tests and respond on a given port; let’s take advantage of that and write our tests from the perspective of a client interacting with our API rather than that of the API’s developer.
  2. We have migrations, seeds, and the ability to start and stop the database. We should leverage that to test from as clean a slate as possible for each test. Let’s not carry over state between tests.
  3. We should be able to watch our tests as we develop, to aid in writing tests alongside code.

So, with this in mind, here is how I created my test framework. I started by creating a new docker-compose service for my tests:

api-test:
  build:
    context: ./api
    dockerfile: Dockerfile-dev
  image: crypto-dca-api:latest
  container_name: crypto-dca-api-test
  env_file: config/.env
  environment:
    - NODE_ENV=test
  entrypoint: npm run watch-tests
  volumes:
    - ./api:/www

This allows me to set NODE_ENV and run a custom command for watching tests. That watch-tests command is defined in my package.json:

"watch-tests": "NODE_ENV=test mocha --exit --watch ./test/{unit,integration}/index.js"

This watches both my unit and integration test entrypoints. Those entrypoints allow me to do test-group startup and cleanup operations.

Here is what my integration runner looks like:

test/integration/index.js:

const {
  describe,
  before,
  after
} = require('mocha');
const { up } = require('../../helpers/db');
const { start, stop } = require('../../helpers/server');
const testDir = require('../helpers/test-dir');
const runMigration = require('../helpers/migration');

let db, migrate;

describe('integration tests', () => {
  before(async () => {
    db = up();
    migrate = runMigration(db);
    await migrate.down();
    await migrate.up();
    await start({ db });
  });

  ['db', 'auth', 'graphql', 'rpc'].forEach(dir =>
    testDir(`integration/${dir}`)
  );

  after(async () => {
    await migrate.down();
    await stop();
  });
});

module.exports = () => db;

This ensures we are starting from a clean DB and server state, and that we clean up after ourselves.

Here’s a sample integration test:

const { expect } = require('chai');
const { describe, it } = require('mocha');
const fetch = require('node-fetch');
const { name } = require('../../helpers/sort');

describe('wallet query', () => {
  it('should be able to query all wallets', async () => {
    const query = encodeURIComponent(`
      {
        wallets {
          name,
          address,
          local
        }
      }
    `);

    const resp = await fetch(`http://localhost:8088/graphql?query=${query}`);
    const { data: { wallets } } = await resp.json();

    expect(
      wallets.sort(name)
    ).to.deep.equal([{
      name: "local BTC",
      address: "abacadsf",
      local: true
    }, {
      name: "remote BTC",
      address: "asdfdcvzdsfasd",
      local: false
    }, {
      name: "remote USDT",
      address: "vczvsadf",
      local: false
    }]);
  });
});

Creating these tests is easy because we can simply load up the test data using a seeder, use the /graphiql endpoint to run our query, inspect the output, and then copy it over to the test. There is a little bit of complexity because I am using generated UUIDs, so I sometimes have to query an ID before doing a search, but for the most part it is a mechanical process. I would like to explore using Jest because I know it is good at doing this type of state-diffing.
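For the UUID case, the pattern inside a test looks something like this (a sketch; walletSearch is the query defined earlier):

// Inside an async mocha test: look up the generated id first...
const idQuery = encodeURIComponent('{ walletSearch(query: "local") { id } }');
const idResp = await fetch(`http://localhost:8088/graphql?query=${idQuery}`);
const { data: { walletSearch: [{ id }] } } = await idResp.json();

// ...then interpolate it into the query under test
const query = encodeURIComponent(`{ wallet(id: "${id}") { name } }`);
const resp = await fetch(`http://localhost:8088/graphql?query=${query}`);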


That’s it! We now have a functional, tested, easily deployed GraphQL API with authentication and authorization. It’s not trivial to put all of these pieces together but once you figure out the core (GraphQL, Sequelize, graphql-sequelize) it becomes pretty simple to create something extremely powerful and extensible.

If you have some suggestions for how I could make my project or this article better, please leave a comment. I’d also love to check out your implementations of GraphQL APIs.

Cheers!
