
An introduction to Apollo GraphQL Node.js

Jorge Cortes


August 30th, 2018

One of the reasons I decided to write this post is that GraphQL, as a middleware backend technology, is increasingly popular in the open source community. Another is that, as a developer, it is important to know the best practices for building a backend project from scratch, with the aim of ending up with an excellent and flexible API built on technologies like Node.js, GraphQL and Apollo.

In this post I’m going to take for granted that you are familiar with Node.js and GraphQL; however, I’m going to give a more detailed explanation of Apollo before we get started.


What is Apollo?

As its website points out, Apollo is a very flexible and robust GraphQL client for production-ready applications. It supports a set of front-end technologies including React, React Native, Angular, Swift (iOS), Java (Android) and plain JavaScript (Vanilla JS).

It also works as a backend technology: the Apollo community has built a number of helper libraries, the two most popular being graphql-tools and apollo-server.

The graphql-tools library is used to build a GraphQL project from scratch, allowing the developer to combine all queries, mutations and subscriptions into one executable schema while taking advantage of the GraphQL schema language. It can also be used to build data mocks, which are very useful for both development and testing, as sketched below.
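
For instance, here is a minimal sketch of how graphql-tools can build a schema and then mock it (the file name and the query are hypothetical, shown only for illustration):

# mocks-example.js (hypothetical)
import { makeExecutableSchema, addMockFunctionsToSchema } from "graphql-tools";
import { graphql } from "graphql";

const typeDefs = `
  type Query {
    posts: [Post]
  }
  type Post {
    id: Int!
    title: String
  }
`;

// Build the executable schema, then replace every field with a mock resolver
const schema = makeExecutableSchema({ typeDefs });
addMockFunctionsToSchema({ schema });

// Every field now resolves to plausible fake data, handy for development and testing
graphql(schema, "{ posts { id title } }").then(result => console.log(result.data));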

The apollo-server library is a production-ready Node.js server utility that supports frameworks like Express, Connect, Hapi and Koa, among other HTTP Node.js servers. apollo-server also works with other GraphQL clients such as Relay, Lokka and others.

Creating a codebase project architecture

To get started building your own API backend project, follow the steps below; they will help us generate all the needed file structure. By the end we’ll have a base architecture that makes it easy to create schemas, resolvers, models and connectors, all consumable through an endpoint accessible to any developer.

So the first thing we’ll do is create a directory where our project is going to live, and then install all the necessary tools like this:

$ mkdir belatrix-api
$ cd belatrix-api
$ npm init -y
$ npm install --save graphql-tools graphql apollo-server-express body-parser compression cors express lodash dotenv
$ npm install --save-dev nodemon babel-cli babel-core babel-preset-es2015 babel-preset-stage-2 babel-register

Then, to have sample data standing in for a database, we’ll create the following file with this content at the root of the project:

# data.js
export const authors = [
{ id: 1, firstName: 'Carlos', lastName: 'Guzman' },
{ id: 2, firstName: 'Diego', lastName: 'Barrero' },
{ id: 3, firstName: 'Juan', lastName: 'Cortes' },
];
 
export const posts = [
{ id: 1, authorId: 1, title: 'GraphQL Course', votes: 2 },
{ id: 2, authorId: 2, title: 'Nodejs Introduction', votes: 3 },
{ id: 3, authorId: 2, title: 'Functional Programming', votes: 1 },
{ id: 4, authorId: 3, title: 'React Native Course', votes: 7 },
];

Now we’ll create the next file structure for the project:

.env
data.js
/api
├── index.js
├── schema.js
├── server.js
└── /sql
    ├── connector.js
    ├── models.js
    └── schema.js

Global configuration

The next step is to create the .env file that will hold all the environment variables we want for the application, such as the port for the Node.js service to listen on, like this:

PORT=3011

Next we will build our own Node.js server by putting the following content into the file /api/index.js.

In this file we import the run function, which lets us pass in any environment variables we need to execute the server:

# /api/index.js
import dotenv from "dotenv";
import { run } from './server';
 
dotenv.config({ silent: true });
run(process.env);

Node.js server configuration

Then we create the file /api/server.js with the following content.
# /api/server.js
import express from "express";
import { graphqlExpress, graphiqlExpress } from "apollo-server-express";
import bodyParser from "body-parser";
import { isString } from "lodash";
import { createServer } from "http";
 
import { Authors, Posts } from "./sql/models";
 
import schema from "./schema";

In these imports you can notice the use of the middleware functions graphqlExpress and graphiqlExpress, which provide GraphQL support from Node.js.

Also, it is very important to mention that the imported models Authors and Posts contain all the interactions with the data, whether through a database, some third-party API, or just data in a JSON file.

The schema import is the one that contains all the definitions of queries, mutations and subscriptions in GraphQL.

Now let’s get going by creating a function called run that will look like this:

# /api/server.js
export const run = ({
PORT: portFromEnv = 3010
} = {}) => {
 
}

In this case we are only capturing the PORT variable from the environment, but we could also read variables like API keys or other global values that the application sometimes needs.
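
For instance, if the application also needed an API key, the same destructuring pattern would extend naturally (API_KEY here is a hypothetical variable, shown only to illustrate the idea):

# /api/server.js (hypothetical variant)
export const run = ({
  PORT: portFromEnv = 3010,
  API_KEY: apiKey = ""
} = {}) => {
  // apiKey could then be handed to connectors or third-party clients
};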

Then, inside the run function, we set up the port, create the Express application, and add support for the JSON format.

# /api/server.js
const port = isString(portFromEnv) ?
parseInt(portFromEnv, 10) : portFromEnv;
 
const app = express();
 
app.use(bodyParser.urlencoded({ extended: true }));
app.use(bodyParser.json());

Next we will set up the graphql and graphiql services like this:

# /api/server.js
 
app.use("/graphiql", graphiqlExpress({
endpointURL: "/graphql"
}));
 
app.use("/graphql", graphqlExpress(req => {
const query = req.query.query || req.body.query;
if (query && query.length > 2000) {
throw new Error("Query too large.");
}
 
return {
schema,
context: {
Authors: new Authors(),
Posts: new Posts()
},
};
}));

Here the GraphiQL tool is served at the URL http://localhost:3011/graphiql and configured to point at the /graphql endpoint, while the GraphQL schema is set up with the two models, Authors and Posts, which are exposed to the resolvers through the context.

Then we build the server and listen on the right port, as follows:

# /api/server.js
 
const server = createServer(app);
 
server.listen(port, () => {
console.log(`API Server is now running on http://localhost:${port}`);
});

Main Schema setup

Now we create the file /api/schema.js, starting with the imports it needs:

# /api/schema.js
import { merge } from "lodash";
import { makeExecutableSchema } from "graphql-tools";
import { schema as sqlSchema, resolvers as sqlResolvers } from "./sql/schema.js";

In the previous imports, we use the merge function from lodash to merge all the resolvers we might have in the project. We also import the makeExecutableSchema utility, which combines all the schemas with the list of resolvers and creates an executable schema available to the graphqlExpress middleware.

Then, in the same file, we define all the data types and resolvers that the project will use and that are in charge of communicating with the models, as follows:

# /api/schema.js
 
const rootSchema = [`
type Query {
posts: [Post]
author(id: Int!): Author
}
 
type Mutation {
upvotePost (
postId: Int!
): Post
}
`];

Then we define the resolvers like this:

# /api/schema.js
 
const rootResolvers = {
Query: {
posts: (_, params, context) => context.Posts.getPosts(),
author: (_, { id }, context) => context.Authors.getAuthorById(id)
},
Mutation: {
upvotePost: (_, { postId }, context) => context.Posts.upVotePost(postId)
}
};

The first query resolver gets the list of posts, and the second one gets an author’s information by id. As for mutations, there is only one, upvotePost, which increments the vote count of a specific post.
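
For example, once the server is running, a client could trigger this mutation with an operation like the following (an illustrative query, not part of the project files):

mutation {
  upvotePost(postId: 1) {
    id
    votes
  }
}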

Then we merge the resolvers and join the list of schemas in order to create the final executable GraphQL schema.

# /api/schema.js
 
const schema = [...rootSchema, ...sqlSchema];
const resolvers = merge(rootResolvers, sqlResolvers);
 
const executableSchema = makeExecutableSchema({
typeDefs: schema,
resolvers,
});
 
export default executableSchema;

Secondary Schema setup

The secondary schema holds all the resolvers and schemas belonging to each model; in this case the models are Author and Post, and the file has this content:

# /api/sql/schema.js
 
export const schema = [`
type Author {
id: Int!
firstName: String
lastName: String
posts: [Post] # posts list by author
}
type Post {
id: Int!
title: String
author: Author
votes: Int
}
`];
 
export const resolvers = {
Author: {
posts: (author, _, context) => context.Posts.getPostsByAuthorId(author.id)
},
Post: {
author: (post, _, context) => context.Authors.getAuthorById(post.authorId),
},
};

In this case two types are created: one called Author with its corresponding fields, and another called Post. The resolvers are used to list the posts by author id and to get an author’s information from their own id.

Models setup

This is the file in charge of the business logic and the interaction with the data source, whether through a database framework, a third-party API, a JSON file, or a JavaScript object, as in this case.

For that, the file /api/sql/models.js has been created and in it the first thing to do is to write the imports.

# /api/sql/models.js
import { find, filter } from "lodash";
 
import connector from "./connector";
const { authors, posts } = connector();

With this, lodash is imported to filter and search for objects in arrays, and the connector is made available, which in this case just returns two arrays: the authors and the posts.

The models are created as classes, Authors and Posts, each one carrying its business logic in its own methods.

# /api/sql/models.js
 
export class Posts {
getPosts() {
return posts;
}
 
upVotePost(postId) {
const post = find(posts, { id: postId });
if (!post) {
throw new Error(`Couldn't find post with id ${postId}`);
}
post.votes += 1;
return post;
}
 
getPostsByAuthorId(authorId) {
return filter(posts, { authorId });
}
}
 
export class Authors {
getAuthorById(id) {
return find(authors, { id });
}
}

Connector setup

The idea behind a connector is to have a strategy that returns an object with its data access functionality and that can be configured per environment, whether by URL, API keys, or a database path. In this case we just import a file that exposes a JavaScript object containing all the data we need, as follows:

# /api/sql/connector.js
import { authors, posts } from "../../data.js";
 
export default () => ({ authors, posts });
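
As a sketch of how the connector could be made configurable by environment (the DATA_SOURCE variable is hypothetical, used only to illustrate the strategy):

# /api/sql/connector.js (hypothetical variant)
import { authors, posts } from "../../data.js";

export default () => {
  // DATA_SOURCE is a hypothetical environment variable; in a real project
  // it could select a database client, a third-party API, etc.
  const source = process.env.DATA_SOURCE || "memory";
  if (source === "memory") {
    return { authors, posts };
  }
  throw new Error(`Unknown data source: ${source}`);
};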

Project execution

To run the project through NPM, add the following scripts to your package.json:

# package.json
"scripts": {
"start": "babel-node api/index.js",
"dev": "nodemon api/index.js --watch api --exec babel-node"
}
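
Since the project is written with ES2015 import syntax, babel-node also needs to know which presets to apply. Assuming the presets installed earlier, a minimal .babelrc at the root of the project would look like this:

# .babelrc
{
  "presets": ["es2015", "stage-2"]
}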

Once you have your NPM settings done, you can run the project like so:

$ npm run dev

Then a message will show up on the console saying:

API Server is now running on http://localhost:3011

(The port comes from the PORT variable in .env; if it were missing, the server would fall back to the default of 3010.)

Then, to see the schemas working, go to the URL http://localhost:3011/graphiql in a web browser. That will show you a UI for running all your queries and mutations.
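
For example, running this query in GraphiQL exercises both models at once (the query itself is just an illustration):

query {
  posts {
    title
    votes
    author {
      firstName
      lastName
    }
  }
}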
