Let's build a REST API with NestJS! In this post we'll discuss:
- The basics of getting started
- Establishing a database and designing an effective schema
- Defining API endpoints for CRUD operations
- Implementing data validation in the application
- Decoupling the database schema from the API response
In upcoming posts in this series, I'll cover topics such as documentation, authentication, pagination, testing, and deployment, along with best practices for developing the API as we go.
The tech stack for this project will consist of:
- A TypeScript-based NestJS application
- Object Relational Mapping (ORM) powered by Prisma
- A MySQL database hosted by PlanetScale
- Deployment as a Docker container on Fly.io
What is NestJS?
NestJS is a progressive Node.js framework that allows you to build server-side applications using JavaScript, with integrated support for TypeScript.
NestJS leverages Express as its underlying framework, but can also be configured to use alternative frameworks such as Fastify. It offers an additional layer of abstraction over these frameworks, while still providing direct access to their APIs, enabling you to incorporate any third-party modules compatible with them.
So what makes NestJS a good choice? Since it is built on top of Express, which is itself built on Node.js, what does it add over using Express directly?
To start with, NestJS provides structure and a sensible set of default options. Express is designed to be unopinionated, which can be desirable for some teams but not for everyone. Adding structure can lead to more consistency across projects and can increase efficiency for teams that benefit from it.
That structure is valuable: it saves a significant amount of time and heads off common issues. NestJS is designed with modularity, scalability, and testability in mind while remaining loosely coupled. Its design is inspired by Angular, featuring concepts such as modules, services, and controllers (controllers playing a role similar to Angular's components).
NestJS also comes with a command-line interface that helps you create a new project and add components to an existing project, promoting organization and saving time by eliminating the need to write repetitive code.
NestJS has excellent documentation. The documentation, together with your source code (thanks to TypeScript), should give you everything you need to get started. That is your source of truth.
Having said that, seeing an example can be useful in understanding how the components work together and realizing the overall picture. With that in mind, let's go through a hands-on demonstration of building a REST API with NestJS. We'll begin with something simple and gradually add features.
What are we building?
I've made a public API that serves quotes and aphorisms from famous and noteworthy people on topics such as love, life, wisdom, and success.
You can get a random quote now by sending a curl request to the API:
curl https://api.quotd.io/quotes/random
{
"id": 169,
"text": "There is only one kind of love, but there are a thousand imitations.",
"author": { "id": 110, "name": "François de La Rochefoucauld" },
"category": { "id": 1, "name": "Love" }
}
The returned quote includes the `text` and `id`, as well as the author's `name` and `id` and the category's `name` and `id`.
This project is a passion of mine. I am inspired by literature, speeches, and quotes from great minds and strive to create something meaningful here. I plan to continue adding quotes and hope to create a valuable resource over time. Regardless of the outcome, I am enjoying the process.
Getting started
First, you'll need to install the command line interface as a global dependency:
npm install -g @nestjs/cli
And we're going to start a new project with `nest new`:
nest new quotd-api
Choose a package manager and navigate to the project directory to check out the structure. In particular, turn your attention to the `src` directory, which has already been populated with several files.
src/
├── app.controller.spec.ts
├── app.controller.ts
├── app.module.ts
├── app.service.ts
└── main.ts
The entry point for the application is located in the `main.ts` file.
import { NestFactory } from "@nestjs/core";
import { AppModule } from "./app.module";
async function bootstrap() {
const app = await NestFactory.create(AppModule);
await app.listen(3000);
}
bootstrap();
It includes an asynchronous function to bootstrap the application. Using the `NestFactory` class, we create a Nest application instance. We then start an HTTP listener by calling the `listen()` method on that instance, which awaits incoming requests on the specified port.
So far this looks pretty similar to a typical Node.js application you might be used to. As we move forward, we will make additions to this file, but these particular lines will remain pretty much the way you see them now.
NestJS is intended to be organized into modules, each with its own directory, and `app.module.ts` is the root module for the application. So, let's take a peek in there.
import { Module } from "@nestjs/common";
import { AppController } from "./app.controller";
import { AppService } from "./app.service";
@Module({
imports: [],
controllers: [AppController],
providers: [AppService],
})
export class AppModule {}
Looks like we have a typical class, `AppModule`, but with the decorator `@Module()` imported from `@nestjs/common`.
TypeScript decorators can be applied to methods, properties, and parameters in addition to classes, and they are used extensively in a NestJS application. Shortly, we'll import our various modules into `AppModule` as we create them.
For now though, let's turn our attention to the `AppController` class found in the file `app.controller.ts`:
import { Controller, Get } from "@nestjs/common";
import { AppService } from "./app.service";
@Controller()
export class AppController {
constructor(private readonly appService: AppService) {}
@Get()
getHello(): string {
return this.appService.getHello();
}
}
The `AppController` class has the `@Controller()` decorator and a constructor that takes the `AppService` as an argument. Controllers are responsible for handling incoming requests and returning responses to the client.
In this case, we only have one endpoint defined. The `@Get()` decorator maps to the HTTP `GET` request method, meaning that when a `GET` request is received, the `getHello()` method will be called to handle it.
In a typical NestJS application, multiple endpoints would be defined to correspond to various CRUD (Create, Read, Update, Delete) operations. Each endpoint would be decorated with a different HTTP request method decorator, such as `@Get()`, `@Post()`, `@Put()`, or `@Delete()`, depending on the operation it handles.
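Conceptually, each of these decorators registers its method in a routing table keyed by HTTP verb and path, and an incoming request is dispatched to the matching handler. This is not Nest's actual implementation, just a plain-TypeScript sketch of the idea:

```typescript
// A toy routing table: maps "VERB path" pairs to handler functions.
type Handler = (body?: unknown) => string;

const routes = new Map<string, Handler>();

// This is roughly what @Get('/quotes') or @Post('/quotes') accomplishes:
// associating a verb and path with a class method.
function register(verb: string, path: string, handler: Handler) {
  routes.set(`${verb} ${path}`, handler);
}

register("GET", "/quotes", () => "all quotes");
register("POST", "/quotes", (body) => `created ${JSON.stringify(body)}`);

// Dispatching a request means looking up the verb+path pair.
function dispatch(verb: string, path: string, body?: unknown): string {
  const handler = routes.get(`${verb} ${path}`);
  if (!handler) return "404 Not Found";
  return handler(body);
}
```

The decorators spare you from maintaining a table like this by hand; the framework builds it for you from the class metadata.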
Finally, let's check out `AppService`, where the business logic is defined:
import { Injectable } from "@nestjs/common";
@Injectable()
export class AppService {
getHello(): string {
return "Hello World!";
}
}
And we'll see that the `getHello()` method returns the string `Hello World!`. Pretty simple.
Note another decorator here: `@Injectable()`. It attaches metadata to the class indicating that it should be managed by the Nest inversion of control (IoC) container, which is responsible for managing the lifecycle of the class and providing it with any required dependencies.
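To make the idea concrete, here is a toy container illustrating inversion of control (a teaching sketch only; Nest's real container reads decorator metadata and handles scopes, dependency graphs, and much more):

```typescript
// A minimal IoC container: classes are registered lazily, and the container
// constructs and caches a single (singleton) instance of each.
class Container {
  private instances = new Map<new () => unknown, unknown>();

  resolve<T>(token: new () => T): T {
    if (!this.instances.has(token)) {
      this.instances.set(token, new token());
    }
    return this.instances.get(token) as T;
  }
}

class AppService {
  getHello(): string {
    return "Hello World!";
  }
}

// Rather than controllers calling `new AppService()` themselves,
// the container hands out the shared instance.
const container = new Container();
const service = container.resolve(AppService);
```

The key point is that classes never instantiate their own dependencies; the container does, which is what makes them easy to swap out in tests.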
To start the server, run `npm run start` in the command line. This will launch the server and make it accessible at `http://localhost:3000`. If you send a curl request to the server, `curl http://localhost:3000`, you will receive the response `Hello World!`.
When developing the application, you can use `npm run start:dev` to run the server and watch for file changes, so you can quickly test changes without manually restarting the server each time.
Designing the database schema
To begin with, I want to keep things simple for this project, so we'll start with a basic design. For serving quotes, we'll need at least a table named `QUOTES` with a minimum of two columns, `TEXT` and `ID`, which store the text of the quote and a unique identifier for each quote, respectively.
Each quote is assigned to a single author in the `AUTHORS` table, a one-to-many relationship in which an author can be associated with multiple quotes. The database schema must reflect this relationship.
Likewise, each quote is assigned to a single category stored in the `CATEGORIES` table, again a one-to-many relationship: each quote belongs to one category, and each category can have multiple quotes. Both the `AUTHORS` and `CATEGORIES` tables will have columns named `NAME` and `ID`.
Later, if we decide to allow multiple categories per quote, this relationship would become a many-to-many relationship. This would require updating the schema to reflect this change. For now, we will stick with one category per quote.
To ensure data integrity, we'll add unique constraints on the `TEXT` column of the `QUOTES` table and the `NAME` columns of the `AUTHORS` and `CATEGORIES` tables. This will prevent duplicate quotes, authors, and categories from being inserted into the database.
Two authors can have the same name, such as in the case of Samuel Butler the 17th-century poet and Samuel Butler the 19th-century novelist. We will address this issue when the time comes.
In the future, I plan to add a citation source and additional author-related metadata from Wikidata. However, for now, let's start with the basic schema and expand it as we go along.
Add modules and prepare API endpoints
We need to create a module, controller, and provider for each of our resources: quotes, authors, and categories. We need the capability to perform CRUD operations (create, read, update, delete) on each resource through an endpoint and its corresponding HTTP request method (`GET`, `POST`, `PATCH`, and `DELETE`).
So to do that, let's start with quotes. Use the Nest CLI to generate a module, controller, and service for quotes:
nest g resource quotes
Choose REST API for the transport layer, and yes to generate CRUD entry points. The CLI saves us time and effort compared to writing all of that code by hand.
By issuing that command, we get the following files:
src/quotes
├── dto/
│ ├── create-quote.dto.ts
│ └── update-quote.dto.ts
├── entities/
│ └── quote.entity.ts
├── quotes.controller.spec.ts
├── quotes.controller.ts
├── quotes.module.ts
├── quotes.service.spec.ts
└── quotes.service.ts
You'll also notice it automatically imports the new module into `AppModule`. Let's go ahead and generate resources for our remaining modules as well, authors and categories:
nest g resource authors
nest g resource categories
The `QuotesController` has five methods that enable CRUD operations on quotes:
- The `create()` method handles a `POST` request and creates a new quote.
- The `findAll()` method handles a `GET` request and retrieves all quotes.
- The `findOne()` method handles a `GET` request and retrieves a quote by its id.
- The `update()` method handles a `PATCH` request and updates a quote by its id.
- The `remove()` method handles a `DELETE` request and deletes a quote by its id.
Let's take a closer look at the `findOne()` method in the `QuotesController`.
@Get(':id')
findOne(@Param('id') id: string) {
return this.quotesService.findOne(+id);
}
The `@Param()` decorator from `@nestjs/common` enables us to capture request parameters and use them in our method. Here, we receive a string, `id`, and convert it to a number with type coercion.
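The unary plus is a quick coercion, but note that it silently yields `NaN` for non-numeric input; Nest also ships a `ParseIntPipe` for stricter parsing if you prefer. A small sketch of the behavior (the `parseId` guard is a hypothetical helper, not from the generated code):

```typescript
// Unary plus converts a numeric string to a number...
const id = +"169";

// ...but silently yields NaN for anything non-numeric.
const bad = +"abc";

// A hypothetical guard a service might use before querying by id.
function parseId(raw: string): number {
  const n = +raw;
  if (Number.isNaN(n)) throw new Error(`invalid id: ${raw}`);
  return n;
}
```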
Whereas `@Param()` gives us access to the request parameters, the `@Body()` decorator gives us the request body. For example, take a look at the `create()` method:
@Post()
create(@Body() createQuoteDto: CreateQuoteDto) {
return this.quotesService.create(createQuoteDto);
}
Here the request body is received as a data transfer object (DTO), an object that defines how the data will be sent over the network. The DTO in this case is a class named `CreateQuoteDto` that specifies the structure of the data to be sent.
It is recommended to use a class for the DTO instead of a TypeScript interface, because classes are part of ES6 syntax and are preserved in the transpiled JavaScript code. If it were written as an interface, it would be removed during transpilation and therefore unavailable at runtime, since our application is written in TypeScript but runs as JavaScript.
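The difference is easy to demonstrate: a class survives transpilation and can be inspected at runtime, while an interface is erased. A quick sketch (the names here are illustrative, not the project's actual DTOs):

```typescript
// A class exists at runtime: we can instantiate it and use instanceof.
class CreateQuoteDtoExample {
  readonly text: string = "";
}

const dto = new CreateQuoteDtoExample();
const isClassInstance = dto instanceof CreateQuoteDtoExample;

// An interface is a compile-time construct only; after transpilation there
// is no runtime value named QuoteShape to check against with instanceof.
interface QuoteShape {
  text: string;
}
const plain: QuoteShape = { text: "hello" };
```

This is why `class-validator` needs classes: its decorators attach runtime metadata, and there is nothing to attach it to once an interface has been erased.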
Let's now shape the structure of `CreateQuoteDto` by opening `create-quote.dto.ts` and adding the following code:
export class CreateQuoteDto {
readonly text: string;
}
We define the `text` property of the `CreateQuoteDto` class as a readonly string to maintain immutability.
To validate the correctness of data sent to the application, we'll use the Nest `ValidationPipe`. This requires the class-validator and class-transformer packages.
npm i --save class-validator class-transformer
Open `main.ts` and set up the `ValidationPipe` at the application level by calling `useGlobalPipes()`.
app.useGlobalPipes(new ValidationPipe());
We pass in an instance of the `ValidationPipe` from `@nestjs/common`, and our entire application gains access to validation. This ensures that every request sent to our application goes through validation before processing, catching invalid requests early and preventing bugs caused by invalid data.
So we return to our DTO and import `@IsString()` from `class-validator`. We'll apply that decorator to `text` to ensure the correct data type is sent from the client. An empty string won't do either, so let's also add the `@IsNotEmpty()` decorator.
import { IsString, IsNotEmpty } from "class-validator";
export class CreateQuoteDto {
@IsString()
@IsNotEmpty()
readonly text: string;
}
Now if we send a non-string value for `text`, we'll receive an error with the message "text must be a string."
curl -X 'POST' \
-H "Content-Type: application/json" \
-d '{"text": 42}' \
'http://localhost:3000/quotes'
{
"statusCode": 400,
"message": ["text must be a string"],
"error": "Bad Request"
}
To enhance security, let's protect our application from potentially malicious data by taking an additional step.
Back in `main.ts`, we can configure the `ValidationPipe` by passing it an options object and setting `whitelist` to `true`. This will filter out any properties sent by the client that are not defined in the DTO.
Let's take it a step further and turn on another property that not only filters out additional data but rejects the request entirely if it contains anything beyond what is defined in our DTO. This is done with `forbidNonWhitelisted`.
app.useGlobalPipes(
new ValidationPipe({
whitelist: true,
forbidNonWhitelisted: true,
})
);
With these properties set in the `ValidationPipe`, our application will reject requests that include properties not defined in the DTO, further protecting it from potentially malicious data.
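In plain terms, `whitelist` strips unknown properties and `forbidNonWhitelisted` rejects the request instead of stripping. The following is only a conceptual sketch of that behavior in plain TypeScript, not the real `ValidationPipe`:

```typescript
// Conceptual sketch of the ValidationPipe whitelist options.
// Assume "text" is the only property our DTO defines.
const allowedKeys = ["text"];

// whitelist: true -> silently drop unknown properties from the body.
function whitelist(body: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(body).filter(([key]) => allowedKeys.includes(key))
  );
}

// forbidNonWhitelisted: true -> reject the request outright instead.
function forbidNonWhitelisted(body: Record<string, unknown>): void {
  const extras = Object.keys(body).filter((key) => !allowedKeys.includes(key));
  if (extras.length > 0) {
    throw new Error(`property ${extras.join(", ")} should not exist`);
  }
}
```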
One final step in our validation process is to ensure that the incoming object is an actual instance of our DTO. To accomplish this, we'll set the `transform` property of the `ValidationPipe` to `true` in our global configuration. Now the incoming object will be transformed into an instance of `CreateQuoteDto`, giving us confidence in the data types we're working with.
In the end, it should look like this:
app.useGlobalPipes(
new ValidationPipe({
whitelist: true,
transform: true,
forbidNonWhitelisted: true,
})
);
To align with the schema we discussed, let's update `CreateAuthorDto` in the same manner as `CreateQuoteDto`, with the appropriate decorators on `name` to ensure that it is a string and not empty.
import { IsString, IsNotEmpty } from "class-validator";
export class CreateAuthorDto {
@IsString()
@IsNotEmpty()
readonly name: string;
}
And likewise for `CreateCategoryDto`:
import { IsString, IsNotEmpty } from "class-validator";
export class CreateCategoryDto {
@IsString()
@IsNotEmpty()
readonly name: string;
}
To finish up `CreateQuoteDto`, we'll need to import `CreateAuthorDto` and `CreateCategoryDto`. We'll also need the `@ValidateNested()` decorator from `class-validator` and the `@Type()` decorator from `class-transformer` so that we can include both author and category information in a `POST` request to `/quotes`.
import { IsString, IsNotEmpty, ValidateNested } from "class-validator";
import { Type } from "class-transformer";
import { CreateAuthorDto } from "../../authors/dto/create-author.dto";
import { CreateCategoryDto } from "../../categories/dto/create-category.dto";
export class CreateQuoteDto {
@IsString()
@IsNotEmpty()
readonly text: string;
@ValidateNested()
@IsNotEmpty()
@Type(() => CreateAuthorDto)
readonly author: CreateAuthorDto;
@ValidateNested()
@IsNotEmpty()
@Type(() => CreateCategoryDto)
readonly category: CreateCategoryDto;
}
We can now send a `POST` request to our `/quotes` endpoint with a text, author, and category:
curl -X 'POST' \
-H "Content-Type: application/json" \
-d '{"text": "The rain in Spain stays mainly in the plain.", "author": { "name": "Audrey Hepburn" }, "category": { "name": "Wisdom" }}' \
'http://localhost:3000/quotes'
The data is now validated on arrival, but nothing is done with it yet, because we haven't implemented any business logic or even set up a database. So let's work on that next.
Set up a MySQL database with Docker Compose
Containerizing our applications makes them portable, easier to maintain, and modular while adding almost no overhead cost. In a future post, we'll enhance our project for production with a Docker multi-stage build, standardizing our Node.js environment.
For now, we'll use Docker Compose to set up a MySQL database for development purposes only. In production, we'll use PlanetScale, a MySQL-compatible serverless database platform that offers horizontal sharding, unlimited connections, and zero-downtime schema migrations with branching.
Ensure that you have installed Docker on your local machine before proceeding. Then, create a new file in the root directory of your project named `docker-compose.yml`:
services:
db:
image: mysql:8
restart: always
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: quotd
MYSQL_USER: mysql
MYSQL_PASSWORD: mysql
ports:
- "3306:3306"
We'll use the official MySQL Docker image (version 8), map port 3306 on our local machine to port 3306 of our container, and set some environment variables.
To start a new container, run `docker compose up`. If you want to run the container in the background, use the `--detach` or `-d` flag. To view the logs, run `docker compose logs --follow db`, and to see a list of containers, run `docker ps`.
To stop, start, remove, or perform other actions on containers, use commands such as `docker compose stop`, `docker compose start`, `docker compose down`, and `docker compose rm`. For more information, refer to `docker --help` and `docker compose --help`.
To simplify our workflow, we'll include a few of these Docker commands in our `package.json` file. During development, it's common to restart with a clean database, so we'll create some npm scripts to make that process easier.
Add the following commands to `package.json`:
{
"scripts": {
"db:rm": "docker compose rm db --force --stop --volumes",
"db:up": "docker compose up db --detach",
"db:restart": "run-s db:rm db:up"
}
}
What do these do?
- `npm run db:rm` stops and removes the database container along with its associated volumes.
- `npm run db:up` starts a new database container in detached mode.
- `npm run db:restart` runs both `db:rm` and `db:up` to provide a fresh, clean slate for the database when necessary.
For the `db:restart` script to work, we'll also need the npm-run-all package, which provides the `run-s` command for running tasks sequentially.
npm install npm-run-all --save-dev
Now if we run `npm run db:up`, we can access the MySQL shell by running `docker compose exec db mysql -u mysql -p` and entering the password, "mysql", as defined in our Compose file. That's pretty cool, but instead of manually creating tables according to the schema, we'll use an Object Relational Mapper (ORM).
An ORM simplifies the database interaction by translating the objects in our application to database tables and the relationships between these objects into relationships between the tables. This way, we can work with the objects in our code, and the ORM takes care of the database interactions.
Adding Prisma to the project
Prisma is a type-safe ORM for Node.js and TypeScript that automatically generates types from your database schema. It streamlines development by eliminating manual type definitions and providing a simple, intuitive API for database operations.
To add Prisma to the project, first add Prisma as a development dependency.
npm install prisma --save-dev
Next, run `npx prisma init --datasource-provider mysql`, which will generate a Prisma schema file, `./prisma/schema.prisma`.
By default, Prisma uses foreign keys in the underlying database to enforce relationships between fields in your Prisma schema. However, PlanetScale does not allow the use of foreign keys. To work around this restriction, set the `relationMode` field in the `datasource` block of your Prisma schema to `prisma`:
datasource db {
provider = "mysql"
url = env("DATABASE_URL")
relationMode = "prisma"
}
This will enable the emulation of relationships in the Prisma Client, avoiding the need for foreign keys in the database.
Next, let's add our desired schema to the Prisma schema file.
model Author {
id Int @id @default(autoincrement())
name String @unique
quotes Quote[]
@@map("authors")
}
model Quote {
id Int @id @default(autoincrement())
text String @unique @db.VarChar(512)
author Author @relation(fields: [authorId], references: [id], onDelete: Cascade)
authorId Int
category Category @relation(fields: [categoryId], references: [id])
categoryId Int
@@index([authorId])
@@index([categoryId])
@@map("quotes")
}
model Category {
id Int @id @default(autoincrement())
name String @unique
quotes Quote[]
@@map("categories")
}
There are a couple of things to note here. First, the `text` field in the `Quote` model is given a larger column type, 512 characters, to accommodate the occasionally lengthier quotes.
Second, because foreign key constraints are not supported in PlanetScale, it is wise to add indexes manually to optimize performance and potentially reduce the cost of rows read. In the `Quote` model, we mitigate that by indexing `authorId` and `categoryId`.
One last thing before we push: let's update the credentials in the `.env` file to match those in our Compose file:
DATABASE_URL='mysql://mysql:mysql@localhost:3306/quotd'
So with the schema in place, we can now run `npx prisma db push` to bring our database in line with the Prisma schema. This will also generate the Prisma Client. Next, we need to add a Prisma module and service so that we can easily import Prisma into the various modules.
Let's create those now by running `nest g module prisma` and `nest g service prisma`.
To handle environment variables reliably, it's advisable to use the NestJS approach to configuration, which handles secrets more predictably in a NestJS application than Prisma's built-in mechanism. We'll also need more environment variables shortly, so setting up configuration now saves time later.
To set up environment variables in our NestJS project, let's add `@nestjs/config` as a dependency, import `ConfigModule` into the `AppModule`, and invoke `forRoot()`:
import { ConfigModule } from '@nestjs/config';
@Module({
imports: [
ConfigModule.forRoot(),
/* ... */
],
})
The `PrismaService` should look like this:
import { Injectable, OnModuleInit, INestApplication } from "@nestjs/common";
import { PrismaClient } from "@prisma/client";
@Injectable()
export class PrismaService extends PrismaClient implements OnModuleInit {
constructor() {
super({
datasources: {
db: {
url: process.env.DATABASE_URL,
},
},
});
}
async onModuleInit() {
await this.$connect();
}
async enableShutdownHooks(app: INestApplication) {
this.$on("beforeExit", async () => {
await app.close();
});
}
}
And the `PrismaModule` should look like this:
import { Module } from "@nestjs/common";
import { PrismaService } from "./prisma.service";
@Module({
providers: [PrismaService],
exports: [PrismaService],
})
export class PrismaModule {}
There is a known issue between NestJS's `enableShutdownHooks` and Prisma, so we'll update `main.ts` to address it:
import { PrismaService } from "./prisma/prisma.service";
async function bootstrap() {
/* ... */
const prismaService = app.get(PrismaService);
await prismaService.enableShutdownHooks(app);
}
bootstrap();
Finally, import the `PrismaModule` into each of our modules: quotes, authors, and categories.
import { PrismaModule } from '../prisma/prisma.module';
@Module({
imports: [PrismaModule],
})
Implement the CRUD operations
With our modules in place, data validation handled, and the database and ORM functional, we can now implement the CRUD operations. NestJS has already prepared us for success in this area, so we can jump right into our services where the business logic happens.
Let's start with the quotes resource and open `QuotesService`, importing the `PrismaService` and passing it as a private, read-only parameter in the constructor.
import { PrismaService } from "../prisma/prisma.service";
@Injectable()
export class QuotesService {
constructor(private readonly prisma: PrismaService) {}
/* ... */
}
In the `create()` method, which handles our `POST` operation, we'll destructure `text`, `author`, and `category` from the DTO. Since both nested objects have a `name` property, we'll rename `author.name` and `category.name` to `author` and `category`, respectively, while destructuring.
We'll now use Prisma's `create` and `connectOrCreate` methods to store the quote, author, and category information in the database.
create(createQuoteDto: CreateQuoteDto) {
const {
text,
author: { name: author },
category: { name: category },
} = createQuoteDto;
return this.prisma.quote.create({
data: {
text,
author: {
connectOrCreate: {
where: { name: author },
create: { name: author },
},
},
category: {
connectOrCreate: {
where: { name: category },
create: { name: category },
},
},
},
});
}
We use Prisma's `connectOrCreate` here because it either connects to an existing author or category or creates a new one if it does not exist, avoiding failures due to the unique constraints.
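As an analogy (not Prisma internals), `connectOrCreate` behaves like a get-or-insert lookup on a unique key:

```typescript
// A plain-TypeScript analogy for connectOrCreate: look up a row by its
// unique name; if absent, create it; either way return its id.
const authors = new Map<string, number>(); // name -> id
let nextId = 1;

function connectOrCreateAuthor(name: string): number {
  const existing = authors.get(name);
  if (existing !== undefined) return existing; // "connect" to the existing row
  const id = nextId++;
  authors.set(name, id); // "create" a new row
  return id;
}
```

Submitting two quotes by the same author therefore links both to a single author row instead of tripping the unique constraint on `name`.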
To retrieve a list of all quotes, we'll create a `findAll()` method in `QuotesService` and use Prisma's `findMany` method to return all the quotes in the database. This method will handle a `GET` request to the `/quotes` endpoint.
findAll() {
return this.prisma.quote.findMany();
}
Called without arguments, however, `findAll()` will only show the foreign key IDs for `author` and `category`, not the author name and category.
[
{
id: 1,
text: "The rain in Spain stays mainly in the plain.",
authorId: 1,
categoryId: 1,
},
];
And this isn't what we want. To get the author name and category, we can use Prisma's `select` clause to specify which fields to include in the returned data. This way, we can exclude the foreign keys and include the author name and category.
findAll() {
return this.prisma.quote.findMany({
select: {
id: true,
text: true,
author: {
select: {
id: true,
name: true,
},
},
category: {
select: {
id: true,
name: true,
},
},
},
});
}
And with that update, we get the desired output including the name and category:
[
{
"id": 1,
"text": "The rain in Spain stays mainly in the plain.",
"author": { "id": 1, "name": "Audrey Hepburn" },
"category": { "id": 1, "name": "Wisdom" }
}
]
Let's now implement `findOne()`, which will use Prisma's `findUnique` method.
The `findOne()` method returns a quote for a given id. For example, a `GET` request to `http://localhost:3000/quotes/1` will return the quote with an id of 1.
We'll pass in the `id` as a parameter and use the `where` filter in `findUnique` to retrieve the quote with that specific id.
findOne(id: number) {
return this.prisma.quote.findUnique({
where: { id },
select: {
id: true,
text: true,
author: {
select: {
id: true,
name: true,
},
},
category: {
select: {
id: true,
name: true,
},
},
},
});
}
The goal is for the data returned from `findOne()` to have the same shape as a single object from `findAll()`. To keep our code DRY (Don't Repeat Yourself) and take advantage of type safety, we'll use the Prisma validator, letting us reuse the code and get a type-safe returned object.
Import `Prisma` from `@prisma/client` and create a new variable called `selectQuoteValidator`. Then, we'll move our selection code into this single location by passing it as an argument to the Prisma validator.
import { Prisma } from "@prisma/client";
const selectQuoteValidator = Prisma.validator<Prisma.QuoteSelect>()({
id: true,
text: true,
author: {
select: {
id: true,
name: true,
},
},
category: {
select: {
id: true,
name: true,
},
},
});
Finally, we can reuse that code in `findAll()` and `findOne()` by passing it as the `select` argument.
findAll() {
return this.prisma.quote.findMany({
select: selectQuoteValidator,
});
}
findOne(id: number) {
return this.prisma.quote.findUnique({
where: { id },
select: selectQuoteValidator,
});
}
By making use of the Prisma validator, we keep our code clean and tidy. We can now reuse `selectQuoteValidator` in multiple locations, making it easy to maintain and reducing the risk of bugs.
The final two methods are `update()` and `remove()`. The `update()` method will use `UpdateQuoteDto`, which is similar to `CreateQuoteDto` but allows for partial updates. It uses `PartialType` from `@nestjs/mapped-types`.
import { PartialType } from "@nestjs/mapped-types";
import { CreateQuoteDto } from "./create-quote.dto";
export class UpdateQuoteDto extends PartialType(CreateQuoteDto) {}
The DTOs for `create()` and `update()` are similar, except that an update request usually only requires a subset of the data: the data that will be changed. Using partial types is a common solution to this scenario.
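At the type level, `PartialType` behaves much like TypeScript's built-in `Partial<T>` (while also relaxing the inherited validation rules). A sketch of the partial-update pattern it enables, with illustrative names:

```typescript
interface Quote {
  text: string;
  author: string;
}

// An update payload carries only the fields being changed.
type QuotePatch = Partial<Quote>;

// Merging the patch over the existing record leaves omitted fields intact.
function applyPatch(existing: Quote, patch: QuotePatch): Quote {
  return { ...existing, ...patch };
}
```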
So we'll use `UpdateQuoteDto` in the `update()` method. But just like the code reuse issue we addressed between `findAll()` and `findOne()`, the same duplication occurs between `create()` and `update()`.
update(id: number, updateQuoteDto: UpdateQuoteDto) {
const {
text,
author: { name: author },
category: { name: category },
} = updateQuoteDto;
return this.prisma.quote.update({
where: { id },
data: {
text,
author: {
connectOrCreate: {
where: { name: author },
create: { name: author },
},
},
category: {
connectOrCreate: {
where: { name: category },
create: { name: category },
},
},
},
});
}
Let's use the Prisma validator once more by creating a `createQuoteValidator` function that takes text, author, and category as arguments. It wraps the same code we previously wrote so it can be reused in both methods.
const createQuoteValidator = (
text: string,
author: string,
category: string
) => {
return Prisma.validator<Prisma.QuoteCreateInput>()({
text,
author: {
connectOrCreate: {
where: { name: author },
create: { name: author },
},
},
category: {
connectOrCreate: {
where: { name: category },
create: { name: category },
},
},
});
};
Now update the `create()` method to use the Prisma validator.
create(createQuoteDto: CreateQuoteDto) {
const {
text,
author: { name: author },
category: { name: category },
} = createQuoteDto;
return this.prisma.quote.create({
data: createQuoteValidator(text, author, category),
select: selectQuoteValidator,
});
}
Next, we'll use the same validator in the `update()` method.
```typescript
update(id: number, updateQuoteDto: UpdateQuoteDto) {
  const {
    text,
    author: { name: author },
    category: { name: category },
  } = updateQuoteDto;

  return this.prisma.quote.update({
    where: { id },
    data: createQuoteValidator(text, author, category),
    select: selectQuoteValidator,
  });
}
```
Lastly, implementing the remove() method is straightforward. We'll use Prisma's delete method and pass it the id from the request parameters in a where filter.
```typescript
remove(id: number) {
  return this.prisma.quote.delete({
    where: { id },
  });
}
```
Decouple schema and API response
Best practices in API development call for separating the database schema from the API response. The database schema is optimized for efficient data storage and retrieval, while the API response should be structured to meet the specific needs of the client. By decoupling these two structures, you can increase the flexibility in how data is stored and returned.
We have already used data transfer objects to separate the database schema from the request payload, ensuring that the data sent by the client conforms to the expected structure and data types. To further optimize the API response, we can use entities to separate the database schema from the API response. This gives us full control over the data returned to the client, ensuring that it meets their needs.
Additionally, using entities for the API response improves the type safety of the application. By defining the structure and data types of the API response in TypeScript, we can detect and prevent any unintended changes to the returned data in services and controllers. This helps to maintain the consistency and reliability of the API response, ensuring that it always conforms to the expected structure and data types.
To implement this, update the quote.entity.ts file in the ./src/quotes/entity directory with the following code:
```typescript
import { CategoryEntity } from "../../categories/entity/category.entity";
import { AuthorEntity } from "../../authors/entity/author.entity";

export class QuoteEntity {
  id: number;
  text: string;
}

export class QuoteWithAuthorEntity {
  id: number;
  text: string;
  author: AuthorEntity;
}

export class QuoteWithAuthorAndCategoryEntity {
  id: number;
  text: string;
  author: AuthorEntity;
  category: CategoryEntity;
}
```
And author.entity.ts should look like this:
```typescript
import { QuoteEntity } from "../../quotes/entity/quote.entity";

export class AuthorEntity {
  id: number;
  name: string;
}

export class AuthorWithQuotesEntity {
  id: number;
  name: string;
  quotes: QuoteEntity[];
}
```
And category.entity.ts should look like this:
```typescript
import { QuoteWithAuthorEntity } from "../../quotes/entity/quote.entity";

export class CategoryEntity {
  id: number;
  name: string;
}

export class CategoryWithAuthorAndQuotesEntity {
  id: number;
  name: string;
  quotes: QuoteWithAuthorEntity[];
}
```
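These entity classes are plain data shapes. As a quick standalone sketch (using interfaces and placeholder values rather than the actual classes), a response conforming to QuoteWithAuthorAndCategoryEntity would look like:

```typescript
// Standalone interfaces mirroring the entity classes, for illustration only
interface AuthorEntity {
  id: number;
  name: string;
}

interface CategoryEntity {
  id: number;
  name: string;
}

interface QuoteWithAuthorAndCategoryEntity {
  id: number;
  text: string;
  author: AuthorEntity;
  category: CategoryEntity;
}

// A payload that satisfies the entity shape (placeholder values)
const quote: QuoteWithAuthorAndCategoryEntity = {
  id: 1,
  text: "An example quote.",
  author: { id: 1, name: "Example Author" },
  category: { id: 1, name: "example-category" },
};

console.log(quote.author.name); // Example Author
```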
Now we can update the QuotesController to use these new entities.
```typescript
/* ... */
import {
  QuoteEntity,
  QuoteWithAuthorAndCategoryEntity,
} from "./entity/quote.entity";

@Controller("quotes")
export class QuotesController {
  constructor(private readonly quotesService: QuotesService) {}

  @Post()
  create(@Body() createQuoteDto: CreateQuoteDto): Promise<QuoteEntity> {
    return this.quotesService.create(createQuoteDto);
  }

  @Get()
  findAll(): Promise<QuoteEntity[]> {
    return this.quotesService.findAll();
  }

  @Get("random")
  findRandom(): Promise<QuoteEntity> {
    return this.quotesService.findRandom();
  }

  @Get(":id")
  findOne(@Param("id") id: string): Promise<QuoteWithAuthorAndCategoryEntity> {
    return this.quotesService.findOne(+id);
  }

  @Patch(":id")
  update(
    @Param("id") id: string,
    @Body() updateQuoteDto: UpdateQuoteDto
  ): Promise<QuoteEntity> {
    return this.quotesService.update(+id, updateQuoteDto);
  }

  @Delete(":id")
  async remove(@Param("id") id: string): Promise<void> {
    await this.quotesService.remove(+id);
  }
}
```
Likewise, we'll need to update the AuthorsController and CategoriesController to use the new entities. See the source repository for the full code.
Once that is done, if the return type of a method in the QuotesService changes, we'll get a TypeScript error in the QuotesController. This helps ensure that the API response always conforms to the expected structure and data types, improving the reliability and robustness of the application.
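As a standalone sketch of the kind of mistake this catches (hypothetical names, not from the project): suppose a refactor in the service renames a field while the controller still promises the original entity shape.

```typescript
interface QuoteEntity {
  id: number;
  text: string;
}

// Simulated service return value after a refactor renamed `text` to `body`
function findAllFromService(): { id: number; body: string }[] {
  return [{ id: 1, body: "hello" }];
}

// Uncommenting the line below triggers a compile-time error, because
// `text` is missing from the service's return type:
// const quotes: QuoteEntity[] = findAllFromService();

// The fix is to update either the entity or the service so they agree again.
console.log(findAllFromService().length); // 1
```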
Wrapping up
To enable CRUD operations for authors and categories, the same approach we applied to quotes needs to be replicated. For more details, refer to the source repository.
There is still much work ahead to turn this project into a polished product. In future posts, I will discuss topics such as documentation, authentication, pagination, testing, and deployment.
I am looking forward to sharing the next steps of this journey with you. If you have any questions or feedback, don't hesitate to reach out to me in the comments or on Twitter.