When I joined Wouessi Digital as a contract Software Engineer, one of my first tasks was setting up a CI/CD pipeline for a Node.js backend that processed federal IT procurement contracts. There was no pipeline. Deployments were manual, tests were run locally, and Docker images were built by hand. This is how I fixed that.
The goal was simple: any push to the main branch should automatically install dependencies, lint, run unit tests, build a Docker image, and deploy to the Kubernetes cluster. If any stage fails, the pipeline stops and the deployment never happens.
pipeline {
    agent any
    stages {
        stage('Install') {
            steps { sh 'npm ci' }
        }
        stage('Lint') {
            steps { sh 'npm run lint' }
        }
        stage('Test') {
            steps { sh 'npm test' }
        }
        stage('Build Image') {
            steps {
                sh 'docker build -t procurement-api:$BUILD_NUMBER .'
            }
        }
        stage('Deploy') {
            steps {
                // Note: on a multi-node cluster the image must first be pushed
                // to a registry the nodes can pull from; this works as-is only
                // when the build agent and cluster share a local image store.
                sh 'kubectl set image deployment/procurement-api api=procurement-api:$BUILD_NUMBER'
            }
        }
    }
}

The biggest challenge was keeping the Docker image lean. The first build came in at 1.2GB. After switching to a multi-stage build and using node:20-alpine as the final base, it dropped to 180MB. That alone cut deploy time by more than half.
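A multi-stage Dockerfile along these lines achieves that split — this is a sketch, not the original file: the build script name and the dist/server.js entry point are assumptions.

```dockerfile
# Build stage: full node:20 image with the toolchain needed to compile
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build          # assumed build script name

# Final stage: slim Alpine base, production dependencies only
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
CMD ["node", "dist/server.js"]   # assumed entry point
```

The key is that the toolchain, dev dependencies, and source tree stay in the build stage; only the compiled output and production node_modules land in the final image.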
Automation is not a nice-to-have. A 10-minute manual deploy done 20 times a week is over 3 hours of lost engineering time every week — more than 13 hours a month.
The other lesson: fail fast. Putting the lint stage before tests and the build stage last means you catch the cheap errors (style violations) before running the expensive ones (full test suite + Docker build). Small ordering decisions compound.