Deep Observability in Node.js Using OpenTelemetry and Pino


As applications become increasingly distributed, debugging performance issues or locating failures in a Node.js backend gets harder. Logging by itself provides limited context for understanding how a request travels through the many layers of your system. Likewise, tracing without structured logging leaves you unable to correlate trace data with application-specific events.

That is where OpenTelemetry (OTel) for tracing and Pino for structured logging come in. By combining the two, you get deep observability — blending logs and traces together for an unobstructed view of your system’s behavior, thereby speeding up debugging, monitoring, and root cause analysis.

In this article, you will learn how to:

  • Configure OpenTelemetry for tracing in Node.js
  • Implement Pino for efficient structured logging
  • Inject trace and span context into logs
  • Link traces and logs in observability tools like Jaeger, New Relic, or Datadog


What Are OpenTelemetry and the Pino Logger?

OpenTelemetry

OpenTelemetry is an open standard for collecting traces, metrics, and logs. In Node.js applications, it stitches together spans for HTTP/HTTPS requests, database queries, and external API calls into end-to-end traces.
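
Auto-instrumentation covers the common libraries, but you can also create spans by hand through the OTel API. A minimal sketch, assuming the SDK from otel.js below is already running (the getUser function and its attribute are hypothetical, for illustration only):

const { trace } = require('@opentelemetry/api');

const tracer = trace.getTracer('pino-otel-demo');

// Wrap a unit of work in a span (hypothetical example)
function getUser(id) {
  return tracer.startActiveSpan('getUser', (span) => {
    span.setAttribute('user.id', id); // custom attribute on this span
    const user = { id, name: 'demo' }; // stand-in for a real DB lookup
    span.end(); // always end the span, or it will never be exported
    return user;
  });
}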

Pino

Pino is a low-overhead, high-performance Node.js logging library. Unlike console logging, Pino emits structured JSON logs asynchronously.
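
A quick sketch of what that looks like; every call emits one JSON line containing the level, a timestamp, and any fields you pass:

const pino = require('pino');
const logger = pino({ level: 'info' });

logger.info({ orderId: 42 }, 'order processed');
// => {"level":30,"time":...,"pid":...,"hostname":"...","orderId":42,"msg":"order processed"}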

Setup:

mkdir otel-pino-express-api && cd otel-pino-express-api
npm init -y
npm install pino pino-http pino-opentelemetry-transport express \
  @opentelemetry/sdk-node @opentelemetry/api \
  @opentelemetry/auto-instrumentations-node \
  @opentelemetry/semantic-conventions \
  @opentelemetry/resources @opentelemetry/exporter-trace-otlp-http

Code Structure:
otel-pino-express-api
├── app.js
├── server.js
├── otel.js
├── logger.js
├── package.json

package.json

{
  "name": "otel-pino-express-api",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "node --require './otel.js' server.js"
  },
  "dependencies": {
    "@opentelemetry/api": "^1.9.0",
    "@opentelemetry/sdk-node": "^0.50.0",
    "@opentelemetry/auto-instrumentations-node": "^0.50.0",
    "@opentelemetry/exporter-trace-otlp-http": "^0.50.0",
    "@opentelemetry/resources": "^1.24.0",
    "@opentelemetry/semantic-conventions": "^1.24.0",
    "express": "^4.18.2",
    "pino": "^8.15.0",
    "pino-http": "^8.2.0",
    "pino-opentelemetry-transport": "^1.0.0"
  }
}

logger.js

// logger.js
const pino = require('pino');

// Ship logs to an OpenTelemetry collector via the OTLP transport
const transport = pino.transport({
  target: 'pino-opentelemetry-transport',
  options: {
    serviceName: 'pino-otel-demo',
    logLevel: 'info',
  },
});

const logger = pino(transport);
module.exports = logger;

otel.js
// otel.js
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');
const { Resource } = require('@opentelemetry/resources');
const { SemanticResourceAttributes } = require('@opentelemetry/semantic-conventions');

const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({
    url: 'http://localhost:4318/v1/traces', // or set OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
    headers: {},
  }),
  // Auto-instrument HTTP, Express, and other supported libraries
  instrumentations: [getNodeAutoInstrumentations()],
  resource: new Resource({
    [SemanticResourceAttributes.SERVICE_NAME]: 'pino-otel-demo',
  }),
});

sdk.start();
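
One optional addition not shown above: flush buffered spans when the process stops, so the last traces are not lost. A sketch that could be appended to otel.js:

// Flush pending telemetry before the process exits
process.on('SIGTERM', () => {
  sdk.shutdown()
    .then(() => console.log('OpenTelemetry SDK shut down'))
    .catch((err) => console.error('Error shutting down OpenTelemetry SDK', err))
    .finally(() => process.exit(0));
});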

app.js


// app.js
require('./otel'); // safe even with --require: Node's module cache prevents double init

const express = require('express');
const pinoHttp = require('pino-http');
const logger = require('./logger');

const app = express();

// Attach a request-scoped logger to every incoming request
app.use(pinoHttp({ logger }));

app.get('/', (req, res) => {
  req.log.info('Handled GET /');
  res.send('Hello from Pino + OpenTelemetry!');
});

module.exports = app;

server.js

// server.js
const app = require('./app');
const logger = require('./logger');

const PORT = process.env.PORT || 3000;

app.listen(PORT, () => {
  logger.info(`Server listening on port ${PORT}`);
});

Run the API:
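
Start the server with the npm script defined above (it preloads otel.js before server.js), then hit the endpoint; this assumes the default port 3000:

npm start
curl http://localhost:3000/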

Key Points

  • pino-opentelemetry-transport: This library acts as a transport for Pino, allowing you to send logs to an OpenTelemetry collector
  • @opentelemetry/sdk-node: This is used to initialize the OpenTelemetry SDK, which manages the log exporter.
  • OTEL_EXPORTER_OTLP_LOGS_ENDPOINT: This environment variable (or OTEL_EXPORTER_OTLP_ENDPOINT) specifies the URL of your OpenTelemetry collector.
  • OTel Collector Configuration: Ensure your collector is configured to accept logs via the OTLP protocol and has appropriate processors (e.g., batch, filter) and exporters (e.g., file, logging).
  • Correlation: OpenTelemetry logs are designed to be correlated with traces and metrics, allowing you to see the full picture of your application’s behavior.
  • To export OTel traces to a dashboard, you'll typically forward them to a backend like Grafana, New Relic, or AWS X-Ray. For simplicity, let's use Jaeger with the OTel SDK.

Run Jaeger Locally

docker run -d --name jaeger \
  -e COLLECTOR_OTLP_ENABLED=true \
  -p 16686:16686 \
  -p 4318:4318 \
  jaegertracing/all-in-one:latest

  • Jaeger UI will be at: http://localhost:16686
  • OTLP HTTP endpoint: http://localhost:4318/v1/traces

Logs, traces, and metrics are the pillars of observability; together they give us the full picture of a distributed system. Positioning them strategically, such as placing counters and logs at entry and exit points and using traces at decision points, allows us to debug effectively. Correlating these signals lets us navigate metrics easily, investigate request flows, and solve complex problems in distributed systems.

Incident management and observability are also closely related domains; combining them gives you a better, more effective system for incident response. With the powerful tracing that OTel provides and essential application logging via Pino, it becomes much easier to track down performance bottlenecks in deeply nested API calls and their DB calls.

The Pitfalls to Watch Out For

1. Broken Context Propagation

Pitfall: Traces are broken when trace context is not properly propagated from service to service.

How to avoid it: Ensure headers like traceparent are propagated from one service call to another and use OTEL’s context propagation APIs or auto-instrumentation where available.
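
For manual outbound calls that auto-instrumentation does not cover, you can inject the active context into the outgoing headers yourself. A sketch using the OTel propagation API (the downstream URL is hypothetical):

const { context, propagation } = require('@opentelemetry/api');

// Inject traceparent (and any baggage) into an outgoing request's headers
const headers = {};
propagation.inject(context.active(), headers);
// fetch('http://downstream-service/work', { headers });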

2. Over-Instrumentation and Telemetry Noise

Pitfall: Generating too many spans or logs will overwhelm your system and make it harder to derive meaning.

How to avoid it: Instrument only the crucial parts selectively and use sampling to keep the data volume in check, as in the sketch below.
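
For example, you could keep roughly 10% of new traces with a parent-based ratio sampler. A sketch of how the NodeSDK from otel.js might be configured (the 0.1 ratio is an arbitrary choice):

const { NodeSDK } = require('@opentelemetry/sdk-node');
const { ParentBasedSampler, TraceIdRatioBasedSampler } = require('@opentelemetry/sdk-trace-node');

const sdk = new NodeSDK({
  // Sample ~10% of root traces; respect the parent's decision otherwise
  sampler: new ParentBasedSampler({ root: new TraceIdRatioBasedSampler(0.1) }),
  // ...traceExporter, resource, and instrumentations as before
});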

3. Lack of Correlation Among Traces, Metrics, and Logs

Pitfall: Uncorrelated telemetry signals get in the way of determining root causes.

How to avoid it: Inject trace IDs into logs and supply consistent resource attributes for all telemetry signals.
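
pino-opentelemetry-transport is designed to attach trace context for you; if you log through a plain Pino instance instead, a mixin can inject the active trace IDs into every log line. A sketch:

const { trace, context } = require('@opentelemetry/api');
const pino = require('pino');

const logger = pino({
  // mixin() runs on every log call; its result is merged into the log line
  mixin() {
    const span = trace.getSpan(context.active());
    if (!span) return {};
    const { traceId, spanId } = span.spanContext();
    return { trace_id: traceId, span_id: spanId };
  },
});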

4. Resource Conflicts / Multiple SDKs

Pitfall: Tracing fails silently or causes side effects.

How to avoid it:

  • Only initialize one OpenTelemetry SDK per service.
  • Avoid mixing legacy and OTEL SDKs unless designed to interop.
  • Reuse singleton tracer instances.

5. Missing or Inconsistent Service Names

Pitfall: Can’t search, group, or trace requests across services.

How to avoid it:

  • Set a consistent service name in each app via Resource configuration (or the standard environment variable, as shown after this list).
  • Avoid default values: if you forget to specify a service name, your traces show up under a generic name like unknown_service.
  • Always configure a meaningful service.name so services can be grouped and identified in your observability dashboard.
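
As an alternative to hard-coding the name in the Resource, OpenTelemetry also reads the standard OTEL_SERVICE_NAME environment variable, for example:

OTEL_SERVICE_NAME=pino-otel-demo npm start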

Conclusion

By merging Pino with OpenTelemetry, we get high-performance structured logging and distributed tracing together. This enables deep observability, letting you examine closely how the system behaves and fix what needs fixing before the final stages of development.

