Logging is crucial. It gives you observability into how your code behaves, how well it performs, and it gives you a heads-up if anyone is trying to exploit flaws. But it's also often underprioritized because logging is seen as "boring". Many developers go with a drop-in replacement or the first cloud provider they can find, and frameworks are often opinionated about which logging tools they provide and support. I fully understand why: in the past it was incredibly complicated to set up a proper logging environment without spending a lot of time and effort on an ELK stack or a similar indexing database, since many logging tools rely on either Elasticsearch or Apache Solr. So you either ended up using a cloud provider, or you simply logged to syslog or a log file. That's quickly becoming a thing of the past, though. Grafana Loki is a newer contender, released a few years ago, that takes a completely different approach to storing logs, one that is more lightweight in both storage and required dependencies.
I won't go too much into detail about how Loki works, but I will quickly cover the normal setup. Loki is fully integrated into Grafana. If you don't know what Grafana is, it's a really good open-source observability stack, often used for visualizing metrics, IoT devices, logs and a lot more. By using Grafana we can interact with our Loki instance. Loki is a log aggregator, which means it needs to get logs from somewhere, and the nice thing about Loki is that it accepts standard HTTP requests with JSON bodies, which makes it easy to feed new logs into it. The normal way to feed Loki logs, however, is Promtail: Promtail tails log files on a system and pushes those logs to Loki. But what if your system is a bit more complicated? Maybe you partially use cloud-provided microservices and some VPS services, but want all logging centralized where you have all the control? That's where pushing logs directly to Loki yourself might make sense.
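To illustrate how simple that ingestion API is, here's a minimal sketch that pushes a single log line straight to Loki's push endpoint (/loki/api/v1/push) with a plain HTTP request. It assumes a Loki instance on localhost:3100 (like the one we set up below) and Node.js 18+ for the global fetch; the "application" label is just an example:
// Push one log line directly to Loki, no logging library involved.
fetch('http://localhost:3100/loki/api/v1/push', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    streams: [{
      stream: { application: 'manual-push-example' }, // Loki labels
      // Each value is [<unix epoch in nanoseconds, as a string>, <log line>]
      values: [[`${Date.now()}000000`, 'Hello directly over HTTP!']]
    }]
  })
}).then(res => console.log(res.status)); // 204 means Loki accepted the entry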
Installation prerequisites
The installed tools required for this project:
- npm/Node.js (if you have Nix, you can run nix-shell -p nodejs whenever you need npm or Node.js)
- Grafana with Grafana Loki (install instructions below)
- Either Docker or Nix, if you don't have Grafana Loki set up already.
We will be using Node.js for the application code, and Grafana to interact with Grafana Loki. I recommend using Nix on Linux, but I've also included pointers for setting this up with Docker.
Setting up Grafana + Grafana Loki
The loki.yaml
You need the loki.yaml for both setups. Here's an example configuration file:
Warning: This is just for testing locally! Look at the Loki documentation to see how to set it up in production. Grafana + Grafana Loki is also available as a cloud service if that is desired.
# Enables authentication through the X-Scope-OrgID header, which must be present
# if true. If false, the OrgID will always be set to "fake".
auth_enabled: false

server:
  http_listen_address: "0.0.0.0"
  http_listen_port: 3100

ingester:
  lifecycler:
    address: "127.0.0.1"
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 5m
  chunk_retain_period: 30s

schema_config:
  configs:
    - from: 2020-05-15
      store: boltdb
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 168h

storage_config:
  boltdb:
    directory: /tmp/loki/index
  filesystem:
    directory: /tmp/loki/chunks

limits_config:
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 168h
Option A) As a Docker container
Since there are a few Docker images, you're free to read more about how to set them up at the following links, or just use the Docker commands below.
- Grafana Loki as a docker container
Docker command:
docker run -d --name=loki --mount type=bind,source="path to loki-config.yaml",target=/etc/loki/local-config.yaml -p 3100:3100 grafana/loki
- Grafana as a docker container
Docker command:
docker run -d --name=grafana -p 3000:3000 grafana/grafana
After starting up Grafana and Grafana Loki, you need to go into Grafana. To do that, go to the Grafana admin panel (the Grafana IP address, e.g. from Docker Desktop, followed by :3000); the default username:password combination is admin:admin. Then go to the "Data sources" tab and add Grafana Loki. Note that from inside the Grafana container, localhost points to the container itself, so use the Loki container's IP address or hostname in the data source URL. After getting a successful connection to Loki, you can head over to the "Explore" tab and start communicating with Grafana Loki.
Option B) As Nix packages using NixOps
Setting up Grafana and Grafana Loki as Nix packages will become its own post. I already got it working; if anyone requests it in the comment section, I will prioritize writing it. In the meantime, use the Docker instructions.
Pino logging
After that introduction I bet you're eager to see some code! There are two Loki packages that I've found: one for Winston and one for Pino. I will be using pino-loki in my examples because Pino is the default logger for Fastify, and both Pino and Fastify have very low overhead.
Let us start with setting up Pino!
Create a new folder and run npm init
Done? Great!
We need a few packages, so instead of installing them one by one through the command line (and risking version mismatches with this tutorial), let's declare our packages!
Edit the package.json
We will define a few packages and set our package type to module so we can use ES6 imports (which replace require()).
{
  ... // Rest of the generated package.json
  "type": "module",
  "dependencies": {
    "fastify": "^4.7.0",
    "pino": "^8.6.1",
    "pino-loki": "^2.0.3"
  },
  "devDependencies": {
    "pino-pretty": "^9.1.1"
  }
}
Run npm install
Create a new file called server.js
import pino from 'pino';
let logger = pino({level:'debug'});
logger.info("Hello world!");
Now if we run node server.js in the same directory, we should get some log output!
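It should look roughly like this (your time, pid and hostname will of course differ):
{"level":30,"time":1665730000000,"pid":26892,"hostname":"my-machine","msg":"Hello world!"}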
As you can see, the log output is JSON and pretty ugly! Let's add a transport!
import pino from 'pino';
import pretty from 'pino-pretty';
const streams = [
{ level: 'debug', stream: pretty() }
];
let logger = pino({level:'info'}, pino.multistream(streams));
logger.info("Hello world!");
If we now run node server.js, we should get some more readable and colorful output!
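With pino-pretty's defaults, the line should look something like this (the exact layout may vary between versions):
[1665730000000] INFO (26892 on my-machine): Hello world!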
Adding pino-loki
We're now getting pretty close to adding pino-loki!
Here's the full code:
"use strict";
import pino from 'pino';
// Tested with pino-loki v2.03
const pinoLokiTransport = pino.transport({
target: "pino-loki",
options: {
host: 'http://localhost:3100', // Change if Loki hostname is different
batching:false,
labels: {application:"test-application-without-web-framework"}
},
});
// Tested with pino v8.6.1
const pinoPretty = pino.transport({
target: "pino-pretty",
options: {
translateTime: 'HH:MM:ss Z',
ignore: 'pid,hostname'
}
});
// Combine the streams
// NOTE: By setting the "level", you can choose what level each individual transport will recieve a log
const streams = [
{level: 'debug', stream: pinoLokiTransport},
{level: 'debug', stream: pinoPretty}
];
// Set up the Loki logger instance
// NOTE: By setting "level", you can set the globally "lowest" level that a transport will use
let logger = pino({level:'trace' }, pino.multistream(streams));
logger.info("Hello world!");
// Log message with custom tags to Loki
logger.info({customTag:"BEEP BOOP"}, "Hello world with tags!")
// Workaround process exiting before logs are sent.
setTimeout(() => {}, 5000);
Remember to change options.host to your Loki IP address; which address depends on whether you've set it up with Docker or Nix.
Try to run the code again with node server.js; it should now output two log lines. If you don't get any errors within the first 3 seconds, congrats! The logs were most likely sent to your Loki instance!
Head over to your Grafana instance and add the Loki instance to your data sources tab if you haven't already. Navigate to Explore and set the explorer to your Loki instance. If you now start by typing {, you should get suggestions for what to search for. The application label and the customTag field should be queryable. If they are, congrats! Your application is sending logs to your Loki instance!
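For example, assuming the application label from the code above, a LogQL query like this should return our log lines, with the |= line filter narrowing the result down to the one containing "tags":

{application="test-application-without-web-framework"} |= "tags"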
Adding pino-loki with Fastify
I did promise Fastify, didn't I?
This is pretty straightforward: we simply replace the default Pino logger with our "custom" Pino logger.
'use strict'
import Fastify from 'fastify';
import pino from 'pino';

// Tested with pino-loki v2.0.3
const pinoLokiTransport = pino.transport({
  target: "pino-loki",
  options: {
    host: 'http://localhost:3100', // Change if Loki hostname is different
    labels: { application: "test-application" }
  },
});

// Tested with pino v8.6.1
const pinoPretty = pino.transport({
  target: "pino-pretty",
  options: {
    translateTime: 'HH:MM:ss Z',
    ignore: 'pid,hostname'
  }
});

// Combine the streams
// NOTE: By setting "level", you can choose the lowest level each individual transport will receive
const streams = [
  { level: 'debug', stream: pinoLokiTransport },
  { level: 'debug', stream: pinoPretty }
];

const fastify = Fastify({
  logger: {
    stream: pino.multistream(streams),
    level: 'trace' // Set global log level across all transports
  }
});

fastify.get('/', function (request, reply) {
  // Example of how the data sent to Loki will look:
  // {"level":30,"time":1969065718477,"pid":26892,"reqId":"req-1","someTag":"Some extra info about the current request","msg":"This is a test"}
  request.log.info({ someTag: 'Some extra info about the current request' }, "This is a test");

  // Let's try to send some error data to Loki
  try {
    throw new Error("Some test error object");
  } catch (error) {
    // Example of how the data sent to Loki will look:
    // {"level":50,"time":1969065718477,"pid":26892,"reqId":"req-1","err":{"type":"Error","message":"Some test error object","stack":"<stack trace...>"},"msg":"This is a test error test!"}
    request.log.error(error, "This is a test error test!");
  }

  // Let's return hello world to the user
  reply.send({ hello: 'world' });
});

// If you want this example to be accessible outside of localhost, use your IP/hostname or `::` as the host
fastify.listen({ host: "localhost", port: 3030 }, (err, address) => {
  if (err) {
    console.log(err)
    process.exit(1)
  }
})
Now if you run the code, you should be able to access http://localhost:3030 and every request should be logged.
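To generate a few log lines, hit the endpoint with curl (or just open the URL in a browser):

curl http://localhost:3030/

Each request should show up both as pretty-printed output in your terminal and as new entries in your Loki instance.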
Thanks for reading!