What you'll learn
  • what the code benchmark is
  • how to enable the code benchmark
  • how to output the code benchmark measurements
Before continuing

In order to follow this article, you must use Webiny version 5.35.0 or greater.

Overview

With Webiny 5.35.0, we introduced a benchmarking tool for code execution. A piece of code is wrapped into the measurement method and, if benchmarking is enabled, the tool measures the execution time and the memory difference between the measurement's start and end points.

This is useful for figuring out where the bottlenecks are, both in Webiny's code and in your own.

Measuring Code Execution

As noted in the Overview section of this article, the benchmark tool measures the execution time and memory difference, from start to end of execution, of a piece of code wrapped in the measurement method.

The measurement method looks like this:
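(A minimal sketch, assuming the measure method on context.benchmark takes the measurement name and an async callback, and passes the callback's return value through; the plugin wrapper and the measured code are made up for this example.)

```ts
import { ContextPlugin } from "@webiny/api";

export const myPlugin = new ContextPlugin(async context => {
    // Wrap the code you want to measure into the measurement method.
    // The measurement is only taken when benchmarking is enabled;
    // the wrapped code runs either way.
    const result = await context.benchmark.measure("my code", async () => {
        // The code being measured; its return value is passed
        // through to the caller unchanged.
        return 2 + 2;
    });
    console.log(result); // 4
});
```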

The measurement is stored in an object implementing the BenchmarkMeasurement interface; a rough sketch of it follows the property list below. The properties are:

  • name - identifier of the measurement (my code in our example)
  • category - category of the measurement; the default is webiny
  • start - the Date object created when the measurement started
  • end - the Date object created when the measurement ended
  • elapsed - the difference, in milliseconds, between the end and start Date values
  • memory - the difference, in bytes, between the memory readings at the end and the start (collected via process.memoryUsage().heapUsed)
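
Based on the property list above, the interface looks roughly like this:

```ts
export interface BenchmarkMeasurement {
    name: string;     // identifier of the measurement, e.g. "my code"
    category: string; // category of the measurement, "webiny" by default
    start: Date;      // created when the measurement started
    end: Date;        // created when the measurement ended
    elapsed: number;  // end minus start, in milliseconds
    memory: number;   // heapUsed difference, in bytes
}
```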

Categorizing the Measurements

You can set your own measurement category; the default category is webiny:
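(A sketch, assuming measure also accepts an options object carrying the name and category instead of a plain name; the exact signature may differ, so check the BenchmarkMeasurement interface linked above.)

```ts
// Inside a ContextPlugin callback, as in the previous example.
const result = await context.benchmark.measure(
    {
        name: "my code",
        // Anything other than the default "webiny" category.
        category: "myCustomCategory"
    },
    async () => {
        return 2 + 2;
    }
);
```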

Setting a category makes it easy to filter the measurements later. You can put anything you like in both the category and name properties.

Enabling the Measurement

Measurements are not enabled by default because, we are absolutely positive, nobody wants them to run all the time. If you want to enable the measurements, you will need to create a plugin which enables them:
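(A sketch of such a plugin, assuming an enable method on context.benchmark and that context.request exposes the incoming Fastify request; the x-benchmark header and benchmark query parameter names are made up for this example.)

```ts
import { ContextPlugin } from "@webiny/api";

export const enableBenchmarkPlugin = new ContextPlugin(async context => {
    const { headers, query } = context.request;
    // Enable via a request header...
    if (headers["x-benchmark"] === "true") {
        context.benchmark.enable();
        return;
    }
    // ...or via a query parameter.
    if ((query as Record<string, string> | undefined)?.benchmark === "true") {
        context.benchmark.enable();
    }
});
```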

In our example we show a few possibilities for enabling the benchmarking, but you can enable it however you like.

Although our example uses simple code to enable the benchmark via headers or query parameters, we recommend making the check stricter. You can use context.security.getIdentity() to check whether a user is logged in, and only enable the benchmark if the correct headers are also sent.

You can also check for a randomly generated string in the headers or query string. Just make it a bit harder to guess than our example, as you don't want to allow everyone to enable the benchmarking. A sketch of such a stricter check:
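
(This combines context.security.getIdentity() with a secret header value; the x-benchmark-secret header and the BENCHMARK_SECRET environment variable are made up for this example.)

```ts
import { ContextPlugin } from "@webiny/api";

export const secureEnableBenchmarkPlugin = new ContextPlugin(async context => {
    // Only consider enabling the benchmark for authenticated users.
    const identity = context.security.getIdentity();
    if (!identity) {
        return;
    }
    // ...and only when the expected secret value is sent.
    const secret = context.request.headers["x-benchmark-secret"];
    if (secret && secret === process.env.BENCHMARK_SECRET) {
        context.benchmark.enable();
    }
});
```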

The Measurement Results

By default, we output the results to the console (they end up stored in CloudWatch) at the end of the request.

Since we use Fastify as our request and response handler, we output the measurements in the onResponse and onTimeout hooks. If you want to know more about Fastify hooks and lifecycles, there is quite extensive documentation about the Fastify lifecycle and the onResponse and onTimeout hooks.

Customizing the Measurement Results Output

You can customize the output of the measurement logs, for example, to send them to a service of your choosing.

You can add multiple plugins that attach an onOutput method, and they will all be executed, from the last added to the first, unless you break the cycle with stop().

Also, remember that measurements have categories: if you have set a category on your measurement point, you can filter out unnecessary measurements in your custom output. A sketch of such a plugin:
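
(We assume the onOutput handler is registered on context.benchmark and receives the benchmark, with its collected measurements, along with a stop() function; sendToMyService stands in for whatever service you ship the logs to.)

```ts
import { ContextPlugin } from "@webiny/api";

// Hypothetical transport for the measurement logs.
const sendToMyService = async (measurements: unknown[]): Promise<void> => {
    console.log(`Shipping ${measurements.length} measurements...`);
};

export const customOutputPlugin = new ContextPlugin(async context => {
    context.benchmark.onOutput(async ({ benchmark, stop }) => {
        // Keep only the measurements from our own category.
        const measurements = benchmark.measurements.filter(m => {
            return m.category === "myCustomCategory";
        });
        await sendToMyService(measurements);
        // Prevent the remaining onOutput handlers (including the
        // default console output) from running.
        stop();
    });
});
```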

Conclusion

The benchmark was initially intended as an internal tool to help us figure out how much execution time is spent in a given piece of code. We decided to attach it to the main context object, and to document it, because it can help our users figure out if, and why, the execution of their code is slow.