Metrics should be a key part of your DevOps process, as they allow you to drive continuous improvement of your delivery process. Almost all commands in sfpowerscripts are instrumented with StatsD as well as log-based metrics.
Ensure you have a StatsD daemon running on a server. Setting up a StatsD daemon is quite simple, and there are plenty of guides available (https://www.scalyr.com/blog/statsd-measure-anything-in-your-system/). If you are after a hosted StatsD, Hosted Graphite offers a hosted StatsD solution as part of its Hosted Graphite offering (https://www.hostedgraphite.com/docs/integrationguide/ig_hosted_statsd.html).
Ensure your build agents can reach the StatsD server. This can be a bit problematic when you are using cloud-based agents, as it implies the StatsD service has to be on the internet and reachable from the agent, so plan this out. If you are using self-hosted agents, the StatsD server must be reachable from them as well.
To visualize these metrics, you need a StatsD backend (https://thenewstack.io/collecting-metrics-using-statsd-a-standard-for-real-time-monitoring/) such as DataDog (hosted), Grafana, or many others to aggregate and report the data.
Enable StatsD metrics in your scripts by adding these environment variables:
```shell
# Set StatsD environment variables for logging metrics about this build
export SFPOWERSCRIPTS_STATSD=true
export SFPOWERSCRIPTS_STATSD_HOST=172.23.95.52
export SFPOWERSCRIPTS_STATSD_PORT=8125     # Optional; defaults to 8125
export SFPOWERSCRIPTS_STATSD_PROTOCOL=UDP  # Optional; defaults to UDP. Supports UDP/TCP
```
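To verify that an agent can actually reach the daemon before wiring up a pipeline, you can fire a single test counter over UDP. The sketch below is illustrative only: the helper names and the `sfpowerscripts.smoketest` metric name are made up for this example, and the host/port simply mirror the variables above.

```python
import socket

def format_statsd_counter(name: str, value: int = 1) -> str:
    """Format a StatsD counter in the wire format <name>:<value>|c."""
    return f"{name}:{value}|c"

def send_statsd_counter(host: str, port: int, name: str, value: int = 1) -> None:
    """Send a single counter datagram to a StatsD daemon over UDP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(format_statsd_counter(name, value).encode(), (host, port))
    finally:
        sock.close()

# Hypothetical smoke test against the host/port configured above:
# send_statsd_counter("172.23.95.52", 8125, "sfpowerscripts.smoketest")
```

Because UDP is fire-and-forget, a successful send does not prove the daemon received the packet; check the backend dashboard (or the daemon's logs) for the test metric.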
sfpowerscripts is also able to generate metrics in a log file. These metrics are written to .sfpowerscripts/metrics.log in your working directory. After every run of a command, this log file can be sent to a log aggregator for further analysis.
The JSON payload consists of the following: the name of the metric (metric), the type of the metric, such as count, gauge, or timer (type), the timestamp (timestamp), followed by tags pertaining to the particular metric (tags).
A sample metric is shown below
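The entry below is illustrative only; the metric name, timestamp, and tag values are hypothetical, assembled purely from the field layout described above:

```json
{
  "metric": "sfpowerscripts.package.installation.elapsed_time",
  "type": "timers",
  "timestamp": 1639066648,
  "tags": {
    "package": "core-crm",
    "type": "unlocked"
  }
}
```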
One could write a parser for this file and send each individual entry to a logging system that supports JSON-based logging.
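A minimal sketch of that approach, assuming each line of .sfpowerscripts/metrics.log is a self-contained JSON object with the fields described above. The function names are made up for this example, and the forwarding step is a stub, since the ingestion call depends on your log aggregator.

```python
import json
import os

LOG_PATH = ".sfpowerscripts/metrics.log"

def parse_metrics_log(path: str) -> list:
    """Read a metrics log where each non-empty line is one JSON metric
    entry and return the parsed entries as dictionaries."""
    entries = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line:
                entries.append(json.loads(line))
    return entries

def forward(entry: dict) -> None:
    """Stub: replace with a POST to your logging system's JSON endpoint."""
    print(entry["metric"], entry["type"], entry["timestamp"])

if os.path.exists(LOG_PATH):
    for entry in parse_metrics_log(LOG_PATH):
        forward(entry)
```

Running this after each command (for example, as a trailing step in the pipeline job) keeps the aggregator in sync with the latest run.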
sfpowerscripts is also able to integrate with DataDog natively using HTTP/HTTPS. This feature allows one to post metrics directly to a DataDog instance without using an intermediate StatsD server to aggregate metrics before they reach an analyzer.
To set up native DataDog integration, you need to set the following environment variables:
```shell
# Set DataDog environment variables for logging metrics natively to DataDog
export SFPOWERSCRIPTS_DATADOG=true
export SFPOWERSCRIPTS_DATADOG_HOST=app.datadoghq.com  # Or the equivalent DataDog region
export SFPOWERSCRIPTS_DATADOG_API_KEY=<your api key>  # Refer to the DataDog documentation
```
sfpowerscripts is also able to integrate with NewRelic natively using HTTP/HTTPS. Similar to the native DataDog integration, this feature allows one to post metrics directly to NewRelic without using an intermediate StatsD server to aggregate metrics before they reach an analyzer. To set up native NewRelic integration, set the following environment variables:
```shell
# Set NewRelic environment variables for logging metrics natively to NewRelic
export SFPOWERSCRIPTS_NEWRELIC=true
export SFPOWERSCRIPTS_NEWRELIC_API_KEY=<your api key>  # Refer to the NewRelic documentation to generate a NewRelic ingest key
```
The following metrics are captured:
- Number of times the deploy command failed
- Time spent executing the deploy command
- Number of times a deployment was scheduled to run
- Number of packages scheduled to be deployed by the deploy command
- Number of successful deploy executions
- Number of packages that were successfully deployed
- Number of times the deploy command failed to execute
- Number and details of packages that failed to deploy
- Number of times a build was scheduled to run
- Time spent executing the build command
- Number of packages scheduled to build
- Number of packages successfully built
- Number of packages that failed to build
- Number of scheduled validations
- Number of successful validations
- Number of times validate failed to execute
- Time spent executing the validate command
- Number of packages scheduled for installation in a validation
- Number of successful package installations in a validation
- Number of failed package installations in a validation
- Time spent executing the publish command
- Number of successful publish executions
- Number of times a package was installed
- Time taken to install a package
- Time taken to create a package
- Number of times a particular package was created
- Number of metadata components in a package
- Test coverage of a package
- Number of times Apex tests were triggered for a package
- Time taken for Apex test execution
- Time taken for Apex test execution (command time)
- Number of orgs that succeeded during a run of prepare
- Number of orgs that failed during a run of prepare
- Time taken to prepare a pool of scratch orgs
- Number of scratch orgs that failed on a checkpoint during prepare
- Number of scratch orgs that partially succeeded during prepare
- Number of packages scheduled for installation when preparing scratch org pools
- Number of packages successfully installed when preparing scratch org pools
- Number of packages that failed to install when preparing scratch org pools
- Number of scratch orgs available in a pool after being fetched by the validate command
- Number of scheduled releases
- Number of successful releases
- Number of failed releases
- Time taken for a release
- Number of packages scheduled for release
- Number of packages that were installed successfully in a release
- Number of packages that failed to install in a release