How to monitor and reduce open source risk by integrating Tidelift's APIs into your organization

Whether you're an engineering manager or a security professional, one of your goals is to constantly reduce risk in your organization.

In this article, we'll show how you can use Tidelift's APIs and a small amount of code to find risk from bad packages in your organization and then build a plan to address it. Additionally, we'll show how Tidelift can help you track your progress in eliminating risk from bad packages over time.

Get a software bill of materials (SBOM)

The first thing we'll do is get a software bill of materials (SBOM) for our application. You may already have an SBOM for your application; if you don't, there are many tools you can use for this. In this example, we'll use a tool called syft.

We will run the following in our source directory:

syft . > my_sbom.syft

This returns a file that looks like this:

NAME                                              VERSION                 TYPE       
@adobe/css-tools                                  4.0.1                   npm            
@ampproject/remapping                             2.2.1                   npm            
@apideck/better-ajv-errors                        0.3.2                   npm            
@babel/code-frame                                 7.16.7                  npm            
@babel/code-frame                                 7.22.10                 npm            
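If you need to consume this table output programmatically, it can be parsed with a few lines of Python. This is a minimal sketch assuming syft's default three-column table format shown above; the function name is our own:

```python
def parse_syft_table(text):
    """Parse syft's default table output into (name, version, type) tuples.

    Assumes the three-column, whitespace-aligned format shown above;
    splits each line on runs of spaces and skips the header row.
    """
    rows = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) != 3 or parts[0] == "NAME":
            continue  # skip the header and any blank or malformed lines
        name, version, pkg_type = parts
        rows.append((name, version, pkg_type))
    return rows

sample = """\
NAME                 VERSION   TYPE
@adobe/css-tools     4.0.1     npm
@babel/code-frame    7.16.7    npm
"""
print(parse_syft_table(sample))
# → [('@adobe/css-tools', '4.0.1', 'npm'), ('@babel/code-frame', '7.16.7', 'npm')]
```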

Write code to evaluate this SBOM using Tidelift's APIs

We then want to evaluate the software we're using with Tidelift's API. For more information on how to use these APIs to evaluate software, please refer to Evaluating a package with Tidelift's API.

The sample code that we're using is a short bit of Python that parses a syft file and uses Tidelift's bulk API to evaluate the software used.

We will run:

tidelift-lookup my_sbom.syft > myreport.csv

The sample code produces a CSV file, one row for each syft file passed on the command line, with the following columns:

  • version: the name of the syft file. If you're processing multiple files for different releases of your application, name them '<version>.syft', such as '1.2.3.syft'.
  • good_count: the number of packages in use that pass Tidelift's evaluation
  • bad: an array of packages that do not pass Tidelift's evaluation
  • bad_count: the number of bad packages
  • unassessed: an array of packages that Tidelift could not assess. They may be private dependencies that aren't published in an upstream package manager, for example.
  • unassessed_count: the number of unassessed packages
  • behind: an array of packages that are more than one major release behind the current version
  • behind_count: the number of packages that are more than one major release behind the current version

Sample output can look like this (full package lists omitted for space):

1.2.3,1265,"['npm/abab', 'npm/ajv-formats'...]",136,"[]",0,"[...]",643
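As a sketch of how such a row could be assembled, the following function builds one report row in the column order described above. The function name and argument shapes are our own assumptions, not the actual sample code, and the Tidelift API call that categorizes the packages is omitted:

```python
import os

def csv_row(syft_filename, good, bad, unassessed, behind):
    """Build one report row in the column order described above.

    Each argument after the filename is a list of platform/name strings
    (e.g. 'npm/abab') as categorized by the Tidelift evaluation.
    """
    # The version column is the syft filename minus its extension,
    # per the '<version>.syft' naming convention.
    version = os.path.splitext(os.path.basename(syft_filename))[0]
    return [version, len(good), str(bad), len(bad),
            str(unassessed), len(unassessed), str(behind), len(behind)]

print(csv_row("1.2.3.syft", ["npm/ok"], ["npm/abab"], [], []))
```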

Creating an initiative that uses this data

With this data, we now implement two separate initiatives for the engineering team.

Moving away from bad packages

There are 136 packages that have been evaluated as unsafe to use. We now have a punchlist that can be sent directly to the engineering team to prioritize and research alternatives.

You can use this CSV to create issues in an issue tracker, such as Jira.
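As a sketch, each entry in the bad-package column can become one issue. The payload shape below follows Jira's create-issue REST API, but the project key and field values are assumptions; adapt them to your tracker before POSTing:

```python
import ast
import csv
import tempfile

def issues_from_report(csv_path, project_key="DEPS"):
    """Build one Jira-style issue payload per bad package in the report.

    The project key and payload fields here are assumptions; adapt
    them to your own tracker before submitting the payloads.
    """
    payloads = []
    with open(csv_path, newline="") as f:
        for version, _good, bad, *_rest in csv.reader(f):
            for pkg in ast.literal_eval(bad):  # the bad column is a Python-style list
                payloads.append({
                    "fields": {
                        "project": {"key": project_key},
                        "summary": f"Replace bad package {pkg} (found in release {version})",
                        "issuetype": {"name": "Task"},
                    }
                })
    return payloads

# Demo with a one-row report shaped like the sample output above.
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False, newline="") as f:
    csv.writer(f).writerow(
        ["1.2.3", 1265, "['npm/abab', 'npm/ajv-formats']", 2, "[]", 0, "[]", 0])
    report_path = f.name

payloads = issues_from_report(report_path)
print(len(payloads))  # one payload per bad package
```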

Staying current with releases

There are 643 dependencies that have been evaluated as being more than one major release behind the current release. All of these carry risk: older releases might not receive fixes or security updates in the event of a new vulnerability. Again, we have an exact punchlist that can be sent to the engineering team, in Jira or other tools, to drive a prioritization and modernization initiative.

Tracking reduction in risk over time

Once such an initiative is undertaken, it is important to track its success and report progress to stakeholders.

To do this, you would run this report periodically: weekly, or whenever new code is pushed out. Since the output is a CSV, new rows can be appended to track the number of bad or behind packages over time. This data can be used to build a dashboard in whichever BI tool the organization uses, such as Power BI.
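For example, the bad_count and behind_count columns from successive runs can be pulled into a trend with a few lines of Python. The column positions follow the report layout above; the second demo row is made up for illustration:

```python
import csv
import tempfile

def risk_trend(csv_path):
    """Return (version, bad_count, behind_count) per report row, for charting.

    Column positions follow the report layout above: bad_count is the
    4th column and behind_count the 8th.
    """
    with open(csv_path, newline="") as f:
        return [(row[0], int(row[3]), int(row[7])) for row in csv.reader(f)]

# Demo: two report runs appended to the same file form the trend.
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False, newline="") as f:
    w = csv.writer(f)
    w.writerow(["1.2.3", 1265, "[...]", 136, "[]", 0, "[...]", 643])
    w.writerow(["1.3.0", 1301, "[...]", 98, "[]", 0, "[...]", 512])  # illustrative
    history_path = f.name

for version, bad_count, behind_count in risk_trend(history_path):
    print(version, bad_count, behind_count)
```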

Here's an example chart that tracks good and bad packages in a sample application over each release.

You can see that there have been two major initiatives (around release 16.0.0 and release 20.0.0) that have led to a significant reduction in bad packages and versions.


