#performance #ci-cd #lighthouse #devops

Setting up Lighthouse CI in your pipeline

Stop celebrating that 100/100 Lighthouse score on your local machine. Web performance is not a snapshot; it degrades a little with every unguarded commit. If your pipeline doesn’t enforce automated performance budgets that block pull requests that degrade your Core Web Vitals, the next commit is going to sink your B2B conversion.

How do you implement the toll gate?

The heart of Lighthouse CI is its configuration file, which dictates exactly which metrics you tolerate and which are non-negotiable.

Terminal
$ npm install -g @lhci/cli

added 1 package, and audited 2 packages in 1s

Then, you define the contract at the root of your project:

lighthouserc.json
{
  "ci": {
    "collect": {
      "staticDistDir": "./dist",
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "categories:performance": ["error", {"minScore": 0.9}],
        "categories:accessibility": ["error", {"minScore": 1}],
        "first-contentful-paint": ["warn", {"maxNumericValue": 2000}]
      }
    },
    "upload": {
      "target": "temporary-public-storage"
    }
  }
}
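`staticDistDir` works for static builds, but if you’d rather audit a running server (for example, a preview deployment), the `collect` block accepts a `url` list plus a `startServerCommand` instead. A sketch; the serve script name is an assumption, adjust it to your project:

```json
{
  "ci": {
    "collect": {
      "startServerCommand": "npm run serve",
      "url": ["http://localhost:8080/"],
      "numberOfRuns": 3
    }
  }
}
```

LHCI starts the server, waits for the URL to respond, runs the audits, and shuts it down.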

For this to work on autopilot, you integrate it into GitHub Actions using their official action:

.github/workflows/ci.yml
on: pull_request

jobs:
  lhci:
    name: Lighthouse CI
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Install dependencies
        run: npm ci
      - name: Build project
        run: npm run build
      - name: Run Lighthouse CI
        uses: treosh/lighthouse-ci-action@v12
        with:
          configPath: './lighthouserc.json'
          uploadArtifacts: true
          temporaryPublicStorage: true
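If you’d rather not depend on a third-party action, the same job can call the official CLI directly; `lhci autorun` chains collect, assert, and upload using the config at the project root. A minimal sketch of that step:

```yaml
      - name: Run Lighthouse CI
        run: |
          npm install -g @lhci/cli
          lhci autorun
```

The trade-off: you lose the action’s built-in PR annotations, but you gain one less dependency to audit.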
🛠️ Mitigating Network Variance

You will quickly realize that free GitHub runners are not stable. If you run Lighthouse only once, a network spike can fail a valid PR.

That’s why we use "numberOfRuns": 3. LHCI runs the audit three times and uses the median to apply the assertions. Also, you’ll notice we use "error" for the overall score (blocks the merge) but "warn" for the FCP (only warns). This avoids unnecessary friction while keeping severe regressions at bay.
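The median is what makes this robust: with three runs, a single noisy sample lands at an extreme and never becomes the value the assertions see. A toy sketch in shell with hypothetical scores:

```shell
# Simulate LHCI's median strategy: three performance scores,
# one of them ruined by a network spike on the shared runner.
scores="0.93 0.95 0.41"   # hypothetical runs; 0.41 is the unlucky one

# Sort the three samples numerically and take the middle line.
median=$(printf '%s\n' $scores | sort -n | sed -n '2p')

echo "median=$median"     # the 0.41 outlier is discarded, the PR passes
```

Two spikes out of three, on the other hand, would drag the median down, which is exactly the behavior you want: persistent slowness fails, flukes don’t.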

Why is this vital for the business?

Discovering that your checkout takes 4 seconds only after a customer complains on Twitter is very expensive. And accessibility failures, which Lighthouse also audits, can be a legal risk in many jurisdictions. Automating these metrics is a senior move: the pipeline enforces what code review cannot see.

Regression cost: $0 (automatic prevention)

By blocking slow code in the pipeline, you avoid both the QA cost of catching regressions later and the users who would otherwise suffer them in production.

Should you implement Lighthouse CI today?

Adopt it if you want

  • Stable and predictable metrics over time
  • Ending the 'it works fast on my machine' debate
  • Enforcing a perfect Lighthouse accessibility score from day 1

Prepare for

  • Adding roughly 2-3 minutes to your total CI/CD time
  • Occasional false positives on saturated virtual runners
  • Maintaining (and sometimes relaxing) budgets as the app grows
