Running Pull Request Tests#

When you submit a pull request (PR) to an Open edX repository, a series of tests is run; exactly which tests run varies from repo to repo. Sometimes tests won’t run at all, and sometimes they’ll fail or be canceled.

This guide will help you start diagnosing test failures, but it cannot cover every case. If you run into issues you can’t solve, post in the Open edX Forums under the “Development” category.

Tests aren’t running on my PR#

The first time you open a pull request against a repository you haven’t contributed to before, the tests will not run automatically. A maintainer of, or committer to, the repo needs to confirm that your code is safe to run before starting the tests. If no one has looked at your PR within a few business days, the Community Contribution Management team can help move things along: tag them in a comment on your PR with a message and the handle @openedx/community-pr-managers.

The openedx/cla Check Is Failing#

A failing openedx/cla check means that we don’t have a CLA (Contributor License Agreement) on file for you. To sign the CLA, follow the link in the failing check’s details. Once you’ve signed the CLA, please allow 1 business day for it to be processed. After that, you can re-run the CLA check by editing the PR title, pushing a new commit, or rebasing. If the problem persists, you can tag the @openedx/cla-problems team in a comment on your PR for further assistance.
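If you need a new commit but have nothing to change, one common Git technique (generic Git, not anything specific to the CLA bot) is to push an empty commit from your PR branch:

```shell
# Create a commit with no file changes; its only purpose is to
# give the PR a new commit so its checks run again.
git commit --allow-empty -m "chore: re-trigger CI checks"
```

Then push the branch as usual with git push, and the PR’s checks, including openedx/cla, will start over.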

The “Lint Commit Messages/commitlint” Test Is Failing#

The Open edX project follows a convention known as Conventional Commits. This is a way of labeling your commit messages so they carry extra information. Please see the linked document to learn more about what Conventional Commits are and how we use them. As a brief refresher, the allowed types are: build, chore, docs, feat, fix, perf, refactor, revert, style, test, and temp. Append ! to the type (e.g. feat!:) for breaking changes.
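As an illustration (this is a rough sketch of the shape, not the actual rules commitlint enforces, and the commit subject is made up), you can sanity-check a subject line against the Conventional Commits pattern in your shell:

```shell
# Check that a subject line starts with a valid type, an optional
# (scope), an optional "!" for breaking changes, and ": ".
echo "feat!: drop support for the legacy grades API" |
  grep -Eq '^(build|chore|docs|feat|fix|perf|refactor|revert|style|test|temp)(\([^)]+\))?!?: ' &&
  echo "looks like a Conventional Commit"
```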

Python, JavaScript, or Other Tests Are Failing#

It is difficult to give one good answer here because every repository has its own style of tests. In general, the two best rules are: run all tests locally before committing, and on the PR build, click the “Details” button to the right of the failing test for more information.

How you run tests locally changes from repository to repository, but in general most repositories have a Makefile that defines commands to run tests. For many repos, you can run make help in your terminal to see a list of all the targets, or commands, that you can run. For example, in this repo we see:

> make help
Please use `make <target>` where <target> is one of
  clean                     delete generated byte code and coverage reports
  help                      display this help message
  piptools                  install pinned version of pip-compile and pip-sync
  quality-python            Run python linters
  quality                   Run linters
  requirements              install development environment requirements
  test-python               run tests using pytest and generate coverage report
  test                      run tests
  upgrade                   update the requirements/*.txt files with the latest packages satisfying requirements/*.in

This means we can run the make test command to run the tests, or make quality to run the linters. Some repos don’t have the make help target defined:

> make help
make: *** No rule to make target `help'.  Stop.

In this case, you can look at the Makefile itself to find out what the targets are. They might not have descriptions, so you may have to do a bit of sleuthing to figure out what each target does if they’re not named well. If you find yourself in this situation, please make a pull request to add the help target to the Makefile!
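When there is no help target, a quick way to list candidate targets from the repository root is to search the Makefile for lines that begin a rule. The pattern below is a heuristic and may pick up a few non-target lines (such as variable-like names), but it is usually enough to get oriented:

```shell
# Print every line of the Makefile that looks like a target
# definition: a name at the start of the line followed by ":".
grep -E '^[A-Za-z0-9_.-]+:' Makefile
```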

My Python/JavaScript/etc. Tests Pass Locally but Fail on the PR#

Unfortunately, this happens from time to time when someone breaks tests on the master or main branch of the repo. In that case, you have a few options: wait a few days, rebase on top of master, and hope the newest changes have fixed the failures; dig into recently merged PRs and follow them so you’re notified when a fix lands (and then rebase); or even submit a PR yourself to fix the failing tests!
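The rebase itself usually looks like the sequence below. This assumes your clone has a remote named upstream pointing at the main repository and that its default branch is master; substitute origin and/or main to match your setup:

```shell
# Fetch the latest commits from the main repository.
git fetch upstream

# Replay your branch's commits on top of the updated default branch.
git rebase upstream/master

# Update your PR branch; --force-with-lease is the safer forced push.
git push --force-with-lease
```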