This document covers the UI-related automated tests.
Dev Environment
If you are successfully running the app, you should already have all the necessary packages. If you don’t, the packages are listed in the DESCRIPTION file (see the install sketch after this list). Some dev-related packages to note:
- shinytest2 – The key R package for testing shiny web app functionality.
- diffviewer – Helps visually compare two files.
- Pandoc – Comes bundled with RStudio.
- PhantomJS – Used to be needed for downloads (installed with webshot::install_phantomjs()), but is no longer required. As noted at https://rstudio.github.io/shinytest2/articles/z-migration.html: shinytest2 is the successor to shinytest. shinytest was implemented using webdriver, which uses PhantomJS. PhantomJS has been unsupported since 2017 and does not support displaying bslib’s Bootstrap v5. shinytest2 uses chromote to connect to your locally installed Chrome or Chromium application, allowing shinytest2 to display bslib’s Bootstrap v5.
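If any of these are missing, one option is to install the development dependencies declared in DESCRIPTION. This is a minimal sketch, assuming you run it from the package root; adjust as needed for your setup:
# Install all packages listed in DESCRIPTION (including Suggests),
# so shinytest2, diffviewer, etc. are available for local testing
remotes::install_deps(dependencies = TRUE)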
How shinytest2 Works
shinytest2 (referred to henceforth simply as “shinytest,” not to be confused with the older R package actually named shinytest) automates testing of shiny web app functionality, so we can determine whether code updates break or unexpectedly modify parts of the application.
It runs the installed version of the app in a headless Chromium browser, simulating user interactions and taking snapshots. These snapshots are stored as JSON files with accompanying PNG images. If differences arise from code updates, the test fails, indicating which files changed.
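For context, a minimal shinytest2 test looks roughly like the following. This is a sketch only; the app path, test name, and input ID are placeholders, not EJAM’s actual identifiers.
library(testthat)
library(shinytest2)

test_that("app starts and responds to a basic input", {
  # Launch the app in a headless Chromium browser via chromote
  # (app_dir here is a placeholder path)
  app <- AppDriver$new(app_dir = "path/to/app", name = "basic-smoke-test",
                       width = 1920, height = 1080)

  # Simulate a user interaction (the input ID is hypothetical)
  app$set_inputs(example_input = "some value")

  # Record a snapshot of inputs/outputs; later runs are compared against _snaps/
  app$expect_values()

  app$stop()
})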
EJAM’s shinytest2 Folder Structure
tests/
├── testthat.R (modified for shinytest2)
├── app-functionality.R
└── testthat/
    ├── setup-shinytest2.R
    ├── test-[DATA TYPE]-functionality.R (e.g. test-FIPS-functionality.R)
    └── _snaps/
        └── [OS, e.g. linux]-[R Version, e.g. 4.4]/
            ├── FIPS-shiny-functionality/
            │   └── .json, .png, .xlsx, .html files
            ├── shapefile-shiny-functionality/
            │   └── .json, .png, .xlsx, .html files
            ├── latlon-shiny-functionality/
            │   └── .json, .png, .xlsx, .html files
            ├── NAICS-shiny-functionality/
            │   └── .json, .png, .xlsx, .html files
            └── FRS-shiny-functionality/
                └── .json, .png, .xlsx, .html files
File Descriptions
- testthat.R – Calls shinytest2::test_app() to run all tests. Can filter to specific tests using filter="test-name".
- testthat/setup-shinytest2.R – Loads global.R and app scripts into the testing environment.
- app-functionality.R – Contains a generic function, main_shinytest(), that defines the app interactions for testing, running tests for multiple data types (FIPS, shapefile, latlon, NAICS, etc.).
- testthat/test-[DATA TYPE]-functionality.R – A simple call to the main app functionality function, specifying the data type to test with (see the sketch after this list).
- testthat/_snaps/ – Stores snapshots categorized by OS, R version, and data type.
  - .json files – Capture app snapshots.
  - .png files – Screenshots (do not trigger failures).
  - .xlsx & .html files – Download files. Compared via content hashing to prevent false failures.
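For illustration, a test-[DATA TYPE]-functionality.R file might look roughly like this. The argument name is an assumption; the actual signature of main_shinytest() in app-functionality.R may differ.
# tests/testthat/test-FIPS-functionality.R (sketch)
# Delegate to the shared test logic, telling it which data type to exercise
main_shinytest(test_category = "FIPS")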
Updating Tests
You may wish to modify the shinytest scripts, either to add new interactions with the application or to change existing ones (for example, when an app component can no longer be interacted with). Here are some methods and tips for updating the shinytest scripts accordingly.
Direct Updates
Modify app-functionality.R to add new interactions with the app for the shinytest to test.
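For example, a new interaction added to main_shinytest() might look something like this, assuming app is the AppDriver object created there. The input and output IDs shown are hypothetical, not EJAM’s actual IDs.
# Simulate the user changing a (hypothetical) input and clicking a (hypothetical) button
app$set_inputs(buffer_distance = 5)
app$click("run_analysis_button")

# Wait for the app to settle, then snapshot the resulting values
app$wait_for_idle()
app$expect_values()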
Using shinytest2::record_test() to generate testing code
If you’re not sure how to code interactions directly, run shinytest2::record_test() to test the app interactively and record your actions, which can then be copied into the test scripts.
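A minimal sketch, assuming the app lives at the package root (adjust the path for your checkout):
# Opens the app alongside a recorder; your interactions are translated into
# shinytest2 code you can paste into app-functionality.R
shinytest2::record_test(".")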
Using shiny::exportTestValues(name = value)
Throughout the app code, shiny::exportTestValues() can be used to store values from reactive expressions or other items that are not inputs or outputs and therefore may not be included in the standard snapshots. Then, in the shinytests, you can specify export=[name] to include in the snapshot the export named “name” that you specified in the code, or export=TRUE to include all exports.
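A minimal sketch of both sides, assuming a hypothetical reactive named filtered_data in the server and a corresponding expectation in the test code:
library(shiny)

# In the app's server code: expose a reactive's value to shinytest
server <- function(input, output, session) {
  filtered_data <- reactive({
    subset(mtcars, cyl == input$cyl)  # placeholder computation
  })
  exportTestValues(filtered_data = filtered_data())
}

# In the test code: include that export in the snapshot...
app$expect_values(export = "filtered_data")
# ...or include all exports
app$expect_values(export = TRUE)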
Running Tests Locally
The primary method for running the shinytests is:
shinytest2::test_app(".", filter="-functionality")
However, it is strongly recommended during development to run the entire testthat.R, which runs remotes::install_local() to ensure your development code is the one tested. This is because shinytest2 automatically references the installed version of the package.
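Equivalently, you can do the two steps by hand. A minimal sketch, run from the package root; the arguments shown are illustrative, not the exact contents of testthat.R:
# Reinstall the development version so shinytest2 tests your current code
remotes::install_local(".", force = TRUE, upgrade = "never")

# Run all shinytest2 tests whose filenames match "-functionality"
shinytest2::test_app(".", filter = "-functionality")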
GitHub Actions Integration
Using GitHub Actions (GHA), we can have GitHub run the shinytests before a Pull Request is merged, giving us confidence that the app will still work with the merged code.
Reviewing & Updating Snapshots
Reviewing
testthat::snapshot_review()
Optionally, you can filter to review specific files or folders of snapshots (see the sketch below).
Accepting New Snapshots
testthat::snapshot_accept()
Optionally, new snapshots can be accepted interactively while reviewing.
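A minimal sketch; the folder name shown is one of the data-type folders from the tree above and is illustrative:
# Interactively compare old vs. new snapshots for one data type;
# diffviewer renders the changes side by side
testthat::snapshot_review("FIPS-shiny-functionality")

# Accept all pending snapshot changes for that data type without reviewing
testthat::snapshot_accept("FIPS-shiny-functionality")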
Debugging Tests & GitHub Actions
Debugging shinytest2
- Use save_log() to inspect logs.
- Add print(), message(), or warning() statements in app-functionality.R.
- Run, line-by-line or in chunks, the main shinytest code in app-functionality.R, beginning with:
# Category of test being debugged and where its snapshots live
test_category <- "NAICS-functionality"
test_snap_dir <- glue::glue("{normalizePath(testthat::test_path())}/_snaps/{platform_variant()}/{test_category}-functionality/")

# Outputs to exclude from snapshot comparisons (here, a leaflet map output)
outputs_to_remove <- c("an_leaf_map")

# Start the app in a headless browser with verbose Shiny logging enabled
app <- AppDriver$new(
  variant = platform_variant(),
  name = test_category,
  seed = 12345,
  load_timeout = 2e+06,
  width = 1920,
  height = 1080,
  screenshot_args = FALSE,
  expect_values_screenshot_args = FALSE,
  options = list(
    shiny.reactlog = TRUE,
    shiny.trace = TRUE
  )
)
Then, after running lines or chunks, run app$get_logs() to view the log (see the short example below).
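For example, after stepping through a chunk of interactions you might do something like this (the input ID is hypothetical):
# Run one interaction at a time and check the app's state as you go
app$set_inputs(some_input = "a value")   # hypothetical input ID
app$get_logs()                           # inspect Shiny/Chromote messages so far
app$view()                               # open the live headless session in a browser window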
Debugging GHA
Generally, if you test locally and update snapshots accordingly, GHA should pass. However, the tests do sometimes fail due to OS differences, R version differences, or even package differences. Here are some tips for debugging these issues:
- Inspect the log in the GitHub repo, under the Actions tab or the Checks tab of the PR.
- Inspect artifacts (zipped test outputs) after a failed run and compare snapshots in a diff viewer to identify discrepancies (see the example below).
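For instance, after downloading and unzipping a failed run's artifact, the diffviewer package (listed among the dev packages above) can compare a GHA snapshot against your local one. The file paths below are hypothetical:
# Compare a snapshot image produced on GHA with the locally accepted one
diffviewer::visual_diff(
  "tests/testthat/_snaps/linux-4.4/FIPS-shiny-functionality/001.png",  # local snapshot
  "~/Downloads/gha-artifact/FIPS-shiny-functionality/001.new.png"      # from the GHA artifact
)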
Current State of Tests
- If the shinytests are failing, it is likely because snapshots have not been updated locally and pushed after recent changes.
- The version of R used for testing was 4.4.1 as of early 2025; it should match the version used in your development environment as closely as possible to ensure consistency.