First of all, thanks for this great package. I greatly appreciate working with it.
I have a suggestion for structuring tests, although I do not know how difficult it is to implement that in a bullet-proof way.
It follows the Ruby test framework rspec; having worked with Ruby, Python, and R quite a bit, I feel this is the gold standard for how human-readable tests can look.
Basically it consists of three things:
- `describe` states the subject of your test, i.e. the name of the function. For example, if the function for connecting to a database is called `connect`, you'd write `describe('connect', {...})`.
- `context` states the data environment for the test: you provide specific data to a function, you mock things, or you set an environment variable. A call would look like, e.g., `context('when a database url is specified in the environment', {...})`.
- `it` basically wraps one or more `tinytest::expect_*` statements. This is the innermost layer and just provides a descriptive text about what you expect to happen, e.g. `it('connects to the database', {...})` or `it('raises error', {...})`.
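Put together, the three layers nest like this (a schematic sketch; `connect()` is just the hypothetical function under test):

```r
describe('connect', {                                                # subject under test
  context('when a database url is specified in the environment', {  # setup / environment
    Sys.setenv(DATABASE_URL = 'postgres://localhost:5432/app')
    it('connects to the database', {                                 # expected behaviour
      expect_true(!is.null(connect()))
    })
  })
})
```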
At its most basic, this provides a great way to visually parse test files: what is tested, which setups there are, and under which circumstances you expect what. This creates a logical structure that's easier to read than a flat sequence of `data = data.frame(...); expect_equal(...)` calls. A `context` might modify the environment, which should be undone afterwards.
For example, I have a database connection function which depends on the environment variable `APP_ENV` (which is set to `"test"` when running the tests). Based on that, a section of a YAML config list is used to provide the connection parameters.
For the development case, I have a fork: if there is an environment variable called `DATABASE_URL`, it tries that first, otherwise it uses default values, e.g.

```yaml
host: localhost
port: 5432
```

In the test environment it always uses the defaults; in production it always relies on a `DATABASE_URL` being set.
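A sketch of that parameter selection (the function and config names here are placeholders; the real values live in the YAML config):

```r
# Sketch: pick connection parameters based on APP_ENV; placeholders only.
defaults <- list(host = 'localhost', port = 5432)
config <- list(
  test        = defaults,   # always the defaults
  development = defaults,   # defaults, unless DATABASE_URL is set
  production  = list()      # relies entirely on DATABASE_URL
)

connection_params <- function() {
  env <- Sys.getenv('APP_ENV')
  url <- Sys.getenv('DATABASE_URL')
  if (env == 'production' && !nzchar(url)) {
    stop('DATABASE_URL must be set in production.')
  }
  if (env != 'test' && nzchar(url)) {
    return(list(url = url))   # development/production try the url first
  }
  config[[env]]
}
```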
A naive implementation of the `describe`/`context`/`it` helpers (using the `box` package):
```r
# Descriptions are only printed when TINYTEST_VERBOSE is set.
report_description <- Sys.getenv('TINYTEST_VERBOSE') != ''

whitespace <- ' '
indentation <- paste(rep(whitespace, 2), collapse = '')

# Closure-based counter tracking the current nesting depth.
.new_counter <- function() {
  value <- 0
  function(action) {
    if (action == 'increase') {
      value <<- value + 1
    } else if (action == 'decrease') {
      value <<- value - 1
      if (value < 0) {
        stop('value must not be less than zero.')
      }
    } else if (action == 'status') {
      return(value)
    } else {
      stop('action ', action, ' not understood.')
    }
  }
}

counter <- .new_counter()

# Prints the description indented by the current depth, then evaluates the block.
.run_block <- function(description, block) {
  if (report_description) {
    message(rep(indentation, counter('status')), description)
    counter('increase')
    on.exit(counter('decrease'))
  }
  local(block)
}

#' @export
describe <- .run_block

#' @export
context <- .run_block

#' @export
it <- .run_block
```
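And this is roughly how the test file could look (a sketch only; `connect()`, the urls, and the expectations are simplified placeholders standing in for my real code):

```r
# test_database.R -- sketch; describe/context/it come from the helper module above.
context('with test environment', {
  Sys.setenv(APP_ENV = 'test')
  it('connects', {
    expect_true(!is.null(connect()))
  })
})

context('with development environment', {
  Sys.setenv(APP_ENV = 'development')

  context('without url', {
    Sys.unsetenv('DATABASE_URL')
    it('connects', {
      expect_true(!is.null(connect()))
    })
  })

  context('when database url is in environment', {
    context('when url is valid', {
      Sys.setenv(DATABASE_URL = 'postgres://localhost:5432/app')
      it('connects', {
        expect_true(!is.null(connect()))
      })
    })
    context('when url is invalid', {
      Sys.setenv(DATABASE_URL = 'not-a-url')
      it('raises error', {
        expect_error(connect())
      })
    })
  })
})

# ... analogous contexts for the production environment ...
```

This leads to a full test output like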
```
with test environment
  connects
with development environment
  without url
    connects
  when database url is in environment
    when url is valid
      connects
    when url is invalid
      raises error
with production environment
  when database url is not provided
    raises error
  when database url is in environment
    when url is valid
      connects
    when url is invalid
      raises error
test_database.R............... 19 tests OK 0.6s
All ok, 19 results (0.6s)

Process finished with exit code 0
```
It would also be cool to snapshot the number of failed tests before an `it` call and append `[FAILED]` to the printed description if that number changes during the execution of the block. That would make it very easy to visually grasp what is going wrong.
What do you think about an addition like that? There are many more features that rspec has, e.g. variables/calls that are lazy and can be defined once for a whole `context` as a base setup but overridden in individual cases, but this might be a first step if it is something you'd consider.
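As a rough illustration of the lazy part, such a `let`-style value could be sketched in base R with `delayedAssign()` (names are hypothetical, and this is just an idea, not a worked-out design):

```r
# Sketch of a `let`-like helper: the value is computed lazily, at most once,
# the first time an expectation actually uses it.
let <- function(name, expr) {
  delayedAssign(name, expr,
                eval.env   = environment(),    # forces the lazy `expr` promise
                assign.env = parent.frame())   # binds the name in the calling block
}

# usage idea inside a context:
# let('db_url', Sys.getenv('DATABASE_URL'))
# it('connects', { expect_true(nzchar(db_url)) })
```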
In the example there is a test declaration in the form of a function call spanning 61 lines. For someone not used to this framework, that is a lot harder to edit and understand than a short sequence of imperative programming statements. This means it would probably hamper the learnability of tinytest.
Moreover, one of the core design ideas of tinytest is that a test script is just an R script that should be runnable with `source()` (or `run_test_file()`). So I feel this is out of scope for tinytest.