@eagleoutice/flowr

Static Dataflow Analyzer and Program Slicer for the R Programming Language
"use strict"; var __importDefault = (this && this.__importDefault) || function (mod) { return (mod && mod.__esModule) ? mod : { "default": mod }; }; Object.defineProperty(exports, "__esModule", { value: true }); const log_1 = require("../../test/functionality/_helper/log"); const log_2 = require("../util/log"); const doc_code_1 = require("./doc-util/doc-code"); const doc_files_1 = require("./doc-util/doc-files"); const doc_structure_1 = require("./doc-util/doc-structure"); const doc_types_1 = require("./doc-util/doc-types"); const path_1 = __importDefault(require("path")); const doc_auto_gen_1 = require("./doc-util/doc-auto-gen"); const doc_cli_option_1 = require("./doc-util/doc-cli-option"); function getText() { const { info } = (0, doc_types_1.getTypesFromFolder)({ rootFolder: path_1.default.resolve('./test'), files: [path_1.default.resolve('./src/dataflow/graph/dataflowgraph-builder.ts'), path_1.default.resolve('./src/util/log.ts'), path_1.default.resolve('./src/slicing/static/static-slicer.ts')], typeNameForMermaid: 'parameter', inlineTypes: doc_types_1.mermaidHide }); return `${(0, doc_auto_gen_1.autoGenHeader)({ filename: module.filename, purpose: 'linting and testing definitions' })} For the latest code coverage information, see [codecov.io](${doc_files_1.FlowrCodecovRef}), for the latest benchmark results, see the [benchmark results](${doc_files_1.FlowrSiteBaseRef}/wiki/stats/benchmark) wiki page. - [๐Ÿจ Testing Suites](#testing-suites) - [๐Ÿงช Functionality Tests](#functionality-tests) - [๐Ÿ—๏ธ Test Structure](#test-structure) - [๐Ÿท๏ธ Test Labels](#test-labels) - [๐Ÿ–‹๏ธ Writing a Test](#writing-a-test) - [๐Ÿค Running Only Some Tests](#running-only-some-tests) - [๐Ÿ’ฝ System Tests](#system-tests) - [๐Ÿ’ƒ Performance Tests](#performance-tests) - [๐Ÿ“ Testing Within Your IDE](#testing-within-your-ide) - [VS Code](#vs-code) - [Webstorm](#webstorm) - [๐Ÿชˆ CI Pipeline](#ci-pipeline) - [๐Ÿงน Linting](#linting) - [Oh no, the linter fails](#oh-no-the-linter-fails) - [License Checker](#license-checker) - [๐Ÿ› Debugging](#debugging) - [VS Code](#vs-code-1) - [Logging](#logging) <a id='testing-suites'></a> ## ๐Ÿจ Testing Suites Currently, flowR contains three testing suites: one for [functionality](#functionality-tests), one for [system tests](#system-tests), and one for [performance](#performance-tests). We explain each of them in the following. In addition to running those tests, you can use the more generalized \`npm run checkup\`. This command includes the construction of the docker image, the generation of the wiki pages, and the linter. <a id='functionality-tests'></a> ### ๐Ÿงช Functionality Tests The functionality tests represent conventional unit (and depending on your terminology component/api) tests. We use [vitest](https://vitest.dev/) as our testing framework. You can run the tests by issuing (some quick benchmarks may be available with \`vitest bench\`): ${(0, doc_code_1.codeBlock)('shell', 'npm run test')} Within the commandline, this should automatically drop you into a watch mode which will automatically re-run (potentially) affected tests if you change the code. If, at any time there are too many errors for you to comprehend, you can use \`--bail=<value>\` to stop the tests after a certain number of errors. 

<a id='test-structure'></a>
#### 🏗️ Test Structure

All functionality tests are to be located under [test/functionality](${doc_files_1.RemoteFlowrFilePathBaseRef}/test/functionality).
This folder contains three special and important elements:

- \`test-setup.ts\` which is the entry point if *all* tests are run. It should automatically disable logging statements and configure global variables (e.g., whether installation tests should run).
- \`_helper/\` folder which contains helper functions to be used by other tests.
- \`test-summary.ts\` which may produce a summary of the covered capabilities.

${(0, doc_structure_1.block)({
        type:    'WARNING',
        content: `
We name all test files using the \`.test.ts\` suffix and try to run them in parallel.
Whenever this is impossible (e.g., when using ${(0, doc_types_1.shortLink)('withShell', info)}), please use _\`describe.sequential\`_ to disable parallel execution for the respective test (otherwise, such tests are flaky).
`
    })}
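
As a rough sketch (with placeholder tests instead of flowR's real helpers), such a sequential suite may look like this:

${(0, doc_code_1.codeBlock)('typescript', `
import { describe, expect, test } from 'vitest';

// run the tests of this suite one after another instead of in parallel
describe.sequential('tests sharing one R shell', () => {
    test('first check', () => {
        expect([1, 2, 3]).toHaveLength(3);
    });
    test('second check', () => {
        expect('flowR').toContain('R');
    });
});
`)}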

<a id='test-labels'></a>
#### 🏷️ Test Labels

Generally, tests are [labeled](${doc_files_1.RemoteFlowrFilePathBaseRef}test/functionality/_helper/label.ts) according to the *flowR* capabilities they test.
The set of currently supported capabilities and their IDs can be found in ${(0, doc_files_1.getFilePathMd)('../r-bridge/data/data.ts')}.
The resulting labels are used in the test report that is generated as part of the test output. They group tests by the capabilities they test and allow the report to display how many tests ensure that any given capability is properly supported.
The report can be found on the wiki's [capabilities page](${doc_files_1.FlowrWikiBaseRef}/Capabilities).

To add new labels, simply add them to the relevant section in ${(0, doc_files_1.getFilePathMd)('../r-bridge/data/data.ts')} as part of a pull request.

<a id='writing-a-test'></a>
#### 🖋️ Writing a Test

Currently, this is heavily dependent on what you want to test (normalization, dataflow, quad-export, …), and it is probably best to have a look at existing tests in that area to get an idea of what convenience functionality is available.
Various helper functions are available to ease writing tests with common behavior, like testing for dataflow, slicing, or query results. These can be found in [the \`_helper\` subdirectory](${doc_files_1.RemoteFlowrFilePathBaseRef}test/functionality/_helper).

For example, an [existing test](${doc_files_1.RemoteFlowrFilePathBaseRef}test/functionality/dataflow/processing-of-elements/atomic/dataflow-atomic.test.ts) that tests the dataflow graph of a simple variable looks like this:

${(0, doc_code_1.codeBlock)('typescript', `
assertDataflow(label('simple variable', ['name-normal']),
    shell, 'x',
    emptyGraph().use('0', 'x')
);
`)}

Have a look at ${(0, doc_types_1.shortLink)('assertDataflow', info)}, ${(0, doc_types_1.shortLink)('label', info)}, and ${(0, doc_types_1.shortLink)('emptyGraph', info)} for more information.

When writing dataflow tests, additional settings can be used to reduce the amount of graph data that needs to be pre-written. Notably:

- ${(0, doc_types_1.shortLink)('expectIsSubgraph', info)} indicates that the expected graph is a subgraph rather than the full graph that the test should generate. The test will then only check whether the supplied graph is contained in the result graph, instead of requiring an exact match.
- ${(0, doc_types_1.shortLink)('resolveIdsAsCriterion', info)} indicates that the ids given in the expected (sub)graph should be resolved as [slicing criteria](${doc_files_1.FlowrWikiBaseRef}/Terminology#slicing-criterion) rather than actual ids. For example, passing \`12@a\` as an id in the expected (sub)graph will cause it to be resolved to the id of the corresponding node.

The following example shows both in use:

${(0, doc_code_1.codeBlock)('typescript', `
assertDataflow(label('without distractors', [...OperatorDatabase['<-'].capabilities, 'numbers', 'name-normal', 'newlines', 'name-escaped']),
    shell, '\`a\` <- 2\\na',
    emptyGraph()
        .use('2@a')
        .reads('2@a', '1@\`a\`'),
    { expectIsSubgraph: true, resolveIdsAsCriterion: true }
);
`)}

<a id='running-only-some-tests'></a>
#### 🤏 Running Only Some Tests

To run only some tests, vitest allows you to [filter](https://vitest.dev/guide/filtering.html) tests, as shown below.
Besides, you can use the watch mode (with \`npm run test\`) to only run tests that are affected by your changes.
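
For instance, assuming \`npm run test\` forwards its arguments to the vitest CLI, the following invocations restrict the run (the file and test names here are just examples):

${(0, doc_code_1.codeBlock)('shell', `
# only run test files whose path matches "dataflow-atomic"
npm run test -- dataflow-atomic
# only run tests whose name matches the given pattern
npm run test -- -t "simple variable"
`)}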

<a id='system-tests'></a>
### 💽 System Tests

In contrast to the [functionality tests](#functionality-tests), the system tests use runners like the \`npm\` scripts to test the behavior of the whole system, for example, by running the CLI or the server.
They are slower and hence not part of \`npm run test\`, but can be run using:

${(0, doc_code_1.codeBlock)('shell', 'npm run test:system')}

To work, they require you to set up your system correctly (e.g., have \`npm\` available on your path). The CI environment will make sure of that.
At the moment, these tests are not labeled and are only intended to check the basic availability of *flowR*'s core features (as we test the functionality of these features in a dedicated fashion with the [functionality tests](#functionality-tests)).
Have a look at the [test/system-tests](${doc_files_1.RemoteFlowrFilePathBaseRef}test/system-tests) folder for more information.

<a id='performance-tests'></a>
### 💃 Performance Tests

The performance test suite of *flowR* uses several suites to check for variations in the time required for certain steps.
Although we measure wall time in the CI (which is subject to rather large variations), it should give a rough idea of *flowR*'s performance.
Furthermore, the respective scripts can be used locally as well. To run them, issue:

${(0, doc_code_1.codeBlock)('shell', 'npm run performance-test')}

See [test/performance](${doc_files_1.RemoteFlowrFilePathBaseRef}test/performance) for more information on the suites, how to run them, and their results.
If you are interested in the results of the benchmarks, see [here](${doc_files_1.FlowrSiteBaseRef}/wiki/stats/benchmark).

<a id='testing-within-your-ide'></a>
### 📝 Testing Within Your IDE

#### VS Code

Using the vitest Extension for Visual Studio Code, you can start tests directly from the definition and explore your suite in the Testing tab.
To get started, install the [vitest Extension](https://marketplace.visualstudio.com/items?itemName=vitest.explorer).

| Testing Tab                             | In Code                               |
|:---------------------------------------:|:-------------------------------------:|
| ![testing tab](img/testing-vs-code.png) | ![in code](img/testing-vs-code-2.png) |

- Left-clicking the <img style="vertical-align: middle" src='img/circle-check-regular.svg' height='16pt'> or <img style="vertical-align: middle" src='img/circle-xmark-regular.svg' height='16pt'> icon next to the code will rerun the test. Right-clicking will open a context menu, allowing you to debug the test.
- In the Testing tab, you can run (and debug) all tests, individual suites, or individual tests.

#### Webstorm

Please follow the official guide [here](https://www.jetbrains.com/help/webstorm/vitest.html).

<a id='ci-pipeline'></a>
## 🪈 CI Pipeline

We have several workflows defined in [.github/workflows](${doc_files_1.RemoteFlowrFilePathBaseRef}/.github/workflows/).
We explain the most important workflows in the following:

- [qa.yaml](${doc_files_1.RemoteFlowrFilePathBaseRef}/.github/workflows/qa.yaml) is the main workflow that will run different steps depending on several factors. It is responsible for:
  - running the [functionality](#functionality-tests) and [performance tests](#performance-tests)
  - uploading the results to the [benchmark page](${doc_files_1.FlowrSiteBaseRef}/wiki/stats/benchmark) for releases
  - running the [functionality tests](#functionality-tests) on different operating systems (Windows, macOS, Linux) and with different versions of R
  - reporting code coverage
  - running the [linter](#linting) and reporting its results
  - deploying the documentation to [GitHub Pages](${doc_files_1.FlowrSiteBaseRef}/doc/)
- [release.yaml](${doc_files_1.RemoteFlowrFilePathBaseRef}/.github/workflows/release.yaml) is responsible for creating a new release, only to be run by repository owners. Furthermore, it adds the new docker image to [docker hub](${doc_files_1.FlowrDockerRef}).
- [broken-links-and-wiki.yaml](${doc_files_1.RemoteFlowrFilePathBaseRef}/.github/workflows/broken-links-and-wiki.yaml) repeatedly checks that no links are dead!

<a id='linting'></a>
## 🧹 Linting

There are two linting scripts. The main one:

${(0, doc_code_1.codeBlock)('shell', 'npm run lint')}

And a weaker version of the first (allowing for *todo* comments), which is run automatically in the [pre-push githook](${doc_files_1.RemoteFlowrFilePathBaseRef}/.githooks/pre-push) as explained in the [CONTRIBUTING.md](${doc_files_1.RemoteFlowrFilePathBaseRef}/.github/CONTRIBUTING.md):

${(0, doc_code_1.codeBlock)('shell', 'npm run lint-local')}

Besides checking coding style (as defined in the [package.json](${doc_files_1.RemoteFlowrFilePathBaseRef}/package.json)), the *full* linter runs the [license checker](#license-checker).

In case you are unaware, ESLint can [automatically fix several linting problems](https://eslint.org/docs/latest/use/command-line-interface#fix-problems).
So you may be fine by just running:

${(0, doc_code_1.codeBlock)('shell', 'npm run lint-local -- --fix')}
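
If you only want to fix a single file, you can also invoke ESLint's own CLI directly instead of the npm script (the path below is merely an example):

${(0, doc_code_1.codeBlock)('shell', 'npx eslint --fix src/dataflow/graph/dataflowgraph-builder.ts')}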

<a id='oh-no-the-linter-fails'></a>
### 💥 Oh no, the linter fails

By now, the rules should be rather stable, so if the linter fails, it is usually best to read the respective rule description and fix the reported problem.
Rules in this project cover general JavaScript issues [using regular ESLint](https://eslint.org/docs/latest/rules), TypeScript-specific issues [using typescript-eslint](https://typescript-eslint.io/rules/), and code formatting [with ESLint Stylistic](https://eslint.style/packages/default#rules).
However, in case you think that the linter is wrong, please do not hesitate to open a [new issue](${doc_files_1.FlowrGithubBaseRef}/flowr/issues/new/choose).

<a id='license-checker'></a>
### 🪪 License Checker

*flowR* is licensed under the [GPLv3 License](${doc_files_1.FlowrGithubBaseRef}/flowr/blob/main/LICENSE), which requires us to rely only on [compatible licenses](https://www.gnu.org/licenses/license-list.en.html).
For now, this list is hardcoded as part of the npm [\`license-compat\`](${doc_files_1.RemoteFlowrFilePathBaseRef}/package.json) script, so it can very well be that a new dependency you add causes the checker to fail &mdash; *even though it is compatible*.
In that case, please either open a [new issue](${doc_files_1.FlowrGithubBaseRef}/flowr/issues/new/choose) or directly add the license to the list (including a reference to why it is compatible).

<a id='debugging'></a>
## 🐛 Debugging

### VS Code

When working with VS Code, you can attach a debugger to the REPL. This works automatically by running the \`Start Debugging\` command (\`F5\` by default).
You can also set the \`Auto Attach Filter\` setting to automatically attach the debugger when running \`npm run flowr\`.

### Logging

*flowR* uses a wrapper around [tslog](https://www.npmjs.com/package/tslog), implemented in the ${(0, doc_types_1.shortLink)(log_2.FlowrLogger.name, info)} class. Loggers respect, for example, the ${(0, doc_cli_option_1.getCliLongOptionOf)('flowr', 'verbose')} option.
Throughout *flowR*, we use the \`log\` object (or subloggers of it) for logging.
To create your own logger, you can use ${(0, doc_types_1.shortLink)(log_2.FlowrLogger.name + '::' + (new log_2.FlowrLogger().getSubLogger.name), info, true, 'i')}.
For example, check out the ${(0, doc_types_1.shortLink)('slicerLogger', info)} for the static slicer.
`;
}
if (require.main === module) {
    (0, log_1.setMinLevelOfAllLogs)(6 /* LogLevel.Fatal */);
    console.log(getText());
}
//# sourceMappingURL=print-linting-and-testing-wiki.js.map