This page is a resource for development standards across all Meltano products and Meltano code repos.
The definition of done for any increment of work should always include docs and tests.

An iterative approach does not mean that docs and tests can be saved for a later iteration. To save time, it's generally a good practice to write docs and tests before writing the code.
Our Meltano value of ongoing iteration is balanced by a requirement that each increment is "stable". A stable increment is an iteration that provides value without disproportionately adding maintenance and support costs.
For more information, please see the handbook section on Stable Increments.
Linting for our repositories is handled by `pre-commit`, and run in CI using `pre-commit.ci`.
`pre-commit`, as the name implies, can be used to manage git pre-commit hooks. That said, it can also be run standalone as a general-purpose tool manager that maintains and runs tools in isolated environments. You may prefer not to install its git hooks at all, since that can add an annoying delay to each commit. If you do install its git hooks, they can be skipped as needed by passing the `-n` (`--no-verify`) flag to `git commit`, as shown below.
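For example, a quick sketch using standard `git` and `pre-commit` commands (the hook id `flake8` is just an illustration):

```sh
# Install the git hook scripts (optional, as noted above)
pre-commit install

# Skip all hooks for a single commit
git commit -n -m "WIP: skip hooks this once"

# Skip only a specific hook by its id; SKIP is honored by pre-commit
SKIP=flake8 git commit -m "Skip just the flake8 hook"
```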
Our primary reasons for using `pre-commit` are as follows:

- It lets us specify our linting dependencies in `.pre-commit-config.yaml` instead of the dev dependency section of `pyproject.toml`. This prevents the transitive dependency restrictions from our linting dependencies from impacting the runtime dependencies. For example, if the latest version of one of our linting dependencies requires `importlib-resources<4.0.0`, but one of our runtime dependencies requires `importlib-resources>=5.0.0`, then we'd likely have to downgrade that runtime dependency until we found a compatible version. Thanks to `pre-commit` managing these dependencies, this is no longer an issue, and we can run `poetry lock` with less fear.
- It keeps local results consistent with CI: if a `pre-commit` check is failing in CI, it's probably failing locally too.
The `pre-commit.ci` GitHub app is installed in the Meltano and MeltanoLabs GitHub organizations, and is given access on a per-repository basis. If CI autofixes are enabled within `.pre-commit-config.yaml`, then the `pre-commit.ci` application will commit whatever changes result from it running the `pre-commit` checks on all files, if any, to the PR it ran the checks on.
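As a sketch, pre-commit.ci behavior is controlled by a top-level `ci:` block in `.pre-commit-config.yaml`; the keys below come from the pre-commit.ci documentation, and the values shown are illustrative:

```yaml
# Illustrative pre-commit.ci settings in .pre-commit-config.yaml
ci:
  autofix_prs: true                                     # commit auto-fixes back to the PR
  autofix_commit_msg: "ci: apply pre-commit auto-fixes" # example commit message
  autoupdate_schedule: monthly                          # how often to bump hook revisions
```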
There are many `pre-commit` checks that we can specify. An incomplete list can be found at pre-commit.com/hooks.
We recommend using `pipx` to install/run `pre-commit`, since that saves it from being installed into (and potentially interfering with) your active Python environment:

```sh
pipx run pre-commit
```
By default, running `pre-commit` will run every check on all files staged by git. This can increase performance since there are fewer files to check, but you may also want to run the checks against all files like so:

```sh
pre-commit run --all-files
```
A useful shell alias may be:

```sh
alias lint='pipx run pre-commit run --all-files'
```
For any tool which supports it, `pyproject.toml` is where all configuration should be stored, rather than within `.pre-commit-config.yaml` or a tool-specific config file.
There is no one-size-fits-all approach to deciding which `pre-commit` checks should be used for a given repository. We recommend checking out examples of `.pre-commit-config.yaml` in Meltano repositories which already use `pre-commit`. For Python projects, a sketch of some good checks to run is shown below.
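As a starting point, a minimal `.pre-commit-config.yaml` for a Python project might look something like this (the `rev` values are illustrative; pin whatever versions suit the project):

```yaml
repos:
  # General-purpose hygiene checks
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
  # Code formatting
  - repo: https://github.com/psf/black
    rev: 23.1.0
    hooks:
      - id: black
  # Linting
  - repo: https://github.com/pycqa/flake8
    rev: 6.0.0
    hooks:
      - id: flake8
```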
Every docs page should be linted and should adhere to our markdown linting standards.
It is a good idea to install the markdownlint VS Code extension, or similar, so you have real-time lint guidance while editing.
Whenever possible, projects should have automated lint checks, including markdown lint checks and broken link checks.
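As one possible setup, both kinds of checks can be wired up as `pre-commit` hooks; the hook repos and `rev` values below are illustrative examples, not a Meltano standard:

```yaml
repos:
  # Markdown lint checks
  - repo: https://github.com/igorshubovych/markdownlint-cli
    rev: v0.33.0
    hooks:
      - id: markdownlint
  # Broken link checks
  - repo: https://github.com/tcort/markdown-link-check
    rev: v3.10.3
    hooks:
      - id: markdown-link-check
```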
Documentation is critical and should be included in every increment. Docs should never be skipped or deferred to a follow-on issue after the merge.
A test of minimally complete documentation is as follows:
If either of these conditions is not met, the MR should not be merged as it does not meet the minimal definition of done as related to documentation.
Note that within these qualifications, there’s still tons of room for variability in the overall “first iteration” time investment.
For more information on writing quality documentation, check out Divio’s documentation system.
Q: We make decisions to postpone certain components all the time - why not allow docs to be created after the feature launches?
There are several invisible costs that appear immediately after docs are delayed: additional support costs and training costs, along with additional overhead related to administrating and prioritizing the follow-on issue. All of these together can quickly add up to more than the cost of the docs authoring itself.
Apart from the above-mentioned costs, there’s an additional risk that a user will discover the feature and then fail to implement it. Contrary to our goal of providing “early access” to a valuable feature, we risk damaging a user’s confidence in our product because of a bad onboarding experience.
The only valid exceptions to this requirement are: (1) if another team member (such as a member of the PM team) is separately assigned the docs authoring, or (2) if we are accepting a contribution and taking the docs authoring role upon ourselves.
Even in these cases, however, docs still need to be completed before the feature is released.
For many users, the CLI is the primary Meltano interface they interact with on a regular basis. As such, we aim to make working with our CLI as intuitive and joyful as possible.
When adding or changing functionality in Meltano’s CLI, refer to clig.dev for guidelines on creating human-centric CLIs.
SQL code should validate against the SQLFluff checks and should match the SQLFluff auto-format output. (Ideally, CI tests should be enabled wherever possible.)
All projects containing SQL code should include a `.sqlfluff` configuration file with the minimal settings shown below. Changes to these settings (such as max line length) should be considered on a per-project basis.
If using VS Code, developers writing SQL should install the SQLFluff VS Code extension. This extension gives real-time lint feedback and has auto-format capabilities for many of its rules.
```ini
[sqlfluff]
# Or another dialect, as needed:
dialect = snowflake
templater = dbt
output_line_length = 80
ignore_templated_areas = True
runaway_limit = 100

[sqlfluff:rules]
tab_space_size = 4
max_line_length = 80
indent_unit = space
comma_style = trailing

# Keywords
[sqlfluff:rules:L010]
capitalisation_policy = upper

# Unquoted Identifiers
[sqlfluff:rules:L014]
extended_capitalisation_policy = lower

# Function Names
[sqlfluff:rules:L030]
capitalisation_policy = upper

[sqlfluff:templater:dbt]
# TODO: Replace with project-specific dbt settings:
project_dir = transform
profiles_dir = transform/profile
profile = meltano
target = snowflake
```
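With a config like the one above in place, SQLFluff can be invoked from the command line; the path below is an illustrative example of where a project's dbt models might live:

```sh
# Report lint violations in the SQL models
sqlfluff lint transform/models

# Apply auto-fixes for the rules that support them
sqlfluff fix transform/models
```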
Terraform code should validate against the `terraform fmt` checks and should match the `terraform fmt` auto-format output. (Ideally, CI tests should be enabled wherever possible.)
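As a sketch of how this can be verified locally or in a CI step:

```sh
# Recursively check formatting without rewriting files;
# exits non-zero if any file differs from the canonical format
terraform fmt -check -recursive

# Rewrite files in place to the canonical format
terraform fmt -recursive
```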
As a general guide, please refer to Gruntwork’s Terraform Style Guide - except the “Testing” section, which does not yet apply.
AWS account IDs should be treated as private. Account IDs should not be included in public-facing repositories.