Determines the baseline contig ploidy for germline samples given counts data
Germline karyotyping is a frequently performed task in bioinformatics pipelines, e.g. for sex determination and aneuploidy identification. This tool uses counts data for germline karyotyping.
Performing germline karyotyping using counts data requires calibrating ("modeling") the technical coverage bias and variance for each contig. The Bayesian model and the associated inference scheme implemented in {@link DetermineGermlineContigPloidy} include provisions for inferring and explaining away much of the technical variation. Furthermore, karyotyping confidence is automatically adjusted for individual samples and contigs.
Running {@link DetermineGermlineContigPloidy} is the first computational step in the GATK germline CNV calling pipeline. It provides a baseline ("default") copy-number state for each contig/sample with respect to which the probability of alternative states is allocated.
The computation done by this tool, aside from input data parsing and validation, is performed outside of the Java Virtual Machine using the gCNV computational Python module, {@code gcnvkernel}. It is crucial that the user has properly set up a Python conda environment with {@code gcnvkernel} and its dependencies installed. If the user intends to run {@link DetermineGermlineContigPloidy} using one of the official GATK Docker images, the Python environment is already set up. Otherwise, the environment must be created and activated as described in the main GATK README.md file.
OpenMP and MKL parallelism can be controlled by setting the OMP_NUM_THREADS and MKL_NUM_THREADS environment variables, respectively.

Advanced users may wish to set the PYTENSOR_FLAGS environment variable to override the GATK PyTensor configuration. For example, by running PYTENSOR_FLAGS="base_compiledir=PATH/TO/BASE_COMPILEDIR" gatk DetermineGermlineContigPloidy ..., users can specify the PyTensor compilation directory (which is set to $HOME/.pytensor by default). See the PyTensor documentation at https://pytensor.readthedocs.io/en/latest/library/config.html.
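As a sketch only (the thread counts and the compilation-cache path below are illustrative, not recommendations), these environment variables can be set inline for a single invocation:

```shell
# Limit OpenMP/MKL parallelism to four threads each and redirect the
# PyTensor compilation cache to a scratch directory for this run only.
OMP_NUM_THREADS=4 \
MKL_NUM_THREADS=4 \
PYTENSOR_FLAGS="base_compiledir=/scratch/pytensor_cache" \
gatk DetermineGermlineContigPloidy \
    --input normal_1.counts.hdf5 \
    --contig-ploidy-priors a_valid_ploidy_priors_table.tsv \
    --output output_dir \
    --output-prefix normal_cohort
```

Variables set this way apply only to the single command; use export if they should persist for the whole shell session.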
This tool has two operation modes, COHORT and CASE, as described below.

COHORT mode: A TSV file specifying prior probabilities for each integer ploidy state and for each contig is required in this mode and must be specified via the {@code contig-ploidy-priors} argument. The following shows an example of such a table:
| CONTIG_NAME | PLOIDY_PRIOR_0 | PLOIDY_PRIOR_1 | PLOIDY_PRIOR_2 | PLOIDY_PRIOR_3 |
|---|---|---|---|---|
| 1 | 0.01 | 0.01 | 0.97 | 0.01 |
| 2 | 0.01 | 0.01 | 0.97 | 0.01 |
| X | 0.01 | 0.49 | 0.49 | 0.01 |
| Y | 0.50 | 0.50 | 0.00 | 0.00 |
Note that the contig names appearing under the {@code CONTIG_NAME} column must match the contig names in the input counts files, and all contigs appearing in the input counts files must have a corresponding entry in the priors table. The order of contigs in the priors table is immaterial. The highest ploidy state is determined by the priors table (3 in the above example). A ploidy state can be strictly forbidden by setting its prior probability to 0. For example, the Y contig in the above example can only assume ploidy states 0 and 1.
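As an illustrative check (not part of the tool itself), the following Python sketch validates a priors table in the layout shown above, verifying that each contig's prior probabilities are non-negative and sum to one:

```python
import csv
import io

# Illustrative priors table in the TSV layout described above
# (tab-separated on disk; shown here as an inline string).
priors_tsv = (
    "CONTIG_NAME\tPLOIDY_PRIOR_0\tPLOIDY_PRIOR_1\tPLOIDY_PRIOR_2\tPLOIDY_PRIOR_3\n"
    "1\t0.01\t0.01\t0.97\t0.01\n"
    "X\t0.01\t0.49\t0.49\t0.01\n"
    "Y\t0.50\t0.50\t0.00\t0.00\n"
)

def validate_priors(handle, tol=1e-6):
    """Check that every contig's ploidy priors are non-negative and sum to ~1."""
    reader = csv.DictReader(handle, delimiter="\t")
    for row in reader:
        contig = row["CONTIG_NAME"]
        priors = [float(v) for k, v in row.items() if k.startswith("PLOIDY_PRIOR_")]
        if any(p < 0 for p in priors):
            raise ValueError(f"negative prior for contig {contig}")
        if abs(sum(priors) - 1.0) > tol:
            raise ValueError(f"priors for contig {contig} sum to {sum(priors)}")
    return True

validate_priors(io.StringIO(priors_tsv))  # raises ValueError on a malformed table
```

A row that sums to something other than 1, or that contains a negative probability, is rejected; a prior of exactly 0 (as for states 2 and 3 on Y above) remains valid and simply forbids that state.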
If the provided counts data contains intervals from only a single chromosome, the resulting model degeneracy prevents meaningful inference, and the ploidy states will simply resolve to the most likely states under the ploidy prior table. For example, autosomal contigs will assume ploidy 2, and X could assume either 1 or 2 depending on tie breakers.
The tool output in the COHORT mode will contain two subdirectories, one ending with "-model" and the other ending with "-calls". The model subdirectory contains the inferred parameters of the ploidy model, which may be used later on for karyotyping one or more similarly-sequenced samples (see below). The calls subdirectory contains one subdirectory for each sample, listing various sample-specific quantities such as the global read-depth, average ploidy, per-contig baseline ploidies, and per-contig coverage variance estimates.
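As a hedged sketch of the layout just described (the "-model"/"-calls" suffixes follow the description above, with the {@code --output-prefix} value assumed to be the directory-name stem), the COHORT-mode output can be located like so:

```python
from pathlib import Path

def ploidy_output_dirs(output_dir: str, output_prefix: str):
    """Locate the two COHORT-mode output subdirectories.

    '<prefix>-model' holds the inferred ploidy-model parameters (reusable
    later in CASE mode); '<prefix>-calls' holds one subdirectory per
    sample with sample-specific quantities.
    """
    base = Path(output_dir)
    return base / f"{output_prefix}-model", base / f"{output_prefix}-calls"

model_dir, calls_dir = ploidy_output_dirs("output_dir", "normal_cohort")
```

Here the model directory would be reused as the {@code --model} input when karyotyping further similarly-sequenced samples.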
In CASE mode, the contig-ploidy priors table is taken directly from the provided model parameters path and must not be provided again.
The quality of ploidy model parametrization and the sensitivity/precision of germline karyotyping are sensitive to the choice of model hyperparameters, including standard deviation of mean contig coverage bias (set using the {@code mean-bias-standard-deviation} argument), mapping error rate (set using the {@code mapping-error-rate} argument), and the typical scale of contig- and sample-specific unexplained variance (set using the {@code global-psi-scale} and {@code sample-psi-scale} arguments, respectively). It is crucial to note that these hyperparameters are not universal and must be tuned for each sequencing protocol and properly set at runtime.
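For illustration, these hyperparameters can be overridden on the command line; the values shown below are simply the tool defaults from the argument table further down, and would need protocol-specific tuning:

```shell
gatk DetermineGermlineContigPloidy \
    --input normal_1.counts.hdf5 \
    --contig-ploidy-priors a_valid_ploidy_priors_table.tsv \
    --output output_dir \
    --output-prefix normal_cohort \
    --mean-bias-standard-deviation 1.0 \
    --mapping-error-rate 0.3 \
    --global-psi-scale 0.001 \
    --sample-psi-scale 1e-4
```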
The model underlying this tool assumes integer ploidy states (in contrast to fractional/variable ploidy states). Therefore, it is to be used strictly on germline samples for the purposes of sex determination, autosomal aneuploidy detection, or as a part of the GATK germline CNV calling pipeline. The presence of large somatic events and mosaicism (e.g., sex chromosome loss and somatic trisomy) will naturally lead to unreliable results. We strongly recommend inspecting genotyping qualities (GQ) in the tool output and considering dropping low-GQ contigs in downstream analyses. Finally, given the Bayesian nature of this tool, we suggest including as many high-quality germline samples as possible for ploidy model parametrization in COHORT mode. This will downplay the role of questionable samples and yield a more reliable estimate of genuine sequencing biases.
Accurate germline karyotyping requires incorporating SNP allele-fraction data and counts data into a unified probabilistic model, which is beyond the scope of the present tool. The current implementation uses only counts data for karyotyping; while fast, it may not provide the most reliable results.
COHORT mode:

    gatk DetermineGermlineContigPloidy \
        --input normal_1.counts.hdf5 \
        --input normal_2.counts.hdf5 \
        ... \
        --contig-ploidy-priors a_valid_ploidy_priors_table.tsv \
        --output output_dir \
        --output-prefix normal_cohort

COHORT mode (with optional interval filtering):

    gatk DetermineGermlineContigPloidy \
        -L intervals.interval_list \
        -XL blacklist_intervals.interval_list \
        --interval-merging-rule OVERLAPPING_ONLY \
        --input normal_1.counts.hdf5 \
        --input normal_2.counts.hdf5 \
        ... \
        --contig-ploidy-priors a_valid_ploidy_priors_table.tsv \
        --output output_dir \
        --output-prefix normal_cohort

CASE mode:

    gatk DetermineGermlineContigPloidy \
        --model a_valid_ploidy_model_dir \
        --input normal_1.counts.hdf5 \
        --input normal_2.counts.hdf5 \
        ... \
        --output output_dir \
        --output-prefix normal_case

@author Mehrtash Babadi <mehrtash@broadinstitute.org>
@author Samuel Lee <slee@broadinstitute.org>
This table summarizes the command-line arguments that are specific to this tool. For more details on each argument, see the full list following the table.
| Argument name(s) | Default value | Summary |
|---|---|---|
| **Required Arguments** | | |
| --input, -I | | Input paths for read-count files containing integer read counts in genomic intervals for all samples. All intervals specified via -L/-XL must be contained; if none are specified, then intervals must be identical and in the same order for all samples. If read-count files are given by Google Cloud Storage paths, have the extension .counts.tsv or .counts.tsv.gz, and have been indexed by IndexFeatureFile, only the specified intervals will be queried and streamed; this can reduce disk usage by avoiding the complete localization of all read-count files. |
| --output, -O | | Output directory. This will be created if it does not exist. |
| --output-prefix | | Prefix for output filenames. |
| **Optional Tool Arguments** | | |
| --adamax-beta-1 | 0.9 | Adamax optimizer first moment estimation forgetting factor. |
| --adamax-beta-2 | 0.999 | Adamax optimizer second moment estimation forgetting factor. |
| --arguments_file | | Read one or more arguments files and add them to the command line. |
| --caller-external-admixing-rate | 0.75 | Admixing ratio of new and old called posteriors (between 0 and 1; larger values imply using more of the new posterior and less of the old posterior) after convergence. |
| --caller-internal-admixing-rate | 0.75 | Admixing ratio of new and old called posteriors (between 0 and 1; larger values imply using more of the new posterior and less of the old posterior) for internal convergence loops. |
| --caller-update-convergence-threshold | 0.001 | Maximum tolerated calling update size for convergence. |
| --contig-ploidy-priors | | Input file specifying contig-ploidy priors. If only a single sample is specified, this input should not be provided. If multiple samples are specified, this input is required. |
| --convergence-snr-averaging-window | 5000 | Averaging window for calculating training signal-to-noise ratio (SNR) for convergence checking. |
| --convergence-snr-countdown-window | 10 | The number of ADVI iterations during which the SNR is required to stay below the set threshold for convergence. |
| --convergence-snr-trigger-threshold | 0.1 | The SNR threshold to be reached before triggering the convergence countdown. |
| --disable-annealing | false | (advanced) Disable annealing. |
| --disable-caller | false | (advanced) Disable caller. |
| --disable-sampler | false | (advanced) Disable sampler. |
| --gcs-max-retries, -gcs-retries | 20 | If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection. |
| --gcs-project-for-requester-pays | | Project to bill when accessing "requester pays" buckets. If unset, these buckets cannot be accessed. The user must have storage.buckets.get permission on the bucket being accessed. |
| --global-psi-scale | 0.001 | Prior scale of contig coverage unexplained variance. If a single sample is provided, this input will be ignored. |
| --help, -h | false | Display the help message. |
| --initial-temperature | 2.0 | Initial temperature (for DA-ADVI). |
| --interval-merging-rule, -imr | ALL | Interval merging rule for abutting intervals. |
| --intervals, -L | | One or more genomic intervals over which to operate. |
| --learning-rate | 0.05 | Adamax optimizer learning rate. |
| --log-emission-samples-per-round | 2000 | Log emission samples drawn per round of sampling. |
| --log-emission-sampling-median-rel-error | 5.0E-4 | Maximum tolerated median relative error in log emission sampling. |
| --log-emission-sampling-rounds | 100 | Log emission maximum sampling rounds. |
| --mapping-error-rate | 0.3 | Typical mapping error rate. |
| --max-advi-iter-first-epoch | 1000 | Maximum ADVI iterations in the first epoch. |
| --max-advi-iter-subsequent-epochs | 1000 | Maximum ADVI iterations in subsequent epochs. |
| --max-calling-iters | 1 | Maximum number of internal self-consistency iterations within each calling step. |
| --max-training-epochs | 100 | Maximum number of training epochs. |
| --mean-bias-standard-deviation | 1.0 | Prior standard deviation of the contig-level mean coverage bias. If a single sample is provided, this input will be ignored. |
| --min-training-epochs | 20 | Minimum number of training epochs. |
| --model | | Input ploidy-model directory. If only a single sample is specified, this input is required. If multiple samples are specified, this input should not be provided. |
| --num-thermal-advi-iters | 5000 | Number of thermal ADVI iterations (for DA-ADVI). |
| --sample-psi-scale | 1.0E-4 | Prior scale of the sample-specific correction to the coverage unexplained variance. |
| --version | false | Display the version number for this tool. |
| **Optional Common Arguments** | | |
| --exclude-intervals, -XL | | One or more genomic intervals to exclude from processing. |
| --gatk-config-file | | A configuration file to use with the GATK. |
| --interval-exclusion-padding, -ixp | 0 | Amount of padding (in bp) to add to each interval you are excluding. |
| --interval-padding, -ip | 0 | Amount of padding (in bp) to add to each interval you are including. |
| --interval-set-rule, -isr | UNION | Set merging approach to use for combining interval inputs. |
| --QUIET | false | Whether to suppress job-summary info on System.err. |
| --tmp-dir | | Temp directory to use. |
| --use-jdk-deflater, -jdk-deflater | false | Whether to use the JdkDeflater (as opposed to IntelDeflater). |
| --use-jdk-inflater, -jdk-inflater | false | Whether to use the JdkInflater (as opposed to IntelInflater). |
| --verbosity | INFO | Control verbosity of logging. |
| **Advanced Arguments** | | |
| --showHidden | false | Display hidden arguments. |
Arguments in this list are specific to this tool. Keep in mind that other arguments are available that are shared with other tools (e.g. command-line GATK arguments); see Inherited arguments above.
**--adamax-beta-1**
Adamax optimizer first moment estimation forgetting factor.
double 0.9 [0, 1]

**--adamax-beta-2**
Adamax optimizer second moment estimation forgetting factor.
double 0.999 [0, 1]

**--arguments_file**
Read one or more arguments files and add them to the command line.
List[File] []

**--caller-external-admixing-rate**
Admixing ratio of new and old called posteriors (between 0 and 1; larger values imply using more of the new posterior and less of the old posterior) after convergence.
double 0.75 [0, ∞]

**--caller-internal-admixing-rate**
Admixing ratio of new and old called posteriors (between 0 and 1; larger values imply using more of the new posterior and less of the old posterior) for internal convergence loops.
double 0.75 [0, ∞]

**--caller-update-convergence-threshold**
Maximum tolerated calling update size for convergence.
double 0.001 [0, ∞]

**--contig-ploidy-priors**
Input file specifying contig-ploidy priors. If only a single sample is specified, this input should not be provided. If multiple samples are specified, this input is required.
File null

**--convergence-snr-averaging-window**
Averaging window for calculating training signal-to-noise ratio (SNR) for convergence checking.
int 5000 [0, ∞]

**--convergence-snr-countdown-window**
The number of ADVI iterations during which the SNR is required to stay below the set threshold for convergence.
int 10 [0, ∞]

**--convergence-snr-trigger-threshold**
The SNR threshold to be reached before triggering the convergence countdown.
double 0.1 [0, ∞]

**--disable-annealing**
(advanced) Disable annealing.
boolean false

**--disable-caller**
(advanced) Disable caller.
boolean false

**--disable-sampler**
(advanced) Disable sampler.
boolean false
**--exclude-intervals, -XL**
One or more genomic intervals to exclude from processing.
Use this argument to exclude certain parts of the genome from the analysis (like -L, but the opposite). This argument can be specified multiple times. You can use samtools-style intervals either explicitly on the command line (e.g. -XL 1 or -XL 1:100-200) or by loading in a file containing a list of intervals (e.g. -XL myFile.intervals).
List[String] []

**--gatk-config-file**
A configuration file to use with the GATK.
String null

**--gcs-max-retries, -gcs-retries**
If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection.
int 20 [-∞, ∞]

**--gcs-project-for-requester-pays**
Project to bill when accessing "requester pays" buckets. If unset, these buckets cannot be accessed. The user must have storage.buckets.get permission on the bucket being accessed.
String ""

**--global-psi-scale**
Prior scale of contig coverage unexplained variance. If a single sample is provided, this input will be ignored.
double 0.001 [0, ∞]

**--help, -h**
Display the help message.
boolean false

**--initial-temperature**
Initial temperature (for DA-ADVI).
double 2.0 [0, ∞]

**--input, -I**
Input paths for read-count files containing integer read counts in genomic intervals for all samples. All intervals specified via -L/-XL must be contained; if none are specified, then intervals must be identical and in the same order for all samples. If read-count files are given by Google Cloud Storage paths, have the extension .counts.tsv or .counts.tsv.gz, and have been indexed by IndexFeatureFile, only the specified intervals will be queried and streamed; this can reduce disk usage by avoiding the complete localization of all read-count files.
Required. List[String] []

**--interval-exclusion-padding, -ixp**
Amount of padding (in bp) to add to each interval you are excluding.
Use this to add padding to the intervals specified using -XL. For example, '-XL 1:100' with a padding value of 20 would turn into '-XL 1:80-120'. This is typically used to add padding around targets when analyzing exomes.
int 0 [-∞, ∞]

**--interval-merging-rule, -imr**
Interval merging rule for abutting intervals.
By default, the program merges abutting intervals (i.e. intervals that are directly side-by-side but do not actually overlap) into a single continuous interval. However, you can change this behavior if you want them to be treated as separate intervals instead.
The --interval-merging-rule argument is an enumerated type (IntervalMergingRule), which can have one of the following values: ALL, OVERLAPPING_ONLY.
IntervalMergingRule ALL

**--interval-padding, -ip**
Amount of padding (in bp) to add to each interval you are including.
Use this to add padding to the intervals specified using -L. For example, '-L 1:100' with a padding value of 20 would turn into '-L 1:80-120'. This is typically used to add padding around targets when analyzing exomes.
int 0 [-∞, ∞]

**--interval-set-rule, -isr**
Set merging approach to use for combining interval inputs.
By default, the program will take the UNION of all intervals specified using -L and/or -XL. However, you can change this setting for -L, for example if you want to take the INTERSECTION of the sets instead. E.g. to perform the analysis only on chromosome 1 exomes, you could specify -L exomes.intervals -L 1 --interval-set-rule INTERSECTION. However, it is not possible to modify the merging approach for intervals passed using -XL (they will always be merged using UNION). Note that if you specify both -L and -XL, the -XL interval set will be subtracted from the -L interval set.
The --interval-set-rule argument is an enumerated type (IntervalSetRule), which can have one of the following values: UNION, INTERSECTION.
IntervalSetRule UNION

**--intervals, -L**
One or more genomic intervals over which to operate.
List[String] []
**--learning-rate**
Adamax optimizer learning rate.
double 0.05 [0, ∞]

**--log-emission-samples-per-round**
Log emission samples drawn per round of sampling.
int 2000 [0, ∞]

**--log-emission-sampling-median-rel-error**
Maximum tolerated median relative error in log emission sampling.
double 5.0E-4 [0, ∞]

**--log-emission-sampling-rounds**
Log emission maximum sampling rounds.
int 100 [0, ∞]

**--mapping-error-rate**
Typical mapping error rate.
double 0.3 [0, 1]

**--max-advi-iter-first-epoch**
Maximum ADVI iterations in the first epoch.
int 1000 [0, ∞]

**--max-advi-iter-subsequent-epochs**
Maximum ADVI iterations in subsequent epochs.
int 1000 [0, ∞]

**--max-calling-iters**
Maximum number of internal self-consistency iterations within each calling step.
int 1 [0, ∞]

**--max-training-epochs**
Maximum number of training epochs.
int 100 [0, ∞]

**--mean-bias-standard-deviation**
Prior standard deviation of the contig-level mean coverage bias. If a single sample is provided, this input will be ignored.
double 1.0 [0, ∞]

**--min-training-epochs**
Minimum number of training epochs.
int 20 [0, ∞]

**--model**
Input ploidy-model directory. If only a single sample is specified, this input is required. If multiple samples are specified, this input should not be provided.
File null

**--num-thermal-advi-iters**
Number of thermal ADVI iterations (for DA-ADVI).
int 5000 [0, ∞]

**--output, -O**
Output directory. This will be created if it does not exist.
Required. File null

**--output-prefix**
Prefix for output filenames.
Required. String null

**--QUIET**
Whether to suppress job-summary info on System.err.
Boolean false

**--sample-psi-scale**
Prior scale of the sample-specific correction to the coverage unexplained variance.
double 1.0E-4 [0, ∞]

**--showHidden**
Display hidden arguments.
boolean false

**--tmp-dir**
Temp directory to use.
GATKPath null

**--use-jdk-deflater, -jdk-deflater**
Whether to use the JdkDeflater (as opposed to IntelDeflater).
boolean false

**--use-jdk-inflater, -jdk-inflater**
Whether to use the JdkInflater (as opposed to IntelInflater).
boolean false
**--verbosity**
Control verbosity of logging.
The --verbosity argument is an enumerated type (LogLevel), which can have one of the following values: ERROR, WARNING, INFO, DEBUG.
LogLevel INFO

**--version**
Display the version number for this tool.
boolean false
See also: General Documentation | Tool Documentation Index | Support Forum
GATK version 4.6.2.0 built at Sun, 13 Apr 2025 13:21:43 -0400.