Showing tool doc from version 4.6.2.0 (the latest version is 4.6.2.0)

GenomicsDBImport

Import VCFs to GenomicsDB

Category Short Variant Discovery


Overview

Import single-sample GVCFs into GenomicsDB before joint genotyping.

The GATK4 Best Practice Workflow for SNP and Indel calling uses GenomicsDBImport to merge GVCFs from multiple samples. GenomicsDBImport offers the same functionality as CombineGVCFs and initially came from the Intel-Broad Center for Genomics. The datastore transposes sample-centric variant information across genomic loci to make data more accessible to tools.

To query the contents of the GenomicsDB datastore, use SelectVariants. See Tutorial#11813 to get started.
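As a concrete illustration of querying, the workspace is addressed with the `gendb://` scheme; in this sketch, `ref.fasta`, `my_database`, and the output path are placeholder assumptions:

```shell
# Sketch: extract variants from a GenomicsDB workspace with SelectVariants.
# ref.fasta, my_database and selected.vcf.gz are placeholder paths.
gatk SelectVariants \
  -R ref.fasta \
  -V gendb://my_database \
  -O selected.vcf.gz
```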

Details on GenomicsDB are at https://github.com/GenomicsDB/GenomicsDB/wiki. In brief, GenomicsDB utilizes a data storage system optimized for storing and querying sparse arrays. Genomics data is typically sparse in that each sample has few variants with respect to the entire reference genome. GenomicsDB contains specialized code for genomics applications, such as VCF parsing and INFO field annotation calculation.

Input

One or more GVCFs produced by HaplotypeCaller with the `-ERC GVCF` or `-ERC BP_RESOLUTION` setting, containing the samples to joint-genotype.

Output

A GenomicsDB workspace

Usage examples

Provide each sample GVCF separately.
    gatk --java-options "-Xmx4g -Xms4g" GenomicsDBImport \
      -V data/gvcfs/mother.g.vcf.gz \
      -V data/gvcfs/father.g.vcf.gz \
      -V data/gvcfs/son.g.vcf.gz \
      --genomicsdb-workspace-path my_database \
      --tmp-dir /path/to/large/tmp \
      -L 20
  
Provide sample GVCFs in a map file.
    gatk --java-options "-Xmx4g -Xms4g" \
       GenomicsDBImport \
       --genomicsdb-workspace-path my_database \
       --batch-size 50 \
       -L chr1:1000-10000 \
       --sample-name-map cohort.sample_map \
       --tmp-dir /path/to/large/tmp \
       --reader-threads 5
  
The sample map is a tab-delimited text file with sample_name--tab--path_to_sample_vcf per line. Using a sample map saves the tool from having to download the GVCF headers in order to determine the sample names. Sample names in the sample name map file may have non-tab whitespace, but may not begin or end with whitespace.
  sample1      sample1.vcf.gz
  sample2      sample2.vcf.gz
  sample3      sample3.vcf.gz
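For large cohorts, writing the map by hand is error-prone. A minimal sketch that generates it from a directory of GVCFs (the `data/gvcfs` layout and `<sample>.g.vcf.gz` naming here are assumptions; the `touch` lines only create empty demo files):

```shell
# Sketch: build a tab-delimited sample map from a directory of
# single-sample GVCFs. Directory layout and names are assumptions.
mkdir -p data/gvcfs
touch data/gvcfs/mother.g.vcf.gz data/gvcfs/father.g.vcf.gz  # demo inputs only
for gvcf in data/gvcfs/*.g.vcf.gz; do
  sample=$(basename "$gvcf" .g.vcf.gz)
  printf '%s\t%s\n' "$sample" "$gvcf"
done > cohort.sample_map
```

Each output line is the sample name, a tab, then the GVCF path, matching the format shown above.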
  
The sample name map file may optionally contain a third column with an explicit index path/URI for each VCF:
  sample1      sample1.vcf.gz      sample1.vcf.gz.tbi
  sample2      sample2.vcf.gz      sample2.vcf.gz.tbi
  sample3      sample3.vcf.gz      sample3.vcf.gz.tbi
  
It is also possible to specify an explicit index for only a subset of the samples:
  sample1      sample1.vcf.gz
  sample2      sample2.vcf.gz      sample2.vcf.gz.tbi
  sample3      sample3.vcf.gz
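A malformed map surfaces errors only once the import starts, so a quick pre-flight check can save a long run. This hedged sketch (the map contents below are placeholder demo data) enforces the two-or-three-column layout and the rule that sample names must not begin or end with whitespace:

```shell
# Sketch: validate a sample map before import. The map written here is
# demo data, not real samples.
printf 'sample1\tsample1.vcf.gz\nsample2\tsample2.vcf.gz\tsample2.vcf.gz.tbi\n' > cohort.sample_map
awk -F'\t' '
  NF < 2 || NF > 3               { print "line " NR ": bad column count"; bad = 1 }
  $1 ~ /^[ ]/ || $1 ~ /[ ]$/     { print "line " NR ": sample name has edge whitespace"; bad = 1 }
  END { exit bad }
' cohort.sample_map && echo "sample map OK"
```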
  
Add new samples to an existing GenomicsDB workspace. For this incremental import, no intervals are specified in the command because the tool reuses the intervals from the initial import. A sample name map is also supported for incremental import.
    gatk --java-options "-Xmx4g -Xms4g" GenomicsDBImport \
      -V data/gvcfs/mother.g.vcf.gz \
      -V data/gvcfs/father.g.vcf.gz \
      -V data/gvcfs/son.g.vcf.gz \
      --genomicsdb-update-workspace-path my_database \
      --tmp-dir /path/to/large/tmp
  
Get Picard-style interval_list from existing workspace
    gatk --java-options "-Xmx4g -Xms4g" GenomicsDBImport \
      --genomicsdb-update-workspace-path my_database \
      --output-interval-list-to-file /output/path/to/file
  
The interval_list for the specified existing workspace will be written to /output/path/to/file as a Picard-style interval_list (with a sequence dictionary header).

Caveats

Developer Note

To read data from GenomicsDB, use the query interface org.genomicsdb.reader.GenomicsDBFeatureReader.

Additional Information

Read filters

Read filters are automatically applied to the data by the Engine before processing by GenomicsDBImport.

GenomicsDBImport specific arguments

This table summarizes the command-line arguments that are specific to this tool. For more details on each argument, see the list further down below the table or click on an argument name to jump directly to that entry in the list.

Argument name(s) Default value Summary
Required Arguments
--genomicsdb-update-workspace-path
Workspace when updating GenomicsDB. Can be a POSIX file system absolute or relative path or a HDFS/GCS URL. Use this argument when adding new samples to an existing GenomicsDB workspace or when using the output-interval-list-to-file option. Either this or genomicsdb-workspace-path must be specified. Must point to an existing workspace.
--genomicsdb-workspace-path
Workspace for GenomicsDB. Can be a POSIX file system absolute or relative path or a HDFS/GCS URL. Use this argument when creating a new GenomicsDB workspace. Either this or genomicsdb-update-workspace-path must be specified. Must be an empty or non-existent directory.
Optional Tool Arguments
--arguments_file
read one or more arguments files and add them to the command line
--batch-size
0 Batch size controls the number of samples for which readers are open at once, and therefore provides a way to minimize memory consumption; however, importing can take longer to complete. Use the consolidate flag if more than a hundred batches were used, to improve feature read time. batch-size=0 means no batching (i.e. readers for all samples will be opened at once). Defaults to 0
--bypass-feature-reader
false Use htslib to read input VCFs instead of GATK's FeatureReader. This will reduce memory usage and potentially speed up the import. Lower memory requirements may also enable parallelism through max-num-intervals-to-import-in-parallel. To enable this option, VCFs must be normalized, block-compressed and indexed.
--cloud-index-prefetch-buffer
 -CIPB
0 Size of the cloud-only prefetch buffer (in MB; 0 to disable). Defaults to cloudPrefetchBuffer if unset.
--cloud-prefetch-buffer
 -CPB
0 Size of the cloud-only prefetch buffer (in MB; 0 to disable).
--consolidate
false Boolean flag to enable consolidation. If importing data in batches, a new fragment is created for each batch. In case thousands of fragments are created, GenomicsDB feature readers will try to open ~20x as many files. Internally, GenomicsDB would also consume more memory to maintain bookkeeping data from all fragments. Use this flag to merge all fragments into one. Merging can potentially improve read performance; however, the overall benefit might not be noticeable since the top Java layers have significantly higher overheads. This flag has no effect if only one batch is used. Defaults to false
--disable-bam-index-caching
 -DBIC
false If true, don't cache bam indexes, this will reduce memory requirements but may harm performance if many intervals are specified. Caching is automatically disabled if there are no intervals specified.
--disable-sequence-dictionary-validation
false If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk!
--gcs-max-retries
 -gcs-retries
20 If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection
--gcs-project-for-requester-pays
Project to bill when accessing "requester pays" buckets. If unset, these buckets cannot be accessed. User must have storage.buckets.get permission on the bucket being accessed.
--genomicsdb-segment-size
1048576 Buffer size in bytes allocated for GenomicsDB attributes during import. Should be large enough to hold data from one site.
--genomicsdb-shared-posixfs-optimizations
false Allow optimizations to improve the usability and performance of shared POSIX filesystems (e.g. NFS, Lustre). If set, file-level locking is disabled and filesystem writes are minimized by keeping a higher number of file descriptors open for longer periods of time. Use with the batch-size option if keeping a large number of file descriptors open is an issue
--genomicsdb-use-gcs-hdfs-connector
false Use the GCS HDFS Connector instead of the native GCS SDK client with gs:// URLs.
--genomicsdb-vcf-buffer-size
16384 Buffer size in bytes to store variant contexts. Larger values are better as smaller values cause frequent disk writes. Defaults to 16384 which was empirically determined to work well for many inputs.
--header
Specify a vcf file to use instead of reading and combining headers from the input vcfs
--help
 -h
false display the help message
--interval-merging-rule
 -imr
ALL Interval merging rule for abutting intervals
--intervals
 -L
One or more genomic intervals over which to operate
--merge-input-intervals
false Boolean flag to import all data in between intervals. Improves performance when using large lists of intervals, as in exome sequencing, especially if GVCF data exists only for the specified intervals.
--output-interval-list-to-file
Path to the output file where intervals from the existing workspace should be written. If this option is specified, the tool outputs the interval_list of the workspace pointed to by genomicsdb-update-workspace-path at the path specified here, in a Picard-style interval_list with a sequence dictionary header
--overwrite-existing-genomicsdb-workspace
false Will overwrite given workspace if it exists. Otherwise a new workspace is created. Cannot be set to true if genomicsdb-update-workspace-path is also set. Defaults to false
--reference
 -R
Reference sequence
--sites-only-vcf-output
false If true, don't emit genotype fields when writing vcf file output.
--validate-sample-name-map
false Boolean flag to enable checks on the sampleNameMap file. If true, the tool checks whether feature readers are valid and shows a warning if sample names do not match with the headers. Defaults to false
--variant
 -V
GVCF files to be imported to GenomicsDB. Each file must contain data for only a single sample. Either this or sample-name-map must be specified.
--version
false display the version number for this tool
Optional Common Arguments
--add-output-sam-program-record
true If true, adds a PG tag to created SAM/BAM/CRAM files.
--add-output-vcf-command-line
true If true, adds a command line header line to created VCF files.
--create-output-bam-index
 -OBI
true If true, create a BAM/CRAM index when writing a coordinate-sorted BAM/CRAM file.
--create-output-bam-md5
 -OBM
false If true, create an MD5 digest for any BAM/SAM/CRAM file created
--create-output-variant-index
 -OVI
true If true, create a VCF index when writing a coordinate-sorted VCF file.
--create-output-variant-md5
 -OVM
false If true, create an MD5 digest for any VCF file created.
--disable-read-filter
 -DF
Read filters to be disabled before analysis
--disable-tool-default-read-filters
false Disable all tool default read filters (WARNING: many tools will not function correctly without their default read filters on)
--exclude-intervals
 -XL
One or more genomic intervals to exclude from processing
--gatk-config-file
A configuration file to use with the GATK.
--input
 -I
BAM/SAM/CRAM file containing reads
--interval-exclusion-padding
 -ixp
0 Amount of padding (in bp) to add to each interval you are excluding.
--interval-padding
 -ip
0 Amount of padding (in bp) to add to each interval you are including.
--interval-set-rule
 -isr
UNION Set merging approach to use for combining interval inputs
--inverted-read-filter
 -XRF
Inverted (with flipped acceptance/failure conditions) read filters applied before analysis (after regular read filters).
--lenient
 -LE
false Lenient processing of VCF files
--max-variants-per-shard
0 If non-zero, partitions VCF output into shards, each containing up to the given number of records.
--QUIET
false Whether to suppress job-summary info on System.err.
--read-filter
 -RF
Read filters to be applied before analysis
--read-index
Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically.
--read-validation-stringency
 -VS
SILENT Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.
--seconds-between-progress-updates
10.0 Output traversal statistics every time this many seconds elapse
--sequence-dictionary
Use the given sequence dictionary as the master/canonical sequence dictionary. Must be a .dict file.
--tmp-dir
Temp directory to use.
--use-jdk-deflater
 -jdk-deflater
false Whether to use the JdkDeflater (as opposed to IntelDeflater)
--use-jdk-inflater
 -jdk-inflater
false Whether to use the JdkInflater (as opposed to IntelInflater)
--verbosity
INFO Control verbosity of logging.
Advanced Arguments
--avoid-nio
false Do not attempt to open the input vcf file paths in java. This can only be used with bypass-feature-reader. It allows operating on file systems which GenomicsDB understands how to open but GATK does not. This will disable many of the sanity checks.
--max-num-intervals-to-import-in-parallel
1 Max number of intervals to import in parallel; higher values may improve performance, but require more memory and a higher number of file descriptors open at the same time
--merge-contigs-into-num-partitions
0 Number of GenomicsDB arrays to merge input intervals into. Defaults to 0, which disables this merging. This option can only be used if entire contigs are specified as intervals. The tool will not split up a contig into multiple arrays, which means the actual number of partitions may be less than what is specified for this argument. This can improve performance when importing a very large number of contigs (more than about 100).
--reader-threads
1 How many simultaneous threads to use when opening VCFs in batches; higher values may improve performance when network latency is an issue. Multiple reader threads are not supported when running with multiple intervals.
--sample-name-map
Path to a file containing a mapping of sample name to file URI in tab-delimited format. If this is specified, the header from the first sample will be treated as the merged header rather than merging the headers, and the sample names will be taken from this file. This may be used to rename input samples. This is a performance optimization that relaxes the normal checks for consistent headers. Using VCFs with incompatible headers may result in silent data corruption.
--showHidden
false display hidden arguments

Argument details

Arguments in this list are specific to this tool. Keep in mind that other arguments are available that are shared with other tools (e.g. command-line GATK arguments); see Inherited arguments above.


--add-output-sam-program-record / -add-output-sam-program-record

If true, adds a PG tag to created SAM/BAM/CRAM files.

boolean  true


--add-output-vcf-command-line / -add-output-vcf-command-line

If true, adds a command line header line to created VCF files.

boolean  true


--arguments_file

read one or more arguments files and add them to the command line

List[File]  []


--avoid-nio

Do not attempt to open the input vcf file paths in java. This can only be used with bypass-feature-reader. It allows operating on file systems which GenomicsDB understands how to open but GATK does not. This will disable many of the sanity checks.

Exclusion: This argument cannot be used at the same time as variant.

boolean  false


--batch-size

Batch size controls the number of samples for which readers are open at once, and therefore provides a way to minimize memory consumption; however, importing can take longer to complete. Use the consolidate flag if more than a hundred batches were used, to improve feature read time. batch-size=0 means no batching (i.e. readers for all samples will be opened at once). Defaults to 0

int  0  [ [ -∞  ∞ ] ]


--bypass-feature-reader

Use htslib to read input VCFs instead of GATK's FeatureReader. This will reduce memory usage and potentially speed up the import. Lower memory requirements may also enable parallelism through max-num-intervals-to-import-in-parallel. To enable this option, VCFs must be normalized, block-compressed and indexed.

boolean  false


--cloud-index-prefetch-buffer / -CIPB

Size of the cloud-only prefetch buffer (in MB; 0 to disable). Defaults to cloudPrefetchBuffer if unset.

int  0  [ [ -∞  ∞ ] ]


--cloud-prefetch-buffer / -CPB

Size of the cloud-only prefetch buffer (in MB; 0 to disable).

int  0  [ [ -∞  ∞ ] ]


--consolidate

Boolean flag to enable consolidation. If importing data in batches, a new fragment is created for each batch. In case thousands of fragments are created, GenomicsDB feature readers will try to open ~20x as many files. Internally, GenomicsDB would also consume more memory to maintain bookkeeping data from all fragments. Use this flag to merge all fragments into one. Merging can potentially improve read performance; however, the overall benefit might not be noticeable since the top Java layers have significantly higher overheads. This flag has no effect if only one batch is used. Defaults to false

Boolean  false


--create-output-bam-index / -OBI

If true, create a BAM/CRAM index when writing a coordinate-sorted BAM/CRAM file.

boolean  true


--create-output-bam-md5 / -OBM

If true, create an MD5 digest for any BAM/SAM/CRAM file created

boolean  false


--create-output-variant-index / -OVI

If true, create a VCF index when writing a coordinate-sorted VCF file.

boolean  true


--create-output-variant-md5 / -OVM

If true, create an MD5 digest for any VCF file created.

boolean  false


--disable-bam-index-caching / -DBIC

If true, don't cache bam indexes, this will reduce memory requirements but may harm performance if many intervals are specified. Caching is automatically disabled if there are no intervals specified.

boolean  false


--disable-read-filter / -DF

Read filters to be disabled before analysis

List[String]  []


--disable-sequence-dictionary-validation / -disable-sequence-dictionary-validation

If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk!

boolean  false


--disable-tool-default-read-filters / -disable-tool-default-read-filters

Disable all tool default read filters (WARNING: many tools will not function correctly without their default read filters on)

boolean  false


--exclude-intervals / -XL

One or more genomic intervals to exclude from processing
Use this argument to exclude certain parts of the genome from the analysis (like -L, but the opposite). This argument can be specified multiple times. You can use samtools-style intervals either explicitly on the command line (e.g. -XL 1 or -XL 1:100-200) or by loading in a file containing a list of intervals (e.g. -XL myFile.intervals). strings gathered from the command line -XL argument to be parsed into intervals to exclude

List[String]  []


--gatk-config-file

A configuration file to use with the GATK.

String  null


--gcs-max-retries / -gcs-retries

If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection

int  20  [ [ -∞  ∞ ] ]


--gcs-project-for-requester-pays

Project to bill when accessing "requester pays" buckets. If unset, these buckets cannot be accessed. User must have storage.buckets.get permission on the bucket being accessed.

String  ""


--genomicsdb-segment-size

Buffer size in bytes allocated for GenomicsDB attributes during import. Should be large enough to hold data from one site.

long  1048576  [ [ -∞  ∞ ] ]


--genomicsdb-shared-posixfs-optimizations

Allow optimizations to improve the usability and performance of shared POSIX filesystems (e.g. NFS, Lustre). If set, file-level locking is disabled and filesystem writes are minimized by keeping a higher number of file descriptors open for longer periods of time. Use with the batch-size option if keeping a large number of file descriptors open is an issue

boolean  false


--genomicsdb-update-workspace-path

Workspace when updating GenomicsDB. Can be a POSIX file system absolute or relative path or a HDFS/GCS URL. Use this argument when adding new samples to an existing GenomicsDB workspace or when using the output-interval-list-to-file option. Either this or genomicsdb-workspace-path must be specified. Must point to an existing workspace.

Exclusion: This argument cannot be used at the same time as genomicsdb-workspace-path, header.

R String  null


--genomicsdb-use-gcs-hdfs-connector

Use the GCS HDFS Connector instead of the native GCS SDK client with gs:// URLs.

boolean  false


--genomicsdb-vcf-buffer-size

Buffer size in bytes to store variant contexts. Larger values are better as smaller values cause frequent disk writes. Defaults to 16384 which was empirically determined to work well for many inputs.

long  16384  [ [ 1,024  [ 10,240  ∞ ] ]


--genomicsdb-workspace-path

Workspace for GenomicsDB. Can be a POSIX file system absolute or relative path or a HDFS/GCS URL. Use this argument when creating a new GenomicsDB workspace. Either this or genomicsdb-update-workspace-path must be specified. Must be an empty or non-existent directory.

Exclusion: This argument cannot be used at the same time as genomicsdb-update-workspace-path, output-interval-list-to-file.

R String  null


--header

Specify a vcf file to use instead of reading and combining headers from the input vcfs

Exclusion: This argument cannot be used at the same time as genomicsdb-update-workspace-path.

FeatureInput[VariantContext]  null


--help / -h

display the help message

boolean  false


--input / -I

BAM/SAM/CRAM file containing reads

List[GATKPath]  []


--interval-exclusion-padding / -ixp

Amount of padding (in bp) to add to each interval you are excluding.
Use this to add padding to the intervals specified using -XL. For example, '-XL 1:100' with a padding value of 20 would turn into '-XL 1:80-120'. This is typically used to add padding around targets when analyzing exomes.

int  0  [ [ -∞  ∞ ] ]


--interval-merging-rule / -imr

Interval merging rule for abutting intervals
By default, the program merges abutting intervals (i.e. intervals that are directly side-by-side but do not actually overlap) into a single continuous interval. However you can change this behavior if you want them to be treated as separate intervals instead.

The --interval-merging-rule argument is an enumerated type (IntervalMergingRule), which can have one of the following values:

ALL
OVERLAPPING_ONLY

IntervalMergingRule  ALL


--interval-padding / -ip

Amount of padding (in bp) to add to each interval you are including.
Use this to add padding to the intervals specified using -L. For example, '-L 1:100' with a padding value of 20 would turn into '-L 1:80-120'. This is typically used to add padding around targets when analyzing exomes.

int  0  [ [ -∞  ∞ ] ]


--interval-set-rule / -isr

Set merging approach to use for combining interval inputs
By default, the program will take the UNION of all intervals specified using -L and/or -XL. However, you can change this setting for -L, for example if you want to take the INTERSECTION of the sets instead. E.g. to perform the analysis only on chromosome 1 exomes, you could specify -L exomes.intervals -L 1 --interval-set-rule INTERSECTION. However, it is not possible to modify the merging approach for intervals passed using -XL (they will always be merged using UNION). Note that if you specify both -L and -XL, the -XL interval set will be subtracted from the -L interval set.

The --interval-set-rule argument is an enumerated type (IntervalSetRule), which can have one of the following values:

UNION
Take the union of all intervals
INTERSECTION
Take the intersection of intervals (the subset that overlaps all intervals specified)

IntervalSetRule  UNION


--intervals / -L

One or more genomic intervals over which to operate

List[String]  []


--inverted-read-filter / -XRF

Inverted (with flipped acceptance/failure conditions) read filters applied before analysis (after regular read filters).

List[String]  []


--lenient / -LE

Lenient processing of VCF files

boolean  false


--max-num-intervals-to-import-in-parallel

Max number of intervals to import in parallel; higher values may improve performance, but require more memory and a higher number of file descriptors open at the same time

int  1  [ [ 1  ∞ ] ]


--max-variants-per-shard

If non-zero, partitions VCF output into shards, each containing up to the given number of records.

int  0  [ [ 0  ∞ ] ]


--merge-contigs-into-num-partitions / -merge-contigs-into-num-partitions

Number of GenomicsDB arrays to merge input intervals into. Defaults to 0, which disables this merging. This option can only be used if entire contigs are specified as intervals. The tool will not split up a contig into multiple arrays, which means the actual number of partitions may be less than what is specified for this argument. This can improve performance when importing a very large number of contigs (more than about 100).

int  0  [ [ 0  ∞ ] ]


--merge-input-intervals

Boolean flag to import all data in between intervals. Improves performance when using large lists of intervals, as in exome sequencing, especially if GVCF data exists only for the specified intervals.

boolean  false


--output-interval-list-to-file

Path to the output file where intervals from the existing workspace should be written. If this option is specified, the tool outputs the interval_list of the workspace pointed to by genomicsdb-update-workspace-path at the path specified here, in a Picard-style interval_list with a sequence dictionary header

Exclusion: This argument cannot be used at the same time as genomicsdb-workspace-path.

String  null


--overwrite-existing-genomicsdb-workspace

Will overwrite given workspace if it exists. Otherwise a new workspace is created. Cannot be set to true if genomicsdb-update-workspace-path is also set. Defaults to false

Boolean  false


--QUIET

Whether to suppress job-summary info on System.err.

Boolean  false


--read-filter / -RF

Read filters to be applied before analysis

List[String]  []


--read-index / -read-index

Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically.

List[GATKPath]  []


--read-validation-stringency / -VS

Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.

The --read-validation-stringency argument is an enumerated type (ValidationStringency), which can have one of the following values:

STRICT
LENIENT
SILENT

ValidationStringency  SILENT


--reader-threads

How many simultaneous threads to use when opening VCFs in batches; higher values may improve performance when network latency is an issue. Multiple reader threads are not supported when running with multiple intervals.

int  1  [ [ 1  ∞ ] ]


--reference / -R

Reference sequence

GATKPath  null


--sample-name-map

Path to a file containing a mapping of sample name to file URI in tab-delimited format. If this is specified, the header from the first sample will be treated as the merged header rather than merging the headers, and the sample names will be taken from this file. This may be used to rename input samples. This is a performance optimization that relaxes the normal checks for consistent headers. Using VCFs with incompatible headers may result in silent data corruption.

Exclusion: This argument cannot be used at the same time as variant.

String  null


--seconds-between-progress-updates / -seconds-between-progress-updates

Output traversal statistics every time this many seconds elapse

double  10.0  [ [ -∞  ∞ ] ]


--sequence-dictionary / -sequence-dictionary

Use the given sequence dictionary as the master/canonical sequence dictionary. Must be a .dict file.

GATKPath  null


--showHidden / -showHidden

display hidden arguments

boolean  false


--sites-only-vcf-output

If true, don't emit genotype fields when writing vcf file output.

boolean  false


--tmp-dir

Temp directory to use.

GATKPath  null


--use-jdk-deflater / -jdk-deflater

Whether to use the JdkDeflater (as opposed to IntelDeflater)

boolean  false


--use-jdk-inflater / -jdk-inflater

Whether to use the JdkInflater (as opposed to IntelInflater)

boolean  false


--validate-sample-name-map

Boolean flag to enable checks on the sampleNameMap file. If true, the tool checks whether feature readers are valid and shows a warning if sample names do not match with the headers. Defaults to false

Boolean  false


--variant / -V

GVCF files to be imported to GenomicsDB. Each file must contain data for only a single sample. Either this or sample-name-map must be specified.

Exclusion: This argument cannot be used at the same time as sample-name-map, avoid-nio.

List[String]  []


--verbosity / -verbosity

Control verbosity of logging.

The --verbosity argument is an enumerated type (LogLevel), which can have one of the following values:

ERROR
WARNING
INFO
DEBUG

LogLevel  INFO


--version

display the version number for this tool

boolean  false




See also General Documentation | Tool Documentation Index | Support Forum

GATK version 4.6.2.0 built at Sun, 13 Apr 2025 13:21:43 -0400.