
**BETA** CpxVariantReInterpreterSpark

(Internal) Tries to extract simple variants from a provided GATK-SV CPX.vcf

Category Structural Variant Discovery


Overview

(Internal) Tries to extract simple variants from a provided GATK-SV CPX.vcf
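
A minimal invocation sketch is shown below, assuming a local Spark run; the file names and output prefix are placeholders, and only the required arguments plus the default Spark master are spelled out.

```
gatk CpxVariantReInterpreterSpark \
    --cpx-vcf cpx.vcf \
    -I sample.bam \
    -R reference.fasta \
    --prefix-out-vcf reinterpreted_sample \
    --spark-master 'local[*]'
```

The tool writes two VCFs whose names begin with the --prefix-out-vcf value: one for complex variants with a single SEGMENT entry and one for those with multiple SEGMENT entries.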

Additional Information

Read filters

Read filters are automatically applied to the data by the Engine before processing by CpxVariantReInterpreterSpark.

CpxVariantReInterpreterSpark specific arguments

This table summarizes the command-line arguments that are specific to this tool. For more details on each argument, see the list further down below the table or click on an argument name to jump directly to that entry in the list.

| Argument name(s) | Default value | Summary |
| --- | --- | --- |
| **Required Arguments** | | |
| --cpx-vcf | | file containing complex variants as output by GATK-SV |
| --input / -I | | BAM/SAM/CRAM file containing reads |
| --prefix-out-vcf | | Prefix for the two output files containing derived simple variants, for complex variants having one or multiple entries in the SEGMENT annotation |
| --reference / -R | | Reference sequence file |
| **Optional Tool Arguments** | | |
| --arguments_file | | read one or more arguments files and add them to the command line |
| --assembly-imprecise-evidence-overlap-uncertainty | 100 | Uncertainty in overlap of assembled breakpoints and evidence target links. |
| --bam-partition-size | 0 | maximum number of bytes to read from a file into each partition of reads. Setting this higher will result in fewer partitions. Note that this will not be equal to the size of the partition in memory. Defaults to 0, which uses the default split size (determined by the Hadoop input format, typically the size of one HDFS block). |
| --cnv-calls | | External CNV calls file. Should be single sample VCF, and contain only confident autosomal non-reference CNV calls (for now). |
| --conf | | Spark properties to set on the Spark context, in the format property=value |
| --disable-sequence-dictionary-validation | false | If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk! |
| --gcs-max-retries / -gcs-retries | 20 | If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection |
| --gcs-project-for-requester-pays | | Project to bill when accessing "requester pays" buckets. If unset, these buckets cannot be accessed. User must have storage.buckets.get permission on the bucket being accessed. |
| --help / -h | false | display the help message |
| --imprecise-variant-evidence-threshold | 7 | Number of pieces of imprecise evidence necessary to call a variant in the absence of an assembled breakpoint. |
| --interval-merging-rule / -imr | ALL | Interval merging rule for abutting intervals |
| --intervals / -L | | One or more genomic intervals over which to operate |
| --max-callable-imprecise-deletion-size | 15000 | Maximum size deletion to call based on imprecise evidence without corroborating read depth evidence |
| --min-align-length | 50 | Minimum flanking alignment length |
| --min-mq / -mq | 30 | Minimum mapping quality of evidence assembly contig |
| --non-canonical-contig-names-file / -alt-tigs | | file containing non-canonical chromosome names (e.g. chrUn_KI270588v1) in the reference; the human reference (hg19 or hg38) is assumed when omitted |
| --num-reducers | 0 | For tools that shuffle data or write an output, sets the number of reducers. Defaults to 0, which gives one partition per 10MB of input. |
| --output-shard-tmp-dir | | When writing a BAM in single-sharded mode, the directory in which to write the temporary intermediate output shards; if not specified, .parts/ will be used |
| --program-name | | Name of the program running |
| --sharded-output | false | For tools that write an output, write the output in multiple pieces (shards) |
| --spark-master | local[*] | URL of the Spark Master to submit jobs to when using the Spark pipeline runner. |
| --spark-verbosity | | Spark verbosity. Overrides --verbosity for Spark-generated logs only. Possible values: {ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE} |
| --truth-interval-padding | 50 | Breakpoint padding for evaluation against truth data. |
| --use-nio | false | Whether to use NIO or the Hadoop filesystem (default) for reading files. (Note that the Hadoop filesystem is always used for writing files.) |
| --version | false | display the version number for this tool |
| **Optional Common Arguments** | | |
| --add-output-vcf-command-line | true | If true, adds a command line header line to created VCF files. |
| --create-output-bam-index / -OBI | true | If true, create a BAM index when writing a coordinate-sorted BAM file. |
| --create-output-bam-splitting-index | true | If true, create a BAM splitting index (SBI) when writing a coordinate-sorted BAM file. |
| --create-output-variant-index / -OVI | true | If true, create a VCF index when writing a coordinate-sorted VCF file. |
| --disable-read-filter / -DF | | Read filters to be disabled before analysis |
| --disable-tool-default-read-filters | false | Disable all tool default read filters (WARNING: many tools will not function correctly without their default read filters on) |
| --exclude-intervals / -XL | | One or more genomic intervals to exclude from processing |
| --gatk-config-file | | A configuration file to use with the GATK. |
| --interval-exclusion-padding / -ixp | 0 | Amount of padding (in bp) to add to each interval you are excluding. |
| --interval-padding / -ip | 0 | Amount of padding (in bp) to add to each interval you are including. |
| --interval-set-rule / -isr | UNION | Set merging approach to use for combining interval inputs |
| --inverted-read-filter / -XRF | | Inverted (with flipped acceptance/failure conditions) read filters applied before analysis (after regular read filters). |
| --QUIET | false | Whether to suppress job-summary info on System.err. |
| --read-filter / -RF | | Read filters to be applied before analysis |
| --read-index | | Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically. |
| --read-validation-stringency / -VS | SILENT | Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded. |
| --splitting-index-granularity | 4096 | Granularity to use when writing a splitting index, one entry will be put into the index every n reads where n is this granularity value. Smaller granularity results in a larger index with more available split points. |
| --tmp-dir | | Temp directory to use. |
| --use-jdk-deflater / -jdk-deflater | false | Whether to use the JdkDeflater (as opposed to IntelDeflater) |
| --use-jdk-inflater / -jdk-inflater | false | Whether to use the JdkInflater (as opposed to IntelInflater) |
| --verbosity | INFO | Control verbosity of logging. |
| **Advanced Arguments** | | |
| --debug-mode | false | Run the interpretation tool in debug mode (more information printed to screen) |
| --showHidden | false | display hidden arguments |

Argument details

Arguments in this list are specific to this tool. Keep in mind that other arguments are available that are shared with other tools (e.g. command-line GATK arguments); see Inherited arguments above.


--add-output-vcf-command-line / -add-output-vcf-command-line

If true, adds a command line header line to created VCF files.

boolean  true


--arguments_file

read one or more arguments files and add them to the command line

List[File]  []
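
As a sketch (the file name and its contents are hypothetical), an arguments file simply lists options as they would appear on the command line, and is combined with any options given directly:

```
$ cat args.txt
--min-mq 30
--truth-interval-padding 50

$ gatk CpxVariantReInterpreterSpark --arguments_file args.txt \
      --cpx-vcf cpx.vcf -I sample.bam -R reference.fasta --prefix-out-vcf out
```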


--assembly-imprecise-evidence-overlap-uncertainty

Uncertainty in overlap of assembled breakpoints and evidence target links.

int  100  [ [ -∞  ∞ ] ]


--bam-partition-size

maximum number of bytes to read from a file into each partition of reads. Setting this higher will result in fewer partitions. Note that this will not be equal to the size of the partition in memory. Defaults to 0, which uses the default split size (determined by the Hadoop input format, typically the size of one HDFS block).

long  0  [ [ -∞  ∞ ] ]


--cnv-calls

External CNV calls file. Should be single sample VCF, and contain only confident autosomal non-reference CNV calls (for now).

String  null


--conf

Spark properties to set on the Spark context, in the format property=value

List[String]  []
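
For example (the property names and values below are ordinary Spark settings, not tool defaults), each --conf takes a single property=value pair and the flag may be repeated:

```
gatk CpxVariantReInterpreterSpark \
    --conf spark.executor.memory=8g \
    --conf spark.executor.cores=2 \
    --cpx-vcf cpx.vcf -I sample.bam -R reference.fasta --prefix-out-vcf out
```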


--cpx-vcf

file containing complex variants as output by GATK-SV

R String  null


--create-output-bam-index / -OBI

If true, create a BAM index when writing a coordinate-sorted BAM file.

boolean  true


--create-output-bam-splitting-index

If true, create a BAM splitting index (SBI) when writing a coordinate-sorted BAM file.

boolean  true


--create-output-variant-index / -OVI

If true, create a VCF index when writing a coordinate-sorted VCF file.

boolean  true


--debug-mode

Run the interpretation tool in debug mode (more information printed to screen)

Boolean  false


--disable-read-filter / -DF

Read filters to be disabled before analysis

List[String]  []


--disable-sequence-dictionary-validation / -disable-sequence-dictionary-validation

If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk!

boolean  false


--disable-tool-default-read-filters / -disable-tool-default-read-filters

Disable all tool default read filters (WARNING: many tools will not function correctly without their default read filters on)

boolean  false


--exclude-intervals / -XL

One or more genomic intervals to exclude from processing
Use this argument to exclude certain parts of the genome from the analysis (like -L, but the opposite). This argument can be specified multiple times. You can use samtools-style intervals either explicitly on the command line (e.g. -XL 1 or -XL 1:100-200) or by loading in a file containing a list of intervals (e.g. -XL myFile.intervals). Strings gathered from the command-line -XL argument are parsed into intervals to exclude.

List[String]  []


--gatk-config-file

A configuration file to use with the GATK.

String  null


--gcs-max-retries / -gcs-retries

If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection

int  20  [ [ -∞  ∞ ] ]


--gcs-project-for-requester-pays

Project to bill when accessing "requester pays" buckets. If unset, these buckets cannot be accessed. User must have storage.buckets.get permission on the bucket being accessed.

String  ""


--help / -h

display the help message

boolean  false


--imprecise-variant-evidence-threshold

Number of pieces of imprecise evidence necessary to call a variant in the absence of an assembled breakpoint.

int  7  [ [ -∞  ∞ ] ]


--input / -I

BAM/SAM/CRAM file containing reads

R List[GATKPath]  []


--interval-exclusion-padding / -ixp

Amount of padding (in bp) to add to each interval you are excluding.
Use this to add padding to the intervals specified using -XL. For example, '-XL 1:100' with a padding value of 20 would turn into '-XL 1:80-120'. This is typically used to add padding around targets when analyzing exomes.

int  0  [ [ -∞  ∞ ] ]


--interval-merging-rule / -imr

Interval merging rule for abutting intervals
By default, the program merges abutting intervals (i.e. intervals that are directly side-by-side but do not actually overlap) into a single continuous interval. However you can change this behavior if you want them to be treated as separate intervals instead.

The --interval-merging-rule argument is an enumerated type (IntervalMergingRule), which can have one of the following values:

ALL
OVERLAPPING_ONLY

IntervalMergingRule  ALL


--interval-padding / -ip

Amount of padding (in bp) to add to each interval you are including.
Use this to add padding to the intervals specified using -L. For example, '-L 1:100' with a padding value of 20 would turn into '-L 1:80-120'. This is typically used to add padding around targets when analyzing exomes.

int  0  [ [ -∞  ∞ ] ]


--interval-set-rule / -isr

Set merging approach to use for combining interval inputs
By default, the program will take the UNION of all intervals specified using -L and/or -XL. However, you can change this setting for -L, for example if you want to take the INTERSECTION of the sets instead. E.g. to perform the analysis only on chromosome 1 exomes, you could specify -L exomes.intervals -L 1 --interval-set-rule INTERSECTION. However, it is not possible to modify the merging approach for intervals passed using -XL (they will always be merged using UNION). Note that if you specify both -L and -XL, the -XL interval set will be subtracted from the -L interval set.

The --interval-set-rule argument is an enumerated type (IntervalSetRule), which can have one of the following values:

UNION
Take the union of all intervals
INTERSECTION
Take the intersection of intervals (the subset that overlaps all intervals specified)

IntervalSetRule  UNION


--intervals / -L

One or more genomic intervals over which to operate

List[String]  []
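
Intervals may be given as contig names, contig:start-end strings, or a file of intervals; for instance (contig names, coordinates, and the file name are illustrative):

```
-L chr20                        # a whole contig
-L chr20:1000000-2000000        # an explicit region
-L my_regions.interval_list     # a file listing intervals
```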


--inverted-read-filter / -XRF

Inverted (with flipped acceptance/failure conditions) read filters applied before analysis (after regular read filters).

List[String]  []


--max-callable-imprecise-deletion-size

Maximum size deletion to call based on imprecise evidence without corroborating read depth evidence

int  15000  [ [ -∞  ∞ ] ]


--min-align-length

Minimum flanking alignment length

Integer  50  [ [ -∞  ∞ ] ]


--min-mq / -mq

Minimum mapping quality of evidence assembly contig

Integer  30  [ [ -∞  ∞ ] ]


--non-canonical-contig-names-file / -alt-tigs

file containing non-canonical chromosome names (e.g. chrUn_KI270588v1) in the reference; the human reference (hg19 or hg38) is assumed when omitted

String  null


--num-reducers

For tools that shuffle data or write an output, sets the number of reducers. Defaults to 0, which gives one partition per 10MB of input.

int  0  [ [ -∞  ∞ ] ]


--output-shard-tmp-dir

When writing a BAM in single-sharded mode, the directory in which to write the temporary intermediate output shards; if not specified, .parts/ will be used

Exclusion: This argument cannot be used at the same time as sharded-output.

String  null


--prefix-out-vcf

Prefix for the two output files containing derived simple variants, for complex variants having one or multiple entries in the SEGMENT annotation

R String  null


--program-name

Name of the program running

String  null


--QUIET

Whether to suppress job-summary info on System.err.

Boolean  false


--read-filter / -RF

Read filters to be applied before analysis

List[String]  []
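
As a sketch (MappingQualityReadFilter is a standard GATK read filter, not one specific to this tool), additional filters are requested by name and may expose their own arguments:

```
gatk CpxVariantReInterpreterSpark \
    --read-filter MappingQualityReadFilter \
    --minimum-mapping-quality 20 \
    --cpx-vcf cpx.vcf -I sample.bam -R reference.fasta --prefix-out-vcf out
```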


--read-index / -read-index

Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically.

List[GATKPath]  []


--read-validation-stringency / -VS

Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.

The --read-validation-stringency argument is an enumerated type (ValidationStringency), which can have one of the following values:

STRICT
LENIENT
SILENT

ValidationStringency  SILENT


--reference / -R

Reference sequence file

R GATKPath  null


--sharded-output

For tools that write an output, write the output in multiple pieces (shards)

Exclusion: This argument cannot be used at the same time as output-shard-tmp-dir.

boolean  false


--showHidden / -showHidden

display hidden arguments

boolean  false


--spark-master

URL of the Spark Master to submit jobs to when using the Spark pipeline runner.

String  local[*]
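
Typical values (standard Spark master URL forms, not GATK-specific) include local execution with a fixed number of threads or submission to a standalone cluster:

```
--spark-master 'local[4]'            # run locally with 4 worker threads
--spark-master spark://host:7077     # submit to a standalone Spark cluster master
```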


--spark-verbosity

Spark verbosity. Overrides --verbosity for Spark-generated logs only. Possible values: {ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE}

String  null


--splitting-index-granularity

Granularity to use when writing a splitting index, one entry will be put into the index every n reads where n is this granularity value. Smaller granularity results in a larger index with more available split points.

long  4096  [ [ 1  ∞ ] ]


--tmp-dir

Temp directory to use.

GATKPath  null


--truth-interval-padding

Breakpoint padding for evaluation against truth data.

int  50  [ [ -∞  ∞ ] ]


--use-jdk-deflater / -jdk-deflater

Whether to use the JdkDeflater (as opposed to IntelDeflater)

boolean  false


--use-jdk-inflater / -jdk-inflater

Whether to use the JdkInflater (as opposed to IntelInflater)

boolean  false


--use-nio

Whether to use NIO or the Hadoop filesystem (default) for reading files. (Note that the Hadoop filesystem is always used for writing files.)

boolean  false


--verbosity / -verbosity

Control verbosity of logging.

The --verbosity argument is an enumerated type (LogLevel), which can have one of the following values:

ERROR
WARNING
INFO
DEBUG

LogLevel  INFO


--version

display the version number for this tool

boolean  false


See also General Documentation | Tool Documentation Index | Support Forum

GATK version 4.6.2.0 built at Sun, 13 Apr 2025 13:21:43 -0400.