MarkDuplicates on Spark
This is a Spark implementation of Picard MarkDuplicates. It allows the tool to run in parallel, across multiple cores on a local machine or across multiple machines on a Spark cluster, while still matching the output of the non-Spark Picard version of the tool. Since the tool must hold all read names in memory while it groups read information, machine configuration and starting sort order affect performance.
Here are some differences of note between MarkDuplicatesSpark and Picard MarkDuplicates. For a typical 30x coverage WGS BAM, we recommend running on a machine with at least 16 GB of memory. Memory usage scales with library complexity, so the tool will need more memory for larger or more complex data. If the tool runs slowly, Spark may be running out of memory and spilling data to disk excessively; in that case, increasing the memory available to the tool should yield a speedup up to a threshold, beyond which additional memory has no effect.
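For example, when running locally you can raise the Java heap available to the tool through the GATK wrapper's --java-options flag; this is a minimal sketch assuming a 16 GB heap and placeholder file names:
gatk --java-options "-Xmx16g" MarkDuplicatesSpark \
-I input.bam \
-O marked_duplicates.bam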
Note that this tool does not support UMI-based duplicate marking.
See MarkDuplicates documentation for details on tool features and background information.
Mark duplicates on an input BAM
gatk MarkDuplicatesSpark \
-I input.bam \
-O marked_duplicates.bam
Additionally produce estimated library complexity metrics
gatk MarkDuplicatesSpark \
-I input.bam \
-O marked_duplicates.bam \
-M marked_dup_metrics.txt
MarkDuplicatesSpark run locally specifying the removal of sequencing duplicates
gatk MarkDuplicatesSpark \
-I input.bam \
-O marked_duplicates.bam \
--remove-sequencing-duplicates
MarkDuplicatesSpark run locally, tagging optical duplicates using the "DT" read attribute
gatk MarkDuplicatesSpark \
-I input.bam \
-O marked_duplicates.bam \
--duplicate-tagging-policy OpticalOnly
MarkDuplicatesSpark run locally, specifying the number of executor cores. Note that if 'spark.executor.cores' is unset, Spark will use all available cores on the machine.
gatk MarkDuplicatesSpark \
-I input.bam \
-O marked_duplicates.bam \
-M marked_dup_metrics.txt \
--conf 'spark.executor.cores=5'
MarkDuplicatesSpark run on a Spark cluster with five executors, each with eight cores
gatk MarkDuplicatesSpark \
-I input.bam \
-O marked_duplicates.bam \
-M marked_dup_metrics.txt \
-- \
--spark-runner SPARK \
--spark-master MASTER_URL \
--num-executors 5 \
--executor-cores 8
Please see Picard DuplicationMetrics for detailed explanations of the output metrics.
A default read filter is automatically applied to the data by the Engine before processing by MarkDuplicatesSpark.
This table summarizes the command-line arguments that are specific to this tool. For more details on each argument, see the list below the table.
| Argument name(s) | Default value | Summary |
|---|---|---|
| Required Arguments | | |
| --input / -I | | BAM/SAM/CRAM file containing reads |
| --output / -O | | The output BAM |
| Optional Tool Arguments | | |
| --arguments_file | | Read one or more arguments files and add them to the command line |
| --bam-partition-size | 0 | Maximum number of bytes to read from a file into each partition of reads. Setting this higher will result in fewer partitions. Note that this will not be equal to the size of the partition in memory. Defaults to 0, which uses the default split size (determined by the Hadoop input format, typically the size of one HDFS block). |
| --conf | | Spark properties to set on the Spark context in the format 'property=value' |
| --disable-sequence-dictionary-validation | false | If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk! |
| --do-not-mark-unmapped-mates | false | If enabled, unmapped mates of duplicate-marked reads will not be marked as duplicates. |
| --duplicate-scoring-strategy / -DS | SUM_OF_BASE_QUALITIES | The scoring strategy for choosing the non-duplicate among candidates. |
| --duplicate-tagging-policy | DontTag | Determines how duplicate types are recorded in the DT optional attribute. |
| --gcs-max-retries / -gcs-retries | 20 | If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection |
| --gcs-project-for-requester-pays | | Project to bill when accessing "requester pays" buckets. If unset, these buckets cannot be accessed. The user must have storage.buckets.get permission on the bucket being accessed. |
| --help / -h | false | Display the help message |
| --interval-merging-rule / -imr | ALL | Interval merging rule for abutting intervals |
| --intervals / -L | | One or more genomic intervals over which to operate |
| --metrics-file / -M | | Path to write duplication metrics to |
| --num-reducers | 0 | For tools that shuffle data or write an output, sets the number of reducers. Defaults to 0, which gives one partition per 10MB of input. |
| --optical-duplicate-pixel-distance | 100 | The maximum offset between two duplicate clusters in order to consider them optical duplicates. This should usually be set to some fairly small number (e.g. 5-10 pixels) unless using later versions of the Illumina pipeline that multiply pixel values by 10, in which case 50-100 is more normal. |
| --output-shard-tmp-dir | | When writing a BAM in single-sharded mode, the directory in which to write the temporary intermediate output shards; if not specified, .parts/ will be used |
| --program-name | | Name of the program running |
| --read-name-regex | | Regular expression that can be used to parse read names in the incoming SAM file. Read names are parsed to extract three variables: tile/region, x coordinate and y coordinate. These values are used to estimate the rate of optical duplication in order to give a more accurate estimated library size. Set this option to null to disable optical duplicate detection. The regular expression should contain three capture groups for the three variables, in order. It must match the entire read name. Note that if the default regex is specified, a regex match is not actually done; instead the read name is split on the colon character. For 5-element names, the 3rd, 4th and 5th elements are assumed to be tile, x and y values. For 7-element names (CASAVA 1.8), the 5th, 6th and 7th elements are assumed to be tile, x and y values. |
| --reference / -R | | Reference sequence |
| --remove-all-duplicates | false | If true, do not write duplicates to the output file; they are omitted rather than written with the appropriate flags set. |
| --remove-sequencing-duplicates | false | If true, do not write optical/sequencing duplicates to the output file; they are omitted rather than written with the appropriate flags set. |
| --sharded-output | false | For tools that write an output, write the output in multiple pieces (shards) |
| --spark-master | local[*] | URL of the Spark Master to submit jobs to when using the Spark pipeline runner. |
| --spark-verbosity | | Spark verbosity. Overrides --verbosity for Spark-generated logs only. Possible values: {ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE} |
| --use-nio | false | Whether to use NIO or the Hadoop filesystem (default) for reading files. (Note that the Hadoop filesystem is always used for writing files.) |
| --version | false | Display the version number for this tool |
| Optional Common Arguments | | |
| --add-output-vcf-command-line | true | If true, adds a command line header line to created VCF files. |
| --create-output-bam-index / -OBI | true | If true, create a BAM index when writing a coordinate-sorted BAM file. |
| --create-output-bam-splitting-index | true | If true, create a BAM splitting index (SBI) when writing a coordinate-sorted BAM file. |
| --create-output-variant-index / -OVI | true | If true, create a VCF index when writing a coordinate-sorted VCF file. |
| --disable-read-filter / -DF | | Read filters to be disabled before analysis |
| --disable-tool-default-read-filters | false | Disable all tool default read filters (WARNING: many tools will not function correctly without their default read filters on) |
| --exclude-intervals / -XL | | One or more genomic intervals to exclude from processing |
| --gatk-config-file | | A configuration file to use with the GATK. |
| --interval-exclusion-padding / -ixp | 0 | Amount of padding (in bp) to add to each interval you are excluding. |
| --interval-padding / -ip | 0 | Amount of padding (in bp) to add to each interval you are including. |
| --interval-set-rule / -isr | UNION | Set merging approach to use for combining interval inputs |
| --inverted-read-filter / -XRF | | Inverted (with flipped acceptance/failure conditions) read filters applied before analysis (after regular read filters). |
| --QUIET | false | Whether to suppress job-summary info on System.err. |
| --read-filter / -RF | | Read filters to be applied before analysis |
| --read-index | | Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically. |
| --read-validation-stringency / -VS | SILENT | Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded. |
| --splitting-index-granularity | 4096 | Granularity to use when writing a splitting index; one entry will be put into the index every n reads, where n is this granularity value. Smaller granularity results in a larger index with more available split points. |
| --tmp-dir | | Temp directory to use. |
| --use-jdk-deflater / -jdk-deflater | false | Whether to use the JdkDeflater (as opposed to IntelDeflater) |
| --use-jdk-inflater / -jdk-inflater | false | Whether to use the JdkInflater (as opposed to IntelInflater) |
| --verbosity | INFO | Control verbosity of logging. |
| Advanced Arguments | | |
| --allow-multiple-sort-orders-in-input | false | Allow non-queryname-sorted inputs when specifying multiple input BAMs. |
| --flow-end-pos-uncertainty | 0 | Maximal difference, in bases, between read (fragment) end positions that is still considered a match (should only be applied to flow-based reads). |
| --flow-q-is-known-end | false | Treat reads (fragments) clipped on tm:Q as having a known end position (should only be applied to flow-based reads) |
| --flow-quality-sum-strategy | false | Use a specific quality-summing strategy for flow-based reads. The strategy ensures that the same (and correct) quality value is used for all bases of the same homopolymer. |
| --flow-skip-start-homopolymers | 0 | Skip the first N flows when considering duplicates (should only be applied to flow-based reads). |
| --flowbased | false | Single argument for enabling the bulk of flow-based features (should only be applied to flow-based reads). |
| --showHidden | false | Display hidden arguments |
| --single-end-reads-clipping-is-end | false | Use clipped, rather than unclipped, positions when considering duplicates (should only be applied to flow-based reads). |
| --single-end-reads-end-position-significant | false | Make the end location of a read (fragment) significant when considering duplicates, in addition to the start location, which is always significant (should only be applied to flow-based reads). |
| --treat-unsorted-as-querygroup-ordered | false | Treat unsorted files as query-group-ordered files. WARNING: this option disables a basic safety check and may result in unexpected behavior if the file is truly unordered |
Arguments in this list are specific to this tool. Keep in mind that other arguments are available that are shared with other tools (e.g. command-line GATK arguments); see Inherited arguments above.
--add-output-vcf-command-line
If true, adds a command line header line to created VCF files.
Type: boolean. Default: true.

--allow-multiple-sort-orders-in-input
Allow non-queryname-sorted inputs when specifying multiple input BAMs.
Type: boolean. Default: false.

--arguments_file
Read one or more arguments files and add them to the command line.
Type: List[File]. Default: [].

--bam-partition-size
Maximum number of bytes to read from a file into each partition of reads. Setting this higher will result in fewer partitions. Note that this will not be equal to the size of the partition in memory. Defaults to 0, which uses the default split size (determined by the Hadoop input format, typically the size of one HDFS block).
Type: long. Default: 0.

--conf
Spark properties to set on the Spark context in the format 'property=value'.
Type: List[String]. Default: [].
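For instance, --conf may be repeated to set several Spark properties in one invocation; a sketch using the standard Spark properties spark.executor.cores and spark.local.dir (the scratch path is a placeholder):
gatk MarkDuplicatesSpark \
-I input.bam \
-O marked_duplicates.bam \
--conf 'spark.executor.cores=4' \
--conf 'spark.local.dir=/path/to/spark_scratch'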
--create-output-bam-index / -OBI
If true, create a BAM index when writing a coordinate-sorted BAM file.
Type: boolean. Default: true.

--create-output-bam-splitting-index
If true, create a BAM splitting index (SBI) when writing a coordinate-sorted BAM file.
Type: boolean. Default: true.

--create-output-variant-index / -OVI
If true, create a VCF index when writing a coordinate-sorted VCF file.
Type: boolean. Default: true.

--disable-read-filter / -DF
Read filters to be disabled before analysis.
Type: List[String]. Default: [].

--disable-sequence-dictionary-validation
If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk!
Type: boolean. Default: false.

--disable-tool-default-read-filters
Disable all tool default read filters (WARNING: many tools will not function correctly without their default read filters on).
Type: boolean. Default: false.

--do-not-mark-unmapped-mates
If enabled, unmapped mates of duplicate-marked reads will not be marked as duplicates.
Type: boolean. Default: false.

--duplicate-scoring-strategy / -DS
The scoring strategy for choosing the non-duplicate among candidates.
The --duplicate-scoring-strategy argument is an enumerated type (MarkDuplicatesScoringStrategy).
Type: MarkDuplicatesScoringStrategy. Default: SUM_OF_BASE_QUALITIES.

--duplicate-tagging-policy
Determines how duplicate types are recorded in the DT optional attribute.
Exclusion: this argument cannot be used at the same time as --remove-all-duplicates or --remove-sequencing-duplicates.
The --duplicate-tagging-policy argument is an enumerated type (DuplicateTaggingPolicy), which can have one of the following values: DontTag, OpticalOnly, All.
Type: DuplicateTaggingPolicy. Default: DontTag.

--exclude-intervals / -XL
One or more genomic intervals to exclude from processing. Use this argument to exclude certain parts of the genome from the analysis (like -L, but the opposite). This argument can be specified multiple times. You can use samtools-style intervals either explicitly on the command line (e.g. -XL 1 or -XL 1:100-200) or by loading in a file containing a list of intervals (e.g. -XL myFile.intervals).
Type: List[String]. Default: [].

--flow-end-pos-uncertainty
Maximal difference, in bases, between read (fragment) end positions that is still considered a match (should only be applied to flow-based reads).
Type: int. Default: 0.

--flow-q-is-known-end
Treat reads (fragments) clipped on tm:Q as having a known end position (should only be applied to flow-based reads).
Type: boolean. Default: false.

--flow-quality-sum-strategy
Use a specific quality-summing strategy for flow-based reads. The strategy ensures that the same (and correct) quality value is used for all bases of the same homopolymer.
Type: boolean. Default: false.

--flow-skip-start-homopolymers
Skip the first N flows when considering duplicates (should only be applied to flow-based reads).
Type: int. Default: 0.

--flowbased
Single argument for enabling the bulk of flow-based features (should only be applied to flow-based reads).
Type: Boolean. Default: false.

--gatk-config-file
A configuration file to use with the GATK.
Type: String. Default: null.

--gcs-max-retries / -gcs-retries
If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection.
Type: int. Default: 20.

--gcs-project-for-requester-pays
Project to bill when accessing "requester pays" buckets. If unset, these buckets cannot be accessed. The user must have storage.buckets.get permission on the bucket being accessed.
Type: String. Default: "".

--help / -h
Display the help message.
Type: boolean. Default: false.

--input / -I (required)
BAM/SAM/CRAM file containing reads.
Type: List[GATKPath]. Default: [].

--interval-exclusion-padding / -ixp
Amount of padding (in bp) to add to each interval you are excluding. Use this to add padding to the intervals specified using -XL. For example, '-XL 1:100' with a padding value of 20 would turn into '-XL 1:80-120'. This is typically used to add padding around targets when analyzing exomes.
Type: int. Default: 0.

--interval-merging-rule / -imr
Interval merging rule for abutting intervals. By default, the program merges abutting intervals (i.e. intervals that are directly side-by-side but do not actually overlap) into a single continuous interval. However, you can change this behavior if you want them to be treated as separate intervals instead.
The --interval-merging-rule argument is an enumerated type (IntervalMergingRule), which can have one of the following values: ALL, OVERLAPPING_ONLY.
Type: IntervalMergingRule. Default: ALL.

--interval-padding / -ip
Amount of padding (in bp) to add to each interval you are including. Use this to add padding to the intervals specified using -L. For example, '-L 1:100' with a padding value of 20 would turn into '-L 1:80-120'. This is typically used to add padding around targets when analyzing exomes.
Type: int. Default: 0.

--interval-set-rule / -isr
Set merging approach to use for combining interval inputs. By default, the program will take the UNION of all intervals specified using -L and/or -XL. However, you can change this setting for -L, for example if you want to take the INTERSECTION of the sets instead. E.g. to perform the analysis only on chromosome 1 exomes, you could specify -L exomes.intervals -L 1 --interval-set-rule INTERSECTION. However, it is not possible to modify the merging approach for intervals passed using -XL (they will always be merged using UNION). Note that if you specify both -L and -XL, the -XL interval set will be subtracted from the -L interval set.
The --interval-set-rule argument is an enumerated type (IntervalSetRule), which can have one of the following values: UNION, INTERSECTION.
Type: IntervalSetRule. Default: UNION.

--intervals / -L
One or more genomic intervals over which to operate.
Type: List[String]. Default: [].

--inverted-read-filter / -XRF
Inverted (with flipped acceptance/failure conditions) read filters applied before analysis (after regular read filters).
Type: List[String]. Default: [].

--metrics-file / -M
Path to write duplication metrics to.
Type: String. Default: null.

--num-reducers
For tools that shuffle data or write an output, sets the number of reducers. Defaults to 0, which gives one partition per 10MB of input.
Type: int. Default: 0.

--optical-duplicate-pixel-distance
The maximum offset between two duplicate clusters in order to consider them optical duplicates. This should usually be set to some fairly small number (e.g. 5-10 pixels) unless using later versions of the Illumina pipeline that multiply pixel values by 10, in which case 50-100 is more normal.
Type: int. Default: 100.
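For example, for patterned flow cells, a larger distance (2500 is a commonly used value) may be appropriate; a sketch with placeholder file names:
gatk MarkDuplicatesSpark \
-I input.bam \
-O marked_duplicates.bam \
-M marked_dup_metrics.txt \
--optical-duplicate-pixel-distance 2500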
--output / -O (required)
The output BAM.
Type: String. Default: null.

--output-shard-tmp-dir
When writing a BAM in single-sharded mode, the directory in which to write the temporary intermediate output shards; if not specified, .parts/ will be used.
Exclusion: this argument cannot be used at the same time as --sharded-output.
Type: String. Default: null.

--program-name
Name of the program running.
Type: String. Default: null.

--QUIET
Whether to suppress job-summary info on System.err.
Type: Boolean. Default: false.

--read-filter / -RF
Read filters to be applied before analysis.
Type: List[String]. Default: [].

--read-index
Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically.
Type: List[GATKPath]. Default: [].

--read-name-regex
Regular expression that can be used to parse read names in the incoming SAM file. Read names are parsed to extract three variables: tile/region, x coordinate and y coordinate. These values are used to estimate the rate of optical duplication in order to give a more accurate estimated library size. Set this option to null to disable optical duplicate detection. The regular expression should contain three capture groups for the three variables, in order. It must match the entire read name. Note that if the default regex is specified, a regex match is not actually done; instead the read name is split on the colon character. For 5-element names, the 3rd, 4th and 5th elements are assumed to be tile, x and y values. For 7-element names (CASAVA 1.8), the 5th, 6th and 7th elements are assumed to be tile, x and y values.
Type: String.
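As an illustration only, a custom regex roughly equivalent to the default 5-element colon splitting might look like the following; this pattern is hypothetical and should be validated against your instrument's read names before use:
gatk MarkDuplicatesSpark \
-I input.bam \
-O marked_duplicates.bam \
-M marked_dup_metrics.txt \
--read-name-regex '[^:]+:[^:]+:([0-9]+):([0-9]+):([0-9]+)'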
--read-validation-stringency / -VS
Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.
The --read-validation-stringency argument is an enumerated type (ValidationStringency), which can have one of the following values: STRICT, LENIENT, SILENT.
Type: ValidationStringency. Default: SILENT.

--reference / -R
Reference sequence.
Type: GATKPath. Default: null.

--remove-all-duplicates
If true, do not write duplicates to the output file; they are omitted rather than written with the appropriate flags set.
Exclusion: this argument cannot be used at the same time as --duplicate-tagging-policy or --remove-sequencing-duplicates.
Type: boolean. Default: false.
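For example, to omit every duplicate read from the output entirely rather than flag it (file names are placeholders):
gatk MarkDuplicatesSpark \
-I input.bam \
-O deduplicated.bam \
--remove-all-duplicates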
--remove-sequencing-duplicates
If true, do not write optical/sequencing duplicates to the output file; they are omitted rather than written with the appropriate flags set.
Exclusion: this argument cannot be used at the same time as --duplicate-tagging-policy or --remove-all-duplicates.
Type: boolean. Default: false.

--sharded-output
For tools that write an output, write the output in multiple pieces (shards).
Exclusion: this argument cannot be used at the same time as --output-shard-tmp-dir.
Type: boolean. Default: false.

--showHidden
Display hidden arguments.
Type: boolean. Default: false.

--single-end-reads-clipping-is-end
Use clipped, rather than unclipped, positions when considering duplicates (should only be applied to flow-based reads).
Type: boolean. Default: false.

--single-end-reads-end-position-significant
Make the end location of a read (fragment) significant when considering duplicates, in addition to the start location, which is always significant (should only be applied to flow-based reads).
Type: boolean. Default: false.

--spark-master
URL of the Spark Master to submit jobs to when using the Spark pipeline runner.
Type: String. Default: local[*].
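For example, following the invocation pattern in the usage examples above, local parallelism could be capped at four worker threads by overriding the default local[*] master; a sketch with placeholder file names:
gatk MarkDuplicatesSpark \
-I input.bam \
-O marked_duplicates.bam \
-- \
--spark-master local[4]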
--spark-verbosity
Spark verbosity. Overrides --verbosity for Spark-generated logs only. Possible values: {ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE}.
Type: String. Default: null.

--splitting-index-granularity
Granularity to use when writing a splitting index; one entry will be put into the index every n reads, where n is this granularity value. Smaller granularity results in a larger index with more available split points.
Type: long. Default: 4096. Range: [1, ∞].

--tmp-dir
Temp directory to use.
Type: GATKPath. Default: null.

--treat-unsorted-as-querygroup-ordered
Treat unsorted files as query-group-ordered files. WARNING: this option disables a basic safety check and may result in unexpected behavior if the file is truly unordered.
Type: boolean. Default: false.

--use-jdk-deflater / -jdk-deflater
Whether to use the JdkDeflater (as opposed to IntelDeflater).
Type: boolean. Default: false.

--use-jdk-inflater / -jdk-inflater
Whether to use the JdkInflater (as opposed to IntelInflater).
Type: boolean. Default: false.

--use-nio
Whether to use NIO or the Hadoop filesystem (default) for reading files. (Note that the Hadoop filesystem is always used for writing files.)
Type: boolean. Default: false.

--verbosity
Control verbosity of logging.
The --verbosity argument is an enumerated type (LogLevel), which can have one of the following values: ERROR, WARNING, INFO, DEBUG.
Type: LogLevel. Default: INFO.

--version
Display the version number for this tool.
Type: boolean. Default: false.
See also: General Documentation | Tool Documentation Index | Support Forum
GATK version 4.6.2.0 built at Sun, 13 Apr 2025 13:21:43 -0400.