Channel: Ask the FireCloud Team — GATK-Forum

Using gsutil -m: how do I specify multiple sources and multiple destinations


I want to download 15,000 files from my FireCloud bucket, all of which have the same filename but different paths, e.g. gs://fc-mybucket/MyTool/*/MySubtask/output.vcf

It's too slow to make 15,000 calls to gsutil as follows: gsutil cp gs://fc-mybucket/MyTool/ID1/MySubtask/output.vcf ID1.output.vcf

Using the -m option as follows doesn't work:
gsutil cp -m gs://fc-mybucket/MyTool/*/MySubtask/output.vcf mydir/

because every source file gets copied onto the same destination name, mydir/output.vcf.

I also tried using -r:
gsutil cp -m -r gs://fc-mybucket/MyTool/*/MySubtask/output.vcf mydir/
But this also fails to create the necessary subdirectories.

I would like to specify two input files: a list of 15000 source paths and a list of 15000 destination paths.

Any suggestions of how to download and rename all my files in parallel?
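One approach worth sketching (not an official FireCloud recommendation): write the source/destination pairs into a two-column manifest and fan the copies out with xargs, so each gsutil invocation handles one rename but many run at once. The manifest name and the parallelism level below are arbitrary.

    # manifest.tsv: one "<gs:// source> <local destination>" pair per line, e.g.
    #   gs://fc-mybucket/MyTool/ID1/MySubtask/output.vcf   mydir/ID1.output.vcf
    mkdir -p mydir
    xargs -n 2 -P 16 gsutil cp < manifest.tsv

Each copy still pays gsutil's startup cost, but running 16 or more at a time usually makes a batch of 15,000 small downloads tractable.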


Issues with pairsets in ABSOLUTE


Hello, I'm running ABSOLUTE and having issues running it on a pairset. I can run it successfully on individual pairs, but it fails when run on a pairset using this.pairs. I am able to use the same pairset with this.pairs to run the CNV and model-segments post-processing workflows. A colleague of mine has been able to use the ABSOLUTE method with a pairset, so I'm not sure where this is failing. The error message in the log file is not helpful.

Create directory command in WDL


Hi! One of the required inputs to my tool is an output directory (--outdir), so I'm trying to add a command to create a new directory within my WDL script (mkdir ${outdir}), but it doesn't seem to work. First, is it even possible? And if so, how do I do it? Thanks!
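For reference, a minimal sketch of the pattern (the tool name, image, and output filename are placeholders): create the directory inside the command block before invoking the tool, and declare the individual files you need back as File outputs, since draft-2 WDL has no directory output type.

    task RunMyTool {
      File input_file
      String outdir = "out"

      command {
        mkdir -p ${outdir}
        my_tool --input ${input_file} --outdir ${outdir}
      }
      output {
        File result = "${outdir}/result.txt"
      }
      runtime {
        docker: "my_org/my_tool:latest"
      }
    }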

Defining multiple outputs


Hi! My command creates multiple output files and I'm not sure how to properly define them in the output section of my WDL script.

Here is the script:

Rscript ${pcn_extdata}/Coverage.R \
--outdir ${outdir} \
--bam ${bam} \
--intervals ${intervals}

and my outputs will be:
File png = "${BAM_pre}_coverage_loess.png"
File qc = "${BAM_pre}_coverage_loess_qc.txt"
File loess = "${BAM_pre}_coverage_loess.txt"
File cov = "${BAM_pre}_coverage.txt"
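For reference, a sketch of how the whole task might fit together, assuming BAM_pre is supplied as a plain String matching the prefix Coverage.R uses, and that --outdir points at the working directory so the files land where Cromwell looks for outputs:

    task Coverage {
      File bam
      File intervals
      String pcn_extdata
      String BAM_pre    # e.g. the bam filename without its extension

      command {
        Rscript ${pcn_extdata}/Coverage.R \
          --outdir . \
          --bam ${bam} \
          --intervals ${intervals}
      }
      output {
        File png   = "${BAM_pre}_coverage_loess.png"
        File qc    = "${BAM_pre}_coverage_loess_qc.txt"
        File loess = "${BAM_pre}_coverage_loess.txt"
        File cov   = "${BAM_pre}_coverage.txt"
      }
    }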

Also, here is the log file.

2019/03/13 17:42:44 I: Switching to status: pulling-image
2019/03/13 17:42:44 I: Calling SetOperationStatus(pulling-image)
2019/03/13 17:42:44 I: SetOperationStatus(pulling-image) succeeded
2019/03/13 17:42:44 I: Writing new Docker configuration file
2019/03/13 17:42:44 I: Pulling image "bioconductor/devel_core2@sha256:dd9b36d719ef00b7c821c5245e997e6ec7d8a71b2bcfeb5ba8132793a604e18f"
2019/03/13 17:45:09 I: Pulled image "bioconductor/devel_core2@sha256:dd9b36d719ef00b7c821c5245e997e6ec7d8a71b2bcfeb5ba8132793a604e18f" successfully.
2019/03/13 17:45:09 I: Switching to status: localizing-files
2019/03/13 17:45:09 I: Calling SetOperationStatus(localizing-files)
2019/03/13 17:45:09 I: SetOperationStatus(localizing-files) succeeded
2019/03/13 17:45:09 I: Docker file /cromwell_root/5aa919de-0aa0-43ec-9ec3-288481102b6d/tcga/OV/WGA_RepliG/WXS/BI/ILLUMINA/C239.TCGA-09-0365-10A-01W.6.bam maps to host location /mnt/local-disk/5aa919de-0aa0-43ec-9ec3-288481102b6d/tcga/OV/WGA_RepliG/WXS/BI/ILLUMINA/C239.TCGA-09-0365-10A-01W.6.bam.
2019/03/13 17:45:09 I: Running command: sudo gsutil -q -m cp gs://5aa919de-0aa0-43ec-9ec3-288481102b6d/tcga/OV/WGA_RepliG/WXS/BI/ILLUMINA/C239.TCGA-09-0365-10A-01W.6.bam /mnt/local-disk/5aa919de-0aa0-43ec-9ec3-288481102b6d/tcga/OV/WGA_RepliG/WXS/BI/ILLUMINA/C239.TCGA-09-0365-10A-01W.6.bam
2019/03/13 17:56:34 I: Docker file /cromwell_root/5aa919de-0aa0-43ec-9ec3-288481102b6d/tcga/OV/WGA_RepliG/WXS/BI/ILLUMINA/C239.TCGA-09-0365-10A-01W.6.bam.bai maps to host location /mnt/local-disk/5aa919de-0aa0-43ec-9ec3-288481102b6d/tcga/OV/WGA_RepliG/WXS/BI/ILLUMINA/C239.TCGA-09-0365-10A-01W.6.bam.bai.
2019/03/13 17:56:34 I: Running command: sudo gsutil -q -m cp gs://5aa919de-0aa0-43ec-9ec3-288481102b6d/tcga/OV/WGA_RepliG/WXS/BI/ILLUMINA/C239.TCGA-09-0365-10A-01W.6.bam.bai /mnt/local-disk/5aa919de-0aa0-43ec-9ec3-288481102b6d/tcga/OV/WGA_RepliG/WXS/BI/ILLUMINA/C239.TCGA-09-0365-10A-01W.6.bam.bai
2019/03/13 17:56:35 I: Docker file /cromwell_root/fc-secure-0893eb66-fffa-4cc2-a919-5c803249c3b9/Test/whole_exome_agilent_1.1_hg19_gcgene.txt maps to host location /mnt/local-disk/fc-secure-0893eb66-fffa-4cc2-a919-5c803249c3b9/Test/whole_exome_agilent_1.1_hg19_gcgene.txt.
2019/03/13 17:56:35 I: Running command: sudo gsutil -q -m cp gs://fc-secure-0893eb66-fffa-4cc2-a919-5c803249c3b9/Test/whole_exome_agilent_1.1_hg19_gcgene.txt /mnt/local-disk/fc-secure-0893eb66-fffa-4cc2-a919-5c803249c3b9/Test/whole_exome_agilent_1.1_hg19_gcgene.txt
2019/03/13 17:56:37 I: Docker file /cromwell_root/script maps to host location /mnt/local-disk/script.
2019/03/13 17:56:37 I: Running command: sudo gsutil -q -m cp gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/script /mnt/local-disk/script
2019/03/13 17:56:39 I: Done copying files.
2019/03/13 17:56:39 I: Switching to status: running-docker
2019/03/13 17:56:39 I: Calling SetOperationStatus(running-docker)
2019/03/13 17:56:39 I: SetOperationStatus(running-docker) succeeded
2019/03/13 17:56:39 I: Setting these data volumes on the docker container: [-v /tmp/ggp-199678580:/tmp/ggp-199678580 -v /mnt/local-disk:/cromwell_root]
2019/03/13 17:56:39 I: Running command: docker run -v /tmp/ggp-199678580:/tmp/ggp-199678580 -v /mnt/local-disk:/cromwell_root -e __extra_config_gcs_path=gs://cromwell-auth-fccredits-thorium-corn-3153/db196b97-e2ce-4c6a-b27e-ca337096522f_auth.json -e C239.TCGA-09-0365-10A-01W.6_coverage_loess_qc.txt=/cromwell_root/C239.TCGA-09-0365-10A-01W.6_coverage_loess_qc.txt -e exec=/cromwell_root/script -e C239.TCGA-09-0365-10A-01W.6_coverage_loess.txt=/cromwell_root/C239.TCGA-09-0365-10A-01W.6_coverage_loess.txt -e coverage.Coverage.intervals-0=/cromwell_root/fc-secure-0893eb66-fffa-4cc2-a919-5c803249c3b9/Test/whole_exome_agilent_1.1_hg19_gcgene.txt -e coverage.Coverage.bai-0=/cromwell_root/5aa919de-0aa0-43ec-9ec3-288481102b6d/tcga/OV/WGA_RepliG/WXS/BI/ILLUMINA/C239.TCGA-09-0365-10A-01W.6.bam.bai -e stderr=/cromwell_root/stderr -e C239.TCGA-09-0365-10A-01W.6_coverage_loess.png=/cromwell_root/C239.TCGA-09-0365-10A-01W.6_coverage_loess.png -e C239.TCGA-09-0365-10A-01W.6_coverage.txt=/cromwell_root/C239.TCGA-09-0365-10A-01W.6_coverage.txt -e stdout=/cromwell_root/stdout -e coverage.Coverage.bam-0=/cromwell_root/5aa919de-0aa0-43ec-9ec3-288481102b6d/tcga/OV/WGA_RepliG/WXS/BI/ILLUMINA/C239.TCGA-09-0365-10A-01W.6.bam -e rc=/cromwell_root/rc bioconductor/devel_core2@sha256:dd9b36d719ef00b7c821c5245e997e6ec7d8a71b2bcfeb5ba8132793a604e18f /tmp/ggp-199678580
2019/03/13 18:46:53 I: Switching to status: delocalizing-files
2019/03/13 18:46:53 I: Calling SetOperationStatus(delocalizing-files)
2019/03/13 18:46:53 I: SetOperationStatus(delocalizing-files) succeeded
2019/03/13 18:46:53 I: Docker file /cromwell_root/C239.TCGA-09-0365-10A-01W.6_coverage.txt maps to host location /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage.txt.
2019/03/13 18:46:53 I: Running command: sudo gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage.txt gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage.txt
2019/03/13 18:46:54 E: command failed: CommandException: No URLs matched: /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage.txt
CommandException: 1 file/object could not be transferred.
 (exit status 1)
2019/03/13 18:46:54 W: cp failed: gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage.txt gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage.txt, command failed: CommandException: No URLs matched: /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage.txt
CommandException: 1 file/object could not be transferred.

2019/03/13 18:46:55 I: Running command: sudo gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage.txt gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage.txt
2019/03/13 18:46:56 E: command failed: CommandException: No URLs matched: /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage.txt
CommandException: 1 file/object could not be transferred.
 (exit status 1)
2019/03/13 18:46:56 W: cp failed: gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage.txt gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage.txt, command failed: CommandException: No URLs matched: /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage.txt
CommandException: 1 file/object could not be transferred.

2019/03/13 18:46:58 I: Running command: sudo gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage.txt gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage.txt
2019/03/13 18:46:59 E: command failed: CommandException: No URLs matched: /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage.txt
CommandException: 1 file/object could not be transferred.
 (exit status 1)
2019/03/13 18:46:59 W: cp failed: gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage.txt gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage.txt, command failed: CommandException: No URLs matched: /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage.txt
CommandException: 1 file/object could not be transferred.

2019/03/13 18:47:02 I: Switching to status: copied 0 file(s) to "gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage.txt"
2019/03/13 18:47:02 I: Calling SetOperationStatus(copied 0 file(s) to "gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage.txt")
2019/03/13 18:47:02 I: SetOperationStatus(copied 0 file(s) to "gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage.txt") succeeded
2019/03/13 18:47:02 I: Docker file /cromwell_root/stdout maps to host location /mnt/local-disk/stdout.
2019/03/13 18:47:02 I: Running command: sudo gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/stdout gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/stdout
2019/03/13 18:47:03 I: Deleting log file
2019/03/13 18:47:03 I: Running command: sudo rm -f /var/log/google-genomics/out.log
2019/03/13 18:47:03 I: Switching to status: copied 1 file(s) to "gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/stdout"
2019/03/13 18:47:03 I: Calling SetOperationStatus(copied 1 file(s) to "gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/stdout")
2019/03/13 18:47:03 I: SetOperationStatus(copied 1 file(s) to "gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/stdout") succeeded
2019/03/13 18:47:03 I: Docker file /cromwell_root/C239.TCGA-09-0365-10A-01W.6_coverage_loess.png maps to host location /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.png.
2019/03/13 18:47:03 I: Running command: sudo gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.png gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess.png
2019/03/13 18:47:04 E: command failed: CommandException: No URLs matched: /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.png
CommandException: 1 file/object could not be transferred.
 (exit status 1)
2019/03/13 18:47:04 W: cp failed: gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.png gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess.png, command failed: CommandException: No URLs matched: /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.png
CommandException: 1 file/object could not be transferred.

2019/03/13 18:47:06 I: Running command: sudo gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.png gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess.png
2019/03/13 18:47:07 E: command failed: CommandException: No URLs matched: /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.png
CommandException: 1 file/object could not be transferred.
 (exit status 1)
2019/03/13 18:47:07 W: cp failed: gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.png gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess.png, command failed: CommandException: No URLs matched: /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.png
CommandException: 1 file/object could not be transferred.

2019/03/13 18:47:09 I: Running command: sudo gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.png gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess.png
2019/03/13 18:47:10 E: command failed: CommandException: No URLs matched: /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.png
CommandException: 1 file/object could not be transferred.
 (exit status 1)
2019/03/13 18:47:10 W: cp failed: gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.png gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess.png, command failed: CommandException: No URLs matched: /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.png
CommandException: 1 file/object could not be transferred.

2019/03/13 18:47:12 I: Switching to status: copied 0 file(s) to "gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess.png"
2019/03/13 18:47:12 I: Calling SetOperationStatus(copied 0 file(s) to "gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess.png")
2019/03/13 18:47:12 I: SetOperationStatus(copied 0 file(s) to "gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess.png") succeeded
2019/03/13 18:47:12 I: Docker file /cromwell_root/C239.TCGA-09-0365-10A-01W.6_coverage_loess_qc.txt maps to host location /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess_qc.txt.
2019/03/13 18:47:12 I: Running command: sudo gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess_qc.txt gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess_qc.txt
2019/03/13 18:47:13 E: command failed: CommandException: No URLs matched: /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess_qc.txt
CommandException: 1 file/object could not be transferred.
 (exit status 1)
2019/03/13 18:47:13 W: cp failed: gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess_qc.txt gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess_qc.txt, command failed: CommandException: No URLs matched: /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess_qc.txt
CommandException: 1 file/object could not be transferred.

2019/03/13 18:47:14 I: Running command: sudo gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess_qc.txt gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess_qc.txt
2019/03/13 18:47:15 E: command failed: CommandException: No URLs matched: /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess_qc.txt
CommandException: 1 file/object could not be transferred.
 (exit status 1)
2019/03/13 18:47:15 W: cp failed: gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess_qc.txt gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess_qc.txt, command failed: CommandException: No URLs matched: /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess_qc.txt
CommandException: 1 file/object could not be transferred.

2019/03/13 18:47:17 I: Running command: sudo gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess_qc.txt gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess_qc.txt
2019/03/13 18:47:18 E: command failed: CommandException: No URLs matched: /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess_qc.txt
CommandException: 1 file/object could not be transferred.
 (exit status 1)
2019/03/13 18:47:18 W: cp failed: gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess_qc.txt gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess_qc.txt, command failed: CommandException: No URLs matched: /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess_qc.txt
CommandException: 1 file/object could not be transferred.

2019/03/13 18:47:21 I: Switching to status: copied 0 file(s) to "gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess_qc.txt"
2019/03/13 18:47:21 I: Calling SetOperationStatus(copied 0 file(s) to "gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess_qc.txt")
2019/03/13 18:47:21 I: SetOperationStatus(copied 0 file(s) to "gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess_qc.txt") succeeded
2019/03/13 18:47:21 I: Docker file /cromwell_root/stderr maps to host location /mnt/local-disk/stderr.
2019/03/13 18:47:21 I: Running command: sudo gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/stderr gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/stderr
2019/03/13 18:47:22 I: Deleting log file
2019/03/13 18:47:22 I: Running command: sudo rm -f /var/log/google-genomics/out.log
2019/03/13 18:47:22 I: Switching to status: copied 1 file(s) to "gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/stderr"
2019/03/13 18:47:22 I: Calling SetOperationStatus(copied 1 file(s) to "gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/stderr")
2019/03/13 18:47:22 I: SetOperationStatus(copied 1 file(s) to "gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/stderr") succeeded
2019/03/13 18:47:22 I: Docker file /cromwell_root/C239.TCGA-09-0365-10A-01W.6_coverage_loess.txt maps to host location /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.txt.
2019/03/13 18:47:22 I: Running command: sudo gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.txt gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess.txt
2019/03/13 18:47:23 E: command failed: CommandException: No URLs matched: /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.txt
CommandException: 1 file/object could not be transferred.
 (exit status 1)
2019/03/13 18:47:23 W: cp failed: gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.txt gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess.txt, command failed: CommandException: No URLs matched: /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.txt
CommandException: 1 file/object could not be transferred.

2019/03/13 18:47:25 I: Running command: sudo gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.txt gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess.txt
2019/03/13 18:47:26 E: command failed: CommandException: No URLs matched: /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.txt
CommandException: 1 file/object could not be transferred.
 (exit status 1)
2019/03/13 18:47:26 W: cp failed: gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.txt gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess.txt, command failed: CommandException: No URLs matched: /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.txt
CommandException: 1 file/object could not be transferred.

2019/03/13 18:47:27 I: Running command: sudo gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.txt gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess.txt
2019/03/13 18:47:28 E: command failed: CommandException: No URLs matched: /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.txt
CommandException: 1 file/object could not be transferred.
 (exit status 1)
2019/03/13 18:47:28 W: cp failed: gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.txt gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess.txt, command failed: CommandException: No URLs matched: /mnt/local-disk/C239.TCGA-09-0365-10A-01W.6_coverage_loess.txt
CommandException: 1 file/object could not be transferred.

2019/03/13 18:47:31 I: Switching to status: copied 0 file(s) to "gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess.txt"
2019/03/13 18:47:31 I: Calling SetOperationStatus(copied 0 file(s) to "gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess.txt")
2019/03/13 18:47:31 I: SetOperationStatus(copied 0 file(s) to "gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/C239.TCGA-09-0365-10A-01W.6_coverage_loess.txt") succeeded
2019/03/13 18:47:31 I: Docker file /cromwell_root/rc maps to host location /mnt/local-disk/rc.
2019/03/13 18:47:31 I: Running command: sudo gsutil -q -m cp -L /var/log/google-genomics/out.log /mnt/local-disk/rc gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/rc
2019/03/13 18:47:32 I: Deleting log file
2019/03/13 18:47:32 I: Running command: sudo rm -f /var/log/google-genomics/out.log
2019/03/13 18:47:32 I: Switching to status: copied 1 file(s) to "gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/rc"
2019/03/13 18:47:32 I: Calling SetOperationStatus(copied 1 file(s) to "gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/rc")
2019/03/13 18:47:33 I: SetOperationStatus(copied 1 file(s) to "gs://fc-secure-7796a61e-c9f1-415c-81eb-174b72d40092/ff6c1564-c389-47e3-87ce-b96c4daa1096/coverage/db196b97-e2ce-4c6a-b27e-ca337096522f/call-Coverage/rc") succeeded

Questions about the very basics, please help


Hi there,

I've been trying my best to work through the tutorials, but it's been hard for me to follow.

I'd like to do a very simple task just to make sure I understand the parts correctly.

Let's say I'm a five-year-old who knows Docker but not much else. I want to sign into Terra and run an analysis. The analysis simply accesses a TCGA data set from FireCloud (say any FPKM file), multiplies the read_counts by 2, and saves the result to my Google bucket.

How do I do this?

Can we start with the first step? How do I make a TCGA FPKM file that is hosted on FireCloud accessible to my method?

Please explain like I'm five. Pictures with code would be greatly appreciated.

Thank you!

Oauth Authentication for FireCloud via API calls


Hi There,

We are integrating our system with FireCloud via its APIs (https://api.firecloud.org/). To call the APIs we need to provide an OAuth token. While testing via Swagger it lets me log in using my email ID, but how do we get this token programmatically? I can see only two APIs under the OAuth list in Swagger:
POST /handle-oauth-code
GET /api/refresh-token-status

The refresh-token endpoint looks fine, but how do we trigger handle-oauth-code, and what should the redirectUri be for this? Looking for urgent help on this.
Note: I'm exploring FISS too but for now we need to integrate using these API calls.
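As a sketch only, assuming the calling account is already registered with FireCloud and authorized through the Google SDK, a standard Google OAuth bearer token can be passed in the Authorization header; the workspaces listing is used here purely as an example endpoint:

    TOKEN=$(gcloud auth print-access-token)
    curl -H "Authorization: Bearer ${TOKEN}" "https://api.firecloud.org/api/workspaces"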

No machines available error


Hi,

I'm trying to interpret the following error:

 The job was stopped before the command finished. PAPI error code 5. no machines available

Does this mean no machines were available from the cloud, or that I have an invalid configuration in my WDL?

I've tested my WDL locally and it runs fine. It's available here, if it helps: https://github.com/edawson/bammer/blob/master/bammer.wdl

Deleting Project Data


Hello,

I attempted to delete a project and move the data to archival storage; however, my Google Cloud console activity has reported "Failed: Update Bucket" over the many days since I submitted the request. I think I may still be being charged for data storage, but now I'm blocked from accessing the bucket? Please advise!

Thanks
Jason Gallant


Does FireCloud always allocate VMs from Google's us-central1 region?


We want to move some data off of our workspaces' multiregional cloud storage to regional storage in order to reduce our storage costs. To continue running analyses on the data residing in regional storage, the VMs that FireCloud/Cromwell launches need to be in the same region as the regional storage. Are the VMs provisioned by FireCloud to run tasks always in the us-central1 region? Is this the default region? Is there a way of overriding Cromwell's default region?
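For what it's worth, Cromwell's Google backend exposes a zones runtime attribute that controls where a task's VM is created; whether FireCloud honors it for a given deployment is something to confirm with the team. A sketch (the image, sizing, and zone list are placeholders):

    runtime {
      docker: "ubuntu:16.04"
      memory: "4 GB"
      disks: "local-disk 50 HDD"
      zones: "us-east1-b us-east1-c us-east1-d"
    }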

BUG: Groups addUserToGroup API response does not conform to swagger docs.


The swagger documentation indicates that a 404 response indicates a User not found error. However entering an unknown user elicits a 400 response. A 404 response is given when the group does not exist.

How do I do a bulk download of multiple methods?

I am using the firecloud online portal and I need to download all the methods from my search result. Is there a way to do this either from the online portal or from the command line?

WDL cannot find parameter file that exists in same google bucket


I'm trying to run a simple WDL that runs "samtools mpileup", where one WDL parameter is the name of an intervals file. From FireCloud I enter the intervals filename as follows
" gs://fc-ca79cf19-a640-44f1-beed-751b14874ad2/misc_files/testing.intervals"

This testing.intervals is a simple text file that I uploaded to the google bucket using gsutil. I know the file exists at that location because a "gsutil cp gs://fc-ca79cf19-a640-44f1-beed-751b14874ad2/misc_files/testing.intervals ." command works fine.

However, when I run my WDL (in the same Google bucket), I get the following error:

samtools mpileup: Could not read file "gs://fc-ca79cf19-a640-44f1-beed-751b14874ad2/misc_files/testing.intervals": No such file or directory

Any advice?
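One detail the sketch below illustrates (the task shape and filenames are assumptions): a gs:// path is only turned into a local file when the corresponding WDL input is declared as a File, in which case Cromwell localizes it before the command runs; a String input is passed to the command verbatim, and samtools cannot read a gs:// URL, which is consistent with the error above.

    task Mpileup {
      File bam
      File bam_index
      File intervals    # declared as File so the gs:// object is localized before the command runs

      command {
        samtools mpileup -l ${intervals} ${bam} > pileup.txt
      }
      output {
        File pileup = "pileup.txt"
      }
    }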

Done in the VM, but no output on firecloud

limit swap space when running WDL using Terra


I've encountered a case where if a user doesn't specify enough RAM to the STAR aligner and STAR has to use swap space, the task will get stuck running indefinitely. Is there a way to limit swap? Thanks.

Samtools 'non-existent file' error stops the gatk4-germline-snps-indel/joint-discovery-gatk4 workflow

Hello,
I am trying to run a version of the joint-discovery-gatk4-local workflow slightly adjusted to run with a SLURM backend (I am running with gatk 4.0.12.0; the json and wdl files are both based on github.com/gatk-workflows/gatk4-germline-snps-indels 'local' version). When running with enough samples to trigger the scatter-gather of the metrics, the workflow stops at the "GatherMetrics" step. I get this error message:
htsjdk.samtools.SAMException: Cannot read non-existent file: file:///test_joint-call/cromwell-executions/JointGenotyping/0c5fec3d-ae6a-4740-b991-3c5832c36315/call-GatherMetrics/inputs/-343490749/test3000.0.variant_calling_detail_metrics.variant_calling_detail_metrics

This file (with the double suffix) is indeed non-existent, but the file test3000.0.variant_calling_detail_metrics does exist in the right location. And in the command line featured in the logs, the filename is correct, and points to an existing and readable file:

```
Using GATK jar /cvmfs/soft.computecanada.ca/easybuild/software/2017/Core/gatk/4.0.12.0/gatk-package-4.0.12.0-local.jar
Running:
java -Dsamjdk.use_async_io_read_samtools=false -Dsamjdk.use_async_io_write_samtools=true -Dsamjdk.use_async_io_write_tribble=false -Dsamjdk.compression_level=2 -Xmx2g -Xms2g -jar /cvmfs/soft.computecanada.ca/easybuild/software/2017/Core/gatk/4.0.12.0/gatk-package-4.0.12.0-local.jar AccumulateVariantCallingMetrics --INPUT /test_joint-call/cromwell-executions/JointGenotyping/0c5fec3d-ae6a-4740-b991-3c5832c36315/call-GatherMetrics/inputs/-343490749/test3000.0.variant_calling_detail_metrics --INPUT [... follows a long list of input shards ...] --OUTPUT test3000
```

Have you seen such a problem before? Do you know how to solve it? Those files are generated and named automatically; it would be strange if there were really a problem reading one.

Many thanks,

Frederic

Issue with CPTAC2 Authorization


According to dbGaP, I'm authorized on TCGA, TARGET, CPTAC2, and CPTAC3:

The GDC agrees:

But FireCloud does not:

I'm going to need to do work in FireCloud with CPTAC2 data soon, so it would be good to get this resolved.

storage.objects.list access error for freecredit project


Hi,
I ran a modified 3-Joint-Discovery method from the workspace help-gatk/Germline-SNPs-Indels-GATK4-hg38. The GVCF input files failed to import. Because it's a trial account, I couldn't modify the IAM settings of the data bucket. I've searched the storage.objects.list error topics and found the ACL-setting tricks, but I'm not sure which part of the workflow I should modify.

2019/03/25 05:17:02 I: Running command: sudo gsutil -q -m cp gs://fc-12f21f92-dfe5-451e-bca8-95552bd85f03/4665e834-aac7-40b7-b28f-8d574cf5be00/HaplotypeCallerGvcf_GATK4/f206209b-af79-482b-8f5b-77839edb52fb/call-MergeGVCFs/Sample_100.SpinachV2pseudo.aligned.duplicate_marked.sorted.g.vcf.gz /mnt/local-disk/fc-12f21f92-dfe5-451e-bca8-95552bd85f03/4665e834-aac7-40b7-b28f-8d574cf5be00/HaplotypeCallerGvcf_GATK4/f206209b-af79-482b-8f5b-77839edb52fb/call-MergeGVCFs/Sample_100.SpinachV2pseudo.aligned.duplicate_marked.sorted.g.vcf.gz
2019/03/25 05:17:05 E: command failed: AccessDeniedException: 403 pet-255319754645899bdec02@fccredits-carbon-gold-4123.iam.gserviceaccount.com does not have storage.objects.list access to fc-12f21f92-dfe5-451e-bca8-95552bd85f03.
CommandException: 1 file/object could not be transferred.

monitoring_image documentation

Firecloud jobs stopping unexpectedly - PAPI error code 10. 14


I have submitted a large computation to FC with ~4000 shards across 4 calls, all tasks allowing a few preemptibles. I noticed a number of shards eventually failing with the following code:

message: Task workflowAssembly.qcQualityHuman:160:3 failed. The job was stopped before the command finished.
PAPI error code 10. 14: VM ggp-9876237950776228486 stopped unexpectedly.

I know that this has been reported previously by various users. Did you ever figure out why this happens and how to prevent it? I call-cache my results, but a very large amount of data gets copied every time I re-run to successfully process the failed shards and eventually aggregate my results.

Damian@Broad

Tasks queued in Cromwell


I started a workflow on March 29th on ~600 samples; it completed for several samples, but now all the remaining tasks seem to be stuck in Cromwell (there has been almost no movement since Saturday). Is there a way I can get a sense of when they will start being processed again?

GoogleCloud Project permissions


Good morning,

I am trying to add a lab member to my GoogleCloud Project but do not have appropriate permissions.

I was reviewing my GoogleCloud billing project 'stoverlabbilling' and noticed that under Permissions, the owners are listed as 'billing@firecloud.org' and 'firecloud-project-owners@firecloud.org'

I do not see my own information among the owners or editors.

Is it possible that I was inadvertently removed?
The Google Account associated with that billing project is daniel.g.stover@gmail.com
-We are getting and paying invoices...so the billing is working!

Thanks!
Dan

New feature in FC: workflowFailureMode: "ContinueWhilePossible"


Hi,

Is it possible to add "workflowFailureMode": "ContinueWhilePossible" as an option when launching an analysis? It's particularly helpful in big computations when some jobs fail randomly with PAPI error code 10. It would be easier than having to use the FireCloud API.
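For context, when submitting to Cromwell directly this option lives in the workflow options JSON; the snippet below is the standard Cromwell form, and the request here is simply to expose it when launching from FireCloud:

    {
      "workflowFailureMode": "ContinueWhilePossible"
    }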

Damian


GenotypeGVCFs GATK 4.1.0.0 error bcf_update_format: Assertion `nps && nps*line->n_sample==n' failed.


Hello!

I am trying to run GenotypeGVCFs on a set of 12 pooled-seq (50 fish per pool, using a ploidy of 20 in GATK) whole-genome samples. Unfortunately I have had recurrent Java memory issues (I think!) and this one error that I can't figure out. I posted a version of this question originally on gatk/discussion/13430 and didn't pursue it because I thought it was tied to other problems I was having with GenomicsDBImport. Since then I have tried a couple of things, but still get the error.

Here is a little bit about my analysis so far: I have pre-processed BAM files (according to GATK best practices) for each of my 12 pools. I'm working on a non-model organism, so I don't have a full genome to align my reads to, just 24 linkage groups and 8626 scaffolds. The number of scaffolds has presented some issues, so at times along my pipeline I have split the analysis into two tracks, keeping the linkage groups together and the scaffolds together. I start with HaplotypeCaller, scatter/gathering over the linkage groups and then using -L to treat the scaffolds as intervals. Then MergeGVCFs combines all the GVCFs into one GVCF file per pool. I use LeftAlignAndTrimVariants to left-align and trim the variants (I was having a problem where variants with a ton of PLs were preventing me from importing into GenomicsDB; this seems to have solved that problem). Then I use GenomicsDBImport to make a GenomicsDB for each of my linkage groups, with all my pools included, and CombineGVCFs to combine all the GVCFs for the scaffolds.

This is where I run into trouble. I have made a GenomicsDB for each of my 24 linkage groups without issue, but to run GenotypeGVCFs on the GenomicsDB and get genotypes I have to split the linkage group into intervals. I can get some intervals of just under ~2 million base pairs to genotype with this command and runtime:

  command <<<
    set -e

    tar -xf ${workspace_tar}
    WORKSPACE=$( basename ${workspace_tar} .tar)

    "/gatk/gatk" --java-options "-Xmx125g -Xms125g" \
     GenotypeGVCFs \
     -R ${ref_fasta} \
     -O ${output_vcf_filename} \
     --disable-read-filter NotDuplicateReadFilter \
     -V gendb://$WORKSPACE \
     --verbosity ERROR \
     -L ${interval}

  >>>
  runtime {
    docker: "broadinstitute/gatk:4.1.0.0"
    memory: "150 GB"
    cpu: "4"
    disks: "local-disk " + disk_size + " HDD"
    preemptible: preemptible
  }

But some intervals fail with the following error, which I have been treating like an out-of-memory error, continuously bumping up the -Xmx, -Xms, total memory, and the disk space (up to 500G), though I'm not sure which would help; I just feel desperate to get them genotyped.

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007fb8e3875edf, pid=19, tid=0x00007fdefaaed700
#
# JRE version: OpenJDK Runtime Environment (8.0_191-b12) (build 1.8.0_191-8u191-b12-0ubuntu0.16.04.1-b12)
# Java VM: OpenJDK 64-Bit Server VM (25.191-b12 mixed mode linux-amd64 )
# Problematic frame:
# C  [libtiledbgenomicsdb571603236969900509.so+0x17dedf]  void VariantOperations::reorder_field_based_on_genotype_index<int>(std::vector<int, std::allocator<int> > const&, unsigned long, MergedAllelesIdxLUT<true, true> const&, unsigned int, bool, bool, unsigned int, RemappedDataWrapperBase&, std::vector<unsigned long, std::allocator<unsigned long> >&, int, std::vector<int, std::allocator<int> > const&, unsigned long, std::vector<int, std::allocator<int> >&)+0x6f
#
# Core dump written. Default location: /cromwell_root/core or core.19
#
# An error report file with more information is saved as:
# /cromwell_root/hs_err_pid19.log
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#

Following the examples in this thread: gatk/discussion/comment/43887, I have also tried adding -XX:+UseSerialGC and using --use-jdk-deflater true and --use-jdk-inflater true, and using the most recent GATK: 4.1.1.0. Those have not helped.

Do you know why I might be getting this error? Should I continue to increase the memory/disk space, or reduce the size of the interval, or is there something else that could be wrong?

The other error that I get is maybe more cryptic:

java: /home/vagrant/GenomicsDB/dependencies/htslib/vcf.c:3641: bcf_update_format: Assertion `nps && nps*line->n_sample==n' failed.
Using GATK jar /gatk/gatk-package-4.1.0.0-local.jar
Running:
    java -Dsamjdk.use_async_io_read_samtools=false -Dsamjdk.use_async_io_write_samtools=true -Dsamjdk.use_async_io_write_tribble=false -Dsamjdk.compression_level=2 -Xmx125g -Xms125g -jar /gatk/gatk-package-4.1.0.0-local.jar GenotypeGVCFs -R /cromwell_root/uw-hauser-pcod-gatk-tests/gadMor2.fasta -O LG01.3.Genotyped.vcf.gz --disable-read-filter NotDuplicateReadFilter -V gendb://genomicsdb_LGs --verbosity ERROR -L LG01:6000001-8000000

That one seems to be tied to a specific location in the genome. Though this run with the interval LG01:6000001-8000000 failed, it produced part of a VCF file. I made two smaller intervals from the original interval to see if avoiding the area around where the VCF file stopped would help it finish running. My two new intervals were: LG01:6000001-7134350 and LG01:7150000-8000000. The first interval worked fine, but the second gave me the same error and no VCF file.

Do you have any ideas what might be happening here, and what bcf_update_format: Assertion `nps && nps*line->n_sample==n' failed means? I don't need to genotype every location; if some are giving me issues because they are too messy, I am OK dropping them. I would just like to make sure that I am not inadvertently introducing errors into my data, or ignoring warning flags for issues I need to address in the analysis. And, with so many linkage groups split up into intervals, it is tough to have a pipeline that requires manual tinkering when intervals fail.

Thank you so much for any advice you have!

Cannot find output when running docker container with cromwell

Hi there, I currently have a method that I run with Docker and was hoping to put on FireCloud. The way the Docker image is currently structured, I run a wrapper script within the container, which writes output files to a directory inside the container, and I then run docker cp to get the output files onto my local system. In trying to write a WDL, I am able to run the process (the stdout log shows output from the script running) but not to extract the output (I get a file-not-found error) when I specify the output file using the path inside the Docker container. I am not sure how to either mount a volume so that Cromwell can read the outputs, or modify my Docker setup so things are written somewhere else.
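One possible way around this, sketched below with a placeholder wrapper name, image, and internal path: Cromwell mounts the task's working directory into the container and runs the command there, so copying the wrapper's products from the container-internal path into that working directory at the end of the command block lets them be declared as outputs, with no docker cp or extra volume mount needed.

    task RunWrapper {
      File input_file

      command {
        # the wrapper writes its results to a fixed path inside the container
        run_wrapper.sh ${input_file}
        # copy them into the Cromwell working directory so they can be declared as outputs
        cp /opt/mytool/output/result.txt .
      }
      output {
        File result = "result.txt"
      }
      runtime {
        docker: "myorg/mytool:latest"
      }
    }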

java.lang.Exception: The job was aborted from outside Cromwell


Hi Team,

We are enabling Cromwell on AWS, using Cromwell version 39. While testing the five-dollar pipeline, we are hitting the issue below intermittently. There are instances where some tasks complete successfully in one run but fail with this issue in subsequent executions.

Is it due to a lack of resources or something else? Please help with this, and let me know if you need further details.

java.lang.Exception: The job was aborted from outside Cromwell
        at cromwell.engine.workflow.lifecycle.execution.WorkflowExecutionActor$$anonfun$5.applyOrElse(WorkflowExecutionActor.scala:251)
        at cromwell.engine.workflow.lifecycle.execution.WorkflowExecutionActor$$anonfun$5.applyOrElse(WorkflowExecutionActor.scala:186)
        at scala.PartialFunction$OrElse.apply(PartialFunction.scala:168)
        at akka.actor.FSM.processEvent(FSM.scala:687)
        at akka.actor.FSM.processEvent$(FSM.scala:681)
        at cromwell.engine.workflow.lifecycle.execution.WorkflowExecutionActor.akka$actor$LoggingFSM$$super$processEvent(WorkflowExecutionActor.scala:51)
        at akka.actor.LoggingFSM.processEvent(FSM.scala:820)
        at akka.actor.LoggingFSM.processEvent$(FSM.scala:802)
        at cromwell.engine.workflow.lifecycle.execution.WorkflowExecutionActor.processEvent(WorkflowExecutionActor.scala:51)
        at akka.actor.FSM.akka$actor$FSM$$processMsg(FSM.scala:678)
        at akka.actor.FSM$$anonfun$receive$1.applyOrElse(FSM.scala:672)
        at akka.actor.Actor.aroundReceive(Actor.scala:517)
        at akka.actor.Actor.aroundReceive$(Actor.scala:515)
        at cromwell.engine.workflow.lifecycle.execution.WorkflowExecutionActor.akka$actor$Timers$$super$aroundReceive(WorkflowExecutionActor.scala:51)
        at akka.actor.Timers.aroundReceive(Timers.scala:55)
        at akka.actor.Timers.aroundReceive$(Timers.scala:40)
        at cromwell.engine.workflow.lifecycle.execution.WorkflowExecutionActor.aroundReceive(WorkflowExecutionActor.scala:51)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:588)
        at akka.actor.ActorCell.invoke(ActorCell.scala:557)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
        at akka.dispatch.Mailbox.run(Mailbox.scala:225)
        at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
        at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)


message: Unable to complete JES Api Request; message: Request payload size exceeds t


When running a call to combine multiple files into one using a python script I see the following error:
• message: Unable to complete JES Api Request
• causedBy:
• message: Request payload size exceeds the limit: 5242880 bytes.

This happens only for calls that require a large number of files to be merged, even though I adjust disk and memory requirements accordingly.
Thanks!
Dmitry from the Broad

Subworkflow details cannot be expanded


Hi FC Team,

As of last night, a few members of our group have noticed that subworkflow details cannot be expanded for running workflows when clicking the Show button next to each subworkflow on the job monitor.

It does appear that these subworkflows are running and completing correctly, but we wanted to make you aware of this behavior with the online portal.

Thanks!
Ryan

Successfully completed tasks still marked as running


I have several workflows in my workspaces that are marked as still running even though many of them successfully completed several hours ago (expanding the individual cases reveals that they are complete). This is slowing down my overall workflow, as I would like to kick off the next tasks but the annotations have not been updated in the data model.

Notebook terminal is freezing


My terminal window only stays stable for about 5 minutes. If I close and reopen it, the window works just fine. The notebook itself seems to be working just fine as well.


Problem transitioning to new Google Billing Account after free trial


I followed the steps here. I get this error when I try to run a workflow:

Task helloUncompressGZ.UncompressGZ:NA:1 failed. The job was stopped before the command finished. PAPI error code 7. Access Not Configured. Compute Engine API has not been used in project 794883612204 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=794883612204 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.

I tried to enable the Compute Engine API but I get this error:

IAM: you have insufficient permissions to  
enable or disable services and APIs for this project. Contact a project owner to request permissions.

I tried to change the permissions but I get the following error:

You need permissions for this action. Required permission(s): resourcemanager.projects.setIamPolicy.

I emailed Google and they gave me this response:

Thank you for your message. My name is Ronnie from Google Cloud Platform Billing Support and I’ll be happy to assist you. As I understand, you want to add access permission to yourself for your project.

Upon checking, I see that you are listed as a billing admin of your billing account (ID: 01535E-99B321-0C4F31). However, it does not necessarily mean that if you are the billing admin, you will also have access to the project.

Based on your project information, you are not listed as the Project Owner of your project “fccredits-terbium-copper-6722”. Hence, the reason why you are receiving an error message when making some changes inside project.

For you to make changes in your project, you must be added as a Project Owner of the account. And the only way to do that is contact your Project Owner and ask them to add you as one of the Project Owners. To add you as a Project Owner, please see steps below [1].

1. Open the IAM page in the GCP Console.
2. OPEN THE IAM PAGE
3. Click Select a project, choose a project, and click Open.
4. Click Add.
5. Enter an email address. You can add individuals, service accounts, or Google Groups as members, but every project must have at least one individual as a member.
6. Select a role. Roles give members the appropriate level of permission. We recommend giving the member the least amount of privilege needed. Members with Owner-level permissions are also project owners and can manage all aspects of the project, including shutting it down.
7. Click Save.

Please note that a Project Owner can only do this. And once you are added, you will now have access to the project.

FireCloud can't access project resources - PAPI error code 7, access not configured


Hi - I'm seeing the following error when launching my workflow:

message: Workflow failed
causedBy:
message: Task trinity_fusion_wf.TRINITY_FUSION_UC_TASK:NA:4 failed. The job was stopped before the command finished. PAPI error code 7. Access Not Configured. Compute Engine API has not been used in project 163758633474 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=163758633474 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.

Firecloud used to have access to my buckets, so I'm not sure why this is happening.

If I go to the URL it mentions, it says:

Failed to load.
There was an error while loading /apis/api/compute.googleapis.com/overview?project=163758633474. Please try again.

problem with permission to access to the google bucket associated with a workspace


Hello,

When I try to access a google bucket associated with a Firecloud workspace, I get the following message:

"You need the storage.objects.list permission to list objects in this bucket. Ask a project or bucket owner to give you this permission and try again."

I am a workspace owner and so far everything worked well and I could see the contents of the bucket (.bam/.bai files and results of the analyses) without any problem.

Additionally, the 'estimated storage fee' which was always visible in the workspace, generates the following error

"Estimated Monthly Storage Fee: Ask timed out on [Actor[akka://FireCloud-Orchestration-API/user/IO-HTTP#-81949532]] after [60000 ms]. Sender[null] sent message of type "spray.http.HttpRequest"."

Could you please let me know how I can fix this?

parallel upload of data to google bucket in Firecloud workspace


Hello,

We have a large set of .bam/.bai files to transfer to the Google bucket associated with a FireCloud workspace. We plan to upload the files in parallel, meaning that several .bam/.bai files may be uploaded at the same time by two different users with access to the same Google bucket.
Could you please let me know whether we can proceed with such an upload without it causing any issues with the files or the bucket?
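For reference, concurrent uploads from different users are generally fine as long as the two users are not writing to the same object names; a sketch of what each side might run (local paths and bucket prefix are placeholders, and the composite-upload threshold is optional tuning for large files):

    gsutil -m -o "GSUtil:parallel_composite_upload_threshold=150M" \
      cp /data/batch1/*.bam /data/batch1/*.bai \
      gs://fc-my-workspace-bucket/bams/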

is recursive docker possible?


Hi,

Is it possible to have a workflow run in FireCloud as privileged, so it can call Docker within Docker (recursively)?

There's an externally developed pipeline that I'd want to run, and that pipeline already includes many Docker calls. I'm considering making a WDL pipeline that encapsulates this other pipeline along with its pipeline runner. Would this be a feasible option?

many thanks,

~brian

Switching cluster runtime does not preserve the cluster configuration


I switched runtimes on my notebook cluster because I needed a larger machine. When I did that, all of the software installed on the cluster disappeared. Is there any way to image the cluster and preserve the configuration?

Does the notebook cluster shut down if you log out of Terra?


I ran an ML job on the notebook cluster, and it was running fine overnight. When I came back in the morning, the cluster had shut down and my job was killed. Is that the right behavior?

Thanks,

Ilya


Is Terra/Firecloud down?


I can't log in. I get a GCP error.

"502. That’s an error.

The server encountered a temporary error and could not complete your request.

Please try again in 30 seconds. That’s all we know."

Error uploading samples table to workspace


I get the following when trying to upload a 7 MB table to my workspace. Is it too large?

How to change billing account/billing project initially assigned to a workspace?

$
0
0

Hi,
Could you please let me know how I can change the billing account/billing project that I initially assigned when creating a workspace in FireCloud?

Failure while processing large unmapped bam file as input to cromwell

$
0
0

Hi Team,

We are running the five-dollar pipeline with Cromwell (v39) on AWS. When we run the pipeline on small files (~300 MB), it proceeds fine through the downstream processing. But when we provide a large file (48 GB to 68 GB), it fails with the error below.

We observed that "SplitLargeReadGroup.SamSplitter" is only triggered for the large files, not for the smaller files in the ~MB range.

Do we need to perform any special configuration to handle large files, or is there something wrong with our configuration that is causing the pipeline to fail?

Exception:

AwsBatchAsyncBackendJobExecutionActor [^[[38;5;2m5f712db2^[[0mSplitLargeReadGroup.SamSplitter:NA:1]: ^[[38;5;5mset -e
mkdir output_dir

total_reads=$(samtools view -c /cromwell_root/cromwelleast/references/broad-references/macrogen_NA12878_full.bam)

java -Dsamjdk.compression_level=2 -Xms3000m -jar /usr/gitc/picard.jar SplitSamByNumberOfReads \
  INPUT=/cromwell_root/cromwellbucket/references/broad-references/macrogen_NA12878_full.bam \
  OUTPUT=output_dir \
  SPLIT_TO_N_READS=48000000 \
  TOTAL_READS_IN_INPUT=$total_reads^[[0m
[2019-04-18 20:50:06,31] [^[[38;5;1merror^[[0m] AwsBatchAsyncBackendJobExecutionActor [^[[38;5;2m5f712db2^[[0mSplitLargeReadGroup.SamSplitter:NA:1]: Error attempting to Execute
cromwell.engine.io.IoAttempts$EnhancedCromwellIoException: [Attempted 1 time(s)] - FileSystemException: /tmp/temp-s3-538074772416833219ce_WholeGenomeGermlineSingleSample_91352b21-b271-443c-b332-0a25b27ec894_call-UnmappedBamToAlignedBam_UnmappedBamToAlignedBam_b2858ebe-2463-48b7-bfc8-f83a786e5247_call-SplitRG_shard-0_SplitLargeReadGroup_5f712db2-4f6e-9955-7feeb03af894_call-SamSplitter_script: File name too long
Caused by: java.nio.file.FileSystemException: /tmp/temp-s3-538074772416833219ce_WholeGenomeGermlineSingleSample_91352b21-b271-443c-b332-0a25b27ec894_call-UnmappedBamToAlignedBam_UnmappedBamToAlignedBam_b2858ebe-2463-48b7-bfc8-f83a786e5247_call-SplitRG_shard-0_SplitLargeReadGroup_5f712db2-4f6e-9955-7feeb03af894_call-SamSplitter_script: File name too long
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
        at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
        at java.nio.file.Files.newByteChannel(Files.java:361)
        at java.nio.file.Files.createFile(Files.java:632)
        at java.nio.file.TempFileHelper.create(TempFileHelper.java:138)
        at java.nio.file.TempFileHelper.createTempFile(TempFileHelper.java:161)
        at java.nio.file.Files.createTempFile(Files.java:897)
        at org.lerch.s3fs.S3SeekableByteChannel.<init>(S3SeekableByteChannel.java:52)
        at org.lerch.s3fs.S3FileSystemProvider.newByteChannel(S3FileSystemProvider.java:360)
        at java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434)
        at java.nio.file.Files.newOutputStream(Files.java:216)
        at java.nio.file.Files.write(Files.java:3292)
        at better.files.File.writeByteArray(File.scala:270)
        at better.files.File.write(File.scala:280)
        at cromwell.core.path.BetterFileMethods.write(BetterFileMethods.scala:179)
        at cromwell.core.path.BetterFileMethods.write$(BetterFileMethods.scala:178)
        at cromwell.filesystems.s3.S3Path.write(S3PathBuilder.scala:158)
        at cromwell.core.path.EvenBetterPathMethods.writeContent(EvenBetterPathMethods.scala:99)
        at cromwell.core.path.EvenBetterPathMethods.writeContent$(EvenBetterPathMethods.scala:97)
        at cromwell.filesystems.s3.S3Path.writeContent(S3PathBuilder.scala:158)
        at cromwell.engine.io.nio.NioFlow.$anonfun$write$1(NioFlow.scala:89)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
        at cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:87)
        at cats.effect.internals.IORunLoop$RestartCallback.signal(IORunLoop.scala:351)
        at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:372)
        at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:312)
        at cats.effect.internals.IOShift$Tick.run(IOShift.scala:36)
        at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)

How to exit in the middle of a linear chain of tasks in WDL

$
0
0

Hi,
I have a sequence of tasks. If I find that the output of task1 contains an error message such as "No space on the disk", I would like to exit the analysis and not continue with the following tasks. Is that implementable in WDL, and if so, how? Thank you very much.
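
One way to do this (a minimal sketch, not an official pattern; run_step_one and the file names are made up for illustration) is to have task1 itself exit non-zero when it sees the fatal message. Because the downstream tasks consume task1's outputs, Cromwell will not start them once task1 fails:

task task1 {
  File input_file
  command {
    run_step_one ${input_file} > task1.log 2>&1
    # If the fatal message appears, fail this task; the rest of the
    # linear chain will then never be scheduled.
    if grep -q "No space on the disk" task1.log; then
      echo "Fatal: no space on the disk, stopping the analysis" >&2
      exit 1
    fi
  }
  output {
    File log = "task1.log"
  }
}

Alternatively, task1 can emit a Boolean output and the downstream calls can be wrapped in an if () block in the workflow, which stops the chain without marking the whole workflow as failed.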

Plink prefix and wdl

$
0
0

Hi,

I am trying to create a WDL that runs plink and takes as input a prefix for the bed/bim/fam files.

I am using Cromwell and docker to run it

for example:

plink --bfile prefix --recode --out prefix

Cromwell copies over the input files. How do I configure the WDL to use a prefix and grab the files associated with that prefix without explicitly defining the files? Also, even if I can get the files copied over, how do I pass a prefix to the command? (A sketch of one approach is below, after my sign-off.)

Thanks,

Ilya
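
One approach that is sometimes used (a hedged sketch; the task name, image, and out_prefix are made up for illustration) is to pass the three files explicitly and symlink them to a common prefix inside the command, so plink can find them regardless of where Cromwell localized each file:

task plink_recode {
  File bed
  File bim
  File fam
  String out_prefix

  command {
    # Give the three localized inputs a shared prefix that plink understands.
    ln -s ${bed} ${out_prefix}.bed
    ln -s ${bim} ${out_prefix}.bim
    ln -s ${fam} ${out_prefix}.fam
    plink --bfile ${out_prefix} --recode --out ${out_prefix}
  }
  output {
    File ped = "${out_prefix}.ped"
    File map = "${out_prefix}.map"
  }
  runtime {
    docker: "your-plink-image"
  }
}

The workflow then takes the bed/bim/fam files themselves (or a prefix string from which you construct the three paths before calling the task) rather than a bare prefix, since Cromwell only localizes inputs that are declared as File.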

Implementing GenotypeGVCFs and GenomicsDBImport in WDL


Confirm I am getting the right runtime and runtime options

$
0
0

How do I confirm that I am getting the right runtime, and where are the different WDL runtime parameters/options described?
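
As a hedged reference point (the image name is a placeholder), the runtime attributes most commonly used on the Google backend look like this; anything you omit falls back to a Cromwell default:

runtime {
  docker: "your-registry/your-image:tag"
  memory: "7.5 GB"
  cpu: 2
  disks: "local-disk 200 HDD"
  preemptible: 3
}

To confirm what you actually got, check the call's metadata in the job history (or the corresponding Pipelines operation in the Google Cloud Console), which records the machine type and disks that were provisioned.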

Can I add the hash of docker container to my snapshot?

$
0
0

runtime {
  docker: "imagename@sha256:HASH"
  memory: memGB
  disks: "local-disk 256 HDD"
  cpu: nCores
}

This would be useful for forcing versioning of the docker image in the snapshot.
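
If it helps, one hedged way to look up the digest you want to pin (assuming the image has been pulled locally) is:

docker images --digests imagename

and then reference it in the runtime block as docker: "imagename@sha256:<digest>".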

Docker pull error

$
0
0

I am getting the following docker pull error when working in a project that is using my google billing account instead of the free billing account.

Task genoml.train:NA:1 failed. The job was stopped before the command finished. PAPI error code 5. 8: Failed to pull image gcr.io/project/image "gcloud docker -- pull gcr.io/project/image" failed: exit status 1: Using default tag: latest Error response from daemon: pull access denied for gcr.io/project/imagel, repository does not exist or may require 'docker login'

I followed the instructions here.

https://software.broadinstitute.org/firecloud/documentation/article?id=11558

It worked just fine with the free trial billing account.

View VMs associated with running jobs in GCP

$
0
0

I am trying to view my jobs in GCP. For the free trial account, I do not have any active jobs in my job history but I do have a VM running in GCP. It's been running since 4/17. See the screenshot.

For the billing account associated with my organization, I have one active job in my job history but I cannot find it in GCP.
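
One hedged way to cross-check from the command line (assuming the Cloud SDK is installed; substitute the Google project ID that backs each billing project) is to list the Compute Engine VMs directly:

gcloud compute instances list --project YOUR_PROJECT_ID

Worker VMs created by workflows are normally cleaned up when the job finishes, so a VM that has persisted for days is more likely a notebook cluster or an orphaned instance than an active workflow.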

WorkFlow getting aborted intermittently without any exception

$
0
0

Hi Team,

We are facing yet another issue with large-file (68 GB) processing, wherein the workflow is aborted all of a sudden, without any exception, on AWS-enabled Cromwell. The behaviour is not consistent: sometimes the run succeeds, but it fails on another attempt.

We have tried increasing the memory assigned to the Docker container and the -Xms values, but with no luck. We also tried increasing the timeout settings in the Cromwell config.

Most of the time it happens with the SamSplitter (which splits the large file) and SamToFastqAndBwaMemAndMba tasks. I'm also copying the timeout settings we maintain in the Cromwell config.

Please help us to resolve this issue.

Timeout Setting in Cromwell config:
akka {
  http {
    server {
      request-timeout = 1800s
      idle-timeout = 2400s
    }
    client {
      request-timeout = 1800s
      connecting-timeout = 300s
    }
  }
}

Workflow log stops without an exception, even though a process is still running:

[2019-05-03 14:34:00,51] [info] AwsBatchAsyncBackendJobExecutionActor [^[[38;5;2mc68f0e1b^[[0mSLRG.SamSplitter:NA:1]: Status change from Initializing to Running
[2019-05-03 15:27:11,95] [info] AwsBatchAsyncBackendJobExecutionActor [^[[38;5;2m461a6066^[[0mUBTAB.CQYM:0:1]: Status change from Running to Succeeded
[2019-05-03 15:43:25,43] [info] Workflow polling stopped
[2019-05-03 15:43:25,45] [info] Shutting down WorkflowStoreActor - Timeout = 5 seconds
[2019-05-03 15:43:25,45] [info] Shutting down WorkflowLogCopyRouter - Timeout = 5 seconds
[2019-05-03 15:43:25,45] [info] 0 workflows released by cromid-abdb07d
[2019-05-03 15:43:25,46] [info] Aborting all running workflows.
[2019-05-03 15:43:25,46] [info] Shutting down JobExecutionTokenDispenser - Timeout = 5 seconds
[2019-05-03 15:43:25,47] [info] JobExecutionTokenDispenser stopped
[2019-05-03 15:43:25,47] [info] WorkflowStoreActor stopped
[2019-05-03 15:43:25,47] [info] Shutting down WorkflowManagerActor - Timeout = 3600 seconds
[2019-05-03 15:43:25,47] [info] WorkflowManagerActor Aborting all workflows
[2019-05-03 15:43:25,47] [info] WorkflowExecutionActor-155c13d0-09e9-4ad7-b4c4-9cd2b1099c14 [^[[38;5;2m155c13d0^[[0m]: Aborting workflow
[2019-05-03 15:43:25,47] [info] WorkflowLogCopyRouter stopped
[2019-05-03 15:43:25,47] [info] 461a6066-0463-485c-9de3-763d0658f236-SubWorkflowActor-SubWorkflow-UBTAB:-1:1 [^[[38;5;2m461a6066^[[0m]: Aborting workflow
[2019-05-03 15:43:25,47] [info] c68f0e1b-9998-4cca-9749-40586e3d097f-SubWorkflowActor-SubWorkflow-SplitRG:0:1 [^[[38;5;2mc68f0e1b^[[0m]: Aborting workflow
[2019-05-03 15:43:25,55] [info] Attempted CancelJob operation in AWS Batch for Job ID fd00c634-7a5f-453c-afad-9dbdead31a91. There were no errors during the operation
[2019-05-03 15:43:25,55] [info] We have normality. Anything you still can't cope with is therefore your own problem
[2019-05-03 15:43:25,55] [info] https://www.youtube.com/watch?v=YCRxnjE7JVs
[2019-05-03 15:43:25,55] [info] AwsBatchAsyncBackendJobExecutionActor [^[[38;5;2mc68f0e1b^[[0mSLRG.SamSplitter:NA:1]: AwsBatchAsyncBackendJobExecutionActor [^[[38;5;2mc68f0e1b^[[0m:SLRG.SamSplitter:NA:1] Aborted StandardAsyncJob(fd00c634-7a5f-453c-afad-9dbdead31a91)
[2019-05-03 15:56:31,10] [info] AwsBatchAsyncBackendJobExecutionActor [^[[38;5;2mc68f0e1b^[[0mSLRG.SamSplitter:NA:1]: Status change from Running to Succeeded
[2019-05-03 15:56:31,81] [info] 461a6066-0463-485c-9de3-763d0658f236-SubWorkflowActor-SubWorkflow-UBTAB:-1:1 [^[[38;5;2m461a6066^[[0m]: WorkflowExecutionActor [^[[38;5;2m461a6066^[[0m] aborted: SubWorkflow-SplitRG:0:1
[2019-05-03 15:56:32,09] [info] WorkflowExecutionActor-155c13d0-09e9-4ad7-b4c4-9cd2b1099c14 [^[[38;5;2m155c13d0^[[0m]: WorkflowExecutionActor [^[[38;5;2m155c13d0^[[0m] aborted: SubWorkflow-UBTAB:-1:1
[2019-05-03 15:56:32,84] [info] WorkflowManagerActor All workflows are aborted
[2019-05-03 15:56:32,84] [info] WorkflowManagerActor All workflows finished
[2019-05-03 15:56:32,84] [info] WorkflowManagerActor stopped
[2019-05-03 15:56:33,07] [info] Connection pools shut down
[2019-05-03 15:56:33,07] [info] Shutting down SubWorkflowStoreActor - Timeout = 1800 seconds
[2019-05-03 15:56:33,07] [info] Shutting down JobStoreActor - Timeout = 1800 seconds
[2019-05-03 15:56:33,07] [info] SubWorkflowStoreActor stopped
[2019-05-03 15:56:33,07] [info] Shutting down CallCacheWriteActor - Timeout = 1800 seconds
[2019-05-03 15:56:33,07] [info] Shutting down ServiceRegistryActor - Timeout = 1800 seconds
[2019-05-03 15:56:33,07] [info] Shutting down DockerHashActor - Timeout = 1800 seconds
[2019-05-03 15:56:33,07] [info] CallCacheWriteActor Shutting down: 0 queued messages to process
[2019-05-03 15:56:33,07] [info] JobStoreActor stopped
[2019-05-03 15:56:33,07] [info] CallCacheWriteActor stopped
[2019-05-03 15:56:33,07] [info] Shutting down IoProxy - Timeout = 1800 seconds
[2019-05-03 15:56:33,08] [info] DockerHashActor stopped
[2019-05-03 15:56:33,08] [info] IoProxy stopped
[2019-05-03 15:56:33,08] [info] Shutting down connection pool: curAllocated=1 idleQueues.size=1 waitQueue.size=0 maxWaitQueueLimit=256 closed=false
[2019-05-03 15:56:33,08] [info] Shutting down connection pool: curAllocated=0 idleQueues.size=0 waitQueue.size=0 maxWaitQueueLimit=256 closed=false
[2019-05-03 15:56:33,08] [info] WriteMetadataActor Shutting down: 72 queued messages to process
[2019-05-03 15:56:33,08] [info] KvWriteActor Shutting down: 0 queued messages to process
[2019-05-03 15:56:33,09] [info] Shutting down connection pool: curAllocated=0 idleQueues.size=0 waitQueue.size=0 maxWaitQueueLimit=256 closed=false
[2019-05-03 15:56:33,09] [info] WriteMetadataActor Shutting down: processing 0 queued messages
[2019-05-03 15:56:33,09] [info] ServiceRegistryActor stopped
[2019-05-03 15:56:33,11] [info] Database closed
[2019-05-03 15:56:33,11] [info] Stream materializer shut down
[2019-05-03 15:56:33,12] [info] WDL HTTP import resolver closed

What is the plain text for an Array in Workspace Attributes

$
0
0

What is the plain-text format for an Array in Workspace Attributes? If there isn't one, how can I upload an array to my workspace attributes?

Say I want to copy the "known_indels_array" attribute from the following workspace to my own. What is the "official" way to do this? It seems that the text file downloaded from this workspace cannot be recognized by another workspace. In other words, text like ["a", "b"] doesn't work in my case.

https://portal.firecloud.org/#workspaces/help-gatk/Pre-processing_hg38_v2

Thanks,
Chunyang

Running costs on FireCloud

$
0
0
Hi,
I'm trying to find out how much it might cost to run a cohort of samples (500, 1000, or 2000) using the germline GATK Best Practices pipeline on FireCloud, and also how long the runs might take. Thanks