Both workflow languages are better suited to building a single reproducible workflow that can be published with an academic paper. For us, I'm looking for a workflow language that can treat the pipeline as a testable, deployable piece of software. I find that with Nextflow, scientists fall into the bad pattern of interspersing pipeline logic (e.g. if this sample type, then process it this way) with the bioinformatics model (e.g. use these bowtie2 parameters) throughout the pipeline, which makes it harder to maintain as our platform evolves. K8s integration is lacking for both of them; they work much better on academic-style clusters.
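The separation I'm after can be sketched roughly like this: keep the bioinformatics model (tool parameters) in one declarative place, and reduce the pipeline logic to a lookup rather than conditionals scattered through every process block. This is just an illustrative sketch in Python, not Nextflow; the sample types and bowtie2 settings here are made-up examples, not recommended values.

```python
# Hypothetical sketch of separating pipeline logic from the bioinformatics model.
# Sample types and bowtie2 flags below are illustrative assumptions only.

# Bioinformatics model: all tool parameters live in one declarative table.
BOWTIE2_PARAMS = {
    "rna": {"--local": True, "-X": 500},
    "dna": {"--end-to-end": True, "-X": 1000},
}

def params_for(sample_type: str) -> dict:
    """Pipeline logic collapsed to a single lookup, instead of
    if/else branches interspersed throughout the workflow."""
    try:
        return BOWTIE2_PARAMS[sample_type]
    except KeyError:
        raise ValueError(f"unknown sample type: {sample_type!r}")
```

The point of the design is that evolving the platform (adding a sample type, retuning a tool) touches the table, not the orchestration code.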
YAML does leave a lot to be desired, but it also forces a degree of simplicity in architecting the pipeline, because doing otherwise is too cumbersome. I really liked WDL as a language back when I used it; it seemed to strike a nice balance of readability and simplicity. I believe Dyno created a Python SDK for the Argo YAML syntax, and I need to look into that more.