Publications

Below is a complete list of publications in journals, books and conference proceedings. Where available, links are included to preprints or the printed article. Citation counts and other versions can be found on Google Scholar.

Under review

  1. S. Xu, M. Rasouli, R. M. Kirby, D. Moxey and H. Sundar
    A geometrically informed algebraic multigrid preconditioned iterative approach for solving high-order finite element systems
    under review in Comput. Phys. Commun., January 2023. BibTeX Abstract
    @unpublished{xu-2022,
      title = {A geometrically informed algebraic multigrid preconditioned iterative approach for solving high-order finite element systems},
      author = {Xu, S. and Rasouli, M. and Kirby, R. M. and Moxey, D. and Sundar, H.},
      note = {under review in Comput. Phys. Commun.},
      month = jan,
      year = {2023},
      keywords = {journal}
    }
    
    Algebraic multigrid (AMG) is conventionally applied in a black-box fashion, agnostic to the underlying geometry. In this work, we propose that using geometric information – when available – to assist with setting up the AMG hierarchy is beneficial, especially for solving linear systems resulting from high-order finite element discretizations. For geometric multigrid, it is known that using p-coarsening before h-coarsening can provide better scalability, but setting up p-coarsening is non-trivial in AMG. Our method, called geometrically informed algebraic multigrid (GIAMG), requires only minimal geometric information from the user and is able to set up a grid hierarchy that includes p-coarsening at the top grids. A major advantage of using p-coarsening with AMG – beyond the benefits known in the context of GMG – is the increased sparsification of coarse grid operators. We extensively evaluate GIAMG by testing on the 3D Helmholtz and incompressible Navier–Stokes operators, and demonstrate mesh-independent convergence and excellent parallel scalability. We also compare the performance of GIAMG with existing AMG packages, including Hypre and ML.
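    To illustrate the core idea of p-coarsening at the top of the hierarchy, the sketch below shows a generic two-grid correction in Python in which the restriction simply truncates the highest-order modes and the coarse operator is formed as a Galerkin product. This is a minimal, self-contained illustration using assumed toy data (a small SPD matrix standing in for an elemental operator); it is not the GIAMG implementation described in the paper.

    import numpy as np

    def weighted_jacobi(A, b, x, omega=0.7, sweeps=2):
        """Smoother: a few weighted Jacobi sweeps."""
        D = np.diag(A)
        for _ in range(sweeps):
            x = x + omega * (b - A @ x) / D
        return x

    def two_grid(A, b, x, R):
        """One cycle: pre-smooth, p-coarsened correction, post-smooth."""
        x = weighted_jacobi(A, b, x)
        Ac = R @ A @ R.T                          # Galerkin coarse-grid operator
        x = x + R.T @ np.linalg.solve(Ac, R @ (b - A @ x))
        return weighted_jacobi(A, b, x)

    # Toy SPD matrix standing in for a p = 8 elemental operator (9 modes);
    # "p-coarsening" here keeps only the first 5 (lowest-order) modes.
    rng = np.random.default_rng(0)
    B = rng.standard_normal((9, 9))
    A = B + B.T
    A += np.diag(np.abs(A).sum(axis=1) + 1.0)     # make it diagonally dominant (SPD)
    R = np.eye(5, 9)                              # restriction: drop high-order modes
    b = rng.standard_normal(9)

    x = np.zeros(9)
    for _ in range(20):
        x = two_grid(A, b, x, R)
    print(np.linalg.norm(b - A @ x))              # residual shrinks with each cycle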

2024

  • M. D. Green, K. S. Kirilov, M. Turner, J. Marcon, J. Eichstädt, E. Laughton, C. D. Cantwell, S. J. Sherwin, J. Peiró and D. Moxey, NekMesh: An open-source high-order mesh generation framework, Comput. Phys. Commun., 298, p. 109089, 2024. 10.1016/j.cpc.2024.109089 BibTeX Abstract
    @article{green-2024,
      title = {NekMesh: An open-source high-order mesh generation framework},
      author = {Green, M. D. and Kirilov, K. S. and Turner, M. and Marcon, J. and Eichst\"adt, J. and Laughton, E. and Cantwell, C. D. and Sherwin, S. J. and Peir\'o, J. and Moxey, D.},
      journal = {Comput. Phys. Commun.},
      year = {2024},
      number = {298},
      pages = {109089},
      doi = {10.1016/j.cpc.2024.109089},
      url = {https://www.sciencedirect.com/science/article/pii/S0010465524000122}
    }
    
    High-order spectral element simulations are now becoming increasingly popular within the computational modelling community, as they offer the potential to deliver increased accuracy at reduced cost compared to traditional low-order codes. However, to support accurate, high-fidelity simulations in complex industrial applications, there is a need to generate curvilinear meshes which robustly and accurately conform to geometrical features. This is, at present, a key challenge within the mesh generation community, with only a few open-source tools able to generate curvilinear meshes for complex geometries. We present NekMesh: an open-source mesh generation package which is designed to enable the generation of valid, high-quality curvilinear meshes of complex, three-dimensional geometries for performing high-order simulations. We outline the software architecture adopted in NekMesh, which uses a pipeline of processing modules to provide a flexible, CAD-independent high-order mesh processing tool, capable of both generating meshes for a wide range of use cases and post-processing linear meshes from a range of input formats for use with high-order simulations. A number of examples in various application areas are presented, with a particular emphasis on challenging aeronautical and fluid dynamics test cases.
  • M. D. Green, R. Foster-Turner, A. Hunt, A. M. Ramirez-Mancebo, S. C. Lieber, J. W. Hartwig, D. Moxey and A. Tafuni, Flight-ready electrical capacitance tomography SMARTTS tank for use with cryogenics, Exp. Therm. Fluid Sci., 154, p. 111144, 2024. 10.1016/j.expthermflusci.2024.111144 BibTeX Abstract
    @article{green-2024a,
      title = {Flight-ready electrical capacitance tomography SMARTTS tank for use with cryogenics},
      author = {Green, M. D. and Foster-Turner, R. and Hunt, A. and Ramirez-Mancebo, A. M. and Lieber, S. C. and Hartwig, J. W. and Moxey, D. and Tafuni, A.},
      journal = {Exp. Therm. Fluid Sci.},
      year = {2024},
      number = {154},
      pages = {111144},
      doi = {10.1016/j.expthermflusci.2024.111144},
      url = {https://www.sciencedirect.com/science/article/pii/S089417772400013X}
    }
    
    The Atout SMARTTS (SMART Tanks for Space) system allows the propellant mass distribution in a storage vessel to be measured accurately under several motion and gravity conditions and at any fill level. Based on Electrical Capacitance Tomography (ECT), SMARTTS systems incorporate electrodes on the inside of the tank, electrical connections to these electrodes, and capacitance measurements. Interpretation of the capacitance measurements is done through the Atout software, providing live images, fill levels and center of mass measurement of the propellant within the tank. The main objective of this work is to demonstrate the successful operation of a flight-ready aluminum tank with integrated SMARTTS electrodes and feedthroughs at cryogenic temperatures. The experiments presented herein consist of three submersion cycles in which the tank is lowered into an open-top dewar filled with liquid nitrogen. During the cycle, the tank is filled with liquid nitrogen when lowered into the dewar, then drained as it is lifted out. Successful operation of the SMARTTS system has been proven via live images of the fluid in the tank, as well as measured fill volume and center of mass of fluid. The materials and sensors have performed satisfactorily with no failures or post-experiment signs of damage.

2023

  • J. Slaughter, D. Moxey and S. J. Sherwin, Large eddy simulation of an inverted multi-element wing in ground effect, Flow Turbul. Combust., 110, pp. 917–944, 2023. 10.1007/s10494-023-00404-7 BibTeX Abstract
    @article{slaughter-2023,
      title = {Large eddy simulation of an inverted multi-element wing in ground effect},
      author = {Slaughter, J. and Moxey, D. and Sherwin, S. J.},
      journal = {Flow Turbul. Combust.},
      year = {2023},
      number = {110},
      pages = {917-944},
      doi = {10.1007/s10494-023-00404-7},
      url = {https://link.springer.com/content/pdf/10.1007/s10494-023-00404-7.pdf}
    }
    
    Due to the proprietary nature of modern motorsport and Formula 1, current scientific literature lacks relevant studies and benchmarks that can be used to test and validate new methods. Following the release of a free geometry - the Imperial Front Wing - we present a computational study of a multi-element aerofoil at a ride height of 0.36h/c and a Reynolds number of 2.2×10^5. A 0.16c slice of the Imperial Front Wing has been examined using high-order spectral/hp element methods. Time-averaged force data are presented, finding lift and drag coefficients of -8.33 and 0.17 respectively. Transient analysis of the force and surface pressure data resulted in salient mode identification with respect to the transition mechanisms of each element. The mainplane and flap laminar separation were studied and the cross-spectral phase presented for the lower frequency modes. At St=40 an in-phase relationship was identified between the mainplane and flap laminar separation bubbles, whilst at St=60 a distinct out-of-phase relationship was identified. Wake results including wake-momentum deficit and turbulent kinetic energy plots have been presented, showing wake meandering and subsequent breakdown due to a Kelvin-Helmholtz instability. These results, particularly the transition mechanisms, will allow for the construction of a data set to validate novel methods in this area.
  • J. Eichstädt, J. Peiró and D. Moxey, Efficient vectorised kernels for unstructured high-order finite element fluid solvers on GPU architectures in two dimensions, Comput. Phys. Commun., 284, p. 108624, 2023. 10.1016/j.cpc.2022.108624 BibTeX Abstract
    @article{eichstadt-2023,
      title = {Efficient vectorised kernels for unstructured high-order finite element fluid solvers on GPU architectures in two dimensions},
      author = {Eichst\"adt, J. and Peir\'o, J. and Moxey, D.},
      journal = {Comput. Phys. Commun.},
      year = {2023},
      volume = {284},
      pages = {108624},
      url = {https://www.sciencedirect.com/science/article/pii/S0010465522003435},
      doi = {10.1016/j.cpc.2022.108624}
    }
    
    We develop efficient kernels for elemental operators of matrix-free solvers of the Helmholtz equation, which are the core operations for incompressible Navier-Stokes solvers, for use on graphics-processing units (GPUs). Our primary concern in this work is the extension of matrix-free routines to efficiently evaluate this elliptic operator on regular and curvilinear triangular elements in a tensor-product manner. We investigate two types of efficient CUDA kernels for a range of polynomial orders and thus varying arithmetic intensities: the first maps each elemental operation to a CUDA-thread for a completely vectorised kernel, whilst the second maps each element to a CUDA-block for nested parallelism. Our results show that the first option is beneficial for elements with low polynomial order, whereas the second option is beneficial for elements of higher order. The crossover point between these two schemes for the hardware used in this study lies at around P=4-5, depending on element type. For both options, we highlight the importance of the layout of data structures, which necessitates the development of interleaved elemental data for vectorised kernels, and analyse the effect of selecting different memory spaces on the GPU. As the considered kernels are foremost memory-bandwidth bound, we develop kernels for curved elements that trade memory bandwidth against additional arithmetic operations, and demonstrate improved throughput in selected cases. We further compare our optimised CUDA kernels against optimised OpenACC kernels, to contrast the performance between a native and a portable programming model for GPUs.

2022

  • F. F. Buscariolo, J. Hoessler, D. Moxey, A. Jassim, K. Gouder, J. Basler, Y. Murai, G. R. S. Assi and S. J. Sherwin, Spectral/hp element simulation of flow past a Formula One front wing: validation against experiments, J. Wind. Eng. Ind. Aerod., 221, p. 104832, 2022. 10.1016/j.jweia.2021.104832 BibTeX Abstract
    @article{buscariolo-2022,
      title = {Spectral/$hp$ element simulation of flow past a Formula One front wing: validation against experiments},
      author = {Buscariolo, F. F. and Hoessler, J. and Moxey, D. and Jassim, A. and Gouder, K. and Basler, J. and Murai, Y. and Assi, G. R. S. and Sherwin, S. J.},
      journal = {J. Wind. Eng. Ind. Aerod.},
      year = {2022},
      volume = {221},
      pages = {104832},
      url = {https://arxiv.org/pdf/1909.06701},
      doi = {10.1016/j.jweia.2021.104832}
    }
    
    Emerging commercial and academic tools are regularly being applied to the design of road and race cars, but there are currently no well-established benchmark cases to study the aerodynamics of race car wings in ground effect. In this paper we propose a new test case, with a relatively complex geometry, supported by the availability of a CAD model and experimental results. We refer to the test case as the Imperial Front Wing, originally based on the front wing and endplate design of the McLaren 17D race car. A comparison of different resolutions of a high-fidelity spectral/hp element simulation using an under-resolved DNS/implicit LES approach with fourth and fifth polynomial order is presented. The results demonstrate good correlation to both the wall-bounded streaklines obtained by oil flow visualization and experimental PIV results, correctly predicting key characteristics of the time-averaged flow structures, namely intensity, contours and locations. This study highlights the resolution requirements in capturing salient flow features arising from this type of challenging geometry, providing an interesting test case for both traditional and emerging high-fidelity simulations.
  • E. Laughton, V. Zala, A. Narayan, R. M. Kirby and D. Moxey, Fast barycentric-based evaluation over spectral/hp elements, J. Sci. Comp., 90, p. 78, 2022. 10.1007/s10915-021-01750-2 BibTeX Abstract
    @article{laughton-2022,
      title = {Fast barycentric-based evaluation over spectral/$hp$ elements},
      author = {Laughton, E. and Zala, V. and Narayan, A. and Kirby, R. M. and Moxey, D.},
      journal = {J. Sci. Comp.},
      year = {2022},
      volume = {90},
      pages = {78},
      url = {https://link.springer.com/content/pdf/10.1007/s10915-021-01750-2.pdf},
      doi = {10.1007/s10915-021-01750-2}
    }
    
    As the use of spectral/hp element methods, and high-order finite element methods in general, continues to spread, community efforts to create efficient, optimized algorithms associated with fundamental high-order operations have grown. Core tasks such as solution expansion evaluation at quadrature points, stiffness and mass matrix generation, and matrix assembly have received tremendous attention. With the expansion of the types of problems to which high-order methods are applied, and correspondingly the growth in types of numerical tasks accomplished through high-order methods, the number and types of these core operations broaden. This work focuses on solution expansion evaluation at arbitrary points within an element. This operation is core to many postprocessing applications such as evaluation of streamlines and pathlines, as well as to field projection techniques such as mortaring. We expand barycentric interpolation techniques developed on an interval to 2D (triangles and quadrilaterals) and 3D (tetrahedra, prisms, pyramids, and hexahedra) spectral/hp element methods. We provide efficient algorithms for their implementations, and demonstrate their effectiveness using the spectral/hp element library Nektar++.
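    The 1D building block that the paper extends to multiple dimensions is barycentric Lagrange interpolation. The sketch below is a minimal illustration of that 1D formula (Berrut and Trefethen's second form) using assumed toy data; it is not the Nektar++ implementation, and the multi-dimensional tensor-product extension is omitted.

    import numpy as np

    def bary_weights(x):
        """Barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k)."""
        diff = x[:, None] - x[None, :]
        np.fill_diagonal(diff, 1.0)
        return 1.0 / diff.prod(axis=1)

    def bary_eval(x, f, w, xi):
        """Evaluate the polynomial interpolant of (x, f) at arbitrary points xi."""
        out = np.empty(len(xi))
        for i, t in enumerate(xi):
            d = t - x
            hit = np.isclose(d, 0.0)
            if hit.any():                          # evaluation point coincides with a node
                out[i] = f[hit][0]
            else:
                c = w / d
                out[i] = (c @ f) / c.sum()
        return out

    # Usage: interpolate cos(x) from 9 Chebyshev-Lobatto nodes on [-1, 1].
    n = 9
    x = np.cos(np.pi * np.arange(n) / (n - 1))
    f = np.cos(x)
    w = bary_weights(x)
    xi = np.linspace(-1.0, 1.0, 5)
    print(np.abs(bary_eval(x, f, w, xi) - np.cos(xi)).max())   # small interpolation error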

2021

  • G. Mengaldo, D. Moxey, M. Turner, R. C. Moura, A. Jassim, M. Taylor, J. Peiró and S. J. Sherwin, Industry-relevant implicit large-eddy simulation of a high-performance road car via spectral/hp element methods, SIAM Review, 63 (4), pp. 723–755, 2021. 10.1137/20M1345359 BibTeX Abstract
    @article{mengaldo-2020,
      title = {Industry-relevant implicit large-eddy simulation of a high-performance road car via spectral/$hp$ element methods},
      author = {Mengaldo, G. and Moxey, D. and Turner, M. and Moura, R. C. and Jassim, A. and Taylor, M. and Peir\'o, J. and Sherwin, S. J.},
      journal = {SIAM Review},
      pages = {723-755},
      issue = {63},
      number = {4},
      year = {2021},
      doi = {10.1137/20M1345359},
      url = {https://arxiv.org/pdf/2009.10178}
    }
    
    We present a successful deployment of high-fidelity Large-Eddy Simulation (LES) technologies based on spectral/hp element methods to industrial flow problems, which are characterized by high Reynolds numbers and complex geometries. In particular, we describe the numerical methods, software development and steps that were required to perform the implicit LES of a real automotive car, namely the Elemental Rp1 model. To the best of the authors’ knowledge, this simulation represents the first fifth-order accurate transient LES of an entire real car geometry. Moreover, this constitutes a key milestone towards considerably expanding the computational design envelope currently allowed in industry, where steady-state modelling remains the standard. To this end, a number of novel developments had to be made in order to overcome obstacles in mesh generation and solver technology to achieve this simulation, which we detail in this paper. The main objective is to present to the industrial and applied mathematics community a viable pathway to translate academic developments into industrial tools that can substantially advance the analysis and design capabilities of high-end engineering stakeholders. The novel developments and results were achieved using the academic-driven open-source framework Nektar++.
  • M. B. Lykkegaard, T. Dodwell and D. Moxey, Accelerating uncertainty quantification of groundwater flow modelling using deep neural networks, Comput. Meth. Appl. Mech. Eng., 383, p. 113895, 2021. 10.1016/j.cma.2021.113895 BibTeX Abstract
    @article{lykkegaard-2021,
      title = {Accelerating uncertainty quantification of groundwater flow modelling using deep neural networks},
      author = {Lykkegaard, M. B. and Dodwell, T. and Moxey, D.},
      year = {2021},
      journal = {Comput. Meth. Appl. Mech. Eng.},
      volume = {383},
      pages = {113895},
      url = {https://www.sciencedirect.com/science/article/pii/S0045782521002322},
      doi = {10.1016/j.cma.2021.113895}
    }
    
    This paper presents a novel algorithmic approach which fuses Markov Chain Monte Carlo (MCMC) and Machine Learning methods to accelerate the uncertainty quantification of fluid flow in a heterogeneous porous medium, such as groundwater flow. We formulate the governing mathematical model as a Bayesian inverse problem, permitting us to consider the model parameters as a random process with an underlying probability distribution. MCMC allows us to sample from this distribution given some real observations of the system, but it comes with some limitations: it can be prohibitively expensive when dealing with costly likelihood functions, subsequent samples are often highly correlated, and the standard Metropolis-Hastings algorithm suffers from the curse of dimensionality. This paper designs a Metropolis-Hastings proposal which exploits a deep neural network (DNN) approximation of the model, trained on samples from the prior parameter distribution, to significantly accelerate the Bayesian computations. The approach is developed by modifying a delayed acceptance (DA) model hierarchy, whereby, instead of merely screening proposals with a coarse model before passing them to the fine, proposals are generated by running short subchains using an inexpensive DNN approximation in conjunction with the preconditioned Crank-Nicolson (pCN) transition kernel. As a result, the proposal distribution inherits its dimension-independence from the pCN kernel and subsequent fine model proposals are less correlated. Using a simple adaptive error model, we estimate and correct for the bias of the DNN approximation with respect to the posterior distribution on-the-fly. The approach is tested on a synthetic example, using different DNNs trained on a varying number of prior samples. The results show that the cost of uncertainty quantification using our novel approach can be reduced by up to 75% compared to single-level pCN MCMC, depending on the precomputation cost and accuracy of the employed DNN.
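    The delayed-acceptance idea that underpins the approach – screen a proposal with a cheap approximation before paying for the expensive model, then correct so the chain still targets the true posterior – can be sketched in a few lines. The example below uses an assumed toy 1D target, a symmetric random-walk proposal and a simple biased surrogate standing in for the DNN; it is a generic delayed-acceptance Metropolis-Hastings step, not the paper's subchain/pCN algorithm.

    import numpy as np

    rng = np.random.default_rng(1)

    def log_post_fine(theta):        # "expensive" log-posterior: standard normal
        return -0.5 * theta**2

    def log_post_surrogate(theta):   # cheap, slightly biased approximation (DNN stand-in)
        return -0.5 * (theta / 1.1)**2

    def da_step(theta, step=1.0):
        prop = theta + step * rng.standard_normal()
        # Stage 1: screen the proposal with the surrogate.
        a1 = min(1.0, np.exp(log_post_surrogate(prop) - log_post_surrogate(theta)))
        if rng.random() >= a1:
            return theta
        # Stage 2: correct with the fine model so the chain targets the true posterior.
        a2 = min(1.0, np.exp(log_post_fine(prop) - log_post_fine(theta)
                             + log_post_surrogate(theta) - log_post_surrogate(prop)))
        return prop if rng.random() < a2 else theta

    theta, samples = 0.0, []
    for _ in range(5000):
        theta = da_step(theta)
        samples.append(theta)
    print(np.mean(samples), np.std(samples))   # approximately 0 and 1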
  • E. Laughton, G. Tabor and D. Moxey, A comparison of interpolation techniques for non-conformal high-order discontinuous Galerkin methods, Comput. Meth. Appl. Mech. Eng., 381, p. 113820, 2021. 10.1016/j.cma.2021.113820 BibTeX Abstract
    @article{laughton-2020,
      title = {A comparison of interpolation techniques for non-conformal high-order discontinuous Galerkin methods},
      author = {Laughton, E. and Tabor, G. and Moxey, D.},
      journal = {Comput. Meth. Appl. Mech. Eng.},
      year = {2021},
      volume = {381},
      pages = {113820},
      url = {https://www.sciencedirect.com/science/article/pii/S0045782521001560/pdfft},
      doi = {10.1016/j.cma.2021.113820}
    }
    
    The capability to incorporate moving geometric features within models for complex simulations is a common requirement in many fields. The fluid mechanics within aeronautical applications, for example, routinely feature rotating (e.g. turbines, wheels and fan blades) or sliding components (e.g. in compressor or turbine cascade simulations). With an increasing trend towards the high-fidelity modelling of these cases, in particular combined with the use of high-order discontinuous Galerkin methods, there is therefore a requirement to understand how different numerical treatments of the interfaces between the static mesh and the sliding/rotating part impact on overall solution quality. In this article, we compare two different approaches to handle this non-conformal interface. The first is the so-called mortar approach, where flux integrals along edges are split according to the positioning of the non-conformal grid. The second is a lesser-documented point-to-point interpolation method, where the interior and exterior quantities for flux evaluations are interpolated from elements lying on the opposing side of the interface. Although the mortar approach has advantages in terms of its numerical properties, in that it preserves the local conservation properties of DG methods, in the context of complex 3D meshes it poses significant implementation difficulties which the point-to-point method handles more readily. In this article we examine the numerical properties of each method, focusing not only on observing convergence orders for smooth solutions, but also how each method performs in under-resolved simulations of linear and nonlinear hyperbolic problems, to inform the use of these methods in implicit large-eddy simulations.
  • Z. Yan, Y. Pan, G. Castiglioni, K. Hillewaert, J. Peiró, D. Moxey and S. J. Sherwin, Nektar++: Design and implementation of an implicit spectral/hp element compressible flow solver using a Jacobian-free Newton Krylov approach, Comput. Math. Appl., 81, pp. 351–372, 2021. 10.1016/j.camwa.2020.03.009 BibTeX Abstract
    @article{yan-2020,
      title = {\emph{Nektar++}: Design and implementation of an implicit spectral/hp element compressible flow solver using a Jacobian-free Newton Krylov approach},
      author = {Yan, Z. and Pan, Y. and Castiglioni, G. and Hillewaert, K. and Peir\'o, J. and Moxey, D. and Sherwin, S. J.},
      year = {2021},
      journal = {Comput. Math. Appl.},
      volume = {81},
      pages = {351--372},
      url = {https://arxiv.org/abs/2002.04222},
      doi = {10.1016/j.camwa.2020.03.009}
    }
    
    At high Reynolds numbers the use of explicit in time compressible flow simulations with spectral/hp element discretisation can become significantly limited by the time step. To alleviate this limitation we extend the capability of the spectral/hp element open-source software framework, Nektar++, to include an implicit discontinuous Galerkin compressible flow solver. The integration in time is carried out by a singly diagonally implicit Runge-Kutta method. The non-linear system arising from the implicit time integration is iteratively solved by the Jacobian-free Newton Krylov (JFNK) method. A favourable feature of the JFNK approach is its extensive use of the explicit operators available from the previous explicit in time implementation. The functionalities of different building blocks of the implicit solver are analyzed from the point of view of software design and placed in appropriate hierarchical levels in the C++ libraries. In the detailed implementation, the contributions of different parts of the solver to computational cost, memory consumption and programming complexity are also analyzed. A combination of analytical and numerical methods is adopted to simplify the programming complexity in forming the preconditioning matrix. The solver is verified and tested using cases such as manufactured compressible Poiseuille flow, the Taylor-Green vortex, turbulent flow over a circular cylinder at Re = 3900 and shock wave boundary-layer interaction. The results show that the implicit solver can speed up the simulations while maintaining good simulation accuracy.
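    The Jacobian-free Newton-Krylov idea referred to above is that the Krylov solver only ever needs Jacobian-vector products, which can be approximated by a finite difference of the residual, so the Jacobian is never assembled. A minimal sketch on an assumed toy 2x2 nonlinear system, using SciPy's GMRES (not the Nektar++ solver or its preconditioning):

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def F(u):
        """Nonlinear residual F(u) = 0, with a root at u = (1, 2)."""
        return np.array([u[0]**2 + u[1] - 3.0,
                         u[0] + u[1]**2 - 5.0])

    def jfnk(u, tol=1e-10, eps=1e-7, max_newton=20):
        for _ in range(max_newton):
            r = F(u)
            if np.linalg.norm(r) < tol:
                break
            # Matrix-free Jacobian action: J v is approximated by a finite
            # difference of the residual, so J is never formed explicitly.
            J = LinearOperator((2, 2), matvec=lambda v: (F(u + eps * v) - F(u)) / eps,
                               dtype=float)
            du, _ = gmres(J, -r)               # Krylov solve for the Newton update
            u = u + du
        return u

    print(jfnk(np.array([1.0, 1.0])))          # converges to approximately [1, 2]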

2020

  • J. Marcon, G. Castiglioni, D. Moxey, S. J. Sherwin and J. Peiró, rp-adaptation for compressible flows, Int. J. Numer. Meth. Eng., 121 (23), pp. 5405–5425, 2020. 10.1002/nme.6529 BibTeX Abstract
    @article{marcon-2020,
      title = {$rp$-adaptation for compressible flows},
      author = {Marcon, J. and Castiglioni, G. and Moxey, D. and Sherwin, S. J. and Peir\'o, J.},
      volume = {121},
      number = {23},
      pages = {5405--5425},
      journal = {Int. J. Numer. Meth. Eng.},
      year = {2020},
      url = {https://onlinelibrary.wiley.com/doi/10.1002/nme.6529},
      doi = {10.1002/nme.6529}
    }
    
    We present an rp-adaptation strategy for the high-fidelity simulation of compressible inviscid flows with shocks. The mesh resolution in regions of flow discontinuities is increased by using a variational optimiser to r-adapt the mesh and cluster degrees of freedom there. In regions of smooth flow, we locally increase or decrease the resolution by increasing or decreasing the polynomial order of the elements. This dual approach allows us to take advantage of the strengths of both methods for best computational performance, thereby reducing the overall cost of the simulation. The adaptation workflow uses a sensor for both discontinuities and smooth regions that is cheap to calculate, but the framework is general and could be used in conjunction with other feature-based sensors or error estimators. We demonstrate this proof-of-concept using two geometries at transonic and supersonic flow regimes. The method was implemented in the open-source spectral/hp element framework Nektar++, and its dedicated high-order mesh generation tool NekMesh. The results show that the proposed rp-adaptation methodology is a reasonably cost-effective way of improving accuracy.
  • J. Eichstädt, M. Vymazal, D. Moxey and J. Peiró, A comparison of the shared-memory parallel programming models OpenMP, OpenACC and Kokkos in the context of implicit solvers for high-order FEM, Comput. Phys. Commun., 255, p. 107245, 2020. 10.1016/j.cpc.2020.107245 BibTeX Abstract
    @article{eichstadt-2020,
      title = {A comparison of the shared-memory parallel programming models OpenMP, OpenACC and Kokkos in the context of implicit solvers for high-order FEM},
      author = {Eichst\"adt, J. and Vymazal, M. and Moxey, D. and Peir\'o, J.},
      journal = {Comput. Phys. Commun.},
      volume = {255},
      pages = {107245},
      year = {2020},
      doi = {10.1016/j.cpc.2020.107245},
      url = {https://davidmoxey.uk/assets/pubs/2020-cpc-comparison.pdf}
    }
    
    We consider the application of three performance-portable programming models in the context of a high-order spectral element, implicit time-stepping solver for the Navier-Stokes equations. We aim to evaluate whether the use of these models allows code developers to deliver high-performance solvers for computational fluid dynamics simulations that are capable of effectively utilising both many-core CPU and GPU architectures. Using the core elliptic solver for the Navier-Stokes equations as a benchmarking guide, we evaluate the performance of these models on a range of unstructured meshes and give guidelines for the translation of existing codebases and their data structures to these models.
  • D. Moxey, R. Amici and R. M. Kirby, Efficient matrix-free high-order finite element evaluation for simplicial elements, SIAM J. Sci. Comput., 42 (3), pp. C97–C123, 2020. 10.1137/19M1246523 BibTeX Abstract
    @article{moxey-2020b,
      title = {Efficient matrix-free high-order finite element evaluation for simplicial elements},
      author = {Moxey, D. and Amici, R. and Kirby, R. M.},
      journal = {SIAM J. Sci. Comput.},
      year = {2020},
      volume = {42},
      number = {3},
      pages = {C97-C123},
      url = {https://davidmoxey.uk/assets/pubs/2020-vectorisation.pdf},
      doi = {10.1137/19M1246523}
    }
    
    With the gap between processor clock speeds and memory bandwidth speeds continuing to increase, the use of arithmetically intense schemes, such as high-order finite element methods, continues to be of considerable interest. In particular, the use of matrix-free formulations of finite element operators for tensor-product elements of quadrilaterals in two dimensions and hexahedra in three dimensions, in combination with single-instruction multiple-data (SIMD) instruction sets, is a well-studied topic at present for the efficient implicit solution of elliptic equations. However, a considerable limiting factor for this approach is the use of meshes comprising only quadrilaterals or hexahedra, the creation of which is still an open problem within the mesh generation community. In this article, we study the efficiency of high-order finite element operators for the Helmholtz equation with a focus on extending this approach to unstructured meshes of triangles, tetrahedra and prismatic elements using the spectral/hp element method and corresponding tensor-product bases for these element types. We show that although performance is naturally degraded when going from hexahedra to these simplicial elements, efficient implementations can still be obtained that are capable of attaining 50–70% of the peak FLOPS of processors with both AVX2 and AVX512 instruction sets.
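    The tensor-product structure mentioned above is what makes the matrix-free approach cheap: on a quadrilateral, evaluating u(xi1, xi2) = sum_ij uhat_ij phi_i(xi1) phi_j(xi2) at a grid of quadrature points can be written as two small matrix products instead of one large dense operator. A minimal sketch with an assumed toy (monomial) basis – not the paper's SIMD kernels or its simplicial-element bases:

    import numpy as np

    P, Q = 5, 7                                    # modes and quadrature points per direction
    z = np.linspace(-1.0, 1.0, Q)                  # stand-in quadrature points
    B = np.vander(z, P, increasing=True)           # B[q, p] = basis_p(z_q), here z**p

    uhat = np.random.default_rng(0).standard_normal((P, P))

    # Sum-factorised evaluation: two small GEMMs, O(Q P (P + Q)) work ...
    u_fast = B @ uhat @ B.T

    # ... versus the naive dense (Q^2 x P^2) operator, O(Q^2 P^2) work.
    dense = np.einsum('ai,bj->abij', B, B).reshape(Q * Q, P * P)
    u_slow = (dense @ uhat.ravel()).reshape(Q, Q)

    print(np.allclose(u_fast, u_slow))             # True: same result, far less work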
  • D. Moxey, C. D. Cantwell, Y. Bao, A. Cassinelli, G. Castiglioni, S. Chun, E. Juda, E. Kazemi, K. Lackhove, J. Marcon, G. Mengaldo, D. Serson, M. Turner, H. Xu, J. Peiró, R. M. Kirby and S. J. Sherwin, Nektar++: enhancing the capability and application of high-fidelity spectral/hp element methods, Comput. Phys. Commun., 249, p. 107110, 2020. 10.1016/j.cpc.2019.107110 BibTeX Abstract
    @article{moxey-2020a,
      title = {\emph{Nektar++}: enhancing the capability and application of high-fidelity spectral/$hp$ element methods},
      author = {Moxey, D. and Cantwell, C. D. and Bao, Y. and Cassinelli, A. and Castiglioni, G. and Chun, S. and Juda, E. and Kazemi, E. and Lackhove, K. and Marcon, J. and Mengaldo, G. and Serson, D. and Turner, M. and Xu, H. and Peir\'o, J. and Kirby, R. M. and Sherwin, S. J.},
      journal = {Comput. Phys. Commun.},
      year = {2020},
      volume = {249},
      pages = {107110},
      url = {https://www.sciencedirect.com/science/article/pii/S0010465519304175},
      doi = {10.1016/j.cpc.2019.107110}
    }
    
    Nektar++ is an open-source framework that provides a flexible, performant and scalable platform for the development of solvers for partial differential equations using the high-order spectral/hp element method. In particular, Nektar++ aims to overcome the complex implementation challenges that are often associated with high-order methods, thereby allowing them to be more readily used in a wide range of application areas. In this paper, we present the algorithmic, implementation and application developments associated with our Nektar++ version 5.0 release. We describe some of the key software and performance developments, including our strategies on parallel I/O, on in situ processing, the use of collective operations for exploiting current and emerging hardware, and interfaces to enable multi-solver coupling. Furthermore, we provide details on a newly developed Python interface that enables more rapid on-boarding of new users unfamiliar with spectral/hp element methods, C++ and/or Nektar++. This release also incorporates a number of numerical method developments – in particular: the method of moving frames (MMF), which provides an additional approach for the simulation of equations on embedded curvilinear manifolds and domains; a means of handling spatially variable polynomial order; and a novel technique for quasi-3D simulations (which combine a 2D spectral element and 1D Fourier spectral method) to permit spatially-varying perturbations to the geometry in the homogeneous direction. Finally, we demonstrate the new application-level features provided in this release, namely: a facility for generating high-order curvilinear meshes called NekMesh; a new AcousticSolver for aeroacoustic problems; and our development of a ‘thick’ strip model for the modelling of fluid-structure interaction (FSI) problems in the context of vortex-induced vibrations (VIV). We conclude by commenting on some lessons learned and by discussing some directions for future code development and expansion.

2019

  • M. Vymazal, D. Moxey, S. Sherwin, C. D. Cantwell and R. M. Kirby, On weak Dirichlet boundary conditions for elliptic problems in the continuous Galerkin method, J. Comput. Phys., 394, pp. 732–744, 2019. 10.1016/j.jcp.2019.05.021 BibTeX Abstract
    @article{vymazal-2019,
      title = {On weak Dirichlet boundary conditions for elliptic problems in the continuous Galerkin method},
      author = {Vymazal, M. and Moxey, D. and Sherwin, S. and Cantwell, C. D. and Kirby, R. M.},
      year = {2019},
      journal = {J. Comput. Phys.},
      volume = {394},
      pages = {732-744},
      doi = {10.1016/j.jcp.2019.05.021},
      url = {https://davidmoxey.uk/assets/pubs/2019-weak-bcs.pdf}
    }
    
    We combine continuous and discontinuous Galerkin methods in the setting of a model diffusion problem. Starting from a hybrid discontinuous formulation, we replace element interiors by more general subsets of the computational domain - groups of elements that support a piecewise-polynomial continuous expansion. This step allows us to identify a new weak formulation of Dirichlet boundary condition in the continuous framework. We show that the boundary condition leads to a stable discretization with a single parameter insensitive to mesh size and polynomial order of the expansion. The robustness of the approach is demonstrated on several numerical examples.
  • A. Yakhot, Y. Feldman, D. Moxey, S. J. Sherwin and G. E. Karniadakis, Turbulence in a localized puff in a pipe, Flow Turbul. Combust., 103 (1), pp. 1–24, 2019. 10.1007/s10494-018-0002-8 BibTeX Abstract
    @article{yakhot-2019,
      title = {Turbulence in a localized puff in a pipe},
      author = {Yakhot, A. and Feldman, Y. and Moxey, D. and Sherwin, S. J. and Karniadakis, G. E.},
      journal = {Flow Turbul. Combust.},
      volume = {103},
      number = {1},
      pages = {1--24},
      year = {2019},
      url = {https://davidmoxey.uk/assets/pubs/2018-puff-turb.pdf},
      doi = {10.1007/s10494-018-0002-8}
    }
    
    We have performed direct numerical simulations of a spatio-temporally intermittent flow in a pipe for Rem = 2250. From previous experiments and simulations of pipe flow, this value has been estimated as a threshold when the average speeds of upstream and downstream fronts of a puff are identical. We investigated the structure of an individual puff by considering three-dimensional snapshots over a long time period. To assimilate the velocity data, we applied a conditional sampling based on the location of the maximum energy of the transverse (turbulent) motion. Specifically, at each time instance, we followed a turbulent puff by a three-dimensional moving window centered at that location. We collected a snapshot-ensemble (10000 time instances, snapshots) of the velocity fields acquired over T = 2000D/U time interval inside the moving window. The cross-plane velocity field inside the puff showed the dynamics of a developing turbulence. In particular, the analysis of the cross-plane radial motion yielded the illustration of the production of turbulent kinetic energy directly from the mean flow. A snapshot-ensemble averaging over 10000 snapshots revealed azimuthally arranged large-scale (coherent) structures indicating near-wall sweep and ejection activity. The localized puff is about 15-17 pipe diameters long and the flow regime upstream of its upstream edge and downstream of its leading edge is almost laminar. In the near-wall region, despite the low Reynolds number, the turbulence statistics, in particular, the distribution of turbulence intensities, Reynolds shear stress, skewness and flatness factors, become similar to a fully-developed turbulent pipe flow in the vicinity of the puff upstream edge. In the puff core, the velocity profile becomes flat and logarithmic. It is shown that this “fully-developed turbulent flash” is very narrow being about two pipe diameters long.
  • D. Moxey, S. P. Sastry and R. M. Kirby, Interpolation error bounds for curvilinear finite elements and their implications on adaptive mesh refinement, J. Sci. Comp., 78 (2), pp. 1045–1062, 2019. 10.1007/s10915-018-0795-6 BibTeX Abstract
    @article{moxey-2019,
      title = {Interpolation error bounds for curvilinear finite elements and their implications on adaptive mesh refinement},
      author = {Moxey, D. and Sastry, S. P. and Kirby, R. M.},
      journal = {J. Sci. Comp.},
      volume = {78},
      number = {2},
      pages = {1045-1062},
      year = {2019},
      doi = {10.1007/s10915-018-0795-6},
      url = {http://dx.doi.org/10.1007/s10915-018-0795-6}
    }
    
    There is an increasing requirement from both academia and industry for high-fidelity flow simulations that are able to accurately capture complicated and transient flow dynamics in complex geometries. Coupled with the growing availability of high-performance, highly parallel computing resources, there is therefore a demand for scalable numerical methods and corresponding software frameworks which can deliver the next generation of complex and detailed fluid simulations to scientists and engineers in an efficient way. In this article we discuss recent and upcoming advances in the use of the spectral/hp element method for addressing these modelling challenges. To use these methods efficiently for such applications, it is critical that computational resolution is placed in the regions of the flow where it is needed most, which is often not known a priori. We propose the use of spatially and temporally varying polynomial order, coupled with appropriate error estimators, as key requirements in permitting these methods to achieve computationally efficient high-fidelity solutions to complex flow problems in the fluid dynamics community.

2018

  • M. Turner, J. Peiró and D. Moxey, Curvilinear mesh generation using a variational framework, Comput. Aided Design, 103, pp. 73–91, 2018. 10.1016/j.cad.2017.10.004 BibTeX Abstract
    @article{turner-2018,
      title = {Curvilinear mesh generation using a variational framework},
      author = {Turner, M. and Peir\'o, J. and Moxey, D.},
      journal = {Comput. Aided Design},
      volume = {103},
      pages = {73-91},
      year = {2018},
      doi = {10.1016/j.cad.2017.10.004},
      url = {http://www.sciencedirect.com/science/article/pii/S0010448517301744}
    }
    
    We aim to tackle the challenge of generating unstructured high-order meshes of complex three-dimensional bodies, which remains a significant bottleneck in the wider adoption of high-order methods. In particular we show that by adopting a variational approach to the generation process, many of the current popular high-order generation methods can be encompassed under a single unifying framework. This allows us to compare the effectiveness of these methods and to assess the quality of the meshes they produce in a systematic fashion. We present a detailed overview of the theory and numerical implementation of the framework, and in particular we highlight how this can be effectively exploited to yield a highly-efficient parallel implementation. The effectiveness of this approach is examined by considering a number of two- and three-dimensional examples, where we show how it can be used for both mesh quality optimisation and untangling of invalid meshes.
  • J. Eichstädt, M. Green, M. Turner, J. Peiró and D. Moxey, Accelerating high-order mesh generation with an architecture-independent programming model, Comput. Phys. Commun., 229, pp. 36–53, 2018. 10.1016/j.cpc.2018.03.025 BibTeX Abstract
    @article{eichstadt-2018,
      title = {Accelerating high-order mesh generation with an architecture-independent programming model},
      author = {Eichst\"adt, J. and Green, M. and Turner, M. and Peir\'o, J. and Moxey, D.},
      journal = {Comput. Phys. Commun.},
      volume = {229},
      pages = {36-53},
      year = {2018},
      doi = {10.1016/j.cpc.2018.03.025},
      url = {https://www.sciencedirect.com/science/article/pii/S0010465518300973}
    }
    
    Heterogeneous manycore performance-portable programming models and libraries, such as Kokkos, have been developed to facilitate portability and maintainability of high-performance computing codes and enhance their resilience to architectural changes. Here we investigate the suitability of the Kokkos programming model for optimizing the performance of the high-order mesh generator NekMesh, which has been developed to efficiently generate meshes containing millions of elements for industrial problems involving complex geometries. We describe the variational approach for a posteriori high-order mesh generation employed within NekMesh and its parallel implementation. We discuss its optimisation for modern manycore massively parallel shared-memory CPU and GPU platforms using Kokkos and demonstrate that we achieve increased performance on multicore CPUs and accelerators compared with a native Pthreads implementation. Further, we show that we achieve additional speedup and cost reduction by running on GPUs without any hardware-specific code optimisation.
  • D. de Grazia, D. Moxey, S. J. Sherwin, M. A. Kravtsova and A. I. Ruban, DNS of a compressible boundary layer flow past an isolated three-dimensional hump in a high-speed subsonic regime, Phys. Rev. Fluids, 3, p. 024101, 2018. 10.1103/PhysRevFluids.3.024101 BibTeX Abstract
    @article{degrazia-2016,
      title = {DNS of a compressible boundary layer flow past an isolated three-dimensional hump in a high-speed subsonic regime},
      author = {de Grazia, D. and Moxey, D. and Sherwin, S. J. and Kravtsova, M. A. and Ruban, A. I.},
      journal = {Phys. Rev. Fluids},
      volume = {3},
      pages = {024101},
      year = {2018},
      doi = {10.1103/PhysRevFluids.3.024101},
      url = {https://davidmoxey.uk/assets/pubs/2018-prf.pdf}
    }
    
    In this paper we study the boundary-layer separation produced in a high-speed subsonic boundary layer by a small wall roughness. Specifically, we present a direct numerical simulation (DNS) of a two-dimensional boundary-layer flow over a flat plate encountering a three-dimensional Gaussian-shaped hump. This work was motivated by the lack of DNS data of boundary-layer flows past roughness elements in a similar regime which is typical of civil aviation. The Mach and Reynolds numbers are chosen to be relevant for aeronautical applications when considering small imperfections at the leading edge of wings. We analyze different heights of the hump: The smaller heights result in a weakly nonlinear regime, while the larger result in a fully nonlinear regime with an increasing laminar separation bubble arising downstream of the roughness element and the formation of a pair of streamwise counterrotating vortices which appear to support themselves.

2017

  • D. Ekelschot, D. Moxey, S. J. Sherwin and J. Peiró, A p-adaptation method for compressible flow problems using a goal-based error estimator, Comput. Struct., 181, pp. 55–69, 2017. 10.1016/j.compstruc.2016.03.004 BibTeX Abstract
    @article{ekelschot-2017,
      title = {A $p$-adaptation method for compressible flow problems using a goal-based error estimator},
      author = {Ekelschot, D. and Moxey, D. and Sherwin, S. J. and Peir\'o, J.},
      journal = {Comput. Struct.},
      volume = {181},
      pages = {55-69},
      year = {2017},
      doi = {10.1016/j.compstruc.2016.03.004},
      url = {https://davidmoxey.uk/assets/pubs/2016-padapt.pdf}
    }
    
    An accurate calculation of aerodynamic force coefficients for a given geometry is of fundamental importance for aircraft design. High-order spectral/hp element methods, which use a discontinuous Galerkin discretisation of the compressible Navier–Stokes equations, are now increasingly being used to improve the accuracy of flow simulations and thus the force coefficients. To reduce error in the calculated force coefficients whilst keeping computational cost minimal, we propose a p-adaptation method where the degree of the approximating polynomial is locally increased in the regions of the flow where low resolution is identified, using a goal-based error estimator as follows. Given an objective functional such as the aerodynamic force coefficients, we use control theory to derive an adjoint problem which provides the sensitivity of the functional with respect to changes in the flow variables, and assume that these changes are represented by the local truncation error. In its final form, the goal-based error indicator represents the effect of truncation error on the objective functional, suitably weighted by the adjoint solution. Both flow governing and adjoint equations are solved by the same high-order method, where we allow the degree of the polynomial within an element to vary across the mesh. We initially calculate a steady-state solution to the governing equations using a low polynomial order and use the goal-based error indicator to identify parts of the computational domain that require improved solution accuracy, which is achieved by increasing the approximation order. We demonstrate the cost-effectiveness of our method across a range of polynomial orders by considering a number of examples in two and three dimensions and in subsonic and transonic flow regimes. Reductions in both the number of degrees of freedom required to resolve the force coefficients to a given error and in the computational cost are observed when using the p-adaptive technique.
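    The adjoint weighting described above can be summarised, in generic goal-oriented error estimation notation (a sketch with assumed symbols, not taken verbatim from the paper, and up to sign convention), as

    \[
      J(u) - J(u_h) \;\approx\; \bigl(\psi,\, \tau_h\bigr)_{\Omega}
                    \;=\; \sum_{e} \bigl(\psi,\, \tau_h\bigr)_{\Omega_e},
    \]

    where J is the objective functional (e.g. a force coefficient), u_h the discrete solution, \tau_h its local truncation error and \psi the adjoint solution, so that the elementwise contributions |(\psi, \tau_h)_{\Omega_e}| indicate where the polynomial order should be increased.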

2016

  • D. Moxey, C. D. Cantwell, R. M. Kirby and S. J. Sherwin, Optimizing the performance of the spectral/hp element method with collective linear algebra operations, Comput. Meth. Appl. Mech. Eng., 310, pp. 628–645, 2016. 10.1016/j.cma.2016.07.001 BibTeX Abstract
    @article{moxey-2016b,
      title = {Optimizing the performance of the spectral/hp element method with collective linear algebra operations},
      author = {Moxey, D. and Cantwell, C. D. and Kirby, R. M. and Sherwin, S. J.},
      journal = {Comput. Meth. Appl. Mech. Eng.},
      volume = {310},
      pages = {628--645},
      year = {2016},
      url = {http://www.sciencedirect.com/science/article/pii/S0045782516306739},
      doi = {10.1016/j.cma.2016.07.001}
    }
    
    As high-performance computing hardware evolves, increasing core counts mean that memory bandwidth is becoming the deciding factor in attaining peak CPU performance. Methods that make efficient use of memory and caches are therefore essential for modern hardware. High-order finite element methods, such as those implemented in the spectral/hp framework Nektar++, are particularly well-suited to this environment. Unlike low-order methods that typically utilize sparse storage, matrices representing high-order operators have greater density and richer structure. In this paper, we show how these qualities can be exploited to increase runtime performance by amalgamating the action of key operators on multiple elements into a single, memory-efficient block. We investigate different strategies for achieving optimal performance across a range of polynomial orders and element types. As these strategies all depend on external factors such as BLAS implementation and the geometry of interest, we present a technique for automatically selecting the most efficient strategy at runtime.
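    The amalgamation described above can be pictured as replacing a loop of small elemental matrix-vector products with a single block matrix-matrix product over all elements. A minimal sketch with assumed toy sizes and a shared elemental operator – not the Nektar++ Collections implementation or its strategy selection:

    import numpy as np

    rng = np.random.default_rng(2)
    n_modes, n_quad, n_elmt = 20, 36, 1000
    Op = rng.standard_normal((n_quad, n_modes))        # operator shared by all elements
    coeffs = rng.standard_normal((n_modes, n_elmt))    # one column of coefficients per element

    # Element-by-element application: many small matrix-vector products ...
    out_loop = np.stack([Op @ coeffs[:, e] for e in range(n_elmt)], axis=1)

    # ... versus a single, memory-friendly block operation over all elements at once.
    out_block = Op @ coeffs

    print(np.allclose(out_loop, out_block))            # True: identical results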
  • A. Bolis, C. D. Cantwell, D. Moxey, D. Serson and S. J. Sherwin, An adaptable parallel algorithm for the direct numerical simulation of incompressible turbulent flows using a Fourier spectral/hp element method and MPI virtual topologies, Comput. Phys. Commun., 206, pp. 17–25, 2016. 10.1016/j.cpc.2016.04.011 BibTeX Abstract
    @article{bolis-2016,
      title = {{An adaptable parallel algorithm for the direct numerical simulation of incompressible turbulent flows using a Fourier spectral/hp element method and MPI virtual topologies}},
      author = {Bolis, A. and Cantwell, C. D. and Moxey, D. and Serson, D. and Sherwin, S. J.},
      journal = {Comput. Phys. Commun.},
      volume = {206},
      pages = {17--25},
      year = {2016},
      doi = {10.1016/j.cpc.2016.04.011},
      url = {http://www.sciencedirect.com/science/article/pii/S001046551630100X}
    }
    
    A hybrid parallelisation technique for distributed memory systems is investigated for a coupled Fourier spectral/hp element discretisation of domains characterised by geometric homogeneity in one or more directions. The performance of the approach is mathematically modelled in terms of operation count and communication costs for identifying the most efficient parameter choices. The model is calibrated to target a specific hardware platform after which it is shown to accurately predict the performance in the hybrid regime. The method is applied to modelling turbulent flow using the incompressible Navier-Stokes equations in an axisymmetric pipe and square channel. The hybrid method extends the practical limitations of the discretisation, allowing greater parallelism and reduced wall times. Performance is shown to continue to scale when both parallelisation strategies are used.
  • J.-E. W. Lombard, D. Moxey, S. J. Sherwin, J. F. A. Hoessler, S. Dhandapani and M. J. Taylor, Implicit large-eddy simulation of a wingtip vortex, AIAA J., 54 (2), pp. 506–518, 2016. 10.2514/1.J054181 BibTeX Abstract
    @article{lombard-2016,
      title = {Implicit large-eddy simulation of a wingtip vortex},
      author = {Lombard, J.-E. W. and Moxey, D. and Sherwin, S. J. and Hoessler, J. F. A. and Dhandapani, S. and Taylor, M. J.},
      year = {2016},
      journal = {AIAA J.},
      volume = {54},
      number = {2},
      pages = {506--518},
      url = {http://arxiv.org/abs/1507.06012},
      doi = {10.2514/1.J054181}
    }
    
    In this article, recent developments in numerical methods for performing a large-eddy simulation of the formation and evolution of a wingtip vortex are presented. The development of these vortices in the near wake, in combination with the large Reynolds numbers present in these cases, makes these types of test cases particularly challenging to investigate numerically. First, an overview is given of the spectral vanishing viscosity/implicit large-eddy simulation solver that is used to perform the simulations, and techniques are highlighted that have been adopted to solve various numerical issues that arise when studying such cases. To demonstrate the method’s viability, results are presented from numerical simulations of flow over a NACA 0012 profile wingtip at Re_c = 1.2×10^6 and they are compared against experimental data, which is to date the highest Reynolds number achieved for a large-eddy simulation that has been correlated with experiments for this test case. The model in this paper correlates favorably with experiment, both for the characteristic jetting in the primary vortex and pressure distribution on the wing surface. The proposed method is of general interest for the modeling of transitioning vortex-dominated flows over complex geometries.
  • S. Yakovlev, D. Moxey, S. J. Sherwin and R. M. Kirby, To CG or to HDG: a comparative study in 3D, J. Sci. Comp., 67 (1), pp. 192–220, 2016. 10.1007/s10915-015-0076-6 BibTeX Abstract
    @article{yakovlev-2016,
      title = {{To CG or to HDG: a comparative study in 3D}},
      author = {Yakovlev, S. and Moxey, D. and Sherwin, S. J. and Kirby, R. M.},
      journal = {J. Sci. Comp.},
      volume = {67},
      number = {1},
      pages = {{192-220}},
      year = {2016},
      url = {https://davidmoxey.uk/assets/pubs/2015-hdg.pdf},
      doi = {10.1007/s10915-015-0076-6}
    }
    
    Since the inception of discontinuous Galerkin (DG) methods for elliptic problems, there has existed a question of whether DG methods can be made more computationally efficient than continuous Galerkin (CG) methods. The fewer degrees of freedom, the approximation properties for elliptic problems and the number of optimization techniques, such as static condensation, available within the CG framework made it challenging for DG methods to be competitive until recently. However, with the introduction of a static-condensation-amenable DG method – the hybridizable discontinuous Galerkin (HDG) method – it has become possible to perform a realistic comparison of CG and HDG methods when applied to elliptic problems. In this work, we extend upon an earlier 2D comparative study, providing numerical results and discussion of the CG and HDG method performance in three dimensions. The comparison categories covered include steady-state elliptic and time-dependent parabolic problems, various element types and serial and parallel performance. The postprocessing technique, which allows for superconvergence in the HDG case, is also discussed. Depending on the linear system solver used and the type of the problem (steady-state vs time-dependent) in question, the HDG method either outperforms or demonstrates comparable performance when compared with the CG method. The HDG method, however, falls behind performance-wise when the iterative solver is used, which indicates the need for an effective preconditioning strategy for the method.
  • D. Moxey, D. Ekelschot, Ü. Keskin, S. J. Sherwin and J. Peiró, High-order curvilinear meshing using a thermo-elastic analogy, Comput. Aided Design, 72, pp. 130–139, 2016. 10.1016/j.cad.2015.09.007 BibTeX Abstract
    @article{moxey-2016a,
      title = {High-order curvilinear meshing using a thermo-elastic analogy},
      author = {Moxey, D. and Ekelschot, D. and Keskin, {\"U}. and Sherwin, S. J. and Peir{\'o}, J.},
      journal = {Comput. Aided Design},
      volume = {72},
      pages = {130--139},
      year = {2016},
      url = {http://www.sciencedirect.com/science/article/pii/S0010448515001530},
      doi = {10.1016/j.cad.2015.09.007}
    }
    
    With high-order methods becoming increasingly popular in both academia and industry, generating curvilinear meshes that align with the boundaries of complex geometries continues to present a significant challenge. Whereas traditional low-order methods use planar-faced elements, high-order methods introduce curvature into elements that may, if added naively, cause the element to self-intersect. Over the last few years, several curvilinear mesh generation techniques have been designed to tackle this issue, utilising mesh deformation to move the interior nodes of the mesh in order to accommodate curvature at the boundary. Many of these are based on elastic models, where the mesh is treated as a solid body and deformed according to a linear or non-linear stress tensor. However, such methods typically have no explicit control over the validity of the elements in the resulting mesh. In this article, we present an extension of this elastic formulation, whereby a thermal stress term is introduced to ‘heat’ or ‘cool’ elements as they deform. We outline a proof-of-concept implementation and show that the adoption of a thermo-elastic analogy leads to an additional degree of robustness, by considering examples in both two and three dimensions.

2015

  • G. Mengaldo, D. de Grazia, D. Moxey, P. E. Vincent and S. J. Sherwin, Dealiasing techniques for high-order spectral element methods on regular and irregular grids, J. Comput. Phys., 299, pp. 56–81, 2015. 10.1016/j.jcp.2015.06.032 BibTeX Abstract
    @article{mengaldo-2015,
      title = {{Dealiasing techniques for high-order spectral element methods on regular and irregular grids}},
      author = {Mengaldo, G. and de Grazia, D. and Moxey, D. and Vincent, P. E. and Sherwin, S. J.},
      journal = {J. Comput. Phys.},
      year = {2015},
      volume = {299},
      pages = {56--81},
      doi = {10.1016/j.jcp.2015.06.032},
      url = {http://www.sciencedirect.com/science/article/pii/S0021999115004301}
    }
    
    High-order methods are becoming increasingly attractive in both academia and industry, especially in the context of computational fluid dynamics. However, before they can be more widely adopted, issues such as lack of robustness in terms of numerical stability need to be addressed, particularly when treating industrial-type problems where challenging geometries and a wide range of physical scales, typically due to high Reynolds numbers, need to be taken into account. One source of instability is aliasing effects which arise from the nonlinearity of the underlying problem. In this work we detail two dealiasing strategies based on the concept of consistent integration, the first of which uses a localised approach which is useful when the nonlinearities only arise in parts of the problem and the second a more traditional approach of using a higher quadrature. The main goal of both dealiasing techniques is to improve the robustness of high order spectral element methods, thereby reducing aliasing-driven instabilities. We demonstrate how these two strategies can be effectively applied to both continuous and discontinuous discretisations, where in the latter both volumetric and interface approximations must be considered. We show the key features of each dealiasing technique applied to the scalar conservation law with numerical examples and we highlight the main differences in implementation between continuous and discontinuous spatial discretisations.
  • C. D. Cantwell, D. Moxey, A. Comerford, A. Bolis, G. Rocco, G. Mengaldo, D. de Grazia, S. Yakovlev, J.-E. Lombard, D. Ekelschot, B. Jordi, H. Xu, Y. Mohamied, C. Eskilsson, B. Nelson, P. Vos, C. Biotto, R. M. Kirby and S. J. Sherwin, Nektar++: An open-source spectral/hp element framework, Comput. Phys. Commun., 192, pp. 205–219, 2015. 10.1016/j.cpc.2015.02.008 BibTeX Abstract
    @article{cantwell-2015,
      title = {Nektar++: An open-source spectral/hp element framework},
      author = {Cantwell, C. D. and Moxey, D. and Comerford, A. and Bolis, A. and Rocco, G. and Mengaldo, G. and de Grazia, D. and Yakovlev, S. and Lombard, J.-E. and Ekelschot, D. and Jordi, B. and Xu, H. and Mohamied, Y. and Eskilsson, C. and Nelson, B. and Vos, P. and Biotto, C. and Kirby, R. M. and Sherwin, S. J.},
      journal = {Comput. Phys. Commun.},
      volume = {192},
      pages = {205--219},
      year = {2015},
      doi = {10.1016/j.cpc.2015.02.008},
      url = {http://www.sciencedirect.com/science/article/pii/S0010465515000533}
    }
    
    Nektar++ is an open-source software framework designed to support the development of high-performance scalable solvers for partial differential equations using the spectral/hp element method. High-order methods are gaining prominence in several engineering and biomedical applications due to their improved accuracy at reduced computational cost. However, their proliferation is often limited by implementational complexity, which makes practically embracing these methods particularly challenging. Nektar++ is an initiative to overcome this limitation by encapsulating the mathematical complexities of the underlying method within an efficient C++ framework, making the techniques more accessible to the broader scientific and industrial communities for solving a range of problems. The software supports a variety of discretisation techniques and implementation strategies, supporting methods research as well as application-focused computation, and the multi-layered structure of the framework allows the user to embrace as much or as little of the complexity as they need. The libraries capture the mathematical constructs of spectral/hp element methods, while the associated collection of pre-written PDE solvers provides out-of-the-box application-level functionality and a template for users who wish to develop solutions for addressing questions in their own scientific domains.
  • D. Moxey, M. D. Green, S. J. Sherwin and J. Peiró
    An isoparametric approach to high-order curvilinear boundary-layer meshing
    Comput. Meth. Appl. Mech. Eng., 283, pp. 636–650, 2015. 10.1016/j.cma.2014.09.019 BibTeX Abstract
    @article{moxey-2015a,
      title = {An isoparametric approach to high-order curvilinear boundary-layer meshing},
      author = {Moxey, D. and Green, M. D. and Sherwin, S. J. and Peir{\'o}, J.},
      journal = {Comput. Meth. Appl. Mech. Eng.},
      volume = {283},
      pages = {636--650},
      year = {2015},
      doi = {10.1016/j.cma.2014.09.019},
      url = {http://www.sciencedirect.com/science/article/pii/S004578251400334X}
    }
    
    The generation of high-order curvilinear meshes for complex three-dimensional geometries is presently a challenging topic, particularly for meshes used in simulations at high Reynolds numbers where a thin boundary layer exists near walls and elements are highly stretched in the direction normal to the flow. In this paper, we present a conceptually simple but very effective and modular method to address this issue. We propose an isoparametric approach, whereby a mesh containing a valid coarse discretisation comprising high-order triangular prisms near walls is refined to obtain a finer prismatic or tetrahedral boundary-layer mesh. The validity of the prismatic mesh provides a suitable mapping that allows one to obtain very fine mesh resolutions across the thickness of the boundary layer. We describe the method in detail for a high-order approximation using modal basis functions, discuss the requirements for the splitting method to produce valid prismatic and tetrahedral meshes and provide a sufficient criterion of validity in both cases. By considering two complex aeronautical configurations, we demonstrate how highly stretched meshes with sufficient resolution within the laminar sublayer can be generated to enable the simulation of flows with Reynolds numbers of 10^6 and above.
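As a loose illustration of the splitting idea in the boundary-layer meshing entry above, the following sketch reuses a quadratic isoparametric edge mapping to place geometrically graded layer interfaces along a single wall-normal edge. It is a minimal one-dimensional toy, not the method's implementation: the node coordinates, growth ratio and helper names are invented for the example.

    # Illustrative sketch only: reuse a quadratic (isoparametric) edge mapping x(xi),
    # xi in [-1, 1], to place graded boundary-layer interfaces along a curved edge.
    import numpy as np

    def quadratic_map(x_nodes, xi):
        """Evaluate a quadratic Lagrange mapping at reference coordinate(s) xi."""
        x0, x1, x2 = x_nodes                      # physical positions of xi = -1, 0, +1
        phi0 = 0.5 * xi * (xi - 1.0)
        phi1 = (1.0 - xi) * (1.0 + xi)
        phi2 = 0.5 * xi * (xi + 1.0)
        return x0 * phi0 + x1 * phi1 + x2 * phi2

    def graded_reference_points(n_layers, ratio):
        """Geometrically graded points in [-1, 1], clustered towards the wall at xi = -1."""
        widths = ratio ** np.arange(n_layers)     # relative layer thicknesses
        cum = np.concatenate(([0.0], np.cumsum(widths)))
        return -1.0 + 2.0 * cum / cum[-1]

    if __name__ == "__main__":
        edge_nodes = (0.0, 0.6, 1.0)              # hypothetical wall-normal edge geometry
        xi = graded_reference_points(n_layers=6, ratio=1.5)
        print("layer interfaces:", np.round(quadratic_map(edge_nodes, xi), 4))

Because the new interfaces are generated through the element's own mapping, the curvature of the coarse element is inherited by every sublayer, which is the property the isoparametric approach exploits in three dimensions.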

2014

  • E. Ferrer, D. Moxey, S. J. Sherwin and R. H. J. Willden
    Stability of projection methods for incompressible flows using high order pressure-velocity pairs of same degree: Continuous and Discontinuous Galerkin formulations
    Commun. Comp. Phys., 16 (3), pp. 817–840, 2014. 10.4208/cicp.290114.170414a BibTeX Abstract
    @article{ferrer-2014,
      title = {{Stability of projection methods for incompressible flows using high order pressure-velocity pairs of same degree: Continuous and Discontinuous Galerkin formulations}},
      author = {Ferrer, E. and Moxey, D. and Sherwin, S. J. and Willden, R. H. J.},
      volume = {16},
      number = {3},
      pages = {817-840},
      doi = {10.4208/cicp.290114.170414a},
      year = {2014},
      journal = {Commun. Comp. Phys.},
      url = {https://davidmoxey.uk/assets/pubs/2014-temporal.pdf}
    }
    
    This paper presents limits for stability of projection type schemes when using high order pressure-velocity pairs of same degree. Two high-order h/p variational methods encompassing continuous and discontinuous Galerkin formulations are used to explain previously observed lower limits on the time step for projection type schemes to be stable, when h- or p-refinement strategies are considered. In addition, the analysis included in this work shows that these stability limits do not depend only on the time step but on the product of the latter and the kinematic viscosity, which is of particular importance in the study of high Reynolds number flows. We show that high order methods prove advantageous in stabilising the simulations when small time steps and low kinematic viscosities are used. Drawing upon this analysis, we demonstrate how the effects of this instability can be reduced in the discontinuous scheme by introducing a stabilisation term into the global system. Finally, we show that these lower limits are compatible with Courant-Friedrichs-Lewy (CFL) type restrictions, given that a sufficiently high polynomial order or a small enough mesh spacing is selected.
  • D. de Grazia, G. Mengaldo, D. Moxey, P. E. Vincent and S. J. Sherwin
    Connections between the discontinuous Galerkin method and high-order flux reconstruction schemes
    Int. J. Numer. Meth. Fl., 75 (12), pp. 860–877, 2014. 10.1002/fld.3915 BibTeX Abstract
    @article{degrazia-2014,
      title = {{Connections between the discontinuous Galerkin method and high-order flux reconstruction schemes}},
      author = {de Grazia, D. and Mengaldo, G. and Moxey, D. and Vincent, P. E. and Sherwin, S. J.},
      volume = {75},
      number = {12},
      issn = {1097-0363},
      doi = {10.1002/fld.3915},
      pages = {860--877},
      year = {2014},
      url = {https://davidmoxey.uk/assets/pubs/2014-frdg.pdf},
      journal = {Int. J. Numer. Meth. Fl.}
    }
    
    With high-order methods becoming more widely adopted throughout the field of computational fluid dynamics, the development of new computationally efficient algorithms has increased tremendously in recent years. The flux reconstruction approach allows various well-known high order schemes to be cast within a single unifying framework. Whilst a connection between flux reconstruction and the discontinuous Galerkin method has been established elsewhere, it still remains to fully investigate the explicit connections between the many popular variants of the discontinuous Galerkin method and the flux reconstruction approach. In this work, we closely examine the connections between three nodal versions of tensor product discontinuous Galerkin spectral element approximations and two types of flux reconstruction schemes for solving systems of conservation laws on quadrilateral meshes. The different types of discontinuous Galerkin approximations arise from the choice of the solution nodes of the Lagrange basis representing the solution and from the quadrature approximation used to integrate the mass matrix and the other terms of the discretisation. By considering both a linear and nonlinear advection equation on a regular grid, we examine the mathematical properties which connect these discretisations. These arguments are further confirmed by the results of an empirical numerical study.
  • J. Cohen, C. D. Cantwell, N. P. C. Hong, D. Moxey, M. Illingworth, A. Turner, J. Darlington and S. J. Sherwin
    Simplifying the Development, Use and Sustainability of HPC Software
    J. Open Res. Soft., 2 (1), 2014. 10.5334/jors.az BibTeX Abstract
    @article{cohen-2014,
      author = {Cohen, J. and Cantwell, C. D. and Hong, N. P. Chue and Moxey, D. and Illingworth, M. and Turner, A. and Darlington, J. and Sherwin, S. J.},
      title = {Simplifying the Development, Use and Sustainability of HPC Software},
      journal = {J. Open Res. Soft.},
      volume = {2},
      number = {1},
      year = {2014},
      issn = {2049-9647},
      url = {https://davidmoxey.uk/assets/pubs/2014-jors.pdf},
      doi = {10.5334/jors.az}
    }
    
    Developing software to undertake complex, compute-intensive scientific processes requires a challenging combination of both specialist domain knowledge and software development skills to convert this knowledge into efficient code. As computational platforms become increasingly heterogeneous and newer types of platform such as Infrastructure-as-a-Service (IaaS) cloud computing become more widely accepted for high-performance computing (HPC), scientists require more support from computer scientists and resource providers to develop efficient code that offers long-term sustainability and makes optimal use of the resources available to them. As part of the libhpc stage 1 and 2 projects we are developing a framework to provide a richer means of job specification and efficient execution of complex scientific software on heterogeneous infrastructure. In this updated version of our submission to the WSSSPE13 workshop at SuperComputing 2013 we set out our approach to simplifying access to HPC applications and resources for end-users through the use of flexible and interchangeable software components and associated high-level functional-style operations. We believe this approach can support sustainability of scientific software and help to widen access to it.

2011

  • K. Avila, D. Moxey, A. de Lozar, M. Avila, D. Barkley and B. Hof
    The onset of turbulence in pipe flow
    Science, 333 (6039), pp. 192–196, 2011. 10.1126/science.1203223 BibTeX Abstract
    @article{avila-2011,
      title = {{The onset of turbulence in pipe flow}},
      author = {Avila, K. and Moxey, D. and de Lozar, A. and Avila, M. and Barkley, D. and Hof, B.},
      volume = {333},
      number = {6039},
      pages = {192--196},
      year = {2011},
      month = may,
      journal = {Science},
      note = {published as a research article},
      doi = {10.1126/science.1203223},
      url = {https://davidmoxey.uk/assets/pubs/2011-science.pdf}
    }
    
    Shear flows undergo a sudden transition from laminar to turbulent motion as the velocity increases and the onset of turbulence radically changes transport efficiency and mixing properties. Even for the well-studied case of pipe flow, it has not been possible to determine at what Reynolds number the motion will be either persistently turbulent or ultimately laminar. We show that in pipes, turbulence which is transient at low Reynolds numbers becomes sustained at a distinct critical point. Through extensive experiments and computer simulations we are able to identify and characterize the processes ultimately responsible for sustaining turbulence. In contrast to the classical Landau-Ruelle-Takens view that turbulence arises from an increase in the temporal complexity of fluid motion, here spatial proliferation of chaotic domains is the decisive process and intrinsic to the nature of fluid turbulence.

2010

  • D. Moxey and D. Barkley
    Distinct large-scale turbulent-laminar states in transitional pipe flow
    Proc. Nat. Acad. Sci., 107 (18), pp. 8091–8096, 2010. 10.1073/pnas.0909560107 BibTeX Abstract
    @article{moxey-2010,
      title = {{Distinct large-scale turbulent-laminar states in transitional pipe flow}},
      author = {Moxey, D. and Barkley, D.},
      journal = {Proc. Nat. Acad. Sci.},
      volume = {107},
      number = {18},
      pages = {8091--8096},
      year = {2010},
      month = may,
      doi = {10.1073/pnas.0909560107},
      url = {https://davidmoxey.uk/assets/pubs/2010-pnas.pdf}
    }
    
    When fluid flows through a channel, pipe, or duct, there are two basic forms of motion: smooth laminar motion and complex turbulent motion. The discontinuous transition between these states is a fundamental problem that has been studied for more than 100 years. What has received far less attention is the large-scale nature of the turbulent flows near transition once they are established. We have carried out extensive numerical computations in pipes of variable lengths up to 125 diameters to investigate the nature of transitional turbulence in pipe flow. We show the existence of three fundamentally different turbulent states separated by two distinct Reynolds numbers. Below Re1 ≈ 2300, turbulence takes the form of familiar equilibrium (or long-time transient) puffs that are spatially localized and keep their size independent of pipe length. At Re1 the flow makes a striking transition to a spatio-temporally intermittent flow that fills the pipe. Irregular alternation of turbulent and laminar regions is inherent and does not result from random disturbances. The fraction of turbulence increases with Re until Re2 ≈ 2600 where there is a continuous transition to a state of uniform turbulence along the pipe. We relate these observations to directed percolation and argue that Re1 marks the onset of infinite-lifetime turbulence.

2015

  • D. Moxey, M. D. Green, S. J. Sherwin and J. Peiró
    On the generation of curvilinear meshes through subdivision of isoparametric elements
    in New Challenges in Grid Generation and Adaptivity for Scientific Computing, Springer, 2015, pp. 203–215. 10.1007/978-3-319-06053-8_10 BibTeX Abstract
    @inbook{moxey-2015d,
      title = {On the generation of curvilinear meshes through subdivision of isoparametric elements},
      author = {Moxey, D. and Green, M. D. and Sherwin, S. J. and Peir\'o, J.},
      booktitle = {New Challenges in Grid Generation and Adaptivity for Scientific Computing},
      pages = {203--215},
      year = {2015},
      publisher = {Springer},
      doi = {10.1007/978-3-319-06053-8_10},
      url = {https://davidmoxey.uk/assets/pubs/2014-tet.pdf}
    }
    
    Recently, a new mesh generation technique based on the isoparametric representation of curvilinear elements has been developed in order to address the issue of generating high-order meshes with highly stretched elements. Given a valid coarse mesh comprising a prismatic boundary layer, this technique uses the shape functions that define the geometries of the elements to produce a series of subdivided elements of arbitrary height. The purpose of this article is to investigate the range of conditions under which the resulting meshes are valid, and additionally to consider the application of this method to different element types. We consider the subdivision strategies that can be achieved with this technique and apply it to the generation of meshes suitable for boundary-layer fluid problems.
  • J. Peiró, D. Moxey, B. Jordi, S. J. Sherwin, B. W. Nelson, R. M. Kirby and R. Haimes
    High-order visualization with ElVis
    in IDIHOM: Industrialization of High-Order Methods-A Top-Down Approach, Springer, 2015, pp. 521–534. 10.1007/978-3-319-12886-3_24 BibTeX Abstract
    @inbook{moxey-2015c,
      title = {{High-order visualization with ElVis}},
      author = {Peir{\'o}, J. and Moxey, D. and Jordi, B. and Sherwin, S. J. and Nelson, B. W. and Kirby, R. M. and Haimes, R.},
      booktitle = {IDIHOM: Industrialization of High-Order Methods-A Top-Down Approach},
      pages = {521--534},
      year = {2015},
      doi = {10.1007/978-3-319-12886-3_24},
      publisher = {Springer}
    }
    
    Accurate visualization of high-order meshes and flow fields is a fundamental tool for the verification, validation, analysis and interpretation of high-order flow simulations. Standard visualization tools based on piecewise linear approximations can be used for the display of high-order fields but their accuracy is restricted by computer memory and processing time. More often than not, the accurate visualization of complex flows using this strategy requires computational resources beyond the reach of most users. This chapter describes ElVis, a truly high-order and interactive visualization system created for the accurate and interactive visualization of scalar fields produced by high-order spectral/hp finite element simulations. We show some examples that motivate the need for such a visualization system and illustrate some of its features for the display and analysis of simulation data.
  • D. Moxey, M. Hazan, S. J. Sherwin and J. Peiró
    Curvilinear mesh generation for boundary layer problems
    in IDIHOM: Industrialization of High-Order Methods-A Top-Down Approach, Springer, 2015, pp. 41–64. 10.1007/978-3-319-12886-3_3 BibTeX Abstract
    @inbook{moxey-2015b,
      title = {Curvilinear mesh generation for boundary layer problems},
      author = {Moxey, D. and Hazan, M. and Sherwin, S. J. and Peir{\'o}, J.},
      booktitle = {IDIHOM: Industrialization of High-Order Methods-A Top-Down Approach},
      pages = {41--64},
      year = {2015},
      doi = {10.1007/978-3-319-12886-3_3},
      publisher = {Springer}
    }
    
    In this article, we give an overview of a new technique for unstructured curvilinear boundary layer grid generation, which uses the isoparametric mappings that define elements in an existing coarse prismatic grid to produce a refined mesh capable of resolving arbitrarily thin boundary layers. We demonstrate that the technique always produces valid grids given an initially valid coarse mesh, and additionally show how this can be extended to convert hybrid meshes to meshes containing only simplicial elements.

2023

  • K. Kirilov, J. Peiró, M. Green, D. Moxey, L. B. da Veiga, F. Dassi and A. Russo
    Curvilinear mesh generation for the high-order virtual element method (VEM)
    in SIAM International Meshing Roundtable Workshop, 2023. BibTeX Abstract
    @inproceedings{kirilov-2023,
      title = {Curvilinear mesh generation for the high-order virtual element method (VEM)},
      author = {Kirilov, K. and Peir\'{o}, J. and Green, M. and Moxey, D. and da Veiga, L. Beirao and Dassi, F. and Russo, A.},
      booktitle = {SIAM International Meshing Roundtable Workshop},
      year = {2023},
      url = {https://internationalmeshingroundtable.com/assets/papers/2023/21-Kirilov-compressed.pdf}
    }
    
    We present a proof-of-concept methodology for generating curvilinear polygonal meshes suitable for high-order discretizations by the Virtual Element Method (VEM). A VEM discretization requires the definition of a set of boundary and internal points that are used to interpolate the approximation functions and to evaluate integrals by means of suitable quadratures. The procedure to locate these points on the boundary borrows ideas from previous work on a posteriori high-order mesh generation in which the geometrical inquiries to a B-rep of the computational domain are performed via an interface to CAD libraries. Here we describe the steps of the procedure that transforms a straight-sided polygonal mesh, generated using third-party software, into a curvilinear boundary-conforming mesh. We discuss criteria for ensuring and verifying the validity of the mesh. Finally, using the Laplace equation with Dirichlet boundary conditions as a model problem, we show that VEM discretizations on such meshes achieve the expected rates of convergence as the mesh resolution is increased.
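To make the a posteriori curving step described in the entry above more concrete, the sketch below projects the interior points of a straight polygon edge onto a parametric boundary curve. It is only a schematic stand-in: a circle plays the role of the CAD B-rep, and the function names are invented for the example rather than taken from the actual workflow.

    # Schematic sketch: curve a straight-sided boundary edge a posteriori by
    # projecting distributed edge points onto a parametric curve (a circle here
    # stands in for the CAD geometry queried in the real pipeline).
    import numpy as np

    def boundary_curve(t, radius=1.0):
        """Hypothetical parametric boundary C(t)."""
        return np.array([radius * np.cos(t), radius * np.sin(t)])

    def project_to_curve(point, radius=1.0):
        """Closest-point projection onto the circle (the 'geometrical inquiry')."""
        t = np.arctan2(point[1], point[0])
        return boundary_curve(t, radius)

    def curved_edge_points(v0, v1, n_interior=3, radius=1.0):
        """Place points on the straight edge, then pull them onto the curve."""
        out = []
        for s in np.linspace(0.0, 1.0, n_interior + 2)[1:-1]:
            straight = (1.0 - s) * np.asarray(v0) + s * np.asarray(v1)
            out.append(project_to_curve(straight, radius))
        return np.array(out)

    if __name__ == "__main__":
        v0, v1 = boundary_curve(0.0), boundary_curve(np.pi / 6)   # one boundary edge
        print(curved_edge_points(v0, v1))

In the paper the relocated boundary points additionally serve as interpolation and quadrature locations for the virtual element discretisation, with criteria for ensuring and verifying the validity of the resulting curved polygons.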

2022

  • B. Liu, C. D. Cantwell, D. Moxey, M. Green and S. J. Sherwin
    Vectorised spectral/hp element matrix-free operator for anisotropic heat transport in tokamak edge plasma
    in 8th European Congress on Computational Methods in Applied Sciences and Engineering, 2022. 10.23967/eccomas.2022.291 BibTeX
    @inproceedings{liu-2022,
      author = {Liu, B. and Cantwell, C. D. and Moxey, D. and Green, M. and Sherwin, S. J.},
      title = {Vectorised spectral/hp element matrix-free operator for anisotropic heat transport in tokamak edge plasma},
      booktitle = {8th European Congress on Computational Methods in Applied Sciences and Engineering},
      doi = {10.23967/eccomas.2022.291},
      url = {https://www.scipedia.com/public/Liu_et_al_2022b},
      year = {2022}
    }
    

2019

  • A. Yakhot, Y. Feldman, D. Moxey, S. J. Sherwin and G. E. Karniadakis
    Near-Wall Turbulence in a Localized Puff in a Pipe
    in Progress in Turbulence VIII, 2019, pp. 15–20. 10.1007/978-3-030-22196-6_3 BibTeX Abstract
    @inproceedings{yakhot-2019b,
      author = {Yakhot, A. and Feldman, Y. and Moxey, D. and Sherwin, S. J. and Karniadakis, G. E.},
      editor = {{\"O}rl{\"u}, Ramis and Talamelli, Alessandro and Peinke, Joachim and Oberlack, Martin},
      title = {Near-Wall Turbulence in a Localized Puff in a Pipe},
      booktitle = {Progress in Turbulence VIII},
      year = {2019},
      publisher = {Springer},
      pages = {15--20},
      isbn = {978-3-030-22196-6},
      doi = {10.1007/978-3-030-22196-6_3},
      url = {https://davidmoxey.uk/assets/pubs/2019-nearwall-turb.pdf}
    }
    
    We have performed direct numerical simulations of a transitional flow in a pipe for Re_m = 2250, when turbulence manifests in the form of flashes (puffs). From experiments and simulations, Re_m ≈ 2250 has been estimated as a threshold when the average speeds of the upstream and downstream fronts of a puff are identical (Song et al. in J Fluid Mech 813:283–304, 2017, [1]). The flow regime upstream of its trailing edge and downstream of its leading edge is almost laminar. To collect the velocity data, at each time instance, we followed a turbulent puff with a three-dimensional moving window centered at the location of the maximum energy of the transverse (turbulent) motion. In the near-wall region, despite the low Reynolds number, the turbulence statistics, in particular the distribution of turbulence intensities and Reynolds shear stress, become similar to those of a fully-developed turbulent pipe flow.
  • J. Eichstädt, D. Moxey and J. Peiró
    Towards a performance-portable high-order implicit flow solver
    in 2019 AIAA Aerospace Sciences Meeting, 2019. 10.2514/6.2019-1404 BibTeX
    @inproceedings{eichstadt-2019b,
      title = {Towards a performance-portable high-order implicit flow solver},
      author = {Eichst\"adt, J. and Moxey, D. and Peir\'o, J.},
      booktitle = {2019 AIAA Aerospace Sciences Meeting},
      year = {2019},
      doi = {10.2514/6.2019-1404},
      url = {https://davidmoxey.uk/assets/pubs/2019-aiaa-scitech-2.pdf}
    }
    
  • J. Marcon, J. Peiró, D. Moxey, N. Bergemann, H. Bucklow and M. R. Gammon
    A semi-structured approach to curvilinear mesh generation around streamlined bodies
    in 2019 AIAA Aerospace Sciences Meeting, 2019. 10.2514/6.2019-1725 BibTeX Abstract
    @inproceedings{marcon-2019,
      title = {A semi-structured approach to curvilinear mesh generation around streamlined bodies},
      author = {Marcon, J. and Peir\'o, J. and Moxey, D. and Bergemann, N. and Bucklow, H. and Gammon, M. R.},
      booktitle = {2019 AIAA Aerospace Sciences Meeting},
      year = {2019},
      doi = {10.2514/6.2019-1725},
      url = {https://davidmoxey.uk/assets/pubs/2019-aiaa-scitech.pdf}
    }
    
    We present an approach for robust high-order mesh generation specially tailored to streamlined bodies. The method is based on a semi-structured approach which combines the high quality of structured meshes in the near-field with the flexibility of unstructured meshes in the far-field. We utilise medial axis technology to robustly partition the near-field into blocks which can be meshed coarsely with a linear swept mesher. A high-order mesh of the near-field is then generated and split using an isoparametric approach which allows us to obtain highly stretched elements aligned with the flow field. Special treatment of the partition is performed on the wing root junction and the trailing edge (extending into the wake) to obtain an H-type mesh configuration with anisotropic hexahedra ideal for the strong shear of high Reynolds number simulations. We then proceed to discretise the far-field using traditional robust tetrahedral meshing tools. This workflow is made possible by two sets of tools: CADfix, focused on the CAD system, the block partitioning of the near-field and the generation of a linear mesh; and NekMesh, focused on the curving of the high-order mesh and the generation of highly-stretched boundary layer elements. We demonstrate this approach on a NACA0012 wing attached to a wall and show that a gap between the wake partition and the wall can be inserted to remove the dependency of the partitioning procedure on the local geometry.

2018

  • J. Marcon, M. Turner, J. Peiró, D. Moxey, C. R. Pollard, H. Bucklow and M. Gammon
    High-order curvilinear hybrid mesh generation for CFD simulations
    in 2018 AIAA Aerospace Sciences Meeting, 2018. 10.2514/6.2018-1403 BibTeX Abstract
    @inproceedings{marcon-2018,
      title = {High-order curvilinear hybrid mesh generation for CFD simulations},
      author = {Marcon, J. and Turner, M. and Peir\'o, J. and Moxey, D. and Pollard, C. R. and Bucklow, H. and Gammon, M.},
      booktitle = {2018 AIAA Aerospace Sciences Meeting},
      year = {2018},
      doi = {10.2514/6.2018-1403},
      url = {https://davidmoxey.uk/assets/pubs/2018-aiaa-scitech.pdf}
    }
    
    We describe a semi-structured method for the generation of high-order hybrid meshes suited for the simulation of high Reynolds number flows. This is achieved through the use of highly stretched elements in the viscous boundary layers near the wall surfaces. CADfix is used to first repair any possible defects in the CAD geometry and then generate a medial object based decomposition of the domain that wraps the wall boundaries with partitions suitable for the generation of either prismatic or hexahedral elements. The latter is a novel distinctive feature of the method that permits to obtain well-shaped hexahedral meshes at corners or junctions in the boundary layer. The medial object approach allows greater control on the “thickness” of the boundary-layer mesh than is generally achievable with advancing layer techniques. CADfix subsequently generates a hybrid straight-sided mesh of prismatic and hexahedral elements in the near-field region modelling the boundary layer, and tetrahedral elements in the far-field region covering the rest of the domain. The mesh in the near-field region provides a framework that facilitates the generation, via an isoparametric technique, of layers of highly stretched elements with a distribution of points in the direction normal to the wall tailored to efficiently and accurately capture the flow in the boundary layer. The final step is the generation of a high-order mesh using NekMesh, a high-order mesh generator within the Nektar++ framework. NekMesh uses the CADfix API as a geometry engine that handles all the geometrical queries to the CAD geometry required during the high-order mesh generation process. We will describe in some detail the methodology using a simple geometry, a NACA wing tip, for illustrative purposes. Finally, we will present two examples of application to reasonably complex geometries proposed by NASA as CFD validation cases: the Common Research Model and the Rotor 67.

2017

  • D. Moxey, C. D. Cantwell, G. Mengaldo, D. Serson, D. Ekelschot, J. Peiró, S. J. Sherwin and R. M. Kirby
    Towards p-adaptive spectral/hp element methods for modelling industrial flows
    in Spectral and High Order Methods for Partial Differential Equations ICOSAHOM 2016, 2017, pp. 63–79. 10.1007/978-3-319-65870-4_4 BibTeX Abstract
    @inproceedings{moxey-2017a,
      title = {Towards $p$-adaptive spectral/$hp$ element methods for modelling industrial flows},
      author = {Moxey, D. and Cantwell, C. D. and Mengaldo, G. and Serson, D. and Ekelschot, D. and Peir\'o, J. and Sherwin, S. J. and Kirby, R. M.},
      booktitle = {Spectral and High Order Methods for Partial Differential Equations ICOSAHOM 2016},
      pages = {63-79},
      year = {2017},
      doi = {10.1007/978-3-319-65870-4_4},
      url = {https://davidmoxey.uk/assets/pubs/2017-icosahom16.pdf}
    }
    
    There is an increasing requirement from both academia and industry for high-fidelity flow simulations that are able to accurately capture complicated and transient flow dynamics in complex geometries. Coupled with the growing availability of high-performance, highly parallel computing resources, there is therefore a demand for scalable numerical methods and corresponding software frameworks which can deliver the next generation of complex and detailed fluid simulations to scientists and engineers in an efficient way. In this article we discuss recent and upcoming advances in the use of the spectral/hp element method for addressing these modelling challenges. To use these methods efficiently for such applications, it is critical that computational resolution is placed in the regions of the flow where it is needed most, which is often not known a priori. We propose the use of spatially and temporally varying polynomial order, coupled with appropriate error estimators, as key requirements in permitting these methods to achieve computationally efficient high-fidelity solutions to complex flow problems in the fluid dynamics community.
  • M. Turner, D. Moxey, J. Peiró, M. Gammon, C. R. Pollard and H. Bucklow
    A framework for the generation of high-order curvilinear hybrid meshes for CFD simulations
    in Procedia Engineering, 2017, 203, pp. 206–218. 10.1016/j.proeng.2017.09.808 BibTeX Abstract
    @inproceedings{turner-2017b,
      title = {A framework for the generation of high-order curvilinear hybrid meshes for CFD simulations},
      author = {Turner, M. and Moxey, D. and Peir\'o, J. and Gammon, M. and Pollard, C. R. and Bucklow, H.},
      booktitle = {Procedia Engineering},
      year = {2017},
      volume = {203},
      pages = {206-218},
      doi = {10.1016/j.proeng.2017.09.808},
      url = {http://www.sciencedirect.com/science/article/pii/S1877705817343692}
    }
    
    We present a pipeline of state-of-the-art techniques for the generation of high-order meshes that contain highly stretched elements in viscous boundary layers, and are suitable for flow simulations at high Reynolds numbers. The pipeline uses CADfix to generate a medial object based decomposition of the domain, which wraps the wall boundaries with prismatic partitions. The use of medial object allows the prism height to be larger than is generally possible with advancing layer techniques. CADfix subsequently generates a hybrid straight-sided (or linear) mesh. A high-order mesh is then generated a posteriori using NekMesh, a high-order mesh generator within the Nektar++ framework. During the high-order mesh generation process, the CAD definition of the domain is interrogated; we describe the process for integrating the CADfix API as an alternative backend geometry engine for NekMesh, and discuss some of the implementation issues encountered. Finally, we illustrate the methodology using three geometries of increasing complexity: a wing tip, a simplified landing gear and an aircraft in cruise configuration.

2016

  • M. Turner, J. Peiró and D. Moxey
    A variational framework for high-order mesh generation
    in Procedia Engineering, 2016, 82, pp. 127–135. 10.1016/j.proeng.2016.11.069 BibTeX Abstract
    @inproceedings{turner-2016b,
      title = {A variational framework for high-order mesh generation},
      author = {Turner, M. and Peir\'o, J. and Moxey, D.},
      booktitle = {Procedia Engineering},
      year = {2016},
      volume = {82},
      pages = {127-135},
      doi = {10.1016/j.proeng.2016.11.069},
      url = {http://www.sciencedirect.com/science/article/pii/S1877705816333781}
    }
    
    The generation of sufficiently high quality unstructured high-order meshes remains a significant obstacle in the adoption of high-order methods. However, there is little consensus on which approach is the most robust, fastest and produces the ‘best’ meshes. In this work we aim to provide a route to investigate this question, by examining popular high-order mesh generation methods in the context of an efficient variational framework for the generation of curvilinear meshes. By considering previous works in a variational form, we are able to compare their characteristics and study their robustness. Alongside a description of the theory and practical implementation details, including an efficient multi-threading parallelisation strategy, we demonstrate the effectiveness of the framework, showing how it can be used for both mesh quality optimisation and untangling of invalid meshes.
  • J.-E. Lombard, D. Moxey and S. J. Sherwin
    The wing-tip vortex test case
    in European Congress on Computational Methods in Applied Sciences and Engineering, Crete, Greece, 2016. BibTeX Abstract
    @inproceedings{lombard-2016a,
      title = {The wing-tip vortex test case},
      author = {Lombard, J.-E. and Moxey, D. and Sherwin, S. J.},
      booktitle = {European Congress on Computational Methods in Applied Sciences and Engineering, Crete, Greece},
      month = jun,
      year = {2016},
      url = {https://davidmoxey.uk/assets/pubs/2016-eccomas-2.pdf}
    }
    
    We present a spectral/hp element discretisation, using the Nektar++ code, for performing a Large Eddy Simulation (LES) of the formation and evolution of a wingtip vortex as a test case involving a 3D geometry. The development of these vortices in the near wake, in combination with the large Reynolds numbers, makes this test case particularly challenging to simulate. We consider flow over a NACA 0012 profile wingtip at a Reynolds number of 1.2 million, based on chord length, and compare the results against experimental data; this is, to date, the highest Reynolds number achieved for an LES that has been correlated with experiments for this test case. The jetting of the primary vortex and the pressure distribution on the wing surface in our model were successfully correlated with the experiment; however, the vortex formation over the rear wing tip shows some discrepancies, which act as a motivation for further testing of high-fidelity methods on this test case. The wingtip vortex test case is of general interest for the modelling of transitional, vortex-dominated flows over complex geometries, which is of particular relevance to applications such as high-lift aircraft configurations, wind-turbine, propeller and automotive design.
  • M. Turner, D. Moxey, S. J. Sherwin and J. Peiró
    Automatic generation of 3D unstructured high-order curvilinear meshes
    in Proceedings of the European Congress on Computational Methods in Applied Sciences and Engineering, 2016, pp. 428–433. 10.7712/100016.1825.8410 BibTeX Abstract
    @inproceedings{turner-2016a,
      title = {Automatic generation of 3D unstructured high-order curvilinear meshes},
      author = {Turner, M. and Moxey, D. and Sherwin, S. J. and Peir\'o, J.},
      booktitle = {Proceedings of the European Congress on Computational Methods in Applied Sciences and Engineering},
      pages = {428--433},
      year = {2016},
      url = {https://davidmoxey.uk/assets/pubs/2016-eccomas.pdf},
      doi = {10.7712/100016.1825.8410}
    }
    
    The generation of suitable, good quality high-order meshes is a significant obstacle in the academic and industrial uptake of high-order CFD methods. These methods have a number of favourable characteristics such as low dispersion and dissipation and higher levels of numerical accuracy than their low-order counterparts; however, the methods are highly susceptible to inaccuracies caused by low quality meshes. These meshes require significant curvature to accurately describe the geometric surfaces, which presents a number of difficult challenges in their generation. As yet, research into the field has produced a number of interesting technologies that go some way towards achieving this goal, but are yet to provide a complete system that can systematically produce curved high-order meshes for arbitrary geometries for CFD analysis. This paper presents our efforts in that direction and introduces an open-source high-order mesh generator, NekMesh, which has been created to bring high-order meshing technologies into one coherent pipeline which aims to produce 3D high-order curvilinear meshes from CAD geometries in a robust and systematic way.

2015

  • J. Cohen, C. Cantwell, D. Moxey, J. Nowell, P. Austing, X. Guo, J. Darlington and S. J. Sherwin
    TemPSS: A service providing software parameter templates and profiles for scientific HPC
    in IEEE eScience (Munich, Germany), 2015. 10.1109/eScience.2015.43 BibTeX Abstract
    @inproceedings{cohen-2015a,
      title = {{TemPSS: A service providing software parameter templates and profiles for scientific HPC}},
      author = {Cohen, J. and Cantwell, C. and Moxey, D. and Nowell, J. and Austing, P. and Guo, X. and Darlington, J. and Sherwin, S. J.},
      booktitle = {IEEE eScience (Munich, Germany)},
      year = {2015},
      doi = {10.1109/eScience.2015.43},
      url = {https://davidmoxey.uk/assets/pubs/2015-tempss.pdf}
    }
    
    Generating and managing input data for large-scale scientific computations has, for many classes of application, always been a challenging process. The emergence of new hardware platforms and increasingly complex scientific models compounds this problem as configuration data can change depending on the underlying hardware and properties of the computation. In this paper we present TemPSS, a web-based service for building and managing application input files in a semantically focused manner using the concepts of software parameter templates and job profiles. Many complex, distributed applications require the expertise of more than one individual to allow an application to run efficiently on different types of hardware. TemPSS supports collaborative development of application inputs through the ability to save, edit and extend job profiles that define the inputs to an application. We describe the concepts of templates and profiles and the structures that developers provide to add an application template to the TemPSS service. In addition, we detail the implementation of the service and its functionality.
  • M. Turner, D. Moxey and J. Peiró
    Automatic mesh sizing specification of complex three dimensional domains using an octree structure
    in 24th International Meshing Roundtable, 2015. BibTeX Abstract
    @inproceedings{turner-2015,
      title = {Automatic mesh sizing specification of complex three dimensional domains using an octree structure},
      booktitle = {24th International Meshing Roundtable},
      author = {Turner, M. and Moxey, D. and Peir\'o, J.},
      year = {2015},
      url = {https://davidmoxey.uk/assets/pubs/2015-imr24.pdf}
    }
    
    A system for automatically specifying a distribution of mesh sizing throughout three dimensional complex domains is presented, which aims to reduce the level of user input required to generate a mesh. The primary motivation for the creation of this system is for the production of suitable linear meshes that are sufficiently coarse for high-order mesh generation purposes. Resolution is automatically increased in regions of high curvature, with the system only requiring three parameters from the user to successfully generate the sizing distribution. This level of automation is achieved through the construction of an octree description of the domain, which targets the curvature of the surfaces and guides the generation of the mesh. After the construction of the octree, an ideal mesh spacing specification is calculated for each octant, based on a relation to the radii of curvature of the domain surfaces and mesh gradation criteria. The system is capable of accurately estimating the number of elements that will be produced prior to the generation process, so that the meshing parameters can be altered to coarsen the mesh before effort is wasted generating the actual mesh.
  • J. Cohen, D. Moxey, C. D. Cantwell, P. Austing, J. Darlington and S. J. Sherwin
    Ensuring an effective user experience when managing and running scientific HPC software
    in 2015 IEEE/ACM 1st International Workshop on Software Engineering for High Performance Computing in Science (SE4HPCS), 2015, pp. 56–59. 10.1109/SE4HPCS.2015.16 BibTeX Abstract
    @inproceedings{cohen-2015b,
      title = {Ensuring an effective user experience when managing and running scientific HPC software},
      booktitle = {2015 IEEE/ACM 1st International Workshop on Software Engineering for High Performance Computing in Science (SE4HPCS)},
      author = {Cohen, J. and Moxey, D. and Cantwell, C. D. and Austing, P. and Darlington, J. and Sherwin, S. J.},
      year = {2015},
      pages = {56-59},
      url = {https://davidmoxey.uk/assets/pubs/2015-se4hpcs.pdf},
      doi = {10.1109/SE4HPCS.2015.16}
    }
    
    With CPU clock speeds stagnating over the last few years, ongoing advances in computing power and capabilities are being supported through increasing multi- and many-core parallelism. The resulting cost of locally maintaining large-scale computing infrastructure, combined with the need to perform increasingly large simulations, is leading to the wider use of alternative models of accessing infrastructure, such as the use of Infrastructure-as-a-Service (IaaS) cloud platforms. The diversity of platforms and the methods of interacting with them can make using them with complex scientific HPC codes difficult for users. In this position paper, we discuss our approaches to tackling these challenges on heterogeneous resources. As an example of the application of these approaches we use Nekkloud, our web-based interface for simplifying job specification and deployment of the Nektar++ high-order finite element HPC code. We also present results from a recent Nekkloud evaluation workshop undertaken with a group of Nektar++ users.
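Relating to the octree-based sizing entry earlier in this list (24th International Meshing Roundtable, 2015), the toy sketch below refines an octant whenever an assumed curvature field demands a spacing smaller than the octant width, and records a target spacing on each leaf. The curvature function, refinement factor and class layout are all invented for illustration; this is not the NekMesh implementation.

    # Toy curvature-driven octree sizing: subdivide while the local target spacing
    # (tied to an assumed radius-of-curvature field) is finer than the octant width.
    from dataclasses import dataclass, field

    @dataclass
    class Octant:
        centre: tuple                 # (x, y, z) centre of the octant
        width: float                  # edge length
        spacing: float = 0.0          # target mesh spacing, set on leaves
        children: list = field(default_factory=list)

    def radius_of_curvature(x, y, z):
        """Hypothetical curvature field: tighter curvature near the origin."""
        return 0.05 + 0.5 * (x * x + y * y + z * z) ** 0.5

    def build(octant, factor=0.5, min_width=0.05):
        """Refine until the octant width resolves the local target spacing."""
        target = factor * radius_of_curvature(*octant.centre)
        if octant.width <= target or octant.width <= min_width:
            octant.spacing = min(target, octant.width)
            return
        h = octant.width / 4.0
        cx, cy, cz = octant.centre
        for dx in (-h, h):
            for dy in (-h, h):
                for dz in (-h, h):
                    child = Octant((cx + dx, cy + dy, cz + dz), octant.width / 2.0)
                    build(child, factor, min_width)
                    octant.children.append(child)

    def leaves(octant):
        """Yield all leaf octants, i.e. those carrying a sizing value."""
        if not octant.children:
            yield octant
        else:
            for c in octant.children:
                yield from leaves(c)

    if __name__ == "__main__":
        root = Octant(centre=(0.0, 0.0, 0.0), width=2.0)
        build(root)
        print("leaf octants carrying a spacing:", sum(1 for _ in leaves(root)))

A gradation step, as described in the paper, would then smooth the spacing between neighbouring leaves before the sizing field is handed to the mesh generator.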

2014

  • D. Moxey, D. Ekelschot, U. Keskin, S. J. Sherwin and J. Peiró
    A thermo-elastic analogy for high-order curvilinear meshing with control of mesh validity and quality
    in Procedia Engineering, 2014, 82, pp. 127–135. 10.1016/j.proeng.2014.10.378 BibTeX Abstract
    @inproceedings{moxey-2014a,
      title = {{A thermo-elastic analogy for high-order curvilinear meshing with control of mesh validity and quality}},
      author = {Moxey, D. and Ekelschot, D. and Keskin, U. and Sherwin, S. J. and Peir{\'o}, J.},
      booktitle = {Procedia Engineering},
      year = {2014},
      volume = {82},
      pages = {127-135},
      doi = {10.1016/j.proeng.2014.10.378},
      url = {https://davidmoxey.uk/assets/pubs/2014-elasticity.pdf}
    }
    
    In recent years, techniques for the generation of high-order curvilinear meshes have frequently adopted mesh deformation procedures to project the curvature of the surface onto the mesh, thereby introducing curvature into the interior of the domain and lessening the occurrence of self-intersecting elements. In this article, we propose an extension of this approach whereby thermal stress terms are incorporated into the state equation to provide control over the validity and quality of the mesh, thereby adding an extra degree of robustness which is lacking in current approaches.

2013

  • J. Cohen, D. Moxey, C. D. Cantwell, P. Burovskiy, J. Darlington and S. J. Sherwin
    Nekkloud: A software environment for high-order finite element analysis on clusters and clouds
    in 2013 IEEE International Conference on Cluster Computing, 2013, pp. 1–5. 10.1109/cluster.2013.6702616 BibTeX Abstract
    @inproceedings{cohen-2013b,
      author = {Cohen, J. and Moxey, D. and Cantwell, C. D. and Burovskiy, P. and Darlington, J. and Sherwin, S. J.},
      booktitle = {2013 IEEE International Conference on Cluster Computing},
      title = {Nekkloud: A software environment for high-order finite element analysis on clusters and clouds},
      year = {2013},
      pages = {1-5},
      doi = {10.1109/cluster.2013.6702616},
      url = {https://davidmoxey.uk/assets/pubs/2013-cluster.pdf}
    }
    
    As the capabilities of computational platforms continue to grow, scientific software is becoming ever more complex in order to target these platforms effectively. When using large-scale distributed infrastructure such as clusters and clouds it can be difficult for end-users to make efficient use of these platforms. In the libhpc project we are developing a suite of tools and services to simplify job description and execution on heterogeneous infrastructure. In this paper we present Nekkloud, a web-based software environment that builds on elements of the libhpc framework, for running the Nektar++ high-order finite element code on cluster and cloud platforms. End-users submit their jobs via Nekkloud, which then handles their execution on a chosen computing platform. Nektar++ provides a set of solvers that support scientists across a range of domains, ensuring that Nekkloud has a broad range of use cases. We describe the design and development of Nekkloud, user experience and integration with both local campus infrastructure and remote cloud resources enabling users to make better use of the resources available to them.
  • J. Cohen, C. D. Cantwell, N. P. C. Hong, D. Moxey, M. Illingworth, A. Turner, J. Darlington and S. J. Sherwin
    Simplifying the Development, Use and Sustainability of HPC Software
    in WSSPE13 Workshop, Supercomputing, 2013. BibTeX Abstract
    @inproceedings{cohen-2013a,
      author = {Cohen, J. and Cantwell, C. D. and Hong, N. P. Chue and Moxey, D. and Illingworth, M. and Turner, A. and Darlington, J. and Sherwin, S. J.},
      title = {Simplifying the Development, Use and Sustainability of HPC Software},
      booktitle = {WSSPE13 Workshop, Supercomputing},
      year = {2013},
      url = {https://davidmoxey.uk/assets/pubs/2013-wsspe13.pdf}
    }
    
    Developing software to undertake complex, compute-intensive scientific processes requires a challenging combination of both specialist domain knowledge and software development skills to convert this knowledge into efficient code. As computational platforms become increasingly heterogeneous and newer types of platform such as Infrastructure-as-a-Service (IaaS) cloud computing become more widely accepted for HPC computations, scientists require more support from computer scientists and resource providers to develop efficient code and make optimal use of the resources available to them. As part of the libhpc stage 1 and 2 projects we are developing a framework to provide a richer means of job specification and efficient execution of complex scientific software on heterogeneous infrastructure. The use of such frameworks has implications for the sustainability of scientific software. In this paper we set out our developing understanding of these challenges based on work carried out in the libhpc project.

2012

  • J. Cohen, J. Darlington, B. Fuchs, D. Moxey, C. D. Cantwell, P. Burovskiy, S. J. Sherwin and N. P. C. Hong
    libHPC: Software sustainability and reuse through metadata preservation
    in First Workshop on Maintainable Software Practices in e-Science, 8th IEEE International Conference on eScience, 2012. BibTeX Abstract
    @inproceedings{cohen-2012,
      title = {libHPC: Software sustainability and reuse through metadata preservation},
      booktitle = {First Workshop on Maintainable Software Practices in e-Science, 8th IEEE International Conference on eScience},
      author = {Cohen, J. and Darlington, J. and Fuchs, B. and Moxey, D. and Cantwell, C. D. and Burovskiy, P. and Sherwin, S. J. and Hong, N. P. Chue},
      year = {2012},
      url = {https://davidmoxey.uk/assets/pubs/2012-escience.pdf}
    }
    
    Software development, particularly of complex scientific applications, requires a detailed understanding of the problem(s) to be solved and an ability to translate this understanding into the generic constructs of a programming language. We believe that such knowledge – information about a code’s “building blocks”, especially the low-level functions and procedures in which domain-specific tasks are implemented – can be very effectively leveraged to optimise code execution across platforms and operating systems. However, all too often such knowledge gets lost during the development process, which can bury the scientist’s understanding in the code in a manner that makes it difficult to recover or extract later on. In this paper, we describe our work in the EPSRC-funded libHPC project to build a framework that captures and utilises this information to achieve optimised performance in dynamic, heterogeneous networked execution environments. The aim of the framework is to allow scientists to work in high-level scripting environments based on component libraries to provide descriptions of applications which can then be mapped to optimal execution configurations based on available resources. A key element in our approach is the use of “co-ordination forms” – or functional paradigms – for creating optimised execution plans from components. Our main exemplar application is an advanced finite element framework, Nektar++, and we detail ongoing work to undertake profiling and performance analysis to extract software metadata and derive optimal execution configurations, to target resources based on their hardware metadata.

2011

  • D. Moxey
    Spatio-temporal dynamics in pipe flow
    PhD thesis, University of Warwick, 2011. BibTeX Abstract
    @phdthesis{moxey-2011,
      author = {Moxey, D.},
      title = {{Spatio-temporal dynamics in pipe flow}},
      school = {University of Warwick},
      month = oct,
      year = {2011},
      url = {https://davidmoxey.uk/assets/pubs/2011-thesis.pdf}
    }
    
    When fluid flows through a channel, pipe or duct, there are two basic forms of motion: smooth laminar flow and disordered turbulent motion. The transition between these two states is a fundamental and open problem which has been studied for over 125 years. What has received far less attention are the intermittent dynamics which possess qualities of both turbulent and laminar regimes. The purpose of this thesis is therefore to investigate large-scale intermittent states through extensive numerical simulations in the hopes of further understanding the transition to turbulence in pipe flow.

2007

  • D. Moxey
    “Snakes on a plane”: An introduction to the study of polymer chains using Monte Carlo methods
    Master's thesis, University of Warwick, 2007. BibTeX Abstract
    @mastersthesis{moxey-2007,
      author = {Moxey, D.},
      title = {``Snakes on a plane'': An introduction to the study of polymer chains using Monte Carlo methods},
      school = {University of Warwick},
      month = jul,
      year = {2007},
      url = {https://davidmoxey.uk/assets/pubs/2007-project.pdf}
    }
    
    In this report, a number of basic Monte Carlo methods for modelling polymer chains are presented (including configurational-bias Monte Carlo and the pruned-enriched Rosenbluth method). These are then used to investigate the behaviour of the collapse of polymer chains around the well-studied theta-point. Additionally, a flat-histogram version of PERM is outlined and applied to the problem of polymers both tethered to and in close proximity to an adsorbing surface.
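As a small, self-contained illustration of the Rosenbluth-style chain growth mentioned in the abstract above, the sketch below grows two-dimensional self-avoiding walks on a square lattice and accumulates the standard Rosenbluth weights. It is a textbook toy under invented parameters, not the flat-histogram PERM variant developed in the project.

    # Toy Rosenbluth sampling of 2D self-avoiding walks: each new monomer is placed
    # on a uniformly chosen free neighbour and the chain weight picks up a factor
    # equal to the number of free sites available at that step.
    import random

    MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def grow_chain(n_monomers):
        """Grow one walk; return (sites, weight), with zero weight on a dead end."""
        sites = [(0, 0)]
        occupied = {(0, 0)}
        weight = 1.0
        for _ in range(n_monomers - 1):
            x, y = sites[-1]
            free = [(x + dx, y + dy) for dx, dy in MOVES
                    if (x + dx, y + dy) not in occupied]
            if not free:                              # attrition: discard this chain
                return None, 0.0
            weight *= len(free)                       # Rosenbluth weight factor
            nxt = random.choice(free)
            sites.append(nxt)
            occupied.add(nxt)
        return sites, weight

    def mean_end_to_end_sq(n_monomers, n_samples=2000):
        """Rosenbluth-weighted average of the squared end-to-end distance."""
        num = den = 0.0
        for _ in range(n_samples):
            sites, w = grow_chain(n_monomers)
            if sites is None:
                continue
            (x0, y0), (xn, yn) = sites[0], sites[-1]
            num += w * ((xn - x0) ** 2 + (yn - y0) ** 2)
            den += w
        return num / den

    if __name__ == "__main__":
        print("<R^2> for N = 20 monomers:", round(mean_end_to_end_sq(20), 2))

Pruning and enrichment (as in PERM) extends exactly this scheme by cloning high-weight partial chains and discarding low-weight ones during growth, which counteracts the attrition visible in this simple version.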