Publications

Below is a complete list of my publications in journals, books and conference proceedings. Where available I have included links to preprints or the printed article. Citation counts and other versions can be found on Google Scholar.

Under review

  1. D. de Grazia, D. Moxey, S. J. Sherwin, M. A. Kravtsova and A. I. Ruban
    DNS of a compressible boundary layer flow past an isolated three-dimensional hump in a high-speed subsonic regime
    under review in Phys. Rev. Fluids, June 2016. BibTeX Abstract
    @unpublished{degrazia-2016,
      title = {DNS of a compressible boundary layer flow past an isolated
                        three-dimensional hump in a high-speed subsonic regime},
      author = {de Grazia, D. and Moxey, D. and Sherwin, S. J. and Kravtsova, M. A. and Ruban, A. I.},
      note = {under review in Phys. Rev. Fluids},
      keywords = {journal},
      month = jun,
      year = {2016}
    }
    
    In recent years the aeronautical industry has paid a lot of attention to analysing the laminar-turbulent transition of boundary-layer flows over wing surfaces. This mechanism plays a key role in increasing the viscous drag of aircraft and, from this perspective, wing surface roughness represents a major concern in triggering the transition. A deep understanding of the laminar-turbulent transition scenario caused by this type of imperfection can help in finding better strategies to delay this mechanism, resulting in a reduction of fuel consumption and noise emission. In this paper we study the boundary-layer separation produced in a high-speed subsonic boundary layer by a small wall roughness. Specifically, we present a Direct Numerical Simulation (DNS) of a two-dimensional boundary-layer flow over a flat plate encountering a three-dimensional Gaussian-shaped hump. This work was motivated by the lack of DNS data for boundary-layer flows past roughness elements in a similar regime, which is typical of civil aviation. The Mach and Reynolds numbers are chosen to be relevant for aeronautical applications when considering small imperfections at the leading edge of wings. We analyse different heights of the hump: the smaller heights result in a weakly nonlinear regime, whilst the larger heights result in a fully nonlinear regime with a growing laminar separation bubble arising downstream of the roughness element and the formation of a pair of streamwise counter-rotating vortices which appear to be self-sustaining.
  2. D. Moxey, S. P. Sastry and R. M. Kirby
    Interpolation error bounds for curvilinear finite elements and their implications on adaptive mesh refinement
    under review in J. Comput. Phys., June 2017. BibTeX Abstract
    @unpublished{moxey-2017b,
      title = {Interpolation error bounds for curvilinear finite elements and
                        their implications on adaptive mesh refinement},
      author = {Moxey, D. and Sastry, S. P. and Kirby, R. M.},
      note = {under review in J. Comput. Phys.},
      month = jun,
      year = {2017},
      keywords = {journal}
    }
    
    There is an increasing requirement from both academia and industry for high-fidelity flow simulations that are able to accurately capture complicated and transient flow dynamics in complex geometries. Coupled with the growing availability of high-performance, highly parallel computing resources, there is therefore a demand for scalable numerical methods and corresponding software frameworks which can deliver the next generation of complex and detailed fluid simulations to scientists and engineers in an efficient way. In this article we discuss recent and upcoming advances in the use of the spectral/hp element method for addressing these modelling challenges. To use these methods efficiently for such applications, it is critical that computational resolution is placed in the regions of the flow where it is needed most, which is often not known a priori. We propose the use of spatially and temporally varying polynomial order, coupled with appropriate error estimators, as key requirements in permitting these methods to achieve computationally efficient high-fidelity solutions to complex flow problems in the fluid dynamics community.
  3. J. Eichstädt, M. Green, M. Turner, J. Peiró and D. Moxey
    Accelerating high-order mesh generation with an architecture-independent programming model
    under review in Comput. Phys. Commun., November 2017. BibTeX Abstract
    @unpublished{eichstadt-2017,
      title = {Accelerating high-order mesh generation with an
                        architecture-independent programming model},
      author = {Eichst\"adt, J. and Green, M. and Turner, M. and Peir\'o, J. and Moxey, D.},
      note = {under review in Comput. Phys. Commun.},
      month = nov,
      year = {2017},
      keywords = {journal}
    }
    
    Heterogeneous manycore performance-portable programming models and libraries, such as Kokkos, have been developed to facilitate portability and maintainability of high-performance computing codes and enhance their resilience to architectural changes. Here we investigate the suitability of the Kokkos programming model for optimizing the performance of the high-order mesh generator NekMesh, which has been developed to efficiently generate meshes containing millions of elements for industrial problems involving complex geometries. We describe the variational approach for a posteriori high-order mesh generation employed within NekMesh and its parallel implementation. We discuss its optimisation for modern manycore massively parallel shared-memory CPU and GPU platforms using Kokkos and demonstrate that we achieve increased performance on multicore CPUs and accelerators compared with a native Pthreads implementation. Further, we show that we achieve additional speedup and cost reduction by running on GPUs without any hardware-specific code optimisation.
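
Since the entry above centres on the Kokkos programming model, the following C++ sketch shows the general shape of an architecture-independent Kokkos kernel: the same parallel_for and parallel_reduce calls run on CPUs or GPUs depending on the build configuration. It is purely illustrative and is not taken from NekMesh; the view name and the node update are hypothetical.

    // Minimal illustration of the Kokkos performance-portable pattern: the same
    // kernels run on CPUs (e.g. OpenMP) or GPUs (e.g. CUDA) depending on how
    // Kokkos is configured at build time. Hypothetical example, not NekMesh code.
    #include <Kokkos_Core.hpp>
    #include <cstdio>

    int main(int argc, char* argv[]) {
        Kokkos::initialize(argc, argv);
        {
            const int nNodes = 1000;

            // Device-resident coordinates of mesh nodes (x, y, z per node).
            Kokkos::View<double*[3]> coords("coords", nNodes);

            // Initialise the nodes in parallel.
            Kokkos::parallel_for("init", nNodes, KOKKOS_LAMBDA(const int i) {
                coords(i, 0) = 0.001 * i;
                coords(i, 1) = 0.002 * i;
                coords(i, 2) = 0.0;
            });

            // A node-wise update loop, e.g. one sweep of a smoothing/relaxation step.
            Kokkos::parallel_for("relax", nNodes, KOKKOS_LAMBDA(const int i) {
                coords(i, 2) = 0.5 * (coords(i, 0) + coords(i, 1));
            });

            // Reduce a global quantity, e.g. a sum-of-squares "energy".
            double energy = 0.0;
            Kokkos::parallel_reduce("energy", nNodes,
                KOKKOS_LAMBDA(const int i, double& local) {
                    local += coords(i, 2) * coords(i, 2);
                }, energy);

            printf("energy = %f\n", energy);
        }
        Kokkos::finalize();
        return 0;
    }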

In press

  1. M. Turner, J. Peiró and D. Moxey
    Curvilinear mesh generation using a variational framework
    to appear in Comput. Aided Design, January 2017. 10.1016/j.cad.2017.10.004 BibTeX Abstract
    @unpublished{turner-2017a,
      title = {Curvilinear mesh generation using a variational framework},
      author = {Turner, M. and Peir\'o, J. and Moxey, D.},
      note = {to appear in Comput. Aided Design},
      month = jan,
      year = {2017},
      keywords = {journalpress},
      doi = {10.1016/j.cad.2017.10.004},
      url = {http://www.sciencedirect.com/science/article/pii/S0010448517301744}
    }
    
    We aim to tackle the challenge of generating unstructured high-order meshes of complex three-dimensional bodies, which remains a significant bottleneck in the wider adoption of high-order methods. In particular we show that by adopting a variational approach to the generation process, many of the current popular high-order generation methods can be encompassed under a single unifying framework. This allows us to compare the effectiveness of these methods and to assess the quality of the meshes they produce in a systematic fashion. We present a detailed overview of the theory and numerical implementation of the framework, and in particular we highlight how this can be effectively exploited to yield a highly-efficient parallel implementation. The effectiveness of this approach is examined by considering a number of two- and three-dimensional examples, where we show how it can be used for both mesh quality optimisation and untangling of invalid meshes.
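
As a rough illustration of the variational viewpoint described in the entry above, the C++ sketch below evaluates a simple Jacobian-based distortion energy for a straight-sided triangle; in a variational mesh generator such an energy is summed over all elements (and over quadrature points for curved elements) and minimised with respect to the node positions, with a non-positive Jacobian flagging an invalid element. The particular functional used here is a generic condition-number-style measure chosen for brevity, not necessarily one of the functionals compared in the paper.

    // Illustrative only: evaluate a simple distortion energy W(J) = ||J||_F^2 / det(J)
    // for the affine map from a reference triangle to a physical triangle.
    // In a variational mesh generator, the sum of W over elements is minimised with
    // respect to node positions; det(J) <= 0 flags an invalid (tangled) element.
    #include <array>
    #include <iostream>
    #include <limits>

    struct Tri { std::array<double,2> a, b, c; };   // physical vertices

    // Jacobian of the map from the reference triangle (0,0)-(1,0)-(0,1).
    double distortionEnergy(const Tri& t) {
        const double J[2][2] = { { t.b[0]-t.a[0], t.c[0]-t.a[0] },
                                 { t.b[1]-t.a[1], t.c[1]-t.a[1] } };
        const double detJ = J[0][0]*J[1][1] - J[0][1]*J[1][0];
        if (detJ <= 0.0) return std::numeric_limits<double>::infinity(); // invalid element
        const double frob2 = J[0][0]*J[0][0] + J[0][1]*J[0][1]
                           + J[1][0]*J[1][0] + J[1][1]*J[1][1];
        return frob2 / detJ;
    }

    int main() {
        Tri good{{0,0}, {1,0}, {0,1}};          // identical to reference: minimal energy
        Tri skewed{{0,0}, {1,0}, {0.9,0.05}};   // nearly degenerate: large energy
        std::cout << distortionEnergy(good) << " " << distortionEnergy(skewed) << "\n";
    }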

Journal articles

2017

  • D. Ekelschot, D. Moxey, S. J. Sherwin and J. Peiró
    A p-adaptation method for compressible flow problems using a goal-based error estimator
    Comput. Struct., 181, pp. 55–69, 2017. 10.1016/j.compstruc.2016.03.004 BibTeX Abstract
    @article{ekelschot-2017,
      title = {A $p$-adaptation method for compressible flow problems using a
                        goal-based error estimator},
      author = {Ekelschot, D. and Moxey, D. and Sherwin, S. J. and Peir\'o, J.},
      journal = {Comput. Struct.},
      volume = {181},
      pages = {55-69},
      year = {2017},
      doi = {10.1016/j.compstruc.2016.03.004},
      url = {https://davidmoxey.uk/assets/pubs/2016-padapt.pdf}
    }
    
    An accurate calculation of aerodynamic force coefficients for a given geometry is of fundamental importance for aircraft design. High-order spectral/hp element methods, which use a discontinuous Galerkin discretisation of the compressible Navier–Stokes equations, are now increasingly being used to improve the accuracy of flow simulations and thus the force coefficients. To reduce error in the calculated force coefficients whilst keeping computational cost minimal, we propose a p-adaptation method where the degree of the approximating polynomial is locally increased in the regions of the flow where low resolution is identified using a goal-based error estimator as follows. Given an objective functional such as the aerodynamic force coefficients, we use control theory to derive an adjoint problem which provides the sensitivity of the functional with respect to changes in the flow variables, and assume that these changes are represented by the local truncation error. In its final form, the goal-based error indicator represents the effect of truncation error on the objective functional, suitably weighted by the adjoint solution. Both flow governing and adjoint equations are solved by the same high-order method, where we allow the degree of the polynomial within an element to vary across the mesh. We initially calculate a steady-state solution to the governing equations using a low polynomial order and use the goal-based error indicator to identify parts of the computational domain that require improved solution accuracy which is achieved by increasing the approximation order. We demonstrate the cost-effectiveness of our method across a range of polynomial orders by considering a number of examples in two- and three-dimensions and in subsonic and transonic flow regimes. Reductions in both the number of degrees of freedom required to resolve the force coefficients to a given error, as well as the computational cost, are both observed in using the p-adaptive technique.
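
To make the structure of a goal-based indicator more concrete, the sketch below shows, in schematic C++, how an element-wise indicator of the kind described above might drive p-enrichment: a local truncation-error estimate is weighted by the adjoint solution, and elements whose indicator exceeds a tolerance have their polynomial order increased. The data structure, names and the simple thresholding rule are illustrative assumptions rather than details taken from the paper.

    // Schematic p-adaptation driver: weight a local truncation-error estimate by the
    // adjoint (dual) solution to obtain a goal-based indicator, then enrich the
    // polynomial order where the indicator is large. Names and data are hypothetical.
    #include <cmath>
    #include <iostream>
    #include <vector>

    struct Element {
        int    p;          // current polynomial order
        double residual;   // estimate of the local truncation error on element K
        double adjoint;    // representative magnitude of the adjoint solution on K
    };

    int main() {
        std::vector<Element> mesh = {
            {2, 1e-3, 5.0},   // near the region driving the functional (e.g. a wing)
            {2, 1e-3, 0.1},   // similar residual, but the adjoint says it matters little
            {2, 1e-6, 8.0},
        };

        const double tol  = 1e-4;  // target contribution per element to the functional error
        const int    pMax = 8;

        for (auto& e : mesh) {
            const double indicator = std::abs(e.adjoint * e.residual); // adjoint-weighted error
            if (indicator > tol && e.p < pMax) {
                ++e.p;  // enrich locally; in practice one would re-solve and iterate
            }
            std::cout << "indicator = " << indicator << ", new p = " << e.p << "\n";
        }
    }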

2016

  • D. Moxey, C. D. Cantwell, R. M. Kirby and S. J. Sherwin
    Optimizing the performance of the spectral/hp element method with collective linear algebra operations
    Comput. Meth. Appl. Mech. Eng., 310, pp. 628–645, 2016. 10.1016/j.cma.2016.07.001 BibTeX Abstract
    @article{moxey-2016b,
      title = {Optimizing the performance of the spectral/hp element method
                        with collective linear algebra operations},
      author = {Moxey, D. and Cantwell, C. D. and Kirby, R. M. and Sherwin, S. J.},
      journal = {Comput. Meth. Appl. Mech. Eng.},
      volume = {310},
      pages = {628--645},
      year = {2016},
      url = {http://www.sciencedirect.com/science/article/pii/S0045782516306739},
      doi = {10.1016/j.cma.2016.07.001}
    }
    
    As high-performance computing hardware evolves, increasing core counts mean that memory bandwidth is becoming the deciding factor in attaining peak CPU performance. Methods that make efficient use of memory and caches are therefore essential for modern hardware. High-order finite element methods, such as those implemented in the spectral/hp framework Nektar++, are particularly well-suited to this environment. Unlike low-order methods that typically utilize sparse storage, matrices representing high-order operators have greater density and richer structure. In this paper, we show how these qualities can be exploited to increase runtime performance by amalgamating the action of key operators on multiple elements into a single, memory-efficient block. We investigate different strategies for achieving optimal performance across a range of polynomial orders and element types. As these strategies all depend on external factors such as BLAS implementation and the geometry of interest, we present a technique for automatically selecting the most efficient strategy at runtime. (A generic sketch illustrating this batching idea is given at the end of this year's list.)
  • A. Bolis, C. D. Cantwell, D. Moxey, D. Serson and S. J. Sherwin
    An adaptable parallel algorithm for the direct numerical simulation of incompressible turbulent flows using a Fourier spectral/hp element method and MPI virtual topologies
    Comput. Phys. Commun., 206, pp. 17–25, 2016. 10.1016/j.cpc.2016.04.011 BibTeX Abstract
    @article{bolis-2016,
      title = {{An adaptable parallel algorithm for the direct numerical
                        simulation of incompressible turbulent flows using a Fourier
                        spectral/hp element method and MPI virtual topologies}},
      author = {Bolis, A. and Cantwell, C. D. and Moxey, D. and Serson, D. and Sherwin, S. J.},
      journal = {Comput. Phys. Commun.},
      volume = {206},
      pages = {17--25},
      year = {2016},
      doi = {10.1016/j.cpc.2016.04.011},
      url = {http://www.sciencedirect.com/science/article/pii/S001046551630100X}
    }
    
    A hybrid parallelisation technique for distributed memory systems is investigated for a coupled Fourier-spectral/hp element discretisation of domains characterised by geometric homogeneity in one or more directions. The performance of the approach is mathematically modelled in terms of operation count and communication costs for identifying the most efficient parameter choices. The model is calibrated to target a specific hardware platform after which it is shown to accurately predict the performance in the hybrid regime. The method is applied to modelling turbulent flow using the incompressible Navier-Stokes equations in an axisymmetric pipe and square channel. The hybrid method extends the practical limitations of the discretisation, allowing greater parallelism and reduced wall times. Performance is shown to continue to scale when both parallelisation strategies are used.
  • J.-E. W. Lombard, D. Moxey, S. J. Sherwin, J. F. A. Hoessler, S. Dhandapani and M. J. Taylor
    Implicit large-eddy simulation of a wingtip vortex
    AIAA J., 54 (2), pp. 506–518, 2016. 10.2514/1.J054181 BibTeX Abstract
    @article{lombard-2016,
      title = {Implicit large-eddy simulation of a wingtip vortex},
      author = {Lombard, J.-E. W. and Moxey, D. and Sherwin, S. J. and Hoessler, J. F. A. and Dhandapani, S. and Taylor, M. J.},
      year = {2016},
      journal = {AIAA J.},
      volume = {54},
      number = {2},
      pages = {506--518},
      url = {http://arxiv.org/abs/1507.06012},
      doi = {10.2514/1.J054181}
    }
    
    In this article, recent developments in numerical methods for performing a large-eddy simulation of the formation and evolution of a wingtip vortex are presented. The development of these vortices in the near wake, in combination with the large Reynolds numbers present in these cases, makes these types of test cases particularly challenging to investigate numerically. First, an overview is given of the spectral vanishing viscosity/implicit large-eddy simulation solver that is used to perform the simulations, and techniques are highlighted that have been adopted to solve various numerical issues that arise when studying such cases. To demonstrate the method’s viability, results are presented from numerical simulations of flow over a NACA 0012 profile wingtip at Re_c = 1.2 × 10^6 and they are compared against experimental data, which is to date the highest Reynolds number achieved for a large-eddy simulation that has been correlated with experiments for this test case. The model in this paper correlates favorably with experiment, both for the characteristic jetting in the primary vortex and pressure distribution on the wing surface. The proposed method is of general interest for the modeling of transitioning vortex-dominated flows over complex geometries.
  • S. Yakovlev, D. Moxey, S. J. Sherwin and R. M. Kirby
    To CG or to HDG: a comparative study in 3D
    J. Sci. Comp., 67 (1), pp. 192–220, 2016. 10.1007/s10915-015-0076-6 BibTeX Abstract
    @article{yakovlev-2016,
      title = {{To CG or to HDG: a comparative study in 3D}},
      author = {Yakovlev, S. and Moxey, D. and Sherwin, S. J. and Kirby, R. M.},
      journal = {J. Sci. Comp.},
      volume = {67},
      number = {1},
      pages = {{192-220}},
      year = {2016},
      url = {https://davidmoxey.uk/assets/pubs/2015-hdg.pdf},
      doi = {10.1007/s10915-015-0076-6}
    }
    
    Since the inception of discontinuous Galerkin (DG) methods for elliptic problems, there has existed a question of whether DG methods can be made more computationally efficient than continuous Galerkin (CG) methods. The fewer degrees of freedom and good approximation properties of CG for elliptic problems, together with the number of optimization techniques available within the CG framework, such as static condensation, made it challenging for DG methods to be competitive until recently. However, with the introduction of a static-condensation-amenable DG method – the hybridizable discontinuous Galerkin (HDG) method – it has become possible to perform a realistic comparison of CG and HDG methods when applied to elliptic problems. In this work, we extend upon an earlier 2D comparative study, providing numerical results and discussion of the CG and HDG method performance in three dimensions. The comparison categories covered include steady-state elliptic and time-dependent parabolic problems, various element types and serial and parallel performance. The postprocessing technique, which allows for superconvergence in the HDG case, is also discussed. Depending on the linear system solver used and the type of the problem (steady-state vs time-dependent) in question, the HDG method either outperforms or demonstrates a comparable performance when compared with the CG method. The HDG method, however, falls behind performance-wise when an iterative solver is used, which indicates the need for an effective preconditioning strategy for the method.
  • D. Moxey, D. Ekelschot, Ü. Keskin, S. J. Sherwin and J. Peiró
    High-order curvilinear meshing using a thermo-elastic analogy
    Comput. Aided Design, 72, pp. 130–139, 2016. 10.1016/j.cad.2015.09.007 BibTeX Abstract
    @article{moxey-2016a,
      title = {High-order curvilinear meshing using a thermo-elastic analogy},
      author = {Moxey, D. and Ekelschot, D. and Keskin, {\"U}. and Sherwin, S. J. and Peir{\'o}, J.},
      journal = {Comput. Aided Design},
      volume = {72},
      pages = {130--139},
      year = {2016},
      url = {http://www.sciencedirect.com/science/article/pii/S0010448515001530},
      doi = {10.1016/j.cad.2015.09.007}
    }
    
    With high-order methods becoming increasingly popular in both academia and industry, generating curvilinear meshes that align with the boundaries of complex geometries continues to present a significant challenge. Whereas traditional low-order methods use planar-faced elements, high-order methods introduce curvature into elements that may, if added naively, cause the element to self-intersect. Over the last few years, several curvilinear mesh generation techniques have been designed to tackle this issue, utilising mesh deformation to move the interior nodes of the mesh in order to accommodate curvature at the boundary. Many of these are based on elastic models, where the mesh is treated as a solid body and deformed according to a linear or non-linear stress tensor. However, such methods typically have no explicit control over the validity of the elements in the resulting mesh. In this article, we present an extension of this elastic formulation, whereby a thermal stress term is introduced to ‘heat’ or ‘cool’ elements as they deform. We outline a proof-of-concept implementation and show that the adoption of a thermo-elastic analogy leads to an additional degree of robustness, by considering examples in both two and three dimensions.
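
The sketch below illustrates the amalgamation idea from the "collective linear algebra operations" entry earlier in this list, as noted there: rather than applying an elemental matrix to each element's data separately, the data for all elements that share the same operator are gathered as the columns of one matrix, so a single dense matrix-matrix product does the work and makes better use of memory bandwidth. The plain CBLAS call here is a generic illustration, not Nektar++ code.

    // Illustrative "collective" application of a shared elemental operator.
    // Element-by-element: y_e = A x_e  (many small matrix-vector products).
    // Collective:         Y   = A X    (one matrix-matrix product over all elements),
    // where X stacks the per-element vectors as columns. Generic sketch, not Nektar++.
    #include <cblas.h>       // assumes a CBLAS implementation (e.g. OpenBLAS) is available
    #include <iostream>
    #include <vector>

    int main() {
        const int n    = 4;     // rows/cols of the elemental operator
        const int nElm = 1000;  // number of elements sharing this operator

        std::vector<double> A(n * n, 0.0);
        for (int i = 0; i < n; ++i) A[i * n + i] = 2.0;   // simple operator: 2 * identity

        std::vector<double> X(n * nElm, 1.0);             // per-element data, one column per element
        std::vector<double> Y(n * nElm, 0.0);

        // One GEMM applies A to all elements at once: Y = 1.0 * A * X + 0.0 * Y.
        // Column-major storage keeps each element's degrees of freedom contiguous.
        cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
                    n, nElm, n, 1.0, A.data(), n, X.data(), n, 0.0, Y.data(), n);

        std::cout << "Y(0,0) = " << Y[0] << "\n";         // expect 2.0
    }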

2015

  • G. Mengaldo, D. de Grazia, D. Moxey, P. E. Vincent and S. J. Sherwin
    Dealiasing techniques for high-order spectral element methods on regular and irregular grids
    J. Comput. Phys., 299, pp. 56–81, 2015. 10.1016/j.jcp.2015.06.032 BibTeX Abstract
    @article{mengaldo-2015,
      title = {{Dealiasing techniques for high-order spectral element methods
                        on regular and irregular grids}},
      author = {Mengaldo, G. and de Grazia, D. and Moxey, D. and Vincent, P. E. and Sherwin, S. J.},
      journal = {J. Comput. Phys.},
      year = {2015},
      volume = {299},
      pages = {56--81},
      doi = {10.1016/j.jcp.2015.06.032},
      url = {http://www.sciencedirect.com/science/article/pii/S0021999115004301}
    }
    
    High-order methods are becoming increasingly attractive in both academia and industry, especially in the context of computational fluid dynamics. However, before they can be more widely adopted, issues such as lack of robustness in terms of numerical stability need to be addressed, particularly when treating industrial-type problems where challenging geometries and a wide range of physical scales, typically due to high Reynolds numbers, need to be taken into account. One source of instability is aliasing effects which arise from the nonlinearity of the underlying problem. In this work we detail two dealiasing strategies based on the concept of consistent integration, the first of which uses a localised approach which is useful when the nonlinearities only arise in parts of the problem and the second a more traditional approach of using a higher quadrature. The main goal of both dealiasing techniques is to improve the robustness of high order spectral element methods, thereby reducing aliasing-driven instabilities. We demonstrate how these two strategies can be effectively applied to both continuous and discontinuous discretisations, where in the latter both volumetric and interface approximations must be considered. We show the key features of each dealiasing technique applied to the scalar conservation law with numerical examples and we highlight the main differences in implementation between continuous and discontinuous spatial discretisations.
  • C. D. Cantwell, D. Moxey, A. Comerford, A. Bolis, G. Rocco, G. Mengaldo, D. de Grazia, S. Yakovlev, J.-E. Lombard, D. Ekelschot, B. Jordi, H. Xu, Y. Mohamied, C. Eskilsson, B. Nelson, P. Vos, C. Biotto, R. M. Kirby and S. J. Sherwin
    Nektar++: An open-source spectral/hp element framework
    Comput. Phys. Commun., 192, pp. 205–219, 2015. 10.1016/j.cpc.2015.02.008 BibTeX Abstract
    @article{cantwell-2015,
      title = {Nektar++: An open-source spectral/hp element framework},
      author = {Cantwell, C. D. and Moxey, D. and Comerford, A. and Bolis, A. and Rocco, G. and Mengaldo, G. and de Grazia, D. and Yakovlev, S. and Lombard, J.-E. and Ekelschot, D. and Jordi, B. and Xu, H. and Mohamied, Y. and Eskilsson, C. and Nelson, B. and Vos, P. and Biotto, C. and Kirby, R. M. and Sherwin, S. J.},
      journal = {Comput. Phys. Commun.},
      volume = {192},
      pages = {205--219},
      year = {2015},
      doi = {10.1016/j.cpc.2015.02.008},
      url = {http://www.sciencedirect.com/science/article/pii/S0010465515000533}
    }
    
    Nektar++ is an open-source software framework designed to support the development of high-performance scalable solvers for partial differential equations using the spectral/hp element method. High-order methods are gaining prominence in several engineering and biomedical applications due to their improved accuracy at reduced computational cost. However, their proliferation is often limited by implementational complexity, which makes practically embracing these methods particularly challenging. Nektar++ is an initiative to overcome this limitation by encapsulating the mathematical complexities of the underlying method within an efficient C++ framework, making the techniques more accessible to the broader scientific and industrial communities for solving a range of problems. The software supports a variety of discretisation techniques and implementation strategies, supporting methods research as well as application-focused computation, and the multi-layered structure of the framework allows the user to embrace as much or as little of the complexity as they need. The libraries capture the mathematical constructs of spectral/hp element methods, while the associated collection of pre-written PDE solvers provides out-of-the-box application-level functionality and a template for users who wish to develop solutions for addressing questions in their own scientific domains.
  • D. Moxey, M. D. Green, S. J. Sherwin and J. Peiró
    An isoparametric approach to high-order curvilinear boundary-layer meshing
    Comput. Meth. Appl. Mech. Eng., 283, pp. 636–650, 2015. 10.1016/j.cma.2014.09.019 BibTeX Abstract
    @article{moxey-2015a,
      title = {An isoparametric approach to high-order curvilinear
                        boundary-layer meshing},
      author = {Moxey, D. and Green, M. D. and Sherwin, S. J. and Peir{\'o}, J.},
      journal = {Comput. Meth. Appl. Mech. Eng.},
      volume = {283},
      pages = {636--650},
      year = {2015},
      doi = {10.1016/j.cma.2014.09.019},
      url = {http://www.sciencedirect.com/science/article/pii/S004578251400334X}
    }
    
    The generation of high-order curvilinear meshes for complex three-dimensional geometries is presently a challenging topic, particularly for meshes used in simulations at high Reynolds numbers where a thin boundary layer exists near walls and elements are highly stretched in the direction normal to flow. In this paper, we present a conceptually simple but very effective and modular method to address this issue. We propose an isoparametric approach, whereby a mesh containing a valid coarse discretisation comprising of high-order triangular prisms near walls is refined to obtain a finer prismatic or tetrahedral boundary-layer mesh. The validity of the prismatic mesh provides a suitable mapping that allows one to obtain very fine mesh resolutions across the thickness of the boundary layer. We describe the method in detail for a high-order approximation using modal basis functions, discuss the requirements for the splitting method to produce valid prismatic and tetrahedral meshes and provide a sufficient criterion of validity in both cases. By considering two complex aeronautical configurations, we demonstrate how highly stretched meshes with sufficient resolution within the laminar sublayer can be generated to enable the simulation of flows with Reynolds numbers of 10^6 and above.
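
The core of the isoparametric splitting idea in the entry above can be illustrated in one dimension: split points are chosen in the reference coordinate of the wall-normal direction (here with a geometric progression that clusters layers towards the wall) and are then pushed through the element's own mapping, so each sub-element remains inside the original valid element. The quadratic mapping and the spacing parameters in the C++ sketch below are made up for illustration.

    // 1D illustration of isoparametric boundary-layer splitting: choose split points
    // in the reference coordinate xi in [-1, 1] with geometric clustering towards the
    // wall (xi = -1), then evaluate the element's own mapping x(xi) at those points.
    // Mapping and parameters are hypothetical.
    #include <cmath>
    #include <iostream>
    #include <vector>

    int main() {
        const int    nLayers = 5;
        const double ratio   = 2.0;   // geometric growth of layer thickness in xi

        // Layer thicknesses proportional to 1, r, r^2, ... scaled to cover [-1, 1].
        std::vector<double> xi = {-1.0};
        const double total = (std::pow(ratio, nLayers) - 1.0) / (ratio - 1.0);
        double t = 2.0 / total;                        // first layer thickness in xi
        for (int i = 0; i < nLayers; ++i) {
            xi.push_back(xi.back() + t);
            t *= ratio;
        }

        // The element's wall-normal mapping, e.g. a curved (quadratic) edge.
        auto x = [](double s) { return 0.6 * s + 0.4 + 0.1 * s * s; };

        // Physical positions of the layer interfaces: guaranteed to lie inside the
        // original element because they are images of points in the reference element.
        for (double s : xi) {
            std::cout << "xi = " << s << "  ->  x = " << x(s) << "\n";
        }
    }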

2014

  • E. Ferrer, D. Moxey, S. J. Sherwin and R. H. J. Willden
    Stability of projection methods for incompressible flows using high order pressure-velocity pairs of same degree: Continuous and Discontinuous Galerkin formulations
    Commun. Comp. Phys., 16 (3), pp. 817–840, 2014. 10.4208/cicp.290114.170414a BibTeX Abstract
    @article{ferrer-2014,
      title = {{Stability of projection methods for incompressible flows
                        using high order pressure-velocity pairs of same degree:
                        Continuous and Discontinuous Galerkin formulations}},
      author = {Ferrer, E. and Moxey, D. and Sherwin, S. J. and Willden, R. H. J.},
      volume = {16},
      number = {3},
      pages = {817-840},
      doi = {10.4208/cicp.290114.170414a},
      year = {2014},
      journal = {Commun. Comp. Phys.},
      url = {https://davidmoxey.uk/assets/pubs/2014-temporal.pdf}
    }
    
    This paper presents limits for stability of projection type schemes when using high order pressure-velocity pairs of same degree. Two high order h/p variational methods encompassing continuous and discontinuous Galerkin formulations are used to explain previously observed lower limits on the time step for projection type schemes to be stable, when h- or p-refinement strategies are considered. In addition, the analysis included in this work shows that these stability limits do not depend only on the time step but on the product of the latter and the kinematic viscosity, which is of particular importance in the study of high Reynolds number flows. We show that high order methods prove advantageous in stabilising the simulations when small time steps and low kinematic viscosities are used. Drawing upon this analysis, we demonstrate how the effects of this instability can be reduced in the discontinuous scheme by introducing a stabilisation term into the global system. Finally, we show that these lower limits are compatible with Courant-Friedrichs-Lewy (CFL) type restrictions, given that a sufficiently high polynomial order or a small enough mesh spacing is selected.
  • D. de Grazia, G. Mengaldo, D. Moxey, P. E. Vincent and S. J. Sherwin
    Connections between the discontinuous Galerkin method and high-order flux reconstruction schemes
    Int. J. Numer. Meth. Fl., 75 (12), pp. 860–877, 2014. 10.1002/fld.3915 BibTeX Abstract
    @article{degrazia-2014,
      title = {{Connections between the discontinuous Galerkin method and
                        high-order flux reconstruction schemes}},
      author = {de Grazia, D. and Mengaldo, G. and Moxey, D. and Vincent, P. E. and Sherwin, S. J.},
      volume = {75},
      number = {12},
      issn = {1097-0363},
      doi = {10.1002/fld.3915},
      pages = {860--877},
      year = {2014},
      url = {https://davidmoxey.uk/assets/pubs/2014-frdg.pdf},
      journal = {Int. J. Numer. Meth. Fl.}
    }
    
    With high-order methods becoming more widely adopted throughout the field of computational fluid dynamics, the development of new computationally efficient algorithms has increased tremendously in recent years. The flux reconstruction approach allows various well-known high order schemes to be cast within a single unifying framework. Whilst a connection between flux reconstruction and the discontinuous Galerkin method has been established elsewhere, it still remains to fully investigate the explicit connections between the many popular variants of the discontinuous Galerkin method and the flux reconstruction approach. In this work, we closely examine the connections between three nodal versions of tensor product discontinuous Galerkin spectral element approximations and two types of flux reconstruction schemes for solving systems of conservation laws on quadrilateral meshes. The different types of discontinuous Galerkin approximations arise from the choice of the solution nodes of the Lagrange basis representing the solution and from the quadrature approximation used to integrate the mass matrix and the other terms of the discretisation. By considering both a linear and nonlinear advection equation on a regular grid, we examine the mathematical properties which connect these discretisations. These arguments are further confirmed by the results of an empirical numerical study.
  • J. Cohen, C. D. Cantwell, N. P. C. Hong, D. Moxey, M. Illingworth, A. Turner, J. Darlington and S. J. Sherwin
    Simplifying the Development, Use and Sustainability of HPC Software
    J. Open Res. Soft., 2 (1), 2014. 10.5334/jors.az BibTeX Abstract
    @article{cohen-2014,
      author = {Cohen, J. and Cantwell, C. D. and Hong, N. P. Chue and Moxey, D. and Illingworth, M. and Turner, A. and Darlington, J. and Sherwin, S. J.},
      title = {Simplifying the Development, Use and Sustainability of HPC
                        Software},
      journal = {J. Open Res. Soft.},
      volume = {2},
      number = {1},
      year = {2014},
      issn = {2049-9647},
      url = {https://davidmoxey.uk/assets/pubs/2014-jors.pdf},
      doi = {10.5334/jors.az}
    }
    
    Developing software to undertake complex, compute-intensive scientific processes requires a challenging combination of both specialist domain knowledge and software development skills to convert this knowledge into efficient code. As computational platforms become increasingly heterogeneous and newer types of platform such as Infrastructure-as-a-Service (IaaS) cloud computing become more widely accepted for high-performance computing (HPC), scientists require more support from computer scientists and resource providers to develop efficient code that offers long-term sustainability and makes optimal use of the resources available to them. As part of the libhpc stage 1 and 2 projects we are developing a framework to provide a richer means of job specification and efficient execution of complex scientific software on heterogeneous infrastructure. In this updated version of our submission to the WSSSPE13 workshop at SuperComputing 2013 we set out our approach to simplifying access to HPC applications and resources for end-users through the use of flexible and interchangeable software components and associated high-level functional-style operations. We believe this approach can support sustainability of scientific software and help to widen access to it.

2011

  • K. Avila, D. Moxey, A. de Lozar, M. Avila, D. Barkley and B. Hof
    The onset of turbulence in pipe flow
    Science, 333 (6039), pp. 192–196, 2011. 10.1126/science.1203223 BibTeX Abstract
    @article{avila-2011,
      title = {{The onset of turbulence in pipe flow}},
      author = {Avila, K. and Moxey, D. and de Lozar, A. and Avila, M. and Barkley, D. and Hof, B.},
      volume = {333},
      number = {6039},
      pages = {192--196},
      year = {2011},
      month = may,
      journal = {Science},
      note = {published as a research article},
      doi = {10.1126/science.1203223},
      url = {https://davidmoxey.uk/assets/pubs/2011-science.pdf}
    }
    
    Shear flows undergo a sudden transition from laminar to turbulent motion as the velocity increases and the onset of turbulence radically changes transport efficiency and mixing properties. Even for the well-studied case of pipe flow, it has not been possible to determine at what Reynolds number the motion will be either persistently turbulent or ultimately laminar. We show that in pipes, turbulence which is transient at low Reynolds numbers becomes sustained at a distinct critical point. Through extensive experiments and computer simulations we are able to identify and characterize the processes ultimately responsible for sustaining turbulence. In contrast to the classical Landau-Ruelle-Takens view that turbulence arises from an increase in the temporal complexity of fluid motion, here spatial proliferation of chaotic domains is the decisive process and intrinsic to the nature of fluid turbulence.

2010

  • D. Moxey and D. Barkley
    Distinct large-scale turbulent-laminar states in transitional pipe flow
    Proc. Nat. Acad. Sci., 107 (18), pp. 8091–8096, 2010. 10.1073/pnas.0909560107 BibTeX Abstract
    @article{moxey-2010,
      title = {{Distinct large-scale turbulent-laminar states in
                        transitional pipe flow}},
      author = {Moxey, D. and Barkley, D.},
      journal = {Proc. Nat. Acad. Sci.},
      volume = {107},
      number = {18},
      pages = {8091--8096},
      year = {2010},
      month = may,
      doi = {10.1073/pnas.0909560107},
      url = {https://davidmoxey.uk/assets/pubs/2010-pnas.pdf}
    }
    
    When fluid flows through a channel, pipe, or duct, there are two basic forms of motion: smooth laminar motion and complex turbulent motion. The discontinuous transition between these states is a fundamental problem that has been studied for more than 100 years. What has received far less attention is the large-scale nature of the turbulent flows near transition once they are established. We have carried out extensive numerical computations in pipes of variable lengths up to 125 diameters to investigate the nature of transitional turbulence in pipe flow. We show the existence of three fundamentally different turbulent states separated by two distinct Reynolds numbers. Below Re1 ≈ 2300, turbulence takes the form of familiar equilibrium (or long-time transient) puffs that are spatially localized and keep their size independent of pipe length. At Re1 the flow makes a striking transition to a spatio-temporally intermittent flow that fills the pipe. Irregular alternation of turbulent and laminar regions is inherent and does not result from random disturbances. The fraction of turbulence increases with Re until Re2 ≈ 2600, where there is a continuous transition to a state of uniform turbulence along the pipe. We relate these observations to directed percolation and argue that Re1 marks the onset of infinite-lifetime turbulence.

Book chapters

2015

  • D. Moxey, M. D. Green, S. J. Sherwin and J. Peiró
    On the generation of curvilinear meshes through subdivision of isoparametric elements
    in New Challenges in Grid Generation and Adaptivity for Scientific Computing, Springer, 2015, pp. 203–215. 10.1007/978-3-319-06053-8_10 BibTeX Abstract
    @inbook{moxey-2015d,
      title = {On the generation of curvilinear meshes through subdivision of
                        isoparametric elements},
      author = {Moxey, D. and Green, M. D. and Sherwin, S. J. and Peir\'o, J.},
      booktitle = {New Challenges in Grid Generation and Adaptivity for
                        Scientific Computing},
      pages = {203--215},
      year = {2015},
      publisher = {Springer},
      doi = {10.1007/978-3-319-06053-8_10},
      url = {https://davidmoxey.uk/assets/pubs/2014-tet.pdf}
    }
    
    Recently, a new mesh generation technique based on the isoparametric representation of curvilinear elements has been developed in order to address the issue of generating high-order meshes with highly stretched elements. Given a valid coarse mesh comprising of a prismatic boundary layer, this technique uses the shape functions that define the geometries of the elements to produce a series of subdivided elements of arbitrary height. The purpose of this article is to investigate the range of conditions under which the resulting meshes are valid, and additionally to consider the application of this method to different element types. We consider the subdivision strategies that can be achieved with this technique and apply it to the generation of meshes suitable for boundary-layer fluid problems.
  • J. Peiró, D. Moxey, B. Jordi, S. J. Sherwin, B. W. Nelson, R. M. Kirby and R. Haimes
    High-order visualization with ElVis
    in IDIHOM: Industrialization of High-Order Methods-A Top-Down Approach, Springer, 2015, pp. 521–534. 10.1007/978-3-319-12886-3_24 BibTeX Abstract
    @inbook{moxey-2015c,
      title = {{High-order visualization with ElVis}},
      author = {Peir{\'o}, J. and Moxey, D. and Jordi, B. and Sherwin, S. J. and Nelson, B. W. and Kirby, R. M. and Haimes, R.},
      booktitle = {IDIHOM: Industrialization of High-Order Methods-A Top-Down
                        Approach},
      pages = {521--534},
      year = {2015},
      doi = {10.1007/978-3-319-12886-3_24},
      publisher = {Springer}
    }
    
    Accurate visualization of high-order meshes and flow fields is a fundamental tool for the verification, validation, analysis and interpretation of high-order flow simulations. Standard visualization tools based on piecewise linear approximations can be used for the display of high-order fields but their accuracy is restricted by computer memory and processing time. More often than not, the accurate visualization of complex flows using this strategy requires computational resources beyond the reach of most users. This chapter describes ElVis, a truly high-order and interactive visualization system created for the accurate and interactive visualization of scalar fields produced by high-order spectral/hp finite element simulations. We show some examples that motivate the need for such a visualization system and illustrate some of its features for the display and analysis of simulation data.
  • D. Moxey, M. Hazan, S. J. Sherwin and J. Peiró
    Curvilinear mesh generation for boundary layer problems
    in IDIHOM: Industrialization of High-Order Methods-A Top-Down Approach, Springer, 2015, pp. 41–64. 10.1007/978-3-319-12886-3_3 BibTeX Abstract
    @inbook{moxey-2015b,
      title = {Curvilinear mesh generation for boundary layer problems},
      author = {Moxey, D. and Hazan, M. and Sherwin, S. J. and Peir{\'o}, J.},
      booktitle = {IDIHOM: Industrialization of High-Order Methods-A Top-Down
                        Approach},
      pages = {41--64},
      year = {2015},
      doi = {10.1007/978-3-319-12886-3_3},
      publisher = {Springer}
    }
    
    In this article, we give an overview of a new technique for unstructured curvilinear boundary layer grid generation, which uses the isoparametric mappings that define elements in an existing coarse prismatic grid to produce a refined mesh capable of resolving arbitrarily thin boundary layers. We demonstrate that the technique always produces valid grids given an initially valid coarse mesh, and additionally show how this can be extended to convert hybrid meshes to meshes containing only simplicial elements.

Conference proceedings

2017

  • D. Moxey, C. D. Cantwell, G. Mengaldo, D. Serson, D. Ekelschot, J. Peiró, S. J. Sherwin and R. M. Kirby
    Towards p-adaptive spectral/hp element methods for modelling industrial flows
    in Spectral and High Order Methods for Partial Differential Equations ICOSAHOM 2016, 2017, pp. 63–79. 10.1007/978-3-319-65870-4_4 BibTeX Abstract
    @inproceedings{moxey-2017a,
      title = {Towards $p$-adaptive spectral/$hp$ element methods for
                        modelling industrial flows},
      author = {Moxey, D. and Cantwell, C. D. and Mengaldo, G. and Serson, D. and Ekelschot, D. and Peir\'o, J. and Sherwin, S. J. and Kirby, R. M.},
      booktitle = {Spectral and High Order Methods for Partial Differential
                        Equations ICOSAHOM 2016},
      pages = {63-79},
      year = {2017},
      doi = {10.1007/978-3-319-65870-4_4},
      url = {https://davidmoxey.uk/assets/pubs/2017-icosahom16.pdf}
    }
    
    There is an increasing requirement from both academia and industry for high-fidelity flow simulations that are able to accurately capture complicated and transient flow dynamics in complex geometries. Coupled with the growing availability of high-performance, highly parallel computing resources, there is therefore a demand for scalable numerical methods and corresponding software frameworks which can deliver the next generation of complex and detailed fluid simulations to scientists and engineers in an efficient way. In this article we discuss recent and upcoming advances in the use of the spectral/hp element method for addressing these modelling challenges. To use these methods efficiently for such applications, it is critical that computational resolution is placed in the regions of the flow where it is needed most, which is often not known a priori. We propose the use of spatially and temporally varying polynomial order, coupled with appropriate error estimators, as key requirements in permitting these methods to achieve computationally efficient high-fidelity solutions to complex flow problems in the fluid dynamics community.
  • M. Turner, D. Moxey, J. Peiró, M. Gammon, C. R. Pollard and H. Bucklow
    A framework for the generation of high-order curvilinear hybrid meshes for CFD simulations
    in Procedia Engineering, 2017, 203, pp. 206–218. 10.1016/j.proeng.2017.09.808 BibTeX Abstract
    @inproceedings{turner-2017b,
      title = {A framework for the generation of high-order curvilinear
                        hybrid meshes for CFD simulations},
      author = {Turner, M. and Moxey, D. and Peir\'o, J. and Gammon, M. and Pollard, C. R. and Bucklow, H.},
      booktitle = {Procedia Engineering},
      year = {2017},
      volume = {203},
      pages = {206-218},
      doi = {10.1016/j.proeng.2017.09.808},
      url = {http://www.sciencedirect.com/science/article/pii/S1877705817343692}
    }
    
    We present a pipeline of state-of-the-art techniques for the generation of high-order meshes that contain highly stretched elements in viscous boundary layers, and are suitable for flow simulations at high Reynolds numbers. The pipeline uses CADfix to generate a medial object based decomposition of the domain, which wraps the wall boundaries with prismatic partitions. The use of medial object allows the prism height to be larger than is generally possible with advancing layer techniques. CADfix subsequently generates a hybrid straight-sided (or linear) mesh. A high-order mesh is then generated a posteriori using NekMesh, a high-order mesh generator within the Nektar++ framework. During the high-order mesh generation process, the CAD definition of the domain is interrogated; we describe the process for integrating the CADfix API as an alternative backend geometry engine for NekMesh, and discuss some of the implementation issues encountered. Finally, we illustrate the methodology using three geometries of increasing complexity: a wing tip, a simplified landing gear and an aircraft in cruise configuration.

2016

  • M. Turner, J. Peiró and D. Moxey
    A variational framework for high-order mesh generation
    in Procedia Engineering, 2016, 82, pp. 127–135. 10.1016/j.proeng.2016.11.069 BibTeX Abstract
    @inproceedings{turner-2016b,
      title = {A variational framework for high-order mesh generation},
      author = {Turner, M. and Peir\'o, J. and Moxey, D.},
      booktitle = {Procedia Engineering},
      year = {2016},
      volume = {82},
      pages = {127-135},
      doi = {10.1016/j.proeng.2016.11.069},
      url = {http://www.sciencedirect.com/science/article/pii/S1877705816333781}
    }
    
    The generation of sufficiently high quality unstructured high-order meshes remains a significant obstacle in the adoption of high-order methods. However, there is little consensus on which approach is the most robust, fastest and produces the 'best' meshes. In this work we aim to provide a route to investigate this question, by examining popular high-order mesh generation methods in the context of an efficient variational framework for the generation of curvilinear meshes. By considering previous works in a variational form, we are able to compare their characteristics and study their robustness. Alongside a description of the theory and practical implementation details, including an efficient multi-threading parallelisation strategy, we demonstrate the effectiveness of the framework, showing how it can be used for both mesh quality optimisation and untangling of invalid meshes.
  • J.-E. Lombard, D. Moxey and S. J. Sherwin
    The wing-tip vortex test case
    in European Congress on Computational Methods in Applied Sciences and Engineering, Crete, Greece, 2016. BibTeX Abstract
    @inproceedings{lombard-2016a,
      title = {The wing-tip vortex test case},
      author = {Lombard, J.-E. and Moxey, D. and Sherwin, S. J.},
      booktitle = {European Congress on Computational Methods in Applied Sciences
                        and Engineering, Crete, Greece},
      month = jun,
      year = {2016},
      url = {https://davidmoxey.uk/assets/pubs/2016-eccomas-2.pdf}
    }
    
    We present a spectral/hp element discretisation, using the Nektar++ code, for performing a Large Eddy Simulation (LES) of the formation and evolution of a wingtip vortex as a test case involving a 3D geometry. The development of these vortices in the near wake, in combination with the large Reynolds numbers, makes this test case particularly challenging to simulate. We consider flow over a NACA 0012 profile wingtip at a Reynolds number of 1.2 million, based on chord length, and compare the results against experimental data, which is to date the highest Reynolds number achieved for an LES that has been correlated with experiments for this test case. The jetting of the primary vortex and the pressure distribution on the wing surface in our model were successfully correlated with the experiment; however, the vortex formation over the rear wing tip shows some discrepancies, which act as a motivation for further testing of high-fidelity methods on this test case. The wingtip vortex test case is of general interest for the modelling of transitioning vortex-dominated flows over complex geometries, which is of particular relevance to applications such as high-lift aircraft configurations, wind turbines, propellers and automotive design.
  • M. Turner, D. Moxey, S. J. Sherwin and J. Peiró
    Automatic generation of 3D unstructured high-order curvilinear meshes
    in Proceedings of the European Congress on Computational Methods in Applied Sciences and Engineering, 2016, pp. 428–433. 10.7712/100016.1825.8410 BibTeX Abstract
    @inproceedings{turner-2016a,
      title = {Automatic generation of 3D unstructured high-order curvilinear
                        meshes},
      author = {Turner, M. and Moxey, D. and Sherwin, S. J. and Peir\'o, J.},
      booktitle = {Proceedings of the European Congress on Computational Methods
                        in Applied Sciences and Engineering},
      pages = {428--433},
      year = {2016},
      url = {https://davidmoxey.uk/assets/pubs/2016-eccomas.pdf},
      doi = {10.7712/100016.1825.8410}
    }
    
    The generation of suitable, good quality high-order meshes is a significant obstacle in the academic and industrial uptake of high-order CFD methods. These methods have a number of favourable characteristics such as low dispersion and dissipation and higher levels of numerical accuracy than their low-order counterparts, however the methods are highly susceptible to inaccuracies caused by low quality meshes. These meshes require significant curvature to accurately describe the geometric surfaces, which presents a number of difficult challenges in their generation. As yet, research into the field has produced a number of interesting technologies that go some way towards achieving this goal, but have yet to provide a complete system that can systematically produce curved high-order meshes for arbitrary geometries for CFD analysis. This paper presents our efforts in that direction and introduces an open-source high-order mesh generator, NekMesh, which has been created to bring high-order meshing technologies into one coherent pipeline which aims to produce 3D high-order curvilinear meshes from CAD geometries in a robust and systematic way.

2015

  • J. Cohen, C. Cantwell, D. Moxey, J. Nowell, P. Austing, X. Guo, J. Darlington and S. J. Sherwin
    TemPSS: A service providing software parameter templates and profiles for scientific HPC
    in IEEE eScience (Munich, Germany), 2015. 10.1109/eScience.2015.43 BibTeX Abstract
    @inproceedings{cohen-2015a,
      title = {{TemPSS: A service providing software parameter templates and
                        profiles for scientific HPC}},
      author = {Cohen, J. and Cantwell, C. and Moxey, D. and Nowell, J. and Austing, P. and Guo, X. and Darlington, J. and Sherwin, S. J.},
      booktitle = {IEEE eScience (Munich, Germany)},
      year = {2015},
      doi = {10.1109/eScience.2015.43},
      url = {https://davidmoxey.uk/assets/pubs/2015-tempss.pdf}
    }
    
    Generating and managing input data for large-scale scientific computations has, for many classes of application, always been a challenging process. The emergence of new hardware platforms and increasingly complex scientific models compounds this problem as configuration data can change depending on the underlying hardware and properties of the computation. In this paper we present TemPro, a web based service for building and managing application input files in a semantically focused manner using the concepts of software parameter templates and job profiles. Many complex, distributed applications require the expertise of more than one individual to allow an application to run efficiently on different types of hardware. TemPro supports collaborative development of application inputs through the ability to save, edit and extend job profiles that define the inputs to an application. We describe the concepts of templates and profiles and the structures that developers provide to add an application template to the TemPro service. In addition, we detail the implementation of the service and its functionality.
  • M. Turner, D. Moxey and J. Peiró
    Automatic mesh sizing specification of complex three dimensional domains using an octree structure
    in 24th International Meshing Roundtable, 2015. BibTeX Abstract
    @inproceedings{turner-2015,
      title = {Automatic mesh sizing specification of complex three
                        dimensional domains using an octree structure},
      booktitle = {24th International Meshing Roundtable},
      author = {Turner, M. and Moxey, D. and Peir\'o, J.},
      year = {2015},
      url = {https://davidmoxey.uk/assets/pubs/2015-imr24.pdf}
    }
    
    A system for automatically specifying a distribution of mesh sizing throughout three dimensional complex domains is presented, which aims to reduce the level of user input required to generate a mesh. The primary motivation for the creation of this system is for the production of suitable linear meshes that are sufficiently coarse for high-order mesh generation purposes. Resolution is automatically increased in regions of high curvature, with the system only requiring three parameters from the user to successfully generate the sizing distribution. This level of automation is achieved through the construction of an octree description of the domain, which targets the curvature of the surfaces and guides the generation of the mesh. After the construction of the octree, an ideal mesh spacing specification is calculated for each octant, based on a relation to the radii of curvature of the domain surfaces and mesh gradation criteria. The system is capable of accurately estimating the number of elements that will be produced prior to the generation process, so that the meshing parameters can be altered to coarsen the mesh before effort is wasted generating the actual mesh.
  • J. Cohen, D. Moxey, C. D. Cantwell, P. Austing, J. Darlington and S. J. Sherwin
    Ensuring an effective user experience when managing and running scientific HPC software
    in 2015 IEEE/ACM 1st International Workshop on Software Engineering for High Performance Computing in Science (SE4HPCS), 2015, pp. 56–59. 10.1109/SE4HPCS.2015.16 BibTeX Abstract
    @inproceedings{cohen-2015b,
      title = {Ensuring an effective user experience when managing and
                        running scientific HPC software},
      booktitle = {2015 IEEE/ACM 1st International Workshop on Software
                        Engineering for High Performance Computing in Science
                        (SE4HPCS)},
      author = {Cohen, J. and Moxey, D. and Cantwell, C. D. and Austing, P. and Darlington, J. and Sherwin, S. J.},
      year = {2015},
      pages = {56-59},
      url = {https://davidmoxey.uk/assets/pubs/2015-se4hpcs.pdf},
      doi = {10.1109/SE4HPCS.2015.16}
    }
    
    With CPU clock speeds stagnating over the last few years, ongoing advances in computing power and capabilities are being supported through increasing multi- and many-core parallelism. The resulting cost of locally maintaining large-scale computing infrastructure, combined with the need to perform increasingly large simulations, is leading to the wider use of alternative models of accessing infrastructure, such as the use of Infrastructure-as-a-Service (IaaS) cloud platforms. The diversity of platforms and the methods of interacting with them can make using them with complex scientific HPC codes difficult for users. In this position paper, we discuss our approaches to tackling these challenges on heterogeneous resources. As an example of the application of these approaches we use Nekkloud, our web-based interface for simplifying job specification and deployment of the Nektar++ high-order finite element HPC code. We also present results from a recent Nekkloud evaluation workshop undertaken with a group of Nektar++ users.

2014

  • D. Moxey, D. Ekelschot, U. Keskin, S. J. Sherwin and J. Peiró
    A thermo-elastic analogy for high-order curvilinear meshing with control of mesh validity and quality
    in Procedia Engineering, 2014, 82, pp. 127–135. 10.1016/j.proeng.2014.10.378 BibTeX Abstract
    @inproceedings{moxey-2014a,
      title = {{A thermo-elastic analogy for high-order curvilinear meshing
                        with control of mesh validity and quality}},
      author = {Moxey, D. and Ekelschot, D. and Keskin, U. and Sherwin, S. J. and Peir{\'o}, J.},
      booktitle = {Procedia Engineering},
      year = {2014},
      volume = {82},
      pages = {127-135},
      doi = {10.1016/j.proeng.2014.10.378},
      url = {https://davidmoxey.uk/assets/pubs/2014-elasticity.pdf}
    }
    
    In recent years, techniques for the generation of high-order curvilinear meshes have frequently adopted mesh deformation procedures to project the curvature of the surface onto the mesh, thereby introducing curvature into the interior of the domain and lessening the occurrence of self-intersecting elements. In this article, we propose an extension of this approach whereby thermal stress terms are incorporated into the state equation to provide control on the validity and quality of the mesh, thereby adding an extra degree of robustness which is lacking in current approaches.

2013

  • J. Cohen, D. Moxey, C. D. Cantwell, P. Burovskiy, J. Darlington and S. J. Sherwin
    Nekkloud: A software environment for high-order finite element analysis on clusters and clouds
    in 2013 IEEE International Conference on Cluster Computing, 2013, pp. 1–5. 10.1109/cluster.2013.6702616 BibTeX Abstract
    @inproceedings{cohen-2013b,
      author = {Cohen, J. and Moxey, D. and Cantwell, C. D. and Burovskiy, P. and Darlington, J. and Sherwin, S. J.},
      booktitle = {2013 IEEE International Conference on Cluster Computing},
      title = {Nekkloud: A software environment for high-order finite element
                        analysis on clusters and clouds},
      year = {2013},
      pages = {1-5},
      doi = {10.1109/cluster.2013.6702616},
      url = {https://davidmoxey.uk/assets/pubs/2013-cluster.pdf}
    }
    
    As the capabilities of computational platforms continue to grow, scientific software is becoming ever more complex in order to target these platforms effectively. When using large-scale distributed infrastructure such as clusters and clouds it can be difficult for end-users to make efficient use of these platforms. In the libhpc project we are developing a suite of tools and services to simplify job description and execution on heterogeneous infrastructure. In this paper we present Nekkloud, a web-based software environment that builds on elements of the libhpc framework, for running the Nektar++ high-order finite element code on cluster and cloud platforms. End-users submit their jobs via Nekkloud, which then handles their execution on a chosen computing platform. Nektar++ provides a set of solvers that support scientists across a range of domains, ensuring that Nekkloud has a broad range of use cases. We describe the design and development of Nekkloud, user experience and integration with both local campus infrastructure and remote cloud resources enabling users to make better use of the resources available to them.
  • J. Cohen, C. D. Cantwell, N. P. C. Hong, D. Moxey, M. Illingworth, A. Turner, J. Darlington and S. J. Sherwin
    Simplifying the Development, Use and Sustainability of HPC Software
    in WSSPE13 Workshop, Supercomputing, 2013. BibTeX Abstract
    @inproceedings{cohen-2013a,
      author = {Cohen, J. and Cantwell, C. D. and Hong, N. P. Chue and Moxey, D. and Illingworth, M. and Turner, A. and Darlington, J. and Sherwin, S. J.},
      title = {Simplifying the Development, Use and Sustainability of HPC
                        Software},
      booktitle = {WSSPE13 Workshop, Supercomputing},
      year = {2013},
      url = {https://davidmoxey.uk/assets/pubs/2013-wsspe13.pdf}
    }
    
    Developing software to undertake complex, compute-intensive scientific processes requires a challenging combination of both specialist domain knowledge and software development skills to convert this knowledge into efficient code. As computational platforms become increasingly heterogeneous and newer types of platform such as Infrastructure-as-a-Service (IaaS) cloud computing become more widely accepted for HPC computations, scientists require more support from computer scientists and resource providers to develop efficient code and make optimal use of the resources available to them. As part of the libhpc stage 1 and 2 projects we are developing a framework to provide a richer means of job specification and efficient execution of complex scientific software on heterogeneous infrastructure. The use of such frameworks has implications for the sustainability of scientific software. In this paper we set out our developing understanding of these challenges based on work carried out in the libhpc project.

2012

  • J. Cohen, J. Darlington, B. Fuchs, D. Moxey, C. D. Cantwell, P. Burovskiy, S. J. Sherwin and N. P. C. Hong
    libHPC: Software sustainability and reuse through metadata preservation
    in First Workshop on Maintainable Software Practices in e-Science, 8th IEEE International Conference on eScience, 2012. BibTeX Abstract
    @inproceedings{cohen-2012,
      title = {libHPC: Software sustainability and reuse through metadata
                        preservation},
      booktitle = {First Workshop on Maintainable Software Practices in
                        e-Science, 8th IEEE International Conference on eScience},
      author = {Cohen, J. and Darlington, J. and Fuchs, B. and Moxey, D. and Cantwell, C. D. and Burovskiy, P. and Sherwin, S. J. and Hong, N. P. Chue},
      year = {2012},
      url = {https://davidmoxey.uk/assets/pubs/2012-escience.pdf}
    }
    
    Software development, particularly of complex scientific applications, requires a detailed understanding of the problem(s) to be solved and an ability to translate this understanding into the generic constructs of a programming language. We believe that such knowledge – information about a code’s “building blocks”, especially the low-level functions and procedures in which domain-specific tasks are implemented – can be very effectively leveraged to optimise code execution across platforms and operating systems. However, all too often such knowledge gets lost during the development process, which can bury the scientist’s understanding in the code in a manner that makes it difficult to recover or extract later on. In this paper, we describe our work in the EPSRC-funded libHPC project to build a framework that captures and utilises this information to achieve optimised performance in dynamic, heterogeneous networked execution environments. The aim of the framework is to allow scientists to work in high-level scripting environments based on component libraries to provide descriptions of applications which can then be mapped to optimal execution configurations based on available resources. A key element in our approach is the use of “co-ordination forms” – or functional paradigms – for creating optimised execution plans from components. Our main exemplar application is an advanced finite element framework, Nektar++, and we detail ongoing work to undertake profiling and performance analysis to extract software metadata and derive optimal execution configurations, to target resources based on their hardware metadata.

Theses

2011

  • D. Moxey
    Spatio-temporal dynamics in pipe flow
    PhD thesis, University of Warwick, 2011. BibTeX Abstract
    @phdthesis{moxey-2011,
      author = {Moxey, D.},
      title = {{Spatio-temporal dynamics in pipe flow}},
      school = {University of Warwick},
      month = oct,
      year = {2011},
      url = {https://davidmoxey.uk/assets/pubs/2011-thesis.pdf}
    }
    
    When fluid flows through a channel, pipe or duct, there are two basic forms of motion: smooth laminar flow and disordered turbulent motion. The transition between these two states is a fundamental and open problem which has been studied for over 125 years. What has received far less attention are the intermittent dynamics which possess qualities of both turbulent and laminar regimes. The purpose of this thesis is therefore to investigate large-scale intermittent states through extensive numerical simulations in the hopes of further understanding the transition to turbulence in pipe flow.

2007

  • D. Moxey
    "Snakes on a plane": An introduction to the study of polymer chains using Monte Carlo methods
    Master's thesis, University of Warwick, 2007. BibTeX Abstract
    @mastersthesis{moxey-2007,
      author = {Moxey, D.},
      title = {``Snakes on a plane'': An introduction to the study of polymer
                        chains using Monte Carlo methods},
      school = {University of Warwick},
      month = jul,
      year = {2007},
      url = {https://davidmoxey.uk/assets/pubs/2007-project.pdf}
    }
    
    In this report, a number of basic Monte Carlo methods for modelling polymer chains are presented (including configurational-bias Monte Carlo and the pruned-enriched Rosenbluth method). These are then used to investigate the behaviour of the collapse of polymer chains around the well-studied theta-point. Additionally, a flat-histogram version of PERM is outlined and applied to the problem of polymers both tethered to and in close proximity to an adsorbing surface.
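
As a small illustration of the chain-growth sampling that underlies the Rosenbluth-style methods mentioned above, the C++ sketch below grows self-avoiding walks on a 2D square lattice and weights each chain by the product of the number of free continuations at every step (the Rosenbluth weight); PERM and configurational-bias Monte Carlo refine exactly this basic scheme with pruning/enrichment and biased regrowth. The code is a generic textbook-style sketch, not the implementation from the report.

    // Rosenbluth chain growth for a 2D self-avoiding walk (illustrative sketch).
    // Each monomer is added onto a free neighbouring lattice site; the chain's weight
    // is the product of the number of free sites available at each step. Trapped
    // chains (weight zero) are discarded. PERM and configurational-bias Monte Carlo
    // build on this basic scheme.
    #include <iostream>
    #include <random>
    #include <set>
    #include <utility>
    #include <vector>

    int main() {
        const int N       = 20;      // chain length (number of steps)
        const int nChains = 10000;   // number of chains to sample
        std::mt19937 rng(42);

        const int dx[4] = {1, -1, 0, 0};
        const int dy[4] = {0, 0, 1, -1};

        double sumW = 0.0, sumWR2 = 0.0;   // weighted end-to-end distance accumulators
        for (int c = 0; c < nChains; ++c) {
            std::set<std::pair<int,int>> occupied = {{0, 0}};
            int x = 0, y = 0;
            double w = 1.0;

            for (int step = 0; step < N; ++step) {
                std::vector<int> free;
                for (int d = 0; d < 4; ++d)
                    if (!occupied.count({x + dx[d], y + dy[d]})) free.push_back(d);
                if (free.empty()) { w = 0.0; break; }        // trapped: dead chain

                w *= free.size();                            // Rosenbluth weight factor
                int d = free[std::uniform_int_distribution<int>(0, free.size() - 1)(rng)];
                x += dx[d]; y += dy[d];
                occupied.insert({x, y});
            }
            sumW   += w;
            sumWR2 += w * (double(x) * x + double(y) * y);
        }
        std::cout << "<R^2> (N=" << N << ") ~ " << sumWR2 / sumW << "\n";
    }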