
Remove refine option redistribute and ghost_mode in favor of an explicit partitioner #3362

Closed
wants to merge 22 commits

Conversation

@schnellerhase (Contributor) commented Aug 21, 2024

The refine call previously hid the choice of repartitioner used for the refined mesh from the user. It allowed some configuration through redistribute and ghost_mode, which in combination selected a specific partitioning routine.

Now control over which partitioner is used is given to the caller. The previous behavior of redistribute=False is maintained as the default via the new (refinement-specific) partitioning routine maintain_coarse_partitioner.
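For illustration, here is a minimal pure-Python sketch of what "maintain the coarse partition" amounts to. This is a hypothetical model, not the dolfinx implementation: the real routine is C++ and returns a graph::AdjacencyList<std::int32_t> of destination ranks, modeled here as a CSR pair (array, offsets).

```python
def maintain_partition(rank, num_local_cells):
    """Assign every locally owned cell back to the calling rank.

    Returns (array, offsets) in CSR form: the destination ranks of
    cell i are array[offsets[i]:offsets[i + 1]].
    """
    array = [rank] * num_local_cells            # each cell stays on its current rank
    offsets = list(range(num_local_cells + 1))  # exactly one destination per cell
    return array, offsets

array, offsets = maintain_partition(rank=2, num_local_cells=4)
print(array)    # [2, 2, 2, 2]
print(offsets)  # [0, 1, 2, 3, 4]
```

With this as the default partitioner, refine leaves every cell where the coarse mesh put it, matching the old redistribute=False behavior.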

TODO:

  • create_cell_partitioner is not working from Python, so the tests of the refinement routines cannot use it
  • the export code is messy and very repetitive when it comes to handling partitioner callbacks (not restricted to these new changes) -> move it to a general location and introduce shared naming

@garth-wells (Member) commented:
@schnellerhase could you resolve the merge conflicts?

@garth-wells garth-wells added the enhancement New feature or request label Sep 17, 2024
@schnellerhase schnellerhase marked this pull request as ready for review September 17, 2024 20:39
@schnellerhase (Contributor, Author) commented Sep 17, 2024

I guess the TODOs could also be retrofitted in a follow-up PR; they do not strictly break functionality as is.

@garth-wells garth-wells added this to the 0.9.0 milestone Sep 19, 2024
/// @param[in] cells Lists of cells of each cell type.
/// @return Destination ranks for each cell on this process
graph::AdjacencyList<std::int32_t>
maintain_coarse_partitioner(MPI_Comm comm, int nparts,
A reviewer (Member) commented:

I think this name could be improved - maybe 'identity_partitioner'?

@schnellerhase (Contributor, Author) commented Sep 24, 2024

I'm open to new names; I agree it is not nicely named.

I wasn't entirely sure whether this partitioner routine makes any sense outside of the refinement context (which the name kind of hints at). #3358 might be an application of this partitioner, at least as a way of disabling partitioning for now, but I'm not sure that is a reasonable use case.

A reviewer (Member) commented:

I don't think we need it - we can pass nullptr as the partitioning function, in which case mesh::create_mesh doesn't do any redistribution. Or am I missing something?

@jorgensd (Member) commented:

@schnellerhase Imagine you have a ghosted mesh. Then maintain_coarse_partitioner does not do the right thing, as it removes any ghost cells. See for instance https://github.com/jorgensd/adios4dolfinx/blob/main/src/adios4dolfinx/checkpointing.py#L656-L674 and https://github.com/jorgensd/adios4dolfinx/blob/main/src/adios4dolfinx/checkpointing.py#L485-L497 for how to maintain a given partition.

Secondly, could you elaborate on why create_cell_partitioner doesn't work from Python? As I use it in: https://jsdokken.com/dolfinx_docs/meshes.html#mesh-partitioning

> I guess the TODOs could also be retrofitted in a follow-up PR; they do not strictly break functionality as is.

Could you elaborate on why it doesn't break functionality? Previously, tests with redistribute=True passed, while now all tests retain the initial (unghosted) partitioning.

@schnellerhase (Contributor, Author) commented:

I've had a thought about it and you are completely right, @jorgensd: maintain_coarse_partitioner does not work correctly for the ghosted parts.

@schnellerhase (Contributor, Author) commented:

What we want to do is recreate a mesh with the same ghost mode, but with no redistribution of the mesh nodes; those are already on the correct processes by the time we call create_mesh (refine produces the refinement only on the owned parts of the mesh).

@schnellerhase (Contributor, Author) commented:

That means we need a partitioner that performs compute_destination_ranks only if ghosting is enabled, and is otherwise just the identity, right?
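A minimal pure-Python sketch of that idea (hypothetical, not the dolfinx API): ghost_dests stands in for whatever compute_destination_ranks would produce, mapping a local cell to the extra ranks that should receive it as a ghost. The result is again a CSR-style (array, offsets) pair of destination ranks.

```python
def ghost_aware_partition(rank, num_local_cells, ghost_dests, ghosting):
    """Identity partition, plus ghost destinations when ghosting is enabled."""
    array, offsets = [], [0]
    for cell in range(num_local_cells):
        dests = [rank]                          # the owner always keeps the cell
        if ghosting:
            dests += ghost_dests.get(cell, [])  # plus the ranks ghosting this cell
        array += dests
        offsets.append(len(array))
    return array, offsets

# With ghosting disabled this reduces to the identity partition:
print(ghost_aware_partition(1, 2, {0: [0]}, ghosting=False))  # ([1, 1], [0, 1, 2])
# With ghosting enabled, cell 0 is also sent to rank 0 as a ghost:
print(ghost_aware_partition(1, 2, {0: [0]}, ghosting=True))   # ([1, 0, 1], [0, 2, 3])
```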

@schnellerhase (Contributor, Author) commented:

Regarding the problems with the Python interface to create_cell_partitioner, the following fails for me (it should be a shortened version of the tutorial code):

from mpi4py import MPI

import numpy as np

import dolfinx
import ufl
from basix.ufl import element
from dolfinx.graph import partitioner_scotch

quadrilaterals = np.array([], dtype=np.int64)
quad_points = np.array([[0, 0], [0.3, 0]], dtype=np.float64)

ufl_quad = ufl.Mesh(element("Lagrange", "quadrilateral", 1, shape=(2,)))


partitioner = dolfinx.mesh.create_cell_partitioner(partitioner_scotch())
quad_mesh = dolfinx.mesh.create_mesh(
    MPI.COMM_WORLD, quadrilaterals, quad_points, ufl_quad, partitioner=partitioner
)

fails with

TypeError: create_mesh(): incompatible function arguments. The following argument types are supported:
    1. create_mesh(arg0: MPICommWrapper, arg1: collections.abc.Sequence[ndarray[dtype=int64, writable=False, shape=(*), order='C']], arg2: collections.abc.Sequence[dolfinx.cpp.fem.CoordinateElement_float32], arg3: ndarray[dtype=float32, writable=False, order='C'], arg4: collections.abc.Callable[[MPICommWrapper, int, collections.abc.Sequence[dolfinx.cpp.mesh.CellType], collections.abc.Sequence[numpy.ndarray[dtype=int64, writable=False, ]]], dolfinx.cpp.graph.AdjacencyList_int32], /) -> dolfinx.cpp.mesh.Mesh_float32
    2. create_mesh(comm: MPICommWrapper, cells: ndarray[dtype=int64, writable=False, shape=(*, *), order='C'], element: dolfinx.cpp.fem.CoordinateElement_float32, x: ndarray[dtype=float32, writable=False, order='C'], partitioner: collections.abc.Callable[[MPICommWrapper, int, collections.abc.Sequence[dolfinx.cpp.mesh.CellType], collections.abc.Sequence[numpy.ndarray[dtype=int64, writable=False, ]]], dolfinx.cpp.graph.AdjacencyList_int32] | None) -> dolfinx.cpp.mesh.Mesh_float32
    3. create_mesh(arg0: MPICommWrapper, arg1: collections.abc.Sequence[ndarray[dtype=int64, writable=False, shape=(*), order='C']], arg2: collections.abc.Sequence[dolfinx.cpp.fem.CoordinateElement_float64], arg3: ndarray[dtype=float64, writable=False, order='C'], arg4: collections.abc.Callable[[MPICommWrapper, int, collections.abc.Sequence[dolfinx.cpp.mesh.CellType], collections.abc.Sequence[numpy.ndarray[dtype=int64, writable=False, ]]], dolfinx.cpp.graph.AdjacencyList_int32], /) -> dolfinx.cpp.mesh.Mesh_float64
    4. create_mesh(comm: MPICommWrapper, cells: ndarray[dtype=int64, writable=False, shape=(*, *), order='C'], element: dolfinx.cpp.fem.CoordinateElement_float64, x: ndarray[dtype=float64, writable=False, order='C'], partitioner: collections.abc.Callable[[MPICommWrapper, int, collections.abc.Sequence[dolfinx.cpp.mesh.CellType], collections.abc.Sequence[numpy.ndarray[dtype=int64, writable=False, ]]], dolfinx.cpp.graph.AdjacencyList_int32] | None) -> dolfinx.cpp.mesh.Mesh_float64

Invoked with types: Intracomm, ndarray, dolfinx.cpp.fem.CoordinateElement_float64, ndarray, nanobind.nb_func
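For reference, overloads 2 and 4 in the error above expect the partitioner argument to be a callable of four arguments (communicator, number of parts, cell types, cell connectivity arrays) returning an adjacency list of destination ranks. A hypothetical pure-Python stand-in with that shape (all names illustrative; the return value here is a plain nested list, not the dolfinx AdjacencyList_int32):

```python
def my_partitioner(comm, nparts, cell_types, cells):
    # comm: MPI communicator (unused here), nparts: number of parts,
    # cell_types: one entry per cell type, cells: one connectivity
    # array per cell type.  Send every cell to rank 0 for illustration.
    num_cells = sum(len(c) for c in cells)
    return [[0] for _ in range(num_cells)]

# One cell type, one quadrilateral cell -> one destination list:
print(my_partitioner(None, 4, ["quadrilateral"], [[[0, 1, 2, 3]]]))  # [[0]]
```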

@garth-wells (Member) commented:

Superseded by #3444
