Remove refine options `redistribute` and `ghost_mode` in favor of an explicit partitioner #3362
Conversation
@schnellerhase could you resolve the merge conflicts?
Force-pushed (…licit partitioner) from d74f3b4 to e1ed42a
I guess the TODOs could also be retrofitted in a follow-up PR; they do not strictly break functionality as is.
```cpp
/// @param[in] cells Lists of cells of each cell type.
/// @return Destination ranks for each cell on this process
graph::AdjacencyList<std::int32_t>
maintain_coarse_partitioner(MPI_Comm comm, int nparts,
```
I think this name could be improved - maybe 'identity_partitioner'?
Open to new names; I agree it is not nicely named.
I wasn't entirely sure whether this partitioner routine makes any sense outside of the refinement context (which the name kind of hints at). #3358 might be an application of this partitioner, at least as a way of disabling it for now. But I'm not sure if this is a reasonable use case.
I don't think we need it - we can pass `nullptr` as the partitioning function, in which case `mesh::create_mesh` doesn't do any redistribution. Or am I missing something?
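For what it's worth, a minimal sketch of that suggestion from the Python side, reusing the `create_mesh` call pattern from the example further down in this thread. The mesh data here is made up, and whether the Python wrapper forwards `None` straight through to the C++ `nullptr` case (rather than substituting a default partitioner) is an assumption:

```python
from mpi4py import MPI
import numpy as np
import ufl
from basix.ufl import element
import dolfinx

comm = MPI.COMM_WORLD

# Made-up local data: two quadrilaterals on rank 0, nothing elsewhere.
if comm.rank == 0:
    cells = np.array([[0, 1, 3, 4], [1, 2, 4, 5]], dtype=np.int64)
    x = np.array(
        [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]],
        dtype=np.float64,
    )
else:
    cells = np.empty((0, 4), dtype=np.int64)
    x = np.empty((0, 2), dtype=np.float64)

domain = ufl.Mesh(element("Lagrange", "quadrilateral", 1, shape=(2,)))

# partitioner=None is intended here to map to the C++ nullptr case: no
# redistribution, every rank keeps the cells it passed in (assumption
# about how the wrapper handles None).
msh = dolfinx.mesh.create_mesh(comm, cells, x, domain, partitioner=None)
```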
@schnellerhase Imagine you have a ghosted mesh. Then the … Secondly, could you elaborate on why it doesn't break functionality? Previously, tests with …
Had a thought about it and you are completely right @jorgensd, the …

What we want to do is recreate a mesh with the same ghost mode type, but with no redistribution of the mesh nodes; those are already on the correct processes once we call …

That means we need a partitioner that only performs the …
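To make the idea concrete, here is a rough sketch (not the actual `maintain_coarse_partitioner` implementation) of a partitioner callback that simply keeps every cell on the calling rank. The callback signature is taken from the `create_mesh` traceback further down (`(comm, nparts, cell_types, cells) -> AdjacencyList_int32`); the flattened-connectivity layout, the single quadrilateral cell type, receiving an mpi4py communicator, and the `dolfinx.graph.adjacencylist` helper are assumptions:

```python
import numpy as np
import dolfinx


def keep_local_partitioner(comm, nparts, cell_types, cells_per_type):
    """Assign every locally supplied cell to the calling rank (no redistribution).

    Sketch only: the real refinement partitioner additionally has to produce
    the extra destination ranks required for the requested ghost mode.
    """
    # Assumption: one cell type (quadrilateral, 4 vertices per cell) and a
    # flattened connectivity array per cell type.
    num_local_cells = len(cells_per_type[0]) // 4
    # One destination rank per cell: the rank that already owns it.
    dests = np.full(num_local_cells, comm.rank, dtype=np.int32)
    offsets = np.arange(num_local_cells + 1, dtype=np.int32)
    return dolfinx.graph.adjacencylist(dests, offsets)
```

A callable like this could then be handed to `create_mesh` as the partitioner; the question above is essentially whether the additional ghost-mode handling deserves its own named routine.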
Regarding the problems with the …, this example

```python
from mpi4py import MPI
import numpy as np

import dolfinx
import ufl
from basix.ufl import element
from dolfinx.graph import partitioner_scotch

quadrilaterals = np.array([], dtype=np.int64)
quad_points = np.array([[0, 0], [0.3, 0]], dtype=np.float64)
ufl_quad = ufl.Mesh(element("Lagrange", "quadrilateral", 1, shape=(2,)))
partitioner = dolfinx.mesh.create_cell_partitioner(partitioner_scotch())
quad_mesh = dolfinx.mesh.create_mesh(
    MPI.COMM_WORLD, quadrilaterals, quad_points, ufl_quad, partitioner=partitioner
)
```

fails with

```
TypeError: create_mesh(): incompatible function arguments. The following argument types are supported:
    1. create_mesh(arg0: MPICommWrapper, arg1: collections.abc.Sequence[ndarray[dtype=int64, writable=False, shape=(*), order='C']], arg2: collections.abc.Sequence[dolfinx.cpp.fem.CoordinateElement_float32], arg3: ndarray[dtype=float32, writable=False, order='C'], arg4: collections.abc.Callable[[MPICommWrapper, int, collections.abc.Sequence[dolfinx.cpp.mesh.CellType], collections.abc.Sequence[numpy.ndarray[dtype=int64, writable=False, ]]], dolfinx.cpp.graph.AdjacencyList_int32], /) -> dolfinx.cpp.mesh.Mesh_float32
    2. create_mesh(comm: MPICommWrapper, cells: ndarray[dtype=int64, writable=False, shape=(*, *), order='C'], element: dolfinx.cpp.fem.CoordinateElement_float32, x: ndarray[dtype=float32, writable=False, order='C'], partitioner: collections.abc.Callable[[MPICommWrapper, int, collections.abc.Sequence[dolfinx.cpp.mesh.CellType], collections.abc.Sequence[numpy.ndarray[dtype=int64, writable=False, ]]], dolfinx.cpp.graph.AdjacencyList_int32] | None) -> dolfinx.cpp.mesh.Mesh_float32
    3. create_mesh(arg0: MPICommWrapper, arg1: collections.abc.Sequence[ndarray[dtype=int64, writable=False, shape=(*), order='C']], arg2: collections.abc.Sequence[dolfinx.cpp.fem.CoordinateElement_float64], arg3: ndarray[dtype=float64, writable=False, order='C'], arg4: collections.abc.Callable[[MPICommWrapper, int, collections.abc.Sequence[dolfinx.cpp.mesh.CellType], collections.abc.Sequence[numpy.ndarray[dtype=int64, writable=False, ]]], dolfinx.cpp.graph.AdjacencyList_int32], /) -> dolfinx.cpp.mesh.Mesh_float64
    4. create_mesh(comm: MPICommWrapper, cells: ndarray[dtype=int64, writable=False, shape=(*, *), order='C'], element: dolfinx.cpp.fem.CoordinateElement_float64, x: ndarray[dtype=float64, writable=False, order='C'], partitioner: collections.abc.Callable[[MPICommWrapper, int, collections.abc.Sequence[dolfinx.cpp.mesh.CellType], collections.abc.Sequence[numpy.ndarray[dtype=int64, writable=False, ]]], dolfinx.cpp.graph.AdjacencyList_int32] | None) -> dolfinx.cpp.mesh.Mesh_float64

Invoked with types: Intracomm, ndarray, dolfinx.cpp.fem.CoordinateElement_float64, ndarray, nanobind.nb_func
```
Superseded by #3444
The `refine` call previously hid from the user the control of the repartitioner used for the refined mesh. It allowed some configuration through `redistribute` and `ghost_mode`, which in combination selected a specific partitioning routine. Now the control over which partitioner is used is given to the caller. The previous behavior of `redistribute=False` is maintained as the default with the new (refinement-specific) partitioning routine `maintain_coarse_partitioner`.

TODO:

- `create_cell_partitioner` not working in Python, which prevents its use in the tests of the refinement routines
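For orientation, a hypothetical sketch of the intended call pattern after this change. The exact Python spelling is uncertain (not least because of the `create_cell_partitioner` TODO above), so both `refine` calls below are assumptions about the refactored API rather than working code:

```python
from mpi4py import MPI

import dolfinx
from dolfinx.graph import partitioner_scotch

msh = dolfinx.mesh.create_unit_square(MPI.COMM_WORLD, 8, 8)
msh.topology.create_entities(1)  # refinement needs edges

# Default: like the old redistribute=False, the refinement-specific
# maintain_coarse_partitioner keeps refined cells on the rank of their
# coarse parents (hypothetical spelling).
refined = dolfinx.mesh.refine(msh)

# Explicit choice: like the old redistribute=True, the refined mesh is
# repartitioned, here with SCOTCH (hypothetical spelling, currently
# blocked in Python by the TODO above).
partitioner = dolfinx.mesh.create_cell_partitioner(partitioner_scotch())
refined = dolfinx.mesh.refine(msh, partitioner=partitioner)
```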