
TRANSFER execute_on is only ever called for conservative transfers, when any postprocessor transfer could want such an option #28981

Open · GiudGiud opened this issue on Oct 31, 2024 · 1 comment
Labels: C: Framework · P: normal · T: defect

@GiudGiud (Contributor):

Bug Description

When trying to transfer postprocessors (PPs) during fixed point iterations, there is no execution point at timestep begin that fits nicely for the "number of fixed point iterations". This led me to try EXEC_TRANSFER, which is silently ignored for the postprocessor even though the PP itself is still transferred!

Steps to Reproduce

The following parent input reproduces the problem:

[Mesh]
  type = GeneratedMesh
  dim = 2
  nx = 5
  ny = 5
  parallel_type = replicated
  uniform_refine = 1
[]

[Variables]
  [./u]
  [../]
[]

[AuxVariables]
  [./v]
  [../]
[]

[Kernels]
  [./diff]
    type = CoefDiffusion
    variable = u
    coef = 0.1
  [../]
  [./time]
    type = TimeDerivative
    variable = u
  [../]
  [./force_u]
    type = CoupledForce
    variable = u
    v = v
  [../]
[]

[BCs]
  [./left]
    type = DirichletBC
    variable = u
    boundary = left
    value = 0
  [../]
  [./right]
    type = DirichletBC
    variable = u
    boundary = right
    value = 1
  [../]
[]

[Postprocessors]
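  # counts fixed point iterations; the TRANSFER flag in execute_on below is
  # accepted without complaint but silently never triggers an execution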
  [num_its]
    type = NumFixedPointIterations
    execute_on = 'INITIAL TIMESTEP_BEGIN TIMESTEP_END FINAL MULTIAPP_FIXED_POINT_END MULTIAPP_FIXED_POINT_BEGIN TRANSFER'
#    execute_on = 'initial timestep_end'
  [../]
[]

[Executioner]
  type = Transient
  num_steps = 20
  dt = 0.1
  solve_type = PJFNK
  petsc_options_iname = '-pc_type -pc_hypre_type'
  petsc_options_value = 'hypre boomeramg'
  nl_abs_tol = 1e-14
  fixed_point_max_its = 10
  fixed_point_min_its = 3
  fixed_point_abs_tol = 1e-14
  fixed_point_rel_tol = 1e-6
  accept_on_max_fixed_point_iteration = true
  relaxation_factor = 1
  transformed_variables = 'v'
[]

[Outputs]
  exodus = true
[]

[MultiApps]
  [./sub]
    type = TransientMultiApp
    positions = '0 0 0'
    input_files = sub.i
    clone_parent_mesh = true
#    execute_on = 'TIMESTEP_END'
  [../]
[]

[Transfers]
  [./v_from_sub]
    type = MultiAppNearestNodeTransfer
    from_multi_app = sub
    source_variable = v
    variable = v
  [../]
  [./u_to_sub]
    type = MultiAppNearestNodeTransfer
    to_multi_app = sub
    source_variable = u
    variable = u
  [../]
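  # the postprocessor transfer at issue; MultiAppPostprocessorTransfer requires
  # fp_iteration to be declared as a Receiver in sub.i (see the sketch below)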
  [./iteration]
    type = MultiAppPostprocessorTransfer
    to_postprocessor = fp_iteration
    from_postprocessor = num_its
    to_multi_app = sub
    execute_on = 'INITIAL TIMESTEP_BEGIN TIMESTEP_END FINAL MULTIAPP_FIXED_POINT_END MULTIAPP_FIXED_POINT_BEGIN'
  [../]
[]
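The reproducer references sub.i, which is not attached to the issue. A minimal sketch of what it could look like, consistent with the transfers above (the kernels and the Receiver setup are my assumptions, not the original file):

[Mesh]
  # replaced by the parent mesh, since the parent sets clone_parent_mesh = true
  type = GeneratedMesh
  dim = 2
  nx = 5
  ny = 5
[]

[Variables]
  [./v]
  [../]
[]

[AuxVariables]
  # receives u from the parent through the u_to_sub transfer
  [./u]
  [../]
[]

[Kernels]
  [./diff]
    type = Diffusion
    variable = v
  [../]
  [./time]
    type = TimeDerivative
    variable = v
  [../]
[]

[Postprocessors]
  # target of the 'iteration' transfer; must be a Receiver
  [./fp_iteration]
    type = Receiver
  [../]
[]

[Executioner]
  type = Transient
  num_steps = 20
  dt = 0.1
  solve_type = PJFNK
[]

[Outputs]
  exodus = true
[]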

Impact

- Lagged postprocessor value for the first fixed point iteration
- No result at all if the postprocessor only executes on TRANSFER

[Optional] Diagnostics

No response

@GiudGiud (Author):

This likely hindered our work on overlapping domain coupling without us realizing it. We routinely execute PPs on timestep_begin within fixed point iterations, but those executions happen after the to-multiapp transfers (transfers_to_multiapp) and the multiapp execution; see the sketch below.
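For context, a simplified sketch of the ordering within a single fixed point iteration, as I understand it (exact placement depends on each object's and multiapp's execute_on):

1. Objects on MULTIAPP_FIXED_POINT_BEGIN execute.
2. Transfers to the TIMESTEP_BEGIN multiapps execute.
3. The TIMESTEP_BEGIN multiapps execute.
4. Transfers from the TIMESTEP_BEGIN multiapps execute.
5. Objects on TIMESTEP_BEGIN execute, including the PPs mentioned above.
6. The parent app solves.
7. Steps 2-5 repeat for the TIMESTEP_END multiapps and objects.
8. Objects on MULTIAPP_FIXED_POINT_END execute.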
