
[IR] Implement register_initializer #1941

Merged
merged 2 commits into main from justinchu/register-initializer on Nov 13, 2024

Conversation

@justinchuby (Collaborator) commented Nov 13, 2024

Implement register_initializer(value) on Graph for robustly adding an initializer to the graph. Users can also modify the graph.initializers dictionary directly, but this method performs more comprehensive checks before adding, and calling it is simpler than modifying the dictionary.

The method could also live in the convenience API, but I put it here due to its relevance and for better discoverability.
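
For concreteness, a hypothetical usage sketch comparing the new method with direct dictionary access; the ir.Graph, ir.Value, and ir.Tensor constructor calls below are assumptions about the IR API rather than code from this PR:

import numpy as np
from onnxscript import ir

# Hypothetical construction; the exact Graph/Value/Tensor signatures are
# assumptions and may differ from the released IR.
graph = ir.Graph(inputs=[], outputs=[], nodes=[], name="demo")

weight = ir.Value(name="weight")
weight.const_value = ir.Tensor(np.ones((4, 4), dtype=np.float32))

# Preferred: checks that the value has a name, has const_value set, is not
# produced by a node, and does not conflict with an existing entry.
graph.register_initializer(weight)

# Also possible, but without those checks:
# graph.initializers[weight.name] = weight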

@justinchuby added the topic: api and topic: IR (Intermediate representation) labels on Nov 13, 2024

codecov bot commented Nov 13, 2024

❌ 16 Tests Failed:

Tests completed   Failed   Passed   Skipped
14277             16       14261    1630

3 ❄️ flaky tests:
tests.eager_mode_test.TestEagerModeArguments_0_reference_runtime::test_function_input_and_attribute_by_kwargs_out_of_order

Flake rate in main: 38.99% (Passed 7016 times, Failed 4483 times)

Stack Traces | 0.003s run time
..../test_torch_nightly/lib/python3.12.../reference/ops/_op.py:91: in run
    res = self._run(x, y)
..../test_torch_nightly/lib/python3.12.../reference/ops/_op.py:139: in _run
    res = (convert_from_ml_dtypes(res[0]),)
..../test_torch_nightly/lib/python3.12.../onnx/reference/custom_element_types.py:50: in convert_from_ml_dtypes
    return array.view(dtype=dtype)
E   ValueError: Changing the dtype of a 0d array is only supported if the itemsize is unchanged

The above exception was the direct cause of the following exception:
tests/eager_mode_test.py:115: in test_function_input_and_attribute_by_kwargs_out_of_order
    self.assertEqual(add_with_alpha(alpha=3.0, other=2.0, this=1.0), 7.0)
onnxscript/values.py:576: in __call__
    return evaluator.default().eval_function(self, args, kwargs)
onnxscript/evaluator.py:307: in eval_function
    result = function.function(*adapted_args, **adapted_kwargs)
tests/eager_mode_test.py:59: in add_with_alpha
    other = op.Mul(other, alpha)
.../onnx_opset/_impl/opset14.py:696: in Mul
    return op(*self._prepare_inputs(schema, A, B))
onnxscript/values.py:304: in __call__
    return evaluator.default().eval(schema, args, kwargs)
onnxscript/evaluator.py:194: in eval
    outputs = self._eval(schema, inputs, attributes, closure)
onnxscript/evaluator.py:524: in _eval
    result = session.run(None, session_run_input)
..../test_torch_nightly/lib/python3.12.../onnx/reference/reference_evaluator.py:599: in run
    outputs = node.run(*inputs, **linked_attributes)
..../test_torch_nightly/lib/python3.12.../reference/ops/_op.py:114: in run
    res = OpRunBinary.run(self, x, y)
..../test_torch_nightly/lib/python3.12.../reference/ops/_op.py:93: in run
    raise TypeError(
E   TypeError: Issues with types <class 'numpy.ndarray'>, <class 'numpy.ndarray'> (binary operator 'Mul').
tests.eager_mode_test.TestEagerModeArguments_0_reference_runtime::test_function_all_input_by_kwargs

Flake rate in main: 38.99% (Passed 7016 times, Failed 4483 times)

Stack Traces | 0.003s run time
..../test_torch_nightly/lib/python3.12.../reference/ops/_op.py:91: in run
    res = self._run(x, y)
..../test_torch_nightly/lib/python3.12.../reference/ops/_op.py:139: in _run
    res = (convert_from_ml_dtypes(res[0]),)
..../test_torch_nightly/lib/python3.12.../onnx/reference/custom_element_types.py:50: in convert_from_ml_dtypes
    return array.view(dtype=dtype)
E   ValueError: Changing the dtype of a 0d array is only supported if the itemsize is unchanged

The above exception was the direct cause of the following exception:
tests/eager_mode_test.py:109: in test_function_all_input_by_kwargs
    self.assertEqual(add_with_alpha(this=1.0, other=2.0), 3.0)
onnxscript/values.py:576: in __call__
    return evaluator.default().eval_function(self, args, kwargs)
onnxscript/evaluator.py:307: in eval_function
    result = function.function(*adapted_args, **adapted_kwargs)
tests/eager_mode_test.py:59: in add_with_alpha
    other = op.Mul(other, alpha)
.../onnx_opset/_impl/opset14.py:696: in Mul
    return op(*self._prepare_inputs(schema, A, B))
onnxscript/values.py:304: in __call__
    return evaluator.default().eval(schema, args, kwargs)
onnxscript/evaluator.py:194: in eval
    outputs = self._eval(schema, inputs, attributes, closure)
onnxscript/evaluator.py:524: in _eval
    result = session.run(None, session_run_input)
..../test_torch_nightly/lib/python3.12.../onnx/reference/reference_evaluator.py:599: in run
    outputs = node.run(*inputs, **linked_attributes)
..../test_torch_nightly/lib/python3.12.../reference/ops/_op.py:114: in run
    res = OpRunBinary.run(self, x, y)
..../test_torch_nightly/lib/python3.12.../reference/ops/_op.py:93: in run
    raise TypeError(
E   TypeError: Issues with types <class 'numpy.ndarray'>, <class 'numpy.ndarray'> (binary operator 'Mul').
tests.eager_mode_test.TestEagerModeArguments_0_reference_runtime::test_function_attribute_by_positional_args

Flake rate in main: 38.99% (Passed 7016 times, Failed 4483 times)

Stack Traces | 0.003s run time
..../test_torch_nightly/lib/python3.12.../reference/ops/_op.py:91: in run
    res = self._run(x, y)
..../test_torch_nightly/lib/python3.12.../reference/ops/_op.py:139: in _run
    res = (convert_from_ml_dtypes(res[0]),)
..../test_torch_nightly/lib/python3.12.../onnx/reference/custom_element_types.py:50: in convert_from_ml_dtypes
    return array.view(dtype=dtype)
E   ValueError: Changing the dtype of a 0d array is only supported if the itemsize is unchanged

The above exception was the direct cause of the following exception:
tests/eager_mode_test.py:112: in test_function_attribute_by_positional_args
    self.assertEqual(add_with_alpha(1.0, 2.0, 3.0), 7.0)
onnxscript/values.py:576: in __call__
    return evaluator.default().eval_function(self, args, kwargs)
onnxscript/evaluator.py:307: in eval_function
    result = function.function(*adapted_args, **adapted_kwargs)
tests/eager_mode_test.py:59: in add_with_alpha
    other = op.Mul(other, alpha)
.../onnx_opset/_impl/opset14.py:696: in Mul
    return op(*self._prepare_inputs(schema, A, B))
onnxscript/values.py:304: in __call__
    return evaluator.default().eval(schema, args, kwargs)
onnxscript/evaluator.py:194: in eval
    outputs = self._eval(schema, inputs, attributes, closure)
onnxscript/evaluator.py:524: in _eval
    result = session.run(None, session_run_input)
..../test_torch_nightly/lib/python3.12.../onnx/reference/reference_evaluator.py:599: in run
    outputs = node.run(*inputs, **linked_attributes)
..../test_torch_nightly/lib/python3.12.../reference/ops/_op.py:114: in run
    res = OpRunBinary.run(self, x, y)
..../test_torch_nightly/lib/python3.12.../reference/ops/_op.py:93: in run
    raise TypeError(
E   TypeError: Issues with types <class 'numpy.ndarray'>, <class 'numpy.ndarray'> (binary operator 'Mul').


Review comments

Excerpt from register_initializer in the diff:

            ValueError: If the value does not have its ``.const_value`` set.
        """
        if value.name in self._initializers:
            if self._initializers[value.name] is not value:
A collaborator commented:

  • Not necessarily in this PR, but it would be nice to have a method to reuse initializers, especially for common duplicated constants like 0 and 1, or even shapes like (batch_size, seq_len, hidden_size).
  • On a related note, for very small tensors we could replace the `is` comparison above with a value-equality check. That may not matter much if we add a separate utility to reuse initializers as above, but if not, this could serve approximately the same purpose.

@justinchuby (Author) replied Nov 13, 2024:

I think this can be a graph pass too (dedup initializers)
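
As an illustration of what such a dedup pass could look like, here is a minimal sketch; it assumes graph.initializers is a name-to-Value dict and that Value.const_value exposes .numpy(), while the replace_all_uses hook and the 1 KB size cutoff are hypothetical, not part of the actual IR API:

def deduplicate_initializers(graph, replace_all_uses):
    """Sketch of a dedup pass for small constant initializers (not the real IR API).

    replace_all_uses(old_value, new_value) is a caller-supplied hook that rewires
    consumers; graph.initializers and Value.const_value follow the PR description.
    """
    seen = {}  # (dtype, shape, raw bytes) -> surviving Value
    for name, value in list(graph.initializers.items()):
        tensor = value.const_value
        if tensor is None:
            continue
        array = tensor.numpy()
        if array.nbytes > 1024:  # only bother with small constants like 0, 1, shapes
            continue
        key = (str(array.dtype), array.shape, array.tobytes())
        if key in seen:
            replace_all_uses(value, seen[key])  # point consumers at the survivor
            del graph.initializers[name]
        else:
            seen[key] = value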

Another excerpt from register_initializer in the diff:

            )
        if not value.name:
            raise ValueError(f"Initializer must have a name: {value!r}")
        if value.producer() is not None:
A collaborator commented:

I wonder if we should support promotion of a Constant node to an initializer (later on perhaps; could be an option in this method or a separate one).

@justinchuby (Author) replied:

I think that can be a graph pass?
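
A rough sketch of such a promotion pass, with the same caveats: node iteration, node.attributes, graph.remove, and the ir.Value constructor are assumptions about the IR, and a fresh Value is registered because register_initializer rejects values that already have a producer:

from onnxscript import ir  # assumed import path

def promote_constants_to_initializers(graph, replace_all_uses):
    """Sketch of promoting Constant nodes to initializers (not the real IR API)."""
    for node in list(graph):  # assumes iterating a Graph yields its nodes
        if node.op_type != "Constant":
            continue
        attr = node.attributes.get("value")
        if attr is None:
            continue  # value_float, value_ints, etc. omitted for brevity
        old_output = node.outputs[0]
        initializer = ir.Value(name=old_output.name)  # assumed constructor
        initializer.const_value = attr.value  # the tensor carried by the Constant
        graph.register_initializer(initializer)  # the method added in this PR
        replace_all_uses(old_output, initializer)  # hypothetical rewiring hook
        graph.remove(node)  # assumed removal API; the node is now redundant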

@titaiwangms (Contributor) left a comment:


Should we update the exporter to use this method, for robustness?

@justinchuby (Author) replied:

> Should we update the exporter to use this method, for robustness?

Sure, we can do that.
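
For example, the exporter's direct dictionary writes could become calls to the checked method; register_weight and its arguments below are hypothetical names used only for illustration:

def register_weight(graph, weight):
    # Hypothetical helper illustrating the change in the exporter.
    # Before this PR, the exporter would write to the dict directly:
    #     graph.initializers[weight.name] = weight
    # With this PR it can use the checked method instead:
    graph.register_initializer(weight)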

@justinchuby enabled auto-merge (squash) November 13, 2024 19:21
@justinchuby merged commit fa7d13a into main Nov 13, 2024
25 of 41 checks passed
@justinchuby deleted the justinchu/register-initializer branch November 13, 2024 19:21