
Plugins incorrectly loaded during test runs #626

Open · simonw opened this issue Nov 13, 2024 · 2 comments
Labels: bug (Something isn't working), developer-experience

simonw (Owner) commented Nov 13, 2024

I got a failing test which turned out to be caused by llm-ollama being installed:

>           assert not self._requests_not_matched, (
                f"The following requests were not expected:\n"
                f"{requests_description}\n"
                "\n"
                "If this is on purpose, refer to https://github.com/Colin-b/pytest_httpx/blob/master/README.md#allow-to-not-register-responses-for-every-request"
            )
E           AssertionError: The following requests were not expected:
E             - GET request on http://127.0.0.1:11434/api/tags
E             - GET request on http://127.0.0.1:11434/api/tags
E             
E             If this is on purpose, refer to https://github.com/Colin-b/pytest_httpx/blob/master/README.md#allow-to-not-register-responses-for-every-request
E           assert not [<Request('GET', 'http://127.0.0.1:11434/api/tags')>, <Request('GET', 'http://127.0.0.1:11434/api/tags')>]
E            +  where [<Request('GET', 'http://127.0.0.1:11434/api/tags')>, <Request('GET', 'http://127.0.0.1:11434/api/tags')>] = <pytest_httpx._httpx_mock.HTTPXMock object at 0x33a424f40>._requests_not_matched

../../../.local/share/virtualenvs/llm-p4p8CDpq/lib/python3.10/site-packages/pytest_httpx/_httpx_mock.py:329: AssertionError
================================================================================================== short test summary info ===================================================================================================
ERROR tests/test_llm.py::test_llm_prompt_creates_log_database - AssertionError: The following requests were not expected:
simonw added the bug and developer-experience labels Nov 13, 2024

simonw (Owner, Author) commented Nov 13, 2024

That's not supposed to happen because of this code:

llm/plugins.py, lines 15 to 17 in d34eac5:

if not hasattr(sys, "_called_from_test") and LLM_LOAD_PLUGINS is None:
    # Only load plugins if not running tests
    pm.load_setuptools_entrypoints("llm")

And:

tests/conftest.py, lines 11 to 14 in d34eac5:

def pytest_configure(config):
    import sys

    sys._called_from_test = True

But I added some print statements and saw this:

Gonna load those plugins False
Setting that sys._called_from_test = True

So those things are firing in the opposite order from what I expect.

diff --git a/llm/plugins.py b/llm/plugins.py
index 933725c..25d6bc3 100644
--- a/llm/plugins.py
+++ b/llm/plugins.py
@@ -14,6 +14,7 @@ LLM_LOAD_PLUGINS = os.environ.get("LLM_LOAD_PLUGINS", None)
 
 if not hasattr(sys, "_called_from_test") and LLM_LOAD_PLUGINS is None:
     # Only load plugins if not running tests
+    print("Gonna load those plugins", getattr(sys, "_called_from_test", False))
     pm.load_setuptools_entrypoints("llm")
 
 
diff --git a/tests/conftest.py b/tests/conftest.py
index 7eb3dd5..322666b 100644
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -10,6 +10,7 @@ from typing import Optional
 
 def pytest_configure(config):
     import sys
+    print("Setting that sys._called_from_test = True")
 
     sys._called_from_test = True
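
That ordering makes sense once you notice that pytest imports every conftest.py before it calls pytest_configure(), so a top-level import of llm in conftest.py (or anything it imports) executes the module-level check in llm/plugins.py before the flag is set. A minimal two-file sketch of the same trap, with made-up names:

# mypackage/plugins.py -- module-level code runs the moment this is imported
import sys

if not hasattr(sys, "_called_from_test"):
    print("loading plugins")  # fires at import time, test run or not

# tests/conftest.py
import sys

import mypackage.plugins  # this top-level import triggers the check above immediately


def pytest_configure(config):
    # pytest calls this only after every conftest.py has been imported,
    # so setting the flag here is already too late.
    sys._called_from_test = True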

simonw (Owner, Author) commented Nov 13, 2024

This seems to fix it, by only loading plugins when a CLI command is invoked:

diff --git a/llm/cli.py b/llm/cli.py
index bb9075e..eb585f0 100644
--- a/llm/cli.py
+++ b/llm/cli.py
@@ -137,6 +137,9 @@ def cli():
 
         llm 'Five outrageous names for a pet pelican'
     """
+    from .plugins import load_plugins
+
+    load_plugins()
 
 
 @cli.command(name="prompt")
diff --git a/llm/plugins.py b/llm/plugins.py
index 933725c..5c00b9e 100644
--- a/llm/plugins.py
+++ b/llm/plugins.py
@@ -12,27 +12,36 @@ pm.add_hookspecs(hookspecs)
 
 LLM_LOAD_PLUGINS = os.environ.get("LLM_LOAD_PLUGINS", None)
 
-if not hasattr(sys, "_called_from_test") and LLM_LOAD_PLUGINS is None:
-    # Only load plugins if not running tests
-    pm.load_setuptools_entrypoints("llm")
-
-
-# Load any plugins specified in LLM_LOAD_PLUGINS")
-if LLM_LOAD_PLUGINS is not None:
-    for package_name in [name for name in LLM_LOAD_PLUGINS.split(",") if name.strip()]:
-        try:
-            distribution = metadata.distribution(package_name)  # Updated call
-            llm_entry_points = [
-                ep for ep in distribution.entry_points if ep.group == "llm"
-            ]
-            for entry_point in llm_entry_points:
-                mod = entry_point.load()
-                pm.register(mod, name=entry_point.name)
-                # Ensure name can be found in plugin_to_distinfo later:
-                pm._plugin_distinfo.append((mod, distribution))  # type: ignore
-        except metadata.PackageNotFoundError:
-            sys.stderr.write(f"Plugin {package_name} could not be found\n")
-
-for plugin in DEFAULT_PLUGINS:
-    mod = importlib.import_module(plugin)
-    pm.register(mod, plugin)
+_loaded = False
+
+
+def load_plugins():
+    global _loaded
+    if _loaded:
+        return
+    _loaded = True
+    if not hasattr(sys, "_called_from_test") and LLM_LOAD_PLUGINS is None:
+        # Only load plugins if not running tests
+        pm.load_setuptools_entrypoints("llm")
+
+    # Load any plugins specified in LLM_LOAD_PLUGINS")
+    if LLM_LOAD_PLUGINS is not None:
+        for package_name in [
+            name for name in LLM_LOAD_PLUGINS.split(",") if name.strip()
+        ]:
+            try:
+                distribution = metadata.distribution(package_name)  # Updated call
+                llm_entry_points = [
+                    ep for ep in distribution.entry_points if ep.group == "llm"
+                ]
+                for entry_point in llm_entry_points:
+                    mod = entry_point.load()
+                    pm.register(mod, name=entry_point.name)
+                    # Ensure name can be found in plugin_to_distinfo later:
+                    pm._plugin_distinfo.append((mod, distribution))  # type: ignore
+            except metadata.PackageNotFoundError:
+                sys.stderr.write(f"Plugin {package_name} could not be found\n")
+
+    for plugin in DEFAULT_PLUGINS:
+        mod = importlib.import_module(plugin)
+        pm.register(mod, plugin)
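
This works for every subcommand because Click invokes a group's callback before the chosen subcommand's function, so the load_plugins() call in cli() runs on any CLI invocation. A minimal sketch of that Click behavior, with made-up names:

import click


@click.group()
def cli():
    print("group callback runs first: load plugins here")


@cli.command()
def prompt():
    print("subcommand runs second: plugins are already loaded")


if __name__ == "__main__":
    cli()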

But... won't that break the Python library use-case, which also relies on plugins being loaded so that things like llm.get_model("claude-3.5-sonnet") work?

Maybe the llm.get_model() and llm.get_embedding_model() functions should also trigger load_plugins()?
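
One way to do that: have each public lookup function call load_plugins() before consulting the registry; the _loaded guard makes repeat calls essentially free. A self-contained sketch of the pattern (made-up names, not the shipped code):

_loaded = False
_registry = {}


def load_plugins():
    # Idempotent: only the first call does any work.
    global _loaded
    if _loaded:
        return
    _loaded = True
    _registry["demo-model"] = object()  # stand-in for entry-point loading


def get_model(name):
    load_plugins()  # cheap after the first call, so safe on every lookup
    return _registry[name]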

simonw added a commit that referenced this issue Nov 13, 2024
simonw added a commit that referenced this issue Nov 14, 2024: …els (#613)

- #507 (comment)

* register_model is now async aware

Refs #507 (comment)

* Refactor Chat and AsyncChat to use _Shared base class

Refs #507 (comment)

* fixed function name

* Fix for infinite loop

* Applied Black

* Ran cog

* Applied Black

* Add Response.from_row() classmethod back again

It does not matter that this is a blocking call, since it is a classmethod

* Made mypy happy with llm/models.py

* mypy fixes for openai_models.py

I am unhappy with this, had to duplicate some code.

* First test for AsyncModel

* Still have not quite got this working

* Fix for not loading plugins during tests, refs #626

* audio/wav not audio/wave, refs #603

* Black and mypy and ruff all happy

* Refactor to avoid generics

* Removed obsolete response() method

* Support text = await async_mock_model.prompt("hello")

* Initial docs for llm.get_async_model() and await model.prompt()

Refs #507

* Initial async model plugin creation docs

* duration_ms ANY to pass test

* llm models --async option

Refs #613 (comment)

* Removed obsolete TypeVars

* Expanded register_models() docs for async

* await model.prompt() now returns AsyncResponse

Refs #613 (comment)

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
simonw added a commit that referenced this issue Nov 14, 2024