
Report errors in emitter #738

Draft: wants to merge 1 commit into master

Conversation

@earonesty commented Dec 17, 2020

Emitters can raise exceptions, for example if a handle becomes invalidated. Rather than handle it on a per-emitter basis, a catch-all can emit the underlying exception.

The caller can then choose to tear down an observer, or ignore the event as needed.
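The proposed catch-all could look roughly like the following; a minimal, self-contained sketch of the pattern, not watchdog's actual code (ObserverErrorEvent, run_once, and the paths are illustrative names invented here):

```python
# Sketch: an emitter whose run loop converts any exception raised by
# queue_events() into an error event on the queue, so the consumer can
# decide to tear down the observer or ignore the failure.
import queue


class ObserverErrorEvent:
    """Hypothetical error event carrying the underlying exception."""

    def __init__(self, src_path, exception):
        self.src_path = src_path
        self.exception = exception


class Emitter:
    def __init__(self, event_queue, watch_path):
        self.event_queue = event_queue
        self.watch_path = watch_path

    def queue_events(self, timeout):
        # Stand-in for platform-specific polling; here it always fails,
        # e.g. because a watched handle became invalidated.
        raise OSError("handle invalidated")

    def run_once(self, timeout=1.0):
        try:
            self.queue_events(timeout)
        except Exception as ex:
            # Catch-all: surface the failure as an event instead of
            # silently killing the emitter thread.
            self.event_queue.put(ObserverErrorEvent(self.watch_path, ex))


q = queue.Queue()
Emitter(q, "/tmp/watched").run_once()
event = q.get_nowait()
print(type(event.exception).__name__)  # OSError
```

The consumer then sees the error as an ordinary queued event rather than a dead thread.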

DRAFT: Does this look reasonable? Should I keep going (tests, use cases, docs).

@CCP-Aporia (Contributor) commented Dec 17, 2020

Thanks for investigating this. IMHO having an error event can be useful for consumers to notice that the observer or emitter they are relying upon may need to be re-created.

Unfortunately, not all emitters use the same run() implementation, and some even need to go a bit further in order to catch exceptions (e.g. the callback method specified in the FSEventsEmitter). Still, this is a good start and shows other implementors that it is important to catch exceptions in run(). :-)
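The point about callback-based emitters can be illustrated with a small sketch (the class and names below are invented for illustration, not FSEventsEmitter's real API): when events arrive via a callback invoked by a native layer, an exception raised there never propagates to run(), so the try/except has to live inside the callback itself.

```python
# Sketch: a callback-driven emitter must catch exceptions inside the
# callback, because the native event loop that invokes it would
# otherwise swallow the traceback without run() ever seeing it.
import queue


class CallbackEmitter:
    def __init__(self, event_queue, watch_path):
        self.event_queue = event_queue
        self.watch_path = watch_path

    def _callback(self, raw_event):
        # Invoked by the (simulated) native layer, not by run().
        try:
            self._process(raw_event)
        except Exception as ex:
            # Convert the failure into a (hypothetical) error tuple on
            # the queue so the consumer still learns about it.
            self.event_queue.put(("error", self.watch_path, ex))

    def _process(self, raw_event):
        raise ValueError("bad event payload: %r" % (raw_event,))


q = queue.Queue()
CallbackEmitter(q, "/Volumes/share")._callback(b"\x00")
kind, path, exc = q.get_nowait()
print(kind, type(exc).__name__)  # error ValueError
```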

@BoboTiG (Collaborator) left a comment

Thanks @tommorris!

I like the event idea, I think you can move forward (and sorry for the delay).

It is an important change, so if you can add tests, some documentation, and a line + your GitHub nickname in the changelog, that would be awesome!

try:
    self.queue_events(self.timeout)
except Exception as ex:
    self.queue_event(FileObserverErrorEvent(self._watch.path, ex))
Suggested change

    - self.queue_event(FileObserverErrorEvent(self._watch.path, ex))
    + self.queue_event(FileObserverErrorEvent(self._watch.path, ex))
    + # Prevent circular reference
    + ex = None
    + del ex

Comment on lines +252 to +253
    self._exception = exception
    super(FileObserverErrorEvent, self).__init__(src_path)

Suggested change

    - self._exception = exception
    - super(FileObserverErrorEvent, self).__init__(src_path)
    + super().__init__(src_path)
    + self._exception = exception

@tommorris (Contributor) commented

@BoboTiG I'm not sure I'm involved in this PR at all. (I did make a PR for this project back in 2014: #242 but nothing since then.)

@BoboTiG (Collaborator) commented May 15, 2022

@tommorris my bad, sorry for the false alert :)

@BoboTiG (Collaborator) commented May 15, 2022

I like the event idea, I think you can move forward (and sorry for the delay).

It is an important change, so if you can add tests, some documentation, and a line + your GitHub nickname in the changelog, that would be awesome!

@earonesty ⬆️

@pbrackin commented Oct 3, 2022

I am trying to use watchdog to monitor a NAS CIFS share from a Windows VM, running as a Windows service. It works well for a while, but eventually the observer thread dies with no information, as in #663.

This PR seems to offer a way to surface the cause of that thread death, which seems like a good idea. I wonder, how would one use this when subclassing PatternMatchingEventHandler? Do we need to first add the EVENT_TYPE_ERROR to the FileSystemEventHandler dispatch so that we can trap the event (e.g. in an on_error method) and dump its repr?
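The wiring being asked about might look like the sketch below. It mirrors watchdog's on_<event_type> dispatch convention, but note that EVENT_TYPE_ERROR and FileObserverErrorEvent come from this draft PR, and the ErrorAwareHandler class and on_error hook are hypothetical names invented here, not part of watchdog:

```python
# Sketch: teach a handler's dispatch() about a hypothetical error event
# type so subclasses can implement on_error() and inspect the exception.
EVENT_TYPE_ERROR = "error"


class FileObserverErrorEvent:
    event_type = EVENT_TYPE_ERROR

    def __init__(self, src_path, exception):
        self.src_path = src_path
        self.exception = exception


class ErrorAwareHandler:
    def dispatch(self, event):
        # watchdog-style dispatch: route the event to on_<event_type>.
        getattr(self, "on_" + event.event_type)(event)

    def on_error(self, event):
        # Dump the repr of the underlying exception, per the question.
        print("observer error at %s: %r" % (event.src_path, event.exception))


handler = ErrorAwareHandler()
handler.dispatch(FileObserverErrorEvent("//nas/share", OSError("stale handle")))
```

A subclass would override on_error to decide whether to recreate the observer or just log and continue.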
