This is great! I've been looking for something like this for quite a while. I have three suggestions:

1. Output results in YOLO txt format, similar to RectLabel: https://rectlabel.com
2. Allow inference from a live video feed: https://developer.apple.com/documentation/vision/recognizing_objects_in_live_capture
   a. a webcam, for testing purposes
   b. a video capture card, e.g., BMD DeckLink and UltraStudio: https://www.blackmagicdesign.com/products
3. Allow the user to select how often inference runs on video; e.g., instead of running on every frame, the user could select 1 frame per second.
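For reference on suggestion 1, the YOLO txt format stores one line per box, `class x_center y_center width height`, with all coordinates normalized to the image dimensions. A minimal sketch of a converter, assuming pixel-space `(x_min, y_min, x_max, y_max)` boxes as input (function and parameter names are hypothetical):

```python
def to_yolo_line(class_id, box, img_w, img_h):
    """Convert a pixel-space box (x_min, y_min, x_max, y_max)
    to a YOLO txt line: class x_center y_center width height,
    all coordinates normalized to [0, 1]."""
    x_min, y_min, x_max, y_max = box
    x_c = (x_min + x_max) / 2 / img_w
    y_c = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# Example: a 100x50 box centered in a 640x480 image
print(to_yolo_line(0, (270, 215, 370, 265), 640, 480))
# → 0 0.500000 0.500000 0.156250 0.104167
```

One such `.txt` file per image, named after the image, is what tools like RectLabel read and write.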
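Suggestion 3 could be as simple as skipping frames against a target interval, rather than tying inference to every decoded frame. A minimal sketch of the idea, independent of the app's actual video pipeline (class and method names are hypothetical):

```python
class FrameThrottler:
    """Decide whether to run inference on a frame, given a target
    rate in inferences per second. Timestamps are in seconds."""

    def __init__(self, inferences_per_second):
        self.interval = 1.0 / inferences_per_second
        self.next_time = 0.0  # earliest timestamp eligible for inference

    def should_infer(self, timestamp):
        if timestamp >= self.next_time:
            self.next_time = timestamp + self.interval
            return True
        return False

# At 30 fps video with 1 inference/second, only ~1 in 30 frames runs:
throttler = FrameThrottler(1.0)
ran = [f for f in range(90) if throttler.should_infer(f / 30)]
print(ran)  # → [0, 30, 60]
```

Keying off timestamps rather than frame counts keeps the behavior correct for variable-frame-rate sources.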
Hi, thanks for the suggestions.
I'll try to improve the app on the weekends with these in mind.