Sorry but I have to say it: WOW! #258

Open
Zibri opened this issue May 12, 2024 · 2 comments

Comments

@Zibri

Zibri commented May 12, 2024

I am quite new to the AI world, and I spent a week testing a lot of different models under very harsh conditions (CPU only and 16 GB of RAM).
This is by far the best model of them all.
Those who know me will be surprised, because I rarely praise anything this way.
I wonder how much it would deteriorate if its size were reduced to around 8 GB...
I tested a 16 GB GGUF model (wizardlm-13b-v1.2.Q4_0.gguf) and the results were amazing.
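For anyone wanting to reproduce a CPU-only setup like this, here is a minimal sketch assuming the llama-cpp-python bindings; the model path, context size, thread count, and prompt are placeholders, not details taken from the comment above:

```python
# Minimal sketch: loading a quantized GGUF for CPU-only inference
# with llama-cpp-python. Settings below are illustrative placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="wizardlm-13b-v1.2.Q4_0.gguf",  # the file mentioned above
    n_ctx=2048,    # context window; adjust to taste and available RAM
    n_threads=8,   # CPU-only inference, so hand it your physical cores
)

out = llm(
    "Explain the difference between a process and a thread.",
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```

To gauge the quality drop near the ~8 GB target, the same script could be pointed at a smaller quantization of the same weights (for example a Q3_K_M or Q2_K file, if one is available) and the outputs compared on identical prompts.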

@0wwafa

0wwafa commented May 23, 2024

Impressive indeed. This and Mistral are my favourites for now.

@Zibri changed the title from "Sorry but I have to say it: WOW?!" to "Sorry but I have to say it: WOW!" on Jun 6, 2024
@nguyenvulong

nguyenvulong commented Sep 24, 2024

Could you please add some benchmark results? That would make us go "wow" too.
Which experiments have you run so far?
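A quick way to get at least a raw throughput number out of a CPU-only run is to time generation directly. A minimal sketch, assuming llama-cpp-python; the model path and prompt are placeholders, not the actual experiments from this thread:

```python
# Rough throughput check on CPU: tokens generated per second.
# Model path and prompt are illustrative placeholders.
import time
from llama_cpp import Llama

llm = Llama(model_path="wizardlm-13b-v1.2.Q4_0.gguf", n_ctx=2048, n_threads=8)

prompt = "Summarize the plot of Hamlet in three sentences."
start = time.perf_counter()
out = llm(prompt, max_tokens=200, temperature=0.0)
elapsed = time.perf_counter() - start

generated = out["usage"]["completion_tokens"]
print(f"{generated} tokens in {elapsed:.1f}s -> {generated / elapsed:.2f} tok/s")
```

Quality-oriented benchmarks (MMLU, MT-Bench, and the like) would need a proper harness such as EleutherAI's lm-evaluation-harness rather than a snippet like this.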
