- LaVi Lab, led by Prof. Liwei Wang @ CSE, CUHK
- Hong Kong
- https://henryhzy.github.io/
- @Zi_Yuan_Hu
Pinned
- Awesome-Multimodal-LLM (Public)
  Research Trends in LLM-guided Multimodal Learning.
- CLEVA (Public, forked from LaVi-Lab/CLEVA)
  [EMNLP 2023 Demo] CLEVA: Chinese Language Models EVAluation Platform
  Shell
- stanford-crfm/helm (Public)
  Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of language models (https://arxiv.org/abs/2211.09110). This framework is also used to evaluate text-to-image …
- Visual-Table (Public, forked from LaVi-Lab/Visual-Table)
  [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models"
  Python
- LaVi-Lab/TG-Vid (Public)
  [EMNLP 2024] Official code for "Enhancing Temporal Modeling of Video LLMs via Time Gating"
  Python · 4 stars