
VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs

If you like our project, please give us a star ⭐ on GitHub for the latest updates.




📰 News

* [2024.10.22] Release checkpoints of VideoLLaMA2.1-7B-AV.
* [2024.10.15] Release checkpoints of VideoLLaMA2.1-7B-16F-Base and VideoLLaMA2.1-7B-16F.
* [2024.08.14] Release checkpoints of VideoLLaMA2-72B-Base and VideoLLaMA2-72B.
* [2024.07.30] Release checkpoints of VideoLLaMA2-8x7B-Base and VideoLLaMA2-8x7B.
* [2024.06.25] 🔥🔥 As of Jun 25, our VideoLLaMA2-7B-16F is the Top-1 ~7B-sized VideoLLM on the MLVU Leaderboard.
* [2024.06.18] 🔥🔥 As of Jun 18, our VideoLLaMA2-7B-16F is the Top-1 ~7B-sized VideoLLM on the VideoMME Leaderboard.
* [2024.06.17] 👋👋 Update technical report with the latest results and the missing references. If you have works closely related to VideoLLaMA 2 but not mentioned in the paper, feel free to let us know.
* [2024.06.14] 🔥🔥 Online Demo is available.
* [2024.06.03] Release training, evaluation, and serving codes of VideoLLaMA 2.

🌎 Model Zoo

Vision-Only Checkpoints

| Model Name | Type | Visual Encoder | Language Decoder | # Training Frames |
|:--------------------------|:----:|:--------------------------|:---------------------------|:-----------------:|
| VideoLLaMA2-7B-Base       | Base | clip-vit-large-patch14-336 | Mistral-7B-Instruct-v0.2   | 8  |
| VideoLLaMA2-7B            | Chat | clip-vit-large-patch14-336 | Mistral-7B-Instruct-v0.2   | 8  |
| VideoLLaMA2-7B-16F-Base   | Base | clip-vit-large-patch14-336 | Mistral-7B-Instruct-v0.2   | 16 |
| VideoLLaMA2-7B-16F        | Chat | clip-vit-large-patch14-336 | Mistral-7B-Instruct-v0.2   | 16 |
| VideoLLaMA2-8x7B-Base     | Base | clip-vit-large-patch14-336 | Mixtral-8x7B-Instruct-v0.1 | 8  |
| VideoLLaMA2-8x7B          | Chat | clip-vit-large-patch14-336 | Mixtral-8x7B-Instruct-v0.1 | 8  |
| VideoLLaMA2-72B-Base      | Base | clip-vit-large-patch14-336 | Qwen2-72B-Instruct         | 8  |
| VideoLLaMA2-72B           | Chat | clip-vit-large-patch14-336 | Qwen2-72B-Instruct         | 8  |
| VideoLLaMA2.1-7B-16F-Base | Base | siglip-so400m-patch14-384  | Qwen2-7B-Instruct          | 16 |
| VideoLLaMA2.1-7B-16F      | Chat | siglip-so400m-patch14-384  | Qwen2-7B-Instruct          | 16 |

Audio-Visual Checkpoints

| Model Name | Type | Audio Encoder | Language Decoder |
|:---------------------------------------|:----:|:------------------------------------|:---------------------|
| VideoLLaMA2.1-7B-AV (this checkpoint)  | Chat | Fine-tuned BEATs_iter3+(AS2M)(cpt2) | VideoLLaMA2.1-7B-16F |
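
As a quick sanity check, the checkpoint can be loaded with the project's own helper; a minimal sketch, assuming the `videollama2` package from the VideoLLaMA 2 GitHub repository is installed (the same `model_init` used in the inference script below):

```python
# Minimal loading sketch, assuming the videollama2 package
# (from the VideoLLaMA 2 GitHub repository) is installed.
from videollama2 import model_init

# Pulls the VideoLLaMA2.1-7B-16F backbone together with the
# fine-tuned BEATs audio tower and the audio projector.
model, processor, tokenizer = model_init('DAMO-NLP-SG/VideoLLaMA2.1-7B-AV')
```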

🚀 Main Results

The benchmark result figures (omitted here; see the paper) cover four groups of results:

* Multi-Choice Video QA & Video Captioning
* Open-Ended Video QA
* Multi-Choice & Open-Ended Audio QA
* Open-Ended Audio-Visual QA


🤖 Inference with VideoLLaMA2-AV

```python
import sys
sys.path.append('./')
import argparse

from videollama2 import model_init, mm_infer
from videollama2.utils import disable_torch_init


def inference(args):
    disable_torch_init()  # skip redundant default weight initialization for faster loading

    model_path = args.model_path
    model, processor, tokenizer = model_init(model_path)

    # Drop the tower that is not needed for the requested modality.
    if args.modal_type == "a":
        model.model.vision_tower = None
    elif args.modal_type == "v":
        model.model.audio_tower = None
    elif args.modal_type == "av":
        pass
    else:
        raise NotImplementedError

    # Three example inputs follow; each overwrites the previous one,
    # so keep only the example you want to run.

    # Audio-visual inference
    audio_video_path = "assets/00003491.mp4"
    preprocess = processor['audio' if args.modal_type == "a" else "video"]
    if args.modal_type == "a":
        audio_video_tensor = preprocess(audio_video_path)
    else:
        # va=True loads the audio track alongside the video frames.
        audio_video_tensor = preprocess(audio_video_path, va=(args.modal_type == "av"))
    question = "Please describe the video with audio information."

    # Audio inference
    audio_video_path = "assets/bird-twitter-car.wav"
    preprocess = processor['audio' if args.modal_type == "a" else "video"]
    if args.modal_type == "a":
        audio_video_tensor = preprocess(audio_video_path)
    else:
        audio_video_tensor = preprocess(audio_video_path, va=(args.modal_type == "av"))
    question = "Please describe the audio."

    # Video inference
    audio_video_path = "assets/output_v_1jgsRbGzCls.mp4"
    preprocess = processor['audio' if args.modal_type == "a" else "video"]
    if args.modal_type == "a":
        audio_video_tensor = preprocess(audio_video_path)
    else:
        audio_video_tensor = preprocess(audio_video_path, va=(args.modal_type == "av"))
    question = "What activity are the people practicing in the video?"

    output = mm_infer(
        audio_video_tensor,
        question,
        model=model,
        tokenizer=tokenizer,
        modal='audio' if args.modal_type == "a" else "video",
        do_sample=False,
    )

    print(output)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('--model-path', required=False, default='DAMO-NLP-SG/VideoLLaMA2.1-7B-AV',
                        help='HF repo id or local path of the checkpoint')
    parser.add_argument('--modal-type', choices=["a", "v", "av"], required=True,
                        help='a = audio only, v = video only, av = audio-visual')
    args = parser.parse_args()

    inference(args)
```
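
For quick experiments, the script above collapses to a few lines. A minimal sketch, again assuming the `videollama2` package is installed; the media path here is a placeholder for your own file:

```python
# Minimal audio-visual query without argparse; a sketch, assuming the
# videollama2 package is installed. Replace the path with your own clip.
from videollama2 import model_init, mm_infer

model, processor, tokenizer = model_init('DAMO-NLP-SG/VideoLLaMA2.1-7B-AV')
tensor = processor['video']('my_clip.mp4', va=True)  # va=True keeps the audio track
print(mm_infer(tensor, 'Please describe the video with audio information.',
               model=model, tokenizer=tokenizer, modal='video', do_sample=False))
```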



Citation



If you find VideoLLaMA useful for your research and applications, please cite using this BibTeX:
```bibtex
@article{damonlpsg2024videollama2,
  title={VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs},
  author={Cheng, Zesen and Leng, Sicong and Zhang, Hang and Xin, Yifei and Li, Xin and Chen, Guanzheng and Zhu, Yongxin and Zhang, Wenqi and Luo, Ziyang and Zhao, Deli and Bing, Lidong},
  journal={arXiv preprint arXiv:2406.07476},
  year={2024},
  url={https://arxiv.org/abs/2406.07476}
}

@article{damonlpsg2023videollama,
  title={Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding},
  author={Zhang, Hang and Li, Xin and Bing, Lidong},
  journal={arXiv preprint arXiv:2306.02858},
  year={2023},
  url={https://arxiv.org/abs/2306.02858}
}
```

Files & Weights

| Filename | Size |
|:---------------------------------|--------:|
| audio_tower.bin                  | 0.17 GB |
| mm_projector_a.bin               | 0.03 GB |
| model-00001-of-00004.safetensors | 4.54 GB |
| model-00002-of-00004.safetensors | 4.59 GB |
| model-00003-of-00004.safetensors | 4.65 GB |
| model-00004-of-00004.safetensors | 2.09 GB |
| training_args.bin                | 0.00 GB |
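
To pre-download all of the shards above (about 16 GB in total) before first use, the standard `huggingface_hub` helper works; a sketch:

```python
# Sketch: pre-download all weight files listed above with huggingface_hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id='DAMO-NLP-SG/VideoLLaMA2.1-7B-AV')
print(local_dir)  # local cache path holding the safetensors shards and audio tower
```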