---
license: apache-2.0
datasets:
- AIDC-AI/Ovis-dataset
library_name: transformers
tags:
- MLLM
pipeline_tag: image-text-to-text
---

## Introduction
Ovis is a novel Multimodal Large Language Model (MLLM) architecture, designed to structurally align visual and textual embeddings. For a comprehensive introduction, please refer to the [Ovis paper](https://arxiv.org/abs/2405.20797) and the [Ovis GitHub](https://github.com/AIDC-AI/Ovis).
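At a high level, Ovis turns each image patch into a probability distribution over a learnable visual vocabulary and forms its embedding as the probability-weighted combination of a visual embedding table, mirroring how text tokens index the textual embedding table. The snippet below is a minimal, illustrative sketch of that idea with made-up dimensions and randomly initialized weights; it is not the model's actual implementation (see the paper and repository for the real architecture).

```python
import torch
import torch.nn.functional as F

# Hypothetical dimensions, chosen only for illustration
num_patches, vit_dim = 256, 1152        # patch features from a ViT (e.g. Siglip-400M)
vocab_size, hidden_dim = 16384, 3584    # visual vocabulary size and LLM hidden size

patch_features = torch.randn(num_patches, vit_dim)

# Visual tokenizer head: projects each patch onto the visual vocabulary
visual_head = torch.nn.Linear(vit_dim, vocab_size)
# Learnable visual embedding table, analogous to the textual embedding table
visual_embedding_table = torch.nn.Embedding(vocab_size, hidden_dim)

# Each patch becomes a probability distribution over the visual vocabulary ...
probs = F.softmax(visual_head(patch_features), dim=-1)       # (num_patches, vocab_size)
# ... and its embedding is the probability-weighted mixture of table entries,
# structurally mirroring a text token's embedding lookup
visual_embeds = probs @ visual_embedding_table.weight        # (num_patches, hidden_dim)
```

Because both modalities are indexed from embedding tables in the same way, the LLM receives visual and textual inputs in a structurally consistent form.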
## Model
Built upon Ovis1.5, Ovis1.6 further enhances high-resolution image processing, is trained on a larger, more diverse, and higher-quality dataset, and refines the training process with DPO training following instruction-tuning.

| Ovis MLLMs        |     ViT     |     LLM      |                          Model Weights                          |
|:------------------|:-----------:|:------------:|:----------------------------------------------------------------:|
| Ovis1.6-Gemma2-9B | Siglip-400M | Gemma2-9B-It | [Huggingface](https://huggingface.co/AIDC-AI/Ovis1.6-Gemma2-9B) |

## Performance
With just **10B** parameters, Ovis1.6-Gemma2-9B leads the [OpenCompass](https://github.com/open-compass/VLMEvalKit) benchmark among open-source MLLMs within **30B** parameters.
## Usage
Below is a code snippet to run Ovis with multimodal inputs. For additional usage instructions, including the inference wrapper and Gradio UI, please refer to [Ovis GitHub](https://github.com/AIDC-AI/Ovis?tab=readme-ov-file#inference).
```bash
pip install torch==2.2.0 transformers==4.44.2 numpy==1.24.3 pillow==10.3.0
```
```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM

# load model
model = AutoModelForCausalLM.from_pretrained("AIDC-AI/Ovis1.6-Gemma2-9B",
                                             torch_dtype=torch.bfloat16,
                                             multimodal_max_length=8192,
                                             trust_remote_code=True).cuda()
text_tokenizer = model.get_text_tokenizer()
visual_tokenizer = model.get_visual_tokenizer()

# enter image path and prompt
image_path = input("Enter image path: ")
image = Image.open(image_path)
text = input("Enter prompt: ")
query = f'<image>\n{text}'

# format conversation
prompt, input_ids, pixel_values = model.preprocess_inputs(query, [image])
attention_mask = torch.ne(input_ids, text_tokenizer.pad_token_id)
input_ids = input_ids.unsqueeze(0).to(device=model.device)
attention_mask = attention_mask.unsqueeze(0).to(device=model.device)
pixel_values = [pixel_values.to(dtype=visual_tokenizer.dtype, device=visual_tokenizer.device)]

# generate output
with torch.inference_mode():
    gen_kwargs = dict(
        max_new_tokens=1024,
        do_sample=False,
        top_p=None,
        top_k=None,
        temperature=None,
        repetition_penalty=None,
        eos_token_id=model.generation_config.eos_token_id,
        pad_token_id=text_tokenizer.pad_token_id,
        use_cache=True
    )
    output_ids = model.generate(input_ids, pixel_values=pixel_values, attention_mask=attention_mask, **gen_kwargs)[0]
    output = text_tokenizer.decode(output_ids, skip_special_tokens=True)
    print(f'Output:\n{output}')
```

## Citation
If you find Ovis useful, please cite the paper:
```
@article{lu2024ovis,
  title={Ovis: Structural Embedding Alignment for Multimodal Large Language Model},
  author={Shiyin Lu and Yang Li and Qing-Guo Chen and Zhao Xu and Weihua Luo and Kaifu Zhang and Han-Jia Ye},
  year={2024},
  journal={arXiv:2405.20797}
}
```

## License
The project is licensed under the Apache 2.0 License and is restricted to uses that comply with the license agreements of Gemma2 and Siglip.