LLaMA-Adapter for multimodal inference

1. clone the repo and set up the environment

git clone https://github.com/OpenGVLab/LLaMA-Adapter.git
cd LLaMA-Adapter/llama_adapter_v2_multimodal7b/
conda create -n llama_adapter_v2 python=3.8 -y
conda activate llama_adapter_v2
pip install -r requirements.txt
apt update && apt install -y libsm6 libxext6
pip install opencv-python

mkdir -p models/7B
# e.g. /home/pi/workspace/LLaMA-Adapter/llama_adapter_v2_multimodal7b/models/7B
# or   /home/jovyan/LLaMA-Adapter/llama_adapter_v2_multimodal7b/models/7B
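
Before moving on, it is worth a quick check that PyTorch imports cleanly and can see the GPU, since the test in step 4 silently falls back to CPU otherwise:

import torch

print(torch.__version__)           # version installed from requirements.txt
print(torch.cuda.is_available())   # False means the step-4 test will run on CPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))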

2. download the original LLaMA 7B model

Either grab the LLaMA v1 weights directly (2.0), or download Llama-2-7b through Hugging Face (2.1-2.3).

2.0 for LLaMA v1

cd models/7B   # put the weights where steps 3 and 4 expect them
curl -OL https://agi.gpt4.org/llama/LLaMA/tokenizer.model
curl -OL https://agi.gpt4.org/llama/LLaMA/7B/consolidated.00.pth
curl -OL https://agi.gpt4.org/llama/LLaMA/7B/params.json
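
Interrupted downloads leave truncated files that only fail later, when the checkpoint is loaded in step 4. A quick sanity check from inside models/7B (consolidated.00.pth should be roughly 13 GB for the 7B model):

import json, os

for name in ['tokenizer.model', 'consolidated.00.pth', 'params.json']:
    print(name, os.path.getsize(name) if os.path.exists(name) else 'MISSING')

# params.json for 7B should report dim=4096, n_layers=32, n_heads=32
with open('params.json') as f:
    print(json.load(f))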

2.1 get a token from https://huggingface.co/settings/tokens

It should look like “hf_avHIoqzdvupwMBZzhhREIMKZCUL12345”.

2.2 log into Hugging Face with this token

huggingface-cli login
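
If the interactive CLI is inconvenient (e.g. inside a notebook), huggingface_hub exposes the same login as a plain function call; the token string below is a placeholder:

from huggingface_hub import login

login(token='hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')  # paste your own token here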

2.3 download

from huggingface_hub import snapshot_download

path = '/home/pi/workspace/LLaMA-Adapter/llama_adapter_v2_multimodal7b/models/7B'
# path = '/home/jovyan/LLaMA-Adapter/llama_adapter_v2_multimodal7b/models/7B'
snapshot_download(repo_id='meta-llama/Llama-2-7b', local_dir=path)
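
Note that meta-llama/Llama-2-7b is a gated repository, so the call above only succeeds once Meta has approved your access request on Hugging Face. If you want to pull just the checkpoint files, snapshot_download also accepts allow_patterns; the pattern list below assumes the repo ships the consolidated PyTorch checkpoint under these names:

from huggingface_hub import snapshot_download

path = '/home/pi/workspace/LLaMA-Adapter/llama_adapter_v2_multimodal7b/models/7B'
snapshot_download(
    repo_id='meta-llama/Llama-2-7b',
    local_dir=path,
    allow_patterns=['consolidated.00.pth', 'params.json', 'tokenizer.model'],
)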

3. copy tokenizer.model

llama.load in step 4 looks for tokenizer.model one level above the 7B/ weights directory, so copy it up:

cd /home/pi/workspace/LLaMA-Adapter/llama_adapter_v2_multimodal7b/models/7B
cp tokenizer.model ../
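
The layout should now match what llama.load expects in step 4: tokenizer.model at the top of the models directory, with the weights below it in 7B/. A small check, using the paths from this walkthrough:

import os

llama_dir = '/home/pi/workspace/LLaMA-Adapter/llama_adapter_v2_multimodal7b/models'
for rel in ['tokenizer.model', '7B/consolidated.00.pth', '7B/params.json']:
    print(rel, 'ok' if os.path.exists(os.path.join(llama_dir, rel)) else 'MISSING')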

4. run a test

cd /home/pi/workspace/LLaMA-Adapter/llama_adapter_v2_multimodal7b

Then run the following in Python:

import cv2
import llama
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# llama_dir = '/home/pi/models/llama/'
llama_dir = '/home/pi/workspace/LLaMA-Adapter/llama_adapter_v2_multimodal7b/models/'

# choose from BIAS-7B, LORA-BIAS-7B
model, preprocess = llama.load("BIAS-7B", llama_dir, device)
model.eval()

prompt = llama.format_prompt("Please introduce this painting.")
# cv2 reads images as BGR; convert to RGB before the CLIP-style preprocess
img = Image.fromarray(cv2.cvtColor(cv2.imread("../docs/logo_v1.png"), cv2.COLOR_BGR2RGB))
img = preprocess(img).unsqueeze(0).to(device)

result = model.generate(img, [prompt])[0]

print(result)
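
To ask questions about your own images, the pieces above wrap naturally into a small helper. This is a sketch that reuses model, preprocess, and device from the test script; the image path and question are placeholders:

def ask(image_path, question):
    # cv2 reads BGR; convert to RGB before the CLIP-style preprocess
    rgb = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    img = preprocess(Image.fromarray(rgb)).unsqueeze(0).to(device)
    prompt = llama.format_prompt(question)
    return model.generate(img, [prompt])[0]

print(ask("my_photo.jpg", "What objects are in this image?"))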