A notebook for something

LLaMA-Adapter for multimodal

1. clone the repo and set up the environment

git clone https://github.com/OpenGVLab/LLaMA-Adapter.git
cd LLaMA-Adapter/llama_adapter_v2_multimodal7b/
conda create -n llama_adapter_v2 python=3.8 -y
conda activate llama_adapter_v2
pip install -r requirements.txt
apt update && apt install -y libsm6 libxext6
pip install opencv-python

mkdir -p models/7B
# place the LLaMA 7B weights under models/7B, e.g.:
# /home/jovyan/LLaMA-Adapter/llama_adapter_v2_multimodal7b/models/7B
# /home/pi/workspace/LLaMA-Adapter/llama_adapter_v2_multimodal7b/models/7B
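With the environment and weights in place, inference follows the repo's `llama.load` pattern from its README; a minimal sketch (the `models` parent directory and the default question are assumptions based on the layout above):

```python
# Sketch of LLaMA-Adapter v2 multimodal inference.
# LLAMA_DIR is an assumption: the parent of the models/7B weights dir created above.
LLAMA_DIR = "models"

def describe(image_path, question="Please introduce this image."):
    # lazy imports: the sketch stays readable without the repo installed
    import torch
    import llama
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"
    # llama.load fetches the BIAS-7B adapter checkpoint on first use
    model, preprocess = llama.load("BIAS-7B", LLAMA_DIR, device)
    model.eval()

    prompt = llama.format_prompt(question)
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0).to(device)
    return model.generate(img, [prompt])[0]
```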

Segment Anything

Model Card


git clone https://github.com/facebookresearch/segment-anything.git
cd segment-anything
# install
pip install -e .
# download the checkpoint
mkdir checkpoints; cd checkpoints; curl -OL https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth; cd ..
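Once installed, the checkpoint can be driven through `sam_model_registry` and the automatic mask generator, as in the SAM README; a minimal sketch (the image path is a placeholder):

```python
# Sketch of automatic mask generation with the ViT-H checkpoint downloaded above.
CHECKPOINT = "checkpoints/sam_vit_h_4b8939.pth"

def masks_for(image_path, checkpoint=CHECKPOINT):
    # lazy imports: readable without segment-anything installed
    import cv2
    from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

    sam = sam_model_registry["vit_h"](checkpoint=checkpoint)
    generator = SamAutomaticMaskGenerator(sam)
    image = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    # returns a list of dicts with 'segmentation', 'area', 'bbox', ...
    return generator.generate(image)
```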

Run a LLaMA 30B model with llama.cpp

Model Card

https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GGML

curl -OL https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GGML/resolve/main/Wizard-Vicuna-30B-Uncensored.ggmlv3.q4_1.bin

Test run

Q: “how can I get to the Mars, what are the options and analyze the cost for each option”

A: “There are several ways to get to Mars, but currently, none of them are affordable for an individual. The most feasible options are:
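For reference, a run like the one above goes through llama.cpp's classic `main` binary; a sketch of how the invocation is assembled (the prompt and token count are just examples, and a compiled llama.cpp checkout in the current directory is assumed):

```python
# Builds the argument list for llama.cpp's (pre-GGUF) main binary.
MODEL = "Wizard-Vicuna-30B-Uncensored.ggmlv3.q4_1.bin"

def main_cmd(prompt, model=MODEL, n_predict=256):
    # -m: model file, -p: prompt, -n: number of tokens to generate
    return ["./main", "-m", model, "-p", prompt, "-n", str(n_predict)]
```

The list can be handed to `subprocess.run(...)`, or the equivalent command typed directly in the shell.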

Load a Hugging Face model from a local directory

Step 1. install huggingface_hub

pip install huggingface_hub

Step 2. download model

Here we download the model ‘meta-llama/Llama-2-7b’ into ‘/home/pi/models/hf/’

from huggingface_hub import snapshot_download
snapshot_download(repo_id='meta-llama/Llama-2-7b', cache_dir='/home/pi/models/hf/')
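To then load from the local directory, `transformers` can be pointed at the same `cache_dir` with `local_files_only=True`, which skips any network access. A sketch (the Auto* classes are assumptions for a causal LM; note that for `transformers` the `-hf` variant of the Llama repo is usually the one in the expected format):

```python
# Sketch: load from the snapshot downloaded above, without hitting the network.
CACHE_DIR = "/home/pi/models/hf/"

def load_local(repo_id="meta-llama/Llama-2-7b", cache_dir=CACHE_DIR):
    # lazy import keeps the sketch readable without transformers installed
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(repo_id, cache_dir=cache_dir, local_files_only=True)
    model = AutoModelForCausalLM.from_pretrained(repo_id, cache_dir=cache_dir, local_files_only=True)
    return tok, model
```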

Remote Jupyter Notebook

Setup Remote Jupyter Notebook

  • A: my laptop, used to access, through a web browser, a notebook running on B
  • B: my desktop, which runs conda as the Python env manager; address pi@10.0.0.101
  • B is a powerful machine with a GPU. We start a Jupyter notebook on B and access it from A.

Step 1.

From A:

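The usual mechanics for this step are an SSH local port forward from A to B, after which the notebook on B is reachable at http://localhost:8888 on A; a sketch of the command (the port numbers are assumptions):

```python
# Builds the SSH port-forward command A would run; B's address is from the setup above.
REMOTE = "pi@10.0.0.101"

def tunnel_cmd(remote=REMOTE, local_port=8888, remote_port=8888):
    # -N: no remote command, -L: forward local_port -> remote's localhost:remote_port
    return f"ssh -N -L {local_port}:localhost:{remote_port} {remote}"
```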