Step 1. Install huggingface_hub
pip install huggingface_hub
Step 2. Download the model
Here we download the model 'meta-llama/Llama-2-7b' into '/home/pi/models/hf/'.
from huggingface_hub import snapshot_download
snapshot_download(repo_id='meta-llama/Llama-2-7b', cache_dir='/home/pi/models/hf/')
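Note that meta-llama/Llama-2-7b is a gated repository, so the download only succeeds after your Hugging Face account has been granted access and you are logged in. A minimal sketch (the token string below is a placeholder, not a real token):

from huggingface_hub import login

# Log in once with your personal access token (placeholder below);
# alternatively run `huggingface-cli login` in a terminal.
login(token='hf_xxx')  # hypothetical placeholder token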
The downloaded files are stored in the Hub cache layout:
models--meta-llama--Llama-2-7b
ls -R models--meta-llama--Llama-2-7b
models--meta-llama--Llama-2-7b/:
blobs refs snapshots
models--meta-llama--Llama-2-7b/blobs:
0f0bc7714e028d4eb77a300525b581013454f06b 510da489e04e615c1b50e690e03a62d3bbff9fd9 abbcc199b2d1e4feb5d7e40c0bd67e1b0ce29e97
432012a5e6ec946e6c1cb318f256223889e3ab44 525dc349d71fe257fce4098c146446df6fef4247174f351381e4c3214af126f0 d67a91807d5879d193a694da57f28ff85092e92dc9fbef4888bd05e22b15ab75
4531f05cde0f2f2cb2d44055cf08e1d467d40196 6523f76675b50e9cf3a57d1fb135189abcffe1c7
51089e27e6764fb9f72c06a0f3710699fb6c9448 9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
models--meta-llama--Llama-2-7b/refs:
main
models--meta-llama--Llama-2-7b/snapshots:
365ffa8f1a6c455d3e2028ae658236b4b85ba824
models--meta-llama--Llama-2-7b/snapshots/365ffa8f1a6c455d3e2028ae658236b4b85ba824:
checklist.chk consolidated.00.pth LICENSE.txt params.json README.md Responsible-Use-Guide.pdf tokenizer_checklist.chk tokenizer.model USE_POLICY.md
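The cache splits a repo into blobs (the actual file contents, named by hash), refs (branch names pointing at commits), and snapshots (one directory per commit, containing symlinks into blobs). You do not need to reconstruct the snapshot path by hand: snapshot_download returns it. A short sketch:

from huggingface_hub import snapshot_download

# snapshot_download returns the local snapshot path, e.g.
# /home/pi/models/hf/models--meta-llama--Llama-2-7b/snapshots/<commit-hash>
path = snapshot_download(repo_id='meta-llama/Llama-2-7b',
                         cache_dir='/home/pi/models/hf/')
print(path)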
If you need to download the raw files into a plain directory instead of the cache layout, use the local_dir parameter:
snapshot_download(repo_id='meta-llama/Llama-2-7b', local_dir='/home/pi/models/7b')
This time the files land directly in the target directory:
checklist.chk consolidated.00.pth LICENSE.txt params.json README.md Responsible-Use-Guide.pdf tokenizer_checklist.chk tokenizer.model USE_POLICY.md
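If you only need individual files rather than the whole repo, huggingface_hub also provides hf_hub_download, which fetches a single file by name. For example, to grab just the tokenizer from the listing above:

from huggingface_hub import hf_hub_download

# Download a single file from the repo into a plain directory
path = hf_hub_download(repo_id='meta-llama/Llama-2-7b',
                       filename='tokenizer.model',
                       local_dir='/home/pi/models/7b')
print(path)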
Step 3. Load a model from the local cache
Here we load 'bert-base-uncased', assuming it has also been downloaded into the same cache directory.
import os
# Set the cache location before importing transformers so it is picked up
os.environ['HUGGINGFACE_HUB_CACHE'] = '/home/pi/models/hf/'

from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
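Once everything is cached, you can force transformers to resolve the model locally without touching the network, either by passing local_files_only=True or by setting HF_HUB_OFFLINE=1 in the environment. A minimal sketch:

import os
os.environ['HUGGINGFACE_HUB_CACHE'] = '/home/pi/models/hf/'

from transformers import BertTokenizer, BertModel

# local_files_only=True raises an error instead of downloading
# if the files are not already in the local cache
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', local_files_only=True)
model = BertModel.from_pretrained('bert-base-uncased', local_files_only=True)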