
Commit e2d25de: Update_docker by heyang (#29)
Parent: 5dc121e
37 files changed: +261 -270 lines

docker/llm/README.md (+60 -63)

Large diffs are not rendered by default.

docker/llm/finetune/lora/cpu/README.md (+7 -7)

@@ -2,13 +2,13 @@

 [Alpaca Lora](https://github.com/tloen/alpaca-lora/tree/main) uses [low-rank adaptation](https://arxiv.org/pdf/2106.09685.pdf) to speed up the finetuning of the base model [Llama2-7b](https://huggingface.co/meta-llama/Llama-2-7b), and tries to reproduce the standard Alpaca, a general finetuned LLM. It builds on Hugging Face transformers with the PyTorch backend, which natively requires a number of expensive GPU resources and significant time.

-By contrast, BigDL here provides a CPU optimization to accelerate the LoRA finetuning of Llama2-7b, using mixed-precision and distributed training. Specifically, [Intel OneCCL](https://www.intel.com/content/www/us/en/developer/tools/oneapi/oneccl.html), an available Hugging Face backend, speeds up the PyTorch computation with the BF16 datatype on CPUs, while [Intel MPI](https://www.intel.com/content/www/us/en/developer/tools/oneapi/mpi-library.html) enables parallel processing on Kubernetes.
+By contrast, IPEX-LLM here provides a CPU optimization to accelerate the LoRA finetuning of Llama2-7b, using mixed-precision and distributed training. Specifically, [Intel OneCCL](https://www.intel.com/content/www/us/en/developer/tools/oneapi/oneccl.html), an available Hugging Face backend, speeds up the PyTorch computation with the BF16 datatype on CPUs, while [Intel MPI](https://www.intel.com/content/www/us/en/developer/tools/oneapi/mpi-library.html) enables parallel processing on Kubernetes.

 The architecture is illustrated in the following:

 ![image](https://llm-assets.readthedocs.io/en/latest/_images/llm-finetune-lora-cpu-k8s.png)

-As above, BigDL implements its MPI training with the [Kubeflow MPI operator](https://github.com/kubeflow/mpi-operator/tree/master), which encapsulates the deployment as an MPIJob CRD and assists users in constructing an MPI worker cluster on Kubernetes, handling public key distribution, SSH connection, and log collection.
+As above, IPEX-LLM implements its MPI training with the [Kubeflow MPI operator](https://github.com/kubeflow/mpi-operator/tree/master), which encapsulates the deployment as an MPIJob CRD and assists users in constructing an MPI worker cluster on Kubernetes, handling public key distribution, SSH connection, and log collection.

 Now, let's deploy LoRA finetuning to create an LLM from Llama2-7b.
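Under the hood, an oneCCL-backed multi-worker run is just the training script launched under Intel MPI with the CCL environment active. A minimal standalone sketch for orientation (the worker count, thread count, and oneAPI setvars path are illustrative assumptions; the script arguments mirror the entrypoint shown later in this diff):

```bash
# Sketch: 2 local workers, BF16 LoRA finetuning under Intel MPI + oneCCL.
source /opt/intel/oneapi/setvars.sh   # brings mpirun and the CCL runtime into the environment
mpirun -n 2 -genv OMP_NUM_THREADS=24 \
  python /ipex_llm/lora_finetune.py \
    --base_model '/ipex_llm/model/' \
    --data_path '/ipex_llm/data/alpaca_data_cleaned_archive.json' \
    --output_dir '/home/mpiuser/finetuned_model' \
    --micro_batch_size 8 \
    --bf16
```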

@@ -20,7 +20,7 @@ Follow [here](https://github.com/kubeflow/mpi-operator/tree/master#installation)

 ### 2. Download Image, Base Model and Finetuning Data

-Follow [here](https://github.com/intel-analytics/BigDL/tree/main/docker/llm/finetune/lora/docker#prepare-bigdl-image-for-lora-finetuning) to prepare the BigDL Lora Finetuning image in your cluster.
+Follow [here](https://github.com/intel-analytics/IPEX-LLM/tree/main/docker/llm/finetune/lora/docker#prepare-ipex-llm-image-for-lora-finetuning) to prepare the IPEX-LLM Lora Finetuning image in your cluster.

 As finetuning starts from a base model, first download the [Llama2-7b model from the public download site of Hugging Face](https://huggingface.co/meta-llama/Llama-2-7b). Then, download the [cleaned alpaca data](https://raw.githubusercontent.com/tloen/alpaca-lora/main/alpaca_data_cleaned_archive.json), which contains all kinds of general knowledge and has already been cleaned. Next, move the downloaded files to a shared directory on your NFS server.
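For reference, the download-and-stage step might look like this (a sketch only: the NFS path `/mnt/nfs` and the use of `huggingface-cli` are assumptions, not part of this guide):

```bash
# Hypothetical staging of model and data onto an NFS-exported directory.
huggingface-cli download meta-llama/Llama-2-7b --local-dir ./llama2-7b  # requires approved access
wget https://raw.githubusercontent.com/tloen/alpaca-lora/main/alpaca_data_cleaned_archive.json
cp -r ./llama2-7b /mnt/nfs/model
cp ./alpaca_data_cleaned_archive.json /mnt/nfs/data/
```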

@@ -34,21 +34,21 @@ After preparing parameters in `./kubernetes/values.yaml`, submit the job as befl

 ```bash
 cd ./kubernetes
-helm install bigdl-lora-finetuning .
+helm install ipex-llm-lora-finetuning .
 ```
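To confirm the Helm release was created, standard helm commands suffice (a usage note, not from the original doc):

```bash
helm list | grep ipex-llm-lora-finetuning  # release status should be "deployed"
```
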
 ### 4. Check Deployment
 ```bash
-kubectl get all -n bigdl-lora-finetuning # you will see launcher and worker pods running
+kubectl get all -n ipex-llm-lora-finetuning # you will see launcher and worker pods running
 ```

 ### 5. Check Finetuning Process

 After deploying successfully, you can find a launcher pod, and then go inside this pod and check the logs collected from all workers.

 ```bash
-kubectl get all -n bigdl-lora-finetuning # you will see a launcher pod
-kubectl exec -it <launcher_pod_name> bash -n bigdl-ppml-finetuning # enter launcher pod
+kubectl get all -n ipex-llm-lora-finetuning # you will see a launcher pod
+kubectl exec -it <launcher_pod_name> bash -n ipex-llm-lora-finetuning # enter launcher pod
 cat launcher.log # display logs collected from other workers
 ```
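When training completes, the LoRA weights land in `/home/mpiuser/finetuned_model` inside the launcher pod (per `--output_dir` in the entrypoint shown later in this diff). A sketch for copying them out with standard kubectl (the pod name is a placeholder):

```bash
# Copy the finetuned adapter out of the launcher pod.
kubectl cp ipex-llm-lora-finetuning/<launcher_pod_name>:/home/mpiuser/finetuned_model ./finetuned_model
```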

docker/llm/finetune/lora/cpu/docker/Dockerfile (+7 -7)

@@ -12,13 +12,13 @@ FROM mpioperator/intel as builder
 ARG http_proxy
 ARG https_proxy
 ENV PIP_NO_CACHE_DIR=false
-COPY ./requirements.txt /bigdl/requirements.txt
+COPY ./requirements.txt /ipex_llm/requirements.txt

 # add public key
 COPY --from=key-getter /root/intel-oneapi-archive-keyring.gpg /usr/share/keyrings/intel-oneapi-archive-keyring.gpg
 RUN echo "deb [signed-by=/usr/share/keyrings/intel-oneapi-archive-keyring.gpg] https://apt.repos.intel.com/oneapi all main " > /etc/apt/sources.list.d/oneAPI.list

-RUN mkdir /bigdl/data && mkdir /bigdl/model && \
+RUN mkdir /ipex_llm/data && mkdir /ipex_llm/model && \
 # install pytorch 2.0.1
 apt-get update && \
 apt-get install -y python3-pip python3.9-dev python3-wheel git software-properties-common && \
@@ -29,12 +29,12 @@ RUN mkdir /bigdl/data && mkdir /bigdl/model && \
 pip install intel_extension_for_pytorch==2.0.100 && \
 pip install oneccl_bind_pt -f https://developer.intel.com/ipex-whl-stable && \
 # install transformers etc.
-cd /bigdl && \
+cd /ipex_llm && \
 git clone https://github.com/huggingface/transformers.git && \
 cd transformers && \
 git reset --hard 057e1d74733f52817dc05b673a340b4e3ebea08c && \
 pip install . && \
-pip install -r /bigdl/requirements.txt && \
+pip install -r /ipex_llm/requirements.txt && \
 # install python
 add-apt-repository ppa:deadsnakes/ppa -y && \
 apt-get install -y python3.9 && \
@@ -56,9 +56,9 @@ RUN mkdir /bigdl/data && mkdir /bigdl/model && \
 echo " UserKnownHostsFile /dev/null" >> /etc/ssh/ssh_config && \
 sed -i 's/#\(StrictModes \).*/\1no/g' /etc/ssh/sshd_config

-COPY ./bigdl-lora-finetuing-entrypoint.sh /bigdl/bigdl-lora-finetuing-entrypoint.sh
-COPY ./lora_finetune.py /bigdl/lora_finetune.py
+COPY ./ipex-llm-lora-finetuing-entrypoint.sh /ipex_llm/ipex-llm-lora-finetuing-entrypoint.sh
+COPY ./lora_finetune.py /ipex_llm/lora_finetune.py

-RUN chown -R mpiuser /bigdl
+RUN chown -R mpiuser /ipex_llm
 USER mpiuser
 ENTRYPOINT ["/bin/bash"]
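To sanity-check the built image, one can run a throwaway container (a sketch; note the image's ENTRYPOINT is `/bin/bash`, so the argument is handed to bash):

```bash
# Verify that PyTorch and IPEX import correctly inside the image.
docker run --rm intelanalytics/ipex-llm-finetune-lora-cpu:2.5.0-SNAPSHOT \
  -c 'python -c "import torch, intel_extension_for_pytorch; print(torch.__version__)"'
```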

docker/llm/finetune/lora/cpu/docker/README.md (+11 -11)

@@ -1,11 +1,11 @@
 ## Fine-tune LLM with One CPU

-### 1. Prepare BigDL image for Lora Finetuning
+### 1. Prepare IPEX LLM image for Lora Finetuning

 You can download directly from Dockerhub like:

 ```bash
-docker pull intelanalytics/bigdl-llm-finetune-lora-cpu:2.5.0-SNAPSHOT
+docker pull intelanalytics/ipex-llm-finetune-lora-cpu:2.5.0-SNAPSHOT
 ```

 Or build the image from source:
@@ -17,7 +17,7 @@ export HTTPS_PROXY=your_https_proxy
 docker build \
 --build-arg http_proxy=${HTTP_PROXY} \
 --build-arg https_proxy=${HTTPS_PROXY} \
--t intelanalytics/bigdl-llm-finetune-lora-cpu:2.5.0-SNAPSHOT \
+-t intelanalytics/ipex-llm-finetune-lora-cpu:2.5.0-SNAPSHOT \
 -f ./Dockerfile .
 ```

@@ -27,13 +27,13 @@ Here, we try to finetune [Llama2-7b](https://huggingface.co/meta-llama/Llama-2-7

 ```
 docker run -itd \
---name=bigdl-llm-fintune-lora-cpu \
+--name=ipex-llm-fintune-lora-cpu \
 --cpuset-cpus="your_expected_range_of_cpu_numbers" \
 -e STANDALONE_DOCKER=TRUE \
 -e WORKER_COUNT_DOCKER=your_worker_count \
--v your_downloaded_base_model_path:/bigdl/model \
--v your_downloaded_data_path:/bigdl/data/alpaca_data_cleaned_archive.json \
-intelanalytics/bigdl-llm-finetune-lora-cpu:2.5.0-SNAPSHOT \
+-v your_downloaded_base_model_path:/ipex_llm/model \
+-v your_downloaded_data_path:/ipex_llm/data/alpaca_data_cleaned_archive.json \
+intelanalytics/ipex-llm-finetune-lora-cpu:2.5.0-SNAPSHOT \
 bash
 ```
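Filled in, a two-worker standalone run pinned to the first 48 cores might look like this (a sketch; the host paths and core range are placeholders):

```bash
docker run -itd \
  --name=ipex-llm-fintune-lora-cpu \
  --cpuset-cpus="0-47" \
  -e STANDALONE_DOCKER=TRUE \
  -e WORKER_COUNT_DOCKER=2 \
  -v /path/to/llama2-7b:/ipex_llm/model \
  -v /path/to/alpaca_data_cleaned_archive.json:/ipex_llm/data/alpaca_data_cleaned_archive.json \
  intelanalytics/ipex-llm-finetune-lora-cpu:2.5.0-SNAPSHOT \
  bash
```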

@@ -44,21 +44,21 @@ You can adjust the configuration according to your own environment. After our te
 Enter the running container:

 ```
-docker exec -it bigdl-llm-fintune-lora-cpu bash
+docker exec -it ipex-llm-fintune-lora-cpu bash
 ```

 Then, run the script to start finetuning:

 ```
-bash /bigdl/bigdl-lora-finetuing-entrypoint.sh
+bash /ipex_llm/ipex-llm-lora-finetuing-entrypoint.sh
 ```

 After a few minutes, it is expected to produce results like:

 ```
 Training Alpaca-LoRA model with params:
-base_model: /bigdl/model/
-data_path: /bigdl/data/alpaca_data_cleaned_archive.json
+base_model: /ipex_llm/model/
+data_path: /ipex_llm/data/alpaca_data_cleaned_archive.json
 output_dir: /home/mpiuser/finetuned_model
 batch_size: 128
 micro_batch_size: 8
 ```

docker/llm/finetune/lora/cpu/docker/bigdl-lora-finetuing-entrypoint.sh → docker/llm/finetune/lora/cpu/docker/ipex-llm-lora-finetuing-entrypoint.sh (renamed, +6 -6)

@@ -15,9 +15,9 @@ then
 -genv KMP_AFFINITY="granularity=fine,none" \
 -genv KMP_BLOCKTIME=1 \
 -genv TF_ENABLE_ONEDNN_OPTS=1 \
-python /bigdl/lora_finetune.py \
---base_model '/bigdl/model/' \
---data_path "/bigdl/data/alpaca_data_cleaned_archive.json" \
+python /ipex_llm/lora_finetune.py \
+--base_model '/ipex_llm/model/' \
+--data_path "/ipex_llm/data/alpaca_data_cleaned_archive.json" \
 --output_dir "/home/mpiuser/finetuned_model" \
 --micro_batch_size 8 \
 --bf16
@@ -29,7 +29,7 @@ else
 if [ "$WORKER_ROLE" = "launcher" ]
 then
 sed "s/:1/ /g" /etc/mpi/hostfile > /home/mpiuser/hostfile
-export DATA_PATH="/bigdl/data/$DATA_SUB_PATH"
+export DATA_PATH="/ipex_llm/data/$DATA_SUB_PATH"
 sleep 10
 mpirun \
 -n $WORLD_SIZE \
@@ -40,8 +40,8 @@ else
 -genv KMP_AFFINITY="granularity=fine,none" \
 -genv KMP_BLOCKTIME=1 \
 -genv TF_ENABLE_ONEDNN_OPTS=1 \
-python /bigdl/lora_finetune.py \
---base_model '/bigdl/model/' \
+python /ipex_llm/lora_finetune.py \
+--base_model '/ipex_llm/model/' \
 --data_path "$DATA_PATH" \
 --output_dir "/home/mpiuser/finetuned_model" \
 --micro_batch_size $MICRO_BATCH_SIZE \
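One detail worth calling out: the MPI operator writes its hostfile as `<pod>:<slots>` entries (here `:1`, matching `slotsPerWorker: 1` in the job template below), and the launcher's `sed` pass strips the slot suffix into the plain hostnames `mpirun` expects:

```bash
# /etc/mpi/hostfile (written by the MPI operator) looks roughly like:
#   ipex-llm-lora-finetuning-job-worker-0:1
#   ipex-llm-lora-finetuning-job-worker-1:1
# Replacing ":1" with a space yields one bare hostname per line for mpirun.
sed "s/:1/ /g" /etc/mpi/hostfile > /home/mpiuser/hostfile
```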
docker/llm/finetune/lora/cpu/kubernetes/Chart.yaml (+1 -1)

@@ -1,6 +1,6 @@
 apiVersion: v2
 name: trusted-fintune-service
-description: A Helm chart for BigDL PPML Trusted BigData Service on Kubernetes
+description: A Helm chart for IPEX-LLM Finetuning Service on Kubernetes
 type: application
 version: 1.1.27
 appVersion: "1.16.0"

docker/llm/finetune/lora/cpu/kubernetes/templates/bigdl-lora-finetuning-job.yaml → docker/llm/finetune/lora/cpu/kubernetes/templates/ipex-llm-lora-finetuning-job.yaml (renamed, +12 -12)

@@ -1,8 +1,8 @@
 apiVersion: kubeflow.org/v2beta1
 kind: MPIJob
 metadata:
-  name: bigdl-lora-finetuning-job
-  namespace: bigdl-lora-finetuning
+  name: ipex-llm-lora-finetuning-job
+  namespace: ipex-llm-lora-finetuning
 spec:
   slotsPerWorker: 1
   runPolicy:
@@ -20,10 +20,10 @@ spec:
             claimName: nfs-pvc
         containers:
         - image: {{ .Values.imageName }}
-          name: bigdl-ppml-finetuning-launcher
+          name: ipex-llm-lora-finetuning-launcher
          securityContext:
             runAsUser: 1000
-          command: ['sh' , '-c', 'bash /bigdl/bigdl-lora-finetuing-entrypoint.sh']
+          command: ['sh' , '-c', 'bash /ipex_llm/ipex-llm-lora-finetuing-entrypoint.sh']
          env:
          - name: WORKER_ROLE
            value: "launcher"
@@ -34,7 +34,7 @@ spec:
          - name: MASTER_PORT
            value: "42679"
          - name: MASTER_ADDR
-           value: "bigdl-lora-finetuning-job-worker-0.bigdl-lora-finetuning-job-worker"
+           value: "ipex-llm-lora-finetuning-job-worker-0.ipex-llm-lora-finetuning-job-worker"
          - name: DATA_SUB_PATH
            value: "{{ .Values.dataSubPath }}"
          - name: OMP_NUM_THREADS
@@ -46,20 +46,20 @@ spec:
          volumeMounts:
          - name: nfs-storage
            subPath: {{ .Values.modelSubPath }}
-           mountPath: /bigdl/model
+           mountPath: /ipex_llm/model
          - name: nfs-storage
            subPath: {{ .Values.dataSubPath }}
-           mountPath: "/bigdl/data/{{ .Values.dataSubPath }}"
+           mountPath: "/ipex_llm/data/{{ .Values.dataSubPath }}"
     Worker:
       replicas: {{ .Values.trainerNum }}
       template:
         spec:
           containers:
           - image: {{ .Values.imageName }}
-            name: bigdl-ppml-finetuning-worker
+            name: ipex-llm-lora-finetuning-worker
            securityContext:
              runAsUser: 1000
-            command: ['sh' , '-c', 'bash /bigdl/bigdl-lora-finetuing-entrypoint.sh']
+            command: ['sh' , '-c', 'bash /ipex_llm/ipex-llm-lora-finetuing-entrypoint.sh']
            env:
            - name: WORKER_ROLE
              value: "trainer"
@@ -70,18 +70,18 @@ spec:
            - name: MASTER_PORT
              value: "42679"
            - name: MASTER_ADDR
-             value: "bigdl-lora-finetuning-job-worker-0.bigdl-lora-finetuning-job-worker"
+             value: "ipex-llm-lora-finetuning-job-worker-0.ipex-llm-lora-finetuning-job-worker"
            - name: LOCAL_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            volumeMounts:
            - name: nfs-storage
              subPath: {{ .Values.modelSubPath }}
-             mountPath: /bigdl/model
+             mountPath: /ipex_llm/model
            - name: nfs-storage
              subPath: {{ .Values.dataSubPath }}
-             mountPath: "/bigdl/data/{{ .Values.dataSubPath }}"
+             mountPath: "/ipex_llm/data/{{ .Values.dataSubPath }}"
            resources:
              requests:
                cpu: {{ .Values.cpuPerPod }}
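After `helm install`, the rendered MPIJob and its pods can be inspected with standard kubectl (the `mpijob` resource name comes from the MPI operator's CRD):

```bash
kubectl get mpijob ipex-llm-lora-finetuning-job -n ipex-llm-lora-finetuning
kubectl describe mpijob ipex-llm-lora-finetuning-job -n ipex-llm-lora-finetuning  # events and replica status
```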
docker/llm/finetune/lora/cpu/kubernetes/templates/ (namespace manifest; filename not rendered) (+1 -1)

@@ -1,4 +1,4 @@
 apiVersion: v1
 kind: Namespace
 metadata:
-  name: bigdl-qlora-finetuning
+  name: ipex-llm-lora-finetuning

docker/llm/finetune/lora/cpu/kubernetes/templates/nfs-pv.yaml (+2 -2)

@@ -1,8 +1,8 @@
 apiVersion: v1
 kind: PersistentVolume
 metadata:
-  name: nfs-pv-bigdl-lora-finetuning
-  namespace: bigdl-lora-finetuning
+  name: nfs-pv-ipex-llm-lora-finetuning
+  namespace: ipex-llm-lora-finetuning
 spec:
   capacity:
     storage: 15Gi

docker/llm/finetune/lora/cpu/kubernetes/templates/nfs-pvc.yaml (+1 -1)

@@ -2,7 +2,7 @@ kind: PersistentVolumeClaim
 apiVersion: v1
 metadata:
   name: nfs-pvc
-  namespace: bigdl-lora-finetuning
+  namespace: ipex-llm-lora-finetuning
 spec:
   accessModes:
   - ReadWriteOnce

docker/llm/finetune/lora/cpu/kubernetes/values.yaml (+1 -1)

@@ -1,4 +1,4 @@
-imageName: intelanalytics/bigdl-llm-finetune-lora-cpu:2.5.0-SNAPSHOT
+imageName: intelanalytics/ipex-llm-finetune-lora-cpu:2.5.0-SNAPSHOT
 trainerNum: 8
 microBatchSize: 8
 nfsServerIp: your_nfs_server_ip
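A usage note: rather than editing this file, values can be overridden per install with helm's standard `--set` flag (the values below are placeholders):

```bash
helm install ipex-llm-lora-finetuning . \
  --set trainerNum=4 \
  --set nfsServerIp=192.168.0.10
```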

docker/llm/finetune/qlora/cpu/docker/Dockerfile (+10 -10)

@@ -18,7 +18,7 @@ ENV TRANSFORMERS_COMMIT_ID=95fe0f5
 COPY --from=key-getter /root/intel-oneapi-archive-keyring.gpg /usr/share/keyrings/intel-oneapi-archive-keyring.gpg
 RUN echo "deb [signed-by=/usr/share/keyrings/intel-oneapi-archive-keyring.gpg] https://apt.repos.intel.com/oneapi all main " > /etc/apt/sources.list.d/oneAPI.list

-RUN mkdir -p /bigdl/data && mkdir -p /bigdl/model && \
+RUN mkdir -p /ipex_llm/data && mkdir -p /ipex_llm/model && \
 # install pytorch 2.1.0
 apt-get update && \
 apt-get install -y --no-install-recommends python3-pip python3.9-dev python3-wheel python3.9-distutils git software-properties-common && \
@@ -27,8 +27,8 @@ RUN mkdir -p /bigdl/data && mkdir -p /bigdl/model && \
 pip3 install --upgrade pip && \
 export PIP_DEFAULT_TIMEOUT=100 && \
 pip install --upgrade torch==2.1.0 && \
-# install CPU bigdl-llm
-pip3 install --pre --upgrade bigdl-llm[all] && \
+# install CPU ipex-llm
+pip3 install --pre --upgrade ipex-llm[all] && \
 # install ipex and oneccl
 pip install https://intel-extension-for-pytorch.s3.amazonaws.com/ipex_stable/cpu/intel_extension_for_pytorch-2.1.0%2Bcpu-cp39-cp39-linux_x86_64.whl && \
 pip install oneccl_bind_pt -f https://developer.intel.com/ipex-whl-stable && \
@@ -41,16 +41,16 @@ RUN mkdir -p /bigdl/data && mkdir -p /bigdl/model && \
 apt-get update && apt-get install -y curl wget gpg gpg-agent software-properties-common libunwind8-dev && \
 # get qlora example code
 ln -s /usr/bin/python3 /usr/bin/python && \
-cd /bigdl && \
-git clone https://github.com/intel-analytics/BigDL.git && \
-mv BigDL/python/llm/example/CPU/QLoRA-FineTuning/* . && \
+cd /ipex_llm && \
+git clone https://github.com/intel-analytics/IPEX-LLM.git && \
+mv IPEX-LLM/python/llm/example/CPU/QLoRA-FineTuning/* . && \
 mkdir -p /GPU/LLM-Finetuning && \
-mv BigDL/python/llm/example/GPU/LLM-Finetuning/common /GPU/LLM-Finetuning/common && \
-rm -r BigDL && \
-chown -R mpiuser /bigdl
+mv IPEX-LLM/python/llm/example/GPU/LLM-Finetuning/common /GPU/LLM-Finetuning/common && \
+rm -r IPEX-LLM && \
+chown -R mpiuser /ipex_llm

 # for standalone
-COPY ./start-qlora-finetuning-on-cpu.sh /bigdl/start-qlora-finetuning-on-cpu.sh
+COPY ./start-qlora-finetuning-on-cpu.sh /ipex_llm/start-qlora-finetuning-on-cpu.sh

 USER mpiuser

docker/llm/finetune/qlora/cpu/docker/Dockerfile.k8s (+9 -9)

@@ -19,7 +19,7 @@ ENV TRANSFORMERS_COMMIT_ID=95fe0f5
 COPY --from=key-getter /root/intel-oneapi-archive-keyring.gpg /usr/share/keyrings/intel-oneapi-archive-keyring.gpg
 RUN echo "deb [signed-by=/usr/share/keyrings/intel-oneapi-archive-keyring.gpg] https://apt.repos.intel.com/oneapi all main " > /etc/apt/sources.list.d/oneAPI.list

-RUN mkdir -p /bigdl/data && mkdir -p /bigdl/model && \
+RUN mkdir -p /ipex_llm/data && mkdir -p /ipex_llm/model && \
 apt-get update && \
 apt install -y --no-install-recommends openssh-server openssh-client libcap2-bin gnupg2 ca-certificates \
 python3-pip python3.9-dev python3-wheel python3.9-distutils git software-properties-common && \
@@ -40,8 +40,8 @@ RUN mkdir -p /bigdl/data && mkdir -p /bigdl/model && \
 pip3 install --upgrade pip && \
 export PIP_DEFAULT_TIMEOUT=100 && \
 pip install --upgrade torch==2.1.0 --index-url https://download.pytorch.org/whl/cpu && \
-# install CPU bigdl-llm
-pip3 install --pre --upgrade bigdl-llm[all] && \
+# install CPU ipex-llm
+pip3 install --pre --upgrade ipex-llm[all] && \
 # install ipex and oneccl
 pip install intel_extension_for_pytorch==2.0.100 && \
 pip install oneccl_bind_pt -f https://developer.intel.com/ipex-whl-stable && \
@@ -59,14 +59,14 @@ RUN mkdir -p /bigdl/data && mkdir -p /bigdl/model && \
 rm -rf /var/lib/apt/lists/* && \
 # get qlora example code
 ln -s /usr/bin/python3 /usr/bin/python && \
-cd /bigdl && \
-git clone https://github.com/intel-analytics/BigDL.git && \
-mv BigDL/python/llm/example/CPU/QLoRA-FineTuning/* . && \
-rm -r BigDL && \
-chown -R mpiuser /bigdl
+cd /ipex_llm && \
+git clone https://github.com/intel-analytics/IPEX-LLM.git && \
+mv IPEX-LLM/python/llm/example/CPU/QLoRA-FineTuning/* . && \
+rm -r IPEX-LLM && \
+chown -R mpiuser /ipex_llm

 # for k8s
-COPY ./bigdl-qlora-finetuing-entrypoint.sh /bigdl/bigdl-qlora-finetuing-entrypoint.sh
+COPY ./ipex-llm-qlora-finetuing-entrypoint.sh /ipex_llm/ipex-llm-qlora-finetuing-entrypoint.sh

 USER mpiuser
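A quick sanity check after building either QLoRA image is importing the renamed package (a sketch; the tag is a placeholder, and `ipex_llm` is the import name of the `ipex-llm` pip package installed above):

```bash
docker run --rm --entrypoint python3 <your-qlora-image-tag> -c "import ipex_llm; print('ipex-llm OK')"
```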
