
Commit e8c0772

[Env] SKYPILOT_JOB_ID for all tasks (skypilot-org#1377)
* Add run id for normal job
* add example for the run id
* fix env_check
* fix env_check
* fix
* address comments
* Rename to SKYPILOT_JOB_ID
* rename the controller's job id to avoid confusion
* rename env variables
* fix
1 parent a613b43 commit e8c0772

File tree

22 files changed, +127 -72 lines changed


.github/workflows/pytest.yml (+1 -1)

@@ -53,4 +53,4 @@ jobs:
       pip install pytest pytest-xdist pytest-env>=0.6

     - name: Run tests with pytest
-      run: SKY_DISABLE_USAGE_COLLECTION=1 pytest ${{ matrix.test-path }}
+      run: SKYPILOT_DISABLE_USAGE_COLLECTION=1 pytest ${{ matrix.test-path }}

docs/source/examples/spot-jobs.rst (+3 -3)

@@ -150,7 +150,7 @@ Below we show an `example <https://github.com/skypilot-org/skypilot/blob/master/
       --max_seq_length 384 \
       --doc_stride 128 \
       --report_to wandb \
-      --run_name $SKYPILOT_RUN_ID \
+      --run_name $SKYPILOT_JOB_ID \
       --output_dir /checkpoint/bert_qa/ \
       --save_total_limit 10 \
       --save_steps 1000
@@ -162,11 +162,11 @@ the output directory and frequency of checkpointing (see more
 on `Huggingface API <https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.save_steps>`_).
 You may also refer to another example `here <https://github.com/skypilot-org/skypilot/tree/master/examples/spot/resnet_ddp>`_ for periodically checkpointing with PyTorch.

-We also set :code:`--run_name` to :code:`$SKYPILOT_RUN_ID` so that the loggings will be saved
+We also set :code:`--run_name` to :code:`$SKYPILOT_JOB_ID` so that the loggings will be saved
 to the same run in Weights & Biases.

 .. note::
-  The environment variable :code:`$SKYPILOT_RUN_ID` can be used to identify the same job, i.e., it is kept identical across all
+  The environment variable :code:`$SKYPILOT_JOB_ID` (example: "sky-2022-10-06-05-17-09-750781_spot_id-22") can be used to identify the same job, i.e., it is kept identical across all
   recoveries of the job.
   It can be accessed in the task's :code:`run` commands or directly in the program itself (e.g., access
   via :code:`os.environ` and pass to Weights & Biases for tracking purposes in your training script). It is made available to
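As the doc change above notes, the job id can be read via os.environ inside the training script. A minimal sketch of that pattern; the fallback name and the commented-out wandb call are illustrative, not part of the commit:

```python
import os

# SKYPILOT_JOB_ID is kept identical across all recoveries of a spot job, so
# it works as a stable run name for an experiment tracker. It is unset when
# the script runs outside SkyPilot, hence the placeholder default.
job_id = os.environ.get("SKYPILOT_JOB_ID", "local-dev-run")

# Hypothetical hand-off to Weights & Biases (not executed here):
#   wandb.init(project="bert-qa", name=job_id)
print(f"run name: {job_id}")
```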

docs/source/running-jobs/distributed-jobs.rst (+10 -10)

@@ -30,27 +30,27 @@ For example, here is a simple PyTorch Distributed training example:
   run: |
     cd pytorch-distributed-resnet

-    num_nodes=`echo "$SKY_NODE_IPS" | wc -l`
-    master_addr=`echo "$SKY_NODE_IPS" | head -n1`
-    python3 -m torch.distributed.launch --nproc_per_node=$SKY_NUM_GPUS_PER_NODE \
-    --nnodes=$num_nodes --node_rank=${SKY_NODE_RANK} --master_addr=$master_addr \
+    num_nodes=`echo "$SKYPILOT_NODE_IPS" | wc -l`
+    master_addr=`echo "$SKYPILOT_NODE_IPS" | head -n1`
+    python3 -m torch.distributed.launch --nproc_per_node=$SKYPILOT_NUM_GPUS_PER_NODE \
+    --nnodes=$num_nodes --node_rank=${SKYPILOT_NODE_RANK} --master_addr=$master_addr \
     --master_port=8008 resnet_ddp.py --num_epochs 20

 In the above, :code:`num_nodes: 2` specifies that this task is to be run on 2
 nodes. The :code:`setup` and :code:`run` commands are executed on both nodes.

 SkyPilot exposes these environment variables that can be accessed in a task's ``run`` commands:

-- :code:`SKY_NODE_RANK`: rank (an integer ID from 0 to :code:`num_nodes-1`) of
+- :code:`SKYPILOT_NODE_RANK`: rank (an integer ID from 0 to :code:`num_nodes-1`) of
   the node executing the task.
-- :code:`SKY_NODE_IPS`: a string of IP addresses of the nodes reserved to execute
+- :code:`SKYPILOT_NODE_IPS`: a string of IP addresses of the nodes reserved to execute
   the task, where each line contains one IP address.

-  - You can retrieve the number of nodes by :code:`echo "$SKY_NODE_IPS" | wc -l`
-    and the IP address of the third node by :code:`echo "$SKY_NODE_IPS" | sed -n
+  - You can retrieve the number of nodes by :code:`echo "$SKYPILOT_NODE_IPS" | wc -l`
+    and the IP address of the third node by :code:`echo "$SKYPILOT_NODE_IPS" | sed -n
     3p`.

   - To manipulate these IP addresses, you can also store them to a file in the
-    :code:`run` command with :code:`echo $SKY_NODE_IPS >> ~/sky_node_ips`.
-- :code:`SKY_NUM_GPUS_PER_NODE`: number of GPUs reserved on each node to execute the
+    :code:`run` command with :code:`echo $SKYPILOT_NODE_IPS >> ~/sky_node_ips`.
+- :code:`SKYPILOT_NUM_GPUS_PER_NODE`: number of GPUs reserved on each node to execute the
   task; the same as the count in ``accelerators: <name>:<count>`` (rounded up if a fraction).
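The shell recipes documented above translate directly to Python. A sketch assuming a 2-node cluster; the setdefault values below only stand in for what SkyPilot would inject at run time:

```python
import os

# SKYPILOT_NODE_IPS is a newline-separated string, one IP per line.
# Stand-in values simulating a 2-node cluster:
os.environ.setdefault("SKYPILOT_NODE_IPS", "10.0.0.1\n10.0.0.2")
os.environ.setdefault("SKYPILOT_NODE_RANK", "0")

node_ips = os.environ["SKYPILOT_NODE_IPS"].strip().splitlines()
num_nodes = len(node_ips)    # shell: echo "$SKYPILOT_NODE_IPS" | wc -l
master_addr = node_ips[0]    # shell: echo "$SKYPILOT_NODE_IPS" | head -n1
rank = int(os.environ["SKYPILOT_NODE_RANK"])

print(num_nodes, master_addr, rank)
```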

examples/env_check.yaml (+23 -3)

@@ -24,7 +24,27 @@ run: |
     exit 1
   fi

-  echo NODE ID: $SKY_NODE_RANK
-  echo NODE IPS: "$SKY_NODE_IPS"
-  worker_addr=`echo "$SKY_NODE_IPS" | sed -n 2p`
+  if [[ -z "$SKYPILOT_NODE_RANK" ]]; then
+    echo "SKYPILOT_NODE_RANK is not set"
+    exit 1
+  else
+    echo "SKYPILOT_NODE_RANK is set to ${SKYPILOT_NODE_RANK}"
+  fi
+
+  if [[ -z "$SKYPILOT_NODE_IPS" ]]; then
+    echo "SKYPILOT_NODE_IPS is not set"
+    exit 1
+  else
+    echo "SKYPILOT_NODE_IPS is set to ${SKYPILOT_NODE_IPS}"
+    echo "${SKYPILOT_NODE_IPS}"
+    echo "${SKYPILOT_NODE_IPS}" | wc -l | grep 2 || exit 1
+  fi
+  worker_addr=`echo "$SKYPILOT_NODE_IPS" | sed -n 2p`
   echo Worker IP: $worker_addr
+
+  if [[ -z "$SKYPILOT_JOB_ID" ]]; then
+    echo "SKYPILOT_JOB_ID is not set"
+    exit 1
+  else
+    echo "SKYPILOT_JOB_ID is set to ${SKYPILOT_JOB_ID}"
+  fi
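The checks env_check.yaml performs in bash can be sketched equivalently in Python; the setdefault values are stand-ins for what SkyPilot would set, and the require helper is illustrative only:

```python
import os
import sys

# Stand-in values simulating SkyPilot's injected environment on a 2-node job.
os.environ.setdefault("SKYPILOT_NODE_RANK", "0")
os.environ.setdefault("SKYPILOT_NODE_IPS", "10.0.0.1\n10.0.0.2")
os.environ.setdefault("SKYPILOT_JOB_ID",
                      "sky-2022-10-06-05-17-09-750781_spot_id-22")

def require(name: str) -> str:
    # Fail fast if a required variable is missing, like the `exit 1` branches.
    value = os.environ.get(name)
    if not value:
        sys.exit(f"{name} is not set")
    print(f"{name} is set to {value}")
    return value

rank = require("SKYPILOT_NODE_RANK")
ips = require("SKYPILOT_NODE_IPS").splitlines()
job_id = require("SKYPILOT_JOB_ID")
worker_addr = ips[1] if len(ips) > 1 else None  # shell: sed -n 2p
```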

examples/ray_tune_app.yaml (+1 -1)

@@ -11,6 +11,6 @@ setup: |
   pip3 install ray[tune] pytorch-lightning==1.4.9 lightning-bolts torchvision

 run: |
-  if [ "${SKY_NODE_RANK}" == "0" ]; then
+  if [ "${SKYPILOT_NODE_RANK}" == "0" ]; then
     python3 tune_ptl_example.py
   fi

examples/resnet_distributed_torch.yaml (+3 -3)

@@ -19,8 +19,8 @@ setup: |
 run: |
   cd pytorch-distributed-resnet

-  num_nodes=`echo "$SKY_NODE_IPS" | wc -l`
-  master_addr=`echo "$SKY_NODE_IPS" | head -n1`
+  num_nodes=`echo "$SKYPILOT_NODE_IPS" | wc -l`
+  master_addr=`echo "$SKYPILOT_NODE_IPS" | head -n1`
   python3 -m torch.distributed.launch --nproc_per_node=1 \
-  --nnodes=$num_nodes --node_rank=${SKY_NODE_RANK} --master_addr=$master_addr \
+  --nnodes=$num_nodes --node_rank=${SKYPILOT_NODE_RANK} --master_addr=$master_addr \
   --master_port=8008 resnet_ddp.py --num_epochs 20

examples/resnet_distributed_torch_scripts/run.sh (+3 -3)

@@ -4,9 +4,9 @@ conda activate resnet
 conda env list

 cd pytorch-distributed-resnet
-num_nodes=`echo "$SKY_NODE_IPS" | wc -l`
-master_addr=`echo "$SKY_NODE_IPS" | head -n1`
+num_nodes=`echo "$SKYPILOT_NODE_IPS" | wc -l`
+master_addr=`echo "$SKYPILOT_NODE_IPS" | head -n1`
 echo MASTER_ADDR $master_addr
 python -m torch.distributed.launch --nproc_per_node=1 \
-  --nnodes=$num_nodes --node_rank=${SKY_NODE_RANK} --master_addr=$master_addr \
+  --nnodes=$num_nodes --node_rank=${SKYPILOT_NODE_RANK} --master_addr=$master_addr \
   --master_port=8008 resnet_ddp.py --num_epochs 20

examples/spot/bert_qa.yaml (+1 -1)

@@ -36,7 +36,7 @@ run: |
     --max_seq_length 384 \
     --doc_stride 128 \
     --report_to wandb \
-    --run_name $SKYPILOT_RUN_ID \
+    --run_name $SKYPILOT_JOB_ID \
     --output_dir /checkpoint/bert_qa/ \
     --save_total_limit 10 \
     --save_steps 1000

examples/spot/resnet.yaml (+3 -3)

@@ -46,9 +46,9 @@ run: |
   # modify your run id for each different run!
   run_id="resnet-run-1"

-  num_nodes=`echo "$SKY_NODE_IPS" | wc -l`
-  master_addr=`echo "$SKY_NODE_IPS" | head -n1`
+  num_nodes=`echo "$SKYPILOT_NODE_IPS" | wc -l`
+  master_addr=`echo "$SKYPILOT_NODE_IPS" | head -n1`
   python3 -m torch.distributed.launch --nproc_per_node=1 \
-  --nnodes=$num_nodes --node_rank=${SKY_NODE_RANK} --master_addr=$master_addr \
+  --nnodes=$num_nodes --node_rank=${SKYPILOT_NODE_RANK} --master_addr=$master_addr \
   --master_port=8008 resnet_ddp.py --num_epochs 100000 --model_dir /checkpoint/torch_ddp_resnet/ \
   --resume --model_filename resnet_distributed-with-epochs.pth --run_id $run_id --wandb_dir /checkpoint/

examples/storage/checkpointed_training.yaml (+3 -3)

@@ -43,9 +43,9 @@ run: |
   cd pytorch-distributed-resnet
   git pull

-  num_nodes=`echo "$SKY_NODE_IPS" | wc -l`
-  master_addr=`echo "$SKY_NODE_IPS" | head -n1`
+  num_nodes=`echo "$SKYPILOT_NODE_IPS" | wc -l`
+  master_addr=`echo "$SKYPILOT_NODE_IPS" | head -n1`
   python3 -m torch.distributed.launch --nproc_per_node=1 \
-  --nnodes=$num_nodes --node_rank=${SKY_NODE_RANK} --master_addr=$master_addr \
+  --nnodes=$num_nodes --node_rank=${SKYPILOT_NODE_RANK} --master_addr=$master_addr \
   --master_port=8008 resnet_ddp.py --num_epochs 100 --model_dir /checkpoints/torch_ddp_resnet/ \
   --resume --model_filename resnet_distributed-with-epochs.pth

examples/storage/pingpong.yaml (+2 -2)

@@ -27,5 +27,5 @@ setup: |
   git clone https://github.com/romilbhardwaj/sharedfs-pingpong.git

 run: |
-  num_nodes=`echo "$SKY_NODE_IPS" | wc -l`
-  python -u sharedfs-pingpong/main.py --process-id ${SKY_NODE_RANK} --shared-path /sharedfs/ --num-processes $num_nodes
+  num_nodes=`echo "$SKYPILOT_NODE_IPS" | wc -l`
+  python -u sharedfs-pingpong/main.py --process-id ${SKYPILOT_NODE_RANK} --shared-path /sharedfs/ --num-processes $num_nodes

sky/backends/cloud_vm_ray_backend.py (+26 -2)

@@ -292,7 +292,9 @@ def check_ip():
         else:
             ip_rank_map = {{ip: i for i, ip in enumerate(gang_scheduling_id_to_ip)}}
             ip_list_str = '\\n'.join(gang_scheduling_id_to_ip)
-
+
+            sky_env_vars_dict['SKYPILOT_NODE_IPS'] = ip_list_str
+            # Environment starting with `SKY_` is deprecated.
             sky_env_vars_dict['SKY_NODE_IPS'] = ip_list_str
             """),
     ]
@@ -318,6 +320,7 @@ def register_run_fn(self, run_fn: str, run_fn_name: str) -> None:
     def add_ray_task(self,
                      bash_script: str,
                      task_name: Optional[str],
+                     job_run_id: str,
                      ray_resources_dict: Optional[Dict[str, float]],
                      log_path: str,
                      env_vars: Dict[str, str] = None,
@@ -366,6 +369,10 @@ def add_ray_task(self,
             sky_env_vars_dict_str = '\n'.join(
                 f'sky_env_vars_dict[{k!r}] = {v!r}'
                 for k, v in env_vars.items())
+        if job_run_id is not None:
+            sky_env_vars_dict_str += (
+                f'\nsky_env_vars_dict[{constants.JOB_ID_ENV_VAR!r}]'
+                f' = {job_run_id!r}')

         logger.debug('Added Task with options: '
                      f'{name_str}{cpu_str}{resources_str}{num_gpus_str}')
@@ -379,10 +386,18 @@ def add_ray_task(self,
         log_path = os.path.expanduser({log_path!r})

         if script is not None:
+            sky_env_vars_dict['SKYPILOT_NUM_GPUS_PER_NODE'] = {int(math.ceil(num_gpus))!r}
+            # Environment starting with `SKY_` is deprecated.
             sky_env_vars_dict['SKY_NUM_GPUS_PER_NODE'] = {int(math.ceil(num_gpus))!r}
+
             ip = gang_scheduling_id_to_ip[{gang_scheduling_id!r}]
+            sky_env_vars_dict['SKYPILOT_NODE_RANK'] = ip_rank_map[ip]
+            # Environment starting with `SKY_` is deprecated.
             sky_env_vars_dict['SKY_NODE_RANK'] = ip_rank_map[ip]
-            sky_env_vars_dict['SKY_JOB_ID'] = {self.job_id}
+
+            sky_env_vars_dict['SKYPILOT_INTERNAL_JOB_ID'] = {self.job_id}
+            # Environment starting with `SKY_` is deprecated.
+            sky_env_vars_dict['SKY_INTERNAL_JOB_ID'] = {self.job_id}

             futures.append(run_bash_command_with_log \\
                 .options({name_str}{cpu_str}{resources_str}{num_gpus_str}) \\
@@ -3002,12 +3017,16 @@ def _execute_task_one_node(self, handle: ResourceHandle,
             run_fn_name = task.run.__name__
             codegen.register_run_fn(run_fn_code, run_fn_name)

+        job_run_id = common_utils.get_global_job_id(
+            self.run_timestamp, cluster_name=handle.cluster_name, job_id=job_id)
+
         command_for_node = task.run if isinstance(task.run, str) else None
         use_sudo = isinstance(handle.launched_resources.cloud, clouds.Local)
         codegen.add_ray_task(
             bash_script=command_for_node,
             env_vars=task.envs,
             task_name=task.name,
+            job_run_id=job_run_id,
             ray_resources_dict=backend_utils.get_task_demands_dict(task),
             log_path=log_path,
             use_sudo=use_sudo)
@@ -3061,6 +3080,10 @@ def _execute_task_n_nodes(self, handle: ResourceHandle, task: task_lib.Task,
         run_fn_code = textwrap.dedent(inspect.getsource(task.run))
         run_fn_name = task.run.__name__
         codegen.register_run_fn(run_fn_code, run_fn_name)
+
+        job_run_id = common_utils.get_global_job_id(
+            self.run_timestamp, cluster_name=handle.cluster_name, job_id=job_id)
+
         # TODO(zhwu): The resources limitation for multi-node ray.tune and
         # horovod should be considered.
         for i in range(num_actual_nodes):
@@ -3075,6 +3098,7 @@ def _execute_task_n_nodes(self, handle: ResourceHandle, task: task_lib.Task,
                 bash_script=command_for_node,
                 env_vars=task.envs,
                 task_name=name,
+                job_run_id=job_run_id,
                 ray_resources_dict=accelerator_dict,
                 log_path=log_path,
                 gang_scheduling_id=i,
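The job_run_id above is produced by common_utils.get_global_job_id, whose body is not shown in this diff. A hypothetical sketch of the id format, inferred only from the documented example value "sky-2022-10-06-05-17-09-750781_spot_id-22" and the call sites in this commit; the real implementation may differ:

```python
# Illustrative reconstruction of the id format (not the actual SkyPilot code):
# timestamp, then cluster name, then "_id-" and the numeric job id.
def global_job_id(run_timestamp: str, cluster_name: str, job_id: int) -> str:
    return f"{run_timestamp}_{cluster_name}_id-{job_id}"

# Matches the example value shown in the spot-jobs docs above.
print(global_job_id("sky-2022-10-06-05-17-09-750781", "spot", 22))
```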

sky/callbacks/sky_callback/utils.py (+1 -1)

@@ -1,7 +1,7 @@
 import os
 from typing import Optional

-DISABLE_CALLBACK = os.environ.get('SKY_DISABLE_CALLBACK',
+DISABLE_CALLBACK = os.environ.get('SKYPILOT_DISABLE_CALLBACK',
                                   'False').lower() in ('true', '1')

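The renamed flag keeps the same truthiness rule: only "true" or "1" (case-insensitive) enable it. A small sketch of that parsing pattern generalized into a helper; the helper name is illustrative, not part of the codebase:

```python
import os

def env_flag(name: str, default: str = "False") -> bool:
    # Same rule as the diff above: anything other than "true"/"1" is False,
    # including "yes", "on", or an empty string.
    return os.environ.get(name, default).lower() in ("true", "1")

os.environ["SKYPILOT_DISABLE_CALLBACK"] = "True"
print(env_flag("SKYPILOT_DISABLE_CALLBACK"))
```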
sky/execution.py

+1-1
Original file line numberDiff line numberDiff line change
@@ -548,7 +548,7 @@ def _maybe_translate_local_file_mounts_and_sync_up(
548548
# Translate the workdir and local file mounts to cloud file mounts.
549549
# ================================================================
550550
task = copy.deepcopy(task)
551-
run_id = common_utils.get_run_id()[:8]
551+
run_id = common_utils.get_usage_run_id()[:8]
552552
original_file_mounts = task.file_mounts if task.file_mounts else {}
553553
original_storage_mounts = task.storage_mounts if task.storage_mounts else {}
554554

sky/skylet/constants.py (+2 -0)

@@ -8,3 +8,5 @@
 UNINITIALIZED_ONPREM_CLUSTER_MESSAGE = (
     'Found uninitialized local cluster {cluster}. Run this '
     'command to initialize it locally: sky launch -c {cluster} \'\'')
+
+JOB_ID_ENV_VAR = 'SKYPILOT_JOB_ID'

sky/spot/controller.py (+4 -4)

@@ -13,10 +13,12 @@
 from sky import sky_logging
 from sky.backends import backend_utils
 from sky.backends import cloud_vm_ray_backend
+from sky.skylet import constants
 from sky.skylet import job_lib
 from sky.spot import recovery_strategy
 from sky.spot import spot_state
 from sky.spot import spot_utils
+from sky.utils import common_utils

 logger = sky_logging.init_logger(__name__)

@@ -37,11 +39,9 @@ def __init__(self, job_id: int, task_yaml: str,
         # Add a unique identifier to the task environment variables, so that
         # the user can have the same id for multiple recoveries.
         # Example value: sky-2022-10-04-22-46-52-467694_id-17
-        # TODO(zhwu): support SKYPILOT_RUN_ID for normal jobs as well, so
-        # the use can use env_var for normal jobs.
         task_envs = self._task.envs or {}
-        task_envs['SKYPILOT_RUN_ID'] = (f'{self.backend.run_timestamp}'
-                                        f'_id-{self._job_id}')
+        task_envs[constants.JOB_ID_ENV_VAR] = common_utils.get_global_job_id(
+            self.backend.run_timestamp, 'spot', self._job_id)
         self._task.set_envs(task_envs)

         spot_state.set_submitted(

sky/templates/gcp-ray.yml.j2 (+4 -4)

@@ -171,15 +171,15 @@ head_start_ray_commands:
   # Line "which prlimit ..": increase the limit of the number of open files for the raylet process, as the `ulimit` may not take effect at this point, because it requires
   # all the sessions to be reloaded. This is a workaround.
   - ((ps aux | grep -v nohup | grep -v grep | grep -q -- "python3 -m sky.skylet.skylet") || nohup python3 -m sky.skylet.skylet >> ~/.sky/skylet.log 2>&1 &);
-    export SKY_NUM_GPUS=0 && which nvidia-smi > /dev/null && SKY_NUM_GPUS=$(nvidia-smi --query-gpu=index,name --format=csv,noheader | wc -l);
-    ray stop; ray start --disable-usage-stats --head --port=6379 --object-manager-port=8076 --autoscaling-config=~/ray_bootstrap_config.yaml {{"--resources='%s'" % custom_resources if custom_resources}} --num-gpus=$SKY_NUM_GPUS || exit 1;
+    export SKYPILOT_NUM_GPUS=0 && which nvidia-smi > /dev/null && SKYPILOT_NUM_GPUS=$(nvidia-smi --query-gpu=index,name --format=csv,noheader | wc -l);
+    ray stop; ray start --disable-usage-stats --head --port=6379 --object-manager-port=8076 --autoscaling-config=~/ray_bootstrap_config.yaml {{"--resources='%s'" % custom_resources if custom_resources}} --num-gpus=$SKYPILOT_NUM_GPUS || exit 1;
     which prlimit && for id in $(pgrep -f raylet/raylet); do sudo prlimit --nofile=1048576:1048576 --pid=$id || true; done;

 # Worker commands are needed for TPU VM Pods
 {%- if num_nodes > 1 or tpu_vm %}
 worker_start_ray_commands:
-  - SKY_NUM_GPUS=0 && which nvidia-smi > /dev/null && SKY_NUM_GPUS=$(nvidia-smi --query-gpu=index,name --format=csv,noheader | wc -l);
-    ray stop; ray start --disable-usage-stats --address=$RAY_HEAD_IP:6379 --object-manager-port=8076 {{"--resources='%s'" % custom_resources if custom_resources}} --num-gpus=$SKY_NUM_GPUS || exit 1;
+  - SKYPILOT_NUM_GPUS=0 && which nvidia-smi > /dev/null && SKYPILOT_NUM_GPUS=$(nvidia-smi --query-gpu=index,name --format=csv,noheader | wc -l);
+    ray stop; ray start --disable-usage-stats --address=$RAY_HEAD_IP:6379 --object-manager-port=8076 {{"--resources='%s'" % custom_resources if custom_resources}} --num-gpus=$SKYPILOT_NUM_GPUS || exit 1;
     which prlimit && for id in $(pgrep -f raylet/raylet); do sudo prlimit --nofile=1048576:1048576 --pid=$id || true; done;
 {%- else %}
 worker_start_ray_commands: []

sky/templates/spot-controller.yaml.j2 (+1 -1)

@@ -35,7 +35,7 @@ setup: |
 run: |
   python -u -m sky.spot.controller \
     {{remote_user_yaml_prefix}}/{{cluster_name}}.yaml \
-    --job-id $SKY_JOB_ID {% if retry_until_up %}--retry-until-up{% endif %}
+    --job-id $SKYPILOT_INTERNAL_JOB_ID {% if retry_until_up %}--retry-until-up{% endif %}

 envs:
   SKYPILOT_USAGE_USER_ID: {{logging_user_hash}}

sky/usage/usage_lib.py (+1 -1)

@@ -77,7 +77,7 @@ def __init__(self) -> None:
         super().__init__(constants.USAGE_MESSAGE_SCHEMA_VERSION)
         # Message identifier.
         self.user: str = _get_user_hash()
-        self.run_id: str = common_utils.get_run_id()
+        self.run_id: str = common_utils.get_usage_run_id()
         self.sky_version: str = sky.__version__

 # Entry
