
CLI Reference

Docker Usage

If provided with a Docker image, you can run any of the commands mentioned in this reference by prefixing it with:

docker run --gpus all -v $(pwd):/directory
This mounts your current working directory onto the path /directory inside the container, allowing you to run the beyondllm commands on files located in your current working directory. For example, to run the fine-tune command via Docker:
docker run --gpus all -d -v $(pwd):/directory beyondllm fine-tune \
    --experiment-path <experiment-path>

Docker Usage with a License key

If provided with a Docker image and a license key, you can run any of the commands mentioned in this reference by prefixing it with:

docker run -v <LICENSE-KEY-DIR-PATH>:/beyondllm/runtime_key -e PYARMOR_RKEY=/beyondllm/runtime_key -v $(pwd):/directory
This will do two things:

  • Mount the local path of your license key onto the path /beyondllm/runtime_key inside the Docker container, then set the environment variable PYARMOR_RKEY to point at that mounted directory containing your license key.

  • Mount your current working directory onto the path /directory inside the container, allowing you to run the beyondllm commands on files located in your current working directory.

For example, to run the merge-lora command via Docker (assuming your license key is stored locally under /keys/beyondllm):

docker run --gpus all -it -v /keys/beyondllm:/beyondllm/runtime_key -e PYARMOR_RKEY=/beyondllm/runtime_key -v $(pwd):/directory beyondllm merge-lora \
    --experiment-path <experiment-path>

beyondllm

BeyondLLM is a tool designed for fine-tuning various LLMs with customizable configurations.

Usage:

beyondllm [OPTIONS] COMMAND [ARGS]...

Options:

  --install-completion  Install completion for the current shell.
  --show-completion     Show completion for the current shell, to copy it or
                        customize the installation.
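
For example, to set up tab completion for the current shell using the option above:

beyondllm --install-completion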

configure-finetune

Create and configure the finetune configuration YAML file that will be used for finetuning.

Usage:

beyondllm configure-finetune [OPTIONS]

Options:

  --experiment-path PATH          The experiment path where you save all the
                                  experiment-related outputs and
                                  configurations  \[required]
  --model-path TEXT               The base model that will be fine-tuned. Can
                                  be an HF repo name or a local model path.
                                  \[required]
  --model-type TEXT               The type of the model  \[required]
  --tokenizer-type TEXT           The type of the tokenizer  \[required]
  --type-derived-model [llama|falcon|mistral|none]
                                  Set the derived model type. Can be llama,
                                  falcon, or mistral, or none if it is not
                                  one of the available options.  \[required]
  --trust-remote-code / --no-trust-remote-code
                                  Set to True if the model is from an
                                  unknown source  \[required]
  --max-length INTEGER            The maximum token length for prompts. Any
                                  example in the dataset that exceeds
                                  max-length will be dropped.  \[required]
  --ds-dir-path TEXT              The input dir for the datasets to be packed
                                  and tokenized.  \[required]
  --ds-task-type [completion|conversation]
                                  The task type of training, either completion
                                  or conversation.  \[required]
  --packed-dataset-output TEXT    The path to save the packed and tokenized
                                  dataset.
  --chat-template [alpaca|inst|chatml|gemma|cohere|llama3|phi_3|none]
                                  The chat template in case you want to do
                                  instruction finetuning.  \[default: none]
  --model-finetuned-output TEXT   The path to save the finetuned model.
  --finetune-adapter-type [qlora|lora|full]
                                  The type of finetuning; can be lora, qlora,
                                  or full  \[default: qlora]
  --lora-r INTEGER                LoRA decomposition matrix rank value
                                  \[default: 8]
  --lora-alpha INTEGER            LoRA scaling factor  \[default: 16]
  --gradient-accumulation-steps INTEGER
                                  Number of gradient accumulation steps
                                  \[default: 1]
  --micro-batch-size INTEGER      Size of micro batches for training
                                  \[default: 2]
  --eval-batch-size INTEGER       Batch size for evaluation  \[default: 2]
  --num-epochs INTEGER            Number of training epochs  \[default: 3]
  --warmup-steps INTEGER          Number of warmup steps  \[default: 10]
  --learning-rate FLOAT           Learning rate value  \[default: 0.0002]
  --save-steps INTEGER            Number of steps between checkpoint saves
                                  \[default: 100]
  --eval-steps INTEGER            Number of steps between evaluations
                                  \[default: 100]
  --resume-from-checkpoint TEXT   Checkpoint path to resume training
  --special-tokens TEXT           Special tokens to add to the tokenizer
                                  \[default: bos_token:<s>,eos_token:</s>,unk_
                                  token:<unk>]
  --tokens TEXT                   New tokens to add to the tokenizer
  --deepspeed PATH                Deepspeed configuration file
  --exit-if-no-gpu / --no-exit-if-no-gpu
                                  Set to True to exit the finetune
                                  configuration if no GPU is found
                                  \[default: exit-if-no-gpu]
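
For example, a minimal configure-finetune invocation might look like the following. The experiment path, model name, model/tokenizer types, and dataset directory below are placeholder values; substitute the ones that match your setup:

beyondllm configure-finetune \
    --experiment-path ./experiments/exp1 \
    --model-path meta-llama/Llama-2-7b-hf \
    --model-type LlamaForCausalLM \
    --tokenizer-type LlamaTokenizer \
    --type-derived-model llama \
    --no-trust-remote-code \
    --max-length 2048 \
    --ds-dir-path ./datasets \
    --ds-task-type completion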

fine-tune

Fine-tune the model using both the fine-tune and accelerate configurations found in the experiment path experiment-path.

Usage:

beyondllm fine-tune [OPTIONS]

Options:

  --experiment-path PATH  The experiment path containing all
                          experiment-related outputs and configurations
                          needed for fine-tuning.  \[required]
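
For example, assuming the experiment was configured under the placeholder path ./experiments/exp1:

beyondllm fine-tune \
    --experiment-path ./experiments/exp1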

get-config

Retrieve the value of a configuration parameter.

Usage:

beyondllm get-config [OPTIONS] PARAMETER

Options:

  PARAMETER  \[required]
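
For example, to read back a single value (the parameter name learning_rate below is illustrative; use a key that actually exists in your configuration):

beyondllm get-config learning_rate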

merge-lora

Merge the LoRA adapter into a model based on the finetune-config at experiment-path.

Usage:

beyondllm merge-lora [OPTIONS]

Options:

  --experiment-path PATH  The experiment path containing all
                          experiment-related outputs and configurations
                          needed for merging the LoRA adapter.  \[required]
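
For example, using the same placeholder experiment path as above:

beyondllm merge-lora \
    --experiment-path ./experiments/exp1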

serve-docs

Run the mkdocs server to serve the documentation site.

Usage:

beyondllm serve-docs [OPTIONS]
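
For example, to serve the documentation locally (MkDocs listens on http://127.0.0.1:8000 by default, though the address may differ in your setup):

beyondllm serve-docs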