%pip install --upgrade sagemaker
%pip install sagemaker-experiments
Train Falcon with near-linear scaling using Sharded Data Parallelism technique in SageMaker Model Parallelism Library
This notebook’s CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook.
In this notebook, you’ll learn how to train the Hugging Face Transformers Falcon model with the sharded data parallelism technique supported by SageMaker’s Model Parallelism library (SMP), using PyTorch 2.0 and the GLUE/SST2 dataset on SageMaker.
Sharded data parallelism is a distributed training technique that splits the model parameters, gradients, and optimizer states across the GPUs in a data parallel group. It is purpose-built for extreme-scale models and leverages the Amazon in-house MiCS technology, which achieves near-linear scaling efficiency. For large models that cannot fit into a single GPU, we recommend using sharded data parallelism together with Activation Checkpointing and Activation Offloading in SMP first, before turning to other techniques such as tensor parallelism or pipeline parallelism.
This feature is also compatible with Tensor Parallelism.
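As a preview of what this looks like in practice, the sketch below shows how sharded data parallelism and activation offloading are switched on through the modelparallel parameters of a SageMaker PyTorch estimator's distribution argument. The values here are illustrative only; the complete, working configuration is built step by step later in this notebook, and activation checkpointing itself is applied inside the training script rather than in this dictionary.
# Illustrative sketch only -- the full estimator configuration appears later in this notebook.
distribution = {
    "mpi": {"enabled": True, "processes_per_host": 8},
    "smdistributed": {
        "modelparallel": {
            "enabled": True,
            "parameters": {
                "sharded_data_parallel_degree": 16,  # shard parameters, gradients, and optimizer states across 16 GPUs
                "offload_activations": True,  # offload activations to CPU memory
                "delayed_parameter_initialization": True,
                "partitions": 1,  # required by the SDK even when pipeline parallelism is not used
                "bf16": True,
            },
        }
    },
}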
This notebook is accompanied by the following files:
- train.py: The entry point script that’ll be passed to the SageMaker PyTorch estimator later in this notebook when launching the training job. This script is prepared to run an end-to-end training of the Falcon model with SMP, with settings for sharded data parallelism applied, and implemented with code lines to save, load, and fine-tune the model. You can follow the comments throughout the script to learn where the SMP APIs and code modifications are implemented.
- data_pipeline.py: Data pipeline functions to prepare the training dataset.
- learning_rate.py: Functions for the learning rate schedule.
- requirements.txt: Installs the dependencies, including Hugging Face transformers.
- memory_tracker.py: Functions to track memory usage.
- model_config.py: Functions to get model configuration information.
- sdp_utils.py: Utility functions for sharded data parallelism.
Additional resources
To learn more about the SageMaker model parallelism library, see Model Parallel Distributed Training with SageMaker Distributed.
To learn more about using the SageMaker Python SDK with PyTorch, see Using PyTorch with the SageMaker Python SDK.
To learn more about launching a training job in Amazon SageMaker with your own training image, see Use Your Own Training Algorithms.
To learn more about sharded data parallelism, check Sharded Data Parallelism or the blog Near-linear scaling of gigantic-model training on AWS.
Prerequisites
You must create an S3 bucket to store the input data for training. This bucket must be located in the same AWS Region that you choose to launch your training job in. To learn how to create an S3 bucket, see Create your first S3 bucket in the Amazon S3 documentation.
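If you prefer to create a dedicated bucket programmatically rather than through the console, a minimal boto3 sketch is shown below. The bucket name is a placeholder you would replace with a globally unique name; note that us-east-1 rejects an explicit LocationConstraint.
import boto3

region = boto3.session.Session().region_name
s3 = boto3.client("s3", region_name=region)

bucket_name = "my-smp-falcon-training-data"  # placeholder; must be globally unique

# us-east-1 is the default location and does not accept a LocationConstraint
if region == "us-east-1":
    s3.create_bucket(Bucket=bucket_name)
else:
    s3.create_bucket(
        Bucket=bucket_name,
        CreateBucketConfiguration={"LocationConstraint": region},
    )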
Amazon SageMaker initialization
Run the following cell to import SageMaker modules and retrieve information of your current SageMaker work environment, such as your AWS account ID, the AWS Region, and the ARN of your Amazon SageMaker execution role. Upgrade SageMaker SDK to the latest version.
NOTE: This step might require a kernel restart.
%%time
import os
import boto3
import sagemaker
from sagemaker import get_execution_role
from sagemaker.pytorch import PyTorch
role = get_execution_role()  # provide a pre-existing role ARN as an alternative to creating a new role
print(f"SageMaker Execution Role: {role}")

client = boto3.client("sts")
account = client.get_caller_identity()["Account"]
print(f"AWS account: {account}")

session = boto3.session.Session()
region = session.region_name
print(f"AWS region: {region}")

sm_boto_client = boto3.client("sagemaker")
sagemaker_session = sagemaker.session.Session(boto_session=session)

# get default bucket
default_bucket = sagemaker_session.default_bucket()
print()
print("Default bucket for this session: ", default_bucket)
Download and prepare GLUE/SST2 data
Here you will download and prepare the GLUE/SST2 dataset, and then copy the files to S3.
Install the Hugging Face Transformers and Datasets libraries
! pip install -q datasets transformers==4.21.0
import datasets
from datasets import load_dataset, load_from_disk, load_metric
from sagemaker.pytorch import PyTorch
import transformers
import logging
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
)
from transformers.testing_utils import CaptureLogger
logger = logging.getLogger(__name__)
Load data
This section loads the GLUE/SST2 dataset and splits it into training and validation datasets.
hyperparameters = {
    "dataset_name": "glue",
    "dataset_config_name": "sst2",
    "do_train": True,
    "do_eval": True,
    "cache_dir": "tmp",
}

raw_datasets = load_dataset(
    hyperparameters["dataset_name"],
    hyperparameters["dataset_config_name"],
)

if "validation" not in raw_datasets.keys():
    raw_datasets["validation"] = load_dataset(
        hyperparameters["dataset_name"],
        hyperparameters["dataset_config_name"],
        split="train[:5%]",
        cache_dir=hyperparameters["cache_dir"],
    )

    raw_datasets["train"] = load_dataset(
        hyperparameters["dataset_name"],
        hyperparameters["dataset_config_name"],
        split="train[5%:]",
        cache_dir=hyperparameters["cache_dir"],
    )
Load tokenizer
Nearly every NLP task begins with a tokenizer. A tokenizer converts your text data into a format (token) that can be processed by the NLP model. The following cell loads a tokenizer for Falcon using AutoTokenizer.from_pretrained().
tokenizer_kwargs = {
    "cache_dir": hyperparameters["cache_dir"],
}

tokenizer = AutoTokenizer.from_pretrained(
    "tiiuae/falcon-40b", trust_remote_code=True, **tokenizer_kwargs
)
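As an optional sanity check (not part of the original notebook flow), you can run the tokenizer on a short sentence to inspect the token IDs it produces and confirm it round-trips back to text:
sample = "SageMaker makes it easy to train large language models."
encoded = tokenizer(sample)
print(encoded["input_ids"])  # token IDs that would be fed to the model
print(tokenizer.decode(encoded["input_ids"]))  # decode back to text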
Preprocess data
The following two cells set up a function to run the tokenizer and group texts into chunks smaller than the block size.
def tokenize_function(examples):
    tok_logger = transformers.utils.logging.get_logger("transformers.tokenization_utils_base")

    with CaptureLogger(tok_logger) as cl:
        output = tokenizer(examples[text_column_name])
    # clm input could be much much longer than block_size
    if "Token indices sequence length is longer than the" in cl.out:
        tok_logger.warning(
            "^^^^^^^^^^^^^^^^ Please ignore the warning above - this long input will be chunked into smaller bits before being passed to the model."
        )
    return output


# Main data processing function that will concatenate all texts from our dataset and generate chunks of block_size.
def group_texts(examples):
    # Concatenate all texts.
    concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
    # customize this part to your needs.
    if total_length >= block_size:
        total_length = (total_length // block_size) * block_size
    # Split by chunks of max_len.
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated_examples.items()
    }
    result["labels"] = result["input_ids"].copy()
    return result
= raw_datasets["train"].column_names
column_names = "text" if "text" in column_names else column_names[0]
text_column_name
# since this will be pickled to avoid _LazyModule error in Hasher force logger loading before tokenize_function
= transformers.utils.logging.get_logger("transformers.tokenization_utils_base")
tok_logger
= raw_datasets.map(
tokenized_datasets
tokenize_function,=True,
batched=1,
num_proc=column_names,
remove_columns="Running tokenizer on dataset",
desc
)
= tokenizer.model_max_length
block_size if block_size > 1024:
logger.warning(f"The tokenizer picked seems to have a very large `model_max_length` ({tokenizer.model_max_length}). "
"Picking 1024 instead. You can change that default value by passing --block_size xxx."
)= 1024
block_size else:
if block_size > tokenizer.model_max_length:
logger.warning(f"The block_size passed ({block_size}) is larger than the maximum length for the model"
f"({tokenizer.model_max_length}). Using block_size={tokenizer.model_max_length}."
)= min(block_size, tokenizer.model_max_length)
block_size
= tokenized_datasets.map(
lm_datasets
group_texts,=True,
batched# num_proc=args.preprocessing_num_workers,
=f"Grouping texts in chunks of {block_size}",
desc )
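If you want to verify the grouping step (an optional check, not required by the rest of the notebook), you can confirm that a training example now carries block_size token IDs (1024 by default here; the very last chunk of a batch may be shorter) and that the labels are a copy of the inputs, as expected for causal language modeling:
example = lm_datasets["train"][0]
print(len(example["input_ids"]), block_size)  # the first chunk should match block_size
assert example["labels"] == example["input_ids"]  # causal LM labels are a copy of the inputs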
Set additional hyperparameters and S3 paths for mapping the train and validation datasets properly depending on the phase (training or validation) of the training job in each epoch.
if hyperparameters["do_train"]:
if "train" not in tokenized_datasets:
raise ValueError("--do_train requires a train dataset")
= lm_datasets["train"]
train_dataset
if hyperparameters["do_eval"]:
if "validation" not in tokenized_datasets:
raise ValueError("--do_eval requires a validation dataset")
= lm_datasets["validation"] eval_dataset
training_dataset_location = None
validation_dataset_location = None

if hyperparameters["do_train"]:
    train_dataset.to_json("./training.json")
    training_dataset_location = "s3://{}/dataset/train/".format(default_bucket)

if hyperparameters["do_eval"]:
    eval_dataset.to_json("./validation.json")
    validation_dataset_location = "s3://{}/dataset/validation/".format(default_bucket)

if training_dataset_location is not None:
    command = "aws s3 cp ./training.json {}".format(training_dataset_location)
    os.system(command)

if validation_dataset_location is not None:
    command = "aws s3 cp ./validation.json {}".format(validation_dataset_location)
    os.system(command)

if hyperparameters["do_train"]:
    command = "rm ./training.json"
    os.system(command)

if hyperparameters["do_eval"]:
    command = "rm ./validation.json"
    os.system(command)
%store training_dataset_location
%store validation_dataset_location
%store
Specify Amazon S3 bucket paths
Here you need to specify the paths for the training data to be used by your job. The bucket used must be in the same Region as where training will run. In the cells above you downloaded the GLUE/SST2 training and validation split datasets and uploaded the JSON files to an S3 bucket in your account. This example will train on those JSON files.
After you successfully run this example sharded data parallel training job, you can modify the S3 bucket to where your own dataset is stored.
%store -r training_dataset_location
%store -r validation_dataset_location
# if you're bringing your own data, uncomment the following lines and specify the locations there
# training_dataset_location = YOUR_S3_BUCKET/training
# validation_dataset_location = YOUR_S3_BUCKET/validation
s3_train_bucket = training_dataset_location
s3_test_bucket = validation_dataset_location
The following S3 bucket will store the output artifacts of the training job. You can modify this as needed.
= f"s3://sagemaker-{region}-{account}/smp-tensorparallel-outputdir/" s3_output_bucket
Define data channels for SageMaker Training using Amazon S3
In this step, define SageMaker training data channels to the S3 buckets.
# Set use_fsx to False by default
# Set below var to True if you want to use fsx (see next cell)
use_fsx = False

if not use_fsx:
    if s3_train_bucket != None:
        train = sagemaker.inputs.TrainingInput(
            s3_train_bucket, distribution="FullyReplicated", s3_data_type="S3Prefix"
        )
        data_channels = {"train": train}
    else:
        data_channels = {"train": mock_data}
    if s3_test_bucket != None:
        test = sagemaker.inputs.TrainingInput(
            s3_test_bucket, distribution="FullyReplicated", s3_data_type="S3Prefix"
        )
        data_channels["test"] = test
    else:
        data_channels["test"] = mock_data
print(data_channels)
(Optional) Set up and use Amazon FSx for data channels and checkpoints
While the previous option of using Amazon S3 is easier to set up, using an FSx for Lustre file system can be beneficial for performance when dealing with large input sizes and large model sizes. If you are using models above 13B parameters, checkpointing should be done using FSx.
Please see the instructions in Distributed Training of Mask-RCNN in Amazon SageMaker Using FSx to create an FSx for Lustre file system and import the dataset from the S3 bucket to your FSx file system. Note that the FSx file system must be created in a private subnet with an internet gateway to ensure that the training job has access to the internet. For general guidance on setting up an FSx for Lustre file system as a data input channel, see Configure Data Input Channel to Use Amazon FSx for Lustre.
# Instructions obtained from:
# https://github.com/aws/amazon-sagemaker-examples/blob/master/advanced_functionality/distributed_tensorflow_mask_rcnn/mask-rcnn-scriptmode-fsx.ipynb
if use_fsx:
    from sagemaker.inputs import FileSystemInput

    # Specify FSx Lustre file system id.
    file_system_id = "<your-file-system-id>"

    # Specify the SG and subnet used by the FSX, these are passed to SM Estimator so jobs use this as well
    fsx_security_group_id = "<your-security-group-id>"
    fsx_subnet = "<your-subnet>"

    # Specify directory path for input data on the file system.
    # You need to provide normalized and absolute path below.
    # Your mount name can be provided by you when creating fsx, or generated automatically.
    # You can find this mount_name on the FSX page in console.
    # Example of fsx generated mount_name: "3x5lhbmv"
    base_path = "<your-mount-name>"

    # Specify your file system type.
    file_system_type = "FSxLustre"

    train = FileSystemInput(
        file_system_id=file_system_id,
        file_system_type=file_system_type,
        directory_path=base_path,
        file_system_access_mode="rw",
    )

    data_channels = {"train": train, "test": train}
Set hyperparameters, metric definitions, and MPI options
The following hyperparameters dictionary passes arguments to the training script (train.py) and sets the model parallel configuration when creating the training job.
You can also add custom mpi flags. By default, we have --mca btl_vader_single_copy_mechanism none to remove unnecessary logs.
Next, we add a base metric definition to enable the metric upload in SageMaker. You can add any further metric definitions.
Note that we add the sharded_data_parallel_degree parameter to the hyperparameters dictionary. This will be parsed and used when we configure a SageMaker PyTorch estimator to activate sharded data parallelism.
hyperparameters = {
    "max_steps": 100,
    "seed": 12345,
    "fp16": 0,
    "bf16": 1,
    "lr": 2.0e-4,
    "lr_decay_iters": 125000,
    "min_lr": 0.00001,
    "lr-decay-style": "linear",
    "warmup": 0.01,
    "num_kept_checkpoints": 5,
    "checkpoint_freq": 200,
    "logging_freq": 1,
    "save_final_full_model": 0,
    "delayed_param": 1,
    "offload_activations": 0,
    "activation_loading_horizon": 4,
    "gradient_accumulation": 1,
    "validation_freq": 200,
    "train_batch_size": 4,
    "val_batch_size": 4,
    "zipped_data": 0,
    "epochs": 100,
    "use_distributed_transformer": 0,
    "model_type": "falcon",
    # parameters for sharded data parallelism
    "sharded_data_parallel_degree": 16,
}

if use_fsx:
    # make sure to update paths for training-dir and test-dir based on the paths of datasets in fsx
    # If you want to resume training, set checkpoint-dir to the same path as a previous job.
    SM_TRAIN_DIR = "/opt/ml/input/data/train"
    hyperparameters["checkpoint-dir"] = f"{SM_TRAIN_DIR}/checkpointdir-job2"
    hyperparameters["model-dir"] = f"{SM_TRAIN_DIR}/modeldir-job2"
    hyperparameters["training-dir"] = f"{SM_TRAIN_DIR}/datasets/pytorch_gpt/train_synthetic"
    hyperparameters["test-dir"] = f"{SM_TRAIN_DIR}/datasets/pytorch_gpt/val_synthetic"

# The checkpoint path (hyperparameters['checkpoint-dir'] or checkpoint_s3_uri) is not unique per job.
# You need to modify as needed for different runs.
# If same path is used for unrelated runs, this may increase time when downloading unnecessary checkpoints,
# and cause conflicts when loading checkpoints.

mpioptions = "-x NCCL_DEBUG=WARN -x SMDEBUG_LOG_LEVEL=ERROR "
mpioptions += (
    "-x SMP_DISABLE_D2D=1 -x SMP_D2D_GPU_BUFFER_SIZE_BYTES=1 -x SMP_NCCL_THROTTLE_LIMIT=1 "
)
mpioptions += "-x FI_EFA_USE_DEVICE_RDMA=1 -x FI_PROVIDER=efa -x RDMAV_FORK_SAFE=1"

metric_definitions = [
    {"Name": "base_metric", "Regex": "<><><><><><>"}
]  # Add your custom metric definitions
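The base_metric entry above is a placeholder regex. If you extend the training script to log metrics in a known format, you can capture them with additional definitions. The sketch below assumes a hypothetical log line such as "Loss: 2.3451"; it is not a format produced by the provided train.py.
# Hypothetical example: capture a loss value if your script logged lines like "Loss: 2.3451"
metric_definitions += [
    {"Name": "train_loss", "Regex": r"Loss: ([0-9\.]+)"},
]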
Set the model configuration.
= "falcon-7b"
model_config
if model_config == "falcon-7b":
= {
model_params "max_context_width": 2048,
"hidden_width": 4544,
"num_layers": 32,
"num_heads": 71,
"num_heads_kv": 71,
}else:
raise RuntimeError("Unknown model config")
for k, v in model_params.items():
= v hyperparameters[k]
Specify essential parameters for a SageMaker Training job
Next, you use the SageMaker Estimator class
to define a SageMaker Training Job, passing values through the following parameters for training job name, the number of EC2 instances, the instance type, and the size of the volume attached to the instances.
instance_count
instance_type
volume_size
base_job_name
Update the type and the number of EC2 instances to use
The instance type and the number of instances you specify in the instance_type and instance_count parameters, respectively, determine the total number of GPUs (world size).
\[ \text{(world size)} = \text{(the number of GPUs on a single instance)} \times \text{(the number of instances)} \]
= "ml.p4d.24xlarge"
instance_type
= 2
instance_count
# set to the number of GPUs on that instance
= 8 processes_per_host
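As a quick sanity check (a sketch, not part of the original notebook), you can compute the world size from the values above and confirm that it can accommodate the sharded_data_parallel_degree of 16 set earlier:
world_size = processes_per_host * instance_count  # 8 GPUs per ml.p4d.24xlarge x 2 instances = 16
print(f"World size: {world_size}")
assert world_size % hyperparameters["sharded_data_parallel_degree"] == 0, (
    "sharded_data_parallel_degree must evenly divide the world size"
)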
To look up the number of GPUs of different instance types, see Amazon EC2 Instance Types. Use the section Accelerated Computing to see general purpose GPU instances. Note that, for example, a given instance type p4d.24xlarge
has a corresponding instance type ml.p4d.24xlarge
in SageMaker. For SageMaker supported ml
instances and cost information, see Amazon SageMaker Pricing.
Specify a base job name
machine_str = instance_type.split(".")[1] + instance_type.split(".")[2][:3]
sharding_degree = hyperparameters["sharded_data_parallel_degree"]
base_job_name = (
    f'smp-{model_config}-{machine_str}-sdp{sharding_degree}-bs{hyperparameters["train_batch_size"]}'
)

if not use_fsx:
    # If you want to resume training, set checkpoint_s3_uri to the same path as a previous job.
    # Previous checkpoint to load must have same model config.
    checkpoint_bucket = f"s3://sagemaker-{region}-{account}/"
    checkpoint_s3_uri = (
        f"{checkpoint_bucket}/experiments/gpt_synthetic_simpletrainer_checkpoints/{base_job_name}/"
    )

print(f"base_job_name: {base_job_name} checkpoint_s3_uri: {checkpoint_s3_uri}")
Create a SageMaker PyTorch estimator
The following cell constructs a PyTorch estimator using the parameters defined above. To see how the SageMaker APIs and functions are applied to the script, see the train.py
file.
kwargs = {}
if use_fsx:
    # Use the security group and subnet that was used to create the fsx filesystem
    kwargs["security_group_ids"] = [fsx_security_group_id]
    kwargs["subnets"] = [fsx_subnet]

smp_estimator = PyTorch(
    entry_point="train.py",
    source_dir=os.getcwd(),
    role=role,
    instance_type=instance_type,
    instance_count=instance_count,
    sagemaker_session=sagemaker_session,
    distribution={
        "mpi": {
            "enabled": True,
            "processes_per_host": processes_per_host,
            "custom_mpi_options": mpioptions,
        },
        "smdistributed": {
            "modelparallel": {
                "enabled": True,
                "parameters": {
                    "ddp": True,
                    "skip_tracing": True,
                    "delayed_parameter_initialization": hyperparameters["delayed_param"] > 0,
                    "offload_activations": hyperparameters["offload_activations"] > 0,
                    "activation_loading_horizon": hyperparameters["activation_loading_horizon"],
                    "sharded_data_parallel_degree": hyperparameters["sharded_data_parallel_degree"],
                    "fp16": hyperparameters["fp16"] > 0,
                    "bf16": hyperparameters["bf16"] > 0,
                    # partitions is a required param in the current SM SDK so it needs to be passed
                    "partitions": 1,
                },
            }
        },
    },
    framework_version="2.0",
    py_version="py310",
    output_path=s3_output_bucket,
    checkpoint_s3_uri=checkpoint_s3_uri if not use_fsx else None,
    checkpoint_local_path=hyperparameters["checkpoint-dir"] if use_fsx else None,
    metric_definitions=metric_definitions,
    hyperparameters=hyperparameters,
    debugger_hook_config=False,
    disable_profiler=True,
    base_job_name=base_job_name,
    **kwargs,
)
Finally, run the estimator.fit
method to launch the SageMaker training job of the Falcon model with sharded data parallelism.
smp_estimator.fit(inputs=data_channels, logs=True)
Accessing the Training Logs
You can access the training logs from Amazon CloudWatch. Make sure to look at the logs of algo-1 because that is the main node whose output stream has the training job logs.
You can use CloudWatch to track SageMaker GPU and memory utilization during training and inference. To view the metrics and logs that SageMaker writes to CloudWatch, see SageMaker Jobs and Endpoint Metrics in the Amazon SageMaker Developer Guide.
If you are a new user of CloudWatch, see Getting Started with Amazon CloudWatch.
For additional information on monitoring and analyzing Amazon SageMaker training jobs, see Monitor and Analyze Training Jobs Using Metrics.
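In addition to the CloudWatch console, you can pull the job's log stream and any metrics captured by your metric_definitions programmatically. The following is a minimal sketch using the SageMaker Python SDK, intended to be run after the training job has started:
from sagemaker.analytics import TrainingJobAnalytics

job_name = smp_estimator.latest_training_job.job_name

# Stream the CloudWatch logs for the job (algo-1 carries the main training output)
sagemaker_session.logs_for_job(job_name, wait=False)

# Pull any metrics captured by the metric_definitions as a pandas DataFrame
df = TrainingJobAnalytics(training_job_name=job_name).dataframe()
print(df.head())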
Deploying Trained Model for Inference
In most cases, a trained model can be deployed on a single device for inference because inference only requires a small amount of memory. You can use the SMP API to create a single, unified model after training: the smp.DistributedModel.save_model() method for TensorFlow, and the smp.save() function for PyTorch.
After you build and train your models, you can deploy them to get predictions in one of two ways:
- To set up a persistent endpoint to get predictions from your models, use SageMaker hosting services. For an overview on deploying a single model or multiple models with SageMaker hosting services, see Deploy a Model on SageMaker Hosting Services.
- To get predictions for an entire dataset, use SageMaker batch transform. For an overview on deploying a model with SageMaker Batch Transform, see Get Inferences for an Entire Dataset with Batch Transform.
To learn more about deploying models for inference using SageMaker, see Deploy Models for Inference.
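As a hedged illustration of the hosting-services path (the entry point inference.py and the instance type below are placeholders, not files or settings provided with this notebook), deploying a saved model artifact could look roughly like this:
from sagemaker.pytorch import PyTorchModel

# model_data should point to the full model artifact produced by the training job;
# the exact S3 key depends on the job name (check the job's output location).
# inference.py is a hypothetical handler script you would write for Falcon.
pytorch_model = PyTorchModel(
    model_data=f"{s3_output_bucket}model.tar.gz",
    role=role,
    entry_point="inference.py",
    framework_version="2.0",
    py_version="py310",
)

predictor = pytorch_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.12xlarge",
)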
Notebook CI Test Results
This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.