LLMOps using AWS SageMaker
LLMOps focuses on the operational aspects and infrastructure needed to enhance foundation models through fine-tuning, and to integrate these refined models seamlessly into products.
While LLMOps might not seem novel to MLOps practitioners (apart from the term itself), it falls under the MLOps umbrella as a sub-category. Narrowing the scope this way helps us understand the precise demands of fine-tuning and deploying such specialized models. This session includes a demo of how to deploy a foundation model on AWS SageMaker.
This session is based on the AWS blog post: https://aws.amazon.com/blogs/machine-learning/fmops-llmops-operationalize-generative-ai-and-differences-with-mlops/
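As a rough preview of what the demo covers, the sketch below deploys a foundation model to a real-time endpoint using the SageMaker Python SDK's JumpStart interface. This is an illustrative sketch, not the session's exact demo: it assumes AWS credentials and a SageMaker execution role are configured, and the `model_id` and `instance_type` values are example choices.

```python
# Hedged sketch: deploying a JumpStart foundation model on SageMaker.
# Assumes the `sagemaker` SDK (v2.x) is installed and AWS credentials,
# region, and a SageMaker execution role are configured.
from sagemaker.jumpstart.model import JumpStartModel


def deploy_foundation_model(
    model_id: str = "huggingface-llm-falcon-7b-instruct-bf16",  # example model id
    instance_type: str = "ml.g5.2xlarge",  # example GPU instance
):
    """Create a JumpStart model and deploy it to a real-time endpoint."""
    model = JumpStartModel(model_id=model_id)
    predictor = model.deploy(
        initial_instance_count=1,
        instance_type=instance_type,
    )
    return predictor


if __name__ == "__main__":
    predictor = deploy_foundation_model()
    # Invoke the endpoint; the payload schema is model-specific.
    response = predictor.predict({"inputs": "What is LLMOps?"})
    print(response)
    # Tear down the endpoint when finished to avoid ongoing charges.
    predictor.delete_endpoint()
```

Note that deployment provisions billable infrastructure, so deleting the endpoint after experimentation is part of the operational discipline LLMOps emphasizes.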