Generative AI for Enterprise using AWS Foundation Models and Amazon Kendra
This session was presented at AWS Community Day (Virtual) Hungary. Thanks to Madhu Kumar, AWS Hero, for this opportunity.
Summary:
This content discusses the transformative impact of Generative AI (GenAI) and large language models (LLMs), and shows how to build this as a serverless Retrieval Augmented Generation (RAG) application using Amazon Bedrock and Amazon Titan. These technologies are changing how developers and enterprises address challenges in natural language processing and understanding, particularly in creating advanced conversational AI experiences for customer service and enhancing employee productivity. The session emphasizes RAG techniques to ensure accurate responses: the GenAI application is restricted to company data, and responses are filtered based on each end user's content access permissions.
Ref: https://aws.amazon.com/blogs/machine-learning/quickly-build-high-accuracy-generative-ai-applications-on-enterprise-data-using-amazon-kendra-langchain-and-large-language-models/
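The RAG flow described above (retrieve from an enterprise index, ground the prompt in those passages, then generate) can be sketched with boto3. This is a minimal sketch, not the session's exact code: the index ID and model ID are placeholders, and the Kendra `retrieve` and Bedrock `invoke_model` calls assume an AWS account with both services configured.

```python
import json

KENDRA_INDEX_ID = "your-kendra-index-id"   # placeholder: your Amazon Kendra index
MODEL_ID = "amazon.titan-text-express-v1"  # placeholder: a Bedrock-hosted Titan model

def build_prompt(passages, question):
    # Ground the model in retrieved passages so it answers from company data only.
    context = "\n".join(passages)
    return (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

def rag_answer(question):
    import boto3  # imported here so the prompt helper stays usable without the AWS SDK
    kendra = boto3.client("kendra")
    bedrock = boto3.client("bedrock-runtime")
    # 1. Retrieve: semantic search over the enterprise index.
    result = kendra.retrieve(IndexId=KENDRA_INDEX_ID, QueryText=question)
    passages = [item["Content"] for item in result["ResultItems"][:3]]
    # 2. Augment the prompt and 3. generate with the Bedrock model.
    response = bedrock.invoke_model(
        modelId=MODEL_ID,
        body=json.dumps({"inputText": build_prompt(passages, question)}),
    )
    return json.loads(response["body"].read())["results"][0]["outputText"]
```

Because the prompt instructs the model to use only the retrieved context, answers stay anchored to enterprise content rather than the model's general training data.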
Agenda:
Introduction to Generative AI and Large Language Models (LLMs)
Importance of Retrieval Augmented Generation (RAG) for GenAI applications
Addressing challenges in content retrieval and the role of Amazon Kendra
Demonstrating the implementation of a RAG workflow using Amazon Kendra and LLMs
Overview of the solution architecture, including the use of Amazon Kendra for semantic search
Integration with Amazon Bedrock and the upcoming Amazon Titan for additional benefits
Best practices for GenAI app development, including prompt engineering and chat history management
Introduction to open-source frameworks like LangChain and the AmazonKendraRetriever class
Deployment guide for implementing the solution in an AWS account
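The access-permission filtering mentioned in the summary can be sketched with the `UserContext` parameter of Kendra's Query API: when the caller's identity is passed along, Kendra drops documents that identity is not allowed to read. The user ID and group names below are hypothetical examples.

```python
def kendra_query_args(index_id, query_text, user_id, groups):
    """Build kwargs for kendra.query() so results honor the calling
    user's document access permissions (identity values are examples)."""
    return {
        "IndexId": index_id,
        "QueryText": query_text,
        # Kendra filters results against document ACLs for this identity.
        "UserContext": {"UserId": user_id, "Groups": groups},
    }

# Usage sketch (requires an AWS account with a configured Kendra index):
# import boto3
# kendra = boto3.client("kendra")
# response = kendra.query(**kendra_query_args(
#     "your-kendra-index-id", "What is our leave policy?",
#     "jane@example.com", ["HR"]))
```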