Azure OpenAI Service provides REST API access to OpenAI’s powerful language models including the GPT-4, GPT-35-Turbo, and Embeddings model series.
These models can easily be adapted to your specific task, including but not limited to:
semantic search
Users can access the service through REST APIs, the Python SDK, or the web-based interface in Azure OpenAI Studio.
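As a quick orientation, here is a minimal sketch of calling a chat deployment with the openai Python package (v1.x). The endpoint, key, and the deployment name `gpt-4` are placeholders for your own Azure OpenAI resource:

```python
# Minimal sketch: calling an Azure OpenAI chat deployment with the openai Python SDK (v1.x).
# The endpoint, API key, and deployment name are placeholders for your own resource.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com/
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="gpt-4",  # name of your Azure OpenAI *deployment*, not the base model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what Azure OpenAI Service offers."},
    ],
)
print(response.choices[0].message.content)
```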
General Documentation on Microsoft Learn
Grounding is the process of using large language models (LLMs) with information that is use-case specific, relevant, and not available as part of the LLM’s trained knowledge. It is crucial for ensuring the quality, accuracy, and relevance of the generated output. While LLMs come with a vast amount of knowledge already, this knowledge is limited and not tailored to specific use-cases. To obtain accurate and relevant output, we must provide LLMs with the necessary information. In other words, we need to “ground” the models in the context of our specific use-case.
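The sketch below illustrates the idea under simple assumptions: a hypothetical `retrieve_documents` function stands in for a real retrieval step (for example, a query against a search index), and its results are injected into the prompt so the model answers only from that context. The deployment name and credentials are placeholders:

```python
# Minimal sketch of grounding: the model is instructed to answer only from
# use-case-specific documents injected into the prompt. The retrieval step is
# stubbed out here; in practice it would query a search index for the question.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder
    api_key="<your-api-key>",                                    # placeholder
    api_version="2024-02-01",
)

def retrieve_documents(question: str) -> list[str]:
    # Hypothetical retrieval step: return the text chunks most relevant to the question.
    return ["Contoso's return policy allows refunds within 30 days of purchase."]

question = "What is the return policy?"
context = "\n\n".join(retrieve_documents(question))

response = client.chat.completions.create(
    model="gpt-35-turbo",  # your deployment name
    messages=[
        {
            "role": "system",
            "content": "Answer using ONLY the provided context. If the answer is not "
                       "in the context, say you don't know.\n\nContext:\n" + context,
        },
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```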
First, I strongly recommend starting with this introduction post: How to create a private ChatGPT with your own data
Chat Copilot Sample Application - This sample lets you build your own integrated large language model (LLM) chat copilot and is built on Microsoft Semantic Kernel.
Building a Private ChatGPT Interface With Azure OpenAI
Azure OpenAI Landing Zone reference architecture
GitHub Repository with implementation
Implement logging and monitoring for Azure OpenAI models