IN THIS ARTICLE

This article is a comprehensive guide to setting up and managing Large Language Model (LLM) deployments in HallianAI. It covers configuring cloud connections, adding model deployments, assigning models to user roles, and best practices for optimization. HallianAI supports multiple AI providers, including Microsoft Azure OpenAI, AWS Bedrock, and Google Vertex AI, so you can tailor AI capabilities to your organization's needs.


Introduction

Large Language Models (LLMs) are the core AI engines powering HallianAI’s intelligent assistants and workflows. Properly setting up LLM deployments ensures your organization can scale AI usage effectively, optimize costs, and tailor AI capabilities to different departments or use cases. HallianAI supports multiple cloud AI providers, including Microsoft Azure OpenAI Service, AWS Bedrock, Google Vertex AI Studio, and others. Each provider has its own configuration steps and dependencies, such as a cloud connection or environment variables.
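Provider dependencies are often supplied as environment variables. As an illustrative sketch (the variable names follow each provider's standard SDK conventions; the values are placeholders, not real credentials), the three providers above might be configured like this before you set up HallianAI:

```shell
# Illustrative placeholders only -- substitute your own values.

# Azure OpenAI (standard Azure OpenAI SDK variables)
export AZURE_OPENAI_API_KEY="your-azure-openai-key"
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com"

# AWS Bedrock (standard AWS SDK credential variables)
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export AWS_REGION="us-east-1"

# Google Vertex AI (standard Google Cloud Application Default Credentials variable)
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
```

Consult your provider's documentation for the exact variables your deployment requires.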

This guide walks you through the complete process of adding and managing LLM deployments in HallianAI, even if you are new to the platform.


Step-by-Step Instructions

Step 1: Log into HallianAI

Step 2: Navigate to the Settings Tab

Step 3: Configure Cloud Connections (Dependency Setup)

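Before registering a cloud connection in Settings, it can help to confirm that the credentials the provider depends on are actually present in the environment. A minimal sketch (this helper is hypothetical and not part of HallianAI; the variable names follow each provider's standard SDK conventions):

```python
import os

# Hypothetical pre-flight check: which required environment variables
# are unset or empty for each provider's cloud connection.
REQUIRED_VARS = {
    "Azure OpenAI": ["AZURE_OPENAI_API_KEY", "AZURE_OPENAI_ENDPOINT"],
    "AWS Bedrock": ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_REGION"],
    "Google Vertex AI": ["GOOGLE_APPLICATION_CREDENTIALS"],
}

def missing_vars(provider: str) -> list[str]:
    """Return the names of required variables that are unset or empty."""
    return [v for v in REQUIRED_VARS[provider] if not os.environ.get(v)]

if __name__ == "__main__":
    for provider in REQUIRED_VARS:
        missing = missing_vars(provider)
        status = "OK" if not missing else f"missing: {', '.join(missing)}"
        print(f"{provider}: {status}")
```

Running the script prints one line per provider so you can see at a glance which connections are ready to configure.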