Data Engineering at the University of Florida
NavigatorAI is UF’s AI platform providing access to multiple LLMs, including GPT, Llama, Gemini, and Claude. This guide covers setup for the Toolkit API, which you’ll use for course assignments.
| Service | Description | Access |
|---|---|---|
| NaviGator Chat | Web interface for LLM conversations with custom datasets | Students, Faculty, Staff |
| NaviGator Toolkit | API access to LLM models | Students, Faculty, Staff |
| NaviGator Tutor | Personalized learning assistant | Students, Faculty, Staff |
| NaviGator Notebook | Document assistance with Gemini | Students, Faculty, Staff |
For this course, you will primarily use the NaviGator Toolkit for API access.
Store your API key as an environment variable:
```shell
# Add to your shell profile (~/.bashrc, ~/.zshrc, etc.)
export NAVIGATOR_API_KEY="your-api-key-here"
```
Or create a .env file in your project (add to .gitignore):
```
NAVIGATOR_API_KEY=your-api-key-here
```
Install the client libraries:

```shell
uv add openai python-dotenv
```
```python
import os

from dotenv import load_dotenv
from openai import OpenAI

# Load environment variables
load_dotenv()

# Initialize client with NavigatorAI endpoint
client = OpenAI(
    api_key=os.getenv("NAVIGATOR_API_KEY"),
    base_url="https://api.navigator.ufl.edu/v1",  # Update with actual endpoint
)

# Make a request
response = client.chat.completions.create(
    model="gpt-4",  # Or other available models
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is data engineering?"},
    ],
)

print(response.choices[0].message.content)
```
The Toolkit provides access to multiple models:
| Provider | Models |
|---|---|
| OpenAI | GPT-4, GPT-3.5-turbo |
| Meta | Llama 2, Llama 3 |
| Google | Gemini |
| Anthropic | Claude |
Check the NavigatorAI documentation for the current list of available models and their identifiers.
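Since the deployed model list changes over time, you can also query it programmatically. A minimal stdlib-only sketch, assuming the Toolkit exposes the standard OpenAI-compatible `/models` endpoint and using the placeholder base URL from above:

```python
import json
import os
import urllib.request

BASE_URL = "https://api.navigator.ufl.edu/v1"  # Update with actual endpoint

def model_ids(models_response):
    """Extract identifiers from an OpenAI-style /models response body."""
    return sorted(m["id"] for m in models_response["data"])

def list_models(base_url=BASE_URL):
    """Fetch the live model list from the Toolkit's /models endpoint."""
    req = urllib.request.Request(
        base_url + "/models",
        headers={"Authorization": f"Bearer {os.getenv('NAVIGATOR_API_KEY')}"},
    )
    with urllib.request.urlopen(req) as resp:
        return model_ids(json.load(resp))

# list_models()  # returns the identifiers your key can actually use
```

Use the identifiers this returns, not the marketing names in the table above, as the `model` argument in your requests.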
Add `.env` to `.gitignore` to prevent accidental commits.

Troubleshooting:

- **Authentication Error (401):** confirm the key is set in your environment with `echo $NAVIGATOR_API_KEY`.
- **Rate Limit Error (429):** you are sending requests too quickly; wait and retry.
- **Model Not Found:** verify the model identifier against the NavigatorAI documentation.
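If 429s persist, retrying with exponential backoff usually resolves them. A minimal sketch; the `retry_on` exception type is whatever your client raises (for example, `openai.RateLimitError` in recent SDK versions):

```python
import time

def with_backoff(call, retries=3, base_delay=1.0, retry_on=(Exception,)):
    """Call `call()`, retrying with exponential backoff on `retry_on` errors."""
    for attempt in range(retries):
        try:
            return call()
        except retry_on:
            if attempt == retries - 1:
                raise  # out of retries; surface the error
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...

# Usage sketch, assuming the `client` from above and openai>=1.0:
# with_backoff(
#     lambda: client.chat.completions.create(model="gpt-4", messages=msgs),
#     retry_on=(openai.RateLimitError,),
# )
```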
For development and testing you can run open-weight models locally through Ollama.
On your laptop, `brew install ollama` followed by `ollama pull llama3.1:8b` and `ollama run llama3.1:8b` is enough to experiment.
The Ollama server exposes an OpenAI-compatible endpoint at `http://localhost:11434/v1`, so the same client code above works with `api_key="ollama"` and that base URL.
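Because the endpoint speaks the OpenAI wire format, you can also hit it with only the standard library. A sketch, assuming `ollama serve` is running locally with `llama3.1:8b` pulled:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model, prompt):
    """Build the JSON body expected by an OpenAI-compatible chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(model, prompt, url=OLLAMA_URL):
    """POST a single-turn chat request and return the reply text."""
    body = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer ollama",  # any token works locally
        },
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

# ask("llama3.1:8b", "What is data engineering?")  # requires a running server
```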
Running Ollama on HiPerGator needs extra care: models must live under `/blue/cis6930`, the server has to run inside an `srun` session that exposes port 11434, and only some models support MCP tool calling.
See the dedicated walkthrough for the full procedure, including SSH tunnels, model capability checks, and how to point `navigator-cli` at the local endpoint.