Data Engineering at the University of Florida
NavigatorAI is UF’s AI platform, providing access to multiple LLMs including GPT, Llama, Gemini, and Claude. This guide covers setup for the NaviGator Toolkit API, which you’ll use for course assignments.
| Service | Description | Access |
|---|---|---|
| NaviGator Chat | Web interface for LLM conversations with custom datasets | Students, Faculty, Staff |
| NaviGator Toolkit | API access to LLM models | Students, Faculty, Staff |
| NaviGator Tutor | Personalized learning assistant | Students, Faculty, Staff |
| NaviGator Notebook | Document assistance with Gemini | Students, Faculty, Staff |
For this course, you will primarily use the NaviGator Toolkit for API access.
Store your API key as an environment variable:

```bash
# Add to your shell profile (~/.bashrc, ~/.zshrc, etc.)
export NAVIGATOR_API_KEY="your-api-key-here"
```

Or create a `.env` file in your project (and add `.env` to `.gitignore` so the key is never committed):

```
NAVIGATOR_API_KEY=your-api-key-here
```
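Before making any calls, it helps to fail fast when the key is missing rather than debug a cryptic 401 later. A minimal sketch (the `require_api_key` helper is ours, not part of any SDK):

```python
import os


def require_api_key(env_name="NAVIGATOR_API_KEY"):
    """Return the API key, or raise with a pointer back to the setup steps."""
    key = os.getenv(env_name)
    if not key:
        raise RuntimeError(
            f"{env_name} is not set; export it in your shell profile or add it to .env"
        )
    return key
```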
Install the OpenAI SDK and `python-dotenv`:

```bash
uv add openai python-dotenv
```
```python
import os

from dotenv import load_dotenv
from openai import OpenAI

# Load environment variables
load_dotenv()

# Initialize client with NavigatorAI endpoint
client = OpenAI(
    api_key=os.getenv("NAVIGATOR_API_KEY"),
    base_url="https://api.navigator.ufl.edu/v1",  # Update with actual endpoint
)

# Make a request
response = client.chat.completions.create(
    model="gpt-4",  # Or other available models
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is data engineering?"},
    ],
)

print(response.choices[0].message.content)
```
The Toolkit provides access to multiple models:
| Provider | Models |
|---|---|
| OpenAI | GPT-4, GPT-3.5-turbo |
| Meta | Llama 2, Llama 3 |
| Google | Gemini |
| Anthropic | Claude |
Check the NavigatorAI documentation for the current list of available models and their identifiers.
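If the Toolkit follows the OpenAI API convention, the SDK's `client.models.list()` call should enumerate what is actually available. A sketch, assuming the placeholder `base_url` from above and a helper (`model_ids`) of our own:

```python
import os


def model_ids(listing):
    """Extract and sort model identifiers from a models.list() response."""
    return sorted(m.id for m in listing)


if __name__ == "__main__":
    from openai import OpenAI

    client = OpenAI(
        api_key=os.getenv("NAVIGATOR_API_KEY"),
        base_url="https://api.navigator.ufl.edu/v1",  # assumed endpoint, as above
    )
    # Print every model identifier the endpoint reports
    for mid in model_ids(client.models.list()):
        print(mid)
```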
Common issues:

- **Authentication Error (401)**: confirm your key is set (`echo $NAVIGATOR_API_KEY`) and that `.env` is in `.gitignore` to prevent accidental commits of a stale or exposed key.
- **Rate Limit Error (429)**: you are sending requests too quickly; wait and retry.
- **Model Not Found**: verify the model identifier against the documentation's current list.
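For rate limits in particular, a simple retry loop with exponential backoff usually suffices. A sketch assuming the OpenAI SDK's `RateLimitError` exception; the `backoff_delays` helper and the endpoint URL are ours:

```python
import time


def backoff_delays(max_retries, base=1.0, cap=30.0):
    """Exponential backoff schedule in seconds: base, 2*base, 4*base, ... capped."""
    return [min(cap, base * 2 ** i) for i in range(max_retries)]


if __name__ == "__main__":
    import os
    from openai import OpenAI, RateLimitError

    client = OpenAI(
        api_key=os.getenv("NAVIGATOR_API_KEY"),
        base_url="https://api.navigator.ufl.edu/v1",  # assumed endpoint, as above
    )
    for delay in backoff_delays(4):
        try:
            response = client.chat.completions.create(
                model="gpt-4",
                messages=[{"role": "user", "content": "What is data engineering?"}],
            )
            print(response.choices[0].message.content)
            break
        except RateLimitError:
            time.sleep(delay)  # back off before retrying on 429
```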
For development and testing, you can use local models via Ollama:
```bash
# Install Ollama
brew install ollama  # macOS
# or download from https://ollama.ai

# Pull a model
ollama pull llama2

# Run the model
ollama run llama2
```
Using Ollama with OpenAI-compatible API:
```python
from openai import OpenAI

client = OpenAI(
    api_key="ollama",  # Ollama ignores the key, but the SDK requires one
    base_url="http://localhost:11434/v1",
)

response = client.chat.completions.create(
    model="llama2",
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response.choices[0].message.content)
```