Installation
This page covers setup for users and local operators. Developer tooling and the contribution workflow live in the Developer Guide.
Requirements
macOS, Linux, or Windows
Python 3.11 or newer
Git
uv package manager
make available on PATH
8 GB RAM minimum, 16 GB recommended
About 2 GB free disk space for dependencies
Install
Install uv on macOS or Linux:
curl -LsSf https://astral.sh/uv/install.sh | sh
Install uv on Windows PowerShell:
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
Clone the project:
git clone https://github.com/solomonsjoseph/RePORT-AI-Portal.git
cd RePORT-AI-Portal
On Windows, run the same project commands from a terminal that provides
make. Git Bash, MSYS2, WSL, or another GNU Make installation are all
acceptable as long as uv and make are on PATH.
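Before running any make targets, it can help to confirm the required tools are actually reachable. The sketch below is an illustrative preflight check, not part of the project's Makefile:

```shell
# Preflight: confirm the tools this guide relies on are on PATH.
for tool in git uv make; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING - install it before continuing"
  fi
done
```

On Windows, run this from the same Git Bash, MSYS2, or WSL terminal you intend to use for the project commands, since each environment has its own PATH.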
Start the Web UI
make chat
make chat installs the web and AI Assistant dependency groups it
needs, then opens a local Streamlit page. Use it to select a model
provider, load a study, and start chatting. If port 8501 is already in
use, local startup chooses the next free port; production service units
keep their configured fixed port.
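If you want to know in advance whether the default port is taken, a quick socket probe works; this is an optional check, since the fallback described above is automatic:

```shell
# Report whether anything is already listening on Streamlit's default port.
python3 -c '
import socket
s = socket.socket()
in_use = s.connect_ex(("127.0.0.1", 8501)) == 0
s.close()
print("port 8501 in use" if in_use else "port 8501 free")
'
```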
Optional Developer Setup
Developers who need the full test, docs, profiling, and LLM toolchain can install every dependency group explicitly:
make sync
Verify the full developer install:
uv run python -c "import scripts; print('Installation successful')"
Prepare for First Run
Before loading a study:
Put study files under data/raw/{STUDY_NAME}/.
Choose a model provider in Configuration.
Use Load Study in the web UI. It creates the local PHI key if one does not already exist.
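The expected directory layout can be set up from the shell; "DEMO_STUDY" below is a hypothetical study name, so substitute your own:

```shell
# Create the study folder the web UI expects (hypothetical name DEMO_STUDY).
mkdir -p data/raw/DEMO_STUDY
# Copy your study files into it, e.g.:
# cp /path/to/study_files/* data/raw/DEMO_STUDY/
ls data/raw
```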
Then continue with Quick Start.