In This Tutorial
1 Pipeline Architecture
A CI/CD pipeline with CLAUDIUS typically includes:
- Blueprint Compilation: Verify all Blueprints compile without errors
- Asset Validation: Check for missing references, broken materials
- Lighting Builds: Automated lightmap baking
- Screenshot Tests: Visual regression testing
- Packaging: Build shipping packages
```text
# Typical CI pipeline stages
1. Checkout code
2. Start Unreal Editor (headless)
3. Wait for CLAUDIUS to initialize
4. Run validation commands
5. Compile Blueprints
6. Build lighting (optional)
7. Run screenshot tests
8. Package project (optional)
9. Upload artifacts
```
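The stage list above can be sketched as a single driver script. This is only a sketch: `wait_for_claudius` and the per-stage scripts are developed later in this tutorial, so each stage here is a placeholder callable that returns True on success.

```python
def run_pipeline(stages):
    """Run (name, callable) stages in order, stopping at the first failure."""
    for name, stage in stages:
        print(f"=== {name} ===")
        if not stage():
            print(f"Stage failed: {name}")
            return False
    return True

# Placeholder lambdas stand in for the real scripts, e.g.
# wait_for_claudius() and validate_project().
stages = [
    ("Wait for CLAUDIUS", lambda: True),
    ("Validate assets", lambda: True),
    ("Compile Blueprints", lambda: True),
]
pipeline_ok = run_pipeline(stages)
```

In a real pipeline the boolean feeds `sys.exit`, so the CI job fails as soon as a stage does.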
Running Unreal Engine in CI/CD requires appropriate licensing. For automated builds, you typically need to use the command-line tools or ensure your license permits unattended usage.
2 Headless Mode Setup
For CI/CD, you'll run Unreal Editor in a mode that allows CLAUDIUS communication without the full GUI:
```bat
@REM Start Unreal Editor with minimal UI for CI
"C:\Program Files\Epic Games\UE_5.7\Engine\Binaries\Win64\UnrealEditor.exe" ^
  "C:\Projects\MyGame\MyGame.uproject" ^
  -game ^
  -windowed ^
  -resx=640 ^
  -resy=480 ^
  -nosplash ^
  -unattended ^
  -log
```
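In CI it is often easier to launch the editor from Python than from a batch file. A minimal sketch that mirrors the flags above (`editor_command` is a hypothetical helper; the paths vary per install):

```python
import subprocess

def editor_command(editor_exe, uproject, resx=640, resy=480):
    """Assemble the headless-editor command line shown above."""
    return [
        editor_exe,
        uproject,
        "-game",
        "-windowed",
        f"-resx={resx}",
        f"-resy={resy}",
        "-nosplash",
        "-unattended",
        "-log",
    ]

# Popen (not run) so the script can go on to poll CLAUDIUS while the
# editor starts up:
# proc = subprocess.Popen(editor_command(
#     r"C:\Program Files\Epic Games\UE_5.7\Engine\Binaries\Win64\UnrealEditor.exe",
#     r"C:\Projects\MyGame\MyGame.uproject"))
```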
Waiting for CLAUDIUS
```python
import requests
import time
import sys

def wait_for_claudius(timeout=120):
    """Wait for CLAUDIUS to become available"""
    start = time.time()
    url = "http://127.0.0.1:8080/api/v1/status"
    print("Waiting for CLAUDIUS to start...")
    while time.time() - start < timeout:
        try:
            response = requests.get(url, timeout=5)
            if response.status_code == 200:
                data = response.json()
                print(f"\nCLAUDIUS ready! Project: {data['project_name']}")
                return True
        except requests.RequestException:
            pass  # Editor not up yet; keep polling
        time.sleep(2)
        print(".", end="", flush=True)
    print(f"\nTimeout after {timeout}s")
    return False

if __name__ == "__main__":
    if not wait_for_claudius():
        sys.exit(1)
```
3 Asset Validation
Use CLAUDIUS to validate assets and catch issues early:
```python
from claudius_client import ClaudiusClient

def validate_project():
    """Run asset validation checks"""
    client = ClaudiusClient()
    errors = []

    # 1. Check for broken asset references
    print("Checking for broken references...")
    result = client.execute("asset", "find_broken_references")
    if result["output"]["broken_count"] > 0:
        errors.append({
            "type": "broken_references",
            "count": result["output"]["broken_count"],
            "assets": result["output"]["assets"]
        })

    # 2. Validate all Blueprints compile
    print("Compiling Blueprints...")
    result = client.execute("build", "compile_blueprints")
    if not result["success"]:
        errors.append({
            "type": "blueprint_compilation",
            "message": result["message"],
            "errors": result.get("output", {}).get("errors", [])
        })

    # 3. Check for missing textures
    print("Checking for missing textures...")
    result = client.execute("asset", "validate_materials")
    if result["output"]["issues"]:
        errors.append({
            "type": "material_issues",
            "issues": result["output"]["issues"]
        })

    # Report results
    if errors:
        print(f"\n✗ Validation FAILED with {len(errors)} issues:")
        for error in errors:
            print(f"  - {error['type']}")
        return False
    else:
        print("\n✓ Validation PASSED")
        return True
```
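CI systems work best with a persisted, machine-readable result. A sketch that writes the `errors` list built by `validate_project()` to a JSON artifact (the report filename is an arbitrary choice, not a CLAUDIUS convention):

```python
import json

def write_report(errors, path="validation_report.json"):
    """Persist validation issues so CI can upload them as an artifact."""
    report = {
        "passed": not errors,
        "issue_count": len(errors),
        "issues": errors,
    }
    with open(path, "w") as f:
        json.dump(report, f, indent=2)
    return report
```

The uploaded artifact gives reviewers the failure details without digging through the editor log.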
4 Automated Builds
Automate lighting builds and navigation mesh generation:
```json
// Build lighting for all levels
{
  "category": "build",
  "command": "build_lighting",
  "params": {"quality": "production", "levels": ["all"]}
}

// Build navigation mesh
{"category": "build", "command": "build_navigation", "params": {}}

// Build HLOD clusters
{"category": "build", "command": "build_hlod", "params": {}}
```
```python
import time

def run_ci_builds(client):
    """Run all CI build tasks"""
    builds = [
        ("Compiling Blueprints", "build", "compile_blueprints", {}),
        ("Building Lighting", "build", "build_lighting", {"quality": "medium"}),
        ("Building Navigation", "build", "build_navigation", {}),
    ]
    results = []
    for name, category, command, params in builds:
        print(f"\n🔨 {name}...")
        start = time.time()
        result = client.execute(category, command, params)
        duration = time.time() - start
        results.append({
            "name": name,
            "success": result["success"],
            "duration": duration
        })
        if result["success"]:
            print(f"  ✓ Completed in {duration:.1f}s")
        else:
            print(f"  ✗ Failed: {result['message']}")
    return results
```
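The results list from `run_ci_builds()` still has to become a pass/fail for the CI job. A small hypothetical reducer:

```python
def summarize_builds(results):
    """Reduce run_ci_builds() results to (all_passed, summary_line)."""
    failed = [r["name"] for r in results if not r["success"]]
    total = sum(r["duration"] for r in results)
    passed = len(results) - len(failed)
    return (not failed, f"{passed}/{len(results)} builds passed in {total:.1f}s")
```

The boolean then maps to the process exit code (`sys.exit(0 if ok else 1)`), which is what the CI runner actually checks.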
5 Screenshot Testing
Implement visual regression testing by comparing screenshots:
```python
import os
from PIL import Image
import imagehash

class ScreenshotTester:
    def __init__(self, client, baseline_dir, output_dir):
        self.client = client
        self.baseline_dir = baseline_dir
        self.output_dir = output_dir

    def capture_test_shots(self, test_cases):
        """Capture screenshots for test cases"""
        results = []
        for test in test_cases:
            # Load the level
            self.client.execute("level", "load_level", {
                "level_path": test["level"]
            })

            # Position camera
            self.client.execute("viewport", "set_camera", {
                "location": test["camera_pos"],
                "rotation": test["camera_rot"]
            })

            # Take screenshot
            screenshot_path = os.path.join(self.output_dir, f"{test['name']}.png")
            self.client.execute("viewport", "take_screenshot", {
                "filename": screenshot_path,
                "resolution": {"width": 1920, "height": 1080}
            })

            # Compare with baseline
            baseline_path = os.path.join(self.baseline_dir, f"{test['name']}.png")
            diff = self._compare_images(screenshot_path, baseline_path)

            results.append({
                "name": test["name"],
                "passed": diff < test.get("threshold", 5),
                "difference": diff
            })
        return results

    def _compare_images(self, img1_path, img2_path):
        """Compare images using perceptual hashing"""
        if not os.path.exists(img2_path):
            return 100  # No baseline, max difference
        hash1 = imagehash.average_hash(Image.open(img1_path))
        hash2 = imagehash.average_hash(Image.open(img2_path))
        return hash1 - hash2  # Difference score

# Usage
test_cases = [
    {
        "name": "main_menu",
        "level": "/Game/Maps/MainMenu",
        "camera_pos": {"x": 0, "y": 0, "z": 500},
        "camera_rot": {"pitch": -90, "yaw": 0, "roll": 0},
        "threshold": 3
    },
    {
        "name": "level_01_start",
        "level": "/Game/Maps/Level01",
        "camera_pos": {"x": 100, "y": 0, "z": 200},
        "camera_rot": {"pitch": 0, "yaw": 0, "roll": 0}
    }
]
```
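The `hash1 - hash2` comparison above is a Hamming distance between perceptual hashes. The idea can be illustrated without PIL: threshold each pixel against the image mean, then count differing bits. This is a toy version on small grayscale grids, not imagehash's exact implementation (which first downscales to an 8x8 grayscale thumbnail):

```python
def toy_average_hash(pixels):
    """1 bit per pixel: set when the pixel is at least the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p >= mean else 0 for p in flat]

def hash_difference(h1, h2):
    """Hamming distance, the role of `hash1 - hash2` in imagehash."""
    return sum(a != b for a, b in zip(h1, h2))

baseline = [[200, 210], [205, 90]]    # one dark corner
current  = [[200, 210], [205, 250]]   # that corner is now bright
diff = hash_difference(toy_average_hash(baseline), toy_average_hash(current))
```

Identical images score 0; the screenshot test passes while the score stays under the per-case threshold.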
6 GitHub Actions Example
Here's a complete GitHub Actions workflow:
```yaml
name: UE5 Validation

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  validate:
    runs-on: self-hosted  # UE requires a self-hosted runner

    steps:
      - uses: actions/checkout@v3

      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'

      - name: Install dependencies
        run: pip install requests pillow imagehash

      - name: Start Unreal Editor
        run: |
          Start-Process -FilePath "C:\UE_5.7\Engine\Binaries\Win64\UnrealEditor.exe" `
            -ArgumentList "${{ github.workspace }}\MyGame.uproject -log"
        shell: pwsh

      - name: Wait for CLAUDIUS
        run: python scripts/wait_for_claudius.py

      - name: Validate Assets
        run: python scripts/validate_assets.py

      - name: Compile Blueprints
        run: python scripts/compile_blueprints.py

      - name: Run Screenshot Tests
        run: python scripts/screenshot_tests.py

      - name: Upload Test Results
        uses: actions/upload-artifact@v3
        if: always()
        with:
          name: test-screenshots
          path: screenshots/

      - name: Shutdown Editor
        if: always()
        run: |
          python -c "import requests; requests.post('http://127.0.0.1:8080/api/v1/commands', json={'category': 'editor', 'command': 'quit'})"
        continue-on-error: true
```
Unreal Engine needs substantial disk, CPU, and memory, plus appropriate licensing. You'll need a self-hosted runner with UE installed; GitHub-hosted runners cannot run UE due to size and licensing constraints.