AI Risk Assessment

Version 1.0 · February 2026 · Broadlake Technologies LLC

System Overview

HireProxy.ai is a multi-tenant SaaS platform that enables candidates to create AI-powered career assistants. Each assistant answers questions about its respective candidate's professional background using the Anthropic Claude API with a curated knowledge base of verified career stories, metrics, and work history provided by the candidate. The platform serves multiple independent tenants, each with isolated data stored in Supabase (PostgreSQL). The AI responds only with verified information from candidate-provided data and is instructed to acknowledge uncertainty rather than fabricate responses.

Risk Categories

1. Hallucination / Fabrication

Likelihood: Low · Impact: Medium (reputational)

Risk: The AI generates false information about a candidate's experience, credentials, or employment history.

Mitigations:

  • Evidence gating requires Story ID citations for factual claims
  • Each tenant's knowledge base contains only candidate-verified, defensible facts
  • System prompts instruct uncertainty acknowledgment when data is insufficient
  • Tenant isolation ensures one candidate's data cannot leak into another's AI responses
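
The evidence-gating mitigation can be sketched as a check applied to model output before it reaches the user. The following TypeScript is illustrative only: the `[S-###]` Story ID format, the function names, and the pass/fail rule are assumptions, not the production implementation.

```typescript
// Illustrative evidence gate: a response passes only if it cites at least one
// Story ID, and every cited ID exists in this tenant's verified knowledge base.
// The [S-###] citation format is an assumption for this sketch.

const STORY_ID = /\[S-\d+\]/g;

function citedStoryIds(response: string): string[] {
  return [...new Set(response.match(STORY_ID) ?? [])];
}

function passesEvidenceGate(response: string, knownStoryIds: Set<string>): boolean {
  const cited = citedStoryIds(response);
  if (cited.length === 0) return false; // factual answers must cite evidence
  return cited.every((id) => knownStoryIds.has(id)); // no citing unknown stories
}
```

A gate like this catches both uncited claims and citations of stories that do not exist in the tenant's knowledge base, which is the typical shape of a fabricated response.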

2. Prompt Injection

Likelihood: Medium · Impact: Low-Medium (per-tenant scope)

Risk: A recruiter or other external user attempts to override system instructions to extract the system prompt or candidate data, or to make the AI behave outside its intended scope.

Mitigations:

  • System prompts treat all user input as untrusted
  • Rate limiting via Vercel KV (Redis) prevents abuse at scale
  • Multi-tenant architecture limits blast radius to a single candidate's assistant
  • Input length limits enforced per request

3. Inappropriate Content

Likelihood: Very Low · Impact: Medium

Risk: The AI generates offensive, harmful, or unprofessional content in a career context.

Mitigations:

  • Anthropic's built-in content filtering applied at the API level
  • System scoped exclusively to career and professional questions
  • Conversation logs enable post-hoc review and prompt refinement

4. Cross-Tenant Data Leakage

Likelihood: Very Low · Impact: High

Risk: One candidate's data surfaces in another candidate's AI assistant responses, or an unauthorized party gains access to tenant data.

Mitigations:

  • Supabase Row Level Security (RLS) enforces data isolation at the database level
  • Each AI session is scoped to a single tenant's knowledge base
  • Authentication via Supabase Auth ensures tenant-scoped access control
  • No shared context or memory between separate tenant assistants
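
The session-scoping rule can be illustrated in application code. Note the real enforcement described above happens at the database layer via Supabase RLS; this TypeScript sketch (with assumed type and function names) only mirrors the same predicate to show the intended invariant.

```typescript
// Illustrative tenant scoping: a session's knowledge base is assembled strictly
// from rows belonging to that session's tenant. In production the equivalent
// rule is enforced by Supabase Row Level Security, not by application filtering.

interface StoryRecord {
  tenantId: string;
  storyId: string;
  text: string;
}

function buildSessionContext(rows: StoryRecord[], sessionTenantId: string): StoryRecord[] {
  return rows.filter((row) => row.tenantId === sessionTenantId);
}
```

Keeping the same predicate at both layers means a bug in one layer still leaves the other enforcing isolation (defense in depth).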

5. Privacy Exposure

Likelihood: Low · Impact: Medium

Risk: The AI reveals sensitive personal information that candidates did not intend to share publicly, such as compensation data, private contact details, or confidential employment circumstances.

Mitigations:

  • Candidates control exactly what data enters their knowledge base
  • System prompts mark sensitive categories (compensation, personal details) as confidential by default
  • Branding preferences allow candidates to set disclosure boundaries
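
The confidential-by-default behavior can be sketched as a filter applied before knowledge-base content reaches the model prompt. The category names, field shape, and function names below are assumptions for illustration.

```typescript
// Illustrative redaction pass: fields in confidential-by-default categories
// (compensation, private contact details), or fields the candidate has marked
// confidential, are dropped before any content is placed into the prompt.

interface KnowledgeField {
  category: string; // e.g. "experience", "compensation", "contact" (assumed names)
  value: string;
  confidential: boolean;
}

const CONFIDENTIAL_BY_DEFAULT = new Set(["compensation", "contact"]);

function promptSafeFields(fields: KnowledgeField[]): KnowledgeField[] {
  return fields.filter(
    (f) => !f.confidential && !CONFIDENTIAL_BY_DEFAULT.has(f.category)
  );
}
```

Filtering before prompt assembly, rather than instructing the model to withhold the data, means a successful prompt injection still cannot extract what was never in context.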

6. Availability / Abuse

Likelihood: Medium · Impact: Low (cost and availability)

Risk: Service degradation or excessive API costs caused by automated abuse or denial-of-service attacks.

Mitigations:

  • Per-IP and per-tenant rate limiting via Vercel KV (Redis)
  • Input length limits enforced per request
  • Vercel edge network provides DDoS protection at the infrastructure level
  • Sentry monitoring alerts on abnormal error rates or usage patterns
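
The rate-limiting pattern can be sketched as a fixed-window counter. In production this state lives in Vercel KV (Redis) with key expiry; the in-memory version below, with assumed window size and limit, shows the same counter-and-window logic keyed per IP or per tenant.

```typescript
// Illustrative fixed-window rate limiter. Production uses Vercel KV (Redis)
// with the same counter-and-expiry pattern; the limits here are assumed values.

const WINDOW_MS = 60_000;  // 1-minute window (assumed)
const MAX_REQUESTS = 30;   // per key per window (assumed)

const counters = new Map<string, { count: number; windowStart: number }>();

// key is typically the client IP or the tenant ID, giving per-IP and
// per-tenant limits from the same mechanism.
function allowRequest(key: string, now: number = Date.now()): boolean {
  const entry = counters.get(key);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    counters.set(key, { count: 1, windowStart: now }); // new window
    return true;
  }
  if (entry.count >= MAX_REQUESTS) return false; // over limit, reject
  entry.count += 1;
  return true;
}
```

Because each key is independent, one abusive IP exhausting its window has no effect on other clients or tenants.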

Monitoring & Review

Conversation logs are reviewed periodically for quality assurance and prompt refinement. API usage and costs are tracked monthly. Sentry provides real-time error monitoring and alerting. This risk assessment is reviewed quarterly or when significant platform changes are made, including new integrations, model upgrades, or architectural changes to the multi-tenant system.
