Trust & Transparency

Independent evaluation requires independence. Here is how we handle data, protect confidentiality, and maintain the objectivity that makes our findings credible to boards and buyers.

Scope of Engagement

What we do: Hands-on evaluation and advisory for AI initiatives at PE-backed portfolio companies. We assess scalability, financial viability, technical reliability, governance posture, and organizational readiness. We produce board-ready deliverables.

What we don't do: We do not host production systems, store client credentials, or train models on client data. If an engagement requires access to sensitive data, we execute a written data-use plan and NDA/BAA before any access is granted.

Independence: We maintain vendor neutrality. We do not resell AI platforms, take referral fees from vendors, or hold financial relationships with the technology providers we evaluate, so vendor partnerships cannot influence our findings.

Data Handling & Confidentiality

  • Collection: Business-contact details, scheduling information, and project artifacts only. Sensitive personal information is redacted from working artifacts before analysis.
  • Storage: Encrypted cloud storage with MFA and least-privilege access. Portfolio company data is segregated by engagement.
  • AI tools: No client-identifiable data is sent to public AI APIs without written consent. Private endpoints and redaction by default.
  • Evaluation logs: We retain evaluation logs with pass/fail thresholds for the engagement period. PII and secrets are redacted. Retention period is agreed in the engagement letter.
  • Deletion: At engagement close, working files are removed unless the contract specifies retention for audit evidence or board documentation.
  • Cross-portfolio isolation: For PE sponsors with multiple portfolio company engagements, data from each portfolio company is isolated. Cross-portfolio insights use only anonymized, aggregated metrics.
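To make the redaction commitment above concrete, here is a minimal sketch of pattern-based log scrubbing. The patterns and labels are illustrative assumptions, not our production tooling; a real engagement would pair a vetted PII/secret detection library with the written data-use plan.

```python
import re

# Hypothetical patterns for illustration only. Real redaction uses a
# vetted detection library and rules agreed in the data-use plan.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
}

def redact(line: str) -> str:
    """Replace each detected PII/secret span with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        line = pattern.sub(f"[{label} REDACTED]", line)
    return line
```

Applied at log-write time, this fails safe: the raw value never reaches the retained evaluation log, only the placeholder does.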

Security Practices

  • All accounts protected by MFA. Laptops use full-disk encryption and automatic screen lock.
  • Passwords stored in a business-grade password manager. Secrets are never shared over email or chat.
  • Patches applied within 30 days; high-severity updates applied on an expedited timeline.
  • Deliverables shared via permissioned links. Downloads time-limited where possible.
  • CISSP-certified principal. Security practices informed by 20+ years in enterprise software and regulated industries.

Methodology & Standards

Our seven-dimension evaluation methodology maps to public frameworks. Every evaluation has clear scoring criteria, documented evidence, and exportable artifacts. We keep humans in the loop for high-impact decisions. If the data does not support scaling or continued investment, we say so—even when that is not what the portfolio company wants to hear.
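The pass/fail logic behind the scoring can be sketched as follows. The dimension names and thresholds here are assumptions for illustration (the document names only five of the seven dimensions, and actual criteria are defined per engagement letter), not the firm's published rubric.

```python
# Illustrative dimensions; the last two are assumed placeholders.
DIMENSIONS = [
    "scalability", "financial_viability", "technical_reliability",
    "governance", "organizational_readiness", "data_quality", "security",
]
PASS_THRESHOLD = 3  # assumed 1-5 scale per dimension

def evaluate(scores: dict[str, int]) -> dict[str, bool]:
    """Return pass/fail per dimension; unscored dimensions fail closed."""
    return {d: scores.get(d, 0) >= PASS_THRESHOLD for d in DIMENSIONS}

def overall_pass(scores: dict[str, int]) -> bool:
    """Overall pass requires every dimension to meet its threshold."""
    return all(evaluate(scores).values())
```

The key design point is failing closed: a dimension without documented evidence scores zero, so a gap in the record reads as a finding, not a pass.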

Framework Alignment: Our methods map to NIST AI RMF and ISO/IEC 42001. "Alignment" means our evaluation criteria reference these frameworks; it does not imply certification or endorsement by NIST, ISO, the European Commission, or AICPA.

Evidence available on request:

  • Sample evaluation report structure (redacted) showing radar chart, Monte Carlo output, and executive summary format.
  • Seven-dimension scoring methodology with criteria definitions.
  • Control-mapping sheet to NIST AI RMF and ISO/IEC 42001, with EU AI Act obligations noted where relevant.
  • Sample NDA and data-use plan for PE engagements.

Contact & Responsible Disclosure

Questions about privacy, security, or our evaluation methodology? Email security@onrampgrc.com or use the contact form.

For security researchers: If you believe you've found a vulnerability, please send details to the same address with "VULN" in the subject. We acknowledge receipt within 5 business days.