Test in Sandbox

TrustGate provides a sandbox environment for testing your integration without affecting production data or incurring charges. This guide explains how to use sandbox mode effectively.

Sandbox vs. Production

| Feature | Sandbox | Production |
|---|---|---|
| API Key Prefix | sk_test_ | sk_live_ |
| Data Persistence | 7 days | Permanent |
| Screening Results | Simulated | Real watchlists |
| Document Verification | Mock OCR | Real document analysis |
| Biometrics | Mock face match | Real biometric verification |
| Rate Limits | 1,000/min | Based on plan |
| Billing | Free ($0) | Usage-based |
| Webhooks | Delivered (with sandbox: true) | Delivered |
| Attestations | Valid signature (with sandbox: true) | Valid signature |
| Dashboard | Separate view (Test Mode toggle) | Default view |

Getting Your Sandbox API Key

  1. Log in to the TrustGate dashboard at app.bytrustgate.com
  2. Go to Integrations → API Keys (or Dev Workbench → API Keys)
  3. Find your sandbox key (starts with sk_test_) or create one
  4. Click the eye icon to reveal and copy it
Environment Variables

Use environment variables to switch between sandbox and production:

# Development
TRUSTGATE_API_KEY=sk_test_xxxxxxxxxxxx

# Production
TRUSTGATE_API_KEY=sk_live_xxxxxxxxxxxx
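
To make the switch explicit in code, a small helper can read the key and fail fast if its prefix doesn't match the environment. This is a sketch of our own (the helper name and behavior are not part of the TrustGate SDK); only the TRUSTGATE_API_KEY variable and the key prefixes come from this guide:

```python
import os

def get_trustgate_key(env: str) -> str:
    """Read TRUSTGATE_API_KEY and verify its prefix matches the environment."""
    key = os.environ["TRUSTGATE_API_KEY"]
    expected = "sk_live_" if env == "production" else "sk_test_"
    if not key.startswith(expected):
        raise RuntimeError(f"{env} expects a key starting with {expected!r}")
    return key
```

This catches the most common go-live mistake, a sk_test_ key left in a production deployment, at startup rather than at the first failed API call.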

Dashboard Test Mode

The TrustGate dashboard includes a Test Mode toggle (similar to Stripe's approach) that lets you view sandbox data separately from production.

  • Toggle location: Top-right of the dashboard header
  • Orange banner: When Test Mode is active, an orange "TEST MODE" banner appears
  • Full isolation: All pages (Applicants, Companies, Cases, Alerts, Analytics, Audit Log) show only sandbox data when toggled on
  • No cross-contamination: Test verifications never appear in your production views

This means you can run through complete verification flows with test data, review them in the dashboard, and confirm everything works before going live.

Test Data

API Test Applicants

Use these test names to trigger specific screening results:

| Name | Screening Result | Description |
|---|---|---|
| John Smith | Clear | No matches found |
| Sanctions Match | Sanctions hit | Triggers OFAC SDN match |
| PEP Tier One | PEP hit (Tier 1) | Triggers high-level PEP match |
| PEP Tier Three | PEP hit (Tier 3) | Triggers lower-level PEP match |
| Media Match | Adverse media hit | Triggers negative news match |
| Multi Hit | Multiple hits | Triggers sanctions + PEP + adverse media |
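
To exercise every screening outcome in one pass, you can loop over the table's names. The sketch below uses the same POST /api/v1/applicants endpoint shown in the workflows later in this guide; the helper name, external IDs, and email pattern are our own illustrative choices:

```python
# One (first_name, last_name) pair per row of the table above.
TEST_NAMES = [
    ("John", "Smith"),       # clear
    ("Sanctions", "Match"),  # OFAC SDN hit
    ("PEP", "Tier One"),     # PEP hit (Tier 1)
    ("PEP", "Tier Three"),   # PEP hit (Tier 3)
    ("Media", "Match"),      # adverse media hit
    ("Multi", "Hit"),        # sanctions + PEP + adverse media
]

def build_screening_payloads():
    """Build one applicant payload per screening scenario."""
    return [
        {
            "external_id": f"screen_test_{i:03d}",
            "email": f"screen_test_{i:03d}@example.com",
            "first_name": first,
            "last_name": last,
        }
        for i, (first, last) in enumerate(TEST_NAMES)
    ]

if __name__ == "__main__":
    import requests

    for payload in build_screening_payloads():
        requests.post(
            "https://api.bytrustgate.com/api/v1/applicants",
            headers={"Authorization": "Bearer sk_test_YOUR_KEY"},
            json=payload,
        )
```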

SDK Magic Values

When using the SDK flow (embedded verification), use these first names to control the verification outcome. Any name not listed below results in a successful (approved) verification.

| first_name | Outcome | Details |
|---|---|---|
| (any other) | Approved | All checks pass — face match, document, screening |
| Rejected | Rejected | Face match fails (42% similarity) |
| Review | Review Required | Screening hit found, case auto-created |
| Sanctions | Review Required | OFAC sanctions match detected |
| Expired | Rejected | Document expired during verification |
| Mismatch | Rejected | Name on ID doesn't match submitted name |
| Fraud | Rejected | Liveness check failed (spoofing detected) |

Default is Approved

Most integration testing is happy-path testing. TrustGate defaults to approved for any non-magic name, so you only need magic values when testing failure scenarios. This matches the industry standard (Onfido, Stripe).

SDK Error Simulation

Use these first names to simulate error conditions and test your error handling:

| first_name | Simulated Error | HTTP Status |
|---|---|---|
| Timeout | Gateway timeout on submit | 504 |
| ServerError | Internal server error | 500 |
| RateLimit | Too many requests | 429 |

Test Documents

In sandbox mode, document verification uses simulated OCR. Use these test document numbers:

| Document Number | Verification Result |
|---|---|
| PASS123456 | Verified successfully |
| PASS_EXPIRED | Rejected - expired document |
| PASS_FAKE | Rejected - suspected fraud |
| PASS_BLUR | Rejected - image quality |

Test Countries

Use these country codes to test geographic risk scenarios:

| Country Code | Risk Level | Notes |
|---|---|---|
| USA, GBR, CAN | Low risk | Standard processing |
| VNM, VEN, SYR | Medium risk | FATF grey list countries |
| IRN, PRK, MMR | High risk | FATF black list countries |
| LBN, YEM, BOL | Medium risk | FATF grey list countries |

Testing Workflows

1. Test Clear Verification Flow

Test the happy path where an applicant passes all checks:

curl -X POST https://api.bytrustgate.com/api/v1/applicants \
  -H "Authorization: Bearer sk_test_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "external_id": "test_clear_001",
    "email": "clear@example.com",
    "first_name": "John",
    "last_name": "Smith",
    "date_of_birth": "1990-01-15",
    "nationality": "USA"
  }'

Expected result:

  • Screening returns clear
  • Risk score: 10-30 (low)
  • Ready for auto-approval
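
The routing decision implied by these expected results can be sketched as a simple threshold check. The cutoffs mirror the risk scores quoted in this guide; the function itself is illustrative, not part of the TrustGate API:

```python
def triage(screening_status: str, risk_score: int) -> str:
    """Map a screening result to a next step, per the expected results in this guide."""
    if screening_status == "clear" and risk_score <= 30:
        return "auto_approve"      # low risk, no hits
    if risk_score >= 80:
        return "case_review"       # hits at this level auto-create a case
    return "manual_review"         # elevated risk, e.g. high-risk country
```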

2. Test Sanctions Hit Flow

Test what happens when an applicant matches a sanctions list:

curl -X POST https://api.bytrustgate.com/api/v1/applicants \
  -H "Authorization: Bearer sk_test_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "external_id": "test_sanctions_001",
    "email": "sanctions@example.com",
    "first_name": "Sanctions",
    "last_name": "Match",
    "date_of_birth": "1980-06-20",
    "nationality": "USA"
  }'

Expected result:

  • Screening returns hit
  • Hit type: sanctions
  • Confidence: 85-95%
  • Risk score: 80+ (high)
  • Case auto-created

3. Test PEP Hit Flow

Test politically exposed person detection:

curl -X POST https://api.bytrustgate.com/api/v1/applicants \
  -H "Authorization: Bearer sk_test_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "external_id": "test_pep_001",
    "email": "pep@example.com",
    "first_name": "PEP",
    "last_name": "Tier One",
    "date_of_birth": "1975-03-10",
    "nationality": "GBR"
  }'

Expected result:

  • Screening returns hit
  • Hit type: pep
  • PEP tier: 1
  • Position: "Former Head of State"
  • Risk score: 70-85 (high)

4. Test High-Risk Country Flow

Test geographic risk flagging:

curl -X POST https://api.bytrustgate.com/api/v1/applicants \
  -H "Authorization: Bearer sk_test_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "external_id": "test_country_001",
    "email": "highrisk@example.com",
    "first_name": "Test",
    "last_name": "User",
    "date_of_birth": "1985-08-22",
    "nationality": "IRN"
  }'

Expected result:

  • Risk score: 50+ (elevated)
  • Flag: high_risk_country
  • Requires enhanced due diligence

5. Test SDK Verification Flow

Test the embedded SDK end-to-end:

// Create an access token with your test key
const response = await fetch('https://api.bytrustgate.com/api/v1/sdk/access-token', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer sk_test_YOUR_KEY',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    external_id: 'sdk_test_001',
    first_name: 'John', // Use magic names to test different outcomes
    last_name: 'Doe',
  }),
});

const { access_token } = await response.json();

// Initialize the SDK with the token
TrustGate.init({
  token: access_token,
  onComplete: (result) => {
    console.log(result.status); // "approved"
    console.log(result.sandbox); // true
  },
  onError: (error) => {
    console.error(error);
  },
});

Sandbox Attestations

Sandbox verifications produce valid attestations with real cryptographic signatures, so you can test your attestation verification logic. The only difference is the sandbox: true flag in the attestation metadata.

{
  "attestation_id": "att_abc123",
  "applicant_id": "app_xyz789",
  "status": "approved",
  "sandbox": true,
  "issued_at": "2026-01-15T10:30:00Z",
  "signature": "eyJhbGciOiJSUzI1NiIs..."
}

The verify_url page for sandbox attestations shows a clear "TEST VERIFICATION" banner, so there's no confusion between test and real verifications.
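
If your backend gates anything on attestations, make the sandbox check explicit there too. A minimal sketch over the fields shown above; verifying the signature JWS itself is a separate step and is not shown here:

```python
def accept_attestation(attestation: dict, *, allow_sandbox: bool = False) -> bool:
    """Accept an approved attestation, rejecting sandbox ones in production."""
    if attestation.get("sandbox") and not allow_sandbox:
        return False  # a test attestation reached a production code path
    return attestation.get("status") == "approved"
```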

Testing Webhooks

Set Up Test Webhook

  1. Use a webhook testing service like webhook.site or RequestBin
  2. Copy the generated URL
  3. Add it in Integrations → Webhooks with your sandbox API key
  4. Select events to receive

Sandbox Webhook Payloads

Sandbox webhooks are delivered normally but include a sandbox: true flag in the payload. This lets you test your webhook handler end-to-end while being able to distinguish test events from real ones.

{
  "event": "applicant.created",
  "timestamp": "2026-01-20T12:00:00Z",
  "sandbox": true,
  "data": {
    "applicant_id": "...",
    "external_id": "webhook_test_001",
    "status": "pending",
    "is_sandbox": true
  }
}

Webhook Handler Best Practice

Always check the sandbox field in your webhook handler and route test events accordingly:

def handle_webhook(payload):
    if payload.get("sandbox"):
        log.info("Sandbox event - skipping production logic")
        return {"status": "ok"}
    # ... production handling

Trigger Test Events

Each action in sandbox triggers real webhooks:

# This will send an applicant.created webhook
curl -X POST https://api.bytrustgate.com/api/v1/applicants \
-H "Authorization: Bearer sk_test_YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{
"external_id": "webhook_test_001",
"first_name": "Webhook",
"last_name": "Test",
"email": "webhook@example.com"
}'

Testing Error Scenarios

Rate Limiting

To test rate limit handling, send requests rapidly:

import requests

API_KEY = "sk_test_YOUR_KEY"

for i in range(1100):  # Exceed the 1,000/min sandbox limit
    response = requests.get(
        "https://api.bytrustgate.com/api/v1/applicants",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    if response.status_code == 429:
        print(f"Rate limited at request {i}")
        retry_after = response.headers.get("Retry-After", 60)
        print(f"Retry after: {retry_after} seconds")
        break
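
The Going Live checklist below expects exponential backoff for 429s. One way to structure it, with the HTTP call injected so the logic is testable; this helper is our own sketch, not SDK code:

```python
import time

def get_with_backoff(fetch, max_retries=5, sleep=time.sleep):
    """Retry `fetch()` on HTTP 429 with exponential backoff.

    `fetch` is any zero-argument callable returning an object with
    `.status_code` and `.headers` (e.g. lambda: requests.get(url, headers=h)).
    Retry-After is honored when the server sends it.
    """
    delay = 1.0
    for _ in range(max_retries):
        response = fetch()
        if response.status_code != 429:
            return response
        retry_after = response.headers.get("Retry-After")
        sleep(float(retry_after) if retry_after else delay)
        delay = min(delay * 2, 60)
    return response
```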

Invalid Data

Test validation errors:

# Missing required field
curl -X POST https://api.bytrustgate.com/api/v1/applicants \
-H "Authorization: Bearer sk_test_YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{
"email": "incomplete@example.com"
}'

Response:

{
  "detail": [
    {
      "loc": ["body", "first_name"],
      "msg": "field required",
      "type": "value_error.missing"
    }
  ]
}
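
The detail list is easy to flatten into per-field messages for your signup form. A small helper of our own, assuming the error shape shown above:

```python
def validation_messages(error_body: dict) -> dict:
    """Flatten a validation error response into {field_name: message}."""
    messages = {}
    for err in error_body.get("detail", []):
        field = err["loc"][-1]  # last loc element is the field name
        messages[field] = err["msg"]
    return messages
```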

SDK Error Simulation

Test how your integration handles service errors by using magic first names in the SDK flow:

// Simulate a timeout
const token = await createAccessToken({
  first_name: 'Timeout',
  last_name: 'Test',
});
// SDK will receive a 504 on submit — test your retry logic

// Simulate a server error
const token2 = await createAccessToken({
  first_name: 'ServerError',
  last_name: 'Test',
});
// SDK will receive a 500 on submit — test your error UI

// Simulate rate limiting
const token3 = await createAccessToken({
  first_name: 'RateLimit',
  last_name: 'Test',
});
// SDK will receive a 429 on submit — test your backoff logic

Network Errors

Test timeout handling by using a slow network or adding delays to your webhook endpoint.

Sandbox Data Management

Cleaning Up Test Data

Sandbox data is automatically deleted after 7 days. To manually clean up:

# Delete a test applicant
curl -X DELETE "https://api.bytrustgate.com/api/v1/applicants/{applicant_id}/gdpr-delete" \
  -H "Authorization: Bearer sk_test_YOUR_KEY" \
  -G \
  -d "confirmation=CONFIRM_DELETE" \
  -d "reason=Test cleanup"
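
To clean up in bulk before the 7-day expiry, you can list applicants and delete those with a test prefix. This is a sketch: it assumes the GET /api/v1/applicants endpoint used earlier returns applicants with id and external_id fields; check the API reference for the actual response shape.

```python
def select_test_applicant_ids(applicants, prefix="test_"):
    """Pick the IDs of applicants whose external_id marks them as test data."""
    return [
        a["id"]
        for a in applicants
        if a.get("external_id", "").startswith(prefix)
    ]

if __name__ == "__main__":
    import requests

    headers = {"Authorization": "Bearer sk_test_YOUR_KEY"}
    base = "https://api.bytrustgate.com/api/v1"
    applicants = requests.get(f"{base}/applicants", headers=headers).json()
    for applicant_id in select_test_applicant_ids(applicants):
        requests.delete(
            f"{base}/applicants/{applicant_id}/gdpr-delete",
            headers=headers,
            params={"confirmation": "CONFIRM_DELETE", "reason": "Test cleanup"},
        )
```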

Seeding Demo Data

For UI testing, you can seed demo applicants via the debug endpoint (sandbox only):

curl -X POST https://api.bytrustgate.com/api/v1/auth/seed-demo-data \
  -H "Authorization: Bearer sk_test_YOUR_KEY" \
  -G \
  -d "tenant_id=YOUR_TENANT_ID" \
  -d "count=25"

Going Live Checklist

Before switching to production:

Integration

  • All test scenarios pass (clear, sanctions, PEP, high-risk country)
  • SDK flow completes successfully with default (approved) outcome
  • SDK magic values tested for rejection, review, and error scenarios
  • Webhook endpoint handles all event types
  • Webhook handler checks sandbox flag and routes accordingly
  • Attestation verification logic works with sandbox attestations

Error Handling

  • Error handling is robust (500, 504, 429 responses)
  • Rate limiting is handled gracefully with exponential backoff
  • Network timeouts are handled

Security

  • API key is stored securely (environment variables, not code)
  • sk_test_ key is NOT used in production
  • Webhook signatures are verified

Configuration

  • Production webhook URL is configured
  • Production API key (sk_live_) is set in environment
  • Team members have appropriate roles
  • Compliance workflows are reviewed and enabled

Verification

  • Dashboard Test Mode toggle shows sandbox data correctly
  • Production dashboard is clean (no test data leaking through)
  • Verify first few real verifications manually
  • Monitor error rates and latency after go-live

Switch to Production

  1. Replace sk_test_ API key with sk_live_ in your environment
  2. Update webhook URLs to production endpoints
  3. Verify first few real verifications manually
  4. Monitor error rates and latency

Best Practices

Separate Test Data

Use distinctive external IDs for test data:

const externalId = `test_${Date.now()}_${Math.random().toString(36).slice(2, 11)}`;

Automated Testing

Include API tests in your CI/CD pipeline:

# pytest example
def test_create_applicant_clear():
    # create_applicant and run_screening are your own thin wrappers
    # around POST /applicants and the screening endpoint
    response = create_applicant("John", "Smith")
    assert response.status_code == 201

    applicant_id = response.json()["id"]
    screening = run_screening(applicant_id)
    assert screening.json()["status"] == "clear"

Document Test Cases

Maintain a test matrix:

| Scenario | Test Name | Expected Result | Status |
|---|---|---|---|
| Clear applicant | test_clear_001 | Approved | |
| Sanctions hit | test_sanctions_001 | Case created | |
| PEP hit | test_pep_001 | Manual review | |
| High-risk country | test_country_001 | EDD required | |
| SDK rejection | first_name: Rejected | Rejected (face match) | |
| SDK review | first_name: Review | Review required | |
| SDK error | first_name: Timeout | 504 timeout | |

Next Steps