Test in Sandbox
TrustGate provides a sandbox environment for testing your integration without affecting production data or incurring charges. This guide explains how to use sandbox mode effectively.
Sandbox vs. Production
| Feature | Sandbox | Production |
|---|---|---|
| API Key Prefix | sk_test_ | sk_live_ |
| Data Persistence | 7 days | Permanent |
| Screening Results | Simulated | Real watchlists |
| Document Verification | Mock OCR | Real document analysis |
| Biometrics | Mock face match | Real biometric verification |
| Rate Limits | 1000/min | Based on plan |
| Billing | Free ($0) | Usage-based |
| Webhooks | Delivered (with sandbox: true) | Delivered |
| Attestations | Valid signature (with sandbox: true) | Valid signature |
| Dashboard | Separate view (Test Mode toggle) | Default view |
Getting Your Sandbox API Key
- Log in to the TrustGate dashboard at app.bytrustgate.com
- Go to Integrations → API Keys (or Dev Workbench → API Keys)
- Find your sandbox key (starts with `sk_test_`) or create one
- Click the eye icon to reveal and copy it
Use environment variables to switch between sandbox and production:
```bash
# Development
TRUSTGATE_API_KEY=sk_test_xxxxxxxxxxxx

# Production
TRUSTGATE_API_KEY=sk_live_xxxxxxxxxxxx
```
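A small guard in your test suite makes the switch fail-safe. A minimal sketch (the `get_trustgate_key` helper is hypothetical, not part of the TrustGate SDK):

```python
import os

def get_trustgate_key(require_sandbox: bool = False) -> str:
    """Read the TrustGate API key from the environment.

    Pass require_sandbox=True at the top of test suites so a stray
    sk_live_ key can never be used against production by accident.
    """
    key = os.environ["TRUSTGATE_API_KEY"]
    if require_sandbox and not key.startswith("sk_test_"):
        raise RuntimeError("Refusing to run with a non-sandbox key")
    return key
```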
Dashboard Test Mode
The TrustGate dashboard includes a Test Mode toggle (similar to Stripe's approach) that lets you view sandbox data separately from production.
- Toggle location: Top-right of the dashboard header
- Orange banner: When Test Mode is active, an orange "TEST MODE" banner appears
- Full isolation: All pages (Applicants, Companies, Cases, Alerts, Analytics, Audit Log) show only sandbox data when toggled on
- No cross-contamination: Test verifications never appear in your production views
This means you can run through complete verification flows with test data, review them in the dashboard, and confirm everything works before going live.
Test Data
API Test Applicants
Use these test names to trigger specific screening results:
| Name | Screening Result | Description |
|---|---|---|
| John Smith | Clear | No matches found |
| Sanctions Match | Sanctions hit | Triggers OFAC SDN match |
| PEP Tier One | PEP hit (Tier 1) | Triggers high-level PEP match |
| PEP Tier Three | PEP hit (Tier 3) | Triggers lower-level PEP match |
| Media Match | Adverse media hit | Triggers negative news match |
| Multi Hit | Multiple hits | Triggers sanctions + PEP + adverse media |
SDK Magic Values
When using the SDK flow (embedded verification), use these first names to control the verification outcome. Any name not listed below results in a successful (approved) verification.
| first_name | Outcome | Details |
|---|---|---|
| (any other) | Approved | All checks pass (face match, document, screening) |
| Rejected | Rejected | Face match fails (42% similarity) |
| Review | Review Required | Screening hit found, case auto-created |
| Sanctions | Review Required | OFAC sanctions match detected |
| Expired | Rejected | Document expired during verification |
| Mismatch | Rejected | Name on ID doesn't match submitted name |
| Fraud | Rejected | Liveness check failed (spoofing detected) |
Most integration testing is happy-path testing, so TrustGate defaults to approved for any non-magic name; you only need magic values when exercising failure scenarios. This mirrors the approach used by Onfido and Stripe.
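For parametrized tests it helps to encode the magic-name table once. A minimal sketch; the outcome strings (`approved`, `rejected`, `review_required`) are illustrative labels, so match them to the actual status values your SDK callback returns:

```python
# Magic first names from the table above, mapped to the outcome
# each one triggers in the sandbox SDK flow.
SDK_MAGIC_OUTCOMES = {
    "Rejected": "rejected",
    "Review": "review_required",
    "Sanctions": "review_required",
    "Expired": "rejected",
    "Mismatch": "rejected",
    "Fraud": "rejected",
}

def expected_outcome(first_name: str) -> str:
    # Any non-magic name is approved -- the happy-path default.
    return SDK_MAGIC_OUTCOMES.get(first_name, "approved")
```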
SDK Error Simulation
Use these first names to simulate error conditions and test your error handling:
| first_name | Simulated Error | HTTP Status |
|---|---|---|
| Timeout | Gateway timeout on submit | 504 |
| ServerError | Internal server error | 500 |
| RateLimit | Too many requests | 429 |
Test Documents
In sandbox mode, document verification uses simulated OCR. Use these test document numbers:
| Document Number | Verification Result |
|---|---|
| PASS123456 | Verified successfully |
| PASS_EXPIRED | Rejected - expired document |
| PASS_FAKE | Rejected - suspected fraud |
| PASS_BLUR | Rejected - image quality |
Test Countries
Use these country codes to test geographic risk scenarios:
| Country Code | Risk Level | Notes |
|---|---|---|
| USA, GBR, CAN | Low risk | Standard processing |
| VNM, VEN, SYR | Medium risk | FATF grey list countries |
| IRN, PRK, MMR | High risk | FATF black list countries |
| LBN, YEM, BOL | Medium risk | FATF grey list countries |
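When a test needs a nationality at a given risk tier, a small lookup built from the table above keeps the intent readable. A sketch; the tier labels are informal, not API values:

```python
# Country tiers from the table above (ISO 3166-1 alpha-3 codes).
COUNTRY_RISK = {
    "USA": "low", "GBR": "low", "CAN": "low",
    "VNM": "medium", "VEN": "medium", "SYR": "medium",
    "LBN": "medium", "YEM": "medium", "BOL": "medium",
    "IRN": "high", "PRK": "high", "MMR": "high",
}

def pick_country(risk: str) -> str:
    """Return a test country code with the given risk level."""
    return next(c for c, r in COUNTRY_RISK.items() if r == risk)
```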
Testing Workflows
1. Test Clear Verification Flow
Test the happy path where an applicant passes all checks:
```bash
curl -X POST https://api.bytrustgate.com/api/v1/applicants \
  -H "Authorization: Bearer sk_test_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "external_id": "test_clear_001",
    "email": "clear@example.com",
    "first_name": "John",
    "last_name": "Smith",
    "date_of_birth": "1990-01-15",
    "nationality": "USA"
  }'
```
Expected result:
- Screening returns `clear`
- Risk score: 10-30 (low)
- Ready for auto-approval
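The same request in Python, assuming the `requests` library is available; the hypothetical `create_applicant` helper takes an injectable `http` object so it can be unit-tested without network access:

```python
BASE = "https://api.bytrustgate.com/api/v1"

def create_applicant(api_key: str, http=None, **fields) -> dict:
    """POST /applicants and return the created applicant as a dict.

    `http` defaults to the requests module (imported lazily);
    pass a stub object with a .post() method in unit tests.
    """
    if http is None:
        import requests
        http = requests
    resp = http.post(
        f"{BASE}/applicants",
        headers={"Authorization": f"Bearer {api_key}"},
        json=fields,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Usage (sandbox):
# applicant = create_applicant(
#     "sk_test_YOUR_KEY",
#     external_id="test_clear_001",
#     email="clear@example.com",
#     first_name="John",
#     last_name="Smith",
#     date_of_birth="1990-01-15",
#     nationality="USA",
# )
```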
2. Test Sanctions Hit Flow
Test what happens when an applicant matches a sanctions list:
```bash
curl -X POST https://api.bytrustgate.com/api/v1/applicants \
  -H "Authorization: Bearer sk_test_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "external_id": "test_sanctions_001",
    "email": "sanctions@example.com",
    "first_name": "Sanctions",
    "last_name": "Match",
    "date_of_birth": "1980-06-20",
    "nationality": "USA"
  }'
```
Expected result:
- Screening returns `hit`
- Hit type: `sanctions`
- Confidence: 85-95%
- Risk score: 80+ (high)
- Case auto-created
3. Test PEP Hit Flow
Test politically exposed person detection:
```bash
curl -X POST https://api.bytrustgate.com/api/v1/applicants \
  -H "Authorization: Bearer sk_test_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "external_id": "test_pep_001",
    "email": "pep@example.com",
    "first_name": "PEP",
    "last_name": "Tier One",
    "date_of_birth": "1975-03-10",
    "nationality": "GBR"
  }'
```
Expected result:
- Screening returns `hit`
- Hit type: `pep`
- PEP tier: 1
- Position: "Former Head of State"
- Risk score: 70-85 (high)
4. Test High-Risk Country Flow
Test geographic risk flagging:
```bash
curl -X POST https://api.bytrustgate.com/api/v1/applicants \
  -H "Authorization: Bearer sk_test_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "external_id": "test_country_001",
    "email": "highrisk@example.com",
    "first_name": "Test",
    "last_name": "User",
    "date_of_birth": "1985-08-22",
    "nationality": "IRN"
  }'
```
Expected result:
- Risk score: 50+ (elevated)
- Flag: `high_risk_country`
- Requires enhanced due diligence
5. Test SDK Verification Flow
Test the embedded SDK end-to-end:
```javascript
// Create an access token with your test key
const response = await fetch('https://api.bytrustgate.com/api/v1/sdk/access-token', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer sk_test_YOUR_KEY',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    external_id: 'sdk_test_001',
    first_name: 'John', // Use magic names to test different outcomes
    last_name: 'Doe',
  }),
});

const { access_token } = await response.json();

// Initialize the SDK with the token
TrustGate.init({
  token: access_token,
  onComplete: (result) => {
    console.log(result.status); // "approved"
    console.log(result.sandbox); // true
  },
  onError: (error) => {
    console.error(error);
  },
});
```
Sandbox Attestations
Sandbox verifications produce valid attestations with real cryptographic signatures, so you can test your attestation verification logic. The only difference is the `sandbox: true` flag in the attestation metadata.
```json
{
  "attestation_id": "att_abc123",
  "applicant_id": "app_xyz789",
  "status": "approved",
  "sandbox": true,
  "issued_at": "2026-01-15T10:30:00Z",
  "signature": "eyJhbGciOiJSUzI1NiIs..."
}
```
The verify_url page for sandbox attestations shows a clear "TEST VERIFICATION" banner, so there's no confusion between test and real verifications.
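Because sandbox signatures validate exactly like production ones, your production code path must also check the flag. A minimal sketch (signature verification itself is omitted; use your existing JWS verification first):

```python
def is_production_attestation(attestation: dict) -> bool:
    """True only for non-sandbox attestations.

    Run this *after* the signature check passes -- the signature on
    a sandbox attestation is just as valid as a production one.
    """
    return not attestation.get("sandbox", False)
```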
Testing Webhooks
Set Up Test Webhook
- Use a webhook testing service like webhook.site or RequestBin
- Copy the generated URL
- Add it in Integrations → Webhooks with your sandbox API key
- Select events to receive
Sandbox Webhook Payloads
Sandbox webhooks are delivered normally but include a `sandbox: true` flag in the payload. This lets you test your webhook handler end-to-end while being able to distinguish test events from real ones.
```json
{
  "event": "applicant.created",
  "timestamp": "2026-01-20T12:00:00Z",
  "sandbox": true,
  "data": {
    "applicant_id": "...",
    "external_id": "webhook_test_001",
    "status": "pending",
    "is_sandbox": true
  }
}
```
Always check the `sandbox` field in your webhook handler and route test events accordingly:
```python
def handle_webhook(payload):
    if payload.get("sandbox"):
        log.info("Sandbox event - skipping production logic")
        return {"status": "ok"}
    # ... production handling
```
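The going-live checklist below also requires verifying webhook signatures. TrustGate's exact header name and scheme aren't shown in this guide, so treat the hex-encoded HMAC-SHA256 sketch below as an assumption to check against the Webhooks reference:

```python
import hashlib
import hmac

def verify_webhook_signature(raw_body: bytes, header_sig: str, secret: str) -> bool:
    """Constant-time comparison of the expected HMAC against the header value.

    raw_body must be the exact bytes received, before any JSON parsing.
    """
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, header_sig)
```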
Trigger Test Events
Each action in sandbox triggers real webhooks:
```bash
# This will send an applicant.created webhook
curl -X POST https://api.bytrustgate.com/api/v1/applicants \
  -H "Authorization: Bearer sk_test_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "external_id": "webhook_test_001",
    "first_name": "Webhook",
    "last_name": "Test",
    "email": "webhook@example.com"
  }'
```
Testing Error Scenarios
Rate Limiting
To test rate limit handling, send requests rapidly:
```python
import requests

API_KEY = "sk_test_YOUR_KEY"

for i in range(1100):  # Exceed the 1000/min sandbox limit
    response = requests.get(
        "https://api.bytrustgate.com/api/v1/applicants",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    if response.status_code == 429:
        print(f"Rate limited at request {i}")
        retry_after = response.headers.get("Retry-After", 60)
        print(f"Retry after: {retry_after} seconds")
        break
```
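The checklist below asks for graceful handling with exponential backoff. A minimal sketch of a retry wrapper (not part of the TrustGate SDK); `call` is any zero-argument function returning a response-like object, and `sleep` is injectable for tests:

```python
import time

def with_backoff(call, max_attempts: int = 5, base_delay: float = 1.0,
                 sleep=time.sleep):
    """Retry `call()` while it returns 429, doubling the wait each time.

    Honors the Retry-After header when present and longer than the
    current backoff delay.
    """
    delay = base_delay
    resp = call()
    for _ in range(max_attempts - 1):
        if resp.status_code != 429:
            break
        wait = max(float(resp.headers.get("Retry-After", delay)), delay)
        sleep(wait)
        delay *= 2
        resp = call()
    return resp
```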
Invalid Data
Test validation errors:
```bash
# Missing required field
curl -X POST https://api.bytrustgate.com/api/v1/applicants \
  -H "Authorization: Bearer sk_test_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "email": "incomplete@example.com"
  }'
```
Response:
```json
{
  "detail": [
    {
      "loc": ["body", "first_name"],
      "msg": "field required",
      "type": "value_error.missing"
    }
  ]
}
```
SDK Error Simulation
Test how your integration handles service errors by using magic first names in the SDK flow:
```javascript
// Simulate a timeout
const token = await createAccessToken({
  first_name: 'Timeout',
  last_name: 'Test',
});
// SDK will receive a 504 on submit — test your retry logic

// Simulate a server error
const token2 = await createAccessToken({
  first_name: 'ServerError',
  last_name: 'Test',
});
// SDK will receive a 500 on submit — test your error UI

// Simulate rate limiting
const token3 = await createAccessToken({
  first_name: 'RateLimit',
  last_name: 'Test',
});
// SDK will receive a 429 on submit — test your backoff logic
```
Network Errors
Test timeout handling by using a slow network or adding delays to your webhook endpoint.
Sandbox Data Management
Cleaning Up Test Data
Sandbox data is automatically deleted after 7 days. To manually clean up:
```bash
# Delete a test applicant
curl -X DELETE "https://api.bytrustgate.com/api/v1/applicants/{applicant_id}/gdpr-delete" \
  -H "Authorization: Bearer sk_test_YOUR_KEY" \
  -G \
  -d "confirmation=CONFIRM_DELETE" \
  --data-urlencode "reason=Test cleanup"
```
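To clean up in bulk, you can combine the list and delete endpoints shown in this guide. A sketch with assumptions: the list response is taken to be `{"items": [...]}` (check the actual shape in the API reference), and `http` defaults to the `requests` module but is injectable for tests:

```python
BASE = "https://api.bytrustgate.com/api/v1"

def cleanup_test_applicants(api_key: str, prefix: str = "test_", http=None) -> int:
    """Delete sandbox applicants whose external_id starts with `prefix`."""
    if http is None:
        import requests
        http = requests
    headers = {"Authorization": f"Bearer {api_key}"}
    listing = http.get(f"{BASE}/applicants", headers=headers, timeout=10)
    listing.raise_for_status()
    deleted = 0
    for app in listing.json().get("items", []):
        if app.get("external_id", "").startswith(prefix):
            http.delete(
                f"{BASE}/applicants/{app['id']}/gdpr-delete",
                headers=headers,
                params={"confirmation": "CONFIRM_DELETE",
                        "reason": "Test cleanup"},
                timeout=10,
            )
            deleted += 1
    return deleted
```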
Seeding Demo Data
For UI testing, you can seed demo applicants via the debug endpoint (sandbox only):
```bash
curl -X POST https://api.bytrustgate.com/api/v1/auth/seed-demo-data \
  -H "Authorization: Bearer sk_test_YOUR_KEY" \
  -G \
  -d "tenant_id=YOUR_TENANT_ID" \
  -d "count=25"
```
Going Live Checklist
Before switching to production:
Integration
- All test scenarios pass (clear, sanctions, PEP, high-risk country)
- SDK flow completes successfully with default (approved) outcome
- SDK magic values tested for rejection, review, and error scenarios
- Webhook endpoint handles all event types
- Webhook handler checks `sandbox` flag and routes accordingly
- Attestation verification logic works with sandbox attestations
Error Handling
- Error handling is robust (500, 504, 429 responses)
- Rate limiting is handled gracefully with exponential backoff
- Network timeouts are handled
Security
- API key is stored securely (environment variables, not code)
- `sk_test_` key is NOT used in production
- Webhook signatures are verified
Configuration
- Production webhook URL is configured
- Production API key (`sk_live_`) is set in environment
- Team members have appropriate roles
- Compliance workflows are reviewed and enabled
Verification
- Dashboard Test Mode toggle shows sandbox data correctly
- Production dashboard is clean (no test data leaking through)
Switch to Production
- Replace `sk_test_` API key with `sk_live_` in your environment
- Update webhook URLs to production endpoints
- Verify first few real verifications manually
- Monitor error rates and latency
Best Practices
Separate Test Data
Use distinctive external IDs for test data:
```javascript
const externalId = `test_${Date.now()}_${Math.random().toString(36).slice(2, 11)}`;
```
Automated Testing
Include API tests in your CI/CD pipeline:
```python
# pytest example
def test_create_applicant_clear():
    response = create_applicant("John", "Smith")
    assert response.status_code == 201

    applicant_id = response.json()["id"]
    screening = run_screening(applicant_id)
    assert screening.json()["status"] == "clear"
```
Document Test Cases
Maintain a test matrix:
| Scenario | Test Name | Expected Result | Status |
|---|---|---|---|
| Clear applicant | test_clear_001 | Approved | ✓ |
| Sanctions hit | test_sanctions_001 | Case created | ✓ |
| PEP hit | test_pep_001 | Manual review | ✓ |
| High-risk country | test_country_001 | EDD required | ✓ |
| SDK rejection | first_name: Rejected | Rejected (face match) | ✓ |
| SDK review | first_name: Review | Review required | ✓ |
| SDK error | first_name: Timeout | 504 timeout | ✓ |
Next Steps
- Quick Start Guide - Build your first integration
- API Reference - Full API documentation
- Webhooks - Event notification setup