Why Batch Matters
Individual timestamping works for creators protecting one file at a time. But organizations operate at a different scale:
- A law firm processing 500 documents per day
- A software company releasing builds with thousands of artifacts
- A research institution archiving datasets weekly
- A photography agency timestamping entire shoots of 2,000+ images
- A compliance department recording daily business transactions
At these volumes, manual timestamping isn’t practical. Batch processing transforms timestamping from a manual task into an automated infrastructure component.
Current Status
TimeProof already supports scheduled batching in the live product today. The direct Batch Processing API and the deeper workflow automation described below are still future-phase work.
Use the current platform when you need batch-friendly timestamping right now. Treat the API workflow on this page as roadmap guidance for the future module, not as a released V1 endpoint.
Batch Workflow
The workflow below describes the planned future-phase module interface. It is included here so teams can understand the intended shape of the API before launch.
Step 1: Collect Hashes
Compute SHA-256 hashes for all files in the batch. This happens on your infrastructure — in your CI pipeline, your document management system, or a simple script.
```bash
# Example: hash all files in a directory
find ./artifacts -type f -exec sha256sum {} \; > hashes.txt
```
```python
# Example: Python batch hashing
import hashlib
import os

hashes = []
for root, dirs, files in os.walk('./artifacts'):
    for name in files:
        path = os.path.join(root, name)
        sha = hashlib.sha256()
        with open(path, 'rb') as fh:
            # Read in chunks so large files don't exhaust memory
            for chunk in iter(lambda: fh.read(65536), b''):
                sha.update(chunk)
        # Record the path relative to the batch root, not just the basename
        hashes.append({'file': os.path.relpath(path, './artifacts'),
                       'hash': sha.hexdigest()})
```
Step 2: Submit Batch
Send the array of hashes to TimeProof’s API:
```javascript
const response = await fetch(
  'https://api.timeprooflabs.com/api/v1/timestamp/batch',
  {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      hashes: hashes.map(h => h.hash),
      type: 'ST' // Scheduled for best batch economics
    })
  }
);
```
Step 3: Receive Certificates
After the batch is anchored, which happens immediately for Instant (IT) timestamps and at the next batch window for Scheduled (ST) timestamps, each file receives its individual certificate containing:
- The file’s SHA-256 hash
- Its unique Merkle proof (path from leaf to root)
- The blockchain transaction hash
- The Merkle root and block timestamp
- Verification instructions
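Because each certificate carries its own Merkle proof, its inclusion in the anchored batch can be recomputed offline. The sketch below assumes a proof encoded as a list of sibling hashes tagged with a left/right position; the released certificate schema may encode this differently.

```python
import hashlib

def verify_merkle_proof(leaf_hash: str, proof: list, merkle_root: str) -> bool:
    """Recompute the Merkle root from a leaf hash and its proof path.

    `proof` is a list of {'hash': hex_str, 'position': 'left'|'right'}
    steps -- an assumed layout, not a confirmed certificate format.
    """
    current = bytes.fromhex(leaf_hash)
    for step in proof:
        sibling = bytes.fromhex(step['hash'])
        # Order matters: the sibling sits to the left or right of us
        if step['position'] == 'left':
            combined = sibling + current
        else:
            combined = current + sibling
        current = hashlib.sha256(combined).digest()
    return current.hex() == merkle_root
```

If the recomputed root matches the root recorded on-chain, the file provably belonged to the batch at anchoring time.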
Step 4: Store and Distribute
Certificates can be:
- Stored alongside the original files in your document management system
- Archived in a separate evidence repository
- Distributed to relevant stakeholders
- Ingested into compliance platforms
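As one illustration of the first option, a certificate can be written as a JSON sidecar next to its source file. The certificate fields here are placeholders; nothing in this sketch depends on a specific schema.

```python
import json
from pathlib import Path

def store_certificate_sidecar(file_path: str, certificate: dict) -> Path:
    """Write a certificate next to its original file as
    `<name>.timeproof.json`, so the proof travels with the document."""
    sidecar = Path(str(file_path) + '.timeproof.json')
    sidecar.write_text(json.dumps(certificate, indent=2))
    return sidecar
```

Keeping the certificate adjacent to the original makes later re-verification a single directory walk.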
Integration Patterns
CI/CD Pipeline Integration
Timestamp build artifacts automatically after successful builds:
```yaml
# GitHub Actions example
- name: Timestamp build artifacts
  run: |
    # Hash all build outputs
    find ./dist -type f -exec sha256sum {} \; > hashes.txt
    # Build the JSON body the batch endpoint expects, then submit
    jq -Rn '{hashes: [inputs | split(" ")[0]], type: "ST"}' hashes.txt > body.json
    curl -X POST https://api.timeprooflabs.com/api/v1/timestamp/batch \
      -H "Authorization: Bearer $TIMEPROOF_API_KEY" \
      -H "Content-Type: application/json" \
      -d @body.json
```
This creates an immutable record of what was built and when — invaluable for audit trails, compliance, and IP protection.
Document Management Integration
Hook into document creation/modification events:
```javascript
// On document save
documentSystem.on('save', async (doc) => {
  const hash = sha256(doc.content);
  await timeproof.timestamp(hash, { type: 'ST' });
  doc.metadata.timestampHash = hash;
  doc.metadata.timestampDate = new Date();
});
```
Every saved document is automatically timestamped, building a comprehensive provenance trail.
Scheduled Archival
Run nightly/weekly batch jobs for bulk operations:
```python
# Nightly compliance archival
def nightly_timestamp():
    # Collect all documents modified today
    docs = db.query(
        "SELECT * FROM documents WHERE modified >= CURRENT_DATE"
    )
    # Hash each document
    hashes = [sha256(doc.content) for doc in docs]
    # Submit batch
    result = timeproof.batch_timestamp(
        hashes, timestamp_type='ST'
    )
    # Store certificate references
    for doc, cert in zip(docs, result.certificates):
        db.update(doc.id, timestamp_cert=cert.id)
```
Economics at Scale
Batch processing with Scheduled Timestamps provides the best per-file economics:
| Batch Size | Scheduled Credits Needed | Example Purchase Path | Notes |
|---|---|---|---|
| 50 files | 50 | Half of a 100-credit Micro pack | Good for one-off batches. |
| 200 files | 200 | Two 100-credit packs or one 350-credit Basic pack | Leaves room for additional scheduled files on the Basic pack. |
| 1,000 files | 1,000 | One Pro pack | Fits recurring large archive batches. |
| 10,000 files | 10,000 | Enterprise plan or multiple Bulk packs | Best for constant high-volume automation. |
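Since Scheduled timestamps cost 1 credit per file, sizing a purchase is ceiling division over the pack size. The pack sizes below are taken from the table (the Pro size is inferred from "1,000 files fits one Pro pack" and should be confirmed against current pricing):

```python
# Assumed pack sizes from the table above; Pro = 1,000 is an inference
PACKS = {'Micro': 100, 'Basic': 350, 'Pro': 1000}

def packs_needed(n_files: int, pack: str) -> int:
    """Smallest number of `pack`-size credit packs covering n_files
    Scheduled timestamps at 1 credit per file."""
    size = PACKS[pack]
    return -(-n_files // size)  # ceiling division
```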
For high-volume users, Scheduled timestamps at 1 credit per file remain the lowest-cost anchoring mode on the platform:
| Alternative | Per-File Cost |
|---|---|
| Notarization | $25-$150 per session |
| Copyright registration | $45-$65 per work |
| RFC 3161 timestamp authority | $0.10-$1.00+ |
| TimeProof Scheduled | 1 credit/file |
Batch Certificate Management
Each file in a batch receives an independent certificate. This means:
- Individual distribution: Send each certificate to the relevant stakeholder without exposing other files in the batch
- Individual verification: Each certificate is self-contained — it can be verified without referencing other files
- Selective disclosure: Share proof for specific files without revealing the batch composition
- Flexible storage: Certificates can be stored in different systems, locations, or access-controlled repositories
The Merkle tree architecture ensures batch efficiency without sacrificing individual file privacy or independent verifiability.
Best Practices
Hash before transmitting
Always compute hashes on your own infrastructure. Never send file contents to any external service for hashing. This preserves confidentiality and reduces network bandwidth.
Keep original files
Timestamps prove a specific hash existed at a specific time. To verify later, you need the original file to recompute the hash. Store originals securely with their certificates.
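That re-verification step is just a hash recomputation and comparison, sketched here against whatever hash value the certificate recorded:

```python
import hashlib

def matches_certificate(path: str, certified_hash: str) -> bool:
    """Recompute a file's SHA-256 and compare it to the hash recorded
    in its certificate. Any change to the file breaks the match."""
    sha = hashlib.sha256()
    with open(path, 'rb') as fh:
        # Stream in chunks so large originals verify without loading fully
        for chunk in iter(lambda: fh.read(65536), b''):
            sha.update(chunk)
    return sha.hexdigest() == certified_hash
```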
Use consistent hashing
Ensure you use the same SHA-256 implementation consistently. Different tools should produce identical hashes for identical files, but verify this during integration testing.
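One cheap integration-test check is to run your hashing code against published SHA-256 test vectors before pointing it at real files:

```python
import hashlib

# Well-known SHA-256 test vectors (FIPS 180-2)
VECTORS = {
    b'': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855',
    b'abc': 'ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad',
}

def hasher_self_test() -> bool:
    """Confirm the local SHA-256 implementation matches published
    vectors before trusting it in a batch pipeline."""
    return all(hashlib.sha256(msg).hexdigest() == expected
               for msg, expected in VECTORS.items())
```

Running the same vectors through `sha256sum` or any other tool in the pipeline should produce identical digests.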
Monitor batch results
Implement alerting for failed timestamps. In automated workflows, a failed timestamp might indicate infrastructure issues, expired credits, or API problems that need attention.
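A minimal shape for that alerting, assuming the same hypothetical `batch_timestamp` client call used in the examples above: retry transient failures, log each one, and raise loudly when retries are exhausted so the job scheduler can page someone.

```python
import logging

logger = logging.getLogger('timeproof.batch')

def submit_with_alerting(client, hashes, max_retries=3):
    """Submit a batch, retrying on failure and surfacing errors
    instead of silently dropping a day's timestamps."""
    for attempt in range(1, max_retries + 1):
        try:
            return client.batch_timestamp(hashes, timestamp_type='ST')
        except Exception as exc:
            logger.warning('batch attempt %d/%d failed: %s',
                           attempt, max_retries, exc)
    # All retries exhausted: escalate rather than swallow the failure
    raise RuntimeError(f'timestamp batch failed after {max_retries} attempts')
```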