Private inference without fine-tuning (NEW!)
Generate privacy-guaranteed synthetic data for training/evaluating LLMs
Inject realistic PII into training data to stress-test LLMs and quantify leakage rates (sketched below)
When it comes to language models, data masking doesn’t quite cut it. Here's why
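As a rough illustration of the PII-injection idea above, here is a minimal sketch of planting synthetic "canary" identities in a training corpus and estimating a leakage rate from model outputs. It assumes the open-source `faker` library for generating synthetic identities; the function names, corpus format, and `model_generate` callback are hypothetical placeholders, not the API of any particular tool.

```python
# Minimal sketch: plant synthetic PII "canaries" in a training corpus, then
# estimate a leakage rate by checking how many canaries a trained model
# reproduces verbatim. Assumes the `faker` library (pip install Faker);
# `model_generate` is a hypothetical callback into your trained model.
import random
from faker import Faker

fake = Faker()

def make_canaries(n: int = 100, seed: int = 0) -> list[dict]:
    """Generate n synthetic identities that never belonged to real people."""
    Faker.seed(seed)
    return [{"name": fake.name(), "email": fake.email()} for _ in range(n)]

def inject_canaries(corpus: list[str], canaries: list[dict], seed: int = 0) -> list[str]:
    """Splice each canary sentence into a randomly chosen training document."""
    rng = random.Random(seed)
    docs = list(corpus)
    for c in canaries:
        i = rng.randrange(len(docs))
        docs[i] += f" You can reach {c['name']} at {c['email']}."
    return docs

def leakage_rate(model_generate, canaries: list[dict], samples_per_canary: int = 5) -> float:
    """Fraction of canaries whose email the model reproduces verbatim when
    prompted with the canary's surrounding template."""
    leaked = 0
    for c in canaries:
        prompt = f"You can reach {c['name']} at"
        outputs = [model_generate(prompt) for _ in range(samples_per_canary)]
        if any(c["email"] in out for out in outputs):
            leaked += 1
    return leaked / len(canaries)
```

Exact verbatim match is the simplest leakage signal; a fuller study would also vary how often each canary repeats in the corpus and the sampling temperature at generation time, since both strongly affect memorization.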