Red Teaming Report
Get the latest test results on the Databricks LLM (dbrx-instruct)
Red Teaming Report: Databricks LLM (dbrx-instruct)
Get our latest Red Teaming report on this popular Databricks LLM. We conduct rigorous security tests to detect vulnerabilities like malware and injection attacks, while also evaluating model integrity by assessing biases, toxicity, and hallucinations, ensuring alignment with regulatory standards and brand values.
The report will include:
- Security Risks for Jailbreak and Malware
- Ethical Risks for Toxicity and Bias
- Overall risk comparison with Llama3-8B
- LLM Recommendations and a checklist of safety and security guidelines.