AI Guardrails¶
Guardrails are a set of guidelines and runtime checks designed to ensure the safe and effective use of AI systems. They help mitigate risks and promote responsible AI usage. API Platform provides three types of guardrails to enhance the security and reliability of AI APIs:
- Basic Guardrails: These are the foundational security measures that apply to all AI APIs, ensuring a baseline level of protection.
- Advanced Guardrails: For AI applications that require finer-grained control, API Platform integrates with Guardrails AI, an extensible framework for composing validators with complex AI models and services. This lets you use open-source fine-tuned language models to implement advanced guardrails tailored to your specific requirements.
- Third-Party Guardrail Integrations: These are integrations with third-party services that provide additional security and compliance features for AI APIs.
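To make the idea concrete, here is a minimal sketch of what a basic input guardrail could look like: a check that screens prompts for email addresses (possible PII) before they reach the AI backend. The function and pattern names are hypothetical illustrations, not part of any API Platform SDK.

```python
import re

# Hypothetical guardrail: reject prompts containing email addresses.
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def check_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for an incoming prompt.

    Blocks any prompt that appears to contain an email address,
    as a simple stand-in for a PII guardrail.
    """
    if EMAIL_PATTERN.search(prompt):
        return False, "prompt contains an email address (possible PII)"
    return True, "ok"
```

A real deployment would chain several such checks (content safety, prompt-injection detection, output filtering) and run them on both requests and responses; this sketch only shows the shape of a single input-side check.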