Last Updated on 26 November 2024 by mgm-marketing
More and more insurance companies are introducing AI solutions, from individual POCs to company-wide rollouts of AI chat and assistant tools. After the initial euphoria, disillusionment often follows when the solutions do not deliver the expected results: AI responses are imprecise, hallucinated, or simply not what the user expects.
The Cosmo AI Assistant focuses on “data efficiency”, i.e. the conversion of unstructured data into structured formats for automatic further processing. Example: when a broker sends an email about changes to their customer’s motor fleet, all motor vehicle and claims data is extracted fully automatically and transferred to the underwriting system, the premium is adjusted, and the supplementary document is generated.
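To illustrate the idea of turning an unstructured broker email into structured, machine-processable records, here is a deliberately simplified sketch. The email text, the `VehicleChange` schema, and the rule-based extractor are all hypothetical illustrations; a production assistant would use an LLM with a defined output schema rather than regular expressions.

```python
import re
from dataclasses import dataclass

@dataclass
class VehicleChange:
    """Hypothetical target schema for one fleet change."""
    license_plate: str
    action: str  # e.g. "add" or "remove"

# Illustrative broker email (invented example data)
email_body = """
Dear underwriter,
please add vehicle M-AB 1234 to the fleet policy
and remove vehicle M-CD 5678.
"""

def extract_changes(text: str) -> list[VehicleChange]:
    # Toy rule-based extraction; stands in for the LLM-based step
    pattern = r"(add|remove) vehicle ([A-Z]+-[A-Z]+ \d+)"
    return [VehicleChange(license_plate=plate, action=action)
            for action, plate in re.findall(pattern, text)]

changes = extract_changes(email_body)
```

Once the data is in a structured form like this, downstream steps such as premium adjustment and document generation can run automatically.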
This data and process efficiency can only be guaranteed if the quality of the AI solution can be expressed in measurable, comprehensible figures: which data is recognized at which rate? For which data might a higher error tolerance be acceptable, and for which none at all? When should an expert always verify the data?
The mgm AI Evaluation Framework answers these questions right from the start:
- Before the project starts: What AI quality can I expect with my data when using the AI assistant?
- During the project: How can I systematically and comprehensibly improve the AI quality, e.g. through AI fine-tuning or AI training?
- In operation: Is it worth switching to a different LLM provider (OpenAI, Llama, Mistral…)? What specific quality gains can I expect?
The Evaluation Framework works with your data (anonymized if necessary) and uses metrics that are tailored to your case and understandable to non-IT professionals.
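The kind of measurable, per-field quality figures described above can be sketched as follows. The field names, ground-truth values, and tolerance thresholds are invented for illustration; they are not part of the actual framework, which defines metrics together with the customer.

```python
# Hypothetical ground truth vs. AI-extracted values for two documents
ground_truth = [
    {"license_plate": "M-AB 1234", "premium": "420.00", "claims": "1"},
    {"license_plate": "M-CD 5678", "premium": "315.50", "claims": "0"},
]
extracted = [
    {"license_plate": "M-AB 1234", "premium": "420.00", "claims": "1"},
    {"license_plate": "M-CD 5678", "premium": "315.00", "claims": "0"},
]

def field_accuracy(truth: list[dict], pred: list[dict]) -> dict:
    """Recognition rate per field: share of documents where the value matches."""
    fields = truth[0].keys()
    return {f: sum(t[f] == p[f] for t, p in zip(truth, pred)) / len(truth)
            for f in fields}

# Illustrative per-field tolerances: premiums must be exact,
# while a lower rate might be acceptable for claim counts
thresholds = {"license_plate": 1.0, "premium": 1.0, "claims": 0.9}

rates = field_accuracy(ground_truth, extracted)
# Fields below their threshold would be routed to an expert for review
needs_review = [f for f, r in rates.items() if r < thresholds[f]]
```

Figures like these make it possible to answer the questions above concretely: which fields meet their target rate, and which must always go to an expert.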
More information can be found here.