Enabling auto-scoring in Zoom Quality Management allows for automated evaluation of agent interactions based on predefined criteria, improving efficiency and consistency. To enable auto-scoring, configure your scorecard with auto-scored sections, define the indicators that trigger those scores, and associate the scorecard with your desired evaluation workflows.
Prerequisites
- A Zoom account with Zoom Contact Center and Zoom Quality Management licenses.
- Administrator privileges within the Zoom web portal.
- Familiarity with creating and managing scorecards in Zoom Quality Management.
- Defined quality indicators and corresponding scoring criteria.
Configure the Scorecard for Auto-Scoring
- Sign in to the Zoom web portal as an administrator.
- Navigate to Quality Management. The exact path might depend on your specific Zoom account configuration. If you don’t see it directly, look under Advanced or Admin.
- Click on Scorecards in the left-hand navigation menu.
- Select an existing scorecard to edit or click Add to create a new one.
- Within the scorecard editor, identify the sections for which you want to automate scoring. These sections should be objective and based on quantifiable indicators.
- For each auto-scored section, locate the settings or options to enable auto-scoring. The specific label may vary, but look for a checkbox or toggle labeled something like “Auto-Score,” “Automated Scoring,” or “Enable AI Scoring.”
- Enable the auto-scoring setting for the chosen section.
- Configure the scoring rules. This involves linking the section to specific indicators and defining how those indicators affect the score. For example, specify that if the indicator “Proper Greeting” is detected, the score for that section increases by a certain amount.
- Define score ranges and associated points for each indicator. Ensure the total points for each section are clearly defined and contribute proportionally to the overall scorecard score.
- Save your changes to the scorecard.
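The scoring rules described above amount to a simple model: each detected indicator adds (or subtracts) points, and the section total is clamped to its defined maximum. The sketch below illustrates that logic only; the indicator names and point values are hypothetical, and this is not Zoom's actual implementation.

```python
# Illustrative model of auto-scoring rules. Indicator names and point
# values are hypothetical, not taken from Zoom Quality Management.

SECTION_RULES = {
    "Opening": {
        "max_points": 10,
        "indicators": {
            "Proper Greeting": 5,      # detected greeting adds 5 points
            "Agent Introduction": 5,   # agent states name and company
        },
    },
    "Compliance": {
        "max_points": 20,
        "indicators": {
            "Recording Disclosure": 20,
            "Profanity": -20,          # negative indicators subtract points
        },
    },
}

def score_section(section: str, detected: set[str]) -> int:
    """Sum points for detected indicators, clamped to [0, max_points]."""
    rules = SECTION_RULES[section]
    raw = sum(pts for name, pts in rules["indicators"].items() if name in detected)
    return max(0, min(raw, rules["max_points"]))

def score_card(detected: set[str]) -> dict[str, int]:
    """Score every section against the same set of detected indicators."""
    return {section: score_section(section, detected) for section in SECTION_RULES}
```

Note how a negative indicator such as “Profanity” can cancel out points earned in the same section, which is why each section's total points should be defined explicitly, as the steps above recommend.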
Define and Configure Indicators
- Navigate to Indicators in the Quality Management section of the Zoom web portal.
- Click Add to create a new indicator or select an existing one to edit.
- Define the indicator’s parameters. This includes the keywords, phrases, or acoustic patterns that will trigger the indicator.
- Associate the indicator with the relevant auto-scored section in your scorecard. If the scorecard editor provides a linking mechanism, use it; otherwise, document the mapping and rely on evaluator training to ensure consistency.
- Specify the conditions under which the indicator should trigger auto-scoring. For example, only trigger if the indicator appears within the first 30 seconds of the interaction.
- Save the indicator settings.
- Repeat these steps for all indicators relevant to your auto-scored sections.
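A trigger condition like the 30-second example above can be expressed as a check over transcript segments: the indicator fires only if a matching phrase is spoken by the agent inside the time window. This is a generic sketch of the concept; the greeting phrases and the `Segment` structure are assumptions, not Zoom's internal representation.

```python
import re
from dataclasses import dataclass

@dataclass
class Segment:
    start_sec: float   # offset from the start of the interaction
    speaker: str       # "agent" or "customer"
    text: str

# Hypothetical "Proper Greeting" indicator: trigger only when the agent
# speaks a greeting phrase within the first 30 seconds.
GREETING_PATTERN = re.compile(
    r"\b(thank you for calling|good (morning|afternoon)|how (can|may) i help)\b",
    re.IGNORECASE,
)

def proper_greeting_triggered(transcript: list[Segment], window_sec: float = 30.0) -> bool:
    """Return True if any agent segment in the window matches the pattern."""
    return any(
        seg.speaker == "agent"
        and seg.start_sec <= window_sec
        and GREETING_PATTERN.search(seg.text)
        for seg in transcript
    )
```

Conditions like speaker role and time window keep an indicator from firing on the wrong party (for example, a customer saying “thank you for calling me back”).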
Associate Scorecard with Evaluation Workflows
- Navigate to Evaluations or Evaluation Workflows in the Quality Management section. The exact name depends on how evaluations are managed.
- Select the evaluation workflow you want to use with auto-scoring. You may need to create a new workflow if auto-scoring requires different steps.
- Assign the auto-scoring-enabled scorecard to the workflow.
- Configure the evaluation settings to ensure auto-scoring is enabled for the interactions being evaluated. Some systems may have a separate toggle for automated evaluation on the evaluation settings page.
- Save the changes to the evaluation workflow.
- Test the auto-scoring configuration by conducting test evaluations on sample interactions. Verify that the indicators are being detected correctly and that the scores are being calculated accurately. Adjust settings as necessary.
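The test step above can be made systematic: keep a small set of sample interactions with known expected per-section scores, and flag any mismatch. The sketch below assumes a placeholder `fetch_auto_score` function; in practice you would pull results from Quality Management reports or whatever export your account provides.

```python
# Verify auto-scoring against sample interactions with known expected scores.
# `fetch_auto_score` is a placeholder, not a Zoom API call; replace it with
# however you actually retrieve auto-scoring results.

def fetch_auto_score(interaction_id: str) -> dict[str, int]:
    raise NotImplementedError("replace with your results source")

SAMPLES = {
    # interaction_id -> expected per-section scores (illustrative values)
    "call-001": {"Opening": 5, "Compliance": 20},
    "call-002": {"Opening": 0, "Compliance": 0},
}

def verify_samples(fetch=fetch_auto_score) -> list[str]:
    """Return human-readable mismatch reports; an empty list means all passed."""
    problems = []
    for interaction_id, expected in SAMPLES.items():
        actual = fetch(interaction_id)
        for section, want in expected.items():
            got = actual.get(section)
            if got != want:
                problems.append(f"{interaction_id}/{section}: expected {want}, got {got}")
    return problems
```

Running this after each configuration change gives a quick regression check before the scorecard is used on live interactions.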
Review and Adjust Auto-Scoring Settings
- After the initial implementation, regularly review the auto-scoring results to ensure accuracy and effectiveness.
- Analyze the data generated by the auto-scoring process to identify areas for improvement.
- Adjust the indicator definitions, scoring rules, or evaluation workflows as needed to optimize the auto-scoring process.
- Provide feedback to the AI models if the Quality Management tool has learning algorithms. Many systems allow the evaluator to adjust and submit corrected results that will train the AI model.
- Consider user feedback from agents and supervisors to continuously improve the accuracy and relevance of auto-scoring.
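One concrete way to run the review described above is to compare auto-scores against a sample of manual evaluator scores per section and flag sections whose average disagreement exceeds a threshold. The data below is made up for illustration, and this analysis is something you would build yourself from exported results, not a built-in Zoom feature.

```python
from statistics import mean

# (auto_score, manual_score) pairs per section, sampled from evaluations
# where both an auto-score and a manual score exist. Values are illustrative.
SECTION_SAMPLES = {
    "Opening": [(5, 5), (5, 5), (0, 5), (5, 5)],
    "Compliance": [(20, 20), (0, 20), (0, 20), (20, 20)],
}

def sections_needing_review(samples: dict, threshold: float = 2.0) -> list[str]:
    """Flag sections whose mean |auto - manual| gap exceeds the threshold."""
    flagged = []
    for section, pairs in samples.items():
        gap = mean(abs(auto - manual) for auto, manual in pairs)
        if gap > threshold:
            flagged.append(section)
    return flagged
```

Sections that surface repeatedly are the ones whose indicator definitions or scoring rules most need adjustment.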
Common Issues
- Indicators are not being detected:
- Verify the accuracy of the keywords, phrases, or acoustic patterns defined for the indicator.
- Ensure the indicator is properly associated with the relevant auto-scored section.
- Check that the indicator is active and enabled.
- Scores are not being calculated accurately:
- Review the scoring rules defined for each indicator and ensure they are correctly configured.
- Verify that the score ranges and associated points are appropriate for the desired outcomes.
- Confirm that the scorecard is properly associated with the evaluation workflow.
- Evaluations are not being automatically scored:
- Ensure auto-scoring is enabled for the specific evaluation workflow being used.
- Check that the interactions being evaluated meet the criteria for auto-scoring (e.g., specific channels, agents, or time periods).
- Examine the logs for errors or warnings related to the auto-scoring process.
- The AI model is not learning or improving:
- Ensure that evaluators are providing feedback on the auto-scoring results.
- Verify that the AI model is properly configured to learn from the feedback.
- Consider providing additional training data to improve the accuracy of the AI model.
- Permissions issues:
- Ensure the user attempting to configure auto-scoring has the correct admin permissions.
- Verify the user has access to manage scorecards, indicators, and evaluation workflows within the Zoom web portal.