Voice Enrollment Engine settings
Use the Voice Enrollment Engine settings to configure the system to perform automatic enrollment (automatically create and store voiceprint models for users). Automatic enrollment is the process in the IAFD product where an employee, customer, or target is registered so that the system can assist with identity verification and watch list detection when that speaker is on a call. A voiceprint is a file containing a mathematical summary of the vocal patterns of a person's voice, used in the IAFD product to assist with identity verification and detection. The Voice Enrollment Engine settings are available from the Recording Management > Recorder Analytics Rules > Engine Settings screen.
Voiceprint Detection settings
Available/Selected Watch List - Use these settings to configure watch list detection on interactions handled by the Voice Enrollment Engine. A watch list is a list that contains one or more voiceprints of people who are of particular interest to an enterprise. Watch list detection is the function of the IAFD product that compares a caller's voice to one or more voiceprints in a watch list to detect whether a target is participating in a call.
Available Watch List
The Recorder Analytics Rule does not use the watch lists in this box to perform watch list detection. If you want the rule to use a watch list in this box to perform watch list detection, click on the watch list, and then click the right-pointing arrow. If you do not want this Recorder Analytics Rule to perform watch list detection, move all watch lists into this box. In a Multi-tenant SaaS environment, only watch lists associated with the same tenant as the Recorder Analytics Rule are available.
Selected Watch List
The Recorder Analytics Rule uses the watch lists in this box to perform watch list detection. When this Recorder Analytics Rule is invoked on an interaction, the system compares the voice on the interaction to the voiceprint models that are included in all watch lists that display in this box. (In Speech Analytics, an interaction represents a single part of the contact between one employee and the same customer. In Text Analytics, an interaction is the communication session between one or more employees and the same customer with a unifying contextual element.) If the system detects that a voice on an interaction matches a voiceprint model in a watch list, that interaction is not used to perform automatic enrollment (the interaction audio is not used to create or enhance a voiceprint model). If you do not want the system to use a watch list in this box to perform watch list detection, click on the watch list, and then click the left-pointing arrow. Important: If no watch lists display in this box, this Recorder Analytics Rule does not perform watch list detection on interactions handled by the Voice Enrollment Engine. In a Multi-tenant SaaS environment, only watch lists associated with the same tenant as the Recorder Analytics Rule are available.
Minimum Score
A watch list detection score must exceed this value for the system to conclude that a watch list detection has occurred. If the detection score exceeds this value, the system returns a result of "Detected" for the watch list detection operation. Explanation: When performing watch list detection, the system analyzes the interaction and assigns a detection score to the interaction. The detection score is a numerical representation of the similarities that exist between a voice on an interaction and a voiceprint model. Detection score values range from -100 to 100. Higher detection scores indicate a higher degree of similarity between the voice on the interaction and the voiceprint model (that is, higher scores indicate a higher probability that the person on the interaction is the person from whom the voiceprint model was created). The default and recommended setting is 10.0. For example, with the default setting, a detection score above 10.0 returns a result of "Detected", while a score of 10.0 or lower does not.
Adjust this setting only if the system performance with the current setting is unsatisfactory. If this value is too high, the system may return a result other than "Detected" even when the person on the interaction is the person from whom the voiceprint model was created. If this value is too low, the system may return the "Detected" result when the person on the interaction is not the person from whom the voiceprint model was created (the system returns false positives).
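The following Python sketch illustrates the threshold behavior described above. It is not the product's API; the function names (watch_list_result, eligible_for_enrollment) and the example scores are assumptions made for this illustration only.

```python
# Illustrative sketch only: names and values are assumptions, not the product API.
# It shows how a detection score in the range -100..100 is compared against the
# Minimum Score setting, and how a watch list match suppresses automatic enrollment.

MINIMUM_SCORE = 10.0  # default and recommended setting


def watch_list_result(detection_score: float, minimum_score: float = MINIMUM_SCORE) -> str:
    """Return "Detected" only when the score exceeds the configured minimum."""
    return "Detected" if detection_score > minimum_score else "Not detected"


def eligible_for_enrollment(detection_score: float) -> bool:
    """An interaction that matches a watch list voiceprint is not used for enrollment."""
    return watch_list_result(detection_score) != "Detected"


print(watch_list_result(37.5))       # "Detected" - high similarity to a watch list voiceprint
print(watch_list_result(10.0))       # "Not detected" - the score must exceed, not equal, the minimum
print(eligible_for_enrollment(4.2))  # True - the audio can be used to create or enhance a model
```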
Voiceprint Settings
Minimum Calls for Voiceprint |
Specify the minimum number of interactions from a particular user that the engine must handle before that user's voiceprint model is valid for use in identity verification operations. Identity verification is the feature in voice biometrics where the speaker's voice is compared to a collection of employee or customer voiceprints that should match the call; if a match occurs, the speaker is said to be verified. The default setting is 3. The maximum setting is 10. This setting must be a whole number. If you specify a decimal number, such as 2.1 or 2.6, it is rounded automatically to the nearest whole number. With the default setting of 3, the initial voiceprint model is created (or trained) on the first interaction in which the user participates, and then enhanced using audio from the next two interactions in which the user participates.
For automatic customer enrollment, you can have an optional customer cooling-off period. The customer cooling-off period, when configured, works in conjunction with the Minimum Calls for Voiceprint setting. After the minimum number of interactions has been reached, the cooling-off period must elapse before a customer voiceprint model is used for verification.
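The sketch below shows one way the rounding rule and the two conditions above could be checked. The function names (normalize_setting, usable_for_verification) and the argument layout are assumptions for this example, not the product's implementation.

```python
# Illustrative sketch only: names and structure are assumptions made for this example.
# It shows the rounding rule for the setting and the two conditions described above:
# the interaction count must reach Minimum Calls for Voiceprint, and, when a customer
# cooling-off period is configured, that period must also have elapsed.
from datetime import datetime, timedelta
from typing import Optional


def normalize_setting(value: float) -> int:
    """Decimal values such as 2.1 or 2.6 are rounded to the nearest whole number."""
    return int(round(value))


def usable_for_verification(interactions_used: int,
                            min_calls: float = 3,
                            last_enrollment: Optional[datetime] = None,
                            cooling_off_days: Optional[int] = None) -> bool:
    if interactions_used < normalize_setting(min_calls):
        return False
    if cooling_off_days is not None and last_enrollment is not None:
        return datetime.now() >= last_enrollment + timedelta(days=cooling_off_days)
    return True


# With the default of 3, a model trained on only one interaction is not yet usable:
print(usable_for_verification(1))   # False
print(usable_for_verification(3))   # True (no cooling-off period configured)
```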
Maximum Calls for Voiceprint
Specify the maximum number of interactions from a particular user that the system must handle before the enhancement process for this user's voiceprint model is complete and the voiceprint model is considered final. The default setting is 3. The maximum setting is 10. If you specify a decimal number, such as 2.1 or 2.6, it is rounded to the nearest whole number. With the default setting of 3, a user's voiceprint model is final after the system handles three interactions from that user. The system performs no enhancement to the voiceprint model after the third interaction from the user. If the Requires Manual Approval setting is selected, the Maximum Calls for Voiceprint setting is ignored by the rule. A user with the role privilege of Approve/Unapprove Interactions for Enrollment must use Risk Management to review and approve each interaction. Note that a voiceprint model can be simultaneously enhanced and used in identity verification operations. This scenario occurs when this setting is greater than the Minimum Calls for Voiceprint setting. For example, assume the Minimum Calls for Voiceprint setting is 3 and this setting is 5. With this configuration, the voiceprint model becomes valid for identity verification after the third interaction, continues to be enhanced by the fourth and fifth interactions, and is final (no further enhancement occurs) after the fifth interaction.
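The short sketch below traces that lifecycle for the 3/5 example. The function name model_state and the state labels are illustrative assumptions, not product terminology.

```python
# Illustrative sketch only: names are assumptions for this example. It traces a
# voiceprint model through the lifecycle described above with
# Minimum Calls for Voiceprint = 3 and Maximum Calls for Voiceprint = 5.

MIN_CALLS, MAX_CALLS = 3, 5


def model_state(interactions_used: int) -> dict:
    return {
        "enhanced_by_next_interaction": interactions_used < MAX_CALLS,
        "usable_for_verification": interactions_used >= MIN_CALLS,
        "final": interactions_used >= MAX_CALLS,
    }


for n in range(1, 7):
    print(n, model_state(n))
# Interactions 3 and 4 leave the model both usable for verification and still
# open to enhancement; after interaction 5 the model is final.
```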
Minimum Days Between Calls
After one interaction is used to create (train) or enhance a voiceprint model, this setting specifies the number of days that must pass before another interaction is used to enhance the voiceprint model. The default setting is 5. If the Requires Manual Approval setting is selected, the Minimum Days Between Calls setting is ignored by the rule. A user with the role privilege of Approve/Unapprove Interactions for Enrollment must use Risk Management to review and approve each interaction. For example, assume an interaction from a user is handled by the Voice Enrollment Engine on June 5 at 12:00 noon. With the default setting of 5, interactions from that user handled before June 10 at 12:00 noon are not used to enhance the voiceprint model; the next interaction handled on or after June 10 at 12:00 noon can be used to enhance the model.
This setting prevents voiceprint models from being enhanced by multiple audio samples in which the user's speaking voice is not normal for some temporary reason. For example, if a user who is sick calls three days in a row, a voiceprint model created from those three interactions may not accurately model the user's normal speaking voice.
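A minimal sketch of the elapsed-days gate applied to the June example is shown below; the function name can_enhance and the example year are assumptions for illustration only.

```python
# Illustrative sketch only: function names are assumptions. It applies the
# Minimum Days Between Calls gate to the June example described above.
from datetime import datetime, timedelta

MIN_DAYS_BETWEEN_CALLS = 5  # default setting


def can_enhance(last_used: datetime, candidate: datetime,
                min_days: int = MIN_DAYS_BETWEEN_CALLS) -> bool:
    """A new interaction enhances the model only after the configured number of days."""
    return candidate >= last_used + timedelta(days=min_days)


last_used = datetime(2024, 6, 5, 12, 0)  # interaction used on June 5 at noon (year assumed)
print(can_enhance(last_used, datetime(2024, 6, 8, 9, 0)))    # False - too soon
print(can_enhance(last_used, datetime(2024, 6, 10, 12, 0)))  # True - five days have passed
```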
Requires Manual Approval
Select this option to require a user with the role privilege of Approve/Unapprove Interactions for Enrollment to manually approve each interaction that is used to create a voiceprint model. When this option is used, the rule ignores the configured voiceprint settings for Maximum Calls for Voiceprint and Minimum Days Between Calls.
This option enables verification of interactions to ensure that voiceprint models are created only from interactions that have high audio quality. During review in Risk Management, the decision is made whether the audio quality is suitable for model creation or enhancement. After manual approval of one or more interactions, training or enhancement of the voiceprint model occurs and the model can be used for verification operations. This option is selected by default. For information on manually approving interactions used in voiceprint models, see the related topics section at the end of this topic. If this option is not selected, manual review and approval is not required for the interactions used to train and enhance voiceprint models. In this case, the models are available for identity verification operations as noted in Minimum Calls for Voiceprint above.
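The sketch below shows the gating effect of this option under stated assumptions; the CandidateInteraction data class and interactions_for_enrollment function are hypothetical names, not the product's data model.

```python
# Illustrative sketch only: the structures below are assumptions, not the product's
# data model. It shows the gating effect of Requires Manual Approval: when the
# option is enabled, only interactions approved in Risk Management are passed to
# training or enhancement.
from dataclasses import dataclass


@dataclass
class CandidateInteraction:
    interaction_id: str
    approved: bool  # set by a user with Approve/Unapprove Interactions for Enrollment


def interactions_for_enrollment(candidates, requires_manual_approval: bool):
    if requires_manual_approval:
        return [c for c in candidates if c.approved]
    return list(candidates)  # call-count and day-gap settings are enforced elsewhere


batch = [CandidateInteraction("1001", approved=True),
         CandidateInteraction("1002", approved=False)]
print([c.interaction_id for c in interactions_for_enrollment(batch, True)])   # ['1001']
print([c.interaction_id for c in interactions_for_enrollment(batch, False)])  # ['1001', '1002']
```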
Identification Rules
Customer Identification Rules
Select an Identification Rule to enable the Real-Time Analytics (RTA) Framework to automatically create and enhance voiceprint models for customers (people who contact, or are contacted by, employees in the enterprise). The Identification Rule selected for this setting specifies the interaction attribute(s) whose value(s) are associated with the voiceprint model created for the customer. The rule is applied to the audio channel of the interaction that is inbound to the enterprise device (the channel on which the person who contacts or is contacted by an enterprise employee is speaking). If you select None, the system does not automatically create or enhance voiceprint models for customers. A Warning icon
Employee Identification Rule
Select an Identification Rule to enable the Real-Time Analytics (RTA) Framework to automatically create and enhance voiceprint models for employees (people who represent the enterprise on captured interactions). The Identification Rule selected for this setting specifies the interaction attribute(s) whose value(s) are associated with the voiceprint model created for the employee. The rule is applied to the audio channel of the interaction that is outbound from the enterprise device (the audio channel on which the employee representing the enterprise is speaking). If you select None, the system does not automatically create or enhance voiceprint models for employees. A Warning icon
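The sketch below summarizes the channel mapping described in the two settings above. The function name rule_for_channel, the two-channel model, and the example rule names are assumptions for this illustration, not product identifiers.

```python
# Illustrative sketch only: names and the two-channel model are assumptions for
# this example. The customer rule applies to the audio channel inbound to the
# enterprise device, the employee rule to the outbound channel, and selecting
# None disables automatic enrollment for that party.
from typing import Optional


def rule_for_channel(channel: str,
                     customer_rule: Optional[str],
                     employee_rule: Optional[str]) -> Optional[str]:
    if channel == "inbound":    # the person contacting (or contacted by) the enterprise
        return customer_rule
    if channel == "outbound":   # the employee representing the enterprise
        return employee_rule
    raise ValueError(f"unknown channel: {channel}")


print(rule_for_channel("inbound", "Customer ANI Rule", "Employee Extension Rule"))
print(rule_for_channel("outbound", None, "Employee Extension Rule"))
print(rule_for_channel("inbound", None, "Employee Extension Rule"))  # None - no customer enrollment
```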
Related topics
Voice Biometrics Engine settings
Workflow: Configure a Recorder Analytics Rule
Configuring Identification Rules for IAFD