Abstract:
Alcohol consumption affects the voice, and excessive consumption can cause long-term damage to the vocal cords. We present a procedure that automatically detects alcohol drinkers from vowel vocalizations, offering earlier detection at lower cost than existing drinker-detection models and equipment. Hidden acoustic parameters of vowel sounds, such as frequency, jitter, shimmer, and harmonic ratio, are significant for distinguishing drinkers from non-drinkers. In this study, we analyze 509 vocalizations of the vowels /a/, /e/, /i/, /o/, and /u/, drawn from 290 recordings of 46 drinkers and 219 recordings of 38 non-drinkers, all aged 22 to 34 years. We applied 10-fold cross-validation on the vowel dataset to several machine learning (ML) models and to artificial neural networks with incrementally added hidden-layer neurons (IHLN-ANNs) trained by backpropagation. The ML models Naïve Bayes (NB), Random Forest (RF), k-NN, SVM, and C4.5 (decision tree) all performed well, with RF performing best at 95.3% accuracy. We also evaluated BP-ANN models with 2 to 5 hidden-layer (HL) neurons; accuracy increased with the number of HL neurons, and the 5-neuron model reached 99.4% accuracy without overfitting. The approach could be implemented in smartphone apps that caution and alert alcohol consumers to help avoid accidents. Voice analysis thus offers a non-invasive, cost-effective means of identifying alcohol consumers.
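The evaluation protocol described above can be sketched in scikit-learn. This is a minimal illustration, not the paper's implementation: the feature matrix here is synthetic random data standing in for the 509 vowel recordings and their acoustic features (frequency, jitter, shimmer, harmonic ratio), scikit-learn's `DecisionTreeClassifier` is a CART-based stand-in for C4.5, and `MLPClassifier` stands in for the BP-ANN with 2–5 hidden-layer neurons.

```python
# Hypothetical sketch of the abstract's protocol: 10-fold cross-validation
# of five classifiers on vowel acoustic features, then an ANN whose hidden
# layer grows from 2 to 5 neurons. Data is synthetic; accuracies will NOT
# match the paper's reported 95.3% / 99.4%.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Placeholder: 509 samples x 6 acoustic features, binary drinker label
X = rng.normal(size=(509, 6))
y = rng.integers(0, 2, size=509)

models = {
    "NB": GaussianNB(),
    "RF": RandomForestClassifier(random_state=0),
    "k-NN": KNeighborsClassifier(),
    "SVM": SVC(),
    "Tree (C4.5-like)": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)
    print(f"{name}: mean 10-fold accuracy = {scores.mean():.3f}")

# Incremental hidden-layer neurons (2..5), mirroring the IHLN-ANN experiment
for h in range(2, 6):
    ann = MLPClassifier(hidden_layer_sizes=(h,), max_iter=1000, random_state=0)
    scores = cross_val_score(ann, X, y, cv=10)
    print(f"ANN, {h} HL neurons: mean 10-fold accuracy = {scores.mean():.3f}")
```

In practice the feature matrix would be extracted from the vowel recordings with an acoustic analysis tool (e.g., Praat-style jitter and shimmer measures) before being passed to the classifiers.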