The purpose of this work is to develop methods for automatic assessment of pronunciation quality, to be used as part of a computer-aided language instruction system. The basic pronunciation scoring paradigm [Bernstein et al., ICSLP 90, Kobe, Japan] uses hidden Markov models (HMMs) to generate phonetic segmentations of the student's speech. From these segmentations, different machine scores are obtained based on HMM log-likelihoods and durations. The machine scores are evaluated based on their correlation with human scores on a large database. Previous approaches were based on statistical models built for specific sentences. The current algorithms were designed to produce pronunciation scores for arbitrary sentences, i.e., sentences for which no acoustic training data are available [Neumeyer et al., ICSLP 96, Philadelphia]. This approach allows great flexibility in the design of language instruction systems, because new pronunciation exercises can be added without retraining the scoring system. Initial results showed that duration-based scores outperformed HMM log-likelihood scores. Recently, it was found that HMM-based scores can be significantly improved by using average log-posterior probabilities of phone segments. The correlation with human scores rose from r=0.48 to r=0.84, which is similar to that of the duration-based approaches. This level of performance approaches that of human raters.
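As an illustration of the improved HMM-based score, the average log-posterior measure described above could be sketched as follows. The data layout here (per-frame log-posterior mappings and phone segments taken from the HMM alignment) is an assumption made for this sketch, not the paper's actual implementation.

```python
import numpy as np

def avg_log_posterior_score(frame_log_posteriors, segments):
    """Average log-posterior pronunciation score (illustrative sketch).

    frame_log_posteriors: list of dicts, one per frame, mapping each
        phone label to its log-posterior probability at that frame
        (hypothetical layout; in practice these come from the HMM).
    segments: list of (phone, start_frame, end_frame) tuples from the
        forced alignment, with end_frame exclusive.

    Each phone segment is scored by the mean log-posterior of its
    aligned phone over the segment's frames; the utterance score is
    the mean over phone segments.
    """
    phone_scores = []
    for phone, start, end in segments:
        frames = [frame_log_posteriors[t][phone] for t in range(start, end)]
        phone_scores.append(float(np.mean(frames)))
    return float(np.mean(phone_scores))
```

Averaging per segment first (rather than over all frames at once) keeps long phones from dominating the score, which is consistent with scoring phone segments rather than raw frames.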
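The evaluation criterion, correlation between machine and human scores, is the standard Pearson coefficient; a minimal sketch of that computation (the variable names are hypothetical):

```python
import numpy as np

def pearson_r(machine_scores, human_scores):
    """Pearson correlation between machine and human pronunciation scores."""
    m = np.asarray(machine_scores, dtype=float)
    h = np.asarray(human_scores, dtype=float)
    m = m - m.mean()  # center both series
    h = h - h.mean()
    return float((m @ h) / np.sqrt((m @ m) * (h @ h)))
```

A value of r=1.0 would mean the machine scores rank and scale exactly with the human ratings; the reported improvement from r=0.48 to r=0.84 is measured on this scale.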