A range of statistical downscaling models was calibrated against both observed and general circulation model (GCM) generated daily precipitation time series and intercompared. The GCM used was the U.K. Meteorological Office, Hadley Centre's coupled ocean/atmosphere model (HadCM2) forced by combined CO2 and sulfate aerosol changes. Climate model results for 1980-1999 (present) and 2080-2099 (future) were used for six regions across the United States. The downscaling methods compared were two weather generator techniques (the standard WGEN method and a method based on spell-length durations), two methods using grid point vorticity data as an atmospheric predictor variable (B-Circ and C-Circ), and two variants of an artificial neural network (ANN) transfer-function technique, one using circulation data and one using circulation plus temperature data as predictor variables. Comparisons were facilitated by using standard sets of observed and GCM-derived predictor variables and a standard suite of diagnostic statistics. Significant differences in skill were found among the downscaling methods. The weather generator techniques, which fit a number of daily precipitation statistics exactly, yielded the smallest differences between observed and simulated daily precipitation. The ANN methods performed poorly because they failed to simulate wet-day occurrence statistics adequately. Changes in precipitation between the present and future scenarios produced by the statistical downscaling methods were generally smaller than those produced directly by the GCM; the GCM's changes in daily precipitation between 1980-1999 and 2080-2099 were therefore judged not to be driven primarily by changes in atmospheric circulation. In light of these results and detailed model comparisons, suggestions for future research and model refinements are presented.
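To illustrate the weather generator family mentioned above: WGEN-style generators typically combine a first-order, two-state Markov chain for wet/dry day occurrence with gamma-distributed wet-day amounts. The sketch below is not the paper's code, and the transition probabilities and gamma parameters are arbitrary placeholder values; in practice all four would be calibrated per station and per month from the observed series.

```python
import random


def simulate_precip(n_days, p_wd, p_ww, shape, scale, seed=0):
    """Simulate a daily precipitation series (mm), WGEN-style.

    Occurrence: first-order two-state Markov chain, where
      p_wd = P(wet today | dry yesterday)
      p_ww = P(wet today | wet yesterday)
    Amounts on wet days: gamma distribution with the given shape
    and scale parameters; dry days are exactly 0.0 mm.
    """
    rng = random.Random(seed)
    series = []
    wet = False  # assume the day before the series starts was dry
    for _ in range(n_days):
        # Pick the transition probability conditioned on yesterday's state.
        p = p_ww if wet else p_wd
        wet = rng.random() < p
        series.append(rng.gammavariate(shape, scale) if wet else 0.0)
    return series


# Example with placeholder parameters (not calibrated values):
# stationary wet-day fraction is p_wd / (1 + p_wd - p_ww) = 0.5 here.
daily = simulate_precip(10000, p_wd=0.3, p_ww=0.7, shape=0.8, scale=5.0, seed=42)
```

Because the Markov chain controls wet-day occurrence and spell lengths directly, calibrated generators of this kind can reproduce those daily statistics essentially exactly, which is consistent with the abstract's finding that they gave the closest match to observations.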