Event-related potentials that follow feedback in reinforcement learning tasks have been proposed to reflect neural encoding of prediction errors. Prior research has shown that in the interval 240–340 ms post-feedback, multiple distinct prediction-error encodings appear to co-occur, including a value signal carrying the signed, quantitative prediction error and a valence signal carrying only its sign. The effects conventionally used to identify these two encoders, respectively a sign × size interaction and a sign main effect, do not reliably discriminate them. Full discrimination is possible by comparing tasks in which the reinforcer available on a given trial is fixed as either appetitive or aversive with tasks in which either outcome is possible on every trial. This study presents a meta-analysis of reinforcement learning experiments, the majority of which offered the possibility of winning or losing money. Value and valence encodings were identified by conventional difference-wave methodology and, additionally, by a Bayesian analysis of their predicted behavior that incorporated null results into the evidence for each encoder. The results suggest that a valence encoding, sensitive only to the outcomes available on the trial at hand, precedes a later value encoding sensitive to the outcomes available in the wider experimental context. The implications of this for modeling the computational processes of reinforcement learning in humans are discussed.
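The distinction between the two encoders can be made concrete with a toy sketch (not the meta-analytic models used in the study; all function names here are illustrative). A value encoder transmits the signed, quantitative prediction error, so two same-sign outcomes of different sizes produce different responses (a sign × size interaction); a valence encoder transmits only the sign, so they produce identical responses (a sign main effect only):

```python
import numpy as np

def prediction_error(reward, expected_value):
    """Signed quantitative prediction error: delta = r - V."""
    return reward - expected_value

def value_encoding(delta):
    """A 'value' encoder carries the full signed magnitude of delta."""
    return delta

def valence_encoding(delta):
    """A 'valence' encoder carries only the sign of delta."""
    return float(np.sign(delta))

# Two appetitive outcomes with the same sign but different sizes:
small_win = prediction_error(reward=0.1, expected_value=0.0)
large_win = prediction_error(reward=1.0, expected_value=0.0)

# The value encoder distinguishes them (basis of the sign x size interaction)...
print(value_encoding(small_win), value_encoding(large_win))    # differ
# ...whereas the valence encoder does not (sign main effect only).
print(valence_encoding(small_win), valence_encoding(large_win))  # both 1.0
```

This is why a sign main effect alone cannot discriminate the encoders: both produce it, and only size sensitivity separates them.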