This study explores how large language models (LLMs) can generate synthetic survey data to make construct validation and survey pre-testing faster and more reliable. We propose a strategy and a framework for designing prompts and personas that enable LLMs to simulate human responses. We apply the framework to four management accounting studies and collect ChatGPT-generated answers. Using these synthetic data, we conduct widely used validity tests and run structural estimations. We find that LLMs can replicate human behavior and validate instruments in management accounting in ways consistent with theory, addressing challenges such as survey design, construct validity, reliability, and generalizability. Despite the limitations we document, we posit that LLMs can help evaluate the epistemic relationships among constructs and that synthetic data can complement real-world data, advancing the rigor and efficiency of survey-based research.