By Jackson Dille
Questions for Practitioners
- How does the starting point of a budget simulation shape how participants engage with the budget-making process?
- What role can simple design elements—such as prompts, visual indicators, or default values—play in encouraging more realistic or legally compliant budget outcomes?
- Is the purpose of your public engagement tool to collect preferences, or to actively educate participants about legal and fiscal constraints?
Introduction
Public budgeting might not seem like the most obvious place for behavioral economics, but it turns out to be fertile ground for testing how people make decisions. As more governments turn to online budget simulations to engage residents, researchers are uncovering just how much small design choices—like where a budget starts or what kind of feedback users receive—can shape the public’s fiscal preferences. Despite increasing interest in choice architecture and nudging across the public sector, the literature on behavioral budgeting still lags behind practice. Recent work by Afonso and Mohr seeks to close that gap by exploring how to improve simulation outcomes without taking away users’ autonomy. The research builds on the NUDGES framework by Thaler and Sunstein, which identifies six key elements of effective choice design, from incentives to default settings to error expectations.
Research Design
To test how budget simulation design influences user behavior, researchers conducted an experiment using the Balancing Act budget tool. Participants interacted with a simulation modeled on a real municipal budget. They were randomly assigned to one of four conditions: beginning with a budget surplus or deficit, and either receiving or not receiving an informational prompt before submission.
In each scenario, participants had to adjust spending or revenue levels to balance the city’s budget, which was pre-loaded with either a $72.8 million surplus or a $72.8 million deficit, indicated by a green or red “balance bar.” In the surplus conditions, no action was required to complete the simulation, highlighting the potential power of defaults. For those in the ending-surplus prompt condition, however, a message reminded participants that North Carolina law requires a balanced budget and encouraged them to revise large surpluses. Participants could still submit unbalanced budgets, preserving their autonomy. Although the sample skewed younger, more female, and more diverse than the general population, demographics did not differ significantly across treatment groups, allowing the researchers to credibly estimate the effects of the experimental conditions using OLS regression.
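The paper’s exact regression specification is not reproduced in this brief. As a rough illustration of the approach, the sketch below (in Python, with hypothetical variable names and simulated placeholder data, not the authors’ dataset or code) shows how treatment effects in a 2x2 factorial design like this one could be estimated with OLS:

```python
# Illustrative only: simulated data and invented variable names, not the
# authors' dataset or code. A sketch of how a 2x2 factorial experiment
# like this one could be analyzed with OLS.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400  # hypothetical sample size

df = pd.DataFrame({
    # 1 = simulation pre-loaded with a deficit, 0 = pre-loaded with a surplus
    "deficit_start": rng.integers(0, 2, n),
    # 1 = participant saw the ending-surplus prompt, 0 = no prompt
    "prompt": rng.integers(0, 2, n),
})
# Placeholder outcome: total magnitude of budget changes a participant made.
df["total_changes"] = (
    5 + 8 * df["deficit_start"] + 3 * df["prompt"] + rng.normal(0, 2, n)
)

# OLS with main effects and an interaction term for the 2x2 design.
model = smf.ols("total_changes ~ deficit_start * prompt", data=df).fit()
print(model.summary())
```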
Findings
The study revealed several actionable insights for local governments designing or deploying budget simulations:
- Defaults drive decisions: Participants who started the simulation with a surplus tended to keep it, often ending with unrealistic surplus levels. This behavior likely stemmed from status quo bias or a perception that no changes were necessary. In contrast, starting with a deficit prompted more active engagement and larger budget adjustments.
- Nudges increase engagement: The ending-surplus prompt led to significantly more changes, especially to revenue categories, and helped reduce extreme surpluses. Participants did not alter spending categories as much—suggesting a stronger preference to maintain existing service levels. For practitioners, this highlights the power of targeted feedback to align participant behavior with legal or fiscal expectations without restricting choice.
- Design for participation, not just compliance: Participants in surplus conditions made fewer and smaller changes overall. This suggests that simulations starting “in balance” may appear complete or safe, discouraging deeper participation. To encourage meaningful input, starting with a modest deficit may better prompt users to consider trade-offs—particularly when paired with a legally grounded prompt at the end.
- Balance autonomy with guidance: While nudges can steer behavior, overly strong interventions may feel like instructions rather than choices, potentially undermining the quality of feedback. Practitioners must strike the right tone, providing enough context and encouragement to support better decisions without dictating outcomes; the sketch below illustrates one warn-but-don’t-block approach.
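To make the autonomy-preserving design concrete, the following minimal sketch (hypothetical Python, with invented function names, threshold, and message wording; not the study’s or Balancing Act’s actual code) shows prompt logic that warns about an out-of-balance budget but never blocks submission:

```python
# Hypothetical sketch of warn-but-don't-block prompt logic. The threshold,
# message wording, and function names are invented for illustration and are
# not taken from the Balancing Act tool or the study.

BALANCE_TOLERANCE = 100_000  # hypothetical: how far from $0 counts as "balanced"

def ending_balance_prompt(balance: float) -> str | None:
    """Return a warning message if the submitted budget is out of balance."""
    if abs(balance) <= BALANCE_TOLERANCE:
        return None
    direction = "surplus" if balance > 0 else "deficit"
    return (
        f"Your budget ends with a {direction} of ${abs(balance):,.0f}. "
        "North Carolina law requires a balanced budget. Revise before submitting?"
    )

def submit_budget(balance: float, confirmed: bool = False) -> bool:
    """Nudge once, but always allow submission if the user confirms."""
    message = ending_balance_prompt(balance)
    if message and not confirmed:
        print(message)
        return False  # prompt shown; the user may still confirm and submit
    return True

# A participant ending with a large surplus sees the prompt once...
submit_budget(72_800_000)                  # prints the warning, returns False
# ...but can still submit an unbalanced budget, preserving autonomy.
submit_budget(72_800_000, confirmed=True)  # returns True
```

The design choice that matters here is that the prompt fires once and can be overridden, so the tool informs and encourages without coercing.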
Overall, these findings offer concrete strategies for enhancing the quality of engagement in public budget simulations. Whether the goal is education, consultation, or co-creation, governments can design more effective tools by leveraging behavioral insights and carefully structuring participant experiences.
Conclusion
Choice architecture is more than a set of simplifying heuristics or incentives; it is a strategic framework for guiding behavior in complex decision environments like public budgeting. The NUDGES framework encourages practitioners to anticipate errors, design smart defaults, and offer meaningful feedback to support thoughtful, welfare-enhancing choices.
Crucially, public budgeting operates within regulatory and legal boundaries, meaning nudges alone may not always suffice. In such cases, “budges”—behaviorally informed interventions with more directive power—may be appropriate to ensure compliance while still capturing authentic public input. This study illustrates how simulations can honor both behavioral realism and institutional constraints.
For public administrators, this means designing tools that do more than just collect preferences—they should promote engagement, reduce error, and deliver legally viable and practically useful insights. The goal is to use digital simulations not just as feedback mechanisms, but as bridges between citizen preferences and responsible public finance.
Full article: Afonso, Whitney, and Zachary Mohr. “More than a wink and a nudge: examining the choice architecture of online government budget simulations.” Behavioural Public Policy (2024): 1-17.