Crowdsourcing ideation contests allow solution-seeking firms (seekers) to solicit ideas from external individuals (solvers). Contest platforms often recommend that seekers provide examples of solutions (i.e., seeker exemplars) to guide and inspire solvers in generating ideas. In this study, we delve into solvers’ ideation process and examine how different configurations of seeker exemplars affect the quantitative outcomes of solvers’ scanning, shortlisting, and selection of ideas. Results from an online experiment show that solvers generally search for, shortlist, and/or submit fewer ideas when shown certain seeker exemplars. In addition, solvers who submit fewer ideas tend to submit lower-quality ideas, on average. Thus, a key insight from this study is that showing seeker exemplars, a practice that contest platforms encourage and seekers commonly follow, could negatively affect quantitative ideation outcomes and thereby impair idea quality. To help mitigate these adverse ideation outcomes, we identify several areas of which seekers should be mindful. We also suggest ways that contest platforms can contribute to the idea generation process that solvers undertake.