Discussion about this post

DataChefs - Research Recipes:

I’m not actually too bothered if the bots & fakers are identified & screened out before entering the survey. That’s the direct hit the sample supplier has to take. What concerns me more are the ones we have to weed out once they’re in there (mainly in the form of bad open-end answers). In any case, if we identify them, they are rejected. It’s a pain and we spend considerable time on it, but no sample source is perfect.

In my experience, B2B respondents are much more problematic than consumer respondents. Paying $20-80 per respondent attracts the scammers far more than the $2-3 for consumer work.

Joey Belle:

Perhaps someone’s investment in a side-by-side comparison of data from that 15-minute survey could make the case for data quality. What is data quality? Data you trust enough to use to make decisions. Let’s say the $1 sample attracts many more bots and fakers than your $4 sample, but the data from the resulting sample looks the same as yours. Well, that doesn’t support your case. And if the data is different, how would you know which data set is the source of truth? I guess you’d need to sprinkle in some factual questions, where the outcome is already known, to validate that your data is “better.” You’re fighting an uphill battle. I used to go to SampleCon, and they’d say the entire ecosystem needs to change and improve, but nothing changes. In some respects, things have worsened. I sincerely wish you the best in this endeavor.
