As UX researchers, we spend a lot of time discussing and exploring ways to incorporate AI into our work. Lately, however, we’ve noticed in projects that potential research candidates are using the same technology to trick us into letting them join studies they should not be part of. Some fraudulent responses are simple to spot, such as a bizarre and amusing paragraph about intergalactic space llamas (we are not kidding; someone spent the time to troll us through a well-known panel service). Others are more subtle: answers that aligned exactly with what we were looking for but felt too well crafted, too on point.

Tips for Identifying Rogue Respondents

As AI capabilities and access evolve, we expect this issue to become more of a problem in the years ahead. The financial incentive to join a study is just too tempting, and AI-generated responses can now let someone with no knowledge of a complex topic appear to be an expert—the exact type of person you are looking for in that project with a recruitment deadline fast approaching. Many recruitment platforms now offer double-screening or video messaging as a paid add-on to weed out fraudulent respondents, which is a great option as well. 

Over the past few months, we’ve developed methods and revised our mental models to address what we are seeing. Here are our top recommendations:

  1. Make your open-ended questions more complex
    We used to rely on open-ended questions as a great first-pass method for refining our initial respondent pool. Today, however, a convincing answer to a once-useful inquiry about the complex use of a tool can be easily generated by a respondent who pastes our own question into an AI prompt. In response, we now tend to write questions that require detailed, anecdotal, context-specific answers, which helps expose potentially fraudulent responses.

  2. Be wary of extremely polished answers
    Sure, someone may take the time to provide an extensive, paragraph-long, well-structured answer with perfect grammar, but we’ve found that comprehensive, overly formal answers lacking any errors should be closely examined. They are just too good. We flag anything that fits this pattern and cross-reference it with our other checks. Sometimes, as a team, we take the time to discuss the flagged group and come to a consensus.

  3. Verify authenticity in other curated, reliable platforms
    If you’re using a platform that links to participant social profiles, and you should be whenever possible, take time to review their background and assess whether their answers align with what you see. We’ve found that someone claiming expertise on LinkedIn in the niche area we are desperately looking for sometimes has an online CV that looks radically different from what we would expect.

  4. Follow up with additional questions
    One round of screeners often doesn’t work for us anymore. As you listen to your gut and identify the responses that feel fishy, follow-up questions are a great way to gain confidence before making a determination. Asking the same open-ended question slightly differently and comparing the first and second responses should give you the insight you need to include or exclude a participant.

  5. When possible, build a trusted, long-term panel
    Every salesperson knows it’s harder to find a new customer than to expand with the ones they already have. While research often looks for fresh outlooks, there are many times when tapping into a trusted panel makes the most sense. We love using a dedicated panel when possible, as it drastically cuts the time to recruit for a study, and it allows us to track behavior and attitudes over time. In the age of AI, small, long-term panels are likely to become more common.
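The “too good” signal from tip 2 can be roughed out in code. The sketch below is an illustrative heuristic of our own devising — the informal-marker list and thresholds are guesses, not validated values — that flags long answers with uniformly structured, error-free sentences for human review, never for automatic exclusion:

```python
import re

def flag_for_review(answer: str, min_words: int = 80) -> bool:
    """Flag suspiciously polished screener answers for human review."""
    words = answer.split()
    if len(words) < min_words:
        return False  # short answers rarely read as AI-polished
    sentences = [s for s in re.split(r"[.!?]+\s*", answer) if s]
    # "Too good" signals: every sentence capitalized, no informal markers,
    # and highly uniform sentence lengths.
    all_capitalized = all(s[0].isupper() for s in sentences)
    informal = any(m in answer.lower()
                   for m in ("lol", "tbh", "gonna", "kinda", "i think"))
    lengths = [len(s.split()) for s in sentences]
    mean = sum(lengths) / len(lengths)
    variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    return all_capitalized and not informal and variance < 15
```

A flag here is only a prompt to look closer and cross-reference with the other checks; plenty of genuine experts write carefully.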
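For the follow-up comparison in tip 4, a simple text-similarity check can surface near-identical answers to rephrased questions, which often indicate a pasted (possibly AI-generated) response. This sketch uses Python’s standard-library difflib; the 0.9 cutoff is an assumption to tune against your own data:

```python
from difflib import SequenceMatcher

def looks_copy_pasted(first: str, second: str, cutoff: float = 0.9) -> bool:
    """Return True when two answers are nearly identical.

    Genuine participants usually reword themselves between rounds,
    so a near-verbatim repeat is worth a closer look.
    """
    ratio = SequenceMatcher(None, first.lower(), second.lower()).ratio()
    return ratio >= cutoff
```

A high ratio alone isn’t proof of fraud — some people legitimately repeat themselves — so we’d treat it as one more input to the gut-check rather than a verdict.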

To Do: Frequently Revise Your Techniques

The above recommendations are what we use today. Will we modify them in the months ahead? We already plan to. We do not doubt that as AI services get better at mimicking people’s actual responses, it will be trickier to weed out those trying to slide into a study. Prompts will include not only an answer to our now more complex open-ended questions, but they may also add “and make it seem like I’m in a bit of a hurry and have a spelling error or grammar mistake.”

What do you think? Do you have other tips for researchers? Let us know at


Understanding the behavior, attitudes, and needs of prospects, customers, and internal users is critical for organizations that build effective SaaS products and mobile apps. However, even a well-funded and staffed UX research team has trouble meeting the needs of product groups and senior executives. For those with limited resources, delivering insights that translate into exceptional experiences is even harder. Most rightfully resort to what is possible: point research initiatives designed to tackle today’s most pressing issue. While this is the pragmatic solution, it limits flexibility and often produces valuable but siloed, frozen-in-time results. Another option, available at a variety of price points based on scope, is a long-term research panel. It can often deliver more of what is needed, and in a more timely fashion.

Common Research Challenges Hinder Product Improvement

Product teams want insight and actionable feedback from users on both short- and long-term questions, but as UX researchers, we often have to temper expectations. We know what we would like to deliver, but we often take longer than anyone wants or can only provide limited results. Some of the common challenges faced include:

The Value of a Turn-key Research Panel

To combat this issue, Proximity Lab developed our Research Panel offering. It’s a flexible product designed to meet the real-world needs of organizations. For example, it can help track customer sentiment over a 12-month period, but it can also make it simple to jump-start a last-minute research project in minimal time with qualified, pre-vetted respondents. Having Proximity Lab manage the panel delivers many benefits:

Panel Details

So what, exactly, does a panel look like? While each panel is a bit different, key features include:

A Complementary Research Tool Delivering Greater Actionable Insight

As with any third-party service, offerings like this can sometimes be seen as competing with internal resources. But rather than competing with in-house teams, the Research Panel and the Proximity Lab team are about enabling them to deliver better, faster insights that support evidence-based business decisions.

With a long history of working with many types and sizes of internal research teams, we know how to be an exceptional partner. For us, the Research Panel is a natural evolution of what we have been successfully delivering for over 20 years. 

Contact us to discuss how a panel could benefit you, your research and product teams, and your organization as a whole at

In our first Design Conversation Series event of 2024, our CEO, Nick Allen, spoke with Christian Crumlish, a design industry veteran and the author of “Product Management for UX People: From Designing to Thriving in a Product World.” Christian shared his perspective on the evolving relationship between UX and product management over time, and we were thrilled to have many questions from attendees curious about practical ways to implement some of the ideas he discussed.

The event provided great advice and insights on how design and product teams can better work together.

Key Topics 



What's Next

Be sure to follow us on LinkedIn to learn about upcoming discussions with other design leaders at:

In the next few months, we’ll be joined by Krystal Higgins to discuss her book, “Better Onboarding,” as well as Tom Greever, who will join us to talk about his book, “Articulating Design Decisions.”

If you have ideas about an interesting topic or speaker you'd like to see, or if you’d like to talk to us about our perspective and experience with all things UX research and product design, please reach out at


In the latest installment of our Design Conversation Series, we spoke with Chris Avore, head of design at Northwestern Mutual and co-author of "Liftoff! Practical Design Leadership to Elevate Your Team, Your Organization, and You." Chris discussed some of the ways that, like so many things, design leadership has evolved and adapted since the book was released in 2020.

Key Topics 

What's Next

Be sure to follow us on LinkedIn to learn about upcoming discussions with other design leaders at:

If you have ideas about an interesting topic or speaker you'd like to see, please reach out at


Check out our recent discussion with Jon Yablonski, author of "Laws of UX" and Senior Product Designer at Mixpanel. We explored the fundamental concepts behind the Laws of UX framework and how understanding user behavior psychology can enhance the work of any UX/UI or product designer. Don't miss out on the valuable insights and takeaways from this discussion.

Key Topics 

What's Next

This event is the fifth session in a discussion series that explores the topic from the perspectives of design leaders, product strategists, and product designers. If you have suggestions for topics, speakers, or questions, please send them to

Admin consoles are typically used to let SaaS (software as a service) administrators configure and manage software for their organization – adding and removing users, assigning licenses, enabling features, managing billing, etc. The SaaS admin console experience is often a second-class citizen in terms of UX, as vendors spend most of their time working on improving the end-user experience. However, as more and more companies hope to expand their product offerings directly, with new features and services, or indirectly, through integrations, the role of the console and those using it becomes more important.

To get a better sense of what admins think about the consoles they use and how they roll out new features, we surveyed a handful of them.

Satisfied, but Plenty on the Wish List

Our 12 respondents managed an average of 5 apps each, with offerings from Microsoft, Salesforce, Amazon, and Okta being the most common. Overall, these admins gave neutral or above-average grades to the consoles they used most often. However, they pointed out a wide variety of issues, including:

“Having to look around to find certain functions of the admin console when they aren't properly documented.”

“Fixing mis-configurations and/or making sure patches and updates are installed correctly.”

“Often in these portals I find that key information and tasks are hidden away in areas that are often unclear or vague. This increases the time spent performing these tasks and also brings my confidence level in the applications down.”

So what could be improved? As with pain points, there was no consensus. The most common items on the wish list centered on:

One admin did mention “New features & how they apply to my organization” as the No. 1 most important improvement to their console experience, which should be music to the ears of product managers. And while others didn’t explicitly mention this, 8 of the 12 did report that they play a critical role in promoting new features and services.

Finally, for vendors rolling out new services, admins made it clear that email and collaboration tools, such as Slack and Microsoft Teams, were the leading ways to get the word out.

Final Thoughts

It’s important for those building SaaS apps to spend time improving many aspects of the admin experience. While most of the console will have to be used no matter how intuitive it is, the ability of these tools to effectively promote and roll out new services should be explored.

From conversations with admins outside of this survey, we’ve learned that they are hesitant to simply click a button and blast out a new capability to their entire organization. Providing a solid value proposition that can be shared with business leaders and influential power users, as well as allowing admins to test new services with smaller groups of users, will likely improve the chances of driving adoption.
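The staged-rollout pattern admins described can be illustrated with a minimal feature-gating sketch. All names here (FeatureRollout, enable_for, the "all" sentinel) are hypothetical for illustration, not drawn from any real console API:

```python
from dataclasses import dataclass, field

@dataclass
class FeatureRollout:
    """A new service gated to specific user groups before org-wide launch."""
    name: str
    enabled_groups: set = field(default_factory=set)

    def enable_for(self, group: str) -> None:
        # Admin opts a pilot group in; "all" opens the feature org-wide.
        self.enabled_groups.add(group)

    def is_enabled(self, user_group: str) -> bool:
        return "all" in self.enabled_groups or user_group in self.enabled_groups

# An admin pilots the feature with one team before deciding on a full rollout.
rollout = FeatureRollout("ai-summaries")
rollout.enable_for("pilot-team")
```

Letting admins widen the gate group by group, rather than all at once, matches the cautious rollout behavior they told us they prefer.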

If you would like to chat with us about UX research or design for improving your admin console, contact us at