At ThinkNow, we believe that understanding people starts with listening and getting beyond data points. By integrating artificial intelligence (AI) into our online panels, we’re transforming how we capture and analyze open-ended responses in market research.
For years, open-text analysis was a manual, costly, and limited process. Today, AI enables us to process qualitative insights with unprecedented speed and precision, optimizing every stage of the research cycle. With these technologies, we don’t just analyze words; we interpret emotions, tone, and context, uncovering the authentic voice of the consumer that traditional methods often miss.
One of the most significant innovations is the ability to collect responses in audio or video format within the panel. This approach allows participants to express themselves more naturally, adding nuances that written text cannot capture. AI transforms these recordings into structured, automatically coded information, available in real time to analysis teams.
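The auto-coding step described above can be pictured with a small sketch. Everything here is illustrative: the theme codebook, keywords, and function names are hypothetical stand-ins, not ThinkNow's actual pipeline, which in practice would sit downstream of a speech-to-text step and use far richer models than keyword matching.

```python
from collections import Counter

# Hypothetical theme codebook mapping keywords to analysis codes
# (illustrative only; a production system would use trained models).
CODEBOOK = {
    "price": ["expensive", "cheap", "cost", "price"],
    "quality": ["broke", "durable", "quality", "sturdy"],
    "service": ["support", "helpful", "rude", "service"],
}

def code_response(transcript: str) -> list[str]:
    """Assign every matching theme code to one transcribed open-ended response."""
    words = transcript.lower().split()
    return sorted(theme for theme, keys in CODEBOOK.items()
                  if any(k in words for k in keys))

def tally_codes(transcripts: list[str]) -> Counter:
    """Aggregate coded themes across responses for real-time reporting."""
    counts = Counter()
    for t in transcripts:
        counts.update(code_response(t))
    return counts
```

The key idea is the shape of the output: each free-form recording becomes a structured set of codes that analysis teams can tally the moment it arrives.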
Moreover, machine-learning algorithms can assess the coherence and authenticity of responses, enhancing panel quality and reducing human bias. This results in more reliable, representative insights, especially in multicultural studies where expression and context are key to accurate interpretation.
This convergence of AI and online panels ushers in a new era in research, one where the boundaries between quantitative and qualitative blur, giving way to a faster, smarter, and more human ecosystem of insights.
ThinkNow is also expanding these innovations through synthetic sample, an advanced approach that broadens the reach and representativeness of studies without compromising methodological integrity.
If you’d like to learn more about how AI, online panels, and synthetic sampling are revolutionizing research, click here.
Artificial intelligence (AI) is rapidly reshaping society, but with its transformative power comes pressing ethical, cultural, and social questions. The conversation around AI often centers on new capabilities, but equally important are the implications for equity, transparency, and human values.
A key concern is the concentration of AI development in a handful of industries, particularly technology and finance, which risks creating tools that benefit only a narrow segment of society. When innovation prioritizes speed and competition, the so-called “AI race” can result in systems being released prematurely, riddled with bias, or inaccessible to much of the global population.
Language representation in AI models is another critical issue. Many large language models are trained predominantly on English-language data, resulting in the underrepresentation of other languages and cultural perspectives. This imbalance not only limits accessibility but also reduces the quality of AI outputs. Advocates stress that LLMs trained on multicultural data lead to better, more representative systems, ones capable of reflecting the world’s diversity rather than reinforcing existing biases and stereotypes.
Still, the potential for AI to drive positive impact is significant. From creating accessible tools for immigrants navigating new systems to providing voice-based digital companions for older adults, socially conscious applications of AI can foster inclusion and improve quality of life.
On this episode of The New Mainstream podcast, Norman Valdez, CEO of BrainTrainr, discusses the urgency of developing responsible AI and highlights both the dangers of exclusion and the opportunities for technology to serve as a force for good.
We're halfway through 2025 and one thing is undeniable: AI is no longer on the horizon, it is in the room. For the market research industry, this has come faster than most expected. What felt like an existential threat just a year ago is now transforming how researchers approach everything from segmentation to recruitment to data analysis.
But as AI becomes embedded in our workflows, a critical question arises. Are the datasets powering these models truly inclusive? Do they reflect the diverse populations researchers aim to understand, or are they building the next generation of tools on top of the same old blind spots?
Market research has long struggled with inclusivity. Reaching Spanish-dominant Latinos, Gen Z respondents and even male participants has always been difficult. Despite decades of effort, many of these groups continue to be underrepresented in online panels and large-scale studies.
Now, imagine deploying AI on top of these incomplete datasets. Instead of closing representation gaps, AI trained on biased data risks amplifying them at scale. Biases that were once isolated can now be baked into algorithms and amplified across the entire research ecosystem, undermining the potential of AI to drive more inclusive insights.
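A toy example makes the amplification mechanism concrete. Assume (for illustration only) a population that is evenly split between two groups, a collected panel that underrepresents one of them, and a naive model that simply predicts the panel's majority; the numbers and the model are deliberately simplistic.

```python
def majority_label(panel):
    """A naive model that always predicts the panel's majority group."""
    return max(set(panel), key=panel.count)

# True population is 50/50, but the collected panel underrepresents
# group "A" at 30% (toy numbers, purely for illustration).
panel = ["A"] * 30 + ["B"] * 70

# Every prediction the biased model makes is "B": a 20-point
# representation gap in the data becomes a 100% gap in the output.
predictions = [majority_label(panel) for _ in range(100)]
```

This is the amplification risk in miniature: a moderate sampling bias, passed through a model, can erase the underrepresented group from the results entirely.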
When AI began gaining traction in the industry, initial skepticism emerged among some researchers, particularly regarding the use of synthetic data and AI-powered moderators. These tools seemed impersonal, disconnected from the human insights that drive understanding and trust among respondents.
Yet, over time, AI has proven itself capable of complementing, rather than replacing, researchers’ work. Instead of diluting what makes insights meaningful, AI can expand them by enabling researchers to finally address representation issues that conventional methods have never been able to resolve. This shift has prompted a more intentional approach to innovation. If synthetic data is going to shape the future of insights, it must be inclusive by design, representing the full diversity of the populations it aims to model.
The market research industry is uniquely positioned to lead in this space. While many tech companies face lawsuits for training AI on copyrighted or illegally scraped data, researchers have operated under strict privacy laws like GDPR and CCPA for decades. Upholding consent, data stewardship and adherence to ethical standards has been the norm.
Our datasets are not only large, but they are also permission-based and carefully vetted. This makes them ideal for training AI models that need to mirror real-world diversity.
But it is not enough to have access to data. The same rigor applied when building representative samples must be applied to training AI models. This means proactively identifying gaps, asking who is missing from the data and taking measurable steps to responsibly include them.
This brings us to the future of multicultural segmentation. Relying solely on broad demographic categories or historical internal datasets is no longer sufficient. Today’s consumers are multidimensional, and AI gives us the tools to see them more clearly.
To generate synthetic data that accurately reflects multicultural audiences, it is essential to incorporate information from historically underrepresented communities. This requires collaboration between technologists and cultural experts, as well as a commitment to designing systems that accurately reflect the reality of diverse identities.
For researchers generating synthetic datasets, combining privacy-compliant methods with culturally rich data points, powered by AI, helps ensure that communities often left out of the conversation are fully represented moving forward.
AI is not a passing trend. It is here to stay, and it is reshaping how we segment audiences, recruit respondents and activate insights. However, AI’s success depends on the quality and inclusiveness of the data behind it, and the researchers guiding its application.
For market research professionals, this is a challenge worth embracing. With deep expertise, ethical frameworks and a foundation in representative sampling, the industry is uniquely positioned to ensure that AI serves all communities, not just the most accessible ones.
The future of multicultural segmentation will belong to those who successfully integrate innovation and intention because the question is no longer whether to adopt AI, but how to use it in a way that advances representation.
Those investing in synthetic data and inclusive segmentation strategies play a crucial role in achieving this, and those seeking better representation in data must continue to demand it.
This blog post was originally published on Quirk's Media.
For decades, the foundation of market research rested on one powerful tool: the survey. It was the standard way to understand consumers, what they like, want, and feel. Researchers spent years mastering the art of crafting questions, selecting the right sample, and interpreting the answers. And for a long time, that worked well.
But over the last few years, something fundamental has changed.
As the digital world expanded, so did the ways consumers interact with brands. People now browse online stores, leave reviews, post on social media, click on ads, abandon carts, binge-watch videos, and scroll through countless pieces of content. Each of these actions generates a trail of data. These behavioral breadcrumbs reveal more than a simple survey ever could.
A new era of predictive market research is emerging, one that relies less on what consumers say and more on what their behavior reveals. With the help of predictive analytics, researchers are not just looking at current trends, they’re forecasting future ones.
The shift is happening for good reason. In today’s hyper-competitive, always-on business environment, companies need faster, deeper, and more accurate insights to make decisions. Waiting days or weeks for survey responses isn’t always practical, especially when product launches, ad campaigns, and market shifts happen at the speed of social media. Predictive insights, powered by machine learning and advanced analytics, are giving businesses the edge they need by offering a more dynamic and forward-looking understanding of consumer behavior.
This is especially relevant for industries where consumer expectations shift quickly, like retail, consumer tech, travel, and even healthcare. Imagine being able to predict what your customers are likely to buy next month, which messages will resonate best, or which audience segments are most likely to churn. That’s not science fiction. It’s becoming the reality for modern market research.
The tools driving this shift are growing more advanced every day. Artificial intelligence (AI) can now comb through huge datasets, like website analytics, purchase history, CRM data, and social media posts, to identify patterns, spot anomalies, and generate forecasts with surprising accuracy. But it is not just about the numbers. These tools are translating raw data into clear, actionable insights, helping researchers and strategists move from descriptive data (“what happened”) to prescriptive guidance (“what to do next”). The integration of behavioral data and AI is at the heart of predictive market research, allowing for faster and more accurate decisions.
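The move from descriptive data to prescriptive guidance can be sketched in a few lines. This is a minimal illustration, not a real model: the behavioral features and hand-set weights are assumptions, where in practice the weights would be learned from historical data.

```python
import math

# Illustrative behavioral signals with hand-set weights (assumptions,
# not a fitted model; real weights would come from historical data).
WEIGHTS = {"days_since_last_visit": 0.08, "carts_abandoned": 0.5,
           "support_tickets": 0.3}
BIAS = -3.0

def churn_probability(customer: dict) -> float:
    """Descriptive-to-predictive: turn behavioral signals into a churn forecast."""
    z = BIAS + sum(WEIGHTS[f] * customer.get(f, 0) for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic squashing to a probability

def next_action(customer: dict) -> str:
    """Prescriptive step: route high-risk customers to a retention offer."""
    return "send_retention_offer" if churn_probability(customer) > 0.5 else "monitor"
```

The point is the last function: the value of predictive analytics comes from attaching a recommended action to the forecast, not from the forecast alone.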
Of course, this doesn’t mean traditional methods are obsolete. Surveys still play a critical role in understanding motivations, emotions, and the “why” behind consumer actions. They’re particularly useful in early-stage product development, brand perception studies, and testing creative concepts. But increasingly, surveys are being complemented or even preceded by predictive techniques that shape where and how questions are asked.
There’s also a shift in how research teams are structured. We're seeing data scientists working alongside qualitative researchers, blending statistical modeling with human-centered design thinking. The most forward-thinking research departments aren’t picking one method over the other. Instead, they are integrating them to get a more complete, nuanced view of the market.
But with all this advancement comes a new responsibility. Predictive analytics depends on data, and a lot of it. Market researchers must now be more mindful than ever about how that data is collected, stored, and used. Data privacy laws are tightening, and consumers are becoming more aware of how their information is being tracked. Trust and transparency are quickly becoming just as important as accuracy.
At its core, market research is still about understanding people. That hasn’t changed. What has changed is the how. Instead of relying solely on consumers to tell us what they think through a form or a phone call, we now have the tools to listen to what their actions are already saying. And in many ways, those actions tell a more complete story.
We’re entering the era of predictive market research, where data doesn’t just describe what happened, it guides what to do next. For researchers, analysts, and business leaders alike, the question isn’t if they should adapt, but how fast they can.
Want to learn more about how we're using AI? Check out what we're doing with ThinkNow Synthetic.
I attended the Quirk's Los Angeles Market Research Event last week, and one thing became apparent: AI is coming for Market Research. I sat through presentations and sales pitches on AI Qual moderation, AI Co-Workers, AI Social Media Monitoring, AI-assisted Survey Creation, Data Analysis, and Report Writing. Towards the end of the conference, I wondered if next year we would all send our AI-enabled robot doppelgangers to listen to AI presenters discussing whether humans were still necessary in consumer research.
That’s not to say that I wasn’t impressed with some of the things AI is capable of doing. AI is revolutionizing market research by streamlining processes and providing faster, more efficient insights. Here are some of the major benefits:
Despite these advancements, AI has limitations that market researchers must acknowledge:
While AI promises many advantages, rapid adoption without careful oversight presents risks:
AI is undeniably transforming market research, but it should be viewed as a tool to enhance, rather than replace, human expertise. Moving too quickly, however, risks hitting a dead end. The best approach is a hybrid model, where AI handles time-consuming tasks while human researchers focus on interpretation, storytelling, and strategic decision-making.
As AI continues to evolve, the key to success will be striking the right balance—leveraging its strengths while mitigating its risks. Market researchers who adapt, upskill, and find ways to integrate AI effectively will be the ones leading the industry, not just observing its transformation. And hopefully, our AI doppelgangers will decide that humans are useful and nice to have around after all.
Ready to leverage AI in your market research? Find out how ThinkNow Synthetic can work for you.
The United States is experiencing a significant demographic shift, with multicultural communities driving the nation's growth. This highlights the importance of data accurately representing this multicultural reality.
In artificial intelligence and machine learning, the quality and representativeness of data are paramount. Synthetic data – artificially generated information that maintains the statistical properties of real-world data – has emerged as a powerful tool for training AI models. However, the effectiveness of these models hinges on the diversity embedded within the synthetic data.
Without adequate representation of various cultural and ethnic groups, AI systems risk perpetuating existing biases, which can lead to skewed outcomes and reinforce systemic inequalities.
Companies investing in synthetic data are particularly interested in capturing the nuances of diverse consumer behaviors. As multicultural communities drive population growth and influence market trends, understanding their unique preferences and needs becomes essential for businesses aiming to remain competitive. Synthetic data that accurately represents these groups offers a cost-effective way to gain insights compared to traditional data collection methods.
Representation is a significant issue facing AI today. But by starting with the hardest-to-reach groups, such as multicultural communities, synthetic data creators can address the most complex challenges first. Addressing these challenges results in a more inclusive dataset and leads to higher-quality AI systems overall. Models that can effectively handle the nuances of diverse populations tend to perform better across all demographics, creating more robust and versatile solutions.
Multicultural communities not only represent the fastest-growing demographic groups in the U.S., but they are also leading drivers of economic expansion. For instance, in 2023, the employment rate among Black and Hispanic Americans aged 25-54 reached a record high. These groups also experienced faster wage growth, contributing to higher income levels. Black women are the fastest-growing group of entrepreneurs, while Hispanics represent one of the fastest-growing populations in the U.S.
Businesses that fail to recognize these shifts risk missing out on opportunities to engage with a significant portion of the market. However, these communities are not monoliths. Due to the complexity of these thriving markets, tapping into them from a research perspective can be daunting.
Generating synthetic data allows market researchers, marketers and strategists to address this growth opportunity in a scalable way. Instead of being hampered by incomplete or biased real-world datasets, they can rely on synthetic data that mirrors the full spectrum of human diversity.
By understanding the value of underrepresented groups, companies can create more relevant marketing strategies that deliver greater value to their audiences.
The future of synthetic data is inherently multicultural. As the U.S. becomes more diverse, it is important to create AI and data solutions that reflect this reality. Training AI with multicultural insights helps create reliable synthetic data, leading to more inclusive applications and ultimately, better outcomes for businesses, consumers and society.
This blog post was originally published on Quirk's Media.
Once seen as an industry resistant to change, market research has embraced transformative technologies in recent years, with AI leading in reshaping traditional methods. Yet, diversity in the data remains elusive, presenting both an opportunity and challenge for researchers. As the founder of ThinkNow, a company at the forefront of multicultural insights, I’ve witnessed firsthand how critical accurate representation is in understanding diverse consumer behavior.
One way we are addressing this disparity is by creating synthetic samples. Over the years, we’ve developed ThinkNow Synthetic—a synthetic sample solution that leverages artificial intelligence to enhance diversity in data collection. However, for synthetic data to advance diversity, the quality of the training data is paramount. This article examines how AI, particularly synthetic sampling, can revolutionize the industry by producing more inclusive and representative datasets, while also highlighting the differences between synthetic sampling and traditional methods like weighting.
Traditional sampling techniques in market research often fall short when it comes to representing hard-to-reach demographics such as Hispanic, Black, AANHPI, and LGBTQIA+ communities. Even with diligent panel recruitment efforts, certain populations remain underrepresented. ThinkNow Synthetic was born out of this necessity, using large language models (LLMs) trained on multicultural data to create synthetic responses that mirror real-world diversity.
The process begins with training the model on diverse datasets, like the General Social Survey (GSS) and ThinkNow’s proprietary data collected from our panel, DigaYGane.com. This ensures that the synthetic sample reflects the population in question and produces responses representing a wide range of cultural experiences. Our approach enhances the inclusiveness of the data and reduces biases often associated with AI-generated responses.
A common misconception in market research is to equate synthetic sampling with weighting. While both aim to adjust the data to reflect population diversity better, they employ fundamentally different methodologies. Weighting, as many researchers are familiar with, takes a small sample size and extrapolates the results to a larger population. This can inflate the representation of underrepresented groups but doesn’t truly increase the diversity of responses. Essentially, weighting adjusts the numbers, not the underlying richness or authenticity of the data.
In contrast, synthetic sampling, particularly ThinkNow Synthetic, is designed to create entirely new data points based on the learned behavior of respondents from diverse communities. For example, if you are conducting a study among bicultural Latinos and face difficulty recruiting sufficient respondents, our AI model can generate synthetic responses that mimic those of a bicultural Latino based on actual data collected from our panel. This method doesn’t simply inflate responses but creates new, culturally nuanced data that enriches the overall dataset.
This difference is significant. Weighting amplifies a limited dataset, while synthetic sample expands it by simulating a broader range of responses. This approach has the potential to dramatically increase representation without sacrificing data accuracy.
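The contrast between the two approaches can be shown in a small sketch. Everything here is a simplified stand-in: the toy sample, the target shares, and the resampling-based generator are assumptions for illustration, where ThinkNow Synthetic uses an LLM trained on multicultural data rather than naive resampling.

```python
import random

# Toy sample: bicultural respondents make up only 5% of the completes,
# though the target population share is 20% (numbers are illustrative).
respondents = [{"group": "bicultural", "answer": "yes"}] * 5 + \
              [{"group": "general", "answer": "no"}] * 95

def weight(sample, group, target_share):
    """Weighting: keep the same few respondents but count each one more."""
    observed = sum(r["group"] == group for r in sample) / len(sample)
    return target_share / observed  # each bicultural answer counts 4x

def synthesize(sample, group, n):
    """Synthetic sampling (sketch): create NEW records whose answers follow
    the distribution learned from real respondents in that group. A real
    system would use a trained generative model, not random resampling."""
    answers = [r["answer"] for r in sample if r["group"] == group]
    return [{"group": group, "answer": random.choice(answers), "synthetic": True}
            for _ in range(n)]
```

The structural difference is visible in the return types: weighting yields a multiplier applied to existing rows, while synthetic sampling yields additional rows that enlarge the dataset itself.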
The success of any synthetic sample solution hinges on the quality and diversity of its training data. If the training data used to create synthetic responses is skewed or biased, the results will reflect those same biases. Training LLMs on rich, multicultural datasets ensures that synthetic responses are representative and culturally relevant, effectively mitigating the biases often found in AI-generated content.
ThinkNow Synthetic’s hybrid model combines panel data and synthetic responses to create complete and representative datasets. When a client comes to us with a quantitative study needing 1,000 completed responses, for example, we can provide 500 actual survey responses from our diverse panel and supplement the remaining 500 using synthetic data generated by our AI. This hybrid approach preserves the integrity of the study while reducing costs and accelerating delivery.
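The hybrid fill described above amounts to a simple top-up rule. This is a sketch under stated assumptions: `synthesize` is a placeholder for an AI generator, and the record shapes are invented for illustration.

```python
def build_hybrid_sample(real_responses: list, synthesize, target_n: int) -> list:
    """Fill a study to target_n completes: use every available real response,
    then top up the shortfall with synthetic ones. `synthesize` is a
    hypothetical stand-in for an AI-based response generator."""
    shortfall = max(0, target_n - len(real_responses))
    return real_responses + synthesize(shortfall)

# Example from the text: a 1,000-complete study built from 500 real
# panel responses plus 500 synthetic supplements.
real = [{"source": "panel"}] * 500
hybrid = build_hybrid_sample(real, lambda n: [{"source": "synthetic"}] * n, 1000)
```

Keeping the real responses first and synthesizing only the shortfall is what preserves the study's grounding in actual panel data.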
Synthetic sampling is still in its early stages, but the potential applications are vast. From understanding consumer trends to informing policy decisions, synthetic sample can provide a fuller picture of societal behaviors across diverse populations. By filling gaps in datasets with culturally relevant synthetic responses, ThinkNow Synthetic helps clients make more informed decisions that reflect the reality of the communities they serve.
This approach also addresses a major challenge in market research: the underrepresentation of marginalized groups. As brands seek to engage diverse audiences, producing accurate and inclusive data authentically has become a business imperative. Synthetic sampling offers a path forward, equipping researchers with the tools to understand these audiences more deeply.
AI-powered synthetic sample has the potential to revolutionize diversity in market research. However, this is only possible if the training data is as diverse as the populations we aim to represent. At ThinkNow, we are committed to using our years of expertise and rich multicultural datasets to ensure that synthetic sampling doesn’t just mimic diversity but truly reflects it. By combining synthetic data with real-world panel responses, we are creating a new era in market research—one where inclusivity and accuracy go hand in hand.
The future of market research is diverse, and with synthetic sample, we’re ensuring that no voice is left unheard.
This blog post was originally published on HispanicAd.com