Research Question Examples: 25+ Strong Examples Across 5 Types
See 25+ research question examples across descriptive, comparative, causal, exploratory, and evaluative types. Includes frameworks, discipline examples, and common mistakes.
Every research project begins with a question. A sharp, focused research question orients the entire study: it determines what literature is relevant, what methods are appropriate, what data you need to collect, and how you will know when you have an answer. A vague or poorly framed question, by contrast, produces a vague project that meanders without a clear endpoint.
This guide collects 25+ research question examples across five question types, with discipline-specific examples from psychology, biology, history, computer science, and nursing. It also covers the frameworks you can use to structure your own questions and the common mistakes that weaken otherwise interesting projects. If you are working on a longer piece of writing that grows out of a single question, our guide to writing a literature review picks up where this one leaves off.
What Makes a Good Research Question
A strong research question satisfies four criteria:
- Focused. The question addresses one specific issue, not a bundle of related ones. You should be able to imagine a single study that answers it.
- Researchable. The question can be answered with evidence you can plausibly collect. Questions of pure opinion, metaphysics, or value judgment may be interesting but are not researchable in an empirical sense.
- Specific. The terms in the question are precisely defined. "Does social media affect mental health?" is too loose; "Does daily Instagram use of more than two hours predict higher depression scores among college students?" is usable.
- Significant. The answer would matter to someone - other researchers, practitioners, policymakers, or affected populations. A question that nobody cares about is technically researchable but not worth pursuing.
Add to these the criterion of novelty: a good research question addresses something that has not already been definitively answered. That does not mean the topic has never been studied, but your specific angle should add something new.
Let us look at examples across the five most common question types.
The Five Types of Research Questions
Research questions fall into five broad categories based on what they ask and what kind of answer they seek. Each type calls for different methods, different literature, and different kinds of evidence.
Descriptive questions
Descriptive questions ask what is happening, how much, how often, or to whom. They are the foundation of many fields and often the first step in a research program. The answers are typically statistics, patterns, or characterizations rather than explanations.
Examples:
- Psychology: What proportion of first-generation college students at public universities in the United States report experiencing impostor syndrome during their first year?
- Public health: How has the prevalence of type 2 diabetes in adults aged 25-45 changed in urban India between 2010 and 2025?
- Education: What instructional strategies do community college mathematics instructors most commonly use when teaching algebra to adult learners?
Descriptive questions are often dismissed as "just describing," but high-quality description is the basis for most subsequent research. You cannot explain or predict a phenomenon you have not yet accurately described.
Comparative questions
Comparative questions examine differences between groups, conditions, time periods, or places. They ask whether X differs between A and B, and if so, how.
Examples:
- Sociology: Do neighborhoods with community land trusts have lower rates of resident displacement than comparable neighborhoods without them?
- Computer science: How does the energy consumption of transformer-based language models trained with sparse attention compare to that of models trained with dense attention on similar tasks?
- Economics: How did employment patterns in the service sector in France differ from those in Germany during the 2020-2024 period?
Comparative questions require that the groups being compared are genuinely comparable on dimensions other than the one of interest, or that differences on other dimensions are measured and controlled.
Causal questions
Causal questions ask whether one variable causes a change in another. They are among the most demanding to answer because they require ruling out alternative explanations. Randomized experiments are the gold standard; observational methods with careful controls can sometimes approximate causal inference.
Examples:
- Education: Does participation in an evidence-based reading intervention during grades 1-2 cause improved reading comprehension scores three years later, compared to business-as-usual instruction?
- Medicine: Does adding a twelve-week structured exercise program to standard chemotherapy reduce fatigue scores in adults undergoing treatment for breast cancer?
- Environmental science: Does the installation of urban green roofs reduce surface temperatures in surrounding city blocks compared to conventional roofing?
Causal questions should almost always specify the intervention or exposure under investigation, the relevant population, and the outcome of interest with enough precision that another researcher could replicate the design.
Exploratory questions
Exploratory questions investigate phenomena that are not yet well understood. They generate hypotheses rather than test them. These questions are common at the early stages of a research program, in emerging fields, and in qualitative research traditions.
Examples:
- Anthropology: How do TikTok creators in rural Appalachia negotiate their identities when their content circulates globally?
- Nursing: What experiences and concerns shape the medication decisions of newly diagnosed patients with type 1 diabetes in the first six months after diagnosis?
- Business: How do small business owners perceive and respond to the availability of AI tools for customer service and content creation?
Exploratory questions often use qualitative methods (interviews, ethnography, case studies) because the goal is to understand something new in depth, not to measure something already understood.
Evaluative questions
Evaluative questions assess whether a program, policy, intervention, or tool achieves its intended goals. They are common in applied research and overlap with causal questions, but evaluative questions are typically tied to specific real-world implementations rather than general theoretical relationships.
Examples:
- Social work: To what extent does the new family support program implemented in Chicago Public Schools improve student attendance and reduce disciplinary incidents?
- Public policy: How effective has the city's congestion pricing scheme been in reducing downtown traffic volume and improving air quality over its first two years?
- Health technology: Does the telehealth platform deployed by a regional health system improve access to mental health services for rural patients, as measured by appointment attendance and patient-reported satisfaction?
Evaluative questions usually require multiple types of evidence: quantitative outcomes, implementation fidelity data, and often qualitative input from stakeholders.
Discipline-Specific Research Question Examples
The same question type looks different across disciplines. Here are additional examples grouped by field.
Psychology
- How does daily gratitude journaling over eight weeks affect self-reported well-being in adults with mild depressive symptoms?
- What are the common cognitive distortions reported by adolescents with social anxiety during peer interactions?
- Does cognitive-behavioral therapy delivered via video call produce comparable outcomes to in-person delivery for adults with generalized anxiety disorder?
Biology
- How does gut microbiome composition differ between wild and captive-bred populations of a specific amphibian species?
- What effect does increased ocean acidity have on the calcification rate of juvenile oysters in laboratory conditions?
- Does expression of a specific gene vary between cancerous and adjacent healthy tissue in patients with colorectal adenocarcinoma?
History
- How did local newspaper coverage of women's suffrage events in the midwestern United States change between 1900 and 1920?
- What role did letter-writing networks play in coordinating anti-colonial activism in interwar East Africa?
- How did working-class attitudes toward public health measures shift during the 1918-1920 influenza pandemic in industrial Britain?
Computer science
- How does the accuracy of named entity recognition models trained on synthetic data compare to models trained on equivalent amounts of real-world data?
- What architectural choices most affect the energy efficiency of on-device image classification models?
- Does fine-tuning a foundation model on domain-specific legal text improve its performance on contract classification tasks?
Nursing
- How do new graduate nurses describe the transition from student to clinician during their first six months of practice?
- Does the implementation of a standardized handoff protocol reduce medication errors during shift changes in medical-surgical units?
- What factors most strongly predict medication adherence in older adults with heart failure six months after hospital discharge?
Notice how the structure repeats across fields. The topics differ, but the logic of the questions - descriptive, comparative, causal, exploratory, evaluative - is consistent.
Refining a Weak Question Into a Strong One
The difference between a weak and strong research question is usually not the topic but the framing. Here is an example of the refinement process.
Starting point: "How does AI affect education?"
This is a topic, not a research question. It is too broad to answer, the terms are undefined, and it does not specify what kind of evidence would count.
First refinement: "Does AI use affect student learning outcomes?"
Better, but still too broad. "AI use" covers dozens of possible uses; "student learning outcomes" covers hundreds of possible measures; "students" could mean anyone from kindergarten to postgraduate.
Second refinement: "Does use of ChatGPT for essay drafting affect the writing quality of undergraduate students in first-year composition courses?"
Much sharper. The tool is specific (ChatGPT), the use is specific (essay drafting), the outcome is specific (writing quality), and the population is specific (first-year undergraduates in composition).
Third refinement: "Does unsupervised use of ChatGPT for essay drafting, compared to drafting without AI assistance, affect the structural coherence and argumentative quality of essays written by first-year undergraduates in required composition courses at a large public university in the United States?"
Now the question is fully operationalizable. You know the intervention (unsupervised ChatGPT use during drafting), the comparison (no AI assistance), the outcomes (structural coherence and argumentative quality), the population (first-year undergraduates at a large public US university), and the setting (required composition courses). A reader can imagine the study and anticipate what kind of evidence would answer the question.
The refinement process usually follows this pattern:
- Start with the topic you care about.
- Narrow the population.
- Narrow the variables or concepts.
- Specify the setting and time frame.
- Clarify what kind of answer you are seeking.
Our research question generator walks through these steps interactively and produces a refined question plus a short list of the existing literature that bears on it.
Frameworks for Structuring Research Questions
Several established frameworks help structure questions, especially in applied fields.
PICO (Population, Intervention, Comparison, Outcome)
PICO is the standard framework for evidence-based medicine and related fields. It is especially useful for intervention questions.
- Population: Who is being studied?
- Intervention: What is being done or evaluated?
- Comparison: What is the alternative or control?
- Outcome: What is being measured?
Example: In hospitalized adults with pneumonia (P), does early ambulation within 24 hours of admission (I), compared to bed rest for 72 hours (C), reduce length of stay (O)?
Variants include PICOT (adding Time) and PICOS (adding Study design).
SPIDER (Sample, Phenomenon of Interest, Design, Evaluation, Research type)
SPIDER is adapted for qualitative and mixed-methods questions, where PICO is often a poor fit.
- Sample: Who is being studied?
- Phenomenon of Interest: What experience or behavior?
- Design: What research design?
- Evaluation: What outcomes or themes?
- Research type: Qualitative, quantitative, or mixed?
Example: How do young adult caregivers (S) describe the emotional experience of supporting a parent with early-onset dementia (PI), as explored through semi-structured interviews (D), focusing on themes of identity, time, and coping (E), in a qualitative thematic analysis (R)?
PEO (Population, Exposure, Outcome)
PEO works well for observational questions where there is no active intervention.
- Population: Who is being studied?
- Exposure: What are they exposed to?
- Outcome: What is the effect?
Example: In urban residents aged 50-70 (P), is exposure to long-term air pollution above WHO guidelines (E) associated with increased incidence of cardiovascular events (O)?
Choose the framework that fits your question type. Forcing a qualitative question into PICO, or an intervention question into SPIDER, produces awkward framing that does not help and may obscure the real question.
For doctoral-level research questions that span multiple studies or require integration across subfields, the structure often combines frameworks. Students in the PhD research track often develop umbrella questions that are broken down into sub-questions using different frameworks for each.
Common Mistakes in Research Questions
A few patterns show up repeatedly in weak research questions. Watching for them during the planning stage saves substantial work later.
Asking multiple questions as one
"What factors affect student engagement, and how do teachers respond to disengagement, and do school policies make a difference?" This is three questions, each of which is worth asking, but each requires its own study design. Pick one.
Yes/no questions with obvious answers
"Does exercise improve health?" We already know exercise improves health. A more useful question specifies a population, an intervention dose, and a comparison that is not already obvious. "Does 20 minutes of daily moderate-intensity exercise reduce depression scores in postpartum women more than mindfulness meditation of equivalent duration?"
Questions that cannot be researched
"Is capitalism morally justified?" This is a philosophical question. It can be debated but not answered with empirical methods. Research questions should be answerable with evidence.
Jargon-heavy questions that obscure the core issue
"To what extent does neoliberal epistemic hegemony inflect pedagogical praxis in sites of marginalized educational production?" A research question should be comprehensible to a reader outside your immediate subfield. Ideas can be sophisticated; phrasing should be clear.
Questions without a clear unit of analysis
"How does culture affect behavior?" Whose culture? Whose behavior? In what context? A usable question specifies the unit at which you are measuring the phenomenon - individuals, families, organizations, regions, and so on.
Questions that assume what they ask
"Why are low-income communities more resistant to vaccination?" This assumes the premise (that low-income communities are more resistant), which may or may not be true, and it embeds a causal claim (that income causes resistance) that should be examined rather than assumed. A stronger question tests the premise first: "Do vaccination rates among low-income adults differ from those of middle-income adults in the same region, after controlling for insurance status and clinic access?"
Questions where the answer is untestable
"Would the French Revolution have happened without Rousseau?" Counterfactual historical questions are interesting but usually not testable. Research questions should ideally be answerable in principle, even if the specific study has limitations.
Putting It All Together
A good research question is the first real achievement of a research project. Everything downstream - literature review, methodology, data collection, analysis, writing - flows more easily from a strong question than it ever will from a weak one. Projects that drift or stall usually trace back to a question that was never sharp enough.
The process of writing a strong research question is iterative. You start with a topic, narrow it, test it against the four criteria above (focused, researchable, specific, significant), and refine it as you read the literature. Most projects go through multiple versions of the research question before settling on the one that makes it into the final paper. That is the normal process, not a sign that you do not know what you are doing.
When you can state your research question in one clean sentence - specific about population, variables, and the kind of evidence you seek - you are ready to begin the rest of the research. Until then, more reading and more rewriting will save you far more time than it costs.
Related reading
Qualitative vs Quantitative Research: Key Differences and When to Use Each
Compare qualitative and quantitative research methods with a detailed table, discipline-specific examples, mixed methods guidance, and tips on rigor.
How to Identify Research Gaps: 6 Proven Strategies for Scholars
Learn six proven strategies for identifying research gaps in academic literature. Covers systematic mapping, contradiction analysis, and AI tools.
Annotated Bibliography: Complete Guide with APA, MLA, and Chicago Examples
A complete guide to writing an annotated bibliography with step-by-step process, worked examples in APA 7, MLA 9, and Chicago, and a downloadable template.