
This article follows "What is AI Agent Infrastructure."
Most AI market research fails before the real work even begins.
A founder, operator, or strategy team opens ChatGPT, Perplexity, Gemini, or Claude and asks a broad question like: What is the best industry for an AI agent infrastructure startup? The answer usually comes back quickly. It sounds polished, strategic, and intelligent. But in most cases, it is still too vague to support a real business decision.
The problem is not the tools. The problem is the method.
To find the best markets for AI agent infrastructure, consultants do not start by asking for a final answer. They start by building a research process. They define the question properly, break the problem into evaluation criteria, gather evidence in stages, challenge the first conclusion, and only then score the options.
That is what makes AI market research useful instead of merely impressive.
In this guide, I will walk through the consultant-style method for using Gen AI tools for market research. The goal is not to jump straight to one dramatic conclusion. The goal is to build a process that is structured, repeatable, and trustworthy.
In general, AI research fails because it starts with a question that is too broad.
If you ask, What is the best industry for AI agents?, most AI tools will blend together several very different ideas: how large the market is, how fast it is growing, which vendors have momentum, and whether a new startup could actually win there.
The result sounds smart, but the logic underneath is often thin.
A better question is this:
Which industries are the most attractive and winnable for launching an AI agent infrastructure business?
That framing matters because it forces you to evaluate both sides of the opportunity: demand (is the market attractive?) and feasibility (can a new startup actually win it?).
That is the difference between interesting research and decision-grade research.
Before using any Gen AI tool, define the research question properly.
A stronger version looks like this:
Which industries are the most attractive and winnable for launching an AI agent infrastructure business, based on market size, market growth, competitive intensity, startup winnability, value-chain fit, and team fit?
This is stronger because it separates demand from feasibility.
A market can look exciting on paper and still be a bad startup opportunity if incumbents already dominate the obvious entry points, if switching costs and sales cycles make traction slow, or if your team lacks the capabilities to build and sell there.
At this stage, ChatGPT is most useful as a structuring tool, not as a verdict machine. OpenAI says ChatGPT Search is built to provide timely answers with links to relevant web sources, while Deep Research in ChatGPT is designed to carry out complex, multi-step online research tasks and synthesize the findings into a documented report.
A practical prompt looks like this:
Act like a global strategy consultant helping me evaluate where to start an AI agent infrastructure business. Do not answer the final question yet. First, turn this into a precise research question. Then break the problem into MECE evaluation buckets. I want: market size, market growth, competitive intensity, startup winnability, value-chain fit, and team fit. For each bucket, define what it means, how it should be measured, and what proxy metrics can be used if direct data does not exist. Separate facts, assumptions, and hypotheses. Do not recommend an industry yet.
The goal here is not insight yet. The goal is structure.
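One way to keep that structure honest outside the chat window is to pin it down as data. Here is a minimal Python sketch of how the evaluation buckets might be recorded, with every claim tagged as a fact, assumption, or hypothesis. The field names and the example entries are illustrative placeholders, not output from any tool.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    kind: str  # "fact", "assumption", or "hypothesis"
    source: str = "unspecified"

@dataclass
class EvaluationBucket:
    name: str
    definition: str
    direct_metrics: list[str] = field(default_factory=list)
    proxy_metrics: list[str] = field(default_factory=list)  # for when direct data is missing
    claims: list[Claim] = field(default_factory=list)

# Illustrative bucket, mirroring the prompt above.
market_growth = EvaluationBucket(
    name="market growth",
    definition="Is the market expanding fast enough to make entry attractive now?",
    direct_metrics=["year-over-year spend growth by industry"],
    proxy_metrics=["hiring trends", "vendor funding announcements"],
    claims=[Claim("Adoption is accelerating in back-office workflows",
                  kind="hypothesis")],
)
```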
Once the question is clear, the next step is to decide how industries will be compared.
A consultant-style framework for AI startup market analysis can include six criteria.
1. Market size. Is the market large enough to support a meaningful business? Look at spending levels, customer volume, and budget availability.
2. Market growth. Is the market expanding fast enough to make entry attractive now? Look for growth in spending, adoption, workflow digitization, and vendor momentum.
3. Competitive intensity. Are there already strong incumbents dominating the obvious entry points? If so, a new startup may struggle to find open space.
4. Startup winnability. Can a startup realistically gain traction? Consider integration difficulty, switching costs, buyer behavior, sales cycles, and operational complexity.
5. Value-chain fit. Does the strongest pain sit in a specific workflow where agent infrastructure can solve a clear problem? Narrow workflow fit is often better than broad theoretical demand.
6. Team fit. Does your team actually have the right capabilities to build, sell, and operate in that market? A great market can still be the wrong market for your team.
This step matters because it keeps the research disciplined. Without a screen, you are just collecting interesting facts.
One of the biggest mistakes people make is asking every AI tool the same question and comparing the tone of the answers.
A better approach is to give each tool a different job.
The best workflow looks like this:
frame first, search second, synthesize third, challenge fourth, score fifth, write last.
That order matters because the tools are designed for different kinds of work.
Used in sequence, these tools complement each other. Used interchangeably, they often duplicate weaknesses.
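To make that sequence concrete, you can treat it as an explicit pipeline in which each stage has exactly one job and consumes the output of the stage before it. The sketch below is tool-agnostic and deliberately skeletal: run_stage is a hypothetical stand-in for whichever model you assign to that step, and the stage descriptions simply restate the order above.

```python
# Each stage, in the order described above: (name, job).
STAGES = [
    ("frame",      "turn the broad question into a precise research question and buckets"),
    ("search",     "build a source pack of current, labeled evidence"),
    ("synthesize", "compare candidate industries against the framework"),
    ("challenge",  "red-team the draft conclusion for disconfirming evidence"),
    ("score",      "build a weighted scoring model from the surviving evidence"),
    ("write",      "draft the final work product from the scored shortlist"),
]

def run_stage(name: str, job: str, context: str) -> str:
    # Placeholder: in practice, this is where you would prompt the model
    # assigned to this stage. Here we just record the handoff.
    return f"{context}\n-> {name}: {job}"

def run_pipeline(question: str) -> str:
    context = question
    for name, job in STAGES:
        context = run_stage(name, job, context)  # each output feeds the next stage
    return context

print(run_pipeline("Which industries are most attractive and winnable?"))
```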
The first job belongs to ChatGPT.
This is where it is especially useful: clarifying the problem, structuring the framework, and identifying what evidence matters.
At this stage, do not ask it to rank industries. Ask it to build the logic behind the research.
A strong prompt is:
Using the research question we just defined, build a decision framework for screening industries for an AI agent infrastructure startup. Show the evaluation buckets, what evidence would support each bucket, and what proxy metrics to use when direct public data is missing. Do not rank industries yet.
This is the foundation. If the logic is weak here, the rest of the research will also be weak.
Once the framework is built, move to a search-first tool.
Now the goal is not synthesis. The goal is to build a source pack.
You want current, relevant materials such as market sizing reports, earnings calls and investor materials, official company pages, government publications, and analyst research.
This is where tools like Perplexity or ChatGPT Search are especially useful.
A strong market-research prompt at this stage is:
I am researching which industries are the most attractive and winnable for a new AI agent infrastructure startup. Do not give me a final answer yet. Find the best current sources from the last 12 to 18 months that would help answer this question. Search for: (1) evidence of market size by industry, (2) evidence of market growth and adoption momentum by industry, (3) examples of businesses already building agent infrastructure in each industry, (4) signals of competitive intensity, and (5) regulatory or operational barriers affecting startup entry. Prefer primary sources, official company pages, investor materials, earnings calls, government publications, and top-tier research firms. Label each source as primary, secondary, or vendor marketing. Flag weak or missing evidence.
The key is simple: do not ask for the answer yet. Ask for the evidence first.
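Keeping the source pack as structured records, rather than a loose list of links, makes those labels usable as filters. Below is a minimal sketch under that assumption; the Source fields and both example entries are invented placeholders.

```python
from dataclasses import dataclass

@dataclass
class Source:
    title: str
    url: str
    label: str      # "primary", "secondary", or "vendor marketing"
    bucket: str     # which evaluation bucket this evidence supports
    published: str  # keep recency explicit; the prompt asks for 12-18 months

sources = [
    Source("Q3 earnings call transcript", "https://example.com/earnings",
           label="primary", bucket="market growth", published="2025-01"),
    Source("Vendor platform launch post", "https://example.com/launch",
           label="vendor marketing", bucket="competitive intensity", published="2025-03"),
]

# Flag buckets whose only support is vendor marketing, which the prompt
# above treats as weak evidence.
labels_by_bucket: dict[str, set[str]] = {}
for s in sources:
    labels_by_bucket.setdefault(s.bucket, set()).add(s.label)

weak = [b for b, labels in labels_by_bucket.items() if labels == {"vendor marketing"}]
print("buckets resting only on vendor marketing:", weak)
```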
Before running a full market scan, create a shortlist of industries that appear worth deeper analysis.
This candidate set should come from your framework, not guesswork.
Look for industries that repeatedly show the signals that AI agent infrastructure typically needs: dense, repetitive digital workflows; heavy integration across systems; real budgets for automation; and clear, recurring operational pain.
Only after this first screen should you decide which industries deserve deeper research.
This step matters because deep research works best when it starts with a thoughtful candidate set, not with the entire economy.
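A first screen can be as mechanical as counting how many of those signals each industry shows, as in the sketch below. The signal names, the per-industry judgments, and the threshold are all assumptions you would replace with your own; the code only enforces the discipline of screening before deep research.

```python
# Signals drawn from the framework above. Whether an industry shows a
# signal is your judgment from the evidence, not something code decides.
SIGNALS = [
    "dense repetitive digital workflows",
    "heavy integration across systems",
    "real budgets for automation",
    "clear recurring operational pain",
]

industry_signals = {
    "industry_a": {SIGNALS[0], SIGNALS[1], SIGNALS[2]},
    "industry_b": {SIGNALS[0]},
}

THRESHOLD = 3  # an assumption: require most signals before deep research

candidates = [name for name, hits in industry_signals.items()
              if len(hits) >= THRESHOLD]
print("advance to deep research:", candidates)  # -> ['industry_a']
```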
Once you have a candidate set and a source pack, move to a deeper research phase.
This is where you compare industries more rigorously: sizing each market, tracking growth and adoption momentum, mapping the most agent-dense workflows, identifying the companies already operating in them, and judging how crowded each segment is.
A practical prompt looks like this:
I want a deep research report on which industries are the most attractive and winnable for a startup building AI agent infrastructure. Research the candidate industries identified from the initial screen. My definition of AI agent infrastructure includes orchestration, memory and context layers, permissions and identity, tool use and system integrations, observability and evaluation, auditability, human-in-the-loop control, and workflow execution environments. Assess each industry on market size, market growth, startup winnability, the most agent-dense workflows within each industry, real companies operating in those workflows, and how crowded each segment already is. Use current, citable sources. If direct public data for number of agents by industry does not exist, use proxies and say so clearly. Distinguish facts from inferences. Include a comparison matrix, rank the top industries, and add a short section called what could make this conclusion wrong.
This is the point where research becomes genuinely decision-useful.
After the first research pass, do not stop.
This is where many people get fooled by a neat answer. A consultant does the opposite. A consultant pressure-tests the result.
A useful prompt is:
I have a draft conclusion about the most attractive industries for an AI agent infrastructure startup. Act as a skeptical strategy partner. Your job is not to agree. Your job is to find where this thesis could be wrong. Identify missing industries that may be more attractive, weak evidence or over-reliance on vendor claims, contradictions in current market signals, places where the market may be too crowded for startups, and places where regulation, procurement, or switching costs may make entry harder than the thesis suggests. Use web search and cite sources. Separate hard contradictions from softer caveats. Rank the top five reasons this conclusion could fail.
This is one of the most important steps in the entire process.
Weak research looks for agreement. Strong research looks for disconfirming evidence.
Only after the conclusion has been challenged should you move to scoring.
Scoring does not create perfect precision. It creates disciplined comparison.
A practical prompt is:
Using the research findings below, build a consultant-style scoring model for industry selection. I want a 1 to 5 score for each industry across market size, market growth, workflow pain and density, infrastructure dependence, competitive intensity, buyer accessibility, regulatory burden, and startup winnability. Score only using evidence already identified. If evidence is weak, lower confidence and say so. Provide the score by category, a weighted total, a confidence level, a short justification for each score, and a final ranking. After the table, write two short sections: what this scoring implies and what would change the ranking.
This is where a real shortlist starts to emerge.
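The arithmetic behind that prompt is simple enough to check yourself, and it is worth checking, because the weights drive the ranking. Here is a minimal sketch of the weighted total; the criteria follow the prompt above, while the weights, scores, and industry names are invented placeholders. One convention to fix up front: score every criterion so that higher is better, so a crowded market gets a low competitive-intensity score.

```python
CRITERIA_WEIGHTS = {
    "market size": 0.15, "market growth": 0.15,
    "workflow pain and density": 0.15, "infrastructure dependence": 0.10,
    "competitive intensity": 0.10, "buyer accessibility": 0.10,
    "regulatory burden": 0.10, "startup winnability": 0.15,
}
assert abs(sum(CRITERIA_WEIGHTS.values()) - 1.0) < 1e-9  # weights must sum to 1

# 1-to-5 scores, taken only from evidence already identified.
# Higher is better everywhere: a crowded segment scores low on
# "competitive intensity", a heavily regulated one low on "regulatory burden".
scores = {
    "industry_a": {c: 4 for c in CRITERIA_WEIGHTS} | {"competitive intensity": 2},
    "industry_b": {c: 3 for c in CRITERIA_WEIGHTS},
}

def weighted_total(industry: str) -> float:
    return sum(CRITERIA_WEIGHTS[c] * scores[industry][c] for c in CRITERIA_WEIGHTS)

for name in sorted(scores, key=weighted_total, reverse=True):
    print(f"{name}: {weighted_total(name):.2f} / 5.00")
```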
If this method is working, the final output should not be a dramatic statement like, “This is the best market.”
It should look more like a real consulting work product: a clear research question, a defined screening framework, a source pack, a candidate industry set, a deep comparison across industries, a red-team memo, and a scored shortlist with confidence levels.
That is the point where writing becomes easy, because the hard thinking is already done.
The biggest lesson is not that one model is better than another.
The real lesson is that good AI market research comes from method, not model confidence.
ChatGPT is useful for framing and synthesis. Search-first tools are useful for gathering evidence. Deep research tools are more useful once you know exactly what you are testing. A second model is useful for contradiction-hunting and red-teaming.
But none of these tools replaces the strategist's job of defining the question properly, choosing the evaluation criteria, weighing the evidence, challenging the first conclusion, and owning the final call.
That is the difference between asking AI for an opinion and using AI to do serious market research.
In the next article, this method becomes concrete: which industries survived the screen, why they survived, and how the final shortlist was chosen for deeper analysis.