
Automating Product Research: A Multi-Agent AI Adventure with CrewAI
Hey folks, welcome back to my tech tinkering corner!
Today I’m sharing a story that started with a flooded laundry room and ended with an AI-powered experiment that works for ANY product. If you’ve ever tried to buy anything substantial online, you know the drill:
- Endless tabs with specs and reviews
- Forum hopping to figure out if “quiet” really means quiet
- Checking whether the brand even has service in your city
- Price comparisons across Amazon, Best Buy, and random e-stores
Thorough? Yes. Efficient? Not at all. My washing machine hunt ate up hours—maybe days—of my life.
So I thought: Why not automate this?
That’s where CrewAI came in. It’s a framework for building multi-agent systems—AI “teammates” that each handle a slice of a complex workflow. Instead of me juggling spreadsheets and bookmarks, I built a mini AI crew to research, validate, and recommend the right product.
Here’s how I turned shopping chaos into an automated pipeline.
The Spark: From Flooded Floors to AI Helpers
Shopping research is basically an investigation:
- Collect product and review data
- Validate details like service coverage and fake reviews
- Optimize for budget and deals
Manually, that’s painful. With CrewAI, you can break it into roles and let agents hand tasks off to each other. For my generalized product research system, I set up three agents:
- Product Researcher – scouts options, specs, and user reviews
- Review Validator – cross-checks reliability and service availability
- Recommendation Curator – filters, ranks, and finds the best deals
Input: any product described in natural language (e.g., “microwave oven 20liter city noida type convection budget INR 5000”). Output: a curated shortlist with pros, cons, and buy links, delivered in minutes.
Designing the Crew: Agents With Personalities
1. Product Researcher
- Backstory: A savvy shopper who unearths hidden gems from forums and e-commerce sites
- Tools: SerperDevTool (searches) + ScrapeWebsiteTool (pulls specs and ratings)
- Job: Build the raw list of candidate models
```python
from crewai import Agent
from crewai_tools import SerperDevTool, ScrapeWebsiteTool

researcher = Agent(
    role='Product Researcher',
    goal='Gather product options and reviews',
    backstory='A savvy online shopper mastering searches and data extraction',
    tools=[SerperDevTool(), ScrapeWebsiteTool()],
    verbose=True
)
```
2. Review Validator
- Backstory: A skeptical analyst trained to spot fake reviews and verify local service
- Tools: SerperDevTool for service lookups and cross-checking
- Job: Ensure no bad apples sneak through
```python
validator = Agent(
    role='Review Validator',
    goal='Verify reviews, specs, and service availability',
    backstory='A skeptical analyst confirming details and support',
    tools=[SerperDevTool()],
    verbose=True
)
```
3. Recommendation Curator
- Backstory: A personal advisor who balances features, budget, and deals
- Tools: FileReadTool to consume validated data
- Job: Rank and format the top picks
```python
curator = Agent(
    role='Recommendation Curator',
    goal='Curate final product recommendations with deals',
    backstory='A personal advisor matching needs with best prices',
    tools=[FileReadTool()],
    verbose=True
)
```
```mermaid
flowchart LR
    A([User Input: Item + Budget + Preferences]) --> B[Product Researcher]
    B -->|Finds models & reviews| C[Review Validator]
    C -->|Cross-checks specs & service| D[Recommendation Curator]
    D -->|Curated shortlist with deals| E([Final Report])
```
Putting It Together: Tasks + Crew
Each agent needs tasks. CrewAI chains them so outputs become inputs:
```python
from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool, ScrapeWebsiteTool, FileReadTool
from openai import OpenAI
from dotenv import load_dotenv
import json
import os

# Load SERPER_API_KEY and OPENAI_API_KEY from the .env file (see setup section)
load_dotenv()


def parse_product_input(user_input):
    """Extract product details from user input using an LLM."""
    client = OpenAI(api_key=os.getenv('OPENAI_API_KEY'))
    prompt = f"""
    Parse this product search query into structured data. Extract:
    - product: the main product name/category
    - specifications: technical specs, features, size, etc.
    - budget: price limit (default INR 10000 if not mentioned)
    - city: location for service/delivery (default "New Delhi" if not mentioned)

    Query: "{user_input}"

    Return only a JSON object with these 4 keys. Example:
    {{"product": "Microwave Oven", "specifications": "20liter convection type", "budget": "INR 5000", "city": "Noida"}}
    """
    try:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
            temperature=0.1,
            max_tokens=200
        )
        result = json.loads(response.choices[0].message.content.strip())
        # Ensure all required keys exist with defaults
        return {
            'product': result.get('product', 'Product'),
            'specifications': result.get('specifications', ''),
            'budget': result.get('budget', 'INR 10000'),
            'city': result.get('city', 'New Delhi')
        }
    except Exception as e:
        print(f"LLM parsing failed: {e}")
        # Fallback to simple parsing: first word is the product, rest are specs
        words = user_input.split()
        return {
            'product': words[0].title() if words else 'Product',
            'specifications': ' '.join(words[1:]) if len(words) > 1 else '',
            'budget': 'INR 10000',
            'city': 'New Delhi'
        }


def create_product_crew(product_details):
    """Create a generalized product research crew."""
    # Initialize tools
    serper_tool = SerperDevTool()
    scrape_tool = ScrapeWebsiteTool()
    file_tool = FileReadTool()

    # Create agents
    researcher = Agent(
        role='Product Researcher',
        goal='Gather product options and reviews',
        backstory='A savvy online shopper mastering searches and data extraction',
        tools=[serper_tool, scrape_tool],
        verbose=True
    )
    validator = Agent(
        role='Review Validator',
        goal='Verify reviews, specs, and service availability',
        backstory='A skeptical analyst confirming details and support',
        tools=[serper_tool],
        verbose=True
    )
    curator = Agent(
        role='Recommendation Curator',
        goal='Curate final product recommendations with deals',
        backstory='A personal advisor matching needs with best prices',
        tools=[file_tool],
        verbose=True
    )

    # Define dynamic tasks based on product details
    task1 = Task(
        description=f'Research {product_details["product"]} under {product_details["budget"]}, {product_details["specifications"]}',
        agent=researcher,
        expected_output='5-10 product models with reviews and links'
    )
    task2 = Task(
        description=f'Validate reviews and service availability in {product_details["city"]}',
        agent=validator,
        expected_output='Validated list with pros/cons and service notes'
    )
    task3 = Task(
        description='Curate top 3-5 recommendations with cheapest buy links',
        agent=curator,
        expected_output='Formatted report: Model, Price, Pros/Cons, Buy Link'
    )

    return Crew(
        agents=[researcher, validator, curator],
        tasks=[task1, task2, task3],
        verbose=True,
        process=Process.sequential
    )


if __name__ == "__main__":
    print("=== AI Product Research Assistant ===")
    print("Enter product details in natural language.")
    print("Example: 'microwave oven 20liter city noida type convection budget INR 5000'")
    print("Example: 'laptop gaming 16gb ram under INR120000 city mumbai'")
    user_input = input("What product are you looking for? ")

    # Parse the input
    product_details = parse_product_input(user_input)
    print("\nParsed details:")
    print(f"Product: {product_details['product']}")
    print(f"Specifications: {product_details['specifications']}")
    print(f"Budget: {product_details['budget']}")
    print(f"City: {product_details['city']}")

    # Create and run the crew
    crew = create_product_crew(product_details)
    result = crew.kickoff()

    print("\n" + "="*50)
    print("FINAL RECOMMENDATIONS:")
    print("="*50)
    print(result)
```
That’s it—your automated shopping squad.
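If you want to poke at the input parsing before setting up any API keys, the fallback branch of `parse_product_input` can be exercised on its own. This sketch duplicates that branch as a standalone function (the name `fallback_parse` is mine, not part of the original script):

```python
def fallback_parse(user_input):
    """Mirror the fallback branch of parse_product_input: the first word
    becomes the product, the rest become specifications, and budget/city
    fall back to their defaults."""
    words = user_input.split()
    return {
        'product': words[0].title() if words else 'Product',
        'specifications': ' '.join(words[1:]) if len(words) > 1 else '',
        'budget': 'INR 10000',
        'city': 'New Delhi',
    }

print(fallback_parse("microwave oven 20liter convection"))
```

This is intentionally crude (it knows nothing about budgets or cities in the query), which is exactly why the LLM pass runs first and this only catches failures.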
Setting Up Your Environment
Before running the code, you’ll need to set up your development environment and API keys:
1. Install Dependencies
Create a requirements.txt file (the script also imports the openai package directly, so include it):

```
crewai
crewai-tools
openai
python-dotenv
```

Install the packages:

```
pip install -r requirements.txt
```
2. Get Your API Keys
You’ll need two API keys:
- Serper API (for web search): Get yours at serper.dev
- OpenAI API (for the LLM): Get yours at platform.openai.com
3. Set Environment Variables
Create a .env file in your project directory:

```
SERPER_API_KEY=your_serper_api_key_here
OPENAI_API_KEY=your_openai_api_key_here
```
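Since the script reads keys via `os.getenv`, it is worth failing fast when one is missing rather than getting a cryptic error mid-run. A small sketch of such a check (the helper name `missing_keys` is illustrative, not part of CrewAI):

```python
import os

REQUIRED_KEYS = ("SERPER_API_KEY", "OPENAI_API_KEY")

def missing_keys(env=None):
    """Return the required API key names that are unset or empty."""
    env = os.environ if env is None else env
    return [key for key in REQUIRED_KEYS if not env.get(key)]

if __name__ == "__main__":
    absent = missing_keys()
    print("Missing keys:", ", ".join(absent) if absent else "none")
```

Calling something like this right after `load_dotenv()` turns a confusing downstream failure into a one-line diagnosis.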
Results and Reflections
My final run produced a clean shortlist with prices, pros/cons, and links. I ended up snagging an LG front-loader on sale, without the research headache.
The beauty of this approach is it’s truly universal. Just type in natural language what you’re looking for:
"microwave oven 20liter city noida type convection budget INR5000"
"laptop gaming 16gb ram under INR12000 city mumbai"
"smartphone android 5g budget INR8000 city Jaipur"
The system automatically parses your input and customizes the research crew for any product category.
Final Thoughts
What started as a soggy disaster became a fun AI experiment. CrewAI let me delegate shopping research to agents so I could focus on deciding, not digging.
Want to try this yourself?
- Test different products: Try "air fryer 5 quart city denver budget INR1500" or "running shoes size 10 for city Faridabad"
- Extend the crew: Add specialized agents like a “Promo Code Hunter” or “Warranty Analyzer”
- Customize parsing: Tweak the LLM prompt or the fallback parser to handle more input formats
- Share your results: I’d love to see what products you research with this system!
The complete working code with natural language parsing is available in the setup section above. No more hard-coding product details—just tell it what you want! 🚀