Generative AI
ResearchD AI
Team
Sejal Amrutkar, Ayushi Gupta
Role
UX Designer and Strategist
Tools
Miro, MS Office, Figma, Canva, Adobe Suite, ChatGPT, Gemini, Perplexity, Notably, Dovetail, Board of Innovation AI toolkit
Timeline
5 months
Ideation:
Brainstorming
SCAMPER
Crazy 8s
Concept Testing:
Concept Walkthrough, System Map, Journey Map, and Service Blueprint
Gamified Card Sorting
A/B Testing
POC Testing
Prototyping & Testing:
Use Case Analysis
User Flows
Brand Guide & Style Guide
Information Architecture
Lo-fi and Hi-fi prototyping
Heuristic Evaluation
Business Strategy:
Business Model Canvas
Business Development Roadmap
Functions and Actions Tree
Pricing Model
Adaptation Model
Strategy Director at Smart Design
Process in Detail ⬇️
Conceptualization
We began by brainstorming ideas and evaluating them based on adherence to the opportunity statement, feasibility, scalability, ease of adoption, uniqueness, and X-factor. This phase was particularly challenging as we had to cater to both AI skeptics and enthusiasts. After extensive evaluation and several iterations, we finalized our concept:
A hyper-personalized AI tool designed for design studios, operating on a B2B subscription model, where each studio customizes the AI using their own primary and secondary research data for each project.
Landscape Analysis
To refine our idea further, we conducted a landscape analysis of existing offerings. It revealed a significant gap in AI tools tailored to design researchers, particularly tools that optimize a model to generate personalized insights. Our concept addressed this gap directly.
This step was both challenging and enlightening, as we discovered numerous tools with similar features. It was crucial for us to establish clear differentiation to compete effectively in this crowded market.

Concept Testing
Over a three-month period, we tested our concept against parameters such as brand communication, product usability, value proposition, and product-market fit. We continuously updated our approach based on feedback, refining and validating hypotheses as they emerged.

Users were concerned about…

Value Proposition Clarity
"I’m having trouble understanding exactly how ResearchD benefits me at each stage of the process. The terms like 'hyper-personalized AI models' are a bit too technical."

Competitive Advantage
"Why should I choose ResearchD over free tools like ChatGPT or Gemini? They’re already widely available and easy to access."

Data Privacy
"I’m concerned about the privacy of my data. How does ResearchD handle the research data we provide, and what happens to it after it’s used to train the AI?"
Testing the concept helped improve our communication strategy
We designed a landing page to gauge interest in our concept and evaluate the impact of our brand voice and communication strategy. Feedback highlighted the need to shift from a "service-oriented" approach to a "benefit-oriented" approach, emphasizing simplicity and clarity.
We identified key features through…
Information Architecture
To ensure users could effortlessly navigate the product's complex functionality, we created a detailed information architecture. This involved developing a sitemap that charted the key user flows for accomplishing the most common goals.

Heuristic Evaluation
We conducted a heuristic evaluation of our high-fidelity prototype with design researchers, UX designers, and AI/ML engineers. The aim was to assess the accessibility and identifiability of product features, ease of use and navigation, and intuitiveness of website interactions.
We tested ResearchD’s interface against five key heuristics:
Visibility of system status
Match between system and the real world
Consistency and standards
Recognition rather than recall
Aesthetic and minimalist design

Proof of Concept Testing with Lazarus AI
This phase was the most rewarding, as it validated our concept. To test whether a custom-trained, RAG-based AI could contextualize research insights and highlight unique findings, we partnered with Lazarus AI and used their RAG framework. We built a proof of concept using two document sets with different research scopes (AI in Design Research and Elderly Mobility in India) and compared the results with ChatGPT (GPT-3.5) and Bard.
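For readers unfamiliar with retrieval-augmented generation, the sketch below shows the general retrieve-then-generate loop such a tool is built on. It is a minimal illustration, not Lazarus AI's actual framework: embed, retrieve, and build_prompt are hypothetical placeholders, and a production system would call a real embedding model and an LLM. The grounding instruction in the prompt is what encourages the model to admit gaps in the data rather than guess, the behavior we highlight below.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding so the sketch runs stand-alone; a real
    # system would call an embedding model here instead.
    vec = np.zeros(256)
    for i in range(len(text) - 2):
        vec[hash(text[i : i + 3]) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve(query: str, chunks: list[str], k: int = 3) -> list[str]:
    # Rank a studio's research chunks by cosine similarity to the query.
    q = embed(query)
    return sorted(chunks, key=lambda c: float(q @ embed(c)), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    # Grounding instruction: answer only from the retrieved research and
    # say so when the answer is missing, instead of inventing one.
    excerpts = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using ONLY the research excerpts below. If they do not "
        "contain the answer, say so explicitly.\n\n"
        f"Research excerpts:\n{excerpts}\n\nQuestion: {query}"
    )

# Example chunks standing in for a studio's uploaded interview notes.
chunks = [
    "P3 (72, Pune) walks to the temple daily but avoids buses after dark.",
    "P7 depends on a grandson to book autorickshaws via smartphone.",
]
question = "How do elderly participants get around?"
print(build_prompt(question, retrieve(question, chunks)))
```

The remaining step, which our POC delegated to Lazarus AI's framework, is simply passing this grounded prompt to a language model.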
Some Aha Moments from our testing!
Truth Over Guesswork: The RAG model outperformed ChatGPT by acknowledging gaps in the data instead of generating false responses, ensuring more reliable results.
Contextual Nuance: The AI accurately categorized subgroups within the elderly population based on behavior patterns, adding valuable depth to our research findings.
Cultural Clarity: The AI effectively captured and quoted culturally specific terms and activities directly from user data, showcasing its ability to reflect real-world nuances.
Takeaways
Navigating a Competitive Market
Identifying a gap in a crowded space like AI research tools required deep market analysis and differentiation. This showed us that product innovation alone isn’t enough; it needs to be backed by a strong business strategy.
Value of Specialized Knowledge
Consulting subject matter experts early in a project is crucial. Our initial lack of AI expertise created a steep learning curve that expert guidance from the start could have eased.
Simplifying Communication
Being immersed in the project, we became accustomed to technical terms. However, we quickly learned that we needed to be more mindful of our audience and avoid jargon when presenting our ideas.