FOCUS:
Consumer Robotics · Discounted Media
ROLE:
Market Researcher · Product Research Lead
DELIVERABLES:
Strategic Implications
CONTEXT:
Online Experimental Study · Various Stimuli
Does Medium Matter? Evaluating Social Robots Using Video, GIF, and Photo Stimuli
A product and research strategy study comparing evaluation accuracy across three presentation formats to guide early-stage testing of robot concepts.
Impact at a Glance
Evaluating consumer purchasing decisions requires stimuli that resemble the content consumers actually use to form perceptions and guide behavior, chiefly marketing videos. In human-robot interaction (HRI) research, however, photos are often used to measure perceptions of robots. How well can static media approximate evaluations based on more interactive or informative material? I compared photos, which are common in HRI studies, and 5-second product animations against full marketing videos to determine whether either format can yield accurate, actionable consumer insights during early-stage product evaluation. The study also tested whether photos or GIFs could serve as proxies in evaluation studies, allowing data to be gathered more quickly and affordably.
Market Researcher · Product Research Lead
I independently led study design and execution for this media-comparison research, focused on improving early-stage evaluation methods for consumer robotics. I defined the core behavioral measures, selected stimuli formats, and conducted comparative analysis to identify low-cost strategies for testing robot concepts before physical prototyping. My work supported both design decision-making and product-market fit evaluation by clarifying how presentation format influences consumer perception.
600 participants were recruited online for this between-subjects study. Each was randomly exposed to one of three media formats (photo, animated GIF, or video) showing the same three social robot concepts (Liku, Jibo, and Olly, as below). Across formats, the stimuli depicted the same physical robots.
Participants rated each robot on key dimensions—liking, eeriness, human-likeness, privacy concerns, performance expectations, desire to seek out additional information, and willingness to purchase—enabling comparison across media. This design allowed me to isolate how media format itself may shape evaluative outcomes in concept testing.
Statistical analyses assessed group differences across the seven dimensions, identifying media-specific distortions and evaluating which format best approximates real-world marketing video perceptions.
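As an illustration of the group-comparison logic described above, the sketch below runs a one-way ANOVA across three simulated media conditions. The ratings are placeholder values generated for demonstration only, not the study's actual data, and the F-statistic computation is a generic textbook formulation rather than the specific analysis pipeline used here.

```python
import random
from statistics import mean

def one_way_anova_f(groups):
    """Compute the one-way ANOVA F statistic across rating groups."""
    k = len(groups)                          # number of conditions
    n = sum(len(g) for g in groups)          # total participants
    grand = mean(x for g in groups for x in g)
    # Between-group sum of squares: variance of condition means
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: spread inside each condition
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Simulated 7-point "liking" ratings for the three media conditions
# (hypothetical placeholder values -- not the study's data).
random.seed(1)
photo = [min(7, max(1, round(random.gauss(4.6, 1.2)))) for _ in range(200)]
gif   = [min(7, max(1, round(random.gauss(4.9, 1.2)))) for _ in range(200)]
video = [min(7, max(1, round(random.gauss(5.0, 1.2)))) for _ in range(200)]

f_stat = one_way_anova_f([photo, gif, video])
```

In practice a significant omnibus F would be followed by pairwise post-hoc tests to identify which format pairs (e.g., photo vs. video) actually differ on each dimension.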
Participants were also asked to describe what led to their selections. This allowed review of the specific feature differences that contributed to their evaluations on the overall dimensions.
Features Associated with Key Perception Metrics by Media Type
Video was most emotionally powerful, but also polarizing
Participants shown videos rated robots significantly more human-like and had different performance expectations than those shown photos or GIFs—suggesting greater emotional and interpretive load.
GIFs struck a pragmatic balance
Animated GIFs generated comparable judgments to video on key metrics (e.g., liking, privacy concern, purchase intent), while being lower-cost and easier to produce—making them a viable substitute in early to mid-stage evaluations.
Photos downplayed nuance and motion
Static images underrepresented expressivity and embodiment, resulting in more neutral (but also less realistic) evaluations. Use of photos alone risks underestimating emotional impact or privacy concern, and overestimating purchase intention.
Overlooking media effects risks false confidence
The same robot was perceived differently depending on how it was shown. Early product feedback is only as accurate as the medium allows.
Human-likeness is highly sensitive to media format
Robots shown in video were rated as significantly more human-like than those shown in photo or GIF format. Relying on static imagery may underestimate emotional risk or relational perceptions, except for very human-like robots.
Low-fidelity ≠ low-impact
Even brief GIFs captured enough motion and personality to approximate user perception from video—offering a lean, strategic format for iterative testing.
Photos can support early-stage concept testing
Although photos produced statistically significant differences from video on liking and willingness to purchase, the magnitude was small—less than 0.5 points on a 7-point scale. For evaluating perceived use case or intended application, there was no significant difference. These results suggest that static imagery may still be a practical, low-cost option for early-stage concept evaluation.
Enabled low-fidelity animation as a concept testing tool
Findings supported the use of animated prototypes (e.g. GIFs) for evaluating emotional and behavioral responses, leading directly to the design and testing of four original robotic concepts using animation instead of hardware.
Informed staged media strategy for concept evaluation
Study results helped establish a phased approach: early reactions could be gauged using sketches or photos, followed by GIFs, before committing to full video production or physical builds.