How Do We Know Which Research to Trust? Rachel Kennedy

6 Min Read


Welcome to this series of live blogs from the IJMR Research Methods Forum in London. Any errors, omissions, or silly side comments are my own.


09.25 Keynote address – How do we know which research to trust?
Rachel Kennedy, Associate Professor, Ehrenberg-Bass Institute, University of South Australia

[Great presentation!]

  • Would you trust EEG or eye tracking? The neuroscience technologies look cool, and it’s easy to be bamboozled. How do you know what’s right for your research?
  • Galvanic skin response measurements have been around since 1888, but we still don’t completely know what they mean or what to do with them
  • “This is old, therefore it is good” “This is new, therefore it is better” – William Ralph Inge
  • #1 We need to get repeatable results among different researchers
  • #2 It must predict in-market behaviours of interest – not interest, not liking, but sales
  • #3 Providers must be transparent – what is being measured, how it is analyzed, and what we expect to see under given conditions
  • #4 It should be actionable
  • We’d like to believe that the research in journals is the trustworthy research – we prefer peer-reviewed journals. BUT we can’t always trust it.
  • 10% of psychologists report they have falsified data – selectively reporting only the studies that worked, not reporting all the measures collected, excluding data post hoc
  • Most results from the TOP journals can’t be reproduced
  • Science has a lovely way of correcting itself over time. You can’t accept a single validation study. You need lots of validation.
  • Biometrics are exciting because they can give you an objective measure of truth – that’s the gut feel. But can they really?
  • Case study 1: Researchers put a salmon into an fMRI scanner. It appeared to respond to the stimuli. But the fish was dead. So what does it mean when humans respond to ads? Is the researchers’ data being corrected for false positives? (see the multiple-comparisons sketch after these notes)
  • Case study 2: A study took the same data and put it through two different software systems. They got two different results. [Yikes, I hope SPSS and SAS don’t do this.]
  • Case study 3: What test-retest reliability would you want to see? [95%, please] With fMRI, only one third to one half of the same neural activity recurs. [sooooo, chance]
  • Add one complex brain (experience, IQ) to one complex marketing stimulus (sound, colour, movement) – how can you get any reliability?
  • Hemispheric asymmetry – some people favour the left side, some favour the right. Do you balance your research respondents on this? Should you? Does it matter? Asymmetry depends on time of day and time of year. How do you balance on that? We just don’t know which of these matter.
  • How do you analyze evoked potentials? Patterns of response over the course of an event (an ad).
  • A skin conductance provider claimed 98% rank-order correlations [as soon as you said rank-ordered, I could tell the researcher was hiding something – see the toy example after these notes.]. The study can’t be found now, and the company doesn’t exist anymore.
  • In one study, at best only the biometric OR the traditional measure can be correct, because they lead to different conclusions. They are measuring very different things.
  • Traditional measures such as pleasure and likeability don’t correlate highly with sales.
  • Don’t take the negativity as “we should not do neuroscience”; take it as “know what you’re getting into”.
  • What about virtual shopping to understand real life? Market share results are different.
  • People are less likely to buy store brands and more likely to buy premium products in virtual testing. The penalties in fake life aren’t the same as in real life. Can we calibrate these differences? People know how to “play” shop.
  • Purchase rates and dollar estimates are inflated. In real life, most shoppers (the mode) buy only one item. But in the virtual environment, most people buy three to six items. Maybe they are showing you the average, or maybe it’s just wrong (see the mode-versus-mean sketch after these notes). We are missing standard research controls. [yeah baby!]
  • It’s really easy to get things completely wrong without rigorous controls.
  • What about a vending machine mini-shop study? How long people spend at the machine, what they look at, how long they look at it. [never thought of that!]
  • Big data doesn’t necessarily mean unbiased. E.g., a Facebook fan base can be huge, like Skittles’. But that’s a biased base, and you need to understand the bias. Facebook fans skew heavily towards the brand – the people who already like you and don’t need to be convinced. Most people are actually non-buyers, the complete opposite.
  • You must have good research design and the right researchers for the job.
  • Ask suppliers about their validation techniques, what they predict, their knowledge of marketing, their knowledge of market research (e.g., experimental control)
  • What is trustworthy? Well-grounded measures, validated technologies, passive/unobtrusive measurement, natural environments [hmmm…. do I hear social media listening research…]
  • [Thank you Rachel for reminding people that we need GOOD research with experimental design not just cool data and neato conclusions]
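A quick aside on the dead-salmon study: the problem it dramatises is multiple comparisons. An fMRI scan tests tens of thousands of voxels at once, so some will look “active” in pure noise unless the significance threshold is corrected. Here is a minimal sketch of that effect in Python – the voxel and scan counts are invented for illustration, not taken from the talk:

```python
# Sketch: why uncorrected fMRI-style thresholds find "activity" in pure noise.
# The numbers (30,000 voxels, 20 scans) are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels, n_scans = 30_000, 20
noise = rng.normal(size=(n_voxels, n_scans))      # no real signal anywhere

# One-sample t-test per voxel against zero, vectorised across voxels.
t, p = stats.ttest_1samp(noise, popmean=0.0, axis=1)

uncorrected = int(np.sum(p < 0.05))               # expect ~5% of 30,000 = ~1,500 "hits"
bonferroni = int(np.sum(p < 0.05 / n_voxels))     # corrected threshold, expect ~0

print(f"False positives, uncorrected:          {uncorrected}")
print(f"False positives, Bonferroni-corrected: {bonferroni}")
```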
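On the 98% rank-order correlation claim: rank correlations only ask whether two measures put the ads in roughly the same order, not whether one predicts the other’s magnitude, so a near-perfect rank correlation can sit on top of wildly distorted numbers. A toy example with invented scores:

```python
# Toy example: perfect rank-order agreement, much weaker linear agreement.
import numpy as np
from scipy import stats

# Hypothetical "sales lift" for six ads, and a biometric score that keeps
# the same ordering but badly distorts the magnitudes (values invented).
sales_lift = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
biometric = np.array([1.0, 1.1, 1.2, 1.3, 1.4, 50.0])

rho, _ = stats.spearmanr(sales_lift, biometric)   # rank-order agreement
r, _ = stats.pearsonr(sales_lift, biometric)      # linear (magnitude) agreement

print(f"Spearman (rank order): {rho:.2f}")        # 1.00 - sounds impressive
print(f"Pearson  (magnitudes): {r:.2f}")          # ~0.66 - much less so
```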
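And on the mode-versus-mean point about virtual shopping: real basket sizes are heavily skewed, so an average can look respectable even when the single most common basket is one item. A made-up illustration (all counts invented, just to show how the two summaries diverge):

```python
# Invented basket-size distributions, purely to contrast mode and mean.
from collections import Counter
from statistics import mean, mode

real_baskets = [1] * 60 + [2] * 20 + [3] * 10 + [5] * 6 + [8] * 4    # mode 1, mean ~1.9
virtual_baskets = [3] * 35 + [4] * 30 + [5] * 20 + [6] * 15          # mode 3, mean ~4.2

for name, baskets in [("real", real_baskets), ("virtual", virtual_baskets)]:
    counts = dict(sorted(Counter(baskets).items()))
    print(f"{name:8s} mode={mode(baskets)}  mean={mean(baskets):.2f}  {counts}")
```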