Date: Wed, 1 Apr 2009 10:08:35 -0500
Reply-To: Joe Matise <snoopy369@GMAIL.COM>
Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From: Joe Matise <snoopy369@GMAIL.COM>
Subject: Re: Any ideas for smart way of including text responses into
existing data set -- re: recoded variables?
Content-Type: text/plain; charset=UTF-8
To answer your last question - they'd be used as I suggested in my first
note on the subject. Word frequencies for 'venting' questions are very
effective at getting at problems you might not have known about. If 15% of
your (say, steakhouse) customers 'vent' including the word 'overcooked', you
know you have some training to do. If 28% use the word "marbled" associated
with "poorly", then you know you should consider changing steak suppliers.
Etc. Even though it sounds silly, you can pick up a lot even in seemingly
useless comment boxes.
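A minimal sketch of that word-frequency idea in Python (in SAS you'd get the same result by splitting the comments into one-word-per-row observations and running PROC FREQ; the comment strings below are made-up examples, not real survey data):

```python
from collections import Counter
import re

# Hypothetical free-text "vent" responses from a steakhouse survey
comments = [
    "Steak was overcooked and the service was slow",
    "Overcooked again - second visit in a row",
    "Loved the sides, but the meat was poorly marbled",
    "Great atmosphere, steak slightly overcooked",
]

# Count how many respondents used each word (a set per comment, so a
# respondent repeating a word is only counted once)
word_counts = Counter()
for comment in comments:
    words = set(re.findall(r"[a-z]+", comment.lower()))
    word_counts.update(words)

pct = {w: 100 * n / len(comments) for w, n in word_counts.items()}
print(f"overcooked: {pct['overcooked']:.0f}% of respondents")  # 75% here
```

In practice you'd also drop stopwords and group synonyms ("overcooked", "burnt", "well done when I asked for rare") before reading anything into the percentages.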
On Wed, Apr 1, 2009 at 9:43 AM, Kevin Viel <firstname.lastname@example.org> wrote:
> On Wed, 1 Apr 2009 09:13:45 -0500, Joe Matise <snoopy369@GMAIL.COM> wrote:
> >The point is not to replace your survey with free text responses entirely.
> >That's lazy, bad research. However, including free text responses in
> >surveys can (sometimes) be very useful in discovering things you didn't
> >think to ask. Again, I'm not talking about well-controlled scientific
> >studies; although I am not a scientist, I certainly could see the problem
> >there. But in a fact-finding study (such as a customer service
> >satisfaction study, or as Mary noted a fact-finding medical research
> >study) they definitely have their uses.
> >You also typically find that a lot of people who answer "other" to
> >questions really mean one of your intended responses, and you can code
> >them back up to those responses; you could just eliminate "other," but
> >particularly in situations where "other" is a valid response ("Was your
> >recent doctor's visit a) to a Primary Care doctor, b) to an Internal
> >Medicine specialist, c) to a Dermatologist, d) to an OB/GYN, e) to the
> >Emergency Room, or f) Other Specialist"), where you don't want to list
> >every potential kind of doctor, you'll find responses of f) Other
> >Specialist where, if you include a free text field, they list
> >"Gastroenterologist" (which you might consider Internal Medicine),
> >"Family Doctor" (primary care), etc.; clearly you want to code those back
> >to the original data and not lose valid responses, especially if you are
> >working with a small sample size.
> >Also, you have "unaided" answers. For example, imagine this study:
> >"Please describe any symptoms you are feeling right now related to PTSD."
> >and then follow the question up with
> >"Please check which of the following PTSD symptoms you are feeling right
> >now: ( ) Anxiety ( ) Sleeplessness ( ) Depression ( ) Suicidal Thoughts,
> >etc."
> >If you'd put the second question solely in the survey, I guarantee you'd
> >find a different result than if you put them both in. This is standard in
> >market research, where the goal is to find which brands (say) a consumer
> >will mention off the top of their head, and then list the brands of
> >interest; not only to find out brands that we might not have included in
> >the survey (some local brand that we weren't aware of, or a small brand
> >that is performing better than expected), but also because knowing what
> >people think of off the top of their head is useful. If 80% of people
> >recognize your brand name, but only 5% list it off the top of their head
> >when asked, you're probably not doing as well as if 60% recognize it and
> >40% list it off the top of their head. That would be the difference
> >between Chick-Fil-A and In-N-Out Burger, I'd suspect [one is a national
> >brand with low awareness but high recognition due to an effective
> >advertising campaign, while the other is a brand with only super-regional
> >presence but high awareness in that region - and no, I'm not basing this
> >off any real survey.]
> >Anyhow, I think to some extent this comes down to the type of research
> >you're doing, and thus the differences in opinion ;) I certainly wouldn't
> >suggest my fiancée (an immunologist) do research with free-text questions,
> >were she to do any sort of human survey research, but market research and
> >some less controlled health research certainly make good use of free-text
> >questions, and find more value than you'd imagine. :)
> The "other" option to a list is not quite a free-form text question. I
> think it still requires a set approach to interpret the answers before the
> survey is employed, but good luck achieving that.
> I certainly would object to your example question concerning PTSD. It
> would be too leading. In fact, even questions like "How many nights a
> week is your sleep restless" bother me. They are hard to quantify and may
> have a relative component that is hard to standardize.
> (For what it's worth, look up research concerning the size of the
> hippocampus and amygdala and PTSD. It seems interesting. Not to mention
> the use of MDMA, or ecstasy, in therapy.)
> I followed a few marketing surveys, not to mention questions concerning
> datamining. It makes me wonder if some companies get it wrong, but have
> consummate consumers as customers. Have credit, will spend.
> The point of a free form box at the end of customer surveys might just be
> to give the customer a chance to vent. The last couple of times I
> wrote in them, I was sure that they would not be used, simply because I
> could not see how the information might have been converted.