[Air-l] Any opinions about Opinio web survey software

Rowin Young rowin.young at strath.ac.uk
Wed Jul 5 07:01:23 PDT 2006


I would also be very interested in seeing the results of your research and am looking forward to their announcement.  
 
I also wondered how many (if any) of the tools you've looked at use the IMS Question and Test Interoperability specification (http://www.imsglobal.org/question/index.html), an open standard used mainly in e-assessment but which can also be applied to surveys.
 
Rowin Young
CETIS Assessment Domain Coordinator
http://assessment.cetis.ac.uk 

________________________________

From: air-l-bounces at listserv.aoir.org on behalf of Ulf-Dietrich Reips
Sent: Wed 05/07/2006 14:26
To: air-l at listserv.aoir.org
Subject: Re: [Air-l] Any opinions about Opinio web survey software



Hi Charlie, all,

At 6:21 -0500 on 5.7.2006, Charlie Balch wrote:
>I would be very interested in seeing your results.

No problem; please just send me a reminder if you
don't see our posting here before the end of the
year. By the way, the first presentation of the
results will be at the Web data collection
workshop in Dubrovnik this fall:
http://pdw2006.internet-research.info/

>I'm also interested in why you think item randomization is important.  I'm
>aware that there is some bias towards answering areas in web surveys.  I'm
>also aware of the argument that any changes to a survey at the participant
>level create different environments and thus make the data questionable.

Admittedly, for some surveys and applications
item randomization is unimportant or even
harmful (e.g. with validated measures in
personality research). However, there is a vast
literature on order effects and context
effects that clearly shows how vulnerable
survey results are to fixed item orders. The
best way to get rid of these problems is item
randomization.
As an illustration I would like to point you to a
study two of my students and I reported in 2001
in Dimensions of Internet Science
(http://www.psychologie.unizh.ch/sowi/reips/dis/).
Changing the order of just 2 items made a
difference of about 100 minutes (!) in reported
television consumption per week (an effect of
context and social desirability). Also, the order
of groups of items influenced dropout behavior
and data quality in the Web experiment.
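
As a minimal sketch of what is meant by item
randomization (not the implementation of any
particular survey package; the function name, the
participant-ID seeding, and the example items are
just illustrative assumptions), something like the
following Python snippet would give every
participant an individually shuffled, but
reproducible, item order:

import random

def randomized_order(items, participant_id):
    # Hypothetical helper: seed with the participant ID so each person
    # gets a reproducible order across page reloads, while orders still
    # vary across the sample.
    rng = random.Random(participant_id)
    order = list(items)
    rng.shuffle(order)
    return order

# Illustrative items only (echoing the TV-consumption example above).
items = ["Hours of TV watched per week",
         "Hours of radio listened to per week",
         "Hours spent online per week"]
print(randomized_order(items, participant_id=42))

Seeding per participant rather than per page
request is just one design choice; a fresh shuffle
on every request would serve between-participant
counterbalancing equally well.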

>By the way, http://birat.net is free including the source in ASP, but only
>runs on Windows Servers, and does not provide item randomization.

Thank you for the pointer (I also saw your
earlier post and took a look at the system). A
great initiative, but you may want to reconsider
the platform restriction and set of features. In
particular, I am afraid (or rather I am happy)
the Internet will render most platform-dependent
systems obsolete within the foreseeable future
for a number of reasons you'll find below in an
excerpt from a recent article.
So better switch strategies ;-)

Cheers, --u

P.S. I liked "Dissertation Hell" as the building
specification in your sig *grin*

Excerpt from
Reips, U.-D., & Lengler, R. (2005). The Web
Experiment List: A Web service for the
recruitment of participants and archiving of
Internet-based experiments. Behavior Research
Methods, 37, 287-292.
http://homepage.mac.com/maculfy/filechute/BSC515.pdf

"A number of tools have been developed for Internetbased
experimenting that form a general framework of
reference for the methodology. These tools can be grouped
into two general classes of "software": programs and
Web services. Programs follow the traditional format.
They need to be installed on a computer and run locally.
The working of the program depends on the computer's
configuration, which may vary considerably over time
(as other software is installed) and from user to user. Different
types of operating systems may not allow a user to
install the software at all. Upgrades and updates may be
necessary. However, the user is in control of the service and
independent of a connection to the Internet. An example of
a tool for Internet-based experimenting (in this case for
Web-based decision-making experiments) of the program
type is WebDIP (Schulte-Mecklenbeck & Neun, 2005).
Web services, on the other hand, run on a server that
is connected to the Internet. Users access it via a Web
browser and can use it only while they are connected to
the Internet. Because the functionality of Web browsers
is less dependent on the operating system (sometimes
they are even referred to as being platform independent),
all who access a Web service are likely to see and experience
almost the same interface (but see, e.g., Dillman
& Bowker, 2001, for browser-related problems in
Internet-based research). Web services spare the user
from upgrading and updating, since this is done by the
Web service administrators at the server. Nothing is installed
on the user's computer, saving space and time." (p. 287)
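
To make the program/Web service distinction above
concrete, here is a minimal, hypothetical sketch of
the Web-service side (it is not WebDIP, Opinio, or
any other tool mentioned here): a small server-side
endpoint that hands each browser a randomized item
order as JSON, so nothing needs to be installed on
the participant's machine.

import json
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative items only.
ITEMS = ["Hours of TV watched per week",
         "Hours of radio listened to per week",
         "Hours spent online per week"]

class SurveyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # One randomized order per request; a real service would key the
        # order to a participant ID and store the responses server-side.
        order = list(ITEMS)
        random.shuffle(order)
        body = json.dumps({"items": order}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), SurveyHandler).serve_forever()

Everything the participant sees is delivered
through the browser, which is what makes such a
service largely independent of the participant's
operating system.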
--
PD Dr. Ulf-Dietrich Reips
President, Society for Computers in Psychology (http://scip.ws)
Editor, International Journal of Internet Science (http://www.ijis.net)
Universität Zürich
Psychologisches Institut
Rämistr. 62
8001 Zürich, Switzerland

iScience portal (http://psych-iscience.unizh.ch/)
_______________________________________________
The air-l at listserv.aoir.org mailing list
is provided by the Association of Internet Researchers http://aoir.org
Subscribe, change options or unsubscribe at: http://listserv.aoir.org/listinfo.cgi/air-l-aoir.org

Join the Association of Internet Researchers:
http://www.aoir.org/
