S.O.S. Mathematics CyberBoard

Your Resource for mathematics help on the web!
PostPosted: Mon, 7 May 2012 17:45:21 UTC 
Math Cadet

Joined: Sat, 5 Aug 2006 10:39:54 UTC
Posts: 8
Dear all,

I hope you can point me in the right direction on two issues I am having with simulating experiments: (1) whether my assumption about differing from chance is legitimate, and (2) whether my simulation approach is valid at all.

We conducted a study (here's a poster: http://www.opensourcesci.com/pdf/CrossCultural_colourOdourAssociations.pdf) where participants smelled a range of odours and assigned to each the 3 colours that seemed most compatible with that odour. We did this in a variety of cultures. Our plan was first to see whether the selection of colours for odours differed from that expected by chance, and then to see whether colour-odour associations differed across cultures.

My initial thought was to use multinomial theory to determine the likelihood of a colour being picked 1, 2, 3, 4, 5, 6, 7, 8, 9 etc. times for a given odour, and then to flag colour-odour associations in the actual experiment as differing from chance when p < 0.05. However, the multinomial calculations proved beyond me (the complication being that participants could assign each odour THREE colours but could not assign the same colour more than once).
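For what it's worth, if each participant independently picks 3 distinct colours uniformly at random from m available colours, the chance that any one fixed colour is among a participant's 3 picks is exactly 3/m, so the number of participants picking that colour follows a Binomial(n, 3/m) distribution, and the tail probability can be computed exactly. A minimal sketch of that calculation — the values n = 50 and m = 34 are purely illustrative, and note that the counts for different colours are not independent of one another, which matters once you test many colours:

```python
from math import comb

def p_at_least(k, n, m):
    """P(a fixed colour is picked by >= k of n participants), where each
    participant picks 3 distinct colours uniformly at random from m
    colours, so the per-participant inclusion probability is 3/m."""
    p = 3 / m
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# Illustrative: with 50 participants and 34 colours, how surprising is it
# for one colour to be picked 10 or more times purely by chance?
print(p_at_least(10, 50, 34))
```

This sidesteps the full multinomial machinery for the "one colour at a time" question, at the cost of ignoring the dependence between colours.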

I then moved on to simulating the experiment. Here, the colours chosen for each odour were selected at random. The experiment was simulated 100,000 times, and the probability of a colour being picked at random 1, 2, 3, 4, 5, 6, 7, 8, 9 etc. times was output (if there were 600 incidences of a colour being randomly chosen 4 times, then 600/100,000 gives p = 0.006). As before, I assumed that if the number of times a given colour was picked for the same odour in our experiment was greater than that expected by chance (p < 0.05), we had a significant result (H1: 'within a given culture there are culturally specific colour-odour associations').
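A sketch of that direct simulation in Python — the participant and simulation counts are illustrative, and by symmetry it is enough to track a single fixed colour:

```python
import random
from collections import Counter

def null_distribution(n_participants, n_colours, n_sims, seed=0):
    """Estimate, by simulating the experiment, the probability that a
    fixed colour is picked by exactly k of n_participants when every
    participant picks 3 distinct colours at random."""
    rng = random.Random(seed)
    colours = range(n_colours)
    tally = Counter()
    for _ in range(n_sims):
        # Count how many simulated participants include colour 0
        k = sum(0 in rng.sample(colours, 3) for _ in range(n_participants))
        tally[k] += 1
    return {k: c / n_sims for k, c in sorted(tally.items())}

# Illustrative run: 50 participants, 34 colours, 10,000 simulations
dist = null_distribution(50, 34, 10_000)
```

The one-sided p-value for an observed count x is then the summed probability of all k >= x in the returned dictionary.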

To test whether colour-odour associations differed across cultures, I conducted a further 'higher-order' simulation: I ran two simulations as described above, one for each culture, and looked at the probability of the two cultures' colour-odour pick counts differing by 1, 2, 3, 4, 5, 6, 7, 8, 9 etc. times (the root of the squared difference, i.e. the absolute difference).
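A sketch of that higher-order comparison under the null, with both groups picking at random — the group sizes, colour count, and the observed difference of 6 are all illustrative:

```python
import random

def diff_p_value(observed_diff, n1, n2, n_colours, n_sims, seed=0):
    """Estimate P(|count1 - count2| >= observed_diff) for a fixed colour
    when two groups of sizes n1 and n2 both pick 3 distinct colours per
    participant completely at random."""
    rng = random.Random(seed)
    colours = range(n_colours)

    def one_count(n):
        # Simulated count of participants who include colour 0
        return sum(0 in rng.sample(colours, 3) for _ in range(n))

    hits = sum(abs(one_count(n1) - one_count(n2)) >= observed_diff
               for _ in range(n_sims))
    return hits / n_sims

# Illustrative: did two cultures' pick counts for one colour differ by 6?
p = diff_p_value(6, 50, 50, 34, 5_000)
```

A permutation test on the pooled data would be a more standard way to compare the two groups, but the above mirrors the two-simulation approach described here.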

Issue 1: Is it legal to assume here that, for a given colour C picked X times, we can say this occurred at an above-chance level if the probability of getting X picks of that colour in the simulated experiments was below .05?
Issue 2: I have yet to find what I have done in textbooks (e.g. 'Monte Carlo Simulation', Mooney; 'Simulation for the Social Scientist', Gilbert & Troitzsch). Textbooks advocate mathematically describing the distribution of each of the factors in your experiment, building these into a model of the experiment, and then simulating that model. However, I've simulated the actual experiment itself. Is this legal??! Are there any books or articles you can advise me to read on this technique, or is what I've done just wrong?

I very much appreciate your efforts in first wading through my lengthy description and then pondering my questions.
With kind regards,
Andy Woods.

PostPosted: Fri, 11 May 2012 06:01:03 UTC 
Member of the 'S.O.S. Math' Hall of Fame

Joined: Tue, 20 Nov 2007 04:36:12 UTC
Posts: 837
Location: Las Cruces
How to apply statistics is a subjective question, and different disciplines have their own traditions about which methods are used. In most real-world problems, not enough is known for a mathematician to prove that one way of doing a statistical test must be the best way or the only valid way. (Whether there are any legal statutes that apply to your field, I don't know!)

The wisest thing to do is to consider who will be evaluating your work: for example, will it be journal editors, a thesis committee, etc.? Find out what kinds of statistical methods are acceptable to them. Sometimes you can simply ask them. If you can't, look at what other people have done (articles, theses) that they have approved.

Computing the distribution of a statistic by using simulations is a recognized technique, and directly simulating the experiment in order to do this is a recognized technique. Where you may get criticism is from someone who can point out how you could have calculated the statistic in a deterministic manner, or from someone who disagrees with the statistic you are using.

When I think about this problem, I think of it in a chi-square setting. If I imagine the choices were 7 colors, then there are (I think) 35 different combinations of 3 colors. If I imagine each of these combinations to be a "cell", then if a person chooses colors in a totally random manner, we can imagine him being thrown at random into one of the cells. The chi-square test could be used to judge whether people are tossed into the cells at random. That wouldn't prove cultural differences, only net non-randomness. However, visualizing the problem this way, I think there are also other tests you could use to investigate differences among cultural groups.
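A sketch of this cell-based view in Python, estimating the p-value by simulation rather than from the chi-square table (the chi-square approximation can be unreliable when cells are sparse); the tallies passed in are invented purely for illustration:

```python
from itertools import combinations
import random

def gof_p_value(observed, n_sims=2000, seed=0):
    """Goodness-of-fit of cell counts against a uniform null, with the
    p-value estimated by Monte Carlo instead of the chi-square table.
    `observed` maps every cell (including empty ones) to its count."""
    cells = list(observed)
    n = sum(observed.values())
    expected = n / len(cells)

    def x2(counts):
        return sum((counts.get(c, 0) - expected) ** 2 / expected
                   for c in cells)

    observed_x2 = x2(observed)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        sim = {}
        for _ in range(n):
            c = rng.choice(cells)  # throw one person into a random cell
            sim[c] = sim.get(c, 0) + 1
        hits += x2(sim) >= observed_x2
    return hits / n_sims

# 7 colours give C(7,3) = 35 possible 3-colour combinations ("cells").
cells = list(combinations("ABCDEFG", 3))
assert len(cells) == 35
```

A perfectly uniform tally (one person per cell) yields a p-value of 1.0, while a tally where everyone lands in the same cell drives it toward 0.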

If you offered choices from 100 different colors, this might not be practical since there would be so many combinations. As I recall, there are also cautions about using the chi-square tests if some of the cells are empty or have very few occupants. (I'm not a chi-square expert, but you shouldn't have trouble finding one.)

PostPosted: Fri, 11 May 2012 06:22:36 UTC 
Math Cadet

Joined: Sat, 5 Aug 2006 10:39:54 UTC
Posts: 8
Thanks Tashirosgt. Much appreciated :)

Afraid we had 34 colours. Retrospectively, this was far too many! I explored Fisher's exact test (as with 34 colours, some 'cells' would have a zero occurrence value), but the test was quite underpowered. A collaborator is looking into some form of cluster analysis (he does brain imaging, where similar problems exist in finding significant differences across many activating 'cells', that is, 3mm x 3mm x 3mm blobs of brain) to link 'similar' colours, and then perform some clever stats on these.

You are right about scoping the statistics for the publishing medium. We hope to submit a paper on this to a Psychology journal so I'll look into this.

I just wonder if there are more sophisticated statistics than calculating p values. E.g. I think it would be prudent to look at the distribution around the p-value. A few outliers could cause havoc.

Many thanks,
