Week 2: Preparing to Programme
Although I created my stimuli last week, there was still a lot more preparation to do before I could begin to upload them onto Gorilla, the experiment-building website! This week, I focused on allocating my stimuli into specific counterbalanced subsets to meet the study's design criteria!
This week has been much harder than week 1, as it has involved a lot of tasks that I have never tried before; however, it has been really rewarding to be able to challenge myself and succeed!
I started my second week off with a Zoom call with Dr De Bruin: we finalised my stimuli and then discussed the next stages of my OSF pre-registration form and programming! This was quite exciting, as although I'd been quite apprehensive about programming, I was also looking forward to the challenge. After a quick whistle-stop tour of Gorilla, I was left to my own devices to begin preparing the study!
My first task was to start dividing each of my sentence stimuli into plausible chunks, as this is how they will be presented during the experiment. Once I had divided the matched sentences, I decided to take a look at Gorilla and some of the consent and debrief templates that Dr De Bruin had shared with me. It was quite overwhelming at first as quite a lot of code was used to build the forms, however I soon started to pick up the code and altered the templates to fit my study.
My next job was to work out how to build the self-paced reading task for my experiment. It took me a while to figure out how to format the experiment on Gorilla so that the task ran correctly, however as I got more familiar with the code and tools I was soon on a roll! I managed to build the instruction page and practice trials today, so tomorrow I aim to chunk more of my sentences so that I can start to upload them onto Gorilla!
I spent this morning completing the set 1 chunked sentences. This took longer than I'd expected as I also had to input the sentence number, target and condition for each sentence, but by 2 o'clock I had them all neatly presented and ready to be standardised!
I took a break from the sentences for a while and decided to complete some of my OSF pre-registration form. The sections this week were harder to fill out as they predominantly focused on analysis (types of analyses that I have yet to learn about in my degree), however I found it beneficial as it forced me to really think and consolidate my understanding of the hypotheses and analyses chosen for the study.
I ended the day by carrying out the standardisation for the set 1 stimuli. To do this, I had to count how many chunks preceded the target word in each sentence, then take an average for each condition to make sure the conditions were balanced. Luckily for me, they were!
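For anyone curious, the logic of this standardisation step can be sketched in Python. The sentences below are made up purely for illustration (my real stimuli live in the spreadsheets), but the counting and averaging work the same way:

```python
# Hypothetical mini-stimulus set: each sentence is a list of chunks, with the
# target chunk's position and the condition recorded alongside it.
sentences = [
    {"chunks": ["The cat", "sat on", "the mat"], "target_index": 2, "condition": "matched"},
    {"chunks": ["The dog", "ran to", "the park", "quickly"], "target_index": 2, "condition": "matched"},
    {"chunks": ["The bird", "flew over", "the lake"], "target_index": 2, "condition": "mismatched"},
    {"chunks": ["The fish", "swam in", "a deep", "cold pond"], "target_index": 3, "condition": "mismatched"},
]

def chunks_before_target(sentence):
    """Number of chunks the reader sees before reaching the target chunk."""
    return sentence["target_index"]

def mean_preceding_chunks(sentences, condition):
    """Average number of pre-target chunks for one condition."""
    counts = [chunks_before_target(s) for s in sentences if s["condition"] == condition]
    return sum(counts) / len(counts)
```

With these toy sentences, `mean_preceding_chunks(sentences, "matched")` gives 2.0, and the conditions are "balanced" when the per-condition averages come out similar.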
Today my initial focus was chunking all the set 2 sentences. This didn't take as long as set 1 because I could copy and paste the neutral stimuli and just input the target word, so I only had to chunk the matched and mismatched sentences. I made sure that the averages matched set 1 and ran the t-tests again to double check that the averages were similar within conditions (they were!). Once I had done that I was able to fill out the rest of the table. This was a pretty simple task, as I simply had to input the display names for each sentence, which are all identical and so could be copied and pasted.
Once all my tables were complete, I had to re-arrange the chunked sentences into subsets that will be counterbalanced during the experiment. This was very confusing at first: not so much the logic behind the counterbalancing, but the way I was supposed to present it in a document that could be uploaded onto Gorilla. I sat for a while pondering over the format, but soon realised that actually doing something helped me to understand how to format all the stimuli. I started trying out different arrangements and soon found the correct way to display all the different counterbalanced combinations of stimuli, which was very satisfying! This took up the majority of my afternoon, however I was very pleased with my progress and the fact that I'd overcome one of the most challenging obstacles in my research journey so far!
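The alternating logic behind subsets A and B can be sketched like this. To keep things simple I've assumed a hypothetical two-condition design where each context appears in one condition per subset, with the conditions swapped between subsets (my actual study's assignment has more moving parts, but the principle is the same):

```python
def counterbalance(contexts):
    """Split contexts across subsets A and B so every context appears in both
    subsets, but in the opposite condition each time. Hypothetical
    two-condition (matched/mismatched) design for illustration."""
    subset_a, subset_b = [], []
    for i, context in enumerate(contexts):
        if i % 2 == 0:
            subset_a.append((context, "matched"))
            subset_b.append((context, "mismatched"))
        else:
            subset_a.append((context, "mismatched"))
            subset_b.append((context, "matched"))
    return subset_a, subset_b
```

So a participant assigned subset A never sees the same context in both conditions, yet across A and B every context is tested in every condition.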
Yesterday I managed to arrange set 1 into counterbalanced subsets A and B, so today I did the same thing with set 2. This took up the whole morning, so after dinner I moved onto my next big task... randomisation! The subsets must be randomised so that A) sentences belonging to the same context are not repeated 3x consecutively and B) identical target words are not repeated 2x consecutively. This was an easy but long process, as it required using the =RAND() formula in Excel to generate a unique randomised code for every sentence in both subsets. I managed to generate codes for both subsets in set 1, so tomorrow I hope to do the same for set 2!
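The =RAND() step is essentially giving every sentence its own random sort key. In Python the same idea looks something like this (the sentence IDs here are hypothetical):

```python
import random

def add_sort_keys(sentence_ids, seed=None):
    """Assign each sentence a unique random key, like =RAND() in Excel.
    Sorting the rows by these keys later shuffles the presentation order."""
    rng = random.Random(seed)  # seedable so a shuffle can be reproduced
    return {sid: rng.random() for sid in sentence_ids}
```

Each key is a value between 0 and 1, so sorting on the key column gives a fresh random order every time the keys are regenerated.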
Towards the end of the day I spent some more time completing my pre-registration form. I focused on the sampling section today, which was fairly straightforward as it required information about the participants, how we plan to recruit them, etc.
This morning I finished generating the randomised codes for set 2, so I was able to begin actually randomising the conditions within my counterbalanced subsets! I was able to do this quite efficiently using one of the sort functions, which allowed me to shuffle my sentences around whilst keeping each chunk that belonged to the same sentence together. I then had to check that no contexts appeared 3x consecutively and no target words appeared 2x consecutively; any that did had to be switched manually. This was all going very smoothly until I encountered a problem... I noticed that there were inconsistencies in one of my sentences across the subsets :/
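Both halves of today's task, sorting whilst keeping a sentence's chunks together and then checking the repetition constraints, can be sketched in Python. The rows and keys below are hypothetical stand-ins for my spreadsheet:

```python
# Spreadsheet-style rows: one row per chunk, tagged with its sentence ID.
rows = [
    {"sentence_id": "s1", "chunk_index": 0},
    {"sentence_id": "s1", "chunk_index": 1},
    {"sentence_id": "s2", "chunk_index": 0},
    {"sentence_id": "s2", "chunk_index": 1},
]
keys = {"s1": 0.9, "s2": 0.1}  # pretend =RAND() output, one key per sentence

def shuffle_sentences(rows, keys):
    """Sort rows by their sentence's random key, then by chunk position,
    so all chunks of one sentence stay adjacent after the shuffle."""
    return sorted(rows, key=lambda r: (keys[r["sentence_id"]], r["chunk_index"]))

def violates_constraints(ordered):
    """True if any context appears 3x consecutively or any target word
    appears 2x consecutively. `ordered` is a list of (context, target) pairs,
    one per sentence in presentation order."""
    for i in range(len(ordered) - 1):
        if ordered[i][1] == ordered[i + 1][1]:  # same target twice running
            return True
    for i in range(len(ordered) - 2):
        if ordered[i][0] == ordered[i + 1][0] == ordered[i + 2][0]:  # context 3x
            return True
    return False
```

Any order flagged by `violates_constraints` is the spreadsheet equivalent of the rows I had to switch around manually.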
This meant that I had to go right back to the beginning to make sure that the sentence was consistent from my stimuli creation spreadsheets all the way through to my randomisation spreadsheets. This did take a while, however now I'm satisfied that the sentence is accurate throughout my spreadsheets and will not cause any issues later down the line.
For the last half hour I started to learn how to use =VLOOKUP() and =CONCATENATE(). These functions will make it much easier for me to import my questions into the randomised code spreadsheets that I have been working on. This was quite difficult to learn, as Dr De Bruin had to try and explain via email; however, eventually I successfully inputted my first question using the functions! Next week I will finish off my input lists ready to upload to Gorilla :D
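What those two functions do together is easier to see outside of Excel: look a value up in a table, then glue pieces of text together. Here's the same idea in Python with a made-up question bank (the sentence IDs and questions are hypothetical):

```python
# Hypothetical question bank keyed by sentence ID, standing in for the
# lookup table in my spreadsheet.
questions = {
    "s1": "Did the cat sit on the mat?",
    "s2": "Did the dog run to the park?",
}

def build_question_row(sentence_id, questions):
    """=VLOOKUP and =CONCATENATE in one step: fetch the question that
    belongs to a sentence, then join it with the sentence's ID to make
    the display string for the spreadsheet."""
    return sentence_id + ": " + questions[sentence_id]
```

In the real spreadsheet, =VLOOKUP() does the dictionary lookup and =CONCATENATE() does the string joining, one formula per row.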