Week 1: Creating Stimuli

For my experiment I need two types of stimuli: sentences and questions. The sentences will all contain a target word that will either match the context of the sentence, mismatch it, or be neutral to it. The questions will be based on those sentences and will ask about either the target word or the context of the sentence.


Hi everyone! I've decided to post a weekly blog to keep track of my progress, so here's to week 1! :D


I started the day with a Zoom call with my supervisor and discussed the first step to get the study up and running: creating the stimuli! The stimuli that I will be using are sentences and questions, which have to be precisely constructed to ensure that they fit grammatical and contextual criteria. My co-supervisor, Naveen, is focusing her PhD on language production, so she has kindly allowed me to use her stimuli as a starting point for creating my own. Since her stimuli are all questions, my first task was to use them as inspiration for formulating sentences in varying contexts, so I spent my day doing just that!

I successfully constructed sentences based on Naveen's stimuli; however, creating my own from scratch was much harder than I had anticipated! With a bit of perseverance and inspiration from previous studies, I've managed to start composing a few of my own sentences that I hope to incorporate into the experiment.

I ended the day by filling out a few sections of my OSF pre-registration form, which will help me to manage my study and maintain its credibility throughout the research process!


My aim today was to finish what I started yesterday! There were a few sets of sentences that I hadn't quite finished the day before, so I completed those before moving on to completely new sets. Due to the design of my study, I need 22 more stimulus sets than were required for the production research. I made a start on 15 sets yesterday, so I just needed to create the remaining 7!

After that I had to standardise the sentence stimuli by ensuring that (on average) they were all the same length and that the same number of words preceded the target word. This was quite fiddly, as it involved a lot of counting, and I had to revisit each sentence to decide where I could add or remove words. I was able to do this quite efficiently, though, by using Excel formulas, which automatically updated my word counts and averages to keep track of my progress. My supervisor Dr De Bruin then introduced me to software called N-Watch, which calculates the frequency of the subject and verb I have used in each sentence (another important variable to standardise). I spent the last hour of my day making a start on the frequency analyses for the matched and mismatched sentences, and hopefully I will be able to finish them all tomorrow!
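For anyone curious, the length-and-position check I did in Excel could equally be sketched in a few lines of Python. The sentences below are invented stand-ins, not my actual stimuli, and the conditions are just illustrative:

```python
# Hypothetical stimulus sets: each entry is (sentence, target_word).
# My real checks lived in Excel formulas; this mirrors the same idea.
stimuli = {
    "match": [
        ("The chef chopped the onion on the board", "onion"),
        ("The artist painted the canvas in the loft", "canvas"),
    ],
    "mismatch": [
        ("The chef chopped the anchor on the board", "anchor"),
        ("The artist painted the engine in the loft", "engine"),
    ],
}

def sentence_stats(sentence, target):
    """Return (total word count, number of words before the target)."""
    words = sentence.split()
    return len(words), words.index(target)

# Average both measures per condition so they can be compared.
for condition, sentences in stimuli.items():
    lengths, positions = zip(*(sentence_stats(s, t) for s, t in sentences))
    print(condition,
          sum(lengths) / len(lengths),
          sum(positions) / len(positions))
```

The aim is simply that the per-condition averages come out (roughly) equal, so any reading-time difference at the target word can't be blamed on sentence length or target position.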


This morning I focused on finishing off my frequency analyses for my sentence contexts. I managed to run my data through N-Watch relatively quickly; however, some of the results came back as 0.00, so I selected new words that could be detected by the software. Once I had results for each word, I had to average the frequencies in each context to check they were similar. This was quite tricky, as I needed to balance the frequency averages with the word letter-count averages, which also have to be similar across contexts. This took up a considerable amount of my day, but with a little guidance from Dr De Bruin I eventually managed to balance both averages across contexts!

This meant that I could begin creating my next type of stimuli... the questions! These were much quicker to construct, as I already had all my sentences to base the questions on, so I didn't have to look too far for ideas. I was able to start creating the questions for my matched and mismatched sentences, and I will continue with this tomorrow.

(I also completed this week's worth of my OSF form, so that's one thing crossed off my to-do list until week 2!)


I continued to create my questions this morning and had completed them for each condition by dinner time. Or so I thought... After my dinner break I checked back over the lists and realised that somehow I didn't have an equal number of context and target questions (there should have been 30 of each)! I checked my list question by question against my list of sentences and soon spotted where I had missed a question: I had done one too many target questions and not enough context questions.

I didn't need to run as many tests to ensure that my questions were standardised, because A) they're based on the same contexts and target words in sets 1 and 2, so are consistent across participants, and B) they will be read after the sentences, so will have no impact on the processing speed of the target word. However, I did check whether there was a similar number of who/what/when/where questions in each condition. Although this wasn't perfectly balanced (and couldn't be amended entirely due to the nature of the sentences they were based on), I did have around 2/3 who/what questions and 1/3 when/where questions for each sentence context, which is balanced enough for the study!
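That who/what/when/where tally is the kind of thing a short script can double-check. Here's a minimal sketch using invented example questions (my real lists had 30 target and 30 context questions):

```python
from collections import Counter

# Hypothetical question lists per condition; stand-ins for the real stimuli.
questions = {
    "target": [
        "What did the chef chop?",
        "Who chopped the onion?",
        "Where was the onion chopped?",
    ],
    "context": [
        "Who painted the canvas?",
        "What did the artist paint?",
        "When did the artist paint?",
    ],
}

# Tally each condition by its question word (the first word, lowercased).
for condition, qs in questions.items():
    tally = Counter(q.split()[0].lower() for q in qs)
    print(condition, dict(tally))
```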


Amazingly, I've finished week 1 ahead of schedule! All my stimuli are complete and have been standardised, ready for review by Dr De Bruin over the weekend, so with a bit of luck I'll be on track to begin programming my experiment on Monday! For now, I'm going to have one last check over the stimuli and also watch a couple of videos to familiarise myself with the experiment builder website ahead of week 2 :D

Pennie Haigh

Psychology Student, University of York

Hi! I'm Pennie, a BSc Psychology student at the University of York. My interests lie within language comprehension/production, and I'm excited to begin my research project which aims to assess whether sentence context and task influence age effects during language comprehension.