When I first applied to the Laidlaw Programme, I wrote that I hoped to advocate for meaningful improvements for marginalised groups through practical action. At the time, leadership meant initiative: identifying a problem and building something in response. Two years later, I still believe in action, but I now understand leadership as sustained responsibility rather than isolated intervention.
My volunteer work at Literacy Pirates had already shown me how inequality accumulates quietly. Tutoring children from immigrant families, I saw how limited language support at home or small gaps in confidence could compound over time. The most important lesson was not about designing better materials, but about consistency. Showing up every Saturday, remembering a child's anxiety about an upcoming test, adjusting explanations patiently: these small acts built trust. Leadership, I realised, is often relational and steady rather than dramatic.
Laidlaw deepened this understanding through research. In our project examining how generative AI systems engage with far-right narratives and reproduce dominant online discourses, I began to see how technological systems can subtly amplify exclusion. Innovation is not neutral. Decisions about data, design, and deployment shape whose voices are heard and whose are marginalised. Leadership in this space requires not only technical curiosity, but ethical vigilance.
The research process itself reshaped how I lead. Our initial methodology proved too optimistic; access limitations and inconsistent outputs forced us to revise our framework multiple times. At first, I felt that these disruptions reflected poor planning. Over time, I learned that leadership in research means stabilising the team when plans change. Adapting our approach was not failure — it was responsibility to the integrity of the question.
I also became more comfortable facilitating rather than directing. Encouraging teammates to challenge our assumptions improved the rigour of our findings, and intellectual humility strengthened collaboration. Through Toby's workshops, I saw this shift reflected in the letters I wrote to myself. Earlier, I focused on ambition and impact. Later, I wrote about accountability: about ensuring that my future work in AI does not unintentionally widen the very inequalities I hope to address.
Completing Laidlaw has clarified the direction of my future work. Before the programme, my interest in AI and inequality was largely academic. Through researching algorithmic bias and observing how online narratives shape public discourse, I began to recognise a gap that policy alone cannot close: AI literacy at the community level. If individuals lack the tools to critically understand and question AI systems, even well-designed governance frameworks have limited reach.
Laidlaw has given me both the analytical discipline and the reflective space to imagine initiating my own AI education and literacy project in the future. Drawing on my experience in community tutoring and my research training, I hope to design accessible programmes that introduce underrepresented students to AI tools while encouraging critical engagement. Rather than viewing technological progress as inherently inclusive, I now see leadership as the responsibility to shape its direction intentionally.
The programme has not transformed my ambitions; it has refined them. I leave Laidlaw with a clearer sense that leadership is not about scale alone, but about alignment — aligning technical skill with ethical awareness, and ambition with accountability. This is the standard I hope to carry forward in my academic and professional journey.