Exploring LLM-Enhanced Program Generation and Security Evaluation
Hello everyone! My name is Hanyu (Amelia) Wang, and I am an undergraduate at The University of Hong Kong. I am excited to begin my research journey as a Laidlaw Scholar and to connect with fellow scholars worldwide.
Research Topic:
My project investigates the application of Large Language Models (LLMs) to program generation, automated software testing, and security evaluation. Specifically, I aim to explore LLM-enhanced domain-specific language (DSL) generation to improve library fuzzing.
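To make the idea concrete, here is a minimal sketch of what DSL-driven library fuzzing can look like. The grammar dictionary stands in for a DSL that an LLM might propose for a target library's input format, and `json.loads` is only a placeholder target; all names here (`LLM_PROPOSED_GRAMMAR`, `expand`, `mutate`) are illustrative, not part of any existing tool.

```python
import json
import random

# Hypothetical sketch: an LLM is asked to describe a library's input format
# as a small grammar (the domain-specific language); sampling from that
# grammar then drives fuzzing. This dictionary stands in for LLM output.
LLM_PROPOSED_GRAMMAR = {
    "<value>":  ["<number>", "<string>", "<array>"],
    "<array>":  ["[<value>, <value>]", "[]"],
    "<number>": ["0", "-1", "3.14", "1e308"],
    "<string>": ['"a"', '"\\u0000"', '""'],
}

def expand(symbol: str, depth: int = 0) -> str:
    """Recursively expand a grammar symbol into a concrete input string."""
    if symbol not in LLM_PROPOSED_GRAMMAR:
        return symbol          # terminal: emit as-is
    if depth > 8:
        return '""'            # depth cap to keep expansion finite
    out = random.choice(LLM_PROPOSED_GRAMMAR[symbol])
    for nonterminal in LLM_PROPOSED_GRAMMAR:
        while nonterminal in out:
            out = out.replace(nonterminal, expand(nonterminal, depth + 1), 1)
    return out

def mutate(s: str) -> str:
    """Randomly corrupt one character so near-valid inputs are tested too."""
    if not s:
        return s
    i = random.randrange(len(s))
    return s[:i] + random.choice('{}[]",:x\x00') + s[i + 1:]

# Fuzz the target (json.loads here, purely a placeholder for a library
# entry point) with grammar-derived and lightly mutated inputs.
for _ in range(1000):
    candidate = expand("<value>")
    if random.random() < 0.5:
        candidate = mutate(candidate)
    try:
        json.loads(candidate)
    except ValueError:
        pass                   # clean rejection of bad input is expected
    except Exception as exc:   # anything else may indicate a real bug
        print(f"potential bug on input {candidate!r}: {exc}")
```

In the full project, the grammar itself would be produced and refined by an LLM from the target library's documentation and API signatures rather than written by hand.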
Objectives:
- Leverage LLMs to generate code snippets and test cases efficiently for a variety of programming libraries.
- Automate vulnerability detection by integrating LLM-generated inputs into fuzzing workflows (see the harness sketch after this list).
- Evaluate the security and robustness of software libraries through comprehensive, LLM-assisted testing methods.
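As a rough illustration of how LLM-generated inputs could feed an off-the-shelf fuzzing workflow, the sketch below writes a handful of hypothetical LLM-suggested seeds into a corpus directory and hands them to Atheris, Google's coverage-guided fuzzer for Python, so mutation starts from those seeds. The seed list and the choice of the standard json module as the target are placeholder assumptions.

```python
import sys
import json
import pathlib
import atheris  # pip install atheris (Google's coverage-guided Python fuzzer)

# Hypothetical LLM-suggested seeds for the target library; in practice these
# would come from prompting a model with the library's API and documentation.
LLM_SEEDS = ['{"a": [1, 2]}', '[1e999]', '"\\ud800"', '{}']

# libFuzzer (which Atheris wraps) reads initial inputs from corpus
# directories passed on the command line, so we materialise the seeds there.
CORPUS = pathlib.Path("llm_corpus")
CORPUS.mkdir(exist_ok=True)
for i, seed in enumerate(LLM_SEEDS):
    (CORPUS / f"seed_{i}").write_bytes(seed.encode())

def test_one_input(data: bytes) -> None:
    """Fuzz entry point: exercise the library and let real crashes surface."""
    try:
        json.loads(data)  # placeholder for the library function under test
    except ValueError:
        pass  # graceful rejection of malformed input is expected behaviour

atheris.instrument_all()
atheris.Setup(sys.argv + [str(CORPUS)], test_one_input)
atheris.Fuzz()
```

Atheris then mutates the seeds under coverage guidance, which is where LLM-proposed structure can help reach deeper library states than random bytes would.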
Anticipated Outcomes:
- A framework for integrating LLMs with domain-specific language generation for effective library fuzzing.
- Insights into the strengths and limitations of LLMs in program synthesis and security assessment.
- Recommendations for the future use of AI in software testing and cybersecurity.
I am eager to learn from the experiences and perspectives of scholars around the world. If you have worked on related topics, or have suggestions and insights on applying LLMs in software engineering or security research, I would love to hear your thoughts!
Looking forward to engaging discussions and collaborations!