- How does model size play into providing feedback?
- Can this be used for personalization?
- Discussing experimental results
- Can this be paired with recommender systems?
- What are obvious next steps to make the system more powerful?
- Clarifying the baseline methods
- Exploring cross-lingual customization
- Where did the idea for the clarification prompt come from?
- What did not work out during this project?
- What did you learn about interacting with large models?
- Final thoughts
Description:
Explore an in-depth interview with authors Aman Madaan and Niket Tandon on their research into improving GPT-3's performance after deployment, without retraining the model. Learn about their method of maintaining a memory of past interactions and using it to dynamically adapt new prompts, enabling non-intrusive fine-tuning and personalization. Hear their insights on the project's motivations, experimental results, cross-lingual customization, and potential applications in recommender systems. The conversation also covers lessons learned from interacting with large language models, challenges faced during the project, the role of model size, the origin of the clarification prompt, and future directions for enhancing very large pre-trained language models.
Author Interview - Memory-Assisted Prompt Editing to Improve GPT-3 After Deployment