Key Topics
Prompt Engineering
- Crafting effective prompts to guide generative model outputs (e.g., few-shot and instruction-based formats)
- Designing reusable templates for story and quiz generation
- Experimenting with zero-, one-, and few-shot methods to improve model consistency
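The templating idea above can be sketched as a small helper that assembles a few-shot quiz prompt. This is a minimal illustration, not the project's actual template; the example stories and the function name are placeholders.

```python
# Reusable few-shot prompt template for quiz generation (illustrative).
FEW_SHOT_EXAMPLES = [
    {
        "story": "The sun gives Earth light and warmth.",
        "question": "What does the sun give Earth?",
        "answer": "Light and warmth.",
    },
    {
        "story": "Bees carry pollen between flowers so plants can grow seeds.",
        "question": "Why do bees carry pollen?",
        "answer": "So plants can grow seeds.",
    },
]

def build_quiz_prompt(story: str, shots: int = 2) -> str:
    """Assemble a prompt: instruction, `shots` worked examples, new story.

    shots=0 gives a zero-shot prompt, shots=1 one-shot, etc., which makes
    it easy to compare consistency across the three settings.
    """
    parts = ["Generate one quiz question and answer for the story."]
    for ex in FEW_SHOT_EXAMPLES[:shots]:
        parts.append(
            f"Story: {ex['story']}\n"
            f"Question: {ex['question']}\n"
            f"Answer: {ex['answer']}"
        )
    parts.append(f"Story: {story}\nQuestion:")
    return "\n\n".join(parts)
```

Varying the `shots` argument is one way to run the zero-/one-/few-shot comparison the bullet describes while keeping the instruction text fixed.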
Dataset Preparation & Fine-Tuning
- Extracting and cleaning story content from documents and converting it into structured JSONL format
- Organizing datasets with metadata (e.g., difficulty, topic) to support better fine-tuning outcomes
- Using OpenAI’s fine-tuning API to train models with curated datasets and evaluate performance metrics
AI Model Integration & Evaluation
- Deploying fine-tuned GPT models within Flask applications for real-time user interaction
- Routing requests between base and fine-tuned models based on use case or session context
- Evaluating model output using metrics (e.g., BLEU score, quiz alignment accuracy) and user feedback for iterative improvement
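The routing bullet above can be sketched as a small selection function that the Flask request handler would call before dispatching to the API. The model names and the session fields are hypothetical placeholders, not the project's actual identifiers.

```python
# Minimal per-request model routing sketch (names are placeholders).
BASE_MODEL = "gpt-4o-mini"                    # hypothetical base model
FINE_TUNED_MODEL = "ft:gpt-4o-mini:storyapp"  # hypothetical fine-tuned ID

def select_model(use_case: str, session: dict) -> str:
    """Route quiz-generation traffic to the fine-tuned model; fall back
    to the base model for other use cases or sessions with no profile."""
    if use_case == "quiz" and session.get("grade_level") is not None:
        return FINE_TUNED_MODEL
    return BASE_MODEL
```

Centralizing the choice in one function keeps the routing rule testable on its own and easy to extend with more session context later.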
Summary
Prompt Engineering

Source: https://coderpad.io/blog/development/what-is-prompt-engineering/
We started with prompt engineering to improve the GPT model's performance on storytelling and quiz prompting. We designed few-shot templates that demonstrated how input stories should map to questions and answers. Refining these prompts let us remove redundant information and ensure that the model's output reflected the app's intended educational outcomes for its learners. This gave us better control over, and consistency in, the AI's responses.
Dataset Preparation & Fine-Tuning