Guidelines to Prompt Large Language Models for Code Generation: An Empirical Characterization
Author: AI Papers Podcast Daily
Uploaded: 2026-01-28
Views: 48
Description:
This study establishes empirical guidelines for optimizing prompts to enhance Large Language Model (LLM) performance in code generation tasks. By analyzing coding challenges from benchmarks like BigCodeBench and HumanEval+ where models such as GPT-4o mini and Llama 3.3 consistently failed, the researchers utilized an automated, iterative process to refine prompts until they yielded correct code. This analysis resulted in a taxonomy of ten prompt improvement patterns, identifying critical elements often missing from initial requests, such as specific algorithmic details, input/output formats, and explicit pre- and post-conditions. A subsequent survey of 50 practitioners highlighted a discrepancy between effective techniques and actual habits; while developers commonly clarify I/O formats, they frequently underutilize highly rated strategies like providing usage examples or employing assertive language.
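To make the iterative refinement idea concrete, here is a minimal Python sketch (not the authors' actual pipeline): generate_code is a hypothetical stand-in for an LLM call, and the refinement strings illustrate patterns from the taxonomy (I/O format, pre-/post-conditions, a usage example) being appended to a failing prompt until the generated code passes the tests.

# Minimal sketch of an iterative prompt-refinement loop (hypothetical;
# generate_code stands in for a call to a model such as GPT-4o mini).

def generate_code(prompt: str) -> str:
    """Placeholder for an LLM call; returns a candidate implementation."""
    # A real implementation would send `prompt` to a model API.
    return (
        "def median(xs):\n"
        "    xs = sorted(xs)\n"
        "    n = len(xs)\n"
        "    m = n // 2\n"
        "    return xs[m] if n % 2 else (xs[m - 1] + xs[m]) / 2\n"
    )

BASE_PROMPT = "Write a Python function median(xs) that returns the median of a list."

# Progressive refinements mirroring patterns from the taxonomy:
# I/O format, pre-/post-conditions, and a usage example.
REFINEMENTS = [
    "Input: a non-empty list of numbers. Output: a float or int.",
    "Precondition: the list is non-empty. Postcondition: for even-length "
    "lists, return the mean of the two middle values.",
    "Example: median([3, 1, 2]) == 2 and median([1, 2, 3, 4]) == 2.5.",
]

TESTS = [
    ("median([3, 1, 2])", 2),
    ("median([1, 2, 3, 4])", 2.5),
]

def passes_tests(code: str) -> bool:
    """Execute the candidate code and check it against the test cases."""
    namespace = {}
    try:
        exec(code, namespace)
        return all(eval(expr, namespace) == expected for expr, expected in TESTS)
    except Exception:
        return False

prompt = BASE_PROMPT
for extra in [""] + REFINEMENTS:
    prompt = (prompt + " " + extra).strip()
    if passes_tests(generate_code(prompt)):
        print("Prompt that produced passing code:\n" + prompt)
        break
else:
    print("No refinement produced passing code.")

The real study refines prompts against benchmark test suites (BigCodeBench, HumanEval+) rather than hand-written assertions, but the loop structure is the same: add a missing detail, regenerate, and re-run the checks.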
https://arxiv.org/pdf/2601.13118