RL on GPT-5 to write better kernels

(arxiv.org)

4 points | by atallahw 5 hours ago

1 comment

  • atallahw 5 hours ago

    We got access to OpenAI's RFT API with GPT-5 and tried to see how good we could get it at one-shot Triton kernel generation. Some key decisions/observations:

    1. Tool use instead of multi-turn RL.
    2. Skip SFT altogether.
    3. Dataset curation was more important than dataset scale.
    4. Reward-hack detection must be robust (a sketch of one such check follows below).
    5. Models are getting a lot better at this.
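
    To illustrate point 4, here is a minimal sketch of a reward-hack-resistant check, not the authors' actual harness: the vector-add kernel, the helper names (candidate_add, reward_fn), the randomized test shapes, and the speedup-based reward are all assumptions for illustration. The idea is to verify the candidate kernel against a trusted PyTorch reference on several randomized inputs first, and only time a kernel that has already passed correctness (assumes a CUDA device).

        import torch
        import triton
        import triton.language as tl

        @triton.jit
        def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
            # Elementwise add over a contiguous 1D buffer, one block per program.
            pid = tl.program_id(axis=0)
            offs = pid * BLOCK + tl.arange(0, BLOCK)
            mask = offs < n_elements
            x = tl.load(x_ptr + offs, mask=mask)
            y = tl.load(y_ptr + offs, mask=mask)
            tl.store(out_ptr + offs, x + y, mask=mask)

        def candidate_add(x, y):
            # Stand-in for a model-generated kernel wrapper (hypothetical).
            out = torch.empty_like(x)
            n = x.numel()
            grid = (triton.cdiv(n, 1024),)
            add_kernel[grid](x, y, out, n, BLOCK=1024)
            return out

        def reward_fn(candidate, reference, shapes, device="cuda"):
            # Correctness first: randomized seeds and varied shapes make it hard
            # for a kernel to hard-code outputs or special-case the test input.
            for seed, shape in enumerate(shapes):
                torch.manual_seed(seed)
                x = torch.randn(shape, device=device)
                y = torch.randn(shape, device=device)
                if not torch.allclose(candidate(x, y), reference(x, y), atol=1e-5):
                    return 0.0  # a wrong kernel earns no speed credit at all
            # Only a correct kernel gets timed; do_bench handles warmup and syncs.
            t_cand = triton.testing.do_bench(lambda: candidate(x, y))
            t_ref = triton.testing.do_bench(lambda: reference(x, y))
            return t_ref / t_cand  # reward = speedup over the reference

        if __name__ == "__main__":
            print(reward_fn(candidate_add, torch.add,
                            shapes=[(1 << 10,), (1 << 16,), (12345,)]))

    Gating the timing behind the correctness pass, and randomizing seeds and shapes on every evaluation, blunts the common hacks of memorizing fixed test outputs or skipping the computation entirely to win on speed.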