5 comments

  • runjake 8 hours ago

    Past discussion, which may be helpful:

    Ask HN: How do you manage your prompts in ChatGPT? https://news.ycombinator.com/item?id=41479189

    I'm curious to see how people's workflows have changed.

    As for us, we manually catalog them in well-named Markdown files and folders and store them in a git repo. I would like a more taxonomical approach.
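
    A minimal sketch of the kind of taxonomy I have in mind, layered on top of the Markdown-plus-git setup: YAML-ish frontmatter tags on each prompt file, plus a small stdlib-only script that builds a tag index. The prompts/ layout and the frontmatter field names (title, tags, model) are made up for illustration, not something we actually run.

      #!/usr/bin/env python3
      """Build a tag index for Markdown prompt files kept in a git repo.

      Assumes each prompt file starts with a YAML-style frontmatter block, e.g.:

          ---
          title: summarize-meeting-notes
          tags: [summarization, work]
          model: gpt-4o
          ---
          Summarize the following meeting notes...
      """
      from collections import defaultdict
      from pathlib import Path

      PROMPTS_DIR = Path("prompts")  # hypothetical repo layout


      def read_frontmatter(path: Path) -> dict:
          """Parse a minimal 'key: value' / 'key: [a, b]' frontmatter block."""
          text = path.read_text(encoding="utf-8")
          if not text.startswith("---"):
              return {}
          block = text.split("---", 2)[1]  # text between the first two '---' markers
          meta = {}
          for line in block.strip().splitlines():
              if ":" not in line:
                  continue
              key, _, value = line.partition(":")
              value = value.strip()
              if value.startswith("[") and value.endswith("]"):
                  # inline list, e.g. tags: [summarization, work]
                  meta[key.strip()] = [v.strip() for v in value[1:-1].split(",") if v.strip()]
              else:
                  meta[key.strip()] = value
          return meta


      def build_index() -> dict:
          """Map each tag to the prompt files that carry it."""
          index = defaultdict(list)
          for path in sorted(PROMPTS_DIR.rglob("*.md")):
              for tag in read_frontmatter(path).get("tags", []):
                  index[tag].append(str(path))
          return index


      if __name__ == "__main__":
          for tag, files in sorted(build_index().items()):
              print(f"{tag}:")
              for f in files:
                  print(f"  {f}")

    Run from the repo root, it just prints each tag with the prompt files under it, which is about as much taxonomy as plain text and git need.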

  • sebastiennight 8 hours ago

    To clarify, you seem to be asking about end users storing their (e.g., ChatGPT) prompts, not software devs managing the prompts they use via the API?

    These are two very different questions.

    • toomuchtodo 8 hours ago

      Both, actually, as I think there is a lot of overlap between the use cases. Someone mentioned to me that they recently "graduated" from the Mac ChatGPT app to something more robust, for example, because of how many prompts they were managing and how many systems they were submitting them to. I would also be interested in your perspective on why these are vastly different (user prompts vs. system prompts), as I might be missing context.

  • hchak 8 hours ago

    PromptLayer is great. Highly recommend it for prompt versioning, playgrounding, etc.

  • paulcole 8 hours ago

    I don't do this at all. I find that being obsessed with optimizing prompts is exactly what's not needed at this stage of AI's development.

    I just prompt as I go and find that the "cost" of prompting again to get a better output is lower than the cost of having some system for cataloging, maintaining, and versioning my prompts.

    I might be wrong, but I'm getting good results out of LLMs.