prompt.fail is a project dedicated to exploring and documenting prompt injection techniques against large language models (LLMs). Our mission is to improve the security and robustness of LLMs by identifying and understanding how maliciously crafted prompts can manipulate these models. By sharing and analyzing these techniques, we aim to build a community that contributes to more resilient AI systems.