    Prompt Injection as Evolving Adversarial Behavior in LLMs: A Cognitive and Methodological Perspective

    Abstract: As large language models (LLMs) become increasingly complex and context-aware, the methods used to bypass their safety, ethical, or operational constraints, commonly known as prompt injection or jailbreaking, have become both more sophisticated and more transient. This paper presents a...