
Probably not safe against LLM de-obfuscation #7

@MartinEls

Description


It seems language models do a really decent job of de-obfuscating the generated code (the same holds for other obfuscation tools, not Carbon in particular). I didn't test it in any depth, but GPT-3.5 gives great hints when fed a function from the example folder: https://chat.openai.com/share/0dd8d626-4de1-4de4-af79-d9acbd66c7b5

So, be careful when you rely on it to protect important code. For larger code bases, at least for now, the limited context length of LLMs may offer a bit of protection.
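To illustrate why identifier-renaming obfuscation offers little protection against an LLM: the sketch below is a hypothetical example (not actual Carbon output) of an obfuscated function whose structure is fully intact. Since the algorithm is recognizable from the control flow alone, a model can simply propose meaningful names back.

```python
def O0O(l1l, ll1):
    # Obfuscated: the names carry no meaning, but the loop structure
    # still plainly encodes repeated multiplication.
    lll = 1
    while ll1 > 0:
        lll = lll * l1l
        ll1 = ll1 - 1
    return lll

def power(base, exponent):
    # The readable version an LLM might reconstruct from structure alone:
    # behavior is identical, only the identifiers differ.
    result = 1
    while exponent > 0:
        result = result * base
        exponent = exponent - 1
    return result

assert O0O(2, 10) == power(2, 10) == 1024
```

Because renaming preserves behavior and control flow exactly, the "secret" is only the names, which is precisely the kind of information a model trained on vast amounts of code can guess.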
