Seems like language models do a really decent job of de-obfuscating the generated code (the same holds for other obfuscation tools, not Carbon in particular). Didn't test it in any depth, but GPT-3.5 gives great hints when fed a function from the example folder: https://chat.openai.com/share/0dd8d626-4de1-4de4-af79-d9acbd66c7b5
So be careful when you use it on anything important. For larger code bases, at least for now, the limited context length of LLMs may offer a bit of protection.
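For anyone who wants to reproduce this, here is a minimal sketch of the kind of experiment described above, assuming the official `openai` Python client (v1+), an `OPENAI_API_KEY` in the environment, and a hypothetical obfuscated sample file; the path, prompt wording, and language of the sample are placeholders, not what was actually used in the linked chat:

```python
# Hedged sketch: feed an obfuscated function to GPT-3.5 and ask for an explanation.
# Assumes the `openai` Python package (>=1.0) and OPENAI_API_KEY set in the environment.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

# Placeholder path to an obfuscated sample (e.g. from the example folder).
obfuscated_source = Path("examples/obfuscated_example.py").read_text()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": (
                "This function has been obfuscated. "
                "Explain what it does and suggest readable names:\n\n"
                + obfuscated_source
            ),
        }
    ],
)

print(response.choices[0].message.content)
```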