> Also, it's a bit ironic how one way to prevent bugs is using stronger type systems and formal methods. But, AI is particularly bad at formal methods.
It kinda works tho. In my experience Copilot works much better with C# than with Python, simply because I can write the signature of a function and have it generate the body.
(I know Python has type annotations too, but Copilot just isn't as smart as with C#. Perhaps because there isn't enough training data in typed Python?)
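To illustrate the signature-first workflow: you write a fully typed signature and docstring, and the model fills in the body. A toy example in typed Python (the function name and body here are made up, not actual Copilot output):

```python
from collections import Counter


def most_common_words(text: str, n: int) -> list[tuple[str, int]]:
    """Return the n most frequent lowercase words in text, with counts."""
    # The typed signature plus docstring constrains what a plausible
    # body can look like -- this is the part the model would generate.
    words = text.lower().split()
    return Counter(words).most_common(n)
```

The stronger the types in the signature, the less room the generated body has to go wrong, which is the whole appeal.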
Hmm, so the endgame would be the most strongly typed and strict language possible, so LLMs can immediately see and fix their mistakes, while they act as an intermediary to automate the tedium of actually writing it?