How does that compare to using agent mode in VS Code?
Is the main difference that the files are being edited remotely instead of on your own machine, or is there something different about the AI powering the remote agent compared to the local one?
I think it does help in some scenarios, like small scripts or when you're learning something new. But often it adds overhead: you have to constantly check whether or not to accept the suggestions.
I don't use Copilot anymore (at least for now). Just ChatGPT as an alternative to Google/SO.
Isn't Yew (https://yew.rs/) similar with regard to its React-like way of updating the HTML document?
I understood that Dioxus goes beyond that and is a whole platform to run the app, including a webview,
but wouldn't it make sense to have another framework like Yew as a dependency to manipulate the virtual DOM?
Dioxus uses the virtual DOM as the core that allows rendering to many different platforms. Yew and Dioxus are somewhat similar, but Dioxus represents its virtual DOM very differently internally. Because of the way Dioxus's virtual DOM is built, unlike Yew's, Dioxus will never diff the static parts of a component (read more here: https://dioxuslabs.com/blog/templates-diffing)
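For intuition, here is a toy sketch of the idea in Python. This is not Dioxus's actual Rust implementation, just an illustration of the difference between diffing a whole tree and only diffing the dynamic "holes" of a precompiled template:

```python
# Toy illustration of template-based diffing: the static structure is
# fixed once, and only the dynamic "holes" are compared on re-render.
# NOT Dioxus's real internals -- just the general concept.

# A template is static markup interleaved with dynamic slots (None).
TEMPLATE = ["<div>", "<h1>", None, "</h1>", "<p>", None, "</p>", "</div>"]

def render(dynamic_values):
    """Fill the template's holes with the current dynamic values."""
    values = iter(dynamic_values)
    return [next(values) if part is None else part for part in TEMPLATE]

def diff_dynamic(old_values, new_values):
    """Only the dynamic slots are compared; static parts are never diffed."""
    return [(i, new)
            for i, (old, new) in enumerate(zip(old_values, new_values))
            if old != new]

old = ["Hello", "count: 0"]
new = ["Hello", "count: 1"]
print(diff_dynamic(old, new))  # only slot 1 changed
```

A full-tree virtual DOM would walk every node here on each render; the template approach only ever looks at the two slots, no matter how large the static markup gets.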
Even within the same repo, it is very likely that the old version of your code will coexist with the new version during the deploy rollout. Splitting a change across different commits and deploys is often a requirement.
For instance, imagine that you add a column to the database and also use it in the backend service. You probably have to add the column first, then commit and deploy the code that uses it later, because you can't easily guarantee that the new code won't run before the column exists.
The same would apply to a new field in the API contract.
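One common way to make the API-field case safe is to have consumers tolerate the field's absence, so old and new payloads can coexist during the rollout. A minimal sketch, with hypothetical field names:

```python
import json

# Hypothetical payloads: the old service version omits "discount",
# the new version includes it. Both are in flight during a rollout.
old_payload = json.loads('{"order_id": 42, "total": 100.0}')
new_payload = json.loads('{"order_id": 43, "total": 100.0, "discount": 10.0}')

def effective_total(payload):
    """Tolerant reader: default the new field instead of assuming it
    exists, so this code works against both service versions."""
    return payload["total"] - payload.get("discount", 0.0)

print(effective_total(old_payload))  # 100.0
print(effective_total(new_payload))  # 90.0
```

Once every consumer reads the field this way, the producer side can start sending it in a later deploy without any coordination.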
The old version won't coexist when the change is contained within a single binary, which seems like it would be true in a bunch of cases.
In our monorepo we have to treat database changes with care, like you mention, as well as HTTP client/server API changes, but a bunch of stuff can be refactored cleanly without concern for backwards compatibility.
Do you only have a single instance of the binary running across the whole org? And during deployment, do you stop the running instance before starting the new one?
Any change that won't cross deployable binary boundaries (think docker container) can be made atomically without care about subsequent deployment schedules. So this doesn't work for DB changes or client/server API changes as mentioned by OP, but does work for changes to shared libraries that get packaged into the deployment artifacts. For example, changing the interface in an internal shared library, or updating the version of a shared external library.
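As a concrete (hypothetical) sketch of such an atomic change: a signature change in an internal shared library can land together with all updated call sites in one commit, because the library and its callers ship in the same artifact:

```python
# Hypothetical internal library function. The old signature took a raw
# "host:port" string; the new one takes structured parts. Because the
# library and every caller are packaged into the same deployable artifact,
# the signature and all call sites change in a single commit -- no old
# caller can ever run against the new library.

def connect(host: str, port: int) -> str:      # new signature
    return f"connected to {host}:{port}"

# Every call site in the repo is updated in the same commit:
def healthcheck() -> str:
    return connect("db.internal", 5432)        # was: connect("db.internal:5432")

print(healthcheck())
```

Contrast this with a change to an HTTP endpoint, where the caller and callee live in different artifacts and can never be updated in the same deploy.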
It seems like a common misconception that you can never change an interface. You actually can, as long as it isn't published for use outside of your repositories.
That is only likely to apply in very small-scale environments or companies.
And if only a single binary is produced, quite likely a single source code repo would be used as well - sounds like 'single developer mode', or 'small team' at most.
Our monorepo is millions of lines of source code, and hundreds of developers. Not small scale.
In this scenario, the single binary is the key encapsulation boundary, but your monorepo could be producing N binaries, each of which receives the change.
For example, if the change is to remove a single, unnecessary allocation in a low-level library function used across the repo, you can refactor it out and push the change as N binaries without worrying about compatibility.
Nice.
I used to think that creating the smallest possible services, functions, methods, etc. would produce the easiest software to deal with from the programmer's perspective. But it turns out you end up with many interfaces to maintain when perhaps you just needed one.
Are there any books that touch on this subject directly?