Yes, I mean, I’m an engineer on a cloud Kubernetes service, and I don’t run Kubernetes for my home services. I just run podman quadlets (systemd units).
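For anyone unfamiliar with quadlets: you drop a small unit file under `~/.config/containers/systemd/` and Podman generates a systemd service from it. A minimal sketch (the app and image are just placeholders):

```ini
# ~/.config/containers/systemd/whoami.container  (hypothetical example)
[Unit]
Description=Example web app run as a Podman quadlet

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After `systemctl --user daemon-reload`, this shows up as `whoami.service` and you manage it like any other unit.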
But that is entirely different from an enterprise scale setup with monitoring, alerting, and scale in mind…
Similar deal here. My $dayjob title is "Cloud Engineer" and I spend a lot of my time working with AKS and Istio. But for some recent personal projects at home, I've just been running Docker Swarm on a single server. It's just lighter and less complicated, and for what I'm doing it more than satisfies my needs. Now if this was going to production at mass scale, I might consider switching to K8S, but for experimentation and initial development, it would be way overkill.
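To illustrate how light the single-server Swarm setup is, here's a hedged sketch of a stack file (service name and image are made up for the example); you'd run `docker swarm init` once, then `docker stack deploy -c stack.yml demo`:

```yaml
# stack.yml — hypothetical one-service stack for a single-node Swarm
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
```

You get restarts, rolling updates, and built-in secrets without any of the K8s control-plane overhead.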
> But that is entirely different from an enterprise scale setup with monitoring, alerting, and scale in mind
Do you have experience with Kubernetes solving these issues? Would love to hear more if so.
Currently running podman containers at work and trying to figure out better solutions for monitoring, alerting, etc. Not so worried about scale (my simple python scripts don't need it) but abstracting away the monitoring, alerting, secure secret injection, etc. seems like it'd be a huge win.
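One low-effort option before reaching for Kubernetes: have each script expose a Prometheus-style `/metrics` endpoint and let an existing Prometheus/Alertmanager pair scrape it. A minimal stdlib-only sketch (metric name and counter are hypothetical; in practice you'd likely use the `prometheus_client` library instead):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical counter a script might bump each time it does its work.
RUNS_TOTAL = 0

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve only /metrics in Prometheus text exposition format.
        if self.path != "/metrics":
            self.send_response(404)
            self.end_headers()
            return
        body = f"myscript_runs_total {RUNS_TOTAL}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def serve(port=0):
    """Start the metrics endpoint on a background thread; port=0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), MetricsHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    RUNS_TOTAL = 3
    srv = serve()
    url = f"http://127.0.0.1:{srv.server_address[1]}/metrics"
    print(urllib.request.urlopen(url).read().decode(), end="")
    srv.shutdown()
```

For secrets, `podman secret create` plus `Secret=` in a quadlet covers the injection side without a cluster.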
I recognize the problem statement and decomposition of it. But not the solution.
Especially saying that he sees the same problem being worked on by N people. And now that makes it N+1?
I’ve been more interested in the protocols and standards that could truly solve this for everyone in a cross-compatible way.
Some people have dabbled with atproto as the transport and “memory” storage for example.
LLM providers should create a Stack Overflow-type site based on users’ most-asked problems. At least we won’t deplete the source of normal search results.