Hacker News | silverfox17's comments

'Arcane terminal commands'? These are standard basic terminal commands.


You can't really make an informed decision without knowing how much data they were moving. For it to be that expensive, you'd need to be moving a ludicrous amount of data, and you can always pare the data down to the required fields before indexing, which saves on licensing costs.


In 20 years of doing SIEM and SIEM-like solutions, I've yet to find an engagement where the client said 'Oh, yes... our volumes are XX and YY'... mostly it's a /shrug and a less-than-educated guess.

There's even reluctance to turn things on and _watch_ them for 10 minutes, an activity that would immediately give you a much better idea of volume. Folks just don't like doing it.

Then you get the cases where setting up a redundant log source is just unwise. DNS logging was two orders of magnitude greater than everything else a SIEM was handling, and email was about the same size.


What are the required fields in an incident with a new bug, pray tell?


It obviously depends. It's not a one-size-fits-all answer.


Just because it's an old method doesn't mean it's a bad method.


I believe they titled it that way on purpose.


Windows needs sysadmins too, and Mac does as well. There are group policies and all sorts of other tools out there for managing Macs. Apple even has documentation on this.

Someone who is running a newer Linux desktop distro is not going to need sysadmin experience to browse the web and read email, the same way someone using Windows or a Mac as a desktop isn't going to need sysadmin experience, either.

As for your websites, no one is going to build a simple website that needs a dedicated database server, a "memory cache server", or whatever. Those issues become relevant with scale.


This seems a bit generalized. Yes, infrastructure as code and the like are becoming more prevalent, but underneath there are still systems running.

While a lot of jobs can abstract away these things, they are still there, and very real. If more people had an understanding of the underlying operating systems and file systems, we could mitigate a lot of vulnerabilities, find performance issues that may otherwise be obscured, or a myriad of other things.

Hacker News is in a bit of a bubble in that the answer to everything seems to be "kubernetes" or "$newHotSolution". I think part of it is that a lot of developers haven't actually worked with machines. At one point there was a post here about how hard it is to set up a LAMP stack, a task you could hand to a first-year sysadmin and expect them to figure out. Abstraction and automation are nice, but the underlying concepts are still important.


Comparing Kubernetes to a sysadmin manually provisioning a LAMP stack is like comparing a home kitchen to a commercial factory. They can both make a pizza, but one can make an order of magnitude more, at the same time.

They’re solving two different problems.


They are, but a lot of people think they need a commercial kitchen to make a bagel nowadays, because they've only worked in commercial kitchens.


We’re in agreement there. Right tool for the job.


Developers deploying static website in 2022 using K8S be like: https://xkcd.com/1319/


I'm a dude who uses Kubernetes to make pizzas. Kubernetes is absolutely a commercial-bakery-class machine, but much of its adoption is due to the fact that for just a bit more in price and effort, you can have that class of machine in your home and run real things on it.

Seriously: I run clusters from a few dozen nodes (down from a few hundred at my peak, sigh) down to a trio of Raspberry Pis in my living room. They're overkill by a little bit, but not by much. And it's definitely my ambition to make the tooling even easier and even more powerful, such that every small home can run something with enterprise level reliability.


I'd say Kubernetes is more like having a lathe in your shed. Almost no one needs one, but it sure does make some projects a lot easier.


I just got one and I'm not sure I need it. The lathe always seemed so dangerous to me.


Off topic, but having a lathe in my shed is a definite life goal.


Wrong comparison, sorry. K8s is an enterprise thing, while LAMP is good for SOHO. So a cookie factory vs. a small bakery.


This is a poor analogy overall, but I think it would be better to think of it thusly...

K8s is where you don't own the kitchen; you lease it when you wish to cook. You aren't aware of how to maintain the kitchen, or how to buy the ingredients you cook with. In fact, you can't even tell if a mango is ripe when shopping, because you don't shop and don't know how to.

That's what K8s is.

Meanwhile, sysadmins know how to maintain, manage, and run the kitchen, as well as cook the meal. Sysadmin knowledge scope is greater than K8s knowledge scope for this reason.

What AWS, Docker, and K8s have done is outsource specific realms of knowledge and skills, so people don't have to "deal with that". But if one is outsourcing knowledge and skills, one cannot claim that this makes the work more sophisticated.


A couple of examples I've found:

Celery spawns n workers, defaulting to the number of logical processors available. As anyone familiar with cgroups can tell you, this is fraught with problems when containerized, since nearly all mechanisms to detect processor count (or available memory) lead back to `/proc`, which will dutifully report the host's information out to the container. This leads to questions like, "I requested 4 vCPUs; why do I have 4+n threads?"
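A sketch of the usual workaround, assuming cgroup v2 and a hypothetical helper name: read the container's CPU quota first, and only fall back to the host's logical processor count when no limit is set.

```python
import os

def container_cpu_count():
    """Hypothetical helper: prefer the cgroup v2 CPU quota over the
    host-wide logical processor count that /proc-based APIs report."""
    try:
        # cgroup v2 exposes the quota as "<quota> <period>" in cpu.max;
        # the literal string "max" means no limit was configured.
        with open("/sys/fs/cgroup/cpu.max") as f:
            quota, period = f.read().split()
        if quota != "max":
            return max(1, int(quota) // int(period))
    except (OSError, ValueError):
        pass  # not on cgroup v2, or the file is missing/unreadable
    return os.cpu_count()
```

You'd then pass the result to Celery's `--concurrency` flag (or the matching setting) instead of trusting its default.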

ORMs in general. The worst example I've seen was querying a massive table and, to get n results from the end, using OFFSET + LIMIT. It was also deliberately not using the primary key, leading to a full table scan every time the query ran. If you aren't familiar with DBs, it may seem perfectly reasonable that querying a ~100 million row table would take a long time, when in fact it could and should be extremely fast with a properly written query.
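For illustration (sqlite3 with a made-up table; the principle holds on any SQL database): offset pagination makes the engine walk and discard rows, while keyset pagination seeks straight through the primary key index.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO events (payload) VALUES (?)",
                 [(f"row {i}",) for i in range(1000)])

# Offset pagination: the engine still materializes and throws away 990 rows.
slow = conn.execute(
    "SELECT id FROM events ORDER BY id LIMIT 10 OFFSET 990").fetchall()

# Keyset pagination: seek directly via the primary key index instead.
fast = conn.execute(
    "SELECT id FROM events WHERE id > ? ORDER BY id LIMIT 10", (990,)).fetchall()

assert slow == fast  # same page of results, very different cost at scale
```

At 1,000 rows the difference is invisible; at ~100 million, the keyset version stays fast while the offset version degrades linearly with page depth.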


You do realize some of the internet's biggest sites run on the lamp stack, right?


With Apache httpd inside? They don’t have devops at all?


Not really. It's a spectrum.

The only real difference is the sense of smugness when it works. In the old days, deploying LAMP gave you a sense of achievement; save for patching, there wasn't much more work to do.

Kubernetes is basically the same level of effort, but the upkeep is a bit more.

Also the networking is batshit, and so is the aversion to swap.


Doesn't seem to me that there's a bubble here where the answer is always Kubernetes, because every time that topic pops up there are a lot of posts like yours.


Another part of the problem is that developers are often discouraged or outright not allowed to work with machines, due to "it's not your job" kind of arguments, or corporate security enforced by auditors.


You don't need corporate security or auditors; even in a small web-shop, administering production servers is a power reserved for wizards. Sure, you can administer your own developer workstation; perhaps you get a Linux VM to yourself, that you can tinker with.

It would be nuts to let every developer tinker with the production server, or the source-code repo, or the fileserver. The private VM gives them a playground where they can learn to play at sysadmin, if they want; real sysadmin, I would contend, is taking the burden of responsibility when shit happens. Nobody cares much about wizards, until shit happens; the wizard then becomes an essential scapegoat. The newsgroup alt.sysadmin.recovery wasn't created for nothing.


> It would be nuts to let every developer tinker with the production server, or the source-code repo, or the fileserver.

I think that it might be a good idea to have most of the configuration for servers be based on Git repos, with something like Ansible or another such solution (Chef, Puppet, Salt), so that server configuration wouldn't be treated that differently from versioned code (which also serves as a historical record).

Don't give developers access to just push changes for production servers, but absolutely let them create merge/pull requests with changes to be applied once proper review is done: ideally with a description of what these changes accomplish, references to other merge/pull requests for development/test/staging environments where they were tested beforehand (perhaps after prior tests against local VMs) and a link back to whatever issue management system is used.

Then, have an escape hatch that can be used by whoever is actually responsible for keeping the servers up and running, in cases an update goes bad or other manual changes are absolutely necessary, but other than that generally disallow anyone to access the server directly for whatever reason (or at least remove any kinds of write permissions).

Personally, I'd also argue that containers should be used to have a clear separation between infrastructure and business applications, but that's mostly the approach to use when dealing with web development or similar domains. Regardless, I find that it's one of the more sane approaches nowadays, the Ops people can primarily worry about the servers, updates, security, whereas the developers can worry about the applications and updates/security for those.


A newer Raspberry Pi can run most of the web fine, the $600 argument is laughably false.


You are both too normal, so the product works as intended for you. Unfortunately, real engineering means making it work in all likely cases. Anyone outside of upper-middle-class suburbia has an absolutely terrible experience with computers: the browser takes a minute or more to start, and button clicks take 10 seconds. You wouldn't know this because they are not your target market. But don't get confused and think I'm appealing to feelings. I'm saying software should be acceptably performant, when currently most is not, especially not websites. I'm saying that 99% of people get a terrible experience with software once you step outside your bubble. Real software should work on only 1,000 MHz when performing trivial tasks like displaying text and forms (to say the least). Real software should not break down because of certain attributes of your IP address, user agent, etc.

I can't tell where you're coming from without getting to know you, but all these "works for me" forum posts are invalid for the simple fact that if I have a real conversation with someone in person, 99 times out of 100 it will just turn out they are acclimatized to slow half working bullshit. On forums the only reason they get away with spouting this nonsense is due to the upvote system and their cliques. Like if I complained about battlefield 2 taking 20 seconds to load the menu mid game back in 2004, a group of stupid forum posters will all either have a $2000 rig (in current dollars) that was able to bypass this bottleneck, or they're just some casual who played the game for 10 minutes in their life so they haven't had to open the menu a lot yet.

In the case of a signup form, there could and should have been a standard solution 20 years ago that doesn't require amateurs to code it themselves. Like a standard authentication protocol. And of course it should not rely on flaky communications like a verification email. All these things work for you because you're too normal, despite being terribly misconceived ideas.

Since the web as a platform is a misconception, you could not make an application that uses a standard authentication protocol, because the web introduces all kinds of problems through its weird way of operating: you're vulnerable to CSRF by default, there's asinine nonsense like JSONP, and web applications aren't really a real mode of software but a bunch of hacks like embedding a script tag, setting the doctype, and fiddling with strange headers. You can't really have libraries without all these fixes from 5 minutes ago, like pinning hashes of resources and building some giant half-working external dependency management system.

Then at the end of the day it's still each website pulling in authentication code with whatever implementation they choose, so you still can't trust it. Your "working" system is built out of a compatible stack of misconception-compatible software. The moment you misstep, for example if you want some privacy and put on a proxy, you will be severely punished, since that breaks half the web and leaves you with the overhead of doing workarounds for every single interaction.


Did you ever update your root certificates after some of the LetsEncrypt certificates expired in September?


Are you saying that developers should improve their security by monitoring their own commits?


What I meant is that they should set up monitoring (email/Slack) for commits and merges to prod/main/master, not their working branches. But yes, even for random branches, set up a pipeline that will notify you after whatever tests/checks run.

Assume it is only a matter of time before at least one dev's machine or git creds/keys are compromised. This way, monitoring serves as a layer of defense, notifying you of unauthorized modifications that could be subtle enough to make it past any review or QA.
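A minimal sketch of the server-side half, assuming a hypothetical set of protected refs; wired up as a `post-receive` hook, git feeds one `<old> <new> <ref>` line per updated ref on stdin, and a real version would push the alert to email/Slack instead of printing.

```python
# Hypothetical post-receive sketch: flag pushes that touch a protected branch.
PROTECTED = {"refs/heads/main", "refs/heads/master", "refs/heads/prod"}

def alerts(ref_updates):
    """Turn post-receive input lines ("<old> <new> <ref>") into alert
    messages for any update that lands on a protected ref."""
    out = []
    for line in ref_updates:
        old, new, ref = line.split()
        if ref in PROTECTED:
            out.append(f"ALERT: {ref} moved {old[:7]} -> {new[:7]}")
    return out

# Example: a push rewriting main alongside an ordinary feature-branch push.
print(alerts(["aaa1111 bbb2222 refs/heads/main",
              "ccc3333 ddd4444 refs/heads/feature"]))
```

The point is that the notification path lives on the server, outside any single developer's machine, so compromised creds can't silently suppress it.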


You can't think of any examples around you of organizations that have networks?

