Speed is useful when you have a good idea or a hypothesis you want to test. But if you are running in the wrong direction, speed is of very little value. With LLMs it might be even harder to stop and realize that you are creating the wrong thing, because you are not spending any effort to create it.
I'm seeing a cultural pattern where developers have started accepting LLM output with very little scrutiny. This ends up producing code that works on the surface, but most of the time the problems are not addressed at their source.
Creating these wrong things is simply cheaper with LLMs. Since developers now spend less time and effort to create the wrong thing, they don't feel the need to validate or reflect on it as much.
The risk is not the tool itself, but over-reliance on it and forgoing the feedback loops that have made teams stronger, e.g. debugging, testing, and reasoning about why something works a particular way.
> But if you are running in the wrong direction, speed is of very little value.
I think of it differently. Speed is great because it means you can change direction very easily, and being wrong isn't as costly. As long as you're tracking where you're going, then even if you end up in the wrong place, you got there quickly and noticed it, and you can quickly move in a different direction to get to the right place.
Sometimes we take time mostly because it's expensive to be wrong. If being wrong doesn't cost anything, going fast and being wrong a lot may actually be better as it lets you explore lots of options. For this strategy to work, however, you need good judgment to recognize when you've reached a wrong position.
> Wouldn’t it be ironic if all the early adopters were the losers because they liked the hacky nature of it? This happened to a lot of early computer adopters, low level programmers, etc.
> Might I be 7% more effective if I'd suffered through the early years? Maybe. But so what? I could just as easily have wasted my time learning something which never took off.
And thus it is reasonable to not use LLMs. And it's not only the training process that's problematic. There are so many individual LLM users who waste electricity on tasks that they would have solved by thinking for themselves just a few years ago.
> Having done a fair bit of logging to databases with various scripts, I believe this was a simple matter of overflowing the SQL column length for a field, causing the entire INSERT to fail. This is a common beginner mistake when you first start to work with databases.
I'm not sure if I understand this part, so I'm trying to put it into my own words. Is the following correct? The attacker provided an input that was so long that it was rejected by the database. And the program that submitted the SQL query to the database did not have any logic for handling a query failure, which is why there is no trace of the login attempt in the log or elsewhere.
That was my understanding. You have two services: one validates, another logs. The validation triggers a failure and requests that it be inserted into the audit database, but the audit log service fails, and that apparently doesn't block the validator from sending a response back to the attacker.
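A minimal sketch of that failure mode. The table, column, and function names here are all hypothetical, and since SQLite doesn't enforce VARCHAR lengths, a CHECK constraint stands in for a strict column-length limit:

```python
import sqlite3

# Hypothetical audit-log table; the CHECK constraint emulates a
# database that rejects values longer than the column allows.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE audit_log ("
    "  id INTEGER PRIMARY KEY,"
    "  username TEXT CHECK (length(username) <= 64)"
    ")"
)

def log_attempt(username: str) -> bool:
    """Record a login attempt; returns False if the INSERT itself failed."""
    try:
        db.execute("INSERT INTO audit_log (username) VALUES (?)", (username,))
        db.commit()
        return True
    except sqlite3.IntegrityError:
        # The whole INSERT is rejected -- and if nobody checks this
        # return value, the attempt leaves no trace anywhere.
        return False

def validate_login(username: str) -> str:
    log_attempt(username)          # result silently ignored
    return "invalid credentials"   # attacker still gets a normal response

validate_login("x" * 1000)         # oversized input: INSERT fails, no audit row
rows = db.execute("SELECT COUNT(*) FROM audit_log").fetchone()[0]
print(rows)  # 0 -- the login attempt vanished from the log
```

The fix is equally small: treat a failed audit write as an error in its own right instead of discarding the return value.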
Reading through the article, I can't help but think that many of these authentication/authorization flows are entirely too complex. I understand that they need to be for some use cases, but those are probably not the majority.
What language is universally better than Python? I don't think Python is perfect, but it is definitely one of the best languages out there. It is elegant and has a huge ecosystem of libraries, frameworks, and tutorials. There is a lot of battle-tested software in Python that is running businesses.
It's fast enough for many use cases. That doesn't mean there is no room for optimization, but it is far less of a deciding factor these days.
> it's not type safe
You can do static analysis with Mypy and other tools.
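To sketch what that buys you (the function here is made up for illustration), mypy checks the annotations without ever running the program:

```python
def parse_port(raw: str) -> int:
    """Hypothetical helper: turn a config string into a port number."""
    return int(raw)

port: int = parse_port("8080")
print(port)  # 8080

# Passing an int where a str is expected still runs here, but mypy
# reports an arg-type error on this line before the program ever runs.
bad = parse_port(8080)  # type: ignore[arg-type]
```

It's opt-in rather than enforced by the language, but in practice a CI step running mypy catches the same class of bugs a compiler would.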
> it has no real concurrency.
There are different mechanisms for running things concurrently in Python, and there's an active effort to remove the GIL. I also have to ask: what is "real" concurrency?
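One of those mechanisms, as a minimal sketch: for I/O-bound work, a plain thread pool already gives real overlap today, because blocking I/O releases the GIL. The `fetch` function here is a made-up stand-in for a network call:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(n: int) -> int:
    # Stand-in for an I/O-bound task (network call, disk read, ...).
    # Threads release the GIL while blocked, so these waits overlap.
    time.sleep(0.1)
    return n * n

start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch, range(4)))
elapsed = time.monotonic() - start

print(results)  # [0, 1, 4, 9]
# elapsed is roughly one 0.1 s sleep, not four: the tasks ran concurrently.
```

CPU-bound work is where the GIL actually bites, and that is what `multiprocessing` and the free-threading effort address.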
Admittedly, the things you mention are not Python's strongest points. But they are far from being dealbreakers.
The greatest barrier to understanding is not lack of knowledge but incorrect knowledge. That's why good names matter. And naming things is hard, which is why it makes sense to comment on variable names in a review.
Unless the naming convention was written in the 90s and all variables must follow a precise algorithm, made up only of abbreviations with a maximum length of 15 characters.
Or, for some, if a variable contains the value of a column in the DB, it must have the same name as the column.
So yeah, instead of "UsualQuantityOrder", you get "UslQtyOrd" or "I_U_Q_O"... and then you must maintain comments to explain what the field is supposed to contain.
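The difference in practice, using a hypothetical quantity value:

```python
# Under the abbreviation convention, the meaning lives in a comment
# that has to be kept in sync by hand:
UslQtyOrd = 12  # usual quantity ordered for this product

# With a self-describing name, the comment becomes unnecessary:
usual_order_quantity = 12
```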
There are grammatical mistakes, and then there is sloppiness. Only the second makes me disregard someone's comment.
> I will lose the credibility of my message if there is too much mistake...
The correct way to write this is "if there are too many mistakes", because mistakes are countable and plural. And it's fine to make grammatical mistakes if English is not your native language. You can only get better by practising :-)