Hacker News | someengineer's comments

One thing about Agile that I don't see mentioned much is that it gives bad managers a way to contain the damage done by low performers on their teams, though at the expense of high performers.

Every company has "cowboy" programmers who break shit and ruin their coworkers' lives with massive 1am check-ins, and a lot of Agile seems designed specifically to put up roadblocks to prevent this from happening. Managers love it because it provides an escape from dealing with problem employees directly. Most of the vehemently anti-Agile people I've worked with were grossly overconfident in their abilities and difficult to work with, which I think supports this idea.

Unfortunately, the only way for this non-management technique to be effective is to do it dogmatically with no deviations (otherwise the cowboys will always have some excuse for why it shouldn't apply to them), and this is where things start pissing off the actual talent.


A common pattern in embedded C is to use something like "goto fail" instead of assert, wrap your function calls in this sort of macro, and then do error handling in one place at the end of the function.

Apparently Apple is unaware of this technique...
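For anyone unfamiliar, here's a minimal sketch of the pattern (the CHECK macro and function names are illustrative, not from any particular codebase):

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical CHECK macro: on failure, jump to the single
   cleanup/error-handling block at the end of the function. */
#define CHECK(expr) do { if (!(expr)) goto fail; } while (0)

static int read_config(const char *path)
{
    FILE *f = NULL;
    char *buf = NULL;

    CHECK((f = fopen(path, "rb")) != NULL);
    CHECK((buf = malloc(4096)) != NULL);
    CHECK(fread(buf, 1, 4096, f) > 0);

    free(buf);
    fclose(f);
    return 0;

fail:
    /* All error handling lives in one place. */
    free(buf);            /* free(NULL) is a no-op */
    if (f) fclose(f);
    return -1;
}
```

Because every failure path funnels through one label, you can't end up with the duplicated-goto bug that caused Apple's SSL hole in the first place.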


I use that style too, but I've not found it useful for OpenGL. When you're developing your product, you want every OpenGL error to explode in your face, so you know it happened. In your final build, you probably won't ever check, because... well, what will you do if something happens? OpenGL errors are much more like ENOMEM than they are like ENOENT, and if you get one in production, you're basically stuffed. The best thing you can do is just carry on and hope that the driver ignores it correctly! (One advantage to working on code whose primary purpose is merely to display something on the screen: you can do stuff like that, and it's OK.)

There are exceptions to this general rule, and sometimes you do need to use glGetError in the normal run of things, and take action based on the result. But they are very much exceptions.
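The "explode in development, compile away in release" approach can be sketched like this (the stub glGetError is only there so the snippet stands alone; in real code you'd include your GL header instead):

```c
#include <stdio.h>
#include <stdlib.h>

/* Stub so the sketch is self-contained; a real GL header provides
   glGetError(), and GL_NO_ERROR is 0 in real OpenGL too. */
#ifndef GL_NO_ERROR
#define GL_NO_ERROR 0
typedef unsigned int GLenum;
static GLenum fake_error = 0;
static GLenum glGetError(void) { GLenum e = fake_error; fake_error = 0; return e; }
#endif

/* In development builds, blow up loudly on any GL error;
   in release builds, the check compiles away entirely. */
#ifdef NDEBUG
#define GL_CHECK() ((void)0)
#else
#define GL_CHECK() do {                                        \
        GLenum err_ = glGetError();                            \
        if (err_ != GL_NO_ERROR) {                             \
            fprintf(stderr, "GL error 0x%x at %s:%d\n",        \
                    err_, __FILE__, __LINE__);                 \
            abort();                                           \
        }                                                      \
    } while (0)
#endif
```

Sprinkle GL_CHECK() after suspect calls while developing, and ship with NDEBUG so production builds just carry on as described above.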


Okay, so as someone who is ramping up on C where would I go to learn all these common patterns? I could start going through github repos and start reading code but this seems very inefficient and I might pick up something that is actually a bad technique.


I like Zed Shaw's Debug Macros. Actually his whole book is great.

http://c.learncodethehardway.org/book/ex20.html


Intel is indeed moving to LLVM for this reason and being totally transparent about it.

From https://software.intel.com/en-us/blogs/2009/05/27/why-we-cho...

--- LLVM is distributed under the Illinois Open Source License, which is nearly identical to the BSD license and is very open for use both by open-source and proprietary projects. Some other projects, such as Open64, are licensed under more restrictive licenses like the GPL, which were not compatible with our own licensing model. ---

So, what RMS feared is exactly what's happening with Clang, and he's right in terms of pushing the GPL to prevent this sort of thing.

Now, whether or not this is actually bad is another question. Personally, I think this sort of bleeding between proprietary and open source ultimately improves the ecosystem as a whole.


I don't suppose there's any plans for a Word export feature, is there?

Unfortunately, a lot of consulting jobs require all docs to be in MS Word format... Infuriating, I know.


I don't have Word export on the immediate road map, but I'm not ruling it out. Sorry I can't give you anything more specific than that.


The point the article is making is that all the problems the anti-agile crowd complains about are less related to agile and more to a dogmatic adoption of specific tools. I think the point the author misses is the irony of the whole thing. The anti-agile people get annoyed at something like religious demands for 100% unit test coverage, so they instead adopt their own extreme, narrowly focused rules like refusing all TDD. This is how flame wars work.

I've seen agile methods be most effective at companies where software developers are a minority. Scrums and sprints can be very useful when you've got many outside interests hammering at you for various feature requests and bug fixes. Done right, it keeps the software group's priorities aligned with the company's.


Though everyone'll flame you about "the future", you're probably right.

If you read the PLOS One acceptance criteria:

http://www.plosone.org/static/publication

you'll notice that scientific merit doesn't show up in there. The official policy is that if the methods and analysis are sound, then it will be accepted regardless of how irrelevant the actual study is. This is well known in academia and has resulted in a generally negative opinion of the journal among publishing researchers.

edit: I'd also like to point out that this isn't the case for all PLOS publications.


Yeah, I touched a sensitive nerve. It seems "the future" is interpreted by some as "Moore's law applied to everything", with gross disregard for physics: no thermal, optical, or energy limits, etc.

Year 2000 has passed, and we're all still waiting for our flying cars.


The reason we don't have flying cars is economics, not physics.

Also, people tend to vastly underestimate what's actually possible. We haven't explored a lot of things that are achievable with our current level of technology (again, mostly because of economics). Moreover, our image processing algorithms are very crude. We're nowhere near efficient use of the information encoded in images (in the way a theoretical Bayesian superintelligence would be). A lot of things thought impossible become possible when you start throwing more and more compute at them. You can't break the laws of physics, but those laws are quite lenient.


Totally agree. His reply is an overly polite response to a rather snarky rant on why the Linux devs should just shut up and accept the author's massive merge. Giant projects like the Linux kernel just don't work that way.


No one in that thread is recommending the Linux devs take the monolithic GRSecurity patches wholesale. If you read the originally linked thread, Daniel explains why it can't be accepted that way, nor does he propose that it should be.

Rather, attempts to submit it in smaller patches have been met with disinterest, and security in general appears to be sidelined by the core developers, which has created a large disincentive for developers interested in getting GRSecurity upstreamed from even trying (again).


I still really can't figure out how you characterised that particular post as 'snarky'. You complain of 'massive politics', but you're contributing to it with heavy mischaracterisations like that, turning an apologetic, helpful, and informative email into 'a snarky reply'.


A single email example will always be missing a lot of context. Just because someone's tone is nice and friendly doesn't mean there isn't a ton of subtext to what is being said. I'll give a few examples:

1. Saying that since no one has yet "paid for a team of people to do it" then it "must not be worth doing"

2. Sarcastically putting "info leak" in quotes (see the KASLR post in my original email for context on info leaks)

3. Repeatedly saying, if you discover a problem, "I can help out with that" or "just let me know", when there is a long history of people doing exactly that and Linux core devs, including Greg K-H, largely ignoring them.

Etc. I could go on.

And this is all politics. I never said I was apolitical in the posts above. The whole reason people are saying it would take a team of people to submit patches is because politics.


> Saying that since no one has yet "paid for a team of people to do it" then it "must not be worth doing"

Except that it's much less declarative than you're stating ('kind of implies' is pretty far from 'must'), and even has an emoticon added to indicate commiseration: "kind of implies that no one thinks it is worth doing :(". I agree that context can be missing, but at the same time, you shouldn't be significantly changing the visible context like that - you seem to be more about projecting your own issues rather than reading what's on the page when you do that.


Right, I should take exemplary lessons of politeness and politics from Greg and Linus.


Where did I imply that you should pattern yourself after someone else? I'm working from your own complaints and behaviour. You're projecting again.

A nicely ironic reply, though - if you do actually have problems with the way they behave, why invoke their behaviour to defend your own?


Well, speaking of projection: I am not pointing to the lack of politeness, nor the politics, as the problem in itself here.

I remarked on his snarkiness simply because it is indicative of the problem: there has been a long history of dismissiveness during any discussion of upstreaming PaX/grsec-style mitigations. Since it is not being taken seriously, we will continue to enjoy the side effects for the foreseeable future.


Well, I wouldn't take them off whoever posts under the PAXTeam account to LWN.


Maybe the grsec people should better communicate the advantages. I suggest taking each CVE and listing whether it would have been mitigated by running a grsec kernel, and comparing that to something else (SELinux or whatever).


If there is a kernel privilege escalation, then SELinux can be disabled, as Spender loves to demonstrate (https://www.youtube.com/watch?v=WI0FXZUsLuI). GRSec does include its own MAC system as an alternative to SELinux, but that is only a small part.

PaX/grsec is in a different class of mitigation. I don't really know of any competitors besides other implementations of small subsets by different operating systems or hardware manufacturers.

To your other point, I don't think anyone who has been following Linux security for any amount of time thinks that Spender or PaX are in need of proving themselves.


> To your other point, I don't think anyone who has been following Linux security for any amount of time thinks that Spender or PaX are in need of proving themselves.

No major distro carries the patch, and the kernel devs don't want to merge it as it is.

A change in tactics is needed - make it easier for everyone to see how much better things with grsec are. The tweets are good, a summary of those tweets would be better.



To elaborate on why you can typically avoid derivative control for thermodynamic processes:

There's no natural resonance in the system caused by energy-storing elements. For instance, if you drop a cold egg into a pot of water at exactly 80 C, it won't overshoot and go up to 85 C before settling to 80 C. Mechanical systems on the other hand often have "spring-like" behavior that causes them to resonate at certain frequencies.
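To make this concrete, here's a toy PI controller (no derivative term) driving a made-up first-order thermal plant. The gains, time constant, and ambient temperature are all illustrative assumptions, not tuned values for any real system:

```c
#include <stdio.h>

/* Minimal PI controller: proportional + integral terms only. */
typedef struct {
    double kp, ki;      /* proportional and integral gains */
    double integral;    /* accumulated error */
} pi_ctrl;

static double pi_step(pi_ctrl *c, double setpoint, double measured, double dt)
{
    double error = setpoint - measured;
    c->integral += error * dt;
    return c->kp * error + c->ki * c->integral;
}

/* Toy first-order thermal plant: temperature relaxes toward ambient
   and rises with heater input. Purely for demonstration. */
static double plant_step(double temp, double heat, double dt)
{
    const double ambient = 20.0, tau = 50.0;
    return temp + dt * ((ambient - temp) / tau + heat);
}
```

Because the plant is first-order (no energy-storing "spring"), a well-damped PI loop settles on the setpoint without overshoot, which is exactly why the derivative term usually isn't missed.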

If you want to get hardcore with the control design then you could look into fuzzy logic controllers. It's common in high-end appliances: http://en.wikipedia.org/wiki/Fuzzy_control_system

