The garbage traffic came from about a hundred thousand
infected servers, most notably in LeaseWeb B.V.,
Hetzner Online AG, PlusServer AG, NFOrce Entertainment
BV, Amazon and Comcast networks. That said, the attack
was distributed evenly across thousands of hosts and none
contributed more than 5% of the total volume.
I used to host a lot with Hetzner, and while quite expensive, they mostly responded to these kinds of things very quickly and with a certain level of technical competence (which definitely cannot be said of every hosting provider). Also, I'm quite surprised not to see OVH in there, as their network has a kind of "reputation" for these things...
Fighting back would've been a little easier if the abuse
departments of most of the mentioned companies didn't
process requests 9-5, Mon-Fri only. (Hours more befitting
a scuba-diving shop in the Vatican.)
Business as usual, I would say... although I don't scuba-dive...
After playing with various (note-taking) applications I found all of them severely lacking (for several reasons). I was always on the lookout for the one-size-fits-all application, which I obviously never found.
Realizing that there is no such system/app, I split things out:
* Important stuff as well as trivia -> CalDAV... believe it or not, CalDAV beats most other systems/apps out there: it's accessible on almost any device, you usually have a wide variety of applications to edit your "calendar events", and you can use different calendars for important items vs. trivia
* Stuff you read on the internet -> obviously (synced) bookmarks (Firefox, Chrome, Opera and others have built-in sync)
* Ideas, plans, drawings -> A5 pen-and-paper notebook (most people will advocate Moleskine, I prefer Leuchtturm notebooks; to each his/her own)
* Research, papers, references -> good old text files, index + txt + pdf + bib (vim + vimwiki + git + some zsh alias like wiki="cd ~/wiki/; git pull; vi index.wiki; git commit -a; git push; cd -")
So far, this works quite well, although I have to admit that while separation is king, it also hinders creativity at times. So I'm slowly starting to integrate other things into the wiki (writing a Firefox bookmark and CalDAV importer/parser, thinking about scanning/digitizing notebooks...) to be able to cross-reference things. The long-term goal is to create a visualization that allows me to visualize (duuuh) all this data in different ways (especially useful for research and connecting the dots).
Hope this helps, and I would really be interested in how others manage this, especially regarding research, papers etc. (Mendeley and others just aren't flexible enough for me...).
I use Gitit [1] as my personal wiki for note-taking. I've been pretty happy with it so far, as it uses the excellent Pandoc as the backend. I had not heard of Vimwiki until now; can you tell me your favourite features?
* links (to other wiki pages and content): move the cursor over a link and <Enter> will open a wiki page, a link in the browser, an image in an image viewer, a pdf in ... all from your console
* manages todo lists (including automatic status-indicator updates for sublists: [.]->[o]->[O]->[X])
* headers (mostly useful when exporting to html)
* table creation and management
Overall a very lightweight and tightly integrated vim plugin. Gitit looks quite interesting too, though; I might give it a try.
After years of writing everything in casebound A4 notebooks, I am currently experimenting with B5 size (in between A4 and A5). It's a bonny little size, still plenty of space on the page and fits on shelves better.
Yip, I tried out vimwiki a while back and have stuck with it. Pretty simple to set up, and yes, a git commit generates the html files and rsyncs them up to the internet. Pretty bombproof.
This axe looks really awesome, physics for the win.
Also, instead of using an old rubber tire, I highly recommend building a variable length, tensioning chain, much like this one: http://www.youtube.com/watch?v=wrLiSMQGHvY
Makes chopping wood so much more fun.
And then, there is also the stikkan: http://www.stikkan.com/
Perfect to hang up next to your fireplace for some more fine-grained wood chopping, cutting larger pieces into smaller ones.
It amazes me when someone makes improvements to a job as old as splitting wood. Millions (billions?) of people have been doing it for thousands of years, and there's still room for improvement.
Chopping firewood takes up so much labor that there have always been specialised tools; there is no such thing as a "normal axe". An axe for felling trees is different from an axe for splitting firewood: there are differences in the shape, width and weight of the axe head, and you wouldn't want to use one for the other if you're doing it more than once a year. And a carpenter would use yet another type of axe than those.
And that's not a modern invention, it goes back for centuries. I'd guess that even stone age flint tools have been specialized in similar ways.
True. It's also worth noting that axes intended as weapons are also very specialized. Indeed, there are multiple types, depending on just how you want to kill people with your axe.
This isn't actually a new improvement, just a 'hardware' implementation of a really old technique. You can get the twisting action with any normal axe, even if you're a barefoot girl:
I found it entertaining, mostly for the interesting use of language.
"Once in a while he found new axes at the hardware store. They were proclaimed to give greater striking power and strength through added weight and a variety of shenanigans to the sides of the blade."
It looks interesting. I have to wonder if the sideways action of this axe is tough on the wrists? I know with a regular axe, when you get a bad hit and the axe goes sideways, it's very unsatisfying - not to mention slightly jarring to the wrists and arms.
I use a maul (about 5lb head) to split wood (mostly elm since a neighbor cut down dozens of elm trees in the attempt to stop Dutch Elm disease from propagating) and the best motion I know of is to lift it straight overhead with hands spread wide. Then accelerate it downward while sliding the hand closer to the blade down to the base. Focus on coming down in a line through the center of your body. At the moment of impact, both hands are close together and your grip is just tight enough to hold onto the maul, with arms and shoulders relaxed.
It substantially reduces stress on your body and you still maintain good control. Feels like a Kendo "shomen uchi" strike.
Isn't the collected water, like condensed water, free of any kind of salts etc. that would naturally occur in ground/drinking water? Shouldn't it then be unsafe to drink large amounts of it (much like it is unsafe to drink large amounts of salt/sea water, due to the saline imbalance)?
Everyone needs to eat salt, and does. If there isn't any salt in your water, then you need slightly more salt in your food. But only very slightly more; drinking water is not very salty.
Just to be clear, I meant that water collected from pure air should have no salt in it at all, so drinking large amounts of it would be unsafe, as it would draw those salts out of your body (osmosis and diffusion). It's just like how drinking large amounts of seawater might increase your blood pressure, but more importantly, it's very hard for your body to get the salt out of your system: the urine your kidneys produce is not saltier than seawater, so you need more water than you can drink (with seawater) to flush it out again.
So my original question still stands, is it unsafe to drink large amounts of water collected from pure air and do they have to take care of that in any way? Or will simply eating salty food fix it? I still have not found an answer in the article.
It's dangerous if you're dehydrated and drink a lot of soft water (that's the term for water with low/no mineral content) at once, otherwise it's okay as long as most minerals come from the diet.
Tap water in most places has a GH/KH of 0 (to avoid clogging pipes with carbonates) and nobody dies from it.
Specifically, if your body is low on electrolytes it will hold water until it gets them and that holding of excess water causes problems. However, almost any amount of food is sufficient to prevent that.
However, in this case I tend to doubt the water will be all that pure due to dust; I'd guess it will actually have a decent mineral content. Commercial water purifiers often produce mineral-free water and the only issue I have heard of in connection with this (at least as long as you are eating any food) is that most people find that the resulting water tastes bad. Adding minerals back is not difficult at all.
Edit: It sounds like it is a little more complicated than that; the WHO paper cited by the Wikipedia article sbierwagen linked is a good read, with 16 pages on this topic. One thing they mention as a potential issue is that cooking food in low-mineral-content water can leach minerals from the food. If the food is then consumed but the water isn't, that would lower mineral intake. (Discarding the water used in the first cooking of some beans is necessary because it removes toxins from the beans, but those beans can be a good nutrient source, so this could still potentially be an issue where water is scarce.) The Wikipedia article can be skipped, though, so here is a direct link to the paper:
http://www.who.int/water_sanitation_health/dwq/nutrientschap...
Drinking water is supposed to be good for your health if the salt content is between 30 and 50 parts per million. To get into the details, the salt content present in water is supposed to be a source of some minerals which you might not be able to get from food. Those can also be supplemented with vitamin tablets, but even then distilled water is not ideal drinking water.
The issue with deionized water isn't what your GI system will or will not absorb. The issue is that the water may absorb minerals and nutrients from you.
I have to imagine that this is easily combated if it proves to be a significant factor. Worst-case scenario: now that you have water that doesn't make you violently ill, you have more energy to farm/eat nutrient-rich foods?
We are talking about trace quantities of salts though. You could get around any measurable problem by passing the water through a large box full of pebbles. Also, water is for agriculture as well as drinking, so this should increase the stability of food supply anyway.
I'm waiting for the first system that is the equivalent of C++.
Not gonna happen anytime soon. C++ is much more than just a programming language, it is an entire "ecosystem". You have toolchains, libraries, drivers, APIs, existing software stacks, all written in C++ and able to interface with C/C++ directly, i.e. no translation layer needed. You're not just gonna replace that with a new language.
Sure, Halide may be seen as just some syntactic sugar (much like quite a few bits and pieces of C++11), but it actually provides you with a different level of abstraction than, say, OpenCL, MPI or OpenMP, where you are very specific about e.g. the level of concurrency (which heavily impacts the design of your algorithm), while Halide tries to almost completely separate algorithm and scheduling.
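To make that separation concrete, here is a sketch in the spirit of the 3x3 box blur example from the Halide paper (it needs the Halide library to compile, and the particular schedule shown is just one of many you could try; the function and variable names are illustrative):

```cpp
#include "Halide.h"
using namespace Halide;

// Algorithm: a 3x3 box blur, defined purely functionally.
Func blur(Func input) {
    Func blur_x("blur_x"), blur_y("blur_y");
    Var x("x"), y("y"), xi("xi"), yi("yi");

    blur_x(x, y) = (input(x - 1, y) + input(x, y) + input(x + 1, y)) / 3;
    blur_y(x, y) = (blur_x(x, y - 1) + blur_x(x, y) + blur_x(x, y + 1)) / 3;

    // Schedule: tiling, vectorization and parallelism are chosen here,
    // without touching the algorithm definition above.
    blur_y.tile(x, y, xi, yi, 256, 32)
          .vectorize(xi, 8)
          .parallel(y);
    blur_x.compute_at(blur_y, x)
          .vectorize(x, 8);

    return blur_y;
}
```

Swapping in a different schedule (or none at all) leaves the algorithm untouched, which is exactly what you cannot do once an OpenCL or OpenMP decomposition has been baked into the code.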
I maybe wrote it badly. Skrebbel said it a lot better. It's about providing something that's on a truly higher level.
It's nice that Halide attempts to separate the algorithm and the scheduling; however, as of now this is possible in OpenCL too. The implementations are just so bad at it that it's not really useful.
As a computer vision researcher, I find this very interesting, although I have yet to understand how they want to generalize highly complicated optimization patterns (access order, locality, pipelines, platform limitations...). Some algorithms (other than the shown blur filter) require quite complicated access patterns on the image data and can only be hand-optimized most of the time (which doesn't imply that they would not benefit from general optimization at all, just that they might be way faster when hand-optimized). Still, if Halide produces faster code for some cases (e.g. filter operations, amongst others), it will be worth its salt.
I went to one of the SIGGRAPH talks they did a couple years ago.
The theoretical plan is basically to write an optimizer that can intuit good schedules, while recognizing that they don't yet have a good idea of how to do this. I think they can currently run some ML algorithms that churn through a bunch of different schedules and find the one most fit for a problem, but it's rather brute-force and slow at the moment.
That said, the conceptual distinction of separating the algorithm from the scheduling also allows you to tune scheduling by hand much more easily than would be possible otherwise.
Since this is a language and a compiler, my guess would be the answer to your question is: the compiler will optimize for the underlying platform. The whole point of Halide is stated in their abstract: "... make it easier to write high-performance image processing code" which is the exact opposite of "hand optimization". Halide allows developers to express what to do in a powerful, domain specific language - the compiler takes care of the "how".
This approach makes a lot of sense: abstract the annoying low level architecture details. They have a lot of targets which is fantastic: x86/SSE, ARM v7/NEON, CUDA, Native Client, and OpenCL. Let the architecture specialist worry about the architecture specifics. The disadvantage: the achieved performance then depends on quality and wisdom of the compiler. But once certain things are optimized for a specific architecture, every user will benefit.
How they do it on the compiler end of things, I'm not sure. There are a number of techniques. Among the simpler is auto-tuning. There is also a new term: "copious-parallelism" [0]. It acknowledges that to achieve performance portability across platforms, algorithms must offer explicit ways of parametrization and tuning to adapt to different platforms. I think this is the right concept but believe that it could be implemented within the compiler. The domain specialist should not have to think about those things.
The paper was specifically about writing your schedules by hand, not the compiler doing them automagically, which is a huge pipe dream at any rate. For the programs in this class, you are looking at a few orders of magnitude in performance difference between schedules, which is why the programmer needs control, so they can be guaranteed the performance they are expecting. Compilers only reliably optimize at the few-percentage-points, nice-to-have level.
You're right, I completely misunderstood the purpose of Halide. I read up on it and I see now that they simplify how developers can do the copious implementation by hand. The schedule must be specified by the developer; the compiler doesn't do that.
The only non-FreeBSD systems I have to deal with these days are my cellphone (Android) and my TV (funnily enough, also Android). I made the switch from YOUR_FAVOURITE_LINUX_FLAVOUR after having to deal with (st)Ubuntu way too much at work.
Nonetheless, as much as I prefer running FreeBSD on any kind of server system (as I have been for years), after using it on my (fairly new) laptop for a couple of months now, I really wish there was better (new and shiny) hardware support (a commonly acknowledged FreeBSD deficiency, so yeah, I knew what I was getting myself into). Granted, I only bought it because my old laptop was stolen at work, and its belonging to the Haswell family did not really help (no internal WiFi [Intel IWN 7260], no hardware-accelerated graphics [HD4400] and so forth...). But hey, who am I to complain; time to get hacking, which I suppose is more in the spirit of the BSD culture anyway, and I'm not going back to NIX.
So yeah, glad this book got updated, most likely picking up this new edition...
I wish I could upvote this more than just once. Using 'struct' instead of 'class', combined with templated "pure functions" [1] when applicable, makes for some really readable/maintainable/extensible C++ code. C++, although this is often overlooked, is a multi-paradigm programming language (the OOP proponents want you to believe otherwise ;-). Especially, as mentioned in the article, the observation that if you're honest you almost never really need private class members (which might later force you to create set/get-ters) really resonates.
These days, I prefer to create an interface using the PIMPL idiom [2], completely hiding the underlying implementation, whilst the implementation itself preferably consists of data structures and collections thereof (hence 'struct') plus some templated functions to modify them. This also resonates with an old C mantra: design the data structures first, then model the functions after their behaviour, or something similar [for which I just cannot find a quote].
My main gripe with using 'struct' in favor of 'class' is probably the friction with other programmers, who might become confused as to why you are doing such weird things...
So, is anybody here actually active on academia.edu? I'm asking since, as a computer vision researcher (3D reconstruction, obstacle recognition/identification, augmented reality, ...), I am constantly looking for scientific resources, and to be frank, it is quite annoying because it can become really time-consuming.
So a genuine question: is the site worth signing up for? I remember when Mendeley just started, there was lots of excitement about it, mainly because it was sort of marketed as the last.fm of research (giving you good recommendations for your scientific interests), but then it turned out to be just meh (getting bought by Elsevier didn't help much ;-).
Also, on a side note, are there any other sites doing something similar (like a reddit/HN for <insert your research interests here>) that are actually any good? I'm sure there must be a lot of other researchers active on HN...
I've been a regular user of Mendeley, not for recommendations--that's what journal TOC alert services are for--but just for managing my bibliographies and preprint PDFs. I'm allergic to EndNote, and I got tired of Zotero's sluggish speed (though I haven't tried it in a few years). Mendeley has been a nice replacement.
(All that said, I really do hope Zotero does well, since it's open-source and not owned by a major commercial publisher.)
We're trying to build a social learning network for science that aims to provide a good discussion, learning and networking environment. Here's the link - http://functionspace.org
I do not know where this op-ed piece was intended to appear, but I think you should have written it anyway, stating just this feeling of yours. There was/is/will be so much media frenzy about this that every article (even if you think you really had nothing useful to say, which you actually do), op-ed or not, that does not resort to sensationalist headlines is really helpful.
I seriously considered it -- I even started trying to write a "there's nothing new here, the NSA has always been spying on us" editorial -- but I just couldn't make it work.
Edit: formatting