I totally automate my roles on projects all the time so I can move on to more interesting things. I guess you mean that no one wants to be fired, but I don't see how that can result from automating one's work.
Also, I HATE doing repetitive things. Some people seem to like it though. To each their own, I guess. Reminds me of https://youtu.be/wNVLOuQNgpo
We all believe our job is so challenging and has such special requirements that it _can't_ be automated. It requires someone with the kind of experience learned with wisdom over a long time. Blah blah blah.
This means that 1. my overall quality improves, and my bosses always liked that (doing things by hand is more error-prone, misses deadlines, etc.)
2. I can go on holiday knowing my company doesn't desperately need me
3. I can spend the freed-up time actually innovating and bringing more value to the company/product
The problem is not automating yourself out of a job but not being able to leverage the newly gained capacity.
My way helped me succeed. I took my skills and my achievements (which I made in my R&D time) to another company and got more money, and then I did it again and got more money again.
Right. Presented with efficiency gains, firms tend to increase profit, not wages. One way to change that is to give workers more bargaining power through market shifts or unionization.
All the productivity gains are first transferred to the consumer (because of market dynamics) and then (by the market winners) to shareholders. Workers' wages are not tied to productivity; how the company is internally organized is.
Just a heads up, your JavaScript doesn't work for me in Chrome because it assigns to a global `history` variable. `window.history` is read-only, so the JS errors out. If you scope your function or rename the variable, it should work. Nice challenge! Good luck!
Funny, Sublime was my gateway drug to Vim. It has "Vintage Mode" (which gives you Vim keybindings), which I enabled just for macros, since they are so powerful. After a month of going directly to insert mode I started accidentally picking up more and more random keys. Eventually I was so fast in normal mode that it just didn't make sense not to use Vim, since everything else I use is in the terminal anyway.
Not sure of the original intent, but when I started using Rust I used Rust By Example to find snippets for things like file I/O, where I get the gist but want to quickly see the pattern or the parts of the stdlib involved.
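For instance, this is the sort of snippet I mean (my own minimal sketch, not copied from Rust By Example; the file name is made up for the demo):

```rust
use std::fs;
use std::io;

// Read an entire file into a String: the one-liner most snippets reduce to.
fn slurp(path: &str) -> io::Result<String> {
    fs::read_to_string(path)
}

fn main() -> io::Result<()> {
    // Hypothetical file name, just for the demo.
    fs::write("demo.txt", "hello\n")?;
    println!("{}", slurp("demo.txt")?);
    fs::remove_file("demo.txt")?;
    Ok(())
}
```

Seeing `fs::read_to_string` once is worth more than guessing which of `Read`, `BufReader`, or `File` you're supposed to start from.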
The Rust docs are great, but having a starting point can be super welcoming so you aren't poking around blindly at different types that might be relevant.
I can't help but feel we're all underestimating boredom in this scenario. Yeah, you have to survive, but eventually things will be stable enough for there to be SOME downtime, with very few options for entertainment. When sanity is involved, boredom is a massive problem and sex might be one of the easiest forms of entertainment to arrange depending on what constraints were involved when planning the mission.
Humans have an amazing capacity for addiction. It may be easy to say no to micropayment games, but "products that engender addiction" is a very broad and fuzzy category.
For any other casual viewers unfamiliar with but interested in economics, the 'r' refers to the rate of return on capital and 'g' refers to the growth rate of the economy. This section of the Wikipedia article on his book seems to give a quick and decent high-level overview that puts this (and a few other comments) into perspective: https://en.wikipedia.org/wiki/Capital_in_the_Twenty-First_Ce...
The whole book is great. It may be ~700 pages, but it reads more like a story than a heavy economics book. If you're interested enough to comment on this issue, it's definitely something you'd enjoy reading.
Veering a bit offtopic, but does anyone have any pointers to important recent work on parsing? There are a lot of papers out there and I don't know how to sift through them. I've heard the "parsing is solved" line before, but so much of my time is spent doing some type of parsing that even incremental improvements are extremely interesting.
It really depends on the kind of parsing that you're doing.
Full context-free grammars are supported by "generalised parsing". The older stuff is GLR by Tomita, Earley parsing, and the CYK algorithm. The newer stuff based on Tomita is the particular rabbit hole I stuck with for a while. I read about SGLR (scannerless GLR), which eliminates separate lexers, and GLL, which is like GLR but based on the LL algorithm. The people who do GLL research also improved SGLR, with better speed on right-nulled grammars. Then there are the SGLR improvements with disambiguation filters, automatic derivation of error recovery with island grammars, etc. The disambiguation filters include a kind of negation, making the current implementation of SGLR for SDF capable of parsing Boolean grammars, which are a superset of context-free grammars.
Anyway, there's more than context-free grammars. Definitely look into data-dependent grammars! They can define a lot of interesting things, like network protocol formats where you parse the length of the payload, and based on that length you know how much input to interpret as the payload before you read the footer of the message. You write all of that in a declarative way and get a nice parser generated from it.
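To make the length-then-payload idea concrete, here's a hand-rolled sketch in Rust (the framing format is invented for illustration; the point of data-dependent grammars is that you'd declare this instead of coding it by hand):

```rust
// Parse a toy message: [1-byte length][payload][1-byte checksum footer].
// A data-dependent grammar would let you write "read `len`, then take
// `len` bytes" declaratively; here we do the dependency manually.
fn parse_message(input: &[u8]) -> Option<(&[u8], u8)> {
    let (&len, rest) = input.split_first()?;
    let len = len as usize;
    if rest.len() < len + 1 {
        return None; // not enough bytes for payload + footer
    }
    let payload = &rest[..len];
    let footer = rest[len];
    Some((payload, footer))
}

fn main() {
    let msg = [3, b'a', b'b', b'c', 0xFF];
    let (payload, footer) = parse_message(&msg).unwrap();
    assert_eq!(payload, b"abc");
    assert_eq!(footer, 0xFF);
}
```

The grammar formalism moves that `if rest.len() < len + 1` bookkeeping out of your code and into the generated parser.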
There is so much more, but I think I should stop now :)
I can probably help if you can narrow things down a bit. Are you parsing deterministic languages (e.g. for programming languages), general context-free languages, binary file formats, or something else?
Because parsing is a field of little active research interest where most of the work happened more than 20 years ago, there are a lot of techniques from the 70s, 80s, and 90s that are relatively unknown.
I'm most interested in deterministic languages. Non-deterministic context-free languages would be extremely interesting as well but more out of curiosity than an applicable need. Thanks!
For deterministic languages, hand-written recursive descent is usually the choice even today, because of its simplicity and ease of error reporting.
There are exceptions, but relatively few production compilers use anything else, and most of the innovations in parsing provide relatively little value in this space: they tend to focus on better expressing complex languages rather than on improved ways of expressing/capturing errors, and it's the latter that would provide the most benefit for deterministic languages.
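To illustrate why error reporting is the selling point, here's a toy hand-written recursive descent parser in Rust (the grammar is invented for the demo): at any failure point you know exactly what you expected and where, so good messages come almost for free.

```rust
// Recursive descent for the toy grammar:
//   expr := "(" expr ")" | "x"
// Returns the offset just past the parsed expression, or a human-readable
// error naming exactly what was expected and where.
fn parse_expr(input: &[u8], pos: usize) -> Result<usize, String> {
    match input.get(pos) {
        Some(b'x') => Ok(pos + 1),
        Some(b'(') => {
            let after = parse_expr(input, pos + 1)?;
            match input.get(after) {
                Some(b')') => Ok(after + 1),
                _ => Err(format!("expected ')' at offset {after}")),
            }
        }
        _ => Err(format!("expected '(' or 'x' at offset {pos}")),
    }
}

fn main() {
    assert!(parse_expr(b"((x))", 0).is_ok());
    assert_eq!(
        parse_expr(b"(x", 0),
        Err("expected ')' at offset 2".to_string())
    );
}
```

Doing the same with a table-driven generated parser typically means mapping opaque states back to user-facing messages, which is exactly the pain hand-written parsers avoid.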
> Veering a bit offtopic, but does anyone have any pointers to important recent work on parsing?
Parser combinators are a relatively modern topic in parsing. The first research into the topic dates to the late 1980s, but parser combinators became popular after Parsec [0], a practical implementation of parser combinators in Haskell. These ideas have since been borrowed by many implementations in different languages.
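The core idea is small enough to sketch from scratch (Rust here rather than Parsec, and the names `literal`/`then`/`or` are my own): a parser is a function from input to an optional (value, remaining input) pair, and combinators build bigger parsers out of smaller ones.

```rust
// A parser takes input and returns the parsed value plus the
// unconsumed rest of the input, or None on failure.

// Match one expected character.
fn literal(expected: char) -> impl Fn(&str) -> Option<(char, &str)> {
    move |input| {
        let mut chars = input.chars();
        if chars.next() == Some(expected) {
            Some((expected, chars.as_str()))
        } else {
            None
        }
    }
}

// Sequence two parsers, pairing their results.
fn then<A, B>(
    p: impl Fn(&str) -> Option<(A, &str)>,
    q: impl Fn(&str) -> Option<(B, &str)>,
) -> impl Fn(&str) -> Option<((A, B), &str)> {
    move |input| {
        let (a, rest) = p(input)?;
        let (b, rest) = q(rest)?;
        Some(((a, b), rest))
    }
}

// Try p; if it fails, try q on the same input.
fn or<A>(
    p: impl Fn(&str) -> Option<(A, &str)>,
    q: impl Fn(&str) -> Option<(A, &str)>,
) -> impl Fn(&str) -> Option<(A, &str)> {
    move |input| p(input).or_else(|| q(input))
}

fn main() {
    let ab = then(literal('a'), literal('b'));
    assert_eq!(ab("abc"), Some((('a', 'b'), "c")));

    let a_or_b = or(literal('a'), literal('b'));
    assert_eq!(a_or_b("bx"), Some(('b', "x")));
}
```

The appeal is that the grammar and the parser are the same program, so you get ordinary abstraction (functions, modules, types) instead of a separate generator step.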
It is important to distinguish between PEG and packrat.
PEG is a model for recognizers, distinct from traditional grammars, whose theoretical model is usually based on generating strings rather than recognising them. (Yes, this is a bit backwards.) The most distinctive feature of PEGs is the ordered choice operator - traditional context-free grammars and regexes use unordered choice, but ordered choice is a closer model of how recursive descent parsers work in practice. Ordered choice naturally leads to implementations that backtrack less than common regex matchers, e.g. LPeg.
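A quick sketch of why ordered choice matters (Rust; `ordered_choice` is my own toy function, not LPeg's API): because alternatives are tried in order and the first match wins, an earlier alternative can shadow a later one, which cannot happen with a CFG's unordered choice.

```rust
// PEG-style ordered choice: try alternatives left to right and commit
// to the first one that matches, returning its index and the rest.
fn ordered_choice<'a>(alts: &[&str], input: &'a str) -> Option<(usize, &'a str)> {
    alts.iter()
        .position(|alt| input.starts_with(alt))
        .map(|i| (i, &input[alts[i].len()..]))
}

fn main() {
    // "in" shadows "int": on input "int" the first alternative wins,
    // so the "int" alternative is unreachable. A classic PEG gotcha.
    assert_eq!(ordered_choice(&["in", "int"], "int"), Some((0, "t")));
    // Reordering fixes it: now the full keyword matches.
    assert_eq!(ordered_choice(&["int", "in"], "int"), Some((0, "")));
}
```

This is also why PEGs never need disambiguation: any string has at most one parse, by construction.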
Packrat parsers are based on a clever memoised PEG matching algorithm that avoids exponential backtracking. They spend O(n) memory to get O(n) parsing time, so they are reliably fast, but the memory cost makes them a poor fit for long inputs. They are also effective in a pure, lazy setting like Haskell.
Yes, I implemented exactly this approach. But for binary expressions I use Pratt parsing; it is faster, and it's easier to construct left-associative nodes.
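For anyone curious, the Pratt trick for left associativity is tiny. Here's a minimal sketch (Rust, single-digit operands only; the binding powers are invented for the demo): giving each operator a right binding power one above its left makes the loop produce left-associative results with no extra bookkeeping.

```rust
use std::iter::Peekable;
use std::str::Chars;

// Minimal Pratt evaluator over single-digit operands and '+'/'*'.
// min_bp is the minimum binding power this call will accept.
fn expr(input: &mut Peekable<Chars<'_>>, min_bp: u8) -> i64 {
    let mut lhs = input.next().unwrap().to_digit(10).unwrap() as i64;
    loop {
        let op = match input.peek() {
            Some(&c) if c == '+' || c == '*' => c,
            _ => break,
        };
        // Right binding power is one above left: left-associative.
        let (l_bp, r_bp) = if op == '+' { (1, 2) } else { (3, 4) };
        if l_bp < min_bp {
            break; // the caller binds tighter; let it take lhs
        }
        input.next(); // consume the operator
        let rhs = expr(input, r_bp);
        lhs = if op == '+' { lhs + rhs } else { lhs * rhs };
    }
    lhs
}

fn eval(src: &str) -> i64 {
    expr(&mut src.chars().peekable(), 0)
}

fn main() {
    assert_eq!(eval("1+2*3"), 7);
    assert_eq!(eval("2*3+4"), 10);
}
```

A real parser would build AST nodes where this folds into a number, but the binding-power loop is the same.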
Would someone mind fixing the typo in the title? It's AvanceDB, not AdvanceDB. Due to the bravado of a name with "advance" in it, it might actually make a difference to how it's perceived.