Hacker News | k_f's comments

A (weakly) stationary time series has no trend, no seasonality, and no changes in variance. With these properties, predictions are independent of the absolute point in time. The transformations that turn non-stationary series into stationary ones (usually differencing, log transformations, and the like) are reversible, so the predictions can be mapped back to the original time series.
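
To make the "reversible transformations" point concrete, here's a minimal sketch with a synthetic series (not tied to any forecasting library): a log transform stabilizes multiplicative growth, first-differencing removes the trend, and inverting both steps recovers the original series exactly.

```python
import math

# Synthetic non-stationary series: pure exponential (multiplicative) growth.
series = [100 * 1.05 ** t for t in range(10)]

# Forward transform: log (stabilizes variance), then first difference (removes trend).
logged = [math.log(x) for x in series]
diffed = [logged[i] - logged[i - 1] for i in range(1, len(logged))]
# For this series, every difference equals log(1.05): the result is stationary.

# Inverse transform: cumulative sum of the differences, then exp.
reconstructed = [series[0]]
acc = logged[0]
for d in diffed:
    acc += d
    reconstructed.append(math.exp(acc))
# reconstructed matches the original series up to floating-point error.
```

A forecast made on the (stationary) differenced series would be pushed back through the same inverse path: cumulate, then exponentiate.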

Treating time as a first-class component really just means factoring the absolute point in time into the model at training time. This only makes sense if the absolute time changes properties of the distribution in ways that cannot be accounted for with regular transformations. If that's the case, then we assume these changes cannot be modeled, and are thus either random or follow a complicated systematic pattern we can't grasp. In the first case, a NN wouldn't improve things either; in the second case, we either need to always use the full history of the time series to make a prediction, or hope that a complex NN such as an LSTM might capture the pattern.

In any case, I think one of the more compelling reasons to use NNs is not having to do preprocessing. The trade-off is that you end up with a complicated solution compared to the six or so easy-to-understand parameters a SARIMA model might give you. And the latter might even give you some interpretable intuition for the behavior of the process.


Our IMPALA also currently only supports discrete action spaces, because we wanted to exactly replicate DeepMind's implementation for benchmarking. In your case I'd suggest looking into our SAC implementation, which learned typical continuous-action benchmarks (e.g. Pendulum-v0) in a few dozen episodes.

Regarding code quality and ease of use, we follow a strict modular approach with separate components that can be tested individually. Component dataflow is defined at an abstract level, which makes it rather easy to create new components and algorithms. So, instead of having to adjust complex code structures with lots of intertwined behavior, you can usually just plug in another component that covers your use case.
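
As a rough illustration of the idea (the class and method names below are made up for this sketch, not the library's actual API): if every component exposes the same call interface, swapping one implementation for another doesn't require touching the surrounding dataflow.

```python
# Hypothetical sketch of a pluggable-component dataflow. Each component
# implements the same __call__ interface, so the pipeline never needs to
# know which concrete component it is running.

class Component:
    def __call__(self, inputs):
        raise NotImplementedError

class Preprocessor(Component):
    def __call__(self, inputs):
        # e.g. scale raw pixel observations into [0, 1]
        return [x / 255.0 for x in inputs]

class GreedyPolicy(Component):
    def __call__(self, inputs):
        # argmax as a stand-in for a learned policy
        return max(range(len(inputs)), key=lambda i: inputs[i])

def pipeline(components, inputs):
    # Abstract dataflow: feed each component's output into the next.
    for component in components:
        inputs = component(inputs)
    return inputs

action = pipeline([Preprocessor(), GreedyPolicy()], [0.0, 128.0, 255.0])
```

To change behavior, you'd subclass `Component` and swap it into the list, rather than editing the pipeline itself.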


For anyone wondering where that weird 1.27059 factor comes from: Amazon takes a 15% commission (plus a fixed per-sale fee) on book sales, and 1.27059 * 0.85 = 1.08(00015), so (practically) exactly an 8% price difference (or possible profit margin).
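
The arithmetic, spelled out (the fixed per-sale fee is left out here, since it doesn't scale with price):

```python
factor = 1.27059      # markup factor applied to the list price
commission = 0.15     # Amazon's 15% commission on book sales

net = factor * (1 - commission)   # seller's net, per unit of original price
margin = net - 1                  # the leftover profit margin

# net is 1.0800015, i.e. practically an 8% margin on the original price
```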


Actually, MM in Roman numerals is 2,000, not a million.


You’re right. Don’t know what I was thinking. Guess it’s an accounting-only thing.


Am I the only one triggered by the fact that they used blue USB 2.0 ports?


Thanks for your kind words and for your suggestion. I agree it's sensible to provide information on how to connect your problem space to our library. We have some more blog posts on the roadmap and might add that one as well (we had some information on this in the documentation, but it's currently outdated). Until then, I'd suggest taking a look at the source of our OpenAI Gym connector:

https://github.com/reinforceio/tensorforce/blob/master/tenso...

and the environment interface:

https://github.com/reinforceio/tensorforce/blob/master/tenso...


At least in terms of integration, TensorForce aims to be a "plug and play" library. However, RL is not at a stage right now where you can just plug an algorithm into any kind of problem and expect it to learn. Hyperparameter tuning is always necessary.

Still, TensorForce does provide pluggable implementations of state-of-the-art algorithms as well as runner utilities and environment abstractions to make it easy to connect your learning problem to it.
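
To give a feel for what "connecting your learning problem" typically involves, here's a hedged sketch of a custom environment. The method names (`reset`, `execute`) follow the general shape of such interfaces but are assumptions for illustration; check TensorForce's environment source for the actual signatures.

```python
# Illustrative custom environment: a trivial 1-D walk where the agent
# moves left or right and the episode ends at position 3.
# (Interface is a generic sketch, not TensorForce's exact API.)

class WalkEnvironment:
    def __init__(self):
        self.position = 0

    def reset(self):
        """Start a new episode and return the initial state."""
        self.position = 0
        return self.position

    def execute(self, action):
        """Apply one action (0 = left, 1 = right).

        Returns (next_state, terminal, reward)."""
        self.position += 1 if action == 1 else -1
        terminal = self.position >= 3
        reward = 1.0 if terminal else 0.0
        return self.position, terminal, reward

env = WalkEnvironment()
state = env.reset()
for _ in range(3):
    state, terminal, reward = env.execute(1)   # always step right
```

An agent/runner would then drive this loop: reset, act until `terminal`, repeat.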


I actually implemented pretty much the same layout (custom kernel, A/B partitions, but one each for the OS and the player software) for a similar use case (a background music player). We used AUFS for partition layering. In particular, our setup was as follows:

OS: SquashFS file system (ro) -> Config partition (ro/rw) -> tmpfs partition (rw)

Player software: SquashFS file system (ro) -> Data partition (ro/rw) -> tmpfs partition (rw)

The config partition is mounted read-only, all changes are written to the tmpfs partition. We then use a script to bulk write desired files to the Config partition (i.e. remount rw, copy files, remount ro). This limits the write time to a fraction of a second (these are only small text files).
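
The bulk-write sequence can be sketched like this (device and mount paths are made up for illustration; the commands are built as lists here so the sequence stays inspectable, but in production they'd be handed to `subprocess.run`):

```python
# Sketch of the remount-rw / copy / remount-ro sequence described above.
# Paths are illustrative placeholders, not our actual layout.

def bulk_write_commands(mount_point, files):
    """Return the command sequence to write `files` (src, dst pairs)
    to a normally read-only partition."""
    cmds = [["mount", "-o", "remount,rw", mount_point]]
    for src, dst in files:
        cmds.append(["cp", src, dst])
    cmds.append(["sync"])  # flush to the SD card before going read-only
    cmds.append(["mount", "-o", "remount,ro", mount_point])
    return cmds

cmds = bulk_write_commands(
    "/mnt/config",
    [("/tmp/app.conf", "/mnt/config/app.conf")],
)
```

Keeping the rw window this short (small text files only) is what bounds the corruption risk to a fraction of a second.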

As for the data partition, we remount the partition rw when we're updating the music files and check for filesystem consistency on bootup.

We haven't had a single system fail because of the SD card in over two years.


Neat! We're also using a SquashFS root file system. It's only 30MB total at the moment.

Right now there isn't a way to do remote system configuration changes through a UI (it's doable from the command line). Most changes happen through the "apps" deployed on those devices, and that configuration is treated as volatile and might get lost during power outages (but is then repaired later). If we decide to allow modification of the few system settings we have, I might treat them as a new A/B cycle and recreate the complete system with the changed settings on the other partition. That way I also get an automated fallback if anything is misconfigured and (for example) the device fails to connect to the new WiFi.


Thanks, that looks promising! Do you use it yourself, or just happen to know about it?


I had drinks with Loris Degioanni and Brendan Gregg after a conference one year (Monitorama). Getting to ask two legit heavyweights in the industry how to monitor containers at scale was definitely a high point for me. Also, Loris is an incredibly smart and entirely reasonable guy. I grilled him pretty hard and came away with the distinct impression that his tech is exceptional for what it is, and that the product pretty much sells itself as a result.

I don't currently use it; we stick with Prometheus + Kubernetes because my company isn't big on cloud services and doesn't have a need for it just yet.

Note: I have absolutely no relationship to him or Sysdig; I'm just a random tech guy occasionally impressed by good tech.


Thanks for sharing. I had no idea who Degioanni was, although I'm familiar with sysdig. Everyone knows Brendan Gregg I guess, especially FreeBSD/Solaris folks (DTrace and all).

I use it to monitor connections, connection speed and processes running on specific containers (usually Jenkins builders) and it's really good & easy to use.


Thanks again for the elaborate answer. It sounds like a great product with solid tech. I'll definitely be looking into it!

