My dotfiles git repo is meant to be cloned in my home directory. It comes with this .gitignore committed in the repo:
/*
!/.vim/
/.vim/.netrwhist
Basically it ignores everything in my home directory, unless I explicitly `git add` it, which matches my workflow. For the few cases where I want to notice changes (like the entire ~/.vim/ subdirectory), I explicitly un-ignore it as you can see above.
The only downside I've experienced is that my bash PS1 prompt shows the status of the dotfiles repo (branch/dirty/etc.) in any directory inside my home dir. I've learnt to ignore it, and it doesn't interfere with cd'ing into an actual directory that's its own git repo.
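For anyone who wants to replicate it, the setup is roughly this (a sketch only; the `touch` stands in for a dotfile you already have):

```shell
# Make $HOME itself the dotfiles repo; the .gitignore whitelist means
# nothing gets tracked unless explicitly opted in.
cd "$HOME"
git init
printf '%s\n' '/*' '!/.vim/' '/.vim/.netrwhist' > .gitignore
# /* ignores even .gitignore itself, so the first add needs -f:
git add -f .gitignore
# Opt individual dotfiles in the same way:
touch .bashrc                 # stand-in; normally the file already exists
git add -f .bashrc
git status --short            # only opted-in files appear
```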
In the past I wrote a bash setup script for my dotfiles repository which pretty much does the opposite, symlinking a combination of shared and OS-specific directories and files into my home dir. One definite advantage of your technique is that no special setup script is required. I'm thinking I can obviate the need for splitting common and OS-specific dirs by detecting whether the OS is Linux or macOS within the scripts themselves and using an if statement to gate their execution or sourcing.
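Something along these lines (a sketch; the aliases and the macOS setting are just examples):

```shell
# In a shared ~/.bashrc: gate OS-specific bits on uname instead of
# keeping separate per-OS directories.
case "$(uname -s)" in
  Linux)
    alias ls='ls --color=auto'     # GNU ls
    ;;
  Darwin)
    alias ls='ls -G'               # BSD ls on macOS
    ;;
esac

if [ "$(uname -s)" = "Darwin" ]; then
  export HOMEBREW_NO_ANALYTICS=1   # example macOS-only setting
fi
```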
This concept is related to another HN personal favorite of mine: "Best thing in your bash_profile / aliases" [0]. Lots of interesting command-line shell optimization and slick hack ideas in there.
I do the same (dotfiles with symlinking script) and it's mostly great, but I'd recommend against OS-specific switching in the scripts and files themselves. It's the way I'd always done it previously, but I switched to git-branch-per-system and it's much better. The problems started when "chromebook linux" was different from "other laptop linux" and then "server linux" (not to mention OS X), and there kept being more and more messy logic. Separate git branches are way easier and have the benefit of a sort of "inheritance" of the shared stuff (via branching and rebasing).
I think that method requires git to scan all the files in the directory so it can then ignore them. The advantage of the “showUntrackedFiles no” method is that git will only look at the tracked files, which is much faster if you have a million files in your home dir, like I do. (Or so I believe.)
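For reference, the "showUntrackedFiles no" setup looks roughly like this (repo path and helper name are my invention):

```shell
# A bare repo holds the history; $HOME is the work tree. With
# showUntrackedFiles off, status never enumerates untracked files.
dotgit() { git --git-dir="$HOME/.dotfiles.git" --work-tree="$HOME" "$@"; }
git init --bare "$HOME/.dotfiles.git"
dotgit config status.showUntrackedFiles no
touch "$HOME/.bashrc"          # stand-in; usually already present
dotgit add "$HOME/.bashrc"     # only explicitly added files are tracked
dotgit status --short          # fast: only tracked files are examined
```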
I tried quickly looking into this to verify the claim, but haven't yet found anything to explain why this would be the case. Do you have any details about how or why git will scan everything in this case?
I did find a pretty neat Stack Overflow post on the subject of gitignore whitelisting [0]. If we can get to the bottom of the possible performance impact, it would be cool to add the info there.
You can use the `git check-ignore` command to check if a certain directory is ignored. I used it in my zsh config to suppress the status of ignored directories in my prompt.
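A sketch of that kind of check (the function name is illustrative; works in bash or zsh):

```shell
# Only render git info in the prompt when the current directory is
# not ignored by the enclosing repo. check-ignore exits 0 when the
# given path is ignored.
prompt_git_info() {
  if git check-ignore -q . 2>/dev/null; then
    return    # directory is ignored: suppress git status
  fi
  git rev-parse --abbrev-ref HEAD 2>/dev/null
}
```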
It originated as a way for people who aren't familiar with CLI to install things. People have now been trained to expect this level of simplicity. I've worked with people that will blindly copy and paste these lines into terminals, having absolutely no idea what they do, and even blindly type in sudo password when prompted. It's basically the worst of all worlds from a security perspective. In my opinion this should be burned to the ground.
Normally when I see this I will manually download the bash script, read through exactly what it does, and manually type in each command instead of running the script directly. This way I know what it is doing, and it can't hide command output by piping into /dev/null and doing something without my knowledge.
Downloading and reading the bash script isn't a solution either. Even if it isn't deliberately malicious, those things usually want to just puke files into some arbitrary corner of your system or homedir - really the author just made a "works fine on my system" crutch rather than doing the actual work of packaging. Then to double down, they'll often add some "clever" hooks to self-auto-update your local junkheap from their git nightly, because releasing deliberate versions doesn't "move fast and break enough things".
I either want to pull from the standard distribution repositories and rely on things updating automatically, compile from source tarballs with explicit version numbers, or at the very worst have a path-independent binary tarball that can be unpacked anywhere. If you can't manage any of these, then your project simply isn't ready for general availability.
I've seen scripts that "clean up" with an 'rm *.o' statement (for example), which strongly suggests the potential for total disaster if you blindly run it from the wrong directory. Glad I'm not the only one with script paranoia.
Especially since it is possible, server-side, to distinguish merely downloading a script from actually piping it to a shell[0], you should never do this even if you've examined the script first.
I don't know why you are being downvoted, but I'd like to know. I have a very official and very governmental API that isn't properly set up, and the official doc says to use curl -k to talk to it. I had a long argument with one of the devs, with a PoC and a live example, about how it was a bad idea, especially considering how easily it could be fixed, but... the 'feature' is still there.
I suppose the next $2,000 a day consultant will get them the memo.
It's fair to say that the technique described by SneakyCobra is amazing. I previously used it to manage my own dotfiles, but there are still a few problems with the approach:
* Initial setup can be tricky, even for experienced users
* Incorrect use can potentially destroy your home directory
So, I wrote a little utility called "SDF: Sane dotfile manager" that makes the technique used by SneakyCobra approachable to a complete novice and hence more reliable to use.
Same - git repository (I presume), and GNU stow. Stow is a good way to maintain symlinks, but I see the value of not having that additional dependency. The OP's method is nifty, I might try it out.
It's orthogonal, isn't it? You keep your dotfiles in a repo, then have stow "install" the checkout/clone thereof.
EDIT: Also on stow - I tried it a couple times and it never seemed flexible enough. Can it handle things like keeping ~/.config/nvim and ~/.vim in the same stow directory? (And for that matter, can it link them together? I like having the same config for vim and neovim)
Re: your edit, yeah it can! Set up a folder, say 'vim' for your example, and then inside that folder you have a '.vim' directory and a '.config/nvim' directory.
Then when you run `stow vim` from the parent of the original 'vim' folder it will symlink everything in there to ~.
You can even have more folders that have a '.config/foobar' inside of them, and when you stow those it will all work itself out nicely!
Edit: You can find examples of this in my dotfiles https://github.com/AnthonyWharton/dotfiles/ (I use stow, if you didn't guess :P). For example, look at the 'i3' folder and the 'polybar' folder; they both add things to '~/.config'.
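Concretely, the layout and invocation look something like this (a sketch; paths invented, and assuming GNU Stow is installed):

```shell
# One stow "package" named 'vim' covering both vim and neovim paths:
mkdir -p ~/dotfiles/vim/.vim ~/dotfiles/vim/.config/nvim
touch ~/dotfiles/vim/.vim/vimrc ~/dotfiles/vim/.config/nvim/init.vim
cd ~/dotfiles
# stow links each package's contents into the target directory
# (by default the parent of the stow dir, i.e. ~ here):
if command -v stow >/dev/null; then
  stow vim
fi
```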
I just have both directories in my dotfiles repo and have the vim one symlinked to the nvim one. Admittedly I haven't fully explored the capabilities of stow, there might be a better way.
I do almost exactly the same as you; my install.sh is a glorified wrapper around `ln -s`, but for each file, it verifies whether the file is already symlinked and if not renames the original to something like `.foo.bak.$(date -I)`. This is probably overkill, but it was especially nice when I was just starting to version control my dotfiles and still found unmanaged files sometimes that contained things worth saving.
EDIT: Some quirks to note, especially if you want to steal this script: 1. I use ~/.local/etc as my dotfile directory. 2. I support multiple shells but have all of them use ~/.profile rather than shell-specific files (most config overlaps, and there's a case stmt that deals with per-shell settings). 3. The vim/neovim bit at the bottom will fail silently if the directory already exists; thankfully this is rare, but it should be fixed some time.
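The core of the loop looks roughly like this (a sketch, not my actual script; paths as described above):

```shell
# Symlink each managed file into $HOME, backing up any unmanaged
# original rather than clobbering it.
dotdir="$HOME/.local/etc"
for src in "$dotdir"/.*rc "$dotdir"/.profile; do
  [ -e "$src" ] || continue
  dst="$HOME/$(basename "$src")"
  if [ -L "$dst" ]; then
    continue                            # already symlinked
  elif [ -e "$dst" ]; then
    mv "$dst" "$dst.bak.$(date -I)"     # preserve the unmanaged file
  fi
  ln -s "$src" "$dst"
done
```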
I use a combination of stow and git via a script I wrote that I call stash - https://github.com/scotte/stash - it's very basic and simple, but has served me well.
I'm using this approach, but I'm still looking for a good way to manage dotfiles for multiple machines. Having separate branches feels clunky, since there is a lot of overlap and a tweak may involve making the same change on several branches. Any recommendations for managing this situation?
I have a low-key solution. I check in files named by the hostname. For example `.bashrc.[hostname]`. Then I have a quick conditional in all my .bashrc files that checks for a hostname-specific file. This way, I commit them all, but only the relevant ones get loaded.
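The conditional is just a couple of lines, something like:

```shell
# At the end of the shared ~/.bashrc: source a host-specific file
# if one is committed for this machine.
host_rc="$HOME/.bashrc.$(hostname)"
if [ -f "$host_rc" ]; then
  . "$host_rc"
fi
```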
Surprised nobody has mentioned YADM - it can do per-device files and/or per-device templating too (jinja2 syntax). It's just a thin wrapper around git so you can use any git commands too.
There's nothing really to gain from using git directly. Yadm is awesome.
I put up my dotfiles here https://github.com/thingfox/dotfiles (sample documentation repository with examples), in it I show how I did the templating for ssh hosts amongst other configs.
The reason I want to be able to have different files for different machines was to make slight variations to some of my dotfiles. I used to use branches, but keeping all my branches up to date was too much error-prone work. I switched to a system where I template my dotfiles; the cost is that I now have to expand those templates for them to actually work, but the upside is I get to control when the templates are expanded and how they're installed. There are a bunch of different ways to do this depending on what you want, but what I ended up doing was:
1. Template files using a syntax that was easy find / replace using a regex. You could use an existing one if you like.
2. Generate a bash install script with all the file variants embedded as base64 strings. I can build this script locally, but I also have a travis ci build that pushes up the install.sh script as a gh-pages like branch.
3. I can now curl the install.sh script from any machine I want and bootstrap my dotfiles. The only install time dependencies are bash, curl, git, base64, mkdir, and echo so it's a very portable self-contained script.
4. During install time, I use a case on hostname to determine which files to use and I use git to put them into my $HOME directory using a similar strategy described by the article.
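A stripped-down sketch of what the generated install.sh ends up looking like (hostname and embedded contents invented; the real payloads are whole dotfiles):

```shell
# Each file variant is embedded as a base64 string; a case on the
# hostname picks which variant gets decoded into place.
install_file() {
  dest="$1" data="$2"
  mkdir -p "$(dirname "$dest")"
  printf '%s' "$data" | base64 -d > "$dest"
}

case "$(hostname)" in
  work-laptop)
    install_file "$HOME/.gitconfig" 'W3VzZXJdCg=='   # "[user]\n"
    ;;
  *)
    install_file "$HOME/.gitconfig" 'W2NvcmVdCg=='   # "[core]\n"
    ;;
esac
```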
I template trees using a python script, for both ~ and /. The template language isn't even currently that complex - basic per-host conditionals suffice. The host pushing the config does the templating, serializes it, and shoves it over ssh to the receiver. This way I can do things like leave passwords in config files (eg mpd.conf) and not have them end up on eg a VPS. Another example is having helpful comments in authorized_hosts to say where a key is from without that information ending up on the hosts themselves.
The receiving host runs python, so it can do things like refuse to overwrite files that have been changed locally. I still need to add a notion of hooks to run on the receiver when a given file is changed. If the remote dependency on python/ssh becomes a problem, I will simply add an option to dump a tarball locally.
I really tried to use ansible et al, but those tools seem to be geared towards managing large groups of essentially identical hosts, rather than generally differing hosts with some commonality.
It's great. It has tags to pull in only specific dotfiles (say for emacs, .config, etc.), and supports configurations for multiple hosts and multiple source folders.
This is overkill, but I have a DAG of profiles. Each profile can refer to one or more parent profiles. When I produce a config for a particular profile, a small Python script applies the profiles starting from the root node(s).
To avoid trashing my home directory, this actually is done to the side and committed into a bare git repository (this part is similar to the article). Afterwards, I use `git --git-dir=... --work-tree=~ checkout -p` to apply any changes one-by-one, allowing me to preserve any local edits I may have made.
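The helper for that last step is just a function (or alias) pointing git at the bare repo (the repo path is illustrative):

```shell
# Point git at the config repo while treating $HOME as the work tree.
cfg() { git --git-dir="$HOME/.cfg.git" --work-tree="$HOME" "$@"; }

# After regenerating a profile's config, apply it hunk-by-hunk so any
# local edits can be kept or discarded interactively:
#   cfg checkout -p
```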
Check $(uname) and other variables with if statements to activate aliases depending on OS, username, etc.
I also keep a barebones "core" of aliases/functions that I use everywhere (e.g. on Linux servers as well as my current macOS laptop), and then a file with the non-core stuff that only gets used on my dev environment (MacBook) but not on servers.
—
It’d help if you provided examples of what differences you have between machines though. Most should be pretty simple e.g. slightly different dir structure, different package managers etc.
I ended up putting everything common into the master branch, and keep only the varying parts (not the common parts) in machine-specific branches. These are normally "include files" that end up in ~/.profile.d/, ~/.emacs.d/, etc.
I have to check out both the main branch and the machine-specific branch in separate directories, and use symlinks. OK by me, though; I don't set up a new workstation often.
Why do you want different configs on different machines? I think the "easy" solution is "don't do that" ;) But of course that's useless advice if you actually have some use case, so a suggestion: have case/if blocks on $(hostname), or even do something like `test -f ~/dotfiles/bashrc.$(hostname).local && source ~/dotfiles/bashrc.$(hostname).local`
The problem with keeping all dotfiles in a single repo is that if you want to get an older version of one particular dotfile, you'll also be getting older versions of other dotfiles as well.
I want every dotfile I use to be independent of the rest and a log that shows changes to just that one dotfile, so I store each of them in separate repos and use GNU Stow[1] to manage them.
The above is actually a bit of an oversimplification of what I do, as I store related dotfiles in a single repo as well, so that (for example) all my weechat dotfiles are in a single repo, as I rarely want to checkout a single file independently of the rest there.
Git can fetch a single file as of any commit with 'git show' and show the history of any single file with 'git log' or 'git diff'. Any decent web/GUI tool will do these things too.
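For example, in a throwaway repo (file and commits invented):

```shell
# Demo: retrieving the history and old contents of a single file.
cd "$(mktemp -d)" && git init -q
echo 'set number' > .vimrc
git add .vimrc
git -c user.email=me@example.com -c user.name=me commit -qm v1
echo 'set nonumber' > .vimrc
git -c user.email=me@example.com -c user.name=me commit -qam v2
git show HEAD~1:.vimrc        # prints the old version: set number
git log --oneline -- .vimrc   # history of just this one file
```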
A Git repo per small text file seems like overkill to me.
You basically make your home dir a git repo and by default ignore everything inside it. When you want to store a dotfile in that repo, you have to force add it (`git add -f`) and it will be tracked.
Who says your git repo has to be public? Use a GitHub private repo, host the repo yourself behind SSH on a $5 Digital Ocean droplet, use the free private repos that come with Gitlab.com... securing git repositories is a solved problem.
There's a lot of dotfiles on Github and it doesn't seem to be a problem (Except if you check in private credentials, but that's not a problem unique to dotfiles).
If you rely on your configuration to be secret to be secure it's just security by obscurity and not worth much anyway.
> If you rely on your configuration to be secret to be secure it's just security by obscurity and not worth much anyway.
What I had in mind is that the average person wouldn't be a target, but publicly declaring their security vulnerability would attract attacks they wouldn't receive otherwise.