The eager vs. lazy distinction is not relevant here. Some programs that do not terminate under eager application semantics will terminate under lazy application semantics, and vice versa. What complicates reasoning about termination is the presence of mutation. In a lazy language you won't have unrestricted mutation, so you don't have to worry about those complications. You could have an eager language with no mutation (e.g., Elm[0]), but for the most part eager languages include some form of mutation, so you may have a harder time proving termination.
I don't think that's right. There is a basic theorem in the (untyped) lambda calculus: if a term has a normal form, that normal form is unique (Church-Rosser), and normal-order (non-strict) reduction is guaranteed to reach it. Since non-strict evaluation can "skip" arguments that have no normal form, there are expressions that terminate under non-strict evaluation while their strict counterparts don't. The opposite scenario does not exist: non-strict evaluation only evaluates what is actually needed, and anything that has to be evaluated must be evaluated under both the strict and the non-strict strategy, so a term that terminates strictly also terminates non-strictly. (And in the simply typed lambda calculus, all reduction sequences terminate, regardless of strategy.)
[0]: http://elm-lang.org
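A minimal Haskell sketch of the asymmetry described above: a term with a normal form that non-strict evaluation reaches but a strict strategy would not. The names `loop` and `lazyResult` are illustrative, not from any library.

```haskell
-- A term with no normal form: it reduces to itself forever.
loop :: Int
loop = loop

-- `const x y = x` discards its second argument, so under lazy
-- (non-strict) evaluation `loop` is never forced and this is just 42.
lazyResult :: Int
lazyResult = const 42 loop

-- A strict strategy would evaluate the argument first and diverge.
-- Simulating that with `seq` (which forces its first argument):
--   strictResult = loop `seq` const 42 loop   -- does not terminate

main :: IO ()
main = print lazyResult
```

Running this prints `42`; uncommenting the `seq` variant makes the program hang, mirroring how the same term diverges under a strict evaluation order.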