I actually work in the Tech Support section of Diskeeper Corporation, so I've run into both the defragmenting-and-VSS issue as well as the consideration Richc46 brought up about 10% fragmentation or more. Let me shed some light on things as I've come to learn them in my work here.

First, the VSS issue: since Shadow Copy employs a "copy-on-write" system, it preserves a copy of any data it believes has "changed". With the default 4KB cluster size, ordinary file movements already add to the snapshots, and piling a defrag pass on top of that can really balloon them. As Benjamin pointed out, though, there are defrag utilities out there that include VSS modes to address this problem.
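To put rough numbers on that ballooning, here's a quick back-of-the-envelope sketch in Python. It leans on one detail I've picked up along the way: VSS tracks changes in 16KB blocks in its diff area, so a single relocated 4KB cluster can dirty a whole 16KB block. The `snapshot_growth` helper and all the figures are illustrative, not measured from any particular system:

```python
# Back-of-the-envelope estimate of VSS snapshot ("diff area") growth
# caused by a defrag pass under copy-on-write. Illustrative only.

CLUSTER_SIZE = 4 * 1024       # default NTFS cluster size (4 KB)
VSS_BLOCK_SIZE = 16 * 1024    # granularity at which VSS tracks changes

def snapshot_growth(clusters_moved: int, clusters_per_block: float) -> int:
    """Estimate bytes copied into the diff area when a defragmenter
    relocates `clusters_moved` clusters.

    `clusters_per_block` models locality: scattered moves dirty one
    16 KB block per cluster (1.0); contiguous runs let up to four
    4 KB clusters share a block (4.0).
    """
    dirty_blocks = clusters_moved / clusters_per_block
    return int(dirty_blocks * VSS_BLOCK_SIZE)

# Relocating 1 GB worth of 4 KB clusters:
moved = (1024 ** 3) // CLUSTER_SIZE                      # 262,144 clusters
worst = snapshot_growth(moved, clusters_per_block=1.0)   # scattered moves
best = snapshot_growth(moved, clusters_per_block=4.0)    # contiguous runs

print(f"scattered moves:  ~{worst / 1024 ** 3:.1f} GB added to the snapshot")
print(f"contiguous moves: ~{best / 1024 ** 3:.1f} GB added to the snapshot")
```

In other words, moving just 1 GB of data can, in this simple model, copy anywhere from 1 GB to 4 GB into the snapshot, which is exactly the ballooning the VSS-aware defrag modes are designed to avoid.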
As for the point that richc46 originated, you're actually right about that, although I'd be hard pressed to suggest a specific percentage threshold. Most defragmenters do their best to achieve zero fragments, and while that's a worthy objective, CPU time and disk activity can be wasted on needless defragmentation. Diskeeper's program actually looks to address fragmentation only to the degree that performance is back at its peak, so you might say there is an "acceptable level" of fragmentation.

Then as far as the idea of defragmenting being rough on the drive: the real objective in reducing wear and tear is reducing the total number of read/write cycles. Both defragmenting and the normal reading from and writing to files produce disk activity, so both cause some wear. When a file is left fragmented, however, the extra read/write operations are multiplied by the frequency with which that file is accessed, and that can produce more wear than the one-time task of defragmenting it (see the toy model at the end of this post).

Anyway, I just wanted to share my experience here. I hope that adds some helpful info.
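P.S. Here's the toy model of that wear trade-off I mentioned. It treats each fragment as one extra disk operation per read and ignores caching and seek distances; the helper names and all the numbers are made up purely for illustration:

```python
# Toy model of the wear trade-off: disk operations from repeatedly
# reading a fragmented file vs. the one-time cost of defragmenting it.
# Each fragment counts as one seek+read; caching is ignored.

def ops_if_left_fragmented(fragments: int, accesses: int) -> int:
    # Every read of the file has to touch every fragment.
    return fragments * accesses

def ops_if_defragmented(fragments: int, accesses: int) -> int:
    # One-time cost: read each fragment once and write it back into a
    # contiguous run; every later access is then a single operation.
    return fragments * 2 + accesses

fragments = 50   # a moderately fragmented file
accesses = 200   # times the file is read over some period

print(f"left fragmented: {ops_if_left_fragmented(fragments, accesses)} ops")  # 10000
print(f"defragmented:    {ops_if_defragmented(fragments, accesses)} ops")     # 300
```

Under those assumed numbers the fragmented file racks up 10,000 operations against 300 for the defragmented one, which is the sense in which leaving a frequently accessed file fragmented can be harder on the drive than fixing it once.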