Things got noticeably worse when the 2514 patch came out for PD17, but I had a difficult time finding any common ground between the various problems I kept seeing, even though they came up again and again. Now, after some hunches and a ton of forensic work, I have a big part of the answer.
It's a frame access problem with PD17 (specifically 17.0.2514.2) and H.264 M2TS clips.
The gist of the issue is that for some reason, PD17 is unable to access every frame in this type of clip when doing normal timeline editing. It's not much of an issue with playback or producing, and it's only when you need access to a specific frame that the real problems start.
I made a 5-second test video with a unique number in each frame and produced it many different ways with several different hardware configurations in both PD14 and PD17. In general, PD14 handles the clips reasonably well, with only a few specific frames "unreachable" from the timeline regardless of whether the test clips were produced by PD14 or PD17.
PD17 is a whole other story, with as many as half of the frames in any given second absolutely unavailable from the timeline, and clips it's produced have more issues than when produced with the same settings by PD14. This explains why it's been so hard to have a clip behave consistently when editing.
An important thing to note is that each produced clip actually has every single frame present. You can see that everything is intact on other platforms with easy frame access, like VirtualDub2, so this is solely a timeline/editing access issue in PD17.
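If you want to double-check that a produced clip really does contain every frame without opening VirtualDub2, ffprobe can fully decode the stream and report the actual frame count. Here's a rough sketch of the idea (the sample filename is just an illustration; I synthesize a 5-second 60p clip first so the command has something to chew on, but you'd point ffprobe at your own PD-produced file):

```shell
# Make a throwaway 5-second 60p sample so the probe below is self-contained.
# In practice, replace sample_60p.mp4 with your own PD-produced clip.
ffmpeg -y -v error -f lavfi -i "testsrc2=duration=5:size=1280x720:rate=60" \
  -c:v libx264 -pix_fmt yuv420p sample_60p.mp4

# Decode the entire video stream and count the frames actually present.
# A 5-second 60p clip should report 300.
ffprobe -v error -count_frames -select_streams v:0 \
  -show_entries stream=nb_read_frames \
  -of default=nokey=1:noprint_wrappers=1 sample_60p.mp4
```

If the count matches duration × frame rate, the frames are all there in the file, and whatever PD17 is doing on the timeline is purely an access/decoding problem on its end.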
Here's a video showing a clip produced by PD14 alongside one produced with the exact same settings in PD17. The first section is what PD17 does when previewing them frame by frame, and the second half is the exact same clips in PD14:
In my testing, I've found that H.264 clips produced as MP4s do not have this issue, nor do H.265 clips produced as M2TS. The problem is much worse with nVidia HW encoding (with both the v411 and v417 drivers). Using a GTX 780Ti gives different results than using an RTX 2070, but both are seriously flawed.
Using CPU encoding or Intel QuickSync avoided the biggest frame issues (blocks of frames that PD17 cannot access from the timeline), but when produced by PD17, all profiles had a significant number of inaccessible or displaced frames. Interestingly, whether the Intel or nVidia card is seen by PD17 determines which artifacts CPU-only producing will generate.
Here's a partial screenshot of my data table. Anyone who's bored (or masochistic) enough can download it and see all the gory details here.

So, anyone who wants to test this out can check out any of my test clips in this OneDrive folder. Try putting a few on the timeline and see what the previews look like when you step frame by frame like in my video. I also have the original project that created my test clips in this folder. It has a 1.5GB source clip, but you can skip downloading that and simply use any 60p clip when you open it.
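If you'd rather roll your own numbered-frame test clip instead of downloading mine, ffmpeg can burn the frame counter into a test pattern. To be clear, my clips were made in PD, not this way; this is just one possible recipe, and the filename, size, and font styling are all my own illustrative choices (it needs an ffmpeg build with drawtext/fontconfig support):

```shell
# Burn the frame index (%{n}) into the center of every frame of a
# 5-second 1080p60 test pattern, so frames 0-299 are each uniquely labeled,
# then encode with x264. Step through the result frame by frame on the
# timeline and watch for skipped or repeated numbers.
ffmpeg -y -v error -f lavfi -i "testsrc2=duration=5:size=1920x1080:rate=60" \
  -vf "drawtext=text='%{n}':fontsize=140:fontcolor=white:box=1:boxcolor=black@0.6:x=(w-tw)/2:y=(h-th)/2" \
  -c:v libx264 -pix_fmt yuv420p numbered_60p.mp4
```

Produce that clip through PD17 with your usual profiles and you can run the same kind of frame-by-frame comparison I did.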
I'd really like to make sure that the debacle I've experienced isn't limited to my machine.
YouTube/optodata
DS365 | Win11 Pro | Ryzen 9 3950X | RTX 4070 Ti | 32GB RAM | 10TB SSDs | 5K+4K HDR monitors
Canon Vixia GX10 (4K 60p) | HF G30 (HD 60p) | Yi Action+ 4K | 360Fly 4K 360°