I run an i7 with 16 GB of RAM, and my primary drive (which also holds the thumbnail cache) is an SSD. CPU usage never really goes above 25% when starting PHd, and RAM usage grows to about 800 MB before the process just hangs.
To put this in better perspective, I wiped out my old cache and PHd project file and started a new one. With 8,143 pictures in the library, memory usage is 330 MB and CPU usage (while starting up and loading the library) was about 10-15%. It looks like the bulk of the time is spent in I/O, with my drive reading the tree structure to load everything up. Using Sysinternals Process Monitor I can see it start hitting my HD at 11:52:09 for the first directory in my library and not finish until 11:52:31, so about 22 seconds of I/O just to rebuild the tree in PHd.
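If anyone wants to compare numbers on their own machine, here is a minimal Python sketch of roughly the same measurement. It only times a raw directory walk (the library path is made up, and it obviously does not include whatever per-file work PHd does on top of the scan):

```python
import os
import time

# Hypothetical library root; point this at your own photo folder.
LIBRARY_ROOT = r"D:\Photos"

start = time.perf_counter()
folders = 0
photos = 0
for dirpath, dirnames, filenames in os.walk(LIBRARY_ROOT):
    folders += 1
    # Count only typical image extensions; adjust to match your library.
    photos += sum(1 for name in filenames
                  if name.lower().endswith((".jpg", ".jpeg", ".png", ".cr2", ".nef")))
elapsed = time.perf_counter() - start

print(f"Walked {folders} folders / {photos} photos in {elapsed:.1f} s")
```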
Those 22 seconds were for roughly 8,100 photos in 165 folders. My 87,000-photo library that will not load is more than 10x the number of photos, spread across 8,500 (yes, 8.5k) folders. Do some basic math and you can see how it would take quite a while to load. However, as I stated, the process just dies and never finishes.
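Here is that basic math spelled out, assuming the tree-scan time grows roughly linearly with the number of folders (an assumption on my part, not something I measured):

```python
# Rough scaling of the measured 22-second scan, assuming scan time grows
# roughly linearly with folder count (an assumption, not a measurement).
small_folders, small_seconds = 165, 22
big_folders = 8500

estimate = small_seconds * (big_folders / small_folders)
print(f"~{estimate:.0f} s, i.e. roughly {estimate / 60:.0f} minutes, just to walk the tree")
```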
At this point I would hope the devs at PHd could do something a bit friendlier on load (maybe some async jobs to read the file structure while at least letting me into the program).
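Something along these lines is what I have in mind. This is only a sketch of the idea in Python (not how PHd actually works, and the library path is made up): the folder scan runs on a background thread and reports folders as they are found, so the main thread stays free to show the UI.

```python
import os
import threading

def scan_tree(root):
    """Yield (folder_path, file_count) pairs as the walk progresses."""
    for dirpath, _dirnames, filenames in os.walk(root):
        yield dirpath, len(filenames)

def load_library_async(root, on_folder_ready):
    """Run the scan on a background thread, reporting each folder as it is found."""
    def worker():
        for folder, count in scan_tree(root):
            on_folder_ready(folder, count)  # e.g. append a node to the UI tree
    thread = threading.Thread(target=worker, daemon=True)
    thread.start()
    return thread

if __name__ == "__main__":
    # Hypothetical library root; the callback just prints instead of updating a UI.
    t = load_library_async(r"D:\Photos", lambda folder, n: print(f"{folder}: {n} files"))
    print("Main thread (the 'UI') is free while the scan runs in the background...")
    t.join()
```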