Recently I noticed that one of our test domains had its number of pages known to Google increase dramatically. Great – but –
on closer inspection only about 50% are fully indexed – 25% are supplemental results – and 25% are merely known to G (i.e. listed with no title or snippet).
Hold on – supplemental results? I last saw these in 2003, when the last index-size war was on.
See Danny Sullivan's article from 2003.
So, cynically, I assume that G has now actually indexed roughly the number of pages it claimed before the IPO, and the latest figure is just a new count of pages merely known about (which does the user bugger all good). And "web pages"? Sorry, PDF, DOC and TXT files are not web pages – they may be documents, and each can contain zillions of pages.
So is G just gathering straw to insulate itself against frosty times ahead?
That may make for a comfortable nest, but it is very irritating to the actual user.