But surely we are comparing apples with oranges here, since CAB's code will remove multiple duplicates, e.g. consider:
Yes. I didn't mean to imply CAB's was my final solution. CAB's is not currently viable, because pure duplicates would be lost completely, and I want to keep their text, just not the "extra" data on the duplicate coords.
I was keeping it as an option in case it can be tweaked to not lose the pure duplicates' text, because that 7-fold speed gain is lucrative.
And because I subconsciously probably wanted to give some credit for his piece, which fit nicely into the others to bring them up to par. :kewl:
As far as the "extra" data loss on duplicates goes, that is a desirable effect. The order of the supplied live list is already determined by another function, and the first entry in a set will always hold the defining "extra" data for the other duplicate coords. I intend to use that field with formatting options eventually.
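To illustrate the intended behavior (a hedged sketch, not any of the actual routines discussed): every record's text survives, but only the first record at each coordinate keeps its "extra" data. The `(coord, text, extra)` record layout here is hypothetical, chosen just to show the first-occurrence rule.

```python
def strip_duplicate_extras(records):
    """Keep all texts; only the first record at each coordinate
    retains its 'extra' data (later duplicates have it cleared).
    Record layout (coord, text, extra) is assumed for illustration."""
    seen = set()
    out = []
    for coord, text, extra in records:
        if coord in seen:
            # Duplicate coordinate: keep the text, drop only the extra data.
            out.append((coord, text, None))
        else:
            # First occurrence defines the "extra" data for this coordinate.
            seen.add(coord)
            out.append((coord, text, extra))
    return out
```

Because the list order is fixed upstream, a simple first-wins pass like this preserves the defining "extra" field without any sorting.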
In the run-off, though, Lee's looks like the one I'll be using (and already am). I don't foresee ever having data exceeding 1500 items, and even then the breaking point where mine becomes more efficient is at extremely large lists (5000+) that I'll never use.
In the grand scheme of things, we're talking a 12 millisecond spread on live data between the worst (mine) and the best (LM's). And it was worth the ride to get here.