But this requires that the list is sorted before processing; otherwise the grouping will be ambiguous...
Well, if the list is sorted (if such an assumption can be made), then the algorithm could be optimized a lot more, seeing as the search for duplicates only needs to continue past the current item until the first non-duplicate is found.
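That sorted-list optimization could be sketched like this (in Python rather than AutoLISP, purely as an illustration; the function name and the choice to drop singleton groups are my own assumptions, not the OP's code):

```python
def fuzzy_groups_sorted(nums, fuzz):
    """Hypothetical sketch: scan a pre-sorted list once, chaining each
    item onto the current group while it lies within fuzz of the
    previous item; only groups with 2+ members are kept."""
    groups = []
    current = [nums[0]]
    for n in nums[1:]:
        if n - current[-1] <= fuzz:
            current.append(n)          # still within fuzz of the last item
        else:
            if len(current) > 1:
                groups.append(current)  # close off a duplicate group
            current = [n]               # start scanning from the new item
    if len(current) > 1:
        groups.append(current)
    return groups

print(fuzzy_groups_sorted([1.00, 1.03, 1.06, 1.09, 2.06, 2.09, 3.02], 0.05))
```

Note that since the scan stops at the first non-duplicate, each item is visited only once. Also note that under this plain neighbour-to-neighbour comparison, 3.02 would not join (2.06 2.09) with a 0.05 fuzz, which is exactly the kind of grouping ambiguity being discussed here.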
But I'm not sure about the "groupings". They're pretty arbitrary IMO. E.g. say the list consists of '(1.00 1.03 1.06 1.09 2.06 2.09 3.02). Does a grouping relate only to the first item found, or to all possible groupings? I.e. should the result (with a fuzz factor of 0.05) only be:
((1.00 1.03) (1.06 1.09) (2.06 2.09))
But then shouldn't 1.03 also group together with 1.06 (for example)? Not to mention, shouldn't 2.09 and 3.02 also be grouped? But then a duplicate would be repeated in the result.
IMO there are two alternative approaches to this:
; Option 1
((1.00 1.03) (1.03 1.06) (1.06 1.09) (2.06 2.09) (2.09 3.02))
; Option 2
((1.00 1.03 1.06 1.09) (2.06 2.09 3.02))
And as per the OP, it seems Option 2 is what's supposed to happen.
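Option 2 can be seen as Option 1 with its overlapping pairs merged into transitive chains. A minimal sketch of that merge step (Python, names are mine; it assumes the pairs come from a sorted list, so each pair can only extend the group that currently ends with its first element):

```python
def merge_overlapping(pairs):
    """Hypothetical sketch: merge adjacent pairs that share an element
    into transitive groups, i.e. derive Option 2 from Option 1.
    Assumes pairs are ordered, as produced from a sorted list."""
    groups = []
    for a, b in pairs:
        for g in groups:
            if g[-1] == a:      # pair overlaps the tail of an existing group
                g.append(b)
                break
        else:
            groups.append([a, b])  # no overlap: start a new group
    return groups

print(merge_overlapping(
    [(1.00, 1.03), (1.03, 1.06), (1.06, 1.09), (2.06, 2.09), (2.09, 3.02)]))
```

Feeding it the Option 1 pairs from above yields exactly the Option 2 groupings, ((1.00 1.03 1.06 1.09) (2.06 2.09 3.02)), which is why the two options are really the same pairwise test with and without chaining.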
Though there's another ambiguous situation:
[code]
(UNIQUE_PAIRS '(1.0 1.1 1.19 1.2 1.21 1.22 1.23 1.3) 0.05)
;0.1 is fuzz
-->((1.19 1.2 1.21 1.22 1.23))
[/code]
If the comment is to be believed, then it's not as I thought, since in that case 1.23 and 1.3 should also be considered duplicates, shouldn't they? Not to mention 1.0 and 1.1 (if the 0.1 fuzz is considered inclusive).
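Just looking at the neighbour-to-neighbour gaps makes the discrepancy visible (a quick Python check, not anything from the OP's code; `round` is used only to suppress floating-point noise in the printed gaps):

```python
nums = [1.0, 1.1, 1.19, 1.2, 1.21, 1.22, 1.23, 1.3]
# gap between each item and the next, rounded to 2 decimals for display
diffs = [round(b - a, 2) for a, b in zip(nums, nums[1:])]
print(diffs)  # [0.1, 0.09, 0.01, 0.01, 0.01, 0.01, 0.07]
```

With a 0.05 fuzz only the four 0.01 gaps qualify, which chains 1.19 through 1.23 and reproduces the shown result exactly; with a 0.1 fuzz (taken inclusively) every gap qualifies, so 1.0/1.1 and 1.23/1.3 would have to join too. So the shown output is consistent with the 0.05 argument in the call, not with the ";0.1 is fuzz" comment.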