Hey...
You tested the same function twice and got different results? What about dg:count-found?
reltro
Oops, yes ... that was silly of me. But variance within the same function isn't strange at all: any number of things can affect performance, especially other programs running in the background, as is ALWAYS the case in Windows. The only way past such inconsistencies is to rerun the benchmark a few times and keep only the figures that look like the norm. Notice how the values differ between three consecutive tests:
_$ (QuickBench '((reltro:count-found aList search) (reltro:count-found1 aList search) (dg:count-found aList search) (ALE_ListCountItemFuzz2 search aList 1e-8) (ALE_ListCountItemFuzz search aList 1e-8) (LM:countitemfuzz search aList 1e-8) (ALE_ListCountItem search aList) (ALE_ListCountItem2 search aList)))
Benchmarking ........ done for 128 iterations. Sorted from fastest.
Statement Increment Time(ms) Normalize Relative
--------------------------------------------------------------------------------
(ALE_LISTCOUNTITEM SEARCH ALIST) 128 1030 1030 1.00
(DG:COUNT-FOUND ALIST SEARCH) 32 1156 4624 4.49
(ALE_LISTCOUNTITEM2 SEARCH ALIST) 32 1310 5240 5.09
(ALE_LISTCOUNTITEMFUZZ SEARCH ALIST ...) 32 1344 5376 5.22
(ALE_LISTCOUNTITEMFUZZ2 SEARCH ALIST...) 32 1390 5560 5.40
(RELTRO:COUNT-FOUND ALIST SEARCH) 32 1434 5736 5.57
(RELTRO:COUNT-FOUND1 ALIST SEARCH) 32 1467 5868 5.70
(LM:COUNTITEMFUZZ SEARCH ALIST 1.0e-008) 16 1153 9224 8.96
--------------------------------------------------------------------------------
_$ (QuickBench '((reltro:count-found aList search) (reltro:count-found1 aList search) (dg:count-found aList search) (ALE_ListCountItemFuzz2 search aList 1e-8) (ALE_ListCountItemFuzz search aList 1e-8) (LM:countitemfuzz search aList 1e-8) (ALE_ListCountItem search aList) (ALE_ListCountItem2 search aList)))
Benchmarking ........ done for 128 iterations. Sorted from fastest.
Statement Increment Time(ms) Normalize Relative
--------------------------------------------------------------------------------
(ALE_LISTCOUNTITEM SEARCH ALIST) 128 1062 1062 1.00
(DG:COUNT-FOUND ALIST SEARCH) 32 1185 4740 4.46
(ALE_LISTCOUNTITEM2 SEARCH ALIST) 32 1294 5176 4.87
(ALE_LISTCOUNTITEMFUZZ2 SEARCH ALIST...) 32 1356 5424 5.11
(ALE_LISTCOUNTITEMFUZZ SEARCH ALIST ...) 32 1404 5616 5.29
(RELTRO:COUNT-FOUND ALIST SEARCH) 32 1435 5740 5.40
(RELTRO:COUNT-FOUND1 ALIST SEARCH) 32 1450 5800 5.46
(LM:COUNTITEMFUZZ SEARCH ALIST 1.0e-008) 16 1156 9248 8.71
--------------------------------------------------------------------------------
_$ (QuickBench '((reltro:count-found aList search) (reltro:count-found1 aList search) (dg:count-found aList search) (ALE_ListCountItemFuzz2 search aList 1e-8) (ALE_ListCountItemFuzz search aList 1e-8) (LM:countitemfuzz search aList 1e-8) (ALE_ListCountItem search aList) (ALE_ListCountItem2 search aList)))
Benchmarking ........ done for 128 iterations. Sorted from fastest.
Statement Increment Time(ms) Normalize Relative
--------------------------------------------------------------------------------
(ALE_LISTCOUNTITEM SEARCH ALIST) 128 1091 1091 1.00
(DG:COUNT-FOUND ALIST SEARCH) 32 1186 4744 4.35
(ALE_LISTCOUNTITEM2 SEARCH ALIST) 32 1373 5492 5.03
(ALE_LISTCOUNTITEMFUZZ SEARCH ALIST ...) 32 1406 5624 5.15
(ALE_LISTCOUNTITEMFUZZ2 SEARCH ALIST...) 32 1419 5676 5.20
(RELTRO:COUNT-FOUND ALIST SEARCH) 32 1434 5736 5.26
(RELTRO:COUNT-FOUND1 ALIST SEARCH) 32 1465 5860 5.37
(LM:COUNTITEMFUZZ SEARCH ALIST 1.0e-008) 16 1216 9728 8.92
--------------------------------------------------------------------------------
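One way to "take the norm" from repeated runs like the three above is to keep the median timing, since a background-process spike usually distorts only one run. This is just an illustrative sketch, not part of QuickBench; the helper name median-time is my own:

```lisp
;; Illustrative only: pick the median of several timing runs so one
;; skewed run (e.g. a background process kicking in) doesn't dominate.
;; For an even count this takes the upper median, good enough here.
(defun median-time (times / sorted)
  (setq sorted (vl-sort times '<))
  (nth (/ (length sorted) 2) sorted)
)
;; e.g. (median-time '(1030 1062 1091)) returns 1062,
;; the middle of the three ALE_ListCountItem timings above.
```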
But others have already done similar comparisons, though with a different benchmarking function than mine. I use the attached routine because it doesn't hang when a test takes a very long time (i.e. when there's a badly performing test case): it won't run any single test expression for longer than about 1 second.
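The 1-second cap can be sketched roughly like this; it's not the attached routine's actual code, just my own illustration, assuming the MILLISECS system variable (milliseconds since startup) is available:

```lisp
;; Sketch of a time-capped benchmark loop: keep doubling the iteration
;; count for better resolution, but stop starting new runs once the
;; total elapsed time passes the cap, so a slow test can't hang us.
(defun time-capped-bench (expr cap-ms / start total iters)
  (setq iters 1
        total 0)
  (while (and (< total cap-ms) (< iters 1024))
    (setq start (getvar "MILLISECS"))
    (repeat iters (eval expr))        ; run the quoted test expression
    (setq total (+ total (- (getvar "MILLISECS") start))
          iters (* 2 iters))
  )
  total                               ; total elapsed milliseconds
)
```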
BTW, I've been asked on numerous occasions why that QuickBench of mine shows the relative speed of the faster functions as a higher figure than the slowest one's. That came from the benchmarking routine the others use (notice it gives 1.0 to the slowest test and a higher number to each faster one, so a value of 2.0 means 2 times faster):
http://autolisp.ru/wp-content/uploads/2009/09/benchmark.lsp
But I've since changed it in response to some comments: now the fastest is shown as 1.0 and the slower ones as how much longer they take, so 3 times slower reads as 3.0. Obviously I'm now waiting for the comments asking why my relative column differs from the one in Michael Puckett's routine.
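To make the two conventions concrete, here's the arithmetic side by side; the helper names are mine, and the inputs are the normalized times from the tables above:

```lisp
;; Old convention (Michael Puckett style): slowest = 1.0,
;; a faster test's figure says how many times FASTER it is.
(defun relative-old (norm-ms slowest-ms)
  (/ slowest-ms (float norm-ms))
)
;; New QuickBench convention: fastest = 1.0,
;; a slower test's figure says how many times LONGER it takes.
(defun relative-new (norm-ms fastest-ms)
  (/ norm-ms (float fastest-ms))
)
;; From the first table: fastest normalized time 1030,
;; dg:count-found normalized time 4624:
;; (relative-new 4624 1030) gives about 4.49, matching the table.
```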