Very often you need to filter an array with more than one condition:
return results
  .filter(x => controlGroup[x] !== undefined)
  .filter(x => x.size > 10)
Or you could do the same with a complex condition and a single filter:
return results
  .filter(x => controlGroup[x] !== undefined && x.size > 10)
  // or
  .filter(x =>
    controlGroup[x] !== undefined // <- Prettier would place && here. What a joke.
    && x.size > 10
  )
My guess is that since the second approach doesn't iterate the array multiple times, it has better performance characteristics; on the other hand, the first wins in readability and can be easily extended, with each condition on its own line for neat git diffs.
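(For completeness, a middle ground I could imagine is naming each condition and combining the predicates, so the array is only walked once while every condition still gets its own line. The Predicate type and all helper below are only my sketch, not anything from the existing code.)

// Sketch only: a tiny combinator that folds several predicates into one
// filter pass. The names (Predicate, all) are hypothetical.
type Predicate<T> = (x: T) => boolean

const all = <T>(...predicates: Predicate<T>[]): Predicate<T> =>
  (x: T) => predicates.every(p => p(x))

// usage, mirroring the example above:
// return results.filter(all(
//   x => controlGroup[x] !== undefined,
//   x => x.size > 10,
// ))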
In my particular case I have a function on a Node.js server that runs 3 times per request on a set of ~40 items (after the flat()). There are already lots of iterations and some lambda functions, so I rather doubt that micro-optimizing the two filters would make a difference. I believe it could all be rewritten using just a single reduce to save 7 iterations (if I count correctly), as sketched after the function below, but at what cost?
PS: I am looking for a general answer, but you could answer my particular case as a bonus. Thank you.
export const getProductFeatures =
  (technicalSpecifications: TechnicalSpecifications, featuresBySpecLabel: FeaturesBySpecLabel): ProductFeature[] =>
    [...technicalSpecifications
      .map(specGroup => specGroup.specifications)
      .flat()
      .filter(spec => featuresBySpecLabel[spec.label] !== undefined)
      .filter(spec => verifyFeature(spec, featuresBySpecLabel[spec.label]))
      .reduce((productFeatures, spec) => {
        productFeatures.add(featuresBySpecLabel[spec.label].featureLabel)
        return productFeatures
      }, new Set<string>())]
      .map(label => ({ label }))
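For reference, this is roughly the single-pass version I have in mind. It is only a sketch under the same assumed types (TechnicalSpecifications, FeaturesBySpecLabel, ProductFeature) and the same verifyFeature helper, not tested code:

// Sketch: flatMap replaces map().flat(), and the single reduce folds both
// filter conditions and the Set-building step into one pass.
export const getProductFeaturesSinglePass =
  (technicalSpecifications: TechnicalSpecifications, featuresBySpecLabel: FeaturesBySpecLabel): ProductFeature[] =>
    [...technicalSpecifications
      .flatMap(specGroup => specGroup.specifications)
      .reduce((labels, spec) => {
        const feature = featuresBySpecLabel[spec.label]
        if (feature !== undefined && verifyFeature(spec, feature)) {
          labels.add(feature.featureLabel)
        }
        return labels
      }, new Set<string>())]
      .map(label => ({ label }))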
Based on the comments, there is no significant performance improvement for the particular case of ~40 items in an array.
Technically, each additional pass over the array adds another n operations in the worst case, so two filter passes (O(2n)) could take roughly twice as long as one, but we are talking about time spans way down in the nano-scale, e.g. 10ns vs 20ns in this particular case. It might become worth considering depending on the size of the dataset, the machine the code runs on, etc.
Constants are usually ignored when computing O notation, i.e. O(n) vs O(2n).
~ Felix Kling (taken from comments)
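For anyone who wants numbers for their own data, a rough micro-benchmark along these lines can settle it. The item shape, the 40-element array, and the use of perf_hooks are my assumptions here, not the actual production code (note it indexes controlGroup by x.key so the types line up):

// Rough micro-benchmark sketch using Node's perf_hooks.
import { performance } from 'perf_hooks'

type Item = { key: string; size: number }

const controlGroup: Record<string, true> = { a: true, b: true }
const results: Item[] = Array.from({ length: 40 }, (_, i) => ({
  key: i % 2 === 0 ? 'a' : 'z',
  size: i,
}))

const runs = 100_000

let start = performance.now()
for (let i = 0; i < runs; i++) {
  results
    .filter(x => controlGroup[x.key] !== undefined)
    .filter(x => x.size > 10)
}
console.log('two filters:', (performance.now() - start) / runs, 'ms per call')

start = performance.now()
for (let i = 0; i < runs; i++) {
  results.filter(x => controlGroup[x.key] !== undefined && x.size > 10)
}
console.log('one filter: ', (performance.now() - start) / runs, 'ms per call')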
That being said, performance can be taken out of the question, which leaves only the readability factor, and that is hugely subjective and opinion-based (such Q&As are not allowed on SO). Anyone wondering should probably discuss it with their team or decide by heart. Generally though, for equivalent code, which the above is, readability is valued more often than not. What counts as more readable, however, comes down to personal preference or team/company guidelines.