flutter, dart, oop, mixins, software-design

Do Mixins really slow down compilation time?


I was working with mixins earlier, and I noticed a slowdown in compilation time after adding them to the existing code.

Just now, I was reading an article that mentioned the following:

Disadvantages of mixins

Mixins are a useful technique, but hardly a panacea. There are some significant disadvantages:

The sentences above are about compilation time (speaking about mixins in general, as a principle); what follows is about usage.

Suppose there are a dog and a horse that both need to run.

mixin RunMixy {
  void run() => print('running');
}

class Dog with RunMixy {}
class Horse with RunMixy {}

Now the running functionality is mixed into the two classes, but there's an alternative way to achieve the same thing, either by using inheritance or abstraction:

abstract class RunningMamal {
  void run();
}

class Dog extends RunningMamal {
  @override
  void run() => print('running');
}

class Horse extends RunningMamal {
  @override
  void run() => print('running');
}
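
Both versions let the animals be handled through a common type, since a Dart mixin also introduces an implicit interface. A minimal sketch against the abstract-class version (the makeThemRun helper is just for illustration):

void makeThemRun(List<RunningMamal> animals) {
  for (final animal in animals) {
    animal.run(); // dynamic dispatch picks each class's implementation
  }
}

void main() {
  makeThemRun([Dog(), Horse()]); // prints 'running' twice
}

With the mixin version, the analogous List<RunMixy> works as well.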

So, do mixins really slow down the app, to the point that they should be avoided? And when should they be used?


Solution

  • This reply addresses the question: "Do mixins really slow down the app?"

    Just out of curiosity, I benchmarked the two approaches. Note: I changed your class definitions by replacing the print statements with a simple state update. That way, the benchmark scores should reflect any performance penalty that comes with using mixins, rather than the cost of printing.

    // File: benchmark/animal_benchmark.dart
    import 'package:benchmark_runner/benchmark_runner.dart';
    
    abstract class RunningMamal {
      void run();
      String _state = 'resting';
      String get state => _state;
    }
    
    class Dog extends RunningMamal {
      @override
      void run() => _state = 'running';
    }
    
    class Horse extends RunningMamal {
      @override
      void run() => _state = 'running';
    }
    
    final dog = Dog();
    final horse = Horse();
    
    void main(List<String> args) {
    
      group('Inheritance:', () {
        benchmark('horse running', () {
          horse.run();
        });
        benchmark('dog running', () {
          dog.run();
        });
      });
    }
    
    // File: benchmark/animal_mixin_benchmark.dart
    import 'package:benchmark_runner/benchmark_runner.dart';
    
    mixin RunMixy {
      String _state = 'resting';
      void run() => _state = 'running';
      String get state => _state;
    }
    
    class Dog with RunMixy {}
    
    class Horse with RunMixy {}
    
    final dog = Dog();
    final horse = Horse();
    
    void main(List<String> args) {
      group('Mixin:', () {
        benchmark('horse running', () {
          horse.run();
        });
        benchmark('dog running', () {
          dog.run();
        });
      });
    }
    

    And here are the benchmark scores (on an Intel Core i5-6260U):

    $ dart run benchmark_runner
    
    Finding benchmark files... 
      benchmark/animal_benchmark.dart
      benchmark/animal_mixin_benchmark.dart
    
    Running: dart --define=isBenchmarkProcess=true benchmark/animal_benchmark.dart
      Inheritance: horse running; mean: 0.093 ± 0.13 us, median: 0.083 ± 0.00 us
                                  sample size: 100 (averaged over 171 runs)
      
      Inheritance: dog running; mean: 0.066 ± 0.00035 us, median: 0.066 ± 0.00 us
                                sample size: 100 (averaged over 208 runs)
      
      
    
    Running: dart --define=isBenchmarkProcess=true benchmark/animal_mixin_benchmark.dart
      Mixin: horse running; mean: 0.091 ± 0.094 us, median: 0.083 ± 0.00100 us 
                            sample size: 100 (averaged over 167 runs)
      
      Mixin: dog running; mean: 0.082 ± 0.038 us, median: 0.066 ± 0.00100 us
                          sample size: 100 (averaged over 201 runs)
        
    -------      Summary     -------- 
    Total run time: [01s:224ms]
    Completed benchmarks: 4.
    Completed successfully.
    Exiting with code: 0.
    

    Verdict: At least in this case, the benchmark scores indicate that using Dart mixins does not affect runtime performance.


    Answer to the OP's follow-up question ("Do you get the same results on every run?"):

    The benchmark scores will not be exactly the same on every run; they fluctuate around approximately 0.1 us. That's why score statistics can be useful.

    Just to clarify: in the first benchmark, for example, the function horse.run() is called 171 times in a loop, the elapsed time is averaged per call, and that average is recorded as one score. This process is repeated 100 times to generate the score sample, from which the mean, standard deviation, etc. are calculated.
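
    To make that procedure concrete, here is a minimal sketch of the sampling scheme (not benchmark_runner's actual implementation; the counts 171 and 100 are taken from the report above):

    import 'dart:math' show sqrt;

    /// Times [body] averaged over [innerRuns] calls per score, and repeats
    /// this [sampleSize] times to build a score sample.
    void pseudoBenchmark(void Function() body,
        {int innerRuns = 171, int sampleSize = 100}) {
      final stopwatch = Stopwatch();
      final scores = <double>[];
      for (var i = 0; i < sampleSize; i++) {
        stopwatch
          ..reset()
          ..start();
        for (var j = 0; j < innerRuns; j++) {
          body();
        }
        stopwatch.stop();
        // One score: average elapsed time per call, in microseconds.
        scores.add(stopwatch.elapsedMicroseconds / innerRuns);
      }
      final mean = scores.reduce((a, b) => a + b) / scores.length;
      final variance =
          scores.map((s) => (s - mean) * (s - mean)).reduce((a, b) => a + b) /
              (scores.length - 1);
      print('mean: $mean us, stdDev: ${sqrt(variance)} us');
    }

    Calling, e.g., pseudoBenchmark(() => horse.run()) then prints one mean and standard deviation pair, analogous to a line of the report above.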