Tags: c++, operator-overloading, template-meta-programming, expression-templates

C++ Operator Modification/Metaprogramming Strategies for Less-Verbose Syntax


I am currently learning about template metaprogramming and expression templates in C++, so as an exercise I am creating a linear algebra library to practice the concepts I am learning.

So far, my library has a complete set of non-member operator overloads for all of the binary operators that can be overloaded, and has a fairly slick interface that's easily extensible. One problem I've run into, however, is that matrix operations often have multiple variations. For multiplication alone, there is general matrix multiplication, the dot product, the Kronecker product, the Hadamard product, the cross product, and more.

One slick way around this, employed by MATLAB, is the .* operator used for Hadamard multiplication (along with .^, ./, etc.). In this case, the MATLAB language employs . as a modifier for the * operator. However, I'm unaware of any mechanism in the C++ language that allows operators to be modified like this. Are there any clean workarounds that approximate this behavior?

Here are some things I've thought about already:

template<typename lhs_t, typename rhs_t, typename op_t = Gemm>
auto operator*(lhs_t lhs, rhs_t rhs)
{
    ...
}

// Operator template specializations ...

int main()
{
    Matrix<double, 2, 2> mat1(1.0, 2.0, 3.0, 4.0);
    Matrix<double, 2, 2> mat2(1.0, 2.0, 3.0, 4.0);

    mat1 * mat2; // Works
    mat1 *<Hadamard> mat2; // Error!  Syntax????
}
Hadamard(mat1 * mat2); // Hadamard wraps or modifies binary expression created by operator*
                       // SFINAE or Concepts used to select the correct routine based on the trait set
Hadamard<Multiplication>(mat1, mat2);
Hadamard_Multiplication(mat1, mat2);
mat1.hadamard_multiplication(mat2);

None of these seem to have syntax quite as elegant as MATLAB's:

mat1 .* mat2;

Are there any techniques available that come close to an operator modifier syntax? Or any general techniques to make the usage syntax less verbose? If not, is there any notion that something useful here could be included in a future version of C++?


Solution

  • Here are the realizations I came to with respect to this:

    1. Multiple syntaxes can be supported. I do not need to pick just one; I can implement one and add abstractions on top to mimic the other syntaxes if desired.
    2. Hadamard multiplication (and division, etc.) is the default multiplication style for arrays, while GEMM is the default for matrices.

    With those two items in mind, I chose to implement an n-dimensional array class (since CRTP base classes and free functions are used for the implementation, this was simply a matter of providing an empty struct with the necessary traits/alias declarations).
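As a concrete illustration, the empty-struct approach might look something like the following. All names here (`object_base`, `is_matrix`, the fixed two-dimensional shape) are hypothetical stand-ins to get the idea across, not the actual library:

```cpp
#include <cstddef>

// Hypothetical sketch: with CRTP base classes and free functions doing the
// real work, a new object category is just an empty struct carrying traits.
template<typename derived_t>
struct object_base {
    constexpr derived_t&       self()       { return static_cast<derived_t&>(*this); }
    constexpr const derived_t& self() const { return static_cast<const derived_t&>(*this); }
};

// n-dimensional array: defaults to element-wise (Hadamard) semantics
template<typename T, std::size_t Rows, std::size_t Cols>
struct array : object_base<array<T, Rows, Cols>> {
    using value_type = T;
    static constexpr std::size_t rows = Rows;
    static constexpr std::size_t cols = Cols;
    static constexpr bool is_matrix = false;
};

// matrix: same shape information, but defaults to GEMM semantics
template<typename T, std::size_t Rows, std::size_t Cols>
struct matrix : object_base<matrix<T, Rows, Cols>> {
    using value_type = T;
    static constexpr std::size_t rows = Rows;
    static constexpr std::size_t cols = Cols;
    static constexpr bool is_matrix = true;
};
```

The point is that the array/matrix distinction is purely a compile-time trait; the operator overloads below key off it.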

    The following situations can then result in Hadamard multiplication:

    1. Matrix A has the same dimensions as Matrix B, and neither matrix is square. It is then clear that GEMM is not valid, but Hadamard multiplication is, so it makes sense to use a concept to select this overload. In other words:

      // Results in hadamard multiplication
      template<object_type lhs_t, object_type rhs_t> requires 
      (is_same_dimensions_v<lhs_t, rhs_t> && !is_special_mult_supported_v<lhs_t, rhs_t>)
      constexpr auto operator*(lhs_t&& lhs, rhs_t&& rhs) noexcept -> mul_expr<lhs_t, rhs_t>
      {
          return { std::forward<lhs_t>(lhs), std::forward<rhs_t>(rhs) };
      }
      
      // Results in general matrix multiplication
      template<typename lhs_t, typename rhs_t> requires is_gemm_supported_v<lhs_t, rhs_t>
      constexpr auto operator*(lhs_t&& lhs, rhs_t&& rhs) noexcept -> gemm_expr<lhs_t, rhs_t>
      {
          return { std::forward<lhs_t>(lhs), std::forward<rhs_t>(rhs) };
      }
      
    2. If a GEMM expression is assigned to an array and a Hadamard multiplication is valid, it is implicitly converted to a Hadamard multiplication.

      // implicit conversion to hadamard multiplication
      array = a * b;
      
    3. GEMM expressions can be cast to Hadamard multiplication expressions explicitly.

      // explicitly create a hadamard multiplication expression
      auto c = static_cast<mul_expr>(a * b);
      
    4. GEMM expressions can be cast to arrays explicitly, resulting in a Hadamard multiplication.

      // explicitly create an array using hadamard multiplication
      auto c = static_cast<array>(a * b);
      
    5. In the case of mixed array/matrix types where both Hadamard and GEMM are supported, the left-hand side's type determines the operation, which prevents ambiguity.

       // if a is an array and b is a matrix, then c is a mul_expr
       // if a is a matrix and b is an array, then c is a gemm_expr
       auto c = a * b;
      
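Rule 2 (assignment to an array reinterprets a GEMM expression element-wise) can be sketched roughly like this, using toy types in place of the real expression templates and evaluating eagerly for brevity:

```cpp
#include <array>
#include <cstddef>

template<std::size_t N> struct gemm_expr;

// Toy flat array; hypothetical stand-in for the real n-dimensional array.
template<std::size_t N>
struct arr {
    std::array<double, N> data{};
    // Assigning a GEMM expression to an array evaluates it element-wise.
    arr& operator=(const gemm_expr<N>& e);
};

// Toy deferred-multiplication expression. Note: it holds references, so it
// must be consumed within the full expression that created it.
template<std::size_t N>
struct gemm_expr {
    const arr<N>& lhs;
    const arr<N>& rhs;
};

template<std::size_t N>
gemm_expr<N> operator*(const arr<N>& a, const arr<N>& b) { return {a, b}; }

// The "implicit conversion": the array target selects the Hadamard meaning.
template<std::size_t N>
arr<N>& arr<N>::operator=(const gemm_expr<N>& e) {
    for (std::size_t i = 0; i < N; ++i)
        data[i] = e.lhs.data[i] * e.rhs.data[i];
    return *this;
}
```

Usage mirrors the snippet in rule 2: `c = a * b;` where `c` is an array performs the element-wise product.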

    With these rules in place, API-level abstractions can then be added for clarity:

        // c = hadamard(a * b);
        template<expr_type expr_t> requires std::is_convertible_v<std::decay_t<expr_t>, mul_expr>
        constexpr auto hadamard(expr_t&& expr) noexcept
        {
            return static_cast<mul_expr>(std::forward<expr_t>(expr));
        }
    
        // support c = hadamard_mult(a, b); syntax
        template<object_type lhs_t, object_type rhs_t> requires is_same_dimensions_v<lhs_t, rhs_t>
        constexpr auto hadamard_mult(lhs_t&& lhs, rhs_t&& rhs) noexcept
        {
            return hadamard(std::forward<lhs_t>(lhs) * std::forward<rhs_t>(rhs));
        }
    

    Note that I've omitted some static_casts and paraphrased some of the syntax to get the idea across. The big realization to take away here is that C++ can leverage the type system to simplify the syntax fairly drastically, which is where it differs from MATLAB. There are many cases where

        c = a * b;
    

    can (and should) result in Hadamard multiplication. Furthermore, simple manipulation of the type system makes function-call syntax easy to support as well.

    A big thank you to those of you in the comments above for helping me brainstorm and arrive at this conclusion. I am quite satisfied with my library as a consequence of the discussion here.