
Saturday, 19 August 2017

Xtensor & Xtensor-blas Library - Numpy for C++


Intro - What & Why?

I am currently working on my own deep learning & optimization library in C++ for my research in the Data Science and Analytics course at Maynooth University, Ireland. While searching for an existing tensor library (eigen, armadillo and trilinos do not support tensors), I discovered Xtensor and Xtensor-blas, which have numpy-like syntax and are available for both C++ and Python.

Capabilities/Advantages (Xtensor to Numpy cheatsheet)

  • Numpy-Like Syntax

    typedef xt::xarray<double> dtensor;
    
    dtensor arr1 {{1.0, 2.0, 3.0},
                  {2.0, 5.0, 7.0},
                  {2.0, 5.0, 7.0}}; // 2D array of doubles
    
    dtensor arr2 {5.0, 6.0, 7.0}; // 1D array of doubles
    
    cout << arr2 << "\n"; // outputs: {5., 6., 7.}
  • Intuitive Syntax For Operations

    typedef xt::xarray<double> dtensor;
    
    dtensor arr1 {{1.0, 2.0, 3.0},
                  {2.0, 5.0, 7.0},
                  {2.0, 5.0, 7.0}}; // 2D array of doubles
    
    dtensor arr2 {5.0, 6.0, 7.0}; // 1D array of doubles
    
    // Addition, subtraction, multiplication, division
    // (all element-wise; arr2 broadcasts over the rows of arr1)
    dtensor sum  = arr1 + arr2;
    dtensor diff = arr1 - arr2;
    dtensor prod = arr1 * arr2; // element-wise, not a matrix product
    dtensor quot = arr1 / arr2;
    
    // Reshape (in place; the element count must stay the same)
    arr1.reshape({1, 9});
    cout << arr1 << "\n"; // outputs: {{1., 2., 3., 2., 5., 7., 2., 5., 7.}}
    
    // Logical operations (a and b stand for two dtensors of the same shape)
    dtensor filtered_out = xt::where(a > 5, a, b); // take from a where a > 5, else from b
    auto indices = xt::where(a > 5);               // indices where the condition holds
    auto logical_and = a && b;                     // element-wise logical AND
    auto equality = xt::equal(a, b);               // element-wise equality test
    
    // Random numbers
    xt::random::seed(0); // seed the global generator (returns void)
    xt::xarray<int> random_ints = xt::random::randint<int>({10, 10}); // 10x10 random ints
    
    // Basic operations (min and max are scalar bounds)
    dtensor summation_of_a = xt::sum(a); // 0-D array holding the total
    dtensor mean_of_a = xt::mean(a);
    dtensor abs_vals = xt::abs(a);
    dtensor clipped_vals = xt::clip(a, min, max); // limit values to [min, max]
    
    // Exponential & power functions
    dtensor exp_of_a = xt::exp(a);
    dtensor log_of_a = xt::log(a);
    dtensor a_raise_to_b = xt::pow(a, b); // element-wise a^b
  • Easy Linear Algebra (a complete worked example follows this list)

    // Vector products
    dtensor dot_product = xt::linalg::dot(a, b);     // inner / matrix product
    dtensor outer_product = xt::linalg::outer(a, b); // outer product
    
    // Inverse & solving systems of equations
    dtensor a_inv = xt::linalg::inv(a);          // matrix inverse
    dtensor a_pinv = xt::linalg::pinv(a);        // Moore-Penrose pseudo-inverse
    dtensor x = xt::linalg::solve(A, b);         // solve A x = b
    auto lstsq_result = xt::linalg::lstsq(A, b); // least squares (returns a tuple)
    
    // Decomposition
    auto svd_of_a = xt::linalg::svd(a); // returns the tuple (U, S, V^T)
    
    // Norms & determinants
    double matrix_norm = xt::linalg::norm(a, 2);
    double matrix_determinant = xt::linalg::det(a);
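
To tie these pieces together, here is a minimal self-contained sketch that solves a small linear system with xt::linalg::solve and checks the result (the matrix and vector values are made up for illustration):

    #include <iostream>
    #include <xtensor/xarray.hpp>
    #include <xtensor/xio.hpp>
    #include <xtensor-blas/xlinalg.hpp>
    
    typedef xt::xarray<double> dtensor;
    
    int main()
    {
        // A small invertible system: A x = b
        dtensor A {{3.0, 1.0},
                   {1.0, 2.0}};
        dtensor b {9.0, 8.0};
    
        dtensor x = xt::linalg::solve(A, b);
        std::cout << x << "\n"; // the exact solution is x = {2, 3}
    
        dtensor residual = xt::linalg::dot(A, x) - b; // should be (numerically) zero
        std::cout << residual << "\n";
        return 0;
    }

Compile it with the flags from the "Use In Your Code" section below.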

Installation

  • Install Xtensor
    cd ~ ; git clone https://github.com/QuantStack/xtensor
    cd xtensor; mkdir build && cd build;
    cmake -DBUILD_TESTS=ON -DDOWNLOAD_GTEST=ON ..
    make
    sudo make install
  • Install xtensor-blas
    cd ~ ; git clone https://github.com/QuantStack/xtensor-blas
    cd xtensor-blas; mkdir build && cd build;
    cmake ..
    make
    sudo make install

Use In Your Code

  • It is a header-only library; just include the headers you need:
    
    #include <xtensor/xarray.hpp>  // the dynamically-dimensioned xarray container
    #include <xtensor/xio.hpp>     // stream output (operator<<) for arrays
    #include <xtensor/xtensor.hpp> // the fixed-dimension xtensor container
    
  • Linking & Compilation flags (depending on which linalg routines you use, you may also need to link LAPACK with -llapack; a quick smoke test follows)
    g++ -std=c++14 ./myprog.cpp -lblas
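
As a quick smoke test of the whole setup, the sketch below uses only the header-only parts of xtensor (no BLAS linkage needed), so it should build with just g++ -std=c++14 smoke.cpp (the file name is mine, for illustration):

    #include <iostream>
    #include <xtensor/xarray.hpp>
    #include <xtensor/xio.hpp>
    #include <xtensor/xmath.hpp> // for xt::sum
    
    int main()
    {
        xt::xarray<double> a {{1.0, 2.0},
                              {3.0, 4.0}};
        xt::xarray<double> total = xt::sum(a);   // 0-D array holding the sum
        std::cout << a << "\n" << total << "\n"; // the sum is 10
        return 0;
    }

If this compiles and runs, the headers are installed correctly; add -lblas (and the xtensor-blas include) only when you start using the linalg routines.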

Where have I used it?

As mentioned in the intro, Xtensor and Xtensor-blas are the core components on which I have built my own deep learning & optimization library. For numerical work in C++, they are a huge step forward in ease of computation. In an upcoming series of posts I will show you how to create your own library using xtensor.

Next Post

In the next post, I will give an overview of the project architecture for your own library, and alongside it I will introduce BLAS routines.
