# **Xtensor & Xtensor-blas Library** - Numpy for C++

## *Intro - What & Why?*

I am currently working on my own deep learning & optimization library in C++, for my research in the Data Science and Analytics course at Maynooth University, Ireland. While searching for an existing tensor library **(Eigen, Armadillo, and Trilinos do not support tensors)**, I discovered Xtensor and Xtensor-blas, which have NumPy-like syntax and are available for both *C++* and *Python*.

### *Numpy Like Syntax*

```
#include <iostream>
#include <xtensor/xarray.hpp>
#include <xtensor/xio.hpp> // operator<< for printing tensors

typedef xt::xarray<double> dtensor;
dtensor arr1 {{1.0, 2.0, 3.0}, {2.0, 5.0, 7.0}, {2.0, 5.0, 7.0}}; // 2d array of doubles
dtensor arr2 {5.0, 6.0, 7.0}; // 1d array of doubles
std::cout << arr2 << "\n"; // prints the 1d array, e.g. { 5., 6., 7.}
```

### *Intuitive Syntax For Operations*

```
typedef xt::xarray<double> dtensor;

dtensor a {{1.0, 2.0, 3.0}, {2.0, 5.0, 7.0}, {2.0, 5.0, 7.0}};
dtensor b {{5.0, 6.0, 7.0}, {1.0, 1.0, 1.0}, {2.0, 2.0, 2.0}};

a.reshape({1, 9}); // reshape in place, like numpy's reshape
a.reshape({3, 3}); // back to the original shape
std::cout << a << "\n";

// element-wise arithmetic
dtensor sum        = a + b;
dtensor difference = a - b;
dtensor product    = a * b; // element-wise, not matrix, product
dtensor quotient   = a / b;

// conditionals and comparisons
dtensor filtered    = xt::where(a > 5, a, b); // take from a where a > 5, else from b
auto    indices     = xt::where(a > 5);       // indices where the condition holds
dtensor logical_and = a && b;                 // element-wise logical AND
dtensor equality    = xt::equal(a, b);        // element-wise equality test

// random numbers
xt::random::seed(0); // seed the generator (returns void, not a tensor)
xt::xarray<int> random_ints = xt::random::randint<int>({10, 10});

// reductions and math functions
dtensor summation = xt::sum(a);
dtensor mean_val  = xt::mean(a);
dtensor abs_vals  = xt::abs(a);
dtensor clipped   = xt::clip(a, 2.0, 5.0); // limit values to [2.0, 5.0]
dtensor exp_of_a  = xt::exp(a);
dtensor log_of_a  = xt::log(a);
dtensor a_pow_b   = xt::pow(a, b); // element-wise a^b
```

### *Easy Linear Algebra*

```
dtensor dot_product = xt::linalg::dot(a, b)
dtensor outer_product = xt::linalg::outer(a, b)
xt::linalg::inv(a)
xt::linalg::pinv(a)
xt::linalg::solve(A, b)
xt::linalg::lstsq(A, b)
dtensor SVD_of_a = xt::linalg::svd(a)
dtensor matrix_norm = xt::linalg::norm(a, 2)
dtensor matrix_determinant = xt::linalg::det(a)
```

## *Installation*

- Install Xtensor

```
cd ~ ; git clone https://github.com/QuantStack/xtensor
cd xtensor; mkdir build && cd build;
cmake -DBUILD_TESTS=ON -DDOWNLOAD_GTEST=ON ..
make
sudo make install
```

- Install xtensor-blas

```
cd ~ ; git clone https://github.com/QuantStack/xtensor-blas
cd xtensor-blas; mkdir build && cd build;
cmake ..
make
sudo make install
```

## *Use In Your Code*

## *Where have I used it?*

As mentioned in the intro, Xtensor and Xtensor-blas are the core components on which I have built my own deep learning & optimization library. They represent a monumental shift in the ease of numerical computation in C++. In an upcoming series of posts I will show you how to create your own library using xtensor.

## *Next Post*

In the next post, I will give an *overview of the architecture* of the project so you can design your own library, and alongside it I will introduce *BLAS routines*.