Cause-and-Effect in a Tensor Framework
Tensor factorizations have been successfully employed to represent the causal factor structure of data formation in econometrics, psychometrics, and chemometrics for over fifty years. More recently, the tensor factorization approach has been successfully employed to represent causal structure in computer vision and computer graphics, while in various machine learning tasks the tensor approach has been employed to develop predictive models.
Data tensor factorizations were first employed in computer vision to recognize people from the way they move (Human Motion Signatures, 2001) and from their facial images (TensorFaces, 2002), but they may be used to recognize any object or object attribute.
Natural images are the compositional consequence of multiple causal factors related to scene structure, illumination (i.e., the location and types of light sources), and imaging (i.e., viewpoint, viewing direction, lens type, and other camera characteristics). Most observed data are formed by the interaction of intrinsic and extrinsic hierarchical causal factors. Tensor algebra, the algebra of higher-order tensors, offers a potent mathematical framework for explicitly representing and disentangling the causal factors of data formation. Determining the causal factors of observable data allows intelligent agents to better understand and navigate the world, an important tenet of artificial intelligence and an important goal in data science. Theoretical evidence has shown that deep learning is a neural network equivalent of multilinear tensor decomposition, while a shallow network corresponds to linear tensor factorization (aka CANDECOMP/PARAFAC tensor factorization).
There are two main classes of tensor decompositions, which generalize different concepts of the matrix SVD:

Rank-K decomposition: represents a tensor as a sum of rank-1 terms

Rank-(R1, R2, ..., RM) decomposition: computes the orthonormal mode matrices
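As a rough illustration of the two classes, the sketch below (a minimal NumPy implementation, not the tutorial's reference code) builds a tensor as a sum of rank-1 terms (the rank-K/CP form) and computes orthonormal mode matrices via the higher-order SVD (the rank-(R1, R2, ..., RM) form). All dimensions and ranks are hypothetical.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: arrange the mode-n fibers of T as the rows of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Rank-(R1, ..., RM) decomposition (HOSVD): orthonormal mode matrices + core."""
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    # Core tensor: project T onto the mode subspaces, G = T x1 U1^T x2 U2^T ...
    G = T
    for m, Um in enumerate(U):
        G = np.moveaxis(np.tensordot(Um.T, np.moveaxis(G, m, 0), axes=1), 0, m)
    return G, U

def cp_reconstruct(weights, factors):
    """Rank-K form: a sum of K rank-1 terms, each an outer product of columns."""
    T = 0.0
    for k, w in enumerate(weights):
        term = w
        for A in factors:
            term = np.multiply.outer(term, A[:, k])
        T = T + term
    return T

# Example: a small random 3-way data tensor, decomposed at full multilinear rank
T = np.random.default_rng(0).standard_normal((4, 5, 6))
G, U = hosvd(T, ranks=(4, 5, 6))
R = G
for m, Um in enumerate(U):   # reconstruct: R = G x1 U1 x2 U2 x3 U3
    R = np.moveaxis(np.tensordot(Um, np.moveaxis(R, m, 0), axes=1), 0, m)
print(np.allclose(R, T))  # True: exact at full multilinear rank

# A tensor built as a sum of 2 rank-1 terms
A, B, C = np.ones((4, 2)), np.ones((5, 2)), np.ones((6, 2))
T2 = cp_reconstruct([1.0, 1.0], [A, B, C])
print(T2.shape)  # (4, 5, 6)
```

Truncating the per-mode ranks in `hosvd` yields a low multilinear rank approximation; truncating the number of rank-1 terms yields a low-rank (CP) approximation.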

These factorizations have been applied to observations in which an image has been (1) vectorized and treated as a point in a high-dimensional space, or (2) treated as a data matrix or a data tensor. The tutorial will address the advantages, disadvantages, and misconceptions of treating an image-as-a-vector versus an image-as-a-matrix.
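The two data organizations can be sketched as follows (a hypothetical collection with made-up dimensions, for illustration only):

```python
import numpy as np

# Hypothetical image collection: 3 people x 4 viewpoints, each image 8x8 pixels
images = np.random.default_rng(1).random((3, 4, 8, 8))

# (1) Image-as-a-vector: each image becomes a point in a 64-dimensional pixel
#     space; all observations are stacked into one data matrix.
data_matrix = images.reshape(3 * 4, 8 * 8)

# (2) Data tensor: the causal factors (person, viewpoint) remain on separate
#     modes, so a multilinear factorization can attribute the variability in
#     the data to each factor separately.
data_tensor = images.reshape(3, 4, 8 * 8)

print(data_matrix.shape)  # (12, 64)
print(data_tensor.shape)  # (3, 4, 64)
```

In the matrix organization the person/viewpoint labels are flattened into a single observation axis, whereas the tensor organization preserves them as distinct modes.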
We will discuss several multilinear factorizations that represent cause-and-effect, such as Multilinear PCA, Multilinear ICA, Block Tensor Decomposition, and Compositional Hierarchical Tensor Factorization, as well as the multilinear projection operator, which is important for performing recognition in a tensor framework. (Multilinear ICA should not be confused with the computation of the linear ICA basis vectors by employing the CP tensor decomposition on a tensor of higher-order statistics computed from a collection of observed data.)
Tensor factorizations can also be efficiently combined with deep learning using TensorLy, a high-level API for tensor algebra, decomposition, and regression. Deeply tensorized architectures result in state-of-the-art performance, large parameter savings, and computational speedups on a wide range of applications.
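To see where the parameter savings come from, consider the back-of-envelope arithmetic below: a dense 4-way convolutional kernel versus a rank-R CP factorization of it, which stores only one factor matrix per mode. The layer sizes and rank are hypothetical, chosen purely for illustration.

```python
# Parameter count for a dense 4-way convolutional kernel vs. its rank-R CP form.
def dense_params(out_ch, in_ch, kh, kw):
    return out_ch * in_ch * kh * kw

def cp_params(out_ch, in_ch, kh, kw, rank):
    # A rank-R CP factorization keeps one factor matrix per mode:
    # (out_ch x R) + (in_ch x R) + (kh x R) + (kw x R) parameters.
    return rank * (out_ch + in_ch + kh + kw)

full = dense_params(256, 256, 3, 3)              # 589,824 weights
compressed = cp_params(256, 256, 3, 3, rank=64)  # 64 * 518 = 33,152 weights
print(full, compressed, round(full / compressed, 1))  # 589824 33152 17.8
```

Here a rank-64 factorization shrinks the kernel by roughly 17x; the achievable rank, and hence the accuracy/compression trade-off, depends on the network and task.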
Date: Monday, June 17
Time: 1 PM
Location: 203 C
Tutorial Schedule:
Basic Concepts (1:00 - 2:15, Lieven De Lathauwer)

1. Basic definitions and properties:

Rank of higher-order tensors

Multilinear rank of higher-order tensors

2. Tensor factorizations:

Canonical Polyadic Decomposition ― Low-rank tensor approximation ― Latent Variable Analysis

Tucker Decomposition and Multilinear Singular Value Decomposition ― Low multilinear rank tensor approximation

Block Term Decomposition (time permitting)
Causality in a Tensor Framework: Tensor Factorizations for Computer Vision (2:30pm - 3:45pm, M. Alex O. Vasilescu)

Why should one treat an image as a vector rather than as a matrix or a tensor?
Which arguments for treating an image as a matrix or tensor are mathematically provably false?

Representing Cause-and-Effect from training data based on:

2nd-order statistics – Multilinear PCA (TensorFaces, Human Motion Signatures)

higher-order statistics – Multilinear ICA
(not to be confused with computing ICA by employing tensor methods,
an approach typically employed to reparameterize deep learning models)
kernel variants, etc.


Recognition: Determining the causal factors of data formation from unlabeled test data (one or more unlabeled images)

Representing a Hierarchy of Intrinsic and Extrinsic Causal Factors (to appear at KDD'19)
M.A.O. Vasilescu and E. Kim, "Compositional Hierarchical Tensor Factorization: Representing Hierarchical Intrinsic and Extrinsic Causal Factors," in The 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'19): Tensor Methods for Emerging Data Science Challenges, August 4-8, 2019, Anchorage, AK. ACM, New York, NY, USA.
Tensorizing Deep Neural Network Architectures (4:00 - 5:15pm, Jean Kossaifi)

Parametrizing neural networks with tensor decomposition

Higher-order operations as deep net layers

Improving deep net training

Domain adaptation with deep learning and tensor methods

Practical implementations with TensorLy
Speakers/Organizers:
Lieven De Lathauwer was educated at KU Leuven, Belgium. From 2000 to 2007 he was a Research Associate of the French Centre National de la Recherche Scientifique, research group CNRS-ETIS. He is currently Full Professor at KU Leuven, affiliated with both the Group Science, Engineering and Technology of Kulak and the group STADIUS of the Electrical Engineering Department (ESAT). He is an Associate Editor of the SIAM Journal on Matrix Analysis and Applications and has served as Associate Editor for the IEEE Transactions on Signal Processing. He is co-recipient of the 2018 IEEE SPS Signal Processing Magazine Best Paper Award. He is a Fellow of EURASIP, SIAM, and the IEEE. His research concerns the development of tensor tools for mathematical engineering. It centers on the following axes: 1) algebraic foundations; 2) numerical algorithms; 3) generic methods for signal processing, data analysis, and system modeling; and 4) specific applications. Keywords are linear and multilinear algebra, numerical algorithms, statistical signal and array processing, higher-order statistics, independent component analysis and blind source separation, harmonic retrieval, factor analysis, blind identification and equalization, big data, and data fusion. Algorithms have been made available as Tensorlab (with N. Vervliet, O. Debals, L. Sorber and M. Van Barel).
M. Alex O. Vasilescu received her education at the Massachusetts Institute of Technology and the University of Toronto. Vasilescu introduced the tensor paradigm for computer vision, computer graphics, and machine learning, and extended the tensor algebraic framework by generalizing concepts from linear algebra. Starting in the early 2000s, she reframed the analysis, recognition, synthesis, and interpretability of sensory data as multilinear tensor factorization problems suitable for mathematically representing cause-and-effect and demonstrably disentangling the causal factors of observable data. The tensor framework is a powerful paradigm whose utility and value have been further underscored by recently provided theoretical evidence showing that deep learning is a neural network approximation of multilinear tensor factorization.
Vasilescu's face recognition research, known as TensorFaces, has been funded by the TSWG, the Department of Defense's Combating Terrorism Support Program, and by IARPA, the Intelligence Advanced Research Projects Activity. Her work was featured on the cover of Computer World and in articles in the New York Times, Washington Times, etc. MIT's Technology Review Magazine named her a TR100 honoree, and the National Academy of Sciences co-awarded her the Keck Futures Initiative Grant.
Jean Kossaifi was educated at Imperial College London. His research focuses mainly on face analysis and facial affect estimation in natural conditions, a field which bridges the gap between computer vision and machine learning. He is currently working on tensor methods and how to efficiently combine them with deep learning. He is the creator of TensorLy, a high-level API for tensor methods and deep tensorized neural networks in Python that aims to make tensor learning simple and accessible.