
Cause-and-Effect in a Tensor Framework

Natural images are the compositional consequence of multiple causal factors related to scene structure, illumination, and imaging. More generally, most observed data are formed by the interaction of multiple causal factors. Tensor algebra, the algebra of higher-order tensors, offers a potent mathematical framework for explicitly representing and disentangling the causal factors of data formation. Determining the causal factors of observable data allows intelligent agents to better understand and navigate the world, an important tenet of artificial intelligence and an important goal in data science. Theoretical evidence has shown that a deep neural network is equivalent to a multilinear tensor decomposition, while a shallow network corresponds to a linear tensor factorization (a.k.a. CANDECOMP/PARAFAC tensor factorization).


Date: Monday, June 17

Time: 1 PM

Location: 203 C


Tensor factorizations have been successfully applied in numerous computer vision, signal processing, computer graphics, and machine learning tasks. Data tensor modeling was first employed in computer vision to recognize people from the way they move (Human Motion Signatures, 2001) and from their facial images (TensorFaces, 2002), but it may be used to recognize any object or object attribute.

There are two main classes of tensor decompositions, which generalize different concepts of the matrix SVD:

  • rank-K decomposition – represents a tensor as a sum of K rank-1 terms

  • rank-(R1, R2, ..., RM) decomposition – computes orthonormal mode matrices

We will address both, in addition to various tensor factorizations with different constraints.
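As a minimal NumPy sketch (synthetic sizes, not tied to any dataset), the two classes can be illustrated side by side: a tensor built as a sum of K rank-1 outer products, and its orthonormal mode matrices obtained from the SVDs of its mode-n unfoldings, in the spirit of the higher-order SVD:

```python
import numpy as np

# Rank-K (CP) form: a 3rd-order tensor as a sum of K rank-1 terms,
# each the outer product of one vector per mode.
rng = np.random.default_rng(0)
K, I, J, L = 2, 4, 5, 6
a = rng.standard_normal((K, I))
b = rng.standard_normal((K, J))
c = rng.standard_normal((K, L))
T = sum(np.einsum('i,j,l->ijl', a[k], b[k], c[k]) for k in range(K))

# Rank-(R1, R2, R3) (Tucker) form: orthonormal mode matrices from the
# SVDs of the three mode-n unfoldings.
def unfold(X, mode):
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

U = [np.linalg.svd(unfold(T, n))[0] for n in range(3)]
core = np.einsum('ijl,ip,jq,lr->pqr', T, U[0], U[1], U[2])

# With full (square, orthogonal) mode matrices the reconstruction is exact.
T_hat = np.einsum('pqr,ip,jq,lr->ijl', core, U[0], U[1], U[2])
print(np.allclose(T, T_hat))  # True
```

Truncating the columns of each mode matrix gives a low multilinear rank approximation; keeping only the strongest rank-1 terms gives a low-rank (CP) approximation.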



We will also discuss several multilinear representations that represent cause-and-effect, such as Multilinear PCA, Multilinear ICA, Block Tensor Decomposition, and Compositional Tensor Factorization, as well as the multilinear projection operator, which is important for performing recognition in a tensor framework. (Multilinear-ICA should not be confused with the computation of the linear ICA basis vectors by employing the CP tensor decomposition on a tensor of higher-order statistics computed from a collection of observed data.)

 

Tensor factorizations can also be efficiently combined with deep learning using TensorLy, a high-level API for tensor algebra, decomposition, and regression. Deeply tensorized architectures yield state-of-the-art performance, large parameter savings, and computational speed-ups on a wide range of applications.
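As a rough illustration of where the parameter savings come from (a NumPy sketch with made-up layer sizes, not TensorLy's actual API), replacing a dense weight matrix with a low-rank factorization shrinks the parameter count from m·n to r·(m + n), and the full matrix never needs to be formed:

```python
import numpy as np

# Hypothetical sizes for the sketch: a 512 x 512 dense layer replaced by a
# rank-16 factorization W ~= A @ B, analogous in spirit to how tensorized
# layers parametrize higher-order weights with a decomposition.
rng = np.random.default_rng(1)
m, n, r = 512, 512, 16

A = rng.standard_normal((m, r)) / np.sqrt(r)
B = rng.standard_normal((r, n))
x = rng.standard_normal(n)

# Apply the factorized layer without materializing the m x n matrix:
y = A @ (B @ x)

dense_params = m * n            # 262144
factored_params = r * (m + n)   # 16384, a 16x reduction
print(dense_params, factored_params)
```

The same counting argument, applied mode by mode, is what makes tensor-decomposed convolutional and fully connected layers compact.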

 

Tutorial Schedule:


Basic Concepts (1:00pm - 2:15pm, Lieven De Lathauwer)

  1. Basic definitions and properties:

    •  Rank of higher-order tensors

    •  Multilinear rank of higher-order tensors

  2. Tensor factorizations:

    •  Canonical Polyadic Decomposition ― Low-rank tensor approximation ― Latent Variable Analysis

    •  Tucker Decomposition and Multilinear Singular Value Decomposition ― Low multilinear rank tensor approximation

    •  Block Term Decomposition (time permitting)
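To make the two rank notions above concrete, here is a small NumPy sketch (synthetic tensor, illustrative sizes): the multilinear rank is the tuple of ranks of the mode-n unfoldings, and it can differ from mode to mode:

```python
import numpy as np

# Build a tensor with multilinear rank (2, 2, 1) by multiplying a small
# core with generic (random, hence generically full-rank) mode matrices.
def unfold(X, mode):
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

rng = np.random.default_rng(2)
core = rng.standard_normal((2, 2, 1))
U1 = rng.standard_normal((4, 2))
U2 = rng.standard_normal((5, 2))
U3 = rng.standard_normal((6, 1))
T = np.einsum('pqr,ip,jq,kr->ijk', core, U1, U2, U3)

# Multilinear rank = ranks of the three mode-n unfoldings.
mlrank = tuple(np.linalg.matrix_rank(unfold(T, n)) for n in range(3))
print(mlrank)  # (2, 2, 1)
```

By contrast, the (CP) rank is a single number: the minimal number of rank-1 terms whose sum reproduces the tensor.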

[Figure: human motion signatures]

Tensorizing Deep Neural Network Architectures (4:00pm - 5:15pm, Jean Kossaifi)

  1.  Parametrizing neural networks with tensor decomposition

  2.  Higher order operations as deep net layers

  3.  Improving deep net training

  4.  Domain adaptation with deep learning and tensor methods

  5.  Practical implementations with TensorLy

[Figure: domain adaptation]

Causality in a Tensor Framework: Tensor Factorizations for Computer Vision (2:30pm - 3:45pm, M. Alex O. Vasilescu)

  1. Why should one treat an image as a vector, rather than a matrix or a tensor? Which arguments for treating an image as a matrix or tensor are mathematically provably false?

  2. Representing Cause-and-Effect from training data based on

    • 2nd-order statistics – Multilinear-PCA (TensorFaces, Human Motion Signatures)

    • higher-order statistics – Multilinear-ICA (not to be confused with computing ICA by employing tensor methods, an approach typically employed to reparameterize deep learning models)

    • kernel variants, etc.

  3. Recognition – determining the causal factors of data formation from unlabeled test data (one or more unlabeled images)

    • Multilinear Projection

  4. Compositional Hierarchical Tensor Factorization – representing an object hierarchy with a unified tensor model of parts and wholes (to appear in KDD '19; time permitting)
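The recognition step can be sketched in NumPy under simplifying assumptions (synthetic data, a plain HOSVD-style model, and a brute-force search over views; this is an illustration of the idea, not the full multilinear projection algorithm):

```python
import numpy as np

# Synthetic "TensorFaces"-style setup: a data tensor of vectorized images
# indexed by person x view x pixel. All sizes here are made up.
rng = np.random.default_rng(3)
P, V, X = 5, 3, 64
D = rng.standard_normal((P, V, X))

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# Orthonormal mode matrices (people, views, pixels) and the core tensor.
U = [np.linalg.svd(unfold(D, n), full_matrices=False)[0] for n in range(3)]
Z = np.einsum('pvx,pi,vj,xk->ijk', D, U[0], U[1], U[2])

def recognize(image):
    """Infer the person factor of one unlabeled image by trying each view."""
    best, best_score = None, -np.inf
    for v in range(V):
        # Basis spanning images of every person under candidate view v.
        B = np.einsum('ijk,j,xk->ix', Z, U[1][v], U[2])   # people x pixels
        coef, *_ = np.linalg.lstsq(B.T, image, rcond=None)
        # Match the inferred coefficients against each person's signature.
        for p in range(P):
            row = U[0][p]
            s = coef @ row / (np.linalg.norm(coef) * np.linalg.norm(row))
            if s > best_score:
                best, best_score = p, s
    return best

print(recognize(D[2, 1]))  # 2: a training image matches its own person
```

The key point is that the causal factor (person identity) is read off from coefficient vectors in the learned mode subspaces, not from raw pixels.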

 

Speakers/Organizers:

Lieven De Lathauwer was educated at KU Leuven, Belgium. From 2000 to 2007 he was a Research Associate of the French Centre National de la Recherche Scientifique, research group CNRS-ETIS. He is currently a Full Professor at KU Leuven, affiliated with both the Group Science, Engineering and Technology of Kulak and with the STADIUS group of the Electrical Engineering Department (ESAT). He is an Associate Editor of the SIAM Journal on Matrix Analysis and Applications and has served as Associate Editor for the IEEE Transactions on Signal Processing. He is a co-recipient of the 2018 IEEE SPS Signal Processing Magazine Best Paper Award and a Fellow of EURASIP, SIAM, and the IEEE. His research concerns the development of tensor tools for mathematical engineering. It centers on the following axes: 1) algebraic foundations; 2) numerical algorithms; 3) generic methods for signal processing, data analysis, and system modeling; and 4) specific applications. Keywords are linear and multilinear algebra, numerical algorithms, statistical signal and array processing, higher-order statistics, independent component analysis and blind source separation, harmonic retrieval, factor analysis, blind identification and equalization, big data, and data fusion. Algorithms have been made available in Tensorlab (www.tensorlab.net) (with N. Vervliet, O. Debals, L. Sorber and M. Van Barel).

M. Alex O. Vasilescu received her education at the Massachusetts Institute of Technology and the University of Toronto. Vasilescu introduced the tensor paradigm to computer vision, computer graphics, and machine learning, and extended the tensor algebraic framework by generalizing concepts from linear algebra. Starting in the early 2000s, she re-framed the analysis, recognition, synthesis, and interpretability of sensory data as multilinear tensor factorization problems suitable for mathematically representing cause-and-effect and demonstrably disentangling the causal factors of observable data. The tensor framework is a powerful paradigm whose utility and value have been further underscored by recent theoretical evidence showing that deep learning is a neural network approximation of multilinear tensor factorization.

 

Vasilescu’s face recognition research, known as TensorFaces, has been funded by the TSWG, the Department of Defense's Combating Terrorism Support Program, and by IARPA, the Intelligence Advanced Research Projects Activity. Her work was featured on the cover of Computer World and in articles in the New York Times, the Washington Times, etc. MIT's Technology Review magazine named her a TR100 honoree, and the National Academy of Sciences co-awarded her a Keck Futures Initiative grant.

Jean Kossaifi was educated at Imperial College London. His research is mainly focused on face analysis and facial affect estimation in natural conditions, a field which bridges the gap between computer vision and machine learning. He is currently working on tensor methods and how to efficiently combine them with deep learning. He is the creator of TensorLy, a high-level API for tensor methods and deep tensorized neural networks in Python that aims to make tensor learning simple and accessible.
