

Cause-and-Effect in a Tensor Framework

Agenda

Tensor factorizations have been successfully employed to represent the causal factor structure of data formation in econometrics, psychometrics, and chemometrics for over fifty years. More recently, the tensor factorization approach has been successfully employed to represent causal structure in computer vision and computer graphics, while in various machine learning tasks the tensor approach has been employed to develop predictive models.

Data tensor factorizations were first employed in computer vision to recognize people from the way they move (Human Motion Signatures, 2001) and from their facial images (TensorFaces, 2002), but they may be used to recognize any object or object attribute.

Natural images are the compositional consequence of multiple causal factors related to scene structure, illumination (i.e., the location and types of light sources), and imaging (i.e., viewpoint, viewing direction, lens type, and other camera characteristics). Most observed data are formed by the interaction of intrinsic and extrinsic hierarchical causal factors. Tensor algebra, the algebra of higher-order tensors, offers a potent mathematical framework for explicitly representing and disentangling the causal factors of data formation. Determining the causal factors of observable data allows intelligent agents to better understand and navigate the world, an important tenet of artificial intelligence and an important goal in data science. Theoretical evidence has shown that deep learning is a neural network approximation of multilinear tensor decomposition, while a shallow network corresponds to linear tensor factorization (a.k.a. CANDECOMP/PARAFAC tensor factorization).


There are two main classes of tensor decompositions, which generalize different concepts of the matrix SVD (a short code sketch follows the list):

  • rank-K decomposition - represents a tensor as a sum of K rank-1 terms

  • rank-(R1, R2, ..., RM) decomposition - computes the orthonormal mode matrices, one per mode, together with a core tensor
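
For concreteness, both decompositions can be computed with TensorLy, assuming a recent version of the library; the tensor dimensions and rank values below are invented for illustration:

import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac, tucker

# A small random third-order data tensor (dimensions are arbitrary)
X = tl.tensor(np.random.rand(5, 4, 60))

# Rank-K (canonical polyadic) decomposition: X as a sum of K rank-1 terms
cp = parafac(X, rank=3)              # weights plus one factor matrix per mode
X_cp = tl.cp_to_tensor(cp)           # reconstruction from the rank-1 terms

# Rank-(R1, R2, R3) (Tucker) decomposition: a core tensor multiplied in
# each mode by an orthonormal mode matrix
core, factors = tucker(X, rank=[3, 3, 10])
X_tk = tl.tucker_to_tensor((core, factors))

# Relative approximation errors of the two models
print(tl.norm(X - X_cp) / tl.norm(X))
print(tl.norm(X - X_tk) / tl.norm(X))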

These factorizations have been applied to observations in which an image has been (1) vectorized and treated as a point in a high-dimensional space, or (2) treated as a data matrix or a data tensor. The tutorial will address the advantages, disadvantages, and misconceptions of treating an image as a vector versus as a matrix or tensor.
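
To make the two data arrangements concrete, here is a minimal NumPy sketch (all dimensions below are made up):

import numpy as np

# Hypothetical ensemble: 100 grayscale images of size 32 x 32
images = np.random.rand(100, 32, 32)

# (1) image-as-a-vector: each image becomes a point in R^1024, and the
#     ensemble is a 100 x 1024 data matrix (classical PCA/ICA operate here)
data_matrix = images.reshape(100, -1)

# (2) image-as-a-matrix / image-as-a-tensor: the spatial structure is
#     retained, and the ensemble is a 100 x 32 x 32 data tensor
data_tensor = images

# When the images vary with explicit causal factors (say, 10 people x
# 5 illuminations x 2 views), even vectorized images can be organized
# into a higher-order data tensor with one mode per causal factor
causal_tensor = data_matrix.reshape(10, 5, 2, 1024)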

We will discuss several multilinear factorizations that represent cause-and-effect, such as Multilinear PCA, Multilinear ICA, the Block Tensor Decomposition, and Compositional Hierarchical Tensor Factorization, as well as the multilinear projection operator, which is important for performing recognition in a tensor framework. (Multilinear-ICA should not be confused with the computation of the linear ICA basis vectors by employing the CP tensor decomposition on a tensor of higher-order statistics computed from a collection of observed data.)
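
As a hedged, TensorFaces-style sketch of these ideas, the snippet below uses a Tucker factorization from TensorLy; every name, dimension, and rank is invented, and the actual multilinear projection also estimates the unknown illumination and view rather than assuming them known:

import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker
from tensorly.tenalg import multi_mode_dot

# Hypothetical labeled training tensor: 10 people x 5 illuminations x
# 3 views x 1024 pixels (vectorized images)
D = tl.tensor(np.random.rand(10, 5, 3, 1024))

# Multilinear factorization: one mode matrix per causal factor, plus one
# for the pixel (measurement) mode
core, factors = tucker(D, rank=[10, 5, 3, 50])
U_people, U_illum, U_view, U_pix = factors

# Fold everything except the people mode into a basis tensor B, so that
# D is (approximately) B multiplied along the people mode by U_people
B = multi_mode_dot(core, [U_illum, U_view, U_pix], modes=[1, 2, 3])

# Crude stand-in for recognition: assume the test image's illumination
# and view are known, solve for its person coefficients by least squares,
# and match them against the rows of U_people
test = D[3, 2, 1]                  # pretend this image is unlabeled
basis = B[:, 2, 1, :]              # 10 x 1024 person-mode basis
coeffs, *_ = np.linalg.lstsq(basis.T, test, rcond=None)
print("recognized person:", (U_people @ coeffs).argmax())   # ideally 3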

Tensor factorizations can also be efficiently combined with deep learning using TensorLy, a high-level API for tensor algebra, decomposition, and regression. Deeply tensorized architectures yield state-of-the-art performance, large parameter savings, and computational speed-ups on a wide range of applications.
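
As a rough illustration of where the parameter savings come from (the layer shape and ranks below are hypothetical, and random weights stand in for trained ones):

import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

# Hypothetical weight tensor of a layer mapping 8x8x32 activations to
# 128 outputs, kept in 4th-order form instead of being flattened
W = tl.tensor(np.random.rand(8, 8, 32, 128))

# Replace W with a low multilinear-rank Tucker factorization
core, factors = tucker(W, rank=[4, 4, 8, 32])

full_params = W.size
tucker_params = core.size + sum(f.size for f in factors)
print(f"parameters: {full_params} -> {tucker_params}")   # 262144 -> 8512

# A tensorized layer applies the small factors directly; reconstructing
# the full weight here only checks the approximation quality
W_hat = tl.tucker_to_tensor((core, factors))
print("relative error:", float(tl.norm(W - W_hat) / tl.norm(W)))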

Date: Monday, June 17

Time: 1 PM

Location: 203 C

Tutorial Schedule:


Basic Concepts (1:00 - 2:15, Lieven De Lathauwer)

  1. Basic definitions and properties:

    •  Rank of higher-order tensors

    •  Multilinear rank of higher-order tensors


  2. Tensor factorizations:

  • Canonical Polyadic Decomposition ― Low-rank tensor approximation ― Latent Variable Analysis

  • Tucker Decomposition and Multilinear Singular Value Decomposition ― Low multilinear rank tensor approximation

  • Block Term Decomposition (time permitting)

Causality in a Tensor Framework: Tensor Factorizations for Computer Vision (2:30 - 3:45, M. Alex O. Vasilescu)

  1. Why should one treat an image as a vector rather than as a matrix or a tensor?
     Which arguments for treating an image as a matrix or tensor are mathematically provably false?
     

  2. Representing Cause-and-Effect from training data based on

    • 2nd-order statistics – Multilinear-PCA (TensorFaces, Human Motion Signatures)

    • higher-order statistics – Multilinear-ICA (not to be confused with computing ICA by employing tensor methods, an approach typically employed to reparameterize deep learning models)

    • kernel variants, etc.
       

  3. Recognition: determining the causal factors of data formation from unlabeled test data (one or more unlabeled images)

  4. Representing a Hierarchy of Intrinsic and Extrinsic Causal Factors (to appear at KDD'19)

    "Compositional Hierarchical Tensor Factorization: Representing Hierarchical Intrinsic and Extrinsic Causal Factors”, M.A.O. Vasilescu, E.Kim, In The 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD’19): Tensor Methods for Emerging Data Science Challenges, August 04-08, 2019, Anchorage, AK. ACM, New York, NY, USA

Tensorizing Deep Neural Network Architectures (4:00 - 5:15, Jean Kossaifi)

  1.  Parametrizing neural networks with tensor decomposition

  2.  Higher order operations as deep net layers

  3.  Improving deep net training

  4.  Domain adaptation with deep learning and tensor methods

  5.  Practical implementations with TensorLy

Organizers

Lieven De Lathauwer was educated at KU Leuven, Belgium. From 2000 to 2007 he was a Research Associate of the French Centre National de la Recherche Scientifique, research group CNRS-ETIS. He is currently a Full Professor at KU Leuven, affiliated with both the Group Science, Engineering and Technology of Kulak and the STADIUS group of the Electrical Engineering Department (ESAT). He is an Associate Editor of the SIAM Journal on Matrix Analysis and Applications and has served as an Associate Editor of the IEEE Transactions on Signal Processing. He is a co-recipient of the 2018 IEEE SPS Signal Processing Magazine Best Paper Award and a Fellow of EURASIP, SIAM, and the IEEE. His research concerns the development of tensor tools for mathematical engineering. It centers on the following axes: 1) algebraic foundations; 2) numerical algorithms; 3) generic methods for signal processing, data analysis, and system modeling; and 4) specific applications. Keywords are linear and multilinear algebra, numerical algorithms, statistical signal and array processing, higher-order statistics, independent component analysis and blind source separation, harmonic retrieval, factor analysis, blind identification and equalization, big data, and data fusion. Algorithms have been made available as Tensorlab (www.tensorlab.net) (with N. Vervliet, O. Debals, L. Sorber and M. Van Barel).

M. Alex O. Vasilescu received her education at the Massachusetts Institute of Technology and the University of Toronto. Vasilescu introduced the tensor paradigm to computer vision, computer graphics, and machine learning, and extended the tensor algebraic framework by generalizing concepts from linear algebra. Starting in the early 2000s, she re-framed the analysis, recognition, synthesis, and interpretability of sensory data as multilinear tensor factorization problems suitable for mathematically representing cause-and-effect and demonstrably disentangling the causal factors of observable data. The tensor framework is a powerful paradigm whose utility and value have been further underscored by recent theoretical evidence showing that deep learning is a neural network approximation of multilinear tensor factorization.

 

Vasilescu's face recognition research, known as TensorFaces, has been funded by the TSWG, the Department of Defense's Combating Terrorism Support Program, and by IARPA, the Intelligence Advanced Research Projects Activity. Her work was featured on the cover of Computer World and in articles in the New York Times, the Washington Times, etc. MIT's Technology Review magazine named her a TR100 honoree, and the National Academy of Sciences co-awarded her a Keck Futures Initiative grant.

Jean Kossaifi was educated at Imperial College London. His research is mainly focused on face analysis and facial affect estimation in natural conditions, a field which bridges the gap between computer vision and machine learning. He is currently working on tensor methods, and how to efficiently combine these with deep learning. He is the creator of TensorLy, a high-level API for tensor methods and deep tensorized neural networks in Python that aims at making tensor learning simple and accessible. 
