Home
- Introduction to CCA, GCCA and MCCA
  i. Summary
  ii. CCA
  iii. CCA for multiple datasets
- Proposed method: Deep MCCA
- Performance Evaluation: Affinity measures
- Synthetic data experiments
  i. Data generation
  ii. Effect of batch size and embedding dimensions
  iii. Comparison with a supervised DNN model
- Real world data experiments
  i. Baseline methods
  ii. Noisy MNIST - Latin script
  iii. Noisy MNIST - Bangla script
We propose Deep Multiset Canonical Correlation Analysis (dMCCA) as an extension to representation learning using CCA when the underlying signal is observed across multiple (more than two) modalities. We use a deep learning framework to learn non-linear transformations from the different modalities to a shared subspace such that the representations maximize the ratio of between-modality to within-modality covariance of the observations. Unlike linear discriminant analysis, we do not need class information to learn these representations, and we show that this model can be trained for complex data using mini-batches. Using synthetic data experiments, we show that dMCCA can effectively recover the common signal across the different modalities when it is corrupted by multiplicative and additive noise. We also analyze the sensitivity of our model in recovering the correlated components with respect to mini-batch size and the dimension of the embeddings. Performance evaluation on noisy handwritten datasets shows that our model outperforms other CCA-based approaches and is comparable to deep neural network models trained end-to-end on these datasets.
Canonical correlation analysis (CCA) finds a pair of projections, one per dataset, such that the projected datasets are maximally correlated. The mathematical formulation is given below:
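For reference, the standard two-view CCA objective can be written as follows. The notation here is assumed, not taken from this project: $X_1, X_2$ are the two mean-centered datasets, $\Sigma_{12}$ is their cross-covariance, and $\Sigma_{11}, \Sigma_{22}$ are the auto-covariances.

```latex
(w_1^*, w_2^*) \;=\; \arg\max_{w_1,\, w_2}\;
  \frac{w_1^\top \Sigma_{12}\, w_2}
       {\sqrt{w_1^\top \Sigma_{11}\, w_1}\;\sqrt{w_2^\top \Sigma_{22}\, w_2}}
```

Subsequent canonical directions are found with the same objective, subject to being uncorrelated with the previously found ones.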
CCA for more than two datasets
Generalized CCA (GCCA)
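One common formulation of GCCA is the MAXVAR variant, sketched below with generic notation (which may differ from the conventions used elsewhere in this project): it seeks a shared representation $G$ that every view can approximate linearly.

```latex
\min_{G,\; U_1, \dots, U_M}\; \sum_{m=1}^{M} \left\| G - X_m^\top U_m \right\|_F^2
\quad \text{s.t.} \quad G^\top G = I
```

Here $X_m$ is the data matrix of the $m$-th view and $U_m$ its view-specific projection; the orthogonality constraint on $G$ rules out the trivial zero solution.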
Multiset CCA (MCCA)
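Multiset CCA generalizes the two-view objective to $M$ views, for example by maximizing the sum of pairwise correlations (the SUMCOR formulation; the symbols below follow standard references and are an assumption, not necessarily this project's notation):

```latex
\max_{w_1, \dots, w_M}\; \sum_{m \neq l} w_m^\top \Sigma_{ml}\, w_l
\quad \text{s.t.} \quad \sum_{m=1}^{M} w_m^\top \Sigma_{mm}\, w_m = 1
```

where $\Sigma_{ml}$ is the cross-covariance between views $m$ and $l$. For $M = 2$ this reduces to the two-view CCA objective up to normalization.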
Deep MCCA formulation
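The objective described in the abstract — the ratio of between-modality to within-modality covariance of the network embeddings, computed over a mini-batch — can be sketched as follows. This is a minimal NumPy illustration of that covariance computation and the resulting generalized eigenvalue problem; the function name, regularization constant, and overall structure are illustrative assumptions and are not taken from the repository (in dMCCA this computation would sit inside the loss of the multi-branch network and be backpropagated through).

```python
import numpy as np

def mcca_eigenvalues(views, eps=1e-8):
    """Inter-set correlation spectrum for a mini-batch of embeddings.

    views: list of M arrays, each of shape (N, d) -- N samples in the
    mini-batch, d-dimensional embeddings from each of the M modalities.
    Returns the eigenvalues of Rw^{-1} Rb in descending order, where Rb
    and Rw are the between- and within-modality covariance matrices.
    """
    M = len(views)
    N, d = views[0].shape
    # Center each modality's embeddings over the mini-batch
    centered = [v - v.mean(axis=0, keepdims=True) for v in views]
    # Pairwise (cross-)covariances C[m][l] = Y_m^T Y_l / (N - 1)
    C = [[ym.T @ yl / (N - 1) for yl in centered] for ym in centered]
    # Within-modality covariance: average of the M auto-covariances
    Rw = sum(C[m][m] for m in range(M)) / M
    # Between-modality covariance: average of the M(M-1) cross terms
    Rb = sum(C[m][l] for m in range(M)
             for l in range(M) if m != l) / (M * (M - 1))
    # Generalized eigenvalue problem Rb v = lambda Rw v; training would
    # maximize the leading eigenvalues (the inter-set correlations)
    Rw_reg = Rw + eps * np.eye(d)  # small ridge for invertibility
    eigvals = np.linalg.eigvals(np.linalg.solve(Rw_reg, Rb))
    return np.sort(eigvals.real)[::-1]
```

As a sanity check on the definition: if all M views are identical, Rb equals Rw and every eigenvalue is (up to the ridge term) exactly 1, the maximum attainable inter-set correlation.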