
Spatial-spectral structured sparse low-rank representation for hyperspectral image super-resolution
- Author
- Jize Xue, Yong-Qiang Zhao, Yuanyang Bu, Wenzhi Liao (UGent), Jonathan Cheung-Wai Chan and Wilfried Philips (UGent)
- Abstract
- Hyperspectral image super-resolution by fusing a high-resolution multispectral image (HR-MSI) and a low-resolution hyperspectral image (LR-HSI) aims at reconstructing the high-resolution spatial-spectral information of the scene. Existing methods, mostly based on spectral unmixing and sparse representation, are often developed from a low-level vision task perspective and cannot sufficiently exploit the spatial and spectral priors available from higher-level analysis. To address this issue, this paper proposes a novel HSI super-resolution method that fully considers the spatial/spectral subspace low-rank relationships between the available HR-MSI/LR-HSI and the latent HSI. Specifically, it relies on a new subspace clustering method named "structured sparse low-rank representation" (SSLRR), which represents the data samples as linear combinations of the bases in a given dictionary, where the sparse structure is induced by a low-rank factorization of the affinity matrix. We then apply the SSLRR model to learn these structured sparse low-rank representations along the spatial and spectral domains from the MSI and HSI inputs. Using the learned spatial and spectral low-rank structures, we formulate the proposed HSI super-resolution model as a variational optimization problem, which can be readily solved by the ADMM algorithm. Compared with state-of-the-art hyperspectral super-resolution methods, the proposed method shows better performance on three benchmark datasets in terms of both visual and quantitative evaluation. (An illustrative ADMM sketch of a related generic model is given after the keyword list below.)
- Keywords
- Computer Graphics and Computer-Aided Design, Software, Superresolution, Sparse matrices, Spatial resolution, Dictionaries, Correlation, Tensors, Task analysis, Hyperspectral and multispectral images fusion, low-rank representation, structured sparse, subspace low-rank recovery, affinity matrix
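For readers unfamiliar with this family of models, the snippet below is a minimal NumPy/ADMM sketch of a generic sparse-plus-low-rank representation problem, min_Z ||Z||_* + λ||Z||_1 s.t. X = DZ. It is an illustration only, not the authors' SSLRR model, which additionally induces the sparse structure through a low-rank factorization of the affinity matrix and couples the spatial and spectral subspaces; the dictionary D, the penalty λ, and the ADMM parameter μ are placeholder assumptions.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def soft(M, tau):
    """Element-wise soft thresholding: prox of tau * l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def sparse_lowrank_representation(X, D, lam=0.1, mu=1.0, n_iters=200, tol=1e-6):
    """Illustrative generic model (not the paper's exact SSLRR objective):
        min_Z ||Z||_* + lam * ||Z||_1   s.t.  X = D @ Z
    solved by ADMM with the splitting J = Z (nuclear norm) and S = Z (l1 norm).
    """
    k = D.shape[1]
    Z = np.zeros((k, X.shape[1]))
    J = np.zeros_like(Z)
    S = np.zeros_like(Z)
    Y1 = np.zeros_like(X)   # multiplier for X = D Z
    Y2 = np.zeros_like(Z)   # multiplier for Z = J
    Y3 = np.zeros_like(Z)   # multiplier for Z = S
    A = D.T @ D + 2.0 * np.eye(k)   # system matrix of the Z subproblem
    for _ in range(n_iters):
        # Z-step: quadratic subproblem, solved as a linear system
        rhs = D.T @ (X + Y1 / mu) + (J - Y2 / mu) + (S - Y3 / mu)
        Z = np.linalg.solve(A, rhs)
        # J-step: nuclear-norm prox via singular value thresholding
        J = svt(Z + Y2 / mu, 1.0 / mu)
        # S-step: l1 prox via soft thresholding
        S = soft(Z + Y3 / mu, lam / mu)
        # dual ascent on the three constraints
        R1 = X - D @ Z
        Y1 += mu * R1
        Y2 += mu * (Z - J)
        Y3 += mu * (Z - S)
        if max(np.abs(R1).max(), np.abs(Z - J).max(), np.abs(Z - S).max()) < tol:
            break
    return Z
```

Each iteration alternates a least-squares update for Z with a nuclear-norm prox (singular value thresholding) and an l1 prox (soft thresholding); this prox-splitting pattern is the same basic mechanism by which a variational formulation of the kind described in the abstract is handled with ADMM.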
Downloads
- (...).pdf | full text (Published version) | UGent only | 8.77 MB
Citation
Please use this url to cite or link to this publication: http://hdl.handle.net/1854/LU-8719982
- MLA
- Xue, Jize, et al. “Spatial-Spectral Structured Sparse Low-Rank Representation for Hyperspectral Image Super-Resolution.” IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 30, 2021, pp. 3084–97, doi:10.1109/tip.2021.3058590.
- APA
- Xue, J., Zhao, Y.-Q., Bu, Y., Liao, W., Chan, J. C.-W., & Philips, W. (2021). Spatial-spectral structured sparse low-rank representation for hyperspectral image super-resolution. IEEE TRANSACTIONS ON IMAGE PROCESSING, 30, 3084–3097. https://doi.org/10.1109/tip.2021.3058590
- Chicago author-date
- Xue, Jize, Yong-Qiang Zhao, Yuanyang Bu, Wenzhi Liao, Jonathan Cheung-Wai Chan, and Wilfried Philips. 2021. “Spatial-Spectral Structured Sparse Low-Rank Representation for Hyperspectral Image Super-Resolution.” IEEE TRANSACTIONS ON IMAGE PROCESSING 30: 3084–97. https://doi.org/10.1109/tip.2021.3058590.
- Chicago author-date (all authors)
- Xue, Jize, Yong-Qiang Zhao, Yuanyang Bu, Wenzhi Liao, Jonathan Cheung-Wai Chan, and Wilfried Philips. 2021. “Spatial-Spectral Structured Sparse Low-Rank Representation for Hyperspectral Image Super-Resolution.” IEEE TRANSACTIONS ON IMAGE PROCESSING 30: 3084–3097. doi:10.1109/tip.2021.3058590.
- Vancouver
- 1. Xue J, Zhao Y-Q, Bu Y, Liao W, Chan JC-W, Philips W. Spatial-spectral structured sparse low-rank representation for hyperspectral image super-resolution. IEEE TRANSACTIONS ON IMAGE PROCESSING. 2021;30:3084–97.
- IEEE
- [1] J. Xue, Y.-Q. Zhao, Y. Bu, W. Liao, J. C.-W. Chan, and W. Philips, “Spatial-spectral structured sparse low-rank representation for hyperspectral image super-resolution,” IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 30, pp. 3084–3097, 2021.
- BibTeX
- @article{8719982,
    author   = {{Xue, Jize and Zhao, Yong-Qiang and Bu, Yuanyang and Liao, Wenzhi and Chan, Jonathan Cheung-Wai and Philips, Wilfried}},
    title    = {{Spatial-spectral structured sparse low-rank representation for hyperspectral image super-resolution}},
    journal  = {{IEEE TRANSACTIONS ON IMAGE PROCESSING}},
    volume   = {{30}},
    pages    = {{3084--3097}},
    year     = {{2021}},
    issn     = {{1057-7149}},
    language = {{eng}},
    keywords = {{Computer Graphics and Computer-Aided Design,Software,Superresolution,Sparse matrices,Spatial resolution,Dictionaries,Correlation,Tensors,Task analysis,Hyperspectral and multispectral images fusion,low-rank representation,structured sparse,subspace low-rank recovery,affinity matrix}},
    url      = {{http://doi.org/10.1109/tip.2021.3058590}},
  }