
Expert Systems With Applications 164 (2021) 113958

Contents lists available at ScienceDirect

Expert Systems With Applications

journal homepage: www.elsevier.com/locate/eswa

Supervised discriminative dimensionality reduction by learning multiple transformation operators

Hossein Rajabzadeh a,∗, Mansoor Zolghadri Jahromi a, Ali Ghodsi b

a Department of Computer Science & Engineering and Information Technology, Shiraz University, Shiraz, Iran

b Department of Statistics and Actuarial Science, University of Waterloo, Waterloo, Canada

ABSTRACT

Analyzing and learning from high-dimensional data have always been challenging in machine learning tasks, causing serious computational complexity and poor learning performance. Supervised dimensionality reduction is a popular technique to address such challenges in supervised learning tasks, where data are accompanied by labels. Traditionally, such techniques mostly learn one single transformation to project data into a low-dimensional discriminative subspace. However, learning only one transformation for the whole data set can be dominated by one or several classes, so that the remaining classes receive less discrimination in the reduced space. That is to say, learning one transformation is insufficient to properly discriminate the classes in the reduced space because they may have complex and completely dissimilar distributions. This insufficiency becomes even more serious as the number of classes increases, leading to poor discrimination and degrading the learning performance in the reduced space. To overcome this limitation, we propose a novel supervised dimensionality reduction method, which learns per-class transformations by optimizing a newly designed and efficient objective function. The proposed method captures more discriminative information from each single class of data than a single transformation can. Moreover, the proposed objective function enjoys several desirable properties: (1) maximizing margins between the transformed classes of data, (2) having a closed-form solution, (3) being easily kernelized in the case of nonlinear data, (4) preventing overfitting, and (5) ensuring the transformations are row-sparse so that discriminative features are learned in the reduced space. Experimental results verify that the proposed method is superior to related state-of-the-art methods and promising in generating discriminative embeddings.
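The paper's objective function is not reproduced in this excerpt, so the following is only a toy illustration of the per-class-transformation idea it describes: each class receives its own transformation (here a class-specific PCA projection, which is an assumption of this sketch, not the authors' learned operators), and a point is assigned to the class whose subspace reconstructs it best.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two classes with very different covariance structure,
# so a single linear projection cannot discriminate both equally well.
X0 = rng.normal(size=(100, 5)) @ np.diag([3.0, 1.0, 0.1, 0.1, 0.1])
X1 = rng.normal(size=(100, 5)) @ np.diag([0.1, 0.1, 3.0, 1.0, 0.1]) + 2.0

def class_operator(X, d=2):
    """One transformation per class: the top-d principal directions of
    that class alone (a stand-in for the paper's learned operators)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d]  # (d, n_features) projection matrix

W0, W1 = class_operator(X0), class_operator(X1)

def reconstruction_error(x, X, W):
    """Distance from x to the affine subspace spanned by W around the class mean."""
    mu = X.mean(axis=0)
    z = (x - mu) @ W.T   # project into the class-specific reduced space
    x_hat = mu + z @ W   # map back to the input space
    return np.linalg.norm(x - x_hat)

def predict(x):
    """Assign x to the class whose per-class subspace explains it best."""
    e0 = reconstruction_error(x, X0, W0)
    e1 = reconstruction_error(x, X1, W1)
    return 0 if e0 < e1 else 1
```

With these two classes, a single 2-D projection must trade off variance directions between them, whereas the per-class operators each capture their own class's structure, which is the intuition behind learning multiple transformations.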

1. Introduction

Many learning algorithms suffer from high-dimensional data. Such data cause serious problems in learning processes, such as time and/or space complexity and performance deterioration, and in general the so-called curse of dimensionality (Bellman, 1956), which refers to various problems arising from high-dimensional data. Supervised dimensionality reduction (SDR) is a common technique to overcome such problems and enhance the performance of learning models in supervised learning tasks, where data are accompanied by class labels. Such techniques can be considered a preprocessing step in learning tasks and have a wide range of applications, including finance and marketing (Zhong & Enke, 2017), recommender systems (Nilashi, Ibrahim, & Bagherifard, 2018), stock trading (Han & Ge, 2020), data mining (Houari, Bounceur, Kechadi, Tari, & Euler, 2016), text mining (Abualigah, Khader, Al-Betar, & Alomari, 2017), data visualization and classification (Huang, Zhang, & Chen, 2019), and many others. Hence, improving SDR techniques leads to great achievements in many learning tasks.
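As a concrete point of reference for the single-transformation SDR setting discussed next, the classical Fisher linear discriminant can be sketched in a few lines. This is a standard baseline, not the method proposed in this paper: it learns one projection for all classes from labelled data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two labelled Gaussian classes in 10 dimensions, shifted by 1 per dimension.
X = np.vstack([rng.normal(0, 1, (50, 10)),
               rng.normal(1, 1, (50, 10))])
y = np.array([0] * 50 + [1] * 50)

def lda_direction(X, y):
    """Fisher's linear discriminant for two classes: the direction
    w ∝ Sw^{-1}(mu1 - mu0) maximizing between-class over within-class scatter."""
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = np.cov(X[y == 0], rowvar=False) + np.cov(X[y == 1], rowvar=False)
    w = np.linalg.solve(Sw, mu1 - mu0)
    return w / np.linalg.norm(w)

w = lda_direction(X, y)
Z = X @ w  # data reduced to one supervised, discriminative dimension
```

This single shared projection works well when all classes fit one scatter model; the limitation the paper targets appears when class distributions are too dissimilar for any one `w` (or one transformation matrix) to separate them all.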

SDR methods mainly try to learn one single transformation to project data into a low-dimensional subspace based on the provided label information so that the transformed classes become separated from each other. However, projecting data through a single transformation may fail to capture the entire characteristics of the data, particularly when the classes have complex and dissimilar distributions. As a direct result, the reduced space cannot properly distinguish between classes of data, leading to poor learning performance in the reduced space. A possible explanation for this failure is that each transformation has a limited learning capacity and therefore fails to properly discriminate classes if they come from highly complex and different distributions (Weinberger & Saul, 2009). Moreover, several properties are desirable in SDR methods, enabling them to produce more discrimination in the reduced space, such as margin maximization (Smola, Bartlett, Schuurmans, & Schölkopf, 2000), overfitting prevention (Liu & Gillies, 2016), kernelization (in the case of linear SDR methods) (Kung, 2014), and learning discriminative features in the reduced space (McLachlan, 2004). To the best of our knowledge, however, none of the existing related work satisfies all these properties together. Therefore, this study aims to address the aforementioned limitations by proposing a novel SDR method that learns multiple transformations by optimizing an efficient objective function. The choice of learning multiple transformations solves the limitation of learning capacity while the proposed objective function

∗ Corresponding author.

E-mail addresses: rajabzadeh@cse.shirazu.ac.ir, hrajabza@uwaterloo.ca (H. Rajabzadeh), zjahromi@shirazu.ac.ir (M.Z. Jahromi), ali.ghodsi@uwaterloo.ca (A. Ghodsi).

https://doi.org/10.1016/j.eswa.2020.113958

Received 2 June 2019; Received in revised form 30 August 2020; Accepted 31 August 2020; Available online 3 September 2020

0957-4174/© 2020 Published by Elsevier Ltd.
