Jointly Optimized Regressors for Image Super-resolution



Dengxin Dai, Radu Timofte, and Luc Van Gool
Computer Vision Lab, ETH Zurich

The Super-resolution Problem
- Interpolation only aligns coordinates
- Super-resolution also recovers high-frequency content

Why Image Super-resolution?
(1) For good visual quality
[Slide example: the same kitten photo at low and high resolution. Low-resolution caption: "This kitten is made out of Legos. They aren't cuddly at all!" High-resolution caption: "This kitten is adorable! I want to adopt her and give her a good home!"]
Image source: http://info.universalprinting.com/blog/



(2) As a pre-processing component for other computer vision systems, such as recognition
- Features and models are often trained with images of normal resolution
[Slide figure: low-resolution input vs. super-resolution result]

Example-based approaches
- Super-resolution is a highly ill-posed problem
- Learn from training examples; the core part is the learning

Core idea – patch enhancement
- Learn a transformation function for small patches
- Less complex, tractable
- Better chance to find similar patterns in the exemplars
- Pipeline: input → interpolate → enhance patches & average → output (see the sketch below)
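To make the patch-enhancement pipeline concrete, here is a minimal sketch in Python/NumPy. It is our own illustration, not code from the talk: enhance_patch is a placeholder for whatever learned per-patch mapping is plugged in, and image borders are handled crudely.

import numpy as np
from scipy.ndimage import zoom

def super_resolve(lr, factor, enhance_patch, patch=9, step=3):
    """Generic example-based SR loop: interpolate the LR image to the
    target grid, enhance each small patch, and average the overlapping
    predictions (the 'interp. -> patch enhance -> average' pipeline
    from the slide)."""
    up = zoom(lr, factor, order=3)            # cubic interpolation
    out = np.zeros_like(up)
    weight = np.zeros_like(up)
    H, W = up.shape
    for r in range(0, H - patch + 1, step):
        for c in range(0, W - patch + 1, step):
            p = up[r:r+patch, c:c+patch]
            out[r:r+patch, c:c+patch] += enhance_patch(p)
            weight[r:r+patch, c:c+patch] += 1.0
    return out / np.maximum(weight, 1.0)      # average the overlaps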

Training data
- LR images and HR images; training pairs are easy to create
- Matching patch-pairs feed the learning stage

Feature extraction (HR)
- Down-sample the HR image, interpolate the result back up, and keep the high-frequency residual as the target
- Patch size: 6×6, 9×9, or 12×12

Feature extraction (LR)
- Down-sample the HR image; compute gradient and Laplacian filter responses as the LR features
- Patch size: 6×6, 9×9, or 12×12
(A feature-extraction sketch follows.)
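The slides list only the filter names and patch sizes; the sketch below shows one common way to realize this feature extraction in Python/SciPy. The specific kernels (simple first-order gradients and a 4-neighbour Laplacian) and the choice to filter the interpolated LR image are our assumptions, not taken from the paper.

import numpy as np
from scipy.ndimage import correlate, zoom

def make_training_features(hr, factor=3):
    """Build one training image's LR features and HR targets:
    down-sample the HR image, interpolate it back, then stack
    gradient and Laplacian responses (LR features) and keep the
    high-frequency residual (HR regression target).  Assumes the
    HR dimensions are divisible by `factor`."""
    lr = zoom(hr, 1.0 / factor, order=3)
    mid = zoom(lr, factor, order=3)              # back on the HR grid
    gx = np.array([[-1.0, 0.0, 1.0]])            # horizontal gradient
    gy = gx.T                                    # vertical gradient
    lap = np.array([[0.0, 1.0, 0.0],
                    [1.0, -4.0, 1.0],
                    [0.0, 1.0, 0.0]])            # Laplacian
    feats = np.stack([correlate(mid, k, mode='nearest')
                      for k in (gx, gy, lap)], axis=-1)
    high_freq = hr - mid                         # what the regressors predict
    return feats, high_freq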

Learning methods
- Goal: learn the transformation from LR patches to HR ones
- Related work:
  - kNN + Markov random field [Freeman et al. 00]
  - Neighbor embedding [Chang et al. 04]
  - Support vector regression [Ni et al. 07]
  - Deep neural network [Dong et al. 14]
  - Simple functions [Yang & Yang 13]
  - Anchored neighborhood regression [Timofte et al. 13]
- These are variously computationally heavy, require complex optimization, or are efficient but learn their regressors separately

Differences to related approaches
[Slide: comparison with the related approaches]

Our approach – Jointly Optimized Regressors
- Learning: a set of local regressors that collectively yield the smallest error over all training pairs
  - Individually precise
  - Mutually complementary
- Testing: each patch is super-resolved by its most suitable regressor, voted for by its nearest neighbors
(A plausible formalization of the joint objective follows.)
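Read as an optimization problem, the joint objective can plausibly be written as follows; the notation is ours, since the slides state the goal only in words. With LR feature vectors x_i, high-frequency HR targets y_i, and O linear regressors W_1, ..., W_O:

\min_{\{W_o\}_{o=1}^{O},\; a} \;\sum_{i=1}^{N} \bigl\lVert y_i - W_{a(i)}\, x_i \bigr\rVert_2^2,
\qquad a(i) \in \{1, \dots, O\},

where a assigns every training pair to one regressor. The update step then solves a ridge regression per cluster C_o = \{ i : a(i) = o \}, which has the closed form

W_o = \arg\min_{W} \sum_{i \in C_o} \lVert y_i - W x_i \rVert_2^2 + \lambda \lVert W \rVert_F^2
    = Y_o X_o^{\top} \bigl( X_o X_o^{\top} + \lambda I \bigr)^{-1}.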

Our approach – learning
Two iterative steps (similar to k-means):
- Initialization: separate the matching pairs into O clusters
- Update step: learn one regressor per cluster by minimizing the SR error over all pairs in that cluster, via ridge regression
- Assignment step: re-assign each pair to the regressor that yields the least SR error on it
- Alternate until convergence (about 10 iterations)
(A minimal training-loop sketch follows.)
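A minimal sketch of this alternation in Python/NumPy, under our own assumptions: random initialization instead of a real clustering, a plain matrix inverse, and our variable names.

import numpy as np

def train_jor(X, Y, O=16, lam=0.1, iters=10, seed=0):
    """K-means-style alternation from the slide.  X: d x N LR
    features, Y: D x N high-frequency targets.  Alternates a
    ridge-regression update with a least-SR-error assignment
    for ~10 iterations."""
    rng = np.random.default_rng(seed)
    d, N = X.shape
    assign = rng.integers(0, O, size=N)    # stand-in for the O initial clusters
    W = [None] * O
    for _ in range(iters):
        # Update step: W_o = Y_o X_o^T (X_o X_o^T + lam*I)^{-1}
        for o in range(O):
            idx = np.flatnonzero(assign == o)
            if idx.size == 0:
                continue                   # empty cluster: keep the old regressor
            Xo, Yo = X[:, idx], Y[:, idx]
            W[o] = Yo @ Xo.T @ np.linalg.inv(Xo @ Xo.T + lam * np.eye(d))
        # Assignment step: per-pair SR error under every regressor (O x N)
        errs = np.stack([((Y - Wo @ X) ** 2).sum(axis=0)
                         if Wo is not None else np.full(N, np.inf)
                         for Wo in W])
        assign = errs.argmin(axis=0)
    return W, errs   # errs columns: per-patch SR-error vectors, used at test time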

Our approach – learning, indexing step
- After the iterations, each LR training patch is associated with a vector of the SR errors produced by each of the O regressors
- The roughly 5 million training patches are indexed with a kd-tree [Vedaldi and Fulkerson 08]

Our approach – testing
- Interpolate the LR input and extract the filtered patch features
- For each patch, search its k nearest neighbors in the kd-tree; the neighbors vote for the best regressor through their stored SR-error vectors, since similar patches share regressors
- Apply the chosen ridge regressor to predict the patch's high-frequency content, then average the overlapping patches into the output
(A test-time sketch follows.)
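The test-time selection can be sketched as below (Python/SciPy). The SciPy kd-tree stands in for the [Vedaldi and Fulkerson 08] index, and summing the neighbors' stored SR-error vectors is our reading of the "vote", which the slides do not spell out.

import numpy as np
from scipy.spatial import cKDTree

def build_index(X):
    """Index the N training LR feature vectors (columns of X)."""
    return cKDTree(X.T)

def enhance_patch_feature(x, regressors, tree, train_err, k=5):
    """Vote for the most suitable regressor with the k nearest
    training patches, then apply it to predict the patch's
    high-frequency content."""
    _, nn = tree.query(x, k=k)                  # k nearest neighbours
    o = train_err[:, nn].sum(axis=1).argmin()   # least voted SR error
    return regressors[o] @ x                    # predicted high-freq. patch

# Usage with the training sketch above:
#   W, errs = train_jor(X, Y)
#   tree = build_index(X)
#   hf = enhance_patch_feature(x_test, W, tree, errs)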

Results
- Compared with 7 competing methods on 4 datasets (1 newly collected)
- Our method, yet simple, outperforms the others consistently
- Metric: average PSNR (dB) on Set5, Set14, BD100, and SuperTex136
- Better results with more iterations, with more regressors, and with more training patch pairs
[Slide plots: PSNR (dB) vs. the number of iterations, the number of regressors, and the number of training patches]

Visual comparison, factor ×3:
- Ground truth / PSNR
- Bicubic / 27.9 dB
- Zeyde et al. / 28.7 dB


- SRCNN / 29.0 dB
- JOR / 29.3 dB

Results, factor ×4:
- Ground truth / PSNR
- SRCNN / 31.4 dB
- JOR / 32.3 dB

Results, factor ×4:
- Ground truth / PSNR
- Bicubic / 32.8 dB
- JOR / 33.7 dB

Results, factor ×4:
- Ground truth / PSNR
- Bicubic / 25.5 dB
- Zeyde et al. / 26.7 dB
- ANR / 26.9 dB
- SRCNN / 27.1 dB
- JOR / 27.7 dB

Conclusion
- A new method that jointly optimizes regressors with the ultimate goal of image super-resolution
- The method, yet simple, outperforms competing methods
- The code is available at www.vision.ee.ethz.ch/~daid/JOR
- A new dataset of 136 textures (SuperTex136) for evaluating texture recovery ability

Thanks for your attention! Questions?

[Closing example: Ground truth / PSNR; Bicubic / 31.2 dB; SRCNN / 33.3 dB; JOR / 34.0 dB]

Reference
Dai, D., R. Timofte, and L. Van Gool. "Jointly optimized regressors for image super-resolution." In Eurographics, 2015.
