
## Lecture 7: Image Relaxation: Restoration and Feature Extraction

Based on ch. 6 of *Machine Vision* by Wesley E. Snyder & Hairong Qi. Spring 2016. 18-791 (CMU ECE) : 42-735 (CMU BME) : BioE 2630 (Pitt). Dr. John Galeotti.

### All measured images are degraded

- Noise (always)
- Distortion = blur (usually)
- False edges, from noise
- Unnoticed/missed edges, from noise + blur

[Figure: an original image and its plot vs. a noisy image and its plot]

We need an "un-degrader" to extract clean features for segmentation, registration, etc.:

- **Restoration:** a-posteriori image restoration; removes degradations from images
- **Feature extraction:** iterative image feature extraction; extracts features from noisy images


### Image relaxation

The basic operation performed by:

- Restoration
- Feature extraction (of the type in ch. 6)

An image relaxation process is a multistep algorithm with the properties that:

- The output of a step is the same form as the input (e.g., 256^2 image to 256^2 image), which allows iteration
- It converges to a bounded result
- The operation on any pixel depends only on those pixels in some well-defined, finite neighborhood of that pixel (optional)

### Restoration: an inverse problem

Assume:

- An ideal image, f (this is what we want)
- A measured image, g (this is what we get)
- A distortion operation, D
- Random noise, n

Put it all together:

    g = D( f ) + n

How do we extract f?

### Restoration is ill-posed

- Even without noise
- Even if the distortion is linear blur
- Inverting linear blur = deconvolution
- But we want restoration to be well-posed
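
The measurement model g = D( f ) + n is easy to sketch numerically. Below is a minimal 1-D illustration in NumPy; the 3-tap blur kernel and the noise level are arbitrary stand-ins for whatever degradation a real imaging system applies, not anything prescribed by the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ideal image f (1-D for brevity): a bright bar on a dark background.
f = np.zeros(64)
f[20:40] = 1.0

# Distortion operator D: linear blur with a small 3-tap kernel.
def D(f):
    kernel = np.array([0.25, 0.5, 0.25])
    return np.convolve(f, kernel, mode="same")

# Additive random noise n, and the measurement model g = D(f) + n.
n = 0.05 * rng.standard_normal(f.size)
g = D(f) + n
```

Note that g has the same form (same size) as f, which is exactly the property an image relaxation step needs in order to be iterated.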

### A well-posed problem

g = D( f ) is well-posed if:

- For each f, a solution exists,
- The solution is unique, AND
- The solution g continuously depends on the data f

Otherwise, it is ill-posed, usually because it has a large condition number: K >> 1.

### Condition number, K

K ≈ (relative perturbation of the output) / (relative perturbation of the input). For the linear system b = Ax:

    K = ||A|| ||A^-1||,    K ∈ [1, ∞)

### K for convolutional blur

Why is restoration ill-posed for simple blur? Why not just linearize the blur kernel into a matrix H, and then take the inverse of that matrix?

    F = H^-1 G

Because H is probably singular. If not, H almost certainly has a large K, so small amounts of noise in G will make the computed F almost meaningless. See the book for great examples.
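
To see why a large K ruins naive inversion, one can build a small circulant blur matrix and invert it directly. The following sketch uses an illustrative 65-sample signal and 3-tap kernel (both invented here, not from the book); even 1% noise in G is amplified into an almost meaningless F:

```python
import numpy as np

n = 65
kernel = {-1: 0.25, 0: 0.5, 1: 0.25}   # a mild 3-tap blur
H = np.zeros((n, n))
for i in range(n):
    for off, w in kernel.items():
        H[i, (i + off) % n] = w        # circulant (periodic) blur matrix

K = np.linalg.cond(H)
print("condition number K =", K)       # K >> 1

# Tiny noise in G makes the naive inverse F = H^-1 G nearly meaningless:
rng = np.random.default_rng(0)
f = np.zeros(n); f[20:40] = 1.0        # ideal signal
g = H @ f + 0.01 * rng.standard_normal(n)
f_naive = np.linalg.solve(H, g)        # "just invert the blur"

print("max reconstruction error =", np.abs(f_naive - f).max())
```

With an even-sized signal this circulant H is exactly singular (its highest-frequency eigenvalue is zero), which is why the odd size 65 was chosen: H is then invertible but badly conditioned, matching the slide's "if not singular, then large K" point.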

### Regularization theory to the rescue!

How to handle an ill-posed problem? Find a related well-posed problem, one whose solution approximates that of our ill-posed problem. E.g., try minimizing a substitute objective. But unless we know something about the noise, this is the exact same problem!

### Digression: statistics

Remember Bayes' rule:

    p( f | g ) = p( g | f ) p( f ) / p( g )

- p( f | g ) is the a-posteriori conditional pdf. This is what we want! It is our discrimination function.
- p( g | f ) is the conditional pdf.
- p( f ) is the a-priori pdf.
- p( g ) is just a normalization constant.

### Maximum a posteriori (MAP) image processing algorithms

To find the f underlying a given g:

1. Use Bayes' rule to compute p( fq | g ) for all fq ∈ F (the set of all possible f )
2. Pick the fq with the maximum p( fq | g )

p( g ) is useless here (it's constant across all fq), so this is equivalent to:

    f = argmax( fq ) p( g | fq ) p( fq )

The first factor is the noise term; the second is the prior term.
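
A single-pixel toy version of the MAP recipe can be sketched as follows. The bimodal prior (scene pixels are mostly "dark" or "bright") and the noise level are invented purely for illustration:

```python
import numpy as np

# Candidate ideal values fq: a discrete set of hypotheses for one pixel.
fq = np.linspace(0.0, 1.0, 101)

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Prior p(f): pixels are usually near 0.1 (dark) or 0.9 (bright).
prior = 0.5 * gauss(fq, 0.1, 0.05) + 0.5 * gauss(fq, 0.9, 0.05)

# Likelihood p(g|f): additive Gaussian sensor noise with sigma = 0.2.
g = 0.65                          # one noisy measurement
likelihood = gauss(g, fq, 0.2)

# Maximum likelihood ignores the prior and just trusts the data:
f_ml = fq[np.argmax(likelihood)]

# MAP maximizes likelihood * prior; p(g) is dropped since it is constant:
f_map = fq[np.argmax(likelihood * prior)]

print(f_ml, f_map)   # ML stays at 0.65; MAP snaps toward the bright mode
```

The point of the comparison: the prior term pulls the estimate toward values the scene model considers plausible, which is exactly the role Hp( f ) will play below.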

### Probabilities of images

Based on probabilities of pixels. For each pixel i:

    p( fi | gi ) ∝ p( gi | fi ) p( fi )

Let's simplify: assume no blur (just noise). At this point, some people would say we are "denoising" the image.

    p( g | f ) = Π p( gi | fi )        p( f ) = Π p( fi )

### Probabilities of pixel values

- p( gi | fi ): this could be the density of the noise, such as a Gaussian noise model: = constant · e^something
- p( fi ): this could be a Gibbs distribution, if you model your image as an N-D Markov field: = e^something

See the book for more details.

### Put the math together

Remember, we want:

    f = argmax( fq ) p( g | fq ) p( fq )

where fq ∈ F (the set of all possible f ). And remember:

    p( g | f ) = Π p( gi | fi ) = constant · e^something
    p( f ) = Π p( fi ) = e^something

where i ∈ I (the set of all image pixels). But we like sums better than products of e^somethings, so take the log and solve for:

    f = argmin( fq ) Σ ( -ln p( gi | fi ) - ln p( fi ) )
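
The log trick is easy to verify numerically: the argmax of a product of densities coincides with the argmin of the summed negative logs, because ln is strictly monotone. A small sketch (the particular densities here are arbitrary choices, not the lecture's):

```python
import numpy as np

fq = np.linspace(0.0, 1.0, 201)      # candidate values for one pixel
g = 0.4                               # its noisy measurement

p_noise = np.exp(-0.5 * ((g - fq) / 0.2) ** 2)   # p(g|f): Gaussian noise
p_prior = np.exp(-5.0 * np.abs(fq - 0.5))        # p(f) = e^{-something}

# Maximizing the product is the same as minimizing the negative log sum:
i_map = np.argmax(p_noise * p_prior)
i_log = np.argmin(-np.log(p_noise) - np.log(p_prior))
print(fq[i_map], i_map == i_log)
```

Working with the summed negative logs also avoids numerical underflow: a product over thousands of pixel densities would vanish in floating point long before the sum of logs loses precision.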

### Objective functions

We can re-write the previous slide's final equation to use objective functions for our noise and prior terms:

    f = argmin( fq ) Σ ( -ln p( gi | fi ) - ln p( fi ) )
    f = argmin( fq ) ( Hn( f, g ) + Hp( f ) )

We can also combine these objective functions:

    H( f, g ) = Hn( f, g ) + Hp( f )

### Purpose of the objective functions

- Noise term Hn( f, g ): if we assume independent, Gaussian noise for each pixel, this tells the minimization that f should resemble g.
- Prior term (a.k.a. regularization term) Hp( f ): tells the minimization what properties the image should have. Often, this means brightness that is constant in local areas and discontinuous at boundaries.

### Minimization is a beast!

Our objective function is not "nice": it has many local minima, so gradient descent will not do well. We need a more powerful optimizer: mean field annealing (MFA). MFA approximates simulated annealing, but it's faster! It's also based on the mean field approximation of statistical mechanics.
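
A minimal sketch of minimizing H( f, g ) = Hn( f, g ) + Hp( f ) by gradient descent. For illustration this uses a quadratic smoothness prior, which is deliberately "nice" (convex), so plain descent suffices; the edge-preserving, non-convex priors the lecture cares about are what demand MFA. The weights, step size, and test signal are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
truth = np.where(np.arange(n) < n // 2, 0.0, 1.0)  # piecewise-constant scene
g = truth + 0.1 * rng.standard_normal(n)           # noisy measurement

lam = 2.0     # weight of the prior (regularization) term
f = g.copy()  # initialize the estimate at the data

def objective(f):
    Hn = np.sum((f - g) ** 2)            # noise term: f should resemble g
    Hp = lam * np.sum(np.diff(f) ** 2)   # prior term: f should be smooth
    return Hn + Hp

for _ in range(500):
    grad_n = 2.0 * (f - g)               # gradient of the noise term
    d = np.diff(f)
    grad_p = np.zeros(n)                 # gradient of the prior term
    grad_p[:-1] -= 2.0 * lam * d
    grad_p[1:] += 2.0 * lam * d
    f -= 0.05 * (grad_n + grad_p)        # small, stable descent step

print(objective(g), "->", objective(f))  # descent lowers H
```

Because the prior here penalizes *all* differences, it also blurs the edge at the center; that is precisely the weakness that the piecewise-constant priors and MFA address.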

### MFA

MFA is a continuation method, so it implements a homotopy. A homotopy is a continuous deformation of one hyper-surface into another.

MFA procedure:

1. Distort our complex objective function into a convex hyper-surface (N-surface). The only minimum is now the global minimum.
2. Gradually distort the convex N-surface back into our objective function.

### MFA: single-pixel visualization

[Figure: continuous deformation of a function which is initially convex, used to find the global minimum of a non-convex function]

### Generalized objective functions for MFA

- Noise term: (D( f ))i denotes some distortion (e.g., blur) of image f in the vicinity of pixel i.
- Prior term: represents a priori knowledge about the roughness of the image, which is altered in the course of MFA. (R( f ))i denotes some function of image f at pixel i. The prior will seek the f which causes R( f ) to be zero (or as close to zero as possible).
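
MFA itself takes more than a few lines, but the continuation idea it rests on fits in one dimension: convexify the objective, minimize that, then anneal the smoothing away while tracking the minimum. Everything below (the quartic objective, the Gaussian smoothing family, the schedule) is an invented toy standing in for the lecture's actual formulation:

```python
# A non-convex 1-D stand-in for H(f): two minima, the global one near x = 1.13.
def dH(x):
    return x**3 - x - 0.3            # derivative of x^4/4 - x^2/2 - 0.3x

# Smoothed family: E[H(x + s*Z)] for Z ~ N(0,1) has this closed-form
# derivative; for large s the smoothed surface is convex (single minimum).
def dH_smooth(x, s):
    return x**3 + 3 * s**2 * x - x - 0.3

# Plain gradient descent from x = -1 gets stuck in the wrong minimum:
x_plain = -1.0
for _ in range(500):
    x_plain -= 0.05 * dH(x_plain)

# Continuation: minimize the convexified surface first, then track the
# minimum while the smoothing s is annealed back to zero.
x_cont = -1.0
for s in (2.0, 1.5, 1.0, 0.7, 0.5, 0.3, 0.1, 0.0):
    for _ in range(300):
        x_cont -= 0.05 * dH_smooth(x_cont, s)

print(x_plain, x_cont)  # local minimum (negative x) vs. global minimum
```

The warm start at each annealing stage is the whole trick: by the time the surface regains its local minima, the iterate is already inside the global basin.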

### R( f ): choices, choices

Piecewise-constant images:

- R( f ) = 0 if the image is constant
- R( f ) ≈ 0 if the image is piecewise-constant (why?)
- The noise term will force a piecewise-constant image

### R( f ): piecewise-planar images

- R( f ) = 0 if the image is a plane
- R( f ) ≈ 0 if the image is piecewise-planar
- The noise term will force a piecewise-planar image

### Graduated nonconvexity (GNC)

Similar to MFA:

- Uses a descent method
- Reduces a control parameter
- Can be derived using MFA as its basis

Weak-membrane GNC is analogous to piecewise-constant MFA, but different: its objective function treats the presence of edges explicitly. Pixels labeled as edges don't count in our noise term, so we must explicitly minimize the number of edge pixels.
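
One common concrete choice (an assumption here, though consistent with the piecewise-constant/planar discussion) builds R( f ) from finite differences: first differences vanish exactly on constant images, second differences on planar ones, and both are nonzero only near discontinuities of a piecewise image:

```python
import numpy as np

x = np.arange(12, dtype=float)
const = np.full(12, 3.0)             # constant image
plane = 2.0 * x + 1.0                # planar (linear ramp) image
step = np.where(x < 6, 0.0, 1.0)     # piecewise-constant image

def R_const(f):
    return np.diff(f)        # first difference: zero iff f is constant

def R_plane(f):
    return np.diff(f, 2)     # second difference: zero iff f is planar

print(np.abs(R_const(const)).max())  # 0.0
print(np.abs(R_plane(plane)).max())  # 0.0
print(np.count_nonzero(R_const(step)))  # nonzero only at the one jump
```

This also answers the slide's "why?": on a piecewise-constant image, R is zero everywhere except at the handful of boundary pixels, so the prior's total penalty stays small.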

### Variable conductance diffusion (VCD)

Idea: blur an image everywhere, except at features of interest, such as edges.

### VCD simulates the diffusion equation

    ∂f/∂t = ∇ · ( ci ∇f )

where:

- t = time
- ∇f = spatial gradient of f at pixel i
- ci = conductivity (to blurring)

The left side is the temporal derivative; the right side is the spatial derivative.

### Isotropic diffusion

If ci is constant across all pixels, we get isotropic diffusion, which is not really VCD. Isotropic diffusion is equivalent to convolution with a Gaussian; the Gaussian's variance is defined in terms of t and ci.
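
The claimed equivalence between constant-c diffusion and Gaussian convolution can be checked numerically: after N explicit time steps of size dt, the diffused signal matches convolution with a Gaussian of variance 2·c·N·dt. A 1-D sketch on a periodic domain (the signal, step count, and c are arbitrary):

```python
import numpy as np

def diffuse(f, steps, c=1.0, dt=0.2):
    # Explicit heat-equation stepping with constant conductivity c:
    # f <- f + c*dt * Laplacian(f), periodic boundary conditions.
    f = f.copy()
    for _ in range(steps):
        lap = np.roll(f, 1) - 2 * f + np.roll(f, -1)
        f += c * dt * lap
    return f

n, steps, c, dt = 128, 50, 1.0, 0.2
f0 = np.where(np.arange(n) < n // 2, 0.0, 1.0)
fd = diffuse(f0, steps, c, dt)

# The equivalent Gaussian has variance 2*c*t, with t = steps*dt:
sigma = np.sqrt(2 * c * steps * dt)
offs = np.arange(-int(4 * sigma), int(4 * sigma) + 1)
kern = np.exp(-0.5 * (offs / sigma) ** 2)
kern /= kern.sum()
kfull = np.zeros(n)
kfull[offs % n] = kern               # wrap the kernel onto the circle
fg = np.real(np.fft.ifft(np.fft.fft(f0) * np.fft.fft(kfull)))

print(np.abs(fd - fg).max())         # the two results agree closely
```

The agreement is not exact (the explicit scheme is only a discretization of the heat equation), but it tightens as dt shrinks, which is the sense in which isotropic diffusion "is" Gaussian blurring.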

### VCD conductivity

ci is a function of spatial coordinates, parameterized by i:

- Typically a property of the local image intensities
- Can be thought of as a factor by which space is locally compressed

To smooth except at edges:

- Let ci be small if i is an edge pixel. Little smoothing occurs, because space is "stretched" there, or equivalently, little heat flows.
- Let ci be large at all other pixels. More smoothing occurs in the vicinity of pixel i, because space is "compressed" there, or equivalently, heat flows easily.

### VCD, a.k.a. anisotropic diffusion

With repetition, VCD produces a nearly piecewise-uniform result, like the MFA and GNC formulations. It is equivalent to MFA without a noise term. Variants:

- Edge-oriented VCD: VCD + diffusion tangential to edges when near edges
- Biased anisotropic diffusion (BAD): equivalent to MAP image restoration

### VCD sample images

From the Scientific Applications and Visualization Group at NIST: http://math.nist.gov/mcsd/savg/software/filters/
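
A minimal 1-D VCD sketch, using a Perona-Malik-style conductivity c = exp(-(∇f/κ)²). That particular form, and the parameters κ and dt, are common choices assumed here for illustration; the book's exact formulation may differ. Noise in flat regions diffuses away, while the large-gradient edge barely conducts and so survives:

```python
import numpy as np

def vcd_step(f, kappa=0.2, dt=0.2):
    # One step of 1-D variable-conductance diffusion. The conductivity on
    # each link between neighboring pixels drops sharply where the local
    # gradient is large: ~1 in flat areas, ~0 across strong edges.
    grad = np.diff(f)
    c = np.exp(-((grad / kappa) ** 2))
    flux = c * grad * dt            # heat flux across each link
    out = f.copy()
    out[:-1] += flux                # discrete divergence of the flux:
    out[1:] -= flux                 # f_i += dt * (flux_i - flux_{i-1})
    return out

rng = np.random.default_rng(0)
n = 200
clean = np.where(np.arange(n) < n // 2, 0.0, 1.0)
noisy = clean + 0.05 * rng.standard_normal(n)

f = noisy.copy()
for _ in range(100):
    f = vcd_step(f)

# Flat-region noise is smoothed away; the step edge is preserved:
print(np.std(np.diff(f[:90])), f[110] - f[90])
```

Compare with the constant-c case: isotropic diffusion run for the same time would have spread the edge over many pixels, whereas here the near-zero conductivity across the jump acts as an insulating boundary.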

### Congratulations!

You have made it through most of the introductory material. Now we're ready for the fun stuff. Fun stuff (why we do image analysis):

- Segmentation
- Registration
- Shape analysis
- Etc.

