Publications by authors named "Ernö Robert Csetnek"

9 Publications


Tikhonov regularization of a second order dynamical system with Hessian driven damping.

Math Program 2021 11;189(1-2):151-186. Epub 2020 Jun 11.

Department of Mathematics, Technical University of Cluj-Napoca, Memorandumului 28, Cluj-Napoca, Romania.

We investigate the asymptotic properties of the trajectories generated by a second-order dynamical system with Hessian driven damping and a Tikhonov regularization term in connection with the minimization of a smooth convex function in Hilbert spaces. We obtain fast convergence results for the function values along the trajectories. The Tikhonov regularization term enables the derivation of strong convergence results of the trajectory to the minimizer of the objective function of minimum norm.
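For orientation, the dynamical systems studied in this line of work couple vanishing viscous damping, Hessian-driven damping and a Tikhonov term. A representative form (the parametrization below is illustrative and not quoted from the paper) is

    \ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) + \beta\,\nabla^2 f\big(x(t)\big)\,\dot{x}(t) + \nabla f\big(x(t)\big) + \varepsilon(t)\,x(t) = 0,
    \qquad \alpha, \beta > 0, \quad \varepsilon(t) \to 0 \text{ as } t \to \infty,

where f denotes the smooth convex objective. The decay rate of \varepsilon(t) governs the trade-off between fast decay of the function values and strong convergence of x(t) to the minimum-norm minimizer.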
DOI: http://dx.doi.org/10.1007/s10107-020-01528-8
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8550339
June 2020

Fixing and extending some recent results on the ADMM algorithm.

Numer Algorithms 2021 14;86(3):1303-1325. Epub 2020 May 14.

Faculty of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1, A-1090 Vienna, Austria.

We investigate the techniques and ideas used in Shefi and Teboulle (SIAM J Optim 24(1), 269-297, 2014) in the convergence analysis of two proximal ADMM algorithms for solving convex optimization problems involving compositions with linear operators. Besides this, we formulate a variant of the ADMM algorithm that is able to handle convex optimization problems whose objective contains an additional smooth function, which is evaluated only through its gradient. Moreover, in each iteration we allow the use of variable metrics, while the investigations are carried out in the setting of infinite-dimensional Hilbert spaces. This algorithmic scheme is investigated from the point of view of its convergence properties.
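To make the algorithmic structure concrete, the following is a minimal sketch of a linearized proximal ADMM for a model problem min_x f(x) + h(x) + g(Ax), in which the smooth term h is used only through its gradient and the variable metrics are replaced by fixed scalar steps. All function names, step-size choices and the particular linearization are assumptions made here for illustration; this is not the scheme analysed in the paper.

    import numpy as np

    def linearized_admm(prox_f, grad_h, prox_g, A, x0, rho=1.0, mu=None, iters=500):
        """Illustrative linearized proximal ADMM for  min_x f(x) + h(x) + g(Ax),
        with f, g prox-friendly and h smooth (accessed only via grad_h).
        Step sizes are placeholders: mu should dominate rho*||A||^2 plus the
        Lipschitz constant of grad_h for the iteration to be well behaved."""
        if mu is None:
            mu = rho * np.linalg.norm(A, 2) ** 2 + 1.0   # crude default bound
        x = x0.copy()
        z = A @ x
        u = np.zeros_like(z)                             # scaled dual variable
        for _ in range(iters):
            # gradient step on h plus linearized coupling, then prox of f
            grad = grad_h(x) + rho * A.T @ (A @ x - z + u)
            x = prox_f(x - grad / mu, 1.0 / mu)
            z = prox_g(A @ x + u, 1.0 / rho)             # prox step on g
            u = u + A @ x - z                            # dual update
        return x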
DOI: http://dx.doi.org/10.1007/s11075-020-00934-5
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7862544
May 2020

The Proximal Alternating Minimization Algorithm for Two-Block Separable Convex Optimization Problems with Linear Constraints.

J Optim Theory Appl 2019 24;182(1):110-132. Epub 2018 Dec 24.

Faculty of Mathematics, Chemnitz University of Technology, 09126 Chemnitz, Germany.

The Alternating Minimization Algorithm was proposed by Paul Tseng to solve convex programming problems with a two-block separable objective and linear constraints, whereby (at least) one component of the objective is assumed to be strongly convex. The implementability of the algorithm suffers from the fact that one of the subproblems to be solved within the iteration process does not usually amount to the evaluation of a proximal operator in closed form. In this paper, we allow in each block of the objective a further smooth convex function and propose a proximal version of the algorithm, obtained by equipping it with proximal terms induced by variable metrics. For suitable choices of the latter, the two subproblems of the iterative scheme reduce to the computation of proximal operators. We investigate the convergence of the proposed algorithm in a real Hilbert space setting and illustrate its numerical performance on two applications in image processing and machine learning.
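As an illustration of how both subproblems can collapse to proximal steps, here is a schematic proximal AMA-type iteration for min f(x) + g(z) subject to Ax + Bz = b, with f strongly convex. The step sizes (tau, sigma, rho) and the linearization of the augmented term are assumptions made for this sketch, not the variable metrics analysed in the paper.

    import numpy as np

    def proximal_ama(prox_f, prox_g, A, B, b, x0, z0,
                     tau=0.1, sigma=0.1, rho=1.0, iters=500):
        """Schematic proximal alternating minimization for
           min f(x) + g(z)  s.t.  A x + B z = b,  f strongly convex.
        The proximal terms are chosen so that both subproblems reduce to
        prox evaluations; parameter conditions are omitted."""
        x, z = x0.copy(), z0.copy()
        p = np.zeros_like(b)                              # Lagrange multiplier
        for _ in range(iters):
            x = prox_f(x - tau * (A.T @ p), tau)          # proximal step on f
            r = A @ x + B @ z - b                         # constraint residual
            z = prox_g(z - sigma * (B.T @ p + rho * B.T @ r), sigma)
            p = p + rho * (A @ x + B @ z - b)             # multiplier update
        return x, z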
DOI: http://dx.doi.org/10.1007/s10957-018-01454-y
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6560466
December 2018

Inducing strong convergence into the asymptotic behaviour of proximal splitting algorithms in Hilbert spaces.

Optim Methods Softw 2019 10;34(3):489-514. Epub 2018 Apr 10.

Faculty of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1, A-1090 Vienna, Austria.

Proximal splitting algorithms for monotone inclusions (and convex optimization problems) in Hilbert spaces share the common feature that, in general, they guarantee only weak convergence of the generated sequences to a solution. In order to achieve strong convergence, one usually needs to impose more restrictive properties on the involved operators, such as strong monotonicity (respectively, strong convexity for optimization problems). In this paper, we propose a modified Krasnosel'skiĭ-Mann algorithm for determining a fixed point of a nonexpansive mapping and show strong convergence of the iteratively generated sequence to the minimal-norm solution of the problem. Relying on this, we derive a forward-backward and a Douglas-Rachford algorithm, both endowed with Tikhonov regularization terms, which generate iterates that converge strongly to the minimal-norm element of the set of zeros of the sum of two maximally monotone operators. Furthermore, we formulate strongly convergent primal-dual algorithms of forward-backward and Douglas-Rachford type for highly structured monotone inclusion problems involving parallel sums and compositions with linear operators. The resulting iterative schemes are particularized to the solving of convex minimization problems. The theoretical results are illustrated by numerical experiments on the split feasibility problem in infinite-dimensional spaces.
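One common way to inject a Tikhonov term into the Krasnosel'skiĭ-Mann iteration is to contract the current iterate towards the origin before the relaxed fixed-point step. The minimal sketch below shows this structure; the parameter sequences are chosen purely for illustration and are not the conditions required in the paper.

    def km_tikhonov(T, x0, lam=lambda n: 0.5,
                    beta=lambda n: 1.0 - 1.0 / (n + 2), iters=1000):
        """Krasnosel'skii-Mann iteration with a Tikhonov-type shrinkage
        factor beta_n -> 1. The precise conditions on (lam_n, beta_n) that
        yield strong convergence to the minimal-norm fixed point of the
        nonexpansive map T are given in the paper; these defaults are
        placeholders."""
        x = x0
        for n in range(iters):
            y = beta(n) * x                        # Tikhonov shrinkage towards 0
            x = (1 - lam(n)) * y + lam(n) * T(y)   # relaxed fixed-point step
        return x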
DOI: http://dx.doi.org/10.1080/10556788.2018.1457151
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6474735
April 2018

A second-order dynamical approach with variable damping to nonconvex smooth minimization.

Appl Anal 2020 9;99(3):361-378. Epub 2018 Jul 9.

Department of Mathematics, Technical University of Cluj-Napoca, Cluj-Napoca, Romania.

We investigate a second-order dynamical system with variable damping in connection with the minimization of a nonconvex differentiable function. The dynamical system is formulated in the spirit of the differential equation which models Nesterov's accelerated convex gradient method. We show that the generated trajectory converges to a critical point, if a regularization of the objective function satisfies the Kurdyka-Łojasiewicz property. We also provide convergence rates for the trajectory formulated in terms of the Łojasiewicz exponent.
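The underlying continuous-time model is of Nesterov type: with g the (here nonconvex) objective and a damping coefficient vanishing like \gamma/t, a representative form reads (the exact damping function analysed in the paper may differ)

    \ddot{x}(t) + \frac{\gamma}{t}\,\dot{x}(t) + \nabla g\big(x(t)\big) = 0, \qquad \gamma > 0;

for \gamma = 3 and convex g this is the differential equation that models Nesterov's accelerated gradient method.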
DOI: http://dx.doi.org/10.1080/00036811.2018.1495330
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7077366
July 2018

A second-order dynamical system with Hessian-driven damping and penalty term associated to variational inequalities.

Optimization 2019 21;68(7):1265-1277. Epub 2018 Mar 21.

Faculty of Mathematics, University of Vienna, Vienna, Austria.

We consider the minimization of a convex objective function subject to the set of minima of another convex function, under the assumption that both functions are twice continuously differentiable. We approach this optimization problem from a continuous perspective by means of a second-order dynamical system with Hessian-driven damping and a penalty term corresponding to the constrained function. By constructing appropriate energy functionals, we prove weak convergence of the trajectories generated by this differential equation to a minimizer of the optimization problem as well as convergence for the objective function values along the trajectories. The performed investigations rely on Lyapunov analysis in combination with the continuous version of the Opial Lemma. In case the objective function is strongly convex, we can even show strong convergence of the trajectories.
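Schematically, such a system couples Hessian-driven damping on the objective f with a penalization of the constraint-defining function g (whose set of minima forms the feasible set). A representative form, with the exact placement of the terms and the growth conditions on the penalty function \lambda(t) left to the paper, is

    \ddot{x}(t) + \gamma\,\dot{x}(t) + \beta\,\nabla^2 f\big(x(t)\big)\,\dot{x}(t) + \nabla f\big(x(t)\big) + \lambda(t)\,\nabla g\big(x(t)\big) = 0, \qquad \beta, \gamma > 0, \quad \lambda(t) \to +\infty.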
DOI: http://dx.doi.org/10.1080/02331934.2018.1452922
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6817320
March 2018

An Inertial Proximal-Gradient Penalization Scheme for Constrained Convex Optimization Problems.

Vietnam J Math 2018 1;46(1):53-71. Epub 2017 Sep 1.

Department of Mathematics, Faculty of Science, Naresuan University, Phitsanulok, 65000 Thailand.

We propose a proximal-gradient algorithm with penalization terms and inertial and memory effects for minimizing the sum of a proper, convex and lower semicontinuous function and a convex differentiable function, subject to the set of minimizers of another convex differentiable function. We show that, under suitable choices for the step sizes and the penalization parameters, the generated iterates converge weakly to an optimal solution of the addressed bilevel optimization problem, while the objective function values converge to its optimal objective value.
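Such an iteration alternates an inertial extrapolation with a forward-backward step in which the constraint function is penalized by a growing parameter. The sketch below uses placeholder parameter sequences and hypothetical function handles; it illustrates the structure rather than the exact scheme and parameter conditions from the paper.

    def inertial_prox_grad_penalty(prox_f, grad_g, grad_h, x0,
                                   alpha=0.3, lam=lambda n: 0.01,
                                   beta=lambda n: (n + 1) ** 0.5, iters=1000):
        """Illustrative inertial proximal-gradient penalty scheme for
           min f(x) + g(x)  s.t.  x in argmin h,
        with f proper convex lsc (via prox_f) and g, h convex smooth."""
        x_prev, x = x0, x0
        for n in range(iters):
            y = x + alpha * (x - x_prev)                   # inertial extrapolation
            step = lam(n)
            grad = grad_g(y) + beta(n) * grad_h(y)         # penalized gradient
            x_prev, x = x, prox_f(y - step * grad, step)   # forward-backward step
        return x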
DOI: http://dx.doi.org/10.1007/s10013-017-0256-9
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7370989
September 2017

Gradient-type penalty method with inertial effects for solving constrained convex optimization problems with smooth data.

Optim Lett 2018 14;12(1):17-33. Epub 2017 Jun 14.

Department of Mathematics, Faculty of Science, Naresuan University, Phitsanulok, 65000 Thailand.

We consider the problem of minimizing a smooth convex objective function subject to the set of minima of another differentiable convex function. In order to solve this problem, we propose an algorithm which combines the gradient method with a penalization technique. Moreover, we insert in our algorithm an inertial term, which is able to take advantage of the history of the iterates. We show weak convergence of the generated sequence of iterates to an optimal solution of the optimization problem, provided a condition expressed in terms of the Fenchel conjugate of the constraint function is fulfilled. We also prove convergence of the objective function values to the optimal objective value. The convergence analysis carried out in this paper relies on the celebrated Opial Lemma and generalized Fejér monotonicity techniques. We illustrate the functionality of the method with a numerical experiment on image classification via support vector machines.
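Schematically, the iteration is the gradient-only counterpart of an inertial forward-backward penalty scheme (the notation is chosen here for illustration): with g the objective, h the constraint-defining function, inertial parameters \alpha_n, step sizes \lambda_n and penalty parameters \beta_n \to +\infty,

    y_n = x_n + \alpha_n (x_n - x_{n-1}), \qquad x_{n+1} = y_n - \lambda_n \big[ \nabla g(y_n) + \beta_n\, \nabla h(y_n) \big].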
DOI: http://dx.doi.org/10.1007/s11590-017-1158-1
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6956900
June 2017

Proximal-gradient algorithms for fractional programming.

Optimization 2017 Aug 24;66(8):1383-1396. Epub 2017 Feb 24.

Faculty of Mathematics, University of Vienna, Vienna, Austria.

In this paper, we propose two proximal-gradient algorithms for fractional programming problems in real Hilbert spaces, where the numerator is a proper, convex and lower semicontinuous function and the denominator is a smooth function, either concave or convex. In the iterative schemes, we perform a proximal step with respect to the nonsmooth numerator and a gradient step with respect to the smooth denominator. In the case of a concave denominator, the algorithm has the particularity that it generates sequences which approach both the set of (global) optimal solutions and the optimal objective value of the underlying fractional programming problem. In the case of a convex denominator, the numerical scheme approaches the set of critical points of the objective function, provided the latter satisfies the Kurdyka-Łojasiewicz property.
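For the concave-denominator case, the flavour of such a scheme can be conveyed by a Dinkelbach-style sketch: at each step the current objective ratio scales a gradient step on the denominator, followed by a proximal step on the numerator. The function handles and the step size below are hypothetical; this is not necessarily the update analysed in the paper.

    def prox_grad_fractional(prox_f, f, grad_g, g, x0, step=0.1, iters=500):
        """Dinkelbach-flavoured proximal-gradient sketch for  min f(x)/g(x),
        with f proper convex lsc (prox_f, function values f) and g smooth
        and positive (grad_g, function values g)."""
        x = x0
        for _ in range(iters):
            q = f(x) / g(x)                              # current objective ratio
            # one proximal-gradient step on the parametric function f - q*g
            x = prox_f(x + step * q * grad_g(x), step)
        return x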
DOI: http://dx.doi.org/10.1080/02331934.2017.1294592
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5632963
August 2017