1 |
Iterative Methods for Common Fixed Points of Nonexpansive Mappings in Hilbert Spaces. Lai, Pei-lin, 16 May 2011
The aim of this work is to propose viscosity-like methods for finding a specific common fixed point of a finite family T = {T_i}_{i=1}^N of nonexpansive self-mappings of a closed convex subset C of a Hilbert space H. We propose two schemes: one implicit and the other explicit. The implicit scheme determines a set {x_t : 0 < t < 1} through
the fixed point equation x_t = t f(x_t) + (1 − t) T x_t, where f : C → C is a contraction. The explicit scheme is the discretization of the implicit scheme and defines a sequence {x_n} by the recursion x_{n+1} = α_n f(x_n) + (1 − α_n) T x_n for n ≥ 0, where {α_n} ⊂ (0, 1). It has been shown in the literature that both the implicit and explicit schemes converge in
norm to a fixed point of T (with additional conditions imposed on the sequence {α_n} in the explicit scheme). We will extend both schemes to the case of a finite family of nonexpansive mappings. Our proposed schemes converge in norm to a common fixed point of the family which, in addition, solves a variational inequality.
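A minimal numerical sketch of the explicit viscosity scheme (the mapping T, the contraction f, and the parameters below are illustrative choices, not taken from the thesis):

```python
import numpy as np

# Explicit viscosity scheme: x_{n+1} = a_n f(x_n) + (1 - a_n) T x_n.
# Illustrative choices (not from the thesis): T is the metric projection
# onto the x-axis in R^2 (nonexpansive, Fix(T) = the x-axis) and
# f(x) = 0.5 x + c is a contraction with coefficient 1/2.

def T(x):                        # nonexpansive, Fix(T) = {(t, 0)}
    return np.array([x[0], 0.0])

c = np.array([1.0, -1.0])

def f(x):                        # contraction with coefficient 1/2
    return 0.5 * x + c

x = np.array([5.0, 3.0])
for n in range(20000):
    a = 1.0 / (n + 2)            # a_n -> 0 and sum a_n diverges
    x = a * f(x) + (1 - a) * T(x)

# The limit q is the point of Fix(T) satisfying q = P_Fix(T) f(q);
# here q1 = 0.5 q1 + 1 gives q = (2, 0).
print(x)
```

With these choices the limit (2, 0) is exactly the common fixed point singled out by the variational inequality characterization mentioned in the abstract.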
|
2 |
Quasi-Fejér-monotonicity and its applications. Huang, Jun-Hua, 05 July 2011
Iterative methods are extensively used to solve linear and nonlinear problems arising in both the pure and applied sciences, in particular in fixed point theory and optimization. An iterative method used to find a fixed point of an operator or an optimal solution to an optimization problem generates a sequence in an iterative manner. We hope that
this sequence converges to a solution of the problem under investigation. It is therefore quite natural to require that the distance from this sequence to the solution set of the problem under investigation be decreasing from iteration to iteration. This is the idea of Fejér monotonicity. In this paper, we consider quasi-Fejér monotone sequences; that is, Fejér monotone sequences with errors. Properties of quasi-Fejér monotone sequences are investigated, weak and strong convergence of quasi-Fejér monotone sequences is obtained, and an application to the convex feasibility problem is included.
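A small numerical illustration of Fejér monotonicity (the two convex sets below are illustrative choices, not from the paper): alternating projections onto two convex sets generate a sequence whose distance to any point of the intersection never increases.

```python
import numpy as np

# Alternating projections onto two convex sets produce a Fejer monotone
# sequence with respect to their intersection. Illustrative sets (not
# from the paper): the closed unit ball C and the half-plane
# Q = {x : x0 >= 0}; their intersection is nonempty.

def proj_ball(x):                       # projection onto the unit ball
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def proj_halfplane(x):                  # projection onto {x0 >= 0}
    return np.array([max(x[0], 0.0), x[1]])

z = np.array([0.5, 0.5])                # a point in the intersection
x = np.array([-3.0, 4.0])
dists = []
for _ in range(50):
    x = proj_halfplane(proj_ball(x))    # both projections fix z
    dists.append(np.linalg.norm(x - z))

# Fejer monotonicity: distances to z never increase.
print(all(d2 <= d1 + 1e-12 for d1, d2 in zip(dists, dists[1:])))
```

Since each projection is nonexpansive and fixes every point of the intersection, the distance to z is nonincreasing; a quasi-Fejér monotone sequence relaxes this by allowing summable error terms.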
|
3 |
Convergence Analysis for Inertial Krasnoselskii-Mann Type Iterative Algorithms. Huang, Wei-Shiou, 16 February 2011
We consider the problem of finding a common fixed point of an infinite family $\{T_n\}$
of nonlinear self-mappings of a closed convex subset $C$ of a real Hilbert space $H$. Namely,
we want to find a point $x$ with the property (assuming such common fixed points exist):
\[
x \in \bigcap_{n=1}^{\infty} \text{Fix}(T_n).
\]
We will use the Krasnoselskii-Mann (KM) type inertial iterative algorithms of the form
$$ x_{n+1} = ((1-\alpha_n)I+\alpha_n T_n)y_n,\quad
y_n = x_n + \beta_n(x_n-x_{n-1}).\eqno(*)$$
We discuss the convergence properties of the sequence $\{x_n\}$ generated by this algorithm (*).
In particular, we prove that $\{x_n\}$ converges weakly to a common fixed point of the family
$\{T_n\}$ under certain conditions imposed on the sequences $\{\alpha_n\}$ and $\{\beta_n\}$.
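A minimal sketch of the inertial KM recursion (*) with a single nonexpansive mapping (the mapping T and the constant parameters below are illustrative assumptions, not the thesis's general setting):

```python
import numpy as np

# Inertial KM: y_n = x_n + b_n (x_n - x_{n-1}),
#              x_{n+1} = ((1 - a_n) I + a_n T) y_n.
# Illustrative T (not the thesis's general family): the metric
# projection onto the line L = {x : x0 + x1 = 1}, so Fix(T) = L.

normal = np.array([1.0, 1.0]) / np.sqrt(2.0)
offset = 1.0 / np.sqrt(2.0)              # L = {x : <normal, x> = offset}

def T(x):
    return x - (normal @ x - offset) * normal

x_prev = np.array([4.0, -7.0])
x = np.array([3.0, 5.0])
for n in range(200):
    y = x + 0.2 * (x - x_prev)           # inertial term, b_n = 0.2
    x_prev, x = x, 0.5 * y + 0.5 * T(y)  # relaxation, a_n = 0.5

residual = np.linalg.norm(x - T(x))
print(residual)  # ~0: x has (numerically) reached Fix(T)
```

The extrapolation step y_n reuses the previous iterate to accelerate the plain KM scheme; the conditions on {α_n} and {β_n} in the thesis are what guarantee weak convergence in general.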
|
4 |
Convergence Analysis of the Gradient-Projection Method. Chow, Chung-Huo, 09 July 2012
We consider the constrained convex minimization problem:
min_{x∈C} f(x).
We will present the gradient projection method, which generates a sequence x^k
according to the formula
x^{k+1} = P_C(x^k − α_k ∇f(x^k)), k = 0, 1, . . . .
Our idea is to rewrite this formula as a fixed point algorithm:
x^{k+1} = T_{α_k} x^k, k = 0, 1, . . . ,
which is used to solve the minimization problem.
In this paper, we present the gradient projection method (GPM) and discuss different choices of the stepsize under which the gradient projection
method converges to a solution of the concerned problem.
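A minimal sketch of the gradient projection method on a toy problem (the objective, constraint set, and stepsize below are illustrative assumptions, not from the thesis):

```python
import numpy as np

# Gradient projection: x^{k+1} = P_C(x^k - a_k * grad f(x^k)).
# Illustrative problem: minimize f(x) = 0.5 ||x - p||^2 over the closed
# unit ball C. The gradient is grad f(x) = x - p (Lipschitz constant
# L = 1), and the minimizer over C is the projection p / ||p|| of p.

p = np.array([3.0, 4.0])

def grad_f(x):
    return x - p

def P_C(x):                          # projection onto the unit ball
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

x = np.zeros(2)
for k in range(100):
    x = P_C(x - 0.5 * grad_f(x))     # constant stepsize a_k = 0.5 < 2/L

print(x)  # converges to p / ||p|| = [0.6, 0.8]
```

A constant stepsize in (0, 2/L) makes the map x ↦ P_C(x − α∇f(x)) averaged, which is the fixed point viewpoint the abstract describes.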
|
5 |
Viscosity Approximation Methods for Generalized Equilibrium Problems and Fixed Point Problems. Huang, Yun-ru, 20 June 2008
The purpose of this paper is to investigate the problem of finding a common element of the set of solutions of a generalized equilibrium problem (for short, GEP) and the set of fixed points of a nonexpansive mapping in a Hilbert space. First, by using the well-known KKM technique we derive the existence and uniqueness of solutions of the auxiliary problems for the GEP. Second, on account of this result and Nadler's theorem, we introduce an iterative scheme by the viscosity approximation method for finding a common element of the set of solutions of the GEP and the set of fixed points of the nonexpansive mapping. Furthermore, it is proven that the sequences generated by this iterative scheme converge strongly to a common element of the set of solutions of the GEP and the set of fixed points of the nonexpansive mapping.
|
6 |
Random Function Iterations for Stochastic Feasibility Problems. Hermer, Neal, 24 January 2019
No description available.
|
7 |
Averaged mappings and its applications. Liang, Wei-Jie, 29 June 2010
A sequence {x_n} generated by the formula
x_{n+1} = (1 − α_n) x_n + α_n T_n x_n
is called the Krasnosel'skii-Mann algorithm, where {α_n} is a sequence in (0,1) and {T_n} is a sequence of nonexpansive mappings. We introduce the KM algorithm and prove that the sequence {x_n} it generates converges weakly. This result is used to solve the split feasibility problem, which is to find a point x with the property that x ∈ C and Ax ∈ Q, where C and Q are closed convex subsets of Hilbert spaces H1 and H2, respectively, and A is a bounded linear operator from H1 to H2. The purpose of this paper is to present some results which apply the KM algorithm to solve the split feasibility problem, the multiple-set split feasibility problem, and other applications.
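A minimal sketch of the KM recursion with a single nonexpansive mapping (the reflection T and the constant relaxation below are illustrative choices, not from the paper):

```python
import numpy as np

# Krasnosel'skii-Mann iteration: x_{n+1} = (1 - a_n) x_n + a_n T x_n.
# Illustrative T (not from the paper): the reflection across the x-axis,
# T(x) = (x0, -x1), a nonexpansive isometry with Fix(T) = the x-axis.

def T(x):
    return np.array([x[0], -x[1]])

x = np.array([2.0, 5.0])
for n in range(60):
    x = 0.7 * x + 0.3 * T(x)   # constant relaxation a_n = 0.3 in (0, 1)

print(x)  # first coordinate stays 2, second decays like 0.4^n
```

Note that the reflection itself never converges under direct iteration; it is the averaging with the identity that drives the iterates to a fixed point, which is the heart of the KM scheme.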
|
8 |
Iterative Methods for Minimization Problems over Fixed Point Sets. Chen, Yen-Ling, 02 June 2011
In this paper we study, through iterative methods, the minimization problem
min_{x∈C} Θ(x)    (P)
where the constraint set C is the set of fixed points of a nonexpansive mapping T in a real Hilbert space H, and the objective function Θ : H → R is assumed to be continuously Gâteaux differentiable. The gradient projection method for solving problem (P) involves the projection P_C. When C = Fix(T), we provide a
so-called hybrid iterative method for solving (P) which involves the mapping T only. Two special cases are included: (1) Θ(x) = (1/2)||x − u||^2 and (2) Θ(x) = <Ax, x> − <x, b>. The first case corresponds to finding the fixed point of T closest to u in the fixed point set Fix(T). Both cases have received much investigation recently.
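For case (1), minimizing Θ(x) = (1/2)||x − u||^2 over Fix(T) means finding the fixed point of T closest to u. A closely related Halpern-type iteration, x_{n+1} = a_n u + (1 − a_n) T x_n, is a standard scheme for this case; the sketch below uses an illustrative T and is not necessarily the thesis's hybrid method:

```python
import numpy as np

# Halpern-type iteration: x_{n+1} = a_n u + (1 - a_n) T x_n, with
# a_n -> 0 and sum a_n = infinity. Illustrative T (an assumption, not
# the thesis's setting): projection onto the x-axis, so Fix(T) is the
# x-axis and the point of Fix(T) closest to u is (u0, 0).

def T(x):
    return np.array([x[0], 0.0])

u = np.array([3.0, 7.0])
x = np.array([-5.0, 2.0])
for n in range(20000):
    a = 1.0 / (n + 2)
    x = a * u + (1 - a) * T(x)

print(x)  # approaches [3, 0], the fixed point of T closest to u
```

The scheme uses only evaluations of T, never the projection onto Fix(T), which is exactly the practical advantage the abstract attributes to the hybrid method.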
|
9 |
Hybrid Steepest-Descent Methods for Variational Inequalities. Huang, Wei-ling, 26 June 2006
Assume that F is a nonlinear operator on a real Hilbert space H which is strongly monotone and Lipschitzian on a nonempty closed convex subset C of H. Assume also that C is the intersection of the fixed point sets of a finite number of nonexpansive mappings on H. We make a slight modification of the iterative algorithm in Xu and Kim (Journal of Optimization Theory and Applications, Vol. 119, No. 1, pp. 185-201, 2003), which generates a sequence {x_n} from an arbitrary initial point x_0 in H. The sequence {x_n} is shown to converge in norm to the unique solution u* of the variational inequality, under conditions on the parameters different from those of Xu and Kim. Applications to the constrained generalized pseudoinverse are included. The results presented in this paper are complementary to Xu and Kim's theorems (Journal of Optimization Theory and Applications, Vol. 119, No. 1, pp. 185-201, 2003).
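A numerical sketch in the spirit of the hybrid steepest-descent scheme x_{n+1} = T x_n − λ_n μ F(T x_n) (the operator F, mapping T, and parameters below are illustrative assumptions, not Xu and Kim's exact setting):

```python
import numpy as np

# Hybrid steepest descent: x_{n+1} = T x_n - lam_n * mu * F(T x_n),
# with lam_n -> 0 and sum lam_n = infinity.
# Illustrative data: F(x) = x - b is 1-strongly monotone and
# 1-Lipschitz (so mu = 1 is admissible), and T is the projection onto
# the x-axis, so C = Fix(T) is the x-axis. The variational inequality
# <F(u*), x - u*> >= 0 for all x in C is solved by u* = P_C(b).

b = np.array([2.0, 5.0])

def T(x):
    return np.array([x[0], 0.0])

def F(x):
    return x - b

mu = 1.0
x = np.array([-10.0, 4.0])
for n in range(20000):
    lam = 1.0 / (n + 2)
    y = T(x)
    x = y - lam * mu * F(y)

print(x)  # approaches the VI solution u* = P_C(b) = [2, 0]
```

As in the thesis, the scheme never projects onto C directly; it only applies the nonexpansive mapping whose fixed point set is C, with a vanishing steepest-descent correction.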
|
10 |
Iterative Approaches to the Split Feasibility Problem. Chien, Yin-ting, 23 June 2009
In this paper we discuss iterative algorithms for solving the split feasibility
problem (SFP). We study the CQ algorithm from two approaches: one
is an optimization approach and the other is a fixed point approach. We
prove its convergence first by viewing it as a gradient-projection algorithm
and then as a fixed point algorithm. We also study a relaxed CQ algorithm in the
case where the sets C and Q are level sets of convex functions. In that case
we present a convergence theorem and provide a different and much simpler
proof compared with that of Yang [7].
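A minimal sketch of the CQ algorithm for the SFP (the sets C and Q, the operator A, and the stepsize are illustrative choices, not from the paper):

```python
import numpy as np

# CQ algorithm: x_{k+1} = P_C(x_k - g * A.T @ (A x_k - P_Q(A x_k))),
# with stepsize 0 < g < 2 / ||A||^2. Illustrative data: C is the unit
# ball in R^2, Q is the box [1,2] x [0,1], and A is a diagonal matrix.
# The SFP is consistent: x = (1, 0) lies in C with A x = (2, 0) in Q.

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])

def P_C(x):                              # projection onto the unit ball
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def P_Q(y):                              # projection onto the box
    return np.clip(y, [1.0, 0.0], [2.0, 1.0])

g = 0.4                                  # g < 2 / ||A||^2 = 0.5
x = np.array([-3.0, 3.0])
for k in range(2000):
    Ax = A @ x
    x = P_C(x - g * (A.T @ (Ax - P_Q(Ax))))

# x should solve the SFP: x in C and A x in Q (up to tolerance).
Ax = A @ x
print(np.linalg.norm(x), Ax)
```

The update is exactly the gradient-projection step for minimizing (1/2)||Ax − P_Q(Ax)||^2 over C, which is the optimization reading of the CQ algorithm discussed in the paper.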
|