191.
On minimally-supported D-optimal designs for polynomial regression with log-concave weight function. Lin, Hung-Ming, 29 June 2005.
This paper studies minimally-supported D-optimal designs for the polynomial regression model with logarithmically concave (log-concave) weight functions.
Many commonly used weight functions in the design literature are log-concave.
We show that the determinant of the information matrix of a minimally-supported design is a log-concave function of the ordered support points and that the D-optimal design is unique. Therefore, the minimally-supported D-optimal design can be determined efficiently by standard constrained concave programming algorithms.
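A minimal numerical sketch of this computation, assuming an illustrative log-concave weight w(x) = exp(-x) on [0, 5] and a cubic model (neither taken from the paper): for a minimally-supported design with n = d + 1 equally weighted support points, the log-determinant of the information matrix reduces, up to constants, to the concave function sum_i log w(x_i) + 2 sum_{i<j} log(x_j - x_i), which a standard bound-constrained optimizer can maximize.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_det(x, log_w):
    """Negative log-determinant of the information matrix, up to constants,
    for a minimally-supported design with equal masses at the points x."""
    x = np.sort(x)
    if np.min(np.diff(x)) < 1e-9:           # guard against coincident points
        return 1e12
    i, j = np.triu_indices(len(x), k=1)     # index pairs with j > i
    return -(log_w(x).sum() + 2.0 * np.log(x[j] - x[i]).sum())

d = 3                                       # cubic model, n = d + 1 support points
log_w = lambda x: -x                        # w(x) = exp(-x), a log-concave weight
x0 = np.linspace(0.5, 4.5, d + 1)           # well-separated starting points
res = minimize(neg_log_det, x0, args=(log_w,), bounds=[(0.0, 5.0)] * (d + 1))
support = np.sort(res.x)
print(support)
```

Because the objective is concave in the ordered points, the local optimum found this way is the global one, matching the uniqueness result above.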
192.
Using Enhanced Weighted Voronoi Diagram for Mobile Service Positioning System. Tsai, Yi-Chun, 05 September 2005.
The objective of this thesis is to design a mobile positioning system with low system complexity and minimal modification of existing Mobile Communication System components, so as to improve the likelihood of adoption by service providers. We therefore propose a Mobile Service Positioning System for cellular mobile communication systems. It works from the location information of base stations and the relative signal strengths of those base stations as received by the mobile phone. We adjust an environment factor to account for the different path loss caused by different geographical features, and then apply the Enhanced Weighted Voronoi Diagram (EWVD) algorithm to estimate the area in which the mobile phone is located. The resulting Mobile Positioning System has three properties: low deployment cost, a small location area, and fast response time.
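The core geometric step can be sketched with a multiplicatively weighted Voronoi assignment on a grid (the station layout, RSS values, and RSS-to-weight mapping below are illustrative assumptions, not the thesis's EWVD algorithm or its environment-factor adjustment):

```python
import numpy as np

# Each base station i gets a weight w_i derived from the reported signal
# strength; the phone is placed in the multiplicatively weighted Voronoi cell
# of the strongest station, i.e. the points x minimizing ||x - b_i|| / w_i.
stations = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])   # base stations
rss_dbm = np.array([-60.0, -75.0, -70.0])                    # measured RSS
weights = 10 ** (rss_dbm / 20.0)             # simple RSS -> weight mapping

xs, ys = np.meshgrid(np.linspace(-2, 12, 141), np.linspace(-2, 10, 121))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)

# weighted distance of every grid point to every station
dist = np.linalg.norm(grid[:, None, :] - stations[None, :, :], axis=2)
wdist = dist / weights                       # multiplicatively weighted

cell = wdist.argmin(axis=1)                  # weighted Voronoi assignment
region = grid[cell == rss_dbm.argmax()]      # candidate area for the phone
print(len(region), region.mean(axis=0))
```

Shrinking a cell's weight shrinks its region, which is how signal-strength ratios translate into a smaller locating area.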
193.
Monitoring High Quality Processes: A Study of Estimation Errors on the Time-Between-Events Exponentially Weighted Moving Average Schemes. Ozsan, Guney, 01 September 2008.
In some production environments the defect rates are so low that the fraction of nonconforming items reaches the parts-per-million level. In such environments, monitoring the number of conforming items between consecutive nonconforming items, namely the time between events (TBE), is often suggested. However, a common practice in the design of control charts for TBE monitoring is the assumption of known process parameters. In many applications, the true values of the process parameters are not known; their estimates must be determined from a sample obtained from the process at a time when it is expected to operate in a state of statistical control. The additional variability introduced through sampling may significantly affect the performance of a control chart. In this study, the effect of parameter estimation on the performance of time-between-events exponentially weighted moving average (TBE EWMA) schemes is examined. Conditional performance is evaluated to show the effect of estimation, and marginal performance is analyzed in order to make recommendations on sample size requirements. A Markov chain approach is used to evaluate the results.
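A minimal sketch of the setting, assuming exponential TBE observations and illustrative design constants (the smoothing constant, control-limit multiplier, and Phase-I sample size are not values from the thesis): the chart is designed around an estimated in-control mean, which is precisely the extra sampling variability under study.

```python
import numpy as np

rng = np.random.default_rng(7)
theta0, m, lam, L = 100.0, 50, 0.1, 2.7      # true mean, Phase-I size, EWMA constants

theta_hat = rng.exponential(theta0, size=m).mean()   # Phase-I estimate of the mean
sigma_z = theta_hat * np.sqrt(lam / (2.0 - lam))     # asymptotic EWMA std. dev.
lcl = theta_hat - L * sigma_z                # shorter gaps signal quality decline

z, run_length = theta_hat, 0
for x in rng.exponential(theta0, size=100_000):      # in-control Phase-II stream
    z = (1.0 - lam) * z + lam * x            # EWMA of the time between events
    run_length += 1
    if z < lcl:                              # signal: event rate appears higher
        break
print(run_length)
```

Repeating this over many Phase-I samples gives the conditional run-length distribution; averaging over the sampling distribution of theta_hat gives the marginal performance discussed above.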
194.
A Heuristic Approach for the Single Machine Scheduling Tardiness Problems. Ozbakir, Saffet Ilker, 01 September 2011.
Özbakir, Saffet Ilker
M.Sc., Department of Industrial Engineering
Supervisor: Prof. Dr. Ömer Kirca
September 2011, 102 pages
In this thesis, we study the single machine scheduling problem. Our general aim is to schedule a set of jobs on the machine so as to minimize tardiness. The problem is studied for two objectives: minimizing total tardiness and minimizing total weighted tardiness.
Solving this problem optimally is difficult, because both the total tardiness problem and the total weighted tardiness problem are NP-hard. We therefore construct a heuristic procedure, divided into two parts: a construction part and an improvement part. The construction heuristic is based on grouping the jobs, solving these groups, and then fixing a particular number of jobs. In addition, we use three types of improvement heuristics: the sliding forward method, the sliding backward method, and the pairwise interchange method.
Computational results are reported for problem sizes 20, 40, 50 and 100 for the total tardiness problem, and for problem sizes 20 and 40 for the total weighted tardiness problem. Experiments are designed to investigate the effect of three factors (problem size, tardiness factor, and relative range of due dates) on the computational difficulty of the problems. The results show that the heuristic proposed in this thesis is robust to changes in these factors.
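The pairwise interchange step can be sketched as follows (job data are illustrative; the construction heuristic and the sliding methods are omitted):

```python
def total_weighted_tardiness(seq, p, d, w):
    """Total weighted tardiness of a job sequence on a single machine."""
    t, total = 0, 0
    for j in seq:
        t += p[j]                            # completion time of job j
        total += w[j] * max(0, t - d[j])     # weighted tardiness contribution
    return total

def pairwise_interchange(seq, p, d, w):
    """Swap adjacent jobs whenever the swap lowers the objective,
    repeating until no swap helps (each accepted swap strictly improves,
    so the loop terminates)."""
    seq = list(seq)
    improved = True
    while improved:
        improved = False
        for i in range(len(seq) - 1):
            cand = seq[:i] + [seq[i + 1], seq[i]] + seq[i + 2:]
            if total_weighted_tardiness(cand, p, d, w) < total_weighted_tardiness(seq, p, d, w):
                seq, improved = cand, True
    return seq

p = [4, 2, 6, 3]          # processing times
d = [5, 3, 11, 7]         # due dates
w = [1, 2, 1, 3]          # tardiness weights
best = pairwise_interchange([0, 1, 2, 3], p, d, w)
print(best, total_weighted_tardiness(best, p, d, w))
```

Setting all weights to 1 gives the total tardiness variant of the same improvement step.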
195.
A Novel Scatternet Scheme with QoS Support and IP Compatibility. Tan, Der-Hwa, 03 August 2001.
Bluetooth technology encompasses a simple, low-cost, low-power, global radio system for integration into mobile devices, intended to solve a simple problem: replacing the cables used on mobile devices with radio frequency links. Such devices can quickly form a secure ad-hoc "piconet" and communicate among the connected devices. While WLANs had good ad-hoc networking capabilities, there was no clear market standard among them, and no global standard that could be integrated and implemented into small handheld devices. Some market analysts predicted some 1.4 billion Bluetooth devices in operation by the year 2005 [1]. This is the motivation for replacing the cable from the network adapter with a low-cost RF link, now called Bluetooth. However, the current specification 1.1 [2][3] does not describe algorithms or mechanisms for creating a scatternet, owing to a variety of unsolved issues. Since the upper layers are not defined in Bluetooth, it is not possible to implement a scatternet within the current specification. Hence, in this research we make some modifications to the Bluetooth protocol in order to support the transmission of packets in a scatternet. In this paper we describe a novel scatternet architecture and present link performance results for the proposed architecture.
196.
Development of a branch and price approach involving vertex cloning to solve the maximum weighted independent set problem. Sachdeva, Sandeep, 12 April 2006.
We propose a novel branch-and-price (B&P) approach to solve the maximum weighted independent set problem (MWISP). Our approach uses clones of vertices to create edge-disjoint partitions from vertex-disjoint partitions. We solve the MWISP on sub-problems based on these edge-disjoint partitions using a B&P framework, which coordinates sub-problem solutions by enforcing an equivalence relationship between a vertex and each of its clones. We present test results for standard instances and randomly generated graphs for comparison. We show analytically and computationally that our approach gives tight bounds and solves both dense and sparse graphs quite quickly.
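For reference, the objective the B&P framework optimizes can be stated as a tiny brute-force sketch, viable only on toy graphs (the graph and weights are illustrative):

```python
from itertools import combinations

def max_weighted_independent_set(vertices, edges, weight):
    """Exhaustively find a maximum-weight vertex subset containing no edge."""
    edge_set = {frozenset(e) for e in edges}
    best, best_w = (), 0
    for r in range(len(vertices) + 1):
        for subset in combinations(vertices, r):
            if any(frozenset(p) in edge_set for p in combinations(subset, 2)):
                continue                     # subset contains an edge: skip
            w = sum(weight[v] for v in subset)
            if w > best_w:
                best, best_w = subset, w
    return set(best), best_w

V = [0, 1, 2, 3, 4]
E = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # a 5-cycle
w = {0: 3, 1: 5, 2: 4, 3: 4, 4: 2}
S, total = max_weighted_independent_set(V, E, w)
print(S, total)                              # → {1, 3} with weight 9
```

The enumeration is exponential in |V|, which is exactly why decomposition schemes such as the vertex-cloning B&P approach are needed on larger instances.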
197.
Multi-resolution methods for high fidelity modeling and control allocation in large-scale dynamical systems. Singla, Puneet, 16 August 2006.
This dissertation introduces novel methods for solving highly challenging modeling and control problems, motivated by advanced aerospace systems. Adaptable, robust and computationally efficient multi-resolution approximation algorithms based on Radial Basis Function Network and Global-Local Orthogonal Mapping approaches are developed to address various problems associated with the design of large-scale dynamical systems. The main feature of the Radial Basis Function Network approach is the unique direction-dependent scaling and rotation of the radial basis functions via a novel Directed Connectivity Graph approach. Learning the shaping and rotation parameters of the radial basis functions led to a broadly useful approximation approach that produces global approximations capable of good local approximation for many moderate-dimensional applications. However, even with these refinements, applications with many high-frequency local input/output variations and a high-dimensional input space remain a challenge and motivated an entirely new approach. The Global-Local Orthogonal Mapping method is based upon a novel averaging process that allows construction of a piecewise continuous global family of local least-squares approximations, while retaining the freedom to vary in a general way the resolution (e.g., the degrees of freedom) of the local approximations. These approximation methodologies are compatible with a wide variety of disciplines such as continuous function approximation, dynamic system modeling, nonlinear signal processing and time series prediction. Further, related methods are developed for the modeling of dynamical systems nominally described by nonlinear differential equations and for solving the static and dynamic response of distributed parameter systems in an efficient manner. Finally, a hierarchical control allocation algorithm is presented to solve the control allocation problem for highly over-actuated systems that might arise with the development of embedded systems. The control allocation algorithm makes use of the concept of distribution functions to keep the "curse of dimensionality" in check. The studies in the dissertation focus on demonstrating, through analysis, simulation, and design, the applicability and feasibility of these approximation algorithms on a variety of examples. The results from these studies are of direct utility in addressing the "curse of dimensionality" and the frequent redundancy of neural network approximation.
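The direction-dependent scaling idea can be sketched with a small anisotropic Gaussian RBF network whose output weights are fit by linear least squares (the centers, shape matrix, and target function are illustrative assumptions; this is not the dissertation's Directed Connectivity Graph construction, which learns a shape matrix per center):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 2))                 # training inputs
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])         # target to approximate

centers = rng.uniform(-1, 1, size=(25, 2))            # RBF centers
# A symmetric positive-definite shape matrix makes the basis functions
# ellipsoidal, i.e. scaled and rotated in a preferred direction.
A = np.array([[8.0, 2.0], [2.0, 4.0]])

def design(X):
    """Matrix of anisotropic Gaussian responses exp(-diff^T A diff)."""
    diff = X[:, None, :] - centers[None, :, :]
    q = np.einsum('nki,ij,nkj->nk', diff, A, diff)    # quadratic form per pair
    return np.exp(-q)

# linear output weights by least squares
w, *_ = np.linalg.lstsq(design(X), y, rcond=None)
resid = np.linalg.norm(design(X) @ w - y) / np.linalg.norm(y)
print(resid)                                          # relative fit error
```

Rotating and stretching A per center is what lets a fixed budget of basis functions follow direction-dependent local behavior instead of wasting resolution isotropically.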
198.
General schedulability bound analysis and its applications in real-time systems. Wu, Jianjia, 17 September 2007.
A real-time system is a computing, communication, or information system with deadline requirements. To meet these requirements, most systems use a mechanism known as a schedulability test, which determines whether each admitted task can meet its deadline; a new task is not admitted unless it passes the test. Schedulability tests can be either direct or indirect. The utilization-based schedulability test is the most common approach: a task is admitted only if the total system utilization is lower than a pre-derived bound. While the utilization-bound schedulability test is simple and effective, the bound itself is often difficult to derive, and because of this analytical complexity, utilization bound results are usually obtained on a case-by-case basis. In this dissertation, we develop a general framework that allows effective derivation of schedulability bounds for different workload patterns and schedulers. We introduce an analytical model capable of describing a wide range of task and scheduler behaviors. We propose a new definition of utilization, called workload rate. While similar to utilization, workload rate enables flexible representation of different scheduling and workload scenarios and leads to uniform proofs of schedulability bounds. We introduce two types of workload constraint functions, s-shaped and r-shaped, for flexible and accurate characterization of task workloads. We derive parameterized schedulability bounds for arbitrary static priority schedulers, weighted round robin schedulers, and timed token ring schedulers. Existing utilization bounds for these schedulers are obtained from the closed-form formula by direct assignment of proper parameters. Some of these results are applied to a cluster computing environment.
The results developed in this dissertation will aid future schedulability bound analysis by supplying a unified modeling framework, and will ease the implementation of practical real-time systems by providing a set of ready-to-use bound results.
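A concrete instance of the utilization-based test this framework generalizes is the classic Liu-Layland bound for rate-monotonic scheduling of n periodic tasks: admit the task set only if U = sum(C_i / T_i) stays below n(2^(1/n) - 1). The task parameters below are illustrative.

```python
def rm_utilization_test(tasks):
    """tasks: list of (C, T) pairs -- worst-case execution time and period.
    Returns total utilization, the Liu-Layland bound, and the admit decision."""
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)         # n(2^(1/n) - 1), ~0.693 as n grows
    return u, bound, u <= bound

u, bound, ok = rm_utilization_test([(1, 4), (1, 5), (2, 10)])
print(f"U = {u:.3f}, bound = {bound:.3f}, schedulable: {ok}")
```

The test is sufficient but not necessary: task sets above the bound may still be schedulable, which is one reason tighter, scheduler-specific bounds of the kind derived here are valuable.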
199.
Nonparametric tests for interval-censored failure time data via multiple imputation. Huang, Jin-long, 26 June 2008.
Interval-censored failure time data often occur in follow-up studies where subjects can only be followed periodically, so the failure time is known only to lie in an interval. In this paper we consider the problem of comparing two or more interval-censored samples. We propose a multiple imputation method for discrete interval-censored data that imputes exact failure times from the interval-censored observations and then applies existing tests for exact data, such as the log-rank test, to the imputed data. The test statistic and covariance matrix are calculated by our proposed multiple imputation technique; the formula for the covariance matrix estimator is similar to the estimator used by Follmann, Proschan and Leifer (2003) for clustered data. Through simulation studies we find that the performance of the proposed log-rank type test is comparable to that of the test proposed by Finkelstein (1986), and is better than that of the two existing log-rank type tests proposed by Sun (2001) and Zhao and Sun (2004), owing to the differences in the method of multiple imputation and in the covariance matrix estimation. The proposed method is illustrated by means of an example involving patients with breast cancer. We also investigate applying our method to other two-sample comparison tests for exact data, such as Mantel's test (1967) and the integrated weighted difference test.
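The imputation step can be sketched as follows, assuming discrete interval-censored data and a deliberately simplified pooled statistic (a group-mean difference stands in for the log-rank statistic; the intervals and group labels are illustrative, and the paper's calibrated draws and covariance estimator are omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
intervals = [(0, 3), (2, 5), (1, 4), (4, 8), (3, 7), (5, 9)]  # (L, R] pairs
group = np.array([0, 0, 0, 1, 1, 1])
M = 200                                      # number of imputations

stats = []
for _ in range(M):
    # impute an exact failure time uniformly from the support points in (L, R]
    t = np.array([rng.integers(l + 1, r + 1) for l, r in intervals])
    # test statistic on the completed data set (here: group-mean difference)
    stats.append(t[group == 1].mean() - t[group == 0].mean())
print(np.mean(stats))                        # statistic pooled over imputations
```

In the actual method, each imputed data set feeds an exact-data test such as the log-rank test, and the between-imputation spread of the statistics enters the covariance estimate.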
200.
Multiresolution weighted norm equivalences and applications. Beuchler, Sven; Schneider, Reinhold; Schwab, Christoph, 05 April 2006.
We establish multiresolution norm equivalences in weighted spaces <i>L<sup>2</sup><sub>w</sub></i>((0,1)) with possibly singular weight functions <i>w(x)</i>≥0 in (0,1). Our analysis exploits the locality of the biorthogonal wavelet basis and its dual basis functions. The discrete norms are sums of wavelet coefficients which are weighted with respect to the collocated weight function <i>w(x)</i> within each scale. Since norm equivalences for Sobolev norms are by now well known, our result can also be applied to weighted Sobolev norms. We apply our theory to the problem of preconditioning <i>p</i>-version FEM and wavelet discretizations of degenerate elliptic problems.
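Schematically, and with notation assumed rather than taken verbatim from the paper, such an equivalence has the form: for a biorthogonal wavelet basis with duals ψ̃<i><sub>j,k</sub></i> and collocation points <i>x<sub>j,k</sub></i>,

```latex
\|u\|_{L^2_w(0,1)}^2 \;\sim\; \sum_{j}\sum_{k} w(x_{j,k})\,
  \bigl|\langle u,\tilde{\psi}_{j,k}\rangle\bigr|^2 ,
```

with equivalence constants independent of the scale <i>j</i>; diagonal scaling by these weighted coefficients is what yields the preconditioners for degenerate elliptic problems mentioned above.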