  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Subset selection based on likelihood from uniform and related populations

Chotai, Jayanti January 1979 (has links)
Let π1, π2, ..., πk be k (≥ 2) populations. Let πi (i = 1, 2, ..., k) be characterized by the uniform distribution on (ai, bi), where exactly one of ai and bi is unknown. With unequal sample sizes, suppose that we wish to select a random-size subset of the populations containing the one with the smallest value of θi = bi − ai. Rule Ri selects πi iff a likelihood-based k-dimensional confidence region for the unknown (θ1, ..., θk) contains at least one point having θi as its smallest component. A second rule, R, is derived through a likelihood ratio and is equivalent to that of Barr and Rizvi (1966) when the sample sizes are equal. Numerical comparisons are made. The results apply to the larger class of densities g(z; θi) = M(z)Q(θi) iff a(θi) < z < b(θi). Extensions to the cases when both ai and bi are unknown and when θmax is of interest are indicated.
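As a hedged illustration only (this is not the thesis's rule Ri; the factor c and the data below are assumptions): for a uniform (ai, bi) population with ai known, the maximum-likelihood estimate of θi = bi − ai is max(sample) − ai, and a simplified likelihood-style subset rule keeps every population whose estimate lies within a factor c of the smallest estimate.

    # Illustrative sketch only: a simplified likelihood-style subset-selection
    # rule for uniform (a_i, b_i) populations with a_i known. This is NOT the
    # thesis's rule R_i; the factor c is a hypothetical tuning constant.
    import random

    def theta_mle(sample, a_i):
        """MLE of theta_i = b_i - a_i when a_i is known: max(sample) - a_i."""
        return max(sample) - a_i

    def select_subset(samples, a, c=1.5):
        """Keep every population whose estimated theta is within a factor c
        of the smallest estimate, hoping to include the true smallest theta."""
        estimates = [theta_mle(s, a_i) for s, a_i in zip(samples, a)]
        smallest = min(estimates)
        return [i for i, est in enumerate(estimates) if est <= c * smallest]

    if __name__ == "__main__":
        random.seed(1)
        a = [0.0, 0.0, 0.0]                     # known left endpoints
        theta_true = [1.0, 1.5, 3.0]            # unknown ranges theta_i = b_i - a_i
        samples = [[random.uniform(lo, lo + t) for _ in range(20)]
                   for lo, t in zip(a, theta_true)]
        print(select_subset(samples, a))        # e.g. [0] or [0, 1]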
102

Subset selection based on likelihood ratios : the normal means case

Chotai, Jayanti January 1979 (has links)
Let π1, ..., πk be k (≥ 2) populations such that πi, i = 1, 2, ..., k, is characterized by the normal distribution with unknown mean μi and variance aiσ², where ai is known and σ² may be unknown. Suppose that on the basis of independent samples of size ni from πi (i = 1, 2, ..., k), we are interested in selecting a random-size subset of the given populations which hopefully contains the population with the largest mean. Based on likelihood ratios, several new procedures for this problem are derived in this report. Some of these procedures are compared with the classical procedure of Gupta (1956, 1965) and are shown to be better in certain respects. (New revised edition; this is a slightly revised version of Statistical Research Report No. 1978-6.)
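For context, a hedged sketch of the classical Gupta-type rule mentioned above (equal sample sizes n, common known σ): select πi iff x̄i ≥ max_j x̄j − d·σ/√n, where d is chosen so that the probability of a correct selection is at least P*. The value of d and the sample means below are placeholder assumptions, not tabulated constants.

    # Hedged sketch of the classical Gupta-type subset selection rule for
    # normal means (equal sample sizes n, common known sigma):
    #     select population i  iff  xbar_i >= max_j xbar_j - d * sigma / sqrt(n)
    # In Gupta (1956, 1965) d is chosen so P(correct selection) >= P*; the
    # value used here is a placeholder, not a tabulated constant.
    import math

    def gupta_subset(means, sigma, n, d):
        """Return the indices of the populations retained by the rule above."""
        cutoff = max(means) - d * sigma / math.sqrt(n)
        return [i for i, m in enumerate(means) if m >= cutoff]

    if __name__ == "__main__":
        sample_means = [2.1, 2.8, 3.0, 1.4]     # assumed sample means
        print(gupta_subset(sample_means, sigma=1.0, n=25, d=2.0))  # -> [1, 2]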
103

A Collapsing Method for Efficient Recovery of Optimal Edges

Hu, Mike January 2002 (has links)
In this thesis we present a novel algorithm, HyperCleaning*, for effectively inferring phylogenetic trees. The method is based on the quartet method paradigm and is guaranteed to recover the best supported edges of the underlying phylogeny based on the witness quartet set. This is performed efficiently using a collapsing mechanism that employs a memory/time tradeoff to ensure no loss of information. This enables HyperCleaning* to solve the relaxed version of the Maximum-Quartet-Consistency problem feasibly, thus providing a valuable tool for inferring phylogenies using quartet-based analysis.
104

A New Third Compartment Significantly Improves Fit and Identifiability in a Model for Ace2p Distribution in Saccharomyces cerevisiae after Cytokinesis.

Järvstråt, Linnea January 2011 (has links)
Asymmetric cell division is an important mechanism for the differentiation of cells during embryogenesis and cancer development. Saccharomyces cerevisiae divides asymmetrically and is therefore used as a model system for understanding the mechanisms behind asymmetric cell division. Ace2p is a transcription factor in yeast that localizes primarily to the daughter nucleus during cell division. The distribution of Ace2p is visualized using a fusion protein with yellow fluorescent protein (YFP) and confocal microscopy. Systems biology provides a new approach to investigating biological systems through the use of quantitative models. The localization of the transcription factor Ace2p in yeast during cell division has been modelled using ordinary differential equations, and such modelling is evaluated here. A 2-compartment model for the localization of Ace2p in yeast post-cytokinesis proposed in earlier work was found to be insufficient when new data were included in the model evaluation. Ace2p localization in the dividing yeast cell pair before cytokinesis was investigated using a similar approach, and the corresponding model did not explain the data adequately. A 3-compartment model is therefore proposed; its improvement over the 2-compartment model is statistically significant. Simulations of the 3-compartment model predict a fast decrease in the amount of Ace2p in the cytosol close to the nucleus during the first seconds after each bleaching of the fluorescence. Experimental investigation of the cytosol close to the nucleus could test whether these fast dynamics are present after each bleaching. The parameters in the model have been estimated using the profile likelihood approach in combination with global optimization by simulated annealing. Confidence intervals for the parameters have been found for the 3-compartment model of Ace2p localization post-cytokinesis. In conclusion, the profile likelihood approach has proven a good method for estimating parameters, and the new 3-compartment model allows reliable parameter estimates in the post-cytokinesis situation. A new Matlab implementation of the profile likelihood method is appended.
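To illustrate the profile likelihood approach referred to above, here is a generic sketch on a toy exponential-decay model (not the thesis's 3-compartment ODE model or its Matlab implementation; the model, noise level and parameter grid are assumptions): one parameter is fixed on a grid, the remaining parameters are re-fitted, and the values whose cost stays within a chi-square threshold form the confidence interval.

    # Generic sketch of the profile-likelihood idea for parameter confidence
    # intervals: fix one parameter on a grid, re-fit the others, and accept
    # values whose cost stays within a chi-square threshold.
    # Toy exponential-decay model; NOT the thesis's 3-compartment ODE model.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import chi2

    t = np.linspace(0, 5, 30)
    y_obs = 2.0 * np.exp(-0.7 * t) + np.random.default_rng(0).normal(0, 0.05, t.size)

    def cost(params):
        amp, rate = params
        return np.sum((amp * np.exp(-rate * t) - y_obs) ** 2)

    best = minimize(cost, x0=[1.0, 1.0])
    # with known noise sd 0.05, -2*delta(logL) = delta(SSR) / 0.05**2
    threshold = best.fun + chi2.ppf(0.95, df=1) * 0.05 ** 2

    profile_ci = []
    for rate_fixed in np.linspace(0.4, 1.0, 61):
        # re-optimize the remaining parameter with the rate held fixed
        prof = minimize(lambda a: cost([a[0], rate_fixed]), x0=[best.x[0]])
        if prof.fun <= threshold:
            profile_ci.append(rate_fixed)

    print("approx. 95% CI for rate:", min(profile_ci), "-", max(profile_ci))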
106

A comparably robust approach to estimate the left-censored data of trace elements in Swedish groundwater

Li, Cong January 2012 (has links)
The groundwater data in this thesis, taken from the database of Sveriges Geologiska Undersökning, characterize the chemical and quantitative status of groundwater in Sweden. Values below certain thresholds are usually recorded only as quantification limits, so the data are left-censored. Accordingly, this thesis is aimed at handling this kind of data. It does so by using the EM algorithm to obtain maximum likelihood estimates, and estimates of the distributions of the censored trace-element data are presented. Related simulations show that the estimation is acceptable.
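As a hedged illustration of the underlying idea (the thesis obtains the maximum likelihood estimates via the EM algorithm; here the same left-censored likelihood is maximized directly with a numerical optimizer, and the concentrations and detection limits are made-up example values):

    # Hedged sketch: maximum-likelihood fit of a lognormal to left-censored
    # concentrations (values below a detection limit are only known to lie
    # below it). The thesis reaches the MLE via the EM algorithm; here the
    # same censored-data likelihood is maximized directly for illustration.
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize

    # assumed example data: detected log-concentrations and detection limits
    obs = np.log([0.8, 1.5, 2.3, 0.6, 3.1])          # detected values (log scale)
    dl  = np.log([0.5, 0.5, 0.2])                    # limits of the censored values

    def neg_loglik(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)                    # keep sigma positive
        ll_obs = norm.logpdf(obs, mu, sigma).sum()   # density for detected values
        ll_cen = norm.logcdf(dl, mu, sigma).sum()    # P(X < limit) for censored values
        return -(ll_obs + ll_cen)

    fit = minimize(neg_loglik, x0=[0.0, 0.0])
    print("mu =", fit.x[0], "sigma =", np.exp(fit.x[1]))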
107

Robust Channel Estimation for Cooperative Communication Systems in the Presence of Relay Misbehaviors

Chou, Po-Yen 17 July 2012 (has links)
In this thesis, we investigate the problem of channel estimation in amplify-and-forward cooperative communication systems when the network may contain selfish relays. The information received at the destination is detected and then used to estimate the channel. In previous studies, the relays were assumed to deliver the information cooperatively, so the destination receives the information sent from the source without any selfish relay; the channel is then estimated under this overly idealistic assumption. Unfortunately, the assumption does not hold in real applications: there is currently no mechanism to guarantee that relays will always cooperate, and the performance of channel estimation is significantly degraded when selfish relays are present in the network. Therefore, this thesis considers an amplify-and-forward cooperative communication system with direct transmission and proposes a detection mechanism to overcome the misbehaving-relay problem. The detection is based on a likelihood ratio test that uses both the direct transmission and the relayed information. The detection result is then used to reconstruct the codeword used for estimating the product channel gain of the source-to-relay and relay-to-destination links. The mathematical derivation for the considered problem is developed, and numerical simulations for illustration are also carried out in the thesis. The simulation results verify that the proposed method is indeed able to achieve robust channel estimation.
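A generic sketch of a likelihood ratio test of the kind described, deciding between a cooperative relay and a misbehaving one under Gaussian noise (the signal models, noise level and threshold are placeholder assumptions, not the thesis's detector for the amplify-and-forward system):

    # Generic sketch of a likelihood-ratio test for received samples r
    # (Gaussian noise, known variance):
    #   H0: relay cooperated  -> r ~ N(s_coop, sigma^2)
    #   H1: relay misbehaved  -> r ~ N(0, sigma^2)   (placeholder model)
    # The signal models and threshold are assumptions, not the thesis's detector.
    import numpy as np
    from scipy.stats import norm

    def lrt_misbehaving(r, s_coop, sigma, threshold=0.0):
        """Return True if the log-likelihood ratio favours H1 (misbehaviour)."""
        ll_h0 = norm.logpdf(r, loc=s_coop, scale=sigma).sum()
        ll_h1 = norm.logpdf(r, loc=0.0, scale=sigma).sum()
        return (ll_h1 - ll_h0) > threshold

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        s = np.ones(64)                          # expected relayed codeword (assumed)
        r = 0.0 * s + rng.normal(0, 0.5, s.size) # relay sent nothing useful
        print(lrt_misbehaving(r, s, sigma=0.5))  # expected: True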
108

Generalized Maximum-Likelihood Algorithm for Time Delay Estimation in UWB Radio

Tsai, Wen-Chieh 24 July 2004 (has links)
The main purpose of this thesis is to estimate the direct path in a dense multipath Ultra Wide-Band (UWB) environment. The time-of-arrival (ToA) estimation is based on the Generalized Maximum-Likelihood (GML) algorithm. Nevertheless, the GML algorithm is so time-consuming that results take a very long time to obtain and sometimes fail to converge. Hence, schemes that improve the algorithm are investigated, in which the search is executed sequentially. Two threshold parameters must be determined: one concerns the arrival time of the estimated path, the other the fading amplitude of the estimated path. The thresholds are used to terminate the sequential algorithm, and their determination is based on error analysis, including the probability of error and the root-mean-square error. The analysis of the probability of error involves the probability of false alarm and the probability of miss; however, a trade-off between these two probabilities exists when determining the thresholds, so the thresholds are determined according to the required probability of error. We propose an improved scheme for determining the two thresholds: candidate threshold pairs are evaluated within an appropriate range, the root-mean-square error for each pair is calculated, and the pair yielding the smallest error is chosen for use in ToA estimation. The simulation results show that, when the SNR falls between -4 dB and 16 dB, the proposed scheme yields a smaller estimation error.
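The threshold-selection idea can be sketched as a grid search: candidate (search-window, amplitude) threshold pairs are evaluated on simulated realizations and the pair with the smallest root-mean-square ToA error is kept. The toy channel and detector below are assumptions for illustration, not the GML estimator or the UWB channel model used in the thesis.

    # Hedged sketch: pick the (window, amplitude) threshold pair that gives
    # the smallest RMS ToA error on simulated realizations. Toy channel and
    # detector only; not the thesis's GML estimator or UWB channel model.
    import numpy as np

    rng = np.random.default_rng(3)

    def simulate_realization(true_toa=20, n=200):
        """Noise plus a weak direct path at true_toa and a stronger later path."""
        r = rng.normal(0.0, 0.2, n)
        r[true_toa] += 0.6
        r[true_toa + 7] += 1.0
        return r

    def estimate_toa(r, window, amp_thresh):
        """First sample inside the search window exceeding the amplitude threshold."""
        idx = np.nonzero(np.abs(r[:window]) > amp_thresh)[0]
        return int(idx[0]) if idx.size else window     # declare a miss at window end

    best = None
    for window in (40, 60, 80):
        for amp_thresh in (0.3, 0.5, 0.7):
            errors = [estimate_toa(simulate_realization(), window, amp_thresh) - 20
                      for _ in range(200)]
            rmse = float(np.sqrt(np.mean(np.square(errors))))
            if best is None or rmse < best[0]:
                best = (rmse, window, amp_thresh)

    print("chosen (window, amplitude) thresholds:", best[1:], "RMSE:", best[0])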
109

Factors Affecting the Purchase Intention of Recommended Products in On-line Stores

Ku, Yi-Cheng 28 July 2005 (has links)
The rapid increase of available products and information on the Internet has created new problems for consumers. Instead of lacking adequate alternatives, consumers must spend a lot of effort filtering and processing information, and overcoming information overload becomes a key issue in information search. As a result, information filtering and product recommendation are increasingly popular among on-line stores, which can collect user preferences and use this information for product recommendation and personalized services. The purpose of recommendation systems is to increase consumers' purchase intentions, which may be affected by many factors. The objective of this study is to investigate the factors that may affect the purchase intention of consumers. More specifically, the research adopts two theories, the elaboration likelihood model and social influence theory, to build a research framework. We assume that the recommendation message affects consumer attitudes and intention through informational and social influences. A laboratory experiment was conducted that used books and movies as the two products to test the theory. The results indicate that purchase intention was affected by the attitude toward the recommended product and by informational influence. The attitude toward the recommended product, informational influence, and normative social influence were affected by the type of product and by web comments on the product. Different recommendation approaches also affected consumers' perception of informational influence. The contribution of the research is twofold. First, we develop a theory that can be used to interpret the effect of different factors in the recommendation process. Second, the results provide insight into how product recommendation affects consumer attitude and purchase intention and can be used in designing recommendation systems.
110

none

Lai, I-chun 01 August 2005 (has links)
Increasing fraud causes more and more message recipients to lose money and feel great fear, which does serious damage to Taiwan. There are many kinds of fraud, including telemarketing fraud, lottery fraud and automated teller machine fraud. Message recipients cannot tell the difference between truthful messages and fraud, and many reject all the messages they receive rather than believe racketeers and lose their money and time. There are many common cases of fraud, and a related analysis is included in the study. Based on experts' opinions, there are four key reasons why message recipients become victims: external environmental factors, the form of the fraud, recipients' insufficient knowledge about fraud, and recipients' personal factors. Finally, the study designed a questionnaire to investigate the subjects' cognition, experience and attitudes. The study population is the citizens of Kaohsiung. The research uses the elaboration likelihood model as its framework and simulated situations to design the questionnaire. The results are as follows: the difference between subjects' reception of common and uncommon fraud cases is significant, and the personal factors of gender, age, education level and monthly household income level significantly affect subjects' reception.
