About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
291

Optimization of nonlinear dynamic systems without Lagrange multipliers

Claewplodtook, Pana January 1996 (has links)
No description available.
292

Semi-parametric Bayesian Models Extending Weighted Least Squares

Wang, Zhen 31 August 2009 (has links)
No description available.
293

Quantification of Oxygen Saturation of Venous Vessels Using Susceptibility Mapping

Tang, Jin 10 1900 (has links)
Quantitatively measuring oxygen saturation is important to characterize the physiological or pathological state of tissue function. In this thesis, we demonstrate the possibility of using susceptibility mapping to noninvasively estimate the venous blood oxygen saturation level. Accurate susceptibility quantification is the key to oxygen saturation quantification. Two approaches are presented in this thesis to generate accurate and artifact-free susceptibility maps (SMs): a regularized inverse filter and a k-space iterative method. Using the regularized inverse filter, with sufficient resolution, major veins in the brain can be visualized. We found that vessels of different sizes show different levels of contrast depending on their partial volume effects; larger vessels show a bias toward a reduced susceptibility approaching 90% of the expected value. Also, streaking artifacts associated with high-susceptibility structures such as veins are obvious in the reconstructed SM. To further improve susceptibility quantification and reduce the streaking artifacts in the SMs, we proposed a threshold-based k-space iterative approach that uses geometric information from the SM itself as a constraint to overcome the ill-posed nature of the inverse filter. Both simulations and in vivo results show that most streaking artifacts inside the SM were suppressed by the iterative approach. In simulated data, the bias toward lower mean susceptibility values inside vessels has been shown to decrease from around 10% to 2% when choosing an appropriate threshold value for the proposed iterative method, which brings us one step closer to a practical means of mapping oxygen saturation in the brain. / Doctor of Philosophy (PhD)
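For readers unfamiliar with the inversion step, a regularized inverse filter of the kind mentioned above is commonly implemented as a truncated division by the unit dipole kernel in k-space. Below is a minimal Python/NumPy sketch of that generic approach (truncated k-space division); it is not the thesis's specific filter or its geometry-constrained iterative refinement, and the threshold value is illustrative.

```python
import numpy as np

def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0)):
    """Unit dipole kernel in k-space: D(k) = 1/3 - kz^2 / |k|^2."""
    kx, ky, kz = np.meshgrid(
        *[np.fft.fftfreq(n, d=v) for n, v in zip(shape, voxel_size)],
        indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[k2 == 0] = np.inf               # avoid 0/0 at the k-space origin
    return 1.0 / 3.0 - kz**2 / k2

def tkd_susceptibility(field_map, threshold=0.1):
    """Truncated k-space division: clamp the dipole kernel where it is
    near zero, where naive inversion amplifies noise into streaking."""
    D = dipole_kernel(field_map.shape)
    sign = np.where(D >= 0, 1.0, -1.0)  # handles D == 0 safely
    D_reg = np.where(np.abs(D) < threshold, threshold * sign, D)
    return np.real(np.fft.ifftn(np.fft.fftn(field_map) / D_reg))
```

The iterative method described in the abstract would, roughly, replace the simple clamping step with repeated forward/inverse passes constrained by vessel geometry extracted from the susceptibility map itself.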
294

Estimating Veterans' Health Benefit Grants Using the Generalized Linear Mixed Cluster-Weighted Model with Incomplete Data

Deng, Xiaoying January 2018 (has links)
The poverty rate among veterans in the US has increased over the past decade, according to the U.S. Department of Veterans Affairs (2015). It is therefore crucial that veterans who live below the poverty level receive sufficient benefit grants. A study on prudently managing health benefit grants for veterans may help the government and policy-makers make appropriate decisions and investments. The purpose of this research is to find an underlying group structure for the veterans' benefit grants dataset and then estimate the benefit grants sought using incomplete data. The generalized linear mixed cluster-weighted model, which is based on mixture models, is fitted by grouping similar observations into the same cluster. Finally, the estimates of veterans' benefit grants sought will provide a reference for future public policies. / Thesis / Master of Science (MSc)
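As a rough intuition for the cluster-weighted idea (and only that: the thesis's model handles the mixture likelihood and the incomplete data jointly, which this sketch does not), one can first recover a latent group structure from the covariates and then fit a separate GLM within each group. All data below are simulated stand-ins for the benefit-grants dataset:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import PoissonRegressor

# Hypothetical data: X = covariates, y = benefit grant amounts sought.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = rng.poisson(lam=np.exp(0.5 + X @ np.array([0.3, -0.2, 0.1])))

# Step 1: recover a latent group structure from the covariates.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)

# Step 2: fit a separate GLM within each recovered cluster.
models = {}
for g in np.unique(labels):
    mask = labels == g
    models[g] = PoissonRegressor().fit(X[mask], y[mask])

# Prediction: route each new point to its most likely cluster's GLM.
x_new = rng.normal(size=(1, 3))
g_new = gmm.predict(x_new)[0]
print(models[g_new].predict(x_new))
```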
295

Improved Methods for Interrupted Time Series Analysis Useful When Outcomes are Aggregated: Accounting for heterogeneity across patients and healthcare settings

Ewusie, Joycelyne E January 2019 (has links)
This is a sandwich thesis / In an interrupted time series (ITS) design, data are collected at multiple time points before and after the implementation of an intervention or program to investigate the effect of the intervention on an outcome of interest. The ITS design is often implemented in healthcare settings and is considered the strongest quasi-experimental design in terms of internal and external validity as well as its ability to establish causal relationships. There are several statistical methods that can be used to analyze data from ITS studies. Nevertheless, limitations exist in practical applications: researchers inappropriately apply the methods and frequently ignore the assumptions and factors that may influence the optimality of the statistical analysis. Moreover, there is little to no guidance available regarding the application of the various methods, and a standardized framework for the analysis of ITS studies does not exist. As such, there is a need to identify and compare existing ITS methods in terms of their strengths and limitations. Their methodological challenges also need to be investigated to inform and direct future research. In light of this, this PhD thesis addresses two main objectives: 1) to conduct a scoping review of the methods that have been employed in the analysis of ITS studies, and 2) to develop improved methods that address a major limitation of the statistical methods frequently used in ITS data analysis. These objectives are addressed in three projects. For the first project, a scoping review of the methods that have been used in analyzing ITS data was conducted, with a focus on ITS applications in health research. The review was based on the Arksey and O'Malley framework and the Joanna Briggs Handbook for scoping reviews. A total of 1389 studies were included in our scoping review. The articles were grouped into methods papers and application papers based on the focus of the article. For the methods papers, we narratively described the identified methods and discussed their strengths and limitations. The application papers were summarized using frequencies and percentages. We identified some limitations of current methods and provided some recommendations useful in health research. In the second project, we developed and presented an improved method for ITS analysis when the data at each time point are aggregated across several participants, which is the most common case in ITS studies in healthcare settings. We considered the segmented linear regression approach, which our scoping review identified as the most frequently used method in ITS studies. When data are aggregated, heterogeneity is introduced due to variability in the patient population within sites (e.g. healthcare facilities), and this is ignored in the segmented linear regression method. Moreover, statistical uncertainty (imprecision) is introduced in the data because of the sample size (the number of participants from whom data are aggregated). Ignoring this variability and uncertainty will likely lead to invalid estimates and loss of statistical power, which in turn leads to erroneous conclusions. Our proposed method incorporates patient variability and sample size as weights in a weighted segmented regression model. We performed extensive simulations and assessed the performance of our method using established performance criteria, such as bias, mean squared error, significance level, and statistical power. We also compared our method with the segmented linear regression approach. The results indicated that the weighted segmented regression was uniformly more precise, less biased, and more powerful than the segmented linear regression method. In the third project, we extended the weighted method to multisite ITS studies, where data are aggregated at two levels: across several participants within sites as well as across multiple sites. The extended method incorporates the two levels of heterogeneity using weights, where the weights are defined using patient variability, sample size, number of sites, and site-to-site variability. This extended weighted regression model, which follows the weighted least squares approach, is employed to estimate parameters and perform significance testing. We conducted extensive empirical evaluations using various scenarios generated from a multisite ITS study and compared the performance of our method with that of the segmented linear regression model as well as a pooled analysis method previously developed for multisite studies. We observed that for most scenarios considered, our method produced estimates with narrower 95% confidence intervals and smaller p-values, indicating that our method is more precise and is associated with greater statistical power. In scenarios with low levels of heterogeneity, our method and the previously proposed method showed comparable results. In conclusion, this PhD thesis facilitates future ITS research by laying the groundwork for developing standard guidelines for the design and analysis of ITS studies. The proposed improved method for ITS analysis, the weighted segmented regression, contributes to the advancement of ITS research and will enable researchers to optimize their analyses, leading to more precise and powerful results. / Thesis / Doctor of Philosophy (PhD)
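To make the weighting idea concrete, here is a minimal statsmodels sketch of a single-site weighted segmented regression on simulated aggregated data. The weight form n_t / s_t^2 (sample size over within-point patient variance) is one natural reading of the description above, not necessarily the thesis's exact specification:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical aggregated ITS data: one row per time point.
t = np.arange(1, 25)                  # 24 monthly time points
T0 = 12                               # intervention after month 12
post = (t > T0).astype(float)
n = np.random.default_rng(1).integers(40, 120, size=t.size)  # patients per point
s2 = np.full(t.size, 4.0)             # within-point patient variance (assumed known)
y = (10 + 0.1 * t + 2.0 * post + 0.3 * (t - T0) * post
     + np.random.default_rng(2).normal(scale=np.sqrt(s2 / n)))

# Design matrix of the standard segmented linear regression:
# intercept, pre-intervention trend, level change, trend change.
X = sm.add_constant(np.column_stack([t, post, (t - T0) * post]))

# Weighted fit: weight each aggregated point by n_t / s_t^2, so points
# summarizing more patients (or less variable ones) count for more.
fit = sm.WLS(y, X, weights=n / s2).fit()
print(fit.params)  # [level, trend, level change, trend change]
```

Setting all weights equal recovers the unweighted segmented linear regression that the simulations above use as the comparator.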
296

Complementation and Inclusion of Weighted Automata on Infinite Trees

Borgwardt, Stefan, Peñaloza, Rafael 16 June 2022 (has links)
Weighted automata can be seen as a natural generalization of finite state automata to more complex algebraic structures. The standard reasoning tasks for unweighted automata can also be generalized to the weighted setting. In this report we study the problems of intersection, complementation and inclusion for weighted automata on infinite trees and show that they are not harder than reasoning with unweighted automata. We also present explicit methods for solving these problems optimally.
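To unpack the phrase "generalization to more complex algebraic structures": transitions carry weights from a semiring, and the automaton assigns each input a weight rather than a yes/no answer. The sketch below shows the product construction for intersection, on finite words for brevity; the report's setting of infinite trees requires substantially more machinery, so this is illustration only, not the report's construction:

```python
from itertools import product

class WeightedAutomaton:
    """Weighted automaton over a commutative semiring, on finite words.
    init/final map states to weights; trans maps (p, symbol, q) to weights."""
    def __init__(self, states, init, final, trans, plus, times, zero):
        self.states, self.init, self.final, self.trans = states, init, final, trans
        self.plus, self.times, self.zero = plus, times, zero

    def weight(self, word):
        # forward pass: semiring-sum, over all runs, of each run's product weight
        vec = dict(self.init)
        for a in word:
            nxt = {}
            for (p, sym, q), w in self.trans.items():
                if sym == a and p in vec:
                    nxt[q] = self.plus(nxt.get(q, self.zero), self.times(vec[p], w))
            vec = nxt
        total = self.zero
        for q, w in vec.items():
            total = self.plus(total, self.times(w, self.final.get(q, self.zero)))
        return total

def intersect(a, b):
    """Product construction: pair states and multiply weights -- the weighted
    analogue of intersecting two unweighted automata."""
    return WeightedAutomaton(
        states=list(product(a.states, b.states)),
        init={(p, q): a.times(wa, wb)
              for p, wa in a.init.items() for q, wb in b.init.items()},
        final={(p, q): a.times(wa, wb)
               for p, wa in a.final.items() for q, wb in b.final.items()},
        trans={((p1, q1), s1, (p2, q2)): a.times(w1, w2)
               for (p1, s1, p2), w1 in a.trans.items()
               for (q1, s2, q2), w2 in b.trans.items() if s1 == s2},
        plus=a.plus, times=a.times, zero=a.zero)

# Probability semiring example: the product automaton assigns a word the
# product of the two component automata's weights.
A = WeightedAutomaton({0}, {0: 1.0}, {0: 1.0},
                      {(0, "a", 0): 0.5, (0, "b", 0): 0.5},
                      plus=lambda x, y: x + y, times=lambda x, y: x * y, zero=0.0)
print(intersect(A, A).weight("ab"))  # 0.0625 = (0.5 * 0.5) ** 2
```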
297

Effective, Efficient Retrieval in a Network of Digital Information Objects

France, Robert Karl 27 November 2001 (has links)
Although different authors mean different things by the term "digital libraries," one common thread is that they include or are built around collections of digital objects. Digital libraries also provide services to large communities, one of which is almost always search. Digital library collections, however, have several characteristic features that make search difficult. They are typically very large. They typically involve many different kinds of objects, including but not limited to books, e-published documents, images, and hypertexts, and often including items as esoteric as subtitled videos, simulations, and entire scientific databases. Even within a category, these objects may have widely different formats and internal structure. Furthermore, they are typically in complex relationships with each other and with such non-library objects as persons, institutions, and events. Relationships are a common feature of traditional libraries in the form of "See / See also" pointers, hierarchical relationships among categories, and relations between bibliographic and non-bibliographic objects such as having an author or being on a subject. Binary relations (typically in the form of directed links) are a common representational tool in computer science for structures from trees and graphs to semantic networks. And in recent years the World-Wide Web has made the construct of linked information objects commonplace for millions. Despite this, relationships have rarely been given "first-class" treatment in digital library collections or software. MARIAN is a digital library system designed and built to store, search over, and retrieve large numbers of diverse objects in a network of relationships. It is designed to run efficiently over large collections of digital library objects. It addresses the problem of object diversity through a system of classes unified by common abilities, including searching and presentation. Divergent internal structure is exposed and interpreted using a simple and powerful graphical representation, and varied formats are handled through a unified system of presentation. Most importantly, MARIAN collections are designed specifically to include relations in the form of an extensible collection of different sorts of links. This thesis presents MARIAN and argues that it is both effective and efficient. MARIAN is effective in that it provides new and useful functionality to digital library end-users, and in that it makes constructing, modifying, and combining collections easy for library builders and maintainers. MARIAN is efficient in that it works from an abstract presentation of search over networked collections to define, on the one hand, the common operations required to implement a broad class of search engines and, on the other, performance standards for those operations. Although some operations involve a high minimum cost under the most general assumptions, lower costs can be achieved when additional constraints are present. In particular, it is argued that the statistics of digital library collections can be exploited to obtain significant savings. MARIAN is designed to do exactly that and, on the evidence of early versions, appears to succeed. In conclusion, MARIAN presents a powerful and flexible platform for retrieval on large, diverse collections of networked information, significantly extending the representation and search capabilities of digital libraries. / Ph. D.
298

Surveillance of Poisson and Multinomial Processes

Ryan, Anne Garrett 18 April 2011 (has links)
As time passes, change occurs. With this change comes the need for surveillance. One may be a technician on an assembly line in need of a surveillance technique to monitor the number of defective components produced. On the other hand, one may be an administrator of a hospital in need of surveillance measures to monitor the number of patient falls in the hospital or to monitor surgical outcomes to detect changes in surgical failure rates. A natural choice for ongoing surveillance is the control chart; however, the chart must be constructed in a way that accommodates the situation at hand. Two scenarios involving attribute control charting are investigated here. The first scenario involves Poisson count data where the area of opportunity changes. A modified exponentially weighted moving average (EWMA) chart is proposed to accommodate the varying sample sizes. The performance of this method is compared with that of several competing control chart techniques, and recommendations are made regarding the best performing control chart method. This research is the result of joint work with Dr. William H. Woodall (Department of Statistics, Virginia Tech). The second scenario involves monitoring a process where items are classified into more than two categories and the results of these classifications are readily available. A multinomial cumulative sum (CUSUM) chart is proposed to monitor these types of situations. The multinomial CUSUM chart is evaluated through comparisons of performance with competing control chart methods. This research is the result of joint work with Mr. Lee J. Wells (Grado Department of Industrial and Systems Engineering, Virginia Tech) and Dr. William H. Woodall (Department of Statistics, Virginia Tech). / Ph. D.
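For intuition, one textbook way to handle a varying area of opportunity in an EWMA chart for Poisson rates is to let the control limits vary with each sample size. The sketch below is that generic variant, not necessarily the modified chart proposed in the dissertation; the in-control rate and smoothing constant are illustrative:

```python
import numpy as np

def poisson_ewma(counts, exposures, u0, lam=0.2, L=3.0):
    """EWMA chart for Poisson rates with a varying area of opportunity.
    counts[t] ~ Poisson(u0 * exposures[t]) under control; monitors the
    rate c_t / n_t with an exact variance recursion for the limits."""
    z, var_z = u0, 0.0
    out = []
    for c, n in zip(counts, exposures):
        u = c / n
        z = lam * u + (1 - lam) * z
        # variance of the EWMA statistic, propagating Var(u_t) = u0 / n_t
        var_z = lam**2 * (u0 / n) + (1 - lam)**2 * var_z
        ucl = u0 + L * np.sqrt(var_z)
        lcl = max(u0 - L * np.sqrt(var_z), 0.0)
        out.append((z, lcl, ucl, z < lcl or z > ucl))
    return out

# Hypothetical run: in-control rate 0.05 defects per unit, drifting upward.
rng = np.random.default_rng(3)
n = rng.integers(50, 200, size=30)
c = rng.poisson(0.05 * n * np.linspace(1.0, 1.8, 30))
for t, (z, lcl, ucl, signal) in enumerate(poisson_ewma(c, n, u0=0.05)):
    if signal:
        print(f"signal at t={t}: z={z:.4f} outside [{lcl:.4f}, {ucl:.4f}]")
```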
299

Network Anomaly Detection with Incomplete Audit Data

Patcha, Animesh 04 October 2006 (has links)
With the ever-increasing deployment and usage of gigabit networks, traditional intrusion detection systems based on network anomaly detection have not scaled accordingly. Most, if not all, deployed systems assume the availability of complete and clean data for the purpose of intrusion detection. We contend that this assumption is not valid. Factors like noise in the audit data, mobility of the nodes, and the large amount of data generated by the network make it difficult to build a normal traffic profile of the network for the purpose of anomaly detection. From this perspective, the leitmotif of the research effort described in this dissertation is the design of a novel intrusion detection system that has the capability to detect intrusions with high accuracy even when complete audit data is not available. In this dissertation, we take a holistic approach to anomaly detection to address the threats posed by network-based denial-of-service attacks by proposing improvements in every step of the intrusion detection process. At the data collection phase, we have implemented an adaptive sampling scheme that intelligently samples incoming network data to reduce the volume of traffic sampled, while maintaining the intrinsic characteristics of the network traffic. A Bloom-filter-based fast flow aggregation scheme is employed at the data pre-processing stage to further reduce the response time of the anomaly detection scheme. Lastly, this dissertation also proposes an anomaly detection scheme based on the expectation-maximization algorithm that uses the sampled audit data to detect intrusions in the incoming network traffic. / Ph. D.
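The flow-aggregation idea can be pictured as follows: rather than an exact table keyed by each flow's 5-tuple, a fixed array of counters indexed by several hashes yields approximate per-flow counts in constant space. This is a generic counting-filter sketch (count-min flavored), not the dissertation's exact scheme:

```python
import hashlib

class FlowSketch:
    """Counting Bloom filter for approximate per-flow packet counts,
    so flows can be aggregated without storing every 5-tuple exactly."""
    def __init__(self, size=2**20, n_hashes=4):
        self.size, self.n_hashes = size, n_hashes
        self.counts = [0] * size

    def _indexes(self, flow_key: str):
        # derive n_hashes independent positions from salted hashes
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}:{flow_key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, flow_key: str):
        for idx in self._indexes(flow_key):
            self.counts[idx] += 1

    def estimate(self, flow_key: str) -> int:
        # collisions can only inflate counters, so the minimum over the
        # flow's positions is the tightest available estimate
        return min(self.counts[idx] for idx in self._indexes(flow_key))

sketch = FlowSketch()
for _ in range(1000):
    sketch.add("10.0.0.1:1234->10.0.0.2:80/tcp")
print(sketch.estimate("10.0.0.1:1234->10.0.0.2:80/tcp"))  # ~1000
```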
300

Three Essays on Dynamic Contests

Cai, Yichuan 23 June 2022 (has links)
This dissertation consists of three essays studying the theory of dynamic contests. The analysis mainly focuses on how the outcome and the optimal design of a dynamic contest vary with contest technology, heterogeneous players, contest architecture, and bias instruments. The first chapter outlines the dissertation by briefly discussing the motivations, methods, and main findings of the following chapters. Chapter 2 considers a situation in which two groups compete in a series of battles with complete information. Each group has multiple heterogeneous players. The group that first wins a predetermined number of battles wins a prize, which is a public good for the winning group. A discriminatory state-dependent contest success function is employed in each battle. We found that in the subgame perfect Nash equilibrium (equilibria), the lower-valuation players exert effort only in earlier battles, while the higher-valuation players may exert effort throughout the entire series of battles. The typical discouragement effect in a multi-battle contest is mitigated when players compete as a group. We also provide two types of optimal contest designs that can fully resolve the free-rider problem in group contests. Chapter 3 investigates optimal contest design with multiple heterogeneous players. We allow the contest designer to have one or multiple/mixed objectives, which include the following: the total effort; the winner's effort; the maximal effort; and the winning probability of the strongest player. We provide a one-size-fits-all contest design that is optimal given any such objective function. In the optimal contest, the designer has one of the weaker players exhaust the strongest player in a contest with infinitely many battles. We obtain the required conditions for different contest frameworks (e.g., all-pay auctions and lottery contests) and bias instruments (e.g., head starts and multiplicative bias). This means the contest designer has multiple alternatives for designing the optimal contest. The last chapter investigates a situation where two players compete in a series of sequential battles to win a prize. A player obtains a certain number of points by winning a single battle, and the available points may vary across battles. The player who first obtains a predetermined number of points wins the prize. We fully characterize the subgame perfect Nash equilibrium by describing the indifference continuation-value interval. We found that when the two players are symmetric, they compete only in the separating battle. In the general case, we found that winning a battle may not create any momentum when the weight of the battle is small. A small enough adjustment of a battle's weight will not change either player's incentive to win the battle. Increasing (or decreasing) a battle's weight weakly increases (or weakly decreases) both players' incentive to win. / Doctor of Philosophy / A contest in economics is defined as a situation in which players exert positive effort to win a prize. The effort can be money, time, energy, or any resource that is used in a competition. The prize can be monetary or other perks from winning a competition. In this dissertation, we explore dynamic multi-battle contests, where the winner is decided not by one single competition but by a series of sequential competitions. For example, the US presidential primary begins sometime in January or February and ends about mid-June, and candidates compete in different states during that time. In the NBA Finals, the winner is decided by a best-of-seven contest: the team that first wins four games becomes the champion. In the second chapter, we explore multi-battle group contests in which each group has multiple heterogeneous players. The group that first wins a certain number of battles wins a prize. The prize is a public good within the winning group, so players in the winning group can enjoy the prize regardless of their effort. We found that players with high prize valuations are discouraged in earlier battles by the high expected effort in later battles. This may lead high-value players to exert effort only in later, more decisive battles. The low-value players exert effort in earlier battles and free ride on the high-value players in later battles. We also provide the optimal contest design that can fully resolve the free-rider problem: the designer should completely balance the two groups in every battle. In the third chapter, we explore optimal contest design in multi-battle contests with multiple heterogeneous players. The contest designer can have one or multiple/mixed objectives. We found a "one size fits all" multi-battle contest design that is optimal for various objective functions. In the optimal design, the designer gives different advantages to the strongest player and to one of the weaker players: the weaker player wins each battle more easily, while the strongest player needs to win fewer battles. This overturns the conventional wisdom that advantages should be given only to the weaker players. In the fourth chapter, we explore multi-battle contests in which each battle has a different weight, that is, some battles may be more or less important than others. We found that when a battle's weight is small, players may be indifferent between winning and losing it. Winning such battles therefore creates no momentum, and players tend to give them up by exerting no effort. We also found that a small adjustment to a battle's weight does not change players' incentive to win it; a large enough adjustment, however, increases or decreases players' incentive to win in the same direction.
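For readers new to contest theory, the bias instruments named above (head starts and multiplicative bias) enter a standard lottery contest success function as follows; the dissertation's discriminatory state-dependent form is not reproduced here, so this is the generic textbook version:

```latex
% Biased lottery (Tullock) contest success function for a single battle:
% \alpha_i > 0 is player i's multiplicative bias and h_i \ge 0 a head start.
\[
  p_i(x_1, x_2) = \frac{\alpha_i x_i + h_i}
                       {(\alpha_1 x_1 + h_1) + (\alpha_2 x_2 + h_2)},
  \qquad i \in \{1, 2\},
\]
% where x_i \ge 0 is player i's effort. In a state-dependent variant of
% the kind Chapter 2 describes, the bias parameters may depend on the
% current score of the series.
```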
