671

Using percentile regression for estimating the maximum species richness line

Qadir, Mohammad F. 27 August 1993 (has links)
Graduation date: 1994
672

Large-sample sequential decision theory

January 1959 (has links)
Edward M. Hofstetter. / "December 9, 1959." Issued also as a thesis, M.I.T. Dept. of Electrical Engineering, August 24, 1959. / Bibliography: p. 35. / Army Signal Corps Contract DA36-039-sc-78108. Dept. of the Army Task 3-99-20-001 and Project 3-99-00-000.
673

Permutation Tests for Classification

Mukherjee, Sayan, Golland, Polina, Panchenko, Dmitry 28 August 2003 (has links)
We introduce and explore an approach to estimating statistical significance of classification accuracy, which is particularly useful in scientific applications of machine learning where high dimensionality of the data and the small number of training examples render most standard convergence bounds too loose to yield a meaningful guarantee of the generalization ability of the classifier. Instead, we estimate statistical significance of the observed classification accuracy, or the likelihood of observing such accuracy by chance due to spurious correlations of the high-dimensional data patterns with the class labels in the given training set. We adopt permutation testing, a non-parametric technique previously developed in classical statistics for hypothesis testing in the generative setting (i.e., comparing two probability distributions). We demonstrate the method on real examples from neuroimaging studies and DNA microarray analysis and suggest a theoretical analysis of the procedure that relates the asymptotic behavior of the test to the existing convergence bounds.
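The core procedure is easy to sketch: repeatedly permute the class labels to destroy any true pattern-label association, re-estimate the classifier's accuracy on each permuted label set, and report the fraction of permutations whose accuracy matches or exceeds the observed one. A minimal Python illustration follows; the classifier, cross-validation scheme, and permutation count are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def permutation_pvalue(X, y, n_permutations=1000, seed=0):
    """Estimate the significance of a classifier's cross-validated accuracy
    by comparing it against accuracies obtained on label-permuted data."""
    rng = np.random.default_rng(seed)
    clf = LinearSVC()
    observed = cross_val_score(clf, X, y, cv=5).mean()
    null_scores = np.empty(n_permutations)
    for i in range(n_permutations):
        y_perm = rng.permutation(y)  # break the pattern-label association
        null_scores[i] = cross_val_score(clf, X, y_perm, cv=5).mean()
    # Fraction of permutations matching or beating the observed accuracy
    # (the +1 terms give the standard conservative estimate).
    p_value = (1 + np.sum(null_scores >= observed)) / (1 + n_permutations)
    return observed, p_value
```

scikit-learn ships a ready-made version of this general procedure as sklearn.model_selection.permutation_test_score.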
674

Laboratory studies of phase transitions in common tropospheric aerosols

Cziczo, Daniel J. January 1999 (has links)
Thesis (Ph. D.)--University of Chicago, Dept. of the Geophysical Sciences, August 1999. / Includes bibliographical references. Also available on the Internet.
675

Determining Chlorophyll-a Concentrations in Aquatic Systems with New Statistical Methods and Models

Dimberg, Peter January 2011 (has links)
Chlorophyll-a (chl-a) concentration is an indicator of trophic status and is widely used as a measure of the algal biomass, which affects the level of eutrophication in aquatic systems. A high chl-a concentration may indicate a high phytoplankton biomass, which can degrade water quality or eliminate important functional groups in the ecosystem. Predicting chl-a concentrations is desirable for understanding how great an impact chl-a may have in aquatic systems under different scenarios, both over long periods and across seasonal variation. Several models for predicting annual or summer chl-a concentration have been designed using total phosphorus, total nitrogen, or both in combination as input parameters. These models have high predictive power but are not constructed for evaluating long-term change or predicting seasonal variation in a system, since the input parameters are often annual values or values from other specific periods. The models are, in other words, limited to the range for which they were constructed. The aim of this thesis was to complement these models with other methods and models that give a more appropriate picture of how the chl-a concentration in an aquatic system behaves, in both a short-term and a long-term perspective. The results showed that, with a new method called the statistically meaningful trend, the Baltic Proper had no change in chl-a concentrations over the period 1975 to 2007, which contradicts the earlier result obtained from the p-value of a trend line fitted to the raw data. It is possible to predict the seasonal variation of median chl-a concentration in lakes across a wide geographic range using summer total phosphorus and latitude as input parameters. It is also possible to predict the probability of reaching different monthly median chl-a concentrations using Markov chains or a direct relationship between two months. These results give a proper picture of how chl-a concentrations in aquatic systems vary and can be used to evaluate how different actions may or may not reduce the problem of potentially harmful algal blooms.
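As a rough illustration of the Markov-chain idea in the abstract, one can discretize monthly median chl-a concentrations into a few trophic classes, estimate a transition matrix from consecutive months, and read off the probability of reaching each class the following month. The class boundaries and the toy data below are assumptions made for illustration, not values from the thesis.

```python
import numpy as np

# Hypothetical chl-a classes (µg/L): low, moderate, high
states = ["<2", "2-8", ">8"]

def transition_matrix(seq, n_states=3):
    """Estimate a Markov transition matrix from a sequence of
    consecutive monthly chl-a class indices."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    # Normalize rows to probabilities (uniform if a state was never seen)
    rows = counts.sum(axis=1, keepdims=True)
    return np.where(rows > 0, counts / np.maximum(rows, 1), 1.0 / n_states)

# Toy sequence of monthly class indices (assumed data, for illustration)
monthly_classes = [0, 0, 1, 1, 2, 2, 1, 1, 0, 0, 0, 1]
P = transition_matrix(monthly_classes)

# Probability of each class next month, given a "moderate" month now
print({s: float(p) for s, p in zip(states, P[1])})
```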
676

A Web-based Statistical Analysis Framework

Chodos, David January 2007 (has links)
Statistical software packages have been used for decades to perform statistical analyses. Recently, the emergence of the Internet has expanded the potential for these packages. However, none of the existing packages have fully realized the collaborative potential of the Internet. This medium, which is beginning to gain acceptance as a software development platform, allows people who might otherwise be separated by organizational or geographic barriers to come together and tackle complex issues using commonly available data sets, analysis tools and communications tools. Interestingly, there has been little work towards solving this problem in a generally applicable way. Rather, systems in this area have tended to focus on particular data sets, industries, or user groups. The Web-based statistical analysis model described in this thesis fills this gap. It includes a statistical analysis engine, data set management tools, an analysis storage framework and a communication component to facilitate information dissemination. Furthermore, its focus on enabling users with little statistical training to perform basic data analysis means that users of all skill levels will be able to take advantage of its capabilities. The value of the system is shown both through a rigorous analysis of the system’s structure and through a detailed case study conducted with the tobacco control community.
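The thesis's system itself is not reproduced here, but the shape of such a framework — a statistical analysis engine exposed over the web so that collaborators separated by organizational or geographic barriers can run the same tests on shared data — can be sketched in a few lines. The endpoint name and JSON payload format below are invented for illustration.

```python
from flask import Flask, jsonify, request
from scipy import stats

app = Flask(__name__)

@app.route("/analyze/ttest", methods=["POST"])
def two_sample_ttest():
    """Run a two-sample t-test on two groups posted as JSON arrays."""
    payload = request.get_json()
    t, p = stats.ttest_ind(payload["group_a"], payload["group_b"])
    return jsonify({"t_statistic": float(t), "p_value": float(p)})

if __name__ == "__main__":
    app.run()
```

A client would POST {"group_a": [...], "group_b": [...]} to /analyze/ttest and receive the test statistic and p-value; a full framework would add dataset management, stored analyses, and discussion threads around results.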
677

Vulnerability Assessment of Coastal Bridges Subjected to Hurricane Events

Ataei, Navid 16 September 2013 (has links)
Bridges are among the most critical components of the transportation network, and their functionality is essential for hurricane aftermath recovery and emergency activities. Past hurricane events, however, revealed the potential susceptibility of these bridges to storm-induced wave and surge loads. Coastal bridges traditionally were not designed to sustain hurricane-induced wave and surge loads, and no reliability assessment tool exists for bridges exposed to this hazard. Such a tool is imperative for decision makers to evaluate the risk posed to the existing bridge inventory and to decide on retrofit measures and mitigation strategies. This dissertation offers a first attempt to quantify the structural vulnerability of bridges under coastal storms, providing a probabilistic framework, input tools, and application illustrations. To accomplish this goal, an unbiased wave load model is first developed from the existing wave load models in the literature; the bias is removed from these load models through statistical analysis of experimental test data. The developed wave load model is used to evaluate the response of coastal bridges employing single-physics-domain Dynamic numerical models. Additionally, a high-fidelity fluid-structure interaction (FSI) model is developed to account for significant intricacies such as turbulence, wave diffraction, and air entrapment, as well as material and geometric nonlinearities in the structure. This numerical model provides insight into the influential parameters that affect the response of coastal bridges. Moreover, a Monte Carlo based Static Model methodology is developed to enable fast evaluation of the bridge deck unseating mode of failure; it can be used for rapid screening of vulnerable structures under hurricane-induced wave and surge loads in a large bridge inventory (see the sketch below). New statistical learning tools are used to develop fragility surfaces for coastal bridges vulnerable to storms, and the performance of each tool is evaluated and compared. The statistical learning approaches enable reliability assessment using the more rigorous finite element models, such as the Dynamic and FSI Models, which is important for improved confidence and retrofit assessment. Additionally, a new systematic method is developed to evaluate the limit state capacity functions based on the post-event global performance of the bridge structure. The application of the developed reliability models is illustrated for the Houston/Galveston Bay area bridge inventory; the case study reveals that more than 30% of the bridges have a high probability of failure during an extreme hurricane scenario. Two vulnerable bridge structures from the case study are selected to investigate different potential retrofit measures, and recommendations are made for the most appropriate measures to prevent deck unseating without significantly increasing the structural demands on other components.
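In its simplest form, the Monte Carlo screening idea reduces to sampling an uplift demand and a deck resistance and counting how often demand exceeds resistance. The sketch below illustrates only that logic; the distribution families and parameter values are placeholders, not the dissertation's calibrated wave load model.

```python
import numpy as np

def deck_unseating_pf(n=100_000, seed=1):
    """Crude Monte Carlo screening of the deck-unseating limit state:
    failure occurs when vertical wave uplift exceeds the deck's
    resistance (self-weight plus connection capacity)."""
    rng = np.random.default_rng(seed)
    # Assumed lognormal uplift demand and normal resistance (kN);
    # all parameters are illustrative, not from the dissertation.
    uplift = rng.lognormal(mean=6.0, sigma=0.4, size=n)
    resistance = rng.normal(loc=700.0, scale=70.0, size=n)
    return np.mean(uplift > resistance)  # estimated failure probability

print(f"P(deck unseating) ~ {deck_unseating_pf():.3f}")
```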
679

Empirical and Kinetic Models for the Determination of Pharmaceutical Product Stability

Khalifa, Nagwa 24 January 2011 (has links)
Drug stability is one of the vital subjects in the pharmaceutical industry. All drug products should be kept stable and protected against chemical, physical, and microbiological degradation to ensure their efficacy and safety until released for public use. Hence, it is very important to estimate or predict stability. This work studied the stability of three different drug agents using three different mathematical models, including both empirical models (linear regression and an artificial neural network) and a mechanistic (kinetic) model. The stability of each drug in the three cases studied was expressed in terms of concentration, hardness, temperature, and humidity. The predicted values obtained from the models were compared to the drug concentrations observed experimentally and evaluated by calculating the mean of squared errors. Among the models used in this work, the mechanistic model was found to be the most accurate and reliable method of stability testing, as it had the smallest calculated errors. Overall, the accuracy of these mathematical models, as indicated by the proximity of their stability predictions to the observed values, suggests that such models can be reliable and time-saving alternatives to the analytical techniques used in practice.
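The thesis's specific kinetic equations are not given in the abstract, but a common mechanistic form for drug stability combines first-order degradation, C(t) = C0·exp(-kt), with an Arrhenius rate constant, k = A·exp(-Ea/(RT)). The sketch below uses that standard form; the pre-exponential factor A and activation energy Ea are illustrative values, not parameters fitted in the thesis.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def concentration(t_days, temp_K, C0=100.0, A=5.0e10, Ea=83_000.0):
    """First-order degradation with Arrhenius temperature dependence:
    C(t) = C0 * exp(-k t),  k = A * exp(-Ea / (R T)).
    A and Ea are illustrative, not fitted to the thesis data."""
    k = A * np.exp(-Ea / (R * temp_K))  # rate constant, 1/day
    return C0 * np.exp(-k * t_days)

# Predicted % of initial content after 24 months at 25 °C
# versus 40 °C (accelerated stability conditions)
for T in (298.15, 313.15):
    print(f"{T - 273.15:.0f} °C: {concentration(730, T):.1f}% remaining")
```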
680

Examining the application of Conway-Maxwell-Poisson models for analyzing traffic crash data

Geedipally, Srinivas Reddy 15 May 2009 (has links)
Statistical models have been very popular for estimating the performance of highway safety improvement programs, which are intended to reduce motor vehicle crashes. The traditional Poisson and Poisson-gamma (negative binomial) models are the most popular probabilistic models used by transportation safety analysts for analyzing traffic crash data. The Poisson-gamma model is usually preferred over the traditional Poisson model since crash data usually exhibit over-dispersion. Although the Poisson-gamma model is popular in traffic safety analysis, it has limitations, particularly when crash data are characterized by small sample sizes and low sample mean values. Researchers have also found that the Poisson-gamma model has difficulties in handling under-dispersed crash data. The primary objective of this research is to evaluate the performance of the Conway-Maxwell-Poisson (COM-Poisson) model in various situations and to examine its application to traffic crash datasets exhibiting over- and under-dispersion. This study makes use of various simulated and observed crash datasets to accomplish these objectives. Using a simulation study, it was found that the COM-Poisson model can handle under-, equi- and over-dispersed datasets with different mean values, although the credible intervals are wider for low sample mean values. The computational burden of its implementation is also not prohibitive. Using intersection crash data collected in Toronto and segment crash data collected in Texas, the results show that COM-Poisson models perform as well as Poisson-gamma models in terms of goodness-of-fit statistics and predictive performance. Using crash data collected at railway-highway crossings in South Korea, several COM-Poisson models were estimated, and it was found that the COM-Poisson model can handle crash data when the modeling output shows signs of under-dispersion. The results also show that the COM-Poisson model provides better statistical performance than the gamma probability and traditional Poisson models. Furthermore, the COM-Poisson model was found to have limitations similar to those of the Poisson-gamma model when handling data with low sample means and small sample sizes. Despite its limitations for over-dispersed datasets with low sample means, the COM-Poisson is still a flexible method for analyzing crash data.
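For reference, the COM-Poisson probability mass function is P(Y = y) = λ^y / ((y!)^ν Z(λ, ν)), where the extra parameter ν controls dispersion: ν < 1 gives over-dispersion, ν = 1 recovers the ordinary Poisson, and ν > 1 gives under-dispersion. The sketch below evaluates the pmf by truncating the normalizing series — a common numerical shortcut, not the Bayesian estimation procedure the dissertation's credible intervals imply.

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def com_poisson_pmf(y, lam, nu, truncation=200):
    """PMF of the Conway-Maxwell-Poisson distribution,
    P(Y = y) = lam**y / (y!)**nu / Z(lam, nu),
    with Z approximated by truncating its infinite series."""
    js = np.arange(truncation)
    log_Z = logsumexp(js * np.log(lam) - nu * gammaln(js + 1))
    return np.exp(y * np.log(lam) - nu * gammaln(y + 1) - log_Z)

# nu = 1 recovers the Poisson; nu < 1 inflates, nu > 1 shrinks the variance
ys = np.arange(120)
for nu in (0.5, 1.0, 1.5):
    p = com_poisson_pmf(ys, lam=5.0, nu=nu)
    mean = (ys * p).sum()
    var = ((ys - mean) ** 2 * p).sum()
    print(f"nu={nu}: mean={mean:.2f}, variance={var:.2f}")
```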
