About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Biogeographic Patterns of Reef Fish Communities in the Saudi Arabian Red Sea

Roberts, May B. 12 1900
As a region renowned for its high biodiversity, endemism, and extreme temperature and salinity levels, the Red Sea is of high ecological interest. Despite this, there is relatively little literature on the basic, broad-scale characteristics of its biodiversity or its overall reef fish communities and how they change with latitude. We conducted visual transects recording the abundance of over 200 species of fish on 45 reefs spanning over 1000 km of Saudi Arabian coastline and used hierarchical cluster analysis to find that, for combined depths from 0 m to 10 m across this geographical range, the reef fish communities are relatively similar. However, we find some interesting patterns both at the community level across depth and latitude and in the distributions of endemic species. We find that the communities, much like the environmental factors, shift gradually along latitude but do not show distinct clusters within the range we surveyed (from Al-Wajh in the north to the Farasan Banks in the south). Numbers of endemic species tend to be higher in the Thuwal region and further south. Such baseline data on reef fish distributions, and on the factors that may influence their ranges in the Red Sea, are critical for future scientific studies and for effective monitoring in the face of persistent anthropogenic influences such as coastal development, overfishing and climate change.
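
A minimal sketch of the kind of hierarchical cluster analysis described above, applied to a reef-by-species abundance matrix. The dissimilarity measure (Bray-Curtis) and the linkage method (average) are assumptions chosen for illustration, since the abstract does not specify them, and the data below are invented.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.cluster.hierarchy import linkage, fcluster

    # Rows = reefs (survey sites), columns = fish species; entries are abundance counts.
    abundance = np.array([
        [12, 0, 3, 7],   # hypothetical reef 1
        [10, 1, 4, 6],   # hypothetical reef 2
        [ 0, 9, 1, 2],   # hypothetical reef 3
    ])

    # Bray-Curtis dissimilarity between every pair of reefs (assumed metric).
    dissim = pdist(abundance, metric="braycurtis")

    # Average-linkage hierarchical clustering of the reefs (assumed linkage).
    tree = linkage(dissim, method="average")

    # Cut the dendrogram at a chosen dissimilarity threshold to label clusters.
    labels = fcluster(tree, t=0.5, criterion="distance")
    print(labels)

In practice the cut height (or the decision that no distinct clusters exist, as in the abstract) would come from inspecting the dendrogram rather than a fixed threshold.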
2

Relative Efficiency of Adjusted and Unadjusted Analyses when Baseline Data are Partially Missing

Feng, Yue shan 09 1900
Many medical studies are performed to investigate the effectiveness of new treatments (such as new drugs or new surgery) versus traditional (or placebo) treatments. In many cases, researchers measure a continuous variable at baseline and again as an outcome assessed at follow-up. The baseline measurement usually has a strong relationship with the post-treatment measurement. Consequently, the ANCOVA model using the baseline as a covariate may provide more powerful and precise results than the ANOVA model.

However, most epidemiologic studies encounter the problem of missing covariate data, and patients with missing baseline measurements are then excluded from the data analysis. Hence, there is a tradeoff between ANOVA with the full data set and ANCOVA with the partial data set.

This study focuses on the variance of the estimator of the difference in treatment means. In practical situations, the standard error of the estimator obtained from the ANCOVA model with partially missing baselines, relative to the standard error obtained from the ANOVA model with full data, depends on the correlation between baseline and follow-up outcome, the proportion of missing baselines, and the difference of the group means on the baseline. In moderate sample size studies, it is also affected by the sample size.

The minimum correlations theoretically required for the ANCOVA model to achieve the same precision as the ANOVA model were calculated, assuming the missing proportion, the sample size and the difference of group means on the covariate are known; the minimum correlation can be read from the reference tables or figures.

Figures of asymptotic relative efficiency give the asymptotic variance and the length of the confidence interval of the estimated difference obtained from the ANCOVA model relative to the ANOVA model over the full range of correlations between baseline and follow-up.

Thesis / Master of Science (MSc)
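
The tradeoff described above can be illustrated with a simplified calculation. The sketch below relies on assumptions not stated in the abstract: equal group sizes, large-sample (asymptotic) variances, and no imbalance in baseline means between the groups. Under those assumptions, ANCOVA on the subjects with observed baselines matches the precision of ANOVA on the full data once the baseline/follow-up correlation reaches the square root of the missing proportion; the thesis treats the more general setting, including baseline-mean differences and moderate sample sizes.

    import numpy as np

    def var_anova(sigma2, n_per_group):
        # Asymptotic variance of the estimated treatment difference from ANOVA
        # using all n subjects per group (baseline ignored).
        return 2.0 * sigma2 / n_per_group

    def var_ancova(sigma2, n_per_group, rho, missing_prop):
        # Asymptotic variance from ANCOVA using only subjects with observed
        # baselines: residual variance shrinks by (1 - rho^2), but the usable
        # sample shrinks by the missing proportion.
        n_obs = n_per_group * (1.0 - missing_prop)
        return 2.0 * sigma2 * (1.0 - rho ** 2) / n_obs

    def min_correlation(missing_prop):
        # Correlation at which the two variances above are equal:
        # (1 - rho^2) / (1 - p) = 1  =>  rho = sqrt(p).
        return np.sqrt(missing_prop)

    # Example: with 20% of baselines missing, ANCOVA needs |rho| above about 0.45
    # (under these assumptions) to beat ANOVA on the full data.
    print(min_correlation(0.20))
    print(var_anova(1.0, 100), var_ancova(1.0, 100, rho=0.6, missing_prop=0.20))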
3

Assessing Binary Measurement Systems

Danila, Oana Mihaela January 2012
Binary measurement systems (BMS) are widely used in both manufacturing industry and medicine. In industry, a BMS is often used to measure various characteristics of parts and then classify them as pass or fail, according to some quality standards. Good measurement systems are essential both for problem solving (i.e., reducing the rate of defectives) and for protecting customers from receiving defective products. As a result, it is desirable to assess the performance of the BMS as well as to separate the effects of the measurement system and the production process on the observed classifications. In medicine, BMSs are known as diagnostic or screening tests, and are used to detect a target condition in subjects, thus classifying them as positive or negative. Assessing the performance of a medical test is essential in quantifying the costs due to misclassification of patients, and in the future prevention of these errors. In both industry and medicine, the most commonly used characteristics to quantify the performance of a BMS are the two misclassification rates, defined as the chance of passing a nonconforming (non-diseased) unit, called the consumer's risk (false positive), and the chance of failing a conforming (diseased) unit, called the producer's risk (false negative). In most assessment studies, it is also of interest to estimate the conforming (prevalence) rate, i.e., the probability that a randomly selected unit is conforming (diseased). There are two main approaches for assessing the performance of a BMS. Both approaches involve measuring a number of units one or more times with the BMS. The first one, called the "gold standard" approach, requires the use of a gold-standard measurement system that can determine the state of units with no classification errors. When a gold standard does not exist, or is too expensive or time-consuming, another option is to repeatedly measure units with the BMS, and then use a latent class approach to estimate the parameters of interest. In industry, for both approaches, the standard sampling plan involves randomly selecting parts from the population of manufactured parts. In this thesis, we focus on a specific context commonly found in the manufacturing industry. First, the BMS under study is nondestructive. Second, the BMS is used for 100% inspection or any kind of systematic inspection of the production yield. In this context, we are likely to have available a large number of previously passed and failed parts. Furthermore, the inspection system typically tracks the number of parts passed and failed; that is, we often have baseline data about the current pass rate, separate from the assessment study. Finally, we assume that during the time of the evaluation, the process is under statistical control and the BMS is stable. Our main goal is to investigate the effect of using sampling plans that involve random selection of parts from the available populations of previously passed and failed parts, i.e., conditional selection, on the estimation procedure and the main characteristics of the estimators. Also, we demonstrate the value of combining the additional information provided by the baseline data with the data collected in the assessment study to improve the overall estimation procedure. We also examine how the availability of baseline data and the use of a conditional selection sampling plan affect recommendations on the design of the assessment study.
In Chapter 2, we give a summary of the existing estimation methods and sampling plans for a BMS assessment study in both industrial and medical settings that are relevant in our context. In Chapters 3 and 4, we investigate the assessment of a BMS in the case where we assume that the misclassification rates are common for all conforming/nonconforming parts and that repeated measurements on the same part are independent, conditional on the true state of the part, i.e., conditional independence. We call models using these assumptions fixed-effects models. In Chapter 3, we look at the case where a gold standard is available, whereas in Chapter 4, we investigate the "no gold standard" case. In both cases, we show that using a conditional selection plan, along with the baseline information, substantially improves the accuracy and precision of the estimators, compared to the standard sampling plan. In Chapters 5 and 6, we investigate the case where we allow for possible variation in the misclassification rates within conforming and nonconforming parts, by proposing some new random-effects models. These models relax the fixed-effects model assumptions regarding constant misclassification rates and conditional independence. As in the previous chapters, we focus on investigating the effect of using conditional selection and baseline information on the properties of the estimators, and give study design recommendations based on our findings. In Chapter 7, we discuss other potential applications of the conditional selection plan, where the study data are augmented with the baseline information on the pass rate, especially in the context where there are multiple BMSs under investigation.
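
For the gold-standard case, the idea of combining conditional-selection data with the baseline pass rate can be sketched with simple Bayes'-rule point estimates, as below. The function, its arguments and the example counts are hypothetical illustrations, not the estimation procedures developed in the thesis, which are likelihood-based and also cover study design, the no-gold-standard case and random-effects models.

    def bms_estimates(n_pass, nc_in_pass, n_fail, c_in_fail, baseline_pass_rate):
        """Point estimates for a binary measurement system assessed with a
        gold standard, from samples of previously passed and failed parts.

        n_pass             -- parts sampled from the previously passed population
        nc_in_pass         -- of those, how many the gold standard calls nonconforming
        n_fail             -- parts sampled from the previously failed population
        c_in_fail          -- of those, how many the gold standard calls conforming
        baseline_pass_rate -- pass rate taken from the inspection system's records
        """
        p_nc_given_pass = nc_in_pass / n_pass      # P(nonconforming | passed)
        p_c_given_fail = c_in_fail / n_fail        # P(conforming | failed)
        pi = baseline_pass_rate

        # Overall nonconforming rate via the law of total probability.
        p_nc = p_nc_given_pass * pi + (1.0 - p_c_given_fail) * (1.0 - pi)

        # Consumer's risk: P(pass | nonconforming), by Bayes' rule.
        consumers_risk = p_nc_given_pass * pi / p_nc
        # Producer's risk: P(fail | conforming), by Bayes' rule.
        producers_risk = p_c_given_fail * (1.0 - pi) / (1.0 - p_nc)

        return {"conforming_rate": 1.0 - p_nc,
                "consumers_risk": consumers_risk,
                "producers_risk": producers_risk}

    # Example (invented numbers): 3 of 1000 sampled passed parts are actually
    # nonconforming, 40 of 200 sampled failed parts are actually conforming,
    # and the baseline pass rate is 0.95.
    print(bms_estimates(1000, 3, 200, 40, 0.95))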
4

Assessment of the Oxbow Morphology of the Caloosahatchee River and its Evolution Over Time: A Case Study in South Florida

Delhomme, Chloe 01 January 2012
The Caloosahatchee River, located in southern Florida, was originally a meandering and relatively shallow river. During the 1920s, the Caloosahatchee River was channelized and became the C-43 canal. The channelization has significantly impacted the river ecosystem, particularly the oxbows. The oxbows are the U-shaped water bodies on each side of the river channel, which are the remnant bends of the original river. To understand how anthropogenic influence affects hydrologic systems, this case study was designed to assess the geomorphic changes of the oxbows of the Caloosahatchee River, Florida. Understanding and documenting the evolution of river morphology is becoming increasingly important today with increasing river degradation due to anthropogenic activities. Such monitoring will provide critical information regarding river conditions to support future management plans and restoration efforts; monitoring is a key element of successful management. This study provided a baseline for future monitoring by assessing the current morphologic conditions of the thirty-seven oxbows of the Caloosahatchee River, coupled with GPS data. Bathymetric surveys were used to assess the morphology of the oxbows. The study also presented trends in the evolution of oxbow morphology by comparing the data collected from the 2011 survey with a cross-sectional survey collected by the South Florida Water Management District in 1978. The study revealed that 21 of 37 oxbows are still open; however, 16 are already partially filled, either at one of the ends or somewhere in the interior. In both 1978 and 2011, oxbows in Lee County were significantly larger, wider and deeper than those in Hendry County. Exterior limb cross-sections were significantly larger, wider and deeper than interior cross-sections in both 1978 and 2011. Finally, an analysis of trends in the evolution of oxbow morphology showed that the overall maximum depth is significantly decreasing, but only in the interior of the oxbows, and that the mean depth is significantly increasing, but only in the exterior cross-sections. This analysis also showed that the width is significantly increasing throughout the oxbows. Factors responsible for such differences may include natural geomorphic processes, pattern changes due to channelization, land use and anthropogenic activities.
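
A minimal sketch of the kind of cross-section metrics compared above (maximum depth, mean depth and width), computed from a surveyed distance/depth profile. The wetted-width definition and the example profiles are illustrative assumptions, not values from the thesis.

    import numpy as np

    def cross_section_metrics(distance_m, depth_m):
        # distance_m: along-transect positions; depth_m: water depth at each position.
        distance_m = np.asarray(distance_m, dtype=float)
        depth_m = np.asarray(depth_m, dtype=float)
        wet = depth_m > 0.0                      # points below the water surface
        return {
            "max_depth_m": float(depth_m.max()),
            "mean_depth_m": float(depth_m[wet].mean()),
            "width_m": float(distance_m[wet].max() - distance_m[wet].min()),
        }

    # Hypothetical profiles of the same oxbow limb surveyed in 1978 and 2011.
    profile_1978 = cross_section_metrics([0, 5, 10, 15, 20], [0.0, 1.2, 2.4, 1.1, 0.0])
    profile_2011 = cross_section_metrics([0, 5, 10, 15, 20, 25], [0.0, 0.9, 1.8, 1.5, 0.6, 0.0])
    print(profile_1978)
    print(profile_2011)

Metrics like these, computed for interior and exterior cross-sections of each oxbow in both survey years, are the kind of inputs a paired comparison of the 1978 and 2011 surveys would use.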
5

Oceanographic Considerations for the Management and Protection of Surfing Breaks

Scarfe, Bradley Edward January 2008
Although the physical characteristics of surfing breaks are well described in the literature, there is little specific research on surfing and coastal management. Such research is required because coastal engineering has had significant impacts on surfing breaks, both positive and negative. Strategic planning and environmental impact assessment methods, a central tenet of integrated coastal zone management (ICZM), are recommended by this thesis to maximise surfing amenities. The research reported here identifies key oceanographic considerations required for ICZM around surfing breaks, including: surfing wave parameters; surfing break components; the relationship between surfer skill, surfing manoeuvre type and wave parameters; wind effects on waves; currents; geomorphic surfing break categorisation; beach-state and morphology; and offshore wave transformations. Key coastal activities that can have impacts on surfing breaks are identified. Environmental data types to consider during coastal studies around surfing breaks are presented, and geographic information systems (GIS) are used to manage and interpret such information. To monitor surfing breaks, a shallow water multibeam echo sounding system was utilised, and an RTK GPS water level correction and hydrographic GIS methodology was developed. Including surfing in coastal management requires coastal engineering solutions that incorporate surfing. As an example, the efficacy of the artificial surfing reef (ASR) at Mount Maunganui, New Zealand, was evaluated. GIS, multibeam echo soundings, oceanographic measurements, photography, and wave modelling were all applied to monitor sea floor morphology around the reef. Results showed that the beach-state has exhibited more cellular circulation since the reef was installed, and that a groin effect on the offshore bar was caused by the structure within the monitoring period, trapping sediment updrift and eroding sediment downdrift. No identifiable shoreline salient was observed. Landward of the reef, a scour hole ~3 times the surface area of the reef has formed. The current literature on ASRs has primarily focused on reef shape and its role in creating surfing waves. However, this study suggests that impacts on the offshore bar, beach-state, scour hole and surf zone hydrodynamics should all be included in future surfing reef designs. More real-world reef studies, including ongoing monitoring of existing surfing reefs, are required to validate theoretical concepts in the published literature.
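
The water level correction idea can be sketched as follows: an RTK GPS observation of the water surface height above the survey datum is subtracted from each measured depth so that the soundings are reduced to a common datum. This shows only the core arithmetic; the full workflow in the thesis (ellipsoid-to-datum separation, heave and draft corrections, and so on) is more involved, and the numbers and names below are invented for illustration.

    def reduce_soundings(measured_depths_m, water_level_above_datum_m):
        # Depth below the survey datum = depth below the water surface minus
        # the height of the water surface above that datum at survey time.
        return [d - water_level_above_datum_m for d in measured_depths_m]

    # Example: RTK GPS puts the water surface 0.8 m above the datum, so 0.8 m is
    # removed from each raw sounding before mapping the sea floor morphology.
    print(reduce_soundings([3.2, 4.7, 5.1], 0.8))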
