About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Statistical method in a comparative study in which the standard treatment is superior to others

Ikeda, Mitsuru, Shimamoto, Kazuhiro, Ishigaki, Takeo, Yamauchi, Kazunobu, 池田, 充, 山内, 一信 11 1900
No description available.
2

Teacher Matters: Re-examining the Effects of Grade-3 Test-based Retention Policy

Hong, Yihua 21 August 2012
This study aims to unpack the ‘black box’ that connects the grade-3 test-based retention policy with students’ academic outcomes. I theorized that the policy effects on teaching and learning may be modified by instructional capacity, but are unlikely to occur through enhancing teachers’ capability to teach. Analyzing the Early Childhood Longitudinal Study Kindergarten cohort (ECLS-K) dataset, I first explored the relationship between the test-based retention policy and instructional capacity, as indicated by teacher expectations of students’ learning capability, and then investigated whether and how those expectations moderated the policy effects on instructional time reallocation, student academic performance, and students’ self-perceived academic competence and interest. To remove the selection bias associated with the non-experimental data, I applied a novel propensity score-based causal inference method, the marginal mean weighting through stratification (MMW-S) method, and extended it to a causal analysis that approximates a randomization of schools to the test-based retention policy followed by a randomization of classes to teachers with different levels of expectations. Consistent with my theory, I found that the test-based retention policy had no effect on teacher expectations. Although the policy uniformly increased the time allocated to math instruction, it produced no significant changes in students’ overall performance or overall self-perception in math. In addition, I found that students responded differently to the test-based retention policy depending on the expectations they received from their grade-3 teachers. The results suggested some benefits of positive expectations over negative and indifferent expectations in moderating the policy effects, including more access to advanced content, higher learning gains for average-ability students, and more resilient student learning over the long term. However, the results also showed that positive expectations alone are not sufficient for academic improvement under the high-stakes policy. If implemented by a positive-expectation teacher, the policy could be detrimental to students’ learning in the non-tested subject or to their learning of basic reading and math skills, and it would also place the lowest-ability students at a disadvantage. The findings have significant implications for the ongoing high-stakes testing debate, for school improvement under the current accountability reform, and for research on teacher effectiveness.
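For readers unfamiliar with MMW-S, a minimal sketch of the weighting step follows. It mirrors the published form of the method (stratify on the estimated propensity score, then weight each unit by its stratum size times the marginal treatment probability, divided by the stratum's count of that treatment), but the variable names, logistic propensity model, and five-stratum default are illustrative assumptions, not the dissertation's code.

```python
# A minimal sketch of marginal mean weighting through stratification (MMW-S),
# the propensity-score method named in the abstract. Estimator details here
# are illustrative assumptions, not the author's implementation.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def mmws_ate(X, z, y, n_strata=5):
    """Estimate an average treatment effect with MMW-S weights."""
    # 1. Estimate propensity scores from observed covariates.
    ps = LogisticRegression(max_iter=1000).fit(X, z).predict_proba(X)[:, 1]
    # 2. Cut the sample into propensity-score strata.
    strata = pd.qcut(ps, q=n_strata, labels=False, duplicates="drop")
    df = pd.DataFrame({"z": z, "y": y, "s": strata})
    p_treat = df["z"].mean()  # marginal probability of treatment
    # 3. MMW-S weight: (stratum size x marginal Pr(Z=z)) / count of z in stratum.
    n_s = df.groupby("s")["z"].transform("size")
    n_zs = df.groupby(["s", "z"])["z"].transform("size")
    marginal = np.where(df["z"] == 1, p_treat, 1 - p_treat)
    w = n_s * marginal / n_zs
    # 4. Weighted difference in outcome means approximates the ATE.
    treated = df["z"] == 1
    return (np.average(df.loc[treated, "y"], weights=w[treated])
            - np.average(df.loc[~treated, "y"], weights=w[~treated]))
```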
3

A study on the Advantageous tender evaluation system at Government Procurement Law

Liu, Mei-man 20 August 2007
The essence of the most advantageous tender (MAT) is to allow the procuring authorities to carry out a comprehensive assessment of the technical merits, quality, function, terms, and prices of tenders in accordance with the judging criteria listed in the tendering documents. In this way, the contract can be awarded so as to ensure the best quality within the budget and to encourage sound competition among tendering parties while eliminating vicious undercutting. Scandals arising from recent procurement projects, such as the ETC procurement project, the High Speed Rail vibration reduction project, the procurement of the Kuan-hwa guided-missile fast attack boats (F-ABG), and the construction of the southern branch of the National Palace Museum, have attracted great attention. On March 22, 2006, the Premier announced that "award to the lowest tender should be made the rule while the MAT should be the exception" in future government procurement projects. This announcement highlighted the flaws and problems yet to be resolved within the existing system. After reviewing the related literature and conducting a thorough analysis of the current situation and of several case studies of the tender selection process, this study surveyed people involved in government procurement to learn their views on the legal framework and practice of MAT selection, the function of the tender selection committee, and the management and efficiency of the selection process. Suggestions for improvement are put forward based on the findings and analysis. The survey found that the ranking method with price taken into account and the overall evaluation score method were the tendering methods most frequently used in the interviewees' experience. Price can be a crucial factor in determining the most advantageous tender, but the most important factor in the award selection process is technical merit. The process of selecting the most advantageous tender is the stage most susceptible to flaws and scandals, and the inappropriate appointment of committee members is the main cause of these flaws. In practice, the selection of committee members is itself very difficult: a member's expertise, personal bias, and understanding of the procurement can all affect the fairness and credibility of the tendering process. Besides the committee members, top officials in the procuring institutions also play important roles in the decision-making process. Differences among interviewees in their understanding of the legal framework of MAT selection, the functions of the selection committee, the execution of MAT selection, and the management mechanisms of MAT selection were associated with factors such as the institution they work for, the nature of their work, their job title, the training hours they had received, and whether they are professionally accredited; years of experience, however, did not contribute to such differences. Job title, nature of work, years of experience, and training hours were associated with significant differences in the understanding of the efficiency of MAT selection, whereas institution and professional accreditation were not. Based on these findings, a way forward is proposed: a set of comprehensive regulations for most advantageous tender selection should be established, and a standard operating procedure and template should be designed.
The decision authority of the procuring institution should be defined so that procurements actually meet their needs. A comprehensive roster of suggested professionals should be compiled to assist with different kinds of procurements and to ensure the fairness of the selection process. The judging criteria for prices, the weights of the evaluation elements, and the scoring principles should be specified, and a reasonable price-scoring method determined. Professional training should be provided to procurement personnel, and committee members in particular should receive professional training to improve the credibility of the selection process. Finally, a performance evaluation mechanism should be established to improve the efficiency of MAT selection and put the government's budget to the best use.
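As an illustration of the overall evaluation score method mentioned in the survey findings, the toy sketch below folds criterion scores into a single weighted score per tender. The criteria, weights, vendors, and scores are invented for the example and are not drawn from the study.

```python
# A minimal sketch of a weighted overall evaluation score for tenders.
# All numbers below are invented illustrative values.
weights = {"technical_merit": 0.4, "quality": 0.2, "function": 0.15,
           "terms": 0.1, "price": 0.15}  # weights must sum to 1.0

tenders = {
    "Vendor A": {"technical_merit": 85, "quality": 80, "function": 75,
                 "terms": 70, "price": 90},
    "Vendor B": {"technical_merit": 78, "quality": 88, "function": 82,
                 "terms": 85, "price": 70},
}

def overall_score(scores, weights):
    """Weighted sum of criterion scores for one tender."""
    return sum(weights[c] * scores[c] for c in weights)

# The most advantageous tender is the one with the highest overall score.
best = max(tenders, key=lambda v: overall_score(tenders[v], weights))
print(best, overall_score(tenders[best], weights))
```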
4

CRISPR-Drawr, a tool to design mutagenic primer

Torbjörn, Larsson January 2023
Short open reading frames (sORFs) are codon sequences with a start and a stop codon within at most 100 codons. Cells produce many transcripts from them, and some sORFs have been found to be functional: sORFs have been associated with embryogenesis, myogenesis, immunity, and various diseases including cancers. Cell culture screening is a common method for studying sORF function. By inserting mutations at known sORF locations, one can affect their translation by removing start codons, inserting premature stop codons, or removing native stop codons. A newer tool set for doing this is CRISPR technology, where single guide RNA (gRNA) can be used to make more precise genome edits. Unfortunately, such design is nontrivial and yields many candidate variants for testing, resulting in a back-and-forth testing process involving several separate design tools. In this project, a comprehensive way to view and iterate over the many test combinations was developed, with the intent of easing the process and decreasing the likelihood of errors. The developed solution is a tool that integrates the currently best design tools. It also introduces a new quality summary score that evaluates the estimated outcomes of the various designed guide variants. The tool was tested, and the score was found to simplify and amplify the previously used scoring methods. The pipeline is simple to install and use, integrates the most actively developed tools, and its installation is as future-proof as can be managed in a rapidly evolving field.
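The abstract does not spell out the formula of the quality summary score, so the sketch below only illustrates the general idea of collapsing per-guide sub-scores into one ranking score. The assumed sub-scores (an on-target efficiency and an off-target specificity, both in [0, 1]) and the linear weighting are placeholder assumptions, not CRISPR-Drawr's actual scoring.

```python
# A hypothetical guide-ranking score combining two normalized sub-scores.
# The weighting scheme is an assumption for illustration only.
def summary_score(on_target: float, off_target_specificity: float,
                  w_on: float = 0.5) -> float:
    """Combine per-guide sub-scores into a single ranking score."""
    assert 0.0 <= on_target <= 1.0 and 0.0 <= off_target_specificity <= 1.0
    return w_on * on_target + (1.0 - w_on) * off_target_specificity

# Rank invented candidate guides from best to worst.
guides = {"gRNA-1": (0.82, 0.91), "gRNA-2": (0.74, 0.99)}
ranked = sorted(guides, key=lambda g: summary_score(*guides[g]), reverse=True)
print(ranked)
```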
5

Patterns, Determinants, and Spatial Analysis of Health Service Utilization following the 2004 Tsunami in Thailand

Isaranuwatchai, Wanrudee 09 January 2012
On December 26, 2004, a massive earthquake struck off Indonesia, triggering a tsunami that affected several countries, including Thailand, and claimed some 280,000 lives. The disaster had important implications for the health status of Thai citizens, as well as for health system planning, and thus underscores the need to study its long-term effects. This dissertation examined the patterns, determinants, and spatial analysis of health service utilization following the tsunami in Thailand. The primary aim was to determine whether tsunami-affected status (personal injury or property loss) and distance to a health facility (public health center or hospital) influenced health service utilization. The study population included Thai citizens (aged 14+) living in the tsunami-affected Thai provinces of Phuket, Phang Nga, Krabi, and Ranong. Study participants were randomly selected from the ‘affected’ and ‘unaffected’ populations. One and two years after the tsunami, participants were interviewed in person about demographic and socio-economic factors, disaster impact, health status, and health service utilization. Five types of health services were examined: outpatient services, inpatient services, home visits, medications, and informal (unpaid) care. Distance to a health facility was calculated using Geographic Information System’s Network Analyst. The Grossman model of the demand for health care and the distance decay concept provided the foundation for this study. A propensity score method and a two-part model were used to address the study objectives. There were 1,889 participants. One year after the tsunami, individuals affected by property loss were more likely to use medications than unaffected participants. Two years after the tsunami, individuals with personal injury were more likely to use outpatient services, medications, and informal care than unaffected participants. Distance to a health facility was associated with the use of medications and informal care. The results confirmed the long-term effects of a tsunami. This dissertation may assist decision- and policy-makers in identifying those most likely to use health services and in directing health resources to the affected areas. The patterns, determinants, and spatial analysis of health service utilization found in this study may not be specific to a tsunami and may provide insights into the post-disaster contexts of other natural disasters.
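For readers unfamiliar with the two-part model named above, a minimal sketch follows: a logit for whether a person uses a service at all, and a regression for the level of use among users. The column names, covariate handling, and naive retransformation are illustrative assumptions, not the dissertation's specification.

```python
# A minimal two-part model sketch using statsmodels: part 1 models any use
# of a service, part 2 models the amount of use among users.
# "visits" and the covariate list are hypothetical column names.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def two_part_expected_use(df: pd.DataFrame, covariates: list[str]):
    X = sm.add_constant(df[covariates])
    # Part 1: probability of any utilization (logit on a 0/1 indicator).
    any_use = (df["visits"] > 0).astype(int)
    p_any = sm.Logit(any_use, X).fit(disp=0).predict(X)
    # Part 2: level of utilization among users only (log-linear OLS).
    users = df["visits"] > 0
    ols = sm.OLS(np.log(df.loc[users, "visits"]), X[users]).fit()
    # Naive retransformation; Duan's smearing factor would correct its bias.
    level = np.exp(ols.predict(X))
    return p_any * level  # expected utilization per person
```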
6

Flaskhalsanalys med händelsestyrd simulering vid produktion mot beställning / Bottleneck analysis using discrete event simulation in a make to order environment

Gunnarsson, Nils, Bevemyr, Martin January 2022
When a manufacturing company intends to increase its market share, it generally needs to increase its production as well. To achieve this in a cost-effective manner, it must know which factors limit the production system; these factors are generally known as bottlenecks. A production system is not static, however, which means the bottlenecks are not static either: they can move in both the short term and the long term. The aim of this case study is to examine the bottlenecks in a production system and how they move, as well as which improvements could be applied to improve the flow in the production system. Data about the production system were collected through studies of an internal database, a time study, and interviews. These data were used in a simulation model built with FACTS Analyzer. The model was studied and used to carry out experiments; among other things, multi-objective optimization with the SCORE method was used to find the system's primary and secondary bottlenecks. The study showed that a station in the welding department was the main bottleneck in the production system and that the secondary bottleneck lay in the following department, the paint shop. The study also produced an optimized improvement plan for the factory up to a production volume of 260 boats per year.
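As background on utilization-based bottleneck detection in a discrete event simulation (the study itself used FACTS Analyzer and the SCORE method, not the code below), here is a toy sketch: three stations in series, with the busiest station flagged as the primary bottleneck. The stations, capacities, process times, and arrival rate are invented.

```python
# A toy discrete event simulation of a three-station flow line using simpy,
# flagging the station with the highest utilization as the bottleneck.
import simpy

PROCESS_TIMES = {"welding": 8.0, "painting": 6.0, "assembly": 4.0}  # minutes
busy = {name: 0.0 for name in PROCESS_TIMES}  # accumulated busy time

def boat(env, stations):
    # Each boat visits the stations in order, queuing when one is occupied.
    for name, station in stations.items():
        with station.request() as req:
            yield req
            yield env.timeout(PROCESS_TIMES[name])
            busy[name] += PROCESS_TIMES[name]

def source(env, stations, interval=5.0):
    # Release a new boat into the line at a fixed interval.
    while True:
        env.process(boat(env, stations))
        yield env.timeout(interval)

env = simpy.Environment()
stations = {name: simpy.Resource(env, capacity=1) for name in PROCESS_TIMES}
env.process(source(env, stations))
env.run(until=10_000)

# The station with the highest utilization is the primary bottleneck
# (here welding, since its cycle time exceeds the arrival interval).
for name in sorted(busy, key=busy.get, reverse=True):
    print(f"{name}: utilization {busy[name] / env.now:.0%}")
```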
7

Expeditious Causal Inference for Big Observational Data

Yumin Zhang 28 July 2022
This dissertation addresses two significant challenges in the causal inference workflow for Big Observational Data. The first is designing Big Observational Data with high-dimensional and heterogeneous covariates. The second is performing uncertainty quantification for estimates of causal estimands that are obtained from the application of black-box machine learning algorithms on the designed Big Observational Data. The methodologies developed by addressing these challenges are applied to the design and analysis of Big Observational Data from a large public university in the United States.

Distributed Design. A fundamental issue in causal inference for Big Observational Data is confounding due to covariate imbalances between treatment groups. This can be addressed by designing the study prior to analysis. The design ensures that subjects in the different treatment groups who have comparable covariates are subclassified or matched together. Analyzing such a designed study helps to reduce biases arising from the confounding of covariates with treatment. Existing design methods, developed for traditional observational studies with a single designer, can yield unsatisfactory designs with sub-optimal covariate balance for Big Observational Data, because they cannot accommodate the massive dimensionality, heterogeneity, and volume of the Big Data. We propose a new framework for the distributed design of Big Observational Data amongst collaborative designers. Our framework first assigns subsets of the high-dimensional and heterogeneous covariates to multiple designers. The designers then summarize their covariates into lower-dimensional quantities, share their summaries with the others, and design the study in parallel based on their assigned covariates and the summaries they receive. The final design is selected by comparing balance measures for all covariates across the candidates and identifying the best among them. We perform simulation studies and analyze datasets from the 2016 Atlantic Causal Inference Conference Data Challenge to demonstrate the flexibility and power of our framework for constructing designs with good covariate balance from Big Observational Data.

Designed Bootstrap. The combination of modern machine learning algorithms with the nonparametric bootstrap can enable effective predictions and inferences on Big Observational Data. An increasingly prominent and critical objective in such analyses is to draw causal inferences from the Big Observational Data. A fundamental step in addressing this objective is to design the observational study prior to the application of machine learning algorithms. However, applying the traditional nonparametric bootstrap to Big Observational Data requires excessive computational effort, because every bootstrap sample would need to be re-designed under the traditional approach, which can be prohibitive in practice. We propose a design-based bootstrap for deriving causal inferences, with reduced bias, from the application of machine learning algorithms on Big Observational Data. Our bootstrap procedure operates by resampling from the original designed observational study, eliminating the additional, costly design steps on each bootstrap sample that are performed under the standard nonparametric bootstrap. We demonstrate the computational efficiency of this procedure compared to the traditional nonparametric bootstrap, and its equivalence in terms of confidence interval coverage rates for average treatment effects, by means of simulation studies and a real-life case study.

Case Study. We apply the distributed design and designed bootstrap methodologies in a case study involving institutional data from a large public university in the United States. The institutional data contain comprehensive information about the undergraduate students in the university, ranging from their academic records to on-campus activities. We study the causal effects of undergraduate students’ attempted course load on their academic performance based on a selection of covariates from these data. Ultimately, our real-life case study demonstrates how our methodologies enable researchers to use straightforward design procedures to obtain valid causal inferences, with reduced computational effort, from the application of machine learning algorithms on Big Observational Data.
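A minimal sketch of the designed-bootstrap idea described above: resample the design units of the already-designed study (here, assumed matched pairs), so no bootstrap replicate needs to be re-designed. The pair-difference estimator and the simulated data are illustrative assumptions, not the dissertation's procedure.

```python
# A minimal design-based bootstrap: resample whole matched pairs from a
# designed study and build a percentile confidence interval for the ATE.
import numpy as np

rng = np.random.default_rng(0)

def designed_bootstrap_ci(pair_diffs, n_boot=2000, alpha=0.05):
    """CI for the average treatment effect from matched-pair differences."""
    pair_diffs = np.asarray(pair_diffs)
    n = len(pair_diffs)
    # Resample whole pairs (the design unit), not raw subjects, so the
    # matching from the original design is preserved in every replicate.
    idx = rng.integers(0, n, size=(n_boot, n))
    boot_ates = pair_diffs[idx].mean(axis=1)
    lo, hi = np.quantile(boot_ates, [alpha / 2, 1 - alpha / 2])
    return pair_diffs.mean(), (lo, hi)

# Example: treated-minus-control outcome differences for 500 matched pairs.
diffs = rng.normal(loc=0.3, scale=1.0, size=500)
print(designed_bootstrap_ci(diffs))
```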
