About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
601

Striking a balance with concussion assessment : use of the Wii balance board to evaluate postural control

Cullen, Hilary M. 31 May 2017 (has links)
Background: Concussion assessments rely on a multifaceted approach in which evaluation of balance and postural control plays an important role. Following a concussion, 67% of individuals report dizziness as a persistent symptom and 30% experience balance impairments. Studies incorporating the common Balance Error Scoring System (BESS) tool suggest that these impairments return to pre-injury baselines within ten days of the incident. In contrast, studies incorporating more advanced posturography methods observe significant differences in balance up to one year following injury. While the BESS is consistently associated with low sensitivity and poor reliability scores, advanced posturography systems using force plates are not practical or accessible in most recreational sports environments. Recently, the Wii Balance Board (WBB) has been identified as a potential force plate proxy, and research confirms that the WBB is both valid and reliable in collecting center-of-pressure (COP) data. Thus, the WBB may be useful for investigating post-concussion balance deficits. Objective: The purpose of this study was to investigate the potential utility of a customized WBB program to assess postural balance in an athletic population. The study aimed to assess change in postural balance using the clinical BESS and WBB assessment tools to evaluate balance at fixed intervals during a regular athletic season and following concussion. Design: Prospective partial cohort. Methods: Balance was assessed at baseline, mid-season, and post-season. Individuals who sustained a concussion during the study period were further assessed weekly for four weeks post-injury. Results: No significant differences were observed in raw BESS scores across regular-season or post-concussion time points. In contrast, significant differences in several WBB outcome measures were observed. In the single-stance condition, COP-ML worsened by 24% and COP-T worsened by 9% between the baseline and post-season time points (p = .002 and p = .007, respectively). In contrast, participants improved by 14% on a timed dynamic task (p = .003) between the baseline and post-season time points. Following concussion, only the WBB dynamic outcome measures were found to be statistically significant. A positive trend was observed post-concussion, suggesting that a learning effect exists with the dynamic WBB program. Conclusion: Study results emphasize the importance of considering the progression of the athletic season when interpreting baseline and post-concussion balance measurements, and support the use of a quantitative balance assessment, such as the WBB, to improve measurement of static and dynamic postural balance. / Graduate / 0566 / hilarymcullen@gmail.com
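As an editor's illustration only (not code from the thesis), the sketch below shows one common way to compute a medial-lateral path length ("COP-ML") and a total sway path length ("COP-T") from a sampled center-of-pressure trajectory; whether the thesis defines its outcome measures exactly this way, and the units and sampling assumed, are assumptions.

```python
# Editor's sketch: COP path-length measures from a sampled COP trajectory.
# The definitions, units (cm) and synthetic trajectory are assumptions.
import numpy as np

def cop_path_lengths(x_ml, y_ap):
    """x_ml, y_ap: COP coordinates in the medial-lateral and antero-posterior directions."""
    cop_ml = np.sum(np.abs(np.diff(x_ml)))                  # ML excursion path length
    cop_t = np.sum(np.hypot(np.diff(x_ml), np.diff(y_ap)))  # total sway path length
    return cop_ml, cop_t

# Synthetic 30 s trial at an assumed 100 Hz sampling rate.
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(0, 0.02, 3000))
y = np.cumsum(rng.normal(0, 0.02, 3000))
print(cop_path_lengths(x, y))
```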
602

An Analysis of Quality Improvement Education at US Colleges of Pharmacy.

Cooley, Janet, Stolpe, Samuel F, Montoya, Amber, Walsh, Angela, Hincapie, Ana L, Arya, Vibhuti, Nelson, Melissa L, Warholak, Terri 04 1900 (has links)
Objective. To analyze quality improvement (QI) education across US pharmacy programs. Methods. This was a two-stage cross-sectional study that inspected each accredited school's website for published QI curricula or related content and e-mailed a questionnaire to each school asking about its QI curriculum or content. T-tests and chi-square tests were used for analysis, with alpha set a priori at .05. Results. Sixty responses (47% response rate) revealed the least-covered QI topics: quality dashboards/sentinel systems (30%); Six Sigma or other QI methodologies (45%); safety and quality measures (57%); Medicare Star measures and payment incentives (58%); and how to implement changes to improve quality (60%). More private institutions than public institutions covered adverse drug events and required a dedicated QI class; however, required QI projects were more often reported by public institutions. Conclusion. Despite the need for pharmacists to understand QI, it is not covered well in pharmacy school curricula.
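As a hedged illustration of the kind of comparison described above (chi-square tests at alpha = .05), the sketch below tests whether coverage of a single QI topic differs between private and public institutions; the counts are hypothetical and not taken from the study.

```python
# Hedged sketch: chi-square comparison of topic coverage by institution type.
# The counts below are made up for illustration; only the procedure is real.
from scipy.stats import chi2_contingency

coverage = [[18, 7],    # private institutions: covered, not covered (hypothetical)
            [20, 15]]   # public institutions:  covered, not covered (hypothetical)
chi2, p, dof, expected = chi2_contingency(coverage)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")  # compare p with alpha = .05
```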
603

Designing Luby transform codes as an application layer

22 June 2011 (has links)
M.Ing. / Application Layer Forward Error Correction (AL-FEC) is a relatively new concept that uses erasure codes to add reliability to particular application streams on a network, and it has become particularly popular for media streaming services. Fountain codes have shown promise as the erasure code of choice for these implementations. The fountain code concept has two popular instantiations, the Luby Transform (LT) code and the Raptor code. While the Raptor code is the more efficient of the two, the LT code is the focal point of this dissertation. The work addressed two primary objectives. The first was to find sets of input parameters that yield an optimal implementation of the LT code for a given set of input block sizes. The simulation work for this investigation covered a wide range of input parameters for each input block size considered. While a number of other studies have performed such parameter optimisation, we have not found any that present results as comprehensive as ours. The second objective was to analyse the code when applied as an AL-FEC reliability mechanism for streaming media. This simulation work was performed on simulated IP network environments using the NS2 network simulator, with the applied codes based on the optimal parameter sets found for the first objective. We analysed the effective throughput achievable by the code in the face of various packet loss rates, and from the simulation data derived a constraint on the allowable bit rate of media that uses the LT code as an AL-FEC reliability mechanism. Completing this work required developing the LT code simulation tools for the respective investigations: a stand-alone LT code simulator as well as an LT code AL-FEC reliability mechanism for NS2.
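To make the LT code concept concrete, here is a minimal encoder sketch under stated assumptions; it is not the dissertation's implementation. Each encoded packet XORs a random set of equal-length input blocks, with the set size ("degree") drawn from a robust soliton distribution whose parameters c and delta are illustrative tuning choices.

```python
# Minimal LT-code encoder sketch (illustrative assumptions, not the thesis code).
import math
import random

def robust_soliton(k, c=0.05, delta=0.5):
    """Robust soliton degree distribution over degrees 1..k, as a list of weights."""
    s = c * math.log(k / delta) * math.sqrt(k)
    m = int(round(k / s))
    rho = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    tau = [0.0] * k
    for d in range(1, min(m, k + 1)):
        tau[d - 1] = s / (d * k)
    if 1 <= m <= k:
        tau[m - 1] = s * math.log(s / delta) / k
    weights = [r + t for r, t in zip(rho, tau)]
    z = sum(weights)
    return [w / z for w in weights]

def lt_encode(blocks, n_packets, seed=0):
    """Return n_packets of (neighbour indices, XOR payload) for equal-length blocks."""
    k = len(blocks)
    dist = robust_soliton(k)
    rng = random.Random(seed)
    packets = []
    for _ in range(n_packets):
        degree = rng.choices(range(1, k + 1), weights=dist)[0]
        neighbours = rng.sample(range(k), degree)
        payload = bytes(blocks[neighbours[0]])
        for idx in neighbours[1:]:
            payload = bytes(a ^ b for a, b in zip(payload, blocks[idx]))
        packets.append((neighbours, payload))
    return packets

# Example: split a 32-byte message into k = 8 blocks and emit 12 encoded packets.
data = b"0123456789ABCDEF0123456789ABCDEF"
blocks = [data[i:i + 4] for i in range(0, len(data), 4)]
print(lt_encode(blocks, n_packets=12)[0])
```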
604

Pruned convolutional codes and Viterbi decoding with the Levenshtein distance metric

26 February 2009 (has links)
M.Ing. / In practical transmission or storage systems, the convolutional encoding and Viterbi decoding scheme is widely used to protect data from substitution errors. Two independent insertion/deletion/substitution (IDS) error correcting designs, working on the convolutional encoder and the Viterbi decoder respectively, are presented in this thesis. The Levenshtein distance has previously been postulated to be a suitable branch comparison metric for the Viterbi algorithm on channels with not only substitution errors but also insertion/deletion errors; however, this hypothesis has largely remained to be investigated. In the first coding scheme, a modified Viterbi algorithm based on the Levenshtein distance metric is used as the decoding algorithm. Our experiments give evidence that the modified Viterbi algorithm with the Levenshtein distance metric is a suitable decoding algorithm for IDS channels. In the second coding scheme, a new type of convolutional code called the path-pruned convolutional code is introduced on the encoder side. By periodically deleting branches in a high-rate convolutional code trellis diagram to create a specific insertion/deletion error correcting block codeword structure in the encoded sequence, we obtain an encoding system that protects against insertion, deletion and substitution errors at the same time. Moreover, the path-pruned convolutional code is an ideal code for unequal error protection. We therefore also present an application of rate-compatible path-pruned convolutional codes over IDS channels.
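As a brief illustration of the branch comparison metric discussed above, the sketch below computes the Levenshtein distance between two symbol sequences; how the metric is embedded in the modified Viterbi algorithm is specific to the thesis and is not reproduced here.

```python
# Sketch of the Levenshtein (edit) distance as a comparison metric for
# insertion/deletion/substitution channels. Sequences are illustrative.
def levenshtein(a, b):
    """Minimum number of insertions, deletions and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, start=1):
        curr = [i]
        for j, y in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution (free if equal)
        prev = curr
    return prev[-1]

# A Hamming metric cannot compare sequences of different lengths, whereas the
# edit distance charges a single deletion here:
assert levenshtein("1101", "111") == 1
```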
605

Multiplex Gene Synthesis and Error Correction from Microchip Oligonucleotides and High-throughput Gene Screening with Programmable Double Emulsion Microfluidic Droplets

Ma, Siying January 2015 (has links)
Promising applications in the design of various biological systems hold critical implications, as heralded in the rising field of synthetic biology. To achieve these goals, the ability to synthesize and screen in situ DNA constructs of any size or sequence rapidly, accurately and economically is crucial. Today, the process of DNA oligonucleotide synthesis has been automated, but the overall development of gene and genome synthesis and error correction technology has lagged far behind that of gene and genome sequencing. Lagging even further behind is the capability of screening a large population at the single-cell, single-protein or single-gene level. Compartmentalization of single cells in water-in-oil emulsion droplets provides an opportunity to screen vast numbers of individual assays with quantitative readouts. However, these single-emulsion droplets are incompatible with aqueous-phase analysis and offer no control over molecular transport.

This thesis presents the development of a multi-tool ensemble platform targeted at high-throughput gene synthesis, error correction and screening. An inkjet oligonucleotide synthesizer is constructed to synthesize oligonucleotides as sub-arrays onto patterned and functionalized thermoplastic microchips. The arrays are married to microfluidic wells that provide a chamber for enzymatic amplification and assembly of the DNA from the microarrays into a larger construct. Harvested product is then amplified off-chip and error corrected using a mismatch endonuclease-based reaction. Bacterial cells bearing individual synthetic gene variants are encapsulated as single cells into double-emulsion droplets, where cell populations are enriched up to 1000-fold within several hours of proliferation. Permeation of isopropyl β-D-1-thiogalactopyranoside (IPTG) molecules from the external solution allows induction of target gene expression. The induced expression of the synthetic fluorescent proteins from at least ~100 bacteria per droplet generates clearly distinguishable fluorescent signals that enable droplet sorting by the fluorescence-activated cell sorting (FACS) technique. The integration of oligo synthesis and gene assembly on the same microchip facilitates automation and miniaturization, which leads to cost reduction and increases in throughput. The capacity of the double-emulsion system (millions of discrete compartments in 1 ml of solution), combined with high-throughput sorting by FACS, provides the basis for screening complex gene libraries for different functionality and activity, significantly reducing cost and turn-around time. / Dissertation
606

The impact of sample size re-estimation on the type I error rate in the analysis of a continuous end-point

Zhao, Songnian January 1900 (has links)
Master of Science / Department of Statistics / Christopher Vahl / Sample size estimation is generally based on assumptions made during the planning stage of a clinical trial. Often, there is limited information available to estimate the initial sample size, which may result in a poor estimate. For instance, an insufficient sample size may not have the capability to produce statistically significant results, while an over-sized study leads to a waste of resources or even ethical issues, in that too many patients are exposed to potentially ineffective treatments. Therefore, an interim analysis in the middle of a trial may be worthwhile to assure that the significance level is at the nominal level and/or that the power is adequate to detect a meaningful treatment difference. In this report, the impact of sample size re-estimation on the type I error rate for a continuous end-point in a clinical trial with two treatments is evaluated through a simulation study. Two sample size re-estimation methods are considered: blinded and partially unblinded. For the blinded method, all collected data from both groups are used to estimate the variance, while only data from the control group are used to re-estimate the sample size for the partially unblinded method. The simulation study is designed with different combinations of assumed variance, assumed difference in treatment means, and re-estimation method. The end-point is assumed to follow a normal distribution and the variances of the two groups are assumed to be identical; in addition, equal sample sizes are required for each group. According to the simulation results, the type I error rates were preserved for all settings.
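The following is a minimal simulation sketch in the spirit of the blinded method described above, assuming a normal endpoint, equal variances and an interim look after half of the planned patients; all numeric parameters and the re-estimation rule (never reducing the planned size) are the editor's assumptions, not the report's settings.

```python
# Sketch: type I error under blinded mid-trial variance re-estimation,
# two-arm normal endpoint, simulated under H0. Parameters are assumptions.
import numpy as np
from scipy import stats

def one_trial(rng, sigma=10.0, delta_planned=5.0, alpha=0.05, power=0.8):
    z_a, z_b = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
    n_plan = int(np.ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta_planned ** 2))
    n1 = n_plan // 2                        # interim look after half the patients per group
    a1 = rng.normal(0.0, sigma, n1)         # simulate under H0: equal means
    b1 = rng.normal(0.0, sigma, n1)
    s_blinded = np.std(np.concatenate([a1, b1]), ddof=1)    # pooled, blinded SD estimate
    n_new = max(n_plan,
                int(np.ceil(2 * (z_a + z_b) ** 2 * s_blinded ** 2 / delta_planned ** 2)))
    a2 = rng.normal(0.0, sigma, n_new - n1)
    b2 = rng.normal(0.0, sigma, n_new - n1)
    _, p = stats.ttest_ind(np.concatenate([a1, a2]), np.concatenate([b1, b2]))
    return p < alpha

rng = np.random.default_rng(1)
rejections = sum(one_trial(rng) for _ in range(2000))
print("empirical type I error:", rejections / 2000)   # should stay near 0.05
```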
607

Omyl v trestním právu / Error in criminal law

Tylšarová, Kateřina January 2016 (has links)
This thesis deals with mistake (error) in criminal law and the issues surrounding it. The purpose of the work is to emphasize the importance of this criminal-law institution and to clarify its role within criminal law. From this also arises the very close connection between mistake and the principle of subsidiarity of criminal repression, to which particular regard must be paid in some cases of mistake of law. The offender's mistake is essential for legal theory, and it is also of great significance for the offender, because it affects his criminal liability. In the first part I begin by defining several concepts that are essential to understanding mistake and that cannot be dispensed with. The first chapter deals with the criminal offence. The second chapter turns to the subjective side of the criminal offence and to concepts such as fault, intent, negligence, attempt, preparatory acts and others, i.e. the terms most often used in connection with mistake in criminal law. In the third chapter I also briefly introduce the circumstances excluding unlawfulness: in short, I list which ones are involved and what these concepts mean...
608

The effect of sample size re-estimation on type I error rates when comparing two binomial proportions

Cong, Danni January 1900 (has links)
Master of Science / Department of Statistics / Christopher I. Vahl / Estimation of sample size is an important and critical procedure in the design of clinical trials. A trial with an inadequate sample size may not produce a statistically significant result. On the other hand, an unnecessarily large sample size will increase the expenditure of resources and may cause an ethical problem due to the exposure of an unnecessarily large number of human subjects to an inferior treatment. A poor estimate of the necessary sample size is often due to the limited information available at the planning stage. Hence, adjusting the sample size mid-trial has recently become a popular strategy. In this work, we introduce two methods of sample size re-estimation for trials with a binary endpoint that utilize the interim information collected from the trial: a blinded method and a partially unblinded method. The blinded method recalculates the sample size based on the first stage's overall event proportion, while the partially unblinded method performs the calculation based only on the control event proportion from the first stage. We performed simulation studies with different combinations of expected proportions based on fixed ratios of response rates, with equal sample sizes per group. The study shows that for both methods the type I error rates were preserved satisfactorily.
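For illustration only, the sketch below mimics the partially unblinded approach for a binary endpoint: the sample size is recalculated from the first-stage control-arm proportion under an assumed ratio of response rates. The proportions, ratio, interim timing and test choice are assumptions, not the report's settings.

```python
# Sketch: type I error under partially unblinded re-estimation, binary endpoint,
# simulated under H0 (equal true proportions). Parameters are assumptions.
import numpy as np
from scipy import stats

def n_per_group(p_c, ratio=1.5, alpha=0.05, power=0.8):
    """Per-group sample size for comparing p_c against p_t = ratio * p_c."""
    p_t = min(ratio * p_c, 0.99)
    z_a, z_b = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
    return int(np.ceil((z_a + z_b) ** 2 * (p_c * (1 - p_c) + p_t * (1 - p_t))
                       / (p_t - p_c) ** 2))

def one_trial(rng, p_true=0.3, alpha=0.05):
    n_plan = n_per_group(p_true)
    n1 = n_plan // 2
    c1 = rng.binomial(1, p_true, n1)        # control arm, first stage
    t1 = rng.binomial(1, p_true, n1)        # treatment arm, simulated under H0
    p_hat_control = max(c1.mean(), 1 / n1)  # partially unblinded estimate
    n_new = max(n_plan, n_per_group(p_hat_control))
    c = np.concatenate([c1, rng.binomial(1, p_true, n_new - n1)])
    t = np.concatenate([t1, rng.binomial(1, p_true, n_new - n1)])
    table = [[c.sum(), len(c) - c.sum()], [t.sum(), len(t) - t.sum()]]
    _, p_value, _, _ = stats.chi2_contingency(table)
    return p_value < alpha

rng = np.random.default_rng(7)
print("empirical type I error:",
      sum(one_trial(rng) for _ in range(2000)) / 2000)
```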
609

Performance of map matching and route tracking depending on the quality of the GPS data

Houda, Prokop January 2016 (has links)
Satellite positioning measurements are never perfectly unbiased. Due to the many types of error that affect signal transmission through open space and urban areas, each positioning measurement carries a certain degree of uncertainty. Satellite signal receivers also do not receive the signal continuously; localization information arrives discretely. Sampling rate and positioning error therefore introduce uncertainty into the various positioning algorithms used in localization, logistics and intelligent transport systems (ITS) applications. This thesis examines the effect of positioning error and sampling rate on geometric and topological map matching algorithms and on the precision of route tracking within these algorithms, and it evaluates the effect of different network densities on algorithm performance. It also creates a platform for simulating and evaluating map matching algorithms. Map matching is the process of attaching an initial positioning measurement to the network. A number of authors have presented algorithms over the past decades, which shows how complex a topic map matching is, mostly due to changing environmental and network conditions. Geometric and topological map matching algorithms are chosen, modelled and simulated, and their response to different input combinations is evaluated. Recommendations for possible ITS applications are also given in terms of proposed receiver requirements. The results confirm the general expectation that map matching improves on the initial position error and thus serves as a form of error mitigation. The correlation between an increase in the original positioning error and an increase in the map matching error is also common to all the algorithms in the thesis. However, the comparison also showed large differences between the topological and geometric algorithms in their ability to cope with distorted input data. Whereas topological algorithms clearly performed better in scenarios with smaller initial error and denser sampling, geometric matching proved more effective on heavily distorted or very sparsely sampled data sets. This is caused mostly by the ability to easily leave a wrongly mapped position, which in these situations is a comparative advantage of simple geometric algorithms. Future work should concentrate on involving more algorithms in the comparison, which would produce more valuable results. Simulating the errors with known and improved error models could also increase the generalizability of the results.
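As a concrete example of the geometric (point-to-curve) matching evaluated above, the sketch below snaps a GPS fix to the nearest segment of a toy road network by perpendicular projection; coordinates are treated as planar metres, and the network and fix are invented for illustration.

```python
# Minimal geometric map-matching sketch: snap a fix to the nearest road segment.
import math

def project(p, a, b):
    """Project point p onto segment a-b; return (distance, projected point)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    t = 0.0 if seg_len2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    qx, qy = ax + t * dx, ay + t * dy
    return math.hypot(px - qx, py - qy), (qx, qy)

def match(fix, segments):
    """Return the (segment, (distance, snapped point)) pair with the smallest distance."""
    return min(((seg, project(fix, *seg)) for seg in segments),
               key=lambda item: item[1][0])

segments = [((0, 0), (100, 0)), ((100, 0), (100, 80))]   # two-edge toy network
seg, (dist, snapped) = match((60, 7), segments)
print(seg, round(dist, 1), snapped)   # the fix snaps to the first edge, 7 m away
```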
610

Construction of ternary convolutional codes

Ansari, Muhammad Khizar 14 August 2019 (has links)
Error control coding is employed in modern communication systems to reliably transfer data through noisy channels. Convolutional codes are widely used for this purpose because they are easy to encode and decode, and they have accordingly been employed in numerous communication systems. The focus of this thesis is a search for new and better ternary convolutional codes with large free distance, so that more errors can be detected and corrected. An algorithm is developed to obtain ternary convolutional codes (TCCs) with the best possible free distance. Tables are given of binary and ternary convolutional codes with the best free distance for rate 1/2 with encoder memory up to 14, rate 1/3 with encoder memory up to 9, and rate 1/4 with encoder memory up to 8. / Graduate
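For context, the sketch below shows how a rate-1/2 feedforward ternary convolutional encoder over GF(3) operates; the generator polynomials are arbitrary illustrative choices and not the best-free-distance codes tabulated in the thesis.

```python
# Sketch of a rate-1/2 feedforward ternary convolutional encoder over GF(3).
# Generator polynomials are illustrative, not the thesis's optimal codes.
def ternary_conv_encode(message, generators=((1, 2, 1), (1, 1, 2))):
    """Encode a sequence of trits (0/1/2); one output trit per generator per input trit."""
    memory = [0] * (len(generators[0]) - 1)
    out = []
    for trit in message:
        state = [trit] + memory                    # current input plus register contents
        for g in generators:
            out.append(sum(c * s for c, s in zip(g, state)) % 3)
        memory = state[:-1]                        # shift the register
    return out

print(ternary_conv_encode([1, 0, 2, 2]))   # eight output trits, two per input trit
```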
