21. Dynamics of non-classical interval exchanges
Gadre, Vaibhav S.; Dunfield, Nathan M.; Calegari, Danny C. (2010)
Thesis (Ph.D.), California Institute of Technology, 2010. Title from home page (viewed 06/21/2010). Advisor and committee chair names found in the thesis's metadata record in the digital repository. Includes bibliographical references.
22. Sterility and Reproductive Solutions in Problem Cows (Sterilita a řešení reprodukce u problémových krav)
Vrána, Rostislav (2011)
No description available.
23. Reproductive Analysis of a Red-Pied Cattle Herd (Reprodukční analýza stáda červenostrakatého skotu)
Rulíková, Dagmar (2012)
No description available.
24. Effects of Azithromycin and Moxifloxacin Used Alone and Concomitantly With QTc-Prolonging Medications on the QTc Interval
Johannesmeyer, Herman; Moghimi, Parissa; Parekh, Hershil; Nix, David (2015)
Class of 2015 Abstract
Objectives: The goals of this study were to determine how frequently azithromycin and moxifloxacin were used in combination with other drugs that cause QTc prolongation, to describe the effects these combinations have on QTc interval length, to determine the incidence of QTc prolongation in patients on these medication combinations, and to identify risk factors associated with QTc interval prolongation in these patients.
Methods: A retrospective chart review was performed on patients who received at least two doses of azithromycin or moxifloxacin, noting whether they also received other medications that prolong the QTc interval. ECG information was grouped into daily phases depending on whether the patient was at baseline, receiving antibiotic therapy, receiving QTc-prolonging medication therapy, or receiving concomitant therapy. These data were compared using a repeated-measures ANOVA.
Results: Patients received concomitant antibiotic and QTc-prolonging medication therapy in 70% of the cases analyzed. Across all patients on concomitant therapy there was no significant difference in any measured ECG parameter (all p-values > 0.26). Among patients on azithromycin who experienced QTc prolongation there was a significant difference in RR interval length (p = 0.034); among those who experienced QTc prolongation on moxifloxacin there were significant differences in QT (p = 0.0033) and QTcF (p = 0.0089) length.
Conclusions: These medication combinations are used frequently in the hospital. They may not increase QTc interval length in the general population, but more research is warranted to confirm this finding.
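The abstract reports QT, RR, and QTcF measurements. As background, the "c" in QTc denotes heart-rate correction of the raw QT interval; a minimal sketch of the two standard corrections, Bazett and Fridericia (the latter giving the QTcF reported above), under the assumption of intervals measured in milliseconds:

```python
# Heart-rate correction of the QT interval (illustrative helper
# functions, not the study's analysis code). With QT and RR in
# milliseconds, and RR converted to seconds:
#   Bazett:     QTcB = QT / sqrt(RR)
#   Fridericia: QTcF = QT / RR**(1/3)

def qtc_bazett(qt_ms: float, rr_ms: float) -> float:
    """Bazett-corrected QT in milliseconds."""
    rr_s = rr_ms / 1000.0
    return qt_ms / rr_s ** 0.5

def qtc_fridericia(qt_ms: float, rr_ms: float) -> float:
    """Fridericia-corrected QT (QTcF) in milliseconds."""
    rr_s = rr_ms / 1000.0
    return qt_ms / rr_s ** (1.0 / 3.0)

# Example: QT = 400 ms at a heart rate of 75 bpm (RR = 800 ms).
qtcf_example = qtc_fridericia(400, 800)
```

At an RR of exactly 1000 ms (60 bpm) both corrections leave QT unchanged, which is why QTc is interpreted as the QT interval normalized to a 60 bpm heart rate.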
25. On Interval Scheduling Problems: A Contribution
Bouzina, Khalid Ibn El Walid (1994)
No description available.
26. Estimating the Ratio of Two Poisson Rates
Price, Robert M.; Bonett, Douglas G. (1 September 2000)
Classical and Bayesian methods for interval estimation of the ratio of two independent Poisson rates are examined and compared in terms of their exact coverage properties. Two methods to determine sampling effort requirements are derived.
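One well-known classical approach of the kind the paper examines is the conditional (exact) interval: given counts x1 ~ Poisson(r1·t1) and x2 ~ Poisson(r2·t2), conditioning on n = x1 + x2 makes x1 Binomial(n, p) with p = r1·t1 / (r1·t1 + r2·t2), so a Clopper-Pearson interval for p maps to one for r1/r2. The sketch below is an assumption about one such method, not necessarily the exact set of procedures the paper compares:

```python
# Exact conditional confidence interval for the ratio of two Poisson
# rates, via the binomial reduction and Clopper-Pearson limits.
from scipy.stats import beta

def poisson_ratio_ci(x1, t1, x2, t2, alpha=0.05):
    """CI for r1/r2 given counts x1, x2 observed over exposures t1, t2."""
    n = x1 + x2
    # Clopper-Pearson limits for p = r1*t1 / (r1*t1 + r2*t2).
    lo_p = beta.ppf(alpha / 2, x1, n - x1 + 1) if x1 > 0 else 0.0
    hi_p = beta.ppf(1 - alpha / 2, x1 + 1, n - x1) if x1 < n else 1.0

    def to_ratio(p):
        # Invert p back to the rate ratio: r1/r2 = (p / (1 - p)) * (t2 / t1).
        return float("inf") if p >= 1.0 else (p / (1 - p)) * (t2 / t1)

    return to_ratio(lo_p), to_ratio(hi_p)
```

For example, 10 events versus 20 events over equal exposures (observed ratio 0.5) yields an interval that covers 0.5, and the interval's guaranteed coverage is what "exact" refers to in the abstract.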
27. Comparison of Methods to Calculate Measures of Inequality Based on Interval Data
Neethling, Willem Francois (2015)
Thesis (MComm), Stellenbosch University, 2015.
ENGLISH ABSTRACT: In recent decades, economists and sociologists have taken an increasing interest in the study of
income attainment and income inequality. Many of these studies have used census data, but
social surveys have also increasingly been utilised as sources for these analyses. In these
surveys, respondents' incomes are most often recorded not as exact amounts but in categories, of which
the last is open-ended. The reason is that income is regarded as sensitive data and is sometimes
difficult to disclose.
Continuous data divided into categories is often more difficult to work with than ungrouped data.
In this study, we compare different methods to convert grouped data to data where each
observation has a specific value or point. For some methods, all the observations in an interval
receive the same value; an example is the midpoint method, where all the observations in an
interval are assigned the midpoint. Other methods include random methods, where each
observation receives a random point between the lower and upper bound of the interval. For
some methods, random and non-random, a distribution is fitted to the data and a value is
calculated according to the distribution.
The non-random methods that we use are the midpoint-, Pareto means- and lognormal means
methods; the random methods are the random midpoint-, random Pareto- and random
lognormal methods. Since our focus falls on income data, which usually follows a heavy-tailed
distribution, we use the Pareto and lognormal distributions in our methods.
The above-mentioned methods are applied to simulated and real datasets. The raw values of
these datasets are known, and are categorised into intervals. These methods are then applied
to the interval data to reconvert the interval data to point data. To test the effectiveness of these
methods, we calculate some measures of inequality. The measures considered are the Gini
coefficient, quintile share ratio (QSR), the Theil measure and the Atkinson measure. The
estimated measures of inequality, calculated from each dataset obtained through these
methods, are then compared to the true measures of inequality.
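The midpoint and random-midpoint conversions and the Gini coefficient described in the abstract above can be sketched as follows; the income brackets and observations are hypothetical, and the Gini formula used is the standard mean-difference form, not necessarily the thesis's exact implementation:

```python
# Convert grouped (interval) income data to point data and compute the
# Gini coefficient. Brackets and observations are illustrative only.
import random

BRACKETS = [(0, 10_000), (10_000, 25_000), (25_000, 50_000), (50_000, 100_000)]

def midpoint(interval):
    """Non-random method: every observation gets the interval midpoint."""
    lo, hi = interval
    return (lo + hi) / 2

def random_point(interval, rng=random):
    """Random method: a uniform draw between the interval bounds."""
    lo, hi = interval
    return rng.uniform(lo, hi)

def gini(values):
    """Gini coefficient via the sorted, rank-weighted formulation."""
    values = sorted(values)
    n = len(values)
    total = sum(values)
    weighted = sum((i + 1) * x for i, x in enumerate(values))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Grouped observations: a bracket per respondent.
observations = [BRACKETS[i] for i in [0, 0, 1, 1, 2, 3]]
g_mid = gini([midpoint(iv) for iv in observations])
g_rand = gini([random_point(iv) for iv in observations])
```

Comparing `g_mid` and `g_rand` against the Gini of the raw (ungrouped) values is exactly the kind of effectiveness check the abstract describes; the same comparison extends to the QSR, Theil and Atkinson measures.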
28. High-Intensity Versus Endurance Training: Are Physiological and Biomechanical Adaptations Preserved 2 Months Following the Completion of an Intensive Exercise Intervention?
Siemens, Tina (31 October 2013)
In light of the current global prevalence of overweight and obesity, the associated health risks, and the continuing adoption of sedentary lifestyles, this thesis investigated some of the factors that contribute to exercise adherence, directly comparing high-intensity whole-body interval training with continuous endurance training. Sixty-eight inactive university-aged adults (age: 21.4±3.4 yrs; BMI: 25.6±4.6 kg/m2; VO2peak: 40.1±5.7 ml/kg/min) were randomized into one of three groups: a non-exercise control, whole-body high-intensity training, or continuous endurance training. Aerobic capacity, time-to-completion trials, muscular endurance, and core strength measures were taken at pre-, post- and follow-up testing sessions. Psychological questionnaires were also administered during exercise and throughout the study. Following the intervention, both exercise groups demonstrated equivalent improvements in aerobic performance, with only the interval group showing improved muscular and core endurance. At the 2-month follow-up testing sessions the interval group had lost all aerobic and core adaptations, while the endurance group experienced only a partial loss. This finding indicates that the interval group did not adhere to exercise at a level high enough to preserve the training adaptations, and it is further supported by the psychological factors measured throughout the study, including acute affect, enjoyment and intentions to engage in future exercise.
Thesis (Master, Kinesiology & Health Studies), Queen's University, 2013.
29. A Clinical Patient Vital Signs Parameter Measurement, Processing and Predictive Algorithm Using ECG
Holzhausen, Rudolf (2011)
In the modern clinical and healthcare setting, the electronic collection and analysis of patient vital signs and parameters are a fundamental part of the treatment plan and of a positive patient response. Modern analytical techniques combined with readily available computer software allow near-real-time analysis of digitally acquired measurements; in the clinical context, this can directly affect patient survival rates and treatment success. The processing of clinical parameters, especially the electrocardiogram (ECG) in the critical care setting, has changed little in recent years, and the analysis has mostly been carried out by highly trained and experienced cardiac specialists. Warning, detection and measurement techniques focus on the post-processing of events, relying heavily on averaging and analogue filtering to capture waveform morphologies and deviations accurately. This Ph.D. research investigates an alternative: analysing biosignals, with a focus on the ECG, in the digital domain to determine whether bit-by-bit (near-real-time) analysis is feasible and, more importantly, whether the captured data are significant for the analysis and presentation of wave patterns in a patient-monitoring environment. The research and experiments have shown the potential for logical models that address both the detection and the short-term prediction of possible follow-on events, with a focus on myocardial ischaemia (MI) and infarction-based deviations. The research has shown that real-time waveform processing, compared with traditional graph-based analysis, is accurate and has the potential to benefit the clinician by detecting deviations and morphologies in real time. This is a significant step forward, with the potential to embed years of clinical experience into the measurement processes of clinical devices and to provide expert analytical and identification input electronically at the patient's bedside.
Growing populations are straining healthcare systems, with shortages of clinical and healthcare providers reducing the coverage of treatment that can be provided. The research is a moderate step towards addressing this, aiding the caregiver with true and relevant information and data that assist in the clinical decision process and ultimately improve the standard of patient care.
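The kind of sample-by-sample, near-real-time ECG processing the thesis investigates can be illustrated with a deliberately simple R-peak detector; this threshold-with-refractory-period scheme is an assumption made for illustration, not the detection or prediction models developed in the research:

```python
# Minimal streaming R-peak detector: flag a sample as an R peak when it
# exceeds a threshold, is a local maximum, and falls outside a
# refractory window after the previous beat. RR intervals (and hence
# heart rate) follow directly from successive peak positions.

def detect_r_peaks(samples, fs_hz=250, threshold=0.6, refractory_s=0.2):
    """Return sample indices of detected R peaks in a sampled signal."""
    refractory = int(refractory_s * fs_hz)  # min samples between beats
    peaks, last_peak = [], -refractory
    for i in range(1, len(samples) - 1):
        above = samples[i] > threshold
        local_max = samples[i - 1] < samples[i] >= samples[i + 1]
        if above and local_max and i - last_peak >= refractory:
            peaks.append(i)
            last_peak = i
    return peaks

# Synthetic trace: flat baseline with two unit spikes 1 s apart at 250 Hz.
trace = [0.0] * 500
trace[100] = trace[350] = 1.0
peaks = detect_r_peaks(trace)
# RR intervals in seconds between successive detected peaks.
rr_s = [(b - a) / 250 for a, b in zip(peaks, peaks[1:])]
```

Because each sample is examined once as it arrives, the same loop works on a live stream, which is the operational difference from the post-processing, averaging-based techniques the thesis contrasts against.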
30. Significant or Not: What Does the "Magic" P-Value Tell Us?
Nelson, Mary (2016)
The use of the p-value in determination of statistical significance—and by extension in decision making—is widely taught and frequently used. It is not, however, without limitations, and its use as a primary marker of a worthwhile conclusion has recently come under increased scrutiny. This paper attempts to explain some lesser-known properties of the p-value, including its distribution under the null and alternative hypotheses, and to clearly present its limitations and some straightforward alternatives.
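One of the lesser-known properties the abstract alludes to is that under a true null hypothesis the p-value is uniformly distributed, so a fraction alpha of all null tests comes out "significant" purely by chance. A small simulation makes this concrete (a sketch using a two-sided z-test with known variance, chosen so the p-value has a closed form via the normal CDF):

```python
# Simulate p-values under the null: data drawn with true mean zero,
# tested against H0: mean = 0. The rejection rate at alpha = 0.05
# should hover around 5%, reflecting the uniform null distribution
# of the p-value.
import random
from math import erf, sqrt

def z_test_pvalue(sample, sigma=1.0):
    """Two-sided p-value for H0: mean = 0, known sigma.

    Uses Phi(x) = 0.5 * (1 + erf(x / sqrt(2))), so
    p = 2 * (1 - Phi(|z|)) = 1 - erf(|z| / sqrt(2)).
    """
    n = len(sample)
    z = (sum(sample) / n) / (sigma / sqrt(n))
    return 1 - erf(abs(z) / sqrt(2))

rng = random.Random(0)  # seeded for reproducibility
pvals = [z_test_pvalue([rng.gauss(0, 1) for _ in range(30)])
         for _ in range(2000)]
rejection_rate = sum(p < 0.05 for p in pvals) / len(pvals)
```

Under the alternative hypothesis the distribution is instead skewed toward small values, which is precisely what gives a test its power; the contrast between these two distributions is what the paper sets out to explain.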