11

Reprodutibilidade da hipotensão pós-exercício e de seus mecanismos hemodinâmicos e autonômicos / Reproducibility of post-exercise hypotension and its hemodynamic and autonomic mechanisms

Rafael Yokoyama Fécchio 16 October 2017 (has links)
Post-exercise hypotension (PEH) is characterized by a reduction in blood pressure (BP) after a single session of exercise. Several studies have investigated PEH and its mechanisms using the following methods of calculation: I = post-exercise BP - pre-exercise BP; II = post-exercise BP - post-control BP; and III = [(post-exercise BP - pre-exercise BP) - (post-control BP - pre-control BP)]. Although these studies have demonstrated the occurrence of PEH in different populations and its clinical relevance, little is known about its reproducibility. Thus, the current study was designed to determine the reproducibility (systematic error, reliability, and agreement) of PEH and of its hemodynamic and autonomic mechanisms evaluated by the three methods of calculation above. For this purpose, 30 subjects performed 4 experimental sessions divided into two blocks (test and retest). Each block comprised one exercise session (cycle ergometer, 45 min, 50% of VO2peak) and one control session (seated rest), executed in random order. Before and after the interventions, the following parameters were measured: BP (auscultatory and photoplethysmographic), cardiac output (CO2 rebreathing), heart rate (HR; electrocardiogram), and cardiovascular autonomic modulation (spectral analysis of HR and BP variabilities, as well as spontaneous baroreflex sensitivity). The presence of systematic bias was evaluated by paired t-test, reliability by the intraclass correlation coefficient (ICC), and agreement by the typical error (TE).
PEH and its hemodynamic and autonomic mechanisms evaluated by the three methods of calculation did not present systematic bias. Systolic PEH presented high reliability, and diastolic PEH showed low to moderate reliability, with better results for method II. In general, the hemodynamic and autonomic mechanisms presented low to moderate reliability, with better results for method I. Lastly, agreement parameters varied among the three methods of calculation, which implies that the specific TE of each variable under each method of calculation should be used when estimating the sample size required in studies and when establishing the minimal detectable change in clinical settings where the goal is to compare post-exercise responses obtained in different conditions.
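The three calculation methods and the agreement statistic can be sketched as follows (a minimal illustration; the helper names and the sample pressures are ours, not the thesis's):

```python
import numpy as np

def peh_methods(pre_ex, post_ex, pre_ctrl, post_ctrl):
    """Three ways of quantifying post-exercise hypotension (all in mmHg)."""
    m1 = post_ex - pre_ex                              # method I
    m2 = post_ex - post_ctrl                           # method II
    m3 = (post_ex - pre_ex) - (post_ctrl - pre_ctrl)   # method III
    return m1, m2, m3

def typical_error(test, retest):
    """Typical error (TE): SD of test-retest differences divided by sqrt(2)."""
    diffs = np.asarray(test, float) - np.asarray(retest, float)
    return diffs.std(ddof=1) / np.sqrt(2)
```

For a subject with pre/post-exercise pressures of 120/112 mmHg and pre/post-control pressures of 121/120 mmHg, the three methods give -8, -8, and -7 mmHg respectively; note that method III differs from I and II as soon as BP drifts during the control session.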
12

Systematic errors in the characterization of rock mass quality for tunnels : a comparative analysis between core and tunnel mapping

Domingo Sabugo, María January 2018 (has links)
This thesis analyzes the potential systematic error in the characterization of rock mass quality in borehole and tunnel mapping. The difference arises because characterization of drilled rock cores is commonly done on one-meter lengths, while a tunnel section can be 20-25 m wide. At the same time, previous studies indicate that engineering geologists tend to characterize rock mass quality during tunnel excavation conservatively, to ensure sufficient rock support. To estimate the possible systematic error produced by this difference in scale, a simulation was performed within a geological domain representative of Stockholm city. In the simulation, each meter of the tunnel section was given a separate value of rock mass quality, randomly drawn from a normal distribution representative of the studied geological domain. The minimum value was set to represent the characterized rock mass quality for that tunnel section. The difference between the geological domain, reproducing the borehole mapping, and the simulated values, representing the tunnel mapping, produced a systematic error. The results showed a systematic error in the RMR basic index of around 15 points on average, which, compared to the difference of 5-7 points obtained in the Norrström and Norrmalm tunnels of the recently constructed Stockholm Citylink project, is excessive. However, the simulation assumed that (1) the results obtained were the same in the borehole mapping and the tunnel mapping, (2) the only difference was the engineering geologist assigning the lowest RMR basic value to the tunnel section, and (3) there was no spatial correlation in the RMR basic quality index.
After analyzing the three assumptions the simulation was based upon, the absence of spatial correlation was found to be the most significant, which indicates that spatial correlation in rock mass quality needs to be included if a more accurate value is to be obtained.
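The simulation logic described above — draw one quality value per meter and let the section minimum represent the mapped value — can be sketched as a short Monte Carlo experiment (the mean, standard deviation, and 20 m span below are illustrative assumptions, not the thesis's calibrated inputs):

```python
import numpy as np

def min_of_section_bias(mean=60.0, sd=10.0, span_m=20, n_sections=100_000, seed=0):
    """Per meter of a tunnel section, draw an RMR_basic value from the
    domain distribution; the mapped value for the section is the minimum.
    Returns the average bias: domain mean minus mean of section minima."""
    rng = np.random.default_rng(seed)
    draws = rng.normal(mean, sd, size=(n_sections, span_m))
    return mean - draws.min(axis=1).mean()
```

For a 20 m span the expected minimum of 20 normal draws sits roughly 1.9 standard deviations below the mean, so with sd = 10 the simulated bias comes out near 19 points — the same order as the ~15-point systematic error reported above.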
13

Test of Decay Rate Parameter Variation due to Antineutrino Interactions

Shih-Chieh Liu (5929988) 16 January 2019 (has links)
High precision measurements of a weak interaction decay were conducted to search for possible variation of the decay rate parameter caused by an antineutrino flux. The experiment searched for variation of the ⁵⁴Mn electron capture decay rate parameter at a precision of 1 part in ~10⁵ by comparing the decay rate measured in the presence of an antineutrino flux of ~3×10¹² cm⁻² s⁻¹ with no-flux measurements. The experiment is located 6.5 meters from the reactor core of the High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory. A measurement at this level of precision requires a detailed understanding of both systematic and statistical errors; otherwise, systematic errors in the measurement may mimic fundamental interactions.

The gamma spectrum was collected from the electron capture decay of ⁵⁴Mn. What differs in this experiment compared to previous experiments is: (1) a strong, uniform, highly controlled, and repeatable source of antineutrino flux, using a reactor, nearly 50 times higher than the solar neutrino flux on the Earth; (2) a variation of the antineutrino flux from HFIR that is 600 times higher than the variation in the solar neutrino flux on the Earth; (3) the extensive use of neutron and gamma-ray shielding around the detectors; (4) a controlled environment for the detector, including a fixed temperature, a nitrogen atmosphere, and stable power supplies; (5) the use of precision High Purity Germanium (HPGe) detectors; and (6) accurate time stamping of all experimental runs. Using accurate detector energy calibrations, electronic dead time corrections, background corrections, and pile-up corrections, the measured variation in the ⁵⁴Mn decay rate parameter is found to be δλ/λ = (0.034 ± 1.38)×10⁻⁵.
This measurement in the presence of the HFIR flux is equivalent to a cross-section of σ = (0.097 ± 1.24)×10⁻²⁵ cm². These results are consistent with no measurable decay rate parameter variation due to an antineutrino flux, yielding a 68% confidence level upper limit of δλ/λ ≤ 1.43×10⁻⁵, or σ ≤ 1.34×10⁻²⁵ cm² in cross-section. The cross-section upper limit obtained in this null result experiment is ~10⁴ times more sensitive than past experiments reporting positive results in ⁵⁴Mn.
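As an order-of-magnitude check, a fractional-rate limit converts to a cross-section via σ ≈ (δλ/λ)·λ/Φ. The sketch below assumes the standard ⁵⁴Mn half-life of about 312 days and takes the quoted "~3×10¹²" flux at face value; neither number is given exactly above:

```python
import math

HALF_LIFE_S = 312.2 * 86400   # 54Mn half-life, ~312.2 days (assumed literature value)
FLUX = 3.0e12                 # antineutrino flux, cm^-2 s^-1 (approximate quoted value)

def limit_to_cross_section(dl_over_l):
    """Convert a fractional decay-rate limit to a cross-section:
    delta_lambda = (dl/l) * ln(2) / T_half, then sigma = delta_lambda / flux."""
    decay_const = math.log(2) / HALF_LIFE_S   # s^-1
    return dl_over_l * decay_const / FLUX     # cm^2
```

Plugging in δλ/λ = 1.43×10⁻⁵ gives σ ≈ 1.2×10⁻²⁵ cm², consistent in magnitude with the 1.34×10⁻²⁵ cm² limit quoted above; the residual difference plausibly reflects the approximate flux used here.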
14

Back-calculating emission rates for ammonia and particulate matter from area sources using dispersion modeling

Price, Jacqueline Elaine 15 November 2004 (has links)
Engineering directly impacts current and future regulatory policy decisions. The foundation of air pollution control and air pollution dispersion modeling lies in the mathematics, chemistry, and physics of the environment. Regulatory decision making must therefore rely upon sound science and engineering as the core of appropriate policy making (objective analysis in lieu of subjective opinion). This research evaluated particulate matter and ammonia concentration data as well as two modeling methods, a backward Lagrangian stochastic model and a Gaussian plume dispersion model. The analysis assessed the uncertainty surrounding each sampling procedure, in order to better understand the uncertainty in the final emission rate calculation (a basis for federal regulation), and it assessed the differences between emission rates generated using the two dispersion models. First, this research evaluated the uncertainty in the gravimetric sampling of particulate matter and the passive ammonia sampling technique at an animal feeding operation. Future research will further characterize the wind velocity profile and the vertical temperature gradient during the modeling period; this information will help quantify the uncertainty of the meteorological inputs to the dispersion model, which will in turn aid in understanding the propagated uncertainty in the modeling outputs. Next, an evaluation of the emission rates generated by the Industrial Source Complex (Gaussian) model and the WindTrax (backward Lagrangian stochastic) model revealed that the concentrations each model computes from its own average emission rate are extremely close in value; however, the average emission rates calculated by the two models differ by a factor of 10. In conclusion, current and future sources are regulated based on emission rate data from previous time periods.
Emission factors are published for the regulation of various sources, and these factors are derived from back-calculated model emission rates and site management practices. Thus, the factor-of-10 difference in emission rates could prove troubling for regulation if the model from which an emission rate is back-calculated is not the model used to predict a future downwind pollutant concentration.
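Back-calculation exploits the fact that a Gaussian plume concentration is linear in the emission rate Q, so the rate follows by dividing a measured concentration by the model's unit-emission prediction. A minimal sketch of that idea (the receptor geometry and dispersion coefficients below are illustrative, not values from this study):

```python
import math

def plume_concentration(Q, u, sy, sz, y, z, H):
    """Ground-reflected Gaussian plume concentration at a receptor.
    Q: emission rate (g/s), u: wind speed (m/s), sy/sz: lateral/vertical
    dispersion coefficients (m), y: crosswind offset (m), z: receptor
    height (m), H: effective source height (m)."""
    lateral = math.exp(-y**2 / (2 * sy**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sz**2))
                + math.exp(-(z + H)**2 / (2 * sz**2)))
    return Q / (2 * math.pi * u * sy * sz) * lateral * vertical

def back_calculate_rate(c_measured, u, sy, sz, y, z, H):
    """Emission rate consistent with a measured downwind concentration:
    concentration is linear in Q, so divide by the unit-emission result."""
    return c_measured / plume_concentration(1.0, u, sy, sz, y, z, H)
```

A round-trip sanity check: a concentration generated with Q = 5 g/s back-calculates to exactly 5 g/s. The factor-of-10 concern above arises because two models with different internal physics produce different unit-emission predictions, and hence different back-calculated Q from the same measurement.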
16

Marknadseffektivitet och det systematiska felet : Finansanalytikers och Ekonomijournalisters marknadspåverkan / Market Efficiency and the Systematical Error

Wiman, Robin, Persson, Alexander January 2015 (has links)
Research on efficient markets is divided: one side claims that the market is fully efficient and that it is not possible to create any kind of excess return; the other side argues, on the contrary, that only historical information is reflected in today's prices. In the short term there may be some inefficiency, and most recognize that the market contains anomalies. The purpose of this study is to investigate whether there exist systematic errors regarding flows of information suggesting that the Swedish stock market is not of semi-strong or strong form efficiency. We start from three methodological positions: starting point, research approach, and epistemological beliefs. A deductive quantitative methodology is used, and we apply the event study method. We find evidence for the existence of systematic errors in the market regarding flows of information in the form of stock recommendations.
The results suggest that the Swedish stock market is not of the strong efficient form, and in one case out of four we find that it does not possess the semi-strong form either.
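The event study method referred to above typically compares realized returns around a recommendation with a market-model prediction fitted on a pre-event window. A minimal sketch (the window lengths and the market-model specification are our assumptions, not the study's exact design):

```python
import numpy as np

def abnormal_returns(stock, market, est_slice, event_slice):
    """Market-model event study: fit stock_r = a + b * market_r by OLS on
    the estimation window, then subtract the prediction in the event window."""
    stock, market = np.asarray(stock, float), np.asarray(market, float)
    b, a = np.polyfit(market[est_slice], stock[est_slice], 1)  # slope, intercept
    return stock[event_slice] - (a + b * market[event_slice])
```

Under market efficiency the abnormal returns should be indistinguishable from noise; a systematic error of the kind found above shows up as abnormal returns clustered around recommendation dates.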
17

Srovnání vybraných technik sběru dat kvantitativního výzkumu / Comparison of selected data collection techniques of quantitative research

Utler, Richard January 2017 (has links)
The thesis analyses the differences that result from using specific data collection methods: computer assisted telephone interviewing (CATI) and computer assisted personal interviewing (CAPI). The main findings are based on an election model built by the research agency TNS Aisa for the 2013 elections to the Chamber of Deputies of the Parliament of the Czech Republic. The aim of the thesis is to determine whether the chosen method of data collection influences the estimated electoral preferences, in which sociodemographic categories this happens, and whether the differences in the obtained data are related to the ideological orientation of the political parties. The dependence of the results on the data collection method is assessed by the chi-square test of independence. Further, through personal interviews with researchers, it is determined at what stage of the research process the data may be distorted and what the possible causes are. The contribution of the thesis is the finding that the chosen collection method influences the measured preferences among voters aged 30-44, university graduates, and voters living in Prague. In these groups, the left-wing ČSSD is preferred more in personal interviewing and the right-wing TOP 09 in telephone interviewing. Data collection itself was evaluated as the riskiest phase of the research process, in which distortion could occur. While the accuracy of personal interviewing depends largely on the interviewer's personality, the sources of distortion in telephone interviewing stem more from the nature of using the phone itself.
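The chi-square independence test used above reduces to comparing an observed method × preference contingency table against the counts expected under independence; a self-contained sketch (the example table is invented, not the survey's data):

```python
import numpy as np

def chi_square_independence(table):
    """Pearson chi-square statistic and degrees of freedom for an
    r x c contingency table of observed counts."""
    obs = np.asarray(table, dtype=float)
    # Expected counts under independence: outer product of margins / total.
    expected = obs.sum(axis=1, keepdims=True) * obs.sum(axis=0, keepdims=True) / obs.sum()
    stat = ((obs - expected) ** 2 / expected).sum()
    dof = (obs.shape[0] - 1) * (obs.shape[1] - 1)
    return stat, dof
```

For a hypothetical table with CATI counts [30, 10] and CAPI counts [10, 30] across two parties, the statistic is 20 with 1 degree of freedom, far above the 3.84 critical value at the 5% level, so method and preference would not be independent.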
18

Energy Usage Evaluation and Condition Monitoring for Electric Machines using Wireless Sensor Networks

Lu, Bin 16 November 2006 (has links)
Energy usage evaluation and condition monitoring for electric machines are important in industry for overall energy savings. Traditionally these functions are realized only for large motors, in wired systems formed by communication cables and various types of sensors. The unique characteristics of wireless sensor networks (WSNs) make them the ideal wireless structure for low-cost energy management in industrial plants. This work focuses on developing nonintrusive motor-efficiency-estimation methods, which are essential for wireless motor-energy-management systems in a WSN architecture capable of improving overall energy savings in U.S. industry. The work starts with an investigation of existing motor-efficiency-evaluation methods. Based on the findings, a general approach to developing nonintrusive efficiency-estimation methods is proposed, incorporating sensorless rotor-speed detection, stator-resistance estimation, and loss-estimation techniques. Following this approach, two new methods are proposed for estimating the efficiencies of in-service induction motors, using air-gap torque estimation and a modified induction motor equivalent circuit, respectively. The experimental results show that both methods achieve accurate efficiency estimates within ±2-3% error under normal load conditions, using only a few cycles of input voltages and currents. The analytical results obtained from error analysis agree well with the experimental results. Using the proposed efficiency-estimation methods, a closed-loop motor-energy-management scheme for industrial plants with a WSN architecture is proposed. Besides the energy-usage-evaluation algorithms, this scheme also incorporates various sensorless current-based motor-condition-monitoring algorithms. A uniform data interface is defined to seamlessly integrate these energy-evaluation and condition-monitoring algorithms.
Prototype wireless sensor devices are designed and implemented to satisfy the specific needs of motor energy management. A WSN test bed is implemented. The applicability of the proposed scheme is validated from the experimental results using multiple motors with different physical configurations under various load conditions. To demonstrate the validity of the measured and estimated motor efficiencies in the experiments presented in this work, an in-depth error analysis on motor efficiency measurement and estimation is conducted, using maximum error estimation, worst-case error estimation, and realistic error estimation techniques. The conclusions, contributions, and recommendations are summarized at the end.
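At its core, an air-gap-torque efficiency estimate divides the shaft output power (air-gap torque times rotor speed, minus rotational losses) by the measured input power. The sketch below is far cruder than the thesis's methods — it lumps all rotational losses into one figure — and the sample numbers are invented:

```python
import math

def estimate_efficiency(p_in_w, airgap_torque_nm, speed_rpm, fw_loss_w=0.0):
    """Efficiency from an estimated air-gap torque and sensorless rotor speed.
    Output power = air-gap torque x mechanical angular speed, minus friction
    and windage losses; efficiency = output / measured input."""
    omega = speed_rpm * 2.0 * math.pi / 60.0      # mechanical speed, rad/s
    p_out = airgap_torque_nm * omega - fw_loss_w  # shaft output power, W
    return p_out / p_in_w
```

For example, a motor drawing 10 kW at 1764 rpm with an estimated 50 N·m air-gap torque and 200 W of friction-and-windage loss comes out near 90% efficient, the kind of figure the ±2-3% error bound above would apply to.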
19

Analysis of errors made by learners in simplifying algebraic expressions at grade 9 level

Ncube, Mildret 06 1900 (has links)
The study investigated errors made by Grade 9 learners when simplifying algebraic expressions. Eighty-two (82) Grade 9 learners from a rural secondary school in Limpopo Province, South Africa, participated in the study. The sequential explanatory design, which uses both quantitative and qualitative approaches, was used to analyse errors in basic algebra. In the quantitative phase, a 20-item test was administered to the 82 participants. Learners' common errors were identified and grouped according to error type. The qualitative phase involved interviews with selected participants, focused on each identified common error in order to establish why learners made these errors. The study identified six (6) common errors in relation to simplifying algebraic expressions. The causes of these errors were attributed to poor arithmetic background, interference from new learning, failure to deal with direction and operation signs, problems with algebraic notation, and misapplication of rules. / Mathematics Education / M. Ed. (Mathematics Education)
