71

Estudo comparativo entre metodos de calibracao de aplicadores clinicos de radiacao beta / Comparative study among calibration methods of clinical applicators of beta radiation

ANTONIO, PATRICIA de L. 09 October 2014 (has links)
Clinical 90Sr+90Y applicators are instruments used in brachytherapy procedures and must be calibrated periodically, in accordance with international standards and recommendations. In this work, four calibration methods for dermatological and ophthalmic applicators were studied, and the results were compared with those provided by the calibration certificates of the applicator manufacturers. The methods involved the standard clinical applicator of the Instrument Calibration Laboratory (LCI), calibrated by the American primary standard laboratory of the National Institute of Standards and Technology, as a reference; an Amersham applicator, also belonging to the LCI, as a reference; a mini extrapolation chamber developed at IPEN as an absolute standard; and thermoluminescent dosimetry. The performance of the mini extrapolation chamber and of a commercial PTW extrapolation chamber was studied by means of quality control tests, such as leakage current, repeatability and reproducibility. The depth-dose distribution in water, a study of great importance in the dosimetry of clinical applicators, was determined using the mini extrapolation chamber and thermoluminescent dosimeters. The results obtained were considered satisfactory in both cases when compared with the data provided by the IAEA (2002) standard. In addition, a postal dosimetry system was developed for the calibration of clinical applicators by means of the thermoluminescence technique, to be sent to clinics and hospitals without the need to transport the sources to the LCI at IPEN for calibration. / Dissertation (Master's) / IPEN/D / Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
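The mini extrapolation chamber can serve as an absolute standard because the surface dose rate follows from the slope of the ionization current versus air-gap curve extrapolated to zero gap. The sketch below illustrates that principle only; the readings, electrode area and physical constants are illustrative assumptions rather than values from the thesis, and correction factors are omitted.

```python
import numpy as np

# Minimal sketch of the extrapolation-chamber principle: the surface dose rate follows from
# the slope of ionization current vs. air gap, extrapolated to zero gap. All readings and
# constants below are illustrative assumptions (not thesis data); correction factors
# (backscatter, attenuation, humidity, etc.) are omitted.
gap_m = np.array([0.5, 1.0, 1.5, 2.0, 2.5]) * 1e-3        # electrode spacings (m)
current_A = np.array([2.1, 4.0, 6.2, 8.1, 10.3]) * 1e-12  # corrected ionization currents (A)

slope, _ = np.polyfit(gap_m, current_A, 1)   # dI/dd as the gap tends to zero (A/m)

W_over_e = 33.97                 # mean energy per ion pair in air (J/C)
s_w_air = 1.11                   # water-to-air stopping-power ratio for 90Sr/90Y betas (approx.)
rho_air = 1.197                  # air density at reference conditions (kg/m^3)
area = np.pi * (1.5e-3) ** 2     # collecting-electrode area (m^2), hypothetical 3 mm diameter

dose_rate = W_over_e * s_w_air * slope / (rho_air * area)  # absorbed dose rate to water (Gy/s)
print(f"Surface dose rate: {dose_rate:.3e} Gy/s")
```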
72

Metabolism and interactions of pesticides in human and animal in vitro hepatic models

Abass, K. M. (Khaled M.) 16 November 2010 (has links)
Abstract Risk assessment of chemicals needs reliable scientific information and one source of information is the characterization of the metabolic fate and toxicokinetics of a chemical. Metabolism is often the most important factor contributing to toxicokinetics. Cytochrome P450 (CYP) enzymes are a superfamily of microsomal proteins playing a pivotal role in xenobiotic metabolism. In the present study, pesticides were used as representative xenobiotics since exposure to pesticides is a global challenge to risk assessment. Human and animal in vitro hepatic models were applied with the advantage of novel analytical techniques (LC/TOF-MS and LC/MS-MS) to elucidate the in vitro metabolism and interactions of selected pesticides. The results of these studies demonstrate that CYP enzymes catalyze the bioactivation of profenofos, diuron and carbosulfan into their more toxic metabolites desthiopropylprofenofos, N-demethyldiuron and carbofuran, respectively. The suspected carcinogenic metabolite of metalaxyl, 2,6-dimethylaniline, was not detected. CYP3A4 and CYP2C19 activities may be important in determining the toxicity arising from exposure to profenofos and carbosulfan. Individuals with high CYP1A2 and CYP2C19 activities might be more susceptible to diuron toxicity. Qualitative results of in vitro metabolism were generally in agreement with the results obtained from the published in vivo data, at least for the active chemical moiety and major metabolites. Considerable differences in the quantities of the metabolites produced within the species, as well as in the ratios of the metabolites among the species, were observed. These findings illustrate that in vitro screening of qualitative and quantitative differences is needed to provide a firm basis for interspecies and in vitro-in vivo extrapolations. Based on our findings, in vitro-in vivo extrapolation based on the elucidation of the in vitro metabolic pattern of pesticides in human and animal hepatic models could be a good model for understanding and extending the results of pesticide metabolism studies to human health risk assessment.
73

Mechanistic prediction of intestinal first-pass metabolism using in vitro data in preclinical species and in man

Hatley, Oliver James Dimitriu January 2014 (has links)
The impact of the intestine in determining the oral bioavailability of drugs has been extensively studied. Its large surface area, metabolic content and position as the first site of exposure for orally ingested xenobiotics mean that its contribution can be significant for certain drugs. However, prediction of the exact metabolic component of the intestine is limited, in part because of limitations in the validation of in vitro tools and of in vitro-in vivo extrapolation scaling factors. Microsomes are a well-established in vitro tool for extrapolating hepatic metabolism, but standardised methodologies for their preparation from the intestine are limited, owing to complexities in preparation (e.g. the presence of multiple non-metabolic cells, proteases and mucus). Therefore, the aims of this study were to establish an optimised method of intestinal microsome preparation via elution in the proximal rat intestine, to determine microsomal scaling factors by correcting for protein losses during preparation, and to assess species differences in another preclinical species (dog) and in human, as well as regional differences in scaling factors and metabolism. Following optimisation of a reproducible intestinal microsome preparation method in the rat, the importance of heparin in limiting mucosal contamination was established. These microsomes were characterised for total cytochrome P450 (CYP) content, and for CYP and uridine 5′-diphosphate glucuronosyltransferase (UGT) activities, using the marker probes testosterone and 4-nitrophenol. The loss-corrected microsomal scaling factor between two pools of n=9 rats was 9.6±3.5 (recovery 33%). A broad range of compounds (n=25), in terms of metabolic activity and physicochemical properties, was screened in rat intestinal microsomes. Prediction accuracy was assessed relative to in-house generated or literature in vivo estimates of the fraction escaping intestinal metabolism (FG), through in vitro-in vivo extrapolation of the observed metabolism using the derived scaling factors and either Caco-2 permeability or physicochemical permeability estimates within the Qgut model. In the dog, regional differences in intestinal scaling factors and metabolic activities were explored, as well as relationships between the proximal intestine and liver in matched donors. Positive correlations in both hepatic activity and microsomal scalars were observed, and robust scaling factors were established using the three microsomal markers. A total of 24 compounds were screened for hepatic and intestinal metabolism in order to make in vivo estimates of FG, the fraction escaping hepatic metabolism (FH) and oral bioavailability (F). Estimates based on Caco-2 and physicochemical scaling, as well as on a commercial PBPK software platform (ADAM model, Simcyp® v12), were broadly similar, with generally reduced prediction accuracy for proximal physicochemical-based Qgut scaling and improved predictions using Caco-2 Qgut or PBPK approaches. Worse predictions were observed for compounds with high protein binding, transporter substrates and/or CYP3A inhibitors. Regional metabolism peaked in the proximal intestine before declining distally. Human intestinal microsomes were prepared from jejunum and ileum tissue. Although samples were limited, regional differences in metabolic activities and scaling factors were also assessed, using correction markers and activity measurements for 23 compounds. In all, 20 compounds overlapped between all three species. Comparison of Fa·FG between rat and human for CYP3A substrates showed a modest relationship; however, relationships between the preclinical species and human were generally poor, as the differing contributions of testosterone and 4-NP metabolite formation between species limited the observed correlations. Within species, however, good estimates of oral bioavailability were obtained. This is the largest known interspecies comparison of intestinal metabolism and scaling factors with microsomes prepared within the same laboratory.
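For reference, the Qgut model mentioned above combines villous blood flow, a permeability clearance term and the scaled gut intrinsic clearance to estimate FG. The sketch below is a minimal illustration of that published model; the scaling factor, blood flow and compound parameters are illustrative assumptions rather than values reported in the thesis.

```python
# Minimal sketch of the Qgut model for the fraction escaping gut-wall metabolism (FG).
# The scaling factor, villous blood flow and compound parameters are illustrative
# assumptions, not values reported in the thesis.

def fraction_escaping_gut(clint_ul_min_mg, mg_microsomal_protein, cl_perm_l_h,
                          q_villi_l_h=18.0, fu_gut=1.0):
    """Qgut model: FG = Qgut / (Qgut + fu_gut * CLu_int,G)."""
    # Scale microsomal intrinsic clearance (uL/min/mg) to a whole-intestine CLu_int (L/h)
    clu_int_l_h = clint_ul_min_mg * mg_microsomal_protein * 60.0 / 1e6
    # Hybrid flow term combining villous blood flow and permeability clearance
    q_gut = q_villi_l_h * cl_perm_l_h / (q_villi_l_h + cl_perm_l_h)
    return q_gut / (q_gut + fu_gut * clu_int_l_h)

# Hypothetical compound: moderate intrinsic clearance, high permeability clearance
fg = fraction_escaping_gut(clint_ul_min_mg=50.0, mg_microsomal_protein=3000.0, cl_perm_l_h=50.0)
print(f"Predicted FG = {fg:.2f}")  # combine with Fa and FH for oral bioavailability, F = Fa*FG*FH
```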
74

Velocity memory

Makin, Alexis David James January 2011 (has links)
It is known that primates are sensitive to the velocity of moving objects. We can also remember velocity information after moving objects disappear. This cognitive faculty has been investigated before; however, the literature on velocity memory to date has been fragmented. For example, velocity memory has been disparately described as a system that controls eye movements and delayed discrimination. Furthermore, velocity memory may have a role in motion extrapolation, i.e. the ability to judge the position of a moving target after it becomes occluded. This thesis provides a unifying account of velocity memory, and uses electroencephalography (EEG) to explore its neural basis. In Chapter 2, the relationship between oculomotor control and motion extrapolation was investigated. Two forms of motion extrapolation task were presented. In the first, participants observed a moving target disappear then reappear further along its path. Reappearance could be at the correct time, too early or too late. Participants discriminated reappearance error with a two-alternative forced choice button press. In the second task, participants saw identical targets travel behind a visible occluder, and they attempted to press a button at the exact time that it reached the other side. Tasks were completed under fixation and free viewing conditions. The accuracy of participants' judgments was reduced by fixation in both tasks. In addition, eye movements were systematically related to behavioural responses, and small eye movements during fixation were affected by occluded motion. These three results imply that common velocity memory and pre-motor systems mediate eye movements and motion extrapolation. In Chapter 3, different types of velocity representation were explored. Another motion extrapolation task was presented, and targets of a particular colour were associated with fast or slow motion. On identical-velocity probe trials, colour still influenced response times. This indicates that long-term colour-velocity associations influence motion extrapolation. In Chapter 4, interference between subsequently encoded velocities was explored. There was robust interference between motion extrapolation and delayed discrimination tasks, suggesting that common processes are involved in both. In Chapter 5, EEG was used to investigate when memory-guided tracking begins during motion extrapolation. This study compared conditions where participants covertly tracked visible and occluded targets. It was found that a specific event related potential (ERP) appeared around 200 ms post occlusion, irrespective of target location or velocity. This component could delineate the onset of memory-guided tracking during occlusion. Finally, Chapter 6 presents evidence that a change in alpha band activity is associated with information processing during motion extrapolation tasks. In light of these results, it is concluded that a common velocity memory system is involved in a variety of tasks. In the general discussion (Chapter 7), a new account of velocity memory is proposed. It is suggested that velocity memory reflects persistent synchronization across several velocity sensitive neural populations after stimulus offset. This distributed network is involved in sensory-motor integration, and can remain active without visual input. Theoretical work on eye movements, delayed discrimination and motion extrapolation could benefit from this account of velocity memory.
75

Investigations into rat hepatobiliary drug clearance pathways in early drug discovery

Rynn, Caroline January 2014 (has links)
Conventional ‘well-stirred’ extrapolation methodology using intrinsic metabolic clearance data from rat liver microsomes poorly predicts in vivo clearance for approximately half of drug discovery compounds. The aim of the present study was to gain a more detailed understanding of the hepatobiliary disposition pathways which influence drug clearance. A set of 77 new chemical entities (NCEs), demonstrating a range of physicochemical properties and in vitro-in vivo clearance correlations (IVIVC), was employed to explore relationships between hepatobiliary disposition pathways in rat and the physicochemical, structural and molecular properties of the NCEs. Primary rat hepatocytes with >80% cell viability were successfully isolated from male Han Wistar rats and used to establish in vitro models of drug uptake and biliary efflux. Preliminary studies with cultured primary rat hepatocytes indicated that uptake of d8-taurocholic acid and pitavastatin was time, concentration and temperature dependent. Initial studies with sandwich cultured primary rat hepatocytes demonstrated that cellular accumulation and biliary efflux of [3H]-taurocholic acid were time and concentration dependent. These in vitro rat hepatocyte models were then used to investigate drug uptake and biliary efflux for all NCEs. In general, NCEs with high (passive) permeability showed better IVIVC and a lower incidence of active uptake and biliary efflux compared to NCEs with lower permeability, suggesting permeability is a key property influencing hepatobiliary drug disposition in rat. Preliminary in silico models analysing structural and molecular descriptors of substrates of active transport in rat hepatocytes were developed and indicated modest potential to highlight clearance pathways beyond hepatic metabolism, but further follow-up work with larger, more diverse compound sets is warranted to gain confidence in these models. Extended clearance models were investigated to estimate the effect of hepatic transporters on clearance and to predict the overall hepatic clearance of the NCEs. None of these models resulted in a one-to-one correlation, but in general clearance predictions improved when drug transport processes were accounted for. In vivo excretion studies using bile duct cannulated rats demonstrated that NCEs with high permeability and good IVIVC were not directly eliminated in bile or urine as unchanged drug, whereas NCEs with lower permeability and poor IVIVC (>3-fold under-predicted) were all directly eliminated unchanged, indicating key drivers of clearance beyond metabolism. In conclusion, these investigations confirmed a role for hepatic transporters in clearance, but the complex nature of active transport mechanisms and a lack of robust in vitro tools create challenges for the quantitative prediction of hepatobiliary clearance. However, one of the key findings from this research, which is highly applicable in early drug discovery, was to identify the existence of disposition-permeability relationships. These can be anticipated by observing physicochemical parameters of NCEs in conjunction with conventional IVIVC, since NCEs that are not highly permeable, that possess some hydrophobic characteristics and that are poor substrates of cytochrome P450 enzymes are more likely to be good substrates of transporters and to be directly eliminated in bile and/or urine. The present study focused on exploring hepatobiliary disposition pathways using rat as the investigative species. Whilst there is no guarantee that pathways relevant to rat will be similar to those in other preclinical species or even humans, an early diagnosis of dominant clearance pathways can guide a more efficient use of the ADME-PK toolbox.
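For context, the conventional ‘well-stirred’ extrapolation referred to above scales microsomal intrinsic clearance to the whole liver and combines it with hepatic blood flow and blood binding. A minimal sketch, assuming commonly quoted rat physiological scalars rather than values from this study:

```python
# Minimal sketch of 'well-stirred' IVIVE from rat liver microsomal data. The physiological
# scalars and compound values are commonly quoted assumptions, not results from this study.

def hepatic_clearance_rat(clint_ul_min_mg, fu_blood,
                          mgp_per_g_liver=45.0,   # mg microsomal protein per g liver (assumed)
                          g_liver_per_kg=40.0,    # g liver per kg body weight in rat (assumed)
                          q_h_ml_min_kg=70.0):    # rat hepatic blood flow (assumed)
    """Well-stirred model: CL_H = Q_H * fu_b * CL_int / (Q_H + fu_b * CL_int)."""
    cl_int = clint_ul_min_mg * mgp_per_g_liver * g_liver_per_kg / 1000.0  # mL/min/kg
    return q_h_ml_min_kg * fu_blood * cl_int / (q_h_ml_min_kg + fu_blood * cl_int)

print(hepatic_clearance_rat(clint_ul_min_mg=30.0, fu_blood=0.2), "mL/min/kg")
```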
76

Synthesis of the 1D modelling of turbochargers and its effects on engine performance prediction

Dombrovsky, Artem 05 June 2017 (has links)
Low fuel consumption is one of the main requirements for current internal combustion engines in passenger car applications. One of the most widely used strategies to achieve this goal is engine downsizing (smaller engines that maintain power), which implies the use of turbochargers. The coupling between the two machines (the turbocharger and the internal combustion engine) presents many difficulties due to the different nature of turbomachines and reciprocating machines. These difficulties make the optimal design of turbocharged internal combustion engines a complicated issue. In this thesis a strong effort has been made to improve the global understanding of the different physical phenomena occurring in turbochargers and in turbocharged engines. The work has focused on the 1D modelling of these phenomena, since 1D tools currently play a major role in the engine design process. Both experimental and modelling efforts have been made to understand the heat transfer and gas flow processes in turbochargers. Prior to the experimental analysis, a literature review was carried out in which the state of the art of heat transfer and gas flow modelling in turbochargers was analysed. The experimental effort of the thesis has focused on measuring different turbochargers on the gas stand and the engine test bench. In the first case, the gas stand, a more controlled environment, has been used to perform tests under different conditions. Hot tests with an insulated and a non-insulated turbocharger have been carried out to characterise the external heat transfer. Moreover, adiabatic tests have been performed to compare the effect of heat transfer on different turbocharger variables and to validate the turbine gas flow models. On the engine test bench, full and partial load tests have been carried out for model validation purposes. For the model development task, the work has been divided into heat flow models and gas flow models. In the first case, a general heat transfer model for turbochargers has been proposed based on the measured turbochargers and data available from previous works in the literature. This model includes a procedure for estimating conductive conductances, internal and external convection correlations and a radiation estimation procedure. In the case of gas flow modelling, an extended model for VGT performance map extrapolation, covering both efficiency and mass flow, has been developed, as well as a model for predicting discharge coefficients in the valves of two-stage turbochargers. Finally, the models have been fully validated by coupling them with 1D modelling software simulating both the gas stand and the whole engine. On the one hand, the results of the validation show that compressor and turbine outlet temperature prediction is greatly improved using the developed models. These results prove that turbocharger heat transfer phenomena are important not only for partial load and transient simulations but also at full load. On the other hand, the accuracy of the VGT extrapolation model is high even at off-design conditions.
/ Dombrovsky, A. (2017). Synthesis of the 1D modelling of turbochargers and its effects on engine performance prediction [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/82307
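The heat transfer model described above is built from conductive conductances between turbocharger metal nodes plus internal and external convection (and radiation). The sketch below shows only the skeleton of such a lumped conductance network at steady state; the node layout, conductance values and fluid temperatures are illustrative assumptions, not the correlations developed in the thesis, and radiation is omitted.

```python
import numpy as np

# Skeleton of a lumped conductance network for turbocharger heat transfer at steady state.
# Node layout, conductances (W/K) and fluid temperatures (K) are illustrative assumptions,
# not the correlations developed in the thesis; radiation is omitted.

# Metal nodes: 0 = turbine housing, 1 = central housing, 2 = compressor housing
K_cond = {(0, 1): 3.0, (1, 2): 2.0}        # conductive conductances between metal nodes
h_conv = np.array([8.0, 5.0, 4.0])         # convective conductances, node <-> local fluid
T_fluid = np.array([900.0, 400.0, 330.0])  # exhaust gas, lubricating oil, compressed air

n = len(h_conv)
A = np.zeros((n, n))
b = h_conv * T_fluid
for i in range(n):                          # convection to the local fluid at each node
    A[i, i] += h_conv[i]
for (i, j), g in K_cond.items():            # conduction between neighbouring metal nodes
    A[i, i] += g; A[j, j] += g
    A[i, j] -= g; A[j, i] -= g

T_metal = np.linalg.solve(A, b)             # steady-state metal temperatures
q_turbine_to_housing = K_cond[(0, 1)] * (T_metal[0] - T_metal[1])
print(T_metal, q_turbine_to_housing)
```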
77

Efficient Variable Mesh Techniques to solve Interior Layer Problems

Mbayi, Charles K. January 2020 (has links)
Philosophiae Doctor - PhD / Singularly perturbed problems have been studied extensively over the past few years from different perspectives. Recent research has focussed on problems whose solutions possess interior layers. These interior layers appear in the interior of the domain, the location of which is difficult to determine a priori, which makes it difficult to investigate these problems analytically. This explains the need for approximation methods to gain some insight into the behaviour of the solutions of such problems. Keeping this in mind, in this thesis we explore a special class of numerical methods, namely fitted finite difference methods, to determine reliable solutions. Fitted finite difference methods are grouped into two categories: fitted mesh finite difference methods (FMFDMs) and fitted operator finite difference methods (FOFDMs). The aim of this thesis is to focus on the former. To this end, we note that FMFDMs have been used extensively for singularly perturbed two-point boundary value problems (TPBVPs) whose solutions possess boundary layers; however, they have not been fully explored for problems whose solutions have interior layers. Hence, in this thesis, we intend firstly to design robust FMFDMs for singularly perturbed TPBVPs whose solutions possess interior layers, and to improve the accuracy of these approximation methods via techniques like Richardson extrapolation. We then extend these two ideas to solve such singularly perturbed TPBVPs with variable diffusion coefficients. The overall approach is further extended to parabolic singularly perturbed problems having constant as well as variable diffusion coefficients. / 2023-08-31
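Richardson extrapolation, mentioned above as the accuracy-improvement step, combines solutions computed on a mesh and its refinement to cancel the leading error term. The sketch below illustrates the idea on a simple, unperturbed two-point boundary value problem with a second-order central difference scheme; the thesis applies the same principle on fitted (Shishkin-type) meshes, where the base order and extrapolation weights differ.

```python
import numpy as np

# Richardson extrapolation illustrated on a simple, unperturbed two-point BVP:
# -u'' = pi^2 sin(pi x), u(0) = u(1) = 0, exact solution u(x) = sin(pi x),
# discretised with second-order central differences (so p = 2 below).

def solve_bvp(N):
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    A = (np.diag(2.0 * np.ones(N - 1))
         - np.diag(np.ones(N - 2), 1)
         - np.diag(np.ones(N - 2), -1)) / h**2
    f = np.pi**2 * np.sin(np.pi * x[1:-1])
    u = np.zeros(N + 1)                      # homogeneous boundary values
    u[1:-1] = np.linalg.solve(A, f)
    return x, u

N, p = 16, 2
x, u_N = solve_bvp(N)
_, u_2N = solve_bvp(2 * N)

u_ext = (2**p * u_2N[::2] - u_N) / (2**p - 1)   # extrapolated values at the coarse nodes

exact = np.sin(np.pi * x)
print("max error, coarse mesh  :", np.max(np.abs(u_N - exact)))
print("max error, extrapolated :", np.max(np.abs(u_ext - exact)))
```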
78

Video Prediction with Invertible Linear Embeddings

Pottorff, Robert Thomas 01 June 2019 (has links)
Using recently popularized invertible neural networks, we predict future video frames from complex dynamic scenes. Our invertible linear embedding (ILE) demonstrates successful learning, prediction and latent state inference. In contrast to other approaches, the ILE does not use any explicit reconstruction loss or simplistic pixel-space assumptions. Instead, it leverages invertibility to optimize the likelihood of image sequences exactly, albeit indirectly. Experiments and comparisons against state-of-the-art methods over synthetic and natural image sequences demonstrate the robustness of our approach, and a discussion of future work explores the opportunities our method might provide to other fields in which the accurate analysis and forecasting of non-linear dynamic systems is essential.
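The key property exploited here is that an invertible mapping allows the likelihood of the data to be evaluated exactly via the change-of-variables formula, with no reconstruction loss. The toy sketch below shows only that formula for an invertible linear map with a Gaussian prior; the matrix and input are arbitrary illustrative values, not the ILE architecture itself.

```python
import numpy as np

# Toy illustration of the exact likelihood that invertibility permits: for an invertible
# linear map z = W x with a standard normal prior on z, the change-of-variables formula gives
# log p(x) = log N(Wx; 0, I) + log|det W|. W and x are arbitrary illustrative values.

rng = np.random.default_rng(0)
d = 4
W = rng.normal(size=(d, d)) + 3.0 * np.eye(d)   # a well-conditioned invertible matrix
x = rng.normal(size=d)                          # stand-in for a (flattened) video frame

z = W @ x
_, logabsdet = np.linalg.slogdet(W)
log_pz = -0.5 * (z @ z) - 0.5 * d * np.log(2.0 * np.pi)
log_px = log_pz + logabsdet                     # exact log-likelihood, no reconstruction loss
print(log_px)
```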
79

Signály s omezeným spektrem, jejich vlastnosti a možnosti jejich extrapolace / Bandlimited signals, their properties and extrapolation capabilities

Mihálik, Ondrej January 2019 (has links)
The work is concerned with band-limited signal extrapolation using a truncated series of prolate spheroidal wave functions. Our aim is to investigate the extent to which it is possible to extrapolate a signal from samples taken in a finite interval. It is often believed that this extrapolation method depends on computing definite integrals. We show an alternative approach using the least squares method and compare it with methods based on numerical integration. We also consider their performance in the presence of noise and the possibility of using these algorithms for real-time data processing. Finally, all proposed algorithms are tested using real data from a microphone array, so that their performance can be compared.
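As a rough discrete analogue of the least-squares approach described above, one can fit a truncated set of Slepian (discrete prolate spheroidal) sequences to the samples observed on a sub-interval and evaluate the expansion outside it. The signal, time-bandwidth product and number of retained terms below are illustrative assumptions; the thesis itself works with the continuous prolate spheroidal wave functions.

```python
import numpy as np
from scipy.signal.windows import dpss

# Discrete analogue (an assumption of this sketch): fit a truncated set of Slepian sequences
# to samples observed on a sub-interval by least squares, then evaluate the expansion
# outside it.

N, NW, K = 256, 4.0, 7                       # full length, time-bandwidth product, terms kept
basis = dpss(N, NW, Kmax=K)                  # shape (K, N): band-limited orthonormal sequences

n = np.arange(N)
signal = np.sin(2 * np.pi * 0.005 * n) + 0.5 * np.cos(2 * np.pi * 0.012 * n)  # in-band test signal

observed = slice(64, 192)                    # samples are available only on this sub-interval
A = basis[:, observed].T                     # design matrix restricted to the observed samples
coeffs, *_ = np.linalg.lstsq(A, signal[observed], rcond=None)

extrapolated = basis.T @ coeffs              # expansion evaluated over the full index range
outside = np.r_[0:64, 192:256]
print("RMS error outside the observed window:",
      np.sqrt(np.mean((extrapolated - signal)[outside] ** 2)))
```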
80

A Bayesian approach to energy monitoring optimization

Carstens, Herman January 2017 (has links)
This thesis develops methods for reducing energy Measurement and Verification (M&V) costs through the use of Bayesian statistics. M&V quantifies the savings of energy efficiency and demand side projects by comparing the energy use in a given period to what that use would have been, had no interventions taken place. The case of a large-scale lighting retrofit study, where incandescent lamps are replaced by Compact Fluorescent Lamps (CFLs), is considered. These projects often need to be monitored over a number of years with a predetermined level of statistical rigour, making M&V very expensive. M&V lighting retrofit projects have two interrelated uncertainty components that need to be addressed, and which form the basis of this thesis. The first is the uncertainty in the annual energy use of the average lamp, and the second the persistence of the savings over multiple years, determined by the number of lamps that are still functioning in a given year. For longitudinal projects, the results from these two aspects need to be obtained for multiple years. This thesis addresses these problems by using the Bayesian statistical paradigm. Bayesian statistics is still relatively unknown in M&V, and presents an opportunity for increasing the efficiency of statistical analyses, especially for such projects. After a thorough literature review, especially of measurement uncertainty in M&V, and an introduction to Bayesian statistics for M&V, three methods are developed. These methods address the three types of uncertainty in M&V: measurement, sampling, and modelling. The first method is a low-cost energy meter calibration technique. The second method is a Dynamic Linear Model (DLM) with Bayesian Forecasting for determining the size of the metering sample that needs to be taken in a given year. The third method is a Dynamic Generalised Linear Model (DGLM) for determining the size of the population survival survey sample. It is often required by law that M&V energy meters be calibrated periodically by accredited laboratories. This can be expensive and inconvenient, especially if the facility needs to be shut down for meter installation or removal. Some jurisdictions also require meters to be calibrated in situ, that is, in their operating environments. However, it is shown that metering uncertainty has a relatively small impact on overall M&V uncertainty in the presence of sampling, and therefore the costs of such laboratory calibration may outweigh the benefits. The proposed technique uses another commercial-grade meter (which also measures with error) to achieve this calibration in situ. This is done by accounting for the mismeasurement effect through a mathematical technique called Simulation Extrapolation (SIMEX). The SIMEX result is refined using Bayesian statistics, and achieves acceptably low error rates and accurate parameter estimates. The second technique uses a DLM with Bayesian forecasting to quantify the uncertainty in metering only a sample of the total population of lighting circuits. A Genetic Algorithm (GA) is then applied to determine an efficient sampling plan. Bayesian statistics is especially useful in this case because it allows the results from previous years to inform the planning of future samples. It also allows for exact uncertainty quantification, whereas current confidence interval techniques do not always do so. Results show a cost reduction of up to 66%, but this depends on the costing scheme used.
The study then explores the robustness of the efficient sampling plans to forecast error, and finds a 50% chance of undersampling for such plans, due to the standard M&V sampling formula which lacks statistical power. The third technique uses a DGLM in the same way as the DLM, except for population survival survey samples and persistence studies, not metering samples. Convolving the binomial survey result distributions inside a GA is problematic, and instead of Monte Carlo simulation, a relatively new technique called Mellin Transform Moment Calculation is applied to the problem. The technique is then expanded to model stratified sampling designs for heterogeneous populations. Results show a cost reduction of 17-40%, although this depends on the costing scheme used. Finally the DLM and DGLM are combined into an efficient overall M&V plan where metering and survey costs are traded off over multiple years, while still adhering to statistical precision constraints. This is done for simple random sampling and stratified designs. Monitoring costs are reduced by 26-40% for the costing scheme assumed. The results demonstrate the power and flexibility of Bayesian statistics for M&V applications, both in terms of exact uncertainty quantification, and by increasing the efficiency of the study and reducing monitoring costs.
/ Thesis (PhD)--University of Pretoria, 2017. / National Research Foundation / Department of Science and Technology / National Hub for the Postgraduate Programme in Energy Efficiency and Demand Side Management / Electrical, Electronic and Computer Engineering / PhD / Unrestricted
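For reference, the SIMEX step used in the meter-calibration technique above re-estimates the quantity of interest after adding progressively more simulated measurement error and then extrapolates the trend back to zero error (lambda = -1). The sketch below shows the basic algorithm on a simulated straight-line regression with a mismeasured regressor; the data, error variance and quadratic extrapolant are illustrative choices, and the Bayesian refinement described in the thesis is not included.

```python
import numpy as np

# Minimal sketch of SIMEX: add extra measurement noise of variance lambda * sigma_u^2,
# recompute the naive estimate, fit a quadratic in lambda, extrapolate to lambda = -1.

rng = np.random.default_rng(1)
n, sigma_u = 500, 0.5
x_true = rng.normal(0.0, 1.0, n)
y = 2.0 * x_true + rng.normal(0.0, 0.2, n)        # true slope = 2
x_obs = x_true + rng.normal(0.0, sigma_u, n)      # mismeasured regressor attenuates the slope

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
B = 200                                           # simulations averaged per lambda
est = [np.mean([slope(x_obs + np.sqrt(lam) * sigma_u * rng.normal(size=n), y)
                for _ in range(B)]) for lam in lambdas]

quad = np.polyfit(lambdas, est, 2)                # quadratic extrapolant in lambda
print(f"naive slope = {est[0]:.3f}, SIMEX slope = {np.polyval(quad, -1.0):.3f} (true 2.0)")
```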
