1 |
Continuous characterization of universal invertible amplifier using source noise. Ahmed, Chandrama, 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / With the passage of time and repeated use of a system, the component values that make up the system parameters change, causing errors in its functional output. To ensure the fidelity of the results derived from such systems, it is therefore important to keep track of the system parameters during use. This thesis introduces a method for tracking a system's current parameters while the system is in use, based on the inherent noise of its signal source. A Kalman filter algorithm tracks the system's response to this inherent noise, and that response is used to estimate the system parameters. In this thesis, the continuous characterization scheme is applied to a Universal Invertible Amplifier (UIA).
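The parameter-tracking idea can be sketched as follows. This is a hedged illustration, not the thesis' actual implementation: the UIA is stood in for by a short FIR filter whose taps drift slowly with "aging", and a Kalman filter with the taps as its state vector estimates them from the noise-driven output. All numeric values are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" system: a 4-tap FIR filter standing in for the UIA,
# whose coefficients drift slowly over time (component aging).
n_taps = 4
true_h = np.array([0.5, 0.3, -0.2, 0.1])

# Source noise: white excitation playing the role of the inherent source noise.
N = 5000
u = rng.standard_normal(N)

# Kalman filter with the filter taps as the state vector.
# State model: taps follow a slow random walk (process noise Q).
x = np.zeros(n_taps)          # coefficient estimate
P = np.eye(n_taps)            # estimate covariance
Q = 1e-6 * np.eye(n_taps)     # process noise (parameter drift)
R = 1e-2                      # measurement noise variance

est = np.zeros((N, n_taps))
for k in range(n_taps, N):
    # slow drift of the true system
    true_h = true_h + 1e-5 * rng.standard_normal(n_taps)
    H = u[k - n_taps + 1:k + 1][::-1]          # regression vector (newest first)
    y = H @ true_h + np.sqrt(R) * rng.standard_normal()

    # predict
    P = P + Q
    # update
    S = H @ P @ H + R
    K = P @ H / S
    x = x + K * (y - H @ x)
    P = P - np.outer(K, H @ P)
    est[k] = x

print("final estimate:", np.round(est[-1], 2))
```

After a few thousand samples the estimate follows the drifting taps closely, which is the sense in which the system can be characterized continuously while it is being used.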
Current biomedical research as well as diagnostic medicine depend heavily on the shape profiles of bioelectric signals from different sources, for example the heart, muscles, nerves and brain, making it very important to capture the different events in these signals without the distortion usually introduced by the filtering of the amplifier system. The Universal Invertible Amplifier recovers the original signal at the electrodes by inverting the filtered and compressed signal, while its gain-bandwidth profile allows it to capture the entire bandwidth of bioelectric signals.
For this inversion to succeed, the captured compressed and filtered signals need to be inverted with the actual system parameters the system had while capturing the signals, not its nominal parameters. The continuous characterization scheme introduced in this thesis aims to determine the system parameters of the UIA by tracking the response of its source noise and estimating its transfer function from that response.
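Why the actual (drifted) parameters matter for inversion can be seen with a toy one-pole filter standing in for the UIA's filtering/compression stage (an assumption for illustration, not the UIA's real transfer function): inverting with the drifted parameter reconstructs the input exactly, while inverting with the nominal design value does not.

```python
import numpy as np

rng = np.random.default_rng(1)

def lowpass(x, a):
    """One-pole low-pass y[n] = a*y[n-1] + (1-a)*x[n], a stand-in for the
    amplifier's filtering stage (illustrative, not the UIA's actual filter)."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = a * y[n - 1] + (1 - a) * x[n] if n else (1 - a) * x[n]
    return y

def invert(y, a):
    """Exact algebraic inverse of the one-pole filter above."""
    x = np.empty_like(y)
    x[0] = y[0] / (1 - a)
    x[1:] = (y[1:] - a * y[:-1]) / (1 - a)
    return x

signal = rng.standard_normal(1000)

a_nominal = 0.90     # design-time parameter
a_actual = 0.93      # drifted parameter at capture time

captured = lowpass(signal, a_actual)

err_actual = np.max(np.abs(invert(captured, a_actual) - signal))
err_nominal = np.max(np.abs(invert(captured, a_nominal) - signal))
print(err_actual, err_nominal)
```

Inversion with the drifted value recovers the signal to floating-point precision, while the nominal value leaves a large residual, which is the motivation for tracking the parameters continuously.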
Two types of source noise were tried in this method: an externally added noise that was digitally generated, and a noise that inherently contaminates the signals the system is trying to capture. In our case, the UIA was used to capture nerve activity from the vagus nerve, where the signal was contaminated with electrocardiogram signals, providing a well-defined inherent noise whose response could be tracked with the Kalman filter and used to estimate the transfer function of the UIA.
Transfer function estimation using the externally added noise did not produce good results, though it could be improved by means that can be explored as future directions of this project. However, continuous characterization using the inherent noise, a bioelectric signal, was successful, producing transfer function estimates with minimal error. This thesis thus introduces a novel approach to system characterization using bio-signal contamination.
|
2 |
Simulation and Experimental Methods for Characterization of Nonlinear Mechanical Systems. Magnevall, Martin, January 2011 (has links)
Trial and error and the use of highly time-consuming methods are often necessary for the investigation and characterization of nonlinear systems. However, for the rather common case where a nonlinear system has linear relations between many of its degrees of freedom, there are opportunities for more efficient approaches. The aim of this thesis is to develop and validate new, efficient simulation and experimental methods for the characterization of mechanical systems with localized nonlinearities. The purpose is to contribute to the development of analysis tools for such systems that are useful in early phases of the product innovation process for predicting product properties and functionality. Fundamental research is combined with industrial case studies related to metal cutting. Theoretical modeling, computer simulations and experimental testing are utilized in a coordinated approach to iteratively evaluate and improve the methods. The nonlinearities are modeled as external forces acting on the underlying linear system. In this way, much of the linear theory behind forced-response simulations can be utilized. The linear parts of the system are described using digital filters and modal superposition, and the response of the system is solved for recursively together with the artificial external forces. The result is an efficient simulation method which, in conjunction with experimental tests, is used to validate the proposed characterization methods. A major part of the thesis addresses a frequency-domain characterization method based on broad-band excitation. This method uses the measured responses to create artificial nonlinear inputs to the parameter estimation model. Conventional multiple-input/multiple-output techniques are then used to separate the linear system from the nonlinear parameters.
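The idea of treating a localized nonlinearity as an external "pseudo-force" on the underlying linear system can be sketched in a few lines. This is a simplified illustration, not the thesis' digital-filter/modal-superposition solver: a single-DOF oscillator with a cubic-stiffness nonlinearity is stepped forward in time, with the nonlinear term moved to the force side of the equation at each step (all parameter values are assumed, chosen only for the sketch).

```python
import numpy as np

# Single-DOF linear oscillator m*x'' + c*x' + k*x = f(t) - fnl(x), where the
# localized cubic-stiffness nonlinearity fnl = k3*x**3 is treated as an extra
# external force acting on the underlying linear system.
m, c, k, k3 = 1.0, 0.4, 100.0, 5.0e4   # illustrative values
dt, N = 1e-3, 20000
t = np.arange(N) * dt
f = 10.0 * np.cos(2 * np.pi * 1.4 * t)  # harmonic drive near resonance

x = v = 0.0
xs = np.zeros(N)
for n in range(N):
    fnl = k3 * x**3                       # nonlinear "pseudo-force"
    a = (f[n] - fnl - c * v - k * x) / m  # linear update with extra force term
    v += a * dt                           # semi-implicit Euler step
    x += v * dt
    xs[n] = x
print("peak response:", xs.max())
```

Because the nonlinearity enters only as an additional input, the linear part of the update never changes, which is what makes the recursive solution efficient for systems where the nonlinearity is localized.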
A specific result is a generalization of this frequency-domain method, which allows the characterization of continuous systems with an arbitrary number of localized zero-memory nonlinearities in a structured way. The efficiency and robustness of this method are demonstrated by both simulations and experimental tests. A time-domain simulation and characterization method intended for systems with hysteresis damping is also developed, and its efficiency is demonstrated on the case of a dry-friction damper. Furthermore, a method for improved harmonic excitation of nonlinear systems using numerically optimized input signals is developed. Inverse filtering is utilized to remove unwanted dynamic effects in cutting-force measurements, which increases the usable frequency range of the force dynamometer and significantly improves the experimental results compared to traditional methods. The new methods form a basis for efficient analysis and increased understanding of mechanical systems with localized nonlinearities, which in turn provides possibilities for more efficient product development as well as for continued research on analysis methods for nonlinear mechanical structures.
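The separation of linear parameters from nonlinear ones via an "artificial nonlinear input" can be illustrated with a simplified time-domain analogue (an assumption for the sketch, not the thesis' actual frequency-domain MIMO estimator): the measured response supplies the nonlinear regressor x**3, and ordinary least squares then recovers the linear parameters and the nonlinear coefficient together.

```python
import numpy as np

rng = np.random.default_rng(2)

# Nonlinear SDOF system (illustrative parameter values).
m, c, k, k3 = 1.0, 0.4, 100.0, 5.0e4
dt, N = 1e-3, 50000

# Simulate broad-band (random) excitation of the nonlinear system and
# record the "measured" response signals.
f = rng.standard_normal(N) * 20.0
x = v = 0.0
X, V, A, F = [], [], [], []
for n in range(N):
    a = (f[n] - c * v - k * x - k3 * x**3) / m
    X.append(x); V.append(v); A.append(a); F.append(f[n])
    v += a * dt
    x += v * dt
X, V, A, F = map(np.array, (X, V, A, F))

# Regression model: f = m*a + c*v + k*x + k3*x**3, where x**3 is the
# artificial nonlinear input built from the measured response.
Phi = np.column_stack([A, V, X, X**3])
theta, *_ = np.linalg.lstsq(Phi, F, rcond=None)
print("estimated [m, c, k, k3]:", theta)
```

With noiseless simulated data the estimate matches the true parameters almost exactly; the thesis' broad-band frequency-domain formulation applies the same separation idea with conventional MIMO estimation techniques.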
|
3 |
Uma metodologia para caracterização de aplicações em ambientes de computação nas nuvens. / A methodology of application characterization in cloud computing environment. Ogura, Denis Ryoji, 04 October 2011 (has links)
Cloud computing is a recent term coined to express a technological trend that virtualizes the data center. The concept seeks better use of computational resources and corporate applications, virtualized through operating-system, platform, infrastructure, and software virtualization programs, among others. This virtualization takes place through virtual machines (VMs) that execute applications in the virtualized environment. However, a VM may be configured in such a way that its performance suffers processing delays due to bottlenecks in some allocated hardware. In order to maximize hardware allocation when creating a VM, a methodology of application characterization was developed to collect performance data and find the best VM configuration. Based on the workload, the classification of the application type can be identified and the most suitable environment presented: a recommended configuration and a not-recommended one. In this way, the tendency to obtain satisfactory performance in virtualized environments can be discovered through program characterization, which makes it possible to evaluate the behavior of each scenario and identify conditions important for proper operation. To support this argument, single- and multi-processor programs were executed under several virtual machine monitors. The results were satisfactory and agree with each previously known application characteristic. Exceptions to the method can nevertheless occur, mainly when the virtual machine monitor itself is subjected to intense processing: the application may then be delayed by a processing bottleneck in the monitor, which changes the ideal environment for that application. This study therefore presents a method for identifying the ideal configuration for executing an application.
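The workload-classification step can be sketched as a mapping from collected utilization metrics to a coarse application class that then drives the VM recommendation. This is a hypothetical illustration: the metric names, thresholds, and recommendations are assumptions, not the thesis' actual values.

```python
# Hypothetical sketch of the characterization idea: per-VM performance
# metrics are reduced to a coarse workload label, and the label selects
# a recommended VM configuration. All thresholds are illustrative.
def classify_workload(cpu_util, io_wait, mem_util):
    """Return a coarse workload class from average utilization ratios (0-1)."""
    if cpu_util > 0.7 and io_wait < 0.2:
        return "cpu-bound"
    if io_wait > 0.3:
        return "io-bound"
    if mem_util > 0.8:
        return "memory-bound"
    return "balanced"

def recommend_vm(workload):
    """Map a workload class to a recommended VM sizing bias (assumed)."""
    return {
        "cpu-bound": "more vCPUs, standard disk",
        "io-bound": "fewer vCPUs, faster storage",
        "memory-bound": "larger RAM allocation",
        "balanced": "default template",
    }[workload]

print(recommend_vm(classify_workload(0.9, 0.05, 0.4)))
```

The exception noted in the abstract fits this picture: when the virtual machine monitor itself is saturated, the measured metrics no longer reflect the application alone, so the label (and hence the recommendation) can be wrong.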
|
4 |
Características de um sistema para melhoria do atendimento à demanda de soluções SESI. / Characteristics of a system for improving the fulfilment of demand for SESI solutions. Taho, Thiago Yhudi, 11 December 2017 (has links)
In an environment of constant and rapid change and increasing competition, organizations need to identify ways to face this competition. SESI carried out 67 expert panels that identified 1275 products; comparing these to the SESI portfolio, it was found that SESI did not offer 36.4% (464) of this total and did offer 21.5% (274), prompting reflection on the experts' lack of awareness of the SESI portfolio in Occupational Safety and Health and Health Promotion. The institution can therefore benefit from identifying means of using digital technologies and the information already existing within its institutions, through the identification of elements for understanding the dynamics of disseminating and developing services to meet the demands of Brazilian industries. This work proposes to characterize a system for a new form of relationship between SESI and its customers in service offering, using the PDCA cycle methodology combined with the Design Thinking approach and requirements engineering concepts, with the purpose of improving the fulfilment of service demand.
|
6 |
Point-of-Care High-throughput Optofluidic Microscope for Quantitative Imaging Cytometry. Jagannadh, Veerendra Kalyan, January 2017 (has links) (PDF)
Biological research and clinical diagnostics rely heavily on optical microscopy for analyzing the properties of cells. The experimental protocol for conducting a microscopy-based diagnostic test consists of several manual steps, such as sample extraction, slide preparation and inspection. Recent advances in optical microscopy have predominantly focused on resolution enhancement, whereas automating the manual steps and enhancing imaging throughput have been relatively less explored. Cost-effective automation of clinical microscopy would potentially enable the creation of diagnostic devices with a wide range of medical and biological applications. Further, automation plays an important role in enabling diagnostic testing in resource-limited settings.
This thesis presents a novel optofluidics-based approach to the automation of clinical diagnostic microscopy. A system-level integrated optofluidic architecture, which enables automation of the overall diagnostic workflow, is proposed. Based on this architecture, three different prototypes that enable point-of-care (POC) imaging cytometry have been developed and characterized, and the applicability of the platform to diagnostic testing has been validated. The prototypes were used to demonstrate applications such as cell viability assays, red blood cell counting, and diagnosis of malaria and spherocytosis.
An important performance metric of the device is its throughput (number of cells imaged per second). A novel microfluidic channel design, capable of enabling imaging throughputs of about 2000 cells per second, has been incorporated into the instrument. Further, the material properties of the sample-handling component (the microfluidic device) determine several functional aspects of the instrument. Ultrafast-laser-inscription (ULI) based glass microfluidic devices have been identified and tested as viable alternatives to polydimethylsiloxane (PDMS) based microfluidic chips. Cellular imaging with POC platforms has thus far been limited to the acquisition of 2D morphology. To potentially enable 3D cellular imaging with POC platforms, a novel slanted-channel microfluidic chip design has been proposed and experimentally validated by performing 3D imaging of fluorescent microspheres and cells. It is envisaged that the proposed innovations would aid current efforts towards implementing good-quality health care in rural scenarios. The thesis is organized in the following manner:
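The throughput figure admits a simple back-of-the-envelope check. Only the ~2000 cells/s number comes from the abstract above; the frame rate, channel count, and cell density below are illustrative assumptions showing how a multi-channel layout can reach that figure even with a low frame-rate sensor.

```python
# Imaging throughput scales with sensor frame rate, the number of parallel
# channels in the field of view, and the cell density per channel per frame.
def throughput(frame_rate_hz, channels, cells_per_channel_per_frame):
    """Cells imaged per second under the simple multiplicative model."""
    return frame_rate_hz * channels * cells_per_channel_per_frame

# e.g. a 25 fps module imaging 8 radial channels with ~10 cells per channel
# per frame (illustrative values, not the thesis' measured parameters):
print(throughput(25, 8, 10))  # 2000 cells per second
```

This is the rationale behind the radial multi-channel arrangement described in Chapter 4: multiplying the channel count raises throughput without requiring a faster (and more expensive) sensor.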
The thesis can be divided into two parts. The first part (Chapters 2 and 3) deals with the optical aspects of the proposed optofluidic instrument: development, characterization, and validations demonstrating its use in POC diagnostic applications. The second part (Chapters 4, 5 and 6) details the microfluidic sample-handling aspects implemented with custom-fabricated microfluidic devices, the integration of the prototype, and the functional framework of the device.
Chapter 2 introduces the proposed optofluidic architecture for implementing the POC tool. It then details the first implementation of the platform, based on the philosophy of adapting ubiquitously available electronic imaging devices to perform cellular diagnostic testing, along with the characterization of the developed prototypes.
Chapter 3 details the development of a stand-alone prototype based on the proposed architecture using inexpensive off-the-shelf, low frame-rate image sensors. The characterization of the prototype and its performance evaluation for malaria diagnostic testing are also presented. The chapter concludes with a comparative evaluation of the prototypes developed so far.
Chapter 4 presents a novel microfluidic channel design that enhances imaging throughput even while employing inexpensive low frame-rate imaging modules. The design takes advantage of a radial arrangement of microfluidic channels to enhance the achievable imaging throughput. The fabrication of the device and the characterization of achievable throughputs are presented. The stand-alone optofluidic imaging system was then integrated into a single functional unit with the proposed microfluidic channel design, a viscoelastic-effect-based microfluidic mixer, and a suction-based microfluidic pumping mechanism.
Chapter 5 addresses the material used to fabricate the sample-handling unit, whose robustness determines certain functional aspects of the device. An investigative study on the applicability of glass microfluidic devices, fabricated using ultrafast laser inscription, to microfluidics-based imaging flow cytometry is presented. As noted in the introduction, imaging in POC platforms has thus far been limited to the acquisition of 2D images. The design and implementation of a novel slanted-channel microfluidic chip, which can potentially enable 3D imaging with simple optical imaging systems (such as the one reported in earlier chapters of this thesis), is detailed, together with an example application of the proposed chip architecture to 3D fluorescence imaging of cells in flow.
Chapter 6 introduces a diagnostic assessment framework for the use of the developed optofluidic microscope in an actual clinical diagnostic scenario. The chapter presents the use of computational signatures (extracted from cell images) for cell recognition as part of the proposed framework, along with the experimental results obtained while employing the framework to identify cells from three different leukemia cell lines.
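The "computational signature" idea can be illustrated with a toy stand-in (an assumption for the sketch, not the thesis' actual feature set): each cell image is reduced to a small feature vector and recognized by nearest-centroid matching against known cell lines. The synthetic images and the "line-A"/"line-B" names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def signature(img):
    """Toy signature: mean intensity, intensity spread, and a size proxy."""
    mask = img > img.mean()
    return np.array([img.mean(), img.std(), mask.sum() / img.size])

def nearest_centroid(sig, centroids):
    """Assign a cell to the class whose signature centroid is closest."""
    names = list(centroids)
    d = [np.linalg.norm(sig - centroids[n]) for n in names]
    return names[int(np.argmin(d))]

def fake_cell(brightness, size, shape=(32, 32)):
    """Synthetic cell image: a Gaussian blob plus a little sensor noise."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    r2 = (yy - 16) ** 2 + (xx - 16) ** 2
    return brightness * np.exp(-r2 / (2 * size**2)) + 0.02 * rng.random(shape)

# Two synthetic "cell lines": bright/large vs dim/small blobs.
centroids = {
    "line-A": signature(fake_cell(1.0, 8.0)),
    "line-B": signature(fake_cell(0.4, 3.0)),
}
query = fake_cell(1.0, 8.0)
print(nearest_centroid(signature(query), centroids))
```

In practice the signatures would be richer (morphological and textural features), but the recognition step, matching an unseen cell's signature against per-class references, follows the same pattern.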
Chapter 7 summarizes the contributions reported in this thesis and details the potential future scope of the work.
|
7 |
Conception d’antennes et méthode de caractérisation des systèmes RFID UHF en champ proche et en champ lointain / Antenna design and characterization method of near-field and far-field UHF RFID systems. Souza, Aline Coelho de, 07 October 2015 (links)
Radiofrequency identification (RFID) technology has grown enormously in recent years, thanks to its versatile configuration and the countless possibilities for integrating it into different applications (tracking and inventory of goods, access control, supply chains, etc.), in particular in the new application context of connected objects. In recent years, near-field UHF RFID applications have been developed to overcome the read-range degradation that occurs when tags are placed in strongly perturbing environments. The research presented in this thesis studies UHF RFID technology in the near-field and far-field zones, focusing on the design of reader and tag antennas and on characterization methods for RFID systems in both zones. A study of the characteristics of the fields radiated by an antenna is carried out to highlight the most relevant criteria for designing RFID reader antennas that perform well in the near-field zone. Starting from the state of the art on tag antennas and classical design methodologies, a new design approach is developed that improves tag antenna design by taking into account the power level expected in a given application. Finally, with the objective of characterizing UHF RFID tags, an approach is proposed that enables the identification of tag families, and an innovative procedure for measuring power transfer efficiency is proposed and validated experimentally.
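The link-budget arithmetic underlying far-field UHF RFID characterization can be sketched with the free-space Friis model. This is a hedged illustration: the model and all numeric values (transmit power, gains, chip sensitivity) are textbook-style assumptions, not measurements or methods from this work.

```python
import math

def tag_power_dbm(p_tx_dbm, g_tx_dbi, g_tag_dbi, freq_hz, dist_m):
    """Power available at the tag antenna under the free-space Friis model."""
    lam = 3e8 / freq_hz
    path_loss_db = 20 * math.log10(4 * math.pi * dist_m / lam)
    return p_tx_dbm + g_tx_dbi + g_tag_dbi - path_loss_db

def max_read_range_m(p_tx_dbm, g_tx_dbi, g_tag_dbi, freq_hz, p_min_dbm):
    """Distance at which received power falls to the chip sensitivity."""
    lam = 3e8 / freq_hz
    margin_db = p_tx_dbm + g_tx_dbi + g_tag_dbi - p_min_dbm
    return lam / (4 * math.pi) * 10 ** (margin_db / 20)

# Illustrative European-band example: 30 dBm into a 5.15 dBi reader antenna,
# a 2 dBi tag antenna, and a -18 dBm chip sensitivity at 868 MHz.
r = max_read_range_m(30.0, 5.15, 2.0, 868e6, -18.0)
print(round(r, 1), "m")
```

The Friis model only holds in the far field; in the near-field zone it breaks down, which is one reason dedicated near-field characterization methods, such as those studied in this thesis, are needed.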
|