  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
331

Análises genéticas em sistemas microfabricados / Genetic analysis in microfabricated systems

Duarte, Gabriela Rodrigues Mendes 30 July 2010 (has links)
Efforts to develop micro total analysis systems (µTAS) have been intense in the scientific community. The goal of achieving DNA extraction, amplification, and separation in a single device, a true "lab on a chip," is driven by the substantial advantages of miniaturized analysis platforms. This thesis focuses on the development of methods for DNA analysis on microfluidic devices that can be combined into a µTAS. The first step was the development of a novel solid-phase extraction technique in which DNA is bound to and eluted from magnetic silica beads such that the extraction efficiency depends on the magnetic manipulation of the beads rather than on the flow of solution through a packed bed. The utility of this technique for isolating reasonably pure, PCR-amplifiable DNA from complex samples was demonstrated by purifying DNA from whole human blood and subsequently amplifying a fragment of the β-globin gene. The technique is rapid, simple, and efficient, recovering more than 60% of the DNA from 600 nL of blood at a concentration suitable for amplification by the polymerase chain reaction (PCR). After dynamic solid-phase extraction (dSPE) was developed on glass microchips, the method was adapted to polyester-toner (PT) microchips, on which DNA amplification and separation were also performed. The conventional fabrication process for PT devices, laser-printing toner onto polyester films, produces channels on the order of 12 µm deep; this work describes a novel multilayer process that yields substantially deeper channels. A CO2 laser cutter defines the desired structure in toner-coated polyester films, which serve as intermediate layers sandwiched between uncoated polyester cover and base films containing precut access holes. Devices built from four polyester films (base, cover, and two middle layers) have channels ~270 µm deep and were used for dSPE. The adapted dSPE efficiently recovered DNA (~65%) from 600 nL of blood, and the purified DNA was of sufficient quality for downstream microchip-based PCR. The compatibility of PT devices with PCR reagents was demonstrated by successful amplification of a 520 bp fragment of λ-phage DNA. The ability to handle the diverse chemistries associated with DNA purification and amplification attests to the potential utility of PT microchips beyond separations and presents a promising platform for genetic analysis that is low cost and easy to fabricate. Two integrations were carried out on PT microchips: dSPE-PCR and PCR-electrophoresis. In the first, dSPE and PCR were performed in a single chamber, and amplification of the 520 bp λ-DNA fragment was demonstrated. In the second, a multidomain microchip was fabricated with different channel depths for the different domains: ~270 µm deep chambers for the PCR domain and 12 µm deep channels for the electrophoresis domain. Valveless integration of PCR and separation was demonstrated by amplification and detection of the 520 bp fragment on a single microchip. This work shows the enormous potential of PT microchips as fully integrated, disposable devices for genetic analysis.
332

Filtragem adaptativa de baixa complexidade computacional. / Low-complexity adaptive filtering.

Almeida Neto, Fernando Gonçalves de 20 February 2015 (has links)
In this text, low-cost adaptive filtering techniques are proposed for widely-linear processing and beamforming applications. New reduced-complexity versions of widely-linear adaptive filters are proposed for complex- and quaternion-valued processing. The low-cost techniques avoid redundant second-order statistics in the autocorrelation matrix, which is achieved by replacing the original widely-linear data vector with a real vector carrying the same information. With this approach, many complex-complex (or quaternion-quaternion) operations are replaced by less costly real-complex (or real-quaternion) computations. An analysis in the mean and in the variance is performed for quaternion-based techniques, applicable to any quaternion least-mean-squares (LMS) algorithm, and the fastest-converging widely-linear quaternion LMS algorithm with real-valued input is obtained. For complex-valued processing, a low-cost and stable version of the widely-linear recursive least-squares (RLS) algorithm is also developed. The widely-linear RLS technique is modified to apply the dichotomous coordinate descent (DCD) method, which leads to an algorithm whose computational complexity is linear in the data-vector length N (in contrast to the original widely-linear technique, whose complexity is quadratic in N). New complex-valued techniques based on the adaptive re-weighting homotopy algorithm are developed for beamforming. The algorithms are applied to sensor arrays in which the number of interfering sources is smaller than the number of sensors, so that the autocorrelation matrix is ill-conditioned. DCD iterations are applied to further reduce the computational complexity.
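The real-regressor idea above can be sketched numerically: a complex widely-linear system is identified with an LMS filter whose input is the real vector [Re(x); Im(x)] rather than the augmented complex pair [x; conj(x)]. This is a minimal illustration under assumed white Gaussian inputs, not the thesis's actual algorithms; all constants are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8       # filter length
mu = 0.05   # LMS step size

# Unknown widely-linear system: d = h^T x + g^T conj(x) + noise.
h = rng.standard_normal(N) + 1j * rng.standard_normal(N)
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Real-vector trick: the regressor [Re(x); Im(x)] carries the same
# second-order information as the augmented vector [x; conj(x)],
# so each tap update multiplies a complex scalar by a REAL number.
w = np.zeros(2 * N, dtype=complex)

for _ in range(5000):
    x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    d = h @ x + g @ np.conj(x) + 1e-3 * rng.standard_normal()
    u = np.concatenate([x.real, x.imag])  # real-valued regressor
    e = d - w @ u                         # complex a-priori error
    w = w + mu * e * u                    # real-times-complex update

# The widely-linear coefficients are recoverable from the real
# parametrization: w[:N] converges to h + g, and w[N:] to 1j*(h - g).
h_hat = (w[:N] - 1j * w[N:]) / 2
```

After convergence the real parametrization maps back to the widely-linear coefficients, while each tap update costs a real-times-complex multiply instead of a complex-times-complex one.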
333

Pulse oximetry in low-income settings : a case study of Kenyan hospitals

Enoch, Abigail J. January 2018 (has links)
Pulse oximeters are low-cost, easy to use, and effective at detecting hypoxemia (low blood oxygen levels), a common complication of bronchiolitis, asthma, and pneumonia, the leading infectious cause of death in children worldwide. However, pulse oximeters are often unavailable in low-income settings, and where available they are often underused, yet little research investigates why. In this thesis, I examine pulse oximeter implementation in low-income settings, focusing on Kenyan hospitals as a case study and using a mixed-methods approach. I conducted a systematic literature review examining how pulse oximeter use with children at admission to hospital affects health outcomes; I then conducted quantitative analyses of 28,000 children admitted to seven Kenyan hospitals to determine with which children pulse oximeters are used, and pulse oximetry's impact on treatment provision; these analyses informed the qualitative research component, for which I conducted interviews with 30 healthcare workers (HCWs) and staff in 14 Kenyan hospitals and employed theoretical frameworks to determine how HCWs decide whether to use pulse oximeters, and the barriers to pulse oximetry. I found that pulse oximeter use varies substantially between and within Kenyan hospitals over time. After adjusting for case-mix and signs of illness severity, HCWs were most likely to use pulse oximeters with children with a very high respiratory rate, indrawing, and/or who were not alert; children who obtained a pulse oximeter reading were more likely to be prescribed oxygen than children for whom a pulse oximeter was not used; and children with a reading below 90% were more likely to be prescribed oxygen than those with higher readings, suggesting that HCW decision-making is influenced by international and national guidelines.
However, HCWs sometimes cannot use pulse oximeters when they intend to, because of insufficient pulse oximeter availability, largely due to inefficient and confusing procurement processes and repair delays. Furthermore, HCWs sometimes use pulse oximeters incorrectly or misinterpret their results, because of insufficient training. Pulse oximeter promotion programme planners can use the recommendations I provide to effectively target barriers to pulse oximeter uptake in low-income settings. Increased pulse oximetry implementation could enable early detection of hypoxemia, improving accurate diagnosis, and supporting prompt, effective treatment, which could help reduce mortality in children needing oxygen, in line with Sustainable Development Goal 3.
334

Construção e avaliação de equipamento de baixo custo para análise experimental do comportamento de sujeitos não-humanos / Manufacturing and evaluation of low cost apparatus for non-human research in experimental analysis of behavior.

Nicolas Carsten Rossger 09 April 2015 (has links)
Apparatuses used in non-human research in the experimental analysis of behavior have had a trajectory intertwined with the study of behavior itself. One important type of apparatus in this research is the operant conditioning chamber. Several improvements have been made to these chambers, focused on tighter control of independent variables and more precise recording of dependent variables. A drawback of the apparatuses currently available on the market is their high acquisition cost and their reliance on closed-source software. In light of this, this work presents the construction and evaluation of low-cost, non-commercial apparatuses for operant research. Two operant conditioning chambers were built, one for pigeons and one for rats. To evaluate the equipment, two pigeons and two rats were exposed to fixed-ratio (FR) and variable-ratio (VR) schedules of reinforcement, while another two pigeons and two rats were exposed to fixed-interval (FI) and variable-interval (VI) schedules. Most features of responding under simple schedules of reinforcement reported in the literature were replicated with the apparatus, indicating its adequacy for operant conditioning research as a viable low-cost alternative.
336

Funcionalização de celulose para ensaios bioanalíticos em dispositivos microfluídicos baseados em papel (μPADs) / Cellulose modification for bioanalytical assays on paper-based microfluidic devices (µPADs)

Giorgio Gianini Morbioli 09 June 2015 (has links)
Functionalization of the cellulose matrix is essential to advancing paper-based microfluidic analytical devices (µPADs): it minimizes sample preparation and user interference, the main sources of error in the analytical process. Oxidizing cellulose with sodium m-periodate for one hour and chemically immobilizing enzymes through Schiff base (imine) formation, by adding the enzyme directly to the oxidized substrate without further steps, is a fast, low-cost immobilization process with great potential for application in µPADs. Glucose oxidase immobilized on cellulose with the stabilizer trehalose showed enhanced catalytic activity (from 31.9 ± 5.5 mmol L-1 for the non-immobilized enzyme to 14.8 ± 2.0 mmol L-1 for the immobilized enzyme with the stabilizing agent) and greater signal homogeneity, both desirable traits for paper-based rapid tests. Wax printing is a simple, inexpensive, and fast fabrication method for paper devices, and stacking layers into three-dimensional (3D) devices extends the functionality of two-dimensional ones, enabling, for example, individualized treatment of layers and on-device reagent storage. Building standard-addition calibration curves on the paper microchip itself is an alternative to external analytical curves that minimizes sample handling and preparation. Using 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) as the redox indicator for the enzymatic reactions, the standard-addition method on µPADs showed good agreement with a Michaelis-Menten growth-and-saturation model (r2 = 0.8723) over the range 0 to 10 mmol L-1, and using the linear range (0 to 3 mmol L-1) for glucose quantification gave a strong linear correlation with the concentrations estimated from the standard-addition curves (r2 = 0.959), demonstrating the potential of the method. Coupling these paper devices with automated image-analysis software (PAlizer) makes data acquisition instantaneous, eliminating human intervention and making paper tests more robust, reproducible, and rapid. With continued improvement of the functionality and capabilities of paper-based microfluidic devices, these low-cost diagnostic tests may reach those who need them, contributing to public health.
337

Identification et commande des robots manipulateurs à bas prix / Identification and control of low-cost robot manipulators

Shao, Zilong 24 March 2016 (has links)
Unlike industrial robot manipulators, which are large and expensive, many low-cost robot manipulators have already entered the market; small and lightweight, this type of robot is more accessible to the public. However, cost constraints also limit the components adopted (materials, actuators, controllers, etc.), which often leads to less robust control performance. This thesis focuses on controller design to improve the performance of such low-cost robot manipulators. First, for the rigid case, a dynamic model accounting for the actuator system is established, which takes the form of a differential equation with constant parameters and a disturbance term; a method to identify the model parameters using observers and an adaptive controller are proposed, and simulation and experimental results are given. Then, for the case of flexible joints, a single-link model is considered for simplicity. First, assuming the link velocity is measured, an identification method and a two-stage adaptive control law are proposed, provided that the static link position can also be measured; simulation results are given. Second, using link acceleration measurements, an identification method and the same two-stage adaptive control law are proposed; this idea is generalized to the identification and control of linear systems using high-order derivative measurements, and simulation results are presented. For implementation, inertial sensors (gyroscopes and accelerometers) are used, and experimental results are presented.
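As a hedged illustration of the identification step described above (not the thesis's observer-based method), a single joint can be approximated by a discrete-time first-order model and its parameters recovered by least squares from input/output data; all values below are invented.

```python
import numpy as np

# Least-squares identification of a discrete-time single-link model
#     y[k+1] = a*y[k] + b*u[k]
# from input/output data -- a toy stand-in for the parameter
# identification discussed above (constants are illustrative).
rng = np.random.default_rng(1)
a_true, b_true = 0.9, 0.5

u = rng.standard_normal(200)       # excitation input
y = np.zeros(201)
for k in range(200):
    y[k + 1] = a_true * y[k] + b_true * u[k]

# Stack the regressors [y[k], u[k]] row by row and solve for (a, b).
Phi = np.column_stack([y[:-1], u])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
```

With noise-free data the batch least-squares solution is exact; a recursive or adaptive variant would update the estimate sample by sample, which is closer in spirit to the adaptive control laws the abstract describes.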
338

Modeling, Design, Fabrication, and Characterization of a Highly Sensitive Fluorescence-based Detection Platform for Point-of-Care Applications

January 2018 (has links)
abstract: Over the past several decades, there has been growing interest in the use of fluorescent probes in low-cost diagnostic devices for resource-limited environments. This dissertation details the design, development, and deployment of an inexpensive, multiplexed, quantitative, fluorescence-based lateral flow immunoassay platform, in light of the specific constraints of resource-limited settings. The effort grew out of the need for a highly sensitive, field-deployable platform to serve as a primary screening and early-detection tool for serologic biomarkers of high-risk human papillomavirus (hrHPV) infection, a precursor to high-grade cervical intraepithelial neoplasia (CIN 2/3+). Early detection requires high sensitivity and a low limit of detection (LOD). To this end, the developed platform (DxArray) combines the specificity of immunoassays with the selectivity of fluorescence for early disease detection; the long-term goal is to improve the quality of life of the several hundred million women globally at risk of hrHPV infection. The platform uses fluorescent labels rather than the gold-standard colorimetric labels in a compact, high-sensitivity lateral flow assay configuration. It is also compatible with point-of-care settings because it replaces expensive, bulky light sources with LEDs, and low-light CMOS cameras and photomultiplier tubes with photodiodes, in a transillumination architecture that eliminates the need for expensive focusing/transfer optics. High-quality interference filters at less than $1 each enable a rugged, robust design suitable for field use. The platform's LOD is within an order of magnitude of centralized laboratory diagnostic instruments, improving on the LOD of absorbance/reflectometric and visual-readout lateral flow assays by 2 - 3 orders of magnitude. This system could be applied to any chemical or bioanalytical procedure that requires high performance at low cost. The knowledge and techniques developed in this effort are relevant to researchers and industry developers seeking to deploy inexpensive, quantitative, and highly sensitive diagnostic devices in resource-limited settings. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2018
339

Investigation of DC Motors for Electric and Hybrid Electric Motor Vehicle Applications Using an Infinitely Variable Transmission

Groen, Benjamin Carson 13 January 2011 (has links)
Since the early 1900s, demand for fuel-efficient vehicles has motivated the development of electric and hybrid electric vehicles. Unfortunately, some components used in these vehicles are expensive and complex: AC motors, complex electronic controllers, and complex battery management systems are currently used in electric vehicles (EVs) and hybrids. This research examines various motors and speed-control methods to help designers identify which motors are best suited to an EV powertrain application, and assesses the feasibility of coupling DC motors with an infinitely variable transmission (IVT) to obtain an innovative electric or hybrid electric powertrain. The results include an extensive review of motor types, with a comparison chart and motor hierarchy. An experiment was designed and built to test motor speed-control methods, and testing was conducted with two separately excited DC motors and a differential acting as an IVT. These tests revealed that field weakening appears to be a viable low-cost speed-control method, and that the output of a differential or planetary gear set can be controlled by varying the speeds of its inputs. Combining these findings with a product-development mindset led to the concept of using one DC motor as a power (traction) motor while another acts as a speed controller, with field weakening as the speed-control method on the latter. This arrangement delivers most of the power at an efficient rate with a simple form of speed control and may eliminate the need for expensive, complex electronic motor controllers. The approach could also improve safety and reduce battery-management requirements by lowering the operating voltage of the entire system.
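The field-weakening behavior discussed above follows from the steady-state armature equation of a separately excited DC motor; a minimal sketch with assumed, illustrative motor constants (not values from this research):

```python
# Idealized steady-state model of a separately excited DC motor:
#     V = k*phi*omega + Ia*Ra   ->   omega = (V - Ia*Ra) / (k*phi)
# Weakening the field flux phi raises the speed at a given armature
# current, which is the low-cost speed-control idea examined above.
# All numbers are illustrative.

def motor_speed(V, Ia, Ra, k, phi):
    """Steady-state angular speed (rad/s) from the armature equation."""
    return (V - Ia * Ra) / (k * phi)

V, Ia, Ra, k = 48.0, 10.0, 0.2, 0.5
w_full = motor_speed(V, Ia, Ra, k, phi=1.0)  # nominal field
w_weak = motor_speed(V, Ia, Ra, k, phi=0.5)  # field weakened by half
```

Since torque is proportional to phi times armature current, halving the flux doubles the speed at the same current but also halves the available torque, which is why field weakening suits the speed-control motor rather than the traction motor in the two-motor concept.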
340

Reliability and Validity of a Low-cost, Interactive Gaming System to Assess Balance in Older Adults

Hall, Courtney D., Broadwater, A., Caissie, D., Woodward, D., Young, H. 04 February 2015 (has links)
Abstract available through Journal of Geriatric Physical Therapy.
