161

Oxygénation des lits capillaires à la papille optique des patients sains et glaucomateux / Oxygenation of the capillary beds at the optic nerve head in healthy and glaucomatous patients

Tran, Van Loc 01 1900 (has links)
Le glaucome représente la première cause de cécité irréversible à l’échelle mondiale. C’est une maladie neuro-dégénérative caractérisée traditionnellement par une pression intraoculaire (PIO) élevée, un dommage du nerf optique et un défaut du champ visuel correspondant. En fait, la PIO élevée constitue le facteur de risque central associé au développement du glaucome. Cependant, en dépit d’un contrôle adéquat de la PIO, la maladie continue à progresser chez certains patients. Cela montre qu’il existe d’autres facteurs impliqués dans la pathogenèse du glaucome. Des études récentes indiquent qu’un dérèglement de l’oxygène est associé à son développement. En utilisant une nouvelle technologie multi-spectrale capable de mesurer la saturation en oxygène (SaO2) dans les structures capillaires de la rétine, cette étude tentera de déterminer si un état d’oxygénation anormal pourrait se retrouver à la papille optique des patients souffrant de glaucome. Une meilleure compréhension du rôle de l’oxygène pourrait aider à améliorer le pronostic du glaucome. Les résultats de l’étude indiquent que le facteur de position (supérieure, temporale et inférieure de la papille optique) n’a aucun effet sur la mesure SaO2 ni sur sa variabilité chez les patients normaux. La comparaison de la SaO2 entre les sujets normaux et glaucomateux ne montre pas de différence statistiquement significative. En conclusion, la SaO2 «normale» mesurée dans les yeux glaucomateux n'exclut pas nécessairement que l'hypoxie soit impliquée dans la pathogenèse. Au moment de l’étude, la PIO était bien contrôlée par des médicaments topiques, ce qui pourrait influencer l’oxygénation à la papille optique. / Glaucoma is the leading cause of irreversible blindness worldwide. Traditionally, open-angle glaucoma was defined as a neurodegenerative disease characterized by high intraocular pressure (IOP) and progressive retinal cell death with subsequent visual field loss. Elevated IOP has been identified as one of the major risk factors for glaucomatous optic nerve damage. However, adequate IOP control cannot prevent progression of the disease in all patients, suggesting that there are other factors involved in the pathogenesis of glaucoma. Recent studies suggest that hypoxia may contribute to the development of glaucoma. Using a new multi-spectral detection system for oxygen saturation (O2Sa), this study determined whether an abnormal state of oxygenation at the optic disc could be found in glaucoma patients. Knowledge about the influence of oxygen in glaucoma may help to improve the prognosis of the disease. The results of the study indicate that the position factor (superior, temporal and inferior regions of the optic nerve head) has no effect on the O2Sa measurement or its variability in normal patients. Comparing the O2Sa between normal subjects and glaucoma subjects shows no statistically significant difference. In conclusion, the «normal» O2Sa measured in glaucomatous eyes does not necessarily rule out a role for hypoxia in the pathogenesis of glaucoma, because the glaucoma patients were under treatment with topical drops that lowered IOP. These medicines could affect the oxygenation of the optic disc.
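The abstract does not describe the oximetry computation itself; in two-wavelength retinal oximetry, saturation is commonly estimated from the ratio of vessel optical densities at an oxygen-sensitive and an isosbestic wavelength. A minimal illustrative sketch of that generic approach (the wavelengths and calibration constants are assumptions, not values from this thesis):

    import numpy as np

    def optical_density(i_vessel, i_background):
        """Optical density of a vessel segment relative to the surrounding tissue."""
        return np.log10(i_background / i_vessel)

    def estimate_so2(i_sensitive, i_iso, bg_sensitive, bg_iso, a=1.28, b=-1.23):
        """Estimate oxygen saturation from the optical density ratio (ODR).

        i_*  : mean vessel intensities at the oxygen-sensitive and isosbestic wavelengths
        bg_* : corresponding background intensities
        a, b : linear calibration constants (hypothetical placeholder values)
        """
        odr = optical_density(i_sensitive, bg_sensitive) / optical_density(i_iso, bg_iso)
        return np.clip(a + b * odr, 0.0, 1.0)  # saturation as a fraction in [0, 1]

    # Example: a capillary region measured at ~600 nm (sensitive) and ~570 nm (isosbestic)
    print(estimate_so2(i_sensitive=180.0, i_iso=140.0, bg_sensitive=220.0, bg_iso=210.0))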
162

Škálování arteriální vstupní funkce v DCE-MRI / Scaling of arterial input function in DCE-MRI

Holeček, Tomáš Unknown Date (has links)
Perfusion magnetic resonance imaging is a modern diagnostic method used mainly in oncology. In this method, a contrast agent is injected into the subject and the time course of its concentration in the affected area is then continuously monitored. Correct determination of the arterial input function (AIF) is very important for perfusion analysis. One possibility is to model the AIF by multichannel blind deconvolution, but the estimated AIF then needs to be scaled. This master's thesis focuses on the description of scaling methods and their influence on perfusion parameters, depending on the AIF model used, in different tissues.
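The abstract does not specify how the blind-deconvolution AIF estimate is scaled; one common convention is to fix the unknown multiplicative constant so that the AIF's area under the curve matches a reference value (for example from a population-based AIF). A minimal sketch under that assumption (the reference AUC and the AIF shape below are hypothetical):

    import numpy as np

    def scale_aif(aif_est, t, reference_auc):
        """Scale a blind-deconvolution AIF estimate so its area matches a reference AUC.

        aif_est       : AIF estimated up to an unknown multiplicative constant
        t             : sample times [s]
        reference_auc : reference area under the curve (e.g. from a population AIF)
        """
        auc_est = np.trapz(aif_est, t)
        k = reference_auc / auc_est          # unknown scale factor resolved here
        return k * aif_est, k

    # Example with a hypothetical gamma-variate shaped, unscaled AIF estimate
    t = np.linspace(0, 120, 241)                       # 120 s, 0.5 s sampling
    aif_est = (t / 10.0) ** 2 * np.exp(-t / 10.0)      # arbitrary unscaled shape
    aif_scaled, k = scale_aif(aif_est, t, reference_auc=12.0)
    print(k, np.trapz(aif_scaled, t))                  # scaled AUC equals 12.0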
163

Imagens de fontes magnéticas usando um sistema multicanal de sensores magneto-resistivos / Magnetic Source images using a Magnetoresistive Sensors Multichannel System

Cruz, Juan Alberto Leyva 03 November 2005 (has links)
Apresenta-se o desenho, construção e caracterização de uma plataforma experimental para a obtenção de imagens magnéticas bidimensionais (2D) geradas pela distribuição não uniforme, em gel de vaselina, de micro-partículas magnéticas (magnetita - Fe3O4), acomodadas em fantomas magnéticos de geometrias irregulares. A instrumentação é basicamente formada por um arranjo multicanal de 12 sensores magnetorresistivos de última geração (modelo HMC 1001/1002 da Honeywell), os quais convertem os sinais magnéticos a serem medidos em voltagens diferenciais, que posteriormente passam pela etapa de condicionamento analógico de multisinais e são adquiridas por uma placa de aquisição PCI de 16 canais simples. Os sinais eram gerados pelas fontes magnéticas (fantomas), posicionadas sobre uma tábua porta-fantoma acionada por um sistema de posicionamento x-y com dois motores de passo controlados via porta paralela. A obtenção e o processamento das imagens de forma automática foram levados a cabo por meio da ferramenta computacional SmaGimFM v1.0 (grupo de scripts escritos pelo autor em LABVIEW v8.1 e Matlab v7.3). A montagem experimental foi desenhada para realizar o scan numa área de até (20x18) cm². O sistema consegue medir campos na ordem de poucos nano-teslas (10⁻⁹ T). Foi demonstrado experimentalmente que: a detectabilidade do sistema está na ordem de 100 pT/√Hz; a resolução, ou menor valor da indução magnética detectada, e a resolução espacial dos sensores foram de aproximadamente (3±1) nT e (3.0±0.1) mm, respectivamente, este último valor obtido para uma distância sensor-fonte média de (6.0±0.1) mm. O nível de ruído ambiental médio foi corroborado experimentalmente no valor de 10 nT. O fator de calibração para todos os sensores, alimentados com 8 V, foi de aproximadamente 10⁻⁶ T/V, confirmando o valor da sensibilidade nominal oferecida pelo vendedor no data-sheet dos sensores. Os multisinais sempre foram pré-processados para a remoção dos offsets e, posteriormente, era realizada uma interpolação bi-cúbica para gerar imagens magnéticas com uma alta resolução espacial, da ordem de (256x256) pixels. As funções de transferência de modulação e de espalhamento pontual do sistema foram estudadas, e os sensores foram espaçados e fixados de acordo com os resultados destes estudos. Nesta tese, todas as imagens cruas foram geradas pelo mapeamento da resposta do sistema multicanal de magnetômetros, a pequenas distâncias, à presença de micropartículas de magnetita (Fe3O4) não tratadas termicamente e dispersadas em oito fantomas planares com geometrias complexas, chamados de: PhMão; PhNum; PhLines; PhCinco; PhTrês; PhCircle; PhQuadSmall e PhQuadBig. As imagens magnéticas de cada um destes fantomas são apresentadas. A cada experimento, estes fantomas eram magnetizados pela ação de um pulso magnético uniforme no volume dos fantomas, com um valor de aproximadamente 81.6 mT, produzido por um sistema de bobinas par de Helmholtz. Para fazer o registro experimental das imagens magnéticas, os fantomas foram posicionados a uma altura fixa em relação aos sensores e movidos numa direção de scan; assim, as voltagens geradas nos detectores pela variação do campo remanente, devido às diferentes concentrações de micro-partículas magnéticas magnetizadas, foram medidas e controladas por um computador pessoal.
Usando as imagens cruas (imagens ruidosas e borradas) e outras informações a priori, foram obtidas as imagens reconstruídas das fontes do campo magnético, tais como a distribuição de partículas ferrimagnéticas no interior dos fantomas, a qual está relacionada com a susceptibilidade magnética das amostras. Encontrar as imagens das fontes magnéticas é resolver o problema inverso magnético associado; no nosso trabalho, estas restaurações foram realizadas usando-se os seguintes algoritmos numéricos de deconvolução: filtragem espacial de Wiener e de Fourier, filtragem pseudo-inversa, método do gradiente conjugado e os procedimentos de regularização de Tikhonov e de decomposição em valores singulares truncada, dentre outros. Estes procedimentos foram implementados e testados. As imagens reconstruídas das fontes magnéticas de quatro fantomas são apresentadas. Estas técnicas foram programadas computacionalmente por meio de um conjunto de scripts chamado SmaGimFM v1.0, escritos nas linguagens computacionais MATLAB®, da MathWorks Inc., e LABVIEW, da National Instruments Inc. Estes resultados preliminares mostram que o sistema de imagens apresenta potencial para ser aplicado em estudos na área da Física Médica, onde imagens com moderada a alta resolução espacial e baixa amplitude da indução magnética são exigidas. Contudo, podemos afirmar que a distância sensor-fonte é crítica e afeta a resolução das imagens. O sistema é capaz de registrar imagens na ordem de 10⁻⁹ T, e sua elevada resolução espacial indica que pode ser testado como uma nova técnica biomagnética para gerar imagens em 2D de partículas magnéticas dentro de objetos, na região do campo próximo, para futuras aplicações médicas / We have designed and built a magnetic imaging system for obtaining experimental noisy and blurred magnetic images from distributions of ferromagnetic tracers (magnetite Fe3O4). The main part of the magnetic imaging system was formed by a linear array of 12 magnetoresistive sensors from Honeywell Inc. (HMC 1001). These sensors are microcircuits in a Wheatstone-bridge configuration that convert magnetic fields into differential voltages which, after passing through the multichannel signal-conditioning stage, allow magnetic signals of about 10⁻⁹ T to be measured. The system is capable of scanning planar samples with dimensions up to (16x18) cm². A full experimental characterization of the magnetic imaging system was carried out. The calibration factor for all sensors, supplied with 8 V, was approximately 10⁻⁶ T/V, confirming the nominal properties given in the vendor's data sheet. The spatial resolution and the resolution of the magnetic imaging system were experimentally confirmed to be 3 mm and 3 nT, respectively. The noise spectral density was about 100 pT/√Hz for the experimental conditions used in these studies. The signals were pre-processed for offset removal, and bicubic interpolation was applied to improve spatial resolution and generate images of (256x256) pixels. The point spread and modulation transfer functions of the multi-sensor system were studied and the sensors were spaced accordingly. In this thesis, all raw images were generated by mapping the response of the multichannel magnetoresistive magnetometer array, at short distances, to untreated (not heat-treated) magnetite powder dispersed in eight planar phantoms with complex geometries, named: PhMão; PhNum; PhLines; PhCinco; PhTrês; PhCircle; PhQuadSmall and PhQuadBig.
These phantoms were magnetized by a uniform pulse field of approximately 81.6 mT produced by a Helmholtz coil system. The samples were moved under the magnetoresistive sensors, and the voltages generated by the variation in the remanent magnetic field due to the different concentrations of magnetized ferromagnetic particles were recorded and controlled by a personal computer. Using the experimental noisy and blurred magnetic field images (raw images) and other a priori information, the reconstruction of the magnetic field source images, such as the distribution of ferromagnetic particles inside the phantoms, which is related to the magnetic susceptibility, was obtained by various inverse-problem solution algorithms: spatial Wiener and Fourier filtering, pseudo-inverse filtering, the conjugate gradient method, and the Tikhonov and truncated singular value decomposition regularization approaches, among others. These procedures were implemented by means of the script set called SmaGimFM v1.0, which we developed using the MATLAB® language from MathWorks Inc. Preliminary results show that this magnetic imaging system, combined with a deconvolution technique, can be considered efficient for functional imaging of the gastrointestinal tract, where a moderate resolution is required. We can affirm that the choice of sensor-source distance is a critical parameter that affects the resolution of the images, and we can conclude that this magnetic imaging method can be successfully used to generate planar blurred magnetic images and magnetic field source images, generated by ferromagnetic materials, in the near-field region at the macroscopic level.
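The thesis lists several deconvolution and regularization methods (Wiener and Fourier filtering, pseudo-inverse filtering, conjugate gradient, Tikhonov, truncated SVD) implemented in the SmaGimFM scripts. As a rough illustration of one of them, here is a minimal frequency-domain Tikhonov-regularized inverse filter; the Gaussian PSF, the regularization weight and the toy source map are assumptions, not the thesis's actual data or code:

    import numpy as np

    def tikhonov_deconvolve(raw_image, psf, lam=1e-2):
        """Frequency-domain Tikhonov-regularized deconvolution of a blurred magnetic map.

        raw_image : measured (noisy, blurred) 2-D magnetic field map
        psf       : point spread function of the sensor array, same shape as raw_image
        lam       : regularization weight balancing fidelity against noise amplification
        """
        H = np.fft.fft2(np.fft.ifftshift(psf))
        G = np.fft.fft2(raw_image)
        # Regularized inverse filter: H* / (|H|^2 + lambda)
        F = np.conj(H) * G / (np.abs(H) ** 2 + lam)
        return np.real(np.fft.ifft2(F))

    # Toy example: a Gaussian PSF blurring two point-like sources
    n = 256
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    psf = np.exp(-(x ** 2 + y ** 2) / (2 * 4.0 ** 2))
    psf /= psf.sum()
    source = np.zeros((n, n)); source[100, 120] = 1.0; source[150, 90] = 0.5
    raw = np.real(np.fft.ifft2(np.fft.fft2(source) * np.fft.fft2(np.fft.ifftshift(psf))))
    raw += 0.01 * np.random.randn(n, n)                 # measurement noise
    restored = tikhonov_deconvolve(raw, psf, lam=1e-3)  # estimate of the source map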
164

Potential Replacement of the US Navy's Rapid Penetration Test with the Method of Multichannel Analysis of Surface Waves

Fletcher, William 01 January 2018 (has links)
The United States Navy (USN) currently utilizes a Rapid Penetration Test (RPT) on both land and in water as the means to determine whether sufficient soil bearing capacity exists for piles in axial compression, prior to construction of the Elevated Causeway System (Modular) [ELCAS(M)] pile-supported pier system. The USN desires a replacement for the RPT because of issues with the method incorrectly classifying soils as well as the need to have a less labor-and-equipment-intensive method for geotechnical investigation. The Multichannel Analysis of Surface Waves (MASW) method is selected herein as the potential replacement for the RPT. The MASW method is an existing, geophysical method for determining soil properties based upon the acquisition and analysis of seismic surface waves used to develop shear wave velocity profiles for the soils at specific sites. Correlations between shear wave velocity and Cone Penetration Testing are utilized to classify soils, develop pile blow count estimates, and calculate soil bearing capacity. This researcher found that the MASW method was feasible and reliable in predicting the required properties for terrestrial sites. However, it was not successful in predicting those properties for underwater marine sites due to issues with equipment and field setup. Future areas of improvement are recommended to address these issues and, due to the success of the method on land, it is expected that once the issues are addressed the MASW method will be a reliable replacement for the RPT method across the entire subaerial and subaqueous profile.
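The abstract states that correlations between shear wave velocity and Cone Penetration Testing are used to classify soils and compute bearing capacity, but it does not give the correlation itself. The sketch below only illustrates the shape of such a workflow, using a generic power-law Vs-to-qc relation and a grossly simplified axial capacity sum; every constant is a placeholder, not a value from the thesis:

    def cone_resistance_from_vs(vs, a=0.0002, b=2.0):
        """Estimate cone tip resistance qc [MPa] from shear wave velocity Vs [m/s]
        using a generic power-law correlation qc = a * Vs**b.
        The constants a and b are placeholders, not values from the thesis."""
        return a * vs ** b

    def ultimate_pile_capacity(qc_tip, tip_area, qc_shaft, shaft_area, alpha=0.01):
        """Very simplified axial pile capacity: end bearing plus shaft friction, with
        shaft friction taken as a fraction alpha of the average qc along the shaft.
        Result in MN if qc is in MPa and areas are in m^2."""
        return qc_tip * tip_area + alpha * qc_shaft * shaft_area

    # Example: a Vs profile from an MASW inversion (values are illustrative)
    vs_profile = [180.0, 220.0, 310.0]            # m/s for three layers
    qc = [cone_resistance_from_vs(v) for v in vs_profile]
    print(qc, ultimate_pile_capacity(qc[-1], 0.07, sum(qc) / len(qc), 4.0))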
165

Distributed Joint Source-Channel Coding For Multiple Access Channels

Rajesh, R 05 1900 (has links)
We consider the transmission of correlated sources over a multiple access channel (MAC). Multiple access channels are important building blocks in many practical communication systems, e.g., local area networks (LAN), cellular systems, and wireless multi-hop networks. Thus this topic has been studied for the last several decades. One recent motivation is estimating a random field via wireless sensor networks. Often the sensor nodes are densely deployed, resulting in correlated observations. These sensor nodes need to transmit their correlated observations to a fusion center which uses this data to estimate the sensed random field. Sensor nodes have limited computational and storage capabilities and very limited energy. Since transmission is very energy intensive, it is important to minimize it. This motivates our problem of energy-efficient transmission of correlated sources over a sensor network. Sensor networks are often arranged in a hierarchical fashion. Neighboring nodes can first transmit their data to a cluster head which can further compress the information before transmission to the fusion center. The transmission of data from sensor nodes to their cluster head is usually through a MAC. At the fusion center the underlying physical process is estimated. The main trade-off possible is between the rates at which the sensors send their observations and the distortion incurred in estimation at the fusion center. The availability of side information at the encoders and/or the decoder can reduce the rate of transmission. In this thesis, the above scenario is modeled as an information theoretic problem. Efficient joint source-channel codes are discussed under various assumptions on side information and distortion criteria. Sufficient conditions for transmission of discrete/continuous alphabet sources with a given distortion over a discrete/continuous alphabet MAC are given. We recover various previous results as special cases from our results. Furthermore, we study the practically important case of the Gaussian MAC (GMAC) in detail and propose new joint source-channel coding schemes for discrete and continuous sources. Optimal schemes are identified in different scenarios. Protocols like TDMA, FDMA and CDMA are widely used across systems and standards. When these protocols are used, the MAC becomes a system of orthogonal channels. Our general conditions can be specialized to obtain sufficient conditions for lossy transmission over this system. Using these conditions, we identify an optimal scheme for transmission of Gaussian sources over orthogonal Gaussian channels and show that the Amplify-and-Forward (AF) scheme performs close to the optimal scheme even at high SNR. Next we investigate transmission of correlated sources over a fast fading MAC with perfect or partial channel state information available at both the encoders and the decoder. We provide sufficient conditions for transmission with given distortions. We also provide power allocation policies for efficient transmission. Next, we use the MAC with side information as a building block of a hierarchical sensor network. For Gaussian sources over Gaussian MACs, we show that AF performs well in such sensor network scenarios where battery power is at a premium. We then extend this result to the hierarchical network scenario and show that it compares favourably with the Slepian-Wolf based source coding and independent channel coding scheme.
In a hierarchical sensor network the cluster heads often need to send only a function of the sensor observations to the fusion center. In such a setup the sensor nodes can compress the data sent to the cluster head exploiting the correlation in the data and also the structure of the function to be computed at the cluster head. Depending upon the function, exploiting the structure of the function can substantially reduce the data rate for transmission. We provide efficient joint source-channel codes for transmitting a general class of functions of the sources over the MAC.
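As an illustration of the Amplify-and-Forward scheme discussed above, the following minimal sketch simulates two correlated unit-variance Gaussian sources scaled to the available power and transmitted over a Gaussian MAC, with a linear MMSE estimate at the decoder; it is a toy check against the closed-form distortion, not the thesis's full coding scheme:

    import numpy as np

    def af_distortion_sim(rho, P, N, n=200_000, rng=np.random.default_rng(0)):
        """Monte Carlo distortion of amplify-and-forward for two correlated unit-variance
        Gaussian sources sent over a Gaussian MAC Y = X1 + X2 + Z with per-sensor power P."""
        cov = [[1.0, rho], [rho, 1.0]]
        s = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        x = np.sqrt(P) * s                                  # each sensor just scales its sample
        y = x[:, 0] + x[:, 1] + rng.normal(0.0, np.sqrt(N), size=n)
        # Linear MMSE estimate of source 1 from the channel output
        a = np.sqrt(P) * (1 + rho) / (2 * P * (1 + rho) + N)
        s1_hat = a * y
        return np.mean((s[:, 0] - s1_hat) ** 2)

    # Closed form for comparison: D = 1 - P(1+rho)^2 / (2P(1+rho) + N)
    rho, P, N = 0.8, 1.0, 0.5
    print(af_distortion_sim(rho, P, N), 1 - P * (1 + rho) ** 2 / (2 * P * (1 + rho) + N))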
166

Automatizuotų skaitmeninių sistemų mažiems pokyčiams įvertinti tyrimas / Investigation of digital automatic systems for evaluation of small changes

Kvedaras, Rokas 12 February 2006 (has links)
1. A method for automatic digital balancing of a Wheatstone resistance bridge using a DAC R-2R matrix for the evaluation of small resistance changes was developed and investigated. The balancing method reduces the impact of external influences on the evaluation results and avoids most of the disadvantages common to classic systems based on an unbalanced Wheatstone bridge. In the developed system, the parameters of connecting wires and channel switches do not affect the evaluation results. The advantages of the developed system are proven by experiments. It allows low-cost implementation of systems for the evaluation of small resistance changes. 2. Possibilities for simplifying the circuit for evaluating small resistance changes by using digital signal processing are proven. It is established that, by using known and proposed methods for improving the reliability of evaluation results, the resolution of the evaluation is 12–14 bits (0.024 %–0.006 % accuracy), and that digital signal processing methods are necessary to achieve such resolution. 3. New structures of systems for evaluating small resistance changes, ensuring resolutions of 2^12 and 2^8 and intended for laboratory investigations and monitoring of constructions, are proposed. An experimental model of the system with a resolution of 2^8 was built and investigated. Methods of reducing noise in long cables were established during the experiment. In general, it was proven that the model is suitable for monitoring tasks... [to full text]
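The summary names automatic digital balancing with an R-2R DAC without giving the algorithm; a common way to realize it is a successive-approximation loop that sets DAC bits from the MSB down while the bridge detector output has not changed sign. A generic sketch under that assumption (the detector model and bit width are illustrative, not the thesis's circuit):

    def balance_bridge(read_detector, n_bits=12):
        """Successive-approximation balancing of a bridge via an n-bit R-2R DAC.

        read_detector(code) is assumed to return the bridge detector output when
        the DAC is set to `code` (positive while the bridge is under-compensated).
        The returned code is a digital measure of the resistance change.
        This is a generic sketch, not the thesis's circuit.
        """
        code = 0
        for bit in reversed(range(n_bits)):          # try bits from MSB to LSB
            trial = code | (1 << bit)
            if read_detector(trial) >= 0:            # not over-compensated: keep the bit
                code = trial
        return code

    # Toy detector: the bridge balances exactly at code 2613 of a 12-bit DAC
    print(balance_bridge(lambda c: 2613 - c, n_bits=12))   # -> 2613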
167

Hallå, är någon där? : En studie om informationsnyttjande mellan handelskanaler / Hello, is anyone there? : A study of information use between retail channels

Hurtig, Robert, Forsberg, Elisabeth January 2014 (has links)
Syftet med denna studie har varit att undersöka hur företag samlar in och utnyttjar kundinformation mellan försäljningskanalerna. Men även att utreda motiven bakom att bli en multikanalåterförsäljare samt produkttypens roll i beslutet. En kvalitativ fallstudie med semistrukturerade intervjuer har tillämpats, där fyra företag och en expert deltagit. Studien har visat att information om kunden har varit värdefull och har använts emellan handelskanaler för att anpassa butikens sortimentsmix, skapa välgrundade kampanjer och styra kunder till butik. De mindre företagen, sett till omsättning, har haft mer användning av kundinformation vid beslutet att öppna fysiska butiker än de större. Motivationen bakom att bli multikanalhandlare har varit potentialen att nå fler kunder men även bättre kundfokus. Andra faktorer än produkttyp har varit av intresse för företagen vid öppnandet av de fysiska butikerna men arbetet med returer har visat att produkttyp kan spela roll. / The study investigates how companies gather and use customer information between sales channels, as well as the motives behind expanding to multichannel retailing and the role of product type in that decision. A qualitative case study with semi-structured interviews was applied, in which four companies and one expert participated. The study has shown that information about the customer is valuable and has been used between sales channels to adjust the stores' product mix, create well-founded campaigns and direct customers to the stores. The smaller companies, in terms of turnover, made greater use of customer information in the decision to add physical stores than the larger ones. The motivation behind going multichannel has been the potential to reach more customers, but also improved customer focus. Factors other than product type were of interest to the companies when the physical stores were opened, but the handling of returns has shown that product type can matter.
168

Reverse audio engineering for active listening and other applications

Gorlow, Stasnislaw 16 December 2013 (has links) (PDF)
This work deals with the problem of reverse audio engineering for active listening. The format under consideration corresponds to the audio CD. The musical content is viewed as the result of a concatenation of the composition, the recording, the mixing, and the mastering. The inversion of the latter two stages constitutes the core of the problem at hand. The audio signal is treated as a post-nonlinear mixture: the mixture is first "decompressed" and then "decomposed" into audio tracks. The problem is tackled in an informed context: the inversion is accompanied by information which is specific to the content production. In this manner, the quality of the inversion is significantly improved. The information is reduced in size by the use of quantization and coding methods, together with some psychoacoustic facts. The proposed methods are applicable in real time and have a low complexity. The obtained results advance the state of the art and contribute new insights.
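As a toy illustration of the "decompress, then decompose" idea (the thesis's actual dynamics processing is not specified here, and is certainly not a mu-law curve), inverting a known memoryless compression characteristic before any decomposition step could look like this:

    import numpy as np

    def compress(x, mu=255.0):
        """Memoryless mu-law style compression, standing in for the mastering stage."""
        return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

    def decompress(y, mu=255.0):
        """Exact inverse of the compression curve, applied before decomposition."""
        return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

    x = np.linspace(-1, 1, 5)
    print(np.allclose(decompress(compress(x)), x))   # True: the nonlinearity is undone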
169

Shallow gas hazards in Queen Charlotte Basin from interpretation of high resolution seismic and multibeam data

Halliday, Julie 30 December 2008 (has links)
This thesis investigates shallow gas hazards in Queen Charlotte Basin, a sedimentary basin situated offshore British Columbia. The work presented here provides the first detailed gas hazard assessment in Queen Charlotte Basin and the first evidence that gas has migrated from basin sediments into surficial sediments to be expelled in the water column. A unique method of geophysical surveying is used to investigate hazards due to shallow gas at two sites within Queen Charlotte Basin: high-resolution multichannel seismic, Huntec Deep-Towed Seismic and multibeam bathymetry data were collected over two 2-D grids and interpreted concurrently to yield a comprehensive understanding of the geology at each site. Numerous features related to both ice cover and shallow gas have been identified. Pockmarks, iceberg ploughmarks and seafloor mounds are observed in the multibeam data; acoustically turbid and vertical blank zones are imaged in the Huntec data; and faulted anticlines containing bright spots, as well as low-frequency shadow zones, are seen in the multichannel data. Combining and interpreting all three geophysical datasets concurrently provided the means to discriminate features related to ice cover from features related to gas in the shallow sediments. In addition, this method of geohazard assessment has enabled links between surficial and basin geology to be made. Based on the results obtained, gas and other geohazards were identified at each of the two sites. Based on observations in high-resolution multichannel seismic data, gas is determined to have migrated along structural pathways within basin sediments and into surficial sediments. The level of hazard posed by shallow gas has been assessed qualitatively for each of the two study sites, and gas hazard regions have been identified elsewhere in Queen Charlotte Basin.
170

Optogenetic feedback control of neural activity

Newman, Jonathan P. 12 January 2015 (has links)
Optogenetics is a set of technologies that enable optically triggered gain or loss of function in genetically specified populations of cells. Optogenetic methods have revolutionized experimental neuroscience by allowing precise excitation or inhibition of firing in specified neuronal populations embedded within complex, heterogeneous tissue. Although optogenetic tools have greatly improved our ability to manipulate neural activity, they do not offer control of neural firing in the face of ongoing changes in network activity, plasticity, or sensory input. In this thesis, I develop a feedback control technology that automatically adjusts optical stimulation in real-time to precisely control network activity levels. I describe hardware and software tools, modes of optogenetic stimulation, and control algorithms required to achieve robust neural control over timescales ranging from seconds to days. I then demonstrate the scientific utility of these technologies in several experimental contexts. First, I investigate the role of connectivity in shaping the network encoding process using continuously-varying optical stimulation. I show that synaptic connectivity linearizes the neuronal response, verifying previous theoretical predictions. Next, I use long-term optogenetic feedback control to show that reductions in excitatory neurotransmission directly trigger homeostatic increases in synaptic strength. This result opposes a large body of literature on the subject and has significant implications for memory formation and maintenance. The technology presented in this thesis greatly enhances the precision with which optical stimulation can control neural activity, and allows causally related variables within neural circuits to be studied independently.
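The abstract does not reproduce the control algorithms themselves; as a hedged illustration of optogenetic feedback control, a minimal proportional-integral loop that adjusts light power to hold a measured firing rate at a target might look like the sketch below (gains, update period, power limits and the toy network response are all assumptions, not values from the thesis):

    def make_pi_controller(target_hz, kp=0.5, ki=1.0, dt=0.05, p_min=0.0, p_max=10.0):
        """Proportional-integral controller adjusting optical stimulation power to hold
        a population firing rate at a target. Gains, update period and power limits are
        illustrative placeholders."""
        integral = 0.0
        def update(measured_hz):
            nonlocal integral
            error = target_hz - measured_hz
            integral += error * dt
            return min(p_max, max(p_min, kp * error + ki * integral))  # light power command
        return update

    # Toy closed loop: a network whose steady-state rate grows linearly with light power
    controller = make_pi_controller(target_hz=5.0)
    rate = 1.0
    for _ in range(200):
        rate = 1.0 + 0.8 * controller(rate)    # hypothetical network response to stimulation
    print(round(rate, 2))                       # settles near the 5 Hz target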
