351

Etude de la morphologie et de la distribution des neurones dans le cerveau de macaque par microscopie optique / Study of the morphology and distribution of neurons in the macaque brain using optical microscopy

You, Zhenzhen 09 October 2017 (has links)
Understanding the mechanisms involved in healthy cases and neurodegenerative diseases, as well as the development of new therapeutic approaches, relies on relevant experimental models and appropriate imaging techniques. In this context, virtual microscopy offers the unique possibility of analyzing these models at the cellular scale with a very wide variety of histological markers. My thesis project consists in developing and applying a method for analyzing color histological images that segments and synthesizes information about neurons labeled with the NeuN antibody on sections of macaque brain. In this work, we first apply the Random Forest (RF) method to segment neurons as well as tissue and background. Then, we propose an original method to separate touching or overlapping neurons in order to individualize them. This method handles neurons of variable size (diameters between 5 and 30 μm). It is effective not only for so-called "simple" regions characterized by a low density of neurons but also for so-called "complex" regions characterized by a very high density of several thousand neurons. The next part of the work focuses on the creation of parametric maps synthesizing the morphology and distribution of the individualized neurons. For this purpose, a multiscale approach is implemented to produce maps with lower spatial resolutions (from the original resolution of 0.22 μm to maps with an adaptive spatial resolution of a few tens to a few hundred micrometers). Several dozen morphological parameters (mean radius, surface area, orientation, etc.) are first computed for each neuron, along with colorimetric parameters. This information can then be synthesized as lower-resolution parametric maps at the level of anatomical regions, sections and even, eventually, entire brains. This step transforms qualitative color microscopic images into quantitative mesoscopic images that are more informative and easier to analyze. This work makes it possible to statistically analyze very large volumes of data, to synthesize information in the form of quantitative maps, to analyze extremely complex problems such as neuronal death, to test new drugs and, ultimately, to compare this post mortem information with data acquired in vivo.
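The aggregation step described in this abstract can be illustrated with a short sketch. The Python example below, assuming scikit-image and a binary segmentation mask as input, computes one morphological parameter (equivalent radius) per individualized neuron and bins it onto a coarse grid, in the spirit of the parametric maps described above. The function name, the choice of parameter, and the use of plain connected-component labeling (standing in for the thesis's dedicated neuron-separation method) are all illustrative assumptions, not the thesis's actual code.

```python
import numpy as np
from skimage.measure import label, regionprops

def parametric_map(mask, pixel_um=0.22, bin_um=100.0):
    """Aggregate per-neuron morphology from a binary segmentation mask
    into a coarse parametric map (mean equivalent radius per map cell).

    mask     : 2D boolean array, True where a neuron was segmented
    pixel_um : pixel size in micrometers (0.22 um in the abstract)
    bin_um   : edge length of one output map cell in micrometers
    """
    # Connected components stand in for the thesis's separation method.
    labeled = label(mask)
    step = int(round(bin_um / pixel_um))      # map-cell size in pixels
    h = mask.shape[0] // step + 1
    w = mask.shape[1] // step + 1
    sums = np.zeros((h, w))
    counts = np.zeros((h, w))
    for region in regionprops(labeled):
        # Equivalent radius of a disk with the same area, in micrometers.
        r_eq = np.sqrt(region.area / np.pi) * pixel_um
        cy, cx = region.centroid
        i, j = int(cy) // step, int(cx) // step
        sums[i, j] += r_eq
        counts[i, j] += 1
    with np.errstate(divide="ignore", invalid="ignore"):
        mean_radius = np.where(counts > 0, sums / counts, np.nan)
    return mean_radius, counts   # per-cell mean radius and neuron count
```

The same loop extends naturally to the other per-neuron parameters mentioned in the abstract (orientation, surface area, colorimetric values) by accumulating one grid per parameter.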
352

Fully-Integrated CMOS pH, Electrical Conductivity, And Temperature Sensing System

Asgari, Mohammadreza January 2018 (has links)
No description available.
353

Measurements of luminosity and a search for dark matter in the ATLAS experiment

Pasuwan, Patrawan January 2020 (has links)
This licentiate thesis presents contributions to the luminosity measurement from the data recorded by the ATLAS detector in 2017 using a track-counting technique, as well as a search for dark matter in the ATLAS experiment using 139 fb⁻¹ of √s = 13 TeV pp collision data delivered by the LHC from 2015 to 2018. Track-counting luminosity measurements in low-luminosity operations are performed to study the effect of low collision rates on luminosity determination. The luminosity measured in a calibration transfer procedure using the track-counting technique is used to correct the pile-up dependence observed in ATLAS's main luminosity detector, LUCID. A search in the final state of a lepton, jets and missing transverse energy, where the final state is produced from a pair of top quarks and a spin-0 scalar/pseudoscalar mediator, is presented. A dedicated signal region is designed to target this final state, in which the mediator decays into dark matter particles. The signal region covers the search in the mass plane of the mediator and the dark matter particle. Dedicated control regions are designed to estimate the top-quark background events, as well as events where a Z boson is produced in association with top quarks. The signal region event counts in the data have not yet been unblinded, but expected exclusion limits at 95% confidence level as a function of mediator mass are presented. Scalar and pseudoscalar mediators are expected to be excluded up to 200 and 250 GeV, respectively, for a dark matter mass of 1 GeV and couplings of 1 between the mediator and both the dark matter and Standard Model particles.
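As background to the track-counting technique, here is a minimal sketch of the standard per-bunch luminosity relation L_b = μ_vis · f_r / σ_vis, where μ_vis is the mean number of selected tracks per bunch crossing and σ_vis is the visible cross-section from a van der Meer calibration. This is the generic formalism, not the thesis's analysis code, and all numerical values below are placeholders.

```python
# LHC revolution frequency f_r in Hz.
LHC_REV_FREQ_HZ = 11_245.0

def per_bunch_luminosity(n_tracks: int, n_bunch_crossings: int,
                         sigma_vis_mb: float) -> float:
    """Per-bunch instantaneous luminosity in cm^-2 s^-1 from track counting.

    n_tracks          : total selected tracks in the sampled crossings
    n_bunch_crossings : number of bunch crossings sampled
    sigma_vis_mb      : visible cross-section from vdM calibration, in mb
    """
    mu_vis = n_tracks / n_bunch_crossings   # mean tracks per crossing
    sigma_vis_cm2 = sigma_vis_mb * 1e-27    # 1 mb = 1e-27 cm^2
    return mu_vis * LHC_REV_FREQ_HZ / sigma_vis_cm2

# Placeholder example: 5 tracks/crossing with a 2 mb visible cross-section
# gives roughly 2.8e31 cm^-2 s^-1 per bunch.
print(per_bunch_luminosity(n_tracks=5_000_000,
                           n_bunch_crossings=1_000_000,
                           sigma_vis_mb=2.0))
```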
354

Exploring the Riemann Hypothesis

Henderson, Cory 28 June 2013 (has links)
No description available.
355

High resolution x-ray imaging by measuring the induced charge distribution / Högupplöst röntgenavbildning genom mätning av den inducerade laddningsfördelningen

Jin, Zihui January 2022 (has links)
Computed tomography (CT) is a medical imaging technique used to create cross-section images of the human body based on x-rays. The emerging photon-counting CT detector shows several advantages compared with the traditional energy-integrating detector. This thesis is based on the new-generation deep-silicon photon-counting CT detector developed by the KTH Medical Imaging group, with a 12×500 μm² pixel size. A method is proposed to achieve high spatial resolution with low computational resource consumption. A Monte Carlo simulation has been performed to model the photon interactions and the charge transport process in the detector. The charge cloud distribution and the induced current are used to make a precise estimate of the interaction position in the direction along the collecting electrodes. The feasibility of this method under estimated electronic noise and with other detector geometries has been checked. A high spatial resolution of around 1 μm in one direction could be beneficial in phase-contrast imaging. Besides the small pixel geometry, simulations of the current photon-counting detector geometry, similar to what is used in clinics, have also been carried out, with a study of the charge carrier transport behavior and the possibility of charge sharing. The results show that although charge sharing events could be used to help estimate the interaction position, their low proportion among total events leads to little resolution improvement. Another study, of the induced current as a function of time, is also presented: by reducing the electrode width while keeping the same pixel width, the induced current signal peak becomes sharper.
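As a rough illustration of position estimation from an induced charge distribution, the sketch below computes a centroid estimate of the interaction position along a row of collecting electrodes. The Gaussian charge-cloud toy model, the electrode pitch, and all numerical values are assumptions for illustration only; the thesis uses a full Monte Carlo simulation of charge transport and induced current, which this sketch does not reproduce.

```python
import numpy as np

# Hypothetical electrode positions along the estimation direction (um).
electrode_pitch_um = 10.0
electrode_pos = np.arange(12) * electrode_pitch_um

def simulate_induced_charge(true_pos_um, cloud_sigma_um=5.0,
                            noise_rms=0.01, rng=None):
    """Toy model: the charge cloud spreads as a Gaussian over the
    electrodes, and each reading carries additive electronic noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    signal = np.exp(-0.5 * ((electrode_pos - true_pos_um)
                            / cloud_sigma_um) ** 2)
    signal /= signal.sum()                  # normalize total induced charge
    return signal + rng.normal(0.0, noise_rms, signal.size)

def estimate_position(charges):
    """Centroid (charge-weighted mean) of the induced charge distribution."""
    charges = np.clip(charges, 0.0, None)   # suppress negative noise values
    return np.sum(electrode_pos * charges) / np.sum(charges)

true_pos = 57.3                             # um, arbitrary test value
measured = simulate_induced_charge(true_pos)
print(f"true {true_pos:.1f} um, estimated {estimate_position(measured):.1f} um")
```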
356

Point Cloud Registration using both Machine Learning and Non-learning Methods : with Data from a Photon-counting LIDAR Sensor

Boström, Maja January 2023 (has links)
Point cloud registration with data measured by a photon-counting LIDAR sensor from a large distance (500 m - 1.5 km) is an expanding field. Data measured from far away is sparse and has low detail, which can make the registration process difficult, and registering this type of data is fairly unexplored. In recent years, machine learning for point cloud registration has been explored with promising results. This work compares the performance of the Iterative Closest Point registration algorithm with state-of-the-art algorithms on data from a photon-counting LIDAR sensor. The data was provided by the Swedish Defense Research Agency (FOI). The chosen state-of-the-art algorithms were the non-learning-based Fast Global Registration and the learning-based D3Feat and SpinNet. The results indicate that all the state-of-the-art algorithms achieve a substantial increase in performance compared to the Iterative Closest Point method. All the state-of-the-art algorithms utilize their computed features to obtain better correspondence points and can therefore achieve higher performance in point cloud registration. D3Feat performed point cloud registration with the highest accuracy of all the state-of-the-art algorithms and ICP.
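For readers unfamiliar with the baseline, here is a minimal sketch of one Iterative Closest Point variant (point-to-point, rigid, with the rotation solved by SVD/Kabsch), assuming numpy and scipy. It is a generic illustration of the algorithm, not the implementation compared in the thesis.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, n_iters=50, tol=1e-6):
    """Point-to-point ICP: align Nx3 `source` onto Mx3 `target`.
    Returns the aligned copy of `source` and the 4x4 rigid transform."""
    src = source.copy()
    tree = cKDTree(target)
    T_total = np.eye(4)
    prev_err = np.inf
    for _ in range(n_iters):
        # 1. Correspondences: nearest target point for each source point.
        dists, idx = tree.query(src)
        matched = target[idx]
        # 2. Best rigid transform for these pairs (Kabsch via SVD).
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        # 3. Apply and accumulate the transform.
        src = src @ R.T + t
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        T_total = T @ T_total
        # 4. Stop when the mean correspondence error no longer improves.
        err = dists.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return src, T_total
```

ICP's reliance on raw nearest-neighbor correspondences is exactly what the feature-based methods above improve on: with sparse, low-detail long-range data, nearest neighbors are often wrong matches, while learned or engineered features yield more reliable correspondences.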
357

Determination of Activity Deposited in the Axillary Lymph Nodes by Direct, In vivo Radiation Measurements

Lobaugh, Megan L. January 2013 (has links)
No description available.
358

Aspects of the Many-Body Problem in Nuclear Physics

Dyhdalo, Alexander 18 September 2018 (has links)
No description available.
359

A Comparison of Discrete Trial Training and Embedded Instruction on the Promotion of Response Maintenance of Coin Counting Skills for Middle School Students with Intellectual Disabilities

Turner, Heather L. 26 September 2011 (has links)
No description available.
360

Photon Counting as a Probe of Superfluidity in a Two-Band Bose Hubbard System Coupled to a Cavity Field

Rajaram, Sara 20 December 2012 (has links)
No description available.
