1

Enabling the processing of bioinformatics workflows where data is located through the use of cloud and container technologies

de Beste, Eugene January 2019 (has links)
Magister Scientiae - MSc / The growing size of raw data, and the failure of internet communication technology to keep pace with that growth, introduces unique challenges for academic researchers, especially those in rural areas or in countries with sub-par telecommunication infrastructure. In this project I investigate the usefulness of cloud computing technology, data analysis workflow languages and portable computation for institutions that generate data. I introduce the concept of a software solution that simplifies the way researchers execute their analyses on data sets at remote sources, rather than having to move the data. The scope of this project involved conceptualising and designing a software system that simplifies the use of a cloud environment, as well as implementing a working prototype of that software for the OpenStack cloud computing platform. I conclude that it is possible to improve the performance of research pipelines by removing the need for researchers to have operating system or cloud computing knowledge, and that such technologies can ease the burden of moving data.
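As a rough, hypothetical sketch of the move-the-compute-not-the-data idea described in this abstract (not the prototype built in the thesis), the snippet below assumes a catalogue of remote sites reachable over SSH with Docker installed; all host names, images and paths are placeholders:

import subprocess

# Hypothetical catalogue mapping dataset identifiers to the site that hosts them.
DATA_SITES = {
    "1000genomes-chr21": {"host": "cloud.site-a.example.org",
                          "path": "/data/1000genomes/chr21"},
}

def run_where_data_lives(dataset, image, command):
    """Run a containerised analysis on the host that already stores the dataset,
    instead of copying the raw data to the researcher's machine."""
    site = DATA_SITES[dataset]
    docker_cmd = f"docker run --rm -v {site['path']}:/input:ro {image} {command}"
    # Execute the container remotely over SSH; only the (small) results come back.
    return subprocess.run(
        ["ssh", site["host"], docker_cmd],
        capture_output=True, text=True, check=True,
    ).stdout

# Example: index a reference genome against the dataset without ever downloading it.
# print(run_where_data_lives("1000genomes-chr21", "biocontainers/bwa:v0.7.17_cv1",
#                            "bwa index /input/ref.fa"))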
2

The Realization Analysis of SAR Raw Data With Block Adaptive Vector Quantization Algorithm

Yang, Yun-zhi, Huang, Shun-ji, Wang, Jian-guo 10 1900 (has links)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / In this paper, we discuss a Block Adaptive Vector Quantization (BAVQ) algorithm for Synthetic Aperture Radar (SAR) and a method for implementing it on a digital signal processor to compress SAR raw data. Using the algorithm and the digital signal processor, we have compressed SIR-C/X-SAR data.
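The paper itself targets a digital signal processor; purely as an illustration of the block adaptive vector quantization idea, here is a small numpy sketch with an assumed block size and a toy 2-bit codebook (neither taken from the paper):

import numpy as np

def bavq_compress(raw, codebook, block=64):
    """Block Adaptive Vector Quantization: normalise each block of raw samples by
    its own standard deviation, then replace each I/Q pair with the index of the
    nearest codebook vector."""
    iq = np.stack([raw.real, raw.imag], axis=-1).reshape(-1, block, 2)
    gains = iq.std(axis=(1, 2), keepdims=True) + 1e-12       # one gain per block
    normalised = iq / gains
    # Nearest-neighbour search against the codebook (shape: n_codewords x 2).
    dist = ((normalised[..., None, :] - codebook) ** 2).sum(axis=-1)
    indices = dist.argmin(axis=-1).astype(np.uint8)           # compressed stream
    return indices, gains.squeeze()

def bavq_decompress(indices, gains, codebook):
    return (codebook[indices] * gains[:, None, None]).reshape(-1, 2)

# Illustrative 4-codeword codebook (2 bits per I/Q pair) and synthetic raw data.
codebook = np.array([[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0]])
raw = (np.random.randn(4096) + 1j * np.random.randn(4096)).astype(np.complex64)
idx, g = bavq_compress(raw, codebook)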
3

Sensordatafusion av IR- och radarbilder / Sensor data fusion of IR- and radar images

Schultz, Johan January 2004 (has links)
This thesis describes and evaluates a number of algorithms for multi-sensor fusion of radar and IR/TV data. The fusion is performed at the raw data level, that is, prior to attribute or object extraction; attribute extraction can discard information that would have improved the fusion, whereas fusing raw data keeps more information available and may lead to better attribute extraction in a later step. Two approaches are presented. The first projects the radar image into the IR view and vice versa, and then fuses each pair of images sharing the same dimensions. The second fuses the two original images into a volume spanned by the three dimensions represented in the source images; this method is also extended to exploit stereo vision. The results show that stereo vision can be worthwhile, since the extra information aids the fusion and gives a more general solution to the problem.
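As an illustration of the first approach (warping the radar image into the IR view before pixel-level fusion), the sketch below assumes a known projective mapping between the two sensors and a simple weighted average as the fusion rule; the homography and weights are placeholders, not values from the thesis:

import numpy as np
import cv2

def fuse_radar_ir(radar_img, ir_img, homography, w_radar=0.5):
    """Warp the radar image into the IR camera's view and fuse the overlapping
    raw intensities pixel by pixel (no attribute extraction beforehand)."""
    h, w = ir_img.shape[:2]
    radar_in_ir_view = cv2.warpPerspective(radar_img, homography, (w, h))
    fused = (w_radar * radar_in_ir_view.astype(np.float32)
             + (1.0 - w_radar) * ir_img.astype(np.float32))
    return fused

# Placeholder homography (identity = sensors already aligned) and random test images.
H = np.eye(3, dtype=np.float32)
radar = np.random.rand(256, 256).astype(np.float32)
ir = np.random.rand(256, 256).astype(np.float32)
fused = fuse_radar_ir(radar, ir, H)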
4

Efficient similarity-driven emission angle selection for coherent plane-wave compounding

Akbar, Haroon Ali 09 October 2018 (has links)
Typical ultrafast plane-wave ultrasound imaging involves: 1) insonifying the medium with several plane-wave pulses emitted at different angles by a linear transducer array, 2) sampling the returning echo signals, after each plane-wave emission, with the same transducer array, 3) beamforming the recorded angle-specific raw data frames, and 4) compounding the beamformed data frames over all angles to form a final image. This thesis addresses the following question: given a set of available plane-wave emission angles, which ones should we select for acquisition (i.e., which angle-specific raw data frames should we sample) to achieve adequate image quality at a low sampling and computation cost? We propose a simple similarity-driven angle selection scheme and evaluate several variants of it that rely on user-specified similarity thresholds guiding the recursive angle selection process. Our results show that the proposed scheme has a low computational overhead and can yield significant savings in the amount of sampled raw data. / Graduate
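One plausible reading of the recursive, similarity-driven selection is sketched below (an assumption-laden illustration, not the thesis implementation): keep the two extreme angles, and only acquire the midpoint of an angle interval when the beamformed images at its endpoints are not yet similar enough under a user-specified correlation threshold:

import numpy as np

def similarity(img_a, img_b):
    """Normalised cross-correlation between two beamformed frames."""
    a = (img_a - img_a.mean()) / (img_a.std() + 1e-12)
    b = (img_b - img_b.mean()) / (img_b.std() + 1e-12)
    return float((a * b).mean())

def select_angles(angles, beamform, threshold=0.95):
    """Recursively pick which plane-wave angles to acquire: if the images from the
    endpoints of an interval already look alike, skip every angle in between."""
    selected = {angles[0], angles[-1]}

    def recurse(lo, hi):
        if hi - lo < 2:
            return
        if similarity(beamform(angles[lo]), beamform(angles[hi])) >= threshold:
            return                      # endpoints agree; intermediate angles add little
        mid = (lo + hi) // 2
        selected.add(angles[mid])       # acquire and beamform the midpoint angle
        recurse(lo, mid)
        recurse(mid, hi)

    recurse(0, len(angles) - 1)
    return sorted(selected)

# Toy stand-in for "acquire raw data at this angle and beamform it".
fake_beamform = lambda angle: np.sin(np.linspace(0, 4, 512) + np.deg2rad(angle))
print(select_angles(list(range(-16, 17, 2)), fake_beamform, threshold=0.99))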
5

Instaurer des données, instaurer des publics : une enquête sociologique dans les coulisses de l'open data / Instantiate data, instantiate publics : a sociological inquiry in the backrooms of open data

Goeta, Samuel 08 September 2016 (has links)
As more than fifty countries have launched open data policies, this doctoral dissertation investigates the emergence and implementation of such policies. It is based on the analysis of public sources and on an ethnographic inquiry conducted in seven French local authorities and institutions. By retracing six defining moments of the "open data principles" and their translation into public policy by a French institution, Etalab, this work shows how the open data category has drawn attention to data, particularly in their "raw" form, treated as an untapped resource, the "new oil" lying beneath organisations. The inquiry shows that the opening process generally begins with an identification phase marked by progressive and uncertain explorations, and that identification is an act of instantiation that gradually turns the administration's management files into data. Putting the data into circulation provokes frictions: to leave the sociotechnical networks of the organisation, data generally have to pass through validation circuits and processing chains. Moreover, data must often undergo substantial transformation before release in order to become intelligible to machines as well as to humans. The thesis finally shows that publics are also instantiated, since they are expected to visualise, inspect and exploit the open data; instantiating these publics through a wide variety of instruments is another part of the invisible work of open data policies. It emerges from this work that a legal obligation to open public data, a possible next step for open data policies, raises a fundamental question: "what is data?" Rather than reducing data to a relative category that could apply to all kinds of informational material, the cases studied show that the label is generally applied once data become the starting point of sociotechnical networks dedicated to their circulation, exploitation and visibility.
6

Coil Sensitivity Estimation and Intensity Normalisation for Magnetic Resonance Imaging / Spolkänslighetsbestämning och intensitetsnormalisering för magnetresonanstomografi

Herterich, Rebecka, Sumarokova, Anna January 2019 (has links)
The quest for improved efficiency in magnetic resonance imaging has motivated the development of strategies such as parallel imaging, in which arrays of multiple receiver coils are operated simultaneously. The objective of this project was to estimate the sensitivity profiles of phased-array coils from magnetic resonance images of the human body. These sensitivity maps can then be used to correct intensity inhomogeneity in the images. Through investigative work in Matlab, a script was developed that uses information embedded in the raw data of a magnetic resonance scan to generate coil sensitivities for each voxel of the volume of interest and to recalculate them into two-dimensional sensitivity maps of the corresponding diagnostic images. The resulting mapped sensitivity profiles can also be used in Sensitivity Encoding (SENSE), where a more exact reconstruction can be obtained from carefully estimated sensitivity maps.
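The thesis works on scan raw data in Matlab; as a generic illustration of the underlying idea rather than the authors' script, relative coil sensitivity maps are often approximated by dividing each coil image by the sum-of-squares combination and smoothing the ratio:

import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_sensitivities(coil_images, smoothing_sigma=5.0):
    """Estimate relative coil sensitivity maps from individual coil images:
    divide each coil image by the sum-of-squares combination, then smooth,
    since true sensitivities vary slowly across the field of view."""
    sos = np.sqrt((np.abs(coil_images) ** 2).sum(axis=0)) + 1e-12
    raw_maps = np.abs(coil_images) / sos
    return np.stack([gaussian_filter(m, smoothing_sigma) for m in raw_maps])

# Toy example: 8 coils, one 128x128 slice each.
coils = np.random.rand(8, 128, 128)
sens = estimate_sensitivities(coils)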
7

Etude de l’influence de l’entrée artérielle tumorale par modélisation numérique et in vitro en imagerie de contraste ultrasonore. : application clinique pour l’évaluation des thérapies ciblées en cancérologie / In vitro assessment of the arterial input function influence on dynamic contrast-enhanced ultrasonography microvascularization parameter measurements using numerical modeling. : clinical impact on treatment evaluations in oncology

Gauthier, Marianne 05 December 2011 (has links)
Dynamic contrast-enhanced ultrasonography (DCE-US) is currently proposed as a functional imaging technique for evaluating new anti-angiogenic therapies. In this context, UPRES EA 4040 (Université Paris-Sud 11) and the ultrasound department of the Institut Gustave Roussy have developed a method that automatically computes a set of semi-quantitative parameters from the mean contrast-uptake curve obtained in the tumour after a bolus injection of contrast agent. Unlike other modalities (dynamic contrast-enhanced magnetic resonance imaging or perfusion CT), the computation of these parameters does not yet take into account the patient's haemodynamic state or the conditions under which the contrast agent is injected. The aim of this PhD was therefore to extend to contrast-enhanced ultrasound the deconvolution method used routinely in those other imaging modalities: deconvolving the tumour contrast-uptake curve by the arterial input function removes the dependence on the conditions above and gives access to the quantitative parameters blood flow, blood volume and mean transit time. The work was organised around three axes. The first was to develop the deconvolution-based quantification method dedicated to contrast-enhanced ultrasound, building the methodological tool and assessing its effect on the variability of the microvascularisation parameters; comparative evaluations of intra-operator variability showed a drastic decrease in the coefficients of variation of the microvascularisation parameters, from 30% to 13%, with the deconvolution method. The second axis focused on the sources of variability affecting the microvascularisation parameters, covering both the experimental conditions and the physiological state of the tumour. The final axis was a retrospective study of 12 patients in which we assessed the benefit of deconvolution by comparing the evolution of the quantitative and semi-quantitative microvascularisation parameters against tumour responses established with the RECIST criteria from a scan performed at 2 months. This methodology is promising and may ultimately allow an earlier and more robust evaluation of anti-angiogenic therapies than the methods currently used in routine DCE-US examinations.
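A minimal numerical sketch of the deconvolution step (assumed details, not the group's implementation): the tissue time-intensity curve is modelled as the convolution of the arterial input function with a residue function scaled by blood flow, recovered here by Tikhonov-regularised inversion of the discrete convolution matrix, after which blood flow, blood volume and mean transit time follow from indicator-dilution theory:

import numpy as np
from scipy.linalg import toeplitz

def deconvolve_tic(tissue, aif, dt, reg=0.1):
    """Recover BF*R(t) from tissue(t) = BF * (AIF convolved with R)(t) by
    Tikhonov-regularised inversion of the discrete convolution (Toeplitz) matrix."""
    n = len(tissue)
    A = dt * toeplitz(aif, np.zeros(n))           # lower-triangular convolution matrix
    lam = reg * np.linalg.norm(A, 2)
    bf_r = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ tissue)
    blood_flow = bf_r.max()                       # BF: peak of the scaled residue function
    blood_volume = bf_r.sum() * dt                # BV: area under BF*R(t)
    return blood_flow, blood_volume, blood_volume / blood_flow   # MTT = BV / BF

# Toy curves: gamma-variate arterial input function and a tissue curve synthesised
# from a known exponential residue function with BF = 0.5 (dt = 0.5 s).
t = np.arange(0, 60, 0.5)
aif = (t / 5.0) ** 2 * np.exp(-t / 5.0)
tissue = 0.5 * 0.5 * np.convolve(aif, np.exp(-t / 8.0))[: len(t)]
print(deconvolve_tic(tissue, aif, dt=0.5))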
8

Distributions Of Fiber Characteristics As A Tool To Evaluate Mechanical Pulps

Reyier Österling, Sofia January 2015 (has links)
Mechanical pulps are used in paper products such as magazine- or news-grade printing papers and paperboard. Mechanical pulping gives a high yield; nearly everything in the tree except the bark ends up in the paper. This means that mechanical pulping consumes much less wood than chemical pulping, especially per unit area of printing surface. A drawback of mechanical pulp production is the large amount of electrical energy needed to separate and refine the fibers to a given fiber quality. Mechanical pulps are often produced from slow-growing spruce trees of forests in the northern hemisphere, resulting in long, slender fibers that are well suited for mechanical pulp products. These fibers vary widely in geometry, mainly in wall thickness and width, depending on seasonal variations and growth conditions; earlywood fibers typically have thin walls and latewood fibers thick ones. The background to this study was that a more detailed fiber characterization, involving evaluations of distributions of fiber characteristics, may improve the possibilities to optimize the mechanical pulping process and thereby reduce the total electrical energy needed to reach a given quality of the pulp and final product. This would result in improved competitiveness as well as less environmental impact. This study evaluated the relation between fiber characteristics in three types of mechanical pulps made from Norway spruce (Picea abies): thermomechanical pulp (TMP), stone groundwood pulp (SGW) and chemithermomechanical pulp (CTMP). In addition, the influence of fibers from these pulp types on sheet characteristics, mainly tensile index, was studied. A comparatively rapid method was presented for evaluating the propensity of each fiber to form sheets of high tensile index, using raw data from a commercially available fiber analyzer (FiberLab™). The developed method gives novel opportunities to evaluate the effect of each stage of the mechanical pulping process on the fibers, and has the potential to be applied on-line to steer the refining and pulping process according to the characteristics of the final pulp and the quality of the final paper. The long fiber fraction is important for the properties of the whole pulp. Fiber wall thickness and external fibrillation were found to be the fiber characteristics that contributed the most to the tensile index of the long fiber fractions in five mechanical pulps (three TMPs, one SGW, one CTMP). The tensile index of handsheets of the long fiber fractions could be predicted by linear regressions using a combination of fiber wall thickness and degree of external fibrillation; the predicted tensile index was denoted BIN, short for Bonding ability INfluence. This gave the same linear correlation between BIN and tensile index for 52 samples of the five mechanical pulps studied, each fractionated into five streams (plus feed) in full-size hydrocyclones. The Bauer McNett P16/R30 (passed a 16-mesh wire, retained on a 30-mesh wire) and P30/R50 fractions of each stream were used for the evaluation. The fibers of the SGW had thicker walls and a higher degree of external fibrillation than those of the TMPs and CTMP, which placed the correlation between BIN and tensile index for the P30/R50 fraction of the SGW on a different level than for the other pulp samples.
A BIN model based on averages weighted by each fiber's wall volume, instead of arithmetic averages, took the fiber wall thickness of the SGW into account and gave one uniform correlation between BIN and tensile index for all pulp samples (12 samples for constructing the model, 46 for validating it). If the BIN model is used for predicting the average tensile index of a sheet, a model based on wall-volume-weighted data is recommended. To be able to produce BIN distributions in which the influence of the length or wall volume of each fiber is taken into account, the BIN model is currently based on arithmetic averages of fiber wall thickness and fibrillation. Fiber width used as a single factor reduced the accuracy of the BIN model. Wall-volume-weighted averages of fiber width also resulted in a completely changed ranking of the five hydrocyclone streams compared to arithmetic averages for two of the five pulps; this was not seen when fiber width was combined with fiber wall thickness into the factor "collapse resistance index". In order to avoid too high an influence of fiber wall thickness, and until the influence of fiber width on BIN and the measurement of fiber width are further evaluated, it is recommended to use length-weighted or arithmetic distributions of BIN and other fiber characteristics. A comparably fast method for evaluating the distribution of fiber wall thickness and degree of external fibrillation with high resolution showed that the fiber wall thickness of the latewood fibers was reduced by increasing the refining energy in a double-disc refiner operated at four levels of specific energy input in a commercial TMP production line. This was expected but could not be seen from average values; it was concluded that fiber characteristics should in many cases be evaluated as distributions and not only as averages. BIN distributions of various types of mechanical pulps from Norway spruce showed results that were expected based on knowledge of the particular pulps and processes. Measurements of mixtures of a news-grade and an SC (super-calendered) grade TMP showed a gradual increase in high-BIN fibers with higher amounts of SC-grade TMP. The BIN distributions also revealed differences between the pulps that were not seen from average fiber values, for example that the shape of the BIN distributions was similar for two pulps that originated from conical disc refiners, a news-grade TMP and the board-grade CTMP, although the distributions were on different BIN levels. The SC-grade TMP and the SC-grade SGW had similar levels of tensile index, but the SGW contained some fibers of very low BIN values, which may influence the characteristics of the final paper, for example strength, surface and structure. This shows that the BIN model has the potential to be applied to the whole or parts of a papermaking process based on mechanical or chemimechanical pulping; the evaluation of distributions of fiber characteristics can contribute to increased knowledge about the process and opportunities to optimize it.
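A small sketch of the kind of BIN calculation described above, under the assumption that per-fiber wall thickness and external fibrillation are available from a fiber analyzer; the calibration data and fitted coefficients below are synthetic placeholders, not the thesis values:

import numpy as np

def fit_bin_model(wall_thickness, fibrillation, tensile_index):
    """Fit tensile index = b0 + b1*wall_thickness + b2*fibrillation by least squares
    on pulp-sample averages (the 'BIN' predictor of bonding ability influence)."""
    X = np.column_stack([np.ones_like(wall_thickness), wall_thickness, fibrillation])
    coeffs, *_ = np.linalg.lstsq(X, tensile_index, rcond=None)
    return coeffs

def bin_per_fiber(coeffs, wall_thickness, fibrillation):
    """Apply the fitted model to every fiber, giving a BIN distribution rather than
    a single pulp average."""
    return coeffs[0] + coeffs[1] * wall_thickness + coeffs[2] * fibrillation

# Toy calibration: 12 pulp samples (averages) with measured handsheet tensile index.
rng = np.random.default_rng(0)
wt, fib = rng.uniform(2, 6, 12), rng.uniform(0, 30, 12)
ti = 60 - 5 * wt + 0.8 * fib + rng.normal(0, 2, 12)
coeffs = fit_bin_model(wt, fib, ti)

# BIN distribution for 10,000 individual fibers from a FiberLab-type analyzer.
fiber_wt, fiber_fib = rng.uniform(2, 6, 10_000), rng.uniform(0, 30, 10_000)
bin_values = bin_per_fiber(coeffs, fiber_wt, fiber_fib)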
9

GIS-based Episode Reconstruction Using GPS Data for Activity Analysis and Route Choice Modeling / GIS-based Episode Reconstruction Using GPS Data

Dalumpines, Ron 26 September 2014 (has links)
Most transportation problems arise from individual travel decisions. In response, transportation researchers have been studying individual travel behavior, a growing trend that requires activity data at the individual level. Global positioning systems (GPS) and geographical information systems (GIS) have been used to capture and process individual activity data, from determining activity locations to mapping routes to these locations. Potential applications of GPS data seem limitless, but our tools and methods to make these data usable lag behind. In response to this need, this dissertation presents a GIS-based toolkit to automatically extract activity episodes from GPS data and derive information related to these episodes from additional data (e.g., road network, land use). The major emphasis of this dissertation is the development of a toolkit for extracting information associated with the movements of individuals from GPS data. To be effective, the toolkit has been developed around three design principles: transferability, modularity, and scalability. Two substantive chapters focus on selected components of the toolkit (map-matching and mode detection); another covers the entire toolkit. The final substantive chapter demonstrates the toolkit's potential by comparing route choice models of work and shop trips using inputs generated by the toolkit. There are several tools and methods that capitalize on GPS data, developed within different problem domains. This dissertation contributes to that repository of tools and methods by presenting a suite of tools that can extract all the information that can be derived from GPS data. Unlike existing tools cited in the transportation literature, the toolkit has been designed to be complete (covering preprocessing up to extracting route attributes) and can work with GPS data alone or in combination with additional data. Moreover, this dissertation contributes to our understanding of route choice decisions for work and shop trips by looking into the combined effects of route attributes and individual characteristics. / Dissertation / Doctor of Philosophy (PhD)
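As a toy illustration of one building block of such a toolkit (not the dissertation's GIS implementation), the sketch below segments a GPS trace into activity episodes by flagging stretches where the point-to-point speed stays below a dwell threshold for a minimum duration; the thresholds and the planar distance approximation are assumptions:

import numpy as np

def detect_activity_episodes(times, lons, lats, speed_thresh=0.5, min_dwell=300):
    """Split a GPS trace into activity episodes: maximal runs of consecutive fixes
    whose speed (m/s) stays below speed_thresh for at least min_dwell seconds."""
    # Approximate planar distances in metres (adequate for short urban segments).
    dx = np.diff(lons) * 111_320 * np.cos(np.deg2rad(lats[:-1]))
    dy = np.diff(lats) * 110_540
    speed = np.hypot(dx, dy) / np.diff(times)

    episodes, start = [], None
    for i, s in enumerate(speed):
        if s < speed_thresh and start is None:
            start = i                                   # possible dwell begins
        elif s >= speed_thresh and start is not None:
            if times[i] - times[start] >= min_dwell:
                episodes.append((times[start], times[i]))
            start = None
    if start is not None and times[-1] - times[start] >= min_dwell:
        episodes.append((times[start], times[-1]))
    return episodes

# Example with a synthetic 20-minute trace: 10 minutes stationary, then walking.
t = np.arange(0, 1200, 10.0)
lon = np.where(t < 600, 13.0, 13.0 + (t - 600) * 1e-5)
lat = np.full_like(t, 52.5)
print(detect_activity_episodes(t, lon, lat))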
