  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Modelling and simulation of dynamic contrast-enhanced MRI of abdominal tumours

Banerji, Anita January 2012 (has links)
Dynamic contrast-enhanced (DCE) time series analysis techniques are hard to fully validate quantitatively as ground truth microvascular parameters are difficult to obtain from patient data. This thesis presents a software application for generating synthetic image data from known ground truth tracer kinetic model parameters. As an object oriented design has been employed to maximise flexibility and extensibility, the application can be extended to include different vascular input functions, tracer kinetic models and imaging modalities. Data sets can be generated for different anatomical and motion descriptions as well as different ground truth parameters. The application has been used to generate a synthetic DCE-MRI time series of a liver tumour with non-linear motion of the abdominal organs due to breathing. The utility of the synthetic data has been demonstrated in several applications: developing an Akaike model selection technique for assessing the spatially varying characteristics of liver tumours; assessing the robustness of model fitting and model selection to noise, partial volume effects and breathing motion in liver tumours; and demonstrating the benefit of using model-driven registration to compensate for breathing motion. When applied to synthetic data with appropriate noise levels, the Akaike model selection technique can distinguish between the single-input extended Kety model for tumour and the dual-input Materne model for liver, and is robust to motion. A significant difference between the median Akaike probability values in tumour and liver regions is also seen in 5/6 acquired data sets, with the extended Kety model selected for tumour. Knowledge of the ground truth distribution for the synthetic data was used to demonstrate that, whilst median Ktrans does not change significantly due to breathing motion, model-driven registration restores the structure of the Ktrans histogram and so could be beneficial to tumour heterogeneity assessments.
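The Akaike model selection step described in this abstract can be illustrated with a small sketch: two least-squares fits are compared by converting their AIC values into Akaike probabilities (weights). The toy models below (a constant versus a straight line) are stand-ins for the actual extended Kety and Materne fits, which are not reproduced here; all data are synthetic.

```python
import numpy as np

def aic_from_rss(rss, n, k):
    """AIC for a least-squares fit with n samples and k free parameters."""
    return n * np.log(rss / n) + 2 * k

def akaike_probabilities(aics):
    """Convert a list of AIC values to Akaike weights (probabilities)."""
    aics = np.asarray(aics, dtype=float)
    delta = aics - aics.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Toy comparison: fit a constant vs. a straight line to a noisy ramp.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
y = 2.0 * t + rng.normal(0.0, 0.1, t.size)

rss_const = np.sum((y - y.mean()) ** 2)              # 1-parameter model
coeffs = np.polyfit(t, y, 1)                          # 2-parameter model
rss_line = np.sum((y - np.polyval(coeffs, t)) ** 2)

aics = [aic_from_rss(rss_const, t.size, 1), aic_from_rss(rss_line, t.size, 2)]
probs = akaike_probabilities(aics)
# On this data the line model should win decisively.
```

In the thesis this comparison is made per voxel, so that maps of Akaike probability reveal spatially varying tissue characteristics.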
12

Elicitation indirecte de modèles de tri multicritère / Indirect elicitation of multicriteria sorting models

Cailloux, Olivier 14 November 2012 (has links)
The field of Multicriteria Decision Aid (MCDA) proposes ways of formally modelling the preferences of a decision maker (DM) in order to provide information that can help her in a decision problem. MCDA addresses situations in which the available options (called alternatives) are evaluated from multiple points of view. This work mainly proposes elicitation methods: ways of questioning a DM or a group of DMs in order to obtain one or several preference models. These methods rely on so-called disaggregation techniques, which use exemplary decisions as a basis for building the preference model. In our context, the preference models are sorting models: they determine a way of assigning alternatives to preference-ordered categories. We are interested in a class of sorting models called MR Sort. We present a method, based on mathematical programs, that helps a group of DMs converge to a single sorting model. We also analyse in detail the difficulties caused by numerical imprecision when implementing these programs, and we propose an algorithm for comparing two MR Sort models. We introduce a novel way of questioning the DM that takes her hesitations into account, through the expression of degrees of credibility, when she provides assignment examples. The results of the method let the DM examine the possible compromises between the credibility and the precision of the conclusions. We propose a portfolio selection method. It combines two concerns: absolute evaluation, to ensure that the selected alternatives are intrinsically good enough, and balance of the resulting portfolio. We also explain how this method can serve as an alternative to affirmative action. We describe the reusable software components we have submitted to a web services platform, as well as the functionality developed in a library implementing the methods proposed in this work. A data scheme exists that aims to standardise the encoding of data used by MCDA methods in order to ease communication between software components; we propose a new approach that resolves a number of drawbacks of the current one. Finally, as a perspective, we develop a proposal for grounding preference modelling in a realist epistemology.
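As a rough illustration of the MR Sort assignment rule this abstract refers to (a sketch under common textbook assumptions, not the thesis's implementation): an alternative is assigned, pessimistically, to the highest category whose lower-limit profile it outranks, where outranking means that the criteria on which the alternative is at least as good as the profile carry a combined weight of at least the majority threshold. All numbers below are invented.

```python
def mr_sort(alternative, profiles, weights, lam):
    """Pessimistic MR Sort: index of the highest category whose lower
    profile is outranked by the alternative (0 is the worst category).

    profiles: lower-limit profiles, one per category boundary, ordered
              from lowest to highest; each is a per-criterion tuple
              (higher is better).
    weights:  per-criterion weights summing to 1.
    lam:      majority threshold in (0.5, 1].
    """
    category = 0
    for h, profile in enumerate(profiles, start=1):
        support = sum(w for a, b, w in zip(alternative, profile, weights)
                      if a >= b)
        if support >= lam:
            category = h       # alternative outranks this boundary
        else:
            break              # pessimistic rule: stop at first failure
    return category

# Three criteria, two boundaries => three ordered categories.
profiles = [(10, 10, 10), (15, 15, 15)]
weights = (0.4, 0.3, 0.3)
lam = 0.6
```

For instance, an alternative scoring (16, 16, 9) outranks both boundaries on a 0.7 weight coalition and lands in the top category, while (5, 5, 20) musters only 0.3 against the first boundary and stays in the bottom one.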
13

Quality of service in cloud computing: Data model; resource allocation; and data availability and security

Akintoye, Samson Busuyi January 2019 (has links)
Philosophiae Doctor - PhD / Recently, massive migration of enterprise applications to the cloud has been recorded in the Information Technology (IT) world. The number of cloud providers offering their services, and the number of cloud customers interested in using such services, is rapidly increasing. However, one of the challenges of cloud computing is Quality-of-Service management, which denotes the level of performance, reliability, and availability offered by cloud service providers. Quality-of-Service is fundamental to cloud service providers, who must find the right tradeoff between Quality-of-Service levels and operational cost. To find the optimal tradeoff, cloud service providers need to comply with service level agreement (SLA) contracts, which define an agreement between cloud service providers and cloud customers. Service level agreements are expressed in terms of quality of service (QoS) parameters such as availability, scalability, performance, and service cost. If, on the other hand, the cloud service provider violates the service level agreement contract, the cloud customer can claim damages and penalties, which can result in revenue losses and possibly damage the provider's reputation. Thus, the goal of any cloud service provider is to meet the service level agreements while reducing the total cost of offering its services.
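The tradeoff described in this abstract can be sketched as a toy cost model (the numbers and the linear penalty rule are illustrative assumptions, not from the thesis): the provider pays for capacity, and additionally pays a penalty proportional to any shortfall below the agreed availability target, so over-provisioning and SLA violation are both costly.

```python
def provider_cost(capacity_units, unit_cost, availability, sla_target, penalty):
    """Toy cost model: provisioning cost plus an SLA penalty that applies
    only when delivered availability falls below the agreed target."""
    base = capacity_units * unit_cost
    violation = max(0.0, sla_target - availability)
    return base + penalty * violation

# More capacity buys higher availability but costs more; the provider
# looks for the configuration with the lowest total cost.
configs = [
    (2, 0.990),   # (capacity units, achieved availability)
    (3, 0.995),
    (4, 0.999),
]
sla_target, penalty, unit_cost = 0.995, 20000.0, 50.0
costs = [provider_cost(c, unit_cost, a, sla_target, penalty)
         for c, a in configs]
best = configs[costs.index(min(costs))]
# Here the middle configuration just meets the target at the lowest cost.
```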
14

Design and Implementation of an Object-Oriented Space-Time GIS Data Model

Zhao, Ziliang 01 August 2011 (has links)
Geographic data are closely related to both spatial and temporal domains. Geographic information systems (GIS) can capture, manage, analyze, and display spatial data, but they are not well suited to handling temporal data. Rapid developments in data collection and location-aware technologies have stimulated interest in obtaining useful information from historical data, and researchers have been building various spatio-temporal data models to support spatio-temporal queries. Nevertheless, the existing models exhibit weaknesses in various respects: for instance, the snapshot model is plagued with data redundancy, and the event-based spatio-temporal data model (ESTDM) is limited to raster datasets. This study reviews existing spatio-temporal data models in order to design an object-oriented space-time GIS data model that makes additional contributions to processing spatio-temporal data. A binary large object (BLOB) data type, labeled Space-Time BLOB, is added to the ArcGIS geodatabase data model to store instantiated space-time objects. A Space-Time BLOB is associated with an array that contains the spatial and temporal information for an object at different time points and time intervals. This study also implements a space-time GIS prototype system, along with a set of spatio-temporal query functions, based on the proposed space-time GIS data model.
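As an in-memory sketch of the space-time object idea (the thesis serialises such objects into a geodatabase BLOB column; the class and method names here are invented for illustration): each object keeps a time-ordered array of states and can report the geometry that was valid at an arbitrary time point.

```python
from bisect import bisect_right

class SpaceTimeObject:
    """In-memory sketch of a space-time object: an ordered array of
    (timestamp, geometry) states, mimicking what would be serialised
    into a Space-Time BLOB column."""

    def __init__(self, object_id):
        self.object_id = object_id
        self.states = []  # list of (timestamp, geometry), kept sorted

    def add_state(self, timestamp, geometry):
        self.states.append((timestamp, geometry))
        self.states.sort(key=lambda s: s[0])

    def geometry_at(self, timestamp):
        """Geometry valid at `timestamp` (the most recent state at or
        before it), or None before the first recorded state."""
        times = [t for t, _ in self.states]
        i = bisect_right(times, timestamp)
        return self.states[i - 1][1] if i else None

# A land parcel whose boundary polygon changed in 2005.
parcel = SpaceTimeObject("parcel-42")
parcel.add_state(2000, [(0, 0), (0, 2), (2, 2), (2, 0)])
parcel.add_state(2005, [(0, 0), (0, 3), (3, 3), (3, 0)])
```

A temporal query such as "what did this parcel look like in 2003?" then reduces to a binary search over the state array, which is the kind of spatio-temporal query function the prototype system exposes.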
15

Technological Spillovers via Foreign Investment and China's Economic Development

Chu, Yu-han 22 June 2007 (has links)
We review the previous literature on the productivity effects of FDI in China and find that the evidence for FDI spillovers on its economic growth rate is mixed; for example, A. Marino (2000) and E-G Lim (2001) point out that such spillovers occur only conditionally. Whether China's experience can help underdeveloped countries reach their goals has therefore become a contentious issue. Based on CH (1995), this paper presents a three-sector R&D-based endogenous growth model of an open economy with human capital accumulation and existing stocks of technology from both multinational corporations and domestic industries. The underlying idea is that if technological spillovers from FDI act on domestic R&D sectors, the rate of technological growth rises, leading to better economic development. The solution satisfying the competitive equilibrium conditions shows that the long-run growth rate rises with improved absorptive capability and a higher stock of human capital, while the relationship between the technology gap and the steady-state growth rate is ambiguous. Building on the theoretical model and using Chinese provincial-level data for 30 provinces over 1996-2004, the paper then carries out an empirical analysis with a fixed-effects panel OLS model, in which absorptive capacity is proxied by human capital. The empirical model focuses on how human capital, domestic R&D, and international technological spillovers affect the long-run growth rate. The main conclusion is that steady-state growth rates depend positively on the stock of human capital, domestic R&D investment, and technological spillovers via FDI, whether or not absorptive capacity is taken into account. The results also show that the stock of human capital is an appropriate index of absorptive capacity, and the provincial-level productivity effects of FDI in China are strongly confirmed. However, some obstacles to the absorption of foreign technologies remain in China, so the authorities should place more emphasis on increasing the human capital stock and strengthening indigenous innovation capability.
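The fixed-effects panel OLS mentioned in this abstract can be sketched with the within transformation: demeaning each province's observations removes the province-specific effect before the common slope is estimated. This single-regressor toy with invented numbers illustrates only the estimator, not the paper's actual specification.

```python
import numpy as np

def within_ols(y, x, groups):
    """Fixed-effects (within) estimator for a single regressor:
    demean y and x within each group, then OLS through the origin."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    groups = np.asarray(groups)
    y_d = np.empty_like(y)
    x_d = np.empty_like(x)
    for g in np.unique(groups):
        m = groups == g
        y_d[m] = y[m] - y[m].mean()   # remove group (fixed) effect
        x_d[m] = x[m] - x[m].mean()
    return float(x_d @ y_d) / float(x_d @ x_d)

# Two "provinces" with different intercepts but a common slope of 0.5.
x = np.array([1.0, 2.0, 3.0, 1.0, 2.0, 3.0])
fe = np.array([10.0, 10.0, 10.0, 20.0, 20.0, 20.0])  # province effects
y = fe + 0.5 * x
beta = within_ols(y, x, groups=np.array([0, 0, 0, 1, 1, 1]))
# beta recovers 0.5 despite the very different province intercepts.
```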
16

Software Design of A Graph Data Model with Extended Views and Operations

Yen, Yu-Yang 27 March 2008 (has links)
State-of-the-art libraries (for example, the Standard Template Library) support a number of data models, such as sets, maps, and sequences. Since graph data processing is widely used in combinatorial and optimization programs, in this research we implemented the software design of a graph model with extended views. The design provides various graph data models together with associated graph operations and graph algorithms. With this library, program designs that use graph data and graph processing can be supported.
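Since the abstract does not list the library's API, the following is only a hedged sketch of the kind of graph data model, with an associated view and operation, that such a library provides; the class and method names are invented for illustration.

```python
class Graph:
    """Minimal undirected graph on an adjacency-list model, bundling a
    neighbour 'view' and a BFS traversal with the data structure."""

    def __init__(self):
        self.adj = {}

    def add_edge(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)

    def neighbors(self, u):
        """View of the vertices adjacent to u."""
        return self.adj.get(u, set())

    def bfs_order(self, start):
        """Breadth-first visiting order from `start` (ties sorted)."""
        order, seen, queue = [], {start}, [start]
        while queue:
            u = queue.pop(0)
            order.append(u)
            for v in sorted(self.neighbors(u)):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return order

g = Graph()
for u, v in [("a", "b"), ("a", "c"), ("b", "d")]:
    g.add_edge(u, v)
```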
17

Ex-dividend day abnormal return analysis in Taiwan 50 index stocks

YAO, YI-HSIN 28 July 2008 (has links)
Abstract: Taiwan's stock market has long paid attention to ex-dividend performance. In essence, participating in an ex-dividend event does not increase wealth, but investors usually regard dividends as a signal of a company's expected future operations, so ex-dividend days attract investor attention. We collect data from 1999 to 2007, a nine-year period, and analyse ex-dividend day stock prices for the Taiwan 50 index stocks. Using the market model of the event study methodology, we estimate abnormal returns (AR) with OLS, GARCH, and SUR models respectively, and discuss the ex-dividend performance of the Taiwan 50 index stocks. We then add variables that may drive abnormal returns to a panel data regression model in order to identify the factors behind them.
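The market-model abnormal return used in this event-study design can be sketched as follows (the OLS variant only; GARCH and SUR estimation are not reproduced, and all numbers are synthetic): alpha and beta are estimated by OLS over an estimation window, and the event-day AR is the actual return minus the model's expected return.

```python
import numpy as np

def market_model_ar(stock, market, est_end, event_idx):
    """Event-study abnormal return: fit alpha/beta by OLS on the
    estimation window [0, est_end), then
    AR = actual return - (alpha + beta * market return)."""
    beta, alpha = np.polyfit(market[:est_end], stock[:est_end], 1)
    expected = alpha + beta * market[event_idx]
    return stock[event_idx] - expected

# Synthetic daily returns: 120-day estimation window, event on day 120.
rng = np.random.default_rng(1)
market = rng.normal(0.0, 0.01, 121)
stock = 0.001 + 1.2 * market   # alpha = 0.1%, beta = 1.2
stock[120] += 0.03             # inject a 3% abnormal move on the event day
ar = market_model_ar(stock, market, est_end=120, event_idx=120)
# ar recovers the injected 3% abnormal return.
```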
18

Postglacial Transient Dynamics of Olympic Peninsula Forests: Comparing Predictions and Observations

Fisher, David 03 October 2013 (has links)
Interpreting particular climatic drivers of local and regional vegetation change from paleoecological records is complex. I explicitly simulated vegetation change from the late-Glacial period to the present on the Olympic Peninsula, WA, and made formal comparisons to pollen records. A temporally continuous paleoclimate scenario drove the process-based vegetation model, LPJ-GUESS. Nine tree species and a grass type were parameterized, with special attention to species requirements for establishment as limited by snowpack. Simulations produced realistic present-day species composition in five forest zones and captured late-Glacial to late Holocene transitions in forest communities. Early Holocene fire-adapted communities were not simulated well by LPJ-GUESS. Scenarios with varying amounts of snow relative to rain showed the influence of snowpack on key bioclimatic variables and on species composition at a subalpine location. This study affirms the importance of exploring climate change with methods that consider species interactions, transient dynamics, and functional components of the climate.
19

Hydrologic Information Systems: Advancing Cyberinfrastructure for Environmental Observatories

Horsburgh, Jeffery S. 01 May 2009 (has links)
Recently, community initiatives have emerged for the establishment of large-scale environmental observatories. Cyberinfrastructure is the backbone upon which these observatories will be built, and scientists' ability to access and use the data collected within observatories to address research questions will depend on the successful implementation of cyberinfrastructure. The research described in this dissertation advances the cyberinfrastructure available for supporting environmental observatories. This has been accomplished through both development of new cyberinfrastructure components as well as through the demonstration and application of existing tools, with a specific focus on point observations data. The cyberinfrastructure that was developed and deployed to support collection, management, analysis, and publication of data generated by an environmental sensor network in the Little Bear River environmental observatory test bed is described, as is the sensor network design and deployment. Results of several analyses that demonstrate how high-frequency data enable identification of trends and analysis of physical, chemical, and biological behavior that would be impossible using traditional, low-frequency monitoring data are presented. This dissertation also illustrates how the cyberinfrastructure components demonstrated in the Little Bear River test bed have been integrated into a data publication system that is now supporting a nationwide network of 11 environmental observatory test bed sites, as well as other research sites within and outside of the United States. Enhancements to the infrastructure for research and education that are enabled by this research are impacting a diverse community, including the national community of researchers involved with prospective Water and Environmental Research Systems (WATERS) Network environmental observatories as well as other observatory efforts, research watersheds, and test beds. 
The results of this research provide insight into and potential solutions for some of the bottlenecks associated with design and implementation of cyberinfrastructure for observatory support.
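As a minimal sketch of the point-observations idea underlying such cyberinfrastructure (each record ties a value to a site, a variable, and a timestamp, so time series can be queried out per site and variable); the class, site, and variable names below are invented and do not reproduce the dissertation's actual schema.

```python
from datetime import datetime

class ObservationStore:
    """Tiny point-observations store: (site, variable, timestamp, value)
    records, retrievable as per-site, per-variable time series."""

    def __init__(self):
        self.records = []

    def add(self, site, variable, timestamp, value):
        self.records.append((site, variable, timestamp, value))

    def series(self, site, variable):
        """Time-ordered (timestamp, value) pairs for one site/variable."""
        return sorted(
            (t, v) for s, var, t, v in self.records
            if s == site and var == variable
        )

store = ObservationStore()
store.add("LBR-1", "turbidity", datetime(2008, 6, 1, 0, 30), 4.2)
store.add("LBR-1", "turbidity", datetime(2008, 6, 1, 0, 0), 3.9)
store.add("LBR-1", "discharge", datetime(2008, 6, 1, 0, 0), 1.1)
```

High-frequency sensor networks push many such records per hour, which is why the dissertation's data management and publication layers matter.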
20

Machine-to-machine communication for automatic retrieval of scientific data

Gangaraju, SricharanLochan 03 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / With the increasing need for accurate weather predictions, large samples of data from many different sources are needed for accurate estimates. A number of data sources publish data periodically, and each has its own server protocol that a user must follow when writing a client to retrieve its data. This project aims at creating a generic, semi-automatic client mechanism for retrieving scientific data from such sources. With the growing number of data sources, there is also a need for a data model that accommodates data published in different formats; we propose a data model that can be used across various applications in the domain of scientific data retrieval.
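One common way to realise such a generic retrieval mechanism is a per-protocol adapter behind a uniform fetch interface; the sketch below assumes that pattern, and every class name, field, and in-memory stand-in for a remote endpoint is invented for illustration.

```python
class DataSourceClient:
    """Base class for per-protocol clients; each source implements
    fetch() for its own server protocol."""
    def fetch(self, query):
        raise NotImplementedError

class CsvHttpClient(DataSourceClient):
    """Stand-in for a source serving CSV rows over HTTP."""
    def __init__(self, rows):
        self.rows = rows
    def fetch(self, query):
        return [r for r in self.rows if r["station"] == query]

class FtpBulkClient(DataSourceClient):
    """Stand-in for a source exposing bulk files over FTP."""
    def __init__(self, files):
        self.files = files
    def fetch(self, query):
        return self.files.get(query, [])

def retrieve_all(clients, query):
    """Generic retrieval loop: callers never see protocol details."""
    out = []
    for c in clients:
        out.extend(c.fetch(query))
    return out

clients = [
    CsvHttpClient([{"station": "K1", "temp": 21.5},
                   {"station": "K2", "temp": 19.0}]),
    FtpBulkClient({"K1": [{"station": "K1", "temp": 21.7}]}),
]
results = retrieve_all(clients, "K1")
```

Adding a new data source then means writing one adapter class, while the shared data model keeps the merged results uniform downstream.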
