11

Organizational Form of Disease Management Programs: A Transaction Cost Analysis

Chandaver, Nahush 14 November 2007 (has links)
Patient care programs such as wellness, preventive care and, specifically, disease management programs, which target the chronically ill population, are designed to reduce healthcare costs and improve health, while promoting the efficient use of healthcare resources and increasing productivity. The organizational form adopted by the health plan for these programs (i.e., in-sourced vs. outsourced) is an important factor in their success and in the extent to which the core objectives listed above are fulfilled. Transaction cost economics aims to explain an organization's working arrangement and why its sourcing decisions were made, by considering alternative organizational arrangements and comparing the costs of transacting under each. This research aims to understand the nature and sources of transaction costs, how they affect the sourcing decision for disease management and other programs, and their effect on the organization, using current industry data. Predictive models are used to obtain empirical estimates of the influence of each factor and to provide cost estimates for each available organizational form, irrespective of the form currently adopted. Analysis of primary data obtained by means of a web-based survey supports and confirms the effect of transaction cost factors on these programs. This implies that, in order to reap financial rewards and serve patients better, health plans must aim to minimize transaction costs and select the organizational form that best accomplishes this objective.
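The sourcing comparison described here lends itself to a toy illustration. Nothing in the sketch below comes from the thesis itself: the factor names (asset specificity, uncertainty, frequency, the classic transaction cost constructs) are assumptions, and synthetic data stands in for the web-based survey responses.

```python
# Toy illustration: estimate transaction costs per organizational form and
# pick the cheaper form. Factor names follow classic TCE constructs; the
# data are synthetic stand-ins for the survey described in the abstract.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
# Hypothetical survey factors per health-plan program:
# columns = asset_specificity, uncertainty, frequency.
X = rng.uniform(0, 1, size=(n, 3))
insourced = rng.integers(0, 2, size=n)      # observed sourcing choice
# Synthetic "observed" transaction cost, with different slopes per form.
cost = np.where(insourced == 1,
                5 + 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.5 * X[:, 2],
                4 + 4.0 * X[:, 0] + 2.5 * X[:, 1] + 0.2 * X[:, 2]) + rng.normal(0, 0.3, n)

# Fit one cost model per organizational form, echoing the abstract's
# "cost estimates for each organizational form available".
m_in = LinearRegression().fit(X[insourced == 1], cost[insourced == 1])
m_out = LinearRegression().fit(X[insourced == 0], cost[insourced == 0])

program = np.array([[0.8, 0.6, 0.3]])       # a hypothetical high-specificity program
c_in, c_out = m_in.predict(program)[0], m_out.predict(program)[0]
print(f"predicted cost in-sourced: {c_in:.2f}, outsourced: {c_out:.2f}")
print("recommend:", "in-source" if c_in < c_out else "outsource")
```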
12

Linking seafloor mapping and ecological models to improve classification of marine habitats : opportunities and lessons learnt in the Recherche Archipelago, Western Australia

Baxter, Katrina January 2008 (has links)
[Truncated abstract] Spatially explicit marine habitat data is required for effective resource planning and management across large areas, although mapped boundaries typically lack rigour in explaining what factors influence habitat distributions. Accurate, quantitative methods are needed. In this thesis I aimed to assess the utility of ecological models in determining what factors limit the spatial extent of marine habitats, which types of modelling methods produced the most accurate predictions, and what influenced model results. To achieve this, a broad-scale marine habitat survey was first undertaken in the Recherche Archipelago, on the south coast of Western Australia, using video and sidescan sonar. Broad and more detailed functional habitat types were mapped for 1,054 km² of the Archipelago. Broad habitats included high and low profile reefs, sand, seagrass and extensive rhodolith beds, although considerable variation within these broad types could be identified from video. Different densities of seagrass were identified, and reefs were dominated by macroalgae, filter feeder communities, or a combination of both. Geophysical characteristics (depth, substrate, relief) and dominant benthic biota were recorded and then modelled using decision trees and a combination of generalised additive models (GAMs) and generalised linear models (GLMs) to determine the factors influencing broad and functional habitat variation. Models were developed for the entire Archipelago (n=2769) and for a subset of data in Esperance Bay (n=797) that included exposure to wave conditions (mean maximum wave height and mean maximum shear stress) calculated from oceanographic models. Additional distance variables from the mainland and islands were also derived and used as model inputs for both datasets. Model performance varied across habitats, with no one method better than another in overall model accuracy for each habitat type, although prevalent classes (>20%) such as high profile reefs with macroalgae and dense seagrass were the most reliably predicted (Area Under the Curve >0.7). ... This highlighted not only issues of data prevalence, but also how ecological models can be used to test the reliability of classification schemes. Care should be taken when mapping predicted habitat occurrence with broad habitat models: it should not be assumed that all habitats within a broad type will be defined spatially, as this may result in the distribution of distinctive and unique habitats, such as filter feeders, being underestimated or not identified at all. More data is needed to improve prediction of these habitats. Despite the limitations identified, the results provide direction for future field sampling, to ensure appropriate variables are sampled and classification schemes are carefully designed to improve descriptions of habitat distributions. Reliable habitat models that make ecological sense will assist future assessments of biodiversity within habitats as well as provide improved data on the probability of habitat occurrence. This data and the methods developed will be a valuable resource for reserve selection models that prioritise sites for the management and planning of marine protected areas.
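As a minimal sketch of the presence/absence side of this workflow (the GLM step with AUC evaluation; a GAM would need a dedicated library such as pyGAM), with invented predictor names and synthetic data standing in for the survey dataset:

```python
# Minimal presence/absence habitat GLM sketch (logistic regression) with
# AUC evaluation, echoing the abstract's GLM + AUC workflow. Predictor
# names and synthetic data are illustrative, not the thesis dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000
depth = rng.uniform(2, 60, n)               # m
relief = rng.uniform(0, 5, n)               # seabed profile
wave_stress = rng.uniform(0, 1, n)          # e.g. from an oceanographic model
X = np.column_stack([depth, relief, wave_stress])
# Synthetic truth: dense seagrass favours shallow, low-relief, sheltered sites.
logit = 3 - 0.12 * depth - 0.6 * relief - 2.5 * wave_stress
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
glm = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, glm.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.2f}   (the thesis treats AUC > 0.7 as reliable)")
```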
13

Predictive Modelling of Heavy Metals in Urban Lakes

Lindström, Martin January 2000 (has links)
Heavy metals are well-known environmental pollutants. In this thesis predictive models for heavy metals in urban lakes are discussed and new models presented. The base of predictive modelling is empirical data from field investigations of many ecosystems covering a wide range of ecosystem characteristics. Predictive models focus on the variabilities among lakes and on the processes controlling the major metal fluxes.

Sediment and water data for this study were collected from ten small lakes in the Stockholm area, the eastern parts of Lake Mälaren, the innermost areas of the Stockholm archipelago, and from literature studies. By correlating calculated metal loads to the land use of the catchment areas (describing urban and natural land use), the influence of local urban status on the metal load could be evaluated. Copper was most influenced by the urban status and less by the regional background. The opposite pattern was shown for cadmium, nickel and zinc (and mercury). Lead and chromium were in between these groups.

It was shown that the metal load from the City of Stockholm is considerable. There is a 5-fold increase in sediment deposition of cadmium, copper, mercury and lead in the central areas of Stockholm compared to surrounding areas.

The results also include a model for the lake-characteristic concentration of suspended particulate matter (SPM), and new methods for empirical model testing. The results indicate that the traditional distribution (or partition) coefficient Kd (L kg⁻¹) is unsuitable for modelling the particle association of metals. Instead the particulate fraction, PF (dimensionless), defined as the ratio of the particulate-associated concentration to the total concentration, is recommended. Kd is affected by spurious correlations due to its definition as a ratio that includes SPM, and also by secondary spurious correlations with the many variables correlated to SPM. It was also shown that Kd has a larger inherent within-system variability than PF. This is important in modelling.
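The Kd versus PF argument can be made concrete with a small simulation. In the sketch below (not from the thesis), Cp is the particulate-bound and Cd the dissolved metal concentration, and the definitions Kd = (Cp/SPM)/Cd and PF = Cp/(Cp + Cd) follow standard usage; because SPM appears in Kd's own definition, Kd correlates with SPM even when all concentrations are independent noise.

```python
# Demonstrate the spurious Kd-SPM correlation the thesis warns about.
# Kd = (Cp / SPM) / Cd includes SPM in its own definition, so Kd
# correlates with SPM even for pure-noise concentrations; PF does not.
import numpy as np

rng = np.random.default_rng(2)
n = 500
SPM = rng.uniform(1, 50, n)        # suspended particulate matter, mg/L
Cp = rng.uniform(0.1, 1.0, n)      # particulate-bound metal (independent noise)
Cd = rng.uniform(0.1, 1.0, n)      # dissolved metal (independent noise)

Kd = (Cp / SPM) / Cd               # traditional partition coefficient
PF = Cp / (Cp + Cd)                # particulate fraction, in [0, 1]

print("corr(Kd, SPM) =", round(np.corrcoef(Kd, SPM)[0, 1], 2))   # clearly negative
print("corr(PF, SPM) =", round(np.corrcoef(PF, SPM)[0, 1], 2))   # ~0
```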
15

Predicting High-cost Patients in General Population Using Data Mining Techniques

Izad Shenas, Seyed Abdolmotalleb 26 October 2012 (has links)
In this research, we apply data mining techniques to nationally representative US expenditure data to predict very high-cost patients (those in the top 5 cost percentiles) among the general population. Samples are derived from the Medical Expenditure Panel Survey's Household Component data for 2006-2008, comprising 98,175 records. After pre-processing, partitioning, and balancing the data, the final MEPS dataset of 31,704 records is modeled with Decision Trees (C5.0 and CHAID) and Neural Networks. Multiple predictive models are built and their performance is analyzed using various measures including accuracy, G-mean, and Area Under the ROC Curve (AUC). We conclude that the CHAID tree returns the best G-mean and AUC for the top-performing predictive models, ranging from 76% to 85% and from 0.812 to 0.942, respectively. Among a primary set of 66 attributes, the best predictors of membership in the top-5% high-cost population include an individual's overall health perception, history of blood cholesterol checks, history of physical/sensory/mental limitations, age, and history of colonic prevention measures. It is worth noting that we do not use the number of visits to care providers as a predictor, since it is highly correlated with expenditure and offers no new insight into the data (i.e., it is a trivial predictor); we predict high-cost patients without knowing how many times a patient saw a doctor or was hospitalized. Consequently, the results of this study can be used by policy makers, health planners, and insurers to plan and improve the delivery of health services.
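For reference, G-mean is the geometric mean of sensitivity and specificity, a standard summary for heavily imbalanced problems such as a 5% positive class. The sketch below uses synthetic data rather than MEPS, and since scikit-learn has no CHAID implementation, a CART decision tree stands in:

```python
# Evaluate a tree classifier on an imbalanced (~5% positive) problem with
# G-mean = sqrt(sensitivity * specificity) and AUC, as in the abstract.
# Synthetic data; a CART tree stands in for CHAID (not in scikit-learn).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score, confusion_matrix

X, y = make_classification(n_samples=20000, n_features=20, weights=[0.95],
                           random_state=0)          # ~5% "high-cost" class
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" plays the role of the balancing step in the abstract.
tree = DecisionTreeClassifier(max_depth=6, class_weight="balanced",
                              random_state=0).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, tree.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
g_mean = np.sqrt(sensitivity * specificity)
auc = roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1])
print(f"G-mean = {g_mean:.3f}, AUC = {auc:.3f}")
```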
16

Using GIS modelling as a tool to search for late Pleistocene and early Holocene archaeology on Quadra Island, British Columbia

Vogelaar, Colton 20 December 2017 (has links)
The archaeological sites that inform the hypothesized coastal route of entry to the Americas are limited, with fewer than twenty sites older than 11,500 years before present on the Northwest Coast of North America. Late Pleistocene and early Holocene archaeological sites are hard to find in this expansive, remote, and heavily forested area due to the complexity of paleoenvironmental change since the last glacial maximum. The study area for this thesis, Quadra Island in the Discovery Islands, lies in the middle of a gap in knowledge about this time period. Changes in relative sea level have proven especially important for early site location on the coast. Predictive modelling has been used to search for new archaeological sites on the Northwest Coast and is a basic component of cultural resource management practice in British Columbia. Such quantitative modelling can aid archaeological site survey, but must be used critically. This study integrates quantitative and qualitative modelling with a heuristic method to incorporate more humanistic modelling theory and address some critiques of the traditional predictive modelling approach. In this study, quantitative modelling highlighted target areas, which were then evaluated by qualitative modelling. A selection of targets was then subjected to focussed archaeological survey to evaluate the methodology and results, and to search for new sites. This method is important theoretically because modelling is explicitly used only as a tool and does not label the landscape with values of potential. Modelling was applied in two areas of Light Detection and Ranging (LiDAR) data, which collectively host more than 4,000 kilometres of potential paleo-coastline. Fifteen new archaeological sites were found during this study, with at least two sites radiocarbon dated to ca. 9,500 calibrated years ago. This methodology could be applied in different archaeological contexts, such as underwater and in other coastal regions. The results of this study have important implications for coastal First Nations and for cultural resource management in the province. / Graduate / 2018-11-30
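The quantitative step of such a workflow can be pictured as a simple weighted raster overlay. The sketch below is purely illustrative (invented layers, weights, and thresholds, not the thesis model) and shows how cells might be ranked into survey targets before qualitative evaluation:

```python
# Toy weighted-overlay sketch of quantitative target generation: score each
# raster cell by proximity to a modelled paleo-shoreline and by terrain
# slope, then keep the top cells as candidate survey targets for
# qualitative review. Layers and weights are invented, not the thesis model.
import numpy as np

rng = np.random.default_rng(3)
shape = (100, 100)
dist_paleo_shore = rng.uniform(0, 500, shape)   # m to modelled paleo-shoreline
slope = rng.uniform(0, 45, shape)               # degrees, e.g. from a LiDAR DEM

# Normalise each criterion to [0, 1], higher = more favourable.
near_shore = 1 - dist_paleo_shore / dist_paleo_shore.max()
gentle = 1 - slope / slope.max()

score = 0.7 * near_shore + 0.3 * gentle         # hypothetical weights
targets = score >= np.quantile(score, 0.95)     # top 5% of cells
print("candidate target cells:", int(targets.sum()))
```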
18

Towards the development of a predictive rent model in Nigeria and South Africa

Oladeji, Jonathan Damilola January 2019 (has links)
This research aimed to identify reliable economic data for predictive rent modelling in South Africa and Nigeria, as a contribution to the growing debate on real estate rental forecasting from an African perspective. The data were obtained from the Iress Expert database, Stats SA, the Central Bank of Nigeria (CBN) database, the National Bureau of Statistics and the World Bank. The South African economic data comprised time series for a fifteen-year period between Quarter 1 (Q1) 2003 and Quarter 4 (Q4) 2018. The Nigerian data comprised time series for a ten-year period between Q1 2008 and Q4 2018. The logit model was proposed, among others, as a macroeconomic modelling approach that captures future rental direction based on general economic movements and likely turning points. The model is particularly useful because it relies on macroeconomic and indirect/listed real estate data, which are more readily available to real estate investment decision-makers. This study identified that coincident indicators and the exchange rate both have significant positive relationships with Johannesburg Stock Exchange (JSE) listed real estate, making them compelling indicators for the South African market. For the Nigerian listed real estate market indicator, the model also responded to the interest rate, the consumer price index and the Treasury Bill Rate (TBR) as reliable indicators. In addition, the analysis revealed the logit regression framework to be an improvement over naïve or ordinary linear rent models in these emerging African real estate markets. The use of macroeconomic modelling proved a viable alternative to the scarce comparable transaction data that serve as the bedrock of traditional real estate investment appraisal. Thus, a forecasting model for early detection of turning points in commercial real estate rental values in South Africa and Nigeria was developed for use in real estate investment decisions. The study concluded that not all economic indicators lead the listed real estate market. The relationship between the macroeconomy and listed real estate is largely significant, but it may be positive or negative. / Dissertation (MSc)--University of Pretoria, 2019. / African Real Estate Research (AFRER) for IREBS Foundation. / Construction Economics / MSc / Unrestricted
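A logit model of the kind proposed regresses a binary rental direction on macroeconomic indicators. The sketch below uses synthetic quarterly series with invented coefficients (not the Iress/Stats SA/CBN data) purely to show the mechanics:

```python
# Sketch of a logit rent-direction model: P(rent rises next quarter) as a
# function of macro indicators. Synthetic quarterly data with invented
# indicator values; the thesis's actual series come from Iress, Stats SA,
# the CBN, the National Bureau of Statistics and the World Bank.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 64                                           # 16 years of quarters
coincident = rng.normal(0, 1, n)                 # coincident business-cycle indicator
exchange_rate = rng.normal(0, 1, n)              # exchange-rate movement
cpi = rng.normal(0, 1, n)                        # consumer price index change

# Synthetic truth: rental direction follows the indicators plus noise.
latent = 0.9 * coincident + 0.6 * exchange_rate - 0.3 * cpi
rent_up = (latent + rng.normal(0, 1, n) > 0).astype(int)

X = sm.add_constant(np.column_stack([coincident, exchange_rate, cpi]))
model = sm.Logit(rent_up, X).fit(disp=False)
print(model.summary(xname=["const", "coincident", "exchange_rate", "cpi"]))
```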
19

Smart Cube Predictions for Online Analytic Query Processing in Data Warehouses

Belcin, Andrei 01 April 2021 (has links)
A data warehouse (DW) integrates many sources of transactional data into a single non-volatile, time-variant collection that can provide decision support to managerial roles within an organization. For this application, the database server needs to process multiple users' queries by joining various datasets and loading the result into main memory to begin calculations. In current systems this process reacts to users' input and can be undesirably slow. Previous studies showed that building a personalized smart cube from a single user's query patterns and loading this smaller subset into main memory significantly shortened the query response time. The LPCDA framework developed in this research handles multiple users' query demands, whose query patterns are subject to change (so-called concept drift) and to noise. To this end, the LPCDA framework detects changes in user behaviour and dynamically adapts the personalized smart cube definition for the group of users. Numerous data marts (DMs), as components of the DW, are subject to intense aggregation to support analytics requested by automated systems and human users. Consequently, there is a growing need to manage the supply of data into the main memory closest to the CPU that computes the query, in order to reduce the response time from the moment a query arrives at the DW server. This thesis therefore proposes an end-to-end adaptive learning ensemble for resource allocation of cuboids within a DM, so that a relevant smart cube is constructed just in time, before it is needed, in the spirit of the just-in-time inventory management strategy applied in other real-world scenarios. The algorithms comprising the ensemble draw on predictive methodologies from Bayesian statistics, data mining, and machine learning, and reflect changes in the data-generating process through a number of change detection algorithms. Given different operational constraints and data-specific considerations, the ensemble can, to an effective degree, determine which cuboids in the lattice of a DM to pre-construct into a smart cube before users submit their queries, thereby yielding a quicker response than static schema views or no action at all.
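One way to picture the planner's job: track which cuboids users actually request, detect when that distribution drifts, and pre-build the current top-k. The sketch below is a deliberately simplified stand-in, decayed counters plus a naive drift check, not the LPCDA ensemble itself:

```python
# Simplified cuboid-prefetch sketch: exponentially decayed request counts
# pick the top-k cuboids to pre-aggregate; a naive drift check triggers a
# rebuild when the popular set shifts. Not the LPCDA algorithms themselves.
from collections import defaultdict

class SmartCubePlanner:
    def __init__(self, k=3, decay=0.95):
        self.k, self.decay = k, decay
        self.weights = defaultdict(float)   # cuboid -> decayed request count
        self.prebuilt = set()

    def record_query(self, cuboid):
        for c in self.weights:
            self.weights[c] *= self.decay   # older queries matter less
        self.weights[cuboid] += 1.0
        top = set(sorted(self.weights, key=self.weights.get,
                         reverse=True)[: self.k])
        if top != self.prebuilt:            # drift in the popular set
            self.prebuilt = top
            print("rebuild smart cube with cuboids:", sorted(top))

planner = SmartCubePlanner()
for q in ["(region,month)", "(region,month)", "(product)", "(region,month)",
          "(product,day)", "(product,day)", "(product,day)", "(product,day)"]:
    planner.record_query(q)
```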
20

Prediction of Rate of Disease Progression in Parkinson’s Disease Patients Based on RNA-Sequence Using Deep Learning

Ahmed, Siraj 06 November 2020 (has links)
The advent of recent high-throughput sequencing technologies has produced a largely unexplored wealth of genomic and transcriptomic data that might help answer various research questions about Parkinson's disease (PD) progression. While the literature reports various predictive models that use longitudinal clinical data for disease progression, there is no predictive model based on RNA-Seq data of PD patients. This study investigates how to predict PD progression at a patient's next medical visit by capturing longitudinal temporal patterns in the RNA-Seq data. Data provided by the Parkinson's Progression Markers Initiative (PPMI) include 423 PD patients with a variable number of visits over a period of 4 years. We propose a predictive model based on a Recurrent Neural Network (RNN) with dense connections. The results show that the proposed architecture is able to predict PD progression from high-dimensional RNA-Seq data with a Root Mean Square Error (RMSE) of 6.0 and a rank-order correlation of r = 0.83 (p < 0.0001) between the predicted and actual disease status. We show empirical evidence that adding dense connections and batch normalization to the RNN layers boosts its training and generalization capability.
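An RNN with dense (skip) connections and batch normalization for sequence regression might look roughly like the following Keras sketch. Every size here is invented; real inputs would be per-visit RNA-Seq feature vectors and the target a disease-severity score:

```python
# Sketch of an RNN with dense (skip) connections and batch normalization
# for sequence regression, echoing the architecture the abstract describes.
# Shapes, layer sizes, and the random data are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

n_visits, n_features = 5, 256                # hypothetical: 5 visits, reduced RNA-Seq dims

inputs = layers.Input(shape=(n_visits, n_features))
x1 = layers.GRU(64, return_sequences=True)(inputs)
x1 = layers.BatchNormalization()(x1)
# Dense (skip) connection: the second recurrent layer also sees the raw inputs.
x2 = layers.GRU(64)(layers.Concatenate()([inputs, x1]))
x2 = layers.BatchNormalization()(x2)
output = layers.Dense(1)(x2)                 # predicted progression score

model = Model(inputs, output)
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])

# Smoke test on random data standing in for per-visit RNA-Seq features.
X = np.random.rand(32, n_visits, n_features).astype("float32")
y = np.random.rand(32, 1).astype("float32")
model.fit(X, y, epochs=1, verbose=0)
```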
