  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Modelling And Predicting Binding Affinity Of Pcp-like Compounds Using Machine Learning Methods

Erdas, Ozlem 01 September 2007
Machine learning methods have been promising tools in science and engineering, and their use in chemistry and drug design has advanced since the 1990s. In this study, molecular electrostatic potential (MEP) surfaces of PCP-like compounds are modelled and visualized in order to extract features for predicting binding affinity. In the modelling step, Cartesian coordinates of MEP surface points are mapped onto a spherical self-organizing map. The resulting maps are visualized using the values of the electrostatic potential, and these values also provide the features for the prediction system. Support vector machines and the partial least squares method are used to predict the binding affinity of the compounds, and the results are compared.
32

A Methodology for Scheduling Operating Rooms Under Uncertainty

Davila, Marbelly Paola 01 January 2013
An operating room (OR) is considered one of the most costly functional areas within a hospital, as well as one of its major profit centers. Managing an OR department is a challenging task that requires integrating many actors (e.g., patients, surgeons, nurses, technicians) who may have conflicting interests and priorities. Considering these aspects, this dissertation develops a simulation-based methodology for scheduling operating rooms under uncertainty, reflecting the complexity, uncertainty, and variability associated with surgery. We split the process of scheduling ORs under uncertainty into two main components. First, we designed a research roadmap for modeling surgical procedure duration (from incision to wound closure) based on surgery volume and time variability. Then, using a real surgical dataset, we modeled procedure duration with parametric and distribution-free predictive methods. We found that Support Vector Regression performs better than Generalized Linear Models, increasing prediction accuracy on unseen data by at least 5.5%. Next, we developed a simulation-based methodology for scheduling ORs through a case study. For that purpose, we first built one-day feasible schedules, using the 60th, 70th, 80th, and 90th percentiles of procedure duration to allocate surgical procedures to ORs under four different allocation policies. We then used a discrete-event simulation model to evaluate the robustness of these initial feasible schedules, considering the stochastic duration of all OR activities and the arrival of emergency surgical cases. We found that, on average, elective cases waited almost twice as long as emergency cases. In addition, we observed no clear effect of how being more conservative within each scheduling policy impacts elective waiting times. By contrast, there is a clear effect of the scheduling policy and scheduling percentile on emergency waiting times.
Thus, as we increase the percentile, the waiting times for emergency cases increase markedly under half of the scheduling policies but are affected less under the other half. OR utilization and OR overtime in a "virtual" eight-operating-room hospital range between 67% and 88%, and between 97 and 111 minutes, respectively. Moreover, both performance metrics depend not only on the scheduling policy and scheduling percentile but are also strongly affected by increases in the emergency arrival rate. Finally, we fit a multivariate multiple regression model to the output of the simulation model to assess its robustness and the extent to which these results generalize to a single, aggregate hospital goal. Further research should include a true stochastic optimization model to integrate optimization techniques into the simulation analysis.
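The percentile-based allocation step described above can be sketched as follows (the procedure names, duration samples, and first-fit policy are illustrative assumptions, not the dissertation's data or exact policies):

```python
# Hedged sketch of percentile-based OR scheduling: each procedure is booked
# at a chosen duration percentile of its historical distribution, then
# assigned first-fit to an OR with remaining capacity. Numbers are invented.

def book_durations(history, percentile):
    """Planned duration per procedure type at the given percentile."""
    return {
        proc: sorted(samples)[int(percentile / 100 * (len(samples) - 1))]
        for proc, samples in history.items()
    }

def first_fit_schedule(cases, planned, n_rooms, day_minutes=480):
    """Assign each case to the first OR whose day still has room for it."""
    rooms = [0] * n_rooms                     # minutes already booked per OR
    schedule = {r: [] for r in range(n_rooms)}
    for proc in cases:
        d = planned[proc]
        for r in range(n_rooms):
            if rooms[r] + d <= day_minutes:   # fits within the OR day
                rooms[r] += d
                schedule[r].append(proc)
                break
    return schedule

history = {"hernia": [55, 60, 70, 90, 120], "knee": [80, 95, 100, 130, 150]}
planned = book_durations(history, 80)         # conservative 80th percentile
print(first_fit_schedule(["hernia", "knee", "hernia", "knee"], planned, 2))
```

A higher percentile books longer planned durations, which is exactly the "being more conservative" lever whose downstream effects the simulation study evaluates.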
33

Predicting the Clinical Outcome in Patients with Traumatic Brain Injury using Clinical Pathway Scores

Mendoza Alonzo, Jennifer Lorena 01 January 2013
The Polytrauma/TBI Rehabilitation Center (PRC) of the Veterans Affairs Hospital (VAH) treats patients with Traumatic Brain Injury (TBI). These patients have major motor and cognitive disabilities, and most stay in the hospital for many months without major improvement. This suggests that patients, families, and the VAH could benefit if healthcare providers had a way to better assess or "predict" a patient's progression. The individual progress of patients over time is assessed using a pre-defined multi-component performance measure, the Functional Independence Measure (FIM), at admission and discharge, and a semi-quantitative documentation parameter, the Clinical Pathway (CP) score, at weekly intervals. This work uses de-identified and transformed data to explore developing a model that predicts the clinical outcome of patients with TBI as early as possible. The clinical outcome is measured as a percentage of recovery using CP scores. The results of this research will allow healthcare providers to improve resource management (e.g., staff, equipment, space) by setting goals for each patient, as well as to give families more accurate and timely information about the status and needs of the patient.
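A minimal sketch of the kind of early-outcome model described above, with invented numbers standing in for the de-identified VAH data (the score scale, week count, and least-squares model are illustrative assumptions, not the thesis's method):

```python
# Illustrative sketch only (synthetic numbers, not VAH patient data): a
# least-squares fit that predicts a final recovery percentage from the
# CP scores observed in a patient's first weeks of rehabilitation.
import numpy as np

# Rows: one patient each; columns: CP scores at weeks 1-3 (hypothetical scale).
early_cp = np.array([[10, 14, 20], [8, 9, 11], [12, 18, 26], [7, 8, 8]])
recovery_pct = np.array([70.0, 35.0, 85.0, 25.0])    # outcome at discharge

# Ordinary least squares with an intercept term.
A = np.hstack([early_cp, np.ones((len(early_cp), 1))])
coef, *_ = np.linalg.lstsq(A, recovery_pct, rcond=None)

new_patient = np.array([9, 12, 16, 1.0])             # week 1-3 scores + bias
print("predicted recovery %:", float(new_patient @ coef))
```

The point of the sketch is the shape of the problem: weekly CP scores available early in the stay become predictors of the recovery percentage known only much later.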
34

Evaluation of a Guided Machine Learning Approach for Pharmacokinetic Modeling

January 2017
A medical control system is a real-time controller that uses a predictive model of human physiology to estimate and control drug concentration in the human body. The Artificial Pancreas (AP), which regulates blood glucose in patients with Type 1 diabetes (T1D), is an example of such a control system. The predictive model in the control system, such as the Bergman Minimal Model (BMM), is based on a physiological modeling technique that separates the body into a number of anatomical compartments, where each compartment's effect on the body is determined by its physiological parameters. These models are less accurate because of unaccounted physiological factors affecting the target values, and estimating a large number of physiological parameters through an optimization algorithm is computationally expensive and can get stuck in local minima. This work evaluates a machine learning (ML) framework in which an ML model is guided by physiological models. A support vector regression model guided by a modified BMM is implemented to estimate blood glucose levels. Physical activity and endogenous glucose production are key factors contributing to hypoglycemia events; thus, this work modifies the Bergman Minimal Model (Bergman et al. 1981) for more accurate estimation of blood glucose levels. Results show that the SVR outperformed the BMM by 0.164 average RMSE across 7 different patients in a free-living scenario. This computationally inexpensive data-driven model can potentially learn parameters more accurately over time. In conclusion, the proposed prediction model is promising for modeling the physiological elements of living systems. / Dissertation/Thesis / Masters Thesis Computer Science 2017
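A hedged sketch of the glucose and remote-insulin-action equations of the Bergman Minimal Model named above (the parameter values, basal levels, and insulin input are illustrative assumptions, not the thesis's fitted values):

```python
# A minimal sketch of the Bergman Minimal Model (Bergman et al. 1981):
# G is plasma glucose, X is remote insulin action. Parameters illustrative.
import numpy as np
from scipy.integrate import solve_ivp

p1, p2, p3 = 0.028, 0.025, 1.3e-5     # illustrative rate constants (1/min)
Gb, Ib = 90.0, 10.0                   # basal glucose (mg/dL), insulin (uU/mL)

def insulin(t):
    return Ib + 40.0 * np.exp(-t / 60.0)   # hypothetical plasma-insulin input

def bmm(t, y):
    G, X = y
    dG = -(p1 + X) * G + p1 * Gb           # glucose kinetics
    dX = -p2 * X + p3 * (insulin(t) - Ib)  # remote insulin action
    return [dG, dX]

# Start from an elevated glucose level and integrate over four hours.
sol = solve_ivp(bmm, (0, 240), [250.0, 0.0])
print("glucose at t=240 min:", round(sol.y[0, -1], 1))
```

A "guided" ML model in the sense described above would use such physiological trajectories to constrain or inform a data-driven regressor rather than fitting the raw data alone.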
35

Dynamic demand modelling and pricing decision support systems for petroleum

Fox, David January 2014
Pricing decision support systems have been developed to help retail companies optimise the prices they set when selling goods and services. This research aims to enhance the essential forecasting and optimisation techniques that underlie these systems. First, the method of Dynamic Linear Models is applied to provide sales forecasts of higher accuracy than current methods. Secondly, the method of Support Vector Regression is used to forecast future competitor prices. This new technique aims to produce more accurate forecasts than the assumption currently used in pricing decision support systems that each competitor's price will simply remain unchanged. Thirdly, when competitor prices are not forecast, a new pricing optimisation technique is presented which provides the highest guaranteed profit. Existing pricing decision support systems optimise price assuming that competitor prices will remain unchanged, but this optimisation cannot be trusted since competitor prices are never actually forecast. Finally, when competitor prices are forecast, an exhaustive search of a game tree is presented as a new way to optimise a retailer's price. This optimisation incorporates future competitor price moves, something which is vital when analysing the success of a pricing strategy but is absent from current pricing decision support systems. Each approach is applied to the forecasting and optimisation of daily retail vehicle fuel pricing using real commercial data, showing improved results in each case.
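The "highest guaranteed profit" idea above amounts to a maximin choice over candidate prices; here is a sketch under a hypothetical linear demand model (the demand function, price grid, and unit cost are assumptions for illustration, not the thesis's fuel-pricing model):

```python
# Sketch of a maximin pricing rule: when competitor prices cannot be
# forecast, choose the price whose worst-case profit over all competitor
# responses is largest. Demand model and numbers are hypothetical.

def demand(own_price, competitor_price):
    # Hypothetical linear demand: cheaper than the competitor sells more.
    return max(0.0, 1000.0 - 8.0 * own_price + 5.0 * competitor_price)

def maximin_price(candidate_prices, competitor_prices, unit_cost):
    def worst_case_profit(p):
        return min((p - unit_cost) * demand(p, c) for c in competitor_prices)
    return max(candidate_prices, key=worst_case_profit)

prices = [1.30 + 0.01 * i for i in range(21)]       # 1.30 .. 1.50 per litre
best = maximin_price(prices, competitor_prices=prices, unit_cost=1.20)
print("price with highest guaranteed profit:", round(best, 2))
```

The game-tree approach mentioned in the abstract generalises this one-shot worst-case reasoning to a sequence of alternating price moves.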
36

Aktiemarknadsprognoser: En jämförande studie av LSTM- och SVR-modeller med olika dataset och epoker / Stock Market Forecasting: A Comparative Study of LSTM and SVR Models Across Different Datasets and Epochs

Nørklit Johansen, Mads, Sidhu, Jagtej January 2023
Predicting stock market trends is a complex task due to the inherent volatility and unpredictability of financial markets. Nevertheless, accurate forecasts are of critical importance to investors, financial analysts, and stakeholders, as they directly inform decision-making processes and risk management strategies associated with financial investments. Inaccurate forecasts can lead to notable financial consequences, emphasizing the crucial and demanding task of developing models that provide accurate and trustworthy predictions. This article addresses this challenging problem by utilizing a long short-term memory (LSTM) model to predict stock market developments. The study undertakes a thorough analysis of the LSTM model's performance across multiple datasets, critically examining the impact of different timespans and epochs on the accuracy of its predictions. Additionally, a comparison is made with a support vector regression (SVR) model using the same datasets and timespans, which allows for a comprehensive evaluation of the relative strengths of the two techniques. The findings offer insights into the capabilities and limitations of both models, thus paving the way for future research in stock market prediction methodologies. Crucially, the study reveals that larger datasets and an increased number of epochs can significantly enhance the LSTM model's performance. Conversely, the SVR model exhibits significant challenges with overfitting. Overall, this research contributes to ongoing efforts to improve financial prediction models and provides potential solutions for individuals and organizations seeking to make accurate and reliable forecasts of stock market trends.
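A minimal sketch of the SVR side of the comparison above, using a sliding window of past values to predict the next one (the synthetic sine-based series, window length, and hyperparameters are illustrative assumptions, not the study's datasets):

```python
# Hedged sketch: sliding-window SVR for next-step time-series prediction.
# The sine-based series is a synthetic stand-in for real stock prices.
import numpy as np
from sklearn.svm import SVR

series = np.sin(np.linspace(0, 12 * np.pi, 400)) + 100.0  # synthetic "prices"
window = 10

# Each row of X is a window of past values; y is the value that follows it.
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

split = 300
model = SVR(kernel="rbf", C=100.0, epsilon=0.01).fit(X[:split], y[:split])
pred = model.predict(X[split:])

rmse = float(np.sqrt(np.mean((pred - y[split:]) ** 2)))
print("out-of-sample RMSE:", round(rmse, 4))
```

An LSTM would consume the same windowed data as sequences; the comparison in the study varies the dataset size and training epochs, which mainly affects the LSTM side.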
37

Utilizing Artificial Intelligence to Predict Severe Weather Outbreak Severity in the Contiguous United States

Williams, Megan Spade 04 May 2018
Severe weather outbreaks are violent weather events that can cause major damage and injury. Unfortunately, forecast models can mispredict the intensity of these events: outbreak predictions are frequently inaccurate with regard to intensity, hindering forecasters' efforts to confidently inform the public about intensity risks. This research aims to improve outbreak intensity forecasting by using severe weather parameters and an outbreak ranking index to predict outbreak intensity. Areal coverage values of gridded severe weather diagnostic variables, computed from the North American Regional Reanalysis (NARR) database for outbreaks spanning 1979 to 2013, will be used as predictors in an artificial intelligence modeling ensemble. NARR fields will be dynamically downscaled to a National Severe Storms Laboratory-defined WRF 4-km North American domain on which areal coverages will be computed. The research will result in a predictive model together with verification information on its performance.
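The areal-coverage predictors described above reduce a gridded diagnostic field to the fraction of the domain exceeding a threshold; a sketch with a synthetic field (the CAPE-like field, grid size, and threshold are illustrative assumptions, not NARR output):

```python
# Illustrative sketch of an "areal coverage" predictor: the fraction of
# grid points where a severe-weather diagnostic exceeds a threshold.
# The field below is synthetic, not downscaled NARR data.
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical CAPE grid (J/kg) over a 120 x 150 model domain.
cape = rng.gamma(shape=2.0, scale=600.0, size=(120, 150))

def areal_coverage(field, threshold):
    """Fraction of grid points at or above the threshold."""
    return float(np.mean(field >= threshold))

coverage = areal_coverage(cape, 2000.0)
print(f"coverage of CAPE >= 2000 J/kg: {coverage:.3f}")
```

One such scalar per diagnostic variable per outbreak day becomes a predictor column for the ensemble that ranks outbreak intensity.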
38

Predicting Reactor Instability Using Neural Networks

Hubert, Hilborn January 2022
The study of instabilities in boiling water reactors is of significant importance to the safety with which they can be operated, as the instabilities can damage the reactor, posing risks to both equipment and personnel. The instabilities that concern this paper are progressive growths in the oscillating power of boiling water reactors. Since the thermal power is oscillatory, it is important to be able to identify whether or not the power amplitude is stable. The main focus of this paper has been the development of a neural network estimator of these instabilities, fitting a non-linear model function to data by estimating its parameters. In doing this, the ambition was to optimize the networks to the point that they can deliver near "best-guess" estimations of the parameters which define these instabilities, evaluating the usefulness of such networks when applied to problems like this. The goal was to design both MLP (Multi-Layer Perceptron) and SVR/KRR (Support Vector Regression / Kernel Ridge Regression) models and improve them to the point that they provide reliable and useful information about the waves in question. This goal was accomplished only in part, as the SVR/KRR models proved to have some difficulty in ascertaining the phase shift of the waves. Overall, however, these models prove very useful in this kind of task, succeeding with a reasonable degree of confidence in calculating the different parameters of the waves studied.
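The parameter-estimation task above can be illustrated with a classical least-squares fit to a growing oscillation (the model form, parameter values, noise level, and the use of `scipy.optimize.curve_fit` instead of a neural network are assumptions for this sketch):

```python
# Sketch of the estimation problem (synthetic data, not reactor
# measurements): a growing oscillation A * exp(b*t) * sin(w*t + phi)
# whose parameters an estimator should recover from noisy samples.
import numpy as np
from scipy.optimize import curve_fit

def growing_wave(t, A, b, w, phi):
    return A * np.exp(b * t) * np.sin(w * t + phi)

t = np.linspace(0, 20, 500)
true = (1.0, 0.05, 2.0, 0.5)          # amplitude, growth rate, freq, phase
rng = np.random.default_rng(1)
signal = growing_wave(t, *true) + 0.05 * rng.normal(size=t.size)

# A growth rate b > 0 indicates an unstable (diverging) oscillation.
# The initial guess is deliberately placed near the truth for this sketch.
params, _ = curve_fit(growing_wave, t, signal, p0=(0.9, 0.04, 2.0, 0.4))
print("estimated growth rate b:", round(params[1], 3))
```

The networks in the paper play the role of this fit, mapping sampled power traces directly to the wave parameters; the phase `phi` is the parameter the SVR/KRR models reportedly struggled with.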
39

The development and analysis of a computationally efficient data driven suit jacket fit recommendation system

Bogdanov, Daniil January 2017
In this master thesis work we design and analyze a data-driven suit jacket fit recommendation system which aims to guide shoppers in the process of assessing garment fit over the web. The system is divided into two stages. In the first stage we analyze labelled customer data and train supervised learning models to predict optimal suit jacket dimensions for unseen shoppers, determining an appropriate model for each suit jacket dimension. In stage two the recommendation system uses the results from stage one and sorts a garment collection from best fit to least fit; the sorted collection is what the fit recommendation system returns. In this thesis work we propose a particular design of stage two that aims to reduce the complexity of the system, at the cost of reduced quality of the results. The trade-offs are identified and weighed against each other. The results in stage one show that simple supervised learning models with linear regression functions suffice when the independent and dependent variables align at particular landmarks on the body. If style preferences are also to be incorporated into the supervised learning models, non-linear regression functions should be considered to account for the increased complexity. The results in stage two show that the complexity of the recommendation system can be made independent of the complexity of how fit is assessed. As technology enables more advanced ways of assessing garment fit, such as 3D body-scanning techniques, the proposed design of reducing the complexity of the recommendation system allows highly complex techniques to be utilized without affecting the responsiveness of the system at run time.
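The stage-two sorting described above can be sketched as a weighted distance between a garment's measurements and the predicted optimal dimensions (the dimension names, weights, and garment data are illustrative assumptions, not the thesis's design):

```python
# Hedged sketch of stage two: given predicted optimal jacket dimensions for
# a shopper, sort a garment collection from best to least fit by a weighted
# squared distance to those dimensions. All data here is illustrative.

def fit_distance(garment, optimal, weights):
    return sum(
        weights[k] * (garment[k] - optimal[k]) ** 2 for k in optimal
    )

optimal = {"chest": 102.0, "waist": 94.0, "sleeve": 64.0}  # cm, from stage one
weights = {"chest": 2.0, "waist": 1.0, "sleeve": 1.0}      # chest matters most

collection = [
    {"id": "J1", "chest": 104.0, "waist": 96.0, "sleeve": 65.0},
    {"id": "J2", "chest": 102.0, "waist": 98.0, "sleeve": 64.0},
    {"id": "J3", "chest": 110.0, "waist": 100.0, "sleeve": 66.0},
]

ranked = sorted(collection, key=lambda g: fit_distance(g, optimal, weights))
print([g["id"] for g in ranked])  # → ['J1', 'J2', 'J3']
```

Because the sort only needs one scalar per garment, the cost of however fit is assessed (even 3D scanning) stays in stage one, which is the complexity separation the thesis argues for.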
40

Prediction of the future trend of e-commerce / Prognostisering av trender inom e-handel i Sverige

Engström, Freja, Nilsson Rojas, Disa January 2021
In recent years more companies have invested in electronic commerce as a result of more customers using the internet as a tool for shopping. However, the basics of marketing still apply to online stores, and thus companies need to conduct market analyses of customers and the online market to successfully target customers online. In this report, we propose the use of machine learning, a tool that has received a lot of attention for its ability to tackle a range of problems, to predict future trends in electronic commerce in Sweden. More precisely, we predict the future share of users of electronic commerce, both in general and for certain demographics. We build three different models: polynomial regression, SVR, and ARIMA. The findings from the constructed forecasts were that there are differences between different demographics of customers and between groups within a certain demographic. Furthermore, the results showed that the forecasts were more accurate when modelling a certain demographic than the entire population. Companies could thereby use the models to predict the behaviour of certain smaller segments of the market and use that in their marketing to attract these customers.
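A hedged sketch of the polynomial-regression variant mentioned above, extrapolating a user-share trend one year ahead (the yearly figures, polynomial degree, and forecast horizon are invented for illustration, not the report's survey data):

```python
# Illustrative sketch of polynomial-regression trend forecasting for the
# share of e-commerce users. The figures are invented, not Swedish survey data.
import numpy as np

years = np.array([2014, 2015, 2016, 2017, 2018, 2019, 2020])
share = np.array([0.61, 0.64, 0.66, 0.69, 0.71, 0.74, 0.81])  # hypothetical

# Fit a degree-2 polynomial; centre the years for numerical stability.
t = years - years.mean()
coeffs = np.polyfit(t, share, deg=2)

forecast_2021 = float(np.polyval(coeffs, 2021 - years.mean()))
print("forecast share of e-commerce users 2021:", round(forecast_2021, 3))
```

Fitting the same model separately per demographic group, as the report does, simply means repeating this fit on each group's own yearly series.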
