11

A non-continuum approach to obtain a macroscopic model for the flow of traffic

Tyagi, Vipin 17 September 2007 (has links)
Existing macroscopic models for the flow of traffic treat traffic as a continuum or employ techniques similar to those used in the kinetic theory of gases. Continuum models for the flow of traffic predict spurious two-way propagation of disturbances that is physically unacceptable. The number of vehicles in a typical section of a freeway does not justify treating traffic as a continuum, and the basic premises of kinetic theory are likewise inappropriate for the flow of traffic. This dissertation develops a model for the flow of traffic that neither treats traffic as a continuum nor uses notions from kinetic theory, and corroborates it with traffic data collected from sensors deployed on the US 183 freeway in Austin, Texas, USA. The flow of traffic exhibits distinct characteristics under different conditions, reflecting congestion during peak hours and relatively free motion during off-peak hours. Describing these diverse characteristics therefore requires different governing equations for the different regimes of traffic response, and such an approach is followed in this dissertation. An observer based on the extended Kalman filtering technique is used to estimate the traffic state. Historical traffic data were used for model calibration, and the estimated model parameters have consistent values across different traffic conditions. These estimated parameters are subsequently used to estimate the state of traffic in real time. A short-term traffic state forecasting approach based on the non-continuum traffic model, incorporating weighted historical and real-time traffic information, has been developed, along with a methodology for predicting trip travel time based on this approach. Ten- and fifteen-minute predictions of traffic state and trip travel time agree well with the traffic data collected on US 183.
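The observer mentioned above rests on the extended Kalman filter. As a hedged illustration, here is one predict/update cycle of a generic discrete-time EKF, not the dissertation's specific traffic model; all function and parameter names are placeholders:

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One predict/update cycle of a discrete-time extended Kalman filter.
    x: state estimate, P: state covariance, z: new measurement,
    f/h: nonlinear transition/measurement functions,
    F/H: their Jacobians evaluated at the current estimate,
    Q/R: process/measurement noise covariances."""
    # Predict: propagate state and covariance through the model
    x_pred = f(x)
    Fx = F(x)
    P_pred = Fx @ P @ Fx.T + Q
    # Update: correct with the measurement innovation
    Hx = H(x_pred)
    S = Hx @ P_pred @ Hx.T + R                 # innovation covariance
    K = P_pred @ Hx.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hx) @ P_pred
    return x_new, P_new
```

In a traffic application, `x` would hold the state of each road segment (e.g. density or speed) and `z` the sensor readings for the current interval.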
12

Freeway Travel Time Prediction Using Data from Mobile Probes

Izadpanah, Pedram 08 November 2010 (has links)
It is widely agreed that estimates of freeway segment travel times are more highly valued by motorists than other forms of traveller information. The provision of real-time travel time estimates is becoming relatively common in many large urban centres in the US and overseas. Presently, most traveller information systems operate on estimated travel times rather than predicted travel times, yet such systems are most beneficial when they are built upon predicted traffic information (e.g. predicted travel time). A number of researchers have proposed models to predict travel time; one class of techniques is based on traffic flow theory and the concept of shockwaves. Most past efforts at identifying shockwaves have focused on shockwave analysis based on fixed sensors, such as the loop detectors commonly used in many jurisdictions. However, the latest advances in wireless communications provide an opportunity to obtain vehicle trajectory data that could potentially be used to derive traffic conditions over a wide spatial area. This research proposes a new methodology to detect and analyze shockwaves based on vehicle trajectory data and uses this information to predict travel time for freeway sections. The main idea behind the methodology is that the average speed on a section of roadway is constant unless a shockwave is created by a change in the flow rate or density of traffic. In the proposed methodology, the road section is first discretized into a number of smaller segments, and the average speed of each segment is calculated from the information obtained from probe vehicles during the current time interval. If a new shockwave is detected, the average speed of the road segment is adjusted to account for the change in traffic conditions.
To detect shockwaves, a two-phase piecewise linear regression is first used to find the points at which a vehicle changed its speed. The points that correspond to intersections of shockwaves with the trajectories of probe vehicles are then identified using a data filtering procedure, and a linear clustering algorithm is employed to group the different shockwaves. Finally, a linear regression model is applied to find the propagation speed and the spatial and temporal extent of each shockwave. The performance of this methodology was tested using one simulated signalized intersection, trajectories obtained from video processing of a freeway section in California, and trajectories obtained from two freeway sections in Ontario. The results of this thesis show that the proposed methodology is able to detect shockwaves and predict travel time even with a small sample of vehicles. They also show that traffic data acquisition systems based on anonymous tracking of vehicles are a viable substitute for traditional traffic data collection systems, especially in relatively rural areas.
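The first step, two-phase piecewise linear regression, can be sketched as follows. This is a minimal brute-force variant under assumed inputs (arrays of timestamps and speeds for one probe trajectory); the thesis's exact formulation may differ:

```python
import numpy as np

def two_phase_breakpoint(t, v):
    """Fit two line segments to a speed trajectory v(t) and return the
    index of the candidate breakpoint that minimizes total squared
    error -- i.e. the point where the vehicle changed its speed."""
    best_i, best_sse = None, np.inf
    for i in range(2, len(t) - 2):               # at least 2 points per segment
        sse = 0.0
        for ts, vs in ((t[:i], v[:i]), (t[i:], v[i:])):
            coef = np.polyfit(ts, vs, 1)          # least-squares line fit
            sse += float(np.sum((np.polyval(coef, ts) - vs) ** 2))
        if sse < best_sse:
            best_i, best_sse = i, sse
    return best_i
```

Each detected breakpoint is a candidate intersection of the probe's trajectory with a shockwave; clustering such points across many probes would then recover the shockwave itself.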
13

Retention time predictions in Gas Chromatography

Thewalim, Yasar January 2011 (has links)
In gas chromatography, analytes are separated by differences in their partitioning between a mobile phase and a stationary phase. The temperature program, column dimensions, stationary and mobile phases, and flow rate are all parameters that can affect the quality of the separation. Achieving a good separation in a short amount of time requires optimizing these parameters, which can often be quite a tedious task. Using computer simulations, it is possible both to gain a better understanding of how the different parameters govern retention and separation of a given set of analytes, and to optimize the parameters within minutes. In the research presented here, this was achieved by taking a thermodynamic approach that used the two parameters ΔH (enthalpy change) and ΔS (entropy change) to predict retention times for gas chromatography. By determining these compound partition parameters, it was possible to predict retention times for analytes in temperature-programmed runs. The parameters were obtained by measuring the retention times of n-alkanes, PAHs, alcohols, amines, and the compounds in the Grob calibration mixture in isothermal runs. The isothermally obtained partition coefficients, together with the column dimensions and specifications, were then used for computer simulation with in-house software. The two-parameter model was found to be both robust and precise, and could be a useful tool for the prediction of retention times: the relative differences between predicted and experimental retention times for the different compound groups were generally less than 1%. The scientific studies (Papers I-IV) are summarized and discussed in the main text of this thesis. / At the time of the doctoral defense, the following paper was unpublished and had a status as follows: Paper 4: Submitted.
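The two-parameter idea can be illustrated with a small sketch. It assumes the common van 't Hoff form ln(k·β) = −ΔH/(RT) + ΔS/R (β being the column phase ratio) and a constant hold-up time, stepping the analyte through the column under a linear temperature program; the phase ratio, program values, and numerical scheme here are illustrative, not those of the thesis's in-house software:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def retention_factor(dH, dS, T, phase_ratio=250.0):
    """Retention factor k from the two-parameter thermodynamic model:
    ln(k * beta) = -dH/(R*T) + dS/R. dH in J/mol, dS in J/(mol*K)."""
    return math.exp(-dH / (R * T) + dS / R) / phase_ratio

def programmed_retention_time(dH, dS, T0=323.15, ramp=10 / 60.0,
                              t_hold_up=60.0, dt=0.1):
    """Predict retention time under a linear temperature program by
    stepping the analyte along the column: in each step dt it covers
    a fraction dt / (t_hold_up * (1 + k(T))) of the column length."""
    t, x = 0.0, 0.0
    while x < 1.0:
        T = T0 + ramp * t                         # linear ramp in K
        x += dt / (t_hold_up * (1 + retention_factor(dH, dS, T)))
        t += dt
    return t
```

A more negative ΔH (stronger interaction with the stationary phase) gives a larger retention factor and hence later elution, which is the qualitative behavior the model captures.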
14

Machine Learning-based path prediction for emergency vehicles

Rosberg, Felix, Ghassemloi, Aidin January 2018 (has links)
No description available.
15

Adaptace programů ve Scale zaměřená na výkon / Performance based adaptation of Scala programs

Kubát, Petr January 2017 (has links)
Dynamic adaptivity of a computer system is its ability to modify its behavior according to the environment in which it executes. It allows the system to achieve better performance, but usually requires a specialized architecture and adds complexity. The thesis presents an analysis and design of a framework that allows simple and fluent performance-based adaptive development at the level of functions and methods. It closely examines the API requirements and the possibilities of integrating such a framework into the Scala programming language using its advanced syntactic constructs. On the theoretical level, it deals with the problem of selecting the most appropriate function to execute for a given input, based on measurements of previous executions. In the provided framework implementation, the main emphasis is on modularity and extensibility, and many possible future extensions are outlined. The solution is evaluated on a variety of development scenarios, ranging from input adaptation of algorithms to environment adaptation of complex distributed computations in Apache Spark.
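As a rough illustration of selecting the most appropriate implementation from measurements of previous executions, here is an epsilon-greedy sketch in Python rather than the thesis's Scala framework; the class and parameter names are invented:

```python
import random
import time
from collections import defaultdict

class AdaptiveFunction:
    """Pick among interchangeable implementations based on measured run
    times for a given input size -- a minimal epsilon-greedy sketch of
    performance-based function adaptation."""

    def __init__(self, impls, epsilon=0.1):
        self.impls = impls
        self.epsilon = epsilon
        self.history = defaultdict(list)  # (impl index, input size) -> times

    def __call__(self, xs):
        size = len(xs)
        if random.random() < self.epsilon or not self.history:
            i = random.randrange(len(self.impls))          # explore
        else:
            # exploit: lowest mean recorded time for this input size
            means = {}
            for (j, s), ts in self.history.items():
                if s == size:
                    means[j] = sum(ts) / len(ts)
            i = min(means, key=means.get) if means else \
                random.randrange(len(self.impls))
        start = time.perf_counter()
        result = self.impls[i](xs)
        self.history[(i, size)].append(time.perf_counter() - start)
        return result
```

All implementations must be semantically equivalent; the framework only chooses which one to run, e.g. `AdaptiveFunction([insertion_sort, merge_sort])` for small versus large inputs.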
16

Freeway Travel Time Estimation and Prediction Using Dynamic Neural Networks

Shen, Luou 16 July 2008 (has links)
Providing transportation system operators and travelers with accurate travel time information allows them to make more informed decisions, yielding benefits for individual travelers and for the entire transportation system. Most existing advanced traveler information systems (ATIS) and advanced traffic management systems (ATMS) use instantaneous travel times estimated from current measurements, assuming that traffic conditions remain constant in the near future. For more effective applications, it has been proposed that ATIS and ATMS should use travel times predicted for short-term future conditions rather than instantaneous travel times measured or estimated for current conditions. This dissertation investigates short-term freeway travel time prediction using dynamic neural networks (DNNs) based on data collected by radar traffic detectors installed along a freeway corridor. DNNs are a class of neural networks particularly suitable for predicting variables like travel time, but they have not been adequately investigated for this purpose. Before this investigation, it was necessary to identify methods of data imputation to account for the missing data usually encountered when collecting data with traffic detectors. It was also necessary to identify a method for estimating travel time on the freeway corridor from point-detector data. A new travel time estimation method, referred to as the Piecewise Constant Acceleration Based (PCAB) method, was developed and compared with other methods reported in the literature. The results show that one of the simple travel time estimation methods (the average speed method) works as well as the PCAB method, and both outperform the other methods. This study also compared the travel time prediction performance of three DNN topologies with different memory setups.
The results show that one DNN topology (the time-delay neural network) outperforms the other two for the investigated prediction problem. This topology also performs slightly better than the simple multilayer perceptron (MLP) topology that has been used in a number of previous studies for travel time prediction.
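For reference, the average speed method mentioned above can be sketched in one common form (a simple variant assumed here; the dissertation's exact formulation may differ):

```python
def travel_time_average_speed(segment_lengths_km, spot_speeds_kmh):
    """Estimate corridor travel time from point-detector data with the
    simple average-speed method: each segment's travel time is its
    length divided by the speed measured at its detector. Returns
    the total corridor travel time in seconds."""
    return sum(length / speed
               for length, speed in zip(segment_lengths_km, spot_speeds_kmh)
               ) * 3600.0
```

For example, two 1 km segments at 60 km/h and 30 km/h give 60 s + 120 s = 180 s. The method assumes each detector's spot speed holds over its whole segment, which is why more elaborate methods like PCAB were worth comparing against.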
17

Predicting Transit Times For Outbound Logistics

Brooke Renee Cochenour (8996768) 23 June 2020 (has links)
On-time delivery of supplies to industry is essential because delays can disrupt production schedules. The aim of the proposed application is to predict transit times for outbound logistics, thereby allowing suppliers to plan for timely mitigation of risks during shipment planning. The predictive model consists of a classifier trained for each specific source-destination pair using historical shipment, weather, and social media data, and it estimates transit times for future shipments using a support vector machine (SVM). These estimates were validated on four case study routes of varying distances in the United States, with a predictive model trained for each route. The results show that the contribution of each input feature to the predictive ability of the model varies by route. The mean absolute error (MAE) of the model also varies by route, depending on the availability of historical shipment data for training and testing as well as the availability of weather and social media data. In addition, it was found that the inclusion of historical traffic data provided by INRIX™, for which sample data was available on one of the routes, improves the accuracy of the model. The main limitations of the proposed approach are the availability of historical shipment data and the quality of social media data. If the data is available, however, the proposed methodology can be applied to any supplier with high-volume shipments in order to develop a predictive model for outbound transit time delays over any land route.
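A minimal sketch of the per-route SVM classifier idea, using entirely synthetic data and hypothetical features (the thesis's actual feature set, labels, and data sources differ):

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical features per historical shipment on one route:
# [departure hour, precipitation in mm, weather-event flag].
# Label: 1 if the shipment arrived late, 0 otherwise. Synthetic only.
rng = np.random.default_rng(0)
X = rng.uniform([0.0, 0.0, 0.0], [24.0, 20.0, 1.0], size=(200, 3))
y = (X[:, 1] > 10).astype(int)      # toy rule: heavy rain -> late

# One SVM classifier per source-destination pair
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, y)

# Classify a planned future shipment (evening departure, heavy rain)
pred = clf.predict([[17.0, 15.0, 1.0]])
```

Training one model per route reflects the thesis's design: the influence of weather, traffic, and social media signals differs route by route, so a shared model would blur those differences.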
18

An interoperable electronic medical record-based platform for personalized predictive analytics

Abedtash, Hamed 31 May 2017 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Precision medicine refers to delivering customized treatment to patients based on their individual characteristics, and aims to reduce adverse events, improve diagnostic methods, and enhance the efficacy of therapies. Among efforts to achieve the goals of precision medicine, researchers have used observational data to develop predictive models that best predict health outcomes from patients' variables. Although numerous predictive models have been reported in the literature, not all present high predictive power, and as a result, not all reach clinical settings to help healthcare professionals make decisions at the point of care. This lack of generalizability stems from the fact that no comprehensive medical data repository exists holding the information of all patients in the target population. Even if patient records were available from other sources, the datasets might need further processing prior to analysis because of differences in database structure and in the coding systems used to record concepts. This project intends to fill the gap by introducing an interoperable solution that receives patient electronic health records from other data sources via the Health Level Seven (HL7) messaging standard, transforms the records to the Observational Medical Outcomes Partnership (OMOP) common data model (CDM) for population health research, and applies predictive models to patient data to make predictions about health outcomes. The project comprises three studies. The first study introduces the CCD-TOOMOP parser and evaluates the ability of the OMOP CDM to accommodate patient data transferred in HL7 consolidated continuity of care documents (CCDs). The second study explores how to adopt the Predictive Model Markup Language (PMML) to standardize the dissemination of OMOP-based predictive models.
Finally, the third study introduces the Personalized Health Risk Scoring Tool (PHRST), a pilot, interoperable OMOP-based model scoring tool that processes the embedded models and generates risk scores in real time. The final product addresses the objectives of precision medicine and has the potential not only to be employed at the point of care to deliver individualized treatment to patients, but also to contribute to health outcomes research by easing the collection of clinical outcomes across diverse medical centers, independent of system specifications.
19

Churn prediction using time series data / Prediktion av kunduppsägelser med hjälp av tidsseriedata

Granberg, Patrick January 2020 (has links)
Customer churn is problematic for any business trying to expand its customer base. The acquisition of new customers to replace churned ones carries additional costs, whereas taking measures to retain existing customers may prove more cost-efficient. As such, it is of interest to estimate the time until a potential churn occurs for every customer, in order to take preventive measures. The application of deep learning and machine learning to this type of problem using time series data is relatively new, and there is much recent research on the topic. This thesis is based on the assumption that early signs of churn can be detected in the temporal changes of customer behavior. Recurrent neural networks, and more specifically long short-term memory (LSTM) and gated recurrent unit (GRU) networks, are suitable contenders since they are designed to take the sequential time aspect of the data into account. Random forest (RF) and support vector machine (SVM) models are frequently used in related research. The problem is solved through a classification approach, and a comparison is made between implementations using LSTM, GRU, RF, and SVM. According to the results, LSTM and GRU perform similarly, and both slightly outperform RF and SVM in predicting which customers will churn in the coming six months. All models could potentially lead to cost savings according to simulations (using non-official but reasonable costs assigned to each prediction outcome). Predicting the time until churn is a more difficult problem: none of the models gives reliable estimates, but all are significantly better than random predictions.
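As an illustration of the classification approach on windowed behavioral data, here is a sketch of the random forest baseline on synthetic customer time series (the feature construction and labels are invented; the thesis's data and models differ):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for customer behavior time series: 12 monthly
# activity counts per customer; churners (label 1) trend downward.
rng = np.random.default_rng(1)
n = 300
labels = rng.integers(0, 2, n)                    # 1 = will churn
t = np.arange(12)
X = rng.poisson(10, (n, 12)).astype(float)
X[labels == 1] *= np.maximum(0.1, 1 - 0.08 * t)   # declining activity

# RF cannot model temporal order directly, so each 12-month window is
# flattened into one feature vector per customer -- the baseline the
# thesis compares against the recurrent (LSTM/GRU) models.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, labels)
```

The recurrent models instead consume the same window as an ordered sequence, which is the hypothesized advantage when churn signals are temporal trends rather than static feature values.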
20

Prediction of Ranking of Chromatographic Retention Times using a Convolutional Network / Rankning av kromatografisk retentionstid med hjälp av faltningsnätverk

Kruczek, Daniel January 2018 (has links)
In shotgun proteomics, the liquid chromatography step is used to separate peptides so that as few as possible are analyzed at the same time in the mass spectrometry step. Each peptide has a retention time: how long it takes to pass through the chromatography column. Predicting retention time can be used to increase peptide identification or to design targeted proteomics experiments. Machine learning methods such as support vector machines have given high prediction accuracy, but such methods require known features on which the retention time depends. In this thesis, we let a convolutional network learn to rank the retention times instead of predicting the retention times themselves. We also tested how prediction accuracy depends on the size of the training set. We found that pairwise ranking of peptides outperforms pointwise ranking, and that adding more training data kept increasing accuracy, without an increase in training time. This implies that accuracy could be increased further by training on even larger training sets.
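The pairwise ranking objective can be sketched as a RankNet-style logistic loss over ordered peptide pairs (a hedged illustration; the thesis's network architecture and exact loss may differ):

```python
import numpy as np

def pairwise_ranking_loss(scores, times):
    """RankNet-style pairwise loss on predicted scores: for every pair
    of peptides where times[i] < times[j], penalize the model unless
    scores[i] < scores[j]. Lower loss means better-ordered scores."""
    loss, pairs = 0.0, 0
    for i in range(len(times)):
        for j in range(len(times)):
            if times[i] < times[j]:
                # logistic loss on the score margin s_j - s_i
                loss += float(np.log1p(np.exp(-(scores[j] - scores[i]))))
                pairs += 1
    return loss / max(pairs, 1)
```

Because only the relative order of scores matters, the network never needs the absolute retention times during training, which is what distinguishes the ranking formulation from pointwise regression.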