  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
311

Predicting spontaneous racemate resolution using recent developments in crystal structure prediction

Kendrick, John, Gourlay, Matthew D., Neumann, M.A., Leusen, Frank J.J. January 2009 (has links)
A hybrid molecular mechanics and quantum mechanics solid-state DFT method is used to re-rank the stability of racemic and enantiopure crystal structures of four molecules: 4-hydroxymethyl-2-oxazolidinone, 5-hydroxymethyl-2-oxazolidinone, 2-(4-hydroxyphenyl)-2,5,5-trimethylpyrrolidine-1-oxy and 2-(3-hydroxyphenyl)-2,5,5-trimethylpyrrolidine-1-oxy. Previous work using a force-field-based method to predict these crystal structures indicated that the lattice energy may be a suitable criterion for predicting whether a chiral molecule will resolve spontaneously on crystallisation. However, in some cases, that method predicted an unrealistically high lattice energy for the structure corresponding to the experimentally observed one. The hybrid DFT method successfully predicts which molecules resolve spontaneously and, furthermore, predicts satisfactory lattice energies for all experimentally observed structures. Based on a comparison of the predicted lattice energies from the two methods, it is concluded that the force fields used were not sufficiently accurate to predict spontaneous resolution with any confidence, whereas the hybrid DFT method is shown to be sufficiently accurate for making such predictions.
312

An investigation of the relationships between the Inwald Personality Inventory, Nelson-Denny reading test and field training officer performance

Montgomery, Brandon G. 01 January 1998 (has links)
No description available.
313

Improving prediction accuracy of hard-to-predict branches using data value correlation

Farooq, Muhammad Umar 17 February 2014 (has links)
The performance of a modern pipelined processor depends on a steady flow of useful instructions. A branch instruction disrupts the sequential flow of instructions by presenting multiple paths through which a program can proceed. By predicting the branch outcome early, branch predictors allow the processor to continue fetching instructions from the predicted path. History-based dynamic branch predictors have been shown to reach high prediction accuracy, yet certain branch types continue to mispredict: multi-target indirect branches and data-dependent direct and indirect branches. These branches are hard to predict because their outcomes do not always exhibit repeatable patterns. This thesis describes branch prediction schemes that improve the prediction accuracy of hard-to-predict branches using data value correlation. In these schemes, instead of relying on branch history information, the compiler identifies program instructions whose output values strongly correlate with the branch outcome. These correlated instructions are tracked at run time, and their outputs are used for making branch predictions. Specifically, this thesis proposes the following two branch prediction schemes. (i) Value-Based BTB Indexing (VBBI) is a low-cost, compiler-guided scheme for predicting multi-target indirect branches. For every indirect branch, the compiler identifies an instruction whose output strongly correlates with the targets taken by the indirect branch. At run time, multiple branch targets are stored in, and subsequently accessed from, the BTB using an index formed by hashing the indirect branch PC with the output of the correlated instruction. (ii) The Store-Load-Branch (SLB) predictor is a compiler-assisted branch prediction scheme for data-dependent branches. Typically, data-dependent branches are associated with program data structures such as arrays and linked lists, and follow a store-load-branch execution sequence: a set of memory locations is written at an earlier point in the program; later, these locations are read and used to evaluate the branch condition. The branch outcome depends on the values stored in the data structure, which normally do not have repeatable patterns. In the SLB scheme, the compiler identifies all program points where the data structure associated with a data-dependent branch is modified. These marked store instructions are tracked at run time, and the stored values are used to compute branch flags ahead of time. Later, when the branch instruction is fetched, the pre-computed flags are read and used to make predictions. This thesis introduces these new branch prediction schemes, describes the hardware structures and compiler analysis needed to implement them, evaluates their performance impact, and estimates their area, power and timing overhead.
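The VBBI idea above can be illustrated with a toy Python sketch (my own, with hypothetical PC and target values, not the thesis's hardware design): the BTB index is formed by hashing the branch PC with the correlated instruction's output, so a single indirect branch can map to different predicted targets depending on the data value.

```python
# Illustrative VBBI-style sketch: index a branch target buffer (BTB) by
# hashing the branch PC with the output of a correlated instruction.
def vbbi_index(branch_pc: int, correlated_value: int, btb_size: int = 4096) -> int:
    """Form a BTB index from the branch PC and the correlated value."""
    return (branch_pc ^ (correlated_value * 0x9E3779B1)) % btb_size

class ValueBasedBTB:
    def __init__(self, size: int = 4096):
        self.size = size
        self.table = {}  # index -> predicted target (stands in for the BTB array)

    def predict(self, branch_pc: int, correlated_value: int):
        return self.table.get(vbbi_index(branch_pc, correlated_value, self.size))

    def update(self, branch_pc: int, correlated_value: int, actual_target: int):
        self.table[vbbi_index(branch_pc, correlated_value, self.size)] = actual_target

btb = ValueBasedBTB()
# One indirect branch (hypothetical PC 0x400123) with two data-dependent targets.
btb.update(0x400123, correlated_value=7, actual_target=0x400800)
btb.update(0x400123, correlated_value=9, actual_target=0x400900)
assert btb.predict(0x400123, 7) == 0x400800
assert btb.predict(0x400123, 9) == 0x400900
```

In hardware the hash would be a handful of XOR gates over PC and value bits; the dictionary here merely stands in for the BTB's tagged array.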
314

Steam Prediction at an Integrated Pulp and Paper Mill : Mondi Dynäs in Kramfors Municipality

Sehlberg, Jimmy January 2020 (has links)
The most important energy carrier at an integrated pulp and paper mill is steam; it is essential for powering components and machinery. These components create variations in the steam grid network that exceed the capacity of the steam accumulator. To avoid steam shortages, production leans towards keeping the accumulator nearly full, eventually leading to periods of overproduction. Excess steam must be released from the steam grid network, and this is done without energy recovery. The purpose of this work has therefore been to create a computer model able to predict steam consumption for the entire mill. The prediction shall eventually be used in the control systems for the steam producers and the accumulator. By knowing future steam demand, production and the accumulation level of steam can be planned more efficiently. This will allow a greater range of operation, since the predictor can provide information on when significant changes in steam demand will occur. By creating separate predictor models for the largest steam consumers, the final predictor consists of four minor models: the first is related to the five batch digesters, the second to one of the two paper machines (PM5), the third to the other paper machine (PM6), and the fourth to all other consumers. The separate predictors were created by gathering historical process data connected to their operation, and analyses of correlations were made to show what has significant effects on their steam consumption. The final predictor has shown the possibility of an R² above 0.7 for up to one hour ahead, although 60 minutes of accurate prediction cannot be guaranteed. Reliable prediction ranges were therefore determined for each of the four separate predictors: the two paper machines can be predicted 15 minutes ahead with an R² still above 0.8, the digesters have an R² above 0.6 for up to 25 minutes ahead, and the steam demand from the other components can be predicted with an average error of no more than 9% for 60 minutes ahead. / At an integrated pulp and paper mill, steam is the most vital energy carrier; it is used by the machines and components for pulp and paper production. The components' work cycles create fluctuations in the steam grid that exceed what the installed steam accumulator can handle. To meet the fluctuating demand, steam is produced at such a rate that the accumulator stays at a high level, which creates periods of overproduction with a full accumulator, so that steam must be vented off without energy recovery. For this reason, the purpose of this work has been to develop a prediction model that can provide the mill with a reliable forecast of its steam consumption. Knowledge of future demand shall eventually be implemented in the control of the accumulator and the steam producers; the forecasts will make it easier to meet coming demand, give a greater control margin in the accumulator, and allow better-adapted production. The completed prediction model consists of four smaller models based on the most influential components: the first belongs to the five batch digesters, the second is responsible for paper machine 5 (PM5), the third for paper machine 6 (PM6), and finally one predictor covers the remaining consumers. The predictors were created from theoretical demand and the relevant historical data affecting energy use, and analyses of correlations between different parameters gave them their predictive ability. The complete prediction model shows potential to deliver a reliable forecast with a coefficient of determination R² above 0.7 up to 60 minutes ahead. Although 60 minutes of reliable prediction is possible, it cannot be guaranteed, so the reliable prediction time is determined for each individual predictor. The paper machines show reliable prediction up to 15 minutes, with R² kept above 0.8 within that time; the digesters' prediction time is 25 minutes, with R² above 0.6; the remaining components show little variation within 60 minutes, and the average prediction error does not exceed 9% within that time.
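The reliability figures quoted above are coefficients of determination (R²). As a minimal sketch with made-up consumption values (not the mill's data), R² for a prediction horizon can be computed as:

```python
# Coefficient of determination: 1 minus the ratio of residual to total
# sum of squares, measured against the actual series' mean.
def r_squared(actual, predicted):
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

# Toy steam-consumption series (illustrative numbers only).
actual = [10.0, 12.0, 11.0, 13.0, 12.5]
predicted = [10.2, 11.8, 11.1, 12.7, 12.6]
print(round(r_squared(actual, predicted), 3))  # → 0.967
```

An R² of 1.0 means a perfect fit; the 0.7 and 0.8 thresholds above mark how far ahead the predictors remain usefully better than simply predicting the mean.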
315

CASE BASED REASONING – TAYLOR SERIES MODEL TO PREDICT CORROSION RATE IN OIL AND GAS WELLS AND PIPELINES

Khajotia, Burzin K. 17 April 2007 (has links)
No description available.
316

Predicting Future Locations and Arrival Times of Individuals

Burbey, Ingrid 13 May 2011 (has links)
This work has two objectives: (a) to predict people's future locations, and (b) to predict when they will be at given locations. Current location-based applications react to the user's current location; the progression from location-awareness to location-prediction can enable the next generation of proactive, context-predicting applications. Existing location-prediction algorithms use a sequence of locations to predict the next location in the sequence. In contrast, this dissertation incorporates temporal information, as timestamps, in order to predict someone's location at any time in the future. Sequence predictors based on Markov models have been shown to be effective predictors of someone's next location; this dissertation applies a Markov model to two-dimensional, timestamped location information to predict future locations. This dissertation also predicts when someone will be at a given location. Such predictions can support presence applications or an understanding of co-workers' routines. Predicting the times that someone will be at a given location is a very different and more difficult problem than predicting where someone will be at a given time: a location-prediction application may predict one or two key locations for a given time, while there could be hundreds of correct predictions for the times of day that someone will be in a given location. The approach used in this dissertation, a heuristic model loosely based on Market Basket Analysis, is the first to predict when someone will arrive at any given location. The models are applied to sparse WiFi mobility data collected on PDAs given to 275 college freshmen. The location-prediction model predicts future locations with 78-91% accuracy; the temporal-prediction model achieves 33-39% accuracy, rising to 77-91% if a tolerance of plus or minus twenty minutes is allowed. This dissertation also shows the characteristics of the timestamped location data that lead to the highest number of correct predictions: the best data cover large portions of the day, with fewer than three locations for any given timestamp.
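As a rough illustration of predicting location from timestamped observations, the following toy frequency table (my own sketch, far simpler than the dissertation's Markov model) tabulates where a person has been at each time bin and predicts the most frequent location:

```python
# Toy timestamped-location predictor: count (time bin -> location)
# observations and predict the most frequent location for a time bin.
from collections import Counter, defaultdict

class TimestampedLocationPredictor:
    def __init__(self):
        self.counts = defaultdict(Counter)  # time bin -> Counter of locations

    def train(self, observations):
        for time_bin, location in observations:
            self.counts[time_bin][location] += 1

    def predict(self, time_bin):
        if time_bin not in self.counts:
            return None  # no data for this time of day
        return self.counts[time_bin].most_common(1)[0][0]

# Two days of made-up data: the person is usually in the library at 10:00.
p = TimestampedLocationPredictor()
p.train([("10:00", "library"), ("12:00", "cafe"),
         ("10:00", "library"), ("12:00", "gym")])
assert p.predict("10:00") == "library"
```

A real predictor over sparse WiFi traces would smooth across nearby time bins and days of the week; this sketch only shows the shape of the (timestamp, location) lookup.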
317

Machine learning approach for crude oil price prediction

Abdullah, Siti Norbaiti binti January 2014 (has links)
Crude oil prices impact the world economy and are thus of interest to economic experts and politicians. The oil price's volatile behaviour, which has moulded today's world economy, society and politics, has motivated and continues to motivate further study, and is expected to prompt new and interesting research challenges. In the present research, machine learning and computational intelligence utilising historical quantitative data, together with the linguistic element of online news services, are used to predict crude oil prices via five different models: (1) the Hierarchical Conceptual (HC) model; (2) the Artificial Neural Network-Quantitative (ANN-Q) model; (3) the Linguistic model; (4) the Rule-based Expert model; and, finally, (5) the Hybridisation of Linguistic and Quantitative (LQ) model. First, to understand the behaviour of the crude oil price market, the HC model functions as a platform to retrieve information that explains the behaviour of the market. This information is retrieved from Google News articles using the keyword "Crude oil price". Through a systematic approach, price data are classified into categories that explain the crude oil price's level of impact on the market, and this classification distinguishes crucial behaviour information contained in the articles. These distinguished data features are ranked hierarchically according to their level of impact and are used as a reference to discover the numeric data implemented in model (2). Model (2) is developed to validate the features retrieved in model (1). It introduces the Back Propagation Neural Network (BPNN) technique as an alternative to conventional techniques used for forecasting the crude oil market. The BPNN technique is shown in model (2) to produce more accurate and competitive results. Likewise, the features retrieved from model (1) are also validated and shown to cause market volatility.
In model (3), a more systematic approach is introduced to extract the features from the news corpus. This approach applies a content utilisation technique to news articles and mines news sentiments by applying a fuzzy grammar fragment extraction. To extract the features from the news articles systematically, a domain-customised ‘dictionary’ containing grammar definitions is built beforehand. These retrieved features are used as the linguistic data to predict the market’s behaviour with crude oil price. A decision tree is also produced from this model which hierarchically delineates the events (i.e., the market’s rules) that made the market volatile, and later resulted in the production of model (4). Then, model (5) is built to complement the linguistic character performed in model (3) from the numeric prediction model made in model (2). To conclude, the hybridisation of these two models and the integration of models (1) to (5) in this research imitates the execution of crude oil market’s regulators in calculating their risk of actions before executing a price hedge in the market, wherein risk calculation is based on the ‘facts’ (quantitative data) and ‘rumours’ (linguistic data) collected. The hybridisation of quantitative and linguistic data in this study has shown promising accuracy outcomes, evidenced by the optimum value of directional accuracy and the minimum value of errors obtained.
318

Investment Decision Support with Dynamic Bayesian Networks

Wang, Sheng-chung 25 July 2005 (has links)
The stock market plays an important role in the modern capital market, so the prediction of financial asset prices attracts people from many different areas. Moreover, it is commonly accepted that stock price movement generally follows a major trend, which makes forecasting the market trend an important mission for a prediction method. Accordingly, our approach targets the long-term trend rather than near-future movements or changes within a single trading day. Although there are various kinds of analyses for trend prediction, most of them use clear cuts or fixed thresholds to classify trends, so users (or investors) are not informed of the degree of confidence associated with a recommendation or trading signal. Therefore, in this research, we study an approach that offers the confidence of the trend analysis by providing the probability of each possible state given its historical data, through a Dynamic Bayesian Network. We incorporate the well-known principles of Dow's Theory to better model the trend of stock movements. The results of our experiment suggest that the financial performance of the proposed model is able to beat the buy-and-hold trading strategy when the time scope covers an entire trend cycle, meaning that for long-term investors our approach has high potential to win excess returns. At the same time, the trading frequency, and correspondingly the trading costs, can be reduced significantly.
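The buy-and-hold comparison mentioned above can be made concrete with a small sketch (toy prices and a hypothetical trend signal, not the thesis's DBN output): a trend-following strategy beats buy-and-hold when its signals sit out enough of the down-moves.

```python
# Compare a signal-driven strategy's return against the buy-and-hold baseline.
def buy_and_hold_return(prices):
    """Total return from holding over the whole series."""
    return prices[-1] / prices[0] - 1.0

def strategy_return(prices, signals):
    """signals[i] == 1 means 'hold the asset during period i -> i+1'."""
    ret = 1.0
    for i, hold in enumerate(signals):
        if hold:
            ret *= prices[i + 1] / prices[i]
    return ret - 1.0

prices = [100.0, 110.0, 105.0, 120.0]   # made-up price path
signals = [1, 0, 1]                      # hypothetical predictor sat out the dip
assert strategy_return(prices, signals) > buy_and_hold_return(prices)
```

Note that the strategy's edge here comes entirely from skipping the 110 → 105 dip; with noisy signals, trading costs (which the thesis argues its low-frequency approach keeps small) would eat into this margin.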
319

Flood forecasting using time series data mining

Damle, Chaitanya 01 June 2005 (has links)
Earthquakes, floods and rainfall represent a class of nonlinear systems termed chaotic, in which the relationships between variables in a system are dynamic and disproportionate, yet completely deterministic. Classical linear time series models have proved inadequate for the analysis and prediction of complex geophysical phenomena. Nonlinear approaches such as Artificial Neural Networks, Hidden Markov Models and Nonlinear Prediction are useful for forecasting daily discharge values in a river, but the focus of these methods is on forecasting the magnitudes of future discharge values, not on predicting floods. Chaos theory provides a structured explanation for irregular behaviour and anomalies in systems that are not inherently stochastic. The Time Series Data Mining methodology combines chaos theory and data mining to characterise and predict complex, nonperiodic and chaotic time series, focusing on the prediction of events.
320

Sequence-based prediction and characterization of disorder-to-order transitioning binding sites in proteins

Miri Disfani, Fatemeh Unknown Date
No description available.
