321

Performance Evaluation and Field Validation of Building Thermal Load Prediction Model

Sarwar, Riasat Azim 14 August 2015 (has links)
This thesis presents a performance evaluation and field validation study of a time- and temperature-indexed autoregressive with exogenous (4-3-5 ARX) building thermal load prediction model, with the aim of integrating the model into actual predictive control systems. The 4-3-5 ARX model is simple and computationally efficient, with relatively high prediction accuracy compared to existing, more sophisticated prediction models such as artificial neural network prediction models. However, performance evaluation and field validation of the model are essential steps before implementing it in practice. The performance of the model was evaluated under different climate conditions as well as under modeling uncertainty. A field validation study was carried out for three buildings at Mississippi State University. The results demonstrate that the 4-3-5 ARX model can predict building thermal loads accurately most of the time, indicating that the model can be readily implemented in predictive control systems.
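The 4-3-5 ARX structure is specific to the thesis; the sketch below only illustrates the general idea of an ARX load predictor fitted by least squares. The lag orders, variable names (`load`, `temp`), and synthetic data are assumptions for illustration, not the thesis's exact time/temperature indexing scheme.

```python
import numpy as np

def fit_arx(load, temp, n_lags_load=4, n_lags_temp=3):
    """Fit a simple ARX model: load[t] ~ past loads + past outdoor temps.
    Lag orders here are illustrative placeholders."""
    p = max(n_lags_load, n_lags_temp)
    rows, targets = [], []
    for t in range(p, len(load)):
        row = np.concatenate([
            load[t - n_lags_load:t],   # autoregressive terms
            temp[t - n_lags_temp:t],   # exogenous (temperature) terms
            [1.0],                     # intercept
        ])
        rows.append(row)
        targets.append(load[t])
    X, y = np.asarray(rows), np.asarray(targets)
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_next(coeffs, recent_load, recent_temp):
    """One-step-ahead load prediction from the most recent lags."""
    x = np.concatenate([recent_load, recent_temp, [1.0]])
    return float(x @ coeffs)

# Example with purely synthetic hourly data.
hours = np.arange(200)
temp = 10 + 8 * np.sin(2 * np.pi * hours / 24)
load = 50 + 2 * temp + np.random.default_rng(0).normal(0, 1, size=200)
coeffs = fit_arx(load, temp)
print(predict_next(coeffs, load[-4:], temp[-3:]))
```

A linear-in-parameters form like this is what keeps such a predictor computationally cheap relative to neural network prediction models.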
322

Relationship between training in vocational agriculture and success in selected courses in four agricultural curriculums at Kansas State University

Russell, Clinton. January 1960 (has links)
Call number: LD2668 .T4 1960 R85
323

Comparison of two groups of home economics freshman differing in scholastic potential using selected developmental factors

Helms, Patricia Irene. January 1966 (has links)
Call number: LD2668 .T4 1966 H481 / Master of Science
324

Using task network modeling to predict human error

Pop, Vlad L. 07 January 2016 (has links)
Human error taxonomies have been implemented in numerous safety-critical industries. These taxonomies have provided invaluable insight into understanding the underlying causes of human error; however, their utility for actually predicting future errors remains in question. A need has been identified for another approach to supplement what we can extrapolate from taxonomies and better predict human error. Task network modeling is a promising approach to human error prediction that had yet to be empirically evaluated. This study tested a task network modeling approach to predicting human error in the context of automotive assembly. The task network modeling architecture was expanded to include a set of predictors from the human error literature and used to model part of an operational automotive assembly plant. This manuscript contains three studies. Study 1 tested separate task network models for two different target areas of an active automotive assembly line. Study 2 tested the validity of predictions made by the models from Study 1, both within and across samples. Study 3 tested predictions across both models on a larger sample of vehicles. The expanded architecture accounted for 21.9% to 36.5% of the variance in human error and identified 12 explanatory variables that significantly predicted the occurrence of human error. Model outputs were used to compute prediction equations that were tested using binary logistic regression and then cross-validated twice using both split-half and cross-sample validation. Time Pressure, Visual Workload, Auditory Workload, Cognitive Workload, Psychomotor Workload, Task Frequency, Information Flow, Teamwork, and Equipment Feedback were significant predictors of human error in all three models that were tested. Information Presentation and Task Dependency varied in significance across samples, but both were significant in two of the three models. Shift and Hour into Shift were never significant in any of the three models. The variables that were most stable across studies were all related to the tasks being performed by each worker at each station; the variables related to the timing of errors, on the other hand, were never significant. The results indicate that an expanded task network architecture is an effective tool for predicting the situations and circumstances in which human errors will occur, but not the timing of when they will occur. Nevertheless, task network modeling was shown to provide useful, valid, and accurate predictions of human error and should continue to be developed as an error prediction tool.
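The prediction equations described above were tested with binary logistic regression and split-half validation; the fragment below is only a minimal sketch of that kind of analysis. The synthetic data, coding, and model settings are assumptions; only the predictor names are taken from the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative only: column names mirror predictors named in the abstract,
# but the data and scales are stand-ins.
predictors = ["time_pressure", "visual_workload", "auditory_workload",
              "cognitive_workload", "psychomotor_workload", "task_frequency",
              "information_flow", "teamwork", "equipment_feedback"]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(predictors)))   # stand-in task-level scores
y = rng.integers(0, 2, size=500)              # 1 = error observed at station

# Split-half validation, loosely following the approach described above.
X_fit, X_val, y_fit, y_val = train_test_split(X, y, test_size=0.5, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_fit, y_fit)
print("held-out accuracy:", model.score(X_val, y_val))
```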
325

An intelligent mobility prediction scheme for location-based service over cellular communications network

Daoud, Mohammad January 2012 (has links)
One of the most challenging problems introduced by cellular communications networks is mobility prediction for Location-Based Services (LBSs). Hence, an accurate and efficient mobility prediction technique is particularly needed for these networks. Mobility prediction incurs overheads on the transmission process, and these overheads affect properties of the cellular communications network such as delay, denial of service, manual filtering and bandwidth. The main goal of this research is to enhance a mobility prediction scheme for cellular communications networks through three phases. Firstly, current mobility prediction techniques are investigated. Secondly, new mobility prediction techniques are devised and examined, based on three hypotheses, that suit cellular communications network and mobile user (MU) resources, offering low computation cost and a high prediction success rate without using MU resources in the prediction process. Thirdly, a new mobility prediction scheme is generated that is based on different levels of mobility prediction. In this thesis, a new mobility prediction scheme for LBSs is proposed. It can be considered a combination of the cell and routing area (RA) prediction levels. For cell-level prediction, most current location prediction research is focused on generalized location models, where the geographic extent is divided into regular-shape cells. These models are not suitable for certain LBSs where the objectives are to compute and present on-road services. Such techniques are the New Markov-Based Mobility Prediction (NMMP) and the Prediction Location Model (PLM), which deal with inner cell structure and different levels of prediction, respectively. The NMMP and PLM techniques suffer from complex computation, accuracy rate regression and insufficient accuracy. In this thesis, Location Prediction based on a Sector Snapshot (LPSS) is introduced, which is based on a Novel Cell Splitting Algorithm (NCPA). This algorithm is implemented in a micro cell in parallel with the new prediction technique. Experimental results comparing the LPSS technique with two classic prediction techniques show the effectiveness and robustness of the new splitting algorithm and prediction technique. On the cell side, the proposed approach reduces the complexity cost and prevents the cell-level prediction technique from running in time slots that are too close together. For these reasons, the RA level avoids the cell-side problems. This research also discusses a New Routing Area Displacement Prediction for Location-Based Services (NRADP), which is based on a developed Ant Colony Optimization (ACO). Experimental results comparing the NRADP with Mobility Prediction based on an Ant System (MPAS) show the effectiveness of the new prediction technique: a higher prediction rate, a reduced search stagnation ratio, and a reduced computation cost.
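None of the NMMP, PLM, LPSS, or NRADP algorithms are reproduced here; the sketch below only illustrates the basic idea behind Markov-based next-cell mobility prediction, using an invented handover trace and cell labels.

```python
from collections import defaultdict

class MarkovCellPredictor:
    """First-order Markov predictor over cell/sector IDs.
    A generic sketch of Markov-based mobility prediction, not the
    thesis's NMMP, PLM, or LPSS techniques."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, current_cell, next_cell):
        # Count observed handovers from current_cell to next_cell.
        self.counts[current_cell][next_cell] += 1

    def predict(self, current_cell):
        # Return the most frequently observed successor cell, if any.
        successors = self.counts.get(current_cell)
        if not successors:
            return None
        return max(successors, key=successors.get)

# Example: train on a short handover trace, then predict the next cell.
predictor = MarkovCellPredictor()
trace = ["A", "B", "C", "B", "A", "B", "C"]
for cur, nxt in zip(trace, trace[1:]):
    predictor.observe(cur, nxt)
print(predictor.predict("B"))
```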
326

Influence of limiting working memory resources on contextual facilitation in language processing

Stewart, Oliver William Thomas January 2014 (has links)
Language processing is a complex task requiring the integration of many different streams of information. Theorists have considered that working memory plays an important role in language processing and that a reduction in available working memory resources will reduce the efficacy of the system. In debate, however, is whether there exists a single pool of resources from which all language processes draw, or whether the resource pool is functionally fractionated into modular subsections (e.g., syntactic processing, lexical processing). This thesis investigates the role that working memory capacity plays in the utilisation of context to facilitate language processing. We experimentally manipulated the resources available to each participant using a titrated extrinsic memory load (a string of digits whose length was tailored to each participant). Participants had to maintain the digits in memory while reading target sentences. Using this methodology we conducted six eye-tracking experiments to investigate how a reduction of working memory resources influences the use of context in different language processes. Two experiments examined the resolution of syntactic ambiguities (reduced relative clauses); three examined the resolution of lexical ambiguities (balanced homonyms such as "appendix"); and one explored semantic predictability (It was a windy day so the boy went to the park to fly his… kite). General conclusions are hard to draw in the face of variable findings. All three experimental areas (syntactic, lexical, and semantic) show that memory loads interact with context, but there is little consistency as to where and how this occurs. In the syntactic experiments we see hints of a general degradation in context use (supporting single-resource theories), whereas in the lexical and semantic experiments we see mixed support leaning in the direction of multiple-resource theories. Additionally, while individual experiments suggest that limiting working memory resources reduces the role that context plays in guiding both syntactic and lexical ambiguity resolution, more sophisticated statistical investigation indicates that these findings are not reliable. Taken together, the findings of all the experiments lead us to tentatively conclude that imposing limitations on working memory resources can influence the use of context in some language processes, but also that this influence is variable, subtle, and hard to detect statistically.
327

Evaluation of Instruction Prefetch Methods for Coresonic DSP Processor

Lind, Tobias January 2016 (has links)
With increasing demands on mobile communication transfer rates, the circuits in mobile phones must be designed for higher performance while maintaining low power consumption for increased battery life. One possible way to improve an existing architecture is to implement instruction prefetching. By predicting ahead of time which instructions will be executed, instructions can be prefetched from memory to increase performance, and instructions that will be executed again shortly can be stored temporarily to avoid fetching them from memory multiple times. By creating a trace-driven simulator, the existing hardware can be simulated while running a realistic scenario. Different methods of instruction prefetch can be implemented in this simulator to measure how they perform. It is shown that the execution time can be reduced by up to five percent and the number of memory accesses by up to 25 percent with a simple loop buffer and return stack. The execution time can be reduced even further with more complex methods such as branch target prediction and branch condition prediction.
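The thesis's trace-driven simulator models the Coresonic DSP hardware; the sketch below is only a generic illustration of the loop-buffer idea — counting instruction fetches that could be served from a small buffer instead of memory — with an invented address trace and buffer size.

```python
from collections import deque

def simulate_loop_buffer(trace, buffer_size=8):
    """Count fetches that hit a small buffer of recently fetched
    instruction addresses. A generic sketch; it does not model the
    Coresonic pipeline or its memory hierarchy."""
    buffer = deque(maxlen=buffer_size)
    hits = misses = 0
    for addr in trace:
        if addr in buffer:
            hits += 1    # served from the loop buffer, no memory access
        else:
            misses += 1  # fetched from memory, then kept in the buffer
            buffer.append(addr)
    return hits, misses

# Example: a tight loop over four addresses re-executed several times.
trace = [0x100, 0x104, 0x108, 0x10C] * 5
hits, misses = simulate_loop_buffer(trace)
print(f"memory accesses avoided: {hits} of {hits + misses}")
```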
328

On Effectively Creating Ensembles of Classifiers : Studies on Creation Strategies, Diversity and Predicting with Confidence

Löfström, Tuwe January 2015 (has links)
An ensemble is a composite model, combining the predictions from several other models. Ensembles are known to be more accurate than single models. Diversity has been identified as an important factor in explaining the success of ensembles. In the context of classification, diversity has not been well defined, and several heuristic diversity measures have been proposed. The focus of this thesis is on how to create effective ensembles in the context of classification. Even though several effective ensemble algorithms have been proposed, there are still several open questions regarding the role diversity plays when creating an effective ensemble. Open questions relating to creating effective ensembles that are addressed include: what to optimize when trying to find an ensemble, using a subset of the models in the original ensemble, that is more effective than the original ensemble; how effective it is to search for such a sub-ensemble; and how the neural networks used in an ensemble should be trained for the ensemble to be effective. The contributions of the thesis include several studies evaluating different ways to optimize which sub-ensemble would be most effective, including a novel approach using combinations of performance and diversity measures. The contributions of the initial studies eventually resulted in an investigation of the underlying assumption motivating the search for more effective sub-ensembles. The evaluation concluded that even if several more effective sub-ensembles exist, it may not be possible to identify which sub-ensembles would be the most effective using any of the evaluated optimization measures. An investigation of the most effective ways to train neural networks to be used in ensembles was also performed. The conclusions are that effective ensembles can be obtained by training neural networks in a number of different ways, and that either high average individual accuracy or high diversity can generate effective ensembles. Several findings regarding diversity and effective ensembles presented in the literature in recent years are also discussed and related to the results of the included studies. When creating confidence-based predictors using conformal prediction, there are several open questions regarding how data should be utilized effectively when using ensembles. Open questions related to predicting with confidence that are addressed include: how data can be utilized effectively to achieve more efficient confidence-based predictions using ensembles; and how problems with class imbalance affect the confidence-based predictions when using conformal prediction. Contributions include two studies: the first shows that using out-of-bag estimates with bagging ensembles results in more effective conformal predictors, and the second shows that a conformal predictor conditioned on the class labels, to avoid a strong bias towards the majority class, is more effective on problems with class imbalance. The research method used is mainly inspired by the design science paradigm, which is manifested by the development and evaluation of artifacts. / An ensemble is a composite model that combines the predictions from several different models. It is well known that ensembles are more accurate than single models. Diversity has been identified as an important factor in explaining why ensembles are so successful.
Until recently, diversity had not been defined unambiguously for classification, which has led to many heuristic diversity measures being proposed. This thesis focuses on how classification ensembles can be created in an effective way. The research method is mainly inspired by the design science paradigm, which is well suited to the development and evaluation of IT artifacts. Many successful ensemble algorithms already exist, but there are still open questions about the role diversity plays in creating well-performing (effective) ensemble models. Some of the questions concerning diversity addressed in the thesis include: What should be optimized when searching for a subset of the available models in order to create an ensemble that is better than the ensemble consisting of all the models; how well does the strategy of searching for such sub-ensembles work; and how should neural networks be trained to work as well as possible in an ensemble? The contributions of the thesis include several studies evaluating different ways of finding sub-ensembles that are better than using the whole ensemble, including a new approach that uses a combination of diversity and performance measures. The results of the first studies led to an investigation of the underlying assumption that motivates searching for sub-ensembles. The conclusion was that, even though several sub-ensembles were better than the whole ensemble, there was no way to identify from the available data which the better sub-ensembles were. It was also investigated how neural networks should be trained to work together as well as possible when used in an ensemble. The conclusion from that investigation is that well-performing ensembles can be created by having many models that are either good on average or different from one another (i.e., diverse). Insights presented in the literature in recent years are discussed and related to the results of the included studies. When creating confidence-based models using a framework called conformal prediction, there are several questions about how data should best be used with ensembles that need to be addressed. The questions relating to confidence-based prediction include: How can data best be used to achieve more efficient confidence-based predictions with ensembles; and how does imbalanced data affect the confidence-based predictions when conformal prediction is used? The contributions include two studies: the first shows that the most effective way to use data with a bagging ensemble is to use so-called out-of-bag estimates, and the second shows that imbalanced data needs to be handled with a class-conditional confidence-based model to avoid a strong tendency to favour the majority class. / At the time of the doctoral defense, the following paper was unpublished and had a status as follows: Paper 8: In press. / Dataanalys för detektion av läkemedelseffekter (DADEL)
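As a rough illustration of the out-of-bag idea mentioned above, the sketch below builds a conformal predictor from a bagged ensemble's out-of-bag probabilities. The nonconformity measure, synthetic data, and significance level are assumptions, and the procedure is simplified relative to the studies in the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

# Sketch: nonconformity = 1 - (out-of-bag) probability of the true class.
X, y = make_classification(n_samples=600, n_classes=2, random_state=0)
X_train, y_train, X_test = X[:500], y[:500], X[500:]

forest = RandomForestClassifier(n_estimators=200, oob_score=True,
                                random_state=0).fit(X_train, y_train)

# Calibration scores come from out-of-bag predictions, so no separate
# calibration set is needed.
oob_probs = forest.oob_decision_function_
calib_scores = 1.0 - oob_probs[np.arange(len(y_train)), y_train]

epsilon = 0.1  # target error rate
threshold = np.quantile(calib_scores, 1.0 - epsilon)

# Prediction set: every class whose nonconformity score is below the threshold.
test_probs = forest.predict_proba(X_test)
prediction_sets = [(1.0 - p) <= threshold for p in test_probs]
print("average prediction-set size:", np.mean([s.sum() for s in prediction_sets]))
```

A class-conditional variant, as studied in the thesis for imbalanced problems, would compute a separate threshold per class from the calibration scores of that class only.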
330

Portfolio optimization using stochastic programming with market trend forecast

Yang, Yutian, active 21st century 02 October 2014 (has links)
This report discusses a multi-stage stochastic programming model that maximizes expected ending-time profit, assuming investors can forecast a bull or bear market trend. If an investor can always predict the market trend correctly and pick the optimal stochastic strategy that matches the real market trend, his return will intuitively beat the market performance. For investors with different levels of prediction accuracy, our analytical results support their decision to select the highest-return strategy. Real stock prices of 154 stocks over 73 trading days are collected. The computational results verify that accurate prediction helps to exceed the market return, while portfolio profit drops if investors predict only partially or forecast incorrectly part of the time. A sensitivity analysis shows how risk-control requirements affect the investor's decision on selecting stochastic strategies under the same prediction accuracy. / text
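The report's model is multi-stage; the sketch below is only a toy single-stage, scenario-based analogue — maximizing expected return under invented bull/bear scenario returns with a crude worst-case floor as risk control — not the report's formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Invented scenario returns for 3 assets and forecast probabilities.
returns = np.array([[0.12, 0.06, 0.02],     # bull-scenario returns
                    [-0.08, -0.01, 0.01]])  # bear-scenario returns
probs = np.array([0.6, 0.4])                # forecast probability of each trend

expected = probs @ returns                  # expected return per asset
floor = -0.02                               # minimum acceptable return in any scenario

# linprog minimizes, so negate the expected-return objective.
res = linprog(
    c=-expected,
    A_ub=-returns,                          # -r_s . w <= -floor  <=>  r_s . w >= floor
    b_ub=np.full(len(probs), -floor),
    A_eq=np.ones((1, returns.shape[1])),    # weights sum to one
    b_eq=[1.0],
    bounds=[(0, 1)] * returns.shape[1],     # no short selling
)
print("weights:", np.round(res.x, 3), "expected return:", -res.fun)
```

Tightening the floor shifts weight toward the asset that loses least in the bear scenario, mirroring how risk-control requirements change the chosen strategy.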
