81

Forecast combination in revenue management demand forecasting

Riedel, Silvia January 2008 (has links)
Multi-level forecast combination is a challenging new domain with large potential for forecast improvement. This thesis presents a theoretical and experimental analysis of the effects of different types of forecast diversification on forecast error covariances and on the resulting combined forecast quality. Three types of diversification are used: (a) diversification of the level at which parameters are learned, (b) diversification of predefined parameter values, and (c) the use of different forecast models. The diversification is applied to forecasts of seasonal factors in airline Revenue Management. After decomposing the data and generating diversified forecasts, a (multi-step) combination procedure is applied. We provide theoretical evidence of why, and under which conditions, multi-step, multi-level forecast combination can be a powerful approach for building a high-quality and adaptive forecast system. We compare, theoretically and experimentally, models that differ in the decomposition and diversification used, as well as in the applied combination models and structures.

After an introduction to forecasting seasonal behaviour in Revenue Management, a literature review of the theory of forecast combination is provided. To clarify under which conditions combination works, we then investigate aspects of forecast diversity and forecast diversification. The diversity of forecast errors, in terms of error covariances, can be expressed in a decomposed manner in relation to different independent error components. This decomposed analysis has the advantage that it allows conclusions about the potential of the diversified forecasts for future combination. We carry out such an analysis of the effects of the different types of diversification on the error components of the bias-variance-Bayes decomposition proposed by James and Hastie. Different approaches for including information from different levels into forecasting are also discussed. The improvements achieved with multi-level forecast combination show that theoretical analysis is extremely important in this relatively new field. The bias-variance-Bayes decomposition is extended to the multi-level case. An analysis of the effects, on the bias and variance error components, of including forecasts with parameters learned at different levels shows that forecast combination is the best choice among the discussed alternatives. The proposed approach is completely automatic. It produces changes in the error components that are not only advantageous at the low level but also have a stabilising effect on aggregates of low-level forecasts at the higher level. We also identify cases in which multi-level forecast combination should ideally be connected with the use of different function spaces and/or thick modelling over certain parameter values or preprocessing procedures.

To avoid the problems that arise for large sets of highly correlated forecasts when covariance information is used, we investigate the potential of pooling and trimming for our case. We estimate the expected behaviour of our diversified forecasts under purely error-variance-based pooling, represented by the common approach of Aiolfi and Timmermann, and analyse the effects of different kinds of covariances on the accuracy of the combined forecast. We show that a significant loss in expected forecast accuracy can ensue because of typical inhomogeneities in the covariance matrix for the analysed case. If covariance information of sufficiently high quality is available, clustering can be run directly on it, and we discuss how to carry out such a clustering. We also consider the case, quite common in our application, in which covariance information is not available, and propose a novel simplified representation of the covariance matrix that captures distance in the forecast generation space and is based only on knowledge of the forecast generation process. A new pooling approach is proposed that avoids inhomogeneities in the covariance matrix by exploiting the information contained in this simplified representation; one of its main advantages is that the covariance matrix does not have to be calculated. We compare the results of our approach with the approach of Aiolfi and Timmermann and explain the reasons for the significant improvement. A further advantage of our approach is that it leads to novel multi-step, multi-level forecast generation structures that carry out the combination in different steps of pooling. Finally, we describe different evolutionary approaches for generating combination structures automatically, investigating very flexible approaches as well as approaches that, based on our theoretical findings, avoid the expected inhomogeneities in the error covariance matrix. The theoretical analysis is supported by experimental results: we achieve an improvement in forecast quality of up to 11 percent for the practical application of demand forecasting in Revenue Management, compared to the currently optimised forecasting system.
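As a point of reference for the covariance-based combination this abstract discusses, the sketch below shows classic Bates-Granger style combination weights computed from an estimated error covariance matrix. It is a generic illustration under assumed toy data, not the thesis's multi-level, multi-step procedure.

```python
import numpy as np

def optimal_combination_weights(errors):
    """Bates-Granger style weights from an estimated forecast error
    covariance matrix: w = Sigma^-1 1 / (1' Sigma^-1 1)."""
    sigma = np.cov(errors, rowvar=False)      # k x k error covariance
    ones = np.ones(sigma.shape[0])
    inv = np.linalg.pinv(sigma)               # pseudo-inverse for numerical safety
    return inv @ ones / (ones @ inv @ ones)

def combine(forecasts, weights):
    """Weighted combination of k individual forecasts for one period."""
    return float(np.asarray(forecasts) @ weights)

# toy example: 3 diversified forecasters observed over 100 past periods
rng = np.random.default_rng(0)
errors = rng.normal(size=(100, 3)) * np.array([1.0, 1.5, 0.8])
w = optimal_combination_weights(errors)
print("weights:", np.round(w, 3))
print("combined forecast:", round(combine([101.0, 98.5, 100.2], w), 2))
```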
82

Interactive pattern recognition strategies for multimedia document analysis and search

Dagli, Cagri K., January 2009 (has links)
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2009. / Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3596. Adviser: Thomas S. Huang. Includes bibliographical references (leaves 193-201). Available on microfilm from ProQuest Information and Learning.
83

Multi-Resolution Superpixels for Visual Saliency Detection in a Large Image Collection

Singh, Anurag 17 September 2015 (has links)
Finding what attracts attention is an important task in visual processing. Visual saliency detection locates the focus of visual attention on the most important, or stand-out, object in an image or video sequence. These stand-out objects are composed of regions, or superpixels: clusters of pixels bound by Gestalt principles of perceptual grouping, which simulate the clustering of fixations. The visual saliency detection algorithms presented in the dissertation build on the premise that salient regions are high in color contrast and stand out when compared to other regions.

The most intuitive way to find a salient region is to compare it to every other region. A region is ranked by its dissimilarity with respect to the other regions, and statistically salient regions are highlighted in proportion to their rank. Another way to compare regions is with respect to their local surroundings. Each region is represented by its Dominant Color Descriptor, and the color difference between neighbors is measured using the Earth Mover's Distance. A multi-resolution framework ensures robustness to object size, location, and background type.

Image saliency detection using region contrast is often based on the premise that a salient region contrasts with the background, but the natural biological process involves comparison to a large collection of similar regions. A novel method is presented to efficiently compare an image region to regions derived from a large, stored collection of images, and video saliency is derived as a special case of a large collection with a temporal reference. The methods presented in the dissertation are tested on publicly available data sets and perform better than existing state-of-the-art methods.
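A minimal sketch of the global region-contrast idea described above, assuming superpixel mean colors and sizes have already been computed; it uses plain Euclidean color distance in place of the Dominant Color Descriptor / Earth Mover's Distance pairing used in the dissertation.

```python
import numpy as np

def region_contrast_saliency(mean_colors, region_sizes):
    """Global region contrast: each region is scored by its colour
    dissimilarity to every other region, weighted by that region's size.
    Euclidean distance in (e.g.) Lab space is used here for brevity."""
    mean_colors = np.asarray(mean_colors, dtype=float)   # (n_regions, 3)
    sizes = np.asarray(region_sizes, dtype=float)
    diff = mean_colors[:, None, :] - mean_colors[None, :, :]
    dist = np.linalg.norm(diff, axis=2)                  # pairwise colour distances
    saliency = (dist * sizes[None, :]).sum(axis=1)       # size-weighted contrast
    # normalise to [0, 1] for thresholding or display
    return (saliency - saliency.min()) / (saliency.ptp() + 1e-12)

# toy example: 4 superpixels, the last one clearly different in colour
colors = [[50, 0, 0], [52, 2, 1], [48, -1, 2], [70, 40, 30]]
sizes = [400, 350, 380, 120]
print(region_contrast_saliency(colors, sizes))
```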
84

Kernel Based Relevance Vector Machine for Classification of Diseases

Tcheimegni, Elie 21 May 2013 (has links)
Motivated by the improved characterization of diseases and cancers that an ability to predict the related syndrome occurrence would facilitate, this work employs a data-driven approach to developing cancer classification/prediction models using the Relevance Vector Machine (RVM), a probabilistic kernel-based learning machine.

Drawing from the work of Bertrand Luvision and Chao Dong, and from the classification of electrocardiogram signals by S. Karpagachelvi, which show the superiority of the RVM approach over traditional classifiers, the problem addressed in this research is to design a program that pipes components together in graphic workflows to help improve the classification/regression accuracy of two model structures (Support Vector Machines and kernel-based Relevance Vector Machines) for better prediction of related diseases, and then to compare both methods using clinical data. Would applying the Relevance Vector Machine to these classification tasks improve their coverage?

We developed a hierarchical Bayesian model for binary and bivariate data classification using RBF and sigmoid kernels with different parameterizations and varied thresholds, treating the kernel parameters as model parameters. The findings allow us to conclude that RVM is almost equal to SVM in training efficiency and classification accuracy, but RVM performs better in sparsity, generalization ability, and decision speed. Meanwhile, the use of RVM raises some issues, since it uses fewer support vectors yet trains much faster for non-linear kernels than SVM-light. Finally, we test these approaches on a corpus of publicly released phenotype data. Further research to improve prediction accuracy with more patient data is needed. Appendices provide the SVM and RVM derivations in detail. One important area of focus is the development of models for predicting cancers.

Keywords: Support Vector Machines, Relevance Vector Machine, RapidMiner, Tanagra, accuracy values.
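For context on the SVM half of the comparison, here is a hedged sketch using scikit-learn's SVC with the RBF and sigmoid kernels mentioned above. An RVM would be fitted analogously with a separate sparse-Bayesian implementation (not part of scikit-learn), and the synthetic data merely stands in for the clinical phenotype data.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# synthetic stand-in for the clinical phenotype data used in the thesis
X, y = make_classification(n_samples=400, n_features=20, n_informative=8, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

# the SVM side of the comparison, with two of the kernels named in the abstract
for kernel, params in [("rbf", {"gamma": 0.05}), ("sigmoid", {"gamma": 0.01, "coef0": 0.5})]:
    clf = SVC(kernel=kernel, C=1.0, **params).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"{kernel:7s} kernel: accuracy={acc:.3f}, support vectors={clf.n_support_.sum()}")
```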
85

Search-based optimization for compiler machine-code generation

Clauson, Aran 18 December 2013 (has links)
Compilation encompasses many steps. Parsing turns the input program into a more manageable syntax tree. Verification ensures that the program makes some semblance of sense. Finally, code generation transforms the internal abstract program representation into an executable program. Compilers strive to produce the best possible programs. Optimizations are applied at nearly every level of compilation. Instruction Scheduling is one of the last compilation tasks. It is part of code generation. Instruction Scheduling replaces the internal graph representation of the program with an instruction sequence. The scheduler should produce some sequence that the hardware can execute quickly. Considering that Instruction Scheduling is an NP-Complete optimization problem, it is interesting that schedules are usually generated by a greedy, heuristic algorithm called List Scheduling. Given search-based algorithms' successes in other NP-Complete optimization domains, we ask whether search-based algorithms can be applied to Instruction Scheduling to generate superior schedules without unacceptably increasing compilation time. To answer this question, we formulate a problem description that captures practical scheduling constraints. We show that this problem is NP-Complete given modest requirements on the actual hardware. We adapt three different search algorithms to Instruction Scheduling in order to show that search is an effective Instruction Scheduling technique. The schedules generated by our algorithms are generally shorter than those generated by List Scheduling. Search-based scheduling does take more time, but the increases are acceptable for some compilation domains.
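For readers unfamiliar with the baseline the thesis compares against, this is a rough sketch of greedy list scheduling over a dependency DAG; the priority heuristic and toy latencies are illustrative assumptions, not the thesis's exact formulation.

```python
import heapq

def list_schedule(deps, latency):
    """Greedy list scheduling over a dependency DAG.
    deps[i]    -> set of instructions that must be scheduled before i
    latency[i] -> cycles instruction i takes
    Returns one issue order; a search-based scheduler would instead explore
    many such orders and keep the best rather than committing greedily."""
    preds = {i: set(d) for i, d in deps.items()}
    # simple priority: prefer higher-latency instructions (production list
    # schedulers typically use a critical-path priority instead)
    ready = [(-latency[i], i) for i, p in preds.items() if not p]
    heapq.heapify(ready)
    order = []
    while ready:
        _, node = heapq.heappop(ready)
        order.append(node)
        for succ, p in preds.items():
            if node in p:
                p.discard(node)
                if not p:                      # all predecessors scheduled
                    heapq.heappush(ready, (-latency[succ], succ))
    return order

# toy DAG: two independent loads (0, 1) feed an add (2), which feeds a store (3)
deps = {0: set(), 1: set(), 2: {0, 1}, 3: {2}}
latency = {0: 3, 1: 3, 2: 1, 3: 1}
print(list_schedule(deps, latency))  # -> [0, 1, 2, 3]
```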
86

Ant colony inspired models for trust-based recommendations

Alathel, Deema 31 March 2015 (has links)
The rapid growth of web-based social networks has led to many breakthroughs in the services that such networks can provide. Some networks allow users to describe their relationships with other users beyond a basic connection. This dissertation focuses on trust in web-based social networks and how it can be utilized to enhance a user's experience within a recommender system. A definition of trust and its properties is presented, followed by a detailed explanation of recommender systems, their applications, and their techniques.

The recommendation problem in recommender systems can be treated as an optimization problem, and thus many optimization algorithms can be used in such systems. The focus in this dissertation is on one group of such algorithms, ant algorithms, and an overview of how they can be applied to optimization problems is presented. While studying ant algorithms, it was noticed that a novel improvement could be made in the form of a local pheromone initialization technique, which is added to the list of contributions of this dissertation.

This dissertation presents a set of novel models that apply an ant-based algorithm to trust-based recommender systems. A total of five main models are presented, each designed with a specific purpose, such as expanding the scope of the search in the solution space or dealing with cold-start users, but ultimately all models aim to enhance the performance of the recommender system. In addition to the basic model, the enhanced models fall under two categories: localized models that increase the importance of trust within local computations, and dynamic models that increase the level of information sharing between the artificial agents in the system. The results of the conducted experiments are presented along with an analysis highlighting the strengths of each model and the situations in which each model is most suitable.

The dissertation concludes by discussing the lessons learned and the possible extensions of the presented findings, which can contribute to the fields of recommender systems and artificial intelligence.
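A generic sketch of how an ant-based algorithm can drive a trust-based recommendation, assuming a weighted trust graph and a ratings table; the walk length, evaporation rate, and pheromone update rule are illustrative choices, not the dissertation's five models.

```python
import random

def ant_predict(trust, ratings, source, item, n_ants=100, rho=0.1, q=1.0, max_hops=6):
    """Ants walk the trust graph from `source`; when an ant reaches a user who
    rated `item`, pheromone is deposited along its path. The prediction is a
    pheromone-reinforced weighted average of the raters' ratings."""
    tau = {(u, v): 1.0 for u in trust for v in trust[u]}      # uniform local init
    raters = {u: r[item] for u, r in ratings.items() if item in r}
    hits = {u: 0.0 for u in raters}
    for _ in range(n_ants):
        node, path = source, []
        for _ in range(max_hops):
            nbrs = list(trust.get(node, {}))
            if not nbrs:
                break
            w = [tau[(node, v)] * trust[node][v] for v in nbrs]   # pheromone x trust
            nxt = random.choices(nbrs, weights=w)[0]
            path.append((node, nxt))
            node = nxt
            if node in raters:
                break
        if node in raters:                                        # reinforce a useful path
            hits[node] += 1.0
            for e in path:
                tau[e] = (1 - rho) * tau[e] + q / len(path)
    total = sum(hits.values())
    return sum(hits[u] * raters[u] for u in raters) / total if total else None

# toy trust network: alice trusts bob and carol; only dave has rated the item
trust = {"alice": {"bob": 0.9, "carol": 0.4}, "bob": {"dave": 0.8}, "carol": {"dave": 0.3}}
ratings = {"dave": {"item42": 4.0}}
print(ant_predict(trust, ratings, "alice", "item42"))
```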
87

Analogical Constructivism| The emergence of reasoning through analogy and action schemas

Licato, John 02 July 2015 (has links)
The ability to reason analogically is a central marker of human-level cognition. Analogy involves mapping, reorganizing, and creating structural knowledge, a particular type of cognitive construct commonly understood as residing purely within the domain of declarative knowledge. Yet existing computational models of analogy struggle to show human-level performance on any data sets not manually constructed for the purposes of demonstration, a problem referred to as the tailorability concern. Solving the tailorability concern may require more investigation into the nature of cognitive structures, defined as those elements in mental representation which are referred to whenever contemporary literature on analogy discusses "structured" knowledge.

I propose to develop the theory of Analogical Constructivism. This theory builds on Piaget's constructivist epistemology, first refining its concepts by clarifying the modifications Piaget himself made in his later, less-discussed works. I reconcile Piaget's assertion that meaning is, first and foremost, rooted in the action schemas that the agent is both born with and develops throughout life, with an account of cognitive structure, concluding that cognitive structure is inseparable from action-centered/procedural knowledge.

After a defense of the claim that cognitive structure cannot exist apart from actions (a claim which I refer to as "No-semantically-empty-structure"), I introduce PAGI World, a simulation environment rich enough in possible actions to foster the growth of artificial agents capable of producing their own cognitive structures. I conclude with a brief demonstration of an agent in PAGI World, and discuss future work.
88

Pediatric heart sound segmentation

Sedighian, Pouye 22 November 2014 (has links)
Recent advances in technology have facilitated the prospect of automatic cardiac auscultation using digital stethoscopes. This in turn creates the need for algorithms capable of automatically segmenting the heart sound. Pediatric heart sound segmentation is a challenging task due to various factors, including the significant influence of respiration on the heart sound. This project studies the application of homomorphic filtering and Hidden Markov Models to pediatric heart sound segmentation. The efficacy of the proposed method is evaluated on a publicly available dataset, and its performance is compared with that of three other existing methods. The results show that our proposed method achieves accuracies of 92.4% ±1.1% and 93.5% ±1.1% in identifying the first and second heart sound components, and is superior to the other existing methods in terms of accuracy or time complexity.
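A small sketch of the homomorphic-filtering step named above: low-pass filtering the log-magnitude of the phonocardiogram yields an envelope whose peaks an HMM could then segment into S1/S2 states. The cutoff frequency and synthetic signal are assumptions, not the thesis's parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def homomorphic_envelope(pcg, fs, lpf_hz=8.0):
    """Homomorphic envelope of a phonocardiogram: low-pass filter the
    log-magnitude of the signal, then exponentiate."""
    b, a = butter(2, lpf_hz / (fs / 2), btype="low")
    log_env = filtfilt(b, a, np.log(np.abs(pcg) + 1e-10))
    return np.exp(log_env)

# toy signal: two "heart sound" bursts in noise, sampled at 2 kHz
fs = 2000
t = np.arange(0, 1.0, 1 / fs)
pcg = 0.02 * np.random.randn(t.size)
for centre in (0.2, 0.55):                        # crude S1 / S2 stand-ins
    pcg += np.exp(-((t - centre) ** 2) / 1e-4) * np.sin(2 * np.pi * 60 * t)

env = homomorphic_envelope(pcg, fs)
print("envelope peak at t =", round(float(t[np.argmax(env)]), 3), "s")
```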
89

Identifying latent attributes from video scenes using knowledge acquired from large collections of text documents

Tran, Anh Xuan 18 October 2014 (has links)
Peter Drucker, a well-known and influential writer and philosopher in the field of management theory and practice, once claimed that "the most important thing in communication is hearing what isn't said." It is not difficult to see that a similar concept also holds in the context of video scene understanding. In almost every non-trivial video scene, the most important elements, such as the motives and intentions of the actors, can never be seen or directly observed, yet the identification of these latent attributes is crucial to our full understanding of the scene. That is to say, latent attributes matter.

In this work, we explore the task of identifying latent attributes in video scenes, focusing on the mental states of participant actors. We propose a novel approach to the problem based on the use of large text collections as background knowledge and minimal information about the videos, such as activity and actor types, as query context. We formalize the task and a measure of merit that accounts for the semantic relatedness of mental state terms as well as their distribution weights. We develop and test several largely unsupervised information extraction models that identify the mental state labels of human participants in video scenes given some contextual information about the scenes. We show that these models produce complementary information and that their combination significantly outperforms the individual models and improves performance over several baseline methods on two different datasets. We present an extensive analysis of our models and close with a discussion of our findings, along with a roadmap for future research.
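To make the corpus-based idea concrete, here is a bare-bones sketch that scores candidate mental-state labels by pointwise mutual information with the query-context terms over a small text collection; the scoring scheme, corpus, and term names are all illustrative assumptions rather than the dissertation's extraction models.

```python
import math
from collections import Counter

def score_labels(documents, context, candidates):
    """Score candidate mental-state labels by their PMI with query-context
    terms (e.g. activity and actor type) over a text collection."""
    docs = [set(d.lower().split()) for d in documents]
    n = len(docs)
    df = Counter(w for d in docs for w in d)                 # document frequency
    scores = {}
    for label in candidates:
        pmi = 0.0
        for ctx in context:
            joint = sum(1 for d in docs if label in d and ctx in d)
            if joint and df[label] and df[ctx]:
                pmi += math.log((joint * n) / (df[label] * df[ctx]))
        scores[label] = pmi
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

corpus = ["the suspect fled because he was afraid of being caught",
          "the runner felt joyful after winning the race",
          "the thief was nervous and afraid while fleeing the police"]
print(score_labels(corpus, context=["fleeing", "suspect"], candidates=["afraid", "joyful"]))
```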
90

A framework for computer aided diagnosis and analysis of the neuro-vasculature

Chowriappa, Ashirwad 11 April 2014 (has links)
Various types of vascular diseases, such as carotid stenosis, aneurysms, Arterio-Venous Malformations (AVM), and Subarachnoid Hemorrhage (SAH) caused by the rupture of an aneurysm, are among the common causes of stroke. The diagnosis and management of such vascular conditions presents a challenge. In this dissertation we present a vascular analysis framework for Computer Aided Diagnosis (CAD) of the neuro-vasculature. We develop methods for 3D vascular decomposition, vascular skeleton extraction, and identification of vascular structures such as aneurysms.

Owing to the complex and highly tortuous nature of the vasculature, analysis is often attempted only on a subset of the vessel network. In our framework we first compute the decomposition of the vascular tree into meaningful sub-components. A novel spectral segmentation approach is presented that focuses on the eigenfunctions of the Laplace-Beltrami operator (LBO) under an FEM discretization. In this approach, we obtain a set of vessel segmentations induced by the nodal sets of the LBO. The discretization produces a family of real-valued functions which provide interesting insights into the structure and morphology of the vasculature. Next, a novel Weighted Approximate Convex Decomposition (WACD) strategy is proposed to understand the nature of complex vessel structures. We address this problem of vascular decomposition as a cluster optimization problem and introduce a methodology for compact geometric decomposition. The decomposed vessel structures are then grouped into anatomically relevant sections using a novel vessel skeleton extraction methodology that utilizes a Laplace-based operator. Vascular analysis is performed by obtaining a surface mapping between decomposed vessel sections. A non-rigid correspondence between vessel surfaces is achieved using Thin Plate Splines (TPS), and changes between corresponding surface morphologies are detected using Gaussian and mean curvature maps. Finally, characteristic vascular structures such as vessel bifurcations and aneurysms are identified using a Support Vector Machine (SVM) on the most relevant eigenvalues, obtained through feature selection.

The proposed CAD framework was validated using pre-segmented sections of vasculature archived for 98 aneurysms in 112 patients. We first test our methodologies for vascular segmentation and then for detection. Our vascular segmentation approaches produced promising results, with 81% of the vessel sections correctly segmented. For vascular classification, Recursive Feature Elimination (RFE) was performed to find the most compact and informative set of features, and we showed that the selected subset of eigenvalues produces minimum error and improved classifier precision. The analysis framework was also tested on longitudinal cases of patients with internal cerebral aneurysms, where volumetric and surface area comparisons were made by establishing a correspondence between segmented vascular sections. Our results suggest that the CAD framework was able to decompose, classify, and detect changes in aneurysm volumes and surface areas close to those segmented by an expert.
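A compact sketch of the eigenvalue-based classification step, assuming LBO eigenvalues have already been computed per vessel section: recursive feature elimination with a linear SVM selects the most informative eigenvalues before classification. The synthetic features and labels are placeholders for the archived patient data, and the linear kernel is an illustrative choice.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in: rows are vessel sections, columns are leading
# Laplace-Beltrami eigenvalues; labels mark aneurysm vs. normal section.
rng = np.random.default_rng(7)
X = rng.normal(size=(200, 30))
y = (X[:, :5].sum(axis=1) + 0.3 * rng.normal(size=200) > 0).astype(int)

# RFE with a linear SVM picks a compact subset of eigenvalues, echoing the
# framework's feature-selection step, before the final SVM classifier.
selector = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=8, step=2)
model = make_pipeline(selector, SVC(kernel="linear", C=1.0))
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean().round(3))
```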
