About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
901

Automated Resolution Selection for Image Segmentation

Al-Qunaieer, Fares January 2014 (has links)
It is well known in image processing in general, and hence in image segmentation in particular, that computational cost increases rapidly with the number and dimensions of the images to be processed. Several fields, such as astronomy, remote sensing, and medical imaging, use very large images, which might also be 3D and/or captured at several frequency bands, all adding to the computational expense. Multiresolution analysis is one method of increasing the efficiency of the segmentation process. One multiresolution approach is the coarse-to-fine segmentation strategy, whereby the segmentation starts at a coarse resolution and is then fine-tuned during subsequent steps. Until now, the starting resolution for segmentation has been selected arbitrarily with no clear selection criteria. The research conducted for this thesis showed that starting from different resolutions for image segmentation results in different accuracies and speeds, even for images from the same dataset. An automated method for resolution selection for an input image would thus be beneficial. This thesis introduces a framework for the selection of the best resolution for image segmentation. First proposed is a measure for defining the best resolution based on user/system criteria, which offers a trade-off between accuracy and time. A learning approach is then described for the selection of the resolution, whereby extracted image features are mapped to the previously determined best resolution. In the learning process, class (i.e., resolution) distribution is imbalanced, making effective learning from the data difficult. A variant of AdaBoost, called RAMOBoost, is therefore used in this research for the learning-based selection of the best resolution for image segmentation. RAMOBoost is designed specifically for learning from imbalanced data. Two sets of features are used: Local Binary Patterns (LBP) and statistical features. 
Experiments conducted with four datasets using three different segmentation algorithms show that the resolutions selected through learning enable much faster segmentation than the original ones, while retaining at least the original accuracy. For three of the four datasets used, the segmentation results obtained with the proposed framework were significantly better than with the original resolution with respect to both accuracy and time.
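The trade-off measure described above, balancing accuracy against time, can be sketched as a weighted score over candidate resolutions. The scoring function, the weight `alpha`, and the measurements below are illustrative assumptions, not the thesis's actual criterion:

```python
# Hypothetical sketch: choose a resolution level by trading off
# segmentation accuracy against runtime with a weighted score.
# The weight `alpha` and the measurements are made-up placeholders.

def best_resolution(levels, alpha=0.5):
    """levels: list of (level, accuracy in [0,1], time in seconds)."""
    t_max = max(t for _, _, t in levels)
    # Normalize time so both terms lie in [0, 1]; higher score is better.
    scored = [(lvl, acc - alpha * (t / t_max)) for lvl, acc, t in levels]
    return max(scored, key=lambda pair: pair[1])[0]

# Example: coarser levels are faster but slightly less accurate.
measurements = [
    (0, 0.91, 8.0),   # full resolution
    (1, 0.90, 2.1),   # half resolution
    (2, 0.84, 0.6),   # quarter resolution
]
print(best_resolution(measurements, alpha=0.5))
```

With `alpha=0` the score reduces to pure accuracy and the full resolution wins; larger `alpha` shifts the choice toward coarser, faster levels, mirroring the accuracy/time trade-off the framework exposes to the user.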
902

Weakly Supervised Learning Algorithms and an Application to Electromyography

Hesham, Tameem January 2014 (has links)
In the standard machine learning framework, training data is assumed to be fully supervised. However, collecting fully labelled data is not always easy. Due to cost, time, effort, or other constraints, requiring all the data to be labelled can be difficult in many applications, whereas collecting unlabelled data can be relatively easy. Paradigms that enable learning from unlabelled and/or partially labelled data have therefore been growing in machine learning. The focus of this thesis is to provide algorithms that weakly annotate the unlabelled parts of data not provided in the standard supervised setting of one instance-label pair per sample, and then learn from the weakly as well as strongly labelled data. More specifically, the bulk of the thesis aims at finding solutions for data that come in the form of bags, or groups of instances, where the available label information is at the bag level only. This is the form of electromyographic (EMG) data, which represent the main application of the thesis. EMG data can be used to diagnose muscles as either normal or suffering from a neuromuscular disease. Muscles can be classified into one of three labels: normal, myopathic, or neurogenic. Each muscle consists of motor units (MUs). Equivalently, an EMG signal detected from a muscle consists of motor unit potential trains (MUPTs). These data are an example of partially labelled data in which instances (MUs) are grouped into bags (muscles) and labels are provided for bags but not for instances. First, we introduce and investigate a weakly supervised learning paradigm that aims at improving classification performance by using a spectral graph-theoretic approach to weakly annotate unlabelled instances before classification. The spectral graph-theoretic phase of this paradigm groups unlabelled data instances using similarity graph models. Two new similarity graph models are also introduced in this paradigm.
This paradigm improves overall bag accuracy for EMG datasets. Second, generative modelling approaches for multiple-instance learning (MIL) are presented. We introduce and analyse a variety of model structures and components of these generative models, and believe this analysis can serve as a methodological guide for other MIL tasks of similar form. This approach improves overall bag accuracy, especially for low-dimensional bags-of-instances datasets such as EMG datasets. MIL generative models are an example of models in which probability distributions need to be represented compactly and efficiently, especially when the number of variables in a model is large. Sum-product networks (SPNs) are a relatively new class of deep probabilistic models that aim at providing a compact and tractable representation of a probability distribution. SPNs are used to model the joint distribution of instance features in the MIL generative models. An SPN whose structure is learnt by a structure learning algorithm introduced in this thesis leads to improved bag accuracy for higher-dimensional datasets.
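The bag/instance setting described above can be illustrated with a minimal baseline: labels exist only at the bag (muscle) level, and a bag prediction is derived from its instances (MUs). The "max" rule and the toy scoring function below are illustrative stand-ins, not the thesis's spectral or generative models:

```python
# Illustrative sketch of the multiple-instance setting: bags of
# instances with bag-level labels only. A simple baseline scores each
# instance and declares a bag positive if any instance looks positive
# (the classic "max" rule). Scorer and data are made up.

def bag_predict(bag, instance_score, threshold=0.5):
    """A bag is positive if any instance in it scores above threshold."""
    return max(instance_score(x) for x in bag) >= threshold

# Toy instance scorer: the second feature drives "abnormality".
score = lambda x: x[1]

muscles = {
    "muscle_a": [(0.2, 0.1), (0.3, 0.9)],  # one abnormal-looking MU
    "muscle_b": [(0.1, 0.2), (0.4, 0.3)],  # all instances look normal
}
labels = {name: bag_predict(bag, score) for name, bag in muscles.items()}
print(labels)
```

The key property mirrored here is that no instance-level ground truth is ever consulted; everything at the instance level must be inferred, which is exactly the gap the thesis's weak-annotation phase addresses.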
903

Algorithms and Systems for Virtual Machine Scheduling in Cloud Infrastructures

Li, Wubin January 2014 (has links)
With the emergence of cloud computing, computing resources (i.e., networks, servers, storage, applications, etc.) are provisioned as metered on-demand services over networks, and can be rapidly allocated and released with minimal management effort. In the cloud computing paradigm, the virtual machine (VM) is one of the most commonly used resource units in which business services are encapsulated. VM scheduling optimization, i.e., finding optimal placement schemes for VMs and reconfigurations according to changing conditions, becomes a challenging issue for cloud infrastructure providers and their customers. The thesis investigates the VM scheduling problem in two scenarios: (i) single-cloud environments, where VMs are scheduled within a cloud aiming at improving criteria such as load balancing, carbon footprint, utilization, and revenue, and (ii) multi-cloud scenarios, where a cloud user (which could be the owner of the VMs or a cloud infrastructure provider) schedules VMs across multiple cloud providers, targeting optimization of investment cost, service availability, etc. For single-cloud scenarios, taking load balancing as the objective, an approach to optimal VM placement for predictable and time-constrained peak loads is presented. In addition, we also present a set of heuristic methods based on fundamental management actions (namely, suspending and resuming physical machines, VM migration, and suspending and resuming VMs), continuously optimizing the profit for the cloud infrastructure provider regardless of the predictability of the workload. For multi-cloud scenarios, we identify key requirements for service deployment in a range of common cloud scenarios (including private clouds, bursted clouds, federated clouds, multi-clouds, and cloud brokering), and present a general architecture to meet these requirements. Based on this architecture, a set of placement algorithms tuned for cost optimization under dynamic pricing schemes is evaluated.
By explicitly specifying service structure, component relationships, and placement constraints, a mechanism is introduced that enables service owners to influence placement. In addition, we also study how dynamic cloud scheduling using VM migration can be modeled using a linear integer programming approach. The primary contribution of this thesis is the development and evaluation of algorithms (ranging from combinatorial optimization formulations to simple heuristic algorithms) for VM scheduling in cloud infrastructures. In addition to scientific publications, this work also contributes software tools (in the OPTIMIS project funded by the European Commission's Seventh Framework Programme) that demonstrate the feasibility and characteristics of the approaches presented. / In cloud computing, computing resources (i.e., networks, servers, storage, applications, etc.) are provided as services accessible via the Internet. The resources, such as virtual machines (VMs), can be quickly and easily allocated and released as needed. The potentially rapid changes in how many and how large VMs are needed lead to challenging scheduling and configuration problems. Scheduling problems arise both for infrastructure providers, who must choose which servers different VMs are placed on within a cloud, and for their customers, who must choose which clouds VMs are placed on. The thesis focuses on VM scheduling problems in these two scenarios, i.e., (i) individual clouds, where VMs are scheduled to optimize load balance, energy consumption, resource utilization, and economy, and (ii) situations where a cloud user chooses one or more clouds on which to place VMs, optimizing, e.g., cost, performance, and availability for the application using the resources. For the former scenario, the thesis presents a scheduling method that, based on predictable load variations, optimizes load balance across the physical computing resources. In addition, a set of heuristic methods is presented, based on fundamental resource management actions, for continuously optimizing the profit of a cloud provider without requiring predictable load variations. For the multi-cloud case, we identify key requirements for how resource management services should be constructed to work well in a range of conceptually different multi-cloud scenarios. From these requirements, we also define a general architecture that can be adapted to these scenarios. Based on our architecture, we develop and evaluate a set of VM scheduling algorithms intended to minimize the cost of using cloud infrastructure with dynamic pricing. Through new functionality, the user is given the ability to explicitly specify relationships among the allocated VMs and other constraints on their placement. We also demonstrate how linear integer programming can be used to optimize this scheduling problem. The thesis's main contribution is the development and evaluation of new methods for VM scheduling in cloud computing, with solutions that include both combinatorial optimization and heuristic methods. Beyond scientific publications, the work also contributes VM scheduling software developed within the OPTIMIS project, funded by the EU Commission's Seventh Framework Programme.
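The flavour of the heuristic placement methods discussed above can be sketched with a classic first-fit-decreasing bin-packing heuristic: pack VMs onto as few physical machines as possible. The capacities and demands below are illustrative; the thesis's actual algorithms also account for migration cost, revenue, and dynamic prices:

```python
# Minimal first-fit-decreasing sketch of VM placement: consolidate VMs
# onto as few physical machines as possible. Demands and capacity are
# single-dimensional placeholders (think CPU units).

def place_vms(vm_demands, host_capacity):
    """Return a list of hosts, each a list of (vm, demand) it serves."""
    hosts = []  # each entry: [remaining_capacity, [(vm, demand), ...]]
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for host in hosts:
            if host[0] >= demand:            # first host with room
                host[0] -= demand
                host[1].append((vm, demand))
                break
        else:                                # open a new physical machine
            hosts.append([host_capacity - demand, [(vm, demand)]])
    return [h[1] for h in hosts]

demands = {"vm1": 6, "vm2": 5, "vm3": 4, "vm4": 3, "vm5": 2}
placement = place_vms(demands, host_capacity=10)
print(len(placement))  # number of active hosts
```

Fewer active hosts means machines can be suspended, which is one of the fundamental management actions the profit-optimizing heuristics in the thesis build on.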
904

Design and optimization of a one-degree-of-freedom eight-bar leg mechanism for a walking machine

Giesbrecht, Daniel 08 April 2010 (has links)
It has been established that legged off-road vehicles exhibit better mobility, higher energy efficiency, and more comfortable movement than tracked or wheeled vehicles while moving on rough terrain. Previous studies on legged mechanism design selected the length of each link by trial and error or by optimization techniques in which only a static force analysis was performed due to the complexity of the mechanisms. We found that these techniques can be inefficient and inaccurate. In this thesis, we present the design and optimization of a single degree-of-freedom eight-bar legged walking mechanism. We design the leg using mechanism design theory because it offers greater control over the output motion. Furthermore, a dynamic force analysis is performed to determine the torque applied on the input link. The optimization is set up to achieve two objectives: (i) to minimize the energy needed by the system and (ii) to maximize the stride length. The kinematics and dynamics of the optimized leg mechanism are compared with those of the trial-and-error design, showing that large improvements in the performance of the leg mechanism can be achieved. A prototype of the walking mechanism with six legs was built to demonstrate the performance.
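One common way to combine the two objectives above (minimize energy, maximize stride length) is weighted-sum scalarization, sketched here over a toy design space. The energy and stride values and the weight are placeholders, not outputs of the actual eight-bar kinematic and dynamic model:

```python
# Hedged sketch of weighted-sum scalarization for a two-objective
# design problem: minimize energy, maximize stride. Negating stride
# makes both terms point the same way (lower is better). Candidate
# designs and the weight w are illustrative assumptions.

def scalarized_cost(energy, stride, w=0.5):
    return w * energy - (1.0 - w) * stride

candidates = {
    "design_a": (10.0, 0.8),   # (energy, stride)
    "design_b": (7.0, 0.6),
    "design_c": (12.0, 1.1),
}
best = min(candidates, key=lambda d: scalarized_cost(*candidates[d]))
print(best)
```

In practice the two objectives should be normalized to comparable scales before weighting, and a real mechanism optimization would evaluate `energy` and `stride` from the dynamic force analysis rather than from a lookup table.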
905

Influence of growing locations, sample presentation technique and amount of foreign material on features extracted from colour images of Canada Western Red Spring wheat

Zhang, Wanyu 27 October 2010 (has links)
An area scan colour camera was used to acquire images of single kernels of Canada Western Red Spring (CWRS) wheat from different growing locations (nine locations in 2007, eight locations in 2008 and 2009) in Western Canada. Two sample presentation methods were used: in the first, fifteen kernels from a single location were captured in a single image; in the second, one kernel from each location was captured in the same image. Images of individual kernels of barley and rye were also acquired for a classification study. Bulk images of heaped and flat CWRS samples, heaped and flat barley samples, and images of CWRS wheat mixed with different proportions of foreign material (0%, 2%, 5%, 10%, 20% barley) were acquired. Morphological, colour, and textural features from single-kernel images, and colour and textural features from bulk-grain images, were extracted by a program developed by researchers at the Canadian Wheat Board Centre for Grain Storage Research. The top 30 features from the single-kernel images of CWRS wheat samples from different growing locations and different crop years were compared by Scheffe's test. Image features from the two presentation methods were also compared. A composite sample, generated by randomly selecting kernels from each location, was compared with the individual locations to assess its representativeness. Three-way classification of CWRS wheat, barley, and rye was done using the top 30 features. For bulk-grain image analysis, features from flat and heaped bulk grain samples were extracted and compared. Image features of CWRS wheat mixed with different percentages of barley were examined, and a cross-validation discriminant classifier was developed to classify CWRS wheat mixed with different percentages of barley. Classifications were also conducted using flat grain for training and flat and heaped grain for testing.
Results from this study indicated that most image features differed significantly between growing locations and between crop years. However, these differences did not influence three-way classification of CWRS wheat, barley, and rye. Features from the composite sample were compared with those from each location and were found to differ, so composite samples may not be representative of all locations. However, three-way classification using composite-sample features gave similar results to using samples from each location. CWRS wheat and barley samples were used to compare the image features of flat and heaped grain. Results indicated that image features from flat grain differed from those of heaped grain samples. However, a two-way classification applied to heaped and flat CWRS wheat, and also to heaped and flat barley, gave perfect classification accuracies. Classification models trained using flat grain also gave perfect classification accuracies when tested using flat and heaped grain. A comparison of the top 30 features extracted from images of CWRS wheat mixed with different proportions of barley revealed that grain image features changed after mixing in barley. In classification of CWRS wheat mixed with 0, 2, 5, 10, and 20% barley, classification accuracies of 100, 99, 96, 95, and 98% were obtained, respectively.
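The cross-validated classification workflow described above can be sketched with a leave-one-out loop around a simple nearest-centroid rule, standing in for the study's discriminant classifier. The feature vectors and labels below are invented for illustration:

```python
# Hedged sketch of leave-one-out cross-validated classification on
# image features. Nearest-centroid is a stand-in for the discriminant
# classifier used in the study; the toy features are made up and
# deliberately well separated.

def nearest_centroid_loo(samples):
    """samples: list of (feature_vector, label). Returns LOO accuracy."""
    correct = 0
    for i, (x, held_label) in enumerate(samples):
        # Build per-class centroids from all samples except the held-out one.
        sums, counts = {}, {}
        for j, (xj, yj) in enumerate(samples):
            if j == i:
                continue
            sums.setdefault(yj, [0.0] * len(xj))
            counts[yj] = counts.get(yj, 0) + 1
            sums[yj] = [s + v for s, v in zip(sums[yj], xj)]
        centroids = {y: [s / counts[y] for s in sums[y]] for y in sums}
        pred = min(centroids,
                   key=lambda y: sum((a - b) ** 2
                                     for a, b in zip(x, centroids[y])))
        correct += pred == held_label
    return correct / len(samples)

# Toy "wheat vs barley" colour features, separable on purpose.
data = [([0.9, 0.1], "wheat"), ([0.8, 0.2], "wheat"),
        ([0.1, 0.9], "barley"), ([0.2, 0.8], "barley")]
print(nearest_centroid_loo(data))
```

The same structure extends to the five-class barley-admixture problem (0, 2, 5, 10, 20%) by labelling each sample with its admixture level.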
906

Atlas Simulation: A Numerical Scheme for Approximating Multiscale Diffusions Embedded in High Dimensions

Crosskey, Miles Martin January 2014 (has links)
When simulating multiscale stochastic differential equations (SDEs) in high dimensions, separation of timescales and high dimensionality can make simulations expensive. The computational cost is dictated by microscale properties and interactions of many variables, while interesting behavior often occurs on the macroscale with few important degrees of freedom. For many problems, bridging the gap between the microscale and macroscale by direct simulation is computationally infeasible, and one would like to learn a fast macroscale simulator. In this thesis we present an unsupervised learning algorithm that uses short parallelizable microscale simulations to learn provably accurate macroscale SDE models. The learning algorithm takes as input the microscale simulator, a local distance function, and a homogenization scale. The learned macroscale model can then be used for fast computation and storage of long simulations. We discuss various examples, both low- and high-dimensional, as well as results on the accuracy of the fast simulators we construct and its dependence on the number of short paths requested from the microscale simulator. / Dissertation
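The building block behind learning an SDE model from short paths can be illustrated with textbook Euler-Maruyama moment estimates: run many short simulations from a point and estimate the local drift and diffusion from the increments. This is a minimal illustration of the idea, not the thesis's atlas construction:

```python
# Estimate local drift and diffusion of a 1-D SDE dX = b(X)dt + s dW
# from many short Euler-Maruyama steps. The SDE (Ornstein-Uhlenbeck
# with b(x) = -x, s = 0.5) and all constants are illustrative.
import random

def short_paths(x0, drift, sigma, dt, n_paths, seed=0):
    rng = random.Random(seed)
    return [x0 + drift(x0) * dt + sigma * (dt ** 0.5) * rng.gauss(0, 1)
            for _ in range(n_paths)]

x0, dt, n = 1.0, 0.01, 20000
ends = short_paths(x0, drift=lambda x: -x, sigma=0.5, dt=dt, n_paths=n)

mean_inc = sum(e - x0 for e in ends) / n
drift_hat = mean_inc / dt                 # should be near -x0 = -1
var_inc = sum((e - x0 - mean_inc) ** 2 for e in ends) / n
sigma_hat = (var_inc / dt) ** 0.5         # should be near 0.5
print(drift_hat, sigma_hat)
```

Repeating this estimation at a net of points (the "atlas") and stitching the local models together is, roughly, how a fast macroscale simulator can be assembled from short, parallelizable microscale bursts.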
907

Enhancements to reverse engineering : surface modelling and segmentation of CMM data

Bardell, Rayman A. January 2000 (has links)
No description available.
908

Computer aided design of feed drives for NC machine tools

Filiz, I. H. January 1981 (has links)
No description available.
909

The modelling of changeovers and the classification of changeover time reduction techniques

Gest, G. B. January 1995 (has links)
No description available.
910

Machine Learning for Aerial Image Labeling

Mnih, Volodymyr 09 August 2013 (has links)
Information extracted from aerial photographs has found applications in a wide range of areas, including urban planning, crop and forest management, disaster relief, and climate modeling. At present, much of the extraction is still performed by human experts, making the process slow, costly, and error-prone. The goal of this thesis is to develop methods for automatically extracting the locations of objects such as roads, buildings, and trees directly from aerial images. We investigate the use of machine learning methods, trained on aligned aerial images and possibly outdated maps, for labeling the pixels of an aerial image with semantic labels. We show how deep neural networks implemented on modern GPUs can be used to efficiently learn highly discriminative image features. We then introduce new loss functions for training neural networks that are partially robust to incomplete and poorly registered target maps. Finally, we propose two ways of improving the predictions of our system by introducing structure into the outputs of the neural networks. We evaluate our system on the largest and most challenging road and building detection datasets considered in the literature and show that it works reliably under a wide variety of conditions. Furthermore, we are releasing the first large-scale road and building detection datasets to the public in order to facilitate future comparisons with other methods.
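The patch-to-label setup described above can be sketched in miniature: a classifier maps a small image patch to a label for its centre pixel. Logistic regression on flattened patches stands in for the deep networks used in the thesis, and the synthetic data (where "road" patches are bright along the centre column) is an invented placeholder:

```python
# Tiny sketch of per-pixel labeling from image patches: logistic
# regression on flattened 3x3 patches, a stand-in for the thesis's
# deep networks. Data is synthetic and linearly separable by design.
import math
import random

def sigmoid(z):
    z = max(min(z, 60.0), -60.0)   # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def train(patches, labels, lr=0.5, epochs=200):
    w, b = [0.0] * len(patches[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(patches, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y                        # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

rng = random.Random(0)

def make_patch(road):
    # 3x3 patch flattened; "roads" are bright along the centre column.
    return [rng.random() * 0.3 + (0.7 if road and c % 3 == 1 else 0.0)
            for c in range(9)]

X = [make_patch(True) for _ in range(40)] + [make_patch(False) for _ in range(40)]
Y = [1] * 40 + [0] * 40
w, b = train(X, Y)
pred = [sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5 for x in X]
acc = sum(p == y for p, y in zip(pred, Y)) / len(Y)
print(acc)
```

Real aerial labeling replaces the linear model with a deep network, uses much larger patches and training sets, and must cope with the noisy, misregistered map labels that the thesis's robust loss functions target.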
