381

Estimating Non-homogeneous Intensity Matrices in Continuous Time Multi-state Markov Models

Lebovic, Gerald 31 August 2011 (has links)
Multi-state Markov (MSM) models can be used to characterize the behaviour of categorical outcomes measured repeatedly over time. Kalbfleisch and Lawless (1985) and Gentleman et al. (1994) examine the MSM model under the assumption of time-homogeneous transition intensities. In the context of non-homogeneous intensities, current methods use piecewise constant approximations, which are less than ideal. We propose a local likelihood method, based on Tibshirani and Hastie (1987) and Loader (1996), to estimate the transition intensities as continuous functions of time. In particular, the local EM algorithm suggested by Betensky et al. (1999) is employed to estimate the non-homogeneous intensities in the presence of missing data. A simulation study comparing the piecewise constant method with the local EM method is conducted using two different sets of underlying intensities. In addition, model assessment tools such as bandwidth selection, grid size selection, and bootstrapped percentile intervals are examined. Lastly, the method is applied to an HIV data set to examine the intensities with regard to depression scores. Although computationally intensive, the method appears viable for estimating non-homogeneous intensities and outperforms existing methods.
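For orientation only (this is the standard formulation, not the thesis's own notation), the objects being estimated can be written as a time-varying generator matrix, with the local likelihood replacing one global fit by kernel-weighted fits around each time point:

```latex
% Time-inhomogeneous intensity (generator) matrix for states 1..K:
%   q_{rs}(t) >= 0 for r != s,   q_{rr}(t) = -\sum_{s \neq r} q_{rs}(t).
% The transition probabilities solve the Kolmogorov forward equations:
\[
  \frac{\partial}{\partial t} P(t_0, t) = P(t_0, t)\, Q(t),
  \qquad P(t_0, t_0) = I .
\]
% A piecewise constant approximation fixes Q(t) = Q_j on each interval,
% whereas a local likelihood estimate at a time t_0 maximizes a
% kernel-weighted log-likelihood with bandwidth h:
\[
  \ell_{h}(\theta; t_0) \;=\; \sum_{i} K\!\left(\frac{t_i - t_0}{h}\right) \ell_i(\theta),
\]
% yielding a smooth estimate \hat{Q}(t_0) as t_0 varies over a grid.
```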
382

Cooperative and intelligent control of multi-robot systems using machine learning

Wang, Ying 05 1900 (has links)
This thesis investigates cooperative and intelligent control of autonomous multi-robot systems in a dynamic, unstructured and unknown environment and makes significant original contributions with regard to self-deterministic learning for robot cooperation, evolutionary optimization of robotic actions, improvement of system robustness, vision-based object tracking, and real-time performance. A distributed multi-robot architecture is developed which will facilitate operation of a cooperative multi-robot system in a dynamic and unknown environment in a self-improving, robust, and real-time manner. It is a fully distributed and hierarchical architecture with three levels. By combining several popular AI, soft computing, and control techniques such as learning, planning, reactive paradigm, optimization, and hybrid control, the developed architecture is expected to facilitate effective autonomous operation of cooperative multi-robot systems in a dynamically changing, unknown, and unstructured environment. A machine learning technique is incorporated into the developed multi-robot system for self-deterministic and self-improving cooperation and coping with uncertainties in the environment. A modified Q-learning algorithm termed Sequential Q-learning with Kalman Filtering (SQKF) is developed in the thesis, which can provide fast multi-robot learning. By arranging the robots to learn according to a predefined sequence, modeling the effect of the actions of other robots in the work environment as Gaussian white noise and estimating this noise online with a Kalman filter, the SQKF algorithm seeks to solve several key problems in multi-robot learning. As a part of low-level sensing and control in the proposed multi-robot architecture, a fast computer vision algorithm for color-blob tracking is developed to track multiple moving objects in the environment. By removing the brightness and saturation information in an image and filtering unrelated information based on statistical features and domain knowledge, the algorithm solves the problems of uneven illumination in the environment and improves real-time performance. In order to validate the developed approaches, a Java-based simulation system and a physical multi-robot experimental system are developed to successfully transport an object of interest to a goal location in a dynamic and unknown environment with complex obstacle distribution. The developed approaches in this thesis are implemented in the prototype system and rigorously tested and validated through computer simulation and experimentation.
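The SQKF algorithm itself is not reproduced in the abstract; the following is a minimal, hypothetical sketch of the general idea described above: a standard Q-learning update in which the disturbance caused by other robots acting in the same workspace is treated as Gaussian noise, tracked online by a scalar Kalman filter and used to temper the update. All names and parameters are illustrative, not the author's.

```python
import numpy as np

class SQKFAgentSketch:
    """Illustrative only: Q-learning with a scalar Kalman filter tracking
    the 'noise' contributed by other robots' concurrent actions."""

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9):
        self.Q = np.zeros((n_states, n_actions))
        self.alpha, self.gamma = alpha, gamma
        # Kalman filter state: estimated mean and variance of the disturbance.
        self.noise_mean, self.noise_var = 0.0, 1.0
        self.process_var, self.obs_var = 1e-3, 0.5  # assumed constants

    def _kalman_update(self, observed_disturbance):
        # Predict: the disturbance level may drift between updates.
        self.noise_var += self.process_var
        # Correct: blend the new observation into the estimate.
        k = self.noise_var / (self.noise_var + self.obs_var)
        self.noise_mean += k * (observed_disturbance - self.noise_mean)
        self.noise_var *= (1.0 - k)

    def update(self, s, a, reward, s_next, expected_reward):
        # Treat the gap between expected and observed reward as the
        # disturbance caused by the other robots.
        self._kalman_update(reward - expected_reward)
        # Temper the effective learning rate when the environment is noisy.
        eff_alpha = self.alpha / (1.0 + self.noise_var)
        td_target = reward - self.noise_mean + self.gamma * self.Q[s_next].max()
        self.Q[s, a] += eff_alpha * (td_target - self.Q[s, a])
        return self.Q[s, a]
```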
383

Multi-objective optimization for scheduling elective surgical patients at the Health Sciences Centre in Winnipeg

Tan, Yin Yin 12 September 2008 (has links)
Health Sciences Centre (HSC) in Winnipeg is the major healthcare facility serving Manitoba, Northwestern Ontario, and Nunavut. An evaluation of HSC’s adult surgical patient flow revealed that one major barrier to smooth flow was their Operating Room (OR) scheduling system. This thesis presents a new two-stage elective OR scheduling system for HSC, which generates weekly OR schedules that reduce artificial variability in order to facilitate smooth patient flow. The first stage reduces day-to-day variability while the second stage reduces variability occurring within a day. The scheduling processes in both stages are mathematically modelled as multi-objective optimization problems. An attempt was made to solve both models using lexicographic goal programming. However, this proved to be an unacceptable method for the second stage, so a new multi-objective genetic algorithm, Nondominated Sorting Genetic Algorithm II – Operating Room (NSGAII-OR), was developed. Results indicate that if the system is implemented at HSC, their surgical patient flow will likely improve. / October 2008
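The NSGAII-OR variant is not described in enough detail here to reproduce; the sketch below only illustrates the Pareto-dominance test and non-dominated sorting step that any NSGA-II-style algorithm builds on, assuming all objectives are minimized.

```python
def dominates(a, b):
    """True if solution a dominates b (all objectives <= and at least one <),
    assuming every objective is to be minimized."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objectives):
    """Group solutions into Pareto fronts, as in NSGA-II's first phase.
    `objectives` is a list of tuples of objective values."""
    fronts, remaining = [], list(range(len(objectives)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objectives[j], objectives[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Example: three candidate OR schedules scored on two hypothetical objectives.
print(non_dominated_sort([(10, 4), (12, 3), (13, 5)]))  # -> [[0, 1], [2]]
```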
384

Trust and reputation for agent societies

Sabater Mir, Jordi 28 July 2002 (has links)
No description available.
385

Running stream-like programs on heterogeneous multi-core systems

Carpenter, Paul 24 October 2011 (has links)
All major semiconductor companies are now shipping multi-cores. Phones, PCs, laptops, and mobile internet devices will all require software that can make effective use of these cores. Writing high-performance parallel software is difficult, time-consuming and error prone, increasing both time-to-market and cost. Software outlives hardware; it typically takes longer to develop new software than hardware, and legacy software tends to survive for a long time, during which the number of cores per system will increase. Development and maintenance productivity will be improved if parallelism and technical details are managed by the machine, while the programmer reasons about the application as a whole.

Parallel software should be written using domain-specific high-level languages or extensions. These languages reveal implicit parallelism, which would be obscured by a sequential language such as C. When memory allocation and program control are managed by the compiler, the program's structure and data layout can be safely and reliably modified by high-level compiler transformations. One important application domain contains so-called stream programs, which are structured as independent kernels interacting only through one-way channels, called streams. Stream programming is not applicable to all programs, but it arises naturally in audio and video encode and decode, 3D graphics, and digital signal processing. This representation enables high-level transformations, including kernel unrolling and kernel fusion.

This thesis develops new compiler and run-time techniques for stream programming. The first part of the thesis is concerned with a statically scheduled stream compiler. It introduces a new static partitioning algorithm, which determines which kernels should be fused, in order to balance the loads on the processors and interconnects. A good partitioning algorithm is crucial if the compiler is to produce efficient code. The algorithm also takes account of downstream compiler passes---specifically software pipelining and buffer allocation---and it models the compiler's ability to fuse kernels. The latter is important because the compiler may not be able to fuse arbitrary collections of kernels. This thesis also introduces a static queue sizing algorithm. This algorithm is important when memory is distributed, especially when local stores are small. The algorithm takes account of latencies and variations in computation time, and is constrained by the sizes of the local memories.

The second part of this thesis is concerned with dynamic scheduling of stream programs. First, it investigates the performance of known online, non-preemptive, non-clairvoyant dynamic schedulers. Second, it proposes two dynamic schedulers for stream programs. The first is specifically for one-dimensional stream programs. The second is more general: it does not need to be told the stream graph, but it has slightly larger overhead.

This thesis also introduces some support tools related to stream programming. StarssCheck is a debugging tool, based on Valgrind, for the StarSs task-parallel programming language. It generates a warning whenever the program's behaviour contradicts a pragma annotation. Such behaviour could otherwise lead to exceptions or race conditions. StreamIt to OmpSs is a tool to convert a streaming program in the StreamIt language into a dynamically scheduled task-based program using StarSs.
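To make the terminology concrete, here is a small, hypothetical stream program written as Python generators: each kernel is an independent stage, the one-way channels are the generator pipeline, and "kernel fusion" corresponds to collapsing two stages into one loop body. This illustrates the programming model only, not the thesis's compiler.

```python
# A toy stream pipeline: source -> scale -> moving_sum, connected by
# one-way channels (Python generators standing in for streams).

def source(n):
    """Kernel 1: produce a stream of samples."""
    for i in range(n):
        yield float(i)

def scale(stream, gain):
    """Kernel 2: stateless per-element transformation."""
    for x in stream:
        yield gain * x

def moving_sum(stream, window=4):
    """Kernel 3: stateful kernel with a sliding window over the stream."""
    buf = []
    for x in stream:
        buf.append(x)
        if len(buf) == window:
            yield sum(buf)
            buf.pop(0)

def scale_and_sum(stream, gain, window=4):
    """'Fused' kernel: the kernel-fusion transformation corresponds to
    merging two stages into a single loop like this one."""
    buf = []
    for x in stream:
        buf.append(gain * x)
        if len(buf) == window:
            yield sum(buf)
            buf.pop(0)

# Both pipelines compute the same stream of values.
assert list(moving_sum(scale(source(10), 2.0))) == \
       list(scale_and_sum(source(10), 2.0))
```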
386

Multi-View Imaging of Drosophila Embryos

Groh, Paul January 2008 (has links)
There are several reasons for imaging a single, developing embryo from multiple view points. The embryo is a complex biomechanical system and morphogenesis movements in one region typically produce motions in adjacent areas. Multi-view imaging can be used to observe morphogenesis and gain a better understanding of normal and abnormal embryo development. The system would allow the embryo to be rotated to a specific vantage point so that a particular morphogenetic process may be observed clearly. Moreover, a multi-view system can be used to gather images to create an accurate three-dimensional reconstruction of the embryo for computer simulations. The scope of this thesis was to construct an apparatus that could capture multi-view images for these applications. A multi-view system for imaging live Drosophila melanogaster embryos, the first of its kind, is presented. Embryos for imaging are collected from genetically modified Drosophila stocks that contain a green fluorescent protein (GFP), which highlights only specific cell components. The embryos are mounted on a wire that is rotated under computer control to desired viewpoints in front of the objective of a custom-built confocal microscope. The optical components for the horizontally aligned microscope were researched, selected and installed specifically for this multi-viewing apparatus. The multiple images of the stacks from each viewpoint are deconvolved and collaged so as to show all of the cells visible from that view. The process of rotating and capturing images can be repeated for many angles over the course of one hour. Experiments were conducted to verify the repeatability of the rotation mechanism and to determine the number of image slices required to produce a satisfactory image collage from each viewpoint. Additional testing was conducted to establish that the system could capture a complete 360° view of the embryo, and a time-lapse study was done to verify that a developing embryo could be imaged repeatedly from two separate angles during ventral furrow formation. An analysis of the effects of the imaging system on embryos in terms of photo-bleaching and viability is presented.
387

Scheduling in a Multi-Sector Wireless Cell

Lin, Chao-Wen January 2009 (has links)
In this thesis, we propose a scheduling problem for the downlink of a single-cell system with multiple sectors. We formulate an optimization problem based on a generalized round-robin scheme that aims at minimizing the cycle length necessary to provide one timeslot to each user, while avoiding harmful interference. Since this problem is under-constrained and might have multiple solutions, we propose a second optimization problem in which we seek a schedule that minimizes the cycle length while being as efficient as possible in resource utilization. Both of these problems are large integer programming problems that can be solved numerically using a commercial solver, but for real-time use, efficient heuristics need to be developed. We design heuristics for these two problems and validate them by comparing their performance to the optimal solutions.
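The thesis's integer programs and heuristics are not given in the abstract; the following is a hypothetical greedy sketch of the underlying idea: pack users into as few timeslots as possible while never placing two mutually interfering users in the same slot, which amounts to greedy graph colouring. The interference model below (same or adjacent sectors interfere) is an assumption for illustration.

```python
def greedy_schedule(users, interferes):
    """Assign each user to a timeslot so that no two users sharing a slot
    interfere; a greedy stand-in for minimizing the round-robin cycle length.
    `users` is a list of (name, sector); `interferes(u, v)` -> bool."""
    slots = []  # each slot is a list of users served simultaneously
    for u in users:
        for slot in slots:
            if not any(interferes(u, v) for v in slot):
                slot.append(u)
                break
        else:
            slots.append([u])  # no compatible slot: the cycle grows by one
    return slots

def interferes(u, v, n_sectors=6):
    """Assumed model: users in the same sector or in adjacent sectors
    cannot be served in the same timeslot."""
    d = abs(u[1] - v[1]) % n_sectors
    return min(d, n_sectors - d) <= 1

users = [("u1", 0), ("u2", 2), ("u3", 4), ("u4", 1), ("u5", 3)]
print(greedy_schedule(users, interferes))
# -> [[('u1', 0), ('u2', 2), ('u3', 4)], [('u4', 1), ('u5', 3)]]  (2 timeslots)
```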
388

An Exploratory Study of Storytelling Using Digital Tabletops

Mostafapourdehcheshmeh, Mehrnaz 18 September 2013 (has links)
Storytelling is a powerful means of communication that has been employed by humankind from the early stages of development. As technology has advanced, the medium through which people tell stories has evolved from verbal, to writing, performing on stage, and more recently television, movies, and video games. A promising medium for telling stories in an in-person, one-on-one or one-to-many setting is a digital table—a large, horizontal multi-touch surface—that can provide quick access to visuals and narrative elements at the touch of one's hands and fingers. In this work, I present the results of an exploratory study of storytellers' interaction behaviours while working with digital tables and their physical counterparts, sand and water. My results highlight some of the differences among these media that can both help and hinder a storyteller's narrative process. I use these findings to present implications for the design of storytelling applications on digital multi-touch surfaces.
389

Resurshantering i Dual-core kluster (Resource management in dual-core clusters)

Gustafsson, Johan, Lingbrand, Mikael January 2008 (has links)
With the new generation of processors, where several CPU cores are placed on a single chip, performance is increased through parallel execution. In this report we present a survey of general multiprocessor theory, covering different techniques for both hardware and software. We have also carried out empirical tests on a computer cluster, evaluating the two programs Fluent and CFX, which perform CFD computations. For each program, three models were used for simulations with a varying number of compute nodes. We investigated what is most cost-effective: using one or both CPU cores in the different simulations. To test this, we ran simulations using one and two CPU cores, respectively, on the compute nodes. During the simulations we collected measurements such as network, memory, and CPU load for all nodes, as well as execution times. These values were then compiled, and they show that the larger a model is, the more it pays off to run with a single CPU core. In only one of our tests did it prove beneficial to use both CPU cores. A formula was then developed to demonstrate the differences between different numbers of processes with one and two CPU cores per node, respectively. This formula can be applied to calculate the total cost per simulation using the annual cost of the nodes and licenses used.
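The thesis's actual cost formula is not reproduced in the abstract; the sketch below only illustrates the kind of calculation described, converting a simulation's wall-clock time and resource usage into a cost based on the annual cost of nodes and licenses. All numbers and the exact form of the formula are assumptions.

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

def cost_per_simulation(runtime_s, n_nodes, n_licenses,
                        annual_node_cost, annual_license_cost):
    """Illustrative cost model: each node and license is charged for the
    fraction of the year the simulation occupies it."""
    node_cost = n_nodes * annual_node_cost * runtime_s / SECONDS_PER_YEAR
    license_cost = n_licenses * annual_license_cost * runtime_s / SECONDS_PER_YEAR
    return node_cost + license_cost

# Example comparison: a one-core-per-node run may take longer but use
# fewer licenses than a two-core-per-node run, so the totals can be compared.
print(cost_per_simulation(runtime_s=7200, n_nodes=8, n_licenses=8,
                          annual_node_cost=2000.0, annual_license_cost=5000.0))
```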
390

Error Rate Performance of Multi-Hop Communication Systems Over Nakagami-m Fading Channel

Sajjad, Hassan, Jamil, Muhammad January 2012 (has links)
This work examines the error rate performance of multi-hop communication systems employing Single Input Single Output (SISO) transmissions over a Nakagami-m fading channel. Mobile multi-hop relaying (MMR) has been adopted in several Broadband Wireless Access Networks (BWANs) as a cost-effective means of extending the coverage and improving the capacity of these wireless networks. In an MMR system, communication between the source node and the destination node is achieved through an intermediate node (i.e., a relay station). It is widely accepted that multi-hop relaying can provide higher capacity and can reduce interference in BWANs. Such claims, however, have not been quantified, and quantifying them is an essential step toward justifying the wide deployment of relay stations. In this thesis, the Bit Error Rate (BER) of multi-hop communication systems has been analysed. Different kinds of fading channels have been used to estimate the error rate performance for wireless transmission. Binary Phase Shift Keying (BPSK) has been employed as the modulation technique and Additive White Gaussian Noise (AWGN) has been used as the channel noise. The same Signal to Noise Ratio (SNR) was used to estimate the channel performance. Three channels were compared by simulating their BER, namely Rayleigh, Rician, and Nakagami. Matlab has been used for the simulations.
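As a concrete illustration of the kind of simulation described (not the thesis's own Matlab code), the sketch below estimates the BER of coherently detected BPSK over AWGN, Rayleigh, and Nakagami-m fading with a simple Monte Carlo loop in NumPy; the Nakagami-m amplitude is generated from a Gamma-distributed channel power with unit mean.

```python
import numpy as np

def bpsk_ber(snr_db, n_bits=200_000, channel="awgn", m=2.0, rng=None):
    """Monte Carlo BER of BPSK with coherent detection over a single hop.
    channel: 'awgn', 'rayleigh', or 'nakagami' (fading parameter m)."""
    rng = rng or np.random.default_rng(0)
    bits = rng.integers(0, 2, n_bits)
    symbols = 2.0 * bits - 1.0                      # BPSK mapping: 0 -> -1, 1 -> +1
    snr = 10.0 ** (snr_db / 10.0)
    noise = rng.normal(0.0, np.sqrt(1.0 / (2.0 * snr)), n_bits)

    if channel == "awgn":
        h = np.ones(n_bits)
    elif channel == "rayleigh":
        h = np.sqrt(rng.gamma(1.0, 1.0, n_bits))    # Rayleigh = Nakagami with m = 1
    elif channel == "nakagami":
        h = np.sqrt(rng.gamma(m, 1.0 / m, n_bits))  # power ~ Gamma(m, 1/m), mean 1
    else:
        raise ValueError(channel)

    received = h * symbols + noise
    decided = (received * h > 0).astype(int)        # coherent detection, h known
    return np.mean(decided != bits)

for ch in ("awgn", "rayleigh", "nakagami"):
    print(ch, bpsk_ber(snr_db=10.0, channel=ch))
```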
