171

Towards a computational model of the colonic crypt with a realistic, deformable geometry

Dunn, Sara-Jane Nicole January 2011 (has links)
Colorectal cancer (CRC) is one of the most prevalent and deadly forms of cancer. Its high mortality rate is associated with difficulties in early detection, which is crucial to survival. The onset of CRC is marked by macroscopic changes in intestinal tissue, originating from a deviation in the healthy cell dynamics of glands known as the crypts of Lieberkuhn. It is believed that accumulated genetic alterations confer on mutated cells the ability to persist in the crypts, which can lead to the formation of a benign tumour through localised proliferation. Stress on the crypt walls can lead to buckling, or crypt fission, and the further spread of mutant cells. Elucidating the initial perturbations in crypt dynamics is not possible experimentally, but such investigations could be made using a predictive, computational model. This thesis proposes a new discrete crypt model, which focuses on the interaction between cell- and tissue-level behaviour, while incorporating key subcellular components. The model contains a novel description of the role of the surrounding tissue and musculature, which allows the shape of the crypt to evolve and deform. A two-dimensional (2D) cross-sectional geometry is considered. Simulation results reveal how the shape of the crypt base may contribute mechanically to the asymmetric division events typically associated with the stem cells in this region. The model predicts that epithelial cell migration may arise due to feedback between cell loss at the crypt collar and density-dependent cell division, an hypothesis which can be investigated in a wet lab. Further, in silico experiments illustrate how this framework can be used to investigate the spread of mutations, and conclude that a reduction in cell migration is key to confer persistence on mutant cell populations. A three-dimensional (3D) model is proposed to remove the spatial restrictions imposed on cell migration in 2D, and preliminary simulation results agree with the hypotheses generated in 2D. Computational limitations that currently restrict extension to a realistic 3D geometry are discussed. These models enable investigation of the role that mechanical forces play in regulating tissue homeostasis, and make a significant contribution to the theoretical study of the onset of crypt deformation under pre-cancerous conditions.
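To give a concrete flavour of the density-dependent feedback hypothesised above, the following Python sketch evolves a single column of cells in which division slows as the column fills and cells are shed from the collar; the parameter values and the one-dimensional setting are invented for illustration and are not taken from the thesis model.

```python
import random

def simulate_crypt_column(n_steps=200, capacity=30, loss_rate=0.1, seed=0):
    """Toy 1D column of cells: division is density-dependent (it slows as the
    column fills) and cells are shed from the top (the 'crypt collar').
    The column size settles where loss and division balance, mimicking the
    feedback described in the abstract.  Illustrative only -- not the
    thesis's discrete mechanical model."""
    rng = random.Random(seed)
    cells = 10                      # initial number of cells in the column
    history = []
    for _ in range(n_steps):
        # density-dependent division: probability falls linearly with crowding
        p_divide = max(0.0, 0.3 * (1.0 - cells / capacity))
        births = sum(1 for _ in range(cells) if rng.random() < p_divide)
        # cell loss ("shedding") at the collar
        deaths = sum(1 for _ in range(cells) if rng.random() < loss_rate)
        cells = max(1, cells + births - deaths)
        history.append(cells)
    return history

if __name__ == "__main__":
    print(simulate_crypt_column()[-10:])   # column size settles near a balance point
```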
172

Automatic learning of British Sign Language from signed TV broadcasts

Buehler, Patrick January 2010 (has links)
In this work, we will present several contributions towards automatic recognition of BSL signs from continuous signing video sequences. Specifically, we will address three main points: (i) automatic detection and tracking of the hands using a generative model of the image; (ii) automatic learning of signs from TV broadcasts using the supervisory information available from subtitles; and (iii) generalisation given sign examples from one signer to recognition of signs from different signers. Our source material consists of many hours of video with continuous signing and corresponding subtitles recorded from BBC digital television. This is very challenging material for a number of reasons, including self-occlusions of the signer, self-shadowing, blur due to the speed of motion, and in particular the changing background. Knowledge of the hand position and hand shape is a pre-requisite for automatic sign language recognition. We cast the problem of detecting and tracking the hands as inference in a generative model of the image, and propose a complete model which accounts for the positions and self-occlusions of the arms. Reasonable configurations are obtained by efficiently sampling from a pictorial structure proposal distribution. The results using our method exceed the state-of-the-art for the length and stability of continuous limb tracking. Previous research in sign language recognition has typically required manual training data to be generated for each sign, e.g. a signer performing each sign in controlled conditions - a time-consuming and expensive procedure. We show that for a given signer, a large number of BSL signs can be learned automatically from TV broadcasts using the supervisory information available from subtitles broadcast simultaneously with the signing. We achieve this by modelling the problem as one of multiple instance learning. In this way we are able to extract the sign of interest from hours of signing footage, despite the very weak and "noisy" supervision from the subtitles. Lastly, we show that automatic recognition of signs can be extended to multiple signers. Using automatically extracted examples from a single signer, we train discriminative classifiers and show that these can successfully classify and localise signs in new signers. This demonstrates that the descriptor we extract for each frame (i.e. hand position, hand shape, and hand orientation) generalises between different signers.
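The weakly supervised learning step can be pictured as multiple instance learning over temporal windows: every subtitle line containing the target word yields a "positive bag" of candidate windows, only some of which actually show the sign. The sketch below builds such bags; the data layout and helper names are hypothetical assumptions, not the pipeline used in the thesis.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Bag:
    """A bag of candidate temporal windows (instances) for multiple instance
    learning: positive bags come from subtitle lines containing the target
    word, negative bags from lines that do not."""
    label: bool
    windows: List[Tuple[int, int]]  # (start_frame, end_frame) candidates

def build_bags(subtitles, target_word, window_len=25, stride=5):
    """subtitles: list of (start_frame, end_frame, text).  Returns MIL bags.
    A positive bag contains many candidate windows, only an unknown subset of
    which shows the sign -- exactly the weak, 'noisy' supervision the abstract
    describes.  Hypothetical helper, not the thesis code."""
    bags = []
    for start, end, text in subtitles:
        positive = target_word.lower() in text.lower()
        windows = [(f, f + window_len)
                   for f in range(start, max(start + 1, end - window_len), stride)]
        bags.append(Bag(label=positive, windows=windows))
    return bags

if __name__ == "__main__":
    subs = [(0, 100, "the weather will be sunny"), (100, 220, "rain is expected later")]
    for b in build_bags(subs, "rain"):
        print(b.label, len(b.windows))
```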
173

Hybrid tractability of constraint satisfaction problems with global constraints

Thorstensen, Evgenij January 2013 (has links)
A wide range of problems can be modelled as constraint satisfaction problems (CSPs), that is, a set of constraints that must be satisfied simultaneously. Constraints can either be represented extensionally, by explicitly listing allowed combinations of values, or intensionally, whether by an equation, propositional logic formula, or other means. Intensionally represented constraints, known as global constraints, are a powerful modelling technique, and many modern CSP solvers provide them. We give examples to show how problems that deal with product configuration can be modelled with such constraints, and how this approach relates to other modelling formalisms. The complexity of CSPs with extensionally represented constraints is well understood, and there are several known techniques that can be used to identify tractable classes of such problems. For CSPs with global constraints, however, many of these techniques fail, and far fewer tractable classes are known. In order to remedy this state of affairs, we undertake a systematic review of research into the tractability of CSPs. In particular, we look at CSPs with extensionally represented constraints in order to understand why many of the techniques that give tractable classes for this case fail for CSPs with global constraints. The above investigation leads to two discoveries. First, many restrictions on how the constraints of a CSP interact implicitly rely on a property of extensionally represented constraints to guarantee tractability. We identify this property as being a bound on the number of solutions in key parts of the instance, and find classes of global constraints that also possess this property. For such classes, we show that many known tractability results apply. Furthermore, global constraints allow us to treat entire CSP instances as constraints. We combine this observation with the above result, and obtain new tractable classes of CSPs by dividing a CSP into smaller CSPs drawn from known tractable classes. Second, for CSPs that simply do not possess the above property, we look at how the constraints of an instance overlap, and how assignments to the overlapping parts extend to the rest of the problem. We show that assignments that extend in the same way can be identified. Combined with a new structural restriction, this observation leads to a second set of tractable classes. We conclude with a summary, as well as some observations about potential for future work in this area.
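The distinction between extensional and intensional (global) constraints can be made concrete with a small sketch: one constraint is given as an explicit table of allowed tuples, the other as a predicate over its whole scope (an all-different constraint). The brute-force solver and the variable names below are illustrative assumptions, not material from the thesis.

```python
from itertools import product

# A tiny CSP over variables x, y, z with domain {0, 1, 2}.
domains = {"x": [0, 1, 2], "y": [0, 1, 2], "z": [0, 1, 2]}

# Extensional constraint: allowed combinations listed explicitly.
ext_xy = {"scope": ("x", "y"), "tuples": {(0, 1), (1, 2), (2, 0)}}

# Intensional ("global") constraint: a predicate over its whole scope,
# here an all-different constraint given as code rather than a table.
glob_xyz = {"scope": ("x", "y", "z"),
            "predicate": lambda vals: len(set(vals)) == len(vals)}

def satisfies(assignment, constraint):
    vals = tuple(assignment[v] for v in constraint["scope"])
    if "tuples" in constraint:
        return vals in constraint["tuples"]
    return constraint["predicate"](vals)

def solve(domains, constraints):
    """Brute-force search over all assignments -- fine for illustration, but
    exponential in general, which is why identifying tractable classes (the
    subject of the thesis) matters."""
    names = list(domains)
    for combo in product(*(domains[n] for n in names)):
        assignment = dict(zip(names, combo))
        if all(satisfies(assignment, c) for c in constraints):
            yield assignment

if __name__ == "__main__":
    print(next(solve(domains, [ext_xy, glob_xyz])))   # {'x': 0, 'y': 1, 'z': 2}
```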
174

Graph colourings and games

Meeks, Kitty M. F. T. January 2012 (has links)
Graph colourings and combinatorial games are two very widely studied topics in discrete mathematics. This thesis addresses the computational complexity of a range of problems falling within one or both of these subjects. Much of the thesis is concerned with the computational complexity of problems related to the combinatorial game (Free-)Flood-It, in which players aim to make a coloured graph monochromatic ("flood" the graph) with the minimum possible number of flooding operations; such problems are known to be computationally hard in many cases. We begin by proving some general structural results about the behaviour of the game, including a powerful characterisation of the number of moves required to flood a graph in terms of the number of moves required to flood its spanning trees; these structural results are then applied to prove tractability results about a number of flood-filling problems. We also consider the computational complexity of flood-filling problems when the game is played on a rectangular grid of fixed height (focussing in particular on 3xn and 2xn grids), answering an open question of Clifford, Jalsenius, Montanaro and Sach. The final chapter concerns the parameterised complexity of list problems on graphs of bounded treewidth. We prove structural results determining the list edge chromatic number and list total chromatic number of graphs with bounded treewidth and large maximum degree, which are special cases of the List (Edge) Colouring Conjecture and Total Colouring Conjecture respectively. Using these results, we show that the problem of determining either of these quantities is fixed parameter tractable, parameterised by the treewidth of the input graph. Finally, we analyse a list version of the Hamilton Path problem, and prove it to be W[1]-hard when parameterised by the pathwidth of the input graph. These results answer two open questions of Fellows, Fomin, Lokshtanov, Rosamond, Saurabh, Szeider and Thomassen.
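For readers unfamiliar with the game, a single (Free-)Flood-It move recolours the monochromatic region containing a chosen vertex, possibly merging it with neighbouring regions; the goal is to make the whole graph one colour in as few moves as possible. A minimal sketch of this operation on an arbitrary graph follows; the adjacency-dictionary representation is an illustrative choice, not code from the thesis.

```python
def flood_move(adj, colour, source, new_colour):
    """One Flood-It move: recolour the monochromatic component containing
    `source` to `new_colour`, which may merge it with neighbouring regions.
    adj: {vertex: set_of_neighbours}, colour: {vertex: colour}."""
    old = colour[source]
    # find the monochromatic component containing source (depth-first search)
    component, frontier = {source}, [source]
    while frontier:
        v = frontier.pop()
        for u in adj[v]:
            if u not in component and colour[u] == old:
                component.add(u)
                frontier.append(u)
    for v in component:
        colour[v] = new_colour
    return colour

def is_flooded(colour):
    return len(set(colour.values())) == 1

if __name__ == "__main__":
    # A path a-b-c coloured red, blue, red: two moves suffice to flood it.
    adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
    colour = {"a": "red", "b": "blue", "c": "red"}
    flood_move(adj, colour, "a", "blue")   # a and b merge into one blue region
    flood_move(adj, colour, "a", "red")    # the whole path becomes red
    print(is_flooded(colour))              # True
```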
175

Formal methods for the analysis of wireless network protocols

Fruth, Matthias January 2011 (has links)
In this thesis, we present novel software technology for the analysis of wireless networks, an emerging area of computer science. To address the widely acknowledged lack of formal foundations in this field, probabilistic model checking, a formal method for verification and performance analysis, is used. Unlike testing and simulation, it systematically explores the full state space and therefore allows reasoning about all possible behaviours of a system. This thesis contributes to the design, modelling, and analysis of ad-hoc networks and randomised distributed coordination protocols. First, we present a new hybrid approach that effectively combines probabilistic model checking and state-of-the-art models from the simulation community in order to improve the reliability of design and analysis of wireless sensor networks and their protocols. We describe algorithms for the automated generation of models for both analysis methods and their implementation in a tool. Second, we study spatial properties of wireless sensor networks, mainly with respect to Quality of Service and energy properties. Third, we investigate the contention resolution protocol of the networking standard ZigBee. We build a generic stochastic model for this protocol and analyse its Quality of Service and energy properties. Furthermore, we assess the applicability of different interference models. Fourth, we explore slot allocation protocols, which serve as a bandwidth allocation mechanism for ad-hoc networks. We build a generic model for this class of protocols, study real-world protocols, and optimise protocol parameters with respect to Quality of Service and energy constraints. We combine this with the novel formalisms for wireless communication and interference models, and finally we optimise local (node) and global (network) routing policies. This is the first application of probabilistic model checking both to protocols of the ZigBee standard and to slot allocation protocols.
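At its core, probabilistic model checking of such protocols reduces quantitative questions (for example, "with what probability is a packet eventually delivered?") to numerical computations over a stochastic model. The sketch below computes a reachability probability in a tiny discrete-time Markov chain by fixed-point iteration; the retry/collision example and state names are invented, and real tools such as PRISM handle far richer models and properties.

```python
def reachability_probability(transitions, targets, n_iter=1000):
    """Probability of eventually reaching a target state in a discrete-time
    Markov chain, computed by simple fixed-point (value) iteration.
    transitions: {state: {successor: probability}}.  This is the basic
    computation behind probabilistic model checking; production tools solve
    it exactly and over much richer, parameterised models."""
    states = list(transitions)
    prob = {s: 1.0 if s in targets else 0.0 for s in states}
    for _ in range(n_iter):
        for s in states:
            if s in targets:
                continue
            prob[s] = sum(p * prob[t] for t, p in transitions[s].items())
    return prob

if __name__ == "__main__":
    # A node transmits: success with 0.8, collision with 0.2; after a
    # collision it either backs off and retries, or gives up.
    dtmc = {
        "try":     {"ok": 0.8, "collide": 0.2},
        "collide": {"try": 0.5, "lost": 0.5},
        "ok":      {"ok": 1.0},
        "lost":    {"lost": 1.0},
    }
    # P(eventually delivered | start in "try") = 0.8 / (1 - 0.2 * 0.5)
    print(round(reachability_probability(dtmc, {"ok"})["try"], 4))   # 0.8889
```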
176

Topics in monitoring and planning for embedded real-time systems

Ho, Hsi-Ming January 2015 (has links)
The verification of real-time systems has gained much interest in the formal verification community during the past two decades. In this thesis, we investigate two real-time verification problems that benefit from the techniques normally used in untimed verification. The first part of this thesis is concerned with the monitoring of real-time specifications. We study the expressiveness of metric temporal logics over timed words, a problem that dates back to the early 1990s. We show that the logic obtained by extending Metric Temporal Logic (MTL) with two families of new modalities is expressively complete for the Monadic First-Order Logic of Order and Metric (FO[<,+1]) in time-bounded settings. Furthermore, by allowing rational constants, expressive completeness also holds in the general (time-unbounded) setting. Finally, we incorporate several notions and techniques from LTL monitoring to obtain the first trace-length independent monitoring procedure for this logic. The second part of this thesis concerns a decision problem regarding UAVs: given a set of targets (each assigned a relative deadline) and flight times between each pair of targets, is there a way to coordinate a flock of k identical UAVs so that all targets are visited infinitely often and no target is ever left unvisited for a time longer than its relative deadline? We show that the problem is PSPACE-complete even in the single-UAV case, thereby correcting an erroneous claim in the literature. We then complement this result by proposing an efficient antichain-based approach where a delayed simulation is used to prune the state space. Experimental results clearly demonstrate the effectiveness of our approach.
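The single-UAV decision problem can be illustrated by checking whether one fixed cyclic visiting order respects every target's relative deadline: if each target appears once per lap, its revisit gap equals the cycle's total flight time. The brute-force sketch below does exactly that; it is an illustrative toy (the thesis shows the general problem is PSPACE-complete, so good schedules need not be simple cycles), and the function and parameter names are assumptions.

```python
from itertools import permutations

def schedule_ok(order, flight_time, deadline):
    """Check whether repeating the cyclic visiting `order` forever keeps the
    time between consecutive visits of every target within its relative
    deadline.  flight_time[(a, b)] is the travel time from a to b.
    In a simple cycle each target is revisited exactly once per lap, so its
    revisit gap equals the total cycle time."""
    n = len(order)
    cycle = sum(flight_time[(order[i], order[(i + 1) % n])] for i in range(n))
    return all(cycle <= deadline[t] for t in order)

def find_cyclic_schedule(targets, flight_time, deadline):
    """Brute-force over cyclic orders -- exponential, for illustration only."""
    first, rest = targets[0], targets[1:]
    for perm in permutations(rest):
        order = (first,) + perm
        if schedule_ok(order, flight_time, deadline):
            return order
    return None

if __name__ == "__main__":
    targets = ("A", "B", "C")
    ft = {(a, b): 1 for a in targets for b in targets if a != b}
    print(find_cyclic_schedule(targets, ft, {"A": 3, "B": 3, "C": 4}))  # ('A', 'B', 'C')
```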
177

Extraction of clinical information from the non-invasive fetal electrocardiogram

Behar, Joachim January 2014 (has links)
Estimation of the fetal heart rate (FHR) has gained interest over the last century; low heart rate variability has been studied to identify intrauterine growth-restricted fetuses (prepartum), and abnormal FHR patterns have been associated with fetal distress during delivery (intrapartum). Several monitoring techniques have been proposed for FHR estimation, including auscultation and Doppler ultrasound. This thesis focuses on the extraction of the non-invasive fetal electrocardiogram (NI-FECG) recorded from a limited set of abdominal sensors. The main challenge with NI-FECG extraction techniques is the low signal-to-noise ratio of the FECG in the abdominal mixture, which consists of a dominant maternal ECG component, the FECG, and noise. However, the NI-FECG offers many advantages over the alternative fetal monitoring techniques, the most important being that it enables morphological analysis of the FECG, which is vital for determining whether an observed FHR event is normal or pathological. In order to advance the field of NI-FECG signal processing, the development of standardised public databases and benchmarking of a number of published and novel algorithms was necessary. Databases were created depending on the application: FHR estimation with or without a maternal chest lead reference, or directed toward FECG morphology analysis. Moreover, a FECG simulator was developed in order to account for pathological cases or rare events, which are often under-represented (or completely missing) in the existing databases. This simulator also serves as a tool for studying NI-FECG signal processing algorithms aimed at morphological analysis (which require underlying ground-truth annotations). An accurate technique for the automatic estimation of the signal quality level was also developed, optimised, and thoroughly tested on pathological cases. Such a technique is mandatory for any clinical application of FECG analysis as an external confidence index of both the input signals and the analysis outputs. Finally, a Bayesian filtering approach was implemented in order to address the NI-FECG morphology analysis problem. It was shown, for the first time, that the NI-FECG can allow accurate estimation of the fetal QT interval, which opens the way for new clinical studies on the development of the fetus during pregnancy.
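One common way to build a signal quality index for ECG-like signals is to measure the agreement between two independent beat detectors run on the same segment: high agreement suggests the extracted FECG can be trusted, while low agreement flags unreliable segments. The sketch below implements that idea for lists of detected beat times; it is a generic illustration under that assumption, not the quality estimator developed in the thesis.

```python
def beat_agreement_sqi(beats_a, beats_b, tol=0.05):
    """Fraction of beats on which two independent detectors agree (within
    `tol` seconds), used here as a crude signal-quality index: low agreement
    marks segments whose FHR or morphology estimates should not be trusted.
    beats_a and beats_b are sorted lists of beat times in seconds."""
    if not beats_a and not beats_b:
        return 1.0
    matched, j = 0, 0
    for t in beats_a:
        while j < len(beats_b) and beats_b[j] < t - tol:
            j += 1
        if j < len(beats_b) and abs(beats_b[j] - t) <= tol:
            matched += 1
            j += 1
    return matched / max(len(beats_a), len(beats_b))

if __name__ == "__main__":
    det1 = [0.40, 0.82, 1.25, 1.66, 2.10]            # detector 1 beat times (s)
    det2 = [0.41, 0.83, 1.60, 1.67, 2.09]            # detector 2, one spurious beat
    print(round(beat_agreement_sqi(det1, det2), 2))  # 0.8
```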
178

Pricing exotic options using improved strong convergence

Schmitz Abe, Klaus E. January 2008 (has links)
Today, better numerical approximations are required for multi-dimensional SDEs to improve on the poor performance of the standard Monte Carlo integration. With this aim in mind, the material in the thesis is divided into two main categories, stochastic calculus and mathematical finance. In the former, we introduce a new scheme, or discrete-time approximation, based on an idea of Paul Malliavin where, under certain conditions, a better strong convergence order is obtained than with the standard Milstein scheme, without the expensive simulation of the Lévy area. We demonstrate when the conditions of the two-dimensional problem permit this and give an exact solution for the orthogonal transformation (θ Scheme or Orthogonal Milstein Scheme). Our applications are focused on continuous-time diffusion models for the volatility and variance with their discrete-time approximations (ARV). Two theorems that measure with confidence the order of strong and weak convergence of schemes without an exact solution or expectation of the system are formally proved and tested with numerical examples. In addition, some methods for simulating the double integrals, or Lévy area, in the Milstein approximation are introduced. For mathematical finance, we review evidence of non-constant volatility and consider the implications for option pricing using stochastic volatility models. A general stochastic volatility model that represents most of the stochastic volatility models outlined in the literature is proposed. This was necessary in order to both study and understand the option price properties. The analytic closed-form solution for a European/digital option under both the Square Root Model and the 3/2 Model is given. We present the multilevel Monte Carlo (ML-MC) path simulation method, which is a powerful tool for pricing exotic options. An improved/updated version of the ML-MC algorithm using multi-schemes and a non-zero starting level is introduced. To link the contents of the thesis, we present a wide variety of exotic option pricing examples where considerable computational savings are demonstrated using the new θ Scheme and the improved Multischeme Multilevel Monte Carlo method (MSL-MC). The computational cost to achieve an accuracy of O(ε) is reduced from O(ε^-3) to O(ε^-2) for some applications.
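The multilevel Monte Carlo idea can be sketched briefly: write the expected payoff on the finest time grid as a telescoping sum of the coarsest-level estimate plus corrections between successive grids, and estimate each correction with coupled fine/coarse paths driven by the same Brownian increments, so the corrections have small variance and need few samples. The Python sketch below prices a European call under geometric Brownian motion with Euler time-stepping in this way; the parameter values are invented and the scheme shown is the plain ML-MC construction, not the thesis's θ Scheme or MSL-MC variant.

```python
import math
import random

def mlmc_gbm_call(n_levels=4, base_paths=20000, s0=1.0, k=1.0, r=0.05,
                  sigma=0.2, t=1.0, seed=1):
    """Multilevel Monte Carlo estimate of a European call under geometric
    Brownian motion, using Euler time-stepping.  Level l uses 2**l steps;
    each correction E[P_l - P_{l-1}] is estimated with the *same* Brownian
    increments on the fine and coarse grids, which keeps the corrections'
    variance small and underlies the cost reduction (roughly O(eps^-3) to
    O(eps^-2)) mentioned in the abstract.  Illustrative sketch only."""
    rng = random.Random(seed)

    def payoff(s):
        return math.exp(-r * t) * max(s - k, 0.0)

    estimate = 0.0
    for level in range(n_levels):
        n_fine = 2 ** level
        dt = t / n_fine
        n_paths = base_paths // (2 ** level)   # fewer paths on expensive levels
        acc = 0.0
        for _ in range(n_paths):
            s_fine, s_coarse, dw_pair = s0, s0, 0.0
            for step in range(n_fine):
                dw = rng.gauss(0.0, math.sqrt(dt))
                s_fine += r * s_fine * dt + sigma * s_fine * dw
                dw_pair += dw
                if level > 0 and step % 2 == 1:   # coarse grid has half the steps
                    s_coarse += r * s_coarse * 2 * dt + sigma * s_coarse * dw_pair
                    dw_pair = 0.0
            acc += payoff(s_fine) if level == 0 else payoff(s_fine) - payoff(s_coarse)
        estimate += acc / n_paths
    return estimate

if __name__ == "__main__":
    print(round(mlmc_gbm_call(), 3))   # close to the Black-Scholes value of about 0.105
```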
179

A multi-agent architecture for device management in Internet of Things environments

Cagnin, Renato Luciano. January 2015 (has links)
Advisor: Ivan Rizzo Guilherme / Committee: Paulo André Lima De Castro / Committee: Simone das Graças Domingues Prado / Abstract: The Internet of Things (IoT) is the term used to define the new Internet scenario, characterized by the connection of a wide variety of devices that have wireless communication interfaces and are immersed in a physical environment. IoT environments are open, so that new components can be incorporated; distributed, consisting of many networked components; and dynamic. These characteristics bring many challenges for the development of applications that seek to access, integrate and analyze the data produced by these devices. In this context, this work proposes a software architecture for the development of IoT applications. The proposed architecture is made up of several layers characterized by different technologies: Multi-Agent Systems, Service-Oriented Architecture and the Semantic Web. To demonstrate the feasibility of the proposal, a prototype of the architecture was developed and applied in a case study in home automation. The results show that the architecture is able to identify different devices, operations and services, and to coordinate the execution of device operations within more complex tasks according to the available contextual information / Master's thesis
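A minimal sketch of the multi-agent, service-oriented idea: each device is wrapped by an agent that advertises its operations as named services, and a coordinator composes those services into a higher-level task. The class and service names below are hypothetical and greatly simplified relative to the layered architecture proposed in the dissertation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class DeviceAgent:
    """A toy 'device agent': it advertises the operations its device offers
    as named services, in the spirit of combining multi-agent systems with a
    service-oriented architecture.  Names and structure are hypothetical."""
    name: str
    services: Dict[str, Callable[[], str]] = field(default_factory=dict)

    def invoke(self, service: str) -> str:
        return self.services[service]()

class Coordinator:
    """Composes device operations into a higher-level task (here, a simple
    'leaving home' routine standing in for contextual coordination)."""
    def __init__(self, agents: List[DeviceAgent]):
        self.agents = {a.name: a for a in agents}

    def leave_home(self) -> List[str]:
        return [self.agents["lamp"].invoke("turn_off"),
                self.agents["thermostat"].invoke("eco_mode")]

if __name__ == "__main__":
    lamp = DeviceAgent("lamp", {"turn_off": lambda: "lamp off"})
    thermostat = DeviceAgent("thermostat", {"eco_mode": lambda: "eco mode on"})
    print(Coordinator([lamp, thermostat]).leave_home())
```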
180

Ambiguity resolution for medium and long baselines: application to network-based positioning

Silva, Crislaine Menezes da [UNESP] 01 October 2015 (has links) (PDF)
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) / Abstract: Essential to the performance of network RTK positioning is that a user receives and applies correction information generated from a network. These corrections are necessary for the user to account for the atmospheric (ionospheric and tropospheric) delays between the user's approximate location and the locations of the network's stations. In order to provide the most precise corrections to users, the network processing should be based on integer resolution of the carrier-phase ambiguities between the network's stations. The ambiguity is the number of whole cycles between the satellite antenna and the receiver at the first epoch of data collection. The ambiguities are introduced as parameters to be estimated in the observation equations. Ambiguity resolution can be divided into two steps: estimation and validation. Estimation is concerned with computing the values of the ambiguities; the validation stage is used to decide whether the estimated values can be accepted or not. A method widely used by the international scientific community for the estimation of the integer ambiguities is the LAMBDA method. For validation, the ratio test and the FF-RT may be used. The aim of this work is to investigate ambiguity resolution in the context of network RTK positioning and its implementation in the FCT_RTK_Net software, which was developed in an academic environment. In this thesis some experiments on ambiguity resolution are presented; the results show that the FF-RT validation test yields a higher percentage of fixed ambiguities. The results also show that the ADOP is a good predictor of the ambiguity success rate, and that the detection and correction of cycle slips are essential for obtaining the ambiguity-fixed solution. / FAPESP: 2013/06325-9
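The validation step mentioned above can be illustrated with the classical ratio test: the best integer ambiguity candidate is accepted only if the second-best candidate fits the float solution sufficiently worse. The sketch below uses a diagonal weight in place of the full variance-covariance metric produced by LAMBDA, and a fixed critical value; both simplifications are illustrative assumptions (the FF-RT studied in the thesis instead derives the critical value from a required failure rate).

```python
def weighted_sq_dist(a_float, a_int, w):
    """Squared distance between the float solution and an integer candidate,
    weighted per component (a stand-in for the full variance-covariance
    metric used by the LAMBDA method)."""
    return sum(wi * (f - i) ** 2 for f, i, wi in zip(a_float, a_int, w))

def ratio_test(a_float, best, second_best, w, critical=2.0):
    """Classical ratio test for ambiguity validation: accept the best integer
    candidate only if the second-best candidate fits sufficiently worse.
    Toy sketch with a diagonal weight matrix and a fixed critical value."""
    d1 = weighted_sq_dist(a_float, best, w)
    d2 = weighted_sq_dist(a_float, second_best, w)
    return d2 / d1 >= critical if d1 > 0 else True

if __name__ == "__main__":
    a_float = [3.1, -1.9, 7.4]      # float ambiguity estimates (cycles)
    best = [3, -2, 7]               # best integer candidate
    second = [3, -2, 8]             # runner-up candidate
    w = [4.0, 4.0, 4.0]             # inverse variances (toy values)
    print(ratio_test(a_float, best, second, w))   # True -> fix the ambiguities
```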
