181

Data linkage for pharmacovigilance using routinely acquired electronic health data

Kirby, Bradley January 2014 (has links)
Introduction: Despite the establishment of pharmacovigilance systems, there is a recognised paucity of information specifically on the safety of paediatric medicines. Data linkage techniques offer real potential for linking routinely collected, population-based primary and secondary care datasets, using the Community Health Index (CHI) as a patient linkage key, to monitor the safety of new drugs and treatments. Aim: To explore the validity of routinely acquired NHS data and the utility of linking these data to support a routine mechanism for post-marketing surveillance of paediatric medicines. Methods: The internal and external validity of the Scottish national Prescribing Information System (PIS) was assessed using retrospective cohort studies combined with data linkage techniques. This PhD programme assesses the consistency of unique patient identifiers; the completeness and accuracy of the data; and the extent to which well-established associations between drugs and adverse events can be reproduced using routinely collected NHS data. Results: For routine prescribing data, a CHI number was present on nearly 95% of dispensed items. In the first cohort study, insulin prescriptions within PIS were identified for 96% (95% CI 96-97%) of children hospitalised for type 1 diabetes (SMR01). The rates of newly prescribed insulin were concordant with published rates in both Scottish and non-Scottish populations. In the second study, asthma prescribing in children was observed to be complete (sensitivity 0.96 (95% CI 0.95-0.98)) and accurate (PPV 0.87 (95% CI 0.83-0.90)) when compared with a gold-standard patient registry. Finally, patients newly prescribed NSAID therapy were observed to be 1.51 (95% CI 1.24-1.85) to 3.97 (95% CI 1.27-12.46) times more likely to experience a first hospitalisation for a gastrointestinal event than unexposed patients. Significant risk factors for a GI event were age and concurrent use of antiplatelet and anticoagulant therapy. These results are concordant with the published literature. Conclusions: Routine Scottish prescribing data are consistent, complete and accurate; however, several key variables, such as indication, dose and frequency, which are essential for robust pharmacovigilance, are currently missing from routinely collected data.
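The completeness and accuracy figures above amount to computing sensitivity and positive predictive value of the linked prescribing records against a gold-standard source. The sketch below illustrates only that calculation; the CHI-style identifiers and counts are toy values, not data from the thesis.

```python
# Illustrative sketch: validating linked prescribing records against a
# gold-standard registry, as in the asthma cohort described above.
# The identifiers and counts below are made-up toy values, not thesis data.

def validate_linkage(linked_ids, registry_ids):
    """Return (sensitivity, PPV) of the linked records versus the registry."""
    linked, registry = set(linked_ids), set(registry_ids)
    true_pos = len(linked & registry)          # cases found in both sources
    sensitivity = true_pos / len(registry)     # share of registry cases recovered
    ppv = true_pos / len(linked)               # share of linked cases confirmed
    return sensitivity, ppv

registry = [f"CHI{i:04d}" for i in range(100)]      # 100 gold-standard patients
linked = [f"CHI{i:04d}" for i in range(4, 118)]     # linkage misses 4, adds 18 extras
sens, ppv = validate_linkage(linked, registry)
print(f"sensitivity={sens:.2f}, PPV={ppv:.2f}")     # roughly the order reported above
```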
182

An integrated modelling framework for the design and construction of distributed messaging systems

Makoond, Bippin Lall January 2008 (has links)
Having evolved to gain the capabilities of a computer and the inherent characteristic of mobility, mobile phones have transcended into the realm of the Internet, forcing mobile telecommunications to experience the phenomenon of IP convergence. Within the wide spectrum of mobile services, the messaging business has proven the most promising candidate for exploiting the Internet, due to its adaptability and growing popularity. However, mobile operators have to change the way they traditionally handle message logistics, transforming their technologies while adhering to aspects of quality of service. To keep up with the growth in messaging, which in the UK alone reached 52 billion messages in 2007, and with the increased complexity of the messages, there is an urgent need to move away from traditional monolithic architectures and to adopt distributed and autonomous systems. The aim of this thesis is to propose and validate the implementation of a new distributed messaging infrastructure that will sustain the dynamics of the mobile market by providing innovative technological resolutions to the common problems of quality modelling, communication, evolution and resource management within mobile telecoms. Designing such systems requires techniques found not only in classical software engineering but also in scientific methods, statistics and economics, which raises the problem of combining these tools in a logical and meaningful manner. To address this problem, we propose a new blended modelling approach which is at the heart of the research process model. We formulate a class of problems that categorises problem attributes into an information system and assess each requirement against a quality model. To ensure that quality is imprinted in the design of the distributed messaging system, we formulate dynamic models and simulation methods to measure the QoS capabilities of the system, particularly in terms of communication and distributed resource management. The outcomes of extensive simulation enabled the design of predictive models to build a system for capacity. A major contribution of this work relates to the problem of integrating the aspect of evolution within the communication model. We propose a new multi-criteria decision-making mechanism called the BipRyt algorithm, which essentially preserves the quality model of the system as it grows in size and evolves in complexity. The decision-making process is based on the availability of computational resources, associated rules of usage, and defined rules for a group of users or the system as a whole. The algorithm allows for local and global optimisation of resources during the system life cycle while managing conflicts among the rules, such as race conditions and resource starvation. Another important contribution relates to the process of organising and managing nodes over distributed shared memory. We design the communication model in the shape of a grid architecture, which enables single-point management of the system (without it being a single point of failure), using the same discipline as managing an information system. The distributed shared memory is implemented over RDMA, where the system runs at very high performance and low latency while preserving requirements such as high availability and horizontal scalability. A working prototype of the grid architecture is presented, which compares different network technologies against a set of quality metrics for validation purposes.
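As an illustration only, the sketch below shows a rule-based admission check of the general kind described for the BipRyt algorithm: work is accepted only when both a global capacity rule and a per-group usage rule are satisfied. The class, quotas and group names are hypothetical; the actual algorithm, its criteria and its conflict-resolution scheme are not reproduced here.

```python
# Hypothetical sketch of rule-based resource admission: accept work only when
# a global capacity rule and a per-group quota rule both hold. This is not
# the BipRyt algorithm itself, only the general idea of rule-governed usage.

class ResourcePool:
    def __init__(self, capacity, group_quota):
        self.capacity = capacity          # total units available on this node
        self.group_quota = group_quota    # maximum share any one group may hold
        self.used = {}                    # units currently held, per group

    def try_acquire(self, group, units):
        total_used = sum(self.used.values())
        group_used = self.used.get(group, 0)
        if total_used + units > self.capacity:      # global rule: node capacity
            return False
        if group_used + units > self.group_quota:   # group rule: avoid starving others
            return False
        self.used[group] = group_used + units
        return True

pool = ResourcePool(capacity=100, group_quota=40)
print(pool.try_acquire("sms_gateway", 30))   # True
print(pool.try_acquire("sms_gateway", 20))   # False: would exceed the group quota
print(pool.try_acquire("mms_gateway", 20))   # True
```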
183

Sketch-based skeleton-driven 2D animation and motion capture

Pan, Junjun January 2009 (has links)
This research is concerned with the development of a set of novel sketch-based, skeleton-driven 2D animation techniques, which allow the user to produce realistic 2D character animation efficiently. The technique consists of three parts: sketch-based skeleton-driven 2D animation production, 2D motion capture, and a cartoon animation filter. For 2D animation production, the traditional way is to draw the key-frames manually, which requires experienced animators and is a laborious and time-consuming process. With the proposed techniques, the user only inputs one image of a character and sketches a skeleton for each subsequent key-frame. The system then deforms the character according to the sketches and produces animation automatically. To perform 2D shape deformation, a variable-length needle model is developed, which divides the deformation into two stages: skeleton-driven deformation and nonlinear deformation in joint areas. This approach preserves the local geometric features and the global area during animation. Compared with existing 2D shape deformation algorithms, it reduces the computational complexity while still yielding plausible deformation results. To capture the motion of a character from existing 2D image sequences, a 2D motion capture technique is presented. Since this technique is skeleton-driven, the motion of a 2D character is captured by tracking the joint positions. Using both geometric and visual features, this problem can be solved by optimization, which prevents self-occlusion and feature disappearance. After tracking, the motion data are retargeted to a new character using the deformation algorithm proposed in the first part. This facilitates the reuse of the characteristics of motion contained in existing moving images, making the process of cartoon generation easy for artists and novices alike. Subsequent to the 2D animation production and motion capture, a "Cartoon Animation Filter" is implemented and applied. Following the animation principles, this filter processes two types of cartoon input: a single frame of a cartoon character and motion capture data from an image sequence. It adds anticipation and follow-through to the motion, with related squash and stretch effects.
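As a point of reference for the skeleton-driven stage only, the sketch below shows classic 2D linear blend skinning, where each vertex is blended between the rigid transforms of the bones that influence it. This is a standard technique, not the thesis's variable-length needle model; the bones, angles and weights are illustrative.

```python
# Illustrative sketch of the skeleton-driven stage only: standard 2D linear
# blend skinning, not the thesis's variable-length needle model.
import math

def skin_vertex(vertex, bones, weights):
    """Blend the rigid transform of each influencing bone.

    vertex  : (x, y) rest position
    bones   : list of ((origin_x, origin_y), angle, (dx, dy)) per bone
    weights : per-bone skinning weights summing to 1
    """
    x, y = vertex
    out_x = out_y = 0.0
    for ((ox, oy), angle, (dx, dy)), w in zip(bones, weights):
        c, s = math.cos(angle), math.sin(angle)
        # rotate the vertex about the bone origin, then translate with the bone
        rx = c * (x - ox) - s * (y - oy) + ox + dx
        ry = s * (x - ox) + c * (y - oy) + oy + dy
        out_x += w * rx
        out_y += w * ry
    return out_x, out_y

# A vertex near an elbow, influenced by a still upper-arm bone and a
# forearm bone rotated by 30 degrees.
bones = [((0.0, 0.0), 0.0, (0.0, 0.0)),
         ((1.0, 0.0), math.radians(30), (0.0, 0.0))]
print(skin_vertex((1.2, 0.05), bones, [0.4, 0.6]))
```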
184

3D digital relief generation

Wang, Meili January 2011 (has links)
This thesis investigates a framework for generating reliefs. Relief is a special kind of sculptured artwork consisting of shapes carved on a surface so as to stand out from the surrounding background. Traditional relief creation is done by hand and is therefore a laborious process; in addition, hand-made reliefs are hard to modify. In contrast, digital relief offers more flexibility as well as a less laborious alternative, and can be easily adjusted. This thesis reviews existing work and offers a framework to tackle the problem of generating three types of reliefs: bas reliefs, high reliefs and sunken reliefs. An efficient bas relief generation method based on 2D images has been proposed, considerably enhanced by incorporating gradient operations. An improved bas relief and high relief generation method based on 3D models has been provided as well, which employs a mesh representation to process the model. This thesis is innovative in describing and evaluating sunken relief generation techniques. Two types of sunken reliefs have been generated: one is created with pure engraved lines, and the other is generated with smooth height transitions between lines. The latter is more complex to implement and combines three elements: a line drawing image provides the input for contour lines; a rendered Lambertian image shares the same light direction as the relief and sets the visual cues; and a depth image conveys the height information. These three elements are combined to generate the final sunken reliefs. This is the first time in computer graphics that a method for digital sunken relief generation has been proposed. The main contribution of this thesis is a systematic framework to generate all three types of reliefs. The results of this work can potentially provide references for craftsmen, and the work could be beneficial for relief creation in both the entertainment and manufacturing fields.
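To make the gradient-operation idea concrete, the sketch below compresses the gradients of a 1D height profile non-linearly and re-integrates them, which is the basic shape shared by many gradient-domain bas relief methods. The thesis works on 2D images and 3D meshes; the 1D profile and the attenuation constant here are illustrative assumptions only.

```python
# A minimal 1D illustration of gradient compression for bas relief: large
# depth gradients are attenuated non-linearly, then the compressed gradients
# are re-integrated into a shallow relief. Values are illustrative only.

def bas_relief_profile(heights, alpha=0.5):
    grads = [b - a for a, b in zip(heights, heights[1:])]
    # non-linear attenuation: small details survive, large steps are flattened
    compressed = [g * alpha / (alpha + abs(g)) for g in grads]
    relief = [0.0]
    for g in compressed:
        relief.append(relief[-1] + g)   # re-integrate the compressed gradients
    return relief

profile = [0, 0, 5, 5.2, 5.1, 0, 0]     # a tall feature with fine detail on top
print(bas_relief_profile(profile))      # the step is flattened, the detail kept
```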
185

Adaptive motion synthesis and motor invariant theory

Liu, Fangde January 2012 (has links)
Generating natural-looking motion for virtual characters is a challenging research topic. It becomes even harder when adapting synthesised motion to interact with the environment. Current methods are tedious to use, computationally expensive and fail to capture natural-looking features. These difficulties seem to suggest that artificial control techniques are inferior to their natural counterparts. Recent advances in biology research point to a new motor control principle: utilising the natural dynamics. The interaction of body and environment forms patterns which work as primary elements of the motion repertoire: motion primitives. These elements serve as templates, tweaked by the neural system to satisfy environmental constraints or motion purposes. Complex motions are synthesised by connecting motion primitives together, just as letters are connected to form sentences. Based on these ideas, this thesis proposes a new dynamic motion synthesis method. A key contribution is the insight into the dynamic reason behind motion primitives: template motions are stable and energy efficient. When synthesising motions from templates, valuable properties like stability and efficiency should be preserved. The mathematical formalisation of this idea is the motor invariant theory, and the preserved properties are the motor invariants. In the process of conceptualisation, new mathematical tools are introduced to the research topic. Invariant theory, especially the mathematical concepts of equivalence and symmetry, plays a crucial role. Motion adaptation is mathematically modelled as topological conjugacy: a transformation which maintains the topology and results in an analogous system. The neural oscillator and symmetry-preserving transformations are proposed for their computational efficiency. Even without reference motion data, this approach produces natural-looking motion in real time. The new motor invariant theory might also shed light on the long-standing perception problem in biological research.
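As a concrete reference for the neural oscillator mentioned above, the sketch below integrates a standard two-neuron, mutually inhibiting (Matsuoka-style) oscillator, a common model of rhythmic motor pattern generation. The parameter values and step size are illustrative assumptions, not values from the thesis.

```python
# A minimal sketch of a two-neuron mutually inhibiting (Matsuoka-style)
# neural oscillator producing a rhythmic motor signal. Parameters and the
# integration step are illustrative assumptions only.

def matsuoka_oscillator(steps=2000, dt=0.01, tau=0.1, tau_a=0.2,
                        beta=2.5, w=2.5, drive=1.0):
    u = [0.1, 0.0]   # membrane states (slightly asymmetric to break symmetry)
    v = [0.0, 0.0]   # adaptation (fatigue) states
    outputs = []
    for _ in range(steps):
        y = [max(0.0, ui) for ui in u]            # firing rates
        for i in (0, 1):
            j = 1 - i
            du = (-u[i] - beta * v[i] - w * y[j] + drive) / tau
            dv = (-v[i] + y[i]) / tau_a
            u[i] += du * dt
            v[i] += dv * dt
        outputs.append(y[0] - y[1])               # difference drives a joint
    return outputs

signal = matsuoka_oscillator()
print(min(signal), max(signal))   # the output oscillates around zero
```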
186

Constrained parameterization with applications to graphics and image processing

Yu, Hongchuan January 2012 (has links)
Surface parameterization establishes a transformation that maps the points on a surface to a specified parametric domain, and it has been widely applied in computer graphics and image processing. The challenging issue is that the usual positional constraints always result in triangle flipping in the parameterization (also called foldovers). Additionally, distortion is inevitable in parameterization, so the rigid constraint is also taken into account. In general, the constraints are application-dependent. This thesis therefore focuses on the various constraints that arise in applications and investigates foldover-free constrained parameterization approaches for each. Such constraints usually include simple positional constraints, a trade-off between positional constraints and the rigid constraint, and the rigid constraint alone. From the perspective of applications, this thesis aims at foldover-free parameterization methods with positional constraints, as-rigid-as-possible parameterization with positional constraints, and a well-shaped, well-spaced pre-processing procedure for low-distortion parameterization. The first contribution of this thesis is the development of an RBF-based re-parameterization algorithm for foldover-free constrained texture mapping. The basic idea is to split the usual parameterization procedure into two steps: 2D parameterization with convex boundary constraints, and 2D re-parameterization with the interior positional constraints. Moreover, we further extend the 2D re-parameterization approach with interior positional constraints to higher-dimensional datasets, such as volume data and polyhedra. The second contribution is the development of a vector field based deformation algorithm for 2D mesh deformation and image warping. Many existing deformation approaches employ basis functions (including the RBF-based re-parameterization algorithm proposed here). The main problem is that such algorithms have infinite support, that is, any local deformation always leads to small changes over the whole domain. The presented vector field based algorithm can effectively carry out local deformation while reducing distortion as much as possible. The third contribution is the development of a pre-processing procedure for surface parameterization. Except for developable surfaces, current parameterization approaches inevitably incur large distortion. To reduce distortion, a pre-processing procedure consisting of mesh partition and mesh smoothing is proposed in this thesis. As a result, the meshes are partitioned into a set of small patches with rectangle-like boundaries that are well-shaped and well-spaced. This pre-processing procedure can evidently improve the quality of meshes for low-distortion parameterization.
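As a toy illustration of the RBF machinery underlying the re-parameterization step, the sketch below fits a Gaussian-RBF displacement field to a few interior positional constraints and warps other points smoothly. The kernel, its width and the points are illustrative assumptions; the thesis's actual foldover-free formulation involves more than this interpolation.

```python
# Hedged sketch of RBF displacement interpolation: each constraint point is
# mapped exactly to its target, and other points follow smoothly. This is
# only the interpolation idea, not the foldover-free algorithm itself.
import numpy as np

def rbf_displacement(sources, targets, kernel_width=0.3):
    """Fit Gaussian-RBF weights so that each source point maps to its target."""
    src = np.asarray(sources, dtype=float)
    disp = np.asarray(targets, dtype=float) - src
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    phi = np.exp(-d2 / kernel_width**2)           # RBF kernel matrix
    weights = np.linalg.solve(phi, disp)          # one weight row per constraint

    def warp(points):
        p = np.asarray(points, dtype=float)
        d2p = ((p[:, None, :] - src[None, :, :]) ** 2).sum(-1)
        return p + np.exp(-d2p / kernel_width**2) @ weights
    return warp

warp = rbf_displacement([[0.3, 0.3], [0.7, 0.7]], [[0.35, 0.3], [0.65, 0.7]])
print(warp([[0.5, 0.5]]))   # points between the constraints are interpolated smoothly
```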
187

An investigation into the uncanny : character design, behaviour and context

Tharib, S. January 2013 (has links)
Whilst there has been a substantial amount of research into the uncanny valley, research that contextualises a character as it would normally be viewed remains an unexplored area. Previous research often focused solely on realistic render styles, giving characters an unfair basis that tended towards the realistic and thus facilitating only one mode of animation style: realism. Furthermore, characters were not contextualised because researchers often used footage from previous productions; these characters also differed in quality, as various artists worked on different productions. This research considers characterisation as three key components: the aesthetic, the behaviour and the contextualisation. Attempts were made to develop a greater understanding of how these components contribute to the appeal of a character within the field of 3D computer animation. The research consisted of two experiments, both conducted using an online survey method. The first experiment used five different characters ranging from realistic to abstract. Each character displayed three different behaviours, and the characters were contextualised within a six-panel narrative. Data obtained from the first experiment were used to refine the second experiment, which was conducted to further define how combinations of different behaviours and the context containing a character affected the subject's perception. The second experiment used three different character types, and the characters were contextualised within a video stimulus. Findings from the first experiment indicated a strong relationship between character type and context. Interest in the various characters changed depending on adaptations to either the behaviour of the character or the contextualisation. Certain character types, based on appearance, were better suited to some contexts than others. An abstract character was more likely to be perceived positively by the subject in a surprising context stipulated by the behaviour of the character and the form of the narrative sequence. Other characters, such as one based around an inanimate object, found a greater positive reception with the subjects under sad contextual constraints rather than happy or surprising ones. The first experiment took into account various independent variables obtained from the subjects and aimed to draw parallels, where found, between these variables and the subject's perception of a given character, be it positive or negative. However, these variables, namely gender, nationality and age, had no effect on the subject's perception. In the second experiment, it was found that in order for the realistic human character to be perceived more positively, the behaviour needed to match the context; when a mismatch occurred, the subjects began to perceive the character more negatively. The cartoon character, however, was not affected by a mismatch of behaviour and context. The experiment was further expanded by comparing two different character types committing negative actions and having negative actions inflicted upon them, and examining what effect this had on the subjects' perception. It was found that a cartoon character committing a negative action was perceived positively, whilst a human character committing the same act was perceived negatively. However, when a negative action was inflicted on these same characters, subjects were more concerned for the human character than for the cartoon character.
Results from both experiments confirm that different characters are perceived very differently by viewers, who hold predefined notions of how each should behave. What is expected of one character type is not acceptable for another: cartoon characters can get away with bizarre behaviour, a real human character may exhibit some novel, unusual behaviour, whilst a realistic CG human character is assessed on how realistically (normally) it behaves. This research expands upon previous work in this area by offering a greater understanding of character types and emphasising the importance of contextualisation.
188

Improved facial feature fitting for model based coding and animation

Kuo, Po Tsun Paul January 2006 (has links)
No description available.
189

Semantic based support for visualisation in complex collaborative planning environments

Lino, Natasha Correia Queiroz January 2007 (has links)
Visualisation in intelligent planning systems [Ghallab et al., 2004] is a subject that has not been given much attention by researchers. Among existing planning systems, some well-known planners do not propose a solution for visualisation at all, while others consider only a single approach, even though that approach is not appropriate for every situation. Thus, users cannot make the most of planning systems because they do not have appropriate support for interacting with them. This problem is exacerbated in mixed-initiative planning systems, where the agents collaborating in the process have different backgrounds, play different roles, have different capabilities and responsibilities, or use different devices to interact and collaborate. To address this problem, we propose a general framework for visualisation in planning systems that supports a more appropriate visualisation mechanism. This framework is divided into two main parts: a knowledge representation component and a reasoning mechanism for multi-modality visualisation. The knowledge representation uses ontologies to organise and model complex domain problems. The reasoning mechanism supports reasoning about the visualisation problem based on the knowledge bases available for a realistic collaborative planning environment, including agent preferences, device features, planning information, visualisation modalities, etc. The main result of the reasoning mechanism is an appropriate visualisation modality for each specific situation, which provides better interaction among agents (software and human) in a collaborative planning environment. The main contributions of this approach are: (1) it is a general and extensible framework for the problem of visualisation in planning systems, which enables modelling of the domain from an information visualisation perspective; (2) it allows a tailored approach to the visualisation of information in an AI collaborative planning environment; (3) its models can be used separately in other problems and domains; (4) it is based on real standards that enable easy communication and interoperability with other systems and services; and (5) it has broad potential for application on the Semantic Web.
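The toy sketch below illustrates the kind of decision the reasoning mechanism makes: selecting a visualisation modality from device features and agent preferences. The modality names, attributes and rules are hypothetical illustrations, not the ontology or rule base used in the thesis.

```python
# Hypothetical sketch: pick a visualisation modality that a device can
# support and that best matches an agent's ranked preferences. All names,
# attributes and thresholds below are illustrative assumptions.

MODALITIES = {
    "gantt_chart":  {"min_screen": 10, "needs_graphics": True},
    "node_graph":   {"min_screen": 7,  "needs_graphics": True},
    "text_summary": {"min_screen": 2,  "needs_graphics": False},
}

def select_modality(device, preferences):
    candidates = [
        name for name, req in MODALITIES.items()
        if device["screen_inches"] >= req["min_screen"]
        and (device["has_graphics"] or not req["needs_graphics"])
    ]
    # honour the agent's ranked preferences among the feasible modalities
    for preferred in preferences:
        if preferred in candidates:
            return preferred
    return candidates[0] if candidates else "text_summary"

phone = {"screen_inches": 6, "has_graphics": True}
print(select_modality(phone, ["gantt_chart", "node_graph", "text_summary"]))
# -> "text_summary": the preferred chart does not fit the small screen
```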
190

Construction of a quality assurance and measurement framework for software projects

Horgan, Gerard January 2000 (has links)
The way in which quality is modelled within an organisation has typically followed either a fixed-model or a tailorable approach. Fixed-model techniques suffer the disadvantage of inflexibility to local environments, since the parameters of these models cannot be changed by users or designers to reflect their own views. The tailorable approaches tend to preclude cross-project comparisons. In addition, both techniques lack comprehensive guidelines for building quality into a software product, and lack the ability to resolve conflicts where individuals disagree about the model parameters. In this work, the construction of a new approach is described which overcomes these deficiencies. Since metrics and metric measurement are an important component of quality models, common metrics and measurement techniques are identified before the construction and evaluation of the new quality modelling approach is presented. A common metric is software size, which can be measured using the Function Point Analysis (FPA) technique. The weighting and adjustment factors of the traditional FPA approach are simplified here to produce a new estimation technique which can be used at early stages of the development lifecycle. The new model is validated against two project datasets, and the results show a good degree of accuracy when estimating the FPA count, although lower performance is achieved when estimating actual effort. The major component of this thesis is the construction of the new quality modelling approach, which enables local requirements tailoring whilst providing the ability to perform cross-project comparisons. Unlike existing techniques, comprehensive conflict resolution mechanisms are incorporated, and it is shown that the approach can be used to measure different software entities, allowing direct comparisons between measurements and thus producing more consistent results. The implementation consists of the construction of a software tool supporting the new methodology, and the use of both this tool and the technique on real projects at a large financial organisation. The validation of the approach is performed against a list of requirements for a good quality model, and from feedback both from use on the projects and from a questionnaire survey.
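For readers unfamiliar with FPA, the sketch below computes an unadjusted function point count using the standard IFPUG average weights. The simplified weights and adjustment factors proposed in the thesis are not given in the abstract and are therefore not reproduced; the project counts are illustrative.

```python
# A minimal sketch of an unadjusted function point count using the standard
# IFPUG average complexity weights. The thesis's simplified early-lifecycle
# weights are not reproduced here; the project counts are illustrative.

AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_function_points(counts):
    return sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())

project = {
    "external_inputs": 12,
    "external_outputs": 8,
    "external_inquiries": 5,
    "internal_logical_files": 6,
    "external_interface_files": 2,
}
print(unadjusted_function_points(project))   # 48 + 40 + 20 + 60 + 14 = 182
```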
