About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Meta-data to enhance case-based prediction

Premraj, Rahul January 2006 (has links)
The focus of this thesis is to measure the regularity of case bases used in Case-Based Prediction (CBP) systems, and the reliability of their constituent cases, prior to the system's deployment, so as to inform user confidence in the delivered solutions. The reliability information, referred to as meta-data, is then used to enhance prediction accuracy. CBP is a strain of Case-Based Reasoning (CBR) that differs from it only in its solution feature, which is a continuous value. Several factors make implementing such systems for prediction domains a challenge. Typically, the problem and solution spaces in prediction problems are unbounded, which makes it difficult to determine the portions of the domain represented by the case base. In addition, such problem domains often exhibit complex and poorly understood interactions between features, and contain noise. As a result, the overall regularity of the case base is distorted, which hinders the delivery of good-quality solutions. Hence, this research presents techniques that address the issue of irregularity in case bases with the objective of increasing the prediction accuracy of solutions. Although several techniques have been proposed in the CBR literature to deal with irregular case bases, they are inapplicable to CBP problems. As an alternative, this research proposes the generation of relevant case-specific meta-data. The meta-data is used in Mantel's randomisation test to objectively measure regularity in the case base. Several novel visualisations based on the meta-data are presented to observe the degree of regularity and to help identify suspect, unreliable cases whose reuse is likely to yield poor solutions. Further, the performance of individual cases is recorded to judge their reliability, which is considered alongside their distance from the problem case before selecting them for reuse.
The intention is to overlook unreliable cases in favour of relatively distant yet more reliable ones, so as to enhance prediction accuracy. The proposed techniques are demonstrated on software engineering data sets, where the aim is to predict the duration of a software project on the basis of past completed projects recorded in the case base. Software engineering is a human-centric, volatile and dynamic discipline in which many unrecorded factors influence productivity. This degrades the regularity of case bases, whose cases are disproportionately spread out in the problem and solution spaces, resulting in erratic prediction quality. Results from applying the proposed techniques gave insight into the three software engineering data sets used in this analysis. Mantel's test was very effective at measuring overall regularity within a case base, while the visualisations were found to be of variable value depending on the size of the data set. Most importantly, the proposed case discrimination system, which reuses only reliable similar cases, succeeded in increasing prediction accuracy for all three data sets. Thus, the contributions of this research are novel approaches that use meta-data, firstly, to assess and visualise irregularities in case bases and cases from prediction domains and, secondly, to identify unreliable cases so that their reuse can be avoided in favour of more reliable cases, enhancing overall prediction accuracy.
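Mantel's randomisation test, used above to measure case-base regularity, correlates a problem-space distance matrix with a solution-space distance matrix and assesses significance by permuting case labels. A minimal sketch in plain Python (illustrative only; not the thesis's implementation or data):

```python
import random

def mantel_test(d_problem, d_solution, n_perm=999, seed=0):
    """Correlate two symmetric distance matrices. A regular case base
    shows similar problems (small problem-space distance) having
    similar solutions (small solution-space distance). Significance is
    estimated by randomly permuting the case labels."""
    n = len(d_problem)
    idx = [(i, j) for i in range(n) for j in range(i + 1, n)]

    def corr(perm):
        # Pearson correlation over the upper-triangular entries,
        # with the solution matrix read through the permutation.
        xs = [d_problem[i][j] for i, j in idx]
        ys = [d_solution[perm[i]][perm[j]] for i, j in idx]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy)

    rng = random.Random(seed)
    observed = corr(list(range(n)))
    hits = 0
    for _ in range(n_perm):
        perm = list(range(n))
        rng.shuffle(perm)
        if corr(perm) >= observed:
            hits += 1
    p_value = (hits + 1) / (n_perm + 1)
    return observed, p_value
```

A correlation near 1 with a small p-value indicates a regular case base: similar problems have similar solutions, so distance-based reuse is trustworthy.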
132

An empirical investigation into management and control of software prototyping

Chen, Liguang January 1997 (has links)
In response to the so-called "software crisis", software prototyping has been widely used as a technique at various stages of systems development since the late 1970s and, with the growing sophistication of 4GL tools and environments, it has become a popular alternative to conventional development approaches. A study of the literature revealed that, unlike tools and environments, the management and control of software prototyping practice has been widely reported as problematic. The study also suggested that there were very few reported studies of prototyping projects in practice. In order to contribute to the understanding of the management and control of prototyping, it was therefore decided to conduct an empirical study. The empirical investigation comprised three interrelated stages: a preliminary survey, field modelling and semi-structured interviews. The findings of each stage provided inputs to, and formed a base for, the following stage. From the survey of practitioners it became apparent that the concerns of the literature regarding the management and control of prototyping projects were justified. The next stage involved a detailed study, using process modelling techniques, of ten prototyping projects at eight software development organisations. This was then followed up by semi-structured interviews with managers and prototypers at five organisations. In addition, a number of documents, minutes and standards were analysed, and personality tests conducted. The main lessons learnt concern 'process diversity', inadequate methods and standards, and lack of quality control, particularly with regard to future maintainability and extensibility. Recommendations are given for each key management and control area identified, including team selection, initial requirements gathering, prototype building, change requests and quality control.
Finally, the thesis concludes that further work should extend to areas such as developing 'lean methods' and an easy-to-use toolset for better management and control of the process.
133

QoS provisioning and mobility management for IP-based wireless LAN

Politis, Christos January 2004 (has links)
Today, two major technological forces drive the telecommunications era: wireless cellular systems and the Internet. As these forces converge, the demand for new services, increased bandwidth and ubiquitous connectivity continuously grows. Next-generation mobile systems will be based solely, or to a large extent, on the Internet Protocol (IP). This thesis begins by addressing the problems and challenges faced in a multimedia, IP-based Wireless LAN environment. The ETSI HiperLAN/2 system was selected as the main test wireless network for the theoretical and simulation experiments. Apart from the simulations, measurements were taken from real-life test scenarios using the IEEE 802.11 system (UniS test-bed). Furthermore, a brief overview of the All-IP network infrastructure is presented. An extension to the conventional wireless (cellular) architecture, which takes advantage of the IP network characteristics, is considered. Some of the trends driving 3G and WLAN developments are explored, while the provision of quality of service on the latter for real-time and non-real-time multimedia services is investigated, simulated and evaluated. Finally, an efficient and universal QoS framework is proposed. At the same time, multimedia services should be offered in a seamless and uninterrupted manner to users who access the all-IP infrastructure via a WLAN, meeting the demands of both enterprise and public environments anywhere and anytime, thus providing support for mobile communications not only in terms of terminal mobility, as is currently the case, but also for session, service and personal mobility. Furthermore, this mobility should be available over heterogeneous networks, such as WLANs, UMTS and fixed networks. Therefore, this work investigates issues such as multi-layer and multi-protocol (SIP, Mobile IP, Cellular IP) mobility management in wireless LAN and 3G domains.
Several local and global mobility protocols and architectures have been tested and evaluated, and a complete mobility management framework is proposed. Moreover, the integration of simple yet efficient authentication, authorisation and accounting mechanisms with the multimedia service architecture is an important issue for IP-based WLANs. Without such integration, providers will not have the means to control their services and generate revenue from users. The proposed AAA architecture should support a robust AAA infrastructure providing secure, fast and seamless access to multimedia services. A user requesting a service from the All-IP WLAN infrastructure needs to be authenticated twice: once to gain access to the network, and once to be granted the required service. Hence, we provide insights into these issues by simulating and evaluating pre-authentication techniques and other network authentication scenarios based on the well-known IEEE 802.1X protocol for multimedia IP-based WLANs.
134

Investigating the group development process in virtual student software project teams

Last, Mary Z. January 2003 (has links)
To remain competitive in today's global economy, organizations must be able to keep costs down, respond quickly to demands for new products and services, solve complex problems that often cross technical and functional areas of expertise, and be willing to use innovative work structures. Organizations also must be able to coordinate work across a variety of intra- and inter-organizational boundaries. Many companies are addressing competitive challenges by adopting teams as the primary organizational work unit and by using communications technologies to bridge organizational and physical boundaries. Teams that work across space, time, and organization boundaries using technology are virtual teams. The increasing reliance on teams in industry has had an effect on education as well. Colleges and universities around the world are preparing students to work in virtual teams by incorporating distributed teamwork in distance-learning courses and by offering courses in collaboration with other post-secondary institutions. This study investigates the group development process in virtual student software project teams. The research uses grounded theory methodology to analyze the electronic communications of these virtual student teams. The study looked at virtual teams over a three-year period within the framework of the Runestone project. Four themes emerged from the data analysis: dialog, attitude, relationships, and trust. The themes revolve around a core category of team cohesiveness. 
The findings of this research add to the small body of empirical research on virtual teams by demonstrating that: (1) trust can exist in virtual teams; (2) trust can be measured by analyzing only communication artifacts; (3) team members in virtual teams can use lean CMC media such as IRC to develop social relationships; (4) certain communication behaviors and strategies do influence team development; (5) it is possible to use IRC logs to investigate team process; and (6) there are specific behaviors associated with the themes of dialog, attitude, relationships, and trust that distinguish cohesive teams from non-cohesive teams.
135

Motion analysis of cinematographic image sequences

Giaccone, Paul January 2000 (has links)
Many digital special effects require knowledge of the motion present in an image sequence. For these effects to be realistic, blending seamlessly with unmodified live action or animation, motion must be represented accurately. Most existing methods of motion estimation are unsuitable for use in postproduction for one or more reasons, namely: poor accuracy; corruption of large-magnitude motion estimates by aliasing and the aperture problem; failure to handle multiple motions and motion boundaries; representation of curvilinear motion as concatenated translations instead of as smooth curves; slowness of execution; and inefficiency in the presence of small variations between successive images. Novel methods of motion estimation are proposed here that are specifically designed for use in postproduction and address all of the above problems. The techniques are based on parametric estimation of optical-flow fields, reformulated in terms of displacements rather than velocities. The displacement-estimation paradigm leads to techniques for iteratively updating motion estimates for accuracy; faster motion estimation by exploiting redundancies between successive images; representation of motion over a sequence of images with a single set of parameters; and curvilinear representation of motion. Robust statistics provides a means of distinguishing separate types of motion and overcoming the problems of motion boundaries. Accurate recovery of the motion of the background in a sequence, combined with other image characteristics, leads to a segmentation procedure that greatly accelerates the rotoscoping and compositing tasks commonly carried out in postproduction. Comparative evaluation of the proposed methods against other techniques for motion estimation and image segmentation indicates that, in most cases, the new work provides considerable improvements in quality.
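The parametric, displacement-based formulation described above can be illustrated in its simplest instance: fitting a six-parameter affine displacement field to sparse point correspondences by least squares. This is a sketch with NumPy under that assumption; the thesis's actual estimator and its robust-statistics machinery are not reproduced here.

```python
import numpy as np

def fit_affine_displacement(points, displacements):
    """Least-squares fit of an affine displacement field
        d(x, y) = (a1 + a2*x + a3*y,  a4 + a5*x + a6*y)
    to sparse per-point displacements. The x and y components decouple
    into two independent 3-parameter linear least-squares problems."""
    pts = np.asarray(points, dtype=float)
    d = np.asarray(displacements, dtype=float)
    # Design matrix rows: [1, x, y] for each correspondence.
    A = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])
    ax, *_ = np.linalg.lstsq(A, d[:, 0], rcond=None)  # (a1, a2, a3)
    ay, *_ = np.linalg.lstsq(A, d[:, 1], rcond=None)  # (a4, a5, a6)
    return ax, ay
```

Because the model is expressed directly in displacements, the fitted parameters describe where each pixel moves between two frames, which is what iterative refinement and multi-frame parameter sharing build on.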
136

Turbo codes for real-time applications

Hebbes, Luke January 2004 (has links)
No description available.
137

Artificial intelligence for animated autonomous agents

Szarowicz, Adam January 2004 (has links)
Automatic creation of animated crowd scenes involving multiple interacting characters is currently a field of extensive research, because automatically generated animation finds immediate application in film post-production and special effects, computer games and event simulation in crowded areas. The work presented here addresses the inadequate application of AI techniques in current animation research. The thesis presents a survey of different industrial and academic approaches, and a number of missing features are identified. After extensive research into existing systems, an agent-based system and an animation framework are chosen for extension, and the cognitive architecture FreeWill is proposed. The architecture extends the underlying principles of these systems and combines software-agent solutions with typical animation elements. It also allows easy integration with existing tools. Another important contribution of FreeWill is an algorithm for the automatic generation of high-level actions using reinforcement learning. The chosen learning technique lends itself well to animation, as reinforcement learning allows the learning task to be defined easily: only the ultimate goal of the learning agent must be specified. The process of defining and conducting the learning task is explained in detail. The learning module allows further automation of the process of animation generation, shortens the task of creating crowd scenes and reduces production costs. Newly learnt actions can be applied to increase the quality of the generated sequences. The learning module can be used in both deterministic and non-deterministic environments; experiments in both modes are presented and conclusions drawn. Two modes of control, inverse and forward kinematics, are also compared in a number of experiments. Learning with inverse kinematics control was found to converge faster for the same task.
A working prototype of the architecture is presented and the learnt motion is compared with human motion. The results of the comparison demonstrate that the learning scheme could be used to imitate human motion in crowd scenes. Finally, a number of metrics are defined that allow the most relevant actions to be selected easily from the learnt set, again helping to automate the process. The work concludes by pointing out further directions of research and suggesting possible extensions and applications.
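The point that reinforcement learning requires only the ultimate goal to be specified can be seen in a tabular Q-learning toy. The one-dimensional corridor below is a hypothetical task, not FreeWill's actual action space: the only supervision is a reward for reaching the goal state, yet a complete movement policy emerges.

```python
import random

def q_learn(n_states=6, goal=5, episodes=500, alpha=0.5, gamma=0.9,
            epsilon=0.2, seed=1):
    """Tabular Q-learning on a 1-D corridor of states 0..n_states-1.
    Actions: 0 = step left, 1 = step right (clamped at the ends).
    Reward is 1 only on entering the goal state; everything else the
    agent works out for itself from that single goal definition."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != goal:
            # Epsilon-greedy action selection.
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: q[s][x])
            s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == goal else 0.0
            # Standard Q-learning update.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training, the greedy policy at every non-goal state is "step right", i.e. the agent has learnt a high-level action (walk to the goal) from nothing but the goal itself.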
138

Image processing analysis of stem cell antigens

Baradez, Marc-Olivier M. P. January 2005 (has links)
This thesis aims to investigate the automation of an image-processing-driven analysis of antigen distributions in the membrane of early human Haematopoietic Stem/Progenitor Cells (HSPCs) imaged by Laser Scanning Confocal Microscopy (LSCM). LSCM experiments generated a vast number of images of both single- and dual-labelled HSPCs. Special focus was given to the analysis of colocalised antigen distributions, as colocalisation may indicate functional relationships; however, quantitative methods are also investigated to characterise both single- and dual-labelled antigen distributions. Firstly, novel segmentation algorithms are developed and assessed for their performance in automatically achieving fast fluorescence signal identification. Special attention is given to global histogram-based thresholding methods because of their potential use in real-time applications. A new approach to fluorescence quantification is proposed and tested. Secondly, visualisation techniques are developed to further assist the analysis of antigen distributions in cell membranes. They include 3D reconstruction of the fluorescence, newly proposed 2D Antigen Density Maps (ADMs) and new 3D graphs of the spatial distributions (sphere models). Thirdly, original methods to quantitatively characterise the fluorescence distributions are developed. They are applied to both single and dual/colocalised distributions. For the latter, specific approaches are investigated and applied to colocalised CD34/CD164 distributions and to colocalised CD34 class I/CD34 class II and CD34 class I/CD34 class III epitope distributions (two combinations of the three known isoforms of the CD34 molecule, a major clinical marker for HSPCs). The visualisation tools revealed that HSPC membrane antigens are often clustered within membrane domains.
Three main types of clusters were identified: small clusters, large patch-like clusters and newly identified meridian-shaped crest-like (MSCL) clusters. Quantitative analysis of antigen distributions showed heterogeneous distributions of the various measured features (such as polarity or colocalisation patterns) within the HSPC populations analysed. Finally, the proposed methodology for characterising membrane antigen distributions is discussed, together with its potential application to other biomedical studies. Potential extensions of the innovative linear-diffusion-based MultiScale Analysis (MSA) algorithm to other applications are outlined. Visual and quantitative analyses of antigen membrane distributions are finally used to generate hypotheses on the potential, as yet unknown, roles of these early antigens, which are discussed in the context of haematopoietic theories.
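As an example of the global histogram-based thresholding family mentioned above, Otsu's method picks the grey level that maximises between-class variance. This is a standard textbook sketch, not the thesis's novel segmentation algorithms:

```python
def otsu_threshold(pixels, levels=256):
    """Global histogram-based threshold (Otsu's method): choose the
    grey level that maximises the between-class variance of the two
    resulting classes -- here, fluorescence signal vs. background in
    a flattened list of integer pixel intensities."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]              # background weight up to level t
        if w_bg == 0:
            continue
        w_fg = total - w_bg          # foreground weight above level t
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mu_bg = sum_bg / w_bg
        mu_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mu_bg - mu_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Because it needs only one pass over the histogram rather than the image, a method of this kind is cheap enough for the real-time use the thesis highlights.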
139

Content-aware and context-aware adaptive video streaming over HTTP

Ognenoski, Ognen January 2016 (has links)
Adaptive HTTP video streaming techniques are rapidly becoming the main method for video delivery over the Internet. From a conceptual viewpoint, adaptive HTTP video streaming systems enable adaptation of the video quality according to network conditions (link awareness), content characteristics (content awareness), user preferences (user awareness) or device capabilities (device awareness). Proprietary adaptive HTTP video streaming platforms from Apple, Adobe and Microsoft preceded the completion of a standard for adaptive HTTP video streaming, the MPEG-DASH standard. This dissertation presents modeling approaches, experiments, simulations and subjective tests tightly related to adaptive HTTP video streaming, with particular emphasis on the MPEG-DASH standard. Different case studies are investigated through novel models based on analytical and simulation approaches. In particular, adaptive HTTP video streaming over Long Term Evolution (LTE) networks, over cloud infrastructure, and streaming of medical videos are investigated, and the relevant benefits and drawbacks of using adaptive HTTP video streaming in these cases are highlighted. Further, mathematical tools and concepts are used to acquire quantifiable knowledge of the HTTP/TCP communication protocol stack and to investigate dependencies between adaptive HTTP video streaming parameters and the underlying Quality of Service (QoS) and Quality of Experience (QoE). Additionally, a novel method and model for QoE assessment are proposed, derived in a specific experimental setup. A more general setup is then considered and a QoE metric is derived. The QoE metric expresses the users' perceived quality for adaptive HTTP video streaming by taking into consideration rebuffering, video quality and content-related parameters. Finally, a novel analytical model that captures the user's perception of quality via the delay experienced during streaming navigation is derived.
The contributions in this dissertation and the relevant conclusions are obtained by simulations, experimental demo setups, subjective tests and analytical modeling.
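A QoE metric of the kind described, combining rebuffering, picture quality and switching behaviour into a single user-quality score, can be sketched as follows. The functional form, the weights and the MOS-style 1-5 clamp are assumptions for illustration only, not the metric derived in the dissertation:

```python
def qoe_score(avg_bitrate_kbps, n_stalls, stall_total_s, n_switches,
              max_bitrate_kbps=4000):
    """Illustrative session-level QoE model: start from normalised
    picture quality, then subtract penalties for rebuffering events,
    total stall time and quality switches. All weights are assumed."""
    quality = avg_bitrate_kbps / max_bitrate_kbps          # 0..1
    stall_penalty = 0.3 * n_stalls + 0.05 * stall_total_s  # assumed weights
    switch_penalty = 0.02 * n_switches                     # assumed weight
    score = 5.0 * quality - stall_penalty - switch_penalty
    return max(1.0, min(5.0, score))                       # clamp to MOS range
```

Even this crude form reproduces the qualitative behaviour the dissertation studies: a smooth high-bitrate session scores near the top of the scale, while stalls drag the score down faster than bitrate gains can compensate.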
140

Vision-based analysis and simulation of pedestrian dynamics

Sourtzinos, Panagiotis January 2016 (has links)
The aim of this thesis is to examine the applicability of computer vision to analysing pedestrian and crowd characteristics, and how pedestrian simulation for shopping environments can be driven by the visual perception of the simulated pedestrians. More specifically, two frameworks for pedestrian speed-profile estimation are designed and implemented. The first addresses the problem of speed estimation for people moving parallel to the image plane on a flat surface, while the second estimates the speed of people walking on stairs, whose trajectories are perpendicular to the image plane. Both approaches aim to localise the feet of the pedestrians and measure their speed by identifying their steps. Besides measuring the speed of pedestrians, a crowd-counting system using Convolutional Neural Networks is created by exploiting the spatial persistence of the image background in the temporal domain; by fusing consecutive temporal counting information, the system further refines its estimates. Finally, a novel memory-free cognitive framework for pedestrian shopping behaviour is presented, in which the simulated pedestrians use their visual perception as their route-choice model. Agents moving in an environment, equipped with an activity agenda, use their vision to select not only their route choices but also the shops that they visit.
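The first speed-estimation framework, for people moving parallel to the image plane, reduces in its simplest form to converting per-frame foot positions into ground distance over time. A sketch under the assumption of a single metres-per-pixel scale (the thesis's actual foot-localisation and step-detection stages are omitted):

```python
def walking_speed(foot_positions_px, fps, metres_per_px):
    """Estimate average walking speed from per-frame foot positions in
    image coordinates. Assumes motion parallel to the image plane on a
    flat surface, so one metres-per-pixel scale applies everywhere.
    foot_positions_px: list of (x, y) pixel coordinates, one per frame."""
    dist_px = 0.0
    for (x0, y0), (x1, y1) in zip(foot_positions_px,
                                  foot_positions_px[1:]):
        dist_px += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    duration_s = (len(foot_positions_px) - 1) / fps
    return dist_px * metres_per_px / duration_s  # metres per second
```

For the staircase case, where trajectories are perpendicular to the image plane, this flat-plane scale assumption breaks down, which is why the thesis treats the two settings with separate frameworks.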
