221

Embedding requirements within the model driven architecture

Fouad, A. January 2011 (has links)
The Model Driven Architecture (MDA) is offered as one way forward in software systems modelling to connect software design with the business domain. The general focus of the MDA is the development of software systems by performing transformations between software design models, and the automatic generation of application code from those models. Software systems are provided by developers, whose experience and models are not always in line with those of other stakeholders, which presents a challenge for the community. A review of the available literature finds that whilst many models and notations are available, those that are significantly supported by the MDA may not be best suited to non-technical stakeholders. In addition, the MDA does not explicitly consider requirements and specification. This research begins by investigating the adequacy of the MDA requirements phase and examining the feasibility of incorporating a requirements definition, focusing specifically upon model transformations. MDA artefacts were found to better serve the software community, and requirements were not appropriately integrated within the MDA; significant extension upstream is required in order to sufficiently accommodate the business user in terms of a requirements definition. Therefore, an extension to the MDA framework is offered that directly addresses Requirements Engineering (RE), including the distinction of analysis from design and highlighting the importance of specification. This extension is suggested to further the utility of the MDA by making it accessible to a wider audience upstream, enabling specification to be a direct output from business user involvement in the requirements phase of the MDA. To demonstrate applicability, this research illustrates the framework extension with the provision of a method and discusses the use of the approach in both academic and commercial settings. The results suggest that such an extension is academically viable in facilitating the move from analysis into the design of software systems, accessible for business use, and beneficial in industry by allowing the client to be involved in producing models sufficient for use in the development of software systems with MDA tools and techniques.
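The MDA idea central to this abstract, generating application code by transformation from a design model, can be conveyed with a minimal sketch. The model schema, entity names, and generation step below are invented for illustration and are not drawn from the thesis.

```python
# A hypothetical platform-independent model (PIM) held as plain data, and a
# transformation that emits application code from it. Schema is illustrative.
pim = {
    "Customer": {"name": "str", "email": "str"},
    "Order": {"customer": "Customer", "total": "float"},
}

def generate_class(entity, attrs):
    """Transform one PIM entity into Python source (the model-to-code step)."""
    params = ", ".join(f"{a}: {t}" for a, t in attrs.items())
    lines = [f"class {entity}:", f"    def __init__(self, {params}):"]
    lines += [f"        self.{a} = {a}" for a in attrs]
    return "\n".join(lines)

for entity, attrs in pim.items():
    print(generate_class(entity, attrs), end="\n\n")
```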
222

Combinations of time series forecasts : when and why are they beneficial?

Lemke, Christiane January 2010 (has links)
Time series forecasting has a long track record in many application areas. In forecasting research, it has been shown that finding an individual algorithm that works best for all possible scenarios is hopeless. Therefore, instead of striving to design a single superior algorithm, current research efforts have shifted towards gaining a deeper understanding of the reasons why a forecasting method may perform well in some conditions whilst failing in others. This thesis provides a number of contributions to this matter. Traditional empirical evaluations are discussed from a novel point of view, questioning the benefit of using sophisticated forecasting methods without domain knowledge. An original empirical study focusing on relevant off-the-shelf forecasting and forecast combination methods underlines the competitiveness of relatively simple methods in practical applications. Furthermore, meta-features of time series are extracted to automatically find and exploit a link between application-specific data characteristics and forecasting performance using meta-learning. Finally, the approach of extending the set of input forecasts by diversifying the functional approaches, parameter sets and data aggregation levels used for learning is discussed, relating characteristics of the resulting forecasts to different error decompositions for both individual methods and combinations. Advanced combination structures are investigated in order to take advantage of knowledge of the forecast generation processes. Forecasting is a crucial factor in airline revenue management; forecasting of the anticipated booking, cancellation and no-show numbers has a direct impact on general planning of routes and schedules, capacity control for fare classes and overbooking limits. In a collaboration with Lufthansa Systems in Berlin, experiments in the thesis are conducted on an airline data set with the objective of improving the current net booking forecast by modifying one of its components, the cancellation forecast. To compare the results achieved by the methods investigated here with the current state of the art in forecasting research, some experiments also use data sets from two recent forecasting competitions, providing a link between academic research and industrial practice.
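Forecast combination itself needs no more than a weighting rule over the individual forecasts. A minimal sketch follows, assuming point forecasts from several methods are already available; the mean, median, and inverse-error weighting schemes shown are standard devices from the combination literature, not code from the thesis.

```python
import numpy as np

def combine_forecasts(forecasts, past_errors=None):
    """Combine point forecasts from several methods for one horizon.

    forecasts:   array of shape (n_methods,)
    past_errors: optional array of mean absolute errors on a hold-out set
    """
    mean_comb = forecasts.mean()
    median_comb = np.median(forecasts)
    if past_errors is None:
        weighted_comb = mean_comb
    else:
        weights = 1.0 / past_errors          # better past accuracy -> larger weight
        weights /= weights.sum()
        weighted_comb = float(weights @ forecasts)
    return mean_comb, median_comb, weighted_comb

# Three hypothetical method outputs and their hold-out errors
print(combine_forecasts(np.array([102.0, 98.5, 110.2]), np.array([4.1, 3.2, 8.7])))
```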
223

Computational logic : structure sharing and proof of program properties

Moore, J. Strother January 1973 (has links)
This thesis describes the results of two studies in computational logic. The first concerns a very efficient method of implementing resolution theorem provers. The second concerns a non-resolution program which automatically proves many theorems about LISP functions, using structural induction. In Part 1, a method of representing clauses, called 'structure sharing', is presented. In this representation, terms are instantiated by binding their variables on a stack, or in a dictionary, and derived clauses are represented in terms of their parents. This allows the structure representing a clause to be used in different contexts without renaming its variables or copying it in any way. The amount of space required for a clause is (2 + n) 36-bit words, where n is the number of components in the unifying substitution made for the resolution or factor. This is independent of the number of literals in the clause and the depth of function nesting. Several ways of making the unification algorithm more efficient are presented. These include a method of preprocessing the input terms so that the unifying substitution for derived terms can be discovered by a recursive look-up procedure. Techniques for naturally mixing computation and deduction are presented. The structure sharing implementation of SL-resolution is described in detail. The relationship between structure sharing and programming language implementations is discussed. Part 1 concludes with the presentation of a programming language, based on predicate calculus, with structure sharing as the natural implementation. Part 2 of this thesis describes a program which automatically proves a wide variety of theorems about functions written in a subset of pure LISP. Features of this program include the following. The program is fully automatic, requiring no information from the user except the LISP definitions of the functions involved and the statement of the theorem to be proved. No inductive assertions are required of the user. The program uses structural induction when required, automatically generating its own induction formulas. All relationships in the theorem are expressed in terms of user-defined LISP functions, rather than a second logical language. The system employs no built-in information about any non-primitive function. All properties required of any function involved in a proof are derived and established automatically. The program is capable of generalizing some theorems in order to prove them; in doing so, it often generates interesting lemmas. The program can write new, recursive LISP functions automatically in attempting to generalize a theorem. Finally, the program is very fast by theorem-proving standards, requiring around 10 seconds per proof.
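The structure-sharing representation can be conveyed in a few lines: rather than copying an instantiated clause, keep the parent term untouched and record the unifier in a separate binding environment, resolving variables by recursive look-up. The toy term encoding below is invented for illustration and is not Moore's actual data layout.

```python
# Terms are ('var', name) for variables and (functor, args...) otherwise.
# A derived clause is the shared parent term plus a binding dictionary.

def is_var(t):
    return isinstance(t, tuple) and len(t) == 2 and t[0] == "var"

def deref(t, env):
    """Follow bindings until an unbound variable or a non-variable term."""
    while is_var(t) and t in env:
        t = env[t]
    return t

def instance(t, env):
    """Materialise the instantiated term only when needed (e.g. printing);
    during inference only the shared structure and env are stored."""
    t = deref(t, env)
    if is_var(t):
        return t
    functor, *args = t
    return (functor, *(instance(a, env) for a in args))

X, Y = ("var", "X"), ("var", "Y")
parent_clause = ("p", X, ("f", X, Y))    # shared, never copied
env = {X: ("a",), Y: ("g", ("b",))}      # unifier recorded separately
print(instance(parent_clause, env))      # ('p', ('a',), ('f', ('a',), ('g', ('b',))))
```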
224

Robot environment learning with a mixed-linear probabilistic state-space model

Chesters, William Robert January 2001 (has links)
This thesis proposes the use of a probabilistic state-space model with mixed-linear dynamics for learning to predict a robot's experiences. It is motivated by a desire to bridge the gap between traditional models with predefined objective semantics on the one hand, and the biologically-inspired "black box" behavioural paradigm on the other. A novel EM-type algorithm for the model is presented, which is less computationally demanding than the Monte Carlo techniques developed for use in (for example) visual applications. The algorithm's E-step is slightly approximative, but an extension is described which would in principle make it asymptotically correct. Investigation using synthetically sampled data shows that the uncorrected E-step can in any case make correct inferences about quite complicated systems. Results collected from two simulated mobile robot environments support the claim that mixed-linear models can capture both discontinuous and continuous structure in the world in an intuitively natural manner; while they proved to perform only slightly better than simpler autoregressive hidden Markov models on these simple tasks, it is possible to claim tentatively that they might scale more effectively to environments in which trends over time play a larger role. Bayesian confidence regions, which the mixed-linear model yields easily, proved to be an effective guard for preventing it from making over-confident predictions outside its area of competence. A section on future extensions discusses how the model's easy invertibility could be harnessed to the ultimate aim of choosing actions, from a continuous space of possibilities, which maximise the robot's expected payoff over several steps into the future.
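The model class itself is easy to sketch: a discrete mode selects which set of linear dynamics advances the continuous state, with Gaussian noise on both state and observation. The parameter values below are invented for illustration; the thesis's EM-type algorithm would estimate such parameters from the robot's experience rather than assume them.

```python
import numpy as np

rng = np.random.default_rng(0)
A = [np.array([[0.9, 0.1], [0.0, 0.95]]),     # linear dynamics for mode 0
     np.array([[1.0, -0.2], [0.1, 0.8]])]     # linear dynamics for mode 1
C = np.eye(2)                                  # observation matrix
P = np.array([[0.95, 0.05], [0.10, 0.90]])    # mode transition probabilities

x, mode = np.array([1.0, 0.0]), 0
trajectory = []
for t in range(50):
    mode = int(rng.choice(2, p=P[mode]))       # discrete regime switch
    x = A[mode] @ x + rng.normal(0.0, 0.05, size=2)   # mixed-linear state update
    y = C @ x + rng.normal(0.0, 0.1, size=2)          # noisy observation
    trajectory.append((mode, y))
print(trajectory[:3])
```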
225

Modelling turn-taking in a simulation of small group discussion

Padilha, Emiliano Gomes January 2006 (has links)
The organization of taking turns at talk is an important part of any verbal interaction such as conversation, particularly in groups. Sociologists and psycholinguists have been studying turn-taking in conversation through empirical and statistical analysis, and have identified some systematics in it. But to my knowledge no detailed computational modelling of verbal turn-taking has yet been attempted. This thesis describes one such attempt, for a simulation of small group discussion, that is, engaged conversation in groups of up to seven participants, which researchers have found to be much like two-person dialogues with overhearers. The group discussion is simulated by a simple multi-agent framework with a blackboard architecture, where each agent represents a participant in the discussion and the blackboard is their channel of communication, or 'environment' of the discussion. Agents are modelled with just a set of probabilistic parameters that give their likelihood of making the various turn-taking decisions in the simulation: when to talk, when to continue talking, when to interrupt, when to give feedback ("uh huh"), and so on. The simulation, therefore, consists of coordinating one-at-a-time talk (symbolic talk) with speaker transitions, hesitation, yielding or keeping the floor, and managing simultaneous talk, which occurs mostly around speaker transitions. The turn-taking modelling considers whether participants are talking or not, and when they reach points of possible completion in their utterances that correspond to transition-relevance places (TRPs), where others could start to speak in attempts to take a new turn of talk. The agent behaviours (acts), their internal states and procedures are then described. The model is expanded with elaborate procedures for the resolution of simultaneous talk, for speaking hesitations and their potential interruption, and for the constraints of the different 'sorts' of utterance with respect to turn-taking: whether the TRP is free, or the speaker has selected someone to speak next, has encouraged anyone to speak, or has indicated the course of an extended multi-utterance turn at talk, as in sentence beginnings like "first of all," or "let me tell you something: ...". The model and extensions are then comprehensively analysed through a series of large quantitative evaluations computing various aggregate statistics such as: the total times of single talk, multiple talk and silences; total occurrences of utterances, silences, simultaneous talk, multiple starts, middle-of-utterance attempts at talking, false starts, abandoned utterances (interrupted by others), and more.
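The core of such an agent model can be sketched in a few lines: each agent holds probabilistic turn-taking parameters and consults a shared blackboard to decide, at each tick, whether to take, keep, or yield the floor. The parameter names and values below are invented, and the sketch omits the thesis's TRPs, feedback, hesitations and simultaneous-talk resolution.

```python
import random

class Agent:
    def __init__(self, name, p_start=0.1, p_continue=0.8):
        self.name, self.p_start, self.p_continue = name, p_start, p_continue
        self.talking = False

    def step(self, anyone_talking):
        if self.talking:
            self.talking = random.random() < self.p_continue   # keep the turn?
        elif not anyone_talking:
            self.talking = random.random() < self.p_start      # take the floor?
        return self.talking

agents = [Agent(f"A{i}", p_start=random.uniform(0.05, 0.2)) for i in range(5)]
for t in range(20):
    busy = any(a.talking for a in agents)   # the blackboard: is the floor taken?
    speakers = [a.name for a in agents if a.step(busy)]
    print(t, speakers or ["(silence)"])     # several names = a multiple start
```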
226

Towards formal structural representation of spoken language : an evolving transformation system (ETS) approach

Gutkin, Alexander January 2006 (has links)
Speech recognition has been a very active area of research over the past twenty years. Despite evident progress, it is generally agreed by practitioners of the field that the performance of current speech recognition systems is rather suboptimal and new approaches are needed. The motivation behind the undertaken research is an observation that the notion of representation of objects and concepts, once considered central in the early days of pattern recognition, has been largely marginalised by the advent of statistical approaches. As a consequence of the predominantly statistical approach to the speech recognition problem, and due to the numeric, feature-vector-based nature of the representation, the classes inductively discovered from real data using decision-theoretic techniques have little meaning outside the statistical framework. This is because decision surfaces or probability distributions are difficult to analyse linguistically. Because of the latter limitation, it is doubtful that the gap between speech recognition and linguistic research can be bridged by numeric representations. This thesis investigates an alternative, structural, approach to spoken language representation and categorisation. The approach pursued in this thesis is based on a consistent program, known as the Evolving Transformation System (ETS), motivated by the development and clarification of the concept of structural representation in pattern recognition and artificial intelligence from both theoretical and applied points of view. This thesis consists of two parts. In the first part, a similarity-based approach to structural representation of speech is presented. First, a linguistically well-motivated structural representation of phones based on distinctive phonological features recovered from speech is proposed. The representation consists of string templates representing phones together with a similarity measure. The set of phonological templates together with a similarity measure defines a symbolic metric space. Representation and ETS-inspired categorisation in the symbolic metric spaces corresponding to the phonological structural representation are then investigated by constructing appropriate symbolic space classifiers and evaluating them on a standard corpus of read speech. In addition, a similarity-based isometric transition from phonological symbolic metric spaces to the corresponding non-Euclidean vector spaces is investigated. The second part of this thesis deals with the formal approach to structural representation of spoken language. Unlike the approach adopted in the first part, the representation developed in the second part is based on the mathematical language of the ETS formalism. This formalism has been specifically developed for the structural modelling of dynamic processes. In particular, it allows the representation of both objects and classes in a uniform event-based hierarchical framework. In this thesis, the latter property of the formalism allows the adoption of a more physiologically-concrete approach to structural representation. The proposed representation is based on gestural structures and encapsulates speech processes at the articulatory level. Algorithms for deriving the articulatory structures from the data are presented and evaluated.
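The notion of classification in a symbolic metric space can be made concrete with a small sketch: templates are feature strings, the metric is a string distance, and an unknown token is labelled by its nearest template. Plain Levenshtein distance and the feature symbols below are stand-ins chosen for illustration; the thesis uses phonological distinctive features with an ETS-inspired similarity measure.

```python
def levenshtein(a: str, b: str) -> int:
    """Standard edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

# Invented feature-string templates standing in for phonological templates
templates = {
    "/p/": "stop,voiceless,labial",
    "/b/": "stop,voiced,labial",
    "/m/": "nasal,voiced,labial",
}

def classify(token: str) -> str:
    """Nearest-template classification in the symbolic metric space."""
    return min(templates, key=lambda label: levenshtein(token, templates[label]))

print(classify("stop,voiced,labiall"))   # noisy token -> nearest template "/b/"
```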
227

Levels of interaction between episodic and semantic memory : an electrophysiological and computational investigation

Greve, Andrea January 2007 (has links)
There is compelling evidence that memory is supported by multiple, functionally independent subsystems that distinguish declarative from non-declarative memories (Tulving, 1972). The declarative subsystems, episodic and semantic memory, have been studied intensively, largely in isolation from each other. Relatively little attention has been paid to the interplay between episodic and semantic memory. This thesis constitutes a series of behavioural, neuroimaging, and computational investigations aimed at elucidating the factors and mechanisms that mediate interactions between episodic and semantic memory. Event-Related Potentials (ERPs) are used to isolate processes implicated in episodic and semantic memory interactions on the basis of known ERP effects. Experimental investigations vary factors that target semantic memory either directly or indirectly. Direct manipulations alter the semantic content of word pairs by modulating their lexicality (words vs. non-words) or coherence (categorical vs. non-categorical). Indirect manipulations focus episodic encoding towards semantic or non-semantic aspects of the to-be-encoded word pairs. This thesis investigates whether such manipulations influence episodic memory and, if so, in what form. The behavioural and ERP data provide clear evidence for distinct episodic and semantic interactions at the level of semantic organisation and lexical representation. Episodic retrieval, which is supported by recollection and familiarity according to dual-process theories (Yonelinas, 2002), reveals enhanced familiarity for semantically organised stimuli. This effect is dependent on semantically deep encoding strategies. By contrast, differences in the lexicality of stimuli modulated both familiarity and recollection. To account for why different types of interactions are obtained, a computational memory model is proposed. This model uses a single network to simulate a dual-process model of episodic retrieval and gives insight into processes that may support interactions between episodic and semantic memory. Thus, this thesis provides novel evidence for different types of episodic and semantic memory interactions dependent on the kind of semantic manipulation, and specifies the mediating mechanisms leading to such interactions.
228

Automatic tailoring and cloth modelling for animation characters

Li, Wenxi January 2014 (has links)
The construction of realistic characters has become increasingly important to the production of blockbuster films, TV series and computer games. The outfit of a character plays an important role in the application of virtual characters; it is one of the key elements that reflect the personality of a character. Virtual clothing refers to the process of constructing outfits for virtual characters, and currently it is widely used in two main areas: the fashion industry and computer animation. In the fashion industry, virtual clothing technology is an effective tool for creating, editing and pre-visualising cloth design patterns efficiently. However, using this method requires considerable tailoring expertise. In computer animation, geometric modelling methods are widely used for cloth modelling due to their simplicity and intuitiveness. However, because of the shortage of tailoring knowledge among animation artists, existing cloth design patterns cannot be used directly by animation artists, and the appearance of cloth depends heavily on the skill of the artist. Moreover, geometric modelling methods require many manual operations. This tediousness is worsened when modelling the same style of cloth for different characters with different body shapes and proportions. This thesis addresses this problem and presents a new virtual clothing method which includes automatic character measuring, automatic cloth pattern adjustment, and cloth pattern assembly. There are two main contributions in this research. Firstly, a geodesic computation scheme based on geodesic curvature flow is presented for acquiring length measurements from a character. Given the fast-growing demand for high-resolution character models in animation production, the increasing number of characters that need to be handled simultaneously, and the importance of improving the reusability of 3D models in film production, the efficiency of modelling cloth for multiple high-resolution characters is very important. In order to improve the efficiency of measuring characters for cloth fitting, a fast geodesic algorithm that has linear time complexity with a small bounded error is also presented. Secondly, a cloth-pattern-adjusting genetic algorithm is developed for automatic cloth fitting and retargeting. Because body shapes and proportions vary greatly in character design, fitting and transferring cloth to a different character is a challenging task. This thesis treats the cloth fitting process as an optimization procedure: it automatically optimizes both the shape and size of each cloth pattern, and the integrity, design and size of each pattern are evaluated in order to create 3D cloth for any character with different body shapes and proportions while preserving the original cloth design. By automating the cloth modelling process, it empowers the creativity of animation artists and improves their productivity, allowing them to use the large number of existing cloth design patterns in the fashion industry to create various clothes and to transfer the same cloth design to characters with different body shapes and proportions with ease.
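The genetic-algorithm view of cloth fitting can be sketched as an optimisation over pattern scale factors, with a fitness that trades body fit against fidelity to the original design. The measurements, weights and operators below are invented for illustration; the thesis's fitness additionally evaluates pattern integrity.

```python
import random

TARGET = [96.0, 74.0, 100.0]   # hypothetical target chest/waist/hip lengths (cm)
BASE = [90.0, 70.0, 95.0]      # hypothetical base-pattern measurements (cm)
ORIGINAL = [1.0, 1.0, 1.0]     # original pattern scale factors

def fitness(genome):
    """Lower is better: fit the target body while staying near the design."""
    fit_err = sum((g * b - t) ** 2 for g, b, t in zip(genome, BASE, TARGET))
    design_err = sum((g - o) ** 2 for g, o in zip(genome, ORIGINAL))
    return fit_err + 10.0 * design_err

def mutate(genome):
    return [g + random.gauss(0, 0.02) for g in genome]

pop = [[random.uniform(0.8, 1.2) for _ in range(3)] for _ in range(40)]
for gen in range(100):
    pop.sort(key=fitness)                                     # elitist selection
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(30)]
print("best scales:", [round(g, 3) for g in min(pop, key=fitness)])
```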
229

Automatic control program creation using concurrent Evolutionary Computing

Hart, John K. January 2004 (has links)
Over the past decade, Genetic Programming (GP) has been the subject of a significant amount of research, but this has resulted in the solution of few complex real-world problems. In this work, I propose that, for some relatively simple, non-safety-critical embedded control applications, GP can be used as a practical alternative to software developed by humans. Embedded control software has become a branch of software engineering with distinct temporal, interface and resource constraints and requirements. This results in a characteristic software structure, and by examining this, the effective decomposition of an overall problem into a number of smaller, simpler problems is performed. It is this type of problem amelioration that is suggested as a method whereby certain real-world problems may be rendered into a soluble form suitable for GP. In the course of this research, the body of published GP literature was examined and the most important changes to the original GP technique of Koza are noted; particular focus is placed upon GP techniques involving an element of concurrency, which is central to this work. This search highlighted few applications of GP to the creation of software for complex, real-world problems; this was especially true in the case of multi-thread, multi-output solutions. To demonstrate the idea, a concurrent Linear GP (LGP) system was built that creates a multiple-input, multiple-output solution using a custom low-level evolutionary language set, combining both continuous and Boolean data types. The system uses a multi-tasking model to evolve and execute the required LGP code for each system output using separate populations. Two example problems, a simple fridge controller and a more complex washing machine controller, are described, and the problems encountered and overcome during their successful solution are detailed. The operation of the complete, evolved washing machine controller is simulated using a graphical LabVIEW application. The aim of this research is to propose a general-purpose system for the automatic creation of control software for use in a range of problems from the target problem class, without requiring any system tuning. In order to assess the sensitivity of the system's search performance, experiments were performed using various population and LGP string sizes; the experimental data collected were also used to examine the utility of abandoning stalled searches and restarting. This work is significant because it identifies a realistic application of GP that can ease the burden of finite human software design resources, whilst capitalising on accelerating computing potential.
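Linear GP itself is compact enough to sketch: a program is a sequence of register instructions, and a population of such programs is evolved by mutation against a target input/output behaviour. The instruction set and the toy task (evolving a doubling function) are invented for illustration; the thesis evolves concurrent, multi-output controllers with a mixed Boolean/continuous language.

```python
import random

OPS = ["add", "sub", "mul"]

def run(program, x):
    """Execute a linear program over four registers; r0 = input, r3 = output."""
    r = [x, 0.0, 1.0, 0.0]
    for op, dst, a, b in program:
        if op == "add": r[dst] = r[a] + r[b]
        elif op == "sub": r[dst] = r[a] - r[b]
        else: r[dst] = r[a] * r[b]
    return r[3]

def random_instr():
    # destination in r1..r3 so the input register is never overwritten
    return (random.choice(OPS), random.randrange(1, 4),
            random.randrange(4), random.randrange(4))

def error(program):
    return sum(abs(run(program, x) - 2.0 * x) for x in range(-5, 6))

pop = [[random_instr() for _ in range(5)] for _ in range(60)]
for gen in range(200):
    pop.sort(key=error)
    if error(pop[0]) == 0:
        break
    # keep the best 15, refill with single-instruction mutants of survivors
    pop = pop[:15] + [p[:i] + [random_instr()] + p[i + 1:]
                      for p in random.choices(pop[:15], k=45)
                      for i in [random.randrange(5)]]
print("best error:", error(pop[0]))
```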
230

Synthetic voice design and implementation

Cowley, Christopher K. January 1999 (has links)
The limitations of speech output technology emphasise the need for exploratory psychological research to maximise the effectiveness of speech as a display medium in human-computer interaction. Stage 1 of this study reviewed speech implementation research, focusing on general issues for tasks, users and environments. An analysis of design issues was conducted, related to the differing methodologies for synthesised and digitised message production. A selection of ergonomic guidelines was developed to enhance effective speech interface design. Stage 2 addressed the negative reactions of users to synthetic speech in spite of elegant dialogue structure and appropriate functional assignment. Synthetic speech interfaces have been consistently rejected by their users in a wide variety of application domains because of their poor quality. Indeed, the literature repeatedly emphasises quality as the most important contributor to implementation acceptance. In order to investigate this, a converging-operations approach was adopted, consisting of a series of five experiments (and associated pilot studies) which homed in on the specific characteristics of synthetic speech that determine listeners' varying perceptions of its qualities, and how these might be manipulated to improve its aesthetics. A flexible and reliable ratings interface was designed to display DECtalk speech variations and record listeners' perceptions. In experiment one, 40 participants used this to evaluate synthetic speech variations on a wide range of perceptual scales. Factor analysis revealed two main factors: "listenability", accounting for 44.7% of the variance and correlating with the DECtalk "smoothness" parameter at .57 (p<0.005) and with "richness" at .53 (p<0.005); and "assurance", accounting for 12.6% of the variance and correlating with "average pitch" at .42 (p<0.005) and "head size" at .42 (p<0.005). Complementary experiments were then required in order to address appropriate voice design for enhanced listenability and assurance perceptions. With a standard male voice set, 20 participants rated enhanced smoothness and attenuated richness as contributing significantly to speech listenability (p<0.001). Experiment three, using a female voice set, yielded comparable results, suggesting that further refinements of the technique were necessary in order to develop an effective methodology for speech quality optimization. At this stage it became essential to focus directly on the parameter modifications associated with the aesthetically pleasing characteristics of synthetic speech. If a reliable technique could be developed to enhance perceived speech quality, then synthesis systems based on the commonly used DECtalk model might assume some of their considerable yet unfulfilled potential. In experiment four, 20 subjects rated a wide range of voices modified across the two main parameters associated with perceived listenability: smoothness and richness. The results clearly revealed a linear relationship between enhanced smoothness and attenuated richness and significant improvements in perceived listenability (p<0.001 in both cases). Planned comparisons were conducted between the different levels of the parameters, revealing significant listenability enhancements as smoothness was increased, and a similar pattern as richness decreased. Statistical analysis also revealed a significant interaction between the two parameters (p<0.001), and a more comprehensive picture was constructed.
In order to expand the focus of the research and enhance its generality, it was then necessary to assess the effects of synthetic speech modifications whilst subjects were undertaking a more realistic task. Passively rating the voices independent of processing for meaning is arguably an artificial task which rarely, if ever, would occur in 'real-world' settings. In order to investigate perceived quality in a more realistic task scenario, experiment five introduced two levels of information processing load. The purpose of this experiment was firstly to see if a comprehension load modified the pattern of listenability enhancements, and secondly to see if that pattern differed between high and low load. Techniques for introducing cognitive load were investigated, and comprehension load was selected as the most appropriate method in this case. A pilot study distinguished two levels of comprehension load from a set of 150 true/false sentences, and these were recorded across the full range of parameter modifications. Twenty subjects then rated the voices using the established listenability scales as before, while also performing the additional task of processing each spoken stimulus for meaning and determining the authenticity of the statements. Results indicated that listenability enhancements did indeed occur at both levels of processing, although at the higher level variations in the pattern occurred. A significant difference was revealed between optimal parameter modifications for conditions of high and low cognitive load (p<0.05). The results showed that subjects perceived the synthetic voices in the high cognitive load condition to be significantly less listenable than the same voices in the low cognitive load condition. The analysis also revealed that this effect was independent of the number of errors made. This result may be of general value because conclusions drawn from these findings are independent of any particular parameter modifications that may be exclusively available to DECtalk users. Overall, the study presents a detailed analysis of the research domain combined with a systematic experimental programme of synthetic speech quality assessment. The experiments reported establish a reliable and replicable procedure for optimising the aesthetically pleasing characteristics of DECtalk speech, but the implications of the research extend beyond the boundaries of a particular synthesiser. Results from the experimental programme lead to a number of conclusions, the most salient being that the synthetic speech designer not only has to overcome the general rejection of synthetic voices, based on their poor quality, through sophisticated customisation of synthetic voice parameters, but also needs to take into account the cognitive load of the task being undertaken. The interaction between cognitive load and optimal settings for synthesis requires direct consideration if synthetic speech systems are to realise and maximise their potential in human-computer interaction.
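The factor-analytic step described above has a standard computational form: extract a small number of factors from a listener-by-scale ratings matrix, then correlate factor scores with the synthesis parameter settings. The sketch below uses randomly generated placeholder data, not the study's ratings, and scikit-learn's FactorAnalysis as a stand-in for the original procedure.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
ratings = rng.normal(size=(40, 12))   # placeholder: 40 listeners x 12 perceptual scales
params = rng.normal(size=(40, 2))     # placeholder: e.g. smoothness, richness settings

fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(ratings)    # per-listener scores on the two factors

for i, name in enumerate(["factor 1", "factor 2"]):
    r = np.corrcoef(scores[:, i], params[:, 0])[0, 1]
    print(f"{name} vs first parameter: r = {r:.2f}")
```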
