41

Découpage textuel dans la traduction assistée par les systèmes de mémoire de traduction / Text segmentation in human translation assisted by translation memory systems

Popis, Anna 13 December 2013 (has links)
The aim of the theoretical and experimental studies presented in this work was to identify, using objective and reliable criteria, an optimal level of text segmentation for French–Polish specialized translation assisted by a translation memory system (TMS). To this end, we developed our own approach: a new combination of the research methods and analysis tools proposed notably by Simard (2003), Langlais and Simard (2001, 2003) and Dragsted (2004), all aimed at improving TMS viability through modifications to sentence-level segmentation, which they regard as limiting that viability. Based on observations of how text segmentation actually unfolds in specialized translation performed by a group of students without any computer aid, we identified three segmentation levels potentially applicable in a TMS: sentences, clauses, and noun and verb phrases. We then carried out a comparative analysis of the reusability rates of WORDFAST translation memories, and of the utility of the translations proposed by the system, for each of these three segmentation levels over a corpus of twelve specialized texts. This analysis showed that no single text segmentation level can be said to improve TMS viability beyond dispute: two levels, namely sentences and clauses, in fact yield comparable TMS viability.
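The core of the comparative analysis, measuring how often a translation memory can be reused at each segmentation level, can be illustrated with a small, hypothetical Scala sketch. This is our own toy illustration, not the thesis code: the names SegmentationReuse and reuseRate are invented, WORDFAST uses fuzzy rather than exact matching, and real clause segmentation needs syntactic analysis rather than punctuation.

```scala
object SegmentationReuse {
  // A translation memory: source segment -> target segment.
  type TM = Map[String, String]

  // Sentence-level segmentation: split on terminal punctuation.
  def sentences(text: String): Seq[String] =
    text.split("""(?<=[.!?])\s+""").toSeq.map(_.trim).filter(_.nonEmpty)

  // Crude clause-level segmentation: split sentences further on commas and
  // semicolons (a real system would use a parser, not punctuation).
  def clauses(text: String): Seq[String] =
    sentences(text).flatMap(_.split("""\s*[,;]\s*""").toSeq)
      .map(_.trim).filter(_.nonEmpty)

  // Reusability: the share of segments with an exact match in the memory.
  def reuseRate(segments: Seq[String], tm: TM): Double =
    if (segments.isEmpty) 0.0
    else segments.count(tm.contains).toDouble / segments.size

  def main(args: Array[String]): Unit = {
    val tm: TM = Map(
      "Le contrat prend effet immédiatement." -> "The contract takes effect immediately.",
      "Les parties conviennent de ce qui suit" -> "The parties agree as follows"
    )
    val doc = "Le contrat prend effet immédiatement. " +
      "Les parties conviennent de ce qui suit, sauf disposition contraire."
    println(f"sentence-level reuse rate: ${reuseRate(sentences(doc), tm)}%.2f") // 0.50
    println(f"clause-level reuse rate:   ${reuseRate(clauses(doc), tm)}%.2f")   // 0.67
  }
}
```

On this toy memory, clause-level segmentation recovers a match that sentence-level segmentation misses; this is the kind of effect the study quantifies, with fuzzy matching, over its twelve-text corpus.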
42

Experiences of emergent change from an applied neurosciences perspective

Garnett, Gabriella 11 1900 (has links)
Emergent change is a pervasive force in modern organisations, yet the subjective experiences of emergent change among frontline individuals and teams have not been explored in the organisational change literature. The integrative field of applied neurosciences offers valuable insights into the underlying neural mechanisms that shape these experiences and drive responses aimed at meeting basic psychological needs. Using interactive qualitative analysis (IQA), this study involved a focus group and follow-up interviews with nine participants at a South African software development company to explore experiences of emergent change at work. The resulting system dynamics showed that these experiences are considerably more complex than literature and practice currently account for, and that individuals and teams experience emergent change as a threat to their sense of safety and their basic psychological needs. Physiological and emotional experiences were found to be driving elements; peak performance state and the relational environment were found to be salient outcomes. The findings open the way for a reconceptualisation of emergent change, for a shift in focus from change itself to the human experience of it, and for new possibilities, tools and practices for meeting needs and thriving in an ever-changing world. / Industrial and Organisational Psychology / M. Com. (Industrial and Organisational Psychology)
43

Real-time Business Intelligence through Compact and Efficient Query Processing Under Updates

Idris, Muhammad 10 April 2019 (has links)
Responsive analytics are rapidly taking over from the post-fact approaches that dominate traditional data warehousing. Recent advances in analytics demand placing analytical engines at the forefront of the system so that it can react to updates arriving at high speed and detect patterns, trends and anomalies. Solutions of this kind find applications in financial systems, industrial control systems, business intelligence and online machine learning, among others. These applications are usually associated with Big Data and require the ability to react to constantly changing data in order to obtain timely insights and take proactive measures. Generally, such systems specify the analytical results, or their basic elements, in a query language; the main task is then to maintain these results efficiently under frequent updates. The task of reacting to updates and analyzing changing data has been addressed in two ways in the literature: traditional business intelligence (BI) solutions focus on historical data analysis, where the data is refreshed periodically and in batches, while stream processing solutions process data from transient sources as a flow (or set of flows) of data items. Both kinds of systems share the niche of reacting to updates (known as dynamic evaluation); however, they differ in architecture, query languages, and processing mechanisms. In this thesis, we investigate the possibility of a reactive and unified framework for modeling the queries that appear in both kinds of systems. In traditional BI solutions, evaluating queries under updates has been studied under the umbrella of incremental evaluation, which is based on the relational incremental view maintenance model and mostly focuses on queries featuring equi-joins. Streaming systems, in contrast, generally follow automaton-based models to evaluate queries under updates, and they typically process queries featuring comparisons of temporal attributes (e.g., timestamp attributes) along with comparisons of non-temporal attributes over streams of bounded size. Temporal comparisons constitute inequality constraints, while non-temporal comparisons can be either equality or inequality constraints; hence these systems mostly process inequality joins. As a starting point, we postulate the thesis that queries in streaming systems can also be evaluated efficiently in a main-memory model based on the paradigm of incremental evaluation, just as in BI systems. The efficiency of such a model is measured in terms of its runtime memory footprint and its update processing cost. The existing approaches to dynamic evaluation in both kinds of systems present a trade-off between the two: systems that avoid materializing query (sub)results incur high update latency, while systems that materialize (sub)results incur a high memory footprint. We overcome this trade-off by devising a practical dynamic evaluation algorithm for queries that appear in both kinds of systems, together with a main-memory data representation that allows query (sub)results to be enumerated without materialization and can be maintained efficiently under updates. We call this representation the Dynamic Constant Delay Linear Representation (DCLR).
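To make the trade-off concrete, the following hypothetical Scala sketch (our own illustration, not the thesis's DCLR implementation; the name IncrementalJoin is invented) shows the materialization-free end of the spectrum for a single equi-join R(a,b) ⋈ S(b,c): each update is a cheap index insert, space stays linear in the input, and join results are enumerated on demand instead of being stored.

```scala
import scala.collection.mutable

class IncrementalJoin {
  // Index each relation on the join attribute b; nothing else is stored.
  private val rIndex = mutable.Map.empty[Int, mutable.Buffer[Int]] // b -> a values
  private val sIndex = mutable.Map.empty[Int, mutable.Buffer[Int]] // b -> c values

  // An insertion is a single index update; no join tuples are materialized.
  def insertR(a: Int, b: Int): Unit =
    rIndex.getOrElseUpdate(b, mutable.Buffer.empty[Int]) += a

  def insertS(b: Int, c: Int): Unit =
    sIndex.getOrElseUpdate(b, mutable.Buffer.empty[Int]) += c

  // Enumerate R ⋈ S lazily: space stays linear in |R| + |S|, and result
  // tuples are produced one at a time during enumeration.
  def results: Iterator[(Int, Int, Int)] =
    for {
      (b, as) <- rIndex.iterator
      cs      <- sIndex.get(b).iterator
      a       <- as.iterator
      c       <- cs.iterator
    } yield (a, b, c)
}

object IncrementalJoinDemo {
  def main(args: Array[String]): Unit = {
    val j = new IncrementalJoin
    j.insertR(1, 10); j.insertR(2, 10); j.insertS(10, 7); j.insertS(20, 9)
    j.results.foreach(println) // (1,10,7) and (2,10,7)
  }
}
```

The materializing alternative would store every output tuple, making enumeration trivial but each update potentially expensive and the footprint as large as the join result; the DCLRs described next aim to get the good ends of both designs.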
We devise DCLRs with the following properties: (1) they allow enumeration of query results without materialization, with bounded delay (and with constant delay for a subclass of queries); (2) they allow tuple lookup in query results with logarithmic delay (and with constant delay for conjunctive queries with equi-joins only); (3) they take space linear in the size of the database; (4) they can be maintained efficiently under updates. We first study DCLRs with these properties for the class of acyclic conjunctive queries featuring equi-joins with projections and present the corresponding dynamic evaluation algorithm. We then generalize this algorithm to the class of acyclic queries featuring multi-way theta-joins with projections. The dynamic algorithms over DCLRs rely on a particular variant of join trees, called Generalized Join Trees (GJTs), which guarantee the properties of DCLRs described above. We define GJTs and present algorithms to test a conjunctive query featuring theta-joins for acyclicity and to generate GJTs for such queries. To do so, we extend the classical GYO algorithm from testing conjunctive queries with equalities for acyclicity to testing conjunctive queries featuring multi-way theta-joins with projections, and we further extend it to generate GJTs for queries that are acyclic. We implemented our algorithms in a query compiler that takes SQL queries as input and generates executable Scala code: a trigger program that processes the queries and maintains them under updates. We tested our approach against state-of-the-art main-memory BI and CEP systems. Our evaluation results show that our DCLR-based approach is over an order of magnitude more efficient than existing systems in both memory footprint and update processing cost. We also show that enumerating query results without materialization from DCLRs is comparable to (and in some cases faster than) enumerating from materialized query results.
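For background, here is a hypothetical Scala sketch of the classical GYO reduction that the thesis extends (the textbook equi-join version only, not the theta-join generalization; the object name GYO and its helpers are our own): a conjunctive query, viewed as a hypergraph whose edges are the variable sets of its atoms, is acyclic iff alternately deleting variables private to a single atom and atoms subsumed by another empties the hypergraph.

```scala
object GYO {
  type Edge = Set[String] // one atom = the set of variables it uses

  def isAcyclic(atoms: List[Edge]): Boolean = {
    var edges = atoms.filter(_.nonEmpty)
    var changed = true
    while (changed && edges.nonEmpty) {
      changed = false
      // Rule 1: delete variables that occur in exactly one atom.
      val counts = edges.flatten.groupBy(identity).view.mapValues(_.size).toMap
      val pruned = edges.map(_.filter(v => counts(v) > 1)).filter(_.nonEmpty)
      if (pruned != edges) { edges = pruned; changed = true }
      // Rule 2: delete atoms contained in another atom (keep one copy of duplicates).
      val reduced = edges.zipWithIndex.filterNot { case (e, i) =>
        edges.zipWithIndex.exists { case (f, j) =>
          j != i && e.subsetOf(f) && (e != f || j < i)
        }
      }.map(_._1)
      if (reduced != edges) { edges = reduced; changed = true }
    }
    edges.isEmpty // acyclic iff the reduction consumed every atom
  }

  def main(args: Array[String]): Unit = {
    // R(a,b), S(b,c), T(c,d): a path query, acyclic.
    println(isAcyclic(List(Set("a", "b"), Set("b", "c"), Set("c", "d")))) // true
    // R(a,b), S(b,c), T(c,a): a triangle, cyclic.
    println(isAcyclic(List(Set("a", "b"), Set("b", "c"), Set("c", "a")))) // false
  }
}
```

The thesis's extension must additionally decide where theta-join predicates can be anchored in the tree, which is what the GJT variant and the generalized acyclicity test handle.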
