361 |
Un procedimiento de medición de tamaño funcional para especificaciones de requisitos. Condori Fernández, Olinda Nelly. 07 May 2008
Nowadays, software size is used in production management and control as one of the essential parameters of the estimation models that contribute to the quality of software projects and deliverables. Although the importance of early size measurement is evident, this measurement is currently only achieved in the later phases of the software life cycle (analysis, design and implementation).
Software size can be quantified using different techniques, such as lines of code and functional size measurement methods. A functional size measurement method measures the size of software by quantifying its functional requirements. Function Point Analysis (FPA) is the most widely used method. This method was developed to measure Management Information Systems built with traditional methods. Although IFPUG FPA has been gaining popularity in industry, it lacks applicability to every type of software and to new development paradigms.
To address these weaknesses, COSMIC-FFP has emerged as a second-generation method and has been adopted as an international standard (ISO/IEC 19761). However, the generality of COSMIC-FFP requires it to be instantiated through a more specific and systematic procedure in conjunction with a software development method. / Condori Fernández, ON. (2007). Un procedimiento de medición de tamaño funcional para especificaciones de requisitos [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1998
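For context, COSMIC-FFP (ISO/IEC 19761) sizes software by counting the data movements (Entry, eXit, Read, Write) of each functional process, one COSMIC Function Point (CFP) per movement. The sketch below only illustrates that counting rule; the functional processes and data groups in it are hypothetical and are not taken from the thesis.

```python
# Minimal sketch of COSMIC-style functional size measurement: each functional
# process is sized as the number of its data movements (Entry, eXit, Read,
# Write), 1 CFP per movement. Example processes are hypothetical.
MOVEMENT_TYPES = {"E", "X", "R", "W"}

def cfp(functional_processes):
    """Return total size in COSMIC Function Points and a per-process breakdown."""
    breakdown = {}
    for name, movements in functional_processes.items():
        unknown = [m for m in movements if m[0] not in MOVEMENT_TYPES]
        if unknown:
            raise ValueError(f"unknown movement type(s) in {name}: {unknown}")
        breakdown[name] = len(movements)  # 1 CFP per data movement
    return sum(breakdown.values()), breakdown

processes = {
    "register order": [("E", "order data"), ("R", "customer"), ("W", "order"), ("X", "confirmation")],
    "query order":    [("E", "order id"), ("R", "order"), ("X", "order data")],
}
total, per_process = cfp(processes)
print(total, per_process)  # 7 CFP in total for this toy example
```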
362 |
On Clustering and Evaluation of Narrow Domain Short-Test Corpora. Pinto Avendaño, David Eduardo. 23 July 2008
This doctoral thesis investigates the problem of clustering special collections of documents, namely short texts from narrow domains.
To carry out this task, several corpora and clustering methods have been analysed. Moreover, a number of corpus evaluation measures, term selection techniques and cluster validity measures have been introduced in order to study the following problems:
- Determining the relative difficulty of clustering a given corpus, and studying some of its characteristics such as text length, domain broadness, stylometry, class imbalance and structure.
- Contributing to the state of the art on the clustering of corpora composed of short texts from narrow domains.
The research carried out is partially focused on "short-text clustering". This topic is considered relevant given the current and future way in which people tend to use a "reduced language" made up of short texts (for example, blogs, snippets, news items and text-message-like communication such as e-mail and chat).
Additionally, the domain broadness of corpora is studied. In this sense, a corpus can be considered narrow- or broad-domain if its degree of vocabulary overlap is high or low, respectively. In the categorisation task, it is quite hard to deal with narrow-domain corpora such as scientific papers, technical reports, patents, etc.
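As a rough illustration of that notion (an illustrative Jaccard-style measure, not the specific corpus measures proposed in the thesis), a hedged sketch that estimates vocabulary overlap between the categories of a labelled corpus; a high value would point to a narrow domain:

```python
# Hedged sketch: estimate vocabulary overlap between the categories of a
# labelled corpus. High overlap suggests a narrow domain, low overlap a broad
# one. Illustrative measure only.
from collections import defaultdict

def vocabulary_overlap(docs, labels):
    """docs: list of token lists; labels: parallel list of category labels."""
    vocab_per_class = defaultdict(set)
    for tokens, label in zip(docs, labels):
        vocab_per_class[label].update(tokens)
    vocabs = list(vocab_per_class.values())
    if len(vocabs) < 2:
        return 1.0
    shared = set.intersection(*vocabs)
    union = set.union(*vocabs)
    return len(shared) / len(union)  # Jaccard-style overlap in [0, 1]

docs = [["protein", "binding", "assay"], ["protein", "expression", "assay"],
        ["football", "match", "goal"]]
labels = ["bio", "bio", "sport"]
print(vocabulary_overlap(docs, labels))  # low overlap -> broader domain
```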
The main objective of this work is to study possible strategies for dealing with the following two problems:
a) the low frequencies of vocabulary terms in short texts, and
b) the high vocabulary overlap associated with narrow domains.
Although each of these problems is challenging enough on its own, when dealing with short texts from narrow domains the complexity of the problem increases. / Pinto Avendaño, DE. (2008). On Clustering and Evaluation of Narrow Domain Short-Test Corpora [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/2641
363 |
Crash recovery with partial amnesia failure model issues. De Juan Marín, Rubén. 30 September 2008
Replicated systems are a kind of distributed system whose main goal is to ensure that computer systems are highly available, fault tolerant and provide high performance. One of the latest trends in replication techniques managed by replication protocols is to make use of a Group Communication System, and more specifically of the atomic broadcast communication primitive, in order to develop more efficient replication protocols.
An important aspect of these systems is how they manage the disconnection of nodes (which degrades their service) and the connection/reconnection of nodes in order to maintain their original support. In replicated systems this task is delegated to recovery protocols, and how they work depends especially on the failure model adopted. A model commonly used for systems managing a large state is crash-recovery with partial amnesia, because it implies short recovery periods. However, assuming it gives rise to several problems. Most of them have already been solved in the literature: view management, the abort of local transactions started in crashed nodes (in transactional environments) or, for example, the re-inclusion of new nodes into the replicated system. Nevertheless, there is one problem related to the assumption of this failure model that has not been completely considered: the amnesia phenomenon, which can lead to inconsistencies if it is not correctly managed.
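As a hedged, toy illustration of that phenomenon (not the formalisation given in the thesis), the sketch below simulates a replica that loses its volatile state when it crashes; unless the recovery protocol transfers the forgotten and missed updates, its state diverges from the surviving replicas:

```python
# Toy simulation of the amnesia problem under the crash-recovery with
# partial amnesia failure model. Hypothetical, illustrative code only.
class Replica:
    def __init__(self, name):
        self.name = name
        self.applied = []      # volatile state: updates applied so far
        self.crashed = False

    def deliver(self, update):
        if not self.crashed:
            self.applied.append(update)

    def crash(self):
        self.crashed = True
        self.applied = []      # partial amnesia: volatile state is lost

    def recover(self, state_transfer=None):
        self.crashed = False
        if state_transfer is not None:
            self.applied = list(state_transfer)  # recovery protocol repairs state

r1, r2 = Replica("r1"), Replica("r2")
for u in ["u1", "u2"]:           # both replicas deliver and apply u1, u2
    r1.deliver(u); r2.deliver(u)

r2.crash()                       # r2 crashes and forgets even what it applied
r1.deliver("u3")                 # progress continues in the surviving partition

r2.recover()                     # naive recovery: inconsistency due to amnesia
print(r1.applied, r2.applied)    # ['u1', 'u2', 'u3'] vs []

r2.recover(state_transfer=r1.applied)   # recovery with state transfer
print(r1.applied == r2.applied)         # True: consistency restored
```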
This work presents this inconsistency problem due to amnesia and formalizes it, defining the properties that must be fulfilled to avoid it and defining possible solutions. Besides, it also presents and formalizes an inconsistency problem (due to amnesia) which appears under a specific sequence of events allowed by the majority partition progress condition and which implies stopping the system, proposing the properties for overcoming it and proposing different solutions. As a consequence, it proposes a new majority partition progress condition. In the sequel there is de / De Juan Marín, R. (2008). Crash recovery with partial amnesia failure model issues [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/3302
364 |
Conceptual schemas generation from organizacional model in an automatic software production process. Martínez Rebollar, Alicia. 30 September 2008
Software engineering has proposed many techniques to improve software development; however, the final goal has not been achieved. In many cases, the software product does not satisfy the real needs of the final customers of the business in which the system will operate.
One of the main problems of current approaches is the lack of a systematic approach for mapping each modelling concept of the problem domain (organizational models) onto its corresponding conceptual elements in the solution space (object-oriented conceptual models).
The main objective of this thesis is to provide a methodological approach that enables conceptual models and requirements models to be generated from organizational descriptions. The use of three distinct but complementary disciplines (organizational modelling, software requirements and conceptual modelling) is proposed to achieve this objective.
The thesis describes a requirements elicitation process that allows the user to create a business model representing the current situation of the business (early requirements). We consider that this model, which reflects the way business processes are currently implemented, is the right source for determining the expected functionality of the system to be developed. A process is also proposed for identifying, from the business model, the elements that are relevant for automation. As a result of this process, an intermediate model representing the requirements of the software system is generated.
Finally, we present a set of systematic guidelines for generating an object-oriented conceptual schema from the intermediate model. We also explore, as an alternative solution, the generation of a late requirements specification from the intermediate model. / Martínez Rebollar, A. (2008). Conceptual schemas generation from organizacional model in an automatic software production process [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/3304
365 |
Statistical approaches for natural language modelling and monotone statistical machine translation. Andrés Ferrer, Jesús. 11 February 2010
This thesis gathers some contributions to statistical pattern recognition and, more specifically, to several natural language processing tasks. Several well-known statistical techniques are revisited in this thesis, namely parameter estimation, loss function design and statistical modelling. These techniques are applied to several natural language processing tasks such as document classification, natural language modelling and statistical machine translation.
Regarding parameter estimation, we address the smoothing problem by proposing a new constrained-domain maximum likelihood estimation (CDMLE) technique. The CDMLE technique avoids the need for the smoothing step that causes the loss of the properties of the maximum likelihood estimator. This technique is applied to document classification with the naive Bayes classifier. Afterwards, the CDMLE technique is extended to leaving-one-out maximum likelihood estimation and applied to language model smoothing. The results obtained on several natural language modelling tasks show an improvement in terms of perplexity.
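One simple way to picture a constrained-domain maximum likelihood estimate (an illustrative reading only, not necessarily the exact formulation in the thesis) is to maximise the categorical likelihood subject to a lower bound on every probability, so that no event collapses to zero probability and no separate smoothing step is needed:

```python
# Hedged sketch: constrained-domain ML estimate of a categorical distribution,
# maximising sum_i n_i * log(p_i) subject to sum_i p_i = 1 and p_i >= eps.
# Cells whose unconstrained share falls below the floor sit at eps (KKT-style
# clamping); the rest share the remaining mass proportionally. Illustrative only.
import numpy as np

def cdmle_categorical(counts, eps):
    counts = np.asarray(counts, dtype=float)
    k = len(counts)
    assert k * eps <= 1.0, "floor too large to be feasible"
    p = np.full(k, eps)
    free = np.ones(k, dtype=bool)
    while free.any():
        mass = 1.0 - eps * (~free).sum()      # probability mass left for free cells
        total = counts[free].sum()
        if total > 0:
            prop = counts[free] / total * mass
        else:
            prop = np.full(free.sum(), mass / free.sum())
        clamp = prop < eps
        if not clamp.any():
            p[free] = prop
            break
        free[np.where(free)[0][clamp]] = False  # those cells stay at the floor eps
    return p

p = cdmle_categorical([50, 30, 0, 1], eps=0.01)
print(p, p.sum())   # the zero-count event keeps probability eps, total is still 1.0
```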
Regarding the loss function, the design of loss functions other than the 0-1 loss is carefully studied. The study focuses on those loss functions that, while retaining a decoding complexity similar to that of the 0-1 loss, provide greater flexibility. We analyse and present several loss functions on several machine translation tasks and with several translation models. We also analyse some translation rules that stand out for practical reasons, such as the direct translation rule, and we likewise deepen the understanding of log-linear models, which are in fact particular cases of loss functions.
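For reference, the general decision-theoretic setting behind that study (standard Bayes decision theory, not a result specific to the thesis): given a loss function \(\ell(y, y')\), the minimum-risk decision rule and its 0-1 special case are

```latex
\hat{y}(x) = \arg\min_{y} \sum_{y'} \ell(y, y')\, p(y' \mid x),
\qquad
\ell_{0\text{-}1}(y,y') = 1 - \delta(y,y') \;\Rightarrow\; \hat{y}(x) = \arg\max_{y}\, p(y \mid x).
```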
Finally, several monotone translation models based on statistical modelling techniques are proposed. / Andrés Ferrer, J. (2010). Statistical approaches for natural language modelling and monotone statistical machine translation [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/7109
366 |
METHODOLOGICAL INTEGRATION OF COMMUNICATION ANALYSIS INTO A MODEL-DRIVEN SOFTWARE DEVELOPMENT FRAMEWORK. España Cubillo, Sergio. 27 January 2012
It is widely recognised that information and communication technologies development is a risky activity. Despite the advances in software engineering, many software development projects fail to satisfy the clients' needs, to deliver on time or to stay within budget. Among the various factors that are considered to cause failure, an inadequate requirements practice stands out. Model-driven development is a relatively recent paradigm with the potential to solve some of the long-standing problems of software development. Models play a paramount role in model-driven development: several modelling layers allow defining views of the system under construction at different abstraction levels, and model transformations facilitate the transition from one layer to the other. However, how to effectively integrate requirements engineering within model-driven development is still an open research challenge. This thesis integrates Communication Analysis, a communication-oriented business process modelling and requirements engineering method for information systems development, and the OO-Method, an object-oriented model-driven software development method that provides automatic software generation from conceptual models. We first provide a detailed specification of Communication Analysis intended to facilitate the integration; among other improvements to the method, we build an ontology-based set of concept definitions in which to ground the method, we provide precise methodological guidelines, we create a metamodel for the modelling languages included in the method, and we provide tools to support the creation of Communication Analysis requirements models. Then we perform the integration by providing a technique to systematically derive OO-Method conceptual models from Communication Analysis requirements models. The derivation technique is offered in two flavours: a set of rules to be manually applied by a human analyst, and an ATL model transformation that automates this task. / España Cubillo, S. (2011). METHODOLOGICAL INTEGRATION OF COMMUNICATION ANALYSIS INTO A MODEL-DRIVEN SOFTWARE DEVELOPMENT FRAMEWORK [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/14572
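As a purely illustrative sketch of what a model-to-model derivation rule of this kind can look like (written in Python rather than ATL, and with hypothetical metamodel classes and mapping, not the actual Communication Analysis to OO-Method rules defined in the thesis):

```python
# Hypothetical sketch of one model-to-model derivation rule, in the spirit of
# deriving an object-oriented conceptual model from a communication-oriented
# requirements model. Metamodel classes and the mapping are illustrative
# assumptions, not the rules defined in the thesis.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CommunicativeEvent:            # source model element (requirements side)
    name: str
    message_fields: List[str]        # fields of the message structure

@dataclass
class ConceptualClass:               # target model element (conceptual schema side)
    name: str
    attributes: List[str] = field(default_factory=list)

def derive_class(event: CommunicativeEvent) -> ConceptualClass:
    """Toy rule: one communicative event -> one class; message fields -> attributes."""
    return ConceptualClass(name=event.name.title().replace(" ", ""),
                           attributes=list(event.message_fields))

event = CommunicativeEvent("client places order", ["client id", "date", "amount"])
print(derive_class(event))
```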
367 |
Algoritmos de detección y filtrado de imágenes para arquitecturas multicore y manycore. Sánchez Cervantes, María Guadalupe. 15 May 2013
This thesis addresses the removal of impulsive, Gaussian and speckle noise in colour and greyscale images. A particular case is noise removal in medical images.
Some filtering methods are computationally expensive, even more so when the images are large. In order to reduce the computational cost of these methods, this thesis uses hardware that supports parallel processing, namely CPU cores in multicore processors and GPUs with manycore processors. In the parallel CUDA implementations, some features are configured in order to optimise the processing of the application on the GPUs.
This thesis studies, on the one hand, the computational performance obtained in the process of removing impulsive and uniform noise. On the other hand, the quality obtained after the filtering process is evaluated. The computational performance has been obtained by parallelising the algorithms on CPU and/or GPU. To obtain good quality in the filtered image, the corrupt pixels are detected first and then only the pixels detected as corrupt are filtered. Regarding the removal of Gaussian and speckle noise, the analysis of the nonlinear diffusion filter has proved to be effective in this case.
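As a hedged illustration of that detect-then-filter idea for impulsive (salt-and-pepper) noise (a simple median-deviation detector plus median replacement; not the specific detectors or CUDA kernels developed in the thesis):

```python
# Hedged sketch of a detect-then-filter scheme for impulsive noise on a
# greyscale image: a pixel is flagged as corrupt if it deviates strongly from
# the median of its 3x3 neighbourhood, and only flagged pixels are replaced.
import numpy as np
from scipy.ndimage import median_filter

def detect_and_filter(img, threshold=60):
    img = img.astype(np.float32)
    med = median_filter(img, size=3)            # 3x3 neighbourhood median
    corrupt = np.abs(img - med) > threshold     # detection step
    out = img.copy()
    out[corrupt] = med[corrupt]                 # filter only detected pixels
    return out.astype(np.uint8), corrupt

rng = np.random.default_rng(0)
clean = np.full((64, 64), 128, dtype=np.uint8)
noisy = clean.copy()
salt = rng.random(clean.shape) < 0.05           # 5% impulsive noise
noisy[salt] = rng.choice([0, 255], size=salt.sum())
restored, mask = detect_and_filter(noisy)
print(mask.sum(), np.abs(restored.astype(int) - clean.astype(int)).mean())
```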
The algorithms used to remove impulsive and uniform noise from the images, and their sequential and parallel implementations, have been evaluated experimentally in terms of execution time (speed-up) and efficiency on three high-performance computing systems. The results have shown that the parallel implementations reduce the sequential execution times considerably.
Finally, this thesis proposes a method to efficiently reduce the noise in images without prior information about the type of noise they contain. / Sánchez Cervantes, MG. (2013). Algoritmos de detección y filtrado de imágenes para arquitecturas multicore y manycore [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/28854
368 |
Achieving Autonomic Web Service Compositions with Models at Runtime. Alférez Salinas, Germán Harvey. 26 December 2013
Over the last years, Web services have become increasingly popular because they allow businesses to share data and business process (BP) logic through a programmatic interface across networks. In order to reach the full potential of Web services, they can be combined to achieve specific functionalities.
Web services run in complex contexts where arising events may compromise the quality of the system (e.g. a sudden security attack). As a result, it is desirable to count on mechanisms to adapt Web service compositions (or simply service compositions) according to problematic events in the context. Since critical systems may require prompt responses, manual adaptations are unfeasible in large and intricate service compositions. Thus, it is suitable to have autonomic mechanisms to guide their self-adaptation. One way to achieve this is by implementing variability constructs at the language level. However, this approach may become tedious, difficult to manage, and error-prone as the number of configurations for the service composition grows.
The goal of this thesis is to provide a model-driven framework to guide autonomic adjustments of context-aware service compositions. This framework spans design time and runtime to face arising known and unknown context events (i.e., foreseen and unforeseen at design time) in the closed and open worlds respectively.
At design time, we propose a methodology for creating the models that guide autonomic changes. Since Service-Oriented Architecture (SOA) lacks support for systematic reuse of service operations, we represent service operations as Software Product Line (SPL) features in a variability model. As a result, our approach can support the construction of service composition families in mass-production environments. In order to reach optimum adaptations, the variability model and its possible configurations are verified at design time using Constraint Programming (CP).
At runtime, when problematic events arise in the context, the variability model is leveraged to guide autonomic changes of the service composition. The activation and deactivation of features in the variability model result in changes in a composition model that abstracts the underlying service composition. Changes in the variability model are reflected into the service composition by adding or removing fragments of Web Services Business Process Execution Language (WS-BPEL) code, which are deployed at runtime. Model-driven strategies guide the safe migration of running service composition instances. Under the closed-world assumption, the possible context events are fully known at design time. These events will eventually trigger the dynamic adaptation of the service composition. Nevertheless, it is difficult to foresee all the possible situations arising in the uncertain contexts where service compositions run. Therefore, we extend our framework to cover the dynamic evolution of service compositions to deal with unexpected events in the open world. If model adaptations cannot solve the uncertainty, the supporting models self-evolve according to abstract tactics that preserve the expected requirements. / Alférez Salinas, GH. (2013). Achieving Autonomic Web Service Compositions with Models at Runtime [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/34672
369 |
Multimodal interactive structured prediction. Alabau Gonzalvo, Vicente. 27 January 2014
This thesis presents scientific contributions to the field of multimodal interactive structured prediction (MISP). The aim of MISP is to reduce the human effort required to supervise an automatic output, in an efficient and ergonomic way. Hence, this thesis focuses on the two aspects of MISP systems. The first aspect, which refers to the interactive part of MISP, is the study of strategies for efficient human-computer collaboration to produce error-free outputs. Multimodality, the second aspect, deals with modalities of communication with the computer that are more ergonomic than keyboard and mouse.
To begin with, in sequential interaction the user is assumed to supervise the output from left to right, so that errors are corrected in sequential order. We study the problem under the decision theory framework and define an optimum decoding algorithm. The optimum algorithm is compared to the usually applied, standard approach. Experimental results on several tasks suggest that the optimum algorithm is slightly better than the standard algorithm.
In contrast to sequential interaction, in active interaction it is the system that decides what should be given to the user for supervision. On the one hand, user supervision can be reduced if the user is required to supervise only the outputs that the system expects to be erroneous. In this respect, we define a strategy that retrieves the outputs with the highest expected error first. Moreover, we prove that this strategy is optimum under certain conditions, which is validated by experimental results. On the other hand, if the goal is to reduce the number of corrections, active interaction works by selecting elements one by one, e.g., words of a given output, to be supervised by the user. For this case, several strategies are compared. Unlike the previous case, the strategy that performs better is to choose the element with the highest confidence, which coincides with the findings of the optimum algorithm for sequential interaction. However, this also suggests that minimizing effort and supervision are contradictory goals.
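A minimal sketch of those two active-interaction orderings (the confidence scores and ordering policies below are illustrative placeholders, not the exact criteria proved optimum in the thesis):

```python
# Hedged sketch of two active-interaction orderings over a batch of automatic
# outputs: (a) supervise whole outputs with the highest expected error first,
# (b) supervise individual words with the highest confidence first.
outputs = [
    {"id": "s1", "words": [("the", 0.95), ("cat", 0.40), ("sleeps", 0.80)]},
    {"id": "s2", "words": [("a", 0.99), ("dog", 0.97), ("barks", 0.93)]},
]

def expected_error(out):
    # expected number of wrong words under independence: sum of (1 - confidence)
    return sum(1.0 - c for _, c in out["words"])

# (a) whole-output supervision: highest expected error first
by_expected_error = sorted(outputs, key=expected_error, reverse=True)
print([o["id"] for o in by_expected_error])            # ['s1', 's2']

# (b) word-level supervision: highest-confidence words first (fewer corrections)
words = [(o["id"], w, c) for o in outputs for w, c in o["words"]]
by_confidence = sorted(words, key=lambda t: t[2], reverse=True)
print(by_confidence[:3])
```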
With respect to the multimodality aspect, this thesis delves into techniques to make multimodal systems more robust. To achieve that, multimodal systems are improved by providing contextual information of the application at hand. First, we study how to integrate e-pen interaction in a machine translation task. We contribute to the state of the art by leveraging the information from the source sentence. Several strategies are compared, basically grouped into two approaches: those inspired by word-based translation models, and n-grams generated from a phrase-based system. The experiments show that the former outperforms the latter for this task. Furthermore, the results present remarkable improvements over not using contextual information. Second, similar experiments are conducted on a speech-enabled interface for interactive machine translation. The improvements over the baseline are also noticeable. However, in this case, phrase-based models perform much better than word-based models. We attribute that to the fact that acoustic models are poorer estimations than morphological models and, thus, they benefit more from the language model. Finally, similar techniques are proposed for the dictation of handwritten documents. The results show that speech and handwriting recognition can be combined in an effective way.
Finally, an evaluation with real users is carried out to compare an interactive machine translation prototype with a post-editing prototype. The results of the study reveal that users are very sensitive to the usability aspects of the user interface. Therefore, usability is a crucial aspect to consider in a human evaluation, since it can hinder the real benefits of the technology being evaluated. Fortunately, once usability problems are fixed, the evaluation indicates that users are more favourable to working with the interactive machine translation system than with the post-editing system. / Alabau Gonzalvo, V. (2014). Multimodal interactive structured prediction [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/35135 / Premios Extraordinarios de tesis doctorales
370 |
On the effective deployment of current machine translation technology. González Rubio, Jesús. 03 June 2014
Machine translation is a fundamental technology that is gaining more importance each day in our multilingual society. Companies and individuals are turning their attention to machine translation since it dramatically cuts down their expenses on translation and interpreting. However, the output of current machine translation systems is still far from the quality of translations generated by human experts. The overall goal of this thesis is to narrow this quality gap by developing new methodologies and tools that enable a broader and more efficient deployment of machine translation technology.
We start by proposing a new technique to improve the quality of the translations generated by fully-automatic machine translation systems. The key insight of our approach is that different translation systems, implementing different approaches and technologies, can exhibit different strengths and limitations. Therefore, a proper combination of the outputs of such different systems has the potential to produce translations of improved quality. We present minimum Bayes risk system combination, an automatic approach that detects the best parts of the candidate translations and combines them to generate a consensus translation that is optimal with respect to a particular performance metric. We thoroughly describe the formalization of our approach as a weighted ensemble of probability distributions and provide efficient algorithms to obtain the optimal consensus translation according to the widespread BLEU score. Empirical results show that the proposed approach is indeed able to generate statistically better translations than the provided candidates. Compared to other state-of-the-art system combination methods, our approach achieves similar performance while not requiring any additional data beyond the candidate translations.
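A hedged, selection-level sketch of the minimum Bayes risk idea (picking the candidate with the highest expected sentence BLEU against the other candidates, under uniform weights; the thesis goes further and builds a new consensus translation from the best parts of the candidates):

```python
# Hedged sketch of minimum Bayes risk selection (equivalently, maximum expected
# gain) over a set of candidate translations, using sentence-level BLEU as the
# gain and uniform candidate weights. Illustrative simplification only.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def mbr_select(candidates, weights=None):
    weights = weights or [1.0 / len(candidates)] * len(candidates)
    smooth = SmoothingFunction().method1
    def expected_gain(hyp):
        return sum(w * sentence_bleu([ref.split()], hyp.split(), smoothing_function=smooth)
                   for ref, w in zip(candidates, weights) if ref is not hyp)
    return max(candidates, key=expected_gain)

candidates = [
    "the committee approved the proposal yesterday",
    "the committee has approved the proposal yesterday",
    "committee approved proposal the yesterday",
]
print(mbr_select(candidates))
```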
Then, we focus our attention on how to improve the utility of automatic translations for the end-user of the system. Since automatic translations are not perfect, a desirable feature of machine translation systems is the ability to predict at run-time the quality of the generated translations. Quality estimation is usually addressed as a regression problem where a quality score is predicted from a set of features that represents the translation. However, although the concept of translation quality is intuitively clear, there is no consensus on which features actually account for it. As a consequence, quality estimation systems for machine translation have to utilize a large number of weak features to predict translation quality. This involves several learning problems related to feature collinearity and ambiguity, and to the "curse" of dimensionality. We address these challenges by adopting a two-step training methodology. First, a dimensionality reduction method computes, from the original features, the reduced set of features that better explains translation quality. Then, a prediction model is built from this reduced set to finally predict the quality score. We study various reduction methods previously used in the literature and propose two new ones based on statistical multivariate analysis techniques. More specifically, the proposed dimensionality reduction methods are based on partial least squares regression. The results of a thorough experimentation show that the quality estimation systems estimated following the proposed two-step methodology obtain better prediction accuracy than systems estimated using all the original features. Moreover, one of the proposed dimensionality reduction methods obtained the best prediction accuracy with only a fraction of the original features. This feature reduction ratio is important because it implies a dramatic reduction of the operating times of the quality estimation system.
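A hedged sketch of that two-step training scheme using partial least squares for the reduction step (the features, quality scores and the choice of ridge regression as the second-step predictor are illustrative assumptions, not the exact setup of the thesis):

```python
# Hedged sketch of two-step quality estimation training:
# (1) reduce the original feature set with partial least squares (PLS),
# (2) fit a regressor on the reduced features to predict a quality score.
# Synthetic placeholder data; the second-step regressor is an assumption.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))                            # 200 translations, 40 weak features
y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=200)     # synthetic quality scores

pls = PLSRegression(n_components=5).fit(X, y)             # step 1: dimensionality reduction
X_reduced = pls.transform(X)
reg = Ridge(alpha=1.0).fit(X_reduced, y)                  # step 2: prediction model

print(reg.predict(pls.transform(X[:5])))                  # predicted quality scores
```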
An alternative use of current machine translation systems is to embed them within an interactive editing environment where the system and a human expert collaborate to generate error-free translations. This interactive machine translation approach has been shown to reduce the supervision effort of the user in comparison to the conventional decoupled post-editing approach. However, interactive machine translation considers the translation system as a passive agent in the interaction process. In other words, the system only suggests translations to the user, who then makes the necessary supervision decisions. As a result, the user is bound to exhaustively supervise every suggested translation. This passive approach ensures error-free translations but it also demands a large amount of supervision effort from the user.
Finally, we study different techniques to improve the productivity of current interactive machine translation systems. Specifically, we focus on the development of alternative approaches where the system becomes an active agent in the interaction process. We propose two different active approaches. On the one hand, we describe an active interaction approach where the system informs the user about the reliability of the suggested translations. The hope is that this information may help the user to locate translation errors, thus improving the overall translation productivity. We propose different scores to measure translation reliability at the word and sentence levels and study the influence of such information on the productivity of an interactive machine translation system. Empirical results show that the proposed active interaction protocol is able to achieve a large reduction in supervision effort while still generating translations of very high quality. On the other hand, we study an active learning framework for interactive machine translation. In this case, the system is not only able to inform the user of which suggested translations should be supervised, but it is also able to learn from the user-supervised translations to improve its future suggestions. We develop a value-of-information criterion to select which automatic translations undergo user supervision. However, given its high computational complexity, in practice we study different selection strategies that approximate this optimal criterion. Results of a large-scale experimentation show that the proposed active learning framework is able to obtain better compromises between the quality of the generated translations and the human effort required to obtain them. Moreover, in comparison to a conventional interactive machine translation system, our proposal obtained translations of twice the quality with the same supervision effort. / González Rubio, J. (2014). On the effective deployment of current machine translation technology [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/37888
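As a hedged sketch of such an active-learning loop (the sentence-level confidence function, the fixed supervision budget and all stub components below are illustrative simplifications of the value-of-information criterion and its approximations studied in the thesis):

```python
# Hedged sketch of an active-learning loop for interactive MT: per batch, only
# the translations the system is least confident about are sent to the user;
# the corrected pairs are then used to update the system. All components are
# illustrative placeholders (no real MT system is involved).

def confidence(translation):
    # placeholder sentence-level confidence; a real system would use model scores
    return 1.0 / (1.0 + len(translation.split()))

def active_learning_batch(source_batch, translate, ask_user, update, budget=2):
    hypotheses = [(src, translate(src)) for src in source_batch]
    # approximate value of information: supervise the least confident hypotheses
    ranked = sorted(hypotheses, key=lambda pair: confidence(pair[1]))
    supervised, automatic = ranked[:budget], ranked[budget:]
    corrected = [(src, ask_user(src, hyp)) for src, hyp in supervised]
    update(corrected)                     # learn from user-supervised translations
    return corrected + automatic          # final output of the batch

# toy usage with stub components
out = active_learning_batch(
    ["hola mundo", "buenos dias", "adios"],
    translate=lambda s: s.upper(),
    ask_user=lambda s, h: h.lower(),
    update=lambda pairs: None,
    budget=1,
)
print(out)
```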