1071 |
Methods for 3D Structured Light Sensor Calibration and GPU Accelerated Colormap / Kurella, Venu / January 2018
In manufacturing, metrological inspection is a time-consuming process.
The higher the required precision in inspection, the longer the
inspection time. This is due to both slow devices that collect
measurement data and slow computational methods that process the data.
The goal of this work is to propose methods to speed up some of these
processes. Conventional measurement devices like Coordinate Measuring
Machines (CMMs) have high precision but low measurement speed while
new digitizer technologies have high speed but low precision. Using
these devices in synergy gives a significant improvement in the
measurement speed without loss of precision. The method of synergistic
integration of an advanced digitizer with a CMM is discussed.
Computational aspects of the inspection process are addressed next. Once
a part is measured, measurement data is compared against its
model to check for tolerances. This comparison is a time-consuming
process on conventional CPUs. We developed and benchmarked some GPU accelerations. Finally, naive data fitting methods can produce misleading results in cases with non-uniform data. Weighted total least-squares methods can compensate for non-uniformity. We show how they can be accelerated with GPUs, using plane fitting as an example. / Thesis / Doctor of Philosophy (PhD)
|
1072 |
An Evaluation of Technological, Organizational and Environmental Determinants of Emerging Technologies Adoption Driving SMEs’ Competitive Advantage / Dobre, Marius / January 2022
This research evaluates the technological, organizational, and environmental determinants of emerging technologies adoption, represented by Artificial Intelligence (AI) and the Internet of Things (IoT), driving SMEs' competitive advantage, within a resource-based view (RBV) theoretical approach supported by the technological-organizational-environmental (TOE) framework. The current literature on SMEs' competitive advantage as an outcome of emerging technologies in the technological, organisational, and environmental contexts presents models focused on the individual components of these contexts. No models in the literature represent the TOE framework as an integrated structure with gradual levels of complexity that would allow incremental evaluation of the business context in support of decisions on adopting emerging technologies to sustain the firm's competitive advantage. This research gap is addressed by introducing a new concept, IT resource-based renewal, underpinned by the RBV and supported by the TOE framework, which provides a holistic understanding of the SME's strategic renewal decision through information technology. This is achieved through a complex measurement model with four-level constructs, leading to a parsimonious structural model that evaluates the relationships between IT resource-based renewal and emerging technologies adoption driving SMEs' competitive advantage. The model confirms a positive association between IT resource-based renewal and emerging technologies adoption, and between IT resource-based renewal and SME competitive advantage, in the SME managers' model; the outcomes of the SME owners' model, by contrast, were found not to support emerging technologies adoption driving SME competitive advantage.
As methodology, PLS-SEM is used for its capability to assess complex paths among model variables. The analysis covers three models: one for the full sample, and two subsequent ones for owners and managers, respectively, as SME decision makers, with data collected through a web-based survey in Canada, the UK, and the US that yielded 510 usable responses. The theoretical contribution of this research is the introduction of the IT resource-based renewal concept, which integrates the RBV perspective and the TOE framework to support an organization's decision on adopting emerging technologies that drive SMEs' competitive advantage. As practical implications, this thesis provides SMEs with a reference framework for adopting emerging technologies, offering SME managers and owners a comprehensive model of the hierarchical factors contributing to SMEs' competitive advantage as an outcome of AI and IoT adoption. This research makes an original contribution to the enterprise management, information systems adoption, and SME competitive advantage literature, with an empirical approach that verifies a model of the determinants of emerging technologies adoption driving SMEs' competitive advantage.
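Purely as an illustration of the PLS-SEM flavor of analysis described above (a sketch under assumed variable names and synthetic data, with equal-weight composites standing in for the full PLS weighting scheme; it is not the author's model), path coefficients between construct scores can be estimated as follows:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 510  # usable survey responses reported in the abstract

# Hypothetical Likert-style indicator items for three constructs:
# IT resource-based renewal (RN), emerging technologies adoption (AD),
# and SME competitive advantage (CA).
RN_items = rng.normal(4, 1, (n, 4))
AD_items = 0.6 * RN_items.mean(1, keepdims=True) + rng.normal(0, 1, (n, 3))
CA_items = 0.5 * AD_items.mean(1, keepdims=True) + rng.normal(0, 1, (n, 3))

def composite(items):
    """Equal-weight, standardized composite score (stand-in for PLS weights)."""
    s = items.mean(axis=1)
    return (s - s.mean()) / s.std()

rn, ad, ca = composite(RN_items), composite(AD_items), composite(CA_items)

# Structural paths estimated by least squares on standardized composites.
beta_rn_ad = np.linalg.lstsq(rn[:, None], ad, rcond=None)[0][0]
beta_ca = np.linalg.lstsq(np.column_stack([rn, ad]), ca, rcond=None)[0]
print(f"RN -> AD: {beta_rn_ad:.2f}")
print(f"RN -> CA: {beta_ca[0]:.2f}, AD -> CA: {beta_ca[1]:.2f}")
```

A full PLS-SEM treatment would additionally iterate the indicator weights and assess measurement reliability and validity, which dedicated PLS-SEM software automates.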
|
1073 |
On the effective deployment of current machine translation technology / González Rubio, Jesús / 03 June 2014
Machine translation is a fundamental technology that is gaining more importance
each day in our multilingual society. Companies and individuals are
turning their attention to machine translation since it dramatically cuts down
their expenses on translation and interpreting. However, the output of current
machine translation systems is still far from the quality of translations generated
by human experts. The overall goal of this thesis is to narrow down
this quality gap by developing new methodologies and tools that enable a
broader and more efficient deployment of machine translation technology.
We start by proposing a new technique to improve the quality of the
translations generated by fully-automatic machine translation systems. The
key insight of our approach is that different translation systems, implementing
different approaches and technologies, can exhibit different strengths and
limitations. Therefore, a proper combination of the outputs of such different
systems has the potential to produce translations of improved quality.
We present minimum Bayes-risk system combination, an automatic approach
that detects the best parts of the candidate translations and combines them
to generate a consensus translation that is optimal with respect to a particular
performance metric. We thoroughly describe the formalization of our
approach as a weighted ensemble of probability distributions and provide efficient
algorithms to obtain the optimal consensus translation according to the
widespread BLEU score. Empirical results show that the proposed approach
is indeed able to generate statistically better translations than the provided
candidates. Compared to other state-of-the-art system combination methods,
our approach achieves similar performance while requiring no additional data
beyond the candidate translations.
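To convey the core idea (a hedged sketch of the selection variant of minimum Bayes-risk combination; the thesis goes further and assembles a consensus translation from parts of the candidates, and uses BLEU rather than the toy unigram utility assumed here):

```python
from collections import Counter

def utility(hyp: str, ref: str) -> float:
    """Toy stand-in for BLEU: unigram precision of `hyp` against `ref`."""
    h, r = Counter(hyp.split()), Counter(ref.split())
    overlap = sum(min(c, r[w]) for w, c in h.items())
    return overlap / max(1, sum(h.values()))

def mbr_select(candidates: list[str], weights: list[float]) -> str:
    """Pick the candidate with maximal expected utility (minimal Bayes risk)
    under the probability distribution over candidates given by `weights`."""
    z = sum(weights)
    def expected_utility(c):
        return sum((w / z) * utility(c, other)
                   for other, w in zip(candidates, weights))
    return max(candidates, key=expected_utility)

# Outputs of three hypothetical MT systems, weighted equally.
cands = ["the cat sits on the mat",
         "the cat sat on the mat",
         "a cat sat on a mat"]
print(mbr_select(cands, [1.0, 1.0, 1.0]))
```

The selected hypothesis is the one most similar, on average, to the other weighted candidates, which is the consensus intuition behind the approach.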
Then, we focus our attention on how to improve the utility of automatic
translations for the end-user of the system. Since automatic translations are
not perfect, a desirable feature of machine translation systems is the ability
to predict at run-time the quality of the generated translations. Quality estimation
is usually addressed as a regression problem where a quality score
is predicted from a set of features that represents the translation. However, although the concept of translation quality is intuitively clear, there is no
consensus on which are the features that actually account for it. As a consequence,
quality estimation systems for machine translation have to utilize
a large number of weak features to predict translation quality. This involves
several learning problems related to feature collinearity, ambiguity, and
the 'curse' of dimensionality. We address these challenges by adopting
a two-step training methodology. First, a dimensionality reduction method
computes, from the original features, the reduced set of features that better
explains translation quality. Then, a prediction model is built from this
reduced set to finally predict the quality score. We study various reduction
methods previously used in the literature and propose two new ones based on
statistical multivariate analysis techniques. More specifically, the proposed dimensionality
reduction methods are based on partial least squares regression.
The results of a thorough experimentation show that the quality estimation
systems estimated following the proposed two-step methodology obtain better
prediction accuracy than systems estimated using all the original features.
Moreover, one of the proposed dimensionality reduction methods obtained the
best prediction accuracy with only a fraction of the original features. This
feature reduction ratio is important because it implies a dramatic reduction
of the operating times of the quality estimation system.
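A minimal sketch of the two-step methodology on synthetic data (assuming scikit-learn; the feature dimensions and scores are illustrative, with PLS regression playing the dimensionality-reduction role described above):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n, p = 1000, 80                     # sentences x weak QE features (illustrative)
X = rng.normal(size=(n, p))
true_w = np.zeros(p)
true_w[:5] = [1.5, -1.0, 0.8, 0.5, -0.3]          # only a few informative features
y = X @ true_w + rng.normal(scale=0.5, size=n)    # synthetic quality scores

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: PLS finds a small set of latent components predictive of quality.
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
Z_tr, Z_te = pls.transform(X_tr), pls.transform(X_te)

# Step 2: the final predictor is fitted on the reduced representation only.
model = LinearRegression().fit(Z_tr, y_tr)
print("Held-out R^2:", round(model.score(Z_te, y_te), 3))
```

Operating on five latent components instead of eighty raw features is what produces the reduction in operating time mentioned above.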
An alternative use of current machine translation systems is to embed them
within an interactive editing environment where the system and a human expert
collaborate to generate error-free translations. This interactive machine
translation approach has been shown to reduce the supervision effort of the user in
comparison to the conventional decoupled post-editing approach. However,
interactive machine translation considers the translation system as a passive
agent in the interaction process. In other words, the system only suggests translations
to the user, who then makes the necessary supervision decisions. As
a result, the user is bound to exhaustively supervise every suggested translation.
This passive approach ensures error-free translations but it also demands
a large amount of supervision effort from the user.
Finally, we study different techniques to improve the productivity of current
interactive machine translation systems. Specifically, we focus on the development
of alternative approaches where the system becomes an active agent
in the interaction process. We propose two different active approaches. On the
one hand, we describe an active interaction approach where the system informs
the user about the reliability of the suggested translations. The hope is that
this information may help the user to locate translation errors thus improving
the overall translation productivity. We propose different scores to measure translation reliability at the word and sentence levels and study the influence
of such information in the productivity of an interactive machine translation
system. Empirical results show that the proposed active interaction protocol
is able to achieve a large reduction in supervision effort while still generating
translations of very high quality. On the other hand, we study an active learning
framework for interactive machine translation. In this case, the system is
not only able to inform the user of which suggested translations should be
supervised, but it is also able to learn from the user-supervised translations to
improve its future suggestions. We develop a value-of-information criterion to
select which automatic translations undergo user supervision. However, given
its high computational complexity, in practice we study different selection
strategies that approximate this optimal criterion. Results of a large scale experimentation
show that the proposed active learning framework is able to
obtain better compromises between the quality of the generated translations
and the human effort required to obtain them. Moreover, in comparison to
a conventional interactive machine translation system, our proposal obtained
translations of twice the quality with the same supervision effort. / González Rubio, J. (2014). On the effective deployment of current machine translation technology [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/37888
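As a hedged sketch of the active-learning protocol (the callables below are assumptions standing in for the system's components, and simple confidence thresholding replaces the thesis's value-of-information criterion):

```python
def active_learning_loop(sentences, translate, confidence, ask_user, update,
                         threshold=0.7):
    """Hypothetical interactive MT loop. `translate`, `confidence`, `ask_user`
    and `update` are callables supplied by the surrounding system."""
    outputs = []
    for src in sentences:
        hyp = translate(src)
        if confidence(src, hyp) < threshold:
            hyp = ask_user(src, hyp)   # human supervises low-confidence output only
            update(src, hyp)           # system learns from the corrected translation
        outputs.append(hyp)
    return outputs
```

Raising the threshold trades more human effort for higher final quality, the compromise the abstract describes.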
|
1074 |
La plaza en la ciudad histórica. Análisis tipológico de la plaza histórica en la Región de Murcia. Criterios de intervención / Ródenas Cañada, José María / 05 May 2022
The subject of this thesis is the urban square in the context of the historic city, understood as an urban space historically configured and socially accepted as such. The study focuses on the square as a structuring element of the city, related to its constitution, origin, and organizing principles. Planning interventions on the historic city are analyzed in an attempt to identify criteria for intervention in the urban heritage that are applicable to professional practice, in a study focused on the administrative limits of the Region of Murcia. Within the framework of this study, the practice of urban planning is considered an activity in need of performance criteria beyond the legal and regulatory determinations of mandatory compliance. It is not about prescribing pre-made formulas, but about finding principles and foundations for a responsible urban practice.
This thesis also aims to disseminate the values of the artistic heritage, often mistreated by uncontrolled actions of the public administration, which is responsible for launching programs to recover public spaces as significant pieces of the historic city, while promoting a "spatial culture" that contributes to the aesthetic enrichment of our urban environment. / Ródenas Cañada, JM. (1994). La plaza en la ciudad histórica. Análisis tipológico de la plaza histórica en la Región de Murcia. Criterios de intervención [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/182559
|
1075 |
Multi-omic data integration study of immune system alterations in the development of minimal hepatic encephalopathy in patients with liver cirrhosis / Rubio Martínez-Abarca, María Teresa / 01 September 2022
The main objective of this work was to understand the immunological alterations associated with the peripheral inflammation that trigger minimal hepatic encephalopathy (MHE) in patients with cirrhosis. These changes can be monitored through the signaling cascades of different immune system cell types. In a preliminary study, changes in gene expression (transcriptomics), plasma metabolites (metabolomics), and a panel of extracellular cytokines were analyzed in blood samples from patients with cirrhosis with and without MHE. Transcriptomics analysis supported the hypothesis that alterations in the Th1/Th2 and Th17 lymphocyte cell populations are the major drivers of MHE. Cluster analysis of serum molecules highlighted 6 groups of chemically similar compounds. We also developed a multi-omic integration analysis pipeline to detect covariation between intra- and extracellular components that could contribute to the induction of cognitive impairment. Results of this integrative analysis suggested a relationship between the cytokines CCL20, CX3CL1, CXCL13, IL-15, IL-22, and IL-6 and altered chemotaxis, as well as a link between long-chain unsaturated phospholipids and increased fatty acid transport and prostaglandin production.
A shift in peripheral inflammation in patients with MHE, mainly orchestrated by CD4+ T cells, had been proposed in previous studies as a critical factor that triggers cognitive impairment. The second part of this thesis focused on understanding the pathways and mechanisms by which alterations in CD4+ lymphocytes may contribute to peripheral inflammation in MHE. Thus, the expression levels of genes, transcription factors, and miRNAs were analyzed in this lymphocyte subtype by high-throughput sequencing (RNA-seq and miRNA-seq). Separate analysis of each dataset showed mRNA and miRNA expression differences and altered biological pathways in CD4+ lymphocytes between patients with cirrhosis with and without MHE. We found alterations in 167 mRNAs and 20 pathways in patients with MHE, including the Toll-like receptor, IL-17 signaling, and histidine and tryptophan metabolism pathways. In addition, 13 miRNAs and 7 transcription factors presented alterations in patients with MHE. We used public databases to determine the target genes of these regulatory molecules and found that increased miR-494-39, miR-656-3p, and miR-130b-3p expression may modulate TNFAIP3 (A20) and ZFP36 (TTP) to increase levels of pro-inflammatory cytokines such as IL-17 and TNFα.
Finally, we present a case study of the T-cell receptor (TCR) repertoire profiles of control patients and patients with cirrhosis with and without MHE, obtained from the bulk RNA-seq dataset previously generated from isolated CD4+ T cells. Given that RNA-seq experiments contain the TCR genes in a fraction of the data, repertoire analysis is possible without the need to generate additional data. After read alignment to the VDJ genes was performed with the MiXCR tool, we successfully recovered 498-1,114 distinct TCR beta chains per patient. Results showed fewer public clones (clonal convergence), higher diversity (clonal expansion), and elevated sequence architecture similarity within repertoires, independently of the immune status of the 3 groups of patients. Additionally, we detected a significant overrepresentation of celiac disease and inflammatory bowel disease related TCRs in MHE patient repertoires. To the best of our knowledge, this is one of the few studies to have shown a step-by-step pipeline for the analysis of immune repertoires using whole transcriptome RNA-seq reads as source data.
In conclusion, our work identified potentially relevant molecular mechanisms of the changes in the immune system associated with the onset of MHE in patients with cirrhosis. Future work with a large sample cohort will be required to validate these results in terms of biomarker determination and the development of new, more effective treatments for MHE. / Rubio Martínez-Abarca, MT. (2022). Multi-omic data integration study of immune system alterations in the development of minimal hepatic encephalopathy in patients with liver cirrhosis [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/185116
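To illustrate the kind of repertoire summary statistics used in such comparisons (a sketch only; the tab-separated layout and the `cloneCount` column name assume a MiXCR-style clone table and are not quoted from the thesis):

```python
import csv
import math

def repertoire_stats(clone_table_path):
    """Diversity metrics from a clone table with one row per clonotype.
    Assumes a tab-separated file with a 'cloneCount' column (hypothetical
    layout; adjust the column name to the actual export)."""
    with open(clone_table_path, newline="") as fh:
        counts = [float(row["cloneCount"])
                  for row in csv.DictReader(fh, delimiter="\t")]
    total = sum(counts)
    freqs = [c / total for c in counts]
    shannon = -sum(f * math.log(f) for f in freqs if f > 0)
    richness = len(freqs)   # number of distinct clonotypes (e.g. TCR beta chains)
    clonality = 1 - shannon / math.log(richness) if richness > 1 else 0.0
    return {"clonotypes": richness, "shannon": shannon, "clonality": clonality}
```

Higher Shannon entropy indicates a more even, diverse repertoire, while clonality close to 1 signals dominance by a few expanded clones.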
|
1076 |
Quality by Design through multivariate latent structures / Palací López, Daniel Gonzalo / 14 January 2019
The present Ph.D. thesis is motivated by the growing need in most companies, and especially (but not solely) those in the pharmaceutical, chemical, food and bioprocess fields, to increase the flexibility in their operating conditions in order to reduce production costs while maintaining or even improving the quality of their products. To this end, this thesis focuses on the application of the concepts of Quality by Design for the exploitation and development of already existing methodologies, and the development of new algorithms aimed at the proper implementation of tools for the design of experiments, multivariate data analysis and process optimization, especially (but not only) in the context of mixture design.
Part I - Preface, where a summary of the research work done, the main goals it aimed at and their justification, are presented. Some of the most relevant concepts related to the developed work in subsequent chapters are also introduced, such as those regarding design of experiments or latent variable-based multivariate data analysis techniques.
Part II - Mixture design optimization, which reviews existing mixture design tools for the design of experiments and data analysis via traditional approaches, as well as some latent variable-based techniques such as Partial Least Squares (PLS) regression. A kernel-based extension of PLS for mixture design data analysis is also proposed, and the available methods are compared with one another. Finally, the software MiDAs is briefly presented; it was developed to give users a tool for easily approaching mixture design problems, constructing designs of experiments, and analysing the data with different methods for comparison.
Part III - Design Space and optimization through the latent space, where one of the fundamental issues within the Quality by Design philosophy is addressed: the definition of the so-called 'design space' (i.e. the subspace comprised of all possible combinations of process operating conditions, raw materials, etc. that guarantee obtaining a product meeting a required quality standard). The problem of properly defining the optimization problem is also tackled, not only as a tool for quality improvement but also for process exploration and flexibilisation purposes, in order to establish an efficient and robust optimization method in accordance with the nature of the different problems that may require such optimization.
Part IV - Epilogue, where final conclusions are drawn, future perspectives suggested, and annexes are included. / Palací López, DG. (2018). Quality by Design through multivariate latent structures [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/115489
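As a toy illustration of the design-space notion discussed in Part III (entirely synthetic; the factors, model form and specification limit are assumptions, not the thesis case studies), one can fit a response-surface model over two process factors and keep the combinations whose predicted quality meets the specification:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic historical data: two process factors and a measured quality response.
X = rng.uniform(0, 1, (200, 2))
y = (10 - 8 * (X[:, 0] - 0.6) ** 2 - 6 * (X[:, 1] - 0.4) ** 2
     + rng.normal(0, 0.2, 200))

def features(X):
    """Quadratic response-surface terms: 1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# Design space: grid points whose predicted quality meets the (assumed) spec.
g1, g2 = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
grid = np.column_stack([g1.ravel(), g2.ravel()])
inside = features(grid) @ beta >= 9.5
print(f"{inside.sum()} of {len(grid)} grid points lie in the design space")
```

Any operating point inside that region is predicted to meet the quality target, which is what gives the process its flexibility.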
|
1077 |
A multivariate approach to characterization of drug-like molecules, proteins and the interactions between them / Lindström, Anton / January 2008
A disease is often associated with a cascade reaction pathway involving proteins, co-factors and substrates. Hence, to treat the disease, elements of this pathway are often targeted using a therapeutic agent, a drug. Designing new drug molecules for use as therapeutic agents involves the application of methods collectively known as computer-aided molecular design, CAMD.
When the three-dimensional (3D) geometry of a macromolecular target (usually a protein) is known, structure-based CAMD is undertaken, and structural information about the target guides the design of new molecules and their interactions with the binding sites in targeted proteins. Many factors influence the interactions between the designed molecules and the binding sites of the target proteins, such as the physico-chemical properties of the molecule and the binding site, the flexibility of the protein and the ligand, and the surrounding solvent. In order for structure-based CAMD to be successful, two important aspects must be considered that take the abovementioned factors into account. These are: i) 3D fitting of molecules to the binding site of the target protein (like fitting pieces of a jigsaw puzzle), and ii) predicting the affinity of molecules to the protein binding site. The main objectives of the work underlying this thesis were: to create models for predicting the affinity between a molecule and a protein binding site; to refine the geometry of the molecule-protein complex derived in 3D fitting (also known as docking); to characterize the proteins and their secondary structure; and to evaluate the effects of different generalized-Born (GB) and Poisson-Boltzmann (PB) implicit solvent models on the refinement of the molecule-protein complex geometry created in the docking and on the prediction of the molecule-to-protein binding site affinity. A further objective was to apply chemometric methodologies for modeling and data analysis to all of the above. To summarize, this thesis presents methodologies and results applicable to structure-based CAMD. Results show that predictive chemometric models for molecule-to-binding site affinity could be created that yield results comparable to those of similar, commonly used methods. In addition, chemometric models could be created to model the effects of software settings on the molecule-protein complex geometry produced by molecule-to-binding site docking software. Furthermore, the use of chemometric models provided a more profound understanding of protein secondary structure descriptors. Refining the geometry of molecule-protein complexes created through molecule-to-binding site docking gave similar results for all investigated implicit solvent models, and the geometry was significantly improved in only a few examined cases (six of 30). However, using the geometry-refined molecule-protein complexes was highly valuable for the prediction of molecule-to-binding site affinity. Indeed, using the PB solvent model yielded improvements of 0.7 in correlation coefficients (R²) for binding affinity parameters of a set of Factor Xa protein drug molecules, relative to those obtained using the fitting software.
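A minimal sketch of such a chemometric affinity model (assuming scikit-learn; the synthetic descriptors and affinities are illustrative, not the thesis data):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)

# Hypothetical descriptor matrix: rows are ligands, columns are
# physico-chemical descriptors (size, lipophilicity, H-bond counts, ...).
n_ligands, n_desc = 120, 30
X = rng.normal(size=(n_ligands, n_desc))
w = rng.normal(size=n_desc) * (rng.random(n_desc) < 0.3)  # few relevant descriptors
y = X @ w + rng.normal(scale=0.5, size=n_ligands)         # pseudo binding affinities

X_tr, X_te, y_tr, y_te = X[:90], X[90:], y[:90], y[90:]
pls = PLSRegression(n_components=4).fit(X_tr, y_tr)
print("Held-out R^2:", round(pls.score(X_te, y_te), 2))
```

PLS is a natural workhorse for such models because it tolerates many collinear descriptors relative to the number of ligands.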
|
1078 |
A existência e a divulgação de ativos intangíveis em processos de fusões & aquisições na França e o desempenho empresarial financeiro / Feitosa, Evelyn Seligmann / 10 November 2011
The allocation of resources and the constant search for competitive differentiators to reach the best results are permanent business challenges. In the contemporary context, achieving superior performance reinforces a company's need to possess, and make good use of, scarce, valuable, non-substitutable and inimitable resources. These resources include brands, the customer base, the knowledge, abilities and competences of work teams, corporate culture, established partnerships and operational processes, among other intangible assets, usually arising from long and risky development processes. Mergers and acquisitions (M & A) arise, then, as an important strategic action and an alternative means to obtain and accelerate the accumulation of these resources within companies. That is the subject of this work, which discusses the importance of the intangible assets existing, and disclosed, prior to M & A transactions, their classification into various types, their measurement, and their impact on the resulting firm's long-term financial performance. The overall objective of this thesis was to analyze how this performance, at least 36 months after the event, is related to the existence, level of disclosure, and nature of the intangible assets of the organizations involved. One hundred and eighteen (118) companies were investigated across fifty-nine (59) cases of M & A that occurred in France between 1997 and 2007, in a pluralistic, multi-method study with qualitative and quantitative strands. Indicators of intangible-asset disclosure were built by applying content analysis to the financial and accounting reports provided by the companies prior to the events, and financial indicators (proxies) for the existence of intangibles were calculated. These indicators were first compared with each other, and their explanatory power was then assessed against financial ratios of growth and profitability (for the corporation and its shareholders), the analyzed dimensions of financial performance. Several statistical methods were used, spanning multivariate data analysis (correlations, factor analysis, multiple regressions) and structural equation modeling (SEM) via Partial Least Squares (PLS). In total, twelve statistically significant models were established to express the relationships among the constructs examined. The best results were achieved by the models developed with variables of semantic origin rather than those using financial indicators only. The results obtained in this thesis indicate positive relationships between the existence and disclosure of intangible assets by the firms involved in the M & A operations studied and the subsequent financial performance of the resulting organization, measured by corporate profitability and growth. This suggests that the strategic choice of business growth via M & A operations favors the accumulation of intangible assets in firms seeking better results.
|
1079 |
Adaptive least-squares finite element method with optimal convergence rates / Bringmann, Philipp / 29 January 2021
The least-squares finite element methods (LSFEMs) are based on the minimisation of the least-squares functional consisting of the squared norms of the residuals of first-order systems of partial differential equations. This functional provides a reliable and efficient built-in a posteriori error estimator and allows for adaptive mesh-refinement. The established convergence analysis with rates for adaptive algorithms, as summarised in the axiomatic framework by Carstensen, Feischl, Page, and Praetorius (Comp. Math. Appl., 67(6), 2014), fails for two reasons. First, the least-squares estimator lacks prefactors in terms of the mesh-size, which seemingly prevents a reduction under mesh-refinement. Second, the first-order divergence LSFEMs measure the flux or stress errors in the H(div) norm and thus involve a data resolution error of the right-hand side f. These difficulties led to a twofold paradigm shift in the convergence analysis with rates for adaptive LSFEMs in Carstensen and Park (SIAM J. Numer. Anal., 53(1), 2015) for the lowest-order discretisation of the 2D Poisson model problem with homogeneous Dirichlet boundary conditions. Accordingly, a novel explicit residual-based a posteriori error estimator accomplishes the reduction property. Furthermore, a separate marking strategy in the adaptive algorithm ensures sufficient data resolution.
This thesis presents the generalisation of these techniques to three linear model problems, namely, the Poisson problem, the Stokes equations, and the linear elasticity problem. It verifies the axioms of adaptivity with separate marking by Carstensen and Rabus (SIAM J. Numer. Anal., 55(6), 2017) in three spatial dimensions. The analysis covers discretisations with arbitrary polynomial degree and inhomogeneous Dirichlet and Neumann boundary conditions. Numerical experiments confirm the theoretically proven optimal convergence rates of the h-adaptive algorithm.
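For concreteness, in standard notation (not quoted from the thesis), the first-order system least-squares functional for the Poisson model problem −Δu = f with flux variable σ = ∇u reads

```latex
\mathrm{LS}(f;\tau,v) \;:=\; \|f+\operatorname{div}\tau\|_{L^2(\Omega)}^2
  \;+\; \|\tau-\nabla v\|_{L^2(\Omega)}^2,
\qquad (\tau,v)\in H(\operatorname{div},\Omega)\times H^1_0(\Omega).
```

Evaluating this functional at the discrete minimiser is computable element by element, which is exactly the built-in a posteriori error estimator that drives the adaptive mesh-refinement.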
|
1080 |
Political and economic events 1988 to 1998: their impact on the specification of the nonlinear multifactor asset pricing model described by the arbitrage pricing theory for the financial and industrial sector of the Johannesburg Stock Exchange / Stephanou, Costas Michael / 05 1900
The impact of political and economic events on the asset pricing model described by the
arbitrage pricing theory (APTM) was examined in order to establish if they had caused any
changes in its specification. It was concluded that the APTM is not stationary and that it must
be continuously tested before it can be used, since political and economic events can change its
specification. It was also found that political events had a more direct effect on the
specification of the APTM, in that their effect is more immediate, than did economic events,
which influenced the APTM by first influencing the economic environment in which it
operated.
The conventional approach that would have evaluated important political and economic
events, case by case, to determine whether they affected the linear factor model (LFM), and
subsequently the APTM, could not be used since no correlation was found between the
pricing of a risk factor in the LFM and its subsequent pricing in the APTM. A new approach
was then followed in which a correlation with a political or economic event was sought
whenever a change was detected in the specification of the APTM. This was achieved by first
finding the best subset LFM, chosen for producing the highest adjusted R², month by month,
over 87 periods from 20 October 1991 to 21 June 1998, using a combination of nine
prespecified risk factors (five of which were proxies for economic events and one for
political events). Multivariate analysis techniques were then used to establish which risk
factors were priced most often during the three equal subperiods into which the 87 periods
were broken up.
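A sketch of the month-by-month best-subset search described above (synthetic data; the factor count matches the nine prespecified risk factors, but the returns and loadings are assumptions):

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)
n_obs, n_factors = 60, 9    # observations in one window x prespecified risk factors
F = rng.normal(size=(n_obs, n_factors))
r = F[:, [0, 3, 7]] @ np.array([0.8, -0.5, 0.3]) + rng.normal(0, 1, n_obs)

def adjusted_r2(X, y):
    """OLS with intercept; return the adjusted coefficient of determination."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res, ss_tot = resid @ resid, ((y - y.mean()) ** 2).sum()
    n, k = len(y), X.shape[1]
    return 1 - (ss_res / (n - k - 1)) / (ss_tot / (n - 1))

subsets = (s for k in range(1, n_factors + 1)
           for s in itertools.combinations(range(n_factors), k))
best = max(subsets, key=lambda s: adjusted_r2(F[:, s], r))
print("Best factor subset:", best,
      "adj. R^2:", round(adjusted_r2(F[:, best], r), 3))
```

Repeating this search over each of the 87 monthly windows and tabulating which factors are priced is what the subsequent multivariate analysis summarises.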
Using the above methodology, the researcher was able to conclude that political events
changed the specification of the APTM in late 1991. After the national elections in April
1994 it was found that the acceptance of South Africa into the world economic community
had again changed the specification of the APTM and the two most important factors were
proxies for economic events. / Business Leadership / DBL
|