41. Modelo de estimação dos custos da não formação em saúde no âmbito do diagnóstico e tratamento de feridas crónicas: uso de simulação da decisão clínica com ferramentas baseadas na Internet. Soares Gaspar, Pedro João, 16 December 2009.
Current professional contexts are strongly marked by rapid and profound technological and scientific advancements, and therefore require the constant updating of skills that only Lifelong Learning and continuous professional training can provide. The area of health care provision is no exception: in addition to the quality of care provided, the costs of such care are also constantly being assessed. In health care provision, the advances of science and technology rapidly render obsolete the specific skills acquired at school. On the other hand, clinical errors, bad practices and inadequate performance, frequently associated with insufficient training, add a tremendous economic load to health costs. Information and Communication Technologies, and particularly online training, can become important promoters of Lifelong Learning in general, and of continuous professional training in particular, especially because they bring flexibility, accessibility and ubiquity to training programmes. When investing in the training of health care providers, rather than knowing how much will be spent, what matters most is knowing how much can be saved in health care costs by making providers more qualified and skilled. Yet, in the case of such providers, there are significant ethical and methodological constraints to the question "How much does non-training cost?"

Our general aim was to develop a model to estimate the costs of non-training within health care, using virtual clinical cases and a clinical decision-making simulator for the treatment of chronic wounds. To this end, we built and validated virtual clinical cases of patients with chronic wounds, a mathematical model to estimate the Optimal Costs of these cases (based on the optimal clinical decisions), and a decision-making simulator for the diagnosis and treatment of the virtual clinical cases, used to build the Cost of Action matrices (based on the decisions recorded in the simulator).

The model was first tested in a quantitative, transversal and correlational study on a non-random sample of 78 nurses with different levels of specific training and experience in the diagnosis and treatment of chronic wounds. The outcomes provided empirical evidence that treatment costs are higher among health care providers who had not attended specific accredited training in the diagnosis and treatment of chronic wounds and that, among those who did attend such training, costs tend to drop as the number of training hours increases. The second test of the model was a more controlled, quasi-experimental pre-test/post-test study with a non-equivalent control group, on a sample of 53 health care providers (25 in the experimental group and 28 in the control group). The dependent variable (Cost of Action) was manipulated through a 40-hour accredited training programme on the diagnosis and treatment of chronic wounds, attended only by the experimental group. The outcomes again showed that treatment costs are higher among providers who had not attended the training; we were able to estimate the costs of non-training and to confirm how large a share of the economic burden they represent.

Based on these results, we conclude that (1) continuous professional training can be effective in minimizing errors and bad practices and is essential to reducing the costs of treating chronic wounds, (2) treatment costs are higher among health care providers who did not attend accredited training in the prevention and treatment of chronic wounds, and (3) in a cost-effectiveness analysis taken from a societal perspective that includes both direct and indirect costs, the costs attributable to non-training represent a large share of the economic burden. The model developed and tested, based on virtual clinical cases and a clinical decision-making simulator, proved effective in estimating the costs of non-training in the diagnosis and treatment of chronic wounds, and reinforces our conviction that it can be used to the same end in other areas of health care provision.
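As a purely illustrative aside (not taken from the thesis), the quantity the model estimates can be sketched in a few lines: the cost of the decisions actually recorded in the simulator (Cost of Action) is compared against the cost of the optimal clinical decisions (Optimal Cost), and the excess is attributed to non-training. The price table, decisions and cost structure below are hypothetical.

```python
# Illustrative sketch (hypothetical data): cost of non-training as the gap between
# the cost of recorded decisions and the optimal-decision cost for one virtual case.

def case_cost(decisions, price_table):
    """Sum the unit cost of every product/procedure chosen for a virtual case."""
    return sum(price_table[d] for d in decisions)

# Hypothetical price table (euros per unit) and decision records.
prices = {"hydrocolloid": 4.0, "alginate": 6.0, "gauze": 0.5, "debridement": 20.0}

optimal_decisions = ["debridement", "hydrocolloid"]              # optimal clinical pathway
recorded_decisions = ["gauze", "gauze", "alginate", "alginate"]  # decisions logged in the simulator

optimal_cost = case_cost(optimal_decisions, prices)              # "Optimal Cost"
action_cost = case_cost(recorded_decisions, prices)              # "Cost of Action"
non_training_cost = max(0.0, action_cost - optimal_cost)         # excess attributed to non-training

print(f"Optimal cost: {optimal_cost:.2f} EUR")
print(f"Cost of action: {action_cost:.2f} EUR")
print(f"Estimated cost of non-training: {non_training_cost:.2f} EUR")
```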
42. Distributed AOP middleware for large-scale scenarios. Mondéjar Andreu, Rubén, 29 April 2010.
In this PhD dissertation we present a distributed middleware proposal for large-scale application development. Our main aim is to separate the distributed concerns of these applications, such as replication, so that they can be integrated independently and transparently. Our approach is based on implementing these concerns using the paradigm of distributed aspects, and it benefits from peer-to-peer (P2P) network and aspect-oriented programming (AOP) substrates to provide them in a decentralized, decoupled, efficient and transparent way. Our middleware architecture is divided into two layers: a composition model and a scalable deployment platform for distributed aspects. Finally, we demonstrate the viability and applicability of our model through the implementation and evaluation of prototypes on real large-scale networks.
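As a purely illustrative sketch (not code from the thesis), the following shows the kind of separation a distributed-aspect layer provides: a distributed concern such as replication is written once as an aspect and woven around application calls instead of being scattered through the application code. The node names and the remote-invocation stub are hypothetical.

```python
# Minimal illustration of weaving a distributed concern (replication) as an aspect.
import functools

REPLICAS = ["node-a", "node-b"]  # hypothetical peers discovered via a P2P substrate

def send_to(node, op, args):
    # Stand-in for the real remote invocation performed by the middleware.
    print(f"replicating {op}{args} to {node}")

def replicated(func):
    """Aspect-like decorator: intercepts the call and propagates it to the replicas."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)          # local execution (the base concern)
        for node in REPLICAS:                   # distributed concern, kept separate
            send_to(node, func.__name__, args)
        return result
    return wrapper

@replicated
def put(key, value):
    print(f"stored {key}={value} locally")

put("user:42", {"name": "Alice"})
```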
43. Moving towards the Semantic Web: enabling new technologies through the semantic annotation of social contents. Vicient Monllaó, Carlos, 12 January 2015.
Social Web technologies have caused an exponential growth of the documents available through the Web, making enormous amounts of textual electronic resources available. Users may be overwhelmed by such an amount of content, so the automatic analysis and exploitation of all this information is of interest to the data mining community. Data mining algorithms exploit features of the entities in order to characterise, group or classify them according to their resemblance. Data by itself does not carry any meaning; it needs to be interpreted to convey information. Classical data analysis methods did not aim to "understand" the content: data were treated as meaningless numbers, and statistics were calculated on them to build models that were interpreted manually by human domain experts. Nowadays, motivated by the Semantic Web, many researchers have proposed semantically grounded data classification and clustering methods that are able to exploit textual data at a conceptual level. However, these methods usually rely on pre-annotated inputs to be able to semantically interpret textual data such as the content of Web pages. The usability of all these methods is therefore tied to the linkage between data and its meaning.

This work focuses on the development of a general methodology able to detect the most relevant features of a particular textual resource, finding out their semantics (associating them with concepts modelled in ontologies) and detecting its main topics. The proposed methods are unsupervised (avoiding the manual annotation bottleneck), domain-independent (applicable to any area of knowledge) and flexible (able to deal with heterogeneous resources: raw text documents, semi-structured user-generated documents such as Wikipedia articles, or short and noisy tweets). The methods have been evaluated in different fields (tourism and oncology).

This work is a first step towards the automatic semantic annotation of documents, which is needed to pave the way towards the Semantic Web vision.
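A toy sketch of the kind of annotation step described above: candidate terms extracted from a text are linked to ontology concepts by matching their labels. The mini-ontology, the concepts and the matching rule are all hypothetical simplifications; the actual methodology relies on much richer linguistic analysis and semantic similarity.

```python
# Toy sketch: link salient terms in a text to concepts of a small, hypothetical ontology.
import re
from collections import Counter

ONTOLOGY = {  # concept -> labels (a real system would use a full domain ontology or WordNet)
    "Accommodation": {"hotel", "hostel", "resort"},
    "Gastronomy": {"restaurant", "tapas", "wine"},
    "Beach": {"beach", "seaside", "sand"},
}

def annotate(text, top_k=3):
    tokens = re.findall(r"[a-z]+", text.lower())
    freq = Counter(tokens)
    annotations = []
    for concept, labels in ONTOLOGY.items():
        score = sum(freq[label] for label in labels)   # crude salience: label frequency
        if score > 0:
            annotations.append((concept, score))
    return sorted(annotations, key=lambda x: -x[1])[:top_k]

tweet = "Great tapas and wine near the beach, and the hotel was lovely"
print(annotate(tweet))   # e.g. [('Gastronomy', 2), ('Accommodation', 1), ('Beach', 1)]
```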
44. Semantic recommender systems: provision of personalised information about tourist activities. Borràs Nogués, Joan, 9 June 2015.
This dissertation studies how recommender systems can be improved by using ontological information about a certain domain (in the case of this work, Tourism). Ontologies define a set of concepts related to a certain domain as well as the relationships among them. These knowledge structures may be used not only to represent the domain objects and the user preferences in a more precise and refined way, but also to apply better matching procedures between objects and users (or between the users themselves) with the help of semantic similarity measures. The improvements at the knowledge representation level and at the reasoning level lead to more accurate recommendations and to better-performing recommender systems, paving the way towards a new generation of smart semantic recommender systems. Both content-based recommendation techniques and collaborative filtering ones benefit from the introduction of explicit domain knowledge.

In this thesis we have also designed and developed a recommender system that applies the proposed methods. This recommender is designed to provide personalized recommendations of tourist activities in the region of Tarragona. The activities are classified and labelled according to a specific ontology, which guides the reasoning process. The recommender takes into account many different kinds of data: demographic information, travel motivations, the actions of the user on the system, the ratings provided by the user, the opinions of users with similar demographic characteristics or similar tastes, etc. A diversification process that computes similarities between objects is applied to produce diverse recommendations and hence increase user satisfaction. This system can have a beneficial impact on the region by improving the experience of its visitors.
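A minimal sketch of how an ontology can sharpen the matching step: activities and user preferences are mapped to concepts in a taxonomy, and a path-based semantic similarity scores how well each activity fits the user. The taxonomy fragment and the shared-ancestor measure below are hypothetical and are not the thesis's actual ontology or similarity measure.

```python
# Toy ontology-based matching: score activities by semantic similarity to a user preference.
PARENT = {  # hypothetical fragment of a tourism taxonomy: child -> parent (common root "Activity")
    "RomanesqueChurch": "Monument", "Castle": "Monument", "Monument": "Culture",
    "ArtMuseum": "Museum", "Museum": "Culture", "Culture": "Activity",
    "WaterPark": "Leisure", "Leisure": "Activity",
}

def ancestors(concept):
    path = [concept]
    while concept in PARENT:
        concept = PARENT[concept]
        path.append(concept)
    return path

def similarity(c1, c2):
    """Crude shared-ancestor similarity in (0, 1]; real systems use finer measures."""
    a1, a2 = ancestors(c1), ancestors(c2)
    shared = set(a1) & set(a2)                    # non-empty because all concepts share the root
    depth = min(a1.index(s) for s in shared)      # distance from c1 to the closest shared ancestor
    return 1.0 / (1.0 + depth)

user_interest = "ArtMuseum"
for activity in ["RomanesqueChurch", "Castle", "WaterPark"]:
    print(activity, round(similarity(user_interest, activity), 2))
```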
45. Optimizing programming models for massively parallel computers. Farreras Esclusa, Montse, 12 December 2008.
Since the invention of the transistor, increasing the clock frequency was the primary method of improving computing performance. As the reach of Moore's law came to an end, however, technology-driven performance gains became increasingly harder to achieve, and the research community was forced to come up with innovative system architectures. Today, increasing parallelism is the primary method of improving performance: single processors are being replaced by multiprocessor systems and multicore architectures.

The challenge faced by computer architects is to increase performance while staying within cost and power-consumption limits. The appearance of cheap and fast interconnection networks has promoted designs based on distributed-memory computing. Most modern massively parallel computers, as reflected by the Top 500 list, are clusters of workstations using commodity processors connected by high-speed interconnects.
Today's massively parallel systems consist of hundreds of thousands of processors. Software technology to program these large systems is still in its infancy. Optimizing communication has become a key to overall system performance. To cope with the increasing burden of communication, the following methods have been explored:
(i) Scalability in the messaging system: The messaging system itself needs to scale up to the 100K processor range.
(ii) Scalable algorithms reducing communication: As the machine grows in size the amount of communication also increases, and the resulting overhead negatively impacts performance. New programming models and algorithms allow programmers to better exploit locality and reduce communication.
(iii) Speeding up communication: reducing and hiding communication latency, and improving bandwidth.
Following the three items described above, this thesis contributes to the improvement of the communication system (i) by proposing a scalable memory management scheme for the communication system that guarantees the correct reception of data and control data, (ii) by proposing a language extension that allows programmers to better exploit data locality and reduce inter-node communication, and (iii) by presenting and evaluating a cache of remote addresses that aims to reduce control data and exploit the native RDMA capabilities of the network, resulting in lower latency and better overlap of communication and computation.
Our contributions are analyzed in two different parallel programming models: Message Passing Interface (MPI) and Unified Parallel C (UPC). Many different programming models exist today, and the programmer usually needs to choose one or another depending on the problem and the machine architecture. MPI was chosen because it is the de facto standard for parallel programming on distributed-memory machines. UPC was considered because it constitutes a promising, easy-to-use approach to parallelism. Since parallelism is everywhere, programmability is becoming important, and languages such as UPC are gaining attention as a potential future of high-performance computing.
Concerning the communication system, the languages chosen are relevant because, while MPI offers two-sided communication, UPC relies on a one-sided communication model. This difference potentially influences the communication system requirements of each language. These requirements, as well as our contributions, are analyzed and discussed for both programming models, and we state whether each contribution applies to both.
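As an illustration of point (iii), overlapping communication with computation, the fragment below uses non-blocking MPI calls through mpi4py. It is a generic sketch of the overlap idea, not code from the thesis, whose contributions live inside the MPI and UPC runtimes themselves; the array sizes and the two-rank layout are assumptions made for the example.

```python
# Sketch of communication/computation overlap with non-blocking MPI
# (run with: mpirun -n 2 python overlap.py)
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
peer = 1 - rank                      # assumes exactly two ranks

halo_send = np.full(1024, rank, dtype="d")
halo_recv = np.empty(1024, dtype="d")
interior = np.random.rand(100_000)

# Start the halo exchange without blocking...
reqs = [comm.Isend(halo_send, dest=peer, tag=0),
        comm.Irecv(halo_recv, source=peer, tag=0)]

# ...and compute on the interior data while the messages are in flight.
local_work = np.sum(interior * interior)

MPI.Request.Waitall(reqs)            # communication must complete before using halo_recv
boundary_work = np.sum(halo_recv)
print(f"rank {rank}: interior={local_work:.3f} boundary={boundary_work:.1f}")
```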
46. Estrategias de descomposición en dominios para entornos Grid. Otero Calviño, Beatriz, 13 April 2007.
This work addresses numerical simulations based on finite elements with explicit time integration using Grid technology. Explicit finite element simulations currently distribute their data using domain decomposition with balanced partitions. However, this data distribution suffers a significant performance degradation when the simulations are executed in Grid environments, mainly because a Grid has heterogeneous communications: very fast within a machine and very slow between machines. A balanced data distribution therefore runs at the speed of the slowest communications. To overcome this problem we propose overlapping remote communication time with computation time, dedicating some processors to managing the slowest communications while the rest perform intensive computation. This distribution scheme requires an unbalanced domain decomposition, so that the processors dedicated to managing the slow communications carry hardly any computational load. This work proposes and analyses different strategies for distributing the data and improving application performance in Grid environments. The static distribution strategies analysed are:

1. U-1domains: The data domain is first divided among the machines in proportion to their relative speed. Then, within each machine, the data are divided into nprocs-1 parts, where nprocs is the total number of processors of the machine. Each subdomain is assigned to one processor, and each machine devotes a single processor to managing the remote communications with the other machines.

2. U-Bdomains: The data partitioning is performed in two phases. The first phase is equivalent to that of U-1domains. The second phase divides each data subdomain proportionally into nprocs-B parts, where B is the number of remote communications with other machines (special domains). Each machine has more than one processor managing the remote communications.

3. U-CBdomains: As many special domains are created as there are remote communications, but the special domains are now assigned to a single processor within each machine, so each data subdomain is divided into nprocs-1 parts. The remote communications are managed concurrently by means of threads.

Dimemas was used to evaluate application performance in Grid environments, for different environments and mesh types. The results show that:

- The U-1domains distribution reduces execution times by up to 45% with respect to the balanced distribution. However, it is not effective for Grid environments composed of a large number of remote machines.

- The U-Bdomains distribution proves more efficient, reducing execution time by up to 53%. However, its scalability is moderate, because it may end up with a large number of processors that perform no intensive computation and only manage remote communications. As a limit, this distribution can only be applied if more than 50% of the processors in a machine perform computation.

- The U-CBdomains distribution reduces execution times by up to 30%, which is not as effective as U-Bdomains. However, it increases processor utilization by 50%, that is, it reduces the number of idle processors.
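The U-1domains partitioning described above can be summarised with a small sketch: mesh elements are first split among machines proportionally to their relative speed, and then each machine keeps one processor free of computation to handle remote communication. The machine names, speeds and element counts are hypothetical.

```python
# Sketch of the U-1domains sizing rule: proportional split across machines,
# then nprocs-1 compute subdomains per machine plus one communication processor.

def u1domains(total_elements, machines):
    total_speed = sum(m["speed"] for m in machines)
    plan = []
    for m in machines:
        share = round(total_elements * m["speed"] / total_speed)  # proportional to relative speed
        compute_procs = m["nprocs"] - 1                           # one processor reserved for remote comms
        plan.append({"machine": m["name"],
                     "elements": share,
                     "compute_subdomains": compute_procs,
                     "elements_per_subdomain": share // compute_procs,
                     "comm_processors": 1})
    return plan

machines = [{"name": "clusterA", "speed": 2.0, "nprocs": 16},   # hypothetical Grid machines
            {"name": "clusterB", "speed": 1.0, "nprocs": 8}]

for entry in u1domains(1_000_000, machines):
    print(entry)
```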
47. GRID superscalar: a programming model for the Grid. Sirvent Pardell, Raül, 3 February 2009.
During the last years, the Grid has emerged as a new platform for distributed computing. Grid technology allows joining different resources from different administrative domains and forming a virtual supercomputer with all of them. Many research groups have dedicated their efforts to developing a set of basic services to offer a Grid middleware: a layer that enables the use of the Grid. However, using these services is not an easy task for many end users, even more so if their expertise is not related to computer science. This has a negative influence on the adoption of Grid technology by the scientific community: they see it as a powerful technology but one that is very difficult to exploit. In order to ease the way the Grid is used, an extra layer is needed that hides all the complexity of the Grid and allows users to program or port their applications in an easy way. There have been many proposals of programming tools for the Grid. In this thesis we give an overview of some of them, and we can see that there exist both Grid-aware and Grid-unaware environments (programmed with or without specifying details of the Grid, respectively). Besides, very few existing tools can exploit the implicit parallelism of the application; in the majority of them, the user must define the parallelism explicitly. Another important feature we consider is whether they are based on widely used programming languages (such as C++ or Java), which makes adoption easier for end users.

In this thesis, our main objective has been to create a programming model for the Grid based on sequential programming and well-known imperative programming languages, able to exploit the implicit parallelism of applications and to speed them up by using Grid resources concurrently. Moreover, because the Grid has a distributed, heterogeneous and dynamic nature, and because the number of resources that form a Grid can be very large, the probability that an error arises during an application's execution is high. Thus, another of our objectives has been to automatically deal with any type of error which may arise during the execution of the application (application-related or Grid-related). GRID superscalar (GRIDSs), the main contribution of this thesis, is a programming model that achieves these objectives by providing a very small and simple interface and a runtime that is able to execute the provided code in parallel using the Grid.

Our programming interface allows a user to program a Grid-unaware application with already known and popular imperative languages (such as C/C++, Java, Perl or Shell script) and in a sequential fashion, therefore taking an important step towards assisting end users in the adoption of Grid technology. We have applied our knowledge of computer architecture and microprocessor design to the GRIDSs runtime. As is done in a superscalar processor, the GRIDSs runtime system is able to perform a data dependence analysis between the tasks that form an application, and to apply renaming techniques in order to increase its parallelism. GRIDSs automatically generates, from the user's main code, a graph describing the data dependencies in the application. We present real use cases of the programming model in the fields of computational chemistry and bioinformatics, which demonstrate that our objectives have been achieved.

Finally, we have studied the application of several fault detection and treatment techniques: checkpointing, task retry and task replication. Our proposal is to provide an environment able to deal with all types of failures, transparently for the user whenever possible. The main advantage of implementing these mechanisms at the programming model level is that application-level knowledge can be exploited to dynamically create a fault tolerance strategy for each application, avoiding the introduction of overhead in error-free environments.
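A toy sketch of the kind of data-dependence analysis a runtime like this performs: each task declares which files it reads and writes, and an edge is added whenever a task reads a file produced by an earlier task (a read-after-write dependence). The task names and files are hypothetical, and the sketch ignores renaming and anti/output dependences handled by the real runtime.

```python
# Toy read-after-write dependence analysis over a sequential trace of tasks.
tasks = [  # hypothetical trace: (task name, input files, output files)
    ("simulate", ["params.cfg"], ["raw1.dat"]),
    ("simulate", ["params.cfg"], ["raw2.dat"]),
    ("filter",   ["raw1.dat"],   ["clean1.dat"]),
    ("filter",   ["raw2.dat"],   ["clean2.dat"]),
    ("merge",    ["clean1.dat", "clean2.dat"], ["result.dat"]),
]

last_writer = {}   # file -> index of the task that last produced it
edges = []         # (producer task index, consumer task index)

for i, (_, inputs, outputs) in enumerate(tasks):
    for f in inputs:
        if f in last_writer:
            edges.append((last_writer[f], i))   # read-after-write dependence
    for f in outputs:
        last_writer[f] = i

for producer, consumer in edges:
    print(f"{tasks[producer][0]}[{producer}] -> {tasks[consumer][0]}[{consumer}]")
# Tasks with no path between them (e.g. the two 'simulate' calls) can run concurrently on the Grid.
```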
48. Spectral analysis of executions of computer programs and its applications on performance analysis. Casas Guix, Marc, 9 March 2010.
This work is motivated by the growing intricacy of high performance computing infrastructures. For example, the MareNostrum supercomputer (installed in 2005 at BSC) has 10,240 processors, and there are currently machines with more than 100,000 processors. The complexity of these systems increases the complexity of manually analysing the performance of parallel applications. For this reason, it is mandatory to use automatic tools and methodologies.

The performance analysis group of BSC and UPC has extensive experience in analyzing parallel applications. The approach of this group consists mainly of analyzing tracefiles (obtained from executions of parallel applications) using performance analysis and visualization tools such as Paraver. Taking into account the general characteristics of current systems, this method can be very time-consuming and inefficient. To overcome these problems, this thesis makes several contributions.

The first one is an automatic system able to detect the internal structure of executions of high performance computing applications. This system is able to rule out non-significant regions of executions, to detect redundancies and, finally, to select small but significant execution regions. The automatic detection process is based on spectral analysis (wavelet transform, Fourier transform, etc.) and works by detecting the most important frequencies of the application's execution. These main frequencies are strongly related to the internal loops of the application's source code. An automatic detection of small but significant execution regions remarkably reduces the complexity of the performance analysis process.

The second contribution is an automatic methodology able to show general but non-trivial performance trends, which can be very useful for the analyst when carrying out a performance analysis of the application. The methodology is based on an analytical model consisting of several performance factors. These factors modify the value of the linear speedup in order to fit the real speedup; that is, if the real speedup is far from the linear one, we immediately detect which performance factor is undermining the scalability of the application. The second main characteristic of the analytical model is that it can be used to predict the performance of high performance computing applications: from several executions on a small number of processors we extract the model's performance factors, extrapolate these values to executions on a higher number of processors, and obtain a speedup prediction using the analytical model.

The third contribution is the automatic detection of the optimal sampling frequency of applications. We show that it is possible to extract this frequency using spectral analysis. In the case of sequential applications, using this frequency improves existing results of recognized techniques focused on reducing the instruction execution stream of serial applications (SimPoint, SMARTS, etc.). In the case of parallel benchmarks, the optimal frequency is very useful for extracting significant performance information efficiently and accurately.

In summary, this thesis proposes a set of techniques based on signal processing. The main focus of these techniques is to perform an automatic analysis of applications, reporting an initial diagnosis of their performance and showing their internal iterative structure. These methods also provide a reduced tracefile from which it is easy to start manual fine-grain performance analysis. The contributions of the thesis are not reduced to proposals and publications: the research carried out over these last years has provided a tool for analyzing the structure of applications. Moreover, the methodology is general and can be adapted to many performance analysis methods, remarkably improving their efficiency, flexibility and generality.
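The frequency-detection step can be illustrated with a small sketch: a periodic performance signal (here a synthetic one, not a real tracefile metric) is transformed with the FFT, and the dominant frequency reveals the length of the application's main iteration. Sampling rate, noise level and iteration period are assumptions made for the example.

```python
# Sketch: find the dominant frequency of a (synthetic) periodic performance signal.
import numpy as np

fs = 1000.0                          # samples per second of the signal extracted from the trace
t = np.arange(0, 2.0, 1.0 / fs)
period = 0.25                        # hypothetical iteration length: 250 ms
signal = np.sin(2 * np.pi * t / period) + 0.3 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))   # remove the DC component first
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)

dominant = freqs[np.argmax(spectrum)]
print(f"dominant frequency: {dominant:.2f} Hz -> iteration period ~ {1000.0 / dominant:.0f} ms")
```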
49. Formal mission specification and execution mechanisms for unmanned aircraft systems. Santamaría Barnadas, Eduard, 15 June 2010.
Unmanned Aircraft Systems (UAS) are rapidly gaining attention due to the increasing potential of their applications in the civil domain. UAS can provide great value in environmental applications, during emergency situations, as monitoring and surveillance tools, and as communication relays, among other uses. In general, they are especially well suited for the so-called D-cube operations (Dirty, Dull or Dangerous). Most current commercial solutions, if not remotely piloted, rely on waypoint-based flight control systems for their navigation and are unable to coordinate UAS flight with payload operation. Therefore, automation capabilities and the ability of the system to operate autonomously are very limited. Motivators that turn autonomy into an important requirement include limited bandwidth, limits on the long-term attention span of human operators, and faster access to sensed data (which also results in better reaction times), as well as the benefits derived from reducing operators' workload and training requirements. Other important requirements we believe are key to the success of UAS in the civil domain are reconfigurability and cost-effectiveness: an affordable platform should be able to operate in different application scenarios with reduced human intervention.

To increase the capabilities of UAS and satisfy the aforementioned requirements, we propose adding flight plan and mission management layers on top of a commercial off-the-shelf flight control system. By doing so, a high level of autonomy can be achieved while taking advantage of available technologies and avoiding huge investments. Reconfiguration is made possible by separating flight and mission execution from their specification. The flight and mission management components presented in this thesis integrate into a wider hardware/software architecture being developed by the ICARUS research group. This architecture follows a service-oriented approach where UAS subsystems are connected through a common networking infrastructure; components can be added to and removed from the network in order to adapt the system to the target mission.

The first contribution of this thesis is a flight specification language that enables the description of the flight plan in terms of legs. Legs provide a higher level of abstraction than plain waypoints, since they specify not only a destination but also the trajectory that should be followed to reach it. The leg concept is extended with additional constructs that enable the specification of alternative routes, repetition, and the generation of complex trajectories from a reduced number of parameters. A Flight Plan Manager (FPM) service has been developed that is responsible for the execution of the flight plan. Since the underlying flight control system is still waypoint based, additional intermediate waypoints are automatically generated to adjust the flight to the desired trajectory. In order to coordinate UAS flight and payload operation, a Mission Manager (MMa) service has also been developed. The MMa is able to adapt payload operation to the current flight phase, but it can also act on the FPM and modify the flight plan to better suit the mission needs. To specify UAS behavior, instead of designing a new language, we propose using an in-development standard for the specification of state machines called State Chart XML.

Finally, the proposed specification and execution elements are validated with two example missions executed in a simulation environment. The first mission mimics the procedures required for inspecting navigation aids and shows the UAS performance in a complex flight scenario; only the FPM is involved. The second example combines the FPM with the MMa: the mission consists of detecting hotspots in a given area after a hypothetical wildfire. This second simulation shows how the MMa is able to modify the flight plan in order to adapt the trajectory to the mission needs; in particular, a figure-eight pattern is flown over each dynamically detected potential hotspot.
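A small sketch of the leg abstraction: a leg specifies a destination plus the trajectory used to reach it, and is expanded into the intermediate waypoints that a waypoint-based flight control system actually consumes. The leg types, parameters and coordinates below are hypothetical simplifications, not the thesis's flight specification language.

```python
# Toy expansion of legs into waypoints for a waypoint-based autopilot.
import math
from dataclasses import dataclass

@dataclass
class StraightLeg:
    dest: tuple          # (x, y) destination in local coordinates
    def expand(self, start, step=5):
        (x0, y0), (x1, y1) = start, self.dest
        n = max(1, int(math.hypot(x1 - x0, y1 - y0) // step))
        return [(x0 + (x1 - x0) * i / n, y0 + (y1 - y0) * i / n) for i in range(1, n + 1)]

@dataclass
class OrbitLeg:
    center: tuple
    radius: float
    turns: int = 1
    def expand(self, start, points=12):
        cx, cy = self.center
        return [(cx + self.radius * math.cos(a), cy + self.radius * math.sin(a))
                for k in range(self.turns * points)
                for a in [2 * math.pi * k / points]]

flight_plan = [StraightLeg(dest=(100, 0)), OrbitLeg(center=(100, 50), radius=20)]

pos, waypoints = (0, 0), []
for leg in flight_plan:
    wps = leg.expand(pos)     # each leg contributes the intermediate waypoints of its trajectory
    waypoints.extend(wps)
    pos = wps[-1]
print(f"{len(waypoints)} waypoints generated; last = {waypoints[-1]}")
```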
50. Adaptive execution environments for application servers. Carrera Pérez, David, 8 July 2008.
The growth experienced by the web and by the Internet over the last years has fuelled web application servers to break into most of the existing distributed execution environments. Web application servers take distributed applications one step forward in their accessibility, ease of development and standardization, by using the most widespread communication protocols and by providing rich development frameworks.

Following the evolution of the application server execution environment, the factors that determine its performance have evolved too: new ones have appeared with the rising complexity of the environment, while the already known factors that determined performance in the early stages of application server technology remain relevant in modern scenarios. In the early days, the performance of an application server was mainly determined by the behaviour of its local execution stack, which usually turned out to be the source of most performance bottlenecks. Later, when the middleware became more efficient, more load could be put on each application server instance, and the management of such a large number of concurrent client connections turned out to be a new hot spot in terms of performance. Finally, when the capacity of any single node was exceeded, execution environments were massively clusterized to spread the load across a very large number of application server instances, which meant that each instance had to be allocated a certain amount of resources. The result of this process is that, even in the most advanced service management architecture that can currently be found, 1) understanding the performance impact caused by the application server execution stack, 2) efficiently managing client connections, and 3) adequately allocating resources to each application server instance are three incremental steps of crucial importance in order to optimize the performance of such a complex facility. And given the size and complexity of modern data centers, all of them should operate automatically, without the need for human interaction.

Following the three items described above, this thesis contributes to the performance management of a complex application server execution environment by 1) proposing an automatic monitoring framework that provides a performance insight in the context of a single machine; 2) proposing and evaluating a new architectural application server design that improves adaptability to changing workload conditions; and 3) proposing and evaluating an automatic resource allocation technique for clustered and virtualized execution environments. The sum of the three techniques proposed in this thesis opens up a new range of options to improve the performance of the system both off-line (1) and on-line (2 and 3).
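A minimal sketch of the flavour of the third contribution: a controller periodically re-divides a fixed pool of CPU capacity among application server instances in proportion to their measured demand. The instance names, demand figures and allocation policy are hypothetical simplifications, not the thesis's actual technique.

```python
# Toy proportional-share CPU allocator for clustered application server instances.
def allocate(total_cpus, demands, minimum=0.5):
    """Split total_cpus across instances proportionally to demand, with a floor per instance."""
    remaining = total_cpus - minimum * len(demands)   # capacity left after reserving the floors
    total_demand = sum(demands.values()) or 1.0
    return {name: round(minimum + remaining * d / total_demand, 2)
            for name, d in demands.items()}

# Hypothetical per-instance demand estimated from monitoring (e.g. request rate x avg service time).
demands = {"appsrv-1": 3.0, "appsrv-2": 1.0, "appsrv-3": 0.2}
print(allocate(total_cpus=16, demands=demands))
```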