21 |
Desenvolvimento de banco de dados de pacientes submetidos ao transplante de células-tronco hematopoéticas
Silva, Tatiana Schnorr January 2018 (has links)
Introduction: Hematopoietic stem cell transplantation (HSCT) is a complex procedure involving different biopsychosocial factors and conditions. Monitoring these patients' data is fundamental for obtaining information that can support management, improve the care provided and underpin new research on the subject. Objectives: to develop a database (DB) model for patients undergoing HSCT, covering the main variables of interest in the area. Methods: this is an applied study using a relational database development methodology with three main stages (conceptual model, relational model, physical model). The proposed physical model was built on the Research Electronic Data Capture (REDCap) platform. A pilot test was performed with data from three patients who underwent HSCT at Hospital Moinhos de Vento in 2016/2017, in order to evaluate the tools and their applicability. Results: nine forms were developed in REDCap: sociodemographic data; diagnostic data; history and previous clinical data; pre-transplant evaluation; procedure; immediate post-transplant follow-up; late post-transplant follow-up; readmissions; death. In addition, three report templates were developed, containing the variables from the forms, to support exporting data to the institutions involved with HSCT. After the pilot test, small adjustments were made to the nomenclature of some variables, and others were excluded because they were too complex to obtain. Conclusion: the proposed DB model is expected to serve as a resource to improve the care provided to patients, support management and facilitate future research in the area.
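To make the data model concrete: the nine forms map naturally onto a small set of linked relational tables. The sketch below, using Python's standard sqlite3 module, is a minimal hypothetical rendering of that conceptual-to-physical path; all table and column names are invented for illustration and are not taken from the thesis or from REDCap.

```python
import sqlite3

# Minimal, hypothetical relational model behind the forms described above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (                 -- sociodemographic + diagnostic forms
    patient_id  INTEGER PRIMARY KEY,
    sex         TEXT,
    birth_date  TEXT,
    diagnosis   TEXT
);
CREATE TABLE transplant (              -- procedure form
    transplant_id INTEGER PRIMARY KEY,
    patient_id    INTEGER NOT NULL REFERENCES patient(patient_id),
    performed_on  TEXT,
    donor_type    TEXT
);
CREATE TABLE follow_up (               -- immediate and late follow-up forms
    followup_id   INTEGER PRIMARY KEY,
    transplant_id INTEGER NOT NULL REFERENCES transplant(transplant_id),
    phase         TEXT CHECK (phase IN ('immediate', 'late')),
    assessed_on   TEXT,
    notes         TEXT
);
""")
```

One patient can have several transplants and each transplant several follow-up visits, which is why the foreign keys run in that direction.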
|
22 |
Verlustzeitenbasierte LSA-Steuerung eines Einzelknotens
Oertel, Robert, Wagner, Peter, Krimmling, Jürgen, Körner, Matthias 24 July 2012 (has links)
State-of-the-art traffic data sources such as car-to-infrastructure communication, floating car data and video detection open up new prospects for vehicle-actuated traffic signal control. This article describes an approach that uses vehicle delay times from these sources directly for real-time control of the traffic signals at an isolated intersection. A strength of the approach is its robust design: it copes appropriately with incomplete data, for example due to low penetration rates of vehicles equipped with car-to-infrastructure communication technology. A microscopic simulation study demonstrates that the new approach matches the traffic-flow quality of a traditional headway-based control and, under certain conditions, even outperforms it. As the penetration rate of equipped vehicles decreases, however, control quality degrades; this loss is also quantified in the simulation study and provides useful insights for a possible field test.
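As a rough illustration of the idea, a delay-based switching rule for an isolated two-phase intersection could look like the sketch below. The phase names, the 10% penetration threshold and the fixed-time fallback are illustrative assumptions, not the algorithm published in the article.

```python
# Hypothetical delay-based signal switching for an isolated intersection.
def next_phase(current, delays, elapsed_green, min_green, penetration_rate):
    """current: 'NS' or 'EW'; delays: accumulated delay in seconds per
    phase, estimated from C2I, floating-car or video data."""
    if elapsed_green < min_green:
        return current                  # always honour minimum green
    waiting = "NS" if current == "EW" else "EW"
    if penetration_rate < 0.10:
        return waiting                  # sparse data: degrade to fixed-time
    # Switch once the stopped approaches have accumulated more delay
    # than the approaches currently being served.
    return waiting if delays[waiting] > delays[current] else current

# Example: cross traffic has built up more delay, so the phase switches.
print(next_phase("EW", {"EW": 12.0, "NS": 48.5},
                 elapsed_green=25, min_green=10, penetration_rate=0.6))
```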
|
23 |
Material Hub – Ordnung im Chaos der Werkstoffdatenquellen
Mosch, Marc, Radeck, Carsten, Schumann, Maria 17 May 2018 (links)
Novel materials play a decisive role in innovation processes and are the prerequisite for a multitude of new products. With TU Dresden, a university of excellence, and numerous non-university institutions, Dresden is a major European centre of materials research. The broad scientific and technological spectrum and the enormous research density, combined with close professional networking, create synergies among researchers and give industry a considerable locational advantage. Fully exploiting these advantages requires unified, intuitive access to information. At present, however, materials data are typically held in a multitude of separate, partly access-restricted repositories and are described according to heterogeneous schemas and at varying levels of detail. Search portals do exist, but they are domain-specific, fee-based, or offer user interfaces tailored to narrow target groups and hardly usable by anyone else. Distributed searches across several data sources and portals are time-consuming and laborious. The integrated material research platform Material Hub presented here is intended to remedy this. It must meet the requirements of the manufacturers and suppliers whose data it contains as well as those of users from research, industry and the skilled trades. By integrating the Dresden research landscape, the platform is intended to stimulate further first-class research and innovation, foster cooperation and substantially ease the commercialisation of innovative ideas and solutions. Material Hub is also meant to increase the visibility and reach of Dresden materials research and thus significantly strengthen its existing capabilities.
This article presents the basic technical concept of Material Hub. A key aspect is the consolidation of different data sources in a central search portal. Research data, manufacturer information and application examples are integrated; these are heterogeneous in domain, level of detail and underlying schema. To this end, a materials-description schema and a semantic knowledge base, modelling synonyms and semantic relationships for example, are designed in coordination with materials scientists. On this basis the data holdings are indexed and made searchable. The user interface supports several search modes, from classic keyword search through faceted search to more strongly guided approaches, in order to serve target-group-specific use cases with suitable UI concepts. In addition to the conceptual approach, the article covers initial implementation and evaluation results. (Funded by the European Union and the European Regional Development Fund.)
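As a toy illustration of the synonym handling such a knowledge base enables, the sketch below expands a query term through a synonym map before matching indexed records. The map and the records are invented for illustration; Material Hub's actual schema and index are not detailed in this article.

```python
# Hypothetical synonym-aware lookup over a tiny materials index.
SYNONYMS = {"aluminium": {"aluminum", "al"},
            "gfrp": {"glass fibre reinforced polymer"}}

def expand(term):
    term = term.lower()
    hits = {term}
    for canonical, alternatives in SYNONYMS.items():
        if term == canonical or term in alternatives:
            hits |= {canonical} | alternatives
    return hits

def search(records, query):
    terms = expand(query)
    return [r for r in records if terms & r["keywords"]]

records = [{"name": "AlMg3 sheet", "keywords": {"aluminium", "sheet"}}]
print(search(records, "aluminum"))   # matches despite the spelling variant
```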
|
24 |
DATAWAREHOUSE APPROACH TO DECISION SUPPORT SYSTEM FROM DISTRIBUTED, HETEROGENEOUS SOURCES
Sannellappanavar, Vijaya Laxmankumar 05 October 2006 (has links)
No description available.
|
25 |
Ontology-based discovery of time-series data sources for landslide early warning system
Phengsuwan, J., Shah, T., James, P., Thakker, Dhaval, Barr, S., Ranjan, R. 15 July 2019 (has links)
Modern early warning systems (EWS) require sophisticated knowledge of natural hazards, the urban context and underlying risk factors to enable dynamic and timely decision making (e.g., hazard detection, hazard preparedness). Landslides are a common form of natural hazard with global impact, closely linked to a variety of other hazards. EWS for landslide prediction and detection rely on scientific methods and models that require time series data as input, such as earth observation (EO) and urban environment data. Such data sets are produced by a variety of remote sensing satellites and Internet of Things sensors deployed in landslide-prone areas. The automatic discovery of potential time series data sources has therefore become a challenge, owing to the complexity and high variety of data sources. To address this hard research problem, we propose in this paper a novel ontology, the Landslip Ontology, to provide a knowledge base that establishes the relationship between landslide hazards and EO and urban data sources. The purpose of the Landslip Ontology is to facilitate time series data source discovery for the verification and prediction of landslide hazards. The ontology is evaluated against scenarios and competency questions to verify its coverage and consistency. Moreover, it can also be used to implement a data source discovery system, an essential component of an EWS that must manage (store, search, process) rich information from heterogeneous data sources.
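A minimal sketch of how such ontology-driven discovery can be queried, using Python's rdflib; the lso namespace and its classes and properties are invented stand-ins for illustration, not the actual Landslip Ontology vocabulary.

```python
from rdflib import Graph, Namespace, RDF

# Hypothetical mini-graph: a rain gauge observes rainfall, and rainfall
# is modelled as an indicator of landslides.
LSO = Namespace("http://example.org/landslip#")
g = Graph()
g.add((LSO.rainGauge42, RDF.type, LSO.TimeSeriesSource))
g.add((LSO.rainGauge42, LSO.observes, LSO.Rainfall))
g.add((LSO.Rainfall, LSO.indicatorOf, LSO.Landslide))

# Competency-question style query: which time-series sources can help
# verify or predict landslide hazards?
q = """
SELECT ?src WHERE {
    ?src a lso:TimeSeriesSource ;
         lso:observes ?property .
    ?property lso:indicatorOf lso:Landslide .
}"""
for row in g.query(q, initNs={"lso": LSO}):
    print(row.src)   # -> http://example.org/landslip#rainGauge42
```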
|
26 |
An ontology-based system for discovering landslide-induced emergencies in electrical grid
Phengsuwan, J., Shah, T., Sun, R., James, P., Thakker, Dhaval, Ranjan, R. 07 April 2020 (has links)
Early warning systems (EWS) for electrical grid infrastructure have played a significant role in the efficient management of electricity supply in natural hazard prone areas. Modern EWS rely on scientific methods to analyze a variety of Earth observation and ancillary data, provided by multiple heterogeneous data sources, for the monitoring of electrical grid infrastructure. Furthermore, through cooperation, EWS for natural hazards contribute to monitoring by reporting hazard events associated with a particular electrical grid network. Sophisticated domain knowledge of natural hazards and the electrical grid is also required to enable dynamic and timely decision-making about the management of electrical grid infrastructure during serious hazards. In this paper, we propose a data integration and analytics system that enables interaction between natural hazard EWS and electrical grid EWS to support electrical grid network monitoring and decision-making for infrastructure management. We prototype the system using landslides as an example natural hazard for grid infrastructure monitoring. Essentially, the system consists of background knowledge about landslides as well as information about data sources to facilitate the process of data integration and analysis. Using the modeled knowledge, the prototype system can report the occurrence of landslides and suggest potential data sources for electrical grid network monitoring. / FloodPrep, Grant/Award Number: (NE/P017134/1); LandSlip, Grant/Award Number: (NE/P000681/1)
|
27 |
Scalable Integration View Computation and Maintenance with Parallel, Adaptive and Grouping Techniques
Liu, Bin 19 August 2005 (has links)
"
Materialized integration views constructed by integrating data from multiple distributed data sources help to achieve better access, reliable performance, and high availability for a wide range of applications. In this dissertation, we propose parallel, adaptive, and grouping techniques to address scalability challenges in high-performance integration view computation and maintenance due to increasingly large data sources and high rates of source updates.
State-of-the-art parallel integration view computation makes the common assumption that maximal pipelined parallelism leads to superior performance. We instead propose segmented bushy parallel processing, which combines pipelined parallelism with alternate forms of parallelism to achieve an overall more effective strategy. Experimental studies conducted over a cluster of high-performance PCs confirm that the proposed strategy achieves, on average, a 50% improvement in total processing time compared with existing solutions.
Run-time adaptation becomes critical for parallel integration view computation due to its long-running and memory-intensive nature. We investigate two types of state-level adaptation, namely state spill and state relocation, to address run-time memory shortage. We propose lazy-disk and active-disk approaches that integrate both adaptations to maximize run-time query throughput in a memory-constrained environment. We also propose global throughput-oriented state adaptation strategies for computation plans with multiple state-intensive operators. Extensive experiments confirm the effectiveness of our proposed adaptation solutions.
Once results have been computed and materialized, it is typically more efficient to maintain them incrementally rather than fully recompute them. However, state-of-the-art incremental view maintenance requires O(n²) maintenance queries, where n is the number of data sources the view is defined over. Moreover, existing approaches do not exploit view definitions and data source processing capabilities to further improve maintenance performance. We propose novel grouping maintenance algorithms that dramatically reduce the number of maintenance queries to O(n). A cost-based view maintenance framework generates optimized maintenance plans tuned to particular environmental settings. Extensive experimental studies verify the effectiveness of our maintenance algorithms as well as the maintenance framework.
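The scale of that reduction is easy to see with a back-of-envelope count; the naive n(n-1) figure below is an assumption for illustration, since the dissertation's exact cost model may differ.

```python
# Rough comparison of maintenance-query counts for n source deltas.
def queries_naive(n):
    return n * (n - 1)   # each delta queried against the other n-1 sources

def queries_grouped(n):
    return n             # grouping maintenance: one combined query per source

for n in (4, 10, 20):
    print(f"n={n}: naive={queries_naive(n)}, grouped={queries_grouped(n)}")
# n=20 already cuts 380 maintenance queries down to 20.
```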
|
28 |
Um perfil de qualidade para fontes de dados dinâmicas
SILVA NETO, Everaldo Costa 24 August 2016 (has links)
Nowadays, a massive volume of data is produced by the most varied types of data sources. Despite increasingly easy access to these data, identifying which data sources are most suitable for a given use is a major challenge, owing to the large number of available sources and, above all, to the absence of information about data quality. In this context, the literature offers several works that use Information Quality (IQ) criteria to assess data sources and address this challenge, but few consider the dynamic aspect of the sources during quality assessment. In this dissertation, we address the problem of quality assessment for dynamic data sources, that is, sources whose content may change with high frequency. As a contribution, we propose a strategy in which IQ criteria are assessed continuously, in order to track the evolution of data sources over time. We also propose a Quality Profile, a set of metadata about a source's quality that can be used for several purposes, including data source selection. The proposed Quality Profile is updated periodically, according to the results of the continuous quality assessment and the source's refresh rate, so that it reflects the dynamic aspect of the sources. To evaluate this work, specifically the continuous quality assessment strategy, we used data sources from the meteorological domain, provided by institutions that monitor the weather conditions of Recife. The experiments demonstrated that the proposed strategy produces more satisfactory results than existing ones regarding the trade-off between performance and accuracy.
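A minimal sketch of what such a Quality Profile might look like in code, assuming an invented completeness criterion and a time-based staleness check; the thesis's actual criteria and update policy may differ.

```python
import time

# Hypothetical Quality Profile: quality metadata re-assessed periodically
# so it tracks a dynamic source's evolution.
class QualityProfile:
    def __init__(self, source_id, refresh_seconds):
        self.source_id = source_id
        self.refresh_seconds = refresh_seconds
        self.metrics = {}           # e.g. completeness, timeliness
        self.assessed_at = 0.0

    def stale(self):
        return time.time() - self.assessed_at > self.refresh_seconds

    def update(self, records):
        """Re-assess quality criteria against the latest snapshot."""
        total = len(records) or 1
        complete = sum(1 for r in records
                       if all(v is not None for v in r.values()))
        self.metrics["completeness"] = complete / total
        self.assessed_at = time.time()

profile = QualityProfile("recife-weather-station", refresh_seconds=3600)
profile.update([{"temp": 27.0, "humidity": 0.80},
                {"temp": None, "humidity": 0.75}])
print(profile.metrics)   # {'completeness': 0.5}
```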
|
29 |
Business Intelligence and Customer Relationship Management: a Direct Support to Product Development Teams
Pietrobon, Alberto, Ogunmakinwa, Abraham Bamidele Sunday January 2011 (has links)
For manufacturing firms, knowledge about customers is very important, particularly for the developers and designers of new products. Software can help build an information channel between customers and the firm through Customer Relationship Management (CRM) and Business Intelligence (BI) solutions: customer data are captured in the CRM solution, while BI analyses them and provides clear, processed information to the developers and designers of new products. In this study we researched whether this process occurs in industry, whether and how it can be improved, and what advantages it could bring to manufacturing firms. We collected data by interviewing experts in four companies: three software companies that provide BI solutions and one manufacturing firm. We found that these software solutions are not used to connect developers and designers directly to customer data, and that no specific technical obstacles prevent this, only managerial reasons rooted in everyday practice. We also uncovered findings that would help make this process more efficient and make customer data even more relevant to development.
|
30 |
A return on investment study of Employee Assistance Programmes amongst corporate clients of The Careways Group
Keet, Annaline Caroline Sandra 04 June 2010 (has links)
The purpose of this research is to evaluate the return on investment (ROI) value of Employee Assistance Programmes (EAPs) within the South African context. Assistance to employees originated in the 19th century; the term Employee Assistance Programme, however, was coined in the 1970s in the United States. The Employee Assistance field has since seen a paradigm shift in its focus, significant growth in its market value (the number of corporate clients internationally investing in EAPs for their employees), the establishment of a regulatory and ethical body through EAPA, and its formalisation as an academic discipline. This study takes the concept of the return on investment value of EAPs further than the benefit-to-cost ratio. The use of different data sources, including quantitative and qualitative instruments, creates an opportunity to explore how different role players in the field perceive value. It furthermore maps the subjective and objective experience of behaviour change resulting from personal problems, and the journey of change brought about by focused interventions. The consistency of views across different data sources, as well as between different industries, strengthens the claim that EAPs add value and contribute to companies' financial bottom line. This study advocates the importance of programme evaluation as a central part of EAP contracting, and highlights the importance of documenting employee performance for evaluation purposes. It illustrates a journey that can be complicated by the failure to agree on evaluation terms at programme inception and by unstructured data capturing within companies. Employee behaviour consists of both computable and incomputable elements, and a return on investment study would generally focus on the computable components. This investigation, however, highlights significant elements of risk relating to employee performance challenges that are not easy to include in an ROI calculation but hold significant financial and reputational risks for corporate clients. The influence of individual performance challenges on teams, and the challenges they hold for line managers, is also highlighted through the qualitative part of this study. Employee behaviour is vulnerable to internal and external forces, and as a result a company's productivity can be affected by how individual employees respond to these forces. It can be accepted that interventions aimed at stabilising and improving employee behaviour will inevitably affect work performance and, as a result, the company's financial bottom line. EAPs often operate in an arena where other programmes aimed at influencing employee behaviour are also present; it is thus difficult to isolate the EAP intervention as one of the main behaviour-changing facilitators in the company. This study acknowledges this challenge and shifts its focus to different data sources reporting on employee behaviour before and after EAP intervention; the consistency of data across these sources becomes one of the main reporting areas of the study.
The challenges encountered in this study ultimately inform its recommendations: a thorough agreement on programme evaluation at inception, including the areas such evaluations will cover; the availability of Human Resource data to enable effective evaluation, inclusive of ROI assessments; and targeted assessments at service provider level with effective software support. / Thesis (DPhil)--University of Pretoria, 2010. / Social Work and Criminology / unrestricted
|