  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Automatische Generalisierungsverfahren zur Vereinfachung von Kartenvektordaten unter Berücksichtigung der Topologie und Echtzeitfähigkeit / Automatic generalization methods for simplifying map vector data with respect to topology and real-time capability

Hahmann, Stefan 15 September 2006 (has links)
mapChart GmbH offers a software service for creating vector-based maps from vendor-supplied and partly customer-specific base geometries. The primary output medium is the on-screen map rendered by a Java client in the web browser; PDF export and printing are also supported. Map production is not tied to predefined scales: the user can freely choose which area is shown and at what size. This raises complex tasks for cartographic generalization. The first part of the thesis discusses these problems and the company's current solutions, briefly reviewing scientific work on individual generalization subtasks. Selection and shape simplification are considered the most important generalization steps. While selection can be handled comparatively easily with the existing methods of geodatabases, shape simplification is a substantial problem. The main focus of the thesis is therefore computer-assisted line generalization, with the goal of removing superfluous vertices by means of line simplification algorithms. The results are faster transmission of map vector data to the user and faster spatial analyses such as polygon intersection; improvements in the map display are also sought. A suitable algorithm makes modest demands on time and memory, achieves a high degree of vertex reduction while maintaining acceptable map quality, and respects topological constraints at a reasonable implementation effort.
The thesis gives a comprehensive overview of existing approaches to line generalization and, from a discussion of their advantages and drawbacks, derives two algorithms suitable for implementation in the programming language Java. The results of the Douglas-Peucker and Visvalingam methods are compared with respect to runtime, degree of vertex reduction, and quality of the map display, with the Visvalingam variant showing slight advantages. A parameter configuration for deploying the simplification method in the GIS of mapChart GmbH is proposed. Simplifying polygon meshes extends the line generalization problem: topological constraints must be respected, which is particularly difficult when the input data are not topologically structured. A new algorithm was developed for this task and likewise implemented in Java. Its implementation and achievable results are presented on two test data sets; however, the important real-time constraint is not met, so the mesh simplification algorithm should only be used offline.
Contents: 1 Introduction, 2 Cartographic generalization, 3 Line generalization algorithms, 4 Implementation, 5 Results, 6 Outlook, 7 Summary
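The Visvalingam method favoured above ranks each interior vertex by the area of the triangle it forms with its two neighbours and repeatedly drops the vertex with the smallest "effective area". A minimal sketch of that criterion (illustrative Python, not the thesis's Java implementation):

```python
def triangle_area(a, b, c):
    # Absolute area of the triangle a-b-c, via the cross product.
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def visvalingam(points, min_area):
    """Drop interior vertices whose effective area is below min_area.

    Endpoints are always kept, so the line's extent is preserved.
    """
    pts = list(points)
    while len(pts) > 2:
        # Effective area of every interior vertex (areas[i] <-> pts[i + 1]).
        areas = [triangle_area(pts[i - 1], pts[i], pts[i + 1])
                 for i in range(1, len(pts) - 1)]
        smallest = min(range(len(areas)), key=areas.__getitem__)
        if areas[smallest] >= min_area:
            break
        del pts[smallest + 1]  # remove the least significant vertex
    return pts
```

This naive version rescans all vertices per removal; a production variant (as a real-time GIS would need) keeps the areas in a priority queue.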
133

Log data filtering in embedded sensor devices

Olsson, Jakob, Yberg, Viktor January 2015 (has links)
Data filtering is the removal of unnecessary data points from a data set to save resources such as server capacity and bandwidth; it reduces the amount of stored data and prevents valuable resources from being spent processing insignificant information. The purpose of this thesis is to find algorithms for data filtering and to determine which algorithm performs best in embedded devices with limited resources. The algorithm needs to be efficient in terms of memory usage and performance while retaining enough data points to avoid modifying or losing information. Once an algorithm has been found, it is also implemented to fit the Exqbe system. The study was conducted by reviewing previous work on line simplification algorithms and their applications, and by comparing several well-known, well-studied algorithms to find the one best suited to the problem. The comparison resulted in an implementation of an extended version of the Ramer-Douglas-Peucker algorithm; the algorithm was optimized, and a new filter was implemented in addition to it.
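The classic Ramer-Douglas-Peucker algorithm that this thesis extends keeps a point only if it deviates from the anchor-floater baseline by more than a tolerance epsilon, recursing at the point of maximum deviation. A minimal sketch of the unextended algorithm (illustrative, not the thesis's optimized embedded version):

```python
def point_line_dist(p, a, b):
    """Perpendicular distance from p to the line through a and b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    seg_len = (dx * dx + dy * dy) ** 0.5
    if seg_len == 0.0:  # degenerate baseline: fall back to point distance
        return ((p[0] - a[0]) ** 2 + (p[1] - a[1]) ** 2) ** 0.5
    return abs(dx * (a[1] - p[1]) - (a[0] - p[0]) * dy) / seg_len

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker line simplification."""
    if len(points) < 3:
        return list(points)
    # Find the point farthest from the first-to-last baseline.
    dists = [point_line_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i_max = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i_max - 1] <= epsilon:
        return [points[0], points[-1]]  # everything within tolerance
    # Otherwise split at the farthest point and recurse on both halves.
    left = rdp(points[:i_max + 1], epsilon)
    right = rdp(points[i_max:], epsilon)
    return left[:-1] + right  # drop the duplicated split point
```

For a streaming log filter the recursion would typically be replaced by a windowed or iterative variant, since embedded devices cannot buffer unbounded traces.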
134

Persistence, Metric Invariants, and Simplification

Okutan, Osman Berat 02 October 2019 (has links)
No description available.
135

Text simplification in Swedish using transformer-based neural networks / Textförenkling på Svenska med transformer-baserade neurala nätverk

Söderberg, Samuel January 2023 (has links)
Text simplification involves modifying text to make it easier to read by replacing complex words, altering sentence structure, and/or removing unnecessary information; it can make text accessible to a larger audience. While research on text simplification exists for Swedish, the use of neural networks in the field is limited. Neural networks require large-scale, high-quality datasets, but such datasets are scarce for text simplification in Swedish. This study investigates acquiring datasets by mining paraphrases from web snapshots and by translating an existing English text simplification dataset into Swedish, and assesses the performance of neural network models trained on the acquired data. Three datasets of complex-to-simple sequence pairs were created: one mined from web data, one translated from English to Swedish, and a third combining the mined and translated data. These datasets were then used to fine-tune a BART neural network model pre-trained on large amounts of Swedish data.
The models were evaluated through manual examination and categorization of output and through automated assessment with the SARI and LIX metrics, on two test sets: one translated from English and one manually constructed from Swedish texts. The automatic evaluation produced SARI scores close to, but not as high as, comparable research on English text simplification; in terms of LIX, the models perform on par with or better than existing work on automatic text simplification in Swedish. The manual evaluation revealed that the model trained on mined paraphrases generally produced short sequences with many alterations relative to the original, while the model trained on the translated dataset often produced unchanged sequences or sequences with few alterations. However, the mined-data model also produced many more unusable sequences, with corrupted Swedish or altered meaning, than the translated-data model. The model trained on the combined dataset fell between the two in these respects, producing fewer unusable sequences than the mined-data model and fewer unchanged sequences than the translated-data model. Many sequences were simplified successfully by all three models, but a significant portion of the generated sequences remained unchanged or unusable, highlighting the need for further research, exploration of methods, and tool refinement.
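LIX, one of the two metrics above, is the classic Swedish readability index: average sentence length plus the percentage of long words (more than six characters). A minimal sketch; the word and sentence splitting here is an assumption, as the thesis does not specify its tokenizer:

```python
import re

def lix(text):
    """LIX readability index: words per sentence plus the percentage
    of words longer than six characters. Higher means harder to read."""
    words = re.findall(r"[A-Za-zÅÄÖåäö]+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not words or not sentences:
        return 0.0
    long_words = sum(1 for w in words if len(w) > 6)
    return len(words) / len(sentences) + 100.0 * long_words / len(words)
```

A simplification system lowers LIX when it shortens sentences or swaps long words for short ones, which is why the metric pairs naturally with SARI's edit-based view.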
136

Controllable sentence simplification in Swedish : Automatic simplification of sentences using control prefixes and mined Swedish paraphrases

Monsen, Julius January 2023 (has links)
The ability to read and comprehend text is essential in everyday life. Some people, including individuals with dyslexia and cognitive disabilities, may experience difficulties with this. Thus, it is important to make textual information accessible to diverse target audiences. Automatic Text Simplification (ATS) techniques aim to reduce the linguistic complexity in texts to facilitate readability and comprehension. However, existing ATS systems often lack customization to specific user needs, and simplification data for languages other than English is limited. This thesis addressed ATS in a Swedish context, building upon novel methods that provide more control over the simplification generation process, enabling user customization. A dataset of Swedish paraphrases was mined from a large amount of text data. ATS models were then trained on this dataset utilizing prefix-tuning with control prefixes. Two sets of text attributes and their effects on performance were explored for controlling the generation. The first had been used in previous research, and the second was extracted in a data-driven way from existing text complexity measures. The trained ATS models for Swedish and additional models for English were evaluated and compared using SARI and BLEU metrics. The results for the English models were consistent with results from previous research using controllable generation mechanisms, although slightly lower. The Swedish models provided significant improvements over the baseline, in the form of a fine-tuned BART model, and compared to previous Swedish ATS results. These results highlight the efficiency of using paraphrase data paired with controllable generation mechanisms for simplification. Furthermore, the different sets of attributes provided very similar results, pointing to the fact that both these sets of attributes manage to capture aspects of simplification. 
The paraphrase mining process, the selection of control attributes, and other methodological implications are discussed, leading to suggestions for future research.
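The control-prefix idea above amounts to prepending discretized attribute tokens, describing the target relative to the source, to each training input, so that at inference the same tokens steer how aggressively the model simplifies. A hedged sketch of the data preparation step; the attribute names and binning are illustrative, not the thesis's exact set:

```python
def control_prefix(src, tgt):
    """Build a control-token-prefixed input for a simplification model.

    Attribute names and the 0.05-step binning are illustrative
    assumptions, not the thesis's exact configuration.
    """
    values = {
        "char_ratio": len(tgt) / max(len(src), 1),            # length ratio
        "word_rank_ratio": len(tgt.split()) / max(len(src.split()), 1),
    }
    tokens = []
    for name, value in values.items():
        bucket = round(min(value, 2.0) * 20) / 20  # clip, bin to 0.05 steps
        tokens.append(f"<{name.upper()}_{bucket:.2f}>")
    return " ".join(tokens) + " " + src
```

During training the tokens are computed from each reference pair; at inference the user fixes them (e.g. a low length ratio for strong compression), which is what makes the generation controllable.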
137

T-Spline Simplification

Cardon, David L. 17 April 2007 (has links) (PDF)
This work focuses on generating approximations of complex T-spline surfaces with similar but less complex T-splines. Two approaches to simplifying T-splines are proposed: a bottom-up approach that iteratively refines an over-simple T-spline to approximate a complex one, and a top-down approach that evaluates existing control points for removal when producing an approximation. This thesis develops and compares the two simplification methods, determining the simplification tasks to which each is best suited. In addition, it documents supporting contributions to T-spline research made in the course of developing the simplification methods.
138

[en] FEATURE PRESERVING MESH SIMPLIFICATION BASED ON MARKOV GEOMETRIC DIFFUSION / [pt] SIMPLIFICAÇÃO DE MALHAS COM PRESERVAÇÃO DE FEIÇÕES BASEADA EM DIFUSÃO GEOMÉTRICA MARKOVIANA

LEANDRO CARLOS DE SOUZA 13 May 2013 (has links)
[en] Computational models based on 3D meshes are ubiquitous in areas such as games, animation, and virtual reality. However, very large data sets are frequently produced, e.g. by 3D scanners and fluid dynamics simulations, which require high computing power to handle. Mesh simplification techniques, preserving the topology and geometry of the mesh, are then applied to bring the data down to a size suited to such applications. In this work we introduce a new technique, which we call Markov Geometric Diffusion, based on probability transition matrices and built upon a data set organized geometrically as a mesh. The method combines a strategy based on a geometrically constructed Markov chain, which controls, in a probabilistic way, a normal vector field on the mesh, with a simplification method capable of estimating the impact of element removal on the mesh structure. Several error evaluation metrics are used to compare the simplified mesh with the original one.
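The probability-transition view of a mesh can be pictured as a row-stochastic matrix over adjacent faces, with transitions more likely between faces whose normals agree. The sketch below is an illustrative reconstruction of that idea only, assuming unit normals and a face-adjacency list; it is not the authors' exact construction:

```python
import numpy as np

def normal_transition_matrix(normals, adjacency, beta=4.0):
    """Row-stochastic transition matrix over mesh faces.

    A random step moves to an adjacent face with probability that grows
    with the similarity (dot product) of the two unit normals; beta
    sharpens the preference. Illustrative reconstruction, not the
    paper's exact construction.
    """
    n = len(normals)
    P = np.zeros((n, n))
    for i, neighbours in enumerate(adjacency):
        for j in neighbours:
            similarity = float(np.dot(normals[i], normals[j]))  # cos angle
            P[i, j] = np.exp(beta * similarity)
        P[i] /= P[i].sum()  # normalize the row into a distribution
    return P
```

Flat regions, where normals barely change, then diffuse probability mass quickly, which is one way such a chain can flag elements as safe to remove.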
139

Produção de biblioteca de compostos derivados de produtos naturais: síntese e estudo de atividades biológicas / Production of library of compounds derived from natural products: synthesis and study of biological activities

Mello, Rodrigo Brito de 19 September 2014 (has links)
The present study concerns the semi-synthesis of analogues of important lead compounds (aphidicolin, lawsone, lapachol and CAPE) using medicinal chemistry strategies such as bioisosterism, addition of functional groups, and molecular simplification. We thus obtained a library of rational analogues, manipulating physicochemical and structural parameters for bioprospecting purposes. More lipophilic aphidicolin derivatives were developed by acylation of the hydroxyl groups present in the structure of this terpene. Attempts were made to prepare bioisosteres, salts, and phosphate esters of the natural hydroxynaphthoquinones lapachol and lawsone, in order to evaluate the influence of pKa on the biological activity of this class of molecules and to increase their water solubility. In this case, side reactions were observed, such as a molecular rearrangement leading to aminonaphthoquinones during the study of the cycloaddition of sodium azide with cyano groups. In addition, we studied the effect of the molecular simplification of CAPE (caffeic acid phenethyl ester), to better understand the structural requirements for the antitumor activity of this class of compounds. In this work, 14 molecules were obtained and tested for different biological activities. Naphthoquinoid derivatives were active as DHODH inhibitors, both in an assay on the enzyme and in a cellular assay. Additionally, simplified CAPE analogues showed high antitumor activity, with safety, in comparison with the 5-fluorouracil control.
140

Simplificações na modelagem de habitações de interesse social no programa de simulação de desempenho térmico EnergyPlus / Modeling simplification of social houses in the thermal performance simulation program EnergyPlus

Gil, María del Pilar Casatejada 12 June 2017 (has links)
Building thermal and energy performance simulation programs have gained increasing importance due to the possibilities they offer for design evaluation. However, they are difficult to use in the early design stages, since they demand time, specialized technicians, high budgets, and a detailed design. Simplified computational tools exist, but their applicability is limited, as they do not offer results as accurate as those of more complex methods. This work therefore evaluates possibilities for simplifying the thermal zoning in the EnergyPlus simulation program without compromising the simulation results; such a simplification would support the use of these tools in early design. The building studied is a naturally ventilated, single-story, detached social housing unit (HIS), simulated for three Brazilian cities (Curitiba, Manaus and São Paulo). It is modeled in EnergyPlus in two ways: as a multizone model (MuZ) and as a monozone model (MoZ), in which the whole dwelling is treated as a single thermal zone. The impact of the thermal zone simplification is evaluated in two studies considering: 1) several schedules for opening and closing internal doors, and 2) different geometries and internal room layouts. In both studies, the results show that the hourly absolute temperature difference between the MoZ and MuZ models is significantly low for all cases considered, remaining below 0.4 °C more than 50% of the time. The largest differences between MoZ and MuZ occur in models simulated in colder climates, in models in which the internal doors are kept closed, and in smaller rooms with reduced exposure to solar radiation. The minimum and maximum annual differences in internal air temperature between MoZ and MuZ are notably high; however, these extremes occur on a specific day and hour, and the annual mean difference is significantly low for all cases.
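The headline comparison above, the share of hours in which the MoZ-MuZ temperature gap stays below 0.4 °C, together with the mean and extreme gaps, is a simple statistic over the two hourly series. A sketch with made-up data (the function and threshold default are illustrative, not the thesis's post-processing script):

```python
def agreement_stats(t_mono, t_multi, threshold=0.4):
    """Compare hourly air temperatures of a monozone and a multizone model.

    Returns (share of hours with |MoZ - MuZ| below the threshold,
             mean absolute difference, maximum absolute difference).
    """
    diffs = [abs(a - b) for a, b in zip(t_mono, t_multi)]
    share = sum(d < threshold for d in diffs) / len(diffs)
    return share, sum(diffs) / len(diffs), max(diffs)
```

Run over a full 8760-hour EnergyPlus output, a high share with a low mean supports collapsing the floorplan into a single zone, while a large maximum flags the isolated day-and-hour extremes the abstract mentions.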
