1131 |
Development of an internet based housing demand database system for the KwaZulu-Natal Department of Human Settlements. Eedara, Mamatha, 16 September 2014 (has links)
Submitted in fulfillment of the requirements of the Degree of Master of Technology: Information Technology, Durban University of Technology, 2012 / The introduction of the Integrated Residential Development Programme (IRDP) in 2008 created challenges for the administration of all waiting lists and housing demand databases in South Africa, as the provisioning of housing by the National Housing Programme was revised to include households with higher earnings. This resulted in an increase in the number of applications in all provinces. The fact that the KwaZulu-Natal Department of Human Settlements (KZN-DHS) was processing applications manually, because its electronic system was obsolete, only served to exacerbate matters. To address this problem of poor service provisioning at the KZN-DHS, an automated internet-based system was considered a promising solution to facilitate effective communication between the department and its clients. It was therefore important to find out which business activities and functional requirements of the KZN-DHS, when automated as an internet-based application, would improve housing service provisioning in the province. The purpose of this study was therefore to modify and enhance the old housing demand (electronic) database system of the KZN-DHS, as the old system was not meeting requirements and was not serving the citizens of the province efficiently.
The researcher used the Entity Relationship (ER) model and the Unified Modelling Language (UML) as a framework to develop an internet-based system to support the business process, minimize capturing errors and improve administration processes in the KZN-DHS. Using a JAD session and semi-structured interviews, she determined the needs and requirements of the users before developing, implementing and testing the system. Implementation alerted the researcher to errors and issues, which were addressed to ensure optimal functioning of the system. This study makes recommendations for maintenance of the system and discusses implications for further research.
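As an illustration only (the actual KZN-DHS schema is not published in this abstract), a housing demand database of the kind described might start from an ER model relating applicants to their applications; every table and column name below is invented for the sketch:

```python
import sqlite3

# Hypothetical minimal schema sketched from the ER description above.
# Table and column names are illustrative, not taken from the real system.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE applicant (
    applicant_id   INTEGER PRIMARY KEY,
    id_number      TEXT UNIQUE NOT NULL,  -- national ID; UNIQUE blocks duplicate capture
    full_name      TEXT NOT NULL,
    monthly_income INTEGER NOT NULL       -- the IRDP widened the eligible income band
);
CREATE TABLE application (
    application_id INTEGER PRIMARY KEY,
    applicant_id   INTEGER NOT NULL REFERENCES applicant(applicant_id),
    municipality   TEXT NOT NULL,
    submitted_on   TEXT NOT NULL,
    status         TEXT NOT NULL DEFAULT 'pending'
);
""")
conn.execute("INSERT INTO applicant VALUES (1, '8001015009087', 'T. Mkhize', 4500)")
conn.execute("INSERT INTO application VALUES (1, 1, 'eThekwini', '2012-03-01', 'pending')")

# The kind of administrative query the department would run.
pending = conn.execute(
    "SELECT COUNT(*) FROM application WHERE status = 'pending'"
).fetchone()[0]
```

The UNIQUE constraint on the national ID is one way such a schema can reduce the capturing errors mentioned above.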
|
1132 |
The Gonium pectorale genome demonstrates co-option of cell cycle regulation during the evolution of multicellularity. Hanschen, Erik R., Marriage, Tara N., Ferris, Patrick J., Hamaji, Takashi, Toyoda, Atsushi, Fujiyama, Asao, Neme, Rafik, Noguchi, Hideki, Minakuchi, Yohei, Suzuki, Masahiro, Kawai-Toyooka, Hiroko, Smith, David R., Sparks, Halle, Anderson, Jaden, Bakarić, Robert, Luria, Victor, Karger, Amir, Kirschner, Marc W., Durand, Pierre M., Michod, Richard E., Nozaki, Hisayoshi, Olson, Bradley J. S. C., 22 April 2016 (has links)
The transition to multicellularity has occurred numerous times in all domains of life, yet its initial steps are poorly understood. The volvocine green algae are a tractable system for understanding the genetic basis of multicellularity including the initial formation of cooperative cell groups. Here we report the genome sequence of the undifferentiated colonial alga, Gonium pectorale, where group formation evolved by co-option of the retinoblastoma cell cycle regulatory pathway. Significantly, expression of the Gonium retinoblastoma cell cycle regulator in unicellular Chlamydomonas causes it to become colonial. The presence of these changes in undifferentiated Gonium indicates extensive group-level adaptation during the initial step in the evolution of multicellularity. These results emphasize an early and formative step in the evolution of multicellularity, the evolution of cell cycle regulation, one that may shed light on the evolutionary history of other multicellular innovations and evolutionary transitions.
|
1133 |
Automated dust storm detection using satellite images : development of a computer system for the detection of dust storms from MODIS satellite images and the creation of a new dust storm database. El-Ossta, Esam Elmehde Amar, January 2013 (has links)
Dust storms are natural hazards whose frequency has increased in recent years over the Sahara Desert, Australia, the Arabian Desert, Turkmenistan and northern China, worsening over the last decade. They increase air pollution, damage human health and communication facilities, lower temperatures, and reduce visibility, delaying both road and air traffic and affecting urban and rural areas as well as farms. It is therefore important to understand the causes, movement and radiative effects of dust storms, and monitoring and forecasting efforts are growing in order to help governments reduce the negative impact of these storms. Satellite remote sensing is the most common detection method, but its use over sandy ground is still limited because dust and sand share similar characteristics; true-colour images, estimates of aerosol optical thickness (AOT) and algorithms such as the deep blue algorithm all have limitations for identifying dust storms. Many researchers have studied daytime dust storm detection in a number of different regions of the world, including China, Australia, America and North Africa, using a variety of satellite data, but fewer studies have focused on detecting dust storms at night. The key aims of this study are to use data from the Moderate Resolution Imaging Spectroradiometers (MODIS) on the Terra and Aqua satellites to develop a more effective automated method for detecting dust storms during both day and night, and to generate a MODIS dust storm database.
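One classical signal used in automated dust detection from thermal satellite bands is the split-window brightness-temperature difference (BTD) between the ~11 um and ~12 um channels, which tends to be negative over airborne dust. A minimal sketch follows, with invented pixel values and an assumed zero threshold; this is a generic illustration, not the thesis's actual method:

```python
# Toy 2x2 "images" of brightness temperature (kelvin) for the two
# thermal channels; the values are made up for the sketch.
bt11 = [[285.0, 290.0], [288.0, 286.0]]  # ~11 um channel
bt12 = [[287.5, 289.0], [290.0, 285.0]]  # ~12 um channel

def dust_mask(bt11, bt12, threshold=0.0):
    """Flag pixels whose BTD (bt11 - bt12) falls below the threshold,
    a common (simplified) indicator of airborne dust."""
    return [[(a - b) < threshold for a, b in zip(row11, row12)]
            for row11, row12 in zip(bt11, bt12)]

mask = dust_mask(bt11, bt12)  # True where the pixel looks dusty
```

Real detectors combine several such tests and tuned thresholds; the single-threshold form is only the simplest starting point.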
|
1134 |
Využití dat LLS pro aktualizaci silniční sítě / Utilization of ALS data for update of a road network. Kutišová, Tereza, January 2019 (has links)
Utilization of ALS data for update of a road network. Abstract: This thesis concerns the automatic detection of roads from airborne laser scanning (ALS) data. The goal of the method is to identify the paved road surface as accurately as possible; attributes of individual road segments are then computed on that basis. The first part of the thesis summarizes known approaches to the problem, together with the experience and evaluations reported by their authors. The practical part describes the proposed methodology, which builds on findings from the literature review, and then introduces the input data and test areas. The final parts present the results and compare them with those of authors who performed similar evaluations in their work. Key words: airborne laser scanning, digital topographic database, road network, database update
|
1135 |
EVALUATING SPATIAL QUERIES OVER DECLUSTERED SPATIAL DATA. Eslam A Almorshdy (6832553), 02 August 2019 (has links)
Due to the large volumes of spatial data, data is stored on clusters of machines that inter-communicate to achieve a task. In such a distributed environment, communicating intermediate results among computing nodes dominates execution time, and the communication overhead is even more dominant if processing is in memory. Moreover, the way spatial data is partitioned affects the overall processing cost, as different partitioning strategies influence the size of the intermediate results. Spatial data poses the following additional challenges: 1) storage load balancing, because of the skewed distribution of spatial data over the underlying space; 2) query load imbalance, due to skewed query workloads and query hotspots over both time and space; and 3) lack of effective utilization of the computing resources. We introduce a new kNN query evaluation technique, termed BCDB, for evaluating nearest-neighbor queries (NN-queries, for short). In contrast to clustered partitioning of spatial data, BCDB explores the use of declustered partitioning of data to address data and query skew. BCDB uses summaries of the underlying data and a coarse-grained index to localize processing of the NN-query on each local node as much as possible. The coarse-grained index is locally traversed using a new uncertain version of classical distance browsing, resulting in a minimal O(√k) number of elements being communicated across all processing nodes.
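The coordinator side of evaluating a kNN query over declustered partitions can be sketched as a merge of per-node partial answers into a global top-k. This is a generic illustration of the idea, not the actual BCDB algorithm, and all names and data are assumptions:

```python
import heapq

def local_knn(points, q, k):
    """What one processing node would do: top-k of its own partition
    by squared Euclidean distance to the query point q."""
    dist = lambda p: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return heapq.nsmallest(k, points, key=dist)

def global_knn(partitions, q, k):
    """Coordinator step: merge the short per-node candidate lists
    into the final k nearest neighbors."""
    candidates = [p for part in partitions for p in local_knn(part, q, k)]
    dist = lambda p: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return heapq.nsmallest(k, candidates, key=dist)

# Three declustered partitions of a toy 2-D point set.
partitions = [[(0, 0), (5, 5)], [(1, 1), (9, 9)], [(2, 2), (8, 0)]]
result = global_knn(partitions, (0, 0), 2)
```

In this naive version each node ships k candidates; the point of techniques like the one described above is to shrink that communicated set (here, toward O(√k) elements overall).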
|
1136 |
Visualiseringsverktyg för migrerad kod : Ersättare till GuardIEn (Visualization tool for migrated code: a replacement for GuardIEn). Eriksson, John, Karlsson, Tobias, January 2019 (has links)
Java is one of the most widely used programming languages today. CSN, which previously used a 4GL tool, is now migrating to Java development, which means there is a need for a tool to show dependencies and relationships in the migrated and newly developed Java code. GuardIEn was previously used for this in the old code base, but that tool will be retired after CSN's migration to Java. The overall purpose of the project is to create a graph database with data scanned by the tool jqAssistant. This database is then used by a Java backend that retrieves relationships and nodes from the graph database, which are then visualized through a separate web interface in Angular showing all relations between program code. Functions for searching program code and file names in the code base, in order to locate and display source code, were also investigated. / Java är en av de mest använda programmeringsspråken som används idag. CSN som tidigare använt 4 GL verktyg skall nu migrera till Java-utveckling vilket innebär att det finns ett behov av ett verktyg för att visa beroenden och relationer i den migrerade och nyutvecklade Java-koden. GuardIEn användes innan för detta i den gamla kodbasen men det verktyget kommer avvecklas efter CSN:s efter migrering till Java. Projektets övergripande syfte är att skapa en grafdatabas med data som skannas in med verktyget jqAssistant. Denna databas används sedan av en backend applikation som hämtar relationer och noder från grafdatabasen som sedan används med ett eget webbgränssnitt i Angular för visualisera alla relationer mellan programkod. Det har också undersökts kring funktioner på att söka efter programkod och filnamn i kodbasen för att hitta och kunna visa källkoden.
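The kind of dependency question such a backend answers against the jqAssistant-populated graph can be illustrated with a toy in-memory graph: classes as nodes, "depends on" edges between them, and a reachability query of the sort the Angular front end would visualize. All class names are invented:

```python
from collections import deque

# Stand-in for the scanned dependency graph (class -> classes it depends on).
depends_on = {
    "LoanService": ["LoanRepository", "AuditLogger"],
    "LoanRepository": ["DbConnection"],
    "AuditLogger": [],
    "DbConnection": [],
}

def transitive_dependencies(graph, start):
    """Breadth-first walk over the dependency edges: everything the
    given class depends on, directly or indirectly."""
    seen, queue = set(), deque(graph.get(start, ()))
    while queue:
        node = queue.popleft()
        if node not in seen:
            seen.add(node)
            queue.extend(graph.get(node, ()))
    return seen

deps = transitive_dependencies(depends_on, "LoanService")
```

In the real system this traversal would be a query against the graph database rather than an in-memory walk, but the shape of the answer is the same.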
|
1137 |
Um novo processo para refatoração de bancos de dados. / A new process for database refactoring. Domingues, Márcia Beatriz Pereira, 15 May 2014 (has links)
O projeto e manutenção de bancos de dados é um importante desafio, tendo em vista as frequentes mudanças de requisitos solicitados pelos usuários. Para acompanhar essas mudanças o esquema do banco de dados deve passar por alterações estruturais que muitas vezes prejudicam o desempenho e o projeto das consultas, tais como: relacionamentos desnecessários, chaves primárias ou estrangeiras criadas fortemente acopladas ao domínio, atributos obsoletos e tipos de atributos inadequados. A literatura sobre Métodos Ágeis para desenvolvimento de software propõe o uso de refatorações para evolução do esquema do banco de dados quando há mudanças de requisitos. Uma refatoração é uma alteração simples que melhora o design, mas não altera a semântica do modelo de dados, nem adiciona novas funcionalidades. Esta Tese apresenta um novo processo para aplicar refatorações ao esquema do banco de dados. Este processo é definido por um conjunto de tarefas com o objetivo de executar as refatorações de uma forma controlada e segura, permitindo saber o impacto no desempenho do banco de dados para cada refatoração executada. A notação BPMN foi utilizada para representar e executar as tarefas do processo. Como estudo de caso foi utilizado um banco de dados relacional, o qual é usado por um sistema de informação para agricultura de precisão. Esse sistema, baseado na Web, necessita fazer grandes consultas para plotagem de gráficos com informações georreferenciadas. / The design and maintenance of databases is an important challenge, given the frequent requirement changes requested by users. To keep up with these changes, the database schema must undergo structural modifications that often hurt performance and query design, such as unnecessary relationships, primary or foreign keys created tightly coupled to the domain, obsolete attributes, and inadequate attribute types. The literature on Agile Methods for software development proposes the use of refactorings to evolve the database schema when requirements change. A refactoring is a simple change that improves the design but does not alter the semantics of the data model or add new functionality. This thesis presents a new process for applying refactorings to the database schema. The process is defined by a set of tasks whose goal is to execute refactorings in a controlled and safe way, making it possible to know the impact on database performance of each refactoring performed. The BPMN notation was used to represent and execute the tasks of the process. As a case study, a relational database used by a precision-agriculture information system was employed. This Web-based system must run large queries to plot graphs with georeferenced information.
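One refactoring from the catalogue such a process draws on, "rename column", can be sketched as a controlled task that captures the data before the structural change and verifies afterwards that the semantics were preserved. The schema below is invented, and this is a generic illustration, not the thesis's BPMN-driven implementation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE plot (id INTEGER PRIMARY KEY, lat REAL, lon REAL)")
conn.executemany("INSERT INTO plot VALUES (?, ?, ?)",
                 [(1, -22.9, -47.0), (2, -23.5, -46.6)])

# Step 1: capture the data before the refactoring.
before = conn.execute("SELECT id, lat, lon FROM plot ORDER BY id").fetchall()

# Step 2: the refactoring itself -- rename 'lon' to the clearer 'longitude'
# via the rebuild-and-copy pattern, which works on any SQLite version.
conn.executescript("""
CREATE TABLE plot_new (id INTEGER PRIMARY KEY, lat REAL, longitude REAL);
INSERT INTO plot_new SELECT id, lat, lon FROM plot;
DROP TABLE plot;
ALTER TABLE plot_new RENAME TO plot;
""")

# Step 3: verify the change was purely structural -- same rows, new name.
after = conn.execute("SELECT id, lat, longitude FROM plot ORDER BY id").fetchall()
```

A fuller process would also time representative queries before and after each step to record the performance impact the abstract mentions.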
|
1138 |
Scalable Visual Hierarchy Exploration. Stroe, Ionel Daniel, 10 May 2000 (has links)
More and more modern computer applications, from business decision support to scientific data analysis, utilize visualization techniques to support exploratory activities. Various tools have been proposed in the past decade to help users better interpret data using such display techniques. However, most do not scale well with regard to the size of the dataset upon which they operate: the level of cluttering on the screen is typically unacceptable and the performance is poor. To solve the problem of cluttering at the interface level, visualization tools have recently been extended to support hierarchical views of the data, with support for focusing and drilling down using interactive brushes. To solve the scalability problem, we now investigate how best to couple such a visualization tool with a database management system without losing the real-time characteristics. This integration must be done carefully, since visual user interactions implemented as main-memory operations do not map directly into efficient database operations. The main efficiency issue in this integration is avoiding the recursive processing required for hierarchical data retrieval. For this problem, we have developed a tree labeling method, called the MinMax tree, that moves the on-line recursive processing into an off-line precomputation step. Thus, at run time, the recursive processing operations translate into linear-cost range queries. Secondly, we employ a main-memory access strategy to support incremental loading of data into main memory. These techniques have been incorporated into XmdvTool, a multidimensional visual exploration tool, in order to achieve scalability. The tool now successfully scales up to datasets on the order of 10^5-10^7 records. Lastly, we report experimental results that illustrate the impact of the proposed techniques on the system's overall performance.
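The interval-labeling idea behind such a scheme can be sketched as follows: one offline DFS assigns each node a (min, max) pair so that "all descendants of n" becomes a non-recursive range test at query time. The exact label assignment of the real MinMax tree may differ; this is the generic interval-labeling version:

```python
# A small hierarchy: parent -> list of children.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": [], "a1": [], "a2": []}

def label(tree, node, next_id=None, labels=None):
    """Offline precomputation: pre-order id plus the largest id in the
    node's subtree, so descendants occupy a contiguous id range."""
    if next_id is None:
        next_id, labels = [0], {}
    my_id = next_id[0]
    next_id[0] += 1
    for child in tree[node]:
        label(tree, child, next_id, labels)
    labels[node] = (my_id, next_id[0] - 1)  # (own id, max id in subtree)
    return labels

labels = label(tree, "root")

def descendants(labels, node):
    """Run-time query: a simple range test, no recursion needed."""
    lo, hi = labels[node]
    return {n for n, (i, _) in labels.items() if lo < i <= hi}
```

In a DBMS the range test becomes a single `BETWEEN` predicate over the label column, which is the linear-cost range query the abstract refers to.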
|
1139 |
Planejamento de moduladores de polimerização de microtúbulos com propriedades anticâncer, análise estrutural de macromoléculas e geração de uma base virtual de produtos naturais / Design of microtubule polymerization modulators with anticancer properties, structural analysis of macromolecules and development of a virtual database of natural products. Santos, Ricardo Nascimento dos, 19 November 2015 (has links)
Os trabalhos realizados e apresentados nesta tese de doutorado compreendem diversos estudos computacionais e experimentais aplicados ao planejamento de candidados a novos fármacos para o tratamento do câncer, de uma metodologia inovadora para investigar a formação de complexos proteicos e de uma base de compostos naturais reunindo parte da biodiversidade brasileira com a finalidade de incentivar e auxiliar a descoberta e o desenvolvimento de novos fármacos no país. No primeiro capítulo, são descritos estudos que permitiram a identificação e o desenvolvimento de novas moléculas com atividade anticâncer, através da integração de ensaios bioquímicos e métodos de modelagem molecular na área de química medicinal. Dessa forma, estudos de modelagem molecular e ensaios bioquímicos utilizando uma base de compostos disponibilizada pela colaboração com o Laboratório de Síntese de Produtos Naturais e Fármacos (LSPNF) da UNICAMP, permitiram identificar uma série de moléculas da classe ciclopenta-β-indóis como inibidores da polimerização de microtúbulos com considerável atividade anti-câncer. Estes compostos apresentaram-se capazes de modular a polimerização de microtúbulos em ensaios in vitro frente ao alvo molecular e a células cancerígenas, com valores de IC50 na faixa de 20 a 30 μM. Além disso, estudos experimentais permitiram identificar o sítio da colchicina na tubulina como a região de interação desta classe e ensaios de migração celular comprovaram sua atividade antitumoral. A partir dos resultados obtidos, estudos mais aprofundados de docagem e dinâmica molecular permitiram elucidar as interações moleculares envolvidas no processo de ligação à proteína tubulina, e a utilização destes modelos moleculares no planejamento, síntese e avaliação de uma nova série de compostos. 
Com base nos dados obtidos por estudos computacionais, modificações foram propostas e novos inibidores da polimerização de tubulina foram planejados, sintetizados e avaliados, resultando na identificação de um inibidor de elevada atividade e perfil farmacodinâmico superior dentre as moléculas planejadas, com IC50 de 5 μM. Concomitantemente, ensaios de citotoxicidade in vitro demostraram uma interessante seletividade destes compostos por células cancerígenas em comparação a células saudáveis. Os estudos desenvolvidos com inibidores de tubulina aqui apresentados permitiram identificar moduladores da polimerização de microtúbulos com excelente perfil anti-câncer, que servirão como modelo para o desenvolvimento de novos tratamentos eficazes contra o câncer. No segundo capítulo é apresentado um novo método para predizer modificações conformacionais e a formação de complexos multiméricos em sistemas proteicos. Este método foi elaborado durante os estudos desenvolvidos ao longo de um programa de intercâmbio no laboratório The Center for Theoretical and Biological Physics (CTBP, Rice University, Estados Unidos), sob orientação do professor Dr. José Nelson Onuchic. Durante este projeto, estudos de modelagem computacional foram realizados utilizando métodos computacionais modernos desenvolvidos no próprio CTBP, tal como o método de Análise de Acoplamento Direto (DCA, do inglês Direct-Coupling Analysis) e um método de simulação conhecido como Modelagem Baseada em Estrutura (SBM, do inglês Structure-Based Modeling). Nos estudos aqui apresentados, os métodos DCA e SBM desenvolvidos no CTBP foram combinados, modificados e ampliados no desenvolvimento de uma nova metodologia que permite identificar mudanças conformacionais e elucidar mecanismos de enovelamento e oligomerização em proteínas. 
Os resultados obtidos através da predição de diversos complexos proteicos multiméricos com uma alta precisão mostram que este sistema é extremamente eficaz e confiável para identificar regiões de interface de contato entre proteínas e a estrutura quaternária de complexos macromoleculares. Esta nova metodologia permite a elucidação e caracterização de sistemas proteicos incapazes de serem determinados atualmente por métodos puramente experimentais. No terceiro capítulo desta tese de doutorado, é descrito a construção de uma base virtual de dados em uma iniciativa pioneira que tem como principal objetivo reunir e disponibilizar o máximo possível de toda a informação já obtida através do estudo da biodiversidade brasileira. Esta base, intitulada NuBBE DataBase, reúne diversas informações como estrutura molecular 2D e 3D e informações de atividades biológicas de diversas moléculas já isoladas pelo Núcleo de Bioensaios Biossíntese e Ecofisiologia de Produtos Naturais (NuBBE), localizado na Universidade Estadual Paulista Júlio de Mesquita Filho (UNESP). A NuBBEDB será de grande utilidade para a comunidade científica, fornecendo a centros de pesquisa e indústrias farmacêuticas informações para estudos de modelagem molecular, metabolômica, derreplicação e principalmente para o planejamento e a identificação de novos compostos bioativos. / The work presented in this PhD thesis comprises a series of computational and experimental studies focused on the development of new anticancer agents, an innovative methodology for the investigation of protein complex formation, and a new database of natural products based on Brazilian biodiversity, in an effort to assist and encourage the discovery and development of new pharmaceutical drugs within the country.
The first chapter describes studies that resulted in the identification and development of new molecules with anticancer activity, through the integration of biochemical experiments and molecular modeling methods in the area of medicinal chemistry. Molecular modeling studies and biochemical assays, using a library of compounds provided through collaboration with the Laboratório de Síntese de Produtos Naturais e Fármacos (LSPNF) of the University of Campinas (Unicamp), identified a number of molecules of the cyclopenta-b-indole class as inhibitors of microtubule polymerisation with substantial anti-cancer activity. These compounds proved able to modulate microtubule polymerisation in in vitro assays against the molecular target and cancer cells, with IC50 values in the range of 20 to 30 μM. Moreover, experimental studies identified the colchicine site of tubulin as the interaction region of this class, and cell migration assays confirmed their antitumour activity. Based on these results, further molecular docking and molecular dynamics studies elucidated the molecular interactions involved in binding to the tubulin protein, and these molecular models were used to guide the design, synthesis and evaluation of a novel series of compounds. From the data obtained in the computational studies, modifications were proposed and new tubulin polymerisation inhibitors were designed, synthesised and evaluated, resulting in the identification of an inhibitor with high activity, a superior pharmacodynamic profile and an IC50 of 5 μM. Alongside this, in vitro cytotoxicity assays demonstrated an interesting selectivity of these compounds for cancer cells compared to healthy cells. The studies with tubulin inhibitors presented here identified modulators of microtubule polymerisation with an excellent anti-cancer profile, which will provide a valuable scaffold for the development of new effective treatments against cancer. 
The second chapter presents a new method for predicting conformational changes and the formation of multimeric protein complexes. This method was developed during studies carried out over an exchange program at the Center for Theoretical and Biological Physics (CTBP, Rice University, USA), under the supervision of Professor Dr. José Nelson Onuchic. During this project, computer modeling studies were carried out using modern methods developed at the CTBP itself, such as Direct-Coupling Analysis (DCA) and a simulation method known as Structure-Based Modeling (SBM). In the studies presented here, the DCA and SBM methods developed at the CTBP were combined, modified and expanded to develop a new methodology able to identify conformational changes and to elucidate protein folding and oligomerization mechanisms. The results obtained through the prediction of various multimeric protein complexes with high accuracy show that this system is extremely effective and reliable for identifying interface contacts between proteins and for predicting the quaternary structure of macromolecular complexes. This new method allows the characterization and elucidation of protein systems that currently cannot be determined by purely experimental methods. The third chapter of this doctoral thesis describes the construction of a virtual database in a pioneering initiative that aims to gather and make available all the information already obtained through the study of Brazilian biodiversity. This database, entitled NuBBE DataBase, brings together various kinds of information, such as 2D and 3D molecular structures and the biological activities of several molecules already isolated by the Núcleo de Bioensaios Biossíntese e Ecofisiologia de Produtos Naturais (NuBBE), located at the Universidade Estadual Paulista Júlio de Mesquita Filho (UNESP). 
The NuBBEDB will be useful to the scientific community, providing research centers and pharmaceutical companies with information for molecular modeling studies, metabolomics, dereplication and, principally, the design and identification of new bioactive compounds.
|
1140 |
Um método de integração de dados armazenados em bancos de dados relacionais e NOSQL / A method for integrating data stored in relational and NOSQL databases. Vilela, Flávio de Assis, 08 October 2015 (has links)
The increase in quantity and variety of data available on the Web contributed to the
emergence of NOSQL approach, aiming at new demands, such as availability, schema
flexibility and scalability. At the same time, relational databases are widely used for
storing and manipulating structured data, providing stability and integrity of data, which
is accessed through a standard language such as SQL. This work presents a method for
integrating data stored in heterogeneous sources, in which an input query in standard
SQL produces a unified answer, based on the partial answers of relational and NOSQL
databases. / O aumento da quantidade e variedade de dados disponíveis na Web contribuiu com o surgimento
da abordagem NOSQL, visando atender novas demandas, como disponibilidade,
flexibilidade de esquema e escalabilidade. Paralelamente, bancos de dados relacionais são
largamente utilizados para armazenamento e manipulação de dados estruturados, oferecendo
estabilidade e integridade de dados, que são acessados através de uma linguagem
padrão, como SQL. Este trabalho apresenta um método de integração de dados armazenados
em fontes heterogêneas, no qual uma consulta de entrada em SQL produz uma resposta
unificada, baseada nas respostas parciais de bancos de dados relacionais e NOSQL.
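A minimal sketch of the mediator idea described above, with an in-memory SQLite table standing in for the relational source and a list of documents standing in for the NOSQL source; the schema, data and merge policy are all invented for the illustration:

```python
import sqlite3

# Relational source: structured rows behind standard SQL.
rel = sqlite3.connect(":memory:")
rel.execute("CREATE TABLE customer (id INTEGER, name TEXT)")
rel.executemany("INSERT INTO customer VALUES (?, ?)", [(1, "Ana"), (2, "Bruno")])

# NOSQL-style source: schema-flexible documents (here, plain dicts).
nosql_docs = [
    {"id": 3, "name": "Carla", "tags": ["vip"]},
    {"id": 2, "name": "Bruno"},  # overlaps with the relational source
]

def unified_customers():
    """Collect a partial answer from each source and merge them into
    one unified result, keyed by id."""
    partial_rel = {i: n for i, n in rel.execute("SELECT id, name FROM customer")}
    partial_nosql = {d["id"]: d["name"] for d in nosql_docs}
    partial_rel.update(partial_nosql)  # one arbitrary conflict policy for the sketch
    return sorted(partial_rel.items())

answer = unified_customers()
```

A real mediator would translate the single input SQL query into each source's native query language before merging; the sketch only shows the unification of the partial answers.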
|