31

O impacto de variáveis climáticas sobre o valor da produção agrícola - análise para alguns estados brasileiros / The climate impacts on the agricultural production - an analysis for some Brazilian states

Castro, Nicole Rennó 05 February 2015 (has links)
A influência do clima sobre a agricultura tem sido constantemente discutida na literatura econômica, e os resultados sugerem que este setor deve ser o mais afetado conforme as projeções atuais sobre o clima. No caso do Brasil, a temática tem sua importância destacada, uma vez que o setor agrícola e suas atividades vinculadas representam parte expressiva do PIB nacional, de modo que o desempenho econômico se apresenta vinculado aos resultados do setor. Ademais, a agricultura brasileira apresenta significativa participação no mercado internacional, sendo o país um importante player no que diz respeito à oferta global de commodities. Portanto, estudos e pesquisas que auxiliem na redução dos potenciais impactos do clima na agricultura brasileira ganham relevância, dados os efeitos sobre o mercado internacional de commodities e sobre a economia nacional. Neste contexto, o presente estudo avaliou empiricamente o impacto potencial do clima na produção agrícola dos principais estados produtores do país, por meio da estimação das elasticidades entre as variáveis temperatura e precipitação e o valor real de produção nestes estados. A fim de atingir o objetivo proposto, foi utilizado um modelo de efeitos fixos, aplicado a uma base de dados em painel, com dez estados entre 1990 e 2012. Os resultados encontrados sugerem impactos significativos do clima na agricultura, sendo aqueles relacionados à temperatura de magnitude expressivamente superior aos de precipitação. Quanto à temperatura, as relações estimadas foram predominantemente negativas, e para a precipitação ocorreu o inverso. Além disso, observaram-se respostas bastante divergentes entre os estados, sendo que o Rio Grande do Sul e o Espírito Santo se mostraram os mais vulneráveis às variações climáticas. Apenas em Goiás a agricultura respondeu positivamente a aumentos de temperatura, e na Bahia e no Mato Grosso não foram encontradas relações estatisticamente significativas. / The influence of climate on agriculture has been constantly discussed in the economic literature, and the results suggest that this should be the sector most affected under current climate projections. In the Brazilian case, the issue is particularly relevant, since the agricultural sector and its related activities represent a significant part of national GDP, so the country's economic performance is linked to the sector's results. Moreover, Brazilian agriculture has a significant share of the international market, and the country is an important player in the global supply of commodities. Therefore, studies that help reduce the potential climate impacts on Brazilian agriculture gain relevance, given the effects on the international commodity market and on the national economy. In this context, this research evaluated the potential impact of climate variables on agricultural production at the state level by estimating the elasticities between the climate variables, temperature and precipitation, and the real value of agricultural production in each state. A fixed-effects model was estimated on a panel of ten states from 1990 to 2012. The results suggest significant impacts of climate on agriculture, with the temperature effects markedly larger than the precipitation effects. For temperature the estimated relationships were predominantly negative, and for precipitation the opposite held. In addition, responses diverged widely across states; Rio Grande do Sul and Espírito Santo were the most vulnerable to climate variations. Only in Goiás did agriculture respond positively to increases in temperature, and in Bahia and Mato Grosso no statistically significant relationships between temperature and agricultural production were found.
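As an illustration of the estimation strategy summarized above, the sketch below runs a log-log fixed-effects panel regression in which the slope coefficients read directly as elasticities. It is a minimal, hypothetical example: the variable names, the synthetic data, and the log-log functional form are assumptions for illustration, not the thesis's actual specification or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical balanced panel: 10 states observed from 1990 to 2012.
rng = np.random.default_rng(42)
rows = []
for state in [f"state_{i}" for i in range(10)]:
    for year in range(1990, 2013):
        temp = rng.normal(24.0, 2.0)        # mean annual temperature (assumed units)
        precip = rng.normal(1400.0, 200.0)  # annual precipitation (assumed units)
        # Synthetic "truth": negative temperature and positive precipitation elasticities.
        value = np.exp(10.0 - 1.5 * np.log(temp) + 0.3 * np.log(precip)
                       + rng.normal(0.0, 0.1))
        rows.append({"state": state, "year": year, "temperature": temp,
                     "precipitation": precip, "production_value": value})
df = pd.DataFrame(rows)

# Log-log specification with state (entity) and year fixed effects as dummies;
# in practice one would also report cluster-robust standard errors by state.
fit = smf.ols(
    "np.log(production_value) ~ np.log(temperature) + np.log(precipitation)"
    " + C(state) + C(year)",
    data=df,
).fit()
print(fit.params.filter(like="np.log"))  # estimated elasticities
```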
32

Analýza a návrh řešení aplikační architektury finanční instituce / Analysis and Design of an Application Architecture Solution for a Financial Institution

Vychodil, Marek January 2011 (has links)
The topic of the thesis is building up the application architecture of an organization based on its actual state, the identified requirements, and the operational model. The thesis contains a description of the business architecture domain, a requirements methodology, and the creation of the organization's operational model. The conceptual data model is used as the basis of the main solution. The application architecture design is presented as an application landscape model with a description of the architecture layers and the integration concept, including a master data management concept.
33

Agrupamento personalizado de pontos em web maps usando um modelo multidimensional - APPWM / Multidimensional model for clustering points in web maps

Bigolin, Marcio January 2014 (has links)
Com o avanço da geração de informação georeferenciada torna-se extremamente importante desenvolver técnicas que auxiliem na melhora da visualização dessas informações. Neste sentido os web maps tornam-se cada vez mais comuns na difusão dessas informações. Esses sistemas permitem ao usuário explorar tendências geográficas de forma rápida e sem necessidade de muito conhecimento técnico em cartografia e softwares específicos. As áreas do mapa onde ocorre um mesmo evento com maior incidência geram visualizações confusas e que não possibilitam uma adequada tomada de decisão. Essas áreas, quando representadas através de pontos (o que é bastante comum), provocará uma sobreposição massiva de dados, devido à densidade de informações. Esta dissertação propõe uma técnica que utiliza um modelo de dados multidimensional para auxiliar a exibição das informações em um web map, de acordo com o contexto do usuário. Esse modelo organiza os dados por níveis geográficos e permite assim uma melhor compreensão da informação exibida. Os experimentos desenvolvidos mostraram que a técnica foi considerada de fácil utilização e de uma necessidade pequena de conhecimento para a execução das tarefas. Isso pode ser visto que das 59 consultas propostas para serem geradas apenas 7 precisam de mudanças significativas para serem executadas. Esses resultados permitem comprovar que o modelo se apresenta como uma boa alternativa para a tomada de decisão sobre mapas produzidos em ambiente web. / With the growing generation of geo-referenced information, it becomes extremely important to develop techniques that help improve the visualization of this information. In this sense, web maps are becoming increasingly common in the dissemination of such information. These systems allow users to explore geographical trends quickly and without much technical knowledge of cartography or specific software. Map areas where the same event occurs with high incidence produce confusing views that do not allow proper decision making. When these areas are represented by points (which is quite common), the density of information causes massive overlapping of data. This dissertation proposes a technique that uses a multidimensional data model to support the display of information on a web map according to the user's context. The model organizes data by geographical levels and thus allows a better understanding of the information displayed. The experiments showed that the technique was considered easy to use and required little prior knowledge to perform the tasks: of the 59 proposed queries, only 7 required significant changes to be executed. These results indicate that the model is a good alternative for decision making on maps produced in a web environment.
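To make the idea of organizing point data by geographic levels more concrete, here is a small Python sketch of a zoom-dependent roll-up. It is only an illustration under assumed field names and a crude zoom-to-level mapping; it is not the APPWM model itself.

```python
from collections import defaultdict

# Dimension hierarchy, coarsest to finest (assumed levels).
LEVELS = ["state", "city", "district"]

def rollup(points, zoom):
    """Pick a hierarchy level from the zoom and aggregate point counts at that level."""
    level = LEVELS[min(zoom // 6, len(LEVELS) - 1)]  # crude zoom -> level mapping
    clusters = defaultdict(lambda: {"count": 0, "lat": 0.0, "lon": 0.0})
    for p in points:
        c = clusters[p[level]]
        c["count"] += 1
        c["lat"] += p["lat"]
        c["lon"] += p["lon"]
    # Place each aggregated marker at the centroid of its member points.
    return {k: {"count": v["count"],
                "lat": v["lat"] / v["count"],
                "lon": v["lon"] / v["count"]} for k, v in clusters.items()}

points = [
    {"state": "RS", "city": "Porto Alegre", "district": "Centro", "lat": -30.03, "lon": -51.23},
    {"state": "RS", "city": "Porto Alegre", "district": "Moinhos", "lat": -30.02, "lon": -51.20},
    {"state": "SP", "city": "Campinas", "district": "Cambuí", "lat": -22.90, "lon": -47.06},
]
print(rollup(points, zoom=4))   # coarse zoom: one marker per state
print(rollup(points, zoom=13))  # fine zoom: one marker per district
```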
34

An approach to open virtual commissioning for component-based automation

Kong, Xiangjun January 2013 (has links)
Increasing market demands for highly customised products with shorter time-to-market and at lower prices are forcing manufacturing systems to be built and operated in more efficient ways. In order to overcome some of the limitations in traditional methods of automation system engineering, this thesis focuses on the creation of a new approach to Virtual Commissioning (VC). In current VC approaches, virtual models are driven by pre-programmed PLC control software. These approaches are still time-consuming and heavily reliant on control expertise, as the required programming and debugging activities are mainly performed by control engineers. Another current limitation is that virtual models validated during VC are difficult to reuse due to a lack of tool-independent data models. Therefore, in order to maximise the potential of VC, there is a need for new VC approaches and tools to address these limitations. The main contributions of this research are: (1) to develop a new approach and the related engineering tool functionality for directly deploying PLC control software based on component-based VC models and reusable components; and (2) to build tool-independent common data models for describing component-based virtual automation systems in order to enable data reusability.
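The following sketch illustrates what a tool-independent, component-based description of a virtual automation system might look like. The class names, fields, and JSON serialization are assumptions for illustration, not the data model developed in the thesis.

```python
import json
from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class Signal:
    name: str
    direction: str      # "in" or "out"
    datatype: str = "BOOL"

@dataclass
class Component:
    name: str
    kind: str                                   # e.g. "cylinder", "conveyor"
    signals: List[Signal] = field(default_factory=list)
    states: List[str] = field(default_factory=list)

@dataclass
class VirtualCell:
    name: str
    components: List[Component] = field(default_factory=list)

cell = VirtualCell(
    name="demo_cell",
    components=[
        Component("lift_cylinder", "cylinder",
                  signals=[Signal("extend", "in"), Signal("extended", "out")],
                  states=["retracted", "moving", "extended"]),
    ],
)

# A neutral exchange format: any simulation or PLC engineering tool with an
# importer could consume this instead of a vendor-specific project file.
print(json.dumps(asdict(cell), indent=2))
```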
35

Modeling and Querying Graph Data

Yang, Hong 12 March 2009 (has links)
Databases are used in many applications, spanning virtually the entire range of the data-processing services industry. The data in many database applications can be most naturally represented in the form of a graph structure consisting of various types of nodes and edges with several properties. These graph data can be classified into four categories: social networks describing the relationships between individual persons and/or groups of people (e.g., genealogy, networks of co-authorship among academics); information networks in which the structure of the network reflects the structure of the information stored in the nodes (e.g., citation networks among academic papers); geographic networks, providing geographic information about public transport systems, airline routes, etc.; and biological networks (e.g., biochemical networks, neuronal networks). In order to analyze such networks and obtain the information that users are interested in, some typical queries must be supported. Many of these query patterns cut across the categories described above, such as finding nodes with certain properties on a path or in a graph, finding the distance between nodes, finding sub-graphs, and enumerating paths. However, classical query languages such as SQL and OQL are ill-suited to the types of queries that need to be performed in the above applications. Therefore, a data model that can effectively represent graph objects and their properties, and a query language that empowers users to answer queries across multiple categories, are needed. In this research work, a graph data model and a query language are proposed to resolve the issues existing in current database applications. The proposed graph data model is an object-oriented graph data model that aims to represent graph objects and their properties for various applications. The graph query language empowers users to query graph objects and their properties in a graph under specified conditions. The capability to specify the relationships among the entities composing the queried sub-graph makes the language more flexible than others.
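As a concrete (and deliberately simplified) illustration of the kind of property-graph model and query discussed above, the Python sketch below stores typed nodes and edges with properties and answers one of the listed query patterns, the distance between two nodes, via breadth-first search. It is not the data model or query language proposed in the thesis.

```python
from collections import deque, defaultdict

class PropertyGraph:
    def __init__(self):
        self.nodes = {}                  # node_id -> {"type": ..., "props": {...}}
        self.adj = defaultdict(list)     # node_id -> [(neighbour_id, edge_type, props)]

    def add_node(self, node_id, node_type, **props):
        self.nodes[node_id] = {"type": node_type, "props": props}

    def add_edge(self, a, b, edge_type, **props):
        self.adj[a].append((b, edge_type, props))
        self.adj[b].append((a, edge_type, props))   # undirected for this sketch

    def distance(self, src, dst):
        """Number of edges on a shortest path from src to dst (None if unreachable)."""
        seen, queue = {src}, deque([(src, 0)])
        while queue:
            node, d = queue.popleft()
            if node == dst:
                return d
            for nbr, _, _ in self.adj[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append((nbr, d + 1))
        return None

g = PropertyGraph()
g.add_node("p1", "paper", title="Graph queries")
g.add_node("p2", "paper", title="Path enumeration")
g.add_node("a1", "author", name="H. Yang")
g.add_edge("a1", "p1", "wrote")
g.add_edge("p1", "p2", "cites", year=2008)
print(g.distance("a1", "p2"))   # -> 2
```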
36

Correcting for CBC model bias. A hybrid scanner data - conjoint model.

Natter, Martin, Feurstein, Markus January 2001 (has links) (PDF)
Choice-Based Conjoint (CBC) models are often used for pricing decisions, especially when scanner data models cannot be applied. To date, it is unclear how CBC models perform in terms of forecasting real-world shop data. In this contribution, we measure the performance of a Latent Class CBC model not by means of an experimental hold-out sample but via aggregate scanner data. We find that the CBC model does not accurately predict real-world market shares, thus leading to wrong pricing decisions. In order to improve its forecasting performance, we propose a correction scheme based on scanner data. Our empirical analysis shows that the hybrid method improves the performance measures considerably. (author's abstract) / Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
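The abstract does not spell out the correction scheme, so the sketch below only illustrates one common way to calibrate choice-model shares against aggregate scanner data: shifting alternative-specific constants by the log of the ratio of observed to predicted shares. The numbers are invented, and the actual hybrid method in the paper may differ.

```python
import numpy as np

def logit_shares(utilities):
    expu = np.exp(utilities - utilities.max())   # subtract max for numerical stability
    return expu / expu.sum()

# Hypothetical numbers: utilities from a CBC model vs. shares observed in scanner data.
utilities = np.array([0.2, 0.0, -0.4])           # CBC utilities for three products
observed = np.array([0.50, 0.35, 0.15])          # aggregate scanner-data shares

# One pass suffices for a plain aggregate logit; mixture models (e.g. Latent Class)
# would need the iteration to converge.
for _ in range(25):
    predicted = logit_shares(utilities)
    utilities = utilities + np.log(observed / predicted)

print(np.round(logit_shares(utilities), 3))      # -> [0.5, 0.35, 0.15]
```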
37

Data Management in an Object-Oriented Distributed Aircraft Conceptual Design Environment

Lu, Zhijie 16 January 2007 (has links)
Aircraft conceptual design, as the first design stage, provides a major opportunity to compress design cycle time and is the cheapest stage at which to make design changes. However, traditional aircraft conceptual design programs, which are monolithic programs, cannot provide satisfactory functionality to meet new design requirements due to their lack of domain flexibility and analysis scalability. Therefore, a next-generation aircraft conceptual design environment (NextADE) is needed. To build the NextADE, the framework and the data management problem are the two major problems that need to be addressed at the forefront. Solving these two problems, particularly the data management problem, is the focus of this research. In this dissertation, a distributed object-oriented framework is first formulated and tested for the NextADE. In order to improve interoperability and simplify the integration of heterogeneous application tools, data management is one of the major problems to be tackled. To solve it, taking into account the characteristics of aircraft conceptual design data, a robust, extensible object-oriented data model is proposed in accordance with the distributed object-oriented framework. By overcoming the shortcomings of the traditional approach to modeling aircraft conceptual design data, this data model makes it possible to capture specific, detailed information of aircraft conceptual design without sacrificing generality. Based upon this data model, a prototype of the data management system, which is one of the fundamental building blocks of the NextADE, is implemented using state-of-the-art information technologies. Using a general-purpose integration software package to demonstrate the efficacy of the proposed framework and the data management system, the NextADE is initially implemented by integrating the prototype of the data management system with the other building blocks of the design environment. As experiments, two case studies are conducted in the integrated design environment. One is based upon a simplified conceptual design of a notional conventional aircraft; the other is a simplified conceptual design of an unconventional aircraft. As a result of the experiments, the proposed framework and the data management approach are shown to be feasible solutions to the research problems.
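As a rough illustration (assumptions only, not the NextADE schema), the sketch below shows one way an extensible object-oriented data model can capture discipline-specific detail without sacrificing generality: components form a tree and carry generic, unit-tagged parameters rather than fixed attributes, so new analysis tools can attach new data without changing the class definitions.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Parameter:
    value: float
    unit: str
    source: str = "unspecified"   # which analysis tool produced the value

@dataclass
class Component:
    name: str
    kind: str                                      # "wing", "fuselage", "engine", ...
    parameters: Dict[str, Parameter] = field(default_factory=dict)
    children: List["Component"] = field(default_factory=list)

# A vehicle is just a component tree; new disciplines add parameters instead of
# new subclasses, which keeps heterogeneous tools interoperable.
wing = Component("main_wing", "wing",
                 parameters={"area": Parameter(125.0, "m^2", source="sizing_tool")})
aircraft = Component("concept_A", "vehicle", children=[wing])
aircraft.children[0].parameters["aspect_ratio"] = Parameter(9.5, "-")
print(aircraft)
```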
38

Towards tool support for phase 2 in 2G

Stefánsson, Vilhjálmur January 2002 (has links)
When systematically adopting a CASE (Computer-Aided Software Engineering) tool, an organisation evaluates candidate tools against a framework of requirements and selects the most suitable tool for use. A method, called 2G, has been proposed that aims at developing such frameworks based on the needs of a specific organisation.

This method includes a pilot evaluation phase, where state-of-the-art CASE tools are explored with the aim of gaining more understanding of the requirements that the organisation adopting CASE tools puts on candidate tools. This exploration results in certain output data, parts of which are used in interviews to discuss the findings of the tool exploration with the organisation. This project has focused on identifying the characteristics of these data, and subsequently on hypothesising a representation of the data, with the aim of providing guidelines for future tool support for the 2G method.

The approach to reaching this aim was to conduct a case study of a new application of the pilot evaluation phase, which resulted in data that could subsequently be analysed with the aim of identifying characteristics. This resulted in a hypothesised data representation, which was found to fit the data from the conducted application well, although certain situations were identified that the representation might not be able to handle.
39

Duomenų loginių struktūrų išskyrimas funkcinių reikalavimų specifikacijos pagrindu / Data logical structure segregation on the basis of a functional requirements specification

Jučiūtė, Laura 25 May 2006 (has links)
This master's thesis shows the place of data modelling in the information systems life cycle and the importance of data model quality for effective IS exploitation. Referring to the results of the literature analysis, the reasons why the data modelling process must be automated are presented, and current automation solutions are described. As the main purpose of this work, an original data modelling method is described, and a programmable prototype that automates one step of that method – schema integration – is introduced.
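To give a feel for the schema-integration step that the prototype automates, here is a deliberately naive Python sketch that merges two schemas by matching entity and attribute names. The thesis's actual method is richer; this is only a hypothetical illustration.

```python
def integrate(schema_a, schema_b):
    """Merge two {entity: set(attributes)} schemas into one integrated schema."""
    merged = {entity: set(attrs) for entity, attrs in schema_a.items()}
    for entity, attrs in schema_b.items():
        # Same-named entities are unified by taking the union of their attributes.
        merged.setdefault(entity, set()).update(attrs)
    return merged

crm = {"Customer": {"id", "name", "email"}, "Order": {"id", "customer_id", "total"}}
billing = {"Customer": {"id", "name", "vat_code"}, "Invoice": {"id", "order_id", "amount"}}
print(integrate(crm, billing))
# {'Customer': {'id', 'name', 'email', 'vat_code'}, 'Order': {...}, 'Invoice': {...}}
```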
40

Objektinių ir reliacinių schemų integracijos modelis / Model for integrating object and relational schemas

Bivainis, Vytenis 02 September 2008 (has links)
Šiame darbe nagrinėjama objektinių ir reliacinių schemų integruojamumo ir suderinamumo problema. Programinei įrangai kurti šiuo metu populiariausios objektinės programavimo kalbos, tačiau duomenys, kuriais manipuliuojama, dažniausiai saugojami reliacinėse duomenų bazėse, todėl aktualu programuojant naudojamas struktūras susieti su reliacinės duomenų bazės struktūromis. Organizacijų informacijų sistemose duomenys dažnai yra saugojami keliose duomenų saugyklose, yra poreikis integruoti įvairiose saugyklose esančius duomenis. Tam tikslui naudojamos federacinės duomenų bazės, besiremiančios kanoniniu duomenų modeliu. Šiame darbe aprašomas objektinių ir reliacinių schemų integracijos modelis. Pasiūlytas skurdus kanoninis duomenų modelis, kurį sudaro atributai ir apribojimai: funkcinės, jungimo/projekcijos ir poaibio priklausomybės. Aprašytos transformacijos iš reliacinių ir objektinių schemų į kanoninę schemą, algoritmas kanoninėms schemoms integruoti, kanoninės schemos transformacija į struktūrinius tipus, naudojant modifikuotą sintezės algoritmą, ir OWL. Aprašyti algoritmai leidžia pasiekti vienareikšmiškumą ir iš dalies automatizuotumą. Modifikuotas sintezės algoritmas duoda geresnius rezultatus nei standartinis, nes įvertina jungimo/projekcijos priklausomybes. Pasiūlyti algoritmai gali būti naudojami integracijai, norint atkurti konceptualiąją schemą ar objektines struktūras iš reliacinės schemos. / In this work the problem of integration and compatibility of relational and object schemas is investigated. Object-oriented programming languages are currently the most popular for building software, but the data being manipulated is usually stored in relational databases, so it is relevant to map the structures used in programming languages to relational structures. In enterprise information systems, data is usually stored in several repositories, so there is a need to integrate them. Federated databases, which rely on a canonical data model, are used for this purpose. A semantically poor canonical data model, consisting of attributes and constraints (functional, join/projection, and subset dependencies), is proposed. Algorithms are given for transforming relational and object schemas into the canonical schema, integrating canonical schemas, and transforming the canonical schema into structural types (using a modified synthesis algorithm) and into OWL. The proposed algorithms give unambiguous results and can be partially automated. The modified synthesis algorithm gives better results than the standard one, as it takes join/projection dependencies into account. The algorithms can be used to integrate schemas, as well as to restore a conceptual schema or object structures from a relational schema.
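As background for the synthesis step mentioned above, the sketch below shows the classical idea of grouping functional dependencies by their left-hand side into relation schemas. The thesis's modified algorithm additionally takes join/projection dependencies into account, which this simplified illustration omits.

```python
from collections import defaultdict

def synthesize(fds):
    """fds: iterable of (lhs_attrs, rhs_attr) functional dependencies."""
    groups = defaultdict(set)
    for lhs, rhs in fds:
        groups[frozenset(lhs)].add(rhs)
    # Each relation contains its key plus every attribute that key determines.
    return [tuple(sorted(key | rhs_set)) for key, rhs_set in groups.items()]

fds = [
    ({"order_id"}, "customer_id"),
    ({"order_id"}, "order_date"),
    ({"customer_id"}, "customer_name"),
]
for relation in synthesize(fds):
    print(relation)
# ('customer_id', 'order_date', 'order_id')
# ('customer_id', 'customer_name')
```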
