1 |
Growth effects of economic integration. The case of the EU Member States (1950-2000). Badinger, Harald. January 2001.
Has economic integration improved the postwar growth performance of the current fifteen member states of the European Union (EU)? To answer this question, we first construct an index of integration for each member state that explicitly accounts for global integration (GATT) as well as regional (European) integration. Using this variable, we test for permanent and temporary growth effects in a dynamic growth accounting framework, both in a time series setting for the (aggregate) EU and in a panel approach for the EU member states. Although the hypothesis of permanent growth effects as postulated by endogenous growth models with scale effects is clearly rejected, we find significant level effects: GDP per capita of the EU would be approximately one fifth lower today if no integration had taken place since 1950. Interestingly, two thirds of this effect are due to GATT liberalization. (Author's abstract) / Series: EI Working Papers / Europainstitut
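The distinction between permanent and temporary growth effects can be made concrete with a stylized dynamic growth regression; the notation below is an illustrative sketch, not the paper's exact specification:

```latex
% Stylized dynamic growth regression (illustrative only):
% y_{it}: log GDP per capita of member state i, INT_{it}: integration index.
\Delta \ln y_{it} = \alpha_i + \gamma\, INT_{it} + \beta\, \Delta INT_{it} + \varepsilon_{it}
```

A nonzero $\gamma$ would mean a permanently higher growth rate (the scale-effect hypothesis the study rejects), while a nonzero $\beta$ raises only the level of GDP per capita, consistent with the reported one-fifth level effect.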
|
2 |
Usando Assertivas de Correspondência para Especificação e Geração de Visões XML para Aplicações Web / Using Correspondence Assertions for the Specification and Generation of XML Views for Web Applications. Lemos, Fernando Cordeiro de. January 2007.
LEMOS, Fernando Cordeiro de. Usando Assertivas de Correspondência para Especificação e Geração de Visões XML para Aplicações Web. 2007. 115 f. Dissertação (Mestrado) - Universidade Federal do Ceará, Centro de Ciências, Departamento de Computação, Fortaleza-CE, 2007.
Web applications that have a large number of pages, whose contents are dynamically extracted from one or more databases, and that require data-intensive access and updates are known as "data-intensive Web applications" (DIWA applications) [7]. In this work, the content requirements of each page of the application are specified by an XML view, which is called a Navigation View (NV). We assume that the data of the NVs is stored in a relational or XML database. We propose an approach to specify and generate NVs for Web applications whose content is extracted from one or more data sources. In the proposed approach, an NV is specified conceptually with the help of a set of Correspondence Assertions [44], so that the definition of the NV can be generated automatically from the view's assertions.
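As a rough illustration of the idea (not the thesis's actual formalism; all names here are hypothetical), a correspondence assertion can be thought of as a mapping from a source column to a path in the XML view, from which a view instance can be generated mechanically:

```python
# Hedged sketch: correspondence assertions as column-to-path mappings.
import xml.etree.ElementTree as ET

# Each assertion maps a source column to a path in the XML navigation view.
ASSERTIONS = [
    ("title",  "book/title"),
    ("author", "book/author/name"),
    ("price",  "book/price"),
]

def generate_view(rows):
    """Build an XML view instance from relational rows using the assertions."""
    root = ET.Element("navigationView")
    for row in rows:
        rec = ET.SubElement(root, "record")  # one subtree per source row
        for column, path in ASSERTIONS:
            parts = path.split("/")
            node = rec
            for tag in parts[:-1]:           # walk/create intermediate elements
                child = node.find(tag)
                node = child if child is not None else ET.SubElement(node, tag)
            ET.SubElement(node, parts[-1]).text = str(row[column])
    return root

rows = [{"title": "XML Views", "author": "F. Lemos", "price": 30}]
print(ET.tostring(generate_view(rows), encoding="unicode"))
```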
|
3 |
Usando assertivas de correspondência para especificação e geração de visões XML para aplicações web / Using Correspondence Assertions for the Specification and Generation of XML Views for Web Applications. Lemos, Fernando Cordeiro de. January 2007.
LEMOS, Fernando Cordeiro de. Usando assertivas de correspondência para especificação e geração de visões XML para aplicações web. 2007. 127 f. Dissertação (Mestrado em Ciência da Computação) - Universidade Federal do Ceará, Fortaleza-CE, 2007.
Web applications that have a large number of pages, whose contents are dynamically extracted from one or more databases, and that require data-intensive access and updates are known as "data-intensive Web applications" (DIWA applications) [7]. In this work, the content requirements of each page of the application are specified by an XML view, which is called a Navigation View (NV). We assume that the data of the NVs is stored in a relational or XML database. We propose an approach to specify and generate NVs for Web applications whose content is extracted from one or more data sources. In the proposed approach, an NV is specified conceptually with the help of a set of Correspondence Assertions [44], so that the definition of the NV can be generated automatically from the view's assertions.
|
4 |
Usando Assertivas de Correspondência para Especificação e Geração de Visões XML para Aplicações Web / Using Correspondence Assertions for the Specification and Generation of XML Views for Web Applications. Fernando Cordeiro de Lemos. 23 March 2007.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Web applications that have a large number of pages, whose contents are dynamically extracted from one or more databases, and that require data-intensive access and updates are known as "data-intensive Web applications" (DIWA applications) [7]. In this work, the content requirements of each page of the application are specified by an XML view, which is called a Navigation View (NV). We assume that the data of the NVs is stored in a relational or XML database. We propose an approach to specify and generate NVs for Web applications whose content is extracted from one or more data sources. In the proposed approach, an NV is specified conceptually with the help of a set of Correspondence Assertions [44], so that the definition of the NV can be generated automatically from the view's assertions.
|
5 |
Scalable Data Integration for Linked Data. Nentwig, Markus. 06 August 2020.
Linked Data describes an extensive set of structured but heterogeneous data sources where entities are connected by formal semantic descriptions. In the vision of the Semantic Web, these semantic links are extended towards the World Wide Web to provide as much machine-readable data as possible for search queries. The resulting connections allow an automatic evaluation to find new insights into the data. Identifying these semantic connections between two data sources with automatic approaches is called link discovery. We derive common requirements and a generic link discovery workflow based on similarities between entity properties and associated properties of ontology concepts. Most of the existing link discovery approaches disregard the fact that in times of Big Data, an increasing volume of data sources poses new demands on link discovery. In particular, the problem of complex and time-consuming link determination escalates with an increasing number of intersecting data sources. To overcome the restriction of pairwise linking of entities, holistic clustering approaches are needed to link equivalent entities of multiple data sources to construct integrated knowledge bases. In this context, the focus on efficiency and scalability is essential. For example, reusing existing links or background information can help to avoid redundant calculations. However, when dealing with multiple data sources, additional data quality problems must also be dealt with. This dissertation addresses these comprehensive challenges by designing holistic linking and clustering approaches that enable reuse of existing links. Unlike previous systems, we execute the complete data integration workflow via a distributed processing system. At first, the LinkLion portal will be introduced to provide existing links for new applications. These links act as a basis for a physical data integration process to create a unified representation for equivalent entities from many data sources. We then propose a holistic clustering approach to form consolidated clusters for the same real-world entities from many different sources. At the same time, we exploit the semantic type of entities to improve the quality of the result. The process identifies errors in existing links and can find numerous additional links. Additionally, the entity clustering has to react to the high dynamics of the data. In particular, this requires scalable approaches for continuously growing data sources with many entities as well as additional new sources. Previous entity clustering approaches are mostly static, focusing on the one-time linking and clustering of entities from few sources. Therefore, we propose and evaluate new approaches for incremental entity clustering that support the continuous addition of new entities and data sources. To cope with the ever-increasing number of Linked Data sources, efficient and scalable methods based on distributed processing systems are required. Thus we propose distributed holistic approaches to link many data sources based on a clustering of entities that represent the same real-world object. The implementation is realized on Apache Flink. In contrast to previous approaches, we utilize efficiency-enhancing optimizations for both distributed static and dynamic clustering. An extensive comparative evaluation of the proposed approaches with various distributed clustering strategies shows high effectiveness for datasets from multiple domains as well as scalability on a multi-machine Apache Flink cluster.
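A minimal sketch of the holistic-clustering idea, assuming a toy string similarity and invented entity labels (the dissertation's actual approaches use richer similarities, semantic types, and a distributed Apache Flink implementation):

```python
# Hedged sketch: cluster equivalent entities from several sources by running
# union-find over cross-source similarity links above a threshold.
from difflib import SequenceMatcher
from itertools import combinations

entities = [  # (source, label) -- invented examples
    ("dbpedia",  "Leipzig"),
    ("geonames", "Leipzig, Saxony"),
    ("wikidata", "Leipzig"),
    ("dbpedia",  "Dresden"),
]

parent = list(range(len(entities)))

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path compression
        i = parent[i]
    return i

def union(i, j):
    parent[find(i)] = find(j)

for i, j in combinations(range(len(entities)), 2):
    if entities[i][0] == entities[j][0]:
        continue  # only link across sources (duplicate-free-source assumption)
    sim = SequenceMatcher(None, entities[i][1], entities[j][1]).ratio()
    if sim >= 0.6:  # illustrative threshold
        union(i, j)

clusters = {}
for i, e in enumerate(entities):
    clusters.setdefault(find(i), []).append(e)
print(list(clusters.values()))  # three Leipzig records merged, Dresden alone
```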
|
6 |
Using DevOps principles to continuously monitor RDF data quality. Meissner, Roy; Junghanns, Kurt. 01 August 2017.
One approach to continuously achieve a certain data quality level is to use an integration pipeline that continuously checks and monitors the quality of a data set according to defined metrics. This approach is inspired by Continuous Integration pipelines, which were introduced in the area of software development and DevOps to perform continuous source code checks. By investigating possible tools and discussing the specific requirements of RDF data sets, we derive an integration pipeline that joins current approaches from software development and the Semantic Web and reuses existing tools. As these tools were not built explicitly for CI usage, we evaluate their usability and propose possible workarounds and improvements. Furthermore, a real-world usage scenario is discussed, outlining the benefit of using such a pipeline.
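A minimal sketch of such a pipeline stage, assuming the Python rdflib package, a hypothetical input file, and an invented label-coverage metric (the paper's metric set and tooling differ):

```python
# Hedged sketch of a CI quality gate for an RDF dataset: compute a metric
# and fail the pipeline stage if it falls below a threshold.
import sys
from rdflib import Graph
from rdflib.namespace import RDFS

THRESHOLD = 0.9  # minimum fraction of subjects that must carry an rdfs:label

g = Graph()
g.parse("dataset.ttl", format="turtle")  # hypothetical input file

subjects = set(g.subjects())
labeled = {s for s in subjects if (s, RDFS.label, None) in g}
coverage = len(labeled) / len(subjects) if subjects else 1.0

print(f"label coverage: {coverage:.2%}")
if coverage < THRESHOLD:
    sys.exit(1)  # non-zero exit fails this CI pipeline stage
```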
|
7 |
Integrace a konzumace důvěryhodných Linked Data / Towards Trustworthy Linked Data Integration and Consumption. Knap, Tomáš. January 2013.
Title: Towards Trustworthy Linked Data Integration and Consumption Author: RNDr. Tomáš Knap Department: Department of Software Engineering Supervisor: RNDr. Irena Holubová, PhD., Department of Software Engineering Abstract: We are now finally at a point when datasets based upon open standards are being published on an increasing basis by a variety of Web communities, governmental initiatives, and various companies. Linked Data offers information consumers a level of information integration and aggregation agility that has up to now not been possible. Consumers can now "mashup" and readily integrate information for use in a myriad of alternative end uses. Indiscriminate addition of information can, however, come with inherent problems, such as the provision of poor quality, inaccurate, irrelevant or fraudulent information. All of this comes with associated costs, which negatively affect the data consumer's benefit and the usage and uptake of Linked Data applications. In this thesis, we address these issues by proposing ODCleanStore, a Linked Data management and querying tool able to provide data consumers with Linked Data that is cleansed, properly linked, integrated, and trustworthy according to the consumer's subjective requirements. Trustworthiness of data means that the data has associated...
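The flavor of trust-aware conflict resolution can be sketched as follows; the scoring scheme and all values are invented for illustration and are not ODCleanStore's actual policy:

```python
# Hedged sketch: conflicting values for the same property are resolved by the
# aggregate trust of the sources asserting them (all scores are invented).
from collections import defaultdict

# (source, trust score assigned by the consumer, asserted population value)
claims = [
    ("sourceA", 0.9, 1_280_000),
    ("sourceB", 0.4, 1_310_000),
    ("sourceC", 0.8, 1_280_000),
]

support = defaultdict(float)
for _, trust, value in claims:
    support[value] += trust  # each candidate value accumulates its sources' trust

best_value, best_trust = max(support.items(), key=lambda kv: kv[1])
total = sum(support.values())
print(f"resolved value: {best_value} (confidence {best_trust / total:.2f})")
```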
|
8 |
Data integration in large enterprises / Datová integrace ve velkých podnicích. Nagyová, Barbora. January 2015.
Data Integration is currently an important and complex topic for many companies, because a good, working Data Integration solution can bring multiple advantages over competitors. Data Integration is usually executed in the form of a project, which can easily fail. In order to decrease the risks and negative impact of a failed Data Integration project, there needs to be good project management, Data Integration knowledge, and the right technology in place. This thesis provides a framework for setting up a good Data Integration solution. The framework is developed based on current theory, currently available Data Integration tools, and the opinions of experts who have worked in the field for at least seven years and have proven their skills in a successful Data Integration project. This thesis does not guarantee the development of the right Data Integration solution, but it does provide guidance on how to deal with a Data Integration project in a large enterprise. The thesis is structured into seven chapters. The first chapter gives an overview of the thesis, including its scope, goals, assumptions, and expected value. The second chapter describes Data Management and basic Data Integration theory in order to distinguish the two topics and explain the relationship between them. The third chapter focuses purely on Data Integration theory, which should be known by everyone who participates in a Data Integration project. The fourth chapter analyzes the features of current Data Integration solutions available on the market and provides an overview of the most common and necessary functionalities. Chapter five forms the practical part of the thesis, where the Data Integration framework is designed based on the findings of the previous chapters and interviews with experts in the field. Chapter six then applies the framework to a real, working (anonymized) Data Integration solution, highlights the gaps between the framework and the solution, and provides guidance on how to deal with them. Chapter seven provides a summary, a personal opinion, and an outlook.
|
9 |
Echtzeit-Data-Warehouse-Systeme / Real-Time Data Warehouse Systems. Thiele, Maik; Lehner, Wolfgang. 26 January 2023.
The increasingly central role of data warehouses at all decision-making levels of an enterprise leads to a demand for highly up-to-date data, i.e., for real-time-capable data warehouse systems. This article asks to what extent existing data warehouse architectures can guarantee real-time information delivery, exposes the weaknesses of these architectures, and discusses various solution approaches.
|
10 |
Analyzing Crime Dynamics and Investigating the Great American Crime Decline. Shaik, Salma. 15 September 2022.
No description available.
|