241

IDEO Integrador de dados da Execução Orçamentária Brasileira: um estudo de caso da integração de dados das receitas e despesas nas Esferas Federal, Estadual Governo de São Paulo, e Municipal Municípios do Estado de São Paulo / The integration of multi-source heterogeneous data: an open data case study for budgetary execution in Brazil.

Beluzo, José Rodolfo 30 September 2015 (has links)
This dissertation presents a group of processes for integrating the data and schemas of revenues and expenditures in the execution of the Brazilian public budget across the three levels of government: federal, state and municipal. These processes aim to resolve the heterogeneity problems that citizens face when searching for public information published by different government entities. This information is currently disclosed through transparency portals, which must comply with the requirements of the Brazilian legal framework: among other information, records of revenues, expenditures, financial transfers and bidding processes must be published in a complete, primary, authentic and up-to-date form. Despite these legal requirements, however, there is no publication standard, and the data published by different portals contain inconsistencies and ambiguities. As a proof of concept, revenue and expenditure data were selected from the federal government, the government of the state of São Paulo and 645 municipalities of the state of São Paulo. The work standardizes a conceptual model of revenues and expenditures based on the technical budget manual published annually by the federal government. From this model, standardized data schemas were created for the datasets available on the transparency portal of each government entity, together with an integrated schema across them. Budget execution data for 2010 to 2014 were extracted from the portals, cleaned, transformed and loaded into the integrating system. With the data loaded, the prototype made it possible to obtain information about budget execution that either could not be obtained directly from the transparency portals or would require a very large compilation effort. The results of the process also made it possible to analyse and point out possible systemic flaws in the transparency portals, which can contribute to their improvement.
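To make the extract-transform-load flow described above concrete, here is a minimal Python sketch of mapping portal-specific records onto one integrated revenue schema. The source names, column names and mapping table are illustrative assumptions, not taken from the thesis or from the actual portals.

```python
import sqlite3

# Hypothetical field mappings: each portal publishes revenue records with its own
# column names; the integrator maps them onto a single conceptual schema.
FIELD_MAPS = {
    "federal":   {"exercicio": "year", "natureza_receita": "category", "valor_arrecadado": "amount"},
    "state_sp":  {"ano": "year", "rubrica": "category", "valor": "amount"},
    "municipal": {"ano_exercicio": "year", "codigo_receita": "category", "vl_receita": "amount"},
}

def transform(record, source):
    """Rename source-specific fields to the integrated schema and clean the values."""
    mapping = FIELD_MAPS[source]
    row = {target: record[src] for src, target in mapping.items()}
    # Convert Brazilian number formatting ("1.234,56") to a float.
    row["amount"] = float(str(row["amount"]).replace(".", "").replace(",", "."))
    row["source"] = source
    return row

def load(rows):
    """Load cleaned rows into the integrated store (in-memory SQLite for the sketch)."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE revenue (source TEXT, year INTEGER, category TEXT, amount REAL)")
    con.executemany("INSERT INTO revenue VALUES (:source, :year, :category, :amount)", rows)
    return con

if __name__ == "__main__":
    extracted = [
        ({"exercicio": 2014, "natureza_receita": "Impostos", "valor_arrecadado": "1.234,56"}, "federal"),
        ({"ano": 2014, "rubrica": "Impostos", "valor": "789,10"}, "state_sp"),
    ]
    con = load([transform(r, s) for r, s in extracted])
    for row in con.execute("SELECT source, year, SUM(amount) FROM revenue GROUP BY source, year"):
        print(row)
```

Once the heterogeneous sources share one schema, cross-sphere queries such as the GROUP BY above become straightforward, which is the kind of question the prototype is meant to answer.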
242

Efficient use of a protein structure annotation database

Rother, Kristian 14 August 2007 (has links)
In this work, a multitude of data on the structure and function of proteins is compiled and subsequently applied to the analysis of atomic packing. Structural analyses often require tailored protein datasets, selected by properties such as sequence features, fold, or resolution. Compiling such sets with current web resources is tedious because the necessary data are spread over many different databases. To facilitate this task, Columba, an integrated database of protein structure annotation, was created. Columba integrates sixteen databases, including the PDB, KEGG, Swiss-Prot, CATH, SCOP, the Gene Ontology, and ENZYME. Two thirds of the PDB structures in Columba are annotated by many of the other databases; the remaining third has little additional annotation, partly because the corresponding structures have only recently been published and partly because they are not proteins at all. The database can be searched through a source-specific web interface at www.columba-db.de, so users can quickly assemble a set of PDB entries that match the desired criteria. Rules for creating such datasets efficiently were derived and applied to build datasets for analysing the packing of proteins. Packing analysis quantifies the space between atoms and can identify regions of high local mobility or errors in the structure. In a reference dataset, a large number of atom-sized cavities was found just below the protein surface. In transmembrane domains, these cavities occur particularly often in channel and transport proteins that undergo conformational changes. Ligands and coenzymes bound to proteins were packed at least as tightly as the reference data. These results resolve several contradictions in the literature.
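The dataset-selection workflow the abstract describes, filtering PDB entries by annotation drawn from several integrated sources, can be illustrated with a small SQL query run from Python. The tables, columns and example values below are a deliberately simplified, hypothetical stand-in for the Columba schema.

```python
import sqlite3

# Hypothetical, simplified mirror of an integrated annotation schema:
# one table per source database, joined on the PDB entry identifier.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE pdb_entry (pdb_id TEXT PRIMARY KEY, resolution REAL);
CREATE TABLE scop_domain (pdb_id TEXT, fold TEXT);
CREATE TABLE enzyme (pdb_id TEXT, ec_number TEXT);
INSERT INTO pdb_entry VALUES ('1abc', 1.8), ('2xyz', 3.2), ('3def', 1.5);
INSERT INTO scop_domain VALUES ('1abc', 'TIM barrel'), ('3def', 'TIM barrel');
INSERT INTO enzyme VALUES ('1abc', '3.2.1.1'), ('2xyz', '1.1.1.1');
""")

# Dataset definition: high-resolution TIM-barrel enzymes.
query = """
SELECT p.pdb_id, p.resolution, e.ec_number
FROM pdb_entry p
JOIN scop_domain s ON s.pdb_id = p.pdb_id
JOIN enzyme e      ON e.pdb_id = p.pdb_id
WHERE p.resolution <= 2.0 AND s.fold = 'TIM barrel'
"""
for row in con.execute(query):
    print(row)   # -> ('1abc', 1.8, '3.2.1.1')
```

The value of the integration is that one declarative filter spans annotation that would otherwise have to be collected by hand from several websites.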
243

Using web services for customised data entry

Deng, Yanbo January 2007 (has links)
Scientific databases often need to be accessed from a variety of different applications. There are usually many ways to retrieve and analyse data already in a database. However, it can be more difficult to enter data that was originally stored in different sources and formats (e.g. spreadsheets, other databases, statistical packages). This project focuses on investigating a generic, platform-independent way to simplify the loading of databases. The proposed solution uses Web services as middleware to supply essential data management functionality such as inserting, updating, deleting and retrieving data. These functions allow application developers to easily customise their own data entry applications according to local data sources, formats and user requirements. We implemented a Web service to support loading data into the Germinate database at the New Zealand Institute of Crop & Food Research (CFR). We also provided language-specific client toolkits to help developers invoke the Web service. The toolkits allow applications to be easily customised for different platforms. In addition, we developed sample applications to help end users load data from their project data sources via the Web service. The Web service approach was evaluated through user and developer trials. The feedback from the developer trial showed that using Web services as middleware is a useful approach to allow developers and competent end users to customise data entry with minimal effort. More importantly, the customised client applications enabled end users to load data directly from their project spreadsheets and databases. This significantly reduced the effort required for exporting or transforming the source data.
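A customised data-entry client in this style essentially maps local rows onto the service's insert operation. The sketch below shows the shape of such a client in Python; the endpoint URL, payload fields and response format are hypothetical, and the third-party `requests` library is used only as a stand-in for the language-specific toolkits described in the thesis.

```python
import requests  # generic HTTP client, standing in for the thesis's client toolkits

SERVICE_URL = "https://example.org/germinate/ws/records"  # hypothetical endpoint

def spreadsheet_row_to_payload(row):
    """Map a local spreadsheet row onto the (hypothetical) service schema."""
    return {
        "accession_id": row["Accession"],
        "trait": row["Trait name"],
        "value": float(row["Score"]),
    }

def insert_record(row):
    """Invoke the insert operation and return the service's response."""
    response = requests.post(SERVICE_URL, json=spreadsheet_row_to_payload(row), timeout=10)
    response.raise_for_status()   # surface validation errors reported by the service
    return response.json()        # e.g. {"status": "ok", "id": 42}

if __name__ == "__main__":
    print(insert_record({"Accession": "NZ-001", "Trait name": "grain weight", "Score": "41.2"}))
```

The point of the middleware approach is visible here: only `spreadsheet_row_to_payload` changes from one local data source to the next, while the service keeps validation and loading logic in one place.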
244

資料交換與查詢在XML文件與關連資料庫之間 / Data Exchange and Query Language between XML Documents and Relational Databases

王瑞娟 Unknown Date (has links)
With the growing popularity of the World Wide Web (WWW, or simply the Web), more and more data is presented and accessed directly on the Web. Unlike the structured data held in relational database management systems (RDBMS), much of this data is published as HTML (Hypertext Markup Language) pages, whose tags only describe how to display the data. For representing data and interchanging it between multiple sources on the Web, XML (Extensible Markup Language) is fast emerging as the dominant standard. Like HTML, XML is a subset of SGML, but whereas HTML tags serve primarily to describe how a data item is displayed, XML tags describe the data itself, so that well-defined data can be transmitted between organizations over the Internet and reused. XML has therefore become the preferred solution for data exchange and translation between multiple sources. It also raises a problem, however: how to integrate XML documents with data stored in traditional relational databases, so that the two kinds of sources can interoperate and heterogeneous data can be treated homogeneously. Enabling bidirectional communication between these different sources is an open issue. This research therefore develops a translation model between relational data and XML documents, so that data can be exchanged and reused between the two kinds of sources in both directions.
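The bidirectional translation the abstract argues for can be illustrated with a short round-trip sketch using only the Python standard library. The table, element names and sample rows are invented for illustration and are not the mapping defined in the thesis.

```python
import sqlite3
import xml.etree.ElementTree as ET

# A tiny relational source with hypothetical content.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE book (isbn TEXT PRIMARY KEY, title TEXT, price REAL)")
con.executemany("INSERT INTO book VALUES (?, ?, ?)",
                [("111", "Databases", 35.0), ("222", "XML in Practice", 28.5)])

def rows_to_xml(connection):
    """Relational -> XML: wrap each row in an element, columns become child elements."""
    root = ET.Element("books")
    cursor = connection.execute("SELECT isbn, title, price FROM book")
    columns = [d[0] for d in cursor.description]
    for row in cursor:
        book = ET.SubElement(root, "book")
        for name, value in zip(columns, row):
            ET.SubElement(book, name).text = str(value)
    return ET.tostring(root, encoding="unicode")

def xml_to_rows(xml_text):
    """XML -> relational: read child elements back into column/value tuples."""
    return [
        (b.findtext("isbn"), b.findtext("title"), float(b.findtext("price")))
        for b in ET.fromstring(xml_text).iter("book")
    ]

xml_doc = rows_to_xml(con)
print(xml_doc)
print(xml_to_rows(xml_doc))
```

A real exchange model additionally has to handle schema definition, nesting and keys, but the two functions show the two translation directions the thesis is concerned with.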
245

整合資料在雲端環境上的分享與 隱私保護-以電子病歷資料為例 / Sharing and Protection of Integrated Data in the Cloud : Electronic Health Record as an Example

楊竣展, Yang, Jiun Jan Unknown Date (has links)
Electronic Health Records (EHRs) have gradually replaced traditional paper records; they are faster and more convenient to share and easier to integrate than paper. In recent years, the rapid development of cloud computing has allowed health information systems built on EHRs to evolve more quickly, but it also raises privacy problems: in a rapidly developing cloud environment, the privacy of data cannot yet be fully assured. Even where existing research lets a data owner express personal privacy preferences as policies, the designs lack semantics, which creates a gap between the real meaning of a preference and the policy that is enforced. This research considers EHRs stored in a cloud environment and designs a three-layer integration platform that uses semantic technology (ontologies) to integrate data from multiple parties, using OWL 2 as the integration language over the databases. Ontology-based integration on this platform lets users quickly query integrated data from multiple medical centres; privacy policies captured at the lower layer are rewritten by the integration platform and then managed and enforced at the upper layer before data is finally retrieved from the databases. The result achieves data sharing, integration and privacy protection in the cloud while respecting the data owner's privacy expectations.
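The enforcement idea, answering queries over integrated records only within the data owner's stated preferences, can be sketched in plain Python. This is not the thesis's OWL 2/ontology machinery; the record fields, roles and preference vocabulary below are invented purely to illustrate the control point between query and data.

```python
# Hypothetical integrated records gathered from two medical centres.
RECORDS = [
    {"patient": "P01", "source": "hospital_a", "field": "diagnosis",  "value": "hypertension"},
    {"patient": "P01", "source": "clinic_b",   "field": "medication", "value": "amlodipine"},
    {"patient": "P02", "source": "hospital_a", "field": "diagnosis",  "value": "asthma"},
]

# Hypothetical owner preferences: which requester roles may see which fields.
PREFERENCES = {
    "P01": {"researcher": {"diagnosis"}, "treating_physician": {"diagnosis", "medication"}},
    "P02": {"treating_physician": {"diagnosis"}},
}

def query(patient, requester_role):
    """Return only the integrated fields the owner's preference allows this role to see."""
    allowed = PREFERENCES.get(patient, {}).get(requester_role, set())
    return [r for r in RECORDS if r["patient"] == patient and r["field"] in allowed]

print(query("P01", "researcher"))          # diagnosis only
print(query("P01", "treating_physician"))  # diagnosis and medication
print(query("P02", "researcher"))          # nothing permitted
```

Expressing the preferences and the data in a shared ontology, as the thesis does, is what keeps this filtering semantically faithful when the records come from differently structured sources.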
246

Dependency discovery for data integration

Bauckmann, Jana January 2013 (has links)
Data integration aims to combine data from different sources and to provide users with a unified view of these data. This task is as challenging as it is valuable. In this thesis we propose algorithms for dependency discovery that provide necessary information for data integration. We focus on inclusion dependencies (INDs) in general and on a special form, conditional inclusion dependencies (CINDs): (i) INDs enable the discovery of structure in a given schema; (ii) INDs and CINDs support the discovery of cross-references, or links, between schemas. An IND "A in B" simply states that all values of attribute A are included in the set of values of attribute B. We propose an algorithm that discovers all inclusion dependencies in a relational data source. The challenge of this task lies in the complexity of testing all attribute pairs and of comparing all of each attribute pair's values. The complexity of existing approaches depends on the number of attribute pairs, while ours depends only on the number of attributes. Our algorithm therefore makes it possible to profile entirely unknown data sources with large schemas by discovering all INDs. Further, we provide an approach to extract foreign keys from the identified INDs, and we extend the IND discovery algorithm to find three special types of INDs: (i) composite INDs, such as "AB in CD"; (ii) approximate INDs, which allow a certain share of the values of A to be missing from B; and (iii) prefix and suffix INDs, which represent special cross-references between schemas. Conditional inclusion dependencies are inclusion dependencies whose scope is limited by conditions over several attributes; only the matching part of the instance must adhere to the dependency. We generalize the definition of CINDs by distinguishing covering and completeness conditions and define quality measures for conditions. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. The challenge here is twofold: (i) which (and how many) attributes should be used for the conditions, and (ii) which attribute values should be chosen for them? Previous approaches rely on pre-selected condition attributes or can only discover conditions that meet quality thresholds of 100%; in our approach, the condition attributes, attribute combinations and attribute values are selected automatically. Our approaches were motivated by two application domains: data integration in the life sciences and link discovery for linked open data. We show the efficiency and the benefits of our approaches for use cases in these domains.
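To make the unary-IND concept and the attribute-versus-attribute-pair cost distinction tangible, here is a small in-memory sketch: each value is indexed by the attributes that contain it, and the candidate set for an attribute is intersected across its values. This is only an illustration of that idea on toy data, not the thesis's algorithm, which works on large relational sources; the attribute names and values are invented.

```python
from collections import defaultdict

# Toy relation instances: attribute name -> set of values it contains.
ATTRIBUTES = {
    "orders.customer_id": {1, 2, 3},
    "customers.id":       {1, 2, 3, 4, 5},
    "customers.country":  {"DE", "BR"},
    "shipments.country":  {"DE"},
}

def discover_inds(attributes):
    """Discover all unary INDs A ⊆ B, driven by a value-to-attributes index."""
    # Invert the instances: every value points to the attributes whose instance contains it.
    value_index = defaultdict(set)
    for attr, values in attributes.items():
        for v in values:
            value_index[v].add(attr)

    # For each attribute A, start with all attributes as candidates and intersect
    # with the owners of each of A's values; whatever survives includes all of A.
    inds = []
    for attr, values in attributes.items():
        candidates = set(attributes)
        for v in values:
            candidates &= value_index[v]
        inds.extend((attr, other) for other in sorted(candidates) if other != attr)
    return inds

for a, b in discover_inds(ATTRIBUTES):
    print(f"{a} ⊆ {b}")
# e.g. orders.customer_id ⊆ customers.id and shipments.country ⊆ customers.country
```

The discovered INDs `orders.customer_id ⊆ customers.id` are exactly the kind of candidate foreign keys the thesis then filters and refines.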
247

Ontology-based approach to enable feature interoperability between CAD systems

Tessier, Sean Michael 23 May 2011 (has links)
Data interoperability between computer-aided design (CAD) systems remains a major obstacle to information integration and exchange in a collaborative engineering environment. The standards for CAD data exchange have remained largely restricted to geometric representations, causing the design intent portrayed through construction history, features, parameters, and constraints to be discarded in the exchange process. In this thesis, an ontology-based framework is proposed to allow for the full exchange of semantic feature data. A hybrid ontology approach is proposed, where a shared base ontology is used to convey the concepts that are common amongst different CAD systems, while local ontologies represent the feature libraries of individual CAD systems as combinations of these shared concepts. A three-branch CAD feature model is constructed to reduce ambiguity in the construction of local ontology feature data. Boundary representation (B-Rep) data corresponding to the output of the feature operation is incorporated into the feature data to enhance data exchange. The Web Ontology Language (OWL) is used to construct a shared base ontology and a small feature library, which allows the use of existing ontology reasoning tools to infer new relationships and information between heterogeneous data. A combination of OWL and SWRL (Semantic Web Rule Language) rules is developed to allow a feature from an arbitrary source system, expressed via the shared base ontology, to be automatically classified and translated into the target system. These rules relate input parameters and reference types to expected B-Rep objects, allowing classification even when feature definitions vary or when little is known about the source system. In cases where the source system is well known, this approach also permits direct translation rules to be implemented. With such a flexible framework, a neutral feature exchange format could be developed.
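The classification step, mapping a feature described only through shared concepts onto a target system's feature type by matching its parameters and B-Rep output, can be caricatured in a few lines of Python. This is not the thesis's OWL/SWRL rule base; the feature names, properties and rules below are hypothetical and serve only to show the shape of the rule matching.

```python
# A source-system feature described only through shared-base concepts:
# its input parameters, reference types, and the B-Rep entities it produced.
source_feature = {
    "parameters": {"depth", "diameter"},
    "references": {"planar_face"},
    "brep_output": {"cylindrical_face", "planar_face"},
}

# Hypothetical classification rules for a target system, in the spirit of
# "input parameters + expected B-Rep objects => target feature type".
RULES = [
    {"name": "SimpleHole",
     "parameters": {"depth", "diameter"},
     "brep_output": {"cylindrical_face"}},
    {"name": "RectangularPocket",
     "parameters": {"depth", "length", "width"},
     "brep_output": {"planar_face"}},
]

def classify(feature, rules):
    """Return target feature types whose required parameters and B-Rep output are all present."""
    return [
        r["name"] for r in rules
        if r["parameters"] <= feature["parameters"]
        and r["brep_output"] <= feature["brep_output"]
    ]

print(classify(source_feature, RULES))   # ['SimpleHole']
```

Encoding the same rules in OWL and SWRL, as the thesis does, lets a reasoner perform this matching over the shared base ontology rather than over hand-written data structures.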
248

Ein Integrations- und Darstellungsmodell für verteilte und heterogene kontextbezogene Informationen / An Integration and Representation Model for Distributed and Heterogeneous Contextual Information

Goslar, Kevin 07 February 2007 (has links) (PDF)
Context-awareness, the systematic consideration of information from the environment of an application, can provide significant benefits in many areas of business and technology. To be really useful, that is, to support real-world processes as harmoniously as a thoughtful human assistant would, practical context-aware applications need a comprehensive and detailed base of contextual information describing all relevant aspects of the real world. As a matter of principle, such comprehensive contextual information arises distributed across many places and data sources: in context-sensing infrastructures, on end devices, and in ordinary, partly pre-existing applications that know about the context through the real-world processes they support. This thesis facilitates the use of contextual information by reducing the complexity of procuring distributed and heterogeneous contextual information, providing an easily usable method for representing a comprehensive context model assembled from distributed and heterogeneous data sources. In particular, it addresses two problems: a consumer of comprehensive contextual information must know and be able to access several different data sources, and must know how to combine the contextual information taken from these different, isolated sources into a meaningful representation of the context; the latter knowledge, the semantic relationships between pieces of contextual information in different sources, cannot be modelled with the current state of the art. These problems are addressed by an integration and representation model for contextual information that allows comprehensive context models to be composed from information held in distributed and heterogeneous data sources. The model combines an information integration model for distributed and heterogeneous information (comprising an access model for heterogeneous data sources, an integration model and an information relation model) with a representation model for context that formalizes the respective real-world domain, that is, the real-world objects and their semantic relations, in an intuitive, reusable and modular way, based on ontologies and problem-specific extensions of Semantic Web techniques. The resulting model consists of five layers that represent different aspects of the information integration solution. The achievement of the objectives is assessed against a requirements analysis of the problem domain. The technical feasibility and usefulness of the model are demonstrated by an engine implementing the approach and by a complex business application scenario: a user profile that integrates information from several data sources and is used by a number of context-aware applications, among them a context-aware car navigation system, a restaurant finder and an enhanced tourist guide. Problems regarding privacy and social effects, the integration of the solution into existing environments and processes, and technical issues such as the scalability and performance of the model are also discussed.
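The core of the integration layer, pulling context from heterogeneous sources and translating each source's vocabulary into one shared context model, can be sketched briefly in Python. The sources, attribute names and adapter mappings below are invented for illustration; they are not the five-layer model or the ontology-based representation developed in the thesis.

```python
# Hypothetical context sources with heterogeneous shapes and vocabularies.
def calendar_source():
    return {"appointment": {"title": "Design review", "room": "B2.14", "starts": "09:30"}}

def positioning_source():
    return {"loc": "building-B/floor-2"}

def device_source():
    return {"terminal": {"type": "phone", "profile": "silent"}}

# Integration layer: maps each source's vocabulary onto the shared context model.
ADAPTERS = {
    "calendar": lambda d: {"activity": d["appointment"]["title"], "location_hint": d["appointment"]["room"]},
    "position": lambda d: {"location": d["loc"]},
    "device":   lambda d: {"device_type": d["terminal"]["type"], "notification_mode": d["terminal"]["profile"]},
}

def build_context():
    """Pull from every source, translate, and merge into one context model for consumers."""
    raw = {"calendar": calendar_source(), "position": positioning_source(), "device": device_source()}
    context = {}
    for name, data in raw.items():
        context.update(ADAPTERS[name](data))
    return context

print(build_context())
```

A consumer of this merged model no longer needs to know which source contributed which attribute, which is exactly the complexity reduction the thesis is after; the ontological representation additionally captures how the merged attributes relate to one another.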
249

Skaitmeninės antžeminės televizijos paslaugos duomenų saugyklos ir OLAP galimybių taikymas ir tyrimas / Digital video broadcasting terrestrial service's data warehouse and OLAP opportunities research and application

Juškaitis, Renatas 04 March 2009 (has links)
This master's thesis investigates the capabilities of a data warehouse and OLAP tools and their practical use in the sales process of a digital terrestrial television (DVB-T) service offered to end users. The work comprises a thorough analysis of the problem domain and the design and implementation of the data warehouse, the data integration and the data cubes, carried out with the analysis and integration services of MS SQL Server 2005.
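As a rough illustration of what a data cube over such sales data provides, the snippet below aggregates an invented fact table across two dimensions with pandas; it is only a stand-in for the SQL Server 2005 Analysis Services cubes the thesis actually builds, and all column names and figures are made up.

```python
import pandas as pd

# Invented fact table: one row per DVB-T package sale.
sales = pd.DataFrame({
    "month":   ["2008-10", "2008-10", "2008-11", "2008-11", "2008-11"],
    "region":  ["Vilnius", "Kaunas",  "Vilnius", "Kaunas",  "Vilnius"],
    "package": ["basic",   "basic",   "premium", "basic",   "basic"],
    "revenue": [15.0,       15.0,       29.0,      15.0,      15.0],
})

# A small "cube": revenue aggregated over the month x region dimensions,
# roughly what a slice of an OLAP cube would return for this data.
cube = pd.pivot_table(sales, values="revenue", index="month",
                      columns="region", aggfunc="sum", fill_value=0)
print(cube)
```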
250

Development of Wastewater Collection Network Asset Database, Deterioration Models and Management Framework

Younis, Rizwan January 2010 (has links)
The dynamics around managing urban infrastructure are changing dramatically. Today's infrastructure management challenges, in the wake of shrinking coffers and stricter stakeholders' requirements, include finding better condition assessment tools and prediction models, and making effective and intelligent use of hard-earned data to ensure the sustainability of urban infrastructure systems. Wastewater collection networks, an important and critical component of urban infrastructure, have been neglected, and as a result municipalities in North America and other parts of the world have accrued significant liabilities and infrastructure deficits. To reduce the cost of ownership, to cope with heightened accountability, and to provide reliable and sustainable service, these systems need to be managed in an effective and intelligent manner. The overall objective of this research is to present a new strategic management framework and related tools to support multi-perspective maintenance, rehabilitation and replacement (M, R&R) planning for wastewater collection networks. The principal objectives of this research are: (1) to develop a comprehensive wastewater collection network asset database consisting of high-quality condition assessment data to support the work presented in this thesis as well as future research in this area; (2) to propose a framework and related system to aggregate heterogeneous data from municipal wastewater collection networks and develop a better understanding of their historical and future performance; (3) to develop statistical models to understand the deterioration of wastewater pipelines; (4) to investigate how strategic management principles and theories can be applied to effectively manage wastewater collection networks, and to propose a new management framework and related system; and (5) to demonstrate the application of the strategic management framework and economic principles, along with the proposed deterioration model, to develop long-term financial sustainability plans for wastewater collection networks. A relational database application, WatBAMS (Waterloo Buried Asset Management System), consisting of high-quality data from the City of Niagara Falls wastewater collection system, is developed. The pipeline inspections were completed using a relatively new Side Scanner and Evaluation Technology camera that has advantages over traditional Closed Circuit Television cameras. Appropriate quality assurance and quality control procedures were developed and adopted to capture, store and analyze the condition assessment data. To aggregate heterogeneous data from municipal wastewater collection systems, a data integration framework based on a data warehousing approach is proposed. A prototype application, BAMS (Buried Asset Management System), based on XML technologies and specifications, shows an implementation of the proposed framework. Using pipeline condition assessment data from the City of Niagara Falls wastewater collection network, the limitations of ordinary and binary logistic regression methodologies for deterioration modeling of wastewater pipelines are demonstrated, and two new empirical models based on the ordinal regression modeling technique are proposed. A new multi-perspective (operational/technical, social/political, regulatory, and financial) strategic management framework based on a modified balanced-scorecard model is developed.
The proposed framework is based on the findings of the first Canadian National Asset Management workshop held in Hamilton, Ontario in 2007. The application of the balanced-scorecard model, along with additional management tools such as strategy maps, dashboard reports and business intelligence applications, is presented using data from the City of Niagara Falls. Using economic principles and example management scenarios, the Monte Carlo simulation technique is applied together with the proposed deterioration model to forecast financial requirements for long-term M, R&R plans for wastewater collection networks. A myriad of asset management systems and frameworks exist for transportation infrastructure; however, to date few efforts have concentrated on understanding the performance behaviour of wastewater collection systems and on developing effective and intelligent M, R&R strategies. Incomplete inventories and the scarcity and poor quality of existing datasets on wastewater collection systems were found to be critical limiting issues for research in this field. The existing deterioration models were found either to violate model assumptions or to rest on assumptions that could not be verified because of limited data of questionable quality. The degradation of reinforced concrete pipes was found to be affected by age, whereas the degradation of vitrified clay pipes was not age-dependent. The results of the financial simulation model show that the City of Niagara Falls can save millions of dollars in the long term by following a proactive M, R&R strategy. The work presented in this thesis provides insight into how an effective and intelligent management system can be developed for wastewater collection networks. The proposed framework and related system will support the sustainability of wastewater collection networks and assist municipal public works departments in proactively managing them.
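The coupling of a deterioration model with Monte Carlo simulation to forecast long-term M, R&R costs can be sketched very roughly as follows. The condition grades, transition probabilities, unit costs and intervention policy below are all invented placeholders, not the thesis's calibrated ordinal-regression models or the City of Niagara Falls figures; the sketch only shows how repeated simulation runs turn a deterioration model into a cost forecast with a range.

```python
import random

STATES = [1, 2, 3, 4, 5]                                # 1 = excellent ... 5 = failed (invented grades)
DEGRADE_PROB = {1: 0.05, 2: 0.08, 3: 0.12, 4: 0.20}     # yearly chance of dropping one grade (invented)
REHAB_COST = {4: 40_000, 5: 120_000}                    # cost per pipe when the policy triggers work (invented)

def simulate_network(n_pipes=500, years=25, intervene_at=4, seed=None):
    """One Monte Carlo run: age the network year by year and total the intervention costs."""
    rng = random.Random(seed)
    pipes = [1] * n_pipes
    total_cost = 0.0
    for _ in range(years):
        for i, state in enumerate(pipes):
            if state < 5 and rng.random() < DEGRADE_PROB.get(state, 0.0):
                pipes[i] = state + 1
            if pipes[i] >= intervene_at:        # proactive policy: act at grade 4, not at failure
                total_cost += REHAB_COST[pipes[i]]
                pipes[i] = 1                    # renewed pipe returns to excellent condition
    return total_cost

runs = [simulate_network(seed=s) for s in range(200)]
print(f"mean 25-year cost: {sum(runs) / len(runs):,.0f}")
print(f"range: {min(runs):,.0f} .. {max(runs):,.0f}")
```

Comparing such runs under different `intervene_at` policies is the kind of scenario analysis that supports the thesis's conclusion that a proactive M, R&R strategy saves money in the long term.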
