11

Duomenų bazės schemos SQL aprašo generavimas funkcinių reikalavimų specifikacijos pagrindu / Generation SQL Definition of Database Schema Using Functional Requirements Specification

Kekys, Aleksas 27 May 2005 (has links)
The data modeling phase is one of the main phases in automated system development methods. Usually, data modeling is treated as an independent process whose success and robustness depend largely on the intuition and experience of the data analyst. This becomes problematic when a database is being designed for a large-scale information system. To address this, it is proposed to carry out data modeling not as a separate activity loosely connected to the overall modeling process, but as one of the fundamental system development phases. The Functional Requirements Specification Method is proposed for this purpose; its main idea is to build the database model from the user requirements for the functionality of the system being developed. The purpose of this work is to present a process for building an entity-relationship diagram from the analysed functional requirements specification metamodel and for generating an SQL DDL script (a relational schema) from that model. The presented model will be implemented as one of the modules of a CASE tool dedicated to database modeling and SQL DDL script generation. The implemented tool will be verified and its correctness experimentally validated.
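A minimal sketch of the generation step, assuming a toy entity-relationship metamodel (the names and structure here are illustrative, not those of the thesis tool):

```python
# Illustrative only: a tiny, hypothetical ER metamodel and a generator that
# emits SQL DDL from it, in the spirit of the approach described above.
from dataclasses import dataclass, field

@dataclass
class Attribute:
    name: str
    sql_type: str
    primary_key: bool = False

@dataclass
class Entity:
    name: str
    attributes: list = field(default_factory=list)

def to_ddl(entity: Entity) -> str:
    """Render one entity of the ER model as a CREATE TABLE statement."""
    cols = [f"  {a.name} {a.sql_type}" for a in entity.attributes]
    pks = [a.name for a in entity.attributes if a.primary_key]
    if pks:
        cols.append(f"  PRIMARY KEY ({', '.join(pks)})")
    return f"CREATE TABLE {entity.name} (\n" + ",\n".join(cols) + "\n);"

customer = Entity("customer", [
    Attribute("id", "INTEGER", primary_key=True),
    Attribute("name", "VARCHAR(100)"),
])
print(to_ddl(customer))
```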
12

Second-order least squares estimation in dynamic regression models

AbdelAziz Salamh, Mustafa 16 April 2014 (has links)
In this dissertation we proposed two generalizations of the Second-Order Least Squares (SLS) approach for two popular dynamic econometric models. The first is the regression model with a time-varying nonlinear mean function and autoregressive conditionally heteroskedastic (ARCH) disturbances. The second is a linear dynamic panel data model. We used a semiparametric framework in both models, where the SLS approach is based only on the first two conditional moments of the response variable given the explanatory variables; there is no need to specify the distribution of the error components in either model. For the ARCH model, under the assumption of a strong-mixing process with finite moments of some order, we established the strong consistency and asymptotic normality of the SLS estimator. It is shown that the optimal SLS estimator, which makes use of the additional information inherent in the conditional skewness and kurtosis of the process, is superior to the commonly used quasi-MLE, and the efficiency gain is significant when the underlying distribution is asymmetric. Moreover, our large-scale simulation studies showed that the optimal SLSE behaves better than the corresponding estimating function estimator in finite-sample situations. The practical usefulness of the optimal SLSE was tested by an empirical example on U.K. inflation. For the linear dynamic panel data model, we showed that the SLS estimator is consistent and asymptotically normal for large N and finite T under fairly general regularity conditions. Moreover, we showed that the optimal SLS estimator reaches a semiparametric efficiency bound. A specification test was developed, for the first time, to be used whenever the SLS is applied to real data. Our Monte Carlo simulations showed that the optimal SLS estimator performs satisfactorily in finite-sample situations compared to the first-differenced GMM and the random effects pseudo-ML estimators. The results apply under stationary and nonstationary processes, with and without exogenous regressors. The performance of the optimal SLS is robust in the near-unit-root case. Finally, the practical usefulness of the optimal SLSE was examined by an empirical study on U.S. airfares.
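For reference, the generic form of the SLS criterion from the SLS literature (the dissertation's exact moment functions and weighting may differ):

```latex
% Generic SLS criterion based on the first two conditional moments;
% a standard form, not necessarily the dissertation's exact notation.
\[
\rho_i(\theta) =
\begin{pmatrix}
 y_i - \mathbb{E}[\,y_i \mid x_i;\theta\,] \\[2pt]
 y_i^2 - \mathbb{E}[\,y_i^2 \mid x_i;\theta\,]
\end{pmatrix},
\qquad
\hat{\theta}_{\mathrm{SLS}} = \arg\min_{\theta}\;
\sum_{i=1}^{n} \rho_i(\theta)^{\top} W_i\, \rho_i(\theta),
\]
where $W_i$ is a positive semidefinite weight matrix. The optimal choice
$W_i = \mathrm{Var}(\rho_i \mid x_i)^{-1}$ involves the conditional skewness
and kurtosis, which is the source of the efficiency gain over quasi-MLE.
```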
13

Internal Dashboard

Wagnberg, Michael, Danielsson, Peter January 2018 (has links)
This project is about creating a dashboard with suitable data models containing support-ticket statistics for the company Sigma IT Consulting. Sigma's current workflow is to manually log in to the system to see the support-ticket statistics, which can be a tedious and time-consuming process. Furthermore, Sigma does not have any monitoring system for checking the health of its web application services. It needs an internal dashboard containing this information with regular updates. Our solution is to design suitable data models and implement them within a dashboard application.
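A minimal sketch of what such data models might look like (field names are assumptions for illustration, not Sigma's actual schema):

```python
# Hypothetical data models for a ticket-statistics dashboard that polls
# on a regular schedule instead of requiring manual log-ins.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TicketStats:
    service: str               # which web application service
    open_tickets: int
    closed_tickets: int
    avg_resolution_hours: float
    collected_at: datetime

@dataclass
class ServiceHealth:
    service: str
    reachable: bool            # simple up/down health-probe result
    response_ms: float
    checked_at: datetime
```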
14

Hmt : modelagem e projeto de aplicações hipermídia / HMT: hypermedia applications modeling and design

Nemetz, Fabio January 1995 (has links)
After three decades of hypermedia research, a great number of identified problems still lack a complete solution. Problems related to disorientation, cognitive overhead, interface quality, interaction, and the structuring of hypermedia application components are among the main ones. The classical user disorientation problem has received the most attention, and many solutions have been suggested: guided tours replacing free navigation, backtracking, history mechanisms, bookmarks, global and local maps, fisheye views, metaphors, and browsers, among others. At the same time, technological advances increasingly allow applications to include multimedia data, which shows that new modeling techniques for hypermedia applications are required to reduce the problems cited above. This dissertation proposes a modeling technique for hypermedia applications capable of reducing the user disorientation problem and of facilitating the identification of the comprehensible structures that link the components of an application.
This technique, HMT (Hypermedia Modeling Technique), uses four models to describe an application: the object model describes the objects of the application domain and their relationships; the hyperobject model refines the object model, adding more semantics to the relationships; the navigation model describes the links and access structures; and the interface model contains the descriptions of how the user will perceive the hypermedia objects. HMT was based on a survey of the problems relevant to hypermedia applications and mainly on an analysis of the proposals and related research found in the literature. Finally, reinforcing the viability of the proposed ideas, an application was modeled, designed, and implemented as the CD-ROM "Enciclopédia da Literatura Rio-Grandense", which deals with the literature of Rio Grande do Sul.
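A rough sketch of the four HMT layers as plain data structures (the names are illustrative, not the dissertation's notation):

```python
# Hypothetical encoding of HMT's four models as layered data structures.
from dataclasses import dataclass, field

@dataclass
class DomainObject:            # object model: domain objects and relationships
    name: str
    relations: dict = field(default_factory=dict)

@dataclass
class HyperObject:             # hyperobject model: adds link semantics
    source: DomainObject
    target: DomainObject
    semantics: str             # e.g. "is-part-of", "references"

@dataclass
class NavigationLink:          # navigation model: links and access structures
    hyper: HyperObject
    access_structure: str      # e.g. "guided tour", "index", "map"

@dataclass
class InterfaceView:           # interface model: how the user perceives objects
    obj: DomainObject
    widget: str                # e.g. "image gallery", "text pane"
```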
16

The development of a sports statistics web application : Sports Analytics and Data Models for a sports data web application

Alvarsson, Andreas January 2017 (has links)
Sports and technology have always cooperated to produce better and more specific sports statistics. Both the collection of sports game data and the ability to generate valuable statistics from it are growing. This thesis investigates the development of a sports statistics application that collects sports game data, structures the data according to suitable data models, and presents statistics in a proper way. The application was to be a web application built with modern web technologies, which led to a comparison of different software-stack solutions and web frameworks. A theoretical study of sports analytics was also conducted, providing a foundation for how sports data could be stored and how valuable sports statistics could be generated. The resulting prototype design for the sports statistics application was evaluated: interviews with people working in sports contexts found the prototype to be user-friendly, functional, and able to generate valuable statistics during games.
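For illustration, a minimal sketch of an event-based data model and one derived statistic (the event types and fields are assumptions, not the thesis's models):

```python
# Illustrative only: raw game events as rows, statistics as aggregations.
from dataclasses import dataclass
from collections import Counter

@dataclass
class GameEvent:
    game_id: str
    player: str
    event_type: str   # e.g. "goal", "assist", "shot"
    minute: int

def goals_per_player(events):
    """Aggregate raw game events into a simple per-player statistic."""
    return Counter(e.player for e in events if e.event_type == "goal")

events = [
    GameEvent("g1", "Ada", "goal", 12),
    GameEvent("g1", "Ada", "shot", 30),
    GameEvent("g1", "Max", "goal", 77),
]
print(goals_per_player(events))  # Counter({'Ada': 1, 'Max': 1})
```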
17

Na příkladu jednoduché hry demonstrujte principy vývoje aplikací pro platformu Android / Demonstrate the principles of application development for Android on the example of a simple game

Tatoušek, Petr January 2016 (has links)
This thesis demonstrates the principles of Android application development using a simple game as an example. The theoretical part describes the Android OS, briefly covering its history and especially its architecture, with emphasis on the parts relevant to the practical work; it also covers the general architectural principles of applications for this operating system. The practical part describes the application development principles for Android OS by means of a sample application: a text-based adventure game implemented in Java. It uses an SQLite database to store the game data, and a game framework allows different game data to be entered into the database, so that different games with different gameplay can be played.
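As a rough illustration of the data-driven game design described above (an assumed toy schema, sketched here with Python's sqlite3; the actual application is written in Java):

```python
# Sketch: rooms and exits stored as rows, so a new game is just new data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE room (id INTEGER PRIMARY KEY, description TEXT);
CREATE TABLE exit (
    from_room INTEGER REFERENCES room(id),
    direction TEXT,
    to_room   INTEGER REFERENCES room(id)
);
INSERT INTO room VALUES (1, 'A dark cellar.'), (2, 'A dusty hallway.');
INSERT INTO exit VALUES (1, 'north', 2);
""")

def move(current_room: int, direction: str) -> int:
    """Return the room reached by moving, or stay put on an invalid move."""
    row = conn.execute(
        "SELECT to_room FROM exit WHERE from_room = ? AND direction = ?",
        (current_room, direction)).fetchone()
    return row[0] if row else current_room

print(move(1, "north"))  # 2
```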
18

Nonlinear principal component analysis

Der, Ralf, Steinmetz, Ulrich, Balzuweit, Gerd, Schüürmann, Gerrit 15 July 2019 (has links)
We study the extraction of nonlinear data models in high-dimensional spaces with modified self-organizing maps. We present a general algorithm which maps low-dimensional lattices into high-dimensional data manifolds without violating topology. The approach is based on a new principle exploiting the specific dynamical properties of the first-order phase transition induced by the noise of the data. Moreover, we present a second algorithm for the extraction of generalized principal curves comprising disconnected and branching manifolds. The performance of the algorithms is demonstrated for both one- and two-dimensional principal manifolds and also for sparse data sets. As an application, we reveal cluster structures in a set of real-world data from the domain of ecotoxicology.
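For context, a bare-bones version of the classical Kohonen update that maps a one-dimensional lattice into the data space; this is the textbook rule that such modified algorithms build on, not the authors' noise-driven dynamics:

```python
# Classical SOM step: move the winning node and its lattice neighbors
# toward each data point, preserving the lattice topology.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 3))       # high-dimensional data points
lattice = rng.normal(size=(20, 3))     # 20 nodes of a 1-D lattice

def som_step(x, lattice, lr=0.1, width=2.0):
    winner = np.argmin(np.linalg.norm(lattice - x, axis=1))
    dist = np.abs(np.arange(len(lattice)) - winner)    # distance on the lattice
    h = np.exp(-dist**2 / (2 * width**2))              # neighborhood kernel
    return lattice + lr * h[:, None] * (x - lattice)

for x in data:
    lattice = som_step(x, lattice)
```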
19

High-Performance Processing of Continuous Uncertain Data

Tran, Thanh Thi Lac 01 May 2013 (has links)
Uncertain data has arisen in a growing number of applications such as sensor networks, RFID systems, weather radar networks, and digital sky surveys. The fact that the raw data in these applications is often incomplete, imprecise, and even misleading has two implications: (i) the raw data is not suitable for direct querying, and (ii) feeding the uncertain data into existing systems produces results of unknown quality. This thesis presents a system for uncertain data processing with two key functionalities: (i) capturing and transforming raw noisy data into rich, queryable tuples that carry the attributes needed for query processing with quantified uncertainty, and (ii) performing query processing on such tuples, capturing the changes of uncertainty as data passes through various query operators. The proposed system considers data naturally captured by continuous distributions, which is prevalent in sensing and scientific applications. The first part of the thesis addresses data capture and transformation by proposing a probabilistic modeling and inference approach. Since this task is application-specific and requires domain knowledge, the approach is demonstrated for RFID data from mobile readers. More specifically, the proposed solution involves an inference and cleaning substrate that transforms raw RFID data streams into streams of object-location tuples, where locations are inferred from raw noisy data and their uncertain values are captured by probability distributions. The second and main part of the thesis examines query processing for uncertain data modeled by continuous random variables. The proposed system includes new data models and algorithms for relational processing, with a focus on aggregation and conditioning operations. For operations of high complexity, optimizations including approximations with guaranteed error bounds are considered. Complex queries involving a mix of operations are then addressed by query planning, which, given a query, finds an efficient plan that meets user-defined accuracy requirements. Besides relational processing, the thesis also provides support for user-defined functions (UDFs) on uncertain data, aiming to compute the output distribution given uncertain input and a black-box UDF. The proposed solution employs a learning-based approach using Gaussian processes to compute approximate output with error bounds, and a suite of optimizations for high performance in online settings such as data stream processing and interactive data analysis. The techniques proposed in this thesis are thoroughly evaluated using both synthetic data with controlled properties and various real-world datasets from the domains of severe weather monitoring, object tracking using RFID readers, and computational astrophysics. The experimental results show that these techniques can yield high accuracy, meet stream speeds, and outperform existing techniques such as Monte Carlo sampling for many important workloads.
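One concrete consequence of modeling attributes with continuous distributions, sketched under the simplifying assumption of independent Gaussian attributes (not necessarily the thesis's exact model): a SUM aggregate has a closed form, since means and variances add, so no Monte Carlo sampling is needed.

```python
# Sketch: SUM over independent Gaussian-distributed attribute values.
from dataclasses import dataclass

@dataclass
class GaussianAttr:
    mean: float
    var: float

def sum_aggregate(attrs):
    """Closed-form SUM: means add, and (by independence) variances add."""
    return GaussianAttr(sum(a.mean for a in attrs),
                        sum(a.var for a in attrs))

readings = [GaussianAttr(2.0, 0.1), GaussianAttr(3.5, 0.2)]
print(sum_aggregate(readings))  # GaussianAttr(mean=5.5, var=0.30...)
```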
20

Performance evaluation for process refinement stage of SWA system

Shurrab, O., Awan, Irfan U. January 2015 (has links)
Abstract: Analyst teams periodically design, update, and verify the situational awareness (SWA) system. At the design stage, the risk assessment model initially has little information about the dynamic environment, so any missing information can directly impact the situational assessment capabilities. With this in mind, researchers have relied on various performance metrics to verify how well they were doing in assessing different situations. Before measuring the ranking capabilities of the SWA system, however, the underlying performance metric should be examined against its intended purpose. In this paper, we conducted quality-based evaluations of the performance metric named "The Ranking Capability Score". The results obtained showed that the proposed performance metric scales well over a number of scenarios. Indeed, from a data fusion perspective, the underlying metric adequately satisfies different SWA system needs and configurations.
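The abstract does not define "The Ranking Capability Score" itself, so as a stand-in this sketch scores ranking quality with Kendall's tau, one conventional way to compare a system's ranking against a ground-truth ordering:

```python
# Illustrative only: Kendall's tau as a generic ranking-quality measure,
# not the paper's metric. Values lie in [-1, 1]; 1 means identical orderings.
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    concordant = discordant = 0
    for i, j in combinations(range(len(rank_a)), 2):
        s = (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j])
        concordant += s > 0
        discordant += s < 0
    pairs = len(rank_a) * (len(rank_a) - 1) / 2
    return (concordant - discordant) / pairs

system_rank = [1, 2, 3, 4]   # ranks assigned by the SWA system
truth_rank  = [1, 3, 2, 4]   # hypothetical ground-truth severity ranks
print(kendall_tau(system_rank, truth_rank))  # 0.666...
```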
