51

Návrh informačního systému / Information System Design

Baliak, Adam January 2021 (has links)
This thesis aims to design an information system for an organization. The information system will give the organization a single, unified place to organize data about its current development projects. The proposed solution will be integrated with other information systems, such as a CRM, which was chosen as a ready-made product to reduce costs. The thesis is divided into three chapters: the first covers the theoretical background, the second presents the analysis, and the third proposes the design of the new information system.
52

Conceptual Factors and Fuzzy Data

Glodeanu, Cynthia Vera 20 December 2012 (has links)
With the growing number of large data sets, the necessity of complexity reduction applies today more than ever before. Moreover, some data may also be vague or uncertain. Thus, whenever we have an instrument for data analysis, the questions of how to apply complexity-reduction methods and how to treat fuzzy data arise rather naturally. In this thesis, we discuss these issues for the very successful data analysis tool Formal Concept Analysis. In particular, we propose different methods for complexity reduction based on qualitative analyses, and we elaborate on various methods for handling fuzzy data. These two topics split the thesis into two parts: data reduction is mainly dealt with in the first part, whereas we focus on fuzzy data in the second. Although each chapter may be read almost on its own, each one builds on and uses results from its predecessors. The main crosslink between the chapters is given by the reduction methods and fuzzy data; in particular, we also discuss complexity-reduction methods for fuzzy data, combining the two issues that motivate this thesis.
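As a concrete illustration of the Formal Concept Analysis setting the thesis builds on, here is a minimal brute-force sketch that enumerates the formal concepts of a toy crisp (non-fuzzy) context. The objects, attributes, and incidence relation are invented, and real FCA implementations use dedicated algorithms such as NextClosure rather than exhaustive subset closure.

```python
from itertools import combinations

# Toy formal context: which objects carry which attributes (invented example).
objects = ["o1", "o2", "o3", "o4"]
attributes = ["a", "b", "c"]
incidence = {
    "o1": {"a", "b"},
    "o2": {"a", "c"},
    "o3": {"a", "b", "c"},
    "o4": {"b"},
}

def common_attributes(objs):
    """Derivation operator: attributes shared by all objects in objs."""
    return set(attributes).intersection(*(incidence[o] for o in objs)) if objs else set(attributes)

def common_objects(attrs):
    """Derivation operator: objects that have every attribute in attrs."""
    return {o for o in objects if attrs <= incidence[o]}

# A formal concept is a pair (extent, intent) closed under both derivation operators.
concepts = set()
for r in range(len(attributes) + 1):
    for attrs in combinations(attributes, r):
        extent = common_objects(set(attrs))
        intent = common_attributes(extent)          # closure of the attribute set
        concepts.add((frozenset(extent), frozenset(intent)))

for extent, intent in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[1]))):
    print(sorted(extent), sorted(intent))
```

Complexity reduction in this setting means shrinking the number of attributes or concepts while preserving as much of this concept structure as possible; the fuzzy variant replaces the crisp incidence sets with membership degrees.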
53

Multi-level kvantitativní vyhodnocení využití cizinců v českém fotbalu / Multi-level quantitative evaluation of the use of foreigners in Czech football

Riedl, Jakub January 2021 (has links)
Title: Multi-level quantitative evaluation of the use of foreigners in Czech football Objectives: The purpose of this thesis is to evaluate how foreign players have influenced the Czech first football league in the seasons from 1993/94 to 2019/20, using multilevel modelling of longitudinal data to answer two questions: 1. Do foreign players affect attendance in the Czech first league? 2. Do foreigners affect the number of Czech players in the top league? A secondary goal is to determine whether multilevel analysis is a suitable method for evaluating sport migration in a primary sport in a semi-peripheral country. Methods: The thesis uses multilevel analysis of longitudinal data to explain dependent variables obtained from the Czech first football league between the seasons 1993/94 and 2019/20. Results: The results show that foreign players have no statistically significant effect on attendance. The number of foreign players in the Czech league is increasing on average by 0.22 players per club per year, while the number of Czech players decreased across all 27 seasons by 0.13 per club per year. The relationship between the dependent variable of Czech players and the independent variable of foreign...
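To make the modelling approach concrete, below is a minimal sketch of a random-intercept growth model (seasons nested within clubs) fitted to entirely synthetic club-season data; the 0.22 players-per-season trend is borrowed from the abstract only to generate plausible numbers, and statsmodels' MixedLM stands in for whatever software the thesis actually used.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Synthetic panel standing in for the 1993/94-2019/20 league data:
# linear growth in foreign players per club plus club-specific random intercepts.
rows = []
for club in [f"club_{i}" for i in range(16)]:
    club_offset = rng.normal(0, 1.5)                      # between-club variation
    for season in range(27):
        foreigners = 3 + 0.22 * season + club_offset + rng.normal(0, 1.0)
        rows.append({"club": club, "season": season, "foreigners": foreigners})
df = pd.DataFrame(rows)

# Two-level model: repeated seasons (level 1) nested in clubs (level 2).
model = smf.mixedlm("foreigners ~ season", df, groups=df["club"])
result = model.fit()
print(result.summary())   # the 'season' coefficient estimates the per-season change per club
```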
54

Material design using surrogate optimization algorithm

Khadke, Kunal R. 28 February 2015 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Nanocomposite ceramics have been widely studied in order to tailor desired properties at high temperatures. Methodologies for material design are still being developed. While finite element modeling (FEM) provides significant insight into material behavior, few design researchers have addressed the design paradox that accompanies the rapid expansion of the design space. A surrogate optimization model management framework has been proposed to make this design process tractable. In the surrogate optimization material design tool, the analysis cost is reduced by performing simulations on the surrogate model instead of the high-fidelity finite element model. The methodology is applied to find the optimal number of silicon carbide (SiC) particles in a silicon nitride (Si3N4) composite with maximum fracture energy [2]. Along with a deterministic optimization algorithm, model uncertainties have also been considered through a robust design optimization (RDO) method, ensuring a design with minimum sensitivity to changes in the parameters. Applied to nanocomposite design, these methodologies significantly reduce cost and design cycle time.
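The abstract's core idea, replacing expensive high-fidelity evaluations with a cheap surrogate that is optimized instead, can be sketched as follows; the one-dimensional response function, sample sizes, and Gaussian-process surrogate are illustrative assumptions, not the thesis's actual FEM model or algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from scipy.optimize import minimize

# Stand-in for an expensive FEM evaluation, e.g. fracture energy vs. a design variable.
def expensive_simulation(x):
    return -np.sin(3 * x) - x**2 + 0.7 * x          # hypothetical response surface

# 1. Sample the design space sparsely with the high-fidelity model.
X_train = np.linspace(0.0, 2.0, 6).reshape(-1, 1)
y_train = expensive_simulation(X_train).ravel()

# 2. Fit a cheap surrogate (Gaussian process) to those samples.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.5),
                              normalize_y=True).fit(X_train, y_train)

# 3. Optimize on the surrogate instead of the expensive model (multi-start to avoid local optima).
surrogate = lambda x: -gp.predict(np.atleast_2d(x))[0]       # negate to maximize the prediction
best = min((minimize(surrogate, x0, bounds=[(0.0, 2.0)]) for x0 in [0.2, 1.0, 1.8]),
           key=lambda r: r.fun)
print(f"surrogate optimum near x = {best.x[0]:.3f}")
```

In a model-management framework, the optimum found on the surrogate would then be re-checked with the high-fidelity model and the surrogate refitted, repeating until the two agree.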
55

A Comprehensive Pan-Cancer Analysis for Pituitary Tumor-Transforming Gene 1

Gong, Siming, Wu, Changwu, Duan, Yingjuan, Tang, Juju, Wu, Panfeng 04 April 2023 (has links)
Pituitary tumor-transforming gene 1 (PTTG1) encodes a multifunctional protein that is involved in many cellular processes. However, the potential role of PTTG1 in tumor formation and its prognostic function in human pan-cancer is still unknown. The analysis of gene alteration, PTTG1 expression, prognostic function, and PTTG1-related immune infiltration in 33 types of tumors was performed based on various databases such as The Cancer Genome Atlas database, the Genotype-Tissue Expression database, and the Human Protein Atlas database. Additionally, PTTG1-related gene enrichment analysis was performed to investigate the potential relationship and possible molecular mechanisms between PTTG1 and tumors. Overexpression of PTTG1 may lead to tumor formation and poor prognosis in various tumors. Consequently, PTTG1 acts as a potential oncogene in most tumors. Additionally, PTTG1 is related to immune infiltration, immune checkpoints, tumor mutational burden, and microsatellite instability. Thus, PTTG1 could be a potential biomarker for both prognosis and treatment outcomes, and it could also be a promising target in tumor therapy.
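As a rough illustration of one step in such a pan-cancer workflow, the sketch below compares tumor versus normal expression of a single gene; the values are simulated stand-ins for TCGA/GTEx data, and the resulting effect size and p-value carry no biological meaning.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated log2 expression values standing in for TCGA tumor and GTEx normal samples.
tumor_expr = rng.normal(loc=5.8, scale=1.2, size=200)
normal_expr = rng.normal(loc=4.1, scale=1.0, size=150)

# Typical pan-cancer step: test for differential expression between tumor and normal tissue.
u_stat, p_value = stats.mannwhitneyu(tumor_expr, normal_expr, alternative="two-sided")
log2_fc = tumor_expr.mean() - normal_expr.mean()   # difference of means on the log2 scale
print(f"log2 fold change = {log2_fc:.2f}, Mann-Whitney p = {p_value:.2e}")
```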
56

Data driven marketing : How to gain relevant insights through Google Analytics

Carlsson Ståbi, Jenny January 2019 (has links)
In this report, problems regarding the retrieval, measurement, and analysis of data when analysing marketing effects in the web analytics tool Google Analytics are discussed. A correct setup, configuration, maintenance, and campaign tracking, together with an understanding of the data in Google Analytics, are essential to achieve relevant insights. This matters because many Swedish marketing departments experience issues related to their setup of Google Analytics as well as its ongoing configuration and maintenance. A literature study was conducted to gather information, focusing on theories from researchers and experts in the fields of web analytics and marketing analytics. Google Analytics data and reports from several Swedish companies were studied to gain a deep understanding of how the tool is used for measuring and analysing marketing effects. Interviews with employees at marketing departments and media agencies were conducted and analysed qualitatively. A thematic analysis of the interviews resulted in eight themes, which are presented in the results section; the results are then analysed and discussed in relation to the theory. The interviews showed a difference in knowledge and experience between senior and junior analysts, and a significant learning curve when working in Google Analytics. The junior analysts trusted the data and did not know about campaign tracking and filters, in contrast to the senior analysts, who did not trust the data as a control mechanism and did work with campaign tracking and filters. Furthermore, the senior analysts had a better understanding of the data models in Google Analytics, such as attribution models, which are known to tell different stories depending on which attribution model is used. The conclusions identify four capabilities needed to gain relevant insights: more and better control over the setup and the data, wider use of campaign tracking, broader knowledge of the data and the data models in Google Analytics, and knowledge of the business the organisation is conducting.
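Campaign tracking, one of the practices separating the senior from the junior analysts in the interviews, comes down to tagging links with UTM parameters so that Google Analytics can attribute traffic to a source, medium and campaign. A minimal sketch follows; the function name and example values are invented.

```python
from urllib.parse import urlencode, urlparse, urlunparse

def tag_campaign_url(base_url: str, source: str, medium: str, campaign: str) -> str:
    """Append standard UTM parameters so Google Analytics can attribute the visit."""
    params = urlencode({
        "utm_source": source,      # e.g. "newsletter"
        "utm_medium": medium,      # e.g. "email"
        "utm_campaign": campaign,  # e.g. "spring_sale"
    })
    parts = urlparse(base_url)
    query = f"{parts.query}&{params}" if parts.query else params
    return urlunparse(parts._replace(query=query))

print(tag_campaign_url("https://www.example.com/landing", "newsletter", "email", "spring_sale"))
```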
57

An Integrated Framework for Automated Data Collection and Processing for Discrete Event Simulation Models

Rodriguez, Carlos 01 January 2015 (has links)
Discrete Event Simulation (DES) is a powerful modeling and analysis tool used in different disciplines. DES models require data in order to determine the different parameters that drive the simulations. The literature about DES input data management indicates that the preparation of the necessary input data is often a highly manual process, which causes inefficiencies, significant time consumption and a negative user experience. The focus of this research investigation is the manual data collection and processing (MDCAP) problem prevalent in DES projects. This research investigation presents an integrated framework to solve the MDCAP problem by classifying the data needed for DES projects into three generic classes. Such classification permits automating and streamlining the preparation of the data, allowing DES modelers to collect, update, visualize, fit, validate, tally and test data in real time by performing intuitive actions. In addition to the proposed theoretical framework, this project introduces an innovative user interface that was programmed based on the ideas of the proposed framework. The interface is called DESI, which stands for Discrete Event Simulation Inputs. The proposed integrated framework to automate DES input data preparation was evaluated against benchmark measures presented in the literature in order to show its positive impact on DES input data management. This research investigation demonstrates that the proposed framework, instantiated by the DESI interface, addresses current gaps in the field, reduces the time devoted to input data management within DES projects and advances the state of the art in DES input data management automation.
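One step such a framework would automate, fitting and validating a distribution for a DES input stream, can be sketched as follows; the exponential inter-arrival data and the specific fit and test calls are illustrative assumptions, not the DESI implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
interarrivals = rng.exponential(scale=4.0, size=500)   # stand-in for collected arrival data

# Fit a candidate distribution to the raw observations...
loc, scale = stats.expon.fit(interarrivals, floc=0.0)  # location fixed at zero

# ...and validate the fit before feeding the parameter into the simulation model.
ks_stat, p_value = stats.kstest(interarrivals, "expon", args=(loc, scale))
print(f"fitted mean interarrival time = {scale:.2f}, KS p-value = {p_value:.3f}")
```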
58

Integrative and Comprehensive Pancancer Analysis of Regulator of Chromatin Condensation 1 (RCC1)

Wu, Changwu, Duan, Yingjuan, Gong, Siming, Kallendrusch, Sonja, Schopow, Nikolas, Osterhoff, Georg 11 December 2023 (has links)
Regulator of Chromatin Condensation 1 (RCC1) is the only known guanine nucleotide exchange factor that acts on the Ras-like G protein Ran and plays a key role in cell cycle regulation. Although there is growing evidence to support the relationship between RCC1 and cancer, detailed pan-cancer analyses have not yet been performed. In this genome database study, based on The Cancer Genome Atlas, Genotype-Tissue Expression and Gene Expression Omnibus databases, the potential role of RCC1 in 33 tumor entities was explored. The results show that RCC1 is highly expressed in most human malignant neoplasms in contrast to healthy tissues. RCC1 expression is closely related to the prognosis of a broad variety of tumor patients. Enrichment analysis showed that some tumor-related pathways such as "cell cycle" and "RNA transport" were involved in the functional mechanism of RCC1. In particular, the analysis reveals the relation of RCC1 to multiple immune checkpoint genes and suggests that the regulation of RCC1 is closely related to tumor infiltration of cancer-associated fibroblasts and CD8+ T cells. Consistent data demonstrate the association of RCC1 with the tumor mutation burden and microsatellite instability in various tumors. These findings provide new insights into the role of RCC1 in oncogenesis and tumor immunology in various tumors and indicate its potential as a marker for prognosis and for targeted treatment strategies.
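A typical prognostic check behind statements such as "RCC1 expression is closely related to the prognosis" is a Kaplan-Meier comparison of high- versus low-expression groups with a log-rank test. The sketch below uses simulated survival times; the lifelines calls and every number are illustrative assumptions, not results from the study.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(7)

# Simulated cohort standing in for TCGA survival data split by expression level.
n = 150
high = pd.DataFrame({"time": rng.exponential(24, n), "event": rng.binomial(1, 0.7, n)})
low = pd.DataFrame({"time": rng.exponential(40, n), "event": rng.binomial(1, 0.6, n)})

# Kaplan-Meier estimate for one expression group.
kmf = KaplanMeierFitter()
kmf.fit(high["time"], event_observed=high["event"], label="high expression")
print(f"median survival (high expression): {kmf.median_survival_time_:.1f} months")

# Log-rank test for a survival difference between the two groups.
res = logrank_test(high["time"], low["time"],
                   event_observed_A=high["event"], event_observed_B=low["event"])
print(f"log-rank p-value = {res.p_value:.3f}")
```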
59

Data Summarization for Large Time-varying Flow Visualization and Analysis

Chen, Chun-Ming 29 December 2016 (has links)
No description available.
60

Finite element analysis of subregions using a specified boundary stiffness/force method

Jara-Almonte, C. C. January 1985 (has links)
The accurate finite element analysis of subregions of large structures is difficult to carry out because of uncertainties about how the rest of the structure influences the boundary conditions and loadings of the subregion model. This dissertation describes the theoretical development and computer implementation of a new approach to this problem of modeling subregions. This method, the specified boundary stiffness/force (SBSF) method, results in accurate displacement and stress solutions as the boundary loading and the interaction between the stiffness of the subregion and the rest of the structure are taken into account. This method is computationally efficient because each time that the subregion model is analyzed, only the equations involving the degrees of freedom within the subregion model are solved. Numerical examples are presented which compare this method to some of the existing methods for subregion analysis on the basis of both accuracy of results and computational efficiency. The SBSF method is shown to be more accurate than another approximate method, the specified boundary displacement (SBD) method and to require approximately the same number of computations for the solution. For one case, the average error in the results of the SBD method was +2.75% while for the SBSF method the average error was -0.3%. The comparisons between the SBSF method and the efficient and exact zooming methods demonstrate that the SBSF method is less accurate than these methods but is computationally more efficient. In one example, the error for the exact zooming method was -0.9% while for the SBSF method it was -3.7%. Computationally, the exact zooming method requires almost 185% more operations than the SBSF method. Similar results were obtained for the comparison of the efficient zooming method and the SBSF method. Another use of the SBSF method is in the analysis of design changes which are incorporated into the subregion model but not into the parent model. In one subregion model a circular hole was changed to an elliptical hole. The boundary forces and stiffnesses from the parent model with the circular hole were used in the analysis of the modified subregion model. The results of the analysis of the most refined mesh in this example had an error of only -0.52% when compared to the theoretical result for the modified geometry. The results of the research presented in this dissertation indicate that the SBSF method is better suited to the analysis of subregions than the other methods documented in the literature. The method is both accurate and computationally efficient as well as easy to use and implement. The SBSF method can also be extended to the accurate analysis of subregion models with design changes which are not incorporated into the parent model. / Ph. D.
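To make the substructuring idea concrete, here is a minimal numpy sketch on a 1D spring chain: the exterior of the structure is condensed into a boundary stiffness and a boundary force that are attached to the subregion, which is then solved on its own. The sketch uses exact static condensation, so it reproduces the full-model result; the dissertation's SBSF method instead takes the boundary stiffnesses and forces from a coarser parent analysis, and the spring values and loads here are invented.

```python
import numpy as np

def assemble(k):
    """Global stiffness of a 1D chain of springs between consecutive nodes."""
    n = len(k) + 1
    K = np.zeros((n, n))
    for e, ke in enumerate(k):
        K[e:e + 2, e:e + 2] += ke * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return K

k = np.array([100.0, 80.0, 120.0, 90.0, 110.0, 95.0])    # invented spring stiffnesses
K = assemble(k)
f = np.zeros(7)
f[3], f[6] = 5.0, 10.0                                   # invented applied loads

# Reference solution: full model with node 0 fixed.
free = np.arange(1, 7)
u_full = np.zeros(7)
u_full[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])

# Subregion = nodes 0-3; condense the exterior (nodes 4-6) onto boundary node 3.
sub, ext = np.arange(0, 4), np.arange(4, 7)
Kse, Kee = K[np.ix_(sub, ext)], K[np.ix_(ext, ext)]
K_sub = K[np.ix_(sub, sub)] - Kse @ np.linalg.solve(Kee, K[np.ix_(ext, sub)])
f_sub = f[sub] - Kse @ np.linalg.solve(Kee, f[ext])

# Solve the subregion alone with the condensed boundary stiffness/force attached.
free_s = np.arange(1, 4)
u_sub = np.zeros(4)
u_sub[free_s] = np.linalg.solve(K_sub[np.ix_(free_s, free_s)], f_sub[free_s])

print(np.allclose(u_sub, u_full[:4]))   # True: the subregion solve matches the full model
```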
