  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
311

E-tjänstutveckling ur ett medborgarperspektiv : Att skapa beslutsunderlag baserat på medborgarärendens lämplighet för olika kommunikationskanaler / Citizen-centric e-service development

Abrahamsson, Johan, Sjöberg, Robin January 2009
Citizens’ interaction with governments is an area with unique implications for channel management. Governments need to take the citizens’ perspective into further consideration in order to be successful in delivering high-quality e-services. This paper aims to determine whether a categorization of citizen-initiated contacts from a citizen-centric perspective can be a valuable basis for decisions regarding e-service development. The study consisted of three steps. The first step was an examination of the existing related literature, which uncovered the most important aspects of citizens’ channel choice. The second step consisted of elaborating a classification based on perceived task characteristics and subsequently matching the categories to desirable channel characteristics. The third and final step consisted of applying the proposed categorization to a content management system containing all citizen-initiated contacts in a Swedish municipality. The application indicated that the proposed categorization could possibly be used to guide investments in e-services in a channel-appropriate direction.
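The category-to-channel matching step described in this abstract can be sketched as a simple lookup. The categories, channel names, and characteristics below are invented for illustration and are not the thesis' actual classification:

```python
# Invented task categories and their perceived characteristics.
CATEGORY_NEEDS = {
    "simple_information": {"low_richness_ok": True, "self_service": True},
    "complex_case":       {"low_richness_ok": False, "self_service": False},
}

# Invented channels and their characteristics.
CHANNELS = {
    "web_eservice": {"low_richness_ok": True, "self_service": True},
    "phone":        {"low_richness_ok": False, "self_service": False},
    "visit":        {"low_richness_ok": False, "self_service": False},
}

def suitable_channels(category):
    # Match a citizen-case category to channels whose characteristics fit it.
    needs = CATEGORY_NEEDS[category]
    return sorted(name for name, props in CHANNELS.items() if props == needs)

channels = suitable_channels("simple_information")
```

Cases landing in categories that fit a self-service channel would then indicate where e-service investment is channel-appropriate.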
312

Uso de um método preditivo para inferir a zona de aprendizagem de alunos de programação em um ambiente de correção automática de código

Pereira, Filipe Dwan 29 March 2018
CS1 (first-year programming) classes are known to have high dropout and non-pass rates. Thus, there have been many studies attempting to predict and improve CS1 student performance. Knowing about student performance in advance can be useful for many reasons. For example, teachers can apply specific actions to help learners who are struggling, as well as provide more challenging activities to high-achievers. Initial studies used static factors such as high-school grades, age, and gender. However, student behavior is dynamic and, as such, data-driven approaches have been gaining attention, since many universities use web-based environments to support CS1 classes. Many researchers have therefore started extracting student behavior from data collected in these environments and using it as features in machine learning (ML) models. The research community has proposed many predictive methods, though many of these studies still need to be replicated to check whether they are context-sensitive.
Thus, we collected a set of features correlated with student grades in related studies, compiling the best ML attributes as well as adding new features, and applied them to a database representing 486 CS1 students. The feature set was used in ML pipelines optimized with two approaches: hyperparameter tuning with random search, and construction of the ML pipeline with genetic programming. As a result, we achieved an accuracy of 74.44% in predicting whether students would pass or fail, using data from the first two weeks of class on a balanced dataset, a result statistically superior to state-of-the-art research applied to the same dataset. It is also worth noting that from the eighth week of class onward, the method achieved accuracies between 85% and 90.62%.
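The pipeline-optimization idea — random search over the hyperparameters of a model predicting pass/fail from early-weeks behaviour — can be sketched as follows. The features, thresholds, and synthetic data are invented for illustration and are not the thesis' actual attributes or results:

```python
import random

def make_students(n, seed=0):
    # Synthetic "first two weeks" behaviour: submission counts and accepted
    # submissions; pass/fail derived from the acceptance ratio (invented rule).
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        attempts = rng.randint(1, 30)
        correct = rng.randint(0, attempts)
        passed = 1 if correct / attempts > 0.4 else 0
        data.append(((attempts, correct), passed))
    return data

def stump_predict(x, ratio_threshold):
    attempts, correct = x
    return 1 if correct / attempts > ratio_threshold else 0

def accuracy(data, threshold):
    return sum(1 for x, y in data if stump_predict(x, threshold) == y) / len(data)

def random_search(data, n_trials=50, seed=1):
    # Random search: sample hyperparameter values at random, keep the best.
    rng = random.Random(seed)
    best_t, best_acc = None, -1.0
    for _ in range(n_trials):
        t = rng.uniform(0.0, 1.0)
        acc = accuracy(data, t)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

data = make_students(486)   # same cohort size as the study; data are synthetic
t, acc = random_search(data)
```

The same loop generalizes to real pipelines: sample a configuration, evaluate it, keep the best.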
313

Data-driven decision making in Marketing : A theoretical approach

Peyne, Benjamin, Chan, Ariane January 2017
Customer insight is at the heart of the big data era. This revolution makes it possible to directly obtain high-potential data, in large quantities, about customers. Thus we note that, more than ever, a large volume of big data is collected by companies. We observe that big data has become a necessary tool within marketing. More and more companies orient their decisions according to the information provided by data, with the aim of quickly obtaining better results. Nevertheless, in order to integrate big data more effectively and gain a competitive advantage, companies must face new challenges. To measure and understand the impact of big data on marketing decisions, we propose, with the support of our scientific and theoretical resources, a line of reasoning demonstrating all the issues. Big data is increasingly ubiquitous and necessary for companies (I). Its impact on decisions needs to be taken into account (II) and its use is leading to a management revolution (III). Moreover, it modifies the close relation between decision and intuition (IV). In this article, we present a perspective that studies all these concepts. We close by offering a model and a conclusion answering our research question.
314

The application of constraint rules to data-driven parsing

Jaf, Sardar January 2015
The process of determining the structural relationships between words in both natural and machine languages is known as parsing. Parsers are used as core components in a number of Natural Language Processing (NLP) applications such as online tutoring applications, dialogue-based systems and textual entailment systems. They have been used widely in the development of machine languages. In order to understand the way parsers work, we will investigate and describe a number of widely used parsing algorithms. These algorithms have been utilised in a range of different contexts such as dependency frameworks and phrase structure frameworks. We will investigate and describe some of the fundamental aspects of each of these frameworks, which can function in various ways including grammar-driven approaches and data-driven approaches. Grammar-driven approaches use a set of grammatical rules for determining the syntactic structures of sentences during parsing. Data-driven approaches use a set of parsed data to generate a parse model which is used for guiding the parser during the processing of new sentences. A number of state-of-the-art parsers have been developed that use such frameworks and approaches. We will briefly highlight some of these in this thesis. Three features are important to integrate into the development of parsers: efficiency, accuracy, and robustness. Efficiency is concerned with the use of as little time and computing resources as possible when processing natural language text. Accuracy involves maximising the correctness of the analyses that a parser produces. Robustness is a measure of a parser’s ability to cope with grammatically complex sentences and produce analyses of a large proportion of a set of sentences. In this thesis, we present a parser that can efficiently, accurately, and robustly parse a set of natural language sentences.
Additionally, the implementation of the parser presented here allows for some trading-off between different levels of parsing performance. For example, some NLP applications may emphasise efficiency/robustness over accuracy while some other NLP systems may require a greater focus on accuracy. In dialogue-based systems, it may be preferable to produce a correct grammatical analysis of a question, rather than incorrectly analysing the grammatical structure of a question or quickly producing a grammatically incorrect answer for a question. Alternatively, it may be desirable that document translation systems translate a document into a different language quickly but less accurately, rather than slowly but highly accurately, because users may be able to correct grammatically incorrect sentences manually if necessary. The parser presented here is based on data-driven approaches but we will allow for the application of constraint rules to it in order to improve its performance.
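The idea of combining a data-driven scorer with hand-written constraint rules can be illustrated with a toy greedy arc-standard dependency parser. The transition scores stand in for a learned model, and the constraint rule is an invented example, not one from the thesis:

```python
def score(action, stack, buffer):
    # Stand-in for a statistical model learned from treebank data.
    return {"shift": 1.0, "left": 0.5, "right": 0.8}[action]

def allowed(action, stack, buffer):
    # Structural preconditions of the arc-standard transitions.
    if action == "shift":
        return len(buffer) > 0
    return len(stack) >= 2

def constraint(action, stack, buffer):
    # Invented constraint rule: a determiner may never be a head.
    if action == "right" and len(stack) >= 2 and stack[-2][1] == "DET":
        return False
    if action == "left" and stack and stack[-1][1] == "DET":
        return False
    return True

def parse(words):
    # Greedy arc-standard parsing over (word, POS) pairs; returns
    # (head, dependent) arcs.
    stack, buffer, arcs = [], list(words), []
    while buffer or len(stack) > 1:
        candidates = [a for a in ("shift", "left", "right")
                      if allowed(a, stack, buffer) and constraint(a, stack, buffer)]
        act = max(candidates, key=lambda a: score(a, stack, buffer))
        if act == "shift":
            stack.append(buffer.pop(0))
        elif act == "left":
            dep = stack.pop(-2)
            arcs.append((stack[-1][0], dep[0]))
        else:  # right
            dep = stack.pop()
            arcs.append((stack[-1][0], dep[0]))
    return arcs

arcs = parse([("the", "DET"), ("cat", "NOUN"), ("sleeps", "VERB")])
```

Without the constraint, the higher-scoring "right" transition would eventually make the determiner a head; the rule overrides the data-driven preference, which is the spirit of applying constraint rules to a data-driven parser.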
315

Data driven SEO / Data-driven SEO

Koutný, Jiří January 2011
The Search Engine Optimization (SEO) industry has recently undergone major changes. Many new analytics tools have been put on the market, finally enabling marketing consultants to measure and evaluate the results of their SEO work effectively. The theoretical part of this diploma thesis therefore aims to describe and compare selected SEO tools, including practical examples of their use. The paper focuses on backlink databases (MajesticSEO, SEOmoz OpenSiteExplorer and Ahrefs) and keyword suggestion tools from Google (AdWords), Seznam (Sklik), Wordtracker and SEMRush. The final chapter provides an overview of search engine position-tracking tools and techniques. The practical part describes the method of selection, preparation and processing of data obtained from the tools mentioned above. The data are used to perform a correlation analysis of Seznam.cz search engine results in relation to the best-known SEO factors. The results of the analysis will help marketing consultants clarify which factors are most important to focus on to obtain more traffic from search engines.
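The correlation analysis described in the practical part can be sketched with a from-scratch Spearman rank correlation between a candidate ranking factor and result position. The backlink counts below are invented for demonstration, not actual Seznam.cz data:

```python
def rankdata(values):
    # Average ranks (1-based), handling ties.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the ranks.
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

positions = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]            # SERP positions
backlinks = [900, 750, 400, 380, 200, 150, 90, 60, 30, 10]  # invented factor
rho = spearman(positions, backlinks)
```

A strongly negative rho (here the toy data are perfectly monotone) would suggest the factor is associated with better positions; rank correlation is the usual choice because ranking positions are ordinal.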
316

A Retrospective-Longitudinal Examination of the Relationship between Apportionment of Seat Time in Community-College Algebra Courses and Student Academic Performance

Roig-Watnik, Steven M 06 December 2012
During the past decade, there has been a dramatic increase by postsecondary institutions in providing academic programs and course offerings in a multitude of formats and venues (Biemiller, 2009; Kucsera & Zimmaro, 2010; Lang, 2009; Mangan, 2008). Strategies pertaining to reapportionment of course-delivery seat time have been a major facet of these institutional initiatives; most notably, within many open-door 2-year colleges. Often, these enrollment-management decisions are driven by the desire to increase market-share, optimize the usage of finite facility capacity, and contain costs, especially during these economically turbulent times. So, while enrollments have surged to the point where nearly one in three 18-to-24 year-old U.S. undergraduates are community college students (Pew Research Center, 2009), graduation rates, on average, still remain distressingly low (Complete College America, 2011). Among the learning-theory constructs related to seat-time reapportionment efforts is the cognitive phenomenon commonly referred to as the spacing effect, the degree to which learning is enhanced by a series of shorter, separated sessions as opposed to fewer, more massed episodes. This ex post facto study explored whether seat time in a postsecondary developmental-level algebra course is significantly related to: course success; course-enrollment persistence; and, longitudinally, the time to successfully complete a general-education-level mathematics course. Hierarchical logistic regression and discrete-time survival analysis were used to perform a multi-level, multivariable analysis of a student cohort (N = 3,284) enrolled at a large, multi-campus, urban community college. The subjects were retrospectively tracked over a 2-year longitudinal period. The study found that students in long seat-time classes tended to withdraw earlier and more often than did their peers in short seat-time classes (p < .05). 
Additionally, a model comprised of nine statistically significant covariates (all with p-values less than .01) was constructed. However, no longitudinal seat-time group differences were detected nor was there sufficient statistical evidence to conclude that seat time was predictive of developmental-level course success. A principal aim of this study was to demonstrate—to educational leaders, researchers, and institutional-research/business-intelligence professionals—the advantages and computational practicability of survival analysis, an underused but more powerful way to investigate changes in students over time.
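The discrete-time survival setup used in the study can be sketched with person-period records and per-term hazards compared between seat-time groups. The cohort sizes and withdrawal rates below are invented toy numbers, not the study's data:

```python
def person_periods(n, per_term_hazard, group, max_terms=4):
    # Expand a cohort into person-period rows; a fixed fraction of those
    # still enrolled withdraws each term (deterministic toy data).
    rows, remaining = [], n
    for term in range(1, max_terms + 1):
        events = int(remaining * per_term_hazard)
        for _ in range(remaining):
            rows.append((group, term, 0))
        for i in range(events):                    # mark this term's withdrawals
            rows[len(rows) - remaining + i] = (group, term, 1)
        remaining -= events
    return rows

def hazards(records, group):
    # Per-term hazard: P(withdraw in term t | still enrolled at start of t).
    by_term = {}
    for g, term, withdrew in records:
        if g != group:
            continue
        at_risk, events = by_term.get(term, (0, 0))
        by_term[term] = (at_risk + 1, events + withdrew)
    return {t: ev / n for t, (n, ev) in sorted(by_term.items())}

rows = person_periods(1000, 0.20, "long") + person_periods(1000, 0.10, "short")
h_long = hazards(rows, "long")
h_short = hazards(rows, "short")
```

In the actual analysis these person-period rows feed a discrete-time survival model (logistic regression on the hazard) with covariates; the point of the expansion is that "time until withdrawal" becomes a sequence of binary outcomes.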
317

A data-driven approach for Product-Service Systems design : Using data and simulation to understand the value of a new design concept

Chowdhery, Syed Azad January 2020
Global challenges such as increasingly competitive markets, low-cost competition, shorter lead-time demands, and high quality/value output are transforming companies' business models to focus beyond performance requirements. To meet these challenges, companies are highly concerned with customer-perceived value: connecting the product with the customer in a better way and becoming more proactive in fulfilling customer needs, via function-oriented business models and Product-Service Systems. In the literature, the conceptual phase is distinguished as the most critical phase of the product development process, and many authors have recognized improving design in the conceptual phase as the means to deliver a successful product to the market. At the decision gate, where concepts are selected for further development, the design team needs knowledge and data about the long-term consequences of their early decisions, to see how changes in design propagate through the entire lifecycle of the product. The main goal of the thesis is to describe how the design of Product-Service Systems in the conceptual phase can be improved through a data-driven approach, which provides an opportunity to enhance decision making and to provide better support in the early development phase. The study highlights how data are managed and used in a current industrial setting and indicates room for improvement in current practices. The thesis further provides guidelines for using data efficiently in modelling and simulation activities to increase design knowledge. As a result of this study, a data-driven approach emerged to support early design decisions. The thesis presents initial descriptive-study findings from the empirical investigations, showing a model-based approach that creates awareness about the value of a new design concept, thus acting as a key enabler for using data in design. This creates a link between product engineering characteristics and the high-level attributes of customer satisfaction and the provider's long-term profitability. The preliminary results indicate that applying simulation models to front-load the early design stage creates awareness about how performance can lead to value creation, helping multidisciplinary teams to perform quick trade-off and what-if analyses on design configurations. The proposed framework shows how data from various sources are used through a chain of simulations to understand the entire product lifecycle. The proposed approach holds the potential to improve the key performance indicators for Product-Service Systems development: lead time, design quality, cost and, most importantly, delivery of a value-added product to the customer.
318

Development of a process modelling methodology and condition monitoring platform for air-cooled condensers

Haffejee, Rashid Ahmed 05 August 2021
Air-cooled condensers (ACCs) are a type of dry-cooling technology that has seen an increase in implementation globally, particularly in the power generation industry, due to its low water consumption. Unfortunately, ACC performance is susceptible to changing ambient conditions, such as dry-bulb temperatures, wind direction, and wind speeds. This can result in performance reduction under adverse ambient conditions, which leads to increased turbine backpressures and, in turn, a decrease in generated electricity. This creates a demand to monitor and predict ACC performance under changing ambient conditions. This study focuses on modelling a utility-scale ACC system at steady-state conditions, applying a 1-D network modelling approach with component-level discretization. This approach allowed each cell to be modelled individually, accounting for steam duct supply behaviour, and allowed off-design conditions to be investigated. The developed methodology was based on existing empirical correlations for condenser cells and adapted to model double-row dephlegmators. A utility-scale 64-cell ACC system based in South Africa was selected for this study. The thermofluid network model was validated using site data, with agreement in results within 1%; however, due to a lack of site data, the model was not validated for off-design conditions. The thermofluid network model was also compared to the existing lumped approach, and differences were observed due to the steam ducting distribution. The effect of increasing ambient air temperature from 25 °C to 35 °C was investigated, with the heat rejection rate decreasing by 10.9 MW and the backpressure increasing by 7.79 kPa across the temperature range. The condensers' heat rejection rate decreased at higher air temperatures, while the dephlegmators' heat rejection rate increased due to the increased outlet vapour pressure and flow rates from the condensers.
Off-design conditions were simulated, including hot air recirculation and wind effects. For wind effects, the developed model predicted a decrease in heat rejection rate of 1.7 MW at higher wind speeds, while the lumped approach predicted an increase of 4.9 MW. For practicality, a data-driven surrogate model was developed through machine learning techniques using data generated by the thermofluid network model. The surrogate model predicted system-level ACC performance indicators such as turbine backpressure and total heat rejection rate. Multilayer perceptron neural networks were developed in the form of a regression network and a binary classifier network. On the test sets, the regression network had an average relative error of 0.3%, while the binary classifier had a 99.85% classification accuracy. The surrogate model was validated against site data over a 3-week operating period, with 93.5% of backpressure predictions within 6% of site backpressures. The surrogate model was deployed through a web-application prototype, which included a forecasting tool to predict ACC performance based on a weather forecast.
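The surrogate-modelling idea — a cheap data-driven model fit to data generated by an expensive simulator, then used for fast predictions — can be sketched as follows. For brevity, a least-squares quadratic fit stands in for the thesis' multilayer perceptron, and the "simulator" is a toy stand-in, not the actual thermofluid network model:

```python
def simulator_backpressure(t_ambient):
    # Toy stand-in for the expensive 1-D thermofluid network model.
    return 10.0 + 0.05 * t_ambient + 0.01 * t_ambient ** 2  # kPa

def fit_quadratic(xs, ys):
    # Least-squares fit of y = a + b*x + c*x^2 via the 3x3 normal equations.
    S = [sum(x ** k for x in xs) for k in range(5)]
    T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[S[i + j] for j in range(3)] for i in range(3)]
    b = T[:]
    for col in range(3):                      # Gaussian elimination w/ pivoting
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                       # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, 3))) / A[r][r]
    return coef

temps = [float(t) for t in range(15, 41)]     # training inputs, deg C
pressures = [simulator_backpressure(t) for t in temps]
a0, a1, a2 = fit_quadratic(temps, pressures)
pred = a0 + a1 * 30.0 + a2 * 30.0 ** 2        # fast surrogate prediction
```

The workflow is the same with a neural network: run the simulator over the input space once, fit the surrogate, then query the surrogate in milliseconds (e.g. against a weather forecast).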
319

Simulations and data-based models for electrical conductivities of graphene nanolaminates

Rothe, Tom 13 August 2021
Graphene-based conductor materials (GCMs) consist of stacked and decoupled layers of graphene flakes and could potentially transfer graphene’s outstanding material properties, like its exceptional electrical conductivity, to the macro scale, where alternatives to heavy and expensive metallic conductors are desperately needed. To reach super-metallic conductivity, however, a systematic optimization of the electrical conductivity with respect to the structural and physical input parameters is required. A new trend in the field of process and material optimization is data-based models, which use data science methods to quickly identify and abstract information and relationships from the available data. In this work, such data-based models for the conductivity of a real GCM thin-film sample are built on data generated with an especially improved and extended version of the network simulation approach by Rizzi et al. [1, 2, 3]. Appropriate methods to create data-based models for GCMs are introduced and typical challenges during the modelling process are addressed, so that data-based models for other properties of GCMs can easily be created as soon as sufficient data are accessible. Combined with experimental measurements by Slawig et al. [4], the created data-based models allow for a coherent and comprehensive description of the thin-film’s electrical parameters across several length scales.
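Kernel ridge regression, one of the regression estimators employed in this work, can be sketched from scratch on toy 1-D data. The RBF kernel parameters and the conductivity-like target function below are illustrative, not the thesis' dataset or settings:

```python
import math

def rbf(a, b, gamma=0.5):
    # Gaussian (RBF) kernel.
    return math.exp(-gamma * (a - b) ** 2)

def krr_fit(xs, ys, lam=1e-3, gamma=0.5):
    # Solve (K + lam*I) alpha = y by Gaussian elimination with pivoting.
    n = len(xs)
    K = [[rbf(xs[i], xs[j], gamma) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    A = [row[:] + [y] for row, y in zip(K, ys)]   # augmented system
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    alpha = [0.0] * n
    for r in range(n - 1, -1, -1):
        alpha[r] = (A[r][n] - sum(A[r][c] * alpha[c]
                                  for c in range(r + 1, n))) / A[r][r]
    return alpha

def krr_predict(x, xs, alpha, gamma=0.5):
    # Prediction is a kernel-weighted sum over the training inputs.
    return sum(a * rbf(x, xi, gamma) for a, xi in zip(alpha, xs))

xs = [i * 0.5 for i in range(13)]          # e.g. a structural input parameter
ys = [math.sin(x) + 0.1 * x for x in xs]   # smooth, conductivity-like response
alpha = krr_fit(xs, ys)
pred = krr_predict(3.0, xs, alpha)
```

The ridge term `lam` trades training-set fidelity for smoothness, which is what distinguishes KRR from plain kernel interpolation in the estimator comparison.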
320

Modelem řízený vývoj Spark úloh / Model Driven Development of Spark Tasks

Bútora, Matúš January 2019
The aim of this master's thesis is to describe the Apache Spark framework, its structure, and the way Spark works. A further goal is to present the topics of Model-Driven Development and Model-Driven Architecture and to define their advantages, disadvantages, and usage. The main part of the text, however, is devoted to designing a model for creating tasks in the Apache Spark framework. The text describes an application that allows the user to create a graph based on the proposed modeling language. The final application allows the user to generate source code from the created model.
