  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
481

Analytisk CRM för beslutsstöd : Faktorers påverkan på förmågan för beslutsfattande, samt dess genererade sociotekniska förändringar / Analytical CRM for decision support : Factors' impact on decision-making capability, and its generated sociotechnical changes

Salloum, Alexander, Yousef, Johan January 2023 (has links)
Society today is undergoing significant changes largely driven by digitalization. One tangible change that impacts businesses is the shift in consumer behaviour. These dramatic changes pose a significant threat to companies, as traditional methods of customer management are no longer sufficient. The significance of Customer Relationship Management (CRM), based on an analytical approach, therefore becomes crucial to better manage customer relationships in today's highly competitive work environment. Analytical CRM is an IT-reliant work system in which participants use data and analysis to perform processes and activities that enable offered products and services to better meet the needs of customers. The overall goal of the study is to understand, through insights, how decision-makers experience various factors' impact on their ability to utilize analytical CRM to support their decision-making, as well as the sociotechnical changes generated by it. To achieve this, a qualitative research method was adopted in which in-depth interviews were conducted. Seven respondents, in varying roles such as Business Analyst, Data Scientist, Marketing Manager, and CRM Manager, were interviewed to gather their insights and experiences with analytical CRM. The study's results and conclusion show that decision-makers consider customer centricity and information technology (IT) to be pivotal factors influencing the use of analytical CRM. Customer centricity fosters a data-driven environment that promotes data-driven decision-making through the utilization of data and analysis; it generates sociotechnical changes in both deep and surface structures. IT plays a critical role in the collection, management, and analysis of data. This drives decision-making processes to become data-driven and enhances decision-makers' ability to make well-grounded decisions. The sociotechnical changes generated by information technology were at surface structures.
482

ANALYSIS AND MODELING OF STATE-LEVEL POLICY AND LEGISLATIVE TEXT WITH NLP AND ML TECHNIQUES

Maryam Davoodi (20378814) 05 December 2024 (has links)
State-level policy decisions significantly influence various aspects of our daily lives, such as access to healthcare and education. Despite their importance, there is a limited understanding of how these policies and decisions are formulated within the legislative process. This dissertation aims to bridge that gap by utilizing data-driven methods and the latest advancements in machine learning (ML) and natural language processing (NLP). By leveraging data-driven approaches, we can achieve a more objective and comprehensive understanding of policy formation. The incorporation of ML and NLP techniques aids in processing and interpreting large volumes of complex legislative texts, uncovering patterns and insights that might be overlooked through manual analysis. In this dissertation, we pose new analytical questions about the state legislative process and address them in three stages:

First, we aim to understand the language of political agreement and disagreement in legislative texts. We introduce a novel NLP/ML task: predicting significant conflicts among legislators and sharp divisions in their votes on state bills, influenced by factors such as gender, rural-urban divides, and ideological differences. To achieve this, we construct a comprehensive dataset from multiple sources, linking state bills with legislators' information, geographical data about their districts, and details about donations and donors. We then develop a shared relational and textual deep learning model that captures the interactions between the bill's text and the legislative context in which it is presented. Our experiments demonstrate that incorporating this context enhances prediction accuracy compared to strong text-based models.

Second, we analyze the impact of legislation on relevant stakeholders, such as teachers in education bills. We introduce this as a new prediction task within our framework to better understand the state legislative process. To address this task, we enhance our modeling and expand our dataset using various techniques, including crowd-sourcing, to generate labeled data. This approach also helps us decode legislators' decision-making processes and voting patterns. Consequently, we refine our model to predict the winners and losers of bills, using this information to more accurately forecast the legislative body's vote breakdown based on demographic and ideological criteria.

Third, we enhance our analysis and modeling of state-level bills and policies using two techniques: we normalize the inconsistent, verbose, and complex language of state policies by leveraging generative Large Language Models (LLMs), and we evaluate the policies within a broader network context by expanding the number of US states analyzed from 3 to 50 and incorporating new data sources, such as interest groups' ratings of legislators and public information on legislators' positions on various issues.

By following these steps in this dissertation, we aim to better understand the legislative processes that shape state-level policies and their far-reaching effects on society.
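As a rough illustration of the first stage described above, the sketch below shows why pairing bill text with legislative context can beat a text-only model. It is not the dissertation's shared relational/textual deep model; the toy bills, context features, and votes are invented, and a TF-IDF plus logistic-regression baseline stands in for the real architecture.

```python
# Minimal sketch (assumed stand-in, not the dissertation's model): concatenate
# TF-IDF bill-text features with per-legislator context features and fit a
# baseline vote classifier.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical toy data: two bill summaries, per-vote context, and roll calls.
bills = ["An act concerning funding for rural school districts",
         "An act concerning urban transit expansion and fares"]
context = np.array([[1.0, 0.2],   # e.g., [rural district share, donor alignment]
                    [0.1, 0.8]])
votes = np.array([1, 0])          # 1 = yea, 0 = nay

text_features = TfidfVectorizer().fit_transform(bills)
X = hstack([text_features, csr_matrix(context)])  # text + context, as in stage one

model = LogisticRegression().fit(X, votes)
print(model.predict(X))
```

In the dissertation's actual experiments, this kind of contextual signal is learned jointly with the text by a deep model rather than concatenated by hand.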
483

A data driven machine learning approach to differentiate between autism spectrum disorder and attention-deficit/hyperactivity disorder based on the best-practice diagnostic instruments for autism

Wolff, Nicole, Kohls, Gregor, Mack, Judith T., Vahid, Amirali, Elster, Erik M., Stroth, Sanna, Poustka, Luise, Kuepper, Charlotte, Roepke, Stefan, Kamp-Becker, Inge, Roessner, Veit 22 April 2024 (has links)
Autism spectrum disorder (ASD) and attention-deficit/hyperactivity disorder (ADHD) are two frequently co-occurring neurodevelopmental conditions that share certain symptomatology, including social difficulties. This presents practitioners with challenging (differential) diagnostic considerations, particularly in clinically more complex cases with co-occurring ASD and ADHD. Therefore, the primary aim of the current study was to apply a data-driven machine learning approach (support vector machine) to determine whether and which items from the best-practice clinical instruments for diagnosing ASD (ADOS, ADI-R) would best differentiate between four groups of individuals referred to specialized ASD clinics (i.e., ASD, ADHD, ASD + ADHD, ND = no diagnosis). We found that a subset of five features from both ADOS (clinical observation) and ADI-R (parental interview) reliably differentiated between ASD groups (ASD & ASD + ADHD) and non-ASD groups (ADHD & ND), and these features corresponded to the social-communication but also restrictive and repetitive behavior domains. In conclusion, the results of the current study support the idea that detecting ASD in individuals with suspected signs of the diagnosis, including those with co-occurring ADHD, is possible with considerably fewer items relative to the original ADOS/2 and ADI-R algorithms (i.e., 92% item reduction) while preserving relatively high diagnostic accuracy. Clinical implications and study limitations are discussed.
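As context for the item-reduction result, the sketch below shows one common way to search for a small discriminative item subset with a support vector machine: recursive feature elimination. The study's actual pipeline and data are not reproduced; the item scores here are synthetic, so the accuracy is near chance, unlike the real ADOS/ADI-R items.

```python
# Hedged sketch: linear-SVM recursive feature elimination down to five items,
# mirroring the paper's idea of a small ADOS/ADI-R subset (synthetic data).
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_items = 60                      # stand-in for pooled ADOS + ADI-R item scores
X = rng.integers(0, 4, size=(200, n_items)).astype(float)
y = rng.integers(0, 2, size=200)  # 1 = ASD (incl. ASD+ADHD), 0 = ADHD/ND

selector = RFE(SVC(kernel="linear"), n_features_to_select=5).fit(X, y)
X_small = X[:, selector.support_]

acc = cross_val_score(SVC(kernel="linear"), X_small, y, cv=5).mean()
print(f"CV accuracy with 5 items: {acc:.2f}")
```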
484

REDUCED ORDER MODELING ENABLED PREDICTIONS OF ADDITIVE MANUFACTURING PROCESSES

Charles Reynolds Owen (19320985) 02 August 2024 (has links)
For additive manufacturing (AM) to be a viable method for building metal parts for industries such as nuclear, the manufactured parts must be of higher quality, and have lower variation in that quality, than what can be achieved today. This high variation in quality bars the techniques from being used in fields with tight safety tolerances, such as nuclear. If this obstacle could be overcome, additive manufacturing would offer lower cost for complex parts, as well as the ability to design and test parts in a very short timeframe, since only the CAD model needs to be created to manufacture the part. In this study, work toward this lower variation in quality was approached in two ways. The first was the development of surrogate models, utilizing machine learning, to predict the end quality of additively manufactured parts. This was done by using experimental data for the mechanical properties of built parts as the outputs to be predicted, and in-situ signals captured during the manufacturing process as the inputs to the model. To capture the in-situ signals, cameras were used for thermal and optical imaging, leveraging the natural layer-by-layer build method used in AM techniques. The final models were created using support vector machine and Gaussian process regression algorithms, giving high correlations between the in-situ signals and the mechanical properties of relative density, elongation to fracture, uniform elongation, and the work hardening exponent. The second approach was the development of a reduced order model (ROM) for a computer simulation of an AM build. For this project, a ROM was built inside the MOOSE framework for an AM model designed by the MOOSE team, using proper orthogonal decomposition (POD) to project the problem onto a lower-dimensional reduced-basis subspace. The ROM was able to reduce the problem to 1% of its original dimensionality while incurring only 2-5% relative error associated with the projection.
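Since the abstract names proper orthogonal decomposition, a minimal sketch of the POD step may help: snapshots of a full-order field are stacked into a matrix, an SVD yields an energy-ranked basis, and states are projected onto the leading modes. The snapshot data below are synthetic and exactly low-rank, so the projection error is near zero; on real AM simulation fields it corresponds to the 2-5% relative error the abstract reports.

```python
# Minimal POD sketch (assumed illustration, not the MOOSE implementation):
# build a reduced basis from snapshots via SVD and project a field onto it.
import numpy as np

rng = np.random.default_rng(1)
n_dof, n_snapshots = 10_000, 50
# Hypothetical snapshot matrix: columns are full-order fields over time.
snapshots = rng.standard_normal((n_dof, 5)) @ rng.standard_normal((5, n_snapshots))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.98)) + 1  # smallest basis keeping 98% energy
basis = U[:, :r]                            # reduced basis, n_dof x r

field = snapshots[:, 0]
recon = basis @ (basis.T @ field)           # project and reconstruct
print(r, np.linalg.norm(field - recon) / np.linalg.norm(field))
```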
485

[en] FAST AND ACCURATE SIMULATION OF DEFORMABLE SOLID DYNAMICS ON COARSE MESHES / [pt] SIMULAÇÃO RÁPIDA E PRECISA DE DINÂMICA DE SÓLIDOS DEFORMÁVEIS EM MALHAS POUCO REFINADAS

MATHEUS KERBER VENTURELLI 23 May 2024 (has links)
This thesis introduces a novel hybrid simulator that combines a numerical Finite Element (FE) Partial Differential Equation solver with a Message Passing Neural Network (MPNN) to perform simulations of deformable solid dynamics on coarse meshes. Our work aims to provide accurate simulations, with an error comparable to that obtained with more refined meshes in FE discretizations, while maintaining computational efficiency by using an MPNN component that corrects the numerical errors associated with using a coarse mesh. We evaluate our model focusing on accuracy, generalization capacity, and computational speed compared to a reference numerical solver that uses 64 times more refined meshes. We introduce a new dataset for this comparison, encompassing three numerical benchmark cases: (i) free deformation after an initial impulse, (ii) stretching, and (iii) torsion of deformable solids. Based on the simulation results, the study thoroughly discusses our method's strengths and weaknesses. The study shows that our method corrects an average of 95.4 percent of the numerical error associated with discretization while being up to 88 times faster than the reference solver. On top of that, our model is fully differentiable with respect to loss functions and can be embedded into a neural network layer, allowing it to be easily extended by future work. Data and code are made available at https://github.com/Kerber31/fast_coarse_FEM for further investigations.
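To make the MPNN correction idea concrete, here is a hedged numpy sketch of a single message-passing step on a toy coarse mesh: neighbor states are aggregated into messages, an update produces a per-node correction, and the correction is added to the coarse FE solution. The weights are random stand-ins for trained parameters; the thesis's actual model is a full learned network, not this one hand-rolled layer.

```python
# One message-passing step on a coarse-mesh graph (illustrative only;
# weights are random stand-ins for trained MPNN parameters).
import numpy as np

rng = np.random.default_rng(2)
n_nodes, n_feat = 6, 3
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]  # toy 1-D coarse mesh
h = rng.standard_normal((n_nodes, n_feat))        # coarse FE nodal state

W_msg = rng.standard_normal((2 * n_feat, n_feat))  # message layer
W_upd = rng.standard_normal((2 * n_feat, n_feat))  # update layer

msgs = np.zeros_like(h)
for i, j in edges:  # pass messages in both directions along each edge
    msgs[j] += np.tanh(np.concatenate([h[i], h[j]]) @ W_msg)
    msgs[i] += np.tanh(np.concatenate([h[j], h[i]]) @ W_msg)

correction = np.tanh(np.concatenate([h, msgs], axis=1) @ W_upd)
corrected = h + correction  # coarse solution plus learned correction
print(corrected.shape)
```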
486

Optimizing Online Marketing Efficiency By Analyzing the Mutual Influence of Online Marketing Channels with Respect to Different Devices

Nass, Ole 11 June 2019 (has links)
Thesis by compendium. / What does attribution in an omni-channel environment look like? A major distinction can be drawn in contrast to attribution in a multi-channel environment. Besides providing the Marketing Analytics Process (MAP), a specification of the Cross-Industry Standard Process for Data Mining (CRISP-DM), a sequential mixed-method approach is utilized to analyze the main research question. In the first step of this research, the characteristics and requirements of efficient attribution in an omni-channel environment are analyzed. Based on semi-structured expert interviews and a holistic, structured literature research process, the lack of an omni-channel attribution approach is clearly identified. Existing attribution approaches are identified by conducting the structured literature review process; those approaches are then evaluated against the results of the semi-structured expert interviews, i.e., the requirements and characteristics of efficient omni-channel attribution. None of the identified attribution approaches fulfills a majority of the analyzed omni-channel requirements. With the research gap (the lack of an omni-channel attribution approach) clearly identified, an omni-channel attribution approach is developed in the second part of this research. Utilizing the MAP methodology, the main research gap is filled by providing the Holistic Customer Journey (HCJ): an omni-channel-ready data foundation and a corresponding omni-channel attribution approach. Among other things, the developed attribution approach consists of a machine learning classification. This research is the first to utilize information from almost 240,000,000 interaction data sets containing cross-device and cross-platform information. All underlying data sources are provided by one of Germany's largest real-estate platforms. / Nass, O. (2019). Optimizing Online Marketing Efficiency By Analyzing the Mutual Influence of Online Marketing Channels with Respect to Different Devices [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/122296
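The abstract notes that the developed attribution approach includes a machine-learning classification. As a loose illustration only (the thesis's actual features, model, and data are not described in this abstract), the sketch below encodes journeys as channel-by-device touch counts, fits a conversion classifier, and reads a rough per-channel influence signal from feature importances.

```python
# Hedged sketch of an attribution-style classification; channel names,
# journey encoding, and data are invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

channels = ["seo_desktop", "sea_desktop", "seo_mobile", "display_mobile"]
rng = np.random.default_rng(3)
X = rng.poisson(1.0, size=(500, len(channels)))  # touch counts per journey
y = (X[:, 1] + X[:, 2] + rng.normal(0, 1, 500) > 2).astype(int)  # toy conversions

clf = GradientBoostingClassifier().fit(X, y)
for name, importance in zip(channels, clf.feature_importances_):
    print(f"{name}: {importance:.2f}")  # crude per-channel influence signal
```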
487

Data-driven goodness-of-fit tests / Datagesteuerte Verträglichkeitskriteriumtests

Langovoy, Mikhail Anatolievich 09 July 2007 (has links)
No description available.
488

Measuring poverty in the EU : investigating and improving the empirical validity in deprivation scales of poverty

Bedük, Selçuk January 2017 (has links)
Non-monetary deprivation indicators are now widely used for studying and measuring poverty in Europe. However, despite their prevalence, the empirical performance of existing deprivation scales has rarely been examined. This thesis i) identifies possible conceptual problems of existing deprivation scales, such as indexing, missing dimensions, and thresholds; ii) empirically assesses the extent of possible measurement error related to these conceptual problems; and iii) offers an alternative way of constructing deprivation measures that mitigates the identified conceptual problems. The thesis consists of four stand-alone papers, accompanied by an overarching introduction and conclusion. The first three papers provide empirical evidence on the consequences of the missing-dimensions and threshold problems for the measurement and analysis of poverty, while the fourth paper exemplifies a concept-led multidimensional design that can reduce the error introduced by these conceptual problems. The analysis is generally conducted for 25 EU countries using the European Union Statistics on Income and Living Conditions (EU-SILC); only in the second paper is the analysis done for the UK, using the British Household Panel Survey (BHPS).
489

Automatické generování testovacích dat informačních systémů / Automatic Test Input Generation for Information Systems

Naňo, Andrej January 2021 (has links)
ISAGEN is a tool for the automatic generation of structurally complex test inputs that imitate real communication in the context of modern information systems. Complex, typically tree-structured data currently represents the standard means of transmitting information between nodes in distributed information systems. The automatic generator ISAGEN is founded on the methodology of data-driven testing and uses concrete data from the production environment as the primary characteristic and specification that guides the generation of new, similar data for test cases satisfying given combinatorial adequacy criteria. The main contribution of this thesis is a comprehensive proposal of automated data generation techniques, together with an implementation that demonstrates their usage. The created solution enables testers to create more relevant testing data representing production-like communication in information systems.
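As a hedged sketch of the data-driven idea described above (not ISAGEN's actual implementation), the snippet below collects the observed value domain of each leaf field from production-like tree-structured documents and recombines those values into new, structurally similar test inputs; ISAGEN's combinatorial adequacy criteria are only gestured at by the pairwise product.

```python
# Data-driven generation sketch: harvest field-value domains from sample
# documents, then emit new documents by recombining observed values.
import itertools
import json
import random

production_docs = [
    {"order": {"currency": "EUR", "items": 1, "express": True}},
    {"order": {"currency": "USD", "items": 3, "express": False}},
]

domains = {}
for doc in production_docs:           # collect observed leaf-field values
    for field, value in doc["order"].items():
        domains.setdefault(field, set()).add(value)

random.seed(0)
# Pairwise-flavored recombination over two fields; the remaining field is
# filled randomly from observed values.
for currency, items in itertools.product(sorted(domains["currency"]),
                                         sorted(domains["items"])):
    doc = {"order": {"currency": currency, "items": items,
                     "express": random.choice(sorted(domains["express"]))}}
    print(json.dumps(doc))
```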
490

Intelligent Energy-Savings and Process Improvement Strategies in Energy-Intensive Industries

Teng, Sin Yong January 2020 (has links)
As new technologies for energy-intensive industries continue to be developed, existing plants gradually fall behind in efficiency and productivity. Fierce market competition and environmental legislation push these traditional plants toward decommissioning and shutdown. Process improvement and retrofit projects are essential for maintaining the operational performance of such plants. Current approaches to process improvement are mainly process integration, process optimization, and process intensification. These fields generally rely on mathematical optimization, practitioner experience, and operational heuristics, and they serve as the foundation for process improvement; their performance, however, can be further enhanced by modern computational intelligence. The purpose of this work is therefore to apply advanced artificial intelligence and machine learning techniques to process improvement in energy-intensive industrial processes. The thesis approaches this problem by simulating industrial systems and makes the following contributions: (i) application of machine learning techniques, including one-shot learning and neuro-evolution, for data-driven modeling and optimization of individual units; (ii) application of dimensionality reduction (e.g., principal component analysis, autoencoders) for multi-objective optimization of multi-unit processes; (iii) design of a new tool, bottleneck tree analysis (BOTA), for analyzing and eliminating problematic parts of a system, together with a proposed extension that handles multi-dimensional problems with a data-driven approach; (iv) demonstration of the effectiveness of Monte Carlo simulation, neural networks, and decision trees for decision-making when integrating a new process technology into existing processes; (v) comparison of Hierarchical Temporal Memory (HTM) and dual optimization with several predictive tools for supporting real-time operations management; (vi) implementation of an artificial neural network within an interface for the conventional process graph (P-graph); and (vii) an outline of the future of artificial intelligence and process engineering in biosystems through a commercially driven multi-omics paradigm.
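As an illustration of contribution (ii) above, the sketch below uses principal component analysis to compress correlated process objectives before a multi-objective search; the objectives and data are synthetic stand-ins, not the thesis's plant cases.

```python
# Hedged sketch: PCA reduces four correlated objectives to two components
# that a multi-objective process optimization could search over.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
base = rng.standard_normal((300, 2))
# Hypothetical per-operating-point objectives (correlated in practice):
# energy use, steam demand, emissions, throughput loss.
objectives = np.column_stack([
    base[:, 0],
    0.9 * base[:, 0] + 0.1 * base[:, 1],
    base[:, 1],
    0.8 * base[:, 1],
])

pca = PCA(n_components=2).fit(objectives)
reduced = pca.transform(objectives)  # optimize over 2 axes instead of 4
print(pca.explained_variance_ratio_.round(3))
```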
