  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
871

Cultural cues in advertising: Context effects on perceived model similarity, identification processes, and advertising outcomes

Hoplamazian, Gregory J. 08 September 2011 (has links)
No description available.
872

Modelling priority queuing systems with varying service capacity

Chen, M., Jin, X.L., Wang, Y.Z., Cheng, X.Q., Min, Geyong January 2013 (has links)
Many studies have been conducted to investigate the performance of priority queuing (PQ) systems with constant service capacity. However, due to the time-varying nature of wireless channels in wireless communication networks, the service capacity of queuing systems may vary over time. It is therefore necessary to investigate the performance of PQ systems in the presence of varying service capacity. In addition, self-similar traffic has been discovered to be a ubiquitous phenomenon in various communication networks, which poses great challenges to performance modelling of scheduling systems due to its fractal-like nature. To address these issues, this paper develops a flow-decomposition based approach to performance modelling of PQ systems subject to self-similar traffic and varying service capacity. It specifically proposes an analytical model to investigate the queue length distributions of individual traffic flows. The validity and accuracy of the model are demonstrated via extensive simulation experiments.
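To make the modelled setting concrete, the sketch below simulates a discrete-time two-class priority queue whose per-slot service capacity varies at random. It is an illustration only, not the paper's flow-decomposition analytical model, and for simplicity it uses Bernoulli arrivals rather than the self-similar traffic the paper assumes:

```python
import random

def simulate_pq(slots=100_000, lam_hi=0.3, lam_lo=0.5, cap_choices=(0, 1, 2), seed=1):
    """Discrete-time two-class priority queue with randomly varying
    per-slot service capacity (illustrative; arrivals are Bernoulli,
    not self-similar)."""
    rng = random.Random(seed)
    q_hi = q_lo = 0
    total_hi = total_lo = 0
    for _ in range(slots):
        q_hi += rng.random() < lam_hi          # high-priority arrival
        q_lo += rng.random() < lam_lo          # low-priority arrival
        cap = rng.choice(cap_choices)          # time-varying service capacity
        served_hi = min(q_hi, cap)             # strict priority: serve high first
        q_hi -= served_hi
        q_lo -= min(q_lo, cap - served_hi)     # low class gets leftover capacity
        total_hi += q_hi
        total_lo += q_lo
    return total_hi / slots, total_lo / slots  # mean queue lengths

print(simulate_pq())  # the low-priority queue suffers far more from capacity variation
```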
873

True-Ed Select: A Machine Learning Based University Selection Framework

Cearley, Jerry C. 01 January 2022 (has links) (PDF)
University/college selection is a daunting task for young adults and their parents alike. This research presents True-Ed Select, a machine learning framework that simplifies the college selection process. The framework uses a four-layered approach: user survey, machine learning, consolidation, and recommendation. The first layer collects from users the objective and subjective attributes that best characterize their ideal college experience. The second layer employs machine learning techniques to analyze those attributes. The third layer combines the results from the machine learning techniques. The fourth layer takes the consolidated result and presents a user-friendly list of the top educational institutions that best match the user's interests. We use our framework to analyze over 3,500 United States post-secondary institutions and show that the search space can be reduced to the top 20 institutions. This drastically reduced search space facilitates effective and confident college selection for end users. Our survey of 10 participants yielded an average satisfaction rating of 4.11, supporting the efficacy of the framework.
874

[en] DATA ENRICHMENT BASED ON SIMILARITY GRAPH STATISTICS TO IMPROVE PERFORMANCE IN CLASSIFICATION SUPERVISED ML MODELS / [pt] ENRIQUECIMENTO DE DADOS COM BASE EM ESTATÍSTICAS DE GRAFO DE SIMILARIDADE PARA MELHORAR O DESEMPENHO EM MODELOS DE ML SUPERVISIONADOS DE CLASSIFICAÇÃO

NEY BARCHILON 19 September 2024 (has links)
[pt] A otimização do desempenho dos modelos de aprendizado de máquina supervisionados representa um desafio constante, especialmente em contextos com conjuntos de dados de alta dimensionalidade ou com numerosos atributos correlacionados. Neste estudo, é proposto um método para o enriquecimento de conjuntos de dados tabulares, fundamentado na utilização de estatísticas provenientes de um grafo construído a partir da similaridade entre as instâncias presentes neste conjunto de dados, buscando capturar correlações estruturais entre esses dados. As instâncias assumem o papel de vértices no grafo, enquanto as conexões entre elas refletem sua similaridade. O conjunto de características originais (FO) é enriquecido com as estatísticas extraídas do grafo (FG) na busca pela melhora do poder preditivo dos modelos de aprendizado de máquina. O método foi avaliado em dez conjuntos de dados públicos de distintas áreas de conhecimento, em dois cenários distintos, sobre sete modelos de aprendizado de máquina, comparando a predição sobre o conjunto de dados inicial (FO) com o conjunto de dados enriquecido com as estatísticas extraídas do seu grafo (FO+FG). Os resultados revelaram melhorias significativas na métrica de acurácia, com um aprimoramento médio de aproximadamente 4,9 por cento. Além de sua flexibilidade para integração com outras técnicas de enriquecimento existentes, o método se apresenta como uma alternativa eficaz, sobretudo em situações em que os conjuntos de dados originais carecem das características necessárias para as abordagens tradicionais de enriquecimento com a utilização de grafo. / [en] The optimization of supervised machine learning models' performance represents a constant challenge, especially in contexts with high-dimensional datasets or numerous correlated attributes. In this study, we propose a method for enriching tabular datasets, based on the use of statistics derived from a graph constructed from the similarity between instances in the dataset, aiming to capture structural correlations among the data. Instances take on the role of vertices in the graph, while connections between them reflect their similarity. The original feature set (FO) is enriched with statistics extracted from the graph (FG) to enhance the predictive power of machine learning models. The method was evaluated on ten public datasets from different domains, in two distinct scenarios, across seven machine learning models, comparing prediction on the initial dataset (FO) with the dataset enriched with statistics extracted from its graph (FO+FG). The results revealed significant improvements in accuracy metrics, with an average enhancement of approximately 4.9 percent. In addition to its flexibility for integration with existing enrichment techniques, the method presents itself as an effective alternative, particularly in situations where original datasets lack the necessary characteristics for traditional graph-based enrichment approaches.
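As a rough illustration of the enrichment idea, the sketch below builds a k-nearest-neighbour similarity graph over a public dataset and appends three common node statistics (degree, clustering coefficient, PageRank) as FG columns. The specific statistics, graph construction, and evaluation protocol in the thesis may differ, and for brevity the graph here is built once over all rows rather than per fold:

```python
import numpy as np
import networkx as nx
from sklearn.datasets import load_breast_cancer
from sklearn.neighbors import kneighbors_graph
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)           # FO: original features

# Build a k-NN similarity graph over instances (edges link similar rows).
A = kneighbors_graph(X, n_neighbors=10, mode="connectivity")
G = nx.from_scipy_sparse_array(A)

# FG: per-node graph statistics appended as extra feature columns.
fg = np.column_stack([
    [d for _, d in G.degree()],
    list(nx.clustering(G).values()),
    list(nx.pagerank(G).values()),
])
X_enriched = np.hstack([X, fg])                       # FO+FG

clf = RandomForestClassifier(random_state=0)
print("FO   :", cross_val_score(clf, X, y).mean())
print("FO+FG:", cross_val_score(clf, X_enriched, y).mean())
```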
875

Policy Diffusion in U.S. Hazard Mitigation Planning: An Intergovernmental Perspective

Xie, Ruixiang 24 May 2024 (has links)
This dissertation contributes to the disaster resilience policy literature by examining the diffusion of hazard mitigation policy in the U.S. Using a three-paper model, it investigates the adoption of local hazard mitigation plans (LHMPs) from an intergovernmental perspective. The first paper focuses on horizontal diffusion in hazard mitigation planning among local communities. Special attention is paid to the potential factors affecting the county-level adoption of FEMA-approved LHMPs, Hazard Mitigation Grant Program (HMGP) projects, and Pre-Disaster Mitigation (PDM) projects. An Event History Analysis (EHA) logit model and spatial autocorrelation models test hypotheses about external factors, such as neighboring effects, and internal factors, including disaster risks, neighborhood disadvantage and affluence, government capacity, local disaster resilience advocacy groups, and political support. The empirical results confirmed the significant influence of neighboring effects, indicating that counties are more likely to implement the same mitigation strategies if neighboring counties have done so. The results also revealed that disaster experience, government capacity, and strong democratic support significantly affect the likelihood of adopting LHMPs and HMGP projects. Additionally, the results suggested that disadvantaged communities were more likely to adopt mitigation policies, while affluent communities were less likely to do so. The second paper evaluates the effectiveness of FEMA's Program Administration by State (PAS) pilot. By integrating the Propensity Score Matching (PSM) technique with Difference-in-Differences (DID) analysis, the empirical evidence demonstrated a significant reduction in approval times for both LHMPs and HMGP projects in pilot states compared to non-pilot states, with an average reduction nearing 30%. This suggests that the PAS program has effectively streamlined administrative processes, thereby enhancing efficiency in disaster management within pilot states. The analysis also indicated that the impact of PAS on the actual funding received through HMGP was insignificant, suggesting that while administrative processes were expedited, the allocation of financial resources remained unaffected. The third paper attempts to understand how local governments respond to top-down policy pressures in vertical diffusion by analyzing the textual similarity of hazard mitigation strategies between state hazard mitigation plans (SHMPs) and county LHMPs in Ohio using word-embedding techniques. The study employs the Word2Vec algorithm to assess the policy similarity between the hazard mitigation goals outlined in LHMPs and SHMPs. Building on this initial analysis, the research further uses a Beta regression model to examine the textual similarities among Ohio LHMPs, focusing on how the type of author (government staff versus private consultants) and the nature of the goals (action-based or hazard-based) affect these alignments. The regression analysis shows that LHMPs authored by government entities tend to exhibit higher textual similarity, reflecting the influence of standardized approaches driven by state and federal guidelines. This suggests a compliance-driven alignment in government-written plans. Conversely, LHMPs authored by private consultants display greater variability, suggesting that these plans are customized to the specific needs and risk assessments of local communities.
Additionally, the regression results indicate that action-based and mixed-goal LHMPs are associated with higher textual similarity across counties. To carry out the empirical analysis described above, this dissertation builds a panel dataset covering all counties from 2000 to 2020, containing data on LHMPs, Hazard Mitigation Assistance (HMA) projects, disaster risks, socioeconomic characteristics, and regional economic and political indicators. / Doctor of Philosophy / Hazard mitigation in the United States is a critical issue, especially as the frequency and cost of disasters continue to rise. This dissertation investigates the dynamics of hazard mitigation planning within a multi-level governmental framework, focusing on the adoption of Federal Emergency Management Agency (FEMA) approved Local Hazard Mitigation Plans (LHMPs), Hazard Mitigation Grant Program (HMGP) projects, and Pre-Disaster Mitigation (PDM) projects across U.S. counties, and on the influence of federal and state policies on these local initiatives. The first paper examines the horizontal diffusion of LHMPs among local communities, revealing the significant influence of neighboring counties. This "neighboring effect" shows that counties are more likely to adopt similar mitigation strategies if their neighbors have done so, emphasizing the role of regional collaboration in spreading effective disaster resilience practices. Additionally, the study found that counties with more disaster experience and greater governmental capacity are more likely to implement LHMPs, highlighting the importance of preparedness and resources in driving policy adoption. Furthermore, this research finds that counties with higher socioeconomic disadvantage are more proactive in adopting mitigation policies, which could be attributed to the higher perceived risks and the federal funding targeted at these communities. The second paper evaluates the impact of FEMA's Program Administration by State (PAS) pilot program on the administrative efficiency of LHMP and HMGP approvals. The findings indicate a significant reduction in approval times in pilot states, suggesting that the PAS program has successfully streamlined administrative processes. However, this expedited process did not lead to increased funding or broader adoption, pointing to the need for further policy enhancements to ensure that administrative improvements translate into real-world benefits for disaster preparedness. The third paper explores the vertical diffusion of policy from state to local governments, using Ohio as a case study. It employs advanced text analysis to measure the similarity between state and local hazard mitigation plans. The results show that government-authored LHMPs tend to closely follow state guidelines, indicating a top-down influence that ensures compliance with federal and state objectives. In contrast, LHMPs authored by private consultants are more varied and aligned with the specific needs and risks of local communities. This suggests that a balance is needed between standardized policies and local customization to effectively address the unique challenges of different regions. By integrating these findings, this dissertation provides a comprehensive overview of how hazard mitigation policies are adopted and implemented across various governmental levels. The research concludes with policy recommendations that advocate for sustained reforms in hazard mitigation funding, emphasizing the need for equitable resource distribution among disadvantaged communities.
It also offers critical insights into improving intergovernmental cooperation and policy effectiveness, ensuring that all communities, regardless of their socio-economic status, can enhance their resilience and better prepare for future disasters. This research ultimately serves as a guide for policymakers to refine strategies that foster robust, community-centered resilience practices, enhancing the nation's overall disaster preparedness and response capabilities.
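To illustrate the third paper's similarity measurement in miniature, here is a hedged sketch of the Word2Vec-plus-cosine approach on hypothetical mitigation-goal sentences; the dissertation's actual corpus, preprocessing, and hyperparameters are not given here, and on a toy corpus this small the scores are only directional:

```python
import numpy as np
from gensim.models import Word2Vec

# Hypothetical mitigation-goal sentences standing in for SHMP/LHMP text.
state_goal = "reduce flood risk through floodplain management and public education".split()
county_goal = "mitigate flooding impacts via floodplain regulation and outreach".split()
other_goal = "expand broadband access in rural areas".split()

corpus = [state_goal, county_goal, other_goal]
model = Word2Vec(corpus, vector_size=50, min_count=1, epochs=200, seed=0)

def doc_vec(tokens):
    """Average the word vectors to get a crude document embedding."""
    return np.mean([model.wv[t] for t in tokens], axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(doc_vec(state_goal), doc_vec(county_goal)))  # related goals
print(cosine(doc_vec(state_goal), doc_vec(other_goal)))   # unrelated goal
```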
876

Improving efficiency in logistics operations of the wood fiber supply chain

Fallas Valverde, Paula Daniela 01 April 2019 (has links)
There is a gap in the research regarding applications of Lean tools in the wood fiber supply chain. A value stream map (VSM) tool focused on identifying Lean waste in logistics operations was developed and applied to three case-study firms: a paper mill, a sawmill, and a logger. Use of the VSM tool revealed an absence of structured methods to select and assess suppliers, which promotes a fluctuating environment for suppliers. Therefore, a tool that implements a hierarchy system to categorize suppliers was developed, verified, and validated. Through the VSM implementation, the author found a lack of information sharing between supply chain stakeholders, which causes a reactive environment for the industry. Improvements in wood flow planning, tract allocation, truck scheduling, and communication were projected as a future state of the system. The potential annual savings in total cost from implementing the projected improvements were $306,232 for the paper mill, $312,085 for the sawmill, and $756,504 for the logger. As a result of the findings obtained through the VSMs, a supplier selection model was designed. The tool was implemented into software for the wood industry and was then verified and validated. The verification process consisted of comparing the output against previously known results and was carried out through seven interviews with different stakeholders. Appropriate application of the supplier selection tool improves the way in which companies in the wood industry select and assess their suppliers and guarantees that the best alternatives are selected. / Master of Science / In the wood fiber supply chain, integration between the different parties has proven to be a difficult task. An innovative lean-logistics value stream map (VSM) tool was developed to evaluate the current and future state of a supply chain. Once the tool was developed, it was used to map the wood fiber supply chain, determine and measure key performance metrics, calculate the cost of logistics operations, and identify potential sources of waste. Three case studies representing common wood fiber supply chains were conducted to develop three current-state VSMs for selected value streams. The lack of communication between supply chain partners was determined to be the most significant source of waste in all three cases. Lack of communication can lead to idle equipment, unnecessary waiting times, excessive inventories, overproduction, and excessive transportation and movement. As a result of the findings obtained through the VSMs, which revealed the absence of structured methods to select and assess suppliers, a supplier selection model was designed. The tool was implemented into software for the wood industry and was then verified and validated. The verification process consisted of comparing the output against previously known results and was carried out through seven interviews with different stakeholders. Appropriate application of the supplier selection tool improves the way in which companies in the wood industry select and assess their suppliers and guarantees that the best alternatives are selected, thus increasing the chance of a successful relationship and increasing the value that the company gets from its supplier base.
877

[pt] MEDIDAS DE SIMILARIDADE ENTRE SÉRIES TEMPORAIS / [en] TIME SERIES SIMILARITY MEASURES

JOSE LUIZ DO NASCIMENTO DE AGUIAR 27 October 2016 (has links)
[pt] Atualmente, uma tarefa muito importante na mineração de dados é compreender como extrair os dados mais informativos dentre um número muito grande de dados. Uma vez que todos os campos de conhecimento apresentam uma grande quantidade de dados que precisam ser reduzidas até as informações mais representativas, a abordagem das séries temporais é definitivamente um método muito forte para representar e extrair estas informações. No entanto nós precisamos ter uma ferramenta apropriada para inferir os dados mais significativos destas séries temporais, e para nos ajudar, podemos utilizar alguns métodos de medida de similaridade para saber o grau de igualdade entre duas séries temporais, e nesta pesquisa nós vamos realizar um estudo utilizando alguns métodos de similaridade baseados em medidas de distância e aplicar estes métodos em alguns algoritmos de clusterização para fazer uma avaliação de se existe uma combinação (método de similaridade baseado em distância / algoritmo de clusterização) que apresenta uma performance melhor em relação a todos os outros utilizados neste estudo, ou se existe um método de similaridade baseado em distância que mostra um desempenho melhor que os demais. / [en] Nowadays a very important task in data mining is understanding how to extract the most informative data from a very large amount of data. Since every field of knowledge has large volumes of data that must be summarized into the most representative information, the time series approach is definitely a very strong way to represent and extract this information (12, 22). On the other hand, we need an appropriate tool to extract the most significant data from these time series. To help us, we can use similarity methods to measure how similar one time series is to another. In this work we perform a study using several distance-based similarity methods and apply them in some clustering algorithms, in order to assess whether there is a combination (distance-based similarity method / clustering algorithm) that presents better performance than all the others used in this work, or whether there exists one distance-based similarity method that performs better than the others.
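As a concrete example of pairing a distance-based similarity measure with a clustering algorithm, the sketch below computes dynamic time warping (DTW) distances between toy series and feeds the resulting distance matrix into hierarchical clustering. DTW is one common choice among the distance measures such a study might compare; this is not the thesis's exact experimental setup:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def dtw(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy series: two noisy sines and one linear trend.
t = np.linspace(0, 2 * np.pi, 50)
series = [np.sin(t), np.sin(t + 0.3), 0.1 * t]

# Pairwise DTW distance matrix -> condensed form -> hierarchical clustering.
n = len(series)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = dtw(series[i], series[j])

labels = fcluster(linkage(squareform(dist), method="average"), t=2, criterion="maxclust")
print(labels)  # the two sines should share a cluster
```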
878

GENERATING SQL FROM NATURAL LANGUAGE IN FEW-SHOT AND ZERO-SHOT SCENARIOS

Asplund, Liam January 2024 (has links)
Making information stored in databases more accessible to users inexperienced in Structured Query Language (SQL) by converting natural language to SQL queries has long been a prominent research area in both the database and natural language processing (NLP) communities. Numerous approaches have been proposed for this task, such as encoder-decoder frameworks, semantic grammars, and, more recently, large language models (LLMs). When adapting LLMs to generate SQL queries from natural language questions, three notable methods are used: pretraining, transfer learning, and in-context learning (ICL). ICL is particularly advantageous in scenarios where hardware is limited, time is a concern, and large amounts of task-specific labeled data are unavailable. This study evaluates two ICL strategies, zero-shot and few-shot prompting, using the Mistral-7B-Instruct LLM. The few-shot scenarios were evaluated with two example-selection techniques: random selection and Jaccard similarity. The zero-shot scenarios served as a baseline for the few-shot scenarios to beat, and the results matched expectations: few-shot prompting with Jaccard similarity outperformed the other two methods, few-shot prompting with random selection came second, and the zero-shot scenarios performed worst. Evaluation results based on execution accuracy and exact-matching accuracy confirm that leveraging similarity when selecting demonstration examples for the prompt enhances the model's knowledge of the database schema and table names used during inference, leading to more accurately generated SQL queries than leveraging diversity in demonstration examples.
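A minimal sketch of Jaccard-based demonstration selection, as described above, might look as follows; the example pool, tokenization, and prompt format are illustrative assumptions rather than the thesis's exact setup:

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: intersection over union of two token sets."""
    return len(a & b) / len(a | b)

def select_demonstrations(question: str, pool: list[dict], k: int = 3) -> list[dict]:
    """Pick the k pool examples whose questions share the most tokens
    (by Jaccard similarity) with the incoming question."""
    q_tokens = set(question.lower().split())
    return sorted(
        pool,
        key=lambda ex: jaccard(q_tokens, set(ex["question"].lower().split())),
        reverse=True,
    )[:k]

# Hypothetical labeled pool of (question, SQL) pairs.
pool = [
    {"question": "How many students are enrolled?",
     "sql": "SELECT COUNT(*) FROM students;"},
    {"question": "List all course names.",
     "sql": "SELECT name FROM courses;"},
    {"question": "How many courses does each student take?",
     "sql": "SELECT student_id, COUNT(*) FROM enrollments GROUP BY student_id;"},
]

demos = select_demonstrations("How many students take each course?", pool, k=2)
prompt = "\n\n".join(f"Q: {d['question']}\nSQL: {d['sql']}" for d in demos)
print(prompt)  # prepended to the user question before calling the LLM
```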
879

SubRosa: Determining Movie Similarities based on Subtitles

Luhmann, Jan, Burghardt, Manuel, Tiepmar, Jochen 26 June 2024 (has links)
For streaming websites, media shopping platforms, and movie databases, movie recommendation systems have become an important technology, where mostly hybrid methods of collaborative and content-based filtering on the basis of user ratings and user-generated content have proven to be effective. However, these methods can lead to popularity-biased results that underrepresent movies for which only little user-generated data exists. In this paper we discuss the possibility of generating movie recommendations that are based not on user-generated data or metadata but solely on the content of the movies themselves, confining ourselves to movie dialog. We extract low-level features from movie subtitles using methods from Information Retrieval, Natural Language Processing, and Stylometry, and examine a possible correlation between these features' similarity and overall movie similarity. In addition, we present a novel web application called SubRosa (http://ch01.informatik.uni-leipzig.de:5001/), which can be used to interactively compare the results of different feature combinations.
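As a simplified illustration of measuring movie similarity from dialog alone, the sketch below compares hypothetical subtitle texts using TF-IDF vectors and cosine similarity, one basic Information Retrieval feature of the kind the paper examines:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical subtitle texts (in practice, parsed from .srt files).
subtitles = {
    "Movie A": "we have to get out of here before the storm hits",
    "Movie B": "the storm is coming we must leave now",
    "Movie C": "the quarterly earnings report shows strong growth",
}

titles = list(subtitles)
tfidf = TfidfVectorizer(stop_words="english").fit_transform(subtitles.values())
sim = cosine_similarity(tfidf)  # pairwise cosine similarity of dialog vectors

for i, a in enumerate(titles):
    for j in range(i + 1, len(titles)):
        print(f"{a} vs {titles[j]}: {sim[i, j]:.2f}")
```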
880

The Vectorian API – A Research Framework for Semantic Textual Similarity (STS) Searches

Burghardt, Manuel, Liebl, Bernhard 26 June 2024 (has links)
No description available.
