51

Thinking Outside The Grid: Structural Design Through Multi-parametric Growth and Self-Adaptive Analysis

Kahn, Sergey 19 September 2017 (has links)
No description available.
52

Inscape

Kang, Joong-Hoon January 2006 (has links)
No description available.
53

Group-Envy Fairness in the Stochastic Bandit Setting

Scinocca, Stephen 29 September 2022 (has links)
We introduce a new, group fairness-inspired stochastic multi-armed bandit problem in the pure exploration setting. We look at the discrepancy between an arm's mean reward for a group and the highest mean reward any arm gives that group, and call this the disappointment that group suffers from that arm. We define the optimal arm to be the one that minimizes the maximum disappointment over all groups. This optimal arm addresses a problem with maximin fairness, where the group used to choose the maximin best arm suffers little disappointment regardless of which arm is picked, while another group suffers significantly more disappointment when that arm is declared the best one. The challenge of this problem is that the highest mean reward for a group, and the arm that achieves it, are unknown. This means we need to pull arms for multiple goals: to find the optimal arm, and to estimate the highest mean reward of certain groups. This leads to a new adaptive sampling algorithm for best-arm identification in the fixed-confidence setting, MD-LUCB (Minimax Disappointment LUCB). We prove bounds on MD-LUCB's sample complexity and then study its performance with empirical simulations. / Graduate
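The minimax-disappointment objective the abstract defines can be made concrete with a short sketch. The snippet below assumes the mean-reward matrix is already known; the actual MD-LUCB algorithm must estimate these means from adaptively collected samples, which this sketch does not attempt. The function name and toy values are illustrative only.

```python
import numpy as np

def minimax_disappointment_arm(mu):
    # mu[a, g]: mean reward of arm a for group g (assumed known here for illustration).
    # Disappointment of group g under arm a is max_a' mu[a', g] - mu[a, g];
    # the optimal arm minimizes the worst disappointment over all groups.
    best_per_group = mu.max(axis=0)        # highest mean any arm gives each group
    disappointment = best_per_group - mu   # shape (num_arms, num_groups)
    return int(np.argmin(disappointment.max(axis=1)))

# toy instance with 3 arms and 2 groups
mu = np.array([[0.9, 0.2],
               [0.5, 0.6],
               [0.3, 0.7]])
print(minimax_disappointment_arm(mu))  # arm 1: worst disappointment 0.4 (vs 0.5 and 0.6)
```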
54

Three Essays on HRM Algorithms: Where Do We Go from Here?

Cheng, Minghui January 2024 (has links)
The field of Human Resource Management (HRM) has experienced a significant transformation with the emergence of big data and algorithms. Major technology companies have introduced software and platforms for analyzing various HRM practices, such as hiring, compensation, employee engagement, and turnover management, utilizing algorithmic approaches. However, scholarly research has taken a cautious stance, questioning the strategic value and causal inference basis of these tools, while also raising concerns about bias, discrimination, and ethical issues in the applications of algorithms. Despite these concerns, algorithmic management has gained prominence in large organizations, shaping workforce management practices. This thesis aims to address the gap between the rapidly changing market of HRM algorithms and the lack of theoretical understanding. The thesis begins by conducting a comprehensive review of HRM algorithms in HRM practice and scholarship, clarifying their definition, exploring their unique features, and identifying specific topics and research questions in the field. It aims to bridge the gap between academia and practice to enhance the understanding and utilization of algorithms in HRM. I then explore the legal, causal, and moral issues associated with HR algorithms, comparing fairness criteria and advocating for the use of causal modeling to evaluate algorithmic fairness. The multifaceted nature of fairness is illustrated and practical strategies for enhancing justice perceptions and incorporating fairness into HR algorithms are proposed. Finally, the thesis adopts an artifact-centric approach to examine the ethical implications of HRM algorithms. It explores competing views on moral responsibility, introduces the concept of "ethical affordances," and analyzes the distribution of moral responsibility based on different types of ethical affordances. The paper provides a framework for analyzing and assigning moral responsibility to stakeholders involved in the design, use, and regulation of HRM algorithms. Together, these papers contribute to the understanding of algorithms in HRM by addressing the research-practice gap, exploring fairness and accountability issues, and investigating the ethical implications. They offer theoretical insights, practical recommendations, and future research directions for both researchers and practitioners. / Thesis / Doctor of Philosophy (PhD) / This thesis explores the use of advanced algorithms in Human Resource Management (HRM) and how they affect decision-making in organizations. With the rise of big data and powerful algorithms, companies can analyze various HR practices like hiring, compensation, and employee engagement. However, there are concerns about biases and ethical issues in algorithmic decision-making. This research examines the benefits and challenges of HRM algorithms and suggests ways to ensure fairness and ethical considerations in their design and application. By bridging the gap between theory and practice, this thesis provides insights into the responsible use of algorithms in HRM. The findings of this research can help organizations make better decisions while maintaining fairness and upholding ethical standards in HR practices.
55

A User-Centered Design Approach to Evaluating the Usability of Automated Essay Scoring Systems

Hall, Erin Elizabeth 21 September 2023 (has links)
In recent years, rapid advancements in computer science, including increased capabilities of machine learning models like Large Language Models (LLMs) and the accessibility of large datasets, have facilitated the widespread adoption of AI technology, such as ChatGPT, underscoring the need to design and evaluate these technologies with ethical considerations for their impact on students and teachers. Specifically, the rise of Automated Essay Scoring (AES) platforms has made it possible to provide real-time feedback and grades for student essays. Despite the increasing development and use of AES platforms, limited research has specifically focused on AI explainability and algorithm transparency and their influence on the usability of these platforms. To address this gap, we conducted a qualitative study on an AI-based essay writing and grading platform, with a primary focus on exploring the experiences of students and graders. The study aimed to explore the usability aspects related to explainability and transparency and their implications for computer science education. Participants took part in surveys, semi-structured interviews, and a focus group. The findings reveal important considerations for evaluating AES systems, including the clarity of feedback and explanations, impact and actionability of feedback and explanations, user understanding of the system, trust in AI, major issues and user concerns, system strengths, user interface, and areas of improvement. These proposed key considerations can help guide the development of effective essay feedback and grading tools that prioritize explainability and transparency to improve usability in computer science education. / Master of Science / In recent years, rapid advancements in computer science have facilitated the widespread adoption of AI technology across various educational applications, highlighting the need to design and evaluate these technologies with ethical considerations for their impact on students and teachers. Nowadays, there are Automated Essay Scoring (AES) platforms that can instantly provide feedback and grades for student essays. AES platforms are computer programs that use artificial intelligence to automatically assess and score essays written by students. However, not much research has looked into how these platforms work and how understandable they are for users. Specifically, AI explainability refers to the ability of AES platforms to provide clear and coherent explanations of how they arrive at their assessments. Algorithm transparency, on the other hand, refers to the degree to which the inner workings of these AI algorithms are open and understandable to users. To fill this gap, we conducted a qualitative study on an AI-based essay writing and grading platform, aiming to understand the experiences of students and graders. We wanted to explore how clear and transparent the platform's feedback and explanations were. Participants shared their thoughts through surveys, interviews, and a focus group. The study uncovered important factors to consider when evaluating AES systems. These factors include the clarity of the feedback and explanations provided by the platform, the impact and actionability of the feedback, how well users understand the system, their level of trust in AI, the main issues and concerns they have, the strengths of the system, the user interface's effectiveness, and areas that need improvement.
By considering these findings, developers can create better essay feedback and grading tools that are easier to understand and use.
56

Artificial intelligence in financial services: systemic implications and regulatory responses

Kapsis, Ilias 08 July 2020 (has links)
No / The article discusses the expansion of Artificial Intelligence (AI) in the financial services industry. Financial institutions see in AI opportunities for efficiency gains, improved profitability, and differentiation in building competitive advantages, as well as a means to improve reporting and compliance processes.
57

Celestial Dreams

Knudson, Gary 08 1900 (has links)
Celestial Dreams is a three-movement work for chamber ensemble. This piece employs algorithmic processes (coded in BASIC and Pascal) that generate poetic text and convert it to musical pitches. The three movements contain coherent structures that allow for the complete integration of all ensemble members into the texture and for the flexibility to have one ensemble member emerge as the musical foreground. The chamber ensemble includes strings, tape, slides, and a narrator, who recites the poetic text which forms the foundation of the piece.
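The abstract does not specify how the text-to-pitch conversion works. As a purely hypothetical illustration of the kind of mapping such a process might use (not the piece's actual BASIC or Pascal routines), one could assign pitch classes to letters modulo 12:

```python
# Hypothetical sketch only: the piece's actual BASIC/Pascal mapping is not documented here.
PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def text_to_pitches(line):
    # map each letter to a pitch class by its alphabetical position, modulo 12
    return [PITCH_CLASSES[(ord(ch) - ord("a")) % 12] for ch in line.lower() if ch.isalpha()]

print(text_to_pitches("celestial dreams"))
```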
58

Bi-Objective Optimization of Kidney Exchanges

Xu, Siyao 01 January 2018 (has links)
Matching people to their preferences is an algorithmic topic with real-world applications. One such application is the kidney exchange. The best "cure" for patients whose kidneys are failing is to replace a failing kidney with a healthy one. Unfortunately, biological factors (e.g., blood type) constrain the number of possible replacements. Kidney exchanges alleviate some of this pressure by allowing a donor to give their kidney to a patient other than the one they most care about; in turn, that patient's donor gives her kidney to the patient the first donor most cares about. Roth et al. first discussed the classic kidney exchange problem. Freedman et al. expanded upon this work by optimizing a second objective in addition to maximal matching. In this work, I implement the traditional kidney exchange algorithm and expand upon more recent work by considering multi-objective optimization of the exchange. In addition, I compare the use of 2-cycles to 3-cycles. I offer two hypotheses regarding the results of my implementation. I end with a summary and a discussion of potential future work.
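A minimal sketch, under simplifying assumptions, of the single-objective version of the problem: enumerate 2- and 3-cycles in a toy compatibility graph and exhaustively pick a vertex-disjoint set of cycles that matches the most pairs. The thesis's bi-objective formulation and actual implementation are not reproduced here; the function names and toy instance are illustrative.

```python
from itertools import permutations

def enumerate_cycles(compatible, max_len=3):
    # compatible[p] = set of patient-donor pairs whose patient can receive from p's donor
    pairs = sorted(compatible)
    cycles = []
    for length in range(2, max_len + 1):
        for combo in permutations(pairs, length):
            if combo[0] != min(combo):   # fix the rotation so each directed cycle appears once
                continue
            if all(combo[(i + 1) % length] in compatible[combo[i]] for i in range(length)):
                cycles.append(combo)
    return cycles

def best_packing(cycles):
    # exhaustive search for vertex-disjoint cycles that transplant the most patients
    best = []

    def recurse(idx, used, chosen):
        nonlocal best
        if sum(map(len, chosen)) > sum(map(len, best)):
            best = list(chosen)
        for j in range(idx, len(cycles)):
            if used.isdisjoint(cycles[j]):
                recurse(j + 1, used | set(cycles[j]), chosen + [cycles[j]])

    recurse(0, set(), [])
    return best

# toy instance: pair -> pairs its donor is compatible with
compatible = {"A": {"B"}, "B": {"C"}, "C": {"A"}, "D": {"E"}, "E": {"D"}}
print(best_packing(enumerate_cycles(compatible)))  # matches all five pairs: a 2-cycle and a 3-cycle
```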
59

Programming models for speculative and optimistic parallelism based on algorithmic properties

Cledat, Romain 24 August 2011 (has links)
Today's hardware is becoming more and more parallel. While embarrassingly parallel codes, such as high-performance computing ones, can readily take advantage of this increased number of cores, most other types of code cannot easily scale using traditional data and/or task parallelism, and cores are therefore left idling, resulting in lost opportunities to improve performance. The opportunistic computing paradigm, on which this thesis rests, is the idea that computations should dynamically adapt to and exploit the opportunities that arise from idling resources to enhance their performance or quality. In this thesis, I propose to utilize algorithmic properties to develop programming models that leverage this idea, thereby providing models that both increase and improve the parallelism that can be exploited. I exploit three distinct algorithmic properties: i) algorithmic diversity, ii) the semantic content of data structures, and iii) the variable nature of results in certain applications. This thesis presents three main contributions: i) the N-way model, which leverages algorithmic diversity to speed up hitherto sequential code; ii) an extension to the N-way model, which opportunistically improves the quality of computations; and iii) a framework allowing the programmer to specify the semantics of data structures to improve the performance of optimistic parallelism.
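The N-way model's core idea, running several algorithmically diverse implementations of the same computation and keeping whichever finishes first, can be sketched as follows. This is a minimal thread-based illustration, not the thesis's actual runtime; the variant functions are made-up examples.

```python
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def n_way(variants, *args):
    # Run algorithmically diverse implementations of the same computation concurrently
    # and return the result of whichever variant finishes first.
    pool = ThreadPoolExecutor(max_workers=len(variants))
    futures = {pool.submit(v, *args): v.__name__ for v in variants}
    done, pending = wait(futures, return_when=FIRST_COMPLETED)
    for f in pending:
        f.cancel()                 # best effort: variants already running are not interrupted
    pool.shutdown(wait=False)      # do not block on the losers
    winner = next(iter(done))
    return futures[winner], winner.result()

# toy example: two ways to answer the same primality question, with different costs per input
def trial_division(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def fermat_base2(n):               # probabilistic test: fast, but can be fooled by pseudoprimes
    return n > 1 and pow(2, n - 1, n) == 1

print(n_way([trial_division, fermat_base2], 1_000_003))
```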
60

Teoria da informação algorítmica, eficiência relativa de mercado e perda de memória em séries de retornos de alta frequência em ativos negociados na BM&F BOVESPA. / Algorithmic information theory, relative market efficiency and memory loss in high frequency asset return series traded at BM&F BOVESPA.

Ranciaro Neto, Adhemar 05 July 2010 (has links)
This work has three aims: 1) to apply Kolmogorov's algorithmic complexity theory, using the measure proposed by Lempel and Ziv (1976), and analyze how that measure behaves under changes in parameters such as window size, jump size, and stability region in high-frequency return series of assets traded on the BM&F BOVESPA; 2) to assess how the measure evolves as the intervals between trades are widened; and 3) to check for possible evidence of a relationship between the value of the complexity measure and the behavior of the autocorrelation curves obtained for each specified trading interval. We also discuss the relative market efficiency criterion proposed by Giglio (2008). / Fundação de Amparo a Pesquisa do Estado de Alagoas
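The Lempel-Ziv (1976) measure referenced in the abstract counts phrases in an exhaustive parsing of a symbol sequence. The sketch below is a minimal illustration on a binarized return series; the thesis's exact discretization, window, and normalization choices are not reproduced here, and the median-based symbolization is an assumption.

```python
def lz76_complexity(s):
    # Lempel-Ziv (1976) complexity: number of phrases in an exhaustive left-to-right parsing.
    # Each phrase is extended while it still occurs as a substring of everything before its
    # last symbol; when it no longer does, a new phrase is counted and parsing continues.
    n, i, phrases = len(s), 0, 0
    while i < n:
        length = 1
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        phrases += 1
        i += length
    return phrases

def binarize(returns):
    # crude symbolization: 1 if the return is above the sample median, else 0
    med = sorted(returns)[len(returns) // 2]
    return "".join("1" if r > med else "0" for r in returns)

# toy example on made-up returns; real use would slide a window over intraday return series
returns = [0.004, -0.002, 0.001, -0.003, 0.002, 0.000, -0.001, 0.005, -0.004, 0.002]
print(lz76_complexity(binarize(returns)))
```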
