  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
331

A study of investigating organisational justice perceptions and experiences of affirmative action in a learning and development organisation

George, Munique January 2011 (has links)
Magister Commercii (Industrial Psychology) - MCom(IPS) / There have been good arguments made for the development of aggressive affirmative action policies with the end goal of quickly moving black South Africans into senior corporate and management ranks within organisations. One of the central arguments in favour of aggressive AA policies is the risk of racial polarization post-apartheid should a quick fix not be initiated. It also makes good business and economic sense to implement AA policies: a growing base of black consumers, coupled with black managers, should ultimately lower unemployment and crime through job creation and job security for the representative majority. / South Africa
332

Towards QoE-Aware Dynamic Adaptive Streaming Over HTTP

Sobhani, Ashkan January 2017 (has links)
HTTP Adaptive Streaming (HAS) has now become ubiquitous, and it accounts for a large proportion of multimedia delivery over the Internet. Consequently, it poses new challenges for content providers and network operators. In this study, we aim to improve the user’s Quality of Experience (QoE) for HAS using two main approaches: a client-centric approach and a network-assisted approach. In the client-centric approach, we address the issue of enhancing the client’s QoE by proposing a fuzzy logic–based video bitrate adaptation and prediction mechanism for Dynamic Adaptive Streaming over HTTP (DASH) players. This adaptation mechanism allows HAS players to take appropriate actions sooner than existing methods to prevent playback interruptions caused by buffer underrun and to reduce the ON-OFF traffic phenomenon, which causes instability and unfairness among competing players. Our results show that, compared to other studied methods, our proposed method has two advantages: better fairness among multiple competing players, by almost 50% on average and as much as 80%, as indicated by Jain’s fairness index; and better perceived video quality, by almost 8% on average and as much as 17%, according to the eMOS model. In the network-assisted approach, we propose a novel mechanism for HAS stream adaptation in the context of wireless mobile networks. The proposed mechanism leverages recent advances in the 3GPP DASH specification, including the optional feature of QoE measurement and reporting for DASH clients. As part of the proposed mechanism, we formulate a utility-maximization problem that incorporates factors influencing QoE to specify the optimum values of Quality of Service (QoS)-related parameters for HAS streams within a wireless mobile network.
The results of our simulations demonstrate that our proposed system results in better perceived quality of video, measured by Mean Opinion Score (MOS), by almost 7% on average, while lowering the freezing period by almost 20% on average across HAS users when compared to other approaches where HAS users only rely on local adaptation logics.
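Jain's fairness index, which the abstract uses to quantify fairness among competing players, has a standard closed form; a minimal sketch (the function name is mine):

```python
def jains_fairness_index(rates):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2).
    Ranges in (0, 1]; equals 1 only when all players receive the same rate."""
    n = len(rates)
    total = sum(rates)
    sum_sq = sum(r * r for r in rates)
    return (total * total) / (n * sum_sq)
```

For example, four players sharing bandwidth perfectly equally score 1.0, while one player taking everything scores 1/n.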
333

In a democracy, what should a healthcare system do?

Oswald, Malcolm Leslie January 2013 (has links)
In a democracy, what should a healthcare system do? It is a question of relevance to many disciplines. In this thesis, I examine that question, and add something original to the existing debate by drawing on, and synthesizing, thinking from several disciplines, especially philosophical ethics, economics, and systems theory. Paper 6 in this thesis, entitled “In a democracy, what should a healthcare system do?”, tackles the thesis question directly. The central conclusion of that paper, and of this thesis, is that a healthcare system in a democracy should do as much good as possible, although sometimes some overall good should be sacrificed for the sake of fairness, as John Broome has argued. However, what counts as the good of healthcare, and when good should be traded off for fairness, depend on one's weltanschauung (or worldview). Political pluralism is normal, and every democracy has institutions and processes for making policy when people disagree because their worldviews differ. Ultimately, elected policymakers are accountable for making decisions. This analysis is complemented by paper 5, entitled “Accountability for reasonableness – as unfair as QALYs?”. It assesses the vulnerability of three theories of resource allocation to injustice. It concludes that Daniels and Sabin's accountability for reasonableness approach is vulnerable because it does not require evidence of costs and benefits. Maximising quality-adjusted life years can also lead to large-scale injustice because it is concerned only with health gain, and not with fairness. I conclude that a “good and fairness framework”, which is drawn from the writing of John Broome, is the least vulnerable to large-scale injustice. There are four other papers in this thesis. “What has the state got to do with healthcare?” (paper 3) makes the case for an important assumption underpinning this thesis, namely that the question of what a healthcare system should do is a question of public policy.
Paper 1, entitled “It’s time for rational rationing”, argues that efficiency gains are not inexhaustible, and that, if the NHS in England is to continue with its austerity programme, policymakers should assess whether it could do more good with the same money by doing different things. I explore how philosophical ethics can contribute to policy, and the importance of context when writing papers about policy, in “Should policy ethics come in one of two colours: green or white?” (paper 2) and “How can one be both a philosophical ethicist and a democrat?” (paper 4). These latter two papers, and much of the narrative within this thesis, explain how my thinking has developed during the course of my PhD, and why I have looked within and beyond philosophical ethics for an answer to my central research question.
334

Quantifying Information Leakage via Adversarial Loss Functions: Theory and Practice

January 2020 (has links)
abstract: Modern digital applications have significantly increased the leakage of private and sensitive personal data. While worst-case measures of leakage such as Differential Privacy (DP) provide the strongest guarantees, when utility matters, average-case information-theoretic measures can be more relevant. However, most such information-theoretic measures do not have clear operational meanings. This dissertation addresses this challenge. This work introduces a tunable leakage measure called maximal $\alpha$-leakage which quantifies the maximal gain of an adversary in inferring any function of a data set. The inferential capability of the adversary is modeled by a class of loss functions, namely, $\alpha$-loss. The choice of $\alpha$ determines specific adversarial actions ranging from refining a belief for $\alpha = 1$ to guessing the best posterior for $\alpha = \infty$; at these two values, maximal $\alpha$-leakage simplifies to mutual information and maximal leakage, respectively. Maximal $\alpha$-leakage is proved to have a composition property and be robust to side information. There is a fundamental disconnect between theoretical measures of information leakage and their applications in practice. This issue is addressed in the second part of this dissertation by proposing a data-driven framework for learning Censored and Fair Universal Representations (CFUR) of data. This framework is formulated as a constrained minimax optimization of the expected $\alpha$-loss where the constraint ensures a measure of the usefulness of the representation. The performance of the CFUR framework with $\alpha=1$ is evaluated on publicly accessible data sets; it is shown that multiple sensitive features can be effectively censored to achieve group fairness via demographic parity while ensuring accuracy for several a priori unknown downstream tasks.
Finally, focusing on worst-case measures, novel information-theoretic tools are used to refine the existing relationship between two such measures, $(\epsilon,\delta)$-DP and R\'enyi-DP. Applying these tools to the moments accountant framework, one can track the privacy guarantee achieved by adding Gaussian noise to Stochastic Gradient Descent (SGD) algorithms. Relative to state-of-the-art, for the same privacy budget, this method allows about 100 more SGD rounds for training deep learning models. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2020
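The $\alpha$-loss family that models the adversary above can be written down compactly. A sketch under the standard definition, where `p_true` is the probability the adversary's belief assigns to the true outcome (function name and interface are my choices):

```python
import math

def alpha_loss(p_true, alpha):
    """alpha-loss of a belief that puts probability p_true on the truth.
    alpha = 1 recovers log-loss (belief refinement);
    alpha -> infinity recovers the probability of error 1 - p_true
    (maximum-a-posteriori guessing)."""
    if alpha == 1:
        return -math.log(p_true)
    if math.isinf(alpha):
        return 1.0 - p_true
    return (alpha / (alpha - 1.0)) * (1.0 - p_true ** (1.0 - 1.0 / alpha))
```

The two endpoints match the abstract's description: refining a belief ($\alpha = 1$, log-loss) and guessing the best posterior ($\alpha = \infty$, probability of error).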
335

Reinforcement Learning Based Fair Edge-User Allocation for Delay-Sensitive Edge Computing Applications

Alchalabi, Alaa Eddin 15 November 2021 (has links)
Cloud Gaming systems are among the most challenging networked applications, since they deal with streaming high-quality and bulky video in real time to players’ devices. While all industry solutions today are centralized, we introduce an AI-assisted hybrid networking architecture that, in addition to the central cloud servers, also uses some players’ computing resources as additional points of service. We describe the problem, its mathematical formulation, and a potential solution strategy. Edge computing is a promising paradigm that brings servers closer to users, leading to lower latencies and enabling latency-sensitive applications such as cloud gaming, virtual/augmented reality, telepresence, and telecollaboration. Due to the high number of possible edge servers and incoming user requests, the optimum choice of user-server matching has become a difficult challenge, especially in the 5G era, where the network can offer very low latencies. In this thesis, we introduce the problem of fair server selection as not only complying with an application's latency threshold but also reducing the variance of the latency among users in the same session. Due to the dynamic and rapidly evolving nature of such an environment and the capacity limitation of the servers, we propose as a solution a Reinforcement Learning method in the form of a Quadruple Q-Learning model with action suppression, Q-value normalization, and a reward function that minimizes the variance of the latency. Our evaluations in the context of a cloud gaming application show that, compared to existing methods, our proposed method not only better meets the application's latency threshold but is also fairer, with a reduction of up to 35% in the standard deviation of the latencies when using the geo-distance, and it shows improvements in fairness of up to 18.7% compared to existing solutions using the RTT delay, especially during resource scarcity.
Additionally, the RL solution can act as a heuristic algorithm even when it is not fully trained. While designing this solution, we also introduced action suppression, Quadruple Q-Learning, and normalization of the Q-values, leading to a more scalable and implementable RL system. We focus on algorithms for distributed applications and especially esports, but the principles we discuss apply to other domains and applications where fairness can be a crucial aspect to be optimized.
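The abstract describes the reward function only as one that minimizes latency variance while meeting the application's threshold. A hypothetical sketch of such a reward (the names, weighting, and exact form are my assumptions, not the thesis's formulation):

```python
import statistics

def fairness_reward(latencies, threshold, lam=1.0):
    """Hypothetical reward for the edge-allocation agent: penalize
    latencies above the application's threshold plus (weighted by lam)
    the variance of latencies across users in the same session.
    Maximal reward (0.0) is reached when all users are under the
    threshold with identical latencies."""
    violation = sum(max(0.0, lat - threshold) for lat in latencies)
    spread = statistics.pvariance(latencies)
    return -(violation + lam * spread)
```

An allocation with equal latencies under the threshold then strictly dominates one with the same mean latency but a large spread, which is the fairness behaviour the thesis targets.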
336

A Study of Fairness and Information Heterogeneity in Recommendation Systems

Altaf, Basmah 21 November 2019 (has links)
Recommender systems are an integral and successful application of machine learning in the e-commerce industry and in the everyday lives of online users. Recommendation algorithms are used extensively for news, music, books, points of interest, or travel recommendation, as well as in many other domains. Although much focus has been placed on improving recommendation quality, some real-world aspects are not considered: How can we ensure that top-n recommendations are fair and not biased by popularity-boosting events, such as awards for movies or songs? How can we recommend items to entities by explicitly considering information from heterogeneous sources? What is the best way to model sequential recommendation systems as heterogeneous, context-aware designs that learn on the fly from spatial, temporal, and social contexts? Can we model attributes and heterogeneous relations in a heterogeneous information network? The goal of this thesis is to pave the way towards the next generation of real-world recommendation systems, tackling fairness and information-heterogeneity challenges to improve the user experience while giving good recommendations. This thesis bridges recommendation and deep-learning techniques for representation learning by proposing novel techniques to address the above real-world problems. We focus on four directions: (1) model the effect of popularity bias over time on the consumption of items, (2) model the heterogeneous information associated with the sequential history of users and social links for sequential recommendation, (3) model the heterogeneous links and rich content of nodes in an academic heterogeneous information network, and (4) learn semantics using topic modeling for nodes based on their content and heterogeneous links in a heterogeneous information network.
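One simple way to illustrate correcting for a popularity boost (such as an award) when building top-n lists is to damp each item's raw score by its recent popularity. This is an illustrative sketch only, not the method developed in the thesis:

```python
import math

def debiased_score(raw_score, popularity, beta=0.5):
    """Illustrative popularity debiasing: damp the score of items whose
    popularity count is inflated (e.g., after an award), so top-n lists
    are not dominated by recently boosted items. beta controls the
    strength of the correction; beta = 0 disables it."""
    return raw_score / (1.0 + beta * math.log1p(popularity))
```

With this correction, two items with the same raw relevance score rank in favour of the less-boosted one, which is the kind of fairness effect the first research direction studies.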
337

Cultivating Community-Focused Norms in Law Enforcement: Servant Leadership, Accountability Systems, and Officer Attitudes

Baker, Daniel Brice January 2020 (has links)
No description available.
338

A Solution to optimal and fair rate adaptation in wireless mesh networks

Jansen van Vuuren, Pieter Albertus January 2013 (has links)
Current wireless networks still employ techniques originally designed for their fixed wired counterparts. These techniques make assumptions (such as a fixed topology, a static environment and non-mobile nodes) that are no longer valid in the wireless communication environment. Furthermore, the techniques and protocols used in wireless networks should take the number of users of a network into consideration, since the channel is a shared and limited resource. This study deals with finding an optimal solution to resource allocation in wireless mesh networks. These networks require a solution to fair and optimal resource allocation that is decentralised and self-configuring, as users in such networks do not submit to a central authority. The solution presented comprises two sections. The first section finds the optimal rate allocation by making use of a heuristic. The heuristic was developed by means of a non-linear mixed-integer mathematical formulation. This heuristic finds a feasible rate region that conforms to the set of constraints set forth by the wireless communication channel. The second section finds a fair allocation of rates among all the users in the network. This section is based on a game theory framework used for modelling the interaction observed between the users. The fairness model is defined in strategic form as a repeated game with an infinite horizon. The rate adaptation heuristic and fairness model employ a novel and effective information distribution technique. The technique makes use of the optimized link state routing protocol for information distribution, which reduces the overhead induced by utilising multi-point relays. In addition, a novel technique for enforcing cooperation between users in a network is presented. This technique is based on the Folk theorem and ensures cooperation by threat of punishment. The punishment, in turn, is executed in the form of banishment from the network.
The study describes the performance of the rate adaptation heuristic and fairness model when subject to fixed and randomised topologies. The fixed topologies were designed to control the amount of interference that a user would experience. Although these fixed topologies might not seem to reflect a real-world scenario, they provide a reasonable framework for comparison. The randomised network topology is introduced to more accurately represent a real-world scenario. Furthermore, the randomised network topologies consist of a significant number of users, illustrating the scalability of the solution. Both data and voice traffic have been applied to the rate adaptation heuristic and fairness model. It is shown that the heuristic effectively reduces the packet loss ratio, which drops below 5% after about 15 seconds for all fixed topologies. Furthermore, it is shown that the solution is near-optimal in terms of data rate and that a fair allocation of data rates among all nodes is achieved. When considering voice traffic, an increase of 10% in terms of data rate is observed compared to data traffic. The heuristic is successfully applied to large networks, demonstrating the scalability of the implementation. / Dissertation (MEng)--University of Pretoria, 2013. / gm2014 / Electrical, Electronic and Computer Engineering / unrestricted
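Folk-theorem-based enforcement of the kind described above, where cooperation is sustained by the threat of permanent banishment, follows the shape of a grim-trigger strategy in a repeated game with an infinite horizon. A minimal sketch (action labels and the function name are mine):

```python
def grim_trigger(opponent_history, cooperate="COOPERATE", punish="BANISH"):
    """Grim-trigger strategy for an infinitely repeated game: cooperate
    as long as the opponent always has; after any observed defection,
    punish (here: banish the node from the network) forever."""
    if any(action != cooperate for action in opponent_history):
        return punish
    return cooperate
```

Because the punishment is permanent, a sufficiently patient node gains more from indefinite cooperation than from a one-shot deviation, which is the mechanism the Folk theorem exploits.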
339

Férovost didaktických testů a jejich položek / Test and Item Fairness

Vlčková, Katarína January 2015 (has links)
No description available.
340

Distribution of Control Effort in Multi-Agent Systems : Autonomous systems of the world, unite!

Axelson-Fisk, Magnus January 2020 (has links)
As more industrial processes, transportation systems, and appliances have been automated or equipped with some level of artificial intelligence, the number and scale of interconnected systems has grown. This development can be expected to continue, so research into the performance of interconnected systems and networks is growing, and with networks scaling dynamically, research into scalable performance measures is advancing. Recently, the notion of gamma-robustness, a scalable network performance measure, was introduced to measure an interconnected system's robustness to external disturbances. This thesis investigates how the distribution of control effort and cost within an interconnected system affects network performance, measured with gamma-robustness. Further, we introduce a notion of fairness and a measurement of unfairness in order to quantify the distribution of network properties and performance. With these in place, we also present distributed algorithms with which the distribution of control effort can be controlled in order to achieve a desired network performance. We close with some examples to show the strengths and weaknesses of the presented algorithms. / As more and more systems and devices are equipped with varying degrees of intelligence, interconnected systems, also known as multi-agent systems, are growing in both prevalence and scale. Examples include traffic-management systems, control of electrical grids, and vehicle platooning, and more and more sensor networks are appearing as the Internet of Things and Industry 4.0 are increasingly adopted and developed. What distinguishes interconnected systems from more traditional systems with multiple inputs and outputs is that interconnected systems are not governed by a central controller.
Instead, interconnected systems are controlled in a distributed fashion: each agent controls itself and may even have individual goals it tries to fulfil. This complicates the analysis of interconnected systems, but earlier research has found rules and design principles for the agents and their interconnection that satisfy requirements such as stability and robustness. Yet even when interconnected systems are both robust and stable, they may have properties we want to control further. One such performance measure is a system's resilience to external disturbances; in ordinary unconnected systems there is an inherent trade-off between control-signal cost and resilience to external disturbances. The same trade-off appears in interconnected systems, but there we also find an additional dimension to the problem. Since a given level of network performance does not necessarily mean that every agent in the network attains that same level, the agents in a network may face different trade-offs between control-signal cost and resilience to external disturbances. As a result, some agents may have unnecessarily high control-signal costs, in the sense that the network would achieve the same performance at lower control-signal cost if several of the agents re-weighted their control efforts. In this thesis we have studied how different choices of control effort affect an interconnected system's performance. We did this to investigate how autonomous but interconnected agents can change their control effort, while maintaining network performance, and thereby reduce their control costs. Among other results, this has produced a distributed algorithm for adjusting the agents' control effort so that the differences in the agents' resilience to external disturbances decrease and the network performance increases.
We close the report by showing a couple of examples of how systems adapted with the proposed algorithm achieve improved performance. Finally, we discuss how certain assumptions about the system structure could be relaxed, and which areas future research could pursue.
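The thesis's measurement of unfairness quantifies how unevenly a network property is distributed across agents. As an illustration (the thesis's exact definition may differ), one can take the spread of per-agent gamma-robustness values:

```python
def unfairness(agent_gammas):
    """Illustrative unfairness measure for an interconnected system:
    the spread of the per-agent gamma-robustness values. 0 means the
    property is distributed perfectly fairly across the network;
    larger values mean some agents are far more resilient than others."""
    return max(agent_gammas) - min(agent_gammas)
```

A distributed algorithm of the kind the thesis proposes would then shift control effort between agents so that this spread shrinks while the network-level gamma-robustness is preserved or improved.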
