  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Strategies Healthcare Managers Use to Reduce Employee Turnover

Atkins, Christopher Sean 01 January 2019 (has links)
Healthcare managers who are unaware of the various strategies that exist for reducing turnover could adversely affect patient care, organizational morale and performance, and the achievement of organizational goals. The purpose of this qualitative multiple case study was to explore strategies healthcare supervisors used to reduce employee turnover. The participants comprised 3 senior healthcare managers located in central Texas responsible for hiring, firing, training, supervising, and successfully using strategies to reduce employee turnover. Herzberg's motivation-hygiene theory provided the conceptual framework. Data were collected from semistructured interviews and a review of company documents. Thematic analysis of the data resulted in 5 emergent themes: peer-to-peer feedback, valuing employees, rewards and incentives, opportunities for growth, and training programs. The results of this study might contribute to social change by enhancing healthcare managers' understanding of the strategies that can be used to reduce employee turnover and improve existing conditions among patients, their families, staff, communities, and organizations.
22

Can We Create a Circular Pharmaceutical Supply Chain (CPSC) to Reduce Medicines Waste?

Alshemari, Abdullah, Breen, Liz, Quinn, Gemma L., Sivarajah, Uthayasankar 05 December 2020 (has links)
The increase in pharmaceutical waste medicines is a global phenomenon and financial burden. The Circular Economy, as a philosophy within the pharmaceutical supply chain, aims to promote waste reduction, maximise medicines value, and enable sustainability within this supply chain (increasing circularity). Circularity strategies for pharmaceuticals are not currently implemented in many countries, due to quality and safety barriers. The aim of this study was to determine whether the application of circular economy principles can minimise pharmaceutical waste and support sustainability in the pharmaceutical supply chain. Methods: a detailed narrative literature review was conducted to examine pharmaceutical waste creation, management, disposal, and the application of circular economy principles. Results: the literature scrutinised revealed that pharmaceutical waste is created by multiple routes, each of which needs to be addressed by pharmacists and healthcare bodies through the Circular Economy 9R principles. These principles act as a binding mechanism for disparate waste management initiatives. Medicines, or elements of a pharmaceutical product, can be better managed to reduce waste and cost, and to avoid the negative environmental impacts of unsafe disposal. The study findings outline a Circular Pharmaceutical Supply Chain and suggest that it should be considered and tested as a sustainable supply chain proposition.
23

A Map-Reduce-Like System for Programming and Optimizing Data-Intensive Computations on Emerging Parallel Architectures

Jiang, Wei 27 August 2012 (has links)
No description available.
24

On the Feasibility of MapReduce to Compute Phase Space Properties of Graphical Dynamical Systems: An Empirical Study

Hamid, Tania 09 July 2015 (has links)
A graph dynamical system (GDS) is a theoretical construct that can be used to simulate and analyze the dynamics of a wide spectrum of real-world processes that can be modeled as networked systems. One of our goals is to compute the phase space of a system, and for this, even 30-vertex graphs present a computational challenge, because the number of state transitions needed to compute the phase space is exponential in the number of graph vertices. These problems thus pose memory and execution-speed challenges. To address this, we devise various MapReduce programming paradigms that can be used to characterize system state transitions and to compute phase spaces, functional equivalence classes, dynamic equivalence classes, and cycle equivalence classes of dynamical systems. We also evaluate these paradigms and analyze their suitability for modeling different GDSs. / Master of Science
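The phase-space computation the abstract describes maps naturally onto MapReduce: each of the 2^n states is mapped to its successor independently, and a reduce step groups predecessors by successor. A minimal local sketch of this idea (the graph, the threshold rule, and all names are illustrative assumptions, not the thesis's code):

```python
# Hypothetical sketch: phase space of a small Boolean graph dynamical system
# via a MapReduce-style map over all states, run locally without Hadoop.
from itertools import product
from collections import defaultdict

# Triangle graph; each vertex uses an example threshold rule:
# next state is 1 iff at least 2 of {self, neighbors} are 1.
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}

def successor(state):
    """Synchronous update: apply the local threshold rule at every vertex."""
    return tuple(
        1 if state[v] + sum(state[u] for u in neighbors[v]) >= 2 else 0
        for v in neighbors
    )

# "Map" phase: emit a (state, successor) pair for each of the 2^n states.
pairs = [(s, successor(s)) for s in product((0, 1), repeat=3)]

# "Reduce" phase: group predecessors by successor to build the phase space.
phase_space = defaultdict(list)
for s, t in pairs:
    phase_space[t].append(s)
```

The exponential blowup the abstract mentions is visible here: the map phase always emits 2^n pairs, which is why distributing it across a cluster becomes necessary beyond a few dozen vertices.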
25

The Effectiveness of Capital Punishment in Reducing the Violent Crime Rate

Pieton, Michael A. 25 May 2017 (has links)
No description available.
26

Scalable and Declarative Information Extraction in a Parallel Data Analytics System

Rheinländer, Astrid 06 July 2017 (has links)
Information extraction (IE) on very large data sets requires highly complex, scalable, and adaptive systems. Although numerous IE algorithms exist, their seamless and extensible combination in a scalable system is still a major challenge. This work presents a query-based IE system for a parallel data analysis platform, which is configurable for specific application domains and scales to terabyte-sized text collections. First, configurable operators are defined for basic IE and Web Analytics tasks, which can be used to express complex IE tasks in the form of declarative queries. All operators are characterized in terms of their properties to highlight the potential and importance of optimizing non-relational, user-defined operators (UDFs) in data flows. Subsequently, the state of the art in optimizing non-relational data flows is surveyed, showing that comprehensive optimization of UDFs is still a challenge. Based on this observation, an extensible logical optimizer (SOFA) is introduced, which incorporates the semantics of UDFs into the optimization process. SOFA analyzes a compact set of operator properties and combines automated analysis with manual UDF annotations to enable comprehensive optimization of data flows. SOFA is able to logically optimize arbitrary data flows from different application areas, resulting in significant runtime improvements compared to other techniques. Finally, the applicability of the presented system to terabyte-sized corpora is investigated; scalability and robustness of the employed methods and tools are evaluated systematically to pinpoint the most critical challenges in building an IE system for very large data sets.
27

Massively parallel computing for particle physics

Preston, Ian Christopher January 2010 (has links)
This thesis presents methods to run scientific code safely on a global-scale desktop grid. Current attempts to harness the world’s idle desktop computers face obstacles such as donor security, portability of code and privilege requirements. Nereus, a Java-based architecture, is a novel framework that overcomes these obstacles and allows the creation of a globally-scalable desktop grid capable of executing Java bytecode. However, most scientific code is written for the x86 architecture. To enable the safe execution of unmodified scientific code, we created JPC, a pure Java x86 PC emulator. The Nereus framework is applied to two tasks, a trivially parallel data generation task, BlackMax, and a parallelization and fault tolerance framework, Mycelia. Mycelia is an implementation of the Map-Reduce parallel programming paradigm. BlackMax is a microscopic blackhole event generator, of direct relevance for the Large Hadron Collider (LHC). The Nereus based BlackMax adaptation dramatically speeds up the production of data, limited only by the number of desktop machines available.
28

Smluvní pokuta podle obchodního zákoníku (se zaměřením na moderační oprávnění soudu) / Contractual penalty under the Commercial Code (with focus on the discretionary power of a judge to reduce a contractual penalty)

Mináčová, Michala January 2013 (has links)
A contractual penalty is a concept frequently used by parties to consolidate the position of the creditor as well as to motivate the debtor to fulfill the obligation as agreed. As with other institutes of private law, the practical application of contractual penalties raises many questions with no uniform answers. The purpose of the thesis is to analyze selected contentious issues concerning the contractual penalty, especially the discretionary power of a judge to reduce its unreasonable amount, to confront controversial theoretical opinions as well as non-conforming conclusions drawn from legal theory and the established practice of the courts, and to add the author's own opinion on the discussed matters. The paper does not include an exhaustive construction of the contractual penalty, and therefore its general aspects are outlined only to the necessary extent. Greater attention is paid to the creation and existence of the right and claim to the contractual penalty. The study then shifts focus to the discretionary power of a judge to mitigate an inappropriate amount, comprising different opinions on the related issues. The concept of contractual penalty has been used in private...
29

Building a scalable distributed data platform using lambda architecture

Mehta, Dhananjay January 1900 (has links)
Master of Science / Department of Computer Science / William H. Hsu / Data is generated constantly by the Internet, system sensors, and the mobile devices around us; this is often referred to as 'big data'. Tapping this data is a challenge for organizations because of its nature: velocity, volume, and variety. What makes handling this data a challenge? Traditional data platforms have been built around relational database management systems coupled with enterprise data warehouses, and such legacy infrastructure is either technically incapable of scaling to big data or financially infeasible. The question then arises: how does one build a system that handles the challenges of big data and caters to the needs of an organization? One answer is the Lambda Architecture. Lambda Architecture (LA) is a generic term for a scalable and fault-tolerant data processing architecture that ensures real-time processing with low latency. LA provides a general strategy for knitting together the tools needed to build a data pipeline for real-time processing of big data. LA comprises three layers: the Batch Layer, responsible for bulk data processing; the Speed Layer, responsible for real-time processing of data streams; and the Serving Layer, responsible for serving queries from end users. This project draws an analogy between modern data platforms and traditional supply chain management to lay down principles for building a big data platform and shows how major challenges in building data platforms can be mitigated. The project constructs an end-to-end data pipeline for the ingestion, organization, and processing of data and demonstrates how any organization can build a low-cost distributed data platform using the Lambda Architecture.
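The three-layer split described above can be illustrated with a toy in-memory sketch (the event names, data shapes, and function names are assumptions for illustration, not the project's implementation): the batch layer recomputes a view over the full immutable log, the speed layer covers only events arriving since the last batch run, and the serving layer merges the two to answer a query.

```python
# Toy Lambda Architecture sketch: batch, speed, and serving layers over an
# event log of hypothetical "click"/"view" events.
from collections import Counter

master_dataset = ["click", "view", "click"]   # immutable, append-only log
recent_events = ["view", "click"]             # events since the last batch run

def batch_layer(events):
    """Bulk-recompute the batch view from the entire master dataset."""
    return Counter(events)

def speed_layer(events):
    """Maintain a low-latency real-time view over recent events only."""
    return Counter(events)

def serving_layer(query, batch_view, realtime_view):
    """Answer a query by merging the batch and real-time views."""
    return batch_view[query] + realtime_view[query]

batch_view = batch_layer(master_dataset)
realtime_view = speed_layer(recent_events)
print(serving_layer("click", batch_view, realtime_view))  # 3 = 2 batch + 1 recent
```

The point of the design is visible even at this scale: the batch view may be hours stale, but the speed layer fills exactly the gap since the last recomputation, so merged query results stay current.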
30

Parallelization of backward deleted distance calculation in graph based features using Hadoop

Pillamari, Jayachandran January 1900 (has links)
Master of Science / Department of Computing & Information Sciences / Daniel Andresen / This project presents an approach to parallelizing the calculation of Backward Deleted Distance (BDD) in Graph Based Features (GBF) computation using Hadoop. The issues involved in calculating BDD are identified, and parallel computing technologies such as Hadoop are applied to solve them. The project introduces a new algorithm to parallelize the APSP problem in BDD calculation using Hadoop's MapReduce framework, and is implemented in Java and Hadoop. The aim is to parallelize the calculation of BDD, thereby reducing GBF computation time. The process of BDD calculation was examined to identify where it could be parallelized. Since BDD calculation involves computing the shortest paths between all pairs of given users, it can be viewed as an All Pairs Shortest Path (APSP) problem. The internal structure and implementation of the Hadoop MapReduce framework were studied and applied to the APSP problem. GBF features are one of the feature sets used in Ontology classifiers. In this project, GBF features are used to predict the friendship relationship between users whose direct link is deleted. The computation involves calculating the BDD between all pairs of users. The BDD for a user pair represents the shortest path between them when their direct link is deleted; in real terms, it is the shortest distance between them other than the direct path. The project uses train and test data sets consisting of positive and negative instances: positive instances are user pairs with a friendship link between them, whereas negative instances have no direct link. Apache Hadoop is a technology for scalable, distributed computing across clusters of computers; its MapReduce framework is used to develop applications that process large amounts of data in parallel on large clusters. The project was developed and implemented successfully and was tested for reliability and performance. Different data sets were used in this testing, considering various factors and typical graph representations, and the test results were analyzed to predict the behavior of the system. The results show that the system achieves good speedup, reducing processing time from 10 hours to 20 minutes.
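The BDD definition in the abstract — the shortest path between two users once their direct link is removed — can be sketched for a single pair with a breadth-first search that ignores that one edge (an assumed reading of the abstract; the function name and graph are illustrative, and the actual project distributes the all-pairs version across Hadoop):

```python
# Hedged sketch: backward deleted distance (BDD) for one user pair, computed
# as a BFS shortest-path length that skips the pair's direct link.
from collections import deque

def bdd(adj, u, v):
    """Shortest-path length from u to v, ignoring the direct edge (u, v)."""
    queue, dist = deque([u]), {u: 0}
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            if node == u and nbr == v:      # skip the deleted direct link
                continue
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                if nbr == v:
                    return dist[nbr]
                queue.append(nbr)
    return float("inf")                     # no alternative path exists

# Friends graph: users 0 and 1 are directly linked, but also connected via 2.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(bdd(adj, 0, 1))  # 2: the shortest route other than the direct edge
```

Computing this for all pairs is what turns the task into the APSP problem the abstract names, and why a per-pair or per-source decomposition fits the MapReduce model.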
