301

Managing Boundaries, Healing the Homeland: Ecological Restoration and the Revitalization of the White Mountain Apache Tribe, 1933 – 2000

Tomblin, David Christian 01 June 2009 (has links)
The main argument of this dissertation is that the White Mountain Apache Tribe's appropriation of ecological restoration played a vital role in reinstituting control over knowledge production and eco-cultural resources on the Fort Apache Indian Reservation in the second half of the twentieth century. As a corollary, I argue that the shift in knowledge production practices from a paternalistic foundation to a community-based approach resulted in positive consequences for the ecological health of the Apachean landscape and Apache culture. The democratization of science and technology on the reservation, therefore, proved paramount to the reestablishment of a relatively sustainable Apache society. Beginning with the Indian New Deal, the White Mountain Apache slowly developed the capacity to employ ecological restoration as an eco-political tool to free themselves from a long history of Euro-American cultural oppression and natural resource exploitation. Tribal restoration projects embodied the dual political function of cultural resistance to and cultural exchange with Western-based land management organizations. Apache resistance challenged Euro-American notions of restoration, nature, and sustainability while maintaining cultural identity, reasserting cultural autonomy, and protecting tribal sovereignty. But at the same time, the Apache depended on cultural exchange with federal and state land management agencies to successfully manage their natural resources and build an ecologically knowledgeable tribal workforce. Initially adopting a utilitarian conservation model of land management, restoration projects aided the creation of a relatively strong tribal economy. In addition, early successes with trout, elk, and forest restoration projects eventually granted the Tribe political leverage when they sought to reassume control over reservation resources from the Bureau of Indian Affairs and the Fish and Wildlife Service. Building on this foundation, Apache restoration work significantly diverged in character from the typical Euro-American restoration project by the 1990s. While striving toward self-sufficiency, the Tribe hybridized tribal cultural values with Western ecological values in their restoration efforts. These projects evolved the tripartite capacity to heal ecologically degraded reservation lands, to establish a degree of economic freedom from the federal government, and to restore cultural traditions. Having reversed their historical relationship of subjugation with government agencies, the Apache currently have almost full decision-making powers over tribal eco-cultural resources. / Ph. D.
302

Energy-Efficient Key/Value Store

Tena, Frezewd Lemma 11 September 2017 (has links) (PDF)
Energy conservation is a major concern in today's data centers, the data processing factories of the 21st century, where large and complex software systems such as distributed data management stores run and serve billions of users. The two main drivers of this concern are the environmental impact of the waste heat data centers produce and the high cost of their enormous energy demand. Among the many subsystems of a data center, the storage system is one of the main sources of energy consumption, and among the many types of storage systems, key/value stores are among the most widely used. In this work, I investigate energy saving techniques that enable a consistent-hash-based key/value store to save energy during low-activity periods and whenever there is an opportunity to reuse the data center's waste heat.
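The abstract does not spell out the ring mechanics, but the energy-saving idea rests on a standard consistent-hash ring: when a node is removed (for example, powered down during a low-activity period), only the keys it owned move to the next node clockwise, so the rest of the store is undisturbed. A minimal Java sketch of that mechanism, with invented node and key names and no virtual nodes, not the thesis's actual implementation:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.SortedMap;
import java.util.TreeMap;

/** Minimal consistent-hash ring: a key maps to the first node clockwise. */
public class ConsistentHashRing {
    private final TreeMap<Long, String> ring = new TreeMap<>();

    private long hash(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            long h = 0;                       // fold the first 8 digest bytes
            for (int i = 0; i < 8; i++) h = (h << 8) | (d[i] & 0xFF);
            return h;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public void addNode(String node) { ring.put(hash(node), node); }

    /** Removing a node (e.g. powering it down when load is low) remaps
     *  only that node's keys to its clockwise successor. */
    public void removeNode(String node) { ring.remove(hash(node)); }

    public String nodeFor(String key) {
        SortedMap<Long, String> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.firstEntry().getValue()
                              : tail.get(tail.firstKey());
    }

    public static void main(String[] args) {
        ConsistentHashRing r = new ConsistentHashRing();
        r.addNode("node-a"); r.addNode("node-b"); r.addNode("node-c");
        System.out.println("key-42 -> " + r.nodeFor("key-42"));
        r.removeNode("node-c");               // only node-c's keys move
        System.out.println("key-42 -> " + r.nodeFor("key-42"));
    }
}
```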
303

Návrh a implementace testovacího systému na architektuře GRID / Design and Implement Grid Testing System

Hubík, Filip January 2013 (has links)
This project addresses the parallelization of building and testing projects written in the Java programming language. It proposes software that applies continuous integration and distributes computationally intensive tasks across a grid architecture. The proposed software helps accelerate the development of a software product and automate parts of the process.
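As a rough illustration of the fan-out idea only (not the thesis's actual grid dispatcher), the following Java sketch runs independent test suites concurrently and collects their verdicts; the suite names and simulated work are placeholders:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/** Toy CI fan-out: run independent test suites in parallel and gather results,
 *  the way a build master might distribute work to grid workers. */
public class ParallelTestRunner {
    public static void main(String[] args) throws Exception {
        List<String> suites = List.of("core-tests", "io-tests", "api-tests", "ui-tests");
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // invokeAll blocks until every suite has finished.
        List<Future<String>> results = pool.invokeAll(
                suites.stream()
                      .map(s -> (Callable<String>) () -> runSuite(s))
                      .toList());

        for (Future<String> f : results) System.out.println(f.get());
        pool.shutdown();
    }

    // Stand-in for dispatching a suite to a grid node and awaiting its verdict.
    static String runSuite(String name) throws InterruptedException {
        Thread.sleep(100); // simulated test work
        return name + ": PASSED";
    }
}
```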
304

Energy-Efficient Key/Value Store

Tena, Frezewd Lemma 29 August 2017 (has links)
Energy conservation is a major concern in today's data centers, the data processing factories of the 21st century, where large and complex software systems such as distributed data management stores run and serve billions of users. The two main drivers of this concern are the environmental impact of the waste heat data centers produce and the high cost of their enormous energy demand. Among the many subsystems of a data center, the storage system is one of the main sources of energy consumption, and among the many types of storage systems, key/value stores are among the most widely used. In this work, I investigate energy saving techniques that enable a consistent-hash-based key/value store to save energy during low-activity periods and whenever there is an opportunity to reuse the data center's waste heat.
305

Performance of message brokers in event-driven architecture: Amazon SNS/SQS vs Apache Kafka / Prestanda av meddelandeköer i händelsedriven arkitektur: Amazon SNS/SQS vs Apache Kafka

Edeland, Johan, Zivkovic, Ivan January 2023 (has links)
Microservice architecture, in which applications are broken down into smaller, loosely coupled components, is becoming increasingly common in the development of modern systems. Connections between these components can be established in various ways. One approach is event-driven architecture, where the components of a system communicate asynchronously through message queues. AWS (Amazon Web Services), the largest provider of cloud-based services, offers such a message queue: SQS (Amazon Simple Queue Service), which can be integrated with SNS (Amazon Simple Notification Service) to enable one-to-many asynchronous communication. An alternative tool is Apache Kafka, created by LinkedIn and later open-sourced under the Apache Software Foundation. Apache Kafka is an event log and streaming platform that can also function as a message queue in an event-driven architecture. The authors of this thesis were commissioned by Scania to compare and evaluate the performance of these two tools and to investigate whether there are use cases where one is more suitable than the other. To this end, two prototypes were developed, each consisting of a producer microservice and a consumer microservice. These prototypes were used to conduct latency and load tests by producing messages and measuring the time interval until they were consumed. The results show that Apache Kafka has lower average latency than SNS/SQS and scales more efficiently with increasing data volumes, making it more suitable for use cases involving real-time data streaming. Its potential as a message bus for loosely coupled components is also evident. In this context, SNS/SQS is equally valuable, as it operates as a dedicated message bus with good latency and offers a user-friendly, straightforward setup process.
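The thesis's test harness is not reproduced here, but the measurement idea (timestamp each message at production, compute the delta at consumption) can be sketched with the standard Kafka Java client. The broker address, topic name, and message count below are assumptions:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

/** Produce timestamped messages, then measure produce-to-consume latency. */
public class KafkaLatencyProbe {
    public static void main(String[] args) {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        p.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        p.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        p.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        p.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        p.put("group.id", "latency-probe");
        p.put("auto.offset.reset", "earliest");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(p);
             KafkaConsumer<String, String> consumer = new KafkaConsumer<>(p)) {
            consumer.subscribe(List.of("latency-test")); // hypothetical topic
            for (int i = 0; i < 100; i++) {
                // The message body carries its own send timestamp.
                producer.send(new ProducerRecord<>("latency-test",
                        Long.toString(System.nanoTime())));
            }
            producer.flush();
            int seen = 0;
            while (seen < 100) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> r : records) {
                    long latencyUs =
                            (System.nanoTime() - Long.parseLong(r.value())) / 1_000;
                    System.out.println("latency: " + latencyUs + " us");
                    seen++;
                }
            }
        }
    }
}
```

Note that the first measurements include consumer group join time, so a real benchmark would warm up before recording, as load tests typically do.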
306

United Nations Declaration on the Rights of Indigenous Peoples: Understanding the Applicability in the Native American Context

Morman, Alaina M. 17 September 2015 (has links)
No description available.
307

Scientific Workflows for Hadoop

Bux, Marc Nicolas 07 August 2018 (has links)
Scientific workflows provide a means to model, execute, and exchange the increasingly complex analysis pipelines necessary for today's data-driven science. Over the last decades, scientific workflow management systems have emerged to facilitate the design, execution, and monitoring of such workflows. At the same time, the amounts of data generated in various areas of science have outpaced hardware advancements. Parallelization and distributed execution are generally proposed to deal with increasing amounts of data. However, the resources provided by distributed infrastructures are subject to heterogeneity, dynamic performance changes at runtime, and occasional failures. To leverage the scalability provided by these infrastructures despite the observed performance variability, workflow management systems have to progress: parallelization potential in scientific workflows has to be detected and exploited; simulation frameworks, which are commonly employed for the evaluation of scheduling mechanisms, have to consider the instability encountered on the infrastructures they emulate; adaptive scheduling mechanisms have to be employed to optimize resource utilization in the face of instability; and state-of-the-art systems for scalable distributed resource management and storage, such as Apache Hadoop, have to be supported. This dissertation presents novel solutions for these aspirations. First, we introduce DynamicCloudSim, a cloud computing simulation framework that adequately models the various aspects of variability encountered in computational clouds. Second, we outline ERA, an adaptive scheduling policy that optimizes workflow makespan by exploiting heterogeneity, replicating bottlenecks in workflow execution, and adapting to changes in the underlying infrastructure. Finally, we present Hi-WAY, an execution engine that integrates ERA and enables the highly scalable execution of scientific workflows written in a number of languages on Hadoop.
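ERA itself is not given in the abstract; as a generic illustration of the bottleneck-replication idea it mentions, one can compare a running task's elapsed time against the median runtime of finished peers and replicate stragglers. The 1.5x threshold and all task data in this Java sketch are invented for illustration and are not ERA's actual policy:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/** Toy speculative-replication rule: a running task that has taken much
 *  longer than the median runtime of finished peers gets a replica
 *  scheduled elsewhere; whichever copy finishes first wins. */
public class SpeculativeReplication {
    record RunningTask(String id, long elapsedMs) {}

    static List<RunningTask> pickForReplication(List<Long> finishedRuntimesMs,
                                                List<RunningTask> running) {
        List<Long> sorted = new ArrayList<>(finishedRuntimesMs);
        Collections.sort(sorted);
        long median = sorted.get(sorted.size() / 2);
        return running.stream()
                      .filter(t -> t.elapsedMs() > 1.5 * median) // straggler test
                      .toList();
    }

    public static void main(String[] args) {
        List<Long> finished = List.of(40L, 45L, 50L, 55L, 60L);
        List<RunningTask> running = List.of(
                new RunningTask("align-chr1", 48),
                new RunningTask("align-chr2", 130)); // straggler
        System.out.println("replicate: " + pickForReplication(finished, running));
    }
}
```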
308

DEN IDEALA COWBOYEN : En komparativ studie av maskuliniteten inom den amerikanska audiovisuella westerngenren på 1950-talet samt 2010-talet / THE IDEAL COWBOY: A comparative study of masculinity in the American audiovisual western genre of the 1950s and the 2010s

Falk Renström, Johannes January 2019 (has links)
Hegemonic masculinity is in a state of constant change and renegotiation. Traits that in one era came to be associated with femininity may at another time be ascribed as masculine attributes. Since a man's relationship to himself, and to his surroundings, is shaped by the masculine ideal conveyed in popular culture among other places, it becomes important to examine how this hegemonic masculinity manifests itself in different eras: partly to determine why it looks the way it does today and how it has been portrayed historically, but also to anticipate how the depiction of masculinity may change in the future. This study compared western films that various researchers and critics regard as typical period examples of the genre, in order to establish how the historical portrayal of hegemonic masculinity differs from that in modern popular-cultural material within the same genre. A semiotic image analysis was carried out on the introduction and final scenes of each protagonist and antagonist; the observations made in the analysis were then compared, and overarching analyses were made in which recurring traits, the treatment of minorities, bodily attributes, and the environments the characters inhabited were discussed and compiled. The upshot was that attributes previously associated with the protagonists of the 1950s material were now carried by the antagonists of the 2010s material. The earlier masculinity, embodied in large stature and broad shoulders, has changed, and hegemonic masculinity has given rise to a man with more complex motivations, a richer emotional life, and greater considerateness than the protagonists of the earlier portrayals displayed. By achieving their goals, the protagonists in the material from both periods could also conceal that they had adopted attributes previously coded as feminine, such as the shedding of tears, into the new masculine ideal.
309

Benchmarking and Scheduling Strategies for Distributed Stream Processing

Shukla, Anshu January 2017 (has links) (PDF)
The velocity dimension of Big Data refers to the need to rapidly process data that arrives continuously as streams of messages or events. Distributed Stream Processing Systems (DSPS) are distributed programming and runtime platforms that allow users to define a composition of dataflow logic that is executed on distributed resources over streams of incoming messages. A DSPS uses commodity clusters and Cloud Virtual Machines (VMs) for its execution. In order to meet the required performance for these applications, the DSPS needs to schedule these dataflows efficiently over the resources. Despite their growing use, resource scheduling for DSPSs tends to be done in an ad hoc manner, favoring empirical and reactive approaches rather than a model-driven and analytical approach. Such empirical strategies may arrive at an approximate schedule for the dataflow that needs further tuning to meet the quality of service. We propose a model-based scheduling approach that makes use of performance profiles and benchmarks developed for tasks in the dataflow to plan both the resource allocation and the resource mapping that together form the schedule planning process. We propose the Model Based Allocation (MBA) and the Slot Aware Mapping (SAM) approaches that effectively utilize knowledge of the performance model of logic tasks to provide efficient and predictable scheduling behavior. We implemented and validated these algorithms using the popular open source Apache Storm DSPS for several micro and application dataflows. The results show that our model-driven approach is able to reduce the amount of required resources (VMs) by 30% to 50% relative to existing techniques. We also see that our strategies offer predictable behavior, ensuring that the expected and actual rates supported and resources used match closely. This can enable deterministic schedule planning even under dynamic conditions. Besides this static scheduling, we also examine the ability to dynamically consolidate tasks onto fewer VMs when the load on the dataflow decreases or the VMs become fragmented. We propose reliable task migration models for Apache Storm dataflows that are able to rapidly move the task assignment in the cluster and resume the dataflow execution without any message loss.
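The MBA/SAM algorithms themselves are not given in the abstract; what such a scheduler plans are the standard Storm knobs shown below: the executor parallelism hint per spout or bolt, and the number of worker JVMs (slots) the topology claims. A self-contained toy topology with placeholder spout/bolt logic, assuming the Storm 2.x API:

```java
import java.util.Map;
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;
import org.apache.storm.utils.Utils;

public class ScheduledTopology {
    /** Emits a random number every 100 ms -- placeholder source logic. */
    public static class NumberSpout extends BaseRichSpout {
        private SpoutOutputCollector out;
        public void open(Map<String, Object> conf, TopologyContext ctx,
                         SpoutOutputCollector out) { this.out = out; }
        public void nextTuple() {
            Utils.sleep(100);
            out.emit(new Values((long) (Math.random() * 100)));
        }
        public void declareOutputFields(OutputFieldsDeclarer d) {
            d.declare(new Fields("n"));
        }
    }

    /** Doubles each number -- placeholder dataflow logic. */
    public static class DoubleBolt extends BaseBasicBolt {
        public void execute(Tuple t, BasicOutputCollector out) {
            out.emit(new Values(t.getLong(0) * 2));
        }
        public void declareOutputFields(OutputFieldsDeclarer d) {
            d.declare(new Fields("n2"));
        }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder b = new TopologyBuilder();
        b.setSpout("source", new NumberSpout(), 2);        // parallelism hint: 2 executors
        b.setBolt("double", new DoubleBolt(), 4)           // parallelism hint: 4 executors
         .shuffleGrouping("source");

        Config conf = new Config();
        conf.setNumWorkers(3); // worker JVMs = slots claimed on the cluster

        try (LocalCluster cluster = new LocalCluster()) {
            cluster.submitTopology("sketch", conf, b.createTopology());
            Thread.sleep(10_000); // let it run briefly, then shut down
        }
    }
}
```

A model-based planner in the spirit of MBA/SAM would derive the two parallelism hints and the worker count from benchmarked task profiles rather than hard-coding them as done here.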
310

Sistema de control de infracciones y sanciones para vehículos menores “mototaxis” / Infraction and sanction control system for minor vehicles (“mototaxis”)

Roca Ramos, Mauro William, Balboa Padilla, Leyla Angélica January 2015 (has links)
Public entities such as rural municipalities are overwhelmed by the many challenges they face in serving the public; the case addressed here is the transparency of information in administering the control of sanctions for minor vehicles (mototaxis). The main problem is that, without a computerized system to manage the control of sanctions, the administration suffers deficiencies such as loss of information, administrative informality, and time wasted on registering and looking up infractions. This problem was investigated in several rural municipalities that still lack efficient administration of infraction control; consequently, owners of minor vehicles have concerns about the absence of transparent and concise information on the infractions imposed by municipal inspectors. An important conclusion is that this research project can improve the administration of sanctions on minor vehicles, reduce street crime (kidnapping), reduce informal vehicular traffic, improve services to citizens, and increase revenue collection for the municipality, with the Municipality of Santa Eulalia, the owners of minor vehicles, and the district's population as the main beneficiaries.
