  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
191

A Savage Land: Violence and Trauma in the Nineteenth-Century American Southwest

January 2020 (has links)
abstract: This dissertation seeks to understand two universal experiences that have pervaded human society since man first climbed out of the trees: violence and trauma. Using theories gleaned from the Holocaust and other twentieth-century atrocities, this work explores narratives of violent action and traumatic reaction as they occurred among peoples of the nineteenth-century American Southwest. By examining the stories of individuals and groups of Apaches, Ethnic Mexicans, Euro-Americans, and other diverse peoples through the lens of trauma studies, a new narrative emerges within US-Mexico borderlands history. This narrative reveals inter-generational legacies of violence among cultural groups that have lived through trauma and caused trauma in others. For victims and perpetrators alike, trauma and violence can transform into tools of cultural construction and adaptation. Part I of this work establishes the concept of ethnotrauma: a layered experience of collective trauma among minority populations under racial persecution. By following stories of Mescalero, Chiricahua, and Warm Springs Apaches in the nineteenth-century Southwest, this dissertation reveals how Apaches grappled with ethnotrauma across generations during times of war, imprisonment, and exile. These narratives also reveal how Apaches overcame these legacies of pain through communal solidarity and cultural continuity. Part II explores the concept of perpetrator trauma. By following stories of Mexican norteños, Mexican-Americans on the US-Mexico border, and American settlers, the impact of trauma on violators also comes to light. The concept of perpetrator trauma in this context denotes the long-term cultural impacts of committing violence among perpetrating communities. For perpetrating groups, violence became a method of affirming and, in some cases, reconstructing group identity through opposition to other groups. Finally, at the heart of this work stand two critical symbols, Geronimo (victim and villain) and the land itself (hostile and healing), that reveal how cycles of violence entangled ethnotrauma and perpetrator trauma within individuals struggling to survive and thrive in a savage land. / Dissertation/Thesis / Doctoral Dissertation History 2020
192

Comparación del índice neutrófilo/linfocito, APACHE II y BISAP como predictores de severidad en pancreatitis aguda durante el periodo 2011-2013 en el Hospital Nacional Daniel Alcides Carrión

Maravi Coronado, Julio Cesar January 2014 (has links)
The digital document does not list an advisor. / Compares the neutrophil-to-lymphocyte ratio with APACHE II and BISAP as predictors of severity in patients with acute pancreatitis. The study was carried out at the Hospital Nacional Daniel Alcides Carrión (Callao) from January 2011 to December 2013, collecting data on all patients hospitalized in the Gastroenterology department, intermediate care unit, and intensive care unit with a diagnosis of acute pancreatitis. It was an observational, retrospective, analytical study. A total of 201 patients were evaluated; the mean age was 42.09 ± 16.87 years, the predominant sex was female with 143 patients (71.14%), and the most frequent etiology was biliary, 178 cases (88.55%). Regarding severity, 178 cases (88.5%) were mild and 23 (11.5%) were severe; 19 patients (9.45%) presented organ failure and only 10 (4.97%) presented pancreatic necrosis. The BISAP score had a sensitivity of 13.2%, specificity of 93.6%, PPV of 27.7%, and NPV of 86.6%. The APACHE II score had a sensitivity of 19.7%, specificity of 85.1%, PPV of 17.2%, and NPV of 86.9%, and the neutrophil-to-lymphocyte ratio had a sensitivity of 34.5%, specificity of 60.6%, PPV of 10.3%, and NPV of 86.4%. The areas under the ROC curve were 81.4% for BISAP, 55% for APACHE II, and 49% for the neutrophil-to-lymphocyte ratio. It is concluded that the neutrophil-to-lymphocyte ratio is not a better predictor of severity than APACHE II or BISAP in patients with acute pancreatitis, and that it does not have adequate diagnostic discriminatory capacity. / Research paper
193

Erarbeitung einer grafischen Benutzerschnittstelle fuer das Intensive Computing

Schumann, Merten 21 June 1995 (has links)
Development of a WWW-based graphical user interface for launching jobs for the DQS batch system.
194

AFS and the Web - Using PAM for Apache for an authorized access to AFS filespace from the Web

Müller, Thomas 12 December 2000 (has links)
These are the slides of a talk held at the German AFS Meeting 2000 in Garching. The talk deals with the use of PAM with the Apache web server to allow authorized access to web pages housed in AFS filespace.
195

Secure WebServer

Neubert, Janek 01 July 2004 (has links)
Description and implementation of a solution for the secure use of OpenAFS, Apache, and server-side scripting languages such as PHP or Perl in multi-user environments.
196

Scalable Data Integration for Linked Data

Nentwig, Markus 06 August 2020 (has links)
Linked Data describes an extensive set of structured but heterogeneous data sources where entities are connected by formal semantic descriptions. In the vision of the Semantic Web, these semantic links are extended towards the World Wide Web to provide as much machine-readable data as possible for search queries. The resulting connections allow an automatic evaluation to find new insights into the data. Identifying these semantic connections between two data sources with automatic approaches is called link discovery. We derive common requirements and a generic link discovery workflow based on similarities between entity properties and associated properties of ontology concepts. Most of the existing link discovery approaches disregard the fact that in times of Big Data, an increasing volume of data sources poses new demands on link discovery. In particular, the problem of complex and time-consuming link determination escalates with an increasing number of intersecting data sources. To overcome the restriction of pairwise linking of entities, holistic clustering approaches are needed to link equivalent entities of multiple data sources to construct integrated knowledge bases. In this context, the focus on efficiency and scalability is essential. For example, reusing existing links or background information can help to avoid redundant calculations. However, when dealing with multiple data sources, additional data quality problems must also be dealt with. This dissertation addresses these comprehensive challenges by designing holistic linking and clustering approaches that enable reuse of existing links. Unlike previous systems, we execute the complete data integration workflow via a distributed processing system. At first, the LinkLion portal will be introduced to provide existing links for new applications. These links act as a basis for a physical data integration process to create a unified representation for equivalent entities from many data sources. We then propose a holistic clustering approach to form consolidated clusters for the same real-world entities from many different sources. At the same time, we exploit the semantic type of entities to improve the quality of the result. The process identifies errors in existing links and can find numerous additional links. Additionally, the entity clustering has to react to the high dynamics of the data. In particular, this requires scalable approaches for continuously growing data sources with many entities as well as additional new sources. Previous entity clustering approaches are mostly static, focusing on the one-time linking and clustering of entities from few sources. Therefore, we propose and evaluate new approaches for incremental entity clustering that support the continuous addition of new entities and data sources. To cope with the ever-increasing number of Linked Data sources, efficient and scalable methods based on distributed processing systems are required. Thus, we propose distributed holistic approaches to link many data sources based on a clustering of entities that represent the same real-world object. The implementation is realized on Apache Flink. In contrast to previous approaches, we utilize efficiency-enhancing optimizations for both distributed static and dynamic clustering. An extensive comparative evaluation of the proposed approaches with various distributed clustering strategies shows high effectiveness for datasets from multiple domains as well as scalability on a multi-machine Apache Flink cluster.
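The core link-discovery step described above (comparing entity properties and accepting a link above a similarity threshold) can be sketched independently of any framework. The Rust sketch below is only an illustration of that pairwise idea, not code from the dissertation: the entity IDs, property sets, and the Jaccard threshold are invented, and the actual work replaces this quadratic loop with holistic, distributed clustering on Apache Flink.

```rust
use std::collections::{HashMap, HashSet};

/// Jaccard similarity between two sets of property values.
fn jaccard(a: &HashSet<String>, b: &HashSet<String>) -> f64 {
    let inter = a.intersection(b).count() as f64;
    let union = a.union(b).count() as f64;
    if union == 0.0 { 0.0 } else { inter / union }
}

/// Naive pairwise link discovery: emit a link for every entity pair whose
/// property similarity exceeds a threshold.
fn discover_links(
    entities: &HashMap<String, HashSet<String>>,
    threshold: f64,
) -> Vec<(String, String)> {
    let ids: Vec<&String> = entities.keys().collect();
    let mut links = Vec::new();
    for i in 0..ids.len() {
        for j in (i + 1)..ids.len() {
            if jaccard(&entities[ids[i]], &entities[ids[j]]) >= threshold {
                links.push((ids[i].clone(), ids[j].clone()));
            }
        }
    }
    links
}

fn main() {
    // Hypothetical entities from two sources, described by small property sets.
    let mut entities: HashMap<String, HashSet<String>> = HashMap::new();
    entities.insert(
        "dbpedia:Leipzig".to_string(),
        ["Leipzig", "Saxony", "Germany"].iter().map(|s| s.to_string()).collect(),
    );
    entities.insert(
        "geonames:2879139".to_string(),
        ["Leipzig", "Saxony"].iter().map(|s| s.to_string()).collect(),
    );
    entities.insert(
        "dbpedia:Dresden".to_string(),
        ["Dresden", "Saxony", "Germany"].iter().map(|s| s.to_string()).collect(),
    );
    for (a, b) in discover_links(&entities, 0.6) {
        println!("owl:sameAs candidate: {} <-> {}", a, b);
    }
}
```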
197

JOB SCHEDULING FOR STREAMING APPLICATIONS IN HETEROGENEOUS DISTRIBUTED PROCESSING SYSTEMS

Al-Sinayyid, Ali 01 December 2020 (has links)
The colossal amounts of data generated daily are increasing exponentially at a never-before-seen pace. A variety of applications, including stock trading, banking systems, healthcare, the Internet of Things (IoT), and social media networks, have created an unprecedented volume of real-time stream data estimated to reach billions of terabytes in the near future. As a result, we are currently living in the so-called Big Data era and witnessing a transition to the so-called IoT era. Enterprises and organizations are tackling the challenge of interpreting these enormous raw data streams to achieve a better understanding of the data and thus make efficient, well-informed (i.e., data-driven) decisions. Researchers have designed distributed data stream processing systems that can directly process data in near real-time. To extract valuable information from raw data streams, analysts need to create and implement data stream processing applications structured as directed acyclic graphs (DAGs). The infrastructure of distributed data stream processing systems and the various requirements of stream applications impose new challenges. Cluster heterogeneity in a distributed environment results in different cluster resources for task execution and data transmission, which makes optimal scheduling an NP-complete problem. Scheduling streaming applications plays a key role in optimizing system performance, particularly in maximizing the frame rate, or how many data sets can be processed per unit of time. The scheduling algorithm must consider data locality, resource heterogeneity, and communication and computation latencies. The latency at the computation or transmission bottleneck must be minimized when the application is mapped to heterogeneous, distributed cluster resources. Recent work on task scheduling for distributed data stream processing systems has a number of limitations. Most current schedulers are not designed to manage heterogeneous clusters. They also lack the ability to consider both task and machine characteristics in scheduling decisions. Furthermore, current default schedulers do not allow the user to control data locality aspects in application deployment. In this thesis, we investigate the problem of scheduling streaming applications on heterogeneous cluster environments and develop the maximum throughput scheduler algorithm (MT-Scheduler) for streaming applications. The proposed algorithm uses a dynamic programming technique to efficiently map the application topology onto a heterogeneous distributed system based on computing and data transfer requirements, while also taking into account the capacity of the underlying cluster resources. The proposed approach maximizes the system throughput by identifying and minimizing the time incurred at the computing/transfer bottleneck. The MT-Scheduler supports scheduling applications that are structured as a DAG, such as those running on Amazon Timestream, Google MillWheel, and Twitter Heron. We conducted experiments using three Storm microbenchmark topologies in both simulated and real Apache Storm environments. To evaluate performance, we compared the proposed MT-Scheduler with the simulated round-robin and the default Storm scheduler algorithms. The results indicated that the MT-Scheduler outperforms the default round-robin approach in terms of both average system latency and throughput.
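The objective the MT-Scheduler optimizes, throughput limited by the slowest computing/transfer stage, can be illustrated with a deliberately tiny sketch. This is not the dissertation's dynamic-programming algorithm: it ignores data locality and communication latency, uses brute-force enumeration, and all operator workloads and machine speeds are made-up numbers.

```rust
/// Brute-force search over all operator-to-machine assignments for a small
/// streaming pipeline, keeping the one whose slowest (bottleneck) stage is
/// fastest; pipeline throughput is roughly 1 / bottleneck time.
fn best_assignment(work: &[f64], speed: &[f64]) -> (Vec<usize>, f64) {
    let (n, m) = (work.len(), speed.len());
    let mut best: (Vec<usize>, f64) = (vec![0; n], f64::INFINITY);
    // Encode each assignment as a number in base `m` with `n` digits.
    for code in 0..m.pow(n as u32) {
        let assignment: Vec<usize> = (0..n).map(|i| (code / m.pow(i as u32)) % m).collect();
        let bottleneck = assignment
            .iter()
            .enumerate()
            .map(|(op, &mach)| work[op] / speed[mach]) // time of this stage on that machine
            .fold(0.0f64, f64::max);
        if bottleneck < best.1 {
            best = (assignment, bottleneck);
        }
    }
    best
}

fn main() {
    // Hypothetical relative compute cost per operator and speed per machine.
    let work = [4.0, 2.0, 1.0];  // e.g. parse, transform, sink
    let speed = [1.0, 2.0, 4.0]; // slow, medium, fast node
    let (assignment, bottleneck) = best_assignment(&work, &speed);
    println!("operator -> machine: {:?}", assignment);
    println!("bottleneck stage time: {:.2} (throughput ~ 1/bottleneck)", bottleneck);
}
```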
198

Implementierung von Software-Frameworks am Beispiel von Apache Spark in das DBpedia Extraction Framework

Bielinski, Robert 28 August 2018 (has links)
The DBpedia project extracts RDF datasets from Wikipedia's semi-structured data twice a year. DBpedia is now to be moved to a release model that supports a release cycle with up to two complete DBpedia datasets per month. This is not possible at the current speed of the extraction process. An improvement is to be achieved through parallelization with Apache Spark. The focus of this thesis is the efficient local use of Apache Spark for the parallel processing of large, semi-structured datasets. An implementation of the Apache Spark-based extraction is presented that achieves a sufficient reduction in runtime. To this end, basic methods of component-based software engineering were applied, the benefit of Apache Spark for the Extraction Framework was analyzed, and an overview of the necessary changes to the Extraction Framework is presented.
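The speed-up described here comes from a data-parallel map over Wikipedia pages. As a language-agnostic stand-in (plain threads instead of Apache Spark, and a toy "key = value" extractor instead of the real DBpedia extractors), such a map might be sketched as follows; all page contents and the triple format are invented for the example.

```rust
use std::sync::Arc;
use std::thread;

/// Toy extractor: pull "name = value" pairs out of a semi-structured page and
/// turn them into triple-like strings. Stands in for one DBpedia extractor.
fn extract(page_title: &str, wikitext: &str) -> Vec<String> {
    wikitext
        .lines()
        .filter_map(|line| line.split_once('='))
        .map(|(key, value)| format!("<{}> <{}> \"{}\" .", page_title, key.trim(), value.trim()))
        .collect()
}

fn main() {
    // Hypothetical input partition; a real run would read Wikipedia dump chunks.
    let pages: Arc<Vec<(String, String)>> = Arc::new(vec![
        ("Leipzig".into(), "population = 600000\nstate = Saxony".into()),
        ("Dresden".into(), "population = 550000\nstate = Saxony".into()),
        ("Chemnitz".into(), "population = 240000\nstate = Saxony".into()),
        ("Halle".into(), "population = 230000\nstate = Saxony-Anhalt".into()),
    ]);

    // Data-parallel map over page partitions, the same shape of computation
    // that Spark distributes across executors.
    let workers = 2;
    let mut handles = Vec::new();
    for w in 0..workers {
        let pages = Arc::clone(&pages);
        handles.push(thread::spawn(move || {
            pages
                .iter()
                .skip(w)
                .step_by(workers)
                .flat_map(|(title, text)| extract(title, text))
                .collect::<Vec<String>>()
        }));
    }
    for handle in handles {
        for triple in handle.join().unwrap() {
            println!("{}", triple);
        }
    }
}
```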
199

An experimental study of memory management in Rust programming for big data processing

Okazaki, Shinsaku 10 December 2020 (has links)
Planning optimized memory management is critical for Big Data analysis tools to achieve fast runtimes and efficient use of computation resources. Modern Big Data analysis tools use application languages that abstract away memory management so that developers do not have to pay close attention to memory management strategies. Many existing cloud-based data processing systems such as Hadoop, Spark, or Flink run on the Java Virtual Machine (JVM) and take full advantage of its features, including automated memory management with Garbage Collection (GC), which may lead to significant overhead. Dataflow-based systems like Spark allow programmers to define complex objects in a host language like Java to manipulate and transfer tremendous amounts of data. System languages like C++ or Rust seem to be a better choice for developing Big Data processing systems because they do not rely on the JVM. By using a system language, a developer has full control over memory management. We found the Rust programming language to be a good candidate due to its ability to express memory-safe and fearlessly concurrent code through its concepts of ownership and borrowing. Rust offers many possible strategies for optimizing memory management for Big Data processing, including the selection of different variable types, the use of reference counting (Rc), and multithreading with atomic reference counting (Arc). In this thesis, we conducted an experimental study to assess how much these different memory management strategies differ in overall runtime performance. Our experiments focus on complex object manipulation and common Big Data processing patterns under various memory management strategies. Our experimental results indicate a significant difference among these strategies with regard to data processing performance.
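The strategies compared in the thesis center on Rust's ownership model and its two reference-counting types. A minimal sketch of the difference between single-threaded Rc and thread-safe Arc, using a made-up Record type rather than the thesis's actual workloads, looks like this:

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

// Stand-in for a large record shared between processing steps.
struct Record {
    values: Vec<f64>,
}

fn main() {
    // Single-threaded sharing: Rc bumps a plain (non-atomic) counter on clone,
    // so sharing is cheap but the value must stay on one thread.
    let local = Rc::new(Record { values: vec![1.0, 2.0, 3.0] });
    let local_view = Rc::clone(&local); // no deep copy, just a counter increment
    println!("rc count = {}", Rc::strong_count(&local));
    println!("sum = {}", local_view.values.iter().sum::<f64>());

    // Multi-threaded sharing: Arc uses atomic increments, which cost a bit more
    // per clone but allow the same data to be read from many threads at once.
    let shared = Arc::new(Record { values: (0..1_000).map(|i| i as f64).collect() });
    let mut handles = Vec::new();
    for worker in 0..4 {
        let view = Arc::clone(&shared); // each thread gets its own handle
        handles.push(thread::spawn(move || {
            let partial: f64 = view.values.iter().skip(worker).step_by(4).sum();
            partial
        }));
    }
    let total: f64 = handles.into_iter().map(|h| h.join().unwrap()).sum();
    println!("parallel sum = {}", total);
}
```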
200

Developing Random Compaction Strategy for Apache Cassandra database and Evaluating performance of the strategy

Surampudi, Roop Sai January 2021 (has links)
Introduction: Nowadays, the data generated by global communication systems is increasing enormously. Telecommunication industries need to monitor and manage this data growth efficiently. Apache Cassandra is a NoSQL database that efficiently manages data of any format and massive data flows. Aim: This project focuses on developing a new random compaction strategy and evaluating its performance. In this study, the limitations of the generic compaction strategies, Size-Tiered Compaction Strategy (STCS) and Leveled Compaction Strategy (LCS), are investigated. A new random compaction strategy is developed to address these limitations, and the performance metrics required to evaluate the strategy are studied. Method: A grey literature review is carried out to understand the workings of Apache Cassandra and the APIs of the different compaction strategies. A random compaction strategy is developed in two phases. A testing environment consisting of a 4-node cluster and a simulator is created, and performance is evaluated by stress-testing the cluster with different workloads. Conclusions: A stable Random Compaction Strategy (RCS) artifact is developed. The artifact also supports generating a random threshold from a user-defined distribution; currently, only uniform, geometric, and Poisson distributions are supported. RCS-Uniform performs better than both STCS and LCS, RCS-Poisson performs no better than either STCS or LCS, and RCS-Geometric performs better than STCS.
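Cassandra compaction strategies are implemented in Java against Cassandra's internal APIs, so the following is only a language-agnostic sketch (written in Rust, with a hand-rolled RNG and only the uniform and geometric cases) of the one idea the abstract highlights: drawing the compaction threshold, i.e. how many SSTables to merge next, from a user-selected distribution. All type and field names are hypothetical.

```rust
/// Minimal xorshift64 RNG so the sketch needs no external crates.
struct XorShift(u64);

impl XorShift {
    fn next_f64(&mut self) -> f64 {
        self.0 ^= self.0 << 13;
        self.0 ^= self.0 >> 7;
        self.0 ^= self.0 << 17;
        (self.0 >> 11) as f64 / (1u64 << 53) as f64 // map to [0, 1)
    }
}

/// The distribution the operator picked for the compaction threshold.
enum ThresholdDistribution {
    Uniform { min: u32, max: u32 },
    Geometric { p: f64 },
}

impl ThresholdDistribution {
    /// Draw how many SSTables to merge in the next compaction round.
    fn sample(&self, rng: &mut XorShift) -> u32 {
        match self {
            ThresholdDistribution::Uniform { min, max } => {
                min + (rng.next_f64() * (max - min + 1) as f64) as u32
            }
            ThresholdDistribution::Geometric { p } => {
                // Inverse-transform sampling; clamp so we always merge >= 2 tables.
                let draw = (1.0 - rng.next_f64()).ln() / (1.0 - p).ln();
                draw.ceil().max(2.0) as u32
            }
        }
    }
}

fn main() {
    let mut rng = XorShift(0x9E3779B97F4A7C15);
    let uniform = ThresholdDistribution::Uniform { min: 2, max: 8 };
    let geometric = ThresholdDistribution::Geometric { p: 0.4 };
    for _ in 0..5 {
        let u = uniform.sample(&mut rng);
        let g = geometric.sample(&mut rng);
        println!("uniform threshold = {}, geometric threshold = {}", u, g);
    }
}
```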
