  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

Designing and implementing a private cloud for student and faculty software projects

Le Fevre, Pierre, Karlsson, Emil January 2022
Designing, building, and implementing a private cloud hosting solution can be challenging. This report aims to unify research from multiple areas within cloud hosting and simplify the process by presenting a comprehensive ground-up approach. The proposed approach includes methods for deciding which models and paradigms to use, such as the abstraction level and the scale of the infrastructure. A step-by-step guide is presented, with all considerations made along the way. The result is a platform that is accessible from a web browser or through a command-line interface and that hosts services such as machine-learning servers and containerized applications in Kubernetes. Further work includes raising the abstraction level and enabling hardware enrollment over the network. Moreover, whether this implementation will scale as intended remains to be examined.
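The thesis does not reproduce its deployment manifests; as a hypothetical illustration of the kind of containerized workload such a platform would host, the sketch below builds a minimal Kubernetes Deployment manifest in Python. The application name, image registry, and replica count are invented for the example.

```python
# Hypothetical sketch: build a minimal Kubernetes Deployment manifest
# for a containerized student project. Names, image, and replica count
# are illustrative assumptions, not taken from the thesis.
import json

def make_deployment(name: str, image: str, replicas: int = 2) -> dict:
    """Return a minimal apps/v1 Deployment manifest as a Python dict."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [
                        {"name": name, "image": image,
                         "ports": [{"containerPort": 8080}]}
                    ]
                },
            },
        },
    }

manifest = make_deployment("student-webapp",
                           "registry.example.edu/student-webapp:1.0")
print(json.dumps(manifest, indent=2)[:80])
```

Serializing the manifest to JSON (or YAML) is enough for `kubectl apply -f -` to consume it; the dict form keeps the sketch dependency-free.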
122

Design of a system for informing customers (master's thesis)

Kashin, A. A. January 2023
The purpose of the work is to model the existing process of informing customers, optimize that process, conduct a comparative analysis of existing notification systems, and design the architecture of an in-house system. In the course of the work, a comparative analysis of message brokers was carried out and the advantages and disadvantages of each were identified. To connect to the existing corporate platform, an implementation plan was developed and data migration to the target system was performed with the help of a purpose-built synchronization program.
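The synchronization program itself is not described in detail; as a hypothetical sketch of the general pattern, the code below copies records from a source store into a target store idempotently, so repeated runs converge without duplicating work. The in-memory dicts stand in for the real corporate systems, which is an assumption of this sketch.

```python
# Hypothetical sketch of a data-migration synchronizer: copy new or
# changed records from a source store to a target store. Running it
# again after a successful pass is a no-op, which makes retries safe.
def sync(source: dict, target: dict) -> int:
    """Copy new or changed records from source to target; return count."""
    changed = 0
    for key, record in source.items():
        if target.get(key) != record:   # missing or stale in target
            target[key] = record
            changed += 1
    return changed

source = {"c1": {"email": "a@example.com"}, "c2": {"email": "b@example.com"}}
target = {"c1": {"email": "a@example.com"}}
print(sync(source, target))  # → 1: migrates the one missing record
print(sync(source, target))  # → 0: second run is a no-op
```

Idempotence is the key design property here: a migration that can be re-run safely after a partial failure needs no separate recovery path.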
123

A Real-time Log Correlation System for Security Information and Event Management

Dubuc, Clémence January 2021
Correlating several events over a period of time is a necessity for a threat-detection platform. In the case of multistep attacks (attacks characterized by a sequence of executed commands), correlation makes it possible to detect the individual steps one by one and link them to raise an alert. It also allows detecting abnormal behavior on the IT system, for example multiple suspicious actions performed by the same account. Correlating security events increases the security of the system and reduces the number of false positives. Events are correlated against pre-existing correlation rules. The goal of this thesis is to evaluate the feasibility of a correlation engine based on Apache Spark. The current correlation system needs to be replaced because it is not scalable, it cannot handle all the incoming data, and it cannot perform some types of correlation, such as aggregating events by attribute or counting cardinality. The novelty lies in improving the performance and the correlation capabilities of the system. Two systems for correlating events are proposed in this project. The first is based on Apache Spark Structured Streaming and analyzes the flow of security logs in real time. As its results were not satisfactory, a second system was implemented. It takes a more traditional approach, storing the logs in an Elasticsearch cluster and running correlation queries against it. In the end, both systems are able to correlate the platform's logs. Nevertheless, the Spark-based system uses too many resources per correlation rule, and it is too expensive to launch hundreds of correlation queries at the same time. For those reasons, the Elasticsearch-based system is preferred and is implemented in the workflow.
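One correlation-rule type the abstract mentions, aggregating events by attribute within a time window, can be sketched in plain Python. The field names, window, and threshold below are invented; the real rules run on Spark Structured Streaming or as Elasticsearch queries.

```python
# Hypothetical sketch of an aggregation-style correlation rule: count
# events per key (e.g. account) inside a sliding time window and flag
# any key whose count reaches a threshold. Field names are invented.
from collections import defaultdict

def correlate(events, window_s=60, threshold=3, key="account"):
    """Return the set of keys whose count within any window >= threshold."""
    by_key = defaultdict(list)
    for e in sorted(events, key=lambda e: e["ts"]):
        by_key[e[key]].append(e["ts"])
    alerts = set()
    for k, times in by_key.items():
        start = 0
        for end in range(len(times)):
            while times[end] - times[start] > window_s:
                start += 1          # slide window forward
            if end - start + 1 >= threshold:
                alerts.add(k)
    return alerts

events = ([{"ts": t, "account": "svc"} for t in (0, 10, 20)]
          + [{"ts": 500, "account": "bob"}])
print(correlate(events))  # → {'svc'}: three events within 60 s
```

The two-pointer sweep keeps the per-key work linear in the number of events, which matters when hundreds of rules run concurrently.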
124

Scaling Apache Hudi by boosting query performance with RonDB as a Global Index : Adopting a LATS data store for indexing

Zangis, Ralfs January 2022
The storage and use of voluminous data are perplexing issues, and their resolution has become more pressing with the exponential growth of information. Lakehouses are relatively new approaches that try to accomplish this while hiding the complexity from the user. They provide capabilities similar to a standard database while operating on top of low-cost storage and open file formats. An example of such a system is Hudi, which internally uses indexing to improve the performance of data management in tabular format. This study investigates whether execution times could be decreased by introducing a new engine option for indexing in Hudi. The thesis therefore proposes the use of RonDB as a global index, and expands on this by investigating the viability of the different connectors available for communication. The research was conducted through both practical experiments and the study of relevant literature. The analysis involved observations made over multiple workloads to document how well the solutions adapt to changes in requirements and types of actions. The results are recorded, visualized for the convenience of the reader, and made available in a public repository. The conclusions did not coincide with the author's hypothesis that RonDB would provide the fastest indexing solution for all scenarios. Nonetheless, it was observed to be the most consistent approach, potentially making it the best general-purpose solution. As an example, it was noted that RonDB can handle read- and write-heavy workloads while consistently providing low query latency independent of the file count.
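The job a global index performs for a lakehouse table can be illustrated with a minimal sketch: map each record key to the file (group) that holds it, across all partitions, so an upsert can locate its target file without scanning. In the thesis RonDB plays this role; here a plain Python dict stands in for it, which is purely an assumption of this illustration.

```python
# Hypothetical sketch of a global index for a lakehouse table: a
# record-key -> file mapping consulted on every upsert. A dict stands
# in for RonDB; file names and keys are invented for illustration.
class GlobalIndex:
    def __init__(self):
        self._loc: dict[str, str] = {}      # record key -> file id

    def tag(self, key: str, file_id: str) -> None:
        """Record (or move) the location of a key after a write."""
        self._loc[key] = file_id

    def lookup(self, key: str):
        """Return the file containing the key, or None for a new insert."""
        return self._loc.get(key)

idx = GlobalIndex()
idx.tag("order-1", "file-a.parquet")
idx.tag("order-2", "file-b.parquet")
print(idx.lookup("order-1"))   # → file-a.parquet: take the update path
print(idx.lookup("order-9"))   # → None: take the insert path
```

The point of making the index *global* rather than per-partition is exactly this lookup: the answer is correct even when the key could live in any partition of the table.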
125

Performance Comparison Between Apache Kafka and Redpanda for Real-Time Data Applications in the Internet of Things

Alkurdi, Yaman January 2024
There is a lack of independent research comparing the capacity of Redpanda with established alternatives such as Apache Kafka, particularly in IoT contexts where resource efficiency is critical. This thesis compares the performance of the two platforms in real-time data applications under conditions similar to those in IoT environments. Using a custom-developed application, performance tests were conducted in a local containerized environment to evaluate throughput and latency across various message sizes and partition counts. The study finds that Redpanda outperforms Kafka for smaller message sizes, offering higher throughput and lower latency, particularly at higher partition counts. Conversely, Kafka excels at larger message sizes, achieving higher throughput but with increased latency. The results indicate that Redpanda is well suited for IoT applications requiring rapid handling of small messages, while Kafka is more efficient for scenarios involving larger data volumes. The findings emphasize the importance of selecting the platform based on specific application needs, contributing valuable insights into IoT and real-time data streaming.
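The shape of such a benchmark can be sketched independently of either broker: time N sends through a `produce()` callable and report throughput and mean per-message latency. The in-memory list standing in for a Kafka/Redpanda producer, and all sizes and counts, are assumptions of this sketch, not the thesis's actual harness.

```python
# Hypothetical sketch of a throughput/latency micro-benchmark. Any
# callable that accepts one message can be measured; here a list
# append stands in for a real Kafka or Redpanda producer.
import time

def benchmark(produce, payload: bytes, n: int = 10_000) -> dict:
    """Send n messages through produce(); return throughput and latency."""
    latencies = []
    t0 = time.perf_counter()
    for _ in range(n):
        s = time.perf_counter()
        produce(payload)                 # one message send
        latencies.append(time.perf_counter() - s)
    elapsed = time.perf_counter() - t0
    return {
        "throughput_msg_s": n / elapsed,
        "avg_latency_ms": 1000 * sum(latencies) / n,
    }

log = []
stats = benchmark(log.append, b"x" * 128, n=1000)
print(sorted(stats))  # → ['avg_latency_ms', 'throughput_msg_s']
```

A real comparison would additionally sweep message size and partition count, and measure end-to-end (produce-to-consume) latency rather than only the producer-side call, as the thesis does.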
126

Investigation of various mechanisms of orchestration of calculations in analytical data marts (master's thesis)

Kozhin, A. V. January 2024
This final qualification work (master's thesis) studies various mechanisms for orchestrating the calculation of analytical data marts. The main purpose of the work is to analyze various data registration mechanisms and to develop software that orchestrates data-mart calculations for the needs of a regional bank. The result of the work was the selection of an orchestration tool, followed by the design and development of software for orchestrating the calculation of analytical data marts.
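The core job any such orchestrator performs is running dependent calculation steps in topological order. The sketch below shows that core with Python's standard-library `graphlib`; mart names and dependencies are invented, and real tools (Apache Airflow, for example) add scheduling, retries, and monitoring on top.

```python
# Hypothetical sketch of orchestration's core: execute data-mart
# calculation steps in dependency order. The graph maps each step to
# its predecessors; step names here are invented for illustration.
from graphlib import TopologicalSorter

def run_pipeline(deps: dict, tasks: dict) -> list:
    """Execute tasks respecting dependencies; return the execution order."""
    order = list(TopologicalSorter(deps).static_order())
    for name in order:
        tasks[name]()          # a real system would submit, retry, log
    return order

executed = []
deps = {"load_raw": set(), "clean": {"load_raw"}, "mart_sales": {"clean"}}
tasks = {n: (lambda n=n: executed.append(n)) for n in deps}

order = run_pipeline(deps, tasks)
print(order)  # → ['load_raw', 'clean', 'mart_sales']
```

`TopologicalSorter` also exposes an incremental API (`get_ready()`/`done()`) that lets independent steps run in parallel, which is how production orchestrators exploit the same graph.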
127

Compactions in Apache Cassandra : Performance Analysis of Compaction Strategies in Apache Cassandra

Kona, Srinand January 2016
Context: The global communication system is growing tremendously, generating a wide range of data. Telecom operators, who generate large amounts of data, need to manage these data efficiently. As database management technology advances, NoSQL databases have seen remarkable growth in the 21st century. Apache Cassandra is an advanced NoSQL database system, popular for handling semi-structured and unstructured Big Data. Cassandra manages its on-disk data effectively by using different compaction strategies. This research analyzes the performance of different compaction strategies in different use cases for the default Cassandra stress model. The analysis can suggest better usage of compaction strategies in Cassandra for a write-heavy workload. Objectives: In this study, we investigate appropriate performance metrics for evaluating compaction strategies. We provide a detailed analysis of the Size Tiered Compaction Strategy, the Date Tiered Compaction Strategy, and the Leveled Compaction Strategy for a write-heavy (90/10) workload, using the default cassandra-stress tool. Methods: A detailed literature review was conducted to study NoSQL databases and the working of the different compaction strategies in Apache Cassandra. The performance metrics were chosen based on this literature review and on the opinions of the supervisors and Ericsson's Apache Cassandra team. Two tools were developed to collect the metrics: the first, written in Jython, collects the Cassandra metrics, and the second, written in Python, collects the operating-system metrics. Graphs were generated in Microsoft Excel from the values produced by the scripts. Results: The Date Tiered Compaction Strategy and the Size Tiered Compaction Strategy showed broadly similar behaviour during the stress tests. The Leveled Compaction Strategy showed some remarkable results that affected system performance compared with the other two strategies. The Date Tiered Compaction Strategy does not perform well for the default Cassandra stress model. The Size Tiered Compaction Strategy can be preferred for the default Cassandra stress model, but is not suitable for big data. Conclusions: With a detailed analysis and logical comparison of the metrics, we conclude that the Leveled Compaction Strategy performs better for a write-heavy (90/10) workload using the default Cassandra stress model than the Size Tiered and Date Tiered Compaction Strategies.
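The intuition behind size-tiered compaction can be sketched briefly: bucket SSTables of similar size and merge a bucket once it holds enough tables, trading some write amplification for fewer files to consult on reads. The sizes, ratio, and threshold below are invented for illustration and simplify Cassandra's actual bucketing logic.

```python
# Hypothetical sketch of Size Tiered Compaction's selection step:
# group SSTables of similar size into buckets; a bucket with at least
# min_threshold tables is a candidate for merging into one SSTable.
def size_tiered_pick(sstable_sizes, bucket_ratio=1.5, min_threshold=4):
    """Return one bucket (list of sizes) ready to compact, or None."""
    buckets = []
    for size in sorted(sstable_sizes):
        for b in buckets:
            if size <= b[0] * bucket_ratio:   # similar to bucket's smallest
                b.append(size)
                break
        else:
            buckets.append([size])            # start a new size tier
    for b in buckets:
        if len(b) >= min_threshold:
            return b
    return None

print(size_tiered_pick([10, 11, 12, 13, 500]))  # → [10, 11, 12, 13]
```

Leveled compaction, by contrast, bounds the number of SSTables a read can touch per level, which is consistent with the read-side behaviour the thesis observed.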
128

Stratigraphy and micropaleontology of the Mancos Shale (Cretaceous), Black Mesa Basin, Arizona

Hazenbush, George Cordery, 1919- January 1972
No description available.
129

An Assessment of Abundance, Diet, and Cultural Significance of Mexican Gray Wolves in Arizona

Rinkevich, Sarah Ellen January 2012
I sampled the eastern portion of the Fort Apache Indian Reservation from June 19 to August 8, 2008, and from May 6 to June 19, 2009, using scat detection dogs to find wolf (Canis lupus baileyi) scat. My population size estimate for the wolf population was 19 individuals (95% CI = 14 - 58; SE = 8.30) across 2008 and 2009. My study also used DNA analyses to obtain an accurate assessment of Mexican wolf diet and to compare prey remains in Mexican gray wolf scat with those of two sympatric carnivore species (coyote, C. latrans, and puma, Puma concolor). By percent biomass, prey items consumed by Mexican wolves comprised 89% elk, 8% mule deer, and 3% coyote; prey items consumed by pumas comprised 80% elk, 12% mule deer, 4% turkey, and 4% fox. I included an ethnographic component in my research. My study showed evidence of shared knowledge about the wolf within Western Apache culture, and my data fit the consensus model based upon the large ratio between the first and second eigenvalues. I provide a literature review of how traditional ecological knowledge has enhanced the field of conservation biology, along with the challenges of collecting it and incorporating it with western science. Lastly, I provide a historical perspective on wolves throughout Arizona, an assessment of their historical abundance, and documentation of a possible mesocarnivore release. Between 1917 and 1964, 506 wolves, 117,601 coyotes, 2,608 mountain lions, 1,327 bears, 19,797 bobcats, and 21 jaguars were killed in Arizona by PARC agents, bounty hunters, and ranchers, as reported in U.S. Bureau of Biological Survey annual reports. The relationship between the numbers of coyotes and wolves destroyed was investigated using the Pearson correlation coefficient. There was a negative correlation between the numbers of wolves and coyotes destroyed in Arizona between 1917 and 1964 (r = -0.40; N = 46; p = 0.01), suggesting a possible mesopredator release of coyotes with the extirpation of the wolf in Arizona.
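The Pearson coefficient used above can be computed from first principles, as the sketch below shows. The two short series are invented yearly counts for illustration only, not the historical Arizona data, so the resulting r differs from the study's r = -0.40.

```python
# Sketch of the Pearson correlation coefficient from first principles:
# covariance of the two series divided by the product of their
# standard deviations. The input series here are invented examples.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

wolves  = [40, 30, 20, 10, 5]            # hypothetical yearly kill counts
coyotes = [1000, 1500, 2000, 2600, 3000]
print(round(pearson_r(wolves, coyotes), 2))  # strongly negative, near -1
```

A value near -1 indicates the two series move in opposite directions, which is the pattern the mesopredator-release hypothesis predicts.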
130

Native American Ethnographic Study of Tonto National Monument

Stoffle, Richard W., Toupal, Rebecca, Van Vlack, Kathleen, Diaz de Valdes, Rachel, O'Meara, Sean, Medwied-Savage, Jessica January 2008
Tonto National Monument was established by President Theodore Roosevelt on December 19, 1907 in order to protect and preserve the cliff structures and other archeological sites that were deemed places of “great ethnographic, scientific and educational interest” for future generations. The land that encompasses Tonto National Monument has been used by Native American peoples for at least 10,000 years. For the purpose of addressing their consultation responsibilities under the federal law and mandates, the National Park Service contracted with the Bureau of Applied Research in Anthropology (BARA) at the University of Arizona (UofA) to complete a Native American site interpretation study at Tonto National Monument. The purpose of this study is to bring forth Native American perspectives and understandings of the land and the resources. This study has helped to foster relationships between the Monument and the tribes. Close relationships with contemporary tribes hold the potential of learning more about the Monument’s cultural history and its continuing significance to Indian peoples. This increased awareness of contemporary Indian ties to the Monument, and to the surrounding region, will help the NPS design interpretative programs and manage resources in a culturally sensitive manner.
