31

Aspect Analyzer: Ett verktyg för automatiserad exekveringstidsanalys av komponenter och aspekter / Aspect Analyzer: A Tool for Automated WCET Analysis of Aspects and Components

Uhlin, Pernilla January 2002 (has links)
The increasing complexity of developing configurable real-time systems has given rise to new software engineering techniques, such as aspect-oriented software development and component-based software development. These techniques allow a system's crosscutting concerns to be encapsulated and increase the modularity of the software. The properties of a component that influence the system's performance or semantics are specified separately in entities called aspects, while the basic functionality remains in the component. When building a real-time system, different sets of aspects and components can be combined, each yielding a different configuration of the system. Because each configuration changes the system's temporal behavior, a way to ensure its predictability is needed. This thesis presents a tool for aspect-level worst-case execution time (WCET) analysis, which gives a priori information about the temporal behavior of the system before aspects are composed with components.
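The core idea, composing a WCET bound for a component together with the aspects woven into it, can be illustrated with a small sketch. The additive cost model, the class layout, and all figures below are assumptions for illustration only; they are not the Aspect Analyzer's actual model.

```python
# Hypothetical sketch of aspect-level WCET composition (illustrative only;
# the real Aspect Analyzer's analysis is more detailed than this additive bound).

from dataclasses import dataclass, field

@dataclass
class Aspect:
    name: str
    advice_wcet: int  # worst-case execution time of the advice, in CPU cycles

@dataclass
class Component:
    name: str
    base_wcet: int                       # WCET of the unwoven component
    join_point_hits: int = 1             # worst-case number of times advice runs
    aspects: list[Aspect] = field(default_factory=list)

def woven_wcet(component: Component) -> int:
    """Conservative WCET bound after weaving: base WCET plus every
    aspect's advice WCET at every worst-case join-point hit."""
    overhead = sum(a.advice_wcet for a in component.aspects) * component.join_point_hits
    return component.base_wcet + overhead

buffer = Component("LockingBuffer", base_wcet=1200, join_point_hits=2,
                   aspects=[Aspect("Logging", 150), Aspect("Synchronization", 300)])
print(woven_wcet(buffer))  # 1200 + (150 + 300) * 2 = 2100 cycles
```

The point of such a priori composition is that the bound can be recomputed for each candidate configuration of aspects and components before any weaving takes place.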
32

Verifikation av verktyget aspect analyzer / Aspect analyzer tool verification

Bodin, Joakim January 2003 (has links)
Rising complexity in the development of real-time systems has made it crucial to have reusable components and a more flexible way of configuring these components into a coherent system. Aspect-oriented software development (AOSD) is a technique that allows a system's crosscutting concerns to be placed in modules called aspects. By applying AOSD to real-time and embedded system development, one can expect reductions in the complexity of system design and development. A problem with AOSD in its current form is that it does not support predictability in the time domain. Hence, in order to use AOSD in real-time system development, we need ways of analyzing the temporal behavior of aspects, components, and the resulting system (made by weaving aspects and components together). The aspect analyzer is a tool that computes the worst-case execution time (WCET) for a set of components and aspects, thus enabling predictability in the time domain for aspect-oriented real-time software. A limitation of the aspect analyzer, until now, was that no verification had been made of whether it produces WCET values close to the measured WCET, or to the WCET computed with another analysis technique, for an aspect-oriented real-time system. Therefore, in this thesis we verify the correctness of the aspect analyzer using a number of different methods for WCET analysis. These investigations of the correctness of the aspect analyzer's output gave confidence in the automated WCET analysis. In addition, performing this verification identified the steps necessary to compute the WCET of a piece of code when using a third-party tool, which makes it possible to write accurate input files for the aspect analyzer.
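The kind of check this verification performs, comparing the tool's estimate against measured execution times, can be sketched roughly as follows. The function name, the figures, and the pessimism metric are hypothetical; the thesis used third-party WCET tools and measurements on real aspect-oriented code.

```python
# Illustrative check of an estimated WCET against measured execution times.
# A sound WCET estimate must never be undercut by any observed run; the gap
# above the observed maximum indicates how pessimistic the estimate is.

def verify_wcet(estimated_wcet: float, measured_times: list[float]) -> dict:
    observed_max = max(measured_times)
    safe = estimated_wcet >= observed_max
    pessimism = (estimated_wcet - observed_max) / observed_max * 100
    return {"safe": safe, "observed_max": observed_max,
            "pessimism_percent": round(pessimism, 1)}

measurements = [812.0, 790.5, 845.2, 833.7]   # e.g. microseconds per run (made up)
print(verify_wcet(estimated_wcet=900.0, measured_times=measurements))
# {'safe': True, 'observed_max': 845.2, 'pessimism_percent': 6.5}
```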
33

Log-selection strategies in a real-time system

Gillström, Niklas January 2014 (has links)
This thesis presents and evaluates how to select the data to be logged in an embedded real-time system so that the fault(s) that caused any runtime errors can be identified accurately and with confidence. Several log-selection strategies were evaluated by injecting random faults into a simulated real-time system. An instrument was created to detect and identify these faults by evaluating log data; the instrument's output was compared to ground truth to determine its accuracy. Three strategies for selecting the log entries to keep in limited permanent memory were created and evaluated using log data from the simulated real-time system. One of the log-selection strategies performed much better than the other two: it minimized processing time and stored the maximum amount of useful log data in the available storage space. / Denna uppsats illustrerar hur det blev fastställt vad som ska loggas i ett inbäddat realtidssystem för att kunna ge förtroende för att det är möjligt att utföra en korrekt identifiering av fel(en) som orsakat körningsfel. Ett antal strategier utvärderades för loggval genom att injicera slumpmässiga fel i ett simulerat realtidssystem. Ett instrument konstruerades för att utföra en korrekt upptäckt och identifiering av dessa fel genom att utvärdera loggdata. Instrumentets utdata jämfördes med ett kontrollvärde för att bestämma riktigheten av instrumentet. Tre strategier skapades för att avgöra vilka loggposter som skulle behållas i det begränsade permanenta lagringsutrymmet. Strategierna utvärderades med hjälp av loggdata från det simulerade realtidssystemet. En av strategierna för val av loggdata presterade klart bättre än de andra två: den minimerade tiden för bearbetning och lagrade maximal mängd användbar loggdata i det permanenta lagringsutrymmet.
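One plausible log-selection strategy for bounded permanent storage, keeping the highest-severity entries and evicting the weakest when full, might look like the sketch below. The scoring rule and data are assumptions for illustration; the abstract does not specify which three strategies were actually evaluated.

```python
# Hypothetical log-selection strategy for bounded permanent storage:
# keep the highest-severity entries, evicting the lowest-scored one when full.
# The scoring rule is illustrative; it is not the strategy the thesis evaluated.

import heapq

class BoundedLog:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._heap: list[tuple[int, int, str]] = []  # (severity, seq, message)
        self._seq = 0

    def record(self, severity: int, message: str) -> None:
        self._seq += 1
        entry = (severity, self._seq, message)
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, entry)
        elif entry > self._heap[0]:          # new entry outranks the weakest kept one
            heapq.heapreplace(self._heap, entry)

    def dump(self) -> list[str]:
        return [m for _, _, m in sorted(self._heap, reverse=True)]

log = BoundedLog(capacity=3)
for sev, msg in [(1, "heartbeat"), (5, "sensor timeout"), (2, "retry"),
                 (9, "watchdog reset"), (4, "queue high-water mark")]:
    log.record(sev, msg)
print(log.dump())  # ['watchdog reset', 'sensor timeout', 'queue high-water mark']
```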
34

Tools and Techniques for Efficient Transactions

Poudel, Pavan 07 September 2021 (has links)
No description available.
35

Propuesta de rellenos fluidos de baja resistencia controlada para obras de saneamiento en la Región Ica / A proposal of controlled low strength materials in sanitation projects in the Ica Region

Paucar Gutierrez, Elizabeth Ida Bertha 17 March 2021 (has links)
El Relleno Fluido o también conocido como Material Controlado de Baja Resistencia (CLSM por sus siglas en inglés) vienen siendo requeridos sobre todo en épocas de pandemia por el COVID-19 en lugar del relleno compactado, debido a su gran facilidad y rapidez para los rellenos de cavidades de zanjas, tanto en redes de agua, desagüe y alcantarillado, relleno de cimentaciones de edificios y puentes, entre las principales aplicaciones. El presente estudio contempla el desarrollo de mezclas con contenidos de cemento Portland tipo I de 60 a 90 kg/m3 para rangos de resistencia a compresión entre 5 a 15 kg/cm2 a 28 días, con agregados de la cantera Tinguiña de Ica, y aditivo agente espumante para conferir la fluidez y trabajabilidad necesaria que facilite su colocación en obra. Resultados satisfactorios de fluidez entre 9 ½” a 10 ½” y pérdida de fluidez promedio de 2”/ hora, y rangos de resistencia de hasta 24 kg/cm2 fueron obtenidos, los cuales permitieron un buen comportamiento costo beneficio tanto en ahorro económico, y tiempo de ejecución propuesto para un proyecto real de saneamiento en la ciudad de Ica. Asimismo, gracias a la aplicación de las mezclas de Relleno Fluido propuestas, se preservará el distanciamiento social durante su empleo en obra, dado que solo requiere de una persona para su aplicación en los rellenos de zanjas. / Flowable Fill, also known as Controlled Low Strength Material (CLSM), has been in demand, especially during the COVID-19 pandemic, as an alternative to traditional compacted fill, owing to the ease and speed with which it fills trench cavities in water, sewage, and drainage networks, as well as foundation fills for buildings and bridges, among its main applications. The present study covers the development of mixtures with Portland cement type I contents from 60 to 90 kg/m3, targeting 28-day compressive strengths between 5 and 15 kg/cm2, with coarse and fine aggregates from the Tinguiña quarry in the city of Ica, and a foaming-agent admixture to provide the fluidity and workability needed for placement at the job site. Satisfactory flow results between 9 ½" and 10 ½", an average flow loss of 2"/hour, and compressive strengths of up to 24 kg/cm2 were obtained, showing a good cost-benefit performance, in both economic savings and execution time, for a proposed real sanitation project in the city of Ica. Likewise, thanks to the application of the proposed Flowable Fill mixtures, social distancing can be preserved during use on site, since only one person is required to place the fill in trenches. / Tesis
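As a rough worked example of what the reported flow loss implies for scheduling (an illustration, not a figure from the thesis): assuming the mix must retain a spread of at least 8" to remain placeable (this threshold is an assumption, not stated in the abstract), a batch starting at the upper measured spread of 10 ½" and losing 2" per hour stays workable for about

t = (10.5" − 8.0") / (2"/hour) ≈ 1.25 hours,

so trench filling would need to be completed within roughly an hour and a quarter of batching.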
36

Evaluation of Generic GraphQL Servers for Accessing Legacy Databases

Ismail, Muhammad January 2022 (has links)
A few years ago, REST APIs were considered the standard for web APIs; they now have a strong competitor. REST APIs provide some excellent features, such as stateless servers and structured access to resources. Over time, however, they have not offered enough flexibility in data access as client requirements change. In 2015, Facebook introduced GraphQL, which overcomes these problems with REST and provides more flexibility and efficiency for client requirements, for example by removing over- and under-fetching. Changing existing APIs into GraphQL APIs requires considerable time and effort. Therefore, server implementation tools have been developed to reduce development cost and time. A few of these tools generate a GraphQL schema and server implementation automatically over a legacy database.

This master thesis studies tools that automatically generate GraphQL server implementations over legacy databases and evaluates the performance of the generated GraphQL servers. First, we identify GraphQL server implementation tools, namely Hasura and PostGraphile, and compare the servers' performance using a benchmark methodology. Secondly, we run an experiment on a computer system and use performance metrics for the assessment. The results of our experiment show that PostGraphile has higher throughput and lower query execution time than Hasura. In most of the query templates from the benchmark, PostGraphile outperforms Hasura.
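A minimal version of such a latency probe might look like the following Python sketch. The endpoint URL, the query, and the repetition count are placeholders; the actual study ran a suite of benchmark query templates against Hasura and PostGraphile.

```python
# Minimal latency probe for a GraphQL HTTP endpoint, in the spirit of the
# thesis's benchmark. URL and query are hypothetical placeholders.

import statistics
import time

import requests

ENDPOINT = "http://localhost:8080/v1/graphql"   # hypothetical Hasura-style endpoint
QUERY = "{ customers(limit: 10) { id name } }"  # hypothetical schema

def probe(runs: int = 50) -> None:
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        response = requests.post(ENDPOINT, json={"query": QUERY}, timeout=30)
        response.raise_for_status()
        latencies.append(time.perf_counter() - start)
    print(f"mean {statistics.mean(latencies) * 1000:.1f} ms, "
          f"p95 {sorted(latencies)[int(0.95 * runs)] * 1000:.1f} ms, "
          f"throughput {runs / sum(latencies):.1f} req/s")

if __name__ == "__main__":
    probe()
```

Measuring both mean latency and throughput per query template is what allows servers like Hasura and PostGraphile to be compared on more than a single headline number.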
37

Návrh metod a nástrojů pro zrychlení vývoje softwaru pro vestavěné procesory se zaměřením na aplikace v mechatronice / DESIGN OF METHODS AND TOOLS ACCELERATING THE SOFTWARE DESIGN FOR EMBEDDED PROCESSORS TARGETED FOR MECHATRONICS APPLICATIONS

Lamberský, Vojtěch January 2015 (has links)
The main focus of this dissertation is on methods and tools that can speed up the software development process for embedded processors used in mechatronics applications. The first part of the work introduces software and hardware tools suitable for rapid development and prototyping of new applications today. The work focuses on two main topics from this application field. The first is the development of tools for automatic code generation from the Simulink environment for an embedded processor. The second is the development of tools for predicting execution time based on a Simulink model. The next chapter describes various aspects and properties of the Cerebot blockset, a toolset for fully automatic code generation from the Simulink environment for an embedded processor. The following chapter describes methods suitable for predicting execution time on an embedded processor based on a Simulink model. The main contribution of this work is the support created for fully automatic code generation from Simulink for the MX7 cK hardware, which also covers a complex peripheral (a graphic display unit). Another important contribution is the developed method for automatically predicting software execution time based on a Simulink model.
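The model-based prediction idea, estimating execution time from per-block cost figures measured in advance on the target processor, can be sketched as follows. Block names, cycle counts, and the clock rate are made up for illustration; the thesis's method operates on real Simulink models.

```python
# Illustrative model-based execution time prediction: estimate a model's
# step time as the sum of per-block timing figures measured beforehand on
# the target processor. All names and figures here are hypothetical.

BLOCK_CYCLES = {              # measured worst-case cycles per block type (made up)
    "Gain": 12,
    "Sum": 8,
    "DiscreteIntegrator": 46,
    "Saturation": 15,
}

def predict_cycles(model_blocks: list[str]) -> int:
    """Sum the per-block cost over every block instance in the model."""
    return sum(BLOCK_CYCLES[b] for b in model_blocks)

controller = ["Gain", "Sum", "DiscreteIntegrator", "Saturation", "Gain"]
cycles = predict_cycles(controller)
cpu_hz = 80e6                 # e.g. an 80 MHz embedded processor (assumption)
print(f"{cycles} cycles ~ {cycles / cpu_hz * 1e6:.2f} us per step")
```

The attraction of this approach is that a developer gets a timing estimate directly from the model, before any code is generated or deployed to the target.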
38

Evaluating Random Forest and k-Nearest Neighbour Algorithms on Real-Life Data Sets / Utvärdering av slumpmässig skog och k-närmaste granne algoritmer på verkliga datamängder

Salim, Atheer, Farahani, Milad January 2023 (has links)
Computers can be used to classify various types of data, for example to filter email messages, detect computer viruses, or detect diseases. This thesis explores two classification algorithms, random forest and k-nearest neighbour, to understand how accurately and how quickly they classify data. A literature study was conducted to identify the prerequisites and to find suitable data sets. Five data sets, leukemia, credit card, heart failure, mushrooms, and breast cancer, were gathered and classified by each algorithm. A train/test split and 4-fold cross-validation were used for each data set. The Rust library SmartCore, which includes numerous classification methods and tools, was used to perform the classification. The results indicated that the train/test split gave better classification results than 4-fold cross-validation. However, it could not be determined whether any attributes of a data set affect classification accuracy. Random forest achieved the best classification results on the two data sets heart failure and leukemia, whilst k-nearest neighbour achieved the best results on the remaining three. In general, the classification results of the two algorithms were similar. The execution time of random forest depended on the number of trees in the "forest": a greater number of trees resulted in a longer execution time. In contrast, a higher k value did not increase the execution time of k-nearest neighbour. It was also found that, with random forest, data sets containing only binary values (0 and 1) ran much faster than data sets with arbitrary values. A larger number of instances in a data set also increased the execution time of random forest, even with a small number of features. The same applied to k-nearest neighbour, although there the number of features also affects execution, since time is needed to compute distances between data points. Random forest achieved the fastest execution time on the two data sets credit card and mushrooms, whilst k-nearest neighbour executed faster on the remaining three. The difference in execution time between the algorithms varied considerably, depending on the parameter values chosen for each algorithm. / Datorer kan användas för att klassificera olika typer av data, t.ex att filtrera e-postmeddelanden, upptäcka datorvirus, upptäcka sjukdomar, etc. Denna avhandling utforskar två klassificeringsalgoritmer, slumpmässiga skogar och k-närmaste grannar, för att förstå hur precist och hur snabbt de klassificerar data. En litteraturstudie genomfördes för att identifiera de olika förutsättningarna och för att hitta lämpliga datamängder. Fem olika datamängder, leukemia, credit card, heart failure, mushrooms och breast cancer, samlades in och klassificerades av varje algoritm. En träningsfördelning och en 4-faldig korsvalidering för varje datamängd användes. Rust-biblioteket SmartCore, som inkluderade många klassificeringsmetoder och verktyg, användes för att utföra klassificeringen. De insamlade resultaten visade att användningen av träningsfördelning resulterade i bättre klassificeringsresultat i motsats till 4-faldig korsvalidering. Det gick dock inte att fastställa om några attribut för en datamängd påverkar klassificeringens noggrannhet.
Slumpmässiga skogar lyckades uppnå det bästa klassificeringsresultaten på de två datamängderna heart failure och leukemia, medan k-närmaste granne uppnådde det bästa klassificeringsresultaten på de återstående tre datamängderna. I allmänhet var klassificeringsresultaten för båda algoritmerna likartade. Utifrån resultaten var utförandetiden för slumpmässiga skogar beroende av antalet träd i ”skogen”, då ett större antal träd resulterade i en ökad utförandetid. Däremot ökade inte ett högre k-värde exekveringstiden för k-närmaste grannar. Det upptäcktes även att datamängder med endast binära värden (0 och 1) körs mycket snabbare än datamängder med godtyckliga värden när man använder slumpmässiga skogar. Antalet instanser i en datamängd leder också till en ökad exekveringstid för slumpmässiga skogar trots ett litet antal egenskaper. Detsamma gällde för k-närmaste granne, men även antalet egenskaper påverkade exekveringstiden då tid behövs för att beräkna avstånd mellan datapunkter. Slumpmässiga skogar lyckades uppnå den snabbaste exekveringstiden på de två datamängderna credit card och mushrooms, medan k-närmaste granne exekverades snabbare på de återstående tre datamängderna. Skillnaden i exekveringstid mellan algoritmerna varierade mycket och detta beror på vilket parametervärde som valts för respektive algoritm.
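An analogous experiment can be sketched in Python with scikit-learn (the thesis itself used the Rust library SmartCore); the dataset and parameter values below are placeholders chosen for illustration, and scikit-learn's bundled breast cancer data set may differ from the one used in the thesis.

```python
# Sketch of the thesis's experimental setup with scikit-learn stand-ins:
# train/test split accuracy, 4-fold cross-validation accuracy, and timing
# for random forest and k-nearest neighbour on one data set.

import time

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

for name, model in [("random forest", RandomForestClassifier(n_estimators=100,
                                                              random_state=0)),
                    ("k-NN (k=5)", KNeighborsClassifier(n_neighbors=5))]:
    start = time.perf_counter()
    model.fit(X_train, y_train)
    split_acc = model.score(X_test, y_test)             # train/test split accuracy
    cv_acc = cross_val_score(model, X, y, cv=4).mean()  # 4-fold cross-validation
    elapsed = time.perf_counter() - start
    print(f"{name}: split {split_acc:.3f}, 4-fold CV {cv_acc:.3f}, {elapsed:.2f} s")
```

Varying n_estimators and n_neighbors in such a harness is how one would reproduce the thesis's observation that more trees lengthen random forest's execution time while a larger k barely affects k-nearest neighbour's.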
39

Better Distributed Directories and Transactional Scheduling

Rai, Shishir 27 July 2023 (has links)
No description available.
40

A Query, a Minute: Evaluating Performance Isolation in Cloud Databases

Kiefer, Tim, Schön, Hendrik, Habich, Dirk, Lehner, Wolfgang 02 February 2023 (has links)
Several cloud providers offer relational databases as part of their portfolio. It is, however, not obvious how resource virtualization and sharing, which are inherent to cloud computing, influence the performance and predictability of these cloud databases. Cloud providers give little to no guarantees for consistent execution or isolation from other users. To evaluate the performance isolation capabilities of two commercial cloud databases, we ran a series of experiments over the course of a week (a query, a minute) and report variations in query response times. As a baseline, we ran the same experiments on a dedicated server in our data center. The results show that in the cloud, single outliers are up to 31 times slower than the average. Additionally, one can see a point in time after which the average performance of all executed queries improves by 38%.
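The measurement loop behind "a query, a minute" can be sketched as follows; run_query() is a placeholder for the actual benchmark query, and the reporting is reduced to the two statistics mentioned above.

```python
# Sketch of the paper's "a query, a minute" measurement loop: issue the same
# query once per minute, record response times, then report outliers relative
# to the mean. run_query() is a placeholder for the real benchmark query.

import time

def run_query() -> None:
    ...  # placeholder: execute the benchmark query against the cloud database

def measure(iterations: int) -> list[float]:
    # A week of one-per-minute samples is 7 * 24 * 60 = 10,080 iterations.
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        run_query()
        samples.append(time.perf_counter() - start)
        time.sleep(60)               # one query per minute; sleep is not timed
    return samples

def report(samples: list[float]) -> None:
    mean = sum(samples) / len(samples)
    worst = max(samples)
    print(f"mean {mean * 1000:.1f} ms, worst outlier {worst / mean:.1f}x the mean")
```

Running the identical loop on a dedicated in-house server, as the paper does, provides the baseline against which the cloud databases' outliers and drifts are judged.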
