251
Enhancing Task Assignment in Many-Core Systems by a Situation Aware Scheduler
Meier, Tobias; Ernst, Michael; Frey, Andreas; Hardt, Wolfram. 17 July 2017
The resource demand on embedded devices is constantly growing. This is caused by the sheer explosion of software-based functions in embedded systems, which are growing far faster than the resources of single-core and multi-core embedded processors. As one of the limiting factors is the computing power of the processors, we need to explore ways to use this resource more efficiently. We identified that during the run-time of an embedded device, the resource demand of the software functions changes permanently, depending on the device situation. To enable an embedded device to take advantage of this dynamic resource demand, the allocation of software functions to the processor must be handled by a scheduler that is able to evaluate the resource demand of the software functions in relation to the device situation. This marks a change in embedded devices from statically defined software systems to dynamic software systems. Beyond that, we can increase the efficiency even further by extending the approach from a single device to a distributed or networked system (many-core system). However, existing approaches to dynamic resource allocation are focused on individual devices and leave the optimization potential of many-core systems untouched. Our concept extends the existing Hierarchical Asynchronous Multi-Core Scheduler (HAMS) concept for individual devices to many-core systems. This extension introduces a dynamic situation-aware scheduler for many-core systems which takes the current workload of all devices and the system situation into account. With our approach, the resource efficiency of an embedded many-core system can be increased. The following paper explains the architecture and the expected results of our concept.
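As a hedged illustration of the kind of situation-aware assignment the abstract describes, the sketch below places software functions on the device with the most spare capacity, with resource demands that depend on the current situation. All device names, situations, and numbers are invented placeholders; this is not the HAMS implementation.

```python
# Illustrative sketch of situation-aware task assignment across networked
# devices. Each function's resource demand depends on the current situation,
# and the scheduler picks the device with the most remaining headroom.
# Names and numbers are hypothetical, not taken from HAMS.

capacities = {"ecu1": 100, "ecu2": 80}   # available compute units per device
load = {"ecu1": 0, "ecu2": 0}            # currently allocated units

# Demand of each software function, per device situation.
demand = {"parking_assist": {"parking": 40, "highway": 5},
          "lane_keeping":   {"parking": 5,  "highway": 50}}

def assign(function, situation):
    d = demand[function][situation]
    # Choose the device with the largest remaining headroom that fits d.
    device = max(capacities, key=lambda k: capacities[k] - load[k])
    if capacities[device] - load[device] < d:
        raise RuntimeError("no device can host the function")
    load[device] += d
    return device

# In the "parking" situation the heavy parking function lands on ecu1,
# pushing the lighter function to ecu2.
placement = {f: assign(f, "parking") for f in demand}
```

When the situation changes, re-running the assignment with the new demands yields a different placement, which is the dynamic behaviour the abstract argues for.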
252
Wireless Broadcasting with Network Coding
Lu, Lu. January 2011
Wireless digital broadcasting applications such as digital audio broadcast (DAB) and digital video broadcast (DVB) are becoming increasingly popular, since the digital format allows for quality improvements compared to traditional analogue broadcast. Broadcasting is commonly based on packet transmission. In this thesis, we consider broadcasting over packet erasure channels. To achieve reliable transmission, error-control schemes are needed. By carefully designing the error-control schemes, transmission efficiency can be improved compared to traditional automatic repeat-request (ARQ) schemes and rateless codes. Here, we first study the application of a novel binary deterministic rateless (BDR) code. Then, we focus on the design of network coding for the wireless broadcasting system, which can significantly improve the system performance compared to traditional ARQ. Both a one-hop broadcasting system and a relay-aided broadcasting system are considered. In the one-hop broadcasting system, we investigate the application of systematic BDR (SBDR) codes and instantaneously decodable network coding (IDNC). For the SBDR codes, we determine the number of encoded redundancy packets that guarantees high broadcast transmission efficiency and simultaneously low complexity. Moreover, with limited feedback the efficiency can be further improved. Then, we propose an improved network coding scheme that can asymptotically achieve the theoretical lower bound on transmission overhead for a sufficiently large number of information packets. In the relay-aided system, we consider a scenario where the relay node operates in half-duplex mode, and transmissions from the base station (BS) and the relay are over orthogonal channels. Based on random network coding, a scheduling problem for the transmissions of redundancy packets from the BS and the relay is formulated.
Two scenarios, namely instantaneous feedback after each redundancy packet and feedback after multiple redundancy packets, are investigated. We further extend the algorithms to multi-cell networks. Besides random network coding, IDNC-based schemes are proposed as well. We show that significant improvements in transmission efficiency are obtained compared to previously proposed ARQ and network-coding-based schemes.
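The core instantaneously decodable network coding (IDNC) idea can be sketched in a few lines: when two receivers each report a different lost packet, a single XOR-coded retransmission lets both decode immediately. The packet contents and feedback below are hypothetical illustrations of the general technique, not the scheme proposed in the thesis.

```python
# Minimal IDNC-style retransmission sketch with two receivers that each
# lost one distinct packet. One XOR-coded packet serves both at once,
# and each can decode it instantly using packets it already holds.

def xor_packets(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

packets = [b"AAAA", b"BBBB", b"CCCC"]

# Feedback: each receiver reports the single packet index it failed to get.
lost = {"rx1": 0, "rx2": 2}

# One coded retransmission instead of two separate ARQ retransmissions.
coded = xor_packets(packets[lost["rx1"]], packets[lost["rx2"]])

# Each receiver cancels the packet it already holds and decodes immediately.
rx1_recovered = xor_packets(coded, packets[lost["rx2"]])  # rx1 holds packet 2
rx2_recovered = xor_packets(coded, packets[lost["rx1"]])  # rx2 holds packet 0
```

This is where the efficiency gain over plain ARQ comes from: one transmission repairs several receivers' erasures simultaneously.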
253
Tailored Preloading, Precaching and Prefetching Loading Strategies for Applications Through a Multi-Component AI
Öfver, Martin. January 2022
When using a personal computer, the user may experience starting an application for the first time, loading a level in a video game, or some other tedious loading process. These loading processes often load mostly the same data each time and consume time during which the user is waiting, so making them faster is usually preferred. The file-caching tactics Preloading, Precaching and Prefetching (PPP) have received past research attention, mostly aimed at generic improvements or better algorithms; the gap in this work is a lack of non-generic file-caching optimisation algorithms. To improve loading times during a Mostly Deterministic File-Loading Process (MDFLP), this research suggests a Multi-Component Artificial Intelligence (MCAI) that analyses the application's runtime results. The proposed MCAI would not be a one-size-fits-all solution but would instead generate a tailored File-Loading Strategy (FLS) for the application. The aim of the research is to investigate whether an MCAI could improve loading speeds of arbitrary applications' MDFLPs. The objectives are to implement a test synthesiser for generating synthetic test environments on which to use the MCAI, to implement the MCAI, and to perform the experiments. The research questions concern how the MCAI can analyse inefficient operations during an MDFLP, propose measures that increase efficiency, and aid developers independently of the high-level technology used. They also concern how the MCAI, through iterative runs of an application, can generate an application-specific FLS that is better in terms of PPP performance. The method section details how the test synthesiser, the MCAI, and the host application are implemented. It also explains how the experiment was conducted, what it tested, what data was collected, and what hardware and software were used. The results first show in detail how the MCAI works on a simple test and then move on to three extensive tests.
Two of the tests show positive results, where the MCAI manages to generate an optimal FLS, whilst the MCAI fails in the third test. The third test highlights inherent weaknesses in the MCAI. The conclusion is that the MCAI shows potential: even in its weakest form, it still manages to produce good results, generating an FLS that improves load-time performance. There is potential to make the MCAI smarter, more efficient, and more reliable, and to enable it to generate an FLS for the third test. The research leaves room for follow-ups, such as developing the MCAI further and performing case studies.
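A minimal sketch of the tailored-FLS idea: observe which files a mostly deterministic loading process reads across several runs, then prefetch the files that appear in every run, ordered by when they are typically needed. The file names, runs, and threshold are invented for illustration; this is not the thesis's MCAI implementation.

```python
# Hypothetical sketch: build a tailored prefetch list from observed runs
# of a mostly deterministic file-loading process. Files accessed in every
# run form the stable core; they are prefetched in their typical order.

from collections import Counter

runs = [
    ["config.ini", "atlas.png", "level1.dat", "music.ogg"],
    ["config.ini", "atlas.png", "level1.dat", "voice.ogg"],
    ["config.ini", "atlas.png", "level1.dat", "music.ogg"],
]

def tailored_prefetch_list(runs, threshold=1.0):
    """Files accessed in at least `threshold` fraction of observed runs,
    ordered by their average position in the loading sequence."""
    counts = Counter(f for run in runs for f in set(run))
    stable = [f for f, c in counts.items() if c / len(runs) >= threshold]
    avg_pos = {f: sum(r.index(f) for r in runs if f in r) / counts[f]
               for f in stable}
    return sorted(stable, key=avg_pos.get)

fls = tailored_prefetch_list(runs)
```

Lowering the threshold would trade prefetch accuracy for coverage, which mirrors the kind of tuning decision a per-application strategy generator has to make.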
254
Contributions to the Theory of Piecewise Deterministic Markov Processes and Applications to Generalized Age Processes and Storage Models
Löpker, Andreas. 09 January 2006
A class of Markov processes with deterministic paths and random jumps is studied with the aid of martingales and the extended infinitesimal generator. The focus is on computing the expected value and the Laplace transform of certain stopping times. Furthermore, the question is examined of when the processes under consideration possess stationary distributions and, where they exist, what form these take. The methods are demonstrated on the example of a generalized age process and a storage (dam) process.
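As an illustration of the class of processes studied, the sketch below simulates a simple piecewise deterministic storage (dam) process: the level drains deterministically between Poisson-distributed jump times and jumps upward by exponentially distributed amounts. The rates and drain speed are arbitrary placeholders, not parameters from the thesis.

```python
import random

# Toy piecewise deterministic Markov process: a dam level that drains
# deterministically at unit rate between jumps, and receives exponentially
# distributed inflow jumps at the points of a Poisson process.

def simulate_dam(t_end=100.0, jump_rate=0.5, mean_jump=1.5, seed=1):
    rng = random.Random(seed)
    t, level = 0.0, 0.0
    path = [(0.0, 0.0)]
    while t < t_end:
        dt = rng.expovariate(jump_rate)            # time to next jump
        level = max(0.0, level - dt)               # deterministic drain, floor at 0
        t += dt
        level += rng.expovariate(1.0 / mean_jump)  # random upward jump
        path.append((t, level))
    return path

path = simulate_dam()
```

Between jumps the trajectory is fully determined by the drain dynamics; all randomness enters through the jump times and sizes, which is exactly the structure that makes the extended generator tractable.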
255
Collective Dynamics of Excitable Tree Networks
Khaledi Nasab, Ali. 23 September 2019
No description available.
256
Use of Radar Estimated Precipitation for Flood Forecasting
Wijayarathne, Dayal. January 2020
Flooding is one of the deadliest natural hazards in the world, and forecasting floods in advance can significantly reduce their socio-economic impacts. An accurate and reliable flood forecasting system is heavily dependent on the input precipitation data. Real-time, spatially and temporally continuous radar Quantitative Precipitation Estimates (QPEs) are a useful source of precipitation information. This research investigates the efficacy of American and Canadian weather radar QPEs for hydrological model calibration and validation for flood forecasting in urban and semi-urban watersheds in Canada. A comprehensive review was conducted of the weather radar network in Canada and its hydrological applications, challenges, and potential future research. First, radar QPEs were evaluated to verify their reliability and accuracy as precipitation input for hydrometeorological models. Then, radar-gauge merging techniques were assessed to select the best method for urban flood forecasting applications. After that, merged radar QPEs were used as precipitation input for the hydrological models to assess the impact of radar QPEs on hydrological model calibration and validation. Finally, a framework was developed that integrates hydrological and hydraulic models to produce flood forecasts and inundation maps in urbanized watersheds. Results indicated that dual-polarized radar QPEs can be effectively used as a source of precipitation input to hydrological models. Radar-gauge merging enhances both the accuracy and reliability of radar QPEs, and thereby the accuracy of streamflow simulation. Since flood forecasting agencies usually use hydrological models calibrated and validated with gauge data, it is recommended to use bias-corrected radar QPEs to run existing hydrological models to simulate streamflow and produce flood extent maps.
The hydrological and hydraulic models can be integrated into one framework using bias-corrected radar QPEs to develop a successful flood forecasting system.
Doctor of Science (PhD) thesis. Lay abstract: Floods are common and increasingly deadly natural hazards. Predicting floods in advance using a Flood Early Warning System (FEWS) can facilitate flood mitigation. Radar Quantitative Precipitation Estimates (QPEs) provide real-time, spatially and temporally continuous precipitation data. This research focuses on bias-correcting and evaluating radar QPEs for hydrologic forecasting. The corrected QPEs are applied in a framework connecting hydrological and hydraulic models for operational flood forecasting in urban watersheds in Canada. The key contributions include: (1) dual-polarized radar QPEs are a useful precipitation input to calibrate, validate, and run hydrological models; (2) radar-gauge merging enhances the accuracy and reliability of radar QPEs; (3) floods can be predicted more accurately by integrating hydrological and hydraulic models in one framework using bias-corrected radar QPEs; and (4) gauge-calibrated hydrological models can be run effectively using bias-corrected radar QPEs. This research will benefit future applications of real-time radar QPEs in operational FEWS.
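One common radar-gauge merging baseline is a mean-field bias correction, which scales the radar field by the ratio of total gauge to total radar rainfall at co-located points. The sketch below uses invented accumulations; the thesis evaluates several merging techniques, and this is only an illustration of the simplest one.

```python
# Mean-field bias (MFB) sketch: scale radar QPE by the gauge/radar ratio
# computed over co-located gauge-radar pairs. Values are made up.

def mean_field_bias(gauge_mm, radar_mm):
    """Ratio of total gauge to total radar rainfall at co-located points."""
    g, r = sum(gauge_mm), sum(radar_mm)
    return g / r if r > 0 else 1.0

gauges = [10.0, 8.0, 12.0]   # gauge accumulations (mm)
radar  = [8.0, 6.0, 10.0]    # radar QPE at the gauge pixels (mm)

bias = mean_field_bias(gauges, radar)        # radar underestimates here
corrected = [bias * v for v in radar]        # bias-corrected radar field
```

A single multiplicative factor corrects systematic under- or overestimation across the whole field; spatially varying merging methods refine this by letting the correction differ from pixel to pixel.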
257
Hosting Capacity Methods Considering Complementarity between Solar and Wind Power : A Case Study on a Swedish Regional Grid
Andersson, Emma; Abrahamsson Bolstad, Maja. January 2023
The demand for electrical power is growing due to factors such as population growth, urbanisation, and the transition from fossil fuels to renewable energy sources. To keep up with the changes in electricity demand, the Swedish power grid must connect more renewable power generation, but also increase its transmission capacity. Traditionally, power grids are expanded to increase transmission capacity, which requires a lot of time and investment. In order not to hinder the electrification of society, it is important to adequately estimate the current transmission capacity and plan expansions accordingly. In the past, the generation of electrical power was primarily based on dispatchable energy sources, and the planning of new connections to the grid was assessed according to the stable and controllable nature of the electricity supply. However, renewable sources like solar and wind power are affected by weather variations, so the traditional methods of planning the power grid are no longer sufficient. Instead, there is a need to develop and implement new methods that account for the variable nature of renewable energy sources, as well as the possible complementarity between different renewable power sources. This may allow more renewable power generation to be connected to the grid without the need to expand it. The aim of this thesis is to investigate two different methods for analysing how much renewable power generation can be connected to the power grid, so-called hosting capacity methods. The first is a deterministic method which is traditionally used in power system analyses since it is fast, simple, and conservative; it considers neither the intermittent nature of solar and wind power nor any complementarity. The second is a time series method which considers the complementarity and intermittency of solar and wind power but requires much data.
The methods are compared with regard to assessed hosting capacities, risks, and reliability of results. The study is performed on a regional grid case in the middle of Sweden. Solar and wind power plants with different capacities are modelled at ten buses in the power grid. The power grid is analysed in PSS/E, with line loading and voltage levels determining the assessed hosting capacities. A correlation map presenting the temporal correlations of solar and wind power over the grid case area is also created in order to evaluate the complementarity in the area and its possible effects on the assessed hosting capacities. The results show that the time series method is more reliable than the deterministic method, owing to the difficulty of identifying accurate worst-case hours for the deterministic method. The time series method is also preferred as it considers complementarity between solar and wind power. However, the correlation map indicates that the grid case area has weakly positive correlations, meaning low complementarity between solar and wind power. This suggests that the differences in hosting capacity between the two methods are more likely due to the temporal variations in existing load and power generation. The differences in assessed hosting capacity between the ten buses are probably not due to local complementarity either, but rather to structural differences of the grid in terms of components, local loads, and existing power generation.
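The difference between the two method families can be illustrated with a toy calculation: a deterministic estimate pairs maximum generation with minimum load even if they never coincide, while a time-series estimate checks every hour with its coincident load, and a solar-wind mix can raise the per-MW capacity when the profiles complement each other. The hourly profiles, the 10 MW line limit, and the 50/50 mix below are invented, not taken from the case study or the PSS/E workflow.

```python
# Toy hosting-capacity comparison. Constraint: net export (generation
# minus load) must stay within the line limit in every hour.

line_limit = 10.0                       # MW export limit
load = [6.0, 5.5, 5.0, 4.0]             # hourly demand (MW)
solar_pu = [0.0, 0.9, 1.0, 0.4]         # solar output per MW installed
wind_pu = [0.7, 0.3, 0.2, 0.6]          # wind output per MW installed

def hosting_capacity(gen_pu, load, limit):
    # Largest installed capacity C with C*gen - load <= limit every hour.
    return min((limit + l) / g for g, l in zip(gen_pu, load) if g > 0)

# Deterministic method: worst case = full output paired with minimum load.
det = (line_limit + min(load)) / max(solar_pu)

# Time-series method: generation paired with its coincident load.
ts = hosting_capacity(solar_pu, load, line_limit)

# A 50/50 solar-wind portfolio flattens the profile (complementarity).
mix_pu = [0.5 * s + 0.5 * w for s, w in zip(solar_pu, wind_pu)]
ts_mix = hosting_capacity(mix_pu, load, line_limit)
```

In this toy case the time-series estimate exceeds the deterministic one because peak solar does not coincide with minimum load, and the mixed portfolio hosts more total capacity per installed MW than solar alone.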
258
Evaluation of structurally controlled rockfall hazard for underground excavations in seismically active areas of the Kiirunavaara mine
Fuentes Espinoza, Manuel Alberto. January 2023
Sublevel caving operations at great depths are subjected both to large stress concentrations, which are redistributed as the mining front progresses, and to mining-induced seismicity. This is the case for the Kiirunavaara mine, Sweden's largest underground mine. Since the mine was declared seismically active in 2007/2008, large rockfalls controlled by structures have occurred in many parts of the mine, despite the use of rock support systems designed to bear dynamic loads. A novel layout for sublevel caving operations, internally named the "fork layout", is being tested at a satellite mine. This layout was conceived to place the ore-parallel longitudinal footwall drifts further away from the contact between the orebody and the footwall. That way, the differential stresses that generate stress-related damage are expected to be reduced. However, the effect of implementing the fork layout on the hazard potential for structurally controlled rockfalls has not yet been studied in detail. Large rockfalls that occurred in different parts of the mine were analysed with respect to their structures, the location of the damage event, and the type of excavation. The majority of these occurred at footwall drift intersections. Information from damage mapping and from the seismic events that triggered these rockfalls was used to generate a conceptual model that illustrates the relative spatial relation between the seismic source and the damage location. In addition, the seismic source parameters of the events that triggered these rockfalls were processed using scaling laws to obtain ground motion parameters such as peak particle velocity and acceleration at the damage site. The effect of implementing the fork layout on rockfall hazard was tested at the intersections between footwall drifts and crosscuts (FD-CC) and the intersections between access drifts and footwall drifts (AD-FD) in two production blocks, using the traditional layout for sublevel caving mining as a point of comparison.
Two different fork layouts were tested: FD-CC at 80° (or AD-FD at 100°) and FD-CC at 70° (or AD-FD at 110°). Structural data available from face mapping and oriented core logging was used to define the predominant joint sets in the investigated blocks. Using this structural input, wedge volumes at the intersections were modelled deterministically and probabilistically in Unwedge. The variation in wedge volumes formed at the intersections between layouts was used as a proxy for rockfall potential: if a layout reduced the wedge size, the rockfall hazard, if triggered by a seismic event, would be smaller, and vice versa. It was concluded that most rockfalls at the FD-CC intersections are controlled by structures from three major joint sets, and it was observed that rockfalls at FD-CC intersections occurred more often at certain footwall drift orientations. Many seismic events that triggered these rockfalls are located close to the ore passes and generated ground accelerations between 0.5 and 10 times the gravitational acceleration. Implementing fork layouts with an FD-CC intersection angle of 80° generates larger wedges than the traditional layout and thus scenarios with a higher rockfall hazard. On the other hand, using fork layouts with an FD-CC intersection angle of 70° reduces the wedge size at the southern FD-CC intersections, so the rockfall hazard is reduced there; at the northern FD-CC intersections, the wedge volumes increase, generating a higher rockfall potential. An AD-FD intersection angle of 110° also generates a smaller rockfall hazard than the traditional layout in both production blocks.
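The scaling-law step can be sketched generically: mine-seismology ground-motion relations of the form log10(PPV · R) = a · M + b are commonly used to estimate the peak particle velocity (PPV) at a damage site from the event magnitude M and the source-to-site distance R. The coefficients below are invented placeholders, not the calibrated values used for the Kiirunavaara mine.

```python
# Generic far-field ground-motion scaling sketch: estimate peak particle
# velocity (mm/s) at a damage site from event magnitude and distance.
# Coefficients a and b are hypothetical; real values are fitted per mine.

def ppv_from_event(magnitude, distance_m, a=0.5, b=1.0):
    """PPV (mm/s) from log10(PPV * R) = a * M + b, solved for PPV."""
    return (10 ** (a * magnitude + b)) / distance_m

ppv = ppv_from_event(magnitude=1.5, distance_m=50.0)
```

Comparing such an estimate against the support system's dynamic capacity is one way to judge whether a seismic event close to an intersection could plausibly have triggered the observed rockfall.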
259
SENSITIVITY OF QUEUE ESTIMATES TO THE SIZE OF THE TIME INTERVAL USED TO AGGREGATE TRAFFIC VOLUME DATA
Shrestha, Sajan. 19 May 2015
No description available.
260
Capacity allocation and rescheduling in supply chains
Liu, Zhixin. 20 September 2007
No description available.