About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Estimation à haut-niveau des dégradations temporelles dans les processeurs : méthodologie et mise en oeuvre logicielle / Aging and IC timing estimation at high level : methodology and simulation

Bertolini, Clément 13 December 2013 (has links)
Digital circuits are expected to deliver ever-higher performance, and products must be designed as quickly as possible to capture valuable market share. Fast design methods and the use of MPSoCs have made it possible to meet these demands, but without precisely accounting for the impact of circuit aging at design time. MPSoCs, however, are built with the most recent fabrication technologies and are increasingly prone to hardware failures. Today, the prevailing failure mechanisms in MPSoC transistors are HCI and NBTI. Designers therefore add margins so that the circuit remains functional throughout its lifetime, considering the worst case for each mechanism. These margins keep growing and erode the expected performance, which is why future design methods must account for hardware degradation as a function of how the circuit is actually used. In this thesis, we propose an original method to simulate MPSoC aging at a high level of abstraction. The method applies during system design, i.e., between the specification phase and production. An empirical model estimates the timing degradation at the circuit's end of life. A use case is presented for an embedded processor, and results are reported for a set of applications. The proposed solution makes it possible to explore different configurations of an MPSoC architecture and compare their aging, and to identify the application with the most severe impact on aging.
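The empirical degradation model itself is not given in the abstract. As a rough illustration of the kind of high-level estimate involved, the sketch below uses a power-law fit of the form often used for NBTI-style delay shifts; the coefficients `a`, `n` and the `activity` scaling are invented placeholder values, not the thesis's calibrated model.

```python
def delay_degradation(t_stress_s, activity=1.0, a=0.005, n=0.2):
    """Relative path-delay increase after t_stress_s seconds of stress.
    Power-law form commonly used for NBTI fits; a, n and the activity
    factor are illustrative placeholders, not calibrated values."""
    return a * (activity * t_stress_s) ** n

def end_of_life_slack(nominal_delay_ns, clock_ns, years=10.0, **kw):
    """Remaining timing slack after `years` of stress; a negative
    result means the aged critical path violates the clock period."""
    t = years * 365 * 24 * 3600  # product lifetime in seconds
    aged_delay = nominal_delay_ns * (1.0 + delay_degradation(t, **kw))
    return clock_ns - aged_delay

# A path at 0.5 ns under a 1.0 ns clock keeps positive slack after
# ten years with these placeholder coefficients.
print(end_of_life_slack(0.5, 1.0))
```

Comparing this slack across workloads (via the `activity` factor) mirrors the thesis's goal of identifying the application that ages the circuit most.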
102

Vliv přítomnosti proteinu Hsp70 na infekci způsobenou Y virem bramboru / The effect of Hsp70 protein on the infection caused by Potato virus Y

Doričová, Vlasta January 2014 (has links)
Within their natural environment, plants are subjected to combinations of stress conditions. Because of potential interactions between signalling pathways, plants respond to multiple stresses differently than to individual stresses, activating a specific programme. Heat shock proteins (HSP70), overexpressed after heat shock, influence viral infection. On the one hand, HSP70 can participate in refolding aggregated or partially denatured proteins; on the other, HSP70 can interact with viral proteins and facilitate the propagation of viral replication complexes. In this work, the effect on viral infection of a heat shock (42 °C, 2 hours) applied before or after inoculation of Nicotiana tabacum L. cv. Petit Havana SR1 plants with Potato virus Y was studied in two biological experiments. The amounts of PVYNTN coat protein and HSP70 protein were detected alongside activity assays of Hatch-Slack cycle enzymes, glycosidases and peroxidase. Both experimental approaches (heat shock applied before or after inoculation with PVYNTN) increased the amount of virus, and in the second experiment accelerated the development of the infection. The amount of HSP70 increased immediately after the heat shock was applied. The enhancement of HSP70 by viral infection occurred...
103

A Lateglacial Paleofire Record for East-central Michigan

Ballard, Joanne P. 07 October 2009 (has links)
No description available.
104

Skolgårdens gömda platser : En studie över platser för gränsöverskridande handlingar / The hidden places of the schoolyard : a study of sites for rule-breaking actions

Almadani, Haidar January 2016 (has links)
Children spend a lot of time in school. There is plenty of research on safety and unsafety at school, but most studies focus on the classroom situation and on relationships among the children or with the teachers. In this study I shift the focus to the schoolyard, through a study of its hidden places at Höjaskolan in Malmö. I show that these places can be used to break the school's rules, and that this can create unsafety for the children. At the same time, the places serve as the children's own spaces and can also be exciting: here they can test limits, break school rules, be by themselves or spend time with friends. To investigate the places where children hide in order to break rules, I ran a workshop with middle-school pupils on safe and unsafe places in the schoolyard, carried out participant observation by spending breaks with the secondary-school pupils, and drew on my own experience of working in the preschool class at Höjaskolan. To frame the study I use Jeremy Till's concepts of hard space and slack space, which let me identify the characteristic features of these hidden places. I arrive at three types of places where children break rules: stationary places, mobile places, and shifting places that are both stationary and mobile. I also describe which pupils are attracted to these hidden places and which activities the places support.
105

Energy-aware scheduling : complexity and algorithms

Renaud-Goud, Paul 05 July 2012 (has links) (PDF)
In this thesis we have tackled several scheduling problems under energy constraints, since the energy issue is becoming crucial for both economic and environmental reasons. In the first chapter, we exhibit tight bounds on the energy metric of a classical algorithm that minimizes the makespan of independent tasks. In the second chapter, we schedule several independent but concurrent pipelined applications and address problems combining multiple criteria: period, latency and energy. We perform an exhaustive complexity study and describe the performance of new heuristics. In the third chapter, we study the replica placement problem in a tree network, seeking to minimize energy consumption in a dynamic framework. After a complexity study, we confirm the quality of our heuristics through a complete set of simulations. In the fourth chapter, we return to streaming applications, now in the form of series-parallel graphs, and map them onto a chip multiprocessor. The design of a polynomial algorithm for a simple variant allows us to derive heuristics for the most general problem, whose NP-completeness we have proven. In the fifth chapter, we study energy bounds of different routing policies in chip multiprocessors, compared to the classical XY routing, and develop new routing heuristics. In the last chapter, we compare the performance of different algorithms from the literature that map DAG applications so as to minimize energy consumption.
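The abstract does not name the "classical algorithm that minimizes the makespan of independent tasks". Assuming it is greedy (LPT) list scheduling, and using the usual s^alpha dynamic-power model for the energy metric, a minimal sketch of the makespan/energy quantities such an analysis bounds might look like this; the speed and alpha values are illustrative assumptions:

```python
import heapq

def greedy_makespan(task_works, n_procs, speed=1.0, alpha=3):
    """LPT list scheduling: sort tasks by decreasing work, assign each
    to the currently least-loaded processor (a min-heap of loads).
    Energy follows the common s^alpha power model: running work w at
    speed s takes w/s time and costs w * s**(alpha-1) energy."""
    loads = [0.0] * n_procs
    heapq.heapify(loads)
    for w in sorted(task_works, reverse=True):
        load = heapq.heappop(loads)
        heapq.heappush(loads, load + w / speed)
    makespan = max(loads)
    energy = sum(task_works) * speed ** (alpha - 1)
    return makespan, energy

# Five tasks on two processors: LPT yields makespan 7 (optimum is 6),
# illustrating the kind of gap that tight bounds quantify.
print(greedy_makespan([3, 3, 2, 2, 2], 2))
```

At fixed speed the energy is schedule-independent here; the interesting trade-offs the thesis studies arise once per-task speeds (and hence makespan/energy tensions) are allowed to vary.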
106

Residential mortgage loan securitization and the subprime crisis / S. Thomas

Thomas, Soby January 2010 (has links)
Many analysts believe that problems in the U.S. housing market initiated the 2008–2010 global financial crisis. In this regard, the subprime mortgage crisis (SMC) shook the foundations of the financial industry by causing the failure of many iconic Wall Street investment banks and prominent depository institutions. This crisis stymied credit extension to households and businesses thus creating credit crunches and, ultimately, a global recession. This thesis specifically discusses the SMC and its components, causes, consequences and cures in relation to subprime mortgages, securitization, as well as data. In particular, the SMC has highlighted the fact that risk, credit ratings, profit and valuation as well as capital regulation are important banking considerations. With regard to risk, the thesis discusses credit (including counterparty), market (including interest rate, basis, prepayment, liquidity and price), tranching (including maturity mismatch and synthetic), operational (including house appraisal, valuation and compensation) and systemic (including maturity transformation) risks. The thesis introduces the IDIOM hypothesis that postulates that the SMC was largely caused by the intricacy and design of subprime agents, mortgage origination and securitization that led to information problems (loss, asymmetry and contagion), valuation opaqueness and ineffective risk mitigation. It also contains appropriate examples, discussions, timelines as well as appendices about the main results on the aforementioned topics. Numerous references point to the material not covered in the thesis, and indicate some avenues for further research. In the thesis, the primary subprime agents that we consider are house appraisers (HAs), mortgage brokers (MBs), mortgagors (MRs), servicers (SRs), SOR mortgage insurers (SOMIs), trustees, underwriters, credit rating agencies (CRAs), credit enhancement providers (CEPs) and monoline insurers (MLIs). 
Furthermore, the banks that we study are subprime interbank lenders (SILs), subprime originators (SORs), subprime dealer banks (SDBs) and their special purpose vehicles (SPVs) such as Wall Street investment banks and their special structures as well as subprime investing banks (SIBs). The main components of the SMC are MRs, the housing market, SDBs/hedge funds/money market funds/SIBs, the economy as well as the government (G) and central banks. Here, G plays either a regulatory or a policymaking role. Most of the aforementioned agents and banks are assumed to be risk neutral, with SOR being the exception since it can be risk (and regret) averse on occasion. The main aspects of the SMC - subprime mortgages, securitization, as well as data - that we cover in this thesis and the chapters in which they are found are outlined below. In Chapter 2, we discuss the dynamics of subprime SORs' risk and profit as well as their valuation under mortgage origination. In particular, we model subprime mortgages that are able to fully amortize, voluntarily prepay or default and construct a discrete-time model for SOR risk and profit incorporating costs of funds and mortgage insurance as well as mortgage losses. In addition, we show how high loan-to-value ratios due to declining housing prices curtailed the refinancing of subprime mortgages, while low ratios imply favorable house equity for subprime MRs. Chapter 3 investigates the securitization of subprime mortgages into structured mortgage products such as subprime residential mortgage-backed securities (RMBSs) and collateralized debt obligations (CDOs). In this regard, our discussions focus on information, risk and valuation as well as the role of capital under RMBSs and RMBS CDOs. Our research supports the view that incentives to monitor mortgages have been all but removed when changing from a traditional mortgage model to a subprime mortgage model.
In the latter context, we provide formulas for IB's profit and valuation under RMBSs and RMBS CDOs. This is illustrated via several examples. Chapter 3 also explores the relationship between mortgage securitization and capital under Basel regulation and the SMC. This involves studying bank credit and capital under the Basel II paradigm where risk–weights vary. Further issues dealt with are the quantity and pricing of RMBSs, RMBS CDOs as well as capital under Basel regulation. Furthermore, we investigate subprime RMBSs and their rates with slack and holding constraints. Also, we examine the effect of SMC–induced credit rating shocks in future periods on subprime RMBSs and RMBS payout rates. A key problem is whether Basel capital regulation exacerbated the SMC. Very importantly, the thesis answers this question in the affirmative. Chapter 4 explores issues related to subprime data. In particular, we present mortgage and securitization level data and forge connections with the results presented in Chapters 2 and 3. The work presented in this thesis is based on 2 peer–reviewed chapters in books (see [99] and [104]), 2 peer–reviewed international journal articles (see [48] and [101]), and 2 peer–reviewed conference proceeding papers (see [102] and [103]). / Thesis (Ph.D. (Applied Mathematics))--North-West University, Potchefstroom Campus, 2011.
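The loan-to-value (LTV) mechanism described in Chapter 2 is simple arithmetic, illustrated below. The 0.8 refinance cutoff and the dollar figures are invented for the example, not taken from the thesis:

```python
def loan_to_value(balance, house_value):
    """Outstanding loan balance as a fraction of current house value."""
    return balance / house_value

def can_refinance(balance, house_value, max_ltv=0.8):
    """Lenders refuse refinancing above some LTV cutoff; 0.8 is an
    assumed illustrative threshold, not a figure from the thesis."""
    return loan_to_value(balance, house_value) <= max_ltv

# Same $180k balance: a falling house price raises LTV past the
# cutoff, and past 1.0 the mortgagor has negative house equity.
print(loan_to_value(180_000, 200_000))  # refinancing curtailed
print(loan_to_value(180_000, 150_000))  # underwater (LTV > 1)
```

This is exactly the channel the thesis points to: declining house prices mechanically push subprime LTVs up and choke off the refinancing that the subprime model relied on.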
108

Morphodynamics of a bedrock confined estuary and delta: The Skeena River Estuary

Wild, Amanda Lily 07 December 2020 (has links)
Bedrock islands add variation to the estuarine system that results in deviations from typical unconfined estuarine sediment transport patterns. Limited literature exists on the dynamics of seabed morphology, delta formation, sediment divergence patterns, and sedimentary facies classifications of non-fjordic bedrock confined systems. Such knowledge is critical to adequately address coastal management concerns. This research presents insights from the Skeena Estuary, a macrotidal estuary in northwestern Canada with a high fluvial sediment input (21.2-25.5 Mt yr-1). Descriptions of sub-environments, stratification, and sediment accumulation within the Skeena Estuary draw on HydroTrend model outputs of riverine sediment and discharge, Natural Resources Canada radiocarbon-dated sediment cores and grain size samples, and acoustic Doppler current profiler and conductivity-temperature-depth measurements from three field campaigns. Research findings delineate a fragmented delta structure with elongated mudflats and select areas of slope instability. Variations from well-mixed water circulation to lateral stratification govern the slack tide flow transition and sediment transport pathways within seaward and landward passages of the estuary. Fostering a comprehensive understanding of bedrock confined estuary and delta systems has implications for the assessment of coastal management strategies, the productivity of ecological habitats, and the impacts of climate change within coastal areas. / Graduate
109

Efficient Minimum Cycle Mean Algorithms And Their Applications

Supriyo Maji (9158723) 23 July 2020 (has links)
Minimum cycle mean (MCM) is an important concept in directed graphs. From clock-period optimization and timing analysis to layout optimization, minimum cycle mean algorithms have found widespread use in VLSI design optimization. With transistor sizes scaling to 10 nm and below, the complexity and size of systems have grown rapidly over the last decade, so the scalability of these algorithms, in both runtime and memory usage, is important.

Among the few classical MCM algorithms, the algorithm by Young, Tarjan, and Orlin (YTO) has been particularly popular. When implemented with a binary heap, the YTO algorithm has the best runtime performance in practice, although it has a higher asymptotic time complexity than Karp's algorithm. However, because an efficient implementation of YTO relies on data redundancy, its memory usage is higher and can be prohibitive for large problems. A typical implementation of Karp's algorithm can also be memory-hungry. An early-termination technique from Hartmann and Orlin (HO) can be applied directly to Karp's algorithm to improve its runtime and memory usage. Although not as fast as YTO, the HO algorithm uses much less memory. We propose several improvements to the HO algorithm; the resulting algorithm matches YTO's runtime on circuit graphs and dense random graphs while using less memory than HO.

Minimum balancing of a directed graph is an application of the minimum cycle mean algorithm. Minimum balance algorithms have been used to optimally distribute slack to mitigate process-variation-induced timing violations in clock networks. In a conventional minimum balance algorithm, the principal subroutine finds the MCM of a graph: the algorithm iteratively computes the minimum cycle mean and the corresponding minimum-mean cycle, then uses them to update the graph by changing edge weights and reducing its size, terminating when the graph is a single node. Studies have shown that the bottleneck of this iterative process is the graph update, as previous approaches updated the entire graph. We propose an improvement that changes fewer edge weights in each iteration, resulting in better efficiency.

We also apply the minimum cycle mean algorithm to latency-insensitive system design. Timing violations can occur in high-performance communication links in systems-on-chip (SoCs) at late stages of the physical design process. To address them, latency-insensitive systems (LISs) pipeline the communication channels by inserting relay stations. Although the functionality of a LIS is robust to communication latencies, such insertion can degrade system throughput. Earlier studies have shown that properly sizing the buffer queues after relay-station insertion can eliminate this performance loss; however, solving the maximum-performance buffer-sizing problem with mixed integer linear programming (MILP) does not scale. We formulate the problem as a parameterized graph optimization in which every communication channel is a parameterized edge whose weight is the buffer count, then use a minimum cycle mean algorithm to determine from which edges buffers can be removed safely without creating negative cycles, iterating in the same style as the minimum balance algorithm. Experimental results suggest that the proposed approach is scalable, and the quality of its solutions is observed to be as good as that of the MILP-based approach.
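Karp's algorithm, the classical baseline mentioned above, can be stated compactly: with D_k(v) the minimum weight of a k-edge walk ending at v (from a virtual source linked to every node), the MCM equals min over v of max over k of (D_n(v) - D_k(v)) / (n - k). A straightforward sketch follows; its O(n^2) table is exactly the memory footprint that the HO early-termination technique tries to reduce.

```python
def min_cycle_mean(n, edges):
    """Karp's algorithm. n: node count; edges: list of (u, v, weight).
    Returns the minimum mean weight over all directed cycles,
    or infinity if the graph is acyclic. O(n*m) time, O(n^2) space."""
    INF = float("inf")
    # D[k][v] = minimum weight of a k-edge walk ending at v, starting
    # from a virtual source with zero-weight edges to every node.
    D = [[INF] * n for _ in range(n + 1)]
    for v in range(n):
        D[0][v] = 0.0
    for k in range(1, n + 1):
        for u, v, w in edges:
            if D[k - 1][u] + w < D[k][v]:
                D[k][v] = D[k - 1][u] + w
    best = INF
    for v in range(n):
        if D[n][v] == INF:
            continue  # no n-edge walk ends here
        worst = max((D[n][v] - D[k][v]) / (n - k)
                    for k in range(n) if D[k][v] < INF)
        best = min(best, worst)
    return best

# Two cycles: 0->1->0 (mean 1) and 0->1->2->0 (mean 2); MCM is 1.
edges = [(0, 1, 1.0), (1, 0, 1.0), (1, 2, 2.0), (2, 0, 3.0)]
print(min_cycle_mean(3, edges))
```

The full D table is what makes Karp memory-hungry; YTO instead maintains a tree and heap over the parametric shortest paths, trading memory redundancy for practical speed.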
110

Energy-aware scheduling : complexity and algorithms / Ordonnancement sous contrainte d'énergie : complexité et algorithmes

Renaud-Goud, Paul 05 July 2012 (has links)
In this thesis we have tackled several scheduling problems under energy constraints, since reducing energy has become a necessity for both economic and environmental reasons. In the first chapter, we exhibit tight bounds on the energy metric of a classical algorithm that minimizes the makespan of independent tasks. In the second chapter, we schedule several independent but concurrent pipelined streaming applications and address problems combining multiple criteria: period, latency and energy. We perform an exhaustive complexity study and describe the performance of new heuristics. In the third chapter, we study the replica placement problem in a tree network, seeking to minimize energy consumption in a dynamic framework. After a complexity study, we confirm the quality of our heuristics through a complete set of simulations. In the fourth chapter, we return to streaming applications, now in the form of series-parallel graphs, and map them onto a chip multiprocessor. The design of a polynomial algorithm for a simple variant allows us to derive heuristics for the most general problem, whose NP-completeness we have established. In the fifth chapter, we study energy bounds of different routing policies in chip multiprocessors, compared to the classical XY routing, and develop new routing heuristics. In the last chapter, we experimentally study, on real machines, the mapping of DAG applications, comparing the performance of different algorithms from the literature that minimize energy consumption.
