911

Bioelectric control of prosthesis.

January 1966 (has links)
Based on a thesis in Electrical Engineering, 1965. / Bibliography: p.79-86. / Contract DA-36-039-AMC-03200(E).
912

Specialization of Perceptual Processes

Horswill, Ian 22 April 1995 (has links)
In this report, I discuss the use of vision to support concrete, everyday activity. I will argue that a variety of interesting tasks can be solved using simple and inexpensive vision systems. I will provide a number of working examples in the form of a state-of-the-art mobile robot, Polly, which uses vision to give primitive tours of the seventh floor of the MIT AI Laboratory. By current standards, the robot has a broad behavioral repertoire and is both simple and inexpensive (the complete robot was built for less than $20,000 using commercial board-level components). The approach I will use is to treat the structure of the agent's activity (its task and environment) as positive resources for the vision system designer. By performing a careful analysis of task and environment, the designer can determine a broad space of mechanisms which can perform the desired activity. My principal thesis is that for a broad range of activities, the space of applicable mechanisms will be broad enough to include a number of mechanisms which are simple and economical. The simplest mechanisms that solve a given problem will typically be quite specialized to that problem. One thus worries that building simple vision systems will require a great deal of ad hoc engineering that cannot be transferred to other problems. My second thesis is that specialized systems can be analyzed and understood in a principled manner, one that allows general lessons to be extracted from specialized systems. I will present a general approach to analyzing specialization through the use of transformations that provably improve performance. By demonstrating a sequence of transformations that derive a specialized system from a more general one, we can summarize the specialization of the former in a compact form that makes explicit the additional assumptions that it makes about its environment. The summary can be used to predict the performance of the system in novel environments. Individual transformations can be recycled in the design of future systems.
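A hedged, minimal sketch (not code from the thesis) of the kind of specialized, low-cost vision habit the abstract argues for: under the assumption of a largely textureless floor, obstacle pixels in the lower half of a grayscale frame can be flagged by local intensity variance alone. The window size, threshold, and synthetic frame below are invented for illustration.

```python
import numpy as np

def obstacle_mask(gray, window=5, var_threshold=40.0):
    """Flag pixels whose local intensity variance exceeds a threshold.

    Assumes the floor is roughly uniform in appearance, so textured regions
    in the lower half of the image are treated as obstacles. `window` and
    `var_threshold` are illustrative, hand-tuned values.
    """
    h, _ = gray.shape
    lower = gray[h // 2:, :].astype(float)   # only the floor region matters
    mask = np.zeros(lower.shape, dtype=bool)
    r = window // 2
    for y in range(r, lower.shape[0] - r):
        for x in range(r, lower.shape[1] - r):
            patch = lower[y - r:y + r + 1, x - r:x + r + 1]
            mask[y, x] = patch.var() > var_threshold
    return mask

# Synthetic frame: a uniform floor with one textured region standing in for an obstacle.
frame = np.full((120, 160), 128, dtype=np.uint8)
rng = np.random.default_rng(0)
frame[90:110, 60:100] = rng.integers(0, 255, size=(20, 40))
print(obstacle_mask(frame).sum(), "pixels flagged as obstacle")
```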
913

Gene expression and BSE progression in beef cattle

Bartusiak, Robert 11 1900 (has links)
Bovine Spongiform Encephalopathy (BSE) belongs to a group of neurodegenerative diseases known as transmissible spongiform encephalopathies (TSEs), which affect many species. Since 1986, more than 184,000 cattle in the UK have been confirmed to be infected with this disease, and in Canada total losses to the economy have reached $6 billion. This study examines gene expression in three major innate immunity components (the complement system, toll-like receptors, and interleukins) and in selected proteins of their signaling pathways. Quantitative real-time polymerase chain reaction analyses were performed on caudal medulla samples to identify differentially expressed genes between non-exposed and orally challenged animals. In general, immune genes were down-regulated in comparison to non-challenged animals during the first 12 months of disease, with a tendency to be up-regulated at the terminal stage of BSE. The results from this study provide a basis for further research on the mechanisms modifying immune responses and altering progression of the disease. / Animal Science
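The abstract does not give its analysis pipeline; as a generic, hedged illustration of how qRT-PCR differential expression is commonly quantified, the sketch below applies the standard 2^-ΔΔCt (Livak) calculation to invented Ct values for a hypothetical immune gene.

```python
def relative_expression(ct_target_test, ct_ref_test, ct_target_ctrl, ct_ref_ctrl):
    """2^-ddCt fold change of a target gene, normalized to a reference gene."""
    d_ct_test = ct_target_test - ct_ref_test    # normalize the challenged sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl    # normalize the control sample
    dd_ct = d_ct_test - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# Invented Ct values for an immune gene in challenged vs. non-challenged tissue.
fold = relative_expression(ct_target_test=26.1, ct_ref_test=18.0,
                           ct_target_ctrl=24.9, ct_ref_ctrl=18.2)
print(f"fold change = {fold:.2f}")  # a value below 1 indicates down-regulation
```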
914

Acceleration of Transient Stability Simulation for Large-Scale Power Systems on Parallel and Distributed Hardware

Jalili-Marandi, Vahid 11 1900 (has links)
Transient stability analysis is necessary for the planning, operation, and control of power systems. However, its mathematical modeling and time-domain solution are computationally onerous and have attracted the attention of power systems experts and simulation specialists for decades. The ultimate goal has always been to perform this simulation as fast as real-time for realistic-sized systems. In this thesis, methods to speed up transient stability simulation for large-scale power systems are investigated. The research reported in this thesis can be divided into two parts. First, real-time simulation on a general-purpose simulator composed of CPU-based computational nodes is considered. A novel approach called Instantaneous Relaxation (IR) is proposed for real-time transient stability simulation on such a simulator. The motivation for this technique comes from the inherent parallelism in the transient stability problem, which allows a coarse-grained decomposition of the resulting system equations. Comparison of the real-time results with the off-line results shows both the accuracy and the efficiency of the proposed method. In the second part of this thesis, Graphics Processing Units (GPUs) are used for the first time for the transient stability simulation of power systems. Data-parallel programming techniques are used on the single-instruction multiple-data (SIMD) architecture of the GPU to implement the transient stability simulations. Several test cases of varying sizes are used to investigate the GPU-based simulation. The simulation results reveal the clear advantage of using GPUs instead of CPUs for large-scale problems. The second part of the thesis then investigates the application of multiple GPUs running in parallel. Two different parallel-processing-based techniques are implemented: the IR method and an incomplete-LU-factorization-based approach. Practical information is provided on how to use multi-threaded programming to manage multiple GPUs running simultaneously for the implementation of the transient stability simulation. The implementation of the IR method on multiple GPUs combines data parallelism and program-level parallelism, which makes possible the simulation of very large-scale systems with 7020 buses and 1800 synchronous generators. / Energy Systems
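As a hedged illustration of the core computation being accelerated (a toy model, not the thesis's Instantaneous Relaxation or GPU code), the sketch below integrates the classical swing equation for a single machine connected to an infinite bus through a brief fault; all machine parameters are illustrative.

```python
import math

# Classical single-machine-infinite-bus swing equation:
#   M * d2(delta)/dt2 = Pm - Pmax * sin(delta) - D * d(delta)/dt
M, D = 0.1, 0.02            # inertia and damping (illustrative, per-unit)
Pm, Pmax = 0.8, 1.8         # mechanical input and maximum electrical power
delta = math.asin(Pm / Pmax)  # start at the pre-fault equilibrium angle
omega = 0.0
dt = 0.001

for step in range(5000):    # 5 s of simulated time
    if 1.0 <= step * dt < 1.1:      # a 100 ms fault that blocks power transfer
        Pe = 0.0
    else:
        Pe = Pmax * math.sin(delta)
    domega = (Pm - Pe - D * omega) / M
    omega += domega * dt
    delta += omega * dt

print(f"final rotor angle = {math.degrees(delta):.1f} deg")
```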
915

Challenges for the Accurate Determination of the Surface Thermal Condition via In-Depth Sensor Data

Elkins, Bryan Scott 01 August 2011 (has links)
The overall goal of this work is to provide a systematic methodology by which the difficulties associated with the inverse heat conduction problem (IHCP) can be resolved. To this end, two inverse heat conduction methods are presented. First, a space-marching IHCP method (discrete space, discrete time) utilizing a Gaussian low-pass filter for regularization is studied. The stability and accuracy of this inverse prediction are demonstrated to be more sensitive to the temporal mesh than to the spatial mesh. The second inverse heat conduction method presented aims to eliminate this feature by employing a global time, discrete space inverse solution methodology. The novel treatment of the temporal derivative in the heat equation, combined with the global time Gaussian low-pass filter, provides the regularization required for stable, accurate results. A physical experiment used as a test bed for validation of the numerical methods described herein is also presented. The physics of installed thermocouple sensors is outlined, and loop-current step response (LCSR) is employed to measure and correct for the delay and attenuation characteristics of the sensors. A new technique for the analysis of LCSR data is presented, and excellent agreement is observed between this model and the data. The space-marching method, global time method, and a new calibration integral method are employed to analyze the experimental data. First, data from only one probe are used, which limits the results to the case of a semi-infinite medium. Next, data from two probes at different depths are used in the inverse analysis, which enables generalization of the results to domains of finite width. For both one- and two-probe analyses, excellent agreement is found between the actual surface heat flux and the inverse predictions. The most accurate inverse technique is shown to be the calibration integral method, which is presently restricted to one-probe analysis. It is postulated that the accuracy of the global time method could be improved if the required higher time derivatives of temperature data could be more accurately measured. Some preliminary work in obtaining these higher time derivatives of temperature from a voltage-rate interface used in conjunction with the thermocouple calibration curve is also presented.
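A hedged sketch of the regularization idea mentioned above, not the thesis's solver: smoothing a noisy in-depth temperature history with a Gaussian low-pass filter before differencing it, since differentiating raw sensor data amplifies noise. The sampling step, kernel width, and signal are invented.

```python
import numpy as np

def gaussian_lowpass(signal, dt, sigma_t):
    """Smooth a uniformly sampled signal with a Gaussian kernel of width sigma_t (seconds)."""
    half = int(4 * sigma_t / dt)
    t = np.arange(-half, half + 1) * dt
    kernel = np.exp(-0.5 * (t / sigma_t) ** 2)
    kernel /= kernel.sum()                       # unit-gain filter
    return np.convolve(signal, kernel, mode="same")

dt = 0.01
time = np.arange(0.0, 10.0, dt)
clean = 20.0 + 5.0 * np.sin(0.5 * time)          # illustrative temperature history
noisy = clean + np.random.default_rng(1).normal(0, 0.2, time.size)

smoothed = gaussian_lowpass(noisy, dt, sigma_t=0.2)
dTdt_raw = np.gradient(noisy, dt)                # noise-amplified derivative
dTdt_reg = np.gradient(smoothed, dt)             # regularized derivative
print(np.std(dTdt_raw), np.std(dTdt_reg))        # the filtered derivative is far quieter
```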
916

A single-chip real-time range finder

Chen, Sicheng 30 September 2004 (has links)
Range finders are widely used in various industrial applications, such as machine vision, collision avoidance, and robotics. Presently, most range finders rely either on active transmitters or on sophisticated mechanical controllers and powerful processors to extract range information, which makes them costly, bulky, or slow and limits their applications. This dissertation is a detailed description of a real-time vision-based range sensing technique and its single-chip CMOS implementation. To the best of our knowledge, this system is the first single-chip vision-based range finder that does not need any mechanical position adjustment, memory, or digital processor. The entire signal processing on the chip is purely analog and occurs in parallel. The chip captures the image of an object and extracts the depth and range information from just a single picture. The on-chip, continuous-time, logarithmic photoreceptor circuits are used to couple spatial image signals into the range-extracting processing network. The photoreceptor pixels can adjust their operating regions, simultaneously achieving high sensitivity and wide dynamic range. The image sharpness processor and Winner-Take-All circuits are characterized and analyzed carefully for their temporal bandwidth and detection performance. The mathematical and optical models of the system are built and carefully verified. A prototype based on this technique has been fabricated and tested. The experimental results prove that the range finder can achieve acceptable range sensing precision with low cost and excellent speed performance in short-to-medium range coverage. Therefore, it is particularly useful for collision avoidance.
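As a loose software analogue of the sharpness-plus-Winner-Take-All idea (the chip performs this in parallel analog circuitry), the sketch below scores a small focus stack by gradient energy and lets a winner-take-all selection pick the best-focused frame, which a calibration table would map to range; the stack, the contrast stand-in for defocus, and the distances are all invented.

```python
import numpy as np

def sharpness(patch):
    """Gradient-energy sharpness score of an image patch."""
    gy, gx = np.gradient(patch.astype(float))
    return float((gx ** 2 + gy ** 2).mean())

def winner_take_all(scores):
    """Return the index of the strongest input, as a WTA circuit would."""
    return int(np.argmax(scores))

# Hypothetical focus stack: the same scene region imaged at several lens settings,
# each setting corresponding to a known calibration distance.
rng = np.random.default_rng(2)
scene = rng.normal(size=(32, 32))
# Crude stand-in for defocus: lower contrast mimics reduced gradient energy.
stack = [scene * contrast for contrast in (0.2, 0.6, 1.0, 0.5)]
distances_m = [0.5, 1.0, 2.0, 4.0]               # illustrative calibration table

best = winner_take_all([sharpness(img) for img in stack])
print(f"estimated range ~ {distances_m[best]} m")
```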
917

Essays on Systematic and Unsystematic Monetary and Fiscal Policies

Cimadomo, Jacopo 24 September 2008 (has links)
The active use of macroeconomic policies to smooth economic fluctuations and, as a consequence, the stance that policymakers should adopt over the business cycle, remain controversial issues in the economic literature. In the light of the dramatic experience of the early 1930s' Great Depression, Keynes (1936) argued that the market mechanism could not be relied upon to spontaneously recover from a slump, and advocated counter-cyclical public spending and monetary policy to stimulate demand. Although the Keynesian doctrine largely influenced policymaking during the two decades following World War II, it began to be seriously challenged in several directions from the start of the 1970s. The introduction of rational expectations within macroeconomic models implied that aggregate demand management could not stabilize the economy's responses to shocks (see in particular Sargent and Wallace (1975)). According to this view, in fact, rational agents foresee the effects of the implemented policies, and wage and price expectations are revised upwards accordingly. Therefore, real wages and money balances remain constant and so does output. Within such a conceptual framework, only unexpected policy interventions would have some short-run effects upon the economy. The "real business cycle (RBC) theory", pioneered by Kydland and Prescott (1982), offered an alternative explanation of the nature of fluctuations in economic activity, viewed as reflecting the efficient responses of optimizing agents to exogenous sources of fluctuations, outside the direct control of policymakers. The normative implication was that there should be no role for economic policy activism: fiscal and monetary policy should be acyclical. The latest generation of New Keynesian dynamic stochastic general equilibrium (DSGE) models builds on rigorous foundations in intertemporal optimizing behavior by consumers and firms inherited from the RBC literature, but incorporates some frictions in the adjustment of nominal and real quantities in response to macroeconomic shocks (see Woodford (2003)). In such a framework, not only may policy "surprises" have an impact on economic activity, but the way policymakers "systematically" respond to exogenous sources of fluctuation also plays a fundamental role, thereby rekindling interest in the use of counter-cyclical stabilization policies to fine-tune the business cycle. Yet, despite impressive advances in economic theory and econometric techniques, there are no definitive answers about the systematic stance policymakers should follow, or about the effects of macroeconomic policies upon the economy. Against this background, the present thesis attempts to inspect the interrelations between macroeconomic policies and economic activity from novel angles. Three contributions are proposed. In the first Chapter, I show that relying on the information actually available to policymakers when budgetary decisions are taken is of fundamental importance for the assessment of the cyclical stance of governments. In the second, I explore whether the effectiveness of fiscal shocks in spurring economic activity has declined since the beginning of the 1970s. In the third, the impact of systematic monetary policies on U.S. industrial sectors is investigated.
In the existing literature, empirical assessments of the historical stance of policymakers over the economic cycle have been mainly drawn from the estimation of "reduced-form" policy reaction functions (see in particular Taylor (1993) and Galí and Perotti (2003)). Such rules typically relate a policy instrument (a reference short-term interest rate or an indicator of discretionary fiscal policy) to a set of explanatory variables (notably inflation, the output gap and, as far as fiscal policy is concerned, the debt-GDP ratio). Although these policy rules can be seen as simple approximations of what would be derived from an explicit optimization problem solved by social planners (see Kollmann (2007)), they have received considerable attention since they proved to track the behavior of central banks and fiscal policymakers relatively well. Typically, revised data, i.e. observations available to the econometrician when the study is carried out, are used in the estimation of such policy reaction functions. However, data available in "real time" to policymakers may turn out to be remarkably different from what is observed ex post. Orphanides (2001), in an innovative and thought-provoking paper on U.S. monetary policy, challenged the way policy evaluation had been conducted until then by showing that unrealistic assumptions about the timeliness of data availability may yield misleading descriptions of historical policy. In the spirit of Orphanides (2001), in the first Chapter of this thesis I reconsider how the intentional cyclical stance of fiscal authorities should be assessed. Importantly, in the framework of fiscal policy rules, not only are variables such as potential output and the output gap subject to measurement error, but so is the main discretionary "operating instrument" in the hands of governments: the structural budget balance, i.e. the headline government balance net of the effects due to automatic stabilizers. In fact, the actual realization of planned fiscal measures may depend on several factors (such as the growth rate of GDP and the implementation lags that often follow the adoption of many policy measures) outside the direct and full control of fiscal authorities. Hence, there might be sizeable differences between discretionary fiscal measures as planned in the past and what is observed ex post. Note that this does not apply to monetary policy, since central bankers can control their operating interest rates with great accuracy. When the historical behavior of fiscal authorities is analyzed from a real-time perspective, it emerges that the intentional stance has been counter-cyclical, especially during expansions, in the main OECD countries throughout the last thirteen years. This is at odds with findings based on revised data, which generally point to pro-cyclicality (see for example Gavin and Perotti (1997)). It is shown that empirical correlations among revision errors and other second-order moments make it possible to predict the size and sign of the bias incurred in estimating the intentional policy stance when revised data are (mistakenly) used. In addition, formal tests, based on a refinement of Hansen (1999), do not reject the hypothesis that the intentional reaction of fiscal policy to the cycle is characterized by two regimes: one counter-cyclical, when output is above its potential level, and the other acyclical, in the opposite case. On the contrary, the use of revised data does not allow the identification of any threshold effect.
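As a hedged, stylized illustration of the "reduced-form" fiscal reaction functions discussed above (not the chapter's actual specification, data, or real-time treatment), the sketch below estimates by ordinary least squares the response of the change in a structural budget balance to a lagged output gap and debt ratio, using invented numbers.

```python
import numpy as np

# Stylized fiscal rule: d(structural balance) = a + b * output_gap(-1) + c * debt(-1) + e
# A positive b would indicate a counter-cyclical intentional stance (tightening in good times).
rng = np.random.default_rng(3)
T = 30                                              # invented annual sample
gap = rng.normal(0.0, 1.5, T)                       # lagged output gap (% of potential)
debt = 60 + np.cumsum(rng.normal(0.0, 1.0, T))      # lagged debt-to-GDP ratio
d_balance = 0.3 * gap - 0.02 * (debt - 60) + rng.normal(0.0, 0.4, T)

X = np.column_stack([np.ones(T), gap, debt])        # constant, gap, debt regressors
coef, *_ = np.linalg.lstsq(X, d_balance, rcond=None)
print(f"estimated response to the output gap: {coef[1]:+.2f}")
```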
The second and third Chapters of this thesis are devoted to exploring the impact of fiscal and monetary policies upon the economy. In recent years, two main approaches have been followed by practitioners for estimating the effects of macroeconomic policies on real activity. On the one hand, calibrated and estimated DSGE models make it possible to trace out the economy's responses to policy disturbances within an analytical framework derived from solid microeconomic foundations. On the other, vector autoregressive (VAR) models continue to be widely used since they have proved to fit macro data particularly well, although they cannot fully serve to inspect structural interrelations among economic variables. Yet, the typical DSGE and VAR models are designed to handle a limited number of variables and are not suitable for addressing economic questions potentially involving a large amount of information. In a DSGE framework, in fact, identifying aggregate shocks and their propagation mechanism under a plausible set of theoretical restrictions becomes a thorny issue when many variables are considered. As for VARs, estimation problems may arise when models are specified with a large number of indicators (although recent contributions suggest that large-scale Bayesian VARs perform surprisingly well in forecasting; see in particular Banbura, Giannone and Reichlin (2007)). As a consequence, the growing popularity of factor models as effective econometric tools for summarizing large amounts of information in a parsimonious and flexible manner may be explained not only by their usefulness in deriving business cycle indicators and forecasting (see for example Reichlin (2002) and D'Agostino and Giannone (2006)), but also, owing to recent developments, by their ability to evaluate the response of economic systems to identified structural shocks (see Giannone, Reichlin and Sala (2002) and Forni, Giannone, Lippi and Reichlin (2007)). In parallel, some attempts have been made to combine the rigor of DSGE models and the tractability of VAR ones with the advantages of factor analysis (see Boivin and Giannoni (2006) and Bernanke, Boivin and Eliasz (2005)). The second Chapter of this thesis, based on a joint work with Agnès Bénassy-Quéré, presents an original study combining factor and VAR analysis in an encompassing framework to investigate how "unexpected" and "unsystematic" variations in taxes and government spending feed through the economy in the home country and abroad. The domestic impact of fiscal shocks in Germany, the U.K. and the U.S. and cross-border fiscal spillovers from Germany to seven European economies are analyzed. In addition, the time evolution of domestic and cross-border tax and spending multipliers is explored. In fact, the way fiscal policy impacts domestic and foreign economies depends on several factors, possibly changing over time. In particular, the presence of excess capacity, accommodating monetary policy, distortionary taxation and liquidity-constrained consumers plays a prominent role in affecting how fiscal policies stimulate economic activity in the home country. The impact on foreign output crucially depends on the importance of trade links, on real exchange rates and, in a monetary union, on the sensitivity of foreign economies to the common interest rate. It is well documented that the last thirty years have witnessed frequent changes in the economic environment.
For instance, in most OECD countries, the monetary policy stance became less accommodating in the 1980s compared to the 1970s, and more accommodating again in the late 1990s and early 2000s. Moreover, financial markets have been heavily deregulated. Hence, fiscal policy might have lost (or gained) power as a stimulating tool in the hands of policymakers. Importantly, the issue of cross-border transmission of fiscal policy decisions is of the utmost relevance in the framework of the European Monetary Union, and this explains why the debate on fiscal policy coordination has received so much attention since the adoption of the single currency (see Ahearne, Sapir and Véron (2006) and European Commission (2006)). It is found that over the period 1971 to 2004 tax shocks have generally been more effective in spurring domestic output than government spending shocks. Interestingly, the inclusion of common factors representing global economic phenomena yields smaller multipliers, reconciling, at least for the U.K., the evidence from large-scale macroeconomic models, which generally find feeble multipliers (see e.g. the European Commission's QUEST model), with that from a prototypical structural VAR, which points to stronger effects of fiscal policy. When the estimation is performed recursively over samples of seventeen years of data, it emerges that GDP multipliers have dropped drastically from the early 1990s on, especially in Germany (tax shocks) and in the U.S. (both tax and government spending shocks). Moreover, the conduct of fiscal policy seems to have become less erratic, as documented by a lower variance of fiscal shocks over time, and this might help explain why business cycles have shown less volatility in the countries under examination. Expansionary fiscal policies in Germany do not generally have beggar-thy-neighbor effects on other European countries. In particular, our results suggest that tax multipliers have been positive but vanishing for neighboring countries (France, Italy, the Netherlands, Belgium and Austria), and weak and mostly insignificant for more remote ones (the U.K. and Spain). Cross-border government spending multipliers are found to be uniformly weak for all the subsamples considered. Overall, these findings suggest that fiscal "surprises", in the form of unexpected reductions in taxation and expansions in government consumption and investment, have become progressively less successful in stimulating economic activity at the domestic level, indicating that, in the framework of the European Monetary Union, policymakers can only marginally rely on this discretionary instrument as a substitute for national monetary policies. The objective of the third chapter is to inspect the role of monetary policy in the U.S. business cycle. In particular, the effects of "systematic" monetary policies upon several industrial sectors are investigated. The focus is on the systematic, or endogenous, component of monetary policy (i.e. the one which is related to economic activity in a stable and predictable way), for three main reasons. First, endogenous monetary policies are likely to have sizeable real effects if agents' expectations are not perfectly rational and if there are some nominal and real frictions in markets. Second, as widely documented, the variability of the monetary instrument and of the main macro variables is only marginally explained by monetary "shocks", defined as unexpected and exogenous variations in monetary conditions.
Third, monetary shocks can simply be interpreted as measurement errors (see Christiano, Eichenbaum and Evans (1998)). Hence, the systematic component of monetary policy is likely to have played a fundamental role in affecting business cycle fluctuations. The strategy for isolating the impact of systematic policies relies on a counterfactual experiment within a (calibrated or estimated) macroeconomic model. As a first step, a macroeconomic shock to which monetary policy is likely to respond should be selected, and its effects upon the economy simulated. Then, the impact of such a shock should be evaluated under a "policy-inactive" scenario, assuming that the central bank does not respond to it. Finally, by comparing the responses of the variables of interest under these two scenarios, some evidence on the sensitivity of the economic system to the endogenous component of the policy can be drawn (see Bernanke, Gertler and Watson (1997)). This kind of exercise is first proposed within a stylized DSGE model, for which the analytical solution can be derived. However, as argued, large-scale multi-sector DSGE models can be solved only numerically, implying that the proposed experiment cannot be carried out in that setting. Moreover, the estimation of DSGE models becomes a thorny issue when many variables are incorporated (see Canova and Sala (2007)). For these reasons, a less "structural" but more tractable approach is followed, in which a minimal amount of identifying restrictions is imposed. In particular, a factor model econometric approach is adopted (see in particular Giannone, Reichlin and Sala (2002) and Forni, Giannone, Lippi and Reichlin (2007)). In this framework, I develop a technique to perform the counterfactual experiment needed to assess the impact of systematic monetary policies. It is found that 2- and 3-digit SIC U.S. industries are characterized by very heterogeneous degrees of sensitivity to the endogenous component of the policy. Notably, the industries showing the strongest sensitivities are the ones producing durable goods and metallic materials. Non-durable goods producers and the food, textile and lumber producing industries are the least affected. In addition, it is highlighted that industrial sectors that adjust prices relatively infrequently are the most "vulnerable" ones. In fact, firms in this group are likely to increase quantities, rather than prices, following a shock that hits the economy positively. Finally, it emerges that sectors characterized by a greater recourse to external sources of financing for investment, and sectors investing relatively more in new plant and machinery, are the most affected by endogenous monetary actions.
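A hedged toy version of the counterfactual experiment described above, far simpler than the factor-model implementation in the chapter: a unit shock is traced through a two-equation system once with a systematic policy rule active and once with the policy instrument held fixed, and the gap between the two output paths measures the contribution of the endogenous policy component. All coefficients are invented.

```python
def impulse_response(policy_active, horizon=12, rho=0.7, beta=0.4, phi=0.8):
    """Toy impulse response of output to a unit demand shock.

    y_t = rho * y_{t-1} - beta * r_t + shock_t
    r_t = phi * y_{t-1}   (systematic policy rule; zero when inactive)
    All parameters are illustrative.
    """
    y_prev, path = 0.0, []
    for t in range(horizon):
        shock = 1.0 if t == 0 else 0.0
        r = phi * y_prev if policy_active else 0.0
        y = rho * y_prev - beta * r + shock
        path.append(y)
        y_prev = y
    return path

with_policy = impulse_response(True)
without_policy = impulse_response(False)
# The gap between the two paths is the effect attributable to the systematic policy component.
print([round(a - b, 3) for a, b in zip(without_policy, with_policy)])
```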
918

Automatic near real-time characterisation of large earthquakes

Rößler, Dirk, Krüger, Frank, Ohrnberger, Matthias, Ehlert, Lutz January 2008 (has links)
Since 2008, the University of Potsdam has applied an automated procedure to determine the rupture parameters of large earthquakes in near real time, i.e. a few minutes after the event, and to make them publicly available via the Internet. The system is intended to be integrated into the German-Indonesian Tsunami Early Warning System (GITEWS), for which it is specifically configured. In particular, we determine the duration and spatial extent of the earthquake as well as its rupture velocity and direction. For this we use seismograms of the first-arriving P waves recorded by broadband stations at teleseismic distances from the event, together with conventional array techniques in partly modified form. Semblance is used as a similarity measure to compare seismograms within a station network. In the case of an earthquake, the semblance, evaluated with respect to the hypocentre at the origin time and during the rupture process, is clearly elevated and concentrated in time and space. By combining the results of several station networks, we achieve independence from the source characteristics and a spatio-temporal resolution that allows the above-mentioned parameters to be derived. In our contribution we outline the method. Using the two M8.0 Bengkulu earthquakes (Sumatra, Indonesia) of 12 September 2007 and the M8.0 Sichuan event (China) of 12 May 2008, we demonstrate the achievable resolution and compare the results of the automated real-time application with subsequent offline computations. We also provide a web page that presents and animates the results; it can be displayed, for example, on computer terminals in geoscience institutions. The web pages have the following addresses: http://www.geo.uni-potsdam.de/arbeitsgruppen/Geophysik_Seismologie/forschung/ruptrack/openday http://www.geo.uni-potsdam.de/arbeitsgruppen/Geophysik_Seismologie/forschung/ruptrack
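As a hedged illustration of the semblance measure used above (not the authors' GITEWS implementation), the sketch below computes the semblance of a set of already time-aligned array traces; values near 1 indicate coherent signals, values near 1/N incoherent noise. The synthetic wavelet and noise levels are invented.

```python
import numpy as np

def semblance(traces):
    """Semblance of time-aligned traces (N x T array).

    S = sum_t (sum_i x_i(t))^2 / (N * sum_t sum_i x_i(t)^2); S approaches 1 for
    identical traces and roughly 1/N for incoherent noise.
    """
    stack = traces.sum(axis=0)
    num = (stack ** 2).sum()
    den = traces.shape[0] * (traces ** 2).sum()
    return num / den

# Synthetic example: the same wavelet on 5 stations, with and without coherence.
rng = np.random.default_rng(4)
t = np.linspace(0, 1, 200)
wavelet = np.exp(-((t - 0.5) / 0.05) ** 2)
coherent = np.tile(wavelet, (5, 1)) + 0.05 * rng.normal(size=(5, 200))
incoherent = rng.normal(size=(5, 200))
print(semblance(coherent), semblance(incoherent))
```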
919

Resource-Predictable and Efficient Monitoring of Events

Mellin, Jonas January 2004 (has links)
We present a formally specified event specification language (Solicitor). Solicitor is suitable for real-time systems, since it results in resource-predictable and efficient event monitors. In event monitoring, event expressions defined in an event specification language control the monitoring by matching incoming streams of event occurrences against the event expressions. When an event expression has a complete set of matching event occurrences, the event type that this expression defines has occurred. Each event expression is specified by combining contributing event types with event operators such as sequence, conjunction, and disjunction; contributing event types may be primitive, representing happenings of interest in a system, or composite, specified by event expressions. The formal specification of Solicitor is based on a formal schema that separates two important aspects of an event expression; these aspects are event operators and event contexts. The event operators aspect addresses the relative constraints between contributing event occurrences, whereas the event contexts aspect addresses the selection of event occurrences from an event stream with respect to event occurrences that are used or invalidated during event monitoring. The formal schema also contains an abstract model of event monitoring. Given this formal specification, we present realization issues, a time-complexity study, and a proof of the limited resource requirements of event monitoring. We propose an architecture for resource-predictable and efficient event monitoring. In particular, this architecture meets the requirements of real-time systems by defining how event monitoring and tasks are associated. A declarative way of specifying this association is proposed within our architecture. Moreover, an efficient memory management scheme for event composition is presented. This scheme meets the requirements of event monitoring in distributed systems. This architecture has been validated by implementing an executable component prototype that is part of the DeeDS prototype. The results of the time complexity study are validated by experiments. Our experiments corroborate the theory in terms of complexity classes of event composition in different event contexts. However, the experimental platform is not representative of operational real-time systems and, thus, the constants derived from our experiments cannot be used for such systems.
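A hedged, much-simplified illustration of event composition (Solicitor itself is formally specified and considerably richer): the sketch below monitors a sequence event (A;B) in a chronicle-like event context, where each B occurrence completes the composite event with the oldest unconsumed A. The event type names and stream are invented.

```python
from collections import deque

class SequenceMonitor:
    """Detects the composite event (A ; B): an A occurrence followed by a B occurrence.

    Uses a chronicle-like event context: each A is consumed by at most one B,
    oldest A first. This is a simplification of what an event specification
    language such as Solicitor would define.
    """
    def __init__(self, first_type, second_type):
        self.first_type = first_type
        self.second_type = second_type
        self.pending = deque()            # unconsumed occurrences of the first type

    def on_event(self, event_type, timestamp):
        if event_type == self.first_type:
            self.pending.append(timestamp)
        elif event_type == self.second_type and self.pending:
            start = self.pending.popleft()
            return (start, timestamp)     # composite event occurrence interval
        return None

monitor = SequenceMonitor("A", "B")
stream = [("A", 1), ("C", 2), ("A", 3), ("B", 4), ("B", 5), ("B", 6)]
for etype, ts in stream:
    hit = monitor.on_event(etype, ts)
    if hit:
        print(f"composite (A;B) occurred over interval {hit}")
```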
920

Verification and Scheduling Techniques for Real-Time Embedded Systems

Cortés, Luis Alejandro January 2005 (has links)
Embedded computer systems have become ubiquitous. They are used in a wide spectrum of applications, ranging from household appliances and mobile devices to vehicle controllers and medical equipment. This dissertation deals with design and verification of embedded systems, with a special emphasis on the real-time facet of such systems, where the time at which the results of the computations are produced is as important as the logical values of these results. Within the class of real-time systems two categories, namely hard real-time systems and soft real-time systems, are distinguished and studied in this thesis. First, we propose modeling and verification techniques targeted towards hard real-time systems, where correctness, both logical and temporal, is of prime importance. A model of computation based on Petri nets is defined. The model can capture explicit timing information, allows tokens to carry data, and supports the concept of hierarchy. Also, an approach to the formal verification of systems represented in our modeling formalism is introduced, in which model checking is used to prove whether the system model satisfies its required properties expressed as temporal logic formulas. Several strategies for improving verification efficiency are presented and evaluated. Second, we present scheduling approaches for mixed hard/soft real-time systems. We study systems that have both hard and soft real-time tasks and for which the quality of results (in the form of utilities) depends on the completion time of soft tasks. Also, we study systems for which the quality of results (in the form of rewards) depends on the amount of computation allotted to tasks. We introduce quasi-static techniques, which are able to exploit at low cost the dynamic slack caused by variations in actual execution times, for maximizing utilities/rewards and for minimizing energy. Numerous experiments, based on synthetic benchmarks and realistic case studies, have been conducted in order to evaluate the proposed approaches. The experimental results show the merits and worthiness of the techniques introduced in this thesis and demonstrate that they are applicable on real-life examples.
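As a hedged, minimal illustration of exploiting dynamic slack (not the quasi-static algorithms developed in the thesis), the sketch below reclaims the slack left when a task finishes earlier than its worst case and slows the next task accordingly, under the common assumption that the cycle count is fixed and dynamic energy for those cycles falls roughly with the square of frequency; all timing numbers are invented.

```python
def reclaim_slack(wcet_next, budget_next, actual_prev, wcet_prev, f_max=1.0, f_min=0.3):
    """Pick a scaled frequency for the next task using slack from the previous one.

    Assumes the cycle count is fixed, so run time scales as 1/f, and the dynamic
    energy for those cycles scales roughly as f**2 (an illustrative model, not
    the thesis's cost function).
    """
    slack = wcet_prev - actual_prev           # time saved by the previous task
    available = budget_next + slack           # time the next task may now use
    f = max(wcet_next / available, f_min)     # slowest frequency that still fits
    return min(f, f_max)

# The previous task was budgeted 5 ms but finished in 3 ms; the next task needs
# 4 ms at full speed and was budgeted 4 ms, so 2 ms of slack lets it run slower.
f = reclaim_slack(wcet_next=4.0, budget_next=4.0, actual_prev=3.0, wcet_prev=5.0)
print(f"next task runs at {f:.2f} of f_max, dynamic energy ~ {f**2:.2f} of full speed")
```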
