41

De l'extension des langages à objets à la réalisation de modèles métiers : une évolution du développement logiciel

Lahire, Philippe, 10 December 2004
This dissertation aims to give a precise and synthetic overview of the research activities I have carried out since the beginning of my career as a teacher-researcher. The work presented here is not that of a single person, but the result of collaboration with the several people I have supervised or co-supervised. This small slice of a life in research naturally also owes much to other researchers; I am thinking of course of those of the OCL project, and of Roger Rousseau in particular, but also of all those I have met and worked alongside in the laboratory or at various conferences. As you will have gathered, the ideas presented below are not the product of a single person, but I claim the approach that guided them as my own. Since my DEA internship I have been interested in the problems of software design, and in particular in aspects concerning reuse, reliability, and the evolution of applications. This document retraces my contributions in this field.
42

Contribution à la vérification formelle et programmation par contraintes

Collavizza, Hélène, 03 December 2009
This "Habilitation à Diriger des Recherches" thesis presents my contributions to the formal verification of processors and programs, and to constraint programming. Formal verification, of both hardware and software, is crucial for the safety of critical systems, is a major economic issue, and remains a research challenge. The formal verification methods adopted, for processors as well as for programs, are fully automatic methods that rely on decision procedures. For program verification, constraint solving over finite domains provides a decision procedure over bounded (machine-representable) integers. Combinatorial explosion is delayed by combining dedicated solvers (Boolean, linear, finite-domain), which yielded experimental results that in some cases outperform "bounded model checking" tools based on SAT solvers. Program verification is then also approached through the joint development of a complete verification and of an exploration by model checking, based on the formal semantics of the language as defined in the HOL4 proof assistant. Finally, this thesis presents my contributions on constraints over continuous domains (i.e., where the variables are real numbers). Such constraints have many practical applications, for example in mechanics or avionics, and their resolution mechanisms can serve as a basis for the verification of programs involving floating-point numbers.
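
A toy illustration of the bounded-verification idea described above (a hedged sketch, not drawn from the thesis), with brute-force enumeration standing in for a finite-domain constraint solver: search the bounded integer domain for an input that violates a program's postcondition; if none exists, the property holds over those bounds. All names and bounds here are illustrative.

```python
from itertools import product

def program(x, y):
    """Program under verification: intended to return the max of x and y."""
    return x if x > y else y

def find_counterexample(bound):
    """Search all bounded-integer inputs for a violation of the postcondition
    m >= x, m >= y, and m in {x, y}. A finite-domain constraint solver would
    explore the same space with propagation instead of enumeration."""
    for x, y in product(range(-bound, bound), repeat=2):
        m = program(x, y)
        if not (m >= x and m >= y and m in (x, y)):
            return (x, y)   # counterexample found
    return None             # property holds for all inputs within the bounds

print(find_counterexample(bound=64))   # None: verified over [-64, 63]
```

A finite-domain solver explores the same bounded space, but prunes it by constraint propagation rather than exhaustive enumeration, which is what delays the combinatorial explosion.
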
43

Safety analysis of heterogeneous-multiprocessor control system software

Gill, Janet A., January 1990
Thesis (M.S. in Computer Science)--Naval Postgraduate School, December 1990. / Thesis Advisor(s): Shimeall, Timothy J. Second Reader: Hefner, Kim A. S. "December 1990." Description based on title screen as viewed on March 31, 2010. DTIC Identifier(s): Computer Program Reliability, System Safety. Author(s) subject terms: Software Safety, Petri Net, Fault Tree, Software Engineering, Integrated System Analysis. Includes bibliographical references (p. 47-51). Also available in print.
44

SOFTVIZ, a step forward

Singh, Mahim, January 2004
Thesis (M.S.)--Worcester Polytechnic Institute. / Keywords: Eclipse plug-in; tracer; timeline; software visualization; sunburst; SoftViz; ParaVis; error categorization framework; debugging; program understanding. Includes bibliographical references (p. 85-89).
45

Direct adaptive control using artificial neural networks with parameter projection

Tzanzalian, Svetlozara Krasteva, 01 January 1994
This research is focused on the development of a stable nonlinear direct adaptive control algorithm. The nonlinearity is realized through a one-hidden-layer feedforward artificial neural network (ANN) of sigmoidal basis functions. The control scheme incorporates a linear adaptive controller, acting in parallel with the ANN, so that if all nonlinear elements are set to zero, a linear controller results. The control scheme is based on inverse identification. An inherent problem with that scheme is the existence of multiple steady states of the controller. This issue is addressed and sufficient conditions for stability and convergence of the algorithm are derived. In particular, it is shown that if (1) the identification algorithm converges so that the prediction error tends to zero, (2) the plant is stably invertible, and (3) parameter projection is applied to prevent singularities, then the tracking error converges to zero as well. The one-step-ahead nonlinear controller with the proposed parameter projection has been tested in simulation studies of a CSTR system and in a pilot distillation column experiment. In both studies the nonlinear adaptive controller showed performance superior to that obtained using linear adaptive control. Applying parameter projection proved to be crucial to the successful operation of the control system. The validity of the approach was investigated further through a theoretical robustness analysis, and the approach was generalized to non-invertible systems by applying ideas from predictive control. In particular, it is shown that if the prediction horizon is increased, the nonlinear adaptive controller can be applied to non-minimum-phase systems.
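
A minimal sketch of the parameter-projection idea described in this abstract, assuming a simple gradient-type identification step; the parameter names, step size, and bound are illustrative placeholders, not the dissertation's actual algorithm.

```python
import numpy as np

def adaptive_update(theta, phi, error, gamma=0.05, b_min=0.1):
    """One gradient-type identification step with parameter projection.

    theta : parameter estimates of the inverse (controller) model
    phi   : regressor vector built from past inputs/outputs
    error : current prediction error
    b_min : (assumed) lower bound keeping the leading 'gain' estimate away
            from zero, where a one-step-ahead control law becomes singular
    """
    theta = theta + gamma * error * phi    # gradient correction
    theta[0] = max(theta[0], b_min)        # projection: avoid the singularity
    return theta

# Illustrative call: three controller parameters, a regressor, and an error.
theta = adaptive_update(np.array([0.05, 0.3, -0.2]),
                        np.array([1.0, 0.5, 0.2]), error=0.1)
print(theta)   # leading parameter is projected up to 0.1
```
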
46

Dynamics of global supply chain and electric power networks: Models, pricing analysis, and computations

Matsypura, Dmytro, 01 January 2006
In this dissertation, I develop a new theoretical framework for the modeling, pricing analysis, and computation of solutions to electric power supply chains with power generators, suppliers, transmission service providers, and the inclusion of consumer demands. In particular, I advocate the application of finite-dimensional variational inequality theory, projected dynamical systems theory, game theory, network theory, and other tools that have been recently proposed for the modeling and analysis of supply chain networks (cf. Nagurney (2006)) to electric power markets. This dissertation contributes to the extant literature on the modeling, analysis, and solution of supply chain networks, including global supply chains, in general, and electric power supply chains, in particular, in the following ways. It develops a theoretical framework for the modeling, pricing analysis, and computation of electric power flows/transactions in electric power systems using the rationale of supply chain analysis. The models developed include both static and dynamic ones. The dissertation also adds a new dimension to the methodology of the theory of projected dynamical systems by proving that, irrespective of the speeds of adjustment, the equilibrium of the system remains the same. Finally, I include alternative fuel suppliers, along with their behavior, in the supply chain modeling and analysis framework. This dissertation has strong practical implications. In an era in which technology and globalization, coupled with increasing risk and uncertainty, complicate electricity demand and supply within and between nations, the successful management and pricing of electric power systems become increasingly pressing topics, with relevance not only for economic prosperity but also for national security. This dissertation addresses these topics by providing models, pricing tools, and algorithms for decentralized electric power supply chains. It draws heavily on the following coauthored papers: Nagurney, Cruz, and Matsypura (2003), Nagurney and Matsypura (2004, 2005, 2006), Matsypura and Nagurney (2005), and Matsypura, Nagurney, and Liu (2006).
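
A hedged sketch of the projected-dynamical-systems machinery this abstract invokes: an Euler scheme tracks the dynamics to a stationary point, which coincides with the variational inequality solution. The map F, step size, and feasible set (a nonnegative orthant) are illustrative assumptions, not the electric power model itself.

```python
import numpy as np

def euler_pds(F, x0, tau=0.01, tol=1e-8, max_iter=100_000):
    """Euler scheme x_{k+1} = P_K(x_k - tau * F(x_k)) on K = nonnegative orthant.

    Stationary points of the projected dynamical system coincide with the
    solutions of the variational inequality <F(x*), x - x*> >= 0 for all x in K;
    as the abstract notes, they do not depend on the speed of adjustment."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = np.maximum(x - tau * F(x), 0.0)   # projection onto K
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy two-node 'market' with an affine, monotone map F(x) = A x + b.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([-1.0, -0.5])
print(euler_pds(lambda x: A @ x + b, x0=np.zeros(2)))   # ~[0.429, 0.286]
```
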
47

On some modeling issues in high speed networks

Yan, Anlu, 01 January 1998
Communication networks have experienced tremendous growth in recent years, and it has become ever more challenging to design, control, and manage systems of such speed, size, and complexity. Traditional performance modeling tools include analysis, discrete-event simulation, and network emulation. In this dissertation, we propose a new approach to performance modeling, which we call time-driven fluid simulation. Time-driven fluid simulation is a technique based on modeling the traffic going through the network as continuous fluid flows and the network nodes as fluid servers. Time is discretized into fixed-length intervals, and the system is simulated by recursively computing the system state and advancing the simulation clock. When the interval length is large, each chunk of fluid processed within one interval may represent thousands of packets/cells. In addition, since the simulation is synchronized by the fixed time intervals, it is easy to parallelize the simulator. These two factors enable us to speed up the simulation tremendously. For single-class fluid with probabilistic routing, we prove that the error introduced by discretizing a fluid model is within a deterministic bound proportional to the discretization interval length and is not related to the network size. For multi-class traffic passing through FIFO servers with class-based routing, we prove that the worst-case discretization error for any fluid flow may grow linearly with the number of hops the flow passes through, but is unaffected by the overall network size and by the discretization error of other classes. We further show via simulation that certain performance measures are in fact quite robust with respect to the discretization interval length and the path length of the flow (in number of hops), and that the discretization error is much smaller than the worst-case bound suggests. These results show that fluid simulation can be a useful performance modeling tool, filling the gap between discrete-event simulation and analysis. In this dissertation, we also apply another technique, rational approximation, to estimate the cell loss probabilities for an ATM multiplexer fed by a self-similar process; this method complements the analysis and simulation techniques.
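
A minimal sketch of the time-driven fluid recursion described above, for a single FIFO fluid server and assuming piecewise-constant arrival rates; all parameters are illustrative.

```python
def fluid_queue_step(buffer, arrival_rate, service_rate, dt):
    """Advance one fixed-length interval: fluid flows in, fluid is served,
    and the buffer is clipped at empty (a source of discretization error)."""
    return max(buffer + (arrival_rate - service_rate) * dt, 0.0)

# One fluid source through one server; each interval's chunk of fluid stands
# in for what could be thousands of packets or cells.
dt, buffer = 0.1, 0.0
for lam in [1.2, 1.5, 0.8, 0.3, 1.1]:     # piecewise-constant arrival rates
    buffer = fluid_queue_step(buffer, arrival_rate=lam, service_rate=1.0, dt=dt)
    print(f"backlog = {buffer:.3f}")
```
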
48

On fluid modeling of networks and queues

Guo, Yang, 01 January 2000
Data communication networks have been experiencing tremendous growth in size, complexity, and heterogeneity over the last decade. This trend poses a significant challenge to the modeling, simulation, and analysis of networks. In this dissertation, we take the fluid model as our means of attacking this issue and apply it to network simulation and to the analysis of queues. Traditional discrete-event packet-based approaches to simulating computer networks become computationally infeasible as the number of network nodes or their complexity increases. An alternative approach, in which packet-based traffic sources are replaced by fluid sources, has been proposed to address this challenge. We quantitatively characterize the amount of computational effort needed by a simulation scheme using the notion of a simulation's event rate, and derive expressions for the event rate of a packet and of a fluid flow at both the input and output sides of a queue. We show that fluid-based simulation of First In First Out (FIFO) networks requires less computational effort when the network is small. However, the so-called "ripple effect" can result in fluid-based simulations becoming more expensive than their packet-based counterparts. Replacing FIFO with weighted fair queueing reduces the ripple effect; however, the service-rate redistribution process incurs an additional event rate. We then propose time-stepped hybrid simulation (TSHS) to deal with the scalability issue faced by traditional packet-based discrete-event simulation methods and by fluid-based simulation methods. TSHS is a framework that offers the user the flexibility to choose the simulation time scale so as to trade off the computational cost of the simulation against its fidelity. Simulation speedup is achieved by evaluating the system at coarser time scales. The potential loss of simulation accuracy when fine time-scale behavior is evaluated at a coarser time scale is studied both analytically and experimentally. In addition, we compare an event-driven TSHS simulator to the time-driven version and find that the time-driven TSHS simulator outperforms the event-driven one, owing to the time-driven nature of the TSHS simulation model and the simplicity of the time-driven scheme. In this dissertation, we also apply the fluid model, together with the theory of stochastic differential equations, to queueing analysis. We formulate and solve a number of general questions in this area, using sample path methods as an important part of the process. Relying on the theory of stochastic differential equations, this approach brings to bear a heretofore ignored but quite effective problem-solving methodology.
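
A back-of-the-envelope illustration (my own stand-in, not the dissertation's derivation) of the event-rate comparison: a packet simulation pays roughly one arrival and one departure event per packet at every hop, while a fluid simulation pays events only when flow rates change, inflated as the ripple effect multiplies rate changes downstream. The ripple_factor here is a hypothetical knob.

```python
def packet_event_rate(pkts_per_sec, hops):
    """Packet simulation: roughly one arrival and one departure event
    per packet at every hop."""
    return 2.0 * pkts_per_sec * hops

def fluid_event_rate(rate_changes_per_sec, hops, ripple_factor=1.0):
    """Fluid simulation: one event per flow-rate change at every hop;
    ripple_factor (hypothetical) models rate changes multiplying
    downstream through FIFO servers."""
    return rate_changes_per_sec * hops * ripple_factor

# A 10,000 pkt/s flow whose rate changes 50 times/s, crossing 5 hops:
print(packet_event_rate(10_000, 5))              # 100000.0 events/s
print(fluid_event_rate(50, 5, ripple_factor=4))  # 1000.0 events/s
```
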
49

A dynamic load balancing approach to the control of multiserver polling systems with applications to elevator system dispatching

Lewis, James Alan, 01 January 1991
This dissertation presents a new technique for the control of multi-server polled queueing systems, referred to as dynamic load balancing (DLB). Using a simple cyclic service model, evidence is provided indicating that waiting time will be minimized if the servers of the polled queueing system remain maximally separated via a 'skip-ahead' control policy. Approximations are derived for average job waiting time in two polled queueing system 'modes'--the maximum server separation mode and the minimum server separation mode. These approximations further suggest the desirability of a 'skip-ahead' control policy to maintain maximum server separation. A discrete-event model and a corresponding discrete-event simulation of the polled queueing system are developed. The DLB algorithm is developed to achieve the maximum server separation objective in the polled queueing system. Simulation results substantiate the approximations developed for the polled queueing system model over a wide range of system parameters and load levels. DLB is then adapted for elevator system control; changes to DLB were required to account for the presence of car calls and direction switching in the elevator system. Despite the added complexity of the elevator system over the multi-server polled queueing system, DLB is shown via simulation to improve on a state-of-the-art elevator system control algorithm in six of six performance measures (e.g., average waiting time).
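
A hedged sketch of the maximal-separation objective: on a cycle of n queues, a freed server 'skips ahead' to the position that maximizes its minimum circular distance from the other servers. The scoring rule is illustrative, not the DLB algorithm itself.

```python
def circular_distance(a, b, n):
    """Shortest distance between positions a and b on a cycle of n queues."""
    d = abs(a - b) % n
    return min(d, n - d)

def skip_ahead(free_server, positions, n):
    """Send the freed server to the queue maximizing its minimum circular
    distance from all other servers (the maximal-separation objective)."""
    others = [p for i, p in enumerate(positions) if i != free_server]
    return max(range(n),
               key=lambda q: min(circular_distance(q, p, n) for p in others))

# Three servers on a 12-queue cycle; server 0 has just emptied its queue.
positions = [2, 4, 5]
print(skip_ahead(0, positions, n=12))   # 10: maximally separated from 4 and 5
```
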
50

Parallel computation of large-scale network equilibria and variational inequalities

Kim, Dae-Shik, 01 January 1992
Network equilibrium is attained when no user competing to optimize his utility can improve that utility any further. Equilibrium problems governed by distinct equilibrium concepts can be formulated in one general framework--that of variational inequalities. The synthesis of variational inequalities and networks induces highly efficient algorithms that are especially suited to large-scale equilibrium problems. Motivated by recent technological advances in parallel computing architectures, parallel algorithms for large-scale equilibrium problems were developed using the theory of variational inequalities. In the case where the feasible constraint set of a network equilibrium problem can be expressed as a Cartesian product of subsets, the application of variational inequality decomposition algorithms for parallel computation becomes possible. A new spatial price equilibrium model, based not on path flows but on link flows so as to allow decomposition by time periods, was developed and used as a prototype of large-scale network equilibrium problems. The variational inequality formulations were decomposed first by commodities, then by time periods, and subsequently by markets. The coarse-grain parallel architectures used were the IBM 3090-600E and the IBM 3090-600J at the Cornell Theory Center, with six processors each. The maximum speed-ups obtained were 1.93 for two processors, 3.74 for four processors, and 5.15 for six processors. The market subproblems were further decomposed by links, resulting in a fine-grain parallel implementation. Thinking Machines' Connection Machine, CM-2, with 32,768 processors, was used for the numerical experimentation. The fine-grain parallel algorithm solved input/output matrix problems more than 20 times faster than on the IBM 3090-600J. It is expected that further enhancements to parallel languages and parallel architectures will make even more efficient implementations realizable, and that parallel computing and the theory of variational inequalities can be successfully applied to solve other large-scale problems with an underlying network structure more efficiently, such as traffic equilibrium problems, general economic equilibrium problems, and financial equilibrium problems.
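
A hedged sketch of why a Cartesian-product feasible set enables parallelism, assuming a simple projection iteration and nonnegative-orthant blocks; the map F and all parameters are toy placeholders, not the spatial price equilibrium model.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_projection_step(F, x, blocks, tau=0.05):
    """One projection iteration x_i <- P_{K_i}(x_i - tau * F_i(x)).

    When the feasible set is a Cartesian product K_1 x ... x K_m (here,
    a nonnegative orthant per market), each block's projection is
    independent of the others and can run on its own processor."""
    Fx = F(x)

    def update_block(sl):
        return np.maximum(x[sl] - tau * Fx[sl], 0.0)  # project onto K_i

    with ThreadPoolExecutor() as pool:
        return np.concatenate(list(pool.map(update_block, blocks)))

# Two 'markets' of two variables each, with a toy monotone map F(x) = x - c.
c = np.array([1.0, 0.5, -0.2, 2.0])
x = np.zeros(4)
blocks = [slice(0, 2), slice(2, 4)]
for _ in range(200):
    x = parallel_projection_step(lambda v: v - c, x, blocks)
print(x)   # converges to the projection of c: [1.0, 0.5, 0.0, 2.0]
```
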
