111

Développement d’une approche couplée matériau / structure machine : application au formage incrémental robotisé / Design of a process/machine coupling approach : application to robotized incremental forming

Belchior, Jérémy 10 December 2013
Incremental forming is an innovative process in which a punch of simple shape progressively forms a sheet through its movements, opening new perspectives for sheet-metal forming processes. Carrying out incremental forming with mechanical systems that offer high dynamic capability and large accessible volumes, such as serial or parallel robot manipulators, is an effective way to improve both productivity and the complexity of the formed parts. The scientific aim of this work is to contribute to a global approach to the problem, working both at the "mesoscopic" scale of the process and at the "macroscopic" scale of the manufacturing system. In this context, a process/machine coupling approach is proposed that combines a finite element analysis (FEA) of the process with an elastic model of the robot structure. First, the punch forces required to form the part are computed under the assumption of a perfectly rigid machine structure. To minimize the error between predicted and measured forming forces, three parameters identified as influential on the force level are studied. It is then shown that, with an appropriate choice of parameters, the force-measurement step can be dispensed with, which is not the case in the approaches available in the literature. The predicted forces are then used as input to the elastic model of the robot structure to compute the pose errors of the tool center point (TCP). To capture the elastic behavior of the structure, the robot is modeled with beam elements, an approach applied to an industrial Fanuc S420if robot. The identified model predicts the TCP displacements induced by the elasticity of the structure with a maximum error of ±0.35 mm, whatever tool load the robot can support, which remains compatible with the process requirements. To validate the approach, two parts are formed by the robot: a truncated cone and a twisted pyramid. The geometry of these two parts validates both the simulation hypotheses and the global approach; the two experiments yield an 80% improvement in the robot's pose accuracy, bringing it close to the performance of a Cartesian CNC machine. Finally, an optimization loop based on a parametric trajectory and on FEA takes the springback of the sheet into account, before the part is unclamped, directly in the trajectory computation, minimizing the deviation between the nominal and formed profiles. Applying the coupled approach to this trajectory yields a geometric accuracy of ±0.15 mm for the formed profile before unclamping, opening interesting perspectives for the application of the methodology.
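A rough feel for the coupling can be had from a one-beam simplification, sketched below in Python: the arm is treated as a single equivalent cantilever, and the predicted deflection under the forming force is mirrored into the commanded trajectory. The stiffness numbers and the one-beam model are illustrative assumptions only; the thesis models the full Fanuc S420if structure with beam elements.

```python
# Minimal sketch (illustrative values, not the thesis model): approximate the
# robot arm as one equivalent cantilever and pre-compensate the tool path.

def tcp_deflection(force_n, length_m=1.8, e_pa=210e9, i_m4=2.0e-5):
    """Tip deflection of a cantilever under a transverse end load: F*L^3 / (3*E*I)."""
    return force_n * length_m**3 / (3.0 * e_pa * i_m4)

def compensate(nominal_z_m, forces_n):
    """Mirror the predicted elastic deflection into the commanded poses."""
    return [z - tcp_deflection(f) for z, f in zip(nominal_z_m, forces_n)]

# A ~1 kN forming force deflects this equivalent beam by roughly 0.4 mm,
# the same order as the ±0.35 mm prediction error reported above.
print(compensate([0.0, -0.5e-3, -1.0e-3], [800.0, 950.0, 1100.0]))
```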
112

Constructing component-based systems directly from requirements using incremental composition

Nordin, Azlin January 2013
In software engineering, system construction typically starts from a requirements specification that has been engineered from raw requirements in a natural language. The specification is used to derive intermediate requirements models such as structured or object-oriented models. Throughout the stages of system construction, these artefacts will be used as reference models. In general, in order to derive a design specification out of the requirements, the entire set of requirements specifications has to be analysed. Such models at best only approximate the raw requirements, since these design models are derived as a result of the abstraction process according to the chosen software development methodology, and are subject to the expertise, intuition, judgment and experience of the analysts or designers of the system. These abstraction models require the analysts to elicit all useful information from the requirements, and there is a potential risk that some information may be lost in the process of model construction. As the use of natural language requirements in system construction is inevitable, the central focus of this study was to use requirements stated in natural language in contrast to any other requirements representation (e.g. a modelling artefact). In this thesis, an approach has been defined that avoids intermediate requirements models and maps natural language requirements directly into architectural constructs, thus minimising information loss during the model construction process. This approach is grounded in the adoption of a component model that supports incremental composition. Incremental composition allows a system to be constructed piece by piece. By mapping a raw requirement to elements of the component model, a partial architecture that satisfies that requirement is constructed. Consequently, by iterating this process for all the requirements, one at a time, the system can be built piece by piece directly from the requirements.
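The composition loop lends itself to a short sketch. The Python below is illustrative only: the `Component` and `Architecture` classes and the requirement-to-component mapping stub are invented stand-ins for the thesis's component model, which is the substantive part of the work.

```python
# Toy incremental composition: one requirement in, one composition step out.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    provides: set = field(default_factory=set)

@dataclass
class Architecture:
    components: list = field(default_factory=list)

    def compose(self, component):
        """Add one component to the partial architecture; earlier
        requirements are never revisited."""
        self.components.append(component)
        return self

def map_requirement(req_text):
    """Stand-in for the mapping from a raw natural-language requirement
    to component-model elements (the hard part of the approach)."""
    return Component(name=req_text, provides={req_text})

arch = Architecture()
for req in ["log every failed login", "notify admin after 3 failures"]:
    arch = arch.compose(map_requirement(req))   # piece by piece
print([c.name for c in arch.components])
```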
113

Shared Complex Event Trend Aggregation

Rozet, Allison M. 07 May 2020
Streaming analytics deploy Kleene pattern queries to detect and aggregate event trends against high-rate data streams. Despite increasing workloads, most state-of-the-art systems process each query independently, thus missing cost-saving sharing opportunities. Sharing complex event trend aggregation poses several technical challenges. First, the execution of nested and diverse Kleene patterns is difficult to share. Second, we must share aggregate computation without the exponential costs of constructing the event trends. Third, not all sharing opportunities are beneficial because sharing aggregation introduces overhead. We propose a novel framework, Muse (Multi-query Snapshot Execution), that shares aggregation queries with Kleene patterns while avoiding expensive trend construction. It adopts an online sharing strategy that eliminates re-computations for shared sub-patterns. To determine the beneficial sharing plan, we introduce a cost model to estimate the sharing benefit and design the Muse refinement algorithm to efficiently select robust sharing candidates from the search space. Finally, we explore optimization decisions to further improve performance. Our experiments over a wide range of scenarios demonstrate that Muse increases throughput by 4 orders of magnitude compared to state-of-the-art approaches with negligible memory requirements.
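The core trick of aggregating without trend construction can be shown with a small dynamic program: keep one running count per event rather than materializing the exponentially many trends. The sketch below illustrates only that principle; Muse's snapshot-based sharing across multiple queries is considerably more involved.

```python
# Count the trends matched by a Kleene+ pattern without enumerating them.
def count_trends(events, compatible):
    """events: stream-ordered list; compatible(a, b): may b extend a trend
    ending at a? Returns the number of matching trends."""
    ending_at = [1] * len(events)              # each event alone is a trend
    for j in range(len(events)):
        for i in range(j):
            if compatible(events[i], events[j]):
                ending_at[j] += ending_at[i]   # extend every trend ending at i
    return sum(ending_at)

# Example: "strictly rising price" as the Kleene condition over [1, 2, 3]
# yields 7 trends (3 singletons, 3 pairs, 1 triple) in O(n^2) time.
print(count_trends([1, 2, 3], lambda a, b: b > a))
```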
114

Using a Cognitive Architecture in Incremental Sentence Processing

McGhee, Jeremiah Lane 10 December 2012
XNL-Soar is a specialized implementation of the Soar cognitive architecture. The version of XNL-Soar described in this thesis builds upon and extends prior research (Lewis, 1993; Rytting, 2000) using Soar for natural language processing. This thesis describes the updates made to the operators that create syntactic structure and the improved coverage of syntactic phenomena. It describes the addition of semantic structure-building capability, details the implementation of semantic memory, and describes two experiments utilizing semantic memory in structural disambiguation. This thesis shows that XNL-Soar, as currently instantiated, resolves ambiguities common in language using strategies and resources including reanalysis via snip operators, data-driven techniques with annotated corpora, and complex part-of-speech and word-sense processing based on WordNet.
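The WordNet side of such disambiguation is easy to demonstrate with NLTK; the snippet below is a hedged stand-in for XNL-Soar's word-sense processing, which actually runs inside Soar's semantic memory rather than calling NLTK.

```python
# Requires: pip install nltk, then nltk.download('wordnet') and nltk.download('omw-1.4').
from nltk.corpus import wordnet as wn
from nltk.wsd import lesk

context = "the fisherman sat on the bank of the river".split()
sense = lesk(context, "bank")            # simplified Lesk disambiguation
print(sense, "-", sense.definition() if sense else "no sense found")

# An incremental processor commits to one sense as it parses, then uses a
# "snip"-style reanalysis if later words contradict the early commitment.
```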
115

Scalable Frequent Subgraph Mining

Abdelhamid, Ehab 19 June 2017
A graph is a data structure that contains a set of nodes and a set of edges connecting those nodes. Nodes represent objects, while edges model relationships among them. Graphs are used across many domains because of their ability to model complex relations among objects. Given an input graph, the Frequent Subgraph Mining (FSM) task finds all subgraphs whose frequencies exceed a given threshold. FSM is crucial for graph analysis and is an essential building block in a variety of applications, such as graph clustering and indexing. FSM is computationally expensive, and existing solutions are extremely slow; consequently, they are incapable of mining modern large graphs. The slowness stems from the underlying approaches, which require finding and storing an excessive number of subgraph matches. This dissertation proposes a scalable FSM solution that avoids the limitations of previous work. The solution has four components. The first is a single-threaded technique that, for each candidate subgraph, needs to find only a minimal number of matches. The second is a scalable parallel FSM technique that uses a novel two-phase approach: the first phase quickly builds an approximate search space, which the second phase uses to optimize and balance the workload of the FSM task. The third component accelerates frequency evaluation, a critical step in FSM: a machine learning model predicts the type of each graph node, and an optimized evaluation method is selected for that node accordingly. The fourth component targets mining dynamic graphs, such as social networks: an incremental index is maintained during the dynamic updates, and for the majority of graph updates only this index is processed and updated, significantly pruning the search space and improving efficiency. The empirical evaluation shows that the proposed components significantly outperform existing solutions, scale to a large number of processors, and process graphs that previous techniques cannot handle, such as large and dynamic graphs.
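For the frequency-evaluation step, a common single-graph support measure is the minimum node image (MNI). The sketch below computes it naively with networkx; it is a reference point only, since the dissertation's contribution is precisely avoiding this exhaustive match enumeration.

```python
import networkx as nx
from networkx.algorithms import isomorphism

def mni_support(graph, pattern):
    """MNI support: for each pattern node, count the distinct graph nodes it
    maps to across all embeddings; support is the minimum of those counts."""
    images = {p: set() for p in pattern.nodes}
    matcher = isomorphism.GraphMatcher(graph, pattern)
    for mapping in matcher.subgraph_isomorphisms_iter():  # graph node -> pattern node
        for g_node, p_node in mapping.items():
            images[p_node].add(g_node)
    return min(len(s) for s in images.values()) if images else 0

G = nx.path_graph(5)                  # 0-1-2-3-4
P = nx.path_graph(2)                  # the single-edge pattern
print(mni_support(G, P))              # 5: frequent for any threshold <= 5
```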
116

Incremental State Higher Education Expenditures

Shelley, Gary L., Wright, David B. 01 January 2009
Panel regressions are used to analyze various measures of state higher education expenditures for 45 states over a time period from 1986 through 2005. Results of panel stationarity tests indicate that each expenditures series contains a unit root. This finding is consistent with the incremental theory of public expenditures and implies that time series of these variables should be differenced if used as dependent variables in regression models. Regression results indicate that changes in state higher education expenditures are significantly procyclical. State higher education spending appears to fully adjust to population growth and over-adjust to CPI inflation. Larger state governments are associated with significantly larger annual adjustments to per capita real state higher education expenditures. No significant evidence is found that state Medicaid or elementary education expenditures crowd out higher education spending.
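The differencing implication of the unit-root result is easy to make concrete. The sketch below (pandas/statsmodels, with made-up numbers and illustrative variable names, not the paper's data or full specification) regresses changes in per-capita spending on changes in a cyclical covariate.

```python
import pandas as pd
import statsmodels.api as sm

# Illustrative two-state panel; the paper uses 45 states, 1986-2005.
df = pd.DataFrame({
    "state": ["AL"] * 3 + ["OH"] * 3,
    "year":  [1986, 1987, 1988] * 2,
    "hied_pc":   [410.0, 422.0, 418.0, 515.0, 530.0, 528.0],
    "income_pc": [14.1, 14.6, 14.9, 16.0, 16.4, 16.6],
}).sort_values(["state", "year"])

# The series contain unit roots, so difference within each state first.
df["d_spend"]  = df.groupby("state")["hied_pc"].diff()
df["d_income"] = df.groupby("state")["income_pc"].diff()

est = sm.OLS(df["d_spend"].dropna(),
             sm.add_constant(df["d_income"].dropna())).fit()
print(est.params)   # a positive slope on d_income reads as procyclical spending
```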
117

Examining Alexithymia in Affective Events Theory

Howald, Nicholas 02 May 2019
No description available.
118

Implementation of Educational Games-based Instruction for Improving Sight Word Recognition

Weakland, Natalie Lynn 03 May 2013
No description available.
119

Temporal Data Mining in a Dynamic Feature Space

Wenerstrom, Brent K. 22 May 2006
Many interesting real-world applications for temporal data mining are hindered by concept drift. One particular form of concept drift is characterized by changes to the underlying feature space. Seemingly little has been done to address this issue. This thesis presents FAE, an incremental ensemble approach to mining data subject to concept drift. FAE achieves better accuracies over four large datasets when compared with a similar incremental learning algorithm.
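The flavor of such an ensemble is easy to sketch: train one base learner per batch with its own feature space, re-weight old members by accuracy on the newest batch, and vote. This is a toy in the spirit of FAE, with invented class names, not the thesis's algorithm.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import BernoulliNB

class DriftEnsemble:
    def __init__(self):
        self.members = []                       # [vectorizer, model, weight]

    def add_batch(self, records, labels):
        for m in self.members:                  # re-weight by recent accuracy
            m[2] = m[1].score(m[0].transform(records), labels)
        vec = DictVectorizer()                  # this batch's own feature space
        model = BernoulliNB().fit(vec.fit_transform(records), labels)
        self.members.append([vec, model, 1.0])

    def predict_one(self, record):
        votes = {}
        for vec, model, w in self.members:
            label = model.predict(vec.transform([record]))[0]
            votes[label] = votes.get(label, 0.0) + w
        return max(votes, key=votes.get)

ens = DriftEnsemble()
ens.add_batch([{"win": 1, "cash": 1}, {"meeting": 1}], ["spam", "ham"])
print(ens.predict_one({"cash": 1}))             # vectorizers ignore unseen features
```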
120

An Incremental Trace-Based Debug System for Field-Programmable Gate-Arrays

Keeley, Jared Matthew 07 November 2013
Modern society increasingly relies upon integrated circuits (ICs). It can be very costly if ICs do not function properly, and large portions of designer effort are spent on their verification. The use of field-programmable gate arrays (FPGAs) for verification and debug of ICs is increasing: FPGAs are faster than simulation and cost less than fabricating an ASIC prototype. The major challenge of using FPGAs for verification and debug, however, is observability; designers must use special techniques to observe the values of an FPGA's internal signals. This thesis proposes a new method for increasing the observability of FPGAs and demonstrates its feasibility. The method incrementally inserts trace buffers, controlled by a trigger, into already placed-and-routed FPGA designs. Incremental insertion avoids several drawbacks of typical trace-based approaches, such as influencing the placement and routing of the design, large area overheads, and slow turnaround times when the instrumentation must be changed. It is shown that every flip-flop in Xilinx Virtex-5 designs can be observed using the method, given enough trace buffer capacity. We investigate factors that influence the results of the method: making the trace buffers wide may lead to routing failures, and congested areas of the circuit must be avoided when placing the trigger, or routing may likewise fail. A drawback of the method is that it may increase the minimum period of the design, but we show that pipelining can reduce these effects. The method proves to be a promising way to observe thousands of signals in a design, potentially allowing designers to fully reconstruct the internal values of an FPGA over multiple clock cycles to assist in verification and debug.
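The capacity side of the approach can be illustrated with a little arithmetic: leftover block RAM bounds how many signals fit in a trace window, and keeping individual buffers narrow sidesteps the routing failures noted above. The policy and numbers below are illustrative assumptions, not the thesis's insertion flow.

```python
def plan_trace_buffers(num_signals, spare_bram_bits, window_cycles, max_width=64):
    """Greedy sizing: depth = trace window, one bit per signal per cycle;
    cap each buffer's width because very wide buffers strain routing."""
    affordable = spare_bram_bits // window_cycles     # signals we can store
    traced = min(num_signals, affordable)
    widths = [max_width] * (traced // max_width)      # several narrow buffers
    if traced % max_width:
        widths.append(traced % max_width)
    return traced, widths

traced, widths = plan_trace_buffers(num_signals=5_000,
                                    spare_bram_bits=2_000_000,
                                    window_cycles=1024)
print(traced, "signals over buffers of widths", widths)  # 1953 signals here
```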
