  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Disturbance Robustness Measures and Wrench-Feasible Workspace Generation Techniques for Cable-Driven Robots

Bosscher, Paul Michael 01 December 2004 (has links)
Cable robots are a type of robotic manipulator that has recently attracted interest for large workspace manipulation tasks. Cable robots are relatively simple in form, with multiple cables attached to a mobile platform or end-effector. The end-effector is manipulated by motors that can extend or retract the cables. Cable robots have many desirable characteristics, including low inertial properties, high payload-to-weight ratios, potentially vast workspaces, transportability, ease of disassembly/reassembly, reconfigurability and economical construction and maintenance. However, relatively few analytical tools are available for analyzing and designing these manipulators. This thesis focuses on expanding the existing theoretical framework for the design and analysis of cable robots in two areas: disturbance robustness and workspace generation. Underconstrained cable robots cannot resist arbitrary external disturbances acting on the end-effector. Thus a disturbance robustness measure for general underconstrained single-body and multi-body cable robots is presented. This measure captures the robustness of the manipulator to both static and impulsive disturbances. Additionally, a wrench-based method of analyzing cable robots has been developed and is used to formulate a method of generating the Wrench-Feasible Workspace of cable robots. This workspace consists of the set of all poses of the manipulator where a specified set of wrenches (force/moment combinations) can be exerted. For many applications the Wrench-Feasible Workspace constitutes the set of all usable poses. The concepts of robustness and workspace generation are then combined to introduce a new workspace: the Specified Robustness Workspace. This workspace consists of the set of all poses of the manipulator that meet or exceed a specified robustness value.
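At a given pose, the wrench-feasibility question described above reduces to a linear feasibility problem: do bounded, non-negative cable tensions exist that produce the required wrench? The sketch below is a minimal illustration of such a pose-wise check, not the thesis's own algorithm; the structure matrix, tension limits, and vertex-enumeration approach are illustrative assumptions.

```python
import itertools
import numpy as np

def wrench_feasible(A, w, t_max):
    """Can cable tensions t with 0 <= t <= t_max satisfy A @ t = w?
    A is the (m x n) structure matrix whose columns are the unit cable
    directions; w is the wrench the cables must exert. Feasibility is
    decided by enumerating vertices of the constraint polytope: fix
    n - m tensions at a bound and solve for the remaining m."""
    m, n = A.shape
    for free in itertools.combinations(range(n), m):
        fixed = [j for j in range(n) if j not in free]
        for bounds in itertools.product([0.0, t_max], repeat=len(fixed)):
            rhs = np.asarray(w, dtype=float).copy()
            for j, b in zip(fixed, bounds):
                rhs -= b * A[:, j]
            sub = A[:, list(free)]
            if abs(np.linalg.det(sub)) < 1e-12:
                continue  # degenerate vertex candidate, skip
            t_free = np.linalg.solve(sub, rhs)
            if np.all(t_free >= -1e-9) and np.all(t_free <= t_max + 1e-9):
                return True
    return False
```

Sweeping this check over a grid of end-effector poses (each pose yielding its own structure matrix) traces out an approximation of the Wrench-Feasible Workspace.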
72

A framework for low bit-rate speech coding in noisy environment

Krishnan, Venkatesh 21 April 2005 (has links)
State-of-the-art model-based coders offer perceptually acceptable reconstructed speech quality at bit-rates as low as 2000 bits per second. However, the performance of these coders deteriorates rapidly below this rate, primarily because very few bits are available to encode the model parameters with high fidelity. This thesis aims to meet the challenge of designing speech coders that operate at lower bit-rates while reconstructing the speech at the receiver at the same or even better quality than state-of-the-art low bit-rate speech coders. In one of the contributions, we develop a plethora of techniques for efficient coding of the parameters obtained by the MELP algorithm, under the assumption that the classification of the frames of the MELP coder is available. Also, a simple and elegant procedure called dynamic codebook reordering is presented for use in the encoders and decoders of a vector quantization system; it effectively exploits the correlation between vectors of parameters obtained from consecutive speech frames without introducing any delay, distortion or suboptimality. The potential of this technique to significantly reduce the bit-rates of speech coders is illustrated. Additionally, the thesis addresses the design of such very low bit-rate speech coders so that they are robust to environmental noise. To impart robustness, a speech enhancement framework employing Kalman filters is presented. Kalman filters designed for speech enhancement in the presence of noise assume an autoregressive model for the speech signal. We improve the performance of Kalman filters in speech enhancement by constraining the parameters of the autoregressive model to belong to a codebook trained on clean speech. We then extend this formulation to the design of a novel framework, called the multiple input Kalman filter, that optimally combines the outputs from several speech enhancement systems.
Since the low bit-rate speech coders compress the parameters significantly, it is very important to protect the transmitted information from errors in the communication channel. In this thesis, a novel channel-optimized multi-stage vector quantization codec is presented, in which the stage codebooks are jointly designed.
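Dynamic codebook reordering can be sketched as follows: encoder and decoder maintain the same permutation of a shared codebook, re-sorted after each frame by proximity to the last selected codeword, so that strongly correlated consecutive frames map to small indices (which an entropy coder can then represent cheaply). This is an illustrative reconstruction of the idea, not the thesis's exact procedure; the codebook and distance measure are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 2))   # illustrative trained codebook

def quantize(order, x):
    """Index (within the current order) of the nearest codeword."""
    return int(np.argmin([np.sum((codebook[j] - x) ** 2) for j in order]))

def reorder(order, last_vec):
    """Re-sort codeword ids by distance to the last selected codeword,
    so correlated consecutive frames map to small indices."""
    return sorted(order, key=lambda j: np.sum((codebook[j] - last_vec) ** 2))

def encode(frames):
    order, idxs = list(range(len(codebook))), []
    for x in frames:
        i = quantize(order, x)
        c = codebook[order[i]]
        idxs.append(i)
        order = reorder(order, c)   # decoder applies the same reorder
    return idxs

def decode(idxs):
    order, out = list(range(len(codebook))), []
    for i in idxs:
        c = codebook[order[i]]
        out.append(c)
        order = reorder(order, c)   # stays in lockstep with the encoder
    return out
```

Because the reordering is a deterministic permutation applied identically on both sides, the reconstructed codewords are exactly those of plain nearest-neighbor VQ: no delay, no distortion, no suboptimality is introduced.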
73

Relay Misbehavior Detection for Robust Diversity Combining in Cooperative Communications

Chou, Po-heng 23 July 2011 (has links)
Cooperative communication is an emerging technique that exploits the spatial diversity inherent in wireless multiuser communication systems without requiring multiple antennas at each node. Most studies in the literature assume that the users acting as relays operate normally and are trustworthy, which, however, may not always be true in practice. This thesis considers the design of robust cooperative communication at the physical layer for combating relay misbehaviors. Both the with-direct-path (WDP) and without-direct-path (WODP) models are considered, and signal-correlation-detection rules are proposed for WDP and WODP, respectively. Using the proposed signal-correlation-detection mechanism, the destination identifies the misbehaving relays within the cooperative communication network and excludes their transmitted messages when performing the diversity combining used to infer the symbols of interest sent by the source. The proposed signal-correlation-detection rules are optimally designed according to either the criterion of minimizing the probability of misbehavior misidentification or the criterion of the maximum generalized likelihood detector. In addition, this thesis provides a BER analysis of cooperative communication employing the proposed misbehaving-relay detectors. Simulation results demonstrate that the proposed schemes remain robust when relay misbehavior is present in the cooperative communication network.
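As a rough illustration of the signal-correlation idea in the WDP setting (not the thesis's actual detection rules, which are derived from the optimality criteria above), the destination can correlate the relay's signal with the direct-path signal and flag relays whose correlation falls below a threshold. The BPSK simulation, noise level, and threshold below are illustrative assumptions.

```python
import numpy as np

def flag_misbehaving(y_direct, y_relay, threshold=0.0):
    """Flag a relay whose forwarded signal is not positively correlated
    with the direct-path observation of the same source symbols."""
    rho = np.corrcoef(y_direct, y_relay)[0, 1]
    return rho < threshold

# Toy simulation: BPSK source symbols observed over the direct path,
# an honest relay, and a symbol-flipping (misbehaving) relay.
rng = np.random.default_rng(1)
s = rng.choice([-1.0, 1.0], size=2000)
y_direct = s + 0.8 * rng.normal(size=2000)
y_honest = s + 0.8 * rng.normal(size=2000)
y_flipper = -s + 0.8 * rng.normal(size=2000)
```

A flagged relay would then simply be dropped from the diversity combiner before the symbols of interest are inferred.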
74

Robustness Analysis of the Matched Filter Detector Through Utilizing Sinusoidal Wave Sampling

Stedehouder, Jeroen 16 January 2010 (has links)
This thesis performs a quantitative study, within the Neyman-Pearson framework, of the robustness of the matched filter detector in zero-mean, independent and identically distributed white Gaussian noise. The variance of the noise is assumed to be imperfectly known, but some knowledge of a nominal value is presumed. We use slope as a unit to quantify robustness for different signal strengths, nominals, and sample sizes. Following this, a weighting method is applied to the slope range of interest, the so-called tolerable range, to analyze the likelihood of extreme slopes occurring. The ratio of the first and last quarter sections of the tolerable range is taken to obtain the likelihood of low slopes occurring. We finalize our analysis by developing a method that quantifies confidence as a measure of robustness. Both weighted and non-weighted procedures are applied over the tolerable range, where the weighted procedure puts greater emphasis on values near the nominal. The quantitative results show the detector to be non-robust, delivering poor performance at low signal-to-noise ratios. For moderate signal strengths, the detector performs rather well if the nominal and sample size are chosen wisely. At high signal-to-noise ratios the detector has excellent performance and robustness; this remains true even when only a few samples are taken or when the practitioner is uncertain about the chosen nominal.
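The sensitivity under study can be made concrete with a small closed-form calculation: for a matched filter whose Neyman-Pearson threshold is set from a nominal noise standard deviation, the false-alarm rate actually achieved under the true standard deviation follows directly from the Gaussian tail function. This is our own illustration of the nominal-mismatch effect, not the thesis's slope-based measure.

```python
import math

def Q(x):
    """Gaussian tail probability P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Q_inv(p, lo=-10.0, hi=10.0):
    """Inverse of Q via bisection (Q is decreasing; ample for a sketch)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Q(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def actual_false_alarm(alpha, sigma_nominal, sigma_true):
    """False-alarm rate actually achieved by a matched filter whose
    threshold targets level alpha under sigma_nominal, when the true
    noise standard deviation is sigma_true."""
    tau_scaled = sigma_nominal * Q_inv(alpha)   # threshold in units of ||s||
    return Q(tau_scaled / sigma_true)
```

Underestimating the noise (nominal below truth) inflates the false-alarm rate well past the design level, while overestimating it makes the detector overly conservative, which is one way the non-robustness at low SNR manifests.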
75

On the robustness of clustered sensor networks

Cho, Jung Jin 15 May 2009 (has links)
Smart devices with multiple on-board sensors, networked through wired or wireless links, are distributed in physical systems and environments. Broad applications of such sensor networks include manufacturing quality control and wireless sensor systems. In the operation of sensor systems, robust methods for retrieving reliable information are crucial in the presence of potential sensor failures. Sensor redundancy is one of the main drivers of the robustness, or fault tolerance capability, of a sensor system. The redundancy degree of sensors plays two important roles pertaining to the robustness of a sensor network. First, the redundancy degree provides proper parameter values for robust estimators; second, the fault tolerance capability of a sensor network can be calculated from the redundancy degree. Given this importance, this dissertation presents efficient algorithms based on matroid theory to compute the redundancy degree of a clustered sensor network. In these algorithms, the cluster pattern of a sensor network allows a large network to be decomposed into smaller sub-systems, from which the redundancy degree can be found more efficiently. Finally, the robustness analysis and its algorithmic procedure are illustrated using examples of a multi-station assembly process and calibration of wireless sensor networks.
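For a linear measurement model, the redundancy degree referred to above can be defined as the largest number of sensor failures the system tolerates while the state remains estimable. A brute-force version (exactly the combinatorial cost the dissertation's matroid-based decomposition is designed to avoid on large networks) might look like the sketch below; the linear model and rank test are illustrative assumptions.

```python
import itertools
import numpy as np

def redundancy_degree(H):
    """Brute-force the redundancy degree of a linear sensor system
    x -> H @ x: the largest d such that removing ANY d rows of H
    (failed sensors) still leaves full column rank, i.e. the state
    remains estimable from the surviving sensors."""
    m, n = H.shape
    if np.linalg.matrix_rank(H) < n:
        return -1  # not estimable even with every sensor working
    for d in range(1, m - n + 1):
        for gone in itertools.combinations(range(m), d):
            keep = [i for i in range(m) if i not in gone]
            if np.linalg.matrix_rank(H[keep, :]) < n:
                return d - 1
    return m - n  # cannot exceed the row surplus
```

Decomposing a clustered network into sub-systems shrinks `m` in each call, which is the efficiency gain the dissertation pursues with matroid theory.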
76

Synthesis of PID controller from empirical data and guaranteeing performance specifications.

Lim, Dongwon 15 May 2009 (has links)
Determining the stability of characteristic polynomials has long played a very important role in control system engineering. This thesis addresses traditional control issues, such as stabilizing a system with a given controller by analyzing the characteristic polynomial, but from a new perspective. In particular, a Proportional-Integral-Derivative (PID) controller is considered as the fixed-structure controller. This research aims to obtain the set of controller gains satisfying given performance specifications, not from an exact mathematical model, but from empirical data of the system. Therefore, instead of a characteristic polynomial equation, a specially formulated characteristic rational function is investigated for the stability of the system, so that only the frequency data of the plant is used. Because performance satisfaction is the main focus, the characteristic rational function is treated throughout mainly in the complex-coefficient polynomial case rather than the real one, and the mathematical basis for the complex case is developed. For the performance specifications, phase margin is considered first, since it is a very significant indicator of the extent of the system's nominal stability (nominal performance). Second, satisfying H-infinity norm constraints is handled to make the closed-loop feedback control system more robust. Third, undefined but bounded external noise is assumed to exist when estimating the system's frequency data. Accounting for these uncertainties, a robust control system that meets a given phase margin specification is finally attained (robust performance). This thesis explains how the entire set of PID controller gains satisfying the performance specifications above is obtained. The approach makes full use of computational software, e.g. MATLAB®, and is developed in a systematic and automated computational manner. The result of synthesizing the PID controller is visualized through a graphical user interface.
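The model-free ingredient above — checking a performance specification directly from plant frequency data — can be illustrated with a small sketch. Given sampled frequency-response data P(jω), the loop gain of a candidate PID controller and its phase margin follow without any plant model; sweeping (kp, ki, kd) over a grid and keeping the gains whose margin meets the specification approximates the admissible gain set. The plant, grid, and crossover search below are illustrative assumptions, not the thesis's characteristic-rational-function formulation.

```python
import numpy as np

def phase_margin(w, P, kp, ki, kd):
    """Phase margin (degrees) of the loop C(jw)*P(jw), computed purely
    from sampled frequency-response data -- no plant model required.
    Returns None if |L| never crosses unity on the sampled grid."""
    jw = 1j * w
    L = (kp + ki / jw + kd * jw) * P   # PID controller times plant data
    mag = np.abs(L)
    for k in range(len(w) - 1):
        if (mag[k] - 1.0) * (mag[k + 1] - 1.0) <= 0.0:  # gain crossover
            return 180.0 + np.degrees(np.angle(L[k]))
    return None

# Illustrative "measured" data: the plant 1/(s+1)^2 sampled on a grid.
w = np.linspace(0.01, 10.0, 2000)
P = 1.0 / (1j * w + 1.0) ** 2
```

A gain triple would be accepted when `phase_margin(...)` meets or exceeds the specified margin; repeating the test over a gain grid sketches the feasible PID set that the thesis characterizes exactly.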
77

Effect Size Matters: Empirical Investigations to Help Researchers Make Informed Decisions on Commonly Used Statistical Techniques

Skidmore, Susana Troncoso 2009 December 1900 (has links)
This journal-article-format dissertation assessed the characteristics of effect sizes of commonly used statistical techniques. In the first study, the author examined the American Educational Research Journal (AERJ) and select American Psychological Association (APA) and American Counseling Association (ACA) journals to provide a historical account and synthesis of which statistical techniques were most prevalent in the fields of education and psychology. These reviews covered a total of 17,698 techniques recorded from 12,012 articles. Findings point to a general decrease in the use of the t-test and ANOVA/ANCOVA and a general increase in the use of regression and factor/cluster analysis. In the second study, the author compared the efficacy of one Pearson r² and seven multiple R² correction formulas as applied to the Pearson r². The author computed adjustment bias and precision under 108 conditions (6 population ρ² values, 3 shape conditions and 6 sample size conditions). The Pratt and the Olkin-Pratt Extended formulas most consistently provided unbiased estimates across the sample sizes, ρ² values and shape conditions investigated. In the third study, the author evaluated the robustness of estimates of practical significance (η², ε² and ω²) in one-way between-subjects univariate ANOVA. There were 360 simulation conditions (5 population Cohen's d values, 4 group proportion ratios, 3 shape conditions, 3 variance conditions, and 2 total sample size conditions) for each of three group configurations (2, 3 and 4 groups). Three indices of practical significance (η², ε², ω²) and two indices of statistical significance (Type I error and power) were computed for each of the 5,400,000 cells (5,000 replications x 360 simulation conditions x 3 group configurations). Simulation findings for η² under heterogeneous variance conditions indicated that for the k=2 and k=3 configurations, Cohen's d values up to 0.2 (up to 0.5 for k=4) tend to produce overestimates of the population η² values. Under heterogeneous variance conditions for ε² and ω² at Cohen's d = 0.0 and 0.2, the negative variance pairing overestimated and the positive variance pairing underestimated the parameter η², but at Cohen's d greater than or equal to 0.5, both the positive and negative variance pairings resulted in underestimated parameter η² values.
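The three practical-significance indices evaluated in the third study — eta squared (η²), epsilon squared (ε²) and omega squared (ω²) — follow from the one-way ANOVA sums of squares by standard textbook formulas. The sketch below computes them directly; the sample data are illustrative, not from the dissertation's simulations.

```python
import numpy as np

def effect_sizes(groups):
    """Compute (eta^2, epsilon^2, omega^2) for a one-way
    between-subjects ANOVA given a list of per-group samples."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ss_total = ss_between + ss_within
    ms_within = ss_within / (n_total - k)
    eta2 = ss_between / ss_total
    eps2 = (ss_between - (k - 1) * ms_within) / ss_total
    omega2 = (ss_between - (k - 1) * ms_within) / (ss_total + ms_within)
    return eta2, eps2, omega2
```

Note the built-in ordering ω² < ε² < η² when the between-groups effect exceeds its chance expectation, which is why η² tends to overestimate the population parameter in the conditions the study reports.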
78

Enhancing Use Case Description with Robustness Analysis

Chang, Chun-Chieh 10 July 2007 (has links)
The completeness and correctness of requirement modeling is a crucial factor affecting the success of system development. The use case diagram is the standard tool for modeling user requirements in object-oriented systems analysis and design. However, when constructing the sequence diagram at the platform-independent model (PIM) stage, identifying objects, operations and their relationships from the use case diagram is still not a straightforward task. Robustness analysis has been proposed to bridge this gap between user requirement modeling and PIM modeling. However, detailed guidelines for robustness analysis are lacking, even though they are important for designers seeking to enhance the completeness and correctness of user requirement modeling. To alleviate the foregoing problem, we propose that the use case diagram, activity diagram and robustness diagram be used together to represent user requirements. Once a use case diagram is constructed, an activity diagram is used to describe the activity flow and the associated input/output of each use case. Finally, robustness analysis, following the proposed guidelines, is used to help identify boundary, control and entity objects and to enhance the completeness of the user requirements. The outcome can then be used to construct a sequence diagram in the PIM. A real-world case is presented to illustrate the feasibility of the proposed method. With this methodology, system developers can enhance the completeness and correctness of user requirements efficiently and thereby reduce the risk of development failure.
79

Tooth Interior Fatigue Fracture & Robustness of Gears

MackAldener, Magnus January 2001 (has links)
The demands the automotive gear designer has to consider during the gear design process have changed. To design a gear that will not fail is still a challenging task, but now low noise is also a main objective. Both customers and legal regulations demand noise reduction of gears. Moreover, the quality of the product is more in focus than ever before. In addition, the gear design process itself must be inexpensive and quick. One can say that the gear designer faces a new design environment. The objective of this thesis is to contribute to the answer to some of the questions raised in this new design environment.

In order to respond to the new design situation, the gear designer must consider new phenomena of gears that were previously not a matter of concern. One such phenomenon is a new gear failure type, Tooth Interior Fatigue Fracture (TIFF). As the gear teeth are made more slender in an attempt to reduce the stiffness variation during the mesh cycle, thereby potentially reducing the noise, the risk of TIFF is increased. The phenomenon of TIFF is explored in detail (papers III-VI) through fractographic analysis, numerical crack initiation analysis using FEM, determination of residual stress by means of neutron diffraction measurements, testing for determining material fatigue properties, fracture mechanical FE-analysis, sensitivity analysis and the development of an engineering design method. The main findings of the analysis of TIFF are that TIFF cracks initiate in the tooth interior, TIFF occurs mainly in case-hardened idlers, the fracture surface has a characteristic plateau at approximately the mid-height of the tooth, and the risk of TIFF is more pronounced in slender gear teeth.

Along with the more optimised gear design, there is a tendency for the gear to be less robust. Low robustness, i.e., great variation in performance of the product, implies a high incidence of rejects, malfunction and/or bad-will, all of which may have a negative effect on company earnings. As the use of optimisation decreases the safety margins, greater attention has to be paid to guaranteeing the product's robustness. Moreover, in order to be cost-effective, the qualities of the gear must be verified early in the design process, implying an extended use of simulations. In this thesis, two robustness analyses are presented in which the analysis tool is simulation. The first considers robust tooth root bending fatigue strength as the gear is exposed to mounting errors; the second considers robust noise characteristics of a gear exposed to manufacturing errors, varying torque and wear. Both of these case studies address the problem of robustness of gears and demonstrate how it can be estimated by use of simulations. The main result from the former robustness analysis is that wide gears are more sensitive to mounting errors, while the latter analysis showed that to achieve robust noise characteristics a gear should have large helix angles, and some profile and lead crowning should be introduced. The transverse contact ratio is a trade-off factor in the sense that low average noise levels and low scatter in noise due to perturbations cannot both be achieved.

Keywords: robust design, Taguchi method, gear, idler, simulations, Finite Element Method, Tooth Interior Fatigue Fracture, TIFF
80

SUPPLY CHAIN SCHEDULING FOR MULTI-MACHINES AND MULTI-CUSTOMERS

2015 September 1900 (has links)
Manufacturing today is no longer a single point of production activity but a chain of activities from the acquisition of raw materials to the delivery of products to customers. This chain is called the supply chain. In this chain of activities, a generic pattern is: processing of goods (by manufacturers) and delivery of goods (to customers). This thesis concerns the scheduling of this generic supply chain. Two performance measures considered for evaluating a particular schedule are time and cost. Time refers to the span from when the manufacturer receives the request for goods from the customer to when the delivery tool (e.g., vehicle) is back at the manufacturer. Cost refers to the delivery cost only (the production cost is considered fixed). A good schedule thus has short time and low cost; yet the two may conflict. This thesis studies algorithms for the supply chain scheduling problem that achieve a balance of short time and low cost. Three situations of the supply chain scheduling problem are considered: (1) a single machine and multiple customers, (2) multiple machines and a single customer, and (3) multiple machines and multiple customers. For each situation, different vehicle characteristics and delivery patterns are considered. Properties of each problem are explored, and algorithms are developed, analysed and tested (via simulation). Further, the robustness of the scheduling algorithms under uncertainty and their resilience under disruptions are also studied. Finally, a case study on medical resource supply in an emergency situation is conducted to illustrate how the developed algorithms can be applied to a practical problem. There are both technical merits and broader impacts of this thesis study. First, the problems studied are all new, with particular new attributes such as being on-line, multiple-customer and multiple-machine, individual-customer oriented, and having limited delivery-tool capacity. Second, the notion of using robustness and resilience to evaluate a scheduling algorithm is, to the best of the author's knowledge, new and may open a new avenue for the evaluation of any scheduling algorithm. In the domain of manufacturing and service provision in general, this thesis provides an effective and efficient tool for managing the operation of production and delivery in situations where demand is released without any prior knowledge (i.e., on-line demand). This situation appears in many manufacturing and service applications.
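The time/cost conflict described above can be made concrete with a toy single-machine, single-customer schedule evaluator (our own illustration; the thesis's three problem settings are richer): dispatching every finished job immediately minimizes delivery-completion times but pays for many trips, while batching jobs onto one trip does the opposite. The batching policy, trip model, and parameters are illustrative assumptions.

```python
def evaluate_schedule(proc_times, batch_size, trip_time, trip_cost):
    """Evaluate a single-machine, single-customer schedule where
    finished jobs are delivered in batches of `batch_size` by one
    vehicle (round trip = trip_time, one-way = trip_time / 2).
    Returns (sum of per-job delivery-completion times, delivery cost)."""
    t = 0.0                  # machine clock
    vehicle_free = 0.0       # when the vehicle is back at the plant
    done = []                # processed jobs awaiting delivery
    total_time, cost = 0.0, 0.0
    for i, p in enumerate(proc_times):
        t += p
        done.append(t)
        last = (i == len(proc_times) - 1)
        if len(done) == batch_size or last:
            depart = max(done[-1], vehicle_free)   # wait for vehicle return
            arrive = depart + trip_time / 2.0      # jobs reach the customer
            total_time += arrive * len(done)
            vehicle_free = depart + trip_time
            cost += trip_cost
            done = []
    return total_time, cost
```

On four unit-time jobs with a one-unit round trip, immediate dispatch (`batch_size=1`) yields a smaller time objective at four times the delivery cost of a single full batch, which is exactly the trade-off the scheduling algorithms are designed to balance.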
