281

INCOMPLETE PAIRWISE COMPARISON MATRICES AND OPTIMIZATION TECHNIQUES

Tekile, Hailemariam Abebe 08 May 2023 (has links)
Pairwise comparison matrices (PCMs) play a key role in multi-criteria decision making, especially in the analytic hierarchy process. An expert may need to compare alternatives based on various criteria. However, for a variety of reasons, such as lack of time or insufficient knowledge, it may happen that the expert cannot provide judgments on all pairs of alternatives. In this case, an incomplete pairwise comparison matrix is formed. In the first research part, an optimization algorithm is proposed for the optimal completion of an incomplete PCM. It is intended to numerically minimize a constrained eigenvalue problem, in which the objective function is difficult to write explicitly in terms of the variables. Numerical simulations are carried out to examine the performance of the algorithm. The simulation results show that the proposed algorithm is capable of solving the constrained eigenvalue minimization problem. In the second part, a comparative analysis of eleven completion methods is carried out. The similarity of the eleven completion methods is analyzed on the basis of numerical simulations and hierarchical clustering. Numerical simulations are performed for PCMs of different orders, considering various numbers of missing comparisons. The results suggest the existence of a cluster of five extremely similar methods, and a method significantly dissimilar from all the others. In the third part, the filling-in patterns (arrangements of known comparisons) of incomplete PCMs are investigated based on their graph representation, under given conditions: regularity, diameter, and number of vertices, but without prior information. Regular and quasi-regular graphs with minimal diameter are proposed. Finally, the simulation results indicate that the proposed graphs indeed provide better weight vectors than alternative graphs with the same number of comparisons. The contributions of this research part include a list of (quasi-)regular graphs with diameters of 2 and 3, and vertices from 5 up to 24.
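To make the completion idea concrete, the following is a minimal sketch of the constrained eigenvalue minimization described above, written in Python with NumPy/SciPy. The 4 × 4 matrix, the single missing pair, and all variable names are illustrative assumptions, not the thesis's algorithm or data; the missing entry is parameterized as exp(x) so that positivity and reciprocity hold automatically.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 4x4 PCM with one missing reciprocal pair at positions (0, 3) and (3, 0).
A = np.array([
    [1.0,    2.0, 4.0, np.nan],
    [0.5,    1.0, 2.0, 2.0],
    [0.25,   0.5, 1.0, 1.0],
    [np.nan, 0.5, 1.0, 1.0],
])
missing = [(0, 3)]  # upper-triangular positions only; reciprocals are implied

def completed(x):
    """Fill missing entries with exp(x) so positivity and reciprocity hold."""
    B = A.copy()
    for val, (i, j) in zip(np.exp(x), missing):
        B[i, j] = val
        B[j, i] = 1.0 / val
    return B

def lambda_max(x):
    """Objective: real part of the principal eigenvalue of the completed matrix."""
    return np.max(np.linalg.eigvals(completed(x)).real)

# Derivative-free search, since the objective is not explicit in the variables.
res = minimize(lambda_max, x0=np.zeros(len(missing)), method="Nelder-Mead")
B_opt = completed(res.x)
eigvals, eigvecs = np.linalg.eig(B_opt)
w = eigvecs[:, np.argmax(eigvals.real)].real
print("optimal missing entry:", np.exp(res.x))
print("minimal principal eigenvalue:", lambda_max(res.x))
print("priority weights:", w / w.sum())
```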
282

A Scalable Leader Based Consensus Algorithm

Gulati, Ishaan 10 August 2023 (has links)
Present-day commonly used systems like Cassandra, Spanner, and CockroachDB require high availability and strict consistency guarantees. High availability is attained through redundancy. In the field of computing, redundancy is attained through state machine replication. Protocols like Raft, Multi-Paxos, ZAB, or other variants of Paxos are commonly used to achieve state machine replication. These protocols choose one of the multiple processes running on various machines in a distributed setting as the leader. The leader is responsible for client interactions, replicating client operations on all the followers, and maintaining a consistent view across the system. In these protocols, the leader is more loaded than other nodes or followers in the system, making the leader a significant scalability bottleneck for multi-datacenter and edge deployments. Hardware and network heterogeneity further degrade the overall commit throughput and latency under majority agreement. This work aims to reduce the load on the leader by using reduced, dynamic, latency-aware flexible quorums while maintaining strict correctness guarantees like linearizability. In this thesis, we implement FDRaft, a protocol that uses dynamic reduced-size commit quorums to reduce the leader's load and improve throughput and latency. The commit quorums are computed based on an exponentially weighted moving average of the followers' time to respond to the leader, accounting for the heterogeneity in hardware and network. The reduced commit quorum requires a bigger election quorum, but elections rarely happen, and a single leader can serve for significant durations. We evaluate this protocol using a key-value store built on FDRaft and Raft and compare multi-datacenter and edge deployments. The evaluation shows 2x improved throughput and around 55% improved latency over Raft during normal operations, and a 45% improvement over Raft with vanilla flexible quorums under failure conditions. / M.S. / In our day-to-day life, we rely heavily on different internet applications, be it Instagram for sharing pictures, Amazon for shopping, DoorDash for food orders, Spotify for listening to music, or Uber for traveling. These applications share many commonalities, like the scale at which they operate, maintaining strict latency guarantees, high availability to serve the users, and using databases to maintain shared state. The data is replicated across multiple servers to provide fault tolerance against failures. The replication across multiple servers is achieved through state-machine replication. In state-machine replication, multiple servers start with the same initial state and perform operations in the same order to reach the same final state. This process of replication is achieved through a consensus algorithm. Consensus means agreement, and consensus algorithms are used to reach an agreement on a particular value. Raft, Multi-Paxos, and other variants of Paxos are the commonly used consensus algorithms to achieve agreement on a particular value in a distributed setting. In these algorithms, one of the servers is chosen as the leader, responsible for client interactions and for replicating and maintaining the same state across all the servers, even when faced with server and network failures. Every time the leader receives a client operation, it starts the consensus process by forwarding the client request to all the servers and committing the client request after receiving an agreement from the majority.
As the leader does most of the work, it is more loaded than other servers and becomes a significant scalability bottleneck. The leader bottleneck becomes more evident in multi-datacenter and edge deployments. Hardware and network heterogeneity also severely affect the overall commit throughput and latency under majority agreement. In this thesis, we reduce the load on the leader by building a smaller, dynamic commit quorum with latency-aware server selection, based on an exponentially weighted moving average of the followers' response times to the leader's requests, without compromising safety and liveness properties. Our design also improves throughput and commit latency. We evaluate this protocol against multiple workloads and failure conditions and find that it outperforms Raft by 2x in throughput and around 55% in latency during normal operations. It also shows a 45% improvement in throughput and latency over Raft with vanilla flexible quorums under failure conditions.
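As an illustration of the latency-aware quorum idea (not the FDRaft implementation itself), the sketch below keeps an exponentially weighted moving average (EWMA) of each follower's response time and commits on the fastest followers; the class name, smoothing factor, and cluster layout are assumptions made for this example. The election-quorum sizing reflects the flexible-quorum rule that commit and election quorums must intersect, which is why a smaller commit quorum forces a larger election quorum.

```python
# Hedged sketch: EWMA-based, latency-aware selection of a reduced commit quorum.
class QuorumTracker:
    def __init__(self, followers, alpha=0.2, commit_size=2):
        self.alpha = alpha                      # EWMA smoothing factor
        self.commit_size = commit_size          # followers needed besides the leader
        self.ewma = {f: None for f in followers}

    def record_response(self, follower, rtt_ms):
        """Update the follower's EWMA after it acknowledges a replicated entry."""
        prev = self.ewma[follower]
        self.ewma[follower] = rtt_ms if prev is None else (
            self.alpha * rtt_ms + (1 - self.alpha) * prev)

    def commit_quorum(self):
        """Return the currently fastest followers (unmeasured ones sort last)."""
        ranked = sorted(self.ewma, key=lambda f: float("inf")
                        if self.ewma[f] is None else self.ewma[f])
        return ranked[:self.commit_size]

    def election_quorum_size(self, n_servers):
        """Election and commit quorums must intersect, so a reduced commit
        quorum (leader + commit_size) implies a larger election quorum."""
        return n_servers - (self.commit_size + 1) + 1


# Example: 5-server cluster (leader + 4 followers) across two datacenters.
tracker = QuorumTracker(["f1", "f2", "f3", "f4"], commit_size=2)
for follower, rtt in [("f1", 3.0), ("f2", 5.0), ("f3", 40.0), ("f4", 45.0)]:
    tracker.record_response(follower, rtt)
print(tracker.commit_quorum())            # ['f1', 'f2'] -> nearby followers commit
print(tracker.election_quorum_size(5))    # 3 -> larger election quorum as trade-off
```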
283

Consistency and Uniform Bounds for Heteroscedastic Simulation Metamodeling and Their Applications

Zhang, Yutong 05 September 2023 (has links)
Heteroscedastic metamodeling has gained popularity as an effective tool for analyzing and optimizing complex stochastic systems. A heteroscedastic metamodel provides an accurate approximation of the input-output relationship implied by a stochastic simulation experiment whose output is subject to input-dependent noise variance. Several challenges remain unsolved in this field. First, in-depth investigations into the consistency of heteroscedastic metamodeling techniques, particularly from the sequential prediction perspective, are lacking. Second, sequential heteroscedastic metamodel-based level-set estimation (LSE) methods are scarce. Third, the increasingly high computational cost required by heteroscedastic Gaussian process-based LSE methods in the sequential sampling setting is a concern. Additionally, when constructing a valid uniform bound for a heteroscedastic metamodel, the impact of noise variance estimation is not adequately addressed. This dissertation aims to tackle these challenges and provide promising solutions. First, we investigate the information consistency of a widely used heteroscedastic metamodeling technique, stochastic kriging (SK). Second, we propose SK-based LSE methods leveraging novel uniform bounds for input-point classification. Moreover, we incorporate the Nyström approximation and a principled budget allocation scheme to improve the computational efficiency of SK-based LSE methods. Lastly, we investigate empirical uniform bounds that take into account the impact of noise variance estimation, ensuring an adequate coverage capability. / Doctor of Philosophy / In real-world engineering problems, understanding and optimizing complex systems can be challenging and prohibitively expensive. Computer simulation is a valuable tool for analyzing and predicting system behaviors, allowing engineers to explore different scenarios without relying on costly physical prototypes. However, the increasing complexity of simulation models leads to a higher computational burden. Metamodeling techniques have emerged to address this issue by accurately approximating the system performance response surface based on limited simulation experiment data to enable real-time decision-making. Heteroscedastic metamodeling goes further by considering varying noise levels inherent in simulation outputs, resulting in more robust and accurate predictions. Among various techniques, stochastic kriging (SK) stands out by striking a good balance between computational efficiency and statistical accuracy. Despite extensive research on SK, challenges persist in its application and methodology. These include little understanding of SK's consistency properties, an absence of sequential SK-based algorithms for level-set estimation (LSE) under heteroscedasticity, and the increasingly low computational efficiency of SK-based LSE methods in implementation. Furthermore, a precise construction of uniform bounds for the SK predictor is also missing. This dissertation aims to address these challenges. First, the information consistency of SK from a prediction perspective is investigated. Then, sequential SK-based procedures for LSE in stochastic simulation, incorporating novel uniform bounds for accurate input-point classification, are proposed. Furthermore, a popular approximation technique is incorporated to enhance the computational efficiency of the SK-based LSE methods. Lastly, empirical uniform bounds are investigated considering the impact of noise variance estimation.
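For readers unfamiliar with stochastic kriging, the sketch below shows the basic heteroscedastic ingredient on a toy problem: the noise variance of each design point's sample mean, estimated from replications, is added to the diagonal of the spatial covariance matrix before prediction. The kernel, hyperparameters, and data are invented for illustration and are not taken from the dissertation.

```python
import numpy as np

def gauss_kernel(x1, x2, tau2=1.0, theta=5.0):
    """Squared-exponential spatial covariance between two 1-D point sets."""
    return tau2 * np.exp(-theta * (x1[:, None] - x2[None, :]) ** 2)

rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 8)                  # design points
n_reps = 20                                   # simulation replications per point
true_mean = np.sin(2 * np.pi * X)
noise_sd = 0.1 + 0.5 * X                      # noise grows with x (heteroscedastic)
Y = true_mean[:, None] + noise_sd[:, None] * rng.standard_normal((8, n_reps))

y_bar = Y.mean(axis=1)                        # sample means at design points
s2 = Y.var(axis=1, ddof=1) / n_reps           # variance of the sample means

beta0 = y_bar.mean()                          # crude constant trend estimate
Sigma = gauss_kernel(X, X) + np.diag(s2)      # spatial + intrinsic (noise) covariance

def sk_predict(x_new):
    """SK-style mean prediction; noisy observations are shrunk toward the trend."""
    k = gauss_kernel(np.atleast_1d(x_new), X)
    return beta0 + k @ np.linalg.solve(Sigma, y_bar - beta0)

grid = np.linspace(0, 1, 5)
print(np.round(sk_predict(grid), 3))
```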
284

An Exploratory Sequential Study of Chinese EFL Teachers' Beliefs and Practices in Reading and Teaching Reading

Gao, Yang 23 August 2018 (has links)
No description available.
285

Examining Inference Processes Underlying Knowledge Complexity Effects on Attitude-Behavior Consistency

Gretton, Jeremy David 03 September 2013 (has links)
No description available.
286

Investigating the Human Element in Corporate Policies

Yonker, Scott E. 30 August 2010 (has links)
No description available.
287

Development of a Virtual Reality Testbed to Study Inconsistencies among Bridge Inspectors

Luis David Fernandez Vasquez (11564227) 17 November 2023 (has links)
The present condition of US infrastructure requires a data-driven, risk-based approach to asset management. In the case of bridges, inspectors in every state visit these structures and collect data, and, based on the information they report, departments of transportation evaluate bridge conditions, predict deterioration, and make repair and retrofit decisions. However, current inspection practices are manual and subjective, which can result in inaccurate assessments, with a significant impact on the allocation of economic resources and on work schedules for the maintenance and operation of bridges. Furthermore, inspectors' capacity to detect defects or deficiencies can be inconsistent due to several cognitive and physical factors, such as experience or eyesight.

This thesis describes the development of a Virtual Reality (VR) application, supported by advanced computer graphics, in which users are engaged in immersive, photo-realistic 3D environments. It provides a testbed to study the variability among bridge inspectors. The outcome will provide statistical information that will be used to enhance current inspection practices.

The use of VR technology addresses current limitations of inspection evaluation, such as multiple districts and different types of structures, the logistics of people and equipment, and weather conditions. Besides improving inspection training, time and cost savings are expected, along with safer conditions and innovative training tools. The final product is a state-of-the-art VR setup with test models of concrete and steel bridges under controlled conditions that can be extended to other assessment needs in the future. The system runs on a high-resolution tethered headset supported by a gaming laptop to ease portability across Indiana districts.

The VR-based application comprises two bridge modules: one for a steel truss bridge and one for a multi-girder concrete bridge. The 3D bridge models are synthetically recreated using reference images from two case studies. Through constant feedback and multiple demonstration sessions with Indiana Department of Transportation (INDOT) bridge inspectors, the bridge components, the defects and their severity, and the inspection tools to be modeled are defined. Nine types of defects are modeled, including efflorescence, cracking, corrosion, spalling, and delamination. Eight inspection tools are also recreated in the VR scene, such as a chain drag, hammer, scratch brush, flashlight, and tape measure.

After completing the inspection in the VR scene, users are required to fill out an online survey, one for each bridge. Condition rating numbers and comments on the state of the deck, superstructure, and substructure are requested. In addition, factors such as years of experience and work location are recorded to identify inconsistency patterns when compared with the rating numbers. The VR application also offers the possibility of taking screenshots that inspectors can later attach to their surveys to complement their reports. Statistical analysis, including pie charts and histograms, is automatically generated, giving a multi-faceted approach to evaluating consistency among inspectors.
288

Consistency in Web Design from a User Perspective

Axelsson, Anton January 2012 (has links)
Within Human-Computer Interaction, it has long been speculated that inconsistency impedes the user experience. However, defining and categorising consistency has been shown to be a challenging task. Several studies on the subject have categorised consistency with mixed perspectives of the system, its developer, and its user. The present thesis considers only the user perspective, and categorises consistency into Perceptual, Semantic, and Procedural consistency. Twenty-one subjects with moderate experience in using the web participated in an experiment designed to explore the effect inconsistency might have on usability. In order to test both main and interaction effects between the three proposed consistencies, the experiment was based on a full 2 × 2 × 2 factorial design for repeated measures. The participants' task was to use eight partly different versions of a mock-up web shop in which a subject-selection drop-down menu was experimentally manipulated. A three-way analysis of variance with a covariate revealed that Perceptual and Procedural inconsistency affected user performance negatively. It also indicated that inhibitory interaction effects occurred between some of the (in)consistencies. The results have important implications for web developers in designing usable applications. By adopting a user perspective, developers can help users avoid performing faulty actions.
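A minimal sketch of the experimental structure described above: a full 2 × 2 × 2 within-subject design in which every participant is exposed to all eight prototype versions, followed by a three-way repeated-measures ANOVA. The response-time model and all numbers are simulated purely to show the layout of such a design; the thesis additionally included a covariate, which this sketch omits.

```python
import itertools
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

factors = ["perceptual", "semantic", "procedural"]
conditions = list(itertools.product([0, 1], repeat=3))   # 8 prototype versions
rng = np.random.default_rng(1)

rows = []
for subject in range(21):                                # 21 participants
    for perc, sem, proc in conditions:
        # Assume (for the simulation only) that perceptual and procedural
        # inconsistency slow task completion; semantic has no simulated effect.
        rt = 10 + 2 * perc + 1 * proc + rng.normal(0, 1.5)
        rows.append({"subject": subject, "perceptual": perc,
                     "semantic": sem, "procedural": proc, "rt": rt})
data = pd.DataFrame(rows)

# Three-way repeated-measures ANOVA on the simulated response times.
res = AnovaRM(data, depvar="rt", subject="subject", within=factors).fit()
print(res)
```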
289

Consistency and efficiency in continuous-time system identification

González, Rodrigo A. January 2020 (has links)
Continuous-time system identification deals with the problem of building continuous-time models of dynamical systems from sampled input and output data. In this field, there are two main approaches: indirect and direct. In the indirect approach, a suitable discrete-time model is first determined, and then it is transformed into continuous time. On the other hand, the direct approach obtains a continuous-time model directly from the sampled data. In both approaches there exists a dichotomy between discrete-time data and continuous-time models, which can induce robustness issues and complications in the theoretical analysis of identification algorithms. These difficulties are addressed in this thesis. First, we consider the indirect approach to continuous-time system identification. For a zero-order-hold sampling mechanism, this approach usually leads to a transfer function estimate with relative degree one, independent of the relative degree of the strictly proper true system. Inspired by the indirect prediction error method, we propose an indirect-approach estimator that enforces the desired number of poles and zeros in the continuous-time transfer function estimate, and show that the estimator is consistent and asymptotically efficient. A robustification of this method is also developed, which additionally guarantees that the estimated models are stable. In the second part of the thesis, we analyze asymptotic properties of the Simplified Refined Instrumental Variable method for Continuous-time systems (SRIVC), which is one of the most popular direct identification methods. This algorithm applies adaptive prefiltering to the sampled input and output, which requires assumptions on the intersample behavior of the signals. We present a comprehensive analysis of the consistency and asymptotic efficiency of the SRIVC estimator while taking into account the intersample behavior of the input signal. Our results show that the SRIVC estimator is generically consistent when the intersample behavior of the input is known exactly and subsequently used in the implementation of the algorithm, and we give conditions under which consistency is not achieved. In terms of statistical efficiency, we compute the asymptotic Cramér-Rao lower bound for an output error model structure with Gaussian noise, and derive the asymptotic covariance of the SRIVC estimates. We conclude that the SRIVC estimator is asymptotically efficient under mild conditions, and that this property can be lost if the intersample behavior of the input is not carefully accounted for in the SRIVC procedure. Moreover, we propose and analyze the statistical properties of an extension of SRIVC that is able to deal with input signals that cannot be interpolated exactly via hold reconstructions. The proposed estimator is generically consistent for any input reconstructed using zero- or first-order-hold devices, and we show that it is generically consistent for continuous-time multisine inputs as well. Comparisons with the Maximum Likelihood technique and an analysis of the iterations of the method are provided, in order to reveal the influence of the intersample behavior of the output and to propose new robustifications to the SRIVC algorithm.
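As a small illustration of the indirect route (discrete-time model first, continuous-time model second), the sketch below discretizes a toy state-space system under zero-order-hold sampling and then recovers the continuous-time matrices with the matrix logarithm; the system, sampling period, and names are assumptions made for this example and do not reproduce the estimators studied in the thesis.

```python
import numpy as np
from scipy.linalg import logm
from scipy.signal import cont2discrete

h = 0.1                                    # sampling period [s]
A = np.array([[0.0, 1.0], [-4.0, -1.2]])   # "true" continuous-time system
B = np.array([[0.0], [1.0]])

# Zero-order-hold discretization; in the indirect approach this discrete-time
# model would instead be identified from sampled input-output data.
Ad, Bd, *_ = cont2discrete((A, B, np.eye(2), np.zeros((2, 1))), h, method="zoh")

# Back-transformation to continuous time (A = logm(Ad)/h, B from the ZOH relation).
A_hat = np.real(logm(Ad)) / h
B_hat = np.linalg.solve(Ad - np.eye(2), A_hat @ Bd)

print(np.allclose(A_hat, A), np.allclose(B_hat, B))   # True True (up to rounding)
```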
290

Is my musculoskeletal model complex enough? The implications of six degree of freedom lower limb joints for dynamic consistency and biomechanical relevance

Pearl, Owen Douglas January 2020 (has links)
Studies have shown that modeling errors due to unaccounted-for soft-tissue deformations – known as soft-tissue artifact (STA) – can reduce the efficacy and usefulness of musculoskeletal simulations. Recent work has proven that adding degrees of freedom (DOF) to the joint definitions of a musculoskeletal model's lower limbs can significantly change the prediction of an individual's kinematics and dynamics while simultaneously improving estimates of their mechanical work. This indicates that additional modeling complexity may mitigate the effects of STA. However, it remains to be determined whether adding DOF to the lower limb joints can impact a model's satisfaction of Newton's Second Law of Motion, or whether a specific number of DOF must be incorporated in order to produce the most biomechanically accurate simulations. To investigate these unknowns, I recruited ten subjects with varying body mass indices (BMI) and recorded subject walking data at three speeds normalized by Froude number (Fr) using optical motion capture and an instrumented treadmill (eight males, two females; mean ± s.d.; age 21.6 ± 2.87 years; BMI 25.1 ± 5.1). Then, I added DOF to the lower limb joints of OpenSim's 23 DOF lower body and torso model until it minimized the magnitude of the pelvis residual forces and moments for a single, representative subject trial (BMI = 24.0, Fr = 0.15). These artificial residual forces and moments are applied at the pelvis to maintain the model's orientation in space by satisfying Newton's Second Law. Finally, I simulated all 30 trials with both the original and the edited model and observed how the biomechanical predictions of the two models differed over the range of subject BMIs and walking speeds. After applying both the original and the edited model to the entire data set, I found that the edited model resulted in statistically lower (α = 0.05) residual forces and moments in four of the six directions. Then, after investigating the impact of changes in BMI and Froude number on these residual reductions, I found that two of the six directions exhibited statistically significant correlations with Froude number, while none of the six showed a correlation with BMI. Therefore, adding DOF to the lower limb joints can improve a model's dynamic consistency and combat the effects of STA, and simulations of higher speed behaviors may benefit more from additional DOF. For BMI, it remains to be determined if a higher BMI indicates greater potential for residual reduction, but it was shown that this method of tuning the model for one representative subject was agnostic to BMI. Overall, the method of tuning the model for one representative subject was found to be quite limited. There were multiple subject trials for which reduced residuals corresponded to drastic changes in kinematic and dynamic estimates until they were no longer representative of normal human walking. Therefore, it is possible to improve dynamic consistency by adding DOF to the lower limb joints. However, for biomechanically relevant estimates to be consistently preserved and soft-tissue artifact to be completely minimized, subject-specific model tuning is likely necessary. / Mechanical Engineering
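For context, a brief sketch of the speed normalization and the correlation test mentioned above: walking speed is expressed as a Froude number, Fr = v^2 / (g * leg_length), and the residual reduction is correlated against Fr. All values below are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

g = 9.81                                    # gravitational acceleration [m/s^2]
leg_length = 0.92                           # representative leg length [m] (assumed)
speeds = np.array([0.8, 1.0, 1.2, 1.4, 1.6, 1.8])        # walking speeds [m/s]
froude = speeds ** 2 / (g * leg_length)

# Hypothetical reduction in pelvis residual force (original minus edited model).
residual_reduction = np.array([2.1, 3.0, 3.8, 5.2, 6.9, 8.4])   # [N]

r, p = pearsonr(froude, residual_reduction)
print(f"Fr range: {froude.min():.3f}-{froude.max():.3f}")
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```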
