261

VALUE STREAM MAPPING – A CASE STUDY OF CONSTRUCTION SUPPLY CHAIN OF PREFABRICATED MASSIVE TIMBER FLOOR ELEMENT

Marzec, Cindy, Gustavsson, Joachim January 2007
The purpose of this Master's thesis is to study how the value stream mapping concept can be applied along the construction supply chain for prefabricated massive timber floor elements. Identification and qualification of waste are the starting points for proposing suggestions on how to reduce and/or eliminate it. In order to use value stream mapping along the construction supply chain, pertinent data has been collected and analyzed. To conduct the value stream mapping, the first three steps of the lean thinking principles in construction have been followed. The first step aims at defining the customer and his value, as well as the value for the delivery team and how it is specified in the product. The second step identifies the value stream, which is done by defining the resources and activities needed to manufacture, deliver and install the floor elements; this is conducted using the VSMM methodology. In addition, the current practice should be standardized and key component suppliers should be defined and located. The third and last step identifies non-value-adding activities, in other words waste, and suggestions on how to remove and/or reduce waste have been reached. Waste from product defects, transportation waste and waste of waiting were found in the construction supply chain. Propositions to reduce and/or eliminate waste were to implement more careful planning of the manufacturing process and production schedule, to apply lean production principles in the manufacturing facility, and to decrease and/or eliminate storage time. The study has shown that in the supply chain of massive timber floor elements at Limnologen there is a large potential to lower costs and increase customer value, as value-added time accounted for only 2% of the total time.
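As a rough illustration of the value-added share reported above, the sketch below computes the ratio of value-adding time to total lead time for a handful of supply chain activities; the activity names and durations are illustrative assumptions, not data from the Limnologen study.

```python
# Minimal sketch: value-added share of total lead time along a supply chain.
# Activity names and durations are assumed for illustration only.
activities = {
    "element manufacturing": (16.0, True),    # (hours, value-adding?)
    "storage at factory":    (320.0, False),
    "transport to site":     (8.0, False),
    "waiting on site":       (140.0, False),
    "installation":          (6.0, True),
}

total_time = sum(hours for hours, _ in activities.values())
value_added = sum(hours for hours, adds_value in activities.values() if adds_value)
print(f"value-added share: {value_added / total_time:.1%}")  # a few percent, echoing the 2% finding
```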
262

Rise Over Thermal Estimation Algorithm Optimization and Implementation

Irshad, Saba, Nepal, Purna Chandra January 2013
The uplink load used for scheduling Enhanced Uplink (E-UL) channels determines the achievable data rate in Wideband Code Division Multiple Access (WCDMA) systems; its accurate measurement is therefore of prime significance. The uplink load, also known as Rise over Thermal (RoT), is the quotient of the Received Total Wideband Power (RTWP) and the thermal noise power floor. It is a major parameter that is calculated at each Transmission Time Interval (TTI) to maintain cell coverage and stability. The RoT algorithm for evaluating the uplink load is considered one of the most complex and resource-demanding among the Radio Resource Management (RRM) algorithms running in a radio system. The main focus of this thesis is to study the RoT algorithm presently deployed in radio units and its possible optimization by reducing the complexity of the algorithm in terms of memory usage and processing power. The calculation of RoT comprises three main blocks: a Kalman filter, a noise floor estimator and the RoT computation. After analyzing the complexity of each block, it has been established that the noise floor estimator block consumes most of the processing power and produces the peak processor load, since it involves many complex floating-point calculations. The other blocks do not affect the processing load significantly. It was also observed that some block updates can be reduced in order to decrease the average load on the processor. Three techniques are proposed for reducing the complexity of the RoT algorithm: two for the reduction of peak load and one for the reduction of average load. For reducing the peak load, an interpolation approach is used instead of performing transcendental mathematical calculations. Also, the calculations involving noise floor estimation are spread over several TTIs, keeping in view that the estimation is not time critical. For the reduction of average load, the update rate of the Kalman filter block is reduced. Based on these optimization steps, a modified algorithm for RoT computation with reduced complexity is proposed. The proposed changes are tested by means of MATLAB simulations, demonstrating improved performance with consistent output results. Finally, an arithmetic operation count is done using the hardware manual of the PowerPC (PPC405) used in Platform 4, which gives a rough estimate of the percentage decrease in calculations after optimization.
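For orientation, a minimal sketch of the RoT quotient described above, together with the interpolation idea for avoiding transcendental calls, is shown below; the table range and sample values are assumptions, not parameters from the deployed algorithm.

```python
import numpy as np

def rot_db(rtwp_w, noise_floor_w):
    """Rise over Thermal: RTWP divided by the thermal noise power floor, in dB."""
    return 10.0 * np.log10(rtwp_w / noise_floor_w)

# Peak-load reduction idea sketched with assumed values: replace the
# transcendental log10 with linear interpolation in a precomputed table.
ratio_grid = np.linspace(1.0, 100.0, 256)      # assumed operating range of RTWP / noise floor
rot_table = 10.0 * np.log10(ratio_grid)

def rot_db_interp(rtwp_w, noise_floor_w):
    return np.interp(rtwp_w / noise_floor_w, ratio_grid, rot_table)

print(rot_db(2e-13, 1e-13), rot_db_interp(2e-13, 1e-13))  # both approximately 3.01 dB
```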
263

Experimental testing of a steel gravity frame with a composite floor under interior column loss

Hull, Lindsay A. 21 November 2013
Progressive collapse research aims to characterize and quantify the behavior of different structural systems in events of extreme local damage caused by bombings to improve the performance of targeted structures and to protect occupants. The focus of the research program described herein is the performance of steel gravity frame structures with composite floor systems in column loss scenarios. The goal of the project is to contribute to the development of rational design guidelines for progressive collapse resistance and to assess any potential weaknesses in current design standards. This thesis presents the results of a series of tests performed on a steel frame structure with simple framing connections and a composite floor slab under interior column loss. The specimen was designed and constructed in accordance with typical design practices and was subjected to increasing uniform floor loads after static removal of the central column. No significant structural damage was observed up to a load equivalent to the ultimate gravity design load. Further testing was performed after the deliberate reduction of the capacity of the steel framing connections, ultimately resulting in total collapse of the specimen.
264

Hydrothermally altered basalts from the Mariana Trough

Trembly, Jeffrey Allen January 1982
No description available.
265

A study of the behavior of LDPC decoders in the Error Floor region

Γιαννακοπούλου, Γεωργία 07 May 2015
In BER plots, which are used to evaluate a decoding system, the Error Floor region is sometimes observed at low noise levels: a region in which the decoder's performance no longer improves as the noise is reduced. In software simulations the Error Floor is usually not visible, so the main goals are predicting the decoder's behavior and, more generally, improving its performance in that region. In this thesis we study the conditions that result in decoding failures for specific codewords and in the activation of Trapping Sets, structures in a code that appear to be the main cause of the Error Floor. We use the AWGN channel model and a linear block code with a low-density parity-check (LDPC) matrix, and iterative decoding simulations are executed by splitting the parity-check matrix into layers (Layered Decoding) and by using Message Passing algorithms. We propose and analyze three new modified algorithms and study the effects caused by data quantization. Finally, we determine the effect of noise on the decoding procedure and develop a semi-analytical model for calculating the probability of a Trapping Set activation and the error probability during transmission.
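As one concrete example of the data quantization effects studied in the thesis, the sketch below applies a uniform saturating quantizer to LLR messages; the bit width and clipping level are illustrative assumptions rather than the values used in the simulations.

```python
import numpy as np

def quantize_llr(llr, bits=4, max_mag=8.0):
    """Uniform saturating quantizer for LLR messages (bit width and clipping
    level are assumed for illustration)."""
    levels = 2 ** (bits - 1) - 1            # symmetric signed range
    step = max_mag / levels
    return np.clip(np.round(llr / step), -levels, levels) * step

# Coarse quantization clips strong beliefs, one mechanism that can aggravate
# trapping-set behavior in the error floor region.
llr = np.array([-12.3, -0.4, 0.05, 3.7, 9.9])
print(quantize_llr(llr))
```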
266

Protograph-Based Generalized LDPC Codes: Enumerators, Design, and Applications

Abu-Surra, Shadi Ali January 2009
Among the recent advances in the area of low-density parity-check (LDPC) codes, protograph-based LDPC codes have the advantages of a simple design procedure and highly structured encoders and decoders. These advantages can also be exploited in the design of protograph-based generalized LDPC (G-LDPC) codes. In this dissertation we provide analytical tools which aid the design of protograph-based LDPC and G-LDPC codes. Specifically, we propose a method for computing the codeword-weight enumerators for finite-length protograph-based G-LDPC code ensembles, and then we consider the asymptotic case when the block length goes to infinity. These results help the designer identify ensembles of protograph-based G-LDPC codes that are good in the minimum distance sense (i.e., ensembles whose minimum distances grow linearly with code length). Furthermore, good code ensembles can be characterized by good stopping set, trapping set, or pseudocodeword properties, which assist in the design of G-LDPC codes with low floors. We leverage our method for computing codeword-weight enumerators to compute stopping-set and pseudocodeword enumerators for the finite-length and asymptotic ensembles of protograph-based G-LDPC codes. Moreover, we introduce a method for computing trapping set enumerators for finite-length (and asymptotic) protograph-based LDPC code ensembles. Trapping set enumerators for G-LDPC codes represent a more complex problem which we do not consider here. Inspired by our method for computing trapping set enumerators for protograph-based LDPC code ensembles, we developed an algorithm for estimating the trapping set enumerators for a specific LDPC code given its parity-check matrix. We used this algorithm to enumerate trapping sets for several LDPC codes from communication standards. Finally, we study coded-modulation schemes with LDPC codes and pulse-position modulation (LDPC-PPM) over the free-space optical channel. We present three different decoding schemes and compare their performances. In addition, we developed a new density evolution tool for use in the design of LDPC codes with good performance over this channel.
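To make the protograph construction concrete, the sketch below lifts a small base matrix into a quasi-cyclic parity-check matrix by replacing each entry with a circulant permutation (or an all-zero block); the base matrix and lifting factor are hypothetical, and parallel edges, which protographs allow, are not handled here.

```python
import numpy as np

def lift_protograph(base, Z):
    """Lift a protograph base matrix into a binary parity-check matrix.
    base[i, j] = -1 means no edge; otherwise it is the circulant shift."""
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=np.uint8)
    eye = np.eye(Z, dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            shift = base[i, j]
            if shift >= 0:
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(eye, shift, axis=1)
    return H

# Hypothetical 2x4 base matrix lifted by a factor of Z = 4
base = np.array([[0, 1, -1, 2],
                 [1, -1, 0, 3]])
H = lift_protograph(base, Z=4)
print(H.shape)  # (8, 16)
```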
267

A Penalty Function-Based Dynamic Hybrid Shop Floor Control System

Zhao, Xiaobing January 2006
To cope with dynamics and uncertainties, a novel penalty function-based hybrid, multi-agent shop floor control system is proposed in this dissertation. The key characteristic of the proposed system is the capability of adaptively distributing decision-making power across different levels of control agents in response to different levels of disturbance. In the absence of disturbance, the subordinate agent executes tasks based on the schedule from the supervisory-level agent. Otherwise, it optimizes the original schedule before execution by revising it with regard to supervisory-level performance (via a penalty function) and the disturbance. Penalty functions, mathematical programming formulations, and quantitative metrics are presented to indicate the disturbance levels and levels of autonomy. These formulations are applied to diverse performance measurements such as completion-time-related metrics, makespan, and number of late jobs. The proposed control system is illustrated, tested with various job shop problems, and benchmarked against other shop floor control systems. In today's manufacturing systems, humans still play an important role alongside the control system. Therefore, better coordination of humans and control systems is an inevitable topic. A novel BDI agent-based software model is proposed in this work to replace part of the decision-making function of a human. This proposed model is capable of 1) generating plans in real time to adapt the system to a changing environment, 2) supporting not only reactive but also proactive decision-making, 3) maintaining situational awareness in human language-like logic to facilitate real human decision-making, and 4) adapting the commitment strategy to historical performance. The general-purpose human operator model is then customized and integrated with an automated shop floor control system to serve as the error detection and recovery system. This model has been implemented in JACK software; however, JACK does not support real-time generation of plans, so the planner sub-module has been developed in Java and then integrated with JACK. To facilitate integration of an agent, a real human, and the environment, a distributed computing platform based on the DoD High Level Architecture has been used. The effectiveness of the proposed model is then tested in several scenarios in a simulated automated manufacturing environment.
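A toy version of the penalty-function idea is sketched below: a subordinate agent re-sequences jobs after a disturbance, trading its own completion-time objective against a penalty for deviating from the supervisory schedule. The job data, penalty form and brute-force search are illustrative assumptions; the dissertation uses mathematical-programming formulations rather than enumeration.

```python
from itertools import permutations

def revise_schedule(proc_time, original_order, penalty_weight=1.0):
    """Choose a job sequence minimizing total completion time plus a penalty
    for deviating from the supervisory schedule (toy brute-force sketch)."""
    def cost(order):
        t, total_completion = 0.0, 0.0
        for job in order:
            t += proc_time[job]
            total_completion += t
        deviation = sum(abs(order.index(j) - original_order.index(j)) for j in order)
        return total_completion + penalty_weight * deviation
    return min(permutations(original_order), key=cost)

# Hypothetical disturbance: job B's processing time has grown from 3 to 8 hours.
print(revise_schedule({"A": 2, "B": 8, "C": 3}, original_order=["A", "B", "C"], penalty_weight=2.0))
```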
268

Analysis of Failures of Decoders for LDPC Codes

Chilappagari, Shashi Kiran January 2008
Ever since the publication of Shannon's seminal work in 1948, the search for capacity-achieving codes has led to many interesting discoveries in channel coding theory. Low-density parity-check (LDPC) codes, originally proposed in 1963, were largely forgotten and rediscovered recently. The significance of LDPC codes lies in their capacity-approaching performance even when decoded using low-complexity sub-optimal decoding algorithms. Iterative decoders are one such class of decoders that work on a graphical representation of a code known as the Tanner graph. Their properties have been well understood in the asymptotic limit of the code length going to infinity. However, the behavior of various decoders for a given finite-length code remains largely unknown. An understanding of the failures of the decoders is vital for the error floor analysis of a given code. Broadly speaking, the error floor is the abrupt degradation in the frame error rate (FER) performance of a code in the high signal-to-noise ratio region. Since the error floor phenomenon manifests in regions not reachable by Monte Carlo simulations, analytical methods are necessary for characterizing the decoding failures. In this work, we consider hard-decision decoders for transmission over the binary symmetric channel (BSC). For column-weight-three codes, we provide tight upper and lower bounds on the guaranteed error correction capability of a code under the Gallager A algorithm by studying combinatorial objects known as trapping sets. For higher column-weight codes, we establish bounds on the minimum number of variable nodes that achieve certain expansion as a function of the girth of the underlying Tanner graph, thereby obtaining lower bounds on the guaranteed error correction capability. We explore the relationship between a class of graphs known as cage graphs and trapping sets to establish upper bounds on the error correction capability. We also propose an algorithm to identify the most probable noise configurations, also known as instantons, that lead to the error floor for linear programming (LP) decoding over the BSC. With the insight gained from the above analysis techniques, we propose novel code construction techniques that result in codes with superior error floor performance.
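To make the hard-decision setting concrete, the sketch below runs a simple parallel bit-flipping decoder over the BSC; it is a stand-in for the class of hard-decision decoders analyzed in the dissertation, not the Gallager A algorithm itself, and the parity-check matrix and received word in the example are hypothetical.

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=50):
    """Parallel bit-flipping decoding of a hard-decision received word y over the BSC."""
    x = y.copy()
    for _ in range(max_iter):
        syndrome = (H @ x) % 2
        if not syndrome.any():
            return x, True                      # every parity check is satisfied
        unsat = H.T @ syndrome                  # unsatisfied checks touching each bit
        x = (x + (unsat == unsat.max())) % 2    # flip the most suspect bit(s)
    return x, False

# Hypothetical (7,4) Hamming-style parity-check matrix and a single-bit error.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
y = np.array([0, 0, 0, 1, 0, 0, 0])            # all-zero codeword with bit 3 flipped
print(bit_flip_decode(H, y))
```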
269

Decision-making at operational level

Spasova, Paraskeva January 2007
One of the universal characteristics of all organizations is their attempt to achieve high product quality at a low price. For that reason, contemporary organizations direct their efforts toward improving the utilization of workers' potential and adopt line-stopping strategies on the shop floor. The research presented in this thesis aims at analyzing and revealing to what extent decisions at the shop floor depend on operators. The conclusions drawn in this paper contribute to determining the scope of operators' responsibilities and examining the ways in which workers keep the process uninterrupted. The role of operators in attaining the desired product quality is presented as well. These objectives have been accomplished through theoretical work.
270

On guided bone reformation in the maxillary sinus to enable placement and integration of endosseous implants. Clinical and experimental studies.

Cricchio, Giovanni January 2011
Dental caries and periodontal disease are the major causes of tooth loss. While dental caries commonly involve the posterior teeth in both jaws, the teeth most commonly lost due to periodontal problems are the first and second molars in the maxilla. As a consequence, the upper posterior jaw is frequently edentulous. Implant therapy today is a predictable treatment modality for prosthetic reconstruction of the edentulous patient. Insufficient amounts of bone, due to atrophy following loss of teeth or due to the presence of the maxillary sinus, can make it impossible to insert implants in the posterior maxilla. During the 1970s and 1980s, Tatum, Boyne and James, and Wood and Moore first described maxillary sinus floor augmentation whereby, after the creation of a lateral access point, autologous bone grafts are inserted to increase crestal bone height and to create the necessary conditions for the insertion of implants. This surgical procedure requires a two-stage approach and a double surgical site: first, bone is harvested from a donor site and transplanted to the recipient site; then, after a proper healing period of between 4 and 6 months, the implants are inserted. This kind of bone reconstruction, even if well documented, has its limitations, not least in the creation of two different surgical sites and the consequent increased risk of morbidity. In 2004, Lundgren et al. described a new, simplified technique for the elevation of the sinus floor. The authors showed that by lifting the sinus membrane an empty space was created in which blood clot formation resulted in the establishment of new bone. The implants were placed simultaneously to function as “tent poles”, thus maintaining the sinus membrane in a raised position during the subsequent healing period. An essential prerequisite of this technique is to obtain optimal primary implant stability from the residual bone in the sinus floor. An extremely resorbed maxillary sinus floor with, for example, less than 2-3 mm of poor-quality residual bone could impair implant insertion. The aims of the present research project were (i) to evaluate the donor site morbidity and the acceptance level of patients when a bone graft is harvested from the anterior iliac crest, (ii) to evaluate implant stability, new bone formation inside the maxillary sinus and marginal bone resorption around the implants in long-term follow-up when maxillary sinus floor augmentation is performed through sinus membrane elevation without the addition of any grafting material, and (iii) to investigate new bone formation inside the maxillary sinus, in an experimental design, using a resorbable space-making device in order to maintain elevation of the sinus membrane where there is too little bone to insert implants with good primary stability. In Paper I, 70 consecutively treated patients were retrospectively evaluated in terms of postoperative donor site morbidity and donor site complications. With regard to donor site morbidity, 74% of patients were free of pain within 3 weeks, whereas 26% had a prolonged period of pain lasting from a few weeks to several months. For 11% of patients there was still some pain or discomfort 2 years after the grafting surgery. Nevertheless, patient acceptance was high and treatment significantly improved oral function, facial appearance, and recreation/social activities, and resulted in an overall improvement in the quality of life of formerly edentulous patients.
In Papers II and III, differently shaped space-making devices were tested on primates (tufted capuchin, Cebus apella) in two experimental models aimed at evaluating whether a two-stage procedure for sinus floor augmentation could benefit from the use of a space-making device to increase the bone volume and enable later implant installation with good primary stability, without the use of any grafting material. Histological examination of the specimens showed that it is possible to obtain bone formation in contact with both the Schneiderian membrane and the device. In most cases the device was displaced. The process of bone formation indicated that this technique is potentially useful for two-stage sinus floor augmentation. The lack of device stability within the sinus requires further improvement of space-makers if predictable bone augmentation is to be achieved. In Paper IV, a total of 84 patients were subjected to 96 membrane elevation procedures and the simultaneous placement of 239 implants. Changes of intra-sinus and marginal bone height in relation to the implants were measured in intraoral radiographs taken at insertion, after 6 months of healing, after 6 months of loading and then annually. Computerised tomography was performed pre-surgically and 6 months post-surgically. Resonance frequency analysis (RFA) measurements were performed at the time of implant placement, at abutment connection and after 6 months of loading. The implant follow-up period ranged from a minimum of one to a maximum of 6 years after implant loading. All implants were stable after 6 months of healing. A total of three implants were lost during the follow-up period, giving a survival rate of 98.7%. Radiography demonstrated an average of 5.3 ± 2.1 mm of intra-sinus new bone formation after 6 months of healing. RFA measurements showed adequate primary stability (implant stability quotient 67.4 ± 6.1) and small changes over time. In conclusion, harvesting bone from the iliac crest could result in temporary donor site morbidity, but in 11% of patients pain or discomfort was still present up to 2 years after surgery. However, patient satisfaction was good despite this slow or incomplete recovery, as shown by the quality of life questionnaire. Maxillary sinus membrane elevation without the use of bone grafts or bone substitutes results in predictable bone formation both in the animal model, where the sinus membrane is supported by a resorbable device, and under clinical conditions, where the membrane is kept in the elevated position by dental implants. This new bone formation is accompanied by a high implant survival rate of 98.7% over a follow-up period of up to 6 years. Intra-sinus bone formation remained stable in the long-term follow-up. It is suggested that the secluded compartment allowed bone formation in accordance with the principle of guided tissue regeneration. This technique reduces the risks of morbidity related to bone graft harvesting and eliminates the costs of grafting materials.
