91

Théorèmes de h-cobordisme et s-cobordisme semi-algébriques

Demdah, Kartoue Mady 23 July 2009 (has links) (PDF)
The h-cobordism theorem is well known in differential and PL topology. It was proved by Stephen Smale, with the proof of the Poincaré conjecture in dimensions greater than 4 as a consequence. A generalization to h-cobordisms that are possibly not simply connected is called the s-cobordism theorem. In this thesis, we prove the semi-algebraic and Nash versions of these theorems. That is, given semi-algebraic or Nash data, we obtain a semi-algebraic homeomorphism (respectively, a Nash diffeomorphism). The main tools involved are semi-algebraic triangulation and Nash approximation. One aspect of the algebraic nature of semi-algebraic and Nash objects is that their complexity can be measured. We prove the h- and s-cobordism theorems with a uniform bound on the complexity of the resulting semi-algebraic homeomorphism (Nash diffeomorphism) in terms of the complexity of the cobordism data. Finally, we deduce that the semi-algebraic and Nash versions of these theorems hold over every real closed field.
92

Implementation and Testing of a Semi-Active Damping System

Nordin, Peter January 2007 (has links)
The purpose of this thesis is to implement and test a semi-active damping system based on a concept from an earlier thesis. The project includes implementation of mechanical, hydraulic and electronic hardware, as well as controller software. The idea is to measure the movements of the vehicle chassis and, based on these measurements, set the damping torque using hydraulics. To be able to develop, test and evaluate the system, realistic input data must be available. To acquire such data, driving trials have been conducted on a variety of tracks.

The first part of the system is the sensors that measure chassis movements. Both accelerometers and a gyro have been used. To remove drift and high-frequency vibrations, the signals are filtered. The controller suggested in the earlier thesis requests damping torque based on the damper's vertical velocity. When accelerometer signals are integrated, measurement and rounding errors cause drift in the velocity. To compensate for this, a floating average is calculated and used.

The main hydraulic component is a pressure reduction valve that controls the pressure inside the damper. Higher pressure gives higher damping torque. The reaction speed of the system depends mostly on the hydraulic components. It is important to know the time delay from a change in the valve control signal to when the actual pressure in the damper is reached. Tests have shown that a large step, from 10 bar to 60 bar, takes approximately 46 ms, and that a small step from 1 bar to 20 bar takes 63 ms. The valve is faster when higher pressure levels are requested. In addition to the hydraulic response time, the delay through the signal filters, measured at about 14 ms, must be added.

The sensors are affected by vibrations. If these can be reduced, the digital filters can be made less sharp, with a lower filter delay as a result. It is also important to have a good control computer so that large rounding errors in the filter calculations can be avoided. This would greatly decrease drift in the integrated velocity.
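To make the drift-compensation step described above concrete, the following is a minimal Python sketch, not the thesis implementation: vertical acceleration is integrated to velocity, and a floating (moving) average of the result is subtracted so that integration drift does not accumulate. The sample rate, window length and test signal are illustrative assumptions.

import numpy as np

FS = 500.0      # assumed sample rate [Hz]
WINDOW = 250    # assumed floating-average window (0.5 s at FS)

def damper_velocity(accel):
    """Estimate damper vertical velocity from an acceleration trace."""
    vel = np.cumsum(accel) / FS                    # naive numerical integration
    kernel = np.ones(WINDOW) / WINDOW
    drift = np.convolve(vel, kernel, mode="same")  # floating average = slow drift component
    return vel - drift                             # drift-compensated velocity

t = np.arange(0.0, 5.0, 1.0 / FS)
accel = np.sin(2 * np.pi * 1.5 * t) + 0.05         # 1.5 Hz chassis motion plus a sensor bias
v = damper_velocity(accel)
print(f"mean of compensated velocity: {abs(v.mean()):.4f}")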
93

A Mechanical Model for Erosion in Copper Chemical-Mechanical Polishing

Noh, Kyungyoon, Saka, Nannaji, Chun, Jung-Hoon 01 1900 (has links)
Chemical-mechanical polishing (CMP) is now widely employed in ultra-large-scale integration chip fabrication. Due to continuous advances in semiconductor fabrication technology and decreasing sub-micron feature sizes, the characterization of erosion, which affects circuit performance and manufacturing throughput, has become an important issue in Cu CMP. In this paper, erosion in Cu CMP is divided into two levels. Wafer-level and die-level erosion models were developed based on the material removal rates and the geometry of wafers incoming to the Cu CMP process, including the Cu interconnect area fraction, linewidth and Cu deposition thickness. Experiments were conducted to obtain the selectivity values between the Cu, barrier layer and dielectric, and the values of the within-wafer material removal rate ratio, β, for the validation of the new erosion model. The new model was compared with existing models and was found to agree better with the experimental data. / Singapore-MIT Alliance (SMA)
94

The continuous rheoconversion process: Scale-up and optimization.

Bernard, William J. January 2005 (has links)
Thesis (M.S.)--Worcester Polytechnic Institute. / Keywords: CRP; Thixocasting; Rheocasting; Semisolid. Includes bibliographical references (leaves 46-48).
95

Semi-supervised and active training of conditional random fields for activity recognition

Mahdaviani, Maryam 05 1900 (has links)
Automated human activity recognition has attracted increasing attention in the past decade. However, the application of machine learning and probabilistic methods to activity recognition problems has been studied only in the past couple of years. For the first time, this thesis explores the application of semi-supervised and active learning in activity recognition. We present a new and efficient semi-supervised training method for parameter estimation and feature selection in conditional random fields (CRFs), a probabilistic graphical model. In real-world applications such as activity recognition, unlabeled sensor traces are relatively easy to obtain whereas labeled examples are expensive and tedious to collect. Furthermore, the ability to automatically select a small subset of discriminatory features from a large pool can be advantageous in terms of computational speed as well as accuracy. We introduce the semi-supervised virtual evidence boosting (sVEB) algorithm for training CRFs — a semi-supervised extension to the recently developed virtual evidence boosting (VEB) method for feature selection and parameter learning. sVEB takes advantage of the unlabeled data via minimum entropy regularization. The objective function combines the unlabeled conditional entropy with the labeled conditional pseudo-likelihood. The sVEB algorithm reduces the overall system cost as well as the human labeling cost required during training, which are both important considerations in building real-world inference systems. Moreover, we propose an active learning algorithm for training CRFs that is based on virtual evidence boosting and uses entropy measures. Active virtual evidence boosting (aVEB) queries the user for the most informative examples, efficiently builds up labeled training examples and incorporates unlabeled data as in sVEB. aVEB not only reduces the computational complexity of training CRFs as in sVEB, but also produces more accurate classification results for the same fraction of labeled data. In a set of experiments we illustrate that our algorithms, sVEB and aVEB, benefit from both the use of unlabeled data and automatic feature selection, and outperform other semi-supervised and active training approaches. The proposed methods could also be extended and employed for other classification problems in relational data.
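As an illustration of the objective shape described above (labeled conditional pseudo-likelihood plus a minimum-entropy term on unlabeled data), the following toy Python sketch uses a binary logistic model as a stand-in for the CRF; the trade-off weight alpha and the synthetic data are assumptions, and this is not the sVEB implementation.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def semi_supervised_objective(w, X_lab, y_lab, X_unlab, alpha=0.1):
    """Value to minimize: negative log-likelihood on labeled data
    plus alpha times the conditional entropy on unlabeled data."""
    eps = 1e-12
    p_lab = sigmoid(X_lab @ w)
    nll = -np.sum(y_lab * np.log(p_lab + eps) + (1 - y_lab) * np.log(1 - p_lab + eps))
    p_un = sigmoid(X_unlab @ w)
    entropy = -np.sum(p_un * np.log(p_un + eps) + (1 - p_un) * np.log(1 - p_un + eps))
    return nll + alpha * entropy

rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(20, 5)), rng.integers(0, 2, 20)
X_unlab = rng.normal(size=(200, 5))
print(semi_supervised_objective(np.zeros(5), X_lab, y_lab, X_unlab))

Minimizing the entropy term pushes the model toward confident predictions on the unlabeled traces, which is the intuition behind minimum entropy regularization.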
96

Semi-synchronous video for Deaf Telephony with an adapted synchronous codec

Ma, Zhenyu January 2009 (has links)
Communication tools such as text-based instant messaging, voice and video relay services, real-time video chat and mobile SMS and MMS have successfully been used among Deaf people. Several years of field research with a local Deaf community revealed that disadvantaged South African Deaf people preferred to communicate with both Deaf and hearing peers in South African Sign Language as opposed to text. Synchronous video chat and video relay services provided such opportunities. Both types of services are commonly available in developed regions, but not in developing countries like South Africa. This thesis reports on a workaround approach to design and develop an asynchronous video communication tool that adapted synchronous video codecs to store-and-forward video delivery. This novel asynchronous video tool provided high quality South African Sign Language video chat at the expense of some additional latency. Synchronous video codec adaptation consisted of comparing codecs, and choosing one to optimise in order to minimise latency and preserve video quality. Traditional quality of service metrics only addressed real-time video quality and related services. There was no such standard for asynchronous video communication. Therefore, we also enhanced traditional objective video quality metrics with subjective assessment metrics conducted with the local Deaf community.
97

Optimization Models and Techniques for Radiation Treatment Planning Applied to Leksell Gamma Knife(R) Perfexion(TM)

Ghaffari, Hamid 11 December 2012 (has links)
Radiation treatment planning is a process through which a plan is devised to irradiate tumors or lesions to a prescribed dose without exposing surrounding organs to the risk of receiving radiation. A plan comprises a series of shots at different positions with different shapes. The inverse planning approach which we propose utilizes certain optimization techniques and builds mathematical models to come up with the right location and shape for each shot, automating the whole process. The models which we developed for the Perfexion(TM) unit (Elekta, Stockholm, Sweden), in essence, assist oncologists in automatically locating isocentres and defining sector durations. Sector duration optimization (SDO) and sector duration and isocentre location optimization (SDIO) are the two classes of these models. The SDO models, which are, in fact, variations of the equivalent uniform dose optimization model, are solved by two nonlinear optimization techniques, namely Gradient Projection and our home-developed Interior Point Constraint Generation. To solve the SDIO model, a commercial optimization solver has been employed. This study undertakes to solve the isocentre selection and sector duration optimization problems. Moreover, inverse planning is evaluated, using clinical data, throughout the study. The results show that automated inverse planning contributes to the quality of radiation treatment planning in an unprecedentedly optimal fashion, and significantly reduces computation time and treatment time.
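For readers unfamiliar with equivalent uniform dose (EUD), the quantity that the SDO models above are built around, the following Python sketch shows its generalized form and a toy objective in the sector durations; the dose-rate matrices, exponents and prescription value are placeholders, not data or models from the thesis.

import numpy as np

def equivalent_uniform_dose(dose, a):
    """Generalized EUD of a voxel dose vector: (mean(d_i ** a)) ** (1 / a)."""
    dose = np.maximum(np.asarray(dose, dtype=float), 1e-6)   # guard against 0 ** negative
    return float(np.mean(dose ** a) ** (1.0 / a))

def sdo_objective(durations, D_target, D_oar, a_target=-10.0, a_oar=8.0, prescribed=20.0):
    """Penalize target under-dose and organ-at-risk over-dose via EUD terms."""
    d_target = D_target @ durations       # dose delivered to target voxels
    d_oar = D_oar @ durations             # dose delivered to organ-at-risk voxels
    under = max(0.0, prescribed - equivalent_uniform_dose(d_target, a_target))
    return under + equivalent_uniform_dose(d_oar, a_oar)

rng = np.random.default_rng(0)
D_t, D_o = rng.uniform(0.5, 1.5, (100, 8)), rng.uniform(0.0, 0.3, (50, 8))
print(sdo_objective(np.full(8, 2.0), D_t, D_o))

A negative exponent emphasizes cold spots in the target, while a large positive exponent emphasizes hot spots in the organ at risk; a method such as gradient projection, as mentioned above, would then minimize such an objective over the non-negative sector durations.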
98

Research On The Recovery of Semi-Fragile Watermarked Image

Sun, Ming-Hong 03 July 2006 (has links)
In recent years, there have been more and more studies of semi-fragile watermarking schemes that can resist JPEG compression. However, few have focused on the recovery of semi-fragile watermarked images. Therefore, in this paper, we not only present a semi-fragile watermarking scheme which can resist JPEG compression but also use an error correction code (Reed-Solomon code) to recover areas that have been maliciously manipulated. First, we use the semi-fragile watermarking scheme proposed by Lin and Hsieh to detect counterfeits under JPEG compression [9]. Its main purpose is to resist JPEG compression and to detect the attacked parts without the need of the original image. We then describe how we use the RS code to recover the attacked parts detected by the semi-fragile watermarking scheme. We use an "interleaving" scheme to spread local pixels across the global area. Next, we encode each small image block with the RS code. The redundant symbols generated by the RS code are signed to form a signature attached to the watermarked image. Finally, the receiver can use the semi-fragile watermarking scheme to detect the attacked parts and use the information in the signature to decode them. Additionally, we also discuss how to reduce the size of the signature so that it does not impose a significant load on the watermarked image.
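To illustrate the interleave-then-encode step described above, here is a minimal Python sketch assuming the third-party reedsolo package; the block size, parity length and interleaving stride are illustrative choices, and this is not the authors' implementation.

from reedsolo import RSCodec

BLOCK = 64      # bytes per protected block (assumed)
PARITY = 16     # Reed-Solomon parity symbols per block (assumed)
rsc = RSCodec(PARITY)

def interleave(data, stride=8):
    """Spread neighbouring bytes across the whole buffer (simple stride interleaver)."""
    return bytes(data[i] for s in range(stride) for i in range(s, len(data), stride))

def make_signature(image_bytes, stride=8):
    """Return the RS parity of each interleaved block; this parity, once signed,
    forms the signature attached to the watermarked image."""
    mixed = interleave(image_bytes, stride)
    blocks = [mixed[i:i + BLOCK] for i in range(0, len(mixed), BLOCK)]
    return [bytes(rsc.encode(b)[len(b):]) for b in blocks]   # keep only the parity symbols

sig = make_signature(bytes(range(256)) * 4)
print(len(sig), "parity blocks of", PARITY, "bytes each")

On the receiver side, the same interleaving is recomputed and each block is decoded together with its parity, so that tampered pixels, which the interleaving has spread across many blocks, stay within the correction capability of the code.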
99

none

Huang, Ya-Yao 04 July 2002 (has links)
none
100

Methods and applications of text-driven toponym resolution with indirect supervision

Speriosu, Michael Adrian 24 September 2013 (has links)
This thesis addresses the problem of toponym resolution. Given an ambiguous placename like Springfield in some natural language context, the task is to automatically predict the location on the earth's surface the author is referring to. Many previous efforts use hand-built heuristics to attempt to solve this problem, looking for specific words in close proximity such as Springfield, Illinois, and disambiguating any remaining toponyms to possible locations close to those already resolved. Such approaches require the data to take a fairly specific form in order to perform well, thus they often have low coverage. Some have applied machine learning to this task in an attempt to build more general resolvers, but acquiring large amounts of high quality hand-labeled training material is difficult. I discuss these and other approaches found in previous work before presenting several new toponym resolvers that rely neither on hand-labeled training material prepared explicitly for this task nor on particular co-occurrences of toponyms in close proximity in the data to be disambiguated. Some of the resolvers I develop reflect the intuition of many heuristic resolvers that toponyms nearby in text tend to (but do not always) refer to locations nearby on Earth, but do not require toponyms to occur in direct sequence with one another. I also introduce several resolvers that use the predictions of a document geolocation system (i.e. one that predicts a location for a piece of text of arbitrary length) to inform toponym disambiguation. Another resolver takes into account these document-level location predictions, knowledge of different administrative levels (country, state, city, etc.), and predictions from a logistic regression classifier trained on automatically extracted training instances from Wikipedia in a probabilistic way. It takes advantage of all content words in each toponym's context (both local window and whole document) rather than only toponyms. One resolver I build that extracts training material for a machine learned classifier from Wikipedia, taking advantage of link structure and geographic coordinates on articles, resolves 83% of toponyms in a previously introduced corpus of news articles correctly, beating the strong but simplistic population baseline. I introduce a corpus of Civil War related writings not previously used for this task on which the population baseline does poorly; combining a Wikipedia informed resolver with an algorithm that seeks to minimize the geographic scope of all predicted locations in a document achieves 86% blind test set accuracy on this dataset. After providing these high performing resolvers, I form the groundwork for more flexible and complex approaches by transforming the problem of toponym resolution into the traveling purchaser problem, modeling the probability of a location given its toponym's textual context and the geographic distribution of all locations mentioned in a document as two components of an objective function to be minimized. As one solution to this incarnation of the traveling purchaser problem, I simulate properties of ants traveling the globe and disambiguating toponyms. The ants' preferences for various kinds of behavior evolves over time, revealing underlying patterns in the corpora that other disambiguation methods do not account for. I also introduce several automated visualizations of texts that have had their toponyms resolved. 
Given a resolved corpus, these visualizations summarize the areas of the globe mentioned and allow the user to refer back to specific passages in the text that mention a location of interest. One visualization presented automatically generates a dynamic tour of the corpus, showing changes in the area referred to by the text as it progresses. Such visualizations are an example of a practical application of work in toponym resolution, and could be used by scholars interested in the geographic connections in any collection of text on both broad and fine-grained levels. / text
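As a small illustration of the "minimize the geographic scope" intuition mentioned above, the following toy Python sketch picks, among each toponym's candidate coordinates, the combination whose locations lie closest together; the candidate lists here are illustrative, and a real resolver would draw them from a gazetteer and combine this heuristic with the document-level and classifier evidence described in the abstract.

from itertools import product
from math import radians, sin, cos, asin, sqrt

def haversine(a, b):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def resolve(candidates):
    """candidates: {toponym: [(lat, lon), ...]}. Return the lowest-scope assignment."""
    names = list(candidates)
    best, best_cost = None, float("inf")
    for combo in product(*(candidates[n] for n in names)):   # exhaustive; fine for small inputs
        cost = sum(haversine(p, q) for i, p in enumerate(combo) for q in combo[i + 1:])
        if cost < best_cost:
            best, best_cost = dict(zip(names, combo)), cost
    return best

print(resolve({"Springfield": [(39.8, -89.6), (42.1, -72.6)],   # Illinois vs. Massachusetts
               "Chicago": [(41.9, -87.6)]}))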
