271

Mesure et suivi d'activité de plusieurs personnes dans un Living Lab en vue de l'extraction d'indicateurs de santé et de bien-être / Activity measurement and monitoring of several people in a Living Lab in order to extract health and well-being indicators

Sevrin, Loic 20 September 2016 (has links)
The ageing of the population is a global phenomenon accompanied by an increase in the number of patients suffering from chronic diseases, which forces a rethink of the healthcare system by bringing health monitoring and care into the home and the city. Considering activity as a visible sign of health status, this thesis proposes technological means to monitor the activities of several people in a living lab composed of an apartment and the surrounding city. Indeed, maintaining substantial physical activity, and in particular social activity, is an integral part of a person's good health; it must therefore be studied alongside the ability to perform the activities of daily living. This study led to the implementation of a platform for collaborative design and full-scale experimentation on healthcare at home and in the city: the INL living lab. The latter hosted first experiments that validated the living lab's ability both to fuse activity data from a set of heterogeneous sensors and to evolve by integrating new technologies and services. The collaborative scenarios studied provide a first approach to analysing collaboration by detecting the simultaneous presence of several people in the same room. These preliminary results are encouraging and will be extended in the coming months by finer-grained activity captures involving more sensors.
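As a minimal sketch of the simultaneous-presence analysis mentioned in this abstract: given room-level presence intervals gathered from heterogeneous sensors, count the time during which two or more people share a room. The event tuple format, the fixed time step, and all values below are illustrative assumptions, not the INL living lab's actual data model.

```python
from collections import defaultdict

def simultaneous_presence(events, step=1.0):
    """events: list of (person_id, room, t_start, t_end) tuples.

    Returns seconds per room during which >= 2 people were present,
    estimated by sweeping a fixed time step over the observation window.
    """
    overlaps = defaultdict(float)
    if not events:
        return overlaps
    t = min(e[2] for e in events)
    t_end = max(e[3] for e in events)
    while t < t_end:
        occupancy = defaultdict(set)  # room -> people present at time t
        for person, room, start, end in events:
            if start <= t < end:
                occupancy[room].add(person)
        for room, people in occupancy.items():
            if len(people) >= 2:
                overlaps[room] += step
        t += step
    return overlaps

# Example: two residents overlap in the kitchen for 30 seconds.
events = [("A", "kitchen", 0, 120), ("B", "kitchen", 90, 200),
          ("B", "living_room", 0, 90)]
print(dict(simultaneous_presence(events)))  # {'kitchen': 30.0}
```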
272

Geographic Indexing and Data Management for 3D-Visualisation

Ottoson, Patrik January 2001 (has links)
No description available.
273

Automatic Bayesian Segmentation of Human Facial Tissue Using 3D MR-CT Fusion by Incorporating Models of Measurement Blurring, Noise and Partial Volume

Sener, Emre 01 September 2012 (has links) (PDF)
Segmentation of the human head on medical images is an important process in a wide array of applications such as diagnosis, facial surgery planning, prosthesis design, and forensic identification. In this study, a new Bayesian method for segmentation of facial tissues is presented. Segmentation classes include muscle, bone, fat, air and skin. The method incorporates a model to account for image blurring during data acquisition, a prior that helps to reduce noise, and a partial volume model. Regularization terms based on isotropic and directional Markov Random Field priors are integrated into the algorithm, and their effects on segmentation accuracy are investigated. The Bayesian model is solved iteratively, yielding tissue class labels at every voxel of an image. Sub-methods, as variations of the main method, are generated by switching a combination of the models on and off. Testing of the sub-methods is performed on two patients using single-modality three-dimensional (3D) images as well as registered multi-modal 3D images (Magnetic Resonance and Computerized Tomography). Numerical, visual and statistical analyses of the methods are conducted. Improved segmentation accuracy is obtained through the use of the proposed image models and multi-modal data. The methods are also compared with the Level Set method and an adaptive Bayesian segmentation method proposed in a previous study.
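The combination of a Gaussian intensity likelihood with Markov Random Field regularization described above can be illustrated with a small iterated-conditional-modes (ICM) sketch. The thesis works on 3D multi-modal data with blurring and partial-volume models, which this 2D toy omits; the class means, variances, and smoothing weight beta are assumed values, not the thesis's parameters.

```python
import numpy as np

def icm_segment(image, means, variances, beta=1.0, n_iter=10):
    """MAP labeling with Gaussian likelihoods and an isotropic Potts prior."""
    # Initialize with the maximum-likelihood label at each voxel.
    labels = np.argmin((image[..., None] - means) ** 2 / variances, axis=-1)
    for _ in range(n_iter):
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                # Negative log-likelihood of each class for this voxel.
                cost = ((image[i, j] - means) ** 2 / (2 * variances)
                        + 0.5 * np.log(variances))
                # Potts penalty: disagreeing with a 4-neighbour costs beta.
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < image.shape[0] and 0 <= nj < image.shape[1]:
                        cost += beta * (np.arange(len(means)) != labels[ni, nj])
                labels[i, j] = int(np.argmin(cost))
    return labels

# Toy example: two intensity classes corrupted by noise.
rng = np.random.default_rng(0)
truth = np.zeros((32, 32), dtype=int)
truth[:, 16:] = 1
img = rng.normal(loc=truth.astype(float), scale=0.4)
seg = icm_segment(img, means=np.array([0.0, 1.0]),
                  variances=np.array([0.16, 0.16]), beta=0.8)
print("voxel accuracy:", (seg == truth).mean())
```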
274

A gaming perspective on command and control

Brynielsson, Joel January 2006 (has links)
In emergency management and in military operations, command and control comprises the collection of functions, systems and staff personnel that one or several executives draw on to arrive at decisions and to see that these decisions are carried out. The large amount of available information, coupled with modern computers and computer networks, brings the potential for making well-informed and quick decisions. Hence, decision-making is a central aspect of command and control, emphasizing an obvious need to develop adequate decision-supporting tools for use in command and control centers. However, command and control takes place in a versatile environment, including both humans and artifacts, making the design of useful computer tools both challenging and multi-faceted. This thesis deals with preparatory action in command and control settings with a focus on the strategic properties of a situation, i.e., aiding commanders in their operational planning activities with the ultimate goal of ensuring that strategic interaction occurs under the most favorable circumstances possible. The thesis highlights and investigates the common features of interaction by approaching them broadly from a gaming perspective, taking into account various forms of strategic interaction in command and control. This governing idea, the command and control gaming perspective, is considered an overall contribution of the thesis. Taking the gaming perspective, it turns out that the area ought to be approached from several research directions. In particular, the persistent gap between theory and applications can be bridged by approaching the command and control gaming perspective from both an applied and a theoretical research direction. On the one hand, game theory in conjunction with research findings stemming from artificial intelligence needs to be modified to be of use in applied command and control settings. On the other hand, existing games and simulations need to be adapted further to take theoretical game models into account. Results include the following points: (1) classification of information with proposed measurements for a piece of information's precision, fitness for purpose and expected benefit; (2) identification of decision help and decision analysis as the two main directions for development of computerized tools in support of command and control; (3) development and implementation of a rule-based algorithm for map-based decision analysis; (4) construction of an open-source generic simulation environment to support command and control microworld research; (5) development of a generic tool for prediction of forthcoming troop movements using an algorithm stemming from particle filtering; (6) a non-linear multi-attribute utility function intended to take prevailing cognitive decision-making models into account; and (7) a framework based on game theory and influence diagrams to be used for command and control situation awareness enhancements. Field evaluations in cooperation with military commanders as well as game-theoretic computer experiments are presented in support of the results.
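As a toy illustration of decision analysis under a gaming perspective, the sketch below ranks a commander's courses of action by expected utility against an assumed probability distribution over adversary options. The payoff matrix, action names, and belief vector are invented for illustration and are not drawn from the thesis.

```python
import numpy as np

# Rows: own courses of action; columns: adversary courses of action.
payoff = np.array([
    [3.0, -1.0,  0.5],   # defend
    [1.0,  2.0, -2.0],   # flank
    [0.0,  0.5,  1.0],   # delay
])
actions = ["defend", "flank", "delay"]
belief = np.array([0.5, 0.3, 0.2])  # P(adversary plays each column)

# Expected utility of each own action under the current belief.
expected_utility = payoff @ belief
for name, eu in zip(actions, expected_utility):
    print(f"{name}: {eu:+.2f}")
print("recommended:", actions[int(np.argmax(expected_utility))])
```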
275

Multisensor Segmentation-based Noise Suppression for Intelligibility Improvement in MELP Coders

Demiroglu, Cenk 18 January 2006 (has links)
This thesis investigates the use of an auxiliary sensor, the GEMS device, for improving the quality of noisy speech and for designing noise preprocessors for MELP speech coders. The use of auxiliary sensors for noise-robust ASR applications is also investigated in order to develop speech enhancement algorithms that exploit acoustic-phonetic properties of the speech signal. A Bayesian risk minimization framework is developed that can incorporate the acoustic-phonetic properties of speech sounds and knowledge of human auditory perception into the speech enhancement framework. Two noise suppression systems are presented using the ideas developed in the mathematical framework. In the first system, an aharmonic comb filter is proposed for voiced speech in which low-energy frequencies are severely suppressed while high-energy frequencies are suppressed only mildly. The proposed system outperformed an MMSE estimator in subjective listening tests and in DRT intelligibility tests for MELP-coded noisy speech. The effect of aharmonic comb filtering on the linear predictive coding (LPC) parameters is analyzed using a missing data approach. Suppressing the low-energy frequencies without any modification of the high-energy frequencies is shown to improve the LPC spectrum under the Itakura-Saito distance measure. The second system combines the aharmonic comb filter with the acoustic-phonetic properties of speech to improve the intelligibility of MELP-coded noisy speech. The noisy speech signal is segmented into broad-level sound classes using a multi-sensor automatic segmentation/classification tool, and each sound class is enhanced differently based on its acoustic-phonetic properties. The proposed system is shown to outperform both the MELPe noise preprocessor and the aharmonic comb filter in intelligibility tests when used in concatenation with the MELP coder. Since the second noise suppression system uses an automatic segmentation/classification algorithm, exploiting the GEMS signal in an automatic segmentation/classification task is also addressed using an ASR approach. Current ASR engines can segment and classify speech utterances in a single pass; however, they are sensitive to ambient noise. Features extracted from the GEMS signal can be fused with the noisy MFCC features to improve the noise-robustness of the ASR system. In the first phase, a voicing feature is extracted from the clean speech signal and fused with the MFCC features. The actual GEMS signal could not be used in this phase because of insufficient sensor data to train the ASR system. Tests are done using the Aurora2 noisy digits database. The speech-based voicing feature is found to be effective at around 10 dB, but below 10 dB its effectiveness drops rapidly with decreasing SNR because of the severe distortions in the speech-based features at these SNRs. Hence, a novel system is proposed that treats the MFCC features in a speech frame as missing data if the global SNR is below 10 dB and the speech frame is unvoiced. If the global SNR is above 10 dB or the speech frame is voiced, both the MFCC features and the voicing feature are used. The proposed system is shown to outperform some of the popular noise-robust techniques at all SNRs. In the second phase, a new isolated-monosyllable database is prepared that contains both speech and GEMS data. ASR experiments conducted on clean speech showed that the GEMS-based feature, when fused with the MFCC features, decreases performance. The reason for this unexpected result is found to be partly related to some of the GEMS data being severely noisy. Non-acoustic sensor noise exists in all GEMS data, but severe noise occurs rarely. A missing data technique is proposed to alleviate the effects of severely noisy sensor data: the GEMS-based feature is treated as missing data when it is detected to be severely noisy. The combined features are shown to outperform the MFCC features for clean speech when the missing data technique is applied.
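The first system's core idea, suppressing low-energy frequency bins severely while treating high-energy bins mildly, can be caricatured in a single-frame spectral-gain sketch. The percentile threshold and gain values are assumptions; the actual aharmonic comb filter is pitch-driven and embedded in the Bayesian risk framework described above.

```python
import numpy as np

def suppress_frame(frame, strong_gain=0.9, weak_gain=0.1, percentile=70):
    """Attenuate low-energy bins heavily, high-energy bins mildly."""
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
    power = np.abs(spectrum) ** 2
    threshold = np.percentile(power, percentile)
    gains = np.where(power >= threshold, strong_gain, weak_gain)
    return np.fft.irfft(spectrum * gains, n=len(frame))

# Toy test: a 200 Hz tone in white noise, 25 ms frame at 8 kHz.
fs, n = 8000, 200
t = np.arange(n) / fs
frame = (np.sin(2 * np.pi * 200 * t)
         + 0.3 * np.random.default_rng(1).normal(size=n))
out = suppress_frame(frame)
print("input power :", np.mean(frame ** 2))
print("output power:", np.mean(out ** 2))
```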
276

Variation modeling, analysis and control for multistage wafer manufacturing processes

Jin, Ran 10 May 2011 (has links)
Geometric quality variables of wafers, such as BOW and WARP, are critical in their applications. A large variation in these quality variables reduces the number of conforming products in downstream production. Therefore, it is important to reduce this variation through variation modeling, analysis and control for multistage wafer manufacturing processes (MWMPs). First, an intermediate feedforward control strategy is developed to adjust and update the control actions based on online measurements of intermediate wafer quality. The control performance is evaluated in an MWMP that transforms ingots into polished wafers. However, in a complex multistage manufacturing process, the quality variables may have nonlinear relationships with the parameters of the predictors. In this case, piecewise linear regression tree (PLRT) models are used to address nonlinear relationships in MWMPs and improve model prediction performance. The obtained PLRT model is further reconfigured to comply with the physical layout of the MWMP for feedforward control purposes. The procedure and effectiveness of the proposed method are shown in a case study of an MWMP. Furthermore, as geometric profiles and quality variables are important quality features of a wafer, fast and accurate measurement of those features is crucial for variation reduction and feedforward control. A sequential measurement strategy is proposed to reduce the number of sites measured on a wafer while providing adequate accuracy for quality feature estimation. A Gaussian process model is used to estimate the true profile of a wafer with improved sensing efficiency. Finally, we study the multistage multimode process monitoring problem. We propose to use PLRTs to inter-relate the variables in a multistage multimode process, and a unified charting system is developed. We further study the run length distribution and optimize the control chart system by considering modeling uncertainties. We then compare the proposed method with risk-adjustment-type control chart systems based on global regression models, in both a simulation study and a wafer manufacturing process.
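The sequential measurement strategy can be illustrated with a bare-bones Gaussian process sketch that repeatedly measures the wafer site where the predictive variance is largest. The RBF kernel, its length scale, and the stand-in profile function are assumptions for illustration, not the thesis's tuned model.

```python
import numpy as np

def rbf(a, b, length=0.2):
    """Squared-exponential kernel for 1D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_obs, y_obs, x_new, noise=1e-4):
    """Posterior mean and variance of a zero-mean GP at x_new."""
    k_oo = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    k_on = rbf(x_obs, x_new)
    solve = np.linalg.solve(k_oo, k_on)
    mean = solve.T @ y_obs
    var = np.maximum(1.0 - np.sum(k_on * solve, axis=0), 0.0)
    return mean, var

def profile(x):
    # Stand-in for the unknown true wafer profile (e.g., BOW vs. radius).
    return 0.3 * np.sin(6 * x) + 0.1 * x

# Sequentially measure a 1D wafer profile over the radius [0, 1].
candidates = np.linspace(0, 1, 101)
measured_x = np.array([0.0, 1.0])
measured_y = profile(measured_x)
for _ in range(5):
    _, var = gp_posterior(measured_x, measured_y, candidates)
    nxt = candidates[np.argmax(var)]      # most uncertain site next
    measured_x = np.append(measured_x, nxt)
    measured_y = np.append(measured_y, profile(nxt))
print("measured sites:", np.round(np.sort(measured_x), 2))
```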
278

Bayesian 3D multiple people tracking using multiple indoor cameras and microphones

Lee, Yeongseon. January 2009 (has links)
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009. / Committee Chair: Russell M. Mersereau; Committee Member: Biing-Hwang (Fred) Juang; Committee Member: Christopher E. Heil; Committee Member: Georgia Vachtsevanos; Committee Member: James H. McClellan. Part of the SMARTech Electronic Thesis and Dissertation Collection.
279

Design and Test of Algorithms for the Evaluation of Modern Sensors in Close-Range Photogrammetry / Entwicklung und Test von Algorithmen für die 3D-Auswertung von Daten moderner Sensorsysteme in der Nahbereichsphotogrammetrie

Scheibe, Karsten 01 December 2006 (has links)
No description available.
280

An efficient approach for high-fidelity modeling incorporating contour-based sampling and uncertainty

Crowley, Daniel R. 13 January 2014 (has links)
During the design process for an aerospace vehicle, decision-makers must have an accurate understanding of how each choice will affect the vehicle and its performance. This understanding is based on experiments and, increasingly often, computer models. In general, as a computer model captures a greater number of phenomena, its results become more accurate for a broader range of problems. This improved accuracy typically comes at the cost of significantly increased computational expense per analysis. Although rapid analysis tools have been developed that are sufficient for many design efforts, those tools may not be accurate enough for revolutionary concepts subject to grueling flight conditions such as transonic or supersonic flight and extreme angles of attack. At such conditions, the simplifying assumptions of the rapid tools no longer hold. Accurate analysis of such concepts would require models that do not make those simplifying assumptions, with corresponding increases in computational effort per analysis. As computational costs rise, exploration of the design space can become exceedingly expensive. If this expense cannot be reduced, decision-makers would be forced to choose between a thorough exploration of the design space using inaccurate models and the analysis of a sparse set of options using accurate models. This problem is exacerbated as the number of free parameters increases, limiting the number of trades that can be investigated in a given time. In the face of limited resources, it can become critically important that only the most useful experiments be performed, which raises multiple questions: how can the most useful experiments be identified, and how can experimental results be used in the most effective manner? This research effort focuses on identifying and applying techniques to address these questions. The demonstration problem for this effort was the modeling of a reusable booster vehicle, which would be subject to a wide range of flight conditions while returning to its launch site after staging. Contour-based sampling, an adaptive sampling technique, seeks cases that will improve the prediction accuracy of surrogate models for particular ranges of the responses of interest. In the case of the reusable booster, contour-based sampling was used to emphasize configurations with small pitching moments; the broad design space included many configurations that produced uncontrollable aerodynamic moments for at least one flight condition. By emphasizing designs that were likely to trim over the entire trajectory, contour-based sampling improves the predictive accuracy of surrogate models for such designs while minimizing the number of analyses required. The simplified models mentioned above, although less accurate for extreme flight conditions, can still be useful for analyzing performance at more common flight conditions. The simplified models may also offer insight into trends in the response behavior. Data from these simplified models can be combined with more accurate results to produce useful surrogate models with better accuracy than the simplified models but at less cost than if only expensive analyses were used. Of the data fusion techniques evaluated, Ghoreyshi cokriging was found to be the most effective for the problem at hand. Lastly, uncertainty present in the data was found to negatively affect the predictive accuracy of surrogate models. Most surrogate modeling techniques neglect uncertainty in the data and treat all cases as deterministic. This is plausible, especially for data produced by computer analyses that are assumed to be perfectly repeatable and thus truly deterministic. However, a number of sources of uncertainty, such as solver iteration or surrogate model prediction accuracy, can introduce noise into the data. If these sources of uncertainty can be captured and incorporated when surrogate models are trained, the resulting surrogate models are less susceptible to that noise and correspondingly have better predictive accuracy. This was accomplished in the present effort by capturing the uncertainty information via nuggets added to the kriging model. By combining these techniques, surrogate models could be created that exhibit better predictive accuracy while selecting the most informative experiments possible, significantly reducing the computational effort expended compared to a more standard approach using space-filling samples and data from a single source. The relative contributions of each technique were identified, and observations were made pertaining to the most effective way to apply the separate and combined methods.
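Contour-based sampling and the kriging nugget can both be caricatured in a short sketch: a bare-bones GP surrogate with a nugget term scores candidate designs by how close their predicted pitching moment lies to the trim (zero-moment) contour, scaled by predictive uncertainty. All kernels, data, and numbers below are invented assumptions, not the dissertation's actual implementation.

```python
import numpy as np

def rbf(a, b, length=0.15):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def predict(x_obs, y_obs, x_new, nugget=1e-2):
    """Kriging-style prediction; the nugget absorbs noise in the data."""
    k = rbf(x_obs, x_obs) + nugget * np.eye(len(x_obs))
    k_star = rbf(x_obs, x_new)
    alpha = np.linalg.solve(k, k_star)
    mean = alpha.T @ y_obs
    var = np.maximum(1.0 + nugget - np.sum(k_star * alpha, axis=0), 1e-12)
    return mean, np.sqrt(var)

def moment(x):
    # Toy pitching-moment trend over one design parameter; trim near x = 0.6.
    return 2.0 * (x - 0.6)

x_obs = np.array([0.0, 0.3, 1.0])
y_obs = moment(x_obs) + 0.05 * np.random.default_rng(2).normal(size=3)
cand = np.linspace(0, 1, 201)
mean, std = predict(x_obs, y_obs, cand)
score = np.abs(mean - 0.0) / std   # distance to the zero-moment contour
print("next design to analyze: x =", round(float(cand[np.argmin(score)]), 3))
```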
