11

Manipulations of spike trains and their impact on synchrony analysis

Pazienti, Antonio January 2007 (has links)
The interaction between neuronal cells can be identified as the computing mechanism of the brain. Neurons are complex cells that do not operate in isolation; they are organized in a highly connected network structure. There is experimental evidence that groups of neurons dynamically synchronize their activity, and that this synchronization underlies brain function at all levels of complexity. A fundamental step in proving this hypothesis is to analyze large sets of single neurons recorded in parallel. Techniques to obtain such data are now available, but advances are needed in the pre-processing of the large volumes of acquired data and in data analysis techniques. Major issues include extracting the signal of single neurons from the noisy recordings (referred to as spike sorting) and assessing the significance of the synchrony. This dissertation addresses these issues with two complementary strategies, both founded on the manipulation of point processes under rigorous analytical control. On the one hand, I modeled the effect of spike sorting errors on correlated spike trains by corrupting them with realistic failures, and studied the corresponding impact on correlation analysis. The results show that correlations between multiple parallel spike trains are severely affected by spike sorting, especially by erroneously missing spikes. In this case, sorting strategies that classify only "good" spikes (conservative strategies) lead to less accurate results than "tolerant" strategies. On the other hand, I investigated the effectiveness of methods for assessing significance that create surrogate data by displacing spikes around their original position (referred to as dithering). I provide analytical expressions for the probability of coincidence detection after dithering. The effectiveness of spike dithering in creating surrogate data strongly depends on the dithering method and on the method of counting coincidences.
Closed-form expressions and bounds are derived for the case where the dither equals the allowed coincidence interval. This work provides new insights into the methodologies of identifying synchrony in large-scale neuronal recordings and of assessing its significance. / Information processing in the brain is carried out largely by interactive processes of nerve cells, so-called neurons, which exhibit complex dynamics in their chemical and electrical properties. There is clear evidence that groups of synchronized neurons may ultimately explain the functioning of the brain at all levels. To answer the difficult question of how exactly the brain works, it is therefore necessary to measure the activity of many neurons simultaneously. The technical prerequisites for this have been established in recent decades by multi-electrode systems, which are now in widespread use and enable simultaneous extracellular recordings from up to several hundred channels. A prerequisite for the correlation analysis of many parallel recordings is the correct detection and assignment of the action potentials of individual neurons, a procedure known as spike sorting. A further challenge is the statistically sound assessment of empirically observed correlations. In this dissertation I present a theoretical study of the pre-processing of data by spike sorting and its influence on the accuracy of statistical analysis methods, as well as of the effectiveness of surrogate-data generation for estimating the statistical significance of correlations. I use two complementary strategies, both based on the analytical treatment of point-process manipulations. In a detailed study I modeled the effect of spike sorting on correlated spike trains corrupted with realistic errors.
To compare the results of two different correlation-analysis methods on the corrupted as well as the uncorrupted processes, I derive the corresponding analytical formulas. My results show that coincident activity patterns across multiple spike trains are substantially affected by spike classification. This occurs when spikes are erroneously assigned to a neuron although they belong to other neurons or are noise artifacts (false-positive errors). However, false-negative errors (erroneously unclassified or misclassified spikes) have a far greater influence on the significance of the correlations. In a further study I investigate the effectiveness of a class of surrogate methods, so-called dithering procedures, which destroy pairwise correlations by displacing coincident spikes from their original positions within a small time window. It turns out that the effectiveness of spike dithering for generating surrogate data depends both on the dithering method and on the method of counting coincidences. I provide analytical formulas for the probability of coincidence detection after dithering. The present work offers new insights into methods for correlation analysis of multivariate point processes, with a detailed examination of different statistical influences on significance estimation, and yields practical guidelines for handling data in synchrony analysis.
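The effect of dithering on coincidence counts can be illustrated numerically. The sketch below is a Monte Carlo check, not the thesis's analytical derivation: both spikes of an exactly coincident pair are displaced by independent uniform shifts, and we estimate how often the pair is still counted as coincident under a simple distance-based coincidence criterion. All function names are hypothetical.

```python
import numpy as np

def coincidence_survival(dither, window, n_pairs=20000, seed=0):
    """Monte Carlo estimate of the probability that an exactly
    coincident spike pair is still counted as a coincidence after
    both spikes are independently displaced by U(-dither, +dither).
    Illustrative sketch only; the thesis derives this analytically."""
    rng = np.random.default_rng(seed)
    shift_a = rng.uniform(-dither, dither, n_pairs)
    shift_b = rng.uniform(-dither, dither, n_pairs)
    # The pair survives if the relative displacement stays within
    # the allowed coincidence window.
    return float(np.mean(np.abs(shift_a - shift_b) <= window))
```

For the special case highlighted in the abstract, dither equal to the allowed coincidence interval, the difference of two uniform shifts is triangularly distributed, and the survival probability comes out to 3/4 under this particular counting scheme; widening the dither relative to the window lowers it further.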
12

A Feasible Evaluation and Analysis of Visual Air Quality Index in Urban Areas

Chang, Kuo-chung 21 July 2006 (has links)
This research analyzed weather data (temperature, wind velocity, visibility, and total cloudiness) from the Taipei and Kaohsiung Weather Stations of the Central Weather Bureau, and air pollution data from the Air Quality Monitoring Stations of the Environmental Protection Administration, Executive Yuan (Shihlin, Jhongshan, Wanhua, Guting, Songshan, Nanzih, Zuoying, Cianjin, and Siaogan), to evaluate by statistical analysis the feasibility of using visibility as an ambient air quality index. Regarding visibility in the Taipei metropolis, visibility between 1983~1992 remained steady at 5~11 kilometers. After 1993, visibility increased gradually to 6~16 kilometers, indicating that visual air quality in the Taipei metropolis has improved year by year. Regarding visibility in the Kaohsiung metropolis, the index shows a year-by-year decreasing trend from 10~24 kilometers to 2~12 kilometers, and the decrease was particularly obvious after 1993. When the air quality index in the metropolis is greater than 100, visibility is categorized as "poor", meaning visibility within 3 kilometers. When the air quality index ranges between 76~100, visibility is categorized as "medium", meaning visibility within 4 kilometers. When the air quality index ranges between 50~75, visibility is categorized as "good", meaning visibility within 7 kilometers. When the air quality index ranges between 20~49, visibility is categorized as "excellent", meaning visibility beyond 7 kilometers.
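The category thresholds reported above amount to a simple lookup from air quality index to visibility level; a sketch (thresholds copied from the abstract, function name hypothetical):

```python
def visibility_level(aqi):
    """Map an air quality index value to the visibility level and
    range reported in the abstract. Thresholds are from the text;
    the function itself is an illustrative sketch."""
    if aqi > 100:
        return ("poor", "within 3 km")
    if 76 <= aqi <= 100:
        return ("medium", "within 4 km")
    if 50 <= aqi <= 75:
        return ("good", "within 7 km")
    if 20 <= aqi <= 49:
        return ("excellent", "beyond 7 km")
    return ("unclassified", None)  # AQI < 20 is not covered by the abstract
```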
13

Improving the Treated Water for Water Quality and Good Tastes from Traditional and Advanced Water Treatment Plants

Han, Chia-Yun 19 July 2007 (has links)
The purpose of this research is to compare the treated-water quality of two traditional water treatment plants (WTPs) and three advanced water treatment plants (AWTPs), and to investigate the treated drinking water in the distribution systems of the Kaohsiung area, in order to promote consumers' confidence. Samples of treated water from five major water-supply WTPs (denoted WF1, WF2, WF3, WF4 and WF5) and of tap water at the user's end were selected for this work. During ROC years 91 to 92 (2002–2003), the distribution systems in the Kaohsiung area were still supplied by traditional WTPs, so at this stage we sampled WF1 and WF2 eight times and WF3, WF4 and WF5 twice. From ROC year 93 to 94 (2004–2005), the advanced WTP stage, we sampled WF1, WF2, WF3, WF4 and WF5 eight times each. The major tests, related to parameters influencing operating conditions, included pH, odor (threshold odor number, TON), total trihalomethanes (THMs), haloacetic acids (HAAs), ammonia nitrogen (NH3-N), hardness, total dissolved solids (TDS), alkalinity, total organic carbon (TOC), calcium ion, flavor profile analysis (FPA), and observation of suspended matter on boiling, applied to treated waters from the two WTPs and three AWTPs and to tap water at the user's end in the distribution system. The results indicate that the quality of water treated by the advanced plants is better than that of the traditional plants. The parameters showing improved water quality include THMs, HAAs, hardness, TON, 2-MIB, TOC, alkalinity and calcium ion concentration, with improvement efficiencies of 47%, 29%, 43%, 11%, 29%, 15%, 14% and 34%, respectively. The improvements in TDS, NH3-N, pH and FPA were insignificant.
Six water-quality parameters meet the current drinking water standards in Taiwan (odor < 3 TON; THMs < 0.1 mg/L; NH3-N < 0.1 mg/L; TDS < 600 mg/L; hardness < 400 mg as CaCO3/L; 6.0 < pH < 8.5). HAAs meet the USEPA Stage 1 water standard (HAAs < 80 μg/L). In the boiling experiments, we contrasted the observation of suspended matter with the TDS and hardness measurements and found that, for water treated by the advanced process, the amount of suspended matter decreases greatly as boiling time increases. This shows that treated drinking water from the advanced WTPs can greatly alleviate the problems of white suspended substance or white precipitate that appear on boiling water from the traditional WTPs. From the contour maps of water quality, we found that Gushan District, Lingya District, Qianzhen District, Xiaogang District, Fongshan City and Daliao Shiang, among others, had higher concentration profiles across the four seasons (spring, summer, fall and winter) and the two hydrological seasons (rainy and dry) in the water-supply systems. We hope the contour maps can offer clear information to administrators of drinking-water conveyance systems and show them which areas develop high concentrations, so that water-quality management planning can apply prioritized, effective solutions (such as washing pipelines, replacing pipelines, or changing the water flow).
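The quoted improvement efficiencies are, presumably, relative reductions of each parameter between the traditional and advanced plants; that arithmetic can be sketched as follows (the concentrations are illustrative values chosen to reproduce the quoted 47% for THMs, not measurements from the thesis):

```python
def improvement_efficiency(traditional, advanced):
    """Relative improvement (%) of the advanced plant over the
    traditional plant for one water-quality parameter."""
    return 100.0 * (traditional - advanced) / traditional

# Illustrative only: a THMs reduction from 0.060 mg/L to 0.0318 mg/L
# would correspond to the 47% efficiency quoted for THMs.
thms_eff = improvement_efficiency(0.060, 0.0318)
```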
14

Noise Path Identification For Vibro-acoustically Coupled Structures

Serafettinoglu, Hakan A 01 March 2004 (has links) (PDF)
Structures of machinery with practical importance, such as home appliances or transportation vehicles, can be considered as acoustically coupled spaces surrounded by elastic enclosures. When these structures are excited mechanically by prime movers incorporated into them through elastic connections, the generation of noise becomes an inevitable by-product. For noise control engineering purposes, a thorough understanding of the emission, transmission and radiation of sound from the structure is required before the noise transfer mechanisms can be modeled practically. Finally, a model of the complete noise generation and transfer mechanisms is needed, which is essential for the abatement of annoying sound generation. In this study, an experimental and analytical (finite element) methodology for the modal analysis of acoustical cavities is developed and successfully applied to a case study. The acoustical transmission problem of the structure is investigated via vector intensity analysis. Results of this investigation are used for noise path identification, and the transfer functions between noise sources and relevant receiving points are obtained using the vibro-acoustic reciprocity principle. The concept of transfer path analysis is investigated using multi-input, multi-output linear system theory for the vibro-acoustic modeling of machinery structures. Finally, resolution and ranking of noise sources and transfer paths are accomplished via the spectral correlation methodologies developed. The methodology can be extended to any system with linear, time-invariant parameters, where the principles of superposition and reciprocity are applicable.
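The transfer functions central to such a transfer path analysis are commonly estimated with the H1 estimator (cross-spectrum divided by input auto-spectrum). A minimal SciPy sketch on a synthetic single-path system, not the thesis's measured data:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
fs = 1024
x = rng.standard_normal(16 * fs)         # broadband source excitation
b, a = signal.butter(2, 100, fs=fs)      # a known "transfer path" (lowpass)
y = signal.lfilter(b, a, x)              # receiver response through the path

# H1 frequency-response estimate: cross-spectrum over input auto-spectrum.
f, pxy = signal.csd(x, y, fs=fs, nperseg=1024)
f, pxx = signal.welch(x, fs=fs, nperseg=1024)
h1 = pxy / pxx                           # estimated path FRF at each frequency
```

On this noiseless synthetic path the H1 estimate closely recovers the known filter response; with measurement noise at the receiver, H1 remains a standard choice because uncorrelated noise averages out of the cross-spectrum.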
15

High-dimensional statistical data integration

January 2019 (has links)
Modern biomedical studies often collect multiple types of high-dimensional data on a common set of objects. A representative model for the integrative analysis of multiple data types is to decompose each data matrix into a low-rank common-source matrix generated by latent factors shared across all data types, a low-rank distinctive-source matrix corresponding to each data type, and an additive noise matrix. We propose a novel decomposition method, called decomposition-based generalized canonical correlation analysis, which appropriately defines those matrices by imposing a desirable orthogonality constraint on the distinctive latent factors, aiming to sufficiently capture the common latent factors. To further delineate the common and distinctive patterns between two data types, we propose another new decomposition method, called common and distinctive pattern analysis. This method takes into account the common and distinctive information between the coefficient matrices of the common latent factors. We develop consistent estimation approaches for both proposed decompositions under high-dimensional settings, and demonstrate their finite-sample performance via extensive simulations. We illustrate the superiority of the proposed methods over state-of-the-art alternatives with real-world data examples obtained from The Cancer Genome Atlas and the Human Connectome Project. / Zhe Qu
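A generic version of the underlying model, each data block driven by shared latent factors plus noise, and a naive recovery of the common subspace via an SVD of the concatenated blocks, can be sketched as follows. This is only an illustration of the data model, not the proposed decomposition-based generalized CCA:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p1, p2, r = 100, 40, 50, 3

# Shared latent factors generate the common-source part of both blocks.
U = rng.standard_normal((n, r))
X1 = U @ rng.standard_normal((r, p1)) + 0.1 * rng.standard_normal((n, p1))
X2 = U @ rng.standard_normal((r, p2)) + 0.1 * rng.standard_normal((n, p2))

# Naive recovery of the common column space: SVD of the concatenated,
# column-centered blocks (a sketch, not the thesis's estimator).
X = np.hstack([X1 - X1.mean(0), X2 - X2.mean(0)])
u, s, vt = np.linalg.svd(X, full_matrices=False)
common_scores = u[:, :r] * s[:r]   # estimated common latent factors
```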
16

Relationships between Hospital-Centered and Multihospital-Centered Factors and Perceived Effectiveness: A Canonical Study of Nonprofit Hospitals

Yavas, Ugur, Romanova, Natalia 01 December 2003 (has links)
This article reports on the results of a survey which investigated the nature of relationships between hospital and multihospital organization-centered factors and background characteristics, and multihospital organization effectiveness. Canonical correlation is employed in analyzing the data. Results and their implications are discussed.
17

A two-pronged approach to improve distant homology detection

Lee, Marianne M. 26 June 2009 (has links)
No description available.
18

Relationships Between Land Use, Land-Use Change, and Water-Quality Trends in Virginia

Gildea, Jason James 26 May 2000 (has links)
This research examines the relationships between land use and surface water quality trends in Virginia. Data from 168 surface water quality monitoring stations throughout Virginia were analyzed for trends for the period of 1978 to 1995. Water-quality data available at these stations included dissolved oxygen saturation (DO), biochemical oxygen demand (BOD), pH, total residue (TR), non-filterable residue (NFR), nitrate-nitrite nitrogen (NN), total Kjeldahl nitrogen (TKN), total phosphorus (TP), and fecal coliform (FC). A seasonal Kendall analysis was used to determine trends for each water-quality parameter at each station; this analysis produced an indicator (Kendall's tau) of improving or declining water quality. Median values for each water-quality variable were also determined at the monitoring stations. Virginia land use was determined from the USGS Land Use Land Cover (LULC) data (1970s) and the Multi-resolution Land Characteristics (MRLC) data (1990s). Land-use variables included urban, forest, pasture, cropland, total agriculture, and urban change. These six variables were correlated with Kendall's tau to determine if relationships exist between water-quality trends and land use. Water-quality medians and land use were also correlated. In general, highly forested watersheds in Virginia were associated with improving water quality over the 1978 to 1995 study period. These watersheds were also commonly associated with better water quality as measured by the water-quality medians. Watersheds with less agricultural land tended to be associated with improving water quality. Better water quality, as measured by the water-quality medians, was generally associated with watersheds possessing fewer urban acres. There were few significant relationships between water-quality medians and agricultural variables. / Master of Science
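The two statistical steps described, a Kendall-tau trend indicator per station and then correlation of those indicators with land-use variables, can be sketched with SciPy (synthetic numbers, and a plain non-seasonal tau rather than the seasonal Kendall test used in the study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Step 1 (per station): a trend indicator from Kendall's tau of a
# water-quality series against time, e.g. a declining fecal coliform level.
years = np.arange(1978, 1996)
fc = 50 - 1.2 * (years - 1978) + rng.standard_normal(len(years))
tau, p = stats.kendalltau(years, fc)   # negative tau = improving quality

# Step 2 (across stations): correlate a land-use variable with the
# per-station taus, mirroring the study's second-stage analysis.
forest = rng.uniform(0.1, 0.9, 168)                       # forest fraction
taus = -0.5 * forest + 0.1 * rng.standard_normal(168)      # synthetic taus
rho, pval = stats.spearmanr(forest, taus)
```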
19

Canonical Correlation and Clustering for High Dimensional Data

Ouyang, Qing January 2019 (has links)
Multi-view datasets arise naturally in statistical genetics when the genetic and trait profile of an individual is portrayed by two feature vectors. A motivating problem concerning the Skin Intrinsic Fluorescence (SIF) study on the Diabetes Control and Complications Trial (DCCT) subjects is presented. A widely applied quantitative method to explore the correlation structure between the two domains of a multi-view dataset is Canonical Correlation Analysis (CCA), which seeks canonical loading vectors such that the transformed canonical covariates are maximally correlated. In the high-dimensional case, regularization of the dataset is required before CCA can be applied. Furthermore, the nature of genetic research suggests that sparse output is more desirable. In this thesis, two regularized CCA (rCCA) methods and a sparse CCA (sCCA) method are presented. When a correlation sub-structure exists, a stand-alone CCA method will not perform well. To tackle this limitation, a mixture of local CCA models can be employed. In this thesis, I review a correlation clustering algorithm proposed by Fern, Brodley and Friedl (2005), which seeks to group subjects into clusters such that features are identically correlated within each cluster. An evaluation study is performed to assess the effectiveness of the CCA and correlation clustering algorithms using artificial multi-view datasets. Both sCCA and sCCA-based correlation clustering exhibited superior performance compared to rCCA and rCCA-based correlation clustering. The sCCA and sCCA-based clustering methods are applied to the multi-view dataset consisting of PrediXcan-imputed gene expression and SIF measurements of DCCT subjects. The stand-alone sparse CCA method identified 193 of 11,538 genes as correlated with SIF#7. Further investigation of these 193 genes with simple linear regression and t-tests revealed that only two genes, ENSG00000100281.9 and ENSG00000112787.8, were significantly associated with SIF#7.
No plausible clustering scheme was detected by the sCCA-based correlation clustering method. / Thesis / Master of Science (MSc)
20

Aggregation of Group Prioritisations for Energy Rationing with an Additive Group Decision Model : A Case Study of the Swedish Emergency Preparedness Planning in case of Power Shortage

Petersen, Rebecca January 2016 (has links)
The backbone of our industrialised society and economy is electricity. To avoid a catastrophic situation, a plan for how to act during a power shortage is crucial. Previous research shows that decision models support decision makers in providing efficient energy rationing during power shortages in the Netherlands, the United States and Canada. The existing research needs to be expanded with a group decision model to enable group decisions. This study is conducted with a case study approach in which the Swedish emergency preparedness plan for power shortages, named Styrel, is explored and used to evaluate properties of a proposed group decision model. The study consists of a qualitative phase and a quantitative phase, the latter including a Monte Carlo simulation of group decisions in Styrel evaluated with correlation analysis. The qualitative results show that participants in Styrel experience the group decisions as time-consuming and unstructured. The current decision support is not used in either of the two counties included in the study, with the motivation that the preferences provided by the decision support are misleading. The proposed group decision model includes a measurable value function assigning values to priority classes for electricity users, an additive model to represent the preferences of individual decision makers, and an additive group decision model to aggregate the preferences of several individual decision makers into a group decision. The conducted simulation indicates that the proposed group decision model, evaluated in Styrel, is sensitive to significant changes and more robust to moderate changes in preference differences between priority classes.
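The proposed aggregation, individual additive value functions combined into a group value by weighting the decision makers, can be sketched as follows (all weights and scores are hypothetical illustrations, not values from Styrel):

```python
import numpy as np

def additive_value(scores, criterion_weights):
    """Additive model: one decision maker's value for an electricity
    user is a weighted sum of per-criterion scores in [0, 1]."""
    return float(np.dot(scores, criterion_weights))

def group_value(all_scores, criterion_weights, dm_weights):
    """Additive group model: weighted mean of the individual
    decision makers' additive values."""
    individual = [additive_value(s, criterion_weights) for s in all_scores]
    return float(np.dot(dm_weights, individual))

# Illustrative: three decision makers score one electricity user on two
# criteria (e.g. life-safety impact, economic impact); weights sum to 1.
criterion_w = [0.7, 0.3]
scores_by_dm = [[0.9, 0.4], [0.8, 0.5], [1.0, 0.3]]
dm_w = [0.5, 0.3, 0.2]
g = group_value(scores_by_dm, criterion_w, dm_w)
```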
