11

Moving Data Analysis into the Acquisition Hardware

Buckley, Dave 10 1900 (has links)
ITC/USA 2014 Conference Proceedings / The Fiftieth Annual International Telemetering Conference and Technical Exhibition / October 20-23, 2014 / Town and Country Resort & Convention Center, San Diego, CA / Data acquisition for flight test is typically handled by dedicated hardware which performs specific functions and targets specific interfaces and buses. Through the use of an FPGA state-machine-based design approach, performance and robustness can be guaranteed. Up to now, sufficient flexibility has been provided by allowing the user to configure the hardware for the particular application. However, by allowing custom algorithms to run on the data acquisition hardware, far greater control and flexibility can be offered to the flight test engineer. As the volume of acquired data increases, this extra control can be used to vastly reduce the amount of data to be recorded or telemetered. Real-time analysis of test points can also now be done where post-processing would previously have been required. This paper examines examples of data acquisition, recording and processing, and investigates where data reduction and time savings can be achieved by enabling flight test engineers to run their own algorithms on the hardware.
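The kind of on-hardware data reduction described above can be illustrated with a simple deadband (report-by-exception) filter: a sample is recorded only when it deviates from the last recorded value by more than a threshold. This is a minimal Python sketch of the idea, not the paper's FPGA implementation; the signal and threshold are illustrative.

```python
import numpy as np

def deadband_compress(samples: np.ndarray, threshold: float) -> list[tuple[int, float]]:
    """Keep only samples that deviate from the last kept value by > threshold.

    Returns (index, value) pairs so the original timeline can be reconstructed.
    """
    kept = [(0, float(samples[0]))]          # always keep the first sample
    last = samples[0]
    for i, x in enumerate(samples[1:], start=1):
        if abs(x - last) > threshold:
            kept.append((i, float(x)))
            last = x
    return kept

# Illustrative use: a slowly drifting sensor signal.
rng = np.random.default_rng(0)
signal = np.cumsum(rng.normal(0, 0.01, 10_000))
reduced = deadband_compress(signal, threshold=0.5)
print(f"kept {len(reduced)} of {signal.size} samples")
```

Keeping the sample index alongside the value preserves the timeline, so a reduced stream can still be aligned with other recorded or telemetered parameters.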
12

Study of the spectroscopic variabilities of the star η Centauri

Ronaldo Savarino Levenhagen 14 August 2000 (has links)
High-resolution, high signal-to-noise spectroscopy of Be stars makes it possible to investigate rapid temporal variations in absorption-line profiles, usually attributed to non-radial pulsations, among other mechanisms. The Be phenomenon is transitory in these stars: their spectra may show the characteristics of a normal B star, or of a star with a "cool" circumstellar envelope (sharp emission and absorption lines in the visible spectrum). These stars are characterized by high rotation velocities, which are nevertheless insufficient to explain their high mass-loss rates, whose mechanisms remain poorly understood. In this work we adopt the non-radial pulsation (NRP) model to explain the temporal variations observed in the line profiles centred on HeI λ 667.8 nm. To this end, the CLEANEST method was used to determine periodicities, after preliminary comparative tests between this method and CLEAN. Both methods are highly efficient in time-series analysis, but on average CLEANEST performed better in determining both frequencies and amplitudes. Other kinds of variability present in these profiles were also studied, such as variations in the relative intensities of the line wings and in the position of the central quasi-emission peak. The spectroscopic data analysed comprise four sets of spectra obtained in 1995, 1996, 1997 and 1998 at MCT/LNA, complemented by a photometric data set from the HIPPARCOS satellite covering 1990 to 1992. The analyses of the spectroscopic and photometric data yielded results that agree with other work on η Centauri.
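Period searches like the one above operate on unevenly sampled time series. The sketch below uses the Lomb-Scargle periodogram from astropy as a common baseline for such data; it is not the CLEANEST algorithm itself, which additionally deconvolves alias structure introduced by the sampling window, and the data here are synthetic placeholders.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Placeholder data: observation times (days) and line-profile intensity
# sampled at a fixed wavelength across the HeI 667.8 nm profile.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 30, 200))                          # uneven sampling
y = 0.05 * np.sin(2 * np.pi * 1.73 * t) + rng.normal(0, 0.01, t.size)

frequency, power = LombScargle(t, y).autopower()
best_freq = frequency[np.argmax(power)]
print(f"strongest periodicity: {best_freq:.3f} cycles/day "
      f"(period {1 / best_freq:.3f} d)")
```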
13

Proposal for Project and Application of Project Management Methodology in the Company

Semančiková, Karolína January 2017 (has links)
This diploma thesis deals with offshore outsourcing in a chosen company. The outsourcing strategy is carried out using the tools and methods of project management. The first part of the thesis analyses the relevant characteristics of the company and the target market, and explains the importance of the project. The next part covers detailed planning of the project scope and timeline, together with risk and cost analyses, which also provide the basis for evaluating the project's outcomes.
14

DATA ACQUISITION SYSTEM FOR AIRCRAFT QUALIFICATION

Eccles, Lee, O’Brien, Michael, Anderson, William 10 1900 (has links)
International Telemetering Conference Proceedings / October 13-16, 1986 / Riviera Hotel, Las Vegas, Nevada / The Boeing Commercial Airplane Company presently uses an Airborne Data Analysis and Monitor System (ADAMS) to support extensive qualification testing on new and modified commercial aircraft. The ADAMS system consists of subsystems controlled by independent processors which preprocess serial PCM data, perform application-specific processing, provide graphic display of data, and manage mass storage resources. Setup and control information is passed between processors using the Ethernet protocol on a fiber optic network. Tagged data is passed between processors using a data bus with networking characteristics. During qualification tests, data are dynamically selected, analyses performed, and results recorded. Decisions to proceed or repeat tests are made in real time on the aircraft. Instrumentation in present aircraft includes up to 3700 sensors, with projections for 5750 sensors in the next generation. Concurrently, data throughput rates are increasing, and data preprocessing requirements are becoming more complex. Fairchild Weston Systems, Inc., under contract to Boeing, has developed an Acquisition Interface Assembly (AIA) which accepts multiple streams of PCM data, controls recording and playback on analog tape, performs high-speed data preprocessing, and distributes the data to the other ADAMS subsystems. The AIA processes one to three streams in any of the standard IRIG PCM formats using programmable bit, frame and subframe synchronizers. Data from ARINC buses with embedded measurement labels, bus IDs, and time tags may also be processed by the AIA. Preprocessing is accomplished by two high-performance Distributed Processing Units (DPUs) operating in either pipeline or parallel configurations. The DPUs perform concatenation functions, number system conversions, engineering unit conversions, and data tagging for distribution to the ADAMS system. Time information, from either a time code generator or tape playback, may be merged with data at 0.1 msec resolution. Control and status functions are coordinated by an embedded processor, and are accessible to other ADAMS processors via both the Ethernet interface and a local operator’s terminal. Because the AIA assembly is used in aircraft, the entire functional capability has been packaged in a 14-inch high, rack-mountable chassis with EMI shielding. The unit has been designed for high-temperature, high-altitude, high-vibration environments. The AIA will be a key element in aircraft qualification testing at Boeing well into the next generation of airframes, and the specification, design, development, and implementation of the AIA have been carried out with the significance of that fact in mind.
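The engineering-unit conversion step the DPUs perform can be pictured as a per-channel linear calibration applied to raw PCM counts, with each result tagged by measurement ID and merged time. A minimal sketch under assumed calibration constants and tag layout; it does not reflect the AIA's actual data formats.

```python
from dataclasses import dataclass

@dataclass
class TaggedSample:
    measurement_id: int
    time_s: float        # merged time, e.g. from a time-code generator
    value_eu: float      # value in engineering units

def counts_to_eu(counts: int, slope: float, offset: float) -> float:
    """Linear engineering-unit conversion: EU = slope * counts + offset."""
    return slope * counts + offset

# Illustrative: a 12-bit pressure channel, 0..4095 counts -> 0..150 psi.
sample = TaggedSample(
    measurement_id=0x2A1,
    time_s=123.4567,                        # 0.1 ms resolution
    value_eu=counts_to_eu(2048, slope=150 / 4095, offset=0.0),
)
print(sample)
```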
15

Mapping flood areas using C-SAR and SRTM images in the provinces of Santa Fe and Entre Ríos, Argentina

Graosque, Jones Zamboni January 2018 (has links)
Flood events are usually associated with intense rainfall, during which cloud cover compromises mapping with optical imagery. This work therefore evaluates flood-area mapping using SAR and SRTM data, applied to the flood-prone areas around the cities of Santa Fe and Paraná in Argentina. Although the largest recorded flood occurred in 2003, flood events are frequent in the provinces of Santa Fé and Entre Ríos. Sentinel-1 imagery was used, acquired by the C-SAR sensor in dual polarization (VV/VH) in Interferometric Wide (IW) swath mode as Ground Range Detected (GRDH) products with 10 m spatial resolution. Images from periods with and without flood events between 2016 and 2017 were calibrated and co-registered. Thresholding and temporal-analysis techniques were applied to the images to map the flood extent, and a third map was derived from a Digital Elevation Model (DEM) using river water-level gauging stations as reference. Validation of all methods was fully remote, based on a mapping of the April 2003 flood in the city of Santa Fe, complemented by published images of flood events and by comparison with a 15 m Landsat-8 optical image from 22 February 2016, when the Paraná River was above its alert level. The three maps were combined into a single image of the common flood extent. Individually, the DEM analysis achieved the best result, capturing 82% of the flood area; combining the three methods raised the accuracy to over 91%. The thresholding technique was most efficient in areas without vertical targets such as urban zones. The DEM simulated flooding over all target types, and higher-resolution elevation models would make the final mapping more accurate. Temporal analysis proved a promising technique for flood mapping, although a detailed land-use map is essential to improve its results. Since all processing was done remotely, an automatic flood-event detection system could in future be developed for areas with similar characteristics.
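The thresholding step can be illustrated as a simple backscatter cut on a calibrated SAR band: open water appears dark, so pixels below a threshold are classified as flooded. A minimal numpy sketch with assumed array inputs and an illustrative threshold; real workflows calibrate to sigma-nought in dB and often derive the threshold from the image histogram (e.g. Otsu's method).

```python
import numpy as np

def flood_mask(sigma0_db: np.ndarray, threshold_db: float = -18.0) -> np.ndarray:
    """Classify pixels as water where calibrated backscatter falls below a cut.

    sigma0_db: 2-D array of sigma-nought values in dB (e.g. Sentinel-1 VV).
    Returns a boolean mask (True = water).
    """
    return sigma0_db < threshold_db

# Change detection: flooded = water in the event image but not in the
# dry-reference image, so permanent water bodies are excluded.
rng = np.random.default_rng(2)
dry = rng.normal(-10, 2, (500, 500))                     # placeholder dry scene
wet = dry.copy()
wet[200:300, 100:400] = rng.normal(-22, 1, (100, 300))   # synthetic flood patch

new_flood = flood_mask(wet) & ~flood_mask(dry)
print(f"flooded pixels: {new_flood.sum()}")
```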
16

Reliability of a Commercially Available and Algorithm-Based Kinetic Analysis Software Compared to Manual-Based Software

Carroll, Kevin M., Wagle, John P., Sato, Kimitake, DeWeese, Brad H., Mizuguchi, Satoshi, Stone, Michael H. 26 September 2017 (has links)
There is a need for reliable kinetic-data analysis techniques for coaches and sport scientists who employ athlete monitoring practices. The purpose of the study was: (1) to determine intra- and inter-rater reliability within a manual-based kinetic analysis program; and (2) to determine test-retest reliability of an algorithm-based kinetic analysis program. Five independent raters used a manual analysis program to analyse 100 isometric mid-thigh pull (IMTP) trials obtained from previously collected data. Each trial was analysed three times. The same IMTP trials were then analysed using algorithm-based analysis software. Variables measured were peak force, rate of force development from 0 to 50 ms (RFD50) and RFD from 0 to 200 ms (RFD200). Intraclass correlation coefficients (ICC) and coefficients of variation (CV) were used to assess intra- and inter-rater reliability. Nearly perfect reliability was observed for the manual-based method (ICC > 0.92). However, poor intra- and inter-rater CVs were observed for RFD (CV > 16.25% and CV > 32.27%, respectively). The algorithm-based method resulted in perfect reliability in all measurements (ICC = 1.0, CV = 0%). While manual methods of kinetic analysis may provide sufficient reliability, the perfect reliability observed with the algorithm-based method in the current study suggests it is a superior method for use in athlete monitoring programs.
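The two reliability statistics used above can be sketched directly. Below is a minimal Python example computing a one-way ICC(1,1) and the coefficient of variation across repeated analyses; this is a generic formulation with placeholder data, not necessarily the exact ICC model the study used.

```python
import numpy as np

def icc_1_1(scores: np.ndarray) -> float:
    """One-way random-effects ICC(1,1) for a (trials x raters) score matrix."""
    n, k = scores.shape
    row_means = scores.mean(axis=1)
    grand = scores.mean()
    ms_between = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((scores - row_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

def cv_percent(repeats: np.ndarray) -> float:
    """Coefficient of variation (%) across repeated analyses of one trial."""
    return 100.0 * repeats.std(ddof=1) / repeats.mean()

# Illustrative: 100 trials scored by 5 raters (placeholder data).
rng = np.random.default_rng(3)
true_scores = rng.normal(3000, 400, 100)                  # e.g. peak force in N
ratings = true_scores[:, None] + rng.normal(0, 30, (100, 5))
print(f"ICC(1,1) = {icc_1_1(ratings):.3f}")
print(f"CV of trial 0 = {cv_percent(ratings[0]):.2f}%")
```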
17

Hierarchical server-based communication with switched Ethernet

Yekeh, Farahnaz January 2010 (has links)
Server-based architectures have recently generated increased interest and are being considered for communication in networks. In parallel, switched Ethernet technology has been widely adopted in many networked systems. The requirement that networks support real-time guarantees while remaining flexible has led network designers to consider adding features to common switches. The FTT-enabled Ethernet switch was developed to support the FTT (Flexible Time Triggered) paradigm, and servers have recently been added to these switches to manage the bandwidth allocated to different types of messages efficiently.

A hierarchical network of Ethernet switches can be designed in different ways depending on the overall goals and properties of the network. In this thesis, after a study of different design solutions, an architecture is proposed based on FTT-enabled switches, motivated by their support for real-time constraints and server-based communication. A protocol for bandwidth reservation in this hierarchically composed Ethernet switch architecture is then developed; its behavior is described in detail and modeled using Uppaal, and the temporal behavior (timing) of the network is presented.
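The bandwidth-managing servers mentioned above act as traffic regulators for a message class. A minimal token-bucket-style sketch of that general idea; it is a generic illustration of bandwidth reservation, not the thesis's actual FTT server algorithm, and all rates and sizes are illustrative.

```python
class BandwidthServer:
    """Token-bucket style server: a message class may transmit only while
    it has budget, which replenishes at its reserved rate."""

    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps          # reserved bandwidth
        self.budget = burst_bits      # current budget (bits)
        self.burst = burst_bits       # budget ceiling
        self.last_t = 0.0

    def try_send(self, now_s: float, frame_bits: int) -> bool:
        # Replenish budget for the elapsed time, capped at the burst size.
        self.budget = min(self.burst,
                          self.budget + (now_s - self.last_t) * self.rate)
        self.last_t = now_s
        if frame_bits <= self.budget:
            self.budget -= frame_bits
            return True               # frame admitted within the reservation
        return False                  # defer: reservation exhausted

# Illustrative: a 10 Mbit/s reservation with a 12 kbit burst allowance.
srv = BandwidthServer(rate_bps=10e6, burst_bits=12_000)
print(srv.try_send(0.0000, 12_000))   # True  - within the initial burst
print(srv.try_send(0.0001, 12_000))   # False - only ~1 kbit replenished
```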
18

Analysis and Optimisation of Distributed Embedded Systems with Heterogeneous Scheduling Policies

Pop, Traian January 2007 (has links)
The growing amount and diversity of functions to be implemented by current and future embedded applications (for example, in automotive electronics) have shown that, in many cases, time-triggered and event-triggered functions have to coexist on the computing nodes and interact over the communication infrastructure. When time-triggered and event-triggered activities share the same processing node, a natural way to support their execution is through a hierarchical scheduler. Similarly, when such heterogeneous applications are mapped onto a distributed architecture, the communication infrastructure should allow message exchange in both a time-triggered and an event-triggered manner, to ensure straightforward interconnection of heterogeneous components. This thesis studies aspects related to the analysis and design optimisation of safety-critical hard real-time applications running on hierarchically scheduled distributed embedded systems. It first provides the basis for the timing analysis of the activities in such a system, carefully taking into consideration all the interferences that appear at run-time between processes executed under different scheduling policies. Moreover, due to the distributed nature of the architecture, message delays are also taken into account during the timing analysis. Once the schedulability analysis is in place, the entire system can be optimised by adjusting its configuration parameters. In our work, the optimisation process is directed by the results of the timing analysis, with the goal that the timing constraints of the application are ultimately satisfied. The analysis and design methodology proposed in the first part of the thesis is then applied to the particular category of distributed systems that use FlexRay as a communication protocol. We first provide a schedulability analysis for messages transmitted over a FlexRay bus, and then propose a bus access optimisation algorithm that aims to improve the timing properties of the entire system. For all the problems investigated, we carried out extensive experiments to measure the efficiency of the proposed solutions. The results confirmed both the importance of the addressed aspects during system-level design and the applicability of our techniques for analysing and optimising the studied systems.
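The kind of schedulability analysis the thesis builds on can be illustrated with the classical fixed-priority response-time recurrence, R = C + Σ_{j∈hp} ⌈R/T_j⌉·C_j, iterated to a fixed point. A minimal sketch of that textbook analysis only, without the hierarchical-scheduling and FlexRay extensions the thesis develops; the task set is illustrative.

```python
import math

def response_time(c: float, higher_prio: list[tuple[float, float]],
                  deadline: float) -> float | None:
    """Fixed-point iteration of R = C + sum(ceil(R/Tj) * Cj) over hp tasks.

    higher_prio: (Cj, Tj) pairs for all higher-priority tasks.
    Returns the worst-case response time, or None if it exceeds the deadline.
    """
    r = c
    while True:
        r_next = c + sum(math.ceil(r / t_j) * c_j for c_j, t_j in higher_prio)
        if r_next == r:
            return r
        if r_next > deadline:
            return None               # unschedulable at this priority
        r = r_next

# Illustrative task set: (C, T) with T as implicit deadline, highest priority first.
tasks = [(1.0, 4.0), (2.0, 8.0), (3.0, 16.0)]
for i, (c, t) in enumerate(tasks):
    print(f"task {i}: R = {response_time(c, tasks[:i], deadline=t)}")
```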
20

Equipment data analysis study: failure time data modeling and analysis

Zhu, Chen, master of science in engineering 16 August 2012 (has links)
This report presents descriptive data analysis and failure-time modeling that can be used to characterize the pattern of failure times. The descriptive analysis includes the mean, median, 1st quartile, 3rd quartile, frequency, standard deviation, skewness, kurtosis, minimum, maximum and range. Models such as the exponential, gamma, normal, lognormal, Weibull and log-logistic distributions have been studied for the failure time data. The data in this report come from the South Texas Project (STP) and were collected over the last 40 years. We generated more than 1000 groups of STP failure time data based on Mfg Part Number, and the top twelve groups were selected as the study set. For each group, we fit the different models and obtained their parameters. Significance levels and p-values were obtained from the Kolmogorov-Smirnov test, a goodness-of-fit test that measures how well a distribution fits the data. In this report, the Weibull distribution proved to be the most appropriate model for the STP dataset: of the twelve groups, eight are best fit by a Weibull distribution. In general, the Weibull distribution is powerful for failure-time modeling.
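Fitting one of these candidate distributions and checking it with the Kolmogorov-Smirnov test takes only a few lines. A sketch using scipy with made-up failure times; the report's actual STP data are not reproduced here.

```python
import numpy as np
from scipy import stats

# Hypothetical failure times in hours (illustrative only, not STP data).
failures = np.array([120., 340., 95., 410., 280., 150., 520., 60., 230.,
                     390., 175., 310.])

# Fit a two-parameter Weibull (location pinned at 0), then test the fit.
shape, loc, scale = stats.weibull_min.fit(failures, floc=0)
ks_stat, p_value = stats.kstest(failures, "weibull_min",
                                args=(shape, loc, scale))
print(f"Weibull shape={shape:.2f}, scale={scale:.1f}")
print(f"KS statistic={ks_stat:.3f}, p-value={p_value:.3f}")
```

A caveat worth noting: applying the KS test with parameters estimated from the same data makes the nominal p-values optimistic, which is one reason goodness-of-fit conclusions are usually checked across many groups, as the report does.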
