11

A Comparison of Fault Detection Methods For a Transcritical Refrigeration System

Janecke, Alex Karl August 2011 (has links)
When released into the atmosphere, traditional refrigerants contribute to climate change several orders of magnitude more than a corresponding amount of carbon dioxide. For that reason, increasing interest has been paid in recent years to transcritical vapor compression systems, which use carbon dioxide as the refrigerant. Vapor compression systems also impact the environment through their energy consumption, which can be greatly increased by faulty operation. Automated techniques for detecting and diagnosing faults have been widely tested on subcritical systems but have not been applied to transcritical systems. These methods involve either dynamic analysis of the vapor compression cycle or a variety of algorithms based on steady-state behavior. In this thesis, the viability of dynamic fault detection is tested against that of static fault detection for a transcritical refrigeration system. Step tests are used to determine that transient behavior does not give additional useful information; the same tests performed on a subcritical air conditioner likewise show little value in dynamic fault detection. A static, component-based method of fault detection previously applied to subcritical systems is also tested for all pairings of four faults: over/undercharge, evaporator fouling, gas cooler fouling, and compressor valve leakage. This technique allows low-cost measurement and independent detection of individual faults even when multiple faults are present. Results of this method are promising and allow distinction between faulty and fault-free behavior.
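The static, component-based approach can be sketched in a few lines: compare each component's steady-state performance indicator against its fault-free reference value and flag components whose residuals exceed a threshold. The sketch below is only an illustration of the general idea, not the thesis's implementation; all names, values, and thresholds are assumptions.

```python
# Hypothetical steady-state, component-based fault check: each
# component's performance indicator is compared against its
# fault-free reference; a residual beyond the threshold flags a fault.

def detect_faults(measured, reference, thresholds):
    """Return a dict marking each component as faulty (True) or not."""
    faults = {}
    for name, value in measured.items():
        residual = value - reference[name]
        faults[name] = abs(residual) > thresholds[name]
    return faults

# Illustrative normalized indicators (1.0 = fault-free design value).
measured = {"evaporator_UA": 0.82, "gas_cooler_UA": 0.97, "valve_leakage": 0.04}
reference = {"evaporator_UA": 1.00, "gas_cooler_UA": 1.00, "valve_leakage": 0.00}
thresholds = {"evaporator_UA": 0.10, "gas_cooler_UA": 0.10, "valve_leakage": 0.02}

print(detect_faults(measured, reference, thresholds))
# evaporator fouling and valve leakage are flagged; the gas cooler is not
```

Because each component is checked against its own reference, individual faults remain distinguishable even when several are present at once, which mirrors the independence property described in the abstract.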
12

Adaptable, scalable, probabilistic fault detection and diagnostic methods for the HVAC secondary system

Li, Zhengwei 30 March 2012 (has links)
As the popularity of building automation systems (BAS) increases, there is a growing need to understand and analyze HVAC system behavior from monitoring data. However, several constraints prevent FDD technology from being widely accepted: 1) diagnostic results are difficult to understand; 2) FDD methods have strong system dependency and low adaptability; 3) the performance of FDD methods is still not satisfactory; and 4) information is often lacking. This thesis aims at removing these constraints, with a specific focus on the air handling unit (AHU), one of the most common HVAC components in commercial buildings. To achieve this, the following work has been done. On understanding diagnostic results, a standard information structure comprising probability, criticality, and risk is proposed. On improving method adaptability, a low-system-dependency FDD method, the rule-augmented CUSUM method, is developed and tested, and another highly adaptable method, principal component analysis (PCA), is implemented and tested. On improving overall FDD performance (detection sensitivity and diagnostic accuracy), the hypothesis that an integrated approach combining different FDD methods could improve performance is proposed, and both deterministic and probabilistic integration approaches are implemented to verify it. On understanding the value of information, FDD results for a test system under different information-availability scenarios are compared. The results show that the rule-augmented CUSUM method is able to detect abrupt faults and most incipient faults, and is therefore reliable in practice. They also show that overall improvement of FDD performance is possible with the Bayesian integration approach, given accurate parameters (sensitivity and specificity), but is not guaranteed with the deterministic integration approach, although the latter is simpler to use.
The study of information availability reveals that most faults can be detected in the low- and medium-availability scenarios; moving to the high-availability scenario only slightly improves diagnostic performance. The key message of this thesis is that using a Bayesian approach to integrate highly adaptable FDD methods, and delivering the results in a probabilistic context, is an optimal way to remove the current constraints and advance FDD technology.
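The rule-augmented CUSUM method mentioned above builds on the classic one-sided CUSUM change detector. The following is a minimal sketch of plain CUSUM detection only (the rule augmentation is not reproduced); the residual sequence and the parameters are illustrative assumptions.

```python
def cusum(samples, target, k, h):
    """One-sided CUSUM: accumulate deviations above target + k and
    signal a fault once the statistic exceeds the threshold h."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - target - k))
        if s > h:
            return i  # sample index at which the fault is detected
    return None  # no fault detected

# Illustrative residual sequence: an incipient drift begins at sample 5.
readings = [0.1, -0.2, 0.0, 0.1, -0.1, 0.8, 0.9, 1.1, 1.0, 1.2]
print(cusum(readings, target=0.0, k=0.25, h=1.5))  # → 7
```

The slack parameter `k` suppresses small in-control fluctuations while the accumulated statistic still catches slow drifts, which is why CUSUM-style detectors handle incipient faults well.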
13

Implementation Aspects of 3GPP TD-LTE

Guo, Ningning January 2009 (has links)
3GPP LTE (Long Term Evolution) is a Third Generation Partnership Project effort to improve the UMTS (Universal Mobile Telecommunications System) mobile phone standard to cope with future technology evolutions. Two duplex schemes, FDD and TDD, are investigated in this thesis. Several computationally intensive components of the baseband processing for the LTE uplink, such as synchronization, channel estimation, equalization, soft demapping, and turbo decoding, are analyzed. The cost analysis is hardware-independent, so only computational complexity is considered. A hardware-dependent discussion of an LTE baseband SDR platform is given according to the analysis results.
14

Evaluate Operational Modal Analysis and Compare the Result to Visualized Mode Shapes

Song, Baiyi January 2017 (has links)
Prototype vibration tests are carried out to obtain reliable information about a machine's dynamic properties during the development process. The analysis results should correlate with an FE model to determine whether the underlying assumptions (material properties and boundary conditions) were correct. Experimental modal analysis (EMA) is used to extract structural modal parameters under laboratory conditions. However, EMA generally cannot provide all required information about a machine's dynamic properties in operation. Simulating vibration during operation commonly requires a model based on the dynamic properties of the machine under operating conditions, so vibration tests need to be carried out under those conditions. Operational modal analysis (OMA) is a useful tool for extracting information about the dynamic properties of an operating machine. This report concerns vibration tests of part of a mining machine under operating conditions. Modal parameters are extracted with two kinds of OMA methods, and the results are compared with corresponding EMA results to illustrate the advantages of OMA.
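The core idea of output-only (operational) analysis can be illustrated with a toy example: a dominant natural frequency is recovered from response data alone by peak-picking the response spectrum. This is only a sketch of the principle on simulated data, not the dedicated OMA algorithms used in the report.

```python
import numpy as np

fs = 1000.0                   # sampling rate in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)  # 10 s of simulated operating response
rng = np.random.default_rng(1)

# One 12 Hz structural mode responding together with broadband excitation.
x = np.sin(2 * np.pi * 12.0 * t) + 0.5 * rng.standard_normal(t.size)

# Peak-pick the output power spectrum; no input measurement is needed.
spectrum = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f_peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
print(f_peak)  # the 12 Hz mode dominates the output spectrum
```

Real OMA methods go further (damping and mode-shape estimation, handling non-white excitation), but the output-only premise is the same: the operating response itself carries the modal information.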
15

Algorithms of Model-Based Electrical Drives Monitoring and Diagnostics

Kozel, Martin January 2014 (has links)
The aim of this thesis is to investigate PMSM models with internal faults. Two fault models are introduced: one suitable for simulating a stator-winding inter-turn short fault in a one-pole-pair motor, and the other for simulating an inter-turn fault in a multiple-pole-pair motor. Methods for model-based detection of internal faults and sensor faults are also described.
16

Experiences with agile development

Bassi Filho, Dairton Luiz 18 March 2008 (has links)
The growing demand for systems and the high speed with which their requirements evolve have shown that software development requires flexibility, because many decisions need to be taken during a project. Moreover, the difficulties of producing software systems go far beyond technical issues: strategic, commercial, and human factors are responsible for some of the variables that make software development a highly complex activity. Traditional models of software development propose prescriptive processes that do not account for all this complexity. Agile methods, on the other hand, suggest a more humanistic approach focused on fast, frequent delivery of software with business value. To achieve this, one needs to choose a set of development practices appropriate to the characteristics of the project and the team. The unique nature of each project and the need for high quality and productivity thus make the search for development practices important. Based on projects we conducted using agile methods in academia and industry, we identified and described 22 software development practices that teams can adopt to increase their performance and/or the quality of their software.
17

Signal acquisition in DS spread-spectrum systems and its application to the 3GPP FDD mobile radio standard

Zoch, André 03 November 2004 (has links) (PDF)
Robust signal acquisition is an important task in DS-SS receivers. The objective of acquisition is to coarsely estimate the signal parameters so that the succeeding parameter-tracking algorithms can be initialized; in particular, acquisition is needed to coarsely synchronize the receiver to the timing and frequency of the received signal. For this purpose, mainly data-aided and feedforward algorithms are applied. Using the maximum likelihood (ML) criterion, an estimator for the joint estimation of receive timing and frequency offset can be derived which determines the maximum of the likelihood function over the whole parameter uncertainty region. Due to its high complexity, the ML synchronizer is difficult to implement in practical applications, so reduced-complexity algorithms need to be derived. This thesis gives a systematic survey of acquisition algorithms and of methods for analyzing their performance under mobile radio propagation conditions. The exploitation of multiple observations is investigated in order to improve acquisition performance in terms of false alarm rate and acquisition time. In particular, optimal and suboptimal combining schemes for a fixed observation interval, as well as sequential utilization of successive observations resulting in a variable observation length, are analyzed. Another way to make signal acquisition more efficient in terms of acquisition time is to use multi-stage acquisition algorithms. One class of these is the well-known multiple-dwell algorithms; a different approach is to design acquisition procedures in which the information about the unknown parameters is distributed among several stages, so that each stage has to cope with a smaller uncertainty region than the overall parameter uncertainty.
Analysis of multi-stage algorithms, followed by an extensive discussion of the 3GPP FDD downlink acquisition procedure as an example of a multi-stage procedure with distributed information, concludes the work.
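The exhaustive ML search over the parameter uncertainty region, whose complexity motivates the reduced-complexity and multi-stage algorithms above, can be sketched as a grid search over timing and frequency hypotheses. This toy model is an illustration under invented assumptions (code, sampling rate, and grid are made up), not the thesis's algorithms.

```python
import numpy as np

def acquire(rx, code, freq_grid, fs):
    """Exhaustive ML-style acquisition: correlate the received samples
    with the known code over all timing offsets and trial frequency
    offsets; return the (delay, frequency) pair maximizing |correlation|."""
    n = len(code)
    best = (0, 0.0, -1.0)
    for f in freq_grid:
        # Remove the hypothesized frequency offset, then slide the code.
        derotated = rx * np.exp(-2j * np.pi * f * np.arange(len(rx)) / fs)
        for d in range(len(rx) - n + 1):
            metric = abs(np.vdot(code, derotated[d:d + n]))
            if metric > best[2]:
                best = (d, f, metric)
    return best[0], best[1]

# Toy scenario: a known 64-chip code embedded at delay 30 with a 500 Hz
# offset in light noise (all values are assumptions for illustration).
fs = 64_000.0
rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=64).astype(complex)
delay, f0 = 30, 500.0
rx = np.zeros(256, dtype=complex)
rx[delay:delay + 64] = code * np.exp(2j * np.pi * f0 * np.arange(64) / fs)
rx += 0.1 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))

d_hat, f_hat = acquire(rx, code, freq_grid=np.arange(-2000.0, 2001.0, 250.0), fs=fs)
print(d_hat, f_hat)
```

The nested loops make the cost plain: every frequency bin multiplies the full timing search, which is exactly what multiple-dwell and distributed-information multi-stage procedures try to avoid.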
20

Energy savings with automated indoor-climate and ventilation control: drivers and barriers

Selhammer, Andreas January 2022 (has links)
Energy use in Sweden's buildings amounts to almost 40% of the country's total primary energy, a figure expected to rise further over the next 20 years; 67% of it corresponds to the buildings' operational phase. This study examines how that energy consumption can be reduced by introducing more building automation and a higher degree of automation, so that properties adapt their energy use to actual prevailing needs instead of more static operating schedules.
A literature review addressed research question 1: Is there a correlation between energy savings and the degree of automation in indoor-climate and ventilation control? Technologies such as building management systems (BMS) and building energy management systems (BEMS) have demonstrated savings of around 30% upon introduction. Digital twins have demonstrated savings between 6.2% and 21.5%, with lower impact on occupants, through more predictive maintenance and better advance analysis of energy and comfort outcomes before implementation. Artificial intelligence (AI) also showed good potential, with energy savings between 14% and 44%. However, the study indicates implementation problems, originating partly in incorrectly constructed metadata and partly in too few sensors, which give the AI too little data on which to base decisions. Studies have found that AI deployed on a poor data platform can be directly counterproductive and increase energy consumption instead of lowering it. Overall, there is a positive connection between energy savings and the degree of automation, because the savings lie in the building's ability to adapt to prevailing circumstances.
Research question 2 asks: What are the obstacles to, and driving forces for, increased automation in indoor-climate and ventilation control? The questionnaire responses from this study show that knowledge barriers and financial obstacles dominate. Furthermore, facility managers and operating technicians work more actively with energy management than the other professional groups surveyed. Above all, the answers show that automation contractors and consultants need to communicate more clearly the energy-conservation benefits their solutions offer, in a way the recipient understands and can relate to, in order to justify initial price differences in the construction process and thereby bridge the energy paradox, in which cost-effective and energy-efficient solutions are otherwise foregone.
