191 |
Serious Gaming : Serious content in an entertaining framework. Richvoldsen, Håvard. January 2009 (has links)
This thesis is based on work done at the Norwegian University of Science and Technology (NTNU) in the field of serious gaming. The motivation for the work is to create a serious game with the purpose of recruiting high school students to undertake studies at NTNU within engineering and science. After consideration of several available tools, Blender was chosen as the best development tool for this kind of game and used to create "Student Quest - A First Person Student Game". The game analysis shows that the game's Primary Learning Principle is Marketing, the Primary Educational Content is Knowledge Gain through Exploration, the Target Age Group is Middle and High School, and it is developed for a Computer Platform. By extracting the fun factors, we conclude that the game passes the Playability threshold and reaches the Enjoyability threshold. By implementing the suggested potential features, the game may reach the Super Fun threshold and thus has the potential to become an extremely entertaining serious game.
|
192 |
Wilkinson Power Divider : A Miniaturized MMIC Lumped Component Equivalent. Torgersen, Tron. January 2009 (has links)
This report describes the simulation of a Wilkinson Power Divider realised with lumped components to minimize its size. Every step in the process, from calculating the lumped component values to the final Momentum S-parameter simulation, is discussed, and all relevant theory is described in the theory section. The main goal of the project is to produce the Wilkinson Power Divider in TriQuint's 0.5 um TQPED process in as small an area as possible, with a response as close as possible to that of the ideal Wilkinson Power Divider. An important additional goal is to learn a relevant high-frequency design tool (Agilent ADS) and to gain a good understanding of MMIC technology, including the components used and effects such as cross-talk. During the project, practical measurements are carried out on components produced in the TriQuint process, which gives good experience with measurements using a probe station and a network analyzer. The final layout, arrived at in three steps from a regular Wilkinson Power Divider, should be ready for production and shows good performance while occupying an area of only 403 um x 271 um. The design is thoroughly simulated with Momentum and compared to the ideal response; any discrepancy between the two responses is explained and commented on. All measurements are compared to simulation results, deviations between the two are pointed out, and the most probable causes are described.
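The abstract does not list the lumped component values themselves, but one common transformation is standard: at the design frequency, each quarter-wave branch of the divider (impedance sqrt(2)*Z0) can be replaced by a pi-network of one series inductor and two shunt capacitors. The sketch below illustrates that calculation only as an assumption about the approach; the 5.8 GHz design frequency and 50 ohm reference are placeholder values, not taken from the thesis.

```python
import math

def lumped_pi_equivalent(z0=50.0, f0=5.8e9):
    """Lumped pi-network equivalent of one quarter-wave branch of an
    equal-split Wilkinson divider (branch impedance sqrt(2)*z0).

    At the design frequency, a quarter-wave line of impedance Z behaves
    like a series inductor L = Z/omega flanked by two shunt capacitors
    C = 1/(omega*Z)."""
    omega = 2 * math.pi * f0
    z_branch = math.sqrt(2) * z0          # about 70.7 ohm for z0 = 50 ohm
    L = z_branch / omega                  # series inductance [H]
    C = 1.0 / (omega * z_branch)          # shunt capacitance at each end [F]
    r_iso = 2 * z0                        # isolation resistor [ohm]
    return L, C, r_iso

L, C, R = lumped_pi_equivalent()
print("L = %.3f nH, C = %.3f pF, R_iso = %.0f ohm" % (L * 1e9, C * 1e12, R))
```

With the assumed values this yields roughly 1.9 nH, 0.39 pF, and 100 ohm, which is the kind of small-valued network that makes on-chip MMIC integration attractive.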
|
193 |
Multicell Battery monitoring and balancing with AVR. Borgersen, Ole Johnny. January 2009 (has links)
Today, Lithium Ion batteries are extensively used in all kinds of electronic equipment due to their superior properties. However, all the individual cells of a Lithium Ion battery must be monitored to ensure safety and long lifetime. The objective of this master thesis is to design a management system for a ten-cell Lithium Ion battery using an Atmel AVR microcontroller. The main challenge was to scale down the high voltage of a 10-cell battery while maintaining accuracy when reading this voltage with the AVR. This was solved by using current-sense monitors, which can handle large common-mode voltages. Hardware was built as a proof of concept, and the scaling circuitry was found to have an accuracy of 46 mV. To compete with other single-chip devices, other methods have to be found: the design in this thesis is physically too large and too expensive to be of any commercial use. However, some other methods worth looking into are proposed in the last chapter.
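As a rough illustration of the measurement chain described above, the sketch below converts an AVR ADC reading back to a cell voltage and checks it against typical Li-ion limits. The ADC resolution, reference voltage, and level-shifting gain are placeholder values, not the ones used in the thesis.

```python
LI_ION_MIN_V = 3.0   # conservative per-cell limits for Li-ion chemistry
LI_ION_MAX_V = 4.2

def adc_to_cell_voltage(adc_counts, v_ref=2.56, adc_bits=10, scale_gain=0.5):
    """Convert an AVR ADC reading back to a cell voltage.

    scale_gain models the level-shifting network (current-sense monitor
    plus resistors) that maps the high common-mode cell voltage into the
    ADC input range; the value here is illustrative only."""
    v_adc = adc_counts * v_ref / (2 ** adc_bits - 1)
    return v_adc / scale_gain

def check_pack(adc_readings):
    """Return per-cell voltages and flag cells outside the safe window."""
    report = []
    for cell, counts in enumerate(adc_readings, start=1):
        v = adc_to_cell_voltage(counts)
        ok = LI_ION_MIN_V <= v <= LI_ION_MAX_V
        report.append((cell, round(v, 3), ok))
    return report

print(check_pack([820, 760, 590]))   # example readings, purely illustrative
```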
|
194 |
Radio Planning and Coverage Prediction of Mobile WiMAX in Trondheim. Monrad-Hansen, Jens Wiel. January 2009 (has links)
Challenged by the LTE system, Mobile WiMAX is set to be the next generation broadband wireless system. Providing high data rates over large distances opens many possibilities for services provided to end users. As with any system not yet deployed, Mobile WiMAX has also been exposed to rumors and hype. This thesis is based on the work performed in [prosjekt] and aims to provide radio planning of a Mobile WiMAX network in the populated areas of Trondheim, Norway. Moreover, preparatory work and suggestions for field testing of the deployed system are provided. The coverage prediction has been performed using Astrix 5.0, the radio planning tool of Teleplan. A total of 32 base stations with 92 sectors has been suggested to provide ubiquitous coverage of -94 dBm within the 35.63 km^2 area. Furthermore, it is recommended that fixed or nomadic users purchase the si-CPE or CPE PRO for better channel quality and throughput performance at indoor locations. In the preparatory phase prior to field testing, a Python script has been created to perform automated performance testing. The reason for automating the performance measurements is to increase test efficiency and to reduce the possibility of human errors in parameter setting and file naming. This thesis will hopefully serve as a guide for future radio planners, as an Astrix use case, measurement scripts, and data processing code are provided for revision and editing. The work has been performed on the initiative of Wireless Trondheim.
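The measurement script itself is not reproduced in the abstract; the following is a minimal sketch of how such automated testing could be structured, using iperf as the traffic generator and timestamped file names to avoid naming errors. The server address, test duration, and measurement-point identifiers are assumptions, not the thesis's actual setup.

```python
"""Minimal sketch of an automated throughput-measurement loop.

Assumptions (not from the thesis): iperf is installed, an iperf server is
reachable at SERVER inside the network, and one TCP test per measurement
point is enough. File names encode point id and timestamp so that manual
naming errors are avoided."""
import subprocess
import time

SERVER = "10.0.0.1"      # assumed iperf server address
DURATION_S = 30          # assumed test duration per measurement point

def run_test(point_id):
    stamp = time.strftime("%Y%m%d-%H%M%S")
    outfile = "wimax_%s_%s.txt" % (point_id, stamp)
    cmd = ["iperf", "-c", SERVER, "-t", str(DURATION_S)]
    with open(outfile, "w") as f:
        subprocess.call(cmd, stdout=f, stderr=subprocess.STDOUT)
    return outfile

if __name__ == "__main__":
    for point in ["bs01_sector1", "bs01_sector2"]:   # example point ids
        print("wrote", run_test(point))
```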
|
195 |
The Optimal Packet Duration of ALOHA and CSMA in Ad Hoc Wireless Networks. Corneliussen, Jon Even. January 2009 (has links)
In this thesis, the optimal transmission rate in ad hoc wireless networks is analyzed, with probability of outage as the performance metric. In our system model, users/packets arrive randomly in space and time according to a Poisson point process and are then transmitted to their intended destinations using either ALOHA or CSMA as the MAC protocol. The model is based on an SINR requirement: the received SINR must stay above a predetermined threshold for the whole duration of a packet in order for the transmission to be considered successful; otherwise an outage has occurred. To analyze how the transmission rate affects the probability of outage, we assume packets of K bits and let the packet duration T vary, so that the nodes transmit at a requested rate of R_req = K/T bits per second. We incorporate the transmission rate into existing lower bounds on the outage probability of ALOHA and CSMA, and use these expressions to find the packet duration that minimizes the probability of outage. For the ALOHA protocol, we derive an analytic expression for the optimal spectral efficiency of the network as a function of the path loss, which is used to find the optimal packet duration T_opt. For the CSMA protocol, the optimal packet duration is observed through simulations. We find that, in order to minimize the probability of outage, the system parameters should be chosen such that the requested transmission rate divided by the system bandwidth equals the optimal spectral efficiency of the network.
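To make the trade-off concrete, the sketch below sweeps the packet duration numerically using a generic spatio-temporal ALOHA outage model: a longer packet lowers the SINR threshold implied by R_req = K/T, but exposes the packet to more interferers. The model form and every parameter value are illustrative stand-ins, not the bounds derived in the thesis.

```python
import math

# Illustrative parameters (not taken from the thesis)
K = 1.0e4          # packet size [bits]
W = 1.0e6          # system bandwidth [Hz]
LAMBDA_ST = 1e-4   # packet arrivals per m^2 per second
D = 20.0           # transmitter-receiver distance [m]
ALPHA = 4.0        # path-loss exponent
C_GEOM = math.pi   # geometry constant of the assumed outage bound

def outage(T):
    """Generic ALOHA outage bound for packet duration T: the density of
    interferers overlapping the packet grows with T, while the SINR
    threshold implied by the requested rate R = K/T shrinks with T."""
    beta = 2.0 ** (K / (T * W)) - 1.0                       # SINR threshold
    return 1.0 - math.exp(-C_GEOM * LAMBDA_ST * T * D ** 2
                          * beta ** (2.0 / ALPHA))

# Coarse sweep over packet durations to locate T_opt numerically
durations = [K / W * 0.2 * i for i in range(1, 100)]
T_opt = min(durations, key=outage)
print("T_opt = %.4f s, spectral efficiency K/(T_opt*W) = %.2f bit/s/Hz"
      % (T_opt, K / (T_opt * W)))
```

With these placeholder numbers the minimum sits at an interior duration, illustrating the thesis's point that the requested rate divided by bandwidth should match an optimal spectral efficiency rather than being pushed as high or as low as possible.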
|
196 |
ULTRA LOW POWER APPLICATION SPECIFIC INSTRUCTION-SET PROCESSOR DESIGN : for a cardiac beat detector algorithm. Yassin, Yahya H. January 2009 (has links)
High efficiency and low power consumption are among the main topics in embedded systems today. For complex applications, off-the-shelf processor cores might not meet the desired goals in terms of power consumption. By optimizing the processor for the application, or a set of applications, one can improve the computing power by introducing special-purpose hardware units; the execution cycle count of the application is then reduced significantly, and the resulting processor consumes less power. In this thesis, research is done into how to optimize software and hardware development for ultra low power consumption. A cardiac beat detector algorithm is implemented in ANSI C and optimized for low power consumption using several software power optimization techniques. The resulting application is mapped onto a basic processor architecture provided by Target Compiler Technologies. This processor is optimized further for ultra low power consumption by adding application-specific hardware and by using several hardware power optimization techniques. A general processor and the optimized processor have both been mapped onto a chip using a 90 nm low power TSMC process. Information about power dissipation is extracted through netlist simulation, and the results for the two processors are compared. The optimized processor consumes 55% less average power, and the duty cycle of the processor, i.e., the time in which the processor executes its task with respect to the available time budget, has been reduced from 14% to 2.8%. The reduction in total execution cycle count is 81%. The possibilities of applying power gating, or voltage and frequency scaling, are discussed, and it is concluded that further reduction in power consumption is possible by applying these power optimization techniques. For a given case, the average leakage power dissipation is estimated to be reduced by 97.2%.
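The reported figures can be sanity-checked with a little arithmetic: with the same clock and time budget, the duty cycle scales with the remaining cycle count, and idealized power gating removes leakage whenever the core is idle. The sketch below reproduces roughly the 2.8% duty cycle and the 97% leakage reduction quoted above; it is an idealized reconstruction, not the thesis's actual simulation flow.

```python
# Figures quoted in the abstract
duty_cycle_base = 0.14        # baseline processor duty cycle
cycle_reduction = 0.81        # reduction in total execution cycle count

# With the same clock frequency and time budget, the duty cycle shrinks
# in proportion to the remaining cycles.
duty_cycle_opt = duty_cycle_base * (1.0 - cycle_reduction)
print("optimized duty cycle: %.1f %%" % (100 * duty_cycle_opt))   # ~2.7 %

# If leakage is eliminated by power gating whenever the core is idle,
# the average leakage scales with the duty cycle (idealized estimate).
leakage_saving = 1.0 - duty_cycle_opt
print("idealized leakage reduction with power gating: %.1f %%"
      % (100 * leakage_saving))                                    # ~97.3 %
```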
|
197 |
Development of a Patch Antenna Array between 2-6 GHz with Phase Steering Network for a Double CubeSat. Bolstad, Anton Johan. January 2009 (has links)
To make a double CubeSat with limited power resources capable of transmitting large amounts of data to Earth, a high-gain antenna is needed. In this thesis, a switched-beam MSA array operating at 5.84 GHz has been designed for a double CubeSat. The array has 5 beams and uses a switched-line phase shifter to switch between them. Three different array geometries have been proposed; computer simulations suggest that the array should be capable of an effective beamwidth of over 60 degrees with a directivity of over 11 dBi. A feed network has been designed to fit the best-suited geometry, with a ground plane separating the feed network from the antenna elements. Along with the full array solution, all the sub-parts have been realized as test circuits, which allows their characteristics to be evaluated. A TRL calibration kit was also designed so that the sub-parts could be evaluated more accurately. When the circuits were sent for fabrication, a problem appeared with the substrate selected for the antenna elements; a redesign using the same substrate for both the feed network and the antennas was done, and production commenced. As it turned out, the TRL calibration kit was not good enough, so the S-parameters had to be measured with regular SOLT calibration. Significant problems with connection to ground, and mismatches due to a poor SMA-to-microstrip transition, were encountered, causing large deviations between measured and simulated results. It was also discovered that the wrong dielectric constant had been used, which caused the antenna elements to be dimensioned for operation at 5.70 GHz instead of 5.84 GHz. Problems were also encountered in the switched-line phase shifter design: beam-lead PIN diodes were used, and due to their small size a sufficient soldering quality was not achieved. This led to different losses through the phase shifter, which in turn caused the beam directions to deviate from simulations; only one beam had characteristics similar to the simulations. Measurements on the array without phase shifters showed good correspondence with simulation results (adjusted for the correct dielectric constant). It is concluded that with a better SMA-to-microstrip transition, improved soldering, and a redesign with the correct dielectric constant, the array configuration should work as outlined in the design process.
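The 5.70 GHz versus 5.84 GHz shift is the kind of error the standard transmission-line patch equations predict when a patch is dimensioned with a slightly too low dielectric constant. The sketch below uses those textbook equations with assumed substrate values (the height and the two permittivities are illustrative, chosen only to reproduce a shift of about this size); it is not the thesis's actual layout or stack-up.

```python
import math

C0 = 2.998e8  # speed of light [m/s]

def design(f0, eps_r, h):
    """Standard transmission-line patch model: returns the patch length L,
    the fringing extension dL, and the effective permittivity for
    resonance at f0 on a substrate of height h and permittivity eps_r."""
    W = C0 / (2 * f0) * math.sqrt(2.0 / (eps_r + 1.0))
    eps_eff = (eps_r + 1) / 2.0 + (eps_r - 1) / 2.0 * (1 + 12 * h / W) ** -0.5
    dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)
                      / ((eps_eff - 0.258) * (W / h + 0.8)))
    L = C0 / (2 * f0 * math.sqrt(eps_eff)) - 2 * dL
    return L, dL, eps_eff

# Assumed, illustrative substrate values (not the thesis's actual stack-up)
h = 0.5e-3
eps_assumed, eps_actual = 3.38, 3.55

# Patch dimensioned for 5.84 GHz with the assumed (too low) permittivity...
L, dL, _ = design(5.84e9, eps_assumed, h)
# ...but on the real substrate the electrical length is longer, so the
# resonance lands lower; estimate it with the correct effective permittivity.
_, _, eps_eff_actual = design(5.84e9, eps_actual, h)
f_actual = C0 / (2 * (L + 2 * dL) * math.sqrt(eps_eff_actual))
print("resonates near %.2f GHz instead of 5.84 GHz" % (f_actual / 1e9))
```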
|
198 |
Compact Modeling of the Current through Nanoscale Double-Gate MOSFETs. Holen, Åsmund. January 2009 (has links)
In this thesis, a compact drain current model for nanoscale double-gate MOSFETs is presented. The model covers all operation regimes and bias voltages up to 0.4 V. The modeling is done using conformal mapping techniques to solve the 2D Laplace equation in sub-threshold, and a long-channel model in strong inversion. Near threshold, a quasi-Fermi level model with empirical constants is used to find the current. A continuous model is obtained by expressing asymptotes in the sub-threshold and strong-inversion regimes and combining them with an interpolation function, whose parameter is determined analytically from the near-threshold calculations. The model shows good agreement with numerical simulations for bias voltages below 0.4 V and channel lengths below 50 nm.
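As a generic illustration of the asymptote-combining step (not the conformal-mapping model itself), the sketch below uses an EKV-style interpolation function that tends to an exponential sub-threshold asymptote below threshold and a square-law strong-inversion asymptote above it, while remaining smooth in between. All parameter values are placeholders.

```python
import math

PHI_T = 0.02585   # thermal voltage at room temperature [V]

def drain_current(vgs, i_s=1e-6, vt=0.3, n=1.3):
    """Illustrative interpolation: Id = i_s * ln(1 + exp((Vgs-Vt)/(2*n*phi_t)))**2.
    Below threshold this tends to the exponential sub-threshold asymptote,
    well above threshold to the square-law strong-inversion asymptote,
    and it is smooth and continuous in between."""
    x = math.log(1.0 + math.exp((vgs - vt) / (2.0 * n * PHI_T)))
    return i_s * x * x

# Asymptotes for comparison
def sub_asymptote(vgs, i_s=1e-6, vt=0.3, n=1.3):
    return i_s * math.exp((vgs - vt) / (n * PHI_T))

def strong_asymptote(vgs, i_s=1e-6, vt=0.3, n=1.3):
    return i_s * ((vgs - vt) / (2.0 * n * PHI_T)) ** 2

for vgs in (0.1, 0.2, 0.3, 0.35, 0.4):
    print("Vgs = %.2f V -> Id ~ %.3e A" % (vgs, drain_current(vgs)))
```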
|
199 |
Switching in multipliers. Kalis, Jakub Jerzy. January 2009 (has links)
Digital multipliers are an important part of most digital computation systems, such as microcontrollers and microprocessors. Multiplication is a fairly complex operation, so there are many different implementations varying in area, speed, and power consumption. Importantly, multipliers are often part of a system's critical path, which makes these factors especially relevant. During the last decade, power efficiency has become an important issue in digital design, and many design methods have been created and investigated to address it. It is a known fact that most of the power consumed by an arithmetic circuit is dissipated by hazards and toggles (up to 75%) that do not contribute any information to the final result. A method for evaluating the amount of spurious switching and its effect on power dissipation is investigated here. This thesis aims to find a method to estimate the switching characteristics, and their effect on power dissipation, of eight supplied multipliers given in the form of HDL net-lists with some software overhead. As switching generally accounts for the majority of power consumption in digital CMOS circuits, this also gives a good indication of overall power dissipation. One of the difficulties in estimating average power and transition density is the pattern dependency problem; a method based on the Monte Carlo technique is used, where adequate accuracy is obtained within moderate time and resource usage. Three of the investigated multipliers are net-lists created using the methodology developed in [21]. These are synthesized and laid out in the technology used by Atmel Norway. The number of logical state changes is compared between pre- and post-synthesis net-lists, and the technology-mapped net-lists are also examined for power consumption to see the connection between switching and dynamic power dissipation. The fan-out delay model used to estimate total toggling gives a good approximation of circuit properties; it is, however, too simple to give a good estimate of spurious toggling inside the circuit and its effect on power consumption. The same estimation technique is used to investigate a DesignWare circuit (DW02), an industrial approach to building fast and power-efficient multipliers. The results show that this is the most power-efficient solution among the examined circuits (45-47% less power than the most power-efficient circuit from [21]). It is also the solution with the smallest amount of hazards during a multiplication operation (38-52%). A circuit generated by module generation software (ModGen) is also investigated; this solution is quite power efficient, but it has the largest share of power dissipated by spurious toggling (62-68%). It is also noted that the transition density, and consequently the power dissipation, is strongly dependent on process, temperature, and voltage variation. In fact, higher temperature gives a reduction in power consumption.
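A minimal sketch of the Monte Carlo idea is given below: random input vectors are applied to a zero-delay behavioral multiplier and output-bit toggles are counted to estimate the average switching activity. A real estimator of the kind described above would additionally operate on the gate-level net-list, include a delay model (so that glitches and hazards are captured), and use a statistical stopping criterion; none of that is reproduced here, and the operand width is arbitrary.

```python
import random

WIDTH = 8   # operand width of the toy multiplier (illustrative)

def product_bits(a, b):
    """Behavioral multiplier output as a tuple of bits. There is no
    gate-level timing here, so only functional zero-delay toggles are
    counted, not spurious glitches inside the circuit."""
    p = a * b
    return tuple((p >> i) & 1 for i in range(2 * WIDTH))

def estimate_toggles(num_vectors=20000, seed=1):
    """Monte Carlo estimate of the average number of output-bit toggles
    per applied random input vector."""
    rng = random.Random(seed)
    prev = product_bits(rng.randrange(1 << WIDTH), rng.randrange(1 << WIDTH))
    total = 0
    for _ in range(num_vectors):
        cur = product_bits(rng.randrange(1 << WIDTH), rng.randrange(1 << WIDTH))
        total += sum(p != c for p, c in zip(prev, cur))
        prev = cur
    return total / float(num_vectors)

print("average output toggles per vector: %.2f" % estimate_toggles())
```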
|
200 |
The Effect of Gain Saturation in a Gain Compensated Perfect Lens. Skaldebø, Aleksander Vatn. January 2009 (has links)
Perfect lenses operating in the near visible spectrum have only recently been introduced, and this kind of metamaterial seems to have large potential. One problem encountered with these perfect lenses is their exceedingly large intrinsic losses, which make them impractical for use in applications. This project has explored some of the limitations of using gain to compensate for these losses; specifically, the effect of gain saturation has been considered. Gain saturation has been shown to limit the maximum parallel spatial frequency that can be reproduced by the lens. Even so, it has been shown that amplification has the potential to increase the resolution limit by a measurable factor. In the case of several waves traversing the lens simultaneously, the critical factor is how much of the total amplitude lies in waves close to the resolution limit. Waves with relatively small parallel spatial frequencies require little amplification, and those with high parallel spatial frequencies are attenuated or reflected almost immediately, meaning both these types contribute little to gain saturation.
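A rough way to see why gain saturation caps the parallel spatial frequency is to look at how quickly an evanescent field component decays across a slab: the compensation it needs grows exponentially with the parallel wavenumber. The sketch below computes this single-pass decay for a vacuum-like slab; the wavelength and thickness are illustrative and the model ignores the lens's actual material response.

```python
import math, cmath

LAMBDA0 = 500e-9                 # assumed free-space wavelength [m]
K0 = 2 * math.pi / LAMBDA0
D = 100e-9                       # assumed slab thickness [m]

def required_gain_db(kx_over_k0):
    """Single-pass amplitude decay of a field component with parallel
    wavenumber kx across a slab of thickness D (vacuum-like estimate),
    expressed as the gain in dB needed to restore unit amplitude.
    Propagating components (kx < k0) need essentially none; evanescent
    components (kx > k0) need exponentially more as kx grows."""
    kx = kx_over_k0 * K0
    kz = cmath.sqrt(K0 ** 2 - kx ** 2)      # purely imaginary for kx > k0
    decay = math.exp(-abs(kz.imag) * D)
    return -20 * math.log10(decay)

for r in (0.5, 1.0, 1.5, 2.0, 3.0, 4.0):
    print("kx/k0 = %.1f -> %.1f dB of compensation" % (r, required_gain_db(r)))
```

The exponential growth of the required compensation is what makes the high-spatial-frequency components the first to be lost once the available gain saturates.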
|