571

Processing Core for Compressing Wireless Data : The Enhancement of a RISC Microprocessor

Olufsen, Eskil Viksand January 2006 (has links)
This thesis explores the ability of the proprietary Texas Instruments embedded 16-bit RISC microprocessor, the NanoRisc, to process common lossless compression algorithms, and proposes extensions to increase its performance on this task. To measure the NanoRisc's performance, the existing software tool chain was enhanced for profiling and simulating the improvements, and three fundamentally different adaptive data compression algorithms with different supporting data structures were implemented in NanoRisc assembly language. Based on the profiling results, several enhancements were proposed. The new enhancements improved the throughput of the three implemented algorithms by between 18% and 103%, and code sizes decreased by between 6% and 31%. The bit-field instructions also reduced RAM accesses by up to 53%. The enhancements were implemented in the NanoRisc VHDL model and synthesized. Synthesis reports showed an increase in gate count of 30%, but the whole NanoRisc core remains below 7k gates. Power consumption per MIPS increased by 7%; however, the reduced clock cycle count and memory accesses decreased the net power consumption of all tested algorithms. It is also shown that compressing data with the NanoRisc prior to transmission in a low-power RF transceiver may increase battery lifetime by a factor of four. Future work should include a comprehensive study of the effect of the proposed enhancements on more common applications for the NanoRisc microprocessor.
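The abstract does not name the three adaptive compression algorithms. As a hedged illustration of the kind of byte-oriented, memory-access-heavy work such algorithms involve (the kind of work the proposed bit-field instructions target), here is a minimal greedy LZ77-style compressor sketched in Python; the window size, match limits and token format are illustrative and not taken from the thesis.

```python
def lz77_compress(data: bytes, window: int = 255, min_match: int = 3):
    """Greedy LZ77-style compressor producing a list of tokens:
    ('lit', byte) for literals and ('ref', offset, length) for
    back-references into the sliding window. Illustrative only."""
    out = []
    i = 0
    while i < len(data):
        best_len, best_off = 0, 0
        start = max(0, i - window)
        for j in range(start, i):                  # search the sliding window
            length = 0
            while (i + length < len(data)
                   and data[j + length] == data[i + length]
                   and length < 255):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len >= min_match:
            out.append(('ref', best_off, best_len))  # emit a back-reference
            i += best_len
        else:
            out.append(('lit', data[i]))             # emit a literal byte
            i += 1
    return out


print(lz77_compress(b"abcabcabcabc"))
```

On repetitive input such as the example above, most of the data collapses into back-references, which is the effect that makes compressing data before RF transmission pay off in energy.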
572

Efficient Algorithms for Video Segmentation

Kosmo, Vegard Andre January 2006 (has links)
Describing video content without watching the entire video is a challenging matter. Textual descriptions are usually inaccurate and ambiguous, and if the amount of video is large, this manual task is almost endless. Replacing the textual description with pictures from the video is a much more adequate method. The main challenge is then which pictures to pick to make sure the entire video content is covered by the description. TV stations with an associated video archive would prefer an effective, automated method for this task, keeping time consumption to a minimum while making the output as faithful as possible to the actual video content. In this thesis, three different methods for automatic shot detection in video files have been designed and tested. The goal was to build a picture storyline from input video files, where this storyline contains one picture from each shot in the video. This task should be done in a minimum of time. Since a video file is essentially one long series of consecutive pictures, various image properties have been used to detect the shots. The final evaluation was based both on output quality and on overall time consumption. The test results show that the best method detected shots with an average accuracy of approximately 90%, with an overall time consumption of 8.8% of the actual video length. Combined with some additional functionality, these results may be improved further. With the solutions designed and implemented in this thesis, it is possible to detect shots in any video file and create a picture storyline describing the video content. Possible areas of application are TV stations and private individuals that need to describe a collection of video files in an effective and automated way.
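The abstract does not state which image properties the three methods compare. A common baseline for shot detection is to compare grey-level histograms of consecutive frames and declare a shot boundary when the difference exceeds a threshold; the following numpy sketch illustrates that idea (the bin count and threshold are assumptions, not values from the thesis).

```python
import numpy as np

def detect_shot_boundaries(frames, bins=64, threshold=0.4):
    """frames: iterable of 2-D grayscale arrays with values in 0-255.
    Returns the frame indices where a new shot is assumed to start."""
    boundaries = []
    prev_hist = None
    for idx, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hist = hist / hist.sum()                   # normalise to a distribution
        if prev_hist is not None:
            # L1 distance between consecutive normalised histograms
            diff = np.abs(hist - prev_hist).sum() / 2.0
            if diff > threshold:
                boundaries.append(idx)
        prev_hist = hist
    return boundaries
```

Picking the first frame after each detected boundary then yields the picture storyline described in the abstract.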
573

IV and CV characterization of 90nm CMOS transistors

Lund, Håvard January 2006 (has links)
A 90 nm CMOS technology has been characterized on the basis of IV and CV measurements. This was made feasible by a state-of-the-art probe station and measurement instrumentation capable of measuring currents and capacitances in the low-fA and low-fF range, respectively. From the IV results it was found that static power consumption becomes an increasing challenge as the technology is scaled down. The IV measurements also showed the impact of short-channel effects, which was not as prominent as expected. A literature study resulted in a methodology for performing accurate CV measurements on thin-oxide transistors. By applying extraction methods to the measured capacitances, key parameters have been obtained for the CMOS technology. Some of the extracted results, however, suffer from the choice of test setup.
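The abstract does not list the extracted parameters, but one standard relation in CV-based parameter extraction recovers the effective oxide thickness from the gate capacitance measured in accumulation or strong inversion (symbols are the conventional ones, not taken from the thesis):

$$C_{ox} = \frac{\varepsilon_0\,\varepsilon_{SiO_2}\,W L}{t_{ox}} \quad\Longrightarrow\quad t_{ox} = \frac{\varepsilon_0\,\varepsilon_{SiO_2}\,W L}{C_{ox}},$$

where $W L$ is the gate area and $\varepsilon_{SiO_2} \approx 3.9$ is the relative permittivity of a silicon-dioxide gate oxide.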
574

Applicability and Identified Benefits of Agent Technology : Implementation and Evaluation of an Agent System

Haug, Mari Torgersrud, Kristensen, Elin Marie January 2006 (has links)
Agent-oriented approaches are introduced with the intention of facilitating software development in situations where other methods have shortcomings. Agents offer new possibilities and solutions to problems owing to their properties and characteristics. Agent technology offers a high abstraction level and is therefore a more appropriate tool for building intelligent systems. Multi-agent systems are well suited to application areas with dynamic and challenging environments, and are advantageous for decision support and task automation. Reduced coupling, encapsulation of functionality and a high abstraction level are some of the claimed benefits of agent technology. Empirical studies are needed to investigate whether agent technology is as good as asserted. This master's thesis gives a deeper understanding of agent technology and its benefits. To investigate different aspects, an experiment was designed to reveal the applicability and the benefits. Two multi-agent systems were implemented and used as the objects of study in the empirical investigation. As part of the investigation, a suitable application area was chosen; it can be characterized as a scheduling problem with a dynamic and complex environment. Prometheus and JACK were used as modeling and development tools. Experiences from the development process are presented in this report. The findings of the empirical study indicate reduced coupling and increased encapsulation of functionality. To achieve these benefits, the number of entities and functions had to be increased, and thus the number of lines of code. Further, the results indicate that more entities and lines of code do not significantly influence the development effort, owing to the high abstraction level of agent technology.
575

WebSys - Robustness Assessment and Testing

Pham, Thuy Hue Thi January 2006 (has links)
In recent years, the World Wide Web (WWW) has become a popular platform for system development. Several factors make Web development special. Web-based systems have a large number of quality requirements. Web projects involve people with diverse backgrounds, such as technical people with a background in programming and non-technical people with a background in graphic design. In addition, Web-based systems are often not developed separately, but integrate existing subsystems. The time-to-market requirement is strong. Web-based systems must tolerate errors and abnormal situations caused by internal component failures or user mistakes; robustness is therefore considered a critical factor for Web-based systems, and building a robust Web-based system is never an easy task. Furthermore, the end users of Web-based systems have different backgrounds: many have knowledge of the Web, others have little or none. Since Web-based systems are used by people with such diverse backgrounds, it is important that they tolerate errors and survive user mistakes. The main focus of this project is analyzing the robustness of Web-based systems. To analyze the robustness of a Web-based system, it is necessary to carry out a robustness assessment. Assessment methods are used to evaluate the robustness and give an estimate of the system's robustness. Further, robustness testing of a Web-based system has to be performed to get an idea of the system's current robustness. The estimates and test results are also discussed, compared and evaluated. Automatic Acceptance Testing of Web Applications (AutAT) is used to test the robustness of a Web-based system. DAIM (Norwegian: Digital Arkivering og Innlevering av Masteroppgaver) is the target system whose robustness is tested. Keywords: Robustness, testing, robustness assessment, robustness estimation, Web-based system, AutAT, DAIM.
576

Programming graphic card for fast calculation of sound field in marine acoustics

Haugehåtveit, Olav January 2006 (has links)
Commodity computer graphics chips are probably today's most powerful computational hardware one can buy for the money. These chips, known generically as Graphics Processing Units or GPUs, have in recent years evolved from afterthought peripherals into powerful programmable processors, largely driven by the movie and game industries. Intel co-founder Gordon E. Moore once predicted that the number of transistors on a single integrated chip would double every 18 months. So far this seems to hold for the CPU; for the GPU, however, development has gone much faster, and the number of floating-point operations per second has increased enormously. Because of this rapid evolution, many researchers and scientists have discovered that this enormous floating-point potential can be exploited, and numerous applications have been tested, such as audio and image algorithms. This has also become interesting in the area of marine acoustics, where the demand for computational power is increasing. This master's thesis investigates how to write a program that runs on a GPU to calculate an underwater sound field. Doing so requires a graphics chip with programmable vertex and fragment processors, a graphics API like OpenGL, a shading language like GLSL, and a general-purpose GPU library like Shallows. An existing Matlab program is the basis for the GPU program, and the goal is to reduce the time spent calculating an underwater sound field. The speedup from Matlab to the GPU was found to be around 40-50 times; however, if Matlab had computed the same maximum number of rays as the GPU, the speedup would probably have been larger. Since this study was done on a laptop with an nVidia GeForce Go 6600 graphics chip, a higher gain should be obtainable with a desktop graphics chip.
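The thesis's Matlab ray model is not reproduced in the abstract. As a rough sketch of the data-parallel workload being ported to the GPU, the following Python/numpy code advances many acoustic rays in parallel through a depth-dependent sound-speed profile using a crude explicit Euler integration of the standard 2-D ray equations; the profile, step size and ray count are made up for illustration.

```python
import numpy as np

def trace_rays(n_rays=1024, steps=2000, ds=1.0, source_depth=100.0):
    """Advance all rays in parallel as numpy arrays, mimicking the
    per-ray parallelism a GPU fragment program exploits."""
    # Illustrative linear sound-speed profile c(z) = 1500 + 0.017*z (m/s)
    c = lambda z: 1500.0 + 0.017 * z
    dcdz = 0.017

    theta = np.linspace(-0.2, 0.2, n_rays)   # launch angles (rad, from horizontal)
    r = np.zeros(n_rays)                     # horizontal range (m)
    z = np.full(n_rays, source_depth)        # depth (m)

    for _ in range(steps):
        r += np.cos(theta) * ds
        z += np.sin(theta) * ds
        # Ray bending: d(theta)/ds = -(cos(theta)/c) * dc/dz
        theta += -(np.cos(theta) / c(z)) * dcdz * ds
        # Reflect rays that reach the sea surface
        hit = z < 0.0
        z[hit] = -z[hit]
        theta[hit] = -theta[hit]
    return r, z

ranges, depths = trace_rays()
```

Each ray's update depends only on its own state, which is exactly the kind of per-element independence that maps well onto a fragment program when the computation is moved to the GPU.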
577

A Connectionist Language Parser and Symbol Grounding : Experimental coupling of syntax and semantics through perceptual grounding

Monsen, Sveinung January 2006 (has links)
The work in this thesis concerns natural language processing and understanding, within the context of artificial intelligence research. The aim was to investigate how meaning is contained in language, particularly with respect to how that information is encoded and how it can be decoded, or extracted. The aspects deemed most relevant for this quest were automated processing of the syntactic structure of sentences and of their semantic components. Artificial neural networks were chosen as the research tool, and as such part of the goal became research on connectionist methods. A side goal of interest was to look into the possibility of using insight into neural networks to gain a deeper understanding of how the human brain processes information, particularly language; this area was not explicitly focused on during the research. The methodology selected for achieving the goals was to design and implement a framework for developing neural network models, and then to implement NLP and NLU systems within this framework. The systems selected for exploration and implementation were a parser for handling the syntactic structure and a symbol-grounding system for dealing with the semantic component. A third system was also implemented to investigate an evolution-based communication model for the development of a shared vocabulary between autonomous agents. All implementations were based on recent research and results by others.
578

BUCS: Patterns and Robustness : Experimentation with Safety Patterns in Safety-Critical Software Systems

Ljosland, Ingvar January 2006 (has links)
In modern society, we rely on safely working software systems. This is the final report in a master's degree project investigating key issues in software architecture and the design of safety-critical software systems. A pre-study of a navigation system indicated that functionality-related problems and safety-critical problems do not map one to one, but are better solved in different layers. This means that changes to a system's functionality do not necessarily require changes to its safety-critical modules, and vice versa. To further support the findings of the pre-study, an experiment was designed to investigate these matters. A group of twenty-three computer science students from the Norwegian University of Science and Technology (NTNU) participated as subjects. They were asked to make two functional additions and two safety-critical additions to a software robot emulator. A dynamic web tool was created to present information to the subjects, through which they could answer surveys and upload their task solutions. The results of the experiment show no evidence that the quality attributes were affected by the design approaches. The findings of this study therefore suggest that it is difficult to create safety-critical versions of architectural design patterns, because every design pattern brings its own additions and consequences to a system, and all implications of a pattern should be discussed by the system architects before it is used in a safety-critical system.
579

eGovernment Services in a Mobile Environment

Olaussen, Gunn, Torgersen, Kirsti Nordseth January 2006 (has links)
This report was written as part of our thesis, based on an assignment provided by Kantega. It deals with the use of mobile phones to access eGovernment services using the Liberty Identity Web Services Framework (ID-WSF). Additionally, it explores the use of strong authentication mechanisms on mobile phones while using the phone as the terminal to access such services. The first part of the report describes the project and its goals. In this process, it defines three research questions that are answered in the course of the thesis, and outlines how these questions should be answered. This part also includes a presentation of the prototype that was developed later in the project. The second part of the report concentrates on the theoretical basis of the assignment. Existing standards and technologies for strong authentication and Liberty-enabled products are explored and evaluated, with focus on whether the technologies could be used in the prototype. The third part of the report contains the requirements specification, design, implementation and testing documentation for the prototype. This part aims to describe all aspects of the prototype development and enables us to show that it is a valid proof of concept. Requirements and a design incorporating strong authentication into the prototype are also provided, although this functionality was not implemented as specified in the assignment. The last part of the report evaluates the results obtained in the course of the thesis, and especially the resulting prototype. The prototype fulfills our requirements well, but there were some reservations about the security of the strong authentication design. This part also looks at what can be done in the future to further explore the topic and improve the results. Finally, it shows how the report has answered the research questions defined at the beginning of the thesis by completing a prototype that accesses eGovernment services using Liberty ID-WSF.
580

Financial News Mining : Extracting useful Information from Continuous Streams of Text

Lægreid, Tarjei, Sandal, Paul Christian January 2006 (has links)
Online financial news sources continuously publish information about actors in the Norwegian financial market. These are often short messages describing temporal relations. The amount of information is, however, overwhelming, and it requires great effort to stay up to date on both the latest news and historical relations. It would therefore be advantageous to analyse the information automatically. In this report we present a framework for identifying actors and the relations between them. Text mining techniques are employed to extract the relations and how they evolve over time: part-of-speech tagging and named-entity identification, along with traditional information retrieval and information extraction methods. Features extracted from the news articles are represented as vectors in a vector space, and the framework uses these feature vectors to identify and describe relations between entities in the financial market. A qualitative evaluation of the framework shows that the approach has promising results. Our main finding is that vector representations of features have potential for detecting relations between actors and how these relations evolve. We also found that the approach depends on accurate identification of named entities.
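The abstract describes representing extracted features as vectors in a vector space and relating actors through them. A minimal sketch of that idea, using bag-of-words count vectors compared with cosine similarity, is shown below; the tokenisation, vocabulary and toy news snippets are invented for illustration and are not the features used in the thesis.

```python
import numpy as np
from collections import Counter

def feature_vector(texts, vocabulary):
    """Bag-of-words count vector over a fixed vocabulary for all texts
    mentioning a given actor."""
    counts = Counter(word for text in texts for word in text.lower().split())
    return np.array([counts[w] for w in vocabulary], dtype=float)

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

# Toy news snippets mentioning two fictional actors
docs_a = ["AcmeCorp raises dividend after record oil quarter"]
docs_b = ["Nordic Oil reports record quarter, dividend raised"]
vocab = sorted({w for d in docs_a + docs_b for w in d.lower().split()})

similarity = cosine(feature_vector(docs_a, vocab), feature_vector(docs_b, vocab))
print(f"cosine similarity between actor profiles: {similarity:.2f}")
```

A high similarity between two actors' feature vectors over the same time window would then be read as evidence of a relation, and tracking the score across windows shows how the relation evolves.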
