101

Residual generation for fault diagnosis

Frisk, Erik January 2001 (has links)
The objective when supervising technical processes is to alarm an operator when a fault is detected and to identify one, or possibly a set of, components that may have caused the alarm. Diagnosis is an expanding subject, partly because more applications now have more embedded computing power and more available sensors than before. A fundamental part of many model-based diagnosis algorithms is the so-called residual. A residual is a signal that reacts to a carefully chosen subset of the considered faults; by generating a suitable set of such residuals, fault detection and isolation can be achieved. A common thread is the development of systematic design and analysis methods for residual generators based on a number of different model classes, namely deterministic and stochastic linear models on state-space, descriptor, or transfer-function form, and non-linear polynomial systems. In addition, it is considered important that readily available computer tools exist for all design algorithms. A key result is the minimal polynomial basis algorithm, which is used to parameterize all possible residual generators for linear model descriptions and explicitly finds those solutions of minimal order. The design process and its numerical properties are shown to be sound. The algorithm and its principles are extended to descriptor systems, stochastic systems, non-linear polynomial systems, and uncertain linear systems. Key results from these extensions include: increased robustness by introduction of a reference model, a new type of whitening filter for residual generation for stochastic systems on both state-space and descriptor form, and means to handle algorithmic complexity for the non-linear design problem. In conclusion, new methods have been developed for the four classes of models studied. The methods fulfill the requirements of generating all possible solutions, availability of computational tools, and numerical soundness. They also provide the diagnosis-system designer with a set of tools with well-specified and intuitive design freedom.
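The residual idea lends itself to a compact illustration. The sketch below builds a plain observer-based residual for a toy discrete-time linear state-space model; the matrices, observer gain, and fault scenario are all invented for the example and are not taken from the thesis, which parameterizes residual generators far more generally via the minimal polynomial basis.

```python
import numpy as np

# Toy discrete-time model: x[k+1] = A x + B u + F f,  y = C x
# (all matrices invented for this example)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
F = np.array([[0.5],
              [0.0]])              # fault entry direction
L = np.array([[0.5],
              [0.2]])              # observer gain; A - L C is stable here

def residual(u, y, steps):
    """Observer-based residual r = y - C x_hat: near zero when fault-free."""
    x_hat = np.zeros((2, 1))
    r = np.zeros(steps)
    for k in range(steps):
        r[k] = y[k] - (C @ x_hat)[0, 0]
        x_hat = A @ x_hat + B * u[k] + L * r[k]
    return r

# Simulate the plant with a fault switching on at k = 100
steps = 200
x = np.zeros((2, 1))
u = np.ones(steps)
y = np.zeros(steps)
for k in range(steps):
    y[k] = (C @ x)[0, 0]
    f = 1.0 if k >= 100 else 0.0
    x = A @ x + B * u[k] + F * f

r = residual(u, y, steps)
print("mean |r| before fault:", np.abs(r[:100]).mean())
print("mean |r| after fault: ", np.abs(r[100:]).mean())
```

Fault-free, the residual settles near zero; once the fault enters, it deviates persistently, which is the signal a diagnosis system thresholds for detection and isolation.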
102

Low and Medium Level Vision Using Channel Representations

Forssén, Per-Erik January 2004 (has links)
This thesis introduces and explores a new type of representation for low and medium level vision operations called channel representation. The channel representation is a more general way to represent information than, e.g., plain numerical values, since it allows incorporation of uncertainty and simultaneous representation of several hypotheses. More importantly, it also allows the representation of “no information” when no statement can be given. A channel representation of a scalar value is a vector of channel values, which are generated by passing the original scalar value through a set of kernel functions. The resultant representation is sparse and monopolar. The word sparse signifies that information is not necessarily present in all channels; on the contrary, most channel values will be zero. The word monopolar signifies that all channel values have the same sign, e.g., they are either positive or zero. A zero channel value denotes “no information”, and for non-zero values, the magnitude signifies the relevance. In the thesis, a framework for channel encoding and local decoding of scalar values is presented. Averaging in the channel representation is identified as a regularised sampling of a probability density function, and a subsequent decoding is thus a mode estimation technique. The mode estimation property of channel averaging is exploited in the channel smoothing technique for image noise removal. We introduce an improvement to channel smoothing, called alpha synthesis, which deals with the problem of jagged edges present in the original method. Channel smoothing with alpha synthesis is compared to mean-shift filtering, bilateral filtering, median filtering, and normalized averaging, with favourable results. A fast and robust blob-feature extraction method for vector fields is developed. The method is also extended to cluster constant slopes instead of constant regions. The method is intended for view-based object recognition and wide baseline matching, and is demonstrated on a wide baseline matching problem. A sparse scale-space representation of lines and edges is implemented and described. The representation keeps line and edge statements separate, and ensures that they are localised by inhibition from coarser scales. The result is, however, still locally continuous, in contrast to non-max-suppression approaches, which introduce a binary threshold. The channel representation is well suited to learning, which is demonstrated by applying it in an associative network. An analysis of the representational properties of associative networks using the channel representation is made. Finally, a reactive system design using the channel representation is proposed. The system is similar in idea to recursive Bayesian techniques using particle filters, but the present formulation allows learning using the associative networks.
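As a rough illustration of channel encoding and local decoding, the sketch below uses overlapping cos² kernels at unit spacing; the kernel choice, the spacing, and the simple weighted-mean decoding are assumptions made for the example, not the exact scheme of the thesis.

```python
import numpy as np

def cos2_kernel(d):
    """cos^2 basis function with support |d| < 1.5 (channel spacing = 1)."""
    return np.where(np.abs(d) < 1.5, np.cos(np.pi * d / 3.0) ** 2, 0.0)

def encode(x, centers):
    """Channel-encode scalar x: a sparse, monopolar vector of channel values."""
    return cos2_kernel(x - centers)

def decode(ch, centers):
    """Local decoding: weighted mean over the strongest channel and its
    neighbours (a simple stand-in for the thesis's local decoding)."""
    i = int(np.argmax(ch))
    lo, hi = max(i - 1, 0), min(i + 2, len(ch))
    w = ch[lo:hi]
    return np.sum(w * centers[lo:hi]) / np.sum(w)

centers = np.arange(0.0, 10.0)        # channel centers at 0..9
c = encode(3.7, centers)
print(np.round(c, 3))                 # only channels near 3.7 are non-zero
print(decode(c, centers))             # approximately recovers 3.7

# Averaging encodings of noisy samples ~ regularised sampling of a density;
# decoding the average then acts as a mode estimator.
samples = 3.7 + 0.2 * np.random.randn(500)
avg = np.mean([encode(s, centers) for s in samples], axis=0)
print(decode(avg, centers))           # close to the mode, ~3.7
```

The zero entries of `c` are the “no information” channels, and averaging before decoding is what gives channel smoothing its mode-seeking, edge-preserving character.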
103

Performance evaluation of a wired and a wireless iSCSI network using a simulation model

Kawatra, Kshitij 06 May 2016 (has links)
This paper compares the performance of a wired and a partially wireless network sending storage data from the client's end to the server's end. The purpose of this project is to observe whether a wireless network can be an efficient alternative to a wired network for sending storage-specific protocols and data. The comparison has been made on the basis of throughput, i.e., the number of bits transferred per second, and delay, i.e., the time taken by one bit to travel from one end to the other. For a wired network, delay can be added to the transmission at each node depending on its processing speed as well as the distance between two nodes. For a wireless network, other factors like channel bandwidth and buffer size play an important role. In this paper, we have also observed the effect of buffer size on a wireless network and how it can be manipulated to minimize delay and packet loss in the network. We have implemented and simulated our network scenario in OPNET Modeler by Riverbed.
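The study itself was simulated in OPNET Modeler; as a language-agnostic illustration of the buffer-size trade-off it discusses, here is a toy single-server queue with a finite buffer. The load, service rate, and buffer sizes are hypothetical, not taken from the paper.

```python
import random

def simulate(buffer_size, load=0.9, n=100_000, seed=1):
    """Toy M/M/1/K queue: packets arriving to a full buffer are dropped.
    Larger buffers cut loss but lengthen delay (parameters illustrative)."""
    rng = random.Random(seed)
    t = 0.0
    departures = []            # departure times of packets in the system
    dropped, total_delay, served = 0, 0.0, 0
    for _ in range(n):
        t += rng.expovariate(load)              # next arrival (service mean = 1)
        departures = [d for d in departures if d > t]   # finished packets leave
        if len(departures) >= buffer_size:
            dropped += 1                        # buffer full: packet lost
            continue
        start = max(t, departures[-1] if departures else t)
        dep = start + rng.expovariate(1.0)      # FIFO service
        departures.append(dep)
        total_delay += dep - t
        served += 1
    return total_delay / served, dropped / n

for k in (5, 20, 80):
    delay, loss = simulate(k)
    print(f"buffer={k:3d}  mean delay={delay:5.2f}  loss={loss:.3%}")
```

Running this shows the trade-off the paper manipulates: a small buffer keeps delay low at the cost of packet loss, while a large buffer nearly eliminates loss but lets queueing delay grow.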
104

Reasoning with Rough Sets and Paraconsistent Rough Sets

Vitória, Aida January 2010 (has links)
This thesis presents an approach to knowledge representation combining rough sets and paraconsistent logic programming. The rough sets framework proposes a method to handle a specific type of uncertainty originating from the fact that an agent may perceive different objects of the universe as being similar, although they may have different properties. A rough set is then defined by approximations taking into account the similarity between objects. The number of applications and the clear mathematical foundation of rough sets techniques demonstrate their importance. Most of the research in the rough sets field overlooks three important aspects. Firstly, there are no established techniques for defining rough concepts (sets) in terms of other rough concepts and for reasoning about them. Secondly, there are no systematic methods for integration of domain and expert knowledge into the definition of rough concepts. Thirdly, some additional forms of uncertainty are not considered: it is assumed that knowledge about similarities between objects is precise, while in reality it may be incomplete and contradictory; and, for some objects there may be no evidence about whether they belong to a certain concept. The thesis addresses these problems using the ideas of paraconsistent logic programming, a recognized technique which makes it possible to represent inconsistent knowledge and to reason about it. This work consists of two parts, each of which proposes a different language. Both languages cater for the definition of rough sets by combining lower and upper approximations and boundaries of other rough sets. Both frameworks take into account that membership of an object into a concept may be unknown. The fundamental difference between the languages is in the treatment of similarity relations. The first language assumes that similarities between objects are represented by equivalence relations induced from objects with similar descriptions in terms of a given number of attributes. The second language allows the user to define similarity relations suitable for the application in mind and takes into account that similarity between objects may be imprecise. Thus, four-valued similarity relations are used to model indiscernibility between objects, which give rise to rough sets with four-valued approximations, called paraconsistent rough sets. The semantics of both languages borrows ideas and techniques used in paraconsistent logic programming. Therefore, a distinctive feature of our work is that it brings together two major fields, rough sets and paraconsistent logic programming.
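The rough-set core of the first language — lower and upper approximations induced by attribute-based indiscernibility — can be sketched in a few lines. The decision table below is hypothetical, chosen only to make the approximations visible.

```python
from collections import defaultdict

def approximations(objects, attrs, concept):
    """Lower/upper approximation of `concept` under the indiscernibility
    relation induced by projecting objects onto the attributes `attrs`."""
    classes = defaultdict(set)
    for obj, desc in objects.items():
        key = tuple(desc[a] for a in attrs)   # objects with equal descriptions
        classes[key].add(obj)                 # fall in the same class
    lower, upper = set(), set()
    for eq in classes.values():
        if eq <= concept:          # whole class inside the concept
            lower |= eq
        if eq & concept:           # class overlaps the concept
            upper |= eq
    return lower, upper

# Hypothetical decision table: patients described by two attributes
patients = {
    "p1": {"temp": "high", "cough": "yes"},
    "p2": {"temp": "high", "cough": "yes"},
    "p3": {"temp": "high", "cough": "no"},
    "p4": {"temp": "low",  "cough": "no"},
}
flu = {"p1", "p3"}                            # the concept to approximate
lo, up = approximations(patients, ["temp", "cough"], flu)
print("lower:   ", lo)        # {'p3'}: certainly flu
print("upper:   ", up)        # {'p1', 'p2', 'p3'}: possibly flu
print("boundary:", up - lo)   # indiscernible cases
```

The thesis goes beyond this classical two-valued picture: its second language replaces the crisp equivalence classes above with four-valued similarity relations, yielding the paraconsistent rough sets it introduces.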
105

Methods for Visually Guided Robotic Systems : Matching, Tracking and Servoing

Larsson, Fredrik January 2009 (has links)
This thesis deals with three topics: Bayesian tracking, shape matching, and visual servoing. These topics are bound together by the goal of visual control of robotic systems. The work leading to this thesis was conducted within two European projects, COSPAL and DIPLECS, both with the stated goal of developing artificial cognitive systems. Thus, the ultimate goal of my research is to contribute to the development of artificial cognitive systems. The contribution to the field of Bayesian tracking is in the form of a framework called Channel Based Tracking (CBT). CBT has been proven to perform competitively with particle-filter-based approaches, but with the added advantage of not having to specify the observation or system models. CBT uses channel representation and correspondence-free learning in order to acquire the observation and system models from unordered sets of observations and states. We demonstrate how this has been used for tracking cars in the presence of clutter and noise. The shape matching part of this thesis presents a new way to match Fourier Descriptors (FDs). We show that it is possible to take rotation and index shift into account while matching FDs, without explicitly de-rotating the contours or neglecting the phase. We also propose to use FDs for matching locally extracted shapes, in contrast to the traditional way of using FDs to match the global outline of an object. We have in this context evaluated our matching scheme against the popular Affine Invariant FDs and shown that our method is clearly superior. In the visual servoing part we present a visual servoing method that is based on an action-precedes-perception approach. By applying random actions to a system, e.g. a robotic arm, it is possible to learn a mapping between action space and percept space. In experiments we show that it is possible to achieve high-precision positioning of a robotic arm without knowing beforehand what the robotic arm looks like or how it is controlled.
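For orientation, the sketch below shows a naive brute-force baseline for matching Fourier Descriptors under translation, scale, rotation, and start-point (index) shift. The thesis's contribution is precisely to handle rotation and index shift without this exhaustive search and without neglecting phase, so this is only the problem setup, not the proposed method; the test contours are made up.

```python
import numpy as np

def fd(contour):
    """Fourier descriptors of a closed contour given as complex points.
    Translation is removed by dropping FD[0]; scale by dividing by |FD[1]|."""
    Z = np.fft.fft(contour)
    Z = Z[1:]                        # drop DC term -> translation invariance
    return Z / np.abs(Z[0])          # scale normalization

def match(a, b):
    """Smallest descriptor distance over all start-point shifts, with the
    optimal rotation found in closed form for each shift (brute force)."""
    N = len(a) + 1
    k = np.arange(1, N)
    best = np.inf
    for s in range(N):
        bs = b * np.exp(-2j * np.pi * k * s / N)   # undo index shift s
        # min over rotation angle theta of ||a - e^{i theta} bs||^2
        d2 = np.sum(np.abs(a) ** 2 + np.abs(bs) ** 2) \
             - 2 * np.abs(np.sum(a * np.conj(bs)))
        best = min(best, d2)
    return best

# Same made-up contour, rotated, scaled, translated, and start-shifted
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
shape = np.cos(t) + 1j * 0.5 * np.sin(t) + 0.2 * np.cos(3 * t)
other = np.roll(1.5 * np.exp(1j * 0.7) * shape + (2 + 3j), 10)
print(match(fd(shape), fd(other)))          # ~0: same shape
print(match(fd(shape), fd(np.exp(1j * t)))) # larger: a circle differs
```

Rotation multiplies every descriptor by one global phase and an index shift multiplies descriptor k by e^{-i2πks/N}, which is why the baseline can search shifts and solve rotation in closed form; the thesis shows how to avoid the search altogether.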
106

Stages of faculty concern about teaching online: Relationships between faculty teaching methods and technology use in teaching

Randall, John H. 20 July 2016 (has links)
As more online courses and programs are created, it is imperative that institutions understand the concerns of their faculty toward teaching online, the types of technology they use, and the methods they use to instruct students, in order to provide appropriate resources to support them. This quantitative study measures these concerns, using the Stages of Concern Questionnaire, of full-time faculty at a small Christian liberal arts university in Southern California relative to teaching online, technology use, and teaching methods. The majority of faculty reported being unconcerned about teaching online.

The correlations conducted between faculty's concerns about teaching online and their teaching methods showed that while some relationships exist, they are weak. The same was true for the relationships between faculty's technology use and their concern about teaching online. Additionally, analysis of variance revealed that faculty who practice more student-centered teaching methods are more likely to focus on coordinating and cooperating with others regarding teaching online.

It can be concluded that the majority of faculty at the institution are not concerned about teaching online and that overall, their technology use and specific teaching methods do not contribute to their concerns about teaching online. However, it was found that faculty who are more student-centered are more likely to cooperate and coordinate with others with regard to teaching online. These findings have implications for the institution where this research was conducted. The administration can be more confident knowing that many of their faculty are not highly concerned about teaching online and therefore may be less likely to resist teaching these types of classes. The administration now has information showing that faculty who are more student-centered are more likely to cooperate with others with regard to teaching online. These faculty may be more inclined to promote online teaching and ultimately help fulfill the strategic plans of the University.
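The statistics reported — correlations between concern scores and teaching-method scales, plus analysis of variance across groups — follow a standard recipe. A minimal sketch on synthetic data (all variable names, scales, and effect sizes invented here) could look like this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical survey scores for 60 faculty members
student_centered = rng.normal(3.5, 0.8, 60)    # teaching-method scale
concern = 0.2 * student_centered + rng.normal(2.0, 0.7, 60)  # concern score

# Pearson correlation: a weak relationship, as in the study's findings
r, p = stats.pearsonr(student_centered, concern)
print(f"r = {r:.2f}, p = {p:.3f}")

# One-way ANOVA: concern across low/mid/high student-centered tertiles
low, mid, high = np.array_split(concern[np.argsort(student_centered)], 3)
F, p = stats.f_oneway(low, mid, high)
print(f"F = {F:.2f}, p = {p:.3f}")
```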
107

Conceptual Product Development in Small Corporations

Nilsson, Per January 2010 (has links)
The main objective of the thesis "Conceptual Product Development in Small Corporations" is, by the use of a case study, to test the MFD™ method (Erixon G., 1998) combined with PMM in a product development project (henceforth called the MFD™/PMM method). The MFD™/PMM method, used for documenting and controlling a product development project, has since it was introduced been used in several industries and projects. The method has proved to be a good way of working with the early stages of product development; however, almost all projects have been carried out in large industries, which means that there are very few references to how the MFD™/PMM method works in a small corporation. Therefore, the case study in this thesis was carried out in a small corporation to find out whether the MFD™/PMM method can also be applied and used in such a corporation.

The PMM was proposed in a paper presented at Delft University of Technology in Holland in 1998 by the author and Gunnar Erixon (see appended paper C: The chart of modular function deployment). The title "The chart of modular function deployment" was later renamed PMM, Product Management Map (Sweden PreCAD AB, 2000). The PMM consists of a QFD matrix linked to a MIM (Module Indication Matrix) via a coupling matrix, which makes it possible to form an unbroken chain from the customer domain to the designed product/modules (see Figure 3-2). The PMM makes it easy to correct omissions made in creating new products and modules.

In this thesis the universal MFD™/PMM method has been adapted by the author to three models of product development: original, evolutionary, and incremental development.

The evolutionary adapted MFD™/PMM method was tested as a case study at Atlings AB in the community of Ockelbo. Atlings AB is a small corporation with a total of 50 employees and an annual turnover of 9 million €. The product studied at the corporation was a steady rest for supporting long shafts in turning. The project team consisted of the managing director, a sales promoter, a production engineer, a design engineer, and a workshop technician, with the author as team leader and a colleague from Dalarna University as discussion partner. The project team held six meetings.

The project team managed to use MFD™ and to make a complete PMM of the studied product. No real problems occurred in the project work; on the contrary, the team members worked very well in the group and had ideas for how to improve the product. Instead, the challenge for a small company is how to work with the MFD™/PMM method in the long run. If the MFD™/PMM method is to be a useful tool for the company it needs to be used continuously, and that requires financial and personnel resources. One way for the company to overcome the probable lack of resources regarding capital and personnel is to establish a good cooperation with a regional university or a development centre.
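The "unbroken chain" from customer domain to modules can be pictured as chained weight matrices: customer importance weights flow through the QFD matrix, the coupling matrix, and the MIM. The sketch below uses toy labels and numbers, not values from the Atlings AB case.

```python
import numpy as np

# All weights below are invented for illustration
# QFD: customer requirements (rows) -> product properties (columns)
qfd = np.array([[9, 3, 0],      # "easy setup"
                [1, 9, 3],      # "holds shaft firmly"
                [0, 3, 9]])     # "low maintenance"

# Coupling matrix: product properties -> functions
coupling = np.array([[9, 1],
                     [3, 9],
                     [0, 3]])

# MIM: functions -> module candidates
mim = np.array([[9, 0, 1],
                [1, 9, 3]])

customer_weights = np.array([5, 4, 2])   # importance of each requirement

# Chain the matrices: how strongly each module traces back to the customers
module_scores = customer_weights @ qfd @ coupling @ mim
print(module_scores)   # larger score = stronger customer-to-module link
```

A break anywhere in the chain (a module with no traced-back weight) flags exactly the kind of omission the PMM is meant to expose.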
108

The HiPIMS Process

Lundin, Daniel January 2010 (has links)
The work presented in this thesis involves experimental and theoretical studies related to a thin film deposition technique called high power impulse magnetron sputtering (HiPIMS), and more specifically the plasma properties and how they influence the coating. HiPIMS is an ionized physical vapor deposition technique based on conventional direct current magnetron sputtering (DCMS). The major difference between the two methods is that HiPIMS has the added advantage of providing substantial ionization of the sputtered material, and thus presents many new opportunities for the coating industry. Understanding the dynamics of the charged species and their effect on thin film growth in the HiPIMS process is therefore essential for producing high-quality coatings. In the first part of the thesis a new type of anomalous electron transport was found. Investigations of the transport resulted in the discovery that this phenomenon could be quantitatively described as related to, and mediated by, highly nonlinear waves, likely due to the modified two-stream instability, resulting in electric field oscillations in the MHz range (the lower hybrid frequency). Measurements in the plasma confirmed these oscillations as well as trends predicted by the theory of these types of waves. Using electric probes, the degree of anomalous transport in the plasma could also be determined by measuring the current density ratio between the azimuthal current density (of which the Hall current density is one contribution) and the discharge current density, Jϕ / JD. The results were verified in another series of measurements using Rogowski probes to directly gain insight into the internal currents in the HiPIMS discharge. The results provided important insights into understanding the mechanism behind the anomalous transport. It was furthermore demonstrated that the current ratio Jϕ / JD is inversely proportional to the transverse resistivity, η⊥, which governs how well momentum in the direction of the current is transferred from the electrons to the ions in the plasma. By looking at the forces involved in the charged particle transport it was expected that the azimuthally rotating electrons would exert a volume force on the ions tangentially outwards from the circular race track region. The effect of the anomalous transport would therefore be that the ions were transported across the magnetic field lines and to a larger extent deflected sideways, instead of solely moving from the target region towards a substrate placed in front of the target some distance away. From the experiments it was confirmed that a substantial fraction of sputtered material is transported radially away from the cathode and lost to the walls in HiPIMS as well as in DCMS, but more so for HiPIMS, giving one possible explanation of why the deposition rate is lower for HiPIMS than for DCMS. Moreover, in a separate investigation of the energy flux it could be determined that the heating due to radial energy flux reached as much as 60 % of the axial energy flux, which is likely a result of the anomalous transport of charged species present in the HiPIMS discharge. Also, the recorded ion energy flux confirmed theoretical estimations of this type of transport regarding energy and direction. In order to gain a better understanding of the complete discharge regime, as well as to provide a link between the HiPIMS and DCMS processes, the current and voltage characteristics were investigated for discharge pulses longer than 100 μs.
The current behavior was found to be strongly correlated with the chamber gas pressure. Based on these experiments it was suggested that the high-current transients commonly seen in the HiPIMS process cause a depletion of the working gas in the area in front of the target, and thereby a transition to a DCMS-like high-voltage, lower-current regime, which alters the deposition conditions. In the second part of the thesis, using the results and ideas from the fundamental plasma investigations, it was possible to successfully implement different coating improvements. First, the concept of sideways deposition of thin films was examined in a dual-magnetron system, providing a solution for coating complex-shaped surfaces. Here, the two magnetrons were facing each other with opposing magnetic fields, forcing electrons, and thereby also ionized material, to be transported radially towards the substrate. In this way deposition inside tubular substrates can be carried out in a beneficial way. Last, the densification of thin films using HiPIMS was investigated for eight different materials (Al, Ti, Cr, Cu, Zr, Ag, Ta, and Pt). Through careful characterization of the thin film properties it was determined that the HiPIMS coatings were approximately 5-15 % denser than the DCMS coatings. This could be attributed to the increased ion bombardment seen in the HiPIMS process, where the momentum transfer between the growing film and the incoming ions is very efficient due to the equal mass of the atoms constituting the film and the bombarding species, leading to a less pronounced columnar microstructure. The deposition conditions were also examined using a global plasma model, which was in good agreement with the experimental results.
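As a sanity check on the reported MHz-range oscillations, the standard dense-plasma approximation f_lh ≈ √(f_ce · f_ci) with a magnetron-typical field strength (the 50 mT value below is an assumption, not a figure from the thesis) indeed lands in the MHz range:

```python
import math

e = 1.602e-19                 # elementary charge [C]
m_e = 9.109e-31               # electron mass [kg]
m_Ar = 39.95 * 1.661e-27      # argon ion mass [kg]

B = 0.05                      # assumed field near the race track [T]

f_ce = e * B / (2 * math.pi * m_e)    # electron cyclotron frequency
f_ci = e * B / (2 * math.pi * m_Ar)   # ion cyclotron frequency

# Dense-plasma limit: lower hybrid frequency ~ geometric mean of the two
f_lh = math.sqrt(f_ce * f_ci)
print(f"f_ce = {f_ce / 1e9:.2f} GHz, f_ci = {f_ci / 1e3:.1f} kHz")
print(f"f_lh ~ {f_lh / 1e6:.1f} MHz")   # a few MHz, as reported
```

With these assumed numbers the estimate comes out around 5 MHz, consistent with the thesis's attribution of the observed electric field oscillations to the lower hybrid frequency.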
109

Supporting Collaborative Work through ICT : How End-users Think of and Adopt Integrated Health Information Systems

Rahimi, Bahlol January 2009 (has links)
Health Information Systems (HISs) are implemented to support individuals, organizations, and society, making work processes integrated and contributing to increased service quality and patient safety. However, the outcomes of many HIS implementations in both primary care and hospital settings have either not yet met all the expectations decision-makers identified, or the implementations have failed. There is, therefore, a growing interest in increasing knowledge about the prerequisites that must be fulfilled to make the implementation and adoption of HISs more effective and to improve collaboration between healthcare providers.

The general purpose of the work presented in this thesis is to explore issues related to the implementation, use, and adoption of HISs and their contribution to improving inter- and intra-organizational collaboration in a healthcare context. The studies included have different research objectives and consequently used different research methods, such as case study, literature review, meta-analysis, and surveys. The selection of research methodology has thus depended on the aim of each study and its expected results.

In the first study we showed that there is no standard framework to evaluate the effects and outputs of the implementation and use of ICT-based applications in healthcare settings, which makes the comparison of international results impossible so far.

Critical issues, such as the techniques employed to teach staff to use an integrated system, the involvement of users in the implementation process, and the efficiency of the human-computer interface, were reported in particular in the second study included in this thesis. The results of this study also indicated that the development of evidence-based implementation processes should be considered in order to diminish unexpected outputs that affect users, patients, and stakeholders.

We learned in the third study that merely implementing a HIS will not automatically increase organizational efficiency. Strategic, tactical, and operational actions have to be taken into consideration, including management involvement, integration in the healthcare workflow, establishing compatibility between software and hardware, user involvement, and education and training.

When using an Integrated Electronic Prescribing System (IEPS), pharmacy staff reported expedited processing of prescriptions, increased patient safety, and a reduced risk of prescription errors, as well as of handing over erroneous medications to patients. However, they also stated that the system does not prevent all mistakes, and medication errors still occur. We documented, in general, positive opinions about the IEPS system in the fifth article. The results in this article indicated that the safety of the system has increased compared to a paper-based one. The results also showed an impact on customer relations with the pharmacy and on the prevention of errors. However, besides finding adoption of the IEPS, we identified a series of undesired and unplanned outputs that affect the efficiency and efficacy of use of the system.

Finally, in the sixth study we found indications of non-optimality in the computerized provider order entry system, because the system was not adapted to the specific professional practice of three-quarters of the physicians and half of the nurses. Respondents also pointed out human-computer interaction constraints when using the system, and indicated that it could lead to adverse drug events in some circumstances.

The work presented in this thesis contributes to increasing knowledge in the area of health informatics on how ICT supports inter- and intra-organizational collaborative work in a healthcare context, and to identifying the factors and prerequisites that need to be taken into consideration when implementing new generations of HISs.
110

Color Emotions in Large Scale Content Based Image Indexing

Solli, Martin January 2011 (has links)
Traditional content based image indexing aims at developing algorithms that can analyze and index images based on their visual content. A typical approach is to measure image attributes, like colors or textures, and save the result in image descriptors, which can then be used in recognition and retrieval applications. Two topics within content based image indexing are addressed in this thesis: emotion based image indexing, and font recognition. The main contribution is the inclusion of high-level semantics in indexing of multi-colored images. We focus on color emotions and color harmony, and introduce novel emotion and harmony based image descriptors, including global emotion histograms, a bag-of-emotions descriptor, an image harmony descriptor, and an indexing method based on Kobayashi's Color Image Scale. The first three are based on models from color science, analyzing emotional properties of single colors or color combinations. A majority of the descriptors are evaluated in psychophysical experiments. The results indicate that observers perceive color emotions and color harmony for multi-colored images in similar ways, and that observer judgments correlate with values obtained from the presented descriptors. The usefulness of the descriptors is illustrated in large scale image classification experiments involving emotion related image categories, where the presented descriptors are compared with global and local standard descriptors within this field of research. We also investigate if these descriptors can predict the popularity of images. Three image databases are used in the experiments, one obtained from an image provider, and two from a major image search service. The two from the search service were harvested from the Internet and contain image thumbnails together with keywords and user statistics. One of them is a traditional object database, whereas the other is a unique database focused on emotional image categories. A large part of the emotion database has been released to the research community. The second contribution is visual font recognition. We implemented a font search engine, capable of handling very large font databases. The input to the search engine is an image of a text line, and the output is the name of the font used when rendering the text. After pre-processing and segmentation of the input image, eigenimages are used, where features are calculated for individual characters. The performance of the search engine is illustrated with a database containing more than 2700 fonts. A system for visualizing the entire font database is also presented. Both the font search engine and the descriptors related to emotions and harmony are implemented in publicly available search engines. The implementations are presented together with user statistics.
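The eigenimage step of the font search engine is essentially PCA on flattened character images. The sketch below shows the generic construction on synthetic stand-in data; the pre-processing, segmentation, and feature details of the actual engine are not reproduced here.

```python
import numpy as np

def fit_eigenimages(X, n_components):
    """PCA on flattened character images (rows of X): the top right singular
    vectors are the 'eigenimages'; projections onto them serve as features."""
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:n_components]            # eigenimages
    return mean, basis

def features(X, mean, basis):
    return (X - mean) @ basis.T          # low-dimensional descriptors

# Synthetic stand-in: 100 "character images" of 16x16 pixels per font
rng = np.random.default_rng(0)
font_a = rng.normal(0.0, 1.0, (100, 256))
font_b = font_a + 0.8 * rng.normal(0.0, 1.0, (100, 256)) + 1.5

X = np.vstack([font_a, font_b])
mean, basis = fit_eigenimages(X, n_components=10)
F = features(X, mean, basis)

# Nearest-neighbour lookup on the eigenimage features
query = features(font_b[:1] + 0.1 * rng.normal(size=(1, 256)), mean, basis)
d = np.linalg.norm(F - query, axis=1)
print("best match index:", d.argmin(), "(0-99 = font A, 100-199 = font B)")
```

In a real engine the rows of X would be segmented, normalized character glyphs rendered in each font, and a query character is matched by nearest neighbour in the same projected space.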
