711

Optimization of Component Connections for an Embedded Component System

Azumi, Takuya, Takada, Hiroaki, Oyama, Hiroshi 29 August 2009
No description available.
712

Solving the Vehicle Routing Problem: Using Search-based Methods and PDDL

Agerberg, Gösta January 2013
In this project the optimization of transport planning was studied. The premise was that smaller transport companies lack the capability to fully optimize their transports: each company optimizes at the company level, so the result may be optimal for that company while potential for further optimization remains. The idea was to build a collaboration of transport companies and then optimize the transports globally within the collaboration. The intent was for the collaboration to perform the same driving assignments at a lower cost, by using fewer vehicles and drivers, by travelling a shorter distance, or both. This would be achieved by planning the assignments in a smarter way, for example using one company's empty return journey to perform an assignment for another company. Because of the complexity of these problems, called Vehicle Routing Problems (VRP) and shown to be NP-hard, search methods are often used. In this project the method of choice was a PDDL-based planner called LPG-td, which uses enforced hill-climbing together with best-first search to find feasible solutions. The method was tested for scaling, for performance against another method, and against time, as well as on a problem based on real-life data. The results showed that LPG-td may not be a suitable candidate for the problem considered in this project: the solutions found for the collaboration were worse than the sum of the individual solutions and required more computational time. Since the collaboration's solution should, in theory, be at most equal to the sum of the individual solutions, this means that the planner failed.
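A minimal sketch of the local-search idea underlying such planners is shown below: plain hill-climbing over a toy VRP instance in Python. The distance matrix, the relocate move, and the cost function are invented stand-ins for illustration, not LPG-td's PDDL-based machinery.

```python
# Plain hill-climbing over a toy VRP instance. Illustrative only:
# LPG-td plans over a PDDL encoding; everything here is a hypothetical stand-in.
DIST = [  # symmetric distances, node 0 is the depot
    [0, 4, 6, 8, 5],
    [4, 0, 3, 7, 6],
    [6, 3, 0, 2, 4],
    [8, 7, 2, 0, 3],
    [5, 6, 4, 3, 0],
]

def route_cost(route):
    stops = [0] + route + [0]  # depot -> customers -> depot
    return sum(DIST[a][b] for a, b in zip(stops, stops[1:]))

def total_cost(routes):
    return sum(route_cost(r) for r in routes)

def neighbors(routes):
    # relocate one customer into another route (a classic VRP move)
    for i, src in enumerate(routes):
        for j in range(len(routes)):
            if i == j:
                continue
            for k in range(len(src)):
                new = [r[:] for r in routes]
                new[j].append(new[i].pop(k))
                yield new

def hill_climb(routes):
    while True:
        best = min(neighbors(routes), key=total_cost, default=None)
        if best is None or total_cost(best) >= total_cost(routes):
            return routes  # local optimum reached
        routes = best

print(hill_climb([[1, 2], [3, 4]]))  # one vehicle may end up doing all stops
```

A collaboration corresponds to handing the search one merged set of routes instead of optimizing each company's routes separately; a sound search should never do worse on the merged problem, which is why the planner's worse collaborative solutions indicated failure.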
713

Distinguishing successful from unsuccessful venture capital investments in technology-based new ventures: How investment decision criteria relate to deal performance

Pries, Fred January 2001
This study investigates variability in the importance of investment decision criteria used by venture capitalists in assessing new technology-based ventures, and relates the criteria to the subsequent performance of the investment in the new venture. Variability was measured using interval- and ordinal-scale approaches for both criteria ratings and rankings. The analyses found that the criteria used by venture capitalists form a general hierarchy that is consistently ranked across ventures. However, some criteria do not form part of this hierarchy, and their importance varies depending on the specific venture being evaluated. The criteria that are consistently considered important by venture capitalists can be thought of as necessary conditions for investment. The hypotheses concerning the relationship between the criteria and subsequent deal performance are that:
• deal performance can be assessed by venture capitalists earlier for Internet-related ventures than for other technology-based ventures (H1);
• Internet-related ventures have more extreme levels of deal performance (H2);
• a small number of criteria will distinguish between successful and unsuccessful deal performance (H3);
• criteria that do distinguish have above-average variability (H4); and
• criteria related to first-mover advantage distinguish between successful and unsuccessful deals (H5).
The study was conducted in two parts. The original study (n=100), conducted by Bachher (2000), gathered information about the importance of the investment criteria using a web-based survey. The follow-up study (n=40) gathered information about the success of the investments by surveying the original participants and gathering information from the Internet. Limitations of the study include a nonrandom sample, a small sample size for the follow-up survey, and the very small number (n=5) of unsuccessful investments identified. Evidence for hypotheses H1 and H2 was in the predicted direction but failed to achieve statistical significance. The evidence is supportive of H3. Evidence for H4 and H5 was not found. Additional analysis of the results suggests that venture capitalists whose investments were ultimately unsuccessful placed less importance on technology-related criteria than did venture capitalists investing in the other ventures. This finding implies that venture capitalists need to perform detailed assessments of the technology of new ventures.
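To illustrate the form a criterion-level comparison like H3 could take (this is not the study's own analysis or data), here is a hedged sketch using a nonparametric test, which suits the very small unsuccessful group:

```python
# Hedged sketch: comparing one criterion's importance ratings across
# deal outcomes. The ratings below are invented placeholders, NOT the
# study's data (which had only n=5 unsuccessful deals).
from scipy.stats import mannwhitneyu

successful   = [4.5, 4.0, 4.8, 3.9, 4.2, 4.6]   # hypothetical ratings (1-5)
unsuccessful = [3.1, 2.8, 3.5, 3.0, 2.6]        # hypothetical ratings (1-5)

u_stat, p_value = mannwhitneyu(successful, unsuccessful,
                               alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
```

Repeating such a test per criterion, with appropriate multiple-comparison care, is one way a small distinguishing subset of criteria could be identified.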
714

A Web-based Statistical Analysis Framework

Chodos, David January 2007
Statistical software packages have been used for decades to perform statistical analyses. Recently, the emergence of the Internet has expanded the potential for these packages. However, none of the existing packages have fully realized the collaborative potential of the Internet. This medium, which is beginning to gain acceptance as a software development platform, allows people who might otherwise be separated by organizational or geographic barriers to come together and tackle complex issues using commonly available data sets, analysis tools and communications tools. Interestingly, there has been little work towards solving this problem in a generally applicable way. Rather, systems in this area have tended to focus on particular data sets, industries, or user groups. The Web-based statistical analysis model described in this thesis fills this gap. It includes a statistical analysis engine, data set management tools, an analysis storage framework and a communication component to facilitate information dissemination. Furthermore, its focus on enabling users with little statistical training to perform basic data analysis means that users of all skill levels will be able to take advantage of its capabilities. The value of the system is shown both through a rigorous analysis of the system’s structure and through a detailed case study conducted with the tobacco control community.
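As an illustration of the kind of web-facing analysis endpoint such a framework exposes to users with little statistical training, here is a minimal sketch; Flask, the route name, and the JSON shape are assumptions, since the abstract does not specify an implementation stack:

```python
# Minimal sketch of a web-based analysis endpoint. The framework,
# route, and payload format are hypothetical illustrations.
from statistics import mean, stdev

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/analyze", methods=["POST"])
def analyze():
    # expects JSON like {"values": [1.2, 3.4, ...]} from a shared data set
    values = request.get_json()["values"]
    return jsonify({
        "n": len(values),
        "mean": mean(values),
        "stdev": stdev(values) if len(values) > 1 else None,
    })

if __name__ == "__main__":
    app.run()  # collaborators anywhere can submit data and read results
```

The collaborative value lies less in the statistics themselves than in serving shared data sets, stored analyses, and discussion through one web interface.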
715

CBKR+: A Conceptual Framework for Improving Corpus Based Knowledge Representation

Ivkovic, Shabnam January 2006
In Corpus Based Knowledge Representation (CBKR), limited association capability (that is, no criteria in place to extract substantial associations in the corpus) and a lack of support for hypothesis testing and prediction in context restricted the application of the methodology by information specialists and data analysts. In this thesis, the researcher proposed a framework called CBKR+ to increase the expressiveness of CBKR by identifying and incorporating association criteria that allow the support of new forms of analyses related to hypothesis testing and prediction in context.

As contributions of the CBKR+ framework, the researcher (1) defined a new domain categorization model called the Basis for Categorization model, (2) incorporated the Basis for Categorization model to (a) facilitate a first-level categorization of the schema components in the corpus and (b) define the Set of Criteria for Association to cover all types of associations and association agents, (3) defined analysis mechanisms to identify and extract further associations in the corpus in the form of the Set of Criteria for Association, and (4) used the above to improve the expressiveness of the representation and make it suitable for hypothesis testing and prediction in context.

The application of the framework was demonstrated, first, by using it on examples from the CBKR methodology and, second, by applying it to 12 domain representations acquired from multiple sources in the physical-world domain of Criminology. The researcher concluded that the proposed CBKR+ framework provided an organized approach that was more expressive and supported deeper analyses through more diagnostic and probability-based forms of queries.
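A hypothetical sketch of how corpus components might carry a first-level category and typed associations is shown below; all class and field names are invented for illustration and are not taken from the CBKR+ specification:

```python
# Invented illustration: schema components with a category label and
# typed associations, enabling association-aware queries over a corpus.
from dataclasses import dataclass, field

@dataclass
class SchemaComponent:
    name: str
    category: str  # first-level label, in the spirit of a categorization model
    associations: dict = field(default_factory=dict)  # target name -> association type

offender = SchemaComponent("Offender", category="actor")
offense = SchemaComponent("Offense", category="event")
offender.associations["Offense"] = "commits"  # criterion-backed association

# A context query can then follow associations by type:
print([t for t, kind in offender.associations.items() if kind == "commits"])
```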
716

Nonrigid Image Registration Using Physically Based Models

Yi, Zhao January 2006
It is well known that biological structures such as human brains, although they may contain the same global structures, differ in shape, orientation, and fine structure across individuals and at different times. During registration, such variability is usually represented by nonrigid transformations. This research seeks to address this issue by developing physically based models in which transformations are constructed to obey certain physical laws.

In this thesis, a novel registration technique is presented based on the physical behavior of particles. Regarding the image as a particle system without mutual interaction, we simulate the registration process by a set of free particles moving toward the target positions under applied forces. The resulting partial differential equations are a nonlinear hyperbolic system whose solution describes the spatial transformation between the images to be registered. They can be numerically solved using finite difference methods.

This technique extends existing physically based models by completely excluding mutual interaction and highly localizing image deformations. We demonstrate its performance on a variety of images, including two-dimensional and three-dimensional, synthetic and clinical data. Deformed images are obtained with sharper edges and clearer texture at lower computational cost.
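A hedged numerical sketch of the particle idea follows: pixels treated as free, non-interacting particles driven by an image-difference force and integrated with an explicit finite-difference step. The demons-style force law here is a generic choice assumed for illustration, not necessarily the thesis's exact formulation.

```python
# Sketch: one explicit update of a displacement field (u, v) for 2D
# registration. Force law is an assumed, generic demons-style choice.
import numpy as np

def register_step(source, target, u, v, dt=0.1):
    """One finite-difference update; each pixel moves independently."""
    gy, gx = np.gradient(source)   # spatial image gradients
    diff = target - source         # intensity mismatch drives the force
    u = u + dt * diff * gx         # particles move along the gradient
    v = v + dt * diff * gy
    return u, v

def warp(image, u, v):
    """Nearest-neighbour warp of image by displacement (u, v)."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    yi = np.clip((ys + v).round().astype(int), 0, h - 1)
    xi = np.clip((xs + u).round().astype(int), 0, w - 1)
    return image[yi, xi]
```

Because no particle interacts with its neighbours, each pixel's update is independent, which is what permits highly localized deformations at low cost.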
717

Malaria in the Amazon: An Agent-Based Approach to Epidemiological Modeling of Coupled Systems

King, Joshua Michael Lloyd 17 August 2009
The epidemiology of malaria considers a complex set of local interactions amongst host, vector, and environment. A history of reemergence, epidemic transition, and ensuing endemic transmission in Iquitos, Peru reveals an interesting case used to model and explore such interactions. In this region of the Peruvian Amazon, climate change, development initiatives and landscape fragmentation are amongst a unique set of local spatial variables underlying the endemicity of malaria. Traditional population-based approaches lack the ability to resolve the spatial influences of these variables. Presented is a framework for spatially explicit, agent-based modeling of malaria transmission dynamics in Iquitos and surrounding areas. The use of an agent-based model presents a new opportunity to spatially define causal factors and influences of transmission between mosquito vectors and human hosts. In addition to spatial considerations, the ability to model individual decisions of humans can define socio-economic and human-environment interactions related to malaria transmission. Three interacting sub-models representing human decisions, vector dynamics, and environmental factors comprise the model. Feedbacks between the interacting sub-models define individual decisions and ultimately the flexibility that will allow the model to function in a diagnostic capacity. Sensitivity analysis and simulated interactions are used to discuss this diagnostic capability and to build understanding of the physical systems driving local transmission of malaria.
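A minimal sketch of how three interacting sub-models (human hosts, mosquito vectors, and a rainfall-driven environment) can be coupled in an agent-based step loop is given below; all parameters and rules are invented and far simpler than the thesis's model:

```python
# Invented toy ABM: humans, vectors, and an environmental driver.
import random

HUMAN_TO_VECTOR = 0.3   # hypothetical transmission probabilities
VECTOR_TO_HUMAN = 0.2

class Agent:
    def __init__(self, infected=False):
        self.infected = infected

def step(humans, vectors, rainfall):
    # environment sub-model: rainfall scales vector biting activity
    bites_per_vector = 1 + int(rainfall > 0.5)
    for vec in vectors:
        for _ in range(bites_per_vector):
            h = random.choice(humans)  # host-vector contact
            if vec.infected and random.random() < VECTOR_TO_HUMAN:
                h.infected = True
            elif h.infected and random.random() < HUMAN_TO_VECTOR:
                vec.infected = True

humans = [Agent() for _ in range(100)] + [Agent(infected=True)]
vectors = [Agent() for _ in range(50)]
for day in range(30):
    step(humans, vectors, rainfall=random.random())
print(sum(h.infected for h in humans), "infected humans after 30 days")
```

The spatial and behavioural richness of the actual model (landscape fragmentation, human decisions, socio-economic factors) would enter through where agents move and whom they contact, which a population-level model cannot resolve.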
718

Effectiveness of a Computer-Based Program for Improving the Reading Performance of Deaf Students

Moore, Kenneth L 11 May 2012
The purpose of this study was to determine whether the reading component of Ticket to Read®, a computer-based educational program developed to improve hearing students' fluency, could improve deaf students' fluency and thereby their comprehension. Fluency, the ability to read text accurately and automatically, forms a bridge from decoding to comprehension. This research is significant because the median reading level of deaf students who graduate from high school has remained around a fourth-grade equivalent for the past thirty years, and there is a paucity of research examining evidence-based practices to improve the reading performance of deaf students. The 27 subjects in this study came from an urban day school for the deaf. A dependent t-test was conducted on the subjects' scores on a pretreatment and posttreatment reading assessment after nine weeks of treatment. No significant difference from pretreatment to posttreatment assessment was found, t(26) = 1.813, p > .05. In addition, an exploratory analysis using treatment and control groups was conducted with a quasi-experimental design based on mean gain scores from a pretreatment and posttreatment reading assessment. Twenty-seven pairs of subjects were matched on ethnicity, gender, and grade level to determine the main effect of treatment, the interaction effect of treatment and gender, and the interaction effect of treatment and grade level. No significant difference was found for the main effect of treatment, F(1,42) = 1.989, p > .05. Statistical significance was not found for the interaction between treatment and gender, F(1,50) = 1.209, p > .05, nor for the interaction between treatment and grade level, F(2,48) = .208, p > .05. The results of this study have implications for the field of deaf education and are congruent with the findings of similar studies involving Repeated Readings to influence comprehension. Although the significance tests showed no improvement on the reading assessment after the intervention, the direction and magnitude of the mean-difference effect sizes for students in the treatment group support the need for further research evaluating computer-based educational programs as strategies to improve deaf students' reading performance.
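The core analysis form, a dependent (paired) t-test on pre/post scores, can be sketched as follows; the scores below are invented placeholders, not the study's data (which yielded t(26) = 1.813):

```python
# Sketch of a dependent t-test on paired pre/post reading scores.
# Scores are hypothetical, NOT the study's data.
from scipy.stats import ttest_rel

pre  = [12, 15, 9, 20, 14, 11, 18, 16, 13, 10]   # hypothetical scores
post = [13, 15, 10, 22, 13, 12, 19, 18, 14, 11]

t_stat, p_value = ttest_rel(post, pre)
print(f"t({len(pre) - 1}) = {t_stat:.3f}, p = {p_value:.3f}")
```

The paired design is what makes each student their own control; the degrees of freedom, n minus 1, explain the reported t(26) for 27 subjects.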
719

Design of Automated Generation of Residual Generators for Diagnosis of Dynamic Systems

Duhan, Isac January 2011
Diagnosis and supervision of technical systems are used to detect faults when they occur. To make a diagnosis, tests based on residuals can be used. Residuals compare observations of the system with a model of the system in order to detect inconsistencies. There are often many different types of faults that affect the state of the system. These states are modeled as fault modes, which differ in the faults present in the model. For each fault mode, a different set of model equations is used to describe the behaviour of the real system. When doing fault diagnosis in real time it is useful, and sometimes vital, to be able to change the fault mode of the model when a fault suddenly occurs in the real system. If multiple faults can occur, the number of combinations of faults is often so large, even for relatively small systems, that residuals for all fault modes cannot be prepared in advance. To handle this problem, the residuals are generated when they are needed. The main task in this thesis has been to investigate how residuals can be automatically generated, given a fault mode with a corresponding model. An algorithm has been developed, and to verify the algorithm a model of a satellite power system, called ADAPT-Lite, has been used. The algorithm has been made in two versions: one focuses on numerical calculations and the other allows algebraic calculations. A numerical algorithm is preferred in an automated process because of generally shorter calculation times and the possibility to apply it to systems that cannot be solved algebraically, but the algebraic algorithm gives slightly more accurate results in some cases.
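What a generated residual evaluates can be sketched in a few lines: the mismatch between a sensor observation and the prediction of the currently assumed fault mode's model equations. The resistor equation below is an invented stand-in, not an actual ADAPT-Lite component model:

```python
# Hedged sketch of a residual for one fault mode. The component
# equation is hypothetical, not taken from ADAPT-Lite.
def residual_no_fault(v_measured, i_measured, resistance):
    # Model equation under the no-fault mode: V = R * I
    return v_measured - resistance * i_measured

# A residual near zero is consistent with the assumed fault mode;
# a large residual indicates that mode's equations do not hold.
r = residual_no_fault(v_measured=11.8, i_measured=1.0, resistance=12.0)
print(f"residual = {r:+.2f}")  # -0.20: small, consistent with no-fault here
```

On-demand generation means assembling such expressions automatically from whichever subset of model equations the requested fault mode activates, rather than precomputing one residual per fault combination.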
720

A framework for the analysis of failure behaviors in component-based model-driven development of dependable systems

Javed, Muhammad Atif, Faiz UL Muram January 2011
Currently, the development of high-integrity embedded component-based software systems is not supported by well-integrated means allowing for quality evaluation and design support within a development process. Quality, especially dependability, is very important for such systems. The CHESS (Composition with Guarantees for High-integrity Embedded Software Components Assembly) project aims at providing a new systems development methodology to capture extra-functional concerns and to extend Model Driven Engineering industrial practices and technology approaches to specifically address the architectural structure, the interactions, and the behavior of system components, while guaranteeing their correctness and the level of service at run time. The CHESS methodology is expected to be supported by a tool-set consisting of a set of plug-ins integrated within the Eclipse IDE. Within the framework of the CHESS project, this thesis addresses the lack of well-integrated means for quality evaluation and proposes an integrated framework to evaluate the dependability of high-integrity embedded systems. After a survey of various failure behavior analysis techniques, a specific technique, called Failure Propagation and Transformation Calculus (FPTC), is selected and a plug-in, called CHESS-FPTC, is developed within the CHESS tool-set. The FPTC technique allows users to calculate the failure behavior of a system from the failure behavior of its building components. Therefore, to fully support FPTC, the CHESS-FPTC plug-in allows users to model the failure behavior of the building components, perform the analysis automatically, and get the analysis results back into their initial models. A case study on the AAL2 Signaling Protocol is presented to illustrate and evaluate the CHESS-FPTC framework. / CHESS Project - http://chess-project.ning.com/
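The FPTC idea, components transforming incoming failure classes into outgoing ones, with system-level behaviour obtained by propagating these along component connections, can be sketched as follows; the rule tables and component names are invented for illustration:

```python
# Hedged sketch of FPTC-style propagation. Rule tables map an incoming
# failure class to an outgoing one ("*" means no failure); names and
# rules are hypothetical, not from the CHESS-FPTC plug-in.
sensor     = {"*": "*", "late": "late"}
filter_    = {"*": "*", "late": "*", "value": "value"}   # masks lateness
controller = {"*": "*", "value": "omission"}

pipeline = [sensor, filter_, controller]

def propagate(failure, pipeline):
    for component in pipeline:
        failure = component.get(failure, failure)  # pass through by default
    return failure

for f in ["*", "late", "value"]:
    print(f"input {f!r:9} -> system output {propagate(f, pipeline)!r}")
```

Running the rules to a stable assignment over the component graph is what lets system-level failure behaviour be calculated from component-level annotations, which is the analysis the plug-in automates.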
