21

An extension of the PPSZ Algorithm to Infinite-Domain Constraint Satisfaction Problems / En utökning av PPSZ Algoritmen till Oändlig-Domän Constraint Satisfaction Problem

Einarson, Carl January 2017 (has links)
The PPSZ algorithm (Paturi et al., FOCS 1998) is the fastest known algorithm for solving k-SAT when k >= 4. Hertli et al. recently extended the algorithm to solve (d,k)-Clause Satisfaction problems ((d,k)-ClSP), for which it is the fastest known algorithm for all k >= 3 (Hertli et al., CP 2016). We analyze their algorithm and extend it to solve problems over an infinite domain. More specifically, we show how the extended algorithm can solve problems that have an infinite domain but where we can, for each problem instance, find a finite subset of the domain with the following properties: if the instance has a solution, then it has a solution using only values from this subset, and the size of the subset is polynomial in the size of the instance. We show numerically that our algorithm is the fastest known for problems over bounded disjunction languages for some values of k <= 500, and we use the branching-time temporal language, which is a bounded disjunction language, to show how to transform a specific problem to (d,k)-ClSP. We also consider Allen's interval algebra but conclude that a faster algorithm for this problem already exists.
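The core loop of PPSZ is easy to sketch: repeatedly draw a uniformly random permutation of the variables, process them in that order, fix a variable whenever its value is forced under the current partial assignment, and guess uniformly at random otherwise. The Python sketch below illustrates this for plain k-SAT, using unit propagation as a deliberately weak forcing oracle; it is not the extended (d,k)-ClSP algorithm of the thesis, and the restart count is a placeholder rather than the analysed bound.

```python
import random

def ppsz_sketch(clauses, n_vars, restarts=10000):
    """Toy PPSZ-style search for k-SAT.

    clauses: list of clauses; a clause is a list of non-zero ints,
             where +v / -v means variable v is true / false.
    Returns a satisfying assignment {var: bool} or None.
    """
    def value(lit, a):
        v = abs(lit)
        if v not in a:
            return None
        return a[v] if lit > 0 else not a[v]

    def forced(var, a):
        # Weak forcing oracle: a unit clause whose only unassigned
        # literal mentions `var` forces its value (conflicts ignored here).
        for clause in clauses:
            if any(value(l, a) is True for l in clause):
                continue
            unassigned = [l for l in clause if value(l, a) is None]
            if len(unassigned) == 1 and abs(unassigned[0]) == var:
                return unassigned[0] > 0
        return None

    for _ in range(restarts):
        a = {}
        order = list(range(1, n_vars + 1))
        random.shuffle(order)              # random variable permutation
        for var in order:
            f = forced(var, a)
            a[var] = f if f is not None else random.random() < 0.5
        if all(any(value(l, a) for l in c) for c in clauses):
            return a
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(ppsz_sketch([[1, 2], [-1, 3], [-2, -3]], 3))
```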
22

Upper Bounds on the Time Complexity of Temporal CSPs

Stockman, Peter January 2016 (has links)
The temporal constraint satisfaction problem (CSP) offers a formalized way to reason about the order in which tasks should be accomplished. Using it we can model a wide range of problems, from scheduling to AI. In this thesis we present two algorithms, Algorithm A and Algorithm B, to solve temporal CSPs with a focus on improving the worst-case time complexity. The first algorithm solves temporal CSPs by an exhaustive search over all weak orderings. Its time complexity is in O((0.53n)^n · poly(n)), thus within a polynomial factor of the corresponding Ordered Bell number. We also show that it can solve CSP in Allen's algebra within a polynomial factor of the corresponding number of relations between intervals on a line, thus in O((0.75n)^{2n} · poly(n)) time. The second algorithm also solves temporal CSPs, but where the constraints have a bounded number of disjuncts. It assumes some order and then makes a backtracking search guided by the constraints. In this case the time complexity is in O(n! · poly(n)) = O((0.37n)^n · poly(n)). Finally we show that this also improves the time complexity of CSP in Allen's algebra to O((2n)! · poly(n)) = O((0.74n)^{2n} · poly(n)).
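Algorithm A's underlying idea, exhaustive search over weak orderings, can be sketched directly: a weak ordering of the variables is just a rank function, and a basic temporal relation depends only on how ranks compare. The brute-force sketch below tries rank vectors over a hypothetical point-constraint representation; it is the naive Ordered-Bell-style enumeration, not the improved O((0.53n)^n · poly(n)) procedure of the thesis.

```python
from itertools import product

def solve_point_csp(n, constraints):
    """Brute-force a temporal (point) CSP by trying rank vectors.

    constraints: list of (i, rel, j) with rel in {'<', '<=', '=', '!='},
    interpreted over the rationals; a rank vector r encodes the weak
    ordering x_i < x_j  iff  r[i] < r[j].
    Returns a satisfying rank vector or None.
    """
    rel_check = {
        '<':  lambda a, b: a < b,
        '<=': lambda a, b: a <= b,
        '=':  lambda a, b: a == b,
        '!=': lambda a, b: a != b,
    }
    for ranks in product(range(n), repeat=n):   # covers every weak ordering
        if all(rel_check[rel](ranks[i], ranks[j]) for i, rel, j in constraints):
            return ranks
    return None

# x0 < x1, x1 <= x2, x2 != x0  -- satisfiable, e.g. ranks (0, 1, 1)
print(solve_point_csp(3, [(0, '<', 1), (1, '<=', 2), (2, '!=', 0)]))
```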
23

Solving Temporal CSPs via Enumeration and SAT Compilation

Eriksson, Leif January 2019 (has links)
The constraint satisfaction problem (CSP) is a powerful framework used in theoretical computer science for formulating a multitude of problems. The CSP over a constraint language Γ (CSP(Γ)) is the decision problem of verifying whether a set of constraints based on the relations in Γ admits a satisfying assignment or not. Temporal CSPs are a special subclass of CSPs frequently encountered in AI. Here, the relations are first-order definable in the structure (Q;<), i.e. the rationals with the usual order. These problems have previously often been solved by either enumeration or SAT compilation. We study a restriction of temporal CSPs where the constraint language is limited to logical disjunctions of <-, =-, ≠- and ≤-relations, and where each constraint contains at most k such basic relations (CSP({<,=,≠,≤}∨k)). Every temporal CSP with a finite constraint language Γ is polynomial-time reducible to CSP({<,=,≠,≤}∨k), where k depends only on Γ. As this reduction does not increase the number of variables, the time complexity of CSP(Γ) is never worse than that of CSP({<,=,≠,≤}∨k). This makes the complexity of CSP({<,=,≠,≤}∨k) interesting to study. We develop algorithms combining enumeration and SAT compilation to solve CSP({<,=,≠,≤}∨k), and study the asymptotic behaviour of these algorithms for different classes. Our results show that all finite constraint languages Γ first-order definable over (Q;<) are solvable in O*(((1/(e ln 2) − ε_k)n)^n) time for some ε_k > 0 dependent on Γ. This is strictly better than O*((n/(e ln 2))^n), i.e. O*((0.5307n)^n), achieved by enumeration algorithms. Some examples of upper bounds on time complexity achieved in the thesis are CSP({<}∨2) in O*((0.1839n)^n) time, CSP({<,=,≤}∨2) in O*((0.2654n)^n) time, CSP({<,=,≠}∨3) in O*((0.4725n)^n) time and CSP({<,=,≠,≤}∨3) in O*((0.5067n)^n) time. For CSP({<}∨2) this should be compared to the bound O*((0.3679n)^n) from previously known enumeration algorithms.
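To make the SAT-compilation side concrete, one standard encoding introduces a Boolean variable leq(i, j) meaning x_i ≤ x_j, adds totality and transitivity axioms so that any model describes a weak ordering of the variables, and turns each disjunctive constraint into CNF over these variables. The generator below is an illustrative sketch of that style of encoding with the naive cubic set of transitivity clauses; it is an assumption-laden stand-in, not the thesis's compilation.

```python
from itertools import combinations, product

def compile_to_cnf(n, constraints):
    """Compile a temporal point CSP over (Q;<) into CNF (lists of ints).

    constraints: list of disjunctions, each a list of atoms (i, rel, j)
    with rel in {'<', '<=', '=', '!='}.  Boolean variable leq(i, j)
    means x_i <= x_j, using DIMACS-style signed integers.
    """
    def leq(i, j):
        return i * n + j + 1                      # 1-based variable index

    def atom_literals(i, rel, j):
        # CNF of one basic relation over the leq variables.
        return {'<=': [[leq(i, j)]],
                '<':  [[leq(i, j)], [-leq(j, i)]],
                '=':  [[leq(i, j)], [leq(j, i)]],
                '!=': [[-leq(i, j), -leq(j, i)]]}[rel]

    clauses = []
    # Any model must describe a weak ordering: totality + transitivity of <=.
    for i, j in combinations(range(n), 2):
        clauses.append([leq(i, j), leq(j, i)])
    for i, j, k in product(range(n), repeat=3):
        if len({i, j, k}) == 3:
            clauses.append([-leq(i, j), -leq(j, k), leq(i, k)])
    # Distribute each disjunction of atoms into CNF.
    for disjunction in constraints:
        parts = [atom_literals(*atom) for atom in disjunction]
        for pick in product(*parts):
            clauses.append([lit for clause in pick for lit in clause])
    return clauses

# (x0 < x1 or x1 < x0) and (x0 = x2), over 3 variables
cnf = compile_to_cnf(3, [[(0, '<', 1), (1, '<', 0)], [(0, '=', 2)]])
print(len(cnf), "clauses; first few:", cnf[:3])
```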
24

Klasifikační systémy v nemocnicích / Classification systems in the hospitals

Saliba, Walaa January 2013 (has links)
The aim of this thesis was to study the classification systems used in hospitals, which primarily track the economic requirements of hospitalization. Subsequently, a web interface was designed and implemented in the Caché Server Pages (CSP) environment, providing access to the CLINICOM database. Using the web interface, it is possible to classify patients into MDC (Major Diagnostic Category) classes, archive data, compute DRG values and perform other selected tasks. The web application could be used by the technical and administrative staff of hospitals and clinics as a simple aid in their work, or as a teaching aid in biomedical study programmes for courses on healthcare information and classification systems.
25

Kompilační přístupy pro automatické plánování / Compilation-based Approaches for Automated Planning

Pantůčková, Kristýna January 2020 (has links)
One of the possible approaches to automated planning is compilation to satisfiability or constraint satisfaction. Compilation makes it possible to take advantage of the advancement of SAT and CSP solvers. In this thesis, we implement three of the encodings recently proposed for compiling planning problems: the model TCPP, the R2 ∃-Step encoding and the Reinforced Encoding. All these approaches search for parallel plans; however, since they use different definitions of a parallel step and different variables and constraints, we decided to compare their performance on standard benchmarks from international planning competitions. As the R2 ∃-Step encoding was not suitable for our implementation, we present a modified version of this encoding with a reduced number of variables and constraints. We also demonstrate how different definitions of a parallel step in the Reinforced Encoding affect the performance. Furthermore, we suggest redundant constraints extending these encodings. Although they did not prove to be beneficial in general, they could slightly improve the performance on some benchmarks, especially in the R2 ∃-Step encoding.
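For readers unfamiliar with planning-as-SAT, the flavour of such compilations can be shown with the classic sequential (one action per step) encoding: Boolean variables for each fluent and each action at each time step, with clauses for the initial state, goal, preconditions, effects and frame axioms. The sketch below implements that baseline encoding; it is not TCPP, the R2 ∃-Step encoding or the Reinforced Encoding, whose parallel-step semantics are precisely what the thesis compares.

```python
from itertools import count

def plan_to_cnf(fluents, actions, init, goal, horizon):
    """Classic sequential SATPlan-style encoding (illustrative sketch).

    actions: dict name -> (preconditions, add effects, delete effects),
    each a set of fluent names.  Returns (clauses, variable index).
    """
    ids = count(1)
    index = {}
    def var(kind, name, t):
        return index.setdefault((kind, name, t), next(ids))

    clauses = []
    # Initial state (time 0) and goal (time `horizon`) as unit clauses.
    for f in fluents:
        clauses.append([var('f', f, 0) if f in init else -var('f', f, 0)])
    for f in goal:
        clauses.append([var('f', f, horizon)])

    for t in range(horizon):
        for name, (pre, add, dele) in actions.items():
            a = var('a', name, t)
            for f in pre:                          # preconditions
                clauses.append([-a, var('f', f, t)])
            for f in add:                          # positive effects
                clauses.append([-a, var('f', f, t + 1)])
            for f in dele:                         # negative effects
                clauses.append([-a, -var('f', f, t + 1)])
        # At most one action per step (sequential plans).
        acts = [var('a', nm, t) for nm in actions]
        for i in range(len(acts)):
            for j in range(i + 1, len(acts)):
                clauses.append([-acts[i], -acts[j]])
        # Frame axioms: a fluent changes only if some action causes the change.
        for f in fluents:
            adders = [var('a', nm, t) for nm, (_, add, _) in actions.items() if f in add]
            deleters = [var('a', nm, t) for nm, (_, _, d) in actions.items() if f in d]
            clauses.append([var('f', f, t), -var('f', f, t + 1)] + adders)
            clauses.append([-var('f', f, t), var('f', f, t + 1)] + deleters)
    return clauses, index

# Toy domain: switch a lamp on.
cls, idx = plan_to_cnf({'on'}, {'flip': (set(), {'on'}, set())},
                       init=set(), goal={'on'}, horizon=1)
print(len(cls), "clauses over", len(idx), "variables")
```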
26

Modular Hybridization of Solar Thermal Power Plants For Developing Nations

Darwish, Mazen January 2012 (has links)
The current energy scenario in developing nations with an abundant solar resource (e.g. the southern Mediterranean countries, the Middle East & North Africa) relies mainly on fossil fuels to supply the increasing energy demand. Although this long-adopted pattern ensures electricity availability on demand at all times through the least-cost proven technology, it is highly unsustainable due to its drastic impact on resource depletion, environmental emissions and electricity prices. Among renewable energy technologies, hybrid solar thermal power plants have the potential to replace the central utility model of conventional power plants: integrating solar thermal technologies into existing conventional power plants offers the opportunity to combine low-cost reliable power with carbon emission reductions. A literature review of current concentrating solar power (CSP) technologies and their suitability for integration into conventional power cycles was carried out; the best option was found to be the so-called integrated solar combined cycle system (ISCCS), in which the plant is built and operated like a normal combined cycle, with a solar circuit consisting of a central tower receiver and heliostat field adding heat to the bottoming Rankine cycle. A complete model of the cycle was developed in the TRNSYS simulation software and the Matlab environment, and yearly satellite solar insolation data were used to study the effect of integrating solar power into the cycle throughout the year. A multi-objective thermo-economic optimization analysis was conducted in order to identify a set of optimum design options. The optimization showed that the efficiency of the combined cycle can be increased, resulting in a levelized electricity cost in the range of 10-14 US cents/kWh. The achievable annual solar share was found to be limited to around 7%. The results of the study indicate that ISCCS offers the advantages of higher efficiency and low-cost reliable power, and at the same time sends a green message by reducing the environmental impact of our existing power plant systems.
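The levelized electricity cost quoted above follows the standard LCOE definition: total discounted lifetime costs divided by total discounted lifetime electricity production. The helper below evaluates that formula; the plant figures passed to it are placeholders for illustration, not the thesis's TRNSYS/Matlab results.

```python
def lcoe(capex, annual_opex, annual_kwh, discount_rate, lifetime_years):
    """Levelized cost of electricity, in the same currency unit per kWh.

    LCOE = sum_t (cost_t / (1+r)^t) / sum_t (energy_t / (1+r)^t),
    with the investment in year 0 and constant O&M and yearly output.
    """
    disc = [(1 + discount_rate) ** -t for t in range(1, lifetime_years + 1)]
    costs = capex + annual_opex * sum(disc)
    energy = annual_kwh * sum(disc)
    return costs / energy

# Placeholder numbers for a hybrid plant (illustrative only, not thesis data):
print(f"{100 * lcoe(2.0e8, 4.0e6, 2.5e8, 0.08, 25):.1f} US cents/kWh")
```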
27

Assembly Yield Model for Area Array Packages

Sharma, Sanjay 05 April 2000 (has links)
The traditional design of printed circuit board assembly focuses on finding a set of parameter values (that characterizes the process) such that the desired circuit performance specifications are met. It is usually assumed that this set of values can be accurately realized when the circuit or the assembly is built. Unfortunately, this assumption is not realistic for assemblies produced on a mass scale. Fluctuations in manufacturing processes cause the actual values of the parameters to deviate from their nominal values. This variability in design parameters, in turn, causes defects in the functionality of the assemblies. The ratio of acceptable assemblies to total assemblies produced constitutes the yield of the assembly process. Assembly yields of area array packages depend heavily on the design of the board as well as on package and process parameters. The economics of IC technology is such that the maximization of yield, rather than the optimization of performance, has become the topic of prime importance. The projected value of yield has always been a factor for consideration in the advancement of integrated circuit technology. Due to the considerable reduction in package size, minimum allowable tolerances and tight parameter variations, electronic assemblies have to be simulated, characterized and tested before translating them to a production facility. Also, since defect levels are measured in parts per million, it is impractical to build millions of assemblies for the purpose of identifying the best parameters. A mathematical model that relates design parameters and their variability to assembly yield can help in the effective estimation of the yield. This research work led to the development of a mathematical model that can incorporate variability in the package, board and assembly related parameters, and to the construction of an effective methodology to predict the assembly yield of area array packages. The assembly yield predictions of the model are based on the characteristics of the input variables (whether they follow a normal, empirical or experimental distribution). By incorporating the tail portion of the parameter distribution (up to ±6 standard deviations of a normal distribution), a higher level of accuracy in assembly yield prediction is achieved. An estimate of the interaction of parameters is obtained in terms of the expected number of defective joints and/or components and a degree of variability around this expected value. As an implementation of the mathematical model, a computer program is developed. The software is user-friendly and prompts the user for information on the input variables; it predicts the yield as the expected number of defective joints per million and the expected number of defective components (assemblies) per million. The software can also be used to predict the number of defects for a user-specified number of components (fewer or more than one million assemblies). The area array assembly yield model can be used to determine the impact of process parameter variations on assembly yields. The model can also be used to assess the manufacturability of a new design, represent the capability of an assembly line for benchmarking purposes, help modify designs for better yield, and define the minimum acceptable manufacturability standards and tolerances for components, boards and designs. / Master of Science
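The idea of turning parameter distributions into defects per million can be illustrated with a Monte Carlo stand-in: sample the relevant parameters for each joint, test them against an assumed tolerance window, and scale the observed failure rates to parts per million. The parameter spreads, tolerance and names below are illustrative assumptions; the thesis's model is analytical rather than simulation-based.

```python
import random

def defects_per_million(joints_per_assembly, trials=20_000, seed=1):
    """Monte Carlo sketch of an area-array assembly yield estimate.

    Each joint 'passes' if the combined placement offset stays within an
    assumed tolerance window; a single failed joint fails the assembly.
    """
    rng = random.Random(seed)
    tol = 0.10                         # assumed allowable offset (mm)
    bad_joints = bad_assemblies = 0
    for _ in range(trials):
        assembly_ok = True
        for _ in range(joints_per_assembly):
            # Assumed spreads: board pad and package bump placement (mm).
            offset = rng.gauss(0.0, 0.02) + rng.gauss(0.0, 0.015)
            if abs(offset) > tol:
                bad_joints += 1
                assembly_ok = False
        if not assembly_ok:
            bad_assemblies += 1
    joints = trials * joints_per_assembly
    return 1e6 * bad_joints / joints, 1e6 * bad_assemblies / trials

joint_ppm, assembly_ppm = defects_per_million(joints_per_assembly=256)
print(f"{joint_ppm:.0f} defective joints ppm, {assembly_ppm:.0f} defective assemblies ppm")
```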
28

Lönar sig socialt ansvarstagande? : En kvantitativ studie på 211 företag noterade på Stockholmsbörsen / Does corporate responsibility pay off? : A quantitative study of 211 companies listed on the Stockholm Stock Exchange

Boman, Josefin, Lindström, Patrik January 2016 (has links)
Aim: The Swedish Ministry of Justice has submitted a legislative proposal that could require about 2000 Swedish companies to report on Corporate Social Responsibility (CSR), something that is today voluntary. The main purpose of the study is to investigate whether there is any relationship between reported CSR and corporate financial performance. In this area of research, Corporate Social Performance (CSP) has been used to quantify reported CSR. The relationship is studied partly based on an aggregate measure of CSP, and partly from the two dimensions environment and human rights. Method: The study is based on a positivist research philosophy and adopts a hypothetico-deductive approach. The strategy adopted is quantitative, using a cross-sectional design and secondary data only. The study's sample amounts to 211 companies listed on Nasdaq OMX Stockholm. The data used as a measure of CSP is taken from Folksam's index for corporate responsibility for 2013. The measures of financial performance are taken from the annual reports for the financial year 2013, obtained through the database Retriever and the companies' own websites. Furthermore, all data were processed in SPSS and analyzed using multiple regression analysis. Results & conclusions: The study showed that there was a positive relationship between CSP and corporate financial performance. The study provided further support for a relationship both from the overall measure of CSP and from the environmental dimension. CSR-related work can therefore be regarded as a value-creating strategy in terms of a company's financial performance, particularly with respect to environment-related work. Suggestions for further research: This study has a cross-sectional design, which is sometimes criticized as insufficient for studying the relationship between CSP and financial performance. One suggestion we leave for further studies is therefore to conduct a similar study over time in order to identify changes. Furthermore, some researchers believe that it is not enough to measure a company's financial performance only through accounting-based measures; another suggestion is therefore to investigate the same relationship while also including market-based measures. In chapter 5.4 more suggestions for further research are presented. Contributions of the thesis: This study contributes further knowledge to the varying results on the relationship between companies' CSP and financial performance. Furthermore, the study's practical contribution is to give a better understanding of how CSP affects the financial performance of companies in a Swedish context.
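The core of the method, regressing an accounting-based performance measure on a CSP score plus controls via ordinary least squares, can be illustrated in a few lines. The variable names and the synthetic data below are purely hypothetical; the study itself used Folksam's 2013 index and annual-report figures for the 211 sampled firms.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 211                                    # firms, matching the sample size

# Synthetic stand-ins for the study's variables (illustrative only).
csp_score = rng.uniform(0, 7, n)           # e.g. aggregate CSP index
firm_size = rng.normal(10, 2, n)           # e.g. log of total assets (control)
roa = 0.8 * csp_score + 0.3 * firm_size + rng.normal(0, 2, n)  # outcome

# OLS: roa = b0 + b1*csp_score + b2*firm_size + error
X = np.column_stack([np.ones(n), csp_score, firm_size])
beta, *_ = np.linalg.lstsq(X, roa, rcond=None)
residuals = roa - X @ beta
r2 = 1 - residuals.var() / roa.var()
print("intercept, CSP coef, size coef:", np.round(beta, 3), " R^2:", round(r2, 3))
```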
29

Formalisations and applications of business process modelling notation

Wong, Peter Yung Ho January 2011 (has links)
Business Process Modelling Notation (BPMN) is a standardised diagram notation for modelling interactive workflow processes graphically at the design stage. The primary objective of this thesis is to provide a framework for precise specifications and formal verifications of workflow processes modelled as BPMN diagrams. We provide two behavioural semantics for BPMN in the process algebra Communicating Sequential Processes (CSP). We apply existing CSP refinement orderings to both the refinement of business process diagrams and the verification of behavioural compatibility of business process collaborations. The first semantic model is an untimed model, focusing on the control flow and communication of business processes. The second semantic model extends the first one to capture the timing aspect of behaviour. We also consider the applications of the semantic models. The secondary objective of this thesis is to apply BPMN and the semantic models to reason about long running empirical studies (e.g. laboratory experiments, clinical trials). We introduce a declarative workflow model Empiricol for recording trials and experiments precisely, and define bidirectional transformation functions between BPMN and Empiricol. Using the transformation functions, we make graphical specification, simulation, automation and verification of trials and experiments possible. We provide two case studies on the applications of BPMN’s formalisations.
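In the traces model of CSP, a process Q refines a specification P exactly when every finite trace of Q is also a trace of P. For finite labelled transition systems this can be checked by a joint subset construction over the two systems. The sketch below does exactly that; it is only an illustration of trace refinement on explicit transition systems, not of FDR or of the thesis's BPMN semantics, and the LTS encoding is an assumption made for the example.

```python
from collections import deque

def trace_refines(spec, impl, spec_init, impl_init):
    """Check spec ⊑_T impl: every trace of impl is also a trace of spec.

    spec, impl: dicts state -> list of (event, next_state) transitions.
    Works on finite labelled transition systems without internal events.
    """
    def after(states, event, lts):
        return frozenset(t for s in states for (e, t) in lts.get(s, []) if e == event)

    # Pair each reachable impl state-set with the spec state-set reachable
    # by the same trace; refinement fails if impl offers an event spec cannot.
    start = (frozenset([spec_init]), frozenset([impl_init]))
    seen, queue = {start}, deque([start])
    while queue:
        spec_states, impl_states = queue.popleft()
        events = {e for s in impl_states for (e, _) in impl.get(s, [])}
        for e in events:
            spec_next = after(spec_states, e, spec)
            if not spec_next:
                return False, e            # impl has a trace spec lacks
            nxt = (spec_next, after(impl_states, e, impl))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True, None

# SPEC = coin -> (coffee -> STOP [] tea -> STOP); IMPL = coin -> coffee -> STOP
SPEC = {0: [('coin', 1)], 1: [('coffee', 2), ('tea', 2)], 2: []}
IMPL = {0: [('coin', 1)], 1: [('coffee', 2)], 2: []}
print(trace_refines(SPEC, IMPL, 0, 0))     # (True, None): IMPL trace-refines SPEC
```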
30

Reconstrução tomográfica de imagens SPECT a partir de poucos dados utilizando variação total / Tomographic reconstruction of SPECT images from few data using total variation

Araujo, João Guilherme Vicente de 13 April 2017 (has links)
In order to perform attenuation correction in single photon emission computed tomography (SPECT), we need to measure and reconstruct the attenuation coefficient map using a transmission tomography scan, performed either sequentially or simultaneously with the emission scan. This approach increases the cost of producing the image and, in some cases, considerably increases the scanning time, so that patient immobility becomes an important factor for a successful reconstruction. An alternative that dispenses with the transmission scan is to reconstruct both the activity image and the attenuation map from the emission scan data alone. Following this approach, we propose a method based on Censor's algorithm, whose objective is to solve a mixed convex-concave feasibility problem in order to reconstruct all images simultaneously. The proposed method is formulated as a minimization problem, where the objective function is given by the total variation of the images subject to Censor's mixed feasibility. The tests were performed on simulated images, and the results obtained in the absence of noise, even for small amounts of data, were satisfactory. In the presence of Poisson-distributed noise the method was unstable, and the choice of tolerances in this case remains an open problem.
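A much simpler stand-in for total-variation-based reconstruction is unconstrained TV-regularized least squares, min_x ½‖Ax − b‖² + λ·TV(x), solved here by plain subgradient descent on a tiny 1-D signal. The sketch only illustrates total variation acting as a regularizer; it is not the thesis's mixed concave-convex feasibility formulation, it does not reconstruct activity and attenuation simultaneously, and the measurement matrix is a random placeholder.

```python
import numpy as np

def tv_reconstruct(A, b, lam=0.1, step=1e-3, iters=5000):
    """Minimize 0.5*||Ax - b||^2 + lam * sum|x_{i+1} - x_i| by subgradient descent."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad_data = A.T @ (A @ x - b)
        d = np.sign(np.diff(x))            # subgradient of the 1-D total variation
        grad_tv = np.concatenate(([0.0], d)) - np.concatenate((d, [0.0]))
        x -= step * (grad_data + lam * grad_tv)
    return x

# Tiny synthetic example: piecewise-constant signal, few noisy "projections".
rng = np.random.default_rng(0)
x_true = np.concatenate((np.zeros(20), np.ones(20), np.zeros(20)))
A = rng.normal(size=(25, 60))              # underdetermined measurement matrix
b = A @ x_true + 0.01 * rng.normal(size=25)
x_rec = tv_reconstruct(A, b, lam=0.5)
print("reconstruction error:", round(float(np.linalg.norm(x_rec - x_true)), 3))
```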
