111
Les différences entre la correction de textes manuscrits et la correction de textes dactylographiés et imprimés par ordinateur [The differences between the correction of handwritten texts and the correction of texts typed and printed by computer]. Godin, Caroline. January 2009.
Master's thesis digitised by the Division de la gestion de documents et des archives de l'Université de Montréal.
112
L'apport des correcticiels pour la correction de textes d'élèves du secondaire [The contribution of correction software to the correction of texts by secondary school students]. Mireault, Marie-Hélène. January 2009.
Master's thesis digitised by the Division de la gestion de documents et des archives de l'Université de Montréal.
113
Vad driver de svenska småhuspriserna? [What drives Swedish single-family house prices?] Bergendahl, Robin. January 2014.
The purpose of this study is to investigate which factors affect Swedish prices of single-family houses and, if so, how and to what extent. Building on earlier studies, which consistently identify the mortgage rate and household disposable income as the factors with the clearest influence on Swedish real estate prices, this study extends the set of explanatory variables with the help of a stock-flow model. Time-series data for 1993-2013 are tested for unit roots and cointegration before being estimated in a regression framed as an error correction model, with the aim of capturing both a short-run and a long-run relationship. The results confirm the repo rate and disposable income as two key factors in explaining the long-run relationship with Swedish single-family house prices, together with further factors such as GDP, household debt and unemployment. In the short run, the historical development of house prices is a key determinant, but disposable income, the interest rate, GDP and household debt are also important in explaining prices. One conclusion is that households' capacity for increased consumption when incomes rise is reflected in house prices; at the same time, a low interest rate means that more households than ever can afford to borrow in a market with an already very limited housing supply.
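As background for the method named in the abstract, a generic error correction model for house prices can be sketched as below; the regressors and symbols are illustrative, not taken from the thesis.

    \text{Long run: } \ln P_t = \beta_0 + \beta_1 \ln Y_t + \beta_2 r_t + u_t
    \text{Short run: } \Delta \ln P_t = \gamma_0 + \alpha\, u_{t-1} + \sum_i \gamma_i \Delta x_{i,t} + \varepsilon_t

Here P_t is the house price, Y_t disposable income, r_t the interest rate and x_{i,t} the remaining short-run regressors; a significantly negative adjustment coefficient \alpha indicates that deviations from the long-run relation are gradually corrected.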
114
Exploiting the implicit error correcting ability of networks that use random network coding / by Suné von Solms. Von Solms, Suné. January 2009.
In this dissertation, we developed a method that uses the redundant information implicitly generated inside a random network coding network to apply error correction to the transmitted message. The obtained results show that the developed implicit error correcting method can reduce the effect of errors in a random network coding network without the addition of redundant information at the source node. This method presents numerous advantages compared to the documented concatenated error correction methods.

We found that various error correction schemes can be implemented without adding redundancy at the source nodes. The decoding ability of this method is dependent on the network characteristics. We found that large networks with a high level of interconnectivity yield more redundant information allowing more advanced error correction schemes to be implemented.

Network coding networks are prone to error propagation. We present the results of the effect of link error probability on our scheme and show that our scheme outperforms concatenated error correction schemes for low link error probability. / Thesis (M.Ing. (Computer Engineering))--North-West University, Potchefstroom Campus, 2010.
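The dissertation's own scheme is not reproduced above; purely as background, the sketch below shows random linear network coding over GF(2), in which a sink that keeps listening beyond the minimum number of coded packets accumulates exactly the kind of redundant combinations an implicit error correcting method can exploit. All names and parameters are illustrative assumptions.

    # Background sketch: random linear network coding over GF(2). A source holds
    # k packets; the network delivers random GF(2) combinations of them, and a
    # sink decodes by Gaussian elimination once it has k independent combinations.
    # Combinations received beyond that point carry redundant information.
    import numpy as np

    rng = np.random.default_rng(0)
    k, packet_len = 4, 8
    source = rng.integers(0, 2, (k, packet_len), dtype=np.uint8)

    def coded_packet(data):
        """One random GF(2) combination of the source packets, returned with its
        coefficient vector (the header a sink needs for decoding)."""
        coeffs = rng.integers(0, 2, data.shape[0], dtype=np.uint8)
        return coeffs, (coeffs @ data) % 2

    def gf2_decode(coeffs, payload, k):
        """Recover the k source packets by Gaussian elimination mod 2; raises
        ValueError while the received set is not yet of full rank."""
        A = np.concatenate([coeffs, payload], axis=1).astype(np.uint8)
        row = 0
        for col in range(k):
            pivot = next((r for r in range(row, A.shape[0]) if A[r, col]), None)
            if pivot is None:
                raise ValueError("not yet full rank")
            A[[row, pivot]] = A[[pivot, row]]
            for r in range(A.shape[0]):
                if r != row and A[r, col]:
                    A[r] ^= A[row]
            row += 1
        return A[:k, k:]

    received_coeffs, received_payload = [], []
    while True:
        c, p = coded_packet(source)
        received_coeffs.append(c)
        received_payload.append(p)
        try:
            decoded = gf2_decode(np.array(received_coeffs),
                                 np.array(received_payload), k)
            break
        except ValueError:
            continue     # not yet enough independent combinations; keep listening
    assert np.array_equal(decoded, source)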
116
Improving attenuation corrections obtained using singles-mode transmission data in small-animal PET. Vandervoort, Eric. 05 1900.
The images in positron emission tomography (PET) represent three-dimensional dynamic distributions of biologically interesting molecules labelled with positron-emitting radionuclides (radiotracers). Spatial localisation of the radiotracers is achieved by detecting in coincidence two collinear photons which are emitted when the positron annihilates with an ordinary electron. In order to obtain quantitatively accurate images in PET, it is necessary to correct for the effects of photon attenuation within the subject being imaged. These corrections can be obtained using singles-mode photon transmission scanning. Although suitable for small-animal PET, these scans are subject to high levels of contamination from scattered photons. Currently, no accurate correction exists to account for scatter in these data. The primary purpose of this work was to implement and validate an analytical scatter correction for PET transmission scanning. In order to isolate the effects of scatter, we developed a simulation tool which was validated using experimental transmission data. We then presented an analytical scatter correction for singles-mode transmission data in PET. We compared our scatter correction data with the previously validated simulation data for uniform and non-uniform phantoms and for two different transmission source radionuclides. Our scatter calculation correctly predicted the contribution from scattered photons to the simulated data for all phantoms and both transmission sources. We then applied our scatter correction as part of an iterative reconstruction algorithm for simulated and experimental PET transmission data for uniform and non-uniform phantoms. We also tested our reconstruction and scatter correction procedure using transmission data for several animal studies (mice, rats and primates). For all studies considered, we found that the average reconstructed linear attenuation coefficients for water or soft-tissue regions of interest agreed with expected values to within 4%. Using a 2.2 GHz processor, the scatter correction required between 6 and 27 minutes of CPU time (without any code optimisation), depending on the phantom size and source used. This extra calculation time does not seem unreasonable considering that, without scatter corrections, errors in the reconstructed attenuation coefficients were between 18 and 45%, depending on the phantom size and transmission source used.
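For context on how the reconstructed attenuation coefficients enter the emission data (a standard PET relation, not a result specific to this thesis), each line of response (LOR) is corrected by an attenuation correction factor computed from the linear attenuation coefficients \mu along that line:

    \mathrm{ACF}_{\mathrm{LOR}} = \exp\left( \int_{\mathrm{LOR}} \mu(x)\, \mathrm{d}x \right), \qquad N_{\mathrm{corrected}} = \mathrm{ACF}_{\mathrm{LOR}} \cdot N_{\mathrm{measured}}

Because both annihilation photons must traverse the subject, the attenuation on a LOR does not depend on where along the line the annihilation occurred, which is why a transmission measurement of \mu suffices; it also means that any bias in \mu, such as that caused by uncorrected scatter in the transmission data, propagates directly into the corrected emission counts.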
117
The relationship between market value and book value for five selected Japanese firms. Omura, Teruyo. January 2005.
Studies of the value relevance of accounting numbers in capital market research are consistent with the simple view that, in equilibrium, book values are equal to or have some long-term relationship with market values, and that market returns are related to book returns. This dissertation examines the value relevance of annually reported book values of net assets, earnings and dividends to the year-end market values of five Japanese firms between 1950 and 2004 (a period of 54 years). Econometric techniques are used to develop dynamic models of the relationship between market values, book values and a number of macro-economic variables. In constructing the models, the focus is to provide an accurate statistical description of the underlying relationships between market and book value. It is expected that such research will add to the body of knowledge on factors that influence Japanese stock prices. The significant findings of the study are as follows: 1) well-specified models of the data-generating process for market value, based on the information set used to derive the models, are log-linear in form; additive, linear models in untransformed variables are not well specified and forecast badly out of sample; 2) the book value of net assets has relevance for market value in the five Japanese firms examined, in the long run.
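A log-linear long-run relation of the general kind reported as well specified here can be written as below; the particular regressors shown are illustrative rather than quoted from the dissertation.

    \ln MV_t = \beta_0 + \beta_1 \ln BV_t + \beta_2 \ln E_t + \beta_3 \ln D_t + \gamma' z_t + u_t

where MV_t is market value, BV_t the book value of net assets, E_t earnings, D_t dividends and z_t macro-economic controls. The finding that additive, linear models in untransformed variables forecast badly is then the statement that the levels relationship is multiplicative rather than additive.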
118
Investigating the relationship between market values and accounting numbers for 30 selected Australian listed companies. Clout, Victoria Jane. January 2007.
In capital market research (CMR), studies of the value relevance of accounting numbers are founded upon the concept that, in equilibrium, book values are equal to or have some long-term relationship with market values and that market returns are related to book returns. This thesis seeks to resolve a gap in the CMR by examining 30 selected individual firms listed on the Australian stock market during the period 1950 to 2004, using equilibrium correction modelling techniques. The limited prior works in this area used cross-sectional techniques rather than the long-run, time-series analysis used in this study. Moreover, dynamic analysis in the CMR has tended to focus on indexes or portfolio data rather than the firm-specific case study data modelled here. No prior research has taken this approach using Australian data. The results of this thesis indicated that an equilibrium correction relationship between market values and book values for firms listed on the Australian Stock Exchange (ASX) could be determined by using accounting and macroeconomic regressors. The findings of the thesis were consistent with the literature in terms of the variables suggested as important in the firm's valuation from the three main approaches: the analysts' (industry) approach, the finance and accounting theory (textbook) approach, and the CMR literature approach. The earnings, dividends and book value variables are significant in their relationships with the firm's market values. The models constructed were typically more informative and had better forecasting performance than the a priori models tested, which were based on theory and the literature.
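To make the modelling approach concrete, the sketch below shows one common two-step (Engle-Granger style) way of fitting an equilibrium correction model of log market value on log book value, using simulated data; it is a generic illustration under assumed variable names, not necessarily the exact estimator used in the thesis.

    # Illustrative two-step equilibrium correction model: a levels regression
    # gives the long-run relation, and its lagged residual enters a regression
    # in first differences as the error-correction term.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 220
    log_bv = np.cumsum(rng.normal(0.01, 0.05, n))      # simulated log book value
    log_mv = 0.5 + log_bv + rng.normal(0.0, 0.10, n)   # cointegrated log market value

    # Step 1: long-run (levels) regression; residuals measure disequilibrium.
    long_run = sm.OLS(log_mv, sm.add_constant(log_bv)).fit()
    ect = long_run.resid

    # Step 2: short-run dynamics with the lagged error-correction term.
    d_mv, d_bv = np.diff(log_mv), np.diff(log_bv)
    X = sm.add_constant(np.column_stack([d_bv, ect[:-1]]))
    ecm = sm.OLS(d_mv, X).fit()

    # A significantly negative coefficient on the lagged residual indicates
    # adjustment of market value back towards the long-run relation.
    print(ecm.summary())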
119
Dynamiques de l'institutionnalisation de l'enfance délinquante et en besoin de protection : le cas des écoles de réforme et d'industrie de l'Hospice Saint-Charles de Québec, 1870-1950 [Dynamics of the institutionalisation of delinquent children and children in need of protection: the case of the reform and industrial schools of the Hospice Saint-Charles de Québec, 1870-1950]. Gilbert, Dale.
Thesis (M.A.)--Université Laval, 2006. / Title from title screen (viewed 28 March 2007). Bibliography.
120
Development of a motion correction and partial volume correction algorithm for high resolution imaging in Positron Emission Tomography. Segobin, Shailendra Hemun. January 2012.
Since its inception around 1975, Positron Emission Tomography (PET) has proved to be an important tool in medical research, as it allows imaging of brain function in vivo with high sensitivity. It has been widely used in clinical dementia research with [18F]2-Fluoro-2-Deoxy-D-Glucose (FDG) and amyloid tracers as imaging biomarkers in Alzheimer's Disease (AD). The high resolution offered by modern scanner technology has the potential to provide new insight into the interaction of structural and functional changes in AD. However, the quantitative accuracy of PET is currently limited by subject movement and by finite resolution (even for high-resolution dedicated brain PET scanners), which results in partial volume effects, the undersampling of activity within small structures. A modified frame-by-frame (FBF) realignment algorithm has been developed that uses estimates of the centroid of activity within the brain to detect movement and subsequently reframe data to correct for intra-frame movement. The ability of the centroid to detect motion was assessed, and the added benefit of reframing data for real clinical scans with patient motion was evaluated through comparison with existing FBF algorithms. Visual qualitative analysis of 6 FDG PET scans by 4 blinded observers demonstrated notable improvements (ANOVA with Tukey test, p < 0.001), and time-activity curves were found to deliver biologically more plausible activity concentrations. A new method for Partial Volume Correction (PVC) is also proposed, PARtially-Segmented Lucy-Richardson (PARSLR), which combines the strength of the image-based deconvolution approach of the Lucy-Richardson (LR) iterative deconvolution algorithm with a partial segmentation of homogeneous regions. Such an approach is of value where reliable segmentation is possible for part, but not all, of the image volume or sub-volume. Its superior performance with respect to region-based methods such as Rousset's or voxel-based methods such as LR was successfully demonstrated via simulations and measured phantom data. The approach is of particular importance for studies with pathological abnormalities, where complete and accurate segmentation across or within a sub-volume of the image volume is challenging, and for regions of the brain containing heterogeneous structures which cannot be accurately segmented from co-registered images. The developed methods have been shown to recover radioactivity concentrations from small structures in the presence of motion and limited resolution with higher accuracy than existing methods. It is expected that they will contribute significantly to future PET studies where accurate quantitation in small or atrophic brain structures is essential.
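PARSLR itself is not reproduced here; purely as background, the sketch below shows the plain Richardson-Lucy iteration that it builds on, applied to a one-dimensional activity profile blurred by a Gaussian point spread function. The PSF width, the profile and the iteration count are illustrative assumptions, not values from the thesis.

    # Background sketch: Richardson-Lucy deconvolution with a symmetric Gaussian
    # PSF on a 1-D activity profile. Resolution loss (the partial volume effect)
    # suppresses the peak of the small hot structure; the iteration recovers
    # much of its amplitude.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    sigma = 3.0                                  # assumed PSF width in voxels
    truth = np.zeros(128)
    truth[40:44] = 1.0                           # small hot structure
    truth[70:100] = 0.4                          # larger background region
    observed = gaussian_filter1d(truth, sigma)   # simulated resolution loss

    def richardson_lucy(observed, sigma, n_iter=50, eps=1e-12):
        """Plain Richardson-Lucy update; with a symmetric Gaussian PSF the
        correlation with the flipped PSF is the same Gaussian filtering step."""
        estimate = np.full_like(observed, observed.mean())
        for _ in range(n_iter):
            blurred = gaussian_filter1d(estimate, sigma)
            ratio = observed / np.maximum(blurred, eps)
            estimate = estimate * gaussian_filter1d(ratio, sigma)
        return estimate

    recovered = richardson_lucy(observed, sigma)
    print(observed[40:44].max(), recovered[40:44].max())   # peak closer to 1.0 after RL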