Since chaos theory was introduced into cryptography, the use of chaotic dynamical systems to secure communications has been widely investigated, particularly to generate chaotic pseudorandom numbers as cipher keys. The emergent property of the ultra-weak multidimensional coupling of p one-dimensional dynamical systems leads to randomness while preserving the chaotic properties of the continuous models in numerical simulations. This paper focuses on such families, called multiparameter chaotic pseudorandom number generators (M-p CPRNG), and proposes an algorithmic approach to test the robustness of time series generated by an M-p CPRNG. First, a single one-dimensional chaotic map used to construct a regular chaotic subsampling is considered. The parameters on which the map depends are estimated using only the sequences generated by this map to cipher a message. A previous study [1] using the Extended Kalman Filter (EKF) showed that there is a minimum shift value, corresponding to a particular subsampling of a chaotic cubic map, beyond which it is not possible to estimate the parameters. In this paper, new cipher-breaking methods are considered for the same purpose: assessing the security of the time series. These methods are investigated in the same way as the EKF and compared with the results it provides. The EKF was first improved by introducing a modified Gram-Schmidt method, and the nonlinear least squares method was also tested. The one-dimensional cubic map is again considered, and a new parameter setting leading to EKF oscillations is studied in particular.

For short neural network training times (up to 12 minutes), the estimation accuracy remained at the same level. The neural network, when trained on several examples, behaved almost the same as when trained for a longer time. All of the analyzed examples allowed assessment of the surface roughness parameter Ra for a stable process, i.e. normal tool operation. A more significant increase in surface roughness (both in surface roughness parameters and in surface image features) caused an immediate increase in the predicted Ra value. In further steps this predicted Ra value kept growing despite the decrease of the reference Ra value and of the image wavelet feature values.

…individuals to estimate the population-level parameters as well as the distribution of the individual parameters. The importance of capturing parameter variability among individuals has long been recognized, and recently it has been shown that different genotypes can play a significant role in how a drug is metabolized and processed [13, 18]. Thus, capturing the inter-individual variation in model parameters is crucial, especially when considering extrapolation between species, doses, and exposure routines. As mentioned previously, several methods have been developed specifically for estimating the distribution of individual parameters so long as longitudinal individual data is available. Yet there is a lack of methods aimed at accounting for parameter variability when true individual data is not available. To the authors' knowledge, there is only a little previous work (see [1, 4, 8]) in the mathematical or pharmacological literature which attempts to account for inter-individual variability when individual subjects cannot be tracked over time. When aggregate data is treated as individual data, as is traditionally done in most physiological pharmacokinetic studies, one can only account for the average dynamics, and all parameter values (whether estimated by means of an inverse problem or measured in separate experiments) represent the average behavior. There has been a recent push in the pharmacology modeling community to better understand the uncertainty and variability associated with model development and calibration, and to incorporate this information into the risk assessment process (see [9, 10, 19] and the references therein). We emphasize that capturing the uncertainty due to inter-individual variability when the individual is not available for repeated measurements has yet to be addressed.

We propose that the methods outlined in this work provide a more realistic quantification of parameter uncertainty in the case where only unidentified individual data is available, which in turn leads to more reliable predictive capabilities and risk assessment analysis.

Figures 2a and 2b show the MEWE's bivariate marginal sampling distributions for (a, b) and (g, κ) respectively, as n ranges from 500 to 10^5 (colors from red to white to blue as n increases). Note that the sample sizes here are 10 times larger than in the plots for the i.i.d. setting. For each n, we plot M = 1,000 estimators based on independent data sets. Each estimator was computed with p = 1, m = 10^4, k = 20, and one iteration of MCEM. The intersections of the black lines indicate the data-generating parameters. Figure 2c shows the MEWE's marginal distribution for κ for the different levels of n, centered and rescaled by √n, illustrating the rate of convergence anticipated by Theorem 2.3, but with an asymptotic variance larger than in the i.i.d. case. Figure 2d shows the autocorrelation function of a data set generated with θ* = (3, 1, 2, 0.5), ρ = 0.75, and n = 1,000. (Figure 3: estimators in the well-specified sum of log-Normals model, as described in Section 4.2.)

For the simulation part, the AC1A and AC8B excitation system models have been implemented in MATLAB/Simulink, based on IEEE Standard 421.5 as updated in 2005. For the optimization part, the goal is to find suitable parameters such that, with the same input, the simulation output matches the field data from the real machine. We formulated the problem as a least squares problem and applied the Damped Gauss-Newton (DGN) and Levenberg-Marquardt (LM) methods to solve it. We used both the MATLAB Parameter Estimation Toolbox and MATLAB programs we developed ourselves to implement the algorithms and obtain the parameters. For both the AC1A and AC8B models, we carried out case studies and validation. This project was sponsored by Progress Energy, which also provided two suites of "bump-test" field data for the AC1A and AC8B excitation systems. Besides the results, we

The kernel function and convolution-smoothing methods developed to estimate a probability density function and distribution are essentially a way of smoothing the empirical distribution

In this example, it is possible to show that the mean square error can be decreased, because the isotonic regression estimates are the nearest points in the restricted parameter space to

We said above that DALEχ runs the tendril search "until convergence". Convergence is a tricky concept in Bayesian samplers, and trickier still in DALEχ. Because DALEχ is not attempting to sample a posterior probability distribution, there is no summary statistic analogous to Gelman and Rubin's R metric for MCMC (Gelman and Rubin 1992; Brooks and Gelman 1998) that will tell us that DALEχ has found everything it is going to find. We therefore adopt the heuristic convergence criterion that as long as DALEχ is exploring previously undiscovered parameter-space volumes with χ² ≤ χ²_lim, DALEχ has not converged. The onus is thus placed on us to determine, in a way that can be easily encoded, which volumes in parameter space have already been discovered. In Appendix A we describe an algorithm for taking a set of parameter-space points {P} and finding an approximately minimal D-dimensional ellipsoid containing those points. Each time DALEχ runs a tendril search as described in the previous paragraphs, we amass the χ² ≤ χ²_lim points discovered into a set {T} and fit an ellipsoid to that set as described in Appendix A. While running the tendril search, DALEχ keeps track of the end point of each individual leg of the search. If the simplex search ends in a point that is contained in one of the ellipsoids resulting from a previous tendril search (the set of "exclusion ellipsoids" {X} referenced earlier), the simplex search is considered to be doubling back on a previously discovered region of parameter space and a "strike" (analogous to what happens when a batter in baseball swings at the ball and misses) is recorded. Similarly, if the simplex search ends inside the ellipsoid constructed from the points {T} discovered by the previous legs of the current tendril search (and the volume of that ellipsoid has not expanded due to the current simplex), a "strike" is recorded.

The method of automatic identification of semantic frames is based on a probabilistic generative process. The training data for the algorithm consists of tuples of grammatical relation realizations acquired from the training corpus for every lexical unit using a dependency parser. For example, suppose that the goal is to generate semantic frames of verbs from a corpus for the grammatical relations subject and object. The training data for the lexical unit eat may look like {(peter, cake), (man, breakfast), (dog, meat), ...}, where the first component of each tuple corresponds to the subject and the second to the object.

When a dataset is available, the distribution from which it is derived is a key question. In parametric estimation, a distribution is postulated and its parameters are then estimated, but if the postulated distribution is not right, the analysis becomes questionable. In nonparametric density estimation, by contrast, no distributional assumptions are made, which has made it a popular tool. There are several nonparametric density estimation procedures; some of them are given below:

The fractional Brownian motion (fBm for short) has already been widely applied in hydrology, traffic volume prediction, estimation of the Hurst exponent of seismic signals, finance, and various other areas, due to properties such as long-range dependence, self-similarity, and stationarity of its increments. However, fBm is not sufficient for some random phenomena, so many researchers have chosen more general stochastic processes to construct stochastic models. For instance, Azzaoui and Clavier [] studied the impulse response of the -GHz channel using α-stable processes. Lin and Lin [] studied pricing debt value under stochastic interest rates using Lévy processes. Meanwhile, the weighted fractional Brownian motion (wfBm), a generalization of the fBm, can also be used for modeling.

Many studies on the modeling of this kind of process have been reported [1-3]. From previous findings, the mathematical modeling of the hydrolysis process leads to a nonlinear parameter estimation problem, and the model parameters have been determined either using conventional graphical techniques [4] or nonlinear regression methods [5]. The usual approach to estimating the model parameters of a biological process is to use nonlinear techniques, since graphical methods have been shown to give inferior parameter estimates compared to those generated by nonlinear techniques [6]. On the other hand, nonlinear techniques also have their drawbacks. They often fail in the search for a global optimum if the search space is nonlinear in the parameters [7]. For a large least-squares sum, slow convergence often appears [8]. A common practice to deal with the local convergence problem is to test different initial parameter guesses. However, the probability of finding an initial condition suitable for all parameters decreases as the number of parameters involved increases [9]. Because of the limitations of those methods, an attempt was made to estimate the model parameters of tapioca starch hydrolysis using a genetic algorithm.

Abstract—A sequential method for estimating the autoregressive parameters of a TAR(p)/ARCH(1) model, all of which are assumed to be unknown, is presented. The procedure is based on the construction of a special stopping rule and of weights for the weighted least squares estimation method, which allow us to guarantee a prescribed estimation accuracy. A sequential procedure for change point detection is also proposed. Upper bounds on its basic characteristics, such as the probability of false alarm and the delay probability, are obtained.

Dynamic thermal rating (DTR) of transmission lines based on actual environmental parameters can greatly improve line capacity [1]. Without reconstructing existing transmission lines, DTR can ease the contradiction between electricity consumption and power supply and improve line utilization with great economic benefit. DTR can be determined by a line ampacity calculation model based on the CIGRE standard [2–4]. The ambient environmental parameters of transmission lines are significant factors affecting the DTR, but the difference between the measured value and the true value cannot be ignored, and the uncertainty of the DTR needs to be evaluated [5–8].

estimators of amplitude, delay, and phase are proved to be in turn consistent. Moreover, simulations are carried out to show that, for finite N, the CCAP CFO estimator variance can be smaller than that of the CCAN estimator. It is worth emphasizing that the considered algorithm does not rely on the usual assumption of white and/or Gaussian ambient noise, and it exhibits the typical interference and noise immunity of algorithms based on the cyclostationarity properties of the involved signals.

After determining the gist of the time parameter prediction, the β distribution of the construction duration can be obtained. In [13], three fixed-time probabilistic estimates are used to solve this problem, that is, the duration is estimated on the basis of three fixed-time probabilities. Fixed-time probability estimation methods can be divided into two categories. The first is the empirical-formula approach based on a large amount of experimental data, such as the various mean and variance calculation formulas for fixed probability combinations. The second is the β probability density fitting method using a computer numerical solution: find the parameters a, b, r, and s which result in the smallest variance of the fit, and then determine the β distribution. The fundamental difference between the two methods lies in the initial data required: the first category requires fixed probabilities and combinations, while the second, fitting category is relatively flexible. According to the literature [12], the error of the fitting method is smaller and more reasonable. The β distribution variance can be expressed as

Control charts, viewed as the most powerful and simplest tools in Statistical Process Control (SPC), are widely used in manufacturing and service industries. The double sampling (DS) X chart detects small to moderate process mean shifts effectively while reducing the sample size. The DS X chart is conventionally investigated assuming that the process parameters are known. Nevertheless, the process parameters are usually unknown in practical applications; thus, they are estimated from an in-control Phase-I dataset. In this thesis, the effects of parameter estimation on the DS X chart's performance are examined. Taking parameter estimation into consideration, the run length properties of the DS X chart are derived. Since the shape and skewness of the run length distribution change with the magnitude of the process mean shift, the number of Phase-I samples, and the sample size, the widely used performance measure, the average run length (ARL), should not be the sole measure of a chart's performance. For this reason, the ARL, the standard deviation of the run length (SDRL), the median run length (MRL), the percentiles of the run length distribution, and the average sample size (ASS) are recommended to evaluate the proposed DS X chart with estimated parameters. The key contribution of this thesis is to propose four new optimal designs for the ARL-based and MRL-based DS X charts with estimated parameters. In particular, these newly developed optimal designs are the ARL-based DS X chart with estimated parameters obtained by minimizing (i) the out-of-control

In this article, we address the parameter estimation of micromotion targets in synthetic aperture radar (SAR), where the scattering parameters and micromotion parameters of targets are coupled, resulting in a nonlinear parameter estimation problem. Conventional methods address this nonlinear problem by matched filtering, which is computationally expensive and of lower resolution. In contrast, we address the problem by linearizing the forward model as a linear combination of elements of an over-complete dictionary. The essential idea of sparse signal representation models comes from the fact that SAR micromotion targets are sparsely distributed in the

Abstract: Abduction is a kind of logical inference and has been studied in computer science and artificial intelligence (Finlay and Dix 1996). Recently, Sawa and Gunji (2010) introduced a diagram to represent the three types of inference articulated by C. S. Peirce: deduction, induction, and abduction. Sawa and Gunji's representation provides a new approach to a numerical aspect of abduction. In the present paper, we show that Sawa and Gunji's representation of abduction is consistent with Finlay and Dix's, and we integrate the two representations. Both parameter estimation and abduction occupy a similar position in the integrated representation, although they do not correspond completely. We present "incomplete" parameter estimation as a sort of "simulated abduction", which is a numerical aspect of abduction. It is applied to a first-order autoregressive (AR(1)) model. As a result of numerical analyses of AR(1), the incompletely estimated parameter (IEP) follows a Cauchy distribution, whose tail obeys a power law with slope −2, whereas the conventionally estimated parameter is normally distributed. It is shown that the Cauchy distribution of the IEP arises from the structure of the ratio distribution of normal random variables generated from the AR(1). This research suggests that the distribution of the IEP is based not on a mechanism of the system itself, but on the relationship between the data structure of the given system (i.e. the given AR(1) process) and that of the system observer (i.e. the estimator of the AR(1) parameter).

Topics: terminology; prior → posterior; posterior expectation; credible intervals; binomial example; beta distribution; ... A Bayesian statistician
