PReMo version 1.3.1:
* bug fixes

PReMo version 1.3:
* support for specifying discrete-time Quasi-Birth-Death (QBD) processes via probabilistic 1-counter automata (p1CAs)
* support for analyzing termination probabilities, expected reward, second moment of the reward, and the stationary distribution of a QBD (p1CA)
* steady-state probability analysis gives a meaningful result only if the QBD is irreducible and positive recurrent
* meaning of the variables:
  g_i_j  - (i,j)-th entry of matrix G
  r_i_j  - (i,j)-th entry of matrix R
  rw_i_j - expected reward when starting at (i,1) and terminating at (j,0)
  q_i_j  - second moment of that reward
  u0_i   - steady-state probability of being at (i,0)
  u_i    - steady-state probability of being at (i,1);
           for level k the probabilities can be computed as uk = u * R^k
* small example QBDs can be downloaded from the homepage
* C++ style // comments can be used in the source code of any model
* changes in fonts and syntax coloring
* equations can be edited/added after being generated

PReMo version 1.2:
* strategy improvement finally accessible from the GUI
* generating expected termination time equations for Stochastic Context-Free Grammars
* new universal solvers: Bad Broyden and a modified version of Sparse Newton that caches the Jacobian matrix so that it can be modified later
* new efficient Dense and Sparse Linear Solvers for systems of linear equations
* cached Sparse Newton is designed to be used with strategy improvement for the non-linear equation systems arising in termination analysis of 1-RSSGs, whereas the dense and sparse linear solvers should be used for the equation systems arising in the context of reward 1-RSSGs
* the Dense Linear Solver is robust in the sense that it returns the correct solution even if some values of the least fixed point (LFP) are infinite; on top of that, it generates warnings if the solution might be significantly affected by arithmetic errors
* strategy improvement available for the maximizer as well as for the minimizer (a rather experimental feature, since the solution found with it might not be the LFP solution)
* three variants of strategy improvement for each of the players, resulting in a few hundred ways of solving a given min-max equation system
* PReMo restarts itself with 256MB of heap memory available to it. It is also possible to run it with any given amount of memory by issuing, e.g., java -jar PReMo-1.2-win.jar 512 to allocate 512MB to PReMo. If your system does not have 256MB available, then you have to run it from the command line as described, since otherwise it will just generate an exception and exit.
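The level-k formula uk = u * R^k from the 1.3 notes can be sketched as a repeated vector-matrix product. This is an illustration only, with a made-up 2-phase matrix R and level-1 vector u (in PReMo these correspond to the r_i_j and u_i variables produced by the analysis, not to these numbers):

```python
# Hypothetical 2-phase QBD data: R matrix (r_i_j entries) and level-1
# steady-state vector u (u_i entries).  The numbers are made up; PReMo
# computes the real ones from the p1CA.
R = [[0.2, 0.1],
     [0.1, 0.3]]
u = [0.4, 0.2]

def vec_mat(v, M):
    """Row vector times matrix."""
    return [sum(v[i] * M[i][j] for i in range(len(v)))
            for j in range(len(M[0]))]

def level_probs(u, R, k):
    """Steady-state probabilities for level k: uk = u * R^k."""
    uk = u
    for _ in range(k):
        uk = vec_mat(uk, R)
    return uk

print(level_probs(u, R, 3))
```

Since R has spectral radius below 1 for a positive recurrent QBD, the level probabilities decay geometrically in k, which is why a single vector u plus the matrix R suffices to describe all levels.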
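The Newton-based solvers listed for version 1.2 operate on monotone polynomial equation systems whose desired solution is the least fixed point. The following one-dimensional sketch shows the idea; it is not PReMo's implementation (which handles multivariate systems with a cached Jacobian), and the equation x = 0.6*x^2 + 0.4 is a made-up termination-probability-style example:

```python
def newton_lfp(f, df, x0=0.0, tol=1e-12, max_iter=100):
    """Newton iteration x <- x - f(x)/df(x), started at 0.  For monotone
    polynomial fixed-point systems this converges to the least fixed
    point from below."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Fixed-point equation x = 0.6*x^2 + 0.4, rewritten as
# f(x) = 0.6*x^2 - x + 0.4 = 0.  Its roots are 2/3 and 1;
# the least fixed point (the termination probability) is 2/3.
p = newton_lfp(lambda x: 0.6 * x * x - x + 0.4,
               lambda x: 1.2 * x - 1.0)
print(p)
```

Starting at 0 matters: the iterates then increase monotonically toward the LFP, so Newton returns the smaller root 2/3 rather than the fixed point 1.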