While running the code "MAXSEEK" (downloaded from Hamilton's website), I get the errors below. Hamilton said he did not encounter these errors himself. Can someone please advise? Is there an option to attach a file?
Line 153 in C:\gauss10\mrs\procs
Syntax error G0008 : 'endp'
Line 182 in C:\gauss10\mrs\MAXSEEK
Nested procedure definition G0155
Line 182 in C:\gauss10\mrs\MAXSEEK
Syntax error G0008 : 'proc matpm(xth)'
2 Answers
MAXSEEK and PROCS codes are given below
MAXSEEK
@ This GAUSS file reads in data, sets options, and calls numerical
optimization routine for numerical estimation of switching model @
output file=junk reset;
/* ======================================================================= */
@ Replace this first section with a section to read in your data
Example 1 reproduces the GNP results from Hamilton, "A New
Approach to the Economic Analysis of Nonstationary Time Series
and the Business Cycle," Econometrica, March 1989
Example 2 reproduces the interest rate results from Hamilton,
"Rational Expectations Econometric Analysis of Changes in Regime:
An Investigation of the Term Structure of Interest Rates," Journal
of Economic Dynamics and Control, June 1988
Example 3 reproduces the results for the French exchange rate in "Long
Swings in the Exchange Rate: Are They in the Data and Do Markets
Know It?" by Charles Engel and James D. Hamilton (AER, Sept. 1990).
Example 4 fits a three-state model to US real GNP data; estimation
algorithm bogs down due to bumping against corners with nonnegative
definite hessian; Example 5 reestimates with zeros imposed @
/*===============================================
EXAMPLE 1:*/
@ The following lines read in the raw data, and convert to
100 times the first difference of the log @
capt = 135; @ capt is the sample size @
load bbbb[capt+1,1] = C:\gauss10\swarch\gnp82.dat;
bbbb = bbbb[1:capt,1]~bbbb[2:capt+1,1];
y = 100*ln(bbbb[.,2]./bbbb[.,1]);
@ Adjust any of the following to control specification desired @
ns = 2; @ ns is the number of primitive states @
ps = 4; @ ps is the number of lagged states that matter
for y; use ps = 0 if only the current state
matters @
pphi = 4; @ pphi is the number of lags in autoregression
for y; pphi should be greater than or equal
to ps @
isig = 1; @ isig = 1 for constant variances,
isig =ns for changing variances @
ipm = 1; @ ipm specifies way in which transition probs
are parameterized
ipm = 1 implies p11 and p22 estimated
ipm = 2 implies pij for i=1,..,n j=1,..,n-1
ipm = 3 user input code @
a=0;b=0;c=0; @ These parameters control the Bayesian prior
as suggested in Hamilton, "A Quasi-Bayesian
Approach to Estimating Parameters for Mixtures
of Normal Distributions", Journal of Business
and Economic Statistics, January 1991. If
no Bayesian adjustment is desired, set all
parameters to zero @
/* Input initial values for parameters
The order in which variables are represented is as follows
first ns elements: means for states
next pphi elements: autoregressive coefficients
next isig elements: variances for states when izz =1
std. deviations for states when izz =2
next elements: when izz =1 these are the transition probs
when izz =2 these are params v(i,j) such that
p(i,j) = v(i,j)^2 / sum j v(i,j)^2
elements are ordered as p11, p22 when ns =2
ordered as p(1,1),p(2,1),...,p(ns,1),
p(2,1),...,p(ns-1,ns) when ns > 2 */
nth = 9; @ nth is the number of params to be estimated@
mu1 = 1; mu2 = 0; phi1 = .1; phi2 = 0; phi3 = 0; phi4 = 0;
sig = 1; p11 = 1.5; p22 = 1.5;
th = mu1|mu2|phi1|phi2|phi3|phi4|sig|p11|p22;
/* =============================================
EXAMPLE 2:
capt = 103;
load y[capt,1] = rradj.dat;
ns = 2;
ps = 4;
pphi = 4;
isig = 2;
ipm = 1;
a=0;b=0;c=0;
nth = 10;
mu1 = 2; mu2 = 1; phi1 = .9; phi2 = 0.; phi3 = 0.;
phi4 = 0; sig1 = .8; sig2 = .2; p11 = 1; p22 = 1;
th = mu1|mu2|phi1|phi2|phi3|phi4|sig1|sig2|p11|p22; */
/*==============================================
EXAMPLE 3
capt = 58;
load y[capt,3] = exdata.txt; @ this loads exchange rate data
for France, U.K. and Germany @
y = y[.,1]; @ this selects French data @
ns = 2;
ps = 0;
pphi = 0;
isig = 2;
ipm = 1;
a = .2; b = 1.0; c = .1;
nth = 6;
mu1 = 3; mu2 = -3; sig1 = 3; sig2 = 6; p11 = 2; p22 = 2;
th = mu1|mu2|sig1|sig2|p11|p22; */
/* ============================================================
EXAMPLE 4
capt = 135; @ capt is the sample size @
load bbbb[capt+1,1] = gnp82.dat;
bbbb = bbbb[1:capt,1]~bbbb[2:capt+1,1];
y = 100*ln(bbbb[.,2]./bbbb[.,1]);
a=.1;b=.1;c=.1;
ns = 3; ps = 1; pphi = 1; isig = 1; ipm = 2;
nth = 11;
mu1 = -1; mu2 = .2; mu3 = 1.2; phi1 = .1; sig = .8;
p11 = 1.1; p21 = 1.8; p31 = .1; p12 = .15; p22 = 3; p23 = .33;
th = mu1|mu2|mu3|phi1|sig|p11|p21|p31|p12|p22|p23; */
/* ============================================================
EXAMPLE 5
capt = 135; @ capt is the sample size @
load bbbb[capt+1,1] = gnp82.dat;
bbbb = bbbb[1:capt,1]~bbbb[2:capt+1,1];
y = 100*ln(bbbb[.,2]./bbbb[.,1]);
a=.1;b=.1;c=.1;
ns = 3; ps = 1; pphi = 1; isig = 1; ipm = 3;
nth = 8;
mu1 = -1; mu2 = .2; mu3 = 1.2; phi1 = .1; sig = .8;
p11 = 1; p21 = 1; p23 = 1;
th = mu1|mu2|mu3|phi1|sig|p11|p21|p23; */
/* ======================================================================= */
/* ======================================================================= */
@ In general no parts of this section should be changed @
proc startval; @ This defines starting value for iteration to be th @
retp(th); endp;
nk = pphi+1; @ nk is the first observation for which the
likelihood will be evaluated @
izz = 1; @ izz = 1 when params read in so as to assure inequality
constraints; izz = 2 for final reporting of results @
n = ns^(ps+1); @ n is the dimension of the state vector @
kc = 1; @ kc = 2 to echo parameter values @
ks = 1; @ ks = 2 if smoothed probs are to be calculated @
captst = capt - nk +1; @ captst is the effective sample size @
skif = zeros(captst,n); @ skif is the matrix of filtered probs @
skis = zeros(captst,n); @ skis is the matrix of smoothed probs @
id = eye(ns); @ used in certain calculations below @
"Bayesian prior used";
"a=";;a;;"b=";;b;;"c=";;c;
proc pattern1; @ This proc returns a (ps+1)*ns x n matrix. The ith
column contains a one in row j if st = j, contains a
one in row ns+j if st-1 = j, and so on @
local i1,ix,iq,na;
na = n/ns;
ix = eye(ns).*.ones(1,na);
i1 = 1;
do until i1 > ps;
na = na/ns;
iq = ones(1,ns^i1).*.(eye(ns).*.ones(1,na));
ix = iq|ix;
i1 = i1+1;
endo;
retp(ix);
endp;
hp = pattern1;
#include procs;
/* ================================================================= */
/* ======================================================================= */
proc matpm(xth); @This proc defines the user's conventions for reading
elements of Markov transition probabilities from
parameter vector @
local pm,ixth;
ixth = rows(xth);
pm = zeros(ns,ns);
if ipm == 1; @ for ns =2 this option has parameters as p11 and p22 @
if izz == 1;
pm[1,1] = xth[1,1]^2/(1 +xth[1,1]^2);
pm[2,2] = xth[2,1]^2/(1 + xth[2,1]^2);
else;
pm[1,1] = xth[1,1];
pm[2,2] = xth[2,1];
endif;
pm[2,1] = 1 - pm[1,1];
pm[1,2] = 1 - pm[2,2];
elseif ipm == 2; @ general case has parameters pij for i = 1,...,n and
j = 1,...,n-1 @
pm[1:ns-1,.] = reshape(xth[1:ixth,1],ns-1,ns);
if izz == 1;
pm[ns,.] = ones(1,ns);
pm = pm^2;
pm = pm./(sumc(pm)');
else;
pm[ns,.] = (1 - sumc(pm))';
endif;
elseif ipm == 3; @ This section can be rewritten by user to impose zeros
and ones where desired @
if izz == 1;
pm[1,1] = xth[1,1]^2/(1 + xth[1,1]^2);
pm[1,2] = xth[2,1]^2/(1 + xth[2,1]^2);
pm[2,3] = xth[3,1]^2/(1 + xth[3,1]^2);
elseif izz == 2;
pm[1,1] = xth[1,1];
pm[1,2] = xth[2,1];
pm[2,3] = xth[3,1];
endif;
pm[3,1] = 1 - pm[1,1];
pm[2,2] = 1 - pm[1,2];
pm[3,3] = 1 - pm[2,3];
endif;
retp(pm);
endp;
/* ======================================================================= */
/* ==================================================================== */
@ Set parameters to use Gauss numerical optimizer @
library optmum;
#include optmum.ext;
__btol = 1.e-06; @ This controls convergence criterion for coefficients@
__gtol = 1.e-06; @ This controls convergence criterion for gradient @
__algr = 1; @ This chooses BFGS optimization @
_opalgr = 2; @ This chooses BFGS optimization for newer versions of
GAUSS @
__miter = 150; @ This controls the maximum number of iterations @
__output = 1; @ This causes extra output to be displayed @
__covp = 0; @ This speeds up return from OPTMUM; note that the
program makes a reparameterization to calculate
std. errors @
@ Next call the GAUSS numerical optimizer @
output off;
{x,f,g,h} =optmum(&ofn,startval);
output file=junk on;
"";"";"MLE as parameterized for numerical optimization ";
"Coefficients:";x';
"";"Value of log likelihood:";;-f;
"";"Gradient vector:";g';
/* ======================================================================= */
/*======================================================================== */
@ In general no parts of this section need be changed @
@ Reparameterize for reporting final results @
izz = 2;
"";"Vector is reparameterized to report final results as follows";
"Means for each state:";x[1:ns,1]';
If pphi > 0;
"Autoregressive coefficients:";x[ns+1:ns+pphi,1]';
endif;
ncount = ns + pphi ;
x[ncount+1:ncount+isig,1] = x[ncount+1:ncount+isig,1]^2;
"Variances:";x[ncount+1:ncount+isig,1];
ncount = ncount + isig;
/* ======================================================================= */
/* ======================================================================= */
proc pmth; @ This proc converts the last elements of parameter vector
(which relates to transition probabilities) from the
th[i,j]^2/{sum j th[i,j]^2} form that is used for
numerical estimation into the p[i,j] form that is used
to calculate standard errors @
local pm;
if ipm == 1;
x[ncount+1,1] = x[ncount+1,1]^2/(1 + x[ncount+1,1]^2);
x[ncount+2,1] = x[ncount+2,1]^2/(1 + x[ncount+2,1]^2);
elseif ipm == 2;
pm = zeros(ns,ns);
pm[1:ns-1,.] = reshape(x[ncount+1:nth],ns-1,ns);
pm[ns,.] = ones(1,ns);
pm = pm^2;
pm = pm./(sumc(pm)');
x[ncount+1:nth,1] = reshape(pm[1:ns-1,.],ns*(ns-1),1);
elseif ipm == 3; @ User may want to alter these next lines @
x[ncount+1:nth,1]
= (x[ncount+1:nth,1]^2)./(1 + x[ncount+1:nth,1]^2);
endif;
retp(x);
endp;
/* ======================================================================= */
/* ======================================================================= */
@ In general no changes are necessary from here out @
call pmth;
h = (hessp(&ofn,x));
va = eigrs(h);
kc = 2;
ks = 2;
call ofn(x);
if minc(eigrs(h)) <= 0;
"Negative of Hessian is not positive definite";
"Either you have not found local maximum, or else estimates are up "
"against boundary condition. In latter case, impose the restricted "
"params rather than estimate them to calculate standard errors";
else;
h = invpd(h);
std = diag(h)^.5;
"For vector of coefficients parameterized as follows,";x';
"the standard errors are";std';
endif;
"";"-------------------------------";"";
"Probabilities for primitive states";
"filtered probabilities";format /rd 1,0;
"Obs ";;
t = 0;
do until t > ps;
i = 1;
do until i == ns;
"P(st-";;t;;"=";;i;;") ";;
i = i+1;
endo;
t = t+1;
endo;"";
format /rd 6,4;
skif = (skif*hp')*(eye(ps+1).*.id[.,1:ns-1]);
skif = seqa(nk,1,captst)~skif;skif;
"";"smoothed probabilities";
format /rd 1,0;
"Obs ";;
i = 1;
do until i > ns;
"P(st = ";;i;;") ";;
i = i+1;
endo;
format /rd 6,4;
skis = skis*hp';
skis = seqa(nk,1,captst)~skis[.,1:ns];skis;
/*========================================================================*/
PROCS
@ These GAUSS procedures evaluate Markov transition matrix, evaluate
likelihood function, and evaluate filtered and smoothed probabilities @
/* ============================================================== */
proc matf(pm); @This proc returns the (n x n) matrix F of Markov
transition probabilities for state vector @
local iz,iw,ib,na,nb,nc,fz,fm;
@ set initial values for use with iteration @
na = 1;
nb = ns;
nc = ns*ns;
fm = pm;
iz = 1;
do until iz > ps;
fz = fm;
fm = zeros(nc,nc);
iw = 1;
do until iw > ns;
fm[((iw-1)*nb+1):(iw*nb),((iw-1)*na+1):(iw*na)]
= fz[1:nb,((iw-1)*na+1):iw*na];
iw = iw+1;
endo;
ib = 2;
do until ib > ns;
fm[1:nc,((ib-1)*nb+1):ib*nb] = fm[1:nc,1:nb];
ib = ib+1;
endo;
na = na*ns;
nb = nb*ns;
nc = nc*ns;
iz = iz+1;
endo;
retp(fm);
endp;
/* ==================================================================== */
/* ================================================================= */
proc ofn(th); @ this proc evaluates filter probs and likelihood @
local ncount,mu,phi,sig,pm,eta,iz,fm,chsi,it,f,fit,fx,hw,fj,
ij,ap,const,pq,hk,ihk;
@ Convert parameter vector to convenient form @
mu = th[1:ns,1];
ncount = ns + 1; @ ncount is the number of params read @
if pphi == 0;
phi = 1;
else;
phi = 1 | -th[ncount:ncount+pphi-1];
endif;
ncount = ncount + pphi;
if isig == 1;
sig = th[ncount,1]*ones(n,1);
ncount = ncount +1;
elseif isig == ns;
sig = th[ncount:ncount+ns-1,1].*hp[1:ns,.];
sig = sumc(sig);
ncount = ncount + ns;
endif;
if izz == 1;
sig = sig^2;
endif;
pm = matpm(th[ncount:nth,1]);
@ Construct constant term to be subtracted for each observation @
const = phi .*. mu;
const = const'*hp;
@ Convert data to AR resids @
eta = y[nk:capt,1];
iz = 1;
do until iz > pphi;
eta = eta ~y[nk-iz:capt-iz,1];
iz = iz+1;
endo;
eta = ((eta*phi - const)^2)./sig';
@ Adjust data to avoid numeric overflows and underflows
eta = reshape(eta,1,n*(captst));
eta = minc(eta|800*ones(1,n*captst));
eta = reshape(eta',captst,n); @
eta = (1./sqrt(sig')).*exp(-eta/2);
@ Calculate ergodic probabilities @
fm = matf(pm);
ap = (eye(n)-fm)|ones(1,n);
chsi = sumc((invpd(ap'*ap))');
chsi = maxc(chsi'|zeros(1,n)); @ This line eliminates roundoff error @
if kc > 1;
"";"Matrix of Markov transition probabilities:";pm;
"";"Ergodic probs for full state vector:";chsi';
"";"Ergodic probs for primitive states:";
pq = hp*chsi;pq[1:ns,1]';
endif;
@ Filter iteration @
f = 0;
it = 1;
do until it > captst;
fx = chsi .* eta[it,.]';
fit = sumc(fx);
skif[it,.] = fx'/fit;
f = f + ln(fit);
chsi = fm*fx/fit;
it = it+1;
endo;
@ Make Bayesian adjustment to likelihood if desired @
if a > 0;
fj = 0;
ij = 1;
do until ij > ns;
fj = fj + a*ln(sig[ij,1]) + b/sig[ij,1]
+ c*(mu[ij,1]^2/sig[ij,1]);
ij = ij +1;
endo;
f = f - fj/2;
endif;
@ Calculate smoothed probs if desired @
if ks == 2;
skis[captst,.] = skif[captst,.];
it = 1;
do until it == captst;
if minc(skif[captst-it,.]') > 1.e-150;
skis[captst-it,.] = skif[captst-it,.].*
((skis[captst-it+1,.]./(skif[captst-it,.]*fm'))*fm);
else; @ adjust code so as not to divide by zero @
hk = skif[captst-it,.]*fm';
ihk = 1;
do until ihk > n;
if hk[1,ihk] > 1.e-150;
hk[1,ihk] = skis[captst-it+1,ihk]/hk[1,ihk];
else;
hk[1,ihk] = 0;
endif;
ihk = ihk + 1;
endo;
skis[captst-it,.] = skif[captst-it,.].*(hk*fm);
endif;
it = it+1;
endo;
endif;
@ Print out value of log likelihood if desired @
if kc == 2;
"";"Log likelihood:";f;
endif;
retp(-f);
endp;
/*=========================================================================*/
I copied and pasted your code from above into two separate files, markov_mod.gss and procs. I had to do some find-and-replace, because the forum website had changed some of the quotation marks to two single quotes, subtraction signs to dashes, and so on. I also had to close the comment on the very last line of your post. As a note for others: I did not copy the line that says MAXSEEK or the line that says PROCS into my files; I used those only as demarcations between the two files.
After this, I downloaded the gnp82.dat dataset from the link that says "Programs for estimation of Markov switching models by numerical optimization" on this page. I placed the gnp82.dat file in the same directory as the other two files.
I was then able to run the code successfully with GAUSS version 13.1 on Linux. I have attached a zip file with these contents.
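Roughly, the contents of that zip are laid out as in the minimal sketch below. The file names markov_mod.gss and procs are simply the names I chose; only gnp82.dat and the "#include procs;" line come from the code above. Note that Example 1's load statement would also need to point at the local copy of the data rather than the hard-coded C:\gauss10\swarch path (Examples 4 and 5 already load it that way).
@ Hypothetical working-directory layout (all names except gnp82.dat are my own choices): @
@    markov_mod.gss   - the MAXSEEK section above                                        @
@    procs            - the PROCS section above, pulled in by "#include procs;"          @
@    gnp82.dat        - the GNP data file downloaded separately                          @
@ Example 1's load line, adjusted to read the local copy of the data:                    @
load bbbb[capt+1,1] = gnp82.dat;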
Final note: this code requires the GAUSS Optmum application module.