Hello,
I posted a question (on April 15) regarding a problem I have been facing while passing the gradient to the maxlikmt call, and I have been waiting for an answer. The error message points to the following line of maxlikmt.src:
relgtest = (abs(g1).*maxc(abs(x1)'|ones(1,rows(x1))))/maxc(abs(vof1)|1) < c1.tol or abs(g1) < 1e-15;
In the meantime I wanted to check the following: according to the MaxlikMT manual, the gradient function can return an NxK matrix, where K is the number of parameters and N is the number of cases. My question is whether K also includes the parameters that are fixed to some value. I assume that it does (i.e., that K counts the fixed parameters as well), and I was wondering whether that is what is causing the problem. I would appreciate your response regarding the issue. Thanks.
10 Answers
Does your log-likelihood procedure return an Nx1 vector of log-probabilities by observation, or does it return a scalar value? Does that procedure return an NxK matrix of derivatives computed by observation, or is it returning a Kx1 vector or a 1xK vector gradient?
You could identify the conflict by running your problem in the debugger with a breakpoint on that line in maxlikmt.src. Then examine x1 and g1 to see if there is any conflict in their sizes and orientations.
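For reference, here is a minimal sketch of the kind of likelihood procedure being described, using a binary-logit model purely as an illustration (the names lpr, y, x, and "beta" are placeholders, not from the original post), assuming the standard MaxlikMT calling convention of a PV parameter structure, a DS data structure, and an indicator vector:

proc lpr(struct PV p, struct DS d, ind);
    local b, y, x, prob;
    struct modelResults mm;

    b = pvUnpack(p, "beta");          // Kx1 parameter vector
    y = d[1].dataMatrix;              // Nx1 dependent variable (0/1)
    x = d[2].dataMatrix;              // NxK regressors

    prob = 1 ./ (1 + exp(-x*b));      // Nx1 choice probabilities

    if ind[1];
        // Nx1 vector of log-probabilities, one per observation
        mm.function = y .* ln(prob) + (1 - y) .* ln(1 - prob);
    endif;
    if ind[2];
        // NxK matrix of derivatives computed by observation
        mm.gradient = (y - prob) .* x;
    endif;

    retp(mm);
endp;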
Hello Ron,
Thank you for your response.
My log-likelihood procedure returns an Nx1 vector of log-probabilities by observation and an NxK matrix of derivatives computed by observation.
But my question was whether K should include the fixed parameters or not.
Thanks,
Annesha
Derivatives are computed for the fixed values as well. If you are providing derivatives and you compute them only for the free parameters, the run will fail. If you don't want to compute a derivative for a fixed parameter, return a missing value for that parameter (or a vector of missing values if you are computing an NxK matrix of derivatives).
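For example, if the last of the K parameters is fixed and you don't want to compute its derivative, something along these lines would do it (a sketch; g_free and n are placeholder names for the Nx(K-1) matrix of derivatives of the free parameters and the number of observations):

// append a column of missing values for the fixed parameter
mm.gradient = g_free ~ miss(zeros(n, 1), 0);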
Thanks, I am providing derivatives for all the parameters (including those that are fixed).
So I do not know why I am still getting that error.
If you could run your problem in the debugger with a breakpoint set at that line, you could examine x1 and g1 to see if there's a conflict.
The MaxlikMT manual talks about fixed parameters in two ways.
First type of 'fixed parameters'
One is a matrix that contains a mixture of fixed elements and free parameters to be estimated, for example, a diagonal matrix where only the diagonal contains free parameters and the off-diagonal elements are fixed to zero.
This type of fixed matrix element is handled with a mask when setting the start values. The mask is conformable to the matrix passed as the second argument of pvPack* and contains zeros and ones: zeros for elements of the matrix fixed to some value (usually zero) and ones for the free parameters being estimated. The elements of the matrix that are fixed to zero, or to some other value, are not parameters, so it is a misnomer to call them fixed parameters; they are fixed elements of the matrix. With this method a gradient is computed only for the free parameters.
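As a sketch of that masked packing (the matrix name omega and the 3x3 size are just for illustration), a diagonal matrix where only the diagonal elements are free and the off-diagonal elements are fixed at zero could be set up as:

omega = eye(3);                         // start values; off-diagonal zeros stay fixed
mask  = eye(3);                         // 1 = free parameter, 0 = fixed element
p0 = pvPackm(p0, omega, "omega", mask);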
Second type of 'fixed parameters'
The other kind of fixed parameter is a parameter that one might estimate normally, but in the current estimation it is to be fixed to its starting value. This method uses the active member of the maxlikmtControl structure:
struct maxlikmtControl c0;
c0 = maxlikmtControlCreate();
c0.active = { 0, 0, 1, 1, 1, 1, 1, 1, 1 };
With this method you compute a gradient for all of the parameters, whether they are fixed to their starting values or are free parameters to be estimated. For example:
p0 = pvPack(p0, b, "beta");
...
struct maxlikmtControl c0;
c0 = maxlikmtControlCreate();
c0.printIters = 1;

// first 7 are 'active'/'free' parameters
// last two are 'fixed' parameters
c0.active = { 1, 1, 1, 1, 1, 1, 1, 0, 0 };
Sorry for asking questions here: I am learning CMLMT. How do you obtain an NxK matrix of gradients (instead of a Kx1 vector) and an Nx1 vector of log-likelihoods (instead of a scalar)?
Thank you very much.
Shuping
New to this forum too: how can I start a new topic? I click the Ask A Question button, but it leads me to a Yahoo search page.
Thanks.
Shuping
An NxK gradient is the gradient of the log of each individual's choice probability: if you have N observations and K parameters, take the derivative of each individual's log choice probability with respect to each of the K parameters, and that gives you an NxK matrix. Similarly, for the log-likelihood, the Nx1 vector is just the column vector of the logs of the individual choice probabilities; the scalar log-likelihood is obtained by taking the column sum of this vector.
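To make that relationship concrete, a short sketch (logl and grad are placeholder names for the Nx1 and NxK by-observation quantities described above):

scalar_logl = sumc(logl);    // scalar log-likelihood: column sum of the Nx1 vector
score       = sumc(grad);    // Kx1 gradient of the scalar log-likelihood: column sums of the NxK matrix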
To ask a question you just need to click on the Ask a Question tab located at the top.