"Model is overspecified" despite enough observed statistics
This topic contains 7 replies, has 2 voices, and was last updated by Rayne 1 year ago.


May 9, 2019 at 11:09 am #887
Hello!
I’m new using Onyx and also new to SEM, so maybe this is an easy fix!
I’m creating a SEM using Onyx and am constantly being told that my model is overspecified. Now, I checked the FAQ, and my estimated parameters are 57 and my observed statistics are 435, so this shouldn’t be the problem. I also made sure to fix one loading per measurement model. What could be causing this warning?
Best,
Rayne

May 16, 2019 at 1:08 pm #889
Hi Rayne,
welcome to the Onyx community, it’s great to have you!
The overspecification test works numerically, so with complex models (and yours seems to be one, in view of 435 observed statistics :) ), it can happen that the warning is really nothing more than a warning, and the model is absolutely fine. However, it can also happen that the model is empirically overspecified. Could you maybe send me the .xml file of the saved model? Then I could simulate some data and check whether I see where an overspecification may be buried.
Cheers,
Timo
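As a quick sanity check on the counts mentioned above: with only variances and covariances as observed statistics (no means), p manifest variables give p(p+1)/2 unique statistics, and the degrees of freedom are simply observed minus estimated. A minimal sketch; the value p = 29 is inferred from the numbers in the post (29·30/2 = 435), not stated there:

```python
# A model cannot be identified in the classical sense if it estimates more
# parameters than there are observed statistics. With covariances only
# (no means), p manifest variables yield p*(p+1)/2 unique statistics.
p = 29                       # inferred: 29*30/2 = 435, matching the post
observed = p * (p + 1) // 2  # unique variances and covariances
estimated = 57               # free parameters, from the post
df = observed - estimated
print(observed, df)          # 435 378
```

Note that passing this count check is necessary but not sufficient: a model can still be empirically underidentified, which is what the numerical test in Onyx tries to detect.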
May 21, 2019 at 4:14 pm #891
Hello Timo,
thank you so much for your response! I played around with the model and seem to have fixed the problem. In case someone who stumbles across this forum has the same issue, here is what I did:
1) I made sure that every measurement model / every factor had one loading fixed to 1.
2) I made sure that every latent variable has a residual (this is what caused this particular problem).
3) I also made sure that, if I made a measurement model formative (the arrows point from the manifest variables to the latent variable, not vice versa), I deleted the residuals on the manifest variables.
Thank you again for your work, and I hope this can help someone who’s also new to SEM!
Cheers,
Rayne

May 23, 2019 at 11:12 am #892
Hello again,
I ran into the “Model is overspecified” problem again once I added a few paths to allow covariance between certain manifest variables. Notably, the problem only occurs once I connect my data with the model! My sample size is too small to draw definite conclusions (n = 150), but since this is my bachelor thesis and the work is more exploratory, that shouldn’t be too big of a problem. Could the sample size be causing the warning?
https://www.dropbox.com/s/14rxfvuhok0u8cw/SSQ_SEM.xml?dl=0 This is the .xml code of my model.
Cheers,
Rayne

May 24, 2019 at 10:11 am #893
Hi Rayne,
now this is a great-looking model 🙂 !
I’ve run it with simulated data, and that worked without overspecification, so there doesn’t seem to be anything conceptually wrong. Your data may create an empirical overspecification, or it could be that your data runs into a situation which is so close to overspecification that the numerical test mistakes it as such. As long as you get only one solution, or the solutions are virtually identical (you can switch between solutions by pressing ALT+1, ALT+2, and so on; be careful to compare only ML solutions, since you will also be shown LS (= Least Squares) solutions, which will necessarily be different), you’re good.
If not, there are two tricks to avoid empirical overspecification situations. The first is to normalize the data (which seems okay here, since you are not interested in the means). For this, just right-click on an observed variable (or select multiple and do the steps on one of them to reduce the work) and choose “Apply z-transform” in the context menu. This may already solve your problem, and it may also make effects more visible.
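For anyone curious what a z-transform does to the data, here is a minimal sketch of the operation itself (my own illustration, not Onyx's code; whether Onyx divides by the sample or population standard deviation is an assumption here):

```python
import numpy as np

def ztransform(X):
    """Standardize each column to mean 0 and SD 1 (sample SD, ddof=1 assumed)."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

data = np.array([[1.0, 10.0],
                 [2.0, 20.0],
                 [3.0, 30.0]])
Z = ztransform(data)
# Each column of Z now has mean 0 and standard deviation 1, which puts all
# observed variables on a comparable scale and tends to help the optimizer.
```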
The second trick is to do the analogous thing on the latents by fixing all factor variances to one instead of fixing one of the loadings to one.
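The two scaling conventions are statistically equivalent: fixing one loading to 1 or fixing the factor variance to 1 merely rescales the factor, and the model-implied covariance matrix is unchanged. A small numeric illustration for a one-factor model (all loading and variance values invented):

```python
import numpy as np

# One-factor model: implied covariance = lam * psi * lam' + theta
lam = np.array([[1.0], [0.8], [1.2]])   # loadings, first fixed to 1
psi = 2.0                                # free factor variance
theta = np.diag([0.3, 0.4, 0.5])         # residual variances
sigma_loading_fixed = (lam * psi) @ lam.T + theta

# Equivalent scaling: fix the factor variance to 1, free all loadings
lam2 = lam * np.sqrt(psi)
sigma_variance_fixed = lam2 @ lam2.T + theta

print(np.allclose(sigma_loading_fixed, sigma_variance_fixed))  # True
```

Because both parameterizations imply the same covariance matrix, they fit identically; the variance-fixing variant is sometimes numerically better behaved, which is why it can help here.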
Let me know whether this works! If not, and if you can send me an anonymized version of your data set, I can play around with it.
BTW, 150 participants is usually plenty, and fairly impressive for a Bachelor thesis!
Cheers,
Timo
May 24, 2019 at 10:44 am #894
Hello Timo,
thank you for answering so quickly!
Pressing ALT+1, ALT+2, etc. doesn’t really change the model; in fact, it doesn’t change any parameters, which makes me think it isn’t working. Also, when I select “Show best LS estimate”, nothing changes. Is there a way to click through the estimates manually?
I fixed the variance of the latent variables instead of one loading, and z-transformed all observed variables, too.
What would it mean for my sample if Onyx tells me the model is overspecified only based on the sample?
Cheers,
Elisabeth

May 24, 2019 at 10:53 am #895
Hi Elisabeth,
When switching between ML solutions, the title line should change (saying something like “Maximum Likelihood Estimate (best)”, with “best” replaced for the other solutions). You can also choose a solution manually via “Estimation > Select Estimate”; if only one ML solution is shown there (after giving the model some time to find potential alternatives), or if the shown ones have the same parameter values, all is fine, and you can work with the results.
> What would it mean for my sample if Onyx tells me it’s overspecified only based on the sample?
Potentially nothing; since the test for overspecification is purely numerical, it may simply be wrong. If you get different solutions with the same fit value, then you may want to investigate what the differences are (e.g., if you find that the loadings of one factor seem arbitrary, and the factor in fact has almost no variance, then you can conclude that the indicators seem to have no reliable common factor).
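That last point can be made concrete: when a factor's variance is numerically near zero, its loadings have almost no effect on the implied covariance matrix, so wildly different loading values fit the data equally well. A toy illustration with invented numbers:

```python
import numpy as np

theta = np.diag([1.0, 1.0, 1.0])  # residual variances of three indicators
psi = 1e-8                         # near-zero factor variance

lam_a = np.array([[1.0], [0.5], [2.0]])    # one set of loadings
lam_b = np.array([[1.0], [-3.0], [7.0]])   # a very different set

sigma_a = (lam_a * psi) @ lam_a.T + theta
sigma_b = (lam_b * psi) @ lam_b.T + theta

# The implied covariance matrices are numerically indistinguishable, so the
# loadings are empirically arbitrary: the indicators share no reliable
# common factor, and multiple "solutions" fit equally well.
print(np.abs(sigma_a - sigma_b).max())
```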
Cheers,
Timo
May 24, 2019 at 10:57 am #896
Hello Timo,
oh, I see! Yes, that’s why I thought nothing had happened: the second ML solution is the same as the first one, probably only so slightly different that rounding shows the same values. Great, I’m glad I can work with these results now.
Thank you so much for your help!
Cheers,
Elisabeth 