Date: Wed, 20 May 2009 01:54:41 -0300
Sender: "SPSSX(r) Discussion"
From: Hector Maletta
Subject: Re: Classification: LPA and cluster analysis

With hierarchical cluster analysis (the CLUSTER command in SPSS) there is no single solution: the procedure starts with N clusters of 1 member each, and finishes with one cluster including all N cases as members. Therefore it is not surprising that you found hierarchical clustering to come up with a "one cluster solution": that is just the last step in the procedure, not "the solution". What you have to do next is examine the various "solutions", from 1 to N clusters, including all the intermediate results (the penultimate one was a solution with two clusters), to see whether any of them is to your liking.

Remember, in all this, that clustering is not a parametric but a heuristic procedure. There is no "correct" solution. You can check, externally, which clustering solution is better for your particular purposes. For instance, if you want clusters that are maximally homogeneous internally, and maximally distinct from each other, on some external criterion variable, you can run a one-way ANOVA on that variable for each of the different clustering solutions and see which is best for that purpose. Likewise, if you want to have a moderate number of clusters, say from 2 to 6, you can restrict yourself to those "solutions" and try to choose the one you judge best.

As each procedure uses a different algorithm to include or exclude cases in/from clusters, it is not surprising either that solutions are not necessarily coincident case by case. Even within the same procedure, say hierarchical (CLUSTER) or k-means (QUICK CLUSTER), using different criteria may end up with different clustering decisions for specific cases. Such is the nature of clustering.

Hector

_____

From: SPSSX(r) Discussion [mailto:SPSSX-L@LISTSERV.UGA.EDU] On Behalf Of Dale Glaser
Sent: 20 May 2009 01:09
To: SPSSX-L@LISTSERV.UGA.EDU
Subject: Classification: LPA and cluster analysis

Good evening all.......I would be interested in gathering your insights into classification differences in hierarchical and nonhierarchical cluster analysis vs. latent profile analysis. I obtained (n = 111) a 2-class model using Mplus (8 continuous level predictors), and decided to compare the classification with cluster analysis in SPSS. Using the k-means QuickCluster option and constraining to a 2-cluster solution, the classification results were very similar to the latent profile analysis. However, using the hierarchical approach (with Euclidean distance measure and average linkage method), essentially a one-cluster solution results.

I was searching some texts/articles today trying to find out why there may be congruity between the finite mixture modeling and nonhierarchical cluster analysis methods but not necessarily so with the hierarchical approach, but I couldn't find any sources. One of my multivariate texts did state that, based on the seed/type of partitioning as well as the type of clustering algorithm, it may not be atypical to have discordance between the two types of clustering methods (hierarchical vs. nonhierarchical), so I wonder if this extends to finite mixture modeling? Any insights would be most appreciated. thank you...............

Dale

Dale Glaser, Ph.D.
Principal--Glaser Consulting
Lecturer/Adjunct Faculty--SDSU/USD/AIU
President, San Diego Chapter of American Statistical Association
3115 4th Avenue, San Diego, CA 92103
phone: 619-220-0602
fax: 619-220-0412
email: glaserconsult@sbcglobal.net
website: www.glaserconsult.com
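To make Hector's suggestion concrete in SPSS syntax: the intermediate hierarchical "solutions" can be saved as membership variables and then compared against a constrained k-means run, or against an external criterion via one-way ANOVA. A minimal sketch, assuming eight clustering variables named v1 to v8 and an external criterion variable named outcome (both names are placeholders, not from the original posts):

```
* Hierarchical clustering: print the agglomeration schedule and save the
  2- to 6-cluster solutions (creates variables CLU6_1 ... CLU2_1).
CLUSTER v1 TO v8
  /MEASURE=EUCLID
  /METHOD=BAVERAGE
  /PRINT=SCHEDULE
  /SAVE=CLUSTER(2,6).

* K-means constrained to a 2-cluster solution (creates QCL_1).
QUICK CLUSTER v1 TO v8
  /CRITERIA=CLUSTER(2)
  /SAVE=CLUSTER.

* Case-by-case agreement between the two 2-cluster classifications.
CROSSTABS /TABLES=CLU2_1 BY QCL_1.

* Compare competing solutions on an external criterion variable.
ONEWAY outcome BY CLU2_1.
ONEWAY outcome BY CLU3_1.
```

The crosstab shows directly how many cases the hierarchical and k-means 2-cluster solutions classify the same way, and the ANOVAs give one external yardstick for choosing among the saved hierarchical solutions.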
