This is an automated email from the ASF dual-hosted git repository.

paulk pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/groovy-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 5f095d5  wording tweak
5f095d5 is described below

commit 5f095d544a29a7d47f1a4f6bab49025dba257d21
Author: Paul King <[email protected]>
AuthorDate: Fri May 2 11:47:23 2025 +1000

    wording tweak
---
 site/src/site/blog/whisky-revisited.adoc | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/site/src/site/blog/whisky-revisited.adoc b/site/src/site/blog/whisky-revisited.adoc
index 23b70ef..4110e4f 100644
--- a/site/src/site/blog/whisky-revisited.adoc
+++ b/site/src/site/blog/whisky-revisited.adoc
@@ -192,8 +192,8 @@ Let's now cluster the distilleries using k-means, and place the cluster allocati
 [source,groovy]
 ----
 def ml = Underdog.ml()
-def d = df[features] as double[][]
-def clusters = ml.clustering.kMeans(d, nClusters: 3)
+def data = df[features] as double[][]
+def clusters = ml.clustering.kMeans(data, nClusters: 3)
 df['Cluster'] = clusters.toList()
 ----
 
@@ -279,8 +279,8 @@ let's project our data onto 2 dimensions using PCA and store those projections b
 
 [source,groovy]
 ----
-def pca = ml.features.pca(d, 2)
-def projected = pca.apply(d)
+def pca = ml.features.pca(data, 2)
+def projected = pca.apply(data)
 df['X'] = projected*.getAt(0)
 df['Y'] = projected*.getAt(1)
 ----
@@ -301,7 +301,7 @@ The output looks like this:
 
 image:img/underdogClusterKmeans.png[scatter plot kmeans,50%]
 
-We can go and change our clustering algorithm, e.g. `ml.clustering.agglomerative(d, nClusters: 3)`,
+We can go and change our clustering algorithm, e.g. `ml.clustering.agglomerative(data, nClusters: 3)`,
 in which case the cluster allocation counts will look like this:
 
 ----
@@ -365,7 +365,7 @@ the whiskies which are somewhat _fruity_ and somewhat _sweet_ in flavor:
 
 [source,groovy]
 ----
-def selected= m.subset{ it.Fruity > 0.5 && it.Sweetness > 0.5 }
+def selected = m.subset { it.Fruity > 0.5 && it.Sweetness > 0.5 }
 println selected.dimensions()
 println selected.head(10)
 ----
@@ -421,7 +421,7 @@ Let's apply K-Means, and place the allocated clusters back into the matrix:
 ----
 def iterations = 20
 def data = m.selectColumns(*features) as double[][]
-def model = KMeans.fit(data,3, iterations)
+def model = KMeans.fit(data, 3, iterations)
 m['Cluster'] = model.group().toList()
 ----
 
