This is an automated email from the ASF dual-hosted git repository.
wusheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/skywalking-website.git
The following commit(s) were added to refs/heads/master by this push:
new e86cfb14441 Publish two blogs. `How AI Changed the Economics of
Architecture` and `SkyWalking GraalVM Distro: Design and Benchmarks`.
e86cfb14441 is described below
commit e86cfb144413e71714ed37b07f65dd4354b6a8c9
Author: Wu Sheng <[email protected]>
AuthorDate: Sun Mar 15 11:24:16 2026 +0800
Publish two blogs. `How AI Changed the Economics of Architecture` and
`SkyWalking GraalVM Distro: Design and Benchmarks`.
---
.../index.md | 91 +++++++++
.../index.md | 220 +++++++++++++++++++++
2 files changed, 311 insertions(+)
diff --git
a/content/blog/2026-03-13-how-ai-changed-the-economics-of-architecture/index.md
b/content/blog/2026-03-13-how-ai-changed-the-economics-of-architecture/index.md
new file mode 100644
index 00000000000..ffc235159e6
--- /dev/null
+++
b/content/blog/2026-03-13-how-ai-changed-the-economics-of-architecture/index.md
@@ -0,0 +1,91 @@
+---
+title: "How AI Changed the Economics of Architecture"
+date: 2026-03-13
+author: Sheng Wu
+description: "A case study in how AI changed the economics of architecture in
a mature project by making better designs cheaper to prototype, validate, and
refine."
+tags:
+- GraalVM
+- Native Image
+- Performance
+- Cloud Native
+- Serverless
+- AI
+- Claude
+---
+
+*SkyWalking GraalVM Distro: A case study in turning runnable PoCs into a
repeatable migration pipeline.*
+
+The most important lesson from this project is not that AI can generate a
large amount of code. It is that AI changes the economics of architecture. When
runnable PoCs become cheap to build, compare, discard, and rebuild, architects
can push further toward the design they actually want instead of stopping early
at a compromise they can afford to implement.
+
+That shift matters a lot in mature open source systems. Apache SkyWalking OAP
has long been a powerful and production-proven observability backend, but it
also carries all the realities of a large Java platform: runtime bytecode
generation, reflection-heavy initialization, classpath scanning, SPI-based
module wiring, and dynamic DSL execution that are friendly to extensibility but
hostile to GraalVM native image.
+
+**SkyWalking GraalVM Distro** is the result of treating that challenge as a
design-system problem instead of a one-off porting exercise. The goal was not
only to make OAP run as a native binary, but to turn GraalVM migration itself
into a repeatable automation pipeline that can stay aligned with upstream
evolution.
+
+For the full technical design, benchmark data, and getting-started guide, see
the companion post: [SkyWalking GraalVM Distro: Design and
Benchmarks](../2026-03-13-skywalking-graalvm-distro-design-and-benchmarks/index.md).
+
+## From Paused Idea to Runnable System
+
+This journey actually began years ago. Shortly after this repository was created, [yswdqz](https://github.com/yswdqz) spent several months exploring the GraalVM transition. The work proved much harder in practice than the individual GraalVM limitations suggested on paper, and it eventually paused for years.
+
+That pause is important. The missing ingredient was not ideas. Mature
maintainers usually have more ideas than time. The real constraint was
implementation economics. Even when the architect can see several promising
directions, limited developer resources force an earlier trade-off: choose the
path that is cheapest to implement, not necessarily the path that is cleanest,
most reusable, or most future-proof.
+
+This is a very common reality, not an exceptional one. In open source
communities, much of the work depends on volunteers or limited company
sponsorship. In commercial products, the pressure is different but the
constraint is still real: roadmap commitments, staffing limits, and delivery
deadlines keep engineering resources tight. In both worlds, good ideas are
often abandoned not because they are wrong, but because they are too expensive
to validate and implement thoroughly.
+
+There is another constraint that matters just as much: the architect is
usually also a very senior engineer, not a full-time implementation machine.
That means limited personal coding energy, fragmented time, and a constant need
to explain ideas to other senior engineers before the code exists.
Traditionally, that explanation happens through diagrams, documents, and
conversations. It is slow, lossy, and unpredictable. We all know some version
of the Telephone Game: even simple words are [...]
+
+What changed in late 2025 was that AI engineering made multiple runnable ideas
affordable. Instead of picking an early compromise because implementation
capacity was scarce, we could switch repeatedly between designs, validate them
with code, discard weak directions quickly, and keep iterating until the
architecture became solid, practical, and efficient enough to hold.
+
+That design freedom was critical. GraalVM documentation gives clear guidance
on isolated limitations, but a mature OSS platform hits them as a connected
system. Fixing only one dynamic mechanism is not enough. To make native image
practical, we had to turn whole categories of runtime behavior into build-time
artifacts and automated metadata generation.
+
+There was also a very concrete mountain in front of us in the early history of this distro. In the first several commits of the repository, upstream SkyWalking still relied heavily on Groovy for LAL, MAL, and Hierarchy scripts. In theory, that was just one more unsupported runtime-heavy component. In practice, Groovy was the biggest obstacle in the whole path. It represented not only script execution, but a whole dynamic model that was deeply convenient on the JVM side and deeply unfriendly to native image.
+
+To bridge the gap, we re-architected the core engines of OAP around an AOT-first model. Earlier experiments had to confront Groovy-era runtime behavior directly and explore alternative script-compilation approaches to get around it. The finalized direction went further: align with the upstream compiler pipeline, move dynamic generation to build time, and add automation so the migration stays controllable as upstream keeps moving. Concretely, that meant turning OAL, MAL, LAL, and Hierarchy rules into precompiled build-time artifacts.
+
+## AI Speed Changed the Design Loop
+
+The scale of this transformation was not only about coding faster. AI changed
the loop between idea, prototype, validation, and redesign. We could build
runnable PoCs for different approaches, throw away weak ones quickly, and
preserve the promising abstractions until they formed a coherent migration
system.
+
+That does not reduce the role of human architecture. It raises its value. Human judgment was still required to decide what should become build-time, what should stay configurable, where to introduce same-FQCN replacements, how to keep upstream sync controllable, and which abstractions were worth preserving. But AI speed made it realistic to pursue those better designs instead of settling for a simpler compromise too early.
+
+This is the real change in the economics of architecture. In the past, an
architect might already know the cleaner direction, but limited engineering
capacity often forced that vision back toward a cheaper compromise. Now the
architect can return much closer to being a fast developer again: building
code, shaping high-abstraction interfaces, and using design patterns to prove
the vision directly in the real world.
+
+That changes communication as much as implementation. In open source, we often
say, `talk is cheap, show me the code`. With AI engineering, showing the code
becomes much more straightforward. The design no longer depends so heavily on a
slow top-down translation from idea to documents to interpretation to
implementation. The code can appear earlier, and it can run earlier.
+
+Other senior engineers benefit from this too. They do not need to reconstruct
the whole design only from diagrams, meetings, or long explanations. They can
review the actual abstraction, see the behavior in code, run it, challenge it,
and refine it from something concrete. That makes architectural collaboration
faster, clearer, and less lossy.
+
+This is also where I think the current AI discussion is often noisy. Many
projects are fun, surprising, and worth exploring, but advanced engineering
work is not improved merely by attaching an agent to a codebase. The important
question is not which demo looks most magical. The important question is which
engineering capabilities are actually being accelerated without losing the
discipline of software development itself.
+
+For architects and senior engineers, the capabilities that mattered most here
were:
+- **Fast comparative prototyping:** Building several runnable approaches in
code instead of defending one idea with slides and documents.
+- **Large-scale code comprehension:** Reading across many modules quickly
enough to keep the whole system in view.
+- **Systematic refactoring:** Converting reflection-heavy or runtime-dynamic
paths into designs that fit AOT constraints.
+- **Automation construction:** When a migration step must be repeated every
upstream sync, doing it manually once is already expensive. Doing it manually
again next time is even more expensive. AI made it practical to invest in
generators, inventories, consistency checks, and drift detectors that turn
repeated manual work into repeatable automation.
+- **Review at breadth:** Checking edge cases, compatibility boundaries, and
repeatability across a large surface area.
+
+Those capabilities were visible in the resulting design. Same-FQCN
replacements created a controlled boundary for GraalVM-specific behavior.
Reflection metadata was generated from build outputs instead of maintained as a
hand-written guess list. Inventories and drift detectors turned upstream sync
from a vague maintenance risk into an explicit engineering workflow.
+
+For junior engineers, I think the lesson is equally important. AI does not
remove the need to learn architecture, invariants, interfaces, testing, or
maintenance. It makes those skills more valuable, because they determine
whether accelerated implementation produces a durable system or just more code
faster. The leverage comes from engineering judgment, not from novelty.
+
+**Claude Code** and **Gemini AI** acted as engineering accelerators throughout
this process. In the GraalVM Distro specifically, they helped us:
+- **Explore migration strategies as running code:** Instead of debating which
approach might work, we built and compared multiple real prototypes, discarded
the weak ones, and kept what held up.
+- **Refactor reflection-heavy and dynamic code paths:** Replace
runtime-hostile patterns with AOT-friendly alternatives across the codebase.
+- **Make upstream sync sustainable:** Every time the distro pulls from
upstream SkyWalking, metadata scanning, config regeneration, and recompilation
must happen again. AI helped build the pipeline so that each sync is a
controlled, largely automated process rather than a fresh manual effort that
grows longer each time.
+- **Review logic and edge cases at scale:** Especially in places where feature
parity mattered more than raw implementation speed.
+
+The result was not just a large rewrite. It was a repeatable system:
precompilers, manifest-driven loading, reflection-config generation,
replacement boundaries, and drift detectors that make upstream migration
reviewable and automatable.
+
+For the broader methodology behind this style of development, see [Agentic
Vibe Coding in a Mature OSS
Project](https://builder.aws.com/content/3AgtzlikuD9bSUJrWDCjGW5Q5nW/agentic-vibe-coding-in-a-mature-oss-project-what-worked-what-didnt).
This post is the next step in that story: not only enhancing an active mature
codebase, but reviving a paused effort and making it actually runnable.
+
+## What Actually Changed
+
+The most important outcome of this project is not a benchmark table. The
benchmark results belong to the distro itself, and they matter because they
prove the system is real. But for this post, the deeper result is
methodological: AI engineering changed how architecture could be explored,
validated, and refined.
+
+Instead of treating architecture as a mostly document-driven activity followed
by a long and expensive implementation phase, we were able to move much faster
between idea, prototype, comparison, and redesign. That made it realistic to
pursue higher-abstraction solutions, preserve cleaner boundaries, and build the
automation needed to keep the migration maintainable over time.
+
+The technical evidence for that work is the SkyWalking GraalVM Distro itself: not only a runnable system, but a migration pipeline expressed as precompilers, generated reflection metadata, controlled replacement boundaries, and drift checks. The benchmark data matter because they prove the system works in practice, but the architectural result is that the migration became a repeatable system rather than a one-time port. For detailed benchmark methodology, per-pod data, and the full technical design, see the companion post: [SkyWalking GraalVM Distro: Design and Benchmarks](../2026-03-13-skywalking-graalvm-distro-design-and-benchmarks/index.md).
+
+The project is hosted at
[apache/skywalking-graalvm-distro](https://github.com/apache/skywalking-graalvm-distro).
We invite the community to test it, report issues, and help move it toward
production readiness.
+
+For me, the deeper takeaway is broader than this distro. AI engineering does
not make architecture less important. It makes architecture more worth
pursuing. When implementation speed rises enough, we can afford to test more
ideas in code, keep the good abstractions, and build systems that would
previously have been judged too expensive to finish well.
+
+For senior engineers, that means the bottleneck shifts away from raw typing
speed and toward taste, system judgment, and the ability to define stable
boundaries. For junior engineers, it means the path forward is not to chase
every exciting AI workflow, but to become stronger at the fundamentals that let
acceleration compound: understanding requirements, reading unfamiliar systems,
questioning assumptions, and recognizing what must remain correct as everything
around it changes. AI chang [...]
diff --git
a/content/blog/2026-03-13-skywalking-graalvm-distro-design-and-benchmarks/index.md
b/content/blog/2026-03-13-skywalking-graalvm-distro-design-and-benchmarks/index.md
new file mode 100644
index 00000000000..90bb7f8190e
--- /dev/null
+++
b/content/blog/2026-03-13-skywalking-graalvm-distro-design-and-benchmarks/index.md
@@ -0,0 +1,220 @@
+---
+title: "SkyWalking GraalVM Distro: Design and Benchmarks"
+date: 2026-03-13
+author: Sheng Wu
+description: "A technical deep-dive into SkyWalking GraalVM Distro — how we
turned a mature, reflection-heavy Java observability backend into a native
binary with a repeatable migration pipeline."
+tags:
+- GraalVM
+- Native Image
+- Performance
+- Cloud Native
+- Serverless
+- BanyanDB
+---
+
+*A technical deep-dive into how we migrated Apache SkyWalking OAP to GraalVM
Native Image — not as a one-off port, but as a repeatable pipeline that stays
aligned with upstream.*
+
+For the broader story of how AI engineering made this project economically
viable, see [How AI Changed the Economics of
Architecture](/blog/2026-03-13-how-ai-changed-the-economics-of-architecture/).
+
+## Why GraalVM Is Not Optional
+
+GraalVM Native Image compiles Java applications Ahead-of-Time (AOT) into
standalone executables. For an observability backend like SkyWalking OAP, this
is not a performance optimization — it is an operational necessity.
+
+An observability platform must be the most reliable component in the infrastructure. It has to survive the failures it is supposed to observe. In cloud-native environments where workloads scale, migrate, and restart constantly, the backend that watches everything cannot itself be a slow, heavy process that takes hundreds of milliseconds to recover and holds gigabytes of memory while idle.
+
+Our benchmarks make the case concrete:
+
+- **Startup:** ~5 ms vs ~635 ms. In a Kubernetes cluster where an OAP pod gets
evicted or rescheduled, a 635 ms gap means lost telemetry — traces, metrics,
and logs that arrive during that window are simply dropped. At 5 ms, the new
pod is receiving data before most clients even notice the disruption.
+- **Idle memory:** ~41 MiB vs ~1.2 GiB. Observability backends run 24/7. In a
multi-tenant or edge deployment, a 97% reduction in baseline RSS is the
difference between fitting the observability stack on a small node and needing
a dedicated one.
+- **Memory under load:** ~629 MiB vs ~2.0 GiB at 20 RPS. A 70% reduction at
production-like traffic means fewer nodes, lower cloud bills, and more headroom
before the backend itself becomes a scaling bottleneck.
+- **No warm-up penalty:** Peak throughput is available from the first request.
The JVM's JIT compiler needs minutes of traffic before it optimizes hot paths —
during that window, tail latency is worse and data processing lags behind. A
native binary has no such phase.
+- **Smaller attack surface:** No JDK runtime means fewer CVEs to track and
patch. For a component that ingests data from every service in the cluster,
that matters.
+
+These are not incremental improvements. They change what deployment topologies
are practical. Serverless observability backends, sidecar-model collectors,
edge nodes with tight memory budgets — all become realistic when the backend is
this light and this fast.
+
+## The Challenge: A Mature, Dynamic Java Platform
+
+SkyWalking OAP carries all the realities of a large Java platform: runtime
bytecode generation, reflection-heavy initialization, classpath scanning,
SPI-based module wiring, and dynamic DSL execution. These patterns are friendly
to extensibility but hostile to GraalVM native image.
+
+The documented GraalVM limitations are only the beginning. In a mature OSS
platform, those limitations are deeply entangled with years of runtime design
decisions. Standard GraalVM native images struggle with runtime class
generation, reflection, dynamic discovery, and script execution — all of which
had deep roots in SkyWalking OAP.
+
+There was also a very concrete mountain in the early history of this distro.
Upstream SkyWalking relied heavily on Groovy for LAL, MAL, and Hierarchy
scripts. In theory, that was just one more unsupported runtime-heavy component.
In practice, Groovy was the biggest obstacle in the whole path. It represented
not only script execution, but a whole dynamic model that was deeply convenient
on the JVM side and deeply unfriendly to native image.
+
+## The Design Goal: Make Migration Repeatable
+
+The final design is not just "run native-image successfully." It is a system
that keeps migration work repeatable:
+
+1. **Pre-compile runtime-generated assets at build time.** OAL, MAL, LAL,
Hierarchy rules, and meter-related generated classes are compiled during the
build and packaged as artifacts instead of being generated at startup.
+2. **Replace dynamic discovery with deterministic loading.** Classpath
scanning and runtime registration paths are converted into manifest-driven
loading.
+3. **Reduce runtime reflection and generate native metadata from the build.**
Reflection configuration is produced from actual manifests and scanned classes
instead of being maintained as a hand-written guess list.
+4. **Keep the upstream sync boundary explicit.** Same-FQCN replacements are
intentionally packaged, inventoried, and guarded with staleness checks.
+5. **Make drift visible immediately.** If upstream providers, rule files, or
replaced source files change, tests fail and force explicit review.
+
+That is the architectural shift that matters most. Reusable abstraction and
foresight did not become less important in the AI era. They became more
important, because they determine whether AI speed produces a maintainable
system or just a fast-growing pile of code.
+
+## Turning Runtime Dynamism into Build-Time Assets
+
+SkyWalking OAP has several dynamic subsystems that are natural in a JVM world
but problematic for native image:
+
+- OAL generates classes at runtime.
+- LAL, MAL, and Hierarchy rules depend on runtime compilation and were historically tied to Groovy-heavy runtime behavior, which became one of the biggest practical blockers in the early distro work.
+- Guava-based classpath scanning discovers annotations, dispatchers,
decorators, and meter functions.
+- SPI-based module/provider discovery expects a more dynamic runtime
environment.
+- YAML/config initialization and framework integrations depend on reflective
access.
+
+In SkyWalking GraalVM Distro, these are not solved one by one as isolated
patches. They are pulled into a build-time pipeline.
+
+The precompiler runs the DSL engines during the build, exports generated
classes, writes manifests, serializes config data, and generates native-image
metadata. That means startup becomes class loading and registration, not
runtime code generation. The runtime path is simpler because the build path
became richer.
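
One way to picture the manifest idea: the build writes out the fully-qualified names of every generated class, and startup reduces to loading that list. The file name and entries below are purely illustrative, not the distro's actual manifest format.

```text
# oal-classes.manifest -- written by the precompiler (illustrative only)
org.apache.skywalking.oap.server.core.analysis.metrics.ServiceCpmMetrics
org.apache.skywalking.oap.server.core.analysis.metrics.ServiceRespTimeMetrics
org.apache.skywalking.oap.server.core.analysis.metrics.ServiceSlaMetrics
```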
+
+This is also why the project is more than a performance exercise. The design
goal was to move complexity into a place where it is easier to verify, easier
to automate, and easier to repeat.
+
+## Same-FQCN Replacements as a Controlled Boundary
+
+One of the most practical design choices in this distro is the use of
same-FQCN replacement classes. We do not rely on vague startup tricks or
undocumented ordering assumptions. Instead, the GraalVM-specific jars are
repackaged so the original upstream classes are excluded and the replacement
classes occupy the exact same fully-qualified names.
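
In Maven terms, one way to express that repackaging is a shade-style filter that drops the upstream class file so the replacement with the same FQCN becomes the only copy on the classpath. This is an illustrative sketch; the artifact and class names are hypothetical and this is not the distro's actual build configuration.

```xml
<!-- Illustrative only: exclude the upstream class so the same-FQCN
     replacement bundled in this module becomes the single definition. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <filters>
      <filter>
        <artifact>org.apache.skywalking:oal-rt</artifact>
        <excludes>
          <exclude>org/apache/skywalking/oal/rt/OALEngine.class</exclude>
        </excludes>
      </filter>
    </filters>
  </configuration>
</plugin>
```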
+
+This matters for maintainability. It creates a very clear boundary:
+
+- the upstream class still defines the behavior contract,
+- the GraalVM replacement provides a compatible implementation strategy,
+- and the packaging makes that swap explicit.
+
+For example, OAL loading changes from runtime compilation into manifest-driven
loading of precompiled classes. Similar replacements handle MAL and LAL DSL
loading, module wiring, config initialization, and several reflection-sensitive
paths. The goal is not to fork everything. The goal is to replace only the
places where the runtime model is fundamentally unfriendly to native image.
+
+That boundary is then guarded by tests that hash the upstream source files
corresponding to the replacements. When upstream changes one of those files,
the build fails and tells us exactly which replacement needs review. This is
what turns "keeping up with upstream" from an anxiety problem into a visible
engineering task.
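
As a rough sketch of that guard (assuming GNU coreutils `sha256sum`; the file names here are hypothetical, not the distro's actual layout), the check can be as small as:

```bash
# Record once: hash every upstream source file that has a same-FQCN
# replacement, in sha256sum's "<hash>  <path>" format:
#   sha256sum UpstreamOALEngine.java > replaced-sources.sha256
#
# Then, on every upstream sync, fail loudly if any of those files changed.
check_drift() {
  baseline="$1"                  # baseline file in sha256sum format
  if sha256sum --status -c "$baseline"; then
    echo "in-sync"
  else
    echo "drift"                 # upstream edited a replaced source file
    return 1
  fi
}
```

A real implementation would drop `--status` so the failure names the exact file, and run inside the test suite so the broken build points straight at the replacement that needs review.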
+
+## Reflection Config Is Generated, Not Guessed
+
+In many GraalVM migrations, `reflect-config.json` becomes a manually
accumulated artifact. It grows over time, gets stale, and nobody is fully sure
whether it is complete or why each entry exists. That approach does not scale
well for a large, evolving OSS platform.
+
+In this distro, reflection metadata is generated from the build outputs and
scanned classes:
+
+- manifests for OAL, MAL, LAL, Hierarchy, and meter-generated classes,
+- annotation-scanned classes,
+- Armeria HTTP handlers,
+- GraphQL resolvers and schema-mapped types,
+- and accepted `ModuleConfig` classes.
+
+This is a much healthier model. Instead of asking people to remember every
reflective access path, the system derives reflection metadata from the actual
migration pipeline. The build becomes the source of truth.
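
For reference, each generated entry follows GraalVM's standard `reflect-config.json` schema. The example below shows the shape of one entry; which classes actually appear is decided by the build, and `ModuleConfig` here stands in for any accepted config class.

```json
[
  {
    "name": "org.apache.skywalking.oap.server.library.module.ModuleConfig",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true,
    "allDeclaredFields": true
  }
]
```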
+
+## Keeping Upstream Sync Practical
+
+If this distro were only a one-time engineering sprint, it would be much less
interesting. The real challenge is keeping it alive while upstream SkyWalking
continues to evolve.
+
+That is why the repo includes explicit inventories and drift detectors:
+
+- provider inventories that force new upstream providers to be categorized,
+- rule-file inventories that force new DSL inputs to be acknowledged,
+- SHA watchers for precompiled YAML inputs,
+- and SHA watchers for upstream source files with GraalVM-specific
replacements.
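
The inventory gates can be pictured with a small sketch (not the distro's actual tooling; file names are hypothetical): anything the build discovers that the committed inventory does not list fails the check, so a new upstream provider must be explicitly acknowledged before it ships.

```bash
# Compare what the build discovered against the committed inventory.
check_inventory() {
  discovered="$1"   # one provider per line, produced by the build
  inventory="$2"    # committed file listing every categorized provider
  missing=$(comm -23 <(sort -u "$discovered") <(sort -u "$inventory"))
  if [ -n "$missing" ]; then
    printf 'uncategorized providers:\n%s\n' "$missing" >&2
    return 1
  fi
  echo "inventory complete"
}
```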
+
+Good abstraction is not only about elegant code structure. It is about
choosing a migration design that can survive contact with future change.
+
+## Benchmark Results
+
+We benchmarked the standard JVM OAP against the GraalVM Distro on an Apple M3
Max (macOS, Docker Desktop, 10 CPUs / 62.7 GB), both connecting to BanyanDB.
+
+### Boot Test (Docker Compose, no traffic, median of 3 runs)
+
+| Metric | JVM OAP | GraalVM OAP | Delta |
+|--------|---------|-------------|-------|
+| Cold boot startup | 635 ms | 5 ms | ~127x faster |
+| Warm boot startup | 630 ms | 5 ms | ~126x faster |
+| Idle RSS | ~1.2 GiB | ~41 MiB | ~97% reduction |
+
+Boot time is measured from OAP's first application log timestamp to the
`listening on 11800` log line (gRPC server ready).
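
The measurement itself is easy to reproduce. The sketch below assumes log lines prefixed with an epoch-milliseconds timestamp, which is not OAP's real log format; it only illustrates the first-line-to-ready-line delta.

```bash
# Boot time = timestamp of the gRPC-ready line minus the first log line's
# timestamp. Assumes each line starts with epoch milliseconds (illustrative).
boot_time_ms() {
  log="$1"
  first=$(awk 'NR==1 { print $1 }' "$log")
  ready=$(awk '/listening on 11800/ { print $1; exit }' "$log")
  echo $(( ready - first ))
}
```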
+
+### Under Sustained Load (Kind + Istio 1.25.2 + Bookinfo at ~20 RPS, 2 OAP
replicas)
+
+30 samples at 10s intervals after 60s warmup.
+
+| Metric | JVM OAP | GraalVM OAP | Delta |
+|--------|---------|-------------|-------|
+| CPU median (millicores) | 101 | 68 | -33% |
+| CPU avg (millicores) | 107 | 67 | -37% |
+| Memory median (MiB) | 2068 | 629 | **-70%** |
+| Memory avg (MiB) | 2082 | 624 | **-70%** |
+
+Both variants reported identical entry-service CPM, confirming equivalent
traffic processing capability.
+
+Service metrics collected every 30s via swctl for all discovered services:
+`service_cpm`, `service_resp_time`, `service_sla`, `service_apdex`,
`service_percentile`.
+
+Full benchmark scripts and raw data are in the
[benchmark/](https://github.com/apache/skywalking-graalvm-distro/tree/main/benchmark)
directory of the distro repository.
+
+## Current Status
+
+The project is a runnable experimental distribution, hosted in its own
repository:
[apache/skywalking-graalvm-distro](https://github.com/apache/skywalking-graalvm-distro).
+
+The current distro intentionally focuses on a modern, high-performance
operating model:
+
+- **Storage:** BanyanDB
+- **Cluster modes:** Standalone and Kubernetes
+- **Configuration:** none or Kubernetes ConfigMap
+- **Runtime model:** fixed module set, precompiled assets, and AOT-friendly
wiring
+
+This focus is deliberate. A repeatable migration system starts by making a
clear scope runnable, then expanding without losing control.
+
+## Getting Started
+
+Because the SkyWalking GraalVM Distro is designed for peak performance, it is
optimized to work with **BanyanDB** as its storage backend. The current
published image is available on Docker Hub, and you can boot the stack using
the following `docker-compose.yml`.
+
+```yaml
+version: '3.8'
+
+services:
+  banyandb:
+    image: ghcr.io/apache/skywalking-banyandb:e1ba421bd624727760c7a69c84c6fe55878fb526
+    container_name: banyandb
+    restart: always
+    ports:
+      - "17912:17912"
+      - "17913:17913"
+    command: standalone --stream-root-path /tmp/stream-data --measure-root-path /tmp/measure-data --measure-metadata-cache-wait-duration 1m --stream-metadata-cache-wait-duration 1m
+    healthcheck:
+      test: ["CMD", "sh", "-c", "nc -nz 127.0.0.1 17912"]
+      interval: 5s
+      timeout: 10s
+      retries: 120
+
+  oap:
+    image: apache/skywalking-graalvm-distro:0.1.1
+    container_name: oap
+    depends_on:
+      banyandb:
+        condition: service_healthy
+    restart: always
+    ports:
+      - "11800:11800"
+      - "12800:12800"
+    environment:
+      SW_STORAGE: banyandb
+      SW_STORAGE_BANYANDB_TARGETS: banyandb:17912
+      SW_HEALTH_CHECKER: default
+    healthcheck:
+      test: ["CMD-SHELL", "nc -nz 127.0.0.1 11800 || exit 1"]
+      interval: 5s
+      timeout: 10s
+      retries: 120
+
+  ui:
+    image: ghcr.io/apache/skywalking/ui:10.3.0
+    container_name: ui
+    depends_on:
+      oap:
+        condition: service_healthy
+    restart: always
+    ports:
+      - "8080:8080"
+    environment:
+      SW_OAP_ADDRESS: http://oap:12800
+```
+
+Simply run:
+```bash
+docker compose up -d
+```
+
+We invite the community to test this new distribution, report issues, and help
us move it toward a production-ready state.
+
+*Special thanks to the GraalVM team for the technology foundation.*