In 0.8.2 we gave plugins a heartbeat. In 0.8.3 we let GoatFlow act on it.
That is the quiet shape of this release: the platform now recovers more of its own state, survives more deployment topologies, and asks users for fewer fragile secrets. Login gets passkeys and hardware security keys. Plugins get auto-restart with backoff and a crash-loop guard. PWA/plugin UI routing gets a real cache policy. The release pipeline gets a fresh vulnerability sweep before the tag goes out.

0.8.3 is not one giant feature. It is a trust release. Trust that a login ceremony can finish on a different app instance than the one that started it. Trust that a plugin with a dead gRPC channel will not sit there looking healthy forever. Trust that a service worker will not cache an SSE stream and ruin somebody’s afternoon. Small promises, made concrete.
Passkeys
GoatFlow now supports passwordless passkey login for both agents and customers.
Registered WebAuthn credentials are created as resident, user-verified credentials, so the browser or platform authenticator can identify the account during sign-in. The agent and customer login pages both expose passkey buttons, but the selected credential decides which account type to open. A customer passkey opens a customer session; an agent passkey opens an agent session.

The important part is where the ceremony state lives. Begin and finish no longer have to hit the same process.
```
browser                          GoatFlow instance A          database
   |                                      |                      |
   |  passkey begin                       |                      |
   |------------------------------------->|                      |
   |                                      |  store ceremony      |
   |                                      |--------------------->|
   |  challenge + pending cookie          |                      |
   |<-------------------------------------|                      |
   |                                                             |
   |                                 GoatFlow instance B         |
   |  passkey finish                      |                      |
   |------------------------------------->|                      |
   |                                      |  consume ceremony    |
   |                                      |--------------------->|
   |                                      |  resolve credential type
   |  session cookie                      |                      |
   |<-------------------------------------|                      |
```
Ceremonies live in gk_webauthn_ceremony, are consumed once on finish, and are guarded by HttpOnly pending cookies. Account status checks still run before a session is issued. The login buttons are new, but the shape underneath is deliberately boring: short-lived state, stored server-side, consumed once, then normal session cookies.
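As a concrete illustration of the consume-once step, here is a minimal sketch assuming a Postgres-style DELETE ... RETURNING and illustrative column names on gk_webauthn_ceremony; the real schema may differ.

```go
package webauthn

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
)

// consumeCeremony makes the read destructive: the DELETE hands back the
// stored session data exactly once, so finish can land on any instance
// but a ceremony can never be replayed.
func consumeCeremony(ctx context.Context, db *sql.DB, tokenHash string) ([]byte, error) {
	var sessionData []byte
	err := db.QueryRowContext(ctx,
		`DELETE FROM gk_webauthn_ceremony
		  WHERE token_hash = $1 AND expires_at > now()
		  RETURNING session_data`, tokenHash).Scan(&sessionData)
	if errors.Is(err, sql.ErrNoRows) {
		return nil, fmt.Errorf("ceremony expired, unknown, or already consumed")
	}
	return sessionData, err
}
```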
That boring bit matters.
WebAuthn as MFA
Security keys also work as a second factor now.
Agents and customers can register browser-backed FIDO2 credentials from the profile 2FA area and use them on the pending 2FA login screen instead of a TOTP code or recovery code. Credentials live in gk_webauthn_credential with public-key material, counters, friendly names, last-used metadata, and account type.
This release also fixes the rough edges that show up once security keys are not just a demo path:
| Area | What changed |
|---|---|
| Setup | Security-key registration uses an in-page password modal with a visibility toggle |
| Login | Security-key-only accounts get a key-first challenge instead of an authenticator-code form |
| Profile | Security status distinguishes authenticator apps from security keys |
| Recovery | Recovery-code counts stay hidden when TOTP is not enabled |
| Admin override | Clearing 2FA also clears registered security keys |
| i18n | MFA and WebAuthn login/profile strings now use translations in all 15 supported languages |
Runtime WebAuthn configuration can be set with GOATFLOW_WEBAUTHN_RP_ID, GOATFLOW_WEBAUTHN_RP_NAME, and GOATFLOW_WEBAUTHN_ORIGINS. Local development still gets request-derived localhost-friendly defaults.
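To make the resolution order concrete, a rough sketch follows. The variable names are the real ones from this release; the helper and the literal localhost values are illustrative, since the actual dev defaults are derived from the incoming request.

```go
package config

import (
	"os"
	"strings"
)

// envOr returns the environment value if set, else the fallback.
func envOr(key, fallback string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return fallback
}

// webauthnSettings resolves relying-party config from the environment,
// falling back to localhost-friendly values for local development.
func webauthnSettings() (rpID, rpName string, origins []string) {
	rpID = envOr("GOATFLOW_WEBAUTHN_RP_ID", "localhost")
	rpName = envOr("GOATFLOW_WEBAUTHN_RP_NAME", "GoatFlow (dev)")
	for _, o := range strings.Split(envOr("GOATFLOW_WEBAUTHN_ORIGINS", "http://localhost:8080"), ",") {
		origins = append(origins, strings.TrimSpace(o))
	}
	return rpID, rpName, origins
}
```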
The 2FA State Bug
The most practical auth fix in 0.8.3 is not passkeys. It is pending TOTP session persistence.
Before this release, password login created a pending 2FA session in memory. That is fine in a single process. It is not fine when password login lands on customer-fe and the TOTP verification POST lands on backend, or when a container restarts between the two steps. The server could be handed a perfectly valid authenticator code and still reject it, because the second process had never seen the pending session.
Pending TOTP sessions now live in gk_totp_pending_session, keyed by a hash of the pending cookie token. The old in-memory map remains as a local fast cache, but the database is the source of truth.
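In sketch form, assuming SHA-256 for the token hash and illustrative field names, the lookup reads the cache first and falls back to the database:

```go
package auth

import (
	"context"
	"crypto/sha256"
	"database/sql"
	"encoding/hex"
	"sync"
	"time"
)

type PendingSession struct {
	AccountID   string
	AccountType string
	ExpiresAt   time.Time
}

type Store struct {
	db    *sql.DB
	mu    sync.RWMutex
	local map[string]*PendingSession // process-local fast cache, not authoritative
}

// PendingSession hashes the cookie token (the raw token never touches
// storage), checks the local cache, then falls back to the database so
// verification can land on any instance.
func (s *Store) PendingSession(ctx context.Context, token string) (*PendingSession, error) {
	sum := sha256.Sum256([]byte(token))
	key := hex.EncodeToString(sum[:])

	s.mu.RLock()
	ps, ok := s.local[key]
	s.mu.RUnlock()
	if ok && time.Now().Before(ps.ExpiresAt) {
		return ps, nil // fast path: same process that began the login
	}

	ps = &PendingSession{}
	err := s.db.QueryRowContext(ctx,
		`SELECT account_id, account_type, expires_at
		   FROM gk_totp_pending_session
		  WHERE token_hash = $1 AND expires_at > now()`, key).
		Scan(&ps.AccountID, &ps.AccountType, &ps.ExpiresAt)
	if err != nil {
		return nil, err
	}
	return ps, nil
}
```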
This is exactly the sort of bug that only appears when the system starts looking like production. The cryptography was fine. The topology was not.
Plugin Auto-Recovery
0.8.2 added plugin health checks. 0.8.3 adds the next step: recovery.
The health checker still probes loaded plugins every 60 seconds via __health_ping__ on the existing Call path. Three consecutive 5-second timeouts mark a plugin unhealthy. Now, when that happens, the manager asks the loader to reload it.
```
healthy
   |
   |  3 consecutive health timeouts
   v
unhealthy --> reload attempt --> backoff
                    |
                    |  success
                    v
                 healthy

more than 5 restart attempts in 10 minutes
   |
   v
crash-loop: abandoned until an admin resets it
```
Backoff is exponential: 5s, 10s, 20s, and so on, capped at 5 minutes. A successful probe resets the backoff to zero. More than five restart attempts inside a 10-minute rolling window flips PluginHealth.CrashLoopAbandoned and stops further automatic restarts until an admin clears the flag.
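Under those numbers, the arithmetic is small enough to sketch directly (helper names are illustrative):

```go
package plugins

import "time"

const (
	baseBackoff  = 5 * time.Second
	maxBackoff   = 5 * time.Minute
	loopWindow   = 10 * time.Minute
	loopAttempts = 5
)

// backoffFor doubles the delay per failed restart: 5s, 10s, 20s, ...
// capped at 5 minutes. A successful probe resets attempt to zero.
func backoffFor(attempt int) time.Duration {
	d := baseBackoff
	for i := 0; i < attempt; i++ {
		d *= 2
		if d >= maxBackoff {
			return maxBackoff
		}
	}
	return d
}

// crashLooping reports whether more than loopAttempts restarts landed
// inside the rolling window; restarts holds the restart timestamps.
func crashLooping(restarts []time.Time, now time.Time) bool {
	n := 0
	for _, t := range restarts {
		if now.Sub(t) <= loopWindow {
			n++
		}
	}
	return n > loopAttempts
}
```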
There is a UI for that now. /admin/plugins has Healthy / Unhealthy summary cards and a Health column per plugin showing pending, healthy, unhealthy with restart attempt count, or crash-loop with an inline reset button. The reset endpoint is POST /api/v1/plugins/:name/reset-crashloop.
Plugins can also return JSON from __health_ping__. The manager validates and stores it on PluginHealth.Payload, so the admin surface can show useful runtime detail instead of just a boolean. The bundled examples now use this path too.
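From the plugin side the idea looks roughly like this; the dispatch signature and the fields are stand-ins, not the real SDK surface:

```go
package example

import (
	"encoding/json"
	"fmt"
	"time"
)

type Plugin struct {
	started  time.Time
	queueLen func() int
}

// Call sketches a plugin answering __health_ping__ with JSON instead of
// a bare ack, so the admin surface can show runtime detail.
func (p *Plugin) Call(name string, _ []byte) ([]byte, error) {
	switch name {
	case "__health_ping__":
		return json.Marshal(map[string]any{
			"status":      "ok",
			"uptime_s":    int(time.Since(p.started).Seconds()),
			"queue_depth": p.queueLen(),
		})
	default:
		return nil, fmt.Errorf("unknown call %q", name)
	}
}
```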
Plugin UIs and Offline Support
The service worker grew up.
/sw-config.json now exposes a versioned cache configuration built from global ServiceWorker::* sysconfig values and enabled plugin UI pwa.cache_routes. The root /sw.js consumes that config and supports four route strategies:
| Strategy | Use case |
|---|---|
| network-first | Fresh app pages with offline fallback |
| cache-first | Static plugin assets |
| stale-while-revalidate | Fast UI shell routes that can refresh in the background |
| network-only | Routes that must never be cached |
The service worker now bypasses SSE/EventSource streams, which is one of those details you only remember after a streaming endpoint behaves haunted. Plugin UI shells register the root service worker when PWA support is enabled, so standalone plugin experiences can install offline support without each plugin reinventing the plumbing.
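For orientation, the served config might look like this on the Go side; the field names are assumptions, only the endpoint and the four strategies above come from the release:

```go
package pwa

import (
	"encoding/json"
	"net/http"
)

// SWRoute pairs a URL pattern with one of the four cache strategies.
type SWRoute struct {
	Pattern  string `json:"pattern"`
	Strategy string `json:"strategy"` // network-first, cache-first, stale-while-revalidate, network-only
}

// SWConfig is versioned so a bump invalidates previously cached entries.
type SWConfig struct {
	Version string    `json:"version"`
	Routes  []SWRoute `json:"routes"`
}

// swConfigHandler serves the assembled config at /sw-config.json; in the
// real system it would be rebuilt from ServiceWorker::* sysconfig values
// and enabled plugin cache_routes.
func swConfigHandler(cfg SWConfig) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		_ = json.NewEncoder(w).Encode(cfg)
	}
}
```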
Admins also get /admin/plugin-uis, a management surface for registered plugin UIs: filters, deep links, PWA/manifest visibility, enable/disable controls, custom domain editing, and branding overrides. The admin API preserves plugin-owned config while merging admin branding changes, then rebuilds the dynamic engine so route state matches the settings immediately.
Cascades, Lazy Loading, and Plugin Contracts
0.8.3 closes a gap between what plugin manifests could say and what the host actually did.
manifest.Cascades entries are now wired into the deletion service at plugin load time. When an entity is soft or hard deleted, every plugin that declared an OnSoftDelete or OnHardDelete handler for that entity type gets called with {"id": entityID}.
That sounds straightforward. The edge cases were not.
Lazy-loaded plugins might not have registered their cascade handlers yet. So deletion.Service.runCascades now calls a plugin-manager pre-dispatch hook that ensures discovered-but-unloaded plugins are loaded before the cascade registry is iterated. Lazy mode also eager-loads every discovered plugin at boot, not just gRPC plugins, so the first delete after boot does not pay the surprise load cost inside the delete request.
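A sketch of that dispatch order with illustrative interfaces; the load-before-iterate step is the part the lazy-loading fix guarantees:

```go
package deletion

import (
	"context"
	"encoding/json"
	"log/slog"
)

// Handler is one plugin's cascade hook for an entity type.
type Handler struct {
	Plugin string
	Call   func(ctx context.Context, payload []byte) error
}

type Registry interface {
	HandlersFor(entityType string, hard bool) []Handler
}

type PluginManager interface {
	// EnsureLoaded loads discovered-but-unloaded plugins so lazy plugins
	// can register cascade handlers before the registry is iterated.
	EnsureLoaded(ctx context.Context) error
}

type Service struct {
	plugins  PluginManager
	registry Registry
	log      *slog.Logger
}

func (s *Service) runCascades(ctx context.Context, entityType, entityID string, hard bool) error {
	if err := s.plugins.EnsureLoaded(ctx); err != nil {
		return err
	}
	payload, _ := json.Marshal(map[string]string{"id": entityID})
	for _, h := range s.registry.HandlersFor(entityType, hard) {
		if err := h.Call(ctx, payload); err != nil {
			// One plugin's failure should not block the others' cleanup
			// (continue-on-error is an assumption, not confirmed policy).
			s.log.Error("cascade handler failed", "plugin", h.Plugin, "err", err)
		}
	}
	return nil
}
```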
The same theme shows up in reload behavior. ReplacePlugin now shares the same manifest side-effect path as Register, so custom-field definitions, translations, error codes, template overrides, UIs, and cascade handlers are re-applied on hot reload.
Plugin contracts are only contracts if the host keeps them after startup.
Security Sweep
This release had a deliberate pre-release security pass, and it found real work.
Go defaults, container build references, CI, toolbox fallbacks, and helper scripts now pin Go 1.25.10 instead of the vulnerable 1.25.9 image stream. golang.org/x/net moved to v0.53.0, and golang.org/x/image moved to v0.39.0. The release scan came back clean for reachable Go vulnerabilities after that bump.
Other hardening landed alongside it:
| Area | Change |
|---|---|
| Web push | webpush-go moved to v1.4.0, removing the legacy github.com/golang-jwt/jwt v3 dependency |
| URL tokens | ?token= auth is now limited to WebSocket upgrades and known same-origin SSE routes |
| OAuth2 skeleton | Dormant OAuth2 provider routes now fail closed unless real auth/admin middleware is supplied |
| PKCE | OAuth2 PKCE validation supports plain and S256 challenge checks (see the sketch after this table) |
| Cookies | Login, logout, 2FA, WebAuthn, session refresh, and auth middleware share one production-safe cookie policy |
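The PKCE check itself is fully specified by RFC 7636 and small enough to show whole; this standalone version mirrors what any compliant implementation does:

```go
package oauth2

import (
	"crypto/sha256"
	"crypto/subtle"
	"encoding/base64"
)

// verifyPKCE checks a code_verifier against the stored code_challenge,
// per RFC 7636: "plain" compares directly, "S256" compares against the
// unpadded base64url-encoded SHA-256 of the verifier.
func verifyPKCE(method, challenge, verifier string) bool {
	switch method {
	case "plain":
		return subtle.ConstantTimeCompare([]byte(challenge), []byte(verifier)) == 1
	case "S256":
		sum := sha256.Sum256([]byte(verifier))
		got := base64.RawURLEncoding.EncodeToString(sum[:])
		return subtle.ConstantTimeCompare([]byte(challenge), []byte(got)) == 1
	default:
		return false
	}
}
```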
The cookie change is less flashy than passkeys, but it is the kind of consolidation that prevents future drift. Sensitive auth/session cookies now go through a shared helper that preserves local development defaults while forcing Secure in production and applying the configured SameSite policy.
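A sketch of the shared-helper idea; the name and the production switch are assumptions:

```go
package auth

import (
	"net/http"
	"time"
)

// setAuthCookie centralizes the cookie policy for sensitive auth and
// session cookies: HttpOnly always, Secure forced in production, and the
// configured SameSite policy applied in one place.
func setAuthCookie(w http.ResponseWriter, name, value string, ttl time.Duration, isProd bool, sameSite http.SameSite) {
	http.SetCookie(w, &http.Cookie{
		Name:     name,
		Value:    value,
		Path:     "/",
		MaxAge:   int(ttl.Seconds()),
		HttpOnly: true,
		Secure:   isProd, // relaxed only for local development
		SameSite: sameSite,
	})
}
```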
Benchmarks and Load Testing
0.8.3 also adds a small but useful performance harness.
make bench runs a curated Go benchmark suite across routing, middleware, API setup, config, model, sanitizer, and LDAP helper hot paths. Captures land under generated/benchmarks/, and make bench-compare compares two runs with benchstat.
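For a feel of the harness, here is a self-contained benchmark in the same shape; the stdlib mux stands in for GoatFlow's real router:

```go
package bench_test

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

// BenchmarkRouteMatch measures route lookup on a hot path. Run the real
// suite with `make bench`; compare two captures with `make bench-compare`.
func BenchmarkRouteMatch(b *testing.B) {
	mux := http.NewServeMux()
	mux.HandleFunc("/api/v1/plugins/", func(w http.ResponseWriter, r *http.Request) {})
	req := httptest.NewRequest(http.MethodGet, "/api/v1/plugins/example", nil)

	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		mux.Handler(req) // resolve the handler without serving the request
	}
}
```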
make load-test runs the new k6 smoke profile in tests/load/k6/goatflow_smoke.js against the test stack or a supplied base URL. VUs, duration, endpoints, thresholds, and JSON summaries are configurable.
This is not the 1.0 load test. It is the runway for it.
By the Numbers
- 2 WebAuthn modes: passwordless passkeys and MFA security keys
- 3 auth persistence tables involved: credentials, ceremonies, pending TOTP sessions
- 15 supported languages with translated MFA/WebAuthn login and profile strings
- 60s health probe interval, 5s timeout, 3 consecutive failures before recovery
- 5min maximum plugin restart backoff
- 5 restart attempts in 10min before crash-loop abandonment
- 4 service-worker cache strategies
- Go 1.25.10, x/net v0.53.0, x/image v0.39.0
- 0 known dependency vulnerabilities after the release scan
What’s Next
0.8.4 (July 2026) turns toward OAuth2/OIDC Provider & Client Management: the real data model, migrations, admin UI for clients, redirect URI management, scopes, grant types, secret rotation, OIDC discovery, JWKS, and key rotation.
0.9.0 (August 2026) is still aimed at the first-party open source plugin set: FAQ/Knowledge Base, Calendar & Appointments, and Process Management.
1.0.0 (November 2026) remains the production cut: security audit, load testing, comprehensive documentation, and migration tooling.
Bonus Track: The Bug That Needed Two Containers
The TOTP bug is the one worth remembering.
It was tempting to treat it like an authenticator problem. Codes were being rejected. A user had just changed TOTP config. The failure looked like “the code is wrong.”
But the code was not wrong. The system was split.
Password login created a pending 2FA session in one process. Verification sometimes ran in another process. The token was in the browser cookie, but the server-side pending session was only in the first process’s memory. From the second process’s point of view, the user was trying to verify a session that did not exist.
That is the lesson: authentication state is not just security logic. It is deployment logic. If a login flow has two steps, and those steps can land on different containers, the state between them has to be shared, short-lived, and consumed exactly once.
So that is what 0.8.3 does. The pending cookie token is hashed. The session is stored in the database. The local map stays as a cache. Verification can land anywhere.
Sometimes the fix is not “make the crypto cleverer.” Sometimes it is “put the state where the architecture can actually see it.”
- Source & containers: GoatFlow on GitHub
- Full changelog: CHANGELOG.md
- Helm chart: oci://ghcr.io/goatkit/charts/goatflow
Questions? Feedback? Open a GitHub Discussion and let us know what you think!