
HTMX Is Worse Than React, and WebSocket Is Obsolete?

Tags: webdev, architecture, frontend, backend

Published on July 15, 2025

Trying To Fix The Web Dev: Part 2, The Solution?

If you missed the introduction to the series, please check it out first.

This is the part where it gets serious.

Disqualified Candidates: New-Wave Front-End Frameworks

Svelte, Preact, etc.

  • Some of them try to address the complexity (making simple things more straightforward and complex things harder).
  • Some client-side bloat
  • Some state management burnout
  • Some are distilled versions of others

But none of them solves The Issue™ (even Juris), because they do nothing with the server.

Candidate #1: Hypermedia-Driven

HTMX, Unpoly, fixi.js, etc.

"HTML enhancement" that enables fragment fetching and partial page updates in response to user actions without writing JS code.

How It Works


General Flow:

  1. User triggers an event on the element (e.g., clicks a button or submits a form).
  2. The hypermedia library inspects the element's attributes to determine the request URI, then sends the request via AJAX.
  3. The server returns an HTML fragment.
  4. The hypermedia library inspects attributes to determine where and how to apply the HTML patch.
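The library's side of steps 2 and 4 can be sketched as a tiny attribute interpreter. This is a hypothetical illustration, not actual HTMX internals; only the `hx-*` attribute names follow HTMX conventions, and `planRequest` is my own name:

```javascript
// Hypothetical sketch of how a hypermedia library turns an element's
// attributes into a request plan; not real HTMX internals.
function planRequest(attrs) {
  // Determine the HTTP method and URI from hx-get / hx-post
  const method = attrs["hx-post"] ? "POST" : "GET";
  const url = attrs["hx-post"] || attrs["hx-get"];
  // Determine where and how to apply the returned HTML fragment
  const target = attrs["hx-target"] || "this";
  const swap = attrs["hx-swap"] || "innerHTML";
  return { method, url, target, swap };
}
```

The library would then fire `fetch(url, { method })` and patch the DOM node selected by `target` using the `swap` strategy.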

What Is Good:

  • Logic is on the surface (you don't have to understand component state and run JS code in your head to figure out what will happen after user action)
  • No need for client-side input validation logic
  • Extremely light on the client
  • SSR without hydration
  • Feels like a natural evolution of HTML (there is even a standardization initiative)

Why I Will Never Use It:

Control Flow Distribution

Imagine that every time you write a function, you do something like this:

  1. Start as always: "function onForm(data) {"
  2. Create a separate file submit.js and write the function body there
  3. In the original file, you write a meaningless text macro that splices the body in at runtime, and then return the result:
function onForm(data) {
  @submit.js
  return result
}

Does that look like heaven?

That's exactly how HTMX feels to me. Here is onForm, split across the front and back end:

<!--  post to "/submit", then replace the form itself with the response body -->
<form hx-post="/submit" hx-target="this" hx-swap="outerHTML">
    <input type="text" name="name">
    <input type="email" name="email">
    <button type="submit">Submit</button>
</form>

submit.js:

// HTTP handler that processes the form data
app.post("/submit", (req, res) => {
  /* authorization, validation */
  const { name, email } = req.body;
  // respond with HTML (usually via a template engine,
  // which should also escape user input to prevent XSS)
  res.send(`<p>Submitted: <strong>${name}</strong> (${email})</p>`);
});

But isn't this the same as the "old" approach? No: with native HTML, a new page is loaded after submission, so control is simply passed to the back end and never returned. Isn't calling an API a similar thing? No: in 90% of cases, the API is just a DB wrapper; it has nothing to do with the control flow.

Shallow Simplicity

Arguably, the hypermedia stack is the fastest way to launch. However, the limitations of the architecture complicate the implementation of an already complex UI.

It's as if you have only one way to build a user flow: you store stateless functions that always output a single HTML fragment in a huge map, and for each interactive element you specify the map key and arguments as constants. Does it make simple things more straightforward? Yes. Does it make tricky UIs even messier? Also yes.

What if you need a multi-step form or a non-linear user flow? You store intermediate values in HTML, cookies, or external memory. Is that more transparent than using a variable?
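For illustration, here is roughly what "storing intermediate values in HTML" means for a multi-step form: every previously collected field has to be re-emitted as a hidden input in each fragment so it survives the next round trip. A hypothetical sketch; the endpoint and function names are mine:

```javascript
// Hypothetical sketch: server-side rendering of step 2 of a wizard.
// All state gathered so far must round-trip through hidden inputs.
function renderStep2(collected) {
  const hidden = Object.entries(collected)
    .map(([k, v]) => `<input type="hidden" name="${k}" value="${v}">`)
    .join("\n  ");
  return `<form hx-post="/wizard/step3" hx-target="this" hx-swap="outerHTML">
  ${hidden}
  <input type="text" name="address">
  <button type="submit">Next</button>
</form>`;
}
```

Your "variables" are now markup that the client can freely inspect and edit, which is also why the endpoint security point below matters.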

Need to update several places on the page in response to one user action? Prepare for a total control-flow mess.

UI state and state management libraries exist for a reason, and complex web apps are a modern-day requirement, keeping the web relevant in the mobile-app era. There is a particular hatred of state in the community, but that's probably because state is what allows us to create more elaborate things.

Text-driven

It's not wrong. However, I don't feel comfortable describing an algorithm via HTML attributes.

Endpoint Security

Although the "API issue" from the previous article is addressed (the "DB wrapper" is gone; there can't be more "Backend For Frontend" than an HTMX server), you still need to care a lot about proper endpoint authorization logic (and with state-in-request, there are even more things to keep in mind).

Verdict:

Front-end hypermedia tools scale the "classic" approach without improving it, even if there are (or will be) more sugar-laden libraries that partially address my points. Don't get me wrong: it's great technology with its own niche. However, it's essential to consider its limitations when selecting a stack for a long-term project.

So, does it address The Issue? Partially. Is it The Solution? Hell no.

Candidate #2: Server-Driven

Hotwire Turbo Streams, Phoenix Live View, Laravel Livewire, Blazor Server

The main UI logic is written for and runs on the server; the client acts as a "remote" renderer and user input provider.

General Flow:

  1. User triggers an event on the element (e.g., clicks a button or submits a form).
  2. Client lib intercepts the event and sends it to the server
  3. The server prepares HTML updates, plus instructions on how to apply them, and wires that back to the client
  4. The client follows the server's instructions
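Conceptually, step 3 boils down to the server emitting a list of patch instructions that the client applies blindly. The wire format below is made up for illustration (real frameworks encode this differently, e.g. Turbo Streams uses `<turbo-stream>` tags); the handler and field names are mine:

```javascript
// Hypothetical sketch of a server-driven update: the server decides
// both WHAT changed and HOW the client should apply it.
function handleAddToCart(cart, item) {
  cart.items.push(item);
  // Patch instructions for the "remote renderer" (the browser)
  return [
    {
      action: "replace",
      target: "#cart-count",
      html: `<span id="cart-count">${cart.items.length}</span>`,
    },
    { action: "append", target: "#cart-list", html: `<li>${item.name}</li>` },
  ];
}
```

Note that one event can fan out into several DOM patches, which is exactly the "update several places on the page" case that hypermedia attributes struggle with.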

Compared to the previous candidate, the server owns the control flow completely. The browser has no idea what will happen in response to the event. You can think of it as a backwards API: the server calls the client to modify HTML on the fly.

What Is Good:

  • Business logic has returned to the server 🔥
  • Light on the client
  • SSR without hydration
  • Stateful* server superpowers, like page-instance and element-scoped endpoints/handlers (no developer-defined and exposed API at all)

* Not all of them are stateful

So this is it? The Issue is solved: extra-thin client, no API, business logic executed in a controlled and safe environment.

Compromises:

Blocking Event Processing

Guilty: Phoenix Live View and Blazor

By design, a page lives in a single thread (or, in Phoenix, a single process). In other words, when the user interacts with multiple page elements, processing, database interaction, and rendering occur in series: the next event is processed only after the previous cycle completes.

That makes internal framework logic much simpler (no concurrency issues), but it looks like a blocker for mass adoption. You don't want your UI to be unresponsive or accumulate an event queue when a time-consuming task is running. You can mitigate this by leveraging background processes and asynchronous tasks; however, the complexity cost seems too high to justify the effort (task management & UI synchronization).
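To make the blocking concrete, here is a toy model of a single-process mailbox: each event's processing starts only after the previous one finishes. Pure illustration, not framework code:

```javascript
// Toy model: compute when each UI event finishes if the page processes
// its event mailbox strictly in series (one event at a time).
// arrivals[i]  = when event i reaches the server (ms)
// durations[i] = how long event i takes to process (ms)
function finishTimes(arrivals, durations) {
  let free = 0; // when the process becomes idle again
  return arrivals.map((arrival, i) => {
    const start = Math.max(arrival, free); // must wait for the previous event
    free = start + durations[i];
    return free;
  });
}
```

A 3-second report generation followed one second later by a cheap toggle click: the toggle does not finish until the report is done, so the UI feels frozen.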

Hotwire lacks concurrency control entirely. Livewire, by default, blocks the component on the client side during a request, which is good enough most of the time.

50/50 State Situation

You can argue that modern web UI can operate without state. But it's here already, relied on everywhere, and keeps being reinvented.

Hotwire: no built-in state management.

Laravel Livewire: serializes component state to hidden inputs under the hood, which seems cool.

The Laravel server is stateless and does not "live" in memory. It "reacts" only to requests: it restores component state from the request data and session storage, then uses it to render an HTML update. However, it has built-in mechanics to trigger updates of multiple components (within one request lifecycle).

Blazor Server: full support for component state, global state, derived state, and reactivity. But it's C#, so who cares (

Phoenix Live View: global state per LiveView (root app component) instance, propagated to components as props and triggering re-renders, similar to storing state in React's root component and passing it down the tree as props. Component state works in a similar way to React's.

Grandpa WebSocket

UI updates rely on it in Phoenix Live View, Blazor Server, and Hotwire Turbo Streams (the truly server-driven part of Hotwire, because Turbo Frames are just like HTMX).

Let's address the clickbait elephant in the room.

Not Compatible With QUIC (HTTP/3)

95% of browsers now support QUIC. It offers better resistance to poor network conditions, lower latency, and faster connection establishment. If your app's users live in a datacenter, that makes no difference. Otherwise, for a server-driven UI, the lack of QUIC hurts.

WS over QUIC may be enabled in the future. However, even WS over HTTP/2 remains rarely adopted after 10 years. Also, with wider support for streaming request bodies (hello, Safari), BYOB readers, and WebTransport, we won't need WS that much.

Sloppy Reconnect

Connection loss is detected only via pings or heartbeats. For example, with 10 seconds between pings and at least two misses required to initiate a reconnect, you get up to 20 seconds of misleading UI behavior.
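A minimal sketch of that detection logic, using the numbers from the example above (the function and constant names are hypothetical):

```javascript
// Hypothetical heartbeat-based liveness check: the connection is only
// declared dead after MAX_MISSED ping intervals pass without a pong.
const PING_INTERVAL_MS = 10_000;
const MAX_MISSED = 2;

function isConnectionStale(lastPongAt, now) {
  return now - lastPongAt > PING_INTERVAL_MS * MAX_MISSED;
}
```

With these numbers, the UI can misbehave for up to 20 seconds before a reconnect is even attempted.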

Timeout-based disconnect detection is unavoidable in real-time scenarios, with or without WS. But it stacks on top of TCP's weaker resistance to poor network conditions (compared to QUIC), a longer handshake, and a separate connection circuit (meaning that failures in the app's other network activity do not help detect a WS disconnect).

App Users Notice

I am talking from experience. A WebSocket-controlled UI comes with a deal: there will be cases where, even though the internet is already reachable again, the app does not respond for a noticeable time. Maybe it won't happen often, and perhaps it doesn't ruin the UX, but it is a design flaw.

The SSE situation is also not great. Browsers are extremely sloppy at reconnecting (I have no idea why), especially after the PC wakes from sleep. So again, you will rely on heartbeats (and manual EventSource recreation).

Non-Native Forms

Guilty: Phoenix Live View and Blazor

By default, form data is serialized and passed via WebSocket. Not a deal breaker, but it requires care when dealing with files.

Endpoint Security

Guilty: Laravel Livewire and Hotwire

The server is stateless and does not maintain in-memory UI component instances with unique "hooks" between requests. Endpoints are static, so you must enforce authentication and authorization in every handler, just as you would with HTMX.
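In practice, that means guarding every endpoint yourself, for example with a wrapper like this (an Express-style sketch; `requireUser` is my name, not a framework API):

```javascript
// Hypothetical Express-style guard: every stateless endpoint must
// re-check who the caller is, because nothing about the URL is scoped
// to a page or component instance.
function requireUser(handler) {
  return (req, res) => {
    if (!req.session || !req.session.user) {
      res.status(401).send("Unauthorized");
      return;
    }
    handler(req, res);
  };
}

// usage: app.post("/submit", requireUser((req, res) => { /* ... */ }));
```

Forgetting the wrapper on a single handler is all it takes, which is why "endpoints are static and require care" keeps coming up in this article.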

Stateful Cost

Guilty: Phoenix Live View and Blazor

  • Load balancing is not so trivial, because a page "lives" on a specific server instance
  • Each active page consumes memory on the server
  • Non-cacheable HTML
  • UI will lose state after a period of inactivity (server memory is not infinite) or due to server restart

Seems like a reasonable compromise to me. With more users, you can probably afford more computing resources.

Verdict:

Phoenix Live View is fascinating as a concept (the app lives on the server, the browser acts as a remote actor), but quirks like "single-threaded" blocking UI and HTTP/1.1 reliance make it hard to recommend. Not to mention that Elixir is not everyone's cup of tea.

From a theoretical perspective, Laravel Livewire represents an evolution (a real one this time) of the classic web: stateful components and "live" page updates, all while the server remains stateless. However, it's not actually "live": there is no app runtime, you can't initiate UI updates from the server because there is no reactivity, and endpoints are static and require care.

Blazor (the server-side variant) is conceptually similar to Phoenix LiveView but has a distinct Microsoft flavor. User reports suggest it is sluggish and consumes a lot of resources.

Hotwire looks like either an advanced HTMX or an inferior Livewire.

So, does it address The Issue? Yes. Is it The Solution? Unfortunately no: the compromise is too big.

So, is there no end to our suffering?

I think there is, and now I know The Formula. So, I quit my job as CTO and dived in.

Subscribe for the Part 3: The Formula.

React, comment, and follow.