Saturday, August 6, 2016

Websocket + SockJS + Apache as proxy

In a previous post, I mentioned my journey into Understanding websocket with the Spring framework, which I realised turned out to be more about exploring spring-security. Nevertheless, it was still part of the same journey, since I probably wouldn't have used spring-security if it weren't for its presence as a spring-websocket dependency.

That said, the coding itself was generally uneventful. That is, until I tried deploying the package into the cloud. WildFly was sitting behind Apache, which acted as the proxy, and this was where the problem manifested itself. The issue was compounded by the use of HTTPS, but only on the proxy, with WildFly listening for plain HTTP.

It took quite some time for me to figure out a solution on my own. If your setup uses a Bitnami stack similar to mine, then I certainly hope you'll find this post useful.

Assumptions:
  1. Your Bitnami stack is up and running correctly;
  2. WildFly is listening internally on port :8080;
  3. You have complete access to your WildFly management console;
  4. Your app can be deployed successfully to WildFly;
  5. Both WildFly and Apache are sited on the same machine/instance;
  6. Apache has been configured with an SSL certificate for proper HTTPS operation;
  7. Apache is functioning normally for regular HTTP/HTTPS traffic;
  8. The Apache error log should show an excessive amount of traffic due to the WebSocket connection attempts;
  9. The Apache access log should show HTTP 502 or similar error codes;
  10. httpd.conf should already have the LoadModule lines uncommented for mod_proxy and its related modules, especially mod_proxy_wstunnel.so (see the snippet just after this list).
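 For reference, the relevant LoadModule lines in httpd.conf would look something like this once uncommented (module paths can vary between Apache builds, and mod_rewrite is also needed for the RewriteRule further down):
  LoadModule proxy_module modules/mod_proxy.so
  LoadModule proxy_http_module modules/mod_proxy_http.so
  LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so
  LoadModule rewrite_module modules/mod_rewrite.so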
 Next, add the following lines to your /opt/bitnami/apache2/conf/bitnami/bitnami.conf file:
  <IfModule proxy_wstunnel_module>
  RewriteCond %{HTTP:Upgrade} =websocket [NC]
  RewriteRule ^/(.*)$ ws://localhost:8080/$1 [P,L]
  </IfModule>
 Do the same for both the HTTP (port :80) and HTTPS (port :443) VirtualHost blocks, and then restart Apache.
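 To make that concrete, here's a rough sketch of how the two VirtualHost blocks in bitnami.conf might end up looking. The existing directives shown as comments are placeholders for whatever your Bitnami stack already contains, and RewriteEngine On may or may not already be present in your file:
  <VirtualHost _default_:80>
  # ... your existing directives (DocumentRoot, ProxyPass, etc.) stay as they are ...
  RewriteEngine On
  <IfModule proxy_wstunnel_module>
  RewriteCond %{HTTP:Upgrade} =websocket [NC]
  RewriteRule ^/(.*)$ ws://localhost:8080/$1 [P,L]
  </IfModule>
  </VirtualHost>

  <VirtualHost _default_:443>
  # ... your existing SSL directives (SSLEngine, SSLCertificateFile, etc.) stay as they are ...
  RewriteEngine On
  <IfModule proxy_wstunnel_module>
  RewriteCond %{HTTP:Upgrade} =websocket [NC]
  RewriteRule ^/(.*)$ ws://localhost:8080/$1 [P,L]
  </IfModule>
  </VirtualHost>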

It took me a while to realise, after enabling the Apache debug log level, that
  1. There is never any wss:// used by WildFly, because I do not have the HTTPS listener set up; just vanilla HTTP;
  2. Redirecting to ws://www.mysite.com/$1 would not work either, because that's still throwing the same request back at Apache;
  3. Redirecting to ws://localhost/$1 would not work, because that's equally a request to Apache itself;
If you have SSL on both Apache and WildFly, I guess it'd be possible to utilise SSLProxyEngine, which would probably circumvent all of the above, although it'd potentially add overhead to a small cloud instance.
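If you do go down that route, my guess (untested) is that it'd look something along these lines, assuming WildFly had an HTTPS listener on port :8443:
  <IfModule proxy_wstunnel_module>
  SSLProxyEngine On
  RewriteCond %{HTTP:Upgrade} =websocket [NC]
  RewriteRule ^/(.*)$ wss://localhost:8443/$1 [P,L]
  </IfModule>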

Monday, August 1, 2016

FileUploadException: UT000020: Connection terminated as request was larger than 10485760

In the course of our limits testing, it turned out that it wasn't enough to just set the file upload limit for the CommonsMultipartResolver in our own application's spring-*context.xml configuration. WildFly had ideas of its own: it comes out of the box with a default limit of 10485760 bytes.
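For context, the application-side limit I mentioned is the usual CommonsMultipartResolver bean in the spring context XML, something along these lines (the 52428800 value, i.e. 50 MB, is purely an example):
  <bean id="multipartResolver"
        class="org.springframework.web.multipart.commons.CommonsMultipartResolver">
    <!-- maximum upload size in bytes -->
    <property name="maxUploadSize" value="52428800"/>
  </bean>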

While I managed to locate a couple of results such as this, this, this and this, which pointed me in the right direction, they were all referencing older versions, namely WildFly 8. And lazy as I am, I wasn't about to poke around the XML making changes manually, much less use the CLI to amend the value. I wanted to make the change via the WildFly Admin Console UI.

Thus I had to do some exploring of my own. Based on those clues, I've identified where to change said value.

Navigate to the Configuration tab > Subsystems > Web/HTTP - Undertow > HTTP and click View


HTTP Server tab > "default-server" > View

HTTP Listener > Edit

Then edit the "Max post size" to your desired value. Naturally, I'd think that the value should match whatever you've configured in your own application.

And don't forget to restart your WildFly!
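For completeness, if you're less CLI-averse than I am, I believe the same change can be scripted with something roughly like the following (assuming the default server and listener names; adjust the value to suit):
  /subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=max-post-size, value=52428800)
  :reload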

Edit: Also, don't forget to tweak your database, e.g. MySQL's max_allowed_packet, alongside this setting!
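For MySQL that would mean something like the following in my.cnf (the value is just an example; it should be at least as large as the biggest upload you expect to store):
  [mysqld]
  max_allowed_packet=64M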