Relatude is already incredibly fast, and we have several clients hosting sites with millions of users each month on a single server (without page caching). Sometimes, however, you need more capacity, and the only way forward is to distribute the load across multiple servers using load balancing.
We have worked hard to break the system into independent components, each of which can run on a separate server. In most setups some components will share a server while others run on their own; the optimal configuration varies from website to website and depends on a number of factors. Here are the different components:
Load balancer
This is the load balancer that distributes each incoming request to an available front-end server. Relatude does not provide this component, but there are several third-party solutions you can use, from simple round-robin load balancers to more advanced systems. A key requirement for Relatude is session affinity: every request within a user session must be routed to the same front-end server. This is not a requirement for anonymous users.
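The session-affinity requirement can be sketched as follows. This is an illustration only, not Relatude code: the balancer is a third-party product, and the server names and cookie handling here are assumptions. Requests carrying a session ID are hashed to a fixed front-end, while anonymous requests are spread round-robin.

```python
import hashlib
from typing import Optional

# Hypothetical pool of front-end servers (illustrative names).
FRONTENDS = ["http://fe1:8080", "http://fe2:8080", "http://fe3:8080"]

def pick_backend(session_id: Optional[str], request_counter: int) -> str:
    """Route a request to a front-end server.

    Requests within a user session must stick to one server, so we
    hash the session ID to a fixed backend. Anonymous requests (no
    session cookie) have no affinity requirement and can simply be
    spread round-robin.
    """
    if session_id:
        digest = hashlib.sha256(session_id.encode()).digest()
        return FRONTENDS[int.from_bytes(digest[:4], "big") % len(FRONTENDS)]
    return FRONTENDS[request_counter % len(FRONTENDS)]

# A given session always lands on the same server:
assert pick_backend("abc123", 0) == pick_backend("abc123", 7)
```

Most commercial and open-source balancers implement the same idea via a cookie or source-IP hash; any of them will work as long as session stickiness is enabled.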
Front-end visit server 1…10
These are the front-end servers that handle visitors' requests. They are normal Relatude installations, all connected to the same database. There is no theoretical limit to how many you can install, but in most cases you will see little improvement in capacity beyond roughly 10 servers; at that size other components, typically the database, become the bottleneck.
Edit server
This is a normal Relatude installation that serves the “/edit” interface. (You can run several instances if needed.)
Workflow server
This is a normal Relatude installation dedicated to running heavy workflows. Running these on a separate server means that, for instance, sending out thousands of newsletters or synchronizing with other systems does not affect the responsiveness of the edit UI or the public website. (You can run several instances if needed.)
File server (CDN)
You can now store all files that belong to content objects on a separate system. The storage is provider-based, so you can easily write your own provider if you want to integrate with a CDN (Content Delivery Network).
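A provider model of this kind might look like the sketch below. The interface, class names, and methods are assumptions for illustration, not Relatude's actual API: the point is that the system talks to one small interface, and swapping in a CDN means implementing that interface.

```python
from abc import ABC, abstractmethod

class FileStoreProvider(ABC):
    """Hypothetical provider interface for content-object files."""

    @abstractmethod
    def save(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def url(self, key: str) -> str: ...

class LocalProvider(FileStoreProvider):
    """Default behaviour: files stored and served by the web server itself."""
    def __init__(self):
        self._store = {}
    def save(self, key, data):
        self._store[key] = data
    def url(self, key):
        return f"/files/{key}"

class CdnProvider(FileStoreProvider):
    """Custom provider pushing files to a CDN origin (illustrative)."""
    def __init__(self, base_url):
        self.base_url = base_url
        self._store = {}
    def save(self, key, data):
        # A real provider would upload to the CDN origin here.
        self._store[key] = data
    def url(self, key):
        return f"{self.base_url}/{key}"
```

With this split, content objects only hold a key; which URL visitors end up downloading from is entirely the provider's decision.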
Database server
There are no changes here compared to a normal setup. In the current version it is not possible to spread the database onto multiple servers. (We are working on a NoSQL-based system where this will be possible.) For high-traffic websites you should strive to write queries that the system can cache effectively, for example by minimizing the use of DateTime.Now filters.
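To see why DateTime.Now filters hurt caching: every request produces a slightly different filter value, so the cached result of the previous, otherwise identical query can never be reused. Rounding the timestamp down to a time bucket is a common workaround (sketched here in Python with an illustrative cache-key format, not a Relatude-specific API):

```python
from datetime import datetime, timedelta

def cache_key_naive(now: datetime) -> str:
    # Changes on every request -> every query is a cache miss.
    return f"articles|publishDate<={now.isoformat()}"

def cache_key_rounded(now: datetime, minutes: int = 5) -> str:
    # Round down to a 5-minute bucket -> all requests within the same
    # bucket produce the same key and can share one cached result.
    bucket = now - timedelta(minutes=now.minute % minutes,
                             seconds=now.second,
                             microseconds=now.microsecond)
    return f"articles|publishDate<={bucket.isoformat()}"

t1 = datetime(2024, 1, 1, 12, 1, 10)
t2 = datetime(2024, 1, 1, 12, 3, 55)
assert cache_key_naive(t1) != cache_key_naive(t2)      # always a miss
assert cache_key_rounded(t1) == cache_key_rounded(t2)  # shared cache key
```

The trade-off is that newly published content may take up to one bucket interval to appear, which is acceptable for most listings.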
Messaging server
This is an internal messaging system used to propagate updates across the different front-end servers and components, such as flagging caches as outdated. The system is typically not under much load, but it is critical, so it must run on a stable server.
Query cache server
This is a memory-based cache for content queries. It can run on a separate server. It is not critical: the setup can live without it and will handle downtime gracefully. Each front-end server also has its own query cache; the shared cache allows the front-ends to reuse each other's query results.
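The "not critical, degrades gracefully" property can be sketched like this (names and wire protocol are assumptions; the real servers talk over HTTP): a front-end treats an unreachable cache server as a cache miss and falls back to the database, so an outage costs performance, not correctness.

```python
class QueryCacheClient:
    """Talks to the shared query-cache server; tolerates downtime."""
    def __init__(self, remote):
        self.remote = remote  # None simulates a cache-server outage
    def get(self, key):
        if self.remote is None:
            return None  # treat downtime as a plain cache miss
        return self.remote.get(key)
    def put(self, key, value):
        if self.remote is not None:
            self.remote[key] = value

def run_query(cache, key, database):
    cached = cache.get(key)
    if cached is not None:
        return cached
    result = database[key]   # fall back to the real query
    cache.put(key, result)   # share the result with other front-ends
    return result

db = {"latest-news": ["article-1", "article-2"]}
shared = {}
assert run_query(QueryCacheClient(shared), "latest-news", db) == ["article-1", "article-2"]
assert "latest-news" in shared                       # now reusable by other front-ends
assert run_query(QueryCacheClient(None), "latest-news", db) == ["article-1", "article-2"]
```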
Object cache server
This is a file-based cache of serialized content objects. It can run on a separate server, and like the query cache it is not critical: the setup can live without it and will handle downtime gracefully.
Search server
This server holds the Lucene index and communicates directly with all front-end servers.
About the setup
All communication between the servers is HTTP- and REST-based, so you can install the servers in separate locations.
The setup is defined by a configuration text that can be edited in the System module under “System-Webfarm”.
Please contact us for advice on setting this up and for information about how it affects your license cost. We normally offer consulting hours to projects that want to use this feature of our product.