We’ve already discussed the benefits of using the Cloud as a platform (PaaS) when building applications. Now we’ll focus on some of the design and architectural elements that need to be kept in mind during the planning process. The following core concepts are a good starting point for developing scalable, Cloud-centric applications.
Partitioning

Partitioning in the Cloud typically refers to separating sections of an application into clusters across different servers. It is most commonly applied to data storage, but can also apply to other components, such as background processing servers. Partitioning helps your application run faster because no single server gets bogged down with the full data set.
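As a concrete illustration, data partitioning often boils down to mapping each record key deterministically to one of several storage servers. A minimal sketch, where the server names are hypothetical stand-ins for real partitioned database nodes:

```python
# Hash-based partitioning: each record key maps deterministically to one
# storage server, so no single server holds the full data set.
import hashlib

SERVERS = ["db-0", "db-1", "db-2"]  # hypothetical partition servers

def partition_for(key: str, servers=SERVERS) -> str:
    """Pick the server for `key` using a stable hash (not Python's
    per-process-randomized hash()), so the mapping survives restarts."""
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]
```

Note that a simple modulo scheme reshuffles most keys when a server is added or removed; production systems usually reach for consistent hashing for that reason.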
Separation of Functionality
Cloud systems have the ability to scale sub-sections of an application, as long as those sub-sections are appropriately defined. For example, an application that sends emails to its users will benefit from having the email engine as a separate entity from the website used by administrators. This allows the email system to scale up when emails are being sent, while the administration website, which is irrelevant to that workload, stays unchanged.
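The email example above is usually achieved by decoupling the two tiers with a queue: the website only enqueues work, and a separate worker process sends the emails. A minimal sketch, where the in-process queue and `send_email` callback stand in for a hosted message broker and a real mail service:

```python
# The admin website enqueues email jobs; a separate email worker drains
# the queue. Either tier can be scaled (or idled) independently.
import queue

email_queue = queue.Queue()  # stand-in for a hosted queue service

def admin_request_email(to: str, body: str) -> None:
    """Web tier: fire-and-forget, returns to the admin immediately."""
    email_queue.put({"to": to, "body": body})

def email_worker(send_email) -> int:
    """Worker tier: drains pending jobs; scaled separately from the web."""
    sent = 0
    while not email_queue.empty():
        job = email_queue.get()
        send_email(job["to"], job["body"])
        sent += 1
    return sent
```

Because the only contract between the two tiers is the queue message format, you can add email workers during a big send and scale them back to zero afterwards without touching the website.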
Asynchronous Threading

Asynchronous threading refers to techniques intended to produce faster load times, better scalability, and significantly cheaper hosting costs. It typically shows up in two ways:
- Offloading actions to background servers to keep workload on edge servers (web/API servers) streamlined. This also helps separate functions to different sub-sections for independent scaling.
- Asynchronous threading is best used when accessing the network or data storage. Performing network calls with an asynchronous threading model allows for better processor utilization, which directly translates to more active users per virtualized server. Network access cost is significantly higher in Cloud environments than on locally optimized networks.
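The second point above can be sketched with Python's `asyncio`: while one "network call" is waiting, the event loop runs the others, so a single worker stays busy instead of blocking. Here `fetch` simulates network latency with `asyncio.sleep`, a stand-in for a real HTTP or database call:

```python
# Asynchronous I/O sketch: three simulated network calls wait
# concurrently, so total time is roughly one call's latency, not the sum.
import asyncio

async def fetch(url: str) -> str:
    await asyncio.sleep(0.05)  # stand-in for network latency
    return f"response from {url}"

async def fetch_all(urls):
    # gather() schedules all calls at once and collects their results.
    return await asyncio.gather(*(fetch(u) for u in urls))

results = asyncio.run(fetch_all(["a", "b", "c"]))
```

With blocking calls, each waiting request would pin a thread; here one thread serves all three, which is where the extra users-per-server comes from.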
Stateless Execution

Stateless execution ensures that no single server holds all of a user’s state: if that server fails or is replaced, the user risks losing their data. Essentially, it is not putting all of your eggs in one basket. Stateless execution is also a major component of optimized load balancing. An application that doesn’t support stateless execution will have issues when scaling up or down and during automated server maintenance. Special care must be taken to avoid these issues:
- Do not store information on local hard drives
- Ensure user sessions operate across servers
- Ensure caching plan operates across servers (also usually implies centralized caching, such as using Redis cache)
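The session and caching points above usually come down to keeping all per-user state in a shared store so that any server can handle any request. A minimal sketch, where an in-memory dict stands in for a centralized cache such as Redis:

```python
# Stateless request handling: session data lives in a shared store, so
# servers hold no local state and can be added, removed, or replaced freely.
class SessionStore:
    """Stand-in for a centralized cache (e.g. Redis) shared by all servers."""
    def __init__(self):
        self._data = {}

    def get(self, session_id: str) -> dict:
        return self._data.get(session_id, {})

    def set(self, session_id: str, session: dict) -> None:
        self._data[session_id] = session

def handle_request(store: SessionStore, session_id: str, item: str) -> int:
    """Any server instance can run this: all state comes from the store.
    Adds an item to the user's cart and returns the new cart size."""
    session = store.get(session_id)
    cart = session.get("cart", [])
    cart.append(item)
    store.set(session_id, {"cart": cart})
    return len(cart)
```

Because `handle_request` keeps nothing on the local machine, a load balancer can route each request of the same session to a different server and the user never notices.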
Those were just a few of the elements that you need to keep in mind when developing an application in the Cloud. In Part 3, we’ll discuss tips on how to ensure your application’s success, as much as possible, that is.