The Key Terms You Should Know About System Architecture Design
I started my career in 2018. At first, everything related to software development and server-side architecture seemed tricky to me, so I decided to focus on software development first and leave the server-side black box for later. Over the last couple of months, I have started learning about server-related concepts. In this post, I will sum up the key concepts and related terms I have learned and describe each of them briefly. This post targets beginners who want to start their server-side journey.
Normalization and Denormalization —
Normalization removes redundancy from a database and keeps the data consistent. One drawback is that join queries can become very slow as the system grows. Denormalization, in contrast, adds redundant data, but it makes reads efficient because less joining is needed. If you need to run join queries often, you may consider denormalization or a NoSQL database. Most NoSQL databases do not support joins natively; they handle data differently.
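The trade-off above can be sketched with plain in-memory dictionaries standing in for database tables (the table names and data here are illustrative, not from any real schema):

```python
# Normalized: users and orders live in separate "tables";
# a join is needed to answer "who placed order 101?"
users = {1: {"name": "Alice"}, 2: {"name": "Bob"}}
orders = {101: {"user_id": 1, "item": "book"},
          102: {"user_id": 2, "item": "pen"}}

def order_with_user(order_id):
    """Join at read time: look up the order, then its user."""
    order = orders[order_id]
    return {**order, "user_name": users[order["user_id"]]["name"]}

# Denormalized: the user's name is copied into every order,
# so reads need no join, but an update must touch every copy.
orders_denorm = {101: {"user_id": 1, "item": "book", "user_name": "Alice"},
                 102: {"user_id": 2, "item": "pen", "user_name": "Bob"}}

print(order_with_user(101)["user_name"])      # joined read
print(orders_denorm[101]["user_name"])        # direct read, no join
```

The joined read does extra work on every query; the denormalized read is cheap but pays the cost on writes, which is exactly the consistency-versus-speed trade the text describes.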
Database Partitioning —
Depending on system needs, you may have to partition your database across multiple machines. Commonly used techniques include Vertical Partitioning, Hash-Based Partitioning, and Directory-Based Partitioning. Each of them can be the best solution depending on the system's requirements.
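Of the techniques named above, hash-based partitioning is the easiest to show in a few lines. This is a minimal sketch (the key format and shard count are made up for illustration): a stable hash of the key decides which partition owns it.

```python
import hashlib

NUM_PARTITIONS = 4  # illustrative shard count

def partition_for(key: str) -> int:
    """Hash-based partitioning: a stable hash of the key picks the shard."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

# The same key always lands on the same partition, so lookups
# know exactly which machine to ask.
print(partition_for("user:42") == partition_for("user:42"))  # True
```

Note that changing `NUM_PARTITIONS` remaps almost every key, which is why real systems often use consistent hashing instead of a plain modulo.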
Caching —
The core responsibility of a caching mechanism is to provide rapid results. It sits between the application and database layers and is essentially a key-value store. The application first looks for data in the cache; if the required data is not there, it is fetched from the database.
Horizontal and Vertical Scaling
There are generally two ways of scaling a system to meet demand.
a. Vertical Scaling — adding more power (RAM, CPU) to the existing server.
b. Horizontal Scaling — adding more servers to work alongside the existing server.
Bandwidth — the maximum amount of data that can be transferred in a unit of time.
Throughput — the actual amount of data transferred in a unit of time.
Latency — the time needed to transfer data from one end to the other.
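A common back-of-the-envelope way to combine the last two terms: the time to deliver a payload is roughly the one-way latency plus the payload size divided by the throughput. The numbers below are made up for illustration.

```python
def transfer_time(size_bytes, throughput_bps, latency_s):
    """Rough estimate: latency plus the time the payload
    occupies the link (size in bits / throughput in bits per second)."""
    return latency_s + size_bytes * 8 / throughput_bps

# 1 MB over a 10 Mbit/s link with 50 ms latency:
# 8,000,000 bits / 10,000,000 bps = 0.8 s, plus 0.05 s latency
t = transfer_time(1_000_000, 10_000_000, 0.05)
print(round(t, 2))  # 0.85 (seconds)
```

This also shows why latency dominates for small payloads and throughput dominates for large ones.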
Client-Server Architecture —
Client-server architecture is suitable for a wide variety of applications. It has two major components: the server and the client. The client issues a request to the server, and the server, after processing that request, sends back a response. This architecture is popular for many-to-one connections, where multiple clients connect to a single server. Based on the application's goal, there are two major client models:
a. Thin-client model — most of the logic and data processing is implemented on the server, and the client has a light implementation.
b. Fat-client model — the opposite of the thin-client model: a significant amount of logic runs on the client.
Communication between clients and servers is organized into layered architectures: two-tier and three-tier/N-tier.
a. Two-tier — the system is divided into two components: clients (responsible for presenting data) and a server (responsible for data processing and storage).
b. Three-tier/N-tier — the system is divided into three or more parts. Layers can exist for the application, the server, the database, third-party APIs, etc.
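The request/response cycle described above can be demonstrated with a tiny thin-client sketch over a raw TCP socket (the greeting protocol here is invented purely for illustration): the server does all the processing, and the client only sends a request and displays the reply.

```python
import socket
import threading

def serve_once(server_sock):
    """Server side: accept one client, process its request, respond."""
    conn, _ = server_sock.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"HELLO, {request}".encode())  # the server does the work

# Listen on an OS-chosen free port on localhost.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# Thin client: just sends the request and displays the response.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"alice")
    response = client.recv(1024).decode()
print(response)  # HELLO, alice
```

In a real two-tier system the socket handling would be hidden behind a protocol such as HTTP, but the division of labor is the same.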
MapReduce —
If you are interested in big-data analysis, you may already have heard about MapReduce. It is widely used for processing large amounts of data. The Map step converts the input data into key-value pairs, and the Reduce step aggregates those pairs into a smaller set of results.
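The classic illustration of the two steps is word counting. This single-process sketch only mimics the programming model; a real MapReduce framework would run the map and reduce phases in parallel across many machines.

```python
from collections import defaultdict

def map_phase(document):
    """Map: emit a (word, 1) pair for every word in the document."""
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    """Reduce: sum the counts for each distinct key."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

pairs = map_phase("to be or not to be")
print(reduce_phase(pairs))  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

The key insight is that pairs sharing a key can be reduced independently of all other keys, which is what lets the framework distribute the work.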
Load Balancer —
Suppose you have created an application like Facebook and deployed it on a single small server. Surprisingly, your application gains more users day by day. Eventually the server reaches its limit, can no longer accept requests, and goes down. One solution is to add more servers alongside the existing one (horizontal scaling) and put a load balancer in front of them. The load balancer distributes the load evenly among the servers, and if one server goes down, it routes requests to the remaining healthy servers.
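The distribution strategy described above can be sketched as a round-robin balancer (the class and server names are made up for illustration): servers are handed out in rotation, and any server marked down is skipped.

```python
import itertools

class RoundRobinBalancer:
    """Hands out servers in rotation, skipping any marked unhealthy."""

    def __init__(self, servers):
        self.servers = servers
        self.healthy = set(servers)
        self._cycle = itertools.cycle(servers)

    def mark_down(self, server):
        """Record that a server is no longer accepting requests."""
        self.healthy.discard(server)

    def next_server(self):
        """Return the next healthy server in the rotation."""
        for _ in range(len(self.servers)):
            server = next(self._cycle)
            if server in self.healthy:
                return server
        raise RuntimeError("no healthy servers")

lb = RoundRobinBalancer(["app1", "app2", "app3"])
print([lb.next_server() for _ in range(4)])  # ['app1', 'app2', 'app3', 'app1']
lb.mark_down("app2")
print(lb.next_server())  # 'app3' (app2 is skipped)
```

Production load balancers (e.g. NGINX or HAProxy) add health checks, weights, and connection counting on top of this basic rotation.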
I will write the second part on this topic very soon. Until then, tell me what I have missed and what I can add in part two. Your responses will be highly appreciated.
Thank you. Clap 50 times to encourage me to write more.