While functional requirements define what the system should do, non-functional requirements describe what the system should deal with. Systems can face many challenges during operation:
- They can experience a large number of concurrent users.
- They can experience server crashes.
- They can suffer an extremely high load of requests, and so on.
Non-functional requirements essentially describe the expected environment of the system, with an emphasis on edge cases.
There are five non-functional requirements that we will usually deal with. Those five are:
- Performance
- Load
- Data Volume
- Concurrent Users
- SLA (Service Level Agreement)
Performance

Well, performance sounds like a simple requirement, right?
What is the required performance for this system? Fast.
When we are talking about performance, there are two things we should keep in mind:
- Always talk in numbers
- Latency and Throughput
Always talk in Numbers
When the client asks for a fast system, your next question should be, what is fast? Fast can mean a lot of things in a lot of systems.
I worked on systems where fast meant 30 milliseconds, and on systems where fast meant 5 seconds.
The problem is that your client probably wasn’t thinking about the exact number, and you will have to help her with it.
The rule of thumb is that when there is an end user at the end of the flow, we usually need the task to be completed in less than a second.
When working in a B2B (Business-to-Business) environment, we usually look at faster systems, which may be required to complete a task in as little as 100 milliseconds.
The reason is that we human beings are relatively insensitive to small delays: to us, data displayed after one second or after 700 milliseconds looks almost the same. For software running on a CPU whose cycles take a fraction of a nanosecond, however, this would be a very long time.
But again, the most important thing is to work out this number together with the client or system analyst.
Latency and Throughput

Latency answers the question: how much time does it take to perform a single task in the application?
For example, how much time will it take for the API to set user data in the database? Or how much time will it take to read a single file from the filesystem?
You can see that latency deals with the time it takes to perform a single task.
On the other hand, throughput answers a completely different question: how many tasks can be performed in a given time unit?
For example, how many users can be saved in the database in a minute? Or how many files can be read in a second?
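To make the distinction concrete, here is a minimal sketch that measures both metrics for the same task. The `save_user` function is a hypothetical stand-in for a real database write:

```python
import time

def save_user(user_id):
    """Hypothetical stand-in for a real database write; here just a fixed delay."""
    time.sleep(0.001)  # pretend the write takes about 1 ms

def measure(task, runs=200):
    """Return (average latency in seconds, throughput in tasks per second)."""
    start = time.perf_counter()
    for i in range(runs):
        task(i)
    elapsed = time.perf_counter() - start
    return elapsed / runs, runs / elapsed

latency, throughput = measure(save_user)
print(f"latency:    {latency * 1000:.1f} ms per task")
print(f"throughput: {throughput:.0f} tasks per second")
```

Note that for a strictly sequential workload like this one, throughput is simply the inverse of latency; the two diverge once tasks run in parallel.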
Load

The load non-functional requirement defines the quantity of work the application will have to withstand without crashing. The exact definition of the load depends on the type of application. For example, for a WebAPI-based application, the load will usually be defined as the number of concurrent requests the system must handle without crashing.
Load vs Throughput
Note that the load requirement looks similar to throughput, which defines how many requests can be handled in a specific time unit.
The difference between the two is that throughput describes a rate per time unit, while load describes the peak of work the system must withstand while remaining available, meaning the system should be able to handle the load without crashing.
Data Volume

This requirement defines how much data, in gigabytes or terabytes, the system will accumulate over time.
This requirement is important for a few reasons:
- It will dictate what kind of database we are going to use since not all databases can handle large quantities of data equally.
- It will also determine what type of queries we are going to write because a query in a table of 100,000 rows will be completely different from a query in a table of 100 million rows.
- And, of course, it will help us plan ahead for the storage we need to allocate.
The data volume usually has two aspects:
- How much data is required on “Day One”?
- What is the forecasted data growth?
For example, the system might need 500 megabytes on its first day and is expected to grow by two terabytes annually.
Of course, the growth period can be different and could be weekly, monthly, quarterly, etc.
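Using the example numbers above (500 MB on day one, 2 TB of annual growth), a simple linear forecast can be sketched like this:

```python
def projected_storage_gb(day_one_gb, annual_growth_gb, years):
    """Linear forecast: storage needed after a given number of years."""
    return day_one_gb + annual_growth_gb * years

# Day one: 500 MB (0.5 GB); growth: 2 TB (2000 GB) per year.
for year in range(4):
    print(f"year {year}: {projected_storage_gb(0.5, 2000, year):,.1f} GB")
```

Real growth is often non-linear, so treat this as a first approximation to revisit with the client as usage data comes in.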
Concurrent Users

This requirement defines how many users will be using the system simultaneously.
This requirement is quite similar to the load requirement, which defines how many requests the system should handle simultaneously.
But there is one big difference: the concurrent users requirement describes how many users will be using the system, not how many will be performing requests at any given moment.
This distinction is important because when a user is using a system, there is a lot of dead time in which no action is actually taken.
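One way to relate the two numbers is Little's law: if each user alternates between a request that takes r seconds and think time of t seconds, only the fraction r / (r + t) of users have a request in flight at any moment. A sketch, with made-up numbers:

```python
def concurrent_requests(concurrent_users, request_seconds, think_seconds):
    """Estimate in-flight requests from concurrent users.

    Assumes each user loops: issue a request (request_seconds),
    then pause (think_seconds). Busy fraction = r / (r + t).
    """
    busy_fraction = request_seconds / (request_seconds + think_seconds)
    return concurrent_users * busy_fraction

# 1,000 logged-in users, 0.5 s per request, 4.5 s of think time in between:
print(concurrent_requests(1000, 0.5, 4.5))
```

With these assumed numbers, 1,000 concurrent users translate to only about 100 concurrent requests, which is why the two requirements must be stated separately.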
SLA

SLA, which stands for Service Level Agreement, describes the required uptime for the system as a percentage.
This term is widely used by public cloud providers, and SLA is one of the main points of competition between them. For example, Azure Cosmos DB takes pride in its 99.99% SLA.
This translates to less than an hour of downtime per year.
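The arithmetic behind such figures is simple; here is a quick sketch that converts an SLA percentage into the allowed downtime per year:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def max_downtime_minutes(sla_percent):
    """Allowed downtime per year for a given SLA percentage."""
    return MINUTES_PER_YEAR * (100 - sla_percent) / 100

for sla in (99.0, 99.9, 99.99, 99.999):
    print(f"{sla}% SLA -> {max_downtime_minutes(sla):.1f} minutes/year")
```

A 99.99% SLA allows roughly 52.6 minutes of downtime per year, which is where the "less than an hour" figure comes from.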
The SLA has a great influence on the design of the system.
For example, a system that cannot be brought down must have a sophisticated update mechanism that does not require turning off the system while it is being updated.
One important thing to note about SLA is client expectations.
If you ask the client, "What is the required SLA for the system?", you will usually get an answer along the lines of 100%, or the famous five nines, which is 99.999%.
When this happens, I usually tell him: no problem. For this, we will need to build at least three data centres in different areas, with independent dual power stations and automatic failover between them.
What do you say? This generally brings him down to earth, and we discuss more realistic SLA goals.
So these were the most common non-functional requirements you will need to gather for the system. And again: never start working on the architecture or the solution before you have set those requirements.