Secure by design, in software engineering, means that software products and capabilities have been designed to be foundationally secure.
Alternative security strategies, tactics, and patterns are considered at the beginning of a software design; the best are selected and enforced by the architecture and used as guiding principles for developers.[1] The use of strategic design patterns that have beneficial effects on security is also encouraged, even when those patterns were not originally devised with security in mind.[2]
Secure by design is increasingly becoming the mainstream development approach for ensuring the security and privacy of software systems. In this approach, security is considered and built into the system at every layer, starting with a robust architecture design. Security architectural design decisions are based on well-known security strategies, tactics, and patterns, defined as reusable techniques for achieving specific quality concerns. Security tactics and patterns provide solutions for enforcing the necessary authentication, authorization, confidentiality, data integrity, privacy, accountability, availability, safety and non-repudiation requirements, even when the system is under attack.[3]
Ensuring the security of a software system requires not only designing a robust intended security architecture but also mapping up-to-date security strategies, tactics, and patterns onto software development so that security persists as the system evolves.
Expect attacks
Malicious attacks on software should be assumed to occur, and care should be taken to minimize their impact. Security vulnerabilities are anticipated, along with invalid user input.[4] Closely related is the practice of using "good" software design, such as domain-driven design or cloud native, as a way to increase security by reducing the risk of vulnerability-opening mistakes, even though the design principles used were not originally conceived for security purposes.
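As one illustration of anticipating invalid user input, the following is a minimal Python sketch (the directory path and function name are hypothetical) that validates an untrusted filename before reading a file, rejecting path-traversal values such as "../../etc/passwd" instead of assuming the input is well formed:

import os

BASE_DIR = os.path.realpath("/var/app/uploads")  # hypothetical directory the application serves

def read_user_file(filename: str) -> bytes:
    # Treat the filename as untrusted: resolve it and confirm the result
    # still lies inside BASE_DIR before opening it.
    candidate = os.path.realpath(os.path.join(BASE_DIR, filename))
    if os.path.commonpath([candidate, BASE_DIR]) != BASE_DIR:
        raise ValueError("invalid filename")
    with open(candidate, "rb") as f:
        return f.read()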
Avoid security through obscurity
Generally, designs that work well do not rely on being secret. Secrecy can reduce the number of attackers by demotivating a subset of the threat population: the extra effort needed to understand the system discourages some of them. However, given a virtually unlimited set of threat actors and techniques applied over time, most secrecy-based methods eventually fail. While not mandatory, proper security usually means that everyone is allowed to know and understand the design because it is secure. This has the advantage that many people are looking at the source code, which improves the odds that any flaws will be found sooner (see Linus's law). The disadvantage is that attackers can also obtain the code, which makes it easier for them to find vulnerabilities to exploit. It is generally believed, though, that the advantage of open source code outweighs the disadvantage.
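A common concrete expression of this principle is to keep only cryptographic keys secret while the mechanism itself is public (Kerckhoffs's principle). The following minimal Python sketch (names are illustrative) uses the publicly specified HMAC-SHA-256 construction, relying on the secrecy of the key rather than on the secrecy of the algorithm:

import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # only the key is secret; the algorithm is public

def sign(message: bytes) -> bytes:
    # HMAC-SHA-256 is a published, widely reviewed construction.
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(sign(message), tag)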
Fewest privileges
It is also important that everything works with the fewest privileges possible (see the principle of least privilege). For example, a web server that runs as the administrative user ("root" or "admin") can have the privilege to remove files and users. A flaw in such a program could therefore put the entire system at risk, whereas a web server that runs inside an isolated environment, with only the privileges required for its network and filesystem functions, cannot compromise the system it runs on unless the security around it is itself also flawed.
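The following minimal POSIX-oriented Python sketch (assuming, hypothetically, that the process starts as root and that an unprivileged "www-data" account exists) shows the common pattern of acquiring a privileged resource first and then giving up root, so that any later flaw executes with minimal privileges:

import os
import pwd
import socket

def drop_privileges(username: str = "www-data") -> None:
    # Switch to an unprivileged account; the group IDs must be changed
    # first, while the process is still root.
    user = pwd.getpwnam(username)
    os.setgroups([])
    os.setgid(user.pw_gid)
    os.setuid(user.pw_uid)

# Bind the privileged port while still root, then drop root before serving.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("0.0.0.0", 80))
listener.listen()
drop_privileges()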
Standards and legislation
Standards and legislation exist to aid secure design by controlling the definition of "secure" and by providing concrete steps for testing and integrating secure systems.
Some examples of standards which cover or touch on secure by design principles:
ETSI TS 103 645,[5] which is included in part in the UK Government's "Proposals for regulating consumer smart product cyber security"[6]
Server/client architectures
In server/client architectures, the program on the other side of the connection may not be an authorised client, and the client's server may not be an authorised server. Even when they are, a man-in-the-middle attack could compromise communications.
Often, the easiest way to break the security of a client/server system is not to attack the security mechanisms head on, but to go around them. A man-in-the-middle attack is a simple example of this, because it can be used to collect details needed to impersonate a user. This is why it is important to consider encryption, hashing, and other security mechanisms in the design, so that information collected by a potential attacker does not grant access.
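As one illustration, the following Python sketch (the host name is a placeholder) opens a TLS connection with certificate-chain and hostname verification enabled, which is the standard defence against a man-in-the-middle presenting a forged certificate:

import socket
import ssl

HOST = "example.com"  # placeholder host

# create_default_context() enables certificate and hostname verification.
context = ssl.create_default_context()

with socket.create_connection((HOST, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print(tls_sock.version())  # e.g. "TLSv1.3" once the verified handshake succeeds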
Another key aspect of client-server security design is good coding practice. For example, following a known software design structure, such as client and broker, can help in designing a well-built structure with a solid foundation. Furthermore, if the software is to be modified in the future, it is even more important that it follows a logical separation between the client and server. If a programmer cannot clearly understand the dynamics of the program, they may end up adding or changing something that introduces a security flaw. Even with the best design this is always a possibility, but the better standardized the design, the less chance there is of this occurring.
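As a rough sketch of the client-and-broker structure mentioned above (class and method names are hypothetical), the client depends only on a broker interface, which provides a single place to enforce checks before requests reach server-side logic:

from abc import ABC, abstractmethod

class Broker(ABC):
    # The client codes against this interface, never against the server directly.
    @abstractmethod
    def request(self, action: str, payload: dict) -> dict: ...

class ValidatingBroker(Broker):
    # One central place to reject unexpected actions before they reach the server.
    ALLOWED_ACTIONS = {"get_profile", "update_profile"}

    def request(self, action: str, payload: dict) -> dict:
        if action not in self.ALLOWED_ACTIONS:
            raise PermissionError(f"action not permitted: {action}")
        # A real broker would forward the request to the server here; return a stub instead.
        return {"status": "ok", "action": action}

class Client:
    def __init__(self, broker: Broker) -> None:
        self.broker = broker

    def get_profile(self, user_id: str) -> dict:
        return self.broker.request("get_profile", {"user_id": user_id})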
^ Santos, Joanna C. S.; Tarrit, Katy; Mirakhorli, Mehdi (2017). "A Catalog of Security Architecture Weaknesses". 2017 IEEE International Conference on Software Architecture Workshops (ICSAW). pp. 220–223. doi:10.1109/ICSAW.2017.25. ISBN 978-1-5090-4793-2. S2CID 19534342.
^ Dan Bergh Johnsson; Daniel Deogun; Daniel Sawano (2019). Secure By Design. Manning Publications. ISBN 9781617294358.
^ Hafiz, Munawar; Adamczyk, Paul; Johnson, Ralph E. (October 2012). "Growing a pattern language (for security)". Proceedings of the ACM international symposium on New ideas, new paradigms, and reflections on programming and software. pp. 139–158. doi:10.1145/2384592.2384607. ISBN 9781450315623. S2CID 17206801.