Tips on Security Architecture and Design
- System architecture is a formal tool used to design computer systems in a manner that ensures each of the stakeholders’ concerns is addressed.
- A system’s architecture is made up of different views, which are representations of system components and their relationships. Each view addresses a different aspect of the system (functionality, performance, interoperability, security).
- ISO/IEC 42010:2007 is an international standard that outlines how system architecture frameworks and their description languages are to be used.
- A CPU contains a control unit, which controls the timing of the execution of instructions and data, and an ALU, which performs mathematical functions and logical operations.
- Memory managers use various memory protection mechanisms, such as base (beginning) and limit (ending) registers, address space layout randomization (ASLR), and data execution prevention (DEP).
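The base/limit mechanism above can be sketched in a few lines. This is an illustrative model only (the `MemoryManager` class and its method names are invented, not any real OS's implementation): every logical address a process uses is translated and checked against the base and limit registers before the access is allowed.

```python
# Toy model of base/limit memory protection. A process may only touch
# physical addresses between its base (start) and limit (end).

class MemoryManager:
    def __init__(self, physical_memory_size):
        self.memory = bytearray(physical_memory_size)

    def access(self, base, limit, logical_address):
        """Translate a logical address and enforce the base/limit bounds."""
        physical = base + logical_address
        if not (base <= physical < limit):
            raise MemoryError("segmentation fault: address outside base/limit")
        return self.memory[physical]

mm = MemoryManager(1024)
mm.access(base=100, limit=200, logical_address=50)       # inside bounds: allowed
try:
    mm.access(base=100, limit=200, logical_address=150)  # outside bounds
except MemoryError as e:
    print(e)
```

A real memory manager performs this check in hardware on every reference; the sketch only shows the comparison that makes isolation between processes possible.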
- Operating systems use absolute (hardware address), logical (indexed address), and relative (indexed address plus an offset) memory addressing schemes.
- Buffer overflow vulnerabilities are best addressed by implementing bounds checking.
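A minimal sketch of bounds checking, the defense named above: a fixed-size buffer that validates the length of incoming data before copying it, so oversized input is rejected instead of overwriting adjacent memory. The `FixedBuffer` class is illustrative.

```python
# Bounds checking: refuse any write larger than the allocated buffer,
# which is the core defense against buffer overflows.

class FixedBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = bytearray(capacity)

    def write(self, payload: bytes):
        # The bounds check: validate length BEFORE copying.
        if len(payload) > self.capacity:
            raise ValueError("input exceeds buffer capacity")
        self.data[:len(payload)] = payload

buf = FixedBuffer(16)
buf.write(b"hello")        # fits within the 16-byte buffer
# buf.write(b"A" * 64)     # would raise ValueError instead of overflowing
```

In languages like C, the same idea appears as checking an input's length against `sizeof(buffer)` before calling a copy routine.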
- A garbage collector is a software tool that releases unused memory segments to help prevent “memory starvation.”
- Different processor families work within different microarchitectures to execute specific instruction sets.
- Early operating systems were considered “monolithic” because all of the code worked within one layer and ran in kernel mode, and components communicated in an ad hoc manner.
- Operating systems can work within the following architectures: monolithic kernel, microkernel, or hybrid kernel.
- Mode transition occurs when a CPU switches from executing one process’s instructions in user mode to another process’s instructions in kernel mode.
- CPUs provide a ringed architecture that operating systems run within. More trusted processes run in the lower-numbered rings and have access to all or most of the system resources; nontrusted processes run in higher-numbered rings and have access to fewer resources.
- Operating system processes are executed in privileged or supervisor mode, and applications are executed in user mode, also known as “problem state.”
- Virtual storage combines RAM and secondary storage so the system seems to have a larger bank of memory.
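The idea of virtual storage can be illustrated with a toy model (all names here are invented for the sketch): a small number of RAM frames backed by larger secondary storage, so a process sees more pages than physical memory holds. Reading a page not in RAM triggers a "page fault" that loads it from disk, evicting a resident page if RAM is full.

```python
# Toy model of virtual storage: limited RAM frames backed by a larger
# secondary store, with a simplistic page-replacement policy.

class VirtualMemory:
    def __init__(self, ram_frames, total_pages):
        self.ram_frames = ram_frames
        self.ram = {}                                  # resident pages
        self.disk = {p: f"page-{p}" for p in range(total_pages)}

    def read(self, page):
        if page not in self.ram:                       # page fault
            if len(self.ram) >= self.ram_frames:
                self.ram.popitem()                     # evict a resident page (simplified)
            self.ram[page] = self.disk[page]           # load from secondary storage
        return self.ram[page]

vm = VirtualMemory(ram_frames=2, total_pages=8)
vm.read(3)    # fault: loaded from disk into RAM
vm.read(3)    # hit: already resident
```

Real systems choose eviction victims with algorithms such as LRU and write dirty pages back to disk; the sketch only shows the swap between the two storage tiers.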
- The more complex a security mechanism is, the less assurance it can usually provide.
- The trusted computing base (TCB) is a collection of system components that enforce the security policy directly and protect the system. These components are within the security perimeter.
- Components that make up the TCB are hardware, software, and firmware that provide some type of security protection.
- A security perimeter is an imaginary boundary that has trusted components within it (those that make up the TCB) and untrusted components outside it.
- The reference monitor concept is an abstract machine that ensures all subjects have the necessary access rights before accessing objects. Therefore, it mediates all access to objects by subjects.
- The security kernel is the mechanism that actually enforces the rules of the reference monitor concept.
- The security kernel must isolate processes carrying out the reference monitor concept, must be tamperproof, must be invoked for each access attempt, and must be small enough to be properly tested.
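The mediation requirement above can be sketched as a single choke point consulted on every access attempt. The access control list and subject/object names below are illustrative, not a real kernel interface:

```python
# Sketch of the reference monitor concept: one function mediates ALL
# access by subjects to objects, granting only explicitly authorized rights.

ACL = {
    ("alice", "payroll.db"): {"read"},
    ("bob",   "payroll.db"): {"read", "write"},
}

def reference_monitor(subject, obj, operation):
    """Invoked for every access attempt; default-deny if no entry exists."""
    allowed = ACL.get((subject, obj), set())
    return operation in allowed

assert reference_monitor("alice", "payroll.db", "read")        # authorized
assert not reference_monitor("alice", "payroll.db", "write")   # denied
assert not reference_monitor("mallory", "payroll.db", "read")  # unknown subject: denied
```

The three security-kernel requirements map directly onto this sketch: the function must be tamperproof, must be impossible to bypass (invoked for each access), and must be small enough to verify.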
- Processes need to be isolated, which can be done through segmented memory addressing, encapsulation of objects, time multiplexing of shared resources, naming distinctions, and virtual mapping.
- The level of security a system provides depends upon how well it enforces its security policy.
- A multilevel security system processes data at different classifications (security levels), and users with different clearances (security levels) can use the system.
- Data hiding occurs when processes work at different layers and have layers of access control between them. Processes need to know how to communicate only with each other’s interfaces.
- A security model maps the abstract goals of a security policy to computer system terms and concepts. It gives the security policy structure and provides a framework for the system.
- A closed system is often proprietary to the manufacturer or vendor, whereas an open system allows for more interoperability.
- The Bell-LaPadula model deals only with confidentiality, while the Biba and Clark-Wilson models deal only with integrity.
- A state machine model deals with the different states a system can enter. If a system starts in a secure state, all state transitions take place securely, the system shuts down and fails securely, and the system will never end up in an insecure state.
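A toy encoding of the state machine model (the states and transitions are invented for illustration): only transitions enumerated as secure are permitted, so a system that starts in a secure state can never reach an insecure one.

```python
# State machine model: the set of secure transitions is fixed in advance,
# and any transition outside that set is refused.

SECURE_TRANSITIONS = {
    ("booting", "running"),
    ("running", "locked"),
    ("locked", "running"),
    ("running", "shutdown"),
}

def transition(current, target):
    if (current, target) not in SECURE_TRANSITIONS:
        raise RuntimeError(f"insecure transition {current} -> {target} refused")
    return target

state = "booting"
state = transition(state, "running")   # a defined secure transition
```

Note that even failure is modeled: shutting down ("running" to "shutdown") is itself a defined secure transition, matching the requirement that the system fail securely.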
- A lattice model provides an upper bound and a lower bound of authorized access for subjects.
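The bounds idea can be shown concretely. This sketch assumes a simple linear ordering of levels (real lattices can be partially ordered, with compartments): a subject may access only objects whose level falls between its lower and upper bound.

```python
# Lattice model sketch: each subject has a lower and upper bound of
# authorized access; an object is accessible only inside that range.

LEVELS = ["public", "sensitive", "secret", "top-secret"]

def within_bounds(lower, upper, object_level):
    lo, hi, obj = (LEVELS.index(l) for l in (lower, upper, object_level))
    return lo <= obj <= hi

assert within_bounds("public", "secret", "sensitive")        # inside bounds
assert not within_bounds("public", "secret", "top-secret")   # above upper bound
```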
- An information flow security model does not permit data to flow to an object in an insecure manner.
- The Bell-LaPadula model has a simple security rule, which means a subject cannot read data from a higher level (no read up). The *-property rule means a subject cannot write to an object at a lower level (no write down). The strong star property rule dictates that a subject can read and write to objects at its own security level.
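The three Bell-LaPadula rules above can be encoded in a few comparisons. The linear ordering of levels is an assumption for simplicity:

```python
# Bell-LaPadula: confidentiality rules over ordered security levels.

LEVELS = ["unclassified", "confidential", "secret", "top-secret"]
rank = LEVELS.index

def can_read(subject_level, object_level):
    # Simple security rule: no read up.
    return rank(subject_level) >= rank(object_level)

def can_write(subject_level, object_level):
    # *-property rule: no write down.
    return rank(subject_level) <= rank(object_level)

def strong_star(subject_level, object_level):
    # Strong star property: read/write only at the subject's own level.
    return subject_level == object_level

assert can_read("secret", "confidential")        # reading down is allowed
assert not can_read("confidential", "secret")    # no read up
assert not can_write("secret", "confidential")   # no write down
```

The direction of each comparison is the whole model: information may flow upward (read down, write up) but never downward, which is what keeps high-level data confidential.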
- The Biba model does not let subjects write to objects at a higher integrity level (no write up), and it does not let subjects read data at a lower integrity level (no read down). This is done to protect the integrity of the data.
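Biba is the integrity dual of Bell-LaPadula, so the comparisons simply flip, as this sketch shows (again assuming linear levels):

```python
# Biba: integrity rules. The concern is contamination of high-integrity
# data by low-integrity sources, so the inequalities are reversed
# relative to Bell-LaPadula.

INTEGRITY = ["untrusted", "medium", "high"]
rank = INTEGRITY.index

def can_read(subject_level, object_level):
    # Simple integrity axiom: no read down.
    return rank(subject_level) <= rank(object_level)

def can_write(subject_level, object_level):
    # *-integrity axiom: no write up.
    return rank(subject_level) >= rank(object_level)

assert not can_read("high", "untrusted")   # no read down: avoid dirty input
assert not can_write("medium", "high")     # no write up: avoid dirty output
```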
- The Bell-LaPadula model is used mainly in military and government-oriented systems. The Biba and Clark-Wilson models are used in the commercial sector.
- The Clark-Wilson model dictates that subjects can only access objects through applications. This model also illustrates how to provide functionality for separation of duties and requires auditing tasks within software.
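Clark-Wilson is often described in terms of access triples: a subject may manipulate a constrained data item (CDI) only through an authorized transformation procedure (TP), and every attempt is audited. The subject, TP, and CDI names below are illustrative.

```python
# Clark-Wilson sketch: subjects reach data only through applications
# (transformation procedures), with auditing of every invocation.

TRIPLES = {("clerk", "post_payment", "ledger")}   # authorized (subject, TP, CDI)
audit_log = []

def invoke(subject, tp, cdi):
    if (subject, tp, cdi) not in TRIPLES:
        audit_log.append(("DENIED", subject, tp, cdi))
        raise PermissionError("no authorized access triple")
    audit_log.append(("OK", subject, tp, cdi))
    return f"{tp} applied to {cdi}"

invoke("clerk", "post_payment", "ledger")   # the only permitted path to the ledger
```

Separation of duties falls out of the triples: different subjects can be bound to different TPs over the same CDI (e.g., one clerk posts payments, another approves them), and no subject can bypass the TP layer to edit the CDI directly.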
- If a system is working in a dedicated security mode, it only deals with one level of data classification, and all users must have this level of clearance to be able to use the system.
- Trust means that a system uses all of its protection mechanisms properly to process sensitive data for many types of users. Assurance is the level of confidence you have in this trust, i.e., that the protection mechanisms behave predictably in all circumstances.
- The Orange Book, also called the Trusted Computer System Evaluation Criteria (TCSEC), was developed to evaluate systems built to be used mainly by the government. Its use was expanded to evaluate other types of products.
- The Orange Book deals mainly with stand-alone systems, so a range of books were written to cover many other topics in security. These books are called the Rainbow Series.
- ITSEC evaluates the assurance and functionality of a system’s protection mechanisms separately, whereas TCSEC combines the two into one rating.