Cram Notes
Introduction to Key Concepts:
We will cover, at a high level, the following concepts which will be required on the exam:
3.1 - Research, Implement, and Manage Engineering Processes Using Secure Design Principles
Traditional Concepts:
- Threat Modeling: A systematic approach to identifying, assessing, and mitigating potential vulnerabilities in a system.
- Least Privilege: Grant users the minimal levels of access or permissions they need to perform their work.
- Defense in Depth: A multilayered security approach designed to provide redundancy and mitigate the potential impact of a security breach.
- Secure Defaults: Configuration settings preset by manufacturers to minimize security risks.
- Fail Securely: Systems should default to a secure state in the event of a failure.
Contemporary Concepts:
- Keep It Simple: Simplicity in design reduces the potential for security vulnerabilities.
- Zero Trust: A security model where every request is fully authenticated, authorized, and encrypted before access is granted.
- Privacy by Design: Integrate data privacy protections from the initial design stages of systems or processes.
- Trust but Verify: Always verify the legitimacy of information, even from trusted sources.
- Shared Responsibility: Security is not the responsibility of a single party; it should be shared among all stakeholders involved.
3.2 - Understand the Fundamental Concepts of Security Models
Examples: the Biba model, the Bell-LaPadula model, and the state machine model.
3.3 - Select Controls Based on System Security Requirements
This involves identifying appropriate security measures based on the specific security requirements of a system.
3.4 - Understand Security Capabilities of Information Systems
For instance, the application of encryption and decryption techniques to protect data (like the Trusted Platform Module, or TPM).
3.5 - Assess and Mitigate the Vulnerabilities of Security Architectures, Designs, and Solution Elements
Identifying potential security weaknesses in systems and implementing measures to reduce the risk of these vulnerabilities being exploited.
3.6 - Select and Determine Cryptographic Solutions
This involves choosing appropriate cryptographic techniques based on the system's security requirements.
3.7 - Understand Methods of Cryptanalytic Attacks
These attacks are covered extensively in the Attacks and Countermeasures chapter. Examples include brute-force, ciphertext-only, known-plaintext, chosen-plaintext, and side-channel attacks.
3.8 - Apply Security Principles to Site and Facility Design
This involves integrating security considerations into the design and layout of physical spaces where systems or information are housed.
3.9 - Design Site and Facility Security Controls
Establishing security measures in the physical environment to protect system and information resources. This could involve barriers, surveillance, or controlled access points, among others.
Zero Trust Security
Zero Trust Security seeks to address the shortcomings of traditional perimeter-based security models. At its core, it operates on the principle of "never trust, always verify."
Key Elements:
- User Identity as Control Plane: This shifts the focus from merely securing the network perimeter to treating user identity as the core security element.
- Assumption of Breach: Zero Trust inherently assumes a potential compromise or breach. It operates on the premise that every request, even those from within the organization, could be a threat.
Core Components of Zero Trust Security:
- Identity Verification: This involves rigorous identity verification protocols to authenticate each user. For instance, multifactor authentication and strict password policies can be used.
- Device Management: Only devices compliant with the organization's security standards are allowed access to resources. This may involve ensuring devices are updated, have enabled firewalls, and use antivirus software.
- Application Management: Only secure, organization-approved applications are permitted access to sensitive data. These applications are regularly scanned and updated to eliminate potential vulnerabilities.
- Data Protection: Data is encrypted both at rest and in transit to ensure its safety, even if an unauthorized entity were to gain access.
Example: Consider a digital library that hosts thousands of rare and valuable books. In the past, this library used a simple username-password system for access. However, they faced a series of breaches due to stolen credentials and decided to move towards a Zero Trust Security model.
Under the Zero Trust approach, every access request to the digital library is treated as a potential threat, regardless of whether it comes from a long-time member or a new visitor.
Each user is required to verify their identity via multifactor authentication. The library also checks the security status of the device making the request to ensure it doesn't pose a risk. Only approved reading apps can access the digital books, and all the data is encrypted to protect it from unauthorized access.
In this way, the digital library successfully transitions to a Zero Trust Security model, ensuring the safety and integrity of its rare and valuable collection.
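The checks in the library example can be sketched as a single all-or-nothing access decision. This is a minimal illustration, not a real product API; the function and parameter names are invented for the sketch:

```python
def zero_trust_decision(user_authenticated: bool,
                        mfa_passed: bool,
                        device_compliant: bool,
                        app_approved: bool) -> str:
    """Never trust, always verify: every request must satisfy every check."""
    checks = [user_authenticated, mfa_passed, device_compliant, app_approved]
    return "ALLOW" if all(checks) else "DENY"

# A long-time member, MFA done, on a compliant device, via an approved app:
# zero_trust_decision(True, True, True, True) -> "ALLOW"
# The same member from a non-compliant laptop is still denied:
# zero_trust_decision(True, True, False, True) -> "DENY"
```

Note that prior status carries no weight: failing any one check denies the request, which is exactly the "assumption of breach" posture.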
Secure Defaults
This principle states that the default configuration of any system, application, or service should inherently reflect a restrictive and conservative enforcement of the security policy. In essence, systems should be 'secure out of the box'. This principle applies not only to the practices within your organization, but also to the expectations you should have of your hardware, software, and service vendors.
Example: A server should come with the minimal set of open ports necessary for its operation, and an application should have all its optional features turned off by default.
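One way to picture "secure out of the box" is a configuration object whose defaults already enforce the restrictive policy, so an operator has to take deliberate action to weaken it. This is a hypothetical sketch; the field names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class ServerConfig:
    """Hypothetical server configuration whose defaults are restrictive."""
    debug: bool = False            # verbose error pages off by default
    require_tls: bool = True       # encrypted transport on by default
    open_ports: tuple = (443,)     # only the minimal port needed
    optional_features: tuple = ()  # every optional feature starts disabled

cfg = ServerConfig()  # secure with zero configuration effort
```

The insecure state (debug on, extra ports open) must be opted into explicitly, which makes it visible in code review.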
Fail Securely
"Fail Securely" dictates that components should default to a state that denies access when a failure occurs, rather than granting access. This principle ensures that even in the event of an unexpected system or application failure, security is maintained.
Example: if an authentication server fails, the system should not allow all users to log in freely; instead, it should prevent all users from logging in until the issue is resolved. This principle protects against unauthorized access that could occur during system malfunctions or failures.
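The authentication-server example can be expressed as a default-deny wrapper around the authentication call. This is a minimal sketch: the backend is stubbed to simulate an outage, and the function names are invented for illustration:

```python
def authenticate(username: str, password: str) -> bool:
    """Stub for an auth backend; here it simulates a server outage."""
    raise ConnectionError("authentication server unreachable")

def login(username: str, password: str) -> bool:
    # Fail securely: any failure in the auth path results in a denial,
    # never an accidental allow.
    try:
        return authenticate(username, password)
    except Exception:
        return False  # default-deny when anything goes wrong

# With the auth server down, every login attempt is refused.
```

The key design choice is that the exception handler returns the deny value; a "fail open" version that returned True on error would grant access during exactly the malfunctions this principle guards against.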

Trust but Verify
Historically, Trust but Verify was the norm in security. Under this principle, once a user gained access to the 'secured' area of a system (for example, after entering a password), they were largely trusted to move within that area without constant verification.
However, the evolution of cyber threats rendered this approach inadequate. Adversaries learned how to bypass initial security checks or exploit the trust granted within the system. Imagine a burglar breaking into a house and then freely roaming inside, asking the family for sensitive information. Any sensible person wouldn't trust the burglar just because they're already inside the house.
This realization led to the emergence of Zero Trust Security. This modern model operates on the belief that threats can come from anywhere, even from within the system. Therefore, it continuously verifies the identity of everyone and everything trying to connect to the system, regardless of their prior status. This strategy is akin to having security cameras in every room of the house, not just at the entrance. By doing so, the system can better fortify itself against potential threats.
Privacy by Design
Privacy by Design is a framework that integrates privacy considerations into the fabric of systems, technologies, policies, and design processes. It's rooted in seven foundational principles developed by Ann Cavoukian, the former Information and Privacy Commissioner of Ontario.
Applying these principles as part of a layered defense strategy (defense in depth) within a Zero Trust framework helps to ensure privacy while maintaining a robust security posture.
1. Proactive not Reactive
This principle encourages a forward-thinking approach to privacy, where potential issues and privacy breaches are anticipated and prevented before they occur, rather than addressed after the fact.
2. Privacy as Default Setting
Systems should automatically protect users' privacy; individuals shouldn't have to take extra steps to secure their private data. By default, personal data should not be collected or shared without the individual's consent.
3. Privacy Embedded into Design
Privacy is not an afterthought or an add-on feature; it's a core component that should be part of the system's design and architecture from the very beginning.
4. Positive-Sum not Zero-Sum
The positive-sum approach means that privacy and other considerations, like security or usability, can all be achieved in tandem without sacrificing one for the other. The zero-sum approach, by contrast, views privacy and other factors as trade-offs, where improving one would degrade the other.
5. End-to-End Security — Full Lifecycle Protection
This principle mandates the protection of data from the moment it's collected until its final disposition. This means securing it during storage, processing, and transmission, as well as when it is deleted or anonymized.
6. Visibility and Transparency
Organizations must be open and transparent about their data practices, including how data is collected, used, and stored. This principle is often implemented through comprehensive privacy policies and clear user communications.
7. Respect for User Privacy
User-centric privacy means giving users control over their data. They should be informed about their data use and have the power to opt in or out. It also includes complying with regulations like the General Data Protection Regulation (GDPR), which strengthens individuals' privacy rights.
Keep It Simple Stupid (KISS)
Complexity is the worst enemy of security.
—Bruce Schneier
The KISS principle is a timeless concept that extends beyond cybersecurity. At its core, the principle argues that simpler designs are often the best.
Consider Bob, an enthusiastic cybersecurity manager, who unveils a security system so intricate that sending an email takes 10 authentication steps and a 500-page manual:
- During the launch, Bob asks Alice, the CEO, to demo the system. She spends 15 minutes logging in, only to get blocked: "Suspicious activity detected."
- In the following weeks, employees become so frustrated with the cumbersome system that they start to bypass it.
- They share passwords, keep themselves permanently logged in, and even start using personal email for official communication.
Despite Bob's high-tech approach, security is now weaker than ever due to non-compliance and workarounds.
A good example of the KISS principle in action is the secure operating system, Qubes OS. The team behind Qubes OS chose Xen for its simplicity, despite the fact that Kernel-based Virtual Machine (KVM) has more features. While KVM may offer more functionalities, its complexity could lead to potential security vulnerabilities, reinforcing why simplicity can be paramount in cybersecurity.
Best-in-Suite vs Best-in-Breed
"Best-in-suite" and "best-in-breed" are two approaches to choosing software solutions. "Best-in-suite" refers to a collection of products that work well together because they're from the same vendor. In contrast, "best-in-breed" selects the best product for each function, regardless of the vendor.
For example, choosing a single vendor like Microsoft for your organization's needs would mean using Office 365 for document collaboration, Outlook for email, and Teams for communication. This is a best-in-suite approach. It simplifies defense-in-depth because these products are designed to integrate smoothly, minimizing compatibility issues and gaps in security.

On the other hand, a best-in-breed approach might involve selecting Google Docs for document collaboration, Outlook for email, and Slack for communication, because each is arguably the best in its respective category. However, integrating these disparate systems can create complexity and potential vulnerabilities.
The Value of Simplicity
Simplicity helps to avoid configuration mistakes and leads to better-integrated and smarter security layers. It doesn't necessarily mean you'll have a single security vendor, but you may have fewer vendors, and you'll likely rely on a standardized suite that serves as your organization's foundation.
For instance, you might choose a Google suite for all your collaborative needs or a Microsoft 365 suite, but not both. Simplicity allows organizations to focus on incremental improvements, rather than striving for unattainable perfection.
Security as a Service (SECaaS)
Security as a Service, often abbreviated as SECaaS, refers to a cloud computing model where security services are provided remotely by an online entity. Instead of an organization having to maintain its own security infrastructure and team, it outsources these functions to a SECaaS provider. These services can encompass a broad range of security aspects, including intrusion detection, malware scanning, data loss prevention, and more.
Internet of Things (IoT)
The Internet of Things, or IoT, refers to a network of physical devices — everything from home appliances to industrial machinery — that are connected to the internet. These devices, which are often equipped with sensors, software, and other technologies, collect and exchange data, enabling automation, remote control, and AI processing capabilities in home or business settings.
For instance, a smart thermostat in your home can adjust the temperature based on your preferences, the time of day, or even the weather forecast, all automatically. On a larger scale, IoT devices in manufacturing plants can monitor equipment performance, detecting potential problems before they cause failures.
Smart Devices
Smart devices are a subset of IoT devices characterized by their ability to offer customization options, typically through the installation of apps. These mobile devices, such as smartphones or tablets, can use on-device or in-the-cloud artificial intelligence (AI) processing to deliver personalized and intelligent services.
For example, your smartphone might use AI to learn your daily patterns and automate certain tasks, like turning on "do not disturb" mode during your typical sleeping hours. Similarly, voice assistants like Amazon's Alexa or Google's Assistant use AI processing to understand spoken commands and provide relevant responses or actions.
Security Information & Event Management (SIEM)
Imagine a city, buzzing with life, people going about their business, and traffic flowing through its veins. Now, imagine this city is your network, and the SIEM is the high-tech surveillance system constantly monitoring the city's heartbeat.
SIEM, or Security Information and Event Management, serves as the control tower of this bustling metropolis. It gathers data from various sources across the network, akin to the many CCTV cameras across our city, watching for unusual activities and traffic anomalies.
Like a seasoned detective, it sifts through this wealth of information, interpreting it, looking for clues and patterns. It utilizes advanced technologies like User Behavior Analytics (UBA), Artificial Intelligence (AI), and Machine Learning (ML) to identify potential threats. Imagine our control tower spotting a suspicious vehicle, moving erratically through traffic, and sounding the alarm - that's the SIEM alerting the security teams of potential threats before they escalate, keeping our city safe and secure.
Security Orchestration, Automation, and Response (SOAR)
Now, enter SOAR, the highly efficient and proactive police force of our city. When the SIEM control tower spots a potential threat and sounds the alarm, Security Orchestration, Automation, and Response (SOAR) springs into action.
Acting as a centralized command center, SOAR organizes the response to these alerts. Equipped with a playbook for different threat scenarios - like our police force having specific protocols for dealing with a suspicious vehicle, a burglary, or a missing person - SOAR ensures a swift and effective response. It could be an automated chase by drone or a single-click authorization for a roadblock - the response depends on the nature of the threat.
Working in unison, SIEM and SOAR create a harmonious symphony of modern cybersecurity. The vigilant eyes of SIEM, combined with the quick response of SOAR, provide a comprehensive defense mechanism, keeping our city - your network - safe from threats.
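Stripped of the analogy, the SIEM-to-SOAR handoff is a detection rule feeding a playbook lookup. The sketch below is a toy model under loud assumptions: the rule, threshold, and playbook names are all invented, and real platforms correlate far richer telemetry:

```python
from typing import Optional

# Hypothetical SOAR playbooks keyed by alert type.
PLAYBOOKS = {
    "brute_force": "lock account, notify SOC",
    "malware":     "isolate host, trigger AV scan",
}

def siem_detect(failed_logins: int) -> Optional[str]:
    """Toy SIEM correlation rule: many failed logins -> brute-force alert."""
    return "brute_force" if failed_logins >= 5 else None

def soar_respond(alert: Optional[str]) -> str:
    """SOAR dispatch: run the matching playbook, or escalate to a human."""
    if alert is None:
        return "no action"
    return PLAYBOOKS.get(alert, "escalate to analyst")
```

The separation of concerns mirrors the text: SIEM decides *whether* something is suspicious; SOAR decides *what to do about it*, consistently and fast.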
Microservices and Service-Oriented Architecture (SOA)
Service-Oriented Architecture (SOA) is all about creating distinct, user-accessible services that operate in a black-box fashion. However, you might not hear much about it these days. Its relevance has faded somewhat as it's been largely replaced by a newer concept: microservices. Let's take a real-world example: constructing a building. SOA would be akin to building separate rooms (services) in a house (the application). Each room has a specific function but doesn't need to know the specifics of the others; it operates in a 'black-box' fashion.
Microservices are essentially more refined services that perform specific functions. They represent a modern twist on the traditional SOA model, but they're better suited for cloud computing environments. For instance, they're designed to perform optimally on containerized platforms such as Docker or Kubernetes. Continuing with our example, as times change, though, we find ourselves preferring an open-floor concept (microservices). This newer approach still has discrete areas serving different purposes, but they're more integrated, flexible, and cloud-oriented, like a modern home designed for the digital age.
At the coding level, it's crucial to spot potential vulnerabilities early in the development lifecycle. This task can be accomplished using tools such as static code analysis and dynamic testing. These should be integrated early in the Continuous Integration/Continuous Delivery (CI/CD) process. The goal is to pinpoint and correct deficiencies before the product is released, enhancing its security and reliability.
Identifying vulnerabilities in this 'construction' process is like hiring a building inspector to identify flaws in your house's design or construction. Static code analysis (SAST) is like the pre-construction blueprint examination, ensuring everything seems solid before building commences. Dynamic testing (DAST), on the other hand, is akin to checking the house's stability and function after it's been built. These are vital steps to make sure your building, or in our case, your application, is secure and functioning as expected.
Finally, think of static code analysis like a routine health check-up, except for your code. It's also known as Static Application Security Testing (SAST), which we'll dive into more in domain 8. Similarly, dynamic testing is another method of ensuring your code's health. It's also referred to as Dynamic Application Security Testing (DAST), and we'll explore it further as well.
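To make "static analysis" concrete, here is a deliberately tiny SAST-style check built on Python's standard `ast` module: it walks a program's syntax tree without running it and flags calls to functions on a ban list. Real SAST tools are vastly more sophisticated; the ban list and sample code are illustrative:

```python
import ast

def find_dangerous_calls(source: str, banned=("eval", "exec")) -> list:
    """Tiny static check: walk the AST and flag calls to banned functions."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in banned):
            findings.append((node.lineno, node.func.id))
    return findings

sample = "x = eval(user_input)\nprint(x)"
# find_dangerous_calls(sample) flags the eval on line 1; print is untouched.
```

Because the code is never executed, a check like this can run on every commit in the CI/CD pipeline, which is exactly the "inspect the blueprint before construction" role described above.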
Containerization
Containerization is a flexible, efficient way to package applications for multiple platforms, distinct from virtualization. It's akin to packing only necessary items in a suitcase, as opposed to taking your entire house on a trip.
Containers don't carry a full operating system, making them lighter and quicker than virtual machines. They share the host system's OS kernel, enhancing resource usage efficiency.
Containerization excels in software development, offering consistent functioning across various environments. It reduces discrepancies between local and production environments, improving the development lifecycle's efficiency.

In terms of security, containerization's focus spans two main areas: DevOps and application-level security. For DevOps, it offers isolation at the container level, safeguarding against potential vulnerabilities in one container affecting others. This is a key aspect of DevOps security, ensuring a contained environment for each service or microservice, reducing the risk of system-wide failures or breaches.
Regarding application-level security, containerization emphasizes authentication (AuthN) and authorization (AuthZ). Authentication verifies the identity of a user, device, or system. Authorization, on the other hand, determines what permissions an authenticated entity has, dictating what it can and cannot do. These security measures further bolster the application's protection within the container, contributing to a safer deployment and operational environment.
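The AuthN/AuthZ distinction can be shown in a few lines of in-app logic, as might run inside one container. This is a hypothetical sketch: the user table, roles, and status strings are invented, and a real system would store salted password hashes, never plaintext:

```python
# Toy user store; real systems store salted hashes, not plaintext passwords.
USERS = {"alice": {"password": "s3cret", "roles": {"reader", "admin"}},
         "bob":   {"password": "hunter2", "roles": {"reader"}}}

def authenticate(username: str, password: str) -> bool:
    """AuthN: is this entity who it claims to be?"""
    user = USERS.get(username)
    return user is not None and user["password"] == password

def authorize(username: str, required_role: str) -> bool:
    """AuthZ: is this authenticated identity permitted to do this?"""
    return required_role in USERS.get(username, {}).get("roles", set())

def delete_record(username: str, password: str) -> str:
    if not authenticate(username, password):
        return "401 Unauthorized"   # identity not verified (AuthN failed)
    if not authorize(username, "admin"):
        return "403 Forbidden"      # verified, but not permitted (AuthZ failed)
    return "record deleted"
```

Note the two distinct failure modes: Bob authenticates successfully but is still refused the admin-only action, which is the AuthN/AuthZ separation in practice.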
APIs (SOAP or REST)
APIs, or Application Programming Interfaces, are sets of exposed interfaces that enable programmatic interaction between services. Essentially, they're like a menu in a restaurant, offering predefined ways to interact with a service.
In the past, SOAP (Simple Object Access Protocol) was the predominant standard. However, REST (Representational State Transfer) is now the more commonly adopted standard due to its simplicity and compatibility with web technologies.
A classic example of API usage can be found in Amazon's early days. Jeff Bezos instituted a policy stating that any service created should be made available for other teams or businesses through APIs. Similarly, many modern digital platforms, such as Twitter, Google, and Facebook, provide APIs for developers to interact with their services, fostering an ecosystem of interconnected apps and services.
RESTful APIs operate over the HTTP/HTTPS protocol, offering API endpoints for different services. They're stateless, meaning each request from a client to a server must contain all the information needed to understand and process the request.
When it comes to security, all communications between the client and server should be encrypted, typically using SSL/TLS for HTTPS connections. Access to APIs should be limited and controlled using API keys, acting as unique identifiers for users or services. These keys should be stored, distributed, and transmitted securely to prevent unauthorized access. Remember, the handling of API keys is as important as the protection of passwords or any other sensitive data.
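The statelessness and key-handling points can be seen in how a single REST request is assembled: every request carries its own credentials, and the transport is HTTPS. The endpoint and key below are placeholders (no request is actually sent), and in practice the key would come from a secrets store, never a hardcoded string:

```python
import urllib.request

# Hypothetical endpoint and key, for illustration only.
API_ENDPOINT = "https://api.example.com/v1/books"
API_KEY = "demo-key-123"  # real keys belong in a secrets store, not in code

def build_request(endpoint: str, api_key: str) -> urllib.request.Request:
    """Stateless REST call: each request carries all context it needs,
    including its own credentials in the Authorization header."""
    return urllib.request.Request(
        endpoint,
        headers={"Authorization": f"Bearer {api_key}",
                 "Accept": "application/json"},
    )

req = build_request(API_ENDPOINT, API_KEY)
# urllib.request.urlopen(req) would send it over HTTPS (TLS-encrypted).
```

Because the server keeps no session, rotating or revoking the key immediately affects every subsequent request, which is one reason API keys must be guarded like passwords.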
Embedded Systems
Embedded systems are compact computer systems embedded within larger devices, crucial for Internet of Things (IoT) devices. Examples include printers, GPS drones, and semi-autonomous vehicles.
In a printer, the embedded system processes printing commands and manages resources. GPS drones use them to process geolocation data and control flight. In semi-autonomous vehicles, they handle tasks from obstacle detection to internal systems management.
Enforce lightweight yet robust authentication practices, moving beyond "implied trust." Examples include two-factor authentication, digital signatures, and certificate-based authentication.
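A lightweight pattern that fits constrained embedded devices is HMAC challenge-response: the device proves possession of a pre-provisioned secret without ever transmitting it. This is a sketch under stated assumptions; key provisioning and the protocol framing are simplified away:

```python
import hashlib
import hmac
import os

# Hypothetical shared secret, provisioned onto the device at manufacture.
SHARED_KEY = b"per-device-secret"

def device_response(challenge: bytes, key: bytes = SHARED_KEY) -> str:
    """The device proves it holds the key without ever sending the key."""
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

def controller_verify(challenge: bytes, response: str,
                      key: bytes = SHARED_KEY) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)  # constant-time compare

challenge = os.urandom(16)  # a fresh random nonce per attempt defeats replay
```

A fresh challenge each round means a captured response is useless later, and the constant-time comparison avoids leaking information through timing, both cheap wins on small devices.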
Beyond the Typical Client-Server
Distributed computing encompasses a wide range of systems where tasks are spread across multiple machines to enhance performance, provide redundancy, or both.
Examples of distributed systems:
Grid Computing
Grid computing, a subset of distributed computing, harnesses the power of many loosely coupled computers to perform sizable tasks.
- Characteristics:
- Resource Pooling: Often described as "virtual supercomputing," grid computing pools resources, sometimes from globally scattered computers.
- Voluntary Participation: Projects like SETI@home exemplify this, where unused computing resources are tapped into.
- Heterogeneity: Grids can consist of varied machines, possibly with different operating systems and hardware configurations.
- Middleware Requirement: Essential for managing diverse resources, handling security, and orchestrating tasks.
- Comparison:
- SETI Project: Fits the grid computing model where global volunteers contribute idle computer time.
- Blockchain: A form of distributed computing due to its decentralized nature but deviates from the traditional grid model. Its focus isn't pooling computational resources for large tasks but ensuring secure transaction data and consensus.
A key concern with grid computing is protecting the grid controller from takeover or influence by bad actors.
Edge Computing
Fog Computing:
- Definition: An extension of edge computing, fog computing utilizes gateway devices in the field to gather, process, and send data more efficiently.
- How It Works: Rather than sending all data directly to the cloud, fog computing aggregates and processes data at the edge first, then sends only the most relevant or processed data to the central system.
- Purpose: By collecting and correlating data centrally at the edge, fog computing minimizes latency and enhances efficiency, especially when bandwidth is a concern.
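The aggregate-then-forward idea reduces, in the simplest case, to summarizing many raw readings at the fog node and shipping only the summary upstream. This is a toy sketch with invented names; real fog nodes also filter, correlate, and buffer:

```python
def fog_aggregate(readings: list) -> dict:
    """Fog-node sketch: reduce many raw sensor readings to the few
    summary values the central cloud system actually needs."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "avg": sum(readings) / len(readings),
    }

raw = [21.0, 21.2, 20.9, 35.5, 21.1]   # e.g. one minute of temperature data
summary = fog_aggregate(raw)           # 5 readings become 4 numbers upstream
```

Even in this toy form, bandwidth shrinks from one message per reading to one summary per interval, and anomalies (the 35.5 spike surfaces in `max`) still reach the cloud.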
Security in Edge and Fog Computing: