Cisco Cyberops 200–201 Study Guide

Ben Bosteter

This is a study guide where I took the exam topics from the Cisco Cyberops Associate study guide and attempted to answer them. It has been cobbled together from a combination of various web sources and my own words. I did this as a way to solidify my knowledge and put something out there that I would have found useful when I first started studying for this certification. I made an effort to write things in a way that is clear and strays away from the sometimes overthought wording of technical concepts, worded in ways which I found helped me really comprehend them. This is not a one-size-fits-all for those looking to take the exam. Each topic should be looked into and explored. Each person is different in their test preparation methodology, and I advise that you use this in conjunction with other mediums such as videos, reading materials, notes (both a first and second draft), and flash cards. The very field of information security demands that you constantly be eager to do a deeper dive into any one concept. Keeping this mentality for life is the key to success in this field. Stay curious, consistent and disciplined. I hope this is helpful to those looking to take the Cisco Cyberops 200–201 certification test.

1.0 Security Concepts

1.1 Describe the CIA triad.

-The CIA triad is a core concept for information security. Its three elements (Confidentiality, Integrity, and Availability) are used as principles to consider, define, and measure security implementations against.

Confidentiality — is your data private? Protections from unauthorized access to sensitive data.

Examples: Encryption Algorithms, MFA, Authentication, Passwords, TLS/SSL, and more.

Integrity — has your data remained unchanged? Protections from data being altered or manipulated.

Examples: Hashing Algorithms, Digital Signatures, Access Control, Encryption Algorithms, Data Checksums, and more.

Availability — are your systems up and running? Protections from system crashes, failures, and becoming unavailable.

Examples: System redundancies, Load Balancing, Fault Tolerance designs, Defense in Depth, and more.

1.2 Compare Security Deployments

1.2 a) Network — This involves extensive and orchestrated network monitoring: internet edge firewalls, DMZs, honeypots, and network access control, with coordination of various agents, devices, and network nodes, all aggregating data such as alert, session, and transactional data. The 5-tuple, with either IPFIX or NetFlow, is used to monitor and track anomalies that stray from network baselines. All of this is piped through SIEMs, SOARs, and ticketing systems for NOC and SOC teams to monitor and correlate based upon alerts or flags.

1.2 a) Endpoint — This would be host-based anti-virus and anti-malware installed as an agent, host-based firewalls, and intrusion detection and prevention: security software or tailored access control on the machines that users in the network operate from.

1.2 a) Application Security Systems — A security-centric application used as an agent on a host or distributed across a system or network, or security which applies to the existing applications of a host, system or network.

1.2 b) Agent-Based — Agents represent specialized software components that are installed on devices within a network. Agent-based systems use a pull communication style. This is where a central server will pull data from the agents [deployed throughout the network] on demand.

Agentless — Performs many of the same actions as agent-based systems, just without the agents. In practice, this means you can inspect and review scans and vulnerabilities on remote machines without having to install an agent. You may have to install software at a networking layer to properly use and monitor it. Based on a push communication style: that same installed software pushes data to a central system on a periodic basis. This also allows for an infrastructure scan of the network.

1.2 c) Legacy Anti-virus & Anti-malware — an older solution which can remove basic forms of malicious software (viruses, worms, trojans, spyware, etc.) on infected hosts. It is signature-based and is vulnerable to unknown threats. Generally endpoint detection and response is a more robust solution, but both can be implemented to complement each other.

1.2 d) SIEM — Security Information and Event Management — This is the brain of a Security Operations Center, which pulls, collects, aggregates, and correlates data, specifically event log data from devices and hosts within the network in which it is deployed. It supports policy enforcement, compliance, and security incident management. Log collection, management, and analysis of all other corresponding data is at the core of a SIEM.

1.2 d) SOAR — Security Orchestration, Automation and Response — This is a security software system similar to a SIEM; often it is connected to a SIEM, or both are combined into one single service. A SOAR will have automation and incident response capabilities. This includes policy enforcement, vulnerability management, SOC playbook automations, security operations automation and many more.

1.2 d) Log Management — This is the systems, frameworks or standard operating procedures implemented specifically to collect, store, organize and correlate the various logs and alerts within a network or system. Protocols to push, pull, aggregate, normalize, and retain data are used in conjunction with hosts, networks, and security devices in order to collect the flow of alerts and logs triggered within these endpoints. Essentially it is an approach to dealing with large volumes of computer-generated logs.

1.3 Describe Security Terms

1.3 a) Threat Intelligence — This is a source of data regarding threats which have been previously discovered, recorded and cataloged. Threat intelligence covers many different information security dangers: APTs, TTPs, malware, malicious IPs, file hashes of malware, etc. It is usually fed into SIEMs, SOARs, and SOCs. This type of processed data regarding information security threats gives you the ability to make faster, data-backed decisions. It gives you a better chance of acting with a proactive mindset versus a reactive one.

1.3 b) Threat Hunting — Is a role usually within the elevated tiers of a SOC. It involves actively seeking out and researching threats and vulnerabilities which are within a system or have the potential to arrive within a system. You are seeking out yet-to-be-discovered malicious activities. This can be based off a hypothesis of the threat hunter or triggered by an indicator of compromise. It can also be informed and set in motion by received threat intelligence.

1.3 c) Malware Analysis — The act of analyzing malware. Often this is performed in a sandbox software environment in order to observe and reverse engineer the code of the malware as it executes and functions. Threat intelligence and malware samples can be correlated and compared in an analysis. Analysis can be conducted in one of three ways:

Static Analysis — This is where no code within the malware is run. Static analysis examines the file for technical indicators of malicious intent: things such as file names, hashes which connect with specific threat intelligence, strings, historically malicious IP addresses or domains, file header data, and many more. You can use disassemblers and network analyzers to observe malware without running it. A caveat to note is that certain sophisticated strains of malware can include malicious runtime behavior that goes undetected. Static analysis looks at the surface layer for signature-based indicators, not behavioral indicators, since no code is run.

Dynamic Analysis — This method executes suspected malware within a quarantined environment known as a sandbox. This allows threat hunters and incident responders to have a deeper visibility and uncover the true nature of a threat. Adversaries can hide code inside malware specifically to remain dormant until certain conditions are met. Only then does the code run.

Hybrid — A combination of both. Using this method helps you detect malicious code which has been hidden. It also helps in detecting unknown threats.

1.3 d) Threat Actor — The bad guy. The entity creating a security incident by exploiting a vulnerability in order to achieve a goal against an asset. An entity taking part in an action that is intended to cause harm to the cyber realm.

1.3 e) Runbook Automation — A runbook is a compilation of routine procedures and operations that the security operator carries out. Runbook automation is where certain runbook procedures are implemented with automation functionality. It is the ability to define, build, orchestrate, manage and report on workflows and SOC processes in an automated system.

1.3 f) Reverse Engineering — The act of dismantling an object to see how it works. The process has three stages:

1) Info Extraction — The object, in this case malware, is studied. Info about the malware is extracted.

2) Modeling — Mapping out the info into a conceptual model.

3) Review — Testing in different contexts

With Reverse Engineering, you can use a disassembler which will read the binary code and display executable code as text. Attackers will use code obfuscation techniques (writing extra code in a confusing manner) in order to deter reverse engineering.

1.3 g) Sliding Window Anomaly Detection — Is the process of taking a subset of data from a given array or string, expanding or shrinking that subset to satisfy certain conditions. This one I had a hard time really nailing down. From what I can gather it's a cyclical analysis of data: a window which moves across a stream, examining each section, as in the sketch below.
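To make the idea concrete, here is a minimal sketch (my own illustration, not from Cisco material) of a moving-window detector that flags values which spike far above the average of the samples just before them. The window size and threshold are arbitrary assumptions.

```python
from collections import deque

def sliding_window_anomalies(samples, window_size=5, threshold=3.0):
    # Keep only the most recent window_size samples
    window = deque(maxlen=window_size)
    anomalies = []
    for i, value in enumerate(samples):
        if len(window) == window_size:
            mean = sum(window) / window_size
            # Flag the sample if it exceeds the window mean by the threshold factor
            if mean > 0 and value > mean * threshold:
                anomalies.append((i, value))
        window.append(value)
    return anomalies

# Example: bytes-per-minute counts with one obvious spike
traffic = [100, 110, 95, 105, 98, 102, 2000, 101, 99]
print(sliding_window_anomalies(traffic))  # [(6, 2000)]
```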

1.3 h) Principle of Least Privilege — Is the concept of giving a user access to an object, asset or role. But, only allowing a specific and measured amount of permissions to accomplish the task. No more no less. A user or entity should only have access to the specific data, resources and applications needed to complete a required task.

1.3 i) Zero Trust — Is a concept where consistent authentication occurs in order to create defense in depth redundancies. It eliminates implicit trust and continuously validates at every stage of a digital interaction.

1.3 j) Threat Intelligence Platform — A software application which has a dashboard with a constantly updated threat intelligence feed. It can integrate with systems to correlate threat data with system events and indicators of compromise. It collects, aggregates and organizes threat intelligence data from multiple sources and formats.

1.4 Compare Security Concepts

1.4 a) Risk — is the potential for a vulnerability within a system to be exploited by a threat actor to gain unauthorized access. It is the potential loss, impact, or measure that exists when an identified vulnerability or threat is not mitigated.

1.4 b) Threat — an entity, subject, action or circumstance which can cause harm, allow unwanted access and/or cause damage to an asset. Any potential event that could cause an undesired outcome.

1.4 c) Vulnerability — A flaw within a system which can potentially be exploited and used to gain unauthorized access or damage to an asset.

1.4 d) Exploit — A way or method in which a vulnerability is manipulated to gain unauthorized access or damage to an asset.

The equation which applies to the above is

RISK = THREAT x VULNERABILITY

1.5 Describe the Principles of the Defense-in-depth Strategy.

- It is a method of layering many security protocols, procedures, and applications in a way that multiple redundancies are implemented to better secure a system.

1.6 Compare Access Control Models

1.6 a) Discretionary Access Control — Access is controlled by the owner/manager of the object or data; access is granted at the owner's discretion.

1.6 b) Mandatory Access Control — Access is controlled by a predetermined set of rules or parameters which are enforced through access to the object or data. This method of access control assigns classifications or security group labels to each file system object. Each user and device is assigned a clearance level to gain access to the correlating classification or security group label. This is the strictest model of access control. Basically both users and objects are assigned clearance levels.

1.6 c) Non-discretionary Access Control — Any access control model that does not allow users to pass on access at their discretion can be considered a non-discretionary access control model.

Examples: Role-based Access Control — HR has access to HR files but not access to Security files. Defined by the role in a system. Access is based on the roles of individuals within an enterprise.

Rule-based Access Control — You can only log in to your machine with a pin and access token.

Attribute-based Access Control — You can only log into an asset at a specific time of day and if you are in the specified geographic location.
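As a quick illustration of how rule- and attribute-based checks combine conditions, here is a tiny hypothetical sketch; the roles, hours, and location strings are invented for the example.

```python
from datetime import time

# Hypothetical attribute-based access check: every attribute must pass.
def abac_allow(user_role, login_time, location):
    in_hours = time(8, 0) <= login_time <= time(18, 0)  # rule: business hours only
    on_site  = location == "HQ"                         # attribute: approved location
    has_role = user_role in ("analyst", "admin")        # role-based component
    return in_hours and on_site and has_role

print(abac_allow("analyst", time(9, 30), "HQ"))   # True
print(abac_allow("analyst", time(23, 0), "HQ"))   # False: outside allowed hours
```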

1.6 d) Authentication — Is proving your identity and that you are actually permitted access to an object.

Authorization — Is confirming and allowing you access to an object.

Accounting — Is taking a record of what you did to an object and when.

1.7 Describe Terms as Defined in CVSS (COMMON VULNERABILITY SCORING SYSTEM)

In CVSS these are metrics used to measure risk. It is a system used to calculate the risk based upon the indicators you have gathered.

1.7 a) Attack Vector — How a threat actor exploits a vulnerability. The specific context and methodology by which a vulnerability exploit is possible.

Measured by: (N)etwork, (A)djacent, (L)ocal, (P)hysical.

1.7 b) Attack Complexity — How easy or difficult it is to exploit a vulnerability. Describes the conditions beyond an attacker's control that must exist in order to exploit the vulnerability.

Measured by: (H)igh, (L)ow.

1.7 c) Privileges Required — The level of privileges an attacker must possess before successfully exploiting the vulnerability.

Measured by: (N)one, (L)ow, (H)igh.

1.7 d) User Interaction — Determines whether the vulnerability can be exploited by the attacker alone, or whether a separate user or user-initiated process is necessary.

Measured by: (N)one, (R)equired.

1.7 e) Scope — A determination of whether a vulnerability in one system or component can have carry-over impact on another system or component.

Measured by: (C)hanged, (U)nchanged.

1.7 f) Impact — Using the CIA triad as a metric for impact, this measures how confidentiality, integrity, and availability are affected by an attack.

Measured by: (N)one, (L)ow, (H)igh.
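These metric abbreviations are what you see strung together in a CVSS vector. As a rough illustration (my own sketch, not an official scoring implementation), the snippet below splits a v3.x base vector back into the metrics above; it does not compute the numeric base score, which has more steps.

```python
# Toy parser for a CVSS v3.x base vector string.
def parse_cvss_vector(vector):
    labels = {
        "AV": "Attack Vector", "AC": "Attack Complexity",
        "PR": "Privileges Required", "UI": "User Interaction",
        "S": "Scope", "C": "Confidentiality Impact",
        "I": "Integrity Impact", "A": "Availability Impact",
    }
    fields = dict(part.split(":") for part in vector.split("/"))
    return {labels[k]: v for k, v in fields.items() if k in labels}

vec = "AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"  # a network-reachable, low-complexity flaw
for metric, value in parse_cvss_vector(vec).items():
    print(f"{metric}: {value}")
```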

1.8 Identify the Challenges of Data Visibility (network, host, cloud) in Detection

1.8 a) Network — Network and port address translation (NAT, PAT) can obscure the source IP address of traffic. Improper deployment of the network time protocol (NTP). Traffic fragmentation techniques can splinter malicious payloads in a way which obfuscates malware in packets. Encryption can also hide malicious traffic, as can the improper configuration of hosts, or misuse of protocols and their connection to network monitoring devices.

1.8 b) Host — Hosts using encryption which hides payloads. The improper configuration of hosts, or misuse of protocols and their connection to network monitoring devices.

1.8 c) Cloud — Lack of control over data traveling throughout a cloud network. Multiple layers of cloud network nodes, SaaS implementations, and servers can equate to data loss or seepage. Tracking this data and ensuring it is properly configured for traversal can affect visibility greatly.

1.9 Identify Potential Data Loss From Provided Traffic Profiles

This is where using NetFlow or IPFIX for network monitoring comes in. Tools like Cisco Stealthwatch can use network baselines to observe traffic profiles for fluctuations and anomalies.

1.10 Interpret the 5-Tuple approach to Isolate a Compromised Host in Grouped set of Logs.

This involves using the 5-tuple to observe layer 3 activity which has been flagged as malicious in some way. The 5-tuple being destination and source IP, destination and source ports, and protocol. This information set can assist in identifying activity that has malicious IP addresses, geolocation which corresponds to APTs or cybercriminals, protocol misuse or manipulation, data loss, or traffic requests/callouts to command and control servers.
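For a feel of what this looks like in practice, here is a minimal sketch that groups invented log records by (most of) their 5-tuple to surface a host repeatedly contacting one destination; the log format and addresses are made up for illustration.

```python
from collections import Counter

# Invented firewall-style records: (src_ip, src_port, dst_ip, dst_port, protocol)
logs = [
    ("10.0.0.5", 49152, "203.0.113.7", 443, "TCP"),
    ("10.0.0.5", 49153, "203.0.113.7", 443, "TCP"),
    ("10.0.0.8", 51000, "198.51.100.2", 53,  "UDP"),
    ("10.0.0.5", 49154, "203.0.113.7", 443, "TCP"),
]

# Drop the ephemeral source port so repeated sessions to one service group together
pairs = Counter((src, dst, dport, proto) for src, _, dst, dport, proto in logs)
for (src, dst, dport, proto), count in pairs.most_common():
    print(f"{src} -> {dst}:{dport}/{proto}  sessions={count}")
```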

1.11 Compare Rule-based Versus Behavioral and Statistical Detection.

Rule-based — Defined by an if/then basis and predetermined parameters for what is acceptable activity in a system.

Statistical-based — Defines legitimate data of users over a period of time.

Behavioral-based — Defined by searching for evidence of compromise rather than the attack itself.

2.0 Security Monitoring

2.1 Compare Attack Surface and Vulnerability

Attack Surface — Is the grouping of attack vectors that represent a methodology. It is the total number of attack vectors a threat actor can use to manipulate a network, computer, or system, or extract data.

Example: A threat actor uses a rogue access point placed near your office building to conduct a man in the middle attack and extract credentials which they use to gain initial access and proceed to pivot through the network to gain access to data which is then exfiltrated to a command and control server.

Attack Surface — The network of the office.

Attack Vector — The weak password of a wifi access point of the office.

Vulnerability — Is a flaw within a system which can be exploited in order to gain access to data or an asset. In the example above, the ability to stand up a rogue access point from outside the building is the vulnerability.

2.2 Identify the Types of Data Provided by These Technologies

2.2 a) Tcpdump — Full packet capture of traffic traveling in a network.

2.2 b) Netflow — This is network flow data: data which approximates entry points into a network, source and destination IP addresses, IP protocol types, source and destination ports, and the types of services being used. Basically, think the 5-tuple but with a few added pieces of data.

2.2 c) Next-gen Firewall — Deep packet inspection data such as layer seven (application) examination, IDS and IPS alerts, and NAT and PAT translations.

2.2 d) Traditional Stateful Firewall — Incoming and outgoing traffic, traffic ACLs and the connected alert and log data. What type of traffic is flowing as well as the session data.

2.2 e) Application Visibility and Control — The applications deny or allow list (white and black listing), applications in use and how they are being used, transactional data, and protocol use within each application or API.

2.2 f) Web-Content Filtering — IP address and user agent deny and allow listings, which web protocols and content behaviors are allowed, and restrictions on the content any one user is capable of accessing. That includes setting security controls to regulate access to websites and the alerts/logs generated from attempts to access them.

2.2 g) Email Content Filtering — Evaluations for spam, filtration of inbound and outbound traffic, and examinations of potentially malicious links and incoming files. Will use statistical analysis to monitor for spam coming into email applications.

2.3 Describe the Impact of These Technologies on Data Visibility

2.3 a) Access Control Lists — This is a table which defines the privileges for an operating system or network. ACLs give security operators granular control over traffic, protocols, file access, object permissions, and user groups. This in turn can improve network performance and the manageability of data and its visibility.

2.3 b) NAT/PAT — Both protocols can obscure the true source IP address of traffic. This occurs due to traffic passing through a device which is actively using NAT/PAT to translate the private IP address. In a network with many devices, they could potentially all be translated from their individual (private) IP addresses to a single, public IP address, usually the address of the internet edge device doing the translation. Cisco's Stealthwatch can use a method called NAT stitching to view the private addresses of devices active within a network.

2.3 c) Tunneling — This is a method of discretely transmitting data across an otherwise public network, used in VPNs and the SSH protocol. A packet's payload is encapsulated in another packet. This encapsulation, or creation of an encrypted tunnel (the entry and endpoints of a packet's travel path are encrypted/decrypted), obfuscates the payload and can pass through a firewall, directly connecting to a destination. The header and payload of a packet go inside the payload section of another packet. In a malicious sense this is like someone putting on a disguise to get into a building.

2.3 d) TOR — The onion router is a browser which anonymizes and encrypts traffic multiple times. Traffic is routed across several servers known as TOR relays, with a layer of encryption handled at each relay the traffic passes through. The TOR relay path is global, obscuring geolocation, and the layered encryption makes it very difficult to track and examine traffic for malicious activities. If you catch TOR traffic in a network you're defending, it's really never a good thing.

2.3 e) Encryption — A tool which greatly impacts visibility of data. If you do not have a decryption key or mechanism it will remain locked in cipher text. You can correlate various data types such as protocols used, payload or file size and the circumstance of the traffic itself to deduce what is encrypted but this is still a daunting task.

2.3 f) Peer-to-peer (P2P) — A decentralized communication model in which each party has the same capabilities and either party can initiate a communication session. It is a type of distributed file sharing network where each network node provides access to resources and data. This has a security risk of data leakage due to the very nature of this networking model.

2.3 g) Encapsulation — The method of placing a packet's header and payload into another packet, using the outer packet as a shell which obfuscates the data of the innermost packet's payload.

2.3 h) Load Balancing — Load balancing logically distributes traffic to multiple servers. This improves the efficiency of server requests and responses, and the visibility of traffic. The security implementation on each server gains clear sight of traffic and its distribution.

2.4 Describe the Uses of These Data Types in Security Monitoring

2.4 a) Full Packet Capture — This is known as a pcap file. It is used by network packet sniffers like tcpdump, Wireshark, tshark, etc. When a full packet is captured (excluding encrypted packets like SSH or HTTPS) you can examine all layers using a tool like Wireshark. This would be deployed in a situation where a security operator needs to examine traffic which has been deemed suspicious, or to troubleshoot network issues.

2.4 b) Session Data — Session data is usage information such as IP address, type of browser, type of operating system, referring URL, date, time, and duration of a visitor's visit, the number of visits to a website, the pages viewed, order of pages viewed, number of cookies accumulated, bytes sent and received, user agent, etc. This is the summary of the communication between two network devices; in some instances this is a flow, NetFlow, or IPFIX string. A very useful form of data for network security monitoring: conversations generated from network traffic.

2.4 c) Transaction Data — Application-specific records generated from network traffic. A sequence of information exchange and related work (such as database updating). This record of information exchange can be invaluable to security monitoring and response. It gives you a view of what data has been exchanged or altered within a digital exchange.

2.4 d) Statistical Data — An overall summary or profile of network traffic. This type of data is used in network baselines which helps a SIEM or SOAR take note of anomalies or spikes in traffic, or data exchanged.

2.4 e) Metadata — This is data about data. Things like who made the data and when, how big the data is, the location of the data, its file format, etc.

2.4 f) Alert Data — These are notifications sent from a device to inform an entity or user regarding security or system issues. Usually these are directed and aggregated to a central security monitoring device, agent, SIEM, SOAR, or SOC workstation.

2.5 Describe network attacks, such as protocol-based, denial of service, distributed denial of service, and man-in-the-middle.

2.5 a) Denial of Service — The goal of this attack is to make the target machine, service, system, or network unavailable. The attacker sends a flood of traffic which overwhelms the target, forcing it to crash. Buffer overflow is the most common DOS attack. This is where an attacker sends more traffic/data to a network address than the system can handle. The ICMP flood, aka ping flood, is another DOS attack where an attacker sends a barrage of ICMP echo request packets to the target machine. The target subsequently crashes attempting to respond to every request packet. Think of it as a very talented juggler who can handle, say, up to six pins, but someone throws them more and more until the juggler can do nothing but drop everything due to being inundated with things to juggle.

2.5 b) Distributed Denial of Service — This is a DOS attack conducted specifically using a botnet. Many attackers use botnets, which are distributed networks of various hosts that have been infected and are now under the control of the attacker to use as they see fit. These can be a dozen machines to several thousand depending on the sophistication of the attacker. Botnets can also be rented, built using cloud services, or simply taken over.

2.5 c) Man in the Middle — This is a type of attack where an attacker finds a way to tap into traffic, or places themselves in line with traffic, which allows them to observe and collect data as it travels across a network or the internet. This is achieved by many methods. One instance is where an attacker introduces a rogue access point which imitates a normally used access point in a network. When victims connect, they are unaware that their traffic is now passing through the digital hands of an attacker. The attacker can relay, alter, or extract data that is traveling through this rogue access point.

2.6 Describe web application attacks, such as SQL injection, command injection, and cross site scripting.

2.6 a) SQL Injection — Is a technique where code is injected into an input which is connected to a SQL database. A website's login portal is an example of an input where this could occur. An attacker would write SQL query and command language into the login portal entry fields. This can return data which should not be accessible, and can potentially give the attacker write privileges to a database, which can cascade into a much larger impact. This can be avoided by setting up parameterized queries, meaning queries where user input is bound as a value rather than concatenated into the SQL string. You can also use code analysis tools which analyze the code for security holes where potential SQL injection criteria are present.
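Here is a minimal sqlite3 sketch of the difference; the table and the classic ' OR '1'='1 payload are purely illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query logic
query = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(query).fetchall())  # returns every row

# Safe: a parameterized query treats the payload as a literal value
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing
```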

2.6 b) Command Injection — Is an attack where a vulnerable application is used to extend access into the host operating system. With this access an attacker will inject commands into the system shell of a server. This vulnerability is similar to SQL injection in the way it takes advantage of insufficient input validation. Other methods include exploiting a vulnerability in configuration files like .xml files.

2.6 c) Cross-site Scripting — This is where an attacker tricks a web application into sending data in a form that a user's browser can execute. This is another type of injection attack, and as before it can be avoided if proper input validation is implemented in the code of the web application. There are three common types of XSS attacks.

Persistent XSS — In this method malicious data submitted by the attacker to the target web application is saved and permanently displayed on pages of the web application which expose other users in the course of regular browsing. Storage of the malicious injection can be left on a server, database, message forum, visitor log, a blog, comment field, etc. Essentially somewhere stored and waiting to expose itself to users.

Reflected XSS — Uses a vulnerable web application to execute an attack, using it as a sort of unwitting weapon against a victim. The attacker starts by sending a link. This link contains malicious code waiting for execution and is usually embedded into an anchor of text which convinces the user to click it; this is accomplished using social engineering or phishing. When the user clicks on the link, it sends a URL request to the vulnerable web application. The reflection is that the URL response traffic executes the malicious code in the user's browser, which in turn collects and exports data such as credentials, cookies, session tokens, etc., to the attacker.

DOM XSS — First, DOM is the document object model. It is a model of the HTML elements contained within a web site or application; basically a data array/structure of HTML code acting as a programming interface that defines how to create, modify or erase elements in an HTML or XML document. In DOM XSS, malicious code is crafted and sent to the client (web application), where it is injected into the browser and enters the DOM structure. Like other injection attacks this vulnerability results from unsanitized code. This specific attack is local and doesn't interact with a server; the script runs entirely within the web page, injecting into its DOM.

2.7 Describe Social Engineering Attacks

Social Engineering — This term covers a broad range of attack methods which use techniques to convince or manipulate a person in order to perform an action, surrender data, or gain access.

Examples of this are:

Phishing — a threat actor sends an email crafted in a way to target and convince the recipient to click a malicious link in order to obtain credentials or infect the target.

Vishing — Voice phishing is when a threat actor calls a victim using a pretext, usually something emotional, financial, or some extremely urgent scenario, to gain information or access. The attacker can be sophisticated enough to use open source intelligence to build a strong scam, or as low-end as using bots to send recordings.

In-person impersonations — Dressing up as a contractor and piggybacking into the building in order to plant a malicious USB or keylogger.

Dumpster Diving — Rummaging around in a target's garbage to find valuable info.

In essence Social Engineering attacks use psychology, social expectations, and open source intelligence to gain access to information. It is like human directory traversal.

2.8 Describe Endpoint-based Attacks

Buffer Overflow — A buffer is a place where data is stored. Buffers have a fixed size, and any program must be aware of how much data it puts into the buffer. If it puts in too much, the buffer will overflow, damaging the data in the surrounding memory. A buffer overflow attack is where an attacker purposely sends too much data, using the overflow to create a denial of service or to overwrite the surrounding memory. This can potentially damage the surrounding environment or overwrite code with a malicious replacement.

Command and Control (C2 Server) — This in essence is a computer controlled by an attacker. It is used to send commands to a system compromised by malware. This is also potentially where an attacker can stage, prep, or weaponize further steps in their attack, as well as where an attacker can receive exfiltrated data from a compromised host within the target network.

Ransomware — Is a type of attack where a victim's computer is infected with malware which encrypts its data. It will display or send a message stating the malware will destroy, render useless, or publish the data unless the victim pays a specified ransom or meets some sort of demands from the attacker who released the malware.

2.9 Describe evasion and obfuscation techniques, such as tunneling, encryption, and proxies.

Tunneling — is a technique where private network traffic is sent, hidden, across the internet. The hidden private part (aka the tunnel) is accomplished using the technique of encapsulation. This is where a packet is hidden within another packet, replacing the outer shell in order to obfuscate the data that can be observed as it travels; sort of like smuggling something. This process allows an attacker to sneak through a firewall. SSH and GRE (Generic Routing Encapsulation) are examples of this in action.

Encryption — The ability to encrypt has a dual nature: it can protect sensitive data, but it also has the capacity to obfuscate malicious activity and data as it travels.

Proxies — this is basically the middle man between a user and an internet resource. It's a server which hides the user and makes all the requests of the user, but with a totally different IP address and possibly a different geolocation.

2.10 Describe the impact of certificates on security (includes PKI, Public/Private crossing the network, Asymmetric/Symmetric)

Public Key Infrastructure (PKI) — is foundational to security across the internet. Instead of IDs and passwords for authentication, certificates are used. PKI encrypts communication using asymmetric encryption algorithms (this is where both a public and private key are used). PKI also deals with the management of certificates and keys, ensuring the validity of both by utilizing certificate authorities (CAs). CAs are entities which are authentic, trusted issuers of certificates to other entities and web sources. All of this is used in TLS/SSL communications. There are several steps involved in PKI, but overall the core principle is that certificates and the implementation of PKI are integral to secure network transmissions.

2.11 Identify the certificate components in a given scenario.

2.11 a) Cipher Suite — Are sets of instructions that enable secure connections through transport layer security (TLS). This all happens behind the scenes of HTTPS in order to perform the TLS handshake. It's basically a list of security functions and encryptions both sides of the connection need to match in order to communicate. This includes a:

key exchange algorithm (how symmetric keys will be exchanged)

authentication or digital signature algorithm (how client and server authentication is implemented)

bulk encryption cipher (how the data is encrypted)

hash/MAC function (how data integrity checks occur)

There are variations on each and it depends on what both the client and server can use in each category of the cipher suite.
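If you want to see real suite names, Python's standard ssl module can list what the local OpenSSL build offers (output varies by system); each name encodes the pieces above.

```python
import ssl

# List the cipher suites this machine's OpenSSL build offers for TLS
ctx = ssl.create_default_context()
for cipher in ctx.get_ciphers()[:5]:  # first few for brevity
    print(cipher["name"], "-", cipher["protocol"])
# e.g. TLS_AES_256_GCM_SHA384 - TLSv1.3
```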

2.11 b) X.509 Certificates — this is the standardized form used for digital certificates in the PKI. Generally an X.509 PKI certificate will contain the following.

Version Number: The X.509 version of that specific certificate issued.

Serial Number: A unique number the certificate authority issued.

Signature Algorithm Identifier: The algorithm used for signing the certificate.

Issuer Name: The certificate authority who created and signed the certificate.

Period of Validity: The time frame for the certificate's validity.

Subject Name: The name of the user/entity who the certificate has been issued to.

Subject's Public Key Info: The subject's public key and the algorithm used with the key.

Extension Block: Additional info regarding the certificate.

Signature: Hash code of all other fields, encrypted with the certificate authority's private key.
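To see these fields on a live certificate, you can pull one with the standard ssl module and parse it with the third-party cryptography package (pip install cryptography); the host here is just an example.

```python
import ssl
from cryptography import x509

pem = ssl.get_server_certificate(("example.com", 443))  # fetch the site's PEM cert
cert = x509.load_pem_x509_certificate(pem.encode())

print("Version:       ", cert.version)
print("Serial number: ", cert.serial_number)
print("Issuer:        ", cert.issuer.rfc4514_string())
print("Subject:       ", cert.subject.rfc4514_string())
print("Valid from:    ", cert.not_valid_before)
print("Valid until:   ", cert.not_valid_after)
print("Signature hash:", cert.signature_hash_algorithm.name)
```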

2.11 c) Key Exchange — This is where cryptographic keys are exchanged between two entities. They make a mixture of the PKI keys, combining each entity's public and private keys in a way which creates a shared secret key between both entities. This is done using a cryptographic algorithm called the Diffie-Hellman exchange.
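The arithmetic behind Diffie-Hellman fits in a few lines. This toy sketch uses tiny numbers purely to show the idea; real exchanges use very large primes (or elliptic curves) and a vetted library.

```python
p, g = 23, 5                 # public prime modulus and generator (toy-sized)

a = 6                        # Alice's private key
b = 15                       # Bob's private key

A = pow(g, a, p)             # Alice sends g^a mod p over the wire
B = pow(g, b, p)             # Bob sends g^b mod p over the wire

alice_secret = pow(B, a, p)  # (g^b)^a mod p
bob_secret   = pow(A, b, p)  # (g^a)^b mod p
print(alice_secret, bob_secret)  # both print 2: the shared secret
```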

2.11 d) Protocol Version — This refers to the different versions of protocols used in PKI. The oldest is SSL 2.0, which was deprecated in 2011, and the newest is TLS 1.3, released in 2018. Each version has updated protocols, cipher suites and standards in order to meet the ever-evolving infosec landscape.

2.11 e) PKCS — Public key cryptography standards are a set of protocols numbered one to fifteen which were developed to enable secure info exchange across the internet using public key infrastructure.

PKCS #1 — RSA cryptography standard. The format for RSA public and private keys, encryption and decryption, and producing and verifying signatures.

PKCS #2 — A withdrawn standard (used to be message digest standard, now absorbed by PKCS #1)

PKCS #3 — Diffie-Hellman key exchange.

PKCS #4 — Withdrawn (absorbed by PKCS #1)

PKCS #5 — Password-based cryptography standard. Usage of passwords in the key agreement, applying hash functions to passwords to form a secret key. This in turn adds another layer of encryption.

PKCS #6 — Obsolete.

PKCS #7 — Cryptographic message syntax standard. This specifies the syntax of stored, encrypted data. It is used by certificate authorities to store digital certificates that they have issued. It is also a standard for storage of digital signatures. This standard formed the base for S/MIME (a protocol for sending digitally signed and encrypted messages). This is the standard used in single sign-on applications.

PKCS #8 — Private key information syntax standard. The standard for storing private key information. Usually used along with PKCS #5, using a passcode with salt to store private keys.

PKCS #9 — Selected attribute types standard. Defines data type, length, and other details of attributes necessary for certificates, signatures, and private keys.

PKCS #10 — Certification request syntax standard. Specifies the format of the messages (certificate signing request) sent to a certificate authority.

PKCS #11 — Cryptographic token (cryptoki) interface standard. Describes a platform-independent, generic API for cryptographic tokens. The API allows generation, modification, and deletion of the different types of keys and certificates used by security hardware. This ensures that encryption is platform agnostic, allowing various devices to communicate.

PKCS #12 — Personal information exchange syntax standard. Defines a file format used to store private keys with accompanying public key certificates (it can hold multiples), protected and encrypted with a password.

PKCS #13 — Elliptic curve cryptography standard. A cryptographic approach whose security rests on the difficulty of discrete logarithms over elliptic curves.

PKCS #14 — Pseudo-random number generation standard. Defines random number generation.

PKCS #15 — Cryptographic token information format standard. Specifies format of credentials required by cryptographic tokens to identify themselves.

3.0 Host-Based Analysis

3.1 Describe the functionality of these endpoint technologies in regard to security monitoring

3.1 a) Host-based intrusion detection — This type of deployment acts as an alarm system on a user's machine, triggered by any activity that is determined malicious or out of bounds of policy. This will typically send telemetry, alert, and log data to the SOC/SIEM in a network.

3.1 b) Antivirus and Antimalware — Endpoint software which has the ability to detect and remove basic forms of viruses (worms, trojans, spyware, etc.). This is signature-based, and the implementation will connect with a SOC/SIEM to send telemetry, alert and log data.

3.1 c) Host-based firewall — A security implementation which controls incoming and outgoing traffic from a host through the use of access control lists and predetermined rules against specific traffic behaviors, protocols, IP addresses and port numbers. Will connect with a SOC/SIEM to send telemetry, alert and log data.

3.1 d) Application-level listing/blocking — This is a security control that blocks or allows applications based upon a predetermined list of being either approved or blocked. The concept of white, black and gray listing comes into effect here: white being the approved list, black being the blocked list, and gray being a list of the undetermined.

3.1 e) System-based sandboxing — Sandboxing is used to ensure that software bugs and exploits of vulnerabilities cannot affect the rest of a system. This gives the host a place to perform testing in an isolated environment in order to run programs or open files without infecting the host or potentially the rest of the network.

3.2 Identify components of an operating system (linux & windows) in a given system.

Windows components — NTFS — New Technology File System, this is the current file system used by Windows.

Registry — a hierarchical database within the NTFS file system, used to store information necessary to configure the system. It stores user settings and operating system parameters. It uses five folders called hives which contain values pertaining to various parts of the OS.

WMI — Windows management instrumentation. This is a scalable system management infrastructure built around an object oriented interface. Used to manage info shared between management applications.

Handles — an abstract reference value to a resource. A handle identifies a particular resource you want to work with using the Win32 APIs. The resource is often memory, an open file, a pipe, or an object managed by another system.

Services — These are long-running executable applications that operate in their own Windows sessions and run in the background.

Linux components — Processes — In Linux these are started either in the foreground or background of the OS. (Processes are OS-agnostic, but I wanted to focus on the diversity of Linux functions.)

Init — First process of the boot process

Parent process — The origin or application of an active process

Child process — A process created by a parent process in order to execute a task within the existing (parent) process. This occurs with a fork command which means a parent process needs additional executed processes so it creates a child process (forks) in order to do so.

Orphan process — Is the result of a parent process being terminated while the child process it created is permitted to continue on its own.

Zombie process — is a process that releases its associated resources and memory but remains in the process table.

Daemons — These are like services in windows. Daemons are processes which run in the background.

Syslog — Is a general purpose logging system. Much like Event Viewer in Windows, it logs events in a system. It uses facility codes to describe where the log was generated and severity codes to show how severe the event was (0 = emergency, 7 = debug).
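The two codes are packed into the <PRI> number at the front of a syslog message as PRI = facility x 8 + severity, which a couple of lines of Python can unpack:

```python
# Severity levels 0..7 as defined for syslog
SEVERITIES = ["emergency", "alert", "critical", "error",
              "warning", "notice", "informational", "debug"]

def decode_pri(pri):
    facility, severity = divmod(pri, 8)  # PRI = facility * 8 + severity
    return facility, SEVERITIES[severity]

# <34> = facility 4 (security/auth messages), severity 2 (critical)
print(decode_pri(34))  # (4, 'critical')
```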

3.3 Describe the role of attribution in an investigation.

3.3 a) Assets — This would be the target of an attack. A user, data set or information, system, device, something which is valued and protected within a system or enterprise.

3.3 b) Threat actor — This is the entity performing actions which are malicious in nature in order to gain access to an asset. The bad guys, whether it's a nation-state APT or an ill-informed or disgruntled insider.

3.3 c) Indicators of compromise — These are signs of a system intrusion or exploitation of a vulnerability. Pieces of forensic data, such as data found in system log entries or files, strange file changes, or anything that identifies potentially malicious activity on a system or network.

3.3 d) Indicators of attack — Signs which indicate an attack is likely in progress. Things like unusual activity from administrator or privileged accounts, or requests for additional permissions to assets.

3.3 e) Chain of custody — This is the way you document and present evidence from the time you begin a digital forensic investigation to the time the evidence is presented in court. The who, what, when, and how of evidence collection in a digital forensic investigation. It preserves the integrity of evidence collected.

3.4 Identify types of evidence used based on provided logs.

3.4 a) Best evidence — Refers to evidence that can be presented in court, preferably in the original form (an exact bit-level copy of a hard disk). These are properly collected system images and appropriate copies of files that can be used in court.

3.4 b) Corroborating evidence — Is evidence which supports a theory or an assumption deduced by some initial evidence. This would be evidence which can confirm a proposition.

3.4 c) Indirect or circumstantial evidence — This is evidence which does not directly prove anything. However it is evidence of another fact that could lead to a conclusion or inference of something. An example is say “Bill” is in a building the same day it is robbed. This is circumstantial because it is not absolute evidence that “Bill” actually robbed the building.

3.5 Compare tampered and untampered disk images

Untampered — this is your direct evidence: unaltered, with correct and matching file hashes, and correctly executed chain of custody documentation and storage. A perfectly mirrored image file.

Tampered — is altered in a way which destroys its validity as direct evidence.

3.6 Interpret OS, application, or command line logs to identify an event.

This would be the ability to access and examine things such as Windows Event Viewer logs, Windows registry logs, and Windows Defender/security logs, and to use utilities like Process Explorer in Windows in order to observe the parent and child processes active during an event. On Linux, this means using commands such as ps, to detail current active processes, and top, to observe processes running in the system and their consumption of resources (CPU, memory, network, etc.).

3.7 Interpret the output report of a malware analysis tool (such as a detonation chamber or sandbox)

This would be the ability to review the findings or observations made from code execution in a security quarantine device. An example would be Cisco's Threat Grid, which can dynamically run and examine potentially malicious files in a quarantined environment. The malware's data is documented and added to threat intelligence databases.

Hashes — MD5 is the most common hashing algorithm used to fingerprint a malicious file or program.
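As a small illustration of the fingerprinting step, this sketch hashes a suspect file in chunks the way sandbox reports do; the file name is just a placeholder.

```python
import hashlib

def fingerprint(path):
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # hash in chunks, not all at once
            md5.update(chunk)
            sha256.update(chunk)
    return md5.hexdigest(), sha256.hexdigest()

# Look these values up in a threat intelligence source such as VirusTotal
print(fingerprint("suspicious.bin"))  # placeholder path
```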

URLs — A uniform resource locator is a web address that is a reference to a web resource and specifies its location on a computer network. Ex) http://maliciousstuff.com is a URL. Malicious URLs are recorded and added to threat intelligence databases, which are in turn utilized by security technologies in order to identify malicious activity.

Systems, events, and networking — This would be all of the observations made and reported: incidents or behaviors in the realms of a system or network, things such as file or directory changes, C2 callouts, data exfil attempts, or major events.

4.0 Network intrusion analysis

4.1 Map the provided events to source technologies

4.1 a) IDS/IPS — An IDS is designed to provide an alert about a potential incident which allows a SOC analyst to investigate the event and determine whether it requires further action. An IPS can take action itself to block an attempted intrusion and remediate an incident.

4.1 b) Firewall — Events would be an application or process making a connection attempt that is against a rule within a ruleset or ACL or whenever there is a change within a firewall configuration.

4.1 c) Network application control — These events would be flags of anomalous behaviors in traffic from various applications on a network.

4.1 d) Proxy logs — These would be events pertaining to traffic traveling through a proxy server. Traffic and its behaviors, requests made by users, applications or services.

4.1 e) Antivirus — Host or network level events of flagged malicious files, activities, or anomalous behavior that matches a signature.

4.1 f) Transaction data — Events flagged which pertain to data being exchanged. An example would be a NetFlow showing far larger than normal amounts of data leaving a network.

4.2 Compare impact and no impact for these items

4.2 a) False positive — No impact. This is a benign event. Activity that has been incorrectly flagged as malicious.

4.2 b) False Negative — Impact. Actual malicious activity which has been allowed to proceed.

4.2 c) True Positive — Impact. This is when malicious activity has been correctly identified and flagged.

4.2 d) True Negative — No impact. This is when benign activity has been correctly ignored.

4.2 e) Benign — No impact. Harmless in nature.

4.3 Compare deep packet inspection with packet filtering and stateful firewall operation.

Packet Filtering — Makes decisions based on network addresses, ports, and protocols of individual packets as they pass through a firewall. It examines packet headers that contain IP addresses and packet options, and blocks or allows traffic through based on that information. Once a packet is in a network it can do whatever it wants without a packet filtering firewall reacting. Operates at layers 3 and 4 of OSI. A minimal sketch of this header-matching decision appears after the three descriptions.

Stateful firewall — Has the same block/allow functions based on packet headers as packet filtering but it additionally keeps track of the state of communications sessions. It will monitor the incoming and outgoing packets. Monitor activity of packets in a network. Operates at layer 3 and 4 of OSI.

Deep packet inspection — Uses all the same functions of packet filtering and stateful firewall session monitoring, but adds a powerful additional function. The capability of analyzing the actual content of the traffic that is flowing. DPI can reassemble the contents of the traffic to look at what will be delivered to an application. It will examine packet content and operate from layer 2 all the way up to layer 7.
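Here is that packet-filtering sketch: an ordered rule list compared against header fields, with an implicit deny at the end. The rules and packet dicts are invented for illustration; stateful and DPI engines layer session tracking and content analysis on top of this same basic decision.

```python
# Ordered firewall-style rules; first match wins, last rule is a deny-all
RULES = [
    {"action": "allow", "dst_port": 443, "proto": "TCP"},   # HTTPS out
    {"action": "allow", "dst_port": 53,  "proto": "UDP"},   # DNS out
    {"action": "deny"},                                     # implicit deny-all
]

def filter_packet(packet):
    for rule in RULES:
        # A rule matches when every non-action field equals the packet's field
        if all(packet.get(k) == v for k, v in rule.items() if k != "action"):
            return rule["action"]

print(filter_packet({"dst_port": 443, "proto": "TCP"}))  # allow
print(filter_packet({"dst_port": 23,  "proto": "TCP"}))  # deny (telnet blocked)
```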

4.4 Compare inline traffic interrogation and TAP or traffic monitoring.

Taps or traffic monitoring — Traffic access points are used for intrusion detection systems. A tap is typically a dedicated hardware device with no IP address that acts as a relay, sending a copy of the network traffic to a server for security monitoring and analysis. This connects (taps) into the cabling of a network.

Inline traffic interrogation — This is where a device such as a firewall is in the path of traffic. It will process and pass live traffic based upon its rule set and configuration. This would be used in intrusion prevention systems due to it working with live traffic instead of a copy like a network tap.

4.5 Compare the characteristics of data obtained from taps or traffic monitoring and transactional data (netflow) in the analysis of network traffic.

Netflow — This data will allow you to see what is actually happening across the network live. It uses 5-tuple datasets to examine for anomalous or malicious network activity. Essentially the data is the 5-tuple of live traffic.

Taps — As stated before this uses a copy of traffic versus specific traffic data observed live.

4.6 Extract files from a TCP stream when given a PCAP file and wireshark.

For this you must get the Wireshark program. I will explain the steps, but ultimately you need to get your hands dirty with this program. Tryhackme.com has an excellent Wireshark course.

1) Open wireshark

2) Go to file tab and open chosen pcap file

3) Once the file is loaded go to the file tab again

4) Click on export objects

5) Click on http

6) A list of files found in all http requests will display in a new window

4.7 Identify key elements in an intrusion from a given pcap file.

Again, this will involve getting comfortable with Wireshark or tshark in order to identify elements that stand out as malicious in a packet.

4.8 Interpret the fields in protocol headers as related to intrusion analysis.

Any protocol used in a network can be observed using a packet sniffer. In any packet there will be headers with varying types of info/data. Here they want you to examine protocol headers, specifically fields such as source and destination IP, source and destination ports, size, flags, sequence numbers, etc., in order to properly identify any malicious activity from the packet.
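To make the field layout concrete, this sketch unpacks the fixed 20-byte IPv4 header (per the RFC 791 layout) from raw bytes; the sample packet bytes are hand-built for the example.

```python
import struct

def parse_ipv4_header(raw):
    # !BBHHHBBH4s4s = the fixed 20-byte IPv4 header, big-endian
    fields = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version":   fields[0] >> 4,
        "ihl":       fields[0] & 0x0F,
        "total_len": fields[2],
        "ttl":       fields[5],
        "protocol":  fields[6],  # 6 = TCP, 17 = UDP, 1 = ICMP
        "src_ip":    ".".join(str(b) for b in fields[8]),
        "dst_ip":    ".".join(str(b) for b in fields[9]),
    }

# Hand-built sample header: 10.0.0.5 -> 192.0.2.1, TTL 64, protocol TCP
sample = bytes([0x45, 0, 0, 40, 0, 0, 0, 0, 64, 6, 0, 0,
                10, 0, 0, 5, 192, 0, 2, 1])
print(parse_ipv4_header(sample))
```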

4.9 Interpret common artifact elements from an event to identify an alert.

4.9 a) IP address (source/destination) — This would be observing traffic coming from a malicious IP. Things like an IP that is typosquatting (goggle.com vs google.com), traffic destined for a C2 server, or manipulating a domain to exfiltrate data.

4.9 b) Client and server port identity — Observing ports being misused, or potentially entering on one port then switching ports once in the network. Identifying ports used for services which can potentially be abused in either direction of the session.

4.9 c) Process (file or registry) — This could be suspicious registry, permission, or system file changes; large numbers of requests for the same file; or compressed files in incorrect locations.

4.9 d) System (API calls) — Finding unknown applications in the system. The ability to observe and distinguish between API calls which are deemed potentially malicious within that given system. An example would be high-rate requests for something that is nowhere near the system's baseline, or an attacker hijacking a thread in order to load malicious data in DLL form.

4.9 e) Hashes — This can be a file downloaded to a machine whose hash does not match the download source's, or using the hash of a suspicious file to search for threat intelligence on a site like VirusTotal.

4.9 f) URI/URL — Malicious addresses of C2 servers or cyber-criminals, typosquatting, or potential abuse of the DNS resolve function to exfiltrate data.

4.10 Interpret basic regex

This topic can be daunting due to regex acting almost as a programming language for sifting through data sets. The various characters would be much too long to explain fully here, but I will demonstrate some basic ones below. I advise you to get comfortable with regex and seek out further knowledge, as this is an invaluable tool in data analysis. I recommend visiting regexone.com to get started.
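Here are a few of the basic building blocks in action, pulling fields out of a made-up log line with Python's re module:

```python
import re

line = "Oct 10 13:55:36 host sshd[4721]: Failed password from 203.0.113.9"

print(re.search(r"\d+", line).group())         # \d = digit, + = one or more -> "10"
print(re.search(r"\w+\[\d+\]", line).group())  # \w = word char, \[ escapes [ -> "sshd[4721]"
print(re.search(r"(\d{1,3}\.){3}\d{1,3}", line).group())  # {n,m} = repetition -> "203.0.113.9"
print(re.findall(r"^Oct", line))               # ^ anchors the match to line start -> ['Oct']
```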

5.0 Security policies and procedures

5.1 Describe management concepts

5.1 a) Asset management — A tracking system of enterprise assets. Security policies, processes and technologies implemented in order to manage and protect organization assets during their life cycle.

5.1 b) Configuration management — The process of tracking, management, and auditing of settings and configurations of assets/devices/systems in an organization.

5.1 c) Mobile device management — The management, tracking and auditing of mobile devices being used in an organization. Various frameworks can be used depending on the deployment: agent-based, agentless, BYOD, etc.

5.1 d) Patch management — Is a systems management subset where you identify, acquire, test and install patches, which are intended to fix bugs, fix vulnerabilities, or add features. This varies based upon the system, as patches may need to be observed in a sandbox environment before implementation.

5.1 e) Vulnerability management — Is an ongoing cyclical process where you identify, assess, report, manage and remediate vulnerabilities in a system or organization.

5.2 Describe elements in an incident response plan as stated in NIST 800–61

Here I have just copy and pasted the NIST 800–61 elements as this should help give you faster access to the answer than opening the doc and combing through.

Organizations should have a formal, focused, and coordinated approach to responding to incidents, including an incident response plan that provides the roadmap for implementing the incident response capability. Each organization needs a plan that meets its unique requirements, which relates to the organization’s mission, size, structure, and functions. The plan should lay out the necessary resources and management support. The incident response plan should include the following elements:
- Mission

- Strategies and goals

- Senior management approval

- Organizational approach to incident response

- How the incident response team will communicate with the rest of the organization and with other organizations

- Metrics for measuring the incident response capability and its effectiveness

- Roadmap for maturing the incident response capability

- How the program fits into the overall organization.

The organization's mission, strategies, and goals for incident response should help in determining the structure of its incident response capability. The incident response program structure should also be discussed within the plan. Once an organization develops a plan and gains management approval, the organization should implement the plan and review it at least annually to ensure the organization is following the roadmap for maturing the capability and fulfilling their goals for incident response.

Here I will be combining the answers to:

5.3 (Apply the incident handling process to an event)

5.4 (Map elements to these steps of analysis)

5.5 (Map the organization stakeholders against the NIST IR categories)

as this is a more efficient way to answer and display them than rewriting the incident response plan three times.

5.3/4/5 a) Preparation — This is the first step. Here you set up initial response capabilities: IDS/IPS systems, both host and network based; quarantine and sandbox capabilities; communication backups and redundancies; data backups; digital forensic software; and user security training.

In this phase the goal is to avoid an incident altogether through proper preparation. Jim from sales will know not to click a link in a well-executed phishing email, or will think to double-check with IT if something doesn't seem right. All stakeholders are involved in this phase, as this is where a proper security mindset, culture, and mechanisms become deeply rooted in an organization. Management, IT, the legal department, HR, the business continuity team, physical security staff, financial management, information assurance: all departments and employees are involved at this phase.

5.3/4/5 b) Detection and analysis — Here you attempt to accurately detect and assess possible security incidents and, if one has occurred, determine its extent and impact. You look for precursors and indicators of an attack or breach. A true understanding of normal baseline behavior is necessary, as is correlation of events and incidents. This is a very difficult step in incident handling. Let's say Jim messed up and did click that link in that well-crafted phishing email. This phase could involve seeing strange behavior from Jim's account, such as odd usage times or requests to access objects he does not usually need. Here you would probe further based on this activity and examine Jim's account and device logs. The stakeholders involved in this phase are IT and management, and potentially a user who observes strange activity and reports it.
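
As a toy illustration of baseline-based detection, here is a minimal Python sketch that flags logins outside a user's assumed working hours. The baseline hours and login events are invented, and real detection would draw on parsed authentication logs.

```python
# Minimal sketch: flag logins outside a user's normal working hours.
# Assumes login events were already parsed out of the auth logs;
# the hours and events below are invented for illustration.
from datetime import datetime

WORK_HOURS = range(8, 18)  # 08:00-17:59, the assumed baseline for this user

logins = [
    ("jim", datetime(2023, 4, 7, 9, 15)),
    ("jim", datetime(2023, 4, 7, 3, 42)),  # 03:42, outside baseline
]

for user, ts in logins:
    if ts.hour not in WORK_HOURS:
        print(f"ANOMALY: {user} logged in at {ts}, outside normal hours")
```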

5.3/4/5 c) Containment, eradication and recovery — Containment is extremely important before an incident spreads, overwhelms resources, or increases damage, and it should be considered as soon as possible. Containment decisions include shutting down a system, disconnecting from a network, disabling functions, pushing the big red button, and so on. This connects directly to the preparation phase, as containment strategies, procedures, and scenarios must be carefully considered beforehand in order to clearly define the acceptable risks and flesh them out as much as possible. Incidents vary, so predetermined containment strategies may need to be abandoned and quick, logical decisions made on the spot.

Eradication involves eliminating components of the incident, such as deleting malware, disabling breached user accounts, and identifying and mitigating all vulnerabilities that were exploited. Recovery entails admins restoring systems to normal operations, confirming functionality, and remediating any vulnerabilities observed in order to prevent similar incidents from recurring. This can involve restoring systems from clean backups, rebuilding systems from scratch, changing passwords, and tightening security. Eradication and recovery take a phased approach where high-value changes, such as overall security and business recovery, are prioritized to get things back up and running while avoiding further incidents.

So Jim seems to have downloaded a trojan which, upon detection, begins sending infected broadcast packets. You would immediately kill the device's network access and remove it from the network. Here you could run the sample in a sandbox to retrieve the trojan's file hash and investigate further using a site like VirusTotal. Most likely you would just wipe and factory reset the device, scan for any traces of malware after the reset, and possibly run it in a sandboxed network to see whether any of the trojan remains. Additionally, you would want to make sure the broadcast packets did not reach any other hosts in the network, checking for further infection. The stakeholders involved in this phase are IT, management, and, depending on the severity of the incident, physical security, facility management, and the business continuity team.

5.3/4/5 d) Post incident activity (lessons learned) — Documentation and review of the events and incidents. Lessons learned cover how and what happened, what can be improved, how the response went, how the incident can be avoided in the future, evidence retention, and incident data review and dissemination. This data is then used to educate staff on a newly adapted security posture based on the incident and lessons learned, such as training Jim on what to look for in a phishing email. The stakeholders involved in this phase are management, IT, the legal department, HR, the business continuity team, physical security staff, financial management, and information assurance; nearly all departments and employees are involved, at least in terms of security awareness training.

5.6 Describe concepts as documented in NIST SP 800–86 (forensic investigations).

5.6 a) Evidence collection order — The order in which evidence is collected must be carefully considered, because the act of collecting files can alter other pieces of data within a system. Collection must happen in an order where volatile data from RAM, swap files, slack space, and free space is prioritized. Additionally, malicious parties can install rootkits, which are designed to return false information and mimic a secure system. The situation will usually dictate the order in which to begin collecting data.

5.6 b) Data integrity — Upon collecting data in an investigation, its integrity should be verified. This is important to prove the data has remained untampered with, so it will stand up as evidence in court if needed. Data integrity verification typically consists of using tools to compute hashes or message digests of the original and copied data, then comparing them to make sure they match. Ensuring data integrity is incredibly important in a digital forensics investigation.
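
A minimal Python sketch of this verification, again using the standard library's hashlib; the evidence paths are placeholders:

```python
# Minimal sketch: verify a forensic copy against the original by
# comparing SHA-256 digests. The paths are placeholders.
import hashlib

def digest(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

original = digest("/evidence/disk.img")
working_copy = digest("/evidence/disk_copy.img")
assert original == working_copy, "Copy does not match original - integrity broken"
print(f"Integrity verified: {original}")
```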

5.6 c) Data preservation — This connects with data integrity and proper collection and storage. Properly store copies: one that is a mirror image and totally untouched, another used only for analysis, and one more for redundancy. Additionally, following the chain of custody process helps keep data preserved.

5.6 d) Volatile data collection — As mentioned in part a, volatile data, be it RAM, swap memory, or any data which can easily be altered or overwritten, must be considered and collected first. Each case is different, so the approach to volatile data collection may differ.

5.7 Identify these elements used for network profiling.

5.7 a) Total throughput — Throughput is the actual amount of traffic flowing from a specific source or group of sources to a specific destination or group of destinations at a specific point in time: essentially, how much traffic is flowing and the rate of data delivery over a period of time. So if you normally see only 32 Kbps on the network at 10 p.m. and it is now reading 40 Mbps, that is a total throughput anomaly.

5.7 b) Session duration — The time frame during which there are regular, active interactions between a client and a server. Say your baseline is a user connecting to a database for about forty minutes at most over a normal workday; that is your session baseline for that user. If that same user then connects at an odd time and the session lasts three hours, that would be an incident.

5.7 c) Ports used — Any organization will have a handful of ports used consistently, based upon the work being done within that enterprise. If, say, you never use port 22 for SSH and all of a sudden you see activity moving through port 22, that is not good.

5.7 d) Critical address space — The address ranges of critical devices or hosts: assets which, if compromised or DoS'd, could greatly impact an organization. For example, servers hosting the web application and database for an online store would be considered critical address space.
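
Tying the four elements together, here is a minimal Python sketch that checks flow records against a network baseline. Every number, port, and address range below is made up for illustration.

```python
# Minimal sketch: check flow records against an invented network baseline
# covering throughput, session duration, ports, and critical address space.
from ipaddress import ip_address, ip_network

BASELINE = {
    "max_throughput_kbps": 512,                   # expected off-hours throughput
    "max_session_minutes": 40,                    # longest normal session
    "allowed_ports": {80, 443, 3389},             # ports this org actually uses
    "critical_space": ip_network("10.0.5.0/24"),  # web/db server range
}

flows = [
    {"dst": "10.0.5.20", "port": 22, "kbps": 40000, "minutes": 180},
]

for f in flows:
    alerts = []
    if f["kbps"] > BASELINE["max_throughput_kbps"]:
        alerts.append("throughput spike")
    if f["minutes"] > BASELINE["max_session_minutes"]:
        alerts.append("unusually long session")
    if f["port"] not in BASELINE["allowed_ports"]:
        alerts.append(f"unexpected port {f['port']}")
    if ip_address(f["dst"]) in BASELINE["critical_space"]:
        alerts.append("touches critical address space")
    if alerts:
        print(f"{f['dst']}:{f['port']} -> {', '.join(alerts)}")
```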

5.8 Identify these elements used for server profiling.

5.8 a) Listening ports — These are the open and active ports, waiting for connections or actively running services on a server, acting as communication endpoints.

5.8 b) Logged in users/service accounts — Who's logged into the server? Does their activity and timeline correlate with any anomalies? Which service accounts are logged in, and do those services actually need to be?

5.8 c/d/e) Running processes/tasks/applications — What is running on the server? Events, applications, tasks, and activity in general: does any of it stray from the baseline in a way that stands out?
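
As one way to gather these data points on a live host, here is a minimal sketch using the third-party psutil library (not part of the standard library; install it with pip). Output and required privileges vary by operating system.

```python
# Minimal sketch of server profiling with the third-party psutil library.
# May require elevated privileges to see all sockets and processes.
import psutil

# Listening ports
listening = sorted({c.laddr.port for c in psutil.net_connections(kind="inet")
                    if c.status == psutil.CONN_LISTEN})
print("Listening ports:", listening)

# Logged-in users
print("Users:", [u.name for u in psutil.users()])

# Running processes (print just a count and the first few names)
procs = [p.info for p in psutil.process_iter(["pid", "name"])]
print(f"{len(procs)} processes running, e.g.:", [p["name"] for p in procs[:5]])
```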

5.9 Identify protected data in a network.

5.9 a) PII — Personally identifiable information. Things like social security numbers, home addresses, full legal names, geolocations.

5.9 b) PSI — Public sector information. Things like domain name owners, court records, company information, social media.

5.9 c) PHI — Protected health information. Documented medical records, health data, surgical history, diseases, prescriptions.

5.9 d) IP — Intellectual property. The source code to software, the recipe for Coca-Cola, the Colonel's secret eleven herbs and spices.

5.10 Classify intrusion events into categories as defined by security models, such as cyber kill chain and diamond model of intrusion.

Cyber Kill Chain

Recon — Using open source intelligence, a threat actor identifies the head of the HR department at the company they want to target.

Weaponization — The threat actor crafts a spear-phishing email with an embedded trojan, using the head of HR's involvement with and previous donations to a specific charity to socially engineer them into clicking on it.

Delivery — The email is sent to the head of HR.

Exploitation — The head of HR falls victim to the email and clicks the link containing the trojan.

Installation — The trojan installs itself on the HR head's device and begins collecting data.

Command and control — The trojan establishes a connection to the threat actor's command and control server, giving the attacker a remote channel for instructions and exfiltration.

Actions on objectives — Data has been successfully exfiltrated and the threat actor has begun selling the collected data on dark web markets.

Diamond model of intrusion

Here I will include links to a few sites that explain and elaborate on the diamond model of intrusion. They do a better job than I could here, and include visual models of how it works.

https://www.activeresponse.org/wp-content/uploads/2013/07/diamond.pdf

https://warnerchad.medium.com/diamond-model-for-cti-5aba5ba5585

5.11 Describe the relationship of SOC metrics to scope analysis (time to detect, time to contain, time to respond, time to control)

This refers to how quickly and effectively a SOC team can detect, respond to, and control an incident, and how that melds with the team's scope of analysis within the organization they are protecting.

Though they are sometimes used interchangeably, each metric provides a different insight. When used together, they can tell a more complete story about how successful your team is with incident management and where the team can improve. Time to detect tells you how quickly your team can detect an incident. Layer in mean time to respond and you get a sense for how much of the recovery time belongs to the team and how much is your alert system. Further layer in mean time to contain and you start to see how much time the team is spending on detection vs. response and containment.

Add mean time to control to the mix and you start to understand the full scope of fixing and resolving issues beyond the actual downtime they cause.

Fold in mean time between failures and the picture gets even bigger, showing you how successful your team is at preventing or reducing future issues.
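
As a simple illustration of how these metrics are derived, here is a minimal Python sketch computing mean time to detect, respond, and contain from incident timestamps. The incident records and their timestamps are invented.

```python
# Minimal sketch: derive SOC time metrics from invented incident timestamps.
from datetime import datetime
from statistics import mean

incidents = [
    {"occurred": datetime(2023, 4, 1, 2, 0),
     "detected": datetime(2023, 4, 1, 2, 30),
     "responded": datetime(2023, 4, 1, 3, 0),
     "contained": datetime(2023, 4, 1, 5, 0)},
    {"occurred": datetime(2023, 4, 3, 9, 0),
     "detected": datetime(2023, 4, 3, 9, 10),
     "responded": datetime(2023, 4, 3, 9, 40),
     "contained": datetime(2023, 4, 3, 11, 0)},
]

def mean_minutes(start_key: str, end_key: str) -> float:
    """Average the interval between two timestamps across all incidents."""
    return mean((i[end_key] - i[start_key]).total_seconds() / 60 for i in incidents)

print(f"Mean time to detect:  {mean_minutes('occurred', 'detected'):.0f} min")
print(f"Mean time to respond: {mean_minutes('detected', 'responded'):.0f} min")
print(f"Mean time to contain: {mean_minutes('detected', 'contained'):.0f} min")
```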
