Time-line Alignment of Cyber Incidents in Heterogeneous Environments

Agnė Brilingaitė, Linas Bukauskas, Eduardas Kutka
Institute of Computer Science, Vilnius University, Vilnius, Lithuania
[email protected]
[email protected]
[email protected]

Abstract: The increasing number of cyber incidents in heterogeneous environments demonstrates a definite need for new approaches to protect, analyse, and validate asynchronous systems. Large infrastructures or services span several networks, different ownerships, and time zones. These characteristics show the need for coordination, timely incident response, and situation assessment capabilities. Industrial and public infrastructures, autonomous devices, and the Internet of Everything might have computer system components that are asynchronous to global time. Some systems run with micro fluctuations in time. Synchronisation of the internal clock of asynchronous systems can be challenging in itself. Each network element can have a different system time setup. Such setups could be accidental misconfigurations or a reflection of a purposeful design with multiple misaligned log historian servers. Unattended and fully autonomous systems will drift apart in time without time synchronisation. This behaviour of heterogeneous time systems leads to a time-shift problem during cyber incident investigation and to slow or even impossible incident detection. Merging logs from different time flows does not allow correct links to be made among actually connected events. Thus, the timeline of events is incorrect and misleading when trying to apply patterns of system or intruder behaviour. This paper presents the Framework of Event Alignment in Time (FEAT). The framework aids cyber incident response teams and digital forensic analysts in mitigating the asynchronicity of events within the scope of analysed infrastructures and services. FEAT enables log event alignment based on the type, periodicity, and severity of the time-shift. Also, the framework supports efficient tooling to prepare, operate, and maintain infrastructure with time-shifted systems or to analyse events during the cyber incident response. Besides, FEAT can be applied to validate the trustworthiness of distributed services and to evaluate the impact of time-shift on the intrusion detection strategies in use. We demonstrate the capabilities of FEAT by implementing time-shift cases with incident simulation within a cyber defence training platform.

Keywords: cyber incident detection, event time alignment, asynchronous systems

1. Introduction

Timing is critical for the development of various standard and high-value applications and services (Weiss, Chandhoke and Melvin, 2015), e.g., smart transportation, power-grid, and financial systems. Time becomes even more critical in digital forensics or security assurance for those applications. Log information serves as the primary source of information for digital forensics. For intrusion detection, logs can be analysed using two approaches (Abad et al., 2003). The top-down approach takes a known attack and analyses its behaviour based on log information. The bottom-up approach analyses logs to discover potential attacks. The correct alignment of security logs or events that happen across the whole organisation helps the CyberSecurity Incident Response Team (CSIRT) analyst to detect certain types of cyber attacks in distributed environments, to get a global perspective of the cyber incident, and to apply tools that alleviate such work. Live digital forensics encounters challenges (Mazdadi et al., 2017) in estimating the current state of an infrastructure element as well as in considering massive amounts of communication backlogs (Lillis et al., 2016).

Synchronisation of logs of asynchronous systems can be challenging or sometimes even impossible. Each network element works asynchronously and might have a different initial time set for the system. Such setups could be accidental misconfigurations or purposeful manipulations of time. Unattended systems configured without global time or any clock synchronisation will drift in time. For adversaries, such unattended or detached systems open up attack scenarios where time skew, time jumps, and loops in time render the system incapable of operation or introduce wrong behaviour.

A virtual infrastructure of an organisation with several virtual networks is presented as an example in Figure 1. Apart from infrastructures and workstations, enterprises use additional hard-to-manage components. These components are physical network devices, sensors, and industrial devices. As one can see, such infrastructures have many different timelines at different levels. In a perfect world, time is synchronised in all components, and all logs are aligned in time. However, in most situations, time differences occur every day. Some devices will have problems with network date synchronisation, and time will drift independently in all systems. As one can see in Figure 1, even with one host, one VM, and one service, we have at least three independent timelines. This number of options emphasises that event logging and timestamping on the client side introduce significant time alignment problems among various applications or services.

Figure 1: A simple example of time differences in a virtual environment

Other examples of time problems in virtual infrastructures can be given. Misconfiguration of Infrastructure as a Service (IaaS) can also lead to date and time shifts as well as to abandoning the standard notion of time within the services. Sometimes, detached local networks lack synchronisation to the global time, as industrial device internal clocks and historian information remain unsynchronised with the rest of the system. Time servers are established locally in industrial applications. Soon, one of the most significant problems might be the synchronisation of autonomous vehicles with global time. Tampering with time or sudden time changes can lead to car crashes instigated by incorrect calculations. Also, with the development of blockchain networks, timestamping becomes essential in non-repudiation cases among private entities.

This paper presents the Framework of Event Alignment in Time (FEAT) to simulate the appearance of time-shifts in a heterogeneous infrastructure of services and devices. We discuss the challenges of enterprise or industrial systems that are misaligned in time. FEAT enables the generation and setting of time-shifts similar to those that can be observed in real systems due to natural reasons. Also, some time-shifts are generated to simulate network/system intruder behaviour. We distinguish four types of time-shifts and introduce strategies for aligning logs of asynchronous systems in time. The alignment is required when assessing the situation for live digital forensics and when evaluating software or service robustness during a cyber incident.

The paper is structured as follows. Section 2 presents related work. Section 3 describes various types of misalignment in time when dealing with logs. Section 4 discusses a framework for event alignment, its architecture, and a methodology to trigger systems to misalign time as well as to collect forensic data. Section 5 draws conclusions and projects future work.

2. Related Work

The security of various sectors could be enhanced by correlating common IT security events with those beyond cyberspace, e.g., anomalies in device recordings for gas or temperature. Thus, intrusion detection, security information, and event management systems deal with big heterogeneous data. Homogeneous solutions are unable to detect multi-step attacks (Zuech, Khoshgoftaar and Wald, 2015). Therefore, analysis of the log data and identification of attacks is not a trivial task. Event correlation is required to establish the timeline of an incident.

Herrerias and Gomez (2007) distinguished two problems of log analysis: diverse log formats and enormous amounts of data recorded in log files. They presented a model that gathered information from different data sources, e.g., application logs and system logs, in diverse formats. The events were then joined together in one format, filtered, normalised, and sent to the correlation engine for diagnosis. It was assumed that each action was recorded and available unmodified. The engine used compression, generalisation, and deletion of low-priority events to minimise data amounts.

Due to the vast amount of low-priority alerts generated by intrusion detection systems, it is recommended to perform post-correlation analysis (Shittu et al., 2015) of events to group them into high-level structures, meta-alerts, to find interesting alerts for the analyst. Various methods could be used for event clustering and priority identification. Events must get a high priority if they represent anomalous behaviour at a given time. For example, three categories of attack severity (Low, Medium, and High) could be assigned to alerts based on the extracted features (Hassan, 2015).

Meera and Geethakumari (2016) addressed the lack of segregation of events in the logs of cloud infrastructure. They used attribute-based correlation techniques. The attributes could be the instance ID or instance IP of virtual machines, and time could also be used as a correlation parameter. The investigator could set the time interval to include or exclude particular logs. However, the work did not consider any noise or incomplete logs and required a uniform log format.

There exist frameworks that support correlation analysis of cyber incidents. Kim, Woo, and Kim (2016) used an event relation tree and an Event Transition Graph (ETG) to support event relations and temporal features, respectively. The ETG structure was used to support a sorted list of events, and due to temporal characteristics, two parent nodes could be assigned to an event. Temporal correlation of events plays an important role in the visualisation of cybersecurity maps to define the impact of cybersecurity events (Ferebee et al., 2011).

Secure logging schemes (Rajalakshmi, Rathinraj and Braveen, 2014) ensure that the log information is not corrupted and not modified afterwards. If the logging process does not involve encryption, the timestamps could be corrupted in various logs of the environment. Accorsi (2013) presented a secure logging infrastructure to ensure authentic archival of log records that provides data for remote audit. Devices sent event information to collectors that stored events in log files. Authorised auditors retrieved the obtained log-file portions from the collectors.

Kao and Chiu (2015) presented an iterative model to explore date-time stamps in file metadata of the Windows operating system within a cloud storage service to establish an event timeline. It considered when files were created, updated, modified, etc. The authors emphasised that sometimes the date-time stamp differed from the actual time, and this aspect was critical for investigators.
A four-stage process was described to work with date-time stamps, and during the comparison phase, it was important to consider the deviation of the actual time from the computer time, as the reconstruction of the timeline could otherwise lead to incorrect conclusions. There could be several reasons for an incorrect clock: wrong settings, time zone, or daylight saving.

Devices and systems can have distinct clock speeds, referred to as clock skew. Huang and Teng (2014) presented an algorithm to imitate a network node's clock skew, an approach to launching a replication attack on clock-skew-based identification in wireless sensor networks, and a defence method against the attack based on the time characteristics between two sensor nodes.

Sagong et al. (2017) investigated attacks on in-vehicle networks. They emphasised the vulnerability of electronic control units (ECUs) in vehicles due to available external interfaces. The Controller Area Network (CAN) protocol is a broadcast protocol, and it does not include encryption, authentication, or timestamps. Therefore, spoofed messages could cause serious safety risks. Each ECU can have a distinct clock speed. The clock skew impacts the inter-departure times of messages sent by the transmitter to the receiver. Sagong et al. (2017) proposed an intelligent masquerade attack that adjusted the inter-departure time of messages to match the clock skew observed at the receiver in order to bypass intrusion detection systems. They also introduced a metric to quantify the effectiveness of an IDS in detecting masquerade attacks.

3. Misalignment Types

Journals of sequential transactions and/or asynchronous logs serve as the material for the CSIRT investigation at the scene. Observing time skew or loops back in time is the CSIRT's responsibility. A log l is a sequence of log entries where each ei ∈ D', and D' is some generalised value domain. All value domains are represented as the set D. D' varies depending on the source, and the log schema is not identical or fixed for all log sources. Each log entry ei is an m-tuple e = (c1, ..., cm). A number of separators separate the values of m-tuples in log files. The log schema L is defined as a sequence of attribute and domain associations e ∈ L, L = D1 × ... × Dm, Di ∈ D. In Figure 2, the space character is a separator, and c1 belongs to the value domain of months.

Figure 2: Fragment of the network infrastructure log file

Among all value domains, there is a subset of time domains TIME ⊂ D. The log schema L = D1 × ... × Dm represents a continuous log in time if and only if at least one attribute of the schema is of a time domain, i.e., ∃ L[Di]: Di ∈ TIME. In Figure 2, c1, c2, and c3 represent attributes with values from time domains.

We assume that each log entry has a unique object identifier (UOID) associated with it. The log has unique object identifiers (UOIDs) if ∃ L[Di]: Di = UOID ∧ |l[ci]| > 0 ∧ |l[ci]| = |distinct(l[ci])|, i.e., there exists some attribute with non-nullable values, and the number of distinct values equals the number of recorded values for the particular attribute. In some cases, the UOID acts as an auto-increment (serial) value. It should be pointed out that the UOID is not always a timestamp; however, such a misconception sometimes makes a huge difference when analysing logs. We define a serial UOID as having always-incrementing values, and these values do not necessarily correspond to the numbers of the log entries due to the UOID generation mechanism. Still, for every ej, the UOID value and the variable j represent the order of entry appearance in the log. For a schema L = D1 × ... × Dm with UOID attribute Dj = UOID, adjacent log entries satisfy the condition ∀ ei, ei+1 ∈ l: ei+1[cj] − ei[cj] > 0.

The CSIRT could find several types of time misalignment in the heterogeneous infrastructure at the scene. We assume that the CSIRT starts the analysis with an observation of the system properties for a particular length of time. The start of the observation time is denoted as tobs. The time misalignment cases are analysed against the global time, or reference time, denoted as GT. In Figure 2, the last line shows an example of a time jump into the past: the DHCPACK timestamp is earlier than the initial DHCPREQUEST.
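To make the schema, time-domain, and serial UOID notions concrete, the following minimal Python sketch models log entries and checks the uniqueness and adjacency conditions defined above; the field names uoid, timestamp, and message are illustrative assumptions, not a FEAT interface.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LogEntry:
    """One tuple e = (c1, ..., cm), reduced to the attributes needed here."""
    uoid: int          # serial unique object identifier (attribute with Dj = UOID)
    timestamp: float   # value from a TIME domain, e.g., seconds since the epoch
    message: str       # remaining attributes collapsed into one field

def uoid_values_are_unique(log: List[LogEntry]) -> bool:
    """|l[ci]| > 0 and |l[ci]| = |distinct(l[ci])| for the UOID attribute."""
    uoids = [e.uoid for e in log]
    return len(uoids) > 0 and len(uoids) == len(set(uoids))

def has_serial_uoid(log: List[LogEntry]) -> bool:
    """Adjacency condition: for all adjacent ei, ei+1 in l, ei+1[cj] - ei[cj] > 0."""
    return all(b.uoid - a.uoid > 0 for a, b in zip(log, log[1:]))
```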

3.1 Constant Misalignment

Some systems have a permanent shift of time (time skew). Such a misalignment is called a constant misalignment, as the time-shift is a constant C. The system keeps the pace of the local time synchronous with GT. This feature implies that time synchronisation with the external time source is maintained, and all log entries are just a constant number of clock ticks away at any time. We define the function constantMisalign: D1 × … × Dn → ℕ, Di ∈ D, that evaluates the constant misalignment in the provided log and returns a non-zero value of the constant misalignment (in clock ticks); otherwise, 0 is returned. The function requires at least one time-related domain within the provided schema, and it uses a system observation window in time. It is important to emphasise that the log must be treated as a continuous data stream, not a finite data set. The function returns the undefined value ⊥ if no events happened during the observation period and the time misalignment was impossible to detect.

The constant misalignment is defined as ∀ i, j, e = (c1, ..., cn): ck ∈ D* ∈ TIME: ei[ck] − GT = C = ej[ck] − GT if the condition holds during the observation window that starts at the observation time tobs. The constant C shows the alignment window precision. Examples of logs with constant misalignment are systems running in different time zones within the infrastructure and using local time instead of UTC, database management systems having a wrong initial time, and autonomous devices that run with default settings.
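As an illustration only (not the paper's implementation), a constantMisalign-style check can compare every time-domain value against the reference time GT within the observation window; the offsets input and the tolerance parameter are assumptions of this sketch.

```python
from typing import Iterable, Optional

def constant_misalign(offsets: Iterable[float], tolerance: float = 1.0) -> Optional[float]:
    """offsets: ei[ck] - GT for each entry seen in the observation window, in clock ticks.

    Returns the non-zero constant misalignment C, 0 when no constant misalignment
    is found, or None (the undefined value) when no events were observed."""
    offsets = list(offsets)
    if not offsets:
        return None                                    # nothing happened in the window
    c = offsets[0]
    if all(abs(o - c) <= tolerance for o in offsets):  # offset stays (almost) constant
        return 0.0 if abs(c) <= tolerance else c
    return 0.0                                         # offsets vary: not a constant misalignment
```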

3.2 Δ-Shift Misalignment

System logs could have a constant shift C with a small observable variation Δ over time. This implies that the machine does not have any time synchronisation or works improperly. This case might often happen in systems that have a high-latency network connection to the time synchronisation server. Two variations of the misalignment exist:
1. Δ+ positive skew: an increase in time, understood as clock ticks being delayed in comparison to the global time GT, and the change is observable;
2. Δ- negative skew: a decrease in time, understood as clock ticks getting shorter in comparison to the global time, and the difference is noticeable.

The Δ-shift misalignment is defined as ∃ i, j, e = (c1, ..., cn): ck ∈ D* ∈ TIME: ei[ck] − GT = C1 ∧ ej[ck] − GT = C2 ∧ Δ > |C1| > 0 ∧ Δ > |C2| > 0 ∧ C1 ≠ C2 if the condition holds during the observation window that starts at the observation time tobs. The Δ value is the maximum shift found, max(|Ci|). We define the function deltaShift: D1 × … × Dn → ℕ, Di ∈ D, that evaluates the Δ time-shift in the provided log and returns a non-zero value of the misalignment (in clock ticks); otherwise, 0 is returned. The function returns the undefined value ⊥ if no events happened during the observation period and the time-shift was impossible to detect. We assume that for one log, only one variation of this misalignment can be detected.

A simple example of this misalignment type is a virtual server that does not synchronise its time often enough and has an unstable internal clock. In some cases, the internal device clock could be implemented not as a real-time clock but as a derivative of another process (for example, in small IoT devices) or of the power source frequency. Such systems will drift apart very fast and might provide inaccurate data in the logs. Investigating such systems is a complicated task because many events might happen within that Δ interval, and their correct timing will be difficult or impossible to restore.
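A matching deltaShift sketch under the same assumptions as the constant_misalign example: it reports Δ = max(|Ci|) only when the per-entry offsets from GT vary; the tolerance separating "constant" from "varying" is again an assumption.

```python
from typing import Iterable, Optional

def delta_shift(offsets: Iterable[float], tolerance: float = 1.0) -> Optional[float]:
    """offsets: ei[ck] - GT for each entry seen in the observation window, in clock ticks.

    Returns the value max(|Ci|) when a varying shift is present, 0 when no
    delta-shift is detected (e.g., the offset is constant), or None when no
    events were observed."""
    offsets = list(offsets)
    if not offsets:
        return None                                   # nothing happened in the window
    if max(offsets) - min(offsets) <= tolerance:      # effectively constant offset
        return 0.0
    return max(abs(o) for o in offsets)               # delta = max(|Ci|)
```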

3.3 Z-Shift and F-Shift Misalignment

Logs of systems may contain unpredictable non-linear time differences. This implies that log timestamps are misaligned independently of the constant and Δ-shift misalignments. In general, we can distinguish two important time-skew cases that might affect the system:

• Loop in time (see Figure 3) is a z-shift misalignment. It is an occurrence of facts that are recorded in the order of appearance but not in the order of time. The z-shift misalignment is identified in log l if ∃ i, j, e = (c1, ..., cn): ck ∈ D* ∈ TIME: j = i + 1, ei[ck] − ej[ck] > Δ ∧ deltaShift(l) = ⊥ ∧ constantMisalign(l) = ⊥. Based on observations, the z-shift means a jump back in time, and that jump is bigger than Δ. In Figure 3, a possible back loop in time is visualised. All journal entries are point-based facts. For simplicity of visualisation, we use a Z-curve style to visualise the jump-back that can be derived from the observable log journal. The visualisation style gave the misalignment type its name.

• Forward in time (see Figure 4) is an f-shift misalignment. It is an occurrence of a large gap between two facts that would not normally appear within the system. The f-shift misalignment is identified in log l if ∃ i, j, e = (c1, ..., cn): ck ∈ D* ∈ TIME: j = i + 1, ej[ck] − ei[ck] > Δ ∧ deltaShift(l) = ⊥ ∧ constantMisalign(l) = ⊥. In Figure 4, the possible jump forward in time is shown. A jump forward in time produces a large gap in recorded facts that would be observable.

Figure 3: Loop in time

Figure 4: Forward in time

The z-shift and f-shift misalignments show cases of intruder behaviour that could cause severe damage to the system. For example, financial or accounting services depend on time, and an intruder might want to bypass constraints built into the system. Jumps in time can be very dangerous in transportation systems or systems that depend on geopositioning.
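To make the two conditions concrete, the hedged sketch below (continuing the earlier examples) scans adjacent timestamp pairs: a backward difference larger than Δ marks a loop in time, and a forward gap larger than Δ marks a jump forward.

```python
from typing import List, Tuple

def z_and_f_shifts(timestamps: List[float], delta: float) -> Tuple[List[int], List[int]]:
    """timestamps: ei[ck] values in the order of appearance in the log.

    Returns (z_positions, f_positions); position i means the anomaly lies
    between entries ei and ei+1."""
    z_positions, f_positions = [], []
    for i in range(len(timestamps) - 1):
        diff = timestamps[i + 1] - timestamps[i]
        if -diff > delta:       # ei[ck] - ei+1[ck] > delta: loop back in time (z-shift)
            z_positions.append(i)
        elif diff > delta:      # ei+1[ck] - ei[ck] > delta: large forward gap (f-shift)
            f_positions.append(i)
    return z_positions, f_positions
```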

4. Framework of Event Alignment in Time

We present the Framework of Event Alignment in Time (FEAT) for the incident response team to deal with logs that are misaligned in time. The FEAT implementation serves as an environment to experiment with a heterogeneous infrastructure that includes services and a collection of devices. FEAT enables the simulation of coordinated time-shift attacks. It also enables observation of service robustness and of the correctness of business processes during a possible attack. FEAT serves as a training platform for a CSIRT to test tooling and team behaviour during an incident. FEAT is implemented as a part of the cyber defence platform.

4.1 Architecture

Figure 5 presents the architecture of the FEAT implementation. The FEAT prototype is a simulated environment of a life-like IT infrastructure. An asynchronous business scenario is observed from the environment with a preinstalled agent management station. The agent management station acts as the botmaster of the bot-network. It can issue instructions to run remote time-shift commands on agents' machines. Agents can be used for event simulation and time adjustments as well as for certification of timestamps received from the agent management station. To control time-shifts precisely, the bot-network is visually represented as a graph and provides an overall picture of possible targets for the operator. The bot-network is hierarchical, and each agent periodically pulls encrypted messages from the network.

Figure 5: Example of a virtual environment with management agents

In some cases, the agent can be used to mark all logged events properly for time alignment purposes in the investigation station as forensic evidence for later assessment. The agent management software simulates different kinds of cyber incidents. CSIRT analysts have to use additional tooling to detect and, if possible, stop the time-shift attack efficiently. Live forensic analysts are not allowed to synchronise observed machines with global time or a local time server.

4.2 Methodology

Security logs and service information of interest are gathered in a centralised or decentralised manner. On legacy systems, it is not always possible to have very detailed log information. An expected solution is to go through every server and service logging mechanism, modify legacy systems by standardising columns of interest, and add standard time information. However, it is not always possible to modify closed-source services and technological assets, and the maintenance of such systems is often neglected. The incident response team must work on server/service log information as it accumulates in real time and observe all previous actions from asynchronous log historians.

Passive alignment is a centralised log server process that adds an additional time tag for each record as the record is received at the log server. It is named passive because the central log server neither emits nor requires any synchronisation packets from remote services. A timestamp from the hardware computer clock is used to qualify each log entry. We define a client as an agent of software or hardware that emits data to be logged. The log server takes care of actual event handling and writing to a historical journal. The log server is not necessarily on the same machine as the client.

Arrival-to-gate tagging. This method records the timestamp only on record set arrival. The method is viable if the client side just produces audit logs and submits them to the log server. The log server side acts as a timestamping office. Most of the time, only one additional column or a special tag is added to identify the referenced timestamp.

Gate-to-gate tagging. This method timestamps log entries as they pass through a monitored system and additionally timestamps them as soon as they arrive at the log server. Thus, the method guarantees that we always know when an entry left the observed machine and when it arrived at the central log system. Two timestamps are created for any log entry e: the time of departure and the time of arrival.

Active alignment requires an active central log server and a time server notification mechanism for time synchronisation and logical shift.

Pull. The agent pulls a trusted timestamp token from the time server and uses it when merging with logged entries. This method requires the client machine to be active and capable of communicating with the time server. The method introduces network latency, and in the worst-case scenario, it can only be used for non-frequent periodic data timestamping. Thus, this alignment method is used on chunks of entries rather than on each record.

Push. The push timestamp method broadcasts a timestamp to all machines on the network. Then, the client applies the received timestamp to the logged data.
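As a minimal sketch of the gate-to-gate tagging described above (the JSON transport and the departure_ts/arrival_ts field names are illustrative assumptions, not part of FEAT), the client stamps an entry as it leaves the monitored system and the log server stamps it again on arrival.

```python
import json
import time
from typing import Dict

def stamp_departure(entry: Dict) -> str:
    """Gate 1 (client): tag the entry with the local, possibly misaligned clock."""
    entry["departure_ts"] = time.time()
    return json.dumps(entry)

def stamp_arrival(raw: str) -> Dict:
    """Gate 2 (log server): tag the entry with the reference clock on arrival.

    The (departure_ts, arrival_ts) pair later lets the client clock offset be
    estimated, up to the unknown transfer delay."""
    entry = json.loads(raw)
    entry["arrival_ts"] = time.time()
    return entry
```

Arrival-to-gate tagging corresponds to keeping only the second stamp.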
Analysis of the z-shift and f-shift log misalignments in time requires defining the Δ values. The f-shift function could be defined as an outlier detection algorithm over adjacent events. The algorithm filters and post-processes only those distances between log entries that are not frequent and do not adhere to frequent activity patterns. For example, if there is a gap between facts and it is large enough to be distinct from the rest of the data, then such a gap could be caused by malfunctioning or intentional data skew. For forward-in-time detection, we use a traditional outlier detection algorithm. The z-shift misalignment is a pattern of recorded log entries having a timestamp in the past. The z-shift identification function finds occurrences that corrupt the natural order of serialised UOIDs.

When analysing security logs of any kind, it is essential to establish a clear timeline over the period of time and to get confirmation from multiple data sources l1 and l2, as seen in Figure 6. In our case, we define time alignment among many logs to identify any time-skew attacks. Therefore, the clean scenario would be, for each log li, to union the Z-curve sequences to get the most precise point of change of the time loop. Aligning different logs on one time axis helps us to pinpoint a problem in different systems. This enables us to locate and identify all the time skews in different systems and correlate them. Noticing disruptions in different sources itemises the event and clarifies what problems might have happened in the systems. As presented in Figure 6, each log provides different times for the time skew. Combining information from Z-shift and Z-shift', we would notice that the time skew happened from the beginning of Z-shift' to the end of Z-shift. The forward shift F-shift is also identifiable as an unusually long gap between recorded events.
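A rough sketch of combining two logs on one axis (reusing z_and_f_shifts from the earlier example; the interval semantics is an assumption of this illustration): the suspected loop interval is estimated per log and then united across logs to bound the skew as in Figure 6.

```python
from typing import List, Optional, Tuple

Interval = Optional[Tuple[float, float]]

def skew_interval(timestamps: List[float], delta: float) -> Interval:
    """Estimate the (start, end) timestamp interval affected by a time loop in one log."""
    z_positions, _ = z_and_f_shifts(timestamps, delta)
    if not z_positions:
        return None
    start = min(timestamps[i + 1] for i in z_positions)  # earliest point the clock jumped back to
    end = max(timestamps[i] for i in z_positions)        # latest timestamp before a jump back
    return start, end

def combine_intervals(a: Interval, b: Interval) -> Interval:
    """Union the intervals from logs l1 and l2, as when combining Z-shift and Z-shift'."""
    if a is None:
        return b
    if b is None:
        return a
    return min(a[0], b[0]), max(a[1], b[1])
```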

Figure 6: Time alignment of two logs

5. Conclusions and Future Work

When responding to a cyber incident, time-aligned logs play the most critical role. Misaligned security logs or events from multiple sources waste analysts' precious time and hide the event that is the main cause of the cyber incident. The work focused on the definition of possible real-life situations with time misalignments that occur in heterogeneous or detached networks. For the development of event alignment procedures and the evaluation of service robustness, we developed a formal model for the possible types of shifts in time. We introduced a framework of event alignment that operates as a bot-network. Alignment of events is not a trivial task and most of the time requires sophisticated tools.

This work could be extended in several directions. Future development of the methods will include a special API for industrial hardware devices with IoT agents. Such an extension would bridge the gap between detached systems of different vendors and would read out data in a common way. We will investigate special cases of alignment when pre-aggregation or open source intelligence is used to align data with public events. Some of the events that happen in industrial systems could be derived from third-party online data sources rather than directly from organisational devices.

Acknowledgments

The authors are thankful for the resources provided by the Information Technology Research Center of Vilnius University.

References

Abad, C., Taylor, J., Sengul, C., Yurcik, W., Zhou, Y. and Rowe, K. (2003) “Log correlation for intrusion detection: a proof of concept,” 19th Annual Computer Security Applications Conference (ACSAC), IEEE, pp. 255–264.
Accorsi, R. (2013) “A secure log architecture to support remote auditing,” Mathematical and Computer Modelling, 57, pp. 1578–1591.
Ferebee, D., Dasgupta, D., Schmidt, M. and Wu, Q. (2011) “Security visualization: Cyber security storm map and event correlation,” IEEE Symposium on Computational Intelligence in Cyber Security (CICS), IEEE, pp. 171–178.
Hassan, D. (2015) “Mining intrusion detection alerts for predicting severity of detected attacks,” 11th International Conference on Information Assurance and Security (IAS), IEEE, pp. 38–43.
Herrerias, J. and Gomez, R. (2007) “A Log Correlation Model to Support the Evidence Search Process in a Forensic Investigation,” Second International Workshop on Systematic Approaches to Digital Forensic Engineering (SADFE’07), IEEE, pp. 31–42.
Huang, D.-J. and Teng, W.-C. (2014) “A defense against clock skew replication attacks in wireless sensor networks,” Journal of Network and Computer Applications, Academic Press, 39, pp. 26–37.
Kao, D.-Y. and Chiu, Y.-H. (2015) “An Iterative Management Model of Exploring Windows Date-Time Stamps in Cloud Storage Forensics,” Digital-Forensics and Watermarking, Lecture Notes in Computer Science, Springer, Cham, pp. 498–512.
Kim, D., Woo, J. and Kim, H. K. (2016) “‘I know what you did before’: General framework for correlation analysis of cyber threat incidents,” MILCOM 2016 - 2016 IEEE Military Communications Conference, IEEE, pp. 782–787.
Lillis, D., Becker, B., O'Sullivan, T. and Scanlon, M. (2016) “Current Challenges and Future Research Areas for Digital Forensic Investigation,” CoRR, abs/1604.03850, p. 11.
Mazdadi, M. I., Riadi, I. and Luthfi, A. (2017) “Live Forensics on RouterOS using API Services to Investigate Network Attacks,” International Journal of Computer Science and Information Security, 15(2), pp. 406–410.
Meera, G. and Geethakumari, G. (2016) “Event Correlation for Log Analysis in the Cloud,” 6th International Conference on Advanced Computing (IACC), IEEE, pp. 158–162.
Rajalakshmi, J. R., Rathinraj, M. and Braveen, M. (2014) “Anonymizing log management process for secure logging in the cloud,” International Conference on Circuits, Power and Computing Technologies (ICCPCT-2014), IEEE, pp. 1559–1564.
Sagong, S. U., Ying, X., Clark, A., Bushnell, L. and Poovendran, R. (2017) “Cloaking the Clock: Emulating Clock Skew in Controller Area Networks,” CoRR, abs/1710.02692, p. 11.
Shittu, R., Healing, A., Ghanea-Hercock, R., Bloomfield, R. and Rajarajan, M. (2015) “Intrusion alert prioritisation and attack detection using post-correlation analysis,” Computers & Security, Elsevier Advanced Technology, 50, pp. 1–15.
Weiss, M., Chandhoke, S. and Melvin, H. (2015) “Time signals converging within cyber-physical systems,” Joint Conference of the IEEE International Frequency Control Symposium & the European Frequency and Time Forum, IEEE, pp. 684–689.
Zuech, R., Khoshgoftaar, T. M. and Wald, R. (2015) “Intrusion detection and Big Heterogeneous Data: a Survey,” Journal of Big Data, Springer International Publishing, 2(1), p. 41.