Monday, April 15, 2013

What is Security Event Management?

Security Event Management (SEM) is often referred to as the brain of a SIEM solution. Manually analysing millions or billions of logs would require a significant investment in head count. The SEM portion of a SIEM solution allows automated analysis of those billions of logs, looking for unusual behaviours.
To be considered a true SEM, the solution should be able to monitor for behaviours, rather than individual events. A number of solutions will be able to correlate events, but not all correlation is equal in capability.
A correlation or behavioural engine should be able to look for multiple TYPES of events. For example, some SEM vendors will claim they do correlation when they can look for multiple failed logins, that is to say, five failed logins within a specified timeframe, say 60 seconds. While this is useful, it is not true correlation.
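A threshold rule of this kind can be sketched in a few lines. The class below is purely illustrative, not any vendor's implementation; it simply counts events inside a sliding time window:

```python
from collections import deque

class ThresholdRule:
    """Fires when `count` matching events occur within `window` seconds."""

    def __init__(self, count=5, window=60):
        self.count = count
        self.window = window
        self.times = deque()  # timestamps of recent matching events

    def feed(self, timestamp):
        """Record one matching event; return True if the threshold is hit."""
        self.times.append(timestamp)
        # Discard events that have aged out of the sliding window.
        while self.times and timestamp - self.times[0] > self.window:
            self.times.popleft()
        return len(self.times) >= self.count
```

Feeding it five failed-login timestamps inside 60 seconds triggers the rule; events spaced further apart do not. Note that the rule knows nothing about *other* event types, which is exactly its limitation.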
Stateful correlation is the ability to track multiple types of events, for example, five failed logins, followed by a successful login, followed by a new user created, followed by the user added to a privileged group. This is true correlation: looking for a collection of events that make up a suspicious behaviour. It is critical when you are taking many billions of logs and filtering them down to the tens or hundreds of behaviours that need to be investigated by the security or compliance team.
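The stateful behaviour described above can be sketched as a small per-user state machine. The event type names below are hypothetical placeholders, not a real product's schema:

```python
from collections import defaultdict

# Hypothetical ordered sequence that makes up the suspicious behaviour.
SEQUENCE = ["failed_login_burst", "successful_login",
            "user_created", "added_to_privileged_group"]

class StatefulCorrelator:
    """Tracks how far each user has progressed through SEQUENCE."""

    def __init__(self):
        self.progress = defaultdict(int)  # user -> index into SEQUENCE

    def feed(self, user, event_type):
        """Advance the user's state; return True when the full behaviour fires."""
        if event_type == SEQUENCE[self.progress[user]]:
            self.progress[user] += 1
            if self.progress[user] == len(SEQUENCE):
                self.progress[user] = 0  # reset after the alert fires
                return True
        return False
```

A production engine would also expire stale state and handle out-of-order events, but the essential idea is the same: the rule fires on a *sequence* of different event types, not on a count of one type.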
Many SIEM projects have failed because the SOC, NOC or platform owners have been flooded with alerts once the SIEM was switched on. The more intelligent your behavioural engine is, the less likely you will be overloaded with false positives, and the more likely your project will succeed.

Tip #1 - Ask your potential vendor if their Correlation Engine can identify behaviours based on multiple event types occurring in a specified order. 

It is great to have a powerful behavioural engine; however, if it takes a genius, or vendor professional services, to create a new correlation/behavioural rule, then the system becomes infinitely less useful. The success of your project rests on how easy and intuitive it is to identify new behaviours, so it is critical that your team can create new rules on the fly.

Tip #2 - Ask your potential vendor how easy it is for your team to create new correlation rules to identify unexpected behaviours.
Once you have identified the suspect behaviour you need to inform the appropriate team. This is likely to be a Security Operations Centre (SOC) or Network Operations Centre (NOC) in a large enterprise or the Platform Owners in smaller environments.
The challenge is notification. Most SEM solutions will allow email alerts to be generated; however, these can overload an inbox, which is particularly true in large environments. A better option is to populate a dashboard with colour-coded alerts indicating that the suspect behaviours have been identified.
The dashboard should allow you to see trends of events. For example, every environment on the planet will have failed logins within the network. Getting an email alert every time someone fails to log in would quickly overload your inbox. If, however, you placed this alert into a trended dashboard, you could keep an eye on the failed logins as they occur and look for peaks in the activity, which would typically be an indication of something abnormal happening. Stay away from SEM solutions that require you to run reports to populate a dashboard; no team in the world has time to sit there all day continually running reports looking for suspect behaviours. The dashboard should be automatically generated as the events are processed.
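The peak-spotting idea can be illustrated with a simple baseline comparison over the trended counts. The factor of 3 below is an arbitrary assumption, standing in for whatever threshold your team would actually tune:

```python
import statistics

def find_spikes(counts, factor=3.0):
    """Return indices of intervals whose count exceeds `factor` times the
    median of the series -- a crude stand-in for eyeballing peaks on a
    trended failed-login dashboard."""
    baseline = statistics.median(counts)
    return [i for i, c in enumerate(counts) if c > factor * max(baseline, 1)]
```

Fed hourly failed-login counts such as `[4, 5, 6, 5, 40, 5]`, this flags the interval with 40 failures while ignoring the routine background noise, which is precisely what a trended view buys you over per-event email alerts.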

Tip #3 - Any solution that is heavily report-based, as opposed to dashboard-based, becomes an operational nightmare.
You should still be able to generate reports on the alerts, as these are the events management is most likely to be concerned with. The logic is simple: if it was important enough to alert on, it is important enough to include in a report to management.
It should be possible to generate reports in HTML, PDF, RTF, CSV and XLS formats, and to email them, upload them to a server and schedule them to run. Reports that can be output in HTML format are particularly useful, as these can be sent to the internal intranet server and are easily accessible to management.

Tip #4 - Reports that can be generated in HTML format and uploaded to the Intranet are easily accessible to management.
Alerts are generated against structured data. That is to say, you might alert when the user Bob logs in to your server from an external IP address and creates a new user called Sam. To generate this specific alert you need to understand which part of the log is the username, which part is the source IP address, and which part is the acted-upon user, in our case Sam.
The only way to understand unstructured data, such as log data, is to write regular expressions that map the data into the appropriate fields. All that is needed then is to populate a structured, relational database, such as MySQL, MSSQL or Oracle, with the alerts.
Most vendors refer to these regular expressions as "Normalisation Rules", and they are necessary so that you can create intelligent correlation rules and alerts. Most vendors have off-the-shelf normalisation rules for standard products, such as Windows, Unix, firewalls, routers, etc.
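As a sketch of what a normalisation rule looks like, the regular expression below maps a hypothetical failed-login line into named fields. The log format and field names here are invented for illustration; real formats vary widely by product:

```python
import re

# Hypothetical raw-log format; real normalisation rules are product-specific.
NORMALISE = re.compile(
    r"(?P<timestamp>\S+ \S+) (?P<host>\S+) "
    r"Failed login for user (?P<username>\w+) from (?P<src_ip>[\d.]+)"
)

def normalise(raw_line):
    """Map an unstructured log line into named fields, or None on no match."""
    m = NORMALISE.match(raw_line)
    return m.groupdict() if m else None
```

Once every incoming line is reduced to fields like `username` and `src_ip`, the correlation engine can reason about "the same user" across different log sources, which free-text logs do not allow.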

Tip #5 - Ensure the vendor has normalisation rules for the log types you intend to collect, or is willing to create new normalisation rules at no cost to your organisation if it is a standard log source.


Scalability

Large enterprises should be concerned about scalability. At the SEM layer scalability comes down to two concerns. The alerts database should be scalable and the correlation engine should also scale.
A significant majority of the events that occur throughout the day within your enterprise will not be of interest. You should target one alert for every 100,000 - 1,000,000 events that occur in your environment. In practice this means that while ALL events will be stored in your forensic data store, the Log Manager flat files, only a minimal number of events will be converted to alerts by your behavioural rules. However, these can still amount to a significant number over time.
It is therefore important in large enterprises that the alerts database can be spread across multiple servers for scalability. This is not a concern at the Log Management layer, as the data should be stored in ASCII flat files, but it is a primary concern at the SEM layer. The best solutions will allow you to have an unlimited number of alert databases that you can route specific alerts to, each with its own data retention times and permissions settings.
You could therefore have an alerts database for "Failed Logins" and a separate alerts database for "Successful Logins". This would allow you to trend "Failed Logins" over a period of a month, to look for suspect spikes, while, because of the volume, you might keep "Successful Logins" in its alert database for only three days. By splitting these types of alerts into different alert databases you will be able to scale in large enterprises.
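Such routing can be sketched as a simple lookup table mapping an alert type to a destination database and retention period. The database names, alert types and retention values below are hypothetical, chosen only to mirror the example above:

```python
from datetime import timedelta

# Hypothetical routing table: alert type -> (destination database, retention).
ROUTES = {
    "failed_login":     ("alerts_failed_logins",     timedelta(days=30)),
    "successful_login": ("alerts_successful_logins", timedelta(days=3)),
}

def route(alert_type):
    """Return (database, retention) for an alert, with a catch-all default."""
    return ROUTES.get(alert_type, ("alerts_default", timedelta(days=7)))
```

Keeping the route table explicit makes the retention trade-off visible: the high-volume, low-value alert types get short retention, while the ones you trend for a month keep their history.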
The second scalability concern is the correlation engine. If the correlation engine has to analyse billions of logs, to find the "gold", it needs to be able to scale to this level. In large enterprises this is unlikely to be a single machine. It is therefore important that your solution can have multiple correlation engines that can pass each other alerts for evaluation. This would allow full scalability. 

 
Source: securityinformationeventmanagement.com