This webinar didn’t present any groundbreaking new techniques to deploy in your environment, but it did touch on a lot of solid “basics”. Basics are often poorly implemented or outright neglected. We call these fundamental techniques and procedures “basics”, but in practice they are often a lot more complicated than they should be. The topology of your network, the tools available, your budget, and pre-existing technical debt all play major roles in whether or not an organization follows these recommendations.
Discussions about the basics come up a lot. This is interesting to me because although there is quite a bit of overlap in the suggestions people make, if you ask ten security practitioners what an organization should be focusing on, you will get twelve different answers. Even more interesting is that these discussions always include some evergreen topics: weak credentials, patching, establishing baselines and benchmarks, keeping up with current events, …
Finally, there are a lot of great ideas that come up on methods to protect your enterprise, but very rarely any easy solutions. The concepts for a lot of these issues are easy to understand, but the implementations are usually not very straightforward. A security engineer may suggest not using bad passwords, but how do you prevent users from setting bad passwords? Someone may suggest setting a password policy via GPO, but what about all of the appliances and applications on your network that don’t federate to AD? What about that cloud app that some slick salesman just sold your executive team that lacks any kind of security telemetry? What about all the legacy systems on your network? The devil is in the details.
This webinar gave a lot of really great ideas. I’d like to offer some solutions to some of these that can be implemented on a tight budget or set up in a modest home lab. You can listen to a recording of the webinar here: https://www.sans.org/webcasts/gearing-2019-practices-109185
The facilitators started by mentioning common factors that cause IT-related business breakages:
- Business trends
  - Moving to the cloud
  - Retaining more data as storage becomes cheaper
  - Hiring and retaining security talent
- Threat trends
  - Crypto miners
  - Side-channel attacks
- Regulatory trends
These factors change year to year and can be hard to predict. At any time, a new class of bug or software vulnerability that affects your infrastructure could surface. New bills and laws get passed all the time that change the way organizations are required to conduct business. Malicious actors think of new and innovative ways to wreak havoc every day.
Since all of these factors are volatile and have the potential to significantly affect the way your business runs, they usually drive what happens in the security world.
Some quotes got brought up that I thought were interesting and relevant to the discussion. First was this one from Bill Gates:
We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten. Don’t let yourself be lulled into inaction.
– Bill Gates
This is very true. I see people having full-on meltdowns when new bugs are released. Every named exploit in the last few years has generated a lot of hype, but the world never ended. Sometimes bad stuff does happen, like WannaCry. People focus on these short-term issues while neglecting to learn things like Azure, AWS, containers, configuration management, and software development.
I believe that the next decade will bring changes that require systems and network administrators to at least be proficient in configuration management and orchestration software. If you plan on sticking around in this field, you are digging your own grave by not learning about some of these technologies right now.
Next was this quote from Dale Carnegie:
First ask yourself: What is the worst that can happen? Then prepare to accept it. Then proceed to improve on the worst.
– Dale Carnegie
This is great advice. The worst that can happen in most cases is that you get breached by a malicious entity. This is bound to happen. Users are a weak link. You may not be able to patch a widespread bug fast enough. A contractor may plug a compromised laptop into your network, à la Target.
If you accept that you are going to get breached, you can focus on ways to make it harder for an attacker once they have a foothold into your network.
I wish I remembered who said this, but I saw a conversation on Twitter a few months ago about how blue teams should be like Bowser from Mario Brothers. Bowser knows that Mario is coming to get him, so he makes it hard on him by deploying spikes, Koopa Troopas, poisonous mushrooms, lava pits, and fire-breathing plants.
You know that people are going to try to hack your stuff, so be like Bowser; make sure you don’t have any gaping holes in your internet-facing stuff. Make sure workstations and servers are patched. Be vigilant. Be ugly. Don’t fight fair. Make it hard on them.
Another interesting statistic that was brought up was that organizations that spend more on security don’t necessarily do a better job of securing their systems than ones that have talented people on staff. This is something I’ve believed for a long time. I often feel that money spent on developing your teams is better spent than money spent on a product to solve some arbitrary issue you’re having.
Desired Skills and Software Proficiency For Security Personnel
Some statistics from a poll conducted by SANS for the desired skillsets of new security hires came up:
- Desired Skills
  - Ability to write IDS/IPS rules
  - Analyzing data with Excel
  - Ability to utilize cloud APIs
- Software Proficiency
  - Burp Suite
I can’t argue with these lists. I currently use most of these tools or suitable alternatives at work every day. The skills mentioned are also valid and definitely being used by security staff in 2018.
Something interesting that was brought up that I had not heard before was Metasploit being the minimum bar of protection when securing your network. The theory behind this statement is if you have vulnerabilities that do not have Metasploit modules, they are not as likely to be exploited as the ones that have pre-existing modules. I am not sure how valid this school of thought is, but it makes sense on the surface.
They predicted that the cloud is going to have some growing pains as organizations shift their operations to Azure, AWS, and so on. I agree with this because, in many ways, we still haven’t gotten traditional IT security under control, but now we are shifting to the cloud because of the benefits it brings. I am not confident that organizations will utilize the cloud in safer ways than they’ve done traditional IT.
As stated before, this does seem like the direction the world is going. All of these cloud services and platforms have significant learning curves. If you are in IT right now and not making efforts to learn some of these technologies, you will probably be hurting in the future. Do future you a solid and devote some time this year to learning this stuff.
CASBs (Cloud Access Security Brokers) were discussed. The fact that these even exist shows that the cloud lacks security controls and telemetry that defenders need to succeed.
The Browser is the New Endpoint
More and more of a typical user’s workload is being done via a web browser these days. Many people can currently do all of their work with nothing more than a Chromebook; webmail, online document editors, and collaboration tools can all be interacted with via a browser right now.
Browsers are very personal and intimate. You can tell a lot about a person by what plugins they have, their cookies, their histories, their bookmarks, and their settings. Users will be carrying this synced data around with them everywhere they log in because it is just too convenient.
More thought should be put into securing browsers and identifying the risks that come with this shift in workflow.
Password Spraying Still Works
Trying likely passwords against enumerated usernames remains a solid attack technique. Spraying slowly enough to avoid locking accounts out makes detection harder.
Many people believe that 2FA solves this problem. It does to a degree, but there are bypasses for many types of 2FA, and not all 2FA methods are equal.
One of the “basics” is attempting to detect these types of attacks. How does one do this?
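One way is to look for breadth rather than volume: a sprayer generates failed logins for many different usernames, often from a single source. Here is a minimal sketch of that idea; the log format, thresholds, and sample records are my own assumptions, and in a real environment you would feed it parsed failures from your VPN, OWA, or domain controller logs.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical parsed auth-failure records: (timestamp, source_ip, username).
failures = [
    (datetime(2019, 1, 7, 9, 0), "203.0.113.5", "alice"),
    (datetime(2019, 1, 7, 9, 6), "203.0.113.5", "bob"),
    (datetime(2019, 1, 7, 9, 12), "203.0.113.5", "carol"),
    (datetime(2019, 1, 7, 9, 18), "203.0.113.5", "dave"),
    (datetime(2019, 1, 7, 9, 1), "198.51.100.9", "alice"),  # one-off typo, not a spray
]

def detect_spray(failures, window=timedelta(hours=1), min_users=3):
    """Flag source IPs that fail logins for many DIFFERENT users in a window.

    A sprayer tries one password against many accounts (slowly, to avoid
    lockouts), so the tell is breadth of usernames, not volume per user.
    """
    by_ip = defaultdict(list)
    for ts, ip, user in failures:
        by_ip[ip].append((ts, user))

    suspects = {}
    for ip, events in by_ip.items():
        events.sort()
        for i, (start, _) in enumerate(events):
            users = {u for ts, u in events[i:] if ts - start <= window}
            if len(users) >= min_users:
                suspects[ip] = sorted(users)
                break
    return suspects

print(detect_spray(failures))
# 203.0.113.5 gets flagged; 198.51.100.9 does not.
```

The thresholds (one hour, three users) are illustrative; tune them against your own baseline, since shared NAT egress points will inflate the per-IP username count.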
Log Review – Advice From the ’90s That Still Doesn’t Happen
Many recent breaches could be attributed to poor log review. Often, the first indicators of compromise are in the logs well in advance of the actual bad parts of the breach occurring. If someone was paying attention and knew what to look for, many of these attacks could have been prevented.
The problem is that sys/network admins believe that this is security’s problem and just don’t look at the logs from a security standpoint. They may look at logs when troubleshooting an issue, but typically don’t hunt for security issues. Even if they wanted to search for security issues, what do you look for?
Log review is another “basic” that isn’t really that basic:
- How does an attacker attempt to circumvent security?
  - Is this logged?
    - Where are the logs?
    - Do you have access to them?
    - Are they being collected?
    - Can you easily search for these types of events?
    - Do you have to correlate/cross-reference additional logs in order to get an accurate picture of what is going on?
    - How long are these logs retained?
  - If it is not logged, what can you do in order to achieve this type of visibility?
- Are you able to detect unknown threats?
  - Is this logged?
- Is there too much noise in your logs?
  - How can you deal with this?
- How often should you be checking logs?
- Do you have a way of triggering alerts when certain events happen?
- Is someone verifying that logging and alerting are working as intended?
A good benchmark for logging is that you should be able to explain to your parents or grandparents what the problem is and how to get an answer from the data in the logs. If that is too much to ask, your logs are probably too complicated.
An example of a great log to have available is Windows Security Log Event ID 4688. This shows processes being executed on a system and can be used in a number of ways to detect illicit activity.
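As a small sketch of what hunting in 4688 data can look like, the snippet below flags a classic illicit pattern: an Office application spawning a shell. The records are hand-made stand-ins with an assumed field layout; in practice you would export events via Event Viewer, wevtutil, or your SIEM, and note that the parent process field is only populated on newer Windows versions.

```python
# Process names that rarely have a legitimate reason to spawn a shell,
# and shells commonly launched by malicious macros. Illustrative lists only.
SUSPICIOUS_PARENTS = {"winword.exe", "excel.exe", "outlook.exe"}
SUSPICIOUS_CHILDREN = {"cmd.exe", "powershell.exe", "wscript.exe"}

# Hand-made stand-ins for parsed Event ID 4688 (process creation) records.
events = [
    {"EventID": 4688, "NewProcessName": r"C:\Windows\System32\cmd.exe",
     "ParentProcessName": r"C:\Program Files\Microsoft Office\winword.exe"},
    {"EventID": 4688, "NewProcessName": r"C:\Windows\System32\notepad.exe",
     "ParentProcessName": r"C:\Windows\explorer.exe"},
]

def basename(path):
    """Strip the directory from a Windows path and normalize case."""
    return path.rsplit("\\", 1)[-1].lower()

def flag_suspicious(events):
    hits = []
    for e in events:
        if e.get("EventID") != 4688:
            continue
        child = basename(e["NewProcessName"])
        parent = basename(e["ParentProcessName"])
        if parent in SUSPICIOUS_PARENTS and child in SUSPICIOUS_CHILDREN:
            hits.append((parent, child))
    return hits

print(flag_suspicious(events))  # [('winword.exe', 'cmd.exe')]
```

Enabling command-line capture alongside 4688 makes this far more useful, since the arguments often tell you more than the process name.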
Some of the “basics” brought up were:
- Egress filtering
  - ICMP and DNS should be filtered outbound.
    - Tunneling attacks
    - UDP and ICMP are often ignored by analysts and security tools.
  - Only be able to connect to authorized hosts.
    - Who uses this still? Why?
    - Only to authorized sites if necessary.
  - Only your email servers should be able to send outbound email.
  - Many Windows worms/malware spread with these services.
    - Administrators use these legitimately; whitelist their hosts if necessary.
  - We should start moving to a “deny all” approach for egress traffic.
- Endpoints are not servers, so they shouldn’t have things connecting to them. Once they have a service running on them, they are effectively a server.
  - Try to move these services elsewhere. Endpoints can be more sensitive than servers.
  - TURN IT OFF!@#$%
  - Responder is too easy to use.
- Port security
  - Turn this on everywhere that it makes sense.
  - It can be a burden for helpdesk, but a huge win for security.
  - Monitor for new MACs on your networks.
    - Alert/perform a vuln scan against new hosts.
- DNS query logging
  - DGA domains
  - Newly registered domains
  - Known malware sites
- DHCP logging
I agree with all of these points, but the implementation of these can be tricky.
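To give one concrete example from the DNS query logging point, spotting DGA domains can start with a cheap heuristic: algorithmically generated labels tend to have high character entropy and few vowels. This is a rough sketch under those assumptions, not a production detector, and the threshold values are mine.

```python
import math
from collections import Counter

def entropy(s):
    """Shannon entropy of a string in bits per character."""
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_dga(domain, threshold=3.5):
    """Rough heuristic for DGA-like domains.

    Tune the thresholds on your own data; this will happily
    false-positive on CDN hostnames and other machine-named domains.
    """
    label = domain.split(".")[0].lower()
    if len(label) < 7:  # short labels don't carry enough signal
        return False
    vowel_ratio = sum(ch in "aeiou" for ch in label) / len(label)
    return entropy(label) > threshold or vowel_ratio < 0.15

for q in ["google.com", "xkvbqzjtplmw.net", "mail.example.org"]:
    print(q, looks_dga(q))
```

Running every resolver query through something like this, then cross-referencing hits against newly-registered-domain and known-malware feeds, covers all three sub-bullets above with mostly free tooling.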
This webcast was full of good advice. If your organization is not doing these things, it should start. If I had to start from scratch, I’d focus on inventorying what you have, gathering relevant logs in a centralized place (Splunk, ELK, …), the Metasploit benchmark mentioned above, and setting up some sort of configuration and policy management to push and enforce these settings across the entire infrastructure.