A company's Unix infrastructure is likely the most important piece of its overall IT puzzle. It runs your mail, your Web servers, and probably your most important business applications. The security of this infrastructure cannot be taken lightly. In this article we will explore some best practices that everyone should be aware of, and then next week we'll talk about implementing these ideas with a sample infrastructure.
So what comprises a Unix infrastructure, anyway? It's difficult to define, but in general, most companies have customer-facing, or public, services. These are "public" servers; anything that provides a service to the outside world is special. There are also machines that users can log in to. These users may be actual customers, in the case of an ISP or university, or perhaps a development team in the corporate world. We'll call these login servers, and they're treated specially, too.
Then there's everything else: probably the bulk of your infrastructure, the machines that provide services to other servers and that only administrators can access. This is a bit of a gray area, but for the sake of brevity, just go with it.
First and foremost, one must look at all servers that provide services to the world and ask, "Do they need to?" Often, they can be placed behind a firewall, or a combination firewall and proxy device. If, for example, you run a customer-facing Web site on four Web servers, it may be possible to minimize those servers' exposure. A proxy server (or a pair of redundant proxy servers) placed in front of them can accept all customer connections, then inspect and sanitize requests; that, among other things, is what proxy servers are designed to do. The proxy mitigates the risk to the back-end Web servers, which, as a bonus, no longer need to be Internet-accessible.
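As a sketch of this pattern, assuming nginx as the proxy software (the article doesn't name one, and the hostnames, addresses, and certificate paths here are all hypothetical), a minimal reverse-proxy configuration might look like:

```nginx
# Hypothetical reverse proxy: it terminates all customer connections,
# then forwards requests to back-end Web servers that sit on a
# private network and are no longer Internet-accessible.
upstream backend_web {
    server 10.0.1.11:80;   # four back-end Web servers on
    server 10.0.1.12:80;   # private, non-routable addresses
    server 10.0.1.13:80;
    server 10.0.1.14:80;
}

server {
    listen 443 ssl;
    server_name www.example.com;

    ssl_certificate     /etc/nginx/tls/example.crt;
    ssl_certificate_key /etc/nginx/tls/example.key;

    location / {
        # Requests can be inspected/limited here before they are
        # passed along to the pool above.
        proxy_pass http://backend_web;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```

Only the proxy pair then needs a public address, and forgetting a back-end server behind it carries far less risk.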
The most frequent cause of security problems is unpatched or unknown services: the long-forgotten Web server running an old version of Apache, vulnerable PHP scripts, and an outdated kernel. This recipe for disaster is all too common, but if all your Web servers are hidden behind a proxy server, there's little risk of forgetting one.
It's the same story with all other services. Many sites have such extreme restrictions that a firewall administrator must approve any new network usage, and that works well for them. More often, a company is completely wide open. Its Web applications are insecure, and the servers that the applications interact with are Internet-accessible for no reason at all.
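The "firewall administrator must approve any new network usage" model amounts to a default-deny policy: nothing passes unless a rule explicitly allows it. A minimal sketch, assuming a Linux host using iptables (the addresses and ports are purely illustrative, not from the article):

```shell
# Illustrative default-deny ruleset: traffic is dropped unless an
# administrator has added an explicit rule for it.
iptables -P INPUT   DROP
iptables -P FORWARD DROP
iptables -P OUTPUT  ACCEPT

# Let established sessions continue.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Explicitly approved services only: the public Web proxy...
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
# ...and SSH, but only from the admin network (hypothetical range).
iptables -A INPUT -p tcp -s 192.168.10.0/24 --dport 22 -j ACCEPT
```

Any new service then fails closed until someone consciously opens it, which is exactly the approval step described above.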
Remote users are limited to the interface they are given, such as e-mail services, Web applications, or B2B transactions. Local users, those with shell access, are completely unfettered. If you happen to have a malicious user, he will get root access unless extreme caution is taken. Updates, especially those pesky kernel updates that require a reboot, must be applied the day they are released. The operating system must be hardened. Great care must be taken in the entire design of your infrastructure to ensure users have access only to expected areas.
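One small piece of that hardening, as an example, is tightening who may log in at all. A hypothetical `/etc/ssh/sshd_config` fragment (the group name is invented for illustration) might read:

```
# sshd_config fragment: refuse root logins outright, and allow
# shell access only to members of an explicitly named group.
PermitRootLogin no
AllowGroups devteam
Protocol 2
```

This doesn't stop a determined malicious user, but it narrows the set of accounts you must watch to the ones you deliberately granted.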
If you also have developers who require root access to some machines, you're in a world of hurt. It's unlikely the developers themselves will be malicious (but don't exclude the possibility). Instead, it's normally the strange and newfangled applications they unknowingly install that will bite when least expected. The Slammer worm propagated so quickly because MS-SQL was installed on thousands of computers that nobody knew about; oh, the joy of automatic installers.
And then there's everything else. In theory, the bulk of a business' machines are not Internet-facing. Assuming this is true, since limiting one's exposure is a high priority for all companies, we can sort of ignore those servers. The only vulnerable point on them is the interface they provide, right? Assuming my Web application is patched regularly, there's no need to worry about the operating system itself. Yes, some people really do believe this.
If you're in the type of business that can restrict all login access to sys admins only, then that is mildly true. Keeping the applications patched just might be enough to get by. But when the one security hole is missed, probably because it was so indirect it wasn't even considered, your entire infrastructure is at risk, not just that one server. Once an attacker is finally inside, he will generally find that spreading to other servers is very simple. It doesn't have to be.
Thus, there are two approaches to securing an infrastructure: limit exposure and hope the unthinkable doesn't happen, or secure the inside so that an attacker who does penetrate your defenses can't do anything harmful afterward. Why not both? Few organizations will admit it, but their security policy almost always fits into only one of these categories.
Firewalls are extremely easy to circumvent, especially when the exposed applications are themselves vulnerable. In fact, a seemingly bulletproof firewall often attracts more attackers, not because they want a challenge, but because they know the inside is very likely softer than the shell. That said, we mustn't forget: most businesses still have a soft exterior as well.
This article was originally published on Enterprise Networking Planet.