There is no magic bullet when it comes to finding performance bottlenecks, but knowing where to look for them enhances your aim.
When you hear the words, "performance bottleneck," the typical hot spots that come to mind are CPU, Memory, Disk and Network. Those are good places to start looking for bottlenecks but they aren't the only places performance problems can hide. This list targets six other potential leads for your investigation into the elusive performance breakdown. Sometimes, just knowing where to look might prevent your own personal breakdown.
Note that listed items are in no particular order.
1. CPU
The CPU is the brain of the computer, where calculations and instruction processing occur.
CPUs can handle millions of calculations and instructions, but performance suffers when the number of these operations exceeds capacity. A CPU that stays more than 75 percent busy will slow the entire system; CPUs need some headroom for activity "bursts," where load can reach 100 percent for short periods of time. Sustained CPU load is a common source of performance bottlenecks.
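The 75 percent rule of thumb above is easy to check programmatically. This is a minimal sketch that computes busy percentage from two snapshots of per-CPU jiffy counters (the format of the first fields on the `cpu` line of Linux's /proc/stat); the sample numbers are hypothetical.

```python
# Sketch: CPU busy percentage from two snapshots of jiffy counters,
# as found on the "cpu" line of Linux's /proc/stat.
# The snapshot values below are hypothetical examples.

def cpu_busy_percent(before, after):
    """Each argument is a (user, nice, system, idle) tuple of jiffy counts."""
    busy = sum(after[:3]) - sum(before[:3])
    total = sum(after) - sum(before)
    return 100.0 * busy / total

before = (1000, 50, 400, 2000)   # hypothetical earlier snapshot
after = (1900, 50, 700, 2350)    # hypothetical later snapshot

pct = cpu_busy_percent(before, after)
print(f"CPU busy: {pct:.0f}%")
if pct > 75:
    print("Warning: load above the 75 percent rule of thumb")
```

In practice you would take the two snapshots some seconds apart with a tool such as mpstat or by reading /proc/stat directly.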
2. Memory
The rule of thumb on memory is "add more." When performance problems point to memory, the general consensus is to add more. This practice is effective only in the short term, however. Performance bottlenecks that point to memory are often the result of poorly designed software (memory leaks) or other system flaws that manifest as memory issues. The key to solving memory performance problems is to find the root cause of the symptom before adding more RAM.
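One way to tell a genuine leak from normal memory pressure is to watch whether a process's footprint trends steadily upward. The sketch below fits a least-squares slope to a series of resident-set samples; the sample data and the `rss_growth_per_sample` helper are hypothetical illustrations, not a monitoring tool.

```python
# Sketch: distinguish a steady memory footprint from a likely leak by
# checking whether resident-set (RSS) samples trend upward over time.
# The sample data below is hypothetical; in practice you would collect
# RSS readings at fixed intervals with a monitoring tool.

def rss_growth_per_sample(samples):
    """Least-squares slope of memory samples (MB per interval)."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

steady = [512, 515, 510, 514, 511, 513]    # flat: probably fine
leaking = [512, 540, 575, 602, 633, 668]   # climbing: investigate the app

print(f"steady slope:  {rss_growth_per_sample(steady):+.1f} MB/interval")
print(f"leaking slope: {rss_growth_per_sample(leaking):+.1f} MB/interval")
```

A consistently positive slope over hours or days suggests the application, not the amount of installed RAM, is the root cause.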
3. Disk I/O
Disk speed, RAID type, storage type and controller technology all combine to produce what's known as disk I/O. Disk I/O is a common source of performance angst for system administrators and users alike. There are practical and physical limits to performance even when using the best contemporary disk technology. Use best practices when combining and separating workloads on disks. As attractive as leveraged storage is, local disks are still faster than the fastest SAN.
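When comparing local disk against SAN-backed storage, a quick sequential-write check can give a coarse first impression. This is a minimal sketch, not a proper benchmark; caching, RAID geometry and queue depth all distort single-file numbers, and the `write_throughput_mb_s` helper is a hypothetical name.

```python
# Sketch: rough sequential-write throughput via a temporary file.
# fsync forces data to disk so the page cache doesn't inflate results.
# Treat the output as a coarse comparison, not a benchmark.
import os
import tempfile
import time

def write_throughput_mb_s(size_mb=16, chunk_mb=1):
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    with tempfile.NamedTemporaryFile(delete=False) as f:
        path = f.name
        start = time.perf_counter()
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
            f.flush()
            os.fsync(f.fileno())   # push writes past the page cache
        elapsed = time.perf_counter() - start
    os.remove(path)
    return size_mb / elapsed

print(f"~{write_throughput_mb_s():.0f} MB/s sequential write")
```

Dedicated tools such as iostat or fio give far more reliable figures; this only illustrates the measurement idea.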
4. Network
The network is a commonly blamed source of performance bottlenecks, but it is rarely found to be so. Unless there is a network component hardware failure, such as a damaged switch port, bad cable, jabbering network card or router configuration problem, you should look elsewhere for your "network" performance bottleneck. A perceived slowness on the network usually points to one of the list's other nine entries.
5. Applications
Although no application developer wants to hear it, poorly coded applications masquerade as hardware problems. The fickle finger of guilt points to an application when an otherwise quiescent system suffers greatly while the application runs and shows no signs of difficulty when it is off. It's an ongoing battle between system administrators and developers when performance issues occur: each wants to allege the other's guilt. A word to the wise after many hundreds of hours spent chasing hardware performance bottlenecks: it's the application.
6. Malware
Viruses, trojan horses and spyware account for a large percentage of perceived performance bottlenecks. Users notoriously complain about the network, the application or their computer when nasties raise their ugly heads. Those performance killers can reside on one or more server systems, the user's workstation, or a combination of the two. Malware infections are so common that you must employ multiple defenses against them. Antivirus, antispyware, local firewalls, network firewalls and a regular patching regimen will help protect systems and prevent the resultant bottlenecks.
7. Workload
Smart workload management can help prevent performance problems associated with poorly balanced workloads or ill-conceived load balancing schemes. Adding another system to a suffering cluster relieves the pressure, a step that is easier to take in a virtual environment than in a physical one. The best advice here is to measure the capacity and performance of all systems and heed the numbers reported to you. Move workloads, add systems and keep a watchful eye on performance.
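The simplest workload-balancing policy is to place each new workload on the host with the most headroom. This is a minimal sketch assuming a single hypothetical utilization figure per host; real balancers weigh CPU, memory and I/O together.

```python
# Sketch: least-loaded placement. "Load" here is one hypothetical
# utilization percentage per host; real schedulers use richer metrics.

def place(workload_cost, hosts):
    """Assign a workload to the host with the most headroom."""
    target = min(hosts, key=hosts.get)   # host with the lowest load
    hosts[target] += workload_cost       # account for the new workload
    return target

hosts = {"node1": 60, "node2": 35, "node3": 80}  # current utilization (%)
print(place(10, hosts))   # node2, the least-loaded host
print(hosts)              # node2 now carries the extra 10%
```

Even this naive policy beats assigning workloads to whichever host happens to be named first, which is what ill-conceived schemes often amount to.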
8. Failing or Outdated Hardware
The older the hardware, the more likely it is to fail. Some hardware components fail with a single final breath, while others linger on with random complaints and untraceable glitches. Hardware that causes system reboots, data loss or performance bottlenecks frustrates system administrators because of its unpredictable nature. The best way to prevent such tragedies is to keep hardware fresh, use redundant hardware and monitor your systems carefully.
9. Filesystem
Did you know that your filesystem choice can have a profound impact on performance? It can. Some filesystems, JFS for example, use very little CPU. XFS offers high scalability and high performance. NTFS is a recoverable filesystem with high performance. The newer ext4 filesystem supports very large files efficiently. Each filesystem has a purpose, and using the wrong one for an application can have disastrous results. Weigh your filesystem choices carefully and select the best one for the job; there is no one-size-fits-all filesystem.
10. Technology
The technology you select for your infrastructure plays an important role in performance. For example, if you commit your services to a virtual infrastructure, you might encounter performance problems not experienced on equivalent physical systems. Conversely, some workloads thrive on virtual technology. LAMP (Linux, Apache, MySQL, PHP) workloads, for example, perform at or above native speeds on KVM. Container-type virtualization (OpenVZ, Parallels, Solaris Zones), meanwhile, boasts native performance for any workload.
Ken Hess is a freelance writer who writes on a variety of open source topics including Linux, databases, and virtualization. He is also the coauthor of Practical Virtualization Solutions, which was published in October 2009. You may reach him through his web site at http://www.kenhess.com.