Level: Intermediate
Ken Milberg, Future Tech UNIX Consultant, Technology Writer, and Site Expert, Future Tech
20 Mar 2007
Managing processes is quite straightforward with tools like kill and nice, but what happens when you want even finer control over your processes? With AIX®, you can assign processes and threads to specific processors in a multiprocessor system, but how do you choose the right applications and organize a larger system so that those applications are optimized appropriately? In this article, discover the tools available for organizing your processes, and take a look at the theory behind organizing and choosing processes and how to prioritize effectively.
Introduction
As an AIX® administrator, you should already know the basics of working with processes, including researching, prioritizing, and killing them. You should also know how to tune and optimize your processes using the various tools at your disposal, including some of the newer tools introduced in AIX 5.3. To provide effective process control on your system, it is imperative that you understand what processes and threads are and the difference between them. This article also covers the ps, nice, and schedtune commands, as well as the Process Monitor Console (procmon), AIX Workload Manager (WLM), and other tools available to you. Let's start with definitions of processes and threads:
- Processes -- A process is an activity within the system that is started with a command, shell script, or another process.
- Threads -- A thread is an independent flow of control that operates within the same address space as other independent flows of control within a process. A kernel thread is a single sequential flow of control.
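Before going further, it helps to see the basic process-control commands in action. The sketch below sticks to portable ps and nice options (the -o field names pid, ppid, nice, and comm are standard, so they behave the same on AIX and on other UNIX-like systems); the exact column headers your ps prints may vary by platform.

```shell
# List processes with their PID, parent PID, nice value, and command name
ps -eo pid,ppid,nice,comm | head -5

# Launch a low-priority background job: nice value 10 instead of the default 0
nice -n 10 sleep 60 &
ps -o pid,nice,comm -p $!   # confirm the lowered priority of the new process

kill "$!"                   # clean up the example job
```

Raising a process's nice value lowers its scheduling priority, so long-running, non-interactive jobs started this way yield the CPU to more urgent work.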
Another way of looking at this is that the process is the entity the operating system uses to control the use of system resources, while threads control actual processor-time consumption. Most system management tools still require you to refer to the process rather than the thread. The process owns its kernel threads, and each process can have one or more of them (multi-threaded applications, for example). With threads, you can have multiple flows of control running on different CPUs at the same time, which takes real advantage of computers with more than one processor (symmetric multiprocessing, or SMP, boxes).

Applications can also be designed with user-level threads that are scheduled by the application itself or by the pthreads scheduler in libpthreads. Multiple threads of control allow an application to service requests from multiple users at the same time. In the libpthreads implementation, user threads sit on top of virtual processors, which in turn sit on top of kernel threads.

In this article, you delve into more detail on the kernel aspects of a process and the tools that help you manage your overall system more effectively, working through time-tested UNIX® commands as well as many of the newer tools available to you as an AIX administrator.
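You can see the process-versus-thread distinction from the command line. The sketch below inspects the current shell's process two ways; note that the nlwp field (number of kernel threads) is a Linux procps convenience, while on AIX the per-thread view is ps -mo THREAD, so the example falls back to that form where nlwp is unavailable.

```shell
# One process, possibly many kernel threads: inspect the current shell ($$).
ps -o pid,comm -p $$                     # the process view: a single line

# The thread view: thread count on Linux, or the AIX per-thread listing
ps -o pid,nlwp,comm -p $$ 2>/dev/null \
  || ps -mo THREAD -p $$
```

For a single-threaded shell the two views look alike; run the same commands against a multi-threaded server process and the thread view expands accordingly.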
Threads and SMT
Allowing threads to run on different CPUs also makes effective use of simultaneous multi-threading (SMT). With the system in SMT mode, the processor fetches instructions from more than one thread. SMT, exclusive to the POWER5 architecture in IBM's processor line, exploits the fact that no single process uses all of a processor's execution units at the same time. The POWER5 design implements two-way SMT on each of the chip's cores, so each physical processor core is represented by two virtual processors.

SMT is primarily beneficial in commercial environments, where the speed of an individual transaction matters less than the total number of transactions performed. It is expected to increase the throughput of workloads with large or frequently changing working sets, such as database servers and Web servers. Floating-point-intensive workloads, by contrast, are likely to gain little from SMT and are the ones most likely to lose performance, because they heavily use either the floating-point units or the memory bandwidth. Workloads with low cycles per instruction (CPI) and low cache-miss rates might see some small benefit. Generally, you can expect approximately a 30 percent increase in system performance due to SMT.

You must determine whether the critical processes running on your system benefit from SMT. They typically do; however, if you determine otherwise, you need to shut SMT down, because it comes enabled by default.
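On AIX 5.3, the smtctl command reports and changes the SMT mode. The sketch below is guarded so it is safe to run anywhere (smtctl exists only on AIX), and the mode-changing invocations are shown commented out because they alter system-wide behavior.

```shell
# Inspect (and optionally change) the SMT mode; smtctl is AIX-only,
# so fall back to a message on other platforms.
if command -v smtctl >/dev/null 2>&1; then
    smtctl                      # report the current SMT mode per processor
    # smtctl -m off -w boot     # disable SMT now and across reboots
    # smtctl -m on  -w boot     # re-enable SMT
else
    echo "smtctl not available on this platform"
fi
```

The -w boot option makes the change persistent across reboots; use -w now for a change that lasts only until the next boot.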