In this chapter, we will cover Windows architecture, threads and multitasking, threads, messages, and message queues, and how the virtual memory system works.
Chapter 1: Windows Fundamentals and Architecture
When you develop Windows-based applications, you can choose from a wide variety of programming environments depending on the requirements of your application. Many developers are choosing the C++ language for developing Windows-based applications because it is object-oriented in nature and provides a simplified approach to dealing with the complexity of Windows and the wide range of application programming interface (API) functions. Using C++, combined with a class library, further simplifies the development process by grouping the API functions into logical units and encapsulating the basic behavior of windows and other objects in reusable classes.
This course focuses on the Microsoft Foundation Class (MFC) Library, the class library created by Microsoft to be used in combination with Visual C++, the Microsoft version of C++. You can use these tools together to develop Windows-based C++ applications.
MFC extends the object-oriented programming model used in Windows-based applications. Since MFC is based on the Windows programming model, you need a basic understanding of Windows architecture before learning how to use the classes in your applications. If you’re coming to MFC from a traditional Windows programming background, such as C and the Windows SDK, you’re already familiar with these concepts. If you’re new to Windows programming, then this chapter is for you.
This chapter provides an overview of the Windows programming architecture and briefly takes you behind the scenes to see how Windows-based applications work.
Objectives
At the end of this chapter, you will be able to:
- Define processes, threads, and multitasking.
- Describe the structure of memory management.
- Explain the purpose of messages and the concept of event-driven programming.
- Describe the minimum components of a simple Windows-based application.
- Explain how an application is initialized and windows are created.
Understanding Windows Architecture
Before you begin writing MFC applications, you should understand several key architectural features of Windows-based applications and the Windows operating system.
This section explains the run-time structure of Windows-based applications. Here you will learn about the differences between applications, processes, and threads of execution. You will also learn how the Windows operating system manages processes and threads to maximize performance.
This section includes the following topics:
- Processes
- The Virtual Memory System
- Threads and Multitasking
- Threads, Messages, and Message Queues
- Event-Driven Programming
Processes
The term “process” and the more common term “application” are sometimes used interchangeably. However, in the Windows environment, there is a distinction between a process and an application.
An application is a static sequence of instructions that make up an executable file. A process is usually defined as an instance of a running application. A process has its own private address space, contains at least one thread, and owns certain resources, such as files, allocated memory, and pipes.
A process consists of:
- An executable program
- A private address space in memory
- System resources, such as files, pipes, communications ports, and semaphores
- At least one thread, where a thread is a path of execution
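To make the idea concrete, here is a minimal sketch (not taken from the course text) that uses the Win32 CreateProcess function to launch a second program, Notepad in this example. The new process receives its own private address space, its own system resources, and a primary thread.

```cpp
// Minimal sketch (illustrative only): launching Notepad as a new process.
// The child gets its own private address space, resources, and a primary thread.
#include <windows.h>
#include <iostream>

int main()
{
    STARTUPINFOW si = { sizeof(si) };   // startup options for the new process
    PROCESS_INFORMATION pi = {};        // receives the new process and thread handles

    wchar_t cmdLine[] = L"notepad.exe"; // CreateProcess may modify this buffer

    if (!CreateProcessW(nullptr, cmdLine, nullptr, nullptr,
                        FALSE,          // the child does not inherit our handles
                        0, nullptr, nullptr, &si, &pi))
    {
        std::cout << "CreateProcess failed: " << GetLastError() << "\n";
        return 1;
    }

    WaitForSingleObject(pi.hProcess, INFINITE);  // wait for the child to exit

    CloseHandle(pi.hThread);            // release our references; the child's
    CloseHandle(pi.hProcess);           // resources belong to the child itself
    return 0;
}
```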
The Virtual Memory System
In an operating system where multiple processes are allowed, each process must be protected against corruption by other processes in memory. The Windows operating system is designed to provide this protection.
There are two types of memory in the Windows operating system:
Physical memory
Consists of the amount of physical RAM.
Virtual memory
Consists of 4 gigabytes (GB) of addresses, or 2³² bytes of addressable memory, that is available to your application. This is not 4 GB of actual physical memory. Each application is given 2 GB of addresses, while the operating system reserves the other 2 GB for its own use.
Note: In Windows NT, an application may have up to 3 GB of addresses for its own use.
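As a rough illustration of the difference between the two kinds of memory, the following sketch (an assumed example, not part of the course text) uses the Win32 GlobalMemoryStatusEx function to report the installed physical RAM and the virtual address range available to the calling process.

```cpp
// Minimal sketch: comparing installed physical RAM with the virtual address
// range available to this process.
#include <windows.h>
#include <iostream>

int main()
{
    MEMORYSTATUSEX status = { sizeof(status) };  // dwLength must be set first
    if (!GlobalMemoryStatusEx(&status))
        return 1;

    // Physical memory: the RAM actually installed in the machine.
    std::cout << "Physical RAM:          "
              << status.ullTotalPhys / (1024 * 1024) << " MB\n";

    // Virtual memory: the addresses this process can use (about 2 GB for a
    // 32-bit process, regardless of how much RAM is installed).
    std::cout << "Virtual address space: "
              << status.ullTotalVirtual / (1024 * 1024) << " MB\n";
    return 0;
}
```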
How does the Virtual Memory System work?
When an application is started, the following process occurs:
- The operating system creates a new process. Each process is assigned 2 GB of virtual addresses (not memory) for its own use.
- The virtual memory manager maps the application code into a location in the application’s virtual addresses, and loads currently needed code into physical memory. (The virtual address has no relationship to the location of the application code in physical memory.)
- If your application uses any dynamic-link libraries, the DLLs are mapped into the process’s virtual address space and loaded into physical memory when needed.
- Space for items such as data and stacks is allocated from physical memory and mapped into the virtual address space.
- The application begins execution by using the addresses in its virtual address space, and the virtual memory manager maps each memory access to a physical location.
The application never directly accesses physical memory. The virtual memory manager controls all access to physical memory; the application requests access only through virtual addresses.
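The sketch below, offered only as an illustration and not drawn from the course text, uses the Win32 VirtualAlloc function to reserve a range of virtual addresses and then commit one page of it. Physical memory is supplied by the virtual memory manager only when the committed page is actually touched.

```cpp
// Minimal sketch: reserving virtual addresses and committing one page.
#include <windows.h>
#include <iostream>

int main()
{
    const SIZE_T size = 1 << 20;   // 1 MB worth of virtual addresses

    // Reserve a range of virtual addresses; no physical memory is used yet.
    void* reserved = VirtualAlloc(nullptr, size, MEM_RESERVE, PAGE_NOACCESS);
    if (reserved == nullptr)
        return 1;

    // Commit the first page; the virtual memory manager will back it with
    // physical memory (or the paging file) when it is first touched.
    void* page = VirtualAlloc(reserved, 4096, MEM_COMMIT, PAGE_READWRITE);
    if (page == nullptr)
        return 1;

    // The program works only with virtual addresses; the mapping to a
    // physical location is handled by the virtual memory manager.
    char* p = static_cast<char*>(page);
    p[0] = 42;
    std::cout << "Wrote to virtual address " << static_cast<void*>(p) << "\n";

    VirtualFree(reserved, 0, MEM_RELEASE);   // release the whole reservation
    return 0;
}
```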
Benefits of Using a Virtual Memory System
A virtual memory system helps both to ensure robust application execution and to simplify memory management.
As mentioned earlier, one concern about running an application in a multitasking environment is protecting that application’s execution from intrusion by other applications. Because applications must access memory through virtual addresses, the operating system can strictly partition physical memory between them. When an application requests private memory, the operating system maps the application’s virtual addresses to physical memory that other processes cannot access.
Virtual memory also allows applications to view memory as a flat, 2-GB address space without having to contend with the physical memory management architecture used by the operating system.
Threads and Multitasking
While a process can be thought of as a task that the operating system must perform, such as running a spreadsheet application, a thread represents one of the possibly many tasks needed to accomplish the job. For example, controlling the user interface, printing, and calculating the spreadsheet may be tasks of the spreadsheet application that are assigned to individual threads. A thread runs in the address space of its process and uses the resources allocated to its process.
A process can have a single thread, or it can be “multithreaded”. A multithreaded process is useful when a task requires considerable time to process. The task can run within one thread, while another task runs within a separate thread. The threads can be scheduled for execution independently on the processor, which allows both operations to appear to occur at the same time. The benefit to the user is that work can continue while the first thread completes its task. Another benefit is that on a multiprocessor system running Windows NT, two or more threads can run concurrently, one on each processor.
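A minimal sketch of this idea in plain Win32 C++ follows; it is illustrative only, and CalculateProc is a hypothetical worker routine standing in for a lengthy task such as recalculating a spreadsheet. MFC wraps the same pattern in its AfxBeginThread function.

```cpp
// Minimal sketch: a second thread performs a long calculation while the
// primary thread remains free to do other work.
#include <windows.h>
#include <iostream>

// Hypothetical worker routine standing in for a lengthy task.
DWORD WINAPI CalculateProc(LPVOID /*param*/)
{
    Sleep(2000);                                  // simulate real work
    std::cout << "Worker thread: calculation finished\n";
    return 0;
}

int main()
{
    DWORD threadId = 0;
    HANDLE hThread = CreateThread(nullptr, 0, CalculateProc,
                                  nullptr, 0, &threadId);
    if (hThread == nullptr)
        return 1;

    // The primary thread keeps running; in a real application it would
    // continue to service the user interface here.
    std::cout << "Primary thread: still responsive\n";

    WaitForSingleObject(hThread, INFINITE);       // wait for the worker to finish
    CloseHandle(hThread);
    return 0;
}
```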
Multitasking is the ability of an operating system to give the appearance of the simultaneous running of multiple threads. The operating system achieves multitasking by allowing each thread to be active for a relatively short amount of time (tens of milliseconds) and then switching to the next scheduled thread. This process, called “context switching”, is done by:
- Running a thread until the thread’s time slot is exhausted or until the thread must wait for a resource to become available.
- Saving the thread’s context.
- Loading another thread’s context.
- Repeating this sequence as long as there are threads waiting to execute.
Threads, Messages, and Message Queues
Each thread of execution has its own virtual input queue for processing messages from hardware, from other processes, or from the operating system. These queues operate asynchronously: when one thread posts a message to another thread’s queue, the posting function returns without waiting for the other thread to process the message. The receiving thread can retrieve and process the message when it is ready.
Of special interest is the handling of keyboard and mouse events. A special system thread, known as the raw input thread (RIT), receives all key and mouse events. Whenever the RIT receives hardware events from the processor, its sole function is to place them on the virtual input queue of the appropriate thread. Thus, under normal circumstances, no application thread need wait for its hardware events.
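The following sketch shows the asynchronous nature of posting; it is illustrative only, and WM_APP_WORK is a hypothetical application-defined message. The primary thread posts messages to a worker thread’s queue with PostThreadMessage and continues immediately, while the worker retrieves them with GetMessage when it is ready.

```cpp
// Minimal sketch: posting messages to another thread's queue asynchronously.
#include <windows.h>
#include <iostream>

const UINT WM_APP_WORK = WM_APP + 1;   // hypothetical application-defined message

DWORD WINAPI ReceiverProc(LPVOID)
{
    MSG msg;
    // GetMessage blocks until a message arrives in this thread's queue.
    while (GetMessage(&msg, nullptr, 0, 0) > 0)
    {
        if (msg.message == WM_APP_WORK)
        {
            std::cout << "Receiver: got work item " << msg.wParam << "\n";
            if (msg.wParam == 0)       // treat 0 as a "stop" signal in this sketch
                PostQuitMessage(0);    // places WM_QUIT in our own queue
        }
    }
    return 0;
}

int main()
{
    DWORD receiverId = 0;
    HANDLE hReceiver = CreateThread(nullptr, 0, ReceiverProc,
                                    nullptr, 0, &receiverId);
    if (hReceiver == nullptr)
        return 1;

    Sleep(100);   // crude pause to let the receiver create its message queue

    // PostThreadMessage returns immediately; it never waits for the
    // receiving thread to process the message.
    PostThreadMessage(receiverId, WM_APP_WORK, 5, 0);
    PostThreadMessage(receiverId, WM_APP_WORK, 0, 0);   // post the "stop" signal

    WaitForSingleObject(hReceiver, INFINITE);
    CloseHandle(hReceiver);
    return 0;
}
```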
Event-Driven Programming
Central to understanding how Windows-based applications work is the concept of event-driven programming.
The best way to understand event-driven programming is to contrast it with the procedural programming of MS-DOS. Under MS-DOS, users enter command-line parameters in order to control how an application runs. Under Windows, users start the application first, and then Windows waits until users express their choices by selecting items within a graphical user interface (GUI). A Windows-based application thus starts and then waits until the user clicks a button or selects a menu item before anything happens. This is known as event-driven programming.
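The skeleton below is a minimal sketch of this structure in straight Win32 C++ (the class and window names are arbitrary): the program registers a window class, creates a window, and then sits in a message loop doing nothing until the user generates an event such as a mouse click.

```cpp
// Minimal sketch of an event-driven Win32 program: nothing happens until the
// user produces an event (a message) for the application to handle.
#include <windows.h>

// The window procedure: called by Windows whenever a message is dispatched
// to this window.
LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_LBUTTONDOWN:              // event: user clicked in the client area
        MessageBoxW(hWnd, L"You clicked the window.", L"Event", MB_OK);
        return 0;
    case WM_DESTROY:                  // event: the window is going away
        PostQuitMessage(0);           // ends the message loop below
        return 0;
    }
    return DefWindowProcW(hWnd, msg, wParam, lParam);  // default handling
}

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE, LPSTR, int nCmdShow)
{
    WNDCLASSW wc = {};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInstance;
    wc.lpszClassName = L"EventDrivenSketch";
    RegisterClassW(&wc);

    HWND hWnd = CreateWindowExW(0, wc.lpszClassName, L"Event-Driven Sketch",
                                WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                                CW_USEDEFAULT, CW_USEDEFAULT,
                                nullptr, nullptr, hInstance, nullptr);
    ShowWindow(hWnd, nCmdShow);

    // The message loop: the application simply waits here until an event
    // arrives in this thread's queue.
    MSG msg;
    while (GetMessageW(&msg, nullptr, 0, 0) > 0)
    {
        TranslateMessage(&msg);
        DispatchMessageW(&msg);       // routes the message to WndProc
    }
    return 0;
}
```

MFC hides most of this boilerplate: the CWinApp class supplies the equivalent of WinMain and the message loop, and CWnd-derived classes supply the window procedure, as later chapters show.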