Abstract
The demand for broadband Internet access is growing rapidly, giving rise to many new access standards. Alongside the continued development of traditional wired broadband networks, wireless network access is expanding quickly, and the IEEE 802.16 wireless broadband access standard came into existence in response. IEEE 802.16 defines standards for the Wireless Metropolitan Area Network (WMAN); however, a central part of 802.16, packet scheduling, remains unspecified. This thesis therefore reviews analytical methods, used by many researchers, for evaluating real-time system models based on a single-server queue. The service discipline in the queue is EDF (Earliest Deadline First). Real-time jobs with exponentially distributed relative deadlines arrive according to a Poisson process, and every job's deadline extends until the end of its service. The thesis also introduces a non-preemptive EDF scheduling algorithm and an admission control scheme for real-time polling services in WiMAX.
Acknowledgements
It has been an honor to have had the chance to study in the field I have been taught and am interested in. It was obvious that I had to try my best to finish my work, but without the help of many people it could not have been completed.
Firstly, I would like to thank Prof. Dr. Nguyen Dinh Thong from the University of Technology, Sydney, for his helpful and enthusiastic instructions throughout the period of time that I worked on this thesis.
I am also very grateful to Mr. Nguyen Quoc Tuan, M.E. of the Department of Telecommunications, College of Technology, Vietnam National University Hanoi, for his sincere help and providing the best setting within the time I studied and researched in the Faculty of Electronics and Telecommunications.
Next, I am very thankful to Dr. Nguyen Thi Minh Tam, Ms. Nguyen Hai Ha, B.A. and Ms. Tran Thanh Thu, B.A. from the College of Foreign Languages, Vietnam National University Hanoi, who have helped me correct the English of this thesis.
I would also like to thank all my teachers and peers that helped me during the time that I have spent here at the university.
Last but not least, I also want to give my sincere thanks to my family who have constantly supported me during my four years studying far away from home.
Hanoi, date month year 2008
Nguyen Minh Khue
Table of Contents

Table of Contents
Acknowledgements
List of figures
List of tables
Glossary
Acronyms
Chapter 1. Introduction
1.1 Quality of Service Fundamentals
1.2 QoS in IEEE 802.16
1.2.1 Admission Control
1.2.2 Scheduling
1.3 Research Motivation and objectives
1.4 Thesis organization
Chapter 2. IEEE 802.16 Standards
2.1 Protocol layer in 802.16
2.1.1 Physical layer
2.1.2 MAC Layer
2.2 Admission Control
2.2.1 Objective
2.2.2 Overview of Admission Control
2.2.3 Admission Control Policy
2.3 Services and service flows
2.3.1 Connections and service flow
2.4 QoS architecture model
Chapter 3. Scheduling and Admission for Real-Time Traffic
3.1 Scheduling and admission for real-time traffic
3.1.1 System models
3.1.2 Loss rate for preemptive EDF
3.1.3 Non-preemptive EDF
3.2 Some current scheduling algorithms for real-time polling services
3.2.1 EDF Broadband Wireless Access Scheduling Algorithm
3.2.2 Admission Control
Chapter 4. Simulation Results
4.1 Theoretical Performance of single queue EDF scheduling algorithms
4.2 Simulation of NP-EDF scheduling algorithm for rtPS services in WiMAX
4.3 Conclusion
References
possibly the SS, are reserved resources. The principal resource to be reserved is bandwidth, but this also includes any other memory or time-based resource required to subsequently activate the flow.
ActiveQoSParamSet. This defines a set of QoS parameters defining the service actually being provided to the service flow. Only an active service flow may forward packets. The activation state of the service flow is determined by the ActiveQoSParamSet. If the ActiveQoSParamSet is null, then the service flow is inactive.
Authorisation module. This is a logical function within the BS that approves or denies every change to QoS parameters and classifiers associated with a service flow. As such, it defines an ‘envelope’ that limits the possible values of the AdmittedQoSParamSet and ActiveQoSParamSet.
2.3.3.2 Types of Service Flow
The standard has defined three types of service flow:
Provisioned service flows. This type of service flow is known via provisioning by, for example, the network management system. Its AdmittedQoSParamSet and ActiveQoSParamSet are both null.
Admitted service flow. The standard supports a two-phase activation model that is often used in telephony applications. In the two-phase activation model, the resources for a call are first ‘admitted’ and then, once the end-to-end negotiation is completed, the resources are ‘activated’.
Active service flow. This type of service flow has resources committed by the BS for its ActiveQoSParamSet. Its ActiveQoSParamSet is non-null.
2.3.3.3 Classification and Mapping
Classification is the process by which a MAC SDU is mapped on to a particular connection for transmission between MAC peers. The mapping process associates a MAC SDU with a connection, which also creates an association with the service flow characteristics of that connection. Classification and mapping mechanisms exist in the uplink and downlink. In the case of a downlink transmission, the classifier will be present in the BS and in the case of an uplink transmission it is present in the SS. A classifier is a set of matching criteria applied to each packet entering the WiMAX/802.16 network. The set of matching criteria consists of some protocol-specific packet matching criteria (a destination IP address, for example), a classifier priority and a reference to a CID. If a packet matches the specified packet matching criteria, it is then delivered to the SAP for delivery on the connection defined by the CID. The service flow characteristics of the connection provide the QoS for that packet. The classification mechanism is shown in figure 2.4.
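The matching process just described can be sketched as a small rule table. The field names and rule structure below are illustrative, not taken from the standard: each classifier carries a priority, some protocol-specific matching criteria, and a reference to a CID.

```python
# Illustrative sketch of 802.16 classification: match each packet against
# an ordered set of criteria and deliver it on the CID of the first match.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Classifier:
    priority: int          # higher value is matched first
    dst_ip: Optional[str]  # a protocol-specific matching criterion
    cid: int               # connection the packet is delivered on

def classify(packet: dict, classifiers: list) -> Optional[int]:
    """Return the CID of the first matching classifier, or None."""
    for c in sorted(classifiers, key=lambda c: -c.priority):
        if c.dst_ip is None or packet.get("dst_ip") == c.dst_ip:
            return c.cid
    return None  # unmatched packets are not forwarded

rules = [Classifier(10, "10.0.0.5", cid=0x2001),
         Classifier(1, None, cid=0x2FFF)]       # catch-all default rule
print(classify({"dst_ip": "10.0.0.5"}, rules))  # 8193 (0x2001)
```

The service flow bound to the returned CID then determines the QoS applied to the packet, exactly as in the text above.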
Payload Header Suppression (PHS)
The packets delivered to OSI model layer 2 may have very large headers, sometimes as long as 120 bytes, as is the case for some RTP/UDP/IPv6 packets (RTP, Real-time Transport Protocol; UDP, User Datagram Protocol). Much of this header information is repetitive (redundant) and so should not be transmitted over a scarce resource such as a radio channel, which should be reserved for useful information. In 3G cellular systems, this process is known as header compression and decompression. In the 802.16 standard, the PHS process suppresses the repetitive (redundant) parts of the payload header in the MAC SDU of the higher layer.
Figure 2.4: Classification and CID mapping
The receiving entity restores the suppressed parts. Implementation of the PHS capability is optional. Figure 2.5 shows the PHS mechanism at the sending entity. Suppression of parts of the header leads to a compressed header. The receiver has to restore the header before properly using the received packet (Figure 2.6).
Figure 2.5: Header suppression at the sending entity.
Figure 2.6: Header suppression mechanism at the receiving entity.
The PHS support field is used to indicate whether PHS is present or not, and gives the level of PHS support. It appears in some MAC management messages, namely the Registration Request and Registration Response. Table 2 shows the possible values of the PHS support field. The default value is 0 (no PHS).
Table 2: Possible values of the PHS support field

  Value | Description
  ------+-------------------
  0     | No PHS support
  1     | ATM PHS
  2     | Packet PHS
  3     | ATM and Packet PHS
PHS Rules
PHS rule application and signaling differ between the two defined CS specifications, the ATM CS and the packet CS:
The ATM standard defines two modes: the VP-switched (Virtual Path) mode and the VC-switched (Virtual Channel) mode. The same PHS operation applies in both modes; the only difference between them is the payload header size after suppression. When PHS is turned off, no part of any ATM cell header, including the Header Error Check (HEC) field, is suppressed.
In the Packet CS mode, if the PHS is enabled at the MAC connection, each MAC SDU starts with a Payload Header Suppression Index (PHSI), an 8-bit field that references which Payload Header Suppression Field (PHSF) to suppress. Once a PHSF has been assigned to a PHSI, it cannot be changed. To change the value of a PHSF on a service flow, a new PHS rule must be defined. Figure 2.7 shows the operation of the PHS in a Packet CS.
It is the responsibility of the higher-layer service entity to generate a PHS rule that uniquely identifies the suppressed header within the service flow. It is also the responsibility of the higher-layer service entity to guarantee that the byte strings that are being suppressed are constant from packet to packet for the duration of the active service flow.
Figure 2.7: Illustration of PHS operation.
As already mentioned, the sending entity uses the classifier to map packets into a service flow. The classifier uniquely maps packets to the PHS rule associated with this service flow. The receiving entity uses the CID and the PHSI to restore the PHSF. The PHS has a Payload Header Suppression Valid (PHSV) option to verify the payload header before suppressing it. It also has a Payload Header Suppression Mask (PHSM) option to allow the choice of bytes that cannot be suppressed.
The PHS rule provides PHSF, PHSI, PHSM, PHSS (Payload Header Suppression Size) and PHSV.
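The interplay of PHSF, PHSM and PHSI can be sketched as follows. The byte values and the mask convention (a set flag meaning "this byte is suppressed and restored from the cached PHSF") are assumptions for illustration, not definitions from the standard:

```python
# Sketch of PHS suppression/restoration. phsf: cached header bytes (PHSF),
# phsm: one flag per byte (True = suppress at sender, restore from PHSF at
# receiver -- an assumed convention). The PHSI selecting the rule is implicit.

def suppress(header: bytes, phsf: bytes, phsm: list) -> bytes:
    # The sender transmits only the bytes that are NOT suppressed.
    return bytes(b for b, m in zip(header, phsm) if not m)

def restore(received: bytes, phsf: bytes, phsm: list) -> bytes:
    out, it = [], iter(received)
    for f, m in zip(phsf, phsm):
        out.append(f if m else next(it))   # suppressed bytes come from PHSF
    return bytes(out)

phsf = b"\x45\x00\xAB\xCD"             # cached (constant) header image
phsm = [True, True, False, False]      # first two bytes are redundant
hdr  = b"\x45\x00\x12\x34"
wire = suppress(hdr, phsf, phsm)       # only b"\x12\x34" goes on the air
assert restore(wire, phsf, phsm) == hdr
```

This also makes concrete why the suppressed byte strings must stay constant for the lifetime of the service flow: the receiver blindly re-inserts the cached PHSF bytes.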
PHS signaling
PHS rules are created with DSA or Dynamic Service Change (DSC) MAC management messages. The BS defines the PHSI when the PHS rule is created. PHS rules are deleted with the DSC or Dynamic Service Deletion (DSD) MAC management messages. When a classifier is deleted, any associated PHS rule is also deleted. Figure 2.8 shows the use of a DSC message to signal the creation of a PHS rule and whether this rule is initiated by the BS or the SS.
In this figure, the use of the following DSC MAC management messages is introduced:
DSC-REQ. A DSC-REQ is sent by an SS or BS to dynamically change the parameters of an existing service flow. It can be used to create PHS rules.
DSC-RSP. A DSC-RSP is generated in response to a received DSC-REQ.
DSC-ACK. A DSC-ACK is generated in response to a received DSC-RSP.
The main DSC message parameters are the SFID (only for DSC-RSP and DSC-ACK), the CID, the service class name, and the CS specification [7].
Figure 2.8: DSC MAC management message used for the signaling of a PHS rule.
2.4 QoS architecture model
The process of requesting and granting QoS in a network can generally be divided into two separate layers: the application layer and the network layer. The application layer, whose function is to provide the end-user with a simplified and standardized view of the quality level that will be granted for a given service, is not aware of the technicalities of service requirements (such as bandwidth, delay, or jitter). Nor does it depend on the technology-dependent issues of the actual networks that will be traversed (such as fiber-optic, wireless, or xDSL). The network layer, by contrast, deals with a set of technical QoS parameters: it maps these parameters onto network-specific requirements that have to be fulfilled to provide the end-user with the negotiated quality level. In wired IP networks this mapping is normally performed at the network layer. However, such an approach is hardly suitable for wireless networks, where several factors come into play that do not arise in wired networks: network capacity is highly variable, for instance due to environmental conditions, and the link quality experienced by different terminals is location-dependent. Consequently, implementing QoS provisioning at the MAC layer, as in IEEE 802.16, is often essential in order to gain better insight into the technology-dependent network status and to react as quickly as possible to changes that might negatively affect QoS.
In IEEE 802.16 the prominent QoS functions of network provisioning and admission control are logically located on the management plane. As already indicated, admission control is outside the scope of IEEE 802.16, which only covers the data control plane, as illustrated in figure 2.9. Network provisioning, the process of approving a given type of service, by means of its network-layer set of QoS parameters, that might be activated later, can be either static or dynamic. It is said to be static if the full set of services that the BS supports is decided a priori. This model is intended for a service provider that wants to specify the full set of services its subscribers can request, by means of manual or semiautomatic configuration of the BS's management information base (MIB). With dynamic network provisioning, on the other hand, each request to establish a new service is forwarded to an external policy server, which decides whether or not to approve it. This model allows a higher degree of flexibility in the types of service that the provider can offer its subscribers, but it requires a signaling protocol between the BS and the policy server, thus incurring additional communication overhead and increased complexity.
Figure 2.9 Quality of Service model in IEEE 802.16
The admission control function, unlike network provisioning, which only deals with services that might be activated later (and are therefore said to be deferred), is responsible for resource allocation. It accepts a new service only if the full set of QoS guarantees the service has requested can be provided while the QoS level of all previously admitted services remains above the negotiated threshold. Admission control obviously works on a smaller time scale than network provisioning. This is because network provisioning is much more complex than admission control, as shown in a recent study of an integrated end-to-end QoS reservation protocol in a heterogeneous environment with IEEE 802.16 and IEEE 802.11e devices. Test results demonstrated that the network provisioning latency of IEEE 802.16 equipment currently available on the market is on the order of several seconds, whereas the activation latency is on the order of milliseconds.
In IEEE 802.16, the set of network parameters that entirely defines the QoS of a unidirectional flow of packets resides in a service flow (SF) specification. The state of each SF can be provisioned, admitted, or active. Provisioned SFs are not bound to any specific connection, because their intended function is to indicate what types of service are available at the BS. Then, when an application on the end-user side starts, the state of the provisioned SF becomes admitted, thus booking the resources that will shortly be needed to fulfill the application's requirements. When the SF state becomes admitted, the SF is also assigned a connection identifier (CID) that will be used to classify SDUs among those belonging to different SFs. However, in this phase resources are still not completely activated; for instance, the connection has not yet been granted bandwidth. This last step is performed during the activation of the SF, which happens just before SDUs from the application start flowing through the network.
Thus a two-phase model is used, where resources are booked before the application is started. At any time it is possible to "put on hold" the application by moving the state of the SF back from active to admitted. When the application stops, the SF is set to either provisioned or deleted; in either case, the one-to-one mapping between the service flow identifier (SFID) and the CID is lost, and the CID can be reassigned for other purposes. The SF transition diagram is illustrated in Figure 2.10.
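The life cycle just described can be captured as a small state machine. The transition (event) names below are illustrative; only the three states and their ordering come from the text:

```python
# Sketch of the service-flow life cycle: provisioned -> admitted -> active,
# with "put on hold" (active -> admitted) and stop (back to provisioned or
# deleted). Event names are assumptions, not terms from the standard.
TRANSITIONS = {
    ("provisioned", "admit"):    "admitted",    # resources booked, CID assigned
    ("admitted",    "activate"): "active",      # bandwidth actually granted
    ("active",      "deactivate"): "admitted",  # application "put on hold"
    ("admitted",    "release"):  "provisioned", # application stopped
    ("admitted",    "delete"):   "deleted",
    ("active",      "delete"):   "deleted",
}

def step(state: str, event: str) -> str:
    # Invalid events leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

s = "provisioned"
for ev in ["admit", "activate", "deactivate", "activate"]:
    s = step(s, ev)
print(s)  # active
```

Note that once the SF leaves the admitted/active pair, the SFID-to-CID mapping is dropped, which is why reactivation must go through admission again.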
Figure 2.11 shows the blueprint of the functional entities for QoS support, which logically reside within the MAC CPS of the BS and SSs.
Figure 2.10 Service flow transition diagram.
Each DL connection has a packet queue (or queue, for short) at the BS (represented with solid lines). In accordance with the set of QoS parameters and the status of the queues, the BS DL scheduler selects from the DL queues, on a frame basis, the next SDUs to be transmitted to the SSs. UL connection queues, on the other hand, reside at the SSs.
Bandwidth requests are used by the BS to estimate the residual backlog of UL connections. Based on the amount of bandwidth requested (and granted) so far, the BS UL scheduler estimates the residual backlog of each UL connection and allocates future UL grants according to the respective set of QoS parameters and the (virtual) status of the queues. However, as already introduced, although bandwidth requests are issued per connection, the BS grants UL capacity to each SS as a whole. Thus, when an SS receives a UL grant, it cannot deduce from the grant which of its connections the BS intended it for. Consequently, an SS scheduler must also be implemented within each SS MAC to redistribute the granted capacity among the SS's connections (Figure 2.11) [8].
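The SS-side redistribution step can be sketched as follows. The standard leaves this policy unspecified, so the strict-priority rule used here is purely illustrative:

```python
# Sketch: an SS receives one aggregate UL grant (in bytes) and must
# redistribute it over its own connection queues. Strict priority by
# connection is an assumed policy; the standard does not mandate one.

def redistribute(grant: int, queues: dict) -> dict:
    """queues: cid -> (priority, backlog_bytes); lower priority value first.
    Returns cid -> allocated bytes."""
    alloc = {}
    for cid, (prio, backlog) in sorted(queues.items(), key=lambda kv: kv[1][0]):
        take = min(grant, backlog)
        alloc[cid], grant = take, grant - take
    return alloc

queues = {0x11: (0, 300),   # e.g. an rtPS connection, served first
          0x12: (1, 500)}   # e.g. a best-effort connection
print(redistribute(600, queues))  # {17: 300, 18: 300}
```

A real SS scheduler would also have to account for per-connection QoS parameters (e.g. minimum reserved rate), but the structural point stands: the grant arrives per SS, and the split across CIDs is a local decision.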
Figure 2.11 Medium Access Control architecture of the Base and Subscriber Station
Chapter 3. Scheduling and Admission for Real-Time Traffic
3.1 Scheduling and admission for real-time traffic
The concept of scheduling time-constrained jobs is central to both the design and the analysis of real-time systems, in which priorities for scheduling decisions are assigned based on job characteristics. The need for accurate estimation of performance measures, such as the fraction of real-time jobs violating their timing constraints, has led to growing interest in models for evaluating the performance of such real-time scheduling policies. A real-time job has a deadline before which it is available for service and after which it must leave the system. We consider the model of deadlines until the end of service, wherein a job is discarded and considered lost if it does not complete execution before its deadline. The loss probability can have a profound effect on the dependability and/or performability of a real-time system. Systems using this model are found in many real-life settings, including multimedia applications, high-speed packet-switching networks, avionics, industrial process control, and many other critical and non-critical applications. Scheduling policies can be divided into two general categories: preemptive scheduling, in which processing of the currently running job can be interrupted by a higher-priority job, and non-preemptive scheduling, in which an arriving job is scheduled only after the current job completes execution. Although preemptive scheduling can achieve greater system utilization, preemption may be impossible or prohibitively expensive due to properties of hardware or software devices. For example, in high-speed packet-switching networks, preemption requires retransmission of the preempted packet. Scheduling over shared media such as LANs, WLANs and field buses [11] such as the CAN bus [12,13] is inherently non-preemptive, because each node in the network has to ensure that the shared channel is free before it can begin transmission.
Further exploitation of non-preemptive processor scheduling is also found in light weight multi-tasking kernels, especially in multimedia applications [14]. Non-preemptive scheduling for real-time embedded systems has also the benefits of accurate response time analysis, ease of implementation, reduced run-time overhead, and guaranteeing exclusive access to shared resources and data which eliminates both the need for synchronization and its associated overhead.
The analysis of queuing systems handling jobs with deadlines has been addressed in numerous papers [18,19], most of which focus on the FCFS (First Come First Served) scheduling policy. The analysis of scheduling policies that use information on timing constraints is attracting increasing interest in the literature [18,19]. Among such policies, preemptive and non-preemptive EDF have been shown to be optimal within the classes of non-idling, service-time-independent preemptive and non-preemptive scheduling policies, respectively. Although its optimality makes the analysis of EDF particularly valuable, the complexity of that analysis explains why very few papers on it exist. Hong et al. [20] first introduced upper and lower bounds on the performance of an M/M/m/EDF+M queue (m=1 when deadlines extend until the end of service). Their results were later improved in [18] for a single-server system.
This thesis gives an overview of the method presented in [17] for jobs arriving according to a Poisson process, in the case of deadlines until the end of service under preemptive scheduling. The method covers a wide range of input rates while remaining simple to use. It gives a relatively good approximation of an important performance measure of soft real-time systems, namely the overall loss probability due to deadline misses and/or transient faults. A key parameter in this method is γ_n, the rate of missing deadlines when there are n jobs in the system. This important parameter is estimated using an upper bound and a lower bound for the case of preemptive EDF, for which a formulation simpler than the corresponding one in [17] is used. The latter formulation is then generalized to the case of the non-preemptive EDF scheduling policy using a heuristic. To the best of our knowledge, no other analytical or approximation method for this problem exists. Comparison of the analytical and simulation results aims to demonstrate the accuracy of the presented method.
3.1.1 System models [18].
In this thesis, the M/M/1/EDF+M queue of [18] is used. This is an infinite-capacity single-server queue with the EDF (Earliest Deadline First) scheduling policy, in which jobs arrive according to a Poisson process with average rate λ and each job requires a service time exponentially distributed with average rate µ. Moreover, a relative deadline, i.e. the interval of time between the arrival of a job and its deadline, is associated with each job. The last M indicates that the relative deadlines are exponentially distributed random variables with mean θ. A model of deadlines until the end of service is considered: a job is discarded and considered lost if it does not complete execution before its deadline, which can occur while the job is in the queue or while it is in service. The job is dropped if it is still in the queue when its deadline is reached, and it is aborted and then dropped if it is in service at that moment. As the name of the earliest-deadline-first (EDF) scheduling policy indicates, the job closest to its deadline is served. A method for the performance analysis of the preemptive EDF scheduling policy is discussed in [18], for which a simpler formulation is presented in this subsection. The analysis of preemptive EDF is given in Subsection 3.1.2, and NP-EDF is presented in Subsection 3.1.3.
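The model can be made concrete with a small Monte-Carlo sketch. It exploits the memorylessness of exponential service times to resample the remaining service time at every event, which is distributionally exact for this queue; all parameter values are illustrative:

```python
# Monte-Carlo sketch of the M/M/1/EDF+M queue: Poisson arrivals (rate lam),
# exponential service (rate mu), exponential relative deadlines (mean theta),
# deadlines until the end of service, preemptive EDF. The served job is
# always the one with the earliest absolute deadline (the heap minimum).
import random, heapq

def sim_edf(lam, mu, theta, horizon_jobs=50_000, seed=1):
    rng = random.Random(seed)
    t, lost, done = 0.0, 0, 0
    next_arr = rng.expovariate(lam)
    dls = []                       # min-heap of absolute deadlines in system
    while done + lost < horizon_jobs:
        # Remaining service of the job in service: memoryless, so resampling
        # an exp(mu) at each event is exact.
        svc = t + rng.expovariate(mu) if dls else float("inf")
        exp_t = dls[0] if dls else float("inf")
        if next_arr <= min(svc, exp_t):          # next event: an arrival
            t = next_arr
            heapq.heappush(dls, t + rng.expovariate(1.0 / theta))
            next_arr = t + rng.expovariate(lam)
        elif exp_t <= svc:                       # earliest deadline expires
            t = exp_t
            heapq.heappop(dls)
            lost += 1                            # job lost (even if in service)
        else:                                    # served (earliest-DL) job done
            t = svc
            heapq.heappop(dls)
            done += 1
    return lost / (lost + done)                  # estimated loss probability

print(sim_edf(lam=0.9, mu=1.0, theta=5.0, horizon_jobs=20_000))
```

Such a simulator is exactly what the analytical approximation reviewed below is compared against.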
We begin with some notation used in [18] and throughout this thesis. Let N denote the set of natural numbers (including 0) and ℛ+ the set of positive real numbers. We also use E(n) to denote an Erlang random variable with parameters n and µ (E(0) = 0). Thus, E(n) has the probability distribution function

F_E(n)(t) = 1 − Σ_{k=0}^{n−1} e^(−µt) (µt)^k / k!,  t ≥ 0.  (1)
For t ∈ ℛ+, Δt ∈ ℛ+ and n ∈ N, let

P_n(Δt) = the probability that one of the customers in the system during [t, t + Δt) misses its deadline, given that there are n customers in the system at time t.  (2)

Define

γ_n(t) = lim_{Δt→0} P_n(Δt)/Δt,  (3)
γ_n = lim_{t→∞} γ_n(t).  (4)

γ_n above is the (steady-state) rate of missing deadlines when there are n jobs in the system (including the one being served). The resulting Markov chain model of the system, M, may then be shown as in figure 3.1.
Figure 3.1. State-transition-rate diagram for Markov chain M.
When the population of the system is n, it can decrease to n−1 either because a job completes its service requirement (with rate µ) or because a job misses its deadline (with rate γ_n).
Barrer [21] first derived an expression for γ_n, in terms of µ and θ, for a model with deterministic customer impatience, unlimited capacity, and a Poisson arrival process. Kargahi [18] extends Barrer's result to a larger class of models, namely those with arbitrary capacity, general customer impatience, and a state-dependent Poisson arrival process. These latter results assume the FCFS policy and show that γ_n is independent of λ and depends only on the number of jobs in the system. We now describe how to calculate γ_n for deadlines until the end of service, as given in [18]. First, let D_n denote the relative deadline of the nth customer that has to wait in the waiting queue; D_n is thus a random variable with probability distribution function G(.). We assume that G(0) = 0; for the exponential deadlines used in this thesis, the probability distribution function of the relative deadlines is given by

G(t) = 1 − e^(−t/θ),  t ≥ 0.  (5)
Define φ_n(s) to be the Laplace transform of the density of E(n), i.e.,

φ_n(s) = (µ/(µ + s))^n.  (6)

Let γ_n be defined as in (4). Then, under FCFS, [18] gives a closed-form expression for γ_n in terms of φ_n and the deadline distribution G; in particular, for the exponential relative deadlines considered in this thesis, the memoryless property yields

γ_n = n/θ.  (7)
We first assume that the model has an unlimited capacity. Let

P_n(t) = the probability that there are n customers in the system at time t.

We can write:

dP_0(t)/dt = −λP_0(t) + (µ + γ_1)P_1(t),
dP_n(t)/dt = λP_{n−1}(t) + (µ + γ_{n+1})P_{n+1}(t) − (λ + µ + γ_n)P_n(t),  n ≥ 1.  (8)

Let:

p_n = lim_{t→∞} P_n(t).

Then, in equilibrium, (8) simplifies to the balance equations:

λ p_{n−1} = (µ + γ_n) p_n,  n ≥ 1.

Iterating these and using the values of γ_n from (7), we get:

p_n = p_0 ∏_{k=1}^{n} λ/(µ + γ_k),  n ≥ 1.

The normalizing condition is:

Σ_{n≥0} p_n = 1.  (14)

Substituting for p_n in (14), we find:

p_0 = [1 + Σ_{n≥1} ∏_{k=1}^{n} λ/(µ + γ_k)]^(−1).

The probability of missing a deadline in the system (loss probability) may then be obtained as:

P_loss = (1/λ) Σ_{n≥1} γ_n p_n,

which is the average rate of missing deadlines divided by the average rate of job arrivals. To analyze the system with the EDF policy, we need a formulation for γ_n (of EDF) as defined in (4). Next, we review a method for estimating γ_n for preemptive EDF in an infinite-capacity system. Afterwards, this estimation will be generalized to non-preemptive EDF in the same system.
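The steady-state recursion and the loss-probability formula above can be evaluated numerically once a loss-rate sequence γ_n is fixed. As an illustration, the sketch below uses γ_n = n/θ, the rate at which one of n jobs with memoryless (exponential) deadlines expires under FCFS; the truncation level N is an assumption made for computability:

```python
# Numerical sketch: steady-state probabilities of the birth-death chain M
# with arrival rate lam and state-dependent departure rate mu + g(n),
# truncated at N states. Loss probability = (1/lam) * sum_n g(n) p_n.
# For illustration g(n) = n/theta (exponential deadlines, FCFS).

def loss_probability(lam, mu, theta, N=400):
    g = lambda n: n / theta
    p = [1.0]                                   # unnormalized p_0
    for n in range(1, N):
        p.append(p[-1] * lam / (mu + g(n)))     # balance: lam p_{n-1} = (mu+g_n) p_n
    Z = sum(p)                                  # normalizing condition (14)
    p = [x / Z for x in p]
    return sum(g(n) * p[n] for n in range(1, N)) / lam

print(loss_probability(0.9, 1.0, 5.0))
```

Replacing `g` with the EDF estimate discussed in the next subsection gives the approximate EDF loss probability in exactly the same way.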
An important feature of the queuing system considered in this thesis is that each incoming customer has a deadline until the end of its service. The difference between the deadline of a customer and its arrival time, referred to as its relative deadline, is assumed to be a random variable with a general probability distribution function G(.).
3.1.2 Loss rate for preemptive EDF
In this subsection, we present a method for estimating γ_n for preemptive EDF, denoted by γ_n^E, in the case of deadlines until the end of service. To do so, some bounds on γ_n^E are defined; combining these bounds yields the required estimate. In [18], an ordering between the loss rates in each state of the Markov chain M is conjectured, obtained from some properties of the EDF and FCFS policies for deterministic and exponential deadline distributions. Accordingly, we take γ_n^D, the FCFS loss rate for deterministic deadlines, as the lower bound, and γ_n^M, the FCFS loss rate for exponential deadlines, as the upper bound on γ_n^E.
Here γ_n^M = n/θ for exponential deadlines with mean θ (20), while γ_n^D follows from Barrer's deterministic-deadline analysis and is expressed in terms of the Erlang distribution (1).

Since the exact values of γ_n under FCFS are known for general deadline distributions, Kargahi [18] exploits this information to come up with an estimate of γ_n^E. The two bounds above are combined linearly using a multiplier to obtain an appropriate estimate of γ_n^E (if the exact values of γ_n^E were known, then solving M would result in an exact analysis of preemptive EDF). More explanation of this approach for an infinite-capacity queue is given below.
Determination of the bounds. As indicated in [23], for a fixed mean relative deadline θ in an FCFS system, deterministically distributed deadlines generate the minimum loss probability among all deadline distributions. This property is also assumed to hold for the state-dependent loss rate under both FCFS and EDF, which simulation results strongly confirm. On the other hand, because EDF coincides with FCFS for deterministic deadlines, the deterministic-deadline rate γ_n^D is attained by EDF as well. Moreover, an upper bound on γ_n^E may be proposed as follows. Owing to the optimality of the preemptive EDF scheduling policy for deadlines until the end of service, which minimizes the loss probability, our conjecture is that this minimization property also holds for the loss rate, which is again confirmed by simulation results, i.e., γ_n^E ≤ γ_n^M. Therefore, we find

γ_n^D ≤ γ_n^E ≤ γ_n^M.
Determination of the multiplier. Contrary to the case of the FCFS policy, simulation results in [18] strongly indicate that the state-dependent loss rate depends on λ under EDF. Therefore, [18] takes advantage of some properties of EDF and some simulation results to find a multiplier that linearly combines the bounds defined previously; the multiplier must be adjusted as a function of λ. The bounds determined above may thus be combined linearly to obtain an appropriate estimate of γ_n^E as follows:

γ*_n = γ_n^D + β_n (γ_n^M − γ_n^D),  (19)

where the multiplier β_n must be determined. As discussed previously, it has been shown that for the FCFS policy the loss rate is independent of λ and depends only on the number of jobs in the system,
and for exponential deadlines this FCFS loss rate is γ_n^M = n/θ, while for deterministic deadlines the corresponding rate γ_n^D is expressed in terms of the Erlang distribution F_E(n) given by (1).
Assume a queue with the preemptive EDF scheduling policy and arrival rate λ, for which ρ = λ/µ is the arrival rate normalized with respect to µ. Moreover, let γ_n^E and β_n be the loss rate and the corresponding multiplier for this queue, respectively. The bounds on γ_n^E must therefore be combined linearly using a multiplier that is a function of ρ. Replacing γ*_n in (19) with its equivalent values obtained from simulation, denoted by γ~_n, and using some simple algebraic manipulation, we find

β~_n = (γ~_n − γ_n^D)/(γ_n^M − γ_n^D).  (23)
The quantities used in (23) are the two bounds on γ_n^E for the case of deadlines until the end of service, described above. Next, we need to find a multiplier β_n that brings the estimate close enough to β~_n. From the simulations, β_n is found to be a function of three parameters, namely n, µθ, and ρ.
To find a function describing the behavior of β_n with respect to these three parameters, the following steps are considered [18]:
1. Assuming n and ρ fixed for exponential relative deadlines, and checking the simulation results for β_n obtained from (23) for different values of the normalized mean relative deadline µθ, the resulting values of β_n are found to vary inversely with the square root of µθ. Therefore, β_n should be proportional to 1/√(µθ), as indicated in (24).
2. Assuming ρ and µθ fixed for exponential relative deadlines, and reviewing the simulation results for β_n for different values of n, it is found that β_n and the number of waiting jobs in the queue, n, should be inversely related, as indicated in (24).
3. Assuming n and µθ fixed for exponential relative deadlines, and checking the simulation results for β_n for different values of ρ, it is found that β_n and ρ should also be inversely related. Reviewing the simulation results for β_n from almost no traffic to very heavy traffic intensities, it is further found that 6.7/ρ^1.25 is a good estimate of the input-rate-dependent part of β_n, as indicated in (24). This behavior can also be explained by some properties of EDF. Owing to the dynamics of the EDF policy with respect to ρ, for very small values of ρ, γ_n^E converges to the upper bound γ_n^M; therefore, β_n tends to be very large as ρ → 0. The reason is that for very light traffic intensities EDF behaves very much like FCFS, and the improvement of EDF over FCFS is quite limited. On the other hand, for large values of ρ, the behavior of EDF becomes more similar to that of FCFS with a deterministic relative deadline, where γ_n^E converges to the lower bound γ_n^D; therefore, β_n tends to be very small as ρ → ∞.
Comparison of the analytical and simulation results shows that, for exponential relative deadlines, the formulation for β_n derived from steps 1, 2, and 3 above yields a good estimate of γ_n^E. For other deadline distributions, however, the same formulation does not yield a good enough estimate of γ_n^E and needs to be improved.
Based on the above explanation, it is proposed in [18] that the estimate be formed by combining the three factors above, as given in (24).
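The three observations above suggest a multiplicative form for the estimate: proportional to the inverse of the square root of µθ, inversely related to n, and carrying the 6.7/ρ^1.25 input-rate-dependent part. The sketch below combines them that way; this multiplicative combination is our reading of the steps, and the exact expression in [18] may differ.

```python
import math

def edf_correction(n, rho, mu_theta):
    """Illustrative estimate combining the three observations in the text:
    inversely proportional to n and to sqrt(mu*theta), with the quoted
    6.7 / rho**1.25 term for the input-rate-dependent part (assumed form)."""
    if n <= 0:
        raise ValueError("defined for n > 0")
    return 6.7 / (n * rho**1.25 * math.sqrt(mu_theta))

# The estimate shrinks with heavier traffic, more waiting jobs,
# or a longer normalized mean relative deadline
print(edf_correction(2, 0.5, 4.0))
```

As the text notes, the value blows up as ρ approaches zero (EDF gains little over FCFS under light traffic) and shrinks for large ρ.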
3.1.3 Non-preemptive EDF
The analysis above covers the preemptive EDF scheduling policy; what follows treats non-preemptive EDF. Under non-preemptive EDF, the job being served is not preempted and continues to receive service even if an arriving job has an earlier deadline. The presentation below estimates the corresponding quantity for non-preemptive EDF, which approximates the performance of the non-preemptive EDF scheduling policy. This is the view proposed by Kargahi in [18] (assuming n > 0):
Since service is non-preemptive, once a job starts service, the behavior of the system with respect to this first job is like that of a system with the FCFS scheduling policy.
As the remaining (n-1) jobs in the system follow the EDF scheduling policy, the system can be divided into two subsystems: the first contains the non-preemptive server with rate µ and can be considered an FCFS system with capacity 1 (no waiting room); the other can be viewed as a virtual preemptive EDF system with a virtual server of rate µ' (Figure 3.2), where the highest-priority job in the virtual system enters the FCFS subsystem whenever the latter has no job to process.
Since the job in the FCFS queue leaves the system either by service completion (with rate µ) or by deadline miss (with rate 1/θ, taking θ as the mean of the exponentially distributed relative deadline), the total rate at which this job leaves the system is µ + 1/θ.
In [18], this departure rate is taken as the service rate of the virtual server (µ') in the preemptive EDF subsystem, at which the waiting jobs are fed to the actual server. Therefore, µ' = µ + 1/θ.
This view of the system leads to a loss rate in the preemptive EDF subsystem that can be calculated from (20), (21), (22) and (24) by replacing µ with µ' and ρ with λ/µ'.
Finally, we obtain the resulting loss estimate for non-preemptive EDF (assuming n > 0), as given in (25).
Figure 3.2 The modified view to the non-preemptive EDF queue
Replacing the preemptive estimate with the non-preemptive one above in the Markov chain and solving the resulting chain by the method presented earlier, we obtain the loss probability for the system under the non-preemptive EDF scheduling policy, in which the probability of state n takes the same form as in the preemptive case.
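The parameter substitution behind this modified view can be sketched numerically. The reading below assumes θ denotes the mean relative deadline (inferred from the normalization µθ used elsewhere in the text), so the deadline-miss rate of the job in service is 1/θ; the function names are ours.

```python
def virtual_server_rate(mu, theta):
    """Rate at which the job in the FCFS sub-queue leaves the system:
    service completion (mu) plus deadline miss (1/theta), assuming theta
    is the mean of the exponentially distributed relative deadline."""
    return mu + 1.0 / theta

def np_edf_parameters(lam, mu, theta):
    """Parameters to substitute into the preemptive-EDF formulas (20)-(24):
    replace mu with mu' and rho with lam/mu' (our reading of [18])."""
    mu_prime = virtual_server_rate(mu, theta)
    return mu_prime, lam / mu_prime

mu_p, rho_p = np_edf_parameters(lam=0.8, mu=1.0, theta=4.0)
print(mu_p, rho_p)  # 1.25 0.64
```

Note that µ' > µ, so the effective traffic intensity λ/µ' seen by the virtual preemptive subsystem is lower than ρ.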
3.2 Some current scheduling algorithms for real-time polling services
Several packet scheduling algorithms for broadband wireless networks have been published recently; in this thesis, we present a packet scheduling algorithm similar to that proposed in [24], which provides QoS support for the variety of real-time applications defined in IEEE 802.16.
IEEE 802.16 is capable of supporting multiple communication services (data, voice, video) with different QoS requirements. The medium access control (MAC) layer defines QoS signaling mechanisms and functions that control BS and SS data transmissions. Scheduling is relatively simple on the downlink (from BS to SS) because the BS is the only station transmitting during the downlink subframe: data packets are broadcast to all SSs, and each SS picks up only the packets destined to it. One mode of uplink arbitration (from SS to BS) uses a TDMA MAC: the BS decides the number of time slots in which each SS is allowed to transmit in an uplink subframe and broadcasts this information through the uplink map message (UL-MAP) at the beginning of each frame. The UL-MAP contains information elements (IEs) that specify the transmission opportunities, i.e. the time slots in which the SS can transmit data. The BS uplink scheduling module determines the IEs using the bandwidth request PDUs (BW-Requests) sent from the SSs to the BS. The IEEE 802.16 standard defines two modes of BW-Request transmission: contention mode, in which SSs send BW-Requests during the contention period and contention is resolved by back-off, and contention-free mode (polling), in which the BS polls each SS and the SS replies by sending a BW-Request. Because the signaling delay of the polling scheme is predictable, the contention-free mode is suitable for real-time applications. IEEE 802.16 defines the QoS signaling mechanisms described above, such as BW-Request and UL-MAP, but it does not define the uplink scheduler.
IEEE 802.16 defines five types of service flows for 802.16-2004 fixed WiMAX and an extra one, the extended rtPS, for 802.16e-2005 Mobile WiMAX, each with different QoS requirements and corresponding uplink scheduler policy:
Unsolicited grant service (UGS): this service supports constant bit-rate (CBR) or CBR-like flows such as Voice over IP. These applications require constant bandwidth allocation.
BW-Request: not required.
Uplink Scheduler: the BS determines the IEs for the UL-MAP, allocating a fixed number of time slots in each time frame.
Real-time polling service (rtPS): this service is for real-time VBR-like flows such as MPEG video. These applications have specific bandwidth requirements as well as a deadline (maximum delay). Late packets that miss the deadline will be useless.
BW-Request: used only in the contention-free mode. The current queue size, which represents the current bandwidth demand, is included in the BW-Request.
Uplink Scheduler: not defined in the current IEEE 802.16.
Extended real-time polling service (ertPS): this service is for real-time traffic with variable data rate (such as VoIP with silence suppression) over the WiMAX network.
BW-Request: used only in the contention-free mode. The current queue size, which represents the current bandwidth demand, is included in the BW-Request.
Uplink Scheduler: not defined in the current IEEE 802.16.
Non-real-time polling service (nrtPS): this service is for non-real-time flows which require better than best-effort service, e.g. bandwidth-intensive file transfers. These applications are time-insensitive and require a minimum bandwidth allocation.
BW-Request: uses either contention-free mode or contention mode. Current queue size is included in BW-Request.
Uplink Scheduler: not defined in current IEEE 802.16.
Best effort service (BE): this service is for best-effort traffic such as HTTP. There is no QoS guarantee. The applications in this service flow receive whatever bandwidth remains after allocation to the higher-priority service flows.
BW-Request: uses only contention mode. Current queue size is included in BW-Request.
Uplink scheduler: not defined in current IEEE 802.16.
3.2.1 EDF Broadband Wireless Access Scheduling Algorithm
In some BWA systems (e.g. WiMAX), the BS can obtain the queue-size information of the SSs from their bandwidth request messages; from the current queue sizes, the number of arriving packets (in bits) and their arrival times, and hence the relative deadlines, can be estimated. We first define some quantities used throughout this subsection.
f = duration of a time frame (ms) which includes uplink and downlink subframe.
di = maximum delay requirement of connection i (ms).
qi(t) = queue size (bits) of connection i at time t.
si[t, t+f] = number of bits of connection i transmitted during time frame interval [t, t+f].
ai[t, t+f] = number of bits of connection i that arrive during time frame interval [t , t +f].
Ndi[t, t+f] = number of bits waiting in the queue of connection i whose deadline falls in time interval [t, t+f].
Cuplink = total capacity (bps) allocated for uplink transmission.
Cdownlink = total capacity (bps) allocated for downlink transmission.
Ctotal = total capacity (bps) of the wireless network, Ctotal = Cuplink + Cdownlink.
CrtPS = average capacity (bps) allocated for current rtPS connection.
CBE = average capacity (bps) available for current BE connection = Cuplink-CUGS-CrtPS-CnrtPS.
CNRT= CnrtPS + CBE = Cuplink – CUGS – CrtPS.
Nuplink = total number of bits that the SSs are allowed to transmit in an uplink subframe, Nuplink = f*Cuplink.
ri = token bucket rate (average data rate) of connection i.
bi = token bucket size of connection i.
To support all types of service flows (UGS, rtPS, ertPS, nrtPS, and BE), the proposed uplink packet scheduling uses a combination of a strict-priority service discipline, earliest deadline first (EDF), and weighted fair queuing (WFQ). The hierarchical structure of the bandwidth allocation is shown in Figure 3.3. The proposed UPS consists of three modules: an information module, a scheduling database module, and a service assignment module. In [24], the earliest-deadline-first (EDF) service discipline is applied to rtPS connections: packets with the earliest deadline are scheduled first, and the information module determines each packet's deadline.
Information module: the information module performs the following tasks: (1) retrieves the queue-size information of each connection from the BW-Request messages, (2) determines the number of packets (in bits) that arrived on each rtPS connection in the previous time frame using the arrival-service curve concept, (3) determines the rtPS packets' arrival times and deadlines and updates this information in the scheduling database module.
Figure 3.3 Hierarchical structure of bandwidth allocation [24]
Step 1: determines the number of arrivals during the previous time frame, ai[t-f,t]. This is accomplished by using the arrival-service curve concept. At time t = nf, (n = 1,2,3,…)
Input: queue size = qi(nf), Service = si[(n-1)f, nf]
Output: ai[(n-1)f, nf] = qi(nf) + si[(n-1)f, nf] – qi((n-1)f) (4.1)
Step 2: determines the packet's deadline given the packet arrival information ai[t-f, t] from step 1. The deadline is the packet's arrival time plus the packet's maximum delay requirement. Note that ai[t-f, t] is the number of bits that have already waited in the queue for a time f; therefore, to avoid a delay violation, these bits must receive service in frame [(t-f) + (di-f), (t-f) + di], so Ndi[(t-f) + (di-f), (t-f) + di] = ai[t-f, t]. In general form, step 2 is: at time t = nf (n = 1, 2, 3, ...)
Input: ai[(n-1)f, nf]
Output: Ndi[(nf-f) + (di –f), (nf-f) + di] = ai[(n-1)f, nf]
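Steps 1 and 2 of the information module can be sketched as follows (function names are ours; units and frame duration follow the definitions above):

```python
def arrivals_last_frame(q_now, served_last_frame, q_prev):
    """Step 1, equation (4.1): arrivals during the previous frame equal the
    queue growth plus whatever was served out of the queue in that frame."""
    return q_now + served_last_frame - q_prev

def deadline_frame(n, f, d_i):
    """Step 2: bits that arrived in [(n-1)f, nf] must be served no later than
    frame [(n-1)f + d_i - f, (n-1)f + d_i] to meet the delay requirement d_i."""
    start = (n - 1) * f + d_i - f
    return (start, start + f)

# A queue that grew from 10 to 18 kbit while 25 kbit were served last frame
print(arrivals_last_frame(18, 25, 10))   # 33 (kbit arrived last frame)
print(deadline_frame(n=1, f=5, d_i=60))  # (55, 60)
```

The second call reproduces the mapping Ndi[(nf-f) + (di-f), (nf-f) + di] = ai[(n-1)f, nf] for n = 1, f = 5 ms, di = 60 ms.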
The output of the information module, Ndi[a, b], is used to update the scheduling database module.
Scheduling database module: the scheduling database module serves as the information database for all connections in the network. Figure 3.4 shows the database structure of the scheduling database module.
This is a two-dimensional database, indexed by connection and by deadline (frame). Item (i, [t, t+f]) contains Ndi[t, t+f] (received from the information module), the number of bits to be transmitted in frame [t, t+f]. Figure 3.4 also shows how the number of bits in the database table corresponds to the actual packets waiting in the queue.
Service assignment module: the service assignment module determines the UL-MAP, using the database tables created in the scheduling database module. We use strict priority service discipline for allocating bandwidth among service flows (i.e. UGS, rtPS, nrtPS, and BE). Within each service flow we employ the following service discipline: EDF for rtPS, fixed bandwidth for UGS, WFQ for nrtPS and equal bandwidth allocation for BE.
Figure 3.4 rtPS database structure in scheduling database module [24]
Schedule the rtPS packets in the rtPS database until either no rtPS packets are left or no bandwidth is left. After scheduling the packets, update Nuplink (Nuplink ← Nuplink minus the number of bits scheduled). If the total number of bits in the current column is greater than Nuplink, Nuplink is distributed among the connections according to their weights (wi = ri / Σrj, summed over the rtPS connections); for example, connection i is scheduled wi·Nuplink bits. If packets remain in the current time-frame interval when Nuplink reaches zero, those packets will miss their deadline. Two actions are possible for packets that miss their deadline: (1) drop the packets, or (2) reduce their priority by moving them to the BE database. After scheduling the packets, update the UL-MAP and the rtPS database.
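The rtPS assignment step, including the weighted split when the deadline column exceeds the uplink budget, can be sketched as follows (a simplified illustration; names and the integer truncation of grants are ours, not from [24]):

```python
def assign_rtps(bits_due, weights, n_uplink):
    """Sketch of the rtPS assignment step: if the bits due this frame fit in
    the uplink budget, schedule them all; otherwise split n_uplink among
    connections by weight w_i = r_i / sum(r_j).
    Returns (per-connection grant, remaining uplink budget)."""
    total = sum(bits_due.values())
    if total <= n_uplink:
        return dict(bits_due), n_uplink - total
    total_w = sum(weights.values())
    grants = {i: int(n_uplink * weights[i] / total_w) for i in bits_due}
    return grants, 0

# Two rtPS connections whose column total (50 kbit) exceeds a 30 kbit budget;
# weights are the token bucket rates r_i
grants, left = assign_rtps({"c1": 20_000, "c2": 30_000},
                           {"c1": 64_000, "c2": 128_000}, 30_000)
print(grants, left)  # {'c1': 10000, 'c2': 20000} 0
```

When the budget is exhausted, the bits still due in the column are exactly the packets that miss their deadline and are either dropped or demoted to the BE database.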
3.2.2 Admission Control
Any wireless network is resource-constrained and can serve a limited number of users with a given QoS level. Hence, a CAC algorithm that decides whether new users should be admitted to the network is required. The goals of the CAC are satisfying QoS requirements for the admitted users, maximizing operator’s revenue by maximizing the network capacity, and supporting fairness and priorities among the users [21,22].
A CAC algorithm admits a new user to the network based on a given criterion, and the admission criteria vary: known CAC schemes base admission on SINR, interference, bandwidth, load, or system capacity [21,22]. Several publications consider CAC in OFDM networks [23,24]; their algorithms admit new users based on an analysis of the current status of the active users' queues.
In the Mobile WiMAX network, the most suitable scheme is one that maximizes network capacity while satisfying the QoS requirements of all admitted users. Such a scheme maximizes the operator's revenue and guarantees user satisfaction. We propose such a CAC algorithm based on the system load model.
In this subsection, we also introduce another network traffic control mechanism, the token bucket, which works well for bursty traffic. Two parameters define the token bucket mechanism: the bucket size B and the token rate r. Figure 3.5 shows how it works. Each token represents a unit of bytes or a packet data unit, and a packet may not be transmitted until it obtains a token. Therefore, over a period of time t, the maximum data volume that can be sent is B + r·t.
Figure 3.5 Token Bucket mechanism
We adopt the token bucket mechanism to schedule the packets in 802.16 network environments.
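The token bucket policer described above can be sketched as follows (a minimal illustration; class and method names are ours):

```python
class TokenBucket:
    """Token bucket policer with bucket size B (bits) and token rate r
    (bits/s). Over any interval of length t, the conforming volume is
    bounded by B + r*t."""
    def __init__(self, B, r):
        self.B, self.r, self.tokens, self.last = B, r, B, 0.0

    def conforms(self, bits, now):
        # Refill tokens for the elapsed time, capped at the bucket size
        self.tokens = min(self.B, self.tokens + self.r * (now - self.last))
        self.last = now
        if bits <= self.tokens:
            self.tokens -= bits
            return True
        return False

tb = TokenBucket(B=8_000, r=64_000)
print(tb.conforms(8_000, now=0.0))   # True: the full bucket absorbs the burst
print(tb.conforms(8_000, now=0.05))  # False: only 3200 tokens have accrued
```

This matches the B + r·t bound: a burst of up to B bits passes immediately, while sustained traffic is limited to the rate r.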
Suppose a new rtPS connection requests admission with token bucket parameters (ri, bi) and delay requirement di, and the current network parameters are Cuplink, CUGS, CrtPS, CnrtPS and f, with CNRT = Cuplink - CUGS - CrtPS, CNRT,new = CNRT - ri and CrtPS,new = CrtPS + ri. The admission control procedure is as follows:
Check for available bandwidth: if sufficient uplink bandwidth remains for the new rtPS connection, continue; otherwise reject the new connection.
Check for delay guarantees: if the delay requirement of the new rtPS connection can be met, continue; otherwise reject the new connection.
Check for delay violations of existing rtPS connections: for each existing rtPS connection i, verify that its delay guarantee is still maintained; if this fails for any connection, reject the new connection.
Update CrtPS: CrtPS ← CrtPS + ri.
Pass token bucket parameters ri, bi to the traffic policing module.
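The procedure above can be sketched as follows. The source does not reproduce the admission inequalities of [24], so the bandwidth check used here (the new rate must fit in the non-real-time residual capacity) is an illustrative assumption, as are the function and parameter names.

```python
def admit_rtps(r_new, C_uplink, C_UGS, C_rtPS, C_nrt_min=0):
    """Illustrative CAC sketch: admit the new rtPS flow only if its average
    rate r_new fits in the bandwidth left after UGS and existing rtPS flows,
    optionally reserving C_nrt_min for nrtPS/BE traffic (assumed condition;
    the exact inequalities of [24] are omitted in the text).
    Returns (admitted?, updated C_rtPS)."""
    C_NRT = C_uplink - C_UGS - C_rtPS
    if r_new > C_NRT - C_nrt_min:      # bandwidth check (assumed form)
        return False, C_rtPS
    return True, C_rtPS + r_new        # update C_rtPS on admission

ok, new_crtps = admit_rtps(r_new=1_000_000, C_uplink=10_000_000,
                           C_UGS=2_000_000, C_rtPS=3_000_000)
print(ok, new_crtps)  # True 4000000
```

On admission, the token bucket parameters (ri, bi) would then be handed to the traffic policing module, as in the last step of the procedure.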
Traffic policing: each SS will have a traffic policing module which monitors violations of QoS contracts by admitted connections, using the token bucket mechanism. The token bucket parameters for each connection are received from the admission control module.
Chapter 4. Simulation Results
We have developed two simulation models in Matlab: one for the theoretical performance of the single-queue preemptive and non-preemptive EDF of rtPS, and the other for the simulation of non-preemptive EDF scheduling of rtPS services in WiMAX.
4.1 Theoretical Performance of single queue EDF scheduling algorithms
As previously mentioned, the exact values of λ, µ and θ cannot be identified for a particular application scenario; therefore, to evaluate the results of the theory above, it is necessary to use normalized values: ρ = λ/µ as the normalized input rate and µθ as the normalized mean relative deadline, with the loss rate and delay normalized accordingly. Hence, formulas (20), (21) and (22) become:
We choose ρ = λ/µ in the range (0, 3] and take 30 values within this range, with µθ = 4 and 8. The results are as follows:
Figure 4.1 P-EDF analytic result with µθ=4
Figure 4.2 P-EDF analytic result with µθ=8
As above, formulae (26,27,28) become:
The results are as follows:
Figure 4.3 Analytical loss probability
4.2 Simulation of NP-EDF scheduling algorithm for rtPS services in WiMAX
The next step is to describe the Matlab model for calculating the loss rate according to the UPS method described above.
The number of bits that arrived during the previous frame (of duration f) is obtained from the output of the information module:
ai[(n-1)f, nf] = qi(nf) + si[(n-1)f, nf] – qi((n-1)f) (4.1)
Note: arrival rate = arrival bits/f and service rate = service bits/f. A packet has a fixed number of bits that travel together (same delay d), so we let one packet = 1 bit.
We can start the simulation of an uplink NP-EDF queue at SSi, at t = f (i.e. n = 1), using equation (4.1) as follows:
generate a random value for ai(0, f) from a Poisson distribution with mean λ bits/frame (equal to the token bucket rate converted to bits/frame, using a frame time of 5 ms for WiMAX). The delay θ and the service si(.) are fixed for the simulation, e.g. 60 ms (12f) and 25 Kbits/frame.
assume the initial conditions qi(0) = 0 and si(0, f) = 0, so that from (4.1) the queue is qi(f) = ai(0, f).
In WiMAX, SSi feeds this queue size back to the BS, which then uses equation (4.1) to calculate the bandwidth allocation for SSi in the next frame, i.e. si(f, 2f) bits/frame; here, however, we assume si(.) is fixed.
If si(f, 2f) < qi(f), then [qi(f) - si(f, 2f)] bits remain in the queue waiting to be served in the next frame.
generate the next Poisson arrival ai(f, 2f),
calculate qi(2f) from (4.1),
qi(2f) = ai(f, 2f) - si(f, 2f) + qi(f), i.e. the new arrivals plus what is left in the queue,
repeat the steps above to obtain ai(2f, 3f), qi(3f), and so on.
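The frame-by-frame recursion above can be sketched as follows. The thesis model is in Matlab; a Python sketch is given here for illustration, with Poisson arrivals and a fixed per-frame service as assumed in the text (the Poisson sampler and function names are ours).

```python
import math
import random

def simulate_queue(lam, service_bits, n_frames, seed=1):
    """Sketch of the recursion q(nf) = a((n-1)f, nf) + q((n-1)f) - s((n-1)f, nf)
    with Poisson arrivals of mean lam bits/frame and a fixed service of
    service_bits per frame; the queue cannot go negative."""
    rng = random.Random(seed)
    def poisson(mean):  # simple Poisson sampler (Knuth's method)
        L, k, p = math.exp(-mean), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1
    q, trace = 0, []
    for _ in range(n_frames):
        a = poisson(lam)
        q = max(0, q + a - service_bits)  # what is left waits for the next frame
        trace.append(q)
    return trace

print(simulate_queue(lam=20, service_bits=25, n_frames=5))
```

With a fixed seed the run is reproducible, which is convenient when comparing loss rates across parameter settings.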
The arrival time of the bits ai((n-1)f, nf) is assumed to be at ((n-1)f), i.e. arriving at the start of the frame (worst case). Therefore the worst case departure time has to be in the frame as determined below:
Deadline transmitting frame = [(n-1)f + (mi-1)f, nf + (mi-1)f] (2), where mi = di/f is the delay requirement of connection i expressed in frames.
The results achieved are as follows:
Figure 4.4 Bandwidth allocation for rtPS connection.
Figure 4.5 Arrival and service curve for rtPS connection.
4.3 Conclusion
This thesis has examined several scheduling algorithms for real-time service flows on the WiMAX uplink, covering both preemptive EDF and non-preemptive EDF. It has also presented a method for uplink packet scheduling and deadline estimation, and for evaluating the probability of missing a deadline in the M/M/1/EDF+M queueing model. In addition, we have introduced the token bucket, a network traffic control mechanism used in determining deadline misses.
References
[1] Deepak Pareek. WiMAX: Taking Wireless to the MAX. Auerbach Publications, Taylor & Francis Group.
[2] T. Gaden. Intel pumps $37 million into Unwired. Whirlpool Broadband Multimedia, August 2005.
[3] W. Stallings. Data and Computer Communications. Prentice Hall, New Jersey, 7th edition, 2004.
[4] Deepankar Medhi, Karthikeyan Ramasamy. Network routing algorithms, protocols, and architectures. Morgan Kaufmann Publishers.
[5] IEEE 802.16 Working Group on Broadband Wireless Access. IEEE Standard for Local and Metropolitan Area Networks. Part 16: Air Interface for fixed broadband wireless access systems. IEEE 2004.
[6,7] Loutfi Nuaymi. WiMAX: Technology for Broadband Wireless Access. John Wiley & Sons.
[8] Yang Xiao. WiMAX /MobileFi Advanced Research and Technology. Auerbach Publications
[9,10] G.S.V. Radha Krishna Rao, G. Radhamani. WiMAX: A Wireless Technology Revolution. Taylor & Francis Group.
[11] EN 50170. General purpose field communication system. In European Standard, CENELEC, 1996.
[12] CAN-CIA. Can specification 2.0 part B. 1992.
[13] M.A. Livani and J. Kaiser. EDF Consensus on CAN Bus Access for Dynamic Real-Time Applications. Proceedings of the IEEE Workshop on Parallel and Distributed Computing Systems, in conjunction with the 12th International Parallel Processing Symposium / 9th Symposium on Parallel and Distributed Processing.
[14] S. Dolev and A. Keizelman. Non-preemptive real-time scheduling of multimedia tasks. Real-Time Systems.
[15] X. Castillo, S.R. McConnel, and D.P. Siewiorek. Derivation and Calibration of a Transient Error Reliability Model. IEEE Trans. on Computers, 31(7):658-671, 1982.
[16] R.K. Iyer, D.J. Rossetti, and M.C. Hsueh. Measurement and Modeling of Computer Reliability as Affected by System Activity. ACM Trans. on Computer Systems, 4(3):214-237, 1986.
[17] D.P. Siewiorek, V. Kini, H. Mashburn, S. McConnel, and M. Tsao. A case study of C.mmp, Cm* and C.vmp: Part I - experiences with fault tolerance in multiprocessor systems. Proceedings of the IEEE, 66(10):1178-1199, 1978.
[18] M. Kargahi and A. Movaghar. A Method for Performance Analysis of Earliest-Deadline-First Scheduling Policy. Proceedings of the IEEE International Conference on Dependable Systems and Networks.
[19] A. Movaghar. On queueing with customer impatience until the beginning of service. Queueing systems.
[20] J.Hong, X.Tan, and D.Towsley. A Performance Analysis of Minimum Laxity and Earliest Deadline Scheduling in a Real-time system. IEEE Transactions on Computers, 38(12): 1736-1744, 1989.
[21] D.Y. Barrer. Queueing with Impatient Customers and Ordered Service. Oper. Res., 5:650-656, 1957.
[22] C.J. Ancker and A.V. Gafarian. Queueing with impatient customers who leave at random. J. Industrial Eng., 3:86-87, 1962.
[23] A. Movaghar. On queueing with customer impatience until the end of service. Proceedings of the IEEE International Computer Performance and Dependability Symposium.
[24] K. Wongthavarawat and A. Ganz, “Packet scheduling for QoS support in IEEE 802.16 broadband wireless access systems,” Int. J. Commun. Syst., vol. 16, pp.81-96, 2003.