An increasing demand for Quality of Service on the Internet has led to various developments in that area. Differentiated Services is a technique to provide such Quality of Service in an efficient and scalable way.
Management of computer networks involves both the monitoring of running services and the configuration of those services. On the Internet, the SNMP protocol is used to retrieve and set variables in a MIB. To facilitate the management of routers equipped with Differentiated Services, the IETF has created the DiffServ MIB (which is still work in progress).
This assignment involves building a prototype implementation of the DiffServ MIB on a router running the GNU/Linux operating system, built on the Network Traffic Control facilities in the kernel and the net-snmp SNMP agent software.
The IETF diffserv WG is still working on the DiffServ MIB. Results of implementation work are valuable to the MIB authors, as they may help in improving the MIB specification. Therefore any results should be reported back to the IETF.
Today's Internet provides a best effort service. It processes traffic as quickly as possible, but there is no guarantee at all about timeliness or actual delivery: it just tries its best. However, the Internet is rapidly growing into a commercial infrastructure, and economies are becoming increasingly dependent on a high level of Internet service. Massive (research) efforts are being put into transforming the Internet from a best effort service into a network service users can really rely upon.
Commercial demands gave rise to the idea of having various classes of service. For instance, companies might offer (or buy, for that matter) a gold, silver or bronze service level, each with its own characteristics in terms of bandwidth and latency with regard to network traffic. This is called Quality of Service (QoS). The Internet Engineering Task Force (IETF), one of the main driving forces behind Internet-related technologies, has proposed several architectures to meet this demand for QoS. Integrated Services and Differentiated Services, developed in the intserv and diffserv IETF Working Groups, are probably the best-known models and mechanisms. The IETF diffserv WG has also defined a DiffServ Management Information Base, a virtual storage place for management information regarding DiffServ. At the time of writing, this MIB is still work in progress. This assignment contributes to the development of the DiffServ MIB by writing a prototype implementation of a DiffServ MIB agent and giving feedback to the IETF community. One of the likely uses of the DiffServ MIB is that it may act as part of a bigger policy-based management framework. Therefore an implementation of the DiffServ MIB might also help development in that area.
William Stallings wrote that a large network cannot be put together and managed by human effort alone. Automated network management tools are necessary to accomplish these tasks in complex systems. The model that is used for TCP/IP network management includes the following key elements:
1. Management Station
2. Management Agent
3. Management Information Base
4. Network Management Protocol
This assignment focuses on a prototype implementation of a management agent for Differentiated Services, taking the (draft) DiffServ MIB and SNMP as given implementations of the third and fourth elements. In 2000, Oscar Sanz carried out a feasibility study of this work. Some work has been done on a manager application for a DiffServ MIB agent, but given the current status of the DiffServ MIB it will take some time before such an application matures; existing generic command-line utilities like snmpget can be used in the meantime.
A prototyping environment is selected and a prototype implementation is developed. This prototype focuses on the monitoring part of the DiffServ MIB; it is not possible to do DiffServ configuration using this MIB implementation. The following are the goals of this M.Sc. assignment:
• Does the DiffServ MIB, with this prototype implementation, solve the management issues it is intended to address? In other words, is it possible to manage a DiffServ router with the MIB, especially in the selected prototyping environment?
• Implementing the MIB may reveal problems with the current draft version, such as under- or over-specification. The IETF DiffServ Working Group will be interested in such findings, so giving them feedback about the results of this work is important.
The prototyping environment chosen for this assignment is a DiffServ MIB Agent on a Linux-based router, using the net-snmp suite. Some motivations for using these software packages:
• The Linux operating system is freely available, including all of its source code. Linux has an excellent network traffic control infrastructure, of which support for differentiated services is a part (since the kernel 2.4 series). The availability of the source code makes it possible to have a close and in-depth look at the DiffServ functionality in the kernel.
• Linux, although a server and desktop operating system by nature, is gaining influence in the marketplace for Internet routers as well. Many of the protocol implementations running on Cisco (et al.) routers, e.g. BGP routing, are nowadays also implemented on Linux systems. Hence it is likely that Linux's share will continue to grow, provided that implementers keep up with the developments in the global router market.
• net-snmp is an open-source implementation of SNMP and provides applications with a relatively easy-to-use interface for SNMP communication. The suite supports SNMPv1, SNMPv2 with community-based security (SNMPv2c), and SNMPv3.
This makes Linux and net-snmp an obvious choice for a proof of concept implementation of the DiffServ MIB.
The following figure gives the structure of the report.
This report is intended for those who are:
• Designing and implementing the DiffServ MIB
• Interested in Quality of Service on the Internet
• Interested in Network Management on the Internet
• Interested in Differentiated Services and Management of DiffServ routers
• Interested in SNMP Agent programming
• Interested in Linux Network Traffic Control programming
INTERNET, QUALITY OF SERVICE AND
IP stands for Internet Protocol. The Internet is the largest combination of computer networks ever built, and it is still rapidly growing. Enormous amounts of IP network traffic are sent over the Internet, a volume that seems to double every nine months.
The Internet protocol stack is known as the TCP/IP reference model and consists of four layers. A short description of these layers follows.
• The link layer (sometimes called the host-to-network layer) is not extensively discussed in this reference model. A host should be able to set up a connection with the network, using some protocol, in order to be able to send IP packets through the network. The protocol used varies from host to host and network to network. Well-known examples are Ethernet on a Local Area Network and PPP on dialup phone lines.
• On top of this access mechanism the internet protocol layer is stacked. This connectionless protocol is the key element of the whole Internet architecture. It enables hosts to send packets to arbitrary networks; packets are routed independently of each other and may arrive out of sequence.
• The transport layer's sole purpose is to provide peer entities on the source and destination hosts with services to have a sensible conversation with each other. There are two well-known transport protocols: TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).
• On top of the stack, the application layer is what is most visible to the user. Examples are WWW browsers and e-mail clients. These applications make use of the underlying TCP transport layer to communicate with the server, for example using IP over PPP on a dialup telephone line.
An IP packet consists of a header and a data part, the latter often referred to as the payload. The structure of an IP packet is given below. To get an idea of the size of such packets: each row in the header part is 32 bits and the options part has a variable size, hence the minimum size of an IP header is 20 bytes. The payload's maximum size is almost 64 kilobytes, but usually a packet's total size does not exceed 1500 octets.
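To make the layout concrete, the sketch below packs a minimal 20-byte IPv4 header with Python's struct module. It is a simplification: options are omitted and the checksum field is left at zero, and the field layout follows the standard header diagram.

```python
import struct

def build_ipv4_header(src, dst, payload_len, tos=0, ttl=64, proto=17):
    """Pack a minimal 20-byte IPv4 header (no options, checksum zeroed)."""
    version_ihl = (4 << 4) | 5           # version 4, header length 5 x 32-bit words
    total_length = 20 + payload_len      # header plus payload, in octets
    src_bytes = bytes(int(octet) for octet in src.split("."))
    dst_bytes = bytes(int(octet) for octet in dst.split("."))
    return struct.pack("!BBHHHBBH4s4s",
                       version_ihl, tos, total_length,
                       0, 0,             # identification, flags/fragment offset
                       ttl, proto, 0,    # time-to-live, protocol, checksum placeholder
                       src_bytes, dst_bytes)

header = build_ipv4_header("10.0.0.1", "10.0.0.2", payload_len=100)
```

Unpacking the first octet recovers the version (4) and the header length in 32-bit words (5), which together confirm the 20-byte minimum discussed above.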
Quality of Service in IP Networks
Within the IETF, various mechanisms that (help to) provide QoS services have been developed. Some of the more relevant models are outlined below.
Â¢ Multiprotocol Label Switching (MPLS) is an approach to apply label-switching to large-scale networks. The key concept of label-switching is identifying and marking IP packets with labels at the ingress of an MPLS domain, and basing any further forwarding in that MPLS domain on those labels.
Â¢ Constraint-based routing (also known as QoS routing) is used to identify an end-to-end path through a network (or series of networks) that has sufficient resources to satisfy a set of constraints (like available bandwidth and latency).
Â¢ Integrated Services (sometimes called intserv) provides end-to-end Quality of Service by means of resource reservation. Each traffic flow may request resources using the RSVP (resource reservation) protocol.
Â¢ Differentiated Services (diffserv or DS) is an architecture that uses small and well-defined building blocks to provide QoS in networks.
Typically an ISP or network operator may provide so-called edge-to-edge Quality of Service. In the figure, two hosts are connected via two different network providers, each with its own transit network, also referred to as a domain. A domain may consist of a very complex mesh of network links and devices. When host X sends traffic to host Y, the traffic first enters domain A at one of its edges. Provider A guarantees some level of service throughout its network. The packets leave domain A at some other edge and enter the second domain. In order to guarantee the same level of service throughout network B as the packets received in the first domain, providers A and B should have a Service Level Agreement which regulates such exchanges of traffic. Operator B then forwards the traffic to the destination host. In short: edge-to-edge Quality of Service may be extended to combinations of networks, and ultimately towards (near) end-to-end QoS.
Structure of Management Information and MIBs
Management information is viewed as a collection of managed objects, residing in a virtual information store, termed the Management Information Base (MIB). Collections of related objects are defined in MIB modules. These modules are written using an adapted subset of OSI's Abstract Syntax Notation One (ASN.1).
The Structure of Management Information is divided into three parts:
1. Module definitions identify MIB modules, using the MODULE-IDENTITY macro. Each MIB module has a unique identifier; the Interfaces MIB, for instance, is identified by ifMIB. The module definition also keeps track of the revisions and authors of that module.
2. Object definitions describe managed objects, e.g. the ifOutOctets counter defined with the OBJECT-TYPE macro. As no doubt may exist about what a managed object represents, each definition includes a concise description and a type definition; for ifOutOctets the type is a 32-bit counter.
3. Notification definitions describe the possibilities for unsolicited transmission of management information, e.g. from an agent to a manager, using the NOTIFICATION-TYPE ASN.1 macro.
In addition, a name-to-number mapping exists throughout all MIBs, in order to access specific instances of managed objects. These names are called object identifiers (OIDs) and are administratively assigned.
An OID consists of a prefix and a part pointing to a specific instance. It is possible to specify new abstract datatypes for MIBs using SMIv2, though 11 base types have been defined:
INTEGER            IpAddress    TimeTicks
OCTET STRING       Counter32    Opaque
OBJECT IDENTIFIER  Gauge32      Counter64
Integer32          Unsigned32
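The prefix-plus-instance structure of an OID can be made concrete with a short Python sketch. The helper functions below are illustrative, not part of any SNMP library; the OID 1.3.6.1.2.1.2.2.1.16 is the standard location of ifOutOctets in the Interfaces MIB.

```python
def parse_oid(text):
    """Turn dotted OID notation into a tuple of integers."""
    return tuple(int(part) for part in text.strip(".").split("."))

def is_instance_of(prefix, oid):
    """True when `oid` names an instance underneath `prefix`."""
    return oid[: len(prefix)] == prefix

# Object definition of ifOutOctets, and the instance for interface index 2.
if_out_octets = parse_oid("1.3.6.1.2.1.2.2.1.16")
instance = parse_oid("1.3.6.1.2.1.2.2.1.16.2")
```

The trailing sub-identifier (here .2) is the administratively assigned instance part; everything before it is the object's prefix.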
Simple Network Management Protocol
The notion that computer networks and devices attached to networks should be manageable has existed for about as long as those networks themselves. The model of network management that is used for TCP/IP network management includes the following key elements:
Â¢ Management Station
Â¢ Management Agent
Â¢ Management Information Base
Â¢ Network Management Protocol
The communication between the Management Station (manager) and Management Agent (agent) uses the Simple Network Management Protocol (SNMP), of which several versions exist.
The only operations that are supported in SNMP are the inspection and alteration of variables.
Three general-purpose operations may be performed on scalar objects:
• Get: a management station retrieves a scalar object value from a managed station
• Set: a management station updates a scalar object value in a managed station
• Trap: a managed station sends an unsolicited scalar object value to a management station
Note that access is only allowed to leaf objects, hence it's not possible to access an entire table in one atomic action for example.
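Because only leaves are accessible, a manager retrieves a table by repeatedly asking for the lexicographically next object (GetNextRequest). The toy MIB and helper below are an illustrative model of that ordering, not the net-snmp API:

```python
def get_next(mib, oid):
    """Return the (oid, value) pair lexicographically following `oid`,
    or None at the end of the MIB view; this ordering is what lets a
    manager walk a table leaf by leaf."""
    for candidate in sorted(mib):
        if candidate > oid:
            return candidate, mib[candidate]
    return None

# Toy MIB: two columns of a two-row table, keyed by OID tuples.
toy_mib = {
    (1, 3, 6, 1, 1, 1): "row1-col1",
    (1, 3, 6, 1, 1, 2): "row2-col1",
    (1, 3, 6, 1, 2, 1): "row1-col2",
    (1, 3, 6, 1, 2, 2): "row2-col2",
}
```

Starting from the table's prefix and feeding each returned OID back in reaches every leaf exactly once, which is how utilities like snmpwalk operate.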
These operations have resulted in the definition of 7 message types in the SNMPv2 protocol:
GetRequest: request information about objects inside a MIB
GetNextRequest: request information about the next object
GetBulkRequest: request transfer of a potentially large amount of data
SetRequest: request to set objects in a MIB
Response: response to one of the four former messages
SNMPv2-Trap: alarm message from an agent to warn the manager
InformRequest: used for manager-to-manager communication
Authentication of SNMP messages is performed using community strings, which act as a kind of password. An SNMP message also has some fields that give information about errors in processing the request.
The network management of an IP network consists of network management stations (managers) communicating with network elements. The pieces of software running on those network elements facilitating management operations are called agents. This communication can be two-way: the manager may ask the agent for some information or put some information into the agent, or the other way around, when the agent sends some information to the manager without solicitation thereof (notifications).
A little explanation about the internal workings of an SNMP agent is necessary to understand certain design decisions taken in this project. As discussed earlier, an SNMP
agent uses a virtual information store, the MIB, to get and set values. The agent manages the device it is running on. However, it is unlikely that one piece of software can take full advantage of all the device's capabilities. Especially in multi-vendor environments, one may want to run multiple SNMP agents on the same box. One possibility is to run the various agents on different UDP ports, but then the manager would have to talk to multiple agents as well. Hence another solution has been developed: the mechanism of masters and subagents. There is one master agent and zero or more subagents. Each subagent implements a distinct part of the MIB, and the master agent dispatches requests from the manager to the correct subagent.
Each of these subagents is responsible for its own part of the MIB, and is more or less ignorant of the SNMP protocol spoken between the manager and (master) agent. Various protocols have been defined for master and subagent interaction.
DIFFERENTIATED SERVICES ARCHITECTURE
The key question answered in this chapter is:
What is DiffServ and what can it do?
DiffServ is a technique for specifying and controlling network traffic by class so that certain types of traffic get precedence. For example, voice traffic, which requires a relatively uninterrupted flow of data, might get precedence over other kinds of traffic, like e-mail. A Service Level Agreement (SLA) is a service contract between a customer and a service provider that, in this case, specifies the forwarding service a customer should receive. Part of an SLA may be a Traffic Conditioning Agreement that contains the more technical details of how the service contract is executed.
The Differentiated Services Architecture lays the foundation for implementing service differentiation in the Internet in an efficient and scalable way. The IETF DiffServ Working Group charter states:
(...)The differentiated services approach to providing quality of service in networks employs a small, well-defined set of building blocks from which a variety of aggregate behaviors may be built. A small bit-pattern in each packet, in the IPv4 TOS octet or the IPv6 Traffic Class octet, is used to mark a packet to receive a particular forwarding treatment, or per-hop behavior, at each network node. A common understanding about the use and interpretation of this bit-pattern is required for inter-domain use, multi-vendor interoperability, and consistent reasoning about expected aggregate behaviors in a network (...)
The Differentiated Services architecture is based on a relatively simple model. Traffic is classified when entering a network and possibly conditioned (e.g. shaped to a certain maximum rate) at the network's boundaries. Packets are also assigned to behaviour aggregates: a predefined number, the DiffServ Code Point (DSCP), is written into the first 6 bits of the IPv4 packet's Type-of-Service header field (figures 3.1 and 2.2). A behaviour aggregate is a collection of packets with the same DSCP value crossing a link in a particular direction.
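Since the DSCP occupies the upper six bits of the former ToS octet, marking a packet is a small bit operation. The sketch below is illustrative; the codepoint values shown (AF11 = 10, EF = 46) are the standard assignments for those per-hop behaviours.

```python
AF11 = 0b001010  # Assured Forwarding class 1, low drop precedence (decimal 10)
EF   = 0b101110  # Expedited Forwarding (decimal 46)

def set_dscp(tos, dscp):
    """Write a 6-bit DSCP into the upper bits of the ToS octet,
    preserving the lower two (ECN) bits."""
    return ((dscp & 0x3F) << 2) | (tos & 0x03)

def get_dscp(tos):
    """Read the DSCP back out of a ToS octet."""
    return tos >> 2
```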
The core of a network may consist of a mesh of links, routers, switches and other networking equipment. Each router packets traverse is called a hop. Packets, classified at the edge of the network, are forwarded according to the so-called per-hop behaviour (PHB) throughout the core of the network. The PHB is associated with the DSCP value.
Packets may be forwarded across multiple networks on their way from source to destination. Each of those networks is called a DiffServ Domain. More specifically, a DiffServ Domain is a set of routers implementing the same set of PHBs. Obviously, SLAs between the various operators of those networks are needed if customers want a particular level of service: a DSCP value of 1 in network A may be associated with a completely different level of service than it is in network B. Reassigning DSCP values when entering the next network is possible, but it saves resources if it is avoided. Note that the inclusion of non-DiffServ-compliant nodes within the path a packet traverses may result in unpredictable performance and hence affect the ability to satisfy the SLA.
DiffServ Building Blocks
The two main elements of the DiffServ conceptual model are Traffic Classification and Traffic Conditioning. A DiffServ router consists of numerous functional elements that implement these tasks. This section discusses these building blocks, with configuration and management in mind.
In figure 3.3 the logical view of the Classifier and Conditioner elements is given. Packets, when they arrive at the ingress interface of a DiffServ router, typically get classified, after which some actions are performed before the packets are forwarded to the next hop. A more detailed look at these functional elements follows in the next subsections.
Packet Classification: Classifier
The packet classification policy identifies the subset of traffic which may receive a differentiated level of service by being conditioned and/or mapped to a specific DSCP value within the DiffServ domain. Classification may be based on the content of the IP header. There are two types of classifiers:
1. Behaviour Aggregate Classifiers, which classify packets based on the DSCP value; and
2. Multi-Field Classifiers, which select packets based on the value of a combination of IP header fields, e.g. source and destination address.
A Classifier thus takes a stream of traffic as input and steers packets to the correct output. In most cases these output channels are connected to the input channels of other functional elements of the DiffServ architecture.
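Both classifier types can be sketched in a few lines of Python. The packet representation and function names are illustrative assumptions, not part of any DiffServ implementation:

```python
import ipaddress

def ba_classify(packet, dscp_table):
    """Behaviour Aggregate classifier: output chosen by DSCP alone."""
    return dscp_table.get(packet["dscp"], "default")

def mf_classify(packet, filters):
    """Multi-Field classifier: the first filter whose source prefix
    matches the packet wins."""
    src = ipaddress.ip_address(packet["src"])
    for prefix, output in filters:
        if src in ipaddress.ip_network(prefix):
            return output
    return "default"

filters = [("10.0.0.0/24", "stream-A")]
inside  = {"src": "10.0.0.7", "dscp": 10}
outside = {"src": "192.0.2.1", "dscp": 0}
```

A real Multi-Field classifier would also match on destination address, protocol and ports, but the prefix match already shows the idea.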
Traffic Profiles: Meter
If the properties of a stream of packets do not exceed certain predefined parameters, the packets are called in-profile (or, in the opposite case, out-of-profile). Traffic profiles thus specify the (temporal) properties of a traffic stream (which is selected by a classifier), and provide rules for determining whether a particular packet is in-profile or not.
An example traffic profile is: maximum rate 1 Mbit/s, but bursts of 2 Mbit/s are allowed for a duration of at most 20 seconds, provided they are at least 1 minute apart. Different traffic conditioning actions may apply depending on the result of applying such a traffic profile to a particular packet.
Figure 3.5 shows a very generic meter element. It takes a stream of traffic as input, and decides whether that stream is at that moment in time (temporal property) conforming, partially conforming, or not conforming at all to a certain traffic profile. The next functional elements in the DiffServ router for these three conformance levels might be different. A possible usage scenario is that non-conforming traffic is sent through a Counter element for out-of-band purposes like a billing application (to send a bill to a customer who is exceeding the traffic profile that was agreed upon).
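A widely used concrete meter is the token bucket, which also appears later as the example parameterization table in the DiffServ MIB. The class below is a simplified sketch of the conformance test; its interface is our own invention, not MIB or kernel code.

```python
class TokenBucketMeter:
    """Token bucket meter: tokens accumulate at `rate` (bytes/s) up to
    `burst` (bytes); a packet is in-profile when enough tokens remain."""

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0  # start with a full bucket

    def conforms(self, size, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True   # in-profile
        return False      # out-of-profile

meter = TokenBucketMeter(rate=1000, burst=1500)
```

The burst parameter bounds how much traffic may momentarily exceed the average rate, which is exactly the kind of temporal property the text above describes.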
Actions on packets: Marker, Counter, Multiplexer, Absolute Dropper
A group of elements performing actions on traffic streams, the Action elements, is usually invoked after the classification and/or metering phase.
• A Marker is an element that marks packets with a certain DiffServ Code Point (DSCP) value. This is usually done at the edge of a DiffServ domain. Classification of packets within the core of that domain is done entirely and solely based on the value of this DSCP field in the IP header.
• The Counter element does nothing more than update internal registers with the number of packets traversing this building block. A manager may use this to get information about the number of packets belonging to a certain traffic class, which might be important for billing purposes.
• Multiplexers are used to multiplex or de-multiplex (logical) streams of traffic, e.g. to combine the output of multiple functional elements into the input of a single counter.
• When it is necessary to drop packets regardless of their content, e.g. because traffic is not conforming to a certain profile, the DiffServ architecture provides the user with an Absolute Dropper element that simply discards any packets arriving at its input.
The functions performed by each element are clear from the descriptions above. A complete overview can be found in the DiffServ Architecture document.
Traffic Conditioning: Queueing element
Traffic conditioning has to do with shaping the stream of network traffic according to a predefined set of rules, e.g. based on previous classification; this is often called policing. The functionality of the queueing element is split into sub-components:
• FIFO queue
This is the simplest form of a queue. Packets get sent in order of arrival at the queue: first in, first out. This queueing technique is probably the best-known and most widely deployed of all possible algorithms. Packets that leave this queue typically go to a Scheduler element in order to get sent over the network.
• Scheduler
A scheduler uses an algorithm to determine in what order and at what time packets arriving at its input are forwarded to the network (using the underlying operating system's network stack). Such algorithms are called service disciplines. Parameters that affect the operation of a scheduler include (but are not limited to) static parameters, such as the relative priority associated with the input channels of a scheduler, and dynamic parameters, such as the DSCP value of packets currently at the input of a scheduler. Various categories of service disciplines are documented, like first come first served, rate-based, and weighted fair bandwidth sharing.
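A strict-priority discipline, one of the simplest schedulers, can be modeled as follows. The class is an illustrative sketch, not kernel code:

```python
class PriorityScheduler:
    """Strict-priority service discipline: always serve the non-empty
    input channel with the lowest priority number; within a channel,
    packets leave in FIFO order."""

    def __init__(self):
        self.channels = {}

    def enqueue(self, priority, packet):
        self.channels.setdefault(priority, []).append(packet)

    def dequeue(self):
        for priority in sorted(self.channels):
            if self.channels[priority]:
                return self.channels[priority].pop(0)
        return None  # nothing queued

sched = PriorityScheduler()
sched.enqueue(1, "bulk-1")
sched.enqueue(0, "voice-1")
sched.enqueue(0, "voice-2")
```

A drawback of strict priority, visible even in this sketch, is that a busy high-priority channel can starve lower-priority traffic, which is why weighted fair sharing disciplines exist.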
• Algorithmic Dropper
As the name says, this element selectively drops packets arriving at its input, using some selection algorithm. Note that the same functionality is achieved by selectively removing packets that are already in a FIFO queue. An algorithmic dropper may be put in front of or just after a (FIFO) queue, the former known as a tail dropper (because it drops packets that are about to be added to the tail of a queue).
Selection of packets to be dropped (or forwarded) is based on the result of some algorithm that internally triggers the dropping of a packet. A lot of research is being done in this area. A well-known algorithm is RED (Random Early Detection): the size of a queue is taken as an input parameter to a function that calculates the probability that a packet is dropped. The larger the queue, the higher this probability will be. Figure 3.6 shows these three elements of the Queueing functional building block working together.
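The linear region of the classic RED curve can be written down directly; the threshold and maximum-probability parameters in the sketch below are illustrative, not values from any particular implementation.

```python
def red_drop_probability(avg_qlen, min_th, max_th, max_p=0.1):
    """RED drop probability: zero below min_th, rising linearly to
    max_p as the average queue length approaches max_th, and a
    certain drop at or beyond max_th."""
    if avg_qlen < min_th:
        return 0.0
    if avg_qlen >= max_th:
        return 1.0
    return max_p * (avg_qlen - min_th) / (max_th - min_th)
```

Real RED implementations apply this function to an exponentially weighted moving average of the queue length rather than its instantaneous value, so short bursts are not penalized.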
It is probably clear by now that a DiffServ router is a complex technological artifact. Therefore the Service Level Agreement, describing the service a user should get from the operator, is unfortunately quite complex as well. An ambiguous SLA may lead to unpredictable performance; hence the obvious need for legally and technically clear agreements.
DIFFERENTIATED SERVICES MANAGEMENT
Together with the need for service differentiation comes the need for management thereof. The network service providers delivering DiffServ services to their customers have to configure and monitor their routers in order to be able to satisfy the SLAs. It is likely that in practice DiffServ will be offered to customers as a way to make a distinction between various levels of service, e.g. Premium, Gold, Silver and Bronze. The IETF snmpconf WG is chartered to write a Best Current Practices document describing the configuration management of network devices using SNMP. This includes Policy-Based Management.
DiffServ Informal Management Model
The DiffServ Informal Management Model is based on the DiffServ Architecture but focuses on configuration and management. In the architectural model, numerous functional elements have been defined. A combination of one or more of these building blocks results in a datapath. The Informal Management Model specifies the possible configuration parameters of these elements.
• The conceptual model of a DiffServ router, including management elements, is depicted in figure 4.1. It can be concluded from this model that the main configuration aspects of DiffServ management are related to the configuration of the ingress and egress interfaces. For network interfaces, the DiffServ-related parameters are divided into the parameters useful to the various building blocks: classification, metering, action and queueing elements.
• The Classifier element is based on filters that select matching and non-matching packets. Based on this selection, packets are forwarded along the corresponding datapath.
An example configuration that makes a distinction between packets coming from the network 10.0.0.0/24 and other packets is given below:
Filter                            Output Stream
IPv4 Source Address: 10.0.0.0     A
IPv4 Source Mask: 255.255.255.0
no match                          B
• Metering is a function that may be part of the datapath and is used to detect whether a traffic stream is in- or out-of-profile. In this example, burstiness is not taken into account.
AverageRate: 120 kbps
Delta: 100 msec
• Action elements operate on packets, i.e. they may mark, drop, or count packets, or simply do nothing. An example configuration of the DSCP Marker Action element:
• The Queueing elements are usually the last functional elements in the datapath a packet traverses. They take care of modulating the transmission of packets belonging to different traffic streams, by prioritizing, storing and/or discarding them before sending. An example configuration for an algorithmic dropper that drops packets when a certain queue is too long is given below:
Trigger: Fifo1.depth > 10kbyte
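The trigger above can be modeled as a dropper that consults the depth of its associated FIFO queue before admitting a packet. The class below is a hypothetical sketch (the 10-kbyte trigger mirrors the example configuration), not MIB or kernel code:

```python
class ThresholdDropper:
    """Algorithmic dropper sketch: drop when the byte depth of the
    associated queue exceeds the configured trigger."""

    def __init__(self, queue, trigger_bytes=10 * 1024):
        self.queue = queue          # list of packet sizes, in bytes
        self.trigger = trigger_bytes

    def accept(self, packet_size):
        if sum(self.queue) > self.trigger:
            return False            # drop: queue is too long
        self.queue.append(packet_size)
        return True                 # enqueue

fifo1 = []
dropper = ThresholdDropper(fifo1)
```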
The DiffServ MIB describes the configuration and management aspects of devices implementing the DiffServ Architecture. Specifically, it describes the configuration parameters of the various elements that are defined by that architecture. The current status of the MIB is Internet Draft, although it is expected to move onto the IETF Standards Track. The current (draft) version of the MIB can be found in appendix B of this report in tree format.
The DiffServ MIB contains the functional elements of the datapath, using various tables. The idea is that RowPointers are used to combine the various functional elements into one datapath. The elements of the MIB are described below (note that the ingress and egress portions of a DiffServ device are modeled identically).
Data Path Table
The Data Path Table contains the starting points of the DiffServ datapaths. Separate datapaths are distinguished by the network interface on which traffic is received or sent, as well as by the direction of the traffic stream.
Classifier and Filter tables
The classifier table contains a framework that is extensible with multiple filters, by containing pointers to the tables with the specific filters.
Meter Tables
The meter tables contain an extensible framework for the meter functional element, as well as an example parameterization table: the token bucket meter.
Action Tables
The absolute dropper, DSCP marker and counter are functional elements that fall into this category. The do-nothing and (de)multiplexing elements are also part of the Action building block, but are not represented in the MIB. Their behaviour can, however, be expressed in the MIB using RowPointers.
Queue, Scheduler and Algorithmic Dropper Tables
The queueing elements of the DiffServ architecture and management model include algorithmic droppers, queues and schedulers. These functional elements are represented in the MIB using various tables.
DiffServ Policy MIB
The DiffServ Policy MIB module provides a conceptual layer between high-level policy definitions that affect configuration of the DiffServ (sub)system and the instance-specific information that would include such details as the parameters for all the queues associated with each interface in a router. This gives an interface for DiffServ configuration at a conceptually higher layer.
A commonly used example is to make a distinction between various levels of service, e.g. Premium, Gold, Silver and Bronze. These are templates for the configuration of the instance- level DiffServ MIB, i.e. a Silver service could be defined as a datapath that implements the following requirements:
1. Match inbound traffic from IP number 10.0.0.1, and limit this to a rate of 500kb/s and a maximum burst of 800kb, while marking with the DSCP value corresponding to the AF11 Per-Hop Behaviour.
2. Match all other inbound IP traffic, limit this to a rate of 64kb/s and a maximum burst of 80kb, while marking it with AF11 if the traffic is in-profile; otherwise drop the traffic.
Figure 4.8: DiffServ Policy MIB and DiffServ MIB implementing the Silver service
TRAFFIC CONTROL IN THE LINUX KERNEL
This chapter gives an overview of the architecture of the Network Traffic Control (TC) code and describes its structure, as well as the DiffServ-specific parts. It is necessary to look into this because the structure of the DiffServ MIB differs from that of the DiffServ implementation in Linux. This mismatch leads to problems that need to be solved in order to implement the DiffServ MIB.
TC can, among other things, decide whether a packet should be queued or dropped (the latter, for example, in cases where the traffic exceeds certain thresholds), in which order packets are sent (hence giving priority to certain network traffic flows), and it can delay the sending of packets (e.g. to limit the rate of outbound traffic). Once TC has released a packet for sending, the device driver picks it up and emits the packet to the network.
The TC framework consists of four major conceptual components that are discussed in the following subsections:
• queueing disciplines
• classes
• filters
• policing
A more sophisticated queueing discipline might use a filter to determine, depending on the originating IP address of the packet, whether to forward the packet as fast as the interface permits or to enforce a specific maximum traffic rate, hence possibly giving priority to one packet over another.
Every queueing discipline has one or more classes attached to it. The very existence of classes, and their semantics, are fundamental properties of a queueing discipline (qdisc). A qdisc uses classes to treat various classes of traffic in different ways; this distinction is made using filters. Classes are not storage places: they can use other queueing disciplines for that. So within a queueing discipline attached to a network device, other queueing disciplines may reside. And to these qdiscs, other filters and classes may be attached, giving enormous flexibility (and configuration complexity) to the user of the TC framework.
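The recursive structure described above can be captured in a small data model. The types below are illustrative, not the kernel's actual structures; the depth function merely shows how classes let qdiscs nest inside each other.

```c
#include <stddef.h>

struct qdisc;

/* A class belongs to a qdisc and may hold an inner qdisc for storage. */
struct tc_class {
    unsigned int classid;
    struct qdisc *inner;       /* classes store packets via inner qdiscs */
    struct tc_class *next;     /* sibling classes of the same qdisc */
};

struct qdisc {
    unsigned int handle;
    struct tc_class *classes;  /* zero or more classes attached */
};

/* Nesting depth below a qdisc: each class counts one level and may
 * recurse into its inner qdisc. */
static int tc_depth(const struct qdisc *q)
{
    int max = 0;
    for (const struct tc_class *c = q->classes; c; c = c->next) {
        int d = 1 + (c->inner ? tc_depth(c->inner) : 0);
        if (d > max)
            max = d;
    }
    return max;
}
```

A qdisc with a class holding an inner qdisc (which in turn has a plain class) thus has depth 2, mirroring a typical CBQ-with-inner-FIFO setup.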
Filters are used by a queueing discipline to assign incoming packets to one of its classes, at enqueuing time. Filters are kept in filter lists that can be maintained either per qdisc or per traffic class, depending on the design of the queueing discipline, and are ordered by priority.
To prevent network traffic from exceeding certain bounds, policing is used. In the context of Linux Network Traffic Control, policing affects all traffic control actions that depend in some way on the traffic volume. This includes but is not limited to decisions about whether to drop or to enqueue packets in both the inner and outer queueing disciplines.
Possible criteria that are the parameters to this decision are maximum packet size, average rate of the traffic, the peak rate and the burstiness of the traffic. But it is certainly possible to extend this list with other properties, if an implementor would want that.
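The rate and burstiness criteria mentioned above are classically combined in a token-bucket meter. The sketch below illustrates the idea only; the structure and units (bytes, milliseconds) are assumptions for this example, not the kernel's policing code.

```c
#include <stdint.h>

struct tbmeter {
    uint64_t rate;     /* average rate allowed, in bytes per second */
    uint64_t burst;    /* bucket depth in bytes (maximum burst) */
    uint64_t tokens;   /* bytes currently available */
    uint64_t last_ms;  /* time of last update, milliseconds */
};

/* Returns 1 if the packet is in-profile (and consumes tokens), 0 if it
 * exceeds the configured rate/burst and should be policed. */
static int tb_conform(struct tbmeter *m, uint64_t now_ms, uint32_t pkt_len)
{
    /* refill tokens for the elapsed time, capped at the burst size */
    uint64_t refill = m->rate * (now_ms - m->last_ms) / 1000;
    m->tokens = m->tokens + refill > m->burst ? m->burst
                                              : m->tokens + refill;
    m->last_ms = now_ms;
    if (pkt_len > m->tokens)
        return 0;              /* out of profile */
    m->tokens -= pkt_len;
    return 1;
}
```

A policer would drop (or remark) packets for which `tb_conform` returns 0; a shaper would delay them instead.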
DiffServ in the Linux kernel
The three main functions (classification, metering and queueing/scheduling) are performed by different elements in the two architectures, as is highlighted by the grey boxes. An obvious conclusion is that the DiffServ architecture has not been designed specifically for Linux, nor has the Network Traffic Control code been tailored for DiffServ.
The good news is that the TC framework nevertheless offers most of the functionality required for implementing DiffServ support. EPFL used this framework to extend the Linux kernel with DiffServ support. Three new elements have been added to the original implementation:
• To support the Per-Hop Behaviours defined by the IETF diffserv WG (expedited and assured forwarding), a queueing discipline implementing the RED algorithm (in particular GRED) has been added (sch_gred).
• The use of the DiffServ Code Point, necessary for the scalable classification of packets throughout the network, leads to the introduction of yet another queueing discipline (sch_dsmark).
• A new classifier that uses this information is needed as well (cls_tcindex).
In the Linux kernel, traffic control is focused on the egress part of a router. This is in contrast with the DiffServ architecture, which makes heavy use of the ingress interface. In order to support this, it is necessary for routers running the Linux operating system to be able to distinguish packets using the inbound network interface. This is done using the netfilter infrastructure and its iptables commandline utility.
Even without the use of the DiffServ MIB, it is possible to configure the traffic control functions in the Linux kernel. This is achieved by using the tc commandline utility.
A full description of this tool is outside the scope of this document. The author is not aware of a reference guide, but an overview of the various options and commands is available in Portuguese. In order to test the prototype of the DiffServ MIB on Linux, the router has been set up using this tc utility.
THE DIFFSERV MIB AND THE LINUX KERNEL
A DiffServ implementation for the Linux kernel doesn't know about the DiffServ MIB specification. Therefore, it is necessary to map the various management functions offered by the MIB to functions provided by the DiffServ implementation. The management of both designs is tailored after the respective architectures: the MIB is modeled after the DiffServ Architecture, but the kernel is to be configured using handles that are pointers to the various elements by sending rtnetlink messages.
This leads to the conclusion that conversion needs to be done before the MIB can be filled with values from the kernel (like counters), and that configuration information that is written into the MIB by a manager application needs to be translated to the correct rtnetlink messages. It is clear that various protocols have to be combined.
Representing information from the kernel
The prototyping in this assignment is focused on the monitoring part of the manager-agent paradigm, i.e. the manager retrieves values from the agent. Therefore the implementation of the agent needs to gather information when it is requested to do so (or on its own initiative). In the case of the DiffServ MIB agent, this means that the agent has to get information from the kernel using rtnetlink messages. This section covers the various tables from the DiffServ MIB and explains what needs to be done in order to retrieve the requested information.
Data Path Table
This table is merely a starting point for the DiffServ management information. The network interfaces are numbered according to the ifTable numbering, which happens to be the same as the internal numbering used in the Linux kernel. This information is retrieved using an RTM_GETLINK message. No conversion needs to be done at all, though parsing the resulting message of course remains necessary.
In the MIB there is the SixTuple Classifier which makes it possible to represent any part of the IP and transport layer headers in the MIB, like IP addresses, DSCPs and port numbers. The Classifier tables are indexed by two separate identifiers, enumerating the classifier entries and the classifier element entries. In Linux, there is no unique identification for a filter. Elements within a single filter are identified with a handle though. Multiple filters belonging to the same queueing discipline or class are ordered by a numerical priority value. Thus a DiffServ MIB implementation must take care of assigning unique identifiers to the filters and their elements itself.
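The identifier mapping called for above can be sketched as follows. The agent sees kernel filters that carry only a priority (per qdisc) and an element handle, and must derive stable MIB-level (classifier, element) index pairs from them. The table layout and assignment strategy here are one possible approach, not a prescribed one.

```c
#include <stdint.h>

struct clfr_map {
    uint32_t prio;     /* kernel: filter priority within the qdisc */
    uint32_t handle;   /* kernel: element handle within the filter */
    uint32_t clfr_id;  /* MIB: classifier index (assigned by agent) */
    uint32_t elem_id;  /* MIB: classifier element index (assigned) */
};

/* Assign MIB indices: one classifier per distinct kernel priority,
 * elements numbered in order of appearance within that classifier. */
static void assign_indices(struct clfr_map *t, int n)
{
    uint32_t next_clfr = 0;
    for (int i = 0; i < n; i++) {
        t[i].clfr_id = 0;
        for (int j = 0; j < i; j++)          /* reuse id of same priority */
            if (t[j].prio == t[i].prio)
                t[i].clfr_id = t[j].clfr_id;
        if (t[i].clfr_id == 0)
            t[i].clfr_id = ++next_clfr;
        uint32_t elem = 1;                   /* count earlier siblings */
        for (int j = 0; j < i; j++)
            if (t[j].clfr_id == t[i].clfr_id)
                elem++;
        t[i].elem_id = elem;
    }
}
```

The indices must also stay stable across SNMP requests, so a real agent would persist this table rather than recompute it per request.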
The Meter tables can be found in the diffServMeter and diffServTBParam subtrees of the MIB. Linux uses only three primitives when dealing with traffic control: qdisc, filter and class. None of them is exactly the same as a Meter in the DiffServ architecture. However, the Classifier element, which is not a real element as it is not formally defined in the TC architecture, corresponds to what the filter primitive can do in terms of functionality. A conclusion and solution to this problem can easily be drawn: the RTM_FILTER messages can be used to gain knowledge about policing in the kernel as well. Policing is something that typically only occurs at edge routers at the boundary of a DiffServ domain; core routers use filters only for determining the DSCP value.
There are basically two operations that belong to these tables: marking with some DSCP value, and counting packets. The Marking operation in the DiffServ architecture is part of the Class element in the TC architecture; the corresponding primitive, class, provides an implementor with the necessary operations, using the RTM_TCLASS messages. Counting is achieved in the TC architecture using a packet-counting FIFO that can be used as an inner queueing discipline, controlled with the qdisc primitive. In the Linux kernel this special FIFO queue is enhanced with a limit on its size, but the DiffServ MIB does not support this for the Counting operation. Indexing these tables with the right values is once again up to the agent, though the handles and classids that are used internally by the Linux TC engine might be helpful.
In the MIB, queueing falls into three subcategories: algorithmic dropper management, queuing management and scheduling management. They are all part of both the class and qdisc primitives in the Linux TC architecture. Class-Based Queuing (CBQ) is often used in Linux to get this functionality.
The TC implementation provides a so-called Weighted Round Robin (WRR) scheduling method as part of CBQ, and parameters to this algorithm are accessed using the RTM_QDISC messages. WRR can be represented in the MIB in the Scheduler table, using a diffServAssuredRateEntry to store the parameters. The diffServShapingRate table is not used, as there is no corresponding implementation in the Linux Network Traffic Control engine. Administration such as indexing values for these tables is up to the DiffServ MIB agent. Theoretically it is possible to share queues in TC, but this is very complicated and introduces other problems.
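The WRR idea referred to above can be sketched briefly: each queue is visited in turn and may send up to its weight in packets per round. This is a simplification for illustration; the actual CBQ code weights by bytes and borrows bandwidth between classes.

```c
/* A queue in a weighted round-robin scheduler (illustrative types). */
struct wrr_queue {
    int weight;   /* packets this queue may send per round */
    int backlog;  /* packets currently waiting in the queue */
};

/* One full round over n queues; returns the number of packets
 * dequeued and updates each backlog in place. */
static int wrr_round(struct wrr_queue *q, int n)
{
    int sent = 0;
    for (int i = 0; i < n; i++) {
        int take = q[i].backlog < q[i].weight ? q[i].backlog : q[i].weight;
        q[i].backlog -= take;
        sent += take;
    }
    return sent;
}
```

The per-queue weights are what a diffServAssuredRateEntry would carry as the relative rate parameters.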
Setting configuration information in the kernel
No extensive research has been done in this area, but the problems discovered in the previous section apply here as well: there is no exact match between the Linux Network Traffic Control architecture and the DiffServ architecture, so an implementor has to come up with conversions. It seems possible to configure TC using the DiffServ MIB, i.e. to implement the configuration part of the manager-agent paradigm, but this has not been done in this project. The usage of complex tables in the DiffServ MIB makes things very complicated, but not impossible.
DIFFSERV MIB IMPLEMENTATION
This chapter gives an overview of a prototype implementation of the DiffServ MIB on a router running the Linux operating system. It starts with a description of the net-snmp SNMP implementation from an implementer's point of view. Because of the amount of reusable code in SNMP agents, program code is often generated automatically using code generators; in this project, libsmi was used. The chapter continues with a (global) description of rtnetlink, the protocol that is used to retrieve the information the MIB needs from the kernel. Finally a small example, part of the DiffServ MIB implementation, is discussed.
Support routines: net-snmp
Net-snmp is a suite that implements the Simple Network Management Protocol and comes with an extensible (master) agent, a library with SNMP related support routines and various tools to perform Get and Set operations on information stored in a MIB. This software package may be downloaded from the project home page. It can be run on various operating systems, such as Solaris and Linux, and it supports SNMPv1, SNMPv2c and SNMPv3. It also has support for the AgentX (Agent Extensibility) protocol.
Apart from the SNMP Agent, that comes with support for numerous MIBs in the mib-2, the net-snmp suite contains a number of (shared) libraries that independent applications may use to act as an SNMP agent. The library provides functions for almost all general SNMP related routines, like receiving and sending SNMP PDUs. One can imagine that it is not really efficient to have one piece of code implementing every MIB; a modular approach, with different modules implementing different parts of the MIB-tree, is far more attractive. There are various possibilities for extending the net-snmp agent with support for other MIBs, e.g. the DiffServ MIB.
• Agent Extensibility Protocol (AgentX)
This model uses real master- and sub-agents, communicating over a channel like TCP/IP or UNIX Domain Sockets. The master-agent, i.e. net-snmp, dispatches requests that should be handled by a sub-agent to that particular sub-agent over that channel. The sub-agent returns whatever needs to be returned, and the master-agent sends it back to the manager entity.
• SNMP Multiplexing Protocol (SMUX)
This protocol allows a user-process, termed a SMUX peer, to register itself with the running SNMP agent when it wants to export a MIB module. This protocol is the predecessor of AgentX, and is regarded as historic by the IETF.
• dlmod (net-snmp proprietary interface)
With net-snmp it is possible to put directives in the configuration file of the SNMP agent that make it load shared object files, i.e. compiled sub-agents that
implement MIBs outside of the net-snmp code base. These sub-agents use the net-snmp shared libraries for the non-instrumentation part of their code. They get loaded by the master-agent at startup. The idea of the master-agent dispatching requests for particular parts of the MIB- tree to those sub-agents remains intact.
To summarize, what happens when running the net-snmp suite with a DiffServ MIB implementation is, in pseudo-code:
 1  start and initialize master agent
 2  load DiffServ MIB sub agent
 3  while true
 4      wait for request from manager
 5      if request is for DiffServ subtree
 6          dispatch to DiffServ MIB sub agent
            (in sub agent)
 7          call function that handles this OID
 8          create netlink message if necessary
 9          send and receive netlink messages
10          parse netlink message
11          send result back to master agent
            (end of sub agent processing)
12      else /* request is for another sub agent */
13          dispatch to that sub agent
14          receive result from sub agent
15      construct SNMP response PDU, send to manager
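The dispatch decision in steps 5 and 12 reduces to an OID subtree test: the master agent checks whether the requested OID lies under a registered prefix. The sketch below shows only that core test; the `oid_t` type and function names are illustrative, not net-snmp's actual API.

```c
#include <string.h>

typedef unsigned long oid_t;

/* Does `oid` (length n) lie under the registered subtree `prefix`
 * (length plen)?  An OID is in a subtree when the prefix is an exact
 * leading match and the OID is at least as long. */
static int oid_in_subtree(const oid_t *oid, int n,
                          const oid_t *prefix, int plen)
{
    if (n < plen)
        return 0;
    return memcmp(oid, prefix, (size_t)plen * sizeof(oid_t)) == 0;
}
```

The master agent would walk its registration list in prefix-length order and hand the request to the sub-agent owning the longest matching subtree.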
Libsmi is a library to access SMI MIB information. Applications can use it to access SMI information that is stored in various repositories (plain text, but the model allows for network access as well) containing SMIv1, SMIv2 and SMIng MIB module files. Its purpose is to separate the parsing and handling of these MIB module files from the manager or agent application itself. On top of this library, various tools are available, e.g. a MIB syntax checker and a tool to dump and convert SMI files into various formats. One of these formats is C program code: this gives the implementer a framework that needs to be filled in, containing most of the reusable code that is present in almost every SNMP agent. This driver was used in this DiffServ MIB prototype to generate the C program code.
Other drivers provide for example a conversion from SMI to Java code to use with the JAX AgentX program. The decision was made in this assignment to use C code instead of Java because of the environment the agent has to run in: a Linux machine. Linux is quite C-oriented, and especially the netlink part for communication with the kernel is difficult to implement using Java.
Also the big advantage of Java, portability between various operating systems and platforms, is not an issue here as other platforms have very different interfaces to their DiffServ implementation. A DiffServ MIB agent developed for the Linux operating system will most likely not run on other platforms.
Linux has support for a lot of advanced networking features, including policy based routing and Quality of Service. These features are configured and controlled using a netlink socket interface. Routing messages, called rtnetlink, are special netlink messages to control the routing behaviour of Linux, which includes Network Traffic Control and DiffServ functionality.
Netlink is used to transfer information between kernel and user-space processes, over its bidirectional communications links. It consists of a standard socket-based interface for user processes and an internal application program interface (API) for the kernel. Netlink sockets are raw sockets, but the service is datagram-oriented.
This means that a user can get information from the kernel, as outlined in the following piece of pseudo-code:
(user process)
1  open netlink socket
2  construct request message
3  send message over netlink socket
(kernel)
4  receive and parse message
5  find necessary information in internal data
6  construct response message
7  send message over netlink socket
(user process)
8  wait for messages on the open netlink socket
9  receive and parse message
/* do whatever the user space program needs to do */
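The request and response messages in the exchange above share a common framing: a fixed header carrying the length, message type and a sequence number that matches a response to its request, with the total size padded to a 4-byte boundary. The sketch below hand-rolls that framing for portability; a real agent would use `<linux/netlink.h>` and its NLMSG_* macros instead.

```c
#include <stdint.h>

#define NL_ALIGNTO 4u
/* round len up to the next 4-byte boundary, as netlink framing does */
#define nl_align(len) (((len) + NL_ALIGNTO - 1) & ~(NL_ALIGNTO - 1))

/* Mirror of the netlink message header layout (illustrative). */
struct nl_hdr {
    uint32_t len;    /* header plus payload, before padding */
    uint16_t type;   /* message type, e.g. an RTM_GETLINK-like code */
    uint16_t flags;  /* request/dump/ack flags */
    uint32_t seq;    /* matches a response to its request */
    uint32_t pid;    /* sender port id */
};

/* Total space a message with `payload` bytes occupies in a buffer,
 * including the alignment padding between consecutive messages. */
static uint32_t nl_msg_space(uint32_t payload)
{
    return nl_align((uint32_t)sizeof(struct nl_hdr) + payload);
}
```

Parsing a receive buffer is then a matter of stepping ahead by `nl_msg_space` of each message until the buffer is exhausted.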
CONCLUSIONS AND RECOMMENDATIONS
The DiffServ MIB is supposed to help network service providers operate their networks by managing the DiffServ functionality in routers. The MIB is closely modeled after the Differentiated Services Architecture. An operator can manage every aspect of DiffServ when the DiffServ functionality in the router is modeled after the Architecture defined by the IETF diffserv WG. In systems where this is not the case, however, the DiffServ functionality may or may not be fully manageable using the DiffServ MIB. The MIB addresses this issue by providing functionality to extend the MIB in such cases, by means of Row Pointers to other MIBs. Vendors can provide MIBs that comply with their own implementation of DiffServ; these can be linked to the DiffServ MIB, so that a network manager is able to fully utilise the router's capabilities.

The prototyping environment in this assignment is DiffServ on a router running the Linux operating system. Linux comes with a generic framework for Network Traffic Control, and DiffServ support is implemented on top of that framework. The underlying architecture is different from the DiffServ Architecture, and it turns out that the problem outlined above is present. Most of the router's DiffServ functionality is manageable with the DiffServ MIB, and it is possible to fill most of the DiffServ MIB with sensible values retrieved from the DiffServ implementation in the Linux kernel. But to achieve this result, it is necessary to map functionality from the DiffServ Architecture to the elements of the Linux Network Traffic Control, which is non-trivial.
The second issue addressed in this assignment is feedback to the MIB authors: implementation work tends to reveal problems with a draft version, like under- or over-specification. The IETF DiffServ Working Group will be interested, so giving them feedback about the results of this work is important. The DiffServ MIB seems well equipped: it is perfectly possible to manage every building block mentioned in the Architecture, and it is extensible for most situations in which those building blocks do not apply to the managed router. This has been reported to the IETF community via email, as listed in appendix E. During the prototyping process, the author has closely followed the IETF diffserv WG mailing list, as well as other DiffServ-related lists such as the Linux-specific DiffServ list.
The differences between the DiffServ Architecture and the Linux Network Traffic Control are fundamental. It is important to note that (full) DiffServ functionality can be achieved without conforming to the IETF diffserv WG's architectural document, but the DiffServ MIB might be less suited in such cases. TC is only a relatively small part of Linux' network code. The classification functionality of DiffServ is implemented in an entirely different part of the kernel: netfilter. Netfilter provides a functional framework for filtering, network address translation and packet mangling. Marking packets with a particular DSCP value on the ingress side of a router is much easier to do with netfilter than it is with TC. But the disadvantage is that netfilter cannot be configured using the netlink interface to the kernel. It is probably possible to create a MIB that configures netfilter; the DiffServ MIB may point to that MIB as the parameterization of classifier elements.
1. Werner Almesberger. Linux network traffic control: Implementation overview. April 1999. http://ftp.lrcftp.epfl.ch/pub/people/alm...urrent.ps.
2. Baker, Chan, and Smith. Management information base for the differentiated services architecture. http://www.ietf.org/internet-drafts/draf...ib-10.txt, June 2001 (work in progress).
3. IETF diffserv WG. Differentiated services working group charter page. http://www.ietf.org/html.charters/diffse...rter.html.
4. Almesberger et al. Differentiated services on Linux. http://ftp.icaftp.epfl.ch/pub/linux/diff...-01.ps.gz, June 1999.
5. Bernet et al. Diffserv informal management model. http://www.ietf.org/internet-drafts/draft-ietf-diffserv-model-06.txt, February 2001 (work in progress).
6. Blake et al. Architecture for differentiated services. (RFC 2475), December 1998. http://www.ietf.org/rfc/rfc2475.txt.
7. Crawley et al. A framework for qos-based routing in the internet. (RFC 2386), August 1998. http://www.ietf.org/rfc/rfc2386.txt.
8. Daniele et al. Agent extensibility (agentx) protocol version 1. (RFC 2741), January 2000. http://www.ietf.org/rfc/rfc2741.txt.
9. Heinanen et al. Assured forwarding phb group. (RFC 2597), June 1999. http://www.ietf.org/rfc/rfc2597.txt.
10. Jacobson et al. An expedited forwarding phb. (RFC 2598), June 1999. http://www.ietf.org/rfc/rfc2598.txt.
11. McCloghrie et al. Structure of management information version 2. (RFC 2578), April 1999. http://www.ietf.org/rfc/rfc2578.txt.
12. N. Semret et al. Peering and provisioning of differentiated internet services. Proceedings of the IEEE INFOCOM'2000, March 2000.
13. Ethereal. Ethereal network analyzer. http://www.ethereal.com/.
14. S. Floyd and V. Jacobson. Random early detection gateways for congestion avoidance. IEEE/ACM Transactions on Networking, 1(4):397-413, August 1993.
15. Hazewinkel and Partain. The diffserv policy mib. http://www.ietf.org/internet-drafts/draft-ietf-snmpconf-diffpolicy-04.txt, March 2001 (work in progress).
16. IETF intserv WG. Integrated services working group charter page. http://www.ietf.org/html.charters/intserv-charter.html.
17. Alexey Kuznetsov. Network traffic control configuration tools. http://ftp.ftp.inr.ac.ru/ip-routing/.
18. IETF mpls WG. Multiprotocol label switching working group charter page. http://www.ietf.org/html.charters/mpls-charter.html.
19. NET-SNMP. Project home page. http://www.net-snmp.org/.
20. Institute of Operating Systems and Computer Networks, TU Braunschweig. libsmi home page. http://www.ibr.cs.tu-bs.de/projects/libsmi/.
21. DPNM Lab Postech. Diffserv webpage.
22. Rui Prior. Qualidade de servico [Quality of service]. 2001. http://telecom.inescn.pt/doc/msc/rprior2001.pdf.
23. M. Rose. Snmp mux protocol and mib. (RFC 1227), May 1991. http://www.ietf.org/rfc/rfc1227.txt.
24. Oscar Sanz. Feasibility study of implementing the DiffServ MIB. University of Twente, 2000.
25. Jürgen Schönwälder. Internet management standards: Quo vadis? February 1999. http://www.ibr.cs.tu-bs.de/schoenw/papers/mvs-99.ps.gz.
26. IETF snmpconf WG. Configuration management with SNMP working group charter page. http://www.ietf.org/html.charters/snmpco...rter.html.
27. William Stallings. SNMP, SNMPv2, SNMPv3, and RMON 1 and 2. Addison-Wesley, third edition, 1999.
28. W. Richard Stevens. TCP/IP Illustrated, Volume 1: The Protocols. Addison-Wesley, 1994.
29. UTWENTE/TSS-MGT and TUBS/IBR. Simpleweb homepage. http://www.simpleweb.org/
I express my sincere thanks to Prof. M.N Agnisarman Namboothiri (Head of the Department, Computer Science and Engineering, MESCE) and Mr. Zainul Abid (staff in charge) for their kind co-operation in presenting the seminar.
I also extend my sincere thanks to all other members of the faculty of Computer Science and Engineering Department and my friends for their co-operation and encouragement.
Praseena T Sivan