Developer Guide

Preface

This document is a developer guide for Push Framework. It is intended for developers wishing to create new server applications on top of this framework.

Introduction

Push Framework is a C++ network application framework for developing asynchronous, scalable servers on Windows Server and Linux. It shields your application code from the complexities of dealing with sockets and multiple threads. The framework makes full use of the complex but powerful IOCP mechanism on Windows Server and the epoll interface on Linux, while exposing only a handful of API classes for the developer to deal with.

Before you start

Download the library. Create a new C++ project, add the PushFramework include directory to your compiler settings, and add the output directory to the additional library paths in the linker settings. Make sure that “PushFrameworkInc.h” is included for compilation:

#include "PushFrameworkInc.h"

Visual Studio users: it is common to add this line to stdafx.h so the compiler “sees” Push Framework symbols before compiling your custom code.

Understanding the big picture

A single object encapsulates all of the library's functionality for you:

PushFramework::Server server;

The entry point of your program should create one instance of the PushFramework::Server class, configure it, then call its ::start method. When you shut down your program, call the ::stop method.
The library handles listening, IO processing, demultiplexing and dispatching, as well as profiling and remote interaction with the dashboard. Your main job consists of providing the following information before calling ::start:

  1. How incoming and outgoing data should be represented. You provide this information in the form of a concrete implementation of the PushFramework::Protocol class. Derive a new class from PushFramework::Protocol, instantiate it and pass its address to your server object via the following member:
    //Assuming that "protocol" is your instantiated Protocol class object :
    server.setProtocol(&protocol);
  2. How incoming connections are logged in/off and what data structure represents a logged client. You provide this information through a concrete implementation of the PushFramework::ClientFactory class. Note that this class deals with data structures of type PushFramework::LogicalConnection: a connected client is represented by a LogicalConnection subclass. To make the latter concrete, you must override LogicalConnection::getKey, which should return a unique key value used for identification and storage.
  3. How incoming requests should be handled. You write your servicing logic in separate Service subclasses. An instance of each subclass is registered with a separate service id, so that each incoming packet that is deframed by your protocol class and identified as belonging to a service id can be routed to that instance. Write your logic by overriding the following method:
    void Service::handle( LogicalConnection* pClient, IncomingPacket* pRequest )
    {
        //Put your servicing code here.
    }

    Register a different Service instance for each service id. Your protocol class supplies the service id at decoding time; the id is then used to route each incoming packet to the corresponding Service instance. A minimal sketch of a Service subclass follows this list.
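For instance, a minimal Service subclass and its registration could look like the sketch below. The class name, service id value and service name are illustration choices made for this guide, and "server" is the PushFramework::Server instance shown above:

#include "PushFrameworkInc.h"
using namespace PushFramework;

// Minimal sketch of a Service subclass.
class EchoService : public Service
{
public:
    void handle(LogicalConnection* pClient, IncomingPacket* pRequest)
    {
        // Cast pRequest to your concrete IncomingPacket subclass, run your
        // business logic, then push any response to pClient
        // (see LogicalConnection::PushPacket later in this guide).
    }
};

// Registration, done before calling server.start() :
EchoService echoService;
server.registerService(1 /*serviceId*/, &echoService, "echo");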

Designing the Protocol

Before implementing anything else, it is very important to first design and implement your protocol. After all, in almost all of the subsequent code you will have to deal with the following data structure types:

  1. PushFramework::IncomingPacket
  2. PushFramework::OutgoingPacket

In fact, the library transforms the data it receives from a peer client into IncomingPacket objects before handing it to you. Examples:

  • int ClientFactory::onFirstRequest(IncomingPacket& request, ConnectionContext* lpContext, LogicalConnection*& lpClient, OutgoingPacket*& lpPacket)
  • void Service::handle( LogicalConnection* pClient, IncomingPacket* pRequest )

It also requires you to gather any outgoing information in the form of OutgoingPacket instances. Examples:

  1. SendResult LogicalConnection::PushPacket( OutgoingPacket* pPacket)
  2. void BroadcastManager::PushPacket( OutgoingPacket* pPacket, BROADCASTQUEUE_NAME channelName )

The library makes zero assumptions about the connection established with the peer client: multiple requests can be received and multiple responses can be sent asynchronously. So a message unit is represented by an IncomingPacket or an OutgoingPacket. The reason why Push Framework uses two different classes is that some applications represent received data very differently from outgoing data.
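For example, a hypothetical chat service could represent its message units with the two minimal classes below. The class names and the text member are illustration choices, not framework types; if your version of IncomingPacket or OutgoingPacket declares pure virtual members, they must be overridden as well:

#include "PushFrameworkInc.h"
#include <string>

// Hypothetical request: one line of text received from the peer client.
class ChatRequest : public PushFramework::IncomingPacket
{
public:
    std::string text;
};

// Hypothetical response: one line of text to be sent back to clients.
class ChatResponse : public PushFramework::OutgoingPacket
{
public:
    std::string text;
};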

When data is received, your job is to help the library deserialize the incoming stream by gathering messages into new IncomingPacket instances. You are also required to supply the service id (a.k.a. request id) corresponding to each deserialized packet, so the framework can route it to the Service instance registered with that service id. Of course, if you handle all incoming packets in the same way, then create and register a single Service instance and use a single value for the service id. To make a concrete Protocol subclass, override the following methods:

  • virtual int encodeOutgoingPacket(OutgoingPacket& packet) =  0;
  • virtual int frameOutgoingPacket(OutgoingPacket& packet, DataBuffer& buffer, unsigned int& nWrittenBytes) = 0;
  • virtual int tryDeframeIncomingPacket(DataBuffer& buffer, IncomingPacket*& pPacket, int& serviceId, unsigned int& nExtractedBytes, ConnectionContext* pContext) = 0;
  • virtual int decodeIncomingPacket(IncomingPacket* pPacket, int& serviceId) = 0;
  • virtual void disposeIncomingPacket(IncomingPacket* pPacket) = 0;

Serialization triggers both encoding and framing. Please note that in the case of broadcast messages, encoding is also triggered at the time the packet is pushed. Normally, encoding is expected to lay out your data in an internal stream buffer. The framing operation should then write the encoding result into the provided buffer, alongside the special information needed to delimit the packet.

The framework calls ::tryDeserializeIncomingPacket whenever it receives new data into the receive buffer. This function calls your custom versions of tryDeframeIncomingPacket and decodeIncomingPacket, so you can analyze the receive buffer and, when possible, create a new IncomingPacket instance representing the client request. Since it acts as an object factory, the Protocol class is also responsible for the disposal of IncomingPacket instances when the framework no longer needs them.
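To make these responsibilities concrete, here is a heavily simplified sketch of a Protocol subclass using a 4-byte length prefix for framing, reusing the ChatRequest/ChatResponse classes sketched earlier. The DataBuffer accessors (append, getBuffer, getDataSize) and the meaning of the integer return codes are assumptions made for illustration only; take the real names and the expected return values from Protocol.h and DataBuffer.h:

#include "PushFrameworkInc.h"
#include <cstring>
#include <string>

using namespace PushFramework;

// ChatRequest / ChatResponse: see the earlier sketch.
// Illustrative only: a 4-byte length prefix followed by the payload, with a
// single service id.
class ChatProtocol : public Protocol
{
public:
    int encodeOutgoingPacket(OutgoingPacket& packet)
    {
        // ChatResponse keeps its payload as a ready-to-send std::string,
        // so there is nothing to pre-encode in this sketch.
        return 0; // assumed "success" code
    }

    int frameOutgoingPacket(OutgoingPacket& packet, DataBuffer& buffer,
                            unsigned int& nWrittenBytes)
    {
        ChatResponse& response = static_cast<ChatResponse&>(packet);
        unsigned int payloadLen = (unsigned int) response.text.size();
        // Assumed DataBuffer API: append(const char*, unsigned int).
        buffer.append((const char*) &payloadLen, sizeof(payloadLen));
        buffer.append(response.text.data(), payloadLen);
        nWrittenBytes = sizeof(payloadLen) + payloadLen;
        return 0; // assumed "success" code
    }

    int tryDeframeIncomingPacket(DataBuffer& buffer, IncomingPacket*& pPacket,
                                 int& serviceId, unsigned int& nExtractedBytes,
                                 ConnectionContext* pContext)
    {
        // Assumed DataBuffer API: getDataSize() and getBuffer().
        if (buffer.getDataSize() < sizeof(unsigned int))
            return -1; // assumed "incomplete packet" code

        unsigned int payloadLen;
        memcpy(&payloadLen, buffer.getBuffer(), sizeof(payloadLen));
        if (buffer.getDataSize() < sizeof(payloadLen) + payloadLen)
            return -1; // assumed "incomplete packet" code

        ChatRequest* pRequest = new ChatRequest();
        pRequest->text.assign(buffer.getBuffer() + sizeof(payloadLen), payloadLen);
        pPacket = pRequest;
        serviceId = 1;                                 // single service in this sketch
        nExtractedBytes = sizeof(payloadLen) + payloadLen;
        return 0; // assumed "packet extracted" code
    }

    int decodeIncomingPacket(IncomingPacket* pPacket, int& serviceId)
    {
        // Parsing already happened at deframing time in this sketch.
        return 0; // assumed "success" code
    }

    void disposeIncomingPacket(IncomingPacket* pPacket)
    {
        delete pPacket; // the Protocol created it, so the Protocol deletes it
    }
};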

Creating the LogicalConnection and ClientFactory classes

It is very important to understand how the framework behaves when a new connection request is received, how an existing connection becomes a logged client, how it is disconnected, and when client data is released afterward. When Server::start is called, the framework creates a thread that listens on the port that you supplied earlier by calling the following function:

Server::setListeningPort(short sPort)

Each incoming connection request is accepted by default. Soon after, the following virtual function is triggered:

virtual OutgoingPacket* ClientFactory::onNewConnection(ConnectionContext*& lpContext ) = 0;

Your job is to decide whether to tell the framework to send a server response before any incoming request is received and processed. In many situations the developer wants to send a dynamic response and then judge the received request according to what was sent. This is the role of lpContext: you can store anything there, and it will be given back to you when the first request is received from the same connection.
Note that at this point you are not required to create any data structure holding contextual information for your client: the connection could still be an illegitimate client; at this stage it is only a physical connection.
When the first request is received, the following function is triggered:

virtual int ClientFactory::onFirstRequest(IncomingPacket& request, ConnectionContext* lpContext, LogicalConnection*& lpClient,
 OutgoingPacket*& lpPacket) = 0

At this moment you can tell whether the connection is a legitimate client by analyzing the received request and possibly the contextual data referenced by lpContext, which you stored at the time ::onNewConnection was called. If the request is not accepted, you can tell the framework to close the connection, or to send a new response (stored in lpPacket) as another chance for your client. The framework decides between these options by interpreting the return value that you give.
If the request is OK, store in lpClient the address of the LogicalConnection instance that you want to associate with this connection. (You may also need to delete the content of lpContext.) Then expect all upcoming requests to be dispatched to your Service class instances.
If you keep giving your connection more chances, it will stay open until the duration that you supply in the following function elapses:

void Server::setLoginExpiryDuration( unsigned int uLoginExpireDuration )

Note that when a new client is created within the ::onFirstRequest method, the framework will also call the following method:

virtual void ClientFactory::onClientConnected(LogicalConnection* pClient)=0;

It is important that your concrete LogicalConnection subclass returns a unique value in its virtual ::getKey method. This key is used for storage and will be communicated back to you for all events pertaining to your client.
If you provide an instance whose key already exists, the framework closes the existing connection, disposes of the newly created instance while keeping the previous LogicalConnection instance (so past data is not lost), and attaches the latter to the newly accepted connection. The following event is also triggered:

virtual void ClientFactory::onClientReconnected(LogicalConnection* pClient)=0;
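As an illustration, a minimal LogicalConnection subclass for the hypothetical chat example could look as follows. The string key and the exact return type of getKey are assumptions; check LogicalConnection.h for the real signature and for any other pure virtual members you must override:

#include "PushFrameworkInc.h"
#include <string>

// Hypothetical logged-in user, keyed by its login name.
class ChatUser : public PushFramework::LogicalConnection
{
public:
    explicit ChatUser(const std::string& login) : login(login) {}

    // Assumption: getKey returns a C string; verify against the header.
    const char* getKey()
    {
        return login.c_str();
    }

private:
    std::string login;
};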

When you want to disconnect a client, call:

void LogicalConnection::disconnect( bool waitForPendingPackets )

Note that the peer client can also close the connection. In that case, the following event is received:

virtual void ClientFactory::onClientDisconnected(LogicalConnection* pClient) = 0;

In both cases, i.e. whether you explicitly disconnect the client or the client is implicitly disconnected by the framework (aside from a peer close, there are many other situations causing the disconnection of a client: for example a decoding problem), you get the chance to put common code in the following method:

virtual void ClientFactory::onBeforeDisposeClient(Client* pClient)=0;

When it is time to dispose of the LogicalConnection data, the following method is triggered:

virtual void ClientFactory::disposeClient(Client* pClient)=0;

Please do not assume that pClient corresponds to a client that was logged in. Remember that after the call to ClientFactory::onFirstRequest, the framework may delete a newly allocated client through a call to ::disposeClient.
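Putting the lifecycle callbacks together, a sketch of a concrete ClientFactory for the hypothetical chat example might look like the following. The signatures are copied from the prototypes quoted above, and the return value of onFirstRequest is only a placeholder; double-check both against ClientFactory.h in your framework version:

#include "PushFrameworkInc.h"

using namespace PushFramework;

// Sketch only: ChatRequest and ChatUser come from the earlier sketches.
class ChatClientFactory : public ClientFactory
{
public:
    OutgoingPacket* onNewConnection(ConnectionContext*& lpContext)
    {
        lpContext = NULL;  // no per-connection context kept in this sketch
        return NULL;       // nothing to send before the first request
    }

    int onFirstRequest(IncomingPacket& request, ConnectionContext* lpContext,
                       LogicalConnection*& lpClient, OutgoingPacket*& lpPacket)
    {
        // Treat the first request's payload as the login name.
        ChatRequest& loginRequest = static_cast<ChatRequest&>(request);
        lpClient = new ChatUser(loginRequest.text);
        lpPacket = NULL; // optionally point this to a welcome packet
        // Placeholder return value: use the framework's constant that means
        // "request accepted, client created".
        return 1;
    }

    void onClientConnected(LogicalConnection* pClient)
    {
        // e.g. subscribe the client to broadcast channels (see further below)
    }

    void onClientReconnected(LogicalConnection* pClient)
    {
    }

    void onClientDisconnected(LogicalConnection* pClient)
    {
    }

    void onBeforeDisposeClient(Client* pClient)
    {
        // last chance to run code common to all disconnection paths
    }

    void disposeClient(Client* pClient)
    {
        delete pClient; // we allocated the client in onFirstRequest
    }
};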
When the framework communicates a client key back to you, you must translate that key into its corresponding client object through:

LogicalConnection* FindClient( CLIENT_KEY clientKey )

When you are done with the object, return it via:

void ReturnClient( CLIENT_KEY clientKey )
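A typical checkout pattern looks like the sketch below; pNotification stands for an OutgoingPacket* you built beforehand, and where FindClient/ReturnClient are actually declared (Server member or otherwise) should be taken from the framework headers:

LogicalConnection* pConnection = FindClient(clientKey);
if (pConnection != NULL)
{
    // Use the client only while it is "checked out"...
    pConnection->PushPacket(pNotification);
    // ...and always give it back when done.
    ReturnClient(clientKey);
}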

Creating and registering Services

Once a new connection is associated with a client object, all incoming requests are routed to the ::handle method of the Service instances that were registered in advance via:

void Server::registerService(unsigned int serviceId, Service* pService, const char* serviceName)

Thus a Service is a block of code that handles incoming requests identified as belonging to the same type.
It is the job of the Protocol class to make that identification at deframing time. The framework only triggers the handle method of the Service instance that was registered with the same service id.

Broadcasting Data

To broadcast information, you need to set up broadcasting channels. This is done through:

void BroadcastManager::CreateQueue( BROADCASTQUEUE_NAME channelKey, unsigned int maxPacket, bool requireSubscription,
 unsigned int uPriority, unsigned int uPacketQuota );

A broadcasting channel is a FIFO queue of OutgoingPacket instances with a size limit of maxPacket. You fill the queue using:

void BroadcastManager::PushPacket( OutgoingPacket* pPacket, BROADCASTQUEUE_NAME channelName, BROADCASTPACKET_KEY killKey,
 int objectCategory );

When the queue is full, the oldest packets are deleted using Server::disposeOutgoingPacket. All clients subscribed to a broadcasting channel receive the data in its queue. Subscription can be implicit (requireSubscription = false) or explicit, in which case you need to call:

bool BroadcastManager::SubscribeConnectionToQueue( CLIENT_KEY clientKey, BROADCASTQUEUE_NAME channelKey );
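Putting the three calls together for the hypothetical chat example: the channel name, limits and QoS values below are arbitrary, and how you obtain the BroadcastManager instance (called broadcastManager here) should be taken from the framework headers:

// At startup: create a channel that requires explicit subscription.
broadcastManager.CreateQueue("room1", 100 /*maxPacket*/, true /*requireSubscription*/,
                             10 /*uPriority*/, 3 /*uPacketQuota*/);

// When a client logs in (e.g. in onClientConnected): subscribe it by key.
broadcastManager.SubscribeConnectionToQueue(pClient->getKey(), "room1");

// Whenever something happens: push a packet to all subscribers of the channel.
// ChatResponse comes from the earlier sketch; broadcast packets are disposed of
// by the framework once they leave the queue.
ChatResponse* pUpdate = new ChatResponse();
pUpdate->text = "a user joined room1";
broadcastManager.PushPacket(pUpdate, "room1");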

The framework uses the uPriority and uPacketQuota attributes to achieve QoS. For example, if a client x is subscribed to two broadcasting channels that are almost always full, then the data it receives comes exclusively from the one with the higher priority. If the priorities are equal, data is shared in proportion to the quota values. In general the following formula holds:

  • Suppose that the broadcasting channels {Ci} are ordered such that i < j implies either Pi > Pj, or Pi = Pj and Qi >= Qj, where P and Q are the priority and quota attributes
  • Let Fi denote the rate at which Ci is filled with new packets
  • Let S be the total rate of broadcast data sent to client x
  • Further assume that all outgoing messages have the same length

Then the share Si of broadcasting channel Ci is given by:

$$
S_{i} =
\begin{cases}
0 & \text{if } \sum_{j<i} F_{j} > S
\\[4pt]
\left( S - \sum_{j<i} F_{j} \right) \, Q_{i} \Big/ \sum_{k,\; P_{k}=P_{i}} Q_{k} & \text{otherwise}
\end{cases}
$$
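As a quick illustration with made-up numbers: suppose client x can absorb S = 100 packets per second and is subscribed to C1 (higher priority, filled at F1 = 80 packets per second) and C2 (lower priority). C1 is served first and receives its full fill rate of 80 packets per second, while C2 gets the remaining 100 - 80 = 20 packets per second. If F1 grows to 100 packets per second or more, the first case of the formula applies and C2's share drops to zero.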

Finalizing Code

By now, the code around the Server instance should look like:

PushFramework::Server server;
server.setListeningPort(sPort);
server.setClientFactory(pClientFactory);
server.setProtocol(pProtocol);
server.registerService(myServiceId, pService, "myService");

Before calling start, there are a number of useful, if not essential, configuration APIs you may need to call:

  1. void setWorkerCount(int workerCount); Specifies the number of worker threads to spawn for servicing your client requests. If not called, the library chooses the number according to the available hardware processing power.
  2. void setConnectionPoolSettings(unsigned int nPreallocationSize, unsigned int nMaxPoolSize); The framework allocates data structures containing data buffers and other per-connection information for each accepted connection. To avoid the performance cost of repeated allocation/deallocation, you can have the framework use a pool. The pool preallocates nPreallocationSize reusable structures before Server::start is called. The number of connections cannot exceed nMaxPoolSize.
  3. void setBuffersSize(unsigned int uMaxReceiveBuffer, unsigned int uMaxBytePerSendOperation, unsigned int uMaxBytePerReceiveOperation, unsigned int uMaxSendBuffer, unsigned int uBroadcastThreshold); Since the framework allocates intermediate buffers for send/receive operations, it is important to specify their sizes as well as the sizes used for the actual send/receive operations.
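For illustration, a possible call sequence with arbitrary values follows; the exact signature of ::start (it may take arguments in your version) should be checked in Server.h:

server.setWorkerCount(4);
server.setConnectionPoolSettings(1000 /*nPreallocationSize*/, 10000 /*nMaxPoolSize*/);
server.setBuffersSize(0x10000 /*uMaxReceiveBuffer*/,
                      8192    /*uMaxBytePerSendOperation*/,
                      8192    /*uMaxBytePerReceiveOperation*/,
                      0x10000 /*uMaxSendBuffer*/,
                      8192    /*uBroadcastThreshold*/);

server.start();
// ... run until shutdown is requested, then:
server.stop();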