VoIP tolerable latency and packet loss rate- configurable?

Discussion in 'UK VOIP' started by Vin, Jul 25, 2007.

  1. Vin

    Vin Guest

    Hi All,

    Got some questions about VoIP which I haven't been able to figure out
    yet, and was hoping someone could clear them up for me.

    I read about SIP and H.323 and what they do in terms of signalling and
    setting up a session. Also, RTP is the protocol that actually carries
    the session's media, transporting it over UDP or TCP. What I don't
    understand is who decides the quality issues for a call.

    That is,

    i) where is it configured that a packet should be dropped if it is
    late by X seconds?
    ii) What is the tolerable packet loss rate, and who specifies it?
    Based on this, the application could switch to a codec that provides
    better packet loss concealment.

    I understand that some of these decisions may be part of proprietary
    code, but I do not understand where they fit into the VoIP protocol
    stack. Also, are there any VoIP clients out there for which these two
    parameters are publicly known or user-configurable? I want
    to see how these two parameters affect the quality of my call.

    Vin
     
    Vin, Jul 25, 2007
    #1

  2. Tim

    Tim Guest

    RTP travels over UDP.
    It isn't configured in any one place. There are loads of places a
    packet could be dropped.

    Most likely, a packet will get dropped by a router on the way. If
    the router's packet buffer gets too full, it will drop the packet.
    Depending on the type of router, this will be a hard limit (if the
    buffer is full, ditch the packet) or a softer one. Google for Random
    Early Detection - this is where routers drop some packets as the
    buffers fill up, to avoid everything coming to a hard stop. TCP (ie
    not VoIP) sessions will tend to slow down when they experience packet
    loss.
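    The Random Early Detection idea above can be sketched roughly like
    this (a hypothetical illustration, not any router's actual code; the
    thresholds and probability are made-up tunables):

    ```python
    import random

    # Assumed illustrative tunables, measured in packets queued:
    MIN_TH = 50    # below this average queue length, never drop
    MAX_TH = 200   # at or above this, always drop (the "hard limit" case)
    MAX_P = 0.1    # drop probability as the average approaches MAX_TH

    def red_should_drop(avg_queue_len: float) -> bool:
        """Decide whether to drop an arriving packet, RED-style."""
        if avg_queue_len < MIN_TH:
            return False
        if avg_queue_len >= MAX_TH:
            return True  # buffer effectively full: ditch the packet
        # Between the thresholds, drop probability ramps up linearly,
        # so flows slow down gradually instead of hitting a hard stop.
        p = MAX_P * (avg_queue_len - MIN_TH) / (MAX_TH - MIN_TH)
        return random.random() < p
    ```

    The point of the linear ramp is that TCP flows see a few early drops
    and back off before the queue overflows completely.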

    At the receiving end, the phone will buffer the incoming packets as it
    receives them, into the incoming jitter buffer. The buffer imposes a
    delay between receiving a packet and playing out its audio. Any
    packets that arrive within this time can probably be fitted into
    place. If a packet is too late, it gets dropped - there is no point
    playing an out-of-sequence packet.
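    The late-packet rule can be sketched as follows (a hypothetical
    model, assuming 20 ms audio frames and a fixed 60 ms buffer depth;
    real phones pick their own values):

    ```python
    PLAYOUT_DELAY_MS = 60   # assumed fixed jitter-buffer depth
    FRAME_MS = 20           # assumed: one packet carries 20 ms of audio

    def playout_deadline(seq: int, first_arrival_ms: int) -> int:
        """When the audio in packet number `seq` must be played out."""
        return first_arrival_ms + PLAYOUT_DELAY_MS + seq * FRAME_MS

    def accept(seq: int, arrival_ms: int, first_arrival_ms: int) -> bool:
        """A packet is usable only if it arrives before its playout time;
        anything later is dropped rather than played out of sequence."""
        return arrival_ms <= playout_deadline(seq, first_arrival_ms)
    ```

    So with these numbers, packet 1 arriving 75 ms after the call started
    still makes its 80 ms deadline, but at 90 ms it would be discarded.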

    Of course, some cheapy VoIP phones don't have jitter buffers.

    Good devices have dynamic jitter buffers. So if some packets are late,
    the receiver will grow the buffer to minimise the risk of missing
    them. A longer buffer leaves more time to receive inbound packets.

    Of course, there is a limit to how far the jitter buffer can grow
    before the listener notices the latency on the line.
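    A dynamic jitter buffer with that latency cap might look like this
    (again a hypothetical sketch; the step size and 200 ms ceiling are
    assumptions, chosen only to show the grow-until-noticeable trade-off):

    ```python
    MAX_DELAY_MS = 200   # assumed point where latency becomes noticeable
    GROW_STEP_MS = 20    # assumed growth increment per late packet

    class AdaptiveJitterBuffer:
        """Grows its playout delay when late packets are observed,
        but never past the cap where the listener hears the lag."""

        def __init__(self, delay_ms: int = 40):
            self.delay_ms = delay_ms

        def on_late_packet(self) -> None:
            # Trade a little extra latency for fewer late drops.
            self.delay_ms = min(self.delay_ms + GROW_STEP_MS, MAX_DELAY_MS)
    ```

    Shrinking the buffer again when the network calms down is the harder
    half of the problem, and is where implementations differ most.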


    Nobody specifies it. You can pretty much hear any loss to some degree.

    Remember that almost all routers on the internet do not care what
    protocol a packet is carrying. They look up the destination IP in their
    routing table and spew the packet out in the right direction.

    At the moment, I don't know of any VoIP devices that will change codec
    mid-call when faced with network problems. It really would be neat.

    Problem is, the phone doesn't know whether packet loss is caused by a
    congested network link or by a faulty one. If a 100 Mb/s connection
    somewhere in your ISP is congested, then changing the call to a
    lower-rate codec is not going to make enough difference to be
    noticeable.
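    The arithmetic behind that claim is easy to check (payload rates
    only, ignoring IP/UDP/RTP overhead; 64 kb/s and 8 kb/s are the
    standard payload rates of G.711 and G.729 respectively):

    ```python
    LINK_BPS = 100_000_000   # the congested 100 Mb/s link
    G711_BPS = 64_000        # high-rate codec payload
    G729_BPS = 8_000         # low-rate codec payload

    # Fraction of the link freed by switching one call to the low-rate codec:
    saving_fraction = (G711_BPS - G729_BPS) / LINK_BPS
    print(f"{saving_fraction:.4%}")
    ```

    One call switching codecs frees roughly 0.056% of the link, which is
    why the change makes no audible difference to the congestion.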


    Tim
     
    Tim, Jul 25, 2007
    #2

  3. Vin

    Vin Guest

    Good explanation, but it raises more questions when I think about
    it. I believe it is the software on a device that implements the
    jitter buffer, so wouldn't it be more accurate to talk about the
    application's jitter buffer? As I see it, there is a tolerable
    latency above which the delay exceeds the maximum size of the jitter
    buffer and the packet is dropped. But isn't that up to the user to
    decide, based on the quality of service he desires (or pays for)? So
    does anyone know what VoIP applications out there use for these
    things, and how QoS is done? Also, where does one implement it? For
    example, consider Skype. Do the developers of the Skype application
    decide the maximum size of jitter buffer to use?

    Skype does this, and it is well documented in some research papers.
    But again, Skype is an application, not a device. Shouldn't we be
    talking about applications rather than devices? Correct me if I am
    wrong.

    So I am looking for some application in which I can decide how late
    packets can be before my phone/application has to drop them. Also, it
    would be neat if I could specify how much loss I would tolerate for
    my calls. Isn't there anything out there for such preferences? I
    looked at some open-source VoIP clients, but was more confused after
    that.

    Vin
     
    Vin, Jul 25, 2007
    #3
  4. Tim

    Tim Guest

    It looks like you are thinking that you should be able to specify a
    maximum packet loss/jitter from your ISP.

    Nobody offers a service like this at all.


    On the other hand, on any decent ISP there will be no packet loss,
    and the jitter will only be of the order of the serialisation delay
    as packets pass through routers.

    That is, until you overload your connection.

    Tim
     
    Tim, Jul 26, 2007
    #4
  5. News Reader

    News Reader Guest


    Hi,


    You might find XTen's X-Lite series permits a lot of configuration
    (e.g. the size of the jitter buffer in ms, etc.).


    Best wishes,



    News Reader


    P.s. I would surmise that most implementations work on the principle
    of sustaining any level of packet loss that is still within the
    device's ability to signal or determine its status. That is, if it
    starts to lose so much data or signalling that it has no idea what is
    happening, or whether it is still connected or data is still flowing,
    then it will have little or no choice but to give up. This seems to
    be the prevalent implementation: most seem to allow a pretty
    reasonable period for a connection to resume or restore normal enough
    order (10 to 30 seconds) before calling it a day. Even if only a
    dribble of data is received, provided enough data and signalling get
    through, it will be rendered as audio, however corrupt or gappy.
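    That "keep playing whatever dribbles in, but give up after a long
    silence" behaviour can be sketched like so (a hypothetical model; the
    30-second timeout is an assumption matching the range quoted above,
    not any product's documented default):

    ```python
    HANGUP_AFTER_MS = 30_000   # assumed inactivity limit before giving up

    class CallWatchdog:
        """Tears a call down only after a long gap with no media at all.
        Any packet, however sparse or corrupt, resets the timer and
        whatever audio it carries gets rendered, gaps and all."""

        def __init__(self):
            self.last_packet_ms = 0

        def on_packet(self, now_ms: int) -> None:
            self.last_packet_ms = now_ms

        def should_hang_up(self, now_ms: int) -> bool:
            return now_ms - self.last_packet_ms > HANGUP_AFTER_MS
    ```

    So a dribble of one packet every few seconds keeps the call alive
    indefinitely; only total silence past the limit ends it.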
     
    News Reader, Jul 26, 2007
    #5
