Have you ever noticed, when reading specifications for standard analog in/out transmission systems and for video over IP systems, the huge differences between what is, and what is not, specified?
Even the lowest cost fiber systems will provide basic information such as system bandwidth and signal-to-noise ratio (SNR), whereas IP systems rarely mention these or linearity figures such as differential gain (DG) and differential phase (DP), but will tell you that the picture is available at 4CIF down to QCIF at anything from 25 frames per second (fps) down to 1 fps or less. There are a number of reasons for this, but one most certainly is that if you actually measure a typical video over IP system using a standard television measurement system such as the Tektronix VM700, the numbers are usually pretty bad. For example, a simple uncompressed digital video system such as OSD’s OSD8815 offers 10MHz bandwidth with a minimum SNR of 65dB and DG/DP of better than 0.7%/0.7º, numbers that only a few years ago would have been considered better than broadcast quality. On the other hand, typical video over IP systems provide 4 to 5MHz bandwidth, SNR of 40 to 45dB, and almost immeasurably bad DG/DP figures.
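To put those SNR figures in perspective, the sketch below converts them back to a noise floor using the common video convention of peak-to-peak signal over RMS noise. The 700mV signal level is an assumption (the nominal active-video amplitude of a composite signal), not something taken from the datasheets quoted above:

```python
import math

def snr_db(signal_pp_mv, noise_rms_mv):
    """Video SNR convention: 20*log10(peak-to-peak signal / RMS noise), in dB."""
    return 20 * math.log10(signal_pp_mv / noise_rms_mv)

# Assuming a nominal 700 mV peak-to-peak video signal:
# a 65 dB SNR implies a noise floor of roughly 0.4 mV RMS,
# while 40 dB implies roughly 7 mV RMS -- nearly 18 times noisier.
print(round(700 / 10 ** (65 / 20), 2))  # noise at 65 dB
print(round(700 / 10 ** (40 / 20), 2))  # noise at 40 dB
```

The 25dB gap between the two figures is therefore not a marginal difference: every 20dB corresponds to a factor of ten in noise amplitude.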
Yet, the world is moving over to video over IP systems!
Why? Because these systems enable customers to use a common building infrastructure for security, access control and building management functions. With great care these functions can even be carried over the same local area network employed by the IT department, thus potentially cutting costs substantially. However, in most situations it is still better to run separate cabling systems with their own horizontal and vertical (backbone) wiring, because video systems are very bandwidth hungry: as few as 20 IP cameras can cause a perceptible slowdown of throughput on networks with a 1Gbps backbone, and would be impractical on 100Mbps Ethernet systems. With the advent of economical 10Gbps Ethernet it is likely that some merging of the two systems will become more common.
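A back-of-the-envelope capacity check illustrates why a couple of dozen cameras is already a problem. The per-camera bitrate and headroom figures below are assumptions for illustration, not values from the text: compressed 4CIF streams at full frame rate commonly run in the tens of Mbps, and a shared backbone cannot be loaded to 100% with video alone.

```python
# Rough capacity check: aggregate camera load versus backbone capacity.
CAMERA_MBPS = 30        # assumed per-camera stream (tens of Mbps at 4CIF/25fps)
BACKBONE_MBPS = 1000    # 1 Gbps backbone
USABLE_FRACTION = 0.7   # assumed headroom left after other LAN traffic

cameras = 20
load = cameras * CAMERA_MBPS
usable = BACKBONE_MBPS * USABLE_FRACTION
print(f"{load} Mbps of video against ~{usable:.0f} Mbps usable capacity")
```

Under these assumptions 20 cameras consume most of the practically usable capacity of a 1Gbps link, while the same load would exceed a 100Mbps segment several times over.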
While it is straightforward to measure the standard video parameters mentioned above using a wide variety of equipment available from scores of manufacturers, things become decidedly murkier when looking at video over IP. One very basic reason is that after encoding it is normal for pictures to remain in compressed format right up to output to a flatscreen monitor so that an end-to-end measurement of an analog signal is often just not possible. Another is that actually trying to objectively measure the quality of a compressed video in what could be one of hundreds of standards or variants of a standard is very difficult. Consequently, until some standardisation occurs we are stuck with vague subjective statements such as “DVD quality” on many manufacturers’ datasheets.
There are better ways of measuring video quality, but most are still subjective. For example, the Video Quality Experts Group (VQEG) has worked on subjective video quality testing standardised by the ITU as Recommendation ITU-R BT.500. This recommendation describes methods for subjective video quality analysis in which a group of human testers views the video sequence and grades the picture quality. The grades are combined, correlated and reported as a Mean Opinion Score (MOS). The main idea can be summarized as follows:
- Choose video test sequences (known as SRCs)
- Define the processing chains under test (known as HRCs)
- Choose a test method (e.g. DSCQS or SSCQE)
- Invite a sufficient number of human testers (20 or more)
- Carry out the testing
- Calculate the average score and scale
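The final step above is simple arithmetic. A minimal sketch, assuming a hypothetical panel of 20 viewers grading one SRC/HRC pair on the 5-grade quality scale (1 = bad, 5 = excellent) used in some BT.500 methods:

```python
from statistics import mean

def mos(scores):
    """Mean Opinion Score: the average of the per-viewer grades.
    Hypothetical example; real BT.500 procedures also screen outlier
    viewers and may use a continuous 0-100 scale (e.g. DSCQS)."""
    return mean(scores)

# Hypothetical grades from a 20-person panel on a 5-point scale.
grades = [4, 5, 3, 4, 4, 3, 5, 4, 4, 3, 4, 5, 4, 3, 4, 4, 5, 3, 4, 4]
print(round(mos(grades), 2))
```

The subjectivity lies entirely in how the grades are produced, not in how they are combined, which is why the procedure needs a controlled environment and a statistically useful number of viewers.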
All of which is fine for assessing a compression system, but it is not very practical for online performance monitoring.
The industry now seems to be moving from the objective measurements well accepted in the all-analog world to ones that may be software interpretations of subjective assessments! Very tricky.