Fiber Types and Using Them Effectively

At OSD we have been saying for years that singlemode fiber is far better technically than multimode and that the cost premium once seen for SM equipment is slowly starting to disappear. In fact, most of OSD’s new products over the past two or three years are much the same price whether they are MM or SM. In some situations, MM is actually more expensive. However, the fact is that there is an awful lot of MM fiber out in the field and people want to continue to use it for as long as possible, even though the technologies they may be employing are constantly being updated. The classic example of this, of course, is Ethernet which has progressed from its humble 10Mbps origins to “Fast” Ethernet at 100Mbps to Gigabit Ethernet and now we are seeing growing deployment of 10G Ethernet. As always, the newest technology tends to be used in the backbone portion of networks but eventually flows down to the desk.

One problem with MM fiber in all this is that it really isn’t all that good. Its limitations come in two flavours:

  1. Most of it has been supplied to meet the old FDDI specification (Fiber Distributed Data Interface, an old standard for ring based backbone networks) and this has serious limitations in terms of fiber bandwidth, ie 160MHz.km @ 850nm and 500MHz.km @ 1300nm. This is fine if you are running Fast Ethernet over a few kilometers but starts looking a bit shaky once you have Gigabit speeds and really problematical once we are talking 10Gbps.
  2. Most fibers made to this spec were designed to operate with LED (Light Emitting Diode) light sources, which tend to “fill” most of the core area of the fiber. Consequently, a serious production flaw in most FDDI fibers does not cause many issues with such sources. This flaw is the central dip in the refractive index profile of the core region. You may recall that the profile of a Graded Index MM fiber peaks on the axis of the fiber and drops down towards the cladding, ideally following a nearly parabolic curve over this area. Unfortunately, several of the most popular fiber manufacturing techniques tend to leave a dip or other imperfection in the refractive index along the axis. This dip may be just a slight flattening or may be quite extreme, possibly dropping as low as the index at the core/cladding interface, as indicated in Figure 1 below.
Figure 1. Typical refractive index profile of ideal GIMM fiber and some examples of fiber central core profile distortions (dip / flat top / peak).

As previously noted, if the light source fills up all of the core area AND it is fairly incoherent light (ie an LED) this isn’t such an issue, but it can lead to some very troublesome effects if the source is a laser (which is quite coherent) AND it launches well collimated light travelling down the core’s axis. In such a case the effective bandwidth can be a lot less than the fiber’s specification. The effect on transmission is that different propagating modes see different path lengths which vary with temperature and laser drive levels in a fairly unpredictable manner. Figure 2 illustrates the effect on a single pulse of light (a data bit). These distortions will vary from pulse to pulse so that severe distortion (Differential Mode Delay or DMD) results, which in turn leads to so-called intersymbol interference that causes bit errors in the system.
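The bandwidth limitation described in Problem #1 can be illustrated with a rough calculation. This is a rule-of-thumb sketch only (it assumes a link needs an optical bandwidth of roughly 0.7× its NRZ bit rate, a common approximation), not a standards-compliant link budget:

```python
# Rough reach estimate for legacy FDDI-grade multimode fiber.
# Rule-of-thumb sketch only, not a standards-compliant link budget.

MODAL_BW_MHZ_KM = {850: 160, 1300: 500}   # FDDI-spec fiber bandwidths, per the text

def max_reach_km(bit_rate_mbps, wavelength_nm):
    """Approximate modal-bandwidth-limited reach, assuming the link needs
    an optical bandwidth of ~0.7x the NRZ bit rate."""
    needed_mhz = 0.7 * bit_rate_mbps
    return MODAL_BW_MHZ_KM[wavelength_nm] / needed_mhz

for rate, name in [(100, "Fast Ethernet"), (1000, "Gigabit"), (10000, "10G")]:
    print(f"{name}: ~{max_reach_km(rate, 1300):.2f} km at 1300nm")
```

The numbers line up with the text: several kilometers for Fast Ethernet, well under 1km at Gigabit speed, and less than 100m at 10Gbps.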

Figure 2: Example of a distorted pulse due to DMD

One solution to Problem #1 above, propounded by anyone with a vested interest in selling optical cables, connectors and equipment, is to change your old FDDI multimode fiber to a new multimode fiber that has a lot more bandwidth. Bizarrely enough, this is a change from the old 62.5/125 FDDI fiber optimised for a 1300nm operating wavelength to the “new” 50/125 fiber optimised for 850nm, the operating wavelength of the vast majority of the laser devices typically used in these systems (VCSELs, ie Vertical Cavity Surface Emitting Lasers). Typically, such systems will not experience much of Problem #2 because the VCSEL illuminates a fairly large part of the fiber core area so that there are not too many issues with light travelling along the core axis.

It is rather interesting that the original fiber widely used for multimode applications was also a 50/125 design optimised for 850nm. Of course, the modern version is lower loss and does have higher bandwidth (2000MHz.km @ 850nm), but given that VCSELs are only marginally lower cost than standard 1310nm lasers, the economics of this are very questionable since MM fiber is usually a lot more expensive than SM fiber.

The other solution propounded by a few system vendors such as OSD is to totally move over to singlemode fiber and singlemode equipment. This enables you to move seamlessly from generation to generation of technology without touching your network backbone cabling. In fact, to be absolutely sure that your infrastructure does not need any upgrading, also install APC (Angled Physical Contact) connectors on all your new SM fiber with all the benefits APC technology gives you as described in Tech Note 201001. Some of OSD’s customers have been doing this for years.

However, unless the very best and latest MM fiber technology is installed, there may still be an issue with Problem #2. If the light source is 850nm it is normal to employ a VCSEL, which injects into the fiber a broad range of ray paths such that those most subject to DMD effects carry only a very small portion of the total optical power being transmitted through the fiber. In the case of 1310nm systems, where the light source is typically an FP laser with well collimated light, the solution is to avoid sending light along the axis of the fiber’s core, and one way to do this is with a Mode Conditioning Patchcord (MCP). The MCP (see Figure 3) usually consists of a singlemode fiber spliced to a multimode fiber with their axes offset by about 19um (62.5/125 fiber) or 12um (50/125 fiber), so that 1310nm light from the laser transmitter (typically the 1000Base-LX of Gigabit Ethernet) is connected to the SM fiber and then coupled into higher order modes of the MM fiber. This ensures that little energy travels along the “bad” part of the MM fiber, ie the core axis, thus avoiding the DMD illustrated in Figure 2.

Figure 3: Illustrating the offset between SMOF & MMOF axes

MCPs are available from many vendors, so it is important to know where and when they should be used. The Tech Note “Mode Conditioning Patchcords” examines these issues.

Fiber Operating Wavelengths: Why so many?

In “the good old days” some things in the world of fiber optics were very, very difficult: fibers were often manufactured with very poor control over physical geometry, impurity levels and doping profiles so that they were usually quite lossy, had poor bandwidths and connector and splice losses were sometimes 2 to 3dB and not very consistent.  In addition, it was all very expensive; even more so with singlemode fibers.

However, one good thing for quite a few years was that in the world of data communications applications where almost everyone used multimode fiber they also operated at an optical wavelength of around 800 to 880nm.  Likewise, in the rarefied atmosphere of telecommunications, everyone there used singlemode fiber which was almost always operated at 1310nm over distances of up to 50 ~ 100km and at 1550nm for longer distances.

Nice and simple. In fact, so nice and simple that people started to associate multimode fiber with 850nm and singlemode with 1310nm or 1550nm and this is today still a very common belief.

In reality, it is quite possible to transmit 1310nm or 1550nm over multimode fiber and this is, in fact, very commonly done.  However, perhaps surprisingly, it is also possible to transmit 850nm over singlemode fiber. Modern singlemode fiber has very low attenuation at 850nm as shown in the sketch below:

Figure: Typical fiber attenuation vs operating wavelength

At 850nm a good multimode fiber will have around 2.5dB/km attenuation and a good singlemode fiber about 1.8dB/km. There are two main reasons we don’t normally operate singlemode fibers at 850nm:

  1. Typical sources such as LEDs or VCSELs generally can’t couple much power into the fiber, so the use of 850nm would be restricted to quite low speed links, since only such links can employ sufficiently sensitive optical receivers.
  2. When operating much below 1250nm, standard singlemode fibers start behaving like multimode fibers, with several modes capable of propagating through the fiber. Since most singlemode fibers have a step index profile, this means that the bandwidth is very poor, typically 5~10MHz.km: not much good for high speed data.
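Point 2 can be checked with the normalized frequency (V-number) of a step-index fiber: the fiber is single-moded only when V = 2πa·NA/λ is below about 2.405. A quick sketch, using representative standard-SM-fiber values (core radius 4.1um, NA 0.12 — illustrative figures, not any particular datasheet):

```python
import math

def v_number(core_radius_um, na, wavelength_um):
    """Normalized frequency V; a step-index fiber is single-moded for V < 2.405."""
    return 2 * math.pi * core_radius_um * na / wavelength_um

# Representative standard singlemode fiber geometry (illustrative values)
a, na = 4.1, 0.12

for wl in (0.85, 1.31, 1.55):
    v = v_number(a, na, wl)
    mode = "single-mode" if v < 2.405 else "multimode (several modes)"
    print(f"{wl * 1000:.0f}nm: V = {v:.2f} -> {mode}")
```

With these values the cutoff falls near 1250nm, which is why the same fiber is multimoded at 850nm yet single-moded at 1310nm and 1550nm.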

Despite these limitations, it is still sometimes OK to operate so-called multimode equipment over singlemode fiber.  For example, Optical Systems Design has occasionally connected up its 850nm multimode modems such as E1 PCM terminals and low speed industrial modems to singlemode fiber when singlemode was all that was available.  While not an ideal solution, it can be very practical and technically safe in some situations and is worth considering if you are, as they say, between a rock and a hard place.  Feel free to call us for advice, even if the equipment isn’t ours!

It may be apparent that if we can send one data signal over a fiber at one wavelength, we should be able to send many such signals if we can employ multiple wavelengths. The telecommunications sector has been using such techniques for many years with some systems in operation carrying well over 100 separate 2.5G or 10Gbps signals each on its own unique wavelength and with all of them on the one fiber.

Simplified, more rugged and lower cost versions of these systems have been finding their way into areas such as transportation, security and many other industrial/commercial applications over the past several years.  We look at this in a little more detail in other Tech Notes.

Multimode Fiber: the Good, the Bad and the Rather Ordinary

Anyone who has read a few of these Tech Corners will have noticed a certain cynicism with respect to the marketing of multimode fibers and our constant exhortation: don’t do it! By almost any measure and in almost all situations singlemode is the better option. Rather than explore the world of network equipment and cable marketing (as entertaining as this might be) we will this month continue to stick to our technical knitting and explain some of the more arcane issues with multimode fiber. Of course, singlemode can also have its moments, and in the Tech Note “Angled Physical Contact (APC) connectors” we looked at the problems caused by reflection noise, which applies to both singlemode and multimode fibers. Fortunately, such noise is more easily controlled in singlemode systems than in multimode systems.

We have already discussed some of the vagaries of multimode fiber bandwidth, which is roughly inversely proportional to distance^0.75. This exponent can, in fact, vary from 0.5 to 1.0 (best case to worst case) and is very dependent on the specific fibers and how light is coupled into them, so calculating the bandwidth budget for a multimode system is not very precise.

Another issue with multimode fiber is actually much more critical: in fact it can be a complete show stopper. An example: in the mid 1970s a major field trial of a 140Mbps PDH system was being commissioned by British Telecom and it was found that the system was very unreliable: it would operate perfectly for a short while, then the error rate would go through the roof and the system would crash. Others had seen some similar effects but nothing quite as spectacular as this system. The effect was traced to two related phenomena:

  • Modal Noise
  • Modal Distortion

The major problem in this particular  case was modal noise.  Our primary optical emitters in fiber optic systems are Light Emitting Diodes (LEDs) and lasers (either Fabry Perot (FP) or Distributed Feedback (DFB) types) with so-called super luminescent diodes somewhere between the two and only used in specialty applications.  A key difference between LEDs and the two main laser types is the spectrum as shown in Figure 1 below.

Figure 1. TYPICAL SPECTRA OF LED, FP LASER AND DFB LASER

The LED has a very broad spectrum of around 30 to 40nm at 850nm increasing to 80 to 100nm at about 1300nm.  The light is incoherent and of completely random phases and frequencies within this envelope.  The FP laser is far more coherent with (usually) several discrete spectral peaks at different wavelengths occupying 2 to 5nm total width.  The DFB is even more pure with typically just one peak with a small number of adjacent peaks that are at least 20dB down (ie they are 1/100 of the amplitude of the main peak).
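One way to quantify these spectral differences is the rough coherence-length estimate L_c ≈ λ²/Δλ. This is a back-of-envelope figure (the spectral widths below are illustrative values drawn from the ranges given above), but it shows just how much more coherent laser sources are, which is what makes the speckle effects discussed below so troublesome:

```python
def coherence_length_um(center_nm, width_nm):
    """Rough coherence length L_c ~ lambda^2 / delta_lambda, in micrometers."""
    return (center_nm ** 2 / width_nm) / 1000.0  # nm -> um

# Illustrative source spectra (center wavelength, spectral width in nm)
sources = [("LED @ 850nm", 850, 35),
           ("FP laser @ 1310nm", 1310, 3),
           ("DFB laser @ 1550nm", 1550, 0.1)]

for name, center, width in sources:
    print(f"{name}: coherence length ~{coherence_length_um(center, width):,.0f} um")
```

An LED's coherence length is only tens of micrometers, while a DFB laser's runs to tens of millimeters: plenty long enough for the interference (speckle) effects described next.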

The emitters used in the BT field trial were very high quality FP types (with just one main central wavelength) and this is where things went off the rails.  To simplify the explanation let us assume that we are using a DFB laser to transmit through a MM fiber as illustrated in Figure 2.

FIGURE 2. COUPLING A DFB LASER TO A MULTIMODE FIBER

So far so good: the left side of the sketch shows a few of the several hundred ray paths possible within the core of the fiber while the right side shows what you would see at the end of the fiber tens or hundreds of meters away from the laser. Now, remember what these rays actually are: propagating electromagnetic waves, each of which can be any one of many different modes. If we look at the end of the fiber we see a pattern of light and dark spots (a so-called speckle pattern) due to the constructive and destructive interference of the light at the end of the fiber. An interesting aspect of this phenomenon is that the pattern is rarely static: the spots are in constant motion. This is due to a number of reasons:

  1. The distribution of light energy coming out of the laser varies a little with time, temperature and random circuit fluctuations.
  2. This distribution’s changes are also dependent to some extent on the current passing through the laser (which in a digital system varies from just above the lasing threshold of 5 to 25mA to possibly 50mA above that threshold).
  3. Mechanical changes to the fiber (eg bending, torsional or compressive forces) will affect the light propagation.

Now, what happens if the fiber is terminated in a connector which has a little bit of radial offset as shown in Figure 3 below?  The speckle pattern out of the fiber is mostly coupled to the receiving fiber with very little energy loss BUT because we actually have dark and bright spots moving around over the receiving fiber core there is a variation in received optical power that is seen by the system as noise.

FIGURE 3: MODAL NOISE DUE TO CONNECTOR OFFSET

The question is: does this really matter with low loss connectors?  In the case of the BT trial system the connectors had losses of up to 2dB so the answer was emphatically Yes as the effective noise was sufficient to render the system unusable.

These days, it would be very unusual to see connector losses as high as 2dB, so this issue should not normally be of major significance with digital systems, but the phenomenon can be a major issue with analog systems, which normally require a much higher Signal to Noise Ratio (SNR). As previously mentioned, another serious show stopper, at least with analog systems, is modal distortion. This comes about because the transmitted modes out of the laser are not stable with time or with drive current through the device. As the current increases from below threshold to the device’s maximum, the dominant modes will change. This may not be an obvious problem if the laser is connected directly to the receiver, but once mode-selective mechanisms like imperfect connectors are part of the link we see some very nasty distortion effects. Similar effects can also occur due to reflections from the fiber back into the laser cavity.

Again, with fully digital systems this may not be too serious but in analog systems it is quite critical.  This is one of the reasons very few manufacturers would encourage transmission of CATV signals over multimode fiber.

Oh, and what did they do about that field trial mentioned at the beginning of this Note?  The equipment designers analysed the problem, figured out what was happening and changed the system from Non Return to Zero (NRZ) operation to Return to Zero (RZ) then operated the lasers from below threshold.  This had the effect of broadening the laser spectrum so that the problem was eliminated and the trial ended up as a great success.  Brilliant engineering!

Don’t hesitate to contact your OSD Systems Engineer for any further information on this subject.

CWDM – Coarse Wavelength Division Multiplexing

Fiber equipment suppliers often use two optical wavelengths to enable bidirectional transmission over the one fiber, usually 1310 and 1550nm on singlemode and 850 and 1310nm on multimode. See “Fiber Operating Wavelengths: Why so many?” for more background information on this. In actual fact, fiber can carry many more than just one or two wavelengths, and some telecommunications systems employ well over a hundred on the one fiber.

Why is this done? As with most things it comes down to costs. When transmission distances start climbing beyond hundreds of kilometers a major cost in the system is that of repeating the optical signals every 50 to 150km. With conventional systems carrying one electrical signal per fiber this adds enormously to the cost because each signal requires a receiver, regenerator and transmitter. Added to that are the associated costs of installation, maintenance, powering, etc. Even if many signals were to be somehow accommodated on the one fiber (thus saving fiber costs) there would still be the considerable costs of all these electronic modules. One solution is to avoid the whole business of optical-to-electrical conversion, regeneration and electrical-to-optical conversion by directly amplifying the light signal(s) at the repeater sites. The Erbium Doped Fiber Amplifier (EDFA) was invented in the late 1980s to do just this and within a few years was being widely deployed in terrestrial and submarine systems operating over thousands of kilometers. However, the EDFA can only amplify a relatively narrow spectrum of signals, typically about 30 to 50nm centered on 1550nm, so, if it is to be used effectively, it is essential to channel as many wavelengths through the fiber as possible within that range. Systems exploiting the EDFA in this way are often described as Dense Wavelength Division Multiplexing (DWDM). Typical spacings are 200GHz (wavelength separation of about 1.6nm, allowing around 25 channels), 100GHz (0.8nm, 50 channels) and 50GHz (0.4nm, 100 channels). As you would expect, the costs of the precision lasers and optical multiplexers and demultiplexers increase with decreasing channel separation/increasing channel capacity.
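The frequency-to-wavelength spacings quoted above follow from Δλ = λ²·Δf/c, evaluated at the 1550nm band center. A quick check of those figures:

```python
C = 299_792_458.0  # speed of light, m/s

def spacing_nm(freq_spacing_ghz, center_nm=1550.0):
    """Convert a DWDM grid spacing in GHz to wavelength spacing in nm,
    using delta_lambda = lambda^2 * delta_f / c."""
    return (center_nm * 1e-9) ** 2 * freq_spacing_ghz * 1e9 / C * 1e9

for ghz, approx_channels in [(200, 25), (100, 50), (50, 100)]:
    print(f"{ghz}GHz grid -> {spacing_nm(ghz):.2f}nm spacing, ~{approx_channels} channels")
```

The results (about 1.60nm, 0.80nm and 0.40nm) match the spacings quoted in the text.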

Other techniques are becoming more available for field use that extend the optical wavelength range so that systems carrying 160 or even more separate optical signals are now practical.

All of which is great if you need to provide a backbone for a major telecommunications system extending over thousands of kilometers but may be of academic interest only to an operator of infrastructure such as railways or highways.

The good news for such folk is that much of the technology originally developed for the bleeding edge of telecommunications is now available in simplified and vastly less expensive form under the umbrella term Coarse Wavelength Division Multiplexing (CWDM).

Originally, CWDM was intended only for the 8 wavelengths between 1470 and 1610nm on 20nm spacings because operation with many older fibers from around 1350 to 1450nm was a bit questionable due to their poor attenuation characteristics in this region.  However, modern fibers exhibit little or none of the “water peak” that was the cause of this poor attenuation so it is now quite practical to use 18 wavelengths from 1271 to 1611nm.  (Please refer to Tech Note “Fiber Operating Wavelengths Why so many” for a sketch of the typical characteristics of older fibers).
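The two grids described above are easy to enumerate. The sketch below generates the full 18-channel grid (1271 to 1611nm on 20nm centers, per the ITU-T G.694.2 CWDM grid) and the original 8-channel subset usable on older “water peak” fibers:

```python
# Full CWDM grid: 18 channels, 1271 to 1611nm on 20nm centers (ITU-T G.694.2).
full_grid = list(range(1271, 1612, 20))
print(len(full_grid), "channels:", full_grid)

# The original 8-channel subset that avoided the water-peak region of older fibers:
legacy = [wl for wl in full_grid if 1471 <= wl <= 1611]
print(len(legacy), "legacy channels:", legacy)
```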

CWDM uses similar basic lasers to DWDM, but they usually operate at lower powers and without the temperature controllers essential for DWDM, so they are far lower cost. Likewise, CWDM optical multiplexers and demultiplexers are of far lower precision (and cost), so the technology is well suited to typical industrial and commercial applications.

Where are these things used?

Mostly, the technology is used simply to increase raw fiber capacity.  For example, a few years ago OSD supplied a very high end surveillance system operating over 10 to 30km with 48 broadcast quality point to point uncompressed digital video links based on our OSD870 and OSD880 4-channel video multiplexers. This required 13 CWDM channels, 12 for the video and 1 for the reverse data control signals as shown in the sketch below.

Figure: CWDM channel arrangement for the 48-link surveillance system
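The channel count for that system is simple arithmetic, sketched below:

```python
# Channel budget for the 48-camera system described above (arithmetic sketch).
video_links = 48
links_per_mux = 4                   # OSD870/OSD880 are 4-channel video multiplexers
video_channels = video_links // links_per_mux    # 12 CWDM wavelengths for video
reverse_data_channels = 1                        # shared return path for control data
total = video_channels + reverse_data_channels
print(f"{total} CWDM channels required")
```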

However, CWDM can provide some other really useful features that go way beyond just adding capacity as shown in Tech Note “More on CWDM” which describes a further two CWDM systems, one in China and the other in Australia.

Fiber Optics for CCTV

Abstract

CCTV has traditionally used analog transmission to move pictures from cameras to displays. Currently, this technology is slowly succumbing to IP oriented technology, although analog-like SDI uncompressed digital systems are gaining traction in some specialised applications. In all cases, copper based technology has limitations in terms of distance and interference immunity, so fiber is often the preferred, sometimes the only, way of distributing signals around CCTV networks. In this Tech Corner we will discuss fiber optics in CCTV and the best ways to make effective use of the technologies now available.

Introduction

Over the past 10 to 15 years CCTV has become ubiquitous and while many applications are related to security almost as many are related to operational and safety matters.  For example, road operators have been using CCTV for well over 30 years for monitoring traffic flows in many cities and large towns with fiber first being used in this application well over 20 years ago.

Until fairly recently, CCTV used technical standards derived from broadcast television, with system quality and performance characterised using the techniques and equipment typical of the broadcast industry. The advent of IP based CCTV and newer technologies related to High Definition (HD) TV have considerably widened the scope of surveillance systems. Consequently, we now see a mix of both analog based and network based technologies, the former using coaxial cable and the latter using Unshielded Twisted Pair (UTP) cable, typically so-called Cat-5 cable.

While coaxial cable is very easy and convenient to use, it does have limitations:

  • Link distances are typically restricted to hundreds of meters unless in-line amplifiers are used
  • Susceptibility to interference from electrical machinery, lightning and other electronic equipment
  • Ground loops can cause major problems

Obviously, there are ways of reducing these issues but eventually coax runs out of puff.

Often it can be inconvenient to install coax cable and if UTP is available then there is a great incentive to use it.  However, similar issues arise when trying to use UTP cables to transmit analog video signals.  Typically, passive or active Balanced-to-Unbalanced converters (Baluns) are used and these can provide reasonable transmission over a few hundred meters (even more with active cable equalisation).  The Ethernet systems typically deployed in local area networks are often used to transfer IP based video signals but as with coax and balun based analog systems, these also have severe distance limitations and are susceptible to EMC issues.

Which is where fiber optic technology comes into play: where distance or EMC are a problem, fiber is a very straightforward technical fix for many situations, offering some excellent technical features such as immunity to electrical interference, freedom from ground loops, very low attenuation and enormous bandwidth.

Fiber Types

We have spent some time in past issues discussing optical fiber, how it works and the technical parameters of singlemode and multimode fiber, so we will not go into all of it here. Please see the Tech Notes ‘Fiber Types and Using Them Effectively’ and ‘Singlemode vs Multimode’.

Applying Fiber to CCTV

Over the past 15 years CCTV technology has slowly moved from the original broadcast industry analog technology based on coaxial cable to systems in which the video is digitized, compressed and transmitted via local network technologies, typically using Ethernet running Internet Protocol (IP) over UTP cabling. The digitization and compression may occur within the camera, externally in a local video encoder, or perhaps back at the control room within a DVR. The quality of IP cameras and the schemes used to compress and transmit the video signals has improved dramatically over the past few years, but it is still the case that almost all the highest quality cameras are analog output types. Due to this, and the (for now) lower cost of analog cameras compared with equivalent quality IP cameras, analog systems remain popular for general use. In specific applications such as highway or tunnel traffic monitoring, many systems engineers are still specifying analog units for reasons such as video quality, zero latency and ease of control. A further “back to the future” wrinkle on this progression has been the application of the broadcast industry’s SDI (Serial Digital Interface) technology to high end CCTV systems over the past few years.

Analog Systems
Transmitting analog television signals through fiber has progressed through three phases:

  • AM (Amplitude Modulation ) systems
  • FM (Frequency Modulation) systems
  • Digital systems

The first commercially available fiber CCTV systems were AM.  They offer good video performance over several km but the signal does degrade with distance.  This technology is still widely used for simple video only links on multimode fiber where it very economically provides more than adequate performance.

FM systems came next: their overall technical specs are usually better than AM’s, they tend to have more constant performance with distance, and they operate happily over both MM and SM fibers.

Finally, uncompressed digital systems can have very high performance which does not change with distance (until it falls off a cliff) and is probably best with SM but is also commonly used with MM.

Pricing does increase a little going from AM to FM to digital but not excessively.  For example, Figure 1 shows two small transmitter modules that plug onto the camera: the AM unit retails for less than $100 whereas the digital equivalent is about 30 to 50% more expensive.  Of course, the digital’s performance is vastly superior!

Figure 1: Small AM and Digital Video Only Transmitter Modules (OSD365A)

A key issue in this improvement in technology from AM to FM to digital is that the bandwidth requirements increase fairly dramatically. An AM system needs just the video bandwidth, ie 5 to 10MHz, which means that with standard 500MHz.km MM fiber, operation over about 100km might be possible if it wasn’t for the fiber attenuation.

A basic FM system will require 30 to 50MHz so can operate over around 10km of MM.

On the other hand, digital systems operate at bit rates somewhere between 100Mbps (very basic 8-bit systems) and over 350Mbps (high end), which means optical bandwidths of 90 to 300MHz are needed. Therefore, it is difficult to guarantee reliable operation over much more than a few km of MM fiber for most such products. Consequently, digital modems and multiplexers are best suited to SM fiber. For more on fiber bandwidth issues see Tech Corner “How far can you go?”.
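The bandwidth-limited reach figures above for AM, FM and digital systems follow directly from the fiber's bandwidth-distance product. A sketch (attenuation ignored, so these are upper bounds only):

```python
MM_BW_MHZ_KM = 500  # standard legacy multimode fiber bandwidth at 1300nm

def mm_reach_km(signal_bw_mhz):
    """Bandwidth-limited reach over standard MM fiber; ignores attenuation."""
    return MM_BW_MHZ_KM / signal_bw_mhz

for system, bw in [("AM video (~5MHz)", 5),
                   ("FM video (~50MHz)", 50),
                   ("Digital video (~300MHz)", 300)]:
    print(f"{system}: ~{mm_reach_km(bw):.1f} km bandwidth-limited reach")
```

This reproduces the figures in the text: roughly 100km for AM (attenuation-limited in practice), around 10km for FM, and only a couple of kilometers for high end digital systems.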

IP Systems

IP CCTV systems have started to dominate many areas of video surveillance for good reasons, such as the flexibility in placement of cameras and the theoretical ease of integrating the surveillance of a building, campus, etc into the existing IT local area network. Clearly, there are differing viewpoints about the practicality and/or advisability of incorporating CCTV into an existing IT network, but it can be done successfully provided due allowance is made for both the average and peak transmission requirements of the CCTV. Typically, the video image quality seen at the control room rarely approaches that of well designed analog systems because:

  • Basic camera optics, sensor and analog processing are sometimes inadequate
  • Video encoder (within camera or external) and software decoder are not of high quality
  • The transmission rate has been choked in order to allow the network to breathe a little, which typically results in noise, blockiness, reduced frame rates and excessive latency.

All these issues are being addressed by system vendors so that excellent quality is now possible, provided the network can handle the increased data rates required for high quality equipment and software.  For Standard Definition (SD) video this could be an average of 0.3 to 1.0Mbps with peaks of 10Mbps or even greater.  However, megapixel cameras can increase this dramatically with average rates of 5 to 10Mbps for some types.  Most networks will operate at 100Mbps out of the camera or encoder and feed either directly to a switch located in a central equipment room or to that switch via a backbone network.  This backbone will sometimes be 100Mbps but more usually it will be Gigabit Ethernet such as the redundant ring network shown in Figure 2.  Clearly the number of megapixel cameras that can be supported on any network will be a lot less than is possible with standard cameras.  Alternatively, larger networks may need to move towards 10Gbps backbone technology.
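How many cameras a backbone can carry is a straightforward capacity estimate. The sketch below uses illustrative average stream rates from the text and an assumed 75% utilisation ceiling to leave headroom for peaks (the utilisation factor and per-camera rates are assumptions, not standard figures):

```python
def cameras_supported(backbone_mbps, avg_rate_mbps, utilisation=0.75):
    """Rough count of camera streams a backbone can aggregate, reserving
    headroom for peak traffic (utilisation factor is an assumption)."""
    return int(backbone_mbps * utilisation / avg_rate_mbps)

for backbone in (100, 1000, 10_000):
    sd = cameras_supported(backbone, 1.0)   # SD stream, ~1Mbps average
    mp = cameras_supported(backbone, 8.0)   # megapixel stream, ~8Mbps average
    print(f"{backbone}Mbps backbone: ~{sd} SD cameras or ~{mp} megapixel cameras")
```

The gap between the SD and megapixel counts illustrates why a network comfortable with standard cameras may need a 10Gbps backbone once megapixel units are deployed.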

Figure 2: Typical Redundant Ring Gigabit Backbone Network

Unfortunately, most backbone networks use MM fiber which really isn’t all that good. As has already been noted, the fiber used has changed from the original 50/125 design to 62.5/125 and back to 50/125 over the past 20 years or so:

  • Most legacy fiber has been supplied to meet the old FDDI specification (Fiber Distributed Data Interface, an old standard for token ring based backbone networks) and is usually known as OM1. This is 62.5/125um and has serious limitations in terms of fiber bandwidth, ie 160MHz.km @ 850nm and 500MHz.km @ 1300nm. This is fine if you are running Fast Ethernet over a few kilometers but starts looking a bit shaky once you have Gigabit speeds and really problematical once we are talking 10Gbps.
  • OM2, OM3 and now OM4 50/125um fibers have been developed to improve the performance at 850nm when using VCSEL light sources and these enable 1G and 10G operation over hundreds to several hundred meters. Still somewhat limiting when networks move outside buildings.
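The practical consequence at 10Gbps can be summarised in a small lookup. These reach figures are as commonly quoted from the IEEE 802.3 10GBASE-SR specifications; treat them as indicative rather than guaranteed for any particular installation:

```python
# Approximate 10GBASE-SR reach over each multimode grade (indicative figures
# as commonly quoted from IEEE 802.3, not guarantees for a specific plant).
REACH_10G_M = {"OM1": 33, "OM2": 82, "OM3": 300, "OM4": 400}

for grade, reach in REACH_10G_M.items():
    print(f"{grade}: up to ~{reach} m at 10Gbps")
print("Singlemode (10GBASE-LR): ~10,000 m")
```

Even the best multimode grade manages only a few hundred meters at 10Gbps, which is why these fibers become limiting as soon as networks move outside buildings.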

Operation at 1300nm over multimode fiber is sometimes needed, eg when using SM oriented equipment, and some care is needed to ensure reliable operation. We have covered in an earlier Tech Corner some of the issues that can arise in such situations and the consequent need for Mode Conditioning Patchcords (MCPs).

Of course, the other solution to the limitations of MM fibers is simply to replace them completely with singlemode fiber and singlemode equipment. This enables you to move seamlessly from generation to generation of technology without touching your network backbone cabling. In fact, to be absolutely sure that your infrastructure does not need any upgrading, also install APC (Angled Physical Contact) connectors on all your new SM fiber, with all the benefits APC technology gives you. The Tech Note “Selection of Optical Connectors” goes into some detail on the different connector types and their features.

SDI Systems

Given the enormous emphasis placed on IP systems by vendors, consultants and end users over the past several years it would be reasonable to assume that this is the ultimate technology for CCTV.  Both the original analog technology and then compressed digital technology on which IP systems are based came out of the broadcast industry.  Now another broadcast originated system is being applied to CCTV: Serial Digital Interface (SDI).  This is a series of standards that started with uncompressed Standard Definition (SD) video running at 270Mbps over coaxial cable and which is also now available for High Definition (HD) at 1485Mbps and at 2970Mbps.  It is very likely that the next SDI standard will operate at around 10Gbps.
The great thing about SDI is that it is about as pure a digital video signal as it is possible to get so that with good cameras, lenses and transmission technology there are very few impairments in the pictures.  Thus, for a minority of higher end CCTV applications it is a very useful technique.  An example might be where centralised video analytics are being used in high risk sites such as airports where there is little margin for the distortion and noise that might accompany even high quality IP systems.
In some ways the SDI solution looks like an analog solution: camera, 75Ω coaxial cable and BNC connectors. It can, theoretically, use the same copper cabling infrastructure already in place for conventional analog CCTV. There are now many surveillance oriented cameras that have a native HD-SDI (1.485Gbps) or 3G-SDI (2.97Gbps) interface. Unfortunately, the distance these systems can transmit over the coaxial cable is somewhat limited: from 140 or so meters using HD-SDI down to as little as 70 meters using 3G-SDI – somewhat similar to the old analog technology. In fact, this compatibility with existing coaxial cable infrastructure is often touted as the key motivation for the move to HD-SDI technology: just replace your cameras and a few components at the control center and "Bingo" – HD zero latency images without the hassle of an IP network.
Note also that these numbers assume good quality coaxial cable: attempting to make SDI work with rubbish cable is contra-indicated.
Such transmission distances are too short to be practical in many CCTV networks, so fiber optic systems are often essential and a growing range of SDI products is available for the CCTV industry.

It is certainly best to use SM fiber with these systems but operation with MM fiber is often practical, although it may be necessary to use MCPs.

Cables and Connectors

There are many different cable types, which tend to be divided into those most suitable for either indoor or outdoor deployment.  Most cables used in conventional commercial or industrial outdoor sites do not move once installed, so designs such as the loose tube are ideal.  Figure 3 illustrates the cross section of a 6-tube design.  It has great flexibility in that the individual tubes can each carry from 1 to 12 individual fibers or, in some designs, from 1 to 6 or more 12-fiber ribbons.  The cable's central strength member will usually be dielectric (for example, fiber reinforced plastic, FRP) but can also be metallic.  The FRP core is usually quite stiff, so bending radii of 500mm or so are quite common, which means that this design is not so suitable for in-building use where a fair degree of flexibility is needed.  Some sort of filling is often used inside the tubes to prevent water migration up the cable.  Such fillings can be gels or dry powders that expand in contact with water.

Figure 3 Loose tube outdoor cable cross section


Distribution (or riser) cables used within buildings need to be flexible, strong and must often meet stringent requirements such as low smoke, zero halogen flame and flame retardancy.  The most common design is the tight buffer illustrated in Figure 4.

Figure 4  Tight buffer distribution cable


The fibers in tight buffer designs typically have a secondary plastic coating (Hytrel or nylon) that takes the diameter up from the usual 0.25mm to 0.9mm.  The fibers are then bundled together and surrounded by Kevlar for strength, and an outer jacket such as PVC or polyurethane is added.

Single tube variants of the loose tube design are also used in buildings as distribution cables.  Such cables have up to 24 fibers in the central tube that will be surrounded by Kevlar and then a plastic jacket and are flexible enough for indoor use.

Usually, cables are terminated in Fiber Optic Breakout Trays (FOBOTs) with the terminated trunk fiber appearing via a through adapter (SC in most Australian installations).  Connection to the equipment is via a patchcord which will be SC at the FOBOT end and commonly ST, SC or LC at the equipment end.  This was discussed in detail in Tech Corner – "The selection of optical connectors".

Testing

When a cable is installed and terminated the installer will normally test the work.  This should include loss measurements using a light source and an optical power meter, and will very often include bothway OTDR (Optical Time Domain Reflectometer) measurements, which give a very good picture of the loss of all components of the cabling: cable(s), connectors and splices.  Such "certification" is highly recommended in all but the most straightforward installations as it not only establishes the real loss performance on Day One but gives you a reference for any future issues that might arise.  Do note that all such OTDR measurements should be carried out at both 850 and 1300nm for MM and at both 1300 and 1550nm for SM.  Good results on a SM link at 1300nm do not guarantee good results at 1550nm: small imperfections in fiber handling that barely affect SM fiber at 1300nm can cause major problems at 1550nm.
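The arithmetic behind such loss measurements can be sketched as a simple link loss budget: fiber attenuation plus connector and splice losses. The per-unit figures below are generic textbook assumptions, not values from this article; substitute the measured numbers from your own certification results.

```python
# Simple optical link loss budget. Default per-connector and per-splice
# losses are typical industry assumptions (not from this article) -
# replace them with measured values where available.

def link_loss_db(length_km, fiber_db_per_km, n_connectors, n_splices,
                 connector_db=0.5, splice_db=0.1):
    """Total expected link loss in dB."""
    return (length_km * fiber_db_per_km
            + n_connectors * connector_db
            + n_splices * splice_db)

# Example: 3 km of SM fiber at 1550nm (~0.25 dB/km assumed),
# 2 connectors and 4 splices.
print(f"{link_loss_db(3, 0.25, 2, 4):.2f} dB")  # 2.15 dB
```

Comparing such a calculated budget with the Day One OTDR record makes it easy to spot when a link has degraded.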

After the link has been installed and the fiber installation specialist is long gone, what do you do if you have a problem with the system?  The first and most obvious action is to check whether the fibers are still working correctly.  Much transmission equipment will have some sort of indicator for the received optical signal: it may be a simple "OK/NOT OK" LED on the front panel or it may be embedded in the GUI software and actually show the received optical power level.  Or, you may have nothing obvious to look at.

We strongly recommend that anyone with more than a few fiber links in their plant or network, or anyone involved in installing fiber systems, consider buying a simple low cost (less than $800) optical power meter: it is a great investment which can save you a lot of wasted time.  With an optical power meter you can measure the output of transmitters, check receiver sensitivity and overload problems, and measure the optical loss of cables, connectors and splices.  A power meter is all that's needed by 90% of organisations.  A visual fault indicator, which injects intense red light into the fiber, is also handy for identifying breaks or fractures in fibers up to hundreds of meters away.  Of course, if your operation cannot tolerate any downtime at all and/or you are in a remote area then an OTDR and fusion splicer might also be advisable.
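Power meters read in dBm (dB relative to 1 milliwatt), and the everyday checks described above reduce to simple logarithmic arithmetic, sketched here:

```python
import math

# Helpers for the arithmetic behind optical power meter readings:
# dBm is decibels relative to 1 mW, and link loss in dB is simply
# the difference between the transmit and receive dBm readings.

def mw_to_dbm(power_mw):
    return 10 * math.log10(power_mw)

def dbm_to_mw(power_dbm):
    return 10 ** (power_dbm / 10)

def loss_db(tx_dbm, rx_dbm):
    """Loss between two power-meter readings."""
    return tx_dbm - rx_dbm

print(mw_to_dbm(1.0))        # 1 mW -> 0.0 dBm
print(loss_db(-3.0, -15.5))  # 12.5 dB of loss
```

For example, a transmitter measured at -3dBm feeding a receiver that reads -15.5dBm implies 12.5dB of loss in the cabling between them.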

Summary

Fiber is a well established technology that can offer enormous benefits to end users in enabling interference free transmission over almost any distance of high quality video, whether it be in analog, IP or SDI formats.  The fiber type used will often have been already decided due to factors outside the user’s control but if not it is recommended that singlemode be looked at very seriously.  Using APC style connectors for such SM infrastructure is also recommended.  Factors such as the type of cable to be used will be determined by site conditions, eg typically loose tube types for the longer outdoor runs and tight buffer (aka distribution) within buildings.  It is always recommended to use FOBOTs to interface between the cabling and the equipment.  Various standards either recommend or mandate the use of the SC type connector for this building cabling but please note that the standards do not specify which connectors should be used on the transmission equipment: this is up to the manufacturer.  Consequently, much use is made of patchcords with an SC on the FOBOT end and an ST or LC or whatever on the equipment end.
It is recommended that OTDR based testing of new installations other than the most simple be carried out and a record kept of these results.  It is also suggested that an optical power meter be readily available for quick checks in case of any issues.

HD-SDI Systems for CCTV

Given the enormous emphasis placed on IP systems by vendors, consultants and end users over the past several years it would be reasonable to assume that this is the ultimate technology for CCTV.

Both the original analog technology and then compressed digital technology on which IP systems are ultimately based came out of the broadcast industry.  Now another broadcast industry originated system is being applied to CCTV: Serial Digital Interface (SDI).  This is a series of standards developed by the Society of Motion Picture and Television Engineers (SMPTE) that started many years ago with uncompressed Standard Definition (SD) video running at 270Mbps over coaxial cable.

This was extended so that it is now available at a variety of data rates, the most important of which for CCTV applications are High Definition (HD) at 1485Mbps and at 2970Mbps.  It is very likely that the next SDI standard will operate at around 10Gbps.  All SDI digital signals operate at a peak to peak voltage level of 800mV ±10% over 75Ω coaxial cable and employ BNC connectors.  It is possible with high quality coaxial cable to transmit reliably over at least 400 meters at 270Mbps, 220 meters at 1485Mbps and 200 meters at 2970Mbps, roughly comparable to what is attainable with analog signal transmission over coaxial cable.
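The rates and indicative coax distances quoted above can be captured in a small lookup table, a convenience sketch using only the figures given in this article:

```python
# SDI generations with bit rate and indicative maximum reach over
# high-quality 75-ohm coaxial cable, using the figures quoted above.
SDI_STANDARDS = {
    "SD-SDI": {"rate_mbps": 270,  "coax_reach_m": 400},
    "HD-SDI": {"rate_mbps": 1485, "coax_reach_m": 220},
    "3G-SDI": {"rate_mbps": 2970, "coax_reach_m": 200},
}

def max_coax_reach(standard):
    """Indicative maximum coax reach in meters for a given SDI generation."""
    return SDI_STANDARDS[standard]["coax_reach_m"]

for name, spec in SDI_STANDARDS.items():
    print(f"{name}: {spec['rate_mbps']} Mbps, up to ~{spec['coax_reach_m']} m")
```

The pattern is clear: each roughly 2x step in bit rate erodes the usable coax distance, which is exactly why fiber becomes attractive at HD rates.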

Note however that achieving these sorts of distances does assume good quality coaxial cable: attempting to make SDI work over significant distances with some of the cheap cable sometimes used in CCTV installations is not a good idea.

The great technical feature of SDI is that it is about as pure a digital video signal as it is possible to get, so that with good cameras, lenses and transmission technology there are very few impairments in the pictures, with the actual source picture being delivered across the SDI link unaltered.  Thus, for some higher end CCTV applications it is a very useful technique.

An example might be where centralised video analytics are being used in high risk sites such as airports where there is little margin for the distortion and noise that might accompany even high quality IP systems.

Other attractive features of HD-SDI are:

  • There are no network issues to worry about, so it is easily installed and commissioned by non-IT installers and technicians.
  • There is no transmission delay (i.e. latency), so camera control is as easy as for analog cameras.

In some ways the SDI solution looks like an analog solution and, as already noted, can often use the same copper cabling infrastructure already in place for conventional analog CCTV.

There are now many surveillance oriented cameras that have a native HD-SDI (1.485Gbps) or 3G-SDI (2.97Gbps) interface.  Of course, distances of around 100 to 200 meters are too short to be practical in many CCTV networks, so fiber optic transmission systems are often essential and there is a growing range of SDI fiber products available for the CCTV industry.  It is certainly best to use singlemode fiber with these systems but operation over limited lengths of multimode fiber (eg, 300 meters at 3G-SDI) is also possible using optical components operating at 850nm.

So, while HD-SDI can offer superb video quality and can be very convenient to install and to use, what are the downsides?

A minor one is that displays with native SDI inputs tend to be expensive, so the usual solution is to use SDI-HDMI converters with HDMI monitors.  These are readily available and generally work well, but they are an additional expense and yet another bit of equipment in the control room.  A more important concern is that of storage.  There are many 4-channel and some 8-channel DVRs which accept HD-SDI signals, compress them (generally using H.264) and store them on an internal hard drive.  Note three issues here:

  1. This is a DVR, so the networked flexibility and scalability advantages of NVRs are not available.
  2. Currently, only a restricted number of channels are available compared to the 16 to 64 typical of conventional DVRs and NVRs.
  3. The signal gets compressed and the level of compression determines the quality of the recorded image, just as it does in conventional IP systems.

Finally, it is important to note that HD-SDI technology is being adapted and customised for CCTV via a consortium of suppliers and users known as the HDcctv Alliance (see www.highdefcctv.org).  The Alliance is developing standards for several areas such as the transmission technology itself (based on the broadcast industry’s SMPTE standards) and the transmission and integration of duplex data and audio signals onto the same coaxial cable carrying the HD-SDI signal.

In addition, lossless compression techniques (such as Dirac) will probably be incorporated to provide greater transmission distances over standard CCTV quality coaxial cables.

As often happens with the introduction of new technologies, there is a fair amount of hype about HD-SDI, with some extremely optimistic projections for its take up in the marketplace.  The reality is that it does have some very nice features that have appeal for different markets ranging from very high end applications down to retrofits of existing simple analog systems.  OSD expects that the trend to IP will continue, particularly in larger systems where the flexibility and built-in redundancy of well designed networked CCTV is of paramount importance, but that HD-SDI provides a very useful option for many applications.

For further information on HD-SDI systems and the options available for optical signal transmission please contact any of OSD’s systems engineers who will be very happy to assist you.