To understand the system as a whole, there are quite a few concepts to cover, and
that requires the use of technical terms: these will be explained as
straightforwardly as possible. They fall into some basic categories:
digital representation and compression
transmission and error correction
reception and storage
Concept 1: digital representation
At the most basic level, digital representation is the use of just two values:
either the value is 0 or 1. This is unlike most real-world things, which can
take a wide range of values. For millennia human civilisation was quite able to
get along using ten digits, no doubt inspired by the collection of fingers and
thumbs found to hand. The Romans used letters for numbers (III for three, VII
for seven), the ancient Egyptians sections of an eye-symbol. It was only
several hundred years ago that 'nothing' gained the familiar ring symbol. This
nought gave just ten basic symbols the ability to represent any number, by
using the position to denote tens, hundreds and so on.
For most people there is little to be gained by using binary: although it is
quite easy to understand, it is of little everyday use. However, the concept
provides computers with their awesome calculation and storage powers.
Just as the
number 23 actually means 'two lots of ten' plus 'three lots of one', and 15
means 'one lot of ten' plus 'five lots of one', in binary each column makes
lots of (from right to left) 1, 2, 4, 8, 16, 32, 64 and so on, each column
value being twice that to its right. So:
23 in binary is 10111
15 in binary is 01111
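The column arithmetic can be checked with a few lines of Python:

```python
# Each binary column is worth twice the one to its right: 1, 2, 4, 8, 16, ...
def from_binary(digits):
    """Convert a string of binary digits to an integer by summing column values."""
    total = 0
    for d in digits:
        total = total * 2 + int(d)  # shift everything left one column, add the new digit
    return total

print(from_binary("10111"))  # 16 + 4 + 2 + 1 = 23
print(from_binary("01111"))  # 8 + 4 + 2 + 1 = 15
```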
This may be of limited interest in itself, but making numbers so simple allows
very powerful arrays of transistors to process them. In the earliest days this
was just four bits at a time (0 to 15), then eight bits (0 to 255), later
sixteen bits (0 to 65,535), then 32 bits (0 to 4,294,967,295) and now 64 bits
(0 to 18,446,744,073,709,551,615). The representation of data in binary form
is therefore desirable, as it allows high-powered, reliable computers to
perform actions that are truly impossible otherwise. This is because, it turns
out, it is much more practicable and cost-effective to make something very
simple run very fast.
It is not just counting numbers that can be stored using binary digits: they
can be used for other kinds of data. In the 'ASCII' standard, for example, the
capital letter A is stored as the number 65.
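This is easy to verify, as the mapping is built into most programming languages:

```python
# The ASCII code for capital 'A' is 65, which is 1000001 in binary.
print(ord("A"))               # 65
print(format(ord("A"), "b"))  # 1000001
```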
Concept 2: analogue to digital conversion
The examples so far have all used positive whole numbers (known as integers),
but the real world is not always like that. Whilst there are plenty of things
we can count (sheep, beans, lamb chops, tins of beans) there are many that we
cannot: temperature, distance, weight or brightness.
If you got
a group of people together and measured their heights you would find two
things: first, that you would have a wide range of values, and secondly, that
none of them would be exactly a whole number, even if you measured in, say,
millimetres. The latter comes down to two elements: how carefully you worked
out the value and how accurate your measuring equipment was. You might decide
to write each value down in millimetres, rounding up or down using a laser
measure. Making this kind of decision turns analogue values (anyone can be any
height) into counting values. This process, known as 'quantization', is at the
heart of the first stage of digital audiovisual processing: analogue to
digital conversion (ADC).
The final element to add is time. By setting a fast and accurate timer, we can
use the ADC process to produce a stream of values. A simple form of this takes
a mono sound signal and, 44,100 times a second, makes a value from the current
signal level. By storing this data and then using a reverse process (DAC), the
original sound can be recreated almost perfectly. If you have ever listened to
a compact disc (CD), you will be familiar with how well this system works.
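A minimal sketch of the ADC/DAC round trip, assuming a 440 Hz test tone and CD-style 16-bit samples:

```python
import math

SAMPLE_RATE = 44_100          # samples per second, as on CD
BITS = 16                     # bits per stored sample
LEVELS = 2 ** (BITS - 1)      # 32,768 steps each side of zero

def adc(signal, seconds):
    """Sample an analogue signal (a function of time), quantizing each value."""
    n = int(SAMPLE_RATE * seconds)
    return [round(signal(t / SAMPLE_RATE) * (LEVELS - 1)) for t in range(n)]

def dac(samples):
    """The reverse process: stored integers back to signal levels."""
    return [s / (LEVELS - 1) for s in samples]

tone = lambda t: math.sin(2 * math.pi * 440 * t)   # a 440 Hz test tone
stored = adc(tone, 0.01)                           # 441 integer samples
recovered = dac(stored)
# Each recovered value is within half a quantization step of the original:
# the 'almost perfectly' of the text, made precise.
```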
There are limitations: only frequencies up to half of the 'sample rate' can be
encoded. Sampling over time ('temporal encoding') is not the only option. A
digital picture is also made using quantized values, representing the picture
elements (pixels) that were analogue in the real world. In this digital system
the values represent red, green and blue levels in a matrix.
It is therefore
possible to digitize a moving image too. This involves taking 'samples' of a
'digital still' many times a second: usually 24 (for movies), 25 (UK and the
EU) or 30 (USA) times a second.
Concept 3: data compression
All this generates an awful lot of data. A standard definition television
picture (720x576 pixels) at 25 frames per second (25fps) with 24 bits per
pixel (that is, 8 bits per colour), plus the stereo audio, generates:
720 x 576 x 25 x 24 = 248,832,000 bits per second for the video
44,100 x 16 x 2 = 1,411,200 bits per second for the audio
248,832,000 + 1,411,200 = 250,243,200 bits per second in total
By convention, we call 1024 bits one kilobit, and 1024 kilobits one megabit.
Using this example we can see that we would need to transfer about 238.7
megabits per second for a digital TV picture. As this is about thirty times
the fastest broadband connection, this is an impracticable amount of data.
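The arithmetic can be reproduced directly (using the CD sample rate of 44,100 samples per second, 16 bits per sample, two channels for the audio):

```python
# Uncompressed bitrate for standard definition television.
video = 720 * 576 * 25 * 24       # pixels x frames/s x bits per pixel
audio = 44_100 * 16 * 2           # sample rate x bits per sample x channels
total = video + audio

megabits = total / (1024 * 1024)  # using the 1024-based convention
print(video, audio, total)        # 248832000 1411200 250243200
print(megabits)                   # about 238.65 megabits per second
```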
To save transmission space, we need to compress this data. There are two forms
of data compression: lossless and lossy.
Lossless compression takes the original data, applies one or more systems of
mathematical analysis to it and (hopefully) spits out less data to be stored.
If that stored data is put through the reverse process, the exact original
data is re-created, bit for bit. This principle is used by file formats such
as ZIP, RAR, and SIT that are used to transfer big files between desktop
computers.
There is a small down-side to this type of compression: it is impossible to
guarantee the level of compression achieved, as it all depends on the source
data. Sometimes you may get almost no data output, and sometimes you get as
much as you started with. However, you can attempt to compress and decompress
any type of data using lossless compression; the program algorithms do not
need to know anything about what the data represents.
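Python's zlib module (the same DEFLATE algorithm used by ZIP files) shows both properties: the round trip is exact, but the amount of compression achieved depends entirely on the source data:

```python
import random
import zlib

original = b"AAAAABBBBBCCCCC" * 100   # repetitive data: compresses very well
random.seed(1)
noisy = bytes(random.randrange(256) for _ in range(1500))  # noise: barely compresses

for data in (original, noisy):
    packed = zlib.compress(data)
    assert zlib.decompress(packed) == data  # bit-for-bit identical round trip
    print(len(data), "->", len(packed))
```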
If the data
is to be broadcast (or, say, streamed on-line) then there is a need to ensure
that the amount of data is always reduced, so the compressed data can be
transmitted in real time using the available bandwidth.
This is where the second type of data compression, called 'lossy', comes in.
Lossy compression techniques are not general-purpose. They rely on knowing two
things: the form of the data being represented, and a little about the target
device for the data: human beings.
For example, the retina of the human eye has 'rods' and 'cones' packed
together. The 'cones', located in the centre, allow us to perceive three
colours: red, green and blue. The 'rods' are away from the centre and react
accurately to many light levels, but only in monochrome. The human brain takes
the monochrome, red, green and blue elements and combines them into
full-colour vision. Knowing this about the human eye provides the simplest
form of lossy compression. The original image is converted from red, green,
blue format into three corresponding components: the brightness ('luminance')
and two colour-difference ('chroma') signals.
The trick is that we can now dispose of some of this data, because we humans
will still perceive the image as the same. The next stage is to take the three
image components and break them down into 'chessboards'. From our original
image we will have:
brightness: 720x576 → 90 x 72 = 6,480 chessboards x 1 = 6,480
colour (half resolution): 360x288 → 45 x 36 = 1,620 chessboards x 2 = 3,240
Each of these
9,720 chessboards is an 8x8 matrix of values, ready for compression. There are
several stages:
first, an 'average' value for the whole chessboard is calculated
next, each value on the board is recalculated by subtracting the average from it
then each of these new values (which could be positive or negative) is divided
by a 'compression factor'
then the values are read from the chessboard in a special zig-zag pattern
These zig-zag values are then 'run length encoded'. Because many of the values
from the zig-zag 'walk' will be zeros, this achieves good data compression.
Only ONE of
the above stages is actually lossy: the division by the compression factor.
All the other stages do not remove any data. When this data is eventually used
to recreate the image, the higher the compression factor, the less detail
there will be in the recreated image. A very large factor could result in just
a single chessboard with just the 'average value' in each square. A low factor
will retain almost all the original detail.
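The stages above can be sketched on a toy 4x4 'chessboard' (real systems use 8x8 blocks and a cosine transform rather than a simple average, so this is only an illustration of the principle; the sample values are made up):

```python
# A toy version of the chessboard stages, on a 4x4 block for brevity.
block = [
    [52, 55, 61, 66],
    [70, 61, 64, 73],
    [63, 59, 55, 90],
    [67, 61, 68, 62],
]
n = 4
flat = [v for row in block for v in row]
average = sum(flat) // len(flat)                       # stage 1: block average
diffs = [[v - average for v in row] for row in block]  # stage 2: subtract the average
FACTOR = 10                                            # stage 3: the ONLY lossy step
quantized = [[int(d / FACTOR) for d in row] for row in diffs]  # truncate toward zero

# stage 4: read values diagonally (a zig-zag-style walk groups the zeros together)
order = sorted(((i, j) for i in range(n) for j in range(n)),
               key=lambda p: (p[0] + p[1], p[0]))
walk = [quantized[i][j] for i, j in order]

# stage 5: run-length encode as [value, repeat_count] pairs
rle = []
for v in walk:
    if rle and rle[-1][0] == v:
        rle[-1][1] += 1
    else:
        rle.append([v, 1])
print(rle)  # long runs of zeros collapse into single entries
```

With these sample values the sixteen numbers shrink to four run-length pairs; a bigger compression factor would leave nothing but the average.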
It is awkward to compute the compression factor value, because a fixed amount
of output data is needed for transmission. Too much data would not fit in the
capacity for broadcast, but too little data would result in first a blurry and
then a 'blocky' picture.
Concept 4: Temporal compression
The next compression technique has the marvellous name 'temporal compression'.
Under normal circumstances, some or all of one frame of a TV picture will be
identical to the previous one. By comparing consecutive frames and identifying
those parts that have not changed, the compression system can simply bypass
these sections. If the picture is mainly static (such as a 'talking head',
like a newsreader) the only data that needs to be transmitted is the small
sections that have changed.
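The idea can be sketched in a few lines (hypothetical frame data, each number standing in for a block of pixels):

```python
# Transmit only the parts that changed between consecutive frames.
def changed_blocks(previous, current):
    """Return {position: value} for every position that differs between two frames."""
    return {i: v for i, (p, v) in enumerate(zip(previous, current)) if p != v}

def apply_changes(previous, changes):
    """Receiver side: rebuild the new frame from the old one plus the changes."""
    return [changes.get(i, p) for i, p in enumerate(previous)]

frame1 = [10, 10, 10, 10, 20, 20, 20, 20]
frame2 = [10, 10, 10, 10, 20, 99, 20, 20]   # a mostly static 'talking head'

delta = changed_blocks(frame1, frame2)
print(delta)                                # {5: 99} - one value instead of eight
assert apply_changes(frame1, delta) == frame2
```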
The drawback to such a system is that a frame that depends on a previous one
cannot be displayed if that previous frame was not received: the viewer does
not want to wait for several seconds when 'flicking' between TV channels, or
for the picture to 'unjam' after a momentary reception break.
There are also many situations where a considerable portion of the picture
does not change between frames, but moves slightly. Dealing with this is the
final stage of the MPEG2 compression system, and the most computationally
intense. Having identified those sections of the picture that have remained
static between frames, the encoder has to identify which parts of the image
have moved, and where they have moved to.
This is a
very complicated task! There is an almost infinite combination of movements
that could happen. For example, a camera at a football match may pan
horizontally, but a camera following a cricket ball's trajectory moves in many
directions at once. Programmes can have scrolling graphics, fades and wipes;
material can wobble or shake. Objects can move around the screen like a tennis
ball. And this can all happen at the same time.
The better the encoding software is, and the more powerful the hardware, the
more motion can be detected. The better the detection, the less data capacity
is required to describe the moving image, and the more can be allocated to
accurately reproducing the detail of those sections that have changed.
You may wonder how effective all this computation is. Using these techniques
in combination will reduce the initial 238Mb/s (megabits per second) to as low
as 2Mb/s, with higher quality results at 5Mb/s - a compression ratio of
between about 50:1 and 120:1!
Concept 5: Statistical multiplexing and opportunistic data
This can be enhanced by using more techniques! On Freeview, for example, each
transmission multiplex carries either 18Mb/s or 24Mb/s. By dynamically
co-ordinating the 'compression factors' of a number of TV channels together
using 'statistical multiplexing', one or two more channels can be fitted onto
each multiplex. If there is any capacity left over at any time, it is
allocated to the interactive text services (for example BBCi) as
'opportunistic data'.
Concept 6: Audio compression
By comparison, the audio data compression is simple!
The familiar "MP3" encoding of sound in fact refers to "layer III of
MPEG-1". This technique uses mathematical functions called fast Fourier
transforms to convert each small section of sound into a number of component
waveforms. When these waveforms are recombined, the original sound can be
heard.
The audio compression simply
prioritizes the information in the sections of sound that humans can hear, and reduces
or removes sound information that cannot be heard. As this changes from sample
to sample, the compression routines optimize for each one. This produces a constant
stream of bits at a given rate which is included alongside the picture
information in the "multiplex" (see below).
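A toy version of the decompose/discard/recombine idea, using a plain discrete Fourier transform (real MP3 encoders use filter banks and a psychoacoustic model, so this is only the skeleton of the idea):

```python
import cmath

def dft(samples):
    """Decompose a section of sound into its component waveforms."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(spectrum):
    """Recombine the component waveforms back into sound."""
    n = len(spectrum)
    return [sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n
            for t in range(n)]

signal = [0.0, 1.0, 0.0, -1.0] * 4            # a pure tone, sampled 16 times
spectrum = dft(signal)
# The 'lossy' step: discard components too small to matter (here, numerical noise;
# a real encoder discards what the ear cannot hear).
kept = [c if abs(c) > 1e-9 else 0 for c in spectrum]
restored = idft(kept)
assert all(abs(a - b) < 1e-9 for a, b in zip(signal, restored))
```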
Concept 7: The "transport stream"
It is worth
taking a moment to consider the multiplexing process a little more. As we have
seen above, the video and audio are highly processed, resulting in streams of
bits, and there can be many simultaneous audio and video streams to be
transmitted. The "multiplex" here has nothing to do with a large cinema: it is
a mathematical concept. The actual implementation is quite complex, but the
concept is not. At the "multiplexing" end of the system, there are a number of
"data pipes" that carry audio, video and other forms of data. The "other
forms" can be the "now and next" information, a full Electronic
Programme Guide, subtitles, or the text and still images for an MHEG-5 system
(such as BBCi or Digital Teletext).
The multiplexer takes a little data from each "data pipe" in turn. This amount
of data, called a "packet", is the same size for each incoming stream.
Before the packet is sent to be broadcast, it is "addressed" with a
number to identify the data pipe from which it came.
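A sketch of the idea, with made-up pipe identifiers and a toy packet size (real DVB transport packets are 188 bytes, and the real process is considerably more involved):

```python
PACKET_SIZE = 4   # toy size; real transport stream packets are 188 bytes

def multiplex(pipes):
    """Round robin: take one packet from each pipe in turn, tagged with its pipe id."""
    packets = []
    offsets = {pid: 0 for pid in pipes}
    while any(offsets[pid] < len(data) for pid, data in pipes.items()):
        for pid, data in pipes.items():
            if offsets[pid] < len(data):
                chunk = data[offsets[pid]:offsets[pid] + PACKET_SIZE]
                packets.append((pid, chunk))     # "addressed" with the pipe id
                offsets[pid] += PACKET_SIZE
    return packets

def demultiplex(packets, wanted_pid):
    """Receiver: keep only the packets for the selected pipe, discard the rest."""
    return b"".join(chunk for pid, chunk in packets if pid == wanted_pid)

pipes = {0x100: b"VIDEO-DATA--", 0x101: b"AUDIO!", 0x102: b"EPG"}
stream = multiplex(pipes)
assert demultiplex(stream, 0x101) == b"AUDIO!"
assert demultiplex(stream, 0x100) == b"VIDEO-DATA--"
```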
At the receiver, these packets are received in turn. Whilst it is perfectly
possible to decode all the original data pipes, this is not normally required,
as the user will normally only view one video and listen to one audio channel
at a time.
The "demultiplexing" process therefore allows most of the data to be discarded
by the receiver, with only one selected video, one selected audio and one
selected text stream being used by the rest of the receiver's circuitry.
In practice, the receiver will also demultiplex and store information that
comes from a number of special "data pipes" provided by the broadcaster.
This will include EPG information, and a directory of the services included in
the multiplex. For example, this includes the Network Information Table (NIT)
that lists the names of the channels provided, and the pipe identifiers for
the video (VPID) and audio (APID) of each. This type of information is
provided on a constant loop, as it is required when a tuner is scanning for
channels during set-up, and allows for the allocation of the "logical channel
numbers" - the numbers you type into the remote control to view a channel.
Channels persist in the NIT when they are off-air, allowing channels that
broadcast part-time to still be discovered. Radio stations simply have no
VPID, with radio and part-time channels relying on an automatically started
text service to provide some vision.
As a final note, the term "statistical multiplexing" refers to the behaviour
of the multiplexer. In contrast to "time division multiplexing", where each
of the incoming data pipes is processed in a "round robin" fashion,
each in turn, the "statistical multiplexer" processes each pipe in
turn but allows "extra goes" for those with the most, or most
critical, data: priority goes to video and audio, with the text and EPG
services being the least important.
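The "extra goes" idea can be sketched as a weighted round robin (the priority values here are purely illustrative):

```python
# Statistical multiplexing sketch: higher-priority pipes get extra turns per round.
PRIORITY = {"video": 3, "audio": 2, "text": 1}   # goes per round (illustrative)

def schedule(rounds):
    """Build the order in which pipes are serviced over a number of rounds."""
    slots = []
    for _ in range(rounds):
        for pipe, goes in PRIORITY.items():
            slots.extend([pipe] * goes)          # "extra goes" for critical pipes
    return slots

plan = schedule(2)
print(plan.count("video"), plan.count("audio"), plan.count("text"))  # 6 4 2
```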
Concept 8: Transmission and error correction
After all the processes above, we have a single data stream. There are three
main ways this is broadcast:
via a terrestrial transmitter
via satellite
via cable TV
The details of these systems are somewhat different, but they all have one
thing in common: they were originally used to broadcast analogue television.
This means that they are one-to-many, synchronous and unidirectional.
This differs considerably from most digital computer systems, which are
usually one-to-one (either client-server or peer-to-peer), bi-directional and
(usually) asynchronous. It is for this reason that it has been quite hard to
provide TV services on the internet.
Broadcasting so much data perfectly via satellite, cable and terrestrial means
is quite a challenge. Even the most advanced analogue TV with the best
connections, dish or aerial will not provide a perfect image 100% of the time.
The digital TV transmission system, COFDM (Coded Orthogonal Frequency Division
Multiplexing), assumes that the path between the transmitter and the receiver
will be less than perfect, and uses a number of further techniques.
The first is "forward error correction" (FEC). This is vital because the
transmissions are one-way, with no way for the receiver to ask for corrupt
data to be resent. The simplest way of providing FEC would be to broadcast
every bit twice. That would be very inefficient, so in practice mathematical
techniques are used to add a smaller amount of redundant data that still
allows errors to be corrected. The FEC level used in DVB-T, DVB-S and DVB-C
(terrestrial, satellite, cable) is usually quoted as "5/6" or
"3/4", meaning that five out of every six, or three out of every four,
transmitted bits carry useful data.
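The "broadcast every bit twice" idea can be pushed one step further to show why redundancy lets a one-way receiver fix errors without asking for a resend: with three copies and a majority vote, a single corrupted bit is corrected. (Real DVB systems use far more efficient convolutional and Reed-Solomon codes; this is only the principle.)

```python
# Crude forward error correction: send three copies, take a majority vote.
def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(received):
    out = []
    for i in range(0, len(received), 3):
        trio = received[i:i + 3]
        out.append(1 if sum(trio) >= 2 else 0)  # majority vote corrects one flip
    return out

data = [1, 0, 1, 1]
sent = encode(data)
sent[4] ^= 1                 # corrupt one bit in transit
assert decode(sent) == data  # the receiver still recovers the original
```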
Concept 9: COFDM
The second system is COFDM itself. Having added the FEC to the multiplex data,
the COFDM transmitter takes this and splits it into thousands of 'sub
carriers', which are then carried within the analogue transmission space.
Each sub carrier is modulated using a 'constellation' of 2x2=4 points (QPSK,
quadrature phase shift keying), 4x4=16 points (16QAM, quadrature amplitude
modulation) or 8x8=64 points (64QAM). Newer standards such as ATSC (in the
US), DVB-S2 and DVB-T2 also use 16x16=256 points (256QAM). The more points in
the constellation, the more data each sub carrier can convey. However,
increasing the number of points means that they are all "spaced closer
together", making them more prone to being confused with one another by
interference; this can be compensated for, to a degree, by transmitting at
higher power.
To help deal with the potential interference, the sub carriers do not
broadcast all of the time: for a fraction of each symbol period they are
unused. The effect of this is that brief external interference, from analogue
transmitters, other digital transmitters or anywhere else, causes errors that
the FEC encoding can correct. The amount of time each sub carrier is not
transmitting is called the "guard interval". Using a larger constellation
provides more data capacity, as does lowering the guard interval, but doing
either reduces the reliability of the service.
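The relationship between constellation size and capacity can be sketched as follows (the symbol rate is an illustrative, approximate figure for DVB-T; exact rates depend on channel bandwidth and guard interval):

```python
import math

# Bits carried per symbol: log2 of the number of constellation points.
for name, points in [("QPSK", 4), ("16QAM", 16), ("64QAM", 64), ("256QAM", 256)]:
    print(name, "carries", int(math.log2(points)), "bits per symbol")

# Illustrative net capacity: symbol rate x bits per symbol x FEC rate.
symbol_rate = 6_000_000             # roughly 6 million symbols/s (assumed figure)
net = symbol_rate * 6 * 2 // 3      # 64QAM (6 bits/symbol) at FEC 2/3
print(net // 1_000_000, "Mbit/s")   # 24
```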
Concept 10: Reception and storage
The receiver simply has to do all these processes in reverse, so it:
decodes the COFDM sub carriers;
uses the FEC to regenerate the multiplex bit stream;
demultiplexes the required audio, video, text, subtitle, EPG and information
data pipes.
A handy feature of this system is that the information decoded from the
multiplex can be stored on a local hard disk drive. The receiver can then, at
any later time, finish the decoding process and replay the video and audio.
As this is the most basic of computer processes (no computationally complex
encoding is needed), the cost of digital video recorders (also known as
Personal Video Recorders, PVRs) is very low. In addition, as the relevant part
of the digital broadcast is stored, replay on these devices is perfect. This
compares favourably to analogue recordings on clumsy video tape, which are
imperfect to start with and decay immediately.
Hi Brian, Thank you for this article, i dont think there is any way i could explain this so well for non tech people & students to understand, with your permission i will forward a copy to my college for use as a hand out to those undertaking their qualifications (nvq / city & guilds) to gain their full rdi status. Mark Aberfan Aerials
Brian - Thats an excellent piece. Good read. Just two things What's the name of the process used to multiplex the SI data together? It been eluding me all morning. Is your fridge really that tidy? I second Mark, trying to explain this to non tech people just isn't possible and it beats trying to explain spectrum with a jug of water. Have fun over the next few days i'm off to visit caldbeck...
Brian - Here was me getting a few days away from the internet, then the hotel announcned in the middle of the countryside it has its own internet cafe. What a wonderful world we live in! Is their anyone in the British Isles with no internet access... Bit stupid putting that on an internet forum however... Curiously i can see caldbeck(s) on the horizon, appears the reserve antenna are fitted. Looks like a series of shf dishes. No top hat as yet. I will confirm at a later stage if the jib is still on the mast. It was in some recent photos. I wonder what the future holds for Sandale looking radiant in the
haze? FM only? Lets hope Paul the nigerian security guard is on he likes anoraks!
Jordy: The only places in the UK without Internet I have found is on a train when it is going though a tunnel. Can you take some photos of Caldbeck, it would be nice to have some. I suspect they may even remove the FM services from Sandale and remove the mast.
Dave Hayfield Saturday 2 August 2008 4:20PM Birchington
Hi Brian, we are a little worried in our area (CT7 0JA) about good digi reception. Our village,some 150 houses, is in dip which sheilds us from the local Dover TX. My enquiries have discovered that the radiated output from the Dover TX will not be increased after analogue switch off, which I think the BBC info says it will be, and since our terrestial digi reception is poor now it will not improve when analogue goes. This is not for the want of trying a selection of aerials in various locations.
Dave Hayfield: To be honest, you might be best just going for Freesat. Freesat is available right now (rather than 2012) and it supports HD. The power output of coastal transmitters has to be controlled to not interfere with TV reception abroad. However, the output of Dover will increase by 40 times at switchover, but this will not help if you can't get a line-of-sight to it.
Dave Hayfield: You have heard wrong! The current Freeview signals are at very low power levels compared to analogue. The total anaogue output is 64528kW, the current digital 1706W, and after switchover 17430kW. So, that's 2.6% for the current digital compared to analogue, and 27% after switchover.
Read your article as suggested on another thread - however I'm still confused as I read this below on a digitalradiotech web site:-
"2K and 8K refer to the different number of subcarriers that DVB-T uses. DVB-T uses OFDM (orthogonal frequency division multiplex) transmission, which means that thousands of carrier frequencies (referred to as subcarriers) are transmitted in parallel, and so the total data rate is divided amongst the thousands of subcarriers.
The 2K-mode uses 1705 subcarriers, and 8K-mode uses 6817 subcarriers. The reason why they are referred to as 2K and 8K stems from the fact that the OFDM signal is generated (and demodulated) by a digital signal processing (DSP) operation called the FFT (Fast Fourier Transform), and the FFT requires the number of data values input to the FFT to be an integer power of 2, i.e. 2n, where n must be an integer. So, 2K-mode uses an FFT with 2048 input data values, and 8K-mode uses an FFT with 8192 input data values." ??
Brian: Thanks for your response, and yet more helpful advice. I am printing all this out to refer to later when I need it again.
I checked out your link - and WOW, absolutely fascinating. So much information to take in in one go, so have bookmarked it to come back to again.
Some of the concepts - such as lossy and lossless compression - I understood right away (being a retired photographer and 'messing about' in PhotoShop). But this is a different ball game.
I must say a BIG thank you for all the help and knowledge that you share, and not least of all - for your time.
Just re-reading this, you have misunderstood the COFDM section.
The number of independent carriers (not sub-carriers) is the transmission mode: 2k, 8k or (T2) 32k. These are very carefully spaced carriers, spaced so the frequencies are not harmonically related. The more carriers, the longer any given pattern is transmitted for, so changing transmission mode doesn't increase or decrease capacity. More carriers means more hardware to scan for each different carrier. However, you can do a Phase Frequency Detector in 22 transistors according to a circuit diagram on Wikipedia, and 704,000 transistors is nothing on a modern IC.
QPSK or QAM is the phase and (for QAM) amplitude of each carrier. 'Phase' is where the peak of the sine wave appears in time, relative to the 'pilot' carriers which don't carry any information. 'Amplitude' is the size of the wave.
It's the keying mode or modulation which actually carries information. QPSK has four values so carries two bits on a carrier. 16QAM has 16 values, four bits. 64QAM carries 6 bits and 256QAM 8 bits.
The problem for adding amplitude levels is you need more signal to detect differences between the levels: 256QAM uses eight different levels and 32 different phases. The rotated constellation actually has even more different levels but the combination of level and phase is closer to unique and the receiver can deduce which is more likely.
Satellite does not use amplitude changes, only phase shifts, because the signals at the receiver are so small (they travel 38,000km after all: the effective radiated power is about 125kW). DVB-S2 does define 16APSK, where two amplitudes are used, but it's recommended only for broadcasters downlinks rather than direct-to-home services. Sky use QPSK.
Errors can be introduced either by a carrier being wiped out completely, or the amplitude and phase being interpreted incorrectly. Reflections on the same frequency have travelled further than the original signal and so are late, and when added to the original signal will change its strength (it may increase or decrease depending on the delay) *and* change its phase. However, because each carrier is a slightly different frequency and hence wavelength, the reflection will appear at a different phase position on each carrier and change it a different amount, so a reflection doesn't completely wipe out the meaning of the whole.
Carriers are only off for short periods - the Guard Interval, which is the amount of time that nothing is broadcast (we use 1/32 on DVB-T, so one-thirty-third of the time nothing is transmitted). A given batch of symbols is broadcast for 224 microseconds (2k mode), 896 microseconds (8k mode) or 3,584 µs (32k mode). All-zeroes are represented by a specific *non-zero* co-ordinate on the phase/amplitude graph (in fact, the top-right corner, maximum amplitude and phase shift, for all constellations).
Your numbers for error handling are also wrong: 5/6 indicates that five out of six bits are useful, 3/4 that three of four are useful, and 2/3 that two out of three are useful.
The raw baud rate of DVB-T is 6 million symbols per second. Transmitting 16QAM symbols (FEC 3/4) means a raw rate of 24 million, then one out of four carries error correction dropping it to 18Mbit/s. Transmitting 64QAM 2/3 gives a raw rate of 36Mbit/s, 24Mbit/s after redundancy is removed. DVB-T2 has a higher raw baud rate as I understand it, coming in part from the 1/128 guard interval.
Mike Dimmick: I'm sorry, but you do seem to have a rather "analogue" view of how this works.
Firstly, the position of the sub-carriers in the main signal is computed using Fast Fourier Transformations, which allow the change of domain between frequencies and amplitude or phase.
They are not, as they might have been with analogue transmission, actual tunable carriers; only on reception of the entire signal are the carrier points re-calculated.
When you see the constellation diagrams, these are the reconstructed sub-carriers in the signal. So, whilst the carriers are independent when pushed through the FFT calculations, they are all dependent on the whole transmission being provided.
There are no "pilot" carriers in that respect, only one calculated from the entire transmission. Digital communication systems rarely use carriers as they are not required.
Secondly, the data pushed into the carriers is not referred to as "bits", but "symbols". This is for several reasons: there can be more than two states usable for each symbol, and it avoids long runs of 000000s or 111111s that have no edges to provide timing synchronization. This is true for every high speed data transmission system.
The error detection system has several components, including convolutional interleaving, block interleaving and the Forward Error Correction, which uses a convolutional technique (see the Wikipedia article on convolutional codes for details); it is (again) more mathematically complex than you suggest.
used to have sky on main tv then transmitted via 'something in loft' and cabled into 3 other rooms.now virgin cable with new aerial on roof for main tv. also another arial on rrof for kitchen freeview tv. how do with get tv s in bedrooms to work? thanks
Where has ch 31 gone? Waltham ch 31 now strength 24% Quality 10% sometimes! Humax 9300 pvr box can receive itv1 belmont but nothing from waltham My post code NG14 7J HOVERINGHAM
Is there anything I can do?
I have a DVD recorder. When recording from Sky+HD box which is connected to DVD recorder by scart socket the picture has a slight grain effect. I have tried using gold scart leads etc but no change. I realise I will not get the full HD Sky picture on the DVD recorder but has anyone else come across this problem. Is this as good as it gets.
I AM PAYING ATTENTION,SEE IM NOW ON CORRECT PAGE,BUT ITS NOT UR FAULT IM 10YRS BEHIND IN THE TECHNO WORLD.I DO TAKE UR ADVICE SERIOUSLY.BUT ITS LOTS OF INFO 2TAKE IN 4R ME.SORRY FOR JOKIN WITH U I CANT HELP IT, U LOOK LIKE UR A GOOD HEARTED TYPE OF GUY IN THAT BOX.BY THE WAY THAT FRIDGE IS VERY COLOURFULL INDEED,SEE THERE I GO LOOKIN AT THE PICTURES AGAIN!.I PROMISE I WILL TRY HARDER TO STAY FOCUSED ON THE ADVICE.from now on and thanks for your patience.
Ron Lake Tuesday 20 September 2011 12:20AM Wakefield
This 'Digital' thing is like the TV Times Brian, 'I never knew there was so much in it' lol. Great explanation at the top of the page, well written so that us numpties out here can, at least, get some of the picture. Often wondered what the 2k, 8k, QAM things were all about, but now well on the way to understanding, (well a little). Really, very interesting read. Thank you so much for all you do to help us non techy guys understand things.
Ron Lake Wednesday 21 September 2011 11:58PM Wakefield
Brian, and I am pleased to find that I am getting uninterrupted tv on all channels in my bedroom, on a Phillips (Pace) DTR220 box fed by an old portable set top aerial. Just a 10 inch loop. Brilliant. You have been a great help during the whole process. Thank you so much.
I have a motorhome with a digital freeviewaerial. As I move from location to location where can I find information about which direction to point the aerial in and whether it should be vertically aligned or horizontal?
william blue Wednesday 24 October 2012 12:50PM Ballynahinch
i have great pictures on all freeview channels but can't recieve any on HD . do i plug the tv aerial into the aerial socket or do i need a plug for the HD socket? ps my tv is full HD. also i live in northern ireland and should be able to recieve RTE channels but don't.