http://www.dsprelated.com/showarticle/176.php
TRANSMITTED SSB SIGNALS
Before we illustrate SSB demodulation, it's useful to quickly review the nature of standard double-sideband amplitude modulation (AM) commercial broadcast transmissions that your car radio is designed to receive. In standard AM communication systems, an analog real-valued baseband input signal may have a spectral magnitude, for example, like that shown in Figure 2(a). Such a signal might well be a 4 kHz-wide audio output of a microphone having no spectral energy at DC (zero Hz). This baseband audio signal is multiplied, in the time domain, by a pure-tone carrier to generate what's called the modulated signal whose spectral magnitude content is given in Figure 2(b).
In this example the carrier frequency is 80 kHz, thus the transmitted AM signal contains pure-tone carrier spectral energy at ±80 kHz. The purpose of a remote AM receiver, then, is to demodulate that transmitted DSB AM signal and generate the baseband signal given in Figure 2(c). The analog demodulated audio signal could then be amplified and routed to a loudspeaker. We note at this point that the two transmitted sidebands, on either side of ±80 kHz, each contain the same audio information.
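As a concrete illustration of that double-sideband picture, here is a minimal Python/numpy sketch (not from the article; the single 1 kHz test tone, the 0.5 modulation index and the 400 kHz sample rate are just assumed example values standing in for the 4 kHz-wide audio):

import numpy as np

fs = 400_000                                    # assumed sample rate, Hz
t = np.arange(0, 0.02, 1/fs)                    # 20 ms of samples
audio = np.cos(2*np.pi*1_000*t)                 # stand-in baseband "audio" tone
am = (1 + 0.5*audio) * np.cos(2*np.pi*80_000*t) # standard AM on an 80 kHz carrier

spectrum = np.abs(np.fft.fftshift(np.fft.fft(am)))
freqs = np.fft.fftshift(np.fft.fftfreq(len(am), 1/fs))
# Peaks appear at +/-80 kHz (the carrier) and at +/-79 kHz and +/-81 kHz
# (the two sidebands), matching the Figure 2(b) description.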
In an SSB communication system the baseband audio signal modulates a carrier, in what's called the "upper sideband" (USB) mode of transmission, such that the transmitted analog signal would have the spectrum shown in Figure 3(b). Notice in this scenario, the lower (upper) frequency edge of the baseband signal’s USB (LSB) has been translated in frequency so that it’s located at 80 kHz (-80 kHz). (The phasing method of SSB radio frequency (RF) generation is given in Appendix A.)
The purpose of a remote SSB receiver is to demodulate that transmitted SSB signal, generating the baseband audio signal given in Figure 3(c). The analog demodulated baseband signal can then be amplified to drive a loudspeaker.
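For readers who like to see this in code, here is a hedged sketch of upper-sideband generation and coherent demodulation using an analytic (Hilbert) signal, in the spirit of the phasing method; the 1 kHz test tone, 80 kHz carrier, 400 kHz sample rate and the particular low-pass filter are all assumed example choices, not the article's Appendix A:

import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 400_000
t = np.arange(0, 0.02, 1/fs)
audio = np.cos(2*np.pi*1_000*t)                      # stand-in baseband tone

# USB generation: shift the analytic (positive-frequency-only) version of the
# audio up to the 80 kHz carrier, then take the real part.
analytic = hilbert(audio)                            # audio + j*Hilbert{audio}
usb = np.real(analytic * np.exp(2j*np.pi*80_000*t))  # energy just above +80 kHz (mirrored just below -80 kHz)

# Coherent demodulation: multiply by the carrier and low-pass filter.
mixed = usb * np.cos(2*np.pi*80_000*t)
b, a = butter(5, 4_000/(fs/2))                       # keep only the 0..4 kHz audio band
recovered = 2 * filtfilt(b, a, mixed)                # approximately the original tone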
In a "lower sideband" (LSB) mode of SSB transmission, the transmitted analog signal would have the spectrum shown in Figure 4(b). In this case, the upper (lower) frequency edge of the baseband signal’s LSB (USB) has been translated in frequency so that it’s located at 80 kHz (-80 kHz). The baseband signal in Figure 4(a) is real-valued, so the positive-frequency portion of its spectrum is the complex conjugate of the negative-frequency portion. Both sidebands contain the same information, and that's why LSB transmission and USB transmission communicate identical information.
And again, in the LSB mode of transmission, the remote receiver must demodulate that transmitted LSB SSB signal and generate the baseband audio signal given in Figure 4(c).
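Continuing the same assumed example, the only change needed for LSB generation is the sign of the complex carrier (equivalently, the sign of the quadrature term in the phasing method):

import numpy as np
from scipy.signal import hilbert

fs = 400_000
t = np.arange(0, 0.02, 1/fs)
audio = np.cos(2*np.pi*1_000*t)
analytic = hilbert(audio)
lsb = np.real(analytic * np.exp(-2j*np.pi*80_000*t))  # test tone now at 79 kHz, below the carrier
# The same coherent (carrier-multiply and low-pass) demodulation used for USB
# recovers the baseband audio from this LSB signal as well.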
Makes sense to me. AM is for fossils.
It's junk, and the diagrams are even worse.
ReplyDeleteYou don't get sidebands until you mix the audio up to RF, while the diagram indicates something else. I have no idea where the rf depictions come from.
Some of the problem is that various things were described, and when something else came along, a separate explanation was added, rather than trying to have a unified theory.
Even in 1971, the ARRL Handbook had a chapter about AM and a separate chapter on SSB.
But the basic concept is that mixing creates an image. This is true for the superheterodyne receiver, but it also applies to the mixing that occurs in a balanced modulator in a DSB or SSB transmitter, except that there the "image" is the unwanted sideband.
And of course, an AM transmitter is mixing the audio signal up to RF in the output stage.
Decades ago, people knew about the "phasing method" to generate SSB, but nowadays the same technique is used to drop frequencies in a receiver to a very low IF.
You only have to understand one concept and the others fall into place.
Michael
I notice that Richard Lyons has authored some books on digital signal processing. I will avoid reading anything with his name on it.
I probably missed the point (I'm not a native speaker): what exactly is considered to be wrong? I do know that the idea of negative frequencies is not that popular among hams, though the underlying math makes sense.
Peter
It is very confusing. The 0, as I understand it, is the centre frequency (i.e. where the carrier would be if there was one); the LSB is -ve around that carrier and the USB is +ve. Apart from that, I'm now doubting my understanding based on this explanation. How does demodulating it change the signal from -80 on LSB to +80 and then back again to -4? Though I always think in terms of double-balanced mixers, not the phasing method. This might be telling us how it works with the phasing method???
FWIW, it looks like he is rebuilding the DSB signal when we are used to thinking about recovered audio.
I assume much of the difficulty with the explanation centres on its use of negative frequencies.
Negative frequencies are an important part of DSP theory and Fourier transforms.
I still have painful memories of trying to conceptualise negative frequencies during a DSP unit at uni in my undergrad.
Frequency as a number of events per second was fine, but a negative number of events took some thinking.
They certainly exist in the maths; the Fourier and inverse Fourier transforms integrate from negative infinity to positive infinity.
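One line of maths makes the point: a real cosine is already the sum of a positive-frequency and a negative-frequency complex exponential,

cos(2*pi*f*t) = 0.5*exp(+j*2*pi*f*t) + 0.5*exp(-j*2*pi*f*t)

so every component at +f in a real signal comes paired with a matching component at -f.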
I was initially uncertain about the mirroring of the baseband signal, but I accept that as OK. Still slightly uncertain about labelling the halves of the baseband as upper and lower sidebands.
If you doubt the negative frequency concept, the diagrams will look OK if you squint and just look at the right-hand side.
Cheers
Mark
VK2WU
The key to understanding it is that it's on a DSP blog. DSP usually isn't done by SPICE-simulating hardware in real time; it's all "mathy". There was a QEX article a long time ago about doing "DSP" by basically running SPICE really fast, which was interesting, but I imagine it confused the heck out of most DSP programmers. Anyway, the diagrams and flowcharts make perfect sense if you're writing DSP code, but little sense if you're melting solder.
I guess a comparison would be that the flowchart of microcontroller code for a timer would look really weird if you're used to designing 555 circuits, although in the domain of writing assembly language it does make sense, despite the code not having any resistor dividers or comparators or integrating capacitors.
Or maybe a better comparison: if you're used to designing high-power vacuum-tube RF output networks, looking at a bipolar transistor output network is going to feel really weird if you miss the minor detail that it's an NPN transistor and not a 6146.
Thanks for all the comments. Glad to see I'm not the only one befuddled by this. Imagine, 42 years a ham and I'd never heard of negative frequencies! Who knew! Think about all the possibilities! I can't wait to get on negative 75 meters -- maybe they'll be NICE there! Seriously though, this does all seem to be based on some mathematical artifacts that pop out of Fourier transform math. Or, as Vince points out, on the way DSP coders do their coding thing. But when people try to take this stuff and use it to explain how an AM broadcast transmitter works, well, no, sidebands and carriers and demodulation just don't work that way in the real world.
The negative components do exist. https://en.m.wikipedia.org/wiki/Double-sideband_suppressed-carrier_transmission
He does make mention of it, partly, in saying the signal is real-valued so the positive-frequency portion of its spectrum is the complex conjugate of the negative-frequency portion. The math on the wiki page proves the diagram. Abstract... yes. That is how it's taught in the engineering curriculum that I've long forgotten. KB0KFX
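For anyone who wants to check that conjugate-symmetry property numerically, a small numpy sketch (just an illustration, not taken from the wiki page) will do it:

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)                    # any real-valued signal
X = np.fft.fft(x)
# For a real input, bin -k (i.e. bin N-k) is the complex conjugate of bin +k:
print(np.allclose(X[1:][::-1], np.conj(X[1:])))  # prints True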
Stephen So and Lutz do have comments, in somewhat layman's terms, that help explain the negative frequencies.
http://www.researchgate.net/post/Can_anyone_explain_the_concept_of_negative_frequency
Check your calendars, it is not April 1st. :)
Engineers are taught to use maths to model the real world. In first year EE we were taught how to model modulation (AM, SSB, DSB) using cos() and sin() trig functions, just like the excerpt from Richard's text above.
For example, a mixer is two cosine waves multiplied together:
cos(2*pi*fc*t)*cos(2*pi*fm*t) = 0.5*cos(2*pi*(fc-fm)*t) + 0.5*cos(2*pi*(fc+fm)*t)
Scary? Well fc is the carrier freq, fm is the modulation freq, so the mixer output is at fc-fm (lower sideband) and fc+fm (upper sideband). The 0.5 means the two mixer outputs are half the level of the input signals, or 6 dB down. "t" is time.
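A quick numerical check of that identity, as a minimal numpy sketch (the particular fc, fm and sample rate here are just made-up example values):

import numpy as np

fs = 400_000
t = np.arange(0, 0.05, 1/fs)
fc, fm = 80_000, 1_000
mixer_out = np.cos(2*np.pi*fc*t) * np.cos(2*np.pi*fm*t)

freqs = np.fft.rfftfreq(len(t), 1/fs)
mag = np.abs(np.fft.rfft(mixer_out)) / len(t)
# The two largest peaks sit at fc-fm = 79 kHz and fc+fm = 81 kHz, each at half
# the input amplitude (the 6 dB loss mentioned above).
print(sorted(freqs[np.argsort(mag)[-2:]]))       # approximately [79000.0, 81000.0]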
After a bit more playing with trig functions, negative frequencies pop out of the maths and, although unexpected, end up being a pretty useful way to describe how mixers and radios work.
Hams get taught about black (or silver if it's an SBL-1) boxes, which is also a useful model. "The mixer produces sum and difference frequencies, the output is 7 dB down." See how the maths (6 dB loss) is close to predicting the 7 dB mixer loss?
I'm a Ham who became an EE, and now have a PhD in DSP. I think it's cool that we can look at the same thing a couple of different ways. No right and wrong, just a different way to learn.
EE or Ham - that's what it's all about. Getting my head inside the soul of the machine and learning.
- David VK5DGR