My work on the S-38Es, on the HRO-dial receiver, on the Mate for the Mighty Midget, and on various mechanical filters has caused me to think (once again) about why we ended up with 455 kHz as the IF frequency for so many radios. I've heard many explanations for this, but unfortunately I've forgotten the explanations and lost the sources. I started digging into this again today and found the e-mail below from Al N3FRQ on the Boatanchors mailing list (2008).
I contacted Al to find out if he had learned anything else on this topic. He has not. So if anyone out there has answers to Al's questions, or any other info that would shed light on why they went with 455, please let us know.
-------------------------------
Every so often the question comes up: Why are all the IFs 455 kHz? I’d
like to get an article together that solves this riddle while the people
who know are still with us. I know parts of the story, but I need help
with a couple of issues.
There are two major considerations in the choice of the intermediate
frequency used in a superheterodyne receiver. The lower the frequency,
the easier it is to attain high selectivity. Also, in the early days,
before tetrode and pentode tubes, it was easier to achieve a high degree
of amplification at lower frequencies. Conversely, a higher IF frequency
results in better image rejection.
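[An illustrative sketch, not from Al's original note: assuming high-side local-oscillator injection, the image lies 2 × IF above the desired signal, so raising the IF moves the image further from the front end's passband. The 1000 kHz station below is purely hypothetical.]

    # Image-frequency arithmetic for a superhet, assuming high-side LO
    # injection (LO = signal + IF). The other frequency that converts to
    # the same IF, the image, is signal + 2*IF.
    def image_frequency(signal_khz, if_khz):
        lo = signal_khz + if_khz       # local oscillator
        return lo + if_khz             # image = signal + 2*IF

    for if_khz in (100, 175, 455):     # IF values mentioned in this post
        img = image_frequency(1000, if_khz)
        print(f"IF {if_khz:>3} kHz: image of a 1000 kHz station at {img} kHz "
              f"({2 * if_khz} kHz away)")

The farther the image sits from the wanted signal, the more the ordinary tuned circuits ahead of the mixer can attenuate it.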
Early superhets had the IF at 100 kHz or lower in order to get adequate
gain from the available triode tubes. They suffered severely from
“two-spot tuning” (images). By the early 1930s, broadcast sets had
settled in at 175 kHz, and automobile receivers would later adopt 262 kHz
as a standard.
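[Another sketch, not from the original note, to make “two-spot tuning” concrete: with a 100 kHz IF, high-side injection, and a dial calibrated as LO minus IF, a single hypothetical 1000 kHz station is heard at two dial settings only 200 kHz apart.]

    # "Two-spot tuning" with a low IF: the station is heard whenever
    # |LO - f_station| equals the IF, i.e. at two LO settings.
    IF = 100.0          # kHz, an early-superhet IF
    f_station = 1000.0  # kHz, hypothetical broadcast station

    for lo in (f_station + IF, f_station - IF):
        dial = lo - IF  # what a dial calibrated as LO - IF would read
        print(f"LO at {lo:.0f} kHz -> station heard with dial at {dial:.0f} kHz")

The two spots land 2 × IF = 200 kHz apart, close enough that the broad front-end tuning of a triode-era set passes both.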
The advent of the short-wave craze and of multi-band broadcast receivers
dictated a higher IF frequency to achieve adequate image suppression on
the short-wave bands. The broadcast band occupied 550-1500 kHz at this
time, and the designer encounters severe problems if his radio tunes
across its own IF. Some shortwave sets used 1600-1700 kHz for better
image rejection, but one couldn’t go higher if the 160-meter ham band
(1800-2000 kHz) was to be covered. Most multi-band receivers settled in
near 450 kHz, a comfortable distance from the first broadcast channel at
550 kHz.
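[A third sketch, not from the original note, showing why an IF near 455 kHz helps on the short-wave bands; the 7000 kHz signal is an arbitrary example and high-side injection is assumed.]

    # Image spacing on a short-wave frequency for two of the IFs
    # discussed above. Frequencies are in kHz.
    f_signal = 7000.0
    for if_khz in (175.0, 455.0):
        image = f_signal + 2 * if_khz
        print(f"IF {if_khz:.0f} kHz: image at {image:.0f} kHz, "
              f"{2 * if_khz:.0f} kHz ({200 * if_khz / f_signal:.1f}%) above the signal")

At 7 MHz a 910 kHz image offset (about 13%) gives a simple tuned front end far more to work with than a 350 kHz offset (about 5%).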
Questions:
Odd multiples of 5 kHz, 455, 465, etc., were usually chosen so that the
image of the carrier of a broadcast-band station could be zero-beat with
the carrier of the station being tuned, to achieve minimal interference.
(This assumes 10 kHz channel spacing. Did the Europeans (9 kHz) do
something else?)
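[One more sketch, not from the original note, checking the zero-beat arithmetic under those assumptions (10 kHz US channel spacing, high-side injection; the 600 kHz wanted station and the 452 kHz comparison IF are hypothetical): because 2 × 455 = 910 kHz is an exact number of channels, the carrier sitting on the image frequency converts to exactly 455 kHz and zero-beats with the wanted carrier instead of whistling.]

    # Carriers sit on multiples of 10 kHz. If 2*IF is a whole number of
    # channels, the carrier on the image frequency converts to exactly
    # the IF and beats at 0 Hz with the wanted carrier; otherwise it
    # lands a few kHz off and produces an audible whistle.
    def image_beat(if_khz, f_wanted=600.0):
        lo = f_wanted + if_khz                       # high-side LO
        f_image = f_wanted + 2 * if_khz              # image frequency
        nearest_carrier = round(f_image / 10) * 10   # nearest real carrier
        return abs((nearest_carrier - lo) - if_khz)  # beat vs. wanted carrier

    for if_khz in (455.0, 452.0):
        print(f"IF {if_khz:.0f} kHz: image-carrier beat note {image_beat(if_khz):.0f} kHz")

With a 455 kHz IF the beat is 0 kHz (a silent zero-beat); with an IF such as 452 kHz it would be 4 kHz, an audible heterodyne.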
The Radiotron Designer’s Handbook, Third Edition, p. 159, states “A
frequency of 455 Kc/s is receiving universal acceptance as a standard
frequency, and efforts are being made to maintain this frequency free
from radio interference.”
(1) Do FCC and international frequency allocations reflect this?
(2) I’ve heard the term “Clear-Channel IF.” Can anyone cite references?
(3) At least one newsgroup posting claims that broadcast frequencies in
a particular market are assigned to prevent strong intermodulation
products from falling near 455 kHz. Is this factual? Need reference.
(4) Was item (3) above at least part of the reason for “Radio Moving Day” in
1941? See: http://www.dcmemories.com/RadioMovingDay/032341WINXFreqChange.jpg
(5) Many National Radio sets used a 456 kHz IF, and I think I remember a
437 kHz IF somewhere. Why? Are there different considerations for short-wave
CW operation?
Further input, corrections, and elaborations are greatly appreciated.
Scholarly references will be looked upon with great favor.
Regards,
Al
--
Al Klase - N3FRQ
Flemington, NJ
http://www.skywaves.ar88.net/