Post by William Sommerwerck
The rear channel information in such systems as Columbia SQ
is synthetic, having not been discretely processed as it would
be in a system explicitly designed to capture and then reproduce
such rear-channel information.
That is absolutely incorrect.
In all four-channel matrix systems, there are four inputs and four
outputs. A logic-directed, phase-cancellation decoder is capable of
dynamically "separating" the front and back information.
In fact, an SQ system could not localize a left-rear-only signal nor
a right-rear-only signal without producing some artifacts
in the front channels, given the non-discrete nature of the method employed.
Of course it can, as assuredly as it can simultaneously localize
left-front and right-front signals, without any artifacts in the rear
channels.
It can do this for //any two// isolated channels. The decoder cancels
out their crosstalk in the other two channels. This breaks no laws of
math or physics.
It is an implementation distinction which may or may not be noticed,
but it is not at all what you may be thinking of if your objection
treats synthetic as meaning "ersatz", unrealistic, etc.
I own two hall synthesizers, which produce synthetic ambience -- which
happens to sound very natural.
I presently have several hall synthesizers, including a relatively
elaborate Audyssey processor in my main system, and I have owned many
going back to the AudioPulse 35 years ago (with its annoying hiss and
intermittent pushbuttons) and many, many since then. Most of them
sounded, and still sound, extremely natural. And this discussion has
absolutely NOTHING to do with their ability to create a convincing,
natural, wonderful sound. I entirely and totally share your opinion and
have no disagreement whatsoever with your assessment of their
performance from a psychoacoustic point of view!! Had I been a critical
reviewer of this equipment and been asked my opinion of how it sounded,
I would have expressed my full vote of approval and confidence, and
indeed I have voted many thousands of my own dollars over quite a few
decades in support of this very belief. Even the small audio system in
my tiny home office has a $2K Denon receiver with an Audyssey XT32
processor, because I thoroughly enjoy the perceived effects of its
natural surround sound.
However........
I am now (and have been) exclusively talking from a technical,
engineering viewpoint, and as one who is very qualified in this area.
The various systems which do not provide separate, discrete,
independent channels for each of the four original channels cannot, do
not, and will not separate and maintain independent information for
each of the four channels; each must have its own distinct, isolated
channel. A channel has a very specific, well-defined meaning to a
communications engineer, based not only on bandwidth and SNR but also
on its time-domain / frequency-domain characteristics, a snapshot of
which can be portrayed in its transfer function and measured entirely
using both time- and frequency-domain techniques, including Fourier and
Laplace analysis. I spent two years in a Masters program learning this
topic quite fully, on top of the four courses of required undergraduate
electrical engineering course work in this area.
You might be convinced that some matrixed scheme of putting 4 audio
channels into a 2 channel stereo medium can somehow permit the originals
to be faithfully extracted, but I am here to tell you that you are
entirely wrong.
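To put the engineering claim in concrete terms: a 4-into-2 encoder is, at bottom, a 2x4 matrix, and a 2x4 matrix has rank at most 2. A minimal sketch in Python/NumPy, using invented SQ-like coefficients (illustrative only, not the exact SQ specification), shows that no fixed decode matrix can undo it:

```python
import numpy as np

# Hypothetical 2x4 encode matrix mixing (LF, RF, LB, RB) into (Lt, Rt).
# Coefficients are invented for illustration; complex entries stand for
# 90-degree phase shifts of the kind SQ-style encoders used.
E = np.array([
    [1, 0, -0.707j, 0.707],   # Lt
    [0, 1, -0.707,  0.707j],  # Rt
])

# The best any FIXED 4x2 decode matrix can do is the pseudoinverse.
# Even then the round trip cannot be the 4x4 identity, because
# rank(D @ E) <= rank(E) = 2 < 4.
D = np.linalg.pinv(E)
round_trip = D @ E

print(np.linalg.matrix_rank(E))            # 2
print(np.allclose(round_trip, np.eye(4)))  # False: residual crosstalk

# A left-back-only source does not come back as left-back-only:
lb_only = np.array([0, 0, 1, 0], dtype=complex)
print(np.round(D @ E @ lb_only, 3))        # note the leakage
```

Adaptive "logic" steering can hide this crosstalk dynamically for a dominant signal, but it cannot restore four independent channels.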
The more advanced version of SQ used gated, voltage-controlled
amplifiers, not unlike the more recent Dolby ProLogic scheme, to move
out-of-phase information selectively to the rear. The encoder can and
certainly does encode the rear channels to be out of phase so as to
emphasize their rear presentation, BUT..............and this is the
killer issue...........the original stereo mix already has out-of-phase
information which itself conveys time differences attributable to front
separation alone.
The lack of separate and independent channels forces the scheme to
"guess" at which elements of the signal structure represent true rear
data, which represent original left to right phase differences, and how
to use some form of demodulation to portray them. The proliferation of
competing matrixing, AGC, companding, and steering techniques, each
with its own way of tricking the ear, clearly illustrated the absence
of a single correct solution, since the 4-into-2-back-to-4 channel
process is inherently very inexact.
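The inexactness can be stated even more starkly: a 2x4 encoder has a two-dimensional null space, so genuinely different four-channel programs can produce the identical two-channel groove. A hedged sketch, again with invented SQ-like coefficients:

```python
import numpy as np

# Illustrative 2x4 encode matrix (invented coefficients, not the
# exact SQ spec); complex entries represent 90-degree phase shifts.
E = np.array([
    [1, 0, -0.707j, 0.707],
    [0, 1, -0.707,  0.707j],
])

# The null space of a 2x4 matrix is 2-dimensional: directions the
# encoder simply cannot "see". Recover one such direction via SVD.
_, _, Vh = np.linalg.svd(E)
null_vec = Vh[-1].conj()
print(np.allclose(E @ null_vec, 0))      # True

# Two different quad sources, identical on the disc:
src_a = np.array([1, 0, 0, 0], dtype=complex)   # pure left-front
src_b = src_a + null_vec                        # something else entirely
print(np.allclose(E @ src_a, E @ src_b))        # True
```

No decoder, however clever, can tell `src_a` from `src_b` after encoding; it can only guess.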
The decoder has no way to "cancel out crosstalk". The 2-channel phase
information does not contain identifiable crosstalk, since the front
and rear are not orthogonal and there is no clock or other time
reference to independently distinguish front-to-back out-of-phase
content from left-to-right out-of-phase content. Had an ultrasonic
clock been recorded (an approach considered as one potential solution,
as distinct from the ultrasonic subcarrier JVC used in CD-4), and had
that clock been used to time-multiplex the analog stream, then there
could indeed have been a way to explicitly isolate separate channels,
though at the expense of front-channel bandwidth and signal-to-noise
ratio. In the subsequent digital era these problems disappear: bit
pooling, TDMA, and other muxing and sampling schemes allow streams to
be created in which time can be used as a reliable reference to sort
things out. In the early 1970s, when these systems were being deployed
(I had been through my graduate EE program by then), this was not an
option.
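For contrast, here is a toy sketch of what an explicit time reference buys you: once every sample is indexed by a clock, a plain time-division multiplex recovers all four channels exactly (channel names and sample values are invented for illustration):

```python
# Four short channel streams (invented sample values).
channels = [
    [1.0, 2.0, 3.0],    # LF
    [4.0, 5.0, 6.0],    # RF
    [7.0, 8.0, 9.0],    # LB
    [10.0, 11.0, 12.0], # RB
]

# Multiplex: interleave the samples. Each sample's POSITION in the
# stream serves as the clock reference.
stream = [ch[i] for i in range(3) for ch in channels]

# Demultiplex by counting modulo 4: exact, crosstalk-free recovery.
decoded = [stream[k::4] for k in range(4)]
print(decoded == channels)  # True
```

The recovery is exact because time, not phase, carries the channel identity; this is precisely the reference the analog matrix systems lacked.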
Try to imagine what a stereo-capable LP would contain in order to
create a left-rear-only output:
If you had only left energy recorded, it would show up in the left
channel regardless of phase. Left energy alone would have no phase
difference to reference, and its absolute phase would either cause the
left front speaker to move its cone first forward then back, or, if 180
degrees reversed, would move the cone in the opposite sense. Any phase
angle you choose for conveying "front to back" for this simple example
fails.
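The left-energy-only case can be made concrete. In this sketch (invented signal, Python/NumPy), one groove channel carries a tone and the other is silent; the inter-channel product, the only phase cue a matrix decoder could steer on, is identically zero:

```python
import numpy as np

# A "left only" recording: a tone in Lt, silence in Rt (illustrative).
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
lt = np.sin(2 * np.pi * 5 * t)
rt = np.zeros_like(t)

# Inter-channel phase is the only front/back cue available, but with
# one channel silent there is nothing to measure it against.
cross = np.vdot(np.fft.rfft(rt), np.fft.rfft(lt))
print(abs(cross))  # 0.0 -- no phase relationship to steer on

# Flipping absolute phase changes nothing the decoder can use:
cross_flipped = np.vdot(np.fft.rfft(rt), np.fft.rfft(-lt))
print(abs(cross_flipped))  # 0.0
</imports>```

Whatever absolute phase you pick, the left-only signal presents no inter-channel relationship at all, so "front" versus "rear" cannot be encoded this way.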
If you want to build an encoder / decoder to use phase as a way to
convey front / rear directionality, you can ***SYNTHESIZE*** an
artificial reference frame, exaggerate the effect with VCAs and gating
logic, and treat shorter phase shifts as if they belong to the front and
longer phase shifts as if they belong to the rear. The ear can indeed be
fooled, and this is fundamentally the way it was done.
Let's go one step further and make an even more drastic engineering
assumption. We are going to assume that the front speakers are spaced
much closer to one another than the rear pair is spaced with respect to
the front. We will then "guess" that phase shifts / time delays longer
than the presumed short left-to-right delay are entirely attributable
to delayed rear energy. We will choose an arbitrary cutoff and declare
that all delays longer than "X" degrees of phase shift are the result
of rear-channel content. This might even work were it not that 361 degrees
of phase shift is entirely and totally indistinguishable from 1 degree
of phase shift as far as analog processing is concerned. Phase offers
only a partial, ambiguous piece of evidence as encoded in this analog system.
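A toy decoder of the kind just described makes the failure explicit. Everything here is invented for illustration (analytic complex tones, an arbitrary 45-degree cutoff); real SQ / ProLogic logic was far messier, but the modulo-360 problem is the same:

```python
import numpy as np

def steer(lt, rt, threshold_deg=45.0):
    """Guess 'front' for short inter-channel phase shifts, 'rear' for long."""
    phase = np.angle(np.vdot(rt, lt), deg=True)  # relative phase estimate
    return "front" if abs(phase) <= threshold_deg else "rear"

t = np.linspace(0.0, 1.0, 48000, endpoint=False)

def tone(shift_deg):
    # Analytic 100 Hz tone with a given phase shift (illustrative).
    return np.exp(1j * (2 * np.pi * 100 * t + np.deg2rad(shift_deg)))

lt = tone(0)
print(steer(lt, tone(10)))   # front
print(steer(lt, tone(120)))  # rear
# The killer: 361 degrees measures exactly like 1 degree, so a
# full-cycle-plus delay is steered to the front like a short one.
print(steer(lt, tone(361)))  # front
```

The arbitrary threshold works only until any delay wraps past one cycle, at which point the analog phase measurement cannot tell long from short.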
Could an advanced DSP be used to build an FFT waterfall and distinguish
early and late energy more exactly? Yes, of course. But this has
nothing to do with the way SQ, QS, Dolby ProLogic, or any other such
primitive scheme actually worked.
Did I ever say that SQ or other techniques of its ilk were bad,
unnatural, or otherwise flawed? Not at all. I ask you please not to
conflate how things work with how things sound. I am an engineer talking
about how things work.