# Questions about equivalents of audio/video and digital/analog.

Discussion in 'Digital Photography' started by Radium, Aug 19, 2007.

Hi:

I. Audio vs. Video

Digitized (mono) audio has a single sample per each sampling
interval.

In the case of digital video, we could treat each individual sample
point location in the sampling grid (each pixel position in a frame)
the same way as if it was a sample from an individual (mono) audio
signal that continues on the same position in the next frame. For
example, a 640×480 pixel video stream shot at 30 fps would be treated
mathematically as if it consisted of 307200 parallel, individual mono
audio streams [channels] at a 30 Hz sample rate. Where does bit-
resolution enter the equation?

Digital linear PCM audio has the following components:

1. Sample rate [44.1 KHz for CD audio]
2. Channels [2 in stereo, 1 in monaural]
3. Bit-resolution [16-bit for CD audio]

Sample rate in audio = frame rate in video
Channel in audio = pixel in video
Bit-resolution in audio = ? in video
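Putting rough numbers on that mapping (a Python sketch; the 8-bit grey
level for video is just an illustrative assumption, not a claim about
any particular format):

```python
# Raw linear PCM audio data rate = sample_rate * channels * bits_per_sample
cd_audio_bps = 44_100 * 2 * 16          # CD audio: 1,411,200 bits/s

# Treating every pixel position as a mono "channel" sampled once per frame:
width, height, fps = 640, 480, 30
channels = width * height               # 307,200 pixel "channels"
bits_per_sample = 8                     # assumed 8-bit grey level per pixel
video_bps = channels * fps * bits_per_sample

print(cd_audio_bps)  # 1411200
print(channels)      # 307200
print(video_bps)     # 73728000 -- uncompressed 8-bit grey video
```

In this picture, bit-resolution enters as the bits-per-sample factor,
in the same place it does for audio.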

Is it true that unlike the-frequency-of-audio, the-frequency-of-video
has two components -- temporal and spatial?

AFAIK, the-frequency-of-audio only has a temporal component. Do I
guess right?

II. Digital vs. Analog

Sample-rate is a digital entity. In a digital audio device, the sample-
rate must be at least 2x the highest intended frequency of the digital
audio signal. What is the analog-equivalent of sample-rate? In an
analog audio device, does this equivalent need to be at least 2x the
highest intended frequency of the analog audio signal? If not, then
what is the minimum frequency that the analog-equivalent-of-sample-
rate must be in relation to the analog audio signal?
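The digital half of that question is easy to state in code (a minimal
sketch of the sampling rule I mean):

```python
def min_sample_rate(max_signal_hz):
    # Sampling-theorem rule of thumb: sample at more than twice the
    # highest frequency you intend to capture.
    return 2 * max_signal_hz

# Hearing tops out near 20 kHz, so CD audio's 44.1 kHz clears the bar:
print(min_sample_rate(20_000))  # 40000
```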

III. My Requests:

No offense but please respond with reasonable answers & keep out the
jokes, off-topic nonsense, taunts, insults, and trivializations. I am
really interested in this.

Thanks for your assistance, cooperation, and understanding,

2. ### Ray Fischer (Guest)

>Hi:
>
>I. Audio vs. Video
>
>Digitized (mono) audio has a single sample per each sampling
>interval.
>
>In the case of digital video, we could treat each individual sample
>point location in the sampling grid (each pixel position in a frame)
>the same way as if it was a sample from an individual (mono) audio
>signal that continues on the same position in the next frame. For
>example, a 640×480 pixel video stream shot at 30 fps would be treated
>mathematically as if it consisted of 307200 parallel, individual mono
>audio streams [channels] at a 30 Hz sample rate. Where does bit-
>resolution enter the equation?
>
>Digital linear PCM audio has the following components:
>
>1. Sample rate [44.1 KHz for CD audio]
>2. Channels [2 in stereo, 1 in monaural]
>3. Bit-resolution [16-bit for CD audio]
>
>Sample rate in audio = frame rate in video
>Channel in audio = pixel in video
>Bit-resolution in audio = ? in video
>
>Is it true that unlike the-frequency-of-audio, the-frequency-of-video
>has two components -- temporal and spatial?

No. Video is converted to a linear data stream corresponding
(roughly) to scan lines. The color and brightness information
is split apart and converted into parallel data streams.

Compression for digital video may group areas of the image
and/or eliminate some of the color components.
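One concrete example of eliminating color components is chroma
subsampling, where brightness (luma) keeps full resolution and color
(chroma) is stored at reduced resolution. A rough sketch of the
per-frame sample counts (4:2:0, as used by many consumer formats):

```python
def samples_per_frame(width, height, subsampling="4:2:0"):
    # Luma (brightness) is kept at full resolution; the two chroma
    # (color) components may be stored at reduced resolution.
    luma = width * height
    if subsampling == "4:4:4":      # no chroma reduction
        chroma = 2 * width * height
    elif subsampling == "4:2:0":    # chroma halved horizontally and vertically
        chroma = 2 * (width // 2) * (height // 2)
    else:
        raise ValueError(subsampling)
    return luma + chroma

full = samples_per_frame(640, 480, "4:4:4")   # 921600
sub = samples_per_frame(640, 480, "4:2:0")    # 460800
print(sub / full)                             # 0.5 -- half the raw samples
```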

>II. Digital vs. Analog
>
>Sample-rate is a digital entity. In a digital audio device, the sample-
>rate must be at least 2x the highest intended frequency of the digital
>audio signal. What is the analog-equivalent of sample-rate?

There is no sampling in analog so there is no sampling rate.

--
Ray Fischer

Ray Fischer, Aug 19, 2007

3. ### Ken Maltby (Guest)

"Ray Fischer" <> wrote in message
news:46c8bb30$0$14150$...
>>Hi:
>>
>>I. Audio vs. Video
>>
>>Digitized (mono) audio has a single sample per each sampling
>>interval.
>>
>>In the case of digital video, we could treat each individual sample
>>point location in the sampling grid (each pixel position in a frame)
>>the same way as if it was a sample from an individual (mono) audio
>>signal that continues on the same position in the next frame. For
>>example, a 640×480 pixel video stream shot at 30 fps would be treated
>>mathematically as if it consisted of 307200 parallel, individual mono
>>audio streams [channels] at a 30 Hz sample rate. Where does bit-
>>resolution enter the equation?
>>
>>Digital linear PCM audio has the following components:
>>
>>1. Sample rate [44.1 KHz for CD audio]
>>2. Channels [2 in stereo, 1 in monaural]
>>3. Bit-resolution [16-bit for CD audio]
>>
>>Sample rate in audio = frame rate in video
>>Channel in audio = pixel in video
>>Bit-resolution in audio = ? in video
>>
>>Is it true that unlike the-frequency-of-audio, the-frequency-of-video
>>has two components -- temporal and spatial?

>
> No. Video is converted to a linear data stream corresponding
> (roughly) to scan lines. The color and brightness information
> is split apart and converted into parallel data streams.
>
> Compression for digital video may group areas of the image
> and/or eliminate some of the color components.
>
>>II. Digital vs. Analog
>>
>>Sample-rate is a digital entity. In a digital audio device, the sample-
>>rate must be at least 2x the highest intended frequency of the digital
>>audio signal. What is the analog-equivalent of sample-rate?

>
> There is no sampling in analog so there is no sampling rate.
>
> --
> Ray Fischer
>
>

You might want to check into the posting history of

Luck;
Ken

Ken Maltby, Aug 20, 2007
4. ### Jerry Avins (Guest)

> Hi:
>
> I. Audio vs. Video
>
> Digitized (mono) audio has a single sample per each sampling
> interval.

Yes. Several bits per sample, many samples per second.

> In the case of digital video, we could treat each individual sample
> point location in the sampling grid (each pixel position in a frame)
> the same way as if it was a sample from an individual (mono) audio
> signal that continues on the same position in the next frame. For
> example, a 640×480 pixel video stream shot at 30 fps would be treated
> mathematically as if it consisted of 307200 parallel, individual mono
> audio streams [channels] at a 30 Hz sample rate. Where does bit-
> resolution enter the equation?

It might actually make sense to look at it that way in some situations,
but I'll bet you can't think of one. As for bit resolution, what does
that term mean to you? I think it means the number of bits used to
represent each sample, whatever the situation.
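Under that definition, each extra bit simply doubles the number of
representable levels, whatever the samples describe (a trivial sketch):

```python
def quantization_levels(bits):
    # "Bit resolution" = bits per sample; each extra bit doubles the
    # number of distinct amplitude (or brightness) levels.
    return 2 ** bits

print(quantization_levels(16))  # 65536 levels (CD audio)
print(quantization_levels(8))   # 256 levels (e.g. 8-bit greyscale)
```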

> Digital linear PCM audio has the following components:
>
> 1. Sample rate [44.1 KHz for CD audio]

One particular kind of audio. Common uncompressed audio sample rates
range from 8 to 96 kHz.

> 2. Channels [2 in stereo, 1 in monaural]

Up to 5 in home theater systems.

> 3. Bit-resolution [16-bit for CD audio]

So you do know what the term means. Why did you ask then? Easier than
thinking?

> Sample rate in audio = frame rate in video

Bullshit.

> Channel in audio = pixel in video

Bullshit.

> Bit-resolution in audio = ? in video

Bit resolution.

> Is it true that unlike the-frequency-of-audio, the-frequency-of-video
> has two components -- temporal and spatial?

Good question. The signal has a frequency spectrum. A still image has a
spatial spectrum. A video signal represents a series of still images.

> AFAIK, the-frequency-of-audio only has a temporal component. Do I
> guess right?

Yes, until the sound gets into a room. Then it has a spatial element
too. Think reflections and standing waves.
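For example, the axial standing waves between two parallel walls sit
at frequencies fixed by the room dimension, f_n = n*c/(2L). A
back-of-the-envelope sketch (assuming roughly 343 m/s for sound in air):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 C

def room_mode_hz(length_m, n=1):
    # Axial standing-wave (room mode) frequencies between two parallel
    # walls a distance length_m apart: f_n = n * c / (2 * L)
    return n * SPEED_OF_SOUND / (2 * length_m)

# First three axial modes of a 5 m room:
print([round(room_mode_hz(5.0, n), 1) for n in (1, 2, 3)])  # [34.3, 68.6, 102.9]
```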

> II. Digital vs. Analog
>
> Sample-rate is a digital entity. In a digital audio device, the sample-
> rate must be at least 2x the highest intended frequency of the digital
> audio signal. What is the analog-equivalent of sample-rate? In an
> analog audio device, does this equivalent need to be at least 2x the
> highest intended frequency of the analog audio signal? If not, then
> what is the minimum frequency that the analog-equivalent-of-sample-
> rate must be in relation to the analog audio signal?

There are no samples in an analog system, so there is no sample rate.

> III. My Requests:
>
> No offense but please respond with reasonable answers & keep out the
> jokes, off-topic nonsense, taunts, insults, and trivializations. I am
> really interested in this.

Look, guy: you could probably read by the time you were three years old.
Bully for you! (Precocious reading is almost a /sine qua non/ of
Asperger's.) I have news for you: growing up _doesn't_ mean that one
stops reading. Get a good book or read some of the on-line material
collected at http://www.dspguru.com/ and learn the basics of your
interest. Above all, stop guessing and extrapolating from an erroneous
model that you dreamed up from partial information. You may be smart in
some ways, but if you were wise, you would know that your believing
something doesn't make it real.

As for those snide remarks you want to deflect, you earned them, but I
expect you to shape up.

Jerry
--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

Jerry Avins, Aug 20, 2007

On Aug 19, 2:50 pm, (Ray Fischer) wrote:

> >Hi:

> >I. Audio vs. Video

> >Digitized (mono) audio has a single sample per each sampling
> >interval.

> >In the case of digital video, we could treat each individual sample
> >point location in the sampling grid (each pixel position in a frame)
> >the same way as if it was a sample from an individual (mono) audio
> >signal that continues on the same position in the next frame. For
> >example, a 640×480 pixel video stream shot at 30 fps would be treated
> >mathematically as if it consisted of 307200 parallel, individual mono
> >audio streams [channels] at a 30 Hz sample rate. Where does bit-
> >resolution enter the equation?

> >Digital linear PCM audio has the following components:

> >1. Sample rate [44.1 KHz for CD audio]
> >2. Channels [2 in stereo, 1 in monaural]
> >3. Bit-resolution [16-bit for CD audio]

> >Sample rate in audio = frame rate in video
> >Channel in audio = pixel in video
> >Bit-resolution in audio = ? in video

> >Is it true that unlike the-frequency-of-audio, the-frequency-of-video
> >has two components -- temporal and spatial?

> No. Video is converted to a linear data stream corresponding
> (roughly) to scan lines. The color and brightness information
> is split apart and converted into parallel data streams.

Okay. So a digital video device with greater bit-resolution can allow
for more levels of luminance?

What is the video-equivalent of bit-resolution?

> Compression for digital video may group areas of the image
> and/or eliminate some of the color components.

Does compression also eliminate some of the brightness components?

> >II. Digital vs. Analog

> >Sample-rate is a digital entity. In a digital audio device, the sample-
> >rate must be at least 2x the highest intended frequency of the digital
> >audio signal. What is the analog-equivalent of sample-rate?

> There is no sampling in analog so there is no sampling rate.

There is no analog-equivalent of sample-rate? Then what limits the
highest frequency an analog audio device can encode?

What determines the highest frequency signal an analog solid-state
audio device can input without distortion?

Analog solid-state audio device = a purely analog electronic device
that can record, store, playback, and process audio signals without
needing any moving parts.

The above device inputs the electrical signals generated by an
attached microphone. These electric signals are AC and represent the
sound in "electronic" form. Sound with a higher-frequency will
generate a faster-alternating current than sound with a lower-
frequency. A louder sound will generate an alternating-current with a
bigger peak-to-peak wattage than a softer sound.

What mathematically determines the highest-frequency electric signal
such a device can intake without distortion?

6. ### Floyd L. Davidson (Guest)

(Ray Fischer) wrote:
>>II. Digital vs. Analog
>>
>>Sample-rate is a digital entity. In a digital audio device, the sample-
>>rate must be at least 2x the highest intended frequency of the digital
>>audio signal. What is the analog-equivalent of sample-rate?

>
>There is no sampling in analog so there is no sampling rate.

But that was not the question. The analog-equivalent is
bandwidth.

In a purely analog channel, frequencies higher than the
upper limit of the channel's bandwidth will not be
passed. When using a digital channel, no analog signal
frequencies higher than half the sampling rate (the
Nyquist frequency) will be passed.

Granted, with an analog channel the limit is never
a sharply defined frequency; hence in practice there is
no instant cutoff, but rather a number of negative
effects that become more significant as the signal
frequency approaches and goes beyond the arbitrarily set
"upper limit". Generally phase distortion increases and
signal level decreases, for example. The upper limit is
a function of how much distortion is acceptable for the
application.

In a digital channel you cannot pass frequencies higher
than half the sampling rate, which in theory is a very
sharp cutoff, but in practice it becomes very similar to
the gradual analog cutoff. The reason is that the
extreme negative effects associated with distortion of
inputs above that frequency virtually always require
analog filters at the input to absolutely avoid any
frequencies above half the sampling rate. (Alias
frequencies are generated at the output rather than a
signal which is the same as the input, and the
distortion is 100%.) Hence analog filters that have
exactly the same effects as would be seen with an analog
channel are used at the input of an analog-to-digital
conversion.
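A small sketch of why that input filter is mandatory: after sampling,
a tone above half the sampling rate is indistinguishable from one
folded back into the passband.

```python
def alias_frequency(f_hz, fs_hz):
    # A tone above half the sampling rate is, after sampling,
    # indistinguishable from a tone folded back into the 0..fs/2 band.
    return abs(f_hz - round(f_hz / fs_hz) * fs_hz)

fs = 8_000                          # 8 kHz sampling; Nyquist frequency 4 kHz
print(alias_frequency(3_000, fs))   # 3000 -- below fs/2, passes unchanged
print(alias_frequency(5_000, fs))   # 3000 -- aliases onto the same frequency
```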

--
Floyd L. Davidson <http://www.apaflo.com/floyd_davidson>

Floyd L. Davidson, Aug 20, 2007

On Aug 19, 4:39 pm, Jerry Avins <> wrote:

> > In the case of digital video, we could treat each individual sample
> > point location in the sampling grid (each pixel position in a frame)
> > the same way as if it was a sample from an individual (mono) audio
> > signal that continues on the same position in the next frame. For
> > example, a 640×480 pixel video stream shot at 30 fps would be treated
> > mathematically as if it consisted of 307200 parallel, individual mono
> > audio streams [channels] at a 30 Hz sample rate. Where does bit-
> > resolution enter the equation?

> It might actually make sense to look at it that way in some situations,
> but I'll bet you can't think of one.

This would be a start if I want to decrease the frequency of a video
signal without decreasing the playback speed.

The application here is to change the frequency of the video signal
without altering the frame-rate, sample-rate, or tempo of the video
signal.

This is like changing the pitch of audio on playback without modifying
the sample-rate or playback speed.

Adobe Audition provides this effect.

Using this software, you can also change the tempo of a song without
affecting the pitch.

> As for bit resolution, what does
> that term mean to you? I think it means the number of bits used to
> represent each sample, whatever the situation.

Same here. In audio, a greater bit-resolution provides more levels of
loudness than a smaller bit-resolution. In video, what does a greater
bit-resolution provide that a smaller bit-resolution doesn't? More
levels of light intensity? More colors? I am just guessing.

> > Digital linear PCM audio has the following components:

> > 3. Bit-resolution [16-bit for CD audio]

> So you do know what the term means.

Yes. I know what it means. However, I don't know what its video-
equivalent is.

> > II. Digital vs. Analog

>
> > Sample-rate is a digital entity. In a digital audio device, the sample-
> > rate must be at least 2x the highest intended frequency of the digital
> > audio signal. What is the analog-equivalent of sample-rate? In an
> > analog audio device, does this equivalent need to be at least 2x the
> > highest intended frequency of the analog audio signal? If not, then
> > what is the minimum frequency that the analog-equivalent-of-sample-
> > rate must be in relation to the analog audio signal?

> There are no samples in an analog system, so there is no sample rate.

Okay. Then what is the analog-equivalent of a "sample"?

The analog-equivalent of bit-resolution = dynamic range

The analog-equivalent of sample rate = ?
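For what it's worth, the textbook rule connecting bit-resolution to
dynamic range is roughly 6 dB per bit (a sketch of that formula, not a
measurement of any real device):

```python
def pcm_dynamic_range_db(bits):
    # Theoretical SNR / dynamic range of ideal N-bit linear PCM:
    # approximately 6.02 * N + 1.76 dB.
    return 6.02 * bits + 1.76

print(round(pcm_dynamic_range_db(16), 1))  # 98.1 dB for CD audio
```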

8. ### Jerry Avins (Guest)

...

> Okay. So a digital video device with greater bit-resolution can allow
> for more levels of luminance?

Ir color differentiation. Or both.
> What is the video-equivalent of bit-resolution?

Bit resolution.

...

> There is no analog-equivalent of sample-rate? Then what limits the
> highest frequency an analog audio device can encode?

The capabilities of the transmission and recording media.

> What determines the highest frequency signal an analog solid-state
> audio device can input without distortion?

Distortion, in the commonly used sense, is immaterial. On a phonograph
disk, high frequencies are limited by the ability of the cutting stylus
to move rapidly, of the playback stylus to stay in the groove at high
acceleration, and of the microphone to capture the sound.

> Analog solid-state audio device = a purely analog electronic device
> that can record, store, playback, and process audio signals without
> needing any moving parts.

Oh? Just what would the record consist of?

> The above device inputs the electrical signals generated by an
> attached microphone. These electric signals are AC and represent the
> sound in "electronic" form. Sound with a higher-frequency will
> generate a faster-alternating current than sound with a lower-
> frequency. A louder sound will generate an alternating-current with a
> bigger peak-to-peak wattage than a softer sound.

All true. How do you record it with no moving parts? Even a microphone
has a moving diaphragm. You must like the taste of your foot.

> What mathematically determines the highest-frequency electric signal
> such a device can intake without distortion?

Distortion (as the term is commonly meant unless otherwise qualified)
entails harmonics which have higher frequencies than that which is
distorted. Near a system's upper frequency limit, harmonic distortion is
impossible. There is no mathematical limit to an analog system's
frequency response; the limit is physical. One can understand purely
digital systems with mathematics alone. Analog systems are messier by
far. You actually have to understand how real-world things behave in
order to deal with them. Purely digital systems have relatively little
use. All of our senses are analog.

Jerry
--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

Jerry Avins, Aug 20, 2007
9. ### Jerry Avins (Guest)

> On Aug 19, 4:39 pm, Jerry Avins <> wrote:
>

>
>>> In the case of digital video, we could treat each individual sample
>>> point location in the sampling grid (each pixel position in a frame)
>>> the same way as if it was a sample from an individual (mono) audio
>>> signal that continues on the same position in the next frame. For
>>> example, a 640×480 pixel video stream shot at 30 fps would be treated
>>> mathematically as if it consisted of 307200 parallel, individual mono
>>> audio streams [channels] at a 30 Hz sample rate. Where does bit-
>>> resolution enter the equation?

>
>> It might actually make sense to look at it that way in some situations,
>> but I'll bet you can't think of one.

>
> This would be a start if I want to decrease the frequency of a video
> signal without decreasing the playback speed.

Various compression schemes do that with varying degrees of resulting
quality.

> The application here is to change the frequency of the video signal
> without altering the frame-rate, sample-rate, or tempo of the video
> signal.
>
> This is like changing the pitch of audio on playback without modifying
> the sample-rate or playback speed.

No, it's like compressing the bit rate; MP3, for example.

> Adobe Audition provides this effect.
>
> Using this software, you can also change the tempo of a song without
> affecting the pitch.
>
>> As for bit resolution, what does
>> that term mean to you? I think it means the number of bits used to
>> represent each sample, whatever the situation.

>
> Same here. In audio, a greater bit-resolution provides more levels of
> loudness than a smaller bit-resolution. In video, what does a greater
> bit-resolution provide that a smaller bit-resolution doesn't? More
> levels of light intensity? More colors? I am just guessing.

Both.

>>> Digital linear PCM audio has the following components:

>
>>> 3. Bit-resolution [16-bit for CD audio]

>
>> So you do know what the term means.

>
> Yes. I know what it means. However, I don't know what its video-
> equivalent is.
>
>>> II. Digital vs. Analog
>>> Sample-rate is a digital entity. In a digital audio device, the sample-
>>> rate must be at least 2x the highest intended frequency of the digital
>>> audio signal. What is the analog-equivalent of sample-rate? In an
>>> analog audio device, does this equivalent need to be at least 2x the
>>> highest intended frequency of the analog audio signal? If not, then
>>> what is the minimum frequency that the analog-equivalent-of-sample-
>>> rate must be in relation to the analog audio signal?

>
>> There are no samples in an analog system, so there is no sample rate.

>
> Okay. Then what is the analog-equivalent of a "sample"?

There is none.

> The analog-equivalent of bit-resolution = dynamic range
>
> The analog-equivalent of sample rate = ?

Bandwidth.

>

Use it. Get facts and stop reasoning from false analogies. If you want
to know how many angels can dance on the head of a pin, build a better
microscope. Aquinas can't tell you, and you can't deduce the answer.

Jerry
--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

Jerry Avins, Aug 20, 2007

On Aug 19, 5:55 pm, Jerry Avins <> wrote:

> > Okay. So a digital video device with greater bit-resolution can allow
> > for more levels of luminance?

> Ir color differentiation. Or both.

Huh?

> > The above device inputs the electrical signals generated by an
> > attached microphone. These electric signals are AC and represent the
> > sound in "electronic" form. Sound with a higher-frequency will
> > generate a faster-alternating current than sound with a lower-
> > frequency. A louder sound will generate an alternating-current with a
> > bigger peak-to-peak wattage than a softer sound.

> All true. How do you record it with no moving parts?

Other than the microphone [obviously], why does there need to be any
moving parts? If a digital audio device can play audio back without
any moving parts, why can't an analog audio device be designed to do
the same?

The device below is *not* analog. It uses sampling, so it's digital:

http://www.winbond-usa.com/mambo/content/view/36/140/

I'm curious as to why there are no purely-analog devices which can
record, store, and playback electric audio signals [AC currents at
least 20 Hz but no more than 20,000 Hz] without having moving parts.
Most of those voice recorders that use chips [i.e. solid-state] are
digital. Analog voice recorders, OTOH, use cassettes [an example of
"moving parts"].

On Aug 19, 6:08 pm, Jerry Avins <> wrote:

> > This would be a start if I want to decrease the frequency of a video
> > signal without decreasing the playback speed.

> Various compression schemes do that with varying degrees of resulting
> quality.

1. Decreasing the temporal frequency of the video signal without low-
pass filtering or decreasing the playback speed - an example of which
would be decreasing the rate at which a bird [in the movie] flaps its
wings. Hummingbirds flap their wings too fast for the human eye to
see. So the flap-rate of the wings could be decreased until the
flapping is visible to the human eye - without decreasing the playback
speed of the video. This decrease in flap-rate without slowing
playback is visually-analogous to decreasing the pitch of a recorded
sound without decreasing the playback speed. In this case, a low-pass
filter would involve attenuating rapidly-changing images while
amplifying slowly-changing images -- I don't want this.

2. Decreasing the spatial frequency of the images in the video-signal
without low-pass filtering the images or increasing their sizes. An
example of this would be making the sharp areas of an image look
duller without decreasing the "sharpness" setting [an example of low-
pass filtering] on the monitor or increasing the size of the image.
Normally, when the size of an image is decreased, its sharpness
increases [it's like compressing a lower-frequency sound wave into a
higher-frequency one]. Likewise, when the size of an image is
increased, it looks duller [like stretching a higher-frequency sound
wave into a lower-frequency one]. Low-pass filtering simply decreases
the sharpness of an image while increasing its dull characteristics --
which is what I don't want.

#1 Decreases the rate at which objects in the video move without
decreasing the video's playback speed or eliminating originally-
rapidly-moving objects [such as the rapidly flapping wings]

#2 Makes a still image less sharp by stretching everything
within the image without increasing the size of the image or
eliminating sharp portions of the original image

Both #1 and #2 are visual-equivalents of decreasing the pitch of a
recorded audio signal without decreasing the audio's playback speed.
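The size/sharpness relation in #2 is just an inverse scaling of
spatial frequency with displayed size; a toy calculation (the numbers
are made up purely for illustration):

```python
def cycles_per_mm(cycles_across_image, image_width_mm):
    # Spatial frequency scales inversely with the physical size the
    # image is displayed at: shrink the picture and every detail packs
    # into less space, i.e. higher cycles/mm.
    return cycles_across_image / image_width_mm

pattern = 100            # a grating with 100 dark/light cycles across the frame
print(cycles_per_mm(pattern, 200.0))  # 0.5 cycles/mm displayed 200 mm wide
print(cycles_per_mm(pattern, 100.0))  # 1.0 cycles/mm when shrunk to 100 mm
```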

12. ### Jerry Avins (Guest)

> On Aug 19, 5:55 pm, Jerry Avins <> wrote:

...

>> Ir color differentiation. Or both.

>
> Huh?

Typo: Or color differentiation. Or both.

>>> The above device inputs the electrical signals generated by an
>>> attached microphone. These electric signals are AC and represent the
>>> sound in "electronic" form. Sound with a higher-frequency will
>>> generate a faster-alternating current than sound with a lower-
>>> frequency. A louder sound will generate an alternating-current with a
>>> bigger peak-to-peak wattage than a softer sound.

>
>> All true. How do you record it with no moving parts?

>
> Other than the microphone [obviously], why does there need to be any
> moving parts? If a digital audio device can play audio back without
> any moving parts, why can't an analog audio device be designed to do
> the same?

Describe a motion-free process of recording and playing back. Cutting
grooves on a disk or magnetizing a moving tape both involve motion.

> The device below is *not* analog. It uses sampling, so it's digital:
>
> http://www.winbond-usa.com/mambo/content/view/36/140/
>
> I'm curious as to why there are no purely-analog devices which can
> record, store, and playback electric audio signals [AC currents at
> least 20 Hz but no more than 20,000 Hz] without having moving parts.
> Most of those voice recorders that use chips [i.e. solid-state] are
> digital. Analog voice recorders, OTOH, use cassettes [an example of
> "moving parts"].

It's this simple: nobody has invented a way. I doubt that anyone ever
will. If you know how, communicate with me privately. With your idea and
my ability to bring it to fruition, we'll both get rich. A motion-free
method for printing text would also be a money maker.

Jerry
--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

Jerry Avins, Aug 20, 2007
13. ### Jerry Avins (Guest)

> On Aug 19, 6:08 pm, Jerry Avins <> wrote:
>

>
>>> This would be a start if I want to decrease the frequency of a video
>>> signal without decreasing the playback speed.

>
>> Various compression schemes do that with varying degrees of resulting
>> quality.

>
>
> 1. Decreasing the temporal frequency of the video signal without low-
> pass filtering or decreasing the playback speed - an example of which
> would be decreasing the rate at which a bird [in the movie] flaps its
> wings. Hummingbirds flap their wings too fast for the human eye to
> see. So the flap-rate of the wings could be decreased until the
> flapping is visible to the human eye - without decreasing the playback
> speed of the video. This decrease in flap-rate without slowing
> playback is visually-analogous to decreasing the pitch of a recorded
> sound without decreasing the playback speed. In this case, a low-pass
> filter would involve attenuating rapidly-changing images while
> amplifying slowly-changing images -- I don't want this.

You convinced me: there are stupid questions. Video and movies work by
displaying a succession of still pictures close enough together in time
and and position to give us the illusion of continuous motion. Think
about how slow motion is accomplished with film photography. Speculate
about how this might be done with analog video, and extrapolate to
digitized video.

> 2. Decreasing the spatial frequency of the images in the video-signal
> without low-pass filtering the images or increasing their sizes. An
> example of this would be making the sharp areas of an image look
> duller without decreasing the "sharpness" setting [an example of low-
> pass filtering] on the monitor or increasing the size of the image.
> Normally, when the size of an image is decreased, its sharpness
> increases [it's like compressing a lower-frequency sound wave into a
> higher-frequency one]. Likewise, when the size of an image is
> increased, it looks duller [like stretching a higher-frequency sound
> wave into a lower-frequency one]. Low-pass filtering simply decreases
> the sharpness of an image while increasing its dull characteristics --
> which is what I don't want.

That's a reasonable summary of what you don't want to do. What do you
want to do?

> #1 Decreases the rate at which objects in the video move without
> decreasing the video's playback speed or eliminating originally-
> rapidly-moving objects [such as the rapidly flapping wings]

Something has to give. If the flapping of the wings is slowed, so is the
motion of everything else.

> #2 Makes a still image less sharp by stretching everything
> within the image without increasing the size of the image or
> eliminating sharp portions of the original image

Huh?

> Both #1 and #2 are visual-equivalents of decreasing the pitch of a
> recorded audio signal without decreasing the audio's playback speed.

Says who? You're reasoning from false analogy again.

Jerry
--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

Jerry Avins, Aug 20, 2007

On Aug 19, 7:47 pm, Jerry Avins <> wrote:

> > Other than the microphone [obviously], why does there need to be any
> > moving parts? If a digital audio device can play audio back without
> > any moving parts, why can't an analog audio device be designed to do
> > the same?

> Describe a motion-free process of recording and playing back. Cutting
> grooves on a disk or magnetizing a moving tape both involve motion.

The iPod is motion-free yet it's still able to record and play back.

Those Nintendo Entertainment System cartridges were able to play back
without any motion.

> > The device below is *not* analog. It uses sampling, so it's digital:

> > I'm curious as to why there are no purely-analog devices which can
> > record, store, and playback electric audio signals [AC currents at
> > least 20 Hz but no more than 20,000 Hz] without having moving parts.
> > Most of those voice recorders that use chips [i.e. solid-state] are
> > digital. Analog voice recorders, OTOH, use cassettes [an example of
> > "moving parts"].

> It's this simple: nobody has invented a way. I doubt that anyone ever
> will. If you know how, communicate with me privately.

I don't know how, but I'm guessing that it involves the analog equivalent
of Flash RAM [if re-writing is desired] or the analog equivalent of
Masked-ROM [if permanent storage is desired].

15. ### Sjouke BurryGuest

> On Aug 19, 7:47 pm, Jerry Avins <> wrote:
>

>
>>> Other than the microphone [obviously], why does there need to be any
>>> moving parts? If a digital audio device can play audio back without

Ah, Radium trolling again, I see!

Sjouke Burry, Aug 20, 2007
16. ### Ron N.Guest

someone wrote:
> There is no analog-equivalent of sample-rate? Then what the limits the
> highest frequency an analog audio device can encode?
>
> What determines the highest frequency signal an analog solid-state
> audio device can input without distortion?

The basic physics of material objects leads to some
limitations. Above some frequency, a given force can
no longer accelerate the mass of a given physical
transducer or recording medium by an amount greater
than the thermal noise does (or than other noise
sources do: friction, wear, dust, magnetic particle
size, film grain size, and so on).

Ron N., Aug 20, 2007
17. ### Bob MyersGuest

>In the case of digital video, we could treat each individual sample
>point location in the sampling grid (each pixel position in a frame)
>the same way as if it was a sample from an individual (mono) audio
>signal that continues on the same position in the next frame. For
>example, a 640×480 pixel video stream shot at 30 fps would be treated
>mathematically as if it consisted of 307200 parallel, individual mono
>audio streams [channels] at a 30 Hz sample rate. Where does bit-
>resolution enter the equation?

What you are calling "bit resolution" is more commonly
referred to as bits/sample, or in video bits/color or per
component. It "enters into the equation" in all digital
encoding systems by setting the dynamic range that can
be encoded in that system, or, if you prefer, the "accuracy"
with which each sample represents the value of the original
signal at that point. The number of bits, along with the choice
of the maximum value which can be encoded (i.e., what level
"all ones" in the sample corresponds to) determines the value
represented by the least-significant bit.
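A quick numerical sketch of Bob's point (Python; the full-scale range of 1.0 is an arbitrary choice for illustration):

```python
import math

def quantization_params(bits, full_scale=1.0):
    """Return the LSB step size and the ideal dynamic range (dB)
    for a linear quantizer with the given bits per sample."""
    levels = 2 ** bits
    lsb = full_scale / levels                    # value of the least-significant bit
    dynamic_range_db = 20 * math.log10(levels)   # roughly 6.02 dB per bit
    return lsb, dynamic_range_db

lsb, dr = quantization_params(16)   # CD audio: 16 bits/sample
print(lsb)   # 1/65536 of full scale
print(dr)    # ~96.3 dB
```

So each extra bit halves the LSB step and buys about 6 dB of dynamic range, in audio and video alike.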

>>
>>Digital linear PCM audio has the following components:
>>
>>1. Sample rate [44.1 KHz for CD audio]
>>2. Channels [2 in stereo, 1 in monaural]
>>3. Bit-resolution [16-bit for CD audio]

PCM has nothing to do with it.

>>Sample rate in audio = frame rate in video

No. There is no real analog, in audio, to the frame
rate in video, except to the extent that the frame rate
IS a sample rate in terms of capturing one complete
2-D image at that point in time - IF that is the way the
image capture device works (and not all work this way).
More typically, the "sample rate" in audio would be
thought of as corresponding to the pixel rate in video.
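The arithmetic behind that pixel-rate analogy, using the figures from the original question:

```python
width, height, fps = 640, 480, 30

pixels_per_frame = width * height      # 307,200 samples per frame
pixel_rate = pixels_per_frame * fps    # the video analogue of the audio sample rate

print(pixels_per_frame)  # 307200
print(pixel_rate)        # 9216000 pixels/s, i.e. about 9.2 MHz
```

Viewed this way the "sample rate" of the video stream is in the megahertz range, not 30 Hz.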

>>Channel in audio = pixel in video

Definitely not. A "pixel" in imaging is just what the
name says - it is a "picture element," meaning one
dimensionless point-sample of the original image, at
a specific location within the image plane and, in the
case of motion video, at a specific time. A pixel is
the best analog you will find to a single sample in
the case of digital audio.

>>Bit-resolution in audio = ? in video

Bits per sample is bits per sample, in either case.

>>Is it true that unlike the-frequency-of-audio, the-frequency-of-video
>>has two components -- temporal and spatial?

A better way to say this is that you are concerned
with both temporal and spatial frequencies in the case of
motion video. (And, in the case of still images - as in
digital still photography - spatial frequencies only.)

>>II. Digital vs. Analog
>>
>>Sample-rate is a digital entity.

Not really. While today most sampled systems are, in fact,
"digital" in nature (meaning that the information is encoded in
digital form), there is nothing in sampling theory which restricts
its applicability to that realm. Sampled analog systems are certainly
not very common today (unless you count certain forms of
modulation as "sampling," and in fact there are some very close
parallels there), but the theory remains the same no matter which
form of encoding is used. In any event, you must sample the
original signal at a rate equal to at least twice its bandwidth (actually,
very slightly higher, to avoid a particular degenerate case which
could occur at EXACTLY 2X the bandwidth) in order to preserve
the information in the original and avoid "aliasing."
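A small numerical sketch of the aliasing Bob describes (Python; the 1 kHz sample rate and 900 Hz tone are arbitrary illustrative values):

```python
import math

fs = 1000.0       # sample rate (Hz)
n = range(32)     # sample indices

# A 900 Hz cosine sampled at 1 kHz violates the Nyquist criterion
# (900 > fs/2 = 500) and yields exactly the same samples as a
# 100 Hz cosine: it aliases to fs - 900 = 100 Hz.
tone_900 = [math.cos(2 * math.pi * 900 * k / fs) for k in n]
tone_100 = [math.cos(2 * math.pi * 100 * k / fs) for k in n]

print(all(abs(a - b) < 1e-9 for a, b in zip(tone_900, tone_100)))  # True
```

Once sampled, the two tones are indistinguishable, which is exactly the information loss the 2X-bandwidth rule exists to prevent.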

Bob M.

Bob Myers, Aug 20, 2007
18. ### Jerry AvinsGuest

> On Aug 19, 7:47 pm, Jerry Avins <> wrote:
>

>
>>> Other than the microphone [obviously], why does there need to be any
>>> moving parts? If a digital audio device can play audio back without
>>> any moving parts, why can't an analog audio device be designed to do
>>> the same?

>
>> Describe a motion-free process of recording and playing back. Cutting
>> grooves on a disk or magnetizing a moving tape both involve motion.

>
> The iPod is motion-free yet it's still able to record and playback.

It does that digitally. Did you really not know that? Are you trolling
after all?

> Those Nintendo Entertainment System cartridges were able to playback
> without any motion.

They do that digitally. Did you really not know that? Are you trolling
after all?

>>> The device below is *not* analog. It uses sampling so it's digital:

>
>
>>> I'm curious as to why there are no purely-analog devices which can
>>> record, store, and playback electric audio signals [AC currents at
>>> least 20 Hz but no more than 20,000 Hz] without having moving parts.
>>> Most of those voice recorders that use chips [i.e. solid-state] are
>>> digital. Analog voice recorders, OTOH, use cassettes [an example of
>>> "moving parts"].

>
>> It's this simple: nobody has invented a way. I doubt that anyone ever
>> will. If you know how, communicate with me privately.

>
> I don't know how, but I'm guessing that it involves the analog equivalent
> of Flash RAM [if re-writing is desired] or the analog equivalent of
> Masked-ROM [if permanent storage is desired].

What would you write into that "RAM"? There are no analog bits. The
analog equivalent of a masked ROM is a phonograph record. Think first.
Blather afterward, but show some sign of thought or you're not worth
bothering with.

Jerry
--
Engineering is the art of making what you want from things you can get.

Jerry Avins, Aug 20, 2007
19. ### Jerry AvinsGuest

Bob Myers wrote:

> ... you must sample the
> original signal at a rate equal to at least twice its bandwidth (actually,
> very slightly higher, to avoid a particular degenerate case which
> could occur at EXACTLY 2X the bandwidth) in order to preserve
> the information in the original and avoid "aliasing."

Bob,

The degenerate case is just a limit. Signals close to the band edge take
a long time to be resolved. The time is of the order of 1/|f - F|, where F
is the frequency of the nearer band edge. Just as it takes on the order
of 100 seconds to resolve a frequency of .01 Hz, it takes the same time
to resolve a frequency of Fs/2 - .01 Hz. When f = Fs/2, it just takes
forever. The real world tends to be continuous.

Jerry
--
Engineering is the art of making what you want from things you can get.

Jerry Avins, Aug 20, 2007
20. ### Dave PlattGuest

In article <>,

>I'm curious as to why there are no purely-analog devices which can
>record, store, and playback electric audio signals [AC currents at
>least 20 Hz but no more than 20,000 Hz] without having moving parts.
>Most of those voice recorders that use chips [i.e. solid-state] are
>digital. Analog voice recorders, OTOH, use cassettes [an example of
>"moving parts"].

The fact that it's an AC (inherently-varying) signal being recorded,
means that *something* has to move... if only some amount of
electrical charge. If the electrons don't move, the output can't vary
and all you have is a DC voltage.

And, in fact, this concept of moving electrical charges is the basis
for one type of analog signal storage and playback device which has no
moving (mechanical) parts... the CCD, or Charge Coupled Device. It
consists of a large number of charge storage devices (typically MOSFET
transistors with dielectrically-isolated gates) hooked up as a sort of
shift register or "bucket brigade". Each gate stores a charge which
is proportional to the input signal present at a given moment in time.
Several thousand times per second, a clock pulse causes each storage
cell to generate an output voltage proportional to the charge in its
storage gate, and then to "capture" onto its gate the signal being
presented by the previous gate in the chain.

In effect, the signal is propagated down the chain at a rate
proportional to the clock rate.

Why aren't these devices used more than they are? They're not very
efficient, and they're noisy. Every time the charge is copied from
one cell to the next, a bit of imprecision (noise) creeps in... so the
fidelity isn't great. And, because the device has to be able to hold
a very wide range of charges (since the charge is directly
proportional to the signal level) the storage gates have to be fairly
large.

The net result is that an audio CCD is capable of storing a
decent-quality signal for only a few tens or hundreds of milliseconds,
from input to output.
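The bucket-brigade behavior Dave describes can be sketched as a toy simulation (Python; the cell count and per-transfer noise level are arbitrary illustrative values, not real CCD parameters):

```python
import math
import random

def bucket_brigade(samples, n_cells, noise_per_transfer=1e-3, seed=0):
    """Push a signal through a chain of n_cells analog storage cells.
    On each clock tick every cell captures the previous cell's charge,
    picking up a little noise in the process; the output is the input
    delayed by n_cells ticks, with noise accumulated once per transfer."""
    rng = random.Random(seed)
    cells = [0.0] * n_cells
    out = []
    for x in samples:
        out.append(cells[-1])                  # read the last cell in the chain
        for i in range(n_cells - 1, 0, -1):    # shift charges down the chain
            cells[i] = cells[i - 1] + rng.gauss(0, noise_per_transfer)
        cells[0] = x + rng.gauss(0, noise_per_transfer)
    return out

sig = [math.sin(2 * math.pi * k / 16) for k in range(64)]
delayed = bucket_brigade(sig, n_cells=8)

# The output reproduces the input 8 ticks later, plus a small noise
# term that grows with the number of transfers.
err = max(abs(d - s) for d, s in zip(delayed[8:], sig))
print(err < 0.05)   # True: residual noise on the order of sqrt(8) * 1e-3
```

Doubling the chain length doubles the delay but also accumulates more noise per sample, which is exactly the fidelity-versus-storage-time tradeoff Dave points out.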

Another sort of a purely analog signal-storage device, with no moving
parts other than the electrons which convey the signal, is a simple
length of transmission line (with perhaps some amplifiers mid-way).
Put a signal in at one end, get the same signal back out the other end
some number of microseconds or milliseconds later.

Once again, they're not terribly efficient and are prone to be noisy.

For storage of large amounts of information, in a small space, with
high fidelity, using digital storage techniques is much more
efficient - largely because each storage cell must only store 2
different information states (0 and 1) rather than a large number of
possible levels.

--
Dave Platt <> AE6EO